Exhibit A: Hardware RAID5. Exhibit B: Software RAID5. Same controller, same disks. Seriously, fuck hardware RAID. pic.twitter.com/EsIurJpKlQ
100% agree... I made the swap to ZFS ~1 year ago, and it's fantastic! What are you using? dm-raid / zfs / etc?
Just dm-raid. This is a simple 3-disk RAID5 on a server. I'm slowly implementing Ceph for more serious storage setups.
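For reference, a minimal sketch of setting up a 3-disk software RAID5 like the one described, assuming it's built with mdadm on top of md (the tweet says dm-raid; device names here are placeholders):

    # create a 3-disk RAID5 array (device names are examples)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # watch the initial resync progress
    cat /proc/mdstat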
Here I have half a mind to say these controllers' firmwares are deliberately subpar, there to screw you over if you don't buy the expensive cache-and-battery unit... which the vendor will OF COURSE try to peddle as THE solution.
I wouldn't be surprised at all.
Er, these show read rate? And same workload?
Write rate. It's literally just cat /dev/zero > /dev/device
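Note that a plain cat redirect goes through the page cache; a hedged, roughly equivalent sequential-write test that bypasses the cache and prints a throughput figure might look like this (the /dev/md0 target is a placeholder, and this destroys data on it):

    # sequential write test with direct I/O, reports MB/s as it runs
    dd if=/dev/zero of=/dev/md0 bs=1M count=10240 oflag=direct status=progress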
Linux software RAID has always treated me well. Probably better than it should.
I was equally surprised when I benchmarked recently, ended up swapping out my RAID-only PERC 6i for a non-raid HBA and let ZFS do all the work.
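For anyone curious, a rough sketch of the ZFS side once the disks sit behind a plain HBA (pool and device names are made up, not from the thread):

    # single raidz (RAID5-like) vdev over three whole disks
    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
    zpool status tank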
What kind of CPU load does that software RAID benchmark generate?
2% software RAID, 4% hardware RAID. Don't ask me how that's possible, it just is. I presume driver shittiness vastly overshadows the overhead of software parity calculation (which is so fast on modern CPUs as to be basically free).
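If anyone wants to reproduce those CPU numbers, something along these lines (mpstat and iostat come from the sysstat package) shows utilization while the write test runs; exact figures will obviously vary by hardware:

    # per-CPU utilization, refreshed every second, while the benchmark runs
    mpstat -P ALL 1
    # or total CPU plus per-device I/O stats together
    iostat -xz 1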