
I've never had an SSD fail, let alone a high-end one with high-durability NAND and higher-quality parts. It would be a major inconvenience but not a major problem beyond the wasted time. RAID only makes things worse with a high-end NVMe SSD, and the performance is already insane overkill anyway.
I prefer having fewer things that can go wrong, and from my perspective, redundancy via RAID only adds more that can go wrong. If the SSD is going to fail, I'll need a new SSD anyway, and I'd rather just deal with that. I don't want another layer of latency/complexity.
Look at the specs at samsung.com/semiconductor/: 5GB/s sequential write, 7GB/s sequential read, 1,000,000 QD32 IOPS, 60,000 QD1 IOPS. It's immensely overkill for my needs. I need the Pro line because huge amounts of data are written, but the performance is really not a factor.
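Some back-of-the-envelope math on those spec numbers, assuming the usual 4 KiB block size that random-IOPS figures are quoted at:

```python
# Rough arithmetic on the quoted Pro-line NVMe specs.
qd32_iops = 1_000_000
qd1_iops = 60_000
block = 4 * 1024  # assume 4 KiB random I/O, the typical IOPS block size

qd32_bw = qd32_iops * block          # bytes/s of random I/O at QD32
qd1_latency_us = 1e6 / qd1_iops      # implied mean per-I/O latency at QD1

print(f"QD32 random throughput ~ {qd32_bw / 1e9:.1f} GB/s")
print(f"QD1 latency ~ {qd1_latency_us:.1f} us")
```

So at QD32 the drive is pushing roughly 4.1 GB/s of *random* I/O, close to its sequential figures, and the QD1 number implies about 17 µs per I/O.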
It's substantially faster than the Samsung 960 Pro 2TB in my old workstation, but that won't make any difference. Having 128GB of memory means there's a massive amount of cached data and plenty of space to buffer writes. Latency is something that does matter a lot, though.
RAID hurts latency and IOPS. It's not really suited to the era of high-end NVMe drives. It's also genuinely hard to be I/O-bound with this kind of drive. For anything I'm likely to do, latency is what ends up bottlenecking I/O, not throughput.
Basically, if you have mostly CPU-bound tasks, it's very wasteful to create far more threads for them than you have cores; you really don't want to be context switching. Data is ideally almost always cached, and there's generally enough write buffer that write latency is irrelevant.
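A minimal sketch of that sizing rule in Python, with the pool matched to the core count; `burn` is just a stand-in CPU-bound task:

```python
# One worker per core for CPU-bound work: extra threads/processes only
# add context-switch and scheduling overhead, not compute.
import os
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    # Stand-in CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workers = os.cpu_count() or 1  # match pool size to core count
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(burn, [100_000] * workers))
    print(f"{workers} workers, {len(results)} tasks done")
```

Processes rather than threads here because CPython's GIL keeps pure-Python CPU work from running in parallel across threads; the sizing principle is the same either way.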
The main thing that matters to keep I/O from slowing everything down is having the lowest possible latency. The capabilities of these high-end NVMe drives are ridiculous, and unless it's some massive database server doing massively parallel workloads, it just doesn't matter.