I wonder when this "databases are nearly always bottlenecked on IO" perception is finally going to die. I think it's been false for more than 50% of instances for at least 15 years. And it's just plainly wrong now that we can have small-ish servers with >16 internal NVMe drives.
-
A lot of this perception was formed when there were no SSDs and memory was so scarce that it could never hold a meaningful percentage of one's workload. But especially the latter has been wrong for a looong time. A *lot* of databases are in the ~100MB to ~10GB range.
-
Let’s play with GPUs :) When the expected I/O throughput exceeds 10GB/s, the PCIe bus topology also becomes a significant factor, in addition to the xPU and NVMe. http://kaigai.hatenablog.com/entry/2019/11/01/170159
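Rough arithmetic for context (my numbers, not from the linked post): PCIe 3.0 carries roughly 985 MB/s per lane after 128b/130b encoding, so an x16 link tops out around 15.7 GB/s per direction. A few NVMe drives feeding a GPU at >10 GB/s already approaches that, which is why it matters whether the drives and the GPU sit under the same PCIe switch (allowing peer-to-peer DMA) or the traffic has to cross the root complex.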
-
In a previous life leading data-tier ops for a reasonably sized fleet, our dashboard had a treemap visualization of AAS for all DBs. Healthy was 100% CPU. The #1 early indicator of impending trouble was any bulge. It revealed nearly every problem before anyone else saw it... still my favorite metric.
-
My favorite page in EM
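For anyone who hasn't met the metric: AAS (Average Active Sessions) is essentially database time divided by wall-clock time, i.e. the mean number of sessions doing work at any instant over a window. A minimal sketch of how a monitoring agent might compute it, with sample_active_sessions() as a hypothetical stand-in for whatever the agent actually polls:

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in: a real agent would poll the database for the
 * number of sessions currently running on CPU or waiting on a resource. */
static int sample_active_sessions(void) { return 0; }

int main(void) {
    const int window_secs = 60;
    long total = 0;
    for (int i = 0; i < window_secs; i++) {
        total += sample_active_sessions();  /* one sample per second */
        sleep(1);
    }
    /* AAS = summed samples / elapsed seconds. An AAS well above the core
     * count, or a sudden bulge relative to baseline, is the early-warning
     * signal described above. */
    printf("AAS over last %ds: %.2f\n", window_secs,
           (double)total / window_secs);
    return 0;
}
```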
-
That’s true now, but IF one ever really hits IO problems, they are the most difficult ones to fix.
-
In my case I have a small instance that fits in 50GB of memory/cache, so CPU is the bottleneck, no doubt. My SSD just sits there waiting for writes.
-
io_uring should reduce syscall overhead a lot.
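For context on where the savings come from: multiple reads/writes can be queued into a shared submission ring and kicked off with a single io_uring_submit() call (or no submit syscall at all with SQPOLL), instead of one syscall per I/O. A minimal single-read sketch using liburing, assuming liburing/kernel 5.6+ for io_uring_prep_read() and a placeholder file name; link with -luring:

```c
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) {          /* 8-entry SQ/CQ rings */
        perror("io_uring_queue_init");
        return 1;
    }

    int fd = open("datafile", O_RDONLY);                 /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);  /* grab a submission slot */
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);    /* read 4KB at offset 0 */
    io_uring_submit(&ring);                              /* one syscall; could batch many SQEs */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                      /* wait for the completion */
    printf("read returned %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);                       /* mark completion as consumed */

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```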