Back on my "essays as commit messages" shit: https://github.com/rust-lang/crates.io/pull/2203
The volume of data isn't the problem so much as how it's being accessed/laid out. Compressing rows would only delay the problem, and would likely slow down access patterns that actually need to be fast. The only painful place right now is background jobs.
-
The tuple overhead here wouldn't really be reduced. Every write produces a new tuple no matter what, and the number of writes is the same. Duplicate data also isn't really reduced here, since the overhead of an array element is the same as a date.
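For context, a sketch of the Postgres MVCC behavior being described (the table and columns here are illustrative, not crates.io's actual schema): any UPDATE writes an entirely new row version rather than mutating in place, so packing counts into an array wouldn't reduce tuple churn. The `ctid` system column shows the tuple's physical location and changes on every update:

```sql
-- Illustrative table; the real crates.io schema may differ.
CREATE TABLE version_downloads (
    version_id integer NOT NULL,
    date       date    NOT NULL DEFAULT CURRENT_DATE,
    downloads  integer NOT NULL DEFAULT 1
);
INSERT INTO version_downloads (version_id) VALUES (1);

-- MVCC: each UPDATE creates a new tuple and leaves the old one dead,
-- so ctid (the tuple's physical location) changes on every write.
SELECT ctid FROM version_downloads;
UPDATE version_downloads SET downloads = downloads + 1;
SELECT ctid FROM version_downloads;  -- a different location than before
```

This is why the number of writes, not the width of each row, dominates the tuple overhead.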