The insert benchmark now supports PostgreSQL. Time to burn out the SSDs attached to my Intel NUCs. Sorry, iibench.py is messy -
Replying to
Cool! Low cardinality index would be interesting, especially with Postgres. (Perhaps a single column index on "productid").
Replying to
Indexes are wide to increase IO stress and to let an index cover a query. I need to revisit PG support in LinkBench to see whether wide indexes there can use INCLUDE.
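A sketch of the two approaches, assuming a hypothetical LinkBench-style table (the table and index names are illustrative, not from the benchmark source):

```sql
-- Hypothetical LinkBench-style table; names are illustrative.
CREATE TABLE linktable (
  id1       bigint NOT NULL,
  id2       bigint NOT NULL,
  link_type bigint NOT NULL,
  time      bigint NOT NULL,
  data      varchar(255),
  PRIMARY KEY (id1, id2, link_type)
);

-- A wide index: every column a query needs is part of the key,
-- so an index-only scan can answer the query.
CREATE INDEX i_wide ON linktable (link_type, id1, time, id2);

-- PostgreSQL 11+ alternative: keep the key narrow and append
-- payload columns with INCLUDE; the index still covers the query
-- but the included columns don't participate in key comparisons.
CREATE INDEX i_covering ON linktable (link_type, id1) INCLUDE (time, id2);
```

The INCLUDE form can keep internal (non-leaf) pages smaller, since only the key columns are copied upward in the B-tree.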
Replying to
Right -- the biggest LinkBench index covers all columns in the table. Also, Postgres should have a storage param to force the "split after new tuple" optimization, since the heuristics are too conservative to work with LinkBench despite the clear benefits. (See git.postgresql.org/gitweb/?p=post)
Replying to
I expect to experiment with the B-tree fillfactor to determine whether that helps.
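For reference, a fillfactor experiment might look like the following (the index definition is illustrative; the default B-tree leaf fillfactor in PostgreSQL is 90):

```sql
-- Lower fillfactor leaves more free space per leaf page at build
-- time, which can delay page splits for some insert patterns.
CREATE INDEX i_ff ON linktable (time, id1) WITH (fillfactor = 70);

-- The setting can also be changed later; it takes effect for
-- pages written after the change, or everywhere after a REINDEX.
ALTER INDEX i_ff SET (fillfactor = 90);
REINDEX INDEX i_ff;
```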
Replying to
Check out the pgstatindex() function from the pgstattuple contrib extension for info on space utilization. It looks like Postgres gets the classic "natural logarithm of 2" space utilization with the insert benchmark (i.e., ~69% full, excluding the PK), so I wouldn't expect fillfactor to make a difference.
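A minimal usage sketch (the index name is illustrative; avg_leaf_density is the relevant column, and ~69% corresponds to ln 2 ≈ 0.693):

```sql
-- pgstattuple ships with PostgreSQL as a contrib extension.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density is the percentage of leaf-page space in use;
-- ~69% is the classic steady state for random-order inserts.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('i_wide');
```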
Replying to
I just learned of that via this post. Does it do a full index scan to compute the result or something faster?
Replying to
It just scans the whole index, but in physical order, without blowing out shared_buffers (it reuses a small, dedicated area of shared buffers). This seems to almost saturate I/O bandwidth. So it's very fast.

