
๐Ÿ˜I insert 5 million rows with increasing value (think sequence, generator, current timestamp, ordered file import...) that takes 110 MB and still 110 MB after an index rebuild. Happy with that?
๐Ÿ˜You decide to rebuild, but which fillfactor? 100 to pack for always increasing values? 90 because it is the default? 80 to give more room for same range values? With inserts on same range, leaf pages will be between 100 (full) and 50 (split) so it may eventually average to 75
I am prepared to say that fillfactor 100 for B-trees is practically always a bad idea: the risk of correlated page-split storms is far too high to ever make up for the space savings. Postgres's split-point choice algorithm is quite smart from version 12 on.