Does it read entropy from /dev/urandom to protect hash tables against "Algorithmic Complexity Attacks"? Is getrandom(2) the solution? Or an init API that receives entropy from the caller?
On Linux, getrandom is always the right approach. The /dev/urandom and /dev/random APIs are obsolescent: they aren't always available since they require access to a populated /dev, opening a file can fail dynamically at runtime, and /dev/urandom has the early-init bug of returning bytes before the pool is seeded.
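A minimal sketch of seeding a hash table via getrandom(2), here through Python's os.getrandom wrapper (the function name hash_seed is mine, not from any library):

```python
import os

def hash_seed(nbytes=16):
    """Fetch a hash-table seed from the kernel CSPRNG via getrandom(2).

    os.getrandom blocks until the entropy pool is initialized, avoiding
    the early-init bug of reading /dev/urandom too soon, and it needs
    no access to a populated /dev at all.
    """
    return os.getrandom(nbytes)

seed = hash_seed()  # e.g. feed this to a keyed hash such as SipHash
```

The same call works as the "init API that receives entropy from the caller" pattern: fetch the seed once at startup and pass it into the hash table constructor.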
The maintainers should fix the early-init security bug in /dev/urandom, but they aren't yet willing to, because of lackluster kernel entropy generation combined with the broad deployment of broken environments that don't provide entropy, such as poor virtual machine implementations.
To save you some trouble: we're primarily talking about Linux systems where the getrandom syscall returns ENOSYS, or where the syscall doesn't exist at all; there, getentropy and friends all fall back to opening /dev/urandom.
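That fallback path can be sketched explicitly; a hedged example, assuming you must still run on pre-getrandom kernels (the function name get_entropy is hypothetical):

```python
import errno
import os

def get_entropy(nbytes=16):
    """Prefer getrandom(2); fall back to /dev/urandom only when the
    syscall is unavailable (ENOSYS on pre-3.17 kernels, or a Python
    build without os.getrandom)."""
    try:
        return os.getrandom(nbytes)
    except AttributeError:
        pass  # os.getrandom missing entirely on this platform
    except OSError as e:
        if e.errno != errno.ENOSYS:
            raise
    # Fallback: requires a populated /dev, can fail dynamically, and
    # at early boot may return bytes before the pool is initialized.
    with open("/dev/urandom", "rb") as f:
        return f.read(nbytes)
```

This is roughly what getentropy-style wrappers do on such systems, which is why the ENOSYS case drags the /dev/urandom problems back in.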
I believe we need to find solutions that don't require randomization & crypto at these layers (malloc and hash tables). I understand why people do it as defense-in-depth, but I prefer approaches that solve these problems statically (e.g. avoiding memory-unsafe languages & hash tables).
I agree that it's not needed for hash tables. I think chaining is the best approach to handling collisions because it can switch over to an efficient data structure like a trie when collisions start to happen. Providing bytes to a trie is the same API as providing bytes to hash.
Hash tables do not need to have worse than O(1) or O(log n) worst-case performance, because no one forces you to use a linked list for chaining: you can store one element inline, fall back to a vector, and then fall back to a trie or another structure in pathological cases.
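The first two stages of that escalation can be sketched as a per-bucket structure; this is an illustrative sketch (the Bucket class and THRESHOLD constant are my own, not from any library), with the final trie stage left as a comment:

```python
class Bucket:
    """One hash-table bucket that escalates on collisions:
    empty -> a single inline (key, value) pair -> a small vector.
    A real implementation would escalate once more to a trie keyed
    on the hash bytes when the vector passes THRESHOLD."""

    THRESHOLD = 8  # hypothetical switch-over point to a trie

    def __init__(self):
        self.inline = None   # single pair: the common, collision-free case
        self.vector = None   # list of pairs once a collision occurs

    def put(self, key, value):
        if self.vector is not None:
            for i, (k, _) in enumerate(self.vector):
                if k == key:
                    self.vector[i] = (key, value)
                    return
            self.vector.append((key, value))
            # if len(self.vector) > self.THRESHOLD: convert to a trie
        elif self.inline is None or self.inline[0] == key:
            self.inline = (key, value)
        else:
            # first collision: spill the inline pair into a vector
            self.vector = [self.inline, (key, value)]
            self.inline = None

    def get(self, key):
        if self.vector is not None:
            for k, v in self.vector:
                if k == key:
                    return v
            return None
        if self.inline is not None and self.inline[0] == key:
            return self.inline[1]
        return None
```

The point is that an attacker forcing everything into one bucket only degrades lookups to the fallback structure's bound, not to a linked list's O(n).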