Attention CSEd Twitter: While Big-O matters, the constant factors involved in things like cache-hit rate are at least as important. If you're teaching balanced binary trees for any reason other than tradition, I'd love to chat.
I don't care at all whether someone can implement a red-black tree. But ordered O(log N) structures solve a lot of problems, and candidates should understand their use.
Pointer chasing through freshly allocated nodes for the "structural data" is one reason why std::set, std::map and std::list perform poorly. A sorted vector (std::vector searched with std::lower_bound) is a good middle ground for larger set sizes and low insert frequency: memory then holds only the actual data, laid out contiguously. A sketch follows.
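A minimal sketch of that sorted-vector approach (illustrative, not a drop-in library): keep a std::vector sorted and use std::lower_bound for both lookup and the insert position.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Insert `x` keeping `v` sorted; skips duplicates (set semantics).
bool insert_sorted(std::vector<int>& v, int x) {
    auto it = std::lower_bound(v.begin(), v.end(), x);
    if (it != v.end() && *it == x) return false;
    v.insert(it, x);  // O(N) shift, acceptable at low insert frequency
    return true;
}

bool contains(const std::vector<int>& v, int x) {
    auto it = std::lower_bound(v.begin(), v.end(), x);  // O(log N) over contiguous memory
    return it != v.end() && *it == x;
}

int main() {
    std::vector<int> v;
    for (int x : {5, 1, 3}) insert_sorted(v, x);
    assert(contains(v, 3) && !contains(v, 4));
}
```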
It should be noted that std::map and std::set are not representative of self-balancing search trees in general. Their iterator and reference stability guarantees force an implementation that is inefficient on modern hardware: one heap node per element. B-trees are a great data structure, but the standard's API guarantees rule them out.
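A quick illustration of the guarantee that boxes the implementation in: a reference into a std::map must survive any number of later inserts, which is only cheap if every element gets its own stable allocation.

```cpp
#include <cassert>
#include <map>

int main() {
    std::map<int, int> m{{1, 10}};
    int& ref = m[1];                // reference into the tree
    for (int i = 2; i < 10000; ++i)
        m[i] = i;                   // rebalancing relinks pointers but never moves elements
    ref = 42;                       // still valid: guaranteed by the standard
    assert(m[1] == 42);
}
```

A B-tree would shuffle elements between nodes on insert, invalidating `ref`, so the guarantee effectively mandates a node-per-element tree.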
A good reason to use a red-black tree is intrusive data structures, where the tree nodes are threaded through existing objects, often as a way of avoiding dynamic allocation. It's a useful approach in memory allocators, kernels and robust libraries.
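One way to get that without hand-rolling a tree is Boost.Intrusive, whose set<> is a red-black tree by default. A minimal sketch; the `Buffer` type is made up for illustration:

```cpp
#include <boost/intrusive/set.hpp>
#include <cassert>

namespace bi = boost::intrusive;

// The tree node lives *inside* the object: insertion allocates nothing.
struct Buffer : bi::set_base_hook<> {
    int id;
    explicit Buffer(int id) : id(id) {}
    friend bool operator<(const Buffer& a, const Buffer& b) { return a.id < b.id; }
};

int main() {
    Buffer a{1}, b{2}, c{3};   // storage owned elsewhere (stack, pool, ...)
    bi::set<Buffer> tree;      // the container itself allocates nothing
    tree.insert(a);
    tree.insert(c);
    tree.insert(b);
    assert(tree.begin()->id == 1);  // ordered, O(log N) lookups as usual
    tree.clear();              // unlink before the objects are destroyed
}
```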
B-trees are ordered data structures providing O(log n) find, insert and delete while also having great data locality. The node size can be increased or decreased to tune performance. That locality is why they're so broadly used by databases and filesystems.
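A rough picture of why the locality is good; the layout and the `Fanout` value below are illustrative assumptions, not any particular library's:

```cpp
#include <cstdint>

// Illustrative B-tree node layout (not a full implementation). `Fanout` is the
// tuning knob: raise it for locality, lower it to cut per-insert shift costs.
template <typename K, typename V, int Fanout = 16>
struct BTreeNode {
    int32_t count = 0;            // keys currently stored
    K keys[Fanout - 1];           // sorted, searched in place: cache-friendly
    V values[Fanout - 1];
    BTreeNode* children[Fanout];  // all null in leaf nodes
};

// With 64-bit keys, Fanout = 16 packs ~two cache lines of keys per node.
static_assert(sizeof(BTreeNode<int64_t, int64_t, 16>) < 512, "node stays compact");
```

One node fetch answers up to Fanout - 1 comparisons, versus one pointer dereference per comparison in a binary tree.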
For storage, node size is tuned to match the storage block size. You could use those relatively massive nodes in memory if you wanted, but you would usually want a significantly smaller node size. The B-tree is Rust's only standard ordered map/set for good reason: https://doc.rust-lang.org/std/collections/index.html
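For a sense of scale, a back-of-envelope fanout for block-matched nodes; every size here is an assumption:

```cpp
#include <cstddef>

constexpr std::size_t block_size = 4096;  // typical 4 KiB disk/page block
constexpr std::size_t key_size   = 8;     // 64-bit key
constexpr std::size_t child_size = 8;     // 64-bit child page id
constexpr std::size_t fanout     = block_size / (key_size + child_size);
static_assert(fanout == 256, "hundreds of children per node at block granularity");
```

Hundreds of children per node means a very shallow tree, so only a handful of block reads per lookup; in-memory trees trade that down to node sizes closer to a few cache lines.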