Inspired by the awesome "Instant Neural Graphics Primitives" paper, I had a go at optimising hash tables with gradient descent in my toy ML code. Crop of a fitted image for a 4-layer ReLU network vs multires hash tables (roughly the same parameter count for each):
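For anyone curious what "optimising hash tables with gradient descent" means here: each level of the multires encoding is a hash table of small learned feature vectors, indexed by a spatial hash of the grid cell and bilinearly interpolated, and those entries are trained like any other parameters. A minimal single-level sketch in Rust (constants and names are illustrative, not from the descent repo; the hash follows the primes used in the paper):

```rust
const TABLE_SIZE: usize = 1 << 14; // entries per level (illustrative)
const FEATURES: usize = 2;         // feature vector length per entry

// Spatial hash from the paper: XOR of coordinates times large primes.
fn hash2(x: u32, y: u32) -> usize {
    (x ^ y.wrapping_mul(2_654_435_761)) as usize % TABLE_SIZE
}

// Bilinearly interpolate learned features at a continuous 2D position
// in [0, 1)^2, scaled to this level's grid resolution.
fn encode(table: &[[f32; FEATURES]], pos: [f32; 2], resolution: u32) -> [f32; FEATURES] {
    let px = pos[0] * resolution as f32;
    let py = pos[1] * resolution as f32;
    let (x0, y0) = (px.floor() as u32, py.floor() as u32);
    let (fx, fy) = (px - x0 as f32, py - y0 as f32);
    let mut out = [0.0; FEATURES];
    // Accumulate the four corner entries with bilinear weights.
    for (dx, dy, w) in [
        (0u32, 0u32, (1.0 - fx) * (1.0 - fy)),
        (1, 0, fx * (1.0 - fy)),
        (0, 1, (1.0 - fx) * fy),
        (1, 1, fx * fy),
    ] {
        let entry = table[hash2(x0 + dx, y0 + dy)];
        for i in 0..FEATURES {
            out[i] += w * entry[i];
        }
    }
    out
}

fn main() {
    // Dummy table; in training these entries are the learned parameters,
    // updated by backprop through the interpolation weights above.
    let table = vec![[0.5f32; FEATURES]; TABLE_SIZE];
    let feats = encode(&table, [0.3, 0.7], 64);
    println!("{:?}", feats);
}
```

The interpolated features then feed a tiny MLP; because only a handful of table entries get gradients per sample, training is fast and the table soaks up most of the representational work.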
Toy ML code (array expressions in rust that generate vulkan compute shaders) is here if anyone is interested: github.com/sjb3d/descent/
Not for that image, but the repo includes a comparison with an added positional encoding and also a SIREN network here: github.com/sjb3d/descent/
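For reference, the positional-encoding baseline maps each input coordinate to sin/cos pairs at doubling frequencies before the MLP. A minimal sketch (the level count is illustrative, not the repo's setting):

```rust
use std::f32::consts::PI;

// Fourier-style positional encoding of a scalar coordinate:
// [sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...]
fn positional_encoding(x: f32, levels: u32) -> Vec<f32> {
    let mut out = Vec::with_capacity(2 * levels as usize);
    for l in 0..levels {
        let freq = (1u32 << l) as f32 * PI; // frequency doubles per level
        out.push((freq * x).sin());
        out.push((freq * x).cos());
    }
    out
}

fn main() {
    println!("{:?}", positional_encoding(0.25, 4));
}
```

Unlike the hash tables, this encoding has no learned parameters; all the fitting happens in the MLP weights, which is why it needs a larger network for the same image quality.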