Conversation

Inspired by the awesome "Instant Neural Graphics Primitives" paper, I had a go at optimising hash tables with gradient descent in my toy ML code. Crop of a fitted image for a 4-layer ReLU MLP vs. multires hash tables (roughly the same parameter count for each):
[Image: fitted-image crops, ReLU MLP vs. multires hash tables]
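For anyone curious what "optimising hash tables with gradient descent" can look like, here is a minimal toy sketch in JAX of a multiresolution hash encoding fitted by SGD, after the Instant-NGP idea. All names, table sizes, and the target function are illustrative assumptions, not the author's code:

```python
import jax
import jax.numpy as jnp

# Toy multiresolution hash encoding (illustrative sizes, not the author's).
TABLE_SIZE = 2048   # entries per level
FEATURES = 2        # feature dims per entry
LEVELS = 4          # resolutions 16, 32, 64, 128

def hash_coords(ix, iy):
    # Spatial hash of integer grid coords into the table (Instant-NGP-style
    # primes; int32 overflow wraps, which is fine for hashing).
    return (ix ^ (iy * 805459861)) % TABLE_SIZE

def encode(tables, xy):
    # xy in [0,1)^2; gather bilinearly blended features per level.
    feats = []
    for level in range(LEVELS):
        res = 16 * (2 ** level)
        g = xy * res
        i0 = jnp.floor(g).astype(jnp.int32)
        f = g - i0
        acc = jnp.zeros(FEATURES)
        # Blend the four surrounding corners of the grid cell.
        for dx in (0, 1):
            for dy in (0, 1):
                idx = hash_coords(i0[0] + dx, i0[1] + dy)
                w = (f[0] if dx else 1 - f[0]) * (f[1] if dy else 1 - f[1])
                acc = acc + w * tables[level, idx]
        feats.append(acc)
    return jnp.concatenate(feats)

def predict(params, xy):
    tables, w, b = params
    return encode(tables, xy) @ w + b   # tiny linear decoder

def loss(params, xys, targets):
    preds = jax.vmap(lambda p: predict(params, p))(xys)
    return jnp.mean((preds - targets) ** 2)

key = jax.random.PRNGKey(0)
tables = 1e-4 * jax.random.normal(key, (LEVELS, TABLE_SIZE, FEATURES))
w = 0.1 * jax.random.normal(key, (LEVELS * FEATURES,))
b = jnp.float32(0.0)
params = (tables, w, b)

# Fit a toy 2D target f(x, y) = sin(6x) * cos(6y) by plain SGD:
# the hash-table entries themselves receive gradients.
xys = jax.random.uniform(key, (256, 2))
targets = jnp.sin(6 * xys[:, 0]) * jnp.cos(6 * xys[:, 1])

grad_fn = jax.jit(jax.grad(loss))
lr = 0.5
initial = loss(params, xys, targets)
for _ in range(200):
    g = grad_fn(params, xys, targets)
    params = jax.tree_util.tree_map(lambda p, gr: p - lr * gr, params, g)
final = loss(params, xys, targets)
```

The point of the trick is that the lookups are differentiable in the table values, so ordinary autodiff trains the tables directly; only the integer hashing itself is non-differentiable.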
Ooooh, that is incredible, thank you for sharing 💖 Multires hashing looks exciting for radiance caching, but NV's license and the CUDA-only aspects were a show-stopper for me. Looking forward to studying your code!
Thanks! 🤩 The generated compute shaders aren't particularly optimal, but they should be fairly portable (they do need float_atomic_add for the scattered adds now, so not quite Vulkan 1.0 any more...)
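The "scattered adds" here are the backward pass writing gradients into hash-table slots that many pixels share, which is what forces float atomic adds in the shader. A hypothetical CPU-side sketch of the same accumulation pattern, using NumPy's unbuffered scatter-add in place of one atomicAdd per contribution (names are illustrative, not the author's shader code):

```python
import numpy as np

# Gradient buffer for one hash-table level (illustrative size).
TABLE_SIZE = 8
grad_table = np.zeros((TABLE_SIZE, 2), dtype=np.float32)

# Each pixel produced a gradient for the entry it hashed to;
# hash collisions mean several pixels target the same slot.
slots = np.array([3, 3, 5, 3, 0])              # slot 3 collides 3 times
pixel_grads = np.ones((5, 2), dtype=np.float32)

# np.add.at is the unbuffered, collision-safe scatter-add: the CPU
# analogue of one float atomicAdd per (slot, gradient) pair. A plain
# grad_table[slots] += pixel_grads would silently drop the duplicates.
np.add.at(grad_table, slots, pixel_grads)
```

On the GPU the same accumulation needs atomic float adds because the colliding contributions arrive from concurrent invocations, and that capability sits behind a Vulkan extension rather than core 1.0.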