But I don't want to trace rays. I want to trace cones. Rays produce noisy, aliased results, while cones have very nice properties such as automatic LOD (a pixel is a frustum, which is better approximated by a cone than by a ray).
But... isn't that just shooting rays in noise-based directions inside your pixel cone?
Not in the general case. You can pre-filter your geometry and trace against that with distance from origin to map the cone width.
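A minimal sketch of the idea above, assuming a single analytic sphere as the scene (a real tracer would sample a pre-filtered SDF volume, picking the filter/mip level from the cone width at the current distance; `coneTrace`, `sdSphere`, and the step count are all illustrative names and values, not anyone's shipped code):

```cpp
#include <cmath>

// Toy scene: a unit sphere SDF. Stand-in for a pre-filtered SDF volume.
struct Vec3 { float x, y, z; };
static Vec3 addv(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mulv(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float sdSphere(Vec3 p, float r) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - r;
}

// Cone tracing: march like sphere tracing, but the cone's radius grows
// linearly with distance t, and we stop when the surface distance falls
// below that radius -- the cone footprint has touched geometry.
float coneTrace(Vec3 origin, Vec3 dir, float tanHalfAngle, float tMax) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < tMax; ++i) {
        float d = sdSphere(addv(origin, mulv(dir, t)), 1.0f);
        float coneRadius = t * tanHalfAngle;
        if (d < coneRadius) return t;  // cone footprint intersects the surface
        t += d;                        // safe sphere-tracing step
    }
    return -1.0f; // miss
}
```

With a pre-filtered SDF, `coneRadius` would also select the mip level to sample, which is what gives the automatic LOD mentioned above.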
Ok, makes sense.
How do first-bounce rays start at the geometry?
In my SDF-based ray tracer, the pixel-wide cone ends when it hits something. Then I sample partial derivatives from the SDF at the hit location to get the gradient = normal. The normal gets "smoother" the further away it is sampled from the surface.
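The gradient-from-partial-derivatives step can be sketched like this, using central differences on a toy sphere SDF (`sphereSdf` and the epsilon are placeholder choices; in the cone tracer above the sample spacing would scale with the cone width at the hit):

```cpp
#include <cmath>
#include <array>

// Toy scene function standing in for the real SDF.
static float sphereSdf(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Six SDF samples (+/-X, +/-Y, +/-Z) give the gradient via central
// differences; normalized, the gradient is the surface normal.
std::array<float, 3> sdfNormal(float x, float y, float z, float eps = 1e-3f) {
    float gx = sphereSdf(x + eps, y, z) - sphereSdf(x - eps, y, z);
    float gy = sphereSdf(x, y + eps, z) - sphereSdf(x, y - eps, z);
    float gz = sphereSdf(x, y, z + eps) - sphereSdf(x, y, z - eps);
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    return {gx / len, gy / len, gz / len};
}
```

Widening `eps` is what makes the normal "smoother": the differences average the field over a larger neighbourhood.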
Could this be done by deriving another cone instead of a normal, and then trace for that cone again?
I was just thinking about that. We do a six-point partial derivative (±X, ±Y, ±Z). You could use the same trick that Toksvig mapping does: determine the roughness from the normal length (less than 1 = rough, since averaging normals that point in different directions shortens the result if you don't renormalize).
we do this. (or did... simon may have iterated :)) i.e. interpret the normal computed via central differences of the SDF the same way as you would a Toksvig normal; in creases/edges it naturally gets shorter
Nice! So it works :). I was just wondering. Need to test how much it changes the look if I do the same for Claybook material model :)
iirc i also tried a blend of such normals computed up the mip chain, to soften and widen the kernel size. it can look a bit... voxely if overdone, iirc. what do you do these days?
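The mip-chain blend mentioned above might look something like this: take central-difference gradients at several sample spacings (standing in for SDF mip levels) and average them with falling weights. The spacings and weights here are guesses for illustration, not anyone's shipped values:

```cpp
#include <cmath>
#include <array>

static float sdf(float x, float y, float z) {  // toy scene: unit sphere
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Blend gradients taken at increasing sample spacings; wider spacing acts
// like a wider filter kernel, softening the normal. Each gradient is
// divided by its step size so the levels are comparable before weighting.
std::array<float, 3> blendedNormal(float x, float y, float z) {
    const float spacings[3] = {0.001f, 0.002f, 0.004f};  // "mip" levels
    const float weights[3]  = {0.5f, 0.3f, 0.2f};
    float gx = 0.0f, gy = 0.0f, gz = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float e = spacings[i], w = weights[i];
        gx += w * (sdf(x + e, y, z) - sdf(x - e, y, z)) / (2.0f * e);
        gy += w * (sdf(x, y + e, z) - sdf(x, y - e, z)) / (2.0f * e);
        gz += w * (sdf(x, y, z + e) - sdf(x, y, z - e)) / (2.0f * e);
    }
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    return {gx / len, gy / len, gz / len};
}
```

The "voxely" look when overdone presumably comes from the coarse levels leaking the SDF volume's grid structure into the blended normal.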
IIRC we used to do Toksvig in a local neighbourhood as a preprocess, back when we had somewhere to store it. Right now we rely on TAA; I've not managed to get good results directly from central-difference normals.




