Found myself wondering what kind of data structure you'd need to represent fine geometry at the scale of a world-sized "metaverse." Naively, assuming you want micrometer resolution: 1° ≈ 1e5 m, so micrometer precision means resolving ~1e-11°; i.e. lat/long must distinguish ~1e13 values. A double (53 significand bits, ~9e15 values) suffices!
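A quick back-of-the-envelope check of that claim, stdlib only (the float32 spacing is worked out by hand, since `math.ulp` only covers doubles):

```python
import math

# One degree of latitude is ~1.11e5 m, so micrometer precision requires
# lat/long resolved to roughly 1e-6 / 1.11e5 ≈ 1e-11 degrees.
REQUIRED_DEG = 1e-11

# Spacing between adjacent doubles near the worst case (±180 degrees):
double_ulp = math.ulp(180.0)      # 2**-45 ≈ 2.8e-14 deg ≈ 3 nm on the ground

# float32 has a 24-bit significand; near 180 its spacing is 2**-16 deg.
float32_ulp = 2.0 ** -16          # ≈ 1.5e-5 deg ≈ 1.7 m on the ground

assert double_ulp < REQUIRED_DEG      # a double is comfortably fine
assert float32_ulp > REQUIRED_DEG     # raw float32 lat/long is not
```

So a double leaves about three orders of magnitude of headroom, while a raw float32 lat/long is off by meters.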
(Of course, in practice I'd guess geometry positions would be represented at runtime as Cartesian coordinates relative to some local "sector" reference frame, so you could probably just use a float.)
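A minimal 1-D sketch of that sector idea; `SECTOR_SIZE_M` and the function names are hypothetical, not from any particular engine:

```python
# Coarse integer sector index + small local offset in meters.
# float32 spacing at 1024 m is 2**-13 ≈ 0.1 mm, so the local part
# survives a downcast to float32 with sub-millimeter error.
SECTOR_SIZE_M = 1024.0

def to_sector(world_m: float) -> tuple[int, float]:
    """Split an absolute coordinate (meters, double) into (sector, local)."""
    sector = int(world_m // SECTOR_SIZE_M)
    return sector, world_m - sector * SECTOR_SIZE_M

def to_world(sector: int, local_m: float) -> float:
    """Recombine; exact in double precision."""
    return sector * SECTOR_SIZE_M + local_m
```

For true micrometer fidelity in the local frame, the sector would have to shrink to ~8 m (float32 spacing at 8 m is 2**-20 ≈ 1e-6 m), or the offset could be stored as fixed-point integer micrometers instead.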
Oh, boy, this looks really great.
The limiting factor will be the size of floats in the rendering pipeline. Nested reference frames are almost certainly required if you expect participation from a broad range of consumer hardware.
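One common way around the float32 rendering bottleneck is camera-relative (a.k.a. origin-rebased) transforms: subtract the camera position in double precision on the CPU, then downcast. A hedged sketch, simulating GPU float32 with a `struct` round-trip:

```python
import struct

def as_f32(x: float) -> float:
    """Round-trip through IEEE float32 to simulate GPU precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def render_pos(world, camera):
    """Subtract the camera in double precision *before* downcasting,
    so float32 error scales with distance to the camera rather than
    distance to the world origin."""
    return tuple(as_f32(w - c) for w, c in zip(world, camera))

# A vertex 10,000 km from the origin but 0.5 m from the camera:
world, camera = (1.0e7 + 0.5, 0.0, 0.0), (1.0e7, 0.0, 0.0)
naive = tuple(as_f32(w) for w in world)   # float32 spacing at 1e7 is 1.0 m
good = render_pos(world, camera)          # offset preserved exactly: 0.5
```

The naive downcast snaps the vertex a half-meter (float32 can't even represent 10000000.5); the camera-relative version keeps it exact.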