I’ve now implemented an all-GPU-compute pipeline in a Pathfinder branch, avoiding the rasterizer entirely (the CPU is still used for tiling). However, so far it seems that using a compute shader to create vector tiles and the standard GPU rasterizer to composite them is fastest…
Yeah, I’ve thought about doing it that way. But other compute-based vector rendering solutions I’ve seen don’t work this way; they interpret command lists sequentially per tile.
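To illustrate what "interpret command lists sequentially per tile" means, here is a minimal CPU-side sketch, with Python standing in for a compute shader where one threadgroup would own each tile. The command format and names are hypothetical, not Pathfinder's actual representation:

```python
def render_tile(commands, tile_id):
    """Walk the full draw-command list in paint order, keeping only the
    work that touches this tile. On a GPU, one thread(group) per tile
    runs this loop; other tiles are handled by other threadgroups."""
    color = (0.0, 0.0, 0.0, 0.0)  # premultiplied RGBA accumulator
    for cmd in commands:
        if cmd["tile"] != tile_id:
            continue  # not our tile
        src = cmd["color"]  # premultiplied RGBA of this fill
        a = src[3]
        # source-over compositing, strictly in command-list order
        color = tuple(s + c * (1.0 - a) for s, c in zip(src, color))
    return color
```

The key property is that within a tile the interpretation is strictly sequential, so paint order is preserved without any cross-thread synchronization.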
The problem with the parallel sum approach is that it's not the most energy-efficient. Baking Bézier curves into an intermediate MSDF representation would be quite efficient for scaling and compositing. Color MSDF might be tricky, though. Cached coverage channels?
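For context on the parallel-sum approach being discussed: one common formulation is to compute winding numbers along a scanline as a prefix sum of per-pixel winding deltas. A minimal sketch, using `itertools.accumulate` as a stand-in for a GPU parallel scan:

```python
from itertools import accumulate

def coverage_by_scan(deltas):
    """Inclusive prefix sum of per-pixel winding deltas along a scanline.
    On a GPU this would run as a work-efficient parallel prefix sum;
    here a sequential accumulate stands in for it."""
    winding = list(accumulate(deltas))
    # Non-zero fill rule: a pixel is covered iff its winding number != 0.
    return [1.0 if w != 0 else 0.0 for w in winding]
```

The energy cost mentioned above comes from the scan itself: every pixel participates in the reduction, even pixels far from any edge.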
I’ve thought about MSDF, but I don’t think it works for dynamic vector graphics (like canvas), because MSDF is too expensive to generate. I’m interested in trying a variant of MSDF combined with sparse virtual texturing and a background/foreground per cell, though…
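For readers unfamiliar with MSDF: at sample time, the shape is reconstructed by taking the median of the three channels and thresholding it, which is what preserves sharp corners that a single-channel SDF would round off. A minimal sketch of the per-texel decode (generation, the expensive part mentioned above, is not shown):

```python
def msdf_inside(r, g, b, threshold=0.5):
    """Decode one texel of a multi-channel signed distance field:
    the median of the three channels approximates the signed distance,
    and values above the threshold are inside the shape."""
    median = sorted((r, g, b))[1]
    return median > threshold
```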
- 4 more replies
New conversation
It's simplicity vs. complexity. Unfortunately, writing efficient compute code isn't straightforward: the obvious way usually isn't the fastest.
Yeah, the thing is that the rasterizer is often the simplest option of all… let the silicon do the hard work :)
- 5 more replies
New conversation
To be clear: I’m definitely inclined to believe that you’re correct, and that you have to process overlapping pixels in parallel if you want to beat the rasterizer. It matches what I’ve seen. I’ve just heard conflicting evidence, that’s all.