Figured I’d throw this out there: does anyone know the fastest way to *conservatively* rasterize lines on GPU? I’ve tried: (1) drawing instanced thin quads; (2) drawing thin quads w/o instancing (~15% faster); (3) glLineWidth(2.0) (fastest on NVIDIA, but deprecated)
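A minimal sketch of what option (1) can look like: one quad instance per segment, expanded in the vertex shader and dilated by half the pixel diagonal so coverage is conservative. The attribute layout, uniform name, and dilation amount below are illustrative assumptions, not anything from the thread.

```glsl
#version 330 core

// Per-instance segment endpoints in pixel space (set up with glVertexAttribDivisor(loc, 1)).
layout(location = 0) in vec2 aP0;
layout(location = 1) in vec2 aP1;

uniform vec2 uViewportSize; // framebuffer size in pixels

void main() {
    // gl_VertexID in [0, 3] picks a corner of a 4-vertex triangle strip
    // (drawn with glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numSegments)).
    vec2 corner = vec2(float(gl_VertexID & 1), float(gl_VertexID >> 1));

    vec2 dir = normalize(aP1 - aP0);
    vec2 nrm = vec2(-dir.y, dir.x);

    // Dilate by half the pixel diagonal along both quad axes: every pixel the
    // ideal zero-width segment touches is then covered (a slight overestimate).
    const float dilate = 0.7071;
    vec2 pos = mix(aP0 - dir * dilate, aP1 + dir * dilate, corner.x)
             + nrm * (corner.y * 2.0 - 1.0) * dilate;

    // Pixel space -> clip space.
    gl_Position = vec4(pos / uViewportSize * 2.0 - 1.0, 0.0, 1.0);
}
```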
-
Replying to @pcwalton
Sidenote: are GPUs good at lines? If they're rendered as triangles they'd be bad from a quad-occupancy perspective, but is there some optimization for lines?
-
Replying to @anders_breakin
From what I’ve gathered they’re usually implemented with hardware Bresenham, but I could be totally wrong.
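For reference, the textbook Bresenham walk, written out in GLSL; this is just the classic algorithm, not a claim about what any vendor's fixed-function unit actually does, and the output image binding is an assumption.

```glsl
layout(rgba8, binding = 0) uniform writeonly image2D uOutput; // assumed target

void bresenham(ivec2 a, ivec2 b, vec4 color) {
    ivec2 d   = abs(b - a);
    ivec2 s   = ivec2(a.x < b.x ? 1 : -1, a.y < b.y ? 1 : -1);
    int   err = d.x - d.y;
    ivec2 p   = a;
    while (true) {
        imageStore(uOutput, p, color);
        if (p == b) break;                          // reached the last pixel
        int e2 = 2 * err;
        if (e2 > -d.y) { err -= d.y; p.x += s.x; }  // step along x
        if (e2 <  d.x) { err += d.x; p.y += s.y; }  // step along y
    }
}
```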
-
Replying to @pcwalton @anders_breakin
GPUs I know run lines through the same pipe as everything else, with shaders and quads and the works.
-
Replying to @bmcnett @anders_breakin
Sure, but I took the question to be how the hierarchical rasterization/edge equations/etc work for lines. At least that’s what I’m interested in :)
-
Replying to @pcwalton @anders_breakin
The diamond-exit rule seems pretty standard these days.
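Roughly, the rule says a pixel emits a fragment when the segment enters the diamond inscribed in that pixel and leaves it again before the segment's own endpoint. A sketch of that test (my reading of it; real hardware's endpoint and boundary handling is more subtle):

```glsl
// Does segment p0->p1 "exit" the diamond |x-cx| + |y-cy| <= 0.5 around pixel
// center c? Clip the segment against the diamond's four half-planes
// (Liang-Barsky style) and check that it leaves before its endpoint.
bool diamondExit(vec2 c, vec2 p0, vec2 p1) {
    vec2 d = p1 - p0;
    float tEnter = 0.0, tExit = 1.0;
    for (int i = 0; i < 4; i++) {
        // Half-plane: dot(s, p - c) <= 0.5, with s ranging over (+-1, +-1).
        vec2 s = vec2((i & 1) == 0 ? 1.0 : -1.0, (i & 2) == 0 ? 1.0 : -1.0);
        float num = 0.5 - dot(s, p0 - c);
        float den = dot(s, d);
        if (abs(den) < 1e-6) {
            if (num < 0.0) return false;     // parallel to this edge and outside
        } else if (den > 0.0) {
            tExit = min(tExit, num / den);   // leaving this half-plane
        } else {
            tEnter = max(tEnter, num / den); // entering this half-plane
        }
    }
    // Inside the diamond for some interval, and out again before the endpoint.
    return tEnter <= tExit && tExit < 1.0;
}
```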
-
What I was getting at was that if lines were inefficient enough from a shader perspective, then doing them in compute might be a win... with some acceleration structure, that is. If you need enough special features, that ought to win at some point...
-
Replying to @anders_breakin @pcwalton
Definitely, a line rasterizer in compute can beat the fixed-function pipeline, and any serious app with a lot of lines should consider software rasterizing them in compute.
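The inner loop of such a thing might look roughly like the sketch below: one invocation per segment, a simple DDA walk, imageStore into the target. The buffer/image names and layouts are assumptions; a serious version would bin segments into tiles first so neighbouring invocations write nearby pixels.

```glsl
#version 430 core
layout(local_size_x = 64) in;

struct Segment { vec2 p0; vec2 p1; };

layout(std430, binding = 0) readonly buffer Segments { Segment segments[]; };
layout(rgba8, binding = 0) uniform writeonly image2D uOutput;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(segments.length())) return;

    vec2 p0 = segments[i].p0, p1 = segments[i].p1;
    vec2 d  = p1 - p0;

    // DDA: one step per pixel along the major axis.
    int  steps = int(ceil(max(abs(d.x), abs(d.y))));
    vec2 inc   = d / float(max(steps, 1));
    vec2 p     = p0;
    for (int s = 0; s <= steps; s++) {
        imageStore(uOutput, ivec2(floor(p)), vec4(1.0));
        p += inc;
    }
}
```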
-
Would be fun to write one too if you had a lot of time ;) You can tackle sorting/z-buffer/blending manually however you want... many choices... As usual, the biggest performance wins come from tailoring it to your choices!
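For the z-buffer part specifically, one common trick is to pack depth into the high bits and a color/ID into the low bits of a 32-bit word and let imageAtomicMin keep the nearest fragment; the 16/16 split and image name here are assumptions for the sketch.

```glsl
layout(r32ui, binding = 1) uniform coherent uimage2D uDepthColor; // assumed target

void plotWithDepth(ivec2 p, float depth01, uint colorIndex) {
    // depth01 in [0, 1]; smaller means closer, so the min-atomic keeps it.
    uint word = (uint(clamp(depth01, 0.0, 1.0) * 65535.0) << 16)
              | (colorIndex & 0xFFFFu);
    imageAtomicMin(uDepthColor, p, word);
}
```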
-
Replying to @anders_breakin @bmcnett
Pathfinder actually does a fair bit in compute already. But the reason why I’m using the rasterizer for tiling is fragment scheduling. Hard to beat the fixed function hardware for this in compute (NVIDIA’s attempt was like 2x slower).
-
I had a feeling you had looked into this! Thanks for the link!
-