I've tried this with zlib on text and, weirdly, the thread communication cost more than it saved. (YMMV; this was a long time ago, and the communication may not have been as efficient as it could have been.)
I did it for PNG decoding and it was a massive speedup. In fact it was basically optimal—the entire thing got bottlenecked on zlib. The trick is that you have to have something for the main thread to pipeline with—in PNG’s case, that’s the prediction/filtering.
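A minimal sketch of that pipeline, assuming only Python's stdlib (this is not real PNG decoding; real decoders parse chunks and support five filter types, and here every scanline is assumed to use the Sub filter). A worker thread runs zlib inflate while the calling thread does the prediction/unfiltering, so the two overlap:

```python
import queue
import threading
import zlib

def pipelined_decode(compressed, scanline_len):
    # A worker thread inflates while this thread "unfilters" scanlines,
    # so the zlib work and the prediction work overlap.
    q = queue.Queue(maxsize=8)

    def inflate():
        d = zlib.decompressobj()
        buf = b""
        for i in range(0, len(compressed), 4096):
            buf += d.decompress(compressed[i:i + 4096])
            while len(buf) >= scanline_len:
                q.put(buf[:scanline_len])
                buf = buf[scanline_len:]
        buf += d.flush()
        if buf:
            q.put(buf)
        q.put(None)  # sentinel: no more scanlines

    threading.Thread(target=inflate, daemon=True).start()

    lines = []
    while (line := q.get()) is not None:
        # Undo the Sub filter: each byte is a delta from the byte to
        # its left. This is the CPU work that pipelines with inflate.
        px = bytearray(line)
        for j in range(1, len(px)):
            px[j] = (px[j] + px[j - 1]) & 0xFF
        lines.append(bytes(px))
    return lines
```

The bounded queue is the important design choice: it provides backpressure so the inflate thread can't run arbitrarily far ahead of the filtering thread.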
New conversation
I've had mixed results with that. I never fully tracked it down, but I suspect it might be a cross-core cache-invalidation thing. If at all possible, decoding the whole thing in one go on a background thread seemed to work best. Obviously you can still decode multiple streams in parallel.
I had massive speedups doing this for PNG decoding. The trick is to have some CPU work to do on the main thread while you pipeline it; in PNG's case, that's prediction/filtering.
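The "decode multiple streams in parallel" suggestion above needs no handoff at all, and in CPython a plain thread pool is enough because zlib releases the GIL while decompressing. A sketch (`decode_streams` is a made-up name):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decode_streams(streams, workers=4):
    # Decompress several independent zlib streams concurrently.
    # CPython's zlib releases the GIL during (de)compression, so
    # threads get real parallelism with no per-block communication.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.decompress, streams))
```

Because each stream is decoded start to finish on one core, this also sidesteps the cross-core cache traffic suspected above.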
New conversation
I've wondered about this even for something like tar.gz extraction: would decompressing on one thread and doing disk writes on another thread be faster?
Quite possibly!
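A sketch of that tar.gz idea, assuming Python's stdlib; `write` is a stand-in for whatever per-block I/O you'd do (disk writes, tar-entry handling), so inflate and I/O can overlap:

```python
import gzip
import io
import queue
import threading

def pipelined_extract(gz_bytes, write):
    # Decompress .gz data on a worker thread while `write` (e.g. a
    # disk write) runs on the caller's thread. The bounded queue
    # keeps the decompressor from racing ahead of the I/O.
    q = queue.Queue(maxsize=16)

    def inflate():
        with gzip.GzipFile(fileobj=io.BytesIO(gz_bytes)) as f:
            while block := f.read(64 * 1024):
                q.put(block)
        q.put(None)  # sentinel: end of stream

    threading.Thread(target=inflate, daemon=True).start()
    while (block := q.get()) is not None:
        write(block)
```

Whether this beats a single thread depends on the same tradeoff discussed above: the per-block queue handoff has to cost less than the overlap saves.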
New conversation
I wonder how convenient the inflate crate would be for this. I think I originally meant for the bitstream decoder to be generic over the sliding-window buffer; you could maybe replace it with a reading head in the PNG thread and a writing head in the inflate thread, spinlocked together.