Not really. Go uses established, general kernel APIs. It doesn't work with the kernel to try to guarantee that a single kernel thread is executing on each core at any point. Futexes are not an API for user/kernel thread coordination; they are an API for coordination between user threads.
Just because the coordination problem they solve is hard and fickle (it is) doesn't mean it is as hard as Scheduler Activations.
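To make the futex point concrete, here is a minimal Linux sketch in plain C (not Go's runtime): two ordinary user threads using the raw futex(2) syscall as a park/wake primitive. The kernel only blocks and wakes the waiter; the decisions about who waits and when to wake live entirely in user space.

```c
// Minimal futex wait/wake between two user threads (Linux-specific sketch).
// The kernel merely parks and wakes the waiter; the coordination logic
// (who waits, when to wake) is all in user space.
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int flag = 0;  // 0 = not ready, 1 = ready

static long futex(atomic_int *addr, int op, int val) {
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void *waiter(void *arg) {
    // Sleep in the kernel only while the flag still reads 0.
    while (atomic_load(&flag) == 0)
        futex(&flag, FUTEX_WAIT, 0);
    printf("waiter: woken, flag=%d\n", atomic_load(&flag));
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    sleep(1);                       // let the waiter block
    atomic_store(&flag, 1);         // publish the state change in user space
    futex(&flag, FUTEX_WAKE, 1);    // ask the kernel to wake one waiter
    pthread_join(t, NULL);
    return 0;
}
```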
Replying to @__gparmer @kkourt
What about NGPT in Linux, though? That's M:N, and it didn't require the same degree of kernel/user interaction. Also abandoned in favor of 1:1. See http://www.drdobbs.com/open-source/nptl-the-new-implementation-of-threads-f/184406204
Old library with benchmarks: https://lwn.net/Articles/10710/ For creation/destruction, 1:1 beat M:N by a factor of 4 (!!)
Replying to @pcwalton @__gparmer
I wish they provided more details on why this happens, because it does not make sense to me. They even mention that they expected M:N to be better in this scenario.
The NPTL paper (https://akkadia.org/drepper/nptl-design.pdf) mentions that: "Here the two schedulers work closely together: the user-level scheduler can give the kernel scheduler hints while the kernel scheduler notifies the user-level scheduler about its decisions."
and: "To allow context switching in the user-level scheduler it would be often necessary to copy the contents of the registers from the kernel space." Both look like very suspect decisions to me.
Systems that use tasks (green threads) are typically faster at task creation and switching. Task creation is mostly allocating a stack, and context switching is loading and storing a few registers (if you don't care about delivering POSIX signals to individual tasks).
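As a rough illustration of that claim (a generic ucontext(3) sketch, not Go's or Rust's actual runtime): creating a task is little more than allocating a stack, and a switch is saving and restoring a handful of registers. Note that glibc's swapcontext also saves the signal mask with a syscall, which is exactly the POSIX-signal cost mentioned above; real green-thread runtimes usually avoid that.

```c
// Sketch of user-level tasks: creation = allocate a stack + makecontext,
// switch = swapcontext, which saves and restores the registers (plus, in
// glibc, the signal mask).
#include <ucontext.h>
#include <stdio.h>
#include <stdlib.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, task_ctx;

static void task_body(void) {
    printf("task: running on its own stack\n");
    // Returning switches to uc_link (main_ctx).
}

int main(void) {
    // "Task creation": mostly just allocating a stack.
    char *stack = malloc(STACK_SIZE);
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = STACK_SIZE;
    task_ctx.uc_link = &main_ctx;        // where to go when the task returns
    makecontext(&task_ctx, task_body, 0);

    // "Context switch": store the current registers, load the task's.
    swapcontext(&main_ctx, &task_ctx);
    printf("main: back after the task finished\n");
    free(stack);
    return 0;
}
```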
Replying to @kkourt @__gparmer
Task creation is mostly allocating a stack for the kernel too. I have to repeat that we did these measurements for Rust and found that M:N was not worth it *for performance reasons*, not just for compatibility.
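For comparison, a sketch of what the kernel-thread version of "mostly allocating a stack" looks like at the syscall level: a raw clone(2) call that shares the address space, with the caller supplying the new thread's stack. pthread_create does roughly this, plus TLS setup, guard pages, and bookkeeping.

```c
// Kernel "task creation" is also mostly allocating a stack plus one syscall.
// (Simplified: no CLONE_THREAD/TLS, so the child is waitable like a process.)
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define STACK_SIZE (64 * 1024)

static int child(void *arg) {
    printf("child: running in the shared address space\n");
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);          // the new thread's stack
    // Stacks grow down on x86/ARM, so pass the top of the allocation.
    pid_t tid = clone(child, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (tid < 0) { perror("clone"); return 1; }
    waitpid(tid, NULL, 0);                     // reap the child
    free(stack);
    return 0;
}
```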
I don't know the history here. Did you use cactus stacks? Without scheduler activations, system interactions (e.g. syscalls) will often get slower. I'd imagine most Rust apps use more system calls than thread operations.
We used M:N with segmented stacks for a while, with a lot of caching, but were disappointed by the performance. So we moved to M:N with larger stacks. Eventually we threw out the whole thing and went to 1:1.