It would be interesting to bridge the gap between language-level async features and the async execution of out-of-order CPUs.
I think many async multicore operations would complete faster than uncached main memory reads, making autothreading practical.
So microscale yes, but what if at macroscale you want a task-oriented architecture that finishes in 10 ms without hogging cores?
Yes! I think this could be automated based on timing: tasks start on-core but are later flushed to main-memory futures, much like cache lines.
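A minimal sketch of that timing-based idea, in Python for readability. Everything here is hypothetical: `adaptive_spawn`, the `est_cost_s` hint, and the budget value are invented names, not a real scheduler API. Cheap tasks stay "on-core" (run inline); tasks estimated to outlast the budget are "flushed" to a worker pool and represented as futures, so callers treat both cases uniformly.

```python
import time
from concurrent.futures import ThreadPoolExecutor, Future

# Hypothetical pool standing in for "other cores / main memory".
_pool = ThreadPoolExecutor(max_workers=4)

def adaptive_spawn(fn, est_cost_s, budget_s=0.001):
    """Run fn inline if it fits the time budget, else offload it."""
    if est_cost_s <= budget_s:
        # Fits the budget: stay on-core, return an already-completed future.
        f = Future()
        f.set_result(fn())
        return f
    # Too slow: "flush" the task to the pool, like evicting a cache line.
    return _pool.submit(fn)

# Usage: a quick task completes synchronously, a slow one runs elsewhere,
# but both callers just await futures.
quick = adaptive_spawn(lambda: 2 + 2, est_cost_s=1e-6)
slow = adaptive_spawn(lambda: (time.sleep(0.01), 42)[1], est_cost_s=0.01)
print(quick.result(), slow.result())  # → 4 42
```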
Having some level of autonomy on the CPU side does help, though; expressing fine-grained parallel ideas in code is hard.
I guess it would be workload/problem/algorithm dependent whether there would be benefits.
If the distance to memory for the next task outweighs doing a quick parallel burst on in-cache data, then I guess yes.
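A back-of-envelope version of that tradeoff. The latency numbers are rough assumptions (on the order of ~100 ns for an uncached DRAM read versus ~1 ns per in-cache element operation), not measurements from any particular CPU.

```python
# Assumed latencies, for illustration only.
DRAM_READ_NS = 100  # fetching the next task's data from main memory
CACHE_OP_NS = 1     # one operation on data already in cache

def prefer_burst(n_cached_ops):
    # Stay on-core if the in-cache burst costs less than going to memory.
    return n_cached_ops * CACHE_OP_NS < DRAM_READ_NS

print(prefer_burst(50), prefer_burst(500))  # → True False
```

Under these assumptions, short bursts on cached data win, while long ones would be better scheduled around the memory fetch.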