This is up and running as a C++ library. Currently, transactional variable reads and writes are ~50x slower than ordinary variables: 40% of the overhead is transaction bookkeeping, 40% is concurrent garbage collection bookkeeping, and 20% is platform atomics. Many optimizations to go. https://twitter.com/TimSweeneyEpic/status/1210260682605764611
This amount of overhead may be acceptable, as you'd only use transactional variables for shared globally-visible state (like properties of game objects) and not locals. If performance is dominated by low-level operations like collision detection, this may be negligible.
For comparison, UE3 UnrealScript and pre-il2cpp Unity Mono bytecode interpretation were ~30x slower than native.
One great side effect of transactions is that variable writes can be automatically undone upon failure. For example, you can write something like "if (move a bunch of actors) ... else ...", and if any of the operations fail, all of their effects are undone.
I've had to re-learn my code optimization intuition from the late 90's. C++ compiler optimization is magic now, and Skylake can reliably issue 4-6 instructions per clock. But control flow misprediction has become wildly expensive.
Anyway, there are two competing theories on how we'll unlock higher performance through parallelism. One is the data-oriented design approach, which asks programmers to rewrite gameplay code as highly parallel fragments of algorithms that pipe inputs and outputs among stages.
The other is transactions, hoping we can just write gameplay code using "var<int> Health;" instead of "int health;", write code to minimize unnecessary contention for shared state, and have the engine and API magically sort out concurrency for us.
Replying to @TimSweeneyEpic
I wanted to investigate a model where every entity could potentially be a separate thread, but you could only read the immutable previous frame’s data for other entities. Some challenges with complex interaction resolution.
If you do that, track reads and writes to each entity in the new frame updates, and rerun any updates that relied on state that was changed by a prior new-frame entity update, you have transactions via copy-on-write!
Replying to @TimSweeneyEpic @ID_AA_Carmack
Also, QuakeWorld’s rewind-and-rerun-local-update scheme resembles distributed transaction systems, which do work speculatively and either commit or discard depending on whether conflicts were later detected.
Replying to @TimSweeneyEpic @ID_AA_Carmack
Rollback networking, like most modern fighting games use.