This is up and running as a C++ library. Currently, transactional variable reads and writes are ~50x slower than ordinary variable accesses: 40% of the overhead is transaction bookkeeping, 40% is concurrent garbage collection bookkeeping, and 20% is platform atomics. Many optimizations to go. https://twitter.com/TimSweeneyEpic/status/1210260682605764611
This amount of overhead may be acceptable, as you'd only use transactional variables for shared globally-visible state (like properties of game objects) and not locals. If performance is dominated by low-level operations like collision detection, this may be negligible.
For comparison, UE3 UnrealScript and pre-il2cpp Unity Mono bytecode interpretation were ~30x slower than native.
One great side-effect of transactions is that you can automatically undo variable writes upon failure, for example so you can write something like "if(move a bunch of actors) ... else ...", and if any operations fail, then all of their effects are undone.
I've had to re-learn my code optimization intuition from the late 90's. C++ compiler optimization is magic now, and Skylake can reliably issue 4-6 instructions per clock. But control flow misprediction has become wildly expensive.
Anyway, there are two competing theories on how we'll unlock higher performance through parallelism. One is the data oriented design approach, asking programmers to rewrite gameplay code as highly parallel fragments of algorithms that pipe inputs and outputs among stages.
Replying to @TimSweeneyEpic
Data-oriented design has got to be the way to go, IMHO. From your garbage collection experiments, it seems like you're betting on the other alternative, though?
Replying to @JimmiesCrowns
Epic isn’t betting on this approach. But I’m betting my two-week coding vacation on it.
Replying to @TimSweeneyEpic
Garbage collectors are an interesting subject. I've mainly been working with Boehm. I'm a bit skeptical about it; it seems like opium for devs, making the code slow and leaving lots of people unaware of retention issues. Has your opinion on GCs changed during these two weeks?
My hypothesis is that though GC, futures, transactions, covariance, and mostly-functional data structures are individually very costly, much of the overhead is shared, so there's a new sweet spot for C++ programming that trades overhead for easy thread scalability.
Replying to @TimSweeneyEpic
It's an interesting idea, and it sounds reasonable. But it feels like the devil is in the details, and for lots of realtime apps the GC cost is going to outweigh its convenience of use. Very cool that you're working on it, though. You gonna keep at it? Would love to see more on it.