There is a conversion layer that allows you to run D3D11 on top of D3D12, and you can even mix and match if you want to try out some new D3D12 features without having to write everything in D3D12: https://docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-11-on-12
This may not be exactly what you want, but I don't think MS or the IHVs wanted to have to support multiple D3D driver stacks indefinitely. D3D11 drivers are expensive if you want them to be fast; emulating the higher-level API on the lower one is probably the best choice.
IMO nobody knows the "best performance practices" for a given hw generation as well as the IHVs do; they should be the ones mapping some intermediate or high-level constructs to whatever the hardware consumes (with opt-in super-low-level hints, constructs, extensions).
Dx12 allows devs to expose information only the ISV knows. We understand what memory is needed when, what materials we’re about to draw, what the frame is about to do. We can argue that a simpler API is possible, but trying to optimize/guess this in drivers WAS the problem
I think there is a disconnect between our experiences as I *never* encountered any serious optimization that was so specific to my use-case vs API as opposed to Nvidia vs AMD (like whole handling of constant buffers vs constant fetches)
Multithreaded submission and guaranteed shader compilation don't count? I mean, those are two of the big ones. Stall to compile when you first draw was an API issue, not an IHV issue.
Oh, this is different - a definite flaw in DX11, not dependent on the application; it could have been fixed in 11.3 or .5. I'm asking about something that only the software developer knows and that is so unique no reasonable driver could handle it.
Hmmm, "I need these shaders compiled, so here is all the state I will use that might affect the bytecode" seems like it would qualify? Sure, D3D11 could have been made incrementally better, but the switch to PSOs isn't an incremental change. That would be a rewrite.
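The PSO idea described above can be illustrated as a cache keyed by the full pipeline state. This is a minimal sketch using a hypothetical, heavily simplified state struct (not the real D3D12 types, which capture far more: shaders, blend/raster/depth state, render-target formats, and so on); the point is that because all state is declared up front, compilation happens once per unique state at creation time, never mid-draw:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical, simplified pipeline state descriptor -- a stand-in for
// something like D3D12_GRAPHICS_PIPELINE_STATE_DESC.
struct PipelineStateDesc {
    std::string vertexShader;   // stand-in for shader bytecode
    std::string pixelShader;
    uint32_t    blendMode;
    bool operator==(const PipelineStateDesc& o) const {
        return vertexShader == o.vertexShader &&
               pixelShader == o.pixelShader &&
               blendMode == o.blendMode;
    }
};

// Hash combining each field of the descriptor.
struct DescHash {
    size_t operator()(const PipelineStateDesc& d) const {
        size_t h = std::hash<std::string>{}(d.vertexShader);
        h ^= std::hash<std::string>{}(d.pixelShader) + 0x9e3779b9 + (h << 6) + (h >> 2);
        h ^= std::hash<uint32_t>{}(d.blendMode) + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
    }
};

// Because the full state is known up front, the expensive compile runs
// exactly once per unique state -- there is no "stall on first draw".
class PsoCache {
public:
    int compileCount = 0;  // exposed so the cost is observable
    uint64_t getOrCompile(const PipelineStateDesc& desc) {
        auto it = cache_.find(desc);
        if (it != cache_.end()) return it->second;
        ++compileCount;                 // stand-in for the slow driver compile
        uint64_t handle = nextHandle_++;
        cache_.emplace(desc, handle);
        return handle;
    }
private:
    std::unordered_map<PipelineStateDesc, uint64_t, DescHash> cache_;
    uint64_t nextHandle_ = 1;
};
```

In D3D11, by contrast, the driver could only discover the final state combination at draw time, which is exactly where the hitch landed.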
A rewrite of all the submission code, at least.
I'm ignorant when it comes to writing drivers, but this seems to me solvable with good tooling for prerecording such intents, debugging and diagnosing what triggered recompilation, and more dynamic behavior. All opt-in. Basically, standardize developer workarounds at the platform level.
We created a persistent cache in 2011 on iOS: pay for the first compilation on the first run of your app. We added separate shader objects to reduce the number of permutations (requested by Epic), and explained the recompilation rules. Still lots of hitches. PSO is a much better solution.
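A persistent first-run cache of that kind can be sketched as a disk-backed map from a hash of the shader source to the compiled blob. This is a hypothetical illustration (the class, the `compile` callback, and the on-disk layout are all invented for the example), assuming the compiled result depends only on the source being hashed:

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <iterator>
#include <string>

// Hypothetical persistent shader cache: the compile cost is paid once, on
// the first run, and the resulting blob is written to disk; later runs
// (or later requests for the same source) load the cached blob instead.
class PersistentShaderCache {
public:
    explicit PersistentShaderCache(std::filesystem::path dir) : dir_(std::move(dir)) {
        std::filesystem::create_directories(dir_);
    }

    // 'compile' stands in for the real (slow) driver compilation.
    std::string getOrCompile(const std::string& source,
                             const std::function<std::string(const std::string&)>& compile) {
        auto path = dir_ / (std::to_string(std::hash<std::string>{}(source)) + ".bin");
        if (std::ifstream in{path, std::ios::binary}) {       // cache hit: skip compile
            return {std::istreambuf_iterator<char>(in), {}};
        }
        std::string blob = compile(source);                   // first-run cost
        std::ofstream{path, std::ios::binary} << blob;
        return blob;
    }

private:
    std::filesystem::path dir_;
};
```

The catch the tweet points at: with D3D11-style late state binding, the key really needs to include every piece of state that can affect the final bytecode, and anything the cache missed still recompiled at draw time.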
Proposal: better tooling. Allow devs to record "traces" of used PSOs, then submit those together with the game to the app store. You could even precompile / optimize shaders per device server-side this way. No need to be explicit/verbose. (But FWIW, PSOs are not the biggest problem to me.)
Recording binary traces is what we did on Metal with binary archives and I believe Vulkan has a similar harvesting mechanism.