Here it is! Just released:
Enemies
A short film we’ve been working on for a while now on the Demo Team, here at Unity.
Some notes on rendering in the 🧵 http://youtu.be/eXYUNrgqWUU #unity #gamedev
Releasing the tool like this because it can already be useful to folks.
The idea, though, is to make it built-in, under a unified interface with the VFX Graph's offline SDF baker.
The pink character samples its SDF in the VFX Graph for spawning the particles and making them follow the surface, more or less.
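The thread doesn't show the graph itself, but the core operation — snapping a particle onto an SDF's zero isosurface by stepping along the field gradient — can be sketched in a few lines. A NumPy sketch with an illustrative sphere SDF (names are mine, not the package's):

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance from point p to a sphere surface."""
    return np.linalg.norm(p - center) - radius

def sdf_gradient(sdf, p, eps=1e-4):
    """Central-difference gradient of the SDF (points away from the surface)."""
    g = np.zeros(3)
    for i in range(3):
        offset = np.zeros(3)
        offset[i] = eps
        g[i] = (sdf(p + offset) - sdf(p - offset)) / (2 * eps)
    return g

def project_to_surface(sdf, p, steps=5):
    """Pull a particle onto the zero isosurface: move it by its signed
    distance along the negative gradient direction, a few times."""
    for _ in range(steps):
        d = sdf(p)
        g = sdf_gradient(sdf, p)
        n = g / (np.linalg.norm(g) + 1e-8)
        p = p - d * n
    return p
```

In the VFX Graph the same idea is expressed with the built-in SDF sample/gradient operators per particle, per frame.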
Adam (remember that old demo of ours? 😄) is surrounded by some fun raymarched SDF sphere things, cutting away his soft-blended SDF shape.
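The soft blending and carving described here read like the classic smooth-min and subtraction SDF operators. A minimal sketch of those two (illustrative, not the demo's actual shader; `k` is the blend radius):

```python
def smooth_union(d1, d2, k=0.25):
    """Polynomial smooth-min (after Inigo Quilez): blends two SDFs
    with a soft transition of radius k instead of a hard crease."""
    h = max(0.0, min(1.0, 0.5 + 0.5 * (d2 - d1) / k))
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

def subtract(d_cut, d_base):
    """Carve the shape d_cut out of d_base (hard boolean subtraction)."""
    return max(-d_cut, d_base)
```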
I’ve also published a project with examples of how to use the output SDF for visual effects.
That’s the video from the top there: dynamic SDFs being used by two VFX Graph effects, and a raymarching shader.
https://github.com/robcupisz/mesh-to-sdf-examples…
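For context, the raymarching shader mentioned above boils down to sphere tracing: step along the ray by the distance the SDF reports, until you land on the surface. A CPU sketch (illustrative; the real thing runs in a shader):

```python
import numpy as np

def raymarch(sdf, origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: the SDF value is a safe step size, so march the ray
    by it until the distance falls under eps (hit) or we run too far (miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss
```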
I initially wrote it to animate Boston in its owl and golem forms in The Heretic demo.
It would generate an SDF for the owl and golem (and Gawain inside it) skinned meshes, and the wires would crawl over the combined dynamic character SDFs and static environment SDFs.
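Combining dynamic character SDFs with static environment SDFs is just a pointwise union — the minimum of the individual distances. A toy sketch with stand-in fields (a sphere for the character, a ground plane for the environment; names are hypothetical):

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance to a sphere: stand-in for a dynamic character SDF."""
    return np.linalg.norm(p - center) - radius

def plane_sdf(p, height=0.0):
    """Signed distance to a horizontal ground plane at y = height."""
    return p[1] - height

def combined_sdf(p):
    """Union of all fields: the distance to the nearest of the surfaces,
    which is all the crawling wires need to query."""
    return min(sphere_sdf(p, np.array([0.0, 1.0, 0.0]), 0.5),
               plane_sdf(p))
```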
I’ve just released mesh-to-sdf for Unity!
A light and fast real-time SDF generator, primarily for animated characters.
The dynamic SDF output can be used for all sorts of VFX 💥 It also enables hair-to-character collisions in the new hair package.
https://github.com/Unity-Technologies/com.unity.demoteam.mesh-to-sdf…
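The package generates its SDF on the GPU; as a reference for what an SDF generator computes per voxel, here is the textbook brute-force CPU version — unsigned distance from a point to a triangle soup via closest-point-on-triangle (Ericson's formulation; sign determination omitted):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point on triangle abc to point p
    (Ericson, Real-Time Collision Detection, 5.1.5)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + ab * (d1 / (d1 - d3))           # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + ac * (d2 / (d2 - d6))           # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # face interior

def unsigned_distance(p, triangles):
    """Brute-force unsigned distance from p to a triangle soup."""
    return min(np.linalg.norm(p - closest_point_on_triangle(p, *tri))
               for tri in triangles)
```

A real-time generator evaluates something equivalent for every voxel of a grid each frame, which is why it lives in a compute shader rather than a loop like this.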
Unity hair! Real-time sim and rendering.
Wide range of realistic hair types and styles. Fur! Alembic groom import and procedural grooms.
Clustering/LODs for scalability.
Requirements:
- Windows 10/11, DirectX 12
- a high-end GPU for Ultra/High settings, or a mid-range discrete GPU for Medium/Low settings
- 16 GB RAM, 8 GB VRAM
- 1 GB download
Here's a breakdown of our work for Enemies, the real-time cinematic demo we've had the pleasure of working on with the amazingly talented Unity Demo Team! Narrated by @matt_ostertag. #unity3d
Of the Unity demo projects, Book of the Dead was the last to start out on the Built-in Render Pipeline before switching to HDRP. This is one of the very early tests, from 2016.
We need a better verb for referring to using AI art tools.
“I made art with” is not it, my dude.
“I commissioned art via” is better, but suggests a symbiotic relationship with the artists the ML trained on, while in actuality being parasitic.
Happy to announce that the strand-based hair system developed alongside and featured in Unity's 'Enemies' is now available via GitHub. :) Been working on this system for a while now -- looking forward to see what people will make with it: https://github.com/Unity-Technologies/com.unity.demoteam.hair…
And I would also add that on the diffuse shading side the Modified Lambertian model is a must ...
https://twitter.com/inciprocal/status/1166168178759630848…
Most real-world surfaces that exhibit a fair amount of sub-surface scattering are non-Lambertian. A simple dot(N, L) doesn't fit.
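The thread doesn't spell out which modified-Lambert variant is meant; one common cheap stand-in for subsurface-heavy materials is wrap lighting, which shifts the terminator so light bleeds past 90° (illustrative sketch):

```python
def wrap_diffuse(n_dot_l, w=0.5):
    """Wrap lighting: remaps dot(N, L) so the lit region wraps past the
    geometric terminator, faking shallow subsurface scattering.
    w = 0 reduces to plain Lambert; larger w softens the falloff."""
    return max(0.0, (n_dot_l + w) / (1.0 + w))
```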
Working on @unity Demo Team's latest demo, Enemies, was a blast. On our end it's the result of many years of advancing photometric processing to enable photo-realistic rendering in real time. On the left is Unity HDRP vs one of the real photos under the same lighting conditions. (1/4)
#WhatAGameDevLooksLike ?
Here’s one!
A rendering programmer and tech artist, really enjoying working with light and organic patterns emerging from elegant rule sets. ^_^
Wears shirts only when absolutely necessary, otherwise it’s climbing shorts or wetsuits.
Smoke and flame.
Transparencies don't play nice with DoF in a rasterization renderer, and here we have close-ups of in-focus smoke/flame over an out-of-focus face, and vice versa.
To get around that, we're rendering them into an off-screen buffer, but still depth-testing against the main depth buffer, to get occlusion from the bishop and the hand.
Then a blur with a simplified DoF kernel with a fixed CoC size, and a composite with the main colour buffer - after the regular DoF (and motion blur) pass.
Easy with HDRP's custom pass and custom post-process.
Improvements to skin shading, hair shading (Marschner), eye caustics, tearline (the wet interface between the eyelids and the eye).
Most of these are already shipping in HDRP. The remaining digital human features mentioned earlier ship in Q2 as an updated Digital Human package.
Skin attachments.
Eyebrows, eyelashes, tearline, peach fuzz/vellus hair(!), etc. need to follow skin deformation.
We developed a system for handling that via jobs for The Heretic; it has now moved to the GPU.
Some attachments are still resolved on the CPU, like the teeth occlusion polygon.
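A common way to implement such attachments (not necessarily this package's exact scheme) is to bind each attachment point to a skin triangle via barycentric coordinates plus a normal offset, then re-evaluate against the deformed triangle each frame:

```python
import numpy as np

def bind_attachment(point, tri):
    """Store (barycentric coords, normal offset) of point w.r.t. triangle tri."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    offset = (point - a) @ n          # height above the triangle plane
    p = point - offset * n            # projection onto the plane
    # barycentric coordinates of the projected point
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w), offset

def resolve_attachment(binding, tri):
    """Re-evaluate the stored binding against the deformed triangle."""
    (u, v, w), offset = binding
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return u * a + v * b + w * c + offset * n
```

Binding happens once against the rest pose; resolving is a cheap per-frame evaluation, which is why it maps well to jobs and, later, the GPU.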
Skin tension.
It calculates edge lengths relative to the rest pose to identify stretching and contraction, which allows us to drive wrinkle and blood-flow maps.
In The Heretic we needed a facial rig just for that. This is much more convenient.
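The tension measure described above can be sketched directly (illustrative NumPy; the production version runs per-vertex over a full face mesh, not a toy edge list):

```python
import numpy as np

def vertex_tension(positions, rest_positions, edges):
    """Average relative edge-length change per vertex:
    > 0 means stretching, < 0 means contraction — a signal that can
    drive wrinkle and blood-flow maps."""
    tension = np.zeros(len(positions))
    counts = np.zeros(len(positions))
    for i, j in edges:
        rest = np.linalg.norm(rest_positions[i] - rest_positions[j])
        cur = np.linalg.norm(positions[i] - positions[j])
        t = (cur - rest) / rest
        tension[i] += t
        tension[j] += t
        counts[i] += 1
        counts[j] += 1
    return tension / np.maximum(counts, 1)
```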
Ziva allows a full-body performance while the facial performance is recorded with an HMC - in our old workflow, matching 4D with the body is hard, and the actor performs differently with their head fixed.
It is also effectively an efficient compression, allows for interactivity, etc.
Facial animation is still a 4D capture. We've improved our workflow and quality, but 4D would still be impractical for any larger production.
The future of our productions is Ziva, where we only train the system with 4D, but the final performance is captured with just an HMC.