rob cupisz
@robcupisz
Tech Lead at Unity Demo Team. Rendering, lighting, VFX. Enemies, The Heretic, Adam
Copenhagen · instagram.com/robcupisz · Joined May 2009

rob cupisz’s Tweets

The pink character samples its SDF in the VFX Graph for spawning the particles and making them follow the surface, more or less. Adam (remember that old demo of ours? 😄) is surrounded by some fun raymarched SDF sphere things, cutting away his soft-blended SDF shape.
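The soft blending and cutting described above can be sketched with the standard distance-field operators. A minimal Python sketch, assuming Quilez-style polynomial smooth minimum; names and constants are illustrative, not from the demo:

```python
import math

def sd_sphere(p, center, radius):
    """Signed distance from point p to a sphere."""
    return math.dist(p, center) - radius

def smin(a, b, k):
    """Polynomial smooth minimum: soft-blends two distance fields
    over a blend region of width k."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def cut(d_shape, d_cutter):
    """Boolean subtraction: carve the cutter's volume out of the shape."""
    return max(d_shape, -d_cutter)
```

Particles can then be spawned near the zero level set of the combined field and pushed along its gradient to follow the surface.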
I initially wrote it to animate Boston in its owl and golem form in The Heretic demo. It would generate an SDF for the owl and golem (and Gawain inside it) skinned meshes, and the wires would crawl over the combined dynamic character SDFs and static environment SDFs.
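"Crawling over" a distance field can be approximated by repeatedly projecting each wire point back onto the zero level set along the field's gradient. A hypothetical sketch with a finite-difference normal; the actual system is surely more involved:

```python
import math

def sdf(p):
    """Stand-in field: unit sphere at the origin."""
    return math.hypot(*p) - 1.0

def normal(f, p, eps=1e-4):
    """Normalized finite-difference gradient of the field at p."""
    x, y, z = p
    g = (f((x + eps, y, z)) - f((x - eps, y, z)),
         f((x, y + eps, z)) - f((x, y - eps, z)),
         f((x, y, z + eps)) - f((x, y, z - eps)))
    length = math.hypot(*g) or 1.0
    return tuple(c / length for c in g)

def stick_to_surface(f, p, iters=3):
    """Project p onto the surface: step back along the normal
    by the signed distance, a few times for stability."""
    for _ in range(iters):
        d = f(p)
        n = normal(f, p)
        p = tuple(c - d * nc for c, nc in zip(p, n))
    return p
```

Combined dynamic and static fields would just be another `sdf` built from `min`/`smin` of the per-object fields.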
We need a better verb for referring to using AI art tools. “I made art with” is not it, my dude. “I commissioned art via” is better, but suggests a symbiotic relationship with the artists the ML trained on, while in actuality being parasitic.
Available now! 🎉
Quote Tweet
Happy to announce that the strand-based hair system developed alongside and featured in Unity's 'Enemies' is now available via GitHub. :) Been working on this system for a while now -- looking forward to seeing what people will make with it: github.com/Unity-Technolo
The hair sim system Lasse developed for Enemies is out!
And I would also add that on the diffuse shading side the Modified Lambertian model is a must ... twitter.com/inciprocal/sta Most real-world surfaces that exhibit a fair amount of sub-surface scattering are non-Lambertian. A simple dot(N, L) doesn't fit.
Quote Tweet
Working on the latest @unity's Demo team Enemies demo was a blast. On our end a result of many years of advancing photometric processing to enable photo-realistic rendering in real-time. On the left is Unity HDRP vs one of the real photos under the same lighting condition. (1/4)
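The "Modified Lambertian" being referenced is presumably something in the spirit of wrap lighting, which pushes the terminator past 90 degrees to fake light bleeding through scattering surfaces. One common form, shown here as an illustration rather than the exact model from the thread:

```python
def wrap_diffuse(n_dot_l, wrap):
    """Wrap diffuse term. wrap=0 reduces to plain Lambert max(0, N.L);
    larger wrap values let light reach further around the object,
    mimicking sub-surface scattering cheaply."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))
```

Squared "half-Lambert"-style variants and energy-normalized versions of the same idea also exist; which one counts as the "must" here is the quoted author's call.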
#WhatAGameDevLooksLike? Here’s one! A rendering programmer and tech artist, really enjoying working with light and organic patterns emerging from elegant rule sets. ^_^ Wears shirts only when absolutely necessary, otherwise it’s climbing shorts or wetsuits.
Smoke and flame. Transparencies don't play nice with DoF in a rasterization renderer, and here we have close-ups of in-focus smoke/flame over out-of-focus face, and vice versa. To get around that, we're rendering them into an off-screen buffer, but still depth-test against the main depth buffer, to get occlusion from the bishop and the hand. Then a blur with a simplified DoF kernel with a fixed CoC size, and composite with the main colour buffer - after the regular DoF (and moblur) pass. Easy with HDRP's custom pass and custom post-process.
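The pass order described above can be sketched end-to-end in a toy 1D form. Pure Python, premultiplied alpha, box kernel; buffer names and the kernel are simplifications of what HDRP's custom pass would actually do:

```python
def draw_transparents(particles, main_depth, width):
    """Rasterize smoke/flame into an off-screen RGBA buffer, depth-testing
    each sample against the MAIN depth buffer so opaques still occlude.
    Assumes back-to-front submission for the 'over' blend."""
    rgba = [[0.0, 0.0, 0.0, 0.0] for _ in range(width)]
    for x, depth, (r, g, b), alpha in particles:
        if depth < main_depth[x]:  # in front of the opaque geometry
            dst = rgba[x]          # premultiplied "over" blend
            rgba[x] = [r * alpha + dst[0] * (1 - alpha),
                       g * alpha + dst[1] * (1 - alpha),
                       b * alpha + dst[2] * (1 - alpha),
                       alpha + dst[3] * (1 - alpha)]
    return rgba

def blur_fixed_coc(rgba, radius):
    """Simplified DoF: box blur with a fixed circle-of-confusion size."""
    w, taps = len(rgba), 2 * radius + 1
    return [[sum(rgba[min(w - 1, max(0, x + o))][c]
                 for o in range(-radius, radius + 1)) / taps
             for c in range(4)]
            for x in range(w)]

def composite(main_color, blurred):
    """Blend the blurred transparents over the main colour buffer,
    after its own regular DoF/motion-blur passes have already run."""
    return [(tr + r * (1 - ta), tg + g * (1 - ta), tb + b * (1 - ta))
            for (r, g, b), (tr, tg, tb, ta) in zip(main_color, blurred)]
```

The key point the tweet makes survives even in this toy: the depth test uses the opaque scene's depth, but the blur and composite happen in a separate buffer with its own (fixed) CoC.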
Improvements to skin shading, hair shading (Marschner), eye caustics, tearline (the wet interface between the eyelids and the eye). Most of these already shipping in HDRP. Remaining digital human features mentioned earlier shipping in Q2 as an updated Digital Human package.
Skin attachments. Eyebrows, eyelashes, tearline, peach fuzz/vellus hair(!), etc. need to follow skin deformation. We've developed a system for handling that through jobs for The Heretic, now moved to the GPU. Some attachments are still resolved on the CPU, like the teeth occlusion polygon.
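A skin attachment can be sketched as a position stored relative to a triangle of the skinned mesh at bind time - barycentric coordinates plus a signed offset along the triangle normal - and re-resolved against the deformed triangle every frame. A toy Python version; the shipped jobs/GPU system is of course far more involved:

```python
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def bind_attachment(p, a, b, c):
    """Store p relative to triangle (a, b, c): barycentrics of its projection
    onto the triangle plane, plus a signed distance along the unit normal."""
    n = cross(sub(b, a), sub(c, a))
    inv_len = 1.0 / dot(n, n) ** 0.5
    offset = dot(sub(p, a), n) * inv_len
    q = tuple(pc - nc * inv_len * offset for pc, nc in zip(p, n))
    v0, v1, v2 = sub(b, a), sub(c, a), sub(q, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w, offset)

def resolve_attachment(binding, a, b, c):
    """Re-evaluate the stored binding against the deformed triangle."""
    u, v, w, offset = binding
    n = cross(sub(b, a), sub(c, a))
    inv_len = 1.0 / dot(n, n) ** 0.5
    base = tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))
    return tuple(bc + nc * inv_len * offset for bc, nc in zip(base, n))
```

Binding is done once against the rest pose; resolving is cheap per frame, which is what makes it a good fit for jobs or a compute shader.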
Facial animation is still a 4D capture. We've improved our workflow and quality, but 4D would still be impractical for any larger production. The future of our productions is Ziva, where we only train the system with 4D, but the final performance is captured even with an HMC.

Ziva allows for full-body performance while recording facial performance with an HMC - in our old workflow matching 4D with the body is hard, and the actor performs differently with their head fixed. It is also effectively an efficient compression, allows for interactivity, etc.