The future is now: Real-time immersive latent space. 🔥
The engine sends clips to be diffused. The new content is imported onto the spherical projection when queued.
Tools used:
deforum.github.io
derivative.ca
#aiart #vr #stablediffusion #touchdesigner #deforum
Replying to
Can you get stereoscopy in there - simulated binocular disparity using depth maps?
Replying to
That would be next level, but doable. Depth maps are easy to pull - but the trick is applying them in a meaningful way.
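The disparity trick hinted at above can be sketched as a simple per-pixel horizontal shift driven by the depth map: nearer pixels shift more between the two eye views. This is a minimal NumPy sketch, not the author's actual setup; `max_disparity` and the depth convention (1.0 = near) are assumptions.

```python
import numpy as np

def stereo_pair(image, depth, max_disparity=16):
    """Simulate binocular disparity from a single frame plus its depth map.

    image: (H, W, 3) uint8 frame
    depth: (H, W) floats in [0, 1], where 1.0 = near (assumed convention)
    max_disparity: pixel shift for the nearest surfaces (tunable assumption)
    """
    h, w = depth.shape
    # Near pixels shift more than far ones, mimicking binocular parallax.
    shift = (depth * max_disparity).astype(int)
    cols = np.arange(w)
    left = np.empty_like(image)
    right = np.empty_like(image)
    for y in range(h):
        # Clip shifted coordinates so sampling stays inside the frame.
        left[y] = image[y, np.clip(cols + shift[y], 0, w - 1)]
        right[y] = image[y, np.clip(cols - shift[y], 0, w - 1)]
    return left, right
```

Rendering the two shifted views to the left/right halves of an over-under or side-by-side spherical texture would then give the headset its stereo signal.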
Replying to
Real time voice to image, or thought to image immersive space next please.
Replying to
VoiceAttack is software that can listen for voice commands and execute functions/keystrokes.
Your idea is brilliant!
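The command-to-action mapping that VoiceAttack provides could feed the diffusion side like this. A minimal sketch under assumptions: the speech recognizer (VoiceAttack or any speech-to-text engine) is external, and the phrases and prompts below are invented examples.

```python
# Hypothetical phrase -> prompt table; a recognized voice command
# swaps in the prompt the diffusion queue should render next.
COMMANDS = {
    "make it stormy": "dark storm clouds, lightning, dramatic sky",
    "go underwater": "sunlit coral reef, caustics, deep blue water",
}

def handle_command(phrase, current_prompt):
    """Return the next diffusion prompt for a recognized phrase.

    Unrecognized phrases leave the current prompt untouched.
    """
    return COMMANDS.get(phrase.lower().strip(), current_prompt)
```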
Replying to
Lovely work! Are these stitched together, or did you directly render out a bunch of 360 photos? How did you manage the workflow and settings in the Deforum notebook? For the end, I assume you put it into a sequence on a spherical texture in TD, right? I’d love to experiment with it 😌
Replying to
It's a back and forth setup between diffusion and touchdesigner. Small clips and pieces are sent to be diffused in the background, and then they are composited back into the projection in TD as a queue system. It saves time from having to re-render the whole sphere.
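The queue system described above can be sketched with Python's standard `queue` and `threading` modules: clips go out to a background worker for diffusion, and the main render loop drains whatever has finished each frame. The `diffuse`/`drain` names are hypothetical stand-ins, not the author's API; in practice `diffuse` would call the Stable Diffusion backend and the drained results would be composited onto the sphere texture in TD.

```python
import queue
import threading

def diffuse(clip):
    # Stand-in for the slow Stable Diffusion call.
    return f"diffused({clip})"

class DiffusionQueue:
    """Send small clips off to be diffused in the background and collect
    finished pieces for compositing, so the whole spherical projection
    never has to be re-rendered at once."""

    def __init__(self):
        self.pending = queue.Queue()
        self.done = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            clip = self.pending.get()
            self.done.put(diffuse(clip))  # slow step runs off the main thread
            self.pending.task_done()

    def submit(self, clip):
        self.pending.put(clip)

    def drain(self):
        """Called each frame: return whatever has finished diffusing so far."""
        finished = []
        while not self.done.empty():
            finished.append(self.done.get())
        return finished
```

Each frame, the render loop calls `drain()` and composites the returned patches back into the projection, while `submit()` keeps feeding new clips out.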
Replying to
Any chance of getting the hardware specs you're using? I'm sure there's already a line forming to replicate your work. Kick-ass, man!