Happy to announce DreamFusion, our new method for Text-to-3D!
dreamfusion3d.github.io
We optimize a NeRF from scratch using a pretrained text-to-image diffusion model. No 3D data needed!
Joint work w/ an incredible team!
#dreamfusion
DreamFusion generates 3D models from diverse text prompts. Check out our gallery of hundreds of 3D models:
dreamfusion3d.github.io/gallery.html
We build on Dream Fields, replacing its CLIP loss with a new loss computed from the Imagen text-to-image diffusion model (imagen.research.google):
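The thread doesn't spell out the loss, so here is a minimal sketch of a score-distillation-style update in the spirit of the paper. `predict_noise` is a hypothetical stand-in for the pretrained text-conditioned denoiser, and the noise-schedule handling is simplified:

```python
import numpy as np

def sds_gradient(rendered_image, predict_noise, t, alpha_bar, rng):
    """Score-distillation-style gradient w.r.t. a rendered image (sketch).

    predict_noise(noisy_image, t) stands in for a pretrained diffusion
    model's noise predictor (hypothetical API); alpha_bar is the noise
    schedule's cumulative product at timestep t.
    """
    eps = rng.standard_normal(rendered_image.shape)  # sample Gaussian noise
    # forward-diffuse the rendering to noise level t
    noisy = np.sqrt(alpha_bar) * rendered_image + np.sqrt(1.0 - alpha_bar) * eps
    eps_hat = predict_noise(noisy, t)                # denoiser's noise estimate
    # residual between predicted and injected noise, weighted by the schedule;
    # the denoiser's Jacobian is deliberately omitted, as in distillation-style losses
    w = 1.0 - alpha_bar
    return w * (eps_hat - eps)
```

A denoiser that perfectly recovers the injected noise yields a zero gradient, i.e. the rendering already looks like a sample to the diffusion model.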
The 3D model we generate is an improved NeRF that produces a 3D volume with density, color, and surface normals:
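For readers new to NeRF, a minimal sketch of the standard volume-rendering quadrature that turns per-sample density and color along a ray into a pixel (illustrative only, not DreamFusion's exact renderer):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Alpha-composite density/color samples along one ray.

    densities: (N,) nonnegative volume densities
    colors:    (N, 3) per-sample RGB
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)  # per-sample opacity
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors  # composited (3,) RGB
```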
DreamFusion represents appearance as a material color, which can be combined with normals for rendering under different lighting conditions:
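A minimal diffuse-shading sketch of how a material color and a surface normal can be combined under a movable light; the light parameters here are made up for illustration:

```python
import numpy as np

def shade_lambertian(albedo, normal, light_dir, light_color, ambient):
    """Diffuse (Lambertian) shading of a surface point (sketch)."""
    n = normal / np.linalg.norm(normal)        # unit surface normal
    l = light_dir / np.linalg.norm(light_dir)  # unit direction toward light
    diffuse = max(0.0, float(n @ l))           # clamp back-facing light to 0
    return albedo * (ambient + light_color * diffuse)
```

Relighting is then just calling the same function with a different `light_dir`, which is why exposing material color and normals (rather than baked-in radiance) matters.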
We can even take several 3D models generated by DreamFusion and compose them into new scenes:
Check out the paper for more details, including a distillation-based loss function that could enable many new applications of pretrained diffusion models:
This was an incredibly fun team effort w/ NeRF wizards and a NeRF + diffusion expert (graduating this year!).
We're excited to combine our methods with open source models and enable a new future for 3D generation! 🚀
#dreamfusion




