Our first batch of four diffusion models, along with super basic CLIP guided inference code, is out! github.com/crowsonkb/v-di The models are:
Danbooru SFW 128x128
ImageNet 128x128
WikiArt 128x128
WikiArt 256x256
No Colab yet but soon!
Replying to
Question: on line 20 of client_sample you call clip_jax, but I don't see anything about it in the requirements. What is it?
Replying to
It isn't a package on PyPI, so I can't put it in requirements.txt, but it is a submodule of my repo; if you didn't clone with --recursive, you can clone it manually from github.com/kingoflolz/CLI.
Replying to
WHOOOOAAAA. Monet's series of paintings of the Rouen Cathedral is a favourite of mine for capturing the light at different times of day across ~30 works. I love seeing the theme and variations for each of yours in one frame. Outstanding!
Replying to
Thanks for this! I assume higher-resolution final output can use ESRGAN or Gigapixel, or is the final resolution independent of the trained resolutions?
Replying to
So with CLIP guided inference, is the output of CLIP differentiated with respect to the parameters of the diffusion model? Sort of combining training and inference in the same step?
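(As a generic illustration of CLIP-guided sampling, not necessarily this repo's implementation: the CLIP loss is usually differentiated with respect to the current noisy sample, not the model's parameters, and the weights of both CLIP and the diffusion model stay frozen, so it remains pure inference. The minimal sketch below shows that pattern; clip_loss and denoise are hypothetical stand-ins, and guidance_scale is an arbitrary made-up value.)

# Minimal, generic sketch of one CLIP-guided sampling step (illustration only,
# not this repo's code). clip_loss and denoise are hypothetical stand-ins for a
# CLIP similarity loss and a frozen pretrained denoiser.
import jax
import jax.numpy as jnp

def clip_loss(image, text_embed):
    # Stand-in: a real version would embed `image` with CLIP and return a
    # (negative) similarity to `text_embed`. Dummy scalar here.
    return jnp.mean((image - jnp.mean(text_embed)) ** 2)

def denoise(params, x, t):
    # Stand-in for the frozen diffusion model's output at timestep t.
    return x * (1.0 - t)

def guided_step(params, x, t, text_embed, guidance_scale=500.0):
    # The CLIP loss is differentiated w.r.t. the current sample x, not w.r.t.
    # params: the weights stay frozen, so this is inference, not training.
    grad_x = jax.grad(lambda x_: clip_loss(denoise(params, x_, t), text_embed))(x)
    # Nudge the sample toward lower CLIP loss; the regular diffusion update
    # would then proceed as usual (details vary between samplers).
    return x - guidance_scale * grad_x

# Toy usage with random inputs, just to show what gets passed around.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (3, 64, 64))
x = guided_step(params=None, x=x, t=jnp.float32(0.5), text_embed=jnp.ones((512,)))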








