Image Generation iOS app with Stable Diffusion v2🍀
I just put the project on GitHub.
- Xcode 14.1, macOS 13.1, iOS/iPadOS 16.2
- iPhone 12+, iPad Pro M1/M2, Mac M1/M2
The project doesn't include the SD2 Core ML models. See the README.
GitHub👉 github.com/ynagatomo/ImgG
(cont)
Since the project on GitHub doesn't contain the Core ML models, here is a brief explanation of how to convert the SD2 models.🍀
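For reference, the conversion can be sketched with Apple's ml-stable-diffusion converter. The flag names below follow that repo's README; the model version and output path are assumptions, and you must accept the model's terms on Hugging Face (e.g. via `huggingface-cli login`) before downloading:

```shell
# Sketch: convert Stable Diffusion v2 to Core ML with Apple's
# ml-stable-diffusion package (pip install -e . in that repo).
# --chunk-unet and SPLIT_EINSUM target the Neural Engine on iOS.
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version stabilityai/stable-diffusion-2-base \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --convert-safety-checker \
  --chunk-unet \
  --attention-implementation SPLIT_EINSUM \
  -o ./models   # output directory is an assumption
```

The resulting `.mlpackage`/compiled resources are what the app loads at runtime.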
(cont)
The app built for iPad on macOS (Designed for iPad) generates one image in 15 sec on a MacBook Air (M1, 8 GB memory) — about 0.75 sec/step.
- Chunked Unet model, SPLIT_EINSUM (for Neural Engine)
(cont)
SD v2 can generate good images with fewer steps than v1.4/v1.5.
On a MacBook Air (M1, 8 GB memory):
- 5 steps: 4 sec
- 10 steps: 8 sec
- 15 steps: 12 sec
- 20 steps: 16 sec
Why doesn't the project include the converted models? Also, how big are they?
They total around 2.5 GB, and GitHub limits file size to 100 MB. It's also related to each user having to accept the Terms of Use of the Hugging Face model. :)
Updated the iOS app - Image Gen with Stable Diffusion v2🍀
- Xcode 14.2 RC, macOS 13.1 RC, iOS/iPadOS 16.2 RC
- changed to defer creating the StableDiffusionPipeline until the first image generation; it now runs in a background Task
check GitHub 👉github.com/ynagatomo/ImgG
(cont) In exchange for not blocking the main thread, the first image generation takes longer. From the second image onward, it's the same as before.🍀
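The deferred-creation change above can be sketched roughly as follows. This is a minimal sketch, assuming the `StableDiffusionPipeline` API from Apple's ml-stable-diffusion Swift package; parameter names and the resource-folder name (`CoreMLModels`) are assumptions and may differ between package versions:

```swift
import CoreML
import StableDiffusion  // Apple's ml-stable-diffusion Swift package

// Sketch: create the pipeline lazily, on first generation, inside an
// actor so the (expensive) model loading never blocks the main thread.
actor ImageGenerator {
    private var pipeline: StableDiffusionPipeline?

    func generate(prompt: String, steps: Int) async throws -> CGImage? {
        if pipeline == nil {
            // First call: load the Core ML resources in the background.
            let resourceURL = Bundle.main.url(forResource: "CoreMLModels",
                                              withExtension: nil)!  // assumed folder name
            let config = MLModelConfiguration()
            config.computeUnits = .cpuAndNeuralEngine  // suits SPLIT_EINSUM models
            pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL,
                                                   configuration: config)
        }
        // Subsequent calls reuse the cached pipeline, so only the
        // first generation pays the loading cost.
        let images = try pipeline!.generateImages(prompt: prompt,
                                                  stepCount: steps)
        return images.compactMap { $0 }.first
    }
}
```

From the UI, this would be called as `Task { try await generator.generate(prompt: prompt, steps: 20) }`, keeping SwiftUI responsive during both loading and generation.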
Oh sorry about that. If you like, please raise an issue on GitHub about your situation or DM me.🙏