iOS app using Stable Diffusion v2🍀
- macOS 13, Xcode 14.1, iOS 16.2/iPadOS 16.2
- iPhone 12+ / iPad Pro M1/M2
The CoreML model files are too large to put the whole project on GitHub, so I put only the Swift code on a Gist.👉
gist.github.com/ynagatomo/fd87
You need to prepare the CoreML model files yourself, but the code on Gist should give you an overview of Apple/ml-stable-diffusion APIs.🍀
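As a rough illustration of those APIs, here is a minimal sketch of loading the converted models and generating one image with the apple/ml-stable-diffusion Swift package. It assumes the package is added to the project and `resourceURL` points at a folder of compiled `.mlmodelc` resources (a hypothetical path here); parameter names may differ slightly between package versions.

```swift
import CoreML
import StableDiffusion  // apple/ml-stable-diffusion Swift package

// Hypothetical location of the converted SD2 Core ML resources.
let resourceURL = URL(fileURLWithPath: "/path/to/sd2-coreml-resources")

// Prefer CPU + Neural Engine on A14/M1-class chips and later.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// Load the pipeline and generate a single image.
let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL,
                                           configuration: config)
let images = try pipeline.generateImages(prompt: "a photo of an astronaut",
                                         stepCount: 20,
                                         seed: 42)
// `images` is an array of CGImage?; entries flagged by the safety
// checker come back as nil.
```

This is a sketch of the API shape, not a drop-in file: on device you would load `resourceURL` from the app bundle or Documents directory rather than a hard-coded path.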
According to Apple, the M1 iPad Pro generates one image in about 29 seconds.
Distillation of the Stable Diffusion v2 model is also expected to shorten generation time; I'm looking forward to it.🍀
I don't have any compatible devices, but on an iPad Pro 2020 (A12Z) it took 100 seconds to generate an image with a step count of 20. The iPad is not a supported device, but it happened to run. Generation was executed in a background task with high priority.
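The "background task with high priority" pattern above can be sketched with plain Swift concurrency: wrap the long-running generation call in a `Task(priority: .high)` so it runs off the main thread without being deprioritized. The `generateInBackground` function and its string result are stand-ins for the real pipeline call, not code from the app.

```swift
import Foundation

// Hypothetical wrapper: run a long-running (placeholder) generation
// function in a high-priority background task so the UI stays responsive.
func generateInBackground(prompt: String) async -> String {
    let task = Task(priority: .high) { () -> String in
        // Stand-in for pipeline.generateImages(...), which may take
        // tens of seconds on older hardware.
        return "generated image for: \(prompt)"
    }
    return await task.value
}
```

In the real app the closure would call the Stable Diffusion pipeline and return images; `.high` keeps throughput up while still yielding the main actor to SwiftUI.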
What are the benefits of having it on the phone? Why not have an endpoint and call Stable Diffusion in the cloud?
The project is on GitHub.🍀
Quote Tweet
Image Generation iOS app with Stable Diffusion v2
I just put the project on GitHub:
- Xcode 14.1, macOS 13.1, iOS/iPadOS 16.2
- iPhone 12+, iPad Pro M1/M2, Mac M1/M2
The project doesn't include the SD2 CoreML models; see the README.
github.com/ynagatomo/ImgG
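Since the repo ships without the Core ML models, they have to be produced with the conversion script from the apple/ml-stable-diffusion repo. This is a sketch of one plausible invocation, assuming that repo and its Python dependencies are installed; exact flags and the output layout may differ by package version, and the download/conversion is large and slow.

```shell
# Sketch: convert Stable Diffusion v2 (base) weights to Core ML using
# Apple's conversion script from the apple/ml-stable-diffusion repo.
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version stabilityai/stable-diffusion-2-base \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --bundle-resources-for-swift-cli \
  -o ./models
```

The bundled resources folder it produces is what the Swift pipeline loads at runtime.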