People are finally starting to notice this.
Liu Liu
@liuliu
Maintains libnnc.org / libccv.org / dflat.io. Wrote the Snapchat iOS app (2014-2020). Co-founded Facebook Videos (2013). Sometimes writes at liuliu.me
Liu Liu’s Tweets
Haven't done this for a while, just cut 0.6.0 for Dflat: github.com/liuliu/dflat/r I've been using it, so there are quite a few ergonomic fixes.
Devised some cool hacks to support custom weights with the CoreML-compiled Stable Diffusion model and learned more about MIL!
Quote Tweet
1.20230114.2 is out! This is an "in-between" version with some bugfixes and optional CoreML-based rendering. You don't need to download new models to enable CoreML, and any SD v1.x custom model can benefit from it. But there are limitations, so it is not the default yet:
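(For context on what CoreML-based rendering involves at the code level, here is a minimal Swift sketch of loading a compiled Core ML model with an explicit compute-unit setting. The file path and model split are placeholders, not the app's actual layout, and the MIL-level custom-weight patching mentioned above is not shown.)

    import Foundation
    import CoreML

    // Placeholder path to a compiled Core ML Stable Diffusion UNet (.mlmodelc);
    // the actual file layout the app uses is an assumption here.
    let unetURL = URL(fileURLWithPath: "/path/to/UNet.mlmodelc")

    let config = MLModelConfiguration()
    // Prefer the Neural Engine when available, fall back to CPU otherwise.
    config.computeUnits = .cpuAndNeuralEngine

    do {
        let unet = try MLModel(contentsOf: unetURL, configuration: config)
        print("Loaded UNet, inputs:", Array(unet.modelDescription.inputDescriptionsByName.keys))
    } catch {
        print("Failed to load compiled Core ML model:", error)
    }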
Model mixing with the inpainting model is very versatile! Remember the "style prompt + image" trick for style transfer I shared a few days ago? It can be supercharged with model mixing! Here is how:
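(The thread itself is not captured here. As a general illustration of what "model mixing" usually means in weight space, this Swift sketch linearly interpolates two checkpoints that share parameter names and shapes; the flat [String: [Float]] representation is an assumption for brevity, and the app's actual mixing procedure may differ.)

    // Weight-space model mixing: linearly interpolate two checkpoints that
    // share the same architecture. ratio = 0 keeps model A, ratio = 1 keeps model B.
    func mixWeights(_ a: [String: [Float]],
                    _ b: [String: [Float]],
                    ratio: Float) -> [String: [Float]] {
        var mixed: [String: [Float]] = [:]
        for (name, wa) in a {
            guard let wb = b[name], wb.count == wa.count else {
                // Layers that exist in only one model (e.g. the inpainting model's
                // extra input channels) are carried over from model A unchanged.
                mixed[name] = wa
                continue
            }
            mixed[name] = zip(wa, wb).map { $0 * (1 - ratio) + $1 * ratio }
        }
        return mixed
    }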
People asked: how can I apply a certain style to an image with your app? The inpainting model can actually do that magically! (And it will be even better for this in the future.) In 1.20221231.2, we made a little tweak to allow just this!
TBH, for lspace.swyx.io/p/reverse-prom I didn't expect the prompt to be so much role-playing. Although I haven't played with LLMs enough, I don't really understand how this role-playing language can show up so much, statistically, in the source material.
1.20221222.0 is out! In this version, we added the ability to paste images into the app, some fixed presets to simplify common tasks, and depth2img support! Using an iPhone-captured depth map, img2img generation can preserve the original image's structure, and it is awesome!
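(As a rough Swift sketch of where such a depth map can come from, this reads the disparity auxiliary data embedded in a Portrait-mode HEIC via ImageIO and AVFoundation and normalizes it to 32-bit depth. The file path is a placeholder, and this is not necessarily how the app obtains depth.)

    import AVFoundation
    import ImageIO

    // Placeholder URL to a Portrait-mode photo that carries disparity data.
    let photoURL = URL(fileURLWithPath: "/path/to/portrait.heic")

    guard let source = CGImageSourceCreateWithURL(photoURL as CFURL, nil),
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
          var depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo)
    else {
        fatalError("No disparity data found in the photo")
    }

    // Convert to 32-bit depth before handing it to a depth-conditioned model.
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    let depthMap = depthData.depthDataMap
    print("Depth map:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))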
1.20221220.1 is out! Besides better support for model imports and more models, the headline feature in this release is "upscaler" support. An upscaler increases the resolution of a finished image dramatically, with minimal artifacts or artistic alteration.
You don't need to pay just to generate a limited number of images. Train your own model on Google Colab or a Space, bring it to your phone, and generate as many images as you want!
Quote Tweet
1.20221208.1 is out! The biggest feature in this release is the ability to bring your own custom model to the app! Yes, that's right. You can use a variety of DreamBooth model training providers, either paid or a free @huggingface Space, and bring the trained model into the app. Here is how:
Just a small update ...
Quote Tweet
1.20221130.0 is out. We added iPad and #stablediffusion2 support in the version 3 days ago, so this was supposed to be a smaller release. But we cannot resist ... First, images saved to Photos will now have the prompt as part of the metadata. (1/4)
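(The thread does not say which metadata field is used; as one plausible approach, this Swift sketch writes the prompt into the EXIF UserComment when encoding a PNG with ImageIO. The field choice and function name are illustrative assumptions, not the app's actual scheme.)

    import Foundation
    import CoreGraphics
    import ImageIO
    import UniformTypeIdentifiers

    // Write the generation prompt into the EXIF UserComment of a saved PNG.
    func savePNG(_ image: CGImage, prompt: String, to url: URL) -> Bool {
        let properties: [CFString: Any] = [
            kCGImagePropertyExifDictionary: [
                kCGImagePropertyExifUserComment: prompt
            ]
        ]
        guard let destination = CGImageDestinationCreateWithURL(
            url as CFURL, UTType.png.identifier as CFString, 1, nil) else { return false }
        CGImageDestinationAddImage(destination, image, properties as CFDictionary)
        return CGImageDestinationFinalize(destination)
    }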
depth2img is going to be even better for this. There is a reason why your iPhone's front / back cameras can capture depth maps. Everything fits together really nicely.
Quote Tweet
Specialized models (DreamBooth) are the best way to style your existing photos; a thread:
Don't worry, JPow and the Fed are working on destroying America too!
Quote Tweet
The three great anti-American powers are now in turmoil. China is in the midst of rioting and zero-covid chaos. Iran is being torn apart by women's rights protesters. Russia is being drained and destroyed in Ukraine. The multi-polar world is not going well.
I have some basic code for #StableDiffusion2 working with swift-diffusion: github.com/liuliu/swift-d It took me a bit more time than expected because of this line change: github.com/Stability-AI/s Basically, previously the number of attention heads was fixed like in most networks, but in 2, it is not.
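(To make the head-count change concrete: SD 1.x configs fix the number of attention heads at 8, so the per-head dimension grows with channel width, while SD 2.x configs fix the per-head dimension at 64, so the number of heads grows instead. A small Swift illustration, with values taken from the public configs; treat them as illustrative, not a spec.)

    // SD 1.x: num_heads = 8 everywhere; SD 2.x: num_head_channels = 64 everywhere.
    func attentionHeads(channels: Int, isV2: Bool) -> (heads: Int, dimPerHead: Int) {
        if isV2 {
            let dimPerHead = 64
            return (channels / dimPerHead, dimPerHead)
        } else {
            let heads = 8
            return (heads, channels / heads)
        }
    }

    // For the UNet's 320 / 640 / 1280 channel blocks:
    for channels in [320, 640, 1280] {
        let v1 = attentionHeads(channels: channels, isV2: false)
        let v2 = attentionHeads(channels: channels, isV2: true)
        print("channels \(channels): v1 -> \(v1.heads) heads x \(v1.dimPerHead), v2 -> \(v2.heads) heads x \(v2.dimPerHead)")
    }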
I am launching a new app after all these years! It is 100% offline and free. It is based on the popular #stablediffusion model but runs on your iPhone. Read more at draw.nnc.ai




