Last week, a Redditor fine-tuned an AI image model on the work of one illustrator, sparking a debate about the ethics of reproducing a living artist's style. I talked to the artist to see how she felt about it, and to the person who made it.
The Redditor used DreamBooth, a technique developed by Google and reimplemented for Stable Diffusion, trained on only 32 of the artist's illustrations. Training took 2.5 hours of cloud GPU time at a cost of under $2. He then released the fine-tuned model for anyone to use.
DreamBooth allows you to introduce new subjects or styles to Stable Diffusion, with leaps in speed and usability happening weekly. I was able to train an AI on my own face in about 20 minutes on Google Colab. It cost me about $0.20. (You can also do it for free, but it's slower.)
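To make the cost concrete, here's roughly what generating from one of those fine-tunes looks like afterwards. A minimal sketch, assuming the weights were saved by Hugging Face's diffusers DreamBooth example to a local ./dreambooth-model folder and that "sks" was the placeholder token picked at training time (the folder name and token here are my own examples, not from the thread):

```python
# Minimal sketch: generating images from a DreamBooth fine-tune with diffusers.
# Assumptions: the fine-tuned weights live in ./dreambooth-model and "sks" was
# the placeholder token used during training.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-model",        # directory produced by the fine-tuning run
    torch_dtype=torch.float16,   # half precision fits on a free Colab GPU
).to("cuda")

prompt = "a portrait of sks person hiking in a forest, detailed illustration"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dreambooth_sample.png")
```

The training side is similarly compact: the diffusers repo ships an example DreamBooth training script that you point at a folder of a few dozen instance images, which is how these quick Colab runs work.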
New DreamBooth models are popping up daily, targeting everything from K-pop singers and porn stars to Disney and animation styles, raising thorny questions around copyright, fair use, personality rights, ethics, and consent. huggingface.co/sd-concepts-li
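To give a sense of how low the barrier to using these is: the entries in that concepts library are small learned embeddings that drop straight into a standard pipeline. A hedged sketch, assuming a recent diffusers release and using the library's "cat-toy" concept as a stand-in for whatever subject or style someone has trained:

```python
# Minimal sketch: loading a community-trained concept from sd-concepts-library
# into a stock Stable Diffusion pipeline. Assumes a recent diffusers version
# with textual-inversion loading support; the concept name is just an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Each concept repo ships a learned embedding bound to a placeholder token.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a <cat-toy> sitting on a beach towel").images[0]
image.save("concept_sample.png")
```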
If you liked this piece, you might also be interested in my other writing on generative image AIs and where they get their data. Here's my first post about DALL-E 2, Stable Diffusion, and the complicated ethics of AI art.
Quote Tweet
I've never felt so conflicted about an emerging technology as AI text-to-image models, which are so immediately fun to play with but raise so many ethical questions that it's hard to keep track of them all. waxy.org/2022/08/openin
A deep dive into the image training data behind Stable Diffusion, with a data explorer built as a joint project with @simonw.
Quote Tweet
What images are in the massive dataset that trained the Stable Diffusion text-to-image AI model? @simonw and I made a tool for you to explore/search a subset of it: 12 million image/caption pairs out of over 2 billion that it was trained on. waxy.org/2022/08/explor
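If you'd rather dig through the underlying data yourself instead of using our explorer, the same kind of caption search is a few lines of pandas over LAION's published metadata. A rough sketch: the parquet filename is a placeholder for whichever metadata shard you download, and the uppercase URL/TEXT column names follow LAION's released files but are worth double-checking against your copy:

```python
# Minimal sketch: searching a LAION metadata shard for captions containing a
# given phrase (e.g., an artist's name). The filename is a placeholder; the
# URL/TEXT columns match LAION's published metadata format.
import pandas as pd

df = pd.read_parquet("laion_aesthetics_part0.parquet", columns=["URL", "TEXT"])

query = "artist name here"  # whatever name or phrase you want to search for
matches = df[df["TEXT"].str.contains(query, case=False, na=False)]

print(f"{len(matches)} captions match {query!r}")
print(matches.head(10).to_string(index=False))
```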
How the biggest tech companies in AI are outsourcing their data collection and training to academic/nonprofit groups.
Quote Tweet
Tech companies working with AI — from big names like Google and Meta to upstarts like Stability AI — are outsourcing data collection to academic/nonprofit research groups, shielding them from potential accountability and legal liability. waxy.org/2022/09/ai-dat
So much of the response to generative AI seems to be either uncritical enthusiasm or absolute condemnation, but I think it's a nuanced, complex subject and I try to approach it that way when writing about it. Hope you like it.
My article spawned a lot of debate on Hacker News, on Reddit, and in the comments on my own post. Everyone has an opinion on it!
HN: news.ycombinator.com/item?id=334229
Reddit: reddit.com/r/StableDiffus
My post: waxy.org/2022/11/invasi
Hi Andy, your article made me think a lot, and I want to address some points made by the Redditor. I posted a comment on your blog directly, but sadly it's an unreadable mess due to the lack of formatting, so I hope it's OK to share it on Twitter instead. (Sorry for spamming your blog. orz)