Last week, a Redditor fine-tuned an AI image model on the work of one illustrator, sparking a debate about the ethics of reproducing a living artist's style. I talked to that artist to see how she felt about it, and to the person who made it.
The Redditor used DreamBooth, a technique developed by Google and reimplemented for Stable Diffusion, using only 32 illustrations from the artist. Training took 2.5 hours of cloud GPU time and cost under $2. He then released the fine-tuned model for everyone to use.
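For anyone curious what "released the fine-tuned model" means in practice: a DreamBooth checkpoint loads like any other Stable Diffusion model with the Hugging Face diffusers library. A minimal sketch, assuming a hypothetical Hub repo id and the conventional "sks" trigger token rather than the actual model or prompts from the Reddit post:

```python
# Minimal sketch: generating images from a DreamBooth fine-tune with diffusers.
# "someuser/dreambooth-style" is a placeholder repo id, not the actual released model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "someuser/dreambooth-style",   # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# DreamBooth binds the new style to a rare trigger token chosen at training time
# ("sks" is a common convention, assumed here).
image = pipe(
    "a city street at dusk, illustration in sks style",
    num_inference_steps=50,
).images[0]
image.save("out.png")
```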
DreamBooth lets you introduce new subjects or styles to Stable Diffusion, and the tools are making leaps in speed and usability every week. I was able to train it on my own face in about 20 minutes on Google Colab, for about $0.20. (You can also do it for free, but it's slower.)
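If you want to see roughly what's happening under the hood, here's a simplified sketch of a DreamBooth-style training loop using the Hugging Face diffusers and transformers libraries. It leaves out prior preservation, mixed precision, and other pieces of the full recipe, and the base checkpoint name, image folder, and "sks" trigger token are placeholder choices of mine, not details from any particular Colab notebook:

```python
# Simplified DreamBooth-style fine-tuning loop (paths, checkpoint, and token are assumptions).
from pathlib import Path

import torch
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

base = "runwayml/stable-diffusion-v1-5"  # assumed base checkpoint
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder").cuda().eval()
vae = AutoencoderKL.from_pretrained(base, subfolder="vae").cuda().eval()
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet").cuda().train()
noise_scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

# Only the UNet is updated in this sketch; some recipes also tune the text encoder.
optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)

# A couple dozen instance images in a local folder (hypothetical path), resized to 512x512.
preprocess = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor(), T.Normalize([0.5], [0.5])])
images = [preprocess(Image.open(p).convert("RGB")) for p in Path("instance_images").glob("*.jpg")]

# The prompt ties the new concept to a rare trigger token ("sks" is a common convention).
prompt_ids = tokenizer(
    "an illustration in sks style",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
).input_ids.cuda()

with torch.no_grad():
    text_embeds = text_encoder(prompt_ids)[0]

for step in range(400):  # a few hundred steps is typical for a small style fine-tune
    pixel_values = images[step % len(images)].unsqueeze(0).cuda()

    # Encode the image into Stable Diffusion's latent space.
    with torch.no_grad():
        latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215

    # Standard diffusion objective: add noise at a random timestep, train the UNet to predict it.
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,), device=latents.device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)

    noise_pred = unet(noisy_latents, t, text_embeds).sample
    loss = F.mse_loss(noise_pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

unet.save_pretrained("dreambooth-unet")  # swap this UNet into the base pipeline to generate images
```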
If you liked this piece, you might also be interested in my other writing on generative image AIs and where they get their data. Here's my first post about DALL-E 2, Stable Diffusion, and the complicated ethics of AI art.
Quote Tweet
I've never felt so conflicted about an emerging technology as AI text-to-image models, which are so immediately fun to play with but raise so many ethical questions that it's hard to keep track of them all. waxy.org/2022/08/openin
A deep dive into the image training data behind Stable Diffusion, with a data explorer built jointly with @simonw.
Quote Tweet
What images are in the massive dataset that trained the Stable Diffusion text-to-image AI model? @simonw and I made a tool for you to explore/search a subset of it: 12 million image/caption pairs out of over 2 billion that it was trained on. waxy.org/2022/08/explor
How the biggest tech companies in AI are outsourcing their data collection and training to academic/nonprofit groups.
Quote Tweet
Tech companies working with AI — from big names like Google and Meta to upstarts like Stability AI — are outsourcing data collection to academic/nonprofit research groups, shielding them from potential accountability and legal liability. waxy.org/2022/09/ai-dat
So much of the response to generative AI seems to be either uncritical enthusiasm or absolute condemnation, but I think it's a nuanced, complex subject and I try to approach it that way when writing about it. Hope you like it.
Replying to
As an artist my response is don’t steal from me. Just because users don’t know exactly what they’re stealing and who from, it’s still stealing. Watching tech bros take take take and try to destroy another profession without understanding it makes me sick.