I feel like at some point in the last few years we somehow confused "AI ethics" with "pointing at the mess you made and shrugging".
github.com/openai/dalle-2
Replying to
Like I really appreciate the risks document OpenAI has put together here, it's impressively thorough, it's just there's this weird sort of unspoken... understanding that this is gonna happen anyway, so, buckle up. See also Google earlier:
Quote Tweet
There's a good bit in the Google PaLM paper where they explain the model is pretty racist, but hopefully this can be fixed in the future. Presumably not by the company that fired the person who tried to say this several years ago, though, eh?
Is there actually a point at which these ethical studies would result in work halting? Because if not, you might as well not bother doing them, right? Like the phrasing on this bit in particular is *bizarrely* detached, as if some otherworldly force is making this system exist.
Replying to
It's not hard: use the damn thing to make images that would balance the training set, then retrain. Do the same with disturbing images. You can't account for everything, of course, but the fact that they could use the damn tool to improve it is just glaringly obvious and glaringly not stated.
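The rebalancing idea in that reply could be sketched roughly like this: count how the labels are distributed, then work out how many synthetic images the generator would need to produce per label to even things out. Everything here is hypothetical scaffolding (the `synthetic_fill` name, the toy labels); it's a sketch of the suggested workflow, not anyone's actual pipeline.

```python
from collections import Counter

def synthetic_fill(labels):
    """Given the labels of a training set, return how many synthetic
    samples each label needs so every label reaches the count of the
    most common one -- the 'generate images to balance the set' step."""
    counts = Counter(labels)
    target = max(counts.values())
    return {label: target - n for label, n in counts.items()}

# Hypothetical skewed distribution: 90 images of one group, 10 of another.
labels = ["group_a"] * 90 + ["group_b"] * 10
needed = synthetic_fill(labels)
print(needed)  # group_b needs 80 generated images; group_a needs 0
```

After generating the `needed` counts per label with the model itself, you'd fold those images back into the set and retrain, which is the loop the reply says is "glaringly not stated".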
Replying to
The training set is biased, as any sample can be.
The trained AI picks up the bias.
(Surprised Pikachu.)
We can't train AIs on a set that is actually the whole universe of elements it's sampled from, people. That defeats the purpose of sampling.