Conversation

Replying to
Like I really appreciate the risks document OpenAI has put together here, it's impressively thorough, it's just there's this weird sort of unspoken... understanding that this is gonna happen anyway, so, buckle up. See also Google earlier:
Quote Tweet
There's a good bit in the Google PaLM paper where they explain the model is pretty racist, but hopefully this can be fixed in the future. Presumably it won't be fixed by the company that fired the person trying to say this several years ago, though, eh? Presumably not that.
Is there actually a point at which these ethical studies would result in work halting? Because if not, you might as well not bother doing them, right? Like the phrasing on this bit in particular is *bizarrely* detached, as if some otherworldly force is making this system exist.
Quote: "As noted above, not only the model but also the manner in which it is deployed and in which potential harms are measured and mitigated have the potential to create harmful bias, and a particularly concerning example of this arises in DALL·E 2 Preview in the context of pre-training data filtering and post-training content filter use, which can result in some marginalized individuals and groups, e.g. those with disabilities and mental health conditions, suffering the indignity of having their prompts or generations filtered, flagged, blocked, or not generated in the first place, more frequently than others. Such removal can have downstream effects on what is seen as available and appropriate in public discourse."
Replying to
"the pattern being recreated is less immediately clear" is a wonderful way of writing "we don't know what the fuck it's doing and can't control it, but the abdication of responsibility is a feature not a bug"
Replying to
It's not hard: use the damn thing to make images that would balance the training set, then retrain. Do the same with disturbing images. You can't account for everything, of course, but the fact that they could use the damn tool to improve upon it is just glaringly obvious and glaringly not stated.
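A rough sketch of that rebalance-then-retrain idea, purely for illustration: it assumes each training image can be tagged with the category you want to balance over, and that `generate_images(prompt, n)` is a stand-in wrapper around the generative model. Both `training_set` and `generate_images` are hypothetical names, not anything from the DALL·E 2 system card.

```python
from collections import Counter

def rebalance(training_set, generate_images):
    """Top up under-represented categories with model-generated images.

    training_set: list of (image, category) pairs (hypothetical format).
    generate_images: callable (prompt, n) -> list of images (hypothetical).
    """
    counts = Counter(category for _, category in training_set)
    target = max(counts.values())  # bring every category up to the largest one
    synthetic = []
    for category, count in counts.items():
        shortfall = target - count
        if shortfall > 0:
            for image in generate_images(f"a photo of {category}", shortfall):
                synthetic.append((image, category))
    return training_set + synthetic

# After this, retrain on the augmented set; the same loop could instead
# filter out or down-weight disturbing categories rather than topping them up.
```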
Replying to
The training set is biased, as any sample can be. The trained AI picks up the bias. (Surprised Pikachu.) We can't train AIs on a set that is actually the whole universe of elements it's sampled from, people. That defeats the purpose of sampling.
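A toy illustration of that point, with made-up numbers: the population is 50/50 between two groups, but the collection process over-samples one of them, and anything trained on the sample inherits the sample's skew rather than the population's.

```python
import random

random.seed(0)

population_rate_A = 0.5            # true rate of group "A" in the world
collection_bias_toward_A = 0.8     # probability a collected example is from group A

# Simulate a biased collection process.
sample = ["A" if random.random() < collection_bias_toward_A else "B"
          for _ in range(10_000)]

estimated_rate_A = sample.count("A") / len(sample)
print(f"population rate of A: {population_rate_A:.0%}")
print(f"rate of A in the training sample: {estimated_rate_A:.0%}")
# A model fit to this sample treats ~80% A as the norm, which is exactly
# the inherited bias the reply is describing.
```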
Replying to
I'm just starting to learn about , so I'm not familiar at all with where they get their samples. Are they just pulling randomly from the internet or are the algorithms only being "fed" certain pictures for training purposes?