I am a big fan of @OpenAI’s research. It is often very original in ways that more traditional research labs, like my own team, tend to ignore. While #gpt3 doesn’t bring any algorithmic innovation, the zero-to-few shot approach as a universal language API is groundbreaking. 2/13
-
-
I do take exception with some of
@OpenAI’s PR though. In particular, I don’t understand how we went from#gpt2 being too big a threat to humanity to be released openly to#gpt3 being ready to tweet, support customers or execute shell commands (https://beta.openai.com ). 3/133 replies 41 retweets 351 likesShow this thread -
Instead I wish
@OpenAI had been more open and less sensationalistic, by just open sourcing both for research, especially on#responsibleAI aspects, while acknowledging that neither was ready for production and discouraging services like https://thoughts.sushant-kumar.com/ 4/132 replies 7 retweets 163 likesShow this thread -
One criticism I got was that I cherry picked my examples. Ignoring the fact that 100% of examples touting
#gpt3 on Twitter are cherry picked, greatly inflating its perceived performance, cherry picking is a valid approach when highlighting harmful outputs. 5/133 replies 12 retweets 311 likesShow this thread -
This is a challenge with our current AI benchmarks which do not properly weigh harmful outputs. Even one very bad output in a million in a prod app (eg customer service) can be unacceptable, as shown by the deserved backlash my team got for bad machine translations on FB. 6/13
3 replies 10 retweets 172 likesShow this thread -
In this case, it just took a handful of tries to generate toxic
#gpt3 outputs from neutral, not even adversarial, inputs. AI algorithms need to be a lot more robust to be productized. The ease of generating these toxic outputs is what prompted my decision to share them. 7/131 reply 8 retweets 129 likesShow this thread -
Another criticism was that
#gpt3 was just reiterating what humans think. Yes AI algorithms do learn from humans but a deliberate choice can be made about which humans they learn from and which voices are amplified. 8/135 replies 30 retweets 230 likesShow this thread -
Just culling any data from the web or Reddit because it’s available is not a responsible training strategy. It will lead to amplifying unchecked biases, some very harmful. And we need objective functions that discourage toxic speech in the same way we do it in real life. 9/13
12 replies 23 retweets 249 likesShow this thread -
Others pointed that being at FB, I was badly placed to make this point. FB and my own team do indeed need to do better on this. But FB is also in an arm race against hate speech and misinformation, and AI needs to help rather than make the pb worse https://spectrum.ieee.org/computing/software/qa-facebooks-cto-is-at-war-with-bad-content-and-ai-is-his-best-weapon … 10/13
5 replies 5 retweets 120 likesShow this thread -
Jerome Pesenti Retweeted Paul Graham
Finally by far the most disturbing criticism I got was from
@paulg who compared my point to forcing AIs to be politically correct. 11/13https://twitter.com/paulg/status/1285534687457357824 …Jerome Pesenti added,
8 replies 23 retweets 209 likesShow this thread
This wasn't a criticism of you. In fact, I'm not even sure what "my point" refers to. Was it a tweet or an article?
-
-
Replying to @paulg @an_open_mind
Jerome is referring to this tweet of his last week https://twitter.com/an_open_mind/status/1284487376312709120 … which called GPT-3 “unsafe” [for production phase technology] due to the ease with which it can be prompted to produce hate speech (reflecting what he here calls OpenAI’s “irresponsible training strategy”)pic.twitter.com/CaHJFfTx4s
0 replies 0 retweets 0 likesThanks. Twitter will use this to make your timeline better. UndoUndo
-
Loading seems to be taking a while.
Twitter may be over capacity or experiencing a momentary hiccup. Try again or visit Twitter Status for more information.