I really regret having to say this, because the paper was actually very good (incremental, but there's nothing wrong with that). But the surrounding PR blitz and public misrepresentation are causing serious damage to the field and its public perception. Please don't do this.
-
Yes, fear-mongering can be a good strategy to inflate the perceived significance of your research. It's also completely irresponsible, and even dangerous. The less PR circus, the more room we will have to talk about the safe and responsible deployment of the latest ML research.
6 replies 26 retweets 154 likes -
Replying to @fchollet
When you say fear-mongering, do you mean reaching out to journalists in advance of publication, something wrong with the framing of the result in the blog post, something wrong with the discussion of risks, all of the above, something else, etc.?
1 reply 1 retweet 7 likes -
Replying to @Miles_Brundage
SotA LMs get published on a regular basis (with or w/o the model). Why did this one need an all-out PR assault? Whether releasing some models raises security issues is a legitimate debate (tho this model prolly doesn't). Is Bloomberg or the Guardian the right place to have it?
5 replies 0 retweets 21 likes -
Replying to @fchollet @Miles_Brundage
How is it not misrepresentation to take best-of-25 samples from cherrypicked prompts and tell the public, "the AI wrote this, this is what AI does now"? BTW I have a trading algo that does 12,000% per year* (*evaluated with best-of-25 daily trades)
4 replies 4 retweets 50 likes -
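(Aside: the selection bias @fchollet is joking about is easy to simulate. Below is a minimal sketch with assumed parameters that are not from the thread: 252 trading days, 25 zero-edge candidate trades per day, 1% daily volatility. Keeping only the best candidate each day produces a spectacular compounded return even though the strategy has no edge at all.)

```python
import random

random.seed(0)

TRADING_DAYS = 252   # roughly one year of trading days (assumption)
CANDIDATES = 25      # "best-of-25": candidate trades generated per day
DAILY_SIGMA = 0.01   # 1% daily volatility, zero mean -- no real edge

honest_equity = 1.0  # evaluate the first candidate each day (unbiased)
cherry_equity = 1.0  # evaluate only the best of 25 candidates each day

for _ in range(TRADING_DAYS):
    candidates = [random.gauss(0.0, DAILY_SIGMA) for _ in range(CANDIDATES)]
    honest_equity *= 1.0 + candidates[0]    # honest evaluation
    cherry_equity *= 1.0 + max(candidates)  # post-hoc cherry picking

print(f"honest annual return:     {100 * (honest_equity - 1):10.1f}%")
print(f"best-of-25 annual return: {100 * (cherry_equity - 1):10.1f}%")
```

With these toy numbers the honest curve hovers around 0%, while the best-of-25 curve typically compounds to a four- or five-digit percentage gain, in the same ballpark as the 12,000% in the joke.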
Replying to @fchollet
I'm maybe missing something - the AI *did* write that/does that now, and the reason you know it was cherrypicked is because it said it right there?
2 replies 0 retweets 2 likes -
Replying to @Miles_Brundage @fchollet
Cherry picking allows humans to ascribe intelligence to a model that isn't there, even if the model is only at a Mark V. Shaney n-gram level. It can act as an illusion, especially if it's read by the general public without context or understanding. https://en.wikipedia.org/wiki/Mark_V._Shaney
3 replies 4 retweets 26 likes -
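(Aside, for readers who don't know the reference: Mark V. Shaney was a Usenet persona whose posts were generated by a simple word-level Markov chain. A minimal sketch of that kind of n-gram generator, using a toy corpus invented for illustration, shows how little machinery can sit behind text that still "looks smart" when cherry picked.)

```python
import random
from collections import defaultdict

random.seed(1)

# Toy corpus, invented purely for illustration.
corpus = (
    "the model writes text and the text reads well and the model "
    "knows nothing about the world but the text reads well anyway"
).split()

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# Walk the chain: no understanding anywhere, just lookup and chance.
word = random.choice(corpus)
output = [word]
for _ in range(15):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```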
fwiw: i think a lot about cherry picking in my own work and decided that it's ok when the size of the pool is also given as context, e.g. "here's a notable sample taken from a pool of 100 generated candidates."
3 replies 0 retweets 2 likes -
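(Aside: a minimal sketch of the disclosure practice described above. `generate` and `score` are hypothetical placeholders, not any real model or metric; the point is only that the reported sample is labeled with the size of the pool it was picked from.)

```python
import random

random.seed(42)

WORDS = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "far"]

def generate() -> str:
    """Hypothetical stand-in for a language model's sampler."""
    return " ".join(random.choices(WORDS, k=6))

def score(sample: str) -> float:
    """Hypothetical stand-in for a quality metric or human judgment."""
    return len(set(sample.split()))

POOL_SIZE = 100  # the context being argued for: disclose this number

pool = [generate() for _ in range(POOL_SIZE)]
best = max(pool, key=score)

# Report the sample *with* its selection context, not as a typical output.
print(f"Notable sample (best of {POOL_SIZE} generated candidates):")
print(best)
```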
The issue isn't cherry picking by itself, it's what intelligence people guess is at play when they see the cherry-picked output. Rarely will a human see a mangled, surface-feature-focused generated image and say "it was so close to smart!", but they can and will do that for LMs.
1 reply 0 retweets 2 likes -
Researchers know what cherry picking means -- but the public just assumes the "AI" (actually a character-level LM) to be an autonomous agent with a level of understanding of the world that corresponds to what's displayed in the cherry-picked samples (as interpreted by them).
2 replies 0 retweets 11 likes
It's not science to cherry pick in papers (it's fine if you're doing creative AI though!). But when you're communicating with the public, it's plain misrepresentation.
-
Indeed it's not just cherry picking. Advocating for every #AI/#NLProc practitioner/researcher to always take #ethics into account is an uphill battle with many "not my job"s & lots of free riders adding noise. @OpenAI are serious, good researchers, but the hype ridiculed #AIEthics/#ethnlp
0 replies 0 retweets 0 likes