We all want safe, responsible AI research. The first step is not misrepresenting the significance of your results to the public, not obfuscating your methods, and not spoon-feeding fear-mongering press releases to the media. That's our first responsibility. https://www.businessfast.co.uk/elon-musks-openai-builds-artificial-intelligence-so-powerful-it-must-be-kept-locked-up-for-the-good-of-humanity/
I really regret having to say this, because the paper was actually very good (if incremental; there's nothing wrong with that). But the surrounding PR blitz and public misrepresentation are causing serious damage to the field and its public perception. Please don't do this.
6 replies · 27 retweets · 209 likes
Yes, fear-mongering can be a good strategy to inflate the perceived significance of your research. It's also completely irresponsible, and even dangerous. The less PR circus, the more room we will have to talk about the safe and responsible deployment of the latest ML research.
6 replies · 26 retweets · 154 likes
Replying to @fchollet
When you say fear-mongering, do you mean reaching out to journalists in advance of publication, something wrong with the framing of the result in the blog post, something wrong with the discussion of risks, all of the above, or something else?
1 reply · 1 retweet · 7 likes
Replying to @Miles_Brundage
SotA LMs get published on a regular basis (with or without the model). Why did this one need an all-out PR assault? Whether releasing some models raises security issues is a legitimate debate (though this model probably doesn't). Is Bloomberg or the Guardian the right place to have it?
5 replies · 0 retweets · 21 likes
Replying to @fchollet @Miles_Brundage
How is it not misrepresentation to take best-of-25 samples from cherry-picked prompts and tell the public, "the AI wrote this, this is what AI does now"? BTW, I have a trading algo that does 12,000% per year* (*evaluated with best-of-25 daily trades).
4 replies · 4 retweets · 50 likes
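To make the selection-bias point concrete, here is a minimal simulation (illustrative numbers only, not from the thread): if every "trade" is pure noise with zero expected edge, reporting only the best of 25 candidates per day still compounds into four-digit annual returns. The same bias applies to publishing only the best of 25 generated text samples.

```python
import random

random.seed(0)

# Hypothetical illustration: each "trade" is pure noise with zero edge,
# a daily return drawn uniformly from -2% to +2%.
def daily_return():
    return random.uniform(-0.02, 0.02)

honest = 1.0      # report the first trade of each day
best_of_25 = 1.0  # report only the best of 25 candidate trades per day

for _ in range(252):  # roughly one trading year
    candidates = [daily_return() for _ in range(25)]
    honest *= 1 + candidates[0]
    best_of_25 *= 1 + max(candidates)

print(f"honest reporting:  {100 * (honest - 1):+,.0f}% per year")
print(f"best-of-25 report: {100 * (best_of_25 - 1):+,.0f}% per year")
```

With these assumed parameters, the best-of-25 line typically compounds to roughly the 10,000% range while honest reporting hovers near zero, even though both strategies trade the same noise.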
Replying to @fchollet @Miles_Brundage
How is it not needless fear-mongering and hype to tell journalists, "we'll keep the AI secret because releasing it would be too dangerous, trust us", when the actual model is an incrementally better LM? Where's the threat model? It's layer after layer of issues.
4 replies · 0 retweets · 23 likes
Replying to @fchollet @Miles_Brundage
One difficulty is that the point at which you should switch to a partial release is likely to be a point of incremental improvement (if progress is fairly continuous), but it's hard to make that switch without implying that your model is different in kind from what has come before.
1 reply · 1 retweet · 6 likes
Very few people I talked to had any issues with not releasing the trained model, per se. It's the way this was handled, the ridiculous hype, and the resulting damage to public perception that have everyone shaking their heads in disappointment.
Sounds like you're talking to a very limited community of people. It's definitely not true that everyone feels that way. This is nothing like the ridiculous hype of, say, Google's AutoML, which was entirely unjustified, whereas the public outreach in this case had clear benefits.
1 reply · 0 retweets · 16 likes
This Tweet is unavailable.