We all want safe, responsible AI research. The first step is not misrepresenting the significance of your results to the public, not obfuscating your methods, and not spoon-feeding fear-mongering press releases to the media. That's our first responsibility. https://www.businessfast.co.uk/elon-musks-openai-builds-artificial-intelligence-so-powerful-it-must-be-kept-locked-up-for-the-good-of-humanity/
I really regret having to say this, because the paper was actually very good (incremental, but there's nothing wrong with that). But the surrounding PR blitz and public misrepresentation are causing serious damage to the field and its public perception. Please don't do this.
Yes, fear-mongering can be a good strategy to inflate the perceived significance of your research. It's also completely irresponsible, and even dangerous. The less PR circus, the more room we will have to talk about the safe and responsible deployment of the latest ML research.
Replying to @fchollet
When you say fear-mongering, do you mean reaching out to journalists in advance of publication, something wrong with the framing of the result in the blog post, something wrong with the discussion of risks, all of the above, or something else?
Replying to @Miles_Brundage
State-of-the-art language models get published on a regular basis (with or without the model). Why did this one need an all-out PR assault? Whether releasing some models raises security issues is a legitimate debate (though this model probably doesn't). Is Bloomberg or the Guardian the right place to have it?
Replying to @fchollet @Miles_Brundage
How is it not misrepresentation to take best-of-25 samples from cherry-picked prompts and tell the public, "the AI wrote this, this is what AI does now"? BTW I have a trading algo that does 12,000% per year* (*evaluated with best-of-25 daily trades)
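(A minimal sketch of the selection effect being described here. The quality scores are simulated random numbers, not outputs of any real model or of OpenAI's actual sampling pipeline; the point is only that reporting the best of 25 draws systematically overstates what a typical draw looks like.)

```python
import random

random.seed(0)

def sample_quality():
    # Hypothetical quality score of one generated sample, in [0, 1].
    # Stands in for "how good a single model output looks"; purely illustrative.
    return random.random()

N_TRIALS = 1000
BEST_OF = 25

# Quality of a typical single sample vs. the best sample out of 25 draws.
typical = [sample_quality() for _ in range(N_TRIALS)]
best_of_25 = [max(sample_quality() for _ in range(BEST_OF)) for _ in range(N_TRIALS)]

print(f"average quality of a single sample:   {sum(typical) / N_TRIALS:.2f}")
print(f"average quality of a best-of-25 pick: {sum(best_of_25) / N_TRIALS:.2f}")
```

(With a uniform score the single-sample average comes out near 0.5 while the best-of-25 average comes out near 0.96, which is the same inflation as the "12,000% per year" trading-algo joke.)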
Replying to @fchollet @Miles_Brundage
How is it not needless fear mongering and hype to tell the journalists, "we'll keep the AI secret because releasing it would be too dangerous, trust us", when the actual model is an incrementally better LM? Where's the threat model? Etc. It's layer after layer of issues.
I like you guys and I think you do good research. Please be more careful next time.