We all want safe, responsible AI research. The first step is not misrepresenting the significance of your results to the public, not obfuscating your methods, and not spoon-feeding fear-mongering press releases to the media. That's our first responsibility. https://www.businessfast.co.uk/elon-musks-openai-builds-artificial-intelligence-so-powerful-it-must-be-kept-locked-up-for-the-good-of-humanity/
-
-
Yes, fear-mongering can be a good strategy to inflate the perceived significance of your research. It's also completely irresponsible, and even dangerous. The less PR circus, the more room we will have to talk about the safe and responsible deployment of the latest ML research.
-
-
-
I hate to say it, but if watching some of the best minds turn research into a PR exercise instead of simply showing their progress isn't sad, I don't know what would be.
-
-
-
Very much agree. It has been disheartening to see, especially coupled with the unfortunate and ill-advised "too dangerous to share" framing noted by
@halvarflake here: https://twitter.com/halvarflake/status/1097420058370994176
-
-
-
The paper?
-
-
-
Serious damage? And to public perception? Who dares to say *the emperor has no clothes*? The public MUST believe everything fed to them by the 'AI Gods', lest they be taken for fools. Naturally, many people in the field are capitalizing on this.