We all want safe, responsible AI research. The first step is not misrepresenting the significance of your results to the public, not obfuscating your methods, and not spoon-feeding fear-mongering press releases to the media. That's our first responsibility. https://www.businessfast.co.uk/elon-musks-openai-builds-artificial-intelligence-so-powerful-it-must-be-kept-locked-up-for-the-good-of-humanity/
-
I like you guys and I think you do good research. Please be more careful next time.
-
For sure!
End of conversation
New conversation -
100% agree more transparency was needed around the threat model, and "trust us" is not a winning perspective.
-
Will also reiterate that we did say we were not sure it was the right decision, but there is an important asymmetry here as Amanda points out, and I'm glad the conversation is happening now vs. later: https://twitter.com/AmandaAskell/status/1097366100025589760
End of conversation
New conversation -
The main threat model is greatly scaled-up disinformation campaigns.
-
Are state-sponsored APTs (advanced persistent threats) just going to give up because the whole model wasn't provided? Will this really make a difference 12 months from now? Who's to say they don't already have even better corpora they can use?
End of conversation
New conversation -
One difficulty is that the point at which you should switch to a partial release is likely to be a point of incremental improvement (if progress is fairly continuous), but it's hard to make that switch without implying that your model is different in kind from what's come before.
-
Very few people I talked to had any issue with not releasing the trained model per se. It's the way this was handled, the ridiculous hype, and the resulting negative impact on public perception that are making everyone shake their heads in disappointment.