My thoughts here: https://twitter.com/XiXiDu/status/1144252654878679041
AI systems have identified subgoals since the beginning; can you flesh out how your proposed fire alarm differs from what we have today? What would be a specific example that would alarm you?
New conversation
Having talked to some AI people here on Twitter, I think I can see what the problem is. AI people are dismissive of safety concerns for two reasons: (1) they correctly see lots of BS hype about it in the popular media. Most of them haven't actually engaged with serious (1/n)
I predict that less than 5% of the relevant experts will change their mind after reading the AI risk literature.
New conversation
This is like one of those "estimate how many jelly beans are in this jar; say the biggest number without going over" questions. If someone mistakenly guesses higher than the actual number, it would be wrong to conclude that there are therefore few jelly beans in an absolute sense.
Why on earth should that alarm you? He was specifically asking for certainty about the least impressive thing. That by definition is asking for a high degree of precision about something inherently vague. That is why there was silence. This is a completely adequate explanation.
Yes, one should come at such questions with tremendous uncertainty; that's a large part of the point of the essay.