Isn't it good for several of MIRI's goals if AI has a bad public reputation?
-
-
-
I presently read MIRI's goals as unrelated to public opinion variance inside the Overton Window. Everything in that range is equally useless; nothing in it corresponds to policies unusual enough to result in non-default outcomes.
-
Doesn't AI having a negative halo effect at least improve your ability to fundraise?
-
I don't get the impression that's been the case. People who are scared of marching robots with glowing red eyes are rarely precise-enough thinkers to take a precisely productive action that is more expensive than ranting on Twitter and not as emotionally fun.
-
If you think that a hiccuping Doordash delivery is a Harbinger of the AGI Apocalypse (rather than totally unrelated), you're also liable to be impressed by "relevant" work involving sexy gradient descent tech that flails in the direction of something vaguely alignment-sounding.
-
In my experience, all imprecise thinking is the enemy of all precise thinkers advocating precise policies. Once you depart the narrow path, there is always somebody who looks sexier, cooler, easier, more rewarding, because they left the narrow path to optimize just for that.
-
That's fair. All I'll say, anecdotally, is that I've donated to x-risk organizations in the past, and I wouldn't put it past myself to donate more on a particular day if I've recently had an experience that caused a negative mood affiliation with AI.
-
And I presume that MIRI's other donors aren't immune from all of the usual cognitive biases that would cause them to donate more or less.
End of conversation
New conversation -
-
-
Well, the video game crash of 1983 brought about stringent quality controls, so sometimes things have to get worse before they get better.
-
-
-
Reputation can be highly manipulable and politicized. Once that is solved, however, things are different.
-
-
-
@tylercowen discusses this in his conversation with Garry Kasparov. His initial encounters with things like self-checkout were incredibly frustrating because the technology was deployed too early, and that soured him (a bit) on the potential of automation.
-
-
-
Depends on the business and who builds the AI. If something is so complex that an AI will do it best, but sensitive enough that a large enough mistake could cost everyone, then using AI in false flags could become a new norm.
-
-
-
We're finding out what they think their reputation is worth, I guess.
-
-
-
2 years seems a bit hopeful.