Conversation

Replying to
"How can we make normal people understand existential AI risk? It's so complicated to explain because the concepts are difficult" is another example of this. The implication is that we're right, and the only question is how to get dumb people to understand. This is a pattern that manipulative people use.
Replying to
Say that you truly believed some new technology really was an incredibly dangerous risk to our world, but the evidence that you found extremely convincing didn’t convince others. How would you go about trying to prevent that danger from coming to pass?
Replying to
I think a good approach would be to use the "leader without coercive power" methods, like pastors, political advocates, and some politicians do. Stuff like follow-then-lead, similarity-based persuasion, prestige.
Replying to and
I mean, I understand the current approach comes from a "play to your outs," variance-increasing strategy, and it's an elites-focused strategy. But I don't think it's the most effective one, because it asks too much.
Replying to
It seems you’re accusing the people you disagree with of bad faith, manipulative tactics. Feels sort of similar to assuming that the people that disagree with you are stupid, no? What if we all stopped claiming that our opponents were engaging in bad faith without evidence?
Replying to and
It's more like, say, Jehovah's Witnesses. JWs truly believe that non-believers will suffer an awful fate. Therefore, they will use the most effective tactics to convert people. Manipulation is ethical because it's for their own good.
Replying to
Manipulating others “for their own good” is a move made in bad faith, because if an argument is true you can advance it solely on that basis. So yeah, you’re accusing AI Xrisk evangelists of arguing in bad faith right now.
Replying to and
They seem to genuinely believe I'm at risk and want to save me, though. Would you call preaching hellfire to convert people through fear bad faith? How about yelling "bomb" in a crowd after spotting a suspicious backpack? I'd just call it misplaced confidence in bad judgment.
Replying to and
This is basically my view: they believe we are genuinely at risk of extinction from AI, so they are happy to use what they consider the most effective persuasion methods. This doesn't involve any bad faith.
Replying to and
Mr. Miyagi's "paint the fence," though, is a third thing: openly declared manipulative teaching to get past learning blocks. Daniel does it thinking he's doing chores to pay his dues before getting to karate. So it's a surprise to find out he has already learned.
Replying to and
I really respect people who dare to use the Mr. Miyagi thing and manage to pull it off. If you don't pull it off, you look like an idiot, which is why it's so exciting to watch.