This is a sim. Omega has 1000s of years to convince you.
Replying to @MoralOfStory @ContentOfMedia and
The first two require far less computational power than simming 3^^^3 people. You also need to comprehend the hardware, which will be unimaginably huge. And you need to already know the output of the program.
Replying to @ReferentOfSelf @MoralOfStory and
And there is no way you can know what you-after-1000-years-of-convincing is like or what they would choose.
Replying to @ReferentOfSelf @ContentOfMedia and
The you-after-1000-years says to a forked copy of you, "I have exhaustively verified this computational machinery. Choose wisely."
Replying to @MoralOfStory @ReferentOfSelf and
(Omega resets both copies & repeats this process a gajillion times until they find the combination of words which creates in you the correct state of belief)
Replying to @MoralOfStory @ContentOfMedia and
What does that get you? Yes, we can hypothesize all sorts of you-like things which come to believe arbitrary things. But you aren't them; you don't know what it is like to be them.
Replying to @ReferentOfSelf @ContentOfMedia and
Yes, you can't access that epistemic state and nearly everyone who thinks they can is confabulating (that goes for much more mundane thought experiments, too). But is it invalid to reflect on what you'd want that you to choose?
Replying to @MoralOfStory @ContentOfMedia and
It is not invalid to reflect on what you think you might choose, but understanding that your action will affect quantityOf(3^^^3) people seems like such an alien state of mind that you can't have a good idea of what you might choose.
Replying to @ReferentOfSelf @ContentOfMedia and
I think nearly all responses to "what would you do when faced with X" questions are confabulation and I don't think this case is especially worse than most.
Replying to @MoralOfStory @ReferentOfSelf and
Anyway, (IMO) the point of the thought experiment isn't to reproduce your hypothetical internal state, it's to figure out without pressure what you'd want yourself to do.
The basic idea of putting in tension our impulse to save one from undue suffering with our impulse to allow one to suffer for the good of the group doesn't seem like a major stretch for the imagination. There are probably numbers vastly smaller than 3^^^3 that make it work.
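For reference on the size being debated: 3^^^3 is Knuth's up-arrow notation, where each extra arrow iterates the operation below it (one arrow is exponentiation, two is a power tower, and so on). A minimal sketch of the recursion, with the caveat that anything at three arrows or beyond is far too large to ever actually evaluate:

```python
def up(a: int, arrows: int, b: int) -> int:
    """Knuth's up-arrow: up(a, 1, b) = a**b; up(a, n, b) applies
    up(a, n-1, ...) iterated b times. Only tiny inputs are feasible."""
    if arrows == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, arrows - 1, up(a, arrows, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^27 isn't it) = 3^27 = 7625597484987
print(up(2, 2, 4))  # 2^^4 = 2^(2^(2^2)) = 65536
# up(3, 3, 3) would be a power tower of 7,625,597,484,987 threes —
# not computable, which is rather the point of the thought experiment.
```

Even 3^^3 (about 7.6 trillion) already dwarfs any number of people one could concretely imagine, which supports the closing point that far smaller numbers than 3^^^3 suffice to create the tension.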