@Plinz: Yes. Humanism does not follow from rationalism.

@RealtimeAI: You consider the moral valences of well-being and suffering to be outside the ability of rationalism to describe, capture, or take account of?

@Plinz: If you are under threat from the outside (such as entropy or Stalin) that forces you to optimize for efficiency, then in a rational sense the moral valence of pain is equal to its effect on the performance of the greater whole. Humanism needs additional axioms.

@RealtimeAI: Maybe you mean a more specific set of axioms by the term “rational”. All I mean is the systematic use of reasons and arguments and so on. I don’t see why an implicit premise that, e.g., survival > happiness is prima facie or necessarily “rational”.

@Plinz: Under evolutionary conditions, the system that prevails will be the one that optimizes for survival, and rationality is (by definition) the best optimization strategy. Once you decide that you are going to stick around, rationality constrains you.

@RealtimeAI: Are you saying that survival is the only rational goal, or that only rationality will reliably lead to survival? I reject the former, accept the latter. But neither case implies that rationalism excludes humanism (depending on how you’re defining that one).

@Plinz: Neither. Nothing leads reliably to survival, but rationality optimizes your chances of reaching your goal (by definition). And you maximize your probability of existing by optimizing for it (if you don’t exist, we don’t need to worry about you).

@RealtimeAI: You and I both currently exist and have preferences. The idea that we should only care about people whose preferences are for survival, because they are more likely to achieve it, is circular.

@Plinz: That is not implied in any way. But a rational agent that is concerned about its survival will have to be more concerned about competition that also optimizes for survival, because the rest tends to go away by itself.

@RealtimeAI: Agree with that. But you keep implying that survival is intrinsically a goal that is implied by rationality. I don’t agree with that. It’s only a goal *to the extent* that you value existence for some other reason.

@Plinz: You misunderstand. You are entirely free to pick another goal, but that means you will sooner become irrelevant as someone other rational agents need to worry about.

@RealtimeAI: We’re getting close to the paradox of hacking your own reward function. That implies that you have some reason to prefer either existence or destruction. But you’re always either without any such reason, or within the context of having one.

@Plinz: No, it does not. If you are indifferent to existence, you have nothing to worry about, and we are done.