(2/6) It seems likely that we’ll evolve toward a morality where maximizing consciousness, and correspondingly minimizing its destruction, is what we’re solving for.
(3/6) Given this, seems like the most optimistic case for AGI is that its development is so rapid that it all but misses the early phases of our destructive evolution and fast-forwards to viewing consciousness as a fundamental good worthy of being preserved.
(4/6) The hope would be that this would include biological-substrate consciousness, even if it becomes inferior on many dimensions. The thing I’m unsure of is whether we could imbue AGI with this value, or whether we just have to hope it’s a fundamental truth of the universe!
(5/6) Definitely spend a lot of time thinking about plausible optimistic paths and would very much welcome other perspectives... would love for there to be many positive paths!
(6/6) Seems morally obligatory to do everything we can to fight for the brightest possible future, so it feels important to front-load thinking about these things and take actions that are hopefully at least directionally correct.
Replying to
Humans are certainly not maximizing consciousness on Earth. The next most conscious animals, primates, are nearly extinct. We care mostly about our own needs when dealing with other creatures. Why would AGI be different?
Replying to
It really comes down to the optimisation curve, that little seed we instil the AGI with. Let’s hope it’s not to maximise human happiness! Haha