(1/6) Humanity seems to have developed an increasing respect for consciousness over time. It ebbs and flows but, in spite of our violent past, the trajectory appears positive.
(2/6) It seems likely that we’ll evolve toward a morality where maximizing consciousness and, correspondingly, minimizing the destruction of consciousness is what we’re solving for.
(3/6) Given this, it seems the most optimistic case for AGI is that its development is so rapid that it all but skips the early phases of our destructive evolution and fast-forwards to viewing consciousness as a fundamental good worth preserving.
(4/6) The hope would be that this would include biological-substrate consciousness, even if it becomes inferior on many dimensions. The thing I’m unsure of is whether we could imbue AGI with this value, or whether we just have to hope it’s a fundamental truth of the universe!
(6/6) It seems morally obligatory to do everything we can to fight for the brightest possible future, so it feels important to front-load thinking about these things and take actions that are hopefully at least directionally correct.
I love this question and articulation. Quite a few people talk about the “value alignment” problem. Humans have an innate objective function: short- and long-term survival; everything else serves that. Machines can be fitted with any objective function. You already know that, I’m sure.
IMO, if AGI is owned and governed by a council or board, they can control either the objective function or the actuating mechanisms (actions), so that the AGI always reflects our values. But what if AGI is governed by a rogue actor?