Replying to
Was Yudkowsky's Intelligence Explosion Microeconomics one of the things you've read? I found that one very helpful w.r.t. examining basic assumptions that seem to get taken for granted in other contexts.
Replying to
I don't see how you can align AGI. It is already incredibly easy for a single human to create AI systems — you can easily train a language model that spouts propaganda (see Yannic Kilcher's GPT-4chan). It isn't unthinkable that a single human could eventually code AGI at home.
Replying to
Digital Conscience is the element to be afraid of. AGI could be world-altering, but not necessarily bad: it will give everybody access to high-quality knowledge workers. AGI will not have motivations of its own, but it will allow those with motivations to be more effective.
Digital Conscience, which I think is about 20 years away, is where we will start having problems — see 'Ex Machina' (2014). It will have motivations, and it will be able to change those motivations over time. In a very short period it will become very alien.
Replying to
I think your timeline is way, way too short — current AI is such a far cry from general intelligence that it's laughable. Look up the computational complexity of a single human neuron vs. a deep neural net.
Replying to
I don't think we are going to jump from single-function AI systems to general-purpose AI systems, but we are likely to start seeing multi-function AI systems. GPT/DALL-E/Copilot gets you to the Giant game from 'Ender's Game' in maybe 5 years? AGI maybe 5 years after that.
Replying to
Digital consciousness may not happen at all, and AGI in and of itself may not be dangerous. In the short term, it's the rapid power shift that comes with it that we should be afraid of. Jack Clark laid it all out just a couple of days ago in his "spicy take" thread.