I got bored and wrote about AI risk
Thanks for acknowledging EY doesn't think LLMs are an x-risk. What's an example of someone who does? I don't and I don't think I know any
I just hear people freaking out a lot in casual conversation, but recognized that only people in my narrow circle are even thinking about this, so I just wrote this for fun
I think he said as much in our episode.
He doesn't think an LLM will destroy humanity. He thinks an AGI will.
Do you have an argument against this?
No, it's just impossible to know, which is why people drive themselves up the wall thinking about it
LLMs are just guessing their way through the set of possible functions that let them make text as close as computationally possible to what a human would write.
Unfortunately, one sort of function likely to be great at this would be a fully sapient human brain analogue
That's dangerous for all the reasons AGI is dangerous, *plus* the obvious ethical nightmare if we end up creating and mistreating/enslaving sapient-as-in-human beings.
I suspect existing LLMs don't do this, but adding compute until stuff happens seems like a poor way to avoid it
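(For what it's worth, the "searching for text-imitating functions" framing above can be illustrated with a toy that is nothing like a real LLM: a bigram model that "fits" the crudest possible next-token function to a tiny corpus. All names and the corpus here are made up for the sketch.)

```python
from collections import Counter, defaultdict

# Toy illustration only -- not how transformers work. It just shows the
# idea of fitting a function that imitates the text it was trained on.
corpus = "the cat sat on the mat the cat ate".split()

# "Training": count which token follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # One very crude member of the space of text-imitating functions:
    # return the most frequent follower seen in training, else None.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scaling that search up (bigger function space, more compute) without knowing what kind of function you'll land on is exactly the worry voiced above.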
This is what doomers actually believe
Quote Tweet
Current LLMs *can't* end the world
Future AIs with higher outcome-optimization power over a larger outcome-representation space *can* end the world
LLM progress is scary because it represents forward movement on these dimensions.
Imo the big risk that you only briefly address is AI as a really smart browser, providing motivated bad actors with instructions for doing really bad things.