Conversation

Thanks for acknowledging EY doesn’t think LLMs are an x-risk. What’s an example of someone who does? I don’t and I don’t think I know any
I just hear people freaking out a lot in casual conversation, but recognized that only people in my narrow circle are even thinking about this, so I just wrote this for fun πŸ₯°
No, it's just impossible to know, which is why people drive themselves up the wall thinking about it
LLMs are just guessing their way through the set of possible functions that let them make text as close as computationally possible to what a human would write. Unfortunately, one sort of function likely to be great at this would be a fully sapient human brain analogue
That's dangerous for all the reasons AGI is dangerous, *plus* the obvious ethical nightmare if we end up creating and mistreating/enslaving sapient-as-in-human beings. I suspect existing LLMs don't do this, but adding compute until stuff happens seems like a poor way to avoid it
This is what doomers actually believe
Quote Tweet
Current LLMs *can't* end the world. Future AIs with higher outcome-optimization power over a larger outcome-representation space *can* end the world. LLM progress is scary because it represents forward movement on these dimensions.
Imo the big risk that you only briefly address is AI acting as a really smart browser, providing motivated bad actors with instructions for doing really bad things.