My weakly-held opinion is that slowing AI progress would be good, but probably this is a better take than mine:
Somewhat ironically, yes. I think actually-existing AI (= social media algorithms) is probably a large net harm, so nuking Facebook from orbit would be good. I don’t think “AGI” (which no one can explain) is imminent, but if it somehow happened, it would probably be bad.
I’m basically an AI-accelerationist. Floor the gas, solve actual bad consequences as they come up, and ignore incoherent constructs like “AGI” and ill-posed general anxieties like “alignment.”
I don’t think AI as it exists is bad. It’s just made existing badness elsewhere unsustainable.
Just GPT-3’d this for fun, and it’s funny how it went “the way to make it better... is to make it want to be more intelligent,” which is sort of the basis for the alignment problem space.
Any technology that can get increasingly harder to control is scary. AI is the best example of this as it can do unexpected things without us realizing what it's doing.
Wot. AI as it exists is terrible. Look at the creepy uncanny-valley nonsense that is DALL-E and GPT-3.