My weakly held opinion is that slowing AI progress would be good, but this is probably a better take than mine:
Somewhat ironically, yes. I think actually-existing AI (= social media algorithms) is probably a large net harm, so nuking Facebook from orbit would be good. I don’t think “AGI” (which no one can explain) is imminent, but if it somehow happened, it would probably be bad.
I’m basically an AI accelerationist: floor the gas, solve actual bad consequences as they come up, and ignore incoherent constructs like “AGI” and ill-posed general anxieties like “alignment.”
I don’t think AI as it exists is bad. It’s just made existing badness elsewhere unsustainable.
Just ran this through GPT-3 for fun, and it’s funny how it went “the way to make it better... is to make it want to be more intelligent,” which is more or less the basis of the alignment problem space.