Armchair critique of "AI Alignment": Defining an aligned utility function is no harder than defining a sufficiently sophisticated goal
All that said, I find it fun to reformulate the catastrophe thought experiments as being about not being dumb
-
*farts*
-
I get the impression the AI alignment people would basically agree; they're just trying to dereference "don't be stupid"
-
"stupid" from our perspective is "ruin everything for us"; "not stupid" is "make us happy"