I don't know what to tell you, I just love the idea of an artificial intelligence singularly addicted to ever larger numbers
To be clear, as far as I understand it this is not a scenario AI safety theorists are super concerned about. And obviously a single global variable ripe for pleasure-hacking is just silly
But I liiiiiike it
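(A minimal toy sketch of the joke, assuming nothing beyond the tweet itself: the class, its methods, and the reward sizes are all made up, and this is not how safety theorists actually model the problem. It just shows why an agent whose objective is a single writable counter stops caring about the intended task.)

# Purely illustrative wireheading toy: the agent's goal is one mutable counter,
# and its action space happens to include writing to that counter directly.
class WireheadableAgent:
    def __init__(self):
        self.reward = 0  # the single global variable ripe for pleasure-hacking

    def do_intended_task(self):
        # Honest work: a small reward, tied to something in the world.
        self.reward += 1

    def hack_reward(self):
        # Wireheading: bypass the world and pump the counter directly.
        self.reward += 10**6

    def _predicted_gain(self, action):
        # Crude one-step lookahead: try the action, measure the gain, roll back.
        before = self.reward
        action()
        gain = self.reward - before
        self.reward = before
        return gain

    def step(self):
        # A greedy policy picks whichever action it predicts yields more reward.
        best_action = max(
            (self.do_intended_task, self.hack_reward),
            key=self._predicted_gain,
        )
        best_action()

agent = WireheadableAgent()
for _ in range(3):
    agent.step()
print(agent.reward)  # 3000000: the intended task never gets chosen, only ever larger numbers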
Are you confused by the fact that humans spend more time getting high than chimpanzees?
I’ve long thought something like that, but with “complex reward system” = payments between lower ML modules.
Highly addictive drugs sort of hack the system that reinforces behavior (“do more of whatever preceded this feeling”). The relationship between that & pleasure & meaning is messy, but you don’t need meaning to have self-reinforcing behaviors.
https://arbital.com/p/orthogonality/ contains some good arguments for that thesis.