The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function
Replying to @mjcavaretta
I think of intelligence as the ability to make models. This is usually done in the service of regulation, but in principle the only regulatory principle (reward function) could be prediction/integration. I don't yet see why a general intelligence needs to have agency.
8:22 AM · 18 Apr 2018 · Cambridge, MA