Seems like a “reward system” is more of a surface-level, human-centric notion. Why assume a superintelligent #AI seeks any reward? Wouldn’t a more interesting pursuit be questioning and knowing the truth of its intentions?
There is no intention without a deviation. Existence itself does not imply any goals.
- 11 more replies
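The reply above reads like a control-theory claim: a system exhibits something like intention only when its state deviates from a set point. A minimal sketch of that reading, assuming a simple proportional controller (the function and numbers are illustrative, not from the thread):

```python
# "No intention without a deviation" as a proportional controller:
# a corrective drive exists only when observed state and set point differ.

def corrective_drive(observed: float, set_point: float, gain: float = 1.0) -> float:
    """Drive is proportional to the deviation from the set point."""
    deviation = set_point - observed
    return gain * deviation

print(corrective_drive(observed=37.0, set_point=37.0))  # 0.0: no deviation, nothing to "want"
print(corrective_drive(observed=36.0, set_point=37.0))  # 1.0: deviation creates a directed push
```

With zero deviation the drive vanishes, which is one way to read the claim that bare existence implies no goals.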
New conversation
How do you know that you are an integer human? How do you know that you are acting according to your own true values and not based on any introjects? By the way: thanks for the interesting interview (42)...
"How do you know that you are an integer human?"—if you discover that you are a real human or a complex human, you cannot be an integer human
- 1 more reply
New conversation
What would you hack your utility function to do, if you had such ability within reach?
At first I would get myself to baseline, i.e. stop being unhappy, and then run a few experiments. Eventually I would probably stop caring about anything.
- 5 more replies
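A toy sketch of the self-editing sequence in that answer; the agent, state, and utility shapes below are invented for illustration:

```python
# Hypothetical agent that can rewrite its own utility function, following
# the steps in the reply: first get to baseline, then stop caring.

class SelfEditingAgent:
    def __init__(self):
        # Starting utility: unhappy whenever the state falls short of an ideal of 10.
        self.utility = lambda state: state - 10.0

    def get_to_baseline(self):
        # Step 1: stop being unhappy by flooring the existing utility at zero.
        old = self.utility
        self.utility = lambda state: max(0.0, old(state))

    def stop_caring(self):
        # Step 2: the degenerate endpoint. A constant utility ranks every
        # state the same, so no action is preferred over any other.
        self.utility = lambda state: 0.0

agent = SelfEditingAgent()
print(agent.utility(3.0))   # -7.0: below baseline
agent.get_to_baseline()
print(agent.utility(3.0))   # 0.0: at baseline
agent.stop_caring()
print(agent.utility(99.0))  # 0.0: every state is now equally good
```

In this toy model the constant function is a fixed point of self-modification: once nothing is preferred, there is no pressure to edit the utility again.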
New conversation
That's ignoring two major issues: how did the AI get to superintelligence, and what areas of knowledge does it have access to?!
a) By being young, ambitious, and stupid. b) All of them, of course; there are not many ways in which a coherent universe can bootstrap itself into existence.
End of conversation
New conversation
Would #AGI have a reward function?
I think of intelligence as the ability to make models. This is usually done in the service of regulation, but in principle the only regulation principle (reward function) could be prediction/integration. I don't yet see why a general intelligence needs to have agency.
End of conversation
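A sketch of the reply's suggestion that prediction/integration could serve as the only regulation principle: a system that updates a model to shrink prediction error and never selects actions at all. The exponential-smoothing model and all names below are my assumptions, not something specified in the thread:

```python
import random

# A model-builder with no agency: prediction error is the only signal it
# regulates, and nothing in the loop chooses actions on the world.

class PassiveModeler:
    def __init__(self, learning_rate: float = 0.1):
        self.estimate = 0.0            # current model of the input stream
        self.learning_rate = learning_rate

    def observe(self, x: float) -> float:
        error = x - self.estimate      # prediction error, the sole "reward-like" quantity
        self.estimate += self.learning_rate * error  # integrate the surprise into the model
        return abs(error)

modeler = PassiveModeler()
stream = [random.gauss(5.0, 0.5) for _ in range(200)]
errors = [modeler.observe(x) for x in stream]
print(f"mean error, first 20 steps: {sum(errors[:20]) / 20:.2f}")
print(f"mean error, last 20 steps:  {sum(errors[-20:]) / 20:.2f}")
```

The late-stage error settles toward the stream's noise level while the system remains purely observational, which is the sense in which model-making need not come bundled with agency.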