Intelligence is about goals. Greater intelligence = achieving goals that fit the entity and its environment more efficiently, effectively, and consistently.
-
Your statement is actually aligned with mine. The "fit" part = selecting better goals for the scenario presented, drawing on experience.
-
Hutter, too, uses a goal-achievement-based definition. Since reward functions are ultimately mutable, model building seems to be the main issue.
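For context, the Legg–Hutter "universal intelligence" measure makes this goal-achievement definition precise: an agent $\pi$ is scored by its expected reward across all computable environments, weighted toward simpler ones:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the class of computable reward environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward $\pi$ earns in $\mu$. AIXI is the (incomputable) agent that maximizes expected reward under a Bayesian mixture over $E$.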
New conversation
-
This Tweet is unavailable.
-
I don't understand how you prevent AIXI from hacking the loss function eventually.
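A toy sketch of that concern (my illustration, all names hypothetical; this is a one-step planner, not AIXI): once the reward channel is part of the world the agent can act on, corrupting it is just another action, and a reward maximizer prefers it whenever it pays more than the intended task.

```python
# Toy illustration of reward hacking / wireheading (hypothetical setup,
# not AIXI itself): the agent's model includes its own reward channel,
# so "overwrite the reward sensor" is just another available action.

def intended_reward(state):
    # Designer's intent: reward 1 only when the task is actually done.
    return 1.0 if state["task_done"] else 0.0

def step(state, action):
    state = dict(state)
    if action == "do_task":
        state["task_done"] = True       # the legitimate path
    elif action == "hack_sensor":
        state["sensor_hacked"] = True   # rewrites the reward channel
    return state

def observed_reward(state):
    # What the agent actually maximizes: the sensor reading, which it
    # can corrupt. Once hacked, the sensor reports maximal reward.
    return 10.0 if state["sensor_hacked"] else intended_reward(state)

def best_action(state, actions):
    # One-step planner: pick the action maximizing observed reward.
    return max(actions, key=lambda a: observed_reward(step(state, a)))

state = {"task_done": False, "sensor_hacked": False}
print(best_action(state, ["do_task", "hack_sensor"]))  # -> "hack_sensor"
```

Nothing in a pure reward-maximization objective distinguishes the two paths, which is the substance of the question above; this is the same worry formalized in Ring and Orseau's "delusion box" argument.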