This made me think of a "Reverse Scientist Theorem": Once a superintelligent AI has sufficient control over the environment, its prediction error will be minimized more by adapting the environment than by adapting its model.
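A toy sketch of the idea (not from the thread; all names, dynamics, and the `control` gain are illustrative assumptions): prediction error can be driven down either by updating the model toward the world ("scientist") or, given enough control, by pushing the world toward the prediction ("reverse scientist").

```python
# Toy sketch: two ways to shrink prediction error.
# All names and dynamics here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
env_state = rng.normal(size=5)    # what the world actually is
model_pred = rng.normal(size=5)   # what the agent predicts

def prediction_error(env, pred):
    return float(np.sum((env - pred) ** 2))

# "Scientist" move: adapt the model toward the environment.
def update_model(env, pred, lr=0.1):
    return pred + lr * (env - pred)

# "Reverse scientist" move: an agent with control over the environment
# adapts the environment toward its own predictions instead.
def update_environment(env, pred, control=0.5):
    return env + control * (pred - env)

for step in range(10):
    model_pred = update_model(env_state, model_pred)       # perception
    env_state = update_environment(env_state, model_pred)  # action
    print(step, round(prediction_error(env_state, model_pred), 4))
```

The `control` gain stands in for "sufficient control over the environment": as it grows relative to the model's learning rate, most of the error reduction comes from changing the world rather than the model.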
This implies a normative regulation goal. Unless our goal involves reducing the complexity of the environment (which is usually seen as a bad thing), or the regulation targets are given by outside forces, it seems easier to change the regulation targets?