I disagree. The scope of relevant problems might seem large to us, but why should it seem so to a planet-sized AI? Moreover, the upper bound on any problem's difficulty is set by how hard it is to circumvent the reward function and wirehead.
I think "foolish" is a category mistake. You are making a normative judgement where understanding calls for a descriptive or functional one. It might be more productive to abstain from normative judgements entirely in scientific or philosophical contexts.