The problem with theories like mirror neurons or IIT is not that they are wrong, but that they are superstitious. Their proposed elements offer no mechanism by which they could actually produce the result they purport to explain.
I think that this is clearly wrong. My Roomba reacts to its environment, and it is certainly not conscious. A neural learning algorithm that improves future performance based on past interactions is also usually not self-aware.
You are only conscious of things that require your attention. Once a general AI like the one you envision has solved all relevant problems, it will cease to be conscious, I think.