The problem is that Tononi and Koch have assigned properties to consciousness that are incompatible with functionalism, i.e. they want to explain a phenomenon that is outside the scope of computational theories. Give them some credit for trying the impossible!
-
Replying to @artilectium @examachine
IIT is as much among the top theories of consciousness as Manichaeism is among the top theories of cosmology. Certainly among the top ten.
-
Replying to @XanderNerdski @Grady_Booch
I think that this is clearly wrong. My Roomba reacts to its environment, and it is certainly not conscious. A neural learning algorithm that improves future performance based on past interactions is also usually not self-aware.
-
Replying to @examachine @XanderNerdski
You are only conscious of things that require your attention. Once a general AI as you envision it has solved all relevant problems, it will cease to be conscious, I think.
-
Replying to @examachine @XanderNerdski
I disagree. The scope of relevant problems might seem large to us, but why should it seem large to a planet-sized AI? Furthermore, the upper bound on any problem's difficulty is given by how hard it is to circumvent the reward function and wirehead.
-
Survival is irrelevant. Cognition is irrelevant. Sapience is irrelevant. Consciousness is irrelevant. Unless a reward function makes them relevant somehow.
-
Replying to @examachine @XanderNerdski
The reward function (implemented as cybernetic motivational dynamics) is the evolutionary solution to survival.