Anthropomorphism & anthropocentrism are two major cognitive biases that prevent us from thinking clearly about non-human systems (incl. AI)
-
However, they can act as "agents" when they are guided and utilized by conscious entities. They become agents through, and in, that interaction.
-
When people refer to AI agents they aren't talking about free will or consciousness. They're talking about goal-oriented behavior.
-
A reinforcement learner fairly clearly doesn't have free will or consciousness, but it certainly has goals and attempts to achieve them.
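The point can be made concrete with a toy example (not from the thread): a minimal sketch of an epsilon-greedy bandit learner. It pursues a goal, maximizing reward, and adjusts its behavior to achieve it, with no free will or consciousness anywhere in the loop. All names and parameters here are illustrative.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Toy goal-directed agent: learn which arm pays best, then exploit it."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # how often each arm was pulled
    estimates = [0.0] * n     # running estimate of each arm's mean reward
    for _ in range(steps):
        # Explore occasionally; otherwise act to maximize estimated reward.
        if rng.random() < epsilon:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the sample mean for the chosen arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.9])
best = max(range(3), key=lambda a: counts[a])
```

After enough steps the agent concentrates its pulls on the highest-paying arm: "goal-oriented behavior" in exactly the deflationary sense meant above.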
-
New conversation
-
-
There was a
@BBC program on robots. The journalists treated the robot as a species, almost human. The US developer was just, "no, it's a machine, like a washing machine."
-
-
-
Do you think we have free will? If so, won't AI systems have it eventually? Same for consciousness.
-
They have as much free will as you and I do. The consciousness question remains unresolved.
-
To my understanding, the dividing line is whether a system can predict and decide/alter its future existence, at varying degrees of complexity.
-
Humans aren't good at evaluating this about themselves either, e.g. the belief that the (in)determinism of physics would be perceptible as free will.
-
A deep-seated instinct both to inflate the importance of, and to preserve uncertainty about, things that come close to humans' conception of themselves...
End of conversation