Hmm, yes, this does need more explanation than is feasible on twitter!
If the program controls a robot and machine vision system that interact with a physical board, then it is more plausible to say that representations are inherently of board states (although even this turns out to be surprisingly tricky).
So, e.g., some neurons in human V1 cortex can reasonably be said to represent edge angles, because they’re causally coupled to those. The question is whether you can extend a representational story to cognition in general (and if so, how).
New conversation
Yes; ultimately I don’t agree with the conclusions Brian came to in _Origin of Objects_, but he wrestled seriously with the hard problems.
@drossbucket has alerted me to a forthcoming book from BCS arguing against AI on anti-cognitivist grounds: https://mitpress.mit.edu/contributors/brian-cantwell-smith
End of conversation
New conversation