Human perception involves considerable amounts of abstraction and symbolic reasoning -- unlike the input-output matching performed by machine "perception" models. pic.twitter.com/REdksXpB1z
True. But humans can describe this as "a bull with a beard" and a net could only classify it as a bull without additional features.
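The distinction above can be made concrete with a minimal sketch (all labels and scores here are hypothetical, not from any real model): a classifier with a closed output vocabulary can only ever answer with one of its training labels, so a novel composition like "a bull with a beard" is unreachable unless that exact phrase was a label.

```python
# Minimal sketch, assuming a hypothetical 3-class image classifier.
# A closed-set classifier can only emit labels it was trained with;
# it cannot compose a new description such as "a bull with a beard".
FIXED_LABELS = ["bull", "goat", "horse"]  # assumed closed label set

def classify(scores):
    """Return the single highest-scoring label -- no composition possible."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return FIXED_LABELS[best]

# The net sees a bearded bull, but its answer space is just the label list:
print(classify([0.7, 0.2, 0.1]))  # prints "bull" -- the beard is lost
```

The point is structural, not about accuracy: whatever the scores, the output is drawn from `FIXED_LABELS`, while a human can describe an unfamiliar combination of familiar parts.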
This is one reason that some of us have been saying for many years that deep learning has nothing to do with AGI. In fact it's a distraction. The brain doesn't model the world. It learns how to sense it. The world is its own model. We can sense objects we have never seen before.
The brain needs to model future events because reality is a dangerous place and we can't afford to reload from a save point.
"Aurochs" is singular.
Face ID can recognize a person after capturing ~1 min of their video.
1) That's a non-standard usage of "recognize". You can't recognize something you've never seen. I can recognize that there's a mammal there, but not that there's an auroch. 2) I think you're mistaken about human recognition of abstract/symbolic imagery on first exposure.
The opposite direction makes things clearer. Trained on nothing but natural images, a GAN could never crisply produce such an image, or the stick-figure drawing of a toddler. That stick figure represents generalization and reified abstraction.
That's in fact imprecise natural language. There's no auroch in that picture; it's a schematic drawing of an auroch. Several problems there, including AI narrowness.
This would have been even better if aurochs were not a real animal. Consider this interaction with "auroch" replaced by "brapmoth". I would have trusted your tweet and reply equally.