Another failure mode for deep learning, precisely anticipated by my 2001 book, The Algebraic Mind: https://www.newscientist.com/article/2198761-deepmind-created-a-maths-ai-that-can-add-up-to-6-but-gets-7-wrong/
-
I suspect you need to be more specific about which capabilities are missing. That would align with what many researchers are seeking. For example: is analogy-making sufficient, or is something else needed? I at least have a model that captures this.
-
Sorry, but it's really not fair to accuse me of being vague if you haven't read the detailed and specific explication of variables and operations over variables etc. in the book, or engaged with it in any way.
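The contrast at issue here can be sketched in a few lines of code. This is a hypothetical toy, not DeepMind's actual model or the book's formalism: a system that only memorizes input-output pairs seen during training (operands up to 6) has nothing to say about unseen inputs, while a rule stated over variables generalizes to any values.

```python
# Toy contrast: memorized lookup table vs. an operation over variables.
# Hypothetical illustration of the extrapolation failure discussed above.

# "Training data": all sums with operands 0..6
table = {(a, b): a + b for a in range(7) for b in range(7)}

def lookup_add(a, b):
    # Memorized mapping: returns None for pairs outside the training range
    return table.get((a, b))

def symbolic_add(a, b):
    # Rule over variables x + y: applies to any integers, seen or unseen
    return a + b

print(lookup_add(3, 3))    # 6 (seen during "training")
print(lookup_add(7, 1))    # None (7 never appeared as an operand)
print(symbolic_add(7, 1))  # 8
```

The point of the sketch is only that a function defined over variables carries no notion of a "training range", whereas a memorized mapping does.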
- 2 more replies
New conversation
-
I think the problem with that prediction is that humans can "manipulate symbols" so we know that at least in some sense you have to be right. It's too vague a prediction to be useful. I think a prediction of "analogy making" is a bit less vague and probably more useful.
-
Very interesting. The issues surrounding abstract reasoning are all here in my wiki-book 'Machine Psychology': https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Psychology%26NLP
- Grammar Induction
- Natural Language Processing
- Epistemology
The answer could be Inductive Logic Programming (ILP)! https://en.wikipedia.org/wiki/Inductive_logic_programming
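The ILP suggestion can be illustrated with a crude sketch: search a small hypothesis space of candidate rules and keep whichever is consistent with all training examples. This is a hypothetical toy, not a real ILP system (which would induce logic programs, not pick from a fixed menu), but it shows the key property: the learned hypothesis is a rule over variables, so it extrapolates beyond the training examples.

```python
# Crude ILP-flavored toy: pick the candidate rule consistent with
# all examples. Hypothetical sketch, not a real ILP system.

examples = [((1, 2), 3), ((2, 2), 4), ((3, 1), 4)]  # ((a, b), result)

# Small hypothesis space of rules over variables a, b
hypotheses = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "max": lambda a, b: max(a, b),
}

consistent = [name for name, rule in hypotheses.items()
              if all(rule(a, b) == r for (a, b), r in examples)]
print(consistent)  # ['add']

# Because the surviving hypothesis is a rule over variables,
# it generalizes to inputs never seen in the examples:
print(hypotheses["add"](7, 1))  # 8
```

A real ILP learner (e.g. one searching clause space under an entailment constraint) does something far more general, but the consistency-filtering idea is the same.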