One quick example that’s easy to do by phone - “Steps Toward Artificial Intelligence,” Minsky, 1960: https://courses.csail.mit.edu/6.803/pdf/steps.pdf Remarkable how the taxonomy of research areas roughly resembles today’s - learning, planning, etc. at a high level, lower-level stuff like exploration/hierarchy, etc.
Makes you think progress would have been vastly faster if those folks had today’s compute to try all this stuff out.
Obv lots of blind alleys but I think they would have pruned way faster in a different universe. He and many others asked all the right Qs.
(Bit of hyperbole as it’s not clear we even have the right Qs today, but point is, most big Qs have been asked for decades)
He did give NLP short shrift though (mostly discussed as a grammar-induction problem)
OK, the thread continues now that I can quote from stuff. As most folks know, AI has gone through many phases. Now deep learning is all the rage; once, expert systems were. People had the same sorts of discussions about excess hype, AI winters, robustness, etc.
Consider IJCAI 1985, when there was a panel called "Expert Systems: How Far Can They Go?" covering all such topics but in a very different (and in some ways opposite) technological context. See parts 1 and 2 here (will excerpt): https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/729/647 https://pdfs.semanticscholar.org/f6aa/427bf112ae7ebdc1b698d1b6f01c032f48cb.pdf
In addition to the meta topics (hype etc.) being discussed, I found an excerpt that illustrates how different the prevailing approach is today... expert systems are all about explicit knowledge and reasoning - neural nets (generally) are about fast/reactive/"intuitive" processing...
And these different emphases influenced how people characterized what AI could/couldn't do, where it could be applied, etc. Compare these quotes, one recent from @AndrewYNg and one from Stuart Dreyfus in 1985. They are basically the exact opposite of each other. pic.twitter.com/JpJzEascuV
And to his credit, Dreyfus was very clear that he wasn't saying intuition was forever unsolvable, and even plugged connectionism as a way to get at it. pic.twitter.com/kDecxxv1E4
Last throwback of the night - there were really fascinating arguments back in the day about why commonsense reasoning was so hard, and how to solve it. One approach championed by Feigenbaum and Lenat was to encode a lot of world knowledge by hand. ...
People like mentioning Cyc as a failure, but I find it more interesting to look at the detailed arguments people made for why it made sense, especially as people are thinking about ML-augmented approaches to address the same problem today. This is my favorite example -
In "On the Thresholds of Knowledge," Lenat and Feigenbaum (1987) laid out in detail one line of thinking for how AI should be solved, and presented several hypotheses about the relationship between knowledge, learning speed, etc., including this little number here: pic.twitter.com/vz3wywb9uc
I generally recommend poking around in AI paper/conference history (Quest for AI, Machines Who Think, Artificial Dreams, and other books are good, too). Other threads to follow: the @rodneyabrooks/@etzioni et al. debates in the 80s/90s, plus anything John McCarthy-related. /Fin
P.S. forgot the link to Lenat + Feigenbaum - here it is: http://ijcai.org/Proceedings/87-2/Papers/122.pdf
OK I couldn't resist, one more: lots of funny stuff/notes to self in McCarthy's unfinished book, including this to-be-maybe-deleted jab at ML. http://jmc.stanford.edu/articles/logicalai/logicalai.pdf pic.twitter.com/w6RMhUT737
End of conversation
New conversation
What about the real AI classic: https://academic.oup.com/mind/article/LIX/236/433/986238
Definitely
End of conversation
New conversation
As a relative novice, I speculate that's particularly true if you believe deep learning is running out of gas. Whether or not it's a dead end, lots of people claim to be looking for the next leap or another path. Thoughts?
Yeah I think there's a lot of inspiration to be found in what people with less compute and different biases came up with.
End of conversation