The evolutionary task is: what are the meaningful features of this situation I’m in right now? What possibilities do they afford? How can a depth-10 circuit compute this? Taking “Parallel Distributed Processing” seriously: only by considering all possibilities simultaneously.
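The "consider all possibilities simultaneously" intuition can be sketched in code. This is purely an illustrative toy (not from the thread): serially checking N candidate features takes N steps, but a single PDP-style layer scores all of them in one fixed-depth parallel operation, no matter how large N is. All names and sizes here are hypothetical.

```python
import numpy as np

# Toy PDP-style sketch: N feature detectors evaluated at once.
# A serial loop over detectors would take N steps; the vectorized
# matrix-vector product is one fixed-depth operation regardless of N.

rng = np.random.default_rng(0)

N = 10_000                             # number of candidate features (hypothetical)
situation = rng.normal(size=64)        # current input, as a toy 64-d encoding
detectors = rng.normal(size=(N, 64))   # one weight vector per feature detector

# One matrix-vector product scores every possibility in parallel.
scores = detectors @ situation
best = int(np.argmax(scores))          # the most "relevant" feature wins

print(best, scores.shape)
```

The point is only the shape of the computation: depth stays constant while width absorbs the number of possibilities, which is the sense in which a shallow circuit can "consider everything at once."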
Humans are *terrible* at reasoning. The one thing we are extraordinarily good at is bringing to bear relevant “background understanding” on everything we encounter.
@vervaeke_john argues, persuasively, that this is THE central issue for cognitive science: http://www.ipsi.utoronto.ca/sdis/Relevance-Published.pdf
Because Heidegger. Did you listen to @vervaeke_john's talk about that? (I recommended it an hour ago.) Or you could read the explanation in this TRUE story about how I became a character in a Ken Wilber novel, because Heidegger. https://meaningness.com/metablog/ken-wilber-boomeritis-artificial-intelligence
This was a central point in my PhD research: how do we get intelligent real-time activity while taking seriously the constraint that neural processing is extraordinarily slow?
Replying to @Meaningness @vervaeke_john
Wow, your thesis looks fascinating: https://apps.dtic.mil/dtic/tr/fulltext/u2/a228626.pdf You might like Cisek's version of neuroscience in this regard: http://www.cisek.org/pavel/Pubs/CisekKalaska2010.pdf and https://link.springer.com/article/10.3758/s13414-019-01760-1
There are some recurrent ANN models now that do things more like visual routines (https://arxiv.org/abs/1502.04623), and some that integrate continuous prediction and action (https://arxiv.org/abs/1803.10760).
(Noting that it looks like you basically did deep predictive learning and/or RL in one of your appendices.) In any case, no modern system has 10^11 parallel elements... barring that, what do you think is the path forward?
Replying to @AdamMarblestone @vervaeke_john
Yes, I spent six months on reinforcement learning with backprop in 1988. (I wasn't the first to do this!) I didn't get far and gave myself an appendix as a consolation prize. TBF, my then-supercomputer ran at less than one megaflop :(
In the early 90s I replicated a bunch of the hot backprop work and concluded that the researchers were mostly fooling themselves, and that gave me skeptical priors for the current DL work. Some results from that seem real but a lot of it is also self-deception (plus hype ofc).
If I could see a way forward for AI now, I'd be pursuing it (or, I suppose, trying to stop other people from pursuing it, depending on what I thought about the safety debate!). Nothing looks promising to me atm.
Replying to @AdamMarblestone @vervaeke_john
Thanks for saying nice things about my thesis!