Easy take: calling linear algebra “quantum physics” and “artificial intelligence” is hype. Deeper: the huge influx of physicists into AI has produced an intellectual monoculture that isn’t capable of addressing key problems in the field. https://twitter.com/WIRED/status/1181437300414275584
Replying to @Meaningness
Most of the people I know who are crankiest about the culture of AI are physicists-turned-AI people. I doubt they're a cause of monoculture; it seems quite the reverse: they're usually the ones saying "hey, fiddling with hyperparameters is the wrong thing to be doing..."
1 reply 0 retweets 18 likes
Replying to @michael_nielsen
Yes, and "let's look at the energy surface" leads to insights that vaguely-motivated tweaking can't. (Such tweaking being an approach CS-educated folks are liable to fall into.) But it doesn't lead to the insights that understanding mechanism-domain interactions can.
2 replies 0 retweets 8 likes
Replying to @Meaningness
This seems very different from the original assertion, now more like "Are ideas adapted from physics sufficient to solve the main problems of AI?" Seems the answer is "obviously not", though of course exploring ideas from many different domains seems likely a good thing.
3 replies 0 retweets 3 likes
Replying to @michael_nielsen @Meaningness
Hopfield networks, Boltzmann machines, renormalization group explanations of machine learning models - I'm glad all these things are being explored, although I have low confidence any will turn out to be on the critical path to AI.
2 replies 0 retweets 0 likes
Replying to @michael_nielsen @Meaningness
To your original point about monoculture: in each case, some determined exploration seems good. Taking over the field is bad. Funny: systemic positive feedback effects mean that directions seem to be either greatly underexplored or overexplored.
2 replies 0 retweets 2 likes
Replying to @michael_nielsen @Meaningness
(By which I mean: ideas languish for a long time, with just one or two people interested. Then a program officer starts to fund and promote, universities start to hire, students are trained, and you get an intellectual bubble/fashion. There's no mechanism for a good equilibrium.)
1 reply 0 retweets 3 likes
Yes, this is a malign dynamic. Five years ago I seriously considered pursuing funding for an anti-AI lab that would do the control experiments that the deep learning people avoid, thereby (probably) helping the field avoid going off the deep end (so to speak)!
Replying to @Meaningness @michael_nielsen
Skepticism keeps a field healthy, and it's conspicuously lacking here. I hoped some funder would recognize the value of that.
0 replies 0 retweets 1 like