AI doesn’t replicate. Having worked in the field, I can usually see why a paper’s result is nonsense, but the public can’t, and many researchers can’t. https://twitter.com/stephaniemlee/status/964612382650646529
The one that got me really annoyed was XOR. The narrative was that Minsky & Papert unfairly killed perceptrons with it, and that you could learn XOR if you added hidden layers. 1/
This turned out not to be true in any interesting sense. You can compute XOR with a feedforward network, but backprop won’t learn it reliably or in a reasonable length of time. It has to get lucky. 2/
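You can check this claim yourself. Below is a minimal sketch (my own illustration, not the setup from any particular paper): a 2-2-1 sigmoid network trained on XOR with plain online backprop and squared-error loss. The function names (`train_xor`, `forward`), the learning rate, the epoch count, and the uniform(-1, 1) initialization are all assumptions chosen for illustration; whether a given random seed converges depends on exactly these choices.

```python
import math
import random

# The four XOR input/target pairs.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w_h, w_o, x1, x2):
    # Hidden activations (2 units, each with 2 input weights + bias),
    # then the single output activation.
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return y, h

def train_xor(seed, epochs=4000, lr=0.5):
    """Train a 2-2-1 sigmoid net on XOR with online backprop.
    Returns the final mean squared error over the four patterns.
    Hyperparameters here are illustrative assumptions, not canonical."""
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for (x1, x2), t in XOR:
            y, h = forward(w_h, w_o, x1, x2)
            # Output delta for squared error through the sigmoid.
            d_y = (y - t) * y * (1 - y)
            # Hidden deltas (use w_o BEFORE updating it).
            d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                w_o[j] -= lr * d_y * h[j]
            w_o[2] -= lr * d_y  # output bias
            for j in range(2):
                w_h[j][0] -= lr * d_h[j] * x1
                w_h[j][1] -= lr * d_h[j] * x2
                w_h[j][2] -= lr * d_h[j]  # hidden bias
    return sum((forward(w_h, w_o, x1, x2)[0] - t) ** 2
               for (x1, x2), t in XOR) / 4

if __name__ == "__main__":
    errs = [train_xor(seed) for seed in range(10)]
    solved = sum(e < 0.01 for e in errs)
    print(f"{solved}/10 random initialisations reached MSE < 0.01")
```

Running this over many seeds shows the point of the tweet: success is initialization-dependent, and some runs stall on a plateau instead of reaching a low-error solution.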