AI doesn’t replicate. Having worked in the field, I can usually see why a paper’s result is nonsense, but the public can’t, and many researchers can’t. https://twitter.com/stephaniemlee/status/964612382650646529
My recollection is that other people figured this out a few years later, and that mostly killed off backprop research until ~2012. My memory of the details is vague, however. There were a few other problem cases, but XOR and RL were the ones that seemed most significant. 3/3
There’s a new paper that came out just a couple of days ago that’s making the rounds, called “Deep Reinforcement Learning Doesn’t Work” or something like that. I haven’t had time to look at it, but it didn’t work in 1992, so I’m not surprised.
OH. And I thought I was doing something wrong back when I was trying to build an intuition for backprop by simulating it by hand in a spreadsheet and couldn’t get it to converge on XOR.
I did this 25 years ago, but my recollection is that backprop can only find the XOR solution by accident. There’s no guiding gradient: you have to set the hyperparameters to force a random walk over the whole space and hope it eventually falls into the golf hole.
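For anyone who wants to reproduce the experiment, here is a hypothetical minimal sketch, not code from the thread, of a 2-2-1 sigmoid network trained by backprop on XOR. The seed, the init range, and the learning rate are all assumptions chosen for illustration; rerunning with different seeds shows the sensitivity described above, since some initializations converge while others stall with every output near 0.5.

```python
import numpy as np

# Hypothetical sketch (not from the thread): a 2-2-1 sigmoid network
# trained with plain batch gradient descent on XOR. The seed, init
# range, and learning rate are assumptions; convergence depends on them.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of two sigmoid units, one sigmoid output.
W1 = rng.uniform(-2, 2, (2, 2))
b1 = np.zeros((1, 2))
W2 = rng.uniform(-2, 2, (2, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error loss, gradients via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Close to [0, 1, 1, 0] if training found the solution;
# stuck near 0.5 everywhere if it stalled on a plateau.
print(out.round(3).ravel())
```

Vanilla gradient descent with squared error was the standard setup circa 1992; modern choices such as cross-entropy loss and better initialization make XOR considerably less finicky.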