I’ve been skeptical about DL results because 25 years ago I reran the key experiments that were hyped as showing backprop (the underlying tech) was incredible. In each case I found that the researchers were fooling themselves. Not deliberate fraud, but sloppy work.
Parallel xor is still not a great test of anything; but the right defense of backprop should have been “xor is not a good test; here is a good test, and backprop passes.”
-
To be totally careful I should also say that my memory is that the training time scaled exponentially (with the size of the parallel xor problem), but it was a long time ago, and I may be wrong.
-
That scaling is probably sensitive to the objective, e.g., squared error on the output as an n-dim real vector should train each bit independently. Regardless, it's true that "ML is magic" is often the intuitive takeaway, which is wrong and misleading!
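To make that point concrete, here is a minimal sketch (mine, not from the thread) of the parallel-xor setup under those assumptions: n independent xor problems, one small MLP trained by plain backprop with squared error on the n-dimensional output. The network size, learning rate, and epoch count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_xor_data(n_bits):
    """All 2**(2*n_bits) binary input patterns; target bit i is xor of inputs 2i and 2i+1."""
    m = 2 * n_bits
    X = np.array([[(k >> j) & 1 for j in range(m)] for k in range(2 ** m)], dtype=float)
    Y = np.logical_xor(X[:, 0::2], X[:, 1::2]).astype(float)
    return X, Y

def train_mlp(X, Y, hidden=16, lr=1.0, epochs=10000):
    """One-hidden-layer MLP trained by plain backprop with squared-error loss."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                  # shared hidden layer
        P = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))  # sigmoid outputs in [0, 1]
        # Squared error sums over output units, so the error signal at output i
        # depends only on target bit i -- the sense in which each xor is trained
        # "independently" (the hidden layer is still shared across all bits).
        dZ2 = (P - Y) * P * (1.0 - P)
        gW2, gb2 = H.T @ dZ2, dZ2.sum(0)
        dZ1 = (dZ2 @ W2.T) * (1.0 - H ** 2)
        gW1, gb1 = X.T @ dZ1, dZ1.sum(0)
        n = len(X)
        W1 -= lr * gW1 / n; b1 -= lr * gb1 / n
        W2 -= lr * gW2 / n; b2 -= lr * gb2 / n
    return P

if __name__ == "__main__":
    X, Y = parallel_xor_data(n_bits=3)  # 3 xor problems over 6 input bits, 64 patterns
    P = train_mlp(X, Y)
    print("bitwise accuracy:", ((P > 0.5) == (Y > 0.5)).mean())
```

Because the loss decomposes over output units, each bit gets its own error signal; whether the shared hidden layer still makes joint training blow up with n is exactly the empirical question the old experiments were probing.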