I agree that authors always need to be prepared to fully demonstrate how they got their results. But if ML were like psychology, it would turn out that GANs don't work, that ImageNet challenges are mostly won with selectively reported random guesses, and so on.
At the moment, it may be that some proudly reported 2% advantages over competing algorithms turn out to be flukes, but it is unlikely that the methods simply don't work, or that a whole family of models will collapse under scrutiny, as happens in psychology.
The crisis may not be as big as psychology's, but you can have a problem without fraud.