So what you're saying is, to make more algorithmic progress towards AGI, we need not more ideas but more... computing power? ( ͡° ͜ʖ ͡°)
-
-
-
Nope, not at all (though perhaps that is true). What I am worried about is the decoupling of scientific communities: you see a kind of tribalism emerging, with some believing in the value of empiricism/expensive testing over deep theory, and vice versa. What do you think?
-
If you have more ideas than you can test well, what good are they all? If just having ideas were enough, we would all have been using resnets since 1988, when they worked beautifully with a clear justification & writeup... on one toy problem, because that was all they could afford to run.
-
This is how I increasingly feel reading DRL papers over the past two years: "half of these papers are useless, and the other half are powerful techniques which will be developed for decades to come; unfortunately, I have no idea which half."
-
Would love to hear a list of what you consider to be underpriced papers in the recent ML literature! e.g. net2net, what else?
-
I'm not sure - that's the problem! (I 𝘸𝘢𝘴 really pleased to see net2net used at industrial scale in OA5, though, and for way faster neural architecture search.) As far as my pet interests go, I like the resurgence of deep environment models for planning over the past year.
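For readers who haven't met net2net: the core trick is a function-preserving transformation, e.g. Net2WiderNet, which copies hidden units and rescales their outgoing weights so the widened network computes exactly the same function before training resumes. Below is a minimal NumPy sketch of that widening step; the shapes and function names are my own, and this is an illustration of the idea from Chen et al. (2015), not the OA5 or architecture-search code.

```python
# Minimal Net2WiderNet sketch: widen a hidden layer without changing the
# network's function. Names and shapes are hypothetical, for illustration only.
import numpy as np

def net2wider(W1, b1, W2, new_width, rng):
    """Widen a hidden layer from W1.shape[1] units to `new_width` units
    while preserving the network's input-output function.

    W1: (n_in, n_hidden)   weights into the hidden layer
    b1: (n_hidden,)        hidden-layer biases
    W2: (n_hidden, n_out)  weights out of the hidden layer
    """
    n_hidden = W1.shape[1]
    assert new_width >= n_hidden
    # Map each new unit to an existing unit; original units map to themselves.
    mapping = np.concatenate([np.arange(n_hidden),
                              rng.integers(0, n_hidden, new_width - n_hidden)])
    # How many copies of each original unit exist after widening.
    counts = np.bincount(mapping, minlength=n_hidden)

    W1_new = W1[:, mapping]                              # copy incoming weights
    b1_new = b1[mapping]                                 # copy biases
    W2_new = W2[mapping, :] / counts[mapping][:, None]   # split outgoing weights
    return W1_new, b1_new, W2_new

# Sanity check: the widened network computes the same function.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
W1, b1, W2 = rng.standard_normal((3, 5)), rng.standard_normal(5), rng.standard_normal((5, 2))
W1w, b1w, W2w = net2wider(W1, b1, W2, new_width=8, rng=rng)
h, hw = np.maximum(x @ W1 + b1, 0), np.maximum(x @ W1w + b1w, 0)
assert np.allclose(h @ W2, hw @ W2w)
```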
End of conversation
New conversation -
-
-
Tbh, this has existed since the inception of deep learning. Famously, Hinton asked a vision professor (I think Malik) what would convince him to try 'deep learning', and the answer was state of the art on ImageNet (MNIST wasn't convincing).
-
-
-
I'm really waiting for RUDDER to be applied to various tasks, and for follow-up ideas to RUDDER. Somebody, please do it!
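For context, RUDDER's core move is return decomposition: train a sequence model to predict the episode's final return from state-action prefixes, then redistribute that return across timesteps as the step-to-step differences in the prediction. Here is a toy sketch of just the redistribution step, assuming a trained return predictor already exists; the names are hypothetical, not the authors' code.

```python
# Toy sketch of RUDDER-style reward redistribution (Arjona-Medina et al., 2018).
# Assumes `return_predictor` is an already-trained model g(prefix) -> predicted
# final return for that state-action prefix.
import numpy as np

def redistribute_rewards(episode, return_predictor):
    """Turn a delayed episodic return into per-step rewards.

    episode: list of (state, action) pairs for one trajectory
    Returns one redistributed reward per timestep; their sum equals the
    prediction for the full episode.
    """
    preds = np.array([return_predictor(episode[:t + 1])
                      for t in range(len(episode))])
    # Reward at step t is the change in predicted return caused by step t.
    return np.diff(np.concatenate([[0.0], preds]))
```

The redistributed rewards sum to the full-episode prediction, so (with an accurate predictor) the return is preserved while credit lands at the steps that actually changed the prediction, which is what shortens the delay.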
-