NN training goes through two distinct phases: (1) reducing classification error through drift; (2) optimally compressing the mutual information of the hidden layers through diffusion. https://arxiv.org/abs/1703.00810 pic.twitter.com/He7dNY3NSv
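The mutual-information curves behind this claim come from discretizing hidden-layer activations and treating each binned activation vector as a discrete state. A minimal sketch of that kind of estimate, assuming a small discrete input set and illustrative function names (not code from the paper):

```python
# Sketch: binning-based estimate of I(X;T) between inputs X and a
# hidden layer T. All names and parameters here are illustrative.
from collections import Counter
import numpy as np

def discretize(activations, n_bins=30):
    """Bin continuous hidden activations; each sample's binned
    activation vector becomes one hashable discrete state."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])  # bin index per unit
    return [tuple(row) for row in binned]

def mutual_information(xs, ts):
    """Plug-in estimate of I(X;T) for paired discrete samples:
    sum over (x,t) of p(x,t) * log2( p(x,t) / (p(x) p(t)) )."""
    n = len(xs)
    p_x, p_t = Counter(xs), Counter(ts)
    p_xt = Counter(zip(xs, ts))
    mi = 0.0
    for (x, t), c in p_xt.items():
        pxt = c / n
        mi += pxt * np.log2(pxt * n * n / (p_x[x] * p_t[t]))
    return mi

# Toy usage: 4 distinct inputs, fake hidden-layer activations.
rng = np.random.default_rng(0)
X = list(np.repeat(np.arange(4), 25))   # input identities
T = rng.normal(size=(100, 5))           # stand-in for hidden activations
print(mutual_information(X, discretize(T)))
```

The compression phase in the paper shows up as this estimate of I(X;T) falling over training epochs; note the estimate is sensitive to `n_bins`, which is part of why extending the analysis beyond toy data is contested.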
Now I will drink my first a.m. coffee and stop being such a grump.
Grump is good! That said, I think there are a couple of things here: (1) it's hard to extend this analysis past toy-ish data, since you need to be able to define an actual distribution over the inputs, and NNs are usually used in cases where we don't have one (images, for instance)
I think the "hype" is because a well-founded and well-motivated paradigm is being used to try to explain neural black boxes, not that we've gotten to the point of cool experimental verification.
Yes… I would reserve enthusiasm for the point at which there’s significant evidence. It’s easy to “apply” paradigms; hard to do so and get a meaningful result.