Media
-
#cosyne2020 submission deadline: 31 Oct, 11:59 pm **Pacific Time**. #cosyne2020 registration deadline: 31 Jan, 11:59 pm **Eastern Time**. (http://www.cosyne.org/c/index.php?title=Abstracts) pic.twitter.com/taTBBAiY3x
-
itermplot (https://github.com/daleroberts/itermplot) is a matplotlib backend that displays directly in your terminal (iTerm2). Really awesome resource if you (like me) enjoy working directly from the IPython repl! pic.twitter.com/oG0znEelVl
-
The greatest numerical algorithms of the 20th century, according to Nick Trefethen circa 2005 (http://people.maths.ox.ac.uk/trefethen/inventorstalk.pdf) pic.twitter.com/1okDhTC2Zk
-
Finally, we can understand how the network processes individual words (tokens) by looking at projections of the embedding vectors onto the principal eigenvectors of the system. Overall, we think these tools will help us demystify and understand how recurrent networks work! (4/4) pic.twitter.com/BwVqB2n8SK
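A minimal sketch of the kind of projection described above, assuming a token-embedding matrix `E` and a recurrent Jacobian `J` at an approximate fixed point; both are random stand-ins here, not the paper's trained network:

```python
import numpy as np

# Hypothetical stand-ins: E is a (vocab, d) token-embedding matrix,
# J is the (d, d) recurrent Jacobian at an approximate fixed point.
rng = np.random.default_rng(1)
vocab, d = 100, 32
E = rng.normal(size=(vocab, d))
J = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))

# Right eigenvectors of J, sorted so the largest-magnitude modes come first.
eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(-np.abs(eigvals))
top = eigvecs[:, order[:2]]  # two principal eigenvectors (may be complex)

# Project each token's embedding onto the top modes; scatter-plotting these
# coordinates is what reveals how tokens are organized.
proj = E @ top  # shape (vocab, 2)
print(proj.shape)
```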
-
The network dynamics are organized around a roughly 1-D approximate line attractor, which we identify by studying the eigendecomposition of the recurrent Jacobian of the dynamics at approximate fixed points. (3/4) pic.twitter.com/mFnfUWomwC
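A toy sketch of the recipe above, assuming a vanilla tanh RNN with random weights (not the trained sentiment network): find an approximate fixed point of the update map, form the recurrent Jacobian there, and inspect its eigenvalue magnitudes; eigenvalues near 1 indicate slow, attractor-like directions:

```python
import numpy as np

# Hypothetical vanilla RNN update: h_{t+1} = tanh(W h_t + b),
# with the input held fixed (absorbed into b). Weights are random stand-ins.
rng = np.random.default_rng(0)
n = 64
W = rng.normal(scale=0.9 / np.sqrt(n), size=(n, n))  # contractive scale
b = rng.normal(scale=0.1, size=n)

def step(h):
    return np.tanh(W @ h + b)

# Find an approximate fixed point h* ≈ step(h*) by iterating the map.
h = np.zeros(n)
for _ in range(2000):
    h = step(h)

# Recurrent Jacobian at h*: J = diag(1 - tanh^2(W h* + b)) @ W.
pre = W @ h + b
J = (1.0 - np.tanh(pre) ** 2)[:, None] * W

# Eigenvalue magnitudes; modes with |lambda| near 1 decay slowly.
eig_mags = np.sort(np.abs(np.linalg.eigvals(J)))[::-1]
print(eig_mags[:5])
```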
-
We analyze recurrent networks trained to perform sentiment classification, a standard natural language processing (NLP) task. We find that RNNs trained on this task are surprisingly interpretable using tools from dynamical systems. (2/4) pic.twitter.com/qNzzF5AOL7
-
Sneak preview of our poster tonight (#146) at #ICML2019! (Read the full paper at https://arxiv.org/abs/1806.10230) pic.twitter.com/hQP4mBQXEF
-
What are useful metrics for comparing representations in RNNs with the brain? How sensitive are these to architecture choices? Find out at our poster (III-51) tonight at #cosyne19! Awesome collaboration with @ItsNeuronal @MattGolub_Neuro @SuryaGanguli @SussilloDavid pic.twitter.com/fUyNP0NUAw
-
Great overview of recent work analyzing optimization trajectories of deep networks. In particular, for deep linear networks, you can get linear convergence to global minima if the initial weights are 'aligned' (see post for details). http://www.offconvex.org/2018/11/07/optimization-beyond-landscape/ pic.twitter.com/5Krd3z5DzU
-
New work: Learning optimizers with less mind-numbing pain! With improvements to stabilize meta-training, we find that we are able to train optimizers that beat well-tuned baselines on wall-clock time. (1/2) https://arxiv.org/abs/1810.10180 pic.twitter.com/4j9aex1guB
-
Next up in great talks: Ben Recht @beenwrekt on connections between control theory and reinforcement learning. (ICML 2018 tutorial) https://youtu.be/nF2-39a29Pw
-
If you’re into optimization, this talk from Mike Jordan is a really nice overview of a lot of interesting recent work: https://www.youtube.com/watch?v=wXNWVhE2Dl4
-
Check out our poster tonight at #cosyne18 (III-8). We show that deep networks trained to reproduce retinal responses to natural scenes also exhibit a whole host of previously published phenomena. The same models trained on white noise surprisingly don't have this property! pic.twitter.com/UdHWdYCpmB
-
The last figure from this paper (https://openreview.net/forum?id=HkmaTz-0W) suggests that ~80-90% of the variance in parameter updates can be captured by just two dimensions -- this seems insane to me. Is NN optimization inherently this low-dimensional? pic.twitter.com/G10PtOBBiU
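The measurement behind that figure can be sketched as a PCA over the training trajectory: stack one flattened parameter-update vector per step and ask how much variance the top two principal components explain. The updates below are synthetic stand-ins, not real training data:

```python
import numpy as np

# Synthetic stand-in for a training trajectory: one flattened
# parameter-update vector per optimization step.
rng = np.random.default_rng(0)
steps, n_params = 200, 500
updates = rng.normal(size=(steps, n_params))

# PCA via SVD of the mean-centered update matrix; the squared singular
# values are proportional to the per-component variances.
centered = updates - updates.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var = s ** 2
explained = var[:2].sum() / var.sum()
print(f"top-2 variance explained: {explained:.1%}")
```

For isotropic random updates like these the top-2 fraction is small; the striking claim in the paper is that real update trajectories concentrate far more of their variance in those two directions.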
-
I think people heavily underestimate the impact of international travel on carbon emissions -- relevant for conferences. https://www.washingtonpost.com/news/theworldpost/wp/2017/11/02/plane-pollution/ pic.twitter.com/TKiGczpoiS
-
3D-printed adversarial examples?! Seems to work from multiple angles. (Paper: https://openreview.net/pdf?id=BJDH5M-AW) http://youtu.be/YXy6oX1iNoA
-
"Definite optimism as human capital" by
@danwwang -- insightful read on the factors that drive human welfare and economic growthpic.twitter.com/pEvqM1ifL9
-
Some evening visualization fun: traversing a sharp ridge in a toy error landscape (using noise perturbations to approximate gradients) pic.twitter.com/kM9ki8D3NS
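The noise-perturbation gradient estimate mentioned above can be sketched in a few lines (evolution-strategies style): average finite-difference responses along random Gaussian directions. The ridge-shaped loss and sample counts here are illustrative choices, not taken from the original animation:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy error landscape with a sharp ridge along x[0] = 0.
    return 50.0 * x[0] ** 2 + 0.5 * x[1] ** 2

def approx_grad(f, x, sigma=1e-3, n_samples=512):
    # Estimate grad f(x) ~ E[ (f(x + sigma*eps) - f(x)) / sigma * eps ]
    # for standard-normal perturbations eps.
    eps = rng.normal(size=(n_samples, x.size))
    fx = f(x)
    deltas = np.array([f(x + sigma * e) - fx for e in eps])
    return (deltas[:, None] * eps).sum(axis=0) / (n_samples * sigma)

x = np.array([0.3, 1.0])
g = approx_grad(f, x)
print(g)  # noisy estimate of the true gradient [100*x[0], x[1]] = [30, 1]
```

Note the variance of the estimate grows with the gradient magnitude, which is exactly why traversing a sharp ridge with this estimator looks so jittery.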
-
This section from “A neural networks approach … to ask Barry Cottonfield to the junior prom” is too good. https://arxiv.org/abs/1703.10449 pic.twitter.com/28GHFVh8cX
-
New paper with @SuryaGanguli using proximal algorithms to learn nonlinear cascade models in the retina is up! http://biorxiv.org/content/early/2017/03/27/120956 pic.twitter.com/trjRX44bAO
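To illustrate the family of methods involved (not the paper's specific cascade model), here is a minimal proximal-gradient (ISTA) sketch for l1-regularized least squares, min_w 0.5*||Aw - y||^2 + lam*||w||_1, on synthetic data:

```python
import numpy as np

# Synthetic sparse regression problem (stand-in data, not retinal recordings).
rng = np.random.default_rng(0)
m, n = 80, 20
A = rng.normal(size=(m, n))
w_true = np.zeros(n)
w_true[:3] = [2.0, -1.5, 1.0]  # sparse ground truth
y = A @ w_true + 0.01 * rng.normal(size=m)

lam = 0.1
step_size = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth term

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: gradient step on the smooth part, prox step on the l1 part.
w = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ w - y)
    w = soft_threshold(w - step_size * grad, step_size * lam)

print(np.round(w, 2))  # recovers an approximately sparse solution
```

The appeal of the proximal framework is that the non-smooth regularizer is handled exactly through its prox operator while the data-fit term is handled by ordinary gradients.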