Tweets
Roger Grosse Retweeted
"Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" Poster #149 (Thu 10:45am, East Exh. Hall B+C) by awesome *undergrads*
@qiyang_li and Saminul Haque w/ @CemAnil1, @james_r_lucas, @RogerGrosse pic.twitter.com/TARxwapSSm
Roger Grosse Retweeted
For those who are interested in VOGN, you might also like to read my noisy natural gradient paper (https://arxiv.org/abs/1712.02390), which derived the same connection between optimization and variational inference as VOGN (we also discussed the K-FAC approximation, not just the diagonal one). https://twitter.com/RobertTLange/status/1204109176261238789
Roger Grosse Retweeted
(1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123). We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.
@georgejtucker @RogerGrosse @Mo_Norouzi pic.twitter.com/bVkLCMuKmH
Roger Grosse Retweeted
Will arrive in Vancouver for
#NeurIPS2019 a bit late (Monday night) due to a final exam. I will present two posters (see below) in the main conference and one poster in the SGO workshop (@YuanhaoWang3 will give a 30-min contributed talk on that). Reach out if you'd like to chat.
This architecture is practical (if a bit slow) to train, and competitive with other deterministic provable adversarial defenses. (Though still far behind randomized smoothing.)
Turns out this is OK, since if you use 2N channels and project down, there's one connected component that can represent any orthogonal convolution over N channels. So you lose at most a factor of 2.
Our architecture uses orthogonal convolutions based on Lechao Xiao's initialization scheme. But when optimizing over this space, there's a surprising problem: the space of orthogonal convolutions is disconnected!
Previously we introduced fully connected architectures with tight Lipschitz bounds. Now we've extended this to conv nets. Good for provable adversarial robustness and Wasserstein distance estimation. Joint work w/ @qiyang_li, Saminul Haque, et al. https://arxiv.org/abs/1911.00937
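The property this thread builds on is easy to illustrate in a few lines. A hedged sketch, not the paper's construction (which parameterizes orthogonal *convolutions* via Lechao Xiao's initialization scheme): here we simply take the Q factor of a QR decomposition of a random matrix, which is orthogonal and hence a norm-preserving, Lipschitz-1 linear map.

```python
# Illustrative sketch (not the paper's method): an orthogonal matrix is a
# Lipschitz-1 linear map, since ||Qx|| = ||x|| for every x. We build one by
# taking the Q factor from a QR decomposition of a random Gaussian matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)
orthogonality_err = np.max(np.abs(Q.T @ Q - np.eye(n)))  # ~0: Q^T Q = I
norm_ratio = np.linalg.norm(Q @ x) / np.linalg.norm(x)   # ~1: norm preserved
det_sign = np.sign(np.linalg.det(Q))                     # +1 or -1
```

The determinant sign being either +1 or -1 is exactly the disconnectedness issue the thread raises: the orthogonal group has two connected components, and gradient descent cannot cross between them. The 2N-channel trick described above sidesteps this by optimizing in a larger space where one component already covers all N-channel orthogonal convolutions.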
This is a phenomenon we also found when evaluating GAN likelihoods. Evaluating GAN likelihoods is computationally challenging, but we can learn a lot from it! https://arxiv.org/abs/1611.04273
From David Bau et al.: more evidence that GANs produce seemingly high-quality image samples by omitting hard-to-model objects. https://arxiv.org/pdf/1910.11626.pdf
Neat work by
@youjiaxuan, Haoze Wu, et al. Something I hadn't appreciated until recently is that learning SAT solvers is bottlenecked by "data," i.e. interesting problem instances. They can generate good-enough random instances to tune a solver: https://arxiv.org/abs/1910.13445
Roger Grosse Retweeted
The
#NeurIPS2019 camera-ready version of our NQM paper (https://arxiv.org/abs/1907.04164) is out! We added a new section analyzing exponential moving average (EMA). EMA accelerates training a lot with little computational overhead. REALLY surprised that EMA hasn't been widely used so far! https://twitter.com/Guodzh/status/1148778925734150150
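The EMA trick mentioned here is simple to state in code. A minimal sketch, assuming the standard shadow-weight formulation; the decay value and the toy "training" loop below are illustrative, not taken from the paper:

```python
# Parameter EMA: keep a shadow copy of the weights, updated after every
# optimizer step as ema = decay * ema + (1 - decay) * param, and use the
# shadow weights for evaluation.

def ema_update(ema, param, decay=0.99):
    """One EMA step; returns the new shadow value."""
    return decay * ema + (1.0 - decay) * param

# Toy usage: track a single scalar weight over a few updates.
w = 1.0
shadow = w
for step in range(5):
    w = w - 0.1                                  # pretend optimizer step
    shadow = ema_update(shadow, w, decay=0.9)    # shadow lags behind w
```

The shadow weights average out the noise of recent steps, which is where the reported acceleration comes from; the overhead is one extra copy of the parameters and one multiply-add per weight.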
Quantum computing researchers are debating fundamental science while AI researchers are stuck arguing whether the word "solve" maybe gave some people the wrong impression. https://twitter.com/thomasgwong/status/1186458912242970624
In ML, it's often simultaneously the case that (1) the announcement of a research result is accurate, informative, and measured, and (2) most of the excitement is from people misinterpreting the result as more profound than it really is.
An engaging and accessible overview of the challenges involved in building AI systems consistent with human values, and what aspects of the problem our current algorithmic techniques can and can't address. We need more books like this! https://twitter.com/Aaroth/status/1169639864192684032
Roger Grosse Retweeted
New work on solving minimax optimization locally. With @YuanhaoWang3 and Jimmy Ba. We propose a novel algorithm that converges to, and only to, local minimax points. The main innovation is a correction term on top of gradient descent-ascent. Paper link: https://arxiv.org/pdf/1910.07512.pdf
In deep learning research, the sky turns out to be blue, but only if you measure it very carefully. Interesting meta-scientific paper on evaluating neural net optimizers, by Choi et al. https://arxiv.org/pdf/1910.05446.pdf
Roger Grosse Retweeted
University of Toronto is hiring broadly in robotics across departments - CS, ME, ECE, and CogSci. If you are planning to be on the robotics academic market, get in touch. https://buff.ly/2q1gr5O
@UofTRobotics @VectorInst @UofTCompSci @uoftmie @uoftengineering pic.twitter.com/SE1wXlN5sj
Haha, they think they can make statistical techniques sound fancy by emphasizing that physicists used the same mathematical tools. OK now, back to estimating the partition function of this Boltzmann machine... https://twitter.com/WIRED/status/1181437300414275584
I wonder if all the AI researchers they asked to review this book were unwilling to write articles for Nature. https://twitter.com/mark_riedl/status/1180988651473440768