Tweets
James Lucas Retweeted
Glad I don't have to keep this a secret anymore -
@fatconference is in Toronto next year! Excited to welcome everyone up to Canada in January #FAT2020 #FAccT2021 pic.twitter.com/Zg5ymuzkVH
James Lucas Retweeted
How do you increase AI capacity in a workforce? This article is about some of the professional development programming we're doing at Vector Institute
@VectorInst. #FutureOfWork https://www.linkedin.com/pulse/how-do-you-increase-ai-capacity-workforce-shingai-manjengwa via @LinkedIn #AI #DataScience #MachineLearning #HR #ProfessionalDevelopment pic.twitter.com/iX1tTc3aCB
James Lucas Retweeted
To jump start the new year, a blog post on geometric series. https://francisbach.com/the-sum-of-a-geometric-series-is-all-you-need/
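For reference, the identity the linked post's title plays on is the standard closed form of a geometric series:

```latex
\sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1 - r} \quad (r \neq 1),
\qquad
\sum_{k=0}^{\infty} r^k = \frac{1}{1 - r} \quad (|r| < 1).
```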
"vague weasel words do not a reason for rejection make" Seriously, amazing work from this meta reviewer. (Though, it is unfair that the reviewers made their job so much harder.) https://twitter.com/shaohua0116/status/1207933671073665024
I'm at the #NeurIPS2019 ML With Guarantees workshop today, talking about how hard it is to generalize to data without iid assumptions. Contributed talk at 11:30am and poster all day. Come speak with me about theory in settings like few-shot learning! Work w/ @mengyer + Rich Zemel pic.twitter.com/fynnKilUMa
James Lucas Retweeted
Can we learn generative language models for the joint distribution over several languages? Come find my poster for Multilingual KERMIT at the Perception as Generative Reasoning (PGR) Workshop at East Meeting rooms 1-3 from 2:30-3:30pm! #NeurIPS2019 pic.twitter.com/AKVBFaNUef
Come speak with @michaelrzhang and me about Lookahead now! Poster #200 #NeurIPS2019 pic.twitter.com/0Fzpsks0m3
James Lucas Retweeted
Welcome to our (@ssydasheng, @roydanroy) poster today at 05:00 -- 07:00 PM (#227) https://twitter.com/roydanroy/status/1164354701476999169 pic.twitter.com/glZWD8nHit
Come speak with us about posterior collapse in VAEs! (In 30 minutes...) pic.twitter.com/H0s23LKkx4
(4) Information-Theoretic Limitations on Novel Task Generalization, ML with Guarantees workshop. We measure theoretical hardness of settings like few-shot learning. I'll be presenting this as a contributed oral at 11:30 and during the poster sessions.
@mengyer and Rich Zemel
(3) Lookahead Optimizer: k steps forward, 1 step back. Thursday evening, East Hall B+C (#200) We propose a new optimization algorithm that wraps around existing optimizers, reducing variance and improving convergence. Work with @michaelrzhang, Geoff Hinton, and Jimmy Ba. pic.twitter.com/DcbR0qL4sF
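The "k steps forward, 1 step back" wrapper can be sketched as below. This is a minimal illustration under assumptions, not the authors' reference implementation: plain SGD stands in for the inner optimizer, and the names `lookahead_sgd`, `grad_fn`, `inner_lr`, and the default hyperparameters are chosen here for the example.

```python
import numpy as np

def lookahead_sgd(grad_fn, w0, k=5, alpha=0.5, inner_lr=0.1, outer_steps=20):
    """Sketch of the Lookahead idea: an inner optimizer (plain SGD here)
    takes k fast steps, then the slow weights interpolate a fraction
    alpha of the way toward the fast weights."""
    slow = np.asarray(w0, dtype=float)       # slow ("lookahead") weights
    for _ in range(outer_steps):
        fast = slow.copy()                   # fast weights restart at slow weights
        for _ in range(k):                   # k steps forward
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)        # 1 step back: move toward fast weights
    return slow

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = lookahead_sgd(lambda w: 2.0 * w, np.array([3.0, -2.0]))
```

The interpolation step smooths out the inner optimizer's trajectory, which is the variance-reduction effect the tweet alludes to.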
(2) Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. Thursday morning, East Hall B+C (#149) The secret is in finding the right way to learn orthogonal convolutions. Work with 2 awesome **undergrads** Qiyang Li + Saminul Haque, + others pic.twitter.com/FcF0RkS3fb
(1) See also our website with code/video/poster: https://sites.google.com/view/dont-blame-the-elbo
(1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123) We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.
@georgejtucker @RogerGrosse @Mo_Norouzi pic.twitter.com/bVkLCMuKmH
I'm in Vancouver for #NeurIPS2019. If you're here and want to chat let me know! Also, I'm presenting some work...
Come see us on Thursday in Vancouver, Poster #200! https://twitter.com/michaelrzhang/status/1202252721933422595
James Lucas Retweeted
Previously we introduced fully connected architectures with tight Lipschitz bounds. Now we've extended this to conv nets. Good for provable adversarial robustness and Wasserstein distance estimation. Joint work w/ @qiyang_li, Saminul Haque, et al. https://arxiv.org/abs/1911.00937
I saw quite a bit of negativity around NeurIPS reviews this year... While I'm sure it's not universal, all of the papers I am reviewing have had a healthy amount of constructive reviewer discussion! Stay hopeful, friends.
James Lucas Retweeted
New paper on studying how the critical batch size changes based on properties of the optimization algorithm (including momentum and preconditioning), through two different lenses: large scale experiments, and analysis of a simple noisy quadratic model. https://arxiv.org/pdf/1907.04164.pdf
James Lucas Retweeted
1/5 New work w/ @EthanFetaya and Rich Zemel suggests likelihood-based conditional generative models will not solve robust classification. We show competitive models can be easily fooled, revealing fundamental issues with their learned representations and the likelihood objective. pic.twitter.com/vXDRJ9xqE4