Tweets
-
"To honour the memory of those lost, the University of Toronto has established an Iranian Student Memorial Scholarship Fund."
#Flight752 https://www.canadahelps.org/en/dn/45925 The first $250K in donations is matched 3:1 by @UofT. Any funds received beyond $250K will be matched at a rate of 1:1.
-
Mohammad Norouzi Retweeted
Want to hear more about the latest research that happens in our East Coast offices? Stop by the
#NeurIPS2019 Google booth at 3:25 to chat with researchers @CorinnaCortes, Mehryar Mohri, Sanjiv Kumar, William Cohen, @marcgbellemare, @Mo_Norouzi, Mohammad Mahdian and D. Sculley!
-
Mohammad Norouzi Retweeted
(1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123). We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.
@georgejtucker @RogerGrosse @Mo_Norouzi pic.twitter.com/bVkLCMuKmH
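A common way to see the posterior collapse this paper analyzes is to look at the per-dimension KL between the encoder's Gaussian posterior and the standard-normal prior: collapsed dimensions have KL near zero and carry no information about the input. A minimal numpy sketch of that diagnostic (illustrative only, not the authors' code; the encoder outputs below are synthetic):

```python
import numpy as np

def kl_per_dim(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) for each latent dimension, averaged over a batch.

    mu, log_var: arrays of shape (batch, latent_dim) from the VAE encoder.
    """
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return kl.mean(axis=0)

# Hypothetical encoder outputs: the last two dimensions have mu ~ 0 and
# sigma ~ 1, so their KL is ~0 -- those dimensions have collapsed to the prior.
rng = np.random.default_rng(0)
mu = np.concatenate([rng.normal(size=(128, 6)), np.zeros((128, 2))], axis=1)
log_var = np.concatenate([rng.normal(-1.0, 0.1, size=(128, 6)), np.zeros((128, 2))], axis=1)

kl = kl_per_dim(mu, log_var)
print("per-dimension KL:", np.round(kl, 3))
print("collapsed dims:", np.where(kl < 1e-2)[0])
```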
-
Cross entropy loss is invariant to shifting logits by any constant. Our paper uses this extra degree of freedom to define an energy-based generative model of input data. This improves calibration and adversarial robustness of the corresponding classifier. https://twitter.com/wgrathwohl/status/1203848404717228033
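Both claims in the first sentence are easy to check numerically: adding a constant to every logit leaves the softmax cross-entropy unchanged, while the energy-based reading used in the linked paper, E(x) = -logsumexp over the logits, does depend on that constant, which is exactly the otherwise-unused degree of freedom. A small numpy sketch (not the authors' code):

```python
import numpy as np

def logsumexp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def cross_entropy(logits, label):
    # softmax cross-entropy for a single example
    return logsumexp(logits) - logits[label]

logits = np.array([2.0, -1.0, 0.5])
shift = 7.3  # any constant

# The classification loss is unchanged by the shift...
print(cross_entropy(logits, 0), cross_entropy(logits + shift, 0))

# ...but the energy E(x) = -logsumexp_y f(x)[y] is not: the two values
# differ by exactly -shift, so the shift carries information the
# classifier alone never uses.
print(-logsumexp(logits), -logsumexp(logits + shift))
```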
-
Mohammad Norouzi Retweeted
We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. paper https://arxiv.org/pdf/1912.01603.pdf code https://github.com/google-research/dreamer Thread
pic.twitter.com/K5DnooVIUH
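The phrase "latent imagination" can be unpacked as: encode the current image into a compact latent state, roll a learned latent dynamics model forward under the policy without further environment interaction, and score the imagined trajectory with a learned reward model (Dreamer then trains its actor and value function by backpropagating through these rollouts). A toy sketch with stand-in linear components, not the Dreamer code:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, ACTION = 8, 2

# Stand-in "learned" components; in Dreamer these are neural networks
# trained on real experience (representation, dynamics, and reward models).
W_dyn = rng.normal(scale=0.3, size=(LATENT, LATENT + ACTION))
w_rew = rng.normal(scale=0.3, size=LATENT)

def encode(image):
    """Stand-in encoder: image -> compact latent state."""
    return rng.normal(size=LATENT)

def policy(state):
    """Stand-in actor."""
    return np.tanh(rng.normal(size=ACTION))

def imagined_return(state, horizon=15, gamma=0.99):
    """Roll the learned model forward purely in latent space."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        state = np.tanh(W_dyn @ np.concatenate([state, action]))  # predicted next latent
        total += discount * (w_rew @ state)                       # predicted reward
        discount *= gamma
    return total

state = encode(image=None)  # no real image in this toy
print("imagined return:", float(imagined_return(state)))
```
-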
Mohammad Norouzi Retweeted
Tired of your robot learning from scratch? We introduce RoboNet: a dataset that enables fine-tuning to new views, new envs, & entirely new robot platforms. https://robonet.wiki https://arxiv.org/abs/1910.11215 w/ Dasari
@febert8888 Tian @SurajNair_1 Bucher Schmeckpeper Singh @svlevine pic.twitter.com/BC82ZBx8YX
-
Mohammad Norouzi Retweeted
New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: https://arxiv.org/abs/1910.10683 Code/models/data/etc: https://git.io/Je0cZ Summary
(1/14) pic.twitter.com/VP1nkkHefB
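The "unified text-to-text" framing means every task, translation, summarization, classification, is posed as mapping an input string to an output string (typically with a task prefix), so one model and one loss cover all of them. An illustrative sketch of the format only; the prefixes and examples below are made up rather than taken verbatim from the paper:

```python
# Each task becomes an (input text -> target text) pair; even class labels
# are emitted as words, so a single seq-to-seq model handles everything.
examples = [
    ("translate English to German: The house is wonderful.",
     "Das Haus ist wunderbar."),
    ("summarize: heavy rain flooded several streets downtown on Tuesday, "
     "closing schools and delaying the morning commute by several hours.",
     "downtown flooding closed schools and delayed commutes."),
    ("sentiment: this movie was a complete waste of time.",
     "negative"),
]

for source, target in examples:
    print(f"{source!r} -> {target!r}")
```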
-
Mohammad Norouzi Retweeted
-
Random Ensemble Mixture (REM) is deceptively simple -- it enforces optimal Bellman consistency on random convex combinations of the Q-heads of a multi-head Q-network. This is inspired by dropout and can be thought of as an infinite ensemble of Q-values sharing their features. pic.twitter.com/1I1IlgBdAT
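Concretely, the description above amounts to: sample a random point on the simplex over the K heads, mix both the online and target networks' Q-values with those weights, and apply the usual Bellman backup to the mixture. A purely illustrative numpy sketch for a single transition (the actual implementation is in the paper's repo):

```python
import numpy as np

rng = np.random.default_rng(0)
K, A = 4, 6            # number of Q-heads, number of actions
gamma = 0.99

def random_convex_weights(k):
    w = rng.uniform(size=k)
    return w / w.sum()               # alpha_k >= 0, sum_k alpha_k == 1

# Stand-ins for network outputs on one transition (s, a, r, s'):
q_heads = rng.normal(size=(K, A))        # online net:  Q_k(s, .)
q_heads_next = rng.normal(size=(K, A))   # target net:  Q_k(s', .)
action, reward, done = 2, 1.0, False

alpha = random_convex_weights(K)
q_mix = alpha @ q_heads                  # mixed Q(s, .)
q_mix_next = alpha @ q_heads_next        # mixed Q(s', .)

target = reward + (0.0 if done else gamma * q_mix_next.max())
td_error = q_mix[action] - target        # Huber/squared loss on this in practice
print("alpha:", np.round(alpha, 3), " TD error:", round(float(td_error), 3))
```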
-
Is it possible to learn to play Atari games by watching DQN's replay? Yes, one can do much better than DQN by learning from the logged experience of DQN, and fancy distributional RL algorithms do worse than our simple REM. http://arxiv.org/abs/1907.04543 by
@rishabh_467, D. Schuurmans pic.twitter.com/vJW0Bvq8QC
-
Mohammad Norouzi Retweeted
I will be presenting “Similarity of Neural Network Representations Revisited” (joint work with
@Mo_Norouzi, @honglaklee, and @geoffreyhinton) at @icmlconf Thursday (tomorrow!) at 12:15 in Hall A (Deep Learning) and at Poster #20. Paper/code: http://cka-similarity.github.io. Thread below.
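The similarity index proposed in the paper is centered kernel alignment (CKA); in its linear form it compares two activation matrices X and Y (one row per example) via ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F) after centering the columns, which makes it invariant to orthogonal transformations and isotropic scaling of either representation. A small numpy sketch of the linear version (the authors' code is at the link above):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices with one row per example."""
    X = X - X.mean(axis=0)                      # center features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # layer activations (examples x units)
R, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal transform
print(linear_cka(X, X @ R))                     # ~1.0: invariant to rotations of the features
print(linear_cka(X, rng.normal(size=(1000, 32))))  # small for unrelated random activations
```
-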
Mohammad Norouzi Retweeted
Attending #ICML2019? Care about test set generalization in RL but don't have access to high quality rewards? Come see my talk on "Learning to Generalize from Sparse and Underspecified Rewards" in the Deep Sequence Models track this Thursday. See poster.
http://bit.ly/merl2019 pic.twitter.com/e2FP9WW0xa
-
Mohammad Norouzi Retweeted
Interested in what it means to measure similarity between neural network representations? Come see me talk about https://arxiv.org/abs/1905.00414 (joint work with
@Mo_Norouzi, @honglaklee, @geoffreyhinton) at the @iclr2019 #DebugML workshop, 10:30 AM tomorrow in R03.
-
Mohammad Norouzi Retweeted
Yoshua Bengio, Geoffrey Hinton and Yann LeCun, the fathers of
#DeepLearning, receive the 2018 #ACMTuringAward for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing today. http://bit.ly/2HVJtdV pic.twitter.com/dmlupWYuLm
-
Mohammad Norouzi Retweeted
Our
#ICLR2019 work on Contingency-Aware Exploration presents a self-supervised model that can discover controllable entities in the environment for efficient exploration: 11,000+ on Montezuma’s Revenge w/o demonstrations or resets. https://arxiv.org/abs/1811.01483 https://coex-rl.github.io/ pic.twitter.com/njBrU29xic
-
Mohammad Norouzi Retweeted
A neural net trained on weather forecasts & historical turbine data predicts wind power output 36 hours ahead of actual generation. Based on these, our model recommends optimal hourly delivery commitments to the power grid 24 hours in advance. pic.twitter.com/jiewUyWCBf
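Stripped down, the setup described is a supervised forecasting problem: features derived from the weather forecast and recent turbine history at time t are mapped to power output at t + 36 h, and the predictions then drive an hourly commitment decision. A toy stand-in with synthetic data and ordinary least squares in place of the neural network (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_features = 500, 5                     # e.g. forecast wind speed/direction, recent output, ...
X = rng.normal(size=(n, n_features))       # synthetic forecast features at time t
true_w = np.array([2.0, -0.5, 0.0, 1.0, 0.3])
y = X @ true_w + 0.1 * rng.normal(size=n)  # synthetic power output at t + 36h

# Stand-in model (the deployed system uses a neural network).
w, *_ = np.linalg.lstsq(np.hstack([X, np.ones((n, 1))]), y, rcond=None)

forecast_features = rng.normal(size=n_features)
predicted_mw = np.append(forecast_features, 1.0) @ w
# A simple commitment rule: commit a conservative fraction of the prediction.
commitment = 0.9 * max(predicted_mw, 0.0)
print(f"predicted output: {predicted_mw:.2f}, hourly commitment: {commitment:.2f}")
```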
-
Mohammad Norouzi Retweeted
Applying reinforcement learning to environments with sparse and underspecified rewards is an ongoing challenge, requiring generalization from limited feedback. See how we address this with a novel method that provides more refined feedback to the agent. http://goo.gl/P9C2s3
-
Mohammad Norouzi Retweeted
1/4 XLM: Cross-lingual language model pretraining. We extend BERT to the cross-lingual setting. New state of the art on XNLI, unsupervised machine translation and supervised machine translation. https://arxiv.org/abs/1901.07291 Joint work with
@GuillaumeLample pic.twitter.com/EyKEsTpFWE
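One of the paper's cross-lingual objectives is translation language modeling (TLM): a sentence and its translation are concatenated into a single stream and tokens are masked on both sides, so the model can recover a masked word from context in either language. A toy masking sketch (illustrative only, not the authors' preprocessing):

```python
import random

random.seed(0)
MASK = "[MASK]"

def tlm_example(src_tokens, tgt_tokens, mask_prob=0.15):
    """Concatenate a translation pair and mask tokens in both languages."""
    stream = src_tokens + ["</s>"] + tgt_tokens
    inputs, targets = [], []
    for tok in stream:
        if tok != "</s>" and random.random() < mask_prob:
            inputs.append(MASK)
            targets.append(tok)       # model is trained to recover these
        else:
            inputs.append(tok)
            targets.append(None)      # position not predicted
    return inputs, targets

en = "the cat sat on the mat".split()
fr = "le chat était assis sur le tapis".split()
inputs, targets = tlm_example(en, fr, mask_prob=0.3)
print(inputs)
print([t for t in targets if t is not None])
```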
-
Mohammad Norouzi Retweeted
Introducing Natural Questions, a new, large-scale corpus and challenge for training and evaluating open-domain question answering systems, and the first to replicate the end-to-end process in which people find answers to questions. Learn more at ↓ http://goo.gl/bfqJ7a