Mohammad Norouzi

@mo_norouzi

Scientist at Google Research, Brain team.

Joined November 2018

Tweets


  1. 13 Jan

    "To honour the memory of those lost, the University of Toronto has established an Iranian Student Memorial Scholarship Fund." The first $250K donations is matched 3:1 by . Any funds received beyond $250K will be matched at a rate of 1:1.

  2. Retweeted
    10 Dec 2019

    Want to hear more about the latest research that happens in our East Coast offices? Stop by the Google booth at 3:25 to chat with researchers , Mehryar Mohri, Sanjiv Kumar, William Cohen, , , Mohammad Mahdian and D. Sculley!

  3. Retweeted
    9 Dec 2019

    (1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123). We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.

  4. 9 Dec 2019

    Cross entropy loss is invariant to shifting logits by any constant. Our paper uses this extra degree of freedom to define an energy based generative model of input data. This improves calibration and adversarial robustness of the corresponding classifier.

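    A quick numerical check of that invariance (a minimal NumPy sketch; the names here are illustrative, not the paper's code): shifting every logit by a constant leaves softmax cross-entropy unchanged, and the leftover constant can be read as an unnormalized log-density over inputs via the logsumexp of the logits.

        import numpy as np

        def cross_entropy(logits, label):
            # Softmax cross-entropy, computed stably via log-sum-exp.
            z = logits - logits.max()
            log_probs = z - np.log(np.exp(z).sum())
            return -log_probs[label]

        rng = np.random.default_rng(0)
        logits = rng.normal(size=10)

        # Shifting all logits by any constant c leaves the loss unchanged,
        # because the softmax normalizer absorbs the shift.
        for c in (0.0, 5.0, -100.0):
            assert np.isclose(cross_entropy(logits, 3),
                              cross_entropy(logits + c, 3))

        # The unused degree of freedom can be repurposed: treating
        # logsumexp(logits(x)) as an unnormalized log-density over inputs x
        # gives the classifier an energy-based generative interpretation.
        log_unnormalized_density = np.logaddexp.reduce(logits)
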
  5. Retweeted
    4 Dec 2019

    We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. paper code Thread 👇

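    Here "latent imagination" means rolling the learned dynamics forward purely in latent space, predicting rewards there, with no image decoding in the loop. A toy sketch of such a rollout (random linear stand-ins for the learned modules; this is not Dreamer's actual architecture):

        import numpy as np

        rng = np.random.default_rng(0)
        latent_dim, action_dim, horizon, gamma = 8, 2, 15, 0.99

        # Stand-ins for learned components: dynamics, reward head, policy.
        W_z = rng.normal(scale=0.3, size=(latent_dim, latent_dim))
        W_a = rng.normal(scale=0.3, size=(latent_dim, action_dim))
        w_r = rng.normal(size=latent_dim)
        W_pi = rng.normal(scale=0.3, size=(action_dim, latent_dim))

        z = rng.normal(size=latent_dim)  # latent state, inferred from pixels once
        imagined_return = 0.0
        for t in range(horizon):
            a = np.tanh(W_pi @ z)            # policy acts on the latent state only
            z = np.tanh(W_z @ z + W_a @ a)   # imagined next latent state
            imagined_return += gamma ** t * (w_r @ z)  # latent reward prediction
        # The actual agent backpropagates through imagined rollouts like this
        # one to improve its policy and value estimates.
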
  6. Retweeted
    24 Oct 2019

    Tired of your robot learning from scratch? We introduce RoboNet: a dataset that enables fine-tuning to new views, new environments, & entirely new robot platforms. w/ Dasari, Tian, Bucher, Schmeckpeper, Singh

  7. Retweeted
    23 Oct 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)

  8. Retweeted
    2 Oct 2019
  9. 12 Jul 2019

    Random Ensemble Mixture (REM) is deceptively simple -- it enforces optimal Bellman consistency on random convex combinations of the Q-heads of a multi-head Q-network. This is inspired by dropout and can be thought of as an infinite ensemble of Q-values sharing their features.

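    A minimal sketch of that target (NumPy; shapes and names hypothetical): sample a random convex combination over the K heads, mix the online and target heads with the same weights, and apply the usual Bellman backup to the mixture.

        import numpy as np

        rng = np.random.default_rng(0)
        K, A, gamma = 4, 6, 0.99                 # heads, actions, discount

        q_heads = rng.normal(size=(K, A))        # online Q-heads at state s
        q_heads_next = rng.normal(size=(K, A))   # target Q-heads at state s'
        action, reward = 2, 1.0

        # Random convex combination: nonnegative weights that sum to one,
        # resampled for every minibatch.
        alpha = rng.uniform(size=K)
        alpha /= alpha.sum()

        q_mix = alpha @ q_heads                  # mixed Q-values, shape (A,)
        q_mix_next = alpha @ q_heads_next

        target = reward + gamma * q_mix_next.max()
        td_error = target - q_mix[action]
        # The TD loss on this mixture trains all K heads jointly through
        # their shared features.
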
  10. 11 Jul 2019

    Is it possible to learn to play Atari games by watching DQN's replay? Yes, one can do much better than DQN by learning from the logged experience of DQN, and fancy distributional RL algorithms do worse than our simple REM. by , D. Schuurmans

  11. Retweeted
    12 Jun 2019

    I will be presenting “Similarity of Neural Network Representations Revisited” (joint work with , , and ) at Thursday (tomorrow!) at 12:15 in Hall A (Deep Learning) and at Poster #20. Paper/code: . Thread below.

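    The measure proposed there is centered kernel alignment (CKA); a minimal sketch of the linear variant, assuming each representation is an (examples x features) activation matrix:

        import numpy as np

        def linear_cka(x, y):
            # x: (n, d1), y: (n, d2) -- activations for the same n examples.
            x = x - x.mean(axis=0)   # center each feature
            y = y - y.mean(axis=0)
            # Linear CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
            num = np.linalg.norm(y.T @ x, 'fro') ** 2
            den = (np.linalg.norm(x.T @ x, 'fro') *
                   np.linalg.norm(y.T @ y, 'fro'))
            return num / den

        rng = np.random.default_rng(0)
        x = rng.normal(size=(100, 32))
        q, _ = np.linalg.qr(rng.normal(size=(32, 32)))  # random orthogonal map
        print(linear_cka(x, x))      # 1.0 for identical representations
        print(linear_cka(x, x @ q))  # ~1.0: invariant to orthogonal transforms
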
  12. Retweeted
    10 Jun 2019

    Attending ? Care about test set generalization in RL but don't have access to high-quality rewards? Come see my talk on "Learning to Generalize from Sparse and Underspecified Rewards" in the Deep Sequence Models track this Thursday. See poster 👇.

  13. 8 May 2019

    in New Orleans is memorable.

  14. Retweeted
    5 May 2019

    Interested in what it means to measure similarity between neural network representations? Come see me talk about (joint work with ) at the workshop, 10:30 AM tomorrow in R03

  15. Retweeted

    Yoshua Bengio, Geoffrey Hinton and Yann LeCun, the fathers of , receive the 2018 for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing today.

  16. Retweeted
    5 Mar 2019

    Our work on Contingency-Aware Exploration presents a self-supervised model that can discover controllable entities in the environment for efficient exploration: 11,000+ on Montezuma’s Revenge w/o demonstrations or resets.

  17. Retweeted
    26 Feb 2019

    A neural net trained on weather forecasts & historical turbine data predicts wind power output 36 hours ahead of actual generation. Based on these predictions, our model recommends optimal hourly delivery commitments to the power grid 24 hours in advance.

  18. Retweeted

    Applying reinforcement learning to environments with sparse and underspecified rewards is an ongoing challenge, requiring generalization from limited feedback. See how we address this with a novel method that provides more refined feedback to the agent.

  19. Retweeted
    23 Jan 2019

    1/4 XLM: Cross-lingual language model pretraining. We extend BERT to the cross-lingual setting. New state of the art on XNLI, unsupervised machine translation and supervised machine translation. Joint work with

  20. Retweeted
    23 Jan 2019

    Introducing Natural Questions, a new, large-scale corpus and challenge for training and evaluating open-domain question answering systems, and the first to replicate the end-to-end process in which people find answers to questions. Learn more at ↓


