Danijar Hafner

@danijarh

Researcher and PhD student. Aiming to build unsupervised intelligent machines.

Toronto, Canada
Joined August 2013

Tweets


  1. Pinned tweet
    Dec 4, 2019

    We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. paper code Thread 👇

  2. Jan 18

    If something in the forward pass needs more precision (e.g. numerically unstable ops), cast to float32 before and back to the original dtype after.

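The cast-for-stability trick above can be sketched in NumPy (a hypothetical softmax example, not code from the thread): exponentials overflow float16's small range, so the unstable op runs in float32 and the result is cast back.

```python
import numpy as np

x = np.array([5.0, 10.0, 12.0], dtype=np.float16)

# Naive softmax in float16: exp(12) ~ 1.6e5 overflows float16 (max ~65504),
# so the result contains inf/nan.
naive = np.exp(x) / np.exp(x).sum()

# Cast to float32 for the numerically unstable op, then cast back to float16.
x32 = x.astype(np.float32)
z = np.exp(x32 - x32.max())
stable = (z / z.sum()).astype(np.float16)
```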
  3. Jan 18

    Tried mixed precision yet? Took 10 min to set up and my model runs almost 2x faster with same results. Vars and grads are still 32 bits so it usually doesn't affect predictive performance. E.g. in TF2, set option and make all input to your layers float16 (data, RNN states, ..):

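The tweet's screenshot is not preserved here; in TF2 the recipe amounts to setting the global Keras policy to 'mixed_float16' and feeding float16 inputs. The division of labor it describes can be sketched in plain NumPy (a hypothetical analogue, not the original code): the forward pass runs in float16 while variables and gradients stay float32.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 64)).astype(np.float32)  # float32 master weights
x = rng.standard_normal((32, 256)).astype(np.float16)  # float16 layer inputs

y = x @ w.astype(np.float16)                 # matmul executes in float16
grad_y = np.ones_like(y)                     # stand-in upstream gradient
grad_w = (x.T @ grad_y).astype(np.float32)   # gradient accumulated in float32
w -= 1e-3 * grad_w                           # update master weights in float32
```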
  4. Retweeted
    Jan 17

    Beautiful quantum physics animations from the basics like • Wave-Particle duality • How lasers work • Tunneling effect to research level stuff like • Bose-Einstein condensate • Pump-probe technique All freely available on

  5. Retweeted
    Jan 9

    Training Neural SDEs: We worked out how to do scalable reverse-mode autodiff for stochastic differential equations. This lets us fit SDEs defined by neural nets with black-box adaptive higher-order solvers. With , and .

  6. Retweeted
    Jan 7

    I'm excited to share that I have joined Imperial College London as a lecturer (asst prof)! I'm convinced it will be a great environment to continue working on GPs, Bayesian Deep Learning, and model-based RL. Do get in touch if you're interested in joining to do a PhD!

  7. Jan 5

    RL shifts the question of what intelligent behavior is to finding a reward function. I think we should focus more on what environment and reward function to use than on what RL algorithm to use. Is there theory for how properties of the environment and reward affect the resulting behavior?

  8. Retweeted
    Dec 26, 2019

    Bayesian methods are *especially* compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization, not the prior, or Bayes rule. This difference will be greatest for underspecified models like DNNs. 1/18

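The marginalization-vs-optimization contrast in the tweet above can be made concrete with a toy two-model example (hypothetical numbers, not from the thread): the MAP approach keeps only the single best model, while the Bayesian prediction averages predictions under the posterior.

```python
import numpy as np

preds = np.array([0.9, 0.1])   # p(y=1 | model_i) for two candidate models
post = np.array([0.6, 0.4])    # posterior p(model_i | data)

map_pred = preds[np.argmax(post)]  # optimization: commit to the best model
bayes_pred = float(post @ preds)   # marginalization: posterior-weighted average

# bayes_pred = 0.6 * 0.9 + 0.4 * 0.1 = 0.58, far less confident than
# map_pred = 0.9, because the second model is still plausible.
```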
  9. Retweeted
    Dec 24, 2019

    As asked by and others: authors with >= 5 submissions, sorted by acceptance rate:
    • Zhiyuan Li, Mingyuan Zhou, Deva Ramanan: 4/5 (80.0%)
    • Le Song: 7/9 (77.8%)
    • Jimmy Ba: 6/8 (75.0%)
    • Martin Jaggi: 5/7 (71.4%)
    • Abhinav Gupta: 5/7 (71.4%)
    • Pushmeet Kohli: 6/9 (66.7%)
    • Max Welling: 5/8 (62.5%)

  10. Retweeted
    Dec 21, 2019

    Trade talks, a prediction:
    UK: We don't like our deal
    EU: Why not?
    UK: We only get 95% of what we want
    EU: It only gives us 95% too
    UK: We want a new deal that gives us 100% of what we want
    EU: But that means we only get 90% of what we want
    1/13

  11. Retweeted
    Dec 15, 2019

    A highlight for me was 's refreshingly honest talk about the Neural ODEs paper, part of the retrospectives workshop. Check it out !

  12. Retweeted
    Dec 4, 2019

    i have been laughing at this since yesterday. please turn your volume up 😂

  13. Retweeted
    Dec 8, 2019

    8:30am Sunday; starts with a bang! admits to the world that he’s a Bayesian! (meme-ready short version: ). :-)

  14. Retweeted
    Dec 4, 2019

    Dream to Control: Learning Behaviors by Latent Imagination The agent learns a latent world model via interactions, and backprops thru imagined latent trajectories of this model to learn useful behaviors. et al. pdf code

  15. Dec 4, 2019

    Thanks to my advisors on the project: , Tim Lillicrap, and Jimmy Ba. Let me know if you have any questions! ✨

  16. Dec 4, 2019

    We evaluate Dreamer across 20 challenging visual control tasks with image inputs, where it exceeds previous methods in terms of final performance, sample-efficiency, and wall-clock time. Dreamer is also applicable to discrete actions and episodes with early termination.

  17. Dec 4, 2019

    Naturally, the value function enables far-sighted behavior and makes Dreamer robust to the imagination horizon. This lets Dreamer solve tasks that a policy without a value function, or online planning with PlaNet, could not solve.

  18. Dec 4, 2019

    Dreamer learns a world model from experience. Inside the compact latent space of the model, it predicts actions and state values. The policy is optimized efficiently by propagating analytic value gradients back through imagined trajectories.

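The idea of propagating analytic value gradients through imagined trajectories can be illustrated with a toy 1-D example (a hypothetical linear "world model", not Dreamer's learned latent dynamics): with policy a = w * s, dynamics s' = s + a = (1 + w) * s, and reward r = -s'**2, the imagined return J(w) is differentiable in w, so the policy improves by following its analytic gradient instead of a REINFORCE estimate.

```python
import numpy as np

def imagined_return_and_grad(w, s0=1.0, horizon=10):
    """Roll out the toy model in imagination and differentiate through it."""
    J, dJ = 0.0, 0.0
    for h in range(1, horizon + 1):
        s_h = (1.0 + w) ** h * s0              # imagined state after h steps
        ds_dw = h * (1.0 + w) ** (h - 1) * s0  # analytic d(s_h)/dw
        J += -s_h ** 2                         # imagined return
        dJ += -2.0 * s_h * ds_dw               # chain rule through the rollout
    return J, dJ

w = 0.5
for _ in range(300):
    _, dJ = imagined_return_and_grad(w)
    w += 1e-2 * dJ / (abs(dJ) + 1e-8)  # normalized ascent keeps the toy stable

# The return is maximized when states are driven to zero, i.e. w -> -1.
```

In Dreamer itself the dynamics and value function are learned neural networks and the gradient comes from automatic differentiation, but the structure is the same.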
  19. Retweeted

    Robo-PlaNet: Learning to Poke in a Day - a robotics project to learn a simple task from pixels on a single robot using model-based RL and real data only - this is a collaboration I worked on with Guillaume Alain & at

  20. Retweeted
    Nov 11, 2019

    A mental test of mine when I write a paper is to see whether the paper would also be suitable as a "blog post" intended for a general audience, without many modifications to the text. Most of my papers are written this way, and some of them have also been published at ML conferences.

