Yusuf Aytar

@yusufaytar

Research Scientist @ DeepMind. Making machines smarter. Views are my own.

Joined October 2010

Tweets


  1. Pinned Tweet
    17 Apr 2019

    Check out Temporal Cycle-consistency Learning - our most recent work on video representation learning by aligning videos in the wild. Project Page: Video: Together with , Jonathan Tompson, and Andrew Zisserman

  2. Retweeted
    18 Dec 2019

This was one of the best AI projects I was privileged to participate in in 2019. did an amazing job with and . Generative models and meta-learning can have an important positive impact.

  3. Retweeted
    26 Oct 2019

    How can we finetune sim-trained policies on a real robot in the absence of real rewards? We used sequence-based self-supervision objectives to do so for stacking cubes w/Rae Jeong, , , Yuxiang Zhou, , Thomas Lampe,

  4. Retweeted
    27 Sep 2019

    How can robots learn from humans and their own experience to manipulate objects using vision? Here is our take on the problem:

  5. Retweeted
    5 Sep 2019

    R2D3 uses demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. It can solve tasks where SOTA methods fail to see a single reward. Paper: Videos & more:

  6. Retweeted
    1 Jul 2019

    Deep RL agents are data hungry and often learn task-specific representations. Our model learns object-centric abstractions from raw videos. This enables highly data-efficient RL and structured exploration.

  7. Retweeted
    14 Jun 2019

    Very excited about the Self-Supervised Learning Workshop tomorrow at , kicking off at 9am w/ Jacob Devlin (inventor of BERT)! Also: Alison Gopnik, , , A Zisserman, Abhinav Gupta, Alyosha Efros, and many contributed talks/posters :)

  8. Retweeted
    4 Jun 2019

    VQVAE-2 finally out! Powerful autoregressive models in a hierarchical compressed latent space. No modes were collapsed in the creation of these samples ;) arXiv: With and More samples and details 👇 [thread]

  9. Retweeted
    27 Apr 2019

    Note that the deadline for the self-supervised learning workshop has been extended to May 6! Plenty of time to write up your work in a 4-page abstract :)

  10. Retweeted
    19 Apr 2019

    The Self-Supervised Learning workshop submission deadline is next Thursday (25/04)! Consider submitting an extended abstract (4 pages) of your latest work. Work under review for other conferences welcome.

  11. Retweeted
    17 Apr 2019

    Excited to share our work on self-supervised learning in videos. Our method, temporal cycle-consistency (TCC) learning, looks for similarities across videos to learn useful representations. Video: Webpage:
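A minimal sketch of the cycle-consistency idea behind TCC, in NumPy. All names here are mine, and the hard nearest-neighbour check is a simplification: the actual TCC method uses a differentiable soft nearest neighbour so the consistency signal can be used as a training loss.

```python
import numpy as np

def cycle_consistent_fraction(U, V):
    """Fraction of frames in U whose nearest neighbour in V maps back
    to the same frame of U (hard cycle-consistency check).

    U: (N, D) array of frame embeddings for video 1
    V: (M, D) array of frame embeddings for video 2
    """
    # Pairwise squared Euclidean distances between all frame pairs.
    d = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # shape (N, M)
    nn_uv = d.argmin(axis=1)  # for each frame u_i, its nearest frame in V
    nn_vu = d.argmin(axis=0)  # for each frame v_j, its nearest frame in U
    # A frame is cycle-consistent if the round trip U -> V -> U returns to it.
    cycles_back = nn_vu[nn_uv] == np.arange(len(U))
    return cycles_back.mean()
```

Embeddings that align the two videos well make most frames cycle-consistent; training maximises a soft version of this quantity, which is what makes the learned representations useful for alignment.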

  12. 31 Mar 2019

    Self-Supervised Learning workshop at with an amazing set of speakers: , , Andrew Zisserman, Alexei Efros, Jacob Devlin, Abhinav Gupta and more. Submissions now open - deadline April 25!

  13. Retweeted
    6 Feb 2019

    Using a single network architecture and fixed set of hyper-parameters, Recurrent Replay Distributed DQN quadruples prev SoTA on Atari-57, and matches SoTA on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.

  14. Retweeted

    Happy that we could share progress with you all! Good Games and , and and for a great show! You can see all the details in the blog.

  15. Retweeted
    22 Jan 2019

    Join us and this Thursday at 6:00pm GMT for an exciting demonstration, hosted by and ! Livestream on YouTube: Read more about as an environment for AI research:

  16. Retweeted

    Join us on Thursday at 18:00 GMT (19:00 CET) for a livestream with AI researchers from and an epic demonstration hosted by and ! ➡️

  17. Retweeted
    13 Dec 2018

    The improvement in the quality of images sampled from generative models over the last few years has been astounding. See in particular 5:35. This GAN has learned 3d structure of cars and bedrooms through only 2d images, and interpolates smoothly!

  18. Retweeted
    13 Dec 2018

    Scene Recomposition by Learning-based ICP. New work from Steve Seitz’s group.

  19. Retweeted
    4 Dec 2018

    Imitation by watching YouTube: learning features from YouTube videos through self-supervision allows us to solve hard exploration games in Atari. Paper: Video: Spotlight talk: Wed 4:20pm, 220CD Poster session: Wed 5-7pm, 210

  20. Retweeted
    17 Oct 2018

    MetaMimic can imitate novel demonstration videos in one-shot.

