Pierre Sermanet

@psermanet

Research scientist at Google Brain working on deep vision, robotics, and self-supervised learning.

Joined November 2017

Tweets


  1. Pinned Tweet
    Dec 3, 2018

    Give a robot a label and you feed it for a second; teach a robot to label and you feed it for a lifetime.

  2. Retweeted
    Nov 10, 2019

    😊 Self-supervised learning opens up a huge opportunity to better utilize unlabeled data while still learning in a supervised manner. My latest post covers many interesting ideas for self-supervised learning tasks on images, videos & control problems:

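    A minimal sketch of one pretext task from the family such posts survey: predict the rotation applied to an image (Gidaris et al., 2018). The rotation index comes for free, so a classifier can train on it with no human labels. All names here are illustrative, not taken from the linked post.

    ```python
    import numpy as np

    def make_rotation_batch(images):
        """Rotate each image by a random multiple of 90 degrees.

        images: array of shape (N, H, W, C). Returns the rotated images and
        the rotation index in {0, 1, 2, 3}, which serves as a free label.
        """
        labels = np.random.randint(0, 4, size=len(images))
        rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
        return rotated, labels  # train any image classifier on (rotated, labels)
    ```
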
  3. Retweeted
    Oct 30, 2019

    We've open-sourced the first playroom from the Learning from Play project (). Check out ! Many thanks to Michael Wu, .

  4. Jun 16, 2019

    We present this paper today at the LUV and DeepVision workshops at . Paper: Authors: , Mohi Khansari, Yunfei Bai, ,

  5. Jun 16, 2019

    One caveat is that we don't actually train online (yet); we just show what's possible if you did. But training on the fly is not even necessary for a robot deployed in a new home: it can spend its first few days looking around, train itself overnight, and overfit to that home.

  6. Jun 16, 2019

    Our model recovers object attributes, colors, shapes, and classes entirely from scratch, without any labels, as shown in the nearest neighbors here (ordered left to right by embedding distance to the leftmost object). A minimal retrieval sketch follows below.

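    A hedged sketch of the nearest-neighbor readout described above: rank stored object embeddings by distance to a query embedding. The embedding function itself is assumed to exist; nothing here is the paper's actual model.

    ```python
    import numpy as np

    def nearest_neighbors(query_emb, object_embs, k=5):
        """Indices of the k stored objects closest to the query in embedding space."""
        dists = np.linalg.norm(object_embs - query_emb, axis=1)  # Euclidean distances
        return np.argsort(dists)[:k]  # left-to-right order = increasing distance
    ```
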
  7. Jun 16, 2019

    Our robot collects its own data, trains itself on that data with our self-supervised objective using contrastive learning (see the sketch below), and can then point to never-before-seen objects similar to the one in front of it, demonstrating generalization of object attributes.

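    One plausible instance of a contrastive objective like the one mentioned above is an InfoNCE-style loss (the exact loss used in the paper may differ): embeddings of two views of the same object are pulled together, while embeddings of different objects are pushed apart.

    ```python
    import numpy as np

    def info_nce_loss(anchors, positives, temperature=0.1):
        """anchors, positives: (N, D) L2-normalized embeddings; row i is a positive pair."""
        logits = anchors @ positives.T / temperature   # (N, N) pairwise similarities
        logits -= logits.max(axis=1, keepdims=True)    # for numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))            # classify pair i against the rest
    ```
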
  8. Jun 16, 2019

    Self-supervision allows you to train on your test data, so it’s pretty much guaranteed to do better than a supervised model trained offline. In this video our model converges to ~2% identification error after 160s while the offline baseline trained on ImageNet is stuck at ~50%.

  9. Jun 16, 2019

    A major benefit of self-supervision is that we can truly scale and adapt on the fly. Even if it were 10% behind supervised ImageNet, it would still do better in real life. We show that the longer our model looks at objects, the better it understands them.

  10. Jun 15, 2019

    Come hear how to train the Cake at our workshop on Self-Supervised Learning today at ICML. Lineup: Jacob Devlin, Alison Gopnik, , , , , , Olivier Henaff, A. Zisserman, Abhinav Gupta, Alyosha Efros.

  11. Retweeted
    Apr 30, 2019

    I now call it "self-supervised learning", because "unsupervised" is both a loaded and confusing term. In self-supervised learning, the system learns to predict part of its input from other parts of its input. In...

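    A toy, hedged illustration of that definition: hide part of each input and predict it from the visible part. The linear least-squares "model" here is only a stand-in for a deep network trained with SGD.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 32))        # toy inputs
    mask = rng.random(x.shape) < 0.25     # hide ~25% of each input
    visible = np.where(mask, 0.0, x)      # the part the model is allowed to see

    # Predict the full input from its visible part (stand-in for a deep model).
    W, *_ = np.linalg.lstsq(visible, x, rcond=None)
    recon_error = np.mean(((visible @ W) - x)[mask] ** 2)  # error on hidden entries only
    ```
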
  12. Apr 30, 2019

    This was a large team effort with , , , Jonathan Tompson, Mohi Khansari, , , Yunfei Bai, , , , , , Andrew Zisserman, and .

  13. Apr 27, 2019

    My slides from the robotics symposium. The main message: self-supervision on lots of unlabeled play data is an effective recipe for robotics, and we propose multiple methods to implement this recipe for vision and control:

  14. Apr 17, 2019

    Very powerful self-supervised objective based on cycle consistency by . We show that we can discover useful invariant representations of different states in videos despite not having any labels for those states. Very useful for a bunch of things, including imitation.

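    In the spirit of that objective, a hedged sketch of a (hard) cycle-consistency check between two embedded videos. The actual method uses soft, differentiable nearest neighbors so it can be trained end to end; the hard version below is only for clarity.

    ```python
    import numpy as np

    def cycle_consistent(emb_a, emb_b, i):
        """Frame i of video A cycles A -> B -> A; consistent if it lands back on i."""
        j = np.argmin(np.linalg.norm(emb_b - emb_a[i], axis=1))  # nearest frame in B
        k = np.argmin(np.linalg.norm(emb_a - emb_b[j], axis=1))  # back to nearest in A
        return k == i
    ```
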
  15. Apr 3, 2019

    Great line-up at our ICML workshop on Self-Supervised Learning; it is very exciting to see this field gaining momentum!

  16. Mar 19, 2019

    Is play pretraining the ImageNet pretraining of robotics?

  17. Mar 7, 2019

    We were able to perform 8 tasks in a row, zero-shot, using a single task-agnostic policy.

  18. Mar 7, 2019

    How do you scale up multi-task learning? Self-supervise plan representations from lots of cheap, unlabeled play data (no RL was used); a rough sketch follows below. By , Mohi Khansari, , , Jonathan Tompson, and

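    A very rough sketch of the idea under stated assumptions: encode a window of unlabeled play into a latent plan, then condition a single task-agnostic policy on it. The averaging "encoder", the shapes, and the 7-DoF action head are placeholders, not the actual architecture from the Learning from Play work.

    ```python
    import numpy as np

    def encode_plan(play_window):
        """play_window: (T, obs_dim) slice of play data -> latent plan of shape (obs_dim,)."""
        return play_window.mean(axis=0)  # placeholder for a learned sequence encoder

    def policy(obs, goal, plan, act_dim=7):
        """Task-agnostic policy: one network maps (state, goal, latent plan) to an action."""
        features = np.concatenate([obs, goal, plan])  # assumes len(features) >= act_dim
        return np.tanh(features[:act_dim])            # placeholder action head
    ```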
