Ali Eslami

@arkitus

Research scientist at DeepMind studying intelligence. How do we learn without explicit supervision?

London, UK
Joined December 2007

Tweets


  1. Pinned Tweet
    9 Dec 2019

    Exciting updated results for self-supervised representation learning on ImageNet:
    - 71.5% top-1 with a *linear* classifier
    - 77.9% top-5 with only *1%* of the labels
    - 76.6 mAP when transferred to PASCAL VOC-07 (better than *fully-supervised's* 74.7 mAP)

  2. Retweeted
    17 Jan

    Pleased to present our work endowing DeepSets networks and Conditional Neural Processes with translation equivariance. Oral presentation! Joint work with James Requeima, Rich Turner, and collaborators. Paper and code:

  3. 5 Jan
  4. Retweeted
    1 Jan

    Very excited to share our work showing an AI system that outperforms specialists at detecting breast cancer during screening in both the UK and US. Joint work with collaborators, published today!

  5. 9 Dec 2019
  6. Retweeted
    2 Dec 2019

    Got any burning questions for Josh Tenenbaum or Niloy Mitra? 🤔 We are opening up the floor for discussion early by inviting *YOU* to pose questions. Simply reply to this tweet with your suggestions and we will feature the most liked ones!

  7. Retweeted
    12 Nov 2019

    1/ Excited to share our NeurIPS paper, Geometry-Aware Neural Rendering! With the rest of the robotics team.

  8. Retweeted
    7 Nov 2019

    Is it too late to move NeurIPS 2020? There is no way Canada should be hosting 3 years in a row when their visa practices continue to be this racist

  9. Retweeted
    6 Nov 2019

    This is some advice I had shared with my lab on how to shorten your paper to fit the page limit. With the deadline coming up, I thought I'd share it widely.

  10. Retweeted

    I think deep learning is attracting lots of funding because it makes it seem like you can magically turn data into algorithms without the slow work of understanding the data first. This is mostly unrealistic. Eventually people will learn this, probably by losing lots of money.

  11. Retweeted
    10 Oct 2019

    Wanna play around with SPIRAL but the installation seems complicated? I've just built a Docker image to make the experience as hassle-free as possible. To get the agent up and running on your machine follow the instructions here: Have fun!

  12. 3 Oct 2019

    Recently open-sourced the RL environments we used for this work, check it out:

  13. 3 Oct 2019

    And when it comes to straight-up creativity, I'd be very curious to hear what artists and art researchers make of this! Collaborations, anyone? ;)

  14. 3 Oct 2019

    The combination of: 1. learned generative agents, 2. physically plausible environments, and 3. learned adversarial reward functions, could be useful for program synthesis, inverse graphics, chemical synthesis, music generation and so much more.

  15. 3 Oct 2019

    This means that, in certain scenarios and under certain circumstances, the representations that these agents produce can be considered to be 'semantic'.

  16. 3 Oct 2019

    The point is that these agents' representations can actually be instantiated in physical reality. This is not necessarily the case with purely neural autoencoder representations. Read more about the video below here:

  17. 3 Oct 2019

    When given more time with the canvas, agents produce images that look more natural. Of course, they're still constrained to draw with a brush, so their samples are not photo-real. But that's not the point.

  18. 3 Oct 2019

    When sufficiently constrained, agents learn to paint surprisingly abstract images. Some of the paintings remind me of cubist portraits. (Remember: no imitation or supervision). Can you spot any familiar faces? See for loads more emergent drawing styles.

  19. 3 Oct 2019
  20. 3 Oct 2019

    Now for something different! Deep RL + GAN training + CelebA = artificial caricature. Agents learn to draw simplified (artistic?) portraits via trial and error. At the creativity workshop. Animated paper: PDF: Thread.


