Luke Metz

@Luke_Metz

Research Scientist at Google Brain. Formerly , . My opinions do not represent those of my employer.

San Francisco, CA
Joined October 2012

Tweets


  1. Retweeted
    Jan 22

    1/5 In one of my Residency projects, we used CNNs to reparameterize structural optimization (w/ ). Our approach worked best on 99/116 structures. I just finished a blog post with GIFs, visualizations, and links to code + Colab.

  2. Retweeted
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini

  3. Retweeted
    Jan 9

    I feel strange writing this at this time but I should tweet about papers at some point. This one is with the amazing @jo_historian--my first paper with a historian! I learned so much from her. People always talk about using ML for X, digital humanities etc

  4. Retweeted
    Dec 17, 2019

    MetaInit: Initializing learning by learning to initialize. They propose a strategy to automatically identify good initial parameters, and show that deep architectures *without* batch norm or residual connections can be trained to get near-SOTA results. 🔥

  5. Retweeted
    Dec 13, 2019

    We're releasing "Dota 2 with Large Scale Deep Reinforcement Learning", a scientific paper analyzing our findings from our 3-year Dota project: One highlight — we trained a new agent, Rerun, which has a 98% win rate vs the version that beat .

  6. Retweeted
    Dec 12, 2019

    tips day 5 (h/t )! Conferences are a parade of successes. Remember that for every impressive paper there are many (unpublished) ideas that didn't pan out. Take this opportunity to ask people about negative results!

  7. Retweeted
    Dec 10, 2019

    Reminder: I’m going to be presenting this today! Come see the non-hexagon version of me, and ask me about inductive biases! (5:30–7:30 pm, East Exhibit Hall B+C #188)

  8. Retweeted
    Dec 9, 2019

    Meta-learning has a peculiar, widespread problem that leads to terrible performance when faced with seemingly benign changes to the training set-up. We analyze this problem & provide a solution: w/ , , Zhou,

  9. Retweeted
    Dec 9, 2019

    As promised, we have made the Text-To-Text Transfer Transformer (T5) models much easier to fine-tune for new tasks, and we just released a Colab notebook where you can try it yourself on a free TPU! 👇 (1/3)

  10. Retweeted

    Anything that can be implemented in JAX *will* be implemented in JAX. Here's a differentiable path tracer (and a tutorial!) Blog Post: Code:

  11. Retweeted
    Nov 5, 2019

    Check out our study on the effects of inductive bias and model capacity in video prediction models. This is work with Paper: Website:

  12. Retweeted
    Oct 31, 2019

    Meta Reinforcement Learning is good at adapting to very similar environments. But can we meta-learn general RL algorithms? Our new approach, MetaGenRL, is able to. With and Paper: Blog:

  13. Retweeted
    Oct 29, 2019

    Learning to Predict Without Looking Ahead: World Models Without Forward Prediction. Rather than hardcoding forward prediction, we try to get agents to *learn* that they need to predict the future. Check out our paper!

  14. Retweeted
    Oct 28, 2019
  15. Retweeted
    Oct 24, 2019

    The Meta-World paper is now out! Includes an eval of 8 methods & 5 eval modes. We look forward to seeing how your new algorithms fare on the suite of 50 tasks.

  16. Retweeted
    Oct 23, 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)

  17. Retweeted
    Oct 7, 2019

    Happy to announce our paper on Generalized Inner Loop Meta Learning, aka Gimli (), with , , Phu Mon Htut, Artem Molchanov, Franziska Meier, , , and . THREAD [1/6]

  18. Retweeted
    Oct 2, 2019

    Unsupervised Doodling and Painting with Improved SPIRAL “Under the right circumstances, some aspects of human drawing can emerge from simulated embodiment, without the need for external supervision, imitation or social cues.” pdf

  19. Retweeted

    I'd like to organize a "manifesto track" at AI conferences where speakers come up and present their controversial, slightly unhinged views on how to achieve AGI. Experimental evidence is optional. - Numenta HTM - LeCake - - Non-Axiomatic Reasoning System

  20. Retweeted
    Sep 30, 2019

    In this article, we develop some key intuitions around Temporal Difference learning and why it is such an effective tool in Reinforcement Learning. I hope the interactive diagrams that and I built are useful!

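The FixMatch result mentioned in tweet 2 rests on a very small core idea: keep only the unlabeled examples where the model's prediction on a weakly augmented view is confident, then train the strongly augmented view of the same example against that hard pseudo-label. A minimal, framework-free sketch of that loss (the function name, plain-list probability inputs, and threshold handling here are illustrative, not the paper's actual implementation):

```python
import math

def fixmatch_unlabeled_loss(probs_weak, probs_strong, threshold=0.95):
    """Unlabeled-batch loss in the FixMatch style.

    probs_weak / probs_strong: class-probability vectors (plain lists) for
    weakly and strongly augmented views of the same unlabeled examples.
    Only examples whose weak-view confidence clears `threshold` contribute.
    """
    losses = []
    for pw, ps in zip(probs_weak, probs_strong):
        conf = max(pw)
        if conf < threshold:
            continue                          # confidence mask: skip uncertain examples
        pseudo = pw.index(conf)               # hard pseudo-label from the weak view
        losses.append(-math.log(ps[pseudo]))  # cross-entropy on the strong view
    return sum(losses) / max(len(losses), 1)

# Example: only the first pair is confident enough to contribute.
loss = fixmatch_unlabeled_loss([[0.98, 0.02], [0.6, 0.4]],
                               [[0.9, 0.1], [0.5, 0.5]])
```

The confidence mask is what keeps the method stable early in training, when most pseudo-labels would otherwise be noise.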
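The Temporal Difference article in tweet 20 builds intuition for what is ultimately a one-line update: nudge the value estimate of the current state toward the immediate reward plus the discounted value of the next state. A toy TD(0) sketch (state names and hyperparameters are illustrative):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V[s] toward the bootstrapped target r + gamma * V[s_next]."""
    target = r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])
    return V

# Repeating the update on a fixed transition drives V['a']
# toward the target 0.5 + 0.9 * 1.0 = 1.4.
V = {"a": 0.0, "b": 1.0}
for _ in range(1000):
    td0_update(V, "a", r=0.5, s_next="b")
```

Because the target bootstraps off the current estimate of the next state, updates propagate value information backward through the state space without waiting for full episode returns.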

