William Fedus

@LiamFedus

PhD candidate with Yoshua Bengio and Hugo Larochelle. I also spend time

Montréal, Québec
Joined: October 2012

Tweets


  1. Pinned Tweet
    12 Dec 2019

    Looking forward to the workshops tomorrow - my favorite part of . I will speak about the MEMENTO observation in Atari agents tomorrow at 4:15 in the BARL workshop. Come see us at the poster! yoshuawonttweet

  2. Retweeted
    3 Feb

    The #1 cause of startup death is making something no one wants. The #2 cause is spending too much. Those two account for so many deaths that I'm not even sure what #3 is. If you merely make something people want and don't spend too much, you're way ahead.

  3. Retweeted
    31 Jan

    An Opinionated Guide to ML Research: “To make breakthroughs with idea-driven research, you need to develop an exceptionally deep understanding of your subject, and a perspective that diverges from the rest of the community—some can do it, but it’s hard.”

  4. 28 Jan

    Pigeons as food-seeking contextual bandits capable of diagnosing breast cancer from images. I imagine much initial confusion stumbling upon this title: "Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images"

  5. 23 Jan

    Whoops, my tired eyes mistook the double dagger for another asterisk. Khandelwal was sole first author.

  6. Retweeted
    22 Jan

    FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini

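    The FixMatch tweet above turns on one simple mechanism: pseudo-label an unlabeled example from its weakly augmented view, and train on the strongly augmented view only when the model is confident. A minimal NumPy sketch of that thresholding step follows; the function name and the 0.95 threshold are illustrative, not the authors' API.

    ```python
    import numpy as np

    def fixmatch_mask_and_targets(weak_probs, threshold=0.95):
        """Simplified FixMatch core: given predicted class distributions on a
        weakly augmented unlabeled batch, keep only high-confidence examples
        and use their argmax as pseudo-labels for the strong-augmentation loss."""
        weak_probs = np.asarray(weak_probs, dtype=float)
        confidence = weak_probs.max(axis=1)          # top predicted probability
        mask = confidence >= threshold               # which examples incur loss
        pseudo_labels = weak_probs.argmax(axis=1)    # hard pseudo-label per example
        return mask, pseudo_labels
    ```

    In the full method, the masked cross-entropy on strongly augmented views is added to the usual supervised loss on the small labeled set.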
  7. 22 Jan

    Enjoyed the kNN-LM paper by Khandelwal and Levy et al. (2019). Using an interpolated non-parametric and parametric model, they set a SOTA on Wikitext, reducing perplexity by 2.9 points. The approach particularly helps with long-tail predictions.

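    The interpolation mentioned in the kNN-LM tweet is just a convex combination of the nearest-neighbor distribution and the base LM distribution over the next token. A sketch, with an illustrative mixing weight (the paper tunes this on validation data):

    ```python
    import numpy as np

    def knn_lm_interpolate(p_lm, p_knn, lam=0.25):
        """Mix a parametric LM's next-token distribution with a non-parametric
        kNN distribution: lam * p_kNN + (1 - lam) * p_LM. Both inputs are
        probability vectors over the same vocabulary."""
        p_lm = np.asarray(p_lm, dtype=float)
        p_knn = np.asarray(p_knn, dtype=float)
        return lam * p_knn + (1.0 - lam) * p_lm
    ```

    Because both inputs are valid distributions, the mixture is too, so it plugs directly into perplexity evaluation.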
  8. Retweeted
    14 Jan

    I often meet research scientists interested in open-sourcing their code/research and asking for advice. Here is a thread for you. First: why should you open-source models along with your paper? Because science is a virtuous circle of knowledge sharing not a zero-sum competition

  9. Retweeted
    5 Jan

    Consider: millions of years ago our antecedents gave a massive sacrifice of their left hemisphere. We lost a tremendous amount of short term memory and replaced it with Broca’s, Wernicke & the phonological loop. But why? So we can—talk. Thus chimpanzees can do this—we can’t:

  10. Retweeted

    ***What were the most interesting, scientifically insightful, and coherent papers that you read in 2019??*** Rules: 1) No restriction on methodology or aim (theoretical, experimental, & applications papers welcome). 2) Do not post your own papers.

  11. Retweeted
    20 Dec 2019

    We're pleased to let you know that your submission, On Bonus Based Exploration Methods In The Arcade Learning Environment, has been accepted at ! This huge endeavor was led by . W/ , & . More👇🏼

  12. 19 Dec 2019

    Our paper led by is available! We consider RL improvements in text-adventure/interactive fiction games. Text-based games are an interesting RL env testing language understanding, reasoning and behavior in huge state-action spaces

  13. Retweeted
    14 Dec 2019

    Come see our poster at the Deep RL Workshop... Bring your questions! West Exhibition Hall C

  14. Retweeted

    My band is presenting a poster at NeurIPS today :) also we will be playing a set at the banquet at 7:30. Come check it and other great artistic applications of machine learning out at the ml creativity workshop :)

  15. 14 Dec 2019

    Great talk by on hierarchical RL at the DRL workshop! They find that much of the benefit of hierarchical methods is attributable to exploration advantages, rather than to a semantically meaningful action space or better optimization.

  16. Retweeted
    13 Dec 2019

    Tomorrow I'll be talking about JAX MD: a hardware accelerated, end-to-end differentiable, molecular dynamics library at the ML4PS at 9:20am (along with tons of amazing speakers). Paper: Code: Colab:

  17. Retweeted
    12 Dec 2019

    Excited to speak about MEMENTO with at the BARL workshop (see details below). MEMENTO is a simple yet surprising observation that training a new agent with "no memory of the past" can help DRL algorithms break through learning plateaus on Atari. Check it out!

  18. Retweeted
    12 Dec 2019

    GCSL is a simple algorithm which reduces goal conditioned RL to a supervised learning problem, avoiding the complexity of value functions or policy gradients. Work with , Justin Fu, Ashwin Reddy, Coline Devin, Ben Eysenbach,

    Poništi
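    The reduction the GCSL tweet describes rests on hindsight relabeling: any state actually reached later in a trajectory is treated as a goal the earlier actions successfully pursued, yielding ordinary supervised (state, goal, action) examples for a goal-conditioned policy. A minimal sketch under that reading; the function name and tuple layout are illustrative, not the authors' code.

    ```python
    def gcsl_relabel(trajectory):
        """Hindsight relabeling for goal-conditioned supervised learning.
        `trajectory` is a list of (state, action) pairs in time order; every
        later visited state becomes a goal paired with the earlier state-action,
        producing (state, goal, action) supervised training tuples."""
        data = []
        for t, (state, action) in enumerate(trajectory):
            for future_state, _ in trajectory[t + 1:]:
                data.append((state, future_state, action))
        return data
    ```

    The policy is then fit by plain maximum likelihood on these tuples, with no value function or policy gradient involved.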
  19. Retweeted
    10 Dec 2019

    How to build 1000+ layer Transformers with 80+ billion parameters? By using GPipe 🙂 We will be presenting GPipe today - East Exhibition Hall B+C at poster #40 Paper > Poster and Slides > (1/4)

  20. Retweeted
    9 Dec 2019

    As promised, we have made the Text-To-Text Transfer Transformer (T5) models much easier to fine-tune for new tasks, and we just released a Colab notebook where you can try it yourself on a free TPU! 👇 (1/3)

