George Tucker

@georgejtucker

Researcher at Google Brain, thinking about RL and sequence models

Joined February 2011

Tweets


  1. Pinned Tweet
    Nov 4, 2019

    rely on approximate sampling algorithms, leading to a mismatch between the model and inference. Instead, we consider the sampler-induced distribution as the model of interest yielding a class of tractable . ()

  2. Retweeted
    Jan 14

    I'm delighted to announce that I have just started a new role as the Florence Nightingale Bicentennial Fellow and Tutor in Statistics and Probability at the Department of Statistics here in Oxford. I'm looking forward to growing my research group here over the next 5 years.

  3. Retweeted
    Dec 13, 2019

    I'll be discussing this work and other challenges in meta-learning at the Bayesian Deep Learning Workshop at 1:20 pm, West Exhibition Hall C.

  4. Retweeted
    Dec 12, 2019

    Looking forward to the workshops tomorrow - my favorite part of . and I will speak about the MEMENTO observation in Atari agents tomorrow at 4:15 in the BARL workshop. Come see us at the poster! , , yoshuawonttweet,

  5. Retweeted
    Dec 10, 2019

    Come learn how to effectively learn policies from entirely off-policy data with Bootstrapping Error Accumulation Reduction (BEAR), presented tomorrow (Wed) at by Aviral Kumar, at 5:30 pm, poster #214

  6. Dec 11, 2019

    We're presenting our work on (EIM), which leverages a learned energy function. Unlike , EIMs are tractable to sample from and train via a lower bound on the log-likelihood. 10:45am Wed #120 ()

  7. Retweeted

    Our tutorial on importance sampling and sequential Monte Carlo methods is now published in Foundations and Trends in ML. Should be available at NeurIPS. Highlights include learning proposals, target distributions, unbiasedness results and more.

  8. Retweeted
    Dec 9, 2019

    Meta-learning has a peculiar, widespread problem that leads to terrible performance when faced with seemingly benign changes to the training set-up. We analyze this problem & provide a solution: w/ , , Zhou,

  9. Retweeted
    Dec 9, 2019

    (1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123) We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.

  10. Retweeted
    Dec 5, 2019

    Can we use deep RL to learn from data, rather than from online interaction? Aviral Kumar discusses challenges and recent work in this area, including our algorithm BEAR for fully off-policy RL:

  11. Retweeted
    Dec 4, 2019

    Very excited about my new paper! We formulate the on-policy max-return RL objective w.r.t *arbitrary* offline data and without *any* explicit importance correction. Amazingly, the gradient of the objective w.r.t pi is exactly the on-policy policy gradient!

  12. Dec 5, 2019

    The accepted papers here are a goldmine! Very excited about this workshop.

  13. Retweeted
    Nov 27, 2019

    Offline RL: what do you need to know about this notoriously difficult regime? Although recent papers propose a variety of algorithmic novelties, we find many of these unnecessary in practice. Extensive studies will hopefully guide future research & practice.

  14. Retweeted
    Nov 26, 2019

    We are happy to announce the v2.0 release of the Google Research Football Environment. The most exciting feature of this release is the Game Server, which lets your agent compete online with other researchers' models. Visit and give it a try!

  15. Retweeted
    Nov 12, 2019

    I'm starting a professorship in the CS department at UNC in fall 2020 (!!) and am hiring students! If you're interested in doing a PhD please get in touch. More info here:

  16. Retweeted
    Nov 7, 2019

    Applications for the 2020 Google AI Residency program are now open! Visit for application information. To learn more about the research accomplishments of the 2019 alumni, check out the post below!

  17. Retweeted
    Nov 5, 2019

    We are hiring in our approx-Bayes team at RIKEN-AIP. Post-docs (2 positions), RA (4 positions), interns (15 positions). Job posting: Team page: Email: jobs-abi-riken-aip@googlegroups.com Help me spread the word. Retweets appreciated.

  18. Retweeted
    Oct 18, 2019

    ⏳ The review period for is slowly wrapping up, and our reviewers have been working hard on their assessments. If you are reviewing , please help us get all our reviews in on time and submitted by next Wednesday 🙏🏾

  19. Retweeted
    Oct 16, 2019

    We're extremely excited to release our BoTorch paper! Scalable, flexible, and modular Bayesian optimization integrated with GPyTorch and . It's been a pleasure working with , Max, and team. paper: code:

  20. Retweeted
    Oct 11, 2019

    Machine learning now has an over-agitated culture that IMHO is bad for researcher wellbeing. You're not a loser if hype, firehose arXiv, FOMO, feuds, overwork & random reviews cause you unhealthily-sustained stress, anxiety & gloom. We didn't & don't have to have this culture.


