Brandon Amos

@brandondamos

Research scientist at (FAIR). I study machine learning and optimization. Sometimes deep, sometimes convex, sometimes both. PhD from CMU.

New York, NY
Joined January 2014

Tweets


  1. Pinned Tweet
    May 2

    A life update. I've dodged all of the attacks and have successfully defended my PhD thesis. Thanks everybody! My thesis document and slides are available on GitHub

  2. Retweeted
    Jun 26

    New pre-print on learning task-agnostic representations for partially observable environments in reinforcement learning. With Amy Zhang, Luis Pineda, Laurent Itti, and Joelle Pineau

  3. Retweeted
    Jun 26

    Excited to share our new paper: 'Monte Carlo Gradient Estimation in Machine Learning', with . It reviews all the things we know about computing gradients of probabilistic functions. 🐾Thread👇🏾

  4. Retweeted
    Jun 26

    I'm happy to share my implementation of Glow that reproduces results from "Do Deep Generative Models Know What They Don't Know?" ( et al.) Includes pretrained model, evaluation notebooks and training code!

  5. Retweeted
    Jun 24

    A Regularized Opponent Model with Maximum Entropy Objective

  6. Retweeted
    Jun 25

    Happy to share our work: Shaping Belief States with Generative Environment Models for RL. Thanks Karol Gregor, Frederic Besse, Yan Wu, Hamza Merzic and !

  7. Retweeted
    Jun 25

    Our new paper shows that to evaluate a dialog model, you need a human to actually talk to it! We then use self-play to accurately approximate the human. Paper: Code: Platform:

  8. Retweeted
    Jun 25

    Polyhedral duality is cool and it's associated with lots of really neat centuries-old drawings such as those from Kepler:

  9. Retweeted
    Jun 25

    I'm honored to have been named one of MIT's 35 Innovators Under 35! I'm lucky to have had such a wonderful collection of mentors, collaborators, and friends at during my PhD.

  10. Retweeted
    Jun 25

    New paper out! An advancement in properly estimating off-policy occupancy ratios. We apply it to off-policy policy evaluation with great results, but we believe it should be useful in many more off-policy settings!

  11. Jun 24

    An interesting note about the swimmer gym environment:

  12. Jun 24

    Exploring Model-based Planning with Policy Networks and Jimmy Ba Paper: Code:

  13. Retweeted
    Jun 22

    Video of my lecture at camp yesterday, "Frontiers of AI Arts." This class was mainly on advanced language models (like GPT-2) and generative audio

  14. Jun 21

    Notably, we also revive and revisit the truncated top-k entropy loss from Lapin et al. as another reasonable baseline for top-k classification that Berrada, Zisserman, and Kumar did not consider, and show how it can be extended to multi-label settings for scene graph generation.

  15. Jun 21

    We add the LML layer with a few lines of code to existing code for top-k CIFAR-100 classification and scene graph generation, and recover or surpass the accuracy of the state-of-the-art models.

  16. Jun 21

    Now you can maximize the top-k recall with the LML layer by just posing it as a maximum likelihood problem over the labels that you observe, *without* worrying about your model collapsing.

  17. Jun 21

    We then propose that projecting onto another polytope, which we call the LML polytope, is useful for learning in top-k settings. It doesn't have an explicit closed-form solution, but we show that solving and differentiating through this projection operation is easy and tractable.

  18. Jun 21

    We start by motivating our work with other projections in machine learning and reviewing that the ReLU, sigmoid, and softmax layers are just explicit closed-form solutions to convex constrained optimization problems that project onto polytopes.

  19. Jun 21

    Excited to share my new tech report from my internship on the Limited Multi-Label projection layer! Joint work with Vladlen Koltun and Paper: Code:

  20. Retweeted
    Jun 20

    Our paper on the link between information matrices and generalization is out: This is the result of the fantastic work of , with help from , , and Yoshua Bengio.

  21. Retweeted
    Jun 20

    When should we use a model to improve RL? We've analyzed this theoretically and empirically, with a monotonic improvement result, an error accumulation study (vid below), and a proposal for the most efficient RL method yet, MBPO. w/ , J. Fu, M. Zhang

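The LML-layer thread above (items 14-19) rests on the observation that ReLU and softmax are closed-form solutions to convex projection problems onto polytopes. A minimal NumPy sketch of that claim for ReLU and softmax follows; this is an illustrative sketch only, not the paper's code, and the LML projection itself has no such closed form:

```python
import numpy as np

def relu_via_projection(x):
    """Euclidean projection onto the nonnegative orthant {y : y >= 0}.

    argmin_y ||y - x||^2  s.t.  y >= 0  has the closed form max(x, 0),
    which is exactly the ReLU.
    """
    return np.maximum(x, 0.0)

def softmax_via_projection(x):
    """Entropy-regularized projection onto the probability simplex.

    argmax_y <x, y> + H(y)  s.t.  y >= 0, sum(y) = 1,
    with H(y) = -sum_i y_i log y_i, has the closed form softmax(x).
    """
    z = np.exp(x - x.max())  # shift by the max for numerical stability
    return z / z.sum()

# First-order optimality check for the simplex problem: at the optimum,
# x_i - log(y_i) - 1 equals the Lagrange multiplier of the sum constraint,
# so x - log(y) is constant across coordinates (peak-to-peak spread ~0).
x = np.array([0.5, -1.2, 2.0])
y = softmax_via_projection(x)
print(np.ptp(x - np.log(y)))  # ~0 up to floating-point error
```

The same first-order reasoning is what makes layers like this differentiable: the projection's optimality conditions can be implicitly differentiated even when, as for the LML polytope, no closed form exists.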

