Brandon Amos

@brandondamos

Research scientist at (FAIR). I study machine learning and optimization. Sometimes deep, sometimes convex, sometimes both. PhD from CMU.

New York, NY
Joined January 2014

Tweets

  1. Retweeted
    Jan 31

    Happy to present GradientDICE w/ Bo, , fixing key problems of GenDICE, the current state-of-the-art for behaviour-agnostic density-ratio-learning-based off-policy evaluation.

  2. Retweeted
    Jan 30

    We're standardizing OpenAI's deep learning framework on PyTorch to increase our research productivity at scale on GPUs (and have just released a PyTorch version of Spinning Up in Deep RL):

  3. Retweeted
    Jan 30

    Humans learn from a curriculum from birth. We can learn complicated math problems because we have accumulated enough prior knowledge. This could be true for training an ML/RL model as well. Let's see how a curriculum can help an RL agent learn:

  4. Retweeted
    Jan 29

    New Medium article about my work w/ and on extracting representations of different senses of polysemic words from deep contextualized models like BERT, ELMo, and fastText. 👉 👈 (+ code on GitHub)

  5. Retweeted

    *Our paper diagnosing problems in fair ML is now on arXiv!* Took a few weeks, b/c our interdisciplinary collaboration broke the paper categorization system. 😂

  6. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

  7. Retweeted
    Jan 28

    Ensemble Rejection Sampling with and Sylvain Rubenthaler: Rejection sampling meets dynamic programming - exact simulation from the posterior of states of a class of continuous state HMM using randomized finite state HMM.
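
The ensemble method above builds on classic rejection sampling; as background only, here is a minimal sketch of the plain (non-ensemble) version, with a made-up target/proposal pair — it is not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n):
    """Plain rejection sampling: draw x ~ proposal and accept with
    probability target(x) / (m * proposal(x)), which requires
    target(x) <= m * proposal(x) for all x."""
    samples = []
    while len(samples) < n:
        x = proposal_sample()
        if rng.uniform() < target_pdf(x) / (m * proposal_pdf(x)):
            samples.append(x)
    return np.array(samples)

# Illustrative target: standard normal; proposal: uniform on [-5, 5].
target = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2 * np.pi)
samples = rejection_sample(
    target_pdf=target,
    proposal_sample=lambda: rng.uniform(-5, 5),
    proposal_pdf=lambda x: 0.1,   # uniform density on [-5, 5]
    m=4.0,                        # 0.3989... <= 4.0 * 0.1 everywhere
    n=2000,
)
```

Accepted draws are exact samples from the target; the ensemble variant in the paper combines this guarantee with dynamic programming over HMM states.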

  8. Retweeted
    Jan 28

    Interesting-looking work inspired by Minkowski space (), which treats space-time as a single entity. It is also open source.

  9. Retweeted
    Jan 26

    Quaternions and Euler angles are discontinuous and difficult for neural networks to learn. They show that 3D rotations have continuous representations in 5D and 6D, which are more suitable for learning, i.e., regress two vectors and apply Gram-Schmidt (GS).
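
The 6D-to-rotation map is simple to sketch: orthonormalize the two regressed vectors and complete the frame with a cross product. This is the generic Gram-Schmidt construction under my own naming, not code from the paper.

```python
import numpy as np

def rotation_from_6d(a, b):
    """Map two 3D vectors (a 6D representation) to a rotation matrix
    via Gram-Schmidt orthogonalization."""
    c1 = a / np.linalg.norm(a)            # normalize the first vector
    b_perp = b - np.dot(c1, b) * c1       # remove the component along c1
    c2 = b_perp / np.linalg.norm(b_perp)  # normalize the second vector
    c3 = np.cross(c1, c2)                 # third column completes the frame
    return np.stack([c1, c2, c3], axis=1)

R = rotation_from_6d(np.array([1.0, 0.1, 0.0]), np.array([0.0, 1.0, 0.2]))
```

Any pair of non-parallel vectors maps to a valid rotation (orthonormal columns, determinant +1), which is what makes the representation continuous and regression-friendly.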

  10. Retweeted
    Jan 26

    Teaching Deep Unsupervised Learning (2nd edition) at this semester. You can follow along here: Instructor Team: , , , Wilson Yan, Alex Li. YouTube, PDF, and Google Slides for ease of re-use.

  11. Retweeted
    Jan 25

    This semester I'm teaching a new PhD course "Economics, AI, and Optimization." I'll be covering how AI/Opt methods enable large-scale economic solution concepts. I'm planning to share lecture notes that I hope will be of broader interest.

  12. Retweeted
    Jan 24

    This is one of the strongest papers I co-authored, with unique results on robust point cloud registration and a manifesto of certifiable perception: Paper: Code: Video: Kudos to Hank and Jingnan!

  13. Retweeted
    Jan 23

    Super excited to share new work “TEASER: Fast and Certifiable Point Cloud Registration” with Jingnan Shi and Paper: Code: TEASER is the first algorithm of its kind in many practical and theoretical aspects:

  14. Retweeted
    Jan 23

    My first experience with evolutionary robotics 🙉 Learning a neural-network-based controller of a robot with evolvable morphology. A great outcome from a collaboration with , , Fuda, and ! The arXiv version:

  15. Retweeted
    Jan 23

    Q-learning is difficult to apply when the number of available actions is large. We show that a simple extension based on amortized stochastic search allows Q-learning to scale to high-dimensional discrete, continuous or hybrid action spaces:
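
A sketch of the amortized-search idea as I read it: replace the exact argmax in the Q-learning target with a max over a small set of sampled candidate actions. The quadratic stand-in Q-function and Gaussian proposal below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, action):
    # Stand-in Q-function; in practice this would be a neural network.
    return -np.sum((action - state) ** 2)

def amortized_max_q(state, proposal_mean, n_samples=64, noise=0.1):
    """Approximate max_a Q(s, a) by evaluating Q on sampled candidate
    actions instead of enumerating a huge action space."""
    # Candidates from a learned proposal (here: a Gaussian around a mean),
    # plus uniform samples for coverage of the action space.
    proposals = proposal_mean + noise * rng.standard_normal((n_samples, state.size))
    uniform = rng.uniform(-1, 1, size=(n_samples, state.size))
    candidates = np.vstack([proposals, uniform])
    values = [q_value(state, a) for a in candidates]
    best = int(np.argmax(values))
    return candidates[best], values[best]

state = np.array([0.3, -0.2])
action, value = amortized_max_q(state, proposal_mean=state)
```

The cost per update scales with the number of candidates rather than the size of the action space, which is what lets Q-learning handle high-dimensional discrete, continuous, or hybrid actions.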

  16. Retweeted
    Jan 23

    A mixture of Gaussians model can be represented as a weighted point cloud (actually a measure) over the mean/covariance domain. Mixture fitting is a non-convex optimization problem.
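
As a concrete illustration of that non-convexity, EM for a tiny two-component 1D mixture monotonically improves the likelihood but only reaches a local optimum determined by its initialization. The data and starting values below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1D Gaussian clusters.
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])

def em_gmm(x, n_iter=50):
    """Fit a two-component 1D Gaussian mixture with EM."""
    mu = np.array([-1.0, 1.0])   # initial means
    sigma = np.array([1.0, 1.0]) # initial standard deviations
    pi = np.array([0.5, 0.5])    # initial mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities (the shared 1/sqrt(2*pi) factor cancels).
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

mu, sigma, pi = em_gmm(data)
```

Swapping the initial means for ones far from both clusters can land EM in a poor local optimum, which is the point of the tweet.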

  17. Retweeted
    Jan 22

    New results on non-stochastic control without knowing the system! Joint work with and Karan Singh. You may notice the name is a tribute to one of my favorite papers of all time by Auer, Cesa-Bianchi, Freund, and Schapire (a must read!)

  18. Retweeted
    Jan 22

    Excited to share PCGrad, a super simple & effective method for multi-task learning & multi-task RL: project conflicting gradients On Meta-World MT50, PCGrad can solve *2x* more tasks than prior methods w/ Tianhe Yu, S Kumar, Gupta, ,
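
The "project conflicting gradients" step is easy to sketch: when two task gradients point against each other (negative inner product), project one onto the normal plane of the other before summing. A simplified reading of the method, with made-up example gradients:

```python
import numpy as np

def pcgrad(grads, seed=0):
    """Project each task gradient onto the normal plane of any other
    task gradient it conflicts with, then sum the projected gradients."""
    rng = np.random.default_rng(seed)
    projected = []
    for i, g in enumerate(grads):
        g = g.copy()
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)                 # visit other tasks in random order
        for j in others:
            dot = g @ grads[j]
            if dot < 0:                     # conflicting gradient
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    return np.sum(projected, axis=0)

# Two conflicting task gradients (their inner product is negative).
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
update = pcgrad([g1, g2])
```

After projection, neither per-task gradient has a negative component along the other original gradient, so the combined update stops tasks from directly undoing each other's progress.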

  19. Retweeted
    Jan 22

    We are releasing a well-tuned and miniature implementation of Soft Actor-Critic () together with : . We test it on many continuous control tasks from the Control Suite and report the following results:

  20. Retweeted
    Jan 21

    Graduated Non-Convexity (GNC) is the counterpart for RANSAC: while RANSAC robustifies minimal solvers, GNC robustifies non-minimal solvers. The intriguing duality for robust estimation: {Consensus Maximization, Minimal Solver, RANSAC} and {M-estimation, Non minimal Solver, GNC}😀
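
A toy illustration of the GNC column of that duality, using a Geman-McClure robust loss on a scalar estimation problem (both choices are mine, not from the tweet): start from a near-convex surrogate of the robust loss, solve, and gradually tighten the surrogate toward the non-convex loss.

```python
import numpy as np

def gnc_robust_mean(x, c=1.0, mu0=64.0, iters_per_level=5):
    """Robust mean via graduated non-convexity with a Geman-McClure
    loss: large mu makes the surrogate nearly quadratic; mu -> 1
    recovers the non-convex robust loss that rejects outliers."""
    est = x.mean()                # initialize with ordinary least squares
    mu = mu0
    while mu >= 1.0:
        for _ in range(iters_per_level):
            r2 = (x - est) ** 2
            w = (mu * c**2 / (mu * c**2 + r2)) ** 2  # per-point weights
            est = (w * x).sum() / w.sum()            # weighted LS update
        mu /= 2.0                 # tighten the surrogate
    return est

# Inliers near 0 plus a cluster of gross outliers at 10.
data = np.concatenate([np.linspace(-0.5, 0.5, 50), np.full(10, 10.0)])
est = gnc_robust_mean(data)
```

The ordinary mean of this data is pulled to about 1.67 by the outliers, while the GNC estimate stays near the inlier center — the non-minimal-solver counterpart of what RANSAC achieves with minimal solvers.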
