Lisa Lee

@rl_agent

Machine Learning PhD; Visiting Researcher Brain & ; Mathematics

Joined February 2016

Tweets


  1. Dec 25, 2019

    Video & slides for LIRE workshop @ are now up: Check out the Talks and Panel by Jeff Bilmes, Tom Griffiths & more. Thanks to all speakers & presenters for making the workshop a success!

  2. Retweeted

    Congratulations to the MLD TA Awards 2019 recipients Liam Li, Lisa Lee, Nicholay Topin, Paul Liang, and Maruan Al-Shedivat! Being a TA in is challenging and significant. We are incredibly thankful for all of your efforts -

  3. Retweeted
    Nov 5, 2019

    Nice work from Lisa, Ben, Sergey et al! State marginal matching offers a new paradigm for teaching skills, differing from stationary reward maximization or imitation from demos. Inspired parts of our recent work!

  4. Retweeted
    Nov 5, 2019

    (first tweet!) Our paper got Best Paper Award at CoRL 2019! A summary and extension of imitation learning methods and their application to state marginal matching.

  5. Oct 1, 2019

    I just started at Stanford this week as a visiting researcher in 's lab, and I'm also still part-time at Google Brain Robotics. If you're in the area and would like to chat about research, please feel free to reach out anytime! (My office is in Gates)

  6. Retweeted
    Jul 30, 2019

    We're organizing a workshop on Learning with Rich Experience: Integration of Learning Paradigms, with an amazing lineup of speakers! Deadline: Sept 11. w/ Taylor Berg-Kirkpatrick & Eric Xing

  7. Retweeted
    Jun 18, 2019

    Code by for State Marginal Matching: learning an exploration policy for which the state marginal distribution matches a given target distribution, incorporating prior knowledge about the task. Paper: Code:

  8. Jun 18, 2019

    We've released our code for State Marginal Matching: a principled objective that explores well in multi-task settings, easily incorporates prior knowledge, & unifies previous exploration methods. w/ B Eysenbach, E Parisotto, E Xing

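    A minimal sketch of the objective behind the two tweets above, assuming the KL formulation (notation mine, not from the code release): the exploration policy \pi is trained so that its state marginal \rho_\pi(s) matches a target distribution p^*(s),

        \min_\pi \; D_{\mathrm{KL}}\!\big(\rho_\pi(s) \,\|\, p^*(s)\big) \;=\; \max_\pi \; \mathbb{E}_{s \sim \rho_\pi}\!\big[\log p^*(s)\big] + \mathcal{H}\big[\rho_\pi\big],

    which decomposes into a pseudo-reward \log p^*(s) (where prior task knowledge enters) plus a state-entropy bonus (which drives exploration), and recovers earlier exploration bonuses as special cases of the choice of p^*.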
  9. Jun 15, 2019

    Congratulations & thanks Kamalika and Russ for organizing! And thanks for having us workflow chairs onboard! It was an amazing experience to see the conference come together from start to finish.

  10. Retweeted
    Jun 13, 2019

    Can we learn policies that match *state distributions* (randomly visit states with a desired distr.)? This generalization of RL is useful for exploration. Exploration via State Marginal Matching, w/ B. Eysenbach, E. Parisotto, E. Xing,

  11. Retweeted
    Jun 13, 2019

    Can we use reinforcement learning together with search to solve temporally extended tasks? In Search on the Replay Buffer (w/ Ben Eysenbach and ), we use goal-conditioned policies to build a graph for search. Paper: Colab:

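    A minimal sketch of the graph-search idea in the tweet above, under assumptions (names like `dist` are illustrative, not from the released code): nodes are states sampled from the replay buffer, edge weights come from a distance estimate derived from the goal-conditioned value function, and shortest-path search chains short goal-reaches into a long-horizon plan.

        import heapq

        def build_graph(states, dist, max_dist=5.0):
            # Connect replay-buffer states whose predicted distance is small.
            # `dist(s, g)` is a hypothetical stand-in for a distance estimate
            # derived from a goal-conditioned value function; `max_dist`
            # prunes unreliable long-range edges.
            edges = {i: [] for i in range(len(states))}
            for i, s in enumerate(states):
                for j, g in enumerate(states):
                    if i != j:
                        d = dist(s, g)
                        if d < max_dist:
                            edges[i].append((j, d))
            return edges

        def shortest_path(edges, start, goal):
            # Dijkstra over the state graph; returns waypoint indices.
            frontier = [(0.0, start, [start])]
            visited = set()
            while frontier:
                cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in visited:
                    continue
                visited.add(node)
                for nxt, d in edges[node]:
                    if nxt not in visited:
                        heapq.heappush(frontier, (cost + d, nxt, path + [nxt]))
            return None  # goal unreachable in the pruned graph

    The agent would then follow its goal-conditioned policy toward each waypoint in turn, reducing one distant goal to a sequence of short, reliable reaches.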
  12. Retweeted
    Jun 7, 2019

    For the first time at a major machine learning conference, and I implemented a new Code-at-Submit-Time measure, and we are delighted by the strong community response. Read about the outcome here:

  13. Retweeted
    May 5, 2019

    Check out Lisa Lee's and Ben Eysenbach's Contributed Talks @ Monday workshops on Exploration & Meta-RL via State Marginal Matching:

  14. May 5, 2019

    Excited to give two Contributed Talks @ on Monday w/ Ben Eysenbach on our new work, Exploration & Meta-RL via State Marginal Matching (w/ Ben, Emilio): 12:15 @ TARL, 15:50 @ SPiRL

  15. May 1, 2019

    I wrote a Colab tutorial on MaxEnt RL: It implements the graphical model from 's "RL as Inference" tutorial for a simple chain environment. Play around with the reward function to learn different policies using the forward-backward algorithm!

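    A minimal sketch of what such a tutorial computes, assuming the standard control-as-inference setup on a finite-horizon tabular MDP (the actual Colab may differ; names and shapes here are mine). Only the backward pass is shown, which already yields the policy; the forward pass would additionally give state-visitation marginals.

        import numpy as np

        def maxent_chain_policy(P, r, T):
            # Backward messages of the 'RL as inference' graphical model,
            # with a uniform action prior:
            #   beta_t(s, a) = exp(r(s, a)) * sum_s' P(s'|s, a) * beta_{t+1}(s')
            # P: transitions (S, A, S); r: rewards (S, A); T: horizon.
            S, A, _ = P.shape
            beta_s = np.ones(S)                      # beta_T(s) = 1
            policies = []
            for t in reversed(range(T)):
                beta_sa = np.exp(r) * (P @ beta_s)   # (S, A)
                beta_s = beta_sa.sum(axis=1)         # (S,)
                policies.append(beta_sa / beta_s[:, None])  # pi_t(a|s)
            return policies[::-1]

        # Tiny 3-state chain: action 0 steps left, action 1 steps right (clipped).
        S, A = 3, 2
        P = np.zeros((S, A, S))
        for s in range(S):
            P[s, 0, max(s - 1, 0)] = 1.0
            P[s, 1, min(s + 1, S - 1)] = 1.0
        r = np.zeros((S, A))
        r[S - 1, :] = 1.0                            # reward only in the last state
        pi = maxent_chain_policy(P, r, T=5)
        print(pi[0])                                 # soft preference for moving right

    Changing `r` changes the soft-optimal policy, which is the "play around with the reward function" part of the tutorial.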
  16. Retweeted
    Apr 21, 2019

    decisions are out! A big thanks to our reviewers, area chairs and senior area chairs for their hard work. See you all in Long Beach!

  17. Retweeted
    Apr 21, 2019

    And also big thanks to our ICML workflow chairs, and for their huge help!

  18. Mar 7, 2019

    Check out Emilio's new paper: Concurrent Meta Reinforcement Learning (w/ , , and others). tl;dr: CMRL learns a multi-agent communication protocol to coordinate exploration between parallel rollout agents.

  19. Retweeted
    Feb 5, 2019

    Posted a new paper on Embodied Multimodal Multitask Learning for semantic goal navigation and embodied question answering (with , , , ). PDF: Demo Videos:

  20. Retweeted
    Feb 5, 2019
