Yiding Jiang

@yidingjiang

AI resident . BS from . Giving this Twitter thing a try.

Joined December 2015

Tweets


  1. Retweeted
    Jan 9

    What I did over my winter break! It gives me great pleasure to share this summary of some of our work in 2019, on behalf of all my colleagues at & .

  2. Retweeted
    Dec 10, 2019

    Interested in RL algorithms that learn to follow instructions, use hierarchical abstractions, and achieve compositional generalization? We investigate the many benefits of language in RL. We'll be at poster #197 at 10:45 am! w/ Murphy

  3. Dec 5, 2019

    We will be presenting this paper at next week. Please stop by our poster if you are interested or want to chat more about it! (tagging who just recently discovered Twitter😉)

  4. Dec 5, 2019

    "What does it mean to explain generalization in deep learning?" We attempt to answer this question from a more causal perspective. (It was 4 am and the paper was still titled "placeholder", so I decided to have a little fun and everyone else was probably too tired to object. 😬)

  5. Retweeted
    Dec 4, 2019

    Fantastic Generalization Measures and Where to Find Them “We present the first large scale study of generalization in deep networks. We train over 10,000 convolutional networks by systematically varying commonly used hyperparameters.”

  6. Retweeted
    Dec 4, 2019

    Been waiting for this paper to drop. It's here. I've got my NeurIPS flight reading sorted out. I think this is an important step towards gaining clarity on what it might mean to "explain generalization".

  7. Jul 10, 2019

    I will be presenting this work at CASE 2019 in Vancouver!

  8. Jul 9, 2019
  9. Retweeted
    Jul 4, 2019

    1/ Can we use model-based planning in behavior space rather than action space? DADS can discover skills without any rewards, which can later be composed zero-shot via planning in the behavior space for new tasks. Paper: Website:

  10. Retweeted
    Jun 26, 2019

    1/3 If you study the dynamics of gradient descent, what properties of the trajectory would be most useful for your research? Currently DEMOGEN (a dataset of 756 trained models) has final weights, but we plan to extend it to include information on intermediate weights.

  11. Retweeted
    Jun 18, 2019

    Check out what I have been working on during my residency! A new model compression approach that optimizes for accuracy and compressibility of parameters jointly. Our results include 19x compression in ResNet-50 with only a 1% accuracy drop!

  12. Jun 19, 2019

    No GPUs were harmed in the making of this dataset

  13. Retweeted
    Jun 18, 2019

    Can the compositionality of language help RL agents solve long-horizon tasks? We develop an *open-source* CLEVR-like RL env and evaluate long-horizon reasoning + systematic generalization in RL. "Language as an Abstraction for HRL" w/ S Gu, K Murphy

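The DADS retweet (item 9) describes discovering skills without rewards and then planning over those skills rather than over raw actions. As a rough illustration only, and not the paper's actual algorithm, here is a minimal random-shooting planner over a placeholder skill-conditioned dynamics model; `skill_dynamics`, the horizon, and the goal-distance cost are all assumptions made for the sketch.

```python
# Hypothetical sketch: model-predictive control in a learned skill ("behavior")
# space instead of the raw action space. skill_dynamics stands in for a learned
# predictor that maps (state, skill) -> predicted next state.
import numpy as np

rng = np.random.default_rng(0)

def skill_dynamics(state, skill):
    # Placeholder for a learned skill-conditioned dynamics model; here a toy linear map.
    return state + 0.1 * skill

def plan_over_skills(state, goal, horizon=5, n_candidates=256, skill_dim=2):
    """Random-shooting planner: sample skill sequences, roll them out with the
    skill dynamics model, and return the first skill of the best sequence."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, skill_dim))
    best_cost, best_first_skill = np.inf, None
    for seq in candidates:
        s = state.copy()
        for z in seq:                      # roll out entirely in behavior space
            s = skill_dynamics(s, z)
        cost = np.linalg.norm(s - goal)    # distance to goal after the rollout
        if cost < best_cost:
            best_cost, best_first_skill = cost, seq[0]
    return best_first_skill

state, goal = np.zeros(2), np.array([1.0, -0.5])
print(plan_over_skills(state, goal))
```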
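The compression retweet (item 11) mentions optimizing accuracy and compressibility of the parameters jointly. A minimal sketch of that general idea, assuming an L1 penalty as a crude stand-in for whatever compressibility measure the actual method uses (this is not the paper's method, and the model, optimizer, and trade-off weight are arbitrary choices for the example):

```python
# Hypothetical sketch: jointly optimizing a task loss and a compressibility
# surrogate. The L1 penalty below is only a placeholder for a real
# compressibility objective (e.g., an estimate of coded parameter size).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4  # trade-off between accuracy and the compressibility surrogate

def train_step(x, y):
    opt.zero_grad()
    task_loss = nn.functional.cross_entropy(model(x), y)
    # Crude compressibility surrogate: total absolute weight mass.
    compress_penalty = sum(p.abs().sum() for p in model.parameters())
    loss = task_loss + lam * compress_penalty
    loss.backward()
    opt.step()
    return task_loss.item(), compress_penalty.item()

# Toy usage with random data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```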

