Jane Wang

@janexwang

Research scientist at DeepMind. Thinking about thinking.

Joined March 2009

Tweets

  1. Retweeted

    "One of the things I really like about this article is how it integrates work from the fields of artificial intelligence, psychology, neuroscience, and evolutionary theory." Editor picks Reinforcement Learning, Fast and Slow as her review of 2019

  2. Retweeted
    Jan 13

    We are excited to announce our workshop 'Beyond "Tabula Rasa" in RL (BeTR-RL): Agents that remember, adapt, and generalize' w/ Ignasi Clavera, Kate Rakelly, , , CfP & more info:

  3. Retweeted
    Jan 9

    We are organizing a workshop on Causal learning for Decision Making at along with , Jovana Mitrovic, , Stefan and . Consider submitting your work!

  4. Retweeted
    Jan 6

    We are really happy to announce that our paper on 'Long-term stability of cortical population dynamics underlying consistent behavior' has finally appeared in Nature Neuroscience. A great collaboration with 1/n

  5. Jan 1

    I wouldn't recommend basing your career on currently popular trends, since these are likely to change by the time you graduate. Instead, figure out what questions/problems most fascinate you and how to make a career out of those. Define your own fields if you have to.

  6. Dec 22, 2019

    Really enjoyed participating in the Biological and Artificial RL Workshop at , where I spoke about how AI and neuroscience make contact via meta-reinforcement learning. Congrats on a fantastic workshop!

  7. Retweeted
    Dec 20, 2019

    Interested in how neuroscience can inspire better AI? Come to NAISys, March 24-28. Abstracts (1 page) due Jan 10; registration open. {Please RETWEET ME}

  8. Dec 13, 2019

    You can find a livestream of the meta-learning workshop here

  9. Retweeted
    Dec 13, 2019

    So excited that our workshop, Biological and Artificial Reinforcement Learning, just started with Jane Wang's brilliant keynote "From brains to agents and back"

  10. Dec 13, 2019

    The meta-learning workshop at is off to a great start, with and kicking things off with fantastic talks! Come check it out in West Ballroom B!

  11. Dec 6, 2019

    Just a few days left to submit challenge questions for our speakers at the Meta-learning workshop, to be held on Dec 13!

  12. Nov 28, 2019

    The NeurIPS Meta-learning workshop is only 2 weeks away! This year you can submit "challenge" questions to our speakers ahead of time. We're especially interested in questions from junior researchers. Submit them at:

  13. Retweeted
    Nov 26, 2019

    Introducing the SHA-RNN :) - Read alternative history as a research genre - Learn of the terrifying tokenization attack that leaves language models perplexed - Get near SotA results on enwik8 in hours on a lone GPU No Sesame Street or Transformers allowed.

    [Image captions:] The SHA-RNN is composed of an RNN, pointer based attention, and a "Boom" feed-forward with a sprinkling of layer normalization. The persistent state is the RNN's hidden state h as well as the memory M concatenated from previous memories. Bake at 200°F for 16 to 20 hours in a desktop sized oven.
    The attention mechanism within the SHA-RNN is highly computationally efficient. The only matrix multiplication acts on the query. The A block represents scaled dot product attention, a vector-vector operation. The operators {qs, ks, vs} are vector-vector multiplications and thus have minimal overhead. We use a sigmoid to produce {qs, ks}. For vs see Section 6.4.
    Bits Per Character (BPC) on enwik8. The single attention SHA-LSTM has an attention head on the second last layer and had batch size 16 due to lower memory use. Directly comparing the head count for LSTM models and Transformer models obviously doesn't make sense, but neither does comparing zero-headed LSTMs against bajillion-headed models and then declaring an entire species dead.
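    The caption above can be sketched roughly in NumPy: a single query formed by the model's only matrix multiplication, sigmoid gates on the query and keys, and scaled dot-product attention over the memory. All shapes, names, and the exact gating placement here are assumptions for illustration, not the paper's implementation; see Merity's SHA-RNN paper for the real details.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def single_head_attention(h, M, W_q, qs, ks):
        """Hypothetical sketch of single-headed gated attention.

        h:  (d,)    current hidden state, used to form the query
        M:  (t, d)  memory of previous states (keys/values share it here)
        W_q: (d, d) the only matrix multiplication, applied to the query
        qs, ks: (d,) gate parameters; sigmoid gates scale query and keys
        """
        q = (W_q @ h) * sigmoid(qs)       # only matmul; query gated elementwise
        K = M * sigmoid(ks)               # vector-vector scaling, minimal overhead
        scores = K @ q / np.sqrt(h.size)  # scaled dot-product attention
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()          # softmax over memory positions
        return weights @ M                # attention-weighted mix of memories
    ```

    The point of the design, per the caption, is that everything except `W_q @ h` is an elementwise (vector-vector) operation, which keeps the per-step cost low compared to multi-headed Transformer attention.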
  14. Retweeted
    Nov 21, 2019

    This feels like a real breakthrough: Take the same basic algorithm as AlphaZero, but now *learning* its own simulator. Beautiful, elegant approach to model-based RL. ... AND ALSO STATE OF THE ART RESULTS! Well done to the team at

  15. Nov 22, 2019

    or if you watch too much apprentice before bedtime

  16. Nov 22, 2019

    does anyone else have the feeling that there's just no way it's almost 2020? the 2020s should be a "future" decade, when science fiction stories are set, and the ghost of christmas future brings you here to show what happens if you're too miserly with your employees

  17. Retweeted
    Nov 21, 2019

    Last week, hundreds of researchers gathered in Montevideo, Uruguay, for - a teaching conference aiming to strengthen the AI + ML communities in Latin America. Congratulations to all those who organised, spoke, and participated in such a special event. Proud to be a sponsor!

  18. Retweeted
    Nov 7, 2019

    An update: has been in contact with Canadian immigration officials. They told him that anyone who has been denied a visa to attend can request that their case be reconsidered via this form: No guarantees, but please pass along!

  19. Retweeted
    Oct 31, 2019

    Meta Reinforcement Learning is good at adaptation to very similar environments. But can we meta-learn general RL algorithms? Our new approach MetaGenRL is able to. With and Paper: Blog:

  20. Retweeted

    We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought. Why? A few reasons…🧵

