Search results
  1. 20 Dec 2019

    Happy to share that my internship work "Depth-adaptive Transformer" has been accepted to . TL;DR: We dynamically adjust the computation per input and match the accuracy of a baseline Transformer with only 1/4 the decoder layers.

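
The depth-adaptive idea above can be sketched as a confidence-based early exit: run decoder layers one at a time and stop as soon as the current prediction is confident enough. The toy layers, the classifier head, and the entropy threshold below are all illustrative stand-ins, not the paper's actual halting mechanism.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

def depth_adaptive_forward(h, layers, classifier, threshold=0.5):
    """Run 'decoder layers' one at a time; exit early once the output
    distribution is confident (low entropy). `layers` is a list of toy
    weight matrices standing in for Transformer blocks; `classifier`
    maps the hidden state to vocabulary logits."""
    for depth, W in enumerate(layers, start=1):
        h = np.tanh(W @ h)            # toy stand-in for one decoder layer
        p = softmax(classifier @ h)   # predict from the current depth
        if entropy(p) < threshold:    # confident enough: stop here
            return p, depth
    return p, len(layers)
```

Easy inputs exit after one or two layers; hard inputs pay for the full stack, which is how the average compute can drop well below the full depth.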
  2. 19 Dec 2019

    Our paper ‘Progressive Learning and Disentanglement of Hierarchical Representations’ spearheaded by Zhiyuan has been accepted at (oral). TL;DR: strategy to progressively learn disentangled hierarchical representations+new disentanglement metric!

  3. 20 Dec 2019

    Our paper studying the emergent cross-lingual properties of multilingual BERT is accepted to ! Lots of dedicated work from (undergrads) Karthikeyan K and Zihan Wang. tl;dr it's network depth, not wordpiece overlap.

  4. 22 Dec 2019

    Really exciting to have my first paper accepted at ! It provides the first group-theoretical approach to equivariant visual attention. Nice things coming up next! Co-Attentive Equivariant Nets:

  5. 19 Dec 2019

    Finally... our paper on "foresight pruning" just got accepted by . We introduce a simple yet effective criterion for pruning networks before training, and relate the criterion to recent NTK analysis.

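
Pruning-before-training criteria of this kind typically score each weight from a single gradient pass at initialization and keep the highest-scoring fraction. The sketch below uses the SNIP-style saliency |w · ∂L/∂w| as a stand-in; whether this matches the paper's actual criterion is an assumption.

```python
import numpy as np

def foresight_prune_mask(weights, grads, keep_ratio=0.25):
    """Score each weight before training by |w * dL/dw| (a SNIP-style
    saliency used here for illustration) and keep the top fraction.
    Returns a 0/1 mask of the same shape as `weights`."""
    saliency = np.abs(weights * grads).ravel()
    k = max(1, int(keep_ratio * saliency.size))
    threshold = np.sort(saliency)[-k]                     # k-th largest score
    return (np.abs(weights * grads) >= threshold).astype(float)
```

The mask is computed once, before any training step, and then applied multiplicatively to the weights throughout training.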
  6. 23 Dec 2019

    Authors with > 5 submissions: 32 Sergey Levine; 20 Yoshua Bengio; 16 Cho-jui Hsieh; 14 Pieter Abbeel; 13 Liwei Wang, Tom Goldstein, Chelsea Finn, Bo Li, Jun Zhu.
    Number of accepted papers: 13 Sergey Levine; 7 Le Song, Jun Zhu; 6 Cho-jui Hsieh, Jimmy Ba, Liwei Wang, Pushmeet Kohli.

  7. 20 Dec 2019

    We're pleased to let you know that your submission, Exploration in Reinforcement Learning with Deep Covering Options, has been accepted at ! This work was led by , with Jee Won Park and George Konidaris. More👇🏼

  8. 20 Dec 2019

    At ICLR last year, we showed that existing few-shot classification methods perform poorly across different domains (). This year at , we show how to make few-shot learners generalize better to unseen domains ()!

  9. 19 Dec 2019

    Our paper (joint work with my supervisor ) has been accepted as a spotlight (48 long talks and 108 spotlights out of 2594 submissions) at !!

  10. 19 Dec 2019

    Excited to share that our paper "Semi-Supervised Generative Modeling for Controllable Speech Synthesis" got accepted at ! Paper: Demo:

  11. 21 Dec 2019

    We perform a rigorous evaluation of GNNs for graph classification and show that a simple baseline does very well. We provide a framework to test new models and splits for 9 datasets. With , D. Bacciu, A. Micheli. See you at !

  12. 15 Jan

    1/ New paper on an old topic: it turns out FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in the picture. Paper: Code: Joint work with and , to be at

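
The appeal of single-step FGSM over multi-step PGD is cost: one gradient sign step per example instead of many. A minimal sketch, assuming the loss gradient w.r.t. the input is already computed; the optional random start is one commonly used remedy for catastrophic overfitting, and whether it matches this paper's exact recipe is an assumption.

```python
import numpy as np

def fgsm_adversarial_example(x, grad, eps=8 / 255, rng=None):
    """Single-step FGSM: optionally start from a random perturbation,
    take one eps-sized step in the sign of the loss gradient, project
    back into the eps-ball, and clip to the valid [0, 1] image range."""
    delta = np.zeros_like(x)
    if rng is not None:
        delta = rng.uniform(-eps, eps, size=x.shape)  # random start
    delta = np.clip(delta + eps * np.sign(grad), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```

In adversarial training, the model is then updated on the loss at `x_adv` instead of (or in addition to) the clean `x`.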
  13. 20 Dec 2019

    Our work on nearest-neighbor language models has been accepted to . Woohoo!! Code coming in the new year!
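
The core kNN-LM move is to interpolate the base model's next-token distribution with a distribution built from the tokens that followed the k most similar contexts in a datastore. A minimal sketch; the function name, distance weighting, and default hyperparameters here are illustrative, not the paper's exact settings.

```python
import numpy as np

def knn_lm_probs(p_lm, datastore_keys, datastore_next, query,
                 lam=0.25, k=2, temp=1.0):
    """Mix the LM's distribution `p_lm` with a kNN distribution:
    find the k datastore context vectors closest to `query`, weight
    them by softmax of negative distance, and put that mass on the
    tokens that followed those contexts."""
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temp)
    weights /= weights.sum()
    p_knn = np.zeros_like(p_lm)
    for w, idx in zip(weights, nearest):
        p_knn[datastore_next[idx]] += w   # mass on observed continuations
    return lam * p_knn + (1 - lam) * p_lm
```

Because the datastore is only consulted at inference time, the base language model needs no retraining.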

  14. 26 Dec 2019

    Our paper "What can neural networks reason about?" in 3 slides, presented at Winter Festa Episode 5 on Christmas. Great discussion with researchers from all over Japan! See Stefanie's talks for more detail: (NeurIPS), (IAS)

  15. 5 Nov 2019

    peer review in machine learning is broken

  16. 21 Jan

    Excited to invite NLP researchers working in African Languages (or relevant NLP techniques) to submit to the workshop in Addis: "AfricaNLP - Unlocking Local Languages" 🌍 2-page extended abstracts! Deadline: 14th Feb 🔥

  17. announcement time! and I first used KGs for deep POMDP agents, then KGs for commonsense transfer. Now: KGs+RL for combinatorially sized language action spaces w/

  18. 21 Dec 2019

    Our submission to was accepted as a spotlight. In it we try to clear up some confusion about Bayesian inference in RL; read it if you want to understand how Bayes and Bellman can get along!

  19. 19 Dec 2019

    Decisions released 🎉 Congratulations to the accepted papers; to those whose work we could not accommodate, we wish you success in your ongoing research. See our blog for the first of our reflections. See you soon in Ethiopia. 🇪🇹🌍

  20. 14 Jan

    "Mathematical Reasoning in Latent Space" will be featured at . Multi-step reasoning can be performed on the embedding vectors of mathematical formulas.
