Search results
  1. Dec 20, 2019

    Happy to share that my internship work "Depth-adaptive Transformer" has been accepted to . TL;DR: We dynamically adjust the computation per input and match the accuracy of a baseline Transformer with only 1/4 the decoder layers.

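A minimal sketch of the depth-adaptive idea above, assuming a small halting classifier after every decoder layer that decides whether to stop computing early; the module names, the input-level halting rule, and the threshold here are illustrative, not the paper's actual mechanism:

```python
import torch
import torch.nn as nn

class AdaptiveDepthDecoder(nn.Module):
    def __init__(self, d_model=64, n_layers=4, threshold=0.5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        # One tiny classifier per layer decides "halt or keep computing".
        self.halt = nn.ModuleList(nn.Linear(d_model, 1) for _ in range(n_layers))
        self.threshold = threshold

    def forward(self, tgt, memory):
        h = tgt
        for layer, halt in zip(self.layers, self.halt):
            h = layer(h, memory)
            # Stop early once the halting classifier is confident enough
            # (averaged over the whole input here, for simplicity).
            if torch.sigmoid(halt(h)).mean() > self.threshold:
                break
        return h

dec = AdaptiveDepthDecoder()
out = dec(torch.randn(2, 5, 64), torch.randn(2, 7, 64))  # (batch, tgt, d), (batch, src, d)
print(out.shape)  # torch.Size([2, 5, 64])
```

Easy inputs exit after one or two layers while hard ones use all four, which is how a 4x shallower average decoder can match the full baseline.
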
  2. Dec 20, 2019

    Our paper studying the emergent cross-lingual properties of multilingual BERT has been accepted to ! Lots of dedicated work from (undergrads) Karthikeyan K and Zihan Wang. TL;DR: what matters is network depth, not wordpiece overlap.

  3. Dec 22, 2019

    Really excited to have my first paper accepted at ! It provides the first group-theoretical approach to equivariant visual attention. Nice things coming up next! Co-Attentive Equivariant Nets:

  4. Dec 23, 2019

    Authors with >5 submissions:
    32 Sergey Levine
    20 Yoshua Bengio
    16 Cho-jui Hsieh
    14 Pieter Abbeel
    13 Liwei Wang, Tom Goldstein, Chelsea Finn, Bo Li, Jun Zhu

    # of accepted papers:
    13 Sergey Levine
    7 Le Song, Jun Zhu
    6 Cho-jui Hsieh, Jimmy Ba, Liwei Wang, Pushmeet Kohli

  5. Dec 19, 2019

    Finally... our paper on "foresight pruning" just got accepted by . We introduce a simple yet effective criterion for pruning networks before training and relate it to recent NTK analysis.

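As a rough illustration of pruning before training, here is a sketch using a gradient-based connection-saliency score (the SNIP-style |weight × gradient| criterion below is an assumption for illustration, not necessarily the paper's criterion):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def foresight_prune_masks(model, x, y, sparsity=0.9):
    """Score every weight on one mini-batch at initialization and
    keep only the highest-scoring (1 - sparsity) fraction."""
    loss = F.cross_entropy(model(x), y)
    params = list(model.parameters())
    grads = torch.autograd.grad(loss, params)
    scores = torch.cat([(p * g).abs().flatten() for p, g in zip(params, grads)])
    threshold = scores.kthvalue(int(sparsity * scores.numel())).values
    return [((p * g).abs() > threshold).float() for p, g in zip(params, grads)]

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
masks = foresight_prune_masks(model, torch.randn(32, 20), torch.randint(0, 10, (32,)))
# Multiply each parameter by its mask now and after every update step.
print(sum(int(m.sum()) for m in masks), "of", sum(m.numel() for m in masks), "weights kept")
```
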
  6. Dec 19, 2019

    Our paper (joint work with my supervisor ) has been accepted as a spotlight (48 long talks and 108 spotlights out of 2594 submissions) at !!

  7. Dec 20, 2019

    We're pleased to let you know that your submission, Exploration in Reinforcement Learning with Deep Covering Options, has been accepted at ! This work was led by , with Jee Won Park and George Konidaris. More👇🏼

  8. Dec 19, 2019

    Our paper 'Progressive Learning and Disentanglement of Hierarchical Representations', spearheaded by Zhiyuan, has been accepted at (oral). TL;DR: a strategy to progressively learn disentangled hierarchical representations + a new disentanglement metric!

  9. Nov 5, 2019

    peer review in machine learning is broken

  10. Dec 20, 2019

    At ICLR last year, we showed that existing few-shot classification methods perform poorly across different domains (). This year at , we show how to make few-shot learners generalize better to unseen domains ()!

  11. Dec 19, 2019

    Excited to share that our paper "Semi-Supervised Generative Modeling for Controllable Speech Synthesis" got accepted at !
    paper:
    demo:

  12. Dec 22, 2019

    Accepted to : Settling permutation equivariance universality for popular deep models including DeepSets and some versions of PointNet.
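
For context, a minimal DeepSets-style model: a per-element network followed by sum pooling, which is what makes the output invariant to the ordering of the set (this generic sketch is mine, not the paper's construction):

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    def __init__(self, d_in=3, d_hidden=64, d_out=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden))
        self.rho = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))

    def forward(self, x):  # x: (batch, set_size, d_in)
        # Sum over set elements erases any notion of order.
        return self.rho(self.phi(x).sum(dim=1))

model = DeepSets()
x = torch.randn(2, 10, 3)
perm = torch.randperm(10)
# Output is unchanged under any permutation of the set elements.
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-5)
```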

  13. Dec 21, 2019

    Our submission to was accepted as a spotlight. In it, we try to clear up some confusion about Bayesian inference in RL; read it if you want to understand how Bayes and Bellman can get along!

  14. Dec 19, 2019

    Decisions released 🎉 Congratulations to the accepted papers; to those we could not accommodate, we wish you success in your ongoing research. See our blog for the first of our reflections. See you soon in Ethiopia. 🇪🇹🌍

  15. Jan 14

    Mathematical Reasoning in Latent Space will be featured at . Multi-step reasoning can be performed on the embedding vectors of mathematical formulas.
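
A toy sketch of that idea, assuming a hypothetical encoder and a learned rewrite-step network: the formula is embedded once, and further reasoning steps are applied directly to the embedding, without decoding back to symbols:

```python
import torch
import torch.nn as nn

# Both networks are hypothetical stand-ins: a real system would encode a
# parsed formula, and the step network would be trained to predict the
# embedding of the formula after one rewrite is applied.
embed_formula = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
rewrite_step = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

z = embed_formula(torch.randn(1, 128))  # latent representation of a formula
for _ in range(4):                      # four reasoning steps, all in latent space
    z = rewrite_step(z)
print(z.shape)  # torch.Size([1, 256])
```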

  16. Jan 15

    1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!*
    *Just avoid catastrophic overfitting, as seen in the picture.
    Paper:
    Code:
    Joint work with and , to be at

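A sketch of what FGSM adversarial training with a random start looks like, one known recipe for fighting catastrophic overfitting (the step size and clipping details here are illustrative, not necessarily this paper's exact settings):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    # Random start inside the eps-ball, then a single gradient-sign step.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + eps * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()

def train_step(model, optimizer, x, y):
    x_adv = fgsm_example(model, x, y)        # craft the adversarial batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # train on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()
```

One FGSM step per batch makes this roughly as cheap as standard training, versus the many inner steps PGD-based adversarial training needs.
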
  17. How much supervision do you need to learn disentangled representations? Turns out, not that much! Joint work with , S. Bauer, , and . Accepted at

  18. Dec 26, 2019

    Our paper "What can neural networks reason about?" in 3 slides, presented at Winter Festa Episode 5 on Christmas. Great discussion with researchers from all over Japan! See Stefanie's talks for more detail (NeurIPS), (IAS)

  19. Announcement time! and I first used KGs for deep POMDP agents, then KGs for commonsense transfer. Now: KGs+RL for combinatorially sized language action spaces w/

  20. Dec 20, 2019

    Really pleased that our paper "Multiplicative Interactions and Where to Find Them" was accepted to . We present a unifying picture of several different neural net architectural motifs that all involve multiplicative interactions... 1/3

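A minimal multiplicative-interaction layer in the bilinear spirit the tweet describes, where one input effectively generates a weight matrix for the other (the layer name and sizes here are illustrative):

```python
import torch
import torch.nn as nn

class MultiplicativeInteraction(nn.Module):
    def __init__(self, d_x, d_z, d_out):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_z, d_x, d_out) * 0.02)  # 3-way tensor
        self.U = nn.Linear(d_z, d_out, bias=False)
        self.V = nn.Linear(d_x, d_out)  # also carries the bias term

    def forward(self, x, z):
        # z "generates" an input-dependent weight matrix W(z) applied to x,
        # plus ordinary additive terms in z and x.
        Wz = torch.einsum('bz,zxo->bxo', z, self.W)
        return torch.einsum('bx,bxo->bo', x, Wz) + self.U(z) + self.V(x)

layer = MultiplicativeInteraction(d_x=16, d_z=8, d_out=4)
out = layer(torch.randn(2, 16), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 4])
```

Gating, hypernetworks, and attention-style mixing can all be read as special cases of this form, which is the unifying picture the tweet alludes to.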
