Search results
  1. Dec 22, 2019

    Really exciting to have my first paper, "Co-Attentive Equivariant Nets", accepted! It provides the first group-theoretical approach to equivariant visual attention. Nice things coming up next!

  2. Nov 5, 2019

    peer review in machine learning is broken

  3. Dec 20, 2019

    Really pleased that our paper "Multiplicative Interactions and Where to Find Them" was accepted. We present a unifying picture of several different neural net architectural motifs that all involve multiplicative interactions... 1/3

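The multiplicative-interaction motif named in the tweet above can be sketched as a bilinear form between two inputs; the function name, dimensions, and the specific parameterization below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def multiplicative_interaction(x, z, W, U, V, b):
    """Sketch of f(x, z) = x^T W z + U x + V z + b, where W is a 3D tensor.
    Gating (diagonal W) and hypernetwork-style layers arise as special cases."""
    bilinear = np.einsum('i,ijk,j->k', x, W, z)  # x_i W_ijk z_j summed over i, j
    return bilinear + U @ x + V @ z + b
```

With `W` set to zero the layer reduces to an ordinary additive (concatenation-style) interaction, which is one way to see why the multiplicative term adds expressive power.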
  4. Dec 20, 2019

    Happy to share that my internship work "Depth-adaptive Transformer" has been accepted. TL;DR: We dynamically adjust the computation per input and match the accuracy of a baseline Transformer with only 1/4 of the decoder layers.

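The per-input computation adjustment described above can be sketched as early exit from a stack of decoder layers. The confidence-threshold halting rule and all names here are assumptions for illustration; the paper studies several halting mechanisms.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def depth_adaptive_decode(h, layers, classifier, threshold=0.9):
    """Run decoder layers one at a time; stop as soon as the attached
    classifier's most likely token exceeds a confidence threshold
    (one possible halting rule, assumed here for illustration)."""
    probs, depth = None, 0
    for depth, layer in enumerate(layers, start=1):
        h = layer(h)
        probs = softmax(classifier(h))
        if probs.max() >= threshold:
            break  # confident enough: skip the remaining layers
    return probs, depth
```

Easy inputs exit after a layer or two while hard ones use the full stack, which is how the average decoder cost drops without matching losses in accuracy.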
  5. Dec 23, 2019

    Authors with >5 submissions: Sergey Levine (32), Yoshua Bengio (20), Cho-jui Hsieh (16), Pieter Abbeel (14), Liwei Wang, Tom Goldstein, Chelsea Finn, Bo Li, Jun Zhu (13 each). Number of accepted papers: Sergey Levine (13), Le Song, Jun Zhu (7 each), Cho-jui Hsieh, Jimmy Ba, Liwei Wang, Pushmeet Kohli (6 each).

  6. Dec 19, 2019

    Finally... our paper on "foresight pruning" just got accepted. We introduced a simple yet effective criterion for pruning networks before training and related the criterion to recent NTK analysis.

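"Pruning before training" can be sketched as scoring each weight at initialization and keeping only the highest-scoring fraction. The tweet does not give its exact criterion, so the SNIP-style saliency `|w * g|` below is an illustrative stand-in, not the paper's method.

```python
import numpy as np

def foresight_prune(weights, grads, sparsity):
    """Keep the top (1 - sparsity) fraction of weights by a saliency score
    computed before training. The score |w * g| (weight times a gradient
    from one minibatch) is an assumed, SNIP-style example criterion."""
    scores = np.abs(weights * grads)
    k = max(1, int(round(scores.size * (1.0 - sparsity))))  # weights to keep
    threshold = np.sort(scores.ravel())[-k]
    mask = (scores >= threshold).astype(weights.dtype)
    return weights * mask, mask
```

The returned mask is then fixed and only the surviving weights are trained, which is what distinguishes foresight pruning from prune-after-training pipelines.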
  7. Dec 20, 2019

    Our paper studying the emergent cross-lingual properties of multilingual BERT has been accepted! Lots of dedicated work from (undergrads) Karthikeyan K and Zihan Wang. TL;DR: it's network depth, not wordpiece overlap.

  8. Jan 14

    "Mathematical Reasoning in Latent Space" will be featured. Multi-step reasoning can be performed on embedding vectors of mathematical formulas.

  9. Dec 19, 2019

    Decisions released 🎉 Congratulations to the accepted papers; to those whom we could not accommodate, we wish you success in your ongoing research. See our blog for the first of our reflections. See you soon in Ethiopia. 🇪🇹🌍

  10. Dec 19, 2019

    Our paper (joint work with my supervisor) has been accepted as a spotlight (48 long talks and 108 spotlights out of 2594 submissions)!!

  11. Dec 25, 2019

    Some outcomes. Accepted (spotlight): "Hamiltonian Generative Networks". Rejected: "Causally Correct Partial Models for Reinforcement Learning". Congrats to all my collaborators on both, independently of acceptance!

  12. Dec 20, 2019

    "We're pleased to let you know that your submission, Exploration in Reinforcement Learning with Deep Covering Options, has been accepted!" This work was led by , with Jee Won Park and George Konidaris. More 👇🏼

  13. Dec 26, 2019

    Our paper "What can neural networks reason about?" in 3 slides, presented at Winter Festa Episode 5 on Christmas. Great discussion with researchers from all over Japan! See Stefanie's talks (NeurIPS, IAS) for more detail.

  14. Dec 19, 2019

    Our paper "Progressive Learning and Disentanglement of Hierarchical Representations", spearheaded by Zhiyuan, has been accepted (oral). TL;DR: a strategy to progressively learn disentangled hierarchical representations, plus a new disentanglement metric!

  15. Dec 21, 2019

    Our submission was accepted as a spotlight. In it, we try to clear up some confusion about Bayesian inference in RL; read it if you want to understand how Bayes and Bellman can get along!

  16. Dec 22, 2019

    Accepted: settling permutation-equivariance universality for popular deep models, including DeepSets and some versions of PointNet.
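The DeepSets architecture mentioned above has a compact form: apply a shared network to every set element, sum, then transform. The sketch below (with toy `phi`/`rho` stand-ins) shows why the output is invariant to the ordering of the input set.

```python
import numpy as np

def deep_sets(X, phi, rho):
    """DeepSets form rho(sum_i phi(x_i)): because summation is commutative,
    the output is invariant to any permutation of the elements of X.
    phi and rho would normally be learned networks; here they are toy maps."""
    return rho(np.sum([phi(x) for x in X], axis=0))
```

Replacing the sum with an element-wise map that is applied identically per element (without pooling) gives the equivariant, rather than invariant, variant of the construction.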

  17. Dec 20, 2019

    Our work on nearest-neighbor language models has been accepted. Woohoo!! Code coming in the new year!
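A nearest-neighbor language model of the kind named above can be sketched as interpolating the base LM's next-token distribution with a distribution over tokens retrieved from a datastore. The interpolation weight `lam` and the distance-to-weight mapping below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def knn_lm_interpolate(p_lm, neighbor_tokens, neighbor_dists, lam=0.25):
    """Mix the base LM distribution p_lm with a kNN distribution built from
    retrieved (token, distance) pairs; neighbor weights come from a softmax
    over negative distances (one common, assumed choice)."""
    w = np.exp(-np.asarray(neighbor_dists, dtype=float))
    w /= w.sum()
    p_knn = np.zeros_like(p_lm)
    for weight, tok in zip(w, neighbor_tokens):
        p_knn[tok] += weight  # neighbors voting for the same token accumulate
    return lam * p_knn + (1.0 - lam) * p_lm
```

The appeal of the design is that the datastore can be grown or swapped without retraining the base model.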

  18. Jan 21

    Excited to invite NLP researchers working on African languages (or relevant NLP techniques) to submit to the workshop in Addis: "AfricaNLP - Unlocking Local Languages" 🌍 2-page extended abstracts! Deadline: Feb 14 🔥

  19. Dec 20, 2019

    At ICLR last year, we showed that existing few-shot classification methods perform poorly across different domains. This year, we show how we can make few-shot learners generalize better to unseen domains!

  20. Dec 20, 2019

    Self-supervised FTW! 2/2 papers from the team accepted 🎉: "Neural Outlier Rejection for Self-Supervised Keypoint Learning" and "Semantically-Guided Representation Learning for Self-Supervised Monocular Depth".
