Tweets

  1. Retweeted

    The "Deep Learning Toolbox" has greatly expanded in the last decade thanks to our wonderful research community. Also, important progress has been made to make our community more inclusive and less toxic. Still, there's LOTS to do, and I plan to keep focusing on advancing both.

  2. Dec 13, 2019

    Graph representation learning is the most popular workshop of the day at . Amazing how far the field has advanced. I did not imagine so many people would get into this when I started working on graph neural nets back in 2015 during an internship. Time flies...

  3. Dec 12, 2019

We were exploring ideas on using learning to help solve SAT, but realized that this really simple algorithm (a slight variation of unit propagation) is hard to beat, at least on random SAT instances.

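    For context, here is a minimal sketch of textbook unit propagation, the rule the tweet says this simple baseline slightly varies. The clause representation (DIMACS-style signed integer literals) is an assumption, not something specified in the tweet:

    ```python
    # Unit propagation: repeatedly satisfy unit clauses and simplify the formula.
    # Clauses are lists of nonzero ints; -v means "variable v is negated".

    def unit_propagate(clauses):
        """Return (simplified clauses, assignment) or (None, None) on conflict."""
        assignment = {}
        while True:
            units = [c[0] for c in clauses if len(c) == 1]
            if not units:
                return clauses, assignment
            lit = units[0]
            if -lit in units:
                return None, None  # two contradictory unit clauses
            assignment[abs(lit)] = lit > 0
            new_clauses = []
            for c in clauses:
                if lit in c:
                    continue  # clause already satisfied; drop it
                reduced = [l for l in c if l != -lit]  # remove falsified literal
                if not reduced:
                    return None, None  # empty clause: conflict
                new_clauses.append(reduced)
            clauses = new_clauses

    # Example: [[1], [-1, 2], [-2, 3]] propagates to {1: True, 2: True, 3: True}.
    ```
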
  4. Nov 1, 2019

Our NeurIPS paper on learning to explore graph-structured spaces uses graph neural networks to learn to explore (visit as many different nodes as possible) efficiently. We got great results on program testing (covering all code branches) and even testing Android apps!

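    To make the stated objective concrete, here is a minimal sketch of a coverage-style episode where the reward is the number of distinct nodes visited; the adjacency-dict interface and random-walk default policy are illustrative assumptions, not the paper's setup:

    ```python
    import random

    # Reward exploration: +1 each time the walk reaches a not-yet-visited node.
    def coverage_episode(adj, start, steps, policy=None):
        policy = policy or (lambda node, nbrs, visited: random.choice(nbrs))
        visited = {start}
        node, reward = start, 0
        for _ in range(steps):
            nbrs = adj[node]
            if not nbrs:
                break  # dead end
            node = policy(node, nbrs, visited)
            if node not in visited:
                visited.add(node)
                reward += 1
        return reward, visited

    # A learned policy would replace the random default, e.g. a GNN scoring nbrs.
    ```
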
  5. Retweeted
    Oct 30, 2019

Our new paper: AlphaStar is the first learning system to reach the top tier of a major esport without any game restrictions, achieving Grandmaster status in StarCraft II. Researchers have been working on the StarCraft series for over 15 years.

  6. Oct 3, 2019

    New paper at on generative models of graphs. We explored many quality-efficiency trade-offs in this work and came up with a new model that gets good graph generation quality with much better efficiency. Paper: Code:

  7. Aug 27, 2019

An example implementation of our graph matching networks is on GitHub now! The code is for our ICML paper: . The release includes a reference implementation, a simple training loop, a synthetic task, and some visualization tools.

  8. Aug 27, 2019

    We just released a dataset of synthetic computation graphs used in our REGAL paper . A learned GNN policy improves running time and memory consumption of TF and XLA graphs. Training on synthetic graphs generalizes to real graphs!

  9. Retweeted
    Jun 28, 2019

Sparse graph neural networks can be trained efficiently on dense hardware (TPU), and large-batch training works: instead of a day on 1 GPU, a network trains in 13 minutes on a 512-core TPU. Work with @subho87 and collaborators:

  10. Jun 28, 2019

We reduced the training time of a sparse graph neural net from 1 day to 13 mins (!) on TPU with large-batch training. The key is identifying band-diagonal structure in the adjacency matrix. Work with @subho87. Paper:

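    For intuition about the band-diagonal claim, here is a minimal sketch: under a given node ordering, a graph is band-diagonal with half-bandwidth b when every edge (i, j) satisfies |i - j| <= b, so the adjacency matrix only touches a fixed-width band that maps well onto dense hardware blocks. The edge-list representation is an assumption:

    ```python
    # Half-bandwidth of the adjacency matrix under the current node ordering.
    def bandwidth(edges):
        return max((abs(i - j) for i, j in edges), default=0)

    def is_band_diagonal(edges, max_band):
        return bandwidth(edges) <= max_band

    # Example: a path graph 0-1-2-3 has bandwidth 1 (perfectly banded);
    # adding the long-range edge (0, 3) raises it to 3.
    assert bandwidth([(0, 1), (1, 2), (2, 3)]) == 1
    assert bandwidth([(0, 1), (1, 2), (2, 3), (0, 3)]) == 3
    ```
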
  11. Jun 13, 2019

CVPR thought I was an outstanding reviewer, surprise! No email or anything; I only realized it when someone else told me. I thought people would gradually stop reviewing for conferences as they get more senior, but I was wrong - ACs must be lucky to have Andrew Zisserman as a reviewer!

  12. Retweeted
    Jun 11, 2019

Interested in latent hierarchical structure and option discovery in RL? Come visit our talk/poster on CompILE at tomorrow! w/ et al. Talk: Wed. 4:40-5:00pm, Hall B (Poster #56) Paper:

  13. Retweeted
    Jun 4, 2019

    Accepted papers at the Workshop on Learning and Reasoning with Graph-Structured Data are now available on the workshop website: Papers: Schedule:

  14. Retweeted
    May 15, 2019

    The camera-ready version of our CompILE paper is out! Differentiable sequence segmentation for option discovery in RL — w/ et al. Paper: Code:

  15. May 15, 2019

And we can do all of this with weak or even no supervision!

  16. May 8, 2019

Graph neural net + REINFORCE + genetic algorithm = 3% or more memory reduction for your computation graph. The learned model generalizes to unseen graphs 10x larger and with unseen op types. Runtime can also be optimized. Paper:

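    As a reference point for the REINFORCE ingredient in this recipe, here is a minimal score-function gradient sketch for a categorical choice; the softmax parameterization and toy reward are assumptions, not the paper's GNN policy:

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def reinforce_step(theta, reward_fn, lr=0.1, baseline=0.0):
        """Sample one action; nudge its log-prob up in proportion to the reward."""
        probs = softmax(theta)
        action = np.random.choice(len(theta), p=probs)
        reward = reward_fn(action)
        grad_logp = -probs
        grad_logp[action] += 1.0  # gradient of log softmax(theta)[action]
        return theta + lr * (reward - baseline) * grad_logp, reward

    # Toy usage: the policy learns to prefer action 2 (say, the placement
    # decision that saves the most memory).
    theta = np.zeros(3)
    for _ in range(500):
        theta, _ = reinforce_step(theta, lambda a: 1.0 if a == 2 else 0.0)
    ```
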
  17. Retweeted
    May 3, 2019

Our latest work on neural network models for reasoning about similarity between graph-structured objects, with implications for a broad spectrum of applications: By Thomas Dullien and co-authors.

