Hiroro Date

@hrtdate

A PhD student interested in deep learning and neuroscience.

Joined August 2017

Tweets


  1. 16 Nov 2019

    I'm going to give a talk on our recent work "Deep learning for natural image reconstruction from electrocorticography signals" at Workshop on Deep Learning in Bioinformatics, Biomedicine, and Healthcare Informatics (DLB2H), BIBM 2019.

  2. Retweeted
    25 Apr 2019
  3. Retweeted
    22 Mar 2019

    Want to train your own BigGAN on just 4-8 GPUs? Today we're proud to release BigGAN-PyTorch, a full reimplementation that uses gradient accumulation to get the benefits of big batches even on small hardware. Repo joint work with @_alexandonian

  4. Retweeted

Happy that we could share progress with you all! Good games, and thanks for a great show! You can see all the details in the blog.

  5. Retweeted
    5 Aug 2018

    Sunday classic paper: Hamming (1986), You and Your Research. Enduringly popular, moving and thought-provoking. I haven't shared it before so thought it would make a good classic reading for a reflective summer Sunday. 🎈

  6. Retweeted

Great suggestions to improve rigor in the field.

  7. Retweeted
    14 Jul 2018

The proceedings of GECCO 2018 are now available online. Follow the link to learn how to access them.

  8. Retweeted
  9. Retweeted
    27 Jun 2018

Every Pixel Counts: Unsupervised Geometry Learning with Holistic 3D Motion Understanding. The vision world really loves that Zhou et al. paper from last year's CVPR!

  10. Retweeted

    Efficient Neural Network Architecture Search, main idea: Softmax over "operations", such as convolution, max pooling, & zero (which means no connection), to make everything differentiable, thus jointly learn architecture & weights via gradient descent.

  11. Retweeted
    12 Jun 2018

Meta-learning enables fast learning, but needs hand-engineered meta-training tasks. Can we get the tasks themselves automatically? Our first attempt at this for RL: unsupervised meta-reinforcement learning, w/ Abhishek Gupta, Ben Eysenbach,

  12. Retweeted
    26 Apr 2018

We're recruiting colleagues to work on AI research at our Tokyo office! Happy to see our efforts expanding w/ Google Brain now having a research presence in Tokyo. We're hiring machine learning researchers there; if you're interested in helping advance AI, apply here —>

  13. Retweeted
    18 Apr 2018

An overview of 2018 ICML Workshops, including links to workshop home pages.

  14. Retweeted
    12 Mar 2018

Reptile learns how to learn by adjusting the initial parameters toward the result of multiple SGD updates on each task. This surprisingly simple algorithm has a similar effect to MAML and achieves similar performance.

  15. Retweeted
    5 Mar 2018

Our new paper with Vladlen Koltun, extensively evaluating convolutional vs. recurrent approaches to sequence tasks, is on arXiv now!

  16. Retweeted
    2 Mar 2018

We release pyvarinf, a package for Bayesian deep learning with Variational Inference! You can now make any neural network Bayesian in one line of code.

  17. Retweeted
    1 Mar 2018

    A brilliant lecture by Mike Jordan on optimization through the lens of variational analysis and conservation laws, some new developments, and many open problems:

  18. 1 Mar 2018

    "Advances in Variational Inference" by Zhang et al. (2017) Good review on recent advances in Variational Inference! Contains basics, stochastic VI, black-box VI, and more...

  19. Retweeted
    27 Feb 2018

Stochastic Hyperparameter Optimization through Hypernetworks. Use hypernetworks to parametrize a network's weights as a function of its hyperparams, so SGD can be used directly to optimize hyperparams on the validation set!

  20. Retweeted
    21 Feb 2018

    Been waiting for something like this - "Continual Lifelong Learning with Neural Networks: A Review," Parisi et al.:

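The BigGAN-PyTorch retweet (item 3) mentions gradient accumulation as the trick for getting big-batch gradients on small hardware. A minimal NumPy sketch of the idea, using a linear least-squares model as a hypothetical stand-in for the network (not the actual repo code): summing suitably scaled micro-batch gradients reproduces the full-batch gradient exactly.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of 0.5 * mean((Xw - y)^2) with respect to w."""
    return X.T @ (X @ w - y) / len(y)

def accumulated_grad(w, X, y, micro_batch):
    """Accumulate micro-batch gradients to recover the full-batch gradient.

    Each micro-batch gradient is scaled by its share of the full batch,
    so only one micro-batch needs to be "in memory" at a time.
    """
    n = len(y)
    total = np.zeros_like(w)
    for start in range(0, n, micro_batch):
        Xb, yb = X[start:start + micro_batch], y[start:start + micro_batch]
        total += mse_grad(w, Xb, yb) * (len(yb) / n)
    return total
```

In a deep-learning framework the same pattern is: backpropagate each micro-batch loss (scaled by the number of accumulation steps), accumulate into the parameter gradients, and step the optimizer once per effective batch.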
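The architecture-search retweet (item 11) describes the core trick in one line: a softmax over candidate operations makes the discrete architecture choice differentiable. A toy NumPy sketch of that mixed operation, with hypothetical stand-in ops (identity, ReLU in place of a conv block, and the "zero" op meaning no connection):

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over architecture parameters."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on a feature vector (stand-ins for conv, pooling, zero).
OPS = [
    lambda x: x,                 # identity ("skip connect")
    lambda x: np.maximum(x, 0),  # ReLU, standing in for a conv block
    lambda x: np.zeros_like(x),  # "zero" op = no connection
]

def mixed_op(x, alpha):
    """Softmax-weighted sum of all candidate ops.

    The output is differentiable in alpha, so architecture parameters and
    network weights can be learned jointly by gradient descent.
    """
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))
```

As one alpha entry grows large, the mixture collapses toward that single operation, which is how a discrete architecture is read off after training.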
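The Reptile retweet (item 14) summarizes the whole algorithm in one sentence: nudge the initialization toward the weights obtained after a few SGD steps on each sampled task. A self-contained NumPy sketch on hypothetical 1-D quadratic tasks (each task's loss is 0.5·||w − c||² with a task-specific optimum c):

```python
import numpy as np

def sgd_on_task(w, c, lr=0.1, steps=5):
    """A few SGD steps on the task loss 0.5 * ||w - c||^2 (gradient: w - c)."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * (w - c)
    return w

def reptile(phi, task_optima, meta_lr=0.5, iters=200, seed=0):
    """Reptile meta-update: phi <- phi + meta_lr * (W - phi),
    where W is the result of inner-loop SGD on a randomly sampled task."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        c = task_optima[rng.integers(len(task_optima))]
        w = sgd_on_task(phi, c)
        phi = phi + meta_lr * (w - phi)  # the entire meta-update
    return phi
```

With task optima at +1 and −1, the learned initialization settles between them, so a handful of SGD steps adapts quickly to either task; no second-order derivatives are needed, which is what makes the algorithm so simple relative to MAML.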

