Tweets


  1. Retweeted a Tweet
    Dec 6, 2019

    Check out our extensive review paper on normalizing flows! This paper is the product of years of thinking about flows: it contains everything we know about them, and many new insights. With , , , . Thread 👇

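The identity every normalizing flow in that review is built on is the change-of-variables formula (a standard result, not specific to the paper): for an invertible map f taking data x to a simple base density p_z,

```latex
\log p_x(\mathbf{x}) \;=\; \log p_z\!\left(f(\mathbf{x})\right)
  \;+\; \log \left| \det \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}} \right|
```

Flow architectures differ mainly in how they make f expressive while keeping the Jacobian determinant cheap to compute.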
  2. Retweeted a Tweet
    Dec 5, 2019

    A surprising deep learning mystery: Contrary to conventional wisdom, performance of unregularized CNNs, ResNets, and transformers is non-monotonic: improves, then gets worse, then improves again with increasing model size, data size, or training time.

  3. Retweeted a Tweet

    Go over the topic of over-parameterization. Gain an explanation for landscape connectivity of low-cost solutions for multilayer nets, see a proof that explores the fundamental reason behind the concept, and learn the memorization capacity of ReLU networks:

  4. Retweeted a Tweet

    New blog post: "A Recipe for Training Neural Networks" a collection of attempted advice for training neural nets with a focus on how to structure that process over time

  5. Retweeted a Tweet
    Apr 2, 2019

    I wrote my first blog about my recent paper with John Hopfield.

  6. Retweeted a Tweet
    Mar 27, 2019

    I've decided to share my slide deck detailing my ... concerns about the F measure. tl;dr: just don't.

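For context, the measure under discussion is the usual F-beta score (standard definition; the slide deck's specific criticisms are not reproduced here). With precision P and recall R,

```latex
F_\beta \;=\; \frac{(1+\beta^2)\,P\,R}{\beta^2 P + R}
```

and the common F1 score is the β = 1 case, the harmonic mean of P and R.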
  7. Retweeted a Tweet
    Feb 26, 2019

    Good news! TensorBoard now works in Jupyter Notebooks, via "%" magic commands that match the command line. Example:

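The magic-command workflow referred to looks like the following notebook cells. The two magics, `%load_ext tensorboard` and `%tensorboard`, are TensorBoard's documented notebook interface; the log directory name is an arbitrary example:

```
%load_ext tensorboard
%tensorboard --logdir logs
```

The second cell embeds the TensorBoard UI inline in the notebook, pointed at whatever directory your summary writer logs to.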
  8. Retweeted a Tweet
    Jan 16, 2019

    Really proud to share "What is torch.nn, really?", which takes a neural net written from scratch and refactors it step by step using all the key concepts in `torch.nn`. If you want to really understand how neural nets work, start here!

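The tutorial's starting point, a network written from scratch before any `torch.nn` refactoring, can be sketched in plain NumPy (a hypothetical illustration in the same spirit, not code from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters of one fully connected layer, 784 inputs -> 10 outputs
# (MNIST-sized), held as bare arrays rather than an nn.Module.
weights = rng.normal(size=(784, 10)) / np.sqrt(784)
bias = np.zeros(10)

def log_softmax(x):
    # Numerically stable log-softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def model(xb):
    # The whole "network": one affine map followed by log-softmax.
    return log_softmax(xb @ weights + bias)

xb = rng.normal(size=(64, 784))   # a fake minibatch
out = model(xb)
print(out.shape)                  # (64, 10)
```

The point of the tutorial is then to replace these bare arrays and functions with `nn.Module`, `nn.Linear`, optimizers, and data loaders, one concept at a time.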
  9. Retweeted a Tweet
    Jan 10, 2019

    Neural Ordinary Differential Equations .... blog explains

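The idea behind Neural ODEs, that a residual network is a discretization of continuous dynamics dh/dt = f(h, t), can be illustrated with a tiny fixed-step Euler integrator (a toy sketch; `f` here is an arbitrary dynamics function, not a learned network):

```python
import math

def odeint_euler(f, h0, t0, t1, steps):
    """Integrate dh/dt = f(h, t) with fixed-step Euler.

    Each step h = h + dt * f(h, t) has exactly the form of a
    ResNet block h_{k+1} = h_k + g(h_k); a Neural ODE treats the
    stack of blocks as this continuous-time limit and lets a
    black-box solver choose the steps.
    """
    dt = (t1 - t0) / steps
    h, t = h0, t0
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

# Toy dynamics dh/dt = -h, whose exact solution is h0 * exp(-(t1 - t0)).
approx = odeint_euler(lambda h, t: -h, 1.0, 0.0, 1.0, 1000)
print(abs(approx - math.exp(-1.0)) < 1e-3)  # True
```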
  10. Retweeted a Tweet
    Dec 22, 2018
    Replying to another user

    Could you also consider taking a look at "fastprogress", our recent replacement for tqdm, which has some nice extra features (see the readme) and avoids some of tqdm's bugs:

  11. Retweeted a Tweet
    Dec 21, 2018

    Paper on the role of over-parametrization in generalization of neural nets is accepted to : … We have also released our code: … This is joint work with Zhiyuan Li, Srinadh Bhojanapalli, and Nati Srebro.

  12. Dec 4, 2018

    Attending from home with Facebook livestream 😂😂😂

  13. Retweeted a Tweet
    Nov 30, 2018

    “So you want to be a Research Scientist” by Vincent Vanhoucke:
    • You will spend a career working on things that don’t work
    • Your work will be obsolete the minute you publish it
    • Your entire career will largely be measured by one number (h-index)

  14. Retweeted a Tweet

    Great tips for beginning grad students - Twenty things I wish I’d known when I started my PhD

  15. Retweeted a Tweet
    Nov 1, 2018

    Finally learned about using Einstein notation to flexibly multiply and sum across axes of high-dimensional arrays using np.einsum() - very powerful! I found this tutorial very helpful:

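A few of the einsum patterns such tutorials typically cover (standard `np.einsum` usage; the arrays are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# Matrix multiply: contract the shared index j.
C = np.einsum('ij,jk->ik', A, B)        # same as A @ B

# Trace: repeat an index and drop it from the output.
tr = np.einsum('ii->', A)               # 1 + 4 = 5

# Row sums: omit an index from the output to sum over it.
rows = np.einsum('ij->i', A)            # [3., 7.]

# Batched outer product summed over the batch axis b.
x = np.ones((10, 3))
outer = np.einsum('bi,bj->ij', x, x)    # shape (3, 3), every entry 10
```

The subscript string makes the contraction explicit, which is often clearer than chains of `transpose`, `reshape`, and `sum`.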
  16. Retweeted a Tweet
    Oct 7, 2018

    Gradient Descent Provably Optimizes Over-parameterized (single-hidden-layer ReLU) Neural Networks (trained with l2 loss, assuming random init and non-degenerate data):

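The setting of that result, a single-hidden-layer ReLU network with a fixed output layer, random init, and squared loss, can be sketched with plain gradient descent in NumPy (an illustrative toy with arbitrary sizes and learning rate, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 3, 64                  # samples, input dim, hidden width (m >> n)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # unit-norm inputs
y = rng.normal(size=n)
W = rng.normal(size=(m, d))                        # random first-layer init (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed +-1 output layer

def predict(W):
    # f(x) = sum_r a_r * relu(w_r . x)
    return np.maximum(X @ W.T, 0.0) @ a

losses = []
for _ in range(500):
    err = predict(W) - y                           # residuals
    losses.append(0.5 * np.sum(err ** 2))          # l2 loss
    active = (X @ W.T > 0).astype(float)           # ReLU activation pattern
    # grad_{w_r} = sum_i err_i * a_r * 1[w_r.x_i > 0] * x_i
    grad = ((err[:, None] * active) * a[None, :]).T @ X
    W -= 0.1 * grad

print(losses[0], losses[-1])   # the loss decreases over training
```

The paper's claim is that with sufficient over-parameterization (large m), this descent drives the training loss to zero at a linear rate despite the non-convexity.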
  17. Retweeted a Tweet
    Sep 16, 2018

    Neural Processes in : Short blog post explaining “Neural Processes”, their connection to VAEs, and shortcomings of the approach (with suggestions to make it work better).

  18. Retweeted a Tweet
    Sep 13, 2018

    In order to incentivize and measure progress towards the goal of zero confident classification errors in models, we're announcing the Unrestricted Adversarial Examples Challenge. Learn how to participate in the blog post below!

  19. Retweeted a Tweet
    Aug 10, 2018
  20. Retweeted a Tweet
    “Variational Inference: A Review for Statisticians” is an excellent introduction to variational inference from — highly recommended for those looking to learn!

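The central object of that review is the evidence lower bound (ELBO), which turns posterior inference into optimization (standard formulation, not reproduced from the paper):

```latex
\log p(x) \;=\;
  \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x, z) - \log q(z)\right]}_{\mathrm{ELBO}(q)}
  \;+\; \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x)\right)
  \;\ge\; \mathrm{ELBO}(q)
```

Since log p(x) is fixed, maximizing the ELBO over a tractable family of distributions q is equivalent to minimizing the KL divergence from q to the true posterior.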
