Tweets

  1. Retweeted
    31 Jan

    An Opinionated Guide to ML Research: “To make breakthroughs with idea-driven research, you need to develop an exceptionally deep understanding of your subject, and a perspective that diverges from the rest of the community—some can do it, but it’s hard.”

  2. Retweeted
    8 Nov 2017

    in today’s Stats 385 lecture, Jeffrey Pennington of Google studies Theory of Neural Nets using Random Matrix Theory

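    The connection is easiest to see on a toy example. A minimal sketch (assuming plain NumPy and Gaussian initialization) of the basic object random matrix theory brings to neural-net analysis: the eigenvalue spectrum of W Wᵀ for a random weight matrix W.

    ```python
    import numpy as np

    n_in, n_out = 1000, 500                            # layer dimensions
    W = np.random.randn(n_out, n_in) / np.sqrt(n_in)   # variance-1/n_in init

    # Empirical eigenvalue spectrum of the Gram matrix W W^T.
    eigvals = np.linalg.eigvalsh(W @ W.T)

    # For Gaussian W this spectrum converges to the Marchenko-Pastur law
    # with aspect ratio q = n_out / n_in as the dimensions grow.
    q = n_out / n_in
    lo, hi = (1 - q**0.5) ** 2, (1 + q**0.5) ** 2
    print(f"empirical range:          [{eigvals.min():.3f}, {eigvals.max():.3f}]")
    print(f"Marchenko-Pastur support: [{lo:.3f}, {hi:.3f}]")
    ```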
  3. Retweeted
    8 Jan

    Many aspiring AI engineers ask me how to take the next step and join an AI team. This report from , a affiliate, walks you through how AI teams work and which skills you need for different AI career tracks. Download it here:

  4. Retweeted

    So, "deep learning" is the idea of doing representation learning via a chain of learned feature extractors. It's all about describing some input data via *deep hierarchies of features*, where features are *learned*. A further question is then: is the brain "deep learning"?

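    A minimal sketch of that definition (assuming PyTorch): each layer is a learned feature extractor that re-describes the previous layer's output, and the whole chain is trained jointly.

    ```python
    import torch.nn as nn

    # A chain of learned feature extractors: deeper layers describe the
    # input in terms of features produced by the layers below them.
    deep_features = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # low-level features (edges)
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # mid-level features (motifs)
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # high-level features (parts)
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 10),                           # task head on the hierarchy
    )
    ```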
  5. Retweeted

    Courses 1 & 2 of ‘s newest Specialization is now available on ! Training a model is only one step in building a working AI system. These courses teach you how to navigate some key deployment scenarios. Enroll here:

  6. 28 Nov 2019
  7. Retweeted
    22 Oct 2019

    More on domain randomization here by

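    For context, a minimal sketch of the idea (the simulator API and parameter names here are hypothetical): randomize nuisance parameters of the simulation every episode, so a policy trained only in simulation cannot overfit to any one rendering of the world.

    ```python
    import random

    def randomized_sim_config():
        # Hypothetical nuisance parameters; real setups randomize textures,
        # lighting, dynamics, camera pose, and more.
        return {
            "light_intensity": random.uniform(0.2, 2.0),
            "object_texture":  random.choice(["wood", "metal", "noise"]),
            "friction":        random.uniform(0.5, 1.5),
            "camera_jitter":   random.gauss(0.0, 0.02),
        }

    # for episode in range(n_episodes):          # hypothetical training loop
    #     env.reset(**randomized_sim_config())
    #     collect_rollout_and_update_policy(env)
    ```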
  8. Retweeted
    12 Nov 2019

    Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:

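    The core loop being described is self-training with pseudo-labels (the paper's "Noisy Student" recipe additionally injects noise into the student; that part is omitted here). A sketch with hypothetical `teacher`, `student`, and `train` objects:

    ```python
    def self_train(teacher, student, labeled, unlabeled, train, confidence=0.9):
        # 1) Teacher pseudo-labels the unlabeled pool, keeping confident cases.
        pseudo = []
        for x in unlabeled:
            probs = teacher.predict_proba(x)
            if probs.max() >= confidence:
                pseudo.append((x, probs.argmax()))
        # 2) Student trains on real labels plus pseudo-labels.
        train(student, labeled + pseudo)
        return student    # the student can serve as the next round's teacher
    ```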
  9. Retweeted
    10 Nov 2019

    😊Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while learning in a supervised learning manner. My latest post covers many interesting ideas of self-supervised learning tasks on images, videos & control problems:

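    One classic image pretext task from that family, as a sketch (assuming PyTorch): rotation prediction, where the training label is manufactured from the unlabeled image itself.

    ```python
    import torch

    def rotation_batch(images):                  # images: (N, C, H, W)
        rotated, labels = [], []
        for k in range(4):                       # 0, 90, 180, 270 degrees
            rotated.append(torch.rot90(images, k, dims=(2, 3)))
            labels.append(torch.full((images.size(0),), k, dtype=torch.long))
        return torch.cat(rotated), torch.cat(labels)

    # Training a classifier to predict the rotation forces it to learn
    # useful visual features without any human-provided labels.
    ```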
  10. Retweeted
    22 Oct 2019

    Pushy robots learn the fundamentals of object manipulation

  11. Retweeted
    30 Apr 2019

    I now call it "self-supervised learning", because "unsupervised" is both a loaded and confusing term. In self-supervised learning, the system learns to predict part of its input from other parts of its input. In...

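    A minimal NumPy sketch of that definition: hide part of the input, then make the visible part the model input and the hidden part the training target.

    ```python
    import numpy as np

    def mask_input(x, mask_frac=0.25):
        mask = np.random.rand(*x.shape) < mask_frac
        visible = np.where(mask, 0.0, x)   # model input: signal with holes
        target = x[mask]                   # training target: the hidden part
        return visible, mask, target

    # loss = ((model(visible)[mask] - target) ** 2).mean()  # hypothetical model
    ```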
  12. Retweeted

    Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning on … - Yann

  13. Retweeted
    20 Sep 2019

    Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency by Tianwei Shen et al.

  14. Retweeted

    RoBERTa demonstrates the potential for self-supervised training techniques to match or exceed the performance of more traditional, supervised approaches. Read more:

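    That self-supervised objective is masked-token prediction, which is easy to poke at directly. A small demo using the Hugging Face `transformers` library (assumes it is installed; model weights download on first use):

    ```python
    from transformers import pipeline

    fill = pipeline("fill-mask", model="roberta-base")
    for cand in fill("Self-supervised learning needs no human <mask>."):
        print(f"{cand['token_str']!r:>15}  {cand['score']:.3f}")
    ```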
  15. Retweeted
  16. Retweeted
    6 Sep 2019

    In our new blog post, we review how brains replay experiences to strengthen memories, and how researchers use the same principle to train better AI systems:

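    The engineering counterpart of that principle is the replay buffer: store transitions and train on random re-samples of past experience rather than only the latest step. A minimal sketch:

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)   # oldest memories fall away

        def add(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            # Uniform replay; prioritized variants weight surprising transitions.
            return random.sample(self.buffer, batch_size)
    ```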
  17. Retweeted

    We see more significant improvements from training data distribution search (data splits + oversampling factor ratios) than neural architecture search. The latter is so overrated :)

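    The knob being searched over is simple to state. A sketch with made-up splits and factors: per-split oversampling ratios that reshape the training distribution.

    ```python
    import random

    # Hypothetical splits; in practice these come from slicing a real dataset.
    splits = {"common_cases": ["a1", "a2", "a3"], "rare_cases": ["b1"]}
    oversample = {"common_cases": 1, "rare_cases": 4}   # ratios found by search

    train_set = [ex for name, data in splits.items()
                 for ex in data * oversample[name]]
    random.shuffle(train_set)
    print(train_set)   # rare cases now appear four times as often
    ```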
  18. Retweeted
    27 Aug 2019

    Can neural network architectures alone, without learning any weight parameters, encode solutions for a given task? We search for “weight agnostic neural network” architectures that can perform various tasks even when using random weight values. Learn more→

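    The evaluation trick behind that search, as a sketch (with hypothetical `build_net` and `run_task` helpers): score a fixed architecture by its average performance over several shared random weight values, instead of by training the weights.

    ```python
    import random

    def weight_agnostic_score(build_net, run_task, n_samples=5):
        scores = []
        for _ in range(n_samples):
            w = random.uniform(-2.0, 2.0)      # one weight value shared by
            net = build_net(shared_weight=w)   # every connection in the net
            scores.append(run_task(net))
        # Architectures that score well regardless of w encode the solution
        # in their wiring, not in their weights.
        return sum(scores) / len(scores)
    ```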
  19. Retweeted

    The wonderful has been working on this podcast with the team for a while and I’m really excited to see the series launch! It was great fun talking to Hannah for episode 8, and I think she's captured 's culture and research brilliantly.

  20. Retweeted
    19 Aug 2019

    Nice blog post about a series of optimizations to reduce training time of a CIFAR10 image model. Many of these options are likely applicable to lots of different kinds of models. Nice work, !

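    One representative trick from that family of speedups (a sketch, not necessarily the post's exact recipe): a one-cycle learning-rate schedule, which often reaches a target accuracy in far fewer epochs. Assumes PyTorch; the model and data here are stand-ins.

    ```python
    import torch

    model = torch.nn.Linear(32, 10)      # stand-in for the CIFAR10 network
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    epochs, steps_per_epoch = 10, 100
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=0.4, epochs=epochs, steps_per_epoch=steps_per_epoch)

    for _ in range(epochs * steps_per_epoch):
        loss = model(torch.randn(64, 32)).pow(2).mean()   # dummy loss
        opt.zero_grad(); loss.backward(); opt.step()
        sched.step()           # LR ramps up, then anneals back down
    ```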
