Tweets

  1. Retweeted
    Feb 3

    In January, , , and I ran a short class at on topics we think are missing in most CS programs — tools we use every day that everyone should know, like bash, git, vim, and tmux. And now the lecture notes and videos are online!

  2. Retweeted
    Feb 3

    This repo is full of amazing awesomeness. I don't know of anything else like it. Independent, refactored, carefully tested implementations of modern CNNs.

  3. Retweeted
    Jan 31

    Note that this is *not* just about time series and trends. It's about the much more subtle issue of "domain shift". How do you know if you have domain shift? Here's a great method, from our forthcoming book ():

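    The method itself isn't quoted in the tweet, but the standard version of this check (often called "adversarial validation") is: label each row by which split it came from, then train a classifier to tell the splits apart. A minimal sklearn sketch, assuming plain feature matrices:

        # If a classifier can distinguish training rows from test rows, the
        # two splits come from different distributions (domain shift).
        # AUC near 0.5 = no detectable shift; AUC near 1.0 = strong shift.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def domain_shift_score(X_train, X_test):
            X = np.vstack([X_train, X_test])
            y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
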
  4. Retweeted
    Jan 29

    New blog post: Contrastive Self-Supervised Learning. Contrastive methods learn representations by encoding what makes two things similar or different. I find them very promising and go over some recent works such as DIM, CPC, AMDIM, CMC, MoCo etc.

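    For context, the objective family behind most of the methods named above (CPC, AMDIM, MoCo, ...) is the InfoNCE loss. A minimal sketch, assuming two augmented "views" of each example in a batch:

        # InfoNCE-style contrastive loss: z1[i] and z2[i] are embeddings of
        # two views of the same example. Matching pairs (the diagonal) are
        # pulled together; all other pairs in the batch are pushed apart.
        import torch
        import torch.nn.functional as F

        def info_nce(z1, z2, temperature=0.1):
            z1 = F.normalize(z1, dim=1)
            z2 = F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / temperature              # batch x batch similarities
            labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
            return F.cross_entropy(logits, labels)
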
  5. Retweeted
    Jan 26

    Five out of the six top submissions in the M4 competition used, in one way or another, the winner of the M3 competition: the Theta method (or one of its extensions, such as OTM, DOTM, or Hybrid Theta).

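    If you want to try the Theta method yourself, recent versions of statsmodels ship an implementation. A quick sketch on a synthetic monthly series (the data here is made up purely for illustration):

        # Theta-method forecast via statsmodels (>= 0.12).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.forecasting.theta import ThetaModel

        # Synthetic series: linear trend plus yearly seasonality.
        idx = pd.date_range("2015-01-01", periods=60, freq="MS")
        y = pd.Series(np.linspace(10, 30, 60)
                      + 3 * np.sin(np.arange(60) * 2 * np.pi / 12), index=idx)

        res = ThetaModel(y, period=12).fit()
        print(res.forecast(12))   # 12-month-ahead Theta forecast
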
  6. Retweeted
    Jan 11

    “Meet AdaMod: a new deep learning optimizer with memory” by Less Wright

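    As I read the AdaMod paper, the "memory" is an exponential moving average of Adam's per-parameter step sizes, which then caps the current step. A single-step numpy sketch (not the authors' code):

        # One AdaMod-style update: Adam plus a long-term memory s of past
        # step sizes; the current step is clipped by s, damping Adam's
        # unstably large early steps.
        import numpy as np

        def adamod_step(theta, grad, m, v, s, t, lr=1e-3,
                        beta1=0.9, beta2=0.999, beta3=0.999, eps=1e-8):
            m = beta1 * m + (1 - beta1) * grad
            v = beta2 * v + (1 - beta2) * grad ** 2
            m_hat = m / (1 - beta1 ** t)          # bias-corrected moments
            v_hat = v / (1 - beta2 ** t)
            eta = lr / (np.sqrt(v_hat) + eps)     # Adam's per-parameter step size
            s = beta3 * s + (1 - beta3) * eta     # memory of step sizes
            eta = np.minimum(eta, s)              # cap the step by the memory
            return theta - eta * m_hat, m, v, s
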
  7. Retweeted
    Dec 21, 2015

    aut viam inveniam aut faciam ("I shall either find a way or make one")

  8. Retweeted
    Dec 6, 2019

    Why do deep ensembles trained with just random initialization work surprisingly well in practice? In our recent paper with & Huiyi Hu, we investigate this using insights from recent work on the loss landscape of neural nets. More below:

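    The recipe in question is strikingly simple. A sketch, where make_model and train_fn are hypothetical stand-ins for your own model constructor and training loop:

        # Deep ensemble: train K copies of the same network from different
        # random seeds, then average their predicted class probabilities.
        import torch

        def deep_ensemble_predict(make_model, train_fn, x_test, k=5):
            probs = []
            for seed in range(k):
                torch.manual_seed(seed)          # different random init per member
                model = train_fn(make_model())   # train_fn returns the fitted model
                model.eval()
                with torch.no_grad():
                    probs.append(torch.softmax(model(x_test), dim=-1))
            return torch.stack(probs).mean(dim=0)
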
  9. Retweeted
    Dec 3, 2019

    This video explains AdvProp from ! This technique leverages Adversarial Examples for ImageNet classification by using separate Batch Normalization layers for clean and adversarial mini-batches.

  10. Retweeted
    Dec 1, 2019

    This got me thinking. It is hard to achieve 1% growth every day. A more believable model is that "today = yesterday * (1+X)" where X is a random variable. The Japanese poster shows the special cases X=0.01 (and X=-0.01) every day. What happens when X is random?

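    This is easy to simulate. A sketch with X uniform on [-1%, +1%] (one hypothetical choice of distribution; the tweet leaves it open):

        # Simulate "today = yesterday * (1 + X)" with random X. Even though
        # E[X] = 0, the typical path drifts below 1: log(1 + X) is concave,
        # so E[log(1 + X)] < 0 by Jensen's inequality, and the median lands
        # slightly under 1 while the mean stays near 1.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-0.01, 0.01, size=(10_000, 365))  # 10k one-year paths
        final = (1 + X).prod(axis=1)
        print(final.mean(), np.median(final))  # mean ~1.00, median just below 1
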
  11. Retweeted
    Dec 3, 2019

    We introduce LOGAN, a game-theory-motivated algorithm that improves the state of the art in GAN image generation by over 30% as measured by FID. Here are samples showing higher diversity:

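    The core trick in LOGAN, as I understand the paper, is latent optimisation: nudge z along the discriminator's gradient before computing the usual GAN losses. A sketch of the basic variant (the paper's full method uses natural gradients):

        # One latent-optimisation step: move z to increase D(G(z)), then
        # train G and D as usual on the improved latent. G and D are
        # assumed to be ordinary PyTorch modules.
        import torch

        def optimize_latent(G, D, z, alpha=0.9):
            z = z.clone().requires_grad_(True)
            score = D(G(z)).sum()
            grad, = torch.autograd.grad(score, z, create_graph=True)
            return z + alpha * grad
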
  12. Retweeted

    "A Simple yet Effective Way for Improving the Performance of GANs"

  13. Retweeted
    Nov 29, 2019
    Replying to

    Thank you! It should be really useful: according to this paper, unsupervised fine-tuning, layer-wise LR, and one-cycle are crucial for BERT performance. They manage to beat ULMFiT on IMDB with BERT-Base!

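    For reference, "layer-wise LR" (discriminative learning rates) just means giving layers deeper in the stack geometrically smaller learning rates. A sketch assuming a Hugging Face-style BERT whose encoder layers live at model.bert.encoder.layer:

        # Build optimizer parameter groups with per-layer learning rates:
        # the top encoder layer keeps base_lr; each layer below it is
        # scaled down by `decay`.
        import torch

        def layerwise_lr_groups(model, base_lr=2e-5, decay=0.95):
            layers = list(model.bert.encoder.layer)   # assumed module path
            groups = []
            for i, layer in enumerate(layers):
                scale = decay ** (len(layers) - 1 - i)
                groups.append({"params": layer.parameters(),
                               "lr": base_lr * scale})
            return groups

        # optimizer = torch.optim.AdamW(layerwise_lr_groups(model))
        # one-cycle: torch.optim.lr_scheduler.OneCycleLR(optimizer, ...)
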
  14. Retweeted
    Nov 25, 2019

    AdvProp: One weird trick to use adversarial examples to reduce overfitting. Key idea is to use two BatchNorms, one for normal examples and another one for adversarial examples. Significant gains on ImageNet and other test sets.

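    A minimal sketch of the two-BatchNorm idea, shown for a single conv block:

        # AdvProp-style block: one conv trunk, two BatchNorms. Clean and
        # adversarial mini-batches are normalised separately so their
        # differing statistics never mix.
        import torch
        import torch.nn as nn

        class DualBNBlock(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.conv = nn.Conv2d(channels, channels, 3, padding=1)
                self.bn_clean = nn.BatchNorm2d(channels)  # clean-batch stats
                self.bn_adv = nn.BatchNorm2d(channels)    # adversarial stats

            def forward(self, x, adversarial=False):
                bn = self.bn_adv if adversarial else self.bn_clean
                return torch.relu(bn(self.conv(x)))
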
  15. Retweeted
    Nov 21, 2019

    This feels like a real breakthrough: Take the same basic algorithm as AlphaZero, but now *learning* its own simulator. Beautiful, elegant approach to model-based RL. ... AND ALSO STATE OF THE ART RESULTS! Well done to the team at

  16. Retweeted
    Nov 19, 2019
  17. Retweeted
    Nov 18, 2019

    Helping your neural network generalize requires preventing overfitting with these important methods.

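    The linked article isn't quoted here, but the usual anti-overfitting toolkit looks roughly like this in PyTorch (dropout, weight decay, and early stopping are standard regularisers, not necessarily the article's exact list):

        # Two common regularisers wired into a tiny classifier:
        # dropout between layers and L2-style weight decay in the optimizer.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Dropout(p=0.5),                # dropout regularisation
            nn.Linear(256, 10),
        )
        opt = torch.optim.AdamW(model.parameters(), lr=1e-3,
                                weight_decay=1e-2)
        # A third standard tool is early stopping: track validation loss each
        # epoch and stop once it hasn't improved for `patience` epochs.
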
  18. Retweeted

    Frequent users of gradient penalty (WGAN-GP, StyleGAN, etc.), make sure to try out the new Linfinity hinge gradient penalty from for better results. See for how to quickly and easily implement it in .

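    The exact formulation lives behind the links, but a plausible reading is WGAN-GP's penalty with the L2 norm swapped for L-infinity and the constraint made one-sided (a hinge). A sketch under that assumption:

        # Hypothetical L-infinity hinge gradient penalty for a critic D:
        # penalise only gradients whose max-abs component exceeds 1.
        import torch

        def linf_hinge_gp(D, x):
            x = x.clone().requires_grad_(True)
            grad, = torch.autograd.grad(D(x).sum(), x, create_graph=True)
            norm = grad.flatten(1).abs().amax(dim=1)      # per-sample L-inf norm
            return torch.relu(norm - 1.0).pow(2).mean()   # hinge penalty
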
  19. Retweeted
    Nov 13, 2019

    Excited to release a library for 3D deep learning! Check it out, and give us feedback! Great effort by Edward Smith, JF Lafleche, Artem Rozantsev, Tommy Xiang, Gav State, . We plan to extend it with many exciting features!

  20. Retweeted
    Nov 9, 2019

    I really enjoyed this paper - currently anonymous, but one of the highest scoring in ICLR reviews - that integrates topic models and language models to generate word-level text conditioned on dynamic, sentence-level topic distributions.
