Tweets


  1. Retweeted
    Jan 15

    1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in picture Paper: Code: Joint work with and to be at
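The FGSM step this tweet relies on is a single signed-gradient perturbation of the input. A minimal numpy sketch on binary logistic regression, where the input gradient is analytic (illustrative only, not the paper's code; all names are mine):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step for binary logistic regression.

    Loss: log(1 + exp(-y * (w @ x + b))) with y in {-1, +1}.
    The gradient of the loss w.r.t. the input x is
    -y * sigmoid(-y * (w @ x + b)) * w, and FGSM moves x by
    eps in the direction of the sign of that gradient.
    """
    margin = y * (np.dot(w, x) + b)
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w  # d(loss)/dx
    return x + eps * np.sign(grad_x)

# Tiny demo: the perturbed point has a strictly smaller margin.
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.1
x, y = rng.normal(size=3), 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

Because the perturbation is a pure sign step, every coordinate moves by exactly `eps`, and the classification margin can only decrease.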
  2. Retweeted
    Jan 16

    Our new adversarial attack published @ NeurIPS 2019 is now available for Foolbox and CleverHans! The attack is SOTA in L0, L1, L2 & Linf, needs close to no hyperparameter tuning & is less susceptible to some types of gradient masking. Blog post @
  3. Retweeted
    Jan 6

    We distill key components for pre-training representations at scale: BigTransfer ("BiT") achieves SOTA on many benchmarks with ResNet, e.g. 87.8% top-1 on ImageNet (86.4% with only 25 images/class) and 99.3% on CIFAR-10 (97.6% with only 10 images/class).
  4. Retweeted
    Jan 3

    A General and Adaptive Robust Loss Function. They propose an analytical function that can represent a family of well-known robust cost functions with just a single parameter (alpha). Alpha lets you walk through L2, Huber, Cauchy, Tukey and more.
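The single-parameter family the tweet describes (Barron's general robust loss) fits in a few lines. A hedged numpy sketch with the standard special cases handled explicitly; the function name and defaults are mine, not the paper's reference code:

```python
import numpy as np

def adaptive_loss(x, alpha, c=1.0):
    """General adaptive robust loss, parameterized by alpha.

    alpha = 2    -> 0.5 * (x/c)^2             (L2)
    alpha = 1    -> sqrt((x/c)^2 + 1) - 1     (pseudo-Huber / smooth L1)
    alpha = 0    -> log(0.5 * (x/c)^2 + 1)    (Cauchy / Lorentzian)
    alpha -> -inf-> 1 - exp(-0.5 * (x/c)^2)   (Welsch / Leclerc)
    """
    z = (x / c) ** 2
    if alpha == 2:
        return 0.5 * z
    if alpha == 0:
        return np.log(0.5 * z + 1.0)
    if np.isneginf(alpha):
        return 1.0 - np.exp(-0.5 * z)
    d = abs(alpha - 2.0)
    return (d / alpha) * ((z / d + 1.0) ** (alpha / 2.0) - 1.0)
```

Lowering alpha flattens the tails, so large residuals contribute less and less gradient, which is exactly the "walk through L2, Huber, Cauchy, Tukey" the tweet mentions.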
  5. Retweeted
    Dec 14, 2019

    Robust regression is obtained by using an L1 loss in place of least squares. Removes outliers. For d=0, it computes the median instead of the mean.
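The mean/median claim is easy to verify numerically: for a degree-0 model, the L2-optimal constant is the mean (dragged by outliers), while the L1-optimal constant is the median. A toy sketch, exploiting the fact that the L1 minimum always lies at a data point:

```python
import numpy as np

# Data with one gross outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# Degree-0 "fit": the constant d minimizing the loss.
# Least squares -> mean; L1 loss -> median. Since the L1 cost
# is piecewise linear with kinks at the samples, scanning the
# samples themselves finds its exact minimizer.
l1_cost = lambda d: np.abs(x - d).sum()
d_l1 = min(x, key=l1_cost)   # the median, untouched by the outlier
d_l2 = x.mean()              # the mean, pulled toward 100
```

Here `d_l1` stays at 3 while `d_l2` is pulled to 22 by the single outlier.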
  6. Retweeted
    Dec 24, 2019

    Some folks still seem confused about what deep learning is. Here is a definition: DL is constructing networks of parameterized functional modules & training them from examples using gradient-based optimization...
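That definition can be made concrete in a few lines: one parameterized module, a set of examples, and gradient-based optimization. A toy numpy sketch (my own illustration, not anything from the thread):

```python
import numpy as np

# A "network" of one parameterized module f(x) = w * x, trained
# from examples by gradient descent on the mean squared error.
rng = np.random.default_rng(1)
xs = rng.normal(size=64)
ys = 2.0 * xs                  # examples drawn from the target function

w, lr = 0.0, 0.1
for _ in range(200):
    grad = np.mean(2.0 * (w * xs - ys) * xs)   # d/dw of mean (w*x - y)^2
    w -= lr * grad                              # gradient step
```

After training, `w` has converged to the generating coefficient 2.0; stacking many such modules and using autodiff for the gradients is the rest of the definition.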
  7. Retweeted
    Nov 25, 2019
  8. Retweeted
    Dec 3, 2019

    This video explains AdvProp from ! This technique leverages adversarial examples for ImageNet classification by using separate Batch Normalization layers for clean and adversarial mini-batches.
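The separate-BN trick can be sketched independently of any framework: keep two sets of running statistics and route each mini-batch to its own branch, so the clean and adversarial input distributions never get mixed. A simplified numpy illustration with no affine parameters; class and key names are mine, not the AdvProp code:

```python
import numpy as np

class DualBatchNorm:
    """Sketch of AdvProp's auxiliary-BN idea (simplified).

    Clean and adversarial mini-batches are normalized and tracked
    with *separate* running statistics per branch.
    """

    def __init__(self, num_features, momentum=0.1):
        self.stats = {
            "clean": [np.zeros(num_features), np.ones(num_features)],
            "adv":   [np.zeros(num_features), np.ones(num_features)],
        }
        self.momentum = momentum

    def __call__(self, x, branch):
        mean, var = self.stats[branch]
        batch_mean, batch_var = x.mean(axis=0), x.var(axis=0)
        # Update running statistics for this branch only (in place).
        mean += self.momentum * (batch_mean - mean)
        var += self.momentum * (batch_var - var)
        # Normalize with the current batch statistics.
        return (x - batch_mean) / np.sqrt(batch_var + 1e-5)
```

At inference time only the clean branch's running statistics would be used, which is the point: the adversarial distribution regularizes training without contaminating the clean-data statistics.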
  9. Retweeted

    AR-Net combines the best of both traditional statistical models and neural network models for time series modeling using a feedforward neural network approach. Our proposed model is easier to use and scales well for large volumes of training data.
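The premise, that a feedforward layer over lagged values is an AR model whose weights are the AR coefficients, can be checked on a toy series. Here the "layer" is fit in closed form rather than by SGD as in the actual paper (synthetic data, my own sketch):

```python
import numpy as np

# Simulate a hypothetical AR(2) process: y[t] = 0.6*y[t-1] + 0.3*y[t-2] + noise.
rng = np.random.default_rng(0)
n, p = 500, 2
true_coefs = np.array([0.6, 0.3])
y = np.zeros(n)
for t in range(p, n):
    y[t] = true_coefs[0] * y[t - 1] + true_coefs[1] * y[t - 2] \
        + 0.01 * rng.normal()

# A linear layer mapping [y[t-1], y[t-2]] -> y[t]; its weights
# ARE the AR coefficients. Fit in closed form via least squares.
X = np.column_stack([y[1:-1], y[:-2]])       # rows: [y[t-1], y[t-2]]
coefs, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
```

The recovered `coefs` land close to `[0.6, 0.3]`; AR-Net's contribution is fitting this mapping (and its sparse higher-order variants) with SGD so it scales to long lag orders and large datasets.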
  10. Retweeted

    Self-supervised AMDIM learns to represent observations of a shared cause (sights, scents & sounds of baking), driven by a desire to predict related observations (the taste of cookies). MSR Montreal trained SOTA image representations with only 4 GPUs.
  11. Retweeted

    What are the definitive early references for anomaly detection in machine learning?
  12. Retweeted
    Aug 4, 2018

    A tale of two AI summers:

        1980s            Now
        Expert systems   Deep learning
        More rules!      More data!
        LISP machines    GPUs
        Cyc              DeepMind
        Brittleness      Brittleness
  13. Retweeted
    Aug 3, 2018
  14. Retweeted

    Welcome back, gradients! This method is orders of magnitude faster than state-of-the-art non-differentiable techniques. DARTS: Differentiable Architecture Search by Hanxiao Liu, Karen Simonyan, and Yiming Yang. Paper: Code:
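DARTS's key move, the one that makes gradients applicable to architecture search, is replacing the discrete choice of operation on each edge with a softmax-weighted mixture of all candidates. A minimal numpy sketch with trivial stand-in ops (naming is mine, not the DARTS code):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge of the cell. Real DARTS uses
# convolutions and pooling; trivial stand-ins keep the sketch short.
ops = [
    lambda x: x,                  # identity / skip connection
    lambda x: np.maximum(x, 0),   # ReLU as a stand-in op
    lambda x: np.zeros_like(x),   # "zero" op = no connection
]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax(alpha)-weighted mix of all ops,
    so the architecture parameters alpha are differentiable."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)          # uniform mix -> simple average of the ops
out = mixed_op(x, alpha)
```

After `alpha` is trained jointly with the network weights, the discrete architecture is recovered by keeping only the highest-weight op on each edge.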
  15. Retweeted
    Aug 28, 2017

    Speech and Language Processing, 3rd ed.: partial draft of 21 chapters at . Thanks to all you readers for advice/typos!
  16. Retweeted
    Jan 16, 2018

    I tend to see few real-world applications of Deep RL outside of games. Surprised that I found 3 in today's papers! Deep RL for input fuzzing (), characterizing cell movement () and wireless communication ().
  17. Retweeted
    Oct 21, 2017

    CMU Neural Nets for NLP (13): "Parsing w/ dynamic programs". Code example is 's Deep Bi-affine Attention.
  18. Retweeted

    Ubuntu 17.10 releases with GNOME, Kubernetes 1.8 & minimal base images
  19. Retweeted
    Aug 21, 2017

    Beautiful exposition of variational inference (such as in gensim) and beyond 🍳 A must for all + lovers! 🔬
  20. Retweeted
    Jul 26, 2017

    - a new Python toolkit from Cambridge University for statistical dialogue systems.
