Dan Hendrycks

@DanHendrycks

UC Berkeley PhD student. Focusing on Trustworthy Machine Learning via value modeling, robustness, and uncertainty. Maximizing utility.

Berkeley, CA • Marshfield, MO
Joined August 2009

Tweets

  1. Retweeted
    Dec 5, 2019

    Excited to announce our new paper "AugMix", which proposes a simple yet surprisingly effective method to improve robustness & uncertainty, particularly under dataset shift :) Joint work with . More details below:

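    A minimal sketch of the AugMix mixing scheme (the op set, width/depth defaults, and function names below are illustrative simplifications of the paper's procedure):

        import numpy as np
        from PIL import Image, ImageOps

        def augmix(image, ops, width=3, depth=3, alpha=1.0):
            # Sample convex weights for `width` augmentation chains, plus a
            # Beta-sampled weight for mixing with the original image.
            ws = np.random.dirichlet([alpha] * width)
            m = np.random.beta(alpha, alpha)
            orig = np.asarray(image, dtype=np.float32)
            mix = np.zeros_like(orig)
            for w in ws:
                aug = image
                for _ in range(np.random.randint(1, depth + 1)):
                    aug = np.random.choice(ops)(aug)  # apply a random op
                mix += w * np.asarray(aug, dtype=np.float32)
            # Interpolate the mixed augmentations with the clean image.
            return Image.fromarray(np.uint8((1 - m) * orig + m * mix))

        # Example op set; the paper uses a larger set of simple transforms.
        ops = [ImageOps.autocontrast, ImageOps.equalize,
               lambda im: im.rotate(10), lambda im: ImageOps.posterize(im, 4)]
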
  2. Nov 26, 2019

    ML systems that can _act cautiously_ will require detailed uncertainty estimates. An important piece of uncertainty information is an anomaly's location. Consequently, we created a benchmark for locating anomalies. Paper: Dataset:

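    A minimal sketch of per-pixel anomaly scoring with the maximum softmax probability baseline (illustrative; not necessarily this benchmark's exact evaluation protocol):

        import torch

        def pixel_anomaly_scores(seg_model, x):
            # seg_model maps images (N, 3, H, W) to per-pixel class logits (N, C, H, W).
            with torch.no_grad():
                probs = seg_model(x).softmax(dim=1)
            # Low confidence in every known class suggests an anomalous pixel,
            # so 1 - max softmax probability serves as the anomaly score.
            return 1.0 - probs.max(dim=1).values  # (N, H, W)
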
  3. Oct 9, 2019

    A high-level look at neural network robustness, with many mentions of research done at Berkeley.

  4. Retweeted
    Aug 22, 2019

    We're releasing a new method to test for model robustness against adversaries not seen during training, and open-sourcing a new metric, UAR (Unforeseen Attack Robustness), which measures how robust a model is to an unanticipated attack:

  5. Aug 21, 2019

    While adversaries evolve their attack strategies and do not limit themselves to l_p perturbations, most academic research assumes just the opposite. We introduce several new attacks beyond l_p perturbations and measure robustness to unforeseen attacks:

  6. Aug 8, 2019

    PyTorch 1.2 implemented the GELU activation function proposed in my first undergrad paper with : While the ReLU is x * <step function>, the GELU is x * <smoothed step function>. The GELU is now the default activation in GPT-2 and BERT.

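    In code, the comparison looks like this (a minimal PyTorch sketch; F.gelu is the built-in added in PyTorch 1.2):

        import torch
        import torch.nn.functional as F

        x = torch.linspace(-3, 3, 7)

        # ReLU is x * (step function): gates the input hard at zero.
        relu_out = F.relu(x)

        # GELU is x * Phi(x), where Phi is the standard normal CDF,
        # i.e. a smoothed step function.
        gelu_out = F.gelu(x)

        # Explicit form of the exact GELU for comparison.
        manual = x * 0.5 * (1.0 + torch.erf(x / 2.0 ** 0.5))
        assert torch.allclose(gelu_out, manual, atol=1e-6)
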
  7. Jul 17, 2019

    Natural Adversarial Examples are real-world, unmodified examples that consistently confuse classifiers. The new dataset has 7,500 images, which we personally labeled over several months. Paper: Dataset and code:

  8. Jul 2, 2019

    (2) is currently best done by spatial and temporal consistency checks ( work at ECCV 2018 and ICLR 2019), ensembling during evaluation (leaving dropout on during testing; sketched after this thread), and extreme input randomization (E-LPIPS, Barrage of Random Transformations). (3/3)

  9. Jul 2, 2019

    (1) involves training on adversarial noise (which requires harnessing more training information through pre-training, self-supervised learning, mixup augmentation, and semi-supervised learning) or automatically smoothing the loss surface (Guided Complement Entropy). (2/3)

  10. Jul 2, 2019

    Currently, adversarial robustness can be improved by (1) increasing stability on noise or by (2) reducing the freedom of the attacker. (tweet 1/3)

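    A minimal sketch of the test-time dropout ensembling mentioned in the thread above (the helper name is illustrative; note that model.train() also re-enables batch norm updates, which a careful implementation would keep in eval mode):

        import torch

        def mc_dropout_predict(model, x, passes=8):
            # Leave dropout active at test time and average softmax
            # outputs over several stochastic forward passes.
            model.train()
            with torch.no_grad():
                probs = torch.stack([model(x).softmax(dim=1)
                                     for _ in range(passes)])
            return probs.mean(dim=0)
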
  11. Jul 2, 2019

    Our new work shows how self-supervised learning can surpass supervised learning for out-of-distribution detection, and that it can improve adversarial, common corruption, and label poisoning robustness:

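    A minimal sketch of the general idea, assuming a 4-way rotation-prediction head (the scoring rule below is an illustrative variant, not necessarily the paper's exact formulation):

        import torch
        import torch.nn.functional as F

        def rotation_ood_score(rot_model, x):
            # rot_model maps an image batch to 4-way logits predicting
            # which of {0, 90, 180, 270} degrees of rotation was applied.
            losses = []
            with torch.no_grad():
                for k in range(4):
                    rotated = torch.rot90(x, k, dims=(2, 3))
                    target = torch.full((x.size(0),), k, dtype=torch.long)
                    losses.append(F.cross_entropy(rot_model(rotated), target,
                                                  reduction='none'))
            # If the self-supervised task is poorly solved on an input,
            # the input is likely out-of-distribution.
            return torch.stack(losses).mean(dim=0)
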
  12. Retweeted

    A dragonfly mislabeled as a manhole cover? What about a squirrel confused for a sea lion? These images are challenging algorithms to be more resilient to attacks.

  13. Jun 1, 2019

    Merely translating (shifting) an image is enough to break ConvNets. Richard Zhang proposes a max pooling modification that dramatically increases stability, even on images changed by ImageNet-P perturbations such as Gaussian noise, resizing, and viewpoint variation.

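    A minimal sketch of anti-aliased max pooling in this spirit (the filter size, padding, and class name are illustrative choices):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BlurMaxPool2d(nn.Module):
            # Dense max, then low-pass blur, then strided subsampling, so small
            # input shifts no longer alias into large output changes.
            def __init__(self, channels, stride=2):
                super().__init__()
                self.stride = stride
                f = torch.tensor([1.0, 2.0, 1.0])
                kernel = (f[:, None] * f[None, :]) / 16.0  # 3x3 binomial filter
                self.register_buffer('kernel',
                                     kernel[None, None].repeat(channels, 1, 1, 1))

            def forward(self, x):
                x = F.max_pool2d(x, kernel_size=2, stride=1)  # dense max
                return F.conv2d(x, self.kernel, stride=self.stride,
                                padding=1, groups=x.size(1))
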
  14. Apr 21, 2019

    Our paper _Using Pre-Training Can Improve Model Robustness and Uncertainty_ will be presented at ICML 2019. In the paper we bring adversarial robustness from 46% to 57%, take issue with a previous characterization of pre-training, and more. Pre-print here:

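    A minimal sketch of the recipe (start from pre-trained weights, then fine-tune adversarially; the PGD hyperparameters below are illustrative, not the paper's exact settings):

        import torch
        import torch.nn.functional as F
        import torchvision

        # Pre-trained initialization instead of training from scratch.
        model = torchvision.models.resnet50(pretrained=True)

        def pgd(model, x, y, eps=8/255, alpha=2/255, steps=7):
            # Standard l_inf projected gradient descent attack.
            x_adv = x.clone().detach()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), y)
                grad = torch.autograd.grad(loss, x_adv)[0]
                x_adv = x_adv.detach() + alpha * grad.sign()
                x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
            return x_adv

        # Adversarial fine-tuning then minimizes the loss on pgd(model, x, y) batches.
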
  15. Retweeted

    We ( , , & myself) are organizing a workshop on "Uncertainty & Robustness in Deep Learning" at . See for more info. Please submit your work (deadline: April 30, 2019) and attend the workshop! :)

