Tweets
Dan Hendrycks Retweeted
Excited to announce our new paper "AugMix", which proposes a simple yet surprisingly effective method to improve robustness & uncertainty particularly under dataset shift :) Joint work with
@DanHendrycks @TheNormanMu @ekindogus @barret_zoph @jmgilmer. More details below: pic.twitter.com/hVWNvU0BfQ
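The core AugMix recipe can be sketched in a few lines: compose several random augmentation chains, blend them with Dirichlet-sampled weights, then interpolate the mixture with the original image. This is a toy sketch operating on images as flat lists of floats; the function name, signature, and defaults are illustrative, not the authors' released code.

```python
import random

def augmix(image, augmentations, width=3, depth=2, alpha=1.0):
    """Mix `width` randomly composed augmentation chains, then
    interpolate the mixture with the original image."""
    # Dirichlet(alpha) mixing weights via normalized Gamma samples
    gammas = [random.gammavariate(alpha, 1.0) for _ in range(width)]
    total = sum(gammas)
    weights = [g / total for g in gammas]

    mixed = [0.0] * len(image)
    for w in weights:
        aug = list(image)
        # each chain applies between 1 and `depth` random augmentations
        for _ in range(random.randint(1, depth)):
            aug = random.choice(augmentations)(aug)
        mixed = [m + w * a for m, a in zip(mixed, aug)]

    # convex combination of original and mixture, Beta-distributed weight
    m = random.betavariate(alpha, alpha)
    return [m * x + (1.0 - m) * y for x, y in zip(image, mixed)]
```

Because the output stays a convex combination of the original and its augmented versions, the mixed image remains close to the data manifold while still being diverse.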
ML systems that can _act cautiously_ will require detailed uncertainty estimates. An important piece of uncertainty information is an anomaly's location. Consequently we created a benchmark for locating anomalies. Paper: https://arxiv.org/abs/1911.11132 Dataset: https://github.com/hendrycks/anomaly-seg …pic.twitter.com/B5XxGKfInG
A high-level look at neural network robustness, with many mentions of research done at Berkeley.
@dawnsongtweets @ARGleave @chelseabfinn https://twitter.com/NatureNews/status/1181929823750504449
Dan Hendrycks Retweeted
We're releasing a new method to test for model robustness against adversaries not seen during training, and open-sourcing a new metric, UAR (Unforeseen Attack Robustness), which measures how robust a model is to an unanticipated attack: https://openai.com/blog/testing-robustness/ pic.twitter.com/8yJdd6oD5T
While adversaries evolve their attack strategies and do not limit themselves to l_p perturbations, most academic research assumes just the opposite. We introduce several new attacks beyond l_p perturbations and measure robustness to unforeseen attacks: https://arxiv.org/abs/1908.08016 pic.twitter.com/0Rs3XIxGdU
PyTorch 1.2 implemented the GELU activation function proposed in my first undergrad paper with
@kevingimpel: https://arxiv.org/abs/1606.08415 While the ReLU is x * <step function>, the GELU is x * <smoothed step function>. Now the GELU is the default activation in @openai GPT-2 and in BERT. https://twitter.com/PyTorch/status/1159552940257923072
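The contrast in the tweet can be made concrete: the ReLU multiplies its input by a hard step function, while the GELU multiplies it by the standard normal CDF, a smoothed step. A minimal sketch using only the standard library:

```python
import math

def relu(x):
    # ReLU(x) = x * step(x): zero for negative inputs, identity otherwise
    return x if x > 0 else 0.0

def gelu(x):
    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF --
    # the same x-gating as ReLU, but with a smoothed step function
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike the ReLU, the GELU is smooth everywhere and slightly negative for small negative inputs, while matching the ReLU asymptotically for large |x|.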
Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months. Paper: https://arxiv.org/abs/1907.07174 Dataset and code: https://github.com/hendrycks/natural-adv-examples pic.twitter.com/pd75CyK54T
(2) is currently best done by spatial and temporal consistency checks (
@dawnsongtweets work at ECCV 2018 and ICLR 2019), ensembling during evaluation (leaving dropout on during testing), and extreme input randomization (E-LPIPS, Barrage of Random Transformations). (3/3)
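"Ensembling during evaluation (leaving dropout on during testing)" means running many stochastic forward passes and averaging them, with the spread across passes serving as a cheap uncertainty signal. A toy sketch on a one-layer linear model; the function names and the model itself are illustrative:

```python
import random

def forward(x, weights, p=0.5):
    """One stochastic forward pass of a toy linear model with
    (inverted) dropout left ON at test time."""
    scale = 1.0 / (1.0 - p)  # inverted-dropout scaling keeps the mean
    return sum(xi * wi * scale for xi, wi in zip(x, weights)
               if random.random() >= p)

def mc_dropout_predict(x, weights, n_samples=500, p=0.5):
    """Average many stochastic passes; the variance across passes
    is a cheap uncertainty estimate."""
    preds = [forward(x, weights, p) for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    var = sum((pr - mean) ** 2 for pr in preds) / n_samples
    return mean, var
```

The averaged prediction converges to the deterministic model's output as the number of samples grows, while a high variance flags inputs the ensemble disagrees on.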
(1) involves training on adversarial noise (which requires harnessing more training information through pre-training, self-supervised learning, mixup augmentation, and semi-supervised learning) or automatically smoothing the loss surface (Guided Complement Entropy). (2/3)
Currently, adversarial robustness can be improved by (1) increasing stability on noise or by (2) reducing the freedom of the attacker. (tweet 1/3)
Our new work shows how self-supervised learning can surpass supervised learning for out-of-distribution detection, and that it can improve adversarial, common corruption, and label poisoning robustness: https://arxiv.org/abs/1906.12340 pic.twitter.com/KouzVtOE3W
Dan Hendrycks Retweeted
A dragonfly mislabeled as a manhole cover? What about a squirrel confused for a sea lion? These images are challenging algorithms to be more resilient to attacks. https://trib.al/Pp0wLUF
Translated images are enough to break ConvNets. Richard Zhang proposes a max pooling modification that dramatically increases stability, even on images changed by ImageNet-P perturbations such as Gaussian noise, resizing, and viewpoint variation. https://richzhang.github.io/antialiased-cnns/ pic.twitter.com/5zoqculLuX
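The max pooling modification works by separating pooling from downsampling: evaluate the max densely (stride 1), low-pass filter, then subsample, so small translations no longer flip which pixels survive. A toy 1-D sketch under those assumptions; the function name and the [1, 2, 1]/4 blur kernel here are illustrative simplifications:

```python
def antialiased_maxpool_1d(x, stride=2):
    """1-D sketch of anti-aliased max pooling: dense max (stride 1),
    then blur, then subsample."""
    # dense max over windows of size 2, stride 1
    dense = [max(a, b) for a, b in zip(x, x[1:])]
    # blur before subsampling to suppress aliasing (edge-replicate padding)
    padded = [dense[0]] + dense + [dense[-1]]
    blurred = [0.25 * padded[i] + 0.5 * padded[i + 1] + 0.25 * padded[i + 2]
               for i in range(len(dense))]
    return blurred[::stride]
```

Standard strided max pooling skips the blur step, so a one-pixel shift of the input can change which samples the stride lands on; the low-pass filter smears responses across neighbors before subsampling, which is what buys the stability.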
Our paper _Using Pre-Training Can Improve Model Robustness and Uncertainty_ will be presented at ICML 2019. In the paper we bring adversarial robustness from 46% to 57%, take issue with a previous characterization of pre-training, and more. Pre-print here: https://arxiv.org/abs/1901.09960
Dan Hendrycks Retweeted
We (@SharonYixuanLi, @DanHendrycks, @tdietterich, @jmgilmer & myself) are organizing a workshop on "Uncertainty & Robustness in Deep Learning" at #ICML2019. See https://sites.google.com/view/udlworkshop2019/ for more info. Please submit your work (deadline: April 30, 2019) and attend the workshop! :)