Tweets
Pinned Tweet
1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in the picture. Paper: https://arxiv.org/abs/2001.03994 Code: https://github.com/locuslab/fast_adversarial Joint work with @_leslierice and @zicokolter, to be at #ICLR2020. pic.twitter.com/2EmwFaX7Qp
4/ Did you try FGSM before and it didn't work? It probably failed due to "catastrophic overfitting": plotting the learning curves reveals that, if done incorrectly, FGSM adv training learns a robust classifier, up until it suddenly and rapidly deteriorates within a single epoch.
3/ Save your valuable time with cyclic learning rates and mixed precision! These techniques can train robust CIFAR10 and ImageNet models in 6 min and 12 hrs respectively using FGSM adv training. Super easy to incorporate (just add 3-4 lines of code), and can accelerate any training method.
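The cyclic schedule mentioned here can be sketched as a simple triangular ramp; this is a minimal illustrative version, and the exact shape used in the paper's code may differ:

```python
def cyclic_lr(step, total_steps, lr_max):
    """Triangular cyclic learning rate: ramp linearly up to lr_max at the
    midpoint of training, then linearly back down to zero."""
    mid = total_steps / 2
    if step <= mid:
        return lr_max * step / mid
    return lr_max * (total_steps - step) / mid
```

Dropping this into an existing training loop only requires setting the optimizer's learning rate from `cyclic_lr` at each step, which is where the "3-4 lines of code" estimate comes from.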
2/ Summary: Changing the initialization to be uniformly random is the main contributor towards successful FGSM adversarial training. Generated adversarial examples need to be able to actually span the entire threat model, but otherwise don't need to be that strong for training.
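The recipe summarized in this thread (uniformly random start inside the threat model, a single FGSM step, then project back) can be sketched in a few lines. The version below is a hypothetical numpy toy for a logistic-regression model, not the paper's PyTorch code:

```python
import numpy as np

def fgsm_random_init(w, b, x, y, eps, alpha, seed=0):
    """One-step FGSM attack with a uniformly random start, for a toy
    logistic-regression model p(y=1|x) = sigmoid(w.x + b)."""
    rng = np.random.default_rng(seed)
    # Key ingredient: start at a uniformly random point inside the
    # L-inf ball of radius eps, rather than at x itself.
    delta = rng.uniform(-eps, eps, size=x.shape)
    # For this model, the input gradient of the cross-entropy loss
    # is (sigmoid(w.(x + delta) + b) - y) * w.
    p = 1.0 / (1.0 + np.exp(-(w @ (x + delta) + b)))
    grad = (p - y) * w
    # Single signed-gradient step, then project back onto the eps-ball
    # so the generated example stays inside the threat model.
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta
```

The random start is what lets the generated examples span the whole threat model, even though each one is only a single (weak) FGSM step.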
New blog post from @_vaishnavh on generalization for deep networks! https://twitter.com/_vaishnavh/status/1148592622509928450
Eric Wong Retweeted
Happy to share our new paper on provable robustness for boosting. For boosted stumps, we can solve the min-max problem *exactly*. For boosted trees, we minimize an upper bound on robust loss. Everything is nice & convex! Paper https://arxiv.org/abs/1906.03526 Code https://github.com/max-andr/provably-robust-boosting pic.twitter.com/w23b6pCYTP
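For intuition, the worst case of a single decision stump under an L-inf perturbation can be computed exactly: the adversary can reach either side of the split whenever the threshold lies within eps of the input. This toy sketch (hypothetical names, and a single stump rather than the boosted ensembles the paper certifies) shows the idea:

```python
def robust_stump_prediction(x_j, threshold, eps, left_val, right_val):
    """Exact worst- and best-case outputs of a decision stump on feature
    x_j when the adversary may shift x_j by at most eps in either
    direction. The stump outputs left_val if x_j <= threshold,
    else right_val."""
    reachable = []
    if x_j - eps <= threshold:   # adversary can land on the left side
        reachable.append(left_val)
    if x_j + eps > threshold:    # adversary can land on the right side
        reachable.append(right_val)
    return min(reachable), max(reachable)
```

Because this min-max is exact per stump, the robust loss of an ensemble of stumps can be optimized directly, which is what makes the boosted-stumps case solvable exactly.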
Eric Wong Retweeted
Excited to present a contributed talk at the #icml2019udl workshop tomorrow about our paper that analyzes overconfident predictions of ReLU networks and suggests a new training scheme to mitigate this. Paper: https://arxiv.org/abs/1812.05720 Code: https://github.com/max-andr/relu_networks_overconfident pic.twitter.com/TuSXgzGa90
Eric Wong Retweeted
How can the #MachineLearning community contribute to #ClimateChange solutions? We present recommendations for researchers, entrepreneurs, governments, and more in areas spanning mitigation, adaptation, and tools for action. #ClimateChangeAI http://arxiv.org/abs/1906.05433 pic.twitter.com/2ghRl9Ctb9
Eric Wong Retweeted
Come to our spotlight talk about uniform convergence in deep learning at 9:55am! @zicokolter https://arxiv.org/abs/1902.04742 https://twitter.com/bneyshabur/status/1139321513851551745
Today at #ICML I'll be presenting our work on Wasserstein adversarial examples. Come listen to the short talk at 12:00 in the Adversarial Examples session or drop by poster 67 in the evening. Paper: https://arxiv.org/abs/1902.07906 Code: https://github.com/locuslab/projected_sinkhorn
Eric Wong Retweeted
Excited to have received a best paper honorable mention for our paper "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver" (with @_powei, Bryan Wilder, and @zicokolter) at #ICML2019! pic.twitter.com/UY2rIX5to9
Eric Wong Retweeted
1/ Integrate logic and deep learning with #SATNet, a differentiable SAT solver! #icml2019 Paper: https://arxiv.org/abs/1905.12149 Code: https://github.com/locuslab/SATNet Joint work with @priyald17, Bryan Wilder, and @zicokolter. pic.twitter.com/YuVGHytMaV
Eric Wong Retweeted
We are now recruiting submissions for the "Climate Change: How Can AI Help?" workshop at #ICML2019. Details at http://www.climatechange.ai. #climateChangeAI pic.twitter.com/eUvbBvK22k
This adversarial defense introduces a regularizer that increases the distance of each point to the decision boundary. It also leverages released, open-source software from other researchers to certify its results: let's all continue releasing usable code! https://twitter.com/maksym_andr/status/1105585316440784898
Eric Wong Retweeted
Our research group is starting a (technical) blog! First post, by @RICEric22, covers provable adversarial defenses: https://locuslab.github.io/2019-03-12-provable/. Each post is downloadable as a Jupyter notebook, so you can recreate all the examples. More info about the blog here: https://locuslab.github.io/2019-02-20-introduction/
Fellow lab member Alnur wrote a great post explaining precisely how using early stopping on gradient descent for the least-squares problem is equivalent to adding regularization using ridge regression. https://twitter.com/mldcmu/status/1103747773629956096
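A quick numerical way to see the regularization effect (a sketch on hypothetical toy data, not Alnur's derivation): run gradient descent from zero on the least-squares loss and compare against the closed-form ridge solution. Stopping early yields a smaller-norm solution, the same shrinkage ridge provides, while running to convergence recovers ordinary least squares:

```python
import numpy as np

def gd_least_squares(X, y, eta, steps):
    """Gradient descent on the least-squares loss ||Xw - y||^2 / 2,
    started from w = 0 as in the early-stopping setting."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= eta * X.T @ (X @ w - y)
    return w

def ridge(X, y, lam):
    """Closed-form ridge regression: (X'X + lam I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Roughly, t steps of size eta act like ridge with lambda on the order of 1/(eta*t); running gradient descent longer corresponds to weaker regularization.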
New paper on Wasserstein adversarial examples, as a step towards considering convex metrics for perturbation regions that capture structure beyond norm-balls. Paper: https://arxiv.org/abs/1902.07906 Code: https://github.com/locuslab/projected_sinkhorn pic.twitter.com/8rixoKuDNT
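The computational primitive behind projected Sinkhorn is the plain Sinkhorn iteration for entropy-regularized optimal transport; a generic sketch of that primitive (not the paper's projection step) looks like this:

```python
import numpy as np

def sinkhorn(C, a, b, reg, n_iters=200):
    """Sinkhorn iterations for entropy-regularized optimal transport
    between histograms a and b with cost matrix C. Returns the
    (approximate) transport plan P, whose marginals match a and b."""
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternate matching the two marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

The Wasserstein threat model then constrains how much probability mass an adversary may move between nearby pixels, which is structure an L-inf or L2 ball cannot express.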
Eric Wong Retweeted
1/ I'm excited to share our work on randomized smoothing, a PROVABLE adversarial defense in L2 norm which works on ImageNet! We achieve a *provable* top-1 accuracy of 49% in the face of adversarial perturbations with L2 norm less than 0.5 (=127/255). https://arxiv.org/abs/1902.02918 pic.twitter.com/bX4rqNF2ge
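The certified radius in randomized smoothing comes from a closed-form bound: if the smoothed classifier's top class has probability at least p_A under Gaussian noise with standard deviation sigma, the prediction is provably stable within L2 radius sigma * Phi^{-1}(p_A). A small sketch of that formula using only the standard library (illustrative; the paper also handles confidence bounds on p_A from sampling):

```python
from statistics import NormalDist

def certified_radius(sigma, p_a_lower):
    """Certified L2 radius R = sigma * Phi^{-1}(p_A) for randomized
    smoothing with Gaussian noise of std sigma, given a lower bound
    p_a_lower on the top-class probability under that noise."""
    if p_a_lower <= 0.5:
        return 0.0  # below 1/2 the top class is not certifiable
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

The radius grows with both the noise level and how confidently the smoothed classifier picks the top class, which is the trade-off the paper explores on ImageNet.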
Eric Wong Retweeted
At the heart of most deep learning generalization bounds (VC, Rademacher, PAC-Bayes) is uniform convergence (u.c.). We argue why u.c. may be unable to provide a complete explanation of generalization, even if we take into account the implicit bias of SGD. https://arxiv.org/pdf/1902.04742.pdf
Fellow labmate Priya Donti's work on 'Inverse Optimal Power Flow' was recognized as a highlighted paper at the #NeurIPS #AIforSocialGood workshop! Short paper link: http://goo.gl/jCXim8 @priyald17