Eric Wong

@RICEric22

Machine Learning PhD student at CMU working on optimization problems and the data science process.

Pittsburgh, PA
Joined July 2009

Tweets


  1. Pinned Tweet
    Jan 15

    1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in picture Paper: Code: Joint work with and to be at

    Show this thread
  2. Jan 15

    4/ Did you try FGSM before and it didn't work? It probably failed due to "catastrophic overfitting": plotting the learning curves reveals that, if done incorrectly, FGSM adv training learns a robust classifier, up until it suddenly and rapidly deteriorates within a single epoch.

    Show this thread
  3. Jan 15

    3/ Save your valuable time with cyclic learning rates and mixed precision! These techniques can train robust CIFAR10 and ImageNet classifiers in 6 min and 12 hrs respectively using FGSM adv training. Super easy to incorporate (just add 3-4 lines of code), and they can accelerate any training method.

    Show this thread
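As a rough illustration of the cyclic scheduling mentioned in the tweet above (the function name and parameter values here are hypothetical, not from the paper's code), a triangular one-cycle learning rate ramps linearly up to a peak at the midpoint of training and back down to zero:

```python
def cyclic_lr(step, total_steps, lr_max=0.2):
    """Triangular 'one-cycle' schedule: linearly ramp the learning rate
    up to lr_max at the training midpoint, then back down to zero."""
    mid = total_steps / 2
    if step <= mid:
        return lr_max * step / mid
    return lr_max * (total_steps - step) / mid
```

In practice this value would be fed to the optimizer at every step, which is what makes it only a few extra lines in an existing training loop.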
  4. Jan 15

    2/ Summary: Changing the initialization to be uniformly random is the main contributor towards successful FGSM adversarial training. Generated adversarial examples need to be able to actually span the entire threat model, but otherwise don't need to be that strong for training.

    Show this thread
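The uniform random initialization the thread credits can be sketched on a toy problem. This is a minimal illustration on a linear binary classifier with logistic loss, not the paper's implementation; all names are hypothetical:

```python
import numpy as np

def fgsm_random_init(x, y, w, eps, alpha, rng):
    """FGSM with a uniform random start, for a linear classifier
    f(x) = w . x with label y in {-1, +1} and logistic loss (a toy
    stand-in for the deep-network setting described in the thread)."""
    # 1) Initialize uniformly at random inside the L-inf ball of radius eps,
    #    so perturbations can span the entire threat model.
    delta = rng.uniform(-eps, eps, size=x.shape)
    # 2) Take a single gradient step on the loss wrt the input (the FGSM step).
    margin = y * np.dot(w, x + delta)
    grad_input = -y * w / (1.0 + np.exp(margin))  # d(logistic loss)/d(input)
    delta = delta + alpha * np.sign(grad_input)
    # 3) Project back onto the L-inf ball.
    delta = np.clip(delta, -eps, eps)
    return x + delta
```

The random start (step 1) is the ingredient the thread highlights; without it, the single FGSM step always starts from the clean input and training can collapse via catastrophic overfitting.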
  5. Jul 9, 2019

    New blog post from on generalization for deep networks!

  6. Retweeted

    Happy to share our new paper on provable robustness for boosting. For boosted stumps, we can solve the min-max problem *exactly*. For boosted trees, we minimize an upper bound on robust loss. Everything is nice & convex! Paper Code

    Show this thread
  7. Retweeted

    Excited to present a contributed talk at workshop tomorrow about our paper that analyzes overconfident predictions of ReLU networks and suggests a new training scheme to mitigate this. Paper: Code:

  8. Retweeted
    Jun 13, 2019

    How can the community contribute to solutions? We present recommendations for researchers, entrepreneurs, governments, and more in areas spanning mitigation, adaptation, and tools for action.

    Show this thread
  9. Retweeted
    Jun 14, 2019

    Come to our spotlight talk about uniform convergence in deep learning at 9:55am!

  10. Jun 12, 2019

    Today at I'll be presenting our work on Wasserstein adversarial examples. Come listen to the short talk at 12:00 in the Adversarial Examples session or drop by poster 67 in the evening. Paper: Code:

  11. Retweeted
    Jun 11, 2019

    Excited to have received a best paper honorable mention for our paper "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver" (with , Bryan Wilder, and ) at !

    Show this thread
  12. Retweeted
    May 30, 2019

    1/ Integrate logic and deep learning with , a differentiable SAT solver! Paper: Code: Joint work with , Bryan Wilder, and .

    Show this thread
  13. Retweeted
    Apr 3, 2019

    We are now recruiting submissions for the "Climate Change: How Can AI Help?" workshop at . Details at .

  14. Mar 13, 2019

    This adversarial defense introduces a regularizer that increases the distance of each point to the decision boundary. It also leverages released, open source software from other researchers to certify their results: let's all continue releasing usable code!

  15. Retweeted
    Mar 12, 2019

    Our research group is starting a (technical) blog! First post, by , covers provable adversarial defenses. Each post is downloadable as a Jupyter notebook, so you can recreate all the examples. More info about the blog here.

  16. Mar 8, 2019

    Fellow lab member Alnur wrote a great post explaining precisely how early stopping of gradient descent on the least-squares problem is equivalent to the regularization added by ridge regression.

    Poništi
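The equivalence described in that post can be seen numerically: stopping gradient descent on least squares early shrinks the solution much like an explicit ridge penalty with, roughly, lam ~ 1/(eta * t). A small NumPy sketch (problem sizes and names are illustrative, not from the post):

```python
import numpy as np

# Toy least-squares problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)

# Gradient descent on ||Xw - y||^2 / 2, stopped early after t_stop steps.
eta, t_stop = 0.001, 25
w_gd = np.zeros(5)
for _ in range(t_stop):
    w_gd -= eta * X.T @ (X @ w_gd - y)

# Ridge regression with the (roughly) matching penalty lam ~ 1/(eta * t_stop).
lam = 1.0 / (eta * t_stop)
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# For comparison: the unregularized least-squares solution.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both `w_gd` and `w_ridge` are shrunk relative to `w_ls`: in the eigenbasis of X.T @ X, each applies a filter factor strictly below 1 to every component, which is the precise sense of the equivalence.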
  17. Feb 23, 2019

    New paper on Wasserstein adversarial examples, as a step towards considering convex metrics for perturbation regions that capture structure beyond norm-balls. Paper: Code:

  18. Retweeted
    Feb 11, 2019

    1/ I'm excited to share our work on randomized smoothing, a PROVABLE adversarial defense in L2 norm which works on ImageNet! We achieve a *provable* top-1 accuracy of 49% in the face of adversarial perturbations with L2 norm less than 0.5 (=127/255).

    Show this thread
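The smoothed classifier in the retweet above can be sketched in a few lines. This is a simplified Monte-Carlo version with hypothetical names; the actual method uses a rigorous lower confidence bound on the top-class probability rather than the raw empirical frequency used here:

```python
import random
import statistics
from collections import Counter

def smoothed_predict(base_classifier, x, sigma, n=1000, seed=0):
    """Monte-Carlo sketch of randomized smoothing: classify many
    Gaussian-noised copies of x, take the majority vote, and convert
    the top-class frequency p into a certified L2 radius
    sigma * Phi^{-1}(p)."""
    rng = random.Random(seed)
    votes = Counter(
        base_classifier([xi + rng.gauss(0.0, sigma) for xi in x])
        for _ in range(n)
    )
    top_class, top_count = votes.most_common(1)[0]
    # Clamp away from 1.0 so the inverse normal CDF stays finite.
    p_top = min(top_count / n, 1.0 - 1.0 / (2 * n))
    if p_top <= 0.5:
        return top_class, 0.0  # abstain: no certificate
    radius = sigma * statistics.NormalDist().inv_cdf(p_top)
    return top_class, radius
```

The certificate is in L2 norm because Gaussian noise is rotationally symmetric: the more confidently the noisy votes agree, the larger the radius within which the smoothed prediction provably cannot change.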
  19. Retweeted
    Feb 14, 2019

    At the heart of most deep learning generalization bounds (VC, Rademacher, PAC-Bayes) is uniform convergence (u.c.). We argue why u.c. may be unable to provide a complete explanation of generalization, even if we take into account the implicit bias of SGD.

    Show this thread
  20. Dec 8, 2018

    Fellow labmate Priya Donti's work on 'Inverse Optimal Power Flow' was recognized as a highlighted paper at the workshop! Short paper link:

