Liwei Song

@lw_song

PhD student. Working on machine learning security and privacy.

Joined April 2017

Tweets


  1. Retweeted
    Dec 6, 2019

    Can backdoor attacks be successful without using incorrect labels? Yes, you just need to make poisoned inputs harder! Check out our work with and

  2. Retweeted

    Secure & Private Federated Learning: and H. Vincent Poor's project will use data science to look at security, privacy & utility issues in federated learning (a technique allowing computer programs to train from decentralized data) & design them to be more robust.

  3. Retweeted

    Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and how we can push back. Here are my annotated slides:

  4. Retweeted
    Nov 4, 2019

    After more than half a year of work, check out our latest paper Light Commands: Laser-Based Audio Injection on Voice-Controllable Systems. More details at Joint work with Takeshi Sugawara, Benjamin Cyr, Daniel Genkin, and

  5. Retweeted
    Oct 26, 2019

    workshop to be held at Abstract deadline is November 30, 2019.

  6. Retweeted
    Aug 31, 2019

    In case you haven't heard, is a Chinese app which completely blew up since Friday. Best application of 'Deepfake'-style AI facial replacement I've ever seen. Here's an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) 🤯

  7. Retweeted
    Aug 28, 2019

    ML models in academia: robustness, privacy, fairness -- pick one.
    ML models in industry: robustness, privacy, fairness -- pick zero.

  8. Retweeted
    Aug 28, 2019

    Now it turns out that protecting the security of AI/machine learning against adversarial attacks ('stop/traffic sign misclassified as 130', 'pedestrian recognized as a traffic lane', etc.) is invasive and might mean privacy problems.

  9. Retweeted
    Aug 27, 2019

    We analyze the relation between two pillars of trust in ML: robustness and privacy. We show how an attacker can exploit robust models to infer members of their training set. Resolving this remains a great challenge for building trustworthy ML.

  10. Retweeted
    Aug 27, 2019

    To secure models against adversarial examples, defense methods push decision boundaries away from (training) data. This increases the influence of training points on models, making them significantly more vulnerable to privacy attacks that infer training data.

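The membership inference discussed in this thread can be sketched as a simple confidence-threshold attack: guess "member" whenever the model is unusually confident on a point, since models whose decision boundaries hug the training data tend to be more confident on training points. This is a toy illustration, not the paper's actual attack; the function name, threshold, and all confidence values below are hypothetical:

```python
def confidence_membership_attack(confidences, threshold=0.9):
    """Toy membership inference: guess 'member' when the model's
    confidence on its predicted label exceeds a threshold. In real
    attacks the threshold is calibrated (e.g. on shadow models);
    0.9 here is purely illustrative."""
    return [c >= threshold for c in confidences]

# Hypothetical confidences on training points vs. unseen points.
train_confidences = [0.99, 0.96, 0.94]  # members (illustrative)
test_confidences = [0.62, 0.81, 0.73]   # non-members (illustrative)

print(confidence_membership_attack(train_confidences))  # [True, True, True]
print(confidence_membership_attack(test_confidences))   # [False, False, False]
```

The larger the confidence gap between training and test points, the better this attack works, which is why defenses that increase the influence of individual training points tend to leak more membership information.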
  11. Retweeted
    May 28, 2019

    Our paper on privacy vs security of ML, with and . We found that robust ML models tend to be more vulnerable to membership inference attacks!

  12. May 26, 2019

    Our paper about ML security vs privacy is on arXiv. We show that adversarial defenses, although improving robustness against adversarial examples, make models more susceptible to membership inference attacks. Joint work with and

  13. Retweeted
    Dec 27, 2017

    Acoustic Attacks on HDDs Can Sabotage PCs, CCTV Systems, ATMs, More

  14. Retweeted
    Dec 22, 2017

    Acoustic denial-of-service attacks on HDDs: sound waves can disable your hard drives. Our recent study is now available on arXiv: This is a significant vulnerability for many computing devices.

  15. Nov 22, 2017
  16. Retweeted

    KRACK Attack (Demo & Details): Critical Key Reinstallation Attack Against WPA2 Wi-Fi Protocol — by

  17. Sep 7, 2017

    BBC News - 'Dolphin' attacks fool Amazon, Google voice assistants

