Search results
  1. 29 Dec 2019

    If you're a student of and want to become a force for , learn: - AI - AI - AI - AI - AI defenses IMO - most folks still don't know how these fields work Be the change ( in replies 👇)

    Show this thread
  2. 19 Feb 2017
    Replying to

    No one is arguing about a or press. It's the general lack of objectivity.

  3. 27 Aug 2019

    Do neural networks learn what we think they learn? reviews research that suggests that they often instead fall prey to the so-called Clever Hans effect and discusses its implications for NLP.

  4. 7 Sep 2018

    Announcing the 1st targeted physical attack on Faster R-CNN object detectors. Great collaboration: Cory Cornelius, Jason Martin. ’18 paper: Code:

  5. 7 hours ago
  6. A really funny example of the possible security issues in everyday deployed ML systems. Should such an attack have been anticipated by such a widely used best-route recommender model?

  7. 4 Feb

    Just released a plugin for ! It includes the attack proposed in . Happy adversarial time! 😎

  8. 4 Feb

    Hello! The reason for the long silence on Twitter is my new training: "Securing Your AI and Machine Learning Systems" 😎

  9. 3 Feb

    Good adversarial examples: inputs that cause a machine learning model to make a false prediction.
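
    The tweet above gives the standard definition of an adversarial example. As a minimal illustrative sketch (not from any of the linked posts), here is the fast gradient sign method (FGSM) applied to a toy logistic model; the weights, input, and epsilon below are made-up values:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm(x, w, y, eps):
        """FGSM on a logistic model p = sigmoid(w.x).
        The cross-entropy loss gradient w.r.t. the input x is (p - y) * w;
        the attack steps in the sign of that gradient."""
        p = sigmoid(w @ x)
        grad = (p - y) * w
        return x + eps * np.sign(grad)

    w = np.array([2.0, -3.0, 1.0])   # toy model weights (assumed)
    x = np.array([1.0, 0.0, 0.0])    # clean input, classified as class 1
    x_adv = fgsm(x, w, y=1.0, eps=0.5)

    print(sigmoid(w @ x) > 0.5)      # clean prediction: class 1
    print(sigmoid(w @ x_adv) > 0.5)  # adversarial prediction: flipped to class 0
    ```

    The same one-line gradient-sign step, with the gradient supplied by autodiff instead of the closed form above, is what libraries such as those mentioned in these tweets implement for deep networks.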

  10. 3 Feb

    vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift. (arXiv:2001.11821v1 [])

  11. 30 Jan

    - You know what’d be ? - What? - If you dropped the tone and were, like, nice. - . Or not. - Free- aggro? Something I’ve done? - It's my new spirit guide, . She has a message for you: . Mean anything?

  12. 27 Jan

    Her stance was . The great dragon craned her neck, her blacksteel chains rattling, an air of superiority radiating from her. Vahti stood motionless before Xinthir, her crimson scales shining in the sun. He fell to his knees, understanding his folly.

  13. 24 Oct 2019
  14. 14 Aug 2019

    At , today presented a super interesting paper on understanding the transferability of adversarial attacks. Awesome findings and fundamental research! They also released the secML library; can’t wait to try it!

  15. Is safe? Learn more about attacks and security for in a new blog post in collaboration with AI for People, and others:

  16. 17 May 2019

    "More and more people will try to manipulate systems not by breaking in but by fooling them."

    Show this thread
  17. 13 May 2019

    Our paper "Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks" has been accepted for publication in ACM Computing Surveys. Preprint

  18. 23 Apr 2019
  19. 8 Apr 2019

    Engineers develop to trick systems. A project to test and improve deep-learning algorithms for enhanced . New techniques developed by engineers can make objects "invisible" to image detection systems.
