Part I - Attack Surface

The report details the different kinds of attacks that are possible:
1) When a model is built
2) When the model is deployed
3) When the output is served

My jottings in the image. 1/ pic.twitter.com/eqmIgPAz2R
One of the interesting themes of this section is how many traditional vulns can tumble ML systems. It notes that the components in the training and deployment environments are all built on top of "traditional" network IT. Security of ML systems begins with basic software security. 2/
For instance, the report brings attention to reporting by @campuscodi on malicious libraries found in PyPI (here is a newer piece by Catalin: https://www.zdnet.com/article/malicious-python-libraries-targeting-linux-servers-removed-from-pypi/)
That's a problem when Python is the lingua franca of ML engineers: https://www.zdnet.com/article/github-tops-40-million-developers-as-python-data-science-machine-learning-popularity-surges/ 3/
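On the PyPI supply-chain point, one basic mitigation is pip's hash-checking mode, where every dependency is pinned to an exact version and digest so a tampered or typosquatted package fails the install. A minimal sketch (the package, version, and digest below are placeholders, not vetted values):

```
# requirements.txt -- install with: pip install --require-hashes -r requirements.txt
# The sha256 value is a placeholder; use the digest of the wheel you actually vetted.
numpy==1.18.1 \
    --hash=sha256:<digest-of-the-vetted-wheel>
```

With `--require-hashes`, pip refuses to install anything whose archive digest does not match, including transitive dependencies that lack a pinned hash.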
It is also a stark reminder of how basic security hygiene is missing from ML conversations. For instance, @moyix's work showed that a popular model hosted in the Caffe Model Zoo had a mismatch in its SHA-1 hash, and that 22 models had no digests at all.
https://arxiv.org/pdf/1708.06733.pdf 4/
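That hash-mismatch finding is easy to guard against on the consumer side: recompute the digest before loading the weights. A minimal sketch, assuming the expected digest is published alongside the model (the paths and digests here are illustrative, not from the Model Zoo):

```python
import hashlib


def file_digest(path: str, algo: str = "sha1") -> str:
    """Compute the hex digest of a file, reading in chunks to handle large weights."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected: str) -> bool:
    """Return True only if the model file matches its published digest."""
    return file_digest(path) == expected.lower()
```

A mismatch should abort the load outright, not just log a warning, since a wrong digest is exactly the trojaned-model scenario the BadNets paper describes.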
Side note: even if you don't use ML, malicious Python libraries hurt vanilla security analysts too.
@JohnLaTwC's "Githubification" post https://medium.com/@johnlatwc/the-githubification-of-infosec-afbdbfaad1d1 shows how threat hunters like @Cyb3rWard0g are increasingly using Jupyter notebooks for hunting. 5/
IMO, a zero day against matplotlib is going to be 10x more of a scramble than Spectre/Meltdown. At least Meltdown was localized to CPUs. How many orgs have a detailed inventory of the ML systems in their org, spanning cloud, federated learning, and ML on the edge? 6/
Part II: Adversarial ML and Impact on National Security

The report details how ML is currently used in national security (FRT, riot control, crisis prediction, recon, intelligence gathering), along with more interesting observations like ML countermeasures. 7/
For non-US, non-China states, there is a huge NatSec challenge: think of the global supply chain for hardware and software in general, but, as the report puts it, "every other state might depend on US/China for powering their militaries." 8/
(FYI -- this is not without precedent. At the height of the trade war with China, the US tried to curb ML software exports, as @CadeMetz reported: https://www.nytimes.com/2019/01/01/technology/artificial-intelligence-export-restrictions.html
One of the proposed bans was on deep learning. Let that sink in.) 9/
The report also highlights how, because ML systems are so interconnected with human analysts, attacking the ML system will have a "cascading effect" on policy wherever those systems are deployed. 10/
Finally, ML makes detection and attribution of attacks harder. In a simple case, whom do you attribute blame to when your autonomous vehicle crashes because of an errant adversarial example? 11/
Part III: NatSec Follow-ups

Here are some follow-ups if you are interested in this:
1) @Gregory_C_Allen's AI and National Security is essential reading - https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
2) China's AI Investment report by @CSETGeorgetown - https://cset.georgetown.edu/wp-content/uploads/CSET_China_Access_To_Foreign_AI_Technology.pdf 12/
3) The AI Index report - https://hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf (I think @jackclarkSF is doing a webinar if you don't want to read the report)
4) @Miles_Brundage's mammoth and awesome Malicious AI report - https://maliciousaireport.com/ 13/
5) Finally, if you are at @RSAConference, @BetsOnTech (who is acknowledged in the report), @drhyrum, @CristinGoodwin, and I will be talking about the legal and policy implications of adversarial ML. 14/