How will advances in computing transform human society? Tell us your ideas, aspirations and vision for what you think the future holds. Open to all.
Prize details, resources and how to apply: computing.mit.edu/futurecomputin
Aleksander Madry
@aleks_madry
MIT faculty. Working on making machine learning better understood and more reliable. Thinking about the impact of machine learning on society too.
Aleksander Madry’s Tweets
What role do social media platforms play in our lives? Should we (or should we not) regulate them?
I'm really excited to be releasing a series of blog posts (aipolicy.substack.com/p/socialmedias) on regulating social media, written with @cen_sarah and @andrew_ilyas. Let us know your thoughts!
Quote Tweet
Recent events (ahem) have brought the debate on whether/how to regulate social media back to the forefront.
My students @cen_sarah @andrew_ilyas and I have been thinking about this for a *while*. Excited to share the first results of our thinking: aipolicy.substack.com/p/socialmedias (1/3)
PS: If you're short on time, convinced the problem is important, and just looking for an overview of the space, we'd recommend reading our second post: aipolicy.substack.com/p/socialmedia2. (4/3)
As always in this space, our thinking is still evolving, so please read and let us know what you think—and stay tuned for more on this subject (including some actual proposals!) (3/3)
The first four posts are out today, and try to explain why the debate on regulating social media is so important, so complex, and so unique.
For now, we aim to pinpoint where the problems are + potential ways forward, and will focus on the solutions later. (2/3)
Recent events (ahem) have brought the debate on whether/how to regulate social media back to the forefront.
My students and I have been thinking about this for a *while*. Excited to share the first results of our thinking: aipolicy.substack.com/p/socialmedias (1/3)
Stable Diffusion can visualize + improve model failure modes!
Leveraging our method, we can generate examples of hard subpopulations, which can then be used for targeted data augmentation to improve reliability.
Blog: gradientscience.org/failure-direct
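One way to picture the pipeline sketched above (my own toy illustration, not the method's actual code — `generate_images` is a hypothetical stand-in for a text-to-image call): describe a hard subpopulation in text, synthesize extra examples for it, and fold them into training.

```python
def generate_images(prompt, n):
    """Hypothetical stand-in for a text-to-image model such as Stable Diffusion."""
    return [f"<image: {prompt} #{i}>" for i in range(n)]

# Suppose failure-mode analysis surfaced subpopulations the classifier gets
# wrong, each summarized by a caption-like description.
hard_subpopulations = {
    "dog": "a dog swimming in water",
    "plane": "a plane photographed from directly below",
}

# Targeted data augmentation: generate extra (label, image) pairs only for
# the hard subpopulations, then add them to the training set.
augmented = [
    (label, img)
    for label, prompt in hard_subpopulations.items()
    for img in generate_images(prompt, n=3)
]
print(len(augmented))  # 6 synthetic (label, image) pairs
```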
Our #NeurIPS2022 poster on in-context learning will be tomorrow (Thursday) at 4pm! Come talk to us at poster #928 🔥
Quote Tweet
LLMs can do in-context learning, but are they "learning" new tasks or just retrieving ones seen during training? w/ @shivamg_13, @percyliang, & Greg Valiant we study a simpler Q:
Can we train Transformers to learn simple function classes in-context?
arxiv.org/abs/2208.01066
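The setup in the quoted thread can be illustrated with a toy sketch (my own illustration, not the paper's code): draw a fresh random linear function for each prompt, so a model trained over many such prompts must infer the function from the in-context (x, y) pairs rather than memorize it.

```python
import numpy as np

def sample_linear_prompt(d=5, n_points=10, rng=None):
    """Build one in-context learning prompt for a random linear function."""
    rng = rng or np.random.default_rng()
    w = rng.standard_normal(d)              # the "task": f(x) = <w, x>
    xs = rng.standard_normal((n_points, d))
    ys = xs @ w                             # labels for the in-context examples
    return xs, ys, w

xs, ys, w = sample_linear_prompt(d=5, n_points=10, rng=np.random.default_rng(0))
# A Transformer would be fed the interleaved sequence x1, y1, ..., xn and
# trained to predict yn. Least squares on the first n-1 pairs is the natural
# baseline: with 9 noiseless points in 5 dimensions it recovers w exactly.
w_hat, *_ = np.linalg.lstsq(xs[:-1], ys[:-1], rcond=None)
print(np.allclose(w_hat, w))
```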
At #NeurIPS2022? Come talk to us about *3DB*:
nips.cc/virtual/2022/p
- Tuesday 11am-1pm: Hall J Poster#1042
- Wednesday 2-2:30pm: 's booth
I will be there, and we would love to chat with you!
ModelDiff finds subpopulations on which any two models behave differently (even if they don’t *predict* differently!). We can use these subpops to uncover the biases induced by common design choices.
See our paper (arxiv.org/abs/2211.12491) & code (github.com/MadryLab/model)! (8/8)
From these subpopulations, we can infer a salient feature that distinguishes the behavior of the two models, which we *counterfactually verify*.
Ex: Waterbirds models trained from scratch are more sensitive to the color yellow than models pre-trained on ImageNet. (7/8)
By analyzing these residual datamodels in aggregate (with PCA), we can automatically extract “distinguishing subpopulations,” clusters of inputs on which the two models behave differently. (6/8)
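The aggregation step above might look roughly like this (a toy sketch under my own assumptions, not the paper's implementation): stack the residual embeddings into a matrix, take the top principal direction, and treat the test inputs with extreme projections as candidate distinguishing subpopulations.

```python
import numpy as np

def distinguishing_subpopulations(residuals, k=5):
    """Toy PCA sketch over residual datamodel embeddings.

    residuals: (n_test, n_train) matrix, one residual embedding per test input.
    Returns indices of the k inputs with the largest and smallest projections
    onto the top principal direction -- candidate clusters on which the two
    learning algorithms behave most differently.
    """
    centered = residuals - residuals.mean(axis=0)
    # Top right-singular vector of the centered matrix = top PCA direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]
    order = np.argsort(scores)
    return order[-k:], order[:k]

rng = np.random.default_rng(0)
residuals = rng.standard_normal((200, 50))
top, bottom = distinguishing_subpopulations(residuals, k=5)
print(len(top), len(bottom))  # 5 5
```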
Given two datamodels for X (one for each alg), we isolate the “direction in training set space” that influences one alg but not the other by computing a “residual datamodel,” projecting away the common direction. (5/8)
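In numpy terms, the projection described above might be sketched like this (my own simplified reading — the actual construction is in the paper): remove from alg A's influence vector the component explained by alg B's.

```python
import numpy as np

def residual_embedding(theta_a, theta_b):
    """Remove from alg A's datamodel embedding the part explained by alg B's.

    theta_a, theta_b: (n_train,) influence vectors for the same test example
    under the two learning algorithms. What remains captures training-set
    directions that matter for A but not for B.
    """
    b_hat = theta_b / np.linalg.norm(theta_b)
    return theta_a - (theta_a @ b_hat) * b_hat  # orthogonal projection

rng = np.random.default_rng(0)
theta_b = rng.standard_normal(100)
theta_a = 0.7 * theta_b + 0.3 * rng.standard_normal(100)  # shared + distinct parts
r = residual_embedding(theta_a, theta_b)
print(abs(r @ theta_b) < 1e-9)  # residual is orthogonal to B's direction
```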
Key idea behind ModelDiff: represent datapoints with their *datamodel embeddings* (from our recent work arxiv.org/abs/2202.00622).
For a fixed alg, embedding of an example X = how each training ex impacts model predictions on X. This allows direct comparison across algs! (4/8)
How would we normally do this? For NNs, we might try to pass the test set through each model and compare the resulting “feature embeddings.”
But: aligning diff. archs’ embedding spaces isn't easy. And if our model is, e.g., a decision tree, not clear what to use at all! (3/8)
What do we mean by "compare" exactly? We're interested in finding *features* used by one learning alg but not the other (e.g., maybe one alg induces more background-reliance).
Crucially, we want to do this without any prior hypotheses about how the algs differ. (2/8)
You’re deploying an ML system, choosing between two models trained w/ diff algs. Same training data, same acc... how do you differentiate their behavior?
ModelDiff (gradientscience.org/modeldiff) lets you compare *any* two learning algs! (1/8)
The Media Lab has extended the deadline for this joint, tenure-track faculty search for an assistant or associate professor without tenure in the area of AI and Human Experience. Applications are now due 12/8. Please share with your networks! media.mit.edu/posts/assistan
Slides from my webinar "faculty jobs at MIT EECS & beyond: why and how to get them"
Connect computing across disciplines: 6 searches are underway to recruit new faculty in shared positions between MIT departments and the Schwarzman College of Computing. Deadlines are fast approaching! Links to all job postings: computing.mit.edu/faculty-openin
I'll host a webinar: Faculty jobs and applications at MIT EECS and beyond.
Webinar and Q&A
Monday, Nov 21, 2022 01:00 PM Eastern Time
mit.zoom.us/j/92196344368
With Prof. Fredo Durand and Prof. Mina Konakovic Lukovic
Congratulations to Connor Coley and Dylan Hadfield-Menell for being selected as part of the first cohort of AI2050 Early Career Fellows to work on solving hard problems in AI through interdisciplinary research.
Very excited that our latest work is featured! We demonstrate the feasibility of *immunizing* photos against manipulation by #StableDiffusion.
Blog post: gradientscience.org/photoguard
Code: github.com/MadryLab/photo
The MIT Department of Architecture seeks candidates for a tenure-track faculty position in Data and Design, dedicated to design research efforts that leverage information, creative computation, and digital tools at many scales. Learn more and apply: architecture.mit.edu/jobs#assistant
A simple and useful proof that differential privacy induces robustness.
Quote Tweet
Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation ift.tt/Cl1429D
Last week, the question was asked:
How can we safeguard against AI-powered photo editing for misinformation?
MIT students hacked a way to "immunize" photos against edits: gradientscience.org/photoguard/
This works for other edits too (although, for now, might be specific to the photo-editing engine we had on our hands)! Check out our blog post gradientscience.org/photoguard/ for more examples and more details. And stay tuned for the paper! (8/8)
However, again, had this selfie been “immunized”, this would not have been possible! Indeed, images generated from an immunized version of Hadi’s photo with Trevor are totally unrealistic. (7/8)
And it is not only about Trevor’s and Michael’s photo. In fact, Hadi, the lead student on this project, has a selfie with Trevor too. Now, Hadi is attempting to “deepen” his (imaginary) friendship with Trevor by manipulating this selfie (and he succeeds!) (6/8)
After such “immunization”, the same edit of this photo looks much worse.
So, Trevor could have applied such “immunization” to his photo before posting it to protect it against this kind of malicious edit. (5/8)
Could Trevor have done anything to prevent this? My students spent an enjoyable weekend hacking together a potential answer: adding small (imperceptible) noise to the original photo can make it “immune” to such edits! (4/8)
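The general idea above can be caricatured in a few lines (a toy sketch, *not* the actual PhotoGuard method — a linear map stands in for the editing model's nonlinear encoder): search for a small, bounded perturbation that pushes the image's embedding away from its original value, so the editor no longer "sees" the photo correctly.

```python
import numpy as np

def immunize(image, encoder, eps=0.05, steps=50, lr=0.01, rng=None):
    """Toy projected-sign-gradient sketch of photo "immunization".

    Maximizes ||E(x + delta) - E(x)||^2 while keeping delta inside an
    L-infinity ball of radius eps, so the perturbation stays imperceptible.
    """
    rng = rng or np.random.default_rng()
    delta = rng.uniform(-eps, eps, size=image.shape) * 0.1  # small random start
    for _ in range(steps):
        # For a linear encoder E, gradient of ||E delta||^2 w.r.t. delta
        # is 2 E^T E delta; step in its sign and project back into the box.
        grad = 2 * encoder.T @ (encoder @ delta)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return image + delta

rng = np.random.default_rng(0)
image = rng.random(64)                    # flattened toy "photo"
encoder = rng.standard_normal((16, 64))   # stand-in for the editor's encoder
immunized = immunize(image, encoder, rng=rng)
print(np.max(np.abs(immunized - image)) <= 0.05)  # perturbation stays bounded
```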
Using cutting-edge image generation models like #dalle2 and #stablediffusion, someone can easily manipulate the above photo to get this (fake) one: (3/8)
Last week, a (v. important) Q was asked: how can we safeguard against AI-powered photo editing for misinformation? youtu.be/Ba_C-C6UwlI?t=
My students hacked a way to "immunize" photos against edits: gradientscience.org/photoguard/ (1/8)
Please share with your networks! The Media Lab has opened a joint, tenure-track faculty search for an assistant or associate professor without tenure in the area of AI and Human Experience. Applications are due 11/15. media.mit.edu/posts/assistan
A lot of PhD advising is just prompt engineering
Quote Tweet
If you think prompt engineering is bad now, just wait until large speech models:
"For some reason, when Lucia reads the prompt we get 10% higher accuracy"
"Have you tried singing the prompt?"
"Speaking Slowly Improves Chain of Thought Prompting"
Great opportunity for a faculty position in the area of "Health of the Planet." We are looking for candidates for a joint position between the Department of Materials Science and Engineering (DMSE) and the Schwarzman College of Computing (SCC):