Max Tegmark
@tegmark
Known as Mad Max for my unorthodox ideas and passion for adventure, my scientific interests range from artificial intelligence to the ultimate nature of reality
Science & Technology · MIT · space.mit.edu/home/tegmark/ · Joined May 2014

Max Tegmark’s Tweets

Good post by Dr Buolamwini. I agree strongly re: the opportunities for strategic cooperation between groups who don't see all aspects of the risks the same way; there are many concrete interventions that make the world safer across these concerns. 1/2
Quote Tweet
ICYMI, this morning Tawana Petty spoke alongside Professors Yoshua Bengio (Mila) and Max Tegmark on @democracynow about AI threats. I have immense respect for Dr. Bengio who put his reputation on the line to defend early AI bias research from corporate attack. I do not view…
Another inspiring example of how to discuss AI disagreements in a constructive way:
Quote Tweet
Had an insightful conversation with @geoffreyhinton about AI and catastrophic risks. Two thoughts we want to share: (i) It's important that AI scientists reach consensus on risks, similar to climate scientists, who have rough consensus on climate change, to shape good policy.…
[Embedded video, 6:10]
Dagan Shani has IMHO made the most important film of the year - about the harsh #AI truth - see it right here on Twitter:
Quote Tweet
Don't Look Up - The Documentary: The Case For AI As An Existential Threat.
[Embedded video, 17:15]
The past 6 months: “Of course, we won’t give the AI internet access” Microsoft Bing: 🤪 “Of course, we’ll keep it in a box” Facebook: 😜 “Of course, we won’t build autonomous weapons” Palantir: 😚 “Of course, we’ll coordinate and…
Thanks both for demonstrating how to handle intellectual disagreements professionally!
Quote Tweet
Had a great conversation with Yoshua Bengio. Both of us agreed that a good step forward for AI risk is to articulate the concrete scenarios where AI can lead to significant harm. More to come, and looking forward to continuing the conversation!
[Embedded video, 0:48]
It still amazes me that Yann and Andrew Ng can confidently predict that human-like/level AI is too far off to worry about... and act like people who disagree are the ones who are overconfident in their beliefs.
Quote Tweet
Replying to @ESYudkowsky and @erikbryn
My entire career has been focused on figuring out what's missing from AI systems to reach human-like intelligence. I tell you, we're not there yet. If you want to know what's missing, just listen to one of my talks of the last 7 or 8 years, preferably a recent one like this:…
Very happy that Tawana Petty, Yoshua Bengio and I all agreed that mitigating #AI extinction and mitigating ongoing threats to marginalized groups are *not* distractions from one another, but all worthy goals that we can and should pursue together:
How to lose an argument quickly by making a really weak argument. No, we didn’t and shouldn’t open-source nuclear weapons technology…
Quote Tweet
Please watch this unedited clip from today's AI safety debate between @NPCollapse and @JosephJacks_:
[Embedded video, 3:48]
"Don't regulate AI – just trust the companies!" Does he also support abolishing the FDA and letting biotech companies sell whatever meds they want without FDA approval, because biotech is too complicated for policymakers to understand?
Quote Tweet
WATCH: Former Google CEO @ericschmidt tells #MTP Reports the companies developing AI should be the ones to establish industry guardrails — not policy makers. “There’s no way a non-industry person can understand what’s possible.”
[Embedded video, 1:01]
This IMHO captures the magnitude of what's happening in #AI better than most tech pundits with their financial conflicts of interest – and most policymakers and corporate lobbyists...
Quote Tweet
And now, the collected wisdom of Snoop Dogg on AI and existential risk: "Like, what the f**k?"
[Embedded video, 0:40]
Here's a great new AI paper generalizing the "grandmother neuron" to k>1 neurons that collectively encode input features:
Quote Tweet
Neural nets are often thought of as feature extractors. But what features are neurons in LLMs actually extracting? In our new paper, we leverage sparse probing to find out arxiv.org/abs/2305.01610. A 🧵:
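The sparse-probing idea the quoted tweet describes can be loosely illustrated with a toy sketch. This is my own illustration of the general technique (find a small set of k neurons whose activations encode a feature, then probe using only those neurons), not the paper's actual method; the synthetic data, planted neuron indices, and univariate scoring rule are all made up for the example.

```python
import numpy as np

# Toy k-sparse probing sketch (illustrative only, not the paper's method):
# given activations X (n_samples x n_neurons) and a binary feature y,
# find the k neurons that best separate the feature, then probe with only them.
rng = np.random.default_rng(0)
n, d, k = 500, 64, 3
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
# plant the feature in 3 "grandmother-like" neurons (hypothetical indices)
X[:, [5, 17, 40]] += 2.0 * y[:, None]

# score each neuron by class-mean separation (a simple univariate score)
mu1, mu0 = X[y == 1].mean(0), X[y == 0].mean(0)
scores = np.abs(mu1 - mu0) / (X.std(0) + 1e-9)
topk = np.argsort(scores)[-k:]  # indices of the k most informative neurons

# k-sparse linear probe: classify using only those k neurons
w = (mu1 - mu0)[topk]
b = -0.5 * (mu1 + mu0)[topk] @ w
pred = (X[:, topk] @ w + b > 0).astype(int)
acc = (pred == y).mean()
print(sorted(topk.tolist()), round(float(acc), 2))
```

With the feature planted this strongly, the score ranking recovers the three planted neurons and the 3-neuron probe classifies the feature well; real sparse probing works on LLM activations with learned sparse probes rather than this univariate heuristic.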
Yup!
Quote Tweet
Dr. @PaulFChristiano, inventor of the RLHF that @OpenAI uses, says: * Running a GPT model in a novel scenario might cause incredible harm * It may disempower humans in the next 5 years Paul defies every ad-hominem attack. His credentials, character, & competence are impeccable.
[Embedded video, 2:05]
Let's make #AI like biotech, where companies must demonstrate safety, rather than the civilian nuclear industry, where poor safety standards gave us Three Mile Island, Chernobyl, Fukushima and a backlash that crushed the industry:
Unregulated general-purpose #AI would IMHO be even dumber than eliminating all seat belts, traffic lights and speed limits.
Quote Tweet
We're thrilled to see Members of the @Europarl_EN respond to FLI's open letter! A group of 12 MEPs led by @IoanDragosT and @brandobenifei are calling for: - Tailored rules for foundation models in the EU AI Act - A high level global summit on AI - Democratic oversight and… twitter.com/IoanDragosT/st…
The best video I've seen about how #AGI can kill democracy. If you're skeptical of existential threat talk, rest assured that this film does *not* focus on it.
Quote Tweet
Today we’re releasing “The A.I. Dilemma” – a new talk @aza and I gave on 3/9, a week before GPT4 launched. *Pls share it widely.* It's critical for institutions to understand how the race between AI labs is accelerating the likelihood of catastrophe: youtube.com/watch?v=xoVJKj