Matthew Barnett
@MatthewJBar
I share things. Married to
San Francisco, CA · Joined June 2020

Matthew Barnett’s Tweets

I feel like a lot of people are assuming that LLM scaling over the next 4 years will resemble LLM scaling over the last 4 years, but that seems unlikely to me. GPT-2 was reportedly trained at a cost of $256 per hour. It's much easier to scale up fast if that's where you're starting.
At some point I expect ignoring risks from AI will be like ignoring World War 2 in 1943. It just seems almost inconceivable that humanity would be asleep at the wheel while AI is radically changing the world.
But in fact, people are now starting to "wake up" as AI risk becomes more concrete and less abstract. I expect this process will continue, all the way until AI risk is taken very seriously by almost everyone, at all levels of society and governance.
In that context, it's understandable that many became pessimistic about the prospects of AI alignment. Back then, if you naively extrapolated the status quo, it seemed to indicate that society would never "wake up" to the importance of AI, until perhaps it was too late.
For a long time, one of the most striking facts about AI risk was the contrast between how obviously important the topic was, and the fact that mainstream society didn't seem to care about it.
That's not to say that a violent AI revolution can't happen. I give that scenario some credence. But the default scenario seems more likely to me: Over time, we're going to delegate more tasks and responsibilities to AIs, until eventually humans aren't running the show anymore.
Many AI risk arguments focus on showing that AIs could take control in a sudden, violent takeover. But I think we're already going to be giving AIs control of our civilization by default. We're going to give up the keys voluntarily. A dramatic takeover event isn't necessary.
Do you think the AI alignment problem is more like solving a really hard math problem with a deadline, or more like engineering cars to be safer?
  • Math problem
    39.3%
  • Car safety
    60.7%
1,018 votes · Final results
A big crux for my views on AI risk is that I'm not very worried about AI takeover from an AI that leaks from a lab, as opposed to AI that we cede control to voluntarily. I have yet to hear anyone explain a story that seems plausible to me.
However, now that language models are starting to have a sizable economic impact, it is worth reconsidering the lifetime anchor. The singularity is probably not imminent. But AGI at 10^26-10^29 FLOP (roughly 10-10,000x GPT-4 training) should be taken seriously.
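The multipliers in the 10^26-10^29 FLOP range can be sanity-checked. GPT-4's training compute has never been officially disclosed, so the ~10^25 FLOP figure below is an assumption (a commonly cited rough estimate), used here only to verify that the "roughly 10-10,000x" framing is internally consistent:

```python
# Assumption: GPT-4 training compute ~1e25 FLOP (not officially disclosed;
# a commonly cited rough estimate, used only to check the multipliers).
gpt4_flop = 1e25

for exp in (26, 29):
    multiple = 10.0**exp / gpt4_flop
    print(f"10^{exp} FLOP ≈ {multiple:,.0f}x GPT-4's training compute")
# 10^26 FLOP ≈ 10x GPT-4's training compute
# 10^29 FLOP ≈ 10,000x GPT-4's training compute
```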
Cotra wrote, "If it were possible to train a transformative model with only a few OOMs more FLOP, I would expect some company to have already trained a transformative model, or at least to have trained models that have had massive economic impact..."
The main reason why Cotra put little weight on the lifetime anchor seems to be that it predicted AGI was imminent. That was because even after adding a few OOMs of compute due to algorithmic efficiency, the lifetime anchor forecasted very short timelines.
Artificial neural networks have been scaled up by over 20 orders of magnitude since they were first trained. The fact that we're finally getting very impressive neural networks now is a significant update in favor of using simple biological anchors to forecast AGI, in my opinion.
One could also imagine adding 2-5 extra OOMs to this estimate to adjust for the fact that our algorithms will be less compute efficient than the human brain, which was optimized over hundreds of millions of years to be highly energy efficient.
Imagine someone in 1956 predicting that we'd get human-level AI after we trained an artificial neural network with as much computation as the human brain uses during its first 30 years of life. After 66 years we finally get there, and then ChatGPT comes out the same year.
Joseph Carlsmith estimated that the human brain uses approximately 10^15 FLOP/s. Over 30 years, that's about 10^24 FLOP. Language models exploded in popularity in the last year, timed almost exactly with the release of ML models trained using over 10^24 FLOP.
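The lifetime-compute arithmetic above can be checked directly, assuming the ~10^15 FLOP/s brain estimate and a 365.25-day year:

```python
# Rough check of the lifetime-compute estimate.
# Assumptions: brain ~1e15 FLOP/s (Carlsmith's estimate), 365.25-day years.
brain_flop_per_s = 1e15
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
lifetime_flop = brain_flop_per_s * seconds_per_year * 30

print(f"{lifetime_flop:.2e}")  # prints 9.47e+23, i.e. about 10^24 FLOP
```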
Hanson predicted ems before AI, and that's looking very wrong. But in most other respects, his views were relatively good. He emphasized the importance of compute, and thought that AI takeoff would be collective rather than individual. That's looking correct.
You're offered $300k to do an arithmetic problem that takes 1000 hours to do by hand. The contract explicitly states you must complete the problem by hand, but it's easy to cheat w/o getting caught. You: (1) take job, do problem by hand (2) take job, cheat (3) don't take job
  • (1) take job, don't cheat
    36.3%
  • (2) take job, cheat
    45.5%
  • (3) don't take job
    18.2%
303 votes · Final results
I've argued that many underrate people's fear of AI risk; and that they therefore also underrate the social response to it. But I also think many overrate how interested people will be in some of the (potential) benefits of transformative AI. I think people aren't nearly as…
I'm not sure what it means for AGI to be "upon us", but I think it's telling that the quoted tweet doesn't mention cognitive labor. Five years ago, people frequently cited AI's failures at basic cognitive tasks (e.g. the Winograd schema challenge) as evidence that AGI was far off.
Quote Tweet
To those who think AGI is upon us: 1. why don't we have level-5 autonomous driving? Any 17 year old can learn to drive in 20 hours of training. 2. Why don't we have domestic robots that can clear the dinner table and fill the dishwasher? Any 10 year old can learn to do that in…
For what it's worth, I agree with the people saying that this isn't a good strategy because investing in capital is a better idea right now.
Quote Tweet
Is it worth trying to learn a physical skill that will be in high demand during the potential years between when all cognitive labor is automated and when all labor is automated? If so, what should I learn?
I still think society is likely vastly underrating the plausibility of a dramatic acceleration in economic growth in the coming decades. It's worth taking actions now to capture value from being early to catch this trend.
I kind of wish I had made this argument more forcefully at the time. Oh well.
Quote Tweet
EA messaging has heavily emphasized funding AI safety research, with the idea being that such work is currently vastly undervalued by society. By the same argument, shouldn't we also emphasize investments in semiconductor stocks? I have seen much less of such advice.
These are three wildly different levels of "smart" that an entity can be, and lumping them together is confusing. There's a gigantic range from "1 IQ point above John von Neumann" to "a maximally smart brain the size of Jupiter".
I suspect the word "superintelligence" should not be used without clarification. I've seen it used to refer to:
1. An entity that's smarter than the smartest human
2. An entity that's vastly smarter than all of humanity
3. An entity that's as smart as physically possible
1. Do you believe in AI foom? (I provided a definition of AI foom in the next tweet.) 2. Are you familiar with arguments against AI foom made by people who think AI will become vastly smarter than humans later this century?
  • Yes || Yes
    39%
  • Yes || No
    12.7%
  • No || Yes
    36.6%
  • No || No
    11.7%
213 votes · Final results
Define AI foom as the thesis that at some point, a unified agentic AI will grow to become vastly more powerful than the rest of the world combined, and the world either will not notice this happening until it's too late, or it will happen so fast that it cannot be stopped.