Peter Hartree’s Tweets

Pinned Tweet
"Schmitt thought that liberalism had an overly benign, naive view of human affairs, overestimating the extent to which government could function based on rules and procedures alone, or politics on the basis of reasoned discussion. Life was too unruly for that."
* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves
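The supply-chain attack sketched above can be mitigated by auditing LLM-suggested imports against a set of known-legitimate package names before installing anything. A minimal sketch, under assumptions: the `audit_imports` helper and the package name `fastjsonix` are hypothetical illustrations, not real libraries.

```python
def audit_imports(suggested, known_packages):
    """Split LLM-suggested import names into (resolvable, hallucinated).

    `known_packages` is a set of package names known to exist on the
    registry (e.g. fetched from a package index ahead of time).
    Anything not in that set should be treated as a potential
    squatting target, not auto-installed.
    """
    resolvable = [name for name in suggested if name in known_packages]
    hallucinated = [name for name in suggested if name not in known_packages]
    return resolvable, hallucinated


# "fastjsonix" is a made-up example of a hallucinated import name.
ok, suspect = audit_imports(
    ["requests", "fastjsonix"],
    known_packages={"requests", "numpy", "pandas"},
)
# ok == ["requests"], suspect == ["fastjsonix"]
```

In practice you would check candidate names against the live index and pin hashes of vetted releases, rather than trusting any name an LLM emits.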
First shocking adult realisation: "everybody's winging it all the time!" #2: "they are as dumb as me!" #3: "it still works, somehow?" #4: "the people who *think* they know what they are doing are really dangerous..."
Quote Tweet
One disappointing thing you discover about the Adult World is that the minimum competence level of professionals – doctors, lawyers – is much lower than you would hope. Not a little bit lower, much lower.
Quote Tweet
“Very recently, I realised that maybe the digital intelligences are actually learning *better* than the brain. With 1% of the storage capacity, GPT-4 knows thousands of times more than us. That strongly suggests it’s got a better way of getting information into the connections.”
It’s like watching a baby become a toddler. But it takes an hour instead of 1-2 years.
Quote Tweet
A 4-legged robot capable of learning how to walk directly in the real physical world, without relying on simulations. It created a model of the world around it, and was able to plan its actions and learn how to walk in just one hour. — Danijar Hafner
The government is looking very carefully at this. Last week I stressed to AI companies the importance of putting guardrails in place so development is safe and secure. But we need to work together. That's why I raised it at the and will do so again when I visit the US.
Quote Tweet
We’ve released a statement on the risk of extinction from AI. Signatories include:
- Three Turing Award winners
- Authors of the standard textbooks on AI/DL/RL
- CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
- Many more
safe.ai/statement-on-a
It is unprecedented that a CEO would say “the activities of my company might kill everyone”. Yesterday, *three* CEOs said this. Google owns DeepMind and invested in Anthropic; OpenAI is partnered with Microsoft. These are some of the most respected & valuable companies in the world.
Quote Tweet
The CEOs of the top 3 AI labs just endorsed this message: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Yes—they endorsed the phrase: “risk of extinction”. So did many others.
The bad news: this is evidence that the “AI systems might kill us all” story is real. The good news: admitting that you have a problem is the first step towards recovery.
👍
Quote Tweet
One consideration in explaining Parfit’s influence: he came up with a way of doing moral philosophy that was accessible to philosophy graduate students in their early 20s who needed to write a doctoral thesis. Williams’ approach, by contrast, could not be so easily imitated.
This is one of the most viscerally disturbing art pieces I've ever seen.
Quote Tweet
Section two of Allen Ginsberg's Howl performed by photographic lumps of flesh. You have to see this. Moloch! Moloch! Our data whose body is a prison of light! Moloch! Mind pulsing to life! Moloch! Child of man! Moloch! Profane cathedrals of data burning in the night!
This guy is donating his body to science. His work, his money and his skin in the game might yield extremely valuable medical breakthroughs. If it does, the benefits to others will dwarf any benefit he personally enjoys. Hero!
Quote Tweet
Bryan Johnson, the 45-year-old founder of Braintree Payment Solutions (which also owned Venmo), sold his companies to PayPal in 2013 for $800 million. Johnson just revealed that he spends $2 million per year to retain youth, and that he uses his teenage son as what he calls his ‘blood boy.’
"Selective accelerationism" is probably easier to grok. So, according to me: s/acc > e/acc. Up for a rebrand?
Quote Tweet
Differential accelerationism:
1. Go slow when the downside risk is millions of deaths (or worse).
2. Go fast when it isn't.
The world can get so much better if we really go for it on both (1) and (2).
This is one of the funniest podcast moments I know of. Tyler Cowen asks Sam Altman about YIMBY stuff, Sam says "I haven't thought about that because we're about to build God", and Cowen just ploughs through and asks him about Chattanooga's land use policy.
Robin Hanson argues against FOOM, but he also predicts explosive economic growth. It seems like he broadly shares the model above, but thinks that (1) won't happen soon, and that bottlenecks will slow things down for a while. Is that right?
Quote Tweet
Bonus: the first part of @robinhanson’s attempt to be reassuring made me smile. My summary: “Don’t worry dear, the big bad AI monster isn’t going to get you. It’s just going to cause the world economy to double every few months and lead to a world dominated by digital minds.”
I'm doubtful about (3): it's hard to imagine concrete stories. In the world of atoms, superhuman robots are on the way. Superintelligent systems will rediscover tacit knowledge. Coordination between AIs will be fast. But the world economy is extremely complex, so... maybe...?
3. There are bottlenecks to GDP growth that wildly superintelligent AI systems will not overcome. Claims (1) and (2) are looking less and less plausible. But: coordination & regulation might make (1) true, at least for a while.
Common claims from those who *disagree* with the above:
1. AI systems will *not* become capable of human-level AI research.
2. AI systems will *not* become wildly superintelligent even after a huge research effort by millions of AI researchers.