Liron Shapira
@liron
Rationalist, entrepreneur, angel investor, AI doom pointer-outer
Science & Technology · bloatedmvp.com · Joined July 2008

Liron Shapira’s Tweets

Quote Tweet
Because it's awkward, not many people are revealing publicly their "p(doom)", i.e. their estimate of the chances that humanity goes extinct, but you'd be surprised by how many AI scientists in top AGI labs have stated probabilities of extinction greater than 20% and sometimes…
The Web3 emperor-has-no-clothes moment was 1 year ago today: twitter.com/liron/status/1 It's now common knowledge that he makes logically-flimsy claims with immediately catastrophic consequences. Do NOT trust his AI safety claims.
Quote Tweet
Today @pmarca was asked by @tylercowen to explain a Web3 use case. I clipped this gem from 28:08 of Conversations With Tyler. Highly recommended...
Pitch was that real intelligence comes from the Ligence region of France; OpenAI is just sparkling statistics.
Quote Tweet
France's Mistral AI blows in with a $113M seed round at a $260M valuation to take on OpenAI tcrn.ch/3N2P4iB by @ingridlunden
Lots of people talk a big talk about building tools to accelerate AI alignment research, but these folks have been successfully doing just that for years. Dunno much about their other projects, but I’m very happy with how they run alignmentforum.org & lesswrong. Donated!
Quote Tweet
Lightcone Infrastructure is looking for funding. We run LessWrong.com and the AI Alignment Forum, wrote a lot of the code behind the EA Forum, and recently built a lot of in-person infrastructure for people working on reducing existential risk lesswrong.com/posts/9iDw6ugM 🧵
A high-stakes game of blind shuffleboard is being played by people foolish enough to think they can see.
Quote Tweet
I can get the “open source models because we’re not near the danger point” POV But be aware that the danger point is *not once we have wild superintelligence* The danger point is *once LLMs are reliable* Once this happens, AutoGPT AI agents actually work & scour the internet
Marc has many good qualities, but there’s a large gap between the “thought leader” reputation he’s sometimes known for and his actual pattern of argumentation and behavior.
Quote Tweet
Unfortunate! But in all seriousness, like I said in the post, Marc is usually thoughtful and interesting. I tried to be polite and focus on the substance. Hope he gets a chance to read and consider the arguments.
And here's a reckless object-level claim:
Quote Tweet
Great news from @martin_casado and @pmarca of @a16z: Arbitrarily powerful technologies don't change the equilibrium of good & bad forces. That's why rogue AI won't kill us. (Why do we try to stop countries from acquiring nukes? Let 'em proliferate & enjoy the equilibrium!)
Great to see him being held to account for low-quality discourse on AI doom. I have no beefs with "Software Is Eating The World" or "It's Time To Build"; even most of his AI post is good. But on the #1 most critical point, he failed to meet basic standards of discourse.
Quote Tweet
In his essay on AI, @pmarca fails to actually engage with the arguments about AI misalignment. Instead, he calls people names, questions their motives, and conflates them with woke “trust & safety” people. I rebut his arguments point by point here: dwarkeshpatel.com/p/contra-marc-
Sam says he's building an AI "as smart as all of human civilization" that has "the power of all of human civilization". So… any psychopath can run an open-source superintelligent agent soon after? Sam is calling for regulation. But aren't we already doomed in that scenario?
Every innovator & technologist in AI to date has been a hero. There are no villains to blame. But this generation's AI leaders may ultimately be *tragic* heroes - striving laudably to help the world based on what they believe in, only to lead to its destruction.
It's not "defense in depth". It's "don't worry, no defense needed because it's not superintelligent yet".
Quote Tweet
🚨 From @labenz, a GPT-4 red team member: “[GPT-4] is safe to deploy, but really only because it's limited in power.” “The AI still does the bad thing… with the exact same prompt that I used in the red teaming.” “We just don't know how to control it.” youtu.be/N3kvmKfVDwo?t=
Pretty crazy that an outward-facing screen, a sophisticated power-hungry feature that can’t ever be seen by the wearer, made it into the v1 product requirement spec.
Darwin's theory of evolution by natural selection is another case of an object-level insight with epistemological reverberations. He didn't just teach us how we came to have these bodies and minds; he taught us that we're allowed to know the answer to that deep age-old question.
What's another case of object-level reasoning with meta-level epistemological ramifications? I submit Euclid's proof of the infinitude of primes. He didn't just teach us about primes; he taught us something new about the types of propositions whose truth is knowable.
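The epistemological point rests on the shape of Euclid's construction, which is concrete enough to run. A minimal sketch in Python (the function names are mine, not Euclid's): given any finite set of primes, their product plus one must have a prime factor outside the set.

```python
from math import prod

def prime_factors(n):
    """Return the set of distinct prime factors of n by trial division."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def new_prime(primes):
    """Euclid's construction: N = (product of primes) + 1 leaves
    remainder 1 when divided by each prime in the input, so every
    prime factor of N lies outside the input set."""
    n = prod(primes) + 1
    return min(prime_factors(n))

new_prime([2, 3, 5])  # 2*3*5 + 1 = 31, a new prime
```

Note that N itself need not be prime (2·3·5·7·11·13 + 1 = 30031 = 59 × 509); the proof only needs N's prime factors to be new, which is what makes the truth of "there are infinitely many primes" knowable without enumeration.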
I hope folks learn a lesson about epistemology from this insane 98% drop: The crypto skeptics' arguments last year weren't just object-level correct, they were meta-level correct that crypto's uselessness was knowable in advance. Empiricism wasn't epistemologically necessary.
Quote Tweet
JUST IN: Fundraising for crypto VC has fallen off a cliff in 2023, down 98% since 2022
“There’s absolutely no way a text predictor can attack us”
Quote Tweet
To save Guam’s remaining birds, researchers placed nests on top of smooth poles they were sure no snake could climb, but found that brown tree snakes climbed the poles with a never-before-seen lassolike gripping technique: buff.ly/2Kmw6az
Lol
Quote Tweet
This is one of the most unhinged things I've read in a long time. It reads like the author dropped acid and decided to pontificate about robots and Web3 all over a Bloomberg column. bloomberg.com/opinion/articl
I continue to appreciate that Sam says stuff like this. But dude, your a-priori probability about the horribleness of burning the last knowledge barrier to superintelligence, when no one knows how to control superintelligence, should be pretty high.
Quote Tweet
Sam Altman: "I guess the thing I lose the most sleep over is that we have already done something really bad...I don't think we have, but the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun and we now don't get to have much impact…