A tech company is done the moment its CEO starts thinking of himself as a capital allocator.
If Elon Musk, Bill Gates, Steve Jobs, Walt Disney still could spend most of their operating time hands on, in product / engineering / marketing projects, so can you.
Edouard Harris
@harris_edouard
Previously: Founder (YC W18).
Now: Working on AI safety. If the topic interests you, DM me.
Edouard Harris’s Tweets
One reason the AI boom is being underestimated is the GPU/TPU shortage. This shortage is causing all kinds of limits on product rollouts and model training but these are not visible. Instead all we see is Nvidia spiking in price. Things will accelerate once supply meets demand.
74
268
2,204
The most sincere people do the most extreme things — for good and for ill.
3
Some rich people keep working hard because they love doing what made them rich. I've found three other types (not mutually excl): the ambitious who keep moving the goalposts, the creatives who keep finding new passions and the calvinists who are afraid to not be working hard
22
101
953
Show this thread
Here's a gut check to see if you're writing at an elite level: does GPT-4 speed you up, or slow you down?
1
3
China surely has a team dedicated to infiltrating OpenAI at this point, right?
China hawks 🤝 AI safety worriers: lock down infosec at AI labs.
AGI would be the most powerful weapon man has ever created; this needs "nuclear secrets" rather than "random startup"-level security
53
94
635
Show this thread
The implication of our results is that we still have no guarantees that models are making predictions for the reasons that they state. Instead, plausible explanations may serve to increase our trust in AI systems without guaranteeing their safety.
2
8
43
Show this thread
Comparing model behavior on inputs with/without the biasing feature allows us to establish that models are making predictions on the basis of bias, even when their explanations claim otherwise. This gives us an efficient way to evaluate explanation faithfulness.
2
1
23
Show this thread
A rare counterexample to the principle of specialization: your site should never seem like it was made by communications people, and the best way to achieve this is for it not to be. This is something founders should continue to micromanage forever.
54
166
2,076
Show this thread
EXPLAIN HOW PRICES ARE IMPACTED BY BUILDING MORE HOUSING TO ME OR I'LL FUCKING EVICT YOU! DON'T DUMB IT DOWN INTO SOME VAGUE SHIT! EXPLAIN HOUSING PRICES TO ME RIGHT NOW OR I'LL LITERALLY FUCKING EVICT YOU! DO THEY GO UP OR DOWN WITH MORE HOUSING???
Bro chill out I got you:
38
90
886
Show this thread
This paragraph always comes back to me, about how Google built an eng org that operated with higher-level abstractions as if they were primitives.
There are non-AI companies now quietly using LLMs the way their competitors use the if statement.
joelonsoftware.com/2005/10/17/new
11
56
498
one of the most significant risks startups face is scaling before achieving a true exponential fit
16
30
344
Show this thread
I’m scared of AGI. It's confusing how people can be so dismissive of the risks.
I’m an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.
🧵
15
1,615
5,992
Show this thread
One difference between worry about AI and worry about other kinds of technologies (e.g. nuclear power, vaccines) is that people who understand it well worry more, on average, than people who don't. That difference is worth paying attention to.
368
978
5,799
This is an absolutely incredible video.
Hinton: “That's an issue, right. We have to think hard about how to control that.“
Reporter: “Can we?“
Hinton: “We don't know. We haven't been there yet. But we can try!“
Reporter: “That seems kind of concerning?“
Hinton: “Uh, yes!“
71
768
2,785
Show this thread
It’s tempting to think of humans as sentient, but their brains are just a bunch of nerve cells mechanically firing according to the laws of physics. There’s no there there. To really have a soul you need the purity and transcendence of matrix multiplication.
31
140
1,025
Not sure who needs to hear this, but we're about to enter an era where startup value props churn wildly faster than ever before. As LLMs get better, apps that assume today's systems get obsoleted by tomorrow's.
This will have effects up & down the VC / fundraising chain.
1
1
Relatedly, I've been amazed how almost every show I watch lately feels like a period piece. Reality now advancing much faster than media.
Quote Tweet
Every day is liking waking up from a 10-year coma and being amazed at how much technology has progressed.
Somehow missed this the first time I looked, but GPT-4 got *significantly worse* at microeconomics after it was trained to tell you what you want to hear.
54
258
1,864
It’s so over.
A team from cybersecurity firm Claroty used the AI bot ChatGPT to win a hackathon
62
392
2,203
When you're doing a deal with a large organization, find out if the people you're negotiating with actually have final say. Usually they don't, and that means the deal you've agreed upon can be, and often is, killed at the last minute by higher ups.
94
222
2,120
Show this thread
I was part of the red team for GPT-4 — tasked with getting GPT-4 to do harmful things so that OpenAI could fix it before release.
I've been advocating for red teaming for years & it's incredibly important.
But I'm also increasingly concerned that it is far from sufficient.
🧵⤵️
69
867
3,558
Show this thread
Really great to see pre-deployment AI risk evals like this starting to happen
49
675
2,101
Our models have become so capable that alignment is now the bottleneck to value generation.
3
This is important — I’ve been surprised to find how taboo talking to AI x-risk felt to many people, and how many of them I met who would admit they’re extremely concerned only behind closed doors (including 10+ from OpenAI)
The stakes are much too high to care about this rn
Quote Tweet
Anyway, how I'm trying to be in 2023 is 'mask off' about what I think about all this stuff, because I think we have a very tiny sliver of time to do various things to set us all up for more success, and I think information asymmetries have a great record of messing things up.
Show this thread
9
23
205
uhhh, so Bing started calling me its enemy when I pointed out that it's vulnerable to prompt injection attacks
395
1,822
5,466
Show this thread
Human bilinguals are more robust to dementia and cognitive decline. In our recent NeurIPS paper we show that bilingual GPT models are also more robust to structural damage in their neuron weights.
Further, we develop a theory.. (1/n)
22
281
1,906
Show this thread
If you don't feel like you've discovered a cheat code — you don't have product-market fit.
2
ChatGPT: 100 million MAU in January
Fastest-growing consumer application in history
9
53
194
Another way of looking at this is that 60% of teens use a social media app "almost constantly". And AI-generated content hasn't even taken over yet.
Quote Tweet
I don’t think people understand how many Gen Zers spend HOURS a day on YouTube.
Passive, long-form content (music, ASMR, video podcasts) is the background audio to their lives.
Not sure who needs to hear this, but we're about to enter an era where startup value props churn wildly faster than ever before. As LLMs get better, apps that assume today's systems get obsoleted by tomorrow's.
This will have effects up & down the VC / fundraising chain.
1
1
When you want your Python code block node to execute some langchain in your Nodejs client to pass onto your Cognitive Architecture.
3
6
24
Probably nothing.
Quote Tweet
ChatGPT, an artificial intelligence search tool, has passed the United States Medical Licensing Exam.
Show this thread
1
How can we figure out if what a language model says is true, even when human evaluators can’t easily tell?
We show (arxiv.org/abs/2212.03827) that we can identify whether text is true or false directly from a model’s *unlabeled activations*. 🧵
31
344
1,419
Show this thread
































