Had a great conversation with Yoshua Bengio. Both of us agreed that a good step forward for AI risk is to articulate the concrete scenarios where AI can lead to significant harm. More to come, and looking forward to continuing the conversation!
regulation should take effect above a capability threshold.
AGI safety is really important, and frontier models should be regulated.
regulatory capture is bad, and we shouldn't mess with models below the threshold. open source models and small startups are obviously important.
I think people talking about regulatory capture missed the part where @sama said that regulation should be stricter on orgs that are training larger models with more compute (like @OpenAI), while remaining flexible enough for startups and independent researchers to flourish.
Some analogies are poor because they are not actually similar in a relevant way, and some are offensive because of the choice of relevant dimension, but then it should be possible to specify why. Merely comparing two things, even if one of them is bad, is not by default offensive.
One of my least favorite conversational moves is when people express outrage at someone for making a comparison.
1. Comparing is not the same as equating
2. Analogies are not meant to be exact in every way, just similar in a relevant dimension
See also
It’s particularly frustrating when there is a component of moral outrage: “how *dare* you compare X [thing I don’t think is that bad] to Y [thing I think is absolutely horrific].” People seem to forget comparing =/= equating
PSA: if everyone stands a couple of feet away from the luggage carousel, everyone gets a good view of the bags and can easily step in to collect without obstruction, at ~0 personal cost.
I get so unreasonably mad when people obliviously stand right up close and block it
I now predict 5 to 20 years, but without much confidence. We live in very uncertain times. It's possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now.
Not generally a fan of academic gatekeeping, but I suspect current debates around AI safety would be better if new entrants (especially big names) would familiarise themselves with some of the existing concepts: instrumental convergence, orthogonality, mesa-optimisers, etc.
Thanks @reasonisfun for making me aware of @IncrementsPod ! The interview/conversation on the subject of AI and AGI, with @RosieCampbell, was among the most fantastic examples of honest truth-seeking I've experienced. But wasn't she supposed to get tipsy? False advertising?
It is possible to be concerned about the future catastrophic effects of climate change on the world while also caring about the populations and environments being affected today.
The same is true of AI.
Fun GPT-4 use case: I told it about my preferred clothing style/aesthetic and asked it which brands I should check out - it suggested a bunch I hadn't heard of before but I really like!
🤖 Best podcast episode I’ve found comparing the Bay rat view of AGI with the crit rat one, and exploring what ‘AI safety’ means:
“AGI: Could The End Be Neigh?” on Increments by
Wish we didn't use "alignment" to mean both:
- Ensuring AI acts broadly in line with human values and is robust to malicious actors directing it to do harm
- Ensuring a powerful optimization system can be safely directed at a goal of our choosing without catastrophic side effects
Both are important, but likely require very different approaches (though insights from one may inform the other), and I keep seeing people talking past each other.
Five years in the US and I am still baffled that I have to do my own tax return. Thankfully GPT-4 has made navigating my complicated tax situation much easier this year 🎉
(If you try this, remember hallucinations are a thing, proceed with caution)
New ep! @RosieCampbell joins us to push back on our nonchalant attitudes about the possible end of the world. We talk AI, existential risk, how to test if models are creative, and whether deep learning can get us to AGI
https://incrementspodcast.com/49
I asked GPT-4 to write a play in the style of Shakespeare but to sneak in titles of nineties songs...
---
Title: A Midsummer Night's Anthem
[Act 1, Scene 1]
[Enter ALEXANDRIA, a noble lady, and her suitor, FREDERICK]
ALEXANDRIA:
Good morrow, Frederick! Pray, what news dost thou bring today?
I shall not be surprised, for I am a strong believer in "No Scrubs,"
As my father oft hath said, in matters of the heart, be wise.
FREDERICK:
Fair Alexandria, 'tis a question of fate, I wonder,
If I may be so bold, to ask you here today:
"What's My Age Again?" I feel as if the years have vanished,
And I am but a youth in love's sweet embrace.
ALEXANDRIA:
Ah, Frederick, thou art as "Smooth" as thy words,
And I am ever pleased to hear thy voice.
Yet, I must implore, for it is my duty,
To ask of thee, "Are You Gonna Go My Way?"
FREDERICK:
Oh, Alexandria, thou art as the "Black Hole Sun,"
Absorbing all light and warmth from the world.
I would gladly go thy way, if only thou'd be mine,
For in thy presence, I am "Losing My Religion."
[Enter THOMAS, a friend of FREDERICK]
THOMAS:
Hail, good Frederick! Pray, "Don't Speak" of things so somber,
For I bring news that shall uplift thy spirits!
A party grand shall take place tonight, "Gangsta's Paradise,"
Where music and dance shall lift thee high.
FREDERICK:
Ah, Thomas, a timely arrival indeed!
For in the arms of Alexandria, I shall be "Free Fallin'."
Let us "Jump Around," and leave our worries behind,
In this midsummer night's dream of mirth and revelry.
ALEXANDRIA:
Dear Thomas, your news doth bring a "Ray of Light" to our hearts,
And we shall follow you to this wondrous place.
For I shall "Dance the Night Away," and let my heart be free,
Under the moonlit sky, we shall find our destiny.
[Exeunt]
1) Do you hold beliefs you think are true but you feel guilty about?
2) If you could press a button to change these beliefs in yourself (the world otherwise remains the same) would you?
The way that the most prominent critics of AI risk totally fail to engage with even the most basic arguments made by people in the field suggests that they don't have any good counterarguments. That's very concerning!
“Don’t Fear the Terminator”
Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others.
Article by @TonyZador and @YLeCun: https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/…
The entire concept of the “off switch” is under-theorized in all the x-risk stuff.
First, all actually existing LLM-type AIs run on giant supercompute clusters. They can easily be turned off.
In the event they get decentralized down to smartphone level, again each person can…
) and all other browsers now feel hugely outdated.
- Organize tabs into folders
- Command Palette
- Split screen
- Favorite / Pinned / Transient tabs
- Work / Personal profiles
- Tab previews
So good
Relatedly, I would like it if we could stop conflating:
- Consciousness (whether AI has subjective experience)
- Danger (whether AI has the capability or propensity to cause catastrophe)
- Moral patienthood (whether we have an ethical obligation to consider AI welfare)