We've seen increasing calls for approvals before training AI systems. So what's the case for a pre-emptive authorization? I see three: 1) the risk of proliferation, 2) dangers arising during the training, and 3) practical benefits related to compute.
blog.heim.xyz/the-case-for-p
🧵
I argue that such a “pre-emptive authorization” might be warranted. Ultimately, I’m discussing a mechanism that would apply to only a few actors. I see three primary reasons in support of this:
First, there is a proliferation risk associated with trained AI models. Even if a model's weights are not intended for public release, as with GPT-4, they could still be stolen via hacking.
The second concern revolves around the potential dangers that could emerge during the training process itself, especially as we continue to scale models at the current pace.
That said, this argument is not yet well supported, and more research is warranted.
Lastly, there are practical benefits for compute governance. The compute lever is most potent during the training run, given the significant compute requirements of training.
So while all of this may sound drastic, I do think these are strong arguments in favor of such a regime.
Here's the quick write-up: blog.heim.xyz/the-case-for-p.
Would appreciate any feedback or other reasons you might see. Note that this is just a brief one-sided overview highlighting key arguments I frequently refer to. I quickly drafted it in response to recurring discussions.
I'll also try to share more work-in-progress thoughts, but, ugh, busy times, folks. 🥱
How could we build a collaborative ecosystem to enable access to the world's most impactful models?
A year ago, my co-authors and I wrote an article on how such an access method could look. It focused on the US NAIRR at the time, but it's still timely.
governance.ai/post/compute-f
Quote Tweet
We'll lead on AI at home
@DeepMind, @OpenAI and @AnthropicAI will give us early access to their models to help us understand any potential risks.
And we’re launching two new fellowships to enhance machine learning in drugs and food design, and AI scientist training.