1/15: given 'IAEA for AI' is becoming a canonical ai global governance idea, here's a 🧵🧵🧵 on how the International Atomic Energy Agency came to be and what its creation can tell us about a sibling agency to regulate powerful AI models
2/15: the IAEA's story begins after WW2, as the US went about shaping the emergence of global institutions like the IMF and the UN as part of a wider strategy to 'contain' the USSR (the brainchild of US diplomat George Kennan)
3/15: in 1953, the US State Department Panel of Consultants on Disarmament published a report recommending greater transparency about the capabilities and risks of nuclear technology to the American public and warning against an 'arms race' with the USSR
4/15: accepting the recommendations, President Eisenhower proposed the creation of an international body to regulate the use of nuclear technology in his 'Atoms for Peace' address to the UN, which outlined the basis for an agency to encourage peaceful uses of nuclear fission
5/15: the idea was that governments should make joint contributions from their stockpiles of fissionable materials to an international atomic energy agency. this agency would devise methods to allocate the material to peaceful pursuits, such as agriculture and medicine
6/15: however, some historians have suggested that Eisenhower's proposal was also a strategic move to put pressure on the Soviet Union. the amount the US proposed to donate to the IAEA would be manageable for the US, but difficult for the USSR to match
7/15: by 1954, negotiations were underway for the new agency's statute. the US saw safeguards on the proliferation of nuclear technologies as a crucial function of the IAEA to ensure that fissile materials were not misused
8/15: the proposed IAEA could serve as a control mechanism to prevent a "fourth country" (following the US, UK, and USSR) from developing nuclear weapons by ensuring nuclear resources intended for peaceful purposes weren't diverted towards military activities
9/15: by the end of 1958, the US, Canada, UK and others agreed that "100% effective control was impossible under any system and that audit and spot inspection would provide as effective control as could reasonably be expected"
10/15: the concept was, according to a Canadian diplomat, "analogous to having available policemen in sufficient number to deter the criminal but not to have one policeman assigned to each potential criminal"
11/15: the newly formed IAEA drafted successive versions of the safeguards policy, which were intensely debated between 1957 and 1960. however, the safeguards system proved controversial amid suggestions that it consolidated power in the hands of countries that already had nuclear technology
12/15: despite the controversy and numerous amendments, the IAEA finalised the safeguards document with a vote of 17 to 6 at the annual meeting in January 1961
13/15: as we talk about an 'IAEA for AI', it's probably useful to remind ourselves that the IAEA was a product of a different world, developed to solve different problems based on different motivations
14/15: nuclear technology is organised around a scarce resource, fissile material, which can be detected both in raw form and once refinement begins. AI, on the other hand, can be built anywhere in the world given appropriate access to data, compute and technical capability
15/15: and the risks associated with nuclear technology were already (mostly) known, whereas the risk profile of today's AI models will grow as capabilities do. as a result, we need to plan for risks that (unlike nuclear technology in the 1950s) don't yet exist