The current trajectory of fully private-sector development of superintelligence is unsustainable, for reasons we explore. Instead, we need new models of public-private interaction on cutting-edge AI systems. There are some key issues with our current trajectory.
We need to sharply improve the structures supporting AI research in the UK. We propose a range of policies, from new ‘polymath fellowships’ allowing technical experts in other fields to become deeply AI-literate, and vice versa, to expanded training programmes.
ARIA, the new ARPA-like research organisation, is ideally suited to rapidly orient to the AI revolution and bring its benefits across disciplines in the interdisciplinary way so key for AI. But it needs a much bigger budget by the end of the next parliament to be a major player.
ARIA's budget should rise to at least £2bn annually by the end of the next parliament. To avoid sacrificing ARIA's autonomy, this should not be earmarked specifically for AI. However, ARIA's research agenda will likely be widely touched by AI-relevant work.
As previously argued, the Turing Institute’s AI function has not kept the UK at the cutting edge. Its core AI focus should be wound down, with the institute refocusing on Digital Twins, whilst a new effort is launched.
Instead, the UK needs a lab oriented and set up to deal with the future.
Create the AI Sentinel lab: We argue in the paper for the UK to seed an international laboratory network, which we call AI Sentinel. It should be focussed on three areas, to complement and collaborate with the private sector.
First, to develop and deploy methods to interrogate and interpret advanced AI systems for safety, while devising regulatory approaches in tandem. It both researches and deploys safety tech, acting as a ‘test bed’ for AI.
Secondly, to ensure states and their regulators can understand the latest AI systems. Notably, it is not aiming to push capability development beyond the frontier absent safety improvements, which would create an unsafe race.
It is also not intended to compete directly with private companies. For example, we do not suggest creating a ‘BritGPT’ in the paper; rather, we lay out a recipe for an ecosystem that creates the safer next generation of technology, instead of playing catch-up.
Third, Sentinel should act to promote a plurality of research endeavours and approaches to AI, especially new algorithms that are more interpretable and controllable. Private-sector incentives favour pushing the capabilities of black-box algorithms further, not improving legibility.
We provide greater detail on the motivations and requirements for AI Sentinel in the paper, including that it be led by a frontier tech expert and open to international partners from the start.
A key point we make is that regulation will be largely inseparable from research in AI for some time, and any intelligent regulation effort will need to be unusually closely joined with research. This is a key motivation for creating AI Sentinel.
We also call for government, through AI Sentinel, to bring some of the world's best minds across fields into AI safety, inspired by how figures like John von Neumann were brought into government during WW2. Currently, talent is misallocated between capability and safety work.
On regulation, we emphasise that this must be closely coupled to AI Sentinel's and others' research. We call for divergence from EU rules and standards, but close engagement. We outline a range of measures to achieve this; much more in the report.
Section 3 shifts gear. The sections above focus on laying a platform from which the UK can be a leader in safe, interpretable AI; section 3 covers how to use that platform to transform public services and the economy.
We need to create new kinds of interdisciplinary research labs working at the intersection of science and engineering, transforming fields of research with AI. This calls for ‘Disruptive innovation laboratories’.
These could work at the intersection of AI and other chosen fields, applying AI’s benefits to those fields to address major challenges.
‘s work on creating physical computing is an example of what such a lab could do. He turns entire rooms into communal computers where people can co-create and learn in the physical world without the need for screens or virtual-reality glasses.
Much more detail in our papers and in this tweet of mine:
Quote Tweet
Alongside ARIA, our sci/tech team wanted to create a 'twin for ARIA' composed of physical labs organised very differently to conventional research environments to pursue groundbreaking research and invention, putting young scientists first. Read more here
jameswphillips.substack.com/p/lovelace-vis
We then outline how the government should approach integrating AI into public services, beginning with known technology to build expertise, then moving to more speculative approaches.
We need to upskill departments too, with Chief AI Officers able to help departments understand AI, and with the technical skills to interact with the Office for AI and the AI Taskforce.
We also discuss how to address safety in the context of public services, such as having AI Safety incident registers.
To realise AI’s benefits, the UK needs markedly increased compute power. The recent Future of Compute Review was the first time a country has taken a comprehensive view of its compute.
But even if fully implemented by its 2026 delivery date, the review leaves the entire UK state far behind the frontier, and behind even the 2022 compute of relatively small US labs like Anthropic and OpenAI. We address how such compute should be governed to promote responsible use.
The UK’s semiconductor strategy was devised in an era of lower investment and before AI became front-page news. The UK needs to urgently reassess its long-term strategy for semiconductors and compute, with a rapid review led by a technical expert. We have promising companies, but a lack of support.
We outline a range of recommendations for making the country ‘AI legible’ so AI can improve public services, whilst protecting data privacy. This includes funding dataset creation, and major national programmes to implement AI in AI-legible sectors like grid optimisation.
To spur the private sector and utilise its strengths, we need improved procurement. AI is too illegible, fast-paced, and technical for current procurement approaches. We repeat our call for a ‘DARPA of procurement’, acting as a buyer and tester of first resort for AI products.
We also urge organisations like UKRI to prioritise physical AI through robotics research, and to create challenge funding in this area. We outline how we should begin to approach deepfakes and labour-market disruption, addressing the democratic deficit and much more.
This thread is far from comprehensive despite its length - please read the report, share it, and ask your MPs to take it seriously and lobby for it to become government policy.
I wrote a piece recently in the Sun on Sunday explaining why getting AI right is so critical to the future of our country and the world.
Quote Tweet
My op-ed today in the Sun on Sunday arguing for increased government focus on and investment in AI safety, and for increased public engagement.
The Sun is the UK’s highest-circulation newspaper, so hopefully it can help persuade some people.
thesun.co.uk/news/22570912/
