AI giants grant UK government access to their models for research and safety

Google DeepMind, OpenAI and Anthropic have all agreed to provide “early or priority access” to their models for research


Prime Minister Rishi Sunak announced during a speech at the tenth London Tech Week that Google DeepMind, OpenAI and Anthropic had granted the UK government priority access to their foundation models in order "to help build better evaluations and help us better understand the opportunities and risks of these systems."

This development follows hot on the heels of last week's heavily promoted announcement of a global AI summit to be held in the UK. The attendee list is as yet unknown, but the government has said it would like to see the AI companies themselves, researchers and national governments in attendance.

Sunak's speech urged the audience to consider the potential of AI, particularly in combination with other nascent technologies such as quantum computing, describing the possibilities as "extraordinary."

Indeed they are, but it is the darker side of those possibilities that worries AI ethicists, academics and wider society.

This is why Sunak emphasised the government's April announcement of a £100m Foundation Model Taskforce, charged with promoting the development of safe and reliable foundation models; this latest agreement fits squarely within that remit.

The government's focus on AI safety and the development of "guardrails" appears to be the result of a succession of warnings about existential risk from seemingly repentant AI researchers and tech companies, as well as a raft of meetings between those companies and ministers.

Some individuals and advocacy groups have expressed concern that the government, keen to be seen as an agile, friendly home for AI development, is a soft target for AI companies eager to shape the conversation about safety and any resulting legislative frameworks. Granting selective access to their research would certainly help them achieve that end.

Some academics and commentators have also expressed frustration that pronouncements about abstract future risks are pulling the media spotlight away from algorithmically driven, real-world harms happening in the here and now, such as bias, privacy abuse and misinformation.

In fairness, excluding the research and the relevant experts from the process would hamper efforts to design guardrails flexible enough to fit such a fast-developing field, and there is a genuine opportunity for the UK to lead global safety efforts.

Nonetheless, concerns remain that the voices of those who stand to benefit most from generative AI will drown out the voices of those in less secure positions. To truly lead on global AI safety, the UK needs to ensure that all voices are heard in this debate.