As artificial intelligence (AI) continues to advance, so too do the risks associated with its misuse. The UK government has released a report outlining some of these threats, including deadly bioweapons, automated cyberattacks, and the possibility of powerful AI models escaping human control. The report is a precursor to an international summit on AI safety that the UK will host next week.
“These aren’t machine-to-human challenges. These are human-to-human challenges.”
These words from Joe White, the UK’s technology envoy to the US, encapsulate the essence of the discussions at the summit. The event aims to bring countries and leading AI companies together to better understand the risks posed by the technology and strategize on managing these potential downsides.
The Summit and Its Critics
The UK’s AI Safety Summit, scheduled for November 1 and 2, will focus primarily on the ways people can misuse or lose control of advanced forms of AI. While the summit is seen as a step in the right direction, it has not been without its critics. Some AI experts and executives in the UK argue that the government should prioritize nearer-term concerns, such as helping the UK compete with global AI leaders like the US and China. They caution that overemphasizing far-off AI scenarios could distract regulators and the public from pressing problems, such as biased algorithms or AI entrenching already dominant companies.
Exploring Potential Threats
The report examines the national security implications of large language models (LLMs), the AI technology behind ChatGPT. It explores what could happen if bad actors combined an LLM with secret government documents. One unsettling possibility the report raises is that an LLM capable of accelerating scientific discovery could also boost projects attempting to create biological weapons.
Despite these concerns, Joe White emphasizes that the report is not intended to serve as a comprehensive list of all potential misuses of AI. Instead, it is a high-level document designed to provoke thought and discussion about the societal implications of advanced AI.
Keeping AI in Check
Among the contributors to the report were policy and ethics experts from Google’s DeepMind AI lab and Hugging Face, a startup developing open-source AI software. Yoshua Bengio, one of the three “godfathers of AI” who shared the Turing Award for machine-learning techniques central to the current AI boom, was also consulted. Bengio has called for a new “humanity defense” organization to help keep AI in check, reflecting a growing sentiment among experts that proactive measures are needed to safeguard against potential AI threats.
The upcoming summit is a critical opportunity for international leaders and AI experts to discuss these challenges and work towards comprehensive solutions. As AI continues to evolve and permeate various aspects of society, such discussions and collaborations are becoming increasingly vital.