The United Nations General Assembly convened this week in New York, and during the proceedings, the U.N. Secretary-General’s technology envoy, Amandeep Gill, organized an event titled “Governing AI for Humanity.” The purpose of this gathering was to engage in discussions regarding the potential risks associated with AI and the complex challenges of fostering international cooperation in the realm of artificial intelligence.
Secretary-General António Guterres and Gill share the belief that establishing a new U.N. agency will become necessary to facilitate global collaboration in the management of this influential technology. However, the precise objectives and structure of this new entity have yet to be determined. Some observers express skepticism, noting that ambitious initiatives for worldwide cooperation often struggle to garner the support of influential nations.
Gill has previously spearheaded efforts to enhance the safety of advanced technologies. He chaired the Group of Governmental Experts for the Convention on Certain Conventional Weapons, a forum where the Campaign to Stop Killer Robots aimed to convince governments to prohibit the development of lethal autonomous weapons systems. That campaign failed to gain traction, particularly with major global powers like the United States and Russia. Now, Gill is leading an even more ambitious endeavor to promote international collaboration in the field of AI.
The pace of AI development has been astonishing in recent years, and experts anticipate that this progress will continue at an accelerated rate. The repercussions of AI innovation extend far beyond the borders of the countries where it is developed, prompting leaders and technologists to advocate for global cooperation in addressing AI-related issues.
In July, at a meeting of the U.N. Security Council, Secretary-General Guterres made the case for the United Nations as the ideal platform for fostering such cooperation. The next step in establishing a U.N. AI agency is the formation of a High-Level Advisory Body on Artificial Intelligence, the membership of which will be announced in October.
Gill, who previously served as the executive director of the U.N. High-Level Panel on Digital Cooperation from 2018 to 2019, assumed the role of tech envoy in June 2022. In August 2023, the tech envoy’s office initiated the selection process for experts to serve on the High-Level Advisory Body on Artificial Intelligence, inviting nominations from the public.
A separate call for nominations, reviewed by TIME, was circulated among member states and outlined the terms of reference for the Body. These terms specified that the Body would comprise up to 32 members from diverse sectors, ensuring a mix of genders, ages, geographic representation, and areas of expertise.
In an interview with TIME on August 30, Gill revealed that his office had received over 1,600 nominations through the public call for nominations. When combined with nominations from member states, the total number is expected to exceed 2,000. Gill explained that, in collaboration with other U.N. organizations, his office will create a shortlist from which the Secretary-General will select the final 32 members. The Body is scheduled to convene for the first time in October.
The terms of reference, as seen in the document shared with TIME and confirmed by Gill, outline the Body’s responsibilities. They mandate the Body to produce an interim report by December 31, 2023, presenting a high-level analysis of options for the international governance of artificial intelligence. A second report, due by August 31, 2024, may provide detailed recommendations concerning the functions, structure, and timeline for a new international agency dedicated to the governance of artificial intelligence.
Aki Enkenberg, the team lead for innovation and digital cooperation at Finland’s Ministry for Foreign Affairs, expressed reservations about specifying that the Body should offer recommendations regarding a new international agency. Enkenberg argues that much of the necessary international governance for AI could be facilitated by existing U.N. bodies, suggesting that an assessment of potential gaps in the U.N. system should have been conducted before proposing the establishment of a new agency. Gill countered this by denying that the terms of reference predispose the Body to advocate for a new agency.
When questioned about whether the advisory body’s size might make it unwieldy and potentially allow the secretariat to exert undue influence over the report’s content, Gill emphasized that the secretariat has no intentions of swaying the Body’s findings.
Looking ahead to September 2024, the U.N. will host its Summit of the Future. By that time, Gill hopes that the findings of the High-Level Advisory Body will provide member states with the necessary information to determine whether and how they should support the establishment of a U.N. AI agency. Gill views this summit as an opportune moment for leaders to make decisions on this matter, describing it as a “leaders-level opportunity.”
In a blog post released in May, top executives from OpenAI, the organization responsible for ChatGPT, raised the notion that a future AI governance system might resemble the International Atomic Energy Agency (IAEA). They argued that as AI systems become more sophisticated, there may be a need for an international authority to ensure their safe development and deployment. The post stated, “Any [superintelligence] effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”
The IAEA, established in 1957, conducts inspections to prevent non-nuclear weapon states from developing nuclear weapons and supports the peaceful use of nuclear energy through technical cooperation.
U.N. Secretary-General António Guterres expressed support for the idea of an AI agency akin to the IAEA, stating in June, “I would be favorable to the idea that we could have an artificial intelligence agency … inspired by what the international agency of atomic energy is today.”
However, replicating the IAEA model for AI governance is just one possibility. A paper published in July by researchers from Google DeepMind, OpenAI, and various academic and nonprofit institutions laid out four potential forms an international AI body could take, each not mutually exclusive.
One proposal is to establish an IAEA-like agency that develops industry-wide standards and monitors compliance with those standards. Another option is to create an organization resembling the Intergovernmental Panel on Climate Change (IPCC), which facilitates expert consensus on technical AI issues. The report also suggests an international public-private partnership to ensure equitable access to beneficial AI, modeled on Gavi, the Vaccine Alliance. Lastly, the report mentions the possibility of international collaboration on AI safety research, akin to the European Organization for Nuclear Research (CERN).
Gill says the High-Level Advisory Body must decide which model, if any, is most appropriate. He believes that the fourth option, an international research effort, could involve a U.N. agency in addressing potential AI risks.
However, some experts argue that establishing an IAEA-like agency for AI might face resistance from policymakers. This is because it would necessitate countries like the U.S. and China to grant international inspectors full access to their advanced AI labs to mitigate unforeseen risks.
Bill Drexel, an associate fellow at the Center for a New American Security, highlights that current levels of international cooperation on managing potentially hazardous technologies are at a historic low. He suggests that it might require a significant AI-related incident to generate the political will needed for a substantial international agreement. In the interim, setting up an international body, either through the U.N. or via smaller multilateral or bilateral groups, with minimal obligations on participants could lay the groundwork for more substantial cooperation if the need arises.
Gill, drawing from his expertise as a nuclear historian, concurs with this approach. He emphasizes that while the history of the IAEA might not be directly applicable, there’s a possibility to establish international controls should AI risks become more pronounced.
An alternative governance model, with the potential for future expansion, is proposed in a white paper from August, authored by researchers from academic and nonprofit institutions as well as Microsoft. It suggests the creation of an International AI Organization (IAIO) to collaborate with national regulators on setting standards and certifying jurisdictions, rather than individually monitoring AI companies. Under this model, governments would enforce these standards by permitting only those companies operating in certified jurisdictions to access their domestic markets. They would also restrict exports of AI components, such as computing chips, to countries failing to meet these standards.
Robert Trager, one of the lead authors of the white paper, believes that jurisdictional certification makes this arrangement less intrusive, making it more appealing to powerful countries. Given the global nature of AI safety risks, it’s crucial for countries developing advanced AI systems, particularly the U.S. and China, to be part of any international agreement.
Sihao Huang, a researcher at the University of Oxford who studies Chinese AI governance, argues that a narrow focus on shared issues, such as assessing models for biosecurity risks, will be necessary to secure China’s participation in an international agreement.
However, Russia’s stance remains resistant to the idea of supranational oversight bodies for AI. Dmitry Polyanskiy, first deputy permanent representative of the Russian Federation to the U.N., expressed opposition to such bodies during a U.N. Security Council meeting in July.
Although the International Civil Aviation Organization and the International Maritime Organization, which served as inspiration for the IAIO, are U.N. agencies, Trager suggests that the U.N. is not the only potential host for such governance efforts. He argues that multiple models could involve a wide range of stakeholders in AI governance.
Drexel emphasizes that international cooperation can be slow and inefficient. Instead, he suggests considering bilateral or more limited multilateral forums for governing advanced AI systems, which could be scaled up as the number of companies and countries involved grows.
Gill maintains that the U.N., as a universal intergovernmental organization with experience in managing new technologies, is well-positioned to host a treaty or organization for AI governance.
While previous attempts to manage AI internationally, like the Campaign to Stop Killer Robots, may have faltered due to idealism, Trager insists that the global community must continue trying to address AI governance. He acknowledges the complexity of the world but sees a window of opportunity for meaningful progress.