UN experts on Saturday called for oversight to ensure that AI developments respect human rights and international law.
At the Global Conference on AI Security and Ethics hosted by the UN Institute for Disarmament Research (UNIDIR), experts drew parallels between the advent of artificial intelligence and the nuclear age, frequently termed AI’s “Oppenheimer moment.” Panelists at the event raised concerns about the urgent need to slow the AI development race in order to prevent the technology’s misuse on battlefields around the globe.
The dual-use nature of AI technologies, applicable in both civilian and military contexts, poses a security dilemma. Arnaud Valli, head of public affairs at Command AI, warned that developers risk overlooking battlefield realities, where AI errors could be deadly. Michael Karimian, director of digital diplomacy at Microsoft, emphasized the importance of collaboration among firms and the need to break down silos, stating, “Innovation isn’t something that just happens within one organization. There is a responsibility to share.” Peggy Hicks, director of the Right to Development division of the UN Human Rights Office (OHCHR), echoed concerns raised over a decade ago by human rights expert Christof Heyns regarding lethal autonomous robotics (LARs), insisting that robots must not replace humans in making life-and-death decisions.
At the conference, UN Secretary-General António Guterres noted recent conflicts in which AI applications violated international humanitarian law and harmed civilians. He highlighted UN General Assembly Resolution 79/239 as an essential preliminary step toward turning states’ commitments to assess the risks and opportunities of AI in military applications into action.
Delegates at the UNIDIR conference emphasized the role of strategic foresight in assessing technological risks. Diplomats from various countries, including China and the Netherlands, also shared their perspectives on building trust between nations. Ambassador Shen Jian of China suggested defining a “line of national security in terms of export control of high-tech technologies.” Disarmament Ambassador Robert in den Bosch of the Netherlands stressed the importance of examining AI in convergence with other technologies such as cyber, quantum, and space.
Despite commitments to developing “fair, secure, robust” AI algorithms, significant challenges remain in implementation and in determining what constitutes system robustness. Renewed major-power competition and the expiration of treaties like New START have raised concerns about the stability of nuclear deterrence in a multipolar world, and speakers drew parallels to the nascent stages of AI arms development and control.
Relatedly, the EU’s own AI regulation, the Artificial Intelligence Act, has been in force since August 2024. The regulation categorizes AI systems by risk level and prohibits those that pose significant threats to individuals and society, including systems that may perpetuate discrimination. AI systems at other risk levels face corresponding degrees of scrutiny regarding accuracy, transparency, and reliability.