These 5 tough principles can help keep AI under our control


Everyone has a stake in AI’s safe, equitable and accountable development.

Artificial intelligence has been quietly helping us for decades, with progress accelerating in recent years, but 2023 will be remembered as a “big bang” moment.

With the advent of generative AI, the technology has broken through in popular consciousness and is shaping public discourse, influencing investment and economic activity, sparking geopolitical competition, and changing all manner of human activities, from education to health care to the arts. Each week brings some new breathtaking development. AI is not going away, and change is accelerating.

Policymaking is moving almost as fast, with the launch of new regulatory initiatives seeking to meet the moment. But while ongoing efforts by the G7, the European Union, and the United States are encouraging, none of them is universal or speaks for the global commons. In fact, with AI development driven by a handful of CEOs and market actors in just a few countries, the voices of the majority, particularly from the Global South, have been absent from governance discussions.

The unique challenges that AI poses demand a coordinated global approach to governance, and only one institution has the inclusive legitimacy needed to organize such a response: the United Nations. We must get AI governance right if we are to harness its potential and mitigate its risks. With that in mind, the UN High-level Advisory Body on AI was established to offer analysis and recommendations for addressing the global governance deficit. It comprises a group of 38 individuals from around the world, representing a diversity of geographies, gender, disciplinary backgrounds, and age, and drawing on expertise from government, civil society, the private sector and academia.

We feel privileged to serve as the Advisory Body’s Executive Committee. Today, we released the group’s interim report which proposes five principles for anchoring AI governance and addressing several interrelated challenges.

First, since AI's risks differ across diverse global contexts, each will require solutions that are tailored accordingly. That means recognizing how specific design, use (and misuse), and governance choices can jeopardize rights and freedoms. It also means recognizing that failing to apply AI constructively (what we call "missed uses") can needlessly exacerbate existing problems and inequalities.

Second, since AI is a tool for economic, scientific, and social development, and since it is already assisting people in daily life, it must be governed in the public interest. That means bearing in mind goals related to equity, sustainability, and societal and individual well-being, as well as broader structural issues like competitive markets and healthy innovation ecosystems.

Third, the emerging regulatory frameworks across different regions will need to be harmonized in order to address AI’s global governance challenges effectively.

Fourth, AI governance should go hand-in-hand with measures to uphold agency and to protect privacy and the security of personal data.

Lastly, governance should be anchored in the UN Charter, international human-rights law and other international commitments where there is a broad global consensus, such as the Sustainable Development Goals.


Affirming these principles in the context of AI requires overcoming some stubborn challenges. AI is built on massive amounts of computing power, data, and, of course, specific human talents. Global governance must consider how to develop and ensure broad access to all three. It must also address capacity building for the basic infrastructure that underpins the AI ecosystem, such as reliable broadband and electricity, especially in the Global South.

Greater efforts are also needed to confront both known and still-unknowable risks that could emerge from AI's development, deployment, or use. AI risk is a hotly debated subject: some focus on eventual end-of-humanity scenarios, while others are more worried about harms to people here and now. But there is little disagreement that the risks of ungoverned AI are unacceptable.


Good governance is anchored in solid evidence. We foresee the need for objective assessments of the state of AI and its trajectory, to give citizens and governments a sound foundation for policy and regulation. At the same time, an analytical observatory to assess AI’s societal impact — from job displacement to national-security threats — would help policymakers keep up with the immense changes that AI is driving offline. The international community will need to develop a capacity to police itself, including by monitoring and responding to potentially destabilizing incidents (as major central banks do in the face of financial crises), and by facilitating accountability and even enforcement action.

These are just a few of the recommendations we are advancing. They should be seen as a floor, not a ceiling. More than anything, they are an invitation for more people to tell us what kinds of AI governance they would like to see.

If AI is to fulfill its global potential, new structures and guardrails are needed to help us all thrive as it evolves. Everyone has a stake in AI’s safe, equitable and accountable development. The risks of inaction are also clear. We believe that global AI governance is essential to reap the significant opportunities and navigate the risks that this technology presents for every state, community, and individual today and for generations to come.

Ian Bremmer, Carme Artigas, James Manyika, and Marietje Schaake are members of the executive committee of the UN High-level Advisory Body on Artificial Intelligence.

Bremmer is founder and president of Eurasia Group and GZERO Media. Artigas is Spain’s secretary of state for digitalization and artificial intelligence. Manyika is senior vice-president of research, technology, and society at Google/Alphabet. Schaake, a former member of the European Parliament, is policy director of the Cyber Policy Center at Stanford University and president of the CyberPeace Institute.

This commentary, "What Global AI Governance Must Do," was published with the permission of Project Syndicate.
