For the fourth consecutive year, the Norwegian Board of Technology (NBT) and the Norwegian Human Rights Institution (NIM) have combined their expertise and perspectives on human rights and technology in a joint report. In this report, we explore different future scenarios for a 2030 shaped by superintelligence, and assess the implications for human rights depending on who controls the technology: companies, states, global cooperation, or an ecosystem of diverse actors.

What is superintelligence?

Many terms are currently used to describe and define intelligence in machines. Broadly speaking, the aim of developing artificial intelligence is to replicate human intelligence in machines. However, there are differing views on what this would entail in practice.

The terms artificial general intelligence and superintelligence are often used interchangeably, and the distinction between them is frequently unclear. Both refer to the idea of machines with human-level intelligence, but what this concretely means is not precisely defined. Typically, however, a distinction is made between artificial general intelligence (at human level) and superintelligence (beyond human level). In this report, we define superintelligence as a form of machine intelligence that surpasses human capabilities.

Download the report in Norwegian below:

Superintelligens, makt og menneskerettigheter


Who will control superintelligence?

A central challenge in discussions about superintelligence is whether humans will be able to retain control over systems that become increasingly autonomous, complex and unpredictable.

While there is disagreement about the extent to which control over increasingly advanced AI—and eventually superintelligence—can be maintained, there is broad consensus that control is crucial. This is not because superintelligence is guaranteed to emerge, but because the consequences of losing control over highly autonomous and advanced AI systems could be severe for individuals and society.

Four scenario narratives

To better understand how superintelligence may affect human rights, we use scenario narratives to explore how rights could be impacted depending on who controls the technology.

The future scenarios show that superintelligence is not merely a technological issue, but a fundamental question of power distribution and governance: Who is in control? Who can be held accountable? And who is afforded protection?

Through four different visions of the future, in which superintelligence is governed by:

  • companies
  • states
  • global institutions
  • or everyone (and no one)

we show that the choice of governance model has profound implications for democracy, rights, security and societal development.

The scenarios do not point to a single correct answer. Rather, they highlight the risks and opportunities that arise under different power structures. They illustrate how vulnerable human rights become when control is lost, how powerful positive outcomes can emerge under responsible governance, and how quickly the balance between state, market and individual can shift.

Key conclusions

  • Control over superintelligence is essential to protect human rights. Without a responsible actor bound by legal obligations, the protection of fundamental rights is weakened.
  • A shift in power from states to companies undermines the position of human rights. It is states—not companies—that are responsible for upholding and guaranteeing human rights. When superintelligent systems operate without public oversight, rights in practice become dependent on contractual terms and market logic.
  • Democratic autonomy and national security may be weakened if critical services and societal functions become dependent on commercially controlled superintelligence. Reliance on technological development in other jurisdictions is likely to limit transparency and control.
  • State control over superintelligence may render state power excessively strong. If the state is equipped with superintelligence, power may concentrate in government at the expense of countervailing forces, potentially weakening or undermining democracy and civil and political rights.
  • Global cooperation on the responsible development and use of AI is becoming increasingly important. Individual states are unlikely to be able to regulate superintelligence alone; international governance structures, cooperation arenas and shared rules will be essential in addressing advanced and autonomous AI systems.
  • The ability to maintain control over complex AI systems is a key challenge. Auditability, distribution of power, transparency, and effective enforcement of existing and new regulations are crucial to strengthening accountability as AI becomes more powerful.

Our analysis shows that ensuring the responsible development and use of powerful AI systems is an urgent democratic challenge. It requires solid understanding of technological developments, political resolve, and strong protection of fundamental rights.

The project has been led by Hanne Sofie Lindahl and Joakim Valevatn from the NBT, and Cecilie Hellestveit, Vidar Strømme and Vetle Seierstad from NIM.
