Artificial intelligence (AI) is a term that describes computer systems that learn from experience and are able to solve complex problems in a multitude of contexts and environments. As such, intelligent machines bear some resemblance to human beings.

Behind the recent advances in AI, we find a group of statistical and mathematical techniques collectively known as machine learning. By providing the machine with past “examples” in the form of data, it is trained to solve a task and may improve with experience. This differs from traditional computer programs, which rely on precise step-by-step instructions to solve a problem.
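
A minimal sketch in Python may make the contrast concrete (the dataset and the use of the scikit-learn library are our own illustrative choices, not from the article): rather than writing rules for telling flower species apart, the program is handed labelled examples and infers the pattern itself.

    # Minimal sketch: the program is not given rules,
    # it infers them from labelled examples.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)  # past "examples": measurements plus labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier().fit(X_train, y_train)  # training = learning from data
    print("accuracy:", model.score(X_test, y_test))  # tends to improve with more examples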

Self-driving cars do not have pre-programmed instructions for how to act in every situation they encounter. Instead, they rely on sensors and machine learning to understand the environment and to make decisions on the fly.

[Timeline: AI springs into action]

Experts often distinguish between general AI and narrow AI. General AI is the idea of a system as flexible as a human being, able to solve a wide range of tasks. Narrow AI solves specific, well-defined tasks, such as recognizing faces in images. While there is consensus in the research community that general AI is still decades away, narrow AI has made tremendous progress in the past few years.

The increasing availability of data and comparatively cheap processing power, combined with improved algorithms, have enabled rapid development, especially within neural networks, a machine learning technique inspired by the neural tissue of the brain. In a popular image recognition challenge, the error rate of the best-performing AI system dropped from 26 percent to 3.5 percent between 2011 and 2015. Humans, by contrast, have an estimated error rate of 5 percent.
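
As a rough illustration of the technique (a toy example of our own, far simpler than the systems in the challenge), a small neural network can be trained to recognize handwritten digits, again assuming the scikit-learn library:

    # Toy neural network: layers of simple "neurons" whose weights
    # are adjusted from labelled example images.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)  # small 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)  # learning: repeated weight updates from examples
    print("error rate:", 1 - net.score(X_test, y_test))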

As a result of this progress, all the major technology giants are now investing heavily in AI.

Opportunities – not least within the public sector

Smart government: AI systems can be leveraged to predict needs, personalize services and detect fraud and error. Government agencies can use AI for decision support. Certain administrative procedures may be automated to deliver instant responses. In complex procedures, digital assistants may help caseworkers evaluate cases and suggest measures.

Speech recognition allows young and old to talk to machines directly in many different languages. This can lower the barrier to using digital public services.

Decision support for doctors and patients: AI can help process large amounts of information and assist decision-making under time pressure. While a radiologist may analyze a few thousand images over a professional lifetime, an AI system can train on millions in a short amount of time. Systems like these can give citizens access to health services of equal quality no matter where they live.

By pairing AI with digital assistants, patients can get instant advice. British health authorities are now testing systems that answer inquiries to the non-emergency helpline. The user answers questions about his or her condition, and the algorithm makes an assessment and gives advice on treatment. Consultations like these are far quicker than seeing a human doctor.

One teacher, one pupil: Individualized learning is a key principle in the Norwegian school system. Adaptive learning tools give each pupil assignments and reading material based on individual development, achievements and needs. These tools can also give the pupil targeted, individual feedback in real time.

Traffic flow: Self-driving cars promise a safer and more environmentally friendly traffic system. Cars that communicate and coordinate in a network can improve traffic flow. In the UK, some highways are equipped with sensors and AI systems that predict and optimize traffic management.

Challenges

Biased data: Artificial intelligence is trained on data. Biases in the data sets will therefore manifest in the AI systems. Biased systems may reinforce existing inequalities or amplify discrimination.

Automated systems that assess job applications and select the best candidates have become more popular. Such systems are often trained on data about former hires, and may inherit biased selections and practices from past recruitment. Using such algorithms may not only perpetuate unfair practices, but can also make them harder to discover.
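
A toy sketch can show the mechanism (the data, the attribute names and the use of the scikit-learn library are our own illustrative assumptions): a model trained on past hiring decisions that favoured one group simply learns to reproduce that preference.

    # Toy illustration: a model trained on biased past decisions learns the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    skill = rng.normal(size=1000)                    # a legitimate qualification signal
    group = rng.integers(0, 2, size=1000)            # hypothetical protected attribute
    hired = (skill + 1.5 * group > 1.0).astype(int)  # past hires favoured group 1

    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
    print(model.coef_)  # a large weight on "group": the bias is learned, not corrected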

The black box problem: Artificial intelligence and big data allow for automated decisions, for instance in processing applications for schools, loans, insurance or government support. It may, however, be difficult to understand precisely how these systems evaluate data to reach a conclusion. AI systems risk becoming “black boxes” that obscure important assumptions, uncertainties and normative choices. This applies especially to the more complex machine learning techniques.

The EU General Data Protection Regulation will give citizens subjected to algorithmic decisions a “right to explanation”. How will this right be secured in the face of advanced AI?

Who is responsible? AI is pushing the limits of which cognitive tasks computers can solve. This raises questions about security and responsibility. Many professions require a licence to operate. If a machine performs a procedure customarily done by a doctor, will it require the same level of quality assurance?

The European Parliament is currently evaluating a framework for regulating AI. Among other measures, it suggests legal accountability and registration of advanced AI systems. The framework would provide guidelines for responsible development and require companies to cover losses caused by such systems.

The winner takes it all: A few large multinational internet corporations dominate commercial AI. They have access to the large amounts of data and computing resources needed to develop AI systems. When consumers use these systems, they hand over even more data, which in turn is used to further improve the technology. When the commercial development of AI is driven by a handful of key players, it may exacerbate the challenges Norwegian companies face when competing in the digital economy.

The public sector is a custodian of large amounts of data that can be used to develop AI systems. In the UK, the National Health Service has been criticized for sharing anonymized health data with the Google-owned AI company DeepMind. One worry is that the company might use the data to develop a commercial AI platform for health services and displace smaller players from the market.

Superintelligence: In an open letter from 2015, the physicist Stephen Hawking, the entrepreneur Elon Musk and Google’s director of research Peter Norvig, among others, express deep concern about the risks associated with artificial intelligence, and call for more research on its societal consequences. Rather than a future of malicious, conscious robots, the signatories warn against a mature and highly capable technology whose values and goals are misaligned with humanity’s best interests.

In the Global Risks Report 2017, published by the World Economic Forum, AI is perceived as one of the emerging technologies with the greatest potential benefits, but also the greatest potential for harm.
