This article by Robindra Prabhu was also published in the book “The Next Horizon of Technology Assessment – Proceedings from the PACITA 2015 Conference in Berlin” (PDF).

The rise of the algorithm presents both Technology Assessment (TA) practitioners and policymakers with nuanced and novel governance challenges, yet we often lack the tools and frameworks to tease out the ethical conundrums and the wider social stakes of these developments. This article argues that sound algorithmic governance rests in part on finding appropriate responses to the challenges associated with meaningful transparency, accountability and fairness.

The rise of the algorithm

When debating the various challenges related to the big data paradigm, the TA discussion has largely focused on the tail end of the buzzword, namely “data”. Such a focus triggers interesting and hugely important discussions about the myriad data traces left in the wake of our techno-driven lives, the novel pressures on data privacy these create, which fragments we should be able to collect and store and, once collected, how to ensure adequate protection against theft or misuse. While undeniably important, a singular focus on data and their associated risks often fails to capture the nuanced ethical questions that emerge in the complex big data machinery, many of which are only remotely connected to the data as such but nonetheless carry very real and important policy implications.

In particular, we will argue, there is a need for TA practitioners and policymakers to direct attention to the variety of algorithmic tools in use that help make data a utility. Be it online nudging, self-driving cars, patient risk scoring, credit evaluations, news aggregation or predictive policing: algorithms are quickly becoming more pervasive in society and are rapidly gaining traction in decision-making systems that are subtly woven into our day-to-day lives. With the advent of big data, algorithmic systems are poised to influence ever-larger portions of human activity, creating unique and distinct governance challenges that the “data protection and privacy” debate will often fail to elucidate. In “The Real Privacy Problem”, Morozov argues that algorithms are starting to infringe on human decision-making processes (Morozov 2013). We seldom understand how they work, but we have nonetheless become dependent on them, and afraid or unable to disregard their guidance (Danaher 2014).

Computational systems are certainly not new objects of study in the TA community, having long since become fundamental pillars of modern society. As Manovich puts it: “What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. I think of it as a layer that permeates contemporary societies” (Manovich 2013). Yet rapid advances in digital connectivity, machine learning and artificial intelligence, coupled with novel data streams, have both necessitated and catapulted algorithms to the fore. As Wagner et al. remark, algorithms are now integral to, or at least supporting tools in, an increasing number of decision-making processes, at times even acting as the sole decision-maker (Wagner 2015). No longer confined to the online sphere, where they regulate the information returned by a search engine or the news feed in our social networks, algorithms are now moving into areas of life where decision-making processes have traditionally been dominated by human judgment. Healthcare, employment, advertising, finance, law enforcement and education are but a few examples.

Peeking into the black box and beyond

The purpose of this paper is not to bemoan these developments, nor is it to praise the merits of algorithmic decision-making systems. The purpose is rather to precipitate a discussion on how the TA practitioner can create a framework for probing algorithms as unique sociotechnical entities and to devise appropriate policy responses for mitigating risks and for harnessing potentials. In particular, the discussion will center on challenges related to transparency, accountability and fairness, all considered important pillars of sound algorithmic governance.

Transparency

To the outside observer, algorithms may often appear to operate subtly and quietly behind the scenes. This opacity can make it difficult to understand precisely how an algorithm operates, when it is in use, and to what end it is employed. Moreover, TA practitioners and policymakers may often feel they lack the tools to study algorithms in action, to scrutinise their inner workings, to assess the wider social stakes and to design interventions that help mitigate risk.

As a result, it is easy to denounce algorithmic systems as “black boxes”, and a common response to this predicament is to demand more transparency. But how does one bestow meaningful transparency on an algorithmic system?

One instinctive response to algorithmic opacity is to ask for access to the computer source code. While source code undeniably gives valuable insights into the workings of an algorithm, many algorithms are proprietary, and there are very real arguments for maintaining trade secrecy. Third-party access to source code is therefore not trivially achieved. And even when access to source code is granted, there are at least two challenges to achieving meaningful transparency:

Complexity: The internal workings of an algorithmic system are often best understood by its development team. Complexity and interdependencies can make algorithms practically challenging to decipher even for competent third-party examiners. In the worst case, “source code” transparency may become little more than symbolic transparency, in much the same way as online notice-and-choice agreements wrapped in tedious and cryptic legal writing arguably give users no real basis for making an informed decision. Another instructive analogy is the tracking data that cell phone carriers in many countries are required to release to users upon request (Biermann 2011): unless you have the time, knowledge and resources to analyze and visualize such data in meaningful ways, your legal access rights to the data may be of little import and may at worst mask the privacy implications at play.

Values and judgments: Even if the source code can be fully deciphered, the source code alone may not suffice to shed light on the full “algorithmic complex”. Here it might be useful to examine the parallels with the modern factory assembly line (Gillespie 2014), such as a car plant. Along the assembly line we find a series of robots programmed to execute very specific operations on the input data (a proto-car). In this analogy the source code may perhaps be likened to the technical blueprint of a robot: while it is possible to check that the robot performs its tasks according to its blueprint, it is far harder to gauge what the orchestra of robots collectively produces at the end of the assembly line simply by studying the blueprints of the individual robots.

More importantly perhaps, it is hard to know from the robot blueprints alone how the car will behave on the road with human beings in it and around it. Moreover, assembly lines are seldom void of people. Like the robots, these workers will typically have very specific operational tasks closely intertwined with the operations of the machines (Gillespie 2014), yet they influence the final product in important ways; otherwise, they would not be there. And just as the assembly line is a man–machine system, so too is the “algorithmic complex”, in ways that far exceed the source code alone (a toy sketch follows the list below):

1) The algorithm exists to perform a specific task, part of a solution to a wider problem. This problem is defined by people, and its framing may influence the algorithmic output in important ways.

2) Models are created which contain assumptions, choices and simplifications made by people. These judgments may significantly impact the algorithmic output.

3) The source code implements these underlying models, instructing the algorithm to respond in certain ways to certain inputs. For example, the algorithm may be trained and optimized using training data that has been selected and curated by people.

4) The machinery is then fed some input data, which may also have been trimmed, selected or filtered in some way. The operational selection choices are again all made by people.

5) Finally the algorithmic system will output a result that is framed, interpreted and acted upon in a larger human decision-making complex.
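To make these junctions concrete, consider the following minimal sketch of a toy risk-scoring pipeline. Everything in it, the feature names, weights, threshold and applicant records, is invented for illustration; the point is simply to mark where, even in a few lines of code, human judgment enters the system.

```python
# Toy risk-scoring pipeline, annotated with the five junctions above.
# All names, weights, thresholds and data are hypothetical.

# Junction 1: problem framing. "Risk" is defined by people, here as a
# weighted propensity score; other framings would yield other systems.

# Junction 2: modelling choices. A linear score over two hand-picked
# features is a human simplification of a far messier reality.
WEIGHTS = {"late_payments": 0.7, "debt_ratio": 0.3}  # chosen by analysts

# Junction 3: the cut-off separating "high" from "low" risk is itself a
# value judgment made by people, not a property of the data.
RISK_THRESHOLD = 0.5

def risk_score(applicant: dict) -> float:
    """Weighted sum of curated features, clipped to [0, 1]."""
    score = sum(WEIGHTS[f] * applicant.get(f, 0.0) for f in WEIGHTS)
    return max(0.0, min(1.0, score))

def decide(applicant: dict) -> str:
    # Junction 5: the numeric output is translated into an action by a
    # human-designed decision rule embedded in a wider process.
    return "deny" if risk_score(applicant) >= RISK_THRESHOLD else "approve"

# Junction 4: which applicants (and which of their attributes) reach the
# system at all is a human filtering and selection choice.
applicants = [
    {"late_payments": 0.9, "debt_ratio": 0.2},
    {"late_payments": 0.1, "debt_ratio": 0.4},
]

for applicant in applicants:
    print(applicant, "->", decide(applicant))
```

None of these choices are visible in the system’s output, which is precisely the point made below.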

At all these junctions there are people involved. And while their tasks are often highly technical, specialized, procedural and focused, their involvement creates a number of entry points for value judgments, arbitrary choices, biases, harmful assumptions and potential discrimination.

In the final output of the algorithmic complex, these junctions are often rendered invisible to outsiders. Meaningful transparency should therefore aim to expose these junctions and the values at play. This requires mechanisms that shed light on the entire “algorithmic complex” as a man–machine system. The governance challenge is to find ways of making this dynamic transparent, and to this end source code access alone will seldom suffice.

Accountability and oversight

Closely related to meaningful transparency is the problem of accountability and oversight. Especially when algorithmic systems make decisions of import on and in people’s lives, a natural regulatory response is to demand that someone (or something?) watch over these systems and hold players accountable when something goes wrong.
As with transparency, such oversight may not always be trivial to achieve, and it brings at least four challenges to the fore:

1.    Locating agency: Proper accountability and oversight necessitate some knowledge of who does what, when they do it, to what end and whether it is in line with protocol. But complex man–machine systems like the “algorithmic complex” can make causal chains diffuse and distance the people involved from the wider societal consequences (Gillespie 2014).

2.    Efficacy: Algorithmic systems are put in place to achieve a certain predefined goal, a goal that is often defined by the employing institution or actor. For example, a predictive risk assessment of individual crime propensity may be employed with the aim of reducing crime and preventing individuals from pursuing criminal pathways. But how does one measure the efficacy of such systems and weigh them against alternative non-algorithmic practices? (One way to operationalize this question is sketched after this list.)

3.    Uncertainties and side effects: Algorithms are embedded in models that are shaped by assumptions, simplifications, human judgment, arbitrary choices, value choices and approximations. How can we ensure that the uncertainties that arise in the algorithmic output are duly accounted for in the decision-making process and that appropriate steps have been taken to mitigate unwanted side effects?

4.    Recourse and contestation: Does the subject of an algorithmic decision-making system have any real opportunity to contest the decisions made? Providing opportunities to contest single, discrete decisions, such as a credit risk score or the qualification for a social benefit, may seem straightforward. But how does one provide meaningful avenues of recourse against subtle and largely invisible algorithmic decisions that happen behind the scenes and whose effects only become visible after a long time has passed (such as online “filter bubbles”)?
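As a minimal illustration of the efficacy question in point 2, one hypothetical way to operationalize it is an outcome comparison between the algorithmic practice and the non-algorithmic baseline it replaces. The outcome records below are invented, and a real evaluation would require a careful trial design; the sketch only shows the shape such a measurement might take.

```python
# Hypothetical efficacy comparison: algorithm-assisted practice versus the
# existing human-judgment baseline. 1 = unwanted outcome occurred
# (e.g. reoffending), 0 = it did not. All records are invented.
from statistics import mean

baseline_outcomes = [1, 0, 1, 1, 0, 0, 1, 0]   # non-algorithmic practice
algorithm_outcomes = [0, 0, 1, 0, 0, 1, 0, 0]  # algorithm-assisted practice

baseline_rate = mean(baseline_outcomes)
algorithm_rate = mean(algorithm_outcomes)

print(f"baseline outcome rate:  {baseline_rate:.2f}")
print(f"algorithm outcome rate: {algorithm_rate:.2f}")
print(f"estimated reduction:    {baseline_rate - algorithm_rate:+.2f}")
```

Even this toy comparison raises the governance questions above: who defines the outcome measure, who selects the comparison groups, and who decides what reduction counts as success?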

Fairness

Strongly related to the topic of accountability, but more challenging still, is the issue of fairness in algorithmic systems. Algorithms are often touted as impartial, free of human bias and prejudice, neutral, procedural and hence “fair”. In fact, these perceived qualities are often given as key reasons for replacing human decision-making systems with algorithmic ones.

But if algorithmic systems are at least partial products of human judgments, assumptions, simplifications and curatorship, can they ever be truly neutral and fair?

Perhaps more so than any other governance challenge related to the algorithmic complex, the issue of fairness is intimately bound up with the issue of framing: what goals is the algorithmic system set up to achieve, and who are ultimately the intended beneficiaries?

Contrast the following entirely hypothetical cases:

1)    Genetic data is used to predict the risk of an infant developing a serious disease later in life. Early action can potentially reverse this path, lead to a better quality of life for the infant and substantially reduce costs for the health care system.

2)    Genetic data is used to predict the risk of an infant going down a criminal path. Early action can potentially reduce this risk with great savings for society, but will our response to this algorithmic output create real and viable alternative pathways for the infant, or will it be used to fence the child off from society?

As with any policy measure, the issue of fairness in algorithmic systems is intricately linked to framing and the wider context of their deployment. Do the systems serve to provide more opportunities, alternative pathways and better services, or do they lead to more discrimination, stigma and exclusion (Stanley 2014)? While these issues are by no means unique to algorithmic systems, they do perhaps raise novel governance challenges in the context of such systems:

a) How do we measure an alleged discriminatory effect? How can we ensure socially and ethically sound algorithmic design? (One established measure is sketched after this list.)

b) How do we assess unwanted ills that have become ingrained in social systems and are hence silently transferred to algorithmic systems, without explicitly forming part of their design? Without meaningful transparency, algorithms may at worst become formalized systems of discrimination, hiding behind a false garb of impartiality, both propagating and obscuring unfair and unwanted practices.
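As one concrete handle on question (a), the sketch below computes a disparate-impact ratio in the spirit of the “four-fifths rule” used in US employment-discrimination practice, where a protected group’s selection rate below 80% of the most favoured group’s rate is treated as evidence of adverse impact. The decision records and group labels are invented for illustration, and this is only one of many proposed fairness measures.

```python
# Hypothetical disparate-impact check in the spirit of the four-fifths
# rule. All decision records are invented for illustration.

def selection_rate(decisions: list) -> float:
    """Share of favourable ('approve') outcomes within a group."""
    return decisions.count("approve") / len(decisions)

group_a = ["approve", "approve", "deny", "approve"]  # e.g. majority group
group_b = ["approve", "deny", "deny", "deny"]        # e.g. protected group

impact_ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"impact ratio: {impact_ratio:.2f}")

if impact_ratio < 0.8:
    print("below the four-fifths threshold: possible adverse impact")
```

A measure like this only captures outcome disparities; it says nothing about the silently inherited biases raised in question (b), which is precisely why meaningful transparency over the whole algorithmic complex remains essential.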

Summary and conclusions

With the advent of big data, sensor networks and ubiquitous digital connectivity, algorithms are set to become ever more pervasive in decision-making processes across a variety of fields, ranging from online searches to credit risk scoring, from crime prevention to elementary school teaching, from financial trading to medical treatment. As algorithms become more pervasive in both quotidian and vital decision-making processes, often replacing or supplementing long established human decision-making processes, it becomes ever more important to establish frameworks that can tease out the values embedded in such automated systems. To do so, TA practitioners and policymakers will have to grapple with at least three challenges:

1)    Bestowing meaningful transparency on the entire sociotechnical algorithmic complex, exposing points of entry for value judgments, biases, arbitrary choices and human curatorship, and providing methods for evaluating their impact on the final output.

2)    Devising tangible mechanisms for ensuring oversight and accountability. In addition to locating agency in a diffuse man–machine system, this may involve devising novel methods to evaluate the efficacy and precision of the algorithmic output, as well as identifying unwanted societal side effects. Finally, policymakers will need to contemplate methods of recourse and contestation.

3)    Ensuring that algorithmic decision-making systems do not become formalized systems of veiled discriminatory practice by establishing mechanisms to ensure fairness and due process.

Developing the tools for probing the algorithms that shape our lives and the frameworks for their good governance is work that merits due attention from the TA practitioner.

 

References

Biermann, 2011: Was Vorratsdaten über uns verraten [What retained data reveal about us], Zeit Online (Feb 24, 2011) (accessed: Jul 25, 2015)

Danaher, 2014: Rule by Algorithm? Big Data and the Threat of Algocracy, blog: Philosophical Disquisitions (accessed: Feb 10, 2015)

Gillespie, 2014: Algorithm, Culture Digitally (Jun 25, 2014) (accessed: Feb 12, 2015)

Manovich, 2013: The Algorithms of Our Lives, The Chronicle of Higher Education, The Chronicle Review (Dec 16, 2013) (accessed: Jul 25, 2015)

Morozov, 2013: The Real Privacy Problem, MIT Technology Review (Oct 22, 2013) (accessed: Feb 10, 2015)

Stanley, 2014: Chicago Police “Heat List” Renews Old Fears About Government Flagging and Tagging, American Civil Liberties Union (Feb 25, 2014) (accessed: Feb 17, 2015)

Wagner, 2015: The Ethics of Algorithms: From Radical Content to Self-Driving Cars, Centre for Internet and Human Rights (accessed: Jul 25, 2015)
