Conference Explores Philosophical Dimensions of Artificial Intelligence

[Photo: The Science and Art of Simulation conference 2019, held in a conference room of the HLRS building.]

Philosophers, social scientists, and historians considered the opportunities and limitations of AI, and its implications for society.

The increasing availability of applications involving artificial intelligence (AI) tends to provoke mixed responses. For many scientists, engineers, and companies, AI-related applications employing machine learning and deep learning algorithms offer opportunities for faster and more rational innovation, discovery, and management. For some critics, on the other hand, the steady infiltration of such frameworks into ever more corners of our lives signals a dangerous loss of human agency as decision making is outsourced to mysterious algorithms.

As artificial intelligence becomes more common, it raises many questions. What do we want it to do for us? How should society react to the changes it brings? And how could machine learning better incorporate society's needs and values? Coming to terms with such questions will require a clear understanding of exactly what learning algorithms are, how they compare to and differ from human intelligence, and what opportunities, limitations, and risks they bring.

In a three-day conference organized by HLRS's program in the Philosophy of Science & Technology of Computer Simulation, philosophers, social scientists, statisticians, computer scientists, and historians of science gathered to discuss how the perspectives that their disciplines offer could help elucidate such issues. The event, titled The Society of Learning Algorithms, was the 4th in an annual conference series called the Science and Art of Simulation. With 10 keynote talks and 16 additional presentations, the wide-ranging event focused on epistemological and ethical implications of artificial intelligence, historical perspectives on the rise of AI, the political consequences of AI models, and close analysis of specific AI applications.

"At this conference we would like to address the manifold sociality of learning algorithms," said Dr. Andreas Kaminski, head of HLRS's philosophy program, at the start of the conference. "We want to address this sociality on three levels: What conception and what design of society is found in the visions and in the algorithms? How do people and learning algorithms co-act? And how do learning algorithms interact with other algorithms?”

In addition to providing a forum for an informed, critical consideration of AI-related methods and their limits, the conference demonstrated that experts in disciplines outside of computer science have much to contribute to the development of more reliable and trustworthy computational tools.

The black box of machine intelligence

Many technologies are "black boxes" in the sense that the user need not understand how they work in order to operate them. After all, one does not need to know how a transmission works to drive an automobile. Classical technologies, however, can typically be opened up so that their inner workings can be observed in detail.

One troubling aspect of learning algorithms is that it is much more difficult to understand or reconstruct how they produce their results. Typically, training data are fed into an algorithm, which iteratively uses a learning strategy (in many current cases, deep neural networks) to identify patterns. In the resulting model, these patterns then inform the decisions the algorithm makes when it encounters similar data in the future. Even when such models function well in practice, however, the basis on which they make decisions remains obscure.
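To make the pipeline described above concrete, the following is a minimal sketch, not drawn from the conference itself, of training a small neural network with the scikit-learn library on synthetic data. The dataset, model size, and parameters are illustrative assumptions; the point is simply that the trained model's "knowledge" consists of learned weight matrices that do not explain individual decisions.

```python
# A minimal sketch (illustrative, not from the conference) of the training
# pipeline described above, using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "past observations": 1,000 samples with 20 numeric features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training: the algorithm iteratively adjusts internal weights to capture patterns.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The model makes reasonable predictions on unseen but similar data...
print("Accuracy on held-out data:", model.score(X_test, y_test))

# ...yet its "reasoning" is only a stack of learned weight matrices,
# which do not explain why any individual prediction was made.
print("Learned weight matrix shapes:", [w.shape for w in model.coefs_])
```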

For philosophers interested in the field of epistemology — the philosophy of how humans perceive and understand the world — the black box at the core of machine learning poses a problem: If it's impossible to reconstruct how an algorithm works, how do we know whether its outputs are correct? Can we trust machine learning algorithms, and do we really "know" anything when using them?

Despite these concerns, the gold rush into technologies for large-scale data analysis means that machine learning increasingly informs decision making about issues that have major impacts on people's lives. Banks use algorithms to decide whether a loan applicant is a good investment. During sentencing proceedings, courts have used algorithms to assess whether someone convicted of a crime poses a risk of future criminality. Machine learning is used in disease prediction, fraud detection, product manufacturing, forensics, online marketing, and countless other fields.

If the process through which a machine learning algorithm works is opaque, conference participants asked, how can we trust that the decisions it makes are rational and in line with our values as a society?

The inevitability of bias

As several speakers remarked, the issue of opacity is complicated by a problem that all forms of inductive reasoning encounter. Machine learning operates on the assumption that an algorithm can make predictions about the future based on observations extracted from past data. Philosophers going back at least to David Hume in the 18th century, however, have pointed out that such "inductive inferences" in human cognition can be fallible, as they rely on assumptions and limited experience (or, in this case, limited data sets).

Because the data set used to train a learning algorithm can never account for every possible instance of a particular phenomenon, and because of decisions a programmer makes in how to manipulate such data, machine learning always contains a bias — and thus, a risk. As Aljoscha Burchardt of the German Research Center for Artificial Intelligence (DFKI) pointed out, this can include statistical bias (whether the data used adequately represents the complexity of the problem the algorithm is addressing), inductive bias (mistakes in how the algorithm processes the data), or cognitive bias (ways of thinking that might lead to false interpretation of the results).
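The first of these, statistical bias, can be illustrated with a short, hedged sketch. The example below uses entirely synthetic data rather than anything presented at the conference: a classifier trained on data that underrepresents one group tends to perform noticeably worse on that group, because the learned decision boundary reflects the dominant group's data.

```python
# A hedged illustration (synthetic data, not from Burchardt's talk) of statistical
# bias: a model trained on data that underrepresents one group performs worse on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples for a group whose true decision boundary is shifted."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa_train, ya_train = make_group(1000, shift=0.0)
Xb_train, yb_train = make_group(20, shift=1.5)
X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression().fit(X_train, y_train)

# Evaluated separately, the model is far less reliable for the underrepresented group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.5)
print("Accuracy for well-represented group A:", model.score(Xa_test, ya_test))
print("Accuracy for underrepresented group B:", model.score(Xb_test, yb_test))
```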

In light of the opacity and bias of algorithmic decision making, Jan-Willem Romeijn, a philosopher of science at the University of Groningen, suggested that it is important that algorithms be understandable, have clear application criteria, avoid mistakes, and be accountable.

Romeijn also suggested that the history of science has something to contribute here, as generations of scholars have thought deeply about the problems surrounding inductive logic and the reliability of statistics, which also plays an important role in machine learning. He advocated for more cross-disciplinary dialogue on such issues, and suggested that scientists developing algorithms more systematically articulate the assumptions underlying their models and frame their results clearly within the limits of those assumptions. He also presented a case study focused on algorithms for classifying psychiatric illnesses as an example of how such a collaboration can function.

Living with AI

Although many wish for more interpretable algorithms, some have argued that as long as a computer program works in practice, a mechanistic understanding of its function is not necessary. After all, neuroscientists face a host of open questions about how the human brain functions, though we tend to operate on the assumption that we are reasonable and can make good decisions.

As machine learning applications become useful appendages to our daily lives, Burchardt remarked, the key question is not whether we want artificial intelligence. Instead, the question is about the conditions under which we can trust automated processes to make decisions, or perhaps more specifically, how we want to cooperate with these machines. Burchardt suggested that today's AI resembles intuition more closely than reason, and envisioned that in the future, it would be desirable for humans to be able to provide better feedback to machines in automated decision making.

Looking at the history of technology also provides context for understanding the situation we are now in. As Rafael Alvarado, chair of the School of Data Sciences at the University of Virginia, pointed out in a survey of the history of networked communication, large-scale collections of data have become the most important component of the Internet, particularly as data mining became the core activity of platforms like Google and Facebook.

As the global "datasphere" has insinuated itself into all parts of life, Alvarado traced a transformation in which communication is moving away from human-to-human interaction and increasingly taking place between machines, with humans acting as mediators. Alvarado called the potential apotheosis of automated rationality, in which machines simply talk to other machines without human intervention, "magical," and questioned whether this is actually something that is good for humanity.

Such cautionary words underscored the timeliness and importance of the discussions that took place at the conference, and the need for perspectives from outside of computer science in crafting the digital future.

— Christopher Williams