As deep learning algorithms and artificial intelligence applications become more widely used, concern has been growing about their ethical implications. In particular, questions have emerged both about the trustworthiness of AI algorithms and about the risks of relying on them for automated decision-making. How can technologists, regulators, and the general public be confident that AI algorithms are designed and used in ways that are safe and consistent with our values?
Apr 08, 2020
Although numerous studies have proposed ethical considerations for artificial intelligence, there is currently no standardized framework for putting those values into effect. A new working paper from the AI Ethics Impact Group (AIEIG), however, seeks to change that. The study, From Principles to Practice: An Interdisciplinary Framework to Operationalize AI Ethics, proposes the first system for measuring and implementing European ethical principles in the development and use of AI algorithms.
The AIEIG was initiated and organized by the VDE (Association for Electrical, Electronic & Information Technologies) with the Bertelsmann Stiftung. The High-Performance Computing Center Stuttgart's Department of Philosophy of Science and Technology of Computer Simulation was a key member of the working group, along with a multidisciplinary team of researchers from the Institute for Technology Assessment and Systems Analysis at the Karlsruhe Institute of Technology, the International Center for Ethics in the Sciences and Humanities at Tübingen University, the Technical University Darmstadt, the Technical University Kaiserslautern, and the iRights.Lab think tank.
VDE creates technical standards for electronics products of all kinds, and so its goal in organizing the AIEIG was to begin developing a standardized evaluation system for AI applications. In its new report, the group proposes a universally applicable AI ethics rating system similar to the labels used to promote energy efficiency.
"In developing our recommendations, we set out to avoid several potential traps," says Dr. Andreas Kaminski, who leads HLRS's Department of Philosophy of Science and Technology of Computer Simulation. "One approach might be to implement rules in the algorithm that would govern how it should make decisions, but it would be impossible to find consensus that would be internationally acceptable for doing so across all application contexts. A second approach might be to create advisory boards inside companies, but it's not hard to imagine them quickly becoming pro forma and ineffective. We chose a third way that aims to provide a clear, consistent framework people can use to orient themselves, and that at the same time is measurable and enforceable."
The AI Ethics Label with six selected values. Image: AIEIG.
Although many people might believe that a certain value is important, finding agreement on when a technology achieves that value is often difficult. For this reason, the AIEIG recognized the importance of defining criteria that clearly specify what a value means and when it is fulfilled. Because the fulfillment of criteria is not directly observable, the model also defines indicators that express the degree of fulfillment, each grounded in directly measurable observables.
The value of transparency, for example, refers to the explainability and interpretability of an algorithmic system. Here, one might ask where data that went into producing an algorithm came from, whether that data has been disclosed for others to review, and how well the algorithm's operation is explained and can be followed. For the value of justice, one might ask what steps were taken to prevent bias in the collection of data, whether assumptions were made in the design of the algorithm that could incorporate prejudices, and what effects on social justice could result from use of the tool. Each of the values is accompanied by a specific set of questions that consider observable aspects of that value, making it possible to easily evaluate the extent to which an AI algorithm incorporates it.
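To make this structure concrete, the following is a minimal sketch in Python of how a VCIO hierarchy might be represented as a data structure. The specific criterion, indicator, and observable shown are illustrative paraphrases of the transparency example above, not the report's exact wording, and the answer scale is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Observable:
    """A directly measurable fact, e.g. the answer to an audit question."""
    question: str
    answer: str  # e.g. "yes" / "no" / "partial" -- an assumed scale

@dataclass
class Indicator:
    """Expresses the degree to which a criterion is fulfilled."""
    description: str
    observables: list[Observable] = field(default_factory=list)

@dataclass
class Criterion:
    """Specifies what a value means and when it counts as fulfilled."""
    description: str
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class Value:
    """A top-level ethical value such as transparency or justice."""
    name: str
    criteria: list[Criterion] = field(default_factory=list)

# Illustrative instance, loosely following the transparency example above
transparency = Value(
    name="transparency",
    criteria=[Criterion(
        description="Provenance and disclosure of the data behind the algorithm",
        indicators=[Indicator(
            description="Data sources are documented and open to review",
            observables=[Observable(
                question="Has the data that went into the algorithm been disclosed?",
                answer="partial",
            )],
        )],
    )],
)
```

Keeping observables at the leaves of the hierarchy is what makes the scheme auditable: every claim about a value ultimately traces back to a directly checkable fact.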
The VCIO framework establishes a rigorous set of criteria, indicators, and observables for characterizing the extent to which an AI algorithm incorporates desired values. This example shows how the approach could be applied to assess justice as a value. Image: AIEIG.
The report proposes a rating scale of A to G for each of the key values tracked in the framework, with a rating of A indicating the best possible fulfillment of the criteria for that value. Once the evaluation is complete, an AI Ethics Label similar to those used in energy efficiency ratings would make it easy for others to quickly understand the ethical strengths and weaknesses of an AI tool.
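Purely to illustrate the mechanics of such a label, the sketch below maps a numeric fulfillment score to a letter grade. The report defines the A-to-G scale but no numeric scoring formula; the score range and the evenly spaced bands here are invented for the example.

```python
# Hypothetical mapping from a fulfillment score in [0, 1] to an A-G grade.
# The AIEIG report defines the A-to-G scale but no numeric scoring formula;
# the score range and evenly spaced bands here are invented for illustration.
GRADES = "ABCDEFG"

def ethics_grade(fulfillment: float) -> str:
    """Map a criteria-fulfillment score (1.0 = fully met) to a letter grade."""
    if not 0.0 <= fulfillment <= 1.0:
        raise ValueError("fulfillment must lie in [0, 1]")
    band = min(int((1.0 - fulfillment) * len(GRADES)), len(GRADES) - 1)
    return GRADES[band]

# An AI Ethics Label then amounts to one grade per tracked value:
label = {"transparency": ethics_grade(0.92), "justice": ethics_grade(0.55)}
print(label)  # {'transparency': 'A', 'justice': 'D'}
```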
By ranking specific key values separately and in a consistent way, the model is sensitive to the different contexts in which it might be applied. Depending on the field in which an AI system will be used, the authors point out, certain values might be more important than others. If an AI system is primarily used as part of an industrial process, for example, transparency might not be so important. For a different AI tool used in medical care, however, it might be essential. The flexibility of their model, the authors suggest, could make it adaptable to AI algorithms of all kinds.
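Rating each value separately is what enables this kind of context sensitivity. A small sketch of the idea follows, with entirely hypothetical per-context minimum grades; the report does not specify such thresholds.

```python
# Hypothetical minimum grades per field of use; the contexts and thresholds
# are assumptions chosen to mirror the industrial-vs-medical example above.
REQUIRED_GRADES = {
    "industrial_process": {"reliability": "B"},  # transparency less critical here
    "medical_care": {"transparency": "A", "reliability": "A"},
}

def meets_requirements(label: dict, context: str) -> bool:
    """Check an AI Ethics Label (value -> grade) against a context's minimums.

    Letter grades compare correctly as strings, since 'A' < 'B' < ... < 'G'.
    """
    return all(
        label.get(value, "G") <= minimum
        for value, minimum in REQUIRED_GRADES[context].items()
    )

# The same label can pass in one context and fail in another:
label = {"transparency": "C", "reliability": "A"}
print(meets_requirements(label, "industrial_process"))  # True
print(meets_requirements(label, "medical_care"))        # False: transparency 'C' > 'A'
```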
In addition to considering values, the AIEIG's approach proposes a set of criteria for determining the potential risks involved in the use of an AI algorithm. This makes it possible to further refine a general ethical evaluation of the design of the tool to address important questions related to the context in which it will be used.
The framework proposes a set of criteria that can be used to identify the intensity of the potential harm that could result from use of the algorithm. Here, for example, reviewers might ask whether an AI system could infringe on a person's legal rights, how many people it might affect, whether it could cause catastrophes or loss of life, or even whether it could change the functioning of society as a whole.
In addition, the model considers how dependent individuals could become on an automated decision-making program. Reviewers would consider, for example, whether automated decisions are verified and processed by humans, whether individuals have a choice about the extent to which they are subjected to automated decision-making, or whether procedures are in place to redress false or harmful decisions that an AI system might make.
Although the authors acknowledge that assessing risk can be a subjective undertaking, the report proposes a five-tiered system for classifying an AI algorithm based on the two axes described above.
Where the potential for damage is low, the authors suggest, an ethics rating for an AI system might be unnecessary. As the potential for harm and dependency increases, however, ratings for values assessed using the VCIO model, such as transparency, reliability, or justice, would take on greater importance and require further review. In the most extreme cases, such as weapons systems with a high damage potential, the authors propose that machine learning components should not be allowed, or at the least that they would have to be adjusted to achieve a permissible ethical classification level.
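One way to picture the two-axis classification is the sketch below. The report's actual matrix and class boundaries are more nuanced; the 0-to-4 axis scores and the max rule used here are assumptions for illustration only.

```python
def risk_class(harm_intensity: int, dependence: int) -> int:
    """Combine the two risk axes into a class from 0 (lowest) to 4 (highest).

    Both axes are assumed to be scored 0 (negligible) to 4 (extreme) by
    reviewers answering questions like those described above. Taking the
    maximum reflects one possible reading: a system that can cause extreme
    harm stays high-risk even when humans remain in the loop.
    """
    if not (0 <= harm_intensity <= 4 and 0 <= dependence <= 4):
        raise ValueError("axis scores must lie in the range 0..4")
    return max(harm_intensity, dependence)

# Class 0 would need no ethics rating at all; the highest class corresponds
# to cases like high-damage weapons systems, where the authors argue machine
# learning components should not be permitted.
assert risk_class(0, 0) == 0   # negligible on both axes
assert risk_class(4, 1) == 4   # extreme harm potential dominates
```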
The AIEIG believes that the system described in its study could be useful for many different kinds of users. For manufacturers of AI systems, for example, it could help identify high-risk AI applications, and demonstrating a high ethical rating could bring reputational advantages. For regulators, it could help distinguish AI applications that require close regulation from those that pose low ethical risks. For ethically sensitive consumers and purchasers of AI systems, it would provide valuable information for decision-making, as well as reassurance that AI products meet regulatory standards.
The report acknowledges that it is not intended to be the final word on ethics in AI. In the future, the AIEIG looks forward to working with additional stakeholders to refine the ethical criteria proposed in the VCIO examples, and to using its work as a foundation for developing and implementing universal AI standards.
"We have timed the conclusion of our work to coincide with the consultation phase of the EU White Paper on AI," says Sebastian Hallensleben, Head of Digitalization and AI at VDE and leader of the AIEIG. "In fact, we have been in discussion with the key AI people in the European Commission for several months to ensure that the upcoming EU regulatory framework can benefit from the approach we have developed."
—Christopher Williams