The project is divided into six subprojects with independent but connected research questions:
We will conduct case studies in engineering and computer simulation (CFD) to understand the relationship between the epistemic opacity of computer-based methods and different modes of trust (trust in experts, in methods, and in the results of methods).
The project will design algorithms that enhance trust in different applications, such as the interface between science and policy. The resulting tools will support informed decision-making in complex environments.
We analyze how the problem of adversarial examples is addressed in AI research. Such misclassifications raise specific security concerns for real-world systems. We first ask what characterizes current adversarial defense strategies. Second, we ask how realistic their underlying threat models are, with the aim of reducing mistrust in real-world AI systems.
Trust is a crucial element in medicine and healthcare. As physicians adopt computational and informational technologies for diagnosis and treatment planning, and as governments and health authorities use them for public-health policy making, stakeholders will benefit from knowing which clinical decision support systems they can trust. This subproject specifies the characteristics of trustworthy medical systems.
Our subproject on criminological contexts examines how investigations based on both human and computer-aided testimony gain trust in their results. We analyze the use of virtual crime scene reconstructions to understand which reasons justify these forms of trust. In whom or what do we trust when we trust the results of investigations based on witness reports or virtual models?
Simulations are a promising tool for facilitating participatory urban planning processes. However, this requires that the stakeholders involved have sufficient trust in the simulations. In this subproject, we first address the question of what trust in a simulation means in this context. Second, we want to know how various problematic forms of mistrust and doubt can be overcome. What factors determine whether people assess techniques as reliable or unreliable, and the people involved as trustworthy or untrustworthy?
1 August 2020 – 30 June 2024
Philosophy & Ethics
Philosophy of Computational Sciences
MWK Baden-Württemberg
Outcomes of the Trust in Information project include the following:
Follow-up projects will investigate the themes “Reproducibility and Simulation Avoidance” and “Modelling for Policy.”
Head, Department of Philosophy of Computational Sciences