
Ethics and Normativity of Explainable AI: Explainability as a Social Practice


Due to the opacity of AI systems, academics, policy makers, and engineers are pursuing the aim of attaining ‘explainable AI.’ Commonly, the explainability of AI is understood as an epistemic quality with ethical ramifications that attaches to the artifact and can be realized in a technical sense, for instance through counterfactual explanations or explanatory heat maps. However, explainability should be understood as a social practice in which humans and technologies (i.e., AI systems) relate in particular, discursive ways. Embedded in social practices of explaining, explainable AI is not per se helpful or wanted; it can also be a burden. This raises the question of the ethics and normativity of explainable AI, as it is crucial to determine when, and for whom, which explanation is most likely to be valuable, and when it is not. This topical collection serves to sort out the different normative expectations and implications of explainable AI.

Guest editors:

  • Prof. Tobias Matzner, Paderborn University, Department of Media Studies
  • Prof. Suzana Alpsancar, Paderborn University, Department of Philosophy
  • Dr. Martina Philippi, Paderborn University, Department of Philosophy
  • Dr. Wessel Reijers, Paderborn University, Department of Media Studies

Submission deadline: February 1, 2025

For more information, contact martina.philippi@uni-paderborn.de or wessel.reijers@uni-paderborn.de.

