Due to the opacity of AI systems, academics, policy makers, and engineers are pursuing the aim of ‘explainable AI.’ Commonly, the explainability of AI is understood as an epistemic quality with ethical ramifications that attaches to the artifact and can be realized in a technical sense, for instance through counterfactual explanations or explanatory heat maps. However, explainability should be understood as a social practice in which humans and technologies (i.e., AI systems) relate in particular, discursive ways. Embedded in social practices of explaining, explainable AI is not per se helpful or wanted; it can also be a burden. This raises the question of the ethics and normativity of explainable AI: it is crucial to determine when, for whom, and in what form an explanation is likely to be valuable, and when it is not. This topical collection serves to sort out the different normative expectations and implications of explainable AI.
Guest editors: Martina Philippi and Wessel Reijers (Paderborn University)
Submission deadline: February 1, 2025
For more information, contact martina.philippi@uni-paderborn.de and wessel.reijers@uni-paderborn.de.