Call for Papers for the Journal of Human-Technology Relations topical collection on
Ethics and Normativity of Explainable AI: Explainability as a Social Practice
Guest Editors
- Tobias Matzner, Paderborn University, Department of Media Studies
- Suzana Alpsancar, Paderborn University, Department of Philosophy
- Martina Philippi, Paderborn University, Department of Philosophy
- Wessel Reijers, Paderborn University, Department of Media Studies
Description
Due to the opacity of AI systems, academics, policy makers, and engineers are pursuing the aim of attaining ‘explainable AI.’ Commonly, explainability of AI is understood as an epistemic quality with ethical ramifications that attaches to the artifact and can be realized in a technical sense, for instance, through counterfactual explanations or explanatory heat maps. However, explainability should rather be understood as a social practice in which humans and technologies (i.e., AI systems) relate in particular, discursive ways.
Understood as being embedded in social practices of explaining, explainable AI is not per se helpful or wanted; it can also be a burden. This raises the question of the ethics and normativity of explainable AI, as it is crucial to determine when and for whom a given explanation is likely to be valuable, and when it is not.
The normativity of explanations can generally be grasped from an economic point of view (efficiency, comfort), from a social point of view (roles, constellations, contexts, stakeholder interests), and from an ethical-political point of view (obligations, duties, virtues, impacts of (not) explaining).
This topical collection aims to sort out these different normative expectations and implications and to link them to the task of value-oriented technology design. In particular, we welcome papers focusing on the following themes:
First, what are the limits of explainable AI in supporting social practices of explaining? When is it helpful for users, institutionally needed, or even ethically mandatory, and when is it not? When does it draw attention away from other ethical concerns, e.g., transparency?
Second, we would like to explore the task of operationalizing ethical demands, such as explainability, and the context-dependency of the value and adequateness of explanations. To what extent can existing ethical frameworks cover contextual factors of human-technology relations, how would these need to be modified, and where are the limits of designing for situated actions?
Third, we would like to translate theoretical frameworks and principles into responsible innovation and design practices. This could be done, for instance, by connecting a perspective on explainable AI as realizing discursive human-technology relations with more applied approaches such as value sensitive design (VSD).
We invite the submission of papers focusing on, but not restricted to, the following questions:
- How could we better understand explainable AI as a social practice through which humans and technologies discursively relate?
- Considering explainable AI as a social practice, how can we determine in which social contexts it is desirable and, potentially, undesirable?
- What theories in philosophy of technology, STS, and other related disciplines may help construct a framework for understanding the normative impacts of AI explainability?
- How could approaches to responsible innovation, such as value sensitive design (VSD), be leveraged to connect theoretical insights into explainable AI as a social practice to responsible innovation practices and policies?
- What are the ethical and political implications of a lack of explainability when AI systems are used, for instance, in social welfare systems?
Timetable
Deadline for paper submissions: February 1st, 2025
Deadline for paper reviews: March 28th, 2025
Deadline for submission of revised papers: April 18th, 2025
Deadline for reviewing revised papers: May 30th, 2025
Accepted papers will be published on a rolling basis as they are accepted.
Submission guidelines: When submitting, please indicate on the first page of the cover letter that your paper is intended for the topical collection “Ethics and Normativity of Explainable AI”.
For any further information, please contact: martina.philippi@uni-paderborn.de and wessel.reijers@uni-paderborn.de.