
Student essays

Vol. 2 (2024)

Artificial Intelligence: Panacea or Non-Intentional Dehumanisation?

DOI
https://doi.org/10.59490/jhtr.2024.2.7272
Submitted
November 21, 2023
Published
May 28, 2024

Abstract

Applications of artificial intelligence (AI) can optimise our lives to a remarkable degree, and their reach will only grow as time passes. In many ways this is promising, but the forms AI takes in our society have also sparked concerns about dehumanisation. A common view holds that AI systems implicitly exert social power relations, whether intentionally or not, as may be the case with bias, so that the danger would disappear if only we improved our models and uncovered this hidden realm of intentional oppression. Such views, however, overlook the possibility that detrimental consequences may arise precisely because AI attains the favourable goals we set for it flawlessly. This problem of adverse side effects, which are strictly accidental to the goals we task AI with effectuating, is explored through the notion of “non-intentional dehumanisation”. To articulate this phenomenon, the essay consists of two parts. The first part establishes how naive AI usage presents a paradigmatic case of this problem. The second part argues that these issues occur in a two-fold fashion: AI risks inducing harm not only to the “used-upon” but also to the user. With this conceptual model, awareness may be brought to the downside of our ready acceptance of AI solutions.
