dc.description.abstract | Artificial Intelligence (AI) has increasingly become a central topic of public discussion. However, in these discussions, both among the general public and in academia, AI is not only portrayed as an engineering tool that addresses various problems but is also constantly compared to humans and credited with distinctively human cognitive abilities. The tendency to anthropomorphize AI systems, that is, to attribute human-like feelings, mental states, cognitive features, or behaviors to them, poses significant risks. In particular, it promotes distorted views of what the technology truly \textit{is} and fuels the ongoing AI hype. One form of anthropomorphism that is particularly pervasive within the AI research community arises from the inference that when an AI model performs a cognitive task, it does so in a human-like manner, thereby acquiring the same type of cognitive property as in the human case. This inference is grounded in a supposed similarity between brains and AI systems, which is constantly fed by two lines of conceptual borrowing: AI researchers use terms from the cognitive sciences to describe their systems, and brain scientists depict the brain and mind as input-output machines akin to computers.
From an ontologically oriented perspective, addressing anthropomorphism in AI involves evaluating the extent to which AI systems are similar to humans and whether they might possess human-like traits. In this thesis, I argue that AI systems do not, and will not, possess such traits. Given how central the psychological concepts used in AI are to the process of anthropomorphism, I critically examine the longstanding practice of conceptual borrowing from historical and philosophical viewpoints. This examination highlights the origins, consequences, and challenges of anthropomorphizing AI systems. Furthermore, I show how anthropomorphism bears on AI's scientific endeavors, emphasizing the risks of drawing misguided parallels between AI and human cognition. Finally, I propose that the use of psychological terms in AI should be revised due to fundamental ontological differences between humans and AI systems. Contrary to claims that linguistic reform is unattainable, conceptual engineering can redefine psychological terms in AI, reducing anthropomorphism and enhancing our understanding of cognition.