
dc.rights.license  CC-BY-NC-ND
dc.contributor.advisor  Meijer, J.J.C.
dc.contributor.author  Perdijk, N.R.B.
dc.date.accessioned  2014-11-26T18:01:51Z
dc.date.available  2014-11-26T18:01:51Z
dc.date.issued  2014
dc.identifier.uri  https://studenttheses.uu.nl/handle/20.500.12932/18835
dc.description.abstract  Research into "Strong" or "True" Artificial Intelligence, the long-term project to create an Artificial Intelligence that is intelligent in the same sense that humans are, has most often focussed on creating the tools with which an AI can mimic our intelligent behaviour. Conspicuously less attention has been given to the reasons why humans do the things they do: our motivation, or the rewards and punishments we experience. This thesis aims to rectify that shortcoming. Rather than equipping an AI with tools for reasoning in the hopes that it will develop "meaning" on its own, this thesis proposes an AI that uses "meaning" as an important learning tool with which it can create its own reasoning. To find the source of meaning, this thesis explores the source of intelligence: the necessity for adaptability. By converting "intelligence" to Jack Copeland's "massive adaptability" and then reducing this "massive adaptability" to something called "bare-bone adaptability", this thesis finds a grounding point for Artificial Learning in biological Natural Selection and the monitoring of Homeostasis. To this end, it explores the learning behaviour of both single-celled organisms and complex organisms with a central nervous system (brain). Homeostasis, and the internal rewards that can be founded on it, returns time and again as an important source of organism behaviour and adaptation. An AI embodied in a body that it needs to maintain can be equipped with the basic tools to establish connections between its environment and any internal consequences. It can evaluate its interactions with the environment based on the impact they have on its internal "homeostasis" and use these evaluations to create meaningful storage and to motivate adjustments to its behaviour. Homeostasis can also form the basis on which self-initiated actions can be undertaken. This thesis contains a tentative first model of a potential Artificial Neuron structure designed to monitor homeostasis and modulate Neural connections based on a Global Reward Signal, using inhibitory and excitatory Neurons. This model, called "Motivated Artificial Intelligence" (MAI), is capable of detecting homeostatic changes, prompting action selection and triggering evaluative feedback that enhances network memory. Viewing intelligence as a form of adaptability makes it easier to recognise that intelligence is about establishing relations between internal representations, the external world and internal consequences. The nature of adaptability also makes clear that the underlying matter is insignificant: whether an intelligence is built from atoms or bits does not affect its ability to develop meaning. After all, we do not expect the atoms that make up our body, and with it our mind, to have any intrinsic meaning; it would be unfair to expect anything different from the symbols used to construct an Artificial Intelligence. In this way, the philosophical Symbol Grounding Problem is not so much solved as bypassed: the computer is no longer required to understand the symbols it manipulates or to attach meaning to them. Meaning is instead found in the AI's complex web of valued interactions that is constructed upon the symbols.
dc.description.sponsorship  Utrecht University
dc.format.extent  2425209
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.title  Artificial Reward and Punishment: Grounding Artificial Intelligence through motivated learning inspired by biology and the inherent consequences for the Philosophy of Artificial Intelligence.
dc.type.content  Master Thesis
dc.rights.accessrights  Open Access
dc.subject.keywords  reward, punishment, homeostasis, AI, Artificial, intelligence, meaning, grounding, natural selection, connections, Neural Networks, Neural Nets, MAI, Motivated Artificial Intelligence, motivation, global reward signal, strong AI, true AI, adaptability, massive adaptability, internal consequences, adaptation, environment, interaction, adjustment, action selection, symbol grounding problem
dc.subject.courseuu  History and Philosophy of Science
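The reward mechanism summarised in the abstract can be sketched in a few lines of Python. The sketch below is only an illustration under assumptions: the internal variables (energy, temperature), the setpoints and the learning rule are hypothetical stand-ins, not the Artificial Neuron structure developed in the thesis. It shows the core loop as the abstract presents it: the agent monitors deviations from homeostatic setpoints, derives a global reward signal from them, and uses that signal to strengthen or weaken the connections that drive action selection.

# A minimal, illustrative sketch of the homeostasis-driven reward mechanism
# described in the abstract above. This is NOT the thesis's actual MAI neuron
# model: the internal variables, setpoints and update rule are assumptions
# chosen only to show the general idea that deviations from homeostatic
# setpoints yield a global reward signal which strengthens or weakens the
# connections behind the chosen action.
import random


class HomeostaticAgent:
    def __init__(self, n_actions):
        # Internal "body" state the agent must keep close to its setpoints.
        self.state = {"energy": 0.5, "temperature": 0.5}
        self.setpoints = {"energy": 0.5, "temperature": 0.5}
        # One modifiable connection weight per action: a stand-in for the
        # excitatory/inhibitory neural connections mentioned in the abstract.
        self.weights = [0.0] * n_actions
        self.learning_rate = 0.1

    def homeostatic_error(self):
        # Total deviation from the setpoints; smaller means "healthier".
        return sum(abs(self.state[k] - self.setpoints[k]) for k in self.state)

    def select_action(self):
        # Prefer the action with the strongest (most excitatory) connection,
        # with a little exploration so new behaviour can still be discovered.
        if random.random() < 0.1:
            return random.randrange(len(self.weights))
        return max(range(len(self.weights)), key=lambda a: self.weights[a])

    def learn(self, action, new_state):
        # Global reward signal: positive if the action brought the body closer
        # to homeostasis, negative if it pushed it further away.
        error_before = self.homeostatic_error()
        self.state = new_state
        reward = error_before - self.homeostatic_error()
        # Reward excites (strengthens) the chosen connection; punishment
        # inhibits (weakens) it.
        self.weights[action] += self.learning_rate * reward
        return reward


if __name__ == "__main__":
    agent = HomeostaticAgent(n_actions=2)
    agent.state["energy"] = 0.2  # start "hungry": energy below its setpoint
    for _ in range(50):
        a = agent.select_action()
        # Hypothetical environment: action 0 ("eat") raises energy,
        # action 1 ("wander") burns energy.
        delta = 0.05 if a == 0 else -0.05
        energy = min(1.0, max(0.0, agent.state["energy"] + delta))
        agent.learn(a, {"energy": energy,
                        "temperature": agent.state["temperature"]})
    print("learned connection weights:", agent.weights)

Run as a script, the example strengthens the connection for the energy-restoring action while energy is below its setpoint and punishes it again once homeostasis is reached, printing the resulting connection weights after fifty interactions.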

