Permitting Subtle Inconsistencies in Modal Space to Improve the Knowledge Representation of a More Human-like Agent
Summary
The field of Artificial Intelligence endeavors to formally describe human knowledge. Modal logic, with its framework of possible worlds, attempts to capture
the knowledge representation of an agent. However, some properties that follow from the definition of knowledge assume an omniscient reasoner. To avoid this
omniscience problem, impossible worlds can be added alongside the possible worlds.
This work addresses which definition of impossible worlds should be adopted to represent human knowledge and thus avoid logical omniscience.
It does so, first, by discussing the approaches of Jago [1] and Bjerring [2] in the literature on impossible worlds, and second, by combining the
proposals of both [1, 2] into a model that distinguishes between blatant and subtle impossible worlds and permits partial worlds.
In addition, the model introduces the concept of an inconsistency value, which provides deeper insight into which properties of impossible worlds are most
representative for modeling a human-like agent. This work suggests that an improved knowledge representation for an agent who is neither omniscient nor
unintelligent is obtained by permitting partial worlds with their subtle inconsistencies. Such knowledge representations could be applied in the
area of Artificial Intelligence concerned with creating intelligent systems that reason as humans do.
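To make these notions concrete, the following is a minimal sketch in Python of one way the blatant/subtle distinction and an inconsistency value could be operationalized. It is illustrative only: the formula encoding, the helper names (classically_consistent, blatantly_inconsistent, inconsistency_value, classify), and the particular measure used here (the minimum number of formulas that must be removed to restore classical consistency) are assumptions made for this sketch and need not coincide with the definitions developed in this work.

```python
from itertools import product, combinations

# Formulas are nested tuples, e.g. ('atom', 'p'), ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g). A world is a (possibly partial) set of formulas.

def atoms(f):
    """Collect the proposition letters occurring in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(sub) for sub in f[1:]))

def evaluate(f, valuation):
    """Classical truth value of a formula under a total valuation of its atoms."""
    op = f[0]
    if op == 'atom':
        return valuation[f[1]]
    if op == 'not':
        return not evaluate(f[1], valuation)
    if op == 'and':
        return evaluate(f[1], valuation) and evaluate(f[2], valuation)
    if op == 'or':
        return evaluate(f[1], valuation) or evaluate(f[2], valuation)
    if op == 'implies':
        return (not evaluate(f[1], valuation)) or evaluate(f[2], valuation)
    raise ValueError('unknown connective: ' + str(op))

def classically_consistent(world):
    """True if some valuation makes every formula in the world true (brute force)."""
    letters = sorted(set().union(set(), *(atoms(f) for f in world)))
    return any(
        all(evaluate(f, dict(zip(letters, bits))) for f in world)
        for bits in product([True, False], repeat=len(letters))
    )

def blatantly_inconsistent(world):
    """A formula and its explicit negation both appear in the world."""
    return any(('not', f) in world for f in world)

def inconsistency_value(world):
    """Hypothetical measure: the minimum number of formulas that must be
    removed to restore classical consistency (0 means the world is consistent)."""
    formulas = list(world)
    for k in range(len(formulas) + 1):
        if any(classically_consistent(set(kept))
               for kept in combinations(formulas, len(formulas) - k)):
            return k
    return len(formulas)

def classify(world):
    if blatantly_inconsistent(world):
        return 'blatantly impossible world'
    if not classically_consistent(world):
        return 'subtly impossible world'
    return 'possible (perhaps partial) world'

p, q = ('atom', 'p'), ('atom', 'q')
blatant = {p, ('not', p)}                    # explicit contradiction on the surface
subtle = {p, ('implies', p, q), ('not', q)}  # contradiction only after some reasoning
partial = {p}                                # silent about q: a partial world
for name, w in [('blatant', blatant), ('subtle', subtle), ('partial', partial)]:
    print(name, classify(w), 'inconsistency value:', inconsistency_value(w))
```

Under this toy measure, the blatant and the subtle example both receive the value 1; what the sketch illustrates is whether the contradiction is explicit on the surface of the world or is only uncovered after some reasoning, which is the distinction the proposed model exploits.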