A proposed epistemic model representing human reasoning and knowledge
The aim of this paper is to find a logic that can model how agents reason about their knowledge. Accordingly, a model should be established that avoids logical omniscience while allowing logical competence. Logical omniscience is the problem that agents are assumed to know all logical truths and all logical consequences of their knowledge (Parikh 1987). To arrive at the desired target model, existing models are tested on their capability to avoid logical omniscience while allowing logical competence. First, Kripke models are discussed, which fail to avoid logical omniscience (Fagin et al. 1995). Second, minimal models are introduced, which also fail to circumvent logical omniscience completely (Chellas 1980). Third, two hyperintensional models are proposed: awareness logic and impossible worlds semantics (Fagin et al. 1995). Both models solve logical omniscience, but they assume agents to be logically incompetent. Yet human agents are capable of drawing some trivial inferences from their knowledge, which makes them logically competent (Cherniak 1981). Therefore, the target logic should add a concept that simulates how agents reason about their knowledge. The target logic thus applies a dynamized version of the impossible worlds model, which models logically non-omniscient yet logically competent agents (Bjerring and Skipper 2019). Furthermore, it models agents with different degrees of cognitive resources. Further research could be done to find other logics that can model human reasoning and knowledge.
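The closure property behind logical omniscience in Kripke models can be sketched computationally. In the toy model below, the worlds, the accessibility relation, and the propositions are illustrative assumptions (they do not come from the paper); knowledge is evaluated in the standard way, as truth in all accessible worlds.

```python
# A minimal sketch of Kripke semantics for knowledge, showing why it
# forces logical omniscience: knowledge is closed under consequence.

WORLDS = {"w1", "w2", "w3"}

# Accessibility relation for a single agent: from each world, the set of
# worlds the agent considers possible (assumed for illustration).
R = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}

def K(prop):
    """Worlds at which the agent knows `prop`, where `prop` is the set of
    worlds in which it holds: every accessible world must satisfy it."""
    return {w for w in WORLDS if R[w] <= prop}

p = {"w1", "w2"}        # proposition p holds at w1 and w2
q = {"w1", "w2", "w3"}  # q holds wherever p does (p ⊆ q), so q follows from p

# Closure under logical consequence: at every world where the agent
# knows p, the agent automatically knows q as well.
assert K(p) <= K(q)
```

The assertion holds by construction: if every world the agent considers possible satisfies p, and p entails q, those worlds satisfy q too. This is exactly the omniscience the paper's target logic is meant to avoid.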