Towards Automatic Explaining Artificially Intelligent Behaviour
This thesis describes research towards automatically generating textual explanations for the behaviour of Artificial Intelligence techniques, in order to obtain greater transparency for HeMAS. HeMAS is an expert-system-like multi-agent system whose agents apply divergent Artificial Intelligence techniques: it applies data mining to medical patient data and deduction on the newly formed rules in order to give advice or a diagnosis concerning individual patients. In this thesis, a solution towards enhanced transparency is proposed that includes the adaptation of existing HeMAS agents to make them explainable and the implementation of a new agent that combines the information obtained from the other agents into coherent explanations. This research belongs to the fields of Explainable AI and Natural Language Generation, and borrows from the field of Argumentation Theory. Aspects involved in the generation of explanations are the application of Toulmin models to capture the reasoning of the agents, and the translation of First Order Predicate logic to natural language, which is largely covered by LogicBabelfish, a new small programming language created for this very purpose. The solution described in this document has been partially implemented and was applied to a small test case. The resulting explanations appear satisfactory.
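The two ingredients named above, a Toulmin-style record of an agent's reasoning and a translation of First Order Predicate logic to text, can be sketched as follows. This is a minimal illustrative example under assumed representations, not LogicBabelfish itself (whose syntax the abstract does not specify); the class, the English templates, and the medical rule are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ToulminStep:
    """Hypothetical record of one agent's reasoning step, Toulmin-style."""
    data: str     # grounds: what was observed about the patient
    warrant: str  # the mined rule that licenses the inference
    claim: str    # the conclusion: advice or a diagnosis

    def explain(self) -> str:
        # Assemble the parts into one explanatory sentence.
        return f"Because {self.data}, and since {self.warrant}, {self.claim}."

def translate(formula) -> str:
    """Recursively render a nested-tuple FOPL formula as English.

    Assumed encoding: ("forall", var, body), ("implies", lhs, rhs),
    ("and", lhs, rhs), and atoms ("pred", predicate text, subject).
    """
    op = formula[0]
    if op == "forall":
        return f"for every {formula[1]}, {translate(formula[2])}"
    if op == "implies":
        return f"if {translate(formula[1])} then {translate(formula[2])}"
    if op == "and":
        return f"{translate(formula[1])} and {translate(formula[2])}"
    # atomic predicate
    _, text, subject = formula
    return f"{subject} {text}"

# A mined rule: forall p. HasFever(p) AND HasRash(p) -> MayHaveMeasles(p)
rule = ("forall", "patient",
        ("implies",
         ("and", ("pred", "has a fever", "the patient"),
                 ("pred", "has a rash", "the patient")),
         ("pred", "may have measles", "the patient")))

step = ToulminStep(
    data="the patient has a fever and a rash",
    warrant=translate(rule),
    claim="the patient may have measles",
)
print(step.explain())
```

A real system would of course derive the `data` and `claim` fields from the agents' actual inputs and outputs rather than hard-coding them; the sketch only shows how a rule rendered from logic can serve as the warrant of a Toulmin step.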