Automated Generation of Assessment Questions from Textbook Models
Summary
Assessment questions can greatly improve students' learning from several perspectives. As interactive learning activities, they allow students to break away from the mundane consumption of reading material, making learning more engaging. In addition, they can be used to assess students' knowledge of a particular topic, reinforce learning by repeating key domain concepts, or provide students with valuable feedback. To deliver these educational benefits, assessment questions need to meet numerous (linguistic) requirements: a high-quality question is relevant to the student, has a clear focus, and is written in clear, unambiguous language. Studies have shown, however, that these requirements are often not met. Moreover, the manual construction of assessment questions is time-consuming. Automatic question generation is a research area that aims to support this laborious and demanding process by automating the manual creation procedure.
This thesis describes such a system, which automatically generates assessment questions for educational purposes. It is created as a component of the Intextbooks platform, which extracts fine-grained knowledge models from PDF textbooks and converts them into semantically annotated learning resources. These knowledge models serve as a unique form of input, and the system demonstrates how their different components can be utilized for different generation tasks to construct a wide variety of question types. The system does not rely on manual annotations, explicit domain-specific knowledge, or external knowledge sources, and operates in a fully automated way. This makes it capable of creating assessment questions for any textbook in any domain.
With the help of the developed assessment component, textbooks processed by the Intextbooks platform become interactive educational tools that can assess students' knowledge of relevant concepts. The system was evaluated in an expert-based pilot evaluation in the statistics domain, the results of which show that the proposed generation approach is sound for most question types. In terms of assessment value, some generated question types fall behind manually constructed assessments, while others achieve comparable results. The generated questions are generally well worded and cover a good range of difficulty, showing the overall potential of using extracted knowledge models for automated assessment generation.