Towards unbiased assessment of adaptive expertise
Summary
Addressing Grand Societal Challenges (GSCs) such as climate change, equitable
resource distribution, and healthcare requires multidisciplinary approaches and robust problem-
solving skills. Higher education institutions can play a critical role in addressing GSCs by
preparing students as change agents, equipping them with essential skills such as Adaptive
Expertise (AE). AE enables effective performance in unfamiliar situations, allowing
individuals to understand and adapt methodologies as needed, which makes it a vital skill for
resolving GSCs.
In this study, we developed an instrument to measure AE externally, advancing beyond
traditional self-assessment methods to create an accurate and reliable assessment suitable
for educational and professional settings. We designed 72 AI-generated problem scenarios
featuring real-life problems that vary in complexity and knowledge domain. These variations
are designed to challenge individuals to produce novel solutions, thereby eliciting their AE. Our
measurement method presents individuals with four randomly selected scenarios from our
collection and asks them to propose solutions. These solutions are then evaluated through
AI-driven pairwise comparisons to construct a performance ranking, eliminating the need for
domain-specific experts and enabling multidisciplinary assessment. We validated this
method by comparing the results of our external measurement with those obtained through a
previously verified self-assessment.
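The summary does not specify how the pairwise comparison outcomes are aggregated into a ranking. As one plausible reading, the sketch below fits a Bradley-Terry model to the judgments of an AI comparator; the `ai_judge` function and the round-robin comparison design are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the pairwise-comparison ranking step, assuming a
# Bradley-Terry model; `ai_judge` stands in for whatever AI comparison
# the authors used.
from itertools import combinations

import numpy as np


def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """Estimate latent strengths from wins[i, j] = times solution i beat j,
    using the standard minorization-maximization update."""
    w = wins.astype(float) + 1e-6  # small prior avoids zero-win degeneracies
    p = np.ones(w.shape[0])
    for _ in range(iters):
        pairs = w + w.T                       # comparisons per pair
        denom = (pairs / (p[:, None] + p[None, :])).sum(axis=1)
        p = w.sum(axis=1) / denom
        p /= p.sum()                          # fix the scale (identifiability)
    return p


def rank_solutions(solutions, ai_judge):
    """Round-robin pairwise judging, then rank best-first by strength.
    `ai_judge(a, b)` is a hypothetical call returning 0 if `a` wins, else 1."""
    n = len(solutions)
    wins = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        if ai_judge(solutions[i], solutions[j]) == 0:
            wins[i, j] += 1
        else:
            wins[j, i] += 1
    return np.argsort(-bradley_terry(wins))
```

A full round-robin over N solutions requires N(N-1)/2 judge calls, so sparser comparison designs are common as N grows; the summary does not state which design was used.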
Our findings demonstrate that AE can be reliably measured externally across various
domains and levels of expertise, providing an instrument for the external assessment of AE in
Dutch educational and professional settings. This enables educational institutions to track the
development of AE in their students, contributing to the resolution of GSCs. Additionally, we
validated the use of generative AI to create and assess educational content and advanced the
understanding of AE.