Visualizing Multi-Criteria Evaluations of Application Data in University Admissions: Supporting Holistic and Collaborative Decision-Making
Summary
University admissions processes are complex, requiring evaluators to assess multiple diverse data points (such as academic records, CVs, and motivation letters) to form a holistic view of an applicant’s potential. However, the manual nature of these assessments can lead to inconsistencies, particularly when multiple evaluators are involved. This thesis presents EvaluationViz, a visualization tool designed to support multi-criteria decision-making (MCDM) in university admissions. The tool enables evaluators to assess applicants across seven data points (previous education, grades transcript, language proficiency, CV, motivation letter, reference letters, and writing sample) using three metrics: Score, Weight, and Uncertainty. EvaluationViz integrates interactive visualizations, such as radar/spider charts, tabular panel bar charts, stacked bar charts, and lollipop charts, to present application evaluation data clearly and intuitively, facilitating more transparent, consistent, and collaborative decision-making.

The tool’s development was guided by a literature review on information visualization and MCDM, and further refined through a pilot study involving 9 admissions experts at Utrecht University, followed by a feedback session with 7 participants. The evaluation consisted of a controlled experiment with 12 participants, grouped into four teams of three. Each participant evaluated applicants in two phases: first individually, and then as part of a group. Each phase used different anonymized applicant profiles, but participants performed the same decision-making task with the same evaluation rubric in both phases. In the first phase, participants assessed applicants without visualizations, relying solely on the provided forms. In the second phase, they repeated the assessment using EvaluationViz.

Results revealed that, without visual aids, participants struggled to synthesize the various application data points, often focusing on isolated factors such as grades and overlooking the integration of the three metrics. By contrast, in the second phase, participants reported that the visualizations, particularly the radar chart, tabular panel bar chart, and lollipop chart, helped them better integrate scores, weights, and uncertainties, and identify where in the data these metrics affected the overall evaluation. One participant noted that the visualizations allowed them to “see where we placed the most weight and uncertainty in the decision,” leading to more structured and evidence-based group discussions. The tool also reduced fragmented discussions, allowing participants to focus on the holistic assessment of each applicant.

Despite these promising results, the evaluation was conducted using simulated data in a controlled environment, which limits its direct applicability to real-world, fast-paced admissions processes. Future research should focus on testing the tool with actual applicant data, automating specific evaluation tasks, adding more dimensions, integrating historical data, and enabling comparisons with similar applicants through visual encodings such as Sankey diagrams. By addressing these limitations, EvaluationViz has the potential to become a valuable asset in university admissions, fostering more efficient, data-driven, and equitable decisions.
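The summary above does not specify how the three metrics combine into an overall judgment. As a minimal sketch, assuming a weighted-average aggregation over the seven data points (all names, scales, and the aggregation rule itself are illustrative, not the thesis's implementation):

```python
from dataclasses import dataclass

# The seven application data points evaluated in EvaluationViz.
CRITERIA = [
    "previous_education", "grades_transcript", "language_proficiency",
    "cv", "motivation_letter", "reference_letters", "writing_sample",
]

@dataclass
class CriterionRating:
    score: float        # evaluator's score for this data point (illustrative 0-10 scale)
    weight: float       # relative importance the evaluator assigns to this data point
    uncertainty: float  # evaluator's doubt, 0 (certain) to 1 (very uncertain)

def aggregate(ratings: dict[str, CriterionRating]) -> tuple[float, float]:
    """Return (overall score, overall uncertainty) as weight-normalized averages."""
    total_weight = sum(r.weight for r in ratings.values())
    overall_score = sum(r.score * r.weight for r in ratings.values()) / total_weight
    overall_uncertainty = sum(r.uncertainty * r.weight for r in ratings.values()) / total_weight
    return overall_score, overall_uncertainty

# Example: an applicant rated strongly on grades but with an uncertain writing sample.
ratings = {c: CriterionRating(score=7.0, weight=1.0, uncertainty=0.2) for c in CRITERIA}
ratings["grades_transcript"] = CriterionRating(score=9.0, weight=2.0, uncertainty=0.1)
ratings["writing_sample"] = CriterionRating(score=6.0, weight=1.0, uncertainty=0.6)
print(aggregate(ratings))
```

Note that EvaluationViz presents these quantities visually (for example, as radar-chart axes) rather than collapsing them into a single number; the sketch is only meant to make the respective roles of Score, Weight, and Uncertainty concrete.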