WHAT AM I LOOKING AT? - TOWARDS AUTOMATIC VIDEO ANNOTATION FOR MOBILE EYE TRACKING RESEARCH
Summary
The development of wearable eye trackers has given eye tracking research more freedom in experimental setup, but at the cost of extensive manual analysis time. This research aims to reduce the manual analysis time of mobile eye tracking experiments by developing a technique for semi-automatic annotation of the recorded video material. First, approaches proposed in previous research were analysed to identify possible improvements over current solutions. Object detection models appeared most promising, but a major drawback of this technique is the extensive training set required to train such a model. This research proposes a new way to annotate a small part of the video material, which can be used to create the required training set with less manual effort than traditional annotation tools demand. In a preliminary comparison with the fully manual annotation process, an object detection model was trained that was able to label around 70\% of the Areas of Interest in the video material. In this comparison the proposed approach pays off after multiple participants, once the initial time spent gathering training data is outweighed by the time saved on manual annotation of gaze fixations; from that point on, the approach appears faster than other proposed solutions to this problem, with the added freedom of training on custom objects. The new approach is still in its early days, but the proposed combination of techniques seems quite promising.
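As a hypothetical illustration of the core idea, the sketch below shows how a gaze fixation could be assigned an Area of Interest label by testing whether it falls inside a bounding box predicted by a trained object detection model. The data classes, field names, and the confidence threshold are assumptions made for the sake of the example and do not reflect a specific implementation from this research.

\begin{verbatim}
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    label: str    # AOI name predicted by the object detection model
    x1: float     # bounding box corners in frame (pixel) coordinates
    y1: float
    x2: float
    y2: float
    score: float  # detection confidence in [0, 1]

@dataclass
class Fixation:
    frame: int    # video frame index of the fixation
    x: float      # gaze position in frame (pixel) coordinates
    y: float

def label_fixation(fixation: Fixation,
                   detections: List[Detection],
                   min_score: float = 0.5) -> Optional[str]:
    """Return the AOI label of the detected box containing the gaze
    point, or None if the fixation lies outside every detection."""
    hits = [d for d in detections
            if d.score >= min_score
            and d.x1 <= fixation.x <= d.x2
            and d.y1 <= fixation.y <= d.y2]
    if not hits:
        return None
    # When boxes overlap, prefer the most confident detection.
    return max(hits, key=lambda d: d.score).label
\end{verbatim}

Running a detector on each video frame and applying such a mapping to every fixation would yield the automatic AOI labels that otherwise have to be assigned by hand.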