        WHAT AM I LOOKING AT? - TOWARDS AUTOMATIC VIDEO ANNOTATION FOR MOBILE EYE TRACKING RESEARCH

        View/Open
        Simons (5531179) thesis.pdf (11.78Mb)
        Publication date
        2021
        Author
        Simons, R.P.J.
        Summary
The development of wearable eye trackers in eye tracking research has allowed for more freedom in the experimental setup, but at the cost of extensive manual analysis time. This research tried to decrease the manual analysis time of mobile eye tracking experiments by developing a technique for semi-automatic annotation of the video material. First, different approaches proposed in other research were analysed to find possible improvements over the current solutions. Using object detection models seemed most promising, but a major problem with this technique is the extensive training sets that are required to train these models. This research proposes a new way to annotate a small part of the video material, which can be used to create the required training sets with less manual effort than in traditional annotation tools. In a preliminary comparison with the completely manual annotation process, an object detection model was trained that was able to label around 70% of the Areas of Interest in the video material. In this comparison the proposed approach became feasible after multiple participants, once the initial time needed to gather the training data is outweighed by the time saved on manual annotation of the gaze fixations; from that point onward, this approach seems quicker than other proposed solutions to this problem, with the added freedom of training on custom objects. The new approach is still in its early days, but the proposed combination of techniques seems quite promising.
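
To make the general idea concrete, below is a minimal illustrative sketch, not taken from the thesis: it assumes a pretrained torchvision Faster R-CNN detector (in the thesis's setting this would instead be fine-tuned on the custom Areas of Interest collected with the proposed annotation step), and the function name label_fixation and the frame/gaze inputs are hypothetical. It shows how detected bounding boxes could be matched against gaze coordinates to label a fixation with an Area of Interest.

import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Pretrained detector with generic COCO classes; a real pipeline would
# fine-tune it on the custom Areas of Interest annotated beforehand.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def label_fixation(frame, gaze_xy, score_threshold=0.5):
    """Return the label of the detected object containing the gaze point, if any.

    frame:   float tensor of shape (3, H, W) with values in [0, 1]
    gaze_xy: (x, y) gaze coordinates in pixels for this video frame
    """
    with torch.no_grad():
        detections = model([frame])[0]
    x, y = gaze_xy
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score < score_threshold:
            continue  # skip low-confidence detections
        x1, y1, x2, y2 = box.tolist()
        if x1 <= x <= x2 and y1 <= y <= y2:
            return weights.meta["categories"][int(label)]
    return None  # fixation did not land on any detected Area of Interest

Running such a function over every frame (or every detected fixation) is what would replace the frame-by-frame manual labelling described in the summary.
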
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/40570
        Collections
        • Theses