
        Biological and Artificial Neural Representations of Concepts: Effects of Multimodal Learning in Sighted and Blind Individuals vs. Foundation Models

        View/Open
        Thesis updated version.pdf (4.010Mb)
        Publication date
        2025
        Author
        Pieterse, Pien
        Summary
Human language acquisition, processing, and generation are often deeply intertwined with visual experience. Hence, it has been claimed that (congenitally) blind individuals process linguistic features, such as the concreteness of words, differently from sighted individuals. However, more recent research on conceptual processing offers a more nuanced view, as blind individuals do not necessarily process concreteness much differently than sighted individuals. Furthermore, alignment between the human brain and foundation models in conceptual processing provides insight into how visual experience shapes conceptual representations in human and artificial neural networks. This study investigated how visual experience influences the neural representation of abstract and concrete concepts in blind and sighted individuals, and how these brain representations align with those of unimodal and multimodal language models. Using fMRI data and representational similarity analysis, the study revealed that blind individuals had stronger activation in both visual and language areas during conceptual processing, indicating neural repurposing and plasticity independent of visual input. Comparisons between blind and sighted neural representations showed that visual experience enhances the representation of concrete concepts, whereas blind individuals showed stronger alignment with abstract concepts. Furthermore, foundation model–brain alignment was highly dependent on the specific foundation model, its layer depth, and its visual experience. The multimodal CLIP model (with text-only input) showed stronger alignment with brain responses than the unimodal XLM-RoBERTa, particularly during concrete concept processing. Trends suggested that most tested foundation models aligned best with neural data during concreteness processing. However, the effects of concreteness and abstractness on alignment were consistent across blind and sighted participants, suggesting that shared visual experience does not necessarily improve alignment in conceptual processing: conceptual representations of multimodal models are not necessarily better aligned with sighted individuals, nor those of unimodal models with blind individuals, during concreteness or abstractness processing. Overall, these findings show how visual grounding in both human and artificial conceptual representations affects linguistic processing and understanding, offering new insights into neuroplasticity, embodied cognition, and the further design of visually grounded AI systems.
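
The summary mentions representational similarity analysis (RSA) as the method for comparing brain responses with foundation-model representations, but the page does not describe the computation. The following is a minimal, illustrative Python sketch of a generic RSA alignment step, not the thesis's actual pipeline: the arrays (brain_patterns, model_embeddings), their shapes, and the choice of correlation distance and Spearman correlation are assumptions made here for illustration only.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder inputs (random data, not the study's):
#   brain_patterns: per-concept voxel activation patterns from one ROI,
#                   shape (n_concepts, n_voxels)
#   model_embeddings: per-concept embeddings from one layer of a language
#                     model (e.g. a CLIP text encoder or XLM-RoBERTa),
#                     shape (n_concepts, n_dims)
rng = np.random.default_rng(0)
n_concepts = 40
brain_patterns = rng.standard_normal((n_concepts, 500))
model_embeddings = rng.standard_normal((n_concepts, 768))

# Build representational dissimilarity matrices (RDMs) using
# correlation distance; pdist returns the condensed upper triangle.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_embeddings, metric="correlation")

# Model-brain alignment: rank correlation between the two RDMs.
alignment, p_value = spearmanr(brain_rdm, model_rdm)
print(f"RSA alignment (Spearman rho): {alignment:.3f} (p = {p_value:.3g})")

In a study like the one summarized above, this alignment score would be computed separately per participant group (blind vs. sighted), per model and layer, and per concept type (concrete vs. abstract) to support the comparisons described.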
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/50489
        Collections
        • Theses