
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Bosch, Antal van den
dc.contributor.author: Risco Patón, Ainhoa
dc.date.accessioned: 2025-08-21T00:02:44Z
dc.date.available: 2025-08-21T00:02:44Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/49833
dc.description.abstract: This research studies the viability of memory-based large language models (LLMs) as an eco-friendly alternative to transformer-based LLMs, addressing the growing environmental concerns associated with the energy demands of AI. The study evaluates memory-based models (IB1-IG, IGTree, and TRIBL2 from the TiMBL package) against transformer-based models (GPT-2, GPT-Neo) on performance metrics including next-token prediction accuracy, latency, and carbon emissions during the inference phase. Findings indicate that while transformer-based models achieve higher accuracy, they also incur significantly greater, exponentially scaling computational costs and latency. In contrast, the IGTree model demonstrates respectable accuracy (9% to 18%) with near-zero latency and minimal, stable computational cost, positioning it as a strong candidate for sustainable AI. While memory-based architectures like IGTree can offer effective performance without the high environmental and operational costs of their transformer-based counterparts, future research should address the whole LLM lifecycle and push for greater transparency in AI energy-use disclosure.
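
The abstract describes an inference-phase comparison on next-token prediction accuracy, latency, and carbon emissions. Below is a minimal sketch of what such a measurement loop might look like for the transformer side, assuming Hugging Face transformers and the codecarbon tracker; the model name ("gpt2"), the placeholder corpus, and the sliding-prefix loop are illustrative assumptions, not the thesis's actual evaluation code.

```python
# Hypothetical sketch: next-token accuracy, per-prediction latency, and
# CO2-eq emissions for a causal LM during inference. Corpus and model
# choice are placeholders, not the thesis's actual setup.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from codecarbon import EmissionsTracker

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

texts = ["The quick brown fox jumps over the lazy dog."]  # placeholder corpus

correct, total, latencies = 0, 0, []
tracker = EmissionsTracker()  # meters energy use and estimates kg CO2-eq
tracker.start()
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        # Slide over the sequence: predict each token from its prefix.
        for i in range(1, len(ids)):
            start = time.perf_counter()
            logits = model(ids[:i].unsqueeze(0)).logits
            latencies.append(time.perf_counter() - start)
            pred = logits[0, -1].argmax().item()
            correct += int(pred == ids[i].item())
            total += 1
emissions_kg = tracker.stop()  # kg CO2-eq for the loop above

print(f"next-token accuracy: {correct / total:.2%}")
print(f"mean latency: {sum(latencies) / len(latencies) * 1e3:.1f} ms")
print(f"emissions: {emissions_kg:.6f} kg CO2eq")
```

The memory-based side of the comparison would run the TiMBL classifiers over the same prefix/next-token pairs; that part is omitted here since TiMBL is invoked through its own tooling rather than the transformers API.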
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This research studies the viability of memory-based large language models (LLMs) as an eco-friendly alternative to transformer-based LLMs, addressing the growing environmental concerns associated with the energy demands of AI. The study evaluates memory-based models (TiMBL) against transformer-based models (GPT) on performance metrics including next-token prediction accuracy, latency, and carbon emissions during the inference phase.
dc.title: Eco-friendly LLMs: Can memory-based large language models pave the way towards sustainable AI?
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: LLMs; AI; Ecological
dc.subject.courseuu: Applied Data Science
dc.thesis.id: 52082

