dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Herder, E.
dc.contributor.author: Cornelje, Joel
dc.date.accessioned: 2024-05-29T23:02:05Z
dc.date.available: 2024-05-29T23:02:05Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/46440
dc.description.abstract: Large language models (LLMs) have demonstrated great improvements in language-related tasks. The models are generally capable at "fast thinking" tasks, which can be solved in a continuous way, while they struggle with "slow thinking" tasks, which require overseeing the thought process. Prompt design can improve model performance on tasks associated with slow thinking. However, prompts often require considerable human effort to create, and a meaningful response is frequently expected after a single input. It would therefore be useful to automate the prompting process and enable the models to operate in an interactive prompt mechanism. Following these suggestions, this study proposes an LLM agent-agent dialogue architecture intended to evoke slow thinking characteristics. Since LLMs are known to be good evaluators, each agent can adapt to and improve on the other agent's evaluations throughout the dialogue. This approach was first investigated by experimenting with how LLM agents based on the GPT-3.5-turbo model could interact and be conditioned on effectiveness and relevancy. Based on these findings, dialogue discussions between agents conditioned to hold contrasting opinions were generated using GPT-4. These were analysed with the grounded theory method across three iterations, covering eleven discussions on five different topics in total. Results show that the dialogues lack cohesion: agents follow a pattern resembling action-reaction behaviour and maintain the same "discussion structure" in each utterance. The findings indicate that the agents lack adaptability. Although agents are known to be good evaluators, if those evaluations are not adapted to, the model's output will not be characterised by slow thinking.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: LLMs are known to perform well at "fast thinking" tasks, which can be solved in a continuous way, while they struggle with "slow thinking" tasks, which require overseeing the thought process. This study analyses to what extent slow thinking in LLMs can be evoked through an LLM agent-agent dialogue architecture. Since LLMs are known to be good evaluators, agents can adapt to and improve on the other agent's evaluations throughout the dialogue, leading to output characterised more by slow thinking.
dc.title: Analysing Slow Thinking Capabilities in Large Language Model Agent-Agent Dialogue
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Large language models, agent-agent dialogue, prompt design, fast and slow thinking, human-computer interaction
dc.subject.courseuu: Human-Computer Interaction
dc.thesis.id: 31118
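
The abstract above describes an agent-agent dialogue architecture in which two LLM agents, conditioned to hold contrasting opinions, take turns evaluating and responding to each other's utterances. Below is a minimal sketch of that kind of loop, assuming the OpenAI Python client (openai>=1.0); the conditioning prompts, topic, turn count, and function names are illustrative assumptions, not taken from the thesis itself.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Illustrative conditioning prompts (assumption, not the thesis's wording):
    # two agents with contrasting stances, each asked to evaluate the other's
    # last utterance on effectiveness and relevancy before responding.
    AGENT_PROMPTS = [
        "You argue in favour of the topic. Evaluate the other speaker's "
        "last utterance for effectiveness and relevancy, then respond.",
        "You argue against the topic. Evaluate the other speaker's "
        "last utterance for effectiveness and relevancy, then respond.",
    ]

    def agent_dialogue(topic: str, turns: int = 6, model: str = "gpt-4") -> list[str]:
        """Alternate between two conditioned agents for a fixed number of turns."""
        utterances = [f"Topic under discussion: {topic}"]
        for turn in range(turns):
            speaker = turn % 2  # agents take alternating turns
            # Each agent sees the dialogue so far; prior utterances are
            # replayed as user messages so the agent can react to them.
            messages = [{"role": "system", "content": AGENT_PROMPTS[speaker]}]
            messages += [{"role": "user", "content": u} for u in utterances]
            reply = client.chat.completions.create(model=model, messages=messages)
            utterances.append(reply.choices[0].message.content)
        return utterances

    if __name__ == "__main__":
        for u in agent_dialogue("Social media improves public debate"):
            print(u, "\n")

A transcript produced this way could then be analysed qualitatively, as the abstract describes for the grounded theory analysis of the generated discussions.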

