dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Herder, E. | |
dc.contributor.author | Cornelje, Joel | |
dc.date.accessioned | 2024-05-29T23:02:05Z | |
dc.date.available | 2024-05-29T23:02:05Z | |
dc.date.issued | 2024 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/46440 | |
dc.description.abstract | Large language models (LLMs) have demonstrated great improvements in language-related tasks. The models are generally capable at "fast thinking" tasks, which can be solved in a continuous way, while they struggle with "slow thinking" tasks, which require overseeing the thought process. Prompt design can be used to improve the performance of the models on tasks associated with slow thinking. However, prompts often require considerable human effort to create, and frequently a meaningful response is expected after a single input. It would therefore be useful to automate the prompting process and enable the models to operate within an interactive prompting mechanism. Following these suggestions, this study proposes an LLM agent-agent dialogue architecture in order to evoke slow thinking characteristics. Since LLMs are known to be good evaluators, agents can adapt to and improve on the evaluations of the other agent throughout the dialogue. This approach was first investigated by researching and experimenting with how LLM agents based on the GPT-3.5-turbo model could interact and be conditioned for effectiveness and relevancy. Based on these findings, dialogue discussions between agents conditioned to hold contrasting opinions were generated using GPT-4. These were analysed using the grounded theory method across three iterations, covering eleven discussions in total on five different topics. Results show that the dialogues lack cohesion, with agents following a pattern that resembles action-reaction behaviour and maintaining the same "discussion structure" in each utterance. The findings indicate that the agents lack adaptability: although agents are known to be good evaluators, if these evaluations are not adapted to, the model will not produce output characterised by slow thinking. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | LLMs are known to perform well at "fast thinking" tasks, which can be solved in a continuous way, while they struggle with "slow thinking" tasks, which require overseeing the thought process. This study analyses to what extent slow thinking in LLMs can be evoked through an LLM agent-agent dialogue architecture. Since LLMs are known to be good evaluators, agents can adapt to and improve on the evaluations of the other agent throughout the dialogue, leading to output more characteristic of slow thinking. | |
dc.title | Analysing Slow Thinking Capabilities in Large Language Model Agent-Agent Dialogue | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | Large language models, agent-agent dialogue, prompt design, fast and slow thinking, human-computer interaction | |
dc.subject.courseuu | Human-Computer Interaction | |
dc.thesis.id | 31118 | |