Enhancing Customer Support Chatbots with LLMs: Comparative Analysis of Few-Shot Learning, Fine-Tuning, and RAG including the Proposal of an Integrated Architecture
Summary
Recent years have seen growing interest in advanced chatbots, driven largely by breakthroughs in Artificial Intelligence (AI). In particular, the integration of Large Language Models (LLMs) has expanded chatbot capabilities, making them increasingly viable in domains such as customer support. This thesis examines chatbots enhanced by LLMs and by techniques that increase context-specific capabilities, namely fine-tuning, few-shot learning, and retrieval-augmented generation (RAG), within the context of customer support. The work consisted of two phases. The first compared the effectiveness of fine-tuning, few-shot learning, and RAG against each other to identify the most effective method for enhancing chatbot responses. An evaluation framework combining automated metrics with human judgment was developed to analyze chatbot performance across various customer service scenarios. The results indicate that RAG outperforms the other methods on all metrics, demonstrating superior response quality in customer interactions. In the second phase, a final chatbot architecture was proposed based on these findings, leveraging the strengths of RAG while integrating a handover module and template enhancements to address its limitations. The proposed architecture was evaluated using human judgment and compared to the RAG method from the first phase. This second evaluation showed that the complete architecture improved overall performance and reliability, with considerably higher scores on most metrics. The study highlights the crucial role of RAG in advancing chatbot intelligence and offers insights for developing more effective LLM-driven customer support bots.