Show simple item record

dc.rights.licenseCC-BY-NC-ND
dc.contributorMerel van Nuland, Cenkay Açar, Ramon Contrucci, Sven Hilbrants, Lamyae Maanach, Toine Egberts, Paul D. van der Linden
dc.contributor.advisorExterne beoordelaar - External assessor
dc.contributor.authorErdoğan, Abdullah
dc.date.accessioned2024-05-02T00:03:12Z
dc.date.available2024-05-02T00:03:12Z
dc.date.issued2024
dc.identifier.urihttps://studenttheses.uu.nl/handle/20.500.12932/46348
dc.description.abstractObjectives Natural Language Processing (NLP) models such as Chat Generative Pre-trained Transformer (ChatGPT) are capable of generating human-like responses. Their application in clinical pharmacy requires more research before safe and effective use is possible. This study examined the performance of ChatGPT on clinical pharmacy questions and compared its accuracy with that of hospital pharmacists. Methods 264 clinical pharmacy questions, distributed over 17 categories, were prompted in ChatGPT. Performance was measured by evaluating responses on accuracy, concordance and quality of explanation (QoE). In addition, the accuracy of ChatGPT was compared with the standard accuracy of hospital pharmacists on the same PKA questions. Reproducibility of ChatGPT was assessed by measuring the consistency of responses three times a day for 5 consecutive days. A language alteration test was also performed to determine differences between prompts in Dutch (A), Dutch with an English justification request (B) and English (C). Both reproducibility and language alteration were studied with two validation tests: a collective test across all categories of questions and a within-category test per category. Lastly, a separate prompt optimization test was performed for three categories with low accuracy. Results The overall accuracy of ChatGPT on the standard set of PKA questions was 79.2%, higher than the 66.0% accuracy of the hospital pharmacists. The overall concordance on the same PKA questions was 95.0%. QoE scores were available for 262 PKA questions and were good to excellent for 72.5% in total. In the language validation, the Across all Categories (AaC) test gave the highest accuracy for phrase B (88.2%), while the Single Category (SC) test gave 100.0% for the Dutch phrase and 80.0% for the Dutch phrase with an English justification request.
Between-day reproducibility averaged 93.3% over all categories between the two users. Within-day reproducibility was 100.0% for cardiology for both users, and 92.0% and 96.0% for medical gases for the two users. Finally, prompt optimization improved 63.2% (12/19) of inaccurate ChatGPT responses with only one additional prompt. Conclusion ChatGPT showed high accuracy, concordance and QoE, and outperformed the hospital pharmacists. The high accuracy in both the language and reproducibility tests shows that the language of the prompt is a significant factor in the responses, and that responses are highly reproducible between days and between users. ChatGPT appears to have potential in the field of clinical pharmacy, as it has some degree of knowledge on many clinical pharmacy-related subjects. However, more varied studies are necessary before ChatGPT can be utilized in clinical pharmacy.
dc.description.sponsorshipUtrecht University
dc.language.isoEN
dc.subjectThe application of ChatGPT in clinical pharmacy needs more research for safe and effective use. This study therefore examined the performance of ChatGPT on clinical pharmacy questions. Performance was measured by evaluating responses on accuracy, concordance and quality of explanation. In addition, accuracy on these questions was compared between ChatGPT and hospital pharmacists.
dc.titlePerformance of ChatGPT in the clinical pharmacy and direct comparison to pharmacists
dc.type.contentMaster Thesis
dc.rights.accessrightsOpen Access
dc.subject.keywordsChatGPT, language model, clinical pharmacy, exam questions
dc.subject.courseuuFarmacie
dc.thesis.id26987

