
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Saurabh, Nishant
dc.contributor.author: Kosakov, Rosen
dc.date.accessioned: 2025-03-27T00:01:19Z
dc.date.available: 2025-03-27T00:01:19Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48676
dc.description.abstract: Serverless computing is a novel cloud computing paradigm that lets cloud users reduce the managed infrastructure of an application down to the "function" level. Although functions provide lightweight and efficient resource management, this fine-grained control comes at the expense of increased latency. The cold start problem can account for as much as 80% of total latency, which creates the need for application optimization. This Master's thesis proposes a two-tier approach to cold start mitigation using Q-learning and K-means, addressing latency, CPU and memory usage, and adherence to Service Level Objectives. The Q-learning model takes into account the shortcomings of existing approaches: its rewards adapt to the latency level, reflecting the dynamic nature of function invocations, and it uses multiple Q-tables to avoid excessively large tables. K-means clusters the invocations into low-, medium-, and high-latency groups. The method was trained on real-life function invocations from Huawei Cloud and evaluated on a dataset generated from function invocations in OpenFaaS. It achieved an average reward of 0.40 per iteration during training, and 0.46 and 0.47 during evaluation for one-by-one and parallel function invocations respectively. In addition, the method proved cost-efficient, completing an iteration in around 5 seconds while requiring less than 8% CPU and around 77 MB of memory over more than 50 training iterations.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Mitigating the cold start issue in Serverless using Reinforcement learning, using Q-learning and K-means approach.
dc.title: Mitigating the cold start issue in Serverless using Reinforcement learning
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Business Informatics
dc.thesis.id: 44525
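The two-tier approach summarized in the abstract can be illustrated with a minimal sketch: a tiny 1-D K-means that buckets observed invocation latencies into low/medium/high tiers, and a standard tabular Q-learning update. This is an assumption-laden simplification, not the thesis implementation: the function names, the percentile-based initialization, and the single flat Q-table (the thesis shards state across multiple Q-tables with latency-adaptive rewards) are all hypothetical.

```python
import numpy as np

def cluster_latencies(latencies, k=3, iters=20):
    """Tiny 1-D k-means: bucket invocation latencies into k tiers
    (0 = low, k-1 = high). Hypothetical stand-in for the thesis's
    K-means tier, which clusters real invocation traces."""
    x = np.asarray(latencies, dtype=float)
    # Deterministic init: spread initial centers across the latency range.
    centers = np.percentile(x, np.linspace(10, 90, k))
    for _ in range(iters):
        # Assign each latency to its nearest center.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    # Relabel clusters so tier 0 is the lowest-latency group.
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

def q_update(q_table, state, action, reward, next_state,
             alpha=0.1, gamma=0.9):
    """One standard tabular Q-learning step. The thesis additionally
    adapts the reward to the latency tier and splits the table."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (
        reward + gamma * best_next - q_table[state][action]
    )
```

For example, clustering the latencies `[10, 12, 11, 100, 105, 500, 510]` (milliseconds) yields tiers `[0, 0, 0, 1, 1, 2, 2]`, and the resulting tier can serve as the state fed to `q_update` when rewarding a warm-container scheduling decision.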

