
        Classifying Legally Actionable Threats using Language Models

        View/Open
        MBI_Thesis_NoudJan_de_Rijk_Classifying_Legally_Actionable_Threats_using_Language_Models.pdf (1.457Mb)
        Publication date
        2023
        Author
        Rijk, Noud de
        Summary
        Threat classification is a relatively new research field in the Natural Language Processing (NLP) domain. It concerns models that classify which texts constitute a threat and which do not. This is an essential research field because, unlike insulting someone, uttering threats is illegal. This research operationalizes the Dutch legal definition of what constitutes a threat and investigates to what extent a language model can classify legally actionable threats in texts. Language models are the state-of-the-art technique for numerous NLP tasks, including text classification. In text classification, they allow a Machine Learning (ML) model to be pre-trained on millions of tokens before being fine-tuned on a downstream task. In this way, a language model is created that learns the syntax of a language. This pre-training mitigates the problem of data scarcity, a recurring problem in threat classification.

        In this study, the application of a language model is compared to models previously used in the threat classification domain (i.e. BiLSTM, CNN, Naive Bayes, and SVM). The models are compared on F1-scores and the Precision-Recall Area-Under-Curve (PR-AUC) score. All models are trained on publicly available, manually re-annotated datasets containing threats and non-threats. The models are evaluated on two datasets, a Dutch dataset and an English dataset, and their goal is to predict whether an uttered threat is legally actionable. The models were evaluated by means of a stratified 10-fold split.

        The results of the study show that it is possible to operationalize the Dutch legal definition by means of annotation guidelines. Two annotators re-annotated a Dutch and an English threat dataset; their agreement was above chance level, and the Dutch dataset was deemed sufficient for the target institution (i.e. the National Police). The language model subsequently outperformed the benchmark models, with statistical significance, on the majority of the performance metrics.
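
        The summary describes the evaluation protocol in prose only. The sketch below is a minimal, hypothetical illustration (not the thesis code) of that protocol for one of the named benchmark models: a stratified 10-fold split scored with F1 and PR-AUC, using an SVM baseline. The toy texts and labels are placeholders, not the thesis datasets.

# Minimal sketch of the evaluation protocol named in the summary:
# stratified 10-fold split, SVM benchmark, F1 and PR-AUC metrics.
# All data below is hypothetical placeholder data.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score, average_precision_score

# Toy data: 1 = legally actionable threat, 0 = not actionable.
texts = np.array([
    "I will hurt you tomorrow", "have a nice day",
    "you will regret this, I know where you live", "see you at the meeting",
] * 10)
labels = np.array([1, 0, 1, 0] * 10)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
f1s, pr_aucs = [], []

for train_idx, test_idx in skf.split(texts, labels):
    # TF-IDF features with a linear SVM, a common classical baseline.
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(texts[train_idx], labels[train_idx])

    preds = model.predict(texts[test_idx])
    # LinearSVC has no predict_proba; the decision function serves as the
    # ranking score for the precision-recall curve.
    scores = model.decision_function(texts[test_idx])

    f1s.append(f1_score(labels[test_idx], preds))
    pr_aucs.append(average_precision_score(labels[test_idx], scores))

print(f"F1:     {np.mean(f1s):.3f} +/- {np.std(f1s):.3f}")
print(f"PR-AUC: {np.mean(pr_aucs):.3f} +/- {np.std(pr_aucs):.3f}")

        In the thesis, the same folds and metrics would be applied to the fine-tuned language model and to each benchmark (BiLSTM, CNN, Naive Bayes, SVM) so that per-fold scores can be compared for statistical significance.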
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/43817
        Collections
        • Theses