Classifying Legally Actionable Threats Using Language Models
Summary
Threat classification is a relatively new research field within the Natural Language Processing (NLP) domain. It concerns models that attempt to classify which texts constitute a threat and which do not. The field is important because uttering threats, as opposed to insulting someone, is a criminal offence.
This research operationalizes the Dutch legal definition of what constitutes a threat and investigates to what extent a language model can classify legally actionable threats in text. Language models are the state-of-the-art technique for numerous NLP tasks, including text classification. In this setting, a Machine Learning (ML) model is pre-trained on millions of tokens before being fine-tuned on a downstream task, yielding a language model that has learned the syntax of a language. This pre-training mitigates the problem of data scarcity, a recurring problem in threat classification.
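As an illustration only, and not the configuration used in this study, the sketch below shows how a pre-trained language model can be fine-tuned for binary threat classification with the Hugging Face transformers library; the checkpoint name, toy data, and hyperparameters are assumptions.

```python
# Minimal sketch of fine-tuning a pre-trained language model for binary
# threat classification. Checkpoint, data, and hyperparameters are assumed,
# not taken from the study.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-multilingual-cased"   # assumed multilingual checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy placeholder data: label 1 = legally actionable threat, 0 = not.
train_ds = Dataset.from_dict({
    "text": ["I will hurt you tomorrow", "You are an idiot"],
    "label": [1, 0],
})

def tokenize(batch):
    # Truncate/pad each text to a fixed length before fine-tuning.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="threat-clf", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```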
In this study, the application of a language model is compared to models previously used in the threat classification domain (i.e., BiLSTM, CNN, Naive Bayes, and SVM). The models are compared on two performance metrics: the F1-score and the Precision-Recall Area Under the Curve (PR-AUC). All models are trained on publicly available datasets containing threats and non-threats that were manually re-annotated. The models are evaluated on two datasets, a Dutch dataset and an English dataset, and their goal is to predict whether an uttered threat is legally actionable. Evaluation is carried out by means of a stratified 10-fold split.
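A minimal sketch of such an evaluation follows, assuming a scikit-learn style pipeline and toy placeholder data; the TF-IDF/SVM model merely stands in for any of the compared models, and PR-AUC is approximated by average precision.

```python
# Stratified 10-fold evaluation with F1 and PR-AUC on toy placeholder data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score, average_precision_score

texts = np.array(["I will hurt you", "nice weather", "you will regret this", "hello"] * 10)
labels = np.array([1, 0, 1, 0] * 10)   # 1 = legally actionable threat, 0 = not

model = make_pipeline(TfidfVectorizer(), LinearSVC())
f1s, pr_aucs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                           random_state=42).split(texts, labels):
    model.fit(texts[train_idx], labels[train_idx])
    preds = model.predict(texts[test_idx])
    scores = model.decision_function(texts[test_idx])
    f1s.append(f1_score(labels[test_idx], preds))
    # PR-AUC approximated by average precision over the fold's test split.
    pr_aucs.append(average_precision_score(labels[test_idx], scores))

print(f"F1 = {np.mean(f1s):.3f}, PR-AUC = {np.mean(pr_aucs):.3f}")
```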
The results of the study show that it is possible to operationalize the Dutch legal definition by means of annotation guidelines. Two annotators re-annotated a Dutch and an English threat dataset; their agreement was shown not to be due to chance, and the Dutch dataset was deemed sufficient for the target institution (i.e., the National Police). The language model subsequently outperformed the benchmark models on the majority of performance metrics, with statistically significant differences.
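The summary does not name the agreement statistic used; Cohen's kappa is one common measure of chance-corrected inter-annotator agreement, sketched below with toy placeholder labels.

```python
# Chance-corrected agreement between two annotators (assumed measure).
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = legally actionable threat
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0]
print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 = perfect, 0.0 = chance level
```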