|dc.description.abstract||When dogs are in pain, their body language varies with breed, mental state, and the type of pain they are experiencing. This variability makes it difficult for veterinary nurses to identify and assess dogs' pain levels on a daily basis. An automated surveillance system is therefore sought to assist veterinary nurses in recognizing pain-related behavior in dogs. The basic aim of such a pain recognition system is to represent dog behavior with appropriate features, integrate sufficient data on dogs' motion patterns during pain episodes, and make accurate judgments.
In this study, we present a deep learning-based model for distinguishing pain from non-pain in dog video recordings. The model features a two-stream design that combines the spatio-temporal information in RGB video frames with the corresponding sequences of body keypoints. To extract posture information from video frames, we develop a hierarchical technique for extracting a dog's body keypoints and compare its performance with that of other backbone pose estimation systems. We also present a unique video-based dog pain dataset, annotated with the assistance of veterinary professionals. On this dataset, we evaluate different variants of our model, demonstrating the importance of the two-stream structure and the practicality of different model fusion strategies. Although no prior study has applied machine learning algorithms to recognize dog pain, we compare our model to a state-of-the-art video-based approach for recognizing animal pain. Our model outperforms the others in prediction accuracy on the dog pain recognition task.||
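To make the two-stream idea concrete, the following is a minimal, purely illustrative sketch of late score fusion between an RGB stream and a keypoint stream, written with NumPy. All names (`stream_scores`, `fuse_scores`, the feature dimensions, and the fusion weight `alpha`) are hypothetical stand-ins; the abstract does not specify the paper's actual network layers or fusion strategy.

```python
# Hedged sketch: two-stream late fusion for binary pain / no-pain
# classification. Linear heads stand in for the real per-stream networks.
import numpy as np

rng = np.random.default_rng(0)

def stream_scores(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Linear scoring head standing in for one stream's network.

    Returns a length-2 logit vector (pain, no-pain).
    """
    return features @ weights

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_scores(rgb_logits: np.ndarray, kp_logits: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Late fusion: weighted average of the two streams' logits."""
    return softmax(alpha * rgb_logits + (1 - alpha) * kp_logits)

# Toy per-clip inputs: a pooled RGB embedding and a pooled keypoint
# embedding (dimensions chosen arbitrarily for illustration).
rgb_feat = rng.normal(size=16)
kp_feat = rng.normal(size=8)
rgb_w = rng.normal(size=(16, 2))
kp_w = rng.normal(size=(8, 2))

probs = fuse_scores(stream_scores(rgb_feat, rgb_w),
                    stream_scores(kp_feat, kp_w))
prediction = ["pain", "no-pain"][int(probs.argmax())]
```

An alternative design the paper's "different model fusion strategies" could cover is early (feature-level) fusion, i.e. concatenating the two embeddings before a single classification head; the late-fusion form above keeps the two streams independent until the final scores.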