Video-based infant monitoring: Recognizing appetite, pain and sleep in preterm infants
Preterm infants express their internal state through behavioral cues. Nurses try to observe these cues as often as possible, but the dynamic environment of the neonatal intensive care unit (NICU) does not always allow it. Missed cues can lead to misdiagnosis and an overall longer stay in the NICU, so a system that supports nurses in recognizing these cues is highly sought after. The overall goal of this research is to detect certain behaviors of preterm infants and use those behaviors to recognize cues. In this study we present a rule-based cue detection program. For this program, we compared three facial landmark detection and human pose estimation models; the most robust models were used to generate key-points for videos of preterm infants. These key-points serve as the foundation of the rules. In the program, medical professionals can describe a behavior in the form of a straightforward rule: a rule describes the movements of certain key-points on a preterm infant during a cue. The rules are evaluated on the key-points extracted from each frame of the videos, and detections are generated by applying a threshold to the evaluation. The detected cues can then be used to determine the current state of an infant: whether it is in pain, experiences appetite, or is in a certain sleep state. In the experiment, three medical professionals built rules for four different cues on a training set spanning 6 minutes, and the rules were evaluated on a test set of 15 minutes. The experiment showed that medical professionals are able to build rules that detect human-annotated cues in preterm infants without any additional learning.
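The pipeline the abstract describes (per-frame key-points, movement rules, thresholded evaluation) could be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the names `Rule`, `rule_fires`, and `detect_cue`, the rule format, and the thresholding scheme are all assumptions.

```python
# Hypothetical sketch of rule-based cue detection over per-frame key-points.
# A "frame" maps key-point names to (x, y) coordinates produced by a
# facial landmark / pose estimation model.
from dataclasses import dataclass

Frame = dict  # key-point name -> (x, y)

@dataclass
class Rule:
    keypoint: str      # e.g. "left_mouth_corner" (assumed name)
    axis: int          # 0 = horizontal, 1 = vertical
    min_delta: float   # minimum per-frame displacement (pixels)

def rule_fires(rule: Rule, prev: Frame, cur: Frame) -> bool:
    """True if the key-point moved at least min_delta along the axis."""
    if rule.keypoint not in prev or rule.keypoint not in cur:
        return False  # key-point not detected in one of the frames
    delta = cur[rule.keypoint][rule.axis] - prev[rule.keypoint][rule.axis]
    return abs(delta) >= rule.min_delta

def detect_cue(rules: list[Rule], frames: list[Frame], threshold: float) -> bool:
    """Flag a cue when the fraction of consecutive frame pairs satisfying
    all rules reaches the threshold."""
    if len(frames) < 2:
        return False
    hits = sum(
        all(rule_fires(r, a, b) for r in rules)
        for a, b in zip(frames, frames[1:])
    )
    return hits / (len(frames) - 1) >= threshold
```

For example, a rule requiring a vertical mouth-corner movement of at least 3 pixels would fire on frames where that key-point drops by 5 pixels, and the cue would be flagged once enough frame pairs satisfy it.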