"Deep Learning for Aircraft Noise Understanding: Source Classification and Power Quantification"
Summary
This thesis investigates aircraft noise analysis using artificial intelligence, specifically
through audio visualisation and a custom-built Convolutional Neural Network
(CNN). The study aims to enhance the understanding of distinct audio sources within
an aircraft, a relatively underexplored area compared to broader environmental noise
recognition. Audio samples were collected using a rooftop microphone at M+P, with
irrelevant sounds filtered out. Spectrograms were generated, and dominant sources
were annotated on these images and checked by an aircraft expert. The CNN was
trained on these annotated images, with various explainable AI methods applied to
analyse pixel attribution and understand the CNN’s decision-making. Despite these
efforts, identifying the dominant sound sources consistently yielded static results, and
attempts to detect significant contrasts with the CNN were inconclusive. The best-scoring
CNN, with mel-spectrograms as input, achieved an accuracy of 58.6% and a corresponding
F1-score of 60%. The intersection over union (IoU) between the pixel attribution
map and the annotated mask was considerably lower, at 14.6% across all
labels with SmoothGrad. The study concludes that while the research area holds potential,
more advanced techniques are needed for meaningful outcomes. If developed,
these techniques could enhance understanding of aircraft noise patterns, leading to
better monitoring and informed recommendations for optimising aircraft maintenance
and performance.
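
The spectrogram generation step described above can be sketched with standard tools. The following is a minimal illustration using SciPy on a synthetic signal; the thesis itself used mel-spectrograms of real rooftop recordings, and all variable names and parameter values here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 1-second signal at 16 kHz standing in for a flyover recording
fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(fs)

# Short-time Fourier power; such time-frequency arrays are rendered as
# images, annotated, and fed to the CNN
f, times, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
Sxx_db = 10 * np.log10(Sxx + 1e-12)  # log power, as typically plotted

print(Sxx_db.shape)  # (frequency bins, time frames)
```

A mel-scaled variant would additionally map the linear frequency bins onto a mel filter bank before taking the logarithm.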
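
The IoU metric used to compare pixel-attribution maps against the annotated masks can be sketched as follows. This is a minimal illustration assuming binarised masks of equal shape; the function and variable names are hypothetical, not from the thesis:

```python
import numpy as np

def iou(attribution_mask: np.ndarray, annotated_mask: np.ndarray) -> float:
    """Intersection over union of two boolean masks of equal shape."""
    a = attribution_mask.astype(bool)
    b = annotated_mask.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

# Toy example: the masks share 1 active pixel out of 3 in their union
pred = np.array([[1, 1], [0, 0]])
true = np.array([[1, 0], [1, 0]])
print(round(iou(pred, true), 3))  # 1 / 3 -> 0.333
```

In practice the continuous attribution map (e.g. from SmoothGrad) must first be thresholded to obtain a binary mask before this comparison is meaningful.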