dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Poppe, R. W.
dc.contributor.advisor: Veltkamp, R. C.
dc.contributor.author: Someren, B. van
dc.date.accessioned: 2017-08-30T18:01:09Z
dc.date.available: 2017-08-30T18:01:09Z
dc.date.issued: 2017
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/27083
dc.description.abstract: Combining different types of data from multiple views makes object detection easier. Our novel method enables multi-view deep convolutional neural networks to combine color information from panoramic images with depth information derived from Lidar point clouds for improved street-furniture detection. We focus specifically on predicting the world positions of light poles. In contrast to related methods, our method operates on data from real-world environments containing many complex objects and supports combining information from recording locations without fixed relative positions.
dc.description.sponsorship: Utrecht University
dc.format.extent: 22537625
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Neural multi-view segmentation-aggregation for joint Lidar and image object detection
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: computer vision, machine learning, deep learning, object detection, segmentation, neural networks, image, transformation
dc.subject.courseuu: Game and Media Technology