dc.rights.license        CC-BY-NC-ND
dc.contributor.advisor   Veltkamp, R.C.
dc.contributor.author    Hoogenkamp, T.R.
dc.date.accessioned      2020-03-18T19:00:57Z
dc.date.available        2020-03-18T19:00:57Z
dc.date.issued           2019
dc.identifier.uri        https://studenttheses.uu.nl/handle/20.500.12932/35483
dc.description.abstract  Robots with depth and color sensors, such as the humanoid robot Pepper, can be used for applications where autonomous navigation and object detection are available. However, most of these robot vision sensors lack the quality needed for such tasks. For that reason we built a simulation in which generating the output of these sensors is the main focus. The simulation provides a testing environment and could assist in real-world navigation and object detection tasks. However, the quality of the synthesized views representing the vision sensors of such robots is not good enough to perform these tasks. In this master thesis we create our own dataset of a real-world environment for the simulation, and we improve the results of the synthesized views with our own captured depth and color data.
In this work we have two objectives. The first objective is to create a dataset representing a real-world environment that can be used in the simulation. We show how to best obtain the data: we generate a 3D reconstruction of the environment and then combine it with our own captured depth data. The second objective is to build a fusion algorithm that improves the quality of the depth data generated by the 3D reconstruction by combining it with our own captured data. The ultimate goal is to enhance the quality of the synthesized images representing the color and depth cameras of a robot.
We show how we create two datasets that can be used in the simulation environment, and we design a depth fusion algorithm that improves the quality of the synthesized views. We show that our fusion algorithm works in most cases, but that for large distances it needs additional tuning, or additional steps should be taken while capturing the data. We discuss how to further enhance the results of the fusion algorithm, as well as how to improve the simulation and how it could be used for testing purposes.
dc.description.sponsorship  Utrecht University
dc.format.extent            22041516
dc.format.mimetype          application/pdf
dc.language.iso             en
dc.title                    A simulation environment for robot depth and color sensors
dc.type.content             Master Thesis
dc.rights.accessrights      Open Access
dc.subject.keywords         Depth sensing, DIBR, Simulation environment
dc.subject.courseuu         Game and Media Technology
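
The abstract above describes a fusion algorithm that combines depth from a 3D reconstruction with the authors' own captured sensor depth. The thesis's actual method is not reproduced in this record, so the following is only a minimal illustrative sketch of one plausible per-pixel fusion scheme: it assumes aligned depth maps given as NumPy arrays (in meters, with 0 marking missing pixels), and the function name fuse_depth and the relative-disagreement threshold max_diff are both hypothetical.

import numpy as np

def fuse_depth(recon_depth, captured_depth, max_diff=0.05):
    """Hypothetical per-pixel fusion of a reconstructed and a captured depth map."""
    fused = recon_depth.copy()
    recon_valid = recon_depth > 0
    captured_valid = captured_depth > 0

    # Fill holes in the reconstruction with captured measurements.
    fill = captured_valid & ~recon_valid
    fused[fill] = captured_depth[fill]

    # Where both sources report depth, average them if they roughly agree
    # (noise reduction); otherwise trust the captured sensor value.
    both = recon_valid & captured_valid
    rel_diff = np.abs(recon_depth - captured_depth) / np.maximum(captured_depth, 1e-6)
    agree = both & (rel_diff <= max_diff)
    disagree = both & (rel_diff > max_diff)
    fused[agree] = 0.5 * (recon_depth[agree] + captured_depth[agree])
    fused[disagree] = captured_depth[disagree]
    return fused

Preferring the captured sensor on disagreement mirrors the abstract's premise that the captured data is used to improve the reconstruction's depth; the abstract's observation that results degrade at large distances would, in a scheme like this, correspond to the threshold needing distance-dependent tuning.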