A simulation environment for robot depth and color sensors
Robots with depth and color sensors, such as the humanoid robot Pepper, can be used for applications that require autonomous navigation and object detection. However, most of these robot vision sensors lack the quality needed for such tasks. For that reason we built a simulation in which the generation of the output of these sensors is the main focus. The simulation provides a testing environment and could assist in real-world navigation and object detection tasks. However, the quality of the synthesized views representing the vision sensors of such robots is not good enough to perform these tasks. In this master thesis we create our own dataset of a real-world environment for the simulation, and we improve the synthesized views with our own captured depth and color data.

This work has two objectives. The first objective is to create a dataset representing a real-world environment that can be used in the simulation. We show how best to obtain the data: we generate a 3D reconstruction of the environment and then combine the reconstruction with our own captured depth data. The second objective is to build a fusion algorithm that improves the quality of the depth data generated by the 3D reconstruction by combining it with our own captured data. The ultimate goal is to enhance the quality of the synthesized images representing a robot's color and depth cameras.

We show how we create two datasets that can be used in the simulation environment and design a depth fusion algorithm that improves the quality of the synthesized views. Our fusion algorithm works in most cases, but for large distances the algorithm needs additional tuning, or additional steps should be taken while capturing the data. We discuss how to further enhance the results of the fusion algorithm, how to improve the simulation, and how it could be used for testing purposes.
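The abstract does not specify how the reconstructed and captured depth maps are combined, but one common approach is a per-pixel confidence-weighted average that falls back to whichever source is valid. The sketch below is a minimal, hypothetical illustration of that idea (the function name, the confidence maps, and the zero-depth-means-missing convention are all assumptions, not details from the thesis):

```python
import numpy as np

def fuse_depth(recon_depth, captured_depth, recon_conf, captured_conf):
    """Hypothetical per-pixel fusion of two depth maps (meters).

    Pixels with depth <= 0 or non-finite values are treated as missing;
    where both sources are valid, a confidence-weighted average is used.
    """
    valid_r = np.isfinite(recon_depth) & (recon_depth > 0)
    valid_c = np.isfinite(captured_depth) & (captured_depth > 0)

    # Zero out weights and depths where a source has no measurement.
    w_r = np.where(valid_r, recon_conf, 0.0)
    w_c = np.where(valid_c, captured_conf, 0.0)
    d_r = np.where(valid_r, recon_depth, 0.0)
    d_c = np.where(valid_c, captured_depth, 0.0)

    total = w_r + w_c
    fused = np.zeros_like(d_r, dtype=float)
    nonzero = total > 0
    fused[nonzero] = (w_r * d_r + w_c * d_c)[nonzero] / total[nonzero]
    return fused
```

For example, a pixel seen only by the reconstruction keeps the reconstructed depth, while a pixel seen by both sources gets a weighted blend; a real implementation would also need the two depth maps registered into a common camera frame first.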