Subsoil on a mobile device: Visualizing and estimating the distance and depth of underground infrastructure
Summary
Augmented reality adds extra visual information to the real world by superimposing computer-generated graphics over real-world images. The end result can be displayed on media such as computer monitors, head-mounted displays, or mobile devices.
Today, augmented reality is becoming more mobile because smartphones, tablets, and other handheld devices are far more capable of presenting real-time overlays on camera images than before. This increased mobility allows for new applications of augmented reality, such as the visualization of underground infrastructure like cables and pipelines. Visualizing this infrastructure on location offers the advantage of immediately knowing whether there are any cables or pipelines nearby, which makes it a particularly useful tool for excavators, city planners, and emergency services personnel.
Creating an augmented reality ‘app’ suitable for such applications requires the ability to obtain an accurate, location-dependent view. It also requires the ability to correctly estimate the location of virtual objects displayed on the screen, as if they were part of the real world. These two requirements are the focus of this thesis.
The first aspect is to determine whether augmented reality is suitable for professional applications that require an accurate display of information. The second aspect focuses on the human factors: how users perceive the distance and depth of augmented reality content displayed on the screen of a mobile device. Determining the distance and depth of virtual objects superimposed on an image of the real world is not a straightforward task. Cues that normally aid humans in seeing depth need to be artificially added to the virtual objects. Most prior research in this area focuses on relative depth and distance cues using head-mounted displays.
In this research, two experiments were conducted to evaluate which depth and distance cues and techniques enable the user to best determine depth and distance. The experiments build upon techniques from previous research on depth cues, and in some cases are an adaptation of them. In both experiments, participants were asked to estimate the distance or depth of a virtual target object. The participants also had to specify how confident they were that their estimate was correct. In the distance experiment, an extra task was performed in which the participants had to measure distances using a paper map. A third experiment was conducted to find out how accurately the mobile device can determine its geographical location and orientation.
The results of the user study indicate that all of the presented techniques improve the accuracy of distance and depth estimation. Estimating depth, especially without the help of any cues, was considered significantly more difficult than estimating distance. The technique that produced the most accurate distance estimates in the shortest time was the ‘range finder’. This technique estimates the distance to the target and presents it on the screen, which gave participants considerable confidence in its accuracy. A 2D depth cross-section presented on the display resulted in the most accurate depth estimates and was the most preferred as well.
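The summary does not detail how the range finder computes the distance it displays; a plausible sketch is a great-circle (haversine) calculation between the device's GPS position and the known coordinates of the buried object. The function name and example coordinates below are illustrative, not taken from the thesis:

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula: a is the squared half-chord length between the points
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Device position vs. a pipeline segment roughly 100 m to the north
print(round(haversine_distance(52.0000, 5.0000, 52.0009, 5.0000)))  # → 100
```

In a real app this value would be recomputed on every GPS fix and rendered next to the target object, so any GPS error (discussed below) translates directly into error in the displayed distance.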
Based upon the third experiment, experiences during development, and observations during the user study, it is evident that augmented reality on mobile devices still requires various improvements, most of them hardware-based, before it is suitable for accurate professional use. The orientation sensor, and especially the GPS sensor, lack the required accuracy, and the readability of the screen becomes troublesome in direct sunlight.