Video-Based Scene and Material Editing
The technique presented in this report alters the appearance of objects in video by replacing the original material with a synthetic one. Inspired by the methods of Khan et al. and Karsch et al., the object is tracked and re-rendered using global-illumination data recovered from the input video, assisted by a brief user annotation. A geometric model of the environment is constructed, onto which textures from the input sequence are projected; this approximate environment provides indirect lighting. A large part of the system is identical to what is needed for inserting synthetic objects into video. The implementation allows physically correct interaction between the inserted object and its environment: the object is shaded from the correct directions, casts shadows onto the environment, and can even occlude light sources, reducing the overall brightness of the result. The outcome is a system for inserting synthetic objects into video, or replacing existing ones, that requires no access to the physical scene and works on low-quality recorded footage. Additionally, a contribution is presented on Additive Differential Rendering, a technique for compositing rendered objects into original footage in which the render time can be drastically decreased. A further contribution concerns Exemplar-Based Image Inpainting, enabling more of the image content to be used when filling a region.
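As background for the Additive Differential Rendering contribution mentioned above, the standard differential-rendering composite (in the style of Debevec) that such pipelines build on can be sketched as follows. This is a minimal illustration, not the report's method: the function name, array conventions, and mask handling are assumptions for the example.

```python
import numpy as np

def differential_render_composite(background, render_with, render_without, obj_mask):
    """Composite a rendered object into original footage.

    background      -- original video frame, float array (H, W, 3) in [0, 1]
    render_with     -- scene model rendered WITH the inserted object
    render_without  -- scene model rendered WITHOUT the object
    obj_mask        -- boolean (H, W) mask of pixels covered by the object

    Object pixels are taken from the full render; everywhere else the
    difference between the two renders (shadows, interreflections,
    blocked light) is added onto the original footage.
    """
    diff = render_with - render_without          # lighting change caused by the object
    out = background + diff                      # transfer shadows / darkening to footage
    out = np.where(obj_mask[..., None], render_with, out)  # paste the object itself
    return np.clip(out, 0.0, 1.0)
```

For example, a pixel outside the object where the render darkens from 0.6 to 0.4 (a cast shadow) has that 0.2 drop applied to the original footage, while pixels inside the object mask show the rendered object directly.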