Exploiting Shading Cues in Kinect IR Images for Geometry Refinement
In this paper, we propose a method to refine the geometry of 3D meshes from Kinect fusion by exploiting shading cues captured by the infrared (IR) camera of the Kinect. A major benefit of using the Kinect IR camera instead of an RGB camera is that the IR images captured by the Kinect are narrow-band images that filter out most undesired ambient light, which makes our system robust to natural indoor illumination. We define a near-light IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. To resolve the ambiguity in our model between normals and distance, we utilize an initial 3D mesh from Kinect fusion together with multi-view information to reliably estimate surface details that were not reconstructed by Kinect fusion. Our approach operates directly on the mesh model for geometry refinement. The effectiveness of our approach is demonstrated through several challenging real-world examples.
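As a rough illustration of the kind of near-light shading model described above, the sketch below evaluates a minimal Lambertian model with inverse-square distance falloff. The function name and exact form are assumptions for illustration, not the paper's actual formulation, which also involves multi-view constraints:

```python
import numpy as np

def near_light_intensity(point, normal, albedo, light_pos):
    """Predicted IR intensity under a simple near-light Lambertian
    model (illustrative assumption): brightness scales with the
    cosine between the surface normal and the lighting direction,
    and falls off with the squared light-to-surface distance."""
    to_light = light_pos - point
    d = np.linalg.norm(to_light)            # light-to-surface distance
    l = to_light / d                        # unit lighting direction
    cos_theta = max(0.0, float(np.dot(normal, l)))
    return albedo * cos_theta / (d * d)     # inverse-square falloff

# A patch facing the light from 2 units away...
i_near = near_light_intensity(np.array([0.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]), 1.0,
                              np.array([0.0, 0.0, 2.0]))
# ...dims by 4x when moved twice as far (4 units away).
i_far = near_light_intensity(np.array([0.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 1.0]), 1.0,
                             np.array([0.0, 0.0, 4.0]))
```

The distance term is what creates the normal-versus-distance ambiguity the paper mentions: a dimmer observation can be explained either by a tilted normal or by a farther surface point, which is why an initial mesh and multi-view cues are needed to disambiguate.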