Robust shape from depth images with GR2T

Jonathan Ruttle, Claudia Arellano, Rozenn Dahyot

Research output: Contribution to journal › Article › peer-review


Abstract

This paper proposes to accurately infer the 3D shape of an object captured by a depth camera from multiple viewpoints. The Generalised Relaxed Radon Transform (GR2T) [1] is used here to merge all depth images into a robust kernel density estimate that models the surface of the object in 3D space. The kernel is tailored to capture the uncertainty associated with each pixel in the depth images. The resulting cost function is suitable for stochastic exploration with gradient ascent algorithms when the noise of the observations is modelled with a differentiable distribution. When merging several depth images captured from several viewpoints, the extrinsic camera parameters need to be known accurately, and we extend GR2T to also estimate these nuisance parameters. We illustrate qualitatively the performance of our modelling and we assess quantitatively the accuracy of our 3D shape reconstructions computed from depth images captured with a Kinect camera.
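The sketch below is only a rough illustration of the general idea described in the abstract, not the paper's actual GR2T formulation: depth images are back-projected into 3D points, the points are pooled into a kernel density estimate that acts as the cost function, and gradient ascent moves a candidate point towards a mode of that density (i.e. onto the estimated surface). The helper names backproject, kde_cost and grad_ascent are hypothetical, the isotropic Gaussian kernel and the fixed-step gradient ascent are simplifying assumptions (the paper tailors the kernel to per-pixel depth uncertainty and also optimises the extrinsic parameters, which this sketch does not).

    import numpy as np

    def backproject(depth, K, R, t):
        """Back-project a depth image into 3D world points.
        depth: HxW depth map, K: 3x3 intrinsics, (R, t): world-to-camera extrinsics.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
        rays = (np.linalg.inv(K) @ pix.T).T            # camera-frame rays per pixel
        pts_cam = rays * depth.reshape(-1, 1)          # scale rays by measured depth
        valid = depth.reshape(-1) > 0                  # drop missing depth readings
        return (R.T @ (pts_cam[valid] - t).T).T        # camera frame -> world frame

    def kde_cost(x, points, sigma=0.01):
        """Isotropic Gaussian KDE evaluated at a 3D point x (placeholder kernel)."""
        d2 = np.sum((points - x) ** 2, axis=1)
        return np.mean(np.exp(-0.5 * d2 / sigma ** 2))

    def grad_ascent(x, points, sigma=0.01, lr=1e-4, steps=100):
        """Move x towards a local mode of the density, i.e. onto the surface."""
        for _ in range(steps):
            d = points - x
            w = np.exp(-0.5 * np.sum(d ** 2, axis=1) / sigma ** 2)
            grad = (w[:, None] * d).sum(axis=0) / (sigma ** 2 * len(points))
            x = x + lr * grad
        return x

In use, the points from all views would be concatenated (points = np.vstack([backproject(d, K, R, t) for d, R, t in views])) before evaluating kde_cost or running grad_ascent; in the paper the same density is additionally differentiated with respect to the extrinsic parameters so that they can be refined jointly with the shape.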

Original language: English
Pages (from-to): 43-54
Number of pages: 12
Journal: Pattern Recognition Letters
Volume: 50
DOIs
State: Published - 1 Dec 2014
Externally published: Yes

Keywords

  • Generalised Relaxed Radon Transform (GRT)
  • Noise modelling
  • Shape from depth
