Variational Depth from Defocus in Real-Time (IEEE)
With the emergence of a next generation of digital cameras offering 3D reconstruction of the viewed scene, Depth from Defocus (DFD) presents an attractive option. In this approach, the depth profile of the scene is recovered from two views captured with different focus settings. DFD is known to be computationally intensive because of the shift-variant filtering involved in its estimation. In this paper we present a parallel GPGPU implementation of DFD based on the variational framework, enabling computation at up to 15 frames per second on SVGA sequences. This constitutes the first GPU application and the fastest known implementation of passive DFD. The speed-up is obtained by using the novel Fast Explicit Diffusion approach and the fine-grained data parallelism of an explicit scheme. We evaluate our method on publicly available real data and compare its results to a recently published PDE-based method. The proposed method outperforms previous DFD techniques in terms of the accuracy/runtime trade-off, suggesting DFD as a viable option for real-time 3D reconstruction.
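The Fast Explicit Diffusion (FED) idea mentioned above can be illustrated with a minimal sketch. FED (Grewenig, Weickert, and Bruhn) runs cycles of explicit diffusion steps whose step sizes individually exceed the stability limit but whose cycle as a whole remains stable, giving a large total diffusion time per cycle. The sketch below is an illustrative assumption, not the paper's implementation: it applies one FED cycle of homogeneous diffusion to a 2D array (the function names `fed_step_sizes` and `fed_diffusion` are hypothetical).

```python
import numpy as np

def fed_step_sizes(n, tau_max=0.25):
    """Cyclic FED step sizes: tau_i = tau_max / (2 cos^2(pi (2i+1) / (4n+2))).

    Their sum is tau_max * (n^2 + n) / 3, so one cycle of n steps covers
    far more diffusion time than n steps at the stability limit tau_max.
    """
    i = np.arange(n)
    return tau_max / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * n + 2)) ** 2)

def fed_diffusion(u, n_steps, tau_max=0.25):
    """One FED cycle of explicit homogeneous diffusion on a 2D grid.

    Uses the 5-point Laplacian with grid spacing h = 1 (stability limit
    tau_max = 0.25) and Neumann boundaries via edge replication.
    """
    u = np.asarray(u, dtype=float)
    for tau in fed_step_sizes(n_steps, tau_max):
        p = np.pad(u, 1, mode="edge")  # replicate boundary -> zero flux
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
        u = u + tau * lap  # individual steps may be "unstable" on their own
    return u
```

Note that intermediate states within a cycle may oscillate; only the state at the end of the full cycle is guaranteed stable, which is exactly why FED suits a fine-grained GPU-parallel explicit scheme: each step is a trivially parallel stencil update.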
Paper available at IEEE.