An FPGA accelerator for PatchMatch multi-view stereo using OpenCL

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


PatchMatch multi-view stereo (MVS) is a method for generating depth maps from multi-view images and is expected to be used in various applications such as robot vision, 3D measurement, and 3D reconstruction. The major drawback of PatchMatch MVS is its high computational cost, so acceleration is strongly desired. However, two problems stand in the way of this acceleration. First, because PatchMatch MVS estimates depth maps by propagating estimation results among neighboring pixels, it is poorly suited to GPU-based acceleration. Second, since the shape of the matching window used for stereo matching changes dynamically, reading its pixels is inefficient in terms of memory access. This paper proposes an FPGA accelerator that exploits on-chip FIFOs efficiently to solve the propagation problem. Moreover, reading the pixels of a matching window is improved by a cover window, which has a fixed shape and covers the matching window. The FPGA accelerator is designed using a design tool based on the Open Computing Language (OpenCL). Although the parameters of PatchMatch MVS depend on the target images, these parameters can be changed easily in the OpenCL-based design. The experimental results demonstrate that the FPGA implementation achieves 3.4 and 2.2 times faster processing speeds than the CPU and GPU implementations, respectively, and that its power-delay product is 3.2% and 5.7% of those of the CPU and GPU implementations, respectively.
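The propagation problem mentioned in the abstract can be illustrated with a minimal sketch. In PatchMatch-style propagation, each pixel tests its neighbor's depth hypothesis, so pixel (y, x) depends on the just-updated pixel (y, x-1); the sweep is inherently sequential along each row, which is what makes naive GPU parallelization difficult and what the paper's on-chip FIFOs address in hardware. The function below is a hypothetical, simplified single left-to-right sweep, not the paper's actual algorithm; `compute_cost` stands in for an arbitrary matching-cost function.

```python
import numpy as np

def propagate_depths(depth, cost, compute_cost):
    """One left-to-right propagation sweep (simplified PatchMatch-style).

    Each pixel adopts its left neighbor's depth hypothesis if that
    hypothesis yields a lower matching cost. Note the loop-carried
    dependency: depth[y, x] may depend on the value just written to
    depth[y, x - 1], so the inner loop cannot be parallelized naively.
    """
    h, w = depth.shape
    for y in range(h):
        for x in range(1, w):
            candidate = depth[y, x - 1]          # neighbor's hypothesis
            c = compute_cost(y, x, candidate)    # matching cost of candidate
            if c < cost[y, x]:                   # keep the better hypothesis
                depth[y, x] = candidate
                cost[y, x] = c
    return depth, cost

# Toy usage: seed the first column with the "true" depth 5.0 and let it
# propagate across each row (the cost here is a synthetic stand-in).
rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 4.0, (4, 6))
depth[:, 0] = 5.0
cost = np.abs(depth - 5.0)

def compute_cost(y, x, candidate):
    return abs(candidate - 5.0)

depth, cost = propagate_depths(depth, cost, compute_cost)
```

A full PatchMatch MVS iteration would also perturb hypotheses randomly and sweep in alternating directions; this sketch only isolates the sequential dependency that motivates the FPGA design.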

Original language: English
Pages (from-to): 215-227
Number of pages: 13
Journal: Journal of Real-Time Image Processing
Issue number: 2
Publication status: Published - 2020 Apr 1


Keywords
  • 3D reconstruction
  • Multi-view stereo (MVS)
  • OpenCL for FPGA
  • PatchMatch
  • Reconfigurable computing

ASJC Scopus subject areas

  • Information Systems


