Learning from multimodal and multitemporal earth observation data for building damage mapping

Bruno Adriano, Naoto Yokoya, Junshi Xia, Hiroyuki Miura, Wen Liu, Masashi Matsuoka, Shunichi Koshimura

Research output: Contribution to journal › Article › peer-review

34 Citations (Scopus)


Earth observation (EO) technologies, such as optical imaging and synthetic aperture radar (SAR), provide excellent means to continuously monitor ever-growing urban environments. Notably, in the case of large-scale disasters (e.g., tsunamis and earthquakes), in which response is highly time-critical, images from both data modalities can complement each other to accurately convey the full damage condition in the disaster aftermath. However, due to several factors, such as weather and satellite coverage, it is often uncertain which data modality will be the first available for rapid disaster response efforts. Hence, novel methodologies that can utilize all accessible EO datasets are essential for disaster management. In this study, we developed a global multimodal and multitemporal dataset for building damage mapping. We included building damage characteristics from three disaster types, namely, earthquakes, tsunamis, and typhoons, and considered three building damage categories. The global dataset contains high-resolution (HR) optical imagery and high-to-moderate-resolution SAR data acquired before and after each disaster. Using this comprehensive dataset, we analyzed five data modality scenarios for damage mapping: single-mode (optical and SAR datasets), cross-modal (pre-disaster optical and post-disaster SAR datasets), and mode fusion scenarios. We defined a damage mapping framework for semantic segmentation of damaged buildings based on a deep convolutional neural network (CNN) algorithm. We also compared our approach to another state-of-the-art model for damage mapping. The results indicated that our dataset, together with a deep learning network, enabled acceptable predictions for all the data modality scenarios. We also found that the results from cross-modal mapping were comparable to those obtained from the fusion and optical-only modes.
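To make the data modality scenarios concrete, the sketch below assembles early-fusion input stacks from pre- and post-event optical and SAR patches. All names, patch sizes, and channel counts here are illustrative assumptions; the abstract does not specify the network's input layout, and the fifth (second fusion) scenario is omitted because its composition is not described.

```python
import numpy as np

# Hypothetical patch size and channel counts (assumptions, not from the paper):
H, W = 128, 128
pre_opt  = np.random.rand(3, H, W)   # pre-disaster optical (e.g., RGB)
post_opt = np.random.rand(3, H, W)   # post-disaster optical
pre_sar  = np.random.rand(1, H, W)   # pre-disaster SAR backscatter
post_sar = np.random.rand(1, H, W)   # post-disaster SAR backscatter

# Channel-wise (early-fusion) stacks for four of the modality scenarios:
scenarios = {
    "optical-only": np.concatenate([pre_opt, post_opt]),   # 6 channels
    "sar-only":     np.concatenate([pre_sar, post_sar]),   # 2 channels
    "cross-modal":  np.concatenate([pre_opt, post_sar]),   # 4 channels
    "fusion":       np.concatenate([pre_opt, post_opt,
                                    pre_sar, post_sar]),   # 8 channels
}

for name, x in scenarios.items():
    print(name, x.shape)
```

A segmentation CNN trained per scenario would then only differ in the number of input channels of its first convolution, which is one common way to handle heterogeneous sensor availability.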

Original language: English
Pages (from-to): 132-143
Number of pages: 12
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Publication status: Published - May 2021


Keywords

  • Deep convolutional neural network
  • Disaster damage mapping
  • Multimodal remote sensing

ASJC Scopus subject areas

  • Atomic and Molecular Physics, and Optics
  • Engineering (miscellaneous)
  • Computer Science Applications
  • Computers in Earth Sciences
