TY - JOUR
T1 - Learning from multimodal and multitemporal earth observation data for building damage mapping
AU - Adriano, Bruno
AU - Yokoya, Naoto
AU - Xia, Junshi
AU - Miura, Hiroyuki
AU - Liu, Wen
AU - Matsuoka, Masashi
AU - Koshimura, Shunichi
N1 - Funding Information:
The authors would like to thank JAXA for providing the ALOS-2 PALSAR-2 dataset through the 2nd Research Announcement on the Earth Observations (EO-RA2) and the Sentinel missions for providing the Sentinel-2 imagery. All cartographic maps were created using QGIS software version 3.4. The SAR dataset preprocessing was conducted using the SARscape v5.5 toolbox operating under ENVI 5.5 software. This research was funded by the Japan Society for the Promotion of Science (KAKENHI 19K20309, 19H02408, 18K18067, and 17H06108), the JSPS Bilateral Joint Research Projects (JPJSBP 120203211), and the Center for Environmental Remote Sensing (CEReS), Chiba University.
Publisher Copyright:
© 2021 The Author(s)
PY - 2021/5
Y1 - 2021/5
N2 - Earth observation (EO) technologies, such as optical imaging and synthetic aperture radar (SAR), provide excellent means to continuously monitor ever-growing urban environments. Notably, in the case of large-scale disasters (e.g., tsunamis and earthquakes), in which a response is highly time-critical, images from both data modalities can complement each other to accurately convey the full damage condition in the disaster aftermath. However, owing to several factors, such as weather and satellite coverage, it is often uncertain which data modality will be the first available for rapid disaster response efforts. Hence, novel methodologies that can utilize all accessible EO datasets are essential for disaster management. In this study, we developed a global multimodal and multitemporal dataset for building damage mapping. We included building damage characteristics from three disaster types, namely, earthquakes, tsunamis, and typhoons, and considered three building damage categories. The global dataset contains high-resolution (HR) optical imagery and high-to-moderate-resolution SAR data acquired before and after each disaster. Using this comprehensive dataset, we analyzed five data modality scenarios for damage mapping: single-mode (optical and SAR datasets), cross-modal (pre-disaster optical and post-disaster SAR datasets), and mode fusion scenarios. We defined a damage mapping framework for semantic segmentation of damaged buildings based on a deep convolutional neural network (CNN) algorithm. We also compared our approach with another state-of-the-art model for damage mapping. The results indicated that our dataset, together with a deep learning network, enabled acceptable predictions for all the data modality scenarios. We also found that the results from cross-modal mapping were comparable to those obtained from the sensor-fusion and optical single-mode analyses.
AB - Earth observation (EO) technologies, such as optical imaging and synthetic aperture radar (SAR), provide excellent means to continuously monitor ever-growing urban environments. Notably, in the case of large-scale disasters (e.g., tsunamis and earthquakes), in which a response is highly time-critical, images from both data modalities can complement each other to accurately convey the full damage condition in the disaster aftermath. However, owing to several factors, such as weather and satellite coverage, it is often uncertain which data modality will be the first available for rapid disaster response efforts. Hence, novel methodologies that can utilize all accessible EO datasets are essential for disaster management. In this study, we developed a global multimodal and multitemporal dataset for building damage mapping. We included building damage characteristics from three disaster types, namely, earthquakes, tsunamis, and typhoons, and considered three building damage categories. The global dataset contains high-resolution (HR) optical imagery and high-to-moderate-resolution SAR data acquired before and after each disaster. Using this comprehensive dataset, we analyzed five data modality scenarios for damage mapping: single-mode (optical and SAR datasets), cross-modal (pre-disaster optical and post-disaster SAR datasets), and mode fusion scenarios. We defined a damage mapping framework for semantic segmentation of damaged buildings based on a deep convolutional neural network (CNN) algorithm. We also compared our approach with another state-of-the-art model for damage mapping. The results indicated that our dataset, together with a deep learning network, enabled acceptable predictions for all the data modality scenarios. We also found that the results from cross-modal mapping were comparable to those obtained from the sensor-fusion and optical single-mode analyses.
KW - Deep convolutional neural network
KW - Disaster damage mapping
KW - Multimodal remote sensing
UR - http://www.scopus.com/inward/record.url?scp=85102615227&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102615227&partnerID=8YFLogxK
U2 - 10.1016/j.isprsjprs.2021.02.016
DO - 10.1016/j.isprsjprs.2021.02.016
M3 - Article
AN - SCOPUS:85102615227
SN - 0924-2716
VL - 175
SP - 132
EP - 143
JO - ISPRS Journal of Photogrammetry and Remote Sensing
JF - ISPRS Journal of Photogrammetry and Remote Sensing
ER -