2023 Volume 4 Issue 3 Pages 189-204
In this study, we developed a model intended to support disaster response by quickly identifying building damage caused by natural disasters such as earthquakes and typhoons. The model takes high-resolution optical satellite images, automatically extracts buildings using Mask R-CNN, an instance segmentation method based on deep learning, and simultaneously classifies each building's damage on three levels (undamaged, damaged, and destroyed) and on two levels (with or without blue sheet coverage). The building extraction accuracy (IoU) was about 35%, and the per-building damage classification accuracy (F-measure) was about 52%, slightly lower than that of a U-Net-based semantic segmentation model; nevertheless, the model was confirmed to achieve a reasonable level of performance for jointly performing building extraction and damage classification. A building detection and damage classification model was then constructed using three types of high-resolution satellite imagery: WorldView-3, Pleiades, and GeoEye-1. The building extraction accuracy was about 39%, and the damage classification accuracy was about 92% for undamaged, 69% for damaged, 56% for destroyed, and 85% for blue-sheet-covered buildings, indicating that the model has a certain degree of generalization performance and can be used for early damage assessment.
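As a minimal sketch of the kind of model configuration described above (not the authors' actual implementation), the following Python snippet shows how a pretrained Mask R-CNN from torchvision could be adapted for joint building extraction and damage classification; the class list and hidden-layer size are assumptions for illustration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Assumed class set: background + {undamaged, damaged, destroyed, blue-sheet covered}
num_classes = 1 + 4

# Start from a Mask R-CNN pretrained on COCO (torchvision).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head so it predicts the damage classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head accordingly (256 hidden channels is an assumption).
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```

Fine-tuning such a model on annotated satellite image tiles would yield, for each detected building, both an instance mask (building extraction) and a class label (damage level or blue sheet coverage), which matches the joint task evaluated in this study.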