The 2011 Great East Japan Earthquake and Tsunami disaster taught us many lessons. Many new findings, insights, and suggestions have since been made and implemented in damage assessment and in disaster observation, sensing, and simulation. In mitigating damage from future catastrophic natural disasters, such as the anticipated Metropolitan Tokyo Earthquake and the Nankai Trough Earthquake and Tsunami, the challenges are how we share visions of the possible impacts and prepare to mitigate loss and damage, how we enhance the disaster resilience of society and its social systems, how we respond promptly and effectively to natural disasters, and how we apply the lessons learned to future disaster management.
In recent years, a huge amount of information known as "disaster big data," including data on the dynamic movement of large numbers of people, vehicles, and goods collected through IoT devices, has become available for understanding how our society responds to natural disasters, both inside and outside the affected areas. The key question is how to utilize disaster big data to enhance disaster resilience.
Researchers with various areas of expertise are working together in a collaborative project called JST CREST: "Establishing the Most Advanced Disaster Reduction Management System by Fusion of Real-Time Disaster Simulation and Big Data Assimilation." The project aims to identify possible disaster scenarios caused by earthquakes and tsunamis that occur and progress in a chained or compound manner, and to create new technologies that guide response and disaster mitigation measures and help societies recover from disasters.
Since 2016, we have published three special issues entitled "Disaster and Big Data," and we now publish a fourth, which includes ten research papers and one report. These aim to share the recent progress of the project as a sequel to Part 3, published in March 2018. As guest editor of this issue, I would like to express our deep gratitude for the insightful comments and suggestions made by the reviewers and the members of the editorial committee. It is my hope that the fruits of everyone's efforts will be utilized in disaster management to mitigate damage and losses from future catastrophic disasters.
We have developed a new numerical model suitable for rapid, wide-area estimation of tsunami inundation and damage. The model is based on the world-renowned TUNAMI code, which solves the two-dimensional nonlinear shallow water equations, and enables one-stop simulation of the initial tsunami distribution based on a fault model, tsunami propagation and inundation, and damage estimation. It extends the configuration of the grid system from conventional rectangular regions to polygonal regions, so that the deployment of high-resolution grids can be confined to coastal lowlands, resulting in remarkably improved computational efficiency and better precision. For real-time implementation of tsunami inundation simulation on a high-performance computing infrastructure, vectorization and MPI parallelization have also been carried out. Moreover, the model was verified and validated against several benchmark problems that the National Tsunami Hazard Mitigation Program, organized by federal agencies and states in the U.S., developed as quality standards for simulating and assessing tsunami hazard and risk. The newly developed model is named the "Real-time Tsunami inundation (RTi) model," and its computational performance was examined on the SX-ACE, a vector supercomputer installed at Tohoku University. The results show that only 128 cores of the SX-ACE are required to complete a six-hour tsunami inundation simulation at 10-m grid resolution within 10 minutes for the 700-km coastline of Kochi Prefecture, Japan. This means that the RTi model is over 10 times more efficient than a conventional tsunami model with rectangular domains, and it can be inferred that 2,451 cores of the SX-ACE, corresponding to a computational performance of 170 Tflop/s, would suffice for real-time tsunami inundation forecasting along the entire coastline of the Nankai Trough subduction zone.
These resources are equivalent to 24% of all SX-ACE resources at Tohoku University, indicating the feasibility of real-time tsunami inundation forecasting on a regional scale using the RTi model. Because the Disaster Information System operated by the Cabinet Office of the Government of Japan has adopted a tsunami damage estimation function based on this numerical model, the paper concludes with a brief overview of the subsystem for rapidly estimating tsunami damage on a regional scale.
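For reference, the two-dimensional nonlinear shallow water equations solved by the TUNAMI code are commonly written in the following conservative form, where $\eta$ is the water surface elevation, $M$ and $N$ are the discharge fluxes in the $x$ and $y$ directions, $D = h + \eta$ is the total water depth, $g$ is gravitational acceleration, and $n$ is Manning's roughness coefficient:

```latex
\begin{aligned}
&\frac{\partial \eta}{\partial t} + \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y} = 0,\\[4pt]
&\frac{\partial M}{\partial t}
 + \frac{\partial}{\partial x}\!\left(\frac{M^2}{D}\right)
 + \frac{\partial}{\partial y}\!\left(\frac{MN}{D}\right)
 + gD\,\frac{\partial \eta}{\partial x}
 + \frac{g n^2}{D^{7/3}}\, M \sqrt{M^2 + N^2} = 0,\\[4pt]
&\frac{\partial N}{\partial t}
 + \frac{\partial}{\partial x}\!\left(\frac{MN}{D}\right)
 + \frac{\partial}{\partial y}\!\left(\frac{N^2}{D}\right)
 + gD\,\frac{\partial \eta}{\partial y}
 + \frac{g n^2}{D^{7/3}}\, N \sqrt{M^2 + N^2} = 0.
\end{aligned}
```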
A clustering method that classifies earthquake scenarios and local areas on the basis of similarities in the spatial distribution of ground motion was applied to long-period ground-motion data computed by a seismic wave propagation simulation. The simulation used a large number of seismic source models, assuming megathrust earthquakes along the Sagami Trough, together with a three-dimensional velocity structure model. The relationships among the clusters, the earthquake scenario parameters, and the velocity structure model were examined. In addition, the earthquake scenario clusters obtained when only meshes containing actual strong-motion observation points were used were compared with those obtained from the entire set of meshes, and a spatial interpolation method that estimates the ground-motion distribution from strong-motion observation data was examined.
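The specific clustering algorithm is not reproduced here; as a minimal illustration of the idea, each scenario can be represented as a vector of ground-motion intensities over meshes and grouped with a plain k-means. The data values, deterministic farthest-point initialization, and number of clusters below are assumptions for illustration, not the paper's method:

```python
def kmeans(vectors, k, iters=20):
    """Group scenario vectors by similarity of their spatial distribution."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Deterministic farthest-point initialisation (an illustrative choice).
    centers = [vectors[0]]
    while len(centers) < k:
        centers.append(max(vectors, key=lambda v: min(d2(v, c) for c in centers)))

    for _ in range(iters):
        # Assign each scenario to its nearest cluster center.
        labels = [min(range(k), key=lambda j: d2(v, centers[j])) for v in vectors]
        # Recompute each center as the mean of its assigned scenarios.
        for j in range(k):
            grp = [v for v, lab in zip(vectors, labels) if lab == j]
            if grp:
                centers[j] = [sum(x) / len(grp) for x in zip(*grp)]
    return labels

# Hypothetical intensities at 4 meshes for 6 earthquake scenarios:
# the first three concentrate shaking in meshes 1-2, the last three in meshes 3-4.
scenarios = [[5.0, 4.8, 2.1, 2.0], [5.1, 4.9, 2.2, 1.9], [5.2, 5.0, 2.0, 2.1],
             [2.0, 2.2, 5.1, 4.9], [1.9, 2.1, 5.0, 5.0], [2.1, 2.0, 5.2, 4.8]]
labels = kmeans(scenarios, k=2)
```

With the toy data above, the two groups of scenarios fall into two distinct clusters by spatial similarity.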
A series of heavy rainfalls hit the western half of Japan from June 28 to July 8, 2018. Swollen rivers overflowed and destroyed embankments, causing flooding over vast areas. In this study, two pre-event and one co-event ALOS-2 PALSAR-2 images were used to extract inundated areas in Kurashiki and Okayama Cities, Okayama Prefecture, Japan. First, water regions were extracted by applying threshold values to the three temporal intensity images; regions where water increased in July 2018 were identified as inundation. Inundated built-up areas were identified by an increase in backscattering intensity. Differences between the pre- and co-event coherence values were also calculated, and areas of decreased coherence were extracted as possible inundation areas. The results of a field survey conducted on July 16, 2018 were used to estimate the optimal parameters for the extraction. Finally, the results from the intensity and coherence images were verified by comparison with a web-based questionnaire survey report and the visual interpretation of aerial photographs.
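The core extraction logic, thresholding low-backscatter pixels in each intensity image and flagging newly dark pixels as inundation, can be sketched as follows. The −15 dB threshold and the pixel values are hypothetical placeholders, not the paper's calibrated parameters:

```python
def water_mask(intensity_db, threshold=-15.0):
    """Smooth open water scatters little radar energy back:
    pixels below the threshold (in dB) are treated as water."""
    return [[v < threshold for v in row] for row in intensity_db]

def inundation(pre_db, co_db, threshold=-15.0):
    """Pixels that are water in the co-event image but not in the
    pre-event image are flagged as newly inundated."""
    pre = water_mask(pre_db, threshold)
    co = water_mask(co_db, threshold)
    return [[c and not p for p, c in zip(pr, cr)]
            for pr, cr in zip(pre, co)]

# Hypothetical 2x3 patches of PALSAR-2 backscatter (dB);
# the middle-left pixel is a permanent river, two others flood co-event.
pre = [[-8.0, -20.0, -7.5],
       [-9.0,  -8.5, -7.0]]
co  = [[-8.2, -21.0, -18.0],
       [-19.5, -8.4, -7.1]]
mask = inundation(pre, co)   # True where water newly appeared
```

A coherence-based check would follow the same pattern, flagging pixels whose pre- to co-event coherence drop exceeds a threshold.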
When carrying out change detection for building damage assessment using synthetic aperture radar (SAR) intensity images, it is desirable that the images share similar observation conditions and that their acquisition times are close to the earthquake occurrence time. In this way, the influence of the radar operating system and of temporal changes on the ground can be minimized, facilitating high-accuracy assessment. In practice, however, especially in developing regions, it is difficult to obtain ideal images owing to limited pre-event data archives. For the 2015 Gorkha, Nepal earthquake, the TerraSAR-X satellite captured the affected Sankhu area before and after the earthquake, on May 30, 2010 and May 13, 2015, respectively. The pre-event data was acquired on an ascending path with an incidence angle of 41°, whereas the post-event data was acquired on a descending path with an incidence angle of 33°. To use these data, with their different observation conditions and long acquisition interval, for building damage assessment, two approaches were considered and studied. On the one hand, the feasibility of change detection under these conditions was investigated: pixel statistics were analyzed in twelve test areas to check the influence of temporal changes, and building footprints were buffered to account for the two different incidence angles. On the other hand, the reliability of classification based on post-event data alone was studied. The results showed good classification performance for some texture parameters, such as the range and standard deviation, which are worthy of further study. Moreover, the classification results obtained using the post-event data alone achieved accuracy similar to that obtained using both the pre- and post-event data, a preliminary indication of the value of post-event-only building damage detection, which circumvents the pre-event data limitation altogether.
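The "range" and "standard deviation" texture parameters mentioned above are simple statistics over the pixels of a building footprint; collapsed buildings tend to show rougher, higher-contrast texture than intact roofs. A minimal sketch, with hypothetical pixel values and an illustrative threshold:

```python
import math

def texture_features(pixels):
    """Range and standard deviation of SAR intensities within one footprint."""
    rng = max(pixels) - min(pixels)
    mean = sum(pixels) / len(pixels)
    std = math.sqrt(sum((v - mean) ** 2 for v in pixels) / len(pixels))
    return rng, std

def is_damaged(pixels, range_threshold=5.0):
    """Flag a footprint as damaged when its texture range is large
    (threshold is a hypothetical value, not a calibrated one)."""
    return texture_features(pixels)[0] > range_threshold

# Hypothetical footprint pixels: an intact roof is homogeneous,
# a collapsed building mixes bright rubble and radar shadow.
intact    = [10.0, 10.5, 9.8, 10.2]
collapsed = [4.0, 15.0, 7.5, 12.5]
```

In practice such features would be computed per buffered footprint and fed to a classifier alongside other texture measures.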
In this study, the traffic state of commercial vehicles was analyzed from a macroscopic viewpoint using commercial-vehicle probe data from the Shikoku region during the heavy rain that struck western Japan in July 2018. A method is proposed to calculate indexes such as the detour rate and the reduction in the number of trips by analyzing trips for each origin-destination (OD) pair, and to extract the routes of detouring vehicles during a disaster from the results. Finally, a method for the early detection of abnormalities that focuses on U-turn behavior during traffic disturbances is proposed. The influence of the heavy rain on commercial vehicles was evaluated quantitatively by analyzing probe data from the disaster period. Specifically, the analysis covered the number of passing commercial vehicles before and after the disaster, changes in travel speed, route changes for each OD pair, and vehicle trajectories around regulated areas. From the results, it was possible to grasp the macroscopic traffic state, the OD pairs affected by traffic restrictions, the routes used for those OD pairs in normal times, and the alternate routes (detour behavior) used during the disaster. With the U-turn-based method for early detection of abnormalities, U-turns occurring after traffic regulation could be detected; however, issues were confirmed regarding detection timing and the applicable range.
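One plausible reading of the detour-rate index is the share of disaster-time trips for an OD pair that avoid the dominant normal-time route; the paper's exact definition may differ. A minimal sketch with hypothetical map-matched route labels:

```python
from collections import Counter

def detour_rate(normal_routes, disaster_routes):
    """Share of disaster-time trips that avoid the route most
    frequently used for this OD pair in normal times."""
    usual = Counter(normal_routes).most_common(1)[0][0]
    detours = sum(1 for r in disaster_routes if r != usual)
    return detours / len(disaster_routes)

def trip_reduction(normal_routes, disaster_routes):
    """Fractional drop in trip count for this OD pair during the disaster."""
    return 1.0 - len(disaster_routes) / len(normal_routes)

# Hypothetical route labels for one OD pair, from map-matched probe trips.
normal   = ["R32", "R32", "R32", "R11", "R32"]   # R32 dominates normally
disaster = ["R11", "R11", "R32", "R11"]          # most trips detour via R11
rate = detour_rate(normal, disaster)
```

Computed per OD pair over a road network, such indexes localize which relations were disrupted and which alternates absorbed the demand.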
Techniques for quickly and easily estimating wide-area damage are required to support various activities for reducing damage in the event of a large earthquake. In this paper, using a wide-area evacuation simulator, we estimate property damage and human casualties in 32 densely built-up wooden residential areas in Tokyo, assuming a large earthquake. Furthermore, the relationships among local urban environment, property damage, and human casualties are analyzed using multiple regression analysis.
To improve the accessibility of emergency vehicles after a large earthquake, it is important to quantify the effects of risk mitigation strategies. In this paper, using a simulation model that describes the movement of emergency vehicles amidst property collapse after a large earthquake, we evaluate street network accessibility in the Tokyo Metropolitan Area. Moreover, by analyzing the relationships between local environments and street network accessibility, we discuss the effects of risk mitigation strategies on improving street network accessibility.
Disasters have caused serious damage to human beings throughout their long history. In a major natural disaster such as an earthquake, evacuation is key to mitigating the damage, as secondary disasters often account for more casualties than the initial event. To enable citizens to evacuate safely, collecting information is vital. At the time of a disaster, however, it is difficult to obtain environmental information about the affected area by conventional means. One solution to this problem is crowd-sensing, which treats citizens as sensor nodes and collects information with their help. We considered a way of controlling the mobility of such sensor nodes under mobility limitations caused by, for example, road blockage. Aiming at a mobility control scheme that enables high-quality information collection, our method uses the preceding measurement results, specifically the kriging variance, to control node mobility. We evaluated this method by simulating measurements, and it showed better accuracy than a baseline method. It is expected to provide higher-quality input to agent-based evacuation simulations, which help guide people to evacuate more safely.
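A minimal sketch of the kriging-variance idea: given the points already sampled, send the sensor node toward the candidate location where the kriging variance (prediction uncertainty) is largest. The simple-kriging formulation, Gaussian covariance model, and all coordinates below are assumptions for illustration, not the paper's scheme:

```python
import math

def cov(p, q, sill=1.0, rng=2.0):
    """Gaussian covariance model (an assumed choice for illustration)."""
    h2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return sill * math.exp(-h2 / rng ** 2)

def solve(A, b):
    """Tiny Gauss-Jordan elimination for the kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kriging_variance(sampled, target, sill=1.0):
    """Simple-kriging variance at `target` given already-sampled points:
    sigma^2 = sill - k^T K^{-1} k."""
    K = [[cov(p, q) for q in sampled] for p in sampled]
    k = [cov(p, target) for p in sampled]
    w = solve(K, k)
    return sill - sum(wi * ki for wi, ki in zip(w, k))

def next_waypoint(sampled, candidates):
    """Direct the node to the reachable spot with the most uncertainty."""
    return max(candidates, key=lambda c: kriging_variance(sampled, c))

# Hypothetical layout: two samples near the origin, two reachable candidates.
sampled = [(0.0, 0.0), (1.0, 0.0)]
cands = [(0.5, 0.0), (4.0, 4.0)]
```

A candidate list restricted to passable roads would encode the mobility limitation; here the far candidate wins because nothing has been measured near it.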
Using various types of big data on a Nankai Trough earthquake and tsunami striking Kochi, we describe a method for simulating how economic damage to inter-enterprise transactions propagates through the supply chain and how subsequent recovery occurs. First, we input the human losses and material damage caused by the earthquake and tsunami to estimate each enterprise's material damage and loss of labor capital, based on the location information of its employees. Next, we simulate the repercussions of the damage on production capacity across the entire supply chain through many tiers, and extract bottleneck companies that pose risks within the supply chain. We found that bottleneck companies tend to have a slower recovery rate than other affected companies. They also tended to be in the construction, manufacturing, wholesale, and service industry sectors, and to be small in scale.
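The propagation step can be caricatured as a fixed-point iteration in which each firm's effective output is capped by its own damaged capacity and by the effective output of its suppliers; the three-firm chain and capacity figures below are hypothetical, and the real model would be far richer (substitution, inventories, recovery dynamics):

```python
def propagate(capacity, suppliers, iters=10):
    """Iterate until each firm's effective output reflects the
    weakest link upstream of it in the supply chain."""
    eff = dict(capacity)
    for _ in range(iters):
        new = {}
        for firm, cap in capacity.items():
            ups = suppliers.get(firm, [])
            # A firm cannot produce more than its own capacity
            # or than any of its suppliers can deliver.
            new[firm] = min([cap] + [eff[s] for s in ups])
        if new == eff:        # fixed point reached
            break
        eff = new
    return eff

# Hypothetical three-tier chain: A supplies B, B supplies C.
capacity = {"A": 0.4, "B": 1.0, "C": 1.0}   # A lost 60% of its capacity
suppliers = {"B": ["A"], "C": ["B"]}
out = propagate(capacity, suppliers)
```

Firm A is the bottleneck here: although B and C are undamaged, their effective output falls to A's level, which is the kind of repercussion the simulation traces through many tiers.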
Owing to the advances in information technology and heightened awareness regarding disaster response, many evacuation simulations have been performed by researchers in recent years. It is necessary to develop suitable disaster prevention plans or evacuation plans using data generated by such simulations. However, it is difficult to understand the simulation results in their original form because of the detailed and voluminous data generated. In this study, we focus on tensor decomposition, which is employed for analyzing multi-dimensional data, in order to analyze the evacuation simulation data, which often consists of multiple dimensions such as time and space. Tensor decomposition is applied to the movement trajectory data generated in the evacuation simulation with the objective of acquiring important disaster or evacuation patterns.
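The paper's decomposition pipeline is not reproduced here; as an illustrative sketch of the technique, a rank-1 CP (canonical polyadic) approximation of a small time × zone × group occupancy tensor can be computed with alternating updates in pure Python. The tensor values are hypothetical, and a real analysis would use higher ranks and a library implementation:

```python
import math

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rank1_cp(T, iters=30):
    """Rank-1 CP approximation of a 3-way tensor: T ~ lam * u (x) v (x) w."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    u, v, w = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        # Alternately update each factor with the other two held fixed.
        u = norm([sum(T[i][j][k] * v[j] * w[k]
                      for j in range(J) for k in range(K)) for i in range(I)])
        v = norm([sum(T[i][j][k] * u[i] * w[k]
                      for i in range(I) for k in range(K)) for j in range(J)])
        w = norm([sum(T[i][j][k] * u[i] * v[j]
                      for i in range(I) for j in range(J)) for k in range(K)])
    lam = sum(T[i][j][k] * u[i] * v[j] * w[k]
              for i in range(I) for j in range(J) for k in range(K))
    return lam, u, v, w

# Hypothetical 2x2x2 occupancy tensor (time x zone x group), built as an
# exact outer product so the rank-1 model recovers it perfectly.
T = [[[a * b * c for c in (2.0, 1.0)] for b in (1.0, 3.0)] for a in (1.0, 2.0)]
lam, u, v, w = rank1_cp(T)
```

Each recovered factor is then interpretable on its own axis: u as a temporal pattern, v as a spatial pattern, and w as a group pattern of the dominant evacuation behavior.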
This study quantitatively analyzed big data from web news on the West Japan Heavy Rain disaster over a two-month period, and compared the retrieved information with coverage of previous natural disasters. The results indicated the following. 1) For natural disasters over the past 15 years, the "half-life period of media exposure" (i.e., the period in which the amount of news reporting halves) was approximately one week, while the half-life period of web media exposure for the West Japan Heavy Rain disaster was 24 days. The West Japan Heavy Rain disaster thus appears to have been the most significant social concern since the Great East Japan Earthquake. 2) Although the West Japan Heavy Rain disaster was large enough to affect both the Chugoku and Shikoku Districts, the human support provided was commensurate with the extent of the human and material damage as well as with the amount of media coverage; no significant regional differences in the amount of media coverage or support were found.
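A media-exposure half-life of this kind can in principle be estimated by fitting an exponential decay to daily article counts; the sketch below uses hypothetical counts and a log-linear least-squares fit, which may differ from the study's actual estimation procedure:

```python
import math

def half_life_days(daily_counts):
    """Fit ln(count) = a - b*t by ordinary least squares;
    the half-life is ln(2) / b (b > 0 for decaying coverage)."""
    t = list(range(len(daily_counts)))
    y = [math.log(c) for c in daily_counts]
    tbar = sum(t) / len(t)
    ybar = sum(y) / len(y)
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.log(2) / -slope

# Hypothetical daily article counts that halve every 7 days.
counts = [1000 * 0.5 ** (d / 7) for d in range(22)]
hl = half_life_days(counts)
```

Real counts are noisy and often have a delayed peak, so in practice the fit would start from the peak day and the log-linearity would be checked first.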
It is necessary to identify the role and limitations of a tsunami protection facility when installing hardware and undertaking the damage-reduction activities needed to mitigate the aftermath of a calamity. This paper examines the occurrence probabilities of earthquakes of various magnitudes based on earthquake occurrence frequency, estimates the corresponding tsunami damage, and proposes a method to determine the expected damage reduction, B(H), and the expected damage, D(H), when a tsunami protection facility of a given scale, H, is built. With the goals of facilitating agreement with the local government on the scale of the facility, promoting damage-reduction measures, and reducing the overall social cost, it also proposes a scheme built on the results of this method and the construction cost, C(H). The method can also be used to identify residual risks, allowing the concerned parties to set concrete damage-reduction targets.
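As a toy illustration of the B(H)/D(H)/C(H) framework (not the paper's actual model), the expected damage can be computed as a probability-weighted sum over earthquake scenarios, under the simplifying assumption that a facility of scale H blocks any tsunami up to height H; all probabilities, heights, damages, and costs below are hypothetical:

```python
def expected_damage(scenarios, H):
    """D(H): expected damage given a protection facility of scale H.

    Each scenario is (annual probability, tsunami height, damage if unblocked).
    Assumed toy rule: the facility fully blocks tsunamis of height <= H.
    The expected damage reduction is then B(H) = D(0) - D(H).
    """
    return sum(p * (0.0 if h <= H else dmg) for p, h, dmg in scenarios)

def best_scale(scenarios, costs):
    """Pick the scale H that minimises total social cost D(H) + C(H)."""
    return min(costs, key=lambda H: expected_damage(scenarios, H) + costs[H])

# Hypothetical scenarios: (annual prob., tsunami height [m], damage).
scen = [(0.01, 3.0, 100.0),      # frequent, moderate
        (0.002, 6.0, 1000.0),    # rare, severe
        (0.0005, 10.0, 5000.0)]  # very rare, catastrophic
# Hypothetical annualised construction cost C(H) for candidate scales.
cost = {0.0: 0.0, 4.0: 0.8, 8.0: 6.0, 12.0: 30.0}
H = best_scale(scen, cost)
```

Note that the optimal scale leaves the rarest scenarios unblocked: their expected damage at the chosen H is exactly the residual risk that the paper argues should be made explicit so that concrete damage-reduction targets can be set.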