The 2011 Great East Japan Earthquake and Tsunami Disaster left behind many lessons, and the findings and insights gained since then have led to improvements proposed and implemented in disaster observation, sensing, simulation, and damage assessment. The challenges in mitigating the damage from future catastrophic natural disasters, such as a Tokyo Metropolitan Earthquake or a Nankai Trough Earthquake and Tsunami, lie in how we share our visions of the possible impacts, how we prepare to mitigate the losses and damage, and how we enhance society’s disaster resilience.
The huge amount of information obtained, called “disaster big data,” captures the dynamic movement, sensed through IoT, of large numbers of people, vehicles, and goods inside and outside the affected areas. It has dramatically advanced our understanding of how our society responded to an unprecedented catastrophe. The key question is how to utilize big data in establishing social systems that respond promptly, sensibly, and effectively to natural disasters, and that withstand adversity with resilience.
Researchers with various types of expertise are working together under a collaborative project, JST CREST “Establishing the advanced disaster reduction management system by fusion of real-time disaster simulation and big data assimilation.” The project aims to identify possible earthquake and tsunami disaster scenarios that occur and progress in chained or compound ways, and to create new technologies that guide response and mitigation measures and help society recover from disasters.
Having published two previous special issues entitled “Disaster and Big Data” since 2016, this issue is our third. It includes 14 papers that share the recent progress of the project as the sequel to Part 2, published in March 2017. As one of the guest editors of this issue, I would like to express our deep gratitude for the insightful comments and suggestions made by the reviewers and the members of the editorial committee. I do hope that this work will be utilized in disaster management efforts to mitigate the damage and losses in future catastrophic disasters.
Tsunami disasters can cause serious casualties and damage to social infrastructure. An early understanding of disaster states is required in order to advise evacuations and plan rescue and recovery. We have developed a real-time tsunami inundation forecast system running on the vector supercomputer SX-ACE. The system can complete a tsunami inundation and damage estimation for coastal city regions at a 10 m grid resolution in under 20 minutes, and distributes tsunami inundation and infrastructure damage information to local governments in Japan. We also developed a new configuration of the computational domain, changed from rectangles to polygons and called a polygonal domain, in order to simulate the entire coast of Japan efficiently. Meanwhile, new supercomputers have been developed, and their peak performance has increased year by year; in 2016, a new Xeon Phi processor called Knights Landing was released for high-performance computing. In this paper, we present an overview of our real-time tsunami inundation forecast system and the polygonal domain, which decreases the amount of computation in a simulation, and then discuss its performance on the vector supercomputer SX-ACE and a supercomputer system based on the Intel Xeon Phi. We also show that the real-time tsunami inundation forecast system requires the efficient vector processing of a supercomputer with high-performance cores.
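The computational saving of a polygonal domain over a rectangular one can be illustrated with a minimal sketch: count the grid cells whose centers fall inside a coastline-hugging polygon versus its bounding rectangle. The polygon, grid spacing, and coordinates below are hypothetical, not those of the actual forecast system.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) strictly inside the polygon (vertex list)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# L-shaped domain following a bent coastline (hypothetical coordinates).
domain = [(0, 0), (10, 0), (10, 4), (4, 4), (4, 10), (0, 10)]

grid = 0.5  # toy grid spacing; the real system uses ~10 m cells
rect_cells = poly_cells = 0
y = grid / 2
while y < 10:
    x = grid / 2
    while x < 10:
        rect_cells += 1                       # every cell of the rectangle
        if point_in_polygon(x, y, domain):
            poly_cells += 1                   # only cells inside the polygon
        x += grid
    y += grid

print(poly_cells, rect_cells)
```

Here the polygonal domain activates only 256 of the 400 cells in the bounding rectangle; along a long, narrow coastline the reduction is far larger.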
During the Great East Japan Earthquake in 2011, the real-time estimate of the earthquake’s magnitude was far too low, and consequently the first tsunami report also understated its severity. To address this issue, a major expansion of Japan’s offshore tsunami observation networks was proposed, along with methods to predict tsunamis in real time. In this study, we built a database containing 3,967 scenarios of tsunamis caused by earthquakes with hypocenters along the Nankai Trough, and tested a tsunami prediction method that uses this database together with offshore tsunami observation networks. We found that an uneven distribution of observation points degraded predictive accuracy. We then used simulated annealing to select the observation points to be used for each site, and found that predictive accuracy improved when using a few well-selected observation points compared to using every point.
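The point-selection idea can be sketched with simulated annealing on a toy problem: choose a small subset of unevenly clustered candidate points so that coverage of a 1-D "coastline" is as even as possible (a stand-in for prediction error; the positions, cost function, and annealing schedule are all illustrative, not the study's).

```python
import math, random

random.seed(0)

# Hypothetical 1-D coastline positions (km) of candidate observation points,
# deliberately clustered to mimic an uneven network.
candidates = [1, 2, 3, 4, 5, 6, 40, 41, 42, 80, 81, 120, 160, 199]
k = 5  # number of points to keep

def coverage_cost(subset):
    """Worst distance from any km-mark to its nearest selected point
    (smaller = more even coverage, standing in for prediction error)."""
    return max(min(abs(x - p) for p in subset) for x in range(0, 200))

current = random.sample(candidates, k)
best = list(current)
temp = 50.0
for step in range(3000):
    # neighbour move: swap one selected point for an unselected candidate
    cand = list(current)
    cand[random.randrange(k)] = random.choice(
        [c for c in candidates if c not in cand])
    delta = coverage_cost(cand) - coverage_cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = cand                      # accept (sometimes uphill)
        if coverage_cost(current) < coverage_cost(best):
            best = list(current)
    temp *= 0.998                           # geometric cooling

print(sorted(best), coverage_cost(best))
```

Starting from the clustered left end (cost 194), the annealer spreads the kept points across the coastline, mirroring the finding that a few well-placed points beat many redundant ones.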
We developed a clustering method that combines principal component analysis with the k-means algorithm to classify earthquake scenarios by the similarity of the spatial distributions in ground-motion simulation data generated for many scenarios, and applied it to long-period ground-motion simulation data for Nankai Trough megathrust earthquake scenarios. The peak ground velocity and relative velocity response values at approximately 80,000 locations in 369 earthquake scenarios were each represented by 15 principal components, and the earthquake scenarios were categorized into 30 clusters. Based on the clustering results, we found that relationships between the principal components and the scenario parameters can be extracted. By utilizing these relationships, it may be possible to easily estimate the approximate ground-motion distribution from the principal components for an arbitrary set of scenario parameters.
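The PCA-plus-k-means pipeline can be sketched as follows on synthetic data: scenarios with three latent spatial patterns are projected onto 15 principal components and clustered with a hand-rolled Lloyd's algorithm. Everything here (data sizes, seed values, farthest-point initialization) is an illustrative assumption, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = earthquake scenarios, columns = peak ground
# velocity at many sites; three latent "scenario families".
n_scenarios, n_sites = 60, 500
patterns = rng.normal(size=(3, n_sites))
labels_true = rng.integers(0, 3, size=n_scenarios)
X = patterns[labels_true] + 0.1 * rng.normal(size=(n_scenarios, n_sites))

# --- PCA: project each scenario onto the leading principal components ---
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
n_pc = 15
scores = Xc @ Vt[:n_pc].T          # (scenarios, 15) compact representation

# --- k-means (Lloyd's algorithm) on the PC scores ---
k = 3
# farthest-point initialization keeps the seeds well separated
centers = [scores[0]]
for _ in range(k - 1):
    d = np.min([((scores - c) ** 2).sum(axis=1) for c in centers], axis=0)
    centers.append(scores[d.argmax()])
centers = np.array(centers)

for _ in range(20):
    d = ((scores[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = d.argmin(axis=1)                  # nearest-center assignment
    centers = np.array([scores[assign == j].mean(axis=0) for j in range(k)])

print(np.bincount(assign))  # cluster sizes
```

On this toy data the recovered clusters coincide with the three latent families, which is the behavior the method relies on: scenarios with similar spatial ground-motion patterns share a compact PC representation.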
It is difficult to evaluate the street network accessibility after a large earthquake occurs. In this paper, we construct a model to evaluate the street network accessibility for wide-area emergency behaviors under the condition of property damage in the Tokyo Metropolitan Area after a large earthquake. Additionally, we analyze the relationships between a local environment and street network accessibility by using multiple regression analysis. Finally, we discuss some important factors for evaluating risk mitigation strategies.
Fire-spread prevention activities performed by local residents in the early phase of a fire play an important role in reducing the damage caused by fires following large earthquakes. However, few studies have focused on fire-spread prevention activities carried out in the confused situation immediately after a large earthquake, so there has been no sufficiently comprehensive discussion of the effectiveness of such activities.
In this study, we estimate the possibility that local residents can prevent the spread of fire from the building of fire origin to an adjacent building by using equipment such as stand pipes. For this purpose, we utilize the agent-based simulator of property damage and human behavior at the time of a large earthquake that we developed in an earlier study. In addition, we demonstrate the effects of some measures for increasing the success rate of preventing fire-spread by comparing the simulation results under the following assumptions: (1) The percentage of streets blocked by the rubble of collapsed buildings is decreased, (2) the number of stand pipes is increased, and (3) the time before a fire spreads to an adjacent building is lengthened by planting trees between two buildings or the implementation of flame-retardation measures inside a building. Furthermore, on the basis of the simulation results, we discuss the requirements for successful fire-spread prevention activities by analyzing some factors (structure/area of buildings, time for fire-spread, time before spraying water, etc.) related to the activities.
Since synthetic aperture radar (SAR) sensors onboard satellites can work under all weather and sunlight conditions, they are suitable for information gathering in emergency response after disasters occur. This study attempted to extract collapsed bridges in Iwate Prefecture, Japan, which was affected by more than 15-m high tsunamis due to the Mw 9.0 earthquake on March 11, 2011. First, the locations of the bridges were extracted using GIS data of roads and rivers. Then, we attempted to detect the collapsed or washed-away bridges using visual interpretation and thresholding methods. The threshold values on the SAR backscattering coefficients and the percentage of non-water regions were applied to the post-event high-resolution TerraSAR-X images. The results were compared with the optical images and damage investigation reports. The effective use of a single SAR intensity image in the extraction of collapsed bridges was demonstrated with a high overall accuracy of more than 90%.
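The thresholding step can be sketched as a simple rule on per-bridge statistics: a bridge whose footprint shows low backscatter and a small non-water fraction is flagged as washed away. The bridge values and both threshold numbers below are hypothetical, not the thresholds calibrated in the study.

```python
# Hypothetical per-bridge statistics extracted from a post-event SAR image.
bridges = [
    {"name": "A", "mean_sigma0_db": -6.1,  "nonwater_pct": 92.0},
    {"name": "B", "mean_sigma0_db": -16.8, "nonwater_pct": 18.0},
    {"name": "C", "mean_sigma0_db": -7.4,  "nonwater_pct": 88.0},
]

SIGMA0_THRESHOLD_DB = -12.0    # low backscatter suggests water, not deck
NONWATER_THRESHOLD_PCT = 50.0  # deck footprint mostly replaced by water

def classify(b):
    """Flag a bridge as washed away when its footprint is dominated by
    low-backscatter (water-like) pixels."""
    if (b["mean_sigma0_db"] < SIGMA0_THRESHOLD_DB
            and b["nonwater_pct"] < NONWATER_THRESHOLD_PCT):
        return "washed-away"
    return "intact"

results = {b["name"]: classify(b) for b in bridges}
print(results)
```

In the study the analogous thresholds were applied per bridge location (extracted from road/river GIS data) to the TerraSAR-X intensity image, then validated against optical images and damage reports.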
The 2016 magnitude 6.4 Meinong earthquake caused catastrophic damage to people’s lives and property in Taiwan. Synthetic aperture radar (SAR) remote sensing is a useful tool for rapidly grasping near-real-time building damage in areas affected by an earthquake. Previous studies employed X-band single-polarized high-resolution SAR imagery to identify building damage; however, suitable imagery of this kind is not always available. Therefore, this research applied L-band dual-polarimetric ALOS-2/PALSAR-2 data to analyze the radar scattering characteristics of three types of buildings affected by the 2016 Meinong earthquake. The results show that collapsed buildings are characterized by weak double-bounce scattering due to a reduced dihedral structure, while the characteristics of slightly damaged buildings are similar to those of undamaged buildings. Furthermore, the ability of a series of polarimetric, texture, and color features derived from the dual-polarimetric SAR data to discriminate among the three types of affected buildings is quantified through a statistical analysis using the pixels in the combined layover, shadow, and footprint areas of each building. The statistical analysis shows that spaceborne dual-polarimetric ALOS-2/PALSAR-2 images have good potential to distinguish slightly damaged buildings from collapsed and tilted ones, although distinguishing collapsed from tilted buildings remains difficult. In addition, the mean and variance of the Gray-Level Co-occurrence Matrix of the span image are two suitable features for distinguishing the categories of building damage; the polarimetric and color features performed worse than the texture features in this respect.
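The two texture features singled out above, GLCM mean and variance, can be computed as follows for a toy quantized image patch (the patch values and the single horizontal offset are illustrative assumptions).

```python
import numpy as np

# Toy 4-level image patch (e.g., a quantized SAR span image over a building).
img = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
])
levels = 4

# Gray-Level Co-occurrence Matrix for horizontal neighbours (offset (0, 1)),
# made symmetric and normalised to probabilities.
glcm = np.zeros((levels, levels))
for r in range(img.shape[0]):
    for c in range(img.shape[1] - 1):
        glcm[img[r, c], img[r, c + 1]] += 1
        glcm[img[r, c + 1], img[r, c]] += 1
glcm /= glcm.sum()

# GLCM mean and variance -- the two features found most discriminative.
i = np.arange(levels)[:, None]
mean = (i * glcm).sum()
var = (((i - mean) ** 2) * glcm).sum()
print(round(mean, 3), round(var, 3))
```

In practice such features would be computed per building over the combined layover/shadow/footprint pixels and compared across damage categories.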
In March 2011, the Great East Japan Earthquake occurred, killing approximately 20,000 people. Previous research has shown that evacuation start time and evacuation behavior are related to the survival rate: immediate evacuation increases it, while evacuation disruption caused by traffic congestion decreases it. It can therefore be assumed that guiding people to safe locations will increase the survival rate. Detecting human mobility flows is key to rescuing more people, because their analysis can help determine the evacuation routes toward which people should be guided. The objective of our research is to develop a system for detecting human mobility flows in a disaster scenario. We analyzed the requirements of human mobility flow detection for disaster evacuation guidance. In this paper, we propose a crowd sensing system that uses Bluetooth to recognize human mobility flows. By detecting Bluetooth devices carried by pedestrians, the degree of congestion can be estimated. Further, the devices’ movements can be detected by observing the received signal strength indicator (RSSI) of Bluetooth Low Energy (LE) beacons carried by pedestrians. The results of experimental evaluations of these two methods verify their usefulness: they can estimate the degree of congestion as well as the walking speed of pedestrians.
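A minimal sketch of the two estimates: device counts per scan window for congestion, and a log-distance path-loss model to turn RSSI change into an approach speed. The transmit power, path-loss exponent, and RSSI samples below are hypothetical calibration values, not the paper's measurements.

```python
# Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
TX_POWER_DBM = -59.0        # hypothetical expected RSSI at 1 m
PATH_LOSS_EXPONENT = 2.0    # hypothetical, free-space-like environment

def rssi_to_distance(rssi_dbm):
    """Invert the path-loss model to estimate beacon-scanner distance (m)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

# A BLE beacon carried by a pedestrian, seen by a fixed scanner twice:
rssi_t0, rssi_t1 = -75.0, -69.0   # dBm at t = 0 s and t = 5 s
d0 = rssi_to_distance(rssi_t0)
d1 = rssi_to_distance(rssi_t1)
speed = abs(d0 - d1) / 5.0        # m/s toward the scanner

# Congestion degree: distinct device addresses seen in one scan window.
seen = {"aa:01", "aa:02", "aa:03", "bb:10"}
congestion = len(seen)
print(round(speed, 2), congestion)
```

RSSI is noisy indoors, so a real system would smooth over many samples; the point here is only the shape of the computation.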
Copious footage of the Great East Japan Earthquake (GEJE) was obtained during the disaster, and much of it is still available for viewing on the Internet. This makes up part of the big data that is important for grasping the reality of the GEJE. We aimed to help mainstream users reflect on what happened during the GEJE through video data that would help them visualize the situations, types, timing, and locations of associated damage to obtain correct knowledge and awareness about tsunamis and their associated damage, and to investigate evacuation actions. To this end, we developed a portal system called “3.11 Video Portal – Great East Japan Earthquake Public Footage Finder,” which can be used to search publicly available online videos of the GEJE and the consequent damage. This article reports the results of an access analysis following the launch of this system and an analysis of user surveys. Much of the footage is not linked to metadata on location, so this was added manually. We have seen a lot of access since launch, and have achieved a certain evaluation of the meaning or operability of this system. Furthermore, users commented about incorrect locational information, and tagging was accomplished through a collaborative effort. Based on the respondents’ feedback, we made a lot of technical revisions and confirmed the necessity of further refinement.
In this study, we analyzed big data consisting of news published on the web about the 2016 Kumamoto earthquake over the course of a month and compared it with coverage of past earthquakes. Our findings are summarized as follows: 1) For web news on the Kumamoto earthquake, the “media half-life” of the disaster, i.e., the time it took for media coverage to fall to half its peak volume, was one week, roughly the same as for the 2004 Niigata Chuetsu earthquake. 2) The scope of human support corresponded to the scope of human and material damage, with no municipalities deviating from this pattern; notably, we did not see the disparities in news coverage or human support that were observed after the Great East Japan Earthquake.
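A media half-life can be recovered from daily article counts by fitting an exponential decay; the sketch below does this on noiseless synthetic counts built with a one-week half-life (the counts are invented, not the study's data).

```python
import math

# Hypothetical daily article counts peaking on day 0 and decaying thereafter.
# A one-week "media half-life" corresponds to lambda = ln(2) / 7 per day.
true_lambda = math.log(2) / 7
days = list(range(0, 28))
counts = [1000 * math.exp(-true_lambda * t) for t in days]

# Estimate lambda by least squares on log counts: log N = log N0 - lambda*t.
n = len(days)
mx = sum(days) / n
my = sum(math.log(c) for c in counts) / n
sxy = sum((t - mx) * (math.log(c) - my) for t, c in zip(days, counts))
sxx = sum((t - mx) ** 2 for t in days)
est_lambda = -sxy / sxx
half_life = math.log(2) / est_lambda
print(round(half_life, 2))  # ~7 days
```

With real, noisy counts the same regression gives the half-life up to fitting error, allowing decay speeds of different disasters to be compared directly.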
This study proposes a real-time monitoring method for two-dimensional (2D) networks via the fusion of probe data and a traffic flow model. During the Great East Japan Earthquake of March 11, 2011, major traffic congestion arose as evacuees concentrated in cities on the Sanriku Coast, and tragedy struck when the tsunami overtook the stuck vehicles. To evacuate safely and efficiently, the state of traffic must be monitored in real time on a 2D network in which all links are connected. Generally, the traffic state is monitored only at observation points, and observation data are subject to errors. Additionally, when estimating the traffic state of a 2D network, unlike that of non-intersecting (i.e., one-dimensional) road sections, it is necessary to model users’ route-choice behavior and the origin/destination (OD) demand input to the model. Therefore, in this study, we develop a state-space model that assimilates vehicle density and divergence-ratio data obtained from probe vehicles into a traffic flow model that considers route choice. Our state-space model accounts for observational errors in the probe data and can simultaneously estimate the traffic state and the destination component ratio of OD demand. Verification on a small-scale simulated test network shows that the proposed model estimates congestion with good precision.
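The assimilation idea can be reduced to a one-cell sketch: a biased model forecast of vehicle density is repeatedly corrected by noisy probe observations via a scalar Kalman-style update. All numbers are hypothetical, and the paper's actual model is a full state-space model over a 2D network with route choice, not this scalar toy.

```python
# Minimal scalar sketch of the data-assimilation step.
true_density = 40.0      # veh/km, unknown to the estimator
model_density = 25.0     # biased traffic-flow-model forecast
model_var = 100.0        # forecast error variance
obs_var = 16.0           # probe-observation error variance

observations = [38.2, 41.5, 39.1, 40.8]  # noisy probe density measurements

est, var = model_density, model_var
history = []
for z in observations:
    gain = var / (var + obs_var)     # weight placed on the observation
    est = est + gain * (z - est)     # assimilation update toward the data
    var = (1 - gain) * var           # posterior variance shrinks
    history.append(est)

print(round(est, 2), round(var, 2))
```

After four updates the estimate has moved from the biased forecast to within about 1 veh/km of the truth, which is the mechanism that lets probe data correct model error in real time.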
Today, large-scale simulations are thriving thanks to increases in computing performance and storage capacity. Understanding the results of these simulations is not easy, so support for interactive and exploratory analysis is becoming more important. This study focuses on spatio-temporal simulations and attempts to develop an analysis technology to support them, using a database system to support the interactive analysis of large-scale data.
Since the data gained via spatio-temporal simulations is not well suited to management in a relational DBMS (RDBMS), this study uses an array DBMS, a type of DBMS that has been garnering increased attention in recent years. An array DBMS is designed for the management of large-scale array data; it provides a logical model for array data and supports efficient query processing. SciDB is used as our specific array DBMS in this paper.
This study targets disaster evacuation simulation data and demonstrates via experimentation that the query-processing functions offered by an array DBMS provide effective analysis support.
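The style of query an array DBMS answers can be emulated in numpy on a small synthetic evacuation array: a dimension-range selection (a "between"-style subarray) followed by an aggregate along the time dimension. This is only an illustration of the logical operations, not SciDB itself, and the array sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for evacuation-simulation output stored as a 3-D array
# indexed by (time step, grid x, grid y); values = evacuees per cell.
sim = rng.poisson(3.0, size=(50, 20, 20))

# "Maximum occupancy per cell over time steps 10-29, within a subregion":
window = sim[10:30, 5:15, 5:15]     # dimension-range selection
max_per_cell = window.max(axis=0)   # aggregate over the time dimension
hotspots = np.argwhere(max_per_cell >= max_per_cell.max())

print(max_per_cell.shape, len(hotspots))
```

An array DBMS evaluates such queries directly on the stored logical array, without flattening it into relational tuples, which is what makes interactive exploration of large simulation output feasible.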
Japan has suffered significant damage from countless earthquakes throughout its history, so it is important to take prompt and effective measures against predicted future major earthquakes. Among such measures, those against tsunamis are a top priority, as demonstrated by the catastrophic losses caused by the Great East Japan Earthquake (GEJE). To date, many studies have investigated tsunami detection, flood prediction, and models of evacuation behavior. Building on this work, this study sought to integrate the results of increasingly advanced research in various fields to construct a system that generates data on human flow during a tsunami disaster. We proceeded with a scenario assuming the Great Nankai Trough Earthquake and considered Kochi City as a case study for a trial tsunami evacuation simulation. We validated and evaluated the evacuation behaviors as a scheme for utilizing knowledge of the ever-changing conditions of evacuation, and conducted a visualization and analysis of the simulation results.
This paper investigates whether shelter locations and congestion spots can be determined at the time of an event using mobile space data, whose use during disasters has recently attracted attention. This study focused on the earthquake and resulting tsunami that occurred off the coast of Fukushima Prefecture on November 22, 2016. We verified whether congestion spots and shelter locations can be identified by comparing the data with the results of a questionnaire survey on evacuation behavior conducted in a previous study. We found that evacuation behaviors are difficult to determine from the raw data, as mobile space data extracted several hours after the tsunami only indicated the spots where populations ordinarily converge. By taking time-based differences of the raw data, we were able to determine the locations where populations gathered; however, it remained difficult to reliably identify congestion spots.
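The time-based-difference idea can be sketched as follows: compare a post-event population snapshot against an ordinary-day baseline per mesh cell, so that evacuation-driven gathering stands out from where people ordinarily are. The mesh counts and the ratio threshold below are invented for illustration.

```python
# Hypothetical mesh population counts: an ordinary-day baseline versus
# a snapshot a few hours after the tsunami warning.
baseline = {"mesh_A": 1200, "mesh_B": 300, "mesh_C": 80,  "mesh_D": 950}
snapshot = {"mesh_A": 1150, "mesh_B": 310, "mesh_C": 640, "mesh_D": 400}

# Raw counts mostly reflect ordinary population; the difference from
# baseline is what reveals evacuation-driven gathering or draining.
RATIO_THRESHOLD = 2.0
gathering = [m for m in baseline
             if snapshot[m] / baseline[m] >= RATIO_THRESHOLD]
drained = [m for m in baseline
           if snapshot[m] / baseline[m] <= 1 / RATIO_THRESHOLD]

print(gathering, drained)
```

Here the normally quiet mesh_C stands out as a gathering spot and mesh_D as drained, while the large raw counts in mesh_A would have dominated without the baseline comparison.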
The Great East Japan Earthquake, which occurred on March 11, 2011, was the greatest disaster in Japan since World War II. The establishment and operation of the Extreme Disaster Management Headquarters left various lessons about initial and emergency responses to future huge disasters, and it is hoped that these lessons will be heeded in Japan’s future countermeasures. Based on this recognition, this study examines a specific direction for the discussion on introducing Disaster Emergency Provisions into the Constitution of Japan with a view toward huge disasters, such as a Tokyo Inland Earthquake or a Nankai Trough Earthquake. From the viewpoint of responding to huge disasters, there is a need to discuss what kind of Disaster Emergency Provisions are necessary to protect the people from huge disasters that were not considered when the Constitution of Japan was enacted. These provisions should possess a certain specificity, comprehensiveness, and flexibility, and address response measures for cases in which there is no time to await legislation by an extraordinary session of the Diet or in which such measures cannot be addressed by legislation enacted during normal times. We hope that these lessons culled from the initial and emergency responses to the Great East Japan Earthquake will further the discussion on special Constitutional rules on the relationship between the Cabinet and the Diet, and between the national and local governments.
This article reports the development of a geographical information system (GIS) embedded text-based geospatial Big Data research toolbox (BigGIS-RTX) designed especially for mobile CDR (Call Details Record) data processing in urban transport planning and disaster management. BigGIS-RTX is a standalone computer program that aims to provide a bridge between geospatial Big Data and end users (i.e. students and researchers) by reducing difficulties in handling geospatial Big Data processing and analysis tasks. This research toolbox makes it possible to handle text-based geospatial Big Data cleaning, formatting, subsetting, and extraction by keywords or structured query language (SQL), CDR data aggregation by base transceiver stations (BTSs), generation of origin–destination (OD) trips, OD matrices, and OD routes, and computation of OD links. Moreover, this research toolbox can be integrated with current commercial GIS software to perform further geospatial analysis functions to improve spatial decision making in urban and transport planning and disaster management. In this report, we discuss two current research outputs using BigGIS-RTX: first, multitemporal grid square population estimation and second, human mobility studies in transportation planning. These research outputs are essential for disaster management and emergency preparedness in terms of providing knowledge and information about population distribution changes over space and time, human mobility flow by a user defined time frame, and travel volume or link count information for individual road segments. This research is part of the core project “Development of a Comprehensive Disaster Resilience System and Collaboration Platform in Myanmar” in a research collaboration between Yangon Technological University, Myanmar, and The University of Tokyo, Japan, sponsored by the Japan Science and Technology Agency (JST) and the Japan International Cooperation Agency (JICA).
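The OD-trip generation step described above can be sketched in a few lines: scan each user's time-ordered BTS sequence and count each transition between distinct BTSs as one trip. The records, identifiers, and trip rule below are illustrative assumptions, not BigGIS-RTX's actual data model.

```python
from collections import defaultdict

# Hypothetical cleaned CDR records: (user_id, timestamp, serving BTS id),
# time-ordered within each user, as the toolbox would prepare them.
records = [
    ("u1", "06:10", "bts_A"), ("u1", "08:40", "bts_B"),
    ("u2", "07:05", "bts_A"), ("u2", "09:30", "bts_C"),
    ("u3", "06:55", "bts_B"), ("u3", "08:20", "bts_C"),
    ("u3", "17:45", "bts_B"),
]

# Each consecutive pair of distinct BTSs observed for a user counts as
# one origin-destination trip.
od_matrix = defaultdict(int)
last_seen = {}
for user, _, bts in records:
    if user in last_seen and last_seen[user] != bts:
        od_matrix[(last_seen[user], bts)] += 1
    last_seen[user] = bts

print(dict(od_matrix))
```

Aggregating such trips by BTS pair and time window yields the OD matrices and link counts the report uses for population-distribution and mobility studies.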
The 2016 Kumamoto earthquake caused severe damage to economic activities and residents’ livelihoods by disrupting the supply chains of common resources, such as food, water, roads, and other infrastructure, making recovery difficult for businesses in the region. The importance of regional business continuity planning (BCP) was made clear by the 2004 Niigataken Chuetsu earthquake and the 2007 Niigataken Chuetsu-oki earthquake, and the 2011 Great East Japan Earthquake revealed that individual business continuity efforts were interrupted by the disruption of common infrastructure. Therefore, a new concept of a region-wide BCP that focuses on collaboration among stakeholders, including private corporations, local governments, and communities, was urgently required to enhance the resilience of the region against disasters. A new concept of an Area BCP was proposed by JICA, and prefectural-scale District BCPs were formulated by the prefectural governments of Kyoto and Kagawa.
In order to evaluate the effect of the presence of a regional BCP on disaster response, this study focuses on one of the most important elements of a regional BCP: the disaster relief chain information-sharing factor. Based on the supply of relief goods from the distribution center in Tosu City, Saga Prefecture to the evacuation centers in Kumamoto Prefecture during the Kumamoto earthquake, the evaluation was conducted by quantitative analysis using agent simulations of relief logistics.
Extreme rainfall and associated flooding are common during the summer in Japan. Heavy rain caused extensive damage in many parts of Kyushu, Japan, on July 5–6, 2017. Many small mountainous river basins were subject to the core of this heavy rainfall event and were flooded, but no hydrological measurements were taken in most of these flooded basins during the event. There are few gauging stations in this mountainous region, and most that do exist are designed to monitor the larger watersheds. Consequently, it is difficult to determine the hydrological properties of the small subbasins within these larger watersheds. Therefore, to improve our understanding of the basic hydrological processes that affect small ungauged mountain river basins during periods of intense rainfall, a quasi-distributed model (i.e. the Hydrologic Engineering Center-Hydrologic Modeling System, HEC-HMS) was used in this study. The Hikosan (area: 65 km2) and Akatani (area: 21 km2) mountainous river basins were selected for the hydrological simulations. The model was validated using the Hikosan River basin because observational data are available from the outlet of this basin. However, there is no record of any hydrological observations for the Akatani River basin. Therefore, reference parameters from the Hikosan River basin were used for hydrological analysis of the Akatani River basin. This was possible because the basins are close to one another and have similar physiographic and topographic properties. The simulations of both basins, and the associated uncertainties, are discussed in detail in this paper. Based on the hydrological simulations, an attempt was made to analyze the maximum flood discharge caused by the event. The results generated using this approach to hydrological simulations in small ungauged basins could contribute to the management of water resources in these and other river basins during future extreme rain events.
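One of the loss methods available in HEC-HMS-style modeling, the SCS Curve Number method, illustrates how rainfall excess can be estimated for an ungauged basin once parameters are transferred from a gauged neighbor. The curve number and storm depth below are hypothetical examples, not the study's calibrated values.

```python
# SCS Curve Number loss method (metric form):
#   S  = 25400 / CN - 254      potential maximum retention (mm)
#   Ia = 0.2 * S               initial abstraction (mm)
#   Q  = (P - Ia)^2 / (P - Ia + S)   direct runoff for P > Ia, else 0

def scs_runoff_mm(precip_mm, curve_number):
    """Direct runoff depth Q (mm) from event rainfall P (mm)."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Example: a 300 mm storm on a basin with a hypothetical CN of 75.
q = scs_runoff_mm(300.0, 75)
print(round(q, 1))
```

Because the curve number summarizes soil and land cover, a value calibrated on the gauged Hikosan basin could plausibly be transferred to the physiographically similar Akatani basin, which is the parameter-transfer logic the study relies on.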