In April 2016, our institute, NIED, under its new English name, the “National Research Institute for Earth Science and Disaster Resilience,” commenced its fourth mid-to-long-term planning period, set to last seven years.
We are constantly required to carry out comprehensive efforts, including observations, forecasts, experiments, assessments, and countermeasures, related to a variety of natural disasters such as earthquakes, tsunamis, volcanic eruptions, landslides, heavy rains, blizzards, and ice storms.
Since this is NIED’s first special issue for the Journal of Disaster Research (JDR), works were collected on a wide variety of topics from research divisions and centers as well as from ongoing projects in order to give an overview of the latest achievements of the institute. We are delighted to present 17 papers on five topics: seismic disasters, volcanic disasters, climatic disasters, landslide disasters, and the development of comprehensive Information Communications Technology (ICT) for disaster management. Even though the achievements detailed in these papers are certainly the results of individual research, NIED hopes to maximize these achievements for the promotion of science and technology for disaster risk reduction and resilience as a whole. It is our hope that this special issue awakens readers’ interest in these studies and, of course, creates opportunities for further collaborative work with us.
Tomographic analysis of the seismic velocity structure beneath oceans has always been difficult because offshore events determined by onshore seismic networks have large uncertainties in depth. In order to use reliable event locations in our computations, we have developed a method that uses the hypocentral depths determined by the NIED F-net with moment tensor solutions using long-period (20-50 s) waves from offshore events away from onshore seismic networks. We applied a seismic tomographic method to events occurring between 2000 and 2015 to generate a tomographic image of the Japanese Islands and the surrounding region, using travel time data picked by the NIED Hi-net, hypocentral information for onshore earthquakes from the Hi-net, and hypocentral information for offshore events from the F-net. The seismic velocity structure at depths of 30-50 km beneath the Pacific Ocean off the east coast of northeastern Japan and beneath onshore Japan was clearly imaged using both onshore and offshore event data. The boundary between high and low P-wave velocities (Vp) is clearly seen at the Median Tectonic Line beneath southwestern Japan at depths of 10 and 20 km. We discuss how the high-Vp lower crust and low-Vp upper crust beneath central Japan and towards the Sea of Japan are responsible for the failed rift structures formed during the opening of the Sea of Japan. Due to subsequent shortening, crustal deformation has been concentrated along the failed rift zone. The resolution of shallow structures beneath the ocean was investigated using S-net data, confirming the possibility of imaging depths of 5-20 km. In future studies, S-net data will be useful in evaluating whether the failed rift structure, formed during the late Cretaceous to early Tertiary, continues towards the shallow regions beneath the Pacific Ocean.
A shake table test of a small-scale steel frame structure was conducted using the large-scale earthquake simulator at the National Research Institute for Earth Science and Disaster Resilience (NIED) in Tsukuba, Ibaraki. This paper presents a performance evaluation of Micro Electro Mechanical Systems (MEMS) accelerometers, which have recently come into use in various fields, compared with conventional servo-type accelerometers. In addition, this paper discusses a method of integrating the measured accelerations into displacements that is suitable for evaluating structural damage due to strong earthquakes.
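The conversion of a measured acceleration record into displacement can be sketched as follows. This is a minimal numerical illustration of double integration with a simple linear baseline correction, assuming an idealized noise-free record; it is not the specific integration method evaluated in the paper.

```python
import numpy as np

def acc_to_disp(acc, dt):
    """Double-integrate an acceleration record into displacement.

    Plain double integration accumulates low-frequency drift, so a
    linear detrend is applied after each integration step.  A generic
    baseline-correction sketch, not the paper's method.
    """
    def cumtrapz(y):
        # Trapezoidal cumulative integral, starting from zero.
        return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2.0)))

    def detrend(y):
        # Remove the best-fit straight line (constant + linear drift).
        i = np.arange(len(y))
        slope, offset = np.polyfit(i, y, 1)
        return y - (slope * i + offset)

    vel = detrend(cumtrapz(acc))
    return detrend(cumtrapz(vel))

# Check against sinusoidal motion x(t) = A*sin(2*pi*f*t), whose
# acceleration is a(t) = -(2*pi*f)**2 * A * sin(2*pi*f*t).
dt = 0.001
t = np.arange(0.0, 10.0, dt)
f, A = 2.0, 0.05                     # 2 Hz shaking, 5 cm amplitude
acc = -(2.0 * np.pi * f) ** 2 * A * np.sin(2.0 * np.pi * f * t)
disp = acc_to_disp(acc, dt)          # peak should be close to A
```

In practice, strong-motion processing replaces the simple detrend with a high-pass filter tuned to the record, but the drift problem it addresses is the same.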
Underground structures are generally considered to have high seismic performance and are expected to play an important role as bases for reconstruction even after a destructive earthquake. Rigidity changing points, such as jointed and curved portions of an underground structure, where localized deformation and stress are expected to be generated, are among the most critical portions in terms of the seismic performance of underground structures. Because the underground structures in a mega-city function as a network, local damage could lead to fatal dysfunction. Accordingly, rigidity changing points and their surrounding areas could significantly influence the resiliency of urban functions, and it is indispensable to evaluate their seismic performance and dynamic responses during earthquakes. Attempts have been made to evaluate the responses of rigidity changing points and their surrounding areas to earthquakes using large-scale numerical analyses, but there is no case in which these responses have been measured in detail. For this reason, it is difficult to verify the validity of the results of such evaluations.
In light of the above, a shake table test was conducted at E-Defense using a coupled specimen of soil and underground structures to obtain detailed data, especially on the localized responses around rigidity changing points during an earthquake. Based on the data obtained, the behavior of an underground structure with a curved portion during an earthquake was analyzed comprehensively. As a result of the analysis of the test data, we found a strong correlation between the localized deformation of the curved portion of the tunnel and the displacement of the surrounding ground. In addition, we found it necessary to conduct three-dimensional seismic response analysis not only around the rigidity changing point but also over a wider area.
To construct a virtual reality (VR) experience system for interior damage due to an earthquake, VR image contents were created by obtaining images, sounds, and vibration data, with synchronization information, from multiple devices in a room on the 10th floor of a 10-story RC structure tested on the E-Defense shake table. An application for displaying 360-degree images of interior damage using a head-mounted display (HMD) was developed. The developed system was exhibited at public disaster prevention events, and a questionnaire survey was then conducted to assess the usefulness of the VR experience in disaster prevention education.
The purpose of this study is to verify fault modeling in the source region of the 1940 Shakotan-Oki earthquake using active faults offshore of Japan. Tsunami heights simulated in previous studies are found to be lower than observed levels, which makes it difficult to explain historical tsunami records of this earthquake. However, the application of appropriate slip magnitudes in the fault models may explain these differences. In the “Project for the Comprehensive Analysis and Evaluation of Offshore Fault Informatics (the Project),” a new fault model is constructed using marine seismic data and geological and geophysical data compiled by the Offshore Fault Evaluation Group, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), as part of the Project for Fault Evaluation in the Seas around Japan (Ministry of Education, Culture, Sports, Science and Technology, MEXT). Single-channel and multichannel reflection seismic data were used that include information on a new fault identified in previous surveys. We investigated fault geometries and their parameters using the above data. Here, we show that the geometric continuity of these faults is adjusted by increasing the magnitude of fault slip. Standard scaling laws for strong ground motion are applied to the fault parameters, and the validity of the fault model is examined by comparing historically observed tsunami heights along the Japanese coastline with tsunami heights from simulation analysis. This verification quantitatively uses Aida’s K and κ parameters for geometric mean and scatter. We determine that the simulated tsunami heights obtained using the new model approach the historically observed heights, which indicates that the model is valid and accurate for the source region.
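The K and κ verification parameters referred to above follow Aida's classical definitions: K is the geometric mean of the ratios of observed to simulated tsunami heights, and κ is the corresponding geometric scatter. A minimal sketch, using invented heights rather than the paper's survey data:

```python
import math

def aida_k_kappa(observed, computed):
    """Aida's (1978) geometric-mean ratio K and scatter factor kappa.

    K compares observed and simulated tsunami heights on average
    (K = 1 is a perfect match); kappa measures the scatter of the
    individual ratios (kappa = 1 means no scatter).
    """
    logs = [math.log10(o / c) for o, c in zip(observed, computed)]
    n = len(logs)
    log_k = sum(logs) / n
    # Guard against tiny negative variance from floating-point error.
    var = max(0.0, sum(x * x for x in logs) / n - log_k ** 2)
    return 10.0 ** log_k, 10.0 ** math.sqrt(var)

# Hypothetical coastal heights in metres (not the paper's survey data).
obs = [2.0, 3.1, 1.4, 4.0]
sim = [1.6, 2.8, 1.5, 3.2]
K, kappa = aida_k_kappa(obs, sim)    # K > 1 here: simulation runs low
```

A model is conventionally judged acceptable when K is close to 1 and κ is small, which is the sense in which the simulated heights "approach" the historical observations.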
In the 2016 Kumamoto earthquake, the Futagawa fault zone and the Hinagu fault zone ruptured in some sections, causing severe damage in neighboring areas along the faults. We conducted a detailed investigation of the surface earthquake fault, building damage, and site amplification of shallow ground within about 1 km of the fault. The focus was mainly on the Kawayou district of Minamiaso village and the Miyazono district of Mashiki town, locations that suffered particularly severe building damage. We explored the relationship between local strong motion and building damage in areas in the immediate vicinity of the active fault.
This study quantitatively analyzes the differences between the scientifically estimated focal region of the Nankai Trough Giant Earthquake, which is expected to occur in the future, and the conceptual focal region drawn on a map by 595 students. It also examines the differences between the subjective expectation and the scientific prediction of the seismic intensity at the residence of each respondent, in order to clarify the relationship between such differences and respondent variables such as residence, attributes, and experiences. The examination makes the following findings clear: the subjective expectation of the focal region of the Nankai Trough Giant Earthquake deviates largely eastwards; those whose own residence or parents’ home is in the area forecast to be affected by the Nankai Trough Giant Earthquake recognize the focal region of the earthquake better; and those who have taken disaster prevention measures, such as stocking emergency goods and participating in disaster drills, account for a smaller percentage of the respondents who underestimated the seismic intensity at their residence.
In response to the recommendation of the Council for Science and Technology (Subdivision on Geodesy and Geophysics), the National Research Institute for Earth Science and Disaster Resilience (NIED) constructed a network of stations to observe 11 volcanoes: Tokachidake, Usuzan, Tarumaesan, Hokkaido-Komagatake, Iwatesan, Kusatsu-Shiranesan, Asamayama, Asosan, Kirishimayama, Unzendake, and Kuchinoerabujima. At each new station, a borehole seismograph and tiltmeter, a broadband seismograph, and a GNSS (GPS) receiver were installed. NIED has now established 55 stations at 16 volcanoes, adding five volcanoes, namely, Izu-Oshima, Miyakejima, Ogasawara Iwoto, Mt. Fuji, and Nasu-dake, and has constructed a new volcano observation network linking the 11 original volcanoes. NIED calls the combination of the new and earlier networks the fundamental volcano observation network (V-net).
Under a fully open policy, data from the borehole seismographs and tiltmeters, broadband seismographs, rain gauges, barometers, and quartz thermometers in the pressure vessels of the borehole seismographs and tiltmeters are distributed to institutes such as the Japan Meteorological Agency and universities in real time over NIED’s conventional seismic observation data distribution system. GNSS (GPS) data are regularly distributed to relevant research institutes, such as the Geospatial Information Authority of Japan, using the file transfer protocol (FTP). In addition, so that anyone can use these data for the promotion of volcano research and volcanic disaster prevention, it is now possible to view seismic waveforms and download data from NIED’s website.
Mt. Tarumae is an active volcano located in the southeastern part of the Shikotsu caldera, Hokkaido, Japan. Crustal expansion occurred near the summit of Mt. Tarumae in 1999–2000 and 2013, and a M5.6 earthquake was recorded west of the summit on July 8, 2014. In this study, we determined hypocenter distributions and performed b-value analysis for the period between August 1, 2014 and August 12, 2016 to improve our understanding of the geometry of the magma system beneath the summit of Mt. Tarumae. Hypocenters were mainly distributed in two regions: 3–5 km west of Mt. Tarumae, and beneath the volcano. We then determined b-value distributions. Regions with relatively high b-values (∼1.3) were located at depths of –0.5 to 2.0 km beneath the summit and at depths greater than 6.0 km about 1.5–3.0 km northwest of the summit, whereas a region with relatively low b-values (∼0.6) was located at depths of 2.0–6.0 km beneath the summit. Comparing the b-value distributions with other geophysical observations, we found that the high b-value region from –0.5 to 2.0 km in depth corresponded to regions of lower resistivity, a positive self-potential anomaly, and the inflation source detected in 1999–2000. Therefore, it is inferred that this region reflects crustal heterogeneity, a decrease in effective normal stress, and a change in frictional properties caused by the development of faults and fissures and the circulation of hydrothermal fluids. On the other hand, the inflation source detected in 2013 was located near the boundary between the low b-value region beneath the summit and the deeper high b-value region about 1.5–3.0 km northwest of the summit. Studies of other volcanoes have suggested that such high b-values likely correspond to the presence of a magma chamber. Based on the deeper high b-value region estimated in this study, the magma chamber is inferred to be located at depths greater than 6.0 km about 1.5–3.0 km northwest of the summit.
Thus, these findings contribute to our understanding of the magma plumbing system beneath the summit of Mt. Tarumae.
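The b-value analysis described above rests on the Gutenberg-Richter frequency-magnitude relation. As a sketch of the underlying computation, the following applies the standard maximum-likelihood estimator to an invented toy catalog; the paper's actual spatial mapping of b-values involves considerably more (catalog selection, completeness estimation, gridding).

```python
import math

def b_value_mle(mags, mc, dmag=0.1):
    """Maximum-likelihood b-value of the Gutenberg-Richter relation
    (Aki 1965, with Utsu's correction for magnitude binning).

    mags: catalog magnitudes; mc: completeness magnitude; dmag: bin
    width.  A generic estimator; the catalog below is invented.
    """
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dmag / 2.0))

# Toy catalog above the completeness magnitude mc = 1.0.  Smaller
# average magnitudes (relative to mc) yield higher b-values, as in
# hydrothermally altered regions; larger averages yield lower b.
b = b_value_mle([1.0, 1.2, 1.1, 1.6, 1.3, 2.1, 1.0, 1.4], mc=1.0)
```

Mapping such estimates over a grid of hypocenter subsets is what produces the high- and low-b regions discussed in the abstract.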
In this study, we examined variations in precipitable water estimates produced by different Global Positioning System (GPS) zenith delay methods, and assessed the corresponding differences in predicted rainfall after assimilating the obtained precipitable water data. Precipitable water data estimated from GPS and the three-dimensional horizontal wind velocity field derived from X-band dual polarimetric radar were assimilated into CReSS, and rainfall forecast experiments were conducted for the heavy rainfall system in Kani City, Gifu Prefecture, on July 15, 2010. In the GPS analysis, a method to simultaneously estimate coordinates and zenith delay (the simultaneous estimation method) and a method to successively estimate coordinates and zenith delay (the successive estimation method) were used to estimate precipitable water. The differences generated from using predicted orbit data provided in pseudo-real time by the International GNSS (Global Navigation Satellite System) Service for geodynamics (IGS) versus precise orbit data released after a 10-day delay were also examined. The change in precipitable water due to varying the analysis method was larger than that due to the type of satellite orbit information. In the rainfall forecast experiments, those using the successive estimation method results had better precision than those using the simultaneous estimation method results. Both methods that included data assimilation had higher rainfall forecast precision than the forecast without precipitable water assimilation. Water vapor obtained from GPS analysis is accepted as important in rainfall forecasting, but the present study showed that additional improvements can be attained through the choice of zenith delay analysis method.
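Whichever estimation method is used, the zenith wet delay is converted to precipitable water through a dimensionless factor that depends on the atmospheric mean temperature (the approach of Bevis et al., 1992). A minimal sketch, with an illustrative mean temperature and delay value rather than values from the study:

```python
def pwv_from_zwd(zwd_m, tm_k):
    """Convert a GPS zenith wet delay (metres) to precipitable water
    vapour (metres) via the Bevis et al. (1992) conversion factor.

    tm_k is the water-vapour-weighted mean atmospheric temperature.
    The constants are the standard refractivity values in SI units.
    """
    rho_w = 1000.0      # density of liquid water [kg m^-3]
    r_v = 461.5         # specific gas constant of water vapour [J kg^-1 K^-1]
    k3 = 3.739e3        # refractivity constant [K^2 Pa^-1]
    k2p = 0.221         # refractivity constant [K Pa^-1]
    pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm_k + k2p))  # ~0.15
    return pi_factor * zwd_m

# Illustrative values: a 20 cm wet delay at Tm = 275 K corresponds
# to roughly 3 cm (about 30 mm) of precipitable water vapour.
pwv = pwv_from_zwd(0.20, 275.0)
```

The factor of roughly 0.15 explains why small differences in the estimated zenith delay translate into noticeable differences in the precipitable water that is assimilated.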
This study reports preliminary results from the three-dimensional variational method (3DVAR) with incremental analysis updates (IAU) of the surface wind field, which is suitable for real-time processing. In this study, 3DVAR with IAU was calculated for the case of a tornadic storm using 500-m horizontal grid spacing with updates every 10 min, for 6 h. Radial velocity observations by eight X-band multi-parameter Doppler radars and three Doppler lidars around the Tokyo Metropolitan area, Japan, were used for the analysis. Three types of analyses were performed between 1800 and 2400 LST (local standard time: UTC + 9 h) on 6 September 2015. The first used only 3DVAR (3DVAR), the second used 3DVAR with IAU (3DVAR+IAU), and the third did not use data assimilation (CNTL). 3DVAR+IAU showed the best accuracy of the three analyses, and 3DVAR alone showed the worst accuracy, even though the background was updated every 10 min. Sharp spike signals were observed in the time series of wind speed at 10 m AGL analyzed by 3DVAR, strongly suggesting that a “shock” was caused by dynamic imbalance due to the instantaneous addition of analysis increments to the background wind components. The spike signal did not appear in the 3DVAR+IAU analysis; therefore, we suggest that the IAU method reduces the shock caused by the addition of analysis increments. This study provides useful information on the most suitable DA method for the real-time analysis of surface wind fields.
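The contrast between instantaneous increment insertion and IAU can be illustrated with a one-variable toy: IAU spreads the analysis increment as a small forcing term over many model steps instead of adding it all at once. This is only a conceptual sketch, not the wind-field system used in the study.

```python
def integrate_with_iau(x0, increment, model_step, nsteps):
    """Incremental analysis update (IAU): rather than adding the full
    3DVAR analysis increment to the state at a single instant (which
    can shock the model), add increment/nsteps as a forcing term at
    every model step across the assimilation window.
    """
    x = x0
    states = []
    for _ in range(nsteps):
        x = model_step(x) + increment / nsteps
        states.append(x)
    return states

# Toy "model": mild damping toward zero at each step.  The state
# ramps up smoothly instead of jumping by the full increment of 2.0,
# which is the mechanism that suppresses the spike signals.
traj = integrate_with_iau(0.0, 2.0, lambda x: 0.99 * x, 20)
```

In the real system, the increments are three-dimensional wind fields and the model step is the NWP dynamics, but the smoothing principle is the same.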
The forecast accuracy of a numerical weather prediction (NWP) model for a very short time range (≤1 h) for a meso-γ-scale (2–20 km) extremely heavy rainfall (MγExHR) event that caused flooding at the Shibuya railway station in Tokyo, Japan on 24 July 2015 was compared with that of an extrapolation-based nowcast (EXT). The NWP model used CReSS with 0.7 km horizontal grid spacing, and storm-scale data from dense observation networks (radars, lidars, and microwave radiometers) were assimilated using CReSS-3DVAR. The forecast accuracy for the heavy rainfall area (≥20 mm h-1), as a function of forecast time (FT), was investigated for the NWP model and EXT predictions using the fractions skill score (FSS) for various spatial scales of displacement error (L). These predictions were started 30 minutes before the onset of extremely heavy rainfall at Shibuya station. The FSS for L=1 km, i.e., grid-scale verification, showed that NWP accuracy was lower than that of EXT before FT=40 min; however, NWP accuracy surpassed that of EXT from FT=45 to 60 min. This suggests the possibility of seamless, high-accuracy forecasts of heavy rainfall (≥20 mm h-1) associated with MγExHR events within a very short time range (≤1 h) by blending EXT and NWP outputs. We also discuss the factors that allowed the NWP model to predict the heavy rainfall area more accurately than EXT within this very short time range. To support this discussion, additional sensitivity experiments with a different assimilation method for radar reflectivity were performed. It was found that a moisture adjustment above the lifting condensation level using radar reflectivity was critical to forecasting the heavy rainfall near Shibuya station after 25 min.
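The fractions skill score used for the verification above compares neighborhood exceedance fractions of the forecast and observed fields rather than exact grid matches. A compact reference implementation on toy grids (not NIED's verification code) might look like:

```python
import numpy as np

def fractions_skill_score(fcst, obs, thresh, n):
    """Fractions skill score (Roberts & Lean, 2008) on 2-D rain grids.

    fcst, obs: rainfall fields; thresh: exceedance threshold, e.g.
    20 mm/h; n: neighbourhood half-width in grid points.  FSS = 1 is
    a perfect forecast; FSS = 0 means no skill at this scale.
    """
    def fractions(field):
        # Fraction of cells exceeding thresh in each (2n+1)^2 window.
        binary = (field >= thresh).astype(float)
        out = np.zeros_like(binary)
        ny, nx = binary.shape
        for j in range(ny):
            for i in range(nx):
                out[j, i] = binary[max(0, j - n):j + n + 1,
                                   max(0, i - n):i + n + 1].mean()
        return out

    pf, po = fractions(fcst), fractions(obs)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else 1.0

# Toy case: a rain cell displaced by one grid point still earns
# partial credit because the neighbourhood fractions overlap.
fcst = np.zeros((8, 8)); obs = np.zeros((8, 8))
obs[4, 4] = 25.0; fcst[4, 5] = 25.0
score = fractions_skill_score(fcst, obs, 20.0, 1)
```

Varying the neighbourhood size n corresponds to varying the displacement-error scale L in the abstract.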
The failure time of a slope can be predicted by methods based on creep failure theory applied to slope displacement on natural slopes, embankments, and cut slopes. These prediction methods employ several equations based on the relationship between the displacement rate (displacement velocity) and time. However, such methods harbor problems because the shape of the tertiary creep curve is affected by many conditions, and it is difficult to identify the phase of tertiary creep. This study examines the change over time in the displacement rate of a slope and derives an index for identifying the phase of tertiary creep. Two large-scale model slopes of composite granite were tested using a large-scale rainfall simulator. In the experiments, the slope displacements were monitored in real time. From these results, inflection points were found in the velocity of the slope displacement, and the corresponding inflection points at different locations in the sliding soil mass were found to occur at the same time. This paper discusses the effectiveness of predicting slope failure time by using the inflection points of the displacement rate in real-time monitoring records.
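The inverse-velocity approach underlying such failure-time prediction methods (Fukuzono's method) fits a straight line to 1/v against time during tertiary creep and extrapolates to 1/v = 0. A minimal sketch on synthetic data:

```python
import numpy as np

def predict_failure_time(times, velocities):
    """Fukuzono's inverse-velocity method for tertiary creep.

    During tertiary creep, 1/v often decreases roughly linearly with
    time; the predicted failure time is where the fitted line reaches
    1/v = 0.  The data below are synthetic, not the experiment's.
    """
    inv_v = 1.0 / np.asarray(velocities)
    slope, intercept = np.polyfit(times, inv_v, 1)
    return -intercept / slope   # time at which 1/v reaches zero

# Synthetic tertiary creep: v = 1 / (tf - t) with failure at tf = 100.
t = np.array([10.0, 30.0, 50.0, 70.0])
v = 1.0 / (100.0 - t)
tf = predict_failure_time(t, v)
```

Identifying when the record has actually entered tertiary creep, which is what the inflection-point index addresses, determines when such an extrapolation becomes meaningful.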
Every year in Japan, slope failures occur due to heavy rainfall during the wet season and typhoon season. The main causes of slope failure are thought to be the increase in soil weight from infiltrated precipitation, the decrease in shear strength, and the effects of the rise in groundwater elevation. It is therefore important to consider the characteristics of groundwater behavior to improve slope disaster prevention. Kiyomizu-dera experienced major slope failures in 1972, 1999, and 2013, and a large slope failure occurred nearby in 2015. The two most recent events occurred after observation of precipitation and groundwater conditions began at the site in 2004. In this research, we determine the relationship between rainfall and groundwater level using both a full-scale model experiment and field measurements. Results indicate a strong connection between rainfall intensity and the rate of increase in groundwater level, indicating that it is possible to predict changes in the groundwater level due to heavy rainfall.
In disaster response, wherein many organizations undertake activities simultaneously and in parallel, it is important to unify the overall recognition of the situation through information sharing. Furthermore, each organization must respond appropriately by utilizing this information. In this study, we developed the Shared Information Platform for Disaster Management (SIP4D), targeted at government offices, ministries, and agencies, to carry out information sharing by intermediating between various information systems. We also developed a prototype of the National Research Institute for Earth Science and Disaster Resilience (NIED) Crisis Response Site (NIED-CRS), which provides the obtained information on the web. We applied these systems to support disaster response efforts in the 2016 Kumamoto earthquakes and other natural disasters, and analyzed the effects of and issues experienced with the information sharing systems. As effects, we found 1) increased overall efficiency, 2) the validity of sharing alternative information, and 3) the possibility of using the system as a basis for information integration. As future issues, we highlight the need for 1) advance loading of data, 2) machine readability of top-down data, and 3) identification of the common minimum required items and standardization of bottom-up data.
The purpose of this paper is to consider the essential concept by which to formulate standardized information that supports effective disaster response. From the experiences of past disasters, we have learned that disaster response organizations could not work effectively without information sharing. In the context of disaster response, the purpose of “information sharing” is to ensure common recognition of the disaster situation being confronted. During the Kumamoto earthquake, we provided a set of disaster information products to disaster response organizations to support their relief activities. Based on the real disaster response experience, we extracted issues of information sharing between various organizations. To resolve these issues, we discuss the concept of information sharing first, and then consider the quality of information that supports disaster response activities by referring to the information needs of emergency support organizations such as the Disaster Medical Assistance Team (DMAT). We also analyze the Basic Disaster Management Plan published by the Central Disaster Management Council and extract a common disaster-information set for governmental organizations. As a result, we define the “Standard Disaster-information Set” (SDS) that covers most disaster response information needs. Based on the SDS, we formulate intermediate information products for disaster response that provide consistent information of best-effort quality, named the “Standardized Disaster-information Products” (SDIP). By utilizing the SDIP, disaster response organizations are able to consolidate the common recognition of disaster situations without consideration of data availability, update timing, reliability, and so on.
In order to efficiently gather and effectively utilize information fragments collected in the initial stage of disaster response, those who utilize shared information need to determine which information to gather and conduct appropriate processing as necessary. On the occasion of the 2016 Kumamoto earthquakes, the National Research Institute for Earth Science and Disaster Resilience (NIED) sent a resident researcher to the Kumamoto Prefectural Office the following day to implement disaster information support, which included organizing various pieces of disaster information collected via telephone, fax, and the like on a WebGIS to generate an information map that was then provided to bodies carrying out disaster response. In light of this series of disaster information support activities, this article analyzes the requirements for utilizing disaster information at a disaster response site; in other words, it addresses the problem of effectively utilizing a large amount of shared information in conducting disaster response activities. As a result, an outline of the information items necessary for the utilization of disaster information has become clear. This suggests how a system could be designed for each disaster response body to utilize disaster information in carrying out activities at the disaster site.
As our daily lives and socioeconomic activities have increasingly come to depend on information systems and networks, the impacts of disruptions to these systems and networks have also become more complex and diversified.
In urban areas, where people, goods, money, and information are highly concentrated, the possibility of chain failures and confusion beyond our expectations and experience is especially high.
The vulnerabilities in our systems and networks have become the targets of cyber attacks, which increasingly cause socioeconomic problems. To counter these attacks, technological countermeasures alone are insufficient; countermeasures such as the development of professional skills and organizational response capabilities, as well as the implementation of cyber security schemes based on public-private partnerships (PPP) at the national level, must be carried out as soon as possible.
In this JDR mini special issue on Cyber Security, I have tried to expand the scope of traditional cyber security discussions, which focus mainly on technological aspects. I have also succeeded in including non-technological aspects so as to provide feasible measures that will help us prepare for, respond to, and recover from socioeconomic damage caused by advancing cyber attacks.
Finally, I am truly grateful for the authors’ insightful contributions and the referees’ acute professional advice, which together make this JDR mini special issue a valuable contribution to making our society more resilient to incoming cyber attacks.
With society’s increasing dependence on information technology (IT) systems, it is becoming increasingly difficult to resolve safety problems related to IT systems through conventional information security technology alone. Accordingly, under the heading of “IT risk” research, we have been investigating ways to address broader safety problems that arise in relation to IT systems themselves, along with the services and information they handle, in situations that include natural disasters, malfunctions, and human error, as well as risks arising from wrongdoing. Through our research, we confirmed that a risk communication-based approach is essential for resolving IT risk problems, and clarified five issues that pertain to a risk-based approach. Simultaneously, as tools to support problem resolution, we developed a multiple risk communicator (MRC) for consensus formation within organizations, along with Social-MRC for social consensus formation. The results of our research are detailed in this paper.
In this paper, we discuss the current situation and problems of cyberattacks from multiple viewpoints, and propose a guideline for future countermeasures. First, we provide an overview of some trends in cyberattacks using various survey data and reports. Next, we examine a new cyberattack countermeasure to control Internet use and propose a specific guideline. Specifically, we propose an Internet user qualification system as a policy to maintain cyber security and discuss ways to realize the system, the expected effects, and problems to be solved.
This paper introduces previous studies that propose a model to support decision-making on information security risk treatment by the top management of an organization, together with an assessment of the model using statistical data. Statistical data are used because the data necessary for information security risk treatment are generally not disclosed for security reasons, making verification with actual data difficult.
This paper therefore proposes improvements to the assessment of the model using statistical data: a method to calculate the values used in the model so that they are closer to actual data, yielding more effective results from the model.
In order to operate the Internet of Things (IoT) or Cyber-Physical Systems (CPS) in the real world, the system must be structured to incorporate people in the real world as part of its process: Human-in-the-Loop CPS (HITLCPS). With people thus incorporated, the system must have a secure structure in order to continue operating normally. With sensors, actuators, and other devices connected in a network, it becomes vulnerable to cyberattacks; hence, its framework must be resilient and secure in order to ensure its safety in the face of any disturbance. In this paper, we describe a safety-based secure system structure using a STAMP model and a covariance structure.
The purpose of this study is to illustrate how exercises can serve as a driving force in improving an organization’s cyber security preparedness. The degree of cyber security preparedness varies significantly among organizations, which implies that training and exercises must be tailored to specific capabilities. In this paper, we review the National Institute of Standards and Technology (NIST) cybersecurity framework, which formalizes the concept of a tier measuring the degree of preparedness. Subsequently, we examine the types of exercises available in the literature and propose guidelines that assign specific exercise types, aims, and participants to each level of preparedness. The proposed guidelines should facilitate the reinforcement of cybersecurity risk management practices, reduce resource misuse, and lead to a smooth improvement of capabilities.