Recent seismological studies and numerical simulations based on rate- and state-dependent friction laws indicate that regularly occurring characteristic earthquake models can be used for long-term earthquake prediction. GPS data and repeating-earthquake data have enabled us to estimate the distribution of slip deficits on plate boundaries. Thus, after we experience one cycle of a large earthquake, we will be able to estimate the slip distribution, as well as the occurrence time of the next earthquake, to a considerable extent by monitoring the slip deficit. Against this background, I propose the following three projects to be carried out in this new century: 1) improving the accuracy and resolution of slip-deficit distribution estimates; 2) investigating the nature of asperities and aseismic slip areas; 3) verifying slow slip in the deeper extensions of active faults.
Extensive studies utilizing data from a high-density observation network deployed in the Tokai region have brought about remarkable progress in understanding the subduction process of the Philippine Sea plate. The configuration of the Philippine Sea slab has been modeled using precisely determined hypocentral data (Yamazaki and Oida, 1985; Noguchi, 1996; Harada et al., 1998), and the locked region between the Philippine Sea and Eurasian plates has been inferred from spatial variation in the density and focal mechanisms of microearthquakes (Matsumura, 1997). The coupling region has also been estimated by inversion analysis using GPS data (Sagiya, 1998, 1999). Although the result suggesting a large back-slip in the offshore region south of Omaezaki does not seem consistent with the locked region inferred to exist beneath the western coast of Suruga Bay, we consider it a great achievement to be able to discuss the coupling state on the basis of observational data. In addition, simulation studies of the subduction process modeled on the Tokai region have developed considerably in recent years, indicating that intermediate-term and short-term precursors should appear and be detectable before a great interplate earthquake (Kato and Hirasawa, 1999). Three-dimensional simulations supply more realistic information about the phenomena that occur in the preparatory process of a great earthquake (Kuroki et al., 2001). Recent detailed analyses of the fault motion of the 1944 Tonankai and 1946 Nankai earthquakes, as well as acoustic sounding investigations of the upper crustal structure in their focal regions, have provided useful information for assessing the focal area of the coming Tokai earthquake.
In order to utilize the scientific attainments obtained since the proclamation of the Large-Scale Earthquake Countermeasures Act in 1978, the Central Disaster Prevention Council has begun to reconsider the focal model of the Tokai earthquake and to propose a new one. Despite the remarkable progress in studies of tectonic processes in the Tokai region, the probability of a successful prediction of the Tokai earthquake does not seem to have increased much in the last 20 years. At present we are less optimistic about the possibility of detecting immediate precursors than we were when the Earthquake Assessment Committee for the Areas under Intensified Measures against Earthquake Disasters (EAC) was established. Experiments on rock fracture, analyses of seismograms recording the commencement of fault motion, computer simulations of the repeated occurrence of large interplate earthquakes, and other relevant studies all suggest that immediate precursors would not be as large as we had expected. In light of this view of the magnitude of precursors, the Japan Meteorological Agency revised the criteria for convening the EAC in 1998. The new criteria were prescribed based upon the detection level of volumetric strainmeters. This revision is considered to express the intention that JMA will do its best to catch the immediate precursors. Some researchers have presented the idea that recently observed seismic quiescence and crustal deformation in the Tokai region are warnings that the Tokai earthquake is actually impending. The data and their interpretations, however, seem insufficient for the majority of researchers to reach a common opinion about the forecast. The divergence of opinions concerning the various changes in observational data obtained by the densely deployed network itself shows the difficulty of predicting the Tokai earthquake.
We have not yet achieved a unified method to interpret the data or a scenario to determine how precursory phenomena will appear in sequence. It is very important to acquire more knowledge and know-how concerning these problems.
The Active Fault Research Center (AFRC) was launched in April 2001 as one of the major research units of the new Geological Survey of Japan (GSJ), following the establishment of the National Institute of Advanced Industrial Science and Technology (AIST). AFRC is one of the organizations responsible for active fault studies in Japan under the Japanese government's Headquarters for Earthquake Research Promotion (HQERP). AFRC conducts the following research closely related to active faults. Study of active faults: HQERP has selected 98 active faults that are subject to prompt survey. These major active faults will be surveyed using various methods such as geological mapping, trenching, and boring, in order to reveal the distribution and extent of each fault, the timing of the last rupture event, and recurrence intervals. On the basis of such paleoseismological data, the AFRC evaluates the possibility of future rupture and the earthquake magnitude for each fault. Study of fault segmentation: A long active fault consists of several segments, which can produce both single- and multiple-segment earthquakes. The important factors in segmentation study are the geometry, length, displacement, and paleoseismicity of the surface faults of recent earthquakes, which reveal the characteristics of fault segments. Modeling faulting behavior using seismological and geodetic data, for evaluating future earthquakes on long active faults, is also an essential part of this study. The North Anatolian fault in Turkey and the San Andreas fault in California, USA are the sites of two major ongoing international field studies being undertaken in cooperation with MTA-Turkey and the US Geological Survey, respectively. Studies in earthquake hazard assessment: On the basis of active fault study and fault modeling, AFRC estimates ground shaking from earthquakes.
Earthquake ground shaking and damage are strongly dependent upon the epicentral distance, the way rupture propagates on a fault, and the characteristics of the subsurface structure. Considering these factors, simulation of the generation and propagation of seismic waves will be studied. Maintenance of an active fault database: AFRC publishes Annual Reports on Active Fault and Paleoearthquake Researches, and also publishes active fault strip maps and 1:500,000-scale seismotectonic maps. There are plans to publish maps showing possible hazards from future earthquakes, in addition to maps showing rupture probabilities of major active faults in Japan. The AFRC will collect as much existing geological data as possible related to active faults and their activities. The AFRC will also create and maintain a database of long-term evaluation data and survey results for major active faults. All literature and databases will be open to the public and researchers. The author sincerely hopes that the new AFRC will become a key research institute for earthquake hazard mitigation in Japan and play an important role as the national center for active faults in Japan.
After the devastating Kobe earthquake of 1995, the Headquarters for Earthquake Research Promotion was established in the Prime Minister's office. Among the mandates given to the headquarters are the collection, analysis, and evaluation of the results of surveys and observations related to earthquakes. The headquarters set up a plan to survey 98 major active faults in Japan for studies of paleoearthquakes on those faults. Mainly on the basis of the survey results and investigations of historical earthquakes, the Earthquake Research Committee of the headquarters evaluated earthquake potential and made public long-term earthquake forecasts for many source regions. These will be the basis for a new probabilistic estimate of seismic hazards throughout Japan, to be completed by March 2005. The size of a future earthquake is estimated from the empirical relationship between earthquake magnitude and fault length or source area. The location of the event is fairly precisely known from active fault data and historical earthquake catalogs, except for certain types of earthquakes such as deep events. For the occurrence time, we can only give a probabilistic estimate. Several renewal models have been tested against available recurrence data in Japan by the Sub-committee for Long-Term Evaluation: the lognormal, gamma, Weibull, double exponential, and BPT (Brownian passage time) distributions. Since none of the distributions fits the data significantly better than the others, the BPT model was chosen for the clear physical meaning of its underlying process and the stability of its parameters. The BPT model consists of a regular loading process plus irregular Brownian-motion disturbances. We found that a common relative standard deviation explains the data set better than a different relative standard deviation for each sequence, so far as the four examples of Japanese shallow crustal earthquake sequences are concerned.
By extrapolating these results, we use the common relative standard deviation to evaluate shallow crustal events. When sufficient data on the co-seismic slip of successive events are available, the time-predictable model is used for evaluations of subduction-zone earthquakes.
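The BPT model described above is mathematically an inverse Gaussian distribution with mean recurrence time μ and aperiodicity (relative standard deviation) α, and the long-term forecast is the conditional probability of an event within a forecast window given the time elapsed since the last event. A minimal sketch of that calculation follows; the parameter values are invented for illustration and are not those of any committee evaluation:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, via erfc to preserve precision in the tails."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bpt_cdf(t: float, mu: float, alpha: float) -> float:
    """CDF of the BPT (inverse Gaussian) distribution with mean recurrence
    time mu and aperiodicity alpha."""
    if t <= 0.0:
        return 0.0
    u1 = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    u2 = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    return phi(u1) + math.exp(2.0 / alpha**2) * phi(-u2)

def conditional_prob(elapsed: float, window: float,
                     mu: float, alpha: float) -> float:
    """P(event within `window` years | no event in the `elapsed` years)."""
    f_now = bpt_cdf(elapsed, mu, alpha)
    f_later = bpt_cdf(elapsed + window, mu, alpha)
    return (f_later - f_now) / (1.0 - f_now)

# Illustrative numbers only: mean recurrence 120 yr, aperiodicity 0.24,
# 100 yr elapsed since the last event, 30-yr forecast window.
p30 = conditional_prob(elapsed=100.0, window=30.0, mu=120.0, alpha=0.24)
print(f"30-year conditional probability: {p30:.2f}")
```

Note that the second CDF term pairs a huge exponential with a tiny normal tail, so computing the tail with `erfc` rather than `1 - erf` matters numerically.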
Tsunamis have caused heavy damage to coastal areas in Japan, where active countermeasures have been carried out to reduce the possibility of disaster. Tsunami warnings, which provide arrival times and tsunami heights before the waves strike, were started in 1952 and have recently been improved by incorporating a database of computer simulations, in which the initial condition of a tsunami is estimated from seismic information alone. Numerical simulation combined with real-time observation is proposed to improve the estimation of initial tsunami conditions. Simulations that evaluate tsunami damage, such as casualties and destroyed houses, have been developed using GIS data and a model of human evacuation behavior. Studies of historical tsunamis using a sedimentological approach are introduced to clarify the detailed behavior of recent and historical events on land.
Knowledge of the geometry and connectivity of seismogenic and active fault systems is key to understanding ongoing active tectonic processes and evaluating the risk of destructive earthquakes. Knowing the geometry of the deeper extensions of active faults in the seismogenic zone contributes to estimating source parameters for scenario earthquakes, such as fault geometry, co-seismic displacement, and the location of possible asperities. Together with monitoring seismicity and surface deformation, delineating the geometry of seismogenic faults allows us to construct quantitative models of crustal deformation. Seismic reflection profiling is a powerful tool for discerning the deep geometry of faults. Since the Kobe earthquake of 1995, deep seismic profiling has been carried out across several active faults in the Japanese islands, such as the Ou Backbone Range in northern Honshu. Through these seismic experiments, using vibroseis trucks and explosive sources, the geometry of seismogenic and active fault systems has been successfully obtained despite difficulties posed by rough topography, high attenuation, and man-made noise. Deep seismic profiling has great potential for the direct imaging of deep crustal structures, including deep fault geometry.
We propose a recipe for predicting strong ground motions from scenario earthquakes caused by active faults. From recent developments in waveform inversion analysis for estimating rupture processes during large earthquakes, we have learned that strong ground motion is governed by slip heterogeneity rather than by the total moment on the fault plane. The source model is characterized by three kinds of parameters, which we call outer fault parameters, inner fault parameters, and extra parameters. The outer fault parameters characterize the entire source area: total fault length, fault width, and seismic moment. The total fault length (L) is related to the grouping of active faults, i.e., the sum of the fault segments. The fault width (W) is related to the thickness of the seismogenic zone. The total fault area S (=LW) follows a self-similar scaling relation with the seismic moment (M0) for moderate-size crustal earthquakes and departs from the self-similar model for very large crustal earthquakes. The locations of the fault segments are estimated from geological and geomorphological surveys of the active faults and/or the monitoring of seismic activity. The inner fault parameters characterize fault heterogeneity inside the fault area. Asperities are defined as regions that exhibit large slip relative to the average slip over the fault area. The relationship between the combined area of asperities and the seismic moment M0 also satisfies a self-similar scaling relation. The number of asperities is related to the segmentation of active faults. The rake angles of slip on the asperities should be estimated from geological surveys and/or geodetic measurements. The extra fault parameters are related to the propagation pattern of rupture within the source area. Rupture nucleation and termination are related to the geometrical patterns of the active-fault segments.
The recipe proposed here constructs a procedure for characterizing these outer, inner, and extra parameters for scenario earthquakes. We have confirmed that the scaling relations for the inner fault parameters, as well as the outer fault parameters, are valid for characterizing earthquake sources and calculating ground motions from recent large earthquakes, such as the 1995 Kobe (Japan), 1999 Kocaeli (Turkey), and 1999 Chi-Chi (Taiwan) earthquakes. We have also tested the recipe by estimating strong ground motion during the 1948 Fukui (Japan) earthquake. The simulated ground motions clearly explain the damage distribution in the Fukui basin.
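As a rough numerical sketch of how the outer fault parameters constrain earthquake size, self-similar (constant stress drop) scaling gives M0 roughly proportional to S^(3/2). The stress-drop value, fault dimensions, and the omitted geometric constant below are generic textbook assumptions for illustration, not the specific relations of the recipe:

```python
import math

def seismic_moment(length_km: float, width_km: float,
                   stress_drop_mpa: float = 3.0) -> float:
    """Order-of-magnitude self-similar estimate: M0 ~ delta_sigma * S**1.5,
    with S = L * W (the exact prefactor depends on fault geometry)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)   # S = L * W
    return stress_drop_mpa * 1e6 * area_m2**1.5      # N*m

def moment_magnitude(m0: float) -> float:
    """Hanks-Kanamori moment magnitude (M0 in N*m)."""
    return (math.log10(m0) - 9.1) / 1.5

# Hypothetical two-segment fault: total length 40 km, with the 15-km width
# limited by the thickness of the seismogenic zone.
m0 = seismic_moment(length_km=40.0, width_km=15.0)
print(f"M0 = {m0:.2e} N*m, Mw = {moment_magnitude(m0):.1f}")
```

Doubling the fault length (e.g., by grouping an additional segment) raises the estimated magnitude by roughly 0.3 under this scaling, which is why segment grouping dominates the outer-parameter stage of such recipes.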
Based on the Proceedings of the Japan Society of Civil Engineers (JSCE) Earthquake Engineering Symposium, recent trends of research in this field are analyzed statistically. The areas attracting the most attention are, for example, earthquake ground motion and soil characteristics. On the whole, most topics presented at recent symposia are closely related to the serious damage caused by the 1995 Hyogo-ken Nanbu (Kobe) earthquake. Following the 1995 earthquake, JSCE issued proposals on three separate occasions to improve the earthquake resistance of civil engineering structures, which are summarized here. Among the various items covered by the proposals, very strong near-field ground motion, known as Level 2 earthquake motion, needs to be considered in earthquake-resistant design. To take Level 2 earthquake motion into consideration, performance-based design is a common concept that should be applied to all types of civil engineering structures made of steel, concrete, and/or soil.
The real cause of the heavy damage to buildings during the Hyogo-ken Nanbu (Kobe) earthquake of 1995 is presented, and the future direction for disaster mitigation is discussed. First, a seismological approach to strong motion prediction is shown, and simulated strong motions for the Hyogo-ken Nanbu (Kobe) earthquake are presented. The most important feature of the near-fault motion is found to be the occurrence of high-amplitude velocity pulses, with a peak ground velocity of more than 100 cm/s and a peak ground acceleration of more than 800 cm/s². To predict the amplitude and period of such a velocity pulse, we need to predict the size of an asperity (a patch with large slip) and the slip velocity within it. Once we obtain ground motions simulated over the whole Kobe area, we use them as input to our theoretical models, which are capable of simulating the damage to buildings. When we use the ultimate strengths assumed in design, the estimated damage is much larger than that observed. We therefore estimate the actual ultimate strengths of buildings so as to reproduce the damage ratios observed in Kobe for different building heights and different construction ages. We then apply the established building models to the near-fault ground motions observed during the Tottori-ken Seibu earthquake of 2000 in order to verify their applicability. The models show no heavy damage except at one site where soft surface soil strongly amplified the ground motion, which corresponds to the fact that no heavy damage was observed in the area. Finally, we discuss the problems of current anti-seismic design practices and countermeasures for earthquake disaster based on the facts described above. The basic consensus in the structural engineering community on the seismic safety of our buildings is that they are sufficiently safe as long as they are designed to the current seismic code enforced after 1982.
It is true that buildings constructed after 1982 performed quite well, but we have shown that this was due to additional strengths not considered in the original design. There is clearly a large gap between the theoretical models used for design and the actual buildings we have constructed. Unless we quantify this gap, we cannot take effective countermeasures against future destructive earthquakes.
Organizations ranging from national and local governments to the private sector have carried out earthquake damage estimations for their facilities by comparing their strength with the earthquake demand, and on the basis of these estimations have established plans for disaster reduction measures. Although these earthquake damage estimations consume large budgets and much time, in many cases the results are neither used well nor trusted, especially by specialists. Why? The major reason is that the results are calculated from chains of assumptions; in other words, the probability that the assumptions adopted in the damage estimation calculation will actually hold is very low. Furthermore, simulations are carried out under static conditions, while the real phenomenon is very dynamic. Although a static condition is used in many simulations, the time factor is certainly very important, especially for human-related activities. For a reliable assessment of earthquake disaster damage, three time factors must be considered. The first defines the characteristics of the social system and the development level of the affected area. The second is related to the point in time when the event occurs: season, month, day, hour, etc. The last is the time elapsed after the event. Recently, estimates of the characteristics of an earthquake (location, magnitude, mechanism, etc.) and of the strong ground motion at each site have been improving in accuracy. However, even if such estimates become reliable, the damage caused by the event cannot be discussed quantitatively, as there is great uncertainty due to time factors, and research on how to manage this uncertainty is scarce. In this paper, the direction of future research on earthquake damage evaluation and disaster countermeasure plans is discussed with emphasis on the above-mentioned issues, and some studies on these matters are introduced.
Earthquake disasters are determined not only by seismological characteristics, but also by the social activities and circumstantial conditions of the affected region. It is important to learn from actual earthquake damage and loss, and it is also necessary to investigate important hidden criteria. The 1995 Hanshin-Awaji (Kobe) earthquake disaster occurred in the early morning, at 5:46 on the 17th of January. People were killed in their homes, and most human casualties were found in rooms. In this paper, a different earthquake disaster scenario is simulated: a disaster at 11:46 on 17 January 1995, when social activity is assumed to be very high. If a large earthquake were to strike at that time, fires and human casualties would be expected in various places and facilities, not only in residential areas but also in stores, transportation, industrial centers, and so on. Emergency response would be expected to become even more confused.
The objectives of this paper are to define the process by which catastrophic disasters occur and to propose a new philosophy for reducing the damage generated by extreme natural forces such as typhoons, earthquakes, and heavy rainfall. A disaster reduction policy is strongly recommended against catastrophic disasters, because we have not yet succeeded in completely preventing damage due to large-scale disasters. Acceptable and tolerable risks are new ideas for establishing a future policy of countermeasures against catastrophic disasters.
In May 2001 Shizuoka Prefecture announced its third estimation of the disasters that could be caused by the Tokai earthquake. Social conditions have changed and earthquake-resistant buildings have been developed since 1993, when the previous estimation was made; 1,400 billion yen, for example, has been invested in preparations against earthquake disasters. New damage estimation methods were developed after studying the great Hanshin-Awaji (Kobe) earthquake, and this study contributed to improving methods of predicting disasters. Ishibashi's model of 1976 and the Central Disaster Prevention Council model of 1978 were combined for the new estimation. Shaking in the western part of Shizuoka Prefecture was predicted to be more intense than in the previous estimation. Disasters caused by earthquakes depend on the time of occurrence. In this study, disasters were estimated for eight cases, such as 5 a.m. in winter, noon in spring or fall, and 6 p.m. in winter; these cases also distinguished whether or not people were notified in advance. As a result, it was found that the worst disaster would occur at 5 a.m. in winter, when most people are asleep in bed, as in the Kobe earthquake; 5,851 lives would be lost in that case. To prepare further for an earthquake, it is important to return to the basic concept of disaster countermeasures, to fix the targets of countermeasure implementation, to announce the results of the estimation to the public, and to reflect the disaster scenario in local disaster countermeasure plans.
It has been announced that several earthquakes will occur beneath the Tokyo capital region in the 21st century. The Tokyo Metropolitan Government (TMG) implements two kinds of hazard assessment for countermeasures against earthquake disasters. One is the damage estimation for Tokyo and a disaster scenario for an earthquake occurring just beneath central Tokyo. According to the report published in 1997, approximately 378,000 buildings would be burnt down, 43,000 buildings would be demolished, and 7,200 people would be killed; disruption of the water supply would continue for one month and the interruption of piped gas for two months, and more than two million people would be evacuated to shelters located mainly in public schools. The other assessment is research on the vulnerability of built-up areas to earthquakes, measured for each community area. According to the area vulnerability research, the most vulnerable areas form a ring zone around the CBD and a finger zone along Japan Railway's Chuo line from central Tokyo to the western suburban region (Pictorial 4-1). These zones are made up of crowded areas of wooden houses built without city planning or building permission. TMG learned much from the Kobe earthquake of 1995 and has expanded its earthquake countermeasures to make Tokyo an earthquake-resistant city, to secure effective disaster countermeasures, quick recovery, and reconstruction to protect the livelihood of the populace, and to ensure urban redevelopment. The most important measure is the Promotion Plan for Earthquake-resistant City Development Projects, because the implementation of such projects can reduce the damage in Tokyo (Pictorial 4-2). The Preparedness Plan for Urban Reconstruction is one of the new measures established after the Kobe earthquake.
Disaster information systems have been drastically improved since the Great Hanshin-Awaji (Kobe) earthquake of January 17, 1995. For example, the seismic intensity scales of the Japan Meteorological Agency were revised, and a new information system (commentary information and observation information) related to the Tokai earthquake was introduced. Moreover, the government's Earthquake Investigation Research Promotion Headquarters, established after the Great Hanshin-Awaji earthquake, began to announce evaluations of aftershocks of major earthquakes and long-term evaluations of active faults in terms of probability. When such information is disseminated, how should emergency organizations and citizens interpret and respond to it? For instance, if it is announced that the probability of an aftershock of magnitude 6 is 20%, they will wonder whether that is high, and whether or not they should take countermeasures. When government organizations announce the probability of an aftershock, it is necessary that they comment on the degree of danger and the proper response to the expected aftershock. Active fault activity was also studied more closely after the Great Hanshin-Awaji earthquake. The government selected 98 main active faults and began research on their histories and possible future activity. As with aftershocks, the results of predictions are announced as probabilities over a 30-year period. But the probabilities are low, because active faults move only once every 2,000 or 3,000 years; announcing such results is therefore likely to make inhabitants feel relieved. We must study how to disseminate such information for disaster mitigation.
A large-scale earthquake causes various types of destruction to our society. Loss of life, physical damage, and other so-called direct damage, such as the loss of social infrastructure or the collapse of buildings and facilities, can be expected first. Next is indirect economic damage, for instance due to the halting of business activities and transportation. One can also imagine the economic damage caused by the interruption of computer networks or information network services; it is conceivable that damage suffered by business partners may result in severe damage to one's own operations. Economic damage due to confusion or collapse of the social, economic, and financial systems should also be recognized as indirect damage and included with the damages indicated above. Such indirect damage is expected to have a far more serious influence on the economy than direct damage, especially for highly developed economic and financial cities like Tokyo. A huge earthquake has yet to hit a city that functions internationally as an economic and financial capital like Tokyo; for this reason, research on seismic damage prediction from this point of view has been insufficient. We should take efficient measures, using our precious social resources, to reduce seismic damage to a level that our society can accept. To accomplish this, it is very important, and should be done first, to understand the severity of the damage an earthquake can bring to our society. This paper attempts to clarify the characteristics of indirect seismic damage and to suggest a realistic prediction process, emphasizing the importance of predicting indirect seismic damage. The items that should be clarified in such an indirect damage prediction, and the steps to accomplish them, are as follows: 1) preparation of social damage scenarios based on the occurrence of a presumed earthquake; 2) qualitative/quantitative analysis of the seismic damage based on such scenarios; 3) consideration of socially acceptable limits of seismic damage, and preparation of plans to control the seismic damage to keep it within an allowable range; 4) preparation of countermeasures for the mitigation of seismic damage and consideration of the responsibility for executing those countermeasures.
This paper introduces the estimation of earthquake losses and prevention measures for earthquake disasters in the context of earthquake insurance. Property insurance covers losses that may occur in the future, and the estimation of earthquake loss is a major factor in determining insurance premium rates (prices). For earthquake insurance, simulations designed to estimate earthquake losses are conducted based upon earthquakes that have occurred in the past; this enables insurance premium rates to be calculated by estimating the loss that is likely to occur in the future. Many local governments estimate earthquake damage in order to plan disaster prevention measures, and the results are compared with the earthquake loss estimates used for earthquake insurance. In recent years, earthquake insurance has increasingly been required to function not only as an ex post facto measure but also as an advance measure to mitigate earthquake damage. In the premium rate adjustment conducted this year, an insurance premium discount program for highly earthquake-resistant residences was introduced. This program is expected to serve as an incentive to increase the number of residences with high earthquake-resistant capacity.
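The link between loss estimation and premium rates can be sketched as an expected-annual-loss calculation with a discount for earthquake-resistant construction. The scenario frequencies, damage ratios, loading factor, and discount rate below are all invented for illustration and do not reflect actual rating practice:

```python
# Each scenario: (annual frequency of occurrence, expected damage ratio
# of the insured value if it occurs).  All figures are hypothetical.
scenarios = [
    (1.0 / 100.0, 0.30),   # large nearby event
    (1.0 / 30.0,  0.05),   # moderate regional event
]

def pure_premium_rate(scenario_list):
    """Expected annual loss per unit of insured value."""
    return sum(freq * damage for freq, damage in scenario_list)

def premium(insured_value, scenario_list, loading=0.35,
            resistant_discount=0.0):
    """Premium = insured value * pure rate * (1 + expense loading),
    optionally discounted for earthquake-resistant construction."""
    rate = pure_premium_rate(scenario_list) * (1.0 + loading)
    return insured_value * rate * (1.0 - resistant_discount)

base = premium(10_000_000, scenarios)
discounted = premium(10_000_000, scenarios, resistant_discount=0.10)
print(f"base: {base:,.0f} yen/yr, resistant: {discounted:,.0f} yen/yr")
```

In this framing, the discount program described above simply lowers the effective rate for residences whose expected damage ratio is demonstrably smaller, which is what makes it an incentive for earthquake-resistant construction.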
Land use control, as a long-term hazard mitigation strategy, is one of the most effective measures for decreasing the vulnerability of urban society, given the tight linkage between the administrative systems of disaster prevention and urban planning. We carried out a questionnaire survey of 694 local urban authorities (all cities (shi) and special wards (tokubetsu-ku) in Japan) to assess the linkage between planning sections and disaster management sections from the viewpoints of the utilization and disclosure of hazard information, resident participation in the planning process, and the implementation of land use control measures and the difficulties of their introduction. The survey showed that hazard information collected or estimated by the disaster prevention sections of local governments is neither widely disclosed to the public nor considered by urban planning sections in the process of zoning regulation and development control. We also investigated three pioneering cases of land use control and planning for the mitigation of active fault hazards. Despite the lack of an earthquake fault zoning act in Japan, the cities of Matsumoto, Yokosuka, and Nishinomiya have resourcefully adopted unique measures: urban rehabilitation and community development procedures based upon a community hazard map; a legal district plan with building setbacks and open spaces along fault lines; and administrative guidance for active fault detection before development, respectively.
In the process of long-term recovery following a devastating earthquake disaster, the basic principle is to create a better society, treating the damage caused by the earthquake as “creative destruction”. By contrast, the basic principle for recovery from a flood disaster is to restore the damaged area to its condition before the disaster. It has been pointed out that the 1995 Hanshin-Awaji (Kobe) earthquake disaster was the first disaster that made the Japanese people recognize this difference between recovery from earthquake disasters and recovery from flood disasters. The long-term recovery process started on a trial-and-error basis, with an enormous volume of services. This paper reviews the planning mechanism of local restoration plans and the basic structure of physical restoration, economic revitalization, and social reconstruction. After evaluating the progress of restoration in the impacted area over the first five years, the remaining issues are analyzed.