The role of Behaviormetrics is to develop and apply quantitative methodologies for analyzing phenomena concerning human behavior in various fields. To quantify, one must sooner or later carry out data analysis in the research process; data analysis, for example, is indispensable both for building models and for testing them. Consequently, Behaviormetrics has much to do with Statistics, the branch of science concerned with data and data analysis, and some standard methodologies of statistical data analysis are particularly useful for behaviormetricians. Statistics itself, meanwhile, is undergoing a dramatic change brought about by the rapid improvement of both computing environments and the power of computers themselves. First, with easy access to computers, the population engaged in data analysis has grown greatly, not only in the natural sciences and technology but also in the social sciences and the humanities. While this growth obviously reflects the availability of many new, easy-to-use software packages, it is also a powerful stimulus for developing still better software. Secondly, the computational revolution exerts a direct influence on statistical theory itself, summarized by the appearance in Statistics, in the late 1970s and early 1980s, of so-called computer-intensive methods. Some of these methods had been of merely theoretical interest to statisticians but are now ready to be put to practical use; others are totally innovative and would have been unthinkable even 20 or 30 years ago. Here we introduce some of these new methods for data analysis and discuss their potential for practical use in the near future. We realize that practitioners will need time to become at ease with any of these methods; history shows, however, that this is true of any new idea in science and technology. 
For example, factor analysis and many other multivariate statistical methods, once beyond our reach, are now employed by many of us as confidently as the more familiar t-test or analysis of variance (although we cannot say their application is entirely free of problems). In the same way, we have sound reason to believe that the day is not far off when many of these new methods will be applied widely and efficiently as powerful, versatile tools. In what follows, we introduce some of these computer-intensive methods, discuss why and how they were conceived, and indicate their implications for applications. Our discussion is divided into two parts, each of considerable length and relatively independent of the other. With ample practical examples, Part I gives an easy introduction to several methods of data analysis that use the computer intensively. Section 1 deals briefly with the historically intimate relationship between computation and Statistics, Section 2 concerns nonlinear regression analysis, and Section 3 discusses computer-aided design of experiments; Part I ends with a brief summary in Section 4. Part II deals with the bootstrap, which may be regarded as the typical example of a computer-intensive method in Statistics; it covers almost all research on the bootstrap and related studies up to the present day. The historical background and a quick introduction to the bootstrap are given in Sections 1 and 2, respectively. The applications of the bootstrap to error estimation, mainly of biases and variances, and to the construction of confidence intervals are treated in detail in Sections 3 and 4, respectively. Section 5 treats a comparatively less known application, namely regression analysis; little is known in particular about the bootstrap in non-parametric regression. In carrying out bootstrapping, it is of great importance to find efficient resampling schemes, to save time and money, and this is the topic of Section 6. 
The last section, Section 7, is devoted to a discussion of ongoing research.
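As a foretaste of the bootstrap applications treated in Part II, the basic idea of estimating a standard error by resampling can be sketched in a few lines. The code below is a minimal illustration, not the method of any particular section: the data are invented for demonstration, and the function name and parameters are our own.

```python
import random
import statistics


def bootstrap_se(data, stat=statistics.mean, B=1000, seed=0):
    """Estimate the standard error of `stat` by drawing B resamples
    of the same size as `data`, sampling with replacement, and taking
    the standard deviation of the resampled statistics."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(data)
    replicates = [stat([rng.choice(data) for _ in range(n)]) for _ in range(B)]
    return statistics.stdev(replicates)


# Invented sample, for illustration only.
sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8]
se_hat = bootstrap_se(sample)
```

With only eight observations the bootstrap standard error of the mean comes out close to the textbook formula s/√n, which is the point of the method: the same resampling recipe applies unchanged to statistics for which no such formula exists.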