Ouyou Toukeigaku (Japanese Journal of Applied Statistics)
Online ISSN : 1883-8081
Print ISSN : 0285-0370
ISSN-L : 0285-0370
Volume 30, Issue 2
  • Mitsuhiro Mazda, Hiroyuki Minami, Masahiro Mizuta
    2001 Volume 30 Issue 2 Pages 81-90
    Published: November 20, 2001
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    In this paper, we propose an implementation of SIRPP based on parallel processing with PVM (Parallel Virtual Machine).
    In regression analysis, a large number of explanatory variables can make data analysis difficult. When some explanatory variables are useless for predicting the value of the response variable, explanatory variable selection is very useful. But if all of the explanatory variables are related to the response variable, we instead search for linear combinations of explanatory variables with projection pursuit regression or ACE (Alternating Conditional Expectations). These two methods, however, assume special models.
    Sliced Inverse Regression (SIR; Li, 1991) is one approach to reducing the number of explanatory variables in regression analysis. SIR does not discard any of the explanatory variables themselves but reduces the dimension of the space of explanatory variables. It is based on the model (SIR model)
    y = f(β_1 x, β_2 x, …, β_K x, ε),
    where x is the vector of p explanatory variables, β_k (k = 1, 2, …, K) are unknown row vectors, ε is independent of x, and f is an arbitrary unknown function on R^{K+1}.
    Mizuta (1999) proposed an algorithm for the SIR model based on projection pursuit, named SIRPP (Sliced Inverse Regression with Projection Pursuit). SIRPP performs excellently in finding {β_k}, but the algorithm demands substantial computing power, and it is the projection pursuit step that takes most of the time. To overcome this drawback, we parallelize SIRPP with PVM and demonstrate its effectiveness through numerical examples.
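    As a rough illustration of the dimension reduction that the SIR model describes, the sketch below implements the classical slicing estimator of Li (1991), not the SIRPP or PVM-parallelized algorithm of the paper; the function name, toy data, and parameter choices are illustrative assumptions only.

```python
import numpy as np

def sir_directions(x, y, n_slices=10, n_components=2):
    """Classical slicing estimator for the SIR model y = f(b_1 x, ..., b_K x, e).

    Returns estimated e.d.r. directions as rows (analogous to the beta_k).
    This is Li's (1991) estimator, not the SIRPP algorithm.
    """
    n, p = x.shape
    # Standardize x: z = (x - mean) Sigma^{-1/2}
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    z = (x - mu) @ inv_sqrt

    # Slice the sorted response and average z within each slice
    order = np.argsort(y)
    m = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        zbar = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(zbar, zbar)

    # Leading eigenvectors of the slice-mean matrix, mapped back to the x scale
    w, v = np.linalg.eigh(m)
    top = v[:, np.argsort(w)[::-1][:n_components]]
    return (inv_sqrt @ top).T

# Toy example: y depends on x only through one linear combination,
# so one estimated direction should roughly recover (1, -1, 0, 0, 0) up to scale.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 5))
y = np.sin(x @ np.array([1.0, -1.0, 0.0, 0.0, 0.0])) + 0.1 * rng.normal(size=500)
print(sir_directions(x, y, n_slices=10, n_components=1))
```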
    Download PDF (541K)
  • Kanae Takekoshi, Manabu Iwasaki
    2001 Volume 30 Issue 2 Pages 91-106
    Published: November 20, 2001
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    In some experimental studies, such as clinical trials for new drug approvals, data may be obtained in multiple stages. In such cases, the sample sizes of later stages are often estimated based on the results of earlier stages. In order to estimate the required sample sizes, we have to know the relevant parameter values. For example, we need to know the parameter values under the null and alternative hypotheses in order to calculate the power of the test to be performed. The power calculated from the true parameter values is called the "true power" in this paper. Such parameter values are generally unknown and have to be estimated, and the power calculated from the estimates is called the "estimated power".
    In the present study, we focus on testing the difference between two binomial probabilities under a two-stage sampling scheme. The test procedure considered here is Fisher's exact test. The sample sizes required to achieve statistical significance with a pre-specified probability are estimated by exact calculation of binomial probabilities, except when the sample sizes are large.
    Power assessment is carried out under several conditions that are often encountered in practical clinical trials, and the true power and the estimated power are compared numerically. It is observed that the "estimated power" often overestimates the corresponding "true power". This finding has practical implications. Since it is generally impossible to know the true parameter values, we have to estimate the required sample sizes from the estimated power; in such cases the estimated sample sizes may be somewhat smaller than the sizes actually needed. In order to make the estimated sample size closer to the true sample size, we must know, at least approximately, how the "estimated power" differs from the "true power".
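    A minimal sketch of how such an exact power can be computed, assuming SciPy's fisher_exact and binom are available: every pair of binomial outcomes is enumerated, tested, and weighted by its probability under the chosen probabilities. The function name, sample sizes, and probabilities below are illustrative assumptions, and this single-stage calculation ignores the two-stage scheme studied in the paper.

```python
from itertools import product
from scipy.stats import binom, fisher_exact

def exact_power_fisher(n1, n2, p1, p2, alpha=0.05):
    """Exact power of two-sided Fisher's exact test for H0: p1 = p2,
    computed by enumerating every possible pair of binomial outcomes."""
    power = 0.0
    for x1, x2 in product(range(n1 + 1), range(n2 + 1)):
        table = [[x1, n1 - x1], [x2, n2 - x2]]
        _, pval = fisher_exact(table, alternative="two-sided")
        if pval <= alpha:
            # Probability of this outcome under the assumed (p1, p2)
            power += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return power

# "True power" at the true probabilities versus an "estimated power"
# plugged in from first-stage estimates (illustrative values only).
print(exact_power_fisher(40, 40, 0.30, 0.60))   # power at the true values
print(exact_power_fisher(40, 40, 0.25, 0.65))   # power at estimated values
```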
    Download PDF (636K)
  • Manabu Iwasaki
    2001 Volume 30 Issue 2 Pages 107-118
    Published: November 20, 2001
    Released on J-STAGE: December 02, 2009
    JOURNAL FREE ACCESS
    Statistical inference for a single unknown parameter θ of a population distribution often involves finding a solution to a non-linear equation f(θ) = 0. Computation of the maximum likelihood estimate and calculation of the confidence limits of a confidence interval are typical examples. Instead of directly solving f(θ) = 0, a promising iterative method is to solve an equivalent fixed-point equation θ = g(θ), which can be obtained by reformulating the original equation.
    In this paper, we consider the iterative method θ_{t+1} = g(θ_t) in detail. A sufficient condition for unique convergence has been given in terms of a Lipschitz condition, which essentially states that |g′(θ)| should be less than 1 on a closed interval I. An alternative sufficient condition, somewhat weaker than this one, is given in this article; it allows the slope of g(θ) to exceed 1. A simple example numerically illustrates the behavior of the present iterative method in some detail.
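    A minimal sketch of the iteration θ_{t+1} = g(θ_t), using a classroom-style fixed-point formulation that is not necessarily the example in the paper: the maximum likelihood estimate of the mean θ of a zero-truncated Poisson with sample mean x̄ solves x̄ = θ/(1 − e^{−θ}), which can be rewritten as θ = g(θ) with |g′(θ)| = x̄·e^{−θ} < 1 near the solution.

```python
import math

def fixed_point(g, theta0, tol=1e-10, max_iter=200):
    """Iterate theta_{t+1} = g(theta_t) until successive values agree to tol."""
    theta = theta0
    for _ in range(max_iter):
        theta_next = g(theta)
        if abs(theta_next - theta) < tol:
            return theta_next
        theta = theta_next
    raise RuntimeError("fixed-point iteration did not converge")

# Classroom-style example (illustrative, not taken from the paper):
# the zero-truncated Poisson MLE solves theta = xbar * (1 - exp(-theta)).
xbar = 2.5
g = lambda theta: xbar * (1.0 - math.exp(-theta))
print(fixed_point(g, theta0=xbar))
```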
    Interesting statistical examples are also presented to demonstrate that the present iterative method is useful and very powerful. One example comes from a classroom exercise, and the others are taken from the recent statistical literature. A message of this paper is to emphasize the importance of numerical computation in everyday statistical data analysis, as well as in the education and training of statistical methodology.
    Download PDF (434K)