In this paper we discuss projection pursuit in three dimensions. In previous work, projection pursuit has in practice been limited to one and two dimensions when searching for interesting structures in data. We identify structures in data with three-dimensional projection pursuit that cannot be found with lower-dimensional projection pursuit. The Friedman index is chosen as the projection index to be extended to three dimensions, and is compared with the moments index proposed by Nason in 1995. We also demonstrate the effectiveness of the method with numerical examples. The examples show that three-dimensional projection pursuit can reveal three-dimensional structures in a high-dimensional data space that are difficult to find with two-dimensional projection pursuit.
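As a rough illustration of the projection pursuit idea (not the Friedman index itself), the sketch below searches random three-dimensional orthonormal projections of a toy data set and scores each with a simple kurtosis-based departure-from-normality index; the function names and all data values are invented for illustration.

```python
import math
import random

random.seed(0)

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors via classical Gram-Schmidt."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

def moment_index(values):
    """Simple non-normality score: absolute excess kurtosis of one coordinate."""
    n = len(values)
    m = sum(values) / n
    var = sum((x - m) ** 2 for x in values) / n
    kurt = sum((x - m) ** 4 for x in values) / (n * var * var)
    return abs(kurt - 3.0)

def project(data, basis):
    """Project each 5-dimensional row onto the three basis directions."""
    return [[sum(x * b for x, b in zip(row, direction)) for direction in basis]
            for row in data]

# Toy 5-dimensional data: Gaussian noise plus a bimodal structure in one axis.
data = [[random.gauss(0, 1) for _ in range(4)] +
        [random.choice([-3, 3]) + random.gauss(0, 0.3)] for _ in range(200)]

best_score, best_basis = -1.0, None
for _ in range(50):  # random search over 3-D projection bases
    basis = gram_schmidt([[random.gauss(0, 1) for _ in range(5)]
                          for _ in range(3)])
    projected = project(data, basis)
    score = sum(moment_index([row[k] for row in projected]) for k in range(3))
    if score > best_score:
        best_score, best_basis = score, basis
```

A real implementation would replace the random search with numerical optimization of the chosen projection index over orthonormal 3-D bases.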
In medical science and biology, individuals are often divided into groups in advance by some criteria, and statistical processing is then carried out on the grouped observations. In this case, the midpoint of each interval into which the observations are grouped is treated as a representative value, and corrections such as Sheppard's correction may be applied when calculating statistics such as the mean and variance. However, this procedure may sometimes be inappropriate in practice, because the number of intervals is often small or the interval lengths are not uniform. Instead, the probability that an observed value falls into an interval can be obtained by assuming some underlying distribution that is appropriate for describing the phenomenon. The likelihood can then be constructed, and inference about the parameters of the distribution can be carried out based on that likelihood. In practice, it is desirable to carry out the inference on a family of distributions, or on a data-adaptive distribution, that includes the plausible underlying distributions and, if possible, the normal distribution with its well-known statistical properties. In this paper, for the case where the underlying distribution is unknown, the power normal distribution, which includes the normal distribution as a special case, is adopted, and the fitting of the power normal distribution to grouped observations is treated specifically. Through examples cited from the published literature and from simulated data, it is shown that interpretation can be carried out by exploiting the normality obtained after transforming the observations, performing statistical processing on the transformed scale, and inverse-transforming the results back to the original scale of the power normal distribution.
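A minimal sketch of the grouped-data likelihood idea, assuming a Box-Cox-type power transformation and crude moment estimates of the mean and standard deviation on the transformed scale (the paper's actual estimation procedure may differ); the interval boundaries and counts below are made up:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def box_cox(x, lam):
    """Power transformation of positive x; lam = 0 gives the log transform."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def grouped_loglik(bounds, counts, lam):
    """Grouped-data log-likelihood under normality on the transformed scale.

    mu and sigma are crudely estimated from transformed interval midpoints;
    a full treatment would maximize the likelihood over them jointly.
    """
    mids = [box_cox(0.5 * (a + b), lam) for a, b in zip(bounds, bounds[1:])]
    n = sum(counts)
    mu = sum(c * m for c, m in zip(counts, mids)) / n
    sd = math.sqrt(sum(c * (m - mu) ** 2 for c, m in zip(counts, mids)) / n)
    ll = 0.0
    for (a, b), c in zip(zip(bounds, bounds[1:]), counts):
        # Probability mass the model assigns to this interval.
        p = (norm_cdf((box_cox(b, lam) - mu) / sd) -
             norm_cdf((box_cox(a, lam) - mu) / sd))
        ll += c * math.log(max(p, 1e-300))
    return ll

# Hypothetical grouped observations on non-uniform positive intervals.
bounds = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
counts = [10, 25, 30, 25, 10]

# Grid search over the power parameter lambda.
best_lam = max([l / 10.0 for l in range(-10, 11)],
               key=lambda l: grouped_loglik(bounds, counts, l))
```

With doubling interval widths and symmetric counts as above, the search tends to favor a power parameter near the log transform.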
An optimal scaling method is proposed for analyzing a set of mobility tables that have the same row and column categories but are observed in different cases. The method is an extended version of Adachi's (1997) constrained homogeneity analysis for a single mobility table. In the proposed method, the categories are represented as case-invariant points in a low-dimensional space, and the mobility trends are represented as case-varying vectors in that space. The optimal coordinates of the points and vectors are obtained analytically by a simple eigendecomposition. The resulting configuration makes it easy to grasp the mobility trend for each case and to compare trends between cases, as illustrated by examples. The relation of the method to three-way asymmetric multidimensional scaling is also discussed.
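The eigendecomposition step can be illustrated generically with power iteration on a small symmetric matrix; this is not the paper's actual formulation, and the matrix values below are made up:

```python
import math

def power_iteration(matrix, iters=200):
    """Leading eigenpair of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient at convergence equals the leading eigenvalue.
    eigval = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return eigval, v

# Hypothetical small symmetric cross-product matrix, such as might arise
# from stacking several mobility tables (the values are invented).
S = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.2],
     [0.5, 0.2, 2.0]]

top_value, top_vector = power_iteration(S)
```

In the paper's setting the eigenvectors would supply the low-dimensional coordinates of the category points; here only the numerical routine is shown.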
The importance of education that values the individual characteristics of students is often discussed. However, it is very difficult to develop methods that give students such education. In this paper, I propose a statistical method that is useful for finding a vector representing the characteristics of each student. I model each student's standardized observation vector by his/her mean vector and his/her time-dependent characteristic vector. I then propose taking each student's mean vector, standardized by the inverse of its covariance matrix Σ, as his/her first characteristic vector. Furthermore, I propose taking the first eigenvector obtained through the generalized spectral decomposition of each student's within-student covariance matrix Ωi with respect to Σ as his/her second characteristic vector for the period. Finally, I show the vectors representing the characteristics of typical students.
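A minimal numerical sketch of the two proposed characteristic vectors, assuming hypothetical 2×2 matrices Σ and Ωi and a hypothetical mean vector; the generalized eigenvector of Ωi with respect to Σ is found here by power iteration rather than an explicit spectral decomposition:

```python
import math

def inv2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical pooled covariance, one student's mean, and within covariance.
Sigma = [[2.0, 0.5], [0.5, 1.0]]
mu_i = [1.0, -0.5]                   # student i's mean vector
Omega_i = [[1.5, 0.3], [0.3, 0.8]]   # student i's within-period covariance

# First characteristic vector: the mean standardized by the inverse of Sigma.
first_vec = matvec(inv2(Sigma), mu_i)

# Second characteristic vector: dominant eigenvector of inv(Sigma) @ Omega_i,
# i.e. the generalized eigenproblem Omega_i v = lambda * Sigma v,
# solved here by a few hundred steps of power iteration.
A = matmul(inv2(Sigma), Omega_i)
v = [1.0, 1.0]
for _ in range(200):
    w = matvec(A, v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
second_vec = v
```

For data with more than two dimensions the same computation would use a general matrix inverse and eigensolver.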