JOURNAL OF THE JAPAN STATISTICAL SOCIETY
Online ISSN : 1348-6365
Print ISSN : 1882-2754
ISSN-L : 1348-6365
Volume 43, Issue 1
Articles
  • Hiroyuki Kato
    2013 Volume 43 Issue 1 Pages 1-16
    Published: August 08, 2013
    Released on J-STAGE: March 19, 2014
    JOURNAL FREE ACCESS
    This paper presents a moving average of independent normally distributed random variables that approximates a stochastic process whose sample paths are periodic (we call it the periodic stochastic process). Since the periodic stochastic process has no spectral density, it cannot be represented directly as a moving average via the Wold decomposition theorem. The results of this paper are twofold. First, we point out that the theorem originally proved by Slutzky (1937) is unsatisfactory in the sense that the moving average process he constructed does not converge to any process in L2 as the number of summed white-noise terms tends to infinity, even though its spectral distribution converges weakly to a step function, which is the spectral distribution of a periodic stochastic process. Second, we propose a new moving average process that approximates a nontrivial periodic stochastic process both in L2 and almost surely.
    Download PDF (169K)
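A rough numerical sketch of the contrast drawn in the abstract above: a harmonic Gaussian process A cos(λt) + B sin(λt) has a step-function spectral distribution and no spectral density, while a cosine-weighted moving average of Gaussian white noise concentrates its spectral mass near λ as the window grows. The weights, the frequency lam, and the window length n below are illustrative assumptions, not the construction proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.pi / 6          # target frequency of the periodic process (assumed)
n, T = 200, 2000         # MA window length, sample-path length (assumed)

# Harmonic process: A cos(lam t) + B sin(lam t), A, B iid N(0, 1).
# Its spectral distribution is a step function (no spectral density).
t = np.arange(T)
A, B = rng.standard_normal(2)
periodic = A * np.cos(lam * t) + B * np.sin(lam * t)

# Illustrative moving-average approximation: cosine-weighted window of
# Gaussian white noise; its spectral mass concentrates near +/- lam as n grows.
eps = rng.standard_normal(T + n)
c = np.cos(lam * np.arange(n))
c /= np.sqrt((c ** 2).sum())                  # normalize to unit variance
ma = np.array([c @ eps[s:s + n] for s in range(T)])

# The periodogram of the moving average peaks sharply near lam.
freqs = np.fft.rfftfreq(T, d=1.0) * 2 * np.pi
pgram = np.abs(np.fft.rfft(ma)) ** 2 / T
print("peak frequency:", freqs[pgram.argmax()], "target:", lam)
```

Increasing n sharpens the periodogram peak at lam, mimicking the weak convergence of the spectral distribution described in the abstract.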
  • Hajime Yamato
    2013 Volume 43 Issue 1 Pages 17-28
    Published: August 08, 2013
    Released on J-STAGE: March 19, 2014
    JOURNAL FREE ACCESS
    The Ewens sampling formula is well known as a distribution of a random partition of the positive integer n. We derive the Edgeworth expansion for the number of distinct components of the Ewens sampling formula. It differs from the Edgeworth expansion for a sum of independent and identically distributed random variables: it contains the digamma function of the parameter of the Ewens sampling formula. In particular, for the random permutation, the Edgeworth expansion contains Euler's constant. The Edgeworth expansion is examined numerically by means of its graph.
    Download PDF (244K)
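The abstract above concerns the number of distinct components K_n under the Ewens sampling formula. A minimal simulation sketch, using the standard representation of K_n as a sum of independent Bernoulli(θ/(θ+i−1)) variables, compares simulated moments with their exact values; the parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 1000, 20000             # hypothetical parameter values

# Number of distinct components K_n under the Ewens sampling formula:
# K_n = sum over i = 1..n of independent Bernoulli(theta / (theta + i - 1))
# (the "Chinese restaurant" representation).
p = theta / (theta + np.arange(n))
K = (rng.random((reps, n)) < p).sum(axis=1)

mean, var = p.sum(), (p * (1 - p)).sum()
print("simulated mean/var:", K.mean(), K.var())
print("exact mean/var    :", mean, var)
```

For θ = 1 (uniform random permutations) the exact mean is the harmonic number H_n ≈ log n + γ, which is how Euler's constant enters the expansion; for general θ it equals θ(ψ(θ+n) − ψ(θ)), with ψ the digamma function appearing in the abstract.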
  • Toshio Ohnishi, Takemi Yanagimoto
    2013 Volume 43 Issue 1 Pages 29-55
    Published: August 08, 2013
    Released on J-STAGE: March 19, 2014
    JOURNAL FREE ACCESS
    Two Bayesian prediction problems in the context of model averaging are investigated by adopting dual Kullback-Leibler divergence losses, the e-divergence and the m-divergence losses. The optimal predictors under the two losses are shown to satisfy interesting saddlepoint-type equalities: the optimal predictor under the e-divergence loss balances the log-likelihood ratio against the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference against the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy under the e-divergence loss and the m-divergence loss, respectively, showing that moderately enlarging the log-likelihood or the Shannon entropy leads to the optimal predictors. For each divergence loss we derive, by minimizing a certain convex function, a robust predictor in the sense that its posterior risk is constant. The Legendre transformation induced by this convex function implies that there is an inherent duality in each Bayesian prediction problem.
    Download PDF (223K)
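As background to the e-/m-divergence distinction in the abstract above: when averaging candidate densities f_i with weights w_i, the expected m-divergence Σ w_i KL(f_i ∥ q) is minimized by the arithmetic mixture, while the expected e-divergence Σ w_i KL(q ∥ f_i) is minimized by the normalized geometric mean. The sketch below illustrates only this standard fact with two hypothetical normal densities; it is not the paper's saddlepoint equalities or robust predictors.

```python
import numpy as np

# Grid for numerical normalization of the geometric (e-)mixture.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two hypothetical candidate models and averaging weights.
f1, f2 = normal_pdf(x, -1.0, 1.0), normal_pdf(x, 2.0, 1.5)
w1, w2 = 0.3, 0.7

m_opt = w1 * f1 + w2 * f2                     # arithmetic (m-)mixture
e_opt = f1 ** w1 * f2 ** w2                   # geometric (e-)mixture ...
e_opt /= e_opt.sum() * dx                     # ... renormalized to a density

print("total masses:", m_opt.sum() * dx, e_opt.sum() * dx)
```

The arithmetic mixture can be multimodal while the normalized geometric mean stays unimodal here; this asymmetry between the two averages is what the dual losses formalize.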
  • Masakazu Fujiwara, Tomohiro Minamidani, Isamu Nagai, Hirofumi Wakaki
    2013 Volume 43 Issue 1 Pages 57-78
    Published: August 08, 2013
    Released on J-STAGE: March 19, 2014
    JOURNAL FREE ACCESS
    Principal components analysis (PCA) is one method for reducing the dimension of the explanatory variables, although the principal components are derived from all of the explanatory variables. Several authors have proposed a modified PCA (MPCA), which obtains the principal components from a selected subset of the explanatory variables (see, e.g., Jolliffe (1972, 1986), Robert and Escoufier (1976), Tanaka and Mori (1997)). However, MPCA uses all of the selected explanatory variables for every principal component, so there may be superfluous variables in some of the components. Hence, in the present paper, we propose a generalized PCA (GPCA) by extending the partition of the explanatory variables. We estimate the unknown coefficient vector in the linear regression model based on the result of a GPCA, and we also propose some improvements in the method to reduce the computational cost.
    Download PDF (215K)
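To make the PCA / MPCA / GPCA distinction above concrete, the sketch below partitions the explanatory variables into blocks, extracts one leading principal component per block, and regresses the response on the block-wise scores. The partition, the simulated data, and the choice of a single component per block are hypothetical illustrations, not the authors' GPCA estimator or their cost-reduction devices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.standard_normal(n)

# Hypothetical partition of the explanatory variables: each block of
# columns supplies its own leading principal component, instead of one
# PCA on all columns (PCA) or on one selected subset (MPCA).
blocks = [[0, 1, 2], [3, 4, 5]]
Z = []
for cols in blocks:
    Xb = X[:, cols] - X[:, cols].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xb, full_matrices=False)
    Z.append(Xb @ Vt[0])                      # first PC score of this block
Z = np.column_stack(Z)

# Principal component regression on the block-wise scores.
design = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print("regression coefficients:", beta)
```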
  • Kazumasa Mori, Hiroshi Kurata
    2013 Volume 43 Issue 1 Pages 79-89
    Published: August 08, 2013
    Released on J-STAGE: March 19, 2014
    JOURNAL FREE ACCESS
    This paper studies the prediction of factor scores by correlation-preserving linear predictors. We deal with three new risk functions, obtained by modifying typical risk functions in the literature, and derive the optimal correlation-preserving linear predictors with respect to them. A necessary and sufficient condition for the predictors to coincide identically is also derived.
    Download PDF (127K)
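A minimal sketch of what "correlation-preserving" means for factor-score prediction, assuming a one-factor model with Cov(f) = I: the regression (Thurstone) predictor does not reproduce Cov(f), but rescaling it by the inverse square root of its own covariance does. The loadings below are hypothetical, and this illustrates the constraint only; the paper's three risk functions and optimal predictors are not reproduced here.

```python
import numpy as np

# Hypothetical one-factor model: x = Lambda f + e, Cov(f) = I,
# Cov(e) = Psi (diagonal), so Sigma = Lambda Lambda' + Psi.
Lam = np.array([[0.8], [0.7], [0.6], [0.5]])
Psi = np.diag(1 - (Lam ** 2).ravel())
Sigma = Lam @ Lam.T + Psi

def inv_sqrt(M):
    """Inverse symmetric square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

# Regression (Thurstone) predictor f_hat = A' x has Cov(f_hat) = C != I.
A = np.linalg.solve(Sigma, Lam)               # Sigma^{-1} Lambda
C = A.T @ Sigma @ A

# Rescaling by C^{-1/2} gives a correlation-preserving linear predictor:
# Cov(C^{-1/2} f_hat) = I = Cov(f).
B = A @ inv_sqrt(C)
print("Cov of corrected predictor:\n", B.T @ Sigma @ B)  # ~ identity
```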