JOURNAL OF THE JAPAN STATISTICAL SOCIETY
Volume 43, Issue 1
Articles
  • Hiroyuki Kato
    Volume 43 (2013) Issue 1 Pages 1-16
    Released: March 19, 2014
This paper presents a moving average of independent, normally distributed random variables that approximates a stochastic process whose sample paths are periodic (we call it a periodic stochastic process). Since a periodic stochastic process does not have a spectral density, it cannot be represented directly as a moving average via the Wold decomposition theorem. The results of this paper are twofold. First, we point out that the theorem originally proved by Slutzky (1937) is unsatisfactory in the sense that the moving average process he constructed does not converge to any process in L2 as the number of summed white-noise terms tends to infinity, even though its spectral distribution converges weakly to a step function, which is the spectral distribution of a periodic stochastic process. Second, we propose a new moving average process that approximates a nontrivial periodic stochastic process both in L2 and almost surely.
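The contrast the abstract draws can be illustrated numerically. Below is a minimal Python sketch, not the paper's construction: a Slutzky-type iterated moving average of Gaussian white noise (repeated two-term smoothing and differencing), whose sample paths become increasingly sinusoidal, alongside a random-phase cosine, the simplest stochastic process with periodic sample paths. The smoothing and differencing counts are illustrative choices.

```python
# Illustrative sketch (not the paper's construction): a Slutzky-type
# iterated moving average of Gaussian white noise next to a random-phase
# cosine, a simple stochastic process with periodic sample paths.
import numpy as np

rng = np.random.default_rng(0)

def slutzky_moving_average(n, n_smooth, n_diff):
    """Apply n_smooth rounds of two-term smoothing and n_diff rounds of
    differencing to Gaussian white noise of length n."""
    x = rng.standard_normal(n)
    for _ in range(n_smooth):
        x = x[1:] + x[:-1]      # smoothing: x_t + x_{t-1}
    for _ in range(n_diff):
        x = x[1:] - x[:-1]      # differencing: x_t - x_{t-1}
    return x / x.std()          # normalize the variance

def periodic_process(t, lam=0.3):
    """Random-phase cosine: periodic sample paths, no spectral density."""
    phase = rng.uniform(0.0, 2.0 * np.pi)
    return np.cos(lam * t + phase)

t = np.arange(200)
ma = slutzky_moving_average(1000, n_smooth=20, n_diff=20)[:200]
per = periodic_process(t)
print(ma[:5])
print(per[:5])
```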
  • Hajime Yamato
    Volume 43 (2013) Issue 1 Pages 17-28
    Released: March 19, 2014
The Ewens sampling formula is well known as the distribution of a random partition of the positive integer n. We derive an Edgeworth expansion for the number of distinct components of the Ewens sampling formula. It differs from the Edgeworth expansion for a sum of independent and identically distributed random variables: it involves the digamma function of the parameter of the Ewens sampling formula. In particular, for the random permutation, the Edgeworth expansion contains Euler's constant. The expansion is examined numerically through its graph.
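For readers who want to check the digamma-function mean numerically, here is a short simulation sketch. It relies on the standard representation of the number of distinct components K_n as a sum of independent Bernoulli(θ/(θ+i)) variables, i = 0, ..., n-1 (the Feller coupling), which gives E[K_n] = θ(ψ(θ+n) - ψ(θ)); the sample size and parameter values are illustrative.

```python
# Simulation sketch for K_n, the number of distinct components under the
# Ewens sampling formula with parameter theta, via independent Bernoullis.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)

def sample_K(n, theta, size):
    """Draw `size` copies of K_n as a sum of independent Bernoullis."""
    p = theta / (theta + np.arange(n))          # success probabilities
    return (rng.random((size, n)) < p).sum(axis=1)

n, theta = 1000, 1.0                            # theta = 1: random permutation
k = sample_K(n, theta, size=100_000)
mean_exact = theta * (digamma(theta + n) - digamma(theta))
print(f"simulated mean {k.mean():.4f}  vs  digamma formula {mean_exact:.4f}")
print(f"log n + Euler's constant = {np.log(n) + np.euler_gamma:.4f}")
```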
  • Toshio Ohnishi, Takemi Yanagimoto
    Volume 43 (2013) Issue 1 Pages 29-55
    Released: March 19, 2014
Two Bayesian prediction problems in the context of model averaging are investigated under dual Kullback-Leibler divergence losses, the e-divergence and m-divergence losses. The optimal predictors under the two losses are shown to satisfy interesting saddlepoint-type equalities: the optimal predictor under the e-divergence loss balances the log-likelihood ratio against the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference against the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy under the e-divergence and m-divergence losses, respectively, showing that moderately enlarging the log-likelihood or the Shannon entropy leads to the optimal predictors. In each divergence loss case we derive a robust predictor, in the sense that its posterior risk is constant, by minimizing a certain convex function. The Legendre transformation induced by this convex function implies an inherent duality in each Bayesian prediction problem.
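The dual structure of the two losses can be previewed with a toy computation. The sketch below uses standard information-geometry facts rather than the paper's derivation: with posterior weights w_i over candidate densities p_i, the average risk Σ_i w_i KL(p_i || q) is minimized by the arithmetic "m-mixture", while Σ_i w_i KL(q || p_i) is minimized by the normalized geometric "e-mixture". Whether this labeling matches the paper's e- and m-divergence conventions is an assumption.

```python
# Toy illustration of the dual KL barycenters (standard facts, not the
# paper's derivation): the m-mixture and e-mixture each minimize the
# posterior-averaged KL risk in one of the two directions.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.1, 0.3, 0.6])
w = np.array([0.4, 0.6])                          # posterior model weights

m_mix = w[0] * p1 + w[1] * p2                     # arithmetic mean
e_mix = p1 ** w[0] * p2 ** w[1]
e_mix /= e_mix.sum()                              # normalized geometric mean

def m_risk(q): return w[0] * kl(p1, q) + w[1] * kl(p2, q)
def e_risk(q): return w[0] * kl(q, p1) + w[1] * kl(q, p2)

# Each mixture beats the other under its own loss.
print(m_risk(m_mix), "<", m_risk(e_mix))
print(e_risk(e_mix), "<", e_risk(m_mix))
```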
  • Masakazu Fujiwara, Tomohiro Minamidani, Isamu Nagai, Hirofumi Wakaki
    Volume 43 (2013) Issue 1 Pages 57-78
    Released: March 19, 2014
Principal component analysis (PCA) is one method for reducing the dimension of the explanatory variables, although the principal components are derived using all of the explanatory variables. Several authors have proposed a modified PCA (MPCA), which obtains the principal components from a selected subset of the explanatory variables (see, e.g., Jolliffe (1972, 1986), Robert and Escoufier (1976), Tanaka and Mori (1997)). However, MPCA uses all of the selected explanatory variables for every principal component, so some components may involve superfluous variables. Hence, in the present paper, we propose a generalized PCA (GPCA), obtained by extending MPCA through a partitioning of the explanatory variables. We estimate the unknown coefficient vector in the linear regression model based on the result of a GPCA, and we also propose improvements to the method that reduce its computational cost.
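As a rough illustration of building each component from its own subset of variables, the following sketch performs principal-component regression with a hypothetical partition of the explanatory variables; it does not reproduce the authors' GPCA estimation or variable-selection procedure.

```python
# Principal-component regression with a hypothetical variable partition:
# each component is built from its own subset of explanatory variables,
# in the spirit of GPCA (the partition below is an illustrative choice).
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.8, 0.2]) + 0.1 * rng.standard_normal(n)

def first_pc_scores(X_sub):
    """Scores on the first principal component of a column subset."""
    Xc = X_sub - X_sub.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

# Hypothetical partition: component 1 from variables {0,1,2},
# component 2 from variables {3,4,5}.
scores = np.column_stack([first_pc_scores(X[:, [0, 1, 2]]),
                          first_pc_scores(X[:, [3, 4, 5]])])
Z = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)     # regress y on the scores
print("intercept and component-score coefficients:", beta)
```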
  • Kazumasa Mori, Hiroshi Kurata
    Volume 43 (2013) Issue 1 Pages 79-89
    Released: March 19, 2014
This paper studies the prediction of factor scores with correlation-preserving linear predictors. We consider three new risk functions, obtained by modifying typical risk functions in the literature, and derive the optimal correlation-preserving linear predictors with respect to them. A necessary and sufficient condition for the predictors to coincide is also derived.
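For context, the classical example of a correlation-preserving linear predictor is the Anderson-Rubin predictor, whose covariance equals that of the factors. The sketch below verifies this at the population level for an orthogonal factor model (Φ = I); it uses standard factor-analysis facts, not the paper's new risk functions.

```python
# Population-level check that the Anderson-Rubin predictor A x preserves
# the factor covariance: for Sigma = L L' + Psi and Phi = I, Cov(A x) = I.
import numpy as np

rng = np.random.default_rng(3)
p, k = 5, 2
L = rng.standard_normal((p, k))                  # factor loadings
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))     # unique variances
Sigma = L @ L.T + Psi                            # model covariance

def inv_sqrt(M):
    """Inverse symmetric square root of a positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

# Anderson-Rubin weights: A = (L' Psi^-1 Sigma Psi^-1 L)^(-1/2) L' Psi^-1
Pi = np.linalg.inv(Psi)
A = inv_sqrt(L.T @ Pi @ Sigma @ Pi @ L) @ L.T @ Pi

cov_scores = A @ Sigma @ A.T                     # covariance of A x
print(np.round(cov_scores, 6))                   # equals I_k
```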