Journal of the Japanese Society of Computational Statistics
Online ISSN : 1881-1337
Print ISSN : 0915-2350
ISSN-L : 0915-2350
Volume 22, Issue 1
Review
  • Geert Molenberghs, Geert Verbeke
    2009 Volume 22 Issue 1 Pages 1_1-1_32
    Published: 2009
    Released on J-STAGE: August 31, 2010
    JOURNAL FREE ACCESS
    Repeated measures are obtained whenever an outcome is measured repeatedly within a set of units. The fact that observations from the same unit will, in general, not be independent poses particular challenges to the statistical procedures used for the analysis of such data. The current paper provides an overview of frequently used statistical models for the analysis of repeated measurements, with emphasis on model formulation and parameter interpretation.
      Missing data frequently occur in repeated measures studies, especially in humans. An important source of missing data is patients who leave the study prematurely, so-called dropouts. When patients are evaluated only once under treatment, the presence of dropouts makes it hard to comply with the intention-to-treat (ITT) principle. However, when repeated measurements are taken, one can use the observed portion of the data to retrieve information on dropouts. Commonly used methods to analyze incomplete longitudinal clinical trial data include complete-case (CC) analysis and analysis using the last observation carried forward (LOCF). However, these methods rest on strong and unverifiable assumptions about the dropout mechanism. Over the last decades, a number of longitudinal data analysis methods have been suggested that provide a valid estimate of, for example, the treatment effect under less restrictive assumptions.
      We argue that direct likelihood methods, using all available data, require only the relatively weak missing at random (MAR) assumption. Weighted generalized estimating equations and multiple imputation are also discussed. Finally, because it is impossible to verify that the dropout mechanism is MAR, we argue that a sensitivity analysis, in which the assumption on the dropout mechanism is varied, should become a standard procedure when analyzing the results of a clinical trial, so that the robustness of the conclusions can be evaluated.
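      As an illustration of the contrast drawn above, the following sketch imputes a toy incomplete longitudinal data set by LOCF and then fits a linear mixed model directly to the incomplete data, which is valid under MAR. This is a minimal sketch only: the simulated data, the column names (subject, visit, y), and the choice of statsmodels' MixedLM are illustrative assumptions, not taken from the paper.

      # Contrast LOCF imputation with a direct-likelihood analysis that uses
      # all available data as-is. Toy data; all names are illustrative.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)

      # Toy longitudinal data: 40 subjects, 4 planned visits, monotone dropout.
      rows = []
      for subj in range(40):
          b = rng.normal()                 # subject-specific random intercept
          n_obs = rng.integers(2, 5)       # visits actually observed (2 to 4)
          for visit in range(n_obs):
              rows.append((subj, visit, 1.0 + 0.5 * visit + b + rng.normal(0, 0.5)))
      data = pd.DataFrame(rows, columns=["subject", "visit", "y"])

      # LOCF: re-insert the missing visits, carrying each subject's last
      # observed value forward.
      full_index = pd.MultiIndex.from_product([range(40), range(4)],
                                              names=["subject", "visit"])
      locf = (data.set_index(["subject", "visit"])
                  .reindex(full_index)
                  .groupby(level="subject").ffill()
                  .reset_index())
      print("LOCF mean at final visit:", locf.loc[locf.visit == 3, "y"].mean())

      # Direct likelihood: fit a linear mixed model to the incomplete data
      # directly; under MAR no imputation is needed for valid estimation.
      fit = smf.mixedlm("y ~ visit", data, groups=data["subject"]).fit()
      print(fit.params)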
Theory and Applications
  • Kazuyuki Koizumi, Takashi Seo
    2009 Volume 22 Issue 1 Pages 1_33-1_41
    Published: 2009
    Released on J-STAGE: August 31, 2010
    JOURNAL FREE ACCESS
    In this paper, we derive the exact distribution of a new test statistic for the equality of two mean vectors in the intraclass correlation model when monotone missing observations occur. Simultaneous confidence intervals for all contrasts of the two mean vectors are given by using the idea in Seo and Srivastava (2000). Finally, in order to evaluate the procedure proposed in this paper, we investigate the power function of the new test statistic and the widths of the simultaneous confidence intervals. Numerical results from a real example and a simulation study are presented.
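      For orientation, the sketch below sets up the intraclass correlation covariance structure Sigma = sigma^2 ((1 - rho) I + rho J) and runs a complete-case two-sample Hotelling T^2 test as a naive baseline. The paper's exact test, which additionally exploits the monotone missing observations, is not reproduced here; all names and settings are illustrative.

      # Naive baseline only: complete-case Hotelling T^2 under the intraclass
      # correlation model. The paper's exact statistic is NOT implemented here.
      import numpy as np
      from scipy import stats

      def intraclass_cov(p, sigma2, rho):
          """Sigma = sigma^2 * ((1 - rho) * I_p + rho * J_p)."""
          return sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

      rng = np.random.default_rng(1)
      p, n1, n2 = 4, 30, 30
      Sigma = intraclass_cov(p, sigma2=1.0, rho=0.3)
      x = rng.multivariate_normal(np.zeros(p), Sigma, size=n1)
      y = rng.multivariate_normal(np.full(p, 0.5), Sigma, size=n2)

      # Two-sample Hotelling T^2 with a pooled covariance matrix; with real
      # monotone missing data this step would discard the incomplete rows.
      d = x.mean(axis=0) - y.mean(axis=0)
      S = ((n1 - 1) * np.cov(x, rowvar=False) +
           (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
      T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
      F = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * T2
      print("p-value:", stats.f.sf(F, p, n1 + n2 - p - 1))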
  • Jinfang Wang, Ping Jing
    2009 Volume 22 Issue 1 Pages 1_43-1_56
    Published: 2009
    Released on J-STAGE: August 31, 2010
    JOURNAL FREE ACCESS
    A concept of matchability of survey data is introduced based on decompositions of the joint probability density functions. This definition of matchability naturally leads to restrictions on the joint distributions in the form of various conditional independence relations. The concept of partial matchability is defined as global matchability with respect to a subset of the underlying variables. Global matchability does not imply partial matchability, nor conversely; this phenomenon is an instance of Simpson's paradox. A numerical experiment is carried out to show possible merits of algorithms based on partial matchability. We also show numerically that when the ideal assumption of matchability holds only approximately, estimation accuracy is still guaranteed to some extent. An extension to the problem of matching three files is also briefly discussed.
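      A minimal sketch of the matching setting may help fix ideas: file A observes (X, Y), file B observes (X, Z), and under the matchability (conditional independence) assumption Y ⊥ Z | X, values of Z are imputed into file A by nearest-neighbour hot-deck matching on the common variable X. The data and variable names are hypothetical, and this is a generic matching device, not the algorithms studied in the paper.

      # Generic file matching under conditional independence Y ⊥ Z | X.
      # Toy data; not the paper's algorithm.
      import numpy as np

      rng = np.random.default_rng(2)
      n_a, n_b = 200, 300

      # Y and Z both depend on X and are conditionally independent given X
      # (the matchable case).
      x_a = rng.normal(size=n_a)
      y_a = 2.0 * x_a + rng.normal(scale=0.5, size=n_a)    # file A: (X, Y)
      x_b = rng.normal(size=n_b)
      z_b = -1.0 * x_b + rng.normal(scale=0.5, size=n_b)   # file B: (X, Z)

      # Hot-deck match: for each A record, borrow Z from the B record whose
      # X value is closest.
      nearest = np.abs(x_a[:, None] - x_b[None, :]).argmin(axis=1)
      z_imputed = z_b[nearest]

      # Under matchability, the (Y, Z) association is recovered through X.
      print("corr(Y, Z_imputed):", np.corrcoef(y_a, z_imputed)[0, 1])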
  • Xiaoling Dou, Shingo Shirahata
    2009 Volume 22 Issue 1 Pages 1_57-1_77
    Published: 2009
    Released on J-STAGE: August 31, 2010
    JOURNAL FREE ACCESS
    There are several methods for estimating regression functions and their derivatives. Among them, B-spline procedures and kernel procedures are known to be useful. However, it has not been established which procedure is better than the others. In this paper, we investigate the performance of these procedures by computer simulation.
      Two B-spline procedures are considered. The first estimates derivatives using a different roughness penalty for each degree d of the derivative; in this procedure, the smoothing parameters and the coefficients of the B-spline functions differ for each d. The second estimates the dth derivative simply by differentiating the estimated regression function d times; in this case, the regression function and its derivatives share a common coefficient vector of B-spline functions. Two kernel procedures are also considered. The first is constructed with the Gasser-Müller estimator and a global plug-in bandwidth selector. The second is local polynomial fitting with a refined bandwidth selector.
      As a result of our simulations, we find that the B-spline procedures give better estimates than the kernel ones when estimating regression functions. For derivatives, we also find that in the B-spline methods it is necessary to choose a different smoothing parameter (or coefficient vector) for each degree of derivative; between the two kernel methods, the Gasser-Müller procedure gives better results than local polynomial fitting in most cases. Furthermore, the first B-spline method still works better than the Gasser-Müller procedure in the central area of the domain of the functions, but in the boundary areas the Gasser-Müller procedure gives more stable derivative estimates than all the other methods.
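      The second B-spline procedure described above is straightforward to sketch with standard tools: fit the regression function once, then obtain the dth derivative by differentiating the fitted spline d times. In the sketch below, scipy's FITPACK-based smoothing spline stands in for the penalized B-spline estimators studied in the paper, and the smoothing factor s is fixed by hand rather than chosen by the paper's selection rules.

      # Second B-spline procedure (differentiate the fitted spline d times),
      # with scipy's smoothing spline as a stand-in for the paper's estimator.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(3)
      x = np.sort(rng.uniform(0, 2 * np.pi, 200))
      y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

      spl = UnivariateSpline(x, y, k=5, s=x.size * 0.04)  # quintic spline fit

      xg = np.linspace(0.5, 2 * np.pi - 0.5, 7)  # interior grid; boundaries
                                                 # are harder, as the paper notes
      for d in (0, 1, 2):
          est = spl.derivative(d)(xg) if d else spl(xg)
          # The true dth derivative of sin(x) is sin(x + d * pi / 2).
          err = np.abs(est - np.sin(xg + d * np.pi / 2)).max()
          print(f"d={d}: max abs error on interior grid = {err:.3f}")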
  • Yuta Minoda, Toshinari Kamakura, Takemi Yanagimoto
    2009 Volume 22 Issue 1 Pages 1_79-1_91
    Published: 2009
    Released on J-STAGE: August 31, 2010
    JOURNAL FREE ACCESS
    Bayesian estimation of the end point of a distribution is proposed and examined. For this problem, it is well known that the maximum likelihood method does not work well. By modifying the prior density in Hall and Wang (2005) and applying marginal inference, we derive estimators superior to existing ones. The proposed estimators are closely related to estimating functions that are known to outperform maximum likelihood equations. Another advantage of the proposed method is that it resolves the convergence problem. Our simulation results strongly support the superiority of the proposed estimators over existing ones in terms of mean squared error. Illustrative examples are also given.
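      As a self-contained toy illustration of why Bayesian endpoint estimation is attractive, the sketch below uses the classical conjugate case X ~ Uniform(0, theta) with a Pareto prior on theta, where the posterior is available in closed form; the maximum likelihood estimator max(X) always falls below the true end point, while the posterior mean shifts mass above it. This is not the paper's estimator, which modifies the Hall and Wang (2005) prior and applies marginal inference.

      # Toy conjugate example only: Uniform(0, theta) data, Pareto prior.
      # The paper's modified Hall-Wang estimator is NOT implemented here.
      import numpy as np

      rng = np.random.default_rng(4)
      theta_true, n = 10.0, 50
      x = rng.uniform(0, theta_true, size=n)

      # Pareto(scale=m0, shape=a) prior on theta gives the posterior
      # Pareto(max(m0, max(x)), a + n).
      m0, a = 1.0, 1.0
      m_post, a_post = max(m0, x.max()), a + n

      mle = x.max()                               # always below theta_true
      post_mean = a_post * m_post / (a_post - 1)  # pushes above max(x)
      print(f"MLE = {mle:.3f}, posterior mean = {post_mean:.3f}")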