Japanese Journal of Biometrics
Online ISSN : 2185-6494
Print ISSN : 0918-4430
ISSN-L : 0918-4430
Volume 27, Issue 2
Displaying 1-5 of 5 articles from this issue
Original Article
  • Takashi Sozu, Takeshi Kanou, Chikuma Hamada, Isao Yoshimura
    2006, Volume 27, Issue 2, Pages 83-96
    Published: December 01, 2006
    Released on J-STAGE: September 25, 2011
    JOURNAL FREE ACCESS
    This article proposes a method of power and sample size calculation for confirmatory clinical trials whose objective is to show superiority on all of the multiple primary variables, assuming normality of the variables. Since one-sided t-statistics are used to evaluate statistical significance, the power is calculated based on a Wishart distribution. Monte Carlo integration is used to calculate the expectation of the conditional power, conditioned on the Wishart variables, with random numbers generated by Bartlett's decomposition. Numerical examples revealed that the required sample size decreases as the correlation coefficient increases, although the dependence is small when the correlation coefficient is negative or when the effect sizes on which the power is calculated differ greatly between variables. A SAS program (version 9.1) for the proposed method is provided in the Appendix. (A brute-force Monte Carlo sketch follows this entry.)
    Download PDF (273K)
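    The method above integrates the conditional power over a Wishart distribution, and the authors supply a SAS program in the Appendix. The snippet below is only a rough brute-force Monte Carlo sketch of the same target quantity, assuming a two-arm trial with equal group sizes, unit-variance endpoints with a common correlation, and one-sided pooled-variance t-tests on every endpoint; the effect sizes, correlation values, and sample size are illustrative placeholders, not values from the paper.
```python
import numpy as np
from scipy import stats

def all_endpoints_power(n_per_arm, effect_sizes, corr, alpha=0.025,
                        n_sim=10000, seed=1):
    """Monte Carlo estimate of the probability that ALL one-sided two-sample
    t-tests are significant, for multivariate normal co-primary endpoints."""
    rng = np.random.default_rng(seed)
    k = len(effect_sizes)
    # Unit variances with a common correlation between endpoints.
    sigma = np.full((k, k), corr) + (1.0 - corr) * np.eye(k)
    crit = stats.t.ppf(1.0 - alpha, df=2 * n_per_arm - 2)
    hits = 0
    for _ in range(n_sim):
        treat = rng.multivariate_normal(effect_sizes, sigma, size=n_per_arm)
        ctrl = rng.multivariate_normal(np.zeros(k), sigma, size=n_per_arm)
        # Pooled-variance two-sample t statistic for each endpoint.
        sp2 = (treat.var(axis=0, ddof=1) + ctrl.var(axis=0, ddof=1)) / 2.0
        t = (treat.mean(axis=0) - ctrl.mean(axis=0)) / np.sqrt(sp2 * 2.0 / n_per_arm)
        hits += np.all(t > crit)
    return hits / n_sim

# Estimated power to show superiority on both endpoints, by correlation.
for rho in (0.0, 0.5, 0.8):
    print(rho, all_endpoints_power(n_per_arm=100, effect_sizes=[0.4, 0.4], corr=rho))
```
    Increasing n_per_arm until the estimate reaches a target power gives a crude sample-size search; the estimated power rises with the correlation, which mirrors the dependence of the required sample size on the correlation reported above.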
  • Tetsuji Ohyama, Kimio Yoshimura, Takashi Yanagawa
    2006, Volume 27, Issue 2, Pages 97-108
    Published: December 01, 2006
    Released on J-STAGE: September 25, 2011
    JOURNAL FREE ACCESS
    The problem of testing Hardy-Weinberg equilibrium (HWE) when the data are stratified into several strata is considered. In previous methods, the null hypothesis is that HWE holds and the alternative hypothesis is that it does not, so these methods cannot establish HWE positively. We therefore formulate the assessment of HWE as a problem of testing equivalence. Taking an odds ratio as the measure of disequilibrium, we assume that this ratio is common across strata. Two tests are proposed, based on the trinomial and the quadrinomial distribution, and they are shown to be asymptotically equivalent. The methods are applied to real data for illustration. (A simplified single-stratum sketch follows this entry.)
    Download PDF (211K)
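    The paper's tests are built on trinomial and quadrinomial distributions and accommodate stratification; those constructions are not reproduced here. As a rough single-stratum illustration of the equivalence formulation, the sketch below uses the disequilibrium odds ratio alpha = p_Aa^2 / (4 * p_AA * p_aa), which equals 1 under HWE, and applies a two one-sided tests (TOST) procedure to log(alpha_hat) with a delta-method standard error. The equivalence margins and genotype counts are illustrative assumptions.
```python
import numpy as np
from scipy import stats

def hwe_equivalence_test(n_AA, n_Aa, n_aa, lower=2/3, upper=3/2, alpha=0.05):
    """TOST for 'approximate HWE': conclude equivalence if the disequilibrium
    odds ratio  alpha_hat = p_Aa^2 / (4 * p_AA * p_aa)  is shown to lie within
    (lower, upper). Delta-method variance of log(alpha_hat) under a trinomial
    sample: 1/n_AA + 4/n_Aa + 1/n_aa."""
    log_or = np.log(n_Aa**2 / (4.0 * n_AA * n_aa))
    se = np.sqrt(1.0 / n_AA + 4.0 / n_Aa + 1.0 / n_aa)
    p_lower = stats.norm.sf((log_or - np.log(lower)) / se)   # H0: ratio <= lower
    p_upper = stats.norm.cdf((log_or - np.log(upper)) / se)  # H0: ratio >= upper
    p_tost = max(p_lower, p_upper)
    return log_or, se, p_tost, p_tost < alpha

# Illustrative genotype counts (close to HWE with allele frequency 0.6).
print(hwe_equivalence_test(n_AA=360, n_Aa=480, n_aa=160))
```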
  • Kimihiko Sakamoto, Yutaka Matsuyama, Yasuo Ohashi
    2006, Volume 27, Issue 2, Pages 109-119
    Published: December 01, 2006
    Released on J-STAGE: September 25, 2011
    JOURNAL FREE ACCESS
    Owing to the selection process in academic publication, any meta-analysis of published literature is more or less affected by so-called publication bias and tends to overestimate the effect of interest. Statistically, publication bias in meta-analysis is a selection bias that arises because the published studies are a non-random sample from the population of all completed studies, published and unpublished. Several authors have proposed modelling publication bias with a selection-model approach, which jointly models a weight function representing the publication probability of each study and a regression model for the outcome of interest. Copas (1999) showed that in this approach some of the model parameters are not estimable and that a sensitivity analysis should be conducted. In implementing Copas's sensitivity analysis of publication bias, a practical difficulty arises in choosing an appropriate range for the sensitivity parameters. In this article we propose a Bayesian hierarchical model that extends Copas's selection model and incorporates expert opinion as a prior distribution on the sensitivity parameters. We illustrate the approach with a meta-analysis of passive smoking and lung cancer. (A simplified sketch of the underlying selection model follows this entry.)
    Download PDF (234K)
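    The Bayesian hierarchical extension proposed in the article is not reproduced here. As background, the sketch below illustrates the simplest fixed-effect form of a Copas-type selection model on which such a sensitivity analysis rests: study i reports y_i with standard error s_i, is published only if a latent variable a + b/s_i + delta_i is positive, and the correlation rho between delta_i and the outcome error induces a bias of rho * s_i * lambda(a + b/s_i), where lambda is the inverse Mills ratio. For fixed sensitivity parameters (a, b) and an assumed rho, the bias is removed from each estimate before inverse-variance pooling; between-study heterogeneity and the estimation of rho are ignored, so this is a deliberately simplified illustration, not the authors' model.
```python
import numpy as np
from scipy.stats import norm

def copas_adjusted_pooled(y, s, a, b, rho):
    """Bias-adjusted fixed-effect pooled estimate under a simple Copas-type
    selection model, in which E[y_i | published] = mu + rho * s_i * lambda(a + b/s_i)
    and lambda(u) = phi(u) / Phi(u) is the inverse Mills ratio."""
    y, s = np.asarray(y, dtype=float), np.asarray(s, dtype=float)
    u = a + b / s
    mills = norm.pdf(u) / norm.cdf(u)
    y_adj = y - rho * s * mills      # subtract the selection-induced bias
    w = 1.0 / s**2                   # inverse-variance weights
    return float(np.sum(w * y_adj) / np.sum(w))

# Invented log odds ratios and standard errors (not the passive-smoking data).
y = [0.35, 0.22, 0.41, 0.10, 0.05]
s = [0.15, 0.20, 0.25, 0.10, 0.30]
# Sensitivity sweep: larger 'a' means weaker selection, hence a smaller adjustment.
for a in (-1.0, 0.0, 1.0):
    print(a, round(copas_adjusted_pooled(y, s, a=a, b=0.5, rho=0.7), 3))
```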
  • Hisao Takeuchi, Isao Yoshimura, Chikuma Hamada
    2006, Volume 27, Issue 2, Pages 121-137
    Published: December 01, 2006
    Released on J-STAGE: September 25, 2011
    JOURNAL FREE ACCESS
    A generalized hazards model incorporating a cubic B-spline function into the baseline hazard function (GHMBS) was proposed for estimating covariate effects in survival data analysis. The GHMBS unifies three types of hazard models, the proportional hazards model (PHM), the accelerated failure time model (AFTM), and the accelerated hazards model (AHM), so that the likelihood principle for estimation and hypothesis testing can be applied irrespective of the submodel (PHM, AFTM, or AHM). A procedure for adaptively choosing suitable knots from a set of candidate knots was proposed in order to obtain an appropriate baseline hazard function in the GHMBS. The characteristics of the proposal were evaluated in terms of the bias and mean squared error of the estimated covariate effects through Monte Carlo simulation experiments. A method for identifying the submodel appropriate for the data at hand, based on the GHMBS, was also proposed. The performance of this model-selection method was evaluated by the probability of selecting the true model in Monte Carlo simulation experiments based on the PHM and the AFTM; it achieved fairly high probabilities of identifying the true model. An application of the proposed method to actual clinical-trial data yielded a reasonable conclusion. (A sketch of the three submodel hazards under a shared spline baseline follows this entry.)
    Download PDF (495K)
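    The GHMBS itself (adaptive knot selection and joint likelihood estimation) is not reproduced here. The sketch below only illustrates the relationship it exploits: with an arbitrary positive baseline hazard defined as the exponential of a cubic B-spline, the hazards of the three submodels can be written as special cases of a general form h(t|x) = h0(t * exp(b1*x)) * exp(b2*x), with PHM (b1 = 0), AHM (b2 = 0), and AFTM (b1 = b2). The knots, spline coefficients, and beta below are placeholders.
```python
import numpy as np
from scipy.interpolate import BSpline

# Placeholder cubic B-spline for the baseline log-hazard on [0, 4]
# (knots and coefficients are arbitrary illustrative values).
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
coefs = np.array([-1.0, -0.5, 0.2, 0.4, 0.1, -0.3, -0.8])
log_h0 = BSpline(knots, coefs, k=3, extrapolate=False)

def h0(t):
    """Positive baseline hazard: exponential of the cubic B-spline log-hazard."""
    return np.exp(log_h0(t))

def hazard(t, x, beta, model):
    """Hazard at time t for covariate x under the three submodels, viewed as
    special cases of h(t|x) = h0(t * exp(b1*x)) * exp(b2*x)."""
    if model == "PHM":    # proportional hazards: b1 = 0, b2 = beta
        return h0(t) * np.exp(beta * x)
    if model == "AFTM":   # accelerated failure time: b1 = b2 = beta
        return h0(t * np.exp(beta * x)) * np.exp(beta * x)
    if model == "AHM":    # accelerated hazards: b1 = beta, b2 = 0
        return h0(t * np.exp(beta * x))
    raise ValueError(model)

t = np.linspace(0.1, 2.5, 5)
for m in ("PHM", "AFTM", "AHM"):
    print(m, np.round(hazard(t, x=1.0, beta=0.3, model=m), 3))
```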
Review
  • Chikuma Hamada, Yushi Nakanishi, Nobushige Matsuoka
    2006, Volume 27, Issue 2, Pages 139-157
    Published: December 01, 2006
    Released on J-STAGE: September 25, 2011
    JOURNAL FREE ACCESS
    Meta-analysis is defined as ‘the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings’. Since the 1980s there has been an upsurge in the application of meta-analysis to medical research. The rapid increase in the number of meta-analyses conducted during the last decade is mainly due to a greater emphasis on evidence-based medicine and the need for reliable summaries of the vast and expanding volume of clinical studies. Over the same period there have been great developments and refinements of the associated methodology. When judging the reliability of the results of a meta-analysis, attention should be focused on ‘publication bias’, the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. This bias can flaw the results of a meta-analysis. In this article, the causes and origins of publication bias are reviewed, and the history and some findings on publication bias in medical research are presented. Several statistical methods that have been developed to detect, quantify, and assess the impact of publication bias in meta-analysis are demonstrated. (One widely used detection method is sketched after this entry.)
    Download PDF (465K)
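    The review surveys methods for detecting and assessing publication bias without naming them in the abstract. One widely used detection method, sketched below as an assumed example rather than one necessarily covered in the article, is Egger's regression test for funnel-plot asymmetry: regress the standardized effect y_i/s_i on the precision 1/s_i and test whether the intercept differs from zero, since a non-zero intercept indicates small-study effects consistent with publication bias. The effect estimates and standard errors in the example are invented.
```python
import numpy as np
from scipy import stats

def egger_test(y, s):
    """Egger's regression test for funnel-plot asymmetry:
    regress y_i / s_i on 1 / s_i and test H0: intercept = 0."""
    y, s = np.asarray(y, dtype=float), np.asarray(s, dtype=float)
    z, precision = y / s, 1.0 / s
    res = stats.linregress(precision, z)
    t_stat = res.intercept / res.intercept_stderr
    p_value = 2.0 * stats.t.sf(abs(t_stat), df=len(y) - 2)
    return res.intercept, p_value

# Invented effect estimates (log odds ratios) and standard errors.
y = [0.50, 0.42, 0.30, 0.38, 0.15, 0.12]
s = [0.40, 0.35, 0.25, 0.30, 0.12, 0.10]
print(egger_test(y, s))
```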