2009 Volume 22 Issue 1 Pages 1_1-1_32
Repeated measures are obtained whenever an outcome is measured repeatedly within a set of units. Because observations from the same unit will, in general, not be independent, the statistical procedures used to analyze such data face particular challenges. This paper gives an overview of frequently used statistical models for the analysis of repeated measurements, with emphasis on model formulation and parameter interpretation.
Missing data frequently occur in repeated measures studies, especially in humans. An important source of missing data is patients who leave the study prematurely, so-called dropouts. When patients are evaluated only once under treatment, the presence of dropouts makes it hard to comply with the intention-to-treat (ITT) principle. When repeated measurements are taken, however, one can use the observed portion of the data to retrieve information on dropouts. Commonly used methods for analyzing incomplete longitudinal clinical trial data include complete-case (CC) analysis and analysis with the last observation carried forward (LOCF). However, these methods rest on strong and unverifiable assumptions about the dropout mechanism. Over the last decades, a number of longitudinal data analysis methods have been suggested that provide a valid estimate of, e.g., the treatment effect under less restrictive assumptions.
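To make the two simple strategies concrete, the following sketch (not from the paper; subject labels and values are invented for illustration) contrasts LOCF imputation with complete-case selection on a small set of outcome sequences, where `None` marks visits missed after dropout:

```python
# Hypothetical per-subject outcome sequences over four visits;
# None marks measurements missing after dropout.
data = {
    "subj1": [5.0, 4.8, 4.5, 4.1],    # completer
    "subj2": [6.2, 6.0, None, None],  # dropped out after visit 2
    "subj3": [5.5, None, None, None], # dropped out after visit 1
}

def locf(seq):
    """Last observation carried forward: replace each missing value
    with the most recently observed one."""
    filled, last = [], None
    for v in seq:
        if v is not None:
            last = v
        filled.append(last)
    return filled

def complete_cases(d):
    """Complete-case analysis: keep only subjects with no missing visits."""
    return {k: v for k, v in d.items() if None not in v}

locf_data = {k: locf(v) for k, v in data.items()}
cc_data = complete_cases(data)
```

The sketch shows why both methods are criticized in the text: LOCF assumes a dropout's profile stays flat after the last visit, while CC discards the partially observed subjects entirely.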
We will argue that direct likelihood methods, using all available data, require only the relatively weak missing at random (MAR) assumption. Weighted generalized estimating equations and multiple imputation are discussed as well. Finally, because it is impossible to verify that the dropout mechanism is MAR, we argue that a sensitivity analysis, in which the assumptions about the dropout mechanism are varied, should become a standard procedure for evaluating the robustness of the conclusions when analyzing the results of a clinical trial.
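As a hedged sketch of how the multiple-imputation results mentioned above are combined, the following illustrates Rubin's pooling rules: the incomplete data are imputed m times, each completed data set is analyzed, and the per-imputation estimates and variances (the numbers below are invented) are pooled into one estimate and one total variance:

```python
import statistics

# Hypothetical treatment-effect estimate and its squared standard error
# from each of m = 5 imputed data sets (values invented for illustration).
estimates = [1.10, 1.25, 0.98, 1.17, 1.05]
variances = [0.040, 0.042, 0.039, 0.041, 0.040]

m = len(estimates)
pooled_estimate = statistics.mean(estimates)      # average of the m estimates
within = statistics.mean(variances)               # average within-imputation variance
between = statistics.variance(estimates)          # between-imputation variance
total_variance = within + (1 + 1 / m) * between   # Rubin's total variance
```

The `(1 + 1/m)` factor inflates the between-imputation component, so the pooled standard error reflects the extra uncertainty caused by the missing data rather than treating the imputed values as if they were observed.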