Pretest-posttest research designs are frequently employed across research fields to eliminate individual variability and thereby assess treatment effects precisely. In such designs, screening is often performed on the baseline values to determine whether subjects are to be enrolled in the study. To assess the effectiveness of the treatment under consideration, the t test or the analysis of variance is often employed. These procedures require normality of the underlying distribution. Yet even if the pretest and posttest scores jointly follow a bivariate normal distribution, screening on the pretest score inevitably causes the resulting distribution to depart from normality. Little research, however, has assessed the extent of non-normality in this situation. The present paper examines the degree of non-normality induced by screening on the pretest scores. Under a bivariate normal distribution for pretest and posttest scores, the departure from normality is quantified in terms of the Kullback-Leibler divergence, skewness, and kurtosis of the resulting distributions for several types of screening schemes. Situations of maximum departure from normality are identified. It is shown that, even at this maximum, the departure is not substantial, and hence the use of the t test and the analysis of variance can be justified from the viewpoint of robustness.
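The effect described above can be illustrated by simulation. The sketch below (not from the paper; the correlation, cutoff, and one-sided screening rule are illustrative assumptions) draws bivariate normal pretest/posttest scores, screens on the pretest, and reports the skewness and excess kurtosis of the posttest scores among the enrolled subjects:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper)
rho = 0.7        # pretest-posttest correlation
cutoff = 1.0     # enroll only subjects whose pretest exceeds 1 SD
n = 1_000_000    # simulated subjects

# Bivariate standard normal pretest/posttest scores
cov = [[1.0, rho], [rho, 1.0]]
pre, post = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# One-sided screening scheme: keep subjects with pretest above the cutoff
screened_post = post[pre > cutoff]

# Departure from normality of the posttest distribution after screening
skew = stats.skew(screened_post)
kurt = stats.kurtosis(screened_post)  # excess kurtosis; 0 under normality
print(f"skewness = {skew:.3f}, excess kurtosis = {kurt:.3f}")
```

Even under fairly heavy screening, both measures stay close to their normal-theory values of zero, which is consistent with the paper's robustness conclusion; two-sided or interval screening schemes can be explored by changing the selection condition.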