Noniterative methods for estimating the block diagonal unique variance matrix in multiple battery factor analysis are provided. A partitioning of the covariance matrix is employed and inter-battery and factor extension procedures are applied in a stagewise manner. The resulting methods involve the computation of a conditional inverse of a nondiagonal block of the covariance matrix. An example is provided.
Asymptotically distribution free (ADF) type test statistics play an important role in covariance structure analysis with nonnormal data. However, empirical studies have indicated that ADF-type test statistics reject correct models too frequently at all but very large sample sizes. This phenomenon may similarly occur for tests involving nested models. Several statistics related to ADF-type nested tests are given. The new test statistics have the same asymptotic properties as the original ADF-type test statistics. When used to evaluate model restrictions with small sample sizes, our statistics are more conservative in rejecting correct models. Numerical comparisons of critical values are given for selected sample sizes and model parameter differences.
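The general shape of an ADF-type statistic can be illustrated on a deliberately simple covariance structure. The sketch below, which is not the authors' nested-test procedure, fits the one-parameter model Σ = θI by distribution-free generalized least squares, using the sample covariance of the vectorized cross-products as the fourth-moment-based weight matrix; the model, sample size, and variable names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 3
X = rng.normal(size=(n, p))          # data generated under the model Sigma = theta * I

Xc = X - X.mean(axis=0)
idx = np.triu_indices(p)             # vech via the upper triangle (ordering is immaterial here)
W = np.array([np.outer(x, x)[idx] for x in Xc])   # w_i = vech(x_i x_i')
s = W.mean(axis=0)                   # vech of the (ML) sample covariance matrix
Gamma = np.cov(W, rowvar=False)      # distribution-free estimate of n * Cov(s)

sigma = np.eye(p)[idx].astype(float) # model pattern: vech(I), so Sigma(theta) = theta * I
Ginv = np.linalg.inv(Gamma)
theta_hat = (sigma @ Ginv @ s) / (sigma @ Ginv @ sigma)   # ADF/GLS estimate of theta
resid = s - theta_hat * sigma
T_adf = n * resid @ Ginv @ resid     # asymptotically chi-square, df = p(p+1)/2 - 1
```

The weight matrix `Gamma` uses fourth-order sample moments rather than a normality assumption, which is what makes the statistic "distribution free" and also what makes it unstable at small n, the problem the abstract addresses.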
Asymptotic robustness studies have shown that the normal theory based test statistic for goodness of fit in factor analysis and related structural models retains its asymptotic chi-square distribution under the null hypothesis if the latent vector variables are independently distributed. The asymptotic test, however, may not be robust against violations of the independence assumption, as suggested by a recent Monte Carlo study. A Monte Carlo experiment is conducted to compare the asymptotic and the bootstrap tests across 4 exploratory factor analysis models, 5 sample sizes, and 6 distributional conditions; in some of these conditions the common factors and the unique factors are taken to be dependent. Results indicate that the asymptotic test rejects, and the bootstrap test accepts, the null hypothesis more often than expected from the nominal levels when the common factors and the unique factors are mutually independent. When they are merely uncorrelated, the asymptotic test breaks down completely, while the bootstrap test performs much better, though it still rejects the null hypothesis too often.
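The bootstrap principle behind such a test can be sketched on a much simpler covariance structure than a factor model. In the sketch below, purely illustrative and not the study's design, the null hypothesis is that the covariance matrix is diagonal; the data are first transformed so their sample covariance satisfies the null exactly (the Beran-Srivastava idea), then resampled to build a reference distribution for the likelihood-ratio statistic. All names and the choice of null model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lr_stat(X):
    """Normal-theory LR statistic for H0: the covariance matrix is diagonal."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    return n * (np.log(np.prod(np.diag(S))) - np.log(np.linalg.det(S)))

def bootstrap_pvalue(X, B=200):
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    # Transform the data so their sample covariance satisfies H0 exactly:
    # Y = (X - xbar) S^{-1/2} diag(S)^{1/2} has sample covariance diag(S).
    evals, evecs = np.linalg.eigh(S)
    S_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = (X - X.mean(axis=0)) @ S_inv_half @ np.diag(np.sqrt(np.diag(S)))
    t_obs = lr_stat(X)
    # Resample rows of the transformed data to approximate the null distribution.
    t_boot = np.array([lr_stat(Y[rng.integers(0, n, n)]) for _ in range(B)])
    return np.mean(t_boot >= t_obs)

X = rng.normal(size=(100, 4))   # generated under H0
p_value = bootstrap_pvalue(X)
```

Because the reference distribution is built from the data rather than from an asymptotic chi-square, the bootstrap test can adapt to nonnormality and to dependence among the latent components, which is the comparison the experiment above investigates.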
A normal distribution is usually assumed in the analysis of covariance structures. In practical applications, it is common to encounter a situation in which the normal distribution is reasonable except at the tails. In these situations, the truncated normal distribution is a possible alternative. This paper proposes a method to analyze covariance structures under the truncated multivariate normal distribution. Results of simulation studies indicate that the method produces reasonable estimates and asymptotic results.
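A quick simulation shows why the truncation must be modeled rather than ignored. The sketch below, a hypothetical illustration rather than the paper's method, draws bivariate normal data and keeps only points whose components lie within c standard deviations (rejection sampling); the observed variances shrink relative to the untruncated normal, so a normal-theory analysis of truncated data would be biased.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 2.0                                     # truncation point, in standard-deviation units
Z = rng.normal(size=(50000, 2))             # untruncated standard normal draws
kept = Z[np.all(np.abs(Z) <= c, axis=1)]    # rejection step: truncate at the tails
var_truncated = kept.var(axis=0)            # noticeably below the untruncated value of 1
```

For c = 2 the variance of a truncated standard normal is about 0.77, so treating the kept sample as ordinary normal data would understate the population variances by roughly a quarter.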
Asymptotic normal distributions of eigenvalue-normed eigenvectors (Pearson-Hotelling principal-component vectors) of sample variance and correlation matrices are derived. Each distribution follows from the asymptotic distribution of the whole matrix of eigenvectors, also obtained in the paper. The results are presented for the general case, where the existence of finite fourth-order population moments is assumed, and are also specified for normal and elliptical populations. Population variance and correlation matrices are assumed to be nonsingular and without multiple eigenvalues. A comparison with unit-length eigenvectors (Anderson principal-component vectors) is made. Special attention is paid to asymptotic variances of eigenvectors of both sample variance and correlation matrices in the two approaches. It is shown that the asymptotic variance matrices of principal-component vectors for sample variance and correlation matrices are singular in the Anderson approach. In the Pearson-Hotelling approach, however, the asymptotic variance of eigenvectors of the sample variance matrix is nonsingular, whereas in the sample correlation matrix case the asymptotic variance matrix of eigenvectors is nonsingular under certain conditions.
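The two normalizations are easy to exhibit numerically. In the sketch below (an illustration, with hypothetical data), the Anderson vectors are the unit-length eigenvector columns returned by an eigendecomposition, while the Pearson-Hotelling vectors scale each column by the square root of its eigenvalue; the latter reproduce the covariance matrix as V V'.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 2.0, 1.0])
S = np.cov(X, rowvar=False)

evals, U = np.linalg.eigh(S)          # U: unit-length eigenvectors (Anderson)
order = np.argsort(evals)[::-1]       # sort eigenpairs in descending order
evals, U = evals[order], U[:, order]
V = U * np.sqrt(evals)                # eigenvalue-normed columns (Pearson-Hotelling)

recon_error = np.max(np.abs(V @ V.T - S))   # V reproduces S exactly: S = V V'
```

The unit-length constraint on each Anderson column removes a degree of freedom per vector, which is one way to see why the asymptotic variance matrices are singular in that approach.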
The normal theory maximum likelihood and asymptotically distribution free methods are commonly used in covariance structure analysis. When the number of observed variables is large, neither method may give reliable inference due to bad condition numbers or unstable solutions. The main existing solution to the problem of high dimension is to build a model based on a subset of the variables. This practice is inefficient because the omitted variables may still contain valuable information regarding the structural model. In this paper, we propose a simple method of averaging proper variables which have similar factor structures in a confirmatory factor model. The effects of averaging variables on estimators and tests are investigated. Conditions on the relative errors of the measured variables are given that indicate when a model based on averaged variables can give better estimators and tests than one obtained by omitting variables. Our method is compared to the method of variable selection based on the mean square error of predicted factor scores. Some aspects related to averaging, such as improving the normality of observed variables, are also discussed.
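The intuition behind averaging can be sketched with a toy one-factor model. In the hypothetical example below (names, loadings, and sample size are all assumptions, not the paper's design), six indicators load on a common factor in two groups of three with equal loadings; averaging within each group halves the dimension by a factor of three per group while cutting the unique variance of each resulting variable to roughly one third of an original indicator's.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
f = rng.normal(size=n)                            # common factor
loadings = np.array([1.0, 1.0, 1.0, 0.8, 0.8, 0.8])
E = rng.normal(size=(n, 6))                       # unique factors, unit variance
X = np.outer(f, loadings) + E                     # six observed indicators

# Average within groups of variables that share (approximately) the same loading.
X_avg = np.column_stack([X[:, :3].mean(axis=1), X[:, 3:].mean(axis=1)])

# The average of k parallel indicators keeps the loading but has unique variance 1/k.
unique_var_original = X[:, 0].var() - loadings[0] ** 2 * f.var()
unique_var_averaged = X_avg[:, 0].var() - loadings[0] ** 2 * f.var()
```

The averaged variables retain the factor structure (the loading on f is unchanged) while their errors shrink, which is why a model on averaged variables can outperform one that simply drops indicators.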