Using the notion of differential information loss, this research note provides a theoretical explanation of the phenomenon of exasperatingly slow convergence of the EM algorithm when it is applied to a multi-scale IRT model used in a large-scale national assessment in the United States. The result should be meaningful to researchers working on international large-scale examinations with multiple-choice questions.
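The slow convergence analyzed in the note reflects the classic property that EM's linear rate is governed by the fraction of information lost to the latent structure. A minimal sketch (a two-component Gaussian mixture with unit variances and equal known weights — an illustrative stand-in, not the IRT model in question) shows the effect: when the components overlap heavily, the latent labels carry little information and EM needs far more iterations to converge.

```python
import numpy as np

def em_two_means(x, mu_init, tol=1e-8, max_iter=10000):
    """EM for the two component means of an equal-weight,
    unit-variance Gaussian mixture. Returns (means, iterations)."""
    mu1, mu2 = mu_init
    for it in range(1, max_iter + 1):
        # E-step: posterior responsibility of component 1
        d1 = np.exp(-0.5 * (x - mu1) ** 2)
        d2 = np.exp(-0.5 * (x - mu2) ** 2)
        r = d1 / (d1 + d2)
        # M-step: responsibility-weighted means
        new1 = np.sum(r * x) / np.sum(r)
        new2 = np.sum((1 - r) * x) / np.sum(1 - r)
        if max(abs(new1 - mu1), abs(new2 - mu2)) < tol:
            return (new1, new2), it
        mu1, mu2 = new1, new2
    return (mu1, mu2), max_iter

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)
# Well-separated components (low information loss) vs. heavily
# overlapping components (high information loss)
x_far = np.where(z == 0, rng.normal(-3.0, 1.0, n), rng.normal(3.0, 1.0, n))
x_near = np.where(z == 0, rng.normal(-0.5, 1.0, n), rng.normal(0.5, 1.0, n))

mus_far, it_far = em_two_means(x_far, (-1.0, 1.0))
mus_near, it_near = em_two_means(x_near, (-1.0, 1.0))
print(it_far, it_near)  # the overlapping case takes many more iterations
```

The iteration counts differ by an order of magnitude or more, which is the same qualitative mechanism the note formalizes via differential information loss.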
Formulas for the asymptotic biases of the estimators of the normal-theory standard errors in factor analysis are given with and without the assumption of multivariate normality for the observed variables. The biases are derived from the asymptotic variances of the standard error estimators and the asymptotic biases of the estimated variances of the parameter estimators; the latter biases are in turn derived from the asymptotic variances/covariances and asymptotic biases of the parameter estimators themselves. The formulas cover both unstandardized and standardized variables. Numerical examples using factor analysis models show the accuracy of the formulas. The biases of the standard error estimators are shown, theoretically and empirically, to be of the same order as the differences between the asymptotic standard errors that neglect higher-order terms and those that take them into account.
In today's consumer markets, most products offer a number of brands to choose from. However, the choice set a consumer actually considers is usually very small, typically containing only two to five brands. It is therefore strategically important for each company to place its products in consumers' choice sets in order to expand market share. Many previous studies have addressed the choice set. In particular, the model of Moriguchi (1996) is built on the nested logit model, but the number of alternatives it can accommodate is limited and it has other weaknesses. Sakamaki (2003) subsequently proposed an improved model that categorizes the alternatives, so the effect of the choice set can be assessed at the category level; however, the effect at the alternative level remains unknown. In this paper, we clarify the weak points of both models and, building on the nested logit model, propose an improved choice set model that can approximate the effect of the choice set at the alternative level even when there are many alternatives. Our improvement uses the HII model (Hierarchical Information Integration model). In previous studies, the HII model has been used only in conjoint measurement analysis to handle many attributes in the model; in this sense, our study also extends the HII model from the attribute level to the alternative level. Finally, we apply questionnaire survey data to our model and verify the appropriateness of our proposal.
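The nested logit structure that both models build on can be sketched generically: alternatives are grouped into nests, the within-nest choice uses utilities scaled by the nest parameter, and the nest-level choice uses the inclusive value (log-sum). A minimal sketch with illustrative utilities and scale parameters (not the paper's estimated model) is:

```python
import math

def nested_logit_probs(utilities, nests, lam):
    """Choice probabilities for a two-level nested logit.
    utilities: dict alt -> systematic utility V_j
    nests:     dict nest -> list of alternative names
    lam:       dict nest -> scale parameter in (0, 1]
    """
    # Inclusive value (log-sum) of each nest
    iv = {m: math.log(sum(math.exp(utilities[j] / lam[m]) for j in alts))
          for m, alts in nests.items()}
    denom = sum(math.exp(lam[m] * iv[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = math.exp(lam[m] * iv[m]) / denom          # nest choice
        within = sum(math.exp(utilities[j] / lam[m]) for j in alts)
        for j in alts:
            # P(j) = P(nest) * P(j | nest)
            probs[j] = p_nest * math.exp(utilities[j] / lam[m]) / within
    return probs

u = {"a": 1.0, "b": 0.5, "c": 0.0}
nests = {"n1": ["a", "b"], "n2": ["c"]}
probs = nested_logit_probs(u, nests, {"n1": 0.6, "n2": 1.0})
print(probs)
```

When every scale parameter equals 1 the model collapses to the plain multinomial logit, which is a convenient sanity check.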
In our previous study of 179 post-stroke patients with hemiplegia, we employed a standard test battery of 93 items assessing gait and related ambulatory actions to investigate the relationship between the rater's comprehensive evaluation and the sum of all 93 test scores, the relationship between the grade of functional disturbance and the number of tests the same patients could perform successfully, and the mutual correlations among the test scores. In the present study, we first confirmed by means of principal component analysis whether each test item used in the previous study was effective for evaluating the severity of ambulatory disability rather than other functions such as body balance. We then narrowed down the selected items by referring to Cronbach's alpha and their respective clinical features, arriving at an abbreviated test battery of 7 items. Cronbach's alpha for this refined 7-item battery was 0.958, and the correlation between each patient's 7-item sum score and the raters' subjective comprehensive evaluation was 0.982. These results provide evidence that the new shorter test battery maintains high validity and reliability even when the number of test items is reduced from 32 to 7.
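The item-selection criterion used above, Cronbach's alpha, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). A minimal sketch on synthetic scores (illustrative data, not the clinical test battery) shows the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 7 items driven by one common ability factor plus noise
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
items = ability[:, None] + 0.5 * rng.normal(size=(200, 7))
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

With a strong common factor relative to item noise, alpha approaches 1, which is the pattern the 0.958 value in the study reflects.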
A latent variable model is proposed that specifies not only the relationships between latent variables but also the missing-data mechanism, in which the values of the latent variables influence the frequency of the missing patterns. We propose an estimation method for our model based on the Monte Carlo EM algorithm. Unlike previous methods, our method can be applied when the "missing at random" (MAR) assumption does not hold. Moreover, our method explains the missing mechanism comprehensively through the latent variables, and the proposed estimation does not involve multiple-group estimation, so it avoids the limitation in previous studies on the number of subjects required in each missing pattern. The proposed model and method generalize to several settings, such as monotone missingness, and we show how to test whether the missing mechanism is MAR/MCAR within this model. We also show the validity of the estimation method in simulation studies with two kinds of missingness (non-ignorable missingness and MAR); compared with ML estimation under the MAR assumption, the proposed method was found to be superior. A real data illustration shows that the proposed method provides a feasible explanation that personality affects the missingness of some questions.
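The Monte Carlo EM idea used here — replacing an intractable E-step expectation with a simulation average — can be sketched on a far simpler problem than the proposed latent variable model. In this illustrative sketch (all settings are assumptions for the example), we estimate a normal mean when values above a threshold are censored and only their count is known; the E-step expectation over the unobserved values is approximated by draws from the truncated normal:

```python
import numpy as np

def mcem_censored_mean(obs, n_cens, c, sigma=1.0, mu0=0.0,
                       n_iter=50, m=2000, seed=1):
    """Monte Carlo EM for the mean of N(mu, sigma^2) when values
    above c are censored (only their count n_cens is known)."""
    rng = np.random.default_rng(seed)
    mu = mu0
    for _ in range(n_iter):
        # E-step (Monte Carlo): approximate E[X | X > c] under the
        # current mu by rejection sampling from the truncated normal
        draws = np.empty(0)
        while draws.size < m:
            cand = rng.normal(mu, sigma, 4 * m)
            draws = np.concatenate([draws, cand[cand > c]])
        e_cens = draws[:m].mean()
        # M-step: complete-data MLE of the mean (observed + imputed sums)
        mu = (obs.sum() + n_cens * e_cens) / (obs.size + n_cens)
    return mu

# Illustration: true mean 1.0, right-censoring at 1.5
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 1000)
obs, n_cens = x[x <= 1.5], int((x > 1.5).sum())
mu_hat = mcem_censored_mean(obs, n_cens, c=1.5)
print(round(mu_hat, 2))  # close to the true mean 1.0
```

In the paper's model, the intractable expectation is over the latent variables given the observed responses and missing patterns, but the simulate-then-maximize structure of each iteration is the same.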