This paper proposes a series of specification tests for the dynamic factor model. Tests of Granger non-causality, linear dependence, and omitted explanatory variables are presented. All of the tests can be constructed as a natural byproduct of the routine used to calculate the ``smoothed'' moments, and they require no estimation of additional parameters. The actual size and power of the tests are examined in Monte Carlo experiments, and the tests are applied to a term structure model of the yield curve.
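For orientation, a textbook Granger non-causality test can be carried out as an F-test comparing a restricted autoregression (own lags only) with an unrestricted one that adds the other variable's lags. The sketch below is a generic illustration on simulated data, not the paper's smoothed-moment-based construction; the data-generating process and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate system in which x Granger-causes y.
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def ols_rss(X, z):
    """Residual sum of squares from an OLS fit of z on X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    return resid @ resid

# Restricted model: y_t on a constant and y_{t-1}.
# Unrestricted model: additionally includes x_{t-1}.
z = y[1:]
ones = np.ones(T - 1)
X_r = np.column_stack([ones, y[:-1]])
X_u = np.column_stack([ones, y[:-1], x[:-1]])

rss_r, rss_u = ols_rss(X_r, z), ols_rss(X_u, z)
q = 1                 # number of restrictions (one excluded lag of x)
k = X_u.shape[1]      # parameters in the unrestricted model
F = ((rss_r - rss_u) / q) / (rss_u / (len(z) - k))
print(F)              # a large F rejects Granger non-causality of x for y
```

With the causal coefficient of 0.4 in the simulation, the F statistic is far above conventional critical values, so non-causality is rejected, as it should be here.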
A new class of stochastic covariance models based on the Wishart distribution is proposed. Three categories of dynamic correlation models are introduced, depending on how the time-varying covariance matrix is formulated and on whether or not it is a latent variable. A stochastic covariance filter is also developed for filtering and predicting covariances. Extensions of the basic models enable the study of long-memory properties of dynamic correlations, threshold correlation effects, and portfolio analysis. Suitable parameterization of the stochastic covariance models and the stochastic covariance filter facilitates efficient calculation of the likelihood function in high-dimensional problems, whether the covariance matrix is observable or latent. Monte Carlo experiments investigating the finite-sample properties of the maximum likelihood estimator are conducted. Two empirical examples are presented: one deals with the realized covariance of high-frequency exchange rate data, while the other examines daily stock returns.
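The appeal of Wishart-driven dynamics is that every simulated covariance matrix is automatically symmetric positive definite. The sketch below shows one simple, illustrative recursion of this kind; the persistence parameter `rho` and the scale recursion are assumptions for the demonstration, not the paper's parameterization.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)

# Illustrative Wishart-driven covariance dynamics: the scale of each
# Wishart draw depends on the previous covariance, producing persistent,
# positive-definite covariance matrices over time.
p, nu, T = 3, 10, 200      # dimension, degrees of freedom, series length
rho = 0.9                  # persistence of the scale recursion (assumed)
S_bar = np.eye(p)          # long-run scale matrix (assumed)

Sigma = np.eye(p)
path = []
for t in range(T):
    # E[Sigma_t | Sigma_{t-1}] = (1 - rho) * S_bar + rho * Sigma_{t-1},
    # since a Wishart(nu, scale) draw has mean nu * scale.
    scale = ((1 - rho) * S_bar + rho * Sigma) / nu
    Sigma = wishart.rvs(df=nu, scale=scale, random_state=rng)
    path.append(Sigma)

path = np.stack(path)
# Every draw is a valid covariance matrix: symmetric with positive eigenvalues.
eigvals = np.linalg.eigvalsh(path)
print(path.shape, eigvals.min() > 0)
```

A mean-reverting scale recursion like this keeps the simulated covariances centered on `S_bar` while allowing serially correlated fluctuations, which is the qualitative behavior dynamic correlation models aim to capture.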
Considerable work has been done on two-phase sampling for estimating the population mean of a study variable in the presence of non-response at the second phase; see, e.g., Khare and Srivastava (1993, 1995), Tabasum and Khan (2004), Singh and Kumar (2008), and Singh et al. (2010). Most authors have used information on a single auxiliary variable when estimating the mean of the study variable, whereas in practice auxiliary information may be required, or available, on multiple characters. Khare and Sinha (2009, 2011) used multiple auxiliary characters to estimate the population mean in the presence of non-response under simple random sampling. In two-phase sampling, when auxiliary information is obtained at both phases, non-response may occur at both phases as well. In this paper we propose a generalized class of estimators for the population mean of a study variable under a two-phase sampling scheme, using multiple auxiliary variables, in the presence of non-response at both phases. The auxiliary variables are not assumed to be known for the whole population. The bias and mean square error of the suggested class have been derived, and special cases of the class identified. An empirical study compares the efficiency of the proposed estimators with modified versions of existing ones.
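The classical two-phase (double sampling) setup that this literature builds on can be sketched as follows: a large first-phase sample observes only the auxiliary variable, and a second-phase subsample observes the study variable as well. The sketch below implements the basic single-auxiliary ratio estimator without non-response, as a point of reference; it is not the paper's generalized class, and the population model is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite population with study variable y correlated with auxiliary x.
N = 10_000
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 3.0 + 1.5 * x + rng.normal(scale=2.0, size=N)

n1, n2 = 1_000, 200   # first- and second-phase sample sizes

# Phase 1: observe only the auxiliary variable x.
s1 = rng.choice(N, size=n1, replace=False)
xbar1 = x[s1].mean()

# Phase 2: subsample of phase 1, observing both y and x.
s2 = rng.choice(s1, size=n2, replace=False)
ybar2, xbar2 = y[s2].mean(), x[s2].mean()

# Classical two-phase ratio estimator of the population mean of y:
# the first-phase auxiliary mean calibrates the second-phase estimate.
ybar_ratio = ybar2 * (xbar1 / xbar2)
print(ybar_ratio, y.mean())
```

Because y and x are strongly correlated here, the ratio estimator tracks the true population mean closely; the paper's contribution is to extend this idea to multiple auxiliary variables with non-response at both phases.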
A weighted-score, or penalized-likelihood, method for ability estimation that reduces the asymptotic mean square error is derived. The associated item parameters are assumed to be given or to be estimated from a separate calibration sample of appropriately large size. The method can be seen as an extension of the weighted likelihood method that removes the asymptotic bias of the maximum likelihood estimator: in the proposed method, some bias is retained while the variance is reduced, by applying a multiplicative constant to the weight in the weighted score. A lower bound for the constant minimizing the asymptotic mean square error is found under the logistic model with identical items, and the bound is also shown numerically to be reasonable for the 3-parameter logistic model, with and without model misspecification.
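The mechanics can be illustrated with a small sketch: maximizing the log-likelihood plus a constant `c` times a Jeffreys-type penalty, where `c = 0` gives the MLE and `c = 1` a Warm-type weighted likelihood estimate. The item parameters, the 2PL setup, and the value `c = 0.5` below are illustrative assumptions; the paper derives the constant that minimizes the asymptotic MSE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# 2-parameter logistic (2PL) model with known, identical item parameters.
a = np.full(20, 1.2)   # discriminations (identical items, as in the abstract)
b = np.zeros(20)       # difficulties
theta_true = 1.0

# Simulate one examinee's binary item responses.
p_true = 1.0 / (1.0 + np.exp(-a * (theta_true - b)))
u = (rng.random(a.size) < p_true).astype(float)

def penalized_negloglik(theta, c):
    """Negative of: log-likelihood + c * (1/2) log I(theta).
    c = 0 gives the MLE; c > 0 adds a Jeffreys-type information penalty."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    info = np.sum(a**2 * p * (1 - p))   # Fisher information at theta
    return -(loglik + c * 0.5 * np.log(info))

mle = minimize_scalar(penalized_negloglik, args=(0.0,),
                      bounds=(-6, 6), method="bounded").x
wle = minimize_scalar(penalized_negloglik, args=(0.5,),
                      bounds=(-6, 6), method="bounded").x
print(mle, wle)
```

With identical items centered at zero, the information `I(theta)` peaks at `theta = 0`, so the penalty shrinks the estimate toward that point: a little bias is introduced in exchange for lower variance, which is the trade-off the multiplicative constant controls.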
Markov chain Monte Carlo (MCMC) procedures sometimes work poorly. Identifying such inefficiency is important, but the appropriate theoretical tools have not been investigated adequately. For this purpose, we propose the order of degeneracy, which measures the mixing property of an MCMC procedure. As an application, we consider three major sources of inefficiency, one being the fragility of the identification of parameters. A numerical simulation shows the effect of each source of inefficiency.
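The kind of poor mixing at issue can be illustrated with a standard diagnostic: the lag-1 autocorrelation of a chain. The sketch below compares a badly tuned and a well tuned random-walk Metropolis sampler on a standard normal target; it is a generic illustration of slow mixing, not the paper's order-of-degeneracy measure, and the step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def rw_metropolis(step, n=20_000):
    """Random-walk Metropolis chain targeting a standard normal."""
    x, chain = 0.0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        # Accept with probability min(1, pi(prop) / pi(x)) for pi = N(0, 1).
        if np.log(rng.random()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain[i] = x
    return chain

def lag1_autocorr(chain):
    """Empirical lag-1 autocorrelation of the chain."""
    c = chain - chain.mean()
    return (c[:-1] @ c[1:]) / (c @ c)

# A too-small proposal step leaves the chain highly autocorrelated
# (slow mixing); a moderate step mixes far better.
rho_bad = lag1_autocorr(rw_metropolis(step=0.05))
rho_ok = lag1_autocorr(rw_metropolis(step=2.4))
print(rho_bad, rho_ok)
```

The tiny-step chain's autocorrelation sits near 1, so successive draws carry almost no new information; detecting and quantifying this sort of degeneracy is exactly what a mixing measure is for.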