To reduce a large number of original variables to a smaller set that still retains most of the correlational information in statistical data without an external criterion, we propose the ``information-theoretic principal variable (iPV) selection criterion'' from the perspective of Kullback-Leibler divergence. In addition, from the viewpoint of statistical parameters, we show that (a) the iPV selection criterion has a stopping rule, and (b) if the set of variables not selected by the stopping rule is characterized as an independent set, then the value of the iPV selection criterion is zero.
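As an illustrative sketch (not the study's actual criterion), a KL-divergence-based comparison of two multivariate normal distributions, the quantity underlying such information-theoretic selection criteria, can be computed as follows; the function name `kl_mvn` is my own:

```python
import numpy as np

def kl_mvn(m0, S0, m1, S1):
    """KL divergence D(N(m0, S0) || N(m1, S1)) between multivariate normals.

    Closed form: 0.5 * (tr(S1^-1 S0) + (m1-m0)' S1^-1 (m1-m0)
                        - k + ln(det S1 / det S0)).
    """
    k = len(m0)
    iS1 = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(iS1 @ S0) + d @ iS1 @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

The divergence is zero when the two distributions coincide and is asymmetric in general, which is the property a selection criterion built on it must account for.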
Estimation, statistical testing, and model selection are the main focus areas in statistics. This study focuses on the relationship between estimation precision (Fisher information) and model selection (Kullback-Leibler, or KL, information). This relationship is important because researchers often conduct estimations or statistical tests after model selection. Additionally, this study examines how Computerized Adaptive Testing (CAT), a stimulus selection method that maximizes Fisher information, affects model selection performance. A simulation study demonstrates the relationship between the difference in Fisher information between two models and the degree of asymmetry in KL information. Furthermore, we confirm that controlling Fisher information through stimulus selection can influence model selection performance. A simulation study suggests that increasing the Fisher information of a false model reduces model selection performance.
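As a minimal sketch of the stimulus selection step described above (assuming a two-parameter logistic item response model, which the abstract does not specify), a CAT procedure picks the item whose Fisher information is largest at the current ability estimate:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P(theta) * (1 - P(theta))."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, item_bank):
    """CAT-style selection: the (a, b) pair maximizing Fisher information."""
    return max(item_bank, key=lambda ab: fisher_info_2pl(theta, *ab))
```

For example, at theta = 0 an item bank [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)] yields the highly discriminating on-target item (1.5, 0.0), illustrating how selection concentrates Fisher information, the quantity the study manipulates.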
For a Q&A site, increasing the number of times users post (in other words, encouraging them to repost questions) is important for strengthening loyalty to the site. To encourage users of a Q&A site to repost questions, its management company needs to give points, which the company offers users as an incentive to post, efficiently. This study aimed to analyze what kinds of answers make it easier for the questioner to post the next question within a short period. The data analyzed are the question-and-answer data of a Q&A website. Treating the response to a question as an intervention, propensity score matching is used to balance the covariates between the intervention group and the non-intervention group. We also propose balancing the unobserved heterogeneity of the questioners by incorporating a different random intercept for each questioner into the model used to calculate the propensity scores. The results of this study can be used to suggest to Q&A site management companies effective ways of giving points to promote questions.
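As a sketch of the matching step only (the study's random-intercept propensity model is not reproduced here; the scores are assumed to be already estimated, and the function name is my own), greedy 1:1 nearest-neighbor matching on propensity scores can be written as:

```python
def nearest_neighbor_match(treated, control):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated, control: dicts mapping unit id -> estimated propensity score.
    Treated units are matched in descending score order; each control
    unit is used at most once. Returns (treated_id, control_id) pairs.
    """
    available = dict(control)
    pairs = []
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs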
Item-writing guidelines define the general standards that item writers should follow in order to avoid flawed items. These guidelines should be adjusted or elaborated when applied to specific assessments that differ in their purposes and measurement content. This study created selected-response item-writing guidelines specific to the Scale for Acquisition of Japanese Word Meanings and produced pairs of multiple-choice test items for each of the same stem words, so that one item of each pair follows a particular guideline and the other does not. We examined the effects of three particular guidelines on the psychometric properties of these test items. The results indicated that item difficulty was higher when the guideline ``the correct response should not be more difficult than the stem word'' was not followed, but no effect was found for the other two guidelines. Practical issues and implications in applying these guidelines are discussed.
A governmental cashless consumer point-refund program was implemented from October 1, 2019 to June 30, 2020 as part of a policy to promote cashless payments. This policy gave consumers 2% or 5% cashback (depending on the store) in the form of points when they made purchases at small and medium-sized businesses, franchise chains, and convenience stores. In this paper, we examine whether this policy actually had an effect on promoting cashless usage among consumers. We used the synthetic control method (SCM) and difference-in-differences (DID) as our analysis methods.
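As an illustrative sketch of the DID estimator mentioned above (with invented placeholder numbers, not the study's data), the canonical two-group, two-period form subtracts the control group's pre-post change from the treated group's:

```python
def did_estimate(y):
    """2x2 difference-in-differences estimate.

    y: dict mapping (group, period) -> mean outcome, with
    group in {'treat', 'ctrl'} and period in {'pre', 'post'}.
    Under the parallel-trends assumption, this identifies the
    average treatment effect on the treated.
    """
    return ((y[('treat', 'post')] - y[('treat', 'pre')])
            - (y[('ctrl', 'post')] - y[('ctrl', 'pre')]))
```

For example, if cashless usage rises from 10 to 15 in the treated group and from 8 to 10 in the control group, the estimated effect is (15 - 10) - (10 - 8) = 3.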
Cognitive diagnostic assessments (CDAs) require a large number of items to measure the target attributes with high precision. An automatic item generation (AIG) system would help to reduce the cost and effort of item writing in CDAs. This study aimed to develop a valid AIG system for CDAs in linear equations in mathematics by designing an AIG system and examining two aspects of a generated CDA in cognitive diagnostic modeling: (a) the M-matrix, which specifies the set of attributes required by each item model, and (b) the item discrimination index, which is computed from item parameters in the deterministic input, noisy-and-gate (DINA) model. First, we compared an original M-matrix to two alternative M-matrices by using information criteria, posterior predictive model checks, and item discrimination indices. Second, we examined the magnitude and variability of the item discrimination indices for every item model. No substantially large differences were found among the results from all of the M-matrices. The discrimination indices tended to be high for items that measured major attributes, and the variability of the indices was small within each item model, except for a few item models. These findings indicate the validity of our M-matrix and AIG system. Furthermore, they suggest ways to improve the AIG system. Research limitations and how future studies can help to enhance the AIG system are discussed.
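As a sketch of the DINA quantities involved (here `q_row` plays the per-item role of a row of the study's M-matrix; the function names are my own), the response probability depends only on whether the examinee masters every required attribute, and the discrimination index combines the slip and guess parameters:

```python
def dina_prob(alpha, q_row, slip, guess):
    """DINA response probability for one item.

    alpha: examinee attribute-mastery vector (0/1 per attribute).
    q_row: attributes the item requires (0/1 per attribute).
    Returns 1 - slip if every required attribute is mastered,
    otherwise guess.
    """
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return 1.0 - slip if eta else guess

def discrimination_index(slip, guess):
    """Item discrimination under DINA: (1 - s_j) - g_j."""
    return (1.0 - slip) - guess
```

A high index means the item sharply separates examinees who have the required attributes from those who do not, which is the property examined per item model in the study.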