Kodo Keiryogaku (The Japanese Journal of Behaviormetrics)
Online ISSN : 1880-4705
Print ISSN : 0385-5481
ISSN-L : 0385-5481
Volume 3 , Issue 2
Showing 1-5 articles out of 5 articles from the selected issue
  • Michiaki MITO, Kazuhisa NAGAE, Hiroaki UEMATSU
    1976 Volume 3 Issue 2 Pages 1-11
    Published: March 30, 1976
    Released: January 25, 2011
    JOURNALS FREE ACCESS
    Principal component analysis was applied to summarize test errors on 46 items of the Minnesota Test for Differential Diagnosis of Aphasia into a one-dimensional value. To avoid data dependency of the principal components, an attempt was made to normalize the first principal component to range from a minimum of 0.0 to a maximum of 1.0, formalized as f01 = Σ(i=1..n) a_i x_i / Σ(i=1..n) a_i x_i,max, where a_i = l_i/√V(x_i), l_i is the i-th element of the first characteristic vector of the correlation matrix, and x_i, x_i,max and V(x_i) are the errors, possible errors, and variance of the i-th item of the test, respectively. This normalized component was referred to as the "0-1 score", which was also formalized as f01 = Σ(i=1..n) β_i x_i, where β_i = a_i / Σ(i=1..n) a_i x_i,max and n = 46, the number of items analyzed. The 0-1 score was applied to 176 aphasics and revealed: 1) the 0-1 score coefficients β_i were considered stable regardless of the samples from which the correlation matrices were computed; 2) the correlation coefficient between 0-1 scores and scores of clinically evaluated overall daily-life speech activity was high (r = 0.91); 3) values of the 0-1 score by the Minnesota classification of aphasia were thought to be compatible with the clinical impression of the severity of aphasia. From these results, the 0-1 score is a proper measure for summarizing test data of the Minnesota Test for Differential Diagnosis of Aphasia into a one-dimensional numerical value and appears to be a good method for the evaluation of aphasia.
    Download PDF (1565K)
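    The normalization described in this abstract can be sketched as follows; the data, item count, and possible-error maxima below are invented stand-ins for illustration, not the Minnesota test values.

```python
import numpy as np

# Hypothetical sketch of the "0-1 score": normalize the first principal
# component of item errors so the worst possible error pattern scores 1.0.
rng = np.random.default_rng(0)
n_items = 5                                       # the paper uses n = 46
x_max = np.array([10, 8, 12, 6, 9], dtype=float)  # possible errors per item
X = rng.integers(0, 6, size=(30, n_items)).astype(float)  # observed errors

R = np.corrcoef(X, rowvar=False)        # correlation matrix of the items
_, eigvecs = np.linalg.eigh(R)
l = eigvecs[:, -1]                      # first characteristic vector
l = l * np.sign(l.sum())                # fix the arbitrary eigenvector sign
a = l / np.sqrt(X.var(axis=0))          # a_i = l_i / sqrt(V(x_i))
beta = a / np.dot(a, x_max)             # beta_i = a_i / sum_i a_i x_i,max

score = X @ beta                        # 0-1 score for each of 30 subjects
print(float(beta @ x_max))              # the maximal-error pattern scores 1.0
```

    By construction, a subject making every possible error scores exactly 1.0, and a subject making no errors scores 0.0.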
  • Toshiyuki FURUKAWA, Seiichi TAKASUGI, Michitoshi INOUE, Fumihiko KAJIY ...
    1976 Volume 3 Issue 2 Pages 12-22
    Published: March 30, 1976
    Released: June 28, 2010
    JOURNALS FREE ACCESS
    A stochastic model of aging and death was made to investigate the effect of environmental factors on human mortality with the progress of aging. Death was assumed to be equivalent to the full occupancy of an individual's "capacity" by deteriorating factors, where the "capacity" was defined as the total amount of vital function of an individual, while the deterioration factor was defined as the sum of environmental and internal noxious events, including diseases. It was further assumed that the "capacity" increases linearly before the maturity phase, where a plateau is formed, and declines linearly thereafter to a given maximum age. Calculations were made with a Monte Carlo method, generating the deterioration factor uniformly at random at every age. It was shown that the simulated mortality curves of various countries, at present and in the past, were in good agreement with the statistically observed curves, and that the calculated amount of deterioration factors is a good index of environment for assessing health maintenance policies.
    Download PDF (2096K)
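    The capacity model described in this abstract can be illustrated with a small Monte Carlo sketch; the growth, plateau, and decline parameters and the noise level below are illustrative assumptions, not the paper's fitted values.

```python
import random

def capacity(age, max_age=100, maturity=(20, 30), peak=100.0):
    """Piecewise-linear "capacity": rises to maturity, plateaus, declines."""
    m0, m1 = maturity
    if age < m0:
        return peak * age / m0
    if age <= m1:
        return peak
    return peak * (max_age - age) / (max_age - m1)

def age_at_death(noise=2.0, max_age=100):
    # Deterioration factors arrive as uniform random shocks each year;
    # death occurs when their accumulated sum fills the current capacity.
    total = 0.0
    for age in range(1, max_age + 1):
        total += random.uniform(0.0, noise)
        if total >= capacity(age):
            return age
    return max_age

random.seed(1)
deaths = [age_at_death() for _ in range(1000)]
print(sum(deaths) / len(deaths))  # mean simulated age at death
```

    Fitting the noise level (the accumulated deterioration) so that the simulated death-age distribution matches an observed mortality curve is the sense in which the paper treats the deterioration factor as an index of environment.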
  • Hirosi HUDIMOTO
    1976 Volume 3 Issue 2 Pages 23-26
    Published: March 30, 1976
    Released: January 25, 2011
    JOURNALS FREE ACCESS
    We suppose that each individual in a given population π belongs to one of two mutually exclusive groups π1 and π2. Our purpose is to classify an individual or individuals randomly drawn from π into either π1 or π2 as correctly as possible. However, we cannot directly identify each individual as a member of π1 or π2, but instead use the individual's responses to a battery of m dichotomous items to aid in classification.
    Let x = (e1, …, em) denote the total response to the given battery of items, where ej = 1 if the response on the jth item is "positive" and ej = 0 otherwise, j = 1, …, m. Let fi(x) be the probability function of x in πi, i = 1, 2. In this case, if these probability distributions are completely known, then, as is well known, the best classification is based on the likelihood ratio L(x) = f2(x)/f1(x) or, equivalently, on l(x) = log L(x) = log f2(x) − log f1(x).
    The optimum solution based on L(x) clearly requires knowledge of the probability distribution of response patterns in each group. But this is a strong requirement if m is large, for both f1(x) and f2(x) are distributions with 2^m − 1 parameters. Bahadur (1961) has shown that if the number of items in the battery is fairly large and if the items are not highly interdependent, l(x) = log L(x) is approximately normally distributed in πi, with mean μi = Efi(l(X)) and variance σi² = Efi(l(X) − μi)², i = 1, 2, respectively, and that if f1(x) and f2(x) are not very different, both σ1 and σ2 are approximated by D = √(μ2 − μ1). But it seems to be an obstacle to classification that f1(x) and f2(x) are restricted by this last condition to ensure approximately equal σ1 and σ2. We shall deal with the case in which f1(x) and f2(x) are unknown but past observations obtained from π1 and π2 are available, respectively, and we shall use Ln1,n2(x) = fn2(x)/fn1(x) instead of L(x), where fni(x) denotes the relative frequency based on ni observations from πi.
    Let ω1 and ω2 be the proportions of individuals of π belonging to π1 and π2, respectively. In the usual situation of classification, ω = (ω1, ω2) will be unknown. We shall regard ω as an unknown prior distribution having a frequency interpretation as the chance that an individual randomly drawn from π belongs to π1 or π2. Some Bayesians may not approve of a frequency interpretation for a prior distribution in any case, but ω has a clear meaning as a frequency in this case.
    Suppose now that a new random sample of size n is obtained from π in order to classify each individual contained in this sample into either π1 or π2. An empirical Bayes procedure is considered, based on fn1(x), fn2(x) and the new sample (x1, …, xn), for classification in the case that x = (e1, …, em) denotes the response pattern of an individual to the m dichotomous items. The prior distribution ω is estimated from (x1, …, xn), and an empirical Bayes rule that is asymptotically optimal relative to ω in Robbins' sense is constructed for our classification problem.
    Download PDF (567K)
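    The likelihood-ratio rule l(x) = log f2(x) − log f1(x) from this abstract can be sketched under the simplifying assumption of independent items (the zeroth-order term of Bahadur's expansion); the item probabilities below are invented for illustration, not estimated from data.

```python
import math

p1 = [0.2, 0.3, 0.4, 0.25]   # P(e_j = 1) in group pi_1 (illustrative)
p2 = [0.6, 0.7, 0.5, 0.65]   # P(e_j = 1) in group pi_2 (illustrative)

def log_f(x, p):
    # log probability of response pattern x under independent items
    return sum(math.log(pj if ej else 1.0 - pj) for ej, pj in zip(x, p))

def classify(x):
    # l(x) = log f2(x) - log f1(x); assign to pi_2 when l(x) > 0
    return 2 if log_f(x, p2) - log_f(x, p1) > 0 else 1

print(classify((1, 1, 1, 1)), classify((0, 0, 0, 0)))  # → 2 1
```

    In the paper's setting the true fi are unknown, so the exact log-probabilities above would be replaced by relative frequencies of patterns observed in past samples from each group, and the prior ω estimated from the new sample.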
  • Giitiro SUZUKI
    1976 Volume 3 Issue 2 Pages 27-32
    Published: March 30, 1976
    Released: June 28, 2010
    JOURNALS FREE ACCESS
    Skilful and non-fictitious model-building requires two contradictory conditions simultaneously: first, the model should be made to fit the actual phenomena, and second, it should be easy to handle. The essential part of data analysis usually amounts to an input-output relation, where the input is the so-called data and the output is a decision. In other words, data analysis is the process of simplifying data for the convenience of decision-making. Some principles of simplification are presented. Especially in the case of model-fitting, the necessity of making some subjective judgements is emphasized. As an example of the common principle of simplifying data, the concepts of the simplest sufficient information and of the most efficient information are proposed. To increase the fitness of the model, the necessity of modifying the preassigned model is also emphasized. From a practical point of view, a more global process of simplification must be considered. In this process, the input consists of various types of information and the output is the final conclusion. There may be no general principle for describing such a global input-output relation. True and reasonable data analysis, however, must be grasped as a part of this global input-output relationship.
    Download PDF (1146K)
  • Taizo IIZIMA
    1976 Volume 3 Issue 2 Pages 33-39
    Published: March 30, 1976
    Released: June 28, 2010
    JOURNALS FREE ACCESS
    Download PDF (1387K)