The F test was derived by R.A. Fisher. It was not derived as a variance ratio test but as a likelihood ratio test. The purpose of this section is to explain the shortest route for deriving the F test from the likelihood ratio. The author nevertheless treats the F value as a ratio of variances, since R.A. Fisher did not take the standpoint of this section; the variance ratio is the concept that holds more broadly. The probability Pr that a phenomenon occurring with probability p is observed r times in n independent trials is given by the theory of probability as the binomial distribution,
\[ P_r = \binom{n}{r} p^{r} (1-p)^{n-r} \qquad (r = 0, 1, \ldots, n) \]
That is, probability theory starts from a premise about the phenomenon (a known p) and asks: to what degree does the phenomenon occur?
In statistics, by contrast, we observe an actual result, namely that the phenomenon occurred r times in n trials, and ask: what is the true value of p, that is, to what degree does the phenomenon occur?
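To make the contrast concrete, the short Python sketch below (not part of the original text) computes the binomial probabilities for an assumed p, which is the probability-theory direction, and the maximum-likelihood estimate of p from an observed result, which is the statistics direction; the numerical values are hypothetical.

```python
from math import comb

def binomial_pr(n, p, r):
    """P_r = C(n, r) * p**r * (1 - p)**(n - r): probability of r occurrences in n trials."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Probability-theory direction: the premise (p is known) tells us how often each r occurs.
n, p = 10, 0.3
for r in range(n + 1):
    print(f"P_{r} = {binomial_pr(n, p, r):.4f}")

# Statistics direction: p is unknown; after observing r occurrences in n trials,
# the likelihood L(p) = C(n, r) p^r (1 - p)^(n - r) is maximized at p_hat = r / n.
r_observed = 4
p_hat = r_observed / n
print(f"Maximum-likelihood estimate of p: {p_hat}")
```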
Shrinking device features demand improved fabrication technology. In silicon nitride CVD (SiN CVD), technical requirements demand the capability to form thinner films. In an initial attempt to form thinner films by shortening the deposition time, there were two problems, breakdown voltage and thickness uniformity, that could not be solved by simply tuning the signal factor. An attempt was therefore made to optimize the basic characteristic required in a silicon nitride film, namely its insulation performance, through an experiment. This resulted not only in an improvement in the basic insulation performance of the silicon nitride film, but also in a solution to the problem of controllability of the film thickness. Another positive result was that the evaluation process, which would normally have taken two months, was completed in one week of actual work.
A study was made of the settings of production equipment by combining simulation with Taguchi methods to reduce image noise in photosensitive drums, which are key devices in copiers. The result was a major improvement in production speed under the optimal conditions. Furthermore, the result was obtained in only one week, instead of the three months that would have been required for an experiment with the actual equipment, and about two million yen was saved on material costs. Problems for future consideration include study and simulation of, for example, what happens when the drum enters the coating liquid at an angle, and further reduction of the simulation time.
Previous studies on the optimization of injection molding from various standpoints have shown that transferability alone provides only an extremely small gain and that repeatability cannot be determined with adequate reliability. The reason was considered to lie in the processing of nonlinear effects. This time we reanalyzed transferability and shape retention by the standard signal-to-noise ratio, using a crystalline resin test piece. As a result, it became clear that uniform fillability is essential for transferability, and that if this condition is not satisfied, shape retention is unsatisfactory.
The qualities demanded in the cutting of round glass rods include the absence of chips and other defects, and minimal roughness of the cut surface. In this study, which sought to improve these quality characteristics, the test pieces were round glass rods; the process conditions were optimized by evaluating the power consumption of the spindle motor of the cutting apparatus. Since the power waveform during the cutting of a glass rod is nonlinear, it was analyzed with the standard S/N ratio. In the results obtained, the gain was generally reproducible.
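Both of the preceding studies evaluate nonlinear responses with the standard S/N ratio. As a point of orientation only (not taken from either study), the sketch below shows a zero-point proportional dynamic S/N ratio computation, the general form on which the standard S/N ratio is based, where the output under the standard condition is treated as the signal; the data values and the helper name dynamic_sn_ratio are hypothetical.

```python
import numpy as np

def dynamic_sn_ratio(M, y):
    """Zero-point proportional dynamic S/N ratio for y = beta * M plus error.

    M : signal values (for the standard S/N ratio, the outputs measured under
        the standard noise condition are used as the signal -- assumption).
    y : outputs measured under another noise condition at the same points.
    Returns (S/N in dB, slope beta).
    """
    M = np.asarray(M, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.sum(M**2)                      # effective divisor
    S_beta = np.sum(M * y)**2 / r         # variation explained by the slope
    S_T = np.sum(y**2)                    # total variation
    V_e = (S_T - S_beta) / (len(y) - 1)   # error variance
    beta = np.sum(M * y) / r
    sn_db = 10 * np.log10(((S_beta - V_e) / r) / V_e)
    return sn_db, beta

# Hypothetical data: M = output under the standard condition, y = perturbed output.
M = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1]
print(dynamic_sn_ratio(M, y))
```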
Under current practice, copier manufacturers tend to let their designers rely on subjective judgment in designing the quality of the copy image. The designers' preferences, however, do not necessarily match those of consumers, so it has become necessary to find objective image evaluation criteria. In previous studies, although the MT system proved effective when applied to the evaluation of monochrome image quality, there remained problems of suitability in the evaluation stage. In the present study, an objective and highly reliable evaluation method for copy images was constructed by using signals free of subjective factors and applying the MTA method.
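For concreteness, the following is a minimal sketch, not taken from the study, of the Mahalanobis distance calculation that underlies MT-type evaluation; the feature data and the function name mahalanobis_distances are hypothetical, and the comment on the MTA method's use of the adjugate matrix is stated as an assumption.

```python
import numpy as np

def mahalanobis_distances(unit_group, samples):
    """Scaled Mahalanobis distance of each sample from a reference ('unit') group.

    unit_group : (n, k) array of normal reference observations.
    samples    : (m, k) array of observations to evaluate.
    Distances are divided by k so the unit group averages about 1,
    the usual convention in MT-type methods.
    """
    unit_group = np.asarray(unit_group, dtype=float)
    samples = np.asarray(samples, dtype=float)
    mean = unit_group.mean(axis=0)
    std = unit_group.std(axis=0, ddof=1)
    z_unit = (unit_group - mean) / std
    corr = np.corrcoef(z_unit, rowvar=False)   # correlation matrix of the unit group
    corr_inv = np.linalg.inv(corr)             # (the MTA variant replaces this inverse
                                               #  with the adjugate matrix -- assumption)
    z = (samples - mean) / std
    k = unit_group.shape[1]
    return np.einsum('ij,jk,ik->i', z, corr_inv, z) / k

# Hypothetical image-feature data: rows = images, columns = objective features.
rng = np.random.default_rng(0)
unit = rng.normal(size=(30, 3))           # reference (acceptable) images
test = rng.normal(loc=2.0, size=(5, 3))   # images to judge
print(mahalanobis_distances(unit, test))
```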
The Guide to the Expression of Uncertainty in Measurement (GUM), published in 1993 under the names of seven international organizations including the ISO (Technical Advisory Group on Metrology (TAG4)), gives methods of evaluating the reliability of measurement. Methods of estimating uncertainty are classified as type A (evaluated by statistical methods) and type B (evaluated by other methods). The uncertainty requirement was eliminated from the 2000 edition of the ISO 9000 series (quality systems), however. This seems to have been done because general manufacturing companies find it difficult to implement the uncertainty calculations carried out in laboratories accredited under ISO/IEC 17025 or the Japan Calibration Service System (JCSS). We therefore used JIS Z9090-1991 (Measurement - General rules for calibration system) to calculate and verify uncertainty in type A items.
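For reference, a minimal sketch of the core type A calculation in the GUM, the experimental standard deviation of the mean, is shown below; the readings are hypothetical and the calibration-specific procedures of JIS Z9090-1991 are not reproduced here.

```python
import math

def type_a_uncertainty(readings):
    """GUM type A evaluation: experimental standard deviation of the mean.

    u_A = s / sqrt(n), where s is the sample standard deviation (n - 1 divisor)
    of n repeated, independent readings of the same quantity.
    """
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return s / math.sqrt(n)

# Hypothetical repeated readings of the same quantity (e.g. a length in mm).
readings = [10.02, 10.05, 9.98, 10.01, 10.03]
print(f"mean = {sum(readings) / len(readings):.4f}, u_A = {type_a_uncertainty(readings):.4f}")
```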
Efficient development of software and assurance of its reliability will become increasingly important issues in the future. Debugging methods that use orthogonal arrays to evaluate software functionality efficiently and assure reliability were known to be effective, but they had not penetrated the corporate culture, despite the holding of seminars to explain them. It was decided that practical education was needed, so training tools that provided experience with debugging were developed and used in further seminars. The purpose of the tools was to have the user find bugs planted in a virtual drink vending machine running on a spreadsheet program. Experience was also gained by the author by using the above methods to develop the training tools efficiently and assure their quality. The effect of training with these tools was immediately evidenced by inquiries about using the method in the development of a new product. Due to the large number of signal factors (35), an L36 orthogonal array was divided horizontally into two parts, and the sequence of experiments in one part was randomized. Bug testing that would have taken about six weeks previously was reduced to only three weeks.
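As an illustration only (the vending-machine training tool itself is not reproduced here), the sketch below shows the general idea of driving test cases from a two-level orthogonal array, here the standard L8 rather than the L36 used in the study; the factor names and the stand-in vending_machine function are hypothetical.

```python
# Standard L8(2^7) two-level orthogonal array (rows = test cases, entries = level 1 or 2).
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

# Hypothetical signal factors for a drink vending machine, two levels each.
factors = {
    "drink":   ["cola", "coffee"],
    "payment": ["coins", "bill"],
    "stock":   ["in_stock", "sold_out"],
    "change":  ["change_available", "no_change"],
}

def vending_machine(drink, payment, stock, change):
    """Stand-in for the software under test; returns its response string."""
    if stock == "sold_out":
        return "refund"
    if payment == "bill" and change == "no_change":
        return "refund"
    return f"dispense {drink}"

# Each row fixes one level of every factor, so all level pairs of any two factors
# appear equally often across the eight test cases; the tester checks each response.
for row in L8:
    levels = {name: factors[name][row[i] - 1] for i, name in enumerate(factors)}
    result = vending_machine(**levels)
    print(levels, "->", result)
```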