Products and systems degrade over time, and degradation is often modeled by stochastic processes to account for inherent randomness. Under the assumption that degradation accumulates additively, a family of degradation models based on the Lévy process, which includes the Wiener, gamma, and inverse Gaussian (IG) processes, has been well studied in the literature. Recently, a degradation model based on the generalized inverse Gaussian distribution was proposed; under mild conditions, it generalizes the prominent existing degradation models. In this study, we propose a stochastic degradation model based on the Birnbaum-Saunders (BS) distribution. The BS distribution can be obtained as an approximation of the IG distribution and was originally proposed as a fatigue failure life distribution on physical grounds. We also develop maximum likelihood estimation for the proposed model. Four case applications, based on the well-known real degradation data sets of GaAs laser devices and of crack sizes in three kinds of specimens, together with a simulation study, demonstrate the advantages of the proposed model.
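As a minimal illustration of the BS distribution mentioned above, the sketch below simulates BS-distributed values via the standard normal representation and checks the sample mean against the closed-form BS mean. The parameter values and sample size are illustrative assumptions, not quantities from the study.

```python
import numpy as np

# Sketch: simulate Birnbaum-Saunders (BS) samples via the normal
# representation T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2,
# where Z ~ N(0, 1). alpha (shape) and beta (scale) are assumed values.
rng = np.random.default_rng(0)
alpha, beta = 0.5, 2.0
z = rng.standard_normal(100_000)
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2

# The BS mean is beta * (1 + alpha^2 / 2); compare with the sample mean.
theoretical_mean = beta * (1 + alpha ** 2 / 2)
sample_mean = t.mean()
```

Such simulated increments could serve as a building block for studying a BS-based degradation path, though the actual model construction and its likelihood are developed in the paper itself.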
The D-optimality criterion comprehensively evaluates the accuracy of estimators, assuming a model that appropriately describes the relationship between the response and the factors. By using a design with a high level of D-optimality, we can estimate parameters accurately under the assumed model. However, the estimators are biased if active factors or their interactions are omitted from the model. To reduce the bias caused by omitting active two-factor interactions, we construct experimental designs that consider both accuracy and robustness against the existence of interactions. Specifically, we construct fractional factorial designs that are robust against two-factor interactions by spreading a low level of confounding between two-factor interactions and main effects. New designs balancing robustness and accuracy are obtained by multiple-criteria optimization that combines this new robustness criterion with the D_f criterion, a variant of D-optimality. Various algorithms for constructing optimal designs have been proposed, such as k-exchange and coordinate exchange; this study uses the columnwise-pairwise algorithm because it can also be applied to supersaturated designs. We also carry out the multiple-criteria optimization following the method proposed in a previous study. In this way, we obtain 16×15 and 12×16 design matrices that are more robust to the existence of two-factor interactions than conventional designs.
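To make the D-optimality criterion concrete, the sketch below computes a scaled D-efficiency, det(X'X)^(1/p)/n, for a main-effects model. The 2^3 full factorial used here is a textbook example, not one of the 16×15 or 12×16 designs constructed in the study.

```python
import numpy as np

# Sketch: D-efficiency of a two-level design under a main-effects model.
# The 2^3 full factorial is orthogonal, so its efficiency is exactly 1.
levels = np.array([
    [-1, -1, -1],
    [-1, -1,  1],
    [-1,  1, -1],
    [-1,  1,  1],
    [ 1, -1, -1],
    [ 1, -1,  1],
    [ 1,  1, -1],
    [ 1,  1,  1],
])
n = levels.shape[0]
X = np.hstack([np.ones((n, 1)), levels])        # model matrix: intercept + main effects
p = X.shape[1]
d_eff = np.linalg.det(X.T @ X) ** (1 / p) / n   # scaled D-efficiency in [0, 1]
```

An exchange algorithm such as the columnwise-pairwise method mentioned above would repeatedly swap design entries to increase a criterion like `d_eff` (together with the robustness criterion) rather than evaluate it once.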
Screening experiments are used to identify the active factors that affect product or process quality when it is infeasible to study all factors, and Plackett–Burman designs are widely used for this purpose. However, the number of experimental runs must increase with the number of factors, which raises cost and time. Two-level supersaturated designs, in which the number of factors exceeds the number of runs, are therefore appropriate. Based on the sparsity-of-effects principle, 2-level supersaturated designs can select a small minority of active factors from a large number of candidates in screening experiments. This study therefore evaluates experimental design and analysis methods for 2-level supersaturated designs based on previous studies. It also proposes a guideline for applying 2-level supersaturated designs in screening experiments, covering everything from experimental design to analysis methods, and summarizes the optimal combinations of experimental design and analysis methods that are useful in practice.
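A standard way to compare candidate 2-level supersaturated designs is the E(s²) criterion, which averages the squared off-diagonal entries of X'X; smaller values mean the columns are closer to orthogonal. The sketch below computes E(s²) for a randomly generated balanced ±1 design, which stands in for a real supersaturated construction; the run and factor counts are illustrative.

```python
import numpy as np

# Sketch: E(s^2) for a 2-level supersaturated design. With 8 runs and
# 12 factors the columns cannot all be orthogonal, so E(s^2) > 0.
rng = np.random.default_rng(1)
n, m = 8, 12                                    # 8 runs, 12 factors (supersaturated)
X = np.array([rng.permutation([-1] * (n // 2) + [1] * (n // 2))
              for _ in range(m)]).T             # balanced +/-1 columns
S = X.T @ X
off_diag = S[~np.eye(m, dtype=bool)]            # the s_ij, i != j
e_s2 = (off_diag ** 2).sum() / (m * (m - 1))   # average squared non-orthogonality
```

In practice one would compare `e_s2` across designs (or minimize it during construction) rather than evaluate a single random matrix as done here.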
In computer experiments, we often collect sample data with space-filling designs and analyze the relationship between the response and the factors using Gaussian process regression. This approach is considered effective for predicting the response level from factor levels. The focus of this research is inverse estimation, that is, finding factor levels that yield a given response level. In the inverse estimation problem, the effectiveness of the combination of a space-filling design and Gaussian process regression needs to be evaluated. We focus on the sample size of the space-filling design and the hyperparameters of the kernel function in Gaussian process regression, and present some evaluation results, such as the accuracy of inverse estimates for combinations of space-filling designs and Gaussian process regression models. We discuss the influence of the kernel hyperparameter level, which in some cases is as important as the sample size of the space-filling design. The suggested number of samples can be more than ten times the number of factors, whereas ten times the number of factors is the guideline suggested for the prediction problem of Gaussian process regression in a previous study. The hyperparameter level must be determined for each model, and the sample size must be chosen with the cost of the computer experiments in mind.
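As a toy illustration of the workflow above, the sketch below fits a noise-free Gaussian process interpolant with an RBF kernel to a one-factor test function, then performs inverse estimation by searching a grid of factor levels for the prediction closest to a target response. The test function, equally spaced design, kernel length-scale, and jitter are all illustrative assumptions, not the settings evaluated in the study.

```python
import numpy as np

# Sketch: GP regression (RBF kernel) followed by grid-search inverse
# estimation on a single factor. ell is the kernel length-scale
# hyperparameter whose influence the study discusses.
def rbf(a, b, ell=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

x_train = np.linspace(0, 1, 10)                  # simple equally spaced design
y_train = np.sin(2 * np.pi * x_train)            # toy response surface
K = rbf(x_train, x_train) + 1e-6 * np.eye(10)    # jitter for numerical stability
weights = np.linalg.solve(K, y_train)

x_grid = np.linspace(0, 1, 1001)                 # candidate factor levels
y_pred = rbf(x_grid, x_train) @ weights          # GP mean prediction

target = 0.5                                     # given response level
x_hat = x_grid[np.argmin(np.abs(y_pred - target))]  # inverse estimate
```

With more factors the grid search would be replaced by numerical optimization, and both the sample size and `ell` would change the accuracy of `x_hat`, which is the trade-off the study examines.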