1998, Vol. 1998, pp. 115-120
Tikhonov regularization converts ill-posed problems into well-posed ones by imposing a penalty on the solution sought. Instead of solving an inverse problem directly, regularization theory minimizes a weighted sum of a "data error" term and a "penalty" term; it has been successfully applied to a variety of problems [1], including tomography, inverse scattering, detection of radiation sources, and early vision algorithms. Since the function to be minimized is a weighted sum, one must estimate appropriate weights (hyperparameters). This is a problem of hyperparameter estimation, on which a vast literature exists [2]. A more difficult problem is how one should compare a particular penalty function with another. We attempt to answer this question for a restricted class of regularization problems using a three-layer hierarchical Bayes approach.
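For concreteness, with a linear operator $A$, data $d$, and a penalty functional $\Omega$ (notation ours, not fixed by the abstract), the regularized solution minimizes

$$ E_\lambda(f) \;=\; \|Af - d\|^2 \;+\; \lambda\,\Omega(f), $$

where the weight $\lambda > 0$ is the hyperparameter to be estimated. The three layers of the approach are as follows: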
1. In the first layer (level 1) of this method, the solution sought is computed as a posterior probability using Bayes' theorem. Consequently, we can compute error bars and draw samples from the posterior.
2. In the second layer (level 2) of this method, appropriate hyperparameters are estimated by maximizing the marginal likelihood of level 1 (the evidence for the hyperparameters), again using Bayes' theorem.
3. In the third layer (level 3) of this method, exact formulas for the Hessian of the level-2 marginal likelihood (the evidence for a model) are computed. This lets us compare the evidence for a model with a particular penalty function against that of others.
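The following is a minimal sketch of levels 1 and 2 for the simplest linear-Gaussian case, assuming a Gaussian likelihood with noise precision beta, the penalty ||f||^2 (i.e. a Gaussian prior with precision alpha), and the classic fixed-point evidence updates; these specific modeling choices and the function name evidence_framework are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

def evidence_framework(A, d, alpha=1.0, beta=1.0, n_iter=50):
    """Levels 1-2 of the hierarchy for a linear-Gaussian model (sketch).

    Level 1: Gaussian posterior N(m, S) over the solution f.
    Level 2: hyperparameters alpha (penalty weight) and beta (noise
    precision) re-estimated by maximizing the level-1 marginal likelihood.
    """
    N, M = A.shape
    AtA, Atd = A.T @ A, A.T @ d
    for _ in range(n_iter):
        # Level 1: posterior of f given the current hyperparameters.
        S = np.linalg.inv(beta * AtA + alpha * np.eye(M))  # posterior covariance
        m = beta * S @ Atd                                 # posterior mean
        # Level 2: evidence-maximizing fixed-point updates.
        gamma = M - alpha * np.trace(S)    # effective number of parameters
        alpha = gamma / (m @ m)
        beta = (N - gamma) / np.sum((d - A @ m) ** 2)
    # Log marginal likelihood of (alpha, beta): the level-2 objective.
    _, logdet = np.linalg.slogdet(beta * AtA + alpha * np.eye(M))
    log_ev = 0.5 * (M * np.log(alpha) + N * np.log(beta)
                    - beta * np.sum((d - A @ m) ** 2) - alpha * (m @ m)
                    - logdet - N * np.log(2 * np.pi))
    return m, S, alpha, beta, log_ev
```

Error bars for the solution are obtained from the diagonal of S. In the paper's scheme, the level-3 evidence for a model would additionally integrate over (alpha, beta), e.g. via a Laplace approximation built from the Hessian of this log marginal likelihood, and models with different penalty functions would then be ranked by that evidence.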
Regularization problems with multiple hyperparameters, which often perform better than single-hyperparameter regularizers, are also considered. Since the multiple hyperparameters depend on "position", the smoothness of the solution sought can vary with position (see the sketch below). We remark that this work was first motivated by a desire to give a theoretical background to the regularization vision chip [3], in which one of the present authors participated; the hyperparameters of that chip were chosen ad hoc.
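For concreteness, in the multiple-hyperparameter case the single weight $\lambda$ above is replaced by position-dependent weights (again in our notation, not the original's):

$$ E_{\{\lambda_i\}}(f) \;=\; \|Af - d\|^2 \;+\; \sum_i \lambda_i\,\big|(Pf)_i\big|^2, $$

where $P$ is a discrete smoothness operator. A large $\lambda_i$ enforces smoothness near position $i$, while a small $\lambda_i$ permits a discontinuity there, e.g. at an intensity edge in early vision.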