A practical and effective technique called "Fast Automatic Differentiation" (FAD) computes the partial derivatives of a function with respect to all of its variables, together with an estimate of the rounding error incurred in the computed value of the function. Its time complexity is at most a (universal) constant times that of evaluating the function alone. In large-scale numerical analysis and optimization, FAD can therefore replace conventional techniques whose complexity grows with the number of variables. Moreover, with this technique we can define a numerically meaningful "norm" that yields a convergence criterion for iterative methods. In the last decade, several researchers have proposed essentially the same technique as FAD in different contexts, so there are several formalisms by which FAD can be derived, e.g., "computational graphs", "matrix multiplication", and "Lagrange multipliers". FAD will thus become one of the fundamental techniques in numerical computation.
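To make the core idea concrete, the following is a minimal sketch (not from the paper) of reverse-mode automatic differentiation in Python: one forward evaluation records the computational graph, and a single backward sweep then yields the partial derivatives with respect to every variable, so the total cost is a constant multiple of the cost of evaluating the function itself. The Var class and backward function here are illustrative assumptions, not an interface defined by the authors.

class Var:
    """A value that remembers how it was computed (a node in the computational graph)."""
    def __init__(self, value, parents=()):
        self.value = value        # computed value
        self.parents = parents    # pairs of (parent Var, local partial derivative)
        self.grad = 0.0           # accumulated adjoint d(output)/d(self)

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))


def backward(output):
    """Accumulate d(output)/d(v) into v.grad for every Var reachable from output."""
    # Topologically order the graph so each node's adjoint is complete
    # before it is propagated to its parents.
    order, seen = [], set()

    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)

    visit(output)
    output.grad = 1.0
    for v in reversed(order):
        for parent, local in v.parents:
            parent.grad += v.grad * local


# Example: f(x, y) = (x + y) * x; both partials from one backward sweep.
x, y = Var(3.0), Var(2.0)
f = (x + y) * x
backward(f)
print(f.value, x.grad, y.grad)  # 15.0  df/dx = 2x + y = 8.0  df/dy = x = 3.0

Running this prints 15.0, 8.0, and 3.0: every partial derivative of f falls out of the single backward sweep, and the work done would stay proportional to the size of the graph no matter how many input variables the function had.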