Abstract
This report compares the behavior of three multiplicative algorithms for the floating-point division operation: the Newton-Raphson method, Goldschmidt's algorithm, and an alternative method that simply evaluates the Taylor series expansion of the reciprocal. Goldschmidt's algorithm is based on the same series but differs from the alternative method in the manner of evaluating it. The three methods are compared using two kinds of models for each method: a performance model, which describes latency, and an accuracy model, which gives an upper bound on the error of the quotient. Particular emphasis is placed on the development of the accuracy models, whose validity was empirically verified with numerical tests. It is shown that, with a practical choice of the number of iterations, k, the magnitude of the relative error is bounded by 3×2^(−p), (2k+1)×2^(−p), and (k+1)×2^(−p), where p is the size of the mantissa in bits, for the Newton-Raphson method, Goldschmidt's algorithm, and the alternative method, respectively (results on a floating-point unit with a fused multiply-add configuration). For the Newton-Raphson method, any number of additional iterations further reduces the bound to (8/3)×2^(−p). The performance models indicate that, on a pipelined floating-point unit, Goldschmidt's algorithm and the alternative method are equally fast, and both are faster than the Newton-Raphson method. Consequently, the alternative method is promising in the case of k ≤ 3, a very realistic range.
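The three iteration schemes compared above can be sketched in plain double-precision Python. This is an illustrative sketch only, not the report's FMA-based hardware model: the function names, the test operands, and the crude seed value x0 for the initial reciprocal approximation are all assumptions introduced here.

```python
# Hedged sketch of the three multiplicative division schemes.
# Each approximates a/b starting from the same rough reciprocal seed x0 ~ 1/b.

def newton_raphson(a, b, k, x0):
    # Newton-Raphson reciprocal iteration: x_{i+1} = x_i * (2 - b * x_i).
    # The relative error roughly squares on each iteration.
    x = x0
    for _ in range(k):
        x = x * (2.0 - b * x)
    return a * x

def goldschmidt(a, b, k, x0):
    # Goldschmidt: scale numerator and denominator by the same factor
    # so the denominator converges to 1; the numerator converges to a/b.
    n, d = a * x0, b * x0
    for _ in range(k):
        f = 2.0 - d
        n, d = n * f, d * f
    return n

def taylor_series(a, b, k, x0):
    # The "alternative method": directly sum the Taylor series
    # 1/b ~ x0 * (1 + r + r^2 + ... + r^k), where r = 1 - b * x0.
    r = 1.0 - b * x0
    s, term = 1.0, 1.0
    for _ in range(k):
        term *= r
        s += term
    return a * x0 * s

if __name__ == "__main__":
    a, b = 3.0, 7.0
    x0 = 0.14  # crude seed for 1/7 (illustrative choice)
    for f in (newton_raphson, goldschmidt, taylor_series):
        print(f.__name__, f(a, b, 3, x0))
```

Note the structural difference the abstract alludes to: Goldschmidt's algorithm and the Taylor-series method compute the same series, but Goldschmidt folds it into repeated scaling of a numerator/denominator pair, which is what makes its iterations independent and pipeline-friendly.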