First of all we find the 2-adic values (say m and n) of a and b and also replace a and b by a/2^m and b/2^n respectively. As we saw above, this can be done in time that is linear in the size of a and b. Let k denote the minimum of m and n. We now begin the iterative step.
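As a small illustration (our own sketch in C, assuming the numbers fit in 64-bit machine words), the 2-adic value of a nonzero word can be found by counting its trailing zero bits; on GCC or Clang the intrinsic __builtin_ctzll does the same job in a single instruction.

    #include <stdint.h>

    /* 2-adic value of a nonzero x: the largest v with 2^v dividing x,
       i.e. the number of trailing zero bits. */
    static int two_adic_value(uint64_t x)
    {
        int v = 0;
        while ((x & 1) == 0) {   /* x is assumed to be nonzero */
            x >>= 1;
            v++;
        }
        return v;
    }

Replacing a by a/2^m is then just a right shift by m bits.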
For this iteration a and b will always be odd. First we compute
c - ε·M_p = a - b (where ε is the ``borrow''). If ε = 1, then we replace c by its bitwise negation and then add 1 (this replaces c by M_p - c; the sign of c is then (-1)^ε).
If c is zero then the GCD is 2^k · a. Otherwise, we calculate the 2-adic value l of c and replace c by c/2^l so that c is odd once more. Since l is usually much smaller than log(M), this step is linear in the size of c. Now if ε = 1 then we replace b by c, and if ε = 0 then we replace a by c, and iterate.
(Thus we always replace whichever of a and b is larger.)
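A minimal sketch of the iteration just described, assuming 64-bit unsigned words (so that M_p = 2^64) and the GCC/Clang intrinsic __builtin_ctzll for counting trailing zero bits; the function name binary_gcd and the variable eps standing in for the borrow ε are ours, not part of the text above.

    #include <stdint.h>

    /* Division-free GCD along the lines described above.
       Subtraction of 64-bit words wraps modulo 2^64, so the
       borrow plays the role of ε in the text. */
    static uint64_t binary_gcd(uint64_t a, uint64_t b)
    {
        if (a == 0) return b;
        if (b == 0) return a;

        int m = __builtin_ctzll(a);        /* 2-adic value of a */
        int n = __builtin_ctzll(b);        /* 2-adic value of b */
        int k = (m < n) ? m : n;           /* k = min(m, n) */
        a >>= m;                           /* a is now odd */
        b >>= n;                           /* b is now odd */

        for (;;) {
            uint64_t c = a - b;            /* c - eps*2^64 = a - b */
            int eps = (a < b);             /* the borrow */
            if (eps)
                c = ~c + 1;                /* c becomes 2^64 - c = b - a */
            if (c == 0)
                return a << k;             /* GCD is 2^k * a */
            c >>= __builtin_ctzll(c);      /* strip the 2-adic part of c */
            if (eps)
                b = c;                     /* b was larger: replace b */
            else
                a = c;                     /* a was larger: replace a */
        }
    }

The step c = ~c + 1 is just two's-complement negation (it could equally be written c = -c on unsigned operands); we spell it out to match the description above.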
Note that this involves no divisions at all, so it can be very fast in principle. Each iteration reduces the size of the larger of a and b by at least one bit, so there are at most about log_2(a) steps. Moreover, if a = 2^N - 1 and b = 1, then we can see that the process takes exactly N steps. For this reason an initial division is quite often performed in order to ensure that a and b have roughly the same size (this condition reduces the number of iterations for all GCD algorithms).
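A sketch of that preconditioning step (again our own illustration, reusing the binary_gcd sketch above): a single remainder operation brings the larger operand down to the size of the smaller one before the division-free iteration starts.

    /* One initial division to equalize sizes, then the binary iteration. */
    static uint64_t gcd_preconditioned(uint64_t a, uint64_t b)
    {
        if (a < b) { uint64_t t = a; a = b; b = t; }  /* ensure a >= b */
        if (b != 0)
            a %= b;               /* now a < b, so the operands are comparable in size */
        return binary_gcd(a, b);  /* handles a == 0 or b == 0 directly */
    }

In the worst case mentioned above (a = 2^N - 1, b = 1) this single division already finishes the job, since a mod 1 = 0.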