Power series

Summing Power Series

A power series \sum_{n=0}^{\infty} a_n x^n has a radius of convergence which is given by the formula:

\begin{equation*} \frac{1}{R} = \limsup_n \left|a_n\right|^{\frac{1}{n}} \end{equation*}

This is obtained by comparison with the series for 1/(1-x) which has a radius of convergence 1.
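As a quick check of the formula, consider geometric coefficients a_n = 2^n (an example of ours, not from the text):

```latex
% With a_n = 2^n we have |a_n|^{1/n} = 2 for every n, so
\frac{1}{R} = \limsup_n \left(2^n\right)^{\frac{1}{n}} = 2,
\qquad\text{i.e.}\quad R = \frac{1}{2}.
% Indeed, \sum_n 2^n x^n = \sum_n (2x)^n is a geometric series,
% which converges exactly when |2x| < 1.
```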

Recall that for any r < R, the series converges absolutely and uniformly on the set of x with |x|\leq r.

To make use of this to calculate the sum, we would like to have a method that gives us a definite bound on the number of bits of accuracy obtained when we use a particular value of r.

The series for 1/(1-x)

We first examine this series, since the others are to be compared with it. If we take r=1/2, then n+2 terms of this series give the answer to within 2^{-(n+1)}, that is, to n bits of accuracy. Moreover, with r=1/2 successive terms are not of the same order of magnitude. Thus, if we add the terms in right-to-left order, there will be no loss of precision either.

In fact, the error in summing n+2 terms is of the order of \frac{r^{n+2}}{1-r}. Thus, if r>1/2 then the number of terms required to obtain n-bit precision is larger, and moreover successive terms are of comparable size, which risks loss of precision.

In summary, this series "works well" for r = 1/2, and that appears to be an optimal choice as well.
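A few lines of Python make the error bound concrete for r = 1/2 (a sketch of ours; the choice of 12 terms is arbitrary):

```python
# Partial sums of 1/(1-x) at r = 1/2.  After n+2 terms the error is
# r^(n+2) / (1-r) = 2^(-(n+1)): roughly one extra bit of accuracy per term.
r = 0.5
exact = 1 / (1 - r)                               # = 2
partial = sum(r ** n for n in reversed(range(12)))  # n+2 = 12 terms, right-to-left
error = exact - partial                           # equals r**12 / (1-r) = 2**-11
```

Since the powers of 1/2 and their partial sums are exactly representable in binary floating point, the computed error here is exactly the truncation error 2^{-11}.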

General Power Series

We now apply what we have learnt to the case of a general power series. Suppose that s > 1/R. By the definition of the limit superior, we have an N so that |a_n| \leq s^n for all n> N. It follows that

\begin{equation*} \left| \sum_{n>N} a_n x^n \right| \leq \sum_{n>N} (|x|s)^n = \frac{(|x|s)^{N+1}}{1-|x|s} \end{equation*}

If R is large (for example, \infty!), then s can be very small and we can get a lot of accuracy with even a small number of terms for a big interval/disk of values of x. For example, |x|\leq 1/(2^k s), for some k\geq 1, may be a big enough collection of x values for us, and this would ensure kN bits of accuracy with N terms of the power series.

This gives us the initial strategy for computing with a power series. We need to carry out some "pre-computation" (steps 1 and 2 below) as well.

  • Choose s > 1/R and find N so that |a_n| \leq s^n for all n > N.
  • Choose a k\geq 1 so that the value kN is at least 24 (single precision accuracy).
  • Calculate the value of the polynomial \sum_{n=0}^N a_n x^n as the approximate value of the function correct up to single precision. Note that this calculation should be done "right-to-left".

Note that choosing a smaller s (closer to 1/R) will lead to a bigger value for N. On the other hand, we may also be able to increase k if s is smaller. Thus, an optimal choice would require some additional computation. Can this be "automated"?
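The evaluation step above can be sketched in Python (the function name and the choice of N below are illustrative assumptions, not part of the text):

```python
def approx_series(coeffs, x):
    """Evaluate the truncated power series sum_{n=0}^{N} coeffs[n] * x**n,
    adding the terms "right-to-left" (largest n first) so that the low-order
    bits of the small terms are not lost against the large ones."""
    total = 0.0
    for n in reversed(range(len(coeffs))):
        total += coeffs[n] * x ** n
    return total

# Sketch for 1/(1-x), where a_n = 1: taking N = 25 gives an error of
# 2**-25 at x = 1/2, since the tail is (1/2)**26 / (1 - 1/2).
value = approx_series([1.0] * 26, 0.5)   # approximates 1/(1-0.5) = 2
```

Here the partial sum is exactly 2 - 2^{-25}, so the truncation error is visible directly.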

Extending beyond the limit

The given power series may already be convergent in a bigger range than the one chosen. How does one evaluate the function on this larger range? The mathematical approach goes through the method of analytic continuation. However, that theoretical method has the disadvantage of requiring the computation of the function and many of its derivatives at a new centre.

In the case of some "nice" functions like the sine, cosine, tangent, exponential and their inverses, one can use the additivity properties of these functions to provide such an extension.

As an example, let us consider the logarithm. We have the power series:

\begin{equation*} \log(1-x) = - \sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1} \end{equation*}

The methods of the previous section can be used to give accurate approximations of the sums of this series for x in the range 0 to 1/2. In other words, we have good values for \log(y) for y in the range (1/2,1]. Now we have the identity:

\begin{equation*} \log(y/2^k)=\log(y)-k\log(2) \end{equation*}

Hence, if we can calculate the value of \log(2) accurately, then this can be used to calculate the values of the logarithm on the entire range of positive numbers down to 0. Note that we are not losing much to rounding errors, since k\log(2) is larger in magnitude than the numbers we are adding it to.
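This reduction can be sketched in Python (the function names are ours; we assume 60 terms of the series, whose tail at |x| \leq 1/2 is far below double precision):

```python
import math

def log_restricted(y, terms=60):
    """log(y) for y in [1/2, 1], via log(1-x) = -sum_{k>=0} x^(k+1)/(k+1)."""
    x = 1.0 - y                          # here 0 <= x <= 1/2
    total = 0.0
    for k in reversed(range(terms)):     # right-to-left summation
        total += x ** (k + 1) / (k + 1)
    return -total

LOG2 = -log_restricted(0.5)              # log(2) = -log(1/2)

def log_unit_interval(y):
    """log(y) for any y in (0, 1], using log(y/2**k) = log(y) - k*log(2)."""
    k = 0
    while y <= 0.5:                      # scale y up into (1/2, 1]
        y, k = 2 * y, k + 1
    return log_restricted(y) - k * LOG2
```

For example, log_unit_interval(0.1) agrees with math.log(0.1) to roughly double precision.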

Another example is the series for sine:

\begin{equation*} \sin(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} x^{2k+1} \end{equation*}

We can take as small an s as we want, since the radius of convergence is infinite! Note that it is better to use the formula for \sin(2^k x) to obtain more values of the sine than to subtract many multiples of \pi. (Explain why!)
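A minimal sketch of summing the series itself (the argument-reduction step via \sin(2^k x) is left out; we assume 12 terms, enough for |x| \leq 2 in double precision):

```python
import math

def sin_series(x, terms=12):
    """sin(x) = sum_{k>=0} (-1)^k x^(2k+1)/(2k+1)!, summed right-to-left.
    For |x| <= 2 the first omitted term (k = 12) is below 1e-17."""
    total = 0.0
    for k in reversed(range(terms)):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total
```

For moderate arguments this agrees with math.sin to roughly double precision.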

Calculation of Constants

In order to complete our calculation, we need to calculate constants like \log(2) and \pi. For this we use a method due to Euler.

Euler gave a method to sum certain alternating series \sum_{n=0}^{\infty} (-1)^n a_n. We have the formal identity:

\begin{equation*} \sum_{n=0}^{\infty} (-1)^n a_n = \frac{1}{2} \left( a_0 + \sum_{n=0}^{\infty} (-1)^n (a_n-a_{n+1}) \right) \end{equation*}

Let us define (\Delta a)_n=a_n-a_{n+1}. We can then write the above as

\begin{equation*} \sum_{n=0}^{\infty} (-1)^n a_n = \frac{1}{2} \left( a_0 + \sum_{n=0}^{\infty} (-1)^n (\Delta a)_n \right) \end{equation*}

Iterating this procedure, (and taking (\Delta^0 a)_n=a_n), we see that we have a formal identity

\begin{equation*} \sum_{n=0}^{\infty} (-1)^n a_n = \sum_{k=0}^N \frac{1}{2^{k+1}} (\Delta^k a)_0 + \frac{1}{2^{N+1}} \sum_{n=0}^{\infty} (-1)^n (\Delta^{N+1} a)_n \end{equation*}

Thus, in case (\Delta^k a)_0 remains bounded for all k, we can choose to define the sum of the alternating series as

\begin{equation*} E(\sum_{n=0}^{\infty} (-1)^n a_n) := \sum_{k=0}^{\infty} \frac{1}{2^{k+1}} (\Delta^k a)_0 \end{equation*}

Of course, there is no reason a priori why this should coincide with the limit of the partial sums of the original series if the latter exists!

We know that the partial sums of the alternating series converge if a_n is a sequence of (positive) numbers monotonically decreasing to 0. Thus, the above alternating sums actually converge if (\Delta^N a)_n is a sequence of numbers monotonically decreasing to 0.

In summary:

  • The Euler sum of an alternating series converges if (\Delta^N a)_0 remains bounded as N goes to \infty.
  • If the sequences (\Delta^N a)_n monotonically decrease to 0 for each N, then the Euler sum converges to the same limit as the sequence of partial sums of the original series.

In case both these conditions are satisfied, the Euler sum will give k-c bits of the sum if we take k terms, where c is a suitable constant depending on the uniform bound for (\Delta^N a)_0. This makes it a good method for summing alternating series.

We can apply this to sum the alternating series for \log(2) and the alternating series for \pi/4 to get N bits of accuracy in about as many terms. (Note that to get N bits of accuracy by the usual partial sums we need about 2^N terms!)
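The Euler transform can be sketched in a few lines of Python (the function name is ours; the table of forward differences is recomputed row by row, keeping only the head (\Delta^k a)_0 of each row):

```python
import math

def euler_sum(a, terms):
    """Euler sum of the alternating series sum_{n>=0} (-1)^n a(n):
    sum_{k>=0} (Delta^k a)_0 / 2^(k+1), where (Delta a)_n = a_n - a_{n+1}."""
    row = [a(n) for n in range(terms)]
    total = 0.0
    for k in range(terms):
        total += row[0] / 2 ** (k + 1)                    # (Delta^k a)_0 / 2^(k+1)
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return total

log2 = euler_sum(lambda n: 1 / (n + 1), 40)       # log(2) = 1 - 1/2 + 1/3 - ...
pi = 4 * euler_sum(lambda n: 1 / (2 * n + 1), 40)  # pi/4 = 1 - 1/3 + 1/5 - ...
```

With 40 transformed terms both values agree with math.log(2) and math.pi to better than 10 decimal digits, whereas the plain partial sums would need on the order of 2^{40} terms for that.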

Last modified: Thursday, August 13, 2015, 10:53 PM