# Riemann zeta function

About two and a half years ago I promised Joseph Nebus that I would write about the interplay between Bernoulli numbers and the Riemann zeta function. In this post I will discuss a problem about finite harmonic sums that illustrates this interplay.

Consider the Problem 1.37 from The Math Problems Notebook:

Let $\{a_1, a_2, \ldots, a_n\}$ be a set of natural numbers such that $\text{gcd}(a_i,a_j)=1$ for $i \neq j$, and the $a_i$ are not prime numbers. Show that $\displaystyle{\frac{1}{a_1}+\frac{1}{a_2}+ \ldots + \frac{1}{a_n} < 2}$.

Since each $a_i$ is a composite number, we can write $a_i = p_i q_i s_i$, where $p_i$ and $q_i$ are (not necessarily distinct) primes and $s_i$ is a natural number. Next, $\text{gcd}(a_i,a_j)=1$ implies that the primes $p_i, q_i$ are different from $p_j, q_j$; in particular, the numbers $\min(p_i,q_i)$ are pairwise distinct integers greater than 1. Therefore we have:

$\displaystyle{\sum_{i=1}^n \frac{1}{a_i} \leq \sum_{i=1}^n \frac{1}{p_i q_i} \leq \sum_{i=1}^n \frac{1}{(\text{min}(p_i,q_i))^2} < \sum_{k=1}^\infty \frac{1}{k^2}}$

Though it’s easy to show that $\sum_{k=1}^\infty \frac{1}{k^2} < \infty$, we want the exact value of this sum. This is where it’s convenient to recognize that $\boxed{\sum_{k=1}^\infty \frac{1}{k^2} = \zeta(2)}$. Since we know what Bernoulli numbers are, we can use the following formula for the Riemann zeta function:

$\displaystyle{\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}}$

There are many ways of proving this formula, but none of them is elementary.
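Although the proof is non-elementary, the formula itself is easy to test numerically. Here is a small sanity check (my own sketch, not from the problem book, using the values $B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42$) comparing the formula against partial sums of $\sum 1/k^s$:

```python
import math

# Even-indexed Bernoulli numbers: B_2 = 1/6, B_4 = -1/30, B_6 = 1/42
B = {2: 1/6, 4: -1/30, 6: 1/42}

def zeta_even(two_n):
    """zeta(2n) = (-1)^(n+1) * B_{2n} * (2*pi)^(2n) / (2 * (2n)!)"""
    n = two_n // 2
    return (-1)**(n + 1) * B[two_n] * (2 * math.pi)**two_n / (2 * math.factorial(two_n))

# Compare with direct partial sums of 1/k^s
for s in (2, 4, 6):
    partial = sum(1/k**s for k in range(1, 100001))
    assert abs(zeta_even(s) - partial) < 1e-4
```

In particular, `zeta_even(2)` agrees with $\pi^2/6$ to machine precision, and `zeta_even(4)` with the classical value $\pi^4/90$.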

Recall that $B_2 = 1/6$, so for $n=1$ we have $\zeta(2) = \pi^2/6 \approx 1.64 < 2$, which completes the proof:

$\displaystyle{\sum_{i=1}^n \frac{1}{a_i} <\zeta(2) < 2}$
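As a quick numerical illustration (my own sketch; the family of prime squares below is a hypothetical example, not part of the problem), one can check the bound on a concrete set of pairwise-coprime composites:

```python
from math import gcd, pi

# Squares of distinct primes: pairwise coprime and composite
a = [p * p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)]

# Sanity-check the hypotheses of the problem
assert all(gcd(x, y) == 1 for i, x in enumerate(a) for y in a[i + 1:])
assert all(x > 1 for x in a)

total = sum(1/x for x in a)
assert total < pi**2 / 6 < 2   # the bound established above
```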

Remark: One can directly calculate the value of $\sum_{k=1}^\infty \frac{1}{k^2}$, as Euler did while solving the Basel problem (though at that time the notion of convergence itself was not well defined):

[Excerpt from The Pleasures of Pi, E and Other Interesting Numbers by Y E O Adrian, © 2006 World Scientific Publishing Co. Pte. Ltd.]

# Bernoulli Numbers

I have referred to them once so far. Also, the Euler–Maclaurin formula I discussed in that post explains a lot about their occurrences. Now I think it’s time to dive deeper and try to understand them.

In 1631, Johann Faulhaber published Academia Algebrae (a German text despite the Latin title). This text contains a generalisation of sums of powers, which in modern notation reads:

$\displaystyle{\sum_{m=1}^{n} m^{k-1} = \frac{1}{k}\left[n^k + \binom{k}{1} n^{k-1} \times \frac{1}{2} + \binom{k}{2}n^{k-2} \times \frac{1}{6} +\binom{k}{3}n^{k-3} \times 0+\binom{k}{4}n^{k-4} \times \frac{-1}{30}+ \ldots\right]}$

Observe that the expression in square brackets on the right-hand side looks like a binomial expansion, but with certain constants (which can also be 0) multiplying the terms. These constants were named “Bernoulli numbers” by Abraham de Moivre, since they were discussed intensively by Jacob (Jacques) Bernoulli in Ars Conjectandi, published in Basel in 1713, eight years after his death.

I will follow the notation from The Book of Numbers by Conway and Guy (published in 1996). So we will denote the $n^{th}$ Bernoulli number by $B^{n}$, where

$\displaystyle{B^0 = 1, B^1 = \frac{1}{2}, B^2 = \frac{1}{6}, B^3= B^{5} = \ldots = B^{odd>1} = 0, B^4 = B^8 = \frac{-1}{30}, B^6 = \frac{1}{42}, B^{10} = \frac{5}{66}, \ldots}$

This notation enables us to calculate the sum of the $k^{th}$ powers of the first $n$ natural numbers quickly: we expand the right-hand side binomially and then read each symbolic power $B^j$ as the Bernoulli number $B^j$. With this convention, the above summation formula can be re-written as:

$\displaystyle{\sum_{m=1}^n m^{k-1} = \frac{(n+B)^k - B^k}{k}}$
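A short script (a sketch of mine, not from the book) makes this umbral recipe concrete: compute the $B^j$ with exact fractions, expand $(n+B)^k - B^k$ binomially, and compare against classical closed forms:

```python
from fractions import Fraction
from math import comb

def bernoulli(k_max):
    """Bernoulli numbers with B^1 = +1/2, from the recursion (B-1)^k = B^k."""
    B = [Fraction(1)]
    for k in range(2, k_max + 2):
        # solve sum_{j=0}^{k-1} C(k,j) B^j (-1)^(k-j) = 0 for B^(k-1)
        B.append(sum(comb(k, j) * B[j] * (-1)**(k - j) for j in range(k - 1)) / k)
    return B

def power_sum(n, e):
    """sum_{m=1}^{n} m^e via ((n+B)^(e+1) - B^(e+1)) / (e+1)."""
    k = e + 1
    B = bernoulli(k)
    return sum(comb(k, j) * n**(k - j) * B[j] for j in range(k)) / k

# Check against the classical closed forms for squares and cubes
assert power_sum(100, 2) == 100 * 101 * 201 // 6
assert power_sum(100, 3) == (100 * 101 // 2) ** 2
```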

To illustrate how to use this formula, let’s calculate the sum of the $5^{th}$ powers of the first 1000 natural numbers:

$\displaystyle{\sum_{m=1}^{1000} m^{5} = \frac{(1000+B)^6 - B^6}{6}}$

$\displaystyle{ = \frac{1}{6}\left[1000^6 + 6B^1 1000^5 + 15B^2 1000^4 + 15 B^4 1000^2\right]}$

So, we have done a binomial expansion of the right-hand side and used the fact that $B^{odd>1} = 0$. Now we substitute the corresponding values of the Bernoulli numbers to get:

$\displaystyle{\sum_{m=1}^{1000} m^{5} =\frac{1}{6}\left[10^{18} + 3\times 10^{15} + 2.5 \times 10^{12} - 0.5 \times 10^6\right]=\frac{10030024999995\times 10^5}{6}}$

$1^5 + 2^5 + \cdots + 1000^5 = 167167083333250000$

(This answer was cross-checked using SageMath.)
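The bracketed arithmetic can also be re-checked in a couple of lines of plain Python (a stand-in for the SageMath verification); clearing the halves turns the bracket into $(2n^6 + 6n^5 + 5n^4 - n^2)/12$:

```python
n = 1000
# (1/6)[n^6 + 3n^5 + (5/2)n^4 - (1/2)n^2]  ==  (2n^6 + 6n^5 + 5n^4 - n^2)/12
closed = (2 * n**6 + 6 * n**5 + 5 * n**4 - n**2) // 12
brute = sum(m**5 for m in range(1, n + 1))
assert closed == brute
```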

There are many ways to calculate the values of Bernoulli numbers, but the simplest is to use the recursive definition:

$(B - 1)^k = B^k$ for $k>1$, which, after binomial expansion, gives the value of $B^{k-1}$.

There is another definition of Bernoulli numbers, using a power series:

$\displaystyle{\frac{z}{e^z-1} = \sum_{k=0}^{\infty} \frac{B^k z^k}{k!}}$

This gives a slightly different sequence of Bernoulli numbers, since here $B^{1}=\frac{-1}{2}$, and the corresponding recursive definition is

$(B+1)^{k} = B^{k}$ for $k>1$, which, after binomial expansion, gives the value of $B^{k-1}$.
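Both recursions are easy to implement with exact arithmetic. The sketch below (my own; the generalised form $(B+s)^k = B^k$ with $s = \mp 1$ covering both conventions is my shorthand, not the book's) confirms that the two sequences differ only at $B^1$:

```python
from fractions import Fraction
from math import comb

def bernoulli(n, s):
    """Solve (B+s)^k = B^k for B^(k-1):
    s = -1 gives the convention B^1 = +1/2, s = +1 gives B^1 = -1/2."""
    B = [Fraction(1)]
    for k in range(2, n + 2):
        B.append(-sum(comb(k, j) * B[j] * s**(k - j) for j in range(k - 1))
                 / (k * s))
    return B

plus, minus = bernoulli(10, -1), bernoulli(10, 1)
assert plus[1] == Fraction(1, 2) and minus[1] == Fraction(-1, 2)
assert plus[2:] == minus[2:]              # the conventions agree beyond B^1
assert plus[4] == Fraction(-1, 30) and plus[6] == Fraction(1, 42)
```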

This definition can be used to calculate the value of $\tan(z)$, since its infinite series expansion has Bernoulli numbers in its coefficients.

$\displaystyle{\tan(z)=\sum_{n=1}^{\infty}\frac{B^{2n}(-4)^n(1-4^n)}{(2n)!}z^{2n-1}}$
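As a numerical check (a sketch of mine, with the even-indexed Bernoulli numbers hard-coded from the table above), partial sums of this series converge rapidly to $\tan(z)$ for small $z$:

```python
import math

# Even-indexed Bernoulli numbers from the table above
B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30, 10: 5/66}

def tan_series(z, terms=5):
    """Partial sum of tan(z) = sum_{n>=1} B^(2n) (-4)^n (1-4^n) / (2n)! * z^(2n-1)."""
    return sum(B[2 * n] * (-4)**n * (1 - 4**n) / math.factorial(2 * n) * z**(2 * n - 1)
               for n in range(1, terms + 1))

z = 0.2
assert abs(tan_series(z) - math.tan(z)) < 1e-9
```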

# Integration & Summation

A few months ago I wrote a series of blog posts on “rigorous” definitions of integration [Part 1, Part 2]. Last week I identified an interesting flaw in my “imagination” of integration in terms of “limiting summation”, and it led to an interesting investigation.

While defining integration as the area under a curve, we consider rectangles of equal width and let that width approach zero. Hence I used to imagine integration as a summation of individual heights, since the width approaches zero in the limiting case. It was just like extending summation over the integers to summation over the real numbers.

My Thought Process

But as per my imagination above, since the width of a line segment is zero, I am considering rectangles of zero width. Then each rectangle has zero area (I proved this recently), so the area under the curve will be zero. Paradox!

I realized that, just like the ancient Greeks, I was using a very bad imagination of the limiting process!

The Insight

But, as it turns out, my imagination is not completely wrong. I googled and stumbled upon a Stack Exchange post, where an answer by Jonathan captures my imagination:

The idea is that since $\int_0^n f(x)\,dx$ can be approximated by a Riemann sum, we have $\displaystyle{\sum_{i=0}^n f(i) = \int_{0}^n f(x)\,dx + \text{higher order corrections}}$

The generalization of the above idea gives us the Euler–Maclaurin formula:

$\displaystyle{\sum_{i=m+1}^n f(i) = \int^n_m f(x)\,dx + B_1 \left(f(n) - f(m)\right) + \sum_{k=1}^p\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R}$

where $m,n,p$ are natural numbers, $f(x)$ is a sufficiently differentiable real-valued function, $B_k$ are the Bernoulli numbers (with $B_1 = \frac{1}{2}$), and $R$ is an error term which is normally small for suitable values of $p$ (it depends on $n, m, p$, and $f$).
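To see the formula at work, here is a small numeric check (my own sketch, not from the references) for $f(x) = 1/x^2$ with $p = 2$, i.e. keeping the $B_1$, $B_2$, and $B_4$ correction terms:

```python
# Euler–Maclaurin with p = 2 for f(x) = 1/x^2 (B_1 = 1/2, B_2 = 1/6, B_4 = -1/30)
def f(x):    return x**-2
def F(a, b): return 1/a - 1/b            # integral of x^(-2) from a to b
def f1(x):   return -2 * x**-3           # f'
def f3(x):   return -24 * x**-5          # f'''

m, n = 10, 1000
approx = (F(m, n)
          + 0.5 * (f(n) - f(m))                  # B_1 (f(n) - f(m))
          + (1/6) / 2 * (f1(n) - f1(m))          # B_2/2! correction
          + (-1/30) / 24 * (f3(n) - f3(m)))      # B_4/4! correction
exact = sum(1/i**2 for i in range(m + 1, n + 1))
assert abs(approx - exact) < 1e-6
```

Three correction terms already bring the error well below $10^{-6}$ over this range, even though we never summed the series directly.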

The above formula can be proved by the principle of mathematical induction. For more details, see this beautiful paper: Apostol, T. M. (1999). An Elementary View of Euler’s Summation Formula. The American Mathematical Monthly, 106(5), 409–418. http://doi.org/10.2307/2589145