# Riemann zeta function

Standard

About 2.5 years ago I promised Joseph Nebus that I would write about the interplay between Bernoulli numbers and the Riemann zeta function. In this post I will discuss a problem about finite harmonic sums that illustrates this interplay.

Consider Problem 1.37 from The Math Problems Notebook:

Let $\{a_1, a_2, \ldots, a_n\}$ be a set of natural numbers such that $\text{gcd}(a_i,a_j)=1$ for $i \neq j$, and no $a_i$ is a prime number. Show that $\displaystyle{\frac{1}{a_1}+\frac{1}{a_2}+ \ldots + \frac{1}{a_n} < 2}$.

Since each $a_i$ is a composite number, we can write $a_i = p_i q_i s_i$ for some (not necessarily distinct) primes $p_i, q_i$ and a positive integer $s_i$. Next, $\text{gcd}(a_i,a_j)=1$ implies that the primes dividing $a_i$ are distinct from those dividing $a_j$; in particular, the numbers $\text{min}(p_i,q_i)$ are pairwise distinct, which gives the strict final inequality below. Therefore we have:

$\displaystyle{\sum_{i=1}^n \frac{1}{a_i} \leq \sum_{i=1}^n \frac{1}{p_i q_i} \leq \sum_{i=1}^n \frac{1}{(\text{min}(p_i,q_i))^2} < \sum_{k=1}^\infty \frac{1}{k^2}}$

Though it’s easy to show that $\sum_{k=1}^\infty \frac{1}{k^2} < \infty$, we want the exact value of this sum. This is where it’s convenient to recognize that $\boxed{\sum_{k=1}^\infty \frac{1}{k^2} = \zeta(2)}$. Since we know what Bernoulli numbers are, we can use the following formula for the Riemann zeta function:

$\displaystyle{\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}}$

There are many ways of proving this formula, but none of them is elementary.
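While I won't prove the formula here, it can at least be verified symbolically; a small sketch using SymPy (the loop bound 3 is an arbitrary choice of mine):

```python
import sympy as sp

# Check zeta(2n) = (-1)^(n+1) * B_{2n} * (2*pi)^(2n) / (2*(2n)!) for small n
for n in range(1, 4):
    lhs = sp.zeta(2 * n)  # SymPy evaluates zeta at positive even integers
    rhs = ((-1) ** (n + 1) * sp.bernoulli(2 * n) * (2 * sp.pi) ** (2 * n)
           / (2 * sp.factorial(2 * n)))
    assert sp.simplify(lhs - rhs) == 0

print(sp.zeta(2), sp.zeta(4))  # pi**2/6 pi**4/90
```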

Recall that $B_2 = 1/6$, so for $n=1$ we have $\zeta(2) = \pi^2/6 \approx 1.64 < 2$, which completes the proof:

$\displaystyle{\sum_{i=1}^n \frac{1}{a_i} <\zeta(2) < 2}$
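The bound can also be sanity-checked numerically; a minimal sketch (the function name and the cutoff of $10^5$ terms are my own choices):

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of sum_{k>=1} 1/k^s."""
    return sum(1.0 / k ** s for k in range(1, terms + 1))

approx = zeta_partial(2)
print(approx, math.pi ** 2 / 6)  # both about 1.6449, safely below 2
```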

Remark: One can directly calculate the value of $\sum_{k=1}^\infty \frac{1}{k^2}$ as Euler did while solving the Basel problem (though at that time the notion of convergence itself was not well defined):

The Pleasures of Pi, E and Other Interesting Numbers by Y E O Adrian [Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd.]

# Dimension clarification

Standard

In several of my previous posts I have mentioned the word “dimension”. Recently I realized that dimension can be of two types, as pointed out by Bernhard Riemann in his famous lecture in 1854. Let me quote Donal O’Shea from p. 99 of his book “The Poincaré Conjecture”:

Continuous spaces can have any dimension, and can even be infinite dimensional. One needs to distinguish between the notion of a space and a space with a geometry. The same space can have different geometries. A geometry is an additional structure on a space. Nowadays, we say that one must distinguish between topology and geometry.

[Here by the term “space(s)” the author means “topological space”]

In mathematics, the word “dimension” can have different meanings. But, broadly speaking, there are only three different ways of defining/thinking about “dimension”:

• Dimension of Vector Space: It’s the number of elements in a basis of the vector space. This is the sense in which the term dimension is used in geometry (while doing calculus) and algebra. For example:
• A circle is a two dimensional object since we need a two dimensional vector space (aka coordinates) to write it. In general, this is how we define dimension for Euclidean space (which is an affine space, i.e. what is left of a vector space after you’ve forgotten which point is the origin).
• Dimension of a differentiable manifold is the dimension of its tangent vector space at any point.
• Dimension of a variety (an algebraic object) is the dimension of tangent vector space at any regular point. Krull dimension is remotely motivated by the idea of dimension of vector spaces.
• Dimension of Topological Space: It’s the smallest integer that is somehow related to open sets in the given topological space. In contrast to a basis of a vector space, a basis of a topological space need not be maximal; indeed, the only maximal base is the topology itself. Moreover, dimension in this case can be defined using “Lebesgue covering dimension” or, in some nice cases, using “Inductive dimension”. This is the sense in which the term dimension is used in topology. For example:
• A circle is one dimensional object and a disc is two dimensional by topological definition of dimension.
• Topological dimension is invariant under homeomorphisms (continuous bijections with continuous inverses). Due to this, a curve and a plane have different dimensions even though curves can fill space. Space-filling curves are special cases of fractal constructions. No differentiable space-filling curve can exist; roughly speaking, differentiability puts a bound on how fast the curve can turn.
• Fractal Dimension: It’s a notion designed to study complex sets/structures like fractals, and it allows objects to have non-integer dimensions. Its definition lies in between those for vector spaces and topological spaces. It can be defined in various similar ways; the most common is as the “dimension of the Hausdorff measure on a metric space” (measure theory enables us to integrate a function without worrying about its smoothness, and the defining property of fractals is that they are NOT smooth). This sense of dimension is used in very specific cases. For example:
• A curve with fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface.
• The fractal dimension of the Koch curve is $\frac{\ln 4}{\ln 3} \approx 1.26186$, but its topological dimension is 1 (just like the space-filling curves). The Koch curve is continuous everywhere but differentiable nowhere.
• The fractal dimension of space-filling curves is 2, but their topological dimension is 1. [source]
• A surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume.
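For strictly self-similar sets like the Koch curve, the fractal (similarity) dimension can be computed directly from the number of copies and the scaling factor; a small illustrative sketch (the function name is my own):

```python
import math

def similarity_dimension(copies, scale_factor):
    """Dimension d solving copies * scale_factor**d = 1,
    i.e. d = log(copies) / log(1/scale_factor)."""
    return math.log(copies) / math.log(1 / scale_factor)

print(similarity_dimension(4, 1 / 3))  # Koch curve: ln 4 / ln 3 ≈ 1.26186
print(similarity_dimension(3, 1 / 2))  # Sierpinski triangle ≈ 1.58496
print(similarity_dimension(9, 1 / 3))  # a square cut into 9 sub-squares: 2.0
```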

This simple observation has very interesting consequences. For example, consider the following statement from p. 167 of the book “The Poincaré Conjecture” by Donal O’Shea:

… there are infinitely many incompatible ways of doing calculus in four-space. This contrasts with every other dimension…

This leads to a natural question:

Why is it difficult to develop calculus for any $\mathbb{R}^n$ in general?

Actually, if we consider $\mathbb{R}^n$ as a vector space, then developing calculus is not a big deal (as done in multivariable calculus). But if we consider $\mathbb{R}^n$ as a topological space, it becomes a challenging task due to the lack of the required algebraic structure on the space. So Donal O’Shea is actually pointing to the fact that doing calculus on differentiable manifolds in $\mathbb{R}^4$ is difficult, and this is because we are considering $\mathbb{R}^4$ as a 4-dimensional topological space.

Now, I will end this post by pointing out the sense in which “dimension” should be read in my older posts:

• Dimension ≡ Dimension of the underlying vector space
• Dimension ≡ Lebesgue covering dimension of the underlying topological space
• Special Numbers: update (note that here I am talking about “topological manifolds” of which “differentiable manifolds” are a special case)

# Real Numbers

Standard

A few days ago I found something very interesting on 9gag:

There are lots of interesting comments, but here is one argument from the comments:

…. Infinite x zero (as a limit) is indefinite. But infinite x zero (as a number) is zero. So lim( 0 x exp (x²) ) = 0 while lim ( f(X) x exp(X) ) with f(X)->0 is indefinite …

The statement made in the post is vague and can lead to different opinions (what about taking the product over the surreal numbers, say?), but we can safely avoid this by considering the product of real numbers only.
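The distinction the quoted comment makes (the constant zero times something that blows up, versus a function merely tending to zero times something that blows up) can be checked with SymPy; a small sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# The constant 0 times anything is identically 0, so the limit is 0:
print(sp.limit(0 * sp.exp(x ** 2), x, sp.oo))          # 0

# But f(x) -> 0 times a blow-up is a genuine 0 * oo indeterminate form;
# the answer depends on how fast f decays:
print(sp.limit(sp.exp(-x) * sp.exp(x), x, sp.oo))      # 1
print(sp.limit(sp.exp(-2 * x) * sp.exp(x), x, sp.oo))  # 0
print(sp.limit((1 / x) * sp.exp(x), x, sp.oo))         # oo
```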

Now an immediate question should be (since every positive real number has a negative counterpart):

Is the sum of all real numbers zero?

In my opinion the answer should be “no”. As of now I don’t have a concrete proof but the intuition is:

The sum of a convergent series is the limit of its partial sums, and for the real numbers, due to the lack of a starting point, we can’t even define a partial sum. Hence we can’t compute the limit, and the sum of a series of all real numbers doesn’t exist.

Moreover, since the sum of all “positive” real numbers is not finite (i.e. the series of positive real numbers diverges), a series of “all” real numbers cannot converge absolutely; at best it could converge conditionally, and then, by the Riemann Rearrangement Theorem, its value would depend on the order in which the terms are summed, an order we have no way to fix. So my above argument should work. Please let me know if you find a flaw in these reasonings.

Also I found following interesting answer on Quora:

The real numbers are uncountably infinite, and the standard notions of summation are only defined for countably many terms.

Note: Since we are dealing with infinite products and sums, we can’t argue using the algebra of real numbers (commutativity, etc.).

# So many Integrals – I

Standard

We all know that area is the basis of integration theory, just as counting is the basis of the real number system. So we can say:

An integral is a mathematical operator that can be interpreted as the area under a curve.

But, in mathematics we have various flavors of integrals named after their discoverers. Since the topic is a bit long, I have divided it into two posts. In this and the next post I will state their general forms and then briefly discuss them.

Cauchy Integral


This was a rigorous formulation of Newton’s and Leibniz’s idea of integration, given in 1826 by the French mathematician Baron Augustin-Louis Cauchy.

Let $f$ be a positive continuous function defined on an interval $[a, b],\quad a, b$ being real numbers. Let $P : a = x_0 < x_1 < x_2<\ldots < x_n = b$, $n$ being an integer, be a partition of the interval $[a, b]$ and form the sum

$S_P = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i)$

where $t_i \in [x_{i-1} , x_i]$ is such that $f(t_i) = \text{Minimum} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$

By adding more points to the partition $P$, we can get a new partition, say $P'$, which we call a ‘refinement’ of $P$, and then form the sum $S_{P'}$. It is trivial to see that $S_P \leq S_{P'} \leq \text{Area bounded between x-axis and function } f$

Since $f$ is continuous (and positive), $S_P$ becomes closer and closer to a unique real number, say $k$, as we take more and more refined partitions in such a way that $|P| := \text{Maximum} \{x_i - x_{i-1} : 1 \leq i \leq n\}$ becomes closer to zero. Such a limit is independent of the partitions chosen. The number $k$ is the area bounded by the function and the x-axis, and we call it the Cauchy integral of $f$ from $a$ to $b$. Symbolically, $\int_{a}^{b} f(x) dx$ (read as “integral of f(x)dx from a to b”).

Riemann Integral


Cauchy’s definition of the integral can readily be extended to a bounded function with finitely many discontinuities. But a more general definition should require neither the continuity of $f$ nor any analytical expression for $f$ in order to prove that the sum $S_P$ converges to a unique real number.

In 1854, the German mathematician Georg Friedrich Bernhard Riemann gave a more general definition of the integral.

Let $[a,b]$ be a closed interval in $\mathbb{R}$, and let a finite, ordered set of points $P : \{ a = x_0 < x_1 < x_2<\ldots < x_n = b\}$, $n$ being an integer, be a partition of the interval $[a, b]$. Let $I_j$ denote the interval $[x_{j-1}, x_j]$, $j= 1,2,3,\ldots , n$, and let $\Delta_j$ denote the length of $I_j$. The mesh of $P$, denoted by $m(P)$, is defined to be $\max_j \Delta_j$.

Now, let $f$ be a function defined on interval $[a,b]$. If, for each $j$, $s_j$ is an element of $I_j$, then we define:

$S_P = \sum_{j=1}^n f(s_j) \Delta_j$

Further, we say that $S_P$ tends to a limit $k$ as $m(P)$ tends to 0 if, for any $\epsilon > 0$, there is a $\delta >0$ such that, if $P$ is any partition of $[a,b]$ with $m(P) < \delta$, then $|S_P - k| < \epsilon$ for every choice of $s_j \in I_j$.

Now, if $S_P$ tends to a finite limit as $m(P)$ tends to zero, the value of the limit is called the Riemann integral of $f$ over $[a,b]$ and is denoted by $\int_{a}^{b} f(x) dx$.
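The definition above translates almost verbatim into code; a minimal sketch with a uniform partition and randomly chosen sample points $s_j$ (the names are my own):

```python
import random

def riemann_sum(f, a, b, n):
    """S_P = sum of f(s_j) * Delta_j over a uniform partition with mesh (b-a)/n,
    where each sample point s_j is chosen arbitrarily (here: randomly) in I_j."""
    dx = (b - a) / n
    total = 0.0
    for j in range(n):
        s_j = a + j * dx + random.random() * dx  # any point of I_j is allowed
        total += f(s_j) * dx
    return total

# For f(x) = x^2 on [0, 1], S_P tends to 1/3 as the mesh tends to 0:
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000))  # ≈ 0.3333
```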

Darboux Integral


In 1875, a French mathematician, Jean Gaston Darboux  gave his way of looking at the Riemann integral, defining upper and lower sums and defining a function to be integrable if the difference between the upper and lower sums tends to zero as the mesh size gets smaller.

Let $f$ be a bounded function defined on an interval $[a, b],\quad a, b$ being real numbers. Let $P : a = x_0 < x_1 < x_2<\ldots < x_n = b$, $n$ being an integer, be a partition of the interval $[a, b]$ and form the sum

$S_P = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i), \quad \overline{S}_P =\sum_{i=1}^n (x_i - x_{i-1}) f(s_i)$

where $t_i,s_i \in [x_{i-1} , x_i]$ be such that

$f(t_i) = \text{inf} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$,

$f(s_i) = \text{sup} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$

The sums $S_P$ and $\overline{S}_P$ represent areas, and $S_P \leq \text{Area bounded by curve} \leq \overline{S}_P$. Moreover, if $P'$ is a refinement of $P$, then

$S_P \leq S_{P'} \leq \text{Area bounded by curve} \leq \overline{S}_{P'} \leq \overline{S}_{P}$

Using the boundedness of $f$, one can show that $S_P$ and $\overline{S}_P$ converge, as the partition gets finer and finer (that is, $|P| := \text{Maximum}\{x_i - x_{i-1} : 1 \leq i \leq n\} \rightarrow 0$), to some real numbers, say $k_1$ and $k_2$ respectively. Then:

$k_1 \leq \text{Area bounded by the curve} \leq k_2$

If $k_1 = k_2$, then $f$ is said to be integrable and we have $\int_{a}^{b} f(x) dx = k_1 = k_2$.
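Darboux’s upper and lower sums can likewise be sketched in code; for simplicity the inf and sup on each subinterval are taken at the endpoints, which is exact for increasing functions like $x^2$ on $[0,1]$ (the names are my own):

```python
def darboux_sums(f, a, b, n):
    """Lower and upper Darboux sums on a uniform partition,
    assuming f is increasing so that inf/sup sit at the endpoints of each I_j."""
    dx = (b - a) / n
    lower = sum(f(a + j * dx) * dx for j in range(n))        # inf on each I_j
    upper = sum(f(a + (j + 1) * dx) * dx for j in range(n))  # sup on each I_j
    return lower, upper

lo, up = darboux_sums(lambda x: x * x, 0.0, 1.0, 1000)
print(lo, up)  # both approach 1/3 as n grows, and lo <= 1/3 <= up
```

As the definition requires, the two sums squeeze together as the mesh shrinks: their gap here is $(f(1)-f(0)) \cdot \frac{1}{n}$.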

There are two more flavors of integrals, namely the Stieltjes integral and the Lebesgue integral, which I will discuss in the next post.