
Finite Sum & Divisibility


I wish to discuss a small problem from The USSR Olympiad Problem Book (problem 59) about finite partial sums of the harmonic series. The problem asks us to prove that

\displaystyle{\sum_{k=2}^{n} \frac{1}{k}} can never be an integer for any value of n \geq 2.

I myself couldn't make much progress towards proving such a statement. So I read the solution, and realised how a simple observation about parity leads to this conclusion.

Firstly, observe that among the natural numbers from 2 to n there is exactly one which is divisible by the highest power of 2 occurring in that range, say 2^m. (If there were two multiples of 2^m between 2 and n, then some multiple of 2^{m+1} would lie between them, contradicting the maximality of m.) Now sum the reciprocals of these natural numbers over the common denominator L = \text{lcm}(2, 3, \ldots, n), which is an even number. The numerator is then the sum of the terms L/k for k from 2 to n, and every term is even except the single one corresponding to the k divisible by 2^m, which is odd. So the numerator is the sum of (n-2) even numbers and one odd number, and hence is odd. Since under no circumstances can an even number divide an odd number, the denominator cannot be a factor of the numerator. Hence the fraction cannot be reduced to an integer, and the sum is never an integer.
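For example, with n = 6 the common denominator is L = \text{lcm}(2,\ldots,6) = 60 and the numerator is 30 + 20 + 15 + 12 + 10 = 87, where only the term 15 (coming from k = 4 = 2^2) is odd. Here is a minimal Python check of both claims for small n, using only the standard library (my own sanity check, not part of the book's solution):

from fractions import Fraction
from math import lcm  # math.lcm accepts multiple arguments on Python >= 3.9

# Verify, for small n, the two claims of the parity argument:
#   (1) 1/2 + 1/3 + ... + 1/n is never an integer;
#   (2) the common denominator L = lcm(2, ..., n) is even, while the
#       numerator, the sum of the terms L // k, is odd.
for n in range(2, 200):
    s = sum(Fraction(1, k) for k in range(2, n + 1))
    assert s.denominator > 1, f"sum is an integer at n = {n}"
    L = lcm(*range(2, n + 1))
    numerator = sum(L // k for k in range(2, n + 1))
    assert L % 2 == 0 and numerator % 2 == 1, f"parity fails at n = {n}"

print("checked all n from 2 to 199")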

Repelling Numbers


An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:

The zeros of L-functions repel each other.

Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:

This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.

I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed colourful plots in the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:

[Picture: the algebraic numbers plotted in the complex plane]

In this picture, the colour of a point indicates the degree of the polynomial of which it's a root: red represents the roots of linear polynomials (i.e. rational numbers), green represents the roots of quadratic polynomials, blue represents the roots of cubic polynomials, yellow represents the roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest polynomial with integer coefficients of which it's a root, where the complexity is the sum of the absolute values of the coefficients of that polynomial.
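To get a feel for the data behind such a picture, here is a rough Python sketch (my own construction with arbitrary small bounds, not David Moore's actual code) that enumerates integer polynomials of low complexity and records, for every root, the degree and complexity that would determine its colour and size:

import itertools
import numpy as np

# Collect roots of integer polynomials with small coefficients, tagging
# each root with the degree (colour in the picture) and the sum of
# |coefficients| (which controls the size). This rough version tags a
# root with the data of whatever polynomial produced it, rather than of
# the *simplest* polynomial having that root.
MAX_DEGREE = 3            # illustrative bounds, not those of the picture
COEFF_RANGE = range(-3, 4)

points = []               # (root, degree, complexity)
for degree in range(1, MAX_DEGREE + 1):
    for coeffs in itertools.product(COEFF_RANGE, repeat=degree + 1):
        if coeffs[0] == 0:            # leading coefficient must be nonzero
            continue
        complexity = sum(abs(c) for c in coeffs)
        for root in np.roots(coeffs):
            points.append((complex(root), degree, complexity))

print(f"collected {len(points)} roots")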

Moreover, John Baez comments in his blog post that:

There are many patterns in this picture that call for an explanation! For example, look near the point i. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.

To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with the following GIF from Reddit (click on it for details):

Prime Consequences


Most of us are aware of the following consequence of the Fundamental Theorem of Arithmetic:

There are infinitely many prime numbers.

The classic proof by Euclid is easy to follow. But I wanted to share the following two analytic equivalents (an infinite series and an infinite product) of the above purely arithmetical statement:

  • \displaystyle{\sum_{p}\frac{1}{p}} diverges, where the sum runs over all prime numbers p.

For a proof, refer to this discussion: https://math.stackexchange.com/q/361308/214604
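Just to see how slowly this series diverges, here is a small Python experiment; the \log\log x comparison comes from Mertens' second theorem, which is a stronger statement than the bare divergence above:

import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Partial sums of 1/p grow without bound, but only at the glacial
# rate of log(log(x)) (Mertens' second theorem).
for x in (10**2, 10**4, 10**6):
    partial = sum(1.0 / p for p in primes_up_to(x))
    print(f"x = {x:>7}: sum of 1/p = {partial:.4f}, log log x = {math.log(math.log(x)):.4f}")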

  • \displaystyle{\sum_{n=1}^\infty \frac{1}{n^{s}} = \prod_p\left(1-\frac{1}{p^s}\right)^{-1}}, where s is any complex number with \text{Re}(s)>1.

An outline of the proof, when s is a real number, has been discussed here: http://mathworld.wolfram.com/EulerProduct.html
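As a quick numerical sanity check of this identity (my own snippet, truncating both sides), take s = 2, where both sides should be close to \zeta(2) = \pi^2/6:

import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Truncate both sides of the Euler product identity at s = 2;
# both should approximate zeta(2) = pi^2 / 6.
series = sum(1 / n**2 for n in range(1, 10**6))
product = 1.0
for p in primes_up_to(10**4):
    product *= 1 / (1 - p ** (-2))

print(f"truncated series : {series:.6f}")
print(f"truncated product: {product:.6f}")
print(f"pi^2 / 6         : {math.pi ** 2 / 6:.6f}")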

Solving Logarithmic Equations


While reading John Derbyshire's Prime Obsession I came across the following statement (clearly explained on p. 74):

Any positive power of \log(x) eventually increases more slowly than any positive power of x.

It is easy to prove this analytically, by taking derivatives to compare slopes. But algebraically it implies, for example, that:

There are either no real solutions or two real solutions of the equation
\log(x) = x^\varepsilon
for any given \varepsilon>0, with exactly one solution in the borderline case \varepsilon = 1/e.
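Both facts follow from a short one-variable calculus computation; here is my own back-of-the-envelope sketch (not from Derbyshire's book). For the growth statement, substitute x = e^{t}, so that for any a, b > 0

\displaystyle{\lim_{x\to\infty}\frac{(\log x)^{a}}{x^{b}} = \lim_{t\to\infty}\frac{t^{a}}{e^{bt}} = 0,}

since the exponential e^{bt} eventually outgrows every power of t. For the count of solutions, note that f(x) = x^{\varepsilon} - \log(x) attains its minimum where f'(x) = \varepsilon x^{\varepsilon-1} - \frac{1}{x} = 0, i.e. at x = \varepsilon^{-1/\varepsilon}, and the minimum value is

\displaystyle{f\left(\varepsilon^{-1/\varepsilon}\right) = \frac{1}{\varepsilon} - \frac{1}{\varepsilon}\log\left(\frac{1}{\varepsilon}\right) = \frac{1+\log\varepsilon}{\varepsilon},}

which is negative for \varepsilon < 1/e (two solutions), zero for \varepsilon = 1/e (exactly one solution, at x = e^{e}), and positive for \varepsilon > 1/e (no real solution).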

Now the question that arises is: how do we actually find these solutions? I had no idea how to solve such logarithmic equations, so I took the help of Google and discovered this Mathematics.SE post. So, we put \log(x)=y and re-write the equation as:

y=e^{\varepsilon y}

Now to be able to use the Lambert W function (also called the product logarithm function) we need to re-write the above equation in the form u e^{u} = \text{constant}. This stumped me at first, but the manipulation turns out to be short.
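Multiplying both sides of y = e^{\varepsilon y} by -\varepsilon e^{-\varepsilon y} gives

\displaystyle{-\varepsilon y \, e^{-\varepsilon y} = -\varepsilon,}

which is exactly the form u e^{u} = z with u = -\varepsilon y and z = -\varepsilon. By the defining property W(z)e^{W(z)} = z of the Lambert W function, we get u = W(-\varepsilon), hence

\displaystyle{y = -\frac{W(-\varepsilon)}{\varepsilon} \qquad \text{and} \qquad x = e^{y} = e^{-\frac{W(-\varepsilon)}{\varepsilon}}.}

For 0 < \varepsilon < 1/e the two real branches W_0 and W_{-1} give the two real solutions, while for \varepsilon > 1/e the value W(-\varepsilon) is non-real, matching the solution counts above. (Unless I have slipped somewhere; corrections welcome.)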

Indeed, using WolframAlpha I was able to solve \log(x)=x^2 to get x=e^{\frac{-W(-2)}{2}} (a non-real complex number, i.e. this equation has no real solution), which is exactly the \varepsilon = 2 case of the formula above. If you know other methods of attacking such logarithmic equations, please let me know.