Finite Sum & Divisibility

I wish to discuss a small problem from The USSR Olympiad Problem Book (problem 59) about partial sums of the harmonic series. The problem asks us to prove that

\displaystyle{\sum_{k=2}^{n} \frac{1}{k}} can never be an integer for any integer n \ge 2.

I myself couldn’t think of a way to prove such a statement. But on reading the solution, I realised how a simple observation about parity leads to the conclusion.

Firstly, observe that among the natural numbers from 2 to n there is exactly one which is divisible by the highest power of 2 not exceeding n, namely that power of 2 itself, say 2^m (any other multiple of 2^m would be at least 2\cdot 2^m > n). Now write the sum of the reciprocals of these numbers over a common denominator L, the least common multiple of the numbers from 2 to n; this denominator is clearly even. The numerator is then the sum of the n-1 numbers L/k for k = 2, \ldots, n. Every such number is even except the one corresponding to k = 2^m, because every other k is divisible by a strictly smaller power of 2 than L is, so L/k retains a factor of 2. The numerator is therefore the sum of (n-2) even numbers and one odd number, hence odd. Since an even number can never divide an odd number, the denominator cannot be cancelled into the numerator, so the fraction never reduces to an integer and the sum is never an integer.
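Here is a quick computational check of both the claim and the parity pattern (a minimal sketch of my own, not from the book), using Python’s exact rational arithmetic: for every n the reduced fraction has an odd numerator and an even denominator, so it can never be an integer.

```python
from fractions import Fraction

# Check that 1/2 + 1/3 + ... + 1/n is never an integer, and that in lowest
# terms its numerator is odd while its denominator is even.
for n in range(2, 50):
    s = sum(Fraction(1, k) for k in range(2, n + 1))
    assert s.denominator > 1        # never an integer
    assert s.numerator % 2 == 1     # odd numerator
    assert s.denominator % 2 == 0   # even denominator
print("verified for n = 2 .. 49")
```

Reducing the fraction cannot disturb the parity: the unreduced numerator is odd, so any common factor with the denominator is odd, and dividing an even denominator by an odd number leaves it even.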

Repelling Numbers

An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:

The zeros of L-functions repel each other.

Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:

This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.

I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed colourful plots of complex functions in the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:

[Image: the algebraic numbers plotted in the complex plane]

In this picture, the colour of a point indicates the degree of the polynomial of which it’s a root: red represents the roots of linear polynomials (i.e. rational numbers), green the roots of quadratic polynomials, blue the roots of cubic polynomials, yellow the roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest integer polynomial of which it’s a root, where the complexity is the sum of the absolute values of that polynomial’s coefficients.
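For a rough idea of how such a picture can be generated, here is a minimal sketch of my own (not the actual program used by Moore or Brooks): enumerate integer polynomials with small coefficients, compute their roots numerically, and draw each root in the colour of its degree, with a size that decays exponentially in the complexity.

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

# Rough reconstruction: small-coefficient integer polynomials, roots coloured
# by degree, dot size shrinking exponentially with the complexity
# (the sum of the absolute values of the coefficients).
colours = {1: "red", 2: "green", 3: "blue", 4: "yellow"}
for degree, colour in colours.items():
    xs, ys, sizes = [], [], []
    for coeffs in itertools.product(range(-3, 4), repeat=degree + 1):
        if coeffs[0] == 0:          # need a genuine degree-d polynomial
            continue
        complexity = sum(abs(c) for c in coeffs)
        for r in np.roots(coeffs):
            xs.append(r.real)
            ys.append(r.imag)
            sizes.append(40 * 2.0 ** (-complexity))
    plt.scatter(xs, ys, s=sizes, c=colour, lw=0)
plt.gca().set_aspect("equal")
plt.show()
```

This naive version re-plots roots of reducible polynomials at lower degrees too, but it already reproduces the qualitative patterns in the picture.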

Moreover,  John Baez comments in his blog post that:

There are many patterns in this picture that call for an explanation! For example, look near the point i. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.

To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with a GIF from Reddit (follow the link for details).

Prime Consequences

Most of us are aware of the following consequence of the Fundamental Theorem of Arithmetic:

There are infinitely many prime numbers.

The classic proof by Euclid is easy to follow. But I wanted to share the following two analytic equivalents (an infinite series and an infinite product) of the above purely arithmetical statement:

  • \displaystyle{\sum_{p}\frac{1}{p}} diverges, where the sum runs over all primes p.

For proof, refer to this discussion: https://math.stackexchange.com/q/361308/214604

  • \displaystyle{\sum_{n=1}^\infty \frac{1}{n^{s}} = \prod_p\left(1-\frac{1}{p^s}\right)^{-1}}, where s is any complex number with \text{Re}(s)>1.

The outline of the proof, when s is a real number, has been discussed here: http://mathworld.wolfram.com/EulerProduct.html
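As a quick numerical illustration (my own sketch, not from the MathWorld page), one can compare a partial sum of the series with a truncated Euler product for a real value of s:

```python
from sympy import primerange

# Compare a partial sum of zeta(s) with a truncated Euler product, s = 2.
s = 2.0
partial_sum = sum(1 / n**s for n in range(1, 100000))

euler_product = 1.0
for p in primerange(2, 1000):
    euler_product *= 1 / (1 - p**(-s))

print(partial_sum, euler_product)  # both approach pi^2/6 ≈ 1.6449
```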

Solving Logarithmic Equations

While reading John Derbyshire’s Prime Obsession I came across the following statement (clearly explained on p. 74):

Any positive power of \log(x) eventually increases more slowly than any positive power of x.

It is easy to prove this analytically, by comparing derivatives to see which function eventually grows faster. But algebraically it implies, for example, that:

The equation
\log(x) = x^\varepsilon
has, for any given \varepsilon>0, either no real solutions or two (or exactly one, in the borderline case where the two curves are tangent).
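As a sanity check of this claim, here is a numerical sketch of my own (taking log to be the natural logarithm, as in Derbyshire’s book, in which case the borderline tangency occurs at \varepsilon = 1/e): locate the roots of f(x) = \log(x) - x^\varepsilon on either side of its maximum.

```python
import math
from scipy.optimize import brentq

def real_solutions(eps):
    """Real roots of log(x) = x**eps (natural log)."""
    f = lambda x: math.log(x) - x**eps
    peak = (1 / eps) ** (1 / eps)   # f attains its maximum here
    if f(peak) < 0:
        return []                   # the curves never meet
    if f(peak) == 0:
        return [peak]               # borderline tangency (eps = 1/e)
    return [brentq(f, 1.0 + 1e-9, peak), brentq(f, peak, 1e12)]

print(real_solutions(0.2))  # eps < 1/e: two real solutions
print(real_solutions(0.5))  # eps > 1/e: no real solution
```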

Now the question that arises is: how do we find this x in closed form? I had no idea how to solve such logarithmic equations, so I took the help of Google and discovered this Mathematics.SE post. So, we put \log(x)=y and re-write the equation as:

y=e^{y\varepsilon}

Now to be able to use the Lambert W function (also called the product logarithm function) we need to re-write the above equation in the form w e^{w} = z, but I have failed to do so.

But using WolframAlpha I was able to solve \log(x)=x^2 to get x=e^{\frac{-W(-2)}{2}} (which is a complex number, i.e. this equation has no real solution), though I was not able to figure out the steps involved. So if you have any idea about the general method or the special case of higher exponents, please let me know.
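For what it’s worth, the special case can be reproduced with SciPy’s implementation of the Lambert W function. The following sketch is my own reconstruction of the algebra (so treat it as a guess at what WolframAlpha does); it checks that x = e^{-W(-2)/2} really satisfies \log(x) = x^2:

```python
import numpy as np
from scipy.special import lambertw

# log(x) = x^2; put y = log(x), so y = e^{2y}, i.e. y * e^{-2y} = 1.
# Multiply by -2: (-2y) * e^{-2y} = -2, hence -2y = W(-2), so x = exp(-W(-2)/2).
w = lambertw(-2)         # principal branch; complex, since -2 < -1/e
x = np.exp(-w / 2)
print(x)                 # a complex number: no real solution
print(np.log(x), x**2)   # the two sides agree
```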

Combinatorial Puzzle

This is a continuation of the previous post:

How many distinct numbers can be formed by using four 2s and the four arithmetic operations +, -, \times, \div?

For example:
1 = \frac{2+2}{2+2}=\frac{2}{2}\times\frac{2}{2}
2 = 2+\frac{2-2}{2}=\frac{2}{2}+\frac{2}{2}
3 = 2+2 - \frac{2}{2}
4 = 2+2+2-2 = (2\times 2) + (2-2)
(note that some expressions are ambiguous without parentheses)

I have no idea how to approach this problem (since I am not very comfortable with combinatorics), so any help will be appreciated.

Edit[29 May 2017]: This problem has been solved in the comments below. 
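For anyone who wants to reproduce the count independently, here is a brute-force enumeration (a sketch of my own, not the solution from the comments) using exact rational arithmetic:

```python
from fractions import Fraction

# Every fully parenthesised expression arises by splitting the operands into
# a left part and a right part and combining the two sub-results.
def values(nums):
    if len(nums) == 1:
        return {nums[0]}
    out = set()
    for i in range(1, len(nums)):
        for a in values(nums[:i]):
            for b in values(nums[i:]):
                out.update({a + b, a - b, a * b})
                if b != 0:
                    out.add(a / b)
    return out

results = values((Fraction(2),) * 4)
print(len(results), "distinct values")
print(sorted(v for v in results if v.denominator == 1))  # the integers among them
```

The recursion tries every binary-tree shape over the four 2s, so all parenthesisations are covered automatically.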

Arithmetic Puzzle

Following is a very common arithmetic puzzle that you may have encountered as a child:

Express any whole number n using the number 2 precisely four times and using only well-known mathematical symbols.

This puzzle has been discussed on p. 172 of Graham Farmelo’s “The Strangest Man”, which recounts how Paul Dirac solved it using his knowledge of “well-known mathematical symbols”:

\displaystyle{n = -\log_{2}\left(\log_{2}\left(2^{2^{-n}}\right)\right) = -\log_{2}\left(\log_{2}\left(\underbrace{\sqrt{\sqrt{\ldots\sqrt{2}}}}_\text{n times}\right)\right)}

This is an example of thinking outside the box, enabling you to write any number using only three or four 2s. Using a transcendental function to solve an elementary problem may appear to be overkill, but building upon such ideas we can try to tackle the general problem, like the “four fours puzzle”.
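Dirac’s identity is easy to check numerically (a quick sketch of my own): taking n successive square roots of 2 yields 2^{2^{-n}}, and applying the base-2 logarithm twice recovers -n.

```python
import math

# n square roots of 2 give 2**(2**-n); two base-2 logs then recover -n.
for n in range(1, 10):
    x = 2.0
    for _ in range(n):
        x = math.sqrt(x)    # x is now 2**(2**-n)
    assert round(-math.log2(math.log2(x))) == n
print("identity holds for n = 1 .. 9")
```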

This post on Puzzling.SE describes the usage of the following formula, built from the trigonometric identities \cos(\arctan(x)) = \frac{1}{\sqrt{1+x^2}} and \tan(\arcsin(x))=\frac{x}{\sqrt{1-x^2}}, to step from the square root of any non-negative number to the square root of its successor:

\displaystyle{\tan\left(\arcsin\left(\cos\left(\arctan\left(\cos\left(\arctan\left(\sqrt{n}\right)\right)\right)\right)\right)\right)=\sqrt{n+1}}.

Using this we can write any integer n \ge 4 using two 2s:

\displaystyle{n = (\underbrace{\tan\arcsin\cos\arctan\cos\arctan}_{n-4\text{ times}}\,2)^2}

or even with only one 2:

\displaystyle{n = \underbrace{\tan\arcsin\cos\arctan\cos\arctan}_{n^2-4\text{ times}}\,2}
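These identities are also easy to verify numerically. The following sketch is my own (the helper function step is just a name I’ve introduced): it applies the trigonometric chain n - 4 times to 2 and squares the result.

```python
import math

def step(x):
    # tan(arcsin(cos(arctan(cos(arctan(x)))))) == sqrt(x*x + 1)
    c1 = math.cos(math.atan(x))
    c2 = math.cos(math.atan(c1))
    return math.tan(math.asin(c2))

# Starting from 2 = sqrt(4), each application adds 1 under the square root,
# so n - 4 steps give sqrt(n), and squaring gives n.
n = 10
x = 2.0
for _ in range(n - 4):
    x = step(x)
print(x**2)  # ≈ 10
```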

Intra-mathematical Dependencies

Recently I completed all of my undergraduate level maths courses, so I wanted to sum up my understanding of mathematics in the following dependency diagram:

[Diagram: dependency diagram of undergraduate mathematics topics]

I imagine this like a wall, where each topic is a brick. You can bake different bricks at different times (i.e. follow your curriculum to learn these topics), but finally, this is how they should be arranged (in my opinion) to get the best possible understanding of mathematics.

As of now, I have an “elementary” knowledge of Set Theory, Algebra, Analysis, Topology, Geometry, Probability Theory, Combinatorics and Arithmetic. Unfortunately, in India, there are no undergraduate level courses in Mathematical Logic and Category Theory.

This post can be seen as a sequel to my “Mathematical Relations” post.