Category Archives: My Research

Something interesting that I believe I have discovered

Polynomials and Commutativity


In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree n with integer coefficients has exactly n complex roots (counted with appropriate multiplicity).

In high school, a polynomial meant a polynomial in one variable. Then last year I learned three different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis, and Galois theory]:

Every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots.

A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:

Let R be a commutative ring with identity and let p(x)\in R[x] be a polynomial with coefficients in R. The element a\in R is a root of p(x) if and only if (x-a) divides p(x).

A corollary of the above theorem is:

A polynomial f of degree n over a field F has at most n roots in F.

(In case you know undergraduate level algebra, recall that R[x] is a Principal Ideal Domain if and only if R is a field.)
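Here is a quick SymPy illustration (my own sketch) of the Factor Theorem over the field \mathbb{Q}: since a=1 is a root of p(x)=x^3-1, the factor (x-1) divides p(x) with zero remainder.

```python
# Factor Theorem over Q: since p(1) = 0, (x - 1) divides p(x) = x^3 - 1.
from sympy import symbols, div

x = symbols('x')
p = x**3 - 1

q, r = div(p, x - 1, domain='QQ')
print(q)   # x**2 + x + 1
print(r)   # 0, so (x - 1) | p(x), as the Factor Theorem predicts
```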

The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and \mathbb{C} is a field, hence a commutative ring). The key step in the proofs of all the above theorems is that the division algorithm holds only in some special commutative rings (like fields). I would like to illustrate my point with the following fact:

The equation X^2 + X + 1 = 0 has only 2 complex roots, namely \omega = \frac{-1+i\sqrt{3}}{2} and \omega^2 = \frac{-1-i\sqrt{3}}{2}. But if we look for solutions among 2×2 matrices (a non-commutative ring), then we have at least 3 solutions (read 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix):

\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}

if we allow complex entries. This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
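Here is a quick SymPy check (my own sketch, separate from the linked discussion) that all three matrices really are roots of X^2 + X + 1:

```python
# Verify that A, B, C above satisfy X^2 + X + I = 0 over 2x2 complex matrices.
from sympy import Matrix, eye, zeros, sqrt, I, simplify

omega = (-1 + I*sqrt(3)) / 2          # primitive cube root of unity

A = Matrix([[0, -1], [1, -1]])
B = Matrix([[omega, 0], [0, omega**2]])
C = Matrix([[omega**2, 0], [0, omega]])

for X in (A, B, C):
    assert simplify(X**2 + X + eye(2)) == zeros(2, 2)
print("All three matrices are roots of X^2 + X + 1.")
```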


Four Examples


Following are four examples of sequences (along with their properties) which can be helpful in gaining a better understanding of theorems about sequences in real analysis:

  • \langle n\rangle_{n=1}^{\infty} : unbounded, strictly increasing, diverging
  • \langle \frac{1}{n}\rangle_{n=1}^{\infty} : bounded, strictly decreasing, converging
  • \langle \frac{n}{1+n}\rangle_{n=1}^{\infty} : bounded, strictly increasing, converging
  • \langle (-1)^{n+1}\rangle_{n=1}^{\infty} : bounded, not converging (oscillating)

I was really amazed to find that x_n=\frac{n}{n+1} is a strictly increasing sequence, and in general, the function f(x)=\frac{x}{1+x} defined for all positive real numbers is an increasing function bounded above by 1: indeed, f(x) = 1 - \frac{1}{1+x}, and \frac{1}{1+x} decreases as x increases.

 

The graph of x/(1+x) for x>0, plotted using SageMath 7.5.1
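In case you want to reproduce the figure without SageMath, here is a rough matplotlib equivalent (my own sketch, not the original SageMath code):

```python
# The graph of x/(1+x) for x > 0: increasing and bounded above by 1.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.01, 20, 500)
plt.plot(x, x / (1 + x), label="x/(1+x)")
plt.axhline(1, linestyle="--", color="gray", label="y = 1 (upper bound)")
plt.xlabel("x")
plt.legend()
plt.show()
```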

 

Also, just as a passing remark: since \log(x) grows more slowly than any positive power of x, the function \frac{x}{\log(x)}, which appears in the prime number theorem, is unbounded for x>1; it is increasing for x>e:


The plot of x/log(x) for x>2. The dashed line is y=x, for comparison of growth rates. Plotted using SageMath 7.5.1
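And a similar matplotlib sketch for this second plot:

```python
# The graph of x/log(x) for x > 2, with y = x dashed for comparison.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(2, 100, 500)
plt.plot(x, x / np.log(x), label="x/log(x)")
plt.plot(x, x, linestyle="--", label="y = x")
plt.xlabel("x")
plt.legend()
plt.show()
```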

 

Prime Polynomial Theorem


I just wanted to point towards a nice theorem, analogous to the Prime Number Theorem, which is not talked about much:

The number of monic irreducible polynomials with coefficients in \mathbb{F}_q and of degree n is asymptotically \frac{q^n}{n} as n\to\infty, for a fixed prime power q.

The proof of this theorem follows from Gauss’ formula:

The number of monic irreducible polynomials with coefficients in \mathbb{F}_q and of degree n equals \displaystyle{\frac{1}{n}\sum_{d|n}\mu\left(\frac{n}{d}\right)q^d}. The asymptotic follows since the d=n term, \frac{q^n}{n}, dominates the sum.
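Gauss' formula is easy to check numerically; here is a small SymPy sketch (mobius and divisors are SymPy's Möbius function and divisor list) comparing the exact count with the asymptotic q^n/n:

```python
# Count monic irreducible polynomials of degree n over F_q via Gauss' formula,
# and compare with the Prime-Number-Theorem-style asymptotic q^n/n.
from sympy import divisors, mobius

def monic_irreducible_count(q, n):
    return sum(mobius(n // d) * q**d for d in divisors(n)) // n

q = 2
for n in (4, 8, 12):
    print(n, monic_irreducible_count(q, n), q**n / n)
# e.g. n = 4 gives exactly 3, versus 16/4 = 4 from the asymptotic
```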

 

For details, see the first section of this thesis: http://alpha.math.uga.edu/~pollack/thesis/thesis-final.pdf

Recursion and Periodicity


One of the simplest recursive formulas that I can think of is the one that generates the Fibonacci sequence: F_{n+1} = F_n + F_{n-1} for n\geq 1, with F_0 = F_1 = 1. So I will illustrate the following general concept about recursions using the Fibonacci sequence.

A sequence generated by such a recursive formula is periodic modulo k, for any integer k greater than 1.

I find this fact very interesting because it means that a strictly increasing sequence, once made bounded using the modulo operation (which allows only finitely many values as outputs of the recursion), must settle into a periodic cycle.

Following are the first 25 terms of the Fibonacci sequence:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025.

And here are a few examples modulo k, for k=2,3,4,5,6,7,8:

[Table: the first 25 Fibonacci numbers reduced modulo k, for k = 2, 3, 4, 5, 6, 7, 8]

As you can see, the sequence repeats as soon as the pair 1, 0 appears. And from here one can actually see why there should be periodicity.

For the sequence to repeat, what we need is a repetition of two consecutive values (two being the number of terms involved in the recursive formula) somewhere in the sequence of successive pairs. And modulo k the choices are limited, namely k^2 possible pairs, so by the pigeonhole principle some pair must repeat among the first k^2+1 pairs. Moreover, each pair determines not only the next pair but also the previous one (since F_{n-1} = F_{n+1} - F_n \pmod{k}), so the sequence is periodic from the very start, and the initial pair must recur within the first k^2 terms.
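Here is a short Python sketch that regenerates the rows of the table above and finds the period of each row by searching for the first recurrence of the starting pair (1, 1):

```python
# Fibonacci numbers modulo k (with F_0 = F_1 = 1, as above), and the
# period: the number of steps until the starting pair (1, 1) recurs.
def fib_mod(k, count=25):
    a, b = 1, 1
    seq = []
    for _ in range(count):
        seq.append(a)
        a, b = b, (a + b) % k
    return seq

def period_mod(k):
    a, b, steps = 1, 1, 0
    while True:
        a, b, steps = b, (a + b) % k, steps + 1
        if (a, b) == (1, 1):
            return steps

for k in range(2, 9):
    print(f"mod {k} (period {period_mod(k)}):", fib_mod(k))
```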

For rigorous proofs and its relation to number theory, see: http://math.stanford.edu/~brianrl/notes/fibonacci.pdf

Repelling Numbers


An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:

The zeros of L-functions repel each other.

Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:

This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.

I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed colourful plots in the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:

[Image: the algebraic numbers in the complex plane, coloured by degree]

In this picture, the colour of a point indicates the degree of the polynomial of which it's a root: red represents the roots of linear polynomials (i.e. rational numbers), green the roots of quadratic polynomials, blue the roots of cubic polynomials, yellow the roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest polynomial with integer coefficients of which it's a root, where the complexity is the sum of the absolute values of the coefficients of that polynomial.
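As a rough illustration of this recipe (my own sketch, certainly not David Moore's actual code), one can plot the roots of all polynomials with small integer coefficients, coloured by degree and sized by complexity:

```python
# A sketch: roots of all integer polynomials with coefficients in
# {-3,...,3} and degree <= 4, coloured by degree, sized by "complexity"
# (the sum of the absolute values of the coefficients).
import itertools
import numpy as np
import matplotlib.pyplot as plt

colors = {1: "red", 2: "green", 3: "blue", 4: "yellow"}
fig, ax = plt.subplots(figsize=(8, 8), facecolor="black")
ax.set_facecolor("black")

for degree in (4, 3, 2, 1):              # draw low degrees last, on top
    for coeffs in itertools.product(range(-3, 4), repeat=degree + 1):
        if coeffs[0] == 0:               # leading coefficient must be nonzero
            continue
        complexity = sum(abs(c) for c in coeffs)
        roots = np.roots(coeffs)
        ax.scatter(roots.real, roots.imag, color=colors[degree],
                   s=50.0 * 2.0 ** (-complexity), lw=0)

ax.set_xlim(-2, 2); ax.set_ylim(-2, 2)
plt.show()
```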

Moreover, John Baez comments in his blog post that:

There are many patterns in this picture that call for an explanation! For example, look near the point i. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.

To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with the following GIF from Reddit (click on it for details):

Prime Consequences


Most of us are aware of the following consequence of the Fundamental Theorem of Arithmetic:

There are infinitely many prime numbers.

The classic proof by Euclid is easy to follow. But I wanted to share the following two analytic equivalents (an infinite series and an infinite product) of the above purely arithmetical statement:

  • \displaystyle{\sum_{p}\frac{1}{p}}   diverges.

For proof, refer to this discussion: https://math.stackexchange.com/q/361308/214604
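Numerically, the partial sums of this series grow without bound but extremely slowly (by Mertens' theorem, like \log\log x); here is a small sketch:

```python
# Partial sums of 1/p over primes p < bound: unbounded, but growing
# only like log(log(bound)).
from sympy import primerange

for bound in (10**2, 10**4, 10**6):
    print(bound, sum(1 / p for p in primerange(2, bound)))
```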

  • \displaystyle{\sum_{n=1}^\infty \frac{1}{n^{s}} = \prod_p\left(1-\frac{1}{p^s}\right)^{-1}}, where s is any complex number with \text{Re}(s)>1.

The outline of the proof, when s is a real number, has been discussed here: http://mathworld.wolfram.com/EulerProduct.html
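For a quick numerical illustration at the real value s=2 (where both sides equal \pi^2/6 \approx 1.6449), one can compare a partial sum against a partial product over primes; a minimal sketch:

```python
# Partial sum of 1/n^2 versus partial Euler product over primes, both
# approaching zeta(2) = pi^2/6.
from sympy import primerange

s = 2
partial_sum = sum(1 / n**s for n in range(1, 100000))

partial_product = 1.0
for p in primerange(2, 1000):
    partial_product *= 1 / (1 - p**(-s))

print(partial_sum, partial_product)   # both ~ 1.6449...
```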

Solving Logarithmic Equations


While reading John Derbyshire’s Prime Obsession I came across the following statement (clearly explained on p. 74):

Any positive power of \log(x) eventually increases more slowly than any positive power of x.

It is easy to prove this analytically, by taking derivatives to compare growth rates. But algebraically it implies, for example, that:

There are either no real solutions or two real solutions of the equation
\log(x) = x^\varepsilon
for any given \varepsilon>0 (in the borderline case, where the two curves are tangent, the two solutions coincide).

Now the question that arises is: how do we find such an x? I had no idea how to solve such logarithmic equations, so I took the help of Google and discovered this Mathematics.SE post. Following it, we put \log(x)=y and re-write the equation as:

y=e^{y\varepsilon}

Now, to be able to use the Lambert W function (also called the product logarithm function), we need to re-write the above equation in the form z e^{z} = \text{constant}, but I have failed to do so.

But using WolframAlpha I was able to solve \log(x)=x^2 to get x=e^{\frac{-W(-2)}{2}} (which is a complex, non-real number, i.e. this equation has no real solution), though I was not able to figure out the steps involved. So if you have any idea about the general method or the special case of higher exponents, please let me know.
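For what it's worth, here is a numerical sanity check of that WolframAlpha answer (my own sketch, not WolframAlpha's steps): the standard Lambert W manipulation y = e^{\varepsilon y} \Rightarrow (-\varepsilon y)e^{-\varepsilon y} = -\varepsilon \Rightarrow y = -\frac{W(-\varepsilon)}{\varepsilon} gives, for \varepsilon = 2, exactly the value x=e^{\frac{-W(-2)}{2}} quoted above:

```python
# Verify numerically that x = e^(-W(-2)/2) satisfies log(x) = x^2.
# W(-2) is complex, so x is a non-real complex number.
import cmath
from scipy.special import lambertw

eps = 2
y = -lambertw(-eps) / eps        # y = log(x), principal branch of W
x = cmath.exp(y)

print("x =", x)                                  # complex => no real solution
print("log(x) - x^2 =", cmath.log(x) - x**eps)   # ~ 0
```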