Tag Archives: polynomials

Polynomials and Commutativity


In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree n with integer coefficients has exactly n complex roots (with appropriate multiplicity).

In high school, a polynomial meant a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory, respectively]:

Every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots.

A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:

Let R be a commutative ring with identity and let p(x)\in R[x] be a polynomial with coefficients in R. The element a\in R is a root of p(x) if and only if (x-a) divides p(x).

A corollary of the above theorem is:

A polynomial f of degree n over a field F has at most n roots in F.

(In case you know undergraduate level algebra, recall that R[x] is a Principal Ideal Domain if and only if R is a field.)

The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and \mathbb{C} is a field, hence a commutative ring). The key step in the proofs of all the above theorems is that the division algorithm holds only in certain special commutative rings (like fields). I would like to illustrate my point with the following fact:

The equation X^2 + X + 1 = 0 has only 2 complex roots, namely \omega = \frac{-1+i\sqrt{3}}{2} and \omega^2 = \frac{-1-i\sqrt{3}}{2}. But if we look for solutions over 2×2 matrices (a non-commutative ring) then we have at least 3 solutions (considering 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix):

\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}

if we allow complex entries. This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
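The claim about A, B and C can be verified directly; a minimal sketch in Python (the helper names are mine, not from the post):

```python
# Primitive cube root of unity: omega = (-1 + i*sqrt(3)) / 2.
omega = complex(-0.5, 3 ** 0.5 / 2)

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def p(M):
    """Evaluate M^2 + M + I for a 2x2 matrix M."""
    I = [[1, 0], [0, 1]]
    M2 = mat_mul(M, M)
    return [[M2[i][j] + M[i][j] + I[i][j] for j in range(2)] for i in range(2)]

A = [[0, -1], [1, -1]]
B = [[omega, 0], [0, omega ** 2]]
C = [[omega ** 2, 0], [0, omega]]

# All three matrices satisfy X^2 + X + 1 = 0 (up to floating-point error).
for M in (A, B, C):
    assert all(abs(p(M)[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Note that A has integer entries, so the extra solution is not an artifact of allowing complex numbers; it exists because 2×2 matrix multiplication is non-commutative.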


Prime Polynomial Theorem


I just wanted to point towards a nice theorem, analogous to the Prime Number Theorem, which is not talked about much:

# irreducible monic polynomials with coefficients in \mathbb{F}_q and of degree n \sim \frac{q^n}{n}, for a prime power q.

The proof of this theorem follows from Gauss’ formula:

# monic irreducible polynomials with coefficients in \mathbb{F}_q and of degree n = \displaystyle{\frac{1}{n}\sum_{d|n}\mu\left(\frac{n}{d}\right)q^d}, since the d=n term dominates the sum.


For details, see the first section of this thesis: http://alpha.math.uga.edu/~pollack/thesis/thesis-final.pdf
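Gauss’ formula is easy to evaluate directly; a quick sketch in Python (the function names are mine) counting monic irreducible polynomials over \mathbb{F}_q:

```python
def mobius(n):
    """Möbius function mu(n), computed by trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:       # squared prime factor => mu(n) = 0
                return 0
            result = -result
        d += 1
    if n > 1:                    # one remaining prime factor
        result = -result
    return result

def num_irreducible(q, n):
    """Gauss' formula: # monic irreducible degree-n polynomials over F_q."""
    total = sum(mobius(n // d) * q ** d for d in range(1, n + 1) if n % d == 0)
    return total // n            # the sum is always divisible by n

# Over F_2, degrees 1..4 give 2, 1, 2, 3 monic irreducible polynomials
# (e.g. degree 2: only x^2 + x + 1).
print([num_irreducible(2, n) for n in range(1, 5)])  # -> [2, 1, 2, 3]
```

Printing `num_irreducible(q, n)` next to `q**n / n` for growing n shows the ratio approaching 1, which is the prime polynomial theorem.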

Ulam Spiral


Some of you may know what Ulam’s spiral is (I am not describing what it is because the present Wikipedia entry is awesome, though I mentioned it earlier also). When I first read about it, I thought it was just a coincidence, a useless observation. But a few days ago, while reading an article by Yuri Matiyasevich, I came to know about the importance of this observation. (Though just now I realised that the Wikipedia article describes it clearly, so in this post I just want to re-write that idea.)

It’s an open problem in number theory to find a non-linear, non-constant polynomial which takes prime values infinitely often. There are some conjectures about the conditions such polynomials must satisfy, but very little progress has been made in this direction. This is where Ulam’s spiral raises some hope. In the Ulam spiral, the prime numbers tend to form longish chains along the diagonals, and the numbers on some diagonals are the values of a quadratic polynomial with integer coefficients.


Ulam spiral consists of the numbers between 1 and 400, in a square spiral. All the prime numbers are highlighted. ( Ulam Spiral by SplatBang)

Surprisingly, this pattern continues for large numbers. A point to be noted is that this pattern is a feature of spirals that need not begin with 1. For example, the values of the polynomial x^2+x+41 form a diagonal pattern on a spiral beginning with 41.
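That polynomial is Euler’s famous prime-generating polynomial: x^2+x+41 is prime for every integer x from 0 to 39, and first fails at x = 40. A quick check in Python (helper names are mine):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euler's polynomial x^2 + x + 41 takes prime values for x = 0, 1, ..., 39.
values = [x * x + x + 41 for x in range(40)]
assert all(is_prime(v) for v in values)

# It fails at x = 40, since 40^2 + 40 + 41 = 41^2.
assert not is_prime(40 * 40 + 40 + 41)
```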


Repelling Numbers


An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:

The zeros of L-functions repel each other.

Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:

This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.

I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:


In this picture, the colour of a point indicates the degree of the polynomial of which it’s a root: red represents the roots of linear polynomials (i.e. rational numbers), green the roots of quadratic polynomials, blue the roots of cubic polynomials, yellow the roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest polynomial with integer coefficients of which it’s a root, where the complexity is the sum of the absolute values of the coefficients of that polynomial.
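This is not the actual code behind Moore’s picture, but a minimal sketch of how such a point set could be generated, restricted to quadratic polynomials and using the quadratic formula (the names and the complexity bound are my own choices):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c (a != 0), via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# Collect roots of all quadratics with complexity |a| + |b| + |c| <= bound.
# (Taking a > 0 loses nothing: negating a polynomial keeps its roots.)
bound = 5
points = []
for a in range(1, bound + 1):
    for b in range(-bound, bound + 1):
        for c in range(-bound, bound + 1):
            if abs(a) + abs(b) + abs(c) <= bound:
                points.extend(quadratic_roots(a, b, c))

# The point i appears: it is a root of x^2 + 1, which has complexity 2.
assert any(abs(z - 1j) < 1e-12 for z in points)
```

Plotting `points`, sized by complexity, already reproduces the clustering near simple algebraic numbers such as i that Baez points out below.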

Moreover,  John Baez comments in his blog post that:

There are many patterns in this picture that call for an explanation! For example, look near the point i. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.

To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with the following GIF from Reddit (click on it for details):

Arithmetic Operations


There are only 4 binary operations which we call “arithmetic operations”. These are:

  • Addition (+)
  • Subtraction (-)
  • Multiplication (×)
  • Division (÷)

Reading this fact, an obvious question is:

Why only four out of the infinitely many possible binary operations are said to be arithmetical?

Before presenting my attempt to answer this question, I would like to remind you that these are the operations you were taught when you learnt about numbers i.e. arithmetic.

In high school when \sqrt{2} is introduced, we are told that real numbers are of two types: “rational” and “irrational”. Then in college when \sqrt{-1} is introduced, we should be told that complex numbers are of two types: “algebraic” and “transcendental”.

As I have commented before, there are various number systems. And for each number system we have some valid arithmetic operations leading to a valid algebraic structure. So, only these 4 operations are entitled to be called arithmetic operations because the algebraic numbers are closed under exactly these operations: the sum, difference, product and quotient of two algebraic numbers is again an algebraic number (in other words, the algebraic numbers form a field).
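This closure can at least be checked numerically in particular cases. For example, \sqrt{2}+\sqrt{3} is a root of x^4 - 10x^2 + 1 (square twice to see this), which the following sketch verifies:

```python
# sqrt(2) and sqrt(3) are algebraic (roots of x^2 - 2 and x^2 - 3).
# Their sum alpha = sqrt(2) + sqrt(3) is again algebraic:
# squaring twice shows it satisfies x^4 - 10*x^2 + 1 = 0.
alpha = 2 ** 0.5 + 3 ** 0.5
assert abs(alpha ** 4 - 10 * alpha ** 2 + 1) < 1e-9

# Their product sqrt(6) is also algebraic: it is a root of x^2 - 6.
beta = 2 ** 0.5 * 3 ** 0.5
assert abs(beta ** 2 - 6) < 1e-9
```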

Now this leads to another obvious question:

Why are we so concerned about algebraic numbers?

To answer this question, we have to look at the motivation behind the construction of the various number systems: integers, rationals, irrationals, complex numbers… The construction of these number systems was motivated by our need to solve polynomial equations of various degrees (linear, quadratic, cubic…). And the Fundamental Theorem of Algebra says:

Every polynomial with rational coefficients and of degree n in variable x has n solutions (counted with multiplicity) in the complex number system.

But here is a catch. There are far more complex numbers which don’t satisfy any polynomial with rational coefficients (called transcendental numbers) than complex numbers which do satisfy one (called algebraic numbers): the transcendental numbers are uncountable, while the algebraic numbers are countable. And we wish to express the solutions of a polynomial equation (i.e. algebraic numbers) in terms of sums, differences, products, quotients or m^{th} roots of rational numbers (since the coefficients were rational numbers). Therefore, sum, difference, product and division are the only 4 arithmetic operations.

My previous statement may lead to a doubt that:

Why isn’t taking the m^{th} root an arithmetic operation?

This is because it isn’t a binary operation to start with: since m is fixed, it is a unary operation. Also, taking the m^{th} root of an algebraic number again gives an algebraic number: if \alpha is a root of p(x), then any m^{th} root of \alpha is a root of p(x^m).

CAUTION: The reverse of taking an m^{th} root is raising a number to the m^{th} power, i.e. multiplying it by itself m times, which is obviously allowed. But this doesn’t make the binary operation of exponentiation, \alpha^{\beta} where \alpha and \beta are algebraic numbers, an arithmetic operation. For example, 2^{\sqrt{2}} is transcendental (it is called the Gelfond–Schneider constant, or Hilbert number) even though 2 and \sqrt{2} are algebraic.