Tag Archives: polynomials

FLT for rational functions


The following is problem 2.16 in The Math Problems Notebook:

Prove that if n>2, then we do not have any nontrivial solutions of the equation F^n + G^n = H^n where F,G,H are rational functions. Solutions of the form F = aJ, G=bJ, H=cJ where J is a rational function and a,b,c are complex numbers satisfying a^n + b^n = c^n, are called trivial.

This problem is analogous to Fermat’s Last Theorem (FLT), which states that for n>2, the equation x^n + y^n = z^n has no nontrivial integer solutions.

The solution of this problem involves proof by contradiction:

Since any rational-function solution yields a complex polynomial solution after clearing denominators, it is sufficient to consider a polynomial solution (f,g,h) for which r=\max(\deg(f),\deg(g),\deg(h)) is minimal among all nontrivial polynomial solutions, where r>0.

Assume also that f,g,h are pairwise relatively prime (otherwise a common factor could be cancelled, lowering r). From f^n+g^n = h^n we get h^n-f^n = g^n. Now, using the factorization of h^n-f^n into linear factors via the n-th roots of unity, we get:

\displaystyle{\prod_{\ell = 0}^{n-1}\left(h-\zeta^\ell f\right) = g^n}

where \zeta = e^{\frac{2\pi i}{n}} with i = \sqrt{-1}.
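(As a quick sanity check, not part of the solution itself, the identity \prod_{\ell=0}^{n-1}(h-\zeta^\ell f) = h^n - f^n can be tested numerically for small n by plugging in random complex values for f and h; the short script below does exactly that.)

```python
# Numerical sanity check (illustration only) of the roots-of-unity identity
#   prod_{l=0}^{n-1} (h - zeta^l * f) = h^n - f^n
# at random complex values of f and h.
import cmath
import random

random.seed(0)

for n in range(2, 8):
    zeta = cmath.exp(2j * cmath.pi / n)          # primitive n-th root of unity
    for _ in range(5):
        f = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        h = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        product = 1
        for l in range(n):
            product *= h - zeta**l * f
        assert abs(product - (h**n - f**n)) < 1e-9

print("factorization identity checked numerically for n = 2, ..., 7")
```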

Since \gcd(f,h)=1, any common divisor of h-\zeta^\ell f and h-\zeta^k f for \ell\neq k would divide both f and h, so \gcd(h-\zeta^\ell f, h-\zeta^k f)=1 for \ell\neq k. Since the ring of complex polynomials has the unique factorization property, each of these pairwise coprime factors must itself be an n-th power (up to a constant, and every nonzero constant in \mathbb{C} has an n-th root). Hence g = g_0g_1\cdots g_{n-1}, where the g_\ell are polynomials satisfying \boxed{g_\ell^n = h-\zeta^\ell f}.

Now consider the factors h-f, h-\zeta f, h-\zeta^2 f. Note that, since n>2, these are three elements of the 2-dimensional vector space spanned by f and h over \mathbb{C}. Hence they are linearly dependent, i.e. there is a vanishing linear combination of them with complex coefficients, not all zero. Because the three factors are pairwise coprime, no two of them are proportional, so all three coefficients may be taken to be nonzero, and we can write the dependence as a_0g_0^n + a_1g_1^n = a_2g_2^n with a_\ell\in\mathbb{C}\setminus\{0\}. Setting h_\ell = \sqrt[n]{a_\ell}\,g_\ell (for any fixed choice of the n-th roots), we observe that \boxed{h_0^n+h_1^n = h_2^n}.

Moreover, the polynomials h_\ell are pairwise coprime, i.e. \gcd(h_\ell,h_k)=1 for \ell \neq k, and \max_{\ell}\deg(h_\ell) = \max_{\ell}\deg(g_\ell) < r, since n\deg(g_\ell) = \deg(h-\zeta^\ell f) \le r and n>2. Also, not all of the h_\ell can be constant (otherwise f, h, and hence g, would all be constant, contradicting r>0), so (h_0,h_1,h_2) is a nontrivial polynomial solution of strictly smaller degree. This contradicts the minimality of r; hence no nontrivial solution exists.

The above argument fails for proving the non-existence of integer solutions: two coprime integers do not span a 2-dimensional vector space over \mathbb{C}, so the linear-dependence step has no integer analogue.

Rational input, integer output


Consider the following polynomial [source: Berkeley Problems in Mathematics, problem 6.13.10]:

f(t) = 3t^3 + 10t^2 - 3t

Let’s try to figure out the rational values of t for which f(t) is an integer. Clearly, if t\in\mathbb{Z} then f(t) is an integer. So let’s consider the case when t=m/n where \gcd(m,n)=1 and n\neq \pm 1. Substituting this value of t we get:

\displaystyle{f\left(\frac{m}{n}\right) = \frac{3m^3}{n^3} + \frac{10m^2}{n^2} - \frac{3m}{n}= \frac{m(3m^2+10mn-3n^2)}{n^3}=k \in \mathbb{Z}}

Since \gcd(m,n)=1, we must have n^3\mid (3m^2+10mn-3n^2); in particular n\mid 3m^2, and hence n\mid 3. Also it’s clear that m\mid k. Hence n=\pm 3, and we just need to find the possible values of m.
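(As a quick experimental cross-check, not part of the argument: one can brute-force over reduced fractions m/n with small denominators and confirm that integer values of f occur only when the denominator divides 3. The search bounds in the snippet below are arbitrary.)

```python
# Brute-force check (illustration only): for which reduced denominators n can
# f(m/n) be an integer?  Expected answer: only n = 1 and n = 3.
from fractions import Fraction

def f(t):
    return 3 * t**3 + 10 * t**2 - 3 * t

denominators_seen = set()
for n in range(1, 13):                  # denominators 1..12
    for m in range(-200, 201):
        t = Fraction(m, n)
        if t.denominator != n:          # skip fractions not in lowest terms
            continue
        if f(t).denominator == 1:       # f(t) is an integer
            denominators_seen.add(n)

print(denominators_seen)                # {1, 3}
```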

For n=3 we get:

\displaystyle{f\left(\frac{m}{3}\right) =  \frac{m(m^2+10m-9)}{9}=k \in \mathbb{Z}}

Hence 9\mid m(m^2+10m-9), and since \gcd(m,n)=\gcd(m,3)=1 this forces 9\mid (m^2+10m)=m(m+10), and then 9\mid (m+10), that is, m\equiv 8\pmod 9.

Similarly, for n=-3 we get m\equiv 1 \pmod 9. Hence we conclude that the non-integer values of t which lead to an integer output are:

\displaystyle{t = 3\ell+ \frac{8}{3}, -3\ell-\frac{1}{3}} for all \ell\in\mathbb{Z}
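(Again as an experimental check rather than a proof: the snippet below verifies that both families of values of t give integer outputs, and that among reduced fractions m/3 with |m|\le 300 the integer outputs occur exactly when m\equiv 8\pmod 9.)

```python
# Check (illustration only) of the families t = 3l + 8/3 and t = -3l - 1/3,
# and of the congruence condition m = 8 (mod 9) for denominator 3.
from fractions import Fraction

def f(t):
    return 3 * t**3 + 10 * t**2 - 3 * t

for l in range(-10, 11):
    assert f(Fraction(9 * l + 8, 3)).denominator == 1    # t = 3l + 8/3
    assert f(Fraction(-9 * l - 1, 3)).denominator == 1   # t = -3l - 1/3

# among reduced fractions m/3 with |m| <= 300, which residues of m mod 9
# give integer outputs?
good = {m % 9 for m in range(-300, 301)
        if m % 3 != 0 and f(Fraction(m, 3)).denominator == 1}
print(good)                                              # {8}
```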

 

Polynomials and Commutativity


In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree n with integer coefficients has exactly n complex roots (counted with appropriate multiplicity).

In high school, “a polynomial” meant a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory, respectively]:

Every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots.

A statement about the roots of a polynomial in one variable that holds in a more general setting is the Factor Theorem:

Let R be a commutative ring with identity and let p(x)\in R[x] be a polynomial with coefficients in R. The element a\in R is a root of p(x) if and only if (x-a) divides p(x).

A corollary of the above theorem is:

A polynomial f of degree n over a field F has at most n roots in F.

(In case you know undergraduate level algebra, recall that R[x] is a Principal Ideal Domain if and only if R is a field.)

A key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/roots are assumed to lie in a commutative ring (and \mathbb{C}, being a field, is in particular a commutative ring). The key step in the proofs of all the theorems above is that some form of the division algorithm is available, and this happens only over special commutative rings (like fields). I would like to illustrate this point with the following fact:

The equation X^2 + X + 1 = 0 has only 2 complex roots, namely \omega = \frac{-1+i\sqrt{3}}{2} and \omega^2 = \frac{-1-i\sqrt{3}}{2}. But if we look for solutions among 2×2 matrices (a non-commutative ring), interpreting 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix, then we have at least 3 solutions

\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}

if we allow complex entries. This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details, refer to this Math.SE discussion.
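(Here is a quick numerical verification, not from the original discussion, that the three matrices above really are roots of X^2+X+1, with 1 read as the 2×2 identity matrix.)

```python
# Verify numerically that A, B and C satisfy X^2 + X + I = 0,
# where I is the 2x2 identity matrix.
import numpy as np

omega = (-1 + 1j * np.sqrt(3)) / 2            # primitive cube root of unity

A = np.array([[0, -1], [1, -1]], dtype=complex)
B = np.diag([omega, omega**2])
C = np.diag([omega**2, omega])
I = np.eye(2)

for X in (A, B, C):
    assert np.allclose(X @ X + X + I, np.zeros((2, 2)))

print("A, B and C are all roots of X^2 + X + 1")
```

In fact, conjugating A by any invertible complex matrix gives yet another root, so over the 2×2 complex matrices this quadratic has infinitely many roots.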

Prime Polynomial Theorem


I just wanted to point towards a nice theorem, analogous to the Prime Number Theorem, which is not talked about much:

The number of monic irreducible polynomials with coefficients in \mathbb{F}_q and of degree n is asymptotically \frac{q^n}{n} as n\to\infty, for a fixed prime power q.

The proof of this theorem follows from Gauss’ formula:

The number of monic irreducible polynomials with coefficients in \mathbb{F}_q and of degree n equals \displaystyle{\frac{1}{n}\sum_{d\mid n}\mu\left(\frac{n}{d}\right)q^d}. The asymptotic above comes from the main term, corresponding to d=n, which is \frac{q^n}{n}; the remaining terms contribute only O\!\left(q^{n/2}\right).
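(The short script below, written purely for illustration, evaluates Gauss’ formula for a few small prime powers q and compares it with the asymptotic \frac{q^n}{n}; the Möbius function is hand-rolled so that the snippet is self-contained.)

```python
# Compare Gauss' exact count of monic irreducible polynomials over F_q
# with the asymptotic q^n / n (illustration only, small q and n).

def mobius(n):
    """Moebius function mu(n), computed by trial division (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0            # n has a squared prime factor
            result = -result
        p += 1
    if n > 1:                       # leftover prime factor
        result = -result
    return result

def gauss_count(q, n):
    """Number of monic irreducible polynomials of degree n over F_q."""
    total = sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0)
    return total // n

for q in (2, 3, 4, 5):
    for n in (2, 4, 8, 12):
        print(f"q={q}, n={n:>2}: exact={gauss_count(q, n):>10}, q^n/n = {q**n / n:>12.1f}")
```

For instance, gauss_count(2, 2) returns 1, matching the fact that x^2+x+1 is the only monic irreducible quadratic over \mathbb{F}_2.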

 

For details, see first section of this: http://alpha.math.uga.edu/~pollack/thesis/thesis-final.pdf

Ulam Spiral


Some of you may know what Ulam’s spiral is (I am not describing what it is because the present Wikipedia entry is excellent, though I have also mentioned it earlier). When I first read about it, I thought it was just a coincidence and a useless observation. But a few days ago, while reading an article by Yuri Matiyasevich, I came to know about the importance of this observation. (Though just now I realised that the Wikipedia article describes it clearly, so in this post I just want to re-write that idea.)

It is an open problem in number theory to find a polynomial in one variable, of degree at least 2, that takes prime values infinitely many times. There are conjectures about the conditions such a polynomial should satisfy, but very little progress has been made in this direction. This is where Ulam’s spiral raises some hope. In the Ulam spiral, the prime numbers tend to form longish chains along the diagonals, and the numbers along a given diagonal are the values of a quadratic polynomial with integer coefficients.

[Image: Ulam spiral showing the numbers from 1 to 400 in a square spiral, with all the prime numbers highlighted. (Ulam Spiral by SplatBang)]

Surprisingly, this pattern persists for large numbers. Note also that the pattern is a feature of spirals that need not begin with 1. For example, the values of the polynomial x^2+x+41 form a diagonal pattern on a spiral beginning with 41.
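(As a quick experiment along these lines, the snippet below looks at Euler’s polynomial x^2+x+41, whose values sit on such a diagonal: the first 40 values are all prime, and a large fraction of the first 100 values are prime as well. sympy’s isprime does the primality testing.)

```python
# How often is n^2 + n + 41 prime among the first 100 values of n?
from sympy import isprime

values = [n * n + n + 41 for n in range(100)]
primes = [v for v in values if isprime(v)]

print(f"{len(primes)} of the first 100 values of n^2 + n + 41 are prime")

# the first 40 values (n = 0, ..., 39) are all prime
assert all(isprime(n * n + n + 41) for n in range(40))
```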

 

Repelling Numbers


An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:

The zeros of L-functions repel each other.

Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:

This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.

I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:

 

In this picture, the colour of a point indicates the degree of the polynomial of which it is a root: red represents the roots of linear polynomials (i.e. rational numbers), green the roots of quadratic polynomials, blue the roots of cubic polynomials, yellow the roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest polynomial with integer coefficients of which it is a root, where the complexity is the sum of the absolute values of the coefficients of that polynomial.
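(For the curious, here is a rough sketch of how a picture like this can be generated; it is my own reconstruction, not David Moore’s actual program. The degree bound, the complexity bound, the colour palette and the exact rate at which the marker size shrinks are all arbitrary choices made for illustration.)

```python
# Plot roots of integer polynomials of low complexity: colour indicates the
# degree, marker size shrinks exponentially with the complexity
# (= sum of absolute values of the coefficients).
import itertools
import numpy as np
import matplotlib.pyplot as plt

MAX_DEGREE = 4
MAX_COMPLEXITY = 7
COLOURS = {1: "red", 2: "green", 3: "blue", 4: "yellow"}

xs, ys, sizes, cols = [], [], [], []
for degree in range(1, MAX_DEGREE + 1):
    coeff_range = range(-MAX_COMPLEXITY, MAX_COMPLEXITY + 1)
    for coeffs in itertools.product(coeff_range, repeat=degree + 1):
        if coeffs[0] == 0:                       # leading coefficient must be nonzero
            continue
        complexity = sum(abs(c) for c in coeffs)
        if complexity > MAX_COMPLEXITY:
            continue
        for root in np.roots(coeffs):            # roots of this polynomial
            xs.append(root.real)
            ys.append(root.imag)
            sizes.append(30 * 0.5 ** (complexity - 2))
            cols.append(COLOURS[degree])

plt.scatter(xs, ys, s=sizes, c=cols, alpha=0.5, linewidths=0)
plt.gca().set_aspect("equal")
plt.title("Roots of integer polynomials of low complexity")
plt.show()
```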

Moreover,  John Baez comments in his blog post that:

There are many patterns in this picture that call for an explanation! For example, look near the point i. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.

To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with the following GIF from Reddit (click on it for details):

Arithmetic Operations


There are only 4 binary operations which we call “arithmetic operations”. These are:

  • Addition (+)
  • Subtraction (-)
  • Multiplication (×)
  • Division (÷)

Reading this fact, an obvious question is:

Why are only four out of the infinitely many possible binary operations said to be arithmetical?

Before presenting my attempt to answer this question, I would like to remind you that these are the operations you were taught when you first learnt about numbers, i.e. arithmetic.

In high school when \sqrt{2} is introduced, we are told that real numbers are of two types: “rational” and “irrational”. Then in college when \sqrt{-1} is introduced, we should be told that complex numbers are of two types: “algebraic” and “transcendental“.

As I have commented before, there are various number systems, and for each number system we have some valid arithmetical operations leading to a valid algebraic structure. So, these 4 operations are entitled to be called arithmetic operations because, when applied to algebraic numbers, they always produce algebraic numbers again (something that, as the caution at the end of this post shows, fails for an arbitrary binary operation).
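(This closure property is easy to witness with a computer algebra system: for each of the combinations below, sympy’s minimal_polynomial finds a polynomial with integer coefficients having that number as a root. The particular examples are arbitrary.)

```python
# Sums, differences, products and quotients of algebraic numbers are again
# algebraic: sympy exhibits an integer polynomial satisfied by each result.
from sympy import sqrt, cbrt, minimal_polynomial, Symbol

x = Symbol('x')

examples = [
    sqrt(2) + sqrt(3),           # sum
    sqrt(2) - cbrt(5),           # difference
    sqrt(2) * (1 + sqrt(5)),     # product
    sqrt(3) / (1 + sqrt(2)),     # quotient
]

for a in examples:
    print(a, "is a root of", minimal_polynomial(a, x))

# e.g. sqrt(2) + sqrt(3) is a root of x**4 - 10*x**2 + 1
```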

Now this leads to another obvious question:

Why are we so concerned about algebraic numbers?

To answer this question, we have to look at the motivation behind the construction of the various number systems: integers, rationals, irrationals, complex numbers… These number systems were constructed largely out of our need to solve polynomial equations of various degrees (linear, quadratic, cubic…). And the Fundamental Theorem of Algebra says:

Every polynomial with rational coefficients and of degree n in the variable x has n roots (counted with multiplicity) in the complex number system.

But here is a catch. There are far more complex numbers which do not satisfy any polynomial equation with rational coefficients (these are called transcendental numbers) than complex numbers which do (called algebraic numbers). And we wish to express the solutions of a polynomial equation (i.e. algebraic numbers) in terms of sums, differences, products, quotients and m^{th} roots of rational numbers (since the coefficients are rational numbers). Therefore, sum, difference, product and division are the only 4 possible arithmetic operations.

My previous statement may lead to a doubt that:

Why taking m^{th} root isn’t an arithmetic operation?

This is because it isn’t a binary operation to start with, since m is fixed in advance. Also, taking m^{th} roots is permissible because it is just the inverse of repeated multiplication.

CAUTION: The inverse of taking an m^{th} root is multiplying a number by itself m times, which is obviously allowed. But this doesn’t make the binary operation of taking exponents, \alpha^{\beta} where \alpha and \beta are algebraic numbers, an arithmetic operation. For example, 2^{\sqrt{2}} is transcendental (it is called the Gelfond–Schneider constant or Hilbert number) even though 2 and \sqrt{2} are algebraic.