Tag Archives: algebra

Polynomials and Commutativity


In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree n with integer coefficients has exactly n complex roots (counted with appropriate multiplicity).

In high school, a polynomial meant a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory]:

Every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots.

A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:

Let R be a commutative ring with identity and let p(x)\in R[x] be a polynomial with coefficients in R. The element a\in R is a root of p(x) if and only if (x-a) divides p(x).

A corollary of the above theorem is that:

A polynomial f of degree n over a field F has at most n roots in F.

(In case you know undergraduate level algebra, recall that R[x] is a Principal Ideal Domain if and only if R is a field.)
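To make the factor theorem concrete, here is a small illustrative sketch (the sample cubic and the `synthetic_division` helper are my own choices, not part of the theorem's proof): dividing p(x) by (x - a) leaves remainder p(a), so the remainder vanishes exactly when a is a root.

```python
def synthetic_division(coeffs, a):
    """Divide p(x) by (x - a) via synthetic division.

    coeffs lists p's coefficients from highest degree to lowest;
    returns (quotient coefficients, remainder). The remainder equals p(a).
    """
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + a * q[-1])
    return q[:-1], q[-1]

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
p = [1, -6, 11, -6]
for a in (1, 2, 3):
    _, remainder = synthetic_division(p, a)
    assert remainder == 0      # a is a root  <=>  (x - a) divides p(x)

_, remainder = synthetic_division(p, 4)
assert remainder == 6          # p(4) = 6, so (x - 4) does not divide p(x)
```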

The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and \mathbb{C} is a field, hence a commutative ring). The key step in the proofs of all the above theorems is that the division algorithm holds only in certain special commutative rings (like fields). I would like to illustrate my point with the following fact:

The equation X^2 + X + 1 = 0 has only 2 complex roots, namely \omega = \frac{-1+i\sqrt{3}}{2} and \omega^2 = \frac{-1-i\sqrt{3}}{2}. But if we look for solutions among 2×2 matrices (a non-commutative ring), then we have at least 3 solutions (interpreting 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix):

\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}

if we allow complex entries. This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
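As a quick sanity check, here is a minimal NumPy sketch (the matrices are the ones displayed above) verifying that A, B and C really satisfy X^2 + X + 1 = 0:

```python
import numpy as np

# The matrices from the post: A has real entries, B and C use ω = (-1 + i√3)/2.
omega = (-1 + 1j * np.sqrt(3)) / 2
A = np.array([[0, -1], [1, -1]], dtype=complex)
B = np.diag([omega, omega**2])
C = np.diag([omega**2, omega])
I = np.eye(2)

for M in (A, B, C):
    # p(M) = M^2 + M + I should be the 2x2 zero matrix
    assert np.allclose(M @ M + M + I, np.zeros((2, 2)))
```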


Intra-mathematical Dependencies


Recently I completed all of my undergraduate level maths courses, so I wanted to sum up my understanding of mathematics in the following dependency diagram:

[Image: mathematics dependency diagram]

I imagine this like a wall, where each topic is a brick. You can bake different bricks at different times (i.e. follow your curriculum to learn these topics), but finally, this is how they should be arranged (in my opinion) to get the best possible understanding of mathematics.

As of now, I have an “elementary” knowledge of Set Theory, Algebra, Analysis, Topology, Geometry, Probability Theory, Combinatorics and Arithmetic. Unfortunately, in India, there are no undergraduate level courses in Mathematical Logic and Category Theory.

This post can be seen as a sequel to my “Mathematical Relations” post.

In the praise of norm


If you have spent some time with undergraduate mathematics, you have probably heard the word “norm”. This term is encountered in various branches of mathematics (Wikipedia lists several of them), but all of these uses seem to occur within abstract algebra. Although the definition of the term is always algebraic, it acquires a topological interpretation when we work with vector spaces: by satisfying the conditions of a metric, a norm secretly connects a vector space to a metric space, where we can study limits and differentiation. This point of view, along with an inner product structure, is explored when we study functional analysis.

Some facts to remember:

  1. Every vector space has a norm (assuming “Axiom of Choice”). [Proof]
  2. Every vector space has an inner product (assuming “Axiom of Choice”). [Proof]
  3. An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space.  [Proof]
  4. All norms are equivalent in finite dimensional vector spaces. [Proof]
  5. Every normed vector space is a metric space (and NOT vice versa). [Proof]
  6. In general, a vector space is NOT the same as a metric space. [Proof]
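Facts 3, 4 and 5 can be checked numerically; here is a minimal NumPy sketch in \mathbb{R}^5 (the random vectors and the particular equivalence constants are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

# Fact 3: an inner product induces a norm, ||x|| = sqrt(<x, x>).
assert np.isclose(np.linalg.norm(x), np.sqrt(np.dot(x, x)))

# Fact 5: a norm induces a metric d(u, v) = ||u - v||,
# which in particular obeys the triangle inequality.
d = lambda u, v: np.linalg.norm(u - v)
assert d(x, y) <= d(x, 0 * x) + d(0 * y, y)

# Fact 4: in a finite-dimensional space all norms are equivalent,
# e.g. ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2 in R^n.
one, two = np.linalg.norm(x, 1), np.linalg.norm(x, 2)
assert two <= one <= np.sqrt(x.size) * two
```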

Real vs Complex numbers


I want to talk about the algebraic and analytic differences between real and complex numbers. Firstly, let’s have a look at the following beautiful explanation by Richard Feynman (from his QED lectures) about similarities between real and complex numbers:


From Chapter 2 of the book “QED – The Strange Theory of Light and Matter” © Richard P. Feynman, 1985.

Before reading this explanation, I used to believe that the need to establish the “Fundamental Theorem of Algebra” (read this beautiful paper by Daniel J. Velleman to learn about a proof of this theorem) was the only way to motivate the study of complex numbers.

The fundamental difference between real and complex numbers is

Real numbers form an ordered field, but complex numbers can’t form an ordered field. [Proof]

Where we define ordered field as follows:

Let \mathbf{F} be a field. Suppose that there is a set \mathcal{P} \subset \mathbf{F} which satisfies the following properties:

  • For each x \in \mathbf{F}, exactly one of the following statements holds: x \in \mathcal{P}, -x \in \mathcal{P}, x =0.
  • For x,y \in \mathcal{P}, xy \in \mathcal{P} and x+y \in \mathcal{P}.

If such a \mathcal{P} exists, then \mathbf{F} is an ordered field. Moreover, we define x \le y \Leftrightarrow y -x \in \mathcal{P} \vee x = y.
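For completeness, here is a standard one-paragraph sketch (my summary, not the linked proof) of why no such \mathcal{P} can exist for \mathbb{C}:

```latex
Suppose such a set $\mathcal{P} \subset \mathbb{C}$ existed. Since $i \neq 0$,
trichotomy forces $i \in \mathcal{P}$ or $-i \in \mathcal{P}$. In either case,
closure under products gives
\[
  (\pm i)(\pm i) = -1 \in \mathcal{P}, \qquad (-1)(-1) = 1 \in \mathcal{P},
\]
so both $-1$ and $1 = -(-1)$ lie in $\mathcal{P}$, contradicting trichotomy.
```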

Note that we CAN define a total order on the set of complex numbers [Proof], but such an order is not compatible with the field structure, and hence is useless. I find this consequence pretty interesting: \mathbb{R} and \mathbb{C} are isomorphic as additive groups (and as vector spaces over \mathbb{Q}), but not isomorphic as rings (and hence not isomorphic as fields).

Now let’s have a look at the consequence of the difference between the two number systems due to the order structure.

Though both the real and the complex numbers form complete fields (completeness being a property of the metric, not of the field structure), only the real numbers have the least upper bound property.

Where we define least upper bound property as follows:

Let \mathcal{S} be a non-empty set of real numbers.

  • A real number x is called an upper bound for \mathcal{S} if x \geq s for all s\in \mathcal{S}.
  • A real number x is the least upper bound (or supremum) for \mathcal{S} if x is an upper bound for \mathcal{S} and x \leq y for every upper bound y of \mathcal{S} .

The least-upper-bound property states that any non-empty set of real numbers that has an upper bound must have a least upper bound in real numbers.
This least upper bound property is referred to as Dedekind completeness. Therefore, though both \mathbb{R} and \mathbb{C} are complete as metric spaces [proof], only \mathbb{R} is Dedekind complete.
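As an illustration (a bisection sketch of my own, not from the original post): the supremum of \{x : x^2 < 2\} can be hunted down by repeated halving precisely because \mathbb{R} has the least-upper-bound property; inside \mathbb{Q} the same set has upper bounds but no least one.

```python
# Bisection hunt for sup S, where S = {x : x^2 < 2}.
# Each step keeps `hi` an upper bound of S and `lo` a member of S;
# the nested intervals shrink onto the least upper bound, sqrt(2).
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid      # mid lies in S, so it is not yet an upper bound
    else:
        hi = mid      # mid is an upper bound of S; tighten it
assert abs(hi - 2 ** 0.5) < 1e-12
```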

In an arbitrary ordered field one has the notion of Dedekind completeness — every nonempty bounded above subset has a least upper bound — and also the notion of sequential completeness — every Cauchy sequence converges. The main theorem relating these two notions of completeness is as follows [source]:

For an ordered field \mathbf{F}, the following are equivalent:
(i) \mathbf{F} is Dedekind complete.
(ii) \mathbf{F} is sequentially complete and Archimedean.

Where we define an Archimedean field as an ordered field in which, for each element, there exists a finite sum 1+1+\ldots+1 whose value is greater than that element; that is, there are no infinite elements.

As remarked earlier, \mathbb{C} is not an ordered field and hence can’t be Archimedean. Therefore, \mathbb{C} can’t have the least-upper-bound property, though it’s complete in the topological sense. So, the consequence of all this is:

We can’t use complex numbers for counting.

But still, complex numbers are a very important part of modern arithmetic (number theory), because they enable us to view properties of numbers from a geometric point of view [source].

What is Algebra?


About 8 months ago I wrote about Analysis:

Thus, algebraic approximations produced the algebra of inequalities. The application of Algebra of inequalities lead to concept of Approximations in Calculus.


You may have seen/heard this quote several times…

Now the time has come to understand the term “algebra” itself, which has a very rich history and a dynamic present. I will use the following classification (influenced by Shreeram Abhyankar) of algebra into 3 levels:

  1. High School Algebra (HSA)
  2. College Algebra (CA)
  3. University Algebra (UA)

HSA (8th Century – 16th Century) is all about learning tricks and manipulations to solve mensuration problems, which involve solving linear, quadratic and “special” cubic equations for real (or rational) numbers. This level was developed by Muḥammad ibn Mūsā al-Khwārizmī, Thābit ibn Qurra, Omar Khayyám, Leonardo Pisano (Fibonacci), Maestro Dardi of Pisa, Scipione del Ferro, Niccolò Fontana (Tartaglia), Gerolamo Cardano, Lodovico Ferrari and Rafael Bombelli.

CA (18th Century – 19th Century) is commonly known as abstract algebra. Its development was motivated by the failure of HSA to solve the general equations of degree higher than the fourth; later on, the study of symmetry of equations, geometric objects, etc. became one of the central topics of interest. In this we study properties of various algebraic structures like fields, linear spaces, groups, rings and modules. This level was developed by Joseph-Louis Lagrange, Paolo Ruffini, Pietro Abbati Marescotti, Niels Abel, Évariste Galois, Augustin-Louis Cauchy, Arthur Cayley, Ludwig Sylow, Camille Jordan, Otto Hölder, Carl Friedrich Gauss, Leonhard Euler, William Rowan Hamilton, Hermann Grassmann, Heinrich Weber, Emmy Noether and Abraham Fraenkel.

UA (19th Century – present) has derived its motivations from many diverse subjects of study in mathematics, like Number Theory, Geometry and Analysis. In this level of study, the term “algebra” itself has a different meaning:

An algebra over a field is a vector space (a module over a field) equipped with a bilinear product.

and topics are named like Commutative Algebra, Lie Algebra and so on. This level was initially developed by Benjamin Peirce, Georg Frobenius, Richard Dedekind, Karl Weierstrass, Élie Cartan, Theodor Molien, Sophus Lie, Joseph Wedderburn, Max Noether, Leopold Kronecker, David Hilbert, Francis Macaulay, Emanuel Lasker, James Joseph Sylvester, Paul Gordan, Emil Artin, Kurt Hensel, Ernst Steinitz, Otto Schreier ….
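As a concrete instance of this definition (an illustrative NumPy sketch; the random matrices and scalars are my own choices), the 2×2 real matrices form an algebra over \mathbb{R} because matrix multiplication is bilinear:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))
a, b = 2.0, -3.0

# Matrix multiplication is a bilinear product on the vector space of
# 2x2 real matrices, so they form an (associative) algebra over R.
assert np.allclose((a * A + b * B) @ C, a * (A @ C) + b * (B @ C))  # left slot
assert np.allclose(A @ (a * B + b * C), a * (A @ B) + b * (A @ C))  # right slot
```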

Since algebra happens to be a fast-developing research area, the above classification is valid only for this moment. Also note that, though Emmy Noether was the daughter of Max Noether, I have included the contributions of Emmy in CA and those of Max in UA. The list of contributors is not exhaustive.


[1] van der Waerden, B. L. A History of Algebra. Berlin and Heidelberg: Springer-Verlag, 1985. doi: 10.1007/978-3-642-51599-6

[2] Kleiner, I. A History of Abstract Algebra. Boston: Birkhäuser, 2007. doi: 10.1007/978-0-8176-4685-1

[3] Burns, J. E. “The Foundation Period in the History of Group Theory.” American Mathematical Monthly 20 (1913), 141–148. doi: 10.2307/2972411