1. Let any closed curve K be given. Draw any line l and a line l’ parallel to l, as shown in fig 1.

2. Move the lines l and l’ closer to K till they just touch the curve K, as shown in fig 2. Let the new lines be line m and line m’. Call these lines the support lines of curve K with respect to line l.

3. Draw a line l* perpendicular to l and a line (l*)’ parallel to l*. Draw the support lines of curve K with respect to line l*, as shown in fig 3. Let the rectangle formed by the four support lines be ABCD.

4. The rectangle corresponding to a line becomes a square when AB and AD are equal. Let the length of the side parallel to l (which is AB) be $a(l)$ and the length of the side perpendicular to l (which is AD) be $b(l)$. For a given line n, define a real-valued function $f(n) = a(n) - b(n)$ on the set of lines lying outside the curve K. Now rotate the line l in an anti-clockwise direction till l coincides with l*. The rectangle corresponding to l* will also be ABCD (same as that with respect to l). When l coincides with l*, we can say that $a(l^*) = b(l)$ and $b(l^*) = a(l)$.

5. We can see that when the line is l, the value is $f(l) = a(l) - b(l)$. When we rotate l in an anti-clockwise direction, the value of the function f changes continuously, i.e. f is a continuous function (I do not know how to “prove” that this is a continuous function, but it’s intuitively clear to me; if you have a proof, please mention it in the comments). When l coincides with l*, the value is $f(l^*) = a(l^*) - b(l^*)$. Since $a(l^*) = b(l)$ and $b(l^*) = a(l)$, we have $f(l^*) = -f(l)$. So f is a continuous function which changes sign as the line is rotated from l to l*. Since f is a continuous function, using a generalization of the intermediate value theorem we can show that there exists a line p between l and l* such that $f(p) = 0$, i.e. AB = AD. So the rectangle corresponding to line p will be a square.

Hence every closed curve K can be circumscribed by a square.
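The sign-change argument above can be illustrated numerically. The sketch below assumes (purely for illustration) that K is an ellipse, whose width between its two support lines perpendicular to a direction $\theta$ has a standard closed form; bisection then locates the direction where the circumscribing rectangle becomes a square.

```python
import math

# Illustration: take K to be the ellipse x^2/a^2 + y^2/b^2 = 1 (a = 2, b = 1).
# The width of the band between the two support lines perpendicular to the
# direction theta is given by the support-function formula below
# (an assumption specific to the ellipse, not part of the general proof).
a, b = 2.0, 1.0

def width(theta):
    return 2 * math.sqrt(a**2 * math.cos(theta)**2 + b**2 * math.sin(theta)**2)

def f(theta):
    # f = AB - AD: the difference of the two side lengths of the
    # circumscribing rectangle for direction theta.
    return width(theta) - width(theta + math.pi / 2)

# f(0) = 2a - 2b > 0 and f(pi/2) = 2b - 2a < 0, so by the intermediate
# value theorem f vanishes somewhere in between; locate it by bisection.
lo, hi = 0.0, math.pi / 2
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta_square = (lo + hi) / 2  # direction for which AB = AD, i.e. a square
```

For the ellipse the answer is $\theta = \pi/4$ by symmetry, which the bisection recovers.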

In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree $n$ with integer coefficients has exactly $n$ complex roots (with appropriate multiplicity).

In high school, a polynomial meant a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory, respectively]:

Every non-zero, single-variable, degree $n$ polynomial with complex coefficients has, counted with multiplicity, exactly $n$ complex roots.

A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:

Let $R$ be a commutative ring with identity and let $p(x)$ be a polynomial with coefficients in $R$. The element $a \in R$ is a root of $p(x)$ if and only if $(x - a)$ divides $p(x)$.

A polynomial of degree $n$ over a field $F$ has at most $n$ roots in $F$.

(In case you know undergraduate level algebra, recall that $R[x]$ is a Principal Ideal Domain if and only if $R$ is a field.)
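Since the Factor Theorem rests on the division algorithm, it can be illustrated mechanically: dividing $p(x)$ by $(x - a)$ via Horner's scheme leaves remainder $p(a)$, which vanishes exactly when $a$ is a root. A minimal sketch (the cubic chosen here is just an example):

```python
def divide_by_linear(coeffs, a):
    """Divide p(x) by (x - a) using Horner's scheme.

    coeffs lists p's coefficients from highest degree down.
    Returns (quotient coefficients, remainder); the remainder equals p(a),
    so it is 0 exactly when a is a root -- the Factor Theorem.
    """
    quotient = []
    acc = 0
    for c in coeffs:
        acc = acc * a + c
        quotient.append(acc)
    return quotient[:-1], quotient[-1]

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
q, r = divide_by_linear([1, -6, 11, -6], 1)
# r == 0 since 1 is a root; q gives the quotient x^2 - 5x + 6
```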

The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and $\mathbb{C}$ is a field, hence a commutative ring). The key step in the proofs of all the above theorems is the fact that the division algorithm holds only in some special commutative rings (like fields). I would like to illustrate my point with the following fact:

The equation $x^2 = 1$ has only 2 complex roots, namely $1$ and $-1$. But if we want solutions over 2×2 matrices (a non-commutative ring), then we have at least 3 solutions (consider 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix), and in fact infinitely many if we allow complex entries.

This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
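A quick sketch verifying the matrix claim, using plain 2×2 multiplication (no libraries; the third matrix below is one of many extra roots with no analogue in a field):

```python
def matmul2(X, Y):
    # Multiply two 2x2 matrices given as nested lists.
    return [
        [X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
        [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]],
    ]

I = [[1, 0], [0, 1]]

# Three distinct solutions of X^2 = I over 2x2 real matrices:
solutions = [
    [[1, 0], [0, 1]],    # the identity, playing the role of "1"
    [[-1, 0], [0, -1]],  # playing the role of "-1"
    [[1, 0], [0, -1]],   # an extra root: a reflection, squaring to I
]
checks = [matmul2(X, X) == I for X in solutions]
```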

Recently I completed all of my undergraduate level maths courses, so I wanted to sum up my understanding of mathematics in the following dependency diagram:

I imagine this like a wall, where each topic is a brick. You can bake different bricks at different times (i.e. follow your curriculum to learn these topics), but finally, this is how they should be arranged (in my opinion) to get the best possible understanding of mathematics.

If you have spent some time with undergraduate mathematics, you have probably heard the word “norm”. This term is encountered in various branches of mathematics, like (as per Wikipedia):

But it seems to occur only in abstract algebra. Although the definition of this term is always algebraic, it has a topological interpretation when we are working with vector spaces. By satisfying the conditions of a metric, it secretly connects a vector space to a topological space in which we can study differentiation (a metric space). This point of view, along with an inner product structure, is explored when we study functional analysis.
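The norm-to-metric connection described above can be made concrete: define $d(x, y) = \lVert x - y \rVert$ and spot-check the metric axioms. A minimal sketch with the usual Euclidean norm (the sample vectors are arbitrary):

```python
import math

def norm(v):
    # Euclidean norm on R^n.
    return math.sqrt(sum(x * x for x in v))

def dist(u, v):
    # The metric induced by the norm: d(u, v) = ||u - v||.
    return norm([a - b for a, b in zip(u, v)])

# Spot-check the metric axioms on sample vectors.
u, v, w = [1.0, 2.0], [4.0, 6.0], [-3.0, 0.5]
symmetric = math.isclose(dist(u, v), dist(v, u))
triangle = dist(u, w) <= dist(u, v) + dist(v, w) + 1e-12
positive = dist(u, v) > 0 and dist(u, u) == 0
```

Checking finitely many points proves nothing, of course; the point is only that the algebraic norm axioms are exactly what make $d$ a metric.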

I want to talk about the algebraic and analytic differences between real and complex numbers. First, let’s have a look at the following beautiful explanation by Richard Feynman (from his QED lectures) about the similarities between real and complex numbers:

Before reading this explanation, I used to believe that the need to establish the “Fundamental Theorem of Algebra” (read this beautiful paper by Daniel J. Velleman to learn about a proof of this theorem) was the only way to motivate the study of complex numbers.

The fundamental difference between real and complex numbers is:

Real numbers form an ordered field, but complex numbers can’t form an ordered field. [Proof]

Where we define ordered field as follows:

Let $F$ be a field. Suppose that there is a set $P \subset F$ which satisfies the following properties:

For each $x \in F$, exactly one of the following statements holds: $x = 0$, $x \in P$, $-x \in P$.

For $x, y \in P$, $x + y \in P$ and $xy \in P$.

If such a $P$ exists, then $F$ is an ordered field. Moreover, we define $x > y \iff x - y \in P$.

Note that, without retaining the vector space structure of complex numbers, we CAN establish an order for complex numbers [Proof], but that is useless. I find this consequence pretty interesting: though $\mathbb{R}^2$ and $\mathbb{C}$ are isomorphic as additive groups (and as vector spaces over $\mathbb{R}$), they are not isomorphic as rings (and hence not isomorphic as fields).
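For completeness, the standard argument behind the linked [Proof] can be sketched directly from the definition above: any candidate set $P$ collapses on the element $i$.

```latex
Suppose $P \subset \mathbb{C}$ satisfies the two axioms. Since $i \neq 0$,
either $i \in P$ or $-i \in P$. In both cases closure under multiplication
gives $i \cdot i = (-i)(-i) = -1 \in P$, and then $(-1)(-1) = 1 \in P$.
But now $1 \in P$ and $-1 \in P$ simultaneously, contradicting the
trichotomy axiom applied to $x = 1$. Hence no such $P$ exists.
```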

Now let’s have a look at the consequence of the difference between the two number systems due to the order structure.

Though both real and complex numbers form a complete field (completeness being a property of metric/topological spaces), only the real numbers have the least upper bound property.

Where we define least upper bound property as follows:

Let $S$ be a non-empty set of real numbers.

A real number $u$ is called an upper bound for $S$ if $u \geq x$ for all $x \in S$.

A real number $s$ is the least upper bound (or supremum) for $S$ if $s$ is an upper bound for $S$ and $s \leq u$ for every upper bound $u$ of $S$.

The least-upper-bound property states that any non-empty set of real numbers that has an upper bound must have a least upper bound in real numbers.
This least upper bound property is referred to as Dedekind completeness. Therefore, though both $\mathbb{R}$ and $\mathbb{C}$ are complete as metric spaces [proof], only $\mathbb{R}$ is Dedekind complete.
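To see why the least-upper-bound property is nontrivial, contrast $\mathbb{R}$ with $\mathbb{Q}$: the set $\{x \in \mathbb{Q} : x^2 < 2\}$ is bounded above but its supremum, $\sqrt{2}$, is not rational. A bisection sketch locating that supremum inside $\mathbb{R}$:

```python
# The set S = {x in Q : x^2 < 2} is bounded above (e.g. by 2) but has no
# least upper bound in Q; in R its supremum is sqrt(2). Bisection narrows
# in on that supremum from inside and outside S.
lo, hi = 1.0, 2.0   # lo is in S (1^2 < 2), hi is an upper bound (2^2 > 2)
for _ in range(50):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid    # mid is in S, so the supremum is at least mid
    else:
        hi = mid    # mid is an upper bound for S
sup_S = (lo + hi) / 2
```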

In an arbitrary ordered field one has the notion of Dedekind completeness — every nonempty bounded above subset has a least upper bound — and also the notion of sequential completeness — every Cauchy sequence converges. The main theorem relating these two notions of completeness is as follows [source]:

For an ordered field $F$, the following are equivalent:
(i) $F$ is Dedekind complete.
(ii) $F$ is sequentially complete and Archimedean.

Where we defined an Archimedean field as an ordered field such that for each element there exists a finite expression $1 + 1 + \cdots + 1$ whose value is greater than that element; that is, there are no infinite elements.

As remarked earlier, $\mathbb{C}$ is not an ordered field and hence can’t be Archimedean. Therefore, $\mathbb{C}$ can’t have the least-upper-bound property, though it is complete in the topological sense. So, the consequence of all this is:

We can’t use complex numbers for counting.

But still, complex numbers are a very important part of modern arithmetic (number theory), because they enable us to view properties of numbers from a geometric point of view [source].

In several of my previous posts I have mentioned the word “dimension”. Recently I realized that dimension can be of two types, as pointed out by Bernhard Riemann in his famous lecture of 1854. Let me quote Donal O’Shea from p. 99 of his book “The Poincaré Conjecture”:

Continuous spaces can have any dimension, and can even be infinite dimensional. One needs to distinguish between the notion of a space and a space with a geometry. The same space can have different geometries. A geometry is an additional structure on a space. Nowadays, we say that one must distinguish between topology and geometry.

[Here by the term “space(s)” the author means “topological space”]

In mathematics, the word “dimension” can have different meanings. But, broadly speaking, there are only three different ways of defining/thinking about “dimension”:

Dimension of Vector Space: It’s the number of elements in a basis of the vector space. This is the sense in which the term dimension is used in geometry (while doing calculus) and algebra. For example:

A circle is a two dimensional object since we need a two dimensional vector space (i.e. two coordinates) to write it. In general, this is how we define dimension for Euclidean space (which is an affine space, i.e. what is left of a vector space after you’ve forgotten which point is the origin).

Dimension of a differentiable manifold is the dimension of its tangent vector space at any point.

Dimension of a variety (an algebraic object) is the dimension of its tangent vector space at any regular point. Krull dimension is remotely motivated by the idea of the dimension of vector spaces.

Dimension of Topological Space: It’s the smallest integer that is somehow related to open sets in the given topological space. In contrast to a basis of a vector space, a basis of a topological space need not be maximal; indeed, the only maximal base is the topology itself. Moreover, dimension in this case can be defined using the “Lebesgue covering dimension” or, in some nice cases, the “inductive dimension”. This is the sense in which the term dimension is used in topology. For example:

A circle is one dimensional object and a disc is two dimensional by topological definition of dimension.

Two spaces are said to have the same dimension if and only if there exists a homeomorphism (a continuous bijection with a continuous inverse) between them. Due to this, a curve and a plane have different dimensions even though curves can fill space. Space-filling curves are special cases of fractal constructions. No differentiable space-filling curve can exist; roughly speaking, differentiability puts a bound on how fast the curve can turn.

Fractal Dimension: It’s a notion designed to study complex sets/structures like fractals, and it allows objects to have non-integer dimensions. Its definition lies in between those of the dimension of vector spaces and of topological spaces. It can be defined in various similar ways. The most common way is to define it as the Hausdorff dimension, which comes from the Hausdorff measure on a metric space (measure theory enables us to integrate a function without worrying about its smoothness, and the defining property of fractals is that they are NOT smooth). This sense of dimension is used in very specific cases. For example:

A curve with fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface.

The fractal dimension of the Koch curve is $\log 4 / \log 3 \approx 1.26$, but its topological dimension is 1 (just like the space-filling curves). The Koch curve is continuous everywhere but differentiable nowhere.

The fractal dimension of space-filling curves is 2, but their topological dimension is 1. [source]

A surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume.
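For strictly self-similar sets, the fractal dimension above can be computed from the similarity dimension: a set built from $N$ copies of itself, each scaled by a factor $r$, has dimension $\log N / \log(1/r)$. A sketch:

```python
import math

def similarity_dimension(copies, scale):
    """Dimension of a self-similar set built from `copies` pieces,
    each scaled by the factor `scale` (0 < scale < 1)."""
    return math.log(copies) / math.log(1 / scale)

# Koch curve: each segment is replaced by 4 segments of 1/3 the length.
koch = similarity_dimension(4, 1/3)    # log 4 / log 3, roughly 1.26
# A straight line segment: 3 copies scaled by 1/3 -> dimension 1.
line = similarity_dimension(3, 1/3)
# A filled square: 4 copies scaled by 1/2 -> dimension 2.
square = similarity_dimension(4, 1/2)
```

The line and square sanity checks show the formula agreeing with the ordinary vector-space dimension on non-fractal sets.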

This simple observation has very interesting consequences. For example, consider the following statement from p. 167 of the book “The Poincaré Conjecture” by Donal O’Shea:

… there are infinitely many incompatible ways of doing calculus in four-space. This contrasts with every other dimension…

This leads to a natural question:

Why is it difficult to develop calculus for any $\mathbb{R}^n$ in general?

Actually, if we consider $\mathbb{R}^n$ as a vector space then developing calculus is not a big deal (as done in multivariable calculus). But if we consider $\mathbb{R}^n$ as a topological space, then it becomes a challenging task due to the lack of the required algebraic structure on the space. So, Donal O’Shea is actually pointing to the fact that doing calculus on differentiable manifolds in $\mathbb{R}^4$ is difficult. And this is because we are considering $\mathbb{R}^4$ as a 4-dimensional topological space.

Now, I will end this post by pointing out how the definition of dimension should be read in my older posts:

Dimension ≡ Dimension of the underlying vector space

A couple of years ago, I was introduced to topology via the proof of Euler’s polyhedron formula given in the book “What is Mathematics?” by Richard Courant and Herbert Robbins. Then I got attracted towards topology by reading the book “Euler’s Gem – The Polyhedron Formula and the Birth of Topology” by David S. Richeson. But now, after doing a semester course on “introduction to topology”, I have realized that all this was a lie. These books were not presenting the real picture of the subject; they were presenting just the motivational pictures. For example, this is my favourite video introduction to topology, by Tadashi Tokieda (though it doesn’t give the true picture):

A few months ago I read the book “The Poincaré Conjecture” by Donal O’Shea and it gave an honest picture of algebraic topology. But then I realized that half of my textbook on topology is about point-set topology (while the other half is about algebraic topology). This part of topology has no torus or Möbius strip (check out this photo) but rather dry set-theoretic arguments. So I decided to dig deeper into what topology is really all about. Is it just fancy graph theory (both topology and graph theory trace back to Euler: graph theory to the Königsberg bridges problem of 1736, topology to the polyhedron formula), or is it a new form of geometry which we study using set theory, algebra and analysis?

The subject of topology itself consists of several different branches, such as:

Point-Set topology

Algebraic topology

Differential topology

Geometric topology

Point-set topology grew out of analysis, following Cauchy’s contribution to the foundations of analysis and, in particular, the trigonometric representation of a function (Fourier series). In 1872, Georg Cantor desired a more solid foundation for standard operations (addition, etc.) performed on the real numbers. To this end, he defined a Cauchy sequence of rational numbers. He created a bijection between the number line and the possible limits of sequences of rational numbers. He took the converse, that “the geometry of the straight line is complete,” as an axiom (note that thinking of points on the real line as limits of sequences of rational numbers is “for clarity” and not essential to what he is doing). Then Cantor proved the following theorem:

If there is an equation of the form $0 = C_0 + C_1 + \cdots + C_n + \cdots$, where $C_0 = \frac{1}{2}d_0$ and $C_n = c_n \sin nx + d_n \cos nx$, for all values of $x$ except those which correspond to points in the interval $(0, 2\pi)$ that give a point set $P$ of the $\nu$th kind, where $\nu$ signifies any large number, then $d_0 = 0$ and $c_n = d_n = 0$.

This theorem led to the definition of a point set as a finite or infinite set of points, which in turn led to the definitions of cluster point, derived set, … and the whole of an introductory course in topology. Modern mathematics tends to view the term “point-set” as synonymous with “open set.” Here I would like to quote James Munkres (from the point-set topology part of my textbook):

A problem of fundamental importance in topology is to find conditions on a topological space that will guarantee that it is metrizable…. Although the metrization problem is an important problem in topology, the study of metric spaces as such does not properly belong to topology as much as it does to analysis.

Now, what is generally publicised to be “the topology” is actually algebraic topology. This aspect of topology is indeed beautiful. It led to concepts like the fundamental group, which are inseparable from modern topology. In 1895, Henri Poincaré topologized Euler’s proof of the polyhedron formula, leading to what we today call the Euler characteristic. This marked the beginning of what we now call algebraic topology.

For a long time, differential geometry and algebraic topology remained the centre of attraction for geometers. But in 1956, John Milnor discovered that there were distinct differentiable structures (even I don’t know what that actually means!) on the seven-sphere. His arguments brought together topology and analysis in an unexpected way, and in doing so initiated the field of differential topology.

Geometric topology has borrowed enormously from the rest of algebraic topology, but it has returned very scant interest on this “borrowed” capital. It is, however, full of problems, some of the simplest of which, in formulation, are as yet unsolved. Knot theory (or, in general, low-dimensional topology) is one of the most active areas of research in this branch of topology. Here I would like to quote R. J. Daverman and R. B. Sher:

Geometric Topology focuses on matters arising in special spaces such as manifolds, simplicial complexes, and absolute neighbourhood retracts. Fundamental issues driving the subject involve the search for topological characterizations of the more important objects and for topological classification within key classes.

[1] Scoville, Nicholas. “Georg Cantor at the Dawn of Point-Set Topology.” Convergence (May 2012). doi:10.4169/loci003861.

[2] Weil, André. “Riemann, Betti and the Birth of Topology.” Archive for History of Exact Sciences 20, no. 2 (1979): 91–96. doi:10.1007/bf00327626.

[3] Johnson, Dale M. “The Problem of the Invariance of Dimension in the Growth of Modern Topology, Part I.” Archive for History of Exact Sciences 20, no. 2 (1979): 97–188. doi:10.1007/bf00327627.

[4] Johnson, Dale M. “The Problem of the Invariance of Dimension in the Growth of Modern Topology, Part II.” Archive for History of Exact Sciences 25, no. 2–3 (December 1981): 85–266. doi:10.1007/bf02116242.

[5] Lefschetz, Solomon. “The Early Development of Algebraic Topology.” Boletim Da Sociedade Brasileira de Matemática 1, no. 1 (January 1970): 1–48. doi:10.1007/bf02628194.