# Confusing terms in topology

Following are some terms used in topology that have similar definitions or similar literal English meanings:

• Convex set: A subset $U$ of $\mathbb{R}^n$ is called convex¹ if it contains, along with any pair of its points $x,y$, the entire line segment joining the points.
• Star-convex set: A subset $U$ of $\mathbb{R}^n$ is called star-convex if there exists a point $p\in U$ such that for each $x\in U$, the line segment joining $x$ to $p$ lies in $U$.
• Simply connected: A topological space $X$ is called simply connected if it is path-connected² and any loop in $X$ defined by $f : S^1 \to X$ can be contracted³ to a point.
• Deformation retract: Let $U$ be a subspace of $X$. We say that $X$ deformation retracts to $U$ if there exists a retraction⁴ $r : X \to U$ such that its composition with the inclusion $i : U \to X$ is homotopic⁵ to the identity map on $X$.

Various examples illustrate the interdependence of these terms; shown here are a pentagon, a star, a sphere, and an annulus.
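To make the first two definitions concrete, here is a minimal numerical sketch (my own illustration, assuming NumPy; the L-shaped region is a hypothetical example, not one of the figures above). It checks that one segment between two points of the region leaves the region (so it is not convex), while segments from a chosen point $p$ to several sample points stay inside (consistent with star-convexity about $p$):

```python
import numpy as np

# An L-shaped region (a hypothetical example): the union of the rectangles
# [0,1] x [0,0.5] and [0,0.5] x [0,1].  It is star-convex about p = (0.25, 0.25)
# but not convex.
def in_L(q):
    x, y = q
    return (0 <= x <= 1 and 0 <= y <= 0.5) or (0 <= x <= 0.5 and 0 <= y <= 1)

def segment_inside(a, b, samples=201):
    """Check numerically that every point lam*a + (1-lam)*b of the segment lies in the region."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return all(in_L(lam * a + (1 - lam) * b) for lam in np.linspace(0, 1, samples))

# Not convex: this segment between two points of the region leaves it
# (at lam = 0.5 it passes through (0.7, 0.7), which lies in neither rectangle).
print(segment_inside((1.0, 0.4), (0.4, 1.0)))          # False

# Consistent with star-convexity about p: segments from p to sample points stay inside.
p = (0.25, 0.25)
print(all(segment_inside(p, q) for q in [(1.0, 0.4), (0.4, 1.0), (0.0, 1.0), (1.0, 0.0)]))  # True
```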

A stronger version of the Jordan curve theorem, known as the Jordan–Schoenflies theorem, implies that the interior of a simple polygon is always a simply connected subset of the Euclidean plane. This statement becomes false in higher dimensions.

The n-dimensional sphere $S^n$ is simply connected if and only if $n \geq 2$. Every star-convex subset of $\mathbb{R}^n$ is simply connected. A torus, the (elliptic) cylinder, the Möbius strip, the projective plane and the Klein bottle are NOT simply connected.

The boundary of the n-dimensional ball $D^n$, that is, the $(n-1)$-sphere $S^{n-1}$, is not a retract of the ball. Using this we can prove the Brouwer fixed-point theorem. However, $\mathbb{R}^n \setminus \{0\}$ deformation retracts to the sphere $S^{n-1}$. Hence, though the sphere shown above doesn’t deformation retract to a point, it is a deformation retract of $\mathbb{R}^3 \setminus \{0\}$.
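To make the last claim concrete, here is a small numerical sketch (my own illustration, assuming NumPy) of the straight-line homotopy $H(x,t) = (1-t)\,x + t\,x/\lVert x\rVert$ that deformation retracts $\mathbb{R}^3 \setminus \{0\}$ onto $S^2$; since $\lVert H(x,t)\rVert = (1-t)\lVert x\rVert + t > 0$, the homotopy never passes through the origin:

```python
import numpy as np

# H(x, t) = (1 - t) * x + t * x / |x|: at t = 0 it is the identity on R^3 \ {0},
# at t = 1 it is the retraction r(x) = x / |x| onto the unit sphere S^2.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                     # random (nonzero) points of R^3 \ {0}
norms = np.linalg.norm(X, axis=1, keepdims=True)

def H(x, n, t):
    return (1 - t) * x + t * (x / n)

print(np.allclose(H(X, norms, 0.0), X))                      # identity at t = 0
print(np.allclose(np.linalg.norm(H(X, norms, 1.0), axis=1), 1.0))  # lands on S^2 at t = 1
# The homotopy never hits the origin: |H(x,t)| = (1-t)|x| + t > 0 for t in [0,1].
ts = np.linspace(0, 1, 11)
print(all(np.linalg.norm(H(X, norms, t), axis=1).min() > 0 for t in ts))
```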

#### Footnotes

1. In general, a convex set is defined for vector spaces. It is the set of elements from the vector space such that all the points on the straight line between any two points of the set are also contained in the set. If $a$ and $b$ are points in the vector space, the points on the straight line between $a$ and $b$ are given by $x = \lambda a + (1-\lambda)b$ for all $\lambda$ from 0 to 1.
2. A path from a point $x$ to a point $y$ in a topological space $X$ is a continuous function $f$ from the unit interval $[0,1]$ to $X$ with $f(0) = x$ and $f(1) = y$. The space $X$ is said to be path-connected if there is a path joining any two points in $X$.
3. That is, there exists a continuous map $F : D^2 \to X$ such that $F$ restricted to $S^1$ is $f$. Here, $S^1$ and $D^2$ denote the unit circle and the closed unit disk in the Euclidean plane, respectively. In general, a space $X$ is contractible if it has the homotopy type of a point. Intuitively, two spaces $X$ and $Y$ are homotopy equivalent if they can be transformed into one another by bending, shrinking and expanding operations.
4. A continuous map $r: X\to U$ is a retraction if the restriction of $r$ to $U$ is the identity map on $U$.
5. A homotopy between two continuous functions $f$ and $g$ from a topological space $X$ to a topological space $Y$ is defined to be a continuous function $H : X \times [0,1] \to Y$ such that, if $x \in X$ then $H(x,0) = f(x)$ and $H(x,1) = g(x)$. Deformation retraction is a special type of homotopy equivalence, i.e. a deformation retraction is a mapping which captures the idea of continuously shrinking a space into a subspace.

# Enclosing closed curves in squares

Let’s look at the following innocent looking question:

Is it possible to circumscribe a square about every closed curve?

The answer is YES! I found an unexpected and interesting proof in the book “Intuitive Combinatorial Topology” by V.G. Boltyanskii and V.A. Efremovich. Let’s now look at an outline of the proof of our claim:

1. Let any closed curve K be given. Draw any line l and a line l’ parallel to l, as shown in fig. 1.

2. Move the lines l and l’ closer to K till they just touch the curve K, as shown in fig. 2. Let the new lines be m and m’. Call these lines the support lines of the curve K with respect to the line l.

3. Draw a line l* perpendicular to l and a line (l*)’ parallel to l*. Draw the support lines of the curve K with respect to the line l*, as shown in fig. 3. Let the rectangle formed be ABCD.

4. The rectangle corresponding to a line becomes a square when AB and AD are equal. Let the length of the side parallel to l (which is AB) be $h_1(\mathbf{l})$ and of the side perpendicular to l (which is AD) be $h_2(\mathbf{l})$. For a given line n, define a real-valued function $f(\mathbf{n}) = h_1(\mathbf{n})-h_2(\mathbf{n})$ on the set of lines lying outside the curve. Now rotate the line l in an anti-clockwise direction till l coincides with l*. The rectangle corresponding to l* will also be ABCD (the same as that with respect to l). When l coincides with l*, we can say that $AB = h_2(\mathbf{l^*})$ and $AD = h_1(\mathbf{l^*})$.

5. We can see that, when the line is l, $f(\mathbf{l}) = h_1(\mathbf{l})-h_2(\mathbf{l})$. When we rotate l in an anti-clockwise direction the value of the function f changes continuously, i.e. f is a continuous function (I do not know how to “prove” this is a continuous function but it’s intuitively clear to me; if you have a proof please mention it in the comments). When l coincides with l*, the value is $f(\mathbf{l^*}) = h_1(\mathbf{l^*})-h_2(\mathbf{l^*})$. Since $h_1(\mathbf{l^*}) = h_2(\mathbf{l})$ and $h_2(\mathbf{l^*}) = h_1(\mathbf{l})$, we get $f(\mathbf{l^*}) = -(h_1(\mathbf{l}) - h_2(\mathbf{l})) = -f(\mathbf{l})$. So f is a continuous function which changes sign as the line is rotated from l to l*. Using (a generalization of) the intermediate value theorem, we can show that there exists a line p between l and l* such that f(p) = 0, i.e. AB = AD. So the rectangle corresponding to the line p will be a square.

Hence every curve K can be circumscribed by a square.
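As a sanity check of the intermediate-value argument in step 5 (my own numerical sketch, not part of the book’s proof; it assumes NumPy and uses an ellipse as an arbitrary sample curve), we can compute $f$ as a function of the angle of the line l and locate a sign change by bisection:

```python
import numpy as np

# Sample points on a closed curve K; here an ellipse, chosen arbitrarily.
t = np.linspace(0, 2 * np.pi, 2000)
K = np.column_stack((3 * np.cos(t), np.sin(t)))

def extent(theta):
    """Length of the rectangle side parallel to a line at angle theta (extent of K in that direction)."""
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = K @ u
    return proj.max() - proj.min()

def f(theta):
    """f(l) = h1(l) - h2(l): side parallel to l minus side perpendicular to l."""
    return extent(theta) - extent(theta + np.pi / 2)

# f changes sign between l (theta = 0) and l* (theta = pi/2), since f(l*) = -f(l);
# bisection then finds an angle where the circumscribed rectangle is a square.
lo, hi = 0.0, np.pi / 2
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print("square at angle ~", lo)
print("side lengths:", extent(lo), extent(lo + np.pi / 2))   # (almost) equal
```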

# Polynomials and Commutativity

In high school, I came to know about the statement of the fundamental theorem of algebra:

Every polynomial of degree $n$ with integer coefficients has exactly $n$ complex roots (with appropriate multiplicity).

In high school, a polynomial = a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory, respectively]:

Every non-zero, single-variable, degree $n$ polynomial with complex coefficients has, counted with multiplicity, exactly $n$ complex roots.

A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:

Let $R$ be a commutative ring with identity and let $p(x)\in R[x]$ be a polynomial with coefficients in $R$. The element $a\in R$ is a root of $p(x)$ if and only if $(x-a)$ divides $p(x)$.

A corollary of the above theorem is that:

A polynomial $f$ of degree $n$ over a field $F$ has at most $n$ roots in $F$.

(In case you know undergraduate level algebra, recall that $R[x]$ is a Principal Ideal Domain if and only if $R$ is a field.)
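As a side illustration of why the field assumption matters (a tiny check of my own, not from the text): over the commutative ring $\mathbb{Z}/8\mathbb{Z}$, which is not a field, the degree-2 polynomial $x^2 - 1$ already has four roots.

```python
# Roots of x^2 - 1 over Z/8Z: the "at most n roots" bound fails outside fields.
print([x for x in range(8) if (x * x - 1) % 8 == 0])   # [1, 3, 5, 7]
```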

The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and $\mathbb{C}$ is a field, hence a commutative ring). The key step in the proofs of all the above theorems is the fact that the division algorithm holds only in some special commutative rings (like fields). I would like to illustrate my point with the following fact:

The equation $X^2 + X + 1 = 0$ has only 2 complex roots, namely $\omega = \frac{-1+i\sqrt{3}}{2}$ and $\omega^2 = \frac{-1-i\sqrt{3}}{2}$. But if we look for solutions among 2×2 complex matrices (a non-commutative ring, interpreting 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix), then we have at least 3 solutions:

$\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}$

This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
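A quick numerical check (a sketch assuming NumPy; the matrices are exactly $A$, $B$ and $C$ above) that all three satisfy $X^2 + X + I = 0$:

```python
import numpy as np

w = (-1 + 1j * np.sqrt(3)) / 2          # omega, a primitive cube root of unity
I = np.eye(2)

A = np.array([[0, -1], [1, -1]], dtype=complex)
B = np.diag([w, w**2])
C = np.diag([w**2, w])

for name, X in [("A", A), ("B", B), ("C", C)]:
    print(name, np.allclose(X @ X + X + I, 0))   # True for each matrix
```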

# Intra-mathematical Dependencies

Recently I completed all of my undergraduate level maths courses, so I wanted to sum up my understanding of mathematics in the following dependency diagram:

I imagine this like a wall, where each topic is a brick. You can bake different bricks at different times (i.e. follow your curriculum to learn these topics), but finally, this is how they should be arranged (in my opinion) to get the best possible understanding of mathematics.

As of now, I have an “elementary” knowledge of Set Theory, Algebra, Analysis, Topology, Geometry, Probability Theory, Combinatorics and Arithmetic. Unfortunately, in India, there are no undergraduate level courses in Mathematical Logic and Category Theory.

This post can be seen as a sequel to my “Mathematical Relations” post.

# In the praise of norm

If you have spent some time with undergraduate mathematics, you have probably heard the word “norm”. This term is encountered in various branches of mathematics (as per Wikipedia).

But it seems to occur only in abstract algebra. Although the definition of this term is always algebraic, it has a topological interpretation when we are working with vector spaces. It secretly connects a vector space to a topological space in which we can study differentiation (a metric space), by satisfying the conditions of a metric. This point of view, along with an inner product structure, is explored when we study functional analysis.

Some facts to remember:

1. Every vector space has a norm. [Proof]
2. Every vector space has an inner product (assuming “Axiom of Choice”). [Proof]
3. An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space.  [Proof]
4. All norms are equivalent in finite dimensional vector spaces. [Proof]
5. Every normed vector space is a metric space (and NOT vice versa); see the numerical sketch after this list. [Proof]
6. In general, a vector space is NOT the same as a metric space. [Proof]
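To illustrate facts 3 and 5 concretely, here is a minimal numerical sketch in $\mathbb{R}^3$ with the standard inner product (this assumes NumPy and is only an illustration, not a proof):

```python
import numpy as np

# Fact 3: an inner product induces a norm, ||x|| = sqrt(<x, x>).
# Fact 5: a norm induces a metric, d(x, y) = ||x - y||.

def inner(x, y):
    return float(np.dot(x, y))

def norm(x):
    return np.sqrt(inner(x, x))

def dist(x, y):
    return norm(x - y)

x = np.array([1.0, -2.0, 2.0])
y = np.array([0.0, 3.0, -1.0])

print(norm(x))                              # 3.0, the Euclidean length of x
print(dist(x, y))                           # the metric induced by the norm
print(norm(x + y) <= norm(x) + norm(y))     # triangle inequality: True
```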

# Real vs Complex numbers

I want to talk about the algebraic and analytic differences between real and complex numbers. Firstly, let’s have a look at the following beautiful explanation by Richard Feynman (from his QED lectures) about the similarities between real and complex numbers:

From Chapter 2 of the book “QED – The Strange Theory of Light and Matter” © Richard P. Feynman, 1985.

Before reading this explanation, I used to believe that the need to establish the “Fundamental Theorem of Algebra” (read this beautiful paper by Daniel J. Velleman to learn about a proof of this theorem) was the only way to motivate the study of complex numbers.

The fundamental difference between real and complex numbers is

Real numbers form an ordered field, but complex numbers can’t form an ordered field. [Proof]

Where we define an ordered field as follows:

Let $\mathbf{F}$ be a field. Suppose that there is a set $\mathcal{P} \subset \mathbf{F}$ which satisfies the following properties:

• For each $x \in \mathbf{F}$, exactly one of the following statements holds: $x \in \mathcal{P}$, $-x \in \mathcal{P}$, $x =0$.
• For $x,y \in \mathcal{P}$, $xy \in \mathcal{P}$ and $x+y \in \mathcal{P}$.

If such a $\mathcal{P}$ exists, then $\mathbf{F}$ is an ordered field. Moreover, we define $x \le y \Leftrightarrow y -x \in \mathcal{P} \vee x = y$.
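Using this definition, the standard short argument for the claim above goes roughly as follows (a sketch; the linked proof may be phrased differently). Suppose such a set $\mathcal{P} \subset \mathbb{C}$ existed. Since $i \neq 0$, either $i \in \mathcal{P}$ or $-i \in \mathcal{P}$. In either case, closure under products gives $i \cdot i = -1 \in \mathcal{P}$ (respectively $(-i)\cdot(-i) = -1 \in \mathcal{P}$), and then $(-1)\cdot(-1) = 1 \in \mathcal{P}$. But $1 \in \mathcal{P}$ and $-1 \in \mathcal{P}$ together violate the first property for $x = 1$. Hence no such $\mathcal{P}$ exists.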

Note that, without requiring compatibility with the field (and vector space) structure of the complex numbers, we CAN establish an order on $\mathbb{C}$ [Proof], but that is useless. I find this consequence pretty interesting because, though $\mathbb{R}$ and $\mathbb{C}$ are isomorphic as additive groups (and as vector spaces over $\mathbb{Q}$), they are not isomorphic as rings (and hence not isomorphic as fields).

Now let’s have a look at a consequence of this difference between the two number systems, arising from the order structure.

Though both the real and the complex numbers form complete fields (completeness here is a property of metric spaces, not of the field structure), only the real numbers have the least upper bound property.

Where we define least upper bound property as follows:

Let $\mathcal{S}$ be a non-empty set of real numbers.

• A real number $x$ is called an upper bound for $\mathcal{S}$ if $x \geq s$ for all $s\in \mathcal{S}$.
• A real number $x$ is the least upper bound (or supremum) for $\mathcal{S}$ if $x$ is an upper bound for $\mathcal{S}$ and $x \leq y$ for every upper bound $y$ of $\mathcal{S}$ .

The least-upper-bound property states that any non-empty set of real numbers that has an upper bound must have a least upper bound in real numbers.
This least upper bound property is referred to as Dedekind completeness. Therefore, though both $\mathbb{R}$ and $\mathbb{C}$ are complete as metric spaces [proof], only $\mathbb{R}$ is Dedekind complete.

In an arbitrary ordered field one has the notion of Dedekind completeness — every nonempty bounded above subset has a least upper bound — and also the notion of sequential completeness — every Cauchy sequence converges. The main theorem relating these two notions of completeness is as follows [source]:

For an ordered field $\mathbf{F}$, the following are equivalent:
(i) $\mathbf{F}$ is Dedekind complete.
(ii) $\mathbf{F}$ is sequentially complete and Archimedean.

Where we define an Archimedean field as an ordered field such that for each element there exists a finite sum $1+1+\ldots+1$ whose value is greater than that element; that is, there are no infinite elements.

As remarked earlier, $\mathbb{C}$ is not an ordered field and hence can’t be Archimedean. Therefore, $\mathbb{C}$ can’t have the least-upper-bound property, though it is complete as a metric space. So, the consequence of all this is:

We can’t use complex numbers for counting.

But still, complex numbers are a very important part of modern arithmetic (number theory), because they enable us to view properties of numbers from a geometric point of view [source].

# Dimension clarification

In several of my previous posts I have mentioned the word “dimension”. Recently I realized that dimension can be of two types, as pointed out by Bernhard Riemann in his famous lecture in 1854. Let me quote Donal O’Shea from p. 99 of his book “The Poincaré Conjecture”:

Continuous spaces can have any dimension, and can even be infinite dimensional. One needs to distinguish between the notion of a space and a space with a geometry. The same space can have different geometries. A geometry is an additional structure on a space. Nowadays, we say that one must distinguish between topology and geometry.

[Here by the term “space(s)” the author means “topological space”]

In mathematics, the word “dimension” can have different meanings. But, broadly speaking, there are only three different ways of defining/thinking about “dimension”:

• Dimension of Vector Space: It’s the number of elements in a basis of the vector space. This is the sense in which the term dimension is used in geometry (while doing calculus) and algebra. For example:
• A circle is a two-dimensional object in this sense, since we need a two-dimensional vector space (i.e. two coordinates) to write it down. In general, this is how we define dimension for Euclidean space (which is an affine space, i.e. what is left of a vector space after you’ve forgotten which point is the origin).
• Dimension of a differentiable manifold is the dimension of its tangent vector space at any point.
• Dimension of a variety (an algebraic object) is the dimension of the tangent vector space at any regular point. Krull dimension is remotely motivated by the idea of dimension of vector spaces.
• Dimension of Topological Space: It’s the smallest integer that is somehow related to open sets in the given topological space. In contrast to a basis of a vector space, a basis of a topological space need not be maximal; indeed, the only maximal base is the topology itself. Moreover, dimension in this case can be defined using the “Lebesgue covering dimension” or, in some nice cases, the “inductive dimension”. This is the sense in which the term dimension is used in topology. For example:
• A circle is a one-dimensional object and a disc is a two-dimensional one by the topological definition of dimension.
• Two homeomorphic spaces have the same topological dimension (a continuous bijection with a continuous inverse preserves dimension). Because a space-filling curve is continuous and surjective but not a homeomorphism, a curve and a plane have different dimensions even though curves can fill space. Space-filling curves are special cases of fractal constructions. No differentiable space-filling curve can exist. Roughly speaking, differentiability puts a bound on how fast the curve can turn.
• Fractal Dimension: It’s a notion designed to study complex sets/structures like fractals, and it allows objects to have non-integer dimensions. Its definition lies in between those of the dimension of vector spaces and of topological spaces. It can be defined in various similar ways; the most common is the Hausdorff dimension, defined via the Hausdorff measure on a metric space (measure theory enables us to integrate a function without worrying about its smoothness, and the defining property of fractals is that they are NOT smooth). This sense of dimension is used in very specific cases. For example:
• A curve with fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface.
• The fractal dimension of the Koch curve is $\frac{\ln 4}{\ln 3} \approx 1.26186$, but its topological dimension is 1 (just like the space-filling curves). The Koch curve is continuous everywhere but differentiable nowhere. (A rough numerical box-counting estimate of this dimension is sketched after this list.)
• The fractal dimension of space-filling curves is 2, but their topological dimension is 1. [source]
• A surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume.
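Below is a small numerical sketch (my own illustration, assuming NumPy) that estimates the fractal dimension of the Koch curve by box counting; the recursion depth and box sizes are arbitrary choices, and the estimate should land near $\ln 4/\ln 3 \approx 1.26$.

```python
import numpy as np

def koch(points, depth):
    """Apply the Koch substitution `depth` times to a polyline (list of 2D points)."""
    for _ in range(depth):
        new = []
        for a, b in zip(points[:-1], points[1:]):
            a, b = np.asarray(a, float), np.asarray(b, float)
            d = (b - a) / 3
            # rotate d by +60 degrees to place the "bump" of the Koch curve
            rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]]) @ d
            new += [a, a + d, a + d + rot, a + 2 * d]
        new.append(np.asarray(points[-1], float))
        points = new
    return np.array(points)

curve = koch([(0.0, 0.0), (1.0, 0.0)], depth=7)

# Count how many boxes of side eps = 3^-k contain a vertex of the curve.
sizes = [3.0 ** (-k) for k in range(1, 7)]
counts = [len({(int(np.floor(x / eps)), int(np.floor(y / eps))) for x, y in curve})
          for eps in sizes]

# The slope of log N(eps) against log(1/eps) estimates the box-counting dimension.
slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
print(slope)   # should come out near ln(4)/ln(3) ~ 1.26
```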

This simple observation has very interesting consequences. For example, consider the following statement from p. 167 of the book “The Poincaré Conjecture” by Donal O’Shea:

… there are infinitely many incompatible ways of doing calculus in four-space. This contrasts with every other dimension…

This leads to a natural question:

Why is it difficult to develop calculus for any $\mathbb{R}^n$ in general?

Actually, if we consider $\mathbb{R}^n$ as a vector space then developing calculus is not a big deal (as done in multivariable calculus). But if we consider $\mathbb{R}^n$ as a topological space then it becomes a challenging task, due to the lack of the required algebraic structure on the space. So Donal O’Shea is actually pointing to the fact that doing calculus on differentiable manifolds in $\mathbb{R}^4$ is difficult. And this is because we are considering $\mathbb{R}^4$ as a 4-dimensional topological space.

Now, I will end this post by pointing out the way in which the definition of dimension should be read in my older posts:

• Dimension ≡ Dimension of the underlying vector space
• Dimension ≡ Lebesgue covering dimension of the underlying topological space
• Special Numbers: update (note that here I am talking about “topological manifolds” of which “differentiable manifolds” are a special case)