Teaching Mathematics

One of the most challenging and rewarding things associated with being a math enthusiast (a.k.a. mathematician) is the opportunity to share your knowledge of the not-so-obvious truths of mathematics. A couple of years ago, I tried to communicate that feeling through an article for high school students.

When I joined college, I tried to teach mathematics to some kids from financially weaker families. Since they had no exposure to mathematics, I had to start with concepts like addition and multiplication of numbers. My experience can be summarized by the following stand-up comedy performance by Naveen Richard:

After trying for a couple of months to teach elementary mathematics, I gave up, and now I discuss mathematics only at or above the high school level. Last week I delivered a lecture discussing the proof of Poncelet’s Closure Theorem:

Whenever a polygon is inscribed in one conic section and circumscribes another one, the polygon must be part of an infinite family of polygons that are all inscribed in and circumscribe the same two conics.

I had spent sufficient time preparing the lecture and believed that I was aware of all possible consequences of this theorem. But almost halfway through the lecture, one person (Haresh) from the audience of 10 people pointed out the following fascinating consequence of the theorem:

If an n-sided polygon is inscribed in one conic section and circumscribes the other one, then it must be a convex polygon, and no other m-sided polygon (with m≠n) can be inscribed in and circumscribe this pair of conic sections.

Such insights from the audience motivate me to discuss mathematics with others!

In praise of the norm

If you have spent some time with undergraduate mathematics, you have probably heard the word “norm”. The term is encountered in various branches of mathematics (as a quick look at Wikipedia shows), but at first sight it seems to belong to abstract algebra. Although the definition of the term is always algebraic, it has a topological interpretation when we are working with vector spaces: by satisfying the conditions of a metric, it secretly connects the vector space to a metric space, i.e. a topological space in which we can do analysis (study convergence, continuity and differentiation). This point of view, along with an inner product structure, is explored when we study functional analysis.
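
For reference, here is the definition being used implicitly: a norm on a vector space V over \mathbb{R} or \mathbb{C} is a function \| \cdot \| : V \to \mathbb{R} such that for all x, y \in V and all scalars \lambda we have \|x\| \geq 0 with equality if and only if x = 0, \|\lambda x\| = |\lambda| \|x\|, and \|x + y\| \leq \|x\| + \|y\|. The induced metric is then

d(x, y) := \|x - y\|,

and the axioms of a metric (positivity, symmetry, triangle inequality) follow directly from the three conditions above.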

Some facts to remember:

  1. Every vector space has a norm. [Proof]
  2. Every vector space has an inner product (assuming “Axiom of Choice”). [Proof]
  3. An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space.  [Proof]
  4. All norms on a finite dimensional vector space are equivalent (see the numerical sketch after this list). [Proof]
  5. Every normed vector space is a metric space (and NOT vice versa). [Proof]
  6. In general, a vector space is NOT the same as a metric space. [Proof]
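
As a quick numerical illustration of fact 4, here is a small sketch (using NumPy; the chain of inequalities \|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2 \leq n\,\|x\|_\infty is the standard one for \mathbb{R}^n):

    import numpy as np

    # Fact 4, numerically: on the finite dimensional space R^n the 1-, 2- and
    # infinity-norms are equivalent, i.e. each is bounded by a constant multiple
    # of any other, with constants that depend only on n (not on the vector).
    rng = np.random.default_rng(0)
    n = 5
    for _ in range(3):
        x = rng.normal(size=n)
        n1 = np.linalg.norm(x, 1)
        n2 = np.linalg.norm(x, 2)
        ninf = np.linalg.norm(x, np.inf)
        # standard chain of inequalities on R^n
        assert ninf <= n2 <= n1 <= np.sqrt(n) * n2 <= n * ninf
        print(round(n1, 3), round(n2, 3), round(ninf, 3))

In an infinite dimensional space (for example, a space of sequences or functions) such inequalities fail, which is why fact 4 is stated only for finite dimensions.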

Real vs Complex Plane

The real plane is denoted by \mathbb{R}^2 and is commonly referred to as the Cartesian plane. When we talk about \mathbb{R}^2, we usually mean that \mathbb{R}^2 is a vector space over \mathbb{R}. But when you view \mathbb{R}^2 as the Cartesian plane, it is technically not a vector space but rather an affine space, on which a vector space acts by translations; i.e. there is no canonical choice of where the origin should go, because it can be translated anywhere.

[Image: Cartesian Plane (345Kai at the English language Wikipedia [Public domain, GFDL or CC-BY-SA-3.0], via Wikimedia Commons)]

On the other hand, the complex plane is denoted by \mathbb{C} and is commonly referred to as the Argand plane. When we talk about \mathbb{C}, we mean that \mathbb{R}^2 is made into a field by exploiting the tuple structure of its elements; this is essentially the only way to explicitly define a field structure on the set \mathbb{R}^2, and that is how we view \mathbb{C} as a field (if you allow the axiom of choice, there are more possibilities; see this Math.SE discussion).
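
Explicitly, the field operations on ordered pairs are

(a, b) + (c, d) = (a + c, b + d), \qquad (a, b) \cdot (c, d) = (ac - bd, ad + bc),

and the pair (0, 1) plays the role of i, since (0, 1) \cdot (0, 1) = (-1, 0). Identifying (a, b) with a + bi, the multiplication rule is just "multiply out and use i^2 = -1".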

[Image: Argand Plane (Shiva Sitaraman at Quora)]

So, when we want to bother about the vector space structure of \mathbb{R}^2 we refer to the Cartesian plane, and when we want to bother about the field structure of \mathbb{R}^2 we refer to the Argand plane. An immediate consequence of this difference between the real and complex planes is seen when we study multivariable analysis and complex analysis, where we consider the vector space structure and the field structure, respectively (see this Math.SE discussion for details). Hence the definition of differentiation of a function defined on \mathbb{C} is a special case of the definition of differentiation of a function defined on \mathbb{R}^2.
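
To make the last sentence concrete, write f(x + iy) = u(x, y) + iv(x, y). Then f is complex differentiable at a point precisely when the map (u, v) : \mathbb{R}^2 \to \mathbb{R}^2 is (real) differentiable there and its Jacobian satisfies the Cauchy-Riemann equations

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},

i.e. complex differentiability is real differentiability plus an extra algebraic constraint coming from the field structure of \mathbb{C}.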

Real vs Complex numbers

I want to talk about the algebraic and analytic differences between real and complex numbers. Firstly, let’s have a look at the following beautiful explanation by Richard Feynman (from his QED lectures) about the similarities between real and complex numbers:

[Image: From Chapter 2 of the book “QED – The Strange Theory of Light and Matter” © Richard P. Feynman, 1985.]

Before reading this explanation, I used to believe that the need to establish the Fundamental Theorem of Algebra (read this beautiful paper by Daniel J. Velleman to learn about a proof of this theorem) was the only way to motivate the study of complex numbers.

The fundamental difference between real and complex numbers is

Real numbers form an ordered field, but complex numbers can’t form an ordered field. [Proof]

Here an ordered field is defined as follows:

Let \mathbf{F} be a field. Suppose that there is a set \mathcal{P} \subset \mathbf{F} which satisfies the following properties:

  • For each x \in \mathbf{F}, exactly one of the following statements holds: x \in \mathcal{P}, -x \in \mathcal{P}, x =0.
  • For x,y \in \mathcal{P}, xy \in \mathcal{P} and x+y \in \mathcal{P}.

If such a \mathcal{P} exists, then \mathbf{F} is an ordered field. Moreover, we define x \le y \Leftrightarrow y -x \in \mathcal{P} \vee x = y.
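
Here is the standard short argument for why no such \mathcal{P} can exist in \mathbb{C}: since i \neq 0, the first condition forces i \in \mathcal{P} or -i \in \mathcal{P}. In either case, closure under products gives i \cdot i = -1 \in \mathcal{P} (or (-i)(-i) = -1 \in \mathcal{P}), and then (-1)(-1) = 1 \in \mathcal{P} as well. So both 1 and -1 lie in \mathcal{P}, contradicting the requirement that exactly one of x \in \mathcal{P}, -x \in \mathcal{P}, x = 0 holds for x = 1.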

Note that we CAN put a total order on the complex numbers if we do not insist that the order be compatible with the multiplicative structure [Proof], but such an order is not very useful. I find this consequence pretty interesting: though \mathbb{R} and \mathbb{C} are isomorphic as additive groups (and as vector spaces over \mathbb{Q}), they are not isomorphic as rings (and hence not as fields).

Now let’s have a look at a consequence of the difference in order structure between the two number systems.

Though both the real and the complex numbers form a complete field (completeness here being a metric, i.e. topological, property), only the real numbers have the least upper bound property.

Here the least upper bound property is defined as follows:

Let \mathcal{S} be a non-empty set of real numbers.

  • A real number x is called an upper bound for \mathcal{S} if x \geq s for all s\in \mathcal{S}.
  • A real number x is the least upper bound (or supremum) for \mathcal{S} if x is an upper bound for \mathcal{S} and x \leq y for every upper bound y of \mathcal{S} .

The least-upper-bound property states that any non-empty set of real numbers that has an upper bound must have a least upper bound in real numbers.
This least upper bound property is referred to as Dedekind completeness. Therefore, though both \mathbb{R} and \mathbb{C} are complete as metric spaces [proof], only \mathbb{R} is Dedekind complete.
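
A familiar example makes the distinction concrete: \mathbb{Q} is an ordered field that is not Dedekind complete, since the set \{x \in \mathbb{Q} : x^2 < 2\} is non-empty and bounded above in \mathbb{Q} but has no least upper bound in \mathbb{Q}. Viewed as a subset of \mathbb{R}, the same set does have a least upper bound, namely \sqrt{2}.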

In an arbitrary ordered field one has the notion of Dedekind completeness — every nonempty bounded above subset has a least upper bound — and also the notion of sequential completeness — every Cauchy sequence converges. The main theorem relating these two notions of completeness is as follows [source]:

For an ordered field \mathbf{F}, the following are equivalent:
(i) \mathbf{F} is Dedekind complete.
(ii) \mathbf{F} is sequentially complete and Archimedean.

Here an Archimedean field is defined as an ordered field in which, for each element, there is a finite sum 1+1+\ldots+1 whose value is greater than that element; that is, the field contains no infinitely large elements.
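
For example, \mathbb{Q} is Archimedean but not sequentially complete (a Cauchy sequence of rationals approaching \sqrt{2} has no rational limit), while the field of rational functions \mathbb{R}(x), ordered so that x is greater than every real number, is an ordered field that is not Archimedean: no finite sum 1+1+\ldots+1 exceeds the element x.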

As remarked earlier, \mathbb{C} is not an ordered field and hence can’t be Archimedean. Therefore, \mathbb{C} can’t have the least-upper-bound property, though it is complete in the topological sense. So, the consequence of all this is:

We can’t use complex numbers for counting.

But still, complex numbers are a very important part of modern arithmetic (number theory), because they enable us to view properties of numbers from a geometric point of view [source].

Division algorithm for reals

You must have seen the long-division method for computing the decimal representation of fractions. Astonishingly, I never pondered over how one would divide by an irrational number to get a decimal representation. Firstly, such a representation will only be approximate. Secondly, we have been avoiding this in the name of “rationalizing the denominator”, on the grounds that division by irrationals is not allowed. But, in fact, this is the same kind of problem one faces while analysing the division algorithm for Gaussian integers.

Bottom line: numbers are just symbols; we tend to assign meaning to them as we grow up. Since the real numbers, the rational numbers and the integers each form a Euclidean domain, we can write a division algorithm for them. For example, we don’t have a special set of symbols for 3 divided by π, but 3 divided by 2 is denoted by 1.5 in decimals.
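
Here is a minimal sketch of that idea in code: the usual long-division loop (multiply the remainder by 10, count how many times the divisor fits, keep the new remainder) works the same way whether the divisor is 2 or π. The function name decimal_digits is just illustrative, and floating-point arithmetic limits how many digits are trustworthy.

    import math

    def decimal_digits(numerator, divisor, n_digits):
        """Long-division style digit extraction of numerator/divisor."""
        integer_part = int(numerator // divisor)
        remainder = numerator - integer_part * divisor
        digits = []
        for _ in range(n_digits):
            remainder *= 10                    # "bring down" a zero
            digit = int(remainder // divisor)  # how many times the divisor fits
            digits.append(digit)
            remainder -= digit * divisor       # keep the new remainder
        return integer_part, digits

    print(decimal_digits(3, 2, 5))        # (1, [5, 0, 0, 0, 0])  i.e. 1.50000
    print(decimal_digits(3, math.pi, 5))  # (0, [9, 5, 4, 9, 2])  i.e. 0.95492...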

Dimension clarification

In several of my previous posts I have mentioned the word “dimension”. Recently I realized that dimension can be of two types, as pointed out by Bernhard Riemann in his famous lecture of 1854. Let me quote Donal O’Shea from p. 99 of his book “The Poincaré Conjecture”:

Continuous spaces can have any dimension, and can even be infinite dimensional. One needs to distinguish between the notion of a space and a space with a geometry. The same space can have different geometries. A geometry is an additional structure on a space. Nowadays, we say that one must distinguish between topology and geometry.

[Here by the term “space(s)” the author means “topological space”]

In mathematics, the word “dimension” can have different meanings. But, broadly speaking, there are only three different ways of defining/thinking about “dimension”:

  • Dimension of Vector Space: It is the number of elements in a basis of the vector space. This is the sense in which the term dimension is used in geometry (while doing calculus) and in algebra. For example:
    • A circle is a two dimensional object, since we need two coordinates (i.e. a two dimensional vector space) to describe it. In general, this is how we define dimension for Euclidean space (which is an affine space, i.e. what is left of a vector space after you’ve forgotten which point is the origin).
    • Dimension of a differentiable manifold is the dimension of its tangent vector space at any point.
    • Dimension of a variety (an algebraic object) is the dimension of tangent vector space at any regular point. Krull dimension is remotely motivated by the idea of dimension of vector spaces.
  • Dimension of Topological Space: It is an integer attached to a topological space in terms of its open sets. In contrast to a basis of a vector space, a basis of a topological space need not be maximal; indeed, the only maximal base is the topology itself. Dimension in this case can be defined using the “Lebesgue covering dimension” (the smallest n such that every open cover admits a refinement in which no point lies in more than n+1 open sets) or, in some nice cases, using the “inductive dimension”. This is the sense in which the term dimension is used in topology. For example:
    • A circle is a one dimensional object and a disc is a two dimensional object by the topological definition of dimension.
    • Dimension in this sense is a topological invariant: two homeomorphic spaces (spaces related by a continuous bijection whose inverse is also continuous) have the same dimension. Due to this, a curve and a plane have different dimensions even though curves can fill space; a space-filling curve is a continuous surjection, not a homeomorphism. Space-filling curves are special cases of fractal constructions. No differentiable space-filling curve can exist; roughly speaking, differentiability puts a bound on how fast the curve can turn.
  • Fractal Dimension: It is a notion designed to study complex sets/structures like fractals, and it allows objects to have non-integer dimensions. Its definition lies in between those used for vector spaces and for topological spaces. It can be defined in various similar ways; the most common is via the Hausdorff measure on a metric space (measure theory enables us to integrate a function without worrying about its smoothness, and the defining property of fractals is that they are NOT smooth). This sense of dimension is used in very specific cases. For example:
    • A curve with fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface.
      • The fractal dimension of the Koch curve is \frac{\ln 4}{\ln 3} \approx 1.26186, but its topological dimension is 1 (just like the space-filling curves); see the calculation after this list. The Koch curve is continuous everywhere but differentiable nowhere.
      • The fractal dimension of space-filling curves is 2, but their topological dimension is 1. [source]
    • A surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume.

This simple observation has very interesting consequences. For example, consider the following statement from p. 167 of the book “The Poincaré Conjecture” by Donal O’Shea:

… there are infinitely many incompatible ways of doing calculus in four-space. This contrasts with every other dimension…

This leads to a natural question:

Why is it difficult to develop calculus for any \mathbb{R}^n in general?

Actually, if we consider \mathbb{R}^n as a vector space then developing calculus is not a big deal (as done in multivariable calculus). But if we consider \mathbb{R}^n merely as a topological space, then it becomes a challenging task because the space no longer comes with the required algebraic structure. So Donal O’Shea is actually pointing to the fact that the topological space \mathbb{R}^4 admits infinitely many mutually incompatible differentiable structures, i.e. infinitely many inequivalent ways of doing calculus on manifolds modelled on it, whereas every other \mathbb{R}^n admits essentially only one. And this happens because we are considering \mathbb{R}^4 as a 4-dimensional topological space.

Now, I will end this post by pointing out the way in which the definition of dimension should be understood in my older posts:

Borsuk-Ulam Theorem

Yesterday, I was fortunate enough to attend a lecture delivered by Dr. Ritwik Mukherjee, one of my professors, to motivate the study of algebraic topology. Instead of using “soft targets” like the Möbius strip, he used the following profound theorem for motivation:

If f: S^n \to \mathbb{R}^n is continuous, then there exists an x \in S^n such that f(-x) = f(x).

This is known as the Borsuk-Ulam Theorem. To appreciate this theorem, one needs to know a fundamental theorem about continuous functions known as the Intermediate Value Theorem:

If a continuous function, f, with an interval, [a, b], as its domain, takes values f(a) and f(b) at each end of the interval, then it also takes any value between f(a) and f(b) at some point within the interval.
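
To see the connection in the simplest case n = 1, here is a sketch of the standard argument: for a continuous f : S^1 \to \mathbb{R}, define

g(x) := f(x) - f(-x), \quad \text{so that} \quad g(-x) = -g(x).

Pick any point x_0 on the circle. If g(x_0) = 0 we are done; otherwise g(x_0) and g(-x_0) have opposite signs, so applying the Intermediate Value Theorem to g along a path on the circle from x_0 to -x_0 produces a point x with g(x) = 0, i.e. f(x) = f(-x).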

Here is a video by James Grime illustrating Borsuk-Ulam Theorem in 3D:

Though the implications of the theorem itself are beautiful, the following corollary, known as the Ham sandwich theorem, is even more interesting. Here is a video by Marc Chamberland explaining this theorem:

Also, yesterday Grant Sanderson uploaded a video exploring the relation of the Borsuk-Ulam Theorem to a fair division problem known as the Necklace splitting problem:

But, to my amazement, this theorem is also related to another astonishing theorem of algebraic topology, the Brouwer fixed-point theorem:

Every continuous function from a closed ball of a Euclidean space into itself has a fixed point.
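
In dimension one, this already follows from the Intermediate Value Theorem: for a continuous f : [0, 1] \to [0, 1], the function h(x) = f(x) - x satisfies h(0) \geq 0 and h(1) \leq 0, so h vanishes at some point, and that point is fixed by f. The higher dimensional cases are much harder, which is where the tools of algebraic topology come in.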

Here is a video by Michael Stevens illustrating the Brouwer fixed-point theorem in some interesting cases:

Now the applications of this theorem are numerous, and there is a book dedicated to this theorem: “Fixed Points” by Yu. A. Shashkin. But my favourite application of this fixed point theorem is to the board game called Hex, explained by Marc Chamberland here:

If you come across some other video/article discussing the coolness of the Borsuk-Ulam Theorem, please let me know.