# Hyperreal and Surreal Numbers


These are two of the lesser-known number systems, and they have confusingly similar names.

Hyperreal numbers originated from what we now call “non-standard analysis”. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The term “hyper-real” was introduced by Edwin Hewitt in 1948. In non-standard analysis, continuity and differentiation are defined using infinitesimals instead of epsilon-delta arguments. In 1960, Abraham Robinson showed that infinitesimals can be given a precise, rigorous meaning.


Surreal numbers, on the other hand, form a fully developed number system that contains the real numbers. They share many properties with the reals, including the usual arithmetic operations (addition, subtraction, multiplication, and division); as such, they also form an ordered field. The modern definition and construction of surreal numbers was given by John Horton Conway around 1970. The inspiration for these numbers came from combinatorial game theory. Conway’s construction was introduced in Donald Knuth’s 1974 book Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness.

In his book, which takes the form of a dialogue, Knuth coined the term surreal numbers for what Conway had simply called numbers. The book is a good source for learning their construction, which, though perfectly logical, is non-trivial. Conway later adopted Knuth’s term and used surreals for analyzing games in his 1976 book On Numbers and Games.


Many nice videos on similar topics can be found on the PBS Infinite Series YouTube channel.

# Rational input, integer output


Consider the following polynomial [source: Berkeley Problems in Mathematics, problem 6.13.10]:

$f(t) = 3t^3 + 10t^2 - 3t$

Let’s try to figure out the rational values of $t$ for which $f(t)$ is an integer. Clearly, if $t\in\mathbb{Z}$ then $f(t)$ is an integer. So let’s consider the case when $t=m/n$ where $\gcd(m,n)=1$ and $n\neq \pm 1$. Substituting this value of $t$ we get:

$\displaystyle{f\left(\frac{m}{n}\right) = \frac{3m^3}{n^3} + \frac{10m^2}{n^2} - \frac{3m}{n}= \frac{m(3m^2+10mn-3n^2)}{n^3}=k \in \mathbb{Z}}$

Since $\gcd(m,n)=1$, we must have $n^3\mid (3m^2+10mn-3n^2)$; in particular $n\mid 3m^2$, and again using $\gcd(m,n)=1$ this forces $n\mid 3$. Also, it’s clear that $m\mid k$. Hence $n=\pm 3$ and we just need to find the possible values of $m$.

For $n=3$ we get:

$\displaystyle{f\left(\frac{m}{3}\right) = \frac{m(m^2+10m-9)}{9}=k \in \mathbb{Z}}$

Hence we have $9\mid m(m^2+10m-9)$. Since $\gcd(m,n)=\gcd(m,3)=1$, this gives $9\mid (m^2+10m)=m(m+10)$, and therefore $9\mid (m+10)$, that is, $m\equiv 8\pmod 9$.

Similarly, for $n=-3$ we get $m\equiv 1 \pmod 9$. Hence we conclude that the non-integer values of $t$ which lead to integer output are:

$\displaystyle{t = 3\ell+ \frac{8}{3}, -3\ell-\frac{1}{3}}$ for all $\ell\in\mathbb{Z}$
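This conclusion is easy to check by brute force (a quick sketch using Python’s `fractions` module; the search bounds are arbitrary):

```python
from fractions import Fraction
from math import gcd

def f(t):
    return 3*t**3 + 10*t**2 - 3*t

# collect every non-integer rational m/n in a small window with integer f(m/n)
hits = []
for n in range(2, 10):
    for m in range(-50, 51):
        if gcd(m, n) != 1:
            continue
        t = Fraction(m, n)
        if f(t).denominator == 1:
            hits.append(t)

# every hit has denominator 3 and numerator ≡ 8 (mod 9),
# matching t = 3ℓ + 8/3 and t = -3ℓ - 1/3 (e.g. -1/3 has numerator -1 ≡ 8 mod 9)
assert hits and all(t.denominator == 3 and t.numerator % 9 == 8 for t in hits)
```

For instance, $f(8/3)=120$ and $f(-1/3)=2$, while $f(2/3)=10/3$ is not an integer.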

# Discrete Derivative


I came across the following interesting question in the book “Math Girls” by Hiroshi Yuki:

Develop a definition for the differential operator $\Delta$ in discrete space, corresponding to the definition of the differential operator D in continuous space.

We know that the derivative of a function f at a point x is the rate at which f changes at x. Geometrically, the derivative at x is the slope of the tangent to f at x, where the tangent is the limit of secant lines as shown below:

But this happens only in the continuous world, where x “glides smoothly” from one point to another. This is not the case in a discrete world. In the discrete world there is nothing like “being arbitrarily close to each other”, so we cannot use the earlier definition of letting h shrink toward 0. In a discrete world we cannot talk about getting “close” to something, but we can talk about being “next” to each other.

We can talk about the change as x moves to x+1 while f changes from f(x) to f(x+1). We do not need limits here, so the definition of the “difference operator” (analogous to the differential operator) will be:

$\Delta f(x) = \frac{ f(x+1) - f(x) }{(x+1) - x} = f(x+1) - f(x)$

Hence, for a function such as $g(x) = x^2$, it is easy to verify that $Dg(x) = 2x$ but $\Delta g(x) = (x+1)^2 - x^2 = 2x + 1$ (using the definitions above).
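This difference between the two worlds is easy to check in code (a minimal sketch; the helper name `delta` is mine):

```python
def delta(f):
    """Forward difference operator: (Δf)(x) = f(x+1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

g = lambda x: x**2
dg = delta(g)

# Δx² = 2x + 1, whereas the continuous derivative Dx² = 2x
assert all(dg(x) == 2*x + 1 for x in range(-10, 11))
```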

Now, when will we be able to get the same derivative in both discrete and continuous worlds? I read a little about this question in Math Girls and a little more in “An Introduction to the Calculus of Finite Differences” by C. H. Richardson.

Calculus of differences is the study of the relations that exist between the values assumed by the function whenever the independent variable takes on a series of values in arithmetic progression.

Let us write f(x) as $f_x$ from now on. So $\Delta f(x) = f(x+1) - f(x) = f_{x+1} - f_x$. Using the above definition we can prove the following for functions $U_x$ and $V_x$:

1) $\Delta^{k+1} U_x = \Delta^{k} U_{x+1} - \Delta^{k} U_x$

2) $\Delta (U_x + V_x) = \Delta U_x + \Delta V_x$, and more generally $\Delta^k (U_x + V_x) =\Delta^k U_x + \Delta^k V_x$

3) $\Delta^k (cU_x) = c \Delta^k U_x$

Theorem. $\Delta^n x^n = n!$

Proof. $\Delta x^n = (x+1)^n - x^n = n x^{n-1} + (\text{terms of degree lower than } n - 1)$. Each repetition of the process of differencing reduces the degree by one and adds one factor to the succession $n(n - 1)(n - 2) \cdots$. Repeating the process $n$ times we have $\Delta^n x^n = n!$.

Corollary 1. $\Delta^n ax^n = a\cdot n!$

Corollary 2. $\Delta^{n+1} x^n = 0$

Corollary 3. If $U_x$ is a polynomial of degree n, i.e. $U_x= a_0+ a_1 x + a_2 x^2 + \ldots + a_n x^n$, then $\Delta^n U_x = a_n\cdot n!$.
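The theorem and its corollaries can be verified numerically by iterating the forward difference (a sketch; the helper names are my own):

```python
from math import factorial

def delta(f):
    """Forward difference operator: (Δf)(x) = f(x+1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

def delta_n(f, k):
    """Apply the forward difference operator k times."""
    for _ in range(k):
        f = delta(f)
    return f

# Theorem: Δ^n x^n = n!  (here with n = 4)
assert all(delta_n(lambda x: x**4, 4)(x) == factorial(4) for x in range(10))
# Corollary 2: Δ^(n+1) x^n = 0
assert all(delta_n(lambda x: x**4, 5)(x) == 0 for x in range(10))
# Corollary 3: Δ^3 (2x^3 + 7x - 1) = 2 · 3!
assert all(delta_n(lambda x: 2*x**3 + 7*x - 1, 3)(x) == 2*factorial(3)
           for x in range(10))
```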

We call the continued products $U_x^{|n|} = U_x\cdot U_{x+1}\cdot U_{x+2} \cdots U_{x+(n-1)}$ and $U_x^{(n)} = U_x \cdot U_{x-1}\cdot U_{x-2}\cdots U_{x-(n-1)}$ factorial expressions.

If $U_x$ is the function ax+b for some real numbers a and b, then the factorial forms obtained by replacing $U_x$ with ax+b are $(ax+b)^{|n|} = (ax+b)\cdot(a(x+1)+b)\cdot (a(x+2)+b)\cdots (a(x+n-1)+b)$ and $(ax+b)^{(n)} =(ax+b)\cdot (a(x-1)+b)\cdot (a(x-2)+b)\cdots (a(x-(n-1))+b)$.

We define $(ax+b)^{|0|}$ and $(ax+b)^{(0)}$ as 1.

Using the above definition of factorial expressions we can show the following:

(i) $\Delta (ax+b)^{(n)} = a\cdot n \cdot (ax+b)^{(n-1)}$

(ii) $\displaystyle{\Delta \frac{1}{(ax+b)^{|n|}} = \frac{-an}{(ax+b)^{|n+1|}}}$

When we consider the special case of a=1 and b=0, the factorial representations are called the rising and falling factorials:

$x^{|n|} = x \cdot (x+1)\cdot (x+2)\cdots (x+n-1)$ – rising factorial

$x^{(n)} =x\cdot (x-1) \cdot (x-2) \cdots (x-n+1)$ – falling factorial.

Substituting a=1 and b=0 in (i) and (ii) above, we get that

$\Delta x^{(n)} = n\cdot x^{(n-1)}$ , $\Delta^n x^{(n)} = n!$ and $\displaystyle{\Delta \frac{1}{x^{|n|}} = - \frac{n}{x^{|n+1|}}}$.
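This falling-factorial power rule, the discrete analogue of $D x^n = n x^{n-1}$, can be checked directly (a sketch; `falling` is an assumed helper name):

```python
def falling(x, n):
    """Falling factorial x^(n) = x(x-1)···(x-n+1); empty product 1 for n = 0."""
    out = 1
    for k in range(n):
        out *= x - k
    return out

# Δ x^(n) = n · x^(n-1)
n = 5
assert all(falling(x + 1, n) - falling(x, n) == n * falling(x, n - 1)
           for x in range(-5, 10))
```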

Summary: a table of these difference formulas is given in Richardson, C. H., An introduction to the calculus of finite differences, p. 10.

Because $x^{(n)}$ plays in the calculus of finite differences a role similar to that played by $x^n$ in the infinitesimal calculus, for many purposes in finite differences it is advisable to express a given polynomial as a series of factorials. A method of accomplishing this is contained in Newton’s Theorem.

Newton’s Theorem. If $U_x$ is a polynomial of degree $n$, then $\displaystyle{U_x = U_0 + \frac{\Delta U_0}{1!}x^{(1)} + \frac{\Delta^2 U_0}{2!}x^{(2)} + \cdots + \frac{\Delta^n U_0}{n!}x^{(n)}}$. [source: Richardson, C. H. An introduction to the calculus of finite differences. p. 10.]

Proof. Write $U_x = a_0 + a_1 x^{(1)} + a_2 x^{(2)} + \ldots + a_n x^{(n)}$ and difference both sides repeatedly, using $\Delta x^{(k)} = k\, x^{(k-1)}$. Since these expansions are identities, they are true for all values of x, and consequently must hold for x = 0. Setting x = 0 in the given function and its successive differences (noting that $0^{(k)} = 0$ for $k \geq 1$), we obtain $a_k = \Delta^k U_0 / k!$ for all $a_k$, and the theorem is proved.
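The proof suggests an algorithm: difference the tabulated values repeatedly, read off the value at 0, and divide by $k!$. A sketch (the function name and values-list interface are my own):

```python
from fractions import Fraction

def factorial_coeffs(values):
    """Given U(0), U(1), ..., U(n) for a degree-n polynomial U, return the
    coefficients a_k with U(x) = Σ a_k · x^(k) (falling factorials),
    using a_k = Δ^k U(0) / k! from Newton's theorem."""
    diffs = [Fraction(v) for v in values]
    coeffs, k_fact = [], 1
    for k in range(len(values)):
        coeffs.append(diffs[0] / k_fact)   # Δ^k U(0) / k!
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        k_fact *= k + 1
    return coeffs

# x^3 = x^(1) + 3·x^(2) + x^(3), i.e. x + 3x(x-1) + x(x-1)(x-2)
assert factorial_coeffs([0, 1, 8, 27]) == [0, 1, 3, 1]
```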

# Enclosing closed curves in squares


Let’s look at the following innocent looking question:

Is it possible to circumscribe a square about every closed curve?

The answer is YES! I found an unexpected and interesting proof in the book “Intuitive Combinatorial Topology” by V. G. Boltyanskii and V. A. Efremovich. Here is an outline of the proof of our claim:

1. Let any closed curve K be given. Draw any line l and a line l’ parallel to l, as shown in fig 1.

2. Move the lines l and l’ closer to K until they just touch the curve K, as shown in fig 2. Call the new lines m and m’; these are the support lines of the curve K with respect to the line l.

3. Draw a line l* perpendicular to l and a line (l*)’ parallel to l*. Draw the support lines of K with respect to the line l*, as shown in fig 3. Let the rectangle formed by the four support lines be ABCD.

4. The rectangle corresponding to a line becomes a square when AB and AD are equal. Let the length of the side parallel to l (which is AB) be $h_1(\mathbf{l})$ and the side perpendicular to l (which is AD) be $h_2(\mathbf{l})$. For a given line n, define a real-valued function $f(\mathbf{n}) = h_1(\mathbf{n})-h_2(\mathbf{n})$ on the set of lines lying outside the curve. Now rotate the line l in an anti-clockwise direction until it coincides with l*. The rectangle corresponding to l* is again ABCD (the same as that with respect to l), but the roles of the sides are exchanged: $AB = h_2(\mathbf{l^*})$ and $AD = h_1(\mathbf{l^*})$.

5. We can see that for the line l, $f(\mathbf{l}) = h_1(\mathbf{l})-h_2(\mathbf{l})$. When we rotate l in an anti-clockwise direction the value of the function f changes continuously, i.e. f is a continuous function of the angle of rotation (I do not know how to “prove” this continuity, but it is intuitively clear to me; if you have a proof please mention it in the comments). When l coincides with l*, the value is $f(\mathbf{l^*}) = h_1(\mathbf{l^*})-h_2(\mathbf{l^*})$, and since $h_1(\mathbf{l^*}) = h_2(\mathbf{l})$ and $h_2(\mathbf{l^*}) = h_1(\mathbf{l})$, we get $f(\mathbf{l^*}) = -(h_1(\mathbf{l}) - h_2(\mathbf{l})) = -f(\mathbf{l})$. So f is a continuous function which changes sign as the line rotates from l to l*. By the intermediate value theorem there exists a line p between l and l* such that f(p) = 0, i.e. AB = AD. So the rectangle corresponding to the line p is a square.

Hence every curve K can be circumscribed by a square.