Understanding Geometry – 4

Standard

Aleksej Ivanovič Markuševič’s book “Remarkable Curves” discusses the properties of ellipses, parabolas, hyperbolas, lemniscates, cycloids, brachistochrones, spirals and catenaries. Among these, lemniscates are the only curves I had encountered just once before starting my undergraduate education (all the other curves appeared frequently in physics textbooks), and that too only to calculate the area enclosed by the curve. So I will discuss the properties of lemniscates in this post.

Let’s begin with a well-known curve, the ellipse. An ellipse is the locus of points whose sum of distances from two fixed points (called foci) is constant. My favourite fact about ellipses is that there is no general closed-form formula for the perimeter of an ellipse, and this little fact leads to the magical world of elliptic integrals. These, in turn, lead to the mysterious elliptic functions, which were discovered as inverse functions of elliptic integrals. Further, these functions are needed in the parameterization of certain curves, now called elliptic curves. For more details about this story, read the paper by Adrian Rice and Ezra Brown, “Why Ellipses Are Not Elliptic Curves”.

A lemniscate is defined as the locus of points whose product of distances from two fixed points $F_1$ and $F_2$ (called foci) is constant. Lemniscate means “with hanging ribbons” in Latin. If the length of the segment $\overline{F_1F_2}$ is $c$, then the midpoint of this segment lies on the curve precisely when the constant product equals $c^2/4$. In this case we get a figure-eight lying on its side.
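As a quick numerical sanity check, we can verify the defining property on sampled points. This is a sketch assuming the standard polar equation $r^2 = 2d^2\cos 2\theta$ for the figure-eight case, with foci at $(\pm d, 0)$ so that $c = 2d$ and the product constant is $c^2/4 = d^2$; the function name is my own:

```python
import math

def lemniscate_point(theta, d):
    """Point on the Bernoulli lemniscate r^2 = 2 d^2 cos(2 theta),
    whose foci are F1 = (-d, 0) and F2 = (d, 0), i.e. c = 2d."""
    r = math.sqrt(2 * d * d * math.cos(2 * theta))
    return r * math.cos(theta), r * math.sin(theta)

# The product of distances to the two foci should equal c^2/4 = d^2.
d = 3.0
for theta in [0.1 * i for i in range(8)]:      # stay where cos(2 theta) > 0
    x, y = lemniscate_point(theta, d)
    product = math.hypot(x + d, y) * math.hypot(x - d, y)
    assert abs(product - d * d) < 1e-9
```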

Lemniscate of Bernoulli; By Kmhkmh (Own work) [CC BY 4.0], via Wikimedia Commons

The attempt to calculate the perimeter of the above curve leads to an elliptic integral, so we can’t derive a general formula for its perimeter. Just like an ellipse!
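Although there is no elementary closed form, the perimeter can be computed numerically. A minimal sketch, assuming Gauss’s classical result that the arc length of $r^2 = a^2\cos 2\theta$ equals $2\pi a/M(1,\sqrt{2})$, where $M$ denotes the arithmetic–geometric mean:

```python
import math

def agm(a, b, tol=1e-15):
    # Gauss's arithmetic-geometric mean iteration
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Perimeter of the lemniscate r^2 = a^2 cos(2 theta): the elliptic
# integral 4a * int_0^{pi/4} dtheta / sqrt(cos 2 theta) evaluates,
# by Gauss's discovery, to 2 pi a / M(1, sqrt(2)).
a = 1.0
perimeter = 2 * math.pi * a / agm(1.0, math.sqrt(2.0))
print(perimeter)  # ~5.2441151, twice the lemniscate constant
```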

If we set the constant product not to $c^2/4$ but to some other value, the lemniscate changes its shape. When the constant is less than $c^2/4$, the lemniscate consists of two ovals, one of which contains inside it the point $F_1$, and the other the point $F_2$.

Cassini oval (x^2+y^2)^2−2c^2(x^2−y^2)=a^4−c^4 (in this equation the foci are at (±c, 0) and the constant product is a^2); Source: https://www.encyclopediaofmath.org/legacyimages/common_img/c020700b.gif

When the product constant is greater than $c^2/4$ but less than $c^2/2$, the lemniscate has the form of a biscuit. If the constant is close to $c^2/4$, the “waist” of the biscuit is very narrow and the shape of the curve is very close to the figure-eight shape.


If the constant differs little from $c^2/2$, the waist is hardly noticeable, and if the constant is equal to or greater than $c^2/2$, the waist disappears completely and the lemniscate takes the form of an oval.

Cassini oval (x^2+y^2)^2−2c^2(x^2−y^2)=a^4−c^4; Source: https://www.encyclopediaofmath.org/legacyimages/common_img/c020700a.gif
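The two-ovals versus one-piece regimes can be told apart by counting where the curve meets the line through the foci. A small sketch (the helper name is my own): with foci at $(\pm d, 0)$, so $c = 2d$, a point $(x, 0)$ lies on the curve with product constant $p$ exactly when $|x^2 - d^2| = p$, i.e. $x^2 = d^2 \pm p$:

```python
def x_axis_crossings(d, p):
    """Count intersections of the Cassini oval (foci at (+/-d, 0),
    distance product p) with the x-axis: |x^2 - d^2| = p."""
    roots = {d * d + p, d * d - p}     # candidate values of x^2
    count = 0
    for s in roots:
        if s > 0:
            count += 2                 # x = +sqrt(s) and x = -sqrt(s)
        elif s == 0:
            count += 1                 # the midpoint of the foci
    return count

d = 1.0                                # here c^2/4 = d^2 = 1
print(x_axis_crossings(d, 0.5))        # 4 -> two separate ovals (p < c^2/4)
print(x_axis_crossings(d, 1.0))        # 3 -> figure-eight through the midpoint
print(x_axis_crossings(d, 2.0))        # 2 -> a single closed curve
```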

We can generalize this whole argument further to get a lemniscate with an arbitrary number of foci, called a polynomial lemniscate.

Galton Board

Standard

Like the previous post, this post discusses another contribution of Jacob (Jacques) Bernoulli. The motivation for this post came from Cédric Villani’s recent TED talk. Though I am not a fan of probability theory, this “toy”, which I am going to discuss, is really interesting. Consider the following illustration from a journal’s cover:

The “Galton Board” was invented by Francis Galton in 1894. It provided a remarkable way to visualize, in the pre-digital-computer era, the distribution obtained by performing several Bernoulli trials. A Bernoulli trial is the simplest possible random experiment, with exactly two possible outcomes, “success” and “failure”, in which the probability of success (say, p) is the same every time the experiment is conducted. If we perform such a Bernoulli trial more than once (say, n times) and count the successes, we get what we call the Binomial Distribution. We get a discrete distribution like this:

And when the number of Bernoulli trials is very large (theoretically, an infinite number of trials), this Binomial Distribution can be approximated by the Normal Distribution, which is a continuous distribution.
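A short sketch of this approximation (function names are my own): compare the Binomial probabilities with the normal density whose mean and variance match, namely np and np(1−p):

```python
import math

def binom_pmf(n, p, k):
    # probability of exactly k successes in n Bernoulli trials
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(mu, sigma, x):
    # density of the Normal Distribution with mean mu, std sigma
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 1000, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
for k in (480, 500, 520):
    print(k, binom_pmf(n, p, k), normal_pdf(mu, sigma, k))
```

For n = 1000 the two columns already agree to several decimal places near the mean.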

The Normal Distribution is important because of the Central Limit Theorem. This theorem implies that if you have many independent variables that may be generated by all kinds of distributions, assuming that nothing too crazy happens, the aggregate of those variables will tend toward a normal distribution. This universality across different domains of science makes the normal distribution one of the centerpieces of applied mathematics and statistics.

Here is a video in which James Grime demonstrates how a Galton Board can be used to visualize the Normal Distribution approximation of the Binomial Distribution for a very large number of Bernoulli trials. The trial outcomes are represented graphically as a path in the Galton Board: success corresponds to a bounce to the right and failure to a bounce to the left.
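A Galton Board run can be simulated directly from this description: each ball takes one random bounce per row of pegs, and the bin it lands in is its number of right bounces. A minimal sketch (names are my own):

```python
import random
from collections import Counter

def galton(rows, balls, seed=0):
    # Each ball bounces right with probability 1/2 at each of `rows` pegs;
    # its final bin is the number of right bounces, a Binomial(rows, 1/2) draw.
    rng = random.Random(seed)
    bins = Counter()
    for _ in range(balls):
        position = sum(rng.random() < 0.5 for _ in range(rows))
        bins[position] += 1
    return bins

bins = galton(rows=10, balls=10000)
for k in range(11):
    print(f"{k:2d} {'#' * (bins[k] // 50)}")   # crude text histogram
```

The printed histogram shows the familiar bell shape building up in the middle bins.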

Bernoulli Numbers

Standard

I have referred to them once so far. Also, the Euler–Maclaurin formula I discussed in that post explains a lot about their occurrences (for example). Now I think it’s time to dive deeper and try to understand them.

In 1631, Johann Faulhaber published Academia Algebra (a German text despite the Latin title). This text contains a generalisation of sums of powers, which in modern notation reads:

$\displaystyle{\sum_{m=1}^{n} m^{k-1} = \frac{1}{k}\left[n^k + \binom{k}{1} n^{k-1} \times \frac{1}{2} + \binom{k}{2}n^{k-2} \times \frac{1}{6} +\binom{k}{3}n^{k-3} \times 0+\binom{k}{4}n^{k-4} \times \frac{-1}{30}+ \ldots\right]}$

Observe that the expression in square brackets on the right-hand side looks like a binomial expansion, but with certain constant factors multiplied in (some of which can be 0). These constants were named “Bernoulli Numbers” by Abraham de Moivre, since they were intensively discussed by Jacob (Jacques) Bernoulli in Ars Conjectandi, published in Basel in 1713, eight years after his death.

I will follow the notation from The Book of Numbers by Conway and Guy (published in 1996). So we will denote the $n^{th}$ Bernoulli number by $B^{n}$, where

$\displaystyle{B^0 = 1, B^1 = \frac{1}{2}, B^2 = \frac{1}{6}, B^3= B^{5} = \ldots = B^{odd} = 0, B^4 = B^8 = \frac{-1}{30}, B^6 = \frac{1}{42}, B^{10} = \frac{5}{66}, \ldots}$

This notation enables us to calculate the sum of $k^{th}$ powers of the first $n$ natural numbers quickly. We can re-write the above summation formula as:

$\displaystyle{\sum_{m=1}^n m^{k-1} = \frac{(n+B)^k - B^k}{k}}$

To illustrate how to use this formula, let’s calculate the sum of the $5^{th}$ powers of the first 1000 natural numbers:

$\displaystyle{\sum_{m=1}^{1000} m^{5} = \frac{(1000+B)^6 - B^6}{6}}$

$\displaystyle{ = \frac{1}{6}\left[1000^6 + 6B^1 1000^5 + 15B^2 1000^4 + 15 B^4 1000^2\right]}$

Here we have binomially expanded the right-hand side and used the fact that $B^{odd>1} = 0$. Now we substitute the corresponding values of the Bernoulli Numbers to get:

$\displaystyle{\sum_{m=1}^{1000} m^{5} =\frac{1}{6}\left[10^{18} + 3\times 10^{15} + 2.5 \times 10^{12} - 0.5 \times 10^6\right]=\frac{10030024999995\times 10^5}{6}}$

$1^5 + 2^5 + \cdots + 1000^5 = 167167083333250000$

(This answer was cross-checked using SageMath.)
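The calculation above is easy to automate. A sketch (the helper power_sum is my own), hardcoding the Bernoulli values listed earlier in the $B^1 = +\frac{1}{2}$ convention and expanding $(n+B)^{k}$ binomially with $B^j$ read as the $j^{th}$ Bernoulli number:

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers B^0 .. B^8 in the convention above (B^1 = +1/2)
B = [Fraction(v) for v in ("1", "1/2", "1/6", "0", "-1/30", "0", "1/42", "0", "-1/30")]

def power_sum(n, p):
    # sum_{m=1}^n m^p = ((n+B)^{p+1} - B^{p+1}) / (p+1): expand binomially;
    # the B^{p+1} terms cancel, leaving sum_{j<k} C(k,j) B^j n^{k-j} with k = p+1.
    k = p + 1
    return sum(comb(k, j) * B[j] * Fraction(n) ** (k - j) for j in range(k)) / k

print(power_sum(1000, 5))   # 167167083333250000
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point error in the intermediate terms.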

There are many ways to calculate the values of Bernoulli numbers, but the simplest one is to use the recursive definition:

$(B - 1)^k = B^k$ for $k>1$, which gives the value of $B^{k-1}$
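This recursion is easy to implement with exact rational arithmetic. A sketch (my own function bernoulli): expanding $(B-1)^k = B^k$, the $B^k$ terms cancel, and solving for the highest remaining power gives $B^{k-1} = \frac{1}{k}\sum_{j=0}^{k-2}\binom{k}{j}(-1)^{k-j}B^j$:

```python
from fractions import Fraction
from math import comb

def bernoulli(kmax):
    # B^0 .. B^kmax in the convention B^1 = +1/2, via (B - 1)^k = B^k
    B = [Fraction(1)]
    for k in range(2, kmax + 2):
        # coefficient of B^{k-1} in the expansion is -k, so solve for it
        s = sum(comb(k, j) * (-1) ** (k - j) * B[j] for j in range(k - 1))
        B.append(s / Fraction(k))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

The output reproduces the values listed at the start of this post.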

There is another definition of Bernoulli Numbers, using a power series:

$\displaystyle{\frac{z}{e^z-1} = \sum_{k=0}^{\infty} \frac{B^k z^k}{k!}}$

This gives a slightly different sequence of Bernoulli numbers, since in this convention $B^{1}=\frac{-1}{2}$, and the recursive definition is

$(B+1)^{k} = B^{k}$ for $k>1$, which gives the value of $B^{k-1}$

This definition can be used to calculate the value of $\tan(z)$, since its infinite series expansion has Bernoulli numbers in its coefficients.

$\displaystyle{\tan(z)=\sum_{n=1}^{\infty}\frac{B^{2n}(-4)^n(1-4^n)}{(2n)!}z^{2n-1}}$
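To check this series numerically, we can compare a partial sum against math.tan for a small argument. A sketch using the even-indexed Bernoulli numbers listed earlier (the even-indexed values agree in both conventions):

```python
import math
from fractions import Fraction

# B^2, B^4, ..., B^10 as listed earlier in this post
B2n = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42),
       8: Fraction(-1, 30), 10: Fraction(5, 66)}

def tan_series(z, terms=5):
    # partial sum of tan z = sum_{n>=1} B^{2n} (-4)^n (1 - 4^n) / (2n)! * z^{2n-1}
    total = 0.0
    for n in range(1, terms + 1):
        coeff = B2n[2 * n] * (-4) ** n * (1 - 4 ** n)
        total += float(coeff) / math.factorial(2 * n) * z ** (2 * n - 1)
    return total

z = 0.3
print(tan_series(z), math.tan(z))   # agree closely for small z
```

The first two terms work out to $z + \frac{z^3}{3}$, matching the familiar Taylor expansion of $\tan z$.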

Integration & Summation

Standard

A few months ago I wrote a series of blog posts on “rigorous” definitions of integration [Part 1, Part 2]. Last week I identified an interesting flaw in my “imagination” of integration in terms of “limiting summation”, and it led to an interesting investigation.

While defining integration as the area under a curve, we consider rectangles of equal width and let that width approach zero. Hence I used to imagine integration as a summation of individual heights, since the width approaches zero in the limiting case. It was just like extending summation over the integers to summation over the real numbers.

My Thought Process

But as per my imagination above, since the width of a line segment is zero, I am considering rectangles of zero width. Then each rectangle has zero area (I proved it recently). So the area under the curve would be zero! Paradox!

I realized that, just like the ancient Greeks, I was using a very bad imagination of the limiting process!

The Insight

But, as it turns out, my imagination is NOT completely wrong. I googled and stumbled upon this Stack Exchange post:

The answer by Jonathan to this question captures my imagination:

The idea is that $\sum_{i=0}^n f(i)$ is itself a Riemann-sum approximation of $\int_0^n f(x)dx$, thus $\displaystyle{\sum_{i=0}^n f(i) = \int_{0}^n f(x)dx + \text{higher order corrections}}$

The generalization of the above idea gives us the Euler–Maclaurin formula:

$\displaystyle{\sum_{i=m+1}^n f(i) = \int^n_m f(x)\,dx + B_1 \left(f(n) - f(m)\right) + \sum_{k=1}^p\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R}$

where $m,n,p$ are natural numbers, $f(x)$ is a real-valued, sufficiently differentiable function, $B_k$ are the Bernoulli numbers and $R$ is an error term which is normally small for suitable values of $p$ (it depends on $n, m, p$ and $f$).
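A concrete check (the example is my own choice): for $f(x) = x^3$ with $m = 0$, $n = 10$ and $p = 1$, all derivatives of order three and higher are constant or zero, so the remainder vanishes and the formula is exact:

```python
# Euler-Maclaurin check for f(x) = x^3 on m = 0, n = 10 with p = 1,
# using B_1 = 1/2 and B_2 = 1/6; here the remainder R is exactly 0.
def f(x):
    return x ** 3

def f1(x):
    return 3 * x ** 2          # f'

m, n = 0, 10
lhs = sum(f(i) for i in range(m + 1, n + 1))          # 1^3 + ... + 10^3
integral = (n ** 4 - m ** 4) / 4                      # int_m^n x^3 dx
rhs = integral + (1 / 2) * (f(n) - f(m)) + (1 / 6) / 2 * (f1(n) - f1(m))
print(lhs, rhs)   # 3025 3025.0
```

Both sides equal 3025: the integral contributes 2500, the $B_1$ term 500, and the $B_2$ term 25.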

The proof of the above formula is by the principle of mathematical induction. For more details, see this beautiful paper: Apostol, T. M. (1999). An Elementary View of Euler’s Summation Formula. The American Mathematical Monthly, 106(5), 409–418. http://doi.org/10.2307/2589145