Interesting Diagonal Discovery

Yesterday (while doodling in my Number Theory class) I was playing with diagonals of polygons, and something interesting appeared in my notebook. For the first time in my life I have discovered a theorem and its proof on my own (it’s original!). In this blog post I will share it with you all!

Consider an $n$-sided polygon, and start drawing diagonals from each vertex one-by-one. While doing so count the number of new diagonals contributed by each vertex. Here is the “Experiment” done for $n=4,5,6,7,8$.

The number written near each vertex indicates the number of new diagonals contributed by that vertex (following anti-clockwise order and starting with the red one).

Based on the above experiment, I observed:

The number of new diagonals contributed by each vertex of an $n$-sided polygon follows the sequence: $(n-3),(n-3),(n-4),\ldots, 1,0,0$

Now let’s try to prove this observation:

Since a vertex cannot be joined by a diagonal to itself or to its two adjacent vertices, each vertex can have at most $(n-1)-2 = (n-3)$ diagonals.

Now, let’s count the new contribution of each vertex by considering restrictions on the maximum possible contribution (which is $(n-3)$).

For the first vertex, we have no restriction, so it contributes $(n-3)$ diagonals.

Also, since the second vertex is adjacent to the first one and adjacent vertices can’t affect each other’s contribution (there is no diagonal between them), it also contributes $(n-3)$ diagonals.

But starting with the third vertex we observe that the first vertex has already used one of the diagonals from its maximum contribution (the second vertex can’t affect its count, being an adjacent vertex), so it contributes $(n-3)-1 = (n-4)$ new diagonals.

Continuing the same way, for the $k^{th}$ vertex, the vertices $1$ to $(k-2)$ each restrict its contribution by one diagonal (vertex $(k-1)$, being adjacent, does not). Thus for $3 \leq k \leq n-1$ we get the number of new diagonals contributed by the $k^{th}$ vertex equal to $(n-3)-(k-2) = (n-k-1)$.

Since a new contribution can’t be negative, and for the $(n-1)^{th}$ vertex we already end up with zero new contribution, the $n^{th}$ vertex will also contribute zero diagonals.

Combining all of the above arguments completes the proof of my observation, which I call the “New Diagonal Contribution Theorem” (NDCT).
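As a double-check, here is a small brute-force sketch of the “Experiment” above (my own illustration, not part of the derivation; `new_diagonal_counts` is a name I made up). It visits the vertices of an $n$-gon in order and counts how many new diagonals each one contributes:

```python
def new_diagonal_counts(n):
    """Visit the vertices 0..n-1 of an n-gon in order and count how many
    new diagonals each vertex contributes."""
    seen = set()                  # diagonals drawn so far
    counts = []
    for v in range(n):
        new = 0
        for w in range(n):
            # skip v itself and its two neighbours: no diagonal there
            if (w - v) % n in (0, 1, n - 1):
                continue
            d = frozenset((v, w))
            if d not in seen:
                seen.add(d)
                new += 1
        counts.append(new)
    return counts

for n in range(4, 9):
    print(n, new_diagonal_counts(n))   # matches (n-3),(n-3),(n-4),...,1,0,0
```

As a sanity check, the counts sum to $n(n-3)/2$, the total number of diagonals of an $n$-gon.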

[poem] Harmonic Noise

———-
Harmonic Noise

One day I heard something musical,
It sounded to me mystical.
Since it was ordered,
A harmonic pattern was being followed.
The pattern was elegant,
It was inverse of each natural number element.
I made the number elements bigger hero,
The pattern approached symmetrical zero.
When I summed up the pattern without bound,
I got only noise without any musical sound.

-Gaurish
————
For more details about the harmonic series, refer to Wikipedia: https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
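The last two lines of the poem allude to the divergence of the harmonic series: the partial sums $\sum_{k=1}^n 1/k$ grow without bound, though only like $\ln n$. A minimal numerical sketch (my own illustration):

```python
import math

def harmonic_partial_sum(n):
    """Return 1/1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    h = harmonic_partial_sum(n)
    # partial sums track ln(n) + gamma, with gamma ~ 0.5772 (Euler-Mascheroni)
    print(n, round(h, 4), round(math.log(n) + 0.5772, 4))
```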

Edit (20 May 2018): A video by Marc Chamberland discussing a puzzle involving the harmonic series.

What is Analysis?

I do not know how to reproduce proofs in Analysis exams, but in this post I will try to understand why we study Analysis. Most of us believe that Analysis is the same as rigorous Calculus. Also, what makes Mathematics different from Physics is “rigour”. But why do mathematicians worry so much about rigour? To answer this question, one needs to understand what is called “Analysis” in mathematics.

A standard definition of Analysis is (as in [R]):

Analysis is the systematic study of real and complex-valued continuous functions.

The above definition tells us what we will achieve by applying our understanding of Analysis, but it doesn’t explain what “Analysis” itself is.

Clearly, analysis has its roots in calculus. Newton and Leibniz defined differentiation and integration without bothering about the definition of limit. Euler found the correct values of various infinite series by implicitly assuming an “algebra of infinite series”, which doesn’t exist! I myself have used the commutativity of addition for the terms of an infinite series by assuming such an “algebra of infinite series”!! Great mathematicians like Euler and Laplace, who even solved differential equations, never bothered to think about the foundations of calculus, because they studied only real-variable functions arising from physical problems and only series that are power series.

Though, without bothering about foundations, one could easily (intuitively) arrive at correct answers thanks to the deep insights of great mathematicians, it became extremely difficult to teach such “deep insight”-based mathematics to students. Without a sense of rigour it also became difficult to prove claims in general cases (like the difference between point-wise continuity and uniform continuity).
This led to the belief that:

Calculus (and thus Mathematics) is as good as theory of ghosts i.e. without any basis.

Also, it became impossible for mathematicians to apply the techniques of calculus beyond physical situations, i.e. generalization of the concepts was not possible.

To get rid of such allegations, Lagrange suggested that the only way to make calculus rigorous was to reduce it to Algebra (since algebra has an inherent power of generalization). To illustrate this he defined the derivative of a real function, $f'(x)$, as the coefficient of the linear term in $h$ in the Taylor series expansion of $f(x+h)$. Again, this was wrong without consideration of limits and convergence, since there is no “algebra of infinite series”!!! But this idea of using Algebra to make calculus rigorous was successfully realized by Cauchy: he used the “algebra of inequalities” (while also implicitly assuming the completeness property of the real numbers) by introducing $\epsilon$ and $\delta$ (not explicitly, but in words).

How did the “algebra of inequalities” become the technique that created “rigorous calculus”, which we know as “Analysis”? One main part of calculus was “approximation”, i.e. computing an upper bound on the error in an approximation: the difference between the sum of a series and its $n^{th}$ partial sum. Thus the “tool of approximation” was transformed into a “tool of rigour”.
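As a concrete (modern, and entirely my own) illustration of such an error bound: for a geometric series $\sum_{k\geq 0} r^k$ with $|r|<1$, the difference between the sum of the series and the $n^{th}$ partial sum is at most $|r|^{n+1}/(1-|r|)$:

```python
def geometric_partial_sum(r, n):
    """Partial sum r^0 + r^1 + ... + r^n."""
    return sum(r ** k for k in range(n + 1))

def geometric_tail_bound(r, n):
    """Upper bound on |sum - n-th partial sum| for the geometric series, |r| < 1."""
    return abs(r) ** (n + 1) / (1 - abs(r))

r = 0.5
total = 1 / (1 - r)                    # exact sum of the series
for n in (5, 10, 20):
    error = abs(total - geometric_partial_sum(r, n))
    assert error <= geometric_tail_bound(r, n)   # the bound dominates the error
    print(n, error, geometric_tail_bound(r, n))
```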

Initially, the integral was thought of as the inverse of the differential. But sometimes the inverse could not be computed exactly, so Euler remarked that the integral could be approximated as closely as one liked by a sum (compare the geometric picture of an area being approximated by rectangles). Again, we got a better definition of the integral from the work done by various mathematicians to approximate the values of definite integrals. Poisson, who was interested in complex integration and concerned about the behaviour and existence of integrals, stated and proved “the fundamental proposition of the theory of definite integrals”. He proved it using an inequality result: the Taylor series with remainder. This was the first attempt to prove the equivalence of the antiderivative and limit-of-sums conceptions of the integral. But Poisson implicitly assumed the existence of an antiderivative and a bounded first derivative for $f$ on the given interval, and his proof assumed that the subintervals on which the sum is taken are all equal. Again, Cauchy added rigour to Poisson’s proof.

Since most algebraic formulas hold only under certain conditions and for certain values of the quantities they contain, one could not assume that what worked for finite expressions automatically worked for infinite ones. Also, just because there was an operation called “taking a derivative” did not mean that the inverse of that operation always produced a result: the existence of the definite integral had to be proved. Borrowing from Lagrange the mean value theorem for integrals, Cauchy finally proved the “Fundamental Theorem of Calculus”.
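The equivalence of the two conceptions of the integral can also be seen numerically. A sketch of my own: approximating $\int_0^1 x^2\,dx$ by Riemann sums on equal subintervals and comparing with the antiderivative value $F(1)-F(0) = 1/3$:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b] on n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3               # an antiderivative of f
exact = F(1) - F(0)                    # Fundamental Theorem: F(b) - F(a)
for n in (10, 100, 1000):
    print(n, riemann_sum(f, 0, 1, n), exact)   # the sums approach 1/3
```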

Thus, algebraic approximations produced the algebra of inequalities. The application of the algebra of inequalities led to the concept of approximation in calculus, which in turn led to three key concepts: “error bounds for series” (d’Alembert), “inequalities about derivatives” (Lagrange) and “approximations to integrals” (Euler). I believe that these three concepts, combined with rigour, led to what we call “Analysis” in Mathematics.

The subject of analysis itself consists of 4 main flavours:

• Real Analysis
• Complex Analysis
• Functional Analysis
• Harmonic Analysis

with the generalization of the basic tools in terms of measure theory (leading to a generalization of integration) and the calculus of several variables. For example, the derivative of a several-variable function $f: \Omega \to \mathbb{R}^m$, where $\Omega \subset \mathbb{R}^n$, is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ (or equivalently, an $m\times n$ matrix with real entries) instead of a real number; the limit defining it has the norm of the increment in the denominator. Also, we can generalize the concept of Taylor series to several-variable functions using the notion of “partial derivatives” as

$\displaystyle{T(x_1,\ldots,x_d) = \sum_{n_1=0}^\infty \cdots \sum_{n_d = 0}^\infty \frac{(x_1-a_1)^{n_1}\cdots (x_d-a_d)^{n_d}}{n_1!\cdots n_d!}\,\left(\frac{\partial^{n_1 + \cdots + n_d}f}{\partial x_1^{n_1}\cdots \partial x_d^{n_d}}\right)(a_1,\ldots,a_d) }$
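To make the “derivative as an $m\times n$ matrix” idea concrete, here is a small sketch (my own illustration) that approximates the Jacobian matrix of a map $f:\mathbb{R}^2 \to \mathbb{R}^3$ by central finite differences:

```python
def jacobian(f, x, h=1e-6):
    """Approximate the m x n Jacobian of f at the point x by central differences."""
    m = len(f(x))
    J = [[0.0] * len(x) for _ in range(m)]
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# f maps R^2 to R^3, so its derivative at a point is a 3 x 2 matrix
def f(v):
    x, y = v
    return [x * y, x + y, x ** 2]

print(jacobian(f, [2.0, 3.0]))   # close to [[3, 2], [1, 1], [4, 0]]
```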

Using the “change of variables theorem” we can evaluate integrals of several-variable functions over a “cell” by evaluating multiple integrals. Finally, using the concept of “differential forms”, originating from geometry, we can prove Stokes’ theorem, of which the “fundamental theorem of calculus” turns out to be a special case (among many other important theorems like Green’s theorem and the Divergence theorem).

References:
[G] J V Grabiner, “Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus”, American Mathematical Monthly 90 (1983), 185–194

[R] John Renze and Eric W. Weisstein, “Analysis.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/Analysis.html

[S] Ian Stewart,  “analysis | mathematics”. Encyclopedia Britannica.
http://www.britannica.com/topic/analysis-mathematics

[X] Mathematical analysis. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Mathematical_analysis&oldid=31489

[SM] Maurice Sion, History of measure theory in the twentieth century, www.math.ubc.ca/~marcus/Math507420/Math507420hist.pdf

[H] Barbara Hubbard and John H. Hubbard, “Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach”, Prentice Hall.

FLT proof fits on a shirt!

Around 1637, Pierre de Fermat wrote his Last Theorem in the margin of his copy of the Arithmetica next to Diophantus’ sum-of-squares problem:

It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.

In May 1995, Andrew Wiles’ proof of “Fermat’s Last Theorem” (FLT) was published! To celebrate his achievement, various conferences and meetings were organized. A Fermat’s Last Theorem T-shirt was designed for the Boston University meeting on FLT, August 9-18, 1995. The T-shirt was designed by members of the 1995 PROMYS counselor staff who attended “A Conference on Number Theory and Fermat’s Last Theorem”. The conference was intended to be as accessible as possible to a general mathematical audience. It focused on two major topics: (1) Andrew Wiles’ proof of the Taniyama-Shimura-Weil conjecture for semistable elliptic curves; and (2) the earlier works of Frey, Serre, and Ribet showing that Wiles’ theorem would complete the proof of Fermat’s Last Theorem.

The PROMYS T-shirt, which summarizes the proof of FLT, with complete references on the back.

Remarking on the information printed on these T-shirts, Fernando Q. Gouvêa wrote the following poem:

They said the proof was long and hard,
and painful to behold,
But at the conference at BU,
we got the real dirt.
The proof, it sure is tricky,
but its length isn’t so bold–
It doesn’t fit the margin,
but it does fit on a shirt.

There are many more poems on FLT: “Fermat’s Last Theorem and Poetry”, Lecturas Matemáticas, Volumen 22 (2001), páginas 137–147.