Tag Archives: real analysis

Enclosing closed curves in squares


Let’s look at the following innocent-looking question:

Is it possible to circumscribe a square about every closed curve?

The answer is YES! I found an unexpected and interesting proof in the book “Intuitive Combinatorial Topology” by V.G. Boltyanskii and V.A. Efremovich. Let’s now look at an outline of the proof of our claim:

1. Let any closed curve K be given. Draw any line l and a line l’ parallel to l, as shown in fig. 1.

[Figure 1]

2. Move the lines l and l’ closer to K till they just touch the curve K, as shown in fig. 2. Let the new lines be m and m’. Call these the support lines of the curve K with respect to line l.

[Figure 2]

3. Draw a line l* perpendicular to l and a line (l*)’ parallel to l*. Draw the support lines of the curve K with respect to line l*, as shown in fig. 3. Let the rectangle formed by the four support lines be ABCD.

[Figure 3]

4. The rectangle corresponding to a line becomes a square when AB and AD are equal. Let the length of the side parallel to l (which is AB) be h_1(\mathbf{l}) and of the side perpendicular to l (which is AD) be h_2(\mathbf{l}). For a given line n, define a real-valued function f(\mathbf{n}) = h_1(\mathbf{n})-h_2(\mathbf{n}) on the set of lines lying outside the curve. Now rotate the line l in an anti-clockwise direction till l coincides with l*. The rectangle corresponding to l* is also ABCD (the same as that with respect to l). When l coincides with l*, we can say that AB = h_2(\mathbf{l^*}) and AD = h_1(\mathbf{l^*}).

[Figure 4]

5. We can see that when the line is l, f(\mathbf{l}) = h_1(\mathbf{l})-h_2(\mathbf{l}). When we rotate l in an anti-clockwise direction, the value of the function f changes continuously, i.e. f is a continuous function (I do not know how to “prove” that this is a continuous function but it is intuitively clear to me; if you have a proof please mention it in the comments). When l coincides with l*, the value of the function is f(\mathbf{l^*}) = h_1(\mathbf{l^*})-h_2(\mathbf{l^*}). Since h_1(\mathbf{l^*}) = h_2(\mathbf{l}) and h_2(\mathbf{l^*}) = h_1(\mathbf{l}), we get f(\mathbf{l^*}) = -(h_1(\mathbf{l}) - h_2(\mathbf{l})) = -f(\mathbf{l}). So f is a continuous function which changes sign when the line is rotated from l to l*. Since f is a continuous function, using the generalization of the intermediate value theorem we can show that there exists a line p between l and l* such that f(p) = 0, i.e. AB = AD. So the rectangle corresponding to line p will be a square.

Hence every closed curve K can be circumscribed by a square.
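
To see the intermediate value argument in action numerically, here is a small Python sketch (my own illustration, not part of the book’s proof): taking an ellipse as the curve K, it computes the side lengths h_1 and h_2 of the bounding rectangle for the direction making angle theta with the x-axis, and bisects on f(theta) = h_1 - h_2 to locate the angle at which the rectangle becomes a square. The sample curve and all names are arbitrary choices.

import math

# Sample points on a closed curve K; an ellipse is chosen purely for illustration.
K = [(3 * math.cos(t), math.sin(t)) for t in
     [2 * math.pi * i / 1000 for i in range(1000)]]

def widths(theta):
    # Side lengths of the bounding rectangle of K whose sides are parallel and
    # perpendicular to the direction making angle theta with the x-axis.
    u = (math.cos(theta), math.sin(theta))       # direction of line l
    v = (-math.sin(theta), math.cos(theta))      # direction of line l*
    proj_u = [x * u[0] + y * u[1] for (x, y) in K]
    proj_v = [x * v[0] + y * v[1] for (x, y) in K]
    h1 = max(proj_u) - min(proj_u)               # length AB
    h2 = max(proj_v) - min(proj_v)               # length AD
    return h1, h2

def f(theta):
    h1, h2 = widths(theta)
    return h1 - h2

# f(0) and f(pi/2) have opposite signs, so bisect to find the angle where f = 0.
a, b = 0.0, math.pi / 2
for _ in range(60):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:
        b = c
    else:
        a = c

AB, AD = widths(a)
print("angle:", a, "AB:", AB, "AD:", AD)   # AB and AD agree: the rectangle is a square

By the symmetry of the ellipse the answer here is theta = \pi/4, and the printed values of AB and AD agree.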


Four Examples


Following are four examples of sequences (along with their properties) which can be helpful in gaining a better understanding of theorems about sequences in real analysis:

  • \langle n\rangle_{n=1}^{\infty} : unbounded, strictly increasing, diverging
  • \langle \frac{1}{n}\rangle_{n=1}^{\infty} : bounded, strictly decreasing, converging
  • \langle \frac{n}{1+n}\rangle_{n=1}^{\infty} : bounded, strictly increasing, converging
  • \langle (-1)^{n+1}\rangle_{n=1}^{\infty} : bounded, not converging (oscillating)

I was really amazed to find that x_n=\frac{n}{n+1} is a strictly increasing sequence, and in general, the function f(x)=\frac{x}{1+x} defined for all positive real numbers is an increasing function bounded by 1:

 

The graph of x/(1+x) for x>0, plotted using SageMath 7.5.1
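
As a quick numerical sanity check of these claims (just a throwaway Python snippet, not a proof), the first few terms of x_n = n/(n+1) can be generated and tested directly:

# Quick check that x_n = n/(n+1) is strictly increasing and stays below 1.
terms = [n / (n + 1) for n in range(1, 21)]
print(terms[:5])                                        # 0.5, 0.666..., 0.75, 0.8, 0.833...
assert all(a < b for a, b in zip(terms, terms[1:]))     # strictly increasing
assert all(t < 1 for t in terms)                        # bounded above by 1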

 

Also, just a passing remark: \log(x)< x for all x>0, and the function \frac{x}{\log(x)} appearing in the prime number theorem is unbounded for x>1 and increasing for x>e:


The plot of x/log(x) for x>2. The dashed line is y=x for the comparison of growth rate. Plotted using SageMath 7.5.1
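
The same comparison can be made numerically; this short Python snippet (an illustration only) prints x/\log(x) at a few points, showing the dip near x = e followed by unbounded growth:

import math

# x / log(x) first dips (its minimum is at x = e) and then grows without bound.
for x in (2, math.e, 10, 100, 1000, 10**6):
    print(x, x / math.log(x))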

 

So many Integrals – II


As promised in the previous post, I will now briefly discuss the remaining two flavours of integrals.

Stieltjes Integral


Stieltjes

In 1894, the Dutch mathematician Thomas Stieltjes, while solving the moment problem (that is: given the moments of all orders of a body, find the distribution of its mass), gave a generalization of the Darboux integral.

Let P : a = x_0 < x_1 < x_2<\ldots < x_n = b, n being an integer, be a partition of the interval [a, b].

For a function \alpha, monotonically increasing on [a,b], we write:

\Delta \alpha_i = \alpha(x_i) - \alpha(x_{i-1})

Let f be a bounded function defined on an interval [a, b],\quad a, b being real numbers. We define the sums

S_P = \sum_{i=1}^n f(t_i)\Delta \alpha_i, \quad \overline{S}_P = \sum_{i=1}^n f(s_i)\Delta \alpha_i

where t_i,s_i \in [x_{i-1} , x_i] are such that

f(t_i) = \text{sup} \{ f(x) : x \in [x_{i-1}, x_{i}]\},

f(s_i) = \text{inf} \{ f(x) : x \in [x_{i-1}, x_{i}]\}

If \text{inf}\{S_P\} and \text{sup}\{\overline{S}_P\} are equal, we denote the common value by \int_{a}^{b} f(x) d\alpha(x) and call it the Stieltjes integral of f with respect to \alpha over [a,b].
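
To make the definition concrete, here is a small Python sketch (my own illustration, with f(x) = x and \alpha(x) = x^2 chosen arbitrarily) that computes the two Stieltjes sums over uniform partitions of [0,1]; both approach the common value 2/3, which is \int_0^1 x \, d(x^2):

# Lower and upper Stieltjes sums of f with respect to alpha over uniform partitions of [a, b].
# Here f(x) = x is increasing, so its inf/sup on each subinterval sit at the endpoints.
f = lambda x: x
alpha = lambda x: x * x          # monotonically increasing on [0, 1]

def stieltjes_sums(n, a=0.0, b=1.0):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    d_alpha = [alpha(xs[i]) - alpha(xs[i - 1]) for i in range(1, n + 1)]
    lower = sum(f(xs[i - 1]) * d_alpha[i - 1] for i in range(1, n + 1))
    upper = sum(f(xs[i]) * d_alpha[i - 1] for i in range(1, n + 1))
    return lower, upper

for n in (10, 100, 1000):
    print(n, stieltjes_sums(n))   # both sums approach 2/3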


 

Lebesgue Integral


Lebesgue

Let me quote the Wikipedia article:

The integral of a function f between limits a and b can be interpreted as the area under the graph of f. This is easy to understand for familiar functions such as polynomials, but what does it mean for more exotic functions? In general, for which class of functions does “area under the curve” make sense? The answer to this question has great theoretical and practical importance.

In 1901, the French mathematician Henri Léon Lebesgue generalized the notion of the integral by extending the concept of the area below a curve to include functions with uncountably many discontinuities.

Lebesgue defined his integral by partitioning the range of a function and summing up sets of x-coordinates belonging to given y-coordinates, rather than, as had traditionally been done, partitioning the domain.

Lebesgue himself, according to his colleague Paul Montel, compared his method with paying off a debt (see p. 803, The Princeton Companion to Mathematics):

I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.

A set \mathcal{A} is said to be Lebesgue measurable if for each set \mathcal{E} \subset \mathbb{R} the Carathéodory condition:

m^{*} (\mathcal{E}) = m^{*}(\mathcal{E} \cap \mathcal{A}) + m^{*}(\mathcal{E}\backslash \mathcal{A})

is satisfied, where m^{*}(\mathcal{A}) is called the outer measure of \mathcal{A} and is defined as:

m^{*}(\mathcal{A}) = \inf\sum\limits_{n=1}^\infty (b_n-a_n)

where the infimum is taken over all countable collections of closed intervals [a_n,b_n], a_n\leq b_n, that cover \mathcal{A}.

The Lebesgue integral of a simple function \phi(x) = \sum_{i=1}^n c_i \chi_{\mathcal{A}_i} (x) on \mathcal{A}, where \mathcal{A}=\bigcup_{i=1}^{n} \mathcal{A}_{i}, the \mathcal{A}_i are pairwise disjoint measurable sets and c_1, c_2, \ldots, c_n are real numbers, is defined as:

\int\limits_{\mathcal{A}} \phi dm = \sum\limits_{i=1}^{n} c_i m(\mathcal{A}_i)

where, m(\mathcal{A}_i) is the Lebesgue measure of a measurable set \mathcal{A}_i.

An extended real value function f: \mathcal{A}\rightarrow \overline{\mathbb{R}} defined on a measurable set \mathcal{A}\subset\mathbb{R} is said to be Lebesgue measurable on \mathcal{A} if f^{-1} ((c,\infty]) = \{x \in\mathcal{A} : f(x) > c\} is a Lebesgue measurable subset of \mathcal{A} for every real number c.

If f is Lebesgue measurable and non-negative on \mathcal{A} we define:

\int\limits_{\mathcal{A}} f dm = \sup \int\limits_{\mathcal{A}} \phi dm

where the supremum is taken over all simple functions \phi such that 0\leq \phi \leq f.

The function f is said to be Lebesgue integrable on \mathcal{A} if its integral over \mathcal{A} is finite.
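
As a rough numerical illustration of these definitions (a sketch of my own, not a rigorous computation), the following Python snippet approximates the Lebesgue integral of f(x) = x^2 over [0,1] by simple functions \phi \leq f obtained by partitioning the range of f; the measures of the level sets are themselves estimated on a fine grid:

# Approximate the Lebesgue integral of f(x) = x^2 over A = [0, 1] by simple functions
# phi <= f built from a partition of the RANGE of f (illustrative sketch only).
f = lambda x: x * x
N_GRID = 20_000                                    # grid used to estimate measures of level sets
dx = 1.0 / N_GRID
fx = [f(i / N_GRID) for i in range(N_GRID)]        # sampled values of f on [0, 1)

def lebesgue_approx(n_levels):
    top = max(fx)
    levels = [top * j / n_levels for j in range(n_levels + 1)]
    total = 0.0
    for j in range(n_levels):
        # estimated measure of the level set A_j = {x : levels[j] <= f(x) < levels[j+1]}
        m_Aj = sum(1 for v in fx if levels[j] <= v < levels[j + 1]) * dx
        total += levels[j] * m_Aj                  # c_j * m(A_j) with c_j = levels[j], so phi <= f
    return total

for n in (4, 16, 64):
    print(n, lebesgue_approx(n))                   # increases towards 1/3 as the levels are refined

As the range partition is refined, the value increases towards 1/3, agreeing with the familiar Riemann integral of x^2 over [0,1].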


The Lebesgue integral is deficient in one respect. The Riemann integral generalizes to the improper Riemann integral, which can integrate functions whose domain of definition is not a closed interval. The Lebesgue integral integrates many of these functions, but not all of them.

So many Integrals – I


We all know that area is the basis of integration theory, just as counting is the basis of the real number system. So, we can say:

An integral is a mathematical operator that can be interpreted as an area under a curve.

But in mathematics we have various flavours of integrals named after their discoverers. Since the topic is a bit long, I have divided it into two posts. In this and the next post I will write their general forms and then briefly discuss them.

Cauchy Integral


Newton, Leibniz and Cauchy (left to right)

This was a rigorous formulation of Newton’s and Leibniz’s idea of integration, given in 1826 by the French mathematician Baron Augustin-Louis Cauchy.

Let f be a positive continuous function defined on an interval [a, b],\quad a, b being real numbers. Let P : a = x_0 < x_1 < x_2<\ldots < x_n = b, n being an integer, be a partition of the interval [a, b] and form the sum

S_P = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i)

where t_i \in [x_{i-1} , x_i] is such that f(t_i) = \text{Minimum} \{ f(x) : x \in [x_{i-1}, x_{i}]\}

By adding more points to the partition P, we can get a new partition, say P', which we call a ‘refinement’ of P, and then form the sum S_{P'}. It is trivial to see that S_P \leq S_{P'} \leq \text{Area bounded between the x-axis and the function } f.

Since f is continuous (and positive), S_P becomes closer and closer to a unique real number, say k, as we take more and more refined partitions in such a way that |P| := \text{Maximum} \{x_i - x_{i-1}, 1 \leq i \leq n\} becomes closer to zero. Such a limit is independent of the choice of partitions. The number k is the area bounded by the function and the x-axis, and we call it the Cauchy integral of f from a to b. Symbolically, \int_{a}^{b} f(x) dx (read as “integral of f(x)dx from a to b”).
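
Here is a tiny Python sketch of this construction (an illustration only; f(x) = x^2 + 1 on [0,2] is an arbitrary choice). Since this f is increasing, its minimum on each subinterval is attained at the left endpoint, and the sums S_P increase towards the area 14/3 as the partitions are refined:

# Cauchy's sums for a positive continuous function; f(x) = x^2 + 1 on [0, 2] is an arbitrary
# choice. Since f is increasing, its minimum on each subinterval is at the left endpoint.
f = lambda x: x * x + 1

def cauchy_sum(n, a=0.0, b=2.0):
    xs = [a + (b - a) * i / n for i in range(n + 1)]       # uniform partition, |P| = (b - a)/n
    return sum((xs[i] - xs[i - 1]) * f(xs[i - 1])          # f(t_i) = minimum of f on [x_{i-1}, x_i]
               for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, cauchy_sum(n))                                # increases towards 14/3 = 4.666...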


 

Riemann Integral


Riemann

Cauchy’s definition of the integral can readily be extended to a bounded function with finitely many discontinuities. However, a more general definition should not require either the assumption of continuity or any analytical expression for f in order to prove that the sum S_P indeed converges to a unique real number.

In 1851, the German mathematician Georg Friedrich Bernhard Riemann gave such a more general definition of the integral.

Let [a,b] be a closed interval in \mathbb{R}. Let a finite, ordered set of points P :\{ a = x_0 < x_1 < x_2<\ldots < x_n = b\}, n being an integer, be a partition of the interval [a, b]. Let I_j denote the interval [x_{j-1}, x_j], j= 1,2,3,\ldots , n. The symbol \Delta_j denotes the length of I_j. The mesh of P, denoted by m(P), is defined to be \max_j \Delta_j.

Now, let f be a function defined on the interval [a,b]. If, for each j, s_j is an element of I_j, then we define:

S_P = \sum_{j=1}^n f(s_j) \Delta_j

Further, we say that S_P tends to a limit k as m(P) tends to 0 if, for any \epsilon > 0, there is a \delta >0 such that, if P is any partition of [a,b] with m(P) < \delta, then |S_P - k| < \epsilon for every choice of s_j \in I_j.

Now, if S_P tends to a finite limit as m(P) tends to zero, the value of the limit is called the Riemann integral of f over [a,b] and is denoted by \int_{a}^{b} f(x) dx.
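
The point of Riemann’s definition is that the limit does not depend on how the points s_j are chosen. The following Python sketch (my own illustration, with f(x) = \sin x on [0,\pi]) picks the s_j at random in each subinterval and still approaches the exact value 2 as the mesh shrinks:

import math
import random

# Riemann sums of f(x) = sin(x) on [0, pi] with the tag s_j chosen at random in each I_j.
# However the tags are picked, S_P approaches the exact value 2 as the mesh m(P) -> 0.
f = math.sin
a, b = 0.0, math.pi
rng = random.Random(0)

def riemann_sum(n):
    xs = [a + (b - a) * j / n for j in range(n + 1)]       # uniform partition, mesh (b - a)/n
    return sum(f(rng.uniform(xs[j - 1], xs[j])) * (xs[j] - xs[j - 1])
               for j in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, riemann_sum(n))                               # tends to 2 regardless of the tags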


 

Darboux Integral


Darboux

In 1875, the French mathematician Jean Gaston Darboux gave his way of looking at the Riemann integral, defining upper and lower sums and defining a function to be integrable if the difference between the upper and lower sums tends to zero as the mesh size gets smaller.

Let f be a bounded function defined on an interval [a, b],\quad a, b being real numbers. Let P : a = x_0 < x_1 < x_2<\ldots < x_n = b, n being an integer, be a partition of the interval [a, b] and form the sums

S_P = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i), \quad \overline{S}_P =\sum_{i=1}^n (x_i - x_{i-1}) f(s_i)

where t_i,s_i \in [x_{i-1} , x_i] are such that

f(t_i) = \text{inf} \{ f(x) : x \in [x_{i-1}, x_{i}]\},

f(s_i) = \text{sup} \{ f(x) : x \in [x_{i-1}, x_{i}]\}

The sums S_P and \overline{S}_P approximate the area from below and from above, and S_P \leq \text{Area bounded by curve} \leq \overline{S}_P. Moreover, if P' is a refinement of P, then

S_P \leq S_{P'} \leq \text{Area bounded by curve} \leq \overline{S}_{P'} \leq \overline{S}_{P}

Using the boundedness of f, one can show that S_P, \overline{S}_P converge as the partition gets finer and finer, that is |P| := \text{Maximum}\{x_i - x_{i-1}, 1 \leq i \leq n\} \rightarrow 0, to some real numbers, say k_1, k_2 respectively. Then:

k_1 \leq \text{Area bounded by the curve} \leq k_2

If k_1 = k_2, then we have \int_{a}^{b} f(x) dx = k_1 = k_2.
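
A small Python illustration (my own choice of example): the function below is bounded but has a jump at x = 1/2, yet its lower and upper Darboux sums squeeze onto the same value 1/2, so it is Darboux (equivalently, Riemann) integrable despite the discontinuity:

# Darboux sums for a bounded but discontinuous function: a step with a jump at x = 1/2.
# f is (weakly) increasing, so its inf/sup on each subinterval are the endpoint values.
f = lambda x: 0.0 if x < 0.5 else 1.0

def darboux_sums(n):
    xs = [i / n for i in range(n + 1)]                                        # partition of [0, 1]
    lower = sum((xs[i] - xs[i - 1]) * f(xs[i - 1]) for i in range(1, n + 1))  # uses the inf of f
    upper = sum((xs[i] - xs[i - 1]) * f(xs[i]) for i in range(1, n + 1))      # uses the sup of f
    return lower, upper

for n in (10, 100, 1000):
    lo, up = darboux_sums(n)
    print(n, lo, up, up - lo)      # both tend to 1/2; the gap shrinks like 1/n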


There are two more flavours of integrals, namely the Stieltjes integral and the Lebesgue integral, which I will discuss in the next post.

What is Analysis?


I do not know how to reproduce proofs in Analysis exams, but in this post I will try to understand why we study Analysis. Most of us believe that Analysis is the same as rigorous Calculus, and that what makes Mathematics different from Physics is “rigour”. But why do mathematicians worry so much about rigour? To understand the answer to this question one needs to understand what is called “Analysis” in mathematics.

A standard definition of Analysis is (as in [R]):

Analysis is the systematic study of real and complex-valued continuous functions.

The above definition tells us what we will achieve by applying our understanding of Analysis, but it doesn’t explain what “Analysis” itself is.

Clearly, analysis has its roots in calculus. Newton and Leibniz defined differentiation and integration without bothering about the definition of limit. Euler found the correct values of limits of various infinite series by implicitly assuming an “Algebra of infinite series”, which doesn’t exist! I myself have used the commutativity of addition of real numbers for the terms of an infinite series by assuming an “Algebra of infinite series”!! Great mathematicians like Euler and Laplace, who even solved differential equations, never bothered to think about the foundations of calculus because they studied only real-variable functions arising from physical problems and series which were power series.

Though, without bothering about foundations, one could easily (intuitively) arrive at correct answers thanks to the deep insights of great mathematicians, it became extremely difficult to teach such “deep insight” based mathematics to students. Without a sense of rigour it became difficult to prove our claims in general cases (like the difference between pointwise continuity and uniform continuity).
This led to a belief that:

Calculus (and thus Mathematics) is as good as theory of ghosts i.e. without any basis.

Also, it became impossible for mathematicians to apply the techniques of calculus beyond physical situations, i.e. generalization of concepts was not possible.

To get rid of such allegations, Lagrange suggested that the only way to make calculus rigorous is to reduce it to Algebra (since algebra has an inherent power of generalization). To illustrate this, he defined the derivative of a real function, f'(x), as the coefficient of the linear term in h in the Taylor series expansion of f(x+h). Again, this was wrong without consideration of limits and convergence, since there is no “Algebra of infinite series”!!! But this idea of using Algebra to make calculus rigorous was successfully realized by Cauchy: he used the “Algebra of Inequalities” (though he also implicitly assumed the completeness property of real numbers) by introducing \epsilon and \delta (not explicitly, but in words).

How did the “Algebra of Inequalities” become the technique for creating “rigorous calculus”, which we know as “Analysis”? One main part of calculus was “approximation”, i.e. computing an upper bound on the error in an approximation, that is, the difference between the sum of a series and its n^{th} partial sum. Thus the “tool of approximation” was transformed into a “tool of rigour”.

Initially, the integral was thought of as the inverse of the differential. But sometimes the inverse could not be computed exactly, so Euler remarked that the integral could be approximated as closely as one liked by a sum (this is also the geometric picture of an area being approximated by rectangles). Again, we got a better definition of the integral through the work done by various mathematicians to approximate the values of definite integrals. Poisson, who was interested in complex integration, was concerned about the behaviour and existence of integrals. He stated and proved “the fundamental proposition of the theory of definite integrals”. He proved it by using an inequality result: the Taylor series with remainder. This was the first attempt to prove the equivalence of the antiderivative and limit-of-sums conceptions of the integral. But Poisson implicitly assumed the existence of antiderivatives and bounded first derivatives for f on the given interval, and his proof assumed that the subintervals on which the sum is taken are all equal. Again, Cauchy added rigour to Poisson’s proof.

Since most algebraic formulas hold only under certain conditions, and for certain values of the quantities they contain, one could not assume that what worked for finite expressions automatically worked for infinite ones. Also,  just because there was an operation called “taking a derivative” did not mean that the inverse of that operation always produced a result. The existence of the definite integral had to be proved. Borrowing from Lagrange the mean value theorem for integrals, Cauchy finally proved the “Fundamental Theorem of Calculus”.

Thus, algebraic approximations produced the algebra of inequalities. The application of the algebra of inequalities led to the concept of approximation in calculus. The concept of approximation in calculus in turn led to three key concepts: “error bounds for series” (d’Alembert), “inequalities about derivatives” (Lagrange) and “approximations to integrals” (Euler). I believe that these three concepts, combined with rigour, led to what we call “Analysis” in Mathematics.

The subject of analysis itself consists of four main flavours:

  • Real Analysis
  • Complex Analysis
  • Functional Analysis
  • Harmonic Analysis

with the generalization of basic tools in terms of measure theory (leading to a generalization of integration) and the calculus of several variables. For example, the derivative of a several-variable function f: \Omega \to \mathbb{R}^m, where \Omega \subset \mathbb{R}^n, is a linear transformation from \mathbb{R}^n to \mathbb{R}^m (or equivalently, an m\times n matrix with real entries) instead of a single real number, with norms appearing in the limit that defines it. Also, we can generalize the concept of Taylor series to several-variable functions using the notion of “partial derivatives” as

\displaystyle{T(x_1,\ldots,x_d)  = \sum_{n_1=0}^\infty \cdots \sum_{n_d = 0}^\infty  \frac{(x_1-a_1)^{n_1}\cdots (x_d-a_d)^{n_d}}{n_1!\cdots n_d!}\,\left(\frac{\partial^{n_1 + \cdots + n_d}f}{\partial x_1^{n_1}\cdots \partial x_d^{n_d}}\right)(a_1,\ldots,a_d) }
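
As a quick sanity check of this formula (my own example, assuming SymPy is available), the following Python snippet computes the terms with n_1 + n_2 \leq 2 for f(x,y) = e^x \sin y about the origin and recovers the quadratic Taylor polynomial y + xy:

import sympy as sp

# Degree <= 2 part of the several-variable Taylor series for f(x, y) = exp(x)*sin(y)
# about (a1, a2) = (0, 0); the function is an arbitrary example.
x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)
a1, a2 = 0, 0

T = 0
for n1 in range(3):
    for n2 in range(3 - n1):                     # keep only terms with n1 + n2 <= 2
        d = f
        for _ in range(n1):
            d = sp.diff(d, x)                    # differentiate n1 times with respect to x
        for _ in range(n2):
            d = sp.diff(d, y)                    # differentiate n2 times with respect to y
        coeff = d.subs({x: a1, y: a2})
        T += (x - a1)**n1 * (y - a2)**n2 * coeff / (sp.factorial(n1) * sp.factorial(n2))

print(sp.expand(T))                              # prints x*y + y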

Using the “change of variable theorem” we can evaluate integrals of several-variable functions over a “cell” by evaluating multiple integrals. Finally, using the concept of “differential forms”, originating from geometry, we can prove Stokes’ theorem, of which the “fundamental theorem of calculus” turns out to be a special case (along with many other important theorems like Green’s theorem and the Divergence theorem).

References:
[G] J V Grabiner, “Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus”, American Mathematical Monthly 90 (1983), 185–194

[R] John Renze and Eric W. Weisstein, “Analysis.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/Analysis.html

[S] Ian Stewart,  “analysis | mathematics”. Encyclopedia Britannica.
http://www.britannica.com/topic/analysis-mathematics

[X] Mathematical analysis. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Mathematical_analysis&oldid=31489

[SM] Maurice Sion, History of measure theory in the twentieth century, www.math.ubc.ca/~marcus/Math507420/Math507420hist.pdf

[H] Barbara Hubbard and John H. Hubbard, “Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach”, Prentice Hall.

Kempner Series & MMC


In the Madhava Mathematics Competition 2015 (held in January 2015), we were asked to prove the convergence of the Kempner series (first proved in 1914). Recently I discovered the paper by A. J. Kempner (http://www.jstor.org/stable/2972074), so in this blog post I will state and prove that result.

The basic idea behind the proof is to divide the whole series into chunks (finding symmetry) and then construct a convergent geometric series which acts as an upper bound for the Kempner series.

A detailed study of this cartoon has been done by Ben Orlin (http://bit.ly/1KD4shF)

Theorem (Kempner Series, 1914). The harmonic series \sum_{k=1}^{\infty} \frac{1}{k} = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} +\ldots converges if the denominators do not include all natural numbers 1,2,3,\ldots, but only those numbers which do not contain any figure 9.

Proof: The given series is:
K = \frac{1}{1}+\ldots +\frac{1}{8}+\frac{1}{10}+\ldots +\frac{1}{18}+\frac{1}{20} + \ldots + \frac{1}{28}+\frac{1}{30}+\ldots +\frac{1}{38}+\frac{1}{40}+\ldots +\frac{1}{48}+\frac{1}{50} + \ldots + \frac{1}{58}+\frac{1}{60}+\ldots+\frac{1}{68}+\frac{1}{70}+\ldots +\frac{1}{78}+\frac{1}{80} + \ldots + \frac{1}{88}+\frac{1}{100}+\ldots
Now we can rewrite above series as:
S = a_1 + a_2 + \ldots + a_n + \ldots
where a_n is the sum of all terms in K of denominator d with 10^{n-1} \leq d < 10^{n}.
Observe that each term of K which forms part of a_n is less than or equal to 1/10^{n-1}.

Now count the number of terms of K which are contained in a_1, in a_2, \ldots, in a_n. Clearly, a_1 consists of 8 terms, and a_1 < 8 \cdot 1 < 9. In a_2 there are, as is easily seen, fewer than 9^2 terms of K, and a_2 < (9^2/10). Altogether there are in K fewer than 9^2 + 9 terms with denominators under 100.

Assume now that we know the number of terms of K which are contained in a_m to be less than 9^m, for m = 1, 2, 3, \ldots, n. Then, because each term of K which is contained in a_n is not greater than 1/10^{n-1}, we have a_n < (9^n/10^{n-1}), and the total number of terms in K with denominators under 10^n is less than 9^n + 9^{n-1} + 9^{n-2} + \ldots + 9^2 + 9.

Now, let’s go for induction. For n = 1 and n = 2 we have verified all this, and we will now show that if it is true for n, then a_{n+1} < (9^{n+1}/10^n). a_{n+1} contains all terms in K of denominator d, 10^n \leq d < 10^{n+1}. This interval for d can be broken up into the nine intervals \alpha \cdot 10^n \leq d < (\alpha + 1)10^n, \alpha = 1,2, \ldots, 9. The last interval does not contribute any term to K; the eight remaining intervals each contribute the same number of terms to K, and this is the same as the number of terms contributed by the whole interval 0 < d < 10^n, that is, by assumption, less than 9^n + 9^{n-1} + 9^{n-2} + \ldots + 9^2 + 9.

Therefore, a_{n+1} contains fewer than 8(9^n + 9^{n-1} + 9^{n-2} +\ldots + 9^2 + 9) < 9^{n+1} terms of K, and since each of these terms is less than or equal to 1/10^n, we have a_{n+1} < (9^{n+1}/10^n).

Hence, S = a_1 + a_2 + a_3 + \ldots < 9 + \frac{9^2}{10} + \ldots + \frac{9^{n+1}}{10^n}+ \ldots = \frac{9}{1-\frac{9}{10}} = 90
Thus S converges, and since K=S, K also converges.

——————————–
Note: There is nothing special about 9 here; the above method of proof holds unchanged if, instead of 9, any other figure 1, 2, \ldots, 8 is excluded, but not for the figure 0.
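
For the curious, here is a short Python sketch (an illustration, not part of Kempner’s argument) that computes partial sums of the series with the figure 9 excluded; the growth is extremely slow, consistent with the bound of 90 obtained above, and changing excluded_digit illustrates the note:

# Partial sums of the harmonic series restricted to denominators containing no figure 9.
def kempner_partial_sum(limit, excluded_digit='9'):
    return sum(1.0 / d for d in range(1, limit + 1) if excluded_digit not in str(d))

for k in range(1, 7):
    print(10**k, kempner_partial_sum(10**k))     # grows very slowly; the proof bounds the sum by 90

# Setting excluded_digit to any of '1', ..., '8' illustrates the closing note above.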