When we hear the word *number*, symbols like 1, ¼, π (the area enclosed by a unit circle), ι (the symbol for √−1), ε (infinitesimal), ω (ordinal infinity), ℵ (cardinal infinity), … appear in our mind. But not all numbers are computable:

> A computable number [is] one for which there is a Turing machine which, given *n* on its initial tape, terminates with the *nth* digit of that number [encoded on its tape].

In other words, a real number is called computable if there is an algorithm which, given *n*, returns the first *n* digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number. For example, π is a computable number (why? see here).
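As a toy illustration (not from the original post), here is a short Python sketch of what such an algorithm can look like: the decimal digits of √2 can be produced one at a time by exact integer arithmetic, which is exactly what makes √2 a computable number.

```python
from math import isqrt

def sqrt2_digits(n):
    # First n decimal digits of sqrt(2) after the decimal point.
    # The k-th digit is determined exactly by the integer square root
    # floor(sqrt(2 * 10^(2k))): no floating-point rounding is involved.
    return [isqrt(2 * 10**(2 * k)) % 10 for k in range(1, n + 1)]

print(sqrt2_digits(6))  # digits of 1.414213...
```

The same pattern (an exact, terminating procedure for the *n*th digit) is what the definition above demands; for π the known digit-extraction algorithms are more elaborate, but the principle is the same.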

Applying Cantor’s diagonal argument to a list of all computable numbers, we get a non-computable number (here is the discussion). For example, Chaitin’s constant, denoted by Ω and defined as the sum of a certain infinite series (the halting probability), is a non-computable number (why? see here).
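The diagonal construction itself is easy to sketch in code. Assuming a hypothetical function `digit_of(i, n)` that returns the *n*th decimal digit of the *i*th number in some list (this name is my own, for illustration), the following builds the digit function of a number that differs from every listed number:

```python
def diagonal(digit_of):
    # digit_of(i, n): n-th decimal digit of the i-th number in the list.
    # Returns the digit function of a number whose n-th digit differs
    # from the n-th digit of the n-th listed number.
    def new_digit(n):
        d = digit_of(n, n)
        # Avoid digits 0 and 9 so that 0.4999... = 0.5000... ambiguity
        # cannot make the new number secretly equal to a listed one.
        return 5 if d != 5 else 4
    return new_digit

# Toy list: the i-th "number" has n-th digit (i + n) % 10.
nd = diagonal(lambda i, n: (i + n) % 10)
```

If the list could enumerate *all* computable numbers computably, `new_digit` would itself be computable yet absent from the list, which is the contradiction behind the existence of non-computable reals.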

**Fun fact**: We don’t know whether π is a normal number or not (though it is conjectured to be one), but Ω is known to be a normal number (just like Mahler’s number discussed here).
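For a concrete feel for normality: Champernowne’s constant 0.123456789101112…, formed by concatenating the positive integers, is proven normal in base 10. A small illustrative check (my own sketch, not from the post) of its single-digit frequencies:

```python
from collections import Counter
from itertools import count, islice

def champernowne_digits(n):
    # First n decimal digits of Champernowne's constant 0.123456789101112...,
    # a number proven normal in base 10 (each digit string of length k
    # appears with limiting frequency 10^-k).
    def gen():
        for k in count(1):                  # concatenate 1, 2, 3, ...
            yield from (int(c) for c in str(k))
    return list(islice(gen(), n))

# Empirically, each digit's frequency should drift toward 1/10.
freq = Counter(champernowne_digits(10_000))
```

Normality asks for much more than this finite tally (every block of digits, in the limit), but the experiment shows what the definition is measuring.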



But may I know why a computable number is defined the way it is? What is the motivation? Just trying to understand…


Computable numbers were defined by Alan Turing in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, in which he introduced Turing machines in order to deal with the Entscheidungsproblem; the classic halting problem comes from the same paper. The Turing machine is the fundamental object of computability theory, and it led to the birth of a rigorous definition of algorithm. Alan Turing is therefore called the father of modern computer science.
