# Why do mathematicians denote the set of integers with Z?

One of the great changes that took place in the late nineteenth and early twentieth centuries was a loss of certainty and absolutism in science and mathematics. This change largely came from the German-speaking world, and its effect spread beyond the worlds of mathematics, science, and technology.

One of my students impressed me and her classmates yesterday with her recitation of the first one hundred digits of pi. In a subway station we saw the number pi written on the wall (part of an exhibit of interesting numbers and statistics). Kat walked to the “3”, put her finger on that first digit, turned her back to the wall, and then, walking slowly and dragging her finger along the wall, recited the first 100 digits of pi from memory.

Ferdinand Lindemann, a German mathematician, was the first to prove, in 1882, that pi is a transcendental number. Lindemann completed his career at the University of Munich.

Mathematicians recognize several kinds of numbers.

• Z (from the German word Zahl, meaning “number”) is the set of integers (whole numbers).
• Q is the set of rational numbers, the numbers that can be expressed as ratios of two integers (fractions such as 3/4). MathWorld, a wonderful online encyclopedia of mathematics, says Q comes from the German word Quotient (the word is the same in both languages).
• R is the set of real numbers. Z and Q are subsets of R, but R also includes numbers that are neither integers nor ratios of integers. These are the irrational numbers: their decimal expansions neither terminate after finitely many digits (such as 0.123) nor settle into a repeating block (such as 0.33333…).
• Algebraic numbers are another subset of the real numbers. An algebraic number is a root of a polynomial equation with integer coefficients (we all saw these in high school). For example, the square root of two is a root of the equation x^2 – 2 = 0. Some algebraic numbers, such as the square root of 2, are irrational.
• A transcendental number is a number that is not algebraic. Every real transcendental number is automatically irrational. Pi is the most famous example.
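The distinctions in the list above can be made concrete in a few lines of Python (a sketch; the variable and function names are my own). `Fraction` models elements of Q exactly, and a floating-point check shows the square root of 2 satisfying the polynomial from the algebraic-numbers example:

```python
from fractions import Fraction
import math

# Q: rational numbers are exact ratios of two integers.
three_quarters = Fraction(3, 4)
assert three_quarters == Fraction(6, 8)       # 6/8 reduces to 3/4
assert three_quarters + Fraction(1, 4) == 1   # exact arithmetic, no rounding

# Algebraic: sqrt(2) is irrational, yet it is a root of x^2 - 2 = 0.
def p(x):
    return x * x - 2

assert abs(p(math.sqrt(2))) < 1e-12   # zero up to floating-point error
```

No `Fraction` can satisfy p(x) = 0 exactly, which is precisely the statement that the square root of 2 is irrational.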

There are algorithms that will approximate pi to any desired degree of precision. If you are patient enough, you can have as many digits of pi as you like. This property makes pi a computable number.
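One such algorithm (my choice for illustration, not the only one) is Machin's formula, pi = 16·arctan(1/5) - 4·arctan(1/239). The sketch below uses Python's arbitrary-precision integers, with a few guard digits to absorb truncation error, to produce as many digits of pi as you ask for:

```python
def arctan_inv(x, one):
    """Return arctan(1/x) scaled by `one`, via the Taylor series
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    total = term = one // x
    x2 = x * x
    k, sign = 3, -1
    while term:
        term //= x2                 # next power of 1/x^2
        total += sign * (term // k)
        sign, k = -sign, k + 2
    return total

def pi_digits(n):
    """Return pi truncated to n decimal places, as a digit string."""
    guard = 10                       # extra digits to absorb rounding error
    one = 10 ** (n + guard)
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))  # Machin's formula
    return str(pi // 10 ** guard)    # drop the guard digits

print(pi_digits(5))   # -> 314159
```

A patient call to `pi_digits(100)` yields the hundred decimals Kat recited.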

There are non-computable numbers. In fact, most real numbers are non-computable: the computable numbers are countable, since each one corresponds to a finite program, while the real numbers are not.

Alan Turing, then a young English mathematician, published a paper in 1936 with the title “On Computable Numbers, with an Application to the Entscheidungsproblem.” The title refers to a challenge posed by the German mathematician David Hilbert in the previous decade, but it has roots that go back to Gottfried Leibniz’s seventeenth-century dream of a machine that could answer any question posed in logic.

Turing built his proof that some numbers cannot be computed (and that we can describe some problems for which no computer program will produce a solution) on a method that Georg Cantor, yet another German mathematician, had developed in the late nineteenth century. Cantor had proved that there are degrees of infinity: the set of integers and the set of real numbers are both infinite, but the reals have a strictly greater cardinality than the integers. Turing wrote his landmark paper only five years after Kurt Gödel (then at the University of Vienna, later Albert Einstein’s colleague at the Institute for Advanced Study in Princeton) uncovered limits to the methods of proof themselves.
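Cantor’s diagonal method can be sketched in a few lines of Python (a toy illustration of the idea, not Turing’s construction). Given any list of decimal expansions, build a new number that differs from the k-th entry at its k-th digit; the new number therefore cannot appear anywhere in the list, so no list can contain every real number:

```python
def diagonal(expansions):
    """Given decimal expansions (digit strings after the decimal point,
    each at least as long as the list), return a new expansion that
    differs from the k-th entry at digit position k."""
    digits = []
    for k, e in enumerate(expansions):
        # Pick any digit unequal to e[k]; avoiding 0 and 9 sidesteps
        # the 0.4999... = 0.5000... ambiguity.
        digits.append("5" if e[k] != "5" else "6")
    return "0." + "".join(digits)

listing = ["3333333333", "1428571428", "7182818284"]
new_number = diagonal(listing)   # differs from every entry in the list
```

The same diagonal trick, applied to an enumeration of all programs, is what lets Turing exhibit a number no program can compute.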

Turing’s paper marked the start of the study of computer science. Every student of computer science sees the word “Entscheidungsproblem.”

Once mathematicians believed that any mathematical proposition could, with enough time and effort, be proven true or false. No longer.

Similarly, once physicists believed that they could, with enough time and effort, measure with arbitrary accuracy, and then predict future positions and velocities with as much accuracy as desired. No longer. The physicists’ story is also filled with German names (e.g., Werner Heisenberg and his Uncertainty Principle).