This is a file in the archives of the Stanford Encyclopedia of Philosophy. 
Although certain individuals — most notably Kronecker — had expressed disapproval of the "idealistic", nonconstructive methods used by some of their nineteenth-century contemporaries, it is in the polemical writings of L.E.J. Brouwer (1881–1966), beginning with his Amsterdam doctoral thesis (Brouwer 1907) and continuing over the next forty-seven years, that the foundations of a precise, systematic approach to constructive mathematics were laid. In Brouwer's philosophy, known as intuitionism, mathematics is a free creation of the human mind, and an object exists if and only if it can be (mentally) constructed.
Before mathematicians assert something (other than an axiom) they are supposed to have proved it true. What, then, do mathematicians mean when they assert a disjunction P ∨ Q, where P and Q are syntactically correct statements in some (formal or informal) language that a mathematician can use? A natural — although, as we shall see, not the unique — interpretation of this disjunction is that not only does (at least) one of the statements P, Q hold, but also we can decide which one holds. Thus just as mathematicians will assert that P only when they have decided that P by proving it, they may assert P ∨ Q only when they either can decide — that is, prove — that P or decide (prove) that Q.
With this interpretation, however, mathematicians run into a serious problem in the special case where Q is the negation, ¬P, of P. To decide that ¬P is to show that P implies a contradiction (such as 0=1). But it will often be that mathematicians have neither decided that P nor decided that ¬P. To see this, we need only reflect on the following:
Goldbach Conjecture:
Every even integer > 2 can be written as a sum of two primes,
which remains neither proved nor disproved despite the best efforts of many of the leading mathematicians since it was first raised in a letter from Goldbach to Euler in 1742. We are forced to conclude that, under the very natural interpretation of P ∨ Q just canvassed, only an optimist can retain a belief in the law of excluded middle, P ∨ ¬P.
Traditional, or classical, mathematics gets round this by widening the interpretation of disjunction: it interprets P ∨ Q as ¬(¬P ∧ ¬Q), or in other words, "it is contradictory that both P and Q be false". In turn, this leads to the idealistic interpretation of existence, in which ∃xP(x) means ¬∀x¬P(x) ("it is contradictory that P(x) be false for every x"). It is on these interpretations of disjunction and existence that mathematicians have built the grand, and apparently impregnable, edifice of classical mathematics which serves as a foundation for the physical, the social, and (increasingly) the biological sciences. However, the wider interpretations come at a cost: for example, when we pass from our initial, natural interpretation of P ∨ Q to the unrestricted use of the idealistic one, ¬(¬P ∧ ¬Q), the resulting mathematics cannot generally be interpreted within computational models such as recursive function theory.
This point is illustrated by a well-worn example, the proposition:
There exist irrational numbers a, b such that a^{b} is rational.
A slick classical proof goes as follows. Either √2^{√2} is rational, in which case we take a = b = √2; or else √2^{√2} is irrational, in which case we take a = √2^{√2} and b = √2 (see Dummett 1977, 10). But as it stands, this proof does not enable us to pinpoint which of the two choices of the pair (a,b) has the required property. In order to determine the correct choice of (a,b), we would need to decide whether √2^{√2} is rational or irrational, which is precisely to employ our initial interpretation of disjunction with P the statement "√2^{√2} is rational".
Here is another illustration of the difference between interpretations. Consider the following simple statement about the set R of real numbers:
(*) ∀x ∈ R (x = 0 ∨ x ≠ 0),
where, for reasons that we divulge shortly, x ≠ 0 means that we can find a rational number r with 0 < r < |x|. A natural computational interpretation of (*) is that we have a procedure which, applied to any real number x, either tells us that x = 0 or else tells us that x ≠ 0. (For example, such a procedure might output 0 if x = 0, and output 1 if x ≠ 0.) However, because the computer can handle real numbers only by means of finite rational approximations, we have the problem of underflow, in which a sufficiently small positive number can be misread as 0 by the computer; so there cannot be a decision procedure that justifies the statement (*). In other words, we cannot expect (*) to hold under our natural computational interpretation of the quantifier ∀ and the connective ∨.
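The underflow phenomenon is easy to exhibit in ordinary floating-point arithmetic; a minimal illustration (the specific numbers are arbitrary):

```python
# A positive product can underflow to exactly 0.0 in IEEE double precision,
# so a machine comparing it with 0 "misreads" a positive number as zero.
tiny = 1e-300          # a perfectly representable positive double
product = tiny * tiny  # mathematically 1e-600, far below the smallest double
print(product == 0.0)  # underflow makes the comparison succeed
```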
Let's examine this from another angle. Let G(n) act as shorthand for the statement “2n + 2 is a sum of two primes”, where n ranges over the positive integers, and define an infinite binary sequence a = (a_{1},a_{2},…) as follows:
a_{n} = { 0 if G(k) holds for all k ≤ n
1 if ¬G(k) holds for some k ≤ n.

There is no question that a is a computationally well-defined sequence, in the sense that we have an algorithm for computing a_{n} for each n: check the even numbers 4, 6, 8, …, 2n+2 to determine whether each of them is a sum of two primes; if each of them is, set a_{n} = 0; otherwise, set a_{n} = 1. Now consider the real number whose n^{th} binary digit is a_{n}:
x = (0·a_{1}a_{2}a_{3}···)_{2} = 2^{−1}a_{1} + 2^{−2}a_{2} + ··· = ∑_{n=1}^{∞} 2^{−n}a_{n}.
If (*) holds under our computational interpretation, then we can decide between the following two alternatives:

- x = 0, in which case a_{n} = 0 for all n;
- x ≠ 0, in which case there exists a rational r with 0 < r < x, and hence N such that 2^{−N} < x.

In the latter case, by testing a_{1},…,a_{N}, we can find n ≤ N such that a_{n} = 1. Thus the computational interpretation of (*) enables us to decide whether there exists n such that a_{n} = 1; in other words, it enables us to decide the status of the Goldbach Conjecture.
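The sequence a is computable in exactly the sense described above; a sketch (the helpers `is_prime` and `goldbach_holds` are illustrative names, not part of the original text):

```python
def is_prime(m):
    # Trial division; adequate for the small cases tested here.
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def goldbach_holds(even):
    # G(k) for even = 2k + 2: is `even` a sum of two primes?
    return any(is_prime(p) and is_prime(even - p)
               for p in range(2, even // 2 + 1))

def a(n):
    # a_n = 0 if G(k) holds for all k <= n, and 1 otherwise.
    return 0 if all(goldbach_holds(2 * k + 2) for k in range(1, n + 1)) else 1
```

Computing any particular a_{n} is routine; what no amount of term-by-term computation settles is whether the real number x = ∑ 2^{−n}a_{n} equals 0, and that is precisely the decision (*) would demand.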
The use of the Goldbach Conjecture here is purely dramatic. To avoid it, we define a function ƒ on the set of binary sequences as follows:
ƒ(a) = { 0 if a_{n} = 0 for all n
1 if a_{n} = 1 for some n.
The argument of the preceding paragraph can then be modified to show that, under our computational interpretation, (*) provides us with a procedure for calculating ƒ(a) for any computationally well-defined binary sequence a. Now, the computability of the function ƒ can be expressed informally by the following:
Limited Principle of Omniscience (LPO):
For each binary sequence (a_{1},a_{2},…) either a_{n} = 0 for all n or else there exists n such that a_{n} = 1,
which is generally regarded as an essentially nonconstructive principle, for several reasons. First, its recursive interpretation,
There is a recursive algorithm which, applied to any recursively defined binary sequence (a_{1},a_{2},…), outputs 0 if a_{n} = 0 for all n, and outputs 1 if a_{n} = 1 for some n,
is provably false within recursive function theory, even with classical logic (see Bridges & Richman 1987, Chapter 3); so if we want to allow a recursive interpretation of all our mathematics, then we cannot use LPO. Secondly, there is a model theory (Kripke models) in which it can be shown that LPO is not derivable in Heyting arithmetic — that is, Peano arithmetic using the computational interpretation of the connectives and quantifiers that we state in more detail in the next section (Bridges & Richman 1987, Chapter 7).
It should, by now, be clear that a fullblooded computational development of mathematics disallows the idealistic interpretations of disjunction and existence upon which most classical mathematics depends. In fact, in order to work constructively, we need to return from the classical interpretations back to the natural, constructive ones, as follows.
∨ (or): to prove P ∨ Q we must have either a proof of P or a proof of Q.
∧ (and): to prove P ∧ Q we must have a proof of P and a proof of Q.
→ (implies): a proof of P → Q is an algorithm that converts a proof of P into a proof of Q.
¬ (not): to prove ¬P we must show that P implies 0 = 1.
∃ (there exists): to prove ∃xP(x) we must construct an object x and prove that P(x) holds.
∀ (for each/all): a proof of ∀xP(x) is an algorithm that, applied to any object x, proves that P(x) holds.
These BHK interpretations (the name reflects their origin in the work of Brouwer, Heyting, and Kolmogorov) can be made more precise using Kleene's notion of realizability; see Dummett 1977, 318–335, and Beeson 1985, Chapter VII.
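The BHK clauses can be modelled directly as data and functions: a proof of a conjunction as a pair, a proof of a disjunction as a tagged alternative, a proof of an implication as a function. A small illustrative sketch (the encoding is one conventional choice, not part of the BHK interpretation itself):

```python
# Proofs as data, following the BHK reading:
#   P and Q : a pair (p, q) of proofs
#   P or Q  : a tagged proof ("left", p) or ("right", q)
#   P -> Q  : a function taking a proof of P to a proof of Q

def and_commutes(proof):
    # From a proof of P and Q, construct a proof of Q and P.
    p, q = proof
    return (q, p)

def or_commutes(proof):
    # From a proof of P or Q, construct a proof of Q or P.
    tag, body = proof
    return ("right", body) if tag == "left" else ("left", body)
```

Both functions are themselves proofs, under the BHK reading, of the implications (P ∧ Q) → (Q ∧ P) and (P ∨ Q) → (Q ∨ P).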
Why would we want to reinterpret logic in this way? First, there is the desire to retain, as far as possible, computational interpretations of our mathematics. Ideally, we are trying to develop mathematics in such a way that if a theorem asserts the existence of an object x with a property P, then the proof of the theorem embodies algorithms for constructing x and for demonstrating, by whatever calculations are necessary, that x has the property P. Here are some examples of theorems, each followed by an informal description of the requirements for its constructive proof.
(A)  For each real number x, either x = 0 or x ≠ 0. 
Proof requirement: An algorithm which, applied to a given real number x, would decide whether x = 0 or x ≠ 0. Note that, in order to make this decision, the algorithm might use not only the data describing x but also the data showing that x is actually a real number.  
(B)  Each nonempty subset S of R that is bounded above has a least upper bound. 
Proof requirement: An algorithm which, applied to a set S of real numbers, a member s of S, and an upper bound for S, computes the least upper bound of S.
(C)  If ƒ is a continuous real-valued mapping on the closed interval [0,1] such that ƒ(0)ƒ(1) < 0, then there exists x such that 0 < x < 1 and ƒ(x) = 0. 
Proof requirement: An algorithm which, applied to the function ƒ, a modulus of continuity for ƒ, and the values ƒ(0) and ƒ(1), computes a point x with 0 < x < 1 and ƒ(x) = 0.
(D)  If ƒ is a continuous real-valued mapping on the closed interval [0,1] such that ƒ(0)ƒ(1) < 0, then for each ε > 0 there exists x such that 0 < x < 1 and |ƒ(x)| < ε. 
Proof requirement: An algorithm which, applied to the function ƒ, a modulus of continuity for ƒ, the values ƒ(0) and ƒ(1), and a positive number ε, computes a point x with 0 < x < 1 and |ƒ(x)| < ε.
We already have reasons for doubting that (A) has a constructive proof. If the proof requirements for (B) can be fulfilled, then, given a binary sequence (a_{1},a_{2},…), we can apply our proof of (B) to the set {a_{1},a_{2},…} in order to determine its supremum σ. Computing σ with an error < 1/2, we then determine whether σ = 0 or σ = 1; in the first case, a_{n} = 0 for all n, whereas in the second, we can easily find N such that a_{N} = 1. Thus (B) implies LPO and is therefore essentially nonconstructive. However, in Bishop's constructive theory of the real numbers, based on Cauchy sequences with a preassigned convergence rate, we can prove the following:
Constructive LeastUpperBound Principle:
Let S be a nonempty subset of R that is bounded above. Then S has a least upper bound if and only if it is order located, in the sense that for all real numbers α, β with α < β, either β is an upper bound for S or else there exists x ∈ S with x > α (Bishop & Bridges 1985, p. 37, Proposition (4.3)).
Each of statements (C) and (D), which are classically equivalent, is a version of the Intermediate Value Theorem. In these statements, a modulus of continuity for ƒ is a set Ω of ordered pairs (ε, δ) of positive real numbers with the following two properties:

- for each ε > 0 there exists δ > 0 such that (ε, δ) ∈ Ω;
- if (ε, δ) ∈ Ω and x, y ∈ [0,1] satisfy |x − y| ≤ δ, then |ƒ(x) − ƒ(y)| ≤ ε.
Statement (C) is essentially nonconstructive, since it entails the following nonconstructive principle:
Lesser Limited Principle of Omniscience (LLPO):
For each binary sequence (a_{1},a_{2},…) with at most one term equal to 1, either a_{n} = 0 for all even n or else a_{n} = 0 for all odd n.
Statement (D), a weak form of (C), can be proved constructively, using an interval-halving argument of a standard type. The following stronger constructive intermediate value theorem, which suffices for most practical purposes, is proved using an approximate interval-halving argument:
Let ƒ be a continuous real-valued mapping on the closed interval [0,1] such that ƒ(0) and ƒ(1) have opposite signs. Suppose also that ƒ is locally nonzero, in the sense that for each x ∈ [0,1] and each r > 0, there exists y such that |x − y| < r and ƒ(y) ≠ 0. Then there exists x such that 0 < x < 1 and ƒ(x) = 0.
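The interval-halving idea behind statement (D) can be sketched with floating-point numbers standing in (non-constructively) for real numbers; the example function and tolerance below are arbitrary:

```python
def approx_root(f, eps, a=0.0, b=1.0):
    # Assumes f(a) and f(b) have opposite signs.  Halve the interval,
    # keeping a half on which f still changes sign, until |f(m)| < eps.
    # (Constructively, the sign comparisons below would themselves need
    # care; this is only a classical sketch of the halving idea.)
    while True:
        m = (a + b) / 2
        if abs(f(m)) < eps:
            return m
        if f(a) * f(m) < 0:
            b = m
        else:
            a = m
```

For instance, approx_root(lambda t: t * t - 0.5, 1e-6) locates a point at which t² − 0.5 is within 10⁻⁶ of zero, as (D) promises, without ever certifying an exact root as (C) would require.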
The situation of the intermediate value theorem is typical of many in constructive analysis, where we find one classical theorem with several constructive versions, some or all of which may be equivalent under classical logic. (See also, for example, Bridges et al. 1982.)
There is one omniscience principle whose constructive status is less clear than that of LPO and LLPO, namely, the following:
Markov's Principle (MP):
For each binary sequence (a_{n}), if it is contradictory that all the terms a_{n} equal 0, then there exists a term equal to 1.
This principle is equivalent to a number of simple classical propositions, including the following:

For each real number x, if it is contradictory that x = 0, then x ≠ 0 (that is, we can find a rational r with 0 < r < |x|).
Markov's Principle represents an unbounded search: if you have a proof that all terms a_{n} being 0 leads to a contradiction, then, by testing the terms a_{1},a_{2},a_{3},… in turn, you are "guaranteed" to come across a term equal to 1; but this guarantee does not extend to an assurance that you will find the desired term before the end of the universe. Most practitioners of constructive mathematics view Markov's Principle with at least suspicion, if not downright disbelief. Such views are reinforced by the observation that there is a Kripke model showing that MP is not derivable in Heyting arithmetic under our computational interpretation of logic. (See Bridges & Richman 1987, 137–138.)
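The unbounded search that Markov's Principle licenses can be written down directly; a sketch in which the sequence is given as a function on the positive integers:

```python
def markov_search(a):
    # Justified only by MP: if "a(n) = 0 for all n" is contradictory,
    # this loop terminates -- but with no computable bound on when.
    n = 1
    while a(n) == 0:
        n += 1
    return n
```

Nothing in the code bounds the search; the "guarantee" of termination is exactly the content of MP, which is why constructivists who reject MP regard such a loop as unjustified.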
The desire to retain the possibility of a computational interpretation is one motivation for using the constructive reinterpretations of the logical connectives and quantifiers that we gave above; but it is not exactly the motivation of the pioneers of constructivism in mathematics. In this section we look at some of the different approaches to constructivism in mathematics over the past 125 years.
In the late nineteenth century, certain individuals — most notably Kronecker and Poincaré — had expressed doubts, or even disapproval, of the idealistic, nonconstructive methods used by some of their contemporaries; but it is in the polemical writings of L.E.J. Brouwer (1881–1966), beginning with his Amsterdam doctoral thesis (1907) and continuing over the next forty-seven years, that the foundations of a precise, systematic approach to constructive mathematics were laid. In Brouwer's philosophy, known as intuitionism, mathematics is a free creation of the human mind, and an object exists if and only if it can be (mentally) constructed. If one takes that philosophical stance, then one is inexorably drawn to the foregoing constructive interpretation of the logical connectives and quantifiers: for how could a proof of the impossibility of the nonexistence of a certain object x describe a mental construction of x?
Brouwer was not the clearest expositor of his ideas, as is shown by the following quotation:
Mathematics arises when the subject of twoness, which results from the passage of time, is abstracted from all special occurrences. The remaining empty form [the relation of n to n+1] of the common content of all these twonesses becomes the original intuition of mathematics and repeated unlimitedly creates new mathematical subjects. (quoted in Kline 1972, pp. 1199–2000)

A modern précis of Brouwer's view was given by Errett Bishop (Bishop 1967, p. 2):
The primary concern of mathematics is number, and this means the positive integers. We feel about number the way Kant felt about space. The positive integers and their arithmetic are presupposed by the very nature of our intelligence and, we are tempted to believe, by the very nature of intelligence in general. The development of the positive integers from the primitive concept of the unit, the concept of adjoining a unit, and the process of mathematical induction carries complete conviction. In the words of Kronecker, the positive integers were created by God.

However obscure Brouwer's writings could be, one thing was always clear: for him, mathematics took precedence over logic. One might say, as Hermann Weyl does in the following passage, that Brouwer saw classical mathematics as flawed precisely in its use of classical logic without reference to the underlying mathematics:
According to [Brouwer's] view and reading of history, classical logic was abstracted from the mathematics of finite sets and their subsets. ... Forgetful of this limited origin, one afterwards mistook that logic for something above and prior to all mathematics, and finally applied it, without justification, to the mathematics of infinite sets. This is the Fall and original sin of set theory, for which it is justly punished by the antinomies. It is not that such contradictions showed up that is surprising, but that they showed up at such a late stage of the game. (quoted in Kline 1972, p. 2001)

In particular, this misuse of logic led to nonconstructive existence proofs which, in Hermann Weyl's words, "inform the world that a treasure exists without disclosing its location".
In order to describe the logic used by the intuitionist mathematician, it was necessary first to analyse the mathematical processes of the mind, from which analysis the logic could be extracted. In 1930, Brouwer's most famous pupil, Arend Heyting, published a set of formal axioms which so clearly characterise the logic used by the intuitionist that they have become universally known as the axioms for intuitionistic logic (Heyting 1930). These axioms captured the informal computational interpretations of the connectives and quantifiers that we gave earlier.
Intuitionistic mathematics diverges from other types of constructive mathematics in its interpretation of the term "sequence". Normally, a sequence in constructive mathematics is given by a rule which determines, in advance, how to construct each of its terms; such a sequence may be said to be lawlike or predeterminate. Brouwer generalised this notion of a sequence to include the possibility of constructing the terms one by one, the choice of each term being made freely, or subject only to certain restrictions stipulated in advance. Most manipulations of sequences do not require that they be predeterminate, and can be performed on these more general free choice sequences.
Thus, for the intuitionist, a real number x = (x_{1},x_{2},…) — essentially, a Cauchy sequence of rational numbers — need not be given by a rule: its terms x_{1}, x_{2}, … are simply rational numbers, successively constructed, subject only to some kind of Cauchy restriction such as the following one used by Bishop (1967):
∀m∀n[|x_{m} − x_{n}| ≤ 1/m + 1/n]

Once free choice sequences are admitted into one's mathematics, so, perhaps to one's initial surprise, are certain strong choice principles. Let P be a subset of N^{N} × N (where N denotes the set of natural numbers and, for sets A and B, B^{A} denotes the set of mappings from A into B), and suppose that for each a∈N^{N} there exists n∈N such that (a,n)∈P. From a constructive point of view, this means that we have a procedure, applicable to sequences, that computes n for any given a. According to Brouwer, the construction of an element of N^{N} is forever incomplete: a generic sequence a is purely extensional, in the sense that at any given moment we can know nothing about a other than a finite set of its terms. It follows that our procedure must be able to calculate, from some finite initial segment (a_{0},…,a_{N}) of terms of a, a natural number n such that P(a,n). If b∈N^{N} is any sequence such that b_{k} = a_{k} for 0 ≤ k ≤ N, then our procedure must return the same n for b as it does for a. This means that n is a continuous function of a with respect to the topology on N^{N} given by the metric
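For a lawlike sequence, Bishop's Cauchy condition can be checked term by term with exact rational arithmetic; a sketch (the target 2/3 and the finite bound are arbitrary choices of the example):

```python
from fractions import Fraction

def x(n):
    # n-th rational approximation to 2/3, accurate to within 10**-n,
    # comfortably inside the 1/n accuracy Bishop's condition needs.
    return Fraction((2 * 10 ** n) // 3, 10 ** n)

def bishop_cauchy(x, bound=20):
    # Verify |x(m) - x(n)| <= 1/m + 1/n for all m, n below a finite bound.
    return all(abs(x(m) - x(n)) <= Fraction(1, m) + Fraction(1, n)
               for m in range(1, bound) for n in range(1, bound))
```

A free choice sequence, by contrast, offers nothing to verify in advance: at any moment only finitely many of its terms exist.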
ρ : (a,b) ↦ inf{2^{−n} : a_{k} = b_{k} for 0 ≤ k ≤ n}.

Thus we are led to the following principle of continuous choice, which we divide into a continuity part and a choice part:
Continuous Choice 1 (CC1):
Any function from N^{N} to N is continuous.

Continuous Choice 2 (CC2):
If P ⊆ N^{N} × N, and for each a∈N^{N} there exists n∈N such that (a,n)∈P, then there is a function ƒ : N^{N} → N such that (a,ƒ(a))∈P for all a∈N^{N}.
If P and ƒ are as in CC2, then we say that ƒ is a choice function for P.
The omniscience principles LPO and LLPO are demonstrably false under the hypotheses CC1–2; but MP is consistent with them. Among the remarkable consequences of CC1–2 are the following.
Let X be a normed space and (u_{n}) a sequence of bounded linear functionals on X such that for each x in X the supremum

φ(x) = sup{ u_{n}(x) : n ∈ N }

exists. Then there exists c > 0 such that u_{n}(x) ≤ c for all n∈N and all unit vectors x of X.
Each of these statements appears to contradict known classical theorems. However, the comparison with classical mathematics should not be made superficially: in order to understand that there is no real contradiction here, we must appreciate that the meaning of such terms as “function” and even “real number” in intuitionistic mathematics is quite different from that in the classical setting.
Brouwer's introspection over the nature of functions and the continuum led him to a second principle, which, unlike that of continuous choice, is classically valid. This principle requires a little more background for its explanation.
For any set S we denote by S^{*} the set of all finite sequences of elements of S, including the empty sequence ( ). If α = (a_{1},…,a_{n}) is in S^{*}, then n is called the length of α and is denoted by |α|. If m∈N, and α∈S^{N} or α∈S^{*} is of length at least m, then we denote by α(m) the finite sequence consisting of the first m terms of α. Note that α(0) = ( ). If α∈S^{*} and β = α(m) for some m, we say that α is an extension of β, and that β is a restriction of α.
A subset σ of S is said to be detachable (from S) if

∀x ∈ S (x∈σ ∨ x∉σ).

A detachable subset σ of N^{*} is called a fan if, for each α ∈ σ, the set

{ n ∈ N : α * n ∈ σ }

is finite or empty, where α * n denotes the finite sequence obtained by adjoining the natural number n to the terms of α.
A path in a fan σ is a sequence α, finite or infinite, such that α(n) ∈ σ for each applicable n. We say that a path α is blocked by a subset B if some restriction of α is in B; if no restriction of α is in B, we say that α misses B. A subset B of a fan σ is called a bar for σ if each infinite path in σ is blocked by B; a bar B for σ is uniform if there exists n∈N such that each path of length n is blocked by B.
At last we can state Brouwer's next principle of intuitionism:
Fan Theorem (FT):
Every detachable bar of a fan is uniform.
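For a concrete finite fan these notions can be checked mechanically; a sketch in which a fan is given as a set of tuples closed under restriction (the example fan and bar below are arbitrary):

```python
def is_uniform_bar(fan, bar, n):
    # Is every path of length n in the fan blocked by the bar, i.e. does
    # every such path have some restriction (prefix) lying in the bar?
    paths = [p for p in fan if len(p) == n]
    return all(any(p[:m] in bar for m in range(n + 1)) for p in paths)

# The full binary fan of depth 2, barred by "all sequences of length 1":
fan = {(), (0,), (1,), (0, 0), (0, 1), (1, 0), (1, 1)}
bar = {(0,), (1,)}
```

Here is_uniform_bar(fan, bar, 2) returns True: every path of length 2 passes through the bar at its first term. The constructive content of FT lies in infinite fans, where no such finite enumeration of paths is available.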
In its classical contrapositive form, the fan theorem is known as König's Lemma: if for every n there exists a path of length n that misses B, then there exists an infinite path that misses B (see Dummett 2000, 68–71). It is simple to construct a Brouwerian counterexample to König's Lemma.
Attempts to prove the fan theorem constructively rely on an analysis of how we could know that a subset is a bar, and led Brouwer to a notion of bar induction, which we shall not describe here.
Of the many applications of Brouwer's principles, the most famous is his uniform continuity theorem (which follows from the pointwise continuity consequences of CC1–2 together with FT):
Every mapping from a compact (that is, complete, totally bounded) metric space into a metric space is uniformly continuous.

The reader is warned once again to interpret this carefully within Brouwer's intuitionistic framework, and not to jump to the erroneous conclusion that intuitionism contradicts classical mathematics. It is more sensible to regard the two types of mathematics as incomparable. For further discussion, see the entry on intuitionistic logic.
Unfortunately — and perhaps inevitably, in the face of opposition from mathematicians of such stature as Hilbert — Brouwer's intuitionist school of mathematics and philosophy became more and more involved in what, at least to classical mathematicians, appeared to be quasi-mystical speculation about the nature of constructive thought, to the detriment of the practice of constructive mathematics itself. This unfortunate polarisation between the Brouwerians and the Hilbertians culminated in the notorious Grundlagenstreit of the 1920s, details of which can be found in the Brouwer biographies by van Dalen (1999) and van Stigt (1990).
One obstacle faced by the mathematician attempting to come to grips with RUSS (the recursive constructive mathematics of the Russian school founded by Markov) is that, expressed in the language of recursion theory, it is not easily readable; indeed, on opening a page of Kushner's excellent lectures (1985), one might be forgiven for wondering whether this is analysis or logic. Fortunately, one can get to the heart of RUSS by an axiomatic approach due to Fred Richman (1983) (see also Bridges & Richman 1987).
First, we define a set S to be countable if there is a mapping from a detachable subset of N onto S. By a partial function on N we mean a mapping whose domain is a subset of N; if the domain is N itself, then we call the function total. Richman's approach to RUSS is based on intuitionistic logic and a single axiom, namely,
Computable Partial Functions (CPF):
There is an enumeration φ_{0},φ_{1},… of the set of all partial functions from N to N with countable domains.

It is remarkable what can be deduced cleanly and quickly using this principle. For example, we can prove the following result, which almost immediately shows that LLPO, and hence LPO, is false in the recursive setting.

There is a total function ƒ : N × N → {0,1} such that

- for each m there exists at most one n such that ƒ(m,n) = 1; and
- for each total function g : N → {0,1}, there exist m, k in N such that ƒ(m, 2k + g(m)) = 1.

Of more interest, however, are results such as the following within RUSS.
For each ε > 0 there exists a sequence (I_{1},I_{2},…) of bounded open intervals of R such that

(i) R ⊆ ∪_{n=1}^{∞} I_{n}, and

(ii) ∑_{n=1}^{N} |I_{n}| < ε for each N.
From a classical viewpoint, these results fit into place when one realises that words such as "function" and "real number" should be interpreted as "recursive function" and "recursive real number" respectively. Note that the second of these four is a strong recursive counterexample to the open-cover compactness property of the (recursive) real line; and the fourth is a recursive counterexample to the classical theorem that every uniformly continuous mapping of a compact set into R attains its infimum.
Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists. (Hilbert 1928)

Not only did Bishop's mathematics (BISH) have the advantage of readability — if you open Bishop's book at any page, what you see is clearly recognisable as analysis, even if, from time to time, his moves in the course of a proof may appear strange to one schooled in the use of the law of excluded middle — but, unlike intuitionistic or recursive mathematics, it admits many different interpretations. Intuitionistic mathematics, Markov's recursive constructive mathematics, and even classical mathematics all provide models of BISH. In fact, the results and proofs in BISH can be interpreted, with at most minor amendments, in any reasonable model of computable mathematics, such as, for example, Weihrauch's Type Two Effectivity Theory (Weihrauch 1996, 2000).
How is this multiple interpretability achieved? At least in part by Bishop's refusal to pin down his primitive notion of “algorithm” or, in his words, “finite routine”. This refusal has led to the criticism that his approach lacks the precision that a logician would normally expect of a foundational system. However, this criticism can be overcome by looking more closely at what practitioners of BISH actually do, as distinct from what Bishop may have thought he was doing, when they prove theorems: in practice, they are doing mathematics with intuitionistic logic. Experience shows that the restriction to intuitionistic logic always forces mathematicians to work in a manner that, at least informally, can be described as algorithmic; so algorithmic mathematics appears to be equivalent to mathematics that uses only intuitionistic logic. If that is the case, then we can practice constructive mathematics using intuitionistic logic on any reasonably defined mathematical objects, not just some class of “constructive objects”.
This view appears to have first been put forward by Richman (1990, 1996). In taking the logic as the primary characteristic of constructive mathematics, it does not reflect the primacy of mathematics over logic that was part of the belief of Brouwer, Heyting, Markov, Bishop, and other pioneers of constructivism. On the other hand, it does capture the essence of constructive mathematics in practice.
Thus one might distinguish between the ontological constructivism of Brouwer and others who are led to constructive mathematics through a belief that mathematical objects are mental creations, and the epistemological constructivism of Richman and those who see constructive mathematics as characterised by its methodology, based on the use of intuitionistic logic. Of course, the former approach to constructivism inevitably leads to the latter; and the latter is certainly not inconsistent with a Brouwerian ontology.
Here, in Martin-Löf's own words, is an informal explanation of the ideas underlying his type theory (ML):
We shall think of mathematical objects or constructions. Every mathematical object is of a certain kind or type [… and] is always given together with its type. […] A type is defined by describing what we have to do in order to construct an object of that type. […] Put differently, a type is well-defined if we understand […] what it means to be an object of that type. Thus, for instance N → N [functions from N to N] is a type, not because we know particular number theoretic functions like the primitive recursive ones, but because we think we understand the notion of number theoretic function in general. (1975)

In particular, in this system every proposition can be represented as a type: namely, the type of proofs of the proposition. Conversely, each type determines a proposition: namely, the proposition that the type in question is nonempty. So when we think of a certain type T as a proposition, we interpret the formula

x ∈ T

as "x is a proof of the proposition T".
Martin-Löf goes on to construct new types, such as Cartesian products and disjoint unions, from old. For example, the Cartesian product
(Πx ∈ A) B(x)
is the type of functions that take an arbitrary object x of type A into an object of type B(x). In the propositions-as-types interpretation, where B(x) represents a proposition, the above Cartesian product corresponds to the universal proposition

(∀x ∈ A) B(x).
He distinguishes carefully between proofs and derivations: a proof object is a witness to the fact that some proposition has been proved, whereas a derivation is the record of the construction of a proof object. Also, Martin-Löf exercises two basic forms (one dare not say "types" here!) of judgement. The first is a relation between proof objects and propositions, the second a property of some propositions. In the first case, the judgement is either one that a proof object a is a witness to a proposition A, or else one that two proof objects a and b are equal and both witness that A has been proved. The first case of the second form of judgement states that a proposition A is well-formed, and the second records that two propositions A and B are equal.
There is a careful, and highly detailed, set of rules for formalising ML. We will not go into those here, but refer the reader to other sources such as Bridges & Reeves 1999. However, there is one further technical matter we would like to mention: the axiom of choice is derivable in ML.
The full axiom of choice can be stated as follows:
If A, B are nonempty sets, and S a subset of A×B such that

∀x∈A ∃y∈B ((x,y) ∈ S), (1)

then there exists a choice function ƒ : A → B such that

∀x ∈ A ((x,ƒ(x)) ∈ S).
Now, if this is to hold under a constructive interpretation, then for a given x∈A, the value ƒ(x) of the choice function will depend not only on x but also on the data proving that x belongs to A. In general, we cannot expect to produce a choice function of this sort. On the other hand, the BHK interpretation of (1) is that there is an algorithm which, applied to any given x∈A, produces an element y∈B such that (x,y) ∈ S. If A is a completely presented (or, in Bishop's words, basic) set, one for which no work beyond the construction of each element in the set is required to prove that the element does indeed belong to A, then we might reasonably expect the algorithm to be a choice function. In Bishop-style mathematics, basic sets are rare (N is one). In Martin-Löf's type theory, every set is completely presented and, in keeping with what we wrote above about the BHK interpretation of (1), the axiom of choice is derivable therein. In this connection, we should point the reader to the Goodman & Myhill (1978) proof that the axiom of choice entails the law of excluded middle (see also Problem 2 on page 58 of Bishop 1967). Clearly, the Goodman–Myhill theorem applies under the assumption that not every set is completely presented.
Every constructive proof embodies an algorithm that, in principle, can be extracted and recast as a computer program; moreover, the constructive proof is itself a verification that the algorithm is correct — that is, meets its specification. One major advantage of Martin-Löf's formal approach to constructive mathematics is that it greatly facilitates the extraction of programs from proofs. This has led to serious work on the implementation of constructive mathematics in various locations (see Martin-Löf 1980, Constable 1986, and Hayashi & Nakano 1988).
The traditional route taken by mathematicians wanting to analyse the constructive content of mathematics is the one that follows classical logic; in order to avoid decisions, such as whether or not a real number equals 0, that cannot be made by a real computer, the mathematician then has to keep within strict algorithmic boundaries such as those formed by recursive function theory. In contrast, the route taken by the constructive mathematician follows intuitionistic logic, which automatically takes care of computationally inadmissible decisions. This logic (together with an appropriate settheoretic framework) suffices to keep the mathematician within constructive boundaries. Thus one is free to work in the natural style of an analyst, algebraist (e.g., Mines, et al. 1988), geometer, topologist (e.g., Bridges & Vîta 2003), or other normal mathematician, the only constraints being those imposed by intuitionistic logic. As Bishop and others have shown, the traditional belief promulgated by Hilbert and still widely held today, that intuitionistic logic imposes such restrictions as to render the development of serious mathematics impossible, is patently false: large parts of deep modern mathematics can be, and have been, produced by purely constructive methods. Moreover, the link between constructive mathematics and programming holds great promise for the future implementation and development of abstract mathematics on the computer.
Douglas Bridges d.bridges@math.canterbury.ac.nz 