Definitions have interested philosophers since ancient times. Plato’s early dialogues portray Socrates raising questions about definitions (e.g., in the Euthyphro, “What is piety?”)—questions that seem at once profound and elusive. The key step in Anselm’s “Ontological Proof” for the existence of God is the definition of “God,” and the same holds of Descartes’s version of the argument in his Meditation V. More recently, the Frege-Russell definition of number and Tarski’s definition of truth have exercised a formative influence on a wide range of contemporary philosophical debates. In all these cases—and many others can be cited—not only have particular definitions been debated; the nature of, and demands on, definitions have also been debated. Some of these debates can be settled by making requisite distinctions, for definitions are not all of one kind: definitions serve a variety of functions, and their general character varies with function. Some other debates, however, are not so easily settled, as they involve contentious philosophical ideas such as essence, concept, and meaning.
Ordinary discourse recognizes several different kinds of things as possible objects of definition, and it recognizes several kinds of activity as defining a thing. To give a few examples, we speak of a commission as defining the boundary between two nations; of the Supreme Court as defining, through its rulings, “person” and “citizen”; of a chemist as discovering the definition of gold, and the lexicographer, that of ‘cool’; of a participant in a debate as defining the point at issue; and of a mathematician as laying down the definition of “group.” Different kinds of things are objects of definition here: boundary, legal status, substance, word, thesis, and abstract kind. Moreover, the different definitions do not all have the same goal: the boundary commission may aim to achieve precision; the Supreme Court, fairness; the chemist and the lexicographer, accuracy; the debater, clarity; and the mathematician, fecundity. The standards by which definitions are judged are thus liable to vary from case to case. The different definitions can perhaps be subsumed under the Aristotelian formula that a definition gives the essence of a thing. But this only highlights the fact that “to give the essence of a thing” is not a unitary kind of activity.
In philosophy, too, several different kinds of definitions are often in play, and definitions can serve a variety of different functions (e.g., to enhance precision and clarity). But, in philosophy, definitions have also been called in to serve a highly distinctive role: that of solving epistemological problems. For example, the epistemological status of mathematical truths raises a problem. Immanuel Kant thought that these truths are synthetic a priori, and to account for their status, he offered a theory of space and time—namely, of space and time as forms of, respectively, outer and inner sense. Gottlob Frege and Bertrand Russell sought to undermine Kant’s theory by arguing that arithmetical truths are analytic. More precisely, they attempted to construct a derivation of arithmetical principles from definitions of arithmetical concepts, using only logical laws. For the Frege-Russell project to succeed, the definitions used must have a special character. They must be conceptual or explicative of meaning; they cannot be synthetic. It is this kind of definition that has aroused, over the past century or so, the most interest and the most controversy. And it is this kind of definition that will be our primary concern. Let us begin by marking some preliminary but important distinctions.
John Locke distinguished, in his Essay, “real essence” from “nominal essence.” Nominal essence, according to Locke, is the “abstract Idea to which the Name is annexed (III.vi.2).” Thus, the nominal essence of the name ‘gold’, Locke said, “is that complex Idea the word Gold stands for, let it be, for instance, a Body yellow, of a certain weight, malleable, fusible, and fixed.” In contrast, the real essence of gold is “the constitution of the insensible parts of that Body, on which those Qualities [mentioned in the nominal essence] and all other Properties of Gold depend (III.vi.2).” A rough way of marking the distinction between real and nominal definitions is to say, following Locke, that the former states real essence, while the latter states nominal essence. The chemist aims at real definition, whereas the lexicographer aims at nominal definition.
This characterization of the distinction is rough because a zoologist’s definition of “tiger” should count as a real definition, even though it may fail to provide “the constitution of the insensible parts” of the tiger. Moreover, an account of the meaning of a word should count as a nominal definition, even though it may not take the Lockean form of setting out “the abstract idea to which the name is annexed.” Perhaps it is helpful to indicate the distinction between real and nominal definitions thus: to discover the real definition of a term \(X\) one needs to investigate the thing or things denoted by \(X\); to discover the nominal definition, one needs to investigate the meaning and use of \(X\). Whether the search for an answer to the Socratic question “What is virtue?” is a search for real definition or one for nominal definition depends upon one’s conception of this particular philosophical activity. When we pursue the Socratic question, are we trying to gain a clearer view of our uses of the word ‘virtue’, or are we trying to give an account of an ideal that is to some extent independent of these uses? Under the former conception, we are aiming at a nominal definition; under the latter, at a real definition.
See Robinson 1950 for a critical discussion of the different activities that have been subsumed under “real definition,” and see Charles 2010 for ancient views on the topic. Fine 1994 defends the conception that a real definition defines an object by specifying what the object is; in other words, a real definition spells out the essence of the object defined. Rosen 2015 offers an explanation of real definition in terms of grounding: the definition provides the ground of the essence of the object. The meanings of ‘essence’ and of ‘ground’ remain under active debate, however.
Nominal definitions—definitions that explain the meaning of a term—are not all of one kind. A dictionary explains the meaning of a term, in one sense of this phrase. Dictionaries aim to provide definitions that contain sufficient information to impart an understanding of the term. It is a fact about us language users that we somehow come to understand and use a potential infinity of sentences containing a term once we are given a certain small amount of information about the term. Exactly how this happens is a large mystery. But it does happen, and dictionaries exploit the fact. Note that dictionary entries are not unique. Different dictionaries can give different bits of information and yet be equally effective in explaining the meanings of terms.
Definitions sought by philosophers are not of the sort found in a dictionary. Frege’s definition of number (1884) and Alfred Tarski’s definition of truth (1983, ch. 8) are not offered as candidates for dictionary entries. When an epistemologist seeks a definition of “knowledge,” she is not seeking a good dictionary entry for the word ‘know’. The philosophical quest for definition can sometimes fruitfully be characterized as a search for an explanation of meaning. But the sense of ‘explanation of meaning’ here is very different from the sense in which a dictionary explains the meaning of a word.
A stipulative definition imparts a meaning to the defined term, and involves no commitment that the assigned meaning agrees with prior uses (if any) of the term. Stipulative definitions are epistemologically special. They yield judgments with epistemological characteristics that are puzzling elsewhere. If one stipulatively defines a “raimex” as, say, a rational, imaginative, experiencing being, then the judgment “raimexes are rational” is assured of being necessary, certain, and a priori. Philosophers have found it tempting to explain the puzzling cases of, e.g., a priori judgments, by an appeal to stipulative definitions.
Saul Kripke (1980) has drawn attention to a special kind of stipulative definition. We can stipulatively introduce a new name (e.g., ‘Jack the Ripper’) through a description (e.g., “the man who murdered \(X, Y\), and \(Z\)”). In such a stipulation, Kripke pointed out, the description serves only to fix the reference of the new name; the name is not synonymous with the description. For, the judgment
(1) Jack the Ripper is the man who murdered \(X, Y\), and \(Z\), if a unique man committed the murders
is contingent, even though the judgment
Jack the Ripper is Jack the Ripper, if a unique man committed the murders
is necessary. A name such as ‘Jack the Ripper’, Kripke argued, is rigid: it picks out the same individual across possible worlds; the description, on the other hand, is non-rigid. Kripke used such reference-fixing stipulations to argue for the existence of contingent a priori truths—(1) being an example. Reference-fixing stipulative definitions can be given not only for names but also for terms in other categories, e.g., common nouns.
See Frege 1914 for a defense of the austere view that, in mathematics at least, only stipulative definitions should be countenanced. [1]
Descriptive definitions, like stipulative ones, spell out meaning, but they also aim to be adequate to existing usage. When philosophers offer definitions of, e.g., ‘know’ and ‘free’, they are not being stipulative: a lack of fit with existing usage is an objection to them.
It is useful to distinguish three grades of descriptive adequacy of a definition: extensional, intensional, and sense. A definition is extensionally adequate iff there are no actual counterexamples to it; it is intensionally adequate iff there are no possible counterexamples to it; and it is sense adequate (or analytic) iff it endows the defined term with the right sense. (The last grade of adequacy itself subdivides into different notions, for “sense” can be spelled out in several different ways.) The definition “Water is H\(_2\)O,” for example, is intensionally adequate because the identity of water and H\(_2\)O is necessary (assuming the Kripke-Putnam view about the rigidity of natural-kind terms); the definition is therefore extensionally adequate also. But it is not sense-adequate, for the sense of ‘water’ is not at all the same as that of ‘H\(_2\)O’. The definition ‘George Washington is the first President of the United States’ is extensionally adequate but not adequate in the other two grades, while ‘man is a laughing animal’ fails to be adequate in all three grades. When definitions are put to an epistemological use, intensional adequacy is generally insufficient. For definitions that are merely intensionally adequate cannot underwrite the rationality or the apriority of a problematic subject matter.
See Quine 1951 & 1960 for skepticism about analytic definitions; see also the entry on the analytic/synthetic distinction. Horty 2007 offers some ways of thinking about senses of defined expressions, especially within a Fregean semantic theory.
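The first two grades of adequacy lend themselves to a toy formal model; sense adequacy does not. In the sketch below (a hedged illustration, not drawn from the literature), possible worlds are finite dicts assigning each individual a set of primitive properties, and a schematic term ‘F’ is defined by the schematic definiens ‘G and H’:

```python
def divergences(world, term, definiens):
    """Individuals on which the defined term and its definiens disagree."""
    return {x for x, props in world.items() if (term in props) != definiens(props)}

# Schematic definition: F =df G & H.
g_and_h = lambda props: {"G", "H"} <= props

# The actual world: F and the definiens happen to coincide.
actual = {"a": {"F", "G", "H"}, "b": {"G"}}
# A merely possible world: c satisfies the definiens but lacks F.
possible = {"a": {"F", "G", "H"}, "c": {"G", "H"}}

# Extensionally adequate: no actual counterexample ...
assert not divergences(actual, "F", g_and_h)
# ... but not intensionally adequate: a possible counterexample exists.
assert divergences(possible, "F", g_and_h) == {"c"}
```

On the Kripke-Putnam view, ‘water is H\(_2\)O’ would show no divergence in any world while remaining sense-inadequate, a difference this purely extensional model cannot register.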
Sometimes a definition is offered neither descriptively nor stipulatively but as what Rudolf Carnap (1956, §2) called an explication. An explication aims to respect some central uses of a term but is stipulative on others. The explication may be offered as an absolute improvement of an existing, imperfect concept. Or, it may be offered as a “good thing to mean” by the term in a specific context for a particular purpose. (The quoted phrase is due to Alan Ross Anderson; see Belnap 1993, 117.)
A simple illustration of explication is provided by the definition of ordered pair in set theory. Here, the pair \(\langle x,y\rangle\) is defined as the set \(\{\{x\}, \{x, y\}\}\). Viewed as an explication, this definition does not purport to capture all aspects of the antecedent uses of ‘ordered pair’ in mathematics (and in ordinary life); instead, it aims to capture the essential uses. The essential fact about our use of ‘ordered pair’ is that it is governed by the principle that pairs are identical iff their respective components are identical:
\[ \langle x, y\rangle = \langle u, v\rangle \text{ iff } x = u \amp y = v. \]
And it can be verified that the above definition satisfies the principle. The definition does have some consequences that do not accord with the ordinary notion. For example, the definition implies that an object \(x\) is a member of a member of the pair \(\langle x, y\rangle\), and this implication is no part of the ordinary notion. But the mismatch is not an objection to the explication. What is important for explication is not antecedent meaning but function. So long as the latter is preserved, the former can be let go. It is this feature of explication that led W. V. O. Quine (1960, §53) to extol its virtues and to uphold the definition of “ordered pair” as a philosophical paradigm.
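The claims just made about the explication can be checked mechanically. The sketch below (an illustration only, with Python frozensets standing in for sets) verifies the identity principle over a small domain, as well as the extraneous membership consequence:

```python
from itertools import product

def kpair(x, y):
    """The set-theoretic explication: <x, y> as the set {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# The governing principle: <x, y> = <u, v> iff x = u and y = v.
domain = range(3)
for x, y, u, v in product(domain, repeat=4):
    assert (kpair(x, y) == kpair(u, v)) == (x == u and y == v)

# The extraneous consequence: x is a member of a member of <x, y>.
assert any(0 in member for member in kpair(0, 1))
```

Note that the degenerate case \(x = y\) is handled correctly: \(\{\{x\}, \{x, x\}\}\) collapses to the singleton \(\{\{x\}\}\), and the identity principle still holds.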
The truth-functional conditional provides another illustration of explication. This conditional differs from the ordinary conditional in some essential respects. Nevertheless, the truth-functional conditional can be put forward as an explication of the ordinary conditional for certain purposes in certain contexts. Whether the proposal is adequate depends crucially on the purposes and contexts in question. That the two conditionals differ in important, even essential, respects does not automatically disqualify the proposal.
Ostensive definitions typically depend on context and on experience. Suppose the conversational context renders one dog salient among several that are visible. Then one can introduce the name ‘Freddie’ through the stipulation “let this dog be called ‘Freddie’.” For another example, suppose you are looking at a branch of a bush and you stipulatively introduce the name ‘Charlie’ thus: “let the insect on that branch be called ‘Charlie’.” This definition can pin a referent on ‘Charlie’ even if there are many insects on the branch. If your visual experience presents you with only one of these insects (say, because the others are too small to be visible), then that insect is the denotation of your use of the description ‘the insect on that branch’. We can think of experience as presenting the subject with a restricted portion of the world. This portion can serve as a point of evaluation for the expressions in an ostensive definition. [2] Consequently, the definition can with the aid of experience pin a referent on the defined term when without this aid it would fail to do so. In the present example, the description ‘the insect on that branch’ fails to be denoting when it is evaluated at the world as a whole, but it is denoting when it is evaluated at that portion of it that is presented in your visual experience. See Gupta 2019 for an account of the contribution of experience to the meaning of an ostensively defined term.
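The idea that experience supplies a restricted portion of the world as a point of evaluation can be given a computational sketch. In the toy model below (all details hypothetical), the branch carries several insects, only one of which is presented in the subject’s visual experience:

```python
def denotation(description, portion):
    """A definite description denotes, relative to a portion of the world,
    only if exactly one thing in that portion satisfies it."""
    satisfiers = [x for x in portion if description(x)]
    return satisfiers[0] if len(satisfiers) == 1 else None

# Three insects on the branch; only one is large enough to be visible.
world = [{"id": 1, "on_branch": True, "visible": False},
         {"id": 2, "on_branch": True, "visible": True},
         {"id": 3, "on_branch": True, "visible": False}]
insect_on_branch = lambda x: x["on_branch"]
experienced = [x for x in world if x["visible"]]

# Evaluated at the world as a whole, the description fails to denote ...
assert denotation(insect_on_branch, world) is None
# ... but evaluated at the experienced portion, it pins down a referent.
assert denotation(insect_on_branch, experienced)["id"] == 2
```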
An ostensive definition can bring about an essential enrichment of a language. The ostensive definition of ‘Charlie’ enriches the language with a name of a particular insect, and it could well be that before the enrichment the language lacked resources to denote that particular insect. Unlike other familiar definitions, ostensive definitions can introduce terms that are ineliminable. (So ostensive definitions can fail to meet the criteria of Eliminability and Conservativeness, both explained below.)
The capacity of ostensive definitions to introduce essentially new vocabulary has led some thinkers to view them as the source of all primitive concepts. Thus, Russell maintains in Human Knowledge that
all nominal definitions, if pushed back far enough, must lead ultimately to terms having only ostensive definitions, and in the case of an empirical science the empirical terms must depend upon terms of which the ostensive definition is given in perception. (p. 242)
In “Meaning and Ostensive Definition”, C. H. Whiteley takes it as a premiss that ostensive definitions are “the means whereby men learn the meanings of most, if not all, of those elementary expressions in their language in terms of which other expressions are defined.” (332) It should be noted, however, that nothing in the logic and semantics of ostensive definitions warrants a foundationalist picture of concepts or of language-learning. Such foundationalist pictures were decisively criticized by Ludwig Wittgenstein in his Philosophical Investigations. Wittgenstein’s positive views on ostensive definition remain elusive, however; for an interpretation, see Hacker 1975.
Ostensive definitions are important, but our understanding of them remains at a rudimentary level. They deserve greater attention from logicians and philosophers.
The kinds into which we have sorted definitions are not mutually exclusive, nor exhaustive. A stipulative definition of a term may, as it happens, be extensionally adequate to the antecedent uses of the term. A dictionary may offer ostensive definitions of some words (e.g., of color words). An ostensive definition can also be explicative. For example, one can offer an improvement of a preexisting concept “one foot” thus: “let one foot be the present length of that rod.” In its preexisting use, the concept “one foot” may be quite vague; the ostensively introduced explication may, in contrast, be relatively precise. Moreover, as we shall see below, there are other kinds of definition than those considered so far.
Many definitions—stipulative, descriptive, and explicative—can be analyzed into three elements: the term that is defined \((X)\), an expression containing the defined term \((\ldots X\ldots)\), and another expression \((- - - - - - -)\) that is equated by the definition with this expression. Such definitions can be represented thus:
\[\tag{2} X: \ldots X \ldots \eqdf - - - - - - - . \]
(We are setting aside ostensive definitions, which plainly require a richer representation.) When the defined term is clear from the context, the representation may be simplified to

\[\ldots X \ldots \eqdf - - - - - - - . \]
The expression on the left-hand side of ‘\(\eqdf\)’ (i.e., \(\ldots X\ldots)\) is the definiendum of the definition, and the expression on the right-hand side is its definiens—it being assumed that the definiendum and the definiens belong to the same logical category. Note the distinction between defined term and definiendum: the defined term in the present example is \(X\); the definiendum is the unspecified expression on the left-hand side of ‘\(\eqdf\)’, which may or may not be identical to \(X\). (Some authors call the defined term ‘the definiendum’; some others use the expression confusedly, sometimes to refer to the defined term and sometimes to the definiendum proper.) Not all definitions found in the logical and philosophical literature fit under scheme (2). Partial definitions, for example, fall outside the scheme; another example is provided by definitions of logical constants in terms of introduction and elimination rules governing them. Nonetheless, definitions that conform to (2) are the most important, and they will be our primary concern.
Let us focus on stipulative definitions and reflect on their logic. Some of the important lessons here carry over, as we shall see, to descriptive and explicative definitions. For simplicity, let us consider the case where a single definition stipulatively introduces a term. (Multiple definitions bring notational complexity but raise no new conceptual issues.) Suppose, then, that a ground language \(L\) is expanded through the addition of a new term \(X\), resulting in an expanded language \(L^{+}\), where \(X\) is stipulatively defined by a definition \(\mathcal{D}\) of form (2). What logical rules govern \(\mathcal{D}\)? What requirements must the definition fulfill?
Before we address these questions, let us take note of a distinction that is not marked in logic books but which is useful in thinking about definitions. In one kind of definition—call it homogeneous definition—the defined term and the definiendum belong to the same logical category. So, a singular term is defined via a singular term; a general term via a general term; a sentence via a sentence; and so on. Let us say that a homogeneous definition is regular iff its definiendum is identical to the defined term. Here is an example of a regular homogeneous definition:

\[\tag{3} \textit{The True} \eqdf \text{everything is identical to itself}. \]
Note that ‘The True’, as defined above, belongs to the category of sentence, not that of singular term.
It is sometimes said that definitions are mere recipes for abbreviations. Thus, Alfred North Whitehead and Bertrand Russell say of definitions—in particular, those used in Principia Mathematica—that they are “strictly speaking, typographical conveniences (1925, 11).” This viewpoint has plausibility only for regular homogeneous definitions—though it is not really tenable even here. (Whitehead and Russell’s own observations make it plain that their definitions are more than mere “typographical conveniences.” [3] ) The idea that definitions are mere abbreviations is not at all plausible for the second kind of definition, to which we now turn.
In the second kind of definition—call it a heterogeneous definition—the defined term and the definiendum belong to different logical categories. So, for example, a general term (e.g., ‘man’) may be defined using a sentential definiendum (e.g., ‘\(x\) is a man’). For another example, a singular term (e.g., ‘1’) may be defined using a predicate (e.g., ‘is identical to 1’). Heterogeneous definitions are far more common than homogeneous ones. In familiar first-order languages, for instance, it is pointless to define, say, a one-place predicate \(G\) by a homogeneous definition. These languages have no resources for forming compound predicates; hence, the definiens of a homogeneous definition of \(G\) is bound to be atomic. In a heterogeneous definition, however, the definiens can easily be complex; for example,
\[\tag{4} Gx \eqdf x \gt 3 \amp x \lt 10. \]
If the language has a device for nominalization of predicates—e.g., a class abstraction operator—we could give a different sort of heterogeneous definition for \(G\):

\[\tag{5} G \eqdf \{x : x \gt 3 \amp x \lt 10\}. \]
Observe that a heterogeneous definition such as (4) is not a mere abbreviation. For one thing, we regard the expression \(x\) in it as a genuine variable that admits of substitution and binding; the definiendum \(Gx\) is thus not a mere abbreviation for the definiens. Moreover, if such definitions were abbreviations, they would be subject to the requirement that the definiendum be shorter than the definiens; no such requirement exists. On the other hand, if definitions were mere abbreviations, genuine requirements on definitions would make little sense. The following stipulation is not a legitimate definition:
\[\tag{6} Gx \eqdf x \gt y \amp x \lt 10. \]
But if it is viewed as a mere abbreviation, there is nothing illegitimate about it. (Indeed, mathematicians routinely make use of abbreviations of this kind, suppressing variables that are temporarily uninteresting.)
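The difference between (4) and (6) can be made vivid model-theoretically. In the sketch below (a toy domain of small numbers, chosen for illustration), the definiens of (4) fixes a single extension for \(G\), whereas the free variable \(y\) in (6) leaves a different candidate extension for each assignment, so no one concept is singled out:

```python
DOMAIN = range(20)

# Definition (4): the definiens determines one extension for G outright.
ext4 = {x for x in DOMAIN if x > 3 and x < 10}
assert ext4 == {4, 5, 6, 7, 8, 9}

# Stipulation (6): the free variable y makes the "extension" relative to an
# assignment, so no single concept is fixed for G.
ext6 = lambda y: {x for x in DOMAIN if x > y and x < 10}
assert ext6(3) == ext4          # one assignment happens to agree with (4)
assert ext6(5) != ext6(3)       # but other assignments disagree
```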
Some stipulative definitions are nothing but mere devices of abbreviation (e.g., the definitions governing the omission of parentheses in formulas; see Church 1956, §11). However, many stipulative definitions are not of this kind; they introduce meaningful items into our discourse. Thus, definition (4) renders \(G\) a meaningful unary predicate: \(G\) expresses, in virtue of (4), a particular concept. In contrast, under stipulation (6), \(G\) is not a meaningful predicate and expresses no concept of any kind. But what is the source of the difference? Why is (4) legitimate, but not (6)? More generally, when is a definition legitimate? What requirements must the definiens fulfill? And, for that matter, the definiendum? Must the definiendum be, for instance, atomic, as in (3) and (4)? If not, what restrictions (if any) are there on the definiendum?
It is a plausible requirement on any answer to these questions that two criteria be respected. [4] First, a stipulative definition should not enable us to establish essentially new claims—call this the Conservativeness criterion. We should not be able to establish, by means of a mere stipulation, new things about, for example, the moon. It is true that unless this criterion is made precise, it is subject to trivial counterexamples, for the introduction of a definition materially affects some facts. Nonetheless, the criterion can be made precise and defensible, and we shall soon see some ways of doing this.
Second, the definition should fix the use of the defined expression \(X\)—call this the Use criterion. This criterion is plausible, since only the definition—and nothing else—is available to guide us in the use of \(X\). There are complications here, however. What counts as a use of \(X\)? Are occurrences within the scope of ‘say’ and ‘know’ included? What about the occurrence of \(X\) within quotation contexts, and those within words, for instance, ‘Xenophanes’? The last question should receive, it is clear, the answer, “No.” But the answers to the previous questions are not so clear. There is another complication: even if we can somehow separate out genuine occurrences of \(X\), it may be that some of these occurrences are rightfully ignored by the definition. For example, a definition of quotient may leave some occurrences of the term undefined (e.g., where there is division by 0). The orthodox view is to rule such definitions illegitimate, but the orthodoxy deserves to be challenged here. Let us leave the challenge to another occasion, however, and proceed to bypass the complications through idealization. Let us confine ourselves to ground languages that possess a clearly determined logical structure (e.g., a first-order language) and that contain no occurrences of the defined term \(X\). And let us confine ourselves to definitions that place no restrictions on legitimate occurrences of \(X\). The Use criterion then dictates that the definition should fix the use of all expressions in the expanded language in which \(X\) occurs.
A variant formulation of the Use criterion is this: the definition must fix the meaning of the definiendum. The new formulation is less determinate and more contentious, for it relies on “meaning,” an ambiguous and theoretically contentious notion.
Note that the two criteria govern all stipulative definitions, irrespective of whether they are single or multiple, or of whether they are of form (2) or not.
The traditional account of definitions is founded on three ideas. The first idea is that definitions are generalized identities; the second, that the sentential is primary; and the third, that of reduction. The first idea—that definitions are generalized identities—motivates the traditional account’s inferential rules for definitions. These are, put crudely, that (i) any occurrence of the definiendum can be replaced by an occurrence of the definiens (Generalized Definiendum Elimination); and, conversely, (ii) any occurrence of the definiens can be replaced by an occurrence of the definiendum (Generalized Definiendum Introduction).
The second idea—the primacy of the sentential—has its roots in the thought that the fundamental uses of a term are in assertion and argument: if we understand the use of a defined term in assertion and argument then we fully grasp the term. And in assertion and argument, the sentential is primary. Hence, to explain the use of a defined term \(X\), the second idea maintains, it is necessary and sufficient to explain the use of sentential items that contain \(X\). (Sentential items are here understood to include sentences and sentence-like things with free variables, e.g., the definiens of (4); henceforth, these items will be called formulas.) The issues the second idea raises are, of course, large and important, but they cannot be addressed in a brief survey. Let us accept the idea simply as a given.
The third idea—reduction—is that the use of a formula \(Z\) containing the defined term is explained by reducing \(Z\) to a formula in the ground language. This idea, when conjoined with the primacy of the sentential, leads to a strong version of the Use criterion, called the Eliminability criterion: the definition must reduce each formula containing the defined term to a formula in the ground language, i.e., one free of the defined term. Eliminability is the distinctive thesis of the traditional account and, as we shall see below, it can be challenged.
Note that the traditional account does not require the reduction of all expressions of the extended language; it requires the reduction only of formulas. The definition of a predicate \(G\), for example, need provide no way of reducing \(G\), taken in isolation, to a predicate of the ground language. The traditional account is thus consistent with the thought that a stipulative definition can add a new conceptual resource to the language, for nothing in the ground language expresses the predicative concept that \(G\) expresses in the expanded language. (It remains true, nonetheless, that no new proposition—at least in the sense of truth-condition—is expressed in the expanded language.)
Let us now see how Conservativeness and Eliminability can be made precise. First consider languages that have a precise proof system of the familiar sort. Let the ground language \(L\) be one such. The proof system of \(L\) may be classical, or three-valued, or modal, or relevant, or some other; and it may or may not contain some non-logical axioms. All we assume is that we have available the notions “provable in \(L\)” and “provably equivalent in \(L\),” and also the notions “provable in \(L^{+}\)” and “provably equivalent in \(L^{+}\)” that result when the proof system of \(L\) is supplemented with a definition \(\mathcal{D}\) and the logical rules governing definitions. Now, the Conservativeness criterion can be made precise as follows.
Conservativeness criterion (syntactic formulation): Any formula of \(L\) that is provable in \(L^{+}\) is provable in \(L\).
That is, any formula of \(L\) that is provable using definition \(\mathcal{D}\) is also provable without using \(\mathcal{D}\): the definition does not enable us to prove anything new in \(L\). The Eliminability criterion can be made precise thus:
Eliminability criterion (syntactic formulation): For any formula \(A\) of \(L^{+}\), there is a formula of \(L\) that is provably equivalent in \(L^{+}\) to \(A\).
(Folklore credits the Polish logician S. Leśniewski for formulating the criteria of Conservativeness and Eliminability, but this is a mistake; see Dudman 1973, Hodges 2008, Urbaniak and Hämäri 2012 for discussion and further references.) [5]
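For a definition in the style of (4), the syntactic side of Eliminability can be exhibited as a rewriting procedure. The sketch below treats formulas as plain strings and handles only atomic contexts of the form \(G(t)\); it is an illustration under these simplifying assumptions, not a general algorithm:

```python
import re

def eliminate_G(formula):
    """Rewrite each occurrence of G(t) by the definiens of (4), with t for x."""
    return re.sub(r"G\((\w+)\)", r"(\1 > 3 & \1 < 10)", formula)

# A formula of the expanded language reduces to one of the ground language.
expanded = "G(a) | ~G(b)"
reduced = eliminate_G(expanded)
assert reduced == "(a > 3 & a < 10) | ~(b > 3 & b < 10)"
assert "G" not in reduced   # the defined term has been eliminated
```

Since the rewrite replaces each \(G(t)\) by a provably equivalent ground-language formula, every formula of the expanded language is provably equivalent to its rewritten, \(G\)-free counterpart.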
Now let us equip \(L\) with a model-theoretic semantics. That is, we associate with \(L\) a class of interpretations, and we make available the notions “valid in \(L\) in the interpretation \(M\)” (a.k.a.: “true in \(L\) in \(M\)”) and “semantically equivalent in \(L\) relative to \(M\).” Let the notions “valid in \(L^{+}\) in \(M^{+}\)” and “semantically equivalent in \(L^{+}\) relative to \(M^{+}\)” result when the semantics of \(L\) is supplemented with that of definition \(\mathcal{D}\). The criteria of Conservativeness and Eliminability can now be made precise thus:
Conservativeness criterion (semantic formulation): For all formulas \(A\) of \(L\), if \(A\) is valid in \(L^{+}\) in all interpretations \(M^{+}\), then \(A\) is valid in \(L\) in all interpretations \(M\).
Eliminability criterion (semantic formulation): For any formula \(A\) of \(L^{+}\), there is a formula \(B\) of \(L\) such that, relative to all interpretations \(M^{+}\), \(B\) is semantically equivalent in \(L^{+}\) to \(A\).
The syntactic and semantic formulations of the two criteria are plainly parallel. However, even if we suppose that strong completeness theorems hold for \(L\) and \(L^{+}\), the two formulations need not be equivalent: it depends on our semantics for definition \(\mathcal{D}\). Indeed, several different, non-equivalent formulations of the two criteria are possible within each framework, the syntactic and the semantic.
There is another, more stringent notion of semantic conservativeness that has been prominent in the literature on truth (Halbach 2014, p. 69). Say that an interpretation \(M^+\) of \(L^+\) is an expansion of an interpretation \(M\) of \(L\) iff \(M\) and \(M^+\) assign the same domain(s) to the quantifier(s) in \(L\), and assign the same semantic values to the non-logical constants in \(L\). Then we have:
Conservativeness criterion (strong semantic formulation): Every interpretation \(M\) of \(L\) can be expanded to an interpretation \(M^+\) of \(L^+\).
In other words, a definition is strongly semantically conservative if it does not rule out any previously available interpretations of the original language.
Observe that the satisfaction of the Conservativeness and Eliminability criteria, whether in their semantic or their syntactic formulation, is not an absolute property of a definition; the satisfaction is relative to the ground language. Different ground languages can have associated with them different systems of proof and different classes of interpretations. Hence, a definition may satisfy the two criteria when added to one language, but may fail to do so when added to a different language. For further discussion of the criteria, see Suppes 1957 and Belnap 1993.
For concreteness, let us fix the ground language \(L\) to be a classical first-order language with identity. The proof system of \(L\) may contain some non-logical axioms \(T\); the interpretations of \(L\) are then the classical models of \(T\). As before, \(L^+\) is the expanded language that results when a definition \(\mathcal{D}\) of a non-logical constant \(X\) is added to \(L\); here, \(X\) may be a name, a predicate, or a function-symbol. Call two definitions equivalent iff they yield the same theorems in the expanded language. Then, it can be shown that if \(\mathcal{D}\) meets the criteria of Conservativeness and Eliminability, then \(\mathcal{D}\) is equivalent to a definition in normal form as specified below. [6] Since definitions in normal form meet the demands of Conservativeness and Eliminability, the traditional account implies that we lose nothing essential if we require definitions to be in normal form.
The normal form of definitions can be specified as follows. The definitions of names \(a\), \(n\)-ary predicates \(H\), and \(n\)-ary function symbols \(f\) must be, respectively, of the following forms:
\[\begin{align} \tag{7} a = x &\eqdf \psi(x), \\ \tag{8} H(x_1,\ldots , x_n) &\eqdf \phi(x_1,\ldots, x_n), \\ \tag{9} f(x_1,\ldots,x_n)= y &\eqdf \chi(x_1,\ldots, x_n, y), \end{align}\]
where the variables \(x_1\), …, \(x_n\), \(y\) are all distinct, and the definiens in each case satisfies conditions that can be separated into a general and a specific part. [7] The general condition on the definiens is the same in each case: it must not contain the defined term or any free variables other than those in the definiendum. The general conditions remain the same when the traditional account of definition is applied to non-classical logics (e.g., to many-valued and modal logics). The specific conditions are more variable. In classical logic, the specific condition on the definiens \(\psi(x)\) of (7) is that it satisfy an existence and uniqueness condition: that it be provable that something satisfies \(\psi(x)\) and that at most one thing satisfies \(\psi(x)\). [8] There are no specific conditions on (8), but the condition on (9) parallels that on (7). An existence and uniqueness claim must hold: the universal closure of the formula
\[\begin{align} \exists y\,\chi(x_1,\ldots, x_n, y) \amp \forall u\forall v[&\chi(x_1,\ldots, x_n, u) \\ &\amp\ \chi(x_1,\ldots, x_n, v) \rightarrow u = v] \end{align}\]
must be provable. [9]
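For a concrete illustration of form (7) and its specific condition, suppose the ground language contains ordinary arithmetic (the example is hypothetical, not drawn from the formal development above). The stipulation

\[ a = x \eqdf x \gt 2 \amp x \lt 4 \]

is then a legitimate definition of a name: arithmetic proves that something satisfies the definiens and that at most one thing does, so the definition fixes \(a\) as a name of the number 3.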
In a logic that allows for vacuous names, the specific condition on the definiens of (7) would be weaker: the existence condition would be dropped. In contrast, in a modal logic that requires names to be non-vacuous and rigid, the specific condition would be strengthened: not only must existence and uniqueness be shown to hold necessarily, it must be shown that the definiens is satisfied by one and the same object across possible worlds.
Definitions that conform to (7)–(9) are heterogeneous; the definiendum is sentential, but the defined term is not. One source of the specific conditions on (7) and (9) is their heterogeneity. The specific conditions are needed to ensure that the definiens, though not of the logical category of the defined term, imparts the proper logical behavior to it. The conditions thus ensure that the logic of the expanded language is the same as that of the ground language. This is the reason why the specific conditions on normal forms can vary with the logic of the ground language. Observe that, whatever this logic, no specific conditions are needed for regular homogeneous definitions.
The traditional account makes possible simple logical rules for definitions and also a simple semantics for the expanded language. Suppose definition \(\mathcal{D}\) has a sentential definiendum. (In classical logic, all definitions can easily be transformed to meet this condition.) Let \(\mathcal{D}\) be

\[\tag{10} \phi(x_1,\ldots, x_n) \eqdf \psi(x_1,\ldots, x_n), \]
where \(x_1\), …, \(x_n\) are all the variables free in either \(\phi\) or \(\psi\). And let \(\phi(t_1,\ldots,t_n)\) and \(\psi(t_1,\ldots,t_n)\) result by the simultaneous substitution of terms \(t_1\), …, \(t_n\) for \(x_1\), …, \(x_n\) in, respectively, \(\phi(x_1,\ldots, x_n)\) and \(\psi(x_1,\ldots, x_n)\), changing bound variables as necessary. Then the rules of inference governing \(\mathcal{D}\) are simply these: from \(\phi(t_1,\ldots,t_n)\), one may infer \(\psi(t_1,\ldots,t_n)\); and, conversely, from \(\psi(t_1,\ldots,t_n)\), one may infer \(\phi(t_1,\ldots,t_n)\).
The semantics for the extended language is also straightforward. Suppose, for instance, \(\mathcal{D}\) is a definition of a name \(a\) and suppose that, when put in normal form, it is equivalent to (7). Then, each classical interpretation \(M\) of \(L\) expands to a unique classical interpretation \(M^+\) of the extended language \(L^+\). The denotation of \(a\) in \(M^+\) is the unique object that satisfies \(\psi(x)\) in \(M\); the conditions on \(\psi(x)\) ensure that such an object exists. The semantics of defined predicates and function-symbols is similar. The logic and semantics of definitions in non-classical logics receive, under the traditional account, a parallel treatment.
Note that the inferential force of adding definition (10) to the language is the same as that of adding, as an axiom, the universal closure of
\[\tag{11} \phi(x_1,\ldots, x_n) \leftrightarrow \psi(x_1,\ldots,x_n). \]
However, this similarity in the logical behavior of (10) and (11) should not obscure the great differences between the biconditional (‘\(\leftrightarrow\)’) and definitional equivalence (‘\(\eqdf\)’). The former is a sentential connective, but the latter is trans-categorical: not only formulas, but also predicates, names, and items of other logical categories can occur on the two sides of ‘\(\eqdf\)’. Moreover, the biconditional can be iterated—e.g., \(((\phi \leftrightarrow \psi) \leftrightarrow \chi)\)—but not definitional equivalence. Finally, a term can be introduced by a stipulative definition into a ground language whose logical resources are confined, say, to classical conjunction and disjunction. This is perfectly feasible, even though the biconditional is not expressible in the language. In such cases, the inferential role of the stipulative definition is not mirrored by any formula of the extended language.
The traditional account of definitions should not be viewed as requiring definitions to be in normal form. The only requirements it imposes are (i) that the definiendum contain the defined term; (ii) that the definiendum and the definiens belong to the same logical category; and (iii) that the definition satisfy Conservativeness and Eliminability. So long as these requirements are met, there are no further restrictions. The definiendum, like the definiens, can be complex; and the definiens, like the definiendum, can contain the defined term. So, for example, there is nothing formally wrong if the definition of the functional expression ‘the number of’ has as its definiendum the formula ‘the number of \(F\)s is the number of \(G\)s’. The role of normal forms is only to provide an easy way of ensuring that definitions satisfy Conservativeness and Eliminability; they do not provide the only legitimate format for stipulatively introducing a term. Thus, the reason why (4) is, but (6) is not, a legitimate definition is not that (4) is in normal form and (6) is not.
\[\begin{align} \tag{4} Gx &\eqdf x \gt 3 \amp x \lt 10. \\ \tag{6} Gx &\eqdf x \gt y \amp x \lt 10. \end{align}\]
The reason is that (4) respects, but (6) does not, the two criteria. (The ground language is assumed here to contain ordinary arithmetic; under this assumption, the second definition implies a contradiction.) The following two definitions are also not in normal form:
\[\begin{align} \tag{12} Gx &\eqdf (x \gt 3 \amp x \lt 10) \amp y = y. \\ \tag{13} Gx &\eqdf [x = 0 \amp (G0 \vee G1)] \vee [x = 1 \amp (G0 \amp G1)]. \end{align}\]
But both should count as legitimate under the traditional account, since they meet the Conservativeness and Eliminability criteria. It follows that the two definitions can be put in normal form. Definition (12) is plainly equivalent to (4), and definition (13) is equivalent to (14):
\[\tag{14} Gx \eqdf x = 0. \]
Observe that the definiens of (13) is not logically equivalent to any \(G\)-free formula. Nevertheless, the definition has a normal form.
Similarly, the traditional account is perfectly compatible with recursive (a.k.a.: inductive) definitions such as those found in logic and mathematics. In Peano Arithmetic, for example, exponentiation can be defined by means of the following equations:

\[\begin{align} m^0 &= 1, \\ m^{n+1} &= m^n \cdot m. \end{align}\]
Here the first equation—called the base clause—defines the value of the function when the exponent is 0. And the second clause—called the recursive clause—uses the value of the function when the exponent is \(n\) to define the value when the exponent is \(n + 1\). This is perfectly legitimate, according to the traditional account, because a theorem of Peano Arithmetic establishes that the above definition is equivalent to one in normal form. [10] Recursive definitions are circular in their format, and indeed it is this circularity that renders them perspicuous. But the circularity is entirely on the surface, as the existence of normal forms shows. See the discussion of circular definitions below.
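Read as computation rules, the base and recursive clauses determine a terminating procedure; the following sketch (in Python, purely illustrative; the function name is our choice) computes exponentiation by unfolding the two clauses:

```python
# Illustrative sketch of the recursive definition of exponentiation:
#   base clause:      m^0     = 1
#   recursive clause: m^(n+1) = m^n * m
def exp(m, n):
    if n == 0:                  # base clause
        return 1
    return exp(m, n - 1) * m    # recursive clause
```

Each call reduces the exponent, so the circularity is indeed "entirely on the surface": the recursion bottoms out at the base clause after finitely many steps.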
It is a part of our ordinary practice that we sometimes define terms not absolutely but conditionally. We sometimes affirm a definition not outright but within the scope of a condition, which may either be left tacit or may be set down explicitly. So, for example, we may define \(F(x, y)\) to express the notion “first cousin once removed” by stipulating that
where it is understood that the variables range over humans. For another example, when defining division, we may explicitly set down as a condition on the definition that the divisor not be 0. We may stipulate that
\[\tag{17} (y/x = z) \eqdf (y = x \cdot z), \]
but with the proviso that \( x \neq 0 \). This practice may appear to violate the Eliminability criterion, for it appears that conditional definitions do not ensure the eliminability of the defined terms in all sentences. Thus (16) does not enable us to prove the equivalence of
\[\tag{18} \exists x F(x, 2) \]
with any F-free sentence because of the tacit restriction on the range of variables in (16). Similarly (17) does not enable us to eliminate the defined symbol from
However, if there is a violation of Eliminability here, it is a superficial one, and it is easily corrected in one of two ways. The first way—the way that conforms best to our ordinary practices—is to understand the enriched languages that result from adding the definitions to exclude sentences such as (18) and (19). For when we stipulate a definition such as (16), it is not our intention to speak about the first cousins once removed of numbers; on the contrary, we wish to exclude all such talk as improper. Similarly, in setting down (17), we wish to exclude talk of division by 0 as legitimate. So, the first way is to recognize that conditional definitions such as (16) and (17) bring with them restrictions on the enriched language and, consequently, respect the Eliminability criterion once the enriched language is properly demarcated. This idea can be implemented formally by seeing conditional definitions as formulated within languages with sortal quantification.
The second way—the way that conforms best to our actual formal practices—is to understand the applications of the defined term in cases where the antecedent condition fails as “don’t care cases” and to make a suitable stipulation concerning such applications. So, we may stipulate that nothing other than a human has first cousins once removed, and we may stipulate that the result of dividing any number by 0 is 0. Thus we may replace (17) by
\[\tag{20} (y/x = z) \eqdf [x \neq 0 \amp y = x \cdot z] \vee [x = 0 \amp z = 0]. \]
The resulting definitions satisfy the Eliminability criterion. The second way forces us to exercise care in reading sentences with defined terms. So, for example, the sentence
\[\tag{21} \exists x \exists y \exists z(x \gt y \amp x/z = y/z), \]
though true when division is defined as in (20), does not express an interesting mathematical truth but one that is merely a byproduct of our treatment of the “don’t care cases.” Despite this cost, the gain in simplicity in the notion of proof may well warrant, in some contexts, the move to a definition such as (20).
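The second way can be mimicked computationally; the sketch below (in Python; the function name `div` and the use of exact rational arithmetic are illustrative assumptions) builds the “don’t care” stipulation of (20) into the division operation itself:

```python
from fractions import Fraction

def div(y, x):
    """Division with the "don't care" case stipulated as in (20):
    division by 0 is declared to yield 0; otherwise y/x is ordinary
    (exact, rational) division."""
    if x == 0:
        return Fraction(0)
    return Fraction(y, x)
```

The stipulation makes `div` total, which simplifies proofs (and programs), at the cost of “truths” such as `div(1, 0) == div(2, 0)` that merely reflect the treatment of the don’t-care cases.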
See Suppes 1957 for a different perspective on conditional definitions.
The above viewpoint allows the traditional account to bring within its fold ideas that might at first sight seem contrary to it. It is sometimes suggested that a term \(X\) can be introduced axiomatically, that is, by laying down as axioms certain sentences of the expanded language \(L^+\). The axioms are then said to implicitly define \(X\). This idea is easily accommodated within the traditional account. Let a theory be a set of sentences of the expanded language \(L^+\). Then, to say that a theory \(T^*\) is an implicit (stipulative) definition of \(X\) is to say that \(X\) is governed by the definition
where \(\phi\) is the conjunction of the members of \(T^*\). (If \(T^*\) is infinite then a stipulation of the above form will be needed for each sentence \(\psi\) in \(T^*\).) [11] The definition is legitimate, according to the traditional account, so long as it meets the Conservativeness and Eliminability criteria. If it does meet these criteria, let us call \(T^*\) admissible (for a definition of \(X\)). So, the traditional account accommodates the idea that theories can stipulatively introduce new terms, but it imposes a strong demand: the theories must be admissible. [12]
Consider, for concreteness, the special case of classical first-order languages. Let the ground language \(L\) be one such, and let its interpretations be models of some sentences \(T\). Let us say that
\(T^*\) is an implicit semantic definition of \(X\) iff, for each interpretation \(M\) of \(L\), there is a unique model \(M^+\) of \(T^*\) such that \(M^+\) is an expansion of \(M\).
Then, from the normal form theorem, the following claim is immediate:
If \(T^*\) is admissible then \(T^*\) is an implicit semantic definition of \(X\).
That is, an admissible theory fixes the semantic value of the defined term in each interpretation of the ground language. This observation provides one natural method of showing that a theory is not admissible:
Padoa’s method. To show that \(T^*\) is not admissible, it suffices to construct two models of \(T^*\) that are expansions of one and the same interpretation of the ground language \(L\). (Padoa 1900)
Here is a simple and philosophically useful application of Padoa’s method. Suppose the proof system of \(L\) is Peano Arithmetic and that \(L\) is expanded by the addition of a unary predicate \(Tr\) (for “Gödel number of a true sentence of \(L\)”). Let \(\mathbf{T}\) be the theory consisting of all the sentences (the “Tarski biconditionals”) of the following form:
\[ Tr(s) \leftrightarrow \psi, \]
where \(\psi\) is a sentence of \(L\) and \(s\) is the canonical name for the Gödel number of \(\psi\). Padoa’s method implies that \(\mathbf{T}\) is not admissible for defining \(Tr\). For \(\mathbf{T}\) does not fix the interpretation of \(Tr\) in all interpretations of \(L\). In particular, it does not do so in the standard model, for \(\mathbf{T}\) places no constraints on the behavior of \(Tr\) on those numbers that are not Gödel numbers of sentences. (If the coding renders each natural number a Gödel number of a sentence, then a non-standard model of Peano Arithmetic provides the requisite counterexample: it has infinitely many expansions that are models of \(\mathbf{T}\).) A variant of this argument shows that Tarski’s theory of truth, as formulated in \(L^+\), is not admissible for defining \(Tr\).
What about the converse of Padoa’s method? Suppose we can show that in each interpretation of the ground language, a theory \(T^*\) fixes a unique semantic value for the defined term. Can we conclude that \(T^*\) is admissible? This question receives a negative answer for some semantical systems, and a positive answer for others. (In contrast, Padoa’s method works so long as the semantic system is not highly contrived.) The converse fails for, e.g., classical second-order languages, but it holds for first-order ones:
Beth’s Definability Theorem. If \(T^*\) is an implicit semantic definition of \(X\) in a classical first-order language then \(T^*\) is admissible.
Note that the theorem holds even if \(T^*\) is an infinite set. For a proof of the theorem, see Boolos, Burgess, and Jeffrey 2002; see also Beth 1953.
The idea of implicit definition is not in conflict, then, with the traditional account. Where conflict arises is in the philosophical applications of the idea. The failure of strict reductionist programs of the late-nineteenth and early-twentieth century prompted philosophers to explore looser kinds of reductionism. For instance, Frege’s definition of number proved to be inconsistent, and thus incapable of sustaining the logicist thesis that the principles of arithmetic are analytic. It turns out, however, that the principles of arithmetic can be derived without Frege’s definition. All that is needed is one consequence of it, namely, Hume’s Principle:
Hume’s Principle. The number of \(F\)s = the number of \(G\)s iff there is a one-to-one correspondence between the \(F\)s and \(G\)s.
If we add Hume’s Principle to axiomatic second-order logic, then we obtain a consistent theory from which we can analytically derive second-order Peano Arithmetic. (The essentials of the argument are found already in Frege 1884.) It is a central thesis of Neo-Fregeanism that Hume’s Principle is an implicit definition of the functional expression ‘the number of’ (see Hale and Wright 2001). If this thesis can be defended, then logicism about arithmetic can be sustained. However, the neo-Fregean thesis is in conflict with the traditional account of definitions, for Hume’s Principle violates both Conservativeness and Eliminability. The principle allows one to prove, for arbitrary \(n\), that there are at least \(n\) objects.
Another example: The reductionist program for theoretical concepts (e.g., those of physics) aimed to solve epistemological problems that these concepts pose. The program aimed to reduce theoretical sentences to (classes of) observational sentences. However, the reductions proved difficult, if not impossible, to sustain. Thus arose the suggestion that perhaps the non-observational component of a theory can, without any claim of reduction, be regarded as an implicit definition of theoretical terms. The precise characterization of the non-observational component can vary with the specific epistemological problem at hand. But there is bound to be a violation of one or both of the two criteria, Conservativeness and Eliminability. [13]
A final example: We know by a theorem of Tarski that no theory can be an admissible definition of the truth predicate, \(Tr\), for the language of Peano Arithmetic considered above. Nonetheless, perhaps we can still regard theory \(\mathbf{T}\) as an implicit definition of \(Tr\). (Paul Horwich has made a closely related proposal for the ordinary notion of truth.) Here, again, pressure is put on the bounds imposed by the traditional account. \(\mathbf{T}\) meets the Conservativeness criterion, but not that of Eliminability.
In order to assess the challenge these philosophical applications pose for the traditional account, we need to resolve issues that are under current philosophical debate. Some of the issues are the following. (i) It is plain that some violations of Conservativeness are illegitimate: one cannot make it true by a stipulation that, e.g., Mercury is larger than Venus. Now, if a philosophical application requires some violations of Conservativeness to be legitimate, we need an account of the distinction between the two sorts of cases: the legitimate violations of Conservativeness and the non-legitimate ones. And we need to understand what it is that renders the one legitimate, but not the other. (ii) A similar issue arises for Eliminability. It would appear that not any old theory can be an implicit definition of a term \(X\). (The theory might contain only tautologies.) If so, then again we need a demarcation of theories that can serve to implicitly define a term from those that cannot. And we need a rationale for the distinction. (iii) The philosophical applications rest crucially on the idea that an implicit definition fixes the meaning of the defined term. We need therefore an account of what this meaning is, and how the implicit definition fixes it. Under the traditional account, formulas containing the defined term can be seen as acquiring their meaning from the formulas of the ground language. (In view of the primacy of the sentential, this fixes the meaning of the defined term.) But this move is not available under a liberalized conception of implicit definition. How, then, should we think of the meaning of a formula under the envisioned departure from the traditional account? (iv) Even if the previous three issues are addressed satisfactorily, an important concern remains. Suppose we allow that a theory \(T\), say, of physics can stipulatively define its theoretical terms, and that it endows the terms with particular meanings. 
The question remains whether the meanings thus endowed are identical to (or similar enough to) the meanings the theoretical terms have in their actual uses in physics. This question must be answered positively if implicit definitions are to serve their philosophical function. The aim of invoking implicit definitions is to account for the rationality, or the apriority, or the analyticity of our ordinary judgments, not of some extraordinary judgments that are somehow assigned to ordinary signs.
The literature on neo-Fregeanism presents an interesting case study in respect of these issues. Much of the debate concerning the neo-Fregean thesis can fruitfully be viewed as a debate over the extent and precise formulation of the criteria of Conservativeness and Eliminability. For example, the so-called Julius Caesar objection (due to Frege 1884) urges that Hume’s Principle cannot be a legitimate definition of ‘the number of’ because it does not determine the use of this expression in mixed identity contexts, such as ‘the number of \(F\)s = Julius Caesar’. Other classic objections (Field 1984, Boolos 1997) focus on the non-conservativeness of Hume’s Principle. Boolos 1990 raises a particularly sharp point, known as the Bad Company problem. Definitions of the same kind as Hume’s Principle are known as abstraction principles. Boolos exhibits an abstraction principle that is consistent by itself, but inconsistent in conjunction with Hume’s Principle. This pathological situation never arises with conservative definitions. So, the Bad Company problem illustrates what can go wrong when the Conservativeness requirement is violated.
Friends of neo-Fregeanism have responded to these objections in various ways. Wright 1997 argues that abstraction principles need only satisfy a restricted version of Conservativeness, and needn’t satisfy Eliminability at all. (However, Wright’s proposal suffered the revenge of the Bad Company problem: see Weir 2003.) By contrast, Linnebo 2018 argues for much more stringent requirements on abstraction principles. He countenances only predicative abstraction principles, which satisfy both Conservativeness and Eliminability in suitable contexts. Mackereth and Avigad (forthcoming) defend an intermediate position. They hold that abstraction principles must satisfy Conservativeness in an unrestricted sense, but needn’t satisfy Eliminability. Furthermore, Mackereth and Avigad show that in the absence of Eliminability, the precise formulation of Conservativeness (syntactic vs. semantic, etc.) makes a big difference. In particular, it is possible to get impredicative versions of Hume’s Principle that are semantically conservative, but the same does not appear to be true for syntactic conservativeness.
For further discussion of these issues, see Horwich 1998, especially chapter 6; Hale and Wright 2001, especially chapter 5; and the works cited there.
Another departure from the traditional theory begins with the idea not that the theory is too strict but that it is too liberal: it permits definitions that are illegitimate. Thus, the traditional theory allows the following definitions of, respectively, “liar” and the class of natural numbers \(\mathbf{N}\):
(22) \(z\) is a liar \(\eqdf\) all propositions asserted by \(z\) are false.

(23) \(z\) belongs to \(\mathbf{N}\) \(\eqdf\) \(z\) belongs to every inductive class, where a class is inductive when it contains 0 and is closed under the successor operation.
Russell argued that such definitions involve a subtle kind of vicious circle. The definiens of the first definition invokes, Russell thought, the totality of all propositions, but the definition, if legitimate, would result in propositions that can only be defined by reference to this totality. Similarly, the second definition attempts to define the class \(\mathbf{N}\) by reference to all classes, a totality that includes the class \(\mathbf{N}\) that is being defined. Russell maintained that such definitions are illegitimate. And he imposed the following requirement—called the “Vicious-Circle Principle”—on definitions and concepts. (Henri Poincaré had also proposed a similar idea.)
Vicious-Circle Principle. “Whatever involves all of a collection must not be one of the collection” (Russell 1908, 63).
Another formulation Russell gave of the Principle is this:
Vicious-Circle Principle (variant formulation). “If, provided a certain collection had a total, it would have members only definable in terms of that total, then the said collection has no total” (Russell 1908, 63).
In an appended footnote, Russell explained, “When I say that a collection has no total, I mean that statements about all its members are nonsense.”
Russell’s primary motivation for the Vicious-Circle Principle was the logical and semantic paradoxes. Notions such as “truth,” “proposition,” and “class” generate, under certain unfavorable conditions, paradoxical conclusions. Thus, the claim “Cheney is a liar,” where “liar” is understood as in (22), yields paradoxical conclusions if Cheney has asserted that he is a liar, and all other propositions asserted by him are, in fact, false. Russell took the Vicious-Circle Principle to imply that if “Cheney is a liar” expresses a proposition, it cannot be in the scope of the quantifier in the definiens of (22). More generally, Russell held that quantification over all propositions, and over all classes, violates the Vicious-Circle Principle and is thus illegitimate. Furthermore, he maintained that expressions such as ‘true’ and ‘false’ do not express a unique concept—in Russell’s terminology, a unique “propositional function”—but one of a hierarchy of propositional functions of different orders. Thus the lesson Russell drew from the paradoxes is that the domain of the meaningful is more restricted than it might ordinarily appear, and that the traditional account of concepts and definitions needed to be made more restrictive in order to rule out the likes of (22) and (23).
In application to ordinary, informal definitions, the Vicious-Circle Principle does not provide, it must be said, a clear method of demarcating the meaningful from the meaningless. Definition (22) is supposed to be illegitimate because, in its definiens, the quantifier ranges over the totality of all propositions. And we are told that this is prohibited because, were it allowed, the totality of propositions “would have members only definable in terms of the total.” However, unless we know more about the nature of propositions and of the means available for defining them, it is impossible to determine whether (22) violates the Principle. It may be that a proposition such as “Cheney is a liar”—or, to take a less contentious example, “Either Cheney is a liar or he is not”—can be given a definition that does not appeal to the totality of all propositions. If propositions are sets of possible worlds, for example, then such a definition would appear to be feasible.
The Vicious-Circle Principle serves, nevertheless, as an effective motivation for a particular account of legitimate concepts and definitions, namely that embodied in Russell’s Ramified Type Theory. The idea here is that one begins with some unproblematic resources that involve no quantification over propositions, concepts, and such. These resources enable one to define, for example, various unary concepts, which are thereby assured of satisfying the Vicious-Circle Principle. Quantification over these concepts is thus bound to be legitimate, and can be added to the language. The same holds for propositions and for concepts falling under other types: for each type, a quantifier can be added that ranges over items (of that type) that are definable using the initial unproblematic resources. The new quantificational resources enable the definition of further items of each type; these, too, respect the Principle, and again, quantifiers ranging over the expanded totalities can legitimately be added to the language. The new resources permit the definition of yet further items. And the process repeats. The result is that we have a hierarchy of propositions and of concepts of various orders. Each type in the type hierarchy ramifies into a multiplicity of orders. This ramification ensures that definitions formulated in the resulting language are bound to respect the Vicious-Circle Principle. Concepts and classes that can be defined within the confines of this scheme are said to be predicative (in one sense of this word); the others, impredicative.
For further discussion of the Vicious-Circle Principle, see Russell 1908, Whitehead and Russell 1925, Gödel 1944, and Chihara 1973. For a formal presentation of Ramified Type Theory, see Church 1976; for a more informal presentation, see Hazen 1983. See also the entries on type theory and Principia Mathematica, which contain further references.
The paradoxes can also be used to motivate a conclusion that is the very opposite to Russell’s. Consider the following definition of a one-place predicate \(G\):
\[\tag{24}\begin{align} Gx \eqdf x = \text{Socrates} &\vee (x = \text{Plato} \amp Gx) \\ &\vee (x = \text{Aristotle} \amp \neg Gx). \end{align}\]
This definition is essentially circular; it is not reducible to one in normal form. Still, intuitively, it provides substantial guidance on the use of \(G\). The definition dictates, for instance, that Socrates falls under \(G\), and that nothing apart from the three ancient philosophers mentioned does so. The definition leaves unsettled the status of only two objects, namely, Plato and Aristotle. If we suppose that Plato falls under \(G\), the definition yields that Plato does fall under \(G\) (since Plato satisfies the definiens), thus confirming our supposition. The same thing happens if we suppose the opposite, namely, that Plato does not fall under \(G\); again our supposition is confirmed. With Aristotle, any attempt to decide whether he falls under \(G\) lands us in an even more precarious situation: if we suppose that Aristotle falls under \(G\), we are led to conclude by the definition that he does not fall under \(G\) (since he does not satisfy the definiens); and, conversely, if we suppose that he does not fall under \(G\), we are led to conclude that he does. But even on Plato and Aristotle, the behavior of \(G\) is not unfamiliar: \(G\) is behaving here in the way the concept of truth behaves on the Truth Teller (“What I am now saying is true”) and the Liar (“What I am now saying is not true”). More generally, there is a strong parallel between the behavior of the concept of truth and that of concepts defined by circular definitions. Both are typically well defined on a range of cases, and both display a variety of unusual logical behavior on the other cases. Indeed, all the different kinds of perplexing logical behavior found with the concept of truth are found also in concepts defined by circular definitions. This strong parallelism suggests that since truth is manifestly a legitimate concept, so also are concepts defined by circular definitions such as (24).
The paradoxes, according to this viewpoint, cast no doubt on the legitimacy of the concept of truth. They show only that the logic and semantics of circular concepts are different from those of non-circular ones. This viewpoint is developed in the revision theory of definitions.
In this theory, a circular definition imparts to the defined term a meaning that is hypothetical in character; the semantic value of the defined term is a rule of revision, not, as with non-circular definitions, a rule of application. Consider (18) again. Like any definition, (18) fixes the interpretation of the definiendum if the interpretations of the non-logical constants in the definiens are given. The problem with (18) is that the defined term \(G\) occurs in the definiens. But suppose that we arbitrarily assign to \(G\) an interpretation—say we let it be the set \(U\) of all objects in the universe of discourse (i.e., we suppose that \(U\) is the set of objects that satisfy \(G\)). Then it is easy to see that the definiens is true precisely of Socrates and Plato. The definition thus dictates that, under our hypothesis, the interpretation of \(G\) should be the set \(\{\text{Socrates}, \text{Plato}\}\). A similar calculation can be carried out for any hypothesis about the interpretation of \(G\). For example, if the hypothesis is the empty set \(\varnothing\), the definition yields the result \(\{\text{Socrates}, \text{Aristotle}\}\). In short, even though (18) does not fix sharply what objects fall under \(G\), it does yield a rule or function that, when given a hypothetical interpretation as an input, yields another one as an output. The fundamental idea of the revision theory is to view this rule as a revision rule: the output interpretation is better than the input one (or it is at least as good; this qualification will be taken as read). The semantic value that the definition confers on the defined term is not an extension—a demarcation of the universe of discourse into objects that fall under the defined term, and those that do not. The semantic value is a revision rule.
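The revision rule just described can be sketched as a function from hypotheses (sets of objects assumed to satisfy \(G\)) to revised hypotheses. The following Python sketch is illustrative only, not part of the revision-theoretic literature; the four-object universe is an assumption taken from the surrounding example.

```python
# Revision rule for definition (18):
# Gx iff x = Socrates, or (x = Plato and Gx), or (x = Aristotle and not Gx).

UNIVERSE = {"Socrates", "Plato", "Aristotle", "Xenocrates"}

def delta(hypothesis):
    """Apply (18) once: collect the objects that satisfy the
    definiens when `hypothesis` is taken as the extension of G."""
    return {x for x in UNIVERSE
            if x == "Socrates"
            or (x == "Plato" and x in hypothesis)
            or (x == "Aristotle" and x not in hypothesis)}

print(sorted(delta(UNIVERSE)))  # hypothesis U yields ['Plato', 'Socrates']
print(sorted(delta(set())))     # empty hypothesis yields ['Aristotle', 'Socrates']
```

Each application of `delta` is one revision step; iterating it from an initial hypothesis produces a revision sequence.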
The revision rule explains the behavior, both ordinary and extraordinary, of a circular concept. Let \(\delta\) be the revision rule yielded by a definition, and let \(V\) be an arbitrary hypothetical interpretation of the defined term. We can attempt to improve our hypothesis \(V\) by repeated applications of the rule \(\delta\). The resulting sequence,
\[ V, \delta(V), \delta(\delta(V)), \delta(\delta(\delta(V))),\ldots, \]
is a revision sequence for \(\delta\). The totality of revision sequences for \(\delta\), for all possible initial hypotheses, is the revision process generated by \(\delta\). For example, with \(U\) and the empty set as initial hypotheses, the revision rule for (18) generates a revision process that consists of the following revision sequences, among others:

\[\begin{aligned} &U,\ \{\text{Socrates}, \text{Plato}\},\ \{\text{Socrates}, \text{Plato}, \text{Aristotle}\},\ \{\text{Socrates}, \text{Plato}\},\ldots\\ &\varnothing,\ \{\text{Socrates}, \text{Aristotle}\},\ \{\text{Socrates}\},\ \{\text{Socrates}, \text{Aristotle}\},\ldots \end{aligned}\]
Observe the behavior of our four ancient philosophers in this process. After some initial stages of revision, Socrates always falls in the revised interpretations, and Xenocrates always falls outside. (In this particular example, the behavior of the two is fixed after the initial stage; in other cases, it may take many stages of revision before the status of an object becomes settled.) The revision process yields a categorical verdict on the two philosophers: Socrates categorically falls under \(G\), and Xenocrates categorically falls outside \(G\). Objects on which the process does not yield a categorical verdict are said to be pathological (relative to the revision rule, the definition, or the defined concept). In our example, Plato and Aristotle are pathological relative to (18). The status of Aristotle is not stable in any revision sequence. It is as if the revision process cannot make up its mind about him. Sometimes Aristotle is ruled as falling under \(G\), and then the process reverses itself and declares that he does not fall under \(G\), and then the process reverses itself again. When an object behaves in this way in all revision sequences, it is said to be paradoxical. Plato is also pathological relative to \(G\), but his behavior in the revision process is different. Plato acquires a stable status in each revision sequence, but the status he acquires depends upon the initial hypothesis.
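On a finite universe, the classification just described can be computed mechanically: iterate the revision rule from every initial hypothesis and check which objects stabilize. The following Python sketch works under stated assumptions (the rule for (18) over a four-object universe, and a fixed number of revision steps); it illustrates the classification rather than the theory's formal apparatus.

```python
from itertools import chain, combinations

UNIVERSE = {"Socrates", "Plato", "Aristotle", "Xenocrates"}

def delta(hypothesis):
    # Revision rule for definition (18), as sketched above.
    return {x for x in UNIVERSE
            if x == "Socrates"
            or (x == "Plato" and x in hypothesis)
            or (x == "Aristotle" and x not in hypothesis)}

def stable_status(start, x, steps=8):
    """Return 'in' or 'out' if x's status stabilizes in the revision
    sequence from `start`, and None if it keeps oscillating."""
    seq = [start]
    for _ in range(steps):
        seq.append(delta(seq[-1]))
    tail = [x in h for h in seq[steps // 2:]]  # inspect the later stages
    if all(tail):
        return "in"
    if not any(tail):
        return "out"
    return None

def classify(x):
    """Categorical if x gets the same stable status from every initial
    hypothesis; paradoxical if it stabilizes in no revision sequence."""
    hyps = chain.from_iterable(combinations(sorted(UNIVERSE), r)
                               for r in range(len(UNIVERSE) + 1))
    statuses = {stable_status(set(h), x) for h in hyps}
    if statuses == {"in"}:
        return "categorically in"
    if statuses == {"out"}:
        return "categorically out"
    if statuses == {None}:
        return "paradoxical"
    return "pathological (hypothesis-dependent)"

for x in sorted(UNIVERSE):
    print(x, "->", classify(x))
```

Run on this example, the sketch agrees with the discussion above: Socrates is categorically in, Xenocrates categorically out, Aristotle paradoxical, and Plato pathological, stable within each sequence but dependent on the initial hypothesis.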
Revision processes help provide a semantics for circular definitions. [14] They can be used to define semantic notions such as “categorical truth” and logical notions such as “validity.” The characteristics of the logical notions we obtain depend crucially on one aspect of revision: the number of stages before objects settle down to their regular behavior in the revision process. A definition is said to be finite iff, roughly, its revision process necessarily requires only finitely many such stages. [15] For finite definitions, there is a simple logical calculus, \(\mathbf{C}_0\), that is sound and complete for the revision semantics. [16] With non-finite definitions, the revision process extends into the transfinite. [17] And these definitions can add considerable expressive power to the language. (When added to first-order arithmetic, these definitions render all \(\Pi^1_2\) sets of natural numbers definable.) Because of the expressive power, the general notion of validity for non-finite circular definitions is not axiomatizable (Kremer 1993). We can give at best a sound logical calculus, but not a complete one. The situation is analogous to that with second-order logic.
Let us observe some general features of the revision theory of definitions. (i) Under this theory, the logic and semantics of non-circular definitions—i.e., definitions in normal form—remain the same as in the traditional account. The introduction and elimination rules hold unrestrictedly, and revision stages are dispensable. The deviations from the traditional account occur only over circular definitions. (ii) Under the theory, circular definitions do not disturb the logic of the ground language. Sentences containing defined terms are subject to the same logical laws as sentences of the ground language. (iii) Conservativeness holds. No definition, no matter how vicious the circularity in it, entails anything new in the ground language. Even the utterly paradoxical definition

\[\tag{19} Gx \eqdf \neg Gx\]
respects the Conservativeness requirement. (iv) Eliminability fails to hold. Sentences of the expanded language are not, in general, reducible to those of the ground language. This failure has two sources. First, revision theory fixes the use, in assertion and argument, of sentences of the expanded language but without reducing the sentences to those of the ground language. The theory thus meets the Use criterion, but not the stronger one of Eliminability. Second, in this theory, a definition can add logical and expressive power to a ground language. The addition of a circular definition can result in the definability of new sets. This is another reason why Eliminability fails.
It may be objected that every concept must have an extension, that there must be a definite totality of objects that fall under the concept. If this is right then a predicate is meaningful—it expresses a concept—only if the predicate necessarily demarcates the world sharply into those objects to which it applies and those to which it does not apply. Hence, the objection concludes, no predicate with an essentially circular definition can be meaningful. The objection is plainly not decisive, for it rests on a premiss that rules out many ordinary and apparently meaningful predicates (e.g., ‘bald’). Nonetheless, it is noteworthy because it illustrates how general issues about meaning and concepts enter the debate on the requirements on legitimate definitions.
The principal motivation for revision theory is descriptive. It has been argued that the theory helps us to understand better our ordinary concepts such as truth, necessity, and rational choice. The ordinary as well as the perplexing behavior of these concepts, it is argued, has its roots in the circularity of the concepts. If this is correct, then there is no logical requirement on descriptive and explicative definitions that they be non-circular.
For more detailed treatments of these topics, see Gupta 1988/89, Gupta and Belnap 1993, and Chapuis and Gupta 1999. See also the entry on the revision theory of truth. For critical discussions of the revision theory, see the papers by Vann McGee and Donald A. Martin, and the reply by Gupta, in Villanueva 1997. See also Shapiro 2006.