final version forthcoming in: Logic, Epistemology, and the Unity of Science 33: On the Sorites Paradox, eds. Otavio Bueno and Ali Abasnezhad (New York: Springer)

 

 

VAGUE DISAGREEMENTS AND SORITES[1]

Ted Everett

SUNY-Geneseo

 

abstract: 

When you and I seriously argue over whether a man of seventy is old enough to count as an "old man", it seems that we are appealing neither to our own, separate standards of oldness nor to a common standard already fixed in the language.  Instead, it seems that both of us implicitly invoke an ideal, shared standard that has yet to be agreed upon: the place where we ought to draw the line.  As with other normative ideals, it is hard to know whether such places exist prior to our coming to agree on where they are.  But epistemicists plausibly argue that precise boundaries must presently exist whether we ever agree on them or not, as this provides the only logically acceptable response to the sorites paradox.  This paper argues that such boundaries typically exist only as hypothetical, to-be-agreed-upon limits for the application of vague terms, not as objective features of the real world.  There is no general solution to the sorites paradox, but attention to practice in resolving vague disagreements shows that its instances can be dealt with separately, as they arise, in many reasonable ways.

 

1. The problem

          Here is an instance of the sorites paradox.  These three statements are jointly inconsistent:

         

          (a)  You are a human being. 

          (b)  The parents of any human being are human beings. 

          (c)  You are descended from non-human beings. 

 

          Each of the statements appears to be true, but at least one of them must be false.  (a) is undeniable.  (c) is also undeniable, at least for those of us who favor Darwin over Intelligent Design.  So, it looks like the problem must reside in (b), though (b) is also very hard to deny.  If (b) is false, then at some point in the past a particular human ancestor of yours had pre-human apes for parents.  This must have been awkward for them.  But it is also awkward for us, because we seem to lack any criterion for the predicate "human" that is precise enough to tell us, even in theory, which ancestor that could have been. 

          Nevertheless, epistemicists like Williamson (1994) and Sorensen (2001) argue that precise borderlines for vague terms are in fact determinate and real.  It seems odd initially to say that there are perfect boundaries between humans and non-humans, heaps and non-heaps, big and not-big bowls of cereal, etc., that we cannot detect.  But it is obvious that many pairs of statements like (a) and (c) are both true and jointly inconsistent with statements like (b), which leaves the falsehood of statements like (b) as the only logically possible solution.  Therefore, there must be precise boundaries in such cases, despite our inability to locate them.  Sorensen finds the philosophical majority's denial of this conclusion hard to fathom:

 

          The basic argument for boundaries is a simple, sound argument, that has been in wide circulation for many years.  Yet less than 10 percent of current experts are persuaded by it.  Worse, most of the experts complain that they cannot make sense of the claim that "heap" is sensitive to differences of a single grain.  Epistemicism strikes these able philosophers as conceptually absurd…In my opinion, the sorites paradox has a single, exciting solution.  The meta-paradox is that people, especially those with philosophical acumen, fail to accept the solution.  (Sorensen 2001: 15)

 

          I think that philosophers tend to dismiss epistemicism out of hand because the language in which vague terms appear is a human institution, presumably under our own control.  If a precise boundary between heaps and non-heaps exists, and this is a fact about our language, why can't we figure it out analytically?  If, alternatively, it is a fact about the world, why can't we find it out empirically?  It seems instead that there is nothing in the world, inside our heads, or in our language that would make one precise boundary correct and a neighboring boundary incorrect.  Sorensen (2001: 171) acknowledges the intuitive oddness of this situation, but argues that the "truthmaker principle" that makes it seem so odd is actually false: things like soritic boundaries do not need to depend on something in the world that sets them where they are; sometimes they just are where they are, for no reason at all.[2]  But to most philosophers concerned with the sorites paradox, solving the problem in terms of such inexplicable, totally ungrounded borderlines makes little sense.  Instead, it seems that specific boundaries must have an explanation of some sort, and all that could explain a boundary being in one place rather than another indistinguishably different place is arbitrary, human choice in how to speak.  In David Lewis's (1986) phrase, vagueness is just "semantic indecision" over cases where we have no good reason to set boundaries in one place rather than another.  So, many philosophers are tempted to deny that there is any fact of the matter as to boundaries in sorites cases, any objective answer to questions like which of your ancestors was the first human in your lineage, because the meanings of vague predicates like "human" are simply not precise enough to fully determine their extensions.  This seems acceptable, though, because such words are still precise enough to differentiate among the clear cases that we usually care about; if we need greater precision in the meaning of "human", we can always supply it ourselves by stipulation.  In the meantime, we can speak as we like about the middle cases, with no right or wrong usage as long as we're consistent with ourselves and tolerant of speakers who draw their lines a little differently.

          The meta-paradox for me is that the epistemic and semantic approaches are both obviously right.  It seems obvious, as a matter of logic, that precise boundaries for vague predicates must exist.  But it seems equally obvious, as a matter of fact, that they do not.  Any successful resolution to this meta-paradox will have to do justice to both of these contrary intuitions. 

          A standard approach to such conflicts of intuition is to search for ambiguities in the concepts involved.  In this case, we might try to find one sense in which precise boundaries for soritic terms exist, or strongly seem to exist, and another sense in which they don't.  It is not, after all, uncommon for philosophers to distinguish senses of "exist" in order to allow for reference to extraordinary sorts of objects such as fictional characters, abstractions like imaginary numbers, posited objects like dark matter or the proletarian revolution, or statistical items like the "average family".  Does Hamlet exist?  In the play, of course he does, but in real life, of course he does not.  Does a perfect circle exist?  In Euclidean geometry it does, but in the physical world there is no such thing.  Does the average family exist?  Well, in one sense it obviously does, since it can be defined simply by averaging all the measurable characteristics of families.  But in another sense it plainly does not, since there is no actual family with exactly all of those average characteristics.  I think that precise boundaries for vague terms, the things that epistemicists say must exist and semanticists say do not exist, are ideal objects whose existence is ambiguous in a similar way: they do exist as hypothetically agreed-upon limits for the application of vague terms, but not as fixed boundaries of properties or kinds in the real world.[3]  The upshot is that epistemicism may be formally correct – true "in the play", as it were – but it is not true in the substantive way that its proponents take it to be. 

          How, then, is the sorites paradox to be resolved?  I argue that we already use a variety of reasonable, philosophically acceptable methods for addressing particular sorites problems one at a time, as they arise in what I call vague disagreements.  Attention to this wide range of practical solutions shows that no universal response to the paradox is either possible or necessary.

 

2. A "thermostat" model for vague predicates

          Let me begin with a skeletal analysis of vague predicates based on a simple extension of classical logic, one that separates their two essential elements of gradability and standards.[4] 

          Gradability. Vague predicates permit us to inquire how much, not just whether or not, they apply.  This is to say, the properties they represent come in degrees or measures, so that a person can be very tall, or six feet tall, or taller than somebody else, not just plain tall.  Fuzzy logic offers a simple but controversial way to do this, by abandoning classical bivalence in favor of a continuum of values from totally true to absolutely false.  So, instead of the usual pair of truth-values {0, 1}, an interpretation will assign each sentence a degree-of-truth value in the range [0, 1].  In this system, predicates function like thermometers, raising or lowering their truth-readings according to the changing degrees to which things have the properties they represent: the taller something is, the truer it is to call it tall.  But among other, more technical problems, this makes basic assertions difficult to understand.  For example, in my saying simply that it is cold outside, am I implying that it is absolutely true that it is cold outside, or should I be taken to mean that it is merely more true than false, more true than usual, or something else? 
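
          To make the fuzzy-logical picture concrete, here is a minimal sketch in Python, assuming an invented linear mapping from temperature to degree of truth; neither the anchor points nor the linear shape is anything a fuzzy logician is committed to.

def degree_cold(temp_f, definitely_cold=32.0, definitely_not_cold=72.0):
    # Degree to which "It is cold" is true on this invented scale:
    # 1.0 at or below 32 F, 0.0 at or above 72 F, linear in between.
    if temp_f <= definitely_cold:
        return 1.0
    if temp_f >= definitely_not_cold:
        return 0.0
    return (definitely_not_cold - temp_f) / (definitely_not_cold - definitely_cold)

print(degree_cold(52))   # 0.5 -- "It is cold" comes out exactly half-true

          On this picture the assertion problem just mentioned is vivid: saying "It is cold" at 52 degrees could be read as claiming degree 1.0, or merely some degree above 0.5, and the formalism by itself does not say which.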

          In any case, there is no need for us to abandon bivalence as fuzzy logic does in order to represent the gradability of predicates.  It is enough to add a module to the syntax of atomic formulas in classical logic that will allow for comparative and metric formulas as well.  The comparative logic CL provides a simple way to do it.  Add to the usual vocabulary a pair of two-place connectives {>, =} plus a set of metric constants {i, j, k…}, and allow molecular sentences to be constructed with the new connectives on the model of conjunction or disjunction, but restricted to atomic formulas and the new constants.  Now, we can use a comparative sentence like Fa > Fb to represent a's being more F than b, Fa = Fb to represent a's being as F as b, Fa = i to represent a's being F to extent i (not Fa being true to degree i), and so on.[5]  Then we can add to the semantics a category of non-classical "how-much" extensions e(F, a), etc., in the set of real numbers to interpret atomic formulas like Fa without assigning them truth-values.  The comparative sentences can now be interpreted as true or false in the obvious way.  Sorites problems could be avoided altogether in an artificial language like this, where atomic formulas are not even interpreted as sentences.

          Standards. In natural languages like English, however, we do want to make categorical assertions of atomic form: this thing is this way, simpliciter.  So, if we know that a person is six feet tall, say, and we want to decide whether or not to call him just plain tall, we must have in mind (perhaps unconsciously) some threshold level of tallness, above which someone can truthfully be called tall and below which he cannot.  Unlike a thermometer but like a thermostat or an alarm clock, we need a variable setting that is separate from the reading so that the predicate is "turned on" only when the reading reaches that setting.  So, to allow for formulas like Fa to be fully interpreted as sentences in CL, we include in the formal semantics a standard value s(F) for each predicate F to which the how-much extension e(F, a) can be compared: if e(F, a) is equal to or greater than s(F), the sentence Fa is true; otherwise, it's false.  These standard values in CL interpretations represent the boundary lines we seek in trying to solve sorites problems that arise in ordinary language.[6] 
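
          Here is a minimal sketch of this "thermostat" semantics, using Python as an informal metalanguage; the predicate, the how-much values, and the standard below are invented for illustration, and a real CL interpretation is of course not limited to finite dictionaries.

e = {("Tall", "a"): 71.0, ("Tall", "b"): 68.0}   # how-much extensions e(F, x), in inches
s = {"Tall": 70.0}                               # standard value s(F)

def more(F, x, y):        # Fx > Fy: x is more F than y
    return e[(F, x)] > e[(F, y)]

def as_much(F, x, y):     # Fx = Fy: x is exactly as F as y
    return e[(F, x)] == e[(F, y)]

def atomic(F, x):         # Fx is true just in case the reading reaches the setting
    return e[(F, x)] >= s[F]

print(more("Tall", "a", "b"))   # True: a is taller than b, whatever the standard
print(atomic("Tall", "b"))      # False: b falls just below s("Tall")

          Raising or lowering s("Tall") flips the truth-values of borderline atomic sentences without changing any comparative facts, which is just the separation of gradability from standards that sorites arguments exploit.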

          The gradability of vague predicates in English seems easy enough to understand: when something is some way to some extent, like this dog having a weight between twenty and thirty pounds, this is an ordinary, non-linguistic fact about the world, not a matter of semantic decision or anything else that would seem to detract from its objective reality.  But standards for English predicates, like thirty pounds being the upper bound for an animal to count as a "small dog", are something of a mystery.  Where do they come from, exactly?  Are they supplied by individual speakers?  By groups of speakers?  By entire linguistic communities?  Or are they somehow implicit in nature, independent of how people are disposed to speak?

          The big picture that I have in mind is this.  The world is full of continuities, small differences on different scales, motion, forked paths, objects and processes that grow from other things and fade away, gradual changes in the properties of such things, and the like, as well as ever-changing human desires and beliefs.  An uncountably infinite language would be needed to state every fact in worlds like this precisely.  But natural language has to be finitary and compact, as our time and resources are limited and we have other things to do in life than make long lists of statements.  Vague predicates allow us to describe a world full of gradations with simple sentences by setting standards at different points in the relevant scales.  These standards can be set by large or small groups of speakers in a number of reasonable ways, some local and temporary, some relatively permanent and global, some largely empirical, some purely conventional, some more precise than others; and most standards are never set at all until there is a practical need for them.  This way, the language can be fitted to the world as closely as we need, given our many purposes, but no more closely than we need, given our constant desire for efficiency.

          Imagine a vast colony of insects, charged by their deity with building a gigantic statue of it out of twigs.  A hologram of the great being is provided so that they can fit the sculpture as exactly as they like to the real figure of the god.  They can use as many twigs as they need and glue them together wherever they choose.  There is no preset artistic style in which the sculpture must be made, and no particular deadline for completing the work.  But time matters to these short-lived insects, excreting glue and solvent uses energy, and twigs have to be found or bitten out of local trees and dragged to the site.  So, these insects need to find efficient methods for designing the statue and putting it together.  There is no central command, though; rather, decisions must be made as to the placement of this or that twig through local communication. Since these insects do not all think alike, much of this work takes place within a pheromonal cloud of arguments, and sometimes bigger or smaller segments of the sculpture have to be unglued and readjusted or rebuilt.  But clever placements and sturdy sub-structures tend to get noticed and to propagate around the colony.  Technique evolves, then, as the sculpture itself develops, from the ground up.  Fitting a natural language to the world is like this.  We base numerous predicates on useful scales and leave them vague so that we can adjust them in practice by moving their standards, either for specific purposes or as we gain greater knowledge of what typically works best.  We always want our predications to be true, but this is like the insects wanting twigs to look like parts of their god: each sentence fits the world better or worse depending on its situation relative to others in a flexible structure that must fit the world holistically.  Our question, "Where should we set the standard for this predicate?" is like the insects' question, "Where should we glue down this twig?"  The setting appears to be up to us in principle, though under many constraints in practice.  There is an art to setting standards successfully, requiring much cooperation and the mastery of numerous evolved techniques.

 

3. Vague disagreements

          Serious disagreement presupposes an objective fact, and sometimes we seriously disagree over where to draw the boundaries within the ranges of vague predicates.  Such disagreements are usually passive, comprising only contrary dispositions to apply such predicates to cases.[7]  But they also arise in personal discussions, sometimes very frustrating ones.

 

          He:    It's cold out here.

          She:  No, it isn't. 

          He:    Here's the thermometer: it's only 52 degrees.

          She:   Okay, it's 52 degrees.  But that's not cold.          

 

What is going on in cases like this?  He and She disagree as to whether or not it's cold.  But this is evidently not an empirical disagreement, since they do not disagree as to how cold it is, which is all that the world seems to tell us objectively.  Even when all of the relevant contextual facts are added in (location, time of year, time of day, statistics about skin sensitivity, how most people speak, and the like), disagreements of this sort often survive: 

 

          …

          He:    I'm pretty sure most people would call this cold for a day in June around here.

          She:   Okay, but so what?  That just means most people would be wrong.

 

          Once the visibly empirical content of the case has been exhausted by agreement upon temperature and other potentially relevant facts, the only question that apparently remains for He and She is whether they are going to count 52 degrees as cold under the present circumstances, or, what amounts to much the same thing, where to place the standard for coldness relative to 52 degrees.  We are usually able to resolve such disagreements quickly, and it often strikes us as perverse for either side to care a lot about who "wins" such arguments.  In this way, vague disagreements over predicates like "cold" tend to seem trivial.  But it isn't trivial when people seriously argue over standards that have consequences, for example when a student argues with a teacher over whether an essay deserves an "A" or a "B".

 

          Student:     What's wrong with my paper?  Why didn't I get an "A"?

          Teacher:      The paper isn't good enough.

          Student:     Why not?  It's got three plausible arguments and half a dozen quotes.

          Teacher:      That's what got you the "B". 

          Student:     But surely that's enough for an "A" around here.  All the other teachers say so.

          Teacher:      I don't care how other teachers grade.  "A" means excellent work, and this paper isn't excellent, just good.

 

Unlike the question whether it is cold outside, the question of grades is one that students in college tend to care a lot about, so exactly where we draw the lines between "A"s, "B"s, and "C"s can be important.  This is why teachers often argue with their colleagues and administrators over grading standards, hoping to establish clear, uniform boundaries between adjacent grades.

          Even our most serious moral and political problems often present themselves as vague disagreements.  Consider:

 

          Us:              The president approves of torture.

          Them:         No, he doesn't.

          Us:              Well, he's letting the CIA waterboard its prisoners.   

          Them:         Yes, but that isn't torture, just extreme discomfort.

 

The basic disagreements in such cases are not merely linguistic.  Rather, we use vague terms like "torture" to mark important moral or legal boundaries that are in dispute, and the semantic disagreement often serves as a proxy for the serious debate over morality or law.  Thus, we could avoid the semantic question about what exactly counts as torture by arguing directly over what kinds of pressure on prisoners ought to be forbidden.  But terms like "torture" also have meaning prior to our present disputes, so the issue of proper standards retains a kind of precedence.  We want to forbid some treatments only because they constitute what we already ought to recognize as torture. 

          In any case, none of these vague disagreements seem to be resolvable straightforwardly through either stipulation or empirical research: we can't seem to discover pre-existing boundary lines for "cold", "excellent", or "torture", nor can we draw them arbitrarily in any way likely to end the disagreement.  Instead, it should be clear that what is in dispute in all these cases is where the relevant boundaries ought to be drawn.[8]  Now, "ought" can mean a lot of things, of course, from moral necessity to practical need to casual preference depending on the purposes we have in mind.  But some such normative element seems to be essential to all vague disagreements. 

          In light of this essential normativity, I find it helpful to think of soritic boundaries not as mysterious, ungrounded truth-factors in vague statements taken one by one, but as something like ideal resolutions to actual or possible vague disagreements.  If the criterion for placement is always where we ought to draw the line, given our complex circumstances and often conflicting interests, any real, objective boundary would need to coincide with the result of the best way of coming to agree on where it is.  From this perspective, if the epistemicists are right about objective boundaries that defeat every apparent instance of the sorites paradox, then there must be a uniquely correct standard for us to set in every case.  If there is such a standard for each vague predicate, then there must be a uniquely correct resolution to each vague disagreement.  But if there is either no objectively best place to draw the line or (in case of a tie) more than one such place, then it is plainly to some extent a matter of choice where we should draw it.  And if it is to any extent a matter of choice where we should set the relevant linguistic standards, then there can be no prior fact of the matter as to where the precise boundaries for our vague terms really are.

 

4. Some varieties of standards

          When we consider how vague disagreements seem to get resolved in practice, it becomes clear how many different semantic, epistemic, and pragmatic methods are available.  This is not to say that these options are all equally attractive, or that they always produce ideal results.  But a number of these are commonly accepted as establishing the proper standards – that is, the ones we really ought to be using for the terms in question – or else properly avoiding them, when that is what we ought to do.  Here is a list of types of standards that result, some in everyday use, others more strictly philosophical. 

          Relative standards.  A common feature of vague predicates, usually noted in discussions of sorites problems, is relativity to sortals, that is, such predicates apply to things only as members of certain types.  So, this thing here is a large house-plant, not just a large thing; this other thing is smart for a house cat, but not particularly smart, simpliciter; and this coffee is cold while beer at the same temperature is not.  To make sense of a statement like "This is tall", we need to connect "tall" to another predicate: "oak tree", "stack of muffins", "three-year-old girl", or something else.  Perhaps all ordinary adjectives are used only relative to sortals like this, whether we do this explicitly or let a conversation's context do the work.[9]  But this doesn't by itself resolve most vague disagreements – there is still vagueness in the term "tall flagpole" – it merely narrows them down.  Most of the time, such relativizations are not made explicitly but are made clear in the context of some activity or other.  Thus, old Celtics fans in Boston might argue over whether the six-foot nine-inch Bill Russell was tall without having to say "tall for an NBA center during Russell's career". 
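
          In the formal model sketched earlier, sortal relativity can be represented, roughly, by indexing the standard to a gradable/sortal pair rather than to the gradable predicate alone (compare note 9); the following is a minimal sketch with invented cutoffs and measurements.

heights = {"russell": 81.0, "maya": 41.0}                  # inches
standard = {("Tall", "NBA center"): 82.0,                  # hypothetical cutoffs s(F, G)
            ("Tall", "three-year-old girl"): 40.0}

def tall_for(x, sortal):
    # "x is tall for a G": compare x's measure with the standard indexed to G.
    return heights[x] >= standard[("Tall", sortal)]

print(tall_for("russell", "NBA center"))         # False, on this particular cutoff
print(tall_for("maya", "three-year-old girl"))   # True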

          Rejected standards.  One option available for resolving particular vague disagreements is to reject them as pointless.  Thus, the two people arguing over whether it is cold outside, despite agreeing on all of the relevant facts on which the matter seems to supervene, seem to be wasting their time.  They would be better off agreeing that it's 52 degrees outside, and letting that precise fact tell them whether they should go inside or wear a coat.  If we always resolved vague disagreements by dropping categorical assertions in favor of comparisons and statements of degree along the lines suggested above, then we could say that vague predications are not inherently meaningful, in the sense of being literally true or false.  They are only meaningful as gestures toward more fleshed-out, precise but complex statements in essentially comparative form.  An ideal-language approach to the sorites problem would then be appropriate, on the order of those imagined by Russell (1923).  But this is only one way that we resolve these arguments, and far from the most common.  In most cases we agree that there are at least some directly true or false applications of the terms in question, so some standard or other must be presupposed.

          Fuzzy standards.  An alternative method for resolving similar vague disagreements is to drop bivalent language and to speak instead about degrees of truth.  Thus, people arguing over whether Russia lost World War One might agree that it is somewhat true, say in light of the treaty of Brest-Litovsk, and somewhat false, say in light of the treaty of Versailles.  We also have in common use the adjective "half-true", applied for example when a political candidate claims to be from New Hampshire, where he owns a second home, although he was born in Maine and raised in Massachusetts.  In general, if people can agree that something of the form Fa is "sort of" or "somewhat" the case, for example that it's raining on a drizzly day, then they can also agree that "Fa" is sort of or somewhat true.  The fuzzy-logical approach to the sorites paradox can be viewed as insisting on such resolutions, so that statements like "It is cold," or "Waterboarding is a form of torture," will always come out true to some degree between 0 and 1. 

          Again, however, this is just one of many ways we have of settling vague disagreements.  Most of the time, suggestions of this sort will be rejected as inadequate by speakers who want their judgments to be true or false simpliciter – particularly people with decisions to make, such as whether to find a certain agent guilty of torturing a prisoner.  We can't sort-of convict someone and sort-of send him to prison, so calling him sort-of guilty is pretty useless, even if it's true.  Even if we can agree on intermediate sorts of penalties for such iffy cases, we still expect the law to draw a bright, bivalent line between what is categorically permissible and what is not. 

          Gappy standards.  There are also occasions when it seems right not to have made up our minds.  When you are hiking towards Mt. Everest, for example, there will be a range of cases where it is clearly false that you are on the mountain, and there will be cases where it is clearly true.  But in-between it should be clear that there are also borderline cases, where it would be wrong to say either that you are definitely on the mountain or that you are definitely not.  Speakers are free to differ as to where exactly in the middle range the mountain starts, but no one's stipulation can determine the truth of the claim in common language that you are on the mountain.  According to supervaluationists like Lewis, it is only where every competent speaker would say that you are on it that it is actually true that you are on it, and only where no competent speaker would say that you are on it is it actually false.  In the middle, where one could say it either way, the truth is simply indeterminate.  Supervaluationists claim that vagueness is always like this, a matter of distinguishing clear cases where it is proper to assign truth-values from borderline cases where it is not.  We can see how this plays out in many vague disagreements: cases toward one pole end up with unanimous positive agreement, cases near the other pole end up with unanimous negative agreement, and in the middle it is left up for grabs.  In the discussion between He and She above, the two disputants might end up agreeing that it is definitely cold (for that vicinity, say, at that time of year and time of day) when it is below 40 degrees, definitely not cold when it is 60 degrees or more, and indeterminate when it is in-between.  Like the other options we've considered, though, this sort of analysis cannot account for all vague disagreements, because we do not always end up breaking a vague predicate's range into three parts (true, false, and neither) any more than we always break it into two parts (true and false).  Sometimes, for example on opinion surveys, we break scales into five parts: false, somewhat false, neither true nor false, somewhat true, and true.  This can be reasonable in cases where "higher-order vagueness" appears to be a problem, and we can't find adequately precise boundaries between the definitely true or false cases and the borderline cases.  Alternatively, we could end up with a scale from one to ten, or zero to a hundred, or use the whole real interval from zero to one, or other possible partitions, depending on how much precision we desire in the case at hand. 
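
          Here is a minimal sketch of the supervaluationist verdict just described, with invented cutoffs standing in for the admissible ways of sharpening "cold": a sentence counts as true if every admissible sharpening verifies it, false if none does, and indeterminate otherwise.

admissible_cutoffs = [40.0, 45.0, 50.0, 55.0, 60.0]   # hypothetical sharpenings of "cold" (degrees F)

def verdict(temp_f):
    truths = [temp_f < cutoff for cutoff in admissible_cutoffs]
    if all(truths):
        return "true"            # cold on every admissible standard
    if not any(truths):
        return "false"           # cold on no admissible standard
    return "indeterminate"       # the disputed middle range

for t in (35, 52, 65):
    print(t, verdict(t))         # 35 true, 52 indeterminate, 65 false

          Swapping the three verdicts for five, or for a scale from zero to one hundred, changes only the return values, which matches the point that the tripartite division is one choice among many.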

          Elusive standards.  Sometimes, it seems that we cannot find the precise boundaries we seek even though we agree that they exist.  One reason for this is that we cannot discriminate between sufficiently similar cases.  Suppose you are trying to choose a color to paint your house: you are sure that you want a blueish green of some sort, and you can see approximately where in the color displays at the paint store you want to look.  But once you are staring at the little samples, you find that you cannot tell the difference between next-door neighbors, so that no one color seems uniquely right to you.  If you march your eyes along soritically from one to another, sooner or later you will recognize that you have passed out of the acceptable zone into shades that are too green.  March back the other way, and you'll eventually notice that the samples before you are too blue.  In-between, you'll end up with a range of seemingly equally acceptable samples.  There is a similar problem with many vague disagreements.  Even when opponents are willing to work things out amicably, it can seem impossible to settle on a particular precise boundary, since if any particular one seems right, then the ones very nearby will also seem right.  You could always just throw a dart or flip a coin, of course, but this would be arbitrary, not a standard that you know (or have good reason to believe) has been set in exactly the right place.  Again, though, if we accept the polar premises in any sorites paradox, then the inductive premise (that if p_n is true, then p_(n+1) is true) simply must be false, in which case there is necessarily a boundary somewhere.  Thus, according to Williamson (1994: 226f.), soritic boundaries are in principle unknowable, because knowledge always leaves a margin for error in similar cases. 

          Nevertheless, we can sometimes come to substantive agreement over such standards despite margin-of-error principles, by using indirect methods rather than direct perception or arbitrary choice.  One way to resolve the paint-chip problem is to line up in order all the chips that you cannot exclude when looked at separately, so that none are distinguishable from the adjacent chips, but all look vaguely acceptable.  Now pick the one in the middle, and you have probably chosen the exact right color.  There is no absolute guarantee that your aesthetic preferences have the symmetry required to make this work, but supposing they do, the precise color you want should be the one equally distinguishable from the first visibly unacceptable colors on either side.  Vague disagreements might sometimes be resolvable in similarly indirect ways if the parties can agree that their perceptions or judgments are probably equally reliable, and then split the difference in a principled way, just as cooks who initially disagree as to whether the sauce is too bland or too spicy might reasonably come to agree that it is just right.
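
          The paint-chip method can be stated as a small procedure; this is a minimal sketch assuming the chips are already lined up along the relevant blue-to-green ordering and that each has been judged acceptable or unacceptable when viewed on its own.

chips = [("too blue", False), ("chip 1", True), ("chip 2", True),
         ("chip 3", True), ("chip 4", True), ("chip 5", True), ("too green", False)]

def middle_acceptable(chips):
    # Indices of the chips that could not be excluded when judged one by one.
    ok = [i for i, (_, acceptable) in enumerate(chips) if acceptable]
    if not ok:
        return None
    # Take the chip equidistant from the first visibly unacceptable chip on either side.
    return chips[ok[len(ok) // 2]][0]

print(middle_acceptable(chips))   # "chip 3"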

          Personal standards.  Here is another way of settling vague disagreements, used when there seems to be no point in continuing to argue over standards.  Let each person just revert to his own idiolect, as far as standards go.  Thus, the people arguing about whether or not it's cold outside might well decide that there is no agreement forthcoming: it seems cold to him, and it does not seem cold to her, and maybe that is all there is.  By his standards, then, it is in fact a cold night out, by hers it is not, and no mutual standard exists.  In this way, each speaker's subsequent vague predications are effectively indexical: "a is F" just means the same as "a is F by my standards". 

          Delia Graff Fara (2000) offers a general formula for personal standards, to the effect that predicates apply only when they apply to an extent "significantly greater than the norm" for things of the relevant type, where the norm is usually the typical level or amount and the significance depends directly on the interests of the speaker.  Even this very open formula is too restrictive, though, I think, if it is intended to account for all of ordinary usage (Fara is not altogether clear on this point), since we can sometimes set reasonable standards far above or below whatever would ordinarily be viewed as the relevant norm.  "Big person", for example, might be applied by members of the Little People of America to anybody over 4'10", that is, to anyone too large to qualify to join their group, and this includes almost all adults.  Sometimes we even set the standard so low as to include all members of the relevant kind, as when we say that "Life is short," or "People are strange," or so high as to exclude all members, for example for "small" as applied to beverages in a market where sizes begin with "medium", or even "large".  Or the decision might be totally idiosyncratic.  If I want to sort my Halloween candy into two piles for larger and smaller pieces, it is up to me where I shall make the cut.  So, if I want to call the pieces in these piles "big ones" and "little ones", respectively, it is up to me where I shall set the standards for those terms, provided only that the pieces I am calling little are smaller than those that I call big.  I might even leave the candy all in one pile and call all the pieces "little", just because I'm peeved that no one gives out full-sized candy bars these days, or because I wish that everything were made of candy.

          Unconscious standards.  One thing that makes the sorites seem such an intractable problem is that we do not even know where we ourselves would set standards for the vague terms we use, lacking articulate principles to back up our particular judgments.  Thus, US Supreme Court Justice Potter Stewart (1964) famously admitted in a concurrence that he could not define hard-core pornography with enough specificity to predict his judgment in all cases, but he insisted that he knew it when he saw it, and that this was good enough for present purposes.  Stewart's remark gained no visible traction among his colleagues, and indeed it offered little help to future pornographers trying to stay just within the law, but it seems an apt description of much if not most of our efforts to set even personal standards for vague predicates with any real precision.  In the argument about cold weather above, for instance, it may well be that neither He nor She has any minimum standard for coldness under various conditions consciously in mind; they just perceive the present temperature as either cold or not cold.  This does not mean that implicit standards do not underlie their relevant linguistic dispositions, for surely, each would respond in one way or another to the question, "Is it cold out here right now?" at any given temperature, other conditions being equal.  But it is difficult to argue meaningfully when we have not brought the criteria we actually use to consciousness and articulated them.  Thus, waterboarding may strike Us as a form of torture while striking Them differently, but we will have to go beyond these intuitions into principles that we have worked out for ourselves before we can debate them in a useful way.

          Conventional standards.  The more important problem with personal standards is that we do not usually speak in personal or private languages, but rather in common ones like English.  In order for language to function as a social institution, we have to care about being understood by others.  As long as a vague disagreement is a real disagreement, we cannot be using personal standards merely as personal standards.  We must each be willing to assert that they are the right standards for the predicates in question; else we have already gone our separate ways.  So, if we presuppose that there is an objective fact as to which of your ancestors was the first human being in your line, then your personal standard for humanity has only the status of an opinion.  It doesn't make anything interesting true.[10]  This is what pulls us towards agreement on standards, even if it means flipping a coin: without such conventions in place, however locally or temporarily, there is no criterion of truth.  So, whenever we need to make precise, clear classifications, we work to create uniform standards, sometimes very technical ones.  For example, back when we paid less public attention to other people's health, the word "obese" just meant "unusually fat", and people would apply it using different, ordinarily unconscious standards.  But when obesity came to be seen as a societal problem requiring redress through public education, experts agreed on a particular criterion and cut-off point, namely a BMI (body mass index) of 30 or above.  Now there is no longer an obvious sorites problem for the once-vague predicate "obese": feed Mr. Creosote one wafer-thin mint after another and, sooner or later but at one precise time, the bomb will go off.  
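
          Once such a convention is in place, classification is a simple calculation; here is a minimal sketch using the standard BMI formula (weight in kilograms divided by the square of height in meters) and the cut-off of 30 mentioned above, with the sample figures invented.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def obese(weight_kg, height_m):
    # The conventional cut-off discussed above: a BMI of 30 or more counts as obese.
    return bmi(weight_kg, height_m) >= 30.0

print(round(bmi(95, 1.75), 1), obese(95, 1.75))   # 31.0 True: over the line by convention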

          Provisional standards.  We can imagine a linguistic community that imposes strict universal standards for all gradable predicates, as the European Union does for grading sausages and other foodstuffs subject to variation.  This would eliminate sorites issues in essentially the same way as using private languages with arbitrary standards, except that now they would be public and, once established, no more arbitrary than the meter.  Not all linguistic conventions have to be universal ones, however, and this is a good thing.  Often, we just need to agree on boundaries for vague predicates that will let us speak together with whatever precision is required for this or that common project.  A law school admissions board might find it useful to sort applications into three piles: "good" and "bad", based solely on a formula for LSAT scores and grade-point averages, and "uncertain", so as to produce a manageable-sized middle group of applications that they have to read with care.  The standards for distinguishing these categories may well change from year to year, depending on current application volume, prior enrollments, and other factors, and they will certainly vary from one law school to another.  These provisional standards can be as controversial within their parameters as any other standards, but given the necessity of having some standard or other to use during each application season, they can be fixed precisely by convention without anyone believing they are perfect. 

          Convenient standards.  Sometimes a standard is the best one among its neighbors just because it is especially convenient for us to place it there.  This is why a "full" glass of Guinness is exactly one pint as far as the language is concerned, whether or not we can distinguish it from one with a drop more or less.  The standard for "acceptably tall" for a U.S. Marine is precisely 58 inches, not 58.03, though there is no intrinsic reason to prefer one standard to the other, or even a reliable means of telling them apart.  The extrinsic reason is sufficient, that 58 is a nice, round number and 58.03 is not.  (This is surely a factor in the medical standard for obesity being set at exactly 30 BMI, as well.)  Other convenience factors may include keeping the size of bricks currently in use, obeying the Empress's arbitrary will, flipping a coin to save time, or basing a system of measurement on the mass of a molecule of water.  Having such practical reasons for stipulating one precise standard over its next-door neighbors can often make it the unique best standard to have set, all things considered, hence no longer really stipulative.  This provides another way to overcome in practice the margin of error principles that Williamson believes make pre-existing boundaries unknowable.

          Empirical standards.  It has been argued (Hart, 1992) that the standard for grains in a heap is not unknowable, not personal, not conventional, not provisional, and nowise indeterminate; it is four.  That is the minimum number of grains that can assume the definitive conical shape of a heap, after all, with three together on the bottom and one resting on top.[11]  Sorites experts may be unpersuaded by this argument, but at least it raises the possibility that standards can be discovered empirically, even after many centuries of frustrated a priori efforts.  Natural phenomena and kinds, especially, might be found to have sharp boundaries through scientific research, as modern astronomy has let us demarcate the seasons with much more precision than was possible for druids, or as chemistry has separated water from that wet stuff in the lakes.  It is unlikely, though, that such boundaries will always be perfectly precise, since even scientists are capable of hardened disagreement over small things, and there will often be some element of choice involved, like in the ways we now distinguish among breeds of dogs.  Even when we really do carve nature at its joints, as on Thanksgiving Day we separate the legs from the body of a turkey, there is a certain amount of play.  In any case, it is hard to imagine that common adjectives like "big" hold such empirically determinable boundaries, even when connected with natural-kind sortals like "asteroid" or "avocado". 

          Negotiable standards.  To the extent that precise standards for predicates are not exhausted by existing facts on the ground, it seems that opponents in a vague disagreement must somehow negotiate these boundaries if they are going to exist, like the boundaries between Germany and France after their wars.  In the meantime, they have a funny sort of status: we agree that there will be a precise boundary in the future, so in some sense that future boundary "exists" now as an object of reference.  So, for example, a hapless Alsatian farmer can ask with sense during an armistice, "Am I in Germany or France?" with reference to the border that has not yet been hammered out.  Such standards are like the grandchildren that we hope to have one day.  We can knit them little outfits – "This one is for my first granddaughter" – but we cannot actually dress them up because they don't really exist yet. 

          Here is a closer analogy.  I am in the process of buying a house, or at least I think I am.  No one involved knows exactly what the purchase price will be, assuming that the deal goes through.  But we all presuppose that some such price exists, and we refer to it in some ways just as if it were a definite figure.  For example, I have already agreed with the prospective seller to pay 20% of the purchase price, whatever that will be, in cash up front.  Presumably, it is going to fall somewhere between the seller's asking price and the substantially lower price that I have offered.  And we can guess, perhaps within a few thousand dollars, where it will probably end up, but not with any real precision.  It is not entirely arbitrary, though, even within that range.  The seller and I are both influenced by our own and others' interests and by facts about the house and neighborhood, including the value assessed for taxes and perhaps other appraisals, though we are not strictly bound in our negotiations by these facts.       

          Vague disagreements are sometimes resolved through similar negotiations.  There is agreement on a range of values for which the vague predicate in question does apply and a range for which it doesn't.  In the gap between them are all the cases where people are inclined to disagree.  Within this middle range of cases, the exact boundary is not devoid of fact, exactly, as the supervaluationists would have it, but implicitly under a kind of negotiation.  Thus, we can imagine the two people on the 52-degree day above agreeing on preliminary boundaries for the word "cold".  They automatically agree that 0 degrees F is cold and that 80 degrees F is not, and maybe they can narrow it down quickly enough to between 40 and 60 degrees, say, but then it seems they're stuck.  Each has spontaneously, if perhaps unconsciously, employed a personal, as it were idiolectal, standard for asserting that it's cold or that it isn't.  But if they want to understand each other easily in future discussions, they cannot both just stick to their guns; they need to create a common dialect respecting the term in question, if they can.  So, they each adduce considerations in favor of a higher or lower standard than the other one prefers, perhaps including wind chill factors, information about how skin is affected by different temperatures in these conditions, information about common or expert usage, and the like.  If they come to a stable agreement, it will be based on a host of factors, not excluding diplomatic skill.  Thus, the general US standard for adult drunken driving, namely 0.08 percent BAC (blood alcohol content), was set piecemeal over years of statewide and interstate negotiations among legislators, under advisement from police and health officials and pressure from citizen groups like Mothers Against Drunk Driving as well as corporations and business associations with their own interests in the question.  There is no global standard yet, even among English-speaking countries.  But there is a fact of the matter now in the United States as to what constitutes driving while intoxicated, and that precise standard was largely determined through negotiation.  

          Contingent standards.  Deals can fall through when people's interests diverge too much, when too much or too little is at stake, or when negotiators are distracted by other needs or desires.  My seller and I may never reach agreement on what the house I'd like to buy is worth, in which case there will not be any purchase price for me to pay.  In that case, what is the status of the purchase price we have been trying to agree upon, and 20% of which we have already agreed to my paying in cash?  If it is never going to be agreed upon, then clearly, it does not really exist right now.  But then, what are we talking about?  It is something that presently might exist, so that it does exist right now as the thing we are talking about.  And as a presently-existing future possibility it has a number of definite properties, for example that it lies somewhere between the seller's asking price and my first bid, but seemingly nowhere in particular within that range.  Yet it also has to be perfectly precise, because the bank will never accept a check for "around" any number of dollars; there has to be some particular number that I will write down should the deal go through.  But if negotiations fail and it does not come to exist, then there is no particular precise amount that it will actually be – so in that sense it is evidently not precise, since precision by its nature must include particularity.  It seems, then, that there are two levels of ignorance in play: first, we don't know whether or not the final purchase price exists; and second, we don't know, supposing it exists, precisely what number it will be.  So, is the number that we are speaking about thus hypothetically precise or not?  Well, it is and it isn't: if it will come to exist, then it is really precise, and is only presently unknown.  If it will not come to exist, then it is only hypothetically precise, and hence unknowable inherently.  Of the price I did pay, once the deal has gone through, I can say that it was X, precisely.  Of the price I would have paid had the deal gone through, once the deal has in fact fallen apart, I can only say that it was somewhere between my price and theirs. 

          Standards for vague terms are often a lot like this.  We disagree on whether it is cold out here or not, or whether this distant ancestor was or was not a human being, given some set of agreed-upon facts of the case.  We argue, doing our best to reach agreement as to standards of coldness or humanity, bringing in whatever facts and interests we consider relevant, and we may or may not ever end up together.  If we do, then good: we now seem to have worked out the precise, best standard that was the one that we were disagreeing over earlier.  If we cannot reach agreement, though, having done everything we reasonably could have done, then this is evidence that there never really was a standard for us to agree upon; it existed only hypothetically.[12]  But that hypothetical existence was enough to make our discussion a coherent one.  Without some standard being presupposed, the application of vague predicates ultimately makes no sense, just as the epistemicists have claimed, for one thing because it leads to the sorites paradox.  But the standard doesn't have to be a real, fully determinate one, any more than the purchase price of a house has to exist in reality in order for us to speak coherently about it as we work to nail it down.    

          As I have claimed from the outset, all standards for vague predicates are normative to some extent.  Two people can make an agreement as to what counts as cold or human or torture that is wrong in the sense of being far from normal usage.  But it is also possible for normal usage to be wrong, as courts sometimes declare, for example with respect to adjectives like "cruel" and "unusual", nouns like "property" and "speech", or verbs like "promise" and "assault".  Philosophers often make similar claims against ordinary usage, as Plato did regarding "piety" and "justice".   This normativity is obvious when arguing over whether "torture" includes or excludes waterboarding, where we are clearly not just trying to have our own way, or seeking consensus for its own convenient sake.  We seek rather to do the right thing by drawing boundaries where we ought to draw them, with all relevant empirical, prudential, and moral factors considered.  But even in the most trivial cases, like when we argue over whether it is cold outside, the issue is always where we ought to draw the line.  It seems not to matter in the meantime whether or not we ultimately reach agreement.  We can still refer to the agreement that we ought to reach on presupposition that there is one such agreement, and keep trying to employ the standard upon which we ought to settle, presupposing as always that there is exactly one such standard.  

 

5. Conclusion

          Standards in general can be defined as ideal threshold quantities that may or may not really exist for the application of vague predicates.  As we have seen, there are many different uses to which we put vague predicates, hence no one best method for resolving vague disagreements, hence no one simple way to account for standards in general, hence no single solution for the sorites paradox.  Epistemicism turns out to be true in a way for many cases, maybe most, inasmuch as we do presuppose the existence of some precise standard whenever we coherently apply vague predicates, a standard that we are ordinarily unable to specify for a number of reasons.  One reason is that we do not know our own linguistic dispositions in enough detail to tell us in advance exactly how we are inclined to rule in difficult cases.  But even if each of us did have precise standards clearly in mind when we made vague assertions, those personal standards would count as nothing more than opinions with respect to actual or possible vague disagreements.  The standard that matters to the truth in a vague disagreement can only be the standard that would result from a proper resolution to that disagreement.  We typically cannot know in advance what this shared standard is, because we do not know already what the best agreement we can reach will be, or even whether there is a unique best agreement to reach.  Since it is uncertain in some cases whether that ideal standard exists other than hypothetically, the epistemicist approach cannot work as a universal cure for the sorites paradox. 

 

 


 

WORKS CITED

Armstrong, D. (1989).  Universals: An Opinionated Introduction.  Boulder, CO: Westview Press.

Casari, E. (1987). Comparative logics. Synthese 73: 421-449.

Chalmers, D. (2011). Verbal disputes.  Philosophical Review 120(4): 515-566.

Everett, T. (2000). A simple logic for comparisons and vagueness.  Synthese 123(2): 263-278.

------ (2002). Analyticity without synonymy in simple comparative logic. Synthese 130(2): 303-315.

Fara, D. (2000). Shifting sands: an interest-relative theory of vagueness. Philosophical Topics 28(1): 45–81.

Harman, G. and Thomson, J. (1996). Moral Relativism and Moral Objectivity.  Oxford: Blackwell.

Hart, W. (1992). Hat tricks and heaps.  Philosophical Studies 33: 1-24.

Kennedy, C. (1997).  Projecting the Adjective: The Syntax and Semantics of Gradability and Comparison.  PhD Thesis, University of California at Santa Cruz, published in 1999 by Garland Press, New York.

Lewis, D. (1986). On the plurality of worlds. Oxford: Blackwell.

Russell, B. (1923).  Vagueness.  Australasian Journal of Philosophy and Psychology 1: 82-92.

Sorensen, R. (2001). Vagueness and Contradiction.  New York: Oxford University Press.

Stewart, J. (1964).  Concurring opinion.  Jacobellis v. Ohio (No. 11), 378 U.S. 184.

Williamson, T. (1994). Vagueness. London: Routledge.

 

 


 

NOTES



[1] I would like to thank the editors for their patience, and Catherine Everett, Charlene Everett, Michael Everett, Brett Sherman, Alan Sidelle, an audience at SUNY Geneseo, and an anonymous referee for their helpful comments on earlier drafts of this paper. 

[2] More generally, the "truthmaker principle" (Armstrong 1989: 88, Sorensen 2001: 171) states that every true contingent proposition must be true in virtue of some fact or object in the world that makes it true.  Thus, it is true that there is a cat on my lap because the world really contains this cat and this lap and the one thing is on the other.  This seems intuitively obvious, but Sorensen denies it as a universal rule.  For one bold illustration of the supposed falsity of this principle, Sorensen claims that when we buy a pair of toothbrushes packaged together under the description "Buy one, get one free", one of the two must be the particular one that we paid for, though there is no way for us to know which one it is, since there is nothing in the world that makes it true that it is the one on the left or makes it true that it is the one on the right.  In the same way, he says, sorites problems force us to accept that there is one precise borderline, that is, one uniquely correct standard, somewhere between the clearly true and clearly false applications of vague predicates, but there is simply nothing in the universe that makes it true that it is this borderline rather than any close competitor.

[3] There is, of course, considerable literature on the topic of reference or apparent reference to non-existent things of all sorts.  I am not trying to add to it here, other than to point out that the sorites paradox seems to involve indirect reference to, hence the apparent existence of, another kind of thing of which it may be true to say that there are no such things.  It might be worth some future effort to see if the sorites paradox can be reduced formally to the more basic paradox of non-existence, or at least to search for a more definite connection than I am able to develop here. 

[4] For convenience, I am ignoring vague individual names like Mount Everest, though there seem to be similar problems with borderlines for them.

[5] The symbols > and = are not meant to function here as all-purpose sentential connectives, just to connect atomic formulas and metric constants into basic sentences.  A more general logic that allows for comparisons to be formed between molecular sentences is developed by Casari 1987.  For simplicity, I am ignoring units of measurement.  See Everett 2000 for a more complete discussion. 

[6] Fara 2000 relies on a similar model provided by Kennedy 1997.

[7] I apologize for the ambiguity of the term "vague disagreements".  It is not the disagreements themselves that are vague, of course, but the terms over which the disagreements take place.  In a similar way, religious disagreements are not religious in themselves, but simply ones that concern religious issues.

[8] Chalmers 2011 makes a similar point with respect to such contentious terms:

             What counts as 'torture' or as 'terrorism' might be at one level a verbal issue that a philosopher can resolve by distinguishing senses.  But in a rhetorical or political context, words have power that transcends these distinctions.  If a community counts an act as falling into the extension of 'torture' or 'terrorism', this may make a grave difference to our attitudes toward that act.  As such, there may be a serious practical question what we ought to count as falling into the extension of these terms.

[9] It is easy enough to add this sort of relativity to our formal model by connecting pairs of predicates with something like a slash, so that F/G can represent the property of being fat for a goose.  This should permit recursion, too: fat for an old goose, fat for an old free-range goose, etc.  Interpretation functions in CL then take such hybrid predicates to subsets of the classical extensions of the second, sortal predicate while conforming to the order of the first, gradable predicate.  See Everett 2002 for details.

[10] Among existing treatments of vague predicates, Fara's interest-relativity approach comes closest to the view I am expressing here.  The big difference is that she sees the interests that underlie the truth of any vague statement as those of the speaker, whereas it seems to me that in all interesting cases the essential interests are those held in common.  Fara seems to be drifting toward this normative understanding in her last few paragraphs:

             …[I]n assessing whether it costs significantly more than is typical for a car to cost, we may still feel unsure of what the right answer is to the question: significant to whom? Are the interests of one but not the other of the conversants at issue?  Is it both of their interests? Is it the interests of anyone who might want to buy the car?  If the parties are in disagreement about whether the car is expensive, the reason we as a third party might feel inclined to say "There's no fact of the matter" (whatever that means) is closely related to the reason that some feel inclined to say that "there's no fact of the matter" about whether abortion is wrong.  How the question is answered will depend on the interests or values of the person answering it, and we feel uncomfortable or unsure in giving the interests or values of one person special weight. (Fara 2000: 77)

The reason that we might feel unsure of what to think about such disagreements is not, I think, that we cannot tell which speaker's interests to choose as governing the standards in question, but rather that, as in other normative disputes, what matters is the mutual interest of everyone involved, so siding with either party is likely to be a mistake.  "There's no fact of the matter" in such cases roughly means that no mutual standard for the relevant predicate is going to be agreed upon, so the parties might as well give it up.  Most of the time, though, we believe that there are facts of the matter, that is, common interests that can be developed into common standards. 

[11] Recent experiments with tossing boxes down the stairs suggest that heaps of three are possible for more extended objects.  (My daughter has just demonstrated the same principle with three hot dogs in buns.)  Guy Brickwood has argued (personal discussion) that even two thrown boxes could form a heap if they landed leaning up against each other in the right way. 

[12] Harman 1996 discusses the role of bargaining in the resolution of moral disputes: 

             This [reliance on bargaining] complicates the question whether we ought to assume moral disputes are always rationally resolvable.  It depends on what counts as a "rational resolution" of a moral dispute.  If moral bargaining is included, then it may indeed be useful to assume (until proven otherwise) that a moral dispute is rationally resolvable.  (p. 22)   

In negotiations over standards for vague predicates, this would amount to assuming that precise borderlines exist until it is proven impossible to reach agreement.