EPISTEMIC COMMITMENT

Ted Everett

(draft for Philosophy Club talk 11/02)

 

 

1. The problem.

            Epistemic commitment, or judgment, is the act of making up one's mind, or the state of having made it up, about a fact.  It is a matter of deciding to believe something, or of sticking with a belief already formed.  This phenomenon is not well understood, and it is worth discussing. 

            Let me begin with the more general concept of belief.  The simplest and most common theory of belief says that beliefs are attitudes toward propositions.  To believe a proposition is to take it as true; to disbelieve it is to take it as false.  Propositions are non-linguistic, or at least not tied to any one language (so "es regnet" and "it is raining" are different sentences that express the same proposition).  But we can sort-of imagine all the propositions that exist written out as sentences of English (or some other, arbitrary language) in exactly two columns, with the ones you believe on the left, and all the others on the right.  This simple model will do for some purposes in philosophy, but not for all.  There is a lot more involved in belief than such a yes-or-no classification, and the additional stuff is important.

            For one small thing, it is possible neither to believe nor to disbelieve a given proposition.  I believe that there is intelligent life on Earth, and I disbelieve that there is intelligent life on the moon, but as to whether there is intelligent life anywhere else in the universe, I don't believe or disbelieve – I just don't know.  I haven’t made a judgment.  So it looks like there are three possible attitudes, not two: belief, disbelief, and neutrality.

            But there is more.  Belief also seems to come in degrees.  That is, one believes a proposition with greater or lesser confidence, and this makes it more or less of a belief.  Thus I believe very confidently that there is no one under ten years old in this room, and I disbelieve that there is no one under fifty.  But in between, my degree of certainty changes as the limit rises.  I am pretty sure that there is no one here under sixteen, less sure that there is no one under seventeen, and still less sure that there is no one here under eighteen.  In this hazy middle, I do not even know which propositions should be called beliefs of mine, and which should not.

            A degree of confidence, or degree of belief, is what some people call a subjective probability.  To have more confidence in a proposition, in the sense I mean, is to believe that it is more likely to be true.  We speak of such degrees of belief in various ways:  I sort-of believe that p; I am fairly sure, or really sure, that p; I think that probably p, that the chance of p is such and such a percentage, that p is more likely than q, and so on.[1] 

But we also often (probably more often) speak of believing things outright: I think that p, I believe p, or just plain p.  So it looks like we need at least a two-level theory of empirical belief, with a continuous range of degrees or probabilities on the bottom, and a yes-or-no classification on the top.   There is how much you believe something, and there is whether you believe it.

            This would not be very interesting if it were possible to translate directly from the lower level to the upper.  Suppose that believing a proposition outright was simply a matter of having a certain standard degree of confidence, say anything greater than .5 on a scale of 0 to 1.  Believing p would be simply identical to believing that p was more likely to be true than not.  But this is not right.  Believing that probably p and believing that p are just not the same thing.  Sometimes we believe outright whatever seems more probable than not, but a lot of the time we don’t.  For example, I believe that a man named Edward de Vere, the Seventeenth Earl of Oxford, probably wrote most of the works attributed to William Shakespeare (I'd give it about .6 on the usual scale).  But I do not believe this proposition, called the Oxford Hypothesis, categorically.  I am not epistemically committed to it.  I am not an Oxfordian.  It is just that the evidence in favor of this hypothesis looks pretty good to me right now.  If I had to bet (at even odds), I'd say that Oxford was the guy.  But I don't have to bet.  I don't have to make my mind up, and I haven't made my mind up.  So I don't believe that the hypothesis is true simpliciter, even though I believe that it is probably true. 

            This is not just because .5 is not a high enough cut-off, and it ought to be .8 or something else.  There is no level of subjective probability short of certainty that is high enough to guarantee outright belief.  Suppose you go to the racetrack and bet on a horse.  Only rarely is the horse you choose so highly favored that it is probably going to win (I am assuming that you don't have inside information on a fixed race).  In fact, you might bet on a long-shot, just because you like the odds.  So suppose you have put down fifty dollars on a horse that you believe has only one chance in ten of winning, because the posted odds are 20-1 against, which still makes it a good bet.  Is it fair to say that you believe outright that your horse is going to lose?  No – you believe that it is very probably (.9) going to lose, but you have not judged categorically that it will lose, or you wouldn't be putting money on the small but open possibility that it will win, not to mention screaming encouragement at the poor beast as it lurches down the track. 
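The arithmetic behind calling this long-shot a good bet is just an expected-value calculation.  Assuming, for illustration, that a winning fifty-dollar ticket at 20-1 pays $1,000 in profit while a losing ticket forfeits the $50 stake:

\[
EV \;=\; (.1 \times \$1{,}000) \;-\; (.9 \times \$50) \;=\; \$100 - \$45 \;=\; \$55 .
\]

So the bet is worth taking, even though you believe your horse will very probably lose.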

            Perhaps, then, we should raise the level of required probability all the way to 1.  Only if we are subjectively certain that p do we believe that p, properly speaking.  But again, this just does not accord with common intuitions.  Surely, we have all kinds of empirical beliefs, but none of them are genuinely certain.  For example, I believe that there is beer in my refrigerator.  I believe this outright, but at the same time I admit such slender possibilities as that thieves have broken in this afternoon and taken it, or that my daughter brought it all to school today for Show and Tell, or that I drank it all myself last night, and (consequently) forgot to replace it.  So in this and similar cases, I can have belief without certainty.  But it is mysterious how this is possible.  There is clearly a lower level of probabilistic belief and an upper level of categorical belief, and the second supervenes on the first in some subtle way, not just by way of a direct partition.  So what is that way?  What is the real relationship between believing that probably p, or believing that p with a certain degree of confidence, and judging that p, or believing that p outright?  This is the problem of epistemic commitment, and I think that it is an important problem, and a hard one.

 

2. Judgments as principles of action.

            I want to be clear that my concern here is not with the psychological but with the epistemological analysis of the problem.  That is, I don't especially care how it comes about causally that people make epistemic commitments.  What I care about as a philosopher is whether, when, and why it is rational for people to make such judgments.  What is the rational connection between probabilistic and categorical belief?  When is it rational to make up your mind?

            The first possible answer that I want to consider is: never.  At least in the empirical realm, one simply never has real certainty.  All that one knows is probabilistic - so, even if as a psychological matter we do have all kinds of outright empirical beliefs, as an epistemological matter we ought to have none.  It is always irrational to make categorical judgments.

There are philosophers who define rationality entirely in terms of subjective probabilities: to be rational is just to make the correct adjustments to such probabilities whenever relevant new evidence comes in.  The fundamental mathematical formula that governs these probabilistic calculations is called Bayes's Theorem, and the people who argue for this fluid view of rationality are called Bayesians.  Although Bayesianism is ordinarily discussed within the philosophy of science, there is no reason not to apply it to issues of belief in general.  So my question is, why shouldn't we all be Bayesians, at least with respect to our empirical beliefs?  If we cannot be certain, why should we ever make our minds up about facts at all? 
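For reference, the theorem itself is simple enough to state here.  It says how the probability of a hypothesis h should be revised in the light of new evidence e, in terms of probabilities one already has:

\[
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)} .
\]

Everything the Bayesian counts as rational belief-revision is, at bottom, repeated application of this formula.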

Of course, it may often be necessary in a practical way to commit ourselves to certain actions, to make up our minds about what to do.  We cannot live, or live for long, without making decisions.  But why decisions about what to believe?  Can’t we just accept that our empirical knowledge is limited to likelihoods, and then base our actions on so-called expected values, taking probability into account, but not committing categorically to facts?  This is what one does, or ought to do, when one is betting on horses, playing poker, or engaged in other kinds of disciplined, rational gambling.  One acts decisively in placing bets, but makes no categorical judgments about what is going to happen.  A gambler who believes categorically that he is going to win (or lose) is liable to make foolish bets – also to tip his hand in ways that someone merely “playing the probabilities” is less likely to do.  Why not apply the principles of rational gambling to all epistemic circumstances?

I want to claim that it just doesn’t work.  In gambling, calculations can be made on an explicitly probabilistic basis, but this requires a rare and rather artificial kind of situation.  In most of real life, it is just impossible to fix initial probabilities for all relevant facts, calculate all relevant conditional expected values via anything like Bayes’s Theorem, and evaluate the full range of outcomes as a gambler does, consciously or unconsciously.  It is just too hard.  Our brains are too small. 

Here is an ordinary chunk of practical reasoning in categorical form, called a practical syllogism:

 

                        I desire state of affairs S.

                        I believe that action A will bring about state S.

                        Therefore, I ought to perform action A.

 

Here is the full Bayesian version:

 

                        I desire state of affairs S1 to degree d1.

                        I desire state of affairs S2 to degree d2.

                        […and so on for all possible states of affairs]

                        I believe that action A1 will bring about state S1 with probability p1.

                        I believe that action A1 will bring about state S2 with probability p2.

                        […and so on for all possible states of affairs]

                        I believe that action A2 will bring about state S1 with probability pn+1.

                        I believe that action A2 will bring about state S2 with probability pn+2.

                        […and so on for all possible states of affairs]

                        […and so on for all possible actions]

                        Therefore:

I expect action A1 to have the value v1.                   

I expect action A2 to have the value v2.

                        […and so on for all possible actions]

Therefore, some action Am has the highest expected value for me.

Therefore, I ought to perform action Am.

 

Depending on the case, the relevant Bayesian matrix of probabilities involved may be fairly small, or it may be enormous.  Sometimes, as in formal gambling, we can calculate using matrices like this, but sometimes we simply can’t.  When we can’t, given our capacities in terms of brainpower, available evidence, and time, it will still often be possible to use the simpler, categorical versions of these syllogisms.  This means deciding what the facts are (and what our desires are), i. e.  making up our minds.  And this in turn means losing information, so that our calculations will be less accurate than they theoretically could be.  But we make up for that loss by being able efficiently to generate decisions regarding action, when otherwise we could not, as a practical matter, make any decision at all.
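To make the contrast concrete, here is a minimal computational sketch of the Bayesian schema, with invented actions, states, degrees of desire, and probabilities (nothing here is meant as more than a toy):

    # Toy expected-value calculation on the pattern of the Bayesian syllogism above.
    # All states, actions, degrees of desire, and probabilities are invented.

    desires = {"stay_dry": 1.0, "get_soaked": -0.5}       # how much I desire each state

    outcomes = {                                          # P(state | action)
        "take_umbrella":  {"stay_dry": 0.95, "get_soaked": 0.05},
        "leave_umbrella": {"stay_dry": 0.60, "get_soaked": 0.40},
    }

    def expected_value(action):
        # Sum over states of (probability of the state given the action) x (its desirability).
        return sum(p * desires[s] for s, p in outcomes[action].items())

    best = max(outcomes, key=expected_value)              # the Bayesian "therefore"
    print("I ought to:", best)

The categorical syllogism, by contrast, throws the probabilities and the lesser desires away and keeps only one line of each table.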

            Of course, it is not always better to do something than to do nothing.  If your evidence is highly inconclusive, if you are in a hurry, tired, distracted, or for some other reason especially stupid that day, it may be better not to make your mind up.  It depends on what is at stake.  For example, it is not wise to propose marriage to somebody when you are roaring drunk.  But at least much of the time, we do need to take action on the basis of imperfect information and analysis.  Then it behooves us to simplify our reasoning to the point where it can quickly have effect.  This is (categorically) a hungry tiger running towards me.  Being eaten by a tiger is (categorically) undesirable.  I will (categorically) be safer if I run away.  Therefore, I ought to run away.

            There are many cases where we need not simplify our reasoning to this extent, but we are still unable to perform a complete Bayesian analysis.  Perhaps most of our practical reasoning is intermediate in this way.  We do not decide on all the facts and values in question, but decide on some, and on which others are important enough to ponder further.  We restrict ourselves to considering a few plausible-looking options for action.  Then we perform a limited calculation, compact enough to generate the necessary action in the time allowed, but as comprehensive as it can be made with such resources as we have available.  Think of how one buys a car.  It isn’t:

 

            I want a car.

            If I buy this Pontiac, then I will have a car.

            Therefore, I ought to buy this Pontiac.

 

But it also isn’t a complete Bayesian spreadsheet listing all the new and used automobiles in the world, ranked by their overall desirability, and multiplied by all the possible ways of obtaining them.  It’s more like: I want a high-quality sedan, preferably a Camry or Accord, about this age, about this mileage, for about this price, and I want it in the next few weeks.  And I end up partially evaluating a dozen or so cars - this one is no good, this one is okay except for the noise, this one is great but too expensive - before committing to the proposition that the purple ’97 Mazda with the Hillary! sticker is the best quickly available car for what I want to spend.  This is a typical piece of intermediate, semi-probabilistic and semi-categorical, practical reasoning.
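A rough computational sketch of this intermediate procedure, with made-up cars and cut-offs, might look like this:

    # Semi-categorical car shopping, with invented candidates and thresholds.
    # A few categorical commitments fix the search space; a limited comparison
    # then picks from whatever survives.

    candidates = [
        {"name": "purple '97 Mazda", "price": 6500, "miles": 80000,  "quality": 0.80},
        {"name": "white '99 Camry",  "price": 9900, "miles": 60000,  "quality": 0.90},
        {"name": "gray '96 Accord",  "price": 5200, "miles": 120000, "quality": 0.60},
    ]

    # Categorical commitments: anything outside these bounds is simply ruled out,
    # not weighed probabilistically against every other car in the world.
    acceptable = [c for c in candidates if c["price"] <= 8000 and c["miles"] <= 100000]

    # Limited, partial evaluation of the survivors: quality per dollar, nothing fancier.
    best = max(acceptable, key=lambda c: c["quality"] / c["price"])
    print("Commit to buying the", best["name"])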

Our routine use of categorical or mixed categorical and probabilistic practical reasoning need not bother the Bayesians very much.  They might say: of course we have to use practical shortcuts much of the time in our actual decision-making.  But this has no effect on the correct analysis of rationality itself.   They can still say that it is more rational ideally, absent any pressing need for practical results, to avoid commitments to empirical facts altogether, and stick with a completely flexible, probabilized understanding of the world.  Or, if you like, there is believing for the sake of acting, and there is believing for its own sake, or for epistemic goals alone.  We can be Bayesians about the second thing – what rationality would be if all we wanted was to believe the truth – while acknowledging that active life tends to distort and simplify our otherwise most rational beliefs.  So in an ideal, pure epistemology we would have no categorical  empirical beliefs at all, but in a practical epistemology we have no choice but to make up our minds about at least some matters of fact.

I do not mean here just to distinguish between the life of a dedicated intellectual and the life of a practical person in the usual way.  It is not enough for pure, Bayesian rationality just to avoid the brusque sort of decision-making of a businessman or general.  Even professional philosophers frequently have to make decisions that depend on fairly fixed beliefs: e. g. which philosophical methods are most fruitful, which philosophers are most worth reading, and which ones can be safely overlooked, plus all sorts of judgments as to theory (not to mention judging students’ work, dealing with colleagues, and the like). 

Philosophers also engage professionally in discussion, which is an interesting special case of social interaction.  There are some types of belief that seem only to be possible for creatures who talk, argue, and explain.  If you are going to write an article or give a lecture, you simply can’t express all of your thoughts in hedged, tentative, or probabilistic form.  You have to say at least some things outright, or no one will listen to you.  You have to have opinions, or no one will argue with you.  In general, to have an opinion is, I think, to be disposed to say a certain thing within a vague but broad range of social situations – roughly, those in which disputes are tolerated and expected.  Where regular beliefs, as principles of action, may well be unconscious, it is hard to imagine how a person’s opinions could be other than transparent to the person himself.  Thus, a psychiatrist can correct you on the content of your own beliefs (e.g. as to why you never speak to your mother), but he cannot reasonably tell you that your opinions are anything other than the ones you think you have.  To state an opinion (sincerely) is conclusive evidence that you have that opinion.

Some opinions are more fixed than others.  The least fixed are mere conjectures – “I suppose (suggest, surmise) that p”.  The most fixed are what we call convictions – “I insist (will go to my grave proclaiming) that p”.  In between are most people’s opinions about most things about which they have opinions – “I believe (claim, think, state) that p”, or again, just plain “p”.[2]  For professional philosophers, and many other people who take arguments seriously, there seems to emerge a kind of double thought, in that we must take firm positions to be interesting, but we must be skeptical and careful to be competent.  A number of philosophers I know are both highly dogmatic when a political topic comes up in a bar, and quite skeptical about the same issues when they are raised explicitly as philosophical concerns.  I am like this myself sometimes, saying things that I don’t usually mean (although it feels at the moment like I mean them), just because there is an argument going on, and I don’t want to be left out. 

Having opinions gives you things to say, which is a practical advantage.  But if you really don’t care whether people listen to you or not, or whether you contribute to public arguments or not, or what happens to you in general, this gives you a kind of rational advantage.  It permits you, ideally, to experience the world in all of its complexity, uncertainty, and vagueness, without ever making judgments or forming opinions.  Perhaps some monks - not active intellectuals, but Zen types or mystics – actually live like this.  For the rest of us, the demands of life, even intellectual life, and especially discursive social life, often require the simplified, efficient forms of reasoning that are only possible if we make up our minds about facts.

 

3. The reasonable person.

Perhaps I am being unfair to the Buddhists, and the ancient Pyrrhonian skeptics who took a similar view of these things.  One does not need to go all the way with such anti-judgmental positions.  You don’t have to renounce practical life completely in order to learn a lot from these traditions.  They can also be considered as a kind of corrective to the common species of overcommitment known as dogmatism.  The requirements of action being what they are, most of us tend to make our minds up too quickly and too firmly, at least much of the time, and in the long run this can get us into trouble.  The skeptics, the Buddhists, and the Bayesians remind us (if we are paying attention) that we really do not know that our empirical judgments are true – because we cannot know with certainty anything of the sort – even if we must of practical necessity employ such judgments all the time as principles of action.  So don’t overdo it.  Don’t be a jerk or a fanatic, just because you had to make your mind up at some point to get things done.  Calm down, abstract yourself, and reevaluate your fixed beliefs from time to time.  Don’t let your categorical reasoning outpace or overwhelm the Bayesian machinery that underlies it, or you can multiply your errors without limit.

            There is a specific kind of fallacy involved in the detachment of categorical belief from probabilistic evidence, which I will call a fallacy of judgment.  One version takes as premises any number of propositions that are probably (but not certainly) true, commits to them as categorical facts, and draws a conclusion from them which is entirely valid in propositional logic, but not valid in the underlying probability calculus.  For a simple example:

 

probably p                   =>        p

probably q                   =>        q

probably (p & q)          =>      (p & q)

 

Suppose we assign a probability of .7 to each of p and q, and suppose the two propositions are probabilistically independent.  Since this obviously makes each of them more likely than not, we might be justified in taking each of them as true.  According to the categorical syllogism on the right, we may then validly infer that (p & q) is also true.  But when we consider the full information given in probabilistic form, we discover that the derived probability for (p & q) is only .7 × .7 = .49, so it is probably false.[3]
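In code, with the same numbers and the same independence assumption, the divergence between the two routes is only a few lines:

    # Two propositions, each judged .7 probable, and assumed independent.
    prob_p, prob_q = 0.7, 0.7

    # Categorical route: commit to each, then conjoin by propositional logic.
    p, q = prob_p > 0.5, prob_q > 0.5
    print("categorical inference: (p & q) is", p and q)        # True

    # Probabilistic route: carry the numbers instead of the commitments.
    prob_p_and_q = round(prob_p * prob_q, 2)                   # 0.49 under independence
    print("derived probability of (p & q):", prob_p_and_q)     # probably false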

            Whenever it is practically possible, then, we should preserve the original, probabilistic information that we possess, in order to avoid the fallacies that spring from making chains of otherwise plausible judgments.  This is, again, what professional gamblers do.  They do not say: “My opponent took three cards, therefore he probably has a bad hand.  I have three sixes, which is usually a winning hand.  So I have a good hand and he has a bad one.  He bets a hundred dollars, but he has a bad hand, so he is bluffing.  A good hand beats a bad hand, so I should see his bet and raise by everything I have.”  This is a recipe for disaster in a game like poker.  Instead, the fully rational gambler performs all of his calculations at the probabilistic level, avoiding any categorical inferences, and making commitments only as required by the game. 

            But again, this Bayesian approach isn’t always possible.  There are circumstances that force us to forget about our basal uncertainties, and commit ourselves implicitly to the truth of a proposition.  Getting married is perhaps a case like this, if we can view it as committing to a proposition, such as that your prospective spouse is an appropriate or virtuous person, or that you will be happy when you’re married.  As it turns out, at least most of the time, you really can’t be very happy in a marriage without making this kind of epistemic (as well as legal) commitment.  You have to will yourself, in William James’s term, to believe certain things.  The most discussed case of such willful credence, and the case that James himself cared most about, was belief in God.  If you are not lucky enough to have a lot of faith in God initially, you have to make yourself believe in him.  Otherwise, you will lose any hope of gaining the moral strength and psychological contentment that Christian belief seems plainly to confer on its adherents.  Not to mention hope of living an eternal life, which some people think would be a good thing in itself.  For James, the big point is coming to believe whatever the truth happens to be, not just avoiding error by withholding judgment. 

            Of course, it isn’t often very easy just to make yourself believe something.  If someone offered me a million dollars simply for believing that I have six fingers on my right hand, I’m not sure how I would go about cashing in on that proposal.  This is the problem, or one problem, with Pascal’s famous wager on the existence of God.  James acknowledges this problem, but he thinks there is a class of open questions (roughly, those regarding propositions that fall within a vague, middling range of subjective probabilities) where we can nudge ourselves successfully one way or the other.[4] 

            Nietzsche takes this idea a step further, arguing that moral virtue requires not just a Will to Believe, but occasionally a “Will to Stupidity” as well – the more so, as our projects and responsibilities require effort over longer periods.  It is not enough for a successful marriage that you make up your mind just one time, for as long as it takes to go through with the wedding.  It also requires an ongoing resistance to changing your mind once you are married.  It doesn’t work to say, if we’re unhappy we’ll just get divorced.  It doesn’t work to say, I’ll keep my eyes out for somebody better.  It doesn’t work to say, I’ll do everything I can to make sure that my wife is not cheating on me.  You have to not think about such things, you have to be stupid about such things, if you want to be a good and happy husband.  Similarly, and more to Nietzsche’s point, you have to be stupid in this way if you are going to exemplify such ancient virtues as courage and magnificence.  Achilles is a greater, not a lesser, warrior because he hates his enemies, and won’t listen to reason from them.  There’s a wonderful passage in the Iliad where Hector, just before their final combat, tries to get Achilles to agree that the survivor will accord his victim’s body some respect, and Achilles shouts at him that this attempt at reason during battle is “unforgivable”.  At the end, it is not from any argument, but only because Hector’s father Priam has reminded him of his love for his own aged father (another thoughtlessly noble trait) that Achilles finally returns to the old king the body of his slaughtered, desecrated son.

            The Will to Stupidity carries with it a much greater than usual risk of catastrophic error.  The good husband is not just surprised but shattered when he is finally convinced, against all of his efforts not to learn, that his wife has been cheating on him.  To make a commitment to believe, and then to be forced to change your mind by overwhelming evidence, is in general a much more wrenching experience than any mere change in subjective probabilities.  The good warrior does bad things better – more effectively – if he turns out to be fighting on the wrong side, and has more to regret in strictly moral terms, the more committed he had been to the false cause.  It is interesting, though, how much respect we often retain for someone (Antigone, say, as distinct from her sister Ismene) who committed herself wrongly, and did great damage with a noble heart. 

            Sometimes it is arguably virtuous, then, to be more rigid in belief than our nature would otherwise have it.  But more often, in my view, it is better to be flexible.  The instinct to believe (commit, take sides, make up your mind) is sometimes very strong, and has to be resisted through a force of will as strong as any warrior’s.  This will not to believe (or, if you like, the Will to Skepticism) is, as we have seen, the key to professional gambling.  It is because it is so psychologically unnatural to withhold judgment systematically that only a few people are real masters of poker. 

This is the key to much of scientific reasoning as well: to avoid categorical judgment whenever possible, and to keep track conscientiously of all of one’s empirical data.  Most doctors tend to be wise in this regard, never letting themselves forget the probabilistic and other uncertainties involved in diagnosis and treatment.  But this epistemic attitude is not just expected of them - it is specifically trained into them.  Social scientists also pride themselves on sticking with the data and making only statistical inferences, although their work is often quite constrained by categorical judgment in other ways, regarding what hypotheses are interesting, plausible, or otherwise worth exploration.  (For example, many contemporary social scientists reject statistical investigation into intelligence along the lines of The Bell Curve, not fundamentally because the numbers are bad, but rather out of a prior, categorical commitment to social equality.  This is for them a matter not of scientific rationality, but of moral decency.) 

            The key to being a dogmatist or fanatic is just the opposite.  To make up your mind whenever possible.  To rely on categorical syllogisms for all of your reasoning.  To forget all the uncertainties of your initial evidence, and stick to your guns in the face of any new, countervailing information that shows up – as if it were always better to persist in believing what is probably false than to go back on your convictions.  I suppose that any example would be controversial, but the great pop-calypso singer Harry Belafonte has apparently turned out to be a dogmatist of this sort – so committed to the proposition that America is absolutely racist, and therefore that no black person can possibly succeed on merit here, that he has recently denounced both Colin Powell and Condoleezza Rice as nothing more than “house slaves”.  (Or feel free to substitute Jerry Falwell, Paul Begala, or whoever your favorite dogmatist happens to be at the moment).  For such a person, initially derivative facts become foundational, as he is forced by his commitments to construct a running model of the world consistent with these fixed points of belief.

            Falling in between the Zen types and the dogmatists are all the reasonable, normal, active people - some more hesitant to make their minds up as to facts (or “skeptical”, in the most common meaning of the term), some more decisive.  To be a reasonable person is to fit your epistemic commitments as tightly as you can both to your total evidence and to your total need for action.  This is often really difficult – more so than people usually imagine.  It is not just that our evidence is often a big mess, but our responsibilities can also be conflicting and complex, and this complexity can interfere with reasonable maintenance of belief.  Even a scientist is sometimes faced with conflicts between his professional responsibilities and his responsibilities as a citizen or human being.  For example, what should a scientist believe about global warming?  Some people say that it’s an urgent problem, others that it has been blown way out of proportion.  It seems to matter what we think, and that we come up with a categorical belief – if one side of the debate is right, then we don’t have the time to think exclusively like scientists about this issue, and wait for better information to develop before seriously acting.  But if the other side is right, then that is exactly what we ought to do.  And we do not really know, in advance of further research and analysis, which side is actually right.  So what do we do?  There is no general solution to this kind of problem.  We cannot stop time; inaction is a kind of action when decisions are required by circumstances.[5]   In such a case, equally reasonable people are liable to make opposite, fixed judgments, based on arrays of subjective probabilities that are initially not very different.  To the extent that this propels them to join different communities of thought (such as opposed religious sects, political parties, or even armies), such forced commitments can lead ultimately to mutual distrust, incomprehension, and even hatred between people who are initially very much alike. 

Beliefs about the war in Vietnam are, I suppose, the classic case for people of my college generation.  You either volunteered to fight or not.  You either resisted the draft or not.  You either joined the “rev” on campus or you didn’t.  There was not a lot of time to think in those days.  You just had to make your mind up quickly, on the basis of (what I still think is) grossly inadequate evidence and analysis as to the causes, progression, and likely results of that war, the nature of communism, the history, culture, and economy of Southeast Asia, the morality of violence in general, and many other relevant issues.  Of course, most students had virtually no first-order evidence about any of this stuff, and scant capacity for useful independent analysis.  Their decisions on what “stand” to take on the war depended more on their prior estimates of the reliability of other people’s judgments, especially those of parents, friends, and teachers.  We “came out” against the war, in most cases, because the people that we most respected had already done so.   Those who respected us did likewise, and the result was not an aggregate of separate empirical or moral judgments, but a political and epistemic movement.

One important effect of this mass-scale rush to judgment on campuses, peaking with the national strike in 1970, was that thoughtfulness and epistemic self-reliance lost a lot of their prestige for college students, and another virtue, not exactly intellectual, took its place.  It was called “commitment”.  For the following five years or so, on lots of campuses and in much of the national media, it was as if epistemic rigidity were a good thing in itself.  [note tons of movies – A Man for all Seasons, Cool Hand Luke, Becket, Serpico, Z] The more closed-mindedly we stuck to our convictions, the more of our actions we were willing to base on such convictions, and the more striking and radical these actions were (short of murder), the more praise we would receive from writers, teachers, and college administrators, including those whose offices we stormed and occupied. 

Thirty years later, some long-time faculty bemoan the loss of this alleged virtue of commitment.  They take the lack of frequent, raucous campus demonstrations as a sign of moral apathy.   I don’t.  I think that current students are not less moral than we were - if anything, I find them more concerned than we used to be with issues of genuine morality, as distinct from mere politics.  I also find these students, on the whole, more independent, and, in consequence, more reasonable.  They are mostly well aware that there are all kinds of serious issues to think about, and that we are living in a dangerous world.  It’s just not very clear to most of them what the right course of action is, so they are often hesitant to make their minds up, for example concerning the pending war against Iraq.  The first-order evidence on this and other major issues is now (as I think it always was) inconclusive, but there is also much more diversity of opinion among our students’ friends than there was for most of us in 1970, so they are under that much less evidential pressure to make up their minds.  Also, there is no military draft right now to force such issues as dramatically as it once did.  Where there is still great unanimity of thought, as at Wellesley or Brown, the old radical spirit of campus commitment seems to survive and even flourish.  But at a place like Geneseo, as at most colleges these days, there is no uniform opinion on such public controversies as the war in Iraq or welfare that all the faculty and everybody’s friends are constantly voicing (though there is considerable agreement on, say, gay rights and a vague environmentalism), and no really urgent need to form one.  Hence, the total evidential situation on military (and most economic) matters is, for most of our students, more mixed than it used to be for us.  Their subjective probability for the truth of military and connected political propositions remains fairly low, as does the expected value for them of taking any particular related action, especially a violent one.  This is not a matter of their having more inherent virtue than we did, but just a function of the changes in their socioepistemic environment.  Only the most dogmatic personalities among our students will commit themselves to anything important on such issues as the coming war, given their inconclusive total evidence.  The rest have no responsible alternative but to withhold judgment on these matters and keep studying, thinking, and arguing until their total evidence improves, or until the need for some kind of action overtakes them, and they are forced to choose what to believe. 

 

 



[1] Actually, the notion of a degree of belief is best viewed as a little broader than that of probability.  It should also comprehend non-categorical beliefs about facts of intermediate degree.  I think that Buffalo is a pretty large city, so I would probably say that it is a large city if asked in just those terms.  But I would not be saying that it is probably a large city (as if there were some discrete chance that it was just a little one).  What is unclear is not how big a city Buffalo is, but only whether that size ought to count as large.  I think that this sort of metaphysical or semantic vagueness gets mushed in psychologically with the epistemic kind of vagueness that is properly called probability.  I will probably mostly ignore this complication in what follows.

 

[2] Daniel Dennett claims, in effect, that opinion and judgment are the same thing, that the only reason for us ever to stray beyond Bayesian calculations is that we have, and are disposed to use, natural language.  I think this is wrong.  I think that dogs make categorical judgments – that squirrel went up this tree – but have no opinions. [cite]

[3] Here is the semantic or metaphysical analogue:

mostly p                                  =>        p

mostly q                                  =>        q

mostly (p&q)                           =>        (p & q)

If your house is mostly brick, and mostly white, this does not make it mostly white brick.  Or suppose that most people die before the age of 72, and also that most people die after the age of 70.  This does not entail that most people die at the age of 71. 

 

[4] Carl Ginet has a pretty clear example of a common sort of voluntary belief: you have taken off on a car trip, and begin to wonder whether you left the oven on at home – but you decide to forget about it, and to believe categorically that everything is fine at home, just so you can enjoy your vacation without worrying about it.  This seems to work. [cite]

[5] So George McClellan, for example, while he headed the main Northern army at the beginning of the Civil War, had no choice but to commit himself to one consequential course of action or another, hard as he tried to do nothing while diplomacy held any hope at all.  When Lincoln judged that his inaction was only allowing the South to gather ever greater strength, he removed McClellan, and ultimately gave overall command of the Union armies to the much more decisive Grant.