Final version: Anthropology and Philosophy 11 (2014-2015): 77-94

 

OTHER MINDS AND THE ORIGINS OF CONSCIOUSNESS[1]

Ted Everett

SUNY Geneseo

 

          Why does consciousness exist?  What is it good for?  Is there anything useful that consciousness enables us to do which could not be done by an unconscious robot, sleepwalker, or "zombie"?  If so, what is that function?   If not, how could a useless capacity like this ever have evolved as a trait among creatures like ourselves?  In this paper, I offer a conception of consciousness as a kind of "higher-order perception" along lines recently explored by Peter Carruthers, and argue that introspective consciousness probably co-evolved as a "spandrel" or by-product of our more useful ability to represent the mental states of other people.  The first part of the paper defines and motivates my conception of consciousness as the representation of representation as representation in response to standard problems about consciousness.  The second part explains the basic socioepistemic function of consciousness and suggests an evolutionary pathway to this cognitive capacity that begins with predators and prey.  The third part discusses the relevance of these considerations to the traditional problem of other minds.

 

1. What consciousness is

          Consider an ordinary conscious state like pain.  For mainstream functionalists in the philosophy of mind, pain is to be understood as whatever internal state of an organism has the function of producing appropriate behavior, such as calling for help, in response to bodily distress.  Thus, pain is fundamentally a state of alarm, and could in principle take all kinds of different physical forms.  Each particular instance of pain will still be a physical state, but there need be no identities between physical and mental states as general types.  The problem with this approach, as applied to conscious mental states, is that it seems to leave out the subjective experience of these states, the qualia or "raw feels" for the being that has them.  Pain is like this; it feels this way to me.  So, it may be easy to identify pain in humans with something like c-fiber firing in the brain, and to imagine the nerve impulses running from a burned hand into the brain that set such firings off, and then back down into the muscles of the arm, pulling the hand off the hot stove.  But the "paininess" of pain, its feeling like this, seems to have no additional role in such an objective account.  Nevertheless, the feeling of pain is a real thing, a fact in the universe, and is surely at least part of what we mean when we use the word "pain".  So it looks like functionalism, however much it has to say about mentality in general, gives us no ready way of understanding conscious experience. 

          One type of functionalist response to this problem comes from higher-order theories of consciousness, which define it as thinking about our thoughts (higher-order thought or HOT), perceiving our perceptions (higher-order perception or HOP), or something else with a similar, self-referential kind of structure.  The basic idea is that mental states are unconscious as long as their objects are entirely external to the mind, but become conscious when they focus inwardly in some way.  Thus, even a mild itch or pain might be unconsciously experienced (or, rather, not experienced) until one notices that one is in that state, at which point it pops into consciousness – and this is because the noticing itself, the thinking about or the perception of the itchy feeling, is what constitutes its being conscious.  Peter Carruthers (2005) has recently developed an appealing HOP theory that he calls "dual content" theory.  I want to offer a similar analysis of consciousness in terms that seem more convenient to me, and that I am inclined to call the "double-vision" theory. 

          First, I want to say a few things about the preliminary concepts of "seeing as" and mental representation.  In ordinary perception, we don't just see things; we see things as particular things, or things of a certain type.  Here is Wittgenstein's famous example of the duck-rabbit:

We can see this figure initially either as a duck or as a rabbit, and we can also switch back and forth between seeing the rabbit and seeing the duck at will, once we have seen it both ways.

          In everyday experience, we see things through familiar categories, and it makes sense that we should be inclined to do so, rather than always troubling to infer what we are looking at from raw sense-data.  But sometimes we are also able to perceive things the other way, simply as how they look to us.  So, you might look at the duck/rabbit diagram as neither a duck nor a rabbit, but just as a squiggly shape and a dot.  This is how we try to look at things when drawing them, and it turns out that this is often difficult to do well, particularly with things that we find really important, like faces.  Here, for example, on the left, we see a fairly typical adult's drawing of her own face, while on the right we see another self-portrait, made after a few lessons in drawing.[2] 

   

Non-artists tend to draw people's heads as flattened on the top, as on the left above, evidently because our interpretive theory of the face properly includes a layer of forehead above the eyebrows, topped by a layer of hair, but attributes no real meaning to the visual proportions involved.  In order to draw faces correctly, we need to stop seeing them in this instinctively interpreted way, and attend only to what they look like: shapes, shadows, and colors in our visual field.  This is a choice we almost always have in conscious visual perception: we can see things as whatever we think they are, or we can just see what they look like to us, independently of what they are.  The same goes for hearing and the other senses; we can sense things through interpretations, as we usually do when listening to language we can understand, or we can sense them merely as sensations, as we might do when listening to someone speaking in an unfamiliar language. 

          The next preliminary concept is mental representation.  Minimally, one thing represents another if the first functions as a proxy for the second in some kinds of situations.  In the simplest cases, the representation is similar in some appropriate way to the thing that's being represented.  But it is not enough for representation that things be similar – the representation must also have the function of standing for the thing it represents.  So, in the drawings above, the pictures represent the artist both by looking like that person (more or less, according to her skill), and by having the function of showing viewers what she looks like.  Specifically mental representations are models (symbolic or pictorial) of objects or features of the world that are encoded somehow in a mind.  I think that consciousness can be given an intuitive functional analysis in terms of these familiar notions of "seeing as" and mental representation – something precise enough to account for the existence of qualia, but abstract enough to apply potentially to robots or aliens as well as human beings. 

          Let me define five levels of mentality in these terms, by way of building up to consciousness.  The first is sensitivity, by which I mean the mere capacity for responding to environmental cues in some way:

Thermometers are sensitive in this way, with the mercury level going up or down depending on the ambient temperature.  Ants are clearly sensitive in more complex ways, responding to all kinds of environmental signals, including pheromones, in their performance of their integrated social functions.  But they still probably lack something that many other animals have, namely perception:

 

 

          Perceiving something is more than just responding to it in a functional way.  It means forming a mental representation of the thing being sensed.  Perception produces an idea or percept in the perceiver.  The function of perception, as distinct from mere sensation, is to make memory and learning possible.  Perceptive creatures are able to store representations that permit them to recognize features of their environment that they have already encountered, and to respond to these features according to something like empirical inference, rather than pure instinct.  Thus, rats learn how to get through mazes by constructing internal maps of ones that they have already been through, and this is surely more effective in real life than just having some finite set of instinctive maze-solving algorithms.  The same goes for machines like the two rovers that were recently sent to Mars to drive themselves around and map the planet.  Such machines can sense things with their cameras, then construct or alter internal models of their environments according to their programming while they move about.  We do not, however, suppose that such mechanical devices are conscious.
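
The contrast between instinct and learned representation can even be put in quasi-computational terms.  Here is a minimal sketch (in Python; the maze encoding, the names, and the search procedure are purely illustrative inventions, not drawn from the paper or from any literature on animal cognition) of what navigating by empirical inference over a stored internal map, rather than by a fixed procedure, might amount to:

```python
# A toy sketch: route-finding by inference over a stored internal map,
# as opposed to a fixed instinctive rule. All details are invented.

from collections import deque

def solve_with_map(maze_map, start, goal):
    """Breadth-first search over an internal map built up from past exploration."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        here = frontier.popleft()
        if here == goal:
            path = []
            while here is not None:
                path.append(here)
                here = came_from[here]
            return path[::-1]
        for neighbor in maze_map.get(here, []):
            if neighbor not in came_from:
                came_from[neighbor] = here
                frontier.append(neighbor)
    return None  # the stored representation contains no route to the goal

# An internal map accumulated from earlier runs (junctions and their connections):
learned_map = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
               "D": ["B", "E"], "E": ["D"]}
print(solve_with_map(learned_map, "A", "E"))   # -> ['A', 'B', 'D', 'E']
```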

          A third functional step in this ideal construction of consciousness is the capacity for self-representation, beginning with bodies.  The ability of a mind to represent its body as a part of its environment can be called external self-perception:

          Here is a person who is able to represent both an outside object and his own body as elements of his environment.  This capacity will make it easier for him to avoid objects that are in his way, to gauge where he can and cannot go, and to perform many other useful tasks.  But something like a Mars rover could easily represent its own position and orientation relative to its own internal maps, and we still wouldn’t call it conscious.  We ourselves can walk through crowds without consciously perceiving how close we are to everybody else, and still not bump into anybody; this can even be done when we are sleepwalking.  So, external self-perception by itself is also not enough for consciousness. 

          Some things are capable of perceiving not just their external environments and their own bodies, but also some of their own internal states.  We can call this internal self-perception:

          Here the person is representing the external object in relation to his own body, and he is also representing a modest itch that he feels on the back of his wrist.  Here, I don't mean the qualitative feeling of an itch, the conscious feeling of an itch, but just the internal neurological signals that get us to scratch at minor irritations to our skin.  David Armstrong (1980) says that consciousness is a "self-scanning mechanism of the central nervous system".  But if he means only this simple sort of internal self-perception, then it should still not be counted as consciousness.  For ordinary laptop computers have all sorts of internal self-scanning mechanisms – that is, they internally represent their own internal states – but we don't think of them as even coming close to consciousness.  Most of our itches and other self-scanned internal states must be represented unconsciously in our mental models of our bodies, or we would be too distracted by such things to think clearly about the things that really matter to us.[3]  Such internal self-perceptions can be made conscious – that is what it is to notice them – but they are not necessarily conscious, so this cannot be used as a definition of consciousness itself.  

          One fairly common metaphor for consciousness is "having the lights on".[4]  And what does having the lights on imply, other than that one can see something?  So, if consciousness is having the lights on, this presumably means being able to perceive something that we can't perceive if we're unconscious.  But what sort of thing could that be?  It cannot just be internal states like itches, in the sense of neural impulses indicating irritations to the skin, because we can perceive those things unconsciously.  What are the things that can only be perceived consciously?  I think they are not really a different class of objects from those that we typically perceive unconsciously, but rather a certain special property that any object of perception has: namely, what it seems like to us.  When we learn how to draw, we learn to see things not just as the things they are (there goes a car, here are my shoes, and so on), but also as the way they look to us, which is to say, how we are representing them.  We don't just see through our representations to the things they represent, as we unconsciously perceive a stop sign.  And we don't just use our representations, as a robot mouse will use its map of a maze to find its robot cheese.  Instead, in conscious perception we are able to perceive our representations as representations, as the way things seem to us.  And this, in turn, is to say no more than that in consciousness we represent our representations to ourselves as representations, like this:

          Here, the conscious person is still representing the external object we began with, and he's still representing himself as a body, but now he is also representing his representation of the object, as his representation of the object.  I claim that consciousness is this very capacity.  We perceive objects in the world, and we represent them in our minds.  We also perceive these representations themselves, as secondary objects.  This is the general semantic category of a conscious state: a representation of a representation.  What makes it a conscious state, though, is the capacity we have to represent it as a representation, as if to say to ourselves: "this is what it looks like to me".  We don't always have to be saying that in order to be conscious, but we always have to be able to say that.  So, consciousness in general is the capacity for representation of representation as representation.  A conscious state is then a representation of a representation, representable as a representation.  
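
Nothing in the argument depends on formal notation, but the layered definition just given can be summarized schematically.  The following toy sketch (in Python; every class name and field is an invented illustration, not the author's formalism) displays the structural difference between a mere percept and a conscious state: the conscious state is a representation whose object is itself a representation, seen as a representation:

```python
# A toy sketch of the five levels of mentality. All names are illustrative
# inventions, not the author's formalism.

from dataclasses import dataclass

@dataclass
class Representation:
    """A mental model that stands proxy for some object."""
    object_represented: object   # what the representation is of
    seen_as: str                 # the category it is seen under ("duck", "rabbit", ...)

# Level 1: sensitivity -- bare responsiveness, with nothing represented at all.
def thermometer(ambient_temperature: float) -> float:
    return 0.1 * ambient_temperature      # mercury tracks temperature; no model is formed

# Level 2: perception -- a stored representation of an external thing.
percept = Representation(object_represented="the external object", seen_as="a cylinder")

# Levels 3 and 4: external and internal self-perception -- the body and some of
# its inner states become further represented objects within the model.
body_percept = Representation(object_represented="my body", seen_as="a body, located here")
itch_signal  = Representation(object_represented="irritation on the wrist", seen_as="scratch here")

# Level 5: consciousness -- a representation OF a representation, represented
# AS a representation ("this is what it looks like to me").
conscious_state = Representation(
    object_represented=percept,                # its object is itself a representation...
    seen_as="my representation of the object"  # ...and it is seen as such
)

assert isinstance(conscious_state.object_represented, Representation)
```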

          Think about conscious perceptions of color in particular.  What is color?  We know that it is not a primary property of physical objects, but rather a function of the reflectivity of different surfaces together with our sensitivities to certain wavelengths of light.  An alien species might have no experience of color as qualia at all, any more than we have qualitative experience of x-rays, though they might still understand color objectively in terms of light and optics in the same way that we understand the physical properties of x-rays.  So, think about it this way: when we are looking at the color of something – not at the thing as such, but just examining its color consciously – we must be perceiving a quality only of our representation of the thing, because the thing in itself doesn't even have such a quality.  Our mental representations of physical objects might be said to have the same shape in our visual fields as the objects really do in space, in the same way that the shapes within a photograph literally match the shapes of the corresponding objects in the real world: shapes representing shapes by way of isomorphism.  But the "filling" within these shapes is utterly different as between a physical object and its mental or photographic representation.  In the real world, outlines are filled with matter, but in the phenomenal world they're filled with color instead.  Color is in our mental models just a very rough proxy for matter, something that gives us at least some indication of the type of matter that fills up the boundaries of the object in physical space.  Thus, a tree doesn't really have qualitative brown as its actual surface – it has bark.  The "browniness" or other color-qualia of our representations of such things as trees is just the introspective analogue for such things as bark, existing as an inherent property only of representations.  So, when you're looking around like an artist, paying attention only to colors as such, that is, when you are conscious of the colors, you can only be directly perceiving (hence, representing) your own representations. 

         

2. How consciousness evolves

          Why, then, does consciousness exist?  What good does it do us to represent our representations to ourselves as our own representations – that is, to perceive how things seem to us as well as how they are?  Why should we ever have evolved such a capacity?  It seems that getting around in the world, feeding ourselves, and so on, does not depend on being conscious in this way, just on being sufficiently perceptive of ourselves and our surroundings.  Sleepwalkers can go downstairs and make themselves a sandwich without waking up.  And mindless machines can increasingly imitate that sort of perception-driven activity.  So, why couldn't we all just be "zombies" all the time?

          Some philosophers, including epiphenomenalists like Chalmers (1995), claim that consciousness has no function at all.  But I think that consciousness has a whole cluster of functions that are essential to our lives, and that a recognizable humanity could never have evolved without it.  Fundamentally, I think that we have consciousness, this ability to represent the way things seem to us as such, in order that we can also represent the way things seem to other people.  This is a terribly useful cognitive device, because our own, individual perceptions are both highly fallible and severely restricted by our own small spatiotemporal extension.  Being able to represent other people’s representations alongside our own, and to resolve the differences that may occur between them, allows us to multiply our own perceptive powers by as many people as we can effectively communicate with.  Moreover, the same capacity allows us to correct some of our misperceptions by perceiving that the perceptions of others differ from ours.

          Here is a very simple example of people looking at the same thing, but its seeming different to them, and their resolving that difference:

Here, the person on the left thinks that the object between them is rectangular, and the person on the right sees it as circular.  Next, they communicate with each other, by saying how things seem to them as individuals:

Now, by having disagreed in a way that each can perceive, they can become aware of the fact that they disagree: 

Provided that each sees the other as having a mind, each can now represent the other's mental representation of the thing itself, based on the other's testimony, alongside their own different representation of the object.  On my view, this sort of representational situation cannot exist without consciousness, because in order for such comparisons to take place each person must represent his or her own representations to him- or herself as representations: 

          Both people now realize that something is wrong; one or both of their representations of the object has to be faulty, or at least incomplete.  So, there’s a puzzle here, for each of them to solve, before they can be fully confident in their own representations: why does the object look this way to me, and that way to the other person?  In this case, the puzzle has a simple solution.  With the advantage of two points of view and elementary geometry, each can now easily surmise that the object is something that really does look rectangular from one point of view and circular from another, namely, a cylinder: 

Now they can inform each other of their new, synthetic representations:

...ultimately realizing that they now see the object the same way:

Now they can go on exploring and figuring things out together as a team, each able to use the other's eyes at greater distances, given their ability to represent each other’s representations through communication and the double-visual imagination that allows them to connect or to compare their own representations to those of their partner.
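
The little drama above is, at bottom, an exercise in abduction: find the one object that is consistent with both reported appearances at once.  A minimal sketch of that inference (in Python; the catalogue of solids and their projections is an invented toy, and real cases would of course be far less tidy):

```python
# Each observer contributes one projection; the shared hypothesis is whichever
# solid in the (invented) catalogue is consistent with both reports at once.

PROJECTIONS = {
    "sphere":   {"side": "circle",    "top": "circle"},
    "box":      {"side": "rectangle", "top": "rectangle"},
    "cylinder": {"side": "rectangle", "top": "circle"},
}

def resolve(report_a, report_b):
    """Solids consistent with one observer seeing report_a and the other,
    at right angles, seeing report_b (in either assignment of viewpoints)."""
    return [solid for solid, views in PROJECTIONS.items()
            if {report_a, report_b} == {views["side"], views["top"]}]

print(resolve("rectangle", "circle"))   # -> ['cylinder']
```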

This is what I think consciousness does for us: it allows us to see things from other points of view than our own.  A sleepwalker can make himself a sandwich while reciting Ozymandias, but he cannot learn from other people in the way that he can when he is conscious, because he lacks, for the moment, the capacity to understand that other people see things differently: he cannot, while sleeping, represent his own or anyone else’s present representations to himself as representations.  A zombie can perceive things, which is enough to get around and kill people and eat their flesh, but he cannot perceive how things seem, to himself or anybody else.  If he suddenly could, then he would no longer be a zombie, because he would have regained consciousness.  Therefore, nothing can be a zombie, in the philosophers' sense of an unconscious thing that is completely equivalent in function to a conscious person.[5] 

What consciousness is, then, is an essentially social kind of cognitive capacity, one that allows each of us to learn from others, and so to develop a more comprehensive and reliable mental model of the world than we could ever construct all by ourselves.  Consciousness seems hard to understand because it doesn’t really have much of a function when considered only as an individual capacity.  It is hard to imagine solitary, cognitively isolated creatures ever evolving consciousness, because they wouldn't get anything much out of it.  In fact, such creatures would probably be better off as zombies, since this would save them the energy required to operate the intricate, two-tiered perceptive and communicative system that makes consciousness possible. 

          So, where did this complex cognitive ability come from?  How did it evolve?  I suspect that the beginnings of consciousness emerged out of an evolutionary "arms race" between predators and prey.  The standard example of an arms race is cheetahs and gazelles.  In their early forms, cheetahs were built like ordinary cats and gazelles like ordinary ruminants; the cheetahs chased the gazelles and sometimes caught them, and sometimes the gazelles got away, and this would seem to have been a relatively stable pattern.  But there is a problem that propels further evolution on both sides: any cheetah that was a little faster than his fellows had a better chance of eating, and any gazelle that was a little faster and springier than his companions had a better chance of getting away.  So, over time, through natural selection, cheetahs became faster and faster and developed the sleek bodies and huge lungs which are only good for high-speed, short-term chasing, and the gazelles developed corresponding speed and long springy legs that are not much good for anything except evading cheetahs.  And there are corresponding changes in behavior on both sides.  Gazelles stop running in a straight line and develop random-seeming zigzag patterns of escape, because the harder they are for cheetahs to anticipate, the more likely they are to get away and live to reproduce.  Over the same time, cheetahs develop a heightened ability to make quick turns while running in response to their perceptions of the gazelles.  On top of this, gazelles develop an enhanced ability to notice cheetahs in the distance, and cheetahs develop an enhanced ability to sneak up on gazelles. 

          Now, what should we suppose is going on cognitively inside these creatures as their arms race develops?  We might imagine that, at an early evolutionary stage, they are all just Cartesian "machines", perceptive but unaware of their perceptions as the way things seem to them.  We can see them as self-perceptive, too, at least externally: the cheetah has to know where it is geographically with respect to a gazelle if it has any hope of sneaking up on it, and correspondingly for the gazelle's getting away.  Moreover, a successful cheetah or gazelle might well have the cognitive ability to anticipate more than one behavior from the other, running alternative unconscious representations of possible tactical responses from different positions and velocities within its mental picture of the local environment.  Still, we might imagine that their subsequent behavioral choices are all based on instinct: the gazelle runs in some kind of programmed but random-seeming pattern, and the cheetah adjusts continuously until it either catches the gazelle or runs out of steam – all without conscious experience on either side. 

          Even at this point in the cognitive arms race between cheetahs and gazelles, there will still be intense selective pressure on both sides to be a little better at their jobs, since every meter's distance could well make the difference between life and death for each of them.  And if we imagine two cheetahs, one that's perceptive and capable of multiple alternative representations but unconscious, and a second, mutant cheetah with the further ability to represent itself from the gazelle's point of view, the second cheetah is likely to be somewhat more successful.  So, while it is sneaking up on a gazelle, the second cheetah pays particular attention to the gazelle's eyes so that it can represent to itself what the gazelle is seeing at the moment, and in particular whether the gazelle can see the cheetah, or could see the cheetah if the cheetah crept another meter closer.  This sort of calculation can doubtless be done crudely without consciousness; the first cheetah could just use instinctive responses to its own direct perceptions of the gazelle, its eyes included.  But the further mutation that allows the second cheetah to imagine the gazelle's perceptions surely adds some increment of probable success in stealthy hunting.  By the same token, a mutant gazelle with the ability to imagine what the cheetah sees will surely have a similar advantage in getting away over gazelles that operate only on physical perceptions, unconscious anticipations, and instincts.  
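
What the second cheetah computes, in effect, has a simple geometric core: does my position fall within the gazelle's field of view, and would it if I crept closer?  A crude sketch of that calculation (in Python; the field-of-view figure, the coordinates, and the function name are all invented for illustration):

```python
# A crude sketch of the "second cheetah's" extra representation: could the
# gazelle see me from here? The 300-degree field of view and all coordinates
# are invented for illustration.

import math

def visible_to(prey_pos, prey_heading_deg, predator_pos, fov_deg=300.0):
    """True if predator_pos falls inside the prey's field of view."""
    dx = predator_pos[0] - prey_pos[0]
    dy = predator_pos[1] - prey_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))              # direction from prey to predator
    offset = abs((bearing - prey_heading_deg + 180) % 360 - 180)
    return offset <= fov_deg / 2                             # within the seen arc?

gazelle, heading = (0.0, 0.0), 90.0      # gazelle at the origin, facing "north"
print(visible_to(gazelle, heading, (0.0, -20.0)))    # False: directly behind, in the blind spot
print(visible_to(gazelle, heading, (15.0, -15.0)))   # True: circling to the side exposes the stalker
```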

          I want to emphasize again that what is working here for each creature is its ability to represent to itself the other creature's representations.  Any ability to represent its own representations, hence to be fully conscious, is essentially a byproduct of this immediately useful cognitive capacity.  But once it exists, for whatever reason it first comes into being, it can develop further functions of its own.  So, let consciousness emerge initially out of predation under pressure from natural selection, which is a social function only in the minimal sense of competition.  But for genuinely social animals like, say, wolves or other canines, full consciousness will surely confer substantial further benefits by helping them coordinate their hunting in packs.  If each wolf in a pack that is hunting an elk can see what's going on from its own point of view, and can also imagine accurately the points of view of other members of its pack as they close in on their quarry so as to gain a more complete picture of the total situation, on top of imagining the elk's point of view as it looks for a way to escape, then this is likely to give that pack an advantage over packs in which the wolves are running only on instinct and individual perceptions and anticipations.  

          This may be all that consciousness was for our earliest human ancestors: an ability to multi-perspectivize while hunting and gathering.  We certainly still use this ability at the same basic level in a lot of activities, wherein people using simple shouts and signs, together with eye-contact and facial expressions, are sometimes able to act as if they were parts of one body.  For example, squads of soldiers who attack enemy bunkers in a coordinated way from several angles at once are surely more likely to survive, just as players in sports like basketball who can connect and reconcile their teammates' and opponents' physical points of view with their own are more likely to succeed.  Even in ordinary life, we all rely on similar abilities when we sit down together at dinner and pass the dishes, or when we take turns driving through four-way stops and roundabouts.

          This basic ability to see things as others see them has another obvious advantage in helping us take care of families, friends, and other associates, and especially our children.  Successful parents work hard at imagining how things seem to their children.  In doing so, in placing ourselves in their shoes, we are better able to imagine how they feel about these things, and this empathy motivates us in turn to help them resolve their intellectual and social problems far beyond the limits of instinctive sympathy.  This kind of cognitive connection with other people, not just perceptive but emotional, is clearly advantageous for groups of creatures like ourselves in a competitive and challenging environment, especially given that our offspring are unable for so many years to look out for themselves.  People who are especially good at seeing things from other people's points of view tend to become successful in a variety of situations, while people who can only see things their own way are liable to be left behind, all other things being equal.  Conceivably, this empathetic use of consciousness even arose first, before its practical, reciprocal uses in cooperative activities like hunting.  But it strikes me as most likely that empathy and cognitive cooperation evolved more or less in concert, both deriving ultimately from perceptive competition between predators and prey.

          This socially useful, cooperative and empathetic capacity makes possible the introspective kind of consciousness that has been taken by philosophers since Descartes as more fundamental.  When we ask ourselves what consciousness is, we naturally think of individual minds, as this is where qualitative consciousness resides.  So, when we seek to find its function, we are inclined to look within the individual for this as well.  But its most basic function is not there, within the individual as such – or perhaps it is better to say that purely introspective consciousness has no basic function at all.  Our ability to examine our ideas for their own sake, as it were, seems instead to be a "spandrel", a mere byproduct of our useful capacity for comparing and resolving differences between our perceptions and those of other people.  This more basic, as it were, altrospective consciousness entails a kind of double vision with respect to our own perceptions: seeing things as things in order to represent the world, and seeing things as how they appear from our own point of view in order to connect and reconcile our view of things with those of others. 

          Still, when other people stop communicating with us, this does not shut the lights off in our own heads.  We can still perceive our own thoughts and manipulate and question them, just as if they were the thoughts of other people.  Does this left-over, introspective form of consciousness serve any function for us?  Not in the fundamental way that altrospective consciousness does – that is, it does not extend our range of perceptions through access to other minds or quickly resolve different appearances from multiple points of view.  But it seems to have been "exapted" in ways that surely enhance our intellectual and social abilities.  For one thing, introspection allows us to consider as many possible alternative points of view as we like, for as long as we like, whether those other points of view are real or not.  It allows us to analyze and criticize our own standing perceptions and beliefs, regardless of real or hypothetical disagreement from others.  It also allows us to perceive at least some aspects of our own feelings, desires, and intentions more clearly and directly than can anybody else, and this self-knowledge makes us more empathetic and potentially more helpful with our fellows than we would be otherwise.  Finally, introspection permits us to live lives of an intellectual richness and depth far beyond what is needed for survival and practical cooperation, including my life and yours as philosophers.  And even if philosophy is just as useless in itself as most outsiders (and more than a few insiders) take it to be, no other field of useful research, including modern science, could prosper without the broad scope of imagination and internal dialectic that altrospective consciousness has permitted and that introspective consciousness provides.

 

3.  Our knowledge of other minds

          Language and consciousness have long been thought to be intrinsically related.  Descartes took the absence of language to demonstrate the absence of conscious thought in non-human animals.[6]  More recently, Donald Davidson, C. A. J. Coady, and Tyler Burge have all argued that language only makes sense as a means of communication if we presuppose that other people have the same sorts of linguistic intentions as ourselves, and that most of what we hear from other people is the truth.  And this is certainly hard to imagine without conceiving of our interlocutors as conscious beings.  As Burge puts it, "The relation between words and their subject matter and content is not an ordinary, natural, lawlike causal-explanatory relation. Crudely speaking, it involves a mind."[7]  This accords well with the story I have been telling of the evolution of the conscious mind as a means for usefully imagining the mental states of others.  A genetic predisposition to believe in other minds would seem, then, to be a prerequisite for consciousness itself.  But an evolutionarily accountable disposition to believe something is no proof that the belief is true.  We can still reasonably ask the traditional question: how can we know that other people actually do have minds?

          One plausible answer is that the hypothesis of others having minds offers the best explanation of our obvious ability to communicate, and to perform all the related cognitive and social functions that communication makes possible.  So, I have this capacity for double vision, and it facilitates my understanding of the statements of others so as to enhance my own perceptive reach; they all seem to have the same linguistic and socioperceptive abilities that I have, not to mention the same physical structures in the brain; therefore, it stands to reason that the same sort of double-visual capacity is what gives them these same abilities.  This abductive argument is more compelling than the standard analogical argument to the effect that others are liable to have minds simply because they behave the way we do in similar situations.  That more basic argument relies only on the principle that pure coincidences are rare, so that it would be odd if I were the only person who had conscious experience along with everything else that seems to be shared with other people.  This analogy would have the same force even if we supposed that conscious experience is utterly epiphenomenal.  But once we have identified a necessary function for consciousness itself, or rather for the socioperceptive ability that makes consciousness appear, we have a much stronger causal argument for believing in other minds.  For now, given that my fellows seemingly must have a double-visual mental capacity at least somewhat like my own in order for them to learn from me the same way that I learn from them, it becomes not just odd but inexplicable for me that they should not also be conscious.  

          There is, I think, a further, even stronger argument to be made here for other minds.  Consider a high-functioning android like the character Commander Data on Star Trek: The Next Generation.  Data is treated as a conscious person, not just a robot, by his shipmates, and we viewers more or less spontaneously see him that way too.  Why is that?  What evidence do we have that Data is a conscious person, not just a computerized automaton like most other science-fiction robots?  Well, one reason is what I have just been saying, that this would account for his ability to function linguistically just as we do in absorbing information from multiple points of view and sharing it back appropriately.  But there is also much in the content of what he has to say that leads us to perceive him as a conscious being.  For we know that Data is an extremely reliable source of information, including information about his own inner operations.  And many of the things he tells us are reports of his conscious experience as such.  That is, he represents to us in language how things seem to him, which is to say, how he is representing things to himself, which he can only do through the capacity to perceive these representations as his representations.  Reliable sources ought inductively to be believed if there is no reason to doubt them on a given topic.  This is why we ought to believe Data (or any other reliable speaker) when he talks about his inner states.  If we believe him when he says that this or that is how things seem to him, then we have to believe he has a conscious mind, because that's all there is to having one.[8] 


 

WORKS CITED

Armstrong, D. 1980.  The Nature of Mind and Other Essays. Brisbane: University of Queensland Press.

Burge, T. 1993. "Content Preservation", Philosophical Review 102: 457-88.

Carruthers, P. 2005.  Consciousness: Essays from a Higher-Order Perspective.  New York: Oxford University Press.

Chalmers, D. 1995.  “Facing Up to the Problem of Consciousness”, Journal of Consciousness Studies 2.3: 200-19. 

Coady, C. 1992.  Testimony: A Philosophical Study.  Oxford: Oxford University Press.

Cottingham, J., Stoothoff, R., and Murdoch, D., trans. 1985.  The Philosophical Writings of Descartes, Volume I.  Cambridge: Cambridge University Press.

Davidson, D. 1977. "The Method of Truth in Metaphysics", Midwest Studies In Philosophy 2.1: 244–254.

Descartes, R. 1637.  Discourse on the Method, in Cottingham et al. 1985.

Edwards, B. 1979.  Drawing on the Right Side of the Brain. London: Souvenir Press.

Everett, T. 2000.  "Other Voices, Other Minds", Australasian Journal of Philosophy 78.2: 213-222.

Searle, J. 1980. "Minds, Brains and Programs", Behavioral and Brain Sciences 3.3: 417–457.

 


 

NOTES



[1] Thanks to Alan Sidelle and to an audience at SUNY Geneseo for their thoughtful comments on this paper, and to Catherine Everett for the handsome illustrations.

[2] These drawings are taken from Betty Edwards 1979.

[3] This is why physical pain, which by its nature typically commands our attention, is such a pain.

[4] For example, David Chalmers 1995 uses the metaphor in a negative way, when he speaks of unconscious zombies being always "in the dark".  

[5] There might still be a kind of zombie that could behave equivalently to a conscious human being, but not by functioning in the way that we function.  It would have to use some kind of "brute force" mechanism instead, like a chess computer that has no understanding of chess but can beat a human being, just because it makes so many million calculations every second – the sort of brute functional equivalence that John Searle 1980 derides in his Chinese Room example. 

[6] pp. 140-141.  Carruthers, too, claims (p. 11) that animals other than ourselves totally "lack phenomenal consciousness", which makes the evolution of consciousness in humans much harder to explain, I think.

[7] p. 479.

[8] For a general account of this argument from reliable testimony, see Everett 2000.