AN ARGUMENT FOR HUMEAN CREATIONISM

Ted Everett

(Colloquium 02/19/10)

 

            I am going to argue that a form of creationism is probably true.  And I want to begin by making it clear that I do not mean Biblical or Christian creationism, but something that might be called Humean creationism, after David Hume.  Hume’s creation hypothesis, which he presents in his Dialogues Concerning Natural Religion as a reasonable alternative to Christian creationism, is that some intelligent being or beings did create our world and ourselves, but that these creators are not morally and rationally perfect, as the Christian God is said to be; they are probably imperfect beings much like ourselves.  Hume takes very seriously the so-called design argument as Christians make it, to the effect that a world as wonderful as ours must be the work of a perfect God, but he thinks that Christians have taken it too far.  Here is his response:

 

But were this world ever so perfect a production, it must still remain uncertain, whether all the excellences of the work can justly be ascribed to the workman. If we survey a ship, what an exalted idea must we form of the ingenuity of the carpenter who framed so complicated, useful, and beautiful a machine? And what surprize must we feel, when we find him a stupid mechanic, who imitated others, and copied an art, which, through a long succession of ages, after multiplied trials, mistakes, corrections, deliberations, and controversies, had been gradually improving? Many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out; much labour lost, many fruitless trials made; and a slow, but continued improvement carried on during infinite ages in the art of world-making.[1]

 

Hume argues further that there may as well be many gods as one, given the evidence of natural order that Christians adduce:

 

A great number of men join in building a house or ship, in rearing a city, in framing a commonwealth; why may not several deities combine in contriving and framing a world? This is only so much greater similarity to human affairs. By sharing the work among several, we may so much further limit the attributes of each, and get rid of that extensive power and knowledge, which must be supposed in one deity, and which, according to you, can only serve to weaken the proof of his existence.[2]

 

            Hume acknowledges that our creators, however many and however bright or stupid, are nevertheless God-like as far as we are concerned.  But in their own reality, there is no reason to suppose that they are better beings in any way than ourselves; the only clear difference is that they have whatever power it takes to have created us and our world.  This is the Humean creationism that I will be arguing is probably true – in fact, if I am right, almost a certainty.

 

1. Preliminary points.

            My argument is liable to produce certain misunderstandings, so please bear with me while I make a few background distinctions and preliminary points.  This way, when I get to the actual argument, its gist and structure should be reasonably clear.  It is particularly important for me to distinguish my argument, which is an empirical argument, from some traditional skeptical arguments that it resembles.  I am not interested in presenting a Cartesian argument, to the effect that Hume’s creation hypothesis cannot be proven to be false.  I do not care (today) about Descartes’s metaphysical or hyperbolic sorts of doubts, based on hypotheses that are merely conceivable, like that there are undetectable magic fairies living up my nose right now, or that I am just a brain in a vat.  I admit that I can’t prove such notions false, but that doesn’t mean that I should waste my time actually doubting reality in such silly ways.  The interesting sorts of doubts, to most people, are what can be called epistemic or evidential doubts, meaning things that you actually have reason to think might well be true.  So, if I am sneezing a lot, it is logically possible that there are invisible fairies up my nose tickling my nasal tissues, but it is actually probable that there are other little things called rhinoviruses up my nose, i.e. that I have a cold, and that is what is making me sneeze.

            What most people do when they run into Cartesian or metaphysical doubts is just to ignore them.  Yes, I might be dreaming or having some gargantuan hallucination right now.  I can’t prove that something like that isn’t happening; I can’t even assign a probability to it.  But so what?  I don’t have any reason to suppose that that kind of skeptical proposition is true, and I do have a reason to get on with my life without wasting my time on idle speculation.  So I make, as everybody else makes in adult life, something that might be called the pragmatic presumption.  I just take it for granted that nothing undetectable and really weird is going on, even though I obviously can’t prove that nothing undetectable of any sort is going on (that’s what “undetectable” means).  So, I can’t prove that I am not a brain in a vat right now, but I don’t have any evidence that says I am, and what I do only matters if I am not a brain in a vat.  So I have good practical reason to ignore the possibility – the mere conceptual possibility – that I am a brain in a vat and none of this stuff is actually happening. 

            But (and this is a big but), it is worth noting that this pragmatic presumption sometimes fails, at least temporarily, in regular life, because we can sometimes find evidence that it is actually false.  So, I ignore the big-hallucination hypothesis in ordinary life, and rely on my senses and memory, like everybody else does almost all the time.  But what if I remember that I took a big hit of LSD half an hour ago, and I remember also that whenever I do that, I get these really vivid, protracted hallucinations where I am a professional philosopher and I am presenting some fancy new theory to all these attentive colleagues and students.  That kind of evidence would overwhelm the pragmatic presumption, because then I’d have good, empirical reason to believe that all this really is just a big hallucination.  Or, say, I make the presumption, as always, that I am not asleep and dreaming all of my experiences, but then I notice that I seem to be running down an incredibly long hallway in my underwear, 20 minutes late for a class that I’ve completely forgotten how to teach.  In those conditions I can figure out that my experiences are probably not real, because this is in fact a typical sort of dream for me (and other professors, too, I think).   Occasionally, I even wake myself up that way, by figuring out, rationally,  that I am probably asleep and dreaming.   So, it turns out that whether the pragmatic presumption succeeds or not depends on your actual evidential situation. It’s not a universal cure for skepticism, or even a universal palliative.  In some circumstances, you have no rational choice but to take skeptical hypotheses seriously.

            Here are a couple of cases where the pragmatic presumption might fail in a big way. 

 

Case A: the vat-tender case.

            It is conceivable that you are a brain in a vat, but there is no reason to suppose so, right?  So, you make the pragmatic presumption and forget about it, and go get a good night’s sleep.  But then you wake up into this situation:  You find yourself alone in a room that contains 99 small vats, each of which holds what appears to be a human brain attached to various wires and tubes.  You know that your job is to watch over these brains, but you have no other specific memories.  Next to each vat is a monitor screen that shows the images its brain is experiencing in real time, as well as other indicators of the state of mind of its brain.  And guess what?  Each brain is experiencing the very same thing qualitatively that you are experiencing.  That is, each of the 99 brains believes that it’s a person just like you, who is walking around a room inspecting vats with brains in them, each of which has the same idea in turn.  This is because there is a transponder attached to your head, which is sending your actual brain states to the brains in vats.  Now, under these conditions, how do you know that you are not one of these brains in vats yourself?  Clearly you don’t, because they all have exactly the same evidence that you do, so you can’t tell positively one way or the other.  But you can figure out something, which is that you are more likely to be one of the brains in the vats than to be the actual vat-tender.  For the limiting probability is plain: out of the 100 internally identical worlds of experience in question, at most one is real.  So, without even considering any Cartesian hypotheses, the chance that your world is the real one is only one out of a hundred.  The probability that you are a brain in a vat is therefore at least 99%, given the evidence I have described.
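
For concreteness, here is the arithmetic of the case in miniature, as a small sketch in Python (the variable names are mine, for illustration only):

n_brains = 99                                # envatted brains in the room
n_streams = n_brains + 1                     # 99 artificial streams + at most 1 real one
p_real_at_most = 1 / n_streams               # 0.01
p_envatted_at_least = n_brains / n_streams   # 0.99
print(p_real_at_most, p_envatted_at_least)

Nothing philosophical is hidden in the code; it just records that with 100 internally identical streams of experience, at most one of them real, the chance of being the real vat-tender is at most one percent.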

            It should be clear that this is not a traditional skeptical argument.  Nothing here relies on metaphysical doubt.  You begin from a completely realistic position, presupposing that your experiences are entirely real, not dreams, etc. Then it happens that certain particular experiences you have lead you to believe, rationally, that your experiences in general are not real.  In epistemic situations like this one, the pragmatic presumption turns out to be unstable.  The problem is that after you make the presumption, you are rationally forced by evidence to abandon it – in this case, forced to believe that you are probably a brain in a vat.  So, this isn’t skepticism, it’s empiricism.  I wanted to make that clear.

            Now, here is a slightly amended thought experiment, which is closer in structure to the main empirical argument that I am going to make.

 

Case B: the 10-minute TIVO case.

            This case is the same as Case A, the basic vat-tender case, except that the transponder connecting your experiences to those of the brains in vats works differently.  This time, instead of sending your experiences directly into the 99 brains, the transponder acts like a TIVO machine: it records your experiences, then sends them on to the other brains ten minutes later in a continuous stream.  In this case, you can distinguish your experiences from theirs, because you have already had theirs, ten minutes earlier.  So, the pragmatic presumption seems to hold for you right now with respect to the current brains in the vats.  But once you find out what the transponder is doing, you won’t be able to distinguish your (real) world at any time from the (artificial) worlds that you know the brains will be in ten minutes later.  So the pragmatic presumption fails after all, with a ten-minute lag: at any given moment, you cannot tell whether you are the original you, now, or one of the brains in vats ten minutes from now.  The probability that you are a future brain in a vat, then, is 99%, just like the probability in the first case that you were a contemporaneous brain in a vat.

 

2. My six-step argument.

            I want to argue that something like this is going on with us.  We are not professional vat-tenders, at least I’m not, and there aren’t any actual rooms full of envatted brains that I am aware of.  But I want to argue that we all stand in essentially the same kind of relationship to certain other worlds as the vat-tender in the TIVO case stands in to the artificial worlds of the brains in the vats.  What are these other worlds for us?  They are the virtual worlds of the future, created by people like us (maybe our own children), with computers essentially like ours but more powerful.  I want to claim that we are very probably in one of these future virtual worlds, or something similar, right now.

            My main argument comes in six steps.  I want to argue:

                        (1)        that consciously experienced virtual worlds (CEVWs), subjectively indistinguishable from the real world, are materially possible.

                        (2)        that such virtual worlds are very likely to exist in the future.

                        (3)        that such worlds are very likely to be numerous.

                        (4)        therefore, that people in that future ought to believe that they are probably in one of those virtual worlds.

                        (5)        that it makes no relevant difference whether these virtual worlds have been created yet, relative to time in our world of experience.

                        (6)        therefore, that we ought to believe that we are probably in one of these virtual worlds right now.

 

I will argue for these claims one at a time.

 

            (1)  First of all, CEVWs are clearly possible in the sense of being conceivable.  We see such things, or their equivalents, in science fiction movies like Dark City or The Matrix all the time.  We may find such movies silly sometimes, but we can follow them.  If we can understand what is going on in them, then they are at least coherent stories, plausible or not.  My claim is stronger, though.  I am saying that it is also physically possible for us to make CEVWs, to create artificial consciousnesses and to place them in interactive artificial environments that seem to them like this world seems to us.  There is no scientific alternative to the idea that our own mental events are (or are somehow produced by) physical processes in our brains.  So, if we could make a functioning brain out of unconscious matter, then we could create a mind.  Of course, we already do make functioning brains out of unconscious matter.  Our children’s brains, after all, like the rest of their organs, are produced out of matter: eggs, sperm, blood, milk, cheerios, cheeseburgers, and so on, and somehow their consciousness develops out of this organic process.  And if this matter-to-mind process can be done organically, there is no reason in principle that it couldn’t be done inorganically as well, either with synthetic brains materially just like ours, or with sufficiently powerful computers of some sort that work as functional equivalents to human brains. 

 

            (2) We are also making whole virtual worlds right now, though very small ones, in video games and places online like Second Life, with human-like characters that interact within an artificial environment somewhat like ours.  What these worlds lack, of course, is any conscious experience on the part of their internal characters, plus anything like the richness and complexity of a real-world environment.  But these two gaps do not appear unbridgeable to most knowledgeable people.  The potential complexity of artificial worlds has been growing exponentially for decades, along with computing power in general.  People sometimes say that Moore’s Law, which states, roughly, that computing power always doubles every two years or so, must cease to be true at some point, but the expected point continues to recede as scientists imagine further ways to shrink computer memory, down even to the quantum level.  If there is any absolute limit on computing power, it will be far beyond what it would take to simulate human experience, given all that we know about the brain’s impressive but not astronomical complexity.  Now, no mere human being could actually program all the circumstances of our real world into a computer directly, but people can create evolutionary designs that begin in relatively simple states, then follow sets of rules that increase a system’s complexity as it develops over time.  Such models are already widely used in biology to simulate evolutionary growth, population change, and so on. 
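
To give a concrete, if toy, sense of what such rule-driven growth of complexity can look like, here is a minimal sketch in Python of an elementary cellular automaton (rule 110, a standard textbook example of my own choosing; an illustration of the general idea, not a model of any actual world-building software):

RULE = 110   # the update table, encoded as the bits of the number 110

def step(cells):
    # apply the rule to each cell, reading its left and right neighbors
    padded = [0] + cells + [0]           # fixed dead cells at the boundaries
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 40 + [1] + [0] * 40          # a very simple initial state: one live cell
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)

Starting from a single live cell and a three-cell lookup rule, the printed pattern becomes visibly intricate within a few dozen steps, which is the point: simple initial states plus simple rules can generate increasing complexity over time.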

            Some people think that artificial consciousness is likely to be produced within a few decades from now.  The computer pioneer Ray Kurzweil is the most famous example.  He goes around giving speeches about the coming “singularity”, as he calls it, when the still-accelerating onrush of technological progress will become effectively instantaneous, and he is doing everything he can to live another 30 years or so, by which point he expects that he can download his own consciousness into a computer, rendering himself more or less immortal.  And if Moore’s law holds for three more decades, that will, in fact, be enough for us to simulate the neuronal structure of the human brain, bit for bit, inside of an affordable computer.  But it doesn’t have to happen any time soon for the purposes of my argument – 10,000 years from now would do just as well.  All I need is that at some point in the future, we will have computers capable of housing CEVWs. 
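
The arithmetic behind “three more decades” is worth making explicit.  Here is a back-of-the-envelope sketch in Python, assuming the rough two-year doubling period stated above:

years = 30
doubling_period = 2                      # years per doubling, per the rough Moore's Law above
doublings = years // doubling_period     # 15 doublings in three decades
print(2 ** doublings)                    # 32768: a roughly 33,000-fold increase

Whether a 33,000-fold increase is in fact enough to simulate a brain bit for bit is an empirical question about the brain, not about the arithmetic; and again, nothing in my argument depends on the date.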

 

            (3) If we can make one CEVW in a computer, it will presumably be possible to make as many as we want.  We could automate the input process if we liked, with slight variations in initial conditions, spitting out dense matrices of artificial worlds, as in the sketch below.  And we are likely to want to do this, too, because it will be intensely interesting, scientifically and otherwise, to see how all these worlds evolve.  Presumably, the artificial worlds that most closely resemble the real world will be most valuable and most interesting, hence the ones produced in the greatest numbers.  Because our world is so complex, there will be millions or billions of potentially interesting variations, so if anybody puts in place an automated process to make slightly different CEVWs, there are likely to be millions or billions of these worlds.
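
As a purely hypothetical sketch in Python of what such an automated process might look like (make_world is a placeholder for whatever future system would actually instantiate a CEVW, and the parameters are invented for illustration):

import random

def make_world(seed):
    # stand-in for a future world-instantiating system; not real technology
    rng = random.Random(seed)
    return {
        "seed": seed,
        "gravity_scale": rng.uniform(0.99, 1.01),      # slight physical variation
        "initial_population": rng.randint(900, 1100),  # slight demographic variation
    }

worlds = [make_world(seed) for seed in range(1_000_000)]   # a million variant worlds
print(len(worlds), worlds[0])

The point of the sketch is only structural: once one world can be generated from a seed, a million slight variants are just a loop away.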

            There is an ethical question that will have to be dealt with once we have this capability: given that we can produce whole new worlds containing conscious beings, should we do it?  There are two main possibilities.  First, that we, or many of us, will consider it a good thing for CEVWs to exist, presumably because conscious existence is a good thing in itself, at least wherever something like human happiness can be attained.  In that case, we surely will create such artificial worlds, and plenty of them.  Alternatively, we, or many of us, may consider it a bad thing to create CEVWs, so we may attempt to prevent it through political or other forceful means.  But if our experience with the internet is any guide at all (or with cloning, for that matter), such prohibitions will not be perfectly respected.  A single dissident will probably be able to create innumerable CEVWs in his own basement when computers become sufficiently powerful.  I don’t see how this sort of thing can be prevented, if the human future is at all like the past.

 

            (4) So, we should take it as highly probable that there will be many artificial worlds much like ours, with conscious beings in them much like us, whose subjective experiences are relevantly indistinguishable from ours – that is, they seem just as real.  In some of the CEVWs that most resemble our world, there will be scientists who find ways to produce artificial worlds of their own, in a recursive process that generates whole systems of levels of worlds.  Now I want to ask, what is the epistemic position of our descendants in this situation?  They have produced a set of other worlds containing conscious beings with the same sort of internal lives as theirs.  If a number of beings have internally identical lives, and each is aware of the apparent existence of such “internal twins”, as I will call them, then they are all in exactly the same epistemic situation.  Each knows that he is in one of many worlds, at most one of which is real, and knows that he doesn’t know, and has no way of telling, which one he’s in.  Therefore, he can know at best that there is a probability that he is in the real world, inversely proportional to the number of worlds he is aware of that contain internal twins of his.  If there are 99 such artificial worlds, that maximum probability will be .01, just as in the vat-tender case that we began with.

            Again, I do not mean this to be understood as just another skeptical hypothesis, but as an actual fact.  I’m not merely saying that we could be in a situation like the Matrix, where the real world contains artificial worlds and you can’t tell which one you’re in.  I am saying that, in all probability, we will be in that situation.[3]  There will very probably be billions of artificial worlds inside of our own world in the not-too-distant future.  Some of these CEVWs will resemble our own world closely enough to contain beings with experiences like our own.  Starting from the ordinary presupposition that they are real, that their experiences represent real facts, these beings will observe that there are other worlds like theirs inside of theirs, and that they cannot in principle distinguish their own experiences from those of their own artificial creations.  And we, the actual flesh-and-blood humans, will be in exactly the same epistemic situation.  If there are going to be millions or billions of these worlds inside of our world, then the rational subjective probability for our descendants that they are in the real, original world, not one of the artificial worlds at some level of remove from the original world, will be extremely small.

 

            (5) Next, I want to note that artificial time and real time do not have to match up.  If there is any real time at which we will not know whether we are in the real world or a CEVW, then at that actual time, we also won’t know what time it is in the real world (just as we often wake up from a dream, and find to our momentary surprise that we are no longer in high school, or not actually running late for class, with or without pants).  For all we are going to know, once the pragmatic presumption has been defeated by technology, our subjective time could be a thousand or a million years earlier or later than the objective time, or they may not correlate reliably at all.   

 

            (6) Finally, I want to claim that if this sort of thing is going to be true in the future, however distant, then it is true already.  If our descendants are in a position where they ought to believe that they are probably artificial, then so are we.  The reason is that if there can be an artificial being in the year 3010 whose experiences are just like those of a real person at the same time, then there can be an artificial being in 3010 whose experiences are just like your experiences right now in 2010.  So, what you do know is that, very probably, your experiences now in 2010 are duplicated in the experiences of artificial beings in the year 3010.  If you know this, then what you don’t know is whether the real time is 2010, in which case everything is more or less the way you think it is, or 3010, in which case you are one of the (probably billions of) artificial beings who believe that it is 2010.  So, it doesn’t matter whether the artificial beings who think that it is 2010 exist in 2010 or not.  This is just like the preliminary 10-minute TIVO case above.  If you know that realistic CEVWs are out there somewhere in time, then you don’t know now whether you are real now or artificial then.  And since there are many artificial people and at most one real person having these 2010-ish experiences just like yours, you can figure out now that you are probably one of the artificial ones, living then or at some other time after realistic CEVWs have first been created.  The precise probability will be (m-1)/m, where m is the actual number of realistic, 2010-ish worlds that will ever have existed, including the one real world.  That’s where we are right now, I think, very probably living in a virtual world inside of a computer system.  In this way, creationism – of the type that David Hume proposed – turns out to be very probably true.
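
To see how quickly this probability approaches certainty, here is the formula worked in Python for a few values of m (the particular values are arbitrary, chosen only for illustration):

for m in (100, 1_000_000, 10**9):
    # m = total number of 2010-ish worlds ever to exist: one real, m-1 artificial
    print(f"m = {m:>13,}: P(artificial) = {(m - 1) / m}")

With a hundred such worlds, the probability of being artificial is already 0.99; with a million, it is 0.999999.  This is why I said at the outset that, if the premises hold, Humean creationism is almost a certainty.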

 

3. Implications.

            So, what more can be said about this probably artificial world, ourselves, and our creators?  Well, the relationship between us and any creators is certainly asymmetrical, even if we share the same intrinsic qualities.  To them, we are stipulative beings, almost fictional, while to us, they are speculative beings, all-powerful but imperceptible.  They can assign fates to us, hit us with lightning bolts, impregnate us with avatars, whatever they want, and we can do no worse to them than disappoint them.  Even if they are entirely fallible in themselves, and towards each other, with respect to our world they are omnipotent, omniscient, and effectively eternal – just because their time doesn’t have to match our time.  I designed a little program years ago that simulates aspects of human history.  I used to run it with a thousand people for a thousand years a thousand times (to look at certain evolutionary patterns), and the whole experiment would take about five minutes.  But internally, each run lasted a thousand years for each of the little worlds.  Now, that is pretty meaningless, since there wasn’t anybody there, just symbolic objects with a few simple properties and relations.  But there is no reason to suppose that future computers won’t be able to run essentially similar programs with conscious beings in them, who experience the iterations of their worlds simply as time, while an outside observer would see thousands of locally experienced years go by in minutes or seconds.  So, on the model I am proposing, our probable creators have most of the metaphysical properties ascribed by Jews, Christians, and Moslems to God, even if they are ordinary humans in their own world.
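
To make the time asymmetry vivid, here is a hypothetical reconstruction in Python of the kind of toy simulation just described (this is not the original program, and the internal dynamics are a deliberately crude stand-in):

import random
import time

def run_history(n_agents=1000, n_years=1000, seed=0):
    # one run of a toy "history": each loop iteration is one internal year
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_years):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        transfer = 0.1 * min(wealth[i], wealth[j])   # a crude trade between two agents
        wealth[i] -= transfer
        wealth[j] += transfer
    return wealth

start = time.time()
for run in range(1000):                  # a thousand runs of a thousand years each
    run_history(seed=run)
print(f"1,000,000 internal years elapsed in {time.time() - start:.1f} real seconds")

A million internal years go by in a few seconds of real time, and nothing inside the program ever needs to know that.  That decoupling of internal from external time is what makes a creator effectively eternal from the inside.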

            It is very hard to say to what extent whoever or whatever created our world is benign.  As I imagine descendants creating CEVWs, a number of possibilities appear that parallel standard theologies.  Frivolous people might make such worlds for entertainment, like our video games, putting them in the position of the Greek gods, intervening for their own amusement in a succession of otherwise senselessly violent events.  More serious people might create worlds out of loving kindness like the Christian god, for the sake of their inhabitants, with internal conditions and laws that make virtue and happiness attainable.  I think that this is what I would do if I could make a world.  People with other interests might want to run experiments to test the consequences of actions and policies, like we test drugs on monkeys, or just to explore the possibilities of conscious existence, which would amount to a Deistic universe for the experimental subjects.  It seems likely to me that the vast majority of artificial worlds will be produced by machine, through automatic processes, without the direct involvement or specific awareness of any conscious designer – a kind of super deism.  This is just because computers can spit things out so much more quickly and efficiently than people can.  So, once somebody starts a program that designs and instantiates worlds automatically, individual human or human-like creators can probably never catch up in bulk.  Quality is another matter – the vast majority of automatically produced CEVWs will probably be horrible failures, like billions of novels automatically written by machines that encode English syntax and vocabulary.  It may be reasonable to think that this world, our world, is such an orderly and pleasant one, compared to most worlds that were just randomly generated, that we should infer a conscious, deliberate, and attentive sort of creator rather than a random generator.  So, there is a place for a traditional design argument within this general approach, and not a specious one, I think.

            It seems wrong to say that people in CEVWs are not real, even if they’re only software objects in computer programs.  We are real conscious beings, whether we are in the original physical world or not.  Our environments, even our brains and bodies, may be entirely artificial from an objective point of view, but our minds seem to be composed of different stuff.  If a virtual human being feels pain, that pain is a fact about the whole universe, independently of whether it is physically realized in an organic or inorganic brain, or any other physical system that can manifest pain.  That is my intuition, anyway, although it sounds like saying that you can have an artificial leg together with a real foot.  This implies some kind of formalist theory of mind, where mind is not an alternative substance to matter, but a collection of qualities that can be realized in matter, potentially in many different ways, like a poem can be printed, or read out loud, or just stored in somebody’s iPod.  But I guess that’s what I am inclined to think in any case.

            As for the soul, and life after death: well, if what you are is fundamentally a pattern of information that happens to be realized right now in your brain, or, as I have argued, in some future computer system, then there is nothing to prevent that same pattern from showing up elsewhere after your death, or from being deliberately cut and pasted into another program.  I’d like to think that I would want to “save” at least some of the conscious beings in my own created worlds, but even if I didn’t, patterns are patterns, and some variant of each of us that’s pretty close will probably be showing up elsewhere, some time or other.  Still, it does not seem very probable that if some intelligent being did deliberately create our world, he added another program that gives us each a second life of eternal torment or bliss, or that we ought to expect to wake up again after our deaths in any programmatic way.

This is all, as I have said, a matter of conjecture.  If we have a conscious and attentive creator who wants us to have evidence of his existence and intentions, he could certainly have done a better job of planting the evidence.  The probability still holds that we are in some kind of a created world, because we ourselves will soon enough be creating virtual worlds containing conscious beings, and those worlds will be numerous, and we won’t be able to discriminate between the two situations, which means that we can’t discriminate between them now; so, probably, we’re in one of the numerous created worlds rather than the one original world.  Everything else is speculation.  But intelligent beings will always wonder why they exist, and why their world exists, and some will take to heart, not without reason, the hopeful possibility that a more powerful being has created them for some good purpose.

 

 

 


NOTES



[1] Hume, Dialogues Concerning Natural Religion, Part V.

[2] Ibid.

[3] It is enough for my argument that this is technologically probable, even if in fact, in our world, we blow ourselves up before we actually get that much computing power.