Naturalism vs. Evolution:
A Religion/Science Conflict?

This article addresses the question posed in its title: “Naturalism vs. Evolution: A Religion/Science Conflict?”

Naturalism is the view that there is no such person as God or anything like God. So taken, it is stronger than atheism; it is possible to be an atheist without rising to the heights (or sinking to the depths) of naturalism. A follower of Hegel could be an atheist, but, because of his belief in the Absolute, fail to qualify for naturalism; similarly for someone who believed in the Stoics’ Nous, Plato’s Idea of the Good, or Aristotle’s Prime Mover. This definition of naturalism is a bit vague: exactly how much must an entity resemble God to be such that endorsing it disqualifies one from naturalism? Perhaps the definition will be serviceable nonetheless; clear examples of naturalists would be Bertrand Russell (“A Free Man’s Worship”), Daniel Dennett (Darwin’s Dangerous Idea), Richard Dawkins (The Blind Watchmaker), the late Stephen Jay Gould, David Armstrong, and the many others who are sometimes said[1] to endorse “The Scientific Worldview.”

Naturalism is presumably not, as it stands, a religion. Nevertheless, it performs one of the most important functions of a religion: it provides its adherents with a worldview. It tells us what the world is fundamentally like, what is most deep and important in the world, what our place in the world is, how we are related to other creatures, what (if anything) we can expect after death, and so on. A religion typically does that and more; it also involves worship and ritual. These latter are ordinarily (but not always) absent from naturalism; naturalism, we could therefore say, performs the cognitive or doxastic function of a religion. For present purposes, therefore, we can promote it to the status of an honorary religion, or at any rate a quasi-religion. And now we must ask the following question: is there a conflict between naturalism, so understood, and science? If so, then indeed there is a science/religion conflict–not, however, between science and Christian (or Judaic, or Islamic) belief, but between science and naturalism.

Why should we think there might be such a conflict? Here the place to look is at the relation between naturalism and current evolutionary theory. But why should we think there might be conflict there, in particular since so many apparently believe that evolution is a main supporting pillar in the temple of naturalism?[2] Note, first, that most of us assume that our cognitive faculties, our belief-producing processes, are for the most part reliable. True, they may not be reliable at the upper limits of our powers, as in some of the more speculative areas of physics; and the proper function of our faculties can be skewed by envy, hate, lust, mother love, greed, and so on. But over a broad range of their operation, we think the purpose of our cognitive faculties is to furnish us with true beliefs, and that when they function properly, they do exactly that.

But isn’t there a problem, here, for the naturalist? At any rate for the naturalist who thinks that we and our cognitive capacities have arrived upon the scene after some billions of years of evolution (by way of natural selection and other blind processes working on some such source of genetic variation as random genetic mutation)? The problem begins in the recognition, from this point of view, that the ultimate purpose or function of our cognitive faculties, if they have one, is not to produce true beliefs, but to promote reproductive fitness.[3] What our minds are for (if anything) is not the production of true beliefs, but the production of adaptive behavior. That our species has survived and evolved at most guarantees that our behavior is adaptive; it does not guarantee or even suggest that our belief-producing processes are reliable, or that our beliefs are for the most part true. That is because our behavior could be adaptive, but our beliefs mainly false. Darwin himself apparently worried about this question: “With me,” says Darwin,

the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?[4]

Perhaps we could put Darwin’s doubt as follows. Let R be the proposition that our cognitive faculties are reliable, N the proposition that naturalism is true, and E the proposition that we and our cognitive faculties have come to be by way of the processes to which contemporary evolutionary theory points us: what is the conditional probability of R on N&E? That is, what is P(R/N&E)? Darwin fears it may be low.

There is much to be said for Darwin’s doubt. Natural selection rewards adaptive behavior and penalizes maladaptive behavior, but it cares not a whit what you believe. How, exactly, does this bear on the reliability of our cognitive faculties? In order to avoid irrelevant distractions or species chauvinism, suppose we think, first, not about ourselves and our ancestors, but about a hypothetical population of creatures a lot like ourselves on a planet similar to Earth. Suppose these creatures have cognitive faculties; they hold beliefs, change beliefs, make inferences, and so on; suppose further these creatures have arisen by way of the selection processes endorsed by contemporary evolutionary thought; and suppose naturalism is true in their possible world. What is the probability that their faculties are reliable? What is P(R/N&E), specified, not to us, but to them?

We can assume that their behavior is for the most part adaptive; but what about their beliefs: is it likely that they are for the most part true? In order to evaluate P(R/N&E), for those creatures, we must look into the relation between their beliefs and their behavior. Their behavior, we suppose, is adaptive; but what does that tell us about the truth of their beliefs or the reliability of their cognitive faculties? We’ll consider the probability of R on N&E and each of two possibilities (C and -C), possibilities that are mutually exclusive and jointly exhaustive.[5] Given P(R/N&E&C) and P(R/N&E&-C), together with the probabilities of C and -C on N&E, we can determine P(R/N&E). (Of course, we won’t be able to assign specific real numbers, but only vague estimates such as ‘high,’ or ‘low,’ or ‘in the neighborhood of .5.’)

What are these two possibilities C and -C? First, what sort of thing will a belief be, from the perspective of naturalism? Here I’ll assimilate materialism (about human beings) to naturalism: human beings are material objects and neither are nor contain immaterial souls or selves. (All or nearly all naturalists are materialists, so there will be little if any loss of generality.) And from this point of view, i.e., naturalism so construed as to include materialism, a belief would apparently have to be something like a long term event or structure in the nervous system–perhaps a structured group of neurons connected and related in a certain way. This neural structure will have neurophysiological properties (‘NP properties’): properties specifying the number of neurons involved, the way in which those neurons are connected with each other and with other structures (muscles, sense organs, other neuronal events, etc.), the average rate and intensity of neuronal firing in various parts of this event, and the ways in which the rate of fire changes over time and in response to input from other areas. It is easy to see how these properties of a neuronal event should have causal influence on the behavior of the organism. Beliefs, presumably, will be neurally connected with muscles; we can see how electrical impulses coming from the belief could negotiate the usual neuronal channels and ultimately cause muscular contraction.

So a belief will be a neuronal structure or event with an array of NP properties. But if this belief is really a belief, then it will also have another sort of property: it will have content; it will be the belief that p, for some proposition p–perhaps the proposition naturalism is all the rage these days. And now the question is this: does a belief–a neural structure–cause behavior, enter into the causal chain leading to behavior, by virtue of its content? C is the possibility that the content of a belief does enter the causal chain leading to behavior; -C is the possibility that it does not.

Let’s begin with -C (which we could call ‘semantic epiphenomenalism’): what is P(R/N&E&-C)? Well, it is of course the content of a belief that determines its truth or falsehood; a belief is true just if the proposition that constitutes its content is true. But given -C, the content of a belief would be invisible to evolution. Since natural selection is interested only in adaptive behavior, not true belief, it would be unable to modify belief-producing processes in the direction of greater reliability by penalizing false belief and rewarding true belief. Accordingly, the fact that these creatures have survived and evolved, that their cognitive equipment was good enough to enable their ancestors to survive and reproduce–that fact would tell us nothing at all about the truth of their beliefs or the reliability of their cognitive faculties. It would tell us something about the neurophysiological properties of a given belief; it would tell us that by virtue of these properties, that belief has played a role in the production of adaptive behavior. But it would tell us nothing about the truth of the content of that belief: its content might be true, but might with equal probability be false. Now reliability requires a fairly high proportion of true beliefs–for definiteness, say 3 out of 4. On this scenario (i.e., N&E&-C), the probability that ¾ of these creatures’ beliefs are true is low. Alternatively, we might think this probability is inscrutable–such that we simply cannot tell, except within very wide limits, what it is. This too seems a sensible conclusion. P(R/N&E&-C), therefore, is either low or inscrutable.

Turn to C, the other possibility, the possibility that the content of a belief does enter the causal chain leading to behavior. As I’ll argue below, it is difficult to see, on the materialist scenario, how a belief could have causal influence on behavior or action by virtue of its content. Nonetheless, suppose C is true. This is the commonsense position: belief serves as a (partial) cause and thus explanation of behavior–and this explicitly holds for the content of belief. I want a beer and believe there is one in the fridge; the content of that belief, we ordinarily think, partly explains the movements of that large lumpy object that is my body as it heaves itself out of the armchair, moves over to the fridge, opens it, and extracts the beer. What is P(R/N&E&C)? Not as high as one might think.

Could we argue that beliefs are connected with behavior in such a way that false belief would produce maladaptive behavior, behavior that would tend to reduce the probability of the believer’s surviving and reproducing? No. First, false belief by no means guarantees maladaptive action. For example, religious belief is nearly universal across the world; even among naturalists, it is widely thought to be adaptive; yet naturalists think these beliefs are mostly false. Clearly enough false belief can produce adaptive behavior. Perhaps a primitive tribe thinks that everything is really alive, or is a witch; and perhaps all or nearly all of their beliefs are of the form this witch is F or that witch is G: for example, this witch is good to eat, or that witch is likely to eat me if I give it a chance. If they ascribe the right properties to the right ‘witches,’ their beliefs could be adaptive while nonetheless (assuming that in fact there aren’t any witches) false.

Our question is really about the proportion of true beliefs among adaptive beliefs–that is, beliefs involved in the causation of adaptive behavior. What proportion of adaptive beliefs are true? For every true adaptive belief it seems we can easily think of a false belief that leads to the same adaptive behavior. The fact that my behavior (or that of my ancestors) has been adaptive, therefore, is at best a third-rate reason for thinking my beliefs mostly true and my cognitive faculties reliable–and that is true even given the commonsense view of the relation of belief to behavior. So we can’t sensibly argue from the fact that our behavior (or that of our ancestors) has been adaptive, to the conclusion that our beliefs are mostly true and our cognitive faculties reliable. It is therefore hard to see that P(R/N&E&C) is very high. To concede as much as possible to the opposition, however, let’s say that this probability is either inscrutable or in the neighborhood of .9.

Now the calculus of probabilities (the theorem on total probability) tells us that

P(R/N&E) = [P(R/N&E&C) x P(C/N&E)] + [P(R/N&E&-C) x P(-C/N&E)]

i.e., the probability of R on N&E is the weighted average of the probabilities of R on N&E&C and N&E&-C–weighted by the probabilities of C and -C on N&E.

We have already noted that the left-hand term of the first of the two products on the right side of the equality is either moderately high or inscrutable; the second is either low or inscrutable. What remains is to evaluate the weights, the right-hand terms of the two products. So what is the probability of -C, given N&E, what is the probability of semantic epiphenomenalism on N&E? Robert Cummins suggests that semantic epiphenomenalism is in fact the received view as to the relation between belief and behavior.[6] That is because it is extremely hard to envisage a way, given materialism, in which the content of a belief could get causally involved in behavior. According to materialism, a belief is a neural structure of some kind–a structure that somehow possesses content. But how can its content get involved in the causal chain leading to behavior? Had a given such structure had a different content, one thinks, its causal contribution to behavior would be the same. Suppose my belief naturalism is all the rage these days–the neuronal structure that does in fact display that content–had had the same neurophysiological properties but some entirely different content: perhaps nobody believes naturalism nowadays. Would that have made any difference to its role in the causation of behavior? It is hard to see how: there would have been the same electrical impulses traveling down the same neural pathways, issuing in the same muscular contractions. It is therefore exceedingly hard to see how semantic epiphenomenalism can be avoided, given N&E. (There have been some valiant efforts but things don’t look hopeful.) So it looks as if P(-C/N&E) will have to be estimated as relatively high; let’s say (for definiteness) .7, in which case P(C/N&E) will be .3. Of course we could easily be wrong; we don’t really have a solid way of telling; so perhaps the conservative position here is that this probability too is inscrutable: one simply can’t tell what it is. Given current knowledge, therefore, P(-C/N&E) is either high or inscrutable. And if P(-C/N&E) is inscrutable, then the same goes, naturally enough, for P(C/N&E). What does that mean for the sum of these two products, i.e., P(R/N&E)?

We have several possibilities. Suppose we think first about the matter from the point of view of someone who doesn’t find any of the probabilities involved inscrutable. Then P(C/N&E) will be in the neighborhood of .3, P(-C/N&E) in the neighborhood of .7, and P(R/N&E&-C) perhaps in the neighborhood of .2. This leaves P(R/N&E&C), the probability that R is true given ordinary naturalism together with the commonsense view as to the relation between belief and behavior. Given that this probability is not inscrutable, let’s say that it is in the neighborhood of .9. Under these estimates, P(R/N&E) will be in the neighborhood of .41.[7] Suppose, on the other hand, we think the probabilities involved are inscrutable: then we will have to say the same for P(R/N&E). P(R/N&E), therefore, is either low–less than .5, at any rate–or inscrutable.
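
To make the arithmetic concrete, here is a minimal Python sketch of the weighted-average calculation (the function name and layout are mine, not part of the original argument); it simply plugs in the rough illustrative estimates from the text and from note 7:

```python
# A small check of the total-probability arithmetic above, using the rough
# illustrative estimates from the text and note 7. The function name is mine.

def p_r_given_ne(p_r_given_c, p_r_given_not_c, p_c):
    """P(R/N&E) = P(R/N&E&C)*P(C/N&E) + P(R/N&E&-C)*P(-C/N&E)."""
    return p_r_given_c * p_c + p_r_given_not_c * (1 - p_c)

# Estimates from the text: P(R/N&E&C) ~ .9, P(R/N&E&-C) ~ .2, P(C/N&E) ~ .3
print(p_r_given_ne(0.9, 0.2, 0.3))  # roughly 0.41

# Variants mentioned in note 7:
print(p_r_given_ne(1.0, 0.2, 0.3))  # roughly 0.44 (taking P(R/N&E&C) to be 1)
print(p_r_given_ne(0.9, 0.2, 0.5))  # roughly 0.55 (taking C and -C as equally probable)
```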

In either case, however, doesn’t the naturalist–at any rate one who sees that P(R/N&E) is low or inscrutable–have a defeater[8] for R, and for the proposition that his own cognitive faculties are reliable? I think so. Note some analogies with clear cases. I hear about a certain substance XXX, a substance the ingestion of which is widely reputed to destroy the reliability of one’s belief-forming faculties; nevertheless I find it difficult to estimate the probability that ingestion of XXX really does destroy cognitive reliability, and regard that probability as either high or inscrutable. Now suppose I come to think you have ingested XXX. Then I have a defeater for anything I believe just on your say-so; I won’t (or shouldn’t) believe anything you tell me unless I have independent evidence for it. And if I come to think that I myself have also ingested XXX–at an unduly high-spirited party, perhaps–then I will have a defeater for R in my own case. Suppose, in the modern equivalent to Descartes’ evil demon case, I come to think I am a brain in a vat,[9] and that the probability of my cognitive faculties being reliable, given that I am a brain in a vat, is low or inscrutable: then again I have a defeater for R with respect to me.

Perhaps it seems harder to see that one has a defeater for R in the case where the relevant probability is inscrutable than in the case where it is low. Well, suppose you buy a thermometer; then you learn that the owner of the factory where it was manufactured is a Luddite who aims to do what he can to disrupt contemporary technology, and to that end makes at least some instruments that are unreliable. You can’t say what the probability is of this thermometer’s being reliable, given that it was made in that factory; that probability is inscrutable for you. But would you trust the thermometer? It’s outside your window, and reads 30°F; if you have no other source of information about the temperature outside, would you believe it is 30°F?

Another analogy: you embark on a voyage of space exploration and land on a planet revolving about a distant sun, a planet that apparently has a favorable atmosphere. You crack the hatch, step out, and immediately find what appears to be an instrument that looks a lot like a terrestrial radio; you twiddle the dials, and after a couple of squawks it begins to emit strings of sounds that, oddly enough, form English sentences. These sentences express propositions only about topics of which you have no knowledge: what the weather is like in Beijing at the moment, whether Caesar had eggs on toast on the morning he crossed the Rubicon, and whether the first human being to cross the Bering Strait was left-handed. Impressed, indeed awed, by your find, you initially form the opinion that this instrument speaks the truth, that the propositions expressed (in English) by those sentences are true. But then you recall that you have no idea at all as to who or what constructed the instrument, what it is for, whether it has a purpose at all. You see that the probability of its being reliable, given what you know about it, is inscrutable. Then you have a defeater for your initial belief that the thing does in fact speak the truth. In the same way, then, the fact that P(R/N&E) is low or inscrutable gives you a defeater for R.

But here an objection rears its ugly head. In trying to assess P(R/N&E), I suggested that semantic epiphenomenalism was probable, given materialism, because a neural structure would have caused the same behavior if it had had different content but the same NP properties. But, says the objector, it couldn’t have had the same NP properties but different content; having a given content just is having a certain set of NP properties. This is a sensible objection. Given materialism, there is a way of looking at the relation between content (as well as other mental properties) and NP properties according to which the objector is clearly right. We must therefore look a bit more deeply into that relation. Here there are fundamentally two positions: reductionism or reductive materialism on the one hand, and nonreductive materialism on the other. Consider the property of having as content the proposition naturalism is all the rage these days, and call this property ‘C’ (reusing the letter: from here on ‘C’ names this content property, not the causal possibility introduced above). According to reductive materialism, C just is a certain combination of NP properties.[10] It might be a disjunction of such properties; more likely a complex Boolean construction on NP properties, perhaps something like

(P1&P7&P28&…) v (P3&P17&…) v (P8&P83&P107&…) v …
(where the Pi are NP properties).[11]

Now take any belief B you like: what is the probability that B is true, given N&E and reductive materialism? What we know is that B has a certain content; that having that content just is having a certain combination of NP properties; and (we may assume) that having that combination of NP properties is adaptive (in the circumstances in which the organism finds itself). What, then, is the probability that the content of B is true? Well, it doesn’t matter whether it is true; if it is true, the NP properties constituting that content will be adaptive, but if it is false, those properties will be equally adaptive, since in each case they make the same causal contribution to behavior. That combination of NP properties is the property of having a certain content; it is the property of being associated with a certain proposition p in such a way that p is the content of the belief. Having that combination of NP properties is adaptive; hence having that belief is adaptive; but that combination of NP properties will be equally adaptive whether p is true or false. In this case (reductionism) content does enter into the causal chain leading to behavior, because NP properties do, and having a certain content just is displaying a certain set of NP properties. But those properties will be adaptive whether or not the content whose having they constitute is true. Content enters in, all right, but not, we might say, as content. Better, content enters the causal chain leading to behavior, but not in such a way that its truth or falsehood bears on the adaptive character of the belief.

But, someone might object, given that the belief is adaptive, isn’t there a greater probability of its being true than of its being false? Why so? Because, the objector continues, the belief’s being adaptive means that having this belief, in these or similar circumstances, helped the creature’s ancestors to survive and reproduce; having this belief contributed to reproductive fitness. And wouldn’t the best explanation for this contribution be that the belief accurately represented their circumstances, i.e., was true? So, probably, the belief was adaptive for the creature’s ancestors because it was true. So, probably, the belief is adaptive for this creature in its circumstances because it is true.[12]

This objection, beguiling as it sounds, is mistaken. The proper explanation of this belief’s being adaptive is that having the NP properties that constitute the content of the belief causes adaptive behavior, not that the belief is true. And of course having those NP properties can cause adaptive behavior whether or not the content they constitute is true. At a certain level of complexity of NP properties, the neural structure that displays those properties also acquires a certain content C. That is because having that particular complex of NP properties just is what it is to have C. Having those NP properties, presumably, is adaptive; but whether the content arising in this way is true or false makes no difference to that adaptivity. What explains the adaptivity is just that having these NP properties, this content, causes adaptive behavior.[13]

So consider again a belief B with its content C; what, then, given that having that belief is adaptive, is the probability that C is true, is a true proposition? Well, since truth of content doesn’t make a difference to the adaptivity of the belief, the belief could be true, but could equally likely be false. We’d have to estimate the probability that it is true as about .5. But then if the creature has 1000 independent beliefs, the probability that, say, ¾ of them are true (and this would be a minimal requirement for reliability) will be very low–less than 10^-58.[14] So on naturalism and reductionism, the probability of R appears to be very low.
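
The figure cited in note 14 can be checked directly. Here is a minimal Python sketch of the exact binomial calculation (the function name is mine): if each of n independent beliefs were true with probability about .5, how likely is it that at least ¾ of them are true?

```python
# A sketch of the binomial calculation reported in note 14 (attributed there to
# Paul Zwier): with each of n independent beliefs true with probability 1/2,
# how likely is it that at least 3/4 of them are true? Exact rational
# arithmetic avoids floating-point underflow; the function name is mine.

from fractions import Fraction
from math import comb

def prob_at_least(n, k, p=Fraction(1, 2)):
    """Probability that at least k of n independent beliefs are true,
    each true with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(float(prob_at_least(1000, 750)))  # less than 10^-58, as note 14 reports
print(float(prob_at_least(100, 75)))    # well under .000001, also per note 14
```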

That’s how things go given reductive materialism; according to nonreductive materialism, the other possibility, a mental property is not an NP property or any Boolean construction on NP properties, but a new sort of property that gets instantiated when a neural structure attains a certain degree of complexity–when, that is, it displays a certain sufficiently complex set of NP properties. (We might call it an ’emergent’ property.) Again, take any particular belief B: what is the probability, on N&E & nonreductive materialism, that B is true? What we know is that B has a content, that this content arises when the structure has a certain complex set of NP properties, and that having that set of NP properties is adaptive. But once again, it doesn’t matter for adaptivity whether the content associated with those NP properties is true or false; so once again, the probability that the content is true will have to be estimated as about .5; hence the probability that these creatures have reliable faculties is low. Either way, therefore, that probability is low, so that P(R/N&E) is also low–or, as we could add, if we like, inscrutable.

Now for the argument that one can’t rationally accept N&E. P(R/N&E), for those hypothetical creatures, is low or inscrutable. But those creatures aren’t relevantly different from us; so of course the same goes for us: P(R/N&E) specified to us is also low or inscrutable. We have seen furthermore that one who accepts N&E (and sees that P(R/N&E) is either low or inscrutable) has a defeater for R. But one who has a defeater for R has a defeater for any belief she takes to be a product of her cognitive faculties–which is of course all of her beliefs. She therefore has a defeater for N&E itself; so one who accepts N&E has a defeater for N&E, a reason to doubt or reject or be agnostic with respect to it. If she has no independent evidence for N&E, then, the rational course would be to reject belief in it; for her, N&E is self-defeating and hence irrational.

But of course defeaters can in turn be themselves defeated; so couldn’t she get a defeater for this defeater–a defeater-defeater? Maybe by doing some science, for example, determining by scientific means that her faculties really are reliable? Couldn’t she go to the MIT cognitive-reliability laboratory for a check-up? Clearly that won’t help: that course would presuppose that her faculties are reliable; she’d be relying on the accuracy of her faculties in believing that there is such a thing as MIT, that she has in fact consulted its scientists, that they have given her a clean bill of cognitive health, and so on. Thomas Reid (Essays on the Intellectual Powers of Man) put it like this:

If a man’s honesty were called into question, it would be ridiculous to refer to the man’s own word, whether he be honest or not. The same absurdity there is in attempting to prove, by any kind of reasoning, probable or demonstrative, that our reason is not fallacious, since the very point in question is, whether reasoning may be trusted.

Is there any sensible way at all in which she can argue for R? It is hard to see how. Any argument she might produce will have premises; these premises, she claims, give her good reason to believe R. But of course she has the very same defeater for each of those premises that she has for R; and she has the same defeater for the belief that if the premises of that argument are true, then so is the conclusion. So it looks as if this defeater can’t be defeated. Naturalistic evolution gives its adherents a reason for doubting that our beliefs are mostly true; perhaps they are mostly mistaken. But then it won’t help to argue that they can’t be mostly mistaken; for the very reason for mistrusting our cognitive faculties generally will be a reason for mistrusting the faculties that produce belief in the goodness of the argument.

This defeater, therefore, can’t be defeated. Hence the devotee of N&E has an undefeated defeater for N&E. N&E, therefore, cannot rationally be accepted–at any rate by someone who is apprised of this argument and sees the connections between N&E and R.

But if N&E can’t rationally be accepted, there is indeed a conflict between naturalism and evolution: one can’t rationally accept them both. But evolution is an extremely important scientific doctrine, one of the chief pillars of contemporary science. Hence there is a conflict between naturalism and science. The conclusion seems to be that there is a religion/science conflict, all right, but it isn’t between Christian belief and science: it is between naturalism and science.[15]

By Alvin Plantinga

Notes

[1] Erroneously, in my opinion. There is no inner connection between science and naturalism; indeed, as I’ll argue, naturalism clashes with science.

[2] Thus Richard Dawkins: “Although atheism might have been logically tenable before Darwin, Darwin made it possible to be an intellectually fulfilled atheist.” The Blind Watchmaker, (New York: Norton, 1986), pp. 6-7.

[3] As evolutionary biologist David Sloan Wilson puts it, “the well-adapted mind is ultimately an organ of survival and reproduction” (Darwin’s Cathedral [Chicago: University of Chicago Press, 2002], p. 228).

[4] Letter to William Graham, Down, July 3rd, 1881. In The Life and Letters of Charles Darwin Including an Autobiographical Chapter, ed. Francis Darwin (London: John Murray, Albemarle Street, 1887), Volume 1, pp. 315-16. Evan Fales has suggested that Darwin is thinking, here, not of belief generally, but of religious and philosophical convictions and theoretical beliefs. If he is right, Darwin’s doubt would not extend to everyday beliefs to the effect, e.g., that bread is nourishing but mud is not, but only to religious and philosophical beliefs–such as naturalism.

[5] [Editor’s note: Plantinga uses a hyphen here to abbreviate “it is not the case that.” Thus, “-C” can be read “it is not the case that C,” or “it is false that C,” or just “not C.” C and -C are “mutually exclusive” because they cannot both be true. They are “jointly exhaustive” because at least one of them must be true.]

[6] Meaning and Mental Representation (Cambridge, MA: MIT Press, 1989), p. 130.

[7] Of course these figures are the merest approximations; others might make the estimates somewhat differently; but they can be significantly altered without significantly altering the final result. For example, perhaps you think P(R/N&E&C) is higher, perhaps even 1; then (retaining the other assignments) P(R/N&E) will be in the neighborhood of .44. Or perhaps you reject the thought that -C is more probable than C on N&E, thinking them about equally probable. Then (again, retaining the other assignments) P(R/N&E) will be in the neighborhood of .55.

[8] [Editor’s note: When Plantinga says someone has a “(rationality) defeater” for a belief B, he means, (very) roughly, that the person in question has one or more other beliefs that make it irrational for him or her to believe B.]

[9] [Editor’s note: René Descartes was a 17th-century philosopher who famously entertained the possibility that an all-powerful evil demon was trying to deceive him (e.g., by making him think that his senses provide him with accurate information about the physical world when in fact there is no physical world, including no bodies or sense organs). The “modern equivalent” to Descartes’ evil demon case involves the supposition that one has no body but instead is just a brain in a vat being stimulated by scientists in just the right way so as to produce a completely realistic but delusory experience of living a normal human life. Notice that Plantinga is not claiming here that the (alleged) impossibility of proving that one is not being deceived by an evil demon or that one is not a brain in a vat makes it irrational to trust one’s cognitive faculties. Instead, he is making the much more plausible claim that if one were to actually believe that one is being deceived by an evil demon or that one is a brain in a vat, then it really would be irrational for one to believe that one’s cognitive faculties are reliable.]

[10] Or (to accommodate the thought that meaning ‘ain’t in the head’) a combination of NP properties with environmental properties. I’ll assume but not mention this qualification in what follows.

[11] [Editor’s note: The symbol “v” means “or” in the sense of “one or the other or both.”]

[12] Here I am indebted to Tom Crisp.

[13] In this connection, consider dream beliefs. Take a given dream belief with its content C: Having the NP properties that constitute the property of having C is presumably adaptive; but it makes no difference whether or not that content is true.

[14] As calculated by Paul Zwier. This is the probability that the whole battery of cognitive faculties is reliable; the probability that a given faculty is reliable will be larger, but still small; if its output is, say, 100 beliefs, the probability that ¾ of them are true will be no more than .000001.

[15] For wise counsel and good advice, I am grateful to Thad Botham, E.J. Coffman, Robin Collins, Tom Crisp, Chris Green, Jeff Green, Dan McKaughan, Brian Pitts, Luke Potter and Del Ratzsch.
