Sunday, May 19, 2013

Evolutionary Argument Against Naturalism (p. 4)


Evolutionary Argument Against Naturalism
 |   1   |   2   |   3   |   4   |   5   | 

The Defeater Thesis



Borrowing heavily from Plantinga, I’ll use the drug XX analogy[4]. Drug XX is a fictional drug that renders the cognitive faculties of the vast majority of those who take it unreliable. The type of unreliability in question is general cognitive unreliability, such that those so afflicted can’t even rely on their cognitive faculties to determine whether they’ve passed cognitive reliability tests and aren’t necessarily capable of detecting their own cognitive unreliability (note that this is the same type of unreliability that EAAN has in mind). Those afflicted can still have some true beliefs, however, including the belief that drug XX entered their system (though they might be mistaken about how they came to that belief). The only people immune to drug XX are those who have the blocking gene, a gene that produces a protein that blocks the effects of drug XX; the likelihood of any given individual having the blocking gene is small. Once a person ingests drug XX, the drug soon enters the bloodstream and has a high probability of rendering that person’s cognitive faculties unreliable within two hours. With that, consider the following scenarios:

Scenario (S1): I know that my friend Sam ingested drug XX and that twenty-four hours later he came to believe that a series of tests confirmed he has the blocking gene and that his cognitive faculties are reliable, though I have no independent reason for thinking this occurred. Since Sam obtained his belief about the cognitive tests long after he ingested drug XX, there’s a reasonable chance that this belief was produced by unreliable cognitive faculties. This would-be evidence for Sam’s cognitive reliability (Sam’s memory of passing the cognitive tests) is thus undermined by drug XX, and my belief Drug XX entered Sam’s bloodstream defeats my belief that Sam’s cognitive faculties are reliable.

Scenario (S2): After I learn about poor Sam I ingest drug XX while being aware of its potential effects. I know of no relevant difference that distinguishes my case from Sam’s. Some years after the incident I come to believe I have taken a series of tests that say I have the blocking gene and that my cognitive faculties are reliable, but since this belief came long after I ingested drug XX, it seems this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX, just as the would-be evidence for Sam’s cognitive reliability (Sam’s memory of passing the cognitive tests) is undermined by drug XX in (S1). Thus my belief Drug XX entered my bloodstream defeats my belief that R is true with respect to me.

The above two scenarios illustrate how drug XX can defeat R even for oneself, and also provide an undefeated defeater, since any alleged evidence one comes to believe in after ingesting drug XX would be undermined by drug XX. In scenario (S2), any evidence I give for the reliability of my cognitive faculties (e.g. my memories) would be presupposing the accuracy of my cognitive faculties—thus being circular reasoning. I consequently can’t maintain my belief in R in (S2) on the grounds that my cognitive faculties seem reliable to me, or that nothing seems to have changed for me, because all that relies on the very cognitive faculties that are being called into question. If I had knowingly taken a drug like drug XX that has a high probability of rendering my cognitive faculties unreliable, I can’t legitimately use something that’s the product of those cognitive faculties as evidence for my cognitive faculties being reliable. In scenario (S2), I cannot reasonably believe R is true for me while believing I ingested drug XX, and thus my belief Drug XX entered my bloodstream defeats R with respect to me.

Plantinga suggests N&E is like drug XX in its power to defeat R. N&E, like drug XX, plays a causal role in cognitive reliability; on N&E, it is naturalistic evolution that created humans and their cognitive faculties, so if on N&E it is likely that naturalistic evolution gave us unreliable cognitive faculties, that would seem to provide a defeater for R. Not only that, but Pr(R|N&E) being low would seem to provide an undefeated defeater for R.

While I think scenarios (S1) and (S2) illustrate the general principle, I think we can make the case stronger. Let’s consider a few more analogies, some of which refer to the XX-mutation—a genetic mutation that injects drug XX into the bloodstream soon after one is born.

Scenario (S3): A doctor injected me with drug XX soon after I was born (the doctor mistakenly thought he was injecting an important vaccine), and I come to believe in the following. I initially believe I am the product of a sort of evolution that makes the reliability of my cognitive faculties very likely. I am a renowned scientist who has built a machine that I know is capable of reliably detecting whether and when drug XX entered a person's bloodstream, and I am extremely confident about the reliability of this machine (as a qualified expert I have seen for myself that it works, and numerous scientific experts have unanimously agreed that it is reliable), such that if the machine reports drug XX entered my bloodstream, I would be as confident that the drug did enter my bloodstream as I would be in scenario (S2). I administer the test to myself and the machine reports that drug XX entered my bloodstream at around the time I was born; as such, I am as confident that drug XX entered my bloodstream as I am in scenario (S2). Later I come to believe I have taken an extensive battery of cognitive reliability tests to confirm that I have the blocking gene, but since this belief came long after drug XX entered my bloodstream, it seems this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX just as it is in scenario (S2), and so it seems my belief Drug XX entered my bloodstream soon after I was born defeats my belief that R is true with respect to me.

Scenario (S4): I come to believe in the following. The XX-mutation afflicts approximately one in a million individuals, with only a small percentage of those with the XX-mutation having the blocking gene. I have constructed a device similar to the one described in (S3) except this device detects whether evolution gave someone the XX-mutation, and I am as confident in the reliability of this machine as I am with the one in (S3). For most of my life I believe that I am the product of naturalistic evolution that makes my cognitive reliability very likely. After some years though I finally try the XX-mutation detector on myself. To my horror, the machine reports that I have the XX-mutation and thus that drug XX entered my bloodstream soon after I was born, thereby making me believe that naturalistic evolution gave me the XX-mutation. Later I come to believe that I’ve passed a series of cognitive tests to confirm that I have the blocking gene, but since I believe these tests happened long after drug XX entered my bloodstream, it seems that this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX just as it is in scenario (S3), and so it seems that I have a defeater for my belief that my cognitive faculties are reliable. My belief Drug XX entered my bloodstream soon after I was born defeats R for me here just as it does in scenario (S3). Similarly, my belief that I have the XX-mutation (since I believe this mutation injects drug XX into my bloodstream soon after I’m born) defeats my belief that R is true with respect to me.

Scenario (S5): I come to believe in the following. Via a nifty combination of scientific and philosophical argumentation, it is proven beyond all reasonable doubt that naturalistic evolution entails that the XX-mutation is inevitably a part of any humanoid’s genetics. The aforementioned scientific and philosophical argumentation says that given N&E, it is likely that the XX-mutation rendered everyone’s cognitive faculties unreliable, though on N&E there is also the small chance that everyone evolved the blocking gene, rendering everyone immune to drug XX. N&E entailing that the XX-mutation is part of our genetics thus makes Pr(R|N&E) low, and I thus come to believe Pr(R|N&E) is low. I also believe that some time after it was discovered that drug XX entered our bloodstreams, credible scientists ran cognitive tests to confirm that we have the blocking gene. But since this belief came long after drug XX entered my bloodstream, it seems that, as in scenario (S4), this would-be evidence for my cognitive reliability (my memory of the cognitive tests) is undermined by drug XX. My belief I have the XX-mutation defeats R for me here just as it does in scenario (S4).

Scenario (S6): The Probability Thesis is true and Pr(R|N&E) is low, but I do not initially believe this and instead think I am the product of a sort of evolution that makes my cognitive reliability very likely. Later, however, I study philosophy and see for myself that the probability of my humanoid cognitive faculties being reliable given that I am a product of naturalistic evolution is low. Afterwards I come to believe I have taken an extensive battery of tests that establish my cognitive reliability. But this belief came long after naturalistic evolution created my cognitive faculties, and I believe that given N&E, naturalistic evolution has a high probability of giving me unreliable cognitive faculties (by which I mean I believe that Pr(R|N&E) is low and that naturalistic evolution is what gives me my cognitive faculties). It thus seems that this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by the effects of naturalistic evolution, similar to how naturalistic evolution giving me the XX-mutation in scenario (S5) undermines my would-be evidence for R, and so it seems that I have a defeater for my belief that my cognitive faculties are reliable.

To strengthen the case further, let XX symbolize “drug XX entered one’s bloodstream” and let’s stipulate that in scenarios (S1) through (S5), Pr(R|XX) (the probability of R given that XX entered one’s bloodstream) is as low as Pr(R|N&E) is in scenario (S6). So above we have a slippery slope of scenarios. The idea is that if R is defeated in (S1), then it is defeated in (S2), and if R is defeated in (S3), then it is defeated in (S4), and so forth. Or to put it more explicitly in the form of a deductively valid argument where lines 2 through 6 are material conditionals (a material conditional is where “If P, then Q” just means “It’s not the case that P is true and Q is false”):
  1. R is defeated in scenario (S1).
  2. If R is defeated in scenario (S1), then R is defeated in scenario (S2).
  3. If R is defeated in scenario (S2), then R is defeated in scenario (S3).
  4. If R is defeated in scenario (S3), then R is defeated in scenario (S4).
  5. If R is defeated in scenario (S4), then R is defeated in scenario (S5).
  6. If R is defeated in scenario (S5), then R is defeated in scenario (S6).
  7. If R is defeated in scenario (S6), then the Defeater Thesis is true.
  8. Therefore, the Defeater Thesis is true.
Because premises (2) through (7) are material conditionals, the only way e.g. premise (2) could be false is if R is defeated in (S1) and R is not defeated in (S2). So identifying a false premise would identify where the slippery slope stops. But it’s difficult to find a premise that is plausibly false. It would clearly be irrational for me to believe that Sam’s cognitive faculties are reliable given the information I have in scenario (S1), in which case premise (1) is justifiably true. There also doesn’t seem to be a relevant difference between scenarios (S1) and (S2) whereby R is defeated in (S1) but not (S2), and the same goes for every pair of scenarios in premises (2) through (6): there doesn’t appear to be a relevant difference between the two scenarios in any individual premise where R is defeated in one but not the other. So each premise (2) through (6) would seem to be justifiably true.
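Since the argument is just a chain of modus ponens steps, its validity can also be checked mechanically. Here is a small sketch (Python, with all names my own) that enumerates every truth assignment to the seven claims involved and confirms that no assignment makes premises (1) through (7) true while the conclusion is false:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# d1..d6 stand for "R is defeated in (S1)".."(S6)"; dt for the Defeater Thesis.
counterexamples = []
for d1, d2, d3, d4, d5, d6, dt in product([True, False], repeat=7):
    premises = [
        d1,               # (1) R is defeated in scenario (S1)
        implies(d1, d2),  # (2)
        implies(d2, d3),  # (3)
        implies(d3, d4),  # (4)
        implies(d4, d5),  # (5)
        implies(d5, d6),  # (6)
        implies(d6, dt),  # (7)
    ]
    if all(premises) and not dt:
        counterexamples.append((d1, d2, d3, d4, d5, d6, dt))

print(len(counterexamples))  # 0: no counterexample, so the form is valid
```

Of course, validity is the cheap part; the philosophical work lies in whether each premise is true.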

What about premise (7)? Thinking that R is defeated in scenario (S6) but not for the general person who accepts the Probability Thesis seems especially implausible. Scenario (S6) is where I see the Probability Thesis is true and I believe I have passed a battery of cognitive tests that confirm my cognitive reliability. If R is defeated even here, it’s hard to imagine what relevant difference there might be between this and another person who sees that the Probability Thesis is true. Part of the point of (S6), after all, is that any would-be evidence for R would be undermined just as any would-be evidence for R is undermined in scenario (S5). It seems that for the Defeater Thesis to be false, R would have to be undefeated in scenario (S6).

If however R is not defeated in (S6), where does the slippery slope stop and why? Where does there exist a relevant difference between two scenarios that saves R from defeat? It’s particularly hard to find a relevant difference between (S5) and (S6). One might say in (S6) we know of overwhelming evidence in addition to N&E that makes R likely, whereas that’s not the case in (S5). But why exactly do we have this evidence in (S6) but not in (S5)? To make the problem more explicit, imagine that the two worlds of (S5) and (S6) are essentially identical apart from the differences entailed in (S5), such that I believe that the specific type of naturalistic evolution my species is a product of has given me genes that (together with proper nutrition etc.) make it likely that my cognitive faculties are reliable, that cognitive science and evolutionary biology have given us strong evidence for human cognitive reliability, that truth-conducive faculties are adaptive in Earth primates, and so forth. I also believe that we have overwhelming scientific evidence that we have the blocking gene to nullify the effects of the XX-mutation. Yet all this alleged evidence for cognitive reliability seems undermined when one accepts the evidence long after drug XX entered the bloodstream. So how exactly is it that the alleged evidence for R is undermined in scenario (S5) but not in scenario (S6)? If there is a relevant difference between the two scenarios, what is it?

One could believe that the relevant difference between scenarios (S5) and (S6) is N&E’s mechanism of probable cognitive unreliability (MoPCU), i.e. whatever it is that makes Pr(R|N&E) low. In scenario (S5), that mechanism is the XX-mutation; whereas in scenario (S6) N&E’s MoPCU is (presumably) something else, e.g. perhaps what makes Pr(R|N&E) low in (S6) is semantic content being causally irrelevant and invisible to natural selection coupled with the fact that the enormous variety of “garbage” belief sets akin to dreams vastly outnumber belief sets that accurately resemble one’s external reality. So, one objection to the argument is that N&E’s different MoPCU is the relevant difference between (S5) and (S6) such that R is defeated in (S5) but not in (S6), which would make premise (6) false.

But there’s a problem with N&E’s MoPCU being a relevant difference. N&E’s MoPCU in scenario (S6) is functionally identical to drug XX: it induces the same type of cognitive unreliability with the same probability (in the sense that Pr(R|XX) = Pr(R|N&E)). Which mechanism serves as N&E’s MoPCU thus does not seem to be a relevant difference, and apart from the MoPCU there doesn’t appear to be any other plausible relevant difference between (S5) and (S6). If so, then the would-be evidence for cognitive reliability in scenario (S6) is undermined just as it is in scenario (S5), and it seems arbitrary to hold that R is defeated in (S5) but not in (S6).




[4] Well, not the analogy, since Alvin Plantinga himself has several variants, and I’m using my own variety here. Still, drug XX rendering one’s cognitive faculties unreliable for some portion of those who ingest it is common in all of Plantinga’s renditions of this analogy that I’ve read.

24 comments:

  1. Hi Maverick Christian. I just thought I'd say a few words about this post.

    The approach is clever, but in the end I don't think it works. The biggest problem is that the skeptic does not, strictly speaking, need to cooperate with the game you have constructed. He is free to reject (S6) and to remain undecided about (S1)-(S5). In order to force decisions, you will need to argue each step, i.e. you will need to argue that (S1) gives a defeater and that there are no relevant differences between (Si) and (S(i+1)) for i=1,...,5.

    For a more cooperative skeptic like myself, I will point to (S1) and (S2). I think the fellow in (S1) has a defeater but not the fellow in (S2). And the relevant difference here is that in (S2), we are speaking of the fellow's own cognitive faculties. It seems to me that one could not possibly have a defeater for the belief that one's own cognitive faculties are unreliable to the extent Plantinga has in mind.

    After all, if one did have a defeater, i.e. a reason to reject belief in the reliability of his own cognitive faculties, then that would undermine the reason he thinks he has. So the only sensible move to make, that I can see, is to hold fast to the belief that one's own cognitive faculties are sufficiently reliable.

    1. The approach is clever, but in the end I don't think it works. The biggest problem is that the skeptic does not, strictly speaking, need to cooperate with the game you have constructed. He is free to reject (S6) and to remain undecided about (S1)-(S5).

      The skeptic could, but it would be a notable concession to admit that each premise is so plausible the skeptic does not disbelieve any particular one.

      For a more cooperative skeptic like myself, I will point to (S1) and (S2). I think the fellow in (S1) has a defeater but not the fellow in (S2). And the relevant difference here is that in (S2), we are speaking of the fellow's own cognitive faculties. It seems to me that one could not possibly have a defeater for the belief that one's own cognitive faculties are unreliable to the extent Plantinga has in mind.

First, remember that part of (S2) is that you know of no relevant difference between your case and Sam's. Second, in his own writings Plantinga has used the drug XX case for a full-grown adult. If you take drug XX while knowing of its effects and without knowing you are immune, then you have a defeater for the reliability of your cognitive faculties. I don’t see any reasonable way to avoid this, in part because even unreliable cognitive faculties can have true beliefs, and the fact that the belief about having ingested drug XX comes from cognitive faculties rendered probably unreliable by that very drug doesn’t seem to change this. Suppose for example Sam comes up to me and says he ingested drug XX, and neither of us knows whether Sam has the blocking gene. It seems I would still have a defeater for my belief that Sam’s cognitive faculties are reliable even though the report of Sam ingesting drug XX comes from Sam himself. When you think about it, what reason would I have to doubt Sam’s belief that drug XX has rendered his cognitive faculties unreliable? If my only reason for thinking Sam is wrong about his cognitive faculties being unreliable is that Sam has ingested drug XX and this drug has rendered his cognitive faculties unreliable, this reason doesn’t appear to work at all.

Similarly, if my own cognitive faculties tell me I ingested drug XX without knowing whether I have the blocking gene, I have a defeater for my belief that my cognitive faculties are reliable. What reason do I have to think that my cognitive faculties are wrong about this? That I ingested drug XX and that it has rendered my cognitive faculties unreliable? As in the case of Sam, that doesn’t seem to work.

To use an even stronger example, suppose I take drug XXX, which induces cognitive unreliability in 100% of people who take it. If I believed I had taken it, it seems I would certainly have a defeater for my belief that my cognitive faculties are reliable, and if nothing else this would seem to serve as a counterexample to the claim that “one could not possibly have a defeater for the belief that one's own cognitive faculties are unreliable.”

  2. To believe that your cognitive faculties have defeated their own reliability is...self-defeating. Incoherent.

    Imagine if we were to follow such a scenario through:

    I begin by believing that evolution and naturalism are correct.

    Somehow (using some other argument, since this one is rubbish) you convince me that, if naturalism and evolution are correct, my cognitive faculties are unreliable.

    Now I believe that my cognitive faculties are unreliable.

    Oh, but wait--you leveraged those very faculties to convince me that those faculties are unreliable, thus, if those faculties are unreliable, *I have no reason to believe your argument.*

    And I'm welcome to go back to thinking that my faculties are reliable, with the memory of your argument's own silly self-defeat as a sufficient warning against bothering with that tack again.

    Your argument is self-defeating, Wade. The entire approach is self-defeating. It's just a waste of time.

    1. To believe that your cognitive faculties have defeated their own reliability is...self-defeating.

      It's hard to see how it's self-defeating in a way that would save R from defeat.

      To illustrate, consider the specific scenario in which I believe I have ingested drug XX. Does the fact that my cognitive faculties might be rendered unreliable by drug XX produce a defeater-defeater for my belief that drug XX entered my bloodstream in a way that R is saved from defeat? It’s hard to see how that would make much sense. Even if it were true that ingesting drug XX defeats all my beliefs, including my reasoning that my beliefs are defeated, that does nothing to change the fact that upon ingesting drug XX I no longer have sufficient grounds for thinking my cognitive faculties are reliable (due to having ingested drug XX).

  3. The deductive argument you present here is not a deductive argument for the defeater thesis, Wade.

    It's a deductive argument for the conclusion that R is defeated in scenario 6, which bears only superficial resemblance to the defeater premise.

    Also, it is unsound.

The major distinction between S5 and S6 is that, in S5, we have the XX chemical, and I am willing to go along with the assertion that the existence of the XX chemical in someone's blood is evidence sufficient to overpower any other considerations regarding R.

However, in S6, no such overpowering evidence exists. In S6, we are given some evidence for R (the battery of tests) and actually no evidence at all against R--just the belief that the probability of R is low when weighted only by the assumed truth of two of the (presumably many) hypotheses you believe--and this simply is not enough for a rational person to claim that R is defeated.

    Again, we have no idea how your other hypotheses might affect your probability of R.

    More importantly, we have the evidence of your cognitive tests, against which no evidence has been arrayed at all.

    No defeater for R is present in scenario 6.

    And, of course, no defeater for R is present in scenario 7.

    In total, again:

    Your argument for your defeater premise is not actually an argument for your defeater premise--it's an argument for something that is only superficially related.

    Your argument for your defeater premise is unsound, since there is no defeater for R present in scenario 6.

Your defeater premise is false, since scenario 7 unarguably shows that a person can believe N+E, believe that P(R|N+E) is low, and still have no defeater for R.

    Your EAAN is unsound, since your defeater premise is false.

    For reference, S7, and associated argument.

    S7:

    Tim has come to believe that P(R|N+E) is low, but, being a renowned philosopher and neurologist, Tim has also come to believe M (some particular hypothesis regarding human cognitive faculties) such that P(R|N+E+M) is high. Tim, of course, has taken an extensive battery of tests and found that his cognitive faculties are, indeed, reliable, thus confirming this particular prediction made by his hypothesis, and, as a result, concludes that he does not have a defeater for R--he is quite justified in believing that his cognitive faculties are reliable.

    And, our deductive argument:

    Given "the defeater premise": "The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R."

    1.) If Tim does not have a defeater for R in S7, then the defeater premise is false.

    2.) Tim does not have a defeater for R in S7.

    3.) Therefore, the defeater premise is false.

    This argument is unsound. If you had any shred of integrity, Wade, you'd abandon it.

    1. Regarding the argument not really being an argument for the Defeater Thesis, I confess I didn’t see this sort of objection coming since scenario (S6) was basically just the Probability Thesis being true. Still, I’ve modified the blog entry to include the Defeater Thesis as part of the conclusion.

      Thinking that R is defeated in scenario (S6) but not for the general person who accepts the Probability Thesis seems implausible. Scenario (S6) is where I see the Probability Thesis is true and I believe I have passed a battery of cognitive tests that confirm my cognitive reliability. If R is defeated even here, it’s hard to imagine what relevant difference there might be between this and another person who sees that the Probability Thesis is true. Part of the point of (S6), after all, is that any would-be evidence for R would be undermined just as any would-be evidence for R is undermined in scenario (S5). It seems that for the Defeater Thesis to be false, R would have to be undefeated in scenario (S6).

      Some other modifications: you might (or might not) recall that in page 1 of this article I suggested that the type of cognitive unreliability that EAAN has in mind is the sort where one can’t even rely on their cognitive faculties to believe they’ve passed cognitive reliability tests. Drug XX of course shares this same type of cognitive unreliability. In the revised version of page 4, I’ve made it more explicit that drug XX and EAAN share the same type of cognitive unreliability.

      You may also recall that the likelihood that R is true given that one ingested drug XX is low. Let XX symbolize “drug XX entered one’s bloodstream.” I’ve revised page 4 to stipulate that in scenarios (S1) through (S5), Pr(R|XX) (the probability of R given that XX entered one’s bloodstream) is as low as Pr(R|N&E) is in scenario (S6). The idea that the probabilities were basically equivalent was there before, but I thought it better to make it more explicit.

The major distinction between S5 and S6 is that, in S5, we have the XX chemical, and I am willing to go along with the assertion that the existence of the XX chemical in someone's blood is evidence sufficient to overpower any other considerations regarding R.

      However, in S6, no such overpowering evidence exists.


      Except for N&E’s mechanism of probable cognitive unreliability (MoPCU) in (S6). N&E’s MoPCU in scenario (S6) is functionally identical to drug XX in that it (probabilistically) induces the same type of cognitive unreliability with the same probability (in the sense that Pr(R|XX) = Pr(R|N&E)). It would therefore seem like special pleading to hold that R is defeated in (S5) but not in (S6). If “I had passed cognitive reliability tests” is undermined by drug XX, then it seems that N&E’s MoPCU in (S6) would do the same thing.

      Regarding scenario S7, your use of M here is analogous to the blocking gene of drug XX. Imagine the following scenario:

      Tim has come to believe that P(R|XX) is low, but, being a renowned philosopher and biologist, Tim has also come to believe in B (the hypothesis that he has the blocking gene for drug XX) such that P(R|XX+B) is high. Tim, of course, has taken an extensive battery of tests and found that his cognitive faculties are, indeed, reliable, thus confirming this particular prediction made by his hypothesis, and, as a result, concludes that he does not have a defeater for R--he is quite justified in believing that his cognitive faculties are reliable.

      Except that he isn’t.

      Drug XX undermines his belief in the blocking gene and his belief that he passed cognitive reliability tests. Similarly, it seems that if N&E’s MoPCU is functionally identical to drug XX in the way that I described before (inducing the same type of cognitive unreliability with the same probability), Tim’s belief in M and the cognitive reliability tests seems to be likewise undermined in S7.

    2. Sorry, man, but you still aren't pointing to any underminer for R in S7.

      There is no drug XX in S7. There isn't *anything* that would undermine R in S7. Your response has a veneer of relevancy, but no actual relevant content.

      R is defeated in S7 iff P(R) is low given all of Tim's beliefs in S7, and he has four relevant beliefs:

      N
      E
      M
      that he has successfully tested his cognitive capacity.

      1.) Tim has a defeater for R iff P(R|N+E+M+tests) is low.

      2.) P(R|N+E+M+tests) is actually high--it is not low.

      3.) Therefore, Tim does not have a defeater for R in S7.

      You offer us no reason to think that he does, and I can easily prove that he does not. Your response is irrelevant, and your argument remains unarguably unsound.

    3. This comment has been removed by the author.

    4. This comment has been removed by the author.

    5. "You may also recall that the likelihood that R is true given that one ingested drug XX is low. Let XX symbolize “drug XX entered one’s bloodstream.” I’ve revised page 4 to stipulate that in scenarios (S1) through (S5), Pr(R|XX) (the probability of R given that XX entered one’s bloodstream) is as low as Pr(R|N&E) is in scenario (S6)."

      This is a new addition--nothing of the sort was actually stated in your argument. I'm glad you're trying to fix things, here, but your slap-dash patches are not sufficient. You're not addressing the core problems.

      If you are going to change the terms of your scenarios, ad hoc, after the fact, I reserve the right to change my evaluations of those scenarios.

      But, really, I don't have to. My deductive argument from S7 proves that your defeater premise is false. You have not offered any relevant response to that. Fishing through your scenarios is an irrelevant waste of time, since, as it stands, your EAAN is unsound regardless.

    6. The fact that the probability of R given E and N *on their own* is low isn't a defeater for anything. It isn't a defeater for R. It isn't a defeater for M. It isn't a defeater for his faith in the tests. P(R|E+N) being low does not indicate anything about the total probability of R unless N and E are *the only information Tim has available.*

      But they are not.

      This basic faulty assumption about how probability theory works is a core failure in this presentation of the argument, and until it is rectified, this argument cannot hope to be sound.
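The point being pressed here, that a conditional probability on partial information says little about the probability on one's total information, can be put numerically. The figures below are entirely invented for illustration (and the blog's reply, of course, is that the test evidence cannot be weighed at face value once drug-XX-style unreliability is in play):

```python
# Toy numbers, made up purely for illustration, showing that
# Pr(R | N&E) being low need not make Pr(R | total evidence) low.
p_r = 0.10                  # suppose Pr(R | N&E) alone is low
p_pass_given_r = 0.95       # reliable faculties usually pass the tests
p_pass_given_not_r = 0.05   # assumption: unreliable ones rarely seem to

# Bayes' theorem: Pr(R | N&E and passed tests)
posterior = (p_r * p_pass_given_r) / (
    p_r * p_pass_given_r + (1 - p_r) * p_pass_given_not_r)
print(round(posterior, 3))  # 0.679 on these toy numbers
```

On these assumptions the low prior is swamped by the test evidence; the dispute is over whether the test evidence may be treated this way.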

    7. In the actual evaluation of our present, real-world situation, we might start out by taking the step Plantinga takes:

      Given just N&E, SE is probably true.

      Given just SE, R is probably false.

      But, then we think, wait. If R is false, then we have no reason to trust either of those thoughts at all.

      And, hold on, it actually really seems as though R is true. Our cognitive faculties seem to be pretty reliable and powerful and useful.

      So we sit down and think about what cognitive faculties actually are, if naturalism is true, and we come to realize that there are plenty of hypotheses (call this set M) about what our cognitive faculties could be like for which N&E *should* be expected to produce reliable cognitive faculties.

      So we believe N&E, and we think that SE is a pretty good candidate for a hypothesis about cognitive faculties, given N&E, but we also have this set of other hypotheses, M, which are also good candidates.

      And, of course, we have a huge amount of evidence which supports the hypothesis that our cognitive faculties are indeed reliable--Plantinga doesn't deny this at all.

      So, when we sit down to evaluate our hypotheses regarding our cognitive faculties, SE might seem highly probable given N+E alone, but it makes predictions about our cognitive faculties which are not borne out by the evidence.

      Meanwhile, the probability of M given N+E alone may be low, but M makes predictions about our cognitive faculties which *are* borne out by the evidence.

      So, we can think of P(SE|N+E) and P(M|N+E) as priors, after which we have some set of evidence, such that P(evidence|M) > P(evidence|SE).

      And not just a little bit greater--a *lot* greater.

      You want us to look at those priors and say, "based on these priors, SE is more likely than M, and therefore ~R is more likely than R, so we should immediately start discarding all future evidence!"

      But that's not how it works.

      We still have to update those probabilities as new evidence comes in. And, of course, the evidence supports M far better than it supports SE--such that, I think, it is extremely reasonable (far more reasonable than the alternative) to conclude that the posterior probability of M is higher than the posterior probability of SE, and that (as a result) our posteriors P(R) > P(~R).
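      The update being described can be sketched with purely hypothetical numbers (the thread gives none; `prior_SE`, `prior_M`, `lik_SE`, and `lik_M` below are illustrative placeholders): even if SE starts with the higher prior given N&E, evidence that fits M far better can leave M with the higher posterior.

```python
# Illustrative Bayes update with made-up numbers. SE starts with the
# higher prior given N&E, but the evidence strongly favors M.
prior_SE = 0.7   # hypothetical P(SE | N&E)
prior_M = 0.3    # hypothetical P(M | N&E)

lik_SE = 0.01    # hypothetical P(evidence | SE): SE's predictions fit the evidence poorly
lik_M = 0.5      # hypothetical P(evidence | M): M's predictions fit the evidence well

# Normalize over the two hypotheses and compute posteriors.
norm = prior_SE * lik_SE + prior_M * lik_M
post_SE = prior_SE * lik_SE / norm
post_M = prior_M * lik_M / norm

print(round(post_SE, 3), round(post_M, 3))  # → 0.045 0.955
```

      With these made-up numbers the posterior for SE collapses to about 0.045 while M rises to about 0.955, which is the reversal the comment describes.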

      And the only defense you have against that is to arbitrarily shut down the collection of evidence at a point where it seems to support what you would like to conclude.

      But this is not what rational people do.

      So, while it might be true that any person who shares the same irrational approach to evidence handling that you suggest would have defeaters for R in some or all of your scenarios--perhaps even in S7--it is not the case that a rational person would have a defeater for R in S7 or S5* (and likely not in the remainder of your scenarios, either, though, again, more detail is required and it's not really worth fishing through them, vague as they are, since S7 proves your argument unsound).

    8. P(R|E+N) being low does not indicate anything about the total probability of R unless N and E are *the only information Tim has available.*

      But they are not.

      This basic faulty assumption about how probability theory works...


      …is not an assumption I necessarily make. Nowhere do I say anything like “Pr(R|N&E) is low, therefore Pr(R) is low, therefore R is defeated for the naturalist.” My deductive reasoning for R being defeated is entirely different, relying largely on analogies (see premises (1) through (7) of my deductive argument).

      Still, this might be worth exploring. Does Pr(R|E+N) indicate anything about the total probability of R if N and E are _not_ the only information available, and would this be a basic faulty assumption of probability theory?

      Clearly, Pr(R|E+N) = Pr(R) doesn’t hold true for just any R, E, and N, and to think otherwise would be to make a faulty assumption about probability theory—but it’s also an assumption I quite clearly never make. Does thinking that Pr(R|E+N) = Pr(R) holds true for _some_ R, E, and N make a faulty assumption of probability theory? No, and we can provide a counterexample. Let R represent “Sam picked a red robot,” let E be “the robot has ears,” and let N be “the robot has a necktie.” Suppose there are one hundred robots and Sam picks one at random. Of the robots in question, fifty of them are red and have ears, neckties, and legs. The other fifty robots are blue, and none of the blue robots have neckties or legs. Our background info is that the robot Sam picked has ears, a necktie, and legs. With this background information, Pr(R|E+N) = Pr(R), even though E and N are not our only data (we also know the robot has legs; it's just that this fact is irrelevant here).
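      The arithmetic of the robot counterexample can be checked by brute enumeration. This is a sketch under one stated assumption: the text doesn't say whether the blue robots have ears, so they're given ears here (the conclusion comes out the same either way).

```python
# Enumerate the 100 robots from the counterexample: 50 red robots with
# ears, neckties, and legs; 50 blue robots with no neckties and no legs.
# Assumption not in the text: the blue robots are given ears here.
robots = ([{"red": True, "ears": True, "necktie": True, "legs": True}] * 50 +
          [{"red": False, "ears": True, "necktie": False, "legs": False}] * 50)

def pr_red(robots, **conditions):
    """P(robot is red | robot satisfies all the given conditions)."""
    pool = [r for r in robots if all(r[k] == v for k, v in conditions.items())]
    return sum(r["red"] for r in pool) / len(pool)

p_given_EN = pr_red(robots, ears=True, necktie=True)
p_given_all = pr_red(robots, ears=True, necktie=True, legs=True)
print(p_given_EN, p_given_all)  # → 1.0 1.0
```

      Since only red robots have neckties, conditioning on the necktie already settles the color; the extra datum about legs changes nothing, so conditioning on E and N alone gives the same answer as conditioning on everything we know.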

      This reply is simply a side note on probability theory. Again, nowhere do I say anything like “Pr(R|N&E) is low, therefore Pr(R) is low, therefore R is defeated for the naturalist.” The response I quoted seems to be making a rebuttal to a line of reasoning I never actually make.

    9. Sorry, man, but you still aren't pointing to any underminer for R in S7.

      I thought I did, but in case it wasn’t clear, the underminer is N&E’s mechanism of probable cognitive unreliability (MoPCU) in S7 which is functionally identical to ingesting drug XX (assuming the Probability Thesis in S7 is identical to the one in EAAN). It’s true there is no drug XX in S7, but S7 has something functionally identical to drug XX (inducing the same type of cognitive unreliability with the same probability). This seems sufficient to undermine R.

      You make reference to evidence for cognitive reliability, but to illustrate why I don’t think this works, consider the following scenario after I ingest drug XX, where “B” is “I have the blocking gene.” “Sure, given just the fact that I ingested drug XX, the drug probably rendered my cognitive faculties unreliable. Fortunately, after I took drug XX, my cognitive faculties tell me B is true, such that Pr(R|XX&B) is high; my cognitive faculties also tell me there’s lots of evidence for my cognitive reliability, so R is saved from defeat!” Why doesn’t this reasoning work? Because the would-be evidence for my cognitive reliability relies on my cognitive faculties; I can’t take something that presupposes my cognitive reliability and use it as evidence for my cognitive reliability; that would be circular reasoning.

      Similarly, reasoning like this won’t work: “Sure, given just N&E, naturalistic evolution probably gave me unreliable cognitive faculties. Fortunately, after naturalistic evolution gave me my cognitive faculties, my cognitive faculties tell me there’s excellent evidence for my cognitive reliability, thus R is saved from defeat!” The reason this doesn’t work is because it’s using circular reasoning; taking something that presupposes cognitive reliability (the evidence that your cognitive faculties tell you exists) as evidence for your cognitive reliability. Any attempt to provide evidence for your cognitive reliability fails for this reason; any time you use evidence or argument for your cognitive faculties being reliable, you’re relying on your cognitive faculties. It would be like asking a man if he’s honest and taking his answer “Yes” as proof of his honesty. Rather than trying to prove cognitive reliability by evidence or argument, our cognitive reliability seems to be a basic belief that we assume is true unless we have reason to doubt it.

    10. "I thought I did, but in case it wasn’t clear, the underminer is N&E’s mechanism of probable cognitive unreliability (MoPCU) in S7 which is functionally identical to ingesting drug XX (assuming the Probability Thesis in S7 is identical to the one in EAAN). It’s true there is no drug XX in S7, but S7 has something functionally identical to drug XX (inducing the same type of cognitive unreliability with the same probability). This seems sufficient to undermine R."

      It is not. Sorry. No rational person would conduct the overall probability evaluation in the manner required for this to be true.

    11. "The reason this doesn’t work is because it’s using circular reasoning; taking something that presupposes cognitive reliability (the evidence that your cognitive faculties tell you exists) as evidence for your cognitive reliability"

      Wade, if we were to take this seriously, we would have to conclude that *no one could ever be warranted in believing R.*

      Every single person would have a defeater for virtually all of their beliefs. Why?

      Because if we cannot take as justification for R anything that presupposes R, then we cannot take anything as justification for R at all. Any justification for anything at all must presuppose R. Period.

      We could construct this argument.

      Given C: "It’s using circular reasoning; taking something that presupposes cognitive reliability (the evidence that your cognitive faculties tell you exists) as evidence for your cognitive reliability"

      1.) All evidence for any hypothesis presupposes reliable cognitive faculties.
      2.) The reliability of one's cognitive faculties is a hypothesis.
      3.) If C is true, it is circular reasoning to take anything as evidence for the reliability of one's cognitive faculties.
      4.) If it is circular reasoning to take anything as evidence for the reliability of one's cognitive faculties, then one cannot provide any rational warrant for the reliability of one's cognitive faculties.
      5.) If one cannot provide any rational warrant for the reliability of one's cognitive faculties, one's R is defeated.

      6.) If C is true, R is defeated for everyone. (from 5)

      7.) However, R is not defeated for everyone.

      8.) Therefore, C is false.

      In fact, how it works in practice (and you pointed this out yourself) is that we take the reliability of our cognitive faculties for granted (some might say "properly basic") until a defeater is presented for them. This is what you do, by your own admission.

      It is what I do as well.

      And, of course, you have not presented a defeater for my belief in the reliability of my cognitive faculties.

      P(R|N&E) alone is not sufficient, since I have other relevant beliefs and evidence.

      Since I can presume that my faculties are reliable in order to make this evaluation (and, indeed, if I cannot, then you must admit that everyone's belief in the reliability of their own cognitive faculties is defeated) I can simply evaluate P(R|all of my beliefs and evidence) and find that that value is, in fact, high, and that I have no defeater for R.

      Since I believe N&E and I accept that P(R|N&E) is low but I do not have a defeater for R, your defeater premise is false, and your argument for your defeater premise is unsound, as is your argument proper.

      You've already exhausted your two responses, here:

      First, you said that I had to evaluate P(R|N&E) first, and ignore everything after that.

      However, as I pointed out, this is a trivial logical error that any competent logician can spot easily. P(R) should be conditioned on all of my relevant beliefs and evidence without any sort of priority, or else I would be making a logical mistake. If I perform the probability evaluation correctly, I am left with a high total P(R) and no defeater for R.

      Second, you suggested that any evidence I wanted to offer in favor of R constituted circular reasoning, period, since offering any evidence for R meant pre-supposing R.

      However, if this is true, then R is defeated for all of us, and I can either just ignore you (since you must not believe that your own faculties are reliable) or ignore you because you're a massive hypocrite who holds your own reasoning to a different standard than you hold everyone else's.

      If I can evaluate P(R) rationally, on the initial assumption that my faculties are reliable (which is what you demand we allow you to do for yourself) then P(R) is high and I have no defeater for R and your argument is proven unsound.

      No matter which way we go, your argument loses.

    12. First, you said that I had to evaluate P(R|N&E) first, and ignore everything after that.

      However, as I pointed out, this is a trivial logical error that any competent logician can spot easily. P(R) should be conditioned on all of my relevant beliefs and evidence without any sort of priority, or else I would be making a logical mistake.


      First, nowhere do I say anything like “Pr(R|N&E) is low, therefore P(R) is low, therefore R is defeated for the naturalist.” I’m not trying to evaluate Pr(R); my deductive reasoning for R being defeated is entirely different, largely relying on analogies.

      Second, what about trying to conditionalize R on other beliefs in the case of Tim? This approach is problematic since in this case Tim accepts N&E and Pr(R|N&E) being low, and you can only legitimately condition R on your relevant beliefs if those beliefs are justified. You want to get around Pr(R|N&E) being low by Tim accepting M and T (T = passing cognitive reliability tests) such that Pr(R|N&E&M&T) is high, but conditioning R on M and T is valid only if Tim’s belief in M and T is justified, and Tim’s belief in M and T is justified only if Tim’s belief in R is justified, and Tim’s belief in R is justified only if Pr(R|N&E) being low doesn’t defeat R—which is the very thing you’re trying to prove. Thus for your argument you’d have to implicitly assume what you’re trying to prove: that Pr(R|N&E) being low doesn’t defeat R for one who accepts the Probability Thesis and N&E; without this assumption Tim is not justified in believing R, which means he wouldn’t be justified in believing M and T, which means he couldn’t legitimately condition R on M and T.

      If that sounds a bit dizzying, consider that the situation is analogous to the following: I ingest drug XX while knowing of its effects, knowing that Pr(R|XX) is low. If we let “B” be “I have the blocking gene,” suppose I want to condition R on all of my relevant beliefs and evidence, which includes B and T, and Pr(R|XX&B&T) is high—but this will work only if I’m justified in believing B and T, and I’m justified in believing B and T only if my belief in R is justified, but because ingesting drug XX defeats R, my belief in B and T is defeated as well, because it is my cognitive faculties that tell me that B and T are true.

      So the upshot is that you’ll need another way to attack my deductively valid argument for the Defeater Thesis. Might I suggest attacking a premise? In which case, which premise is false such that R is defeated in one scenario but not the other, and what is the relevant difference between those two scenarios in the false premise?

      Regarding another argument of yours:

      4.) If it is circular reasoning to take anything as evidence for the reliability of one's cognitive faculties, then one cannot provide any rational warrant for the reliability of one's cognitive faculties.

      I disagree with this premise. As I showed in my Why Evidentialism Sucks article, the notion that no belief can be justified unless it’s based on sufficient evidence turns out not to work. Some basic beliefs have to be accepted without evidence, and R appears to be one of them.

    13. <>

      This is wrong. What I did successfully prove is that Tim doesn't have a defeater for R at all. Here's how it works.

      At the outset, Tim is justified in believing R as properly basic--we agree on this front.

      At this point, Tim is justified in believing N and E and M and T, since he is justified in believing R, but he wants to evaluate R, so he conditions R on all of the information available to him--and, of that information, only N and E and M and T affect the posterior probability of R.

      So, after conditioning R on the information available to him, Tim finds that P(R|N and E and M and T) is high, and therefore he does not have a defeater for R.

      And your argument remains a failure, your trivially illogical attempt to evade my refutation of it notwithstanding.

    14. If you want a premise that is false, look no further than premise 2:

      In scenario 1, you do have a defeater for Sam's R, since you do not have any confirmation of the results of his cognitive tests yourself (and his testimony is not equivalent to direct access to those results).

      In scenario 2, however, you do have direct access to the results of those tests, and hence R is not defeated for you: P(R|XX and tests) is still high.

      So, now, with your defeater premise falsified and your argument for your defeater premise rebutted, will you finally give up this stupid, stupid argument?

    15. Thanks, at least, for *finally* fixing the wording of your scenarios so that it is now clear which premise is false.

  4. It continues to rankle me just how genuinely sophomoric this "argument" for the defeater premise is. I think we really need to establish, finally, that it is a failure and why.

    To begin, Wade asserts in p1 that "R is defeated in scenario 1."

    This phrasing is odd, and the claim is unsubstantiated. What does it mean, Wade, to say that "R is defeated"? Does it mean that Sam has a defeater for R? That you have a defeater for R? There is no objective "defeated" condition for propositions.

    So, Wade. Explain who has a defeater for R in scenario 1, and what that defeater is. Can you? Or does this argument fail in its very first premise?

    Replies
    1. Lol, though, reading back through, I recall now how thoroughly that has already been done.

      Oh well. Two disproofs are better than one, and I very much doubt you can do any better salvaging your argument from this train of thought, Wade, than you did with the last one.

    2. So, Wade. Explain who has a defeater for R in scenario 1, and what that defeater is.

      In (S1) I have a defeater for R with respect to Sam (and I believe this was made clear in the description for (S1)).

  5. In fact, really, the choice of language "is defeated" is sufficient on its own to render this argument unsound. It is nonsense language, in context. Means nothing.

    What matters is *who has a defeater for R*, and that isn't well established in any of Wade's scenarios.

    This argument really is childish, Wade.
