Sunday, May 19, 2013

Evolutionary Argument Against Naturalism (p. 5)


Evolutionary Argument Against Naturalism

Conclusion



The evolutionary argument against naturalism goes like this:
  1. Pr(R|N&E) is low
  2. The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R.
  3. Anyone who has a defeater for R has a defeater for pretty much any other belief she has, including (if she believed it) N&E.
  4. Therefore, the devotee of N&E (at least such a devotee who is aware of the truth of premise 1) has a self-defeating belief.
One of the big reasons to accept the Probability Thesis (premise 1) is that if N&E were true, then the semantic content of our beliefs would be causally irrelevant in the sense that a belief causes stuff by virtue of its neurophysiological (NP) properties, and not by its semantic content. If a belief had the same NP properties but different content, the same behavior would result (the same neurophysiological properties mean we would have the same electrical impulses travelling down the same neural pathways and thus issuing the same muscular contractions). Even if that weren’t the case, the ANPD scenario suggests it’s still possible for “garbage” beliefs to be associated with electrochemical reactions producing advantageous behavior. If semantic epiphenomenalism (SE) isn’t true on N&E, then semantic pseudo-epiphenomenalism (SPE) is, and both Pr(R|N&E&SE) and Pr(R|N&E&SPE) are low, thereby making Pr(R|N&E) low.
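To make the structure of that last step explicit (a sketch of my own; the claim that SE and SPE exhaust the options on N&E is the post's):

```latex
% On N&E, either SE or SPE holds, so by the law of total probability:
\Pr(R \mid N\&E) = \Pr(R \mid N\&E \,\&\, SE)\,\Pr(SE \mid N\&E)
                 + \Pr(R \mid N\&E \,\&\, SPE)\,\Pr(SPE \mid N\&E)
% If both conditional probabilities on the right are low, then this
% weighted average -- and hence Pr(R|N&E) itself -- is low.
```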

The argument for the Defeater Thesis (premise 2) is that if R is defeated in (S1), then it is defeated in (S2), and if R is defeated in (S3), then it is defeated in (S4), and so forth, where (S6) is the scenario of a person who accepts both N&E and the Probability Thesis. The general idea is that the effect of an evolutionary naturalist believing Pr(R|N&E) to be low is akin to believing that drug XX has been put into one’s body (where drug XX destroys the cognitive reliability of most who take it).

The upshot of all this is that there is a serious conflict between science and naturalism, because the conjunction of naturalism and evolution is in an interesting way self-defeating.


28 comments:

  1. <<
    1.) Pr(R|N&E) is low

    2.) The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R.
    >>

    1 is false, as I've demonstrated above.

    Neither of your arguments for 1 is sound. Let's look at both, just to remember why:

    1.) If Pr(RA|N&E) is low, then Pr(R|N&E) is low.
    2.) Pr(RA|N&E) is low.
    3.) Therefore, Pr(R|N&E) is low.

    Premise 1 remains entirely unjustified. As a strict conditional, it is obviously false. As a material conditional, it is utterly baseless, and you offer no logical justification for it at all.

    Premise 2 is entirely unjustified. In your attempt to justify it, you write this:

    "It would still be possible that the electrochemical reactions that produce advantageous behavior also generate mostly true beliefs, but given the causal irrelevance of the semantic content (such that the semantic content could literally be anything at all without affecting behavior), it would seem to be the most serendipitous of coincidences if that were to occur. Thus in the absence of further relevant information, the likelihood that their cognitive faculties are reliable (given N&E) is low."

    But there is no valid logic here at all--just a failed attempt at what appears to be induction, but clearly is not.

    Your only real effort to justify premise 2 comes in the form of this line,

    "it would seem to be the most serendipitous of coincidences if that were to occur"

    But, of course, I demonstrate why this is false, given evolution, in my argument above.

    The ANPD thought experiment takes the same basic logical approach. However, it admits the possibility that "that semantic content just is the NP properties" and proposes a "semantic pseudo-epiphenomenalism" wherein false beliefs produce the same behavior as true ones.

    This is *precisely* the scenario that my argument above addresses, and I show why, though it is true that false beliefs *can* produce the same behavior as true ones, there is still selective pressure towards *true* beliefs instead of false ones.

    Indeed, I think that this version of semantic pseudo-epiphenomenalism is true, and that the semantic epiphenomenalism you discuss in your first argument is irrelevant, since it is false on what I consider to be naturalism and evolution. This second argument points out (essentially correctly) that the two are functionally very similar, and indeed my disproof of the core assumption in the ANPD argument (that evolution has no cause to select for true beliefs over false ones) is, as I pointed out, a disproof of the core assumption in your first argument as well.

    In both cases, the premise that "2.) Pr(RA|N&E) is low." is both not logically justified by the argument you present, and falsified by the argument that I present.

    This alone is sufficient to dismantle your position, but let's continue on to your second premise in the next post.

    ReplyDelete
    Replies
    1. <<
      1.) Pr(R|N&E) is low

      2.) The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R.
      >>

      1 is false, as I've demonstrated above.


      You really haven't demonstrated that anywhere "above."

      As I said in the article, "what’s true for the aliens is true for us (remember, we’re basically considering the probability of R on just N&E)."

      You've made a lot of posts here; perhaps we should discuss this in a forum?

      Delete
    2. Yeah, these got a little out of order when I was copying and pasting. Should say "below" not "above."

      The problem is that it isn't any more true for the aliens than it is for us--and it's not true for us.

      Delete
  2. In defense of premise 2, you offer the following deductive argument:

    "
    1.) If R is defeated in scenario (S1), then R is defeated in scenario (S2).
    2.) If R is defeated in scenario (S2), then R is defeated in scenario (S3).
    3.) If R is defeated in scenario (S3), then R is defeated in scenario (S4).
    4.) If R is defeated in scenario (S4), then R is defeated in scenario (S5).
    5.) If R is defeated in scenario (S5), then R is defeated in scenario (S6).
    6.) R is defeated in scenario (S1).
    7.) Therefore, R is defeated in scenario (S6).
    "

    Let's say I were to agree with 6.

    You say that each of your premises is a material conditional statement, but you provide no appropriate justification for any of them.

    In each case, the corresponding strict conditional statement is certainly false, so we can't appeal in that direction for justification.

    The only remaining option, then, is to make an independent probabilistic statement about each consequent and antecedent, which you do not actually bother to do in a single case.

    This complete lack of justification for each premise is, itself, sufficient to warrant the dismissal of your argument. Essentially, your scenarios are simply not analogous, and you offer no reason at all to think that they are. This ham-fisted attempt at a defense for your defeater premise has to be considered a failure.

    ReplyDelete
    Replies
    1. You say that each of your premises is a material conditional statement, but you provide no appropriate justification for any of them.

      We can't have an infinite regress of justifications; we must eventually reach some stopping point. In this case, the stopping points were the premises of these arguments.

      Delete
    2. Not good enough. Whatever you just happen to pull out of your ass is not an acceptable stopping point for any rational person, and that's all these premises are.

      And, of course, I go on to show that at least one of them must be false, below.

      Delete
    3. Not good enough. Whatever you just happen to pull out of your ass is not an acceptable stopping point for any rational person, and that's all these premises are.

      By my lights it’s difficult to find a premise that is plausibly false. It would clearly be irrational for me to believe that Sam’s cognitive faculties are reliable given the information I have in scenario (S1), and there doesn’t seem to be a relevant difference between e.g. scenarios (S1) and (S2) whereby R is defeated in (S1) but not (S2). I'd say the same goes for the other premises, but as for a specific premise (apparently you disagree with the premise in which the consequent is scenario (S3)) I'll deal with that below since that's where the conversation seems to lie.

      Delete
  3. As an amusing side note, we can see the duplicity in your general approach quite clearly when we look at scenario 3:

    "
    I administer the test to myself and the machine reports that drug XX came into my bloodstream at around the time I was born. Later I come to believe that I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after drug XX entered my bloodstream, I conclude that I have a defeater for my belief that my cognitive faculties are reliable."

    Here, you want to say that one test you administer to yourself, after the XX has been in your bloodstream, constitutes a warrant for believing that the proposition "I have XX in my bloodstream" is true.

    However, you *immediately* turn around and say that a *battery* of tests which tell you that your faculties are reliable should be dismissed.

    Why the disparity, here? In both cases, we're talking about tests administered after you've allegedly been injected with the drug. Why believe one set of tests but not the other?

    If we are to conclude that our faculties are unreliable, *then we should accept neither test.*

    If we are to withhold judgement about our faculties until after all of the tests, then the actual information gained from all of those tests is relevant: the information gained from the test for XX, our background information about XX, and the information gained from the tests of our cognitive faculties. Information about *those* tests is relevant.

    We do not, in fact, have a defeater for R in this scenario. You quite clearly have failed to give us enough information to establish that.

    You allude to one bit of information that suggests we have a defeater, and another bit of information which contradicts it. Without more details, it would be *irrational* to say that we have a defeater for R in this scenario, yet that is *precisely* what you have to do.

    Since the consequent in S3 is false, we can look at the other premises in your argument:

    Presuming that P6, P1, and P2 are true (I don't really care if P1 and P2 are true--they are vapid and baseless, as I discuss above), then the antecedent of P3 is true as well--which would mean that the antecedent of P3 is true while its consequent is false, which would mean that P3 is false and your argument is unsound. (A problem with using material conditionals is that your argument is defeasible in this way.)

    And so, we see that in your argument proper, your first premise is false, and both of the arguments you offer for it are demonstrably unsound. Your second premise is likely false as well, and we can say with great confidence that your supporting argument for it is unsound.

    And, with that, I think we can--indeed, must, if we are to pretend to be rational people--safely dismiss your argument as a failure.

    ReplyDelete
    Replies
    1. We do not, in fact, have a defeater for R in this scenario. You quite clearly have failed to give us enough information to establish that.

      Perhaps you're right; I've modified the scenario whereby I'm as confident that drug XX has entered my bloodstream as I am in scenario (S2) where I personally ingested drug XX. In that case, I would have a defeater for the belief that my cognitive faculties are reliable. If you still think R isn't defeated here, why is it not defeated in (S3) when it is defeated in (S2)? What's the relevant difference here?

      Delete
    2. In each of your scenarios, R isn't defeated unless we have some memory or other record we can reference (a test that doesn't rely on the compromised cognitive faculties, basically).

      That condition might obtain in the early scenarios (it might not--your descriptions are ambiguous, suggesting you haven't actually thought this through enough to provide any rational warrant for any of your conclusions) but it certainly doesn't obtain by the time we get to scenario 3.

      Hence, either premise 6 is false or premise 3 is false. Either way, your argument fails.

      Delete
    3. In each of your scenarios, R isn't defeated unless we have some memory or other record we can reference (a test that doesn't rely on the compromised cognitive faculties, basically).

      We can reference the belief that drug XX entered my bloodstream, and to say I shouldn’t believe drug XX entered my bloodstream because drug XX entered my bloodstream just doesn’t make sense, even if drug XX rendered my cognitive faculties unreliable. The belief Drug XX entered my bloodstream isn’t exactly the sort of thing I could believe and be mistaken about if the only thing that would undermine my cognitive reliability is drug XX entering my bloodstream (we can stipulate that in these scenarios I have no reason to believe anything else might be undermining my cognitive reliability). In contrast, the belief I have passed effective cognitive reliability tests is the sort of thing I can believe and be mistaken about if drug XX entered my system (arguably, it is likely I would be mistaken about it since only a small percentage of people have the blocking gene). I’ve modified the blog entry to make this point more explicit.

      This incidentally answers your earlier question about why the disparity between accepting the test for drug XX entering my bloodstream but not accepting the cognitive reliability test. In the former case, it’s not reasonably possible I could hold the belief Drug XX entered my bloodstream and be mistaken about it being true, but it is very possible I could believe I have passed effective cognitive reliability tests and be mistaken about that belief.

      You’re not completely off track though; it could be that as a result of drug XX I am mistaken about how I came to believe that drug XX entered my bloodstream, e.g. perhaps another scientist told me I have the XX-mutation and my unreliable cognitive faculties told me something different (like the renowned scientist part of scenario (S3)). That said, it is still not reasonably possible I could hold the belief Drug XX entered my bloodstream and be mistaken about it being true, but it is very possible I could believe I have passed effective cognitive reliability tests and be mistaken about that belief.

      Delete
    4. You should reject chains of reasoning that defeat themselves.

      In scenario 6, we have four situational elements to consider:

      1.) the reasoning that leads you to believe that R is defeated is reliable.

      2.) the reasoning that leads you to believe that R is defeated is not reliable.

      3.) the reasoning that leads you to believe R is reliable.

      4.) the reasoning that leads you to believe R is not reliable.

      If you believe 1 and 3, you should not believe 1. 1 is therefore self-defeating and should be rejected. Thus, you are left with no reason to reject 3, and conclude with R.

      If you believe 1 and 4, you should not believe 1. 1 is self-defeating and should be rejected, but 4 should leave you suspicious of R nonetheless.

      If you believe 2 and 3, you should obviously accept R.

      If you believe 2 and 4, you should, again, not reject R, but should be suspicious of R.

      It seems as though in your scenario, we have a person who believes 1 and 4, and that situation, for a rational person, plays out such that the person accepts R.

      Thus, the antecedent for your premise (whichever: the one that says "If R is defeated in scenario 5, it is defeated in scenario 6") is false.

      Thus, either a prior premise is false (it doesn't matter which one, and I'm not willing to play your childish material-conditional shell game) or this premise is itself false. Either way, the argument is unsound.


      Or, if you prefer, you can think about it this way:

      In scenario 6, you have some warrant for believing that R is defeated and some warrant for believing that R is not defeated. Both are presented explicitly (your decision that the probability premise is probably true on the one hand and your battery of tests on the other). Both have attached uncertainty. We do not have enough information to evaluate which should have the more epistemic weight.

      Thus, even if we ignore the fact that rejecting R is self-defeating, we are left without enough information to evaluate whether or not you have a defeater for R in scenario 6.

      Either way, the argument is left unsound.

      Delete
    5. It seems as though in your scenario, we have a person who believes 1 [the reasoning that leads you to believe that R is defeated is reliable.] and 4 [the reasoning that leads you to believe R is not reliable.], and that situation, for a rational person, plays out such that the person accepts R.

      That doesn’t seem to follow. Consider for example scenario (S2) where I ingest drug XX knowing of its effects (it has a high probability of rendering one’s cognitive faculties unreliable). I conclude that my reasoning about thinking R was not defeated was mistaken (so in this sense I accept 4), since the alleged cognitive tests happened after I took drug XX and so this belief was likely produced by unreliable cognitive faculties. But do I really have adequate grounds for thinking R is true for me given the information I have? It would seem not, and thus it would seem that R is defeated here whether I think so or not (given the information about e.g. drug XX and my ingesting of it), and this would seem to provide a counterexample to your reasoning here.

      That said, the claim in the argument is not that I in e.g. (S2) have adequate grounds for believing that the reasoning for my thinking R is defeated is reliable, but rather that R is defeated in (S2)—the difference being that the latter makes no claims about whether I am justified in (S2) in thinking that the reasoning that leads me to believe that R is defeated is reliable. In (S2), it would seem that the information I have at the time undermines my warrant for thinking R is true. The same would appear to be the case for (S3), (S4), (S5), and (S6) due to the apparent truth of the premises in my argument. Speaking of which, since you reject the conclusion of my deductively valid argument, is there a premise you believe to be false?

      Thus, either a prior premise is false (it doesn't matter which one, and I'm not willing to play your childish material-conditional shell game) or this premise is itself false. Either way, the argument is unsound.

      Do you really think identifying a false premise is a childish shell game? Seems to me a rather reasonable request when the deductive argument is valid and I claimed there is no premise that is plausibly false.

      In scenario 6, you have some warrant for believing that R is defeated and some warrant for believing that R is not defeated. Both are presented explicitly (your decision that the probability premise is probably true on the one hand and your battery of tests on the other).

      Well, not quite. The would-be warrant for thinking that R is not defeated is entirely undermined by the effects of N&E just as it is in scenario (S5) via the XX-mutation.

      But as I said, the issue in the argument is not whether I have warrant for thinking it is defeated in those scenarios but whether it is actually defeated in those scenarios (a claim could be true without my having warrant for it in a given situation). How I view it is that we initially start with belief in R and (usually) we have warrant for it. If we believe we’ve ingested drug XX while being aware of its effects, that warrant is undermined, and any would-be warrant acquired after drug XX (such as the belief that one has passed a battery of cognitive tests) is likewise undermined. After drug XX, I no longer have adequate grounds to believe R is true for me in scenarios (S2) through (S5), and if that is true then R is defeated for me in scenarios (S2) through (S5). Agree or disagree?

      Delete
  4. Here's why induction has a high probability of arising given evolution (this was written specifically in response to the meme containing C. S. Lewis's version, but it refutes your version quite nicely as well, Wade.)

    Understanding why this argument is a failure comes down to understanding models and how we construct these sets of beliefs about reality and use them to predict how things in reality will behave. We want our model to require the least amount of programming that we can get away with--the less programming our model has, the more efficient it is. This is what we're measuring when we talk about complexity (Kolmogorov complexity) and it is the basis for the formalized inductive axiom that we call, colloquially, Occam's razor.

    Reductionism is a "thing" in philosophy precisely because of this tendency to prefer efficiency in our models, and false beliefs do not lend themselves to reductionism. For instance, eat eagle eggs and you get spiritual powers. Take eagle eggs, and the eagle god will frown upon you. Eat chicken eggs, get spiritual powers. Take chicken eggs--nobody cares. Maybe the Eagle god beat the chicken god in a squawking contest and now Chicken has to forfeit his eggs to Human in payment for some debt of Eagle's.

    These webs of myth and superstition are not reducible--each time we find some new phenomenon that we want to be able to model, we need a new story or a new superstition. This is the opposite of what we actually see our faculties doing--what we actually see, in those cases where we take our models to be trending towards truth, is that simpler and simpler principles can be used to predict wider and wider ranges of phenomena.

    That's the sort of efficiency we're talking about, here, and I think there is very good reason to think that accurate belief structures are more efficient than false belief structures with identical predictive power. Indeed, this is precisely what the formalized version of Occam's razor says:

    between models with identical predictive power, those which are more efficient are more likely to be true.

    This particular point is entailed directly from one of the core axioms of inductive logic. So, let's call this point 1.

    1.) Accurate belief systems should be expected to be more efficient than inaccurate belief systems with the same predictive power. This seems to be pretty firmly established, so let's move on to point 2:
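    The efficiency intuition behind point 1 can be sketched crudely in code (my own construction, not part of the original comment): treating zlib's compressed size as a rough stand-in for description length, a single general rule comes out shorter than an enumerated web of special-case myths covering the same cases.

```python
import zlib

# Crude illustrative sketch: use zlib's compressed size as a rough
# stand-in for description length (true Kolmogorov complexity is
# uncomputable). A single general rule should compress smaller than
# a web of one-off superstitions with the same coverage.

def description_length(model: bytes) -> int:
    """Compressed size in bytes: a rough proxy for model complexity."""
    return len(zlib.compress(model, 9))

general_rule = b"eggs of any bird are food; eating them nourishes you."

# Hypothetical myth-web: one special-case story per bird.
myth_web = b" ".join(
    f"eating {bird} eggs grants {power}; taking them angers the {bird} god.".encode()
    for bird, power in [
        ("eagle", "spirit powers"),
        ("chicken", "nothing at all"),
        ("duck", "rain luck"),
        ("goose", "golden dreams"),
    ]
)

# The "efficient" model needs fewer bits for the same predictive coverage.
print(description_length(general_rule) < description_length(myth_web))  # True
```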

    ReplyDelete
    Replies
    1. Reductionism is a "thing" in philosophy precisely because of this tendency to prefer efficiency in our models, and false beliefs do not lend themselves to reductionism

      If that were true then reductionism is refuted by the existence of false beliefs.

      Or perhaps what you really mean is that if reductionism were true, false beliefs would be disadvantageous to survival. But if reductive materialism implies that the semantic content of our beliefs is causally irrelevant, this wouldn't be true. For more on that see my article on Plantinga’s Argument against Materialism.

      Delete
    2. <<If that were true then reductionism is refuted by the existence of false beliefs.>>

      No, it isn't. That doesn't follow logically from anything I wrote.

      <<Or perhaps what you really mean is that if reductionism were true, false beliefs would be disadvantageous to survival.>>

      This isn't precisely what I meant, but close enough.

      <<But if reductive materialism implies that the semantic content of our beliefs is causally irrelevant, this wouldn't be true.>>

      It doesn't. This is nonsense. Also, this is false as a conditional statement. Reductionism and reductive materialism are not the same thing.

      Also, I explain why reductionism has selective advantage below.

      Delete
  5. 2.) Between belief systems with identical predictive power, more efficient systems are more valuable, when it comes to reproduction, than less efficient systems.

    Evolution is very much concerned with efficiency. Frankly, I would think that this would be obvious, but, let's recap: A particular trait becomes more frequent within a population if members of that population who have that trait reproduce more, and it becomes less frequent if those members reproduce less. Reproduction requires resources--actual, physical resources which the organism must acquire from its environment. Model-building (forming those belief systems I talk about above) also requires resources. The more complex the model, the more resources the organism spends building it. The more resources the organism spends building it, the more resources the organism must collect in order to reproduce. Investing a lot of resources in a model or belief system, in order to make that model robust and powerful (in terms of its ability to make predictions that will be useful for the organism building it) is a strategy that some animals take and others don't. Humans, for instance, are exemplars of this approach, while paramecium aren't particularly well known for their elaborate belief-systems.

    You can think of it like hiring a think tank before launching a company. You spend some capital up front to get a bunch of smart people to plan out a strategy for you. They're not actually making anything you can sell (the equivalent of reproducing, in the world of business) but they're laying the groundwork so that your later endeavors will be more successful. You want your think tank to be good at predicting the markets you'll be swimming in, of course, but you also don't want to spend any more money on it than you have to. You might hire a lot of bright young kids from a variety of fields. But, of course, if you find that one of your hires is just sitting there spouting crap about massaging your life forces with his spirit palms, you're going to get rid of him--he's probably not worth the money you're spending on him, and if you save that money to use for start-up capital, your business will likely be more successful later on. Evolution works the same way. If an organism can spend fewer resources for the same result, that's a selective advantage. Hence, again, point 2:

    2: Among belief systems with identical predictive capabilities, more efficient belief systems are more valuable, when it comes to reproduction, than less efficient belief systems.

    Once we have point 1 and point 2, the flaw in this argument becomes clear:

    1.) Accurate belief systems should be expected to be more efficient than inaccurate belief systems with the same predictive power.

    2.) Between belief systems with identical predictive power, more efficient systems are more valuable, when it comes to reproduction, than less efficient systems.

    In addition, we should expect evolution to select for those traits which are more valuable, when it comes to reproduction. Indeed, this is the core principle of evolution by natural selection.

    Therefore, we can conclude with point 3:

    3.) We should expect evolution to select for accurate belief systems over inaccurate belief systems.

    And, with 3 established, the core claim in this argument is falsified.

    Plantinga before you and Lewis before him base their argument on the assertion that evolution has no reason to select for accurate belief systems--accurate models of reality--over inaccurate ones. With my three points and a basic understanding of evolution, we can safely conclude that this assertion is false. And, with the failure of its core premise, the EAAN stands neatly refuted.

    ReplyDelete
    Replies
    1. None of this addresses the argument that, on naturalism, we have no reason to expect the most efficient and selectable physiologies to be those that yield mostly true beliefs. E.g. if semantic epiphenomenalism is true on naturalism, the notion that the most efficient and selectable physiologies would also happen to generate mostly true beliefs--given that we could even have "garbage" beliefs unrelated to the external world (as in dreams) without those beliefs affecting behavior--seems like it would have to be a very lucky coincidence indeed.

      Delete
    2. Actually, it does address that specifically. You can't just hand-wave it away.

      If semantic epiphenomenalism is not consistent with what I wrote above, then semantic epiphenomenalism is not entailed by naturalism.

      Delete
    3. Actually, it does address that specifically.

      I reread what you wrote. Not only is the argument from semantic epiphenomenalism not mentioned specifically, semantic epiphenomenalism itself is never mentioned at all. Indeed, your whole argument seems to simply presuppose that semantic epiphenomenalism (the view that the semantic content of a belief is causally irrelevant) is false. Consider these excerpts from what you’ve written:

      Understanding why this argument is a failure comes down to understanding models and how we construct these sets of beliefs about reality and use them to predict how things in reality will behave.

      And later…

      Model-building (forming those belief systems I talk about above) also requires resources. The more complex the model, the more resources the organism spends building it. The more resources the organism spends building it, the more resources the organism must collect in order to reproduce. Investing a lot of resources in a model or belief system, in order to make that model robust and powerful (in terms of its ability to make predictions that will be useful for the organism building it) is a strategy that some animals take and others don't. Humans, for instance, are exemplars of this approach, while paramecium aren't particularly well known for their elaborate belief-systems.

      But if semantic epiphenomenalism is true, why not a simple set of beliefs that is utterly wrong about the environment? Part of the issue of semantic epiphenomenalism (SE) and semantic pseudo-epiphenomenalism (SPE) is that we could be living in a virtual dream-world utterly divorced from the way reality is—even if said dream-world would have to comprise a simple set of simple beliefs. On SE, the semantic content of beliefs has no causal relevance at all, and on SPE, the semantic content might influence behavior in a way that has something to do with what’s going on, e.g. whereby the belief Grass is air is what causes one to get a drink of water when thirsty (thereby mimicking the effects of SE when SE says it’s the NP properties and not the semantic content that causes behavior).

      Your rebuttal to the Probability Thesis seems to simply presuppose the falsity of both SE and SPE and assume that the semantic content of beliefs is causally relevant in the way the “normal” person believes it to be. To some degree I sympathize; when I heard Plantinga say things like “Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect…” I was unconvinced. It wasn’t until Plantinga argued that naturalism implies some sort of epiphenomenalism, where the semantic content of beliefs need not have anything to do with one’s external environment since they would be causally irrelevant, that I came to believe that Pr(R|N&E) was low. To refute the Probability Thesis you’ll have to attack the view that naturalism entails SE, and you haven’t quite done that here.

      Delete
    4. 1.) If evolution and naturalism are true, organisms with a set of cognitive faculties that produce models that produce more accurate predictions with a given set of resources are more likely to thrive than organisms with a set of cognitive faculties that produce models that produce less accurate predictions with the same set of resources.

      2.) If a model produces more accurate predictions with a given set of resources than another model that produces less accurate predictions with the same set of resources, the former model is more likely to be true and the latter model is more likely to be false.

      3.) (from 1 and 2) If evolution and naturalism are true, organisms with a set of cognitive faculties that produce models that are more likely to be true are more likely to thrive than organisms with a set of cognitive faculties that produce models that are less likely to be true.

      4.) If one set of faculties produces models that are more likely to be true and another set of faculties produces models that are less likely to be true, the former set of faculties is more reliable and the latter set of faculties is less reliable.

      5.) (from 3 and 4) If evolution and naturalism are true, organisms with a more reliable set of faculties are more likely to thrive than organisms with a less reliable set of faculties.

      6.) If evolution and naturalism are true and organisms with a more reliable set of faculties are more likely to thrive than organisms with a less reliable set of faculties, then reliable faculties offer a selective advantage.

      7.) (from 5 and 6) If evolution and naturalism are true, then reliable faculties offer a selective advantage.

      8.) If evolution and naturalism are true and reliable faculties offer a selective advantage, then reliable faculties are an expected consequence of evolution.

      9.) (from 7 and 8) If evolution and naturalism are true, then reliable faculties are an expected consequence of evolution.

      10.) If it is the case that, "if evolution and naturalism are true, then reliable faculties are an expected consequence of evolution," then P(R|E+N) is not low.

      11.) (from 9 and 10) P(R|E+N) is not low.

      This argument does not mention semantic content at all. It does not presuppose anything about SE one way or the other.

      Honestly, SE is simply irrelevant to this thread. It has no implications regarding the reliability of our cognitive faculties on materialism.

    5. On at least one version of reductive materialism:

      Both what you think of as "semantic content" and behavior are determined by physical structures (models) in our brains. On this hypothesis, accurate models produce true semantic content and corresponding behavior.

      If this is true, then the claim that "semantic content" is causally relevant is false--a simple case of mistaking correlation for causation and ignoring a lurking third variable: the models themselves.

      In addition, this does nothing to raise "the intellectual price tag" of materialism.

      The fact that semantic content itself is not causally relevant says nothing about the reliability of our cognitive faculties, if this hypothesis is true. Both behavior and semantic content are determined by models, and enjoy a predictable correlative relationship as a result.

      When it comes to the EAAN, then, we can avoid the entire problem by simply pointing out the possibility that this hypothesis is true. If this hypothesis is true, then:

      Reliable faculties tend to produce accurate models.

      Accurate models make good predictions efficiently.

      Evolution selects for the ability to make good predictions efficiently.

      Thus evolution selects for accurate models.

      Thus evolution selects for reliable faculties.

      And, since reliable faculties tend to produce accurate models and accurate models produce true semantic content, evolution also selects for true semantic content, albeit indirectly.

      There is no bullet to bite, here. Plantinga fails to really think through how mental processes work on naturalism, does not address this fairly obvious hypothesis at all, and simply assumes that it is false by claiming that "semantic content is relevant."

      (I personally think we should use the term "semantic content" differently--more in line with how I use it in the beginning of my last post--but, for now, let's stick to Plantinga's usage.)

      However, if my hypothesis is correct (and neither you nor Plantinga provides any rationale at all for thinking that it is not), then your P2 here is simply false.

      I think that my hypothesis is correct, and you have offered no reason to reject it, so I will reject your P2. As I said, you're basically just begging the question by assuming that my particular materialistic hypothesis is false at this step.

      And since you are simply wrong about the implications of this rejection re: the EAAN, I can basically leave it at that.

      Semantic content is not causally relevant. It is determined by models, such that accurate models produce true semantic content. Accuracy in models is what matters.

    6. (response copied from your post on Plantinga's argument against materialism)

    7. 1.) If evolution and naturalism are true, organisms with a set of cognitive faculties that produce models that produce more accurate predictions with a given set of resources are more likely to thrive than organisms with a set of cognitive faculties that produce models that produce less accurate predictions with the same set of resources.

      While not explicitly stated, one might get the impression that you're presupposing the falsity of semantic epiphenomenalism (SE) here, because if the semantic content of beliefs is causally irrelevant, why would it matter how accurate our models of reality are? They could be wildly inaccurate, akin to those in dreams, without affecting behavior. So if SE is true on naturalism, there doesn’t appear to be any reason on N&E alone to believe (1) is true.

      Or do you mean something else besides “models,” and if so, what? Because my first impression was that you were speaking to the semantic content of beliefs regarding our models of reality.

  6. I think what you need to do, Wade, is take the time to carefully define this semantic epiphenomenalism that is so critical to the structure of your argument and prove that it is entailed by naturalism.

    That would at least be a start on clearing up about half of the fatal flaws in your argument.

    1. I think what you need to do, Wade, is take the time to carefully define this semantic epiphenomenalism that is so critical to the structure of your argument and prove that it is entailed by naturalism.

      I did that in pages 2 and 3 of this blog entry, and I've restructured the blog to make this clearer. That said, the blog entry that really shows naturalism entailing semantic epiphenomenalism is Plantinga's argument against materialism.

    2. The article to which you link does not have any argument which proves that SE is entailed by naturalism. However, in that article, I explain both why the argument that you actually present is a failure and why it is not relevant to the EAAN.

    3. The article to which you link does not have any argument which proves that SE is entailed by naturalism. However, in that article, I explain both why the argument that you actually present is a failure and why it is not relevant to the EAAN.

      I read your response; you don’t do anything to address or even dispute the argument that tries to prove that SE is entailed by naturalism. Instead you accept the idea that a belief’s semantic content is causally irrelevant.
