Friday, May 31, 2013

Is God Morally Responsible/Good/Praiseworthy If He Cannot Do Evil?

It’s nice to see my blog seemingly influence people, even if it’s only in names and taglines. Among those who have read my blog, however rarely, is a blogger named The Analytic Philosopher whose tagline is “Philosophical Musings,” which of course is similar to my own tagline “Philosophical Musings etc.” (I say seemingly influence because the influence might not be real.[1]) In a similar vein, the name of my blog seems to have at least partly inspired the name of the Maverick Atheism Blog.[2]

Still, what I’d really like to discuss is the following claim in the Analytic Philosopher’s entry Jesus and the Principle of Alternate Possibilities. Originating in Harry Frankfurt’s famous paper “Alternate Possibilities and Moral Responsibility,” the principle of alternative possibilities (PAP) says that “a person is morally responsible for what he has done only if he could have done otherwise.”[3] Jesus being essentially morally perfect means that in every possible world in which Jesus exists, Jesus is morally perfect. Since we Christians believe that Jesus is God incarnate, we of course believe that Jesus is morally perfect. With that in mind, consider the Analytic Philosopher’s argument:
Suppose Jesus in [some possible world] W promises His mother, Mary, that He will be home by some time t in order to build her a table. Of course, since Jesus is Lord He fulfills His obligation and, as promised, builds her a table after having arrived at home at t. Given PAP, Jesus was morally responsible for what He did only if He could’ve done otherwise. But then a problem is evident. That means there is a possible world W* that is just like the W except that, in W*, Jesus breaks His promise and so doesn’t make it home at t. So here is a dilemma for the theist who thinks Jesus is essentially morally perfect. Either Jesus isn’t essentially morally perfect, because there are worlds in which He goes wrong, or it is impossible for him to be morally responsible for obligatory acts.
As the blogger notes, one “might be inclined to reject the PAP.” Should one reject it?

The Principle of Alternative Possibilities



Frankfurt counterexamples, named after the very same Harry Frankfurt who coined the term “principle of alternative possibilities,” are proposed examples intended to show that the PAP is false. To use something in the style of Frankfurt (indeed, borrowing a fair amount from his original paper), suppose Jones chooses some action (say, stealing a candy bar), but Black has a device in the brain of Jones such that if Jones were to choose not to steal the candy bar, the device would force Jones to steal the candy bar. Jones, however, decides to steal the candy bar, and so the device is not used. In this case, Jones could not have done otherwise but to steal the candy bar, yet he is still responsible for his action. Thus, the PAP is (in at least some sense) false.

Still, isn’t there some sense where the PAP is true? Suppose Jones could not even try to not steal the candy bar, because the device prevents even this from occurring. Would we say Jones is not responsible for his action then? Actually, he still could be. Suppose that if Jones were about to try not to steal the candy bar (suppose the device or the mad scientist had some sort of precognitive ability), the device would mind-control Jones so that he could not even try to not steal the candy bar. If Jones were to try to steal the candy bar though, the device would do nothing. Suppose then that Jones does try to steal the candy bar and the device does nothing. Even in this situation where Jones is incapable of trying to not steal the candy bar, Jones is morally responsible for his attempt at thievery.

We could even modify the scenario slightly so that Jones is incapable of doing something morally wrong, because the device steps in whenever he would make an immoral decision. Suppose Jones always chooses what is right and good, so that the device never intervenes. Then even though Jones is not morally free in the sense that he is capable of doing evil as well as good, Jones would still be a good man.

The upshot from all this is that the PAP is clearly false. God, and thus Jesus as the Son of God, cannot even try to do evil; there is no possible world where God (or Jesus as God incarnate) does evil. But the inability of an agent to even try to do evil does not imply that the agent isn’t morally responsible for his actions, as the case of Jones demonstrates.

Still, in the absence of a mad scientist’s device present in God’s mind, how is it that God (and thus Jesus) can be morally responsible while being unable to even attempt evil? This might depend on what exactly one means by “morally responsible,” but if being morally praiseworthy is a sufficient condition for moral responsibility, then I think Jesus is morally responsible. But in the case of God, how can a being who cannot fail to do good be morally praiseworthy or even rightly be considered as good?

Is God Morally Praiseworthy If He Can’t Do Evil?



One could object to God’s essential goodness by saying that because God does good in all possible worlds, it makes no sense to say that God is himself good, morally praiseworthy, or worthy of worship, since God has no choice but to do good, and there is no possible world where God does evil. When it comes to morally significant freedom (in the sense of being able to freely choose between good and evil actions), God has none.

The first thing to say is that this objection seems to rely on the PAP, which has already been shown to be false by Frankfurt counterexamples. One can use Frankfurt counterexamples to show that it’s possible to be good without having morally significant freedom, as in the case of Jones freely doing good without having morally significant freedom due to Black’s device. With the PAP shown to be false, the onus is on the person who claims that God’s lacking morally significant freedom somehow leads to God not being praiseworthy and good.

Still, let’s ignore the burden of proof here. Can we show this objection is unsound even without Frankfurt-style cases? I think so, because one possibility is that God is worthy of worship at least in part because God is the Good, i.e. the paragon and model of moral goodness. As an analogy, the fidelity of an audio recording of a symphony is measured against how closely it matches the actual, original performance; the original performance sets the standard for audio fidelity. Similarly, God’s perfectly holy nature sets the standard of what moral goodness is, e.g. God is by nature loving and just, and these attributes become moral virtues for us. God is basically morality incarnate, and just as we should follow morality above all else even if morality’s nature cannot be different from what it is, so too should we follow God above all else even if God’s nature cannot be different from what it is. God is worthy of worship even without morally significant freedom, in part because, unlike us humans, God is the Good.

In a similar vein, just as morality is good even if its nature cannot be other than what it is, so too can God (and his actions) be good even if his nature cannot be other than what it is. If it makes sense to praise the virtues of morality, it makes sense to praise the Good. Thus, at least partly because God is the Good, God is worthy of praise and worship. And if God being morally praiseworthy is sufficient to make him morally responsible, then God is morally responsible. I think this would also wind up with Jesus being morally responsible even if there is no possible world where he does evil. If God is the Good, his goodness and his good deeds (like incarnating himself as a human and undergoing extreme suffering to save our souls) are worthy to be praised. Jesus, as God incarnate, is thus morally praiseworthy and morally responsible.




[1] In addition to saying “That my tagline is the same as yours is purely coincidental” after admitting to reading my blog, he explained in a subsequent communication that he never once noticed the tagline when reading my blog—despite the tagline being present in every one of my blog’s posts. One wonders if he didn’t at least subconsciously notice the tagline upon thinking of a tagline for his own blog. Still, it could be coincidental. The Analytic Philosopher noted I wasn’t exactly the first blogger to come up with the phrase “philosophical musings” (as a Google search confirms).

[2] This one I have more direct evidence for; see the blog’s Q & A page.

[3] Frankfurt, Harry G. (1969). “Alternate Possibilities and Moral Responsibility,” The Journal of Philosophy 66, p. 829.

Sunday, May 19, 2013

Evolutionary Argument Against Naturalism (p. 5)


Conclusion



The evolutionary argument against naturalism goes like this:
  1. Pr(R|N&E) is low
  2. The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R.
  3. Anyone who has a defeater for R has a defeater for pretty much any other belief she has, including (if she believed it) N&E.
  4. Therefore, the devotee of N&E (at least such a devotee who is aware of the truth of premise 1) has a self-defeating belief.
One of the big reasons to accept the Probability Thesis (premise 1) is that if N&E were true, then the semantic content of our beliefs is causally irrelevant in the sense that a belief causes stuff by virtue of its neurophysiological (NP) properties, and not by its semantic content. If a belief had the same NP properties but different content, the same behavior would result (the same neurophysiological properties means we would have the same electrical impulses travelling down the same neural pathways and thus issuing the same muscular contractions). Even if that weren’t the case, the ANPD scenario suggests it’s still possible for “garbage” beliefs to be associated with electrochemical reactions producing advantageous behavior. If semantic epiphenomenalism (SE) isn’t true on N&E, then semantic pseudo-epiphenomenalism (SPE) is, and both Pr(R|N&E&SE) and Pr(R|N&E&SPE) are low, thereby making Pr(R|N&E) low.
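That last step is just a weighted average. Assuming, as the text claims, that on N&E either SE or SPE must hold (so that the two are exhaustive and mutually exclusive given N&E), the law of total probability gives:

```latex
\Pr(R \mid N\&E) = \Pr(R \mid N\&E\&SE)\,\Pr(SE \mid N\&E)
                 + \Pr(R \mid N\&E\&SPE)\,\Pr(SPE \mid N\&E)
```

Since the two weights Pr(SE|N&E) and Pr(SPE|N&E) sum to 1, Pr(R|N&E) is a weighted average of two low probabilities and so is itself low, no matter how the weights are distributed between SE and SPE.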

The argument for the Defeater Thesis (premise 2) is that if R is defeated in (S1), then it is defeated in (S2), and if it is defeated in (S2), then it is defeated in (S3), and so forth, where (S6) is the scenario of a person who accepts both N&E and the Probability Thesis. The general idea is that the effect of an evolutionary naturalist believing Pr(R|N&E) to be low is akin to believing that drug XX has been put into one’s body (where drug XX destroys the cognitive reliability of most who take it).

The upshot of all this is that there is a serious conflict between science and naturalism, because the conjunction of naturalism and evolution is in an interesting way self-defeating.


Evolutionary Argument Against Naturalism (p. 4)


The Defeater Thesis



Borrowing heavily from Plantinga, I’ll use the drug XX analogy[4]. Drug XX is a fictional drug that renders the cognitive faculties of the vast majority of those who take it unreliable. The type of unreliability in question is general cognitive unreliability, such that those afflicted can’t even rely on their cognitive faculties to determine whether they’ve passed cognitive reliability tests, and aren’t necessarily capable of detecting their own cognitive unreliability (note that this is the same type of unreliability that the EAAN has in mind). Those afflicted can still have some true beliefs, including the belief that drug XX entered their system (though they might be mistaken about how they came to that belief). The only people immune to drug XX are those who have the blocking gene, a gene that produces a protein blocking the drug’s effects, but the likelihood of any given individual having the blocking gene is small. Once a person ingests drug XX, the drug soon enters the bloodstream and has a high probability of rendering that person’s cognitive faculties unreliable within two hours. With that, consider the following scenarios:

Scenario (S1): I know that my friend Sam ingested drug XX and that twenty-four hours later he came to believe that a series of tests had confirmed that he has the blocking gene and that his cognitive faculties are reliable, though I have no independent reason for thinking this occurred. Since Sam obtained his belief about the cognitive tests long after he ingested drug XX, there’s a reasonable chance that this belief was produced by unreliable cognitive faculties. This would-be evidence for Sam’s cognitive reliability (Sam’s memory of passing the cognitive tests) is thus undermined by drug XX, and my belief Drug XX entered Sam’s bloodstream defeats my belief that Sam’s cognitive faculties are reliable.

Scenario (S2): After I learn about poor Sam I ingest drug XX while being aware of its potential effects. I know of no relevant difference that distinguishes my case from Sam’s. Some years after the incident I come to believe I have taken a series of tests that say I have the blocking gene and that my cognitive faculties are reliable, but since this belief came long after I ingested drug XX, it seems this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX, just as the would-be evidence for Sam’s cognitive reliability (Sam’s memory of passing the cognitive tests) is undermined by drug XX in (S1). Thus my belief Drug XX entered my bloodstream defeats my belief that R is true with respect to me.

The above two scenarios illustrate how drug XX can defeat R even for oneself, and also provide an undefeated defeater, since any alleged evidence one comes to believe in after ingesting drug XX would be undermined by drug XX. In scenario (S2), any evidence I give for the reliability of my cognitive faculties (e.g. my memories) would be presupposing the accuracy of my cognitive faculties—thus being circular reasoning. I consequently can’t maintain my belief in R in (S2) on the grounds that my cognitive faculties seem reliable to me, or that nothing seems to have changed for me, because all that relies on the very cognitive faculties that are being called into question. If I had knowingly taken a drug like drug XX that has a high probability of rendering my cognitive faculties unreliable, I can’t legitimately use something that’s the product of those cognitive faculties as evidence for my cognitive faculties being reliable. In scenario (S2), I cannot reasonably believe R is true for me while believing I ingested drug XX, and thus my belief Drug XX entered my bloodstream defeats R with respect to me.

Plantinga suggests N&E is like drug XX in its power to defeat R. N&E, like drug XX, plays a causal role in cognitive reliability; on N&E, it is naturalistic evolution that created humans and their cognitive faculties, so if on N&E it is likely that naturalistic evolution gave us unreliable cognitive faculties, that would seem to provide a defeater for R. Not only that, but Pr(R|N&E) being low would seem to provide an undefeated defeater for R.

While I think scenarios (S1) and (S2) illustrate the general principle, I think we can make the case stronger. Let’s consider a few more analogies, some of which refer to the XX-mutation—a genetic mutation that injects drug XX into the bloodstream soon after one is born.

Scenario (S3): A doctor has injected me with drug XX soon after I was born (the doctor mistakenly thought he was injecting an important vaccine), and I come to believe in the following. I initially believe I am the product of a sort of evolution that makes the reliability of my cognitive faculties very likely. I am a renowned scientist who has built a machine that I know is capable of reliably detecting whether and when drug XX entered a person's bloodstream, and I am extremely confident about the reliability of this machine (I as a qualified expert see that it works for myself and numerous scientific experts have unanimously agreed that it is reliable), such that if the machine reports drug XX entered my bloodstream, I would be as confident that the drug did enter my bloodstream as I would be in Scenario (S2). I administer the test to myself and the machine reports that drug XX entered my bloodstream at around the time I was born; as such, I am as confident that drug XX entered my bloodstream as I am in scenario (S2). Later I come to believe I have taken an extensive battery of cognitive reliability tests to confirm that I have the blocking gene, but since this belief came long after drug XX entered my bloodstream, it seems this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX just as it is in scenario (S2), and so it seems my belief Drug XX entered my bloodstream soon after I was born defeats my belief that R is true with respect to me.

Scenario (S4): I come to believe in the following. The XX-mutation afflicts approximately one in a million individuals, with only a small percentage of those with the XX-mutation having the blocking gene. I have constructed a device similar to the one described in (S3) except this device detects whether evolution gave someone the XX-mutation, and I am as confident in the reliability of this machine as I am with the one in (S3). For most of my life I believe that I am the product of naturalistic evolution that makes my cognitive reliability very likely. After some years though I finally try the XX-mutation detector on myself. To my horror, the machine reports that I have the XX-mutation and thus that drug XX entered my bloodstream soon after I was born, thereby making me believe that naturalistic evolution gave me the XX-mutation. Later I come to believe that I’ve passed a series of cognitive tests to confirm that I have the blocking gene, but since I believe these tests happened long after drug XX entered my bloodstream, it seems that this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by drug XX just as it is in scenario (S3), and so it seems that I have a defeater for my belief that my cognitive faculties are reliable. My belief Drug XX entered my bloodstream soon after I was born defeats R for me here just as it does in scenario (S3). Similarly, my belief that I have the XX-mutation (since I believe this mutation injects drug XX into my bloodstream soon after I’m born) defeats my belief that R is true with respect to me.

Scenario (S5): I come to believe in the following. Via a nifty combination of scientific and philosophical argumentation, it is proven beyond all reasonable doubt that naturalistic evolution entails that the XX-mutation is inevitably a part of any humanoid’s genetics. The aforementioned scientific and philosophical argumentation says that given N&E, it is likely that the XX-mutation rendered everyone’s cognitive faculties unreliable, though on N&E there is also the small chance that everyone evolved the blocking gene to render everyone immune to drug XX. N&E entailing that the XX-mutation is part of our genetics thus makes Pr(R|N&E) low, and I thus come to believe Pr(R|N&E) is low. I believe that, some time after it was discovered that drug XX entered our bloodstream, credible scientists ran cognitive tests to confirm that we have the blocking gene. But since this belief came long after drug XX entered my bloodstream, it seems that, as in scenario (S4), this would-be evidence for my cognitive reliability (my memory of the cognitive tests) is undermined by drug XX. My belief I have the XX-mutation defeats R for me here just as it does in scenario (S4).

Scenario (S6): The Probability Thesis is true and Pr(R|N&E) is low, but I do not initially believe this and instead think I am the product of a sort of evolution that makes my cognitive reliability very likely. Later however I study philosophy and see for myself that the probability of my humanoid cognitive faculties being reliable given that I am a product of naturalistic evolution is low. Afterwards I come to believe I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after naturalistic evolution created my cognitive faculties and I believe that given N&E, naturalistic evolution has a high probability of giving me unreliable cognitive faculties (by which I mean I believe that Pr(R|N&E) is low and naturalistic evolution is what gives me my cognitive faculties), it seems that this would-be evidence for my cognitive reliability (my memory of passing the cognitive tests) is undermined by the effects of naturalistic evolution similar to how naturalistic evolution giving me the XX-mutation in scenario (S5) undermines my would-be evidence for R, and so it seems that I have a defeater for my belief that my cognitive faculties are reliable.

To strengthen the case further, let XX symbolize “drug XX entered one’s bloodstream” and let’s stipulate that in scenarios (S1) through (S5), Pr(R|XX) (the probability of R given that drug XX entered one’s bloodstream) is as low as Pr(R|N&E) is in scenario (S6). So above we have a slippery slope of scenarios. The idea is that if R is defeated in (S1), then it is defeated in (S2), and if it is defeated in (S2), then it is defeated in (S3), and so forth. Or to put it more explicitly in the form of a deductively valid argument where lines 2 through 7 are material conditionals (a material conditional is where “If P, then Q” just means “It’s not the case that P is true and Q is false”):
  1. R is defeated in scenario (S1).
  2. If R is defeated in scenario (S1), then R is defeated in scenario (S2).
  3. If R is defeated in scenario (S2), then R is defeated in scenario (S3).
  4. If R is defeated in scenario (S3), then R is defeated in scenario (S4).
  5. If R is defeated in scenario (S4), then R is defeated in scenario (S5).
  6. If R is defeated in scenario (S5), then R is defeated in scenario (S6).
  7. If R is defeated in scenario (S6), then the Defeater Thesis is true.
  8. Therefore, the Defeater Thesis is true.
Because premises (2) through (7) are material conditionals, the only way e.g. premise (2) could be false is if R is defeated in (S1) and R is not defeated in (S2). So identifying a false premise would identify where the slippery slope stops. But it’s difficult to find a premise that is plausibly false. It would clearly be irrational for me to believe that Sam’s cognitive faculties are reliable given the information I have in scenario (S1), in which case premise (1) is justifiably true. There also doesn’t seem to be a relevant difference between scenarios (S1) and (S2) whereby R is defeated in (S1) but not (S2), and the same goes for every pair of scenarios in premises (2) through (6): there doesn’t appear to be a relevant difference between the two scenarios in any individual premise where R is defeated in one but not the other. So each premise (2) through (6) would seem to be justifiably true.
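For readers who want the derivation spelled out, abbreviate “R is defeated in scenario (Si)” as D(Si) and “the Defeater Thesis is true” as DT; the conclusion then follows from premise (1) by six successive applications of modus ponens:

```latex
\begin{array}{lll}
D(S_1) & & \text{premise (1)} \\
D(S_1) \rightarrow D(S_2), & \text{so } D(S_2) & \text{premise (2), modus ponens} \\
D(S_2) \rightarrow D(S_3), & \text{so } D(S_3) & \text{premise (3), modus ponens} \\
D(S_3) \rightarrow D(S_4), & \text{so } D(S_4) & \text{premise (4), modus ponens} \\
D(S_4) \rightarrow D(S_5), & \text{so } D(S_5) & \text{premise (5), modus ponens} \\
D(S_5) \rightarrow D(S_6), & \text{so } D(S_6) & \text{premise (6), modus ponens} \\
D(S_6) \rightarrow DT, & \text{so } DT & \text{premise (7), modus ponens}
\end{array}
```

This makes vivid why denying the conclusion requires locating a false conditional somewhere in the chain, i.e. a specific pair of adjacent scenarios with a relevant difference between them.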

What about premise (7)? Thinking that R is defeated in scenario (S6) but not for the general person who accepts the Probability Thesis seems especially implausible. Scenario (S6) is where I see the Probability Thesis is true and I believe I have passed a battery of cognitive tests that confirm my cognitive reliability. If R is defeated even here, it’s hard to imagine what relevant difference there might be between this and another person who sees that the Probability Thesis is true. Part of the point of (S6), after all, is that any would-be evidence for R would be undermined just as any would-be evidence for R is undermined in scenario (S5). It seems that for the Defeater Thesis to be false, R would have to be undefeated in scenario (S6).

If however R is not defeated in (S6), where does the slippery slope stop and why? Where does there exist a relevant difference between two scenarios that saves R from defeat? It’s particularly hard to find a relevant difference between (S5) and (S6). One might say in (S6) we know of overwhelming evidence in addition to N&E that makes R likely, whereas that’s not the case in (S5). But why exactly do we have this evidence in (S6) but not in (S5)? To make the problem for this more explicit, imagine that the two worlds of (S5) and (S6) are essentially identical apart from the differences entailed in (S5), such that I believe that the specific type of naturalistic evolution my species is a product of has given me genes that (together with proper nutrition etc.) makes it likely that my cognitive faculties are reliable, that cognitive science and evolutionary biology has given us strong evidence for human cognitive reliability, that truth-conducive faculties are adaptive in Earth primates, and so forth. I also believe that we have overwhelming scientific evidence that we have the blocking gene to nullify the effects of the XX-mutation. Yet all this alleged evidence for cognitive reliability seems undermined when one accepts the evidence long after drug XX entered the bloodstream. So how exactly is it that the alleged evidence for R is undermined in scenario (S5) but not in scenario (S6)? If there is a relevant difference between the two scenarios, what is it?

One could believe that the relevant difference between scenarios (S5) and (S6) is N&E’s mechanism of probable cognitive unreliability (MoPCU), i.e. whatever it is that makes Pr(R|N&E) low. In scenario (S5), that mechanism is the XX-mutation; whereas in scenario (S6) N&E’s MoPCU is (presumably) something else, e.g. perhaps what makes Pr(R|N&E) low in (S6) is semantic content being causally irrelevant and invisible to natural selection coupled with the fact that the enormous variety of “garbage” belief sets akin to dreams vastly outnumber belief sets that accurately resemble one’s external reality. So, one objection to the argument is that N&E’s different MoPCU is the relevant difference between (S5) and (S6) such that R is defeated in (S5) but not in (S6), which would make premise (6) false.

But there’s a problem with N&E’s MoPCU being a relevant difference. N&E’s MoPCU in scenario (S6) is functionally identical to drug XX: it induces the same type of cognitive unreliability with the same probability (in the sense that Pr(R|XX) = Pr(R|N&E)). Which mechanism happens to be N&E’s MoPCU thus does not seem to be a relevant difference, and apart from the MoPCU there doesn’t appear to be any other plausible relevant difference between (S5) and (S6). If so, then the would-be evidence for cognitive reliability in scenario (S6) is undermined just as it is in scenario (S5), and it seems arbitrary to hold that R is defeated in (S5) but not in (S6).




[4] Well, not the analogy, since Alvin Plantinga himself has several variants, and I’m using my own variety here. Still, drug XX rendering one’s cognitive faculties unreliable for some portion of those who ingest it is common in all of Plantinga’s renditions of this analogy that I’ve read.

Evolutionary Argument Against Naturalism (p. 3)


The ANPD Scenario



It would seem to be the case that Pr(R|N&E&SE) is low, but what if SE were false? What if in spite of Plantinga’s Argument Against Materialism, one is still convinced that the semantic content of a belief is causally relevant on naturalism? In that case there’s another thought experiment I’ll call the “ANPD scenario.”

Suppose a mad scientist creates an artificial neurophysiological device (ANPD), a many-tentacled device implanted near Smith’s brainstem that controls both his thoughts and behavior. The mad scientist can remotely control the ANPD’s electrochemical processes to vary Smith’s beliefs and behavior in innumerable and diverse ways. For example, Smith is dehydrated, and the mad scientist, wanting his victim to be in good health, uses the ANPD to force Smith to drink some water while simultaneously making him believe I am thirsty and this water will quench my thirst. The second time Smith is dehydrated, the mad scientist uses a different electrochemical setting to make Smith believe Drinking this water will grant me superpowers in the afterlife while producing the same drinking behavior (and suppose this belief is false). Here, the electrochemical process that produces fitness-enhancing behavior also produces a false belief. The ANPD can even produce “garbage” semantic beliefs that have little to do with the forced behavior, such as making Smith believe that Grass is air or that 1 + 1 = 3 at the same time it causes Smith to drink the water. The third time Smith is dehydrated the mad scientist configures the ANPD so that it causes Smith to drink the water while also causing him to believe Grass is air. Indeed, the mad scientist can associate just about any belief with the same drinking behavior.

For those who think that semantic content just is neurophysiology and that this avoids SE (it doesn’t, as I argue in the reductive and nonreductive materialism section of my article on Plantinga’s argument against materialism), let it be noted that the mad scientist can even use the device so that the physical properties of the belief Grass is air trigger an electrochemical reaction that causes Smith to drink the water; this is possible because on naturalism it’s how a belief’s physical properties interact with the rest of the system that determines one’s behavior. So even if it were the semantic content of Grass is air that is causing the action here, the ANPD scenario would show that the semantic content need not cause behavior in a manner befitting a rational agent; the causal link could be akin to the effects of SE, whereby the semantic content of the belief doesn’t even need to have anything to do with one’s external environment (thereby mimicking the effects of SE when SE says it’s the NP properties and not the semantic content that causes behavior). To have some handy terminology, let’s say that semantic content is causally relevant in a meaningless way when it fits situations like the semantic content Grass is air causing one to drink water, and that semantic content is causally relevant in a meaningful way when the causal link is more apropos of a rational agent, such as when This water will quench my thirst is (part of) what causes me to drink water.

Such an artificial neurophysiological device is not only metaphysically possible, but it also seems to be physically possible (given that beliefs and behavior can be brought about by electrochemical means). The ANPD scenario shows that false beliefs can be associated with fitness-enhancing behavior, even to the point where the false beliefs are garbage beliefs (beliefs that are wildly unrelated to the external environment, as in dreams). But if the scenario’s artificial neurophysiology is physically possible, then it is at least metaphysically possible for an evolved creature’s natural neurophysiology to have the same “disconnect” between semantics and behavior. Even if it were possible for a belief’s semantic content to be causally relevant, the ANPD scenario shows that for any given behavior B, there are innumerably many semantic contents C—even Cs wildly unrelated to the external environment—that could be associated with B. Like SE, this would still allow for the possibility of beliefs and behavior being linked in a meaningful manner (e.g. I believe a plant is poisonous so I won’t eat it), but like SE, this also allows for the possibility of even garbage beliefs being associated with advantageous behavior, and of belief/behavior disconnects identical in effect to SE even if SE were false, e.g. when the physical properties of Grass is air cause Smith to drink water, thereby mimicking the effects of SE when SE says it’s the NP properties and not the semantic content that is causally relevant. One could argue that the relation between semantic content and behavior is in this way functionally equivalent to SE in spite of the falsity of SE. Call this view semantic pseudo-epiphenomenalism (SPE).

Two key claims of SPE are (1) SE is false; (2) even though SE is false, it is still possible for even garbage beliefs to be associated with advantageous behavior (as by semantic content influencing behavior in a meaningless way)—and the ANPD scenario demonstrates that this is indeed physically possible (since the device is physically possible). The ANPD scenario thus shows that if SE isn’t true, then SPE is. Both SE and SPE permit a great divorce between beliefs and behavior (again, think of the case where the belief Grass is air causes Smith to drink water). Upon reflection it’s very easy to envisage a set of moving atoms that produces advantageous behavior while generating beliefs unrelated to the external world, and it’s easy to take for granted our rather fortunate truth-conducive relationship between belief and behavior because it is so familiar to us. Yet if naturalism were true and SE were false, semantic content being causally relevant in a meaningless way would be very possible.

To again avoid bias against our own species, think not of us but of alien creatures on some other world where N&E&SPE holds for them. While it’s easy to assume that beliefs and behavior would be linked in a “rational” manner (e.g. a man believes water will quench his thirst so he drinks), there’s nothing in N&E&SE or N&E&SPE alone to suggest such a link would occur for the aliens (whose physiology, we may presume, differs from ours), since both SE and SPE easily allow garbage beliefs to be connected with advantageous behavior. Because SPE is functionally equivalent to SE, and given the enormous variety of diverse beliefs that could be associated with a given behavior (e.g. Bachelors are married, Grass is air, 2 + 2 = 1, and 2 + 2 = 2), an evolving race of alien creatures afflicted with SPE has a low probability of evolving reliable cognitive faculties, just as if they were afflicted with SE. In sum, naturalism entails that either SE or SPE is true, and since Pr(RA|N&E&SE) and Pr(RA|N&E&SPE) are low, it follows that Pr(RA|N&E) is likewise low. But then if Pr(RA|N&E) is low, then Pr(R|N&E) is also low (since, as with the case of the aliens, we are basically considering the likelihood of R on N&E without further relevant information).
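The point that selection is indifferent to belief content can be sketched with a toy simulation. To be clear, the numbers and the model of selection below are entirely made up for illustration: fitness is assigned independently of belief content (as SE and SPE permit), the fittest half of the population "survives," and the survivors' beliefs turn out to be no more likely to be true than chance.

```python
import random

random.seed(0)

N_CREATURES = 200
N_BELIEFS = 50     # beliefs per creature
N_CONTENTS = 100   # candidate semantic contents per belief slot; only content 0 is true

def random_creature():
    # On SE/SPE, fitness depends only on behavior-producing physiology;
    # belief contents are drawn independently of fitness.
    fitness = random.random()
    beliefs = [random.randrange(N_CONTENTS) for _ in range(N_BELIEFS)]
    return fitness, beliefs

population = [random_creature() for _ in range(N_CREATURES)]

# "Natural selection": keep the fittest half; belief content plays no role.
survivors = sorted(population, key=lambda c: c[0], reverse=True)[:N_CREATURES // 2]

def truth_rate(creatures):
    # fraction of beliefs that happen to be the true content (content 0)
    total = sum(b == 0 for _, beliefs in creatures for b in beliefs)
    return total / (len(creatures) * N_BELIEFS)

print(f"truth rate before selection: {truth_rate(population):.3f}")
print(f"truth rate after selection:  {truth_rate(survivors):.3f}")
# Both hover near chance (1/N_CONTENTS = 0.01): selection raised fitness
# but did nothing to make beliefs true.
```

The design choice doing the work here is the independence of `fitness` and `beliefs` in `random_creature`; that is the simulation's stand-in for the claim that which semantic content gets generated is invisible to natural selection.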

A Rebuttal



In response one could put forth the following rebuttal. Even though naturalism unavoidably entails an SE-type problem—whether via semantic epiphenomenalism or semantic pseudo-epiphenomenalism—the fitness-enhancing neurophysiological properties that are most likely to be selected by natural selection (say that a certain neurophysiology is selectable if it’s likely to be selected by natural selection) happen to be those that are truth-conducive. The ANPD scenario, while physically possible, is contrived and produces certain belief-behavior pairs that are unlikely to obtain in real human physiology. The most selectable and efficient way for neurophysiology to produce advantageous behavior also produces mostly true beliefs. Thus, even though the SE-type situation exists for semantics and behavior, luckily for us the physiological relation between semantics and behavior is such that true beliefs usually obtain.

All that may be true, but as an objection against the Probability Thesis it falls short. A major problem is that even if a favorable physiological relation between beliefs and behavior obtains for our species, such a favorable relation does not appear to be knowable from N&E alone. It is not knowable from N&E&SE alone, nor is it knowable from N&E&SPE alone. To illustrate the problem, consider a planet with aliens whose neurophysiology radically differs from ours (though we don’t know much more about it). On N&E&SE where the semantic content of a belief is causally irrelevant, it would still be possible that mostly true beliefs are associated with advantageous behavior, but since the semantic content of their beliefs could be anything and it wouldn’t matter, it would be the most serendipitous of coincidences if that were to occur. Similarly on N&E&SPE where even garbage beliefs can be associated with advantageous behavior, it would still be possible that the alien electrochemical reactions causing advantageous behavior also generate mostly true beliefs, but it would be a rather serendipitous coincidence if that were to occur, given the enormous variety of beliefs that can be associated with a given behavior (as the ANPD scenario suggests) and given that we have no further relevant information about the physiology of the aliens.

One could concede that the probability of R given (just) N&E is low but also claim we know some proposition P (perhaps that the physiological relation between beliefs and behavior happens to be favorable for our species) such that Pr(R|N&E&P) is high, and that we have excellent reason to believe P is true. Then Pr(R|N&E) being low would not defeat R for the evolutionary naturalist. This however is an objection against the Defeater Thesis rather than the Probability Thesis, so it will not be discussed in this section. Can the Defeater Thesis withstand this objection? For that matter, why accept the Defeater Thesis in the first place?






Evolutionary Argument Against Naturalism (p. 2)



The Probability Thesis



Why think Pr(R|N&E) is low? Ordinarily one might think that true beliefs help us survive. That certainly is the case if the content of our beliefs is causally relevant to behavior (e.g. I believe this plant is poisonous so I won’t eat it). But if the truth of our beliefs has no such causal relevance, then such a factor will be invisible to natural selection in the sense that while natural selection selects for adaptive physiologies and certain physiologies produce semantic content, which semantic content gets generated from a given physiology doesn’t affect natural selection’s outcome. So the content of our beliefs could be anything, true or not, and it wouldn’t affect our behavior. Whether the belief's content is 2 + 2 = 4, 2 + 2 = 67, or 2 + 2 = 4096 would make no difference to how we behave. If that is true, then Pr(R|N&E) is low.

The sort of naturalism being discussed here assumes that we human beings are purely physical creatures with no nonphysical minds or souls. In my article Plantinga’s Argument against Materialism I described Alvin Plantinga’s argument for the idea that if materialism with respect to human beings were true (i.e. if we were purely physical beings), the propositional content of our beliefs (e.g. There is a cold soda in the fridge) would not be causally relevant. For convenience, I’ll recapitulate some of that here.

To illustrate the idea of a belief’s semantic content being causally relevant to behavior, suppose I want a cold soda, and my roommate informs me that there is cold soda in the fridge. The belief There is a cold soda in the fridge is (part of) what causes me to go to the fridge and get a cold soda. We would naturally think it is by virtue of the belief’s content that this belief causes me to go over to the fridge. On dualism (the view that our minds are a composite of the physical brain and a nonphysical mental component, e.g. the soul) it is possible for a belief’s content to affect behavior; e.g. I believe something and on the basis of this belief my soul impacts my neural pathways in a certain way to cause behavior.

But on materialism, the coin of belief has two sides: the neurophysiological (NP) properties (certain neurons being connected in a certain way etc.) of the belief, and the actual semantic content of the belief (e.g. There is a cold soda in the fridge). If materialism were true, the content of a belief is causally irrelevant in the sense that (given materialism) a belief causes stuff by virtue of its NP properties, and not by virtue of its content. We can see this by doing a little thought experiment. Suppose a given person’s belief (a neural structure possessing semantic content), say, the belief that There is a cold soda in the fridge, had the same NP properties but an entirely different content—such as The moon is made of green cheese. Would the person’s behavior be any different if the belief had the same NP properties but different content? It would not, because having the same neurophysiological properties means we would have the same electrical impulses travelling down the same neural pathways and thus issuing the same muscular contractions. Thus if materialism were true, the content of our beliefs would be causally irrelevant. The view that a belief causes stuff by virtue of its NP properties and not its semantic content is called semantic epiphenomenalism (SE).

In response the critic of SE could say that there’s something wrong with the thought experiment because it is metaphysically impossible for a given set of neurophysiological properties to have a different semantic content. But even if it is impossible for a given set of NP properties to have a different semantic content associated with it, does this prevent the statement “If a belief had the same NP properties but different content, the same behavior would result” from being meaningfully true? I think not. In philosophy, statements of the form, “If P were true, then Q would be true” where P is an impossibility are called counterpossibles. It does seem that there are counterpossibles that are meaningfully true. For example, suppose renowned mathematician Kurt Gödel proved a certain theorem; it is impossible for theorems to be proved false since they are necessarily true. Yet as Alvin Plantinga points out, “If Mic were to prove Gödel wrong, mathematicians everywhere would be astonished; it is not true that if Mic were to prove Gödel wrong, mathematicians everywhere would yawn in boredom.”[1] So even if “If a belief had the same NP properties but different content, the same behavior would result” were a counterpossible, this doesn’t seem to prevent the statement from being meaningfully true.

More could be said about SE. If you’re still not convinced, I argue for it a little more in the reductive and nonreductive materialism section in my article on Plantinga’s argument against materialism (don’t worry, I explain what both sorts of materialism are!).

So why would it matter if N&E entails that the semantic content of our beliefs is causally irrelevant? To avoid bias against our own species, think not of us but of alien creatures whose physiology is radically unlike our own, and let RA represent the proposition that the cognitive faculties of the aliens are reliable. N&E is true for these aliens, thus making the semantic content of their beliefs causally irrelevant. Then on N&E the electrochemical reactions that cause the behavior of these aliens could generate any semantic content at all (e.g. 2 + 2 = 1 or Grass is air) without that content affecting behavior. The semantic content could even consist of “garbage” beliefs unrelated to the external environment, as in dreams, and it still wouldn’t affect behavior. To illustrate the potential problem this creates, suppose a random belief is assigned as the answer to What does two plus two equal? Answers of one, two, rock, and sunshine would all be wrong. Randomly selected beliefs about the color of the sky and one’s age are similarly likely to be wrong. The enormous variety of “garbage” belief sets akin to dreams vastly outnumbers the belief sets that accurately resemble one’s external reality. Since which belief set gets produced is invisible to natural selection, while it is still possible for the electrochemical reactions that produce advantageous behavior to also produce a reliably true belief set (as opposed to a garbage, dream-like one), it would be the most serendipitous of coincidences if that were to occur. Thus in the absence of further relevant information, the likelihood that their cognitive faculties are reliable (given N&E) is low, and thus Pr(RA|N&E) is low.
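To get a feel for the numbers behind the "serendipitous coincidence" claim, here is a small sketch of the underlying binomial arithmetic. The figures are illustrative assumptions of mine, not anything the argument commits to: even if each of 100 independent beliefs were granted a generous 1/2 chance of being true, the probability that at least three-quarters of them come out true is tiny.

```python
from math import comb

def prob_at_least(n, k, p):
    """P(at least k of n independent beliefs are true, each true with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Grant each belief a generous 1/2 chance of truth. The chance that a
# creature with 100 independent beliefs gets three-quarters of them right:
print(prob_at_least(100, 75, 0.5))  # vanishingly small
```

And note that 1/2 per belief is generous: if contents are drawn blindly from a large space of candidate "garbage" contents, the per-belief chance of truth would be far lower, and the tail probability collapses even faster.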

One could object, saying that while the probability of cognitive reliability is low given just N&E, we know of further relevant information P such that Pr(R|N&E&P) is high; e.g. we know that for our own physiology, the link between content and behavior is favorable for our species, such that we act as if semantic content influences our behavior in a manner befitting a rational agent. Maybe that’s true, but that’s an objection to the Defeater Thesis and not the Probability Thesis. For now we’re just concerned with justifying Pr(R|N&E) being low. In effect, I’ve been arguing for the following argument:
  1. If Pr(RA|N&E) is low, then Pr(R|N&E) is low.
  2. Pr(RA|N&E) is low.
  3. Therefore, Pr(R|N&E) is low.
I’ve already justified premise (2), explaining why semantic epiphenomenalism renders it unlikely that RA is true given N&E. What about premise (1)? What’s true for the aliens here is also true for us, since we are basically considering the probability of R on just N&E (we considered Pr(RA|N&E) merely so we could try thinking about the issue in a way that avoids bias towards our own species). But suppose that even after reading the rest of my article on Plantinga’s argument against materialism, one still isn’t convinced that naturalism entails SE. Is there another way to argue for the Probability Thesis?





[1] Plantinga, Alvin. “A New Argument against Materialism” Philosophia Christi 14.1 (Summer 2012) p. 21

Evolutionary Argument Against Naturalism



Some atheists claim there is a conflict between science and religion. But what if there were a conflict between naturalism (disbelief in the supernatural) and science? Enter the evolutionary argument against naturalism (EAAN), a remarkable argument that uses the theory of evolution to argue against the rationality of naturalism. This argument was originated by Christian philosopher Alvin Plantinga.

Overview of the Evolutionary Argument Against Naturalism



To define some terms and abbreviations, a defeater is (roughly) something that removes or weakens rational grounds for accepting some belief; in the context of the EAAN, the defeater is such that one is rationally obligated to withhold the defeated belief (i.e. not believe it, whether by (1) remaining agnostic about it or (2) believing it to be false). Suppose for example I arrive in a city and see what appears to be a barn from fifty meters away. I later learn that last week some eccentric put up fake barns all over the area alongside real ones, and that these fake barns are indistinguishable from real barns when viewed at a distance of thirty meters or more. I now have a defeater for my belief that I had seen a barn. I realize I could have seen a real barn, but I no longer have sufficient grounds to accept the belief. The rational thing for me to do is to withhold my belief that I had seen a barn. Suppose, though, that I later learn the eccentric removed all the fake barns prior to my arrival. I would then have something that nullifies the defeating force of the defeater, i.e. a defeater-defeater.

Somewhat more precisely for the analytically inclined, in the context of the EAAN a defeater is a belief D that defeats another belief B for someone if that person would no longer be justified in believing B when coming to believe (and continuing to believe) D. So in our barn example (prior to learning that the eccentric removed the fake barns), the defeater D is “An eccentric put up fake barns in the area that are indistinguishable from real barns at the distance I was looking,” which defeats belief B “I saw a barn.” As long as I believe D, I cannot reasonably believe B with the information that I have. EAAN claims the naturalist who believes in evolution acquires a defeater for his belief in evolution + naturalism. The abbreviations commonly used for EAAN:

R = One’s cognitive faculties are reliable
N = naturalism is true
E = evolution is true
Pr(R|N&E) = the probability of R given N&E


In other words, Pr(R|N&E) refers to the probability that our cognitive faculties are reliable given naturalism and evolution, where by “cognitive faculties” the EAAN is referring to those faculties that process or produce beliefs—such as memory, perception, and reasoning. In a nutshell, the evolutionary argument against naturalism goes like this:
  1. Pr(R|N&E) is low
  2. The person who believes N&E (naturalism and evolution) and sees that Pr(R|N&E) is low has a defeater for R.
  3. Anyone who has a defeater for R has a defeater for pretty much any other belief she has, including (if she believed it) N&E.
  4. If one who accepts N&E gets a defeater for N&E in the manner described in lines (1) through (3), N&E is self-defeating and can’t be rationally accepted.
  5. Conclusion: N&E can’t be rationally accepted (at least, not for the N&E believer who accepts premise (1)).[1]
Call premise (1) the Probability Thesis and premise (2) the Defeater Thesis. Denying the truth of evolution isn’t much of an option for the naturalist, so if the above evolutionary argument against naturalism is sound, the naturalist is in serious trouble. But defeaters can themselves be defeated as in the case of the barn scenario I described. Quoting Alvin Plantinga:
Of course defeaters can themselves be defeated; so couldn’t you get a defeater for this defeater—a defeater-defeater? Maybe by doing some science—for example, determining by scientific means that her cognitive faculties are reliable? Couldn’t she go to the MIT cognitive-reliability laboratory for a check-up? Clearly that won’t help. Obviously that course would presuppose that her cognitive faculties are reliable; she’d be relying on the accuracy of her faculties in believing there is such a thing as MIT, that she has in fact consulted scientists, that they have given her a clean bill of cognitive health, and so on.[2]
Any would-be evidence or argument for one’s cognitive reliability would be relying on the very cognitive faculties that are being called into question, so the defeater mentioned in premise (2) would be an undefeated defeater.

Note also from Plantinga’s quote that we get a feel for what type of cognitive unreliability EAAN has in mind; this isn’t ordinary fallibility, the kind that we could compensate for by scientific methods and peer review, but a kind of general cognitive unreliability such that one can’t even trust one’s memory of whether one passed cognitive reliability tests.

So much then for an overview of the argument. Both the Probability Thesis and the Defeater Thesis of EAAN will need to be justified. Up next, justifying the Probability Thesis.




[1] Plantinga, Alvin. Where the Conflict Really Lies (New York, New York: Oxford University Press, 2011), pp. 344-345. I worded the argument slightly differently, and I added the “pretty much” part to avoid possible controversies that things like cogito ergo sum and I am being appeared to redly might create, since by my lights the fact that there might be a few exceptions like this doesn’t affect the heart of the argument.

[2] Plantinga, Alvin. Where the Conflict Really Lies (New York, New York: Oxford University Press, 2011), p. 354

Sunday, May 12, 2013

No Free Will Means No Rationality


The claim “No statement is rational to believe” is self-defeating, since it implies that the claim itself is not rational to believe. In this article I’ll argue that the view that we have no free will undermines rationality, and so undercuts itself, albeit in a somewhat indirect way. The following quote from J.B.S. Haldane (1892-1964) is relevant here, since if we have no free will then our thoughts and beliefs are determined by physical processes:
It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true….And hence I have no reason for supposing my brain to be composed of atoms.[1]
In philosophy, the view that all our thoughts, actions, and beliefs are determined by processes outside our control and that we have no free will is called hard determinism. This is in contrast to soft determinism, which affirms determinism but also claims we are in fact responsible for our behavior (I attacked soft determinism, also known as compatibilism, in The Free Will Argument for the Soul). Hard determinism says that we are never responsible for any of our thoughts and beliefs, and it is my contention that this view undermines human rationality.

Ways to Believe



There are at least two ways we can arrive at a belief or course of action. We can decide to do something or accept some belief via reasons that are freely chosen, or our actions and beliefs can be the result of unthinking and impersonal causes over which we have no control. To illustrate the idea of freely chosen reasons, suppose I have libertarian free will with respect to two options before me: fattening chocolate ice cream and low-fat strawberry frozen yogurt. Each has a reason in its favor: the fattening chocolate ice cream tastes better; the low-fat frozen yogurt is more healthful. Both options are live and available to me, but I freely choose the low-fat frozen yogurt, my reason being “it's better for me.” The choice is made for a reason, but that reason is freely chosen. To illustrate the idea of unthinking and impersonal causes, suppose a man believes a certain economic policy is the best one because a strange brain tumor is causing him to think so.

With free will, we are ultimately responsible for our actions, and this at least allows the possibility for our thoughts and beliefs to be the product of our own rational thought (as via freely chosen reasons). In contrast, hard determinism implies that all our thoughts and beliefs are ultimately caused by unthinking and impersonal causes, if for no other reason than that all our behavior is ultimately caused by forces outside of ourselves and that we ourselves are never responsible for any of our thoughts and beliefs.[2] Even if we ignore outside forces, though, it seems to me that if our minds are purely physical then our mental processes would be wholly the product of (unthinking) atoms in motion. Such beliefs could be true or they could be false, but ultimately they would be the product not of rational thought but of mindless chemical reactions. Human rationality itself would be undermined, and our beliefs would be no more rational than a toothache.

The Calculator Reply



One of the best rebuttals to this sort of argument I can think of is something I’ll call “the calculator reply.” The calculator is a deterministic system, yet it provides answers in accord with reason. Similarly, our brains are like (albeit imperfect) calculators in that even though they are deterministic, they supply us with rationality.

One thing to remember is this: the claim is not that determinism entails that our beliefs aren’t mostly correct. Rather, the claim is that even if our beliefs are correct, we are not being rational in accepting them. Recall that under hard determinism, mindless and blind physical causes produce all our beliefs, rather than a mind having some control in producing them. Does the calculator reply compensate for this to provide us with rationality? Consider this hypothetical set of scenarios in which astronauts find an alien species that mindlessly accepts any belief given to them. Also suppose we separate these aliens into groups A and B.

Scenario 1: We write down mathematical statements on slips of paper and give them to the aliens in the following manner: group A receives slips conveying false beliefs like “2 + 2 = 5,” while group B receives slips conveying true mathematical beliefs (having their source in rational individuals like ourselves) like “2 + 2 = 4.” Is group B more rational than group A? No; both are accepting beliefs thoughtlessly and without one iota of rational reflection.

Scenario 2: Suppose we give group A calculators that consistently give incorrect answers such as “2 + 2 = 5” and group B calculators that produce consistently correct answers. Both groups mindlessly accept the answers the calculators give them. Is group B being more rational than group A in accepting their beliefs? Again, the answer is no.

Scenario 3: Now suppose we implant the calculators within the heads of the aliens where their little head tentacles press the buttons and the calculators feed them the answers through neurochemical reactions. Group A is given the faulty calculators and group B is given the good calculators. Once again, both groups mindlessly accept whatever beliefs are given to them by these calculators. Not much has changed here other than the physical location of the calculators, so is group B behaving more rationally than group A? I think again the answer is no.

We can present the argument more formally as follows, where the if-then statement in premise (4) uses the material conditional.

(1)   In scenario 1, group B is no more rational than group A.
(2)   In scenario 2, group B is no more rational than group A.
(3)   In scenario 3, group B is no more rational than group A.
(4)   If (1), (2), and (3) are true, then on hard determinism one’s brain operating in accord with reason is not a sufficient condition for genuine rationality.
(5)   Conclusion: on hard determinism, one’s brain operating in accord with reason is not a sufficient condition for genuine rationality.


Again, (4) uses the material conditional, which means the only way it can be false is if (1), (2), and (3) are true while “on hard determinism, one’s brain operating in accord with reason is not a sufficient condition for genuine rationality” is false.
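Since the argument leans on the material conditional, it may help to make that machinery concrete. Here is a short Python sketch (the propositional labels are my own, not part of the original argument): it tabulates the material conditional, which comes out false only when the antecedent is true and the consequent false, and then brute-forces every truth assignment to confirm that premises (1) through (4) can never all be true while the conclusion (5) is false.

```python
from itertools import product

def implies(p, q):
    """The material conditional: false only when p is true and q is false."""
    return (not p) or q

# Truth table for the material conditional
for p, q in product([True, False], repeat=2):
    print(f"P={p!s:<5} Q={q!s:<5} P->Q={implies(p, q)}")

# Brute-force validity check: s1, s2, s3 stand for premises (1)-(3);
# suff stands for "a brain operating in accord with reason suffices for
# genuine rationality". Premise (4) is (s1 and s2 and s3) -> not suff;
# the conclusion (5) is not suff.
valid = all(
    not suff                                # the conclusion holds...
    for s1, s2, s3, suff in product([True, False], repeat=4)
    if s1 and s2 and s3 and implies(s1 and s2 and s3, not suff)  # ...in every row where all premises hold
)
print(valid)  # True: no assignment makes the premises true and the conclusion false
```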

Here’s the reasoning behind premise (4): We can give aliens slips of paper whose answers are in accord with reason, but if they are mindlessly accepting whatever beliefs are presented to them, then it isn’t real rationality. Similarly, if chemical reactions do the same sort of thing to the group B aliens (whether in the manner specified in Scenario 3 or via a more organic calculator integrated into the brain), then it isn’t real rationality either. Making the calculators an organic part of the brain doesn’t seem to make a relevant difference in rendering group B more rational than group A. If that’s true, then a brain operating in accord with reason is not a sufficient condition for genuine rationality.

The above deductive argument is logically valid, which means the conclusion follows necessarily from the premises by the rules of logic, and so a false conclusion requires a false premise. But all the premises seem to be true, and if so then the conclusion is true.

Conclusion



It is metaphysically possible for hard determinism to be true and for us to have correct beliefs. Yet even if we had consistently correct beliefs, hard determinism would still undermine rationality. We simply have no reason to accept our beliefs as true if they are all solely the product of mindless, blind physical causes.

It’s important to note what I am not arguing for in this article. The argument is not that if hard determinism is true, then our beliefs are mostly false. Rather, the argument is that if hard determinism is true, human rationality doesn’t exist. Our beliefs could still be true, but we wouldn’t be rational in accepting them any more than our hypothetical aliens are. We thus have the following sort of argument against hard determinism.
  1. If hard determinism is true, then human rationality does not exist.
  2. Human rationality does exist.
  3. Therefore, hard determinism is false.
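This little argument is an instance of modus tollens. For the analytically inclined, its validity can be machine-checked by enumerating truth assignments; a Python sketch (the variable names are mine):

```python
from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

# hd = "hard determinism is true"; rat = "human rationality exists"
# Premises: implies(hd, not rat) and rat. Conclusion: not hd.
counterexamples = [
    (hd, rat)
    for hd, rat in product([True, False], repeat=2)
    if implies(hd, not rat) and rat and hd   # premises true, conclusion false
]
print(counterexamples)  # [] — no counterexample rows, so the argument is valid
```

Validity is of course the easy part; as with any modus tollens, all the philosophical work lies in defending the premises.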
Note that if human rationality does not exist, we are not rational in accepting any belief, including hard determinism. Again, rationality is not required for accepting true beliefs (though it is recommended) and this argument doesn’t say that we can’t have true beliefs on hard determinism. With that said, in my next entry I’ll introduce the evolutionary argument against naturalism (EAAN), which says (among other things) that given evolution and naturalism, the probability that our cognitive faculties are reliable is low.




[1] J.B.S. Haldane, Possible Worlds and Other Papers (New York: Harper & Brothers, 1928) p. 220.

[2] This of course wouldn’t apply if a theistic God designs our cognitive faculties to work a certain way to give us beliefs, in which case our thoughts are determined by a thinking mind (viz. the mind of God). Here I’m referring to a version of hard determinism that assumes naturalism, where there is no supernatural mind guiding the development of our cognitive faculties, so that the external forces that produce our thoughts are mindless and impersonal. Note that the problem of our thoughts and beliefs ultimately being the product of mindless and impersonal causes is not resolved by our brains being the product of a purely physical creator if this creator’s thoughts and beliefs were themselves the product of mindless and impersonal causes, since in that case our thoughts and beliefs would still (ultimately) be the result of mindless and impersonal causes.