Wednesday, July 25, 2012

Are Christians Sincere Believers?

Home  >  Christianity  >  General

This blog entry is in part a response to the following YouTube video. Warning: the YouTube video contains strong language that is unsuitable for children, socially conservative non-maverick Christian old ladies, and bosses peering over your shoulder.



While he raises some good points, I don’t think the basis he uses for his conclusion that Christians don’t really believe that (for example) the Lord of the Universe offers his hand of salvation and paradise is particularly good (I know, you’re shocked). Being a monk may be an honorable occupation, but it’s not the only noble one for sincere Christians. The world still needs doctors, police officers, schoolteachers, etc., and it’s perfectly reasonable for a sincere Christian believer to serve the world in those ways. Still, the YouTuber has a point; Christians often don’t really act out what we believe. Why is that? One idea is that deep down Christians don’t really believe what we say we believe. But there’s a different and more natural explanation: we humans tend to get desensitized to what we have. The YouTube video below gives a nice illustration of this:



We tend to have a sort of self-centered greed where we focus too much on our wants and what we don’t have in the present moment rather than being grateful for all the cool stuff we do have.

This isn’t new. Think back to David sleeping with Bathsheba when Bathsheba was somebody’s wife. When you think about it, why does David need Bathsheba anyway? It’s not as if David was a poor single loser who couldn’t hook up with any other lady; this man already had multiple wives, and before you start making bad jokes about married life in the bedroom, keep in mind that David also had many concubines as well, at least some of whom had to be pretty hot (if any man in the land could get hot concubines, it would be the king). So by all available evidence, this was not a man who had trouble getting laid. But he ignored all that and wanted more, sleeping with somebody else’s wife.

To illustrate the point of self-centered greed further, consider this excerpt from my blog article We Are the Depraved:
We may say we value our fellow human being as much as ourselves, but how often do we buy things we don’t need when that money could go to clothe the naked or feed the hungry? For definiteness, consider a hypothetical man named Smith who buys an expensive big screen television. I being the occasionally unpleasant fellow ask Smith, “Do you value having a big screen television over starving children being fed?” Smith answers, “Of course not.” I reply, “Then why are you spending a sizable sum on the big screen television instead of donating that money to feed starving children?” Smith might say that such an act would be supererogatory rather than morally obligatory. Perhaps it would be supererogatory, but that doesn’t change the unpleasant fact that Smith is valuing having a large television over feeding starving children. Smith has a choice between having children desperately in need of food being fed and having a big screen television, and Smith chooses the television.
Smith doesn’t donate to charity and instead buys the expensive TV. Does this mean that he doesn’t really believe that there are starving children? No, he’s just self-centered, focused more on the luxuries he doesn’t have and less on the necessities that others are lacking. A similar focused-on-what-we-don’t-have attitude infects our everyday lives, a bit like, “Yeah Jesus died for my sins, but that doesn’t help me get a new car does it? Why doesn’t Jesus give me a bigger house or a higher paying job?” Even Christians can be materialistic, hence the prosperity gospel heresy. When I wrote We Are the Depraved I was focusing on how we undervalue our fellow human beings, but it seems to me that we undervalue God also. Sure, we may believe God exists just as Smith believes starving children exist, but in both cases we undervalue what’s important.

What to do?

We have a lot to be grateful for and often we don’t focus on it. That’s true of dishwashers, phones, hot showers, and yes, even God’s love and Christ’s sacrifice for us. So what to do? Perhaps the first step in being grateful for what you have is to notice what you have. We do in fact have cars, computers, hot showers, etc. Another thing is this: look down, not up. Imagine Alice living in a third world country. She lives in a tiny house with a dirt floor, no toilets, no dishwashers, and no showers. She has to sleep on the floor with no pillow and is often worried about where her next meal will come from. One day she meets Bob the American who is frowning. “Why are you sad?” Alice asks Bob. Bob replies, “It’s true I have a refrigerator full of good food, a house ten times bigger than yours with a dishwasher, hot shower, and comfortable bed where I watch high-definition TV, but looking at Jones having a bigger house and higher paying job makes me sad.” If Alice were to get what Bob has, she’d feel as if she won the lottery. Alice thinks Bob’s complaint is foolish and that he’s oblivious to what he has. Moreover, Alice is right to think that Bob’s complaint is foolish. Having his house, car, dishwasher, refrigerator full of good food etc. is a lot to be grateful for as Alice recognizes and Bob doesn’t. Imagine yourself not having your great stuff (like poor Alice) and you’ll more clearly see how much you have to be thankful for.

Of course, the same goes for God and Christ’s sacrifice. To illustrate, consider a man who’s won the lottery complaining that he lost a dollar on his way to collect his winnings; the lottery winner would seem to have lost sight of what he has. We Christians do the same with petty complaints, acting as if we’ve forgotten the beautiful story: the incarnation of God—the infinite being that is the locus of goodness—becomes truly human, and with nails in his limbs he dies for us so that we might inherit a blessed eternity we do not deserve, an inheritance far better than any lottery. Though we can enjoy the love of God in this life and the next, we don’t act on it like we should. We are people who’ve inherited incalculable wealth complaining over a few missing pennies. Perhaps heaven is largely about finally noticing what we have and being grateful for it. But why wait?

Monday, July 16, 2012

Why Falsificationism Sucks

Home  >  Philosophy  >  Philosophy Of Science

I once heard an atheist accuse theism of not being falsifiable. That’s a bit odd when you think about it, because “theism isn’t falsifiable” is basically a tacit admission that the problem of evil fails to refute theism. Still, presumably “theism isn’t falsifiable” is supposed to be some sort of objection against the rationality of theism, which thus inspired this article. In this entry I’ll talk about how well falsificationism works as a criterion of meaning, rationality, and scientific legitimacy.

Varieties of Falsificationism that Suck

Falsificationism comes in many varieties, but two forms I’ll talk about are:

    (1) For a claim to be meaningful, it must be falsifiable.
    (2) For a claim to be rational, it must be falsifiable.


I’ll argue that both varieties of falsificationism suck.

It’s easy to think of counterexamples to (1) and (2). “There is no greatest prime number” is meaningful and rational to believe (at least, it’s rational to believe once one apprehends the mathematical proof for it), yet it is logically impossible to falsify it and is therefore non-falsifiable. Why? Because it’s logically impossible for “There is no greatest prime number” to be false, ergo it is logically impossible to prove that it is false.

Still, one might think this can be remedied. Say that a claim is logically necessary if and only if it is logically impossible for it to be false. Say that a claim is logically contingent if and only if neither its truth nor its falsity is logically necessary. We can modify (1) and (2) to get the following:

    (1*) For a logically contingent claim to be meaningful, it must be falsifiable.
    (2*) For a logically contingent claim to be rational, it must be falsifiable.


Yet even these improved versions suck. Consider the following claim:
I exist.
That claim is logically contingent, yet it is logically impossible for me to falsify this claim, for to falsify it I would have to exist. In spite of all that, the proposition I exist is meaningful and rational to believe. One could try to remedy this by saying something like, “By a claim being falsifiable I mean that it’s possible in principle for somebody to falsify it.” In that case we have the following counterexample to (1*) and (2*):
Somebody exists.
This claim is logically contingent, yet it is logically impossible for anybody to falsify it. In spite of all that, the proposition Somebody exists is both meaningful and rational to believe. One could say, “Falsificationism applies everywhere except for the proposition Somebody exists.” That sort of falsificationism seems a bit ad hoc, but even with this version we can find a counterexample. Consider the following claim.
There is, has been, or will be a bucket of ten red balls.
We can’t even in principle falsify this claim; no matter our past experience, there is still at least the outside chance that such a bucket will turn up someday in the future. Not only that, but this claim can in principle be empirically verified and is rational to believe.

One general reason these varieties of falsificationism suck is that there are meaningful, empirically verifiable, rational claims where it’s easy to justify the truth of the claim but trickier to prove the claim false. In any case, as a criterion of rationality and meaning, falsificationism fails. The fact (if it were so) that theism is non-falsifiable is insufficient to prevent it from being a rational belief.

Falsificationism and Science

Some varieties of falsificationism relate to science. One could say that for a belief, theory, or hypothesis to be scientific it must be falsifiable. Why this might be problematic should by now be apparent. As we’ve already seen, some non-falsifiable beliefs are empirically verifiable and rational to accept. So if science is to be a rational project in the truth business, it seems that non-falsifiability shouldn’t automatically disqualify a view from being scientific.

Consider for example the theory (or hypothesis, if one prefers) of abiogenesis: the view that undirected natural processes evolved life from non-life. Barring extreme cases like time travel and psychic ability, this view is not falsifiable, for what observation could possibly falsify it? Perhaps we could find obstacles for undirected natural processes, but for any obstacles we find, one could just say, “There is a way for undirected natural processes to overcome those obstacles and we just haven’t discovered it yet.” In response one could say, “That maneuver wouldn’t work if the obstacles in question were physical laws; if we found that for abiogenesis to take place it would require violating heavily verified physical laws, that would falsify abiogenesis.” That would be an interesting scenario, since in this case it seems we would (if we also knew life began to exist) have grounds for thinking that life was supernaturally created. Still, even in this case we wouldn’t have falsified abiogenesis. The abiogenesis proponent could say that perhaps physical laws as we know them today didn’t apply everywhere on earth billions of years ago.[1] While many of us take it for granted that the physical laws as we know them today applied everywhere on earth those eons ago, we haven’t actually proven that to be true (none of us were even around back then to test the laws anywhere on earth). So abiogenesis would not, strictly speaking, be proven false in this scenario, even if we would have some rational grounds for rejecting it. Nor is this sort of thing limited to abiogenesis. Even if there were a physical law that forbade large-scale Darwinian evolution, the Darwinist could, like our hypothetical abiogenesis proponent, say that maybe the physical laws as we know them today didn’t apply back then at certain locations under unknown conditions.

It is of course possible for a scientific theory to make predictions and have those predictions be falsified, but theories do not make predictions out of a vacuum. Rather, the predictions of a theory arise from a background system of beliefs called auxiliary hypotheses or auxiliary assumptions (some of which are other scientific theories), and the truth of such auxiliary hypotheses is typically not rigorously proven. For example, it was once thought that if the Earth was really spinning on its axis, birds would get blown west whenever they let go of a tree branch. We no longer accept that as evidence that the earth is not spinning because we have accepted a different background system of physics that allows us to make different predictions. Still, we can have rational grounds for thinking that our background system of beliefs is correct, and so if Darwinian evolution did conflict with physical laws, we would have rational grounds for rejecting the theory, even though we can’t prove that exceptions to physical laws didn’t take place back then to allow Darwinian evolution to occur. In science, one cannot prove theories true in the sense that we can prove theorems of logic and mathematics; there’s always at least the outside chance that our theories are wrong (and if history is any indication, at least some of our scientific theories are in fact probably wrong in some respect); similarly, we often cannot prove theories false due to the reliance on auxiliary hypotheses. The reliance on auxiliary hypotheses to make empirical predictions (and thus the inability to isolate one hypothesis for empirical testing, since the necessity of auxiliary hypotheses means that hypotheses are tested in bundles) is called the Duhem-Quine thesis or the Duhem-Quine problem. More precisely (since the definition of the “Duhem-Quine thesis” can vary somewhat), the “theories can be empirically tested only in bundles” belief is also called confirmation holism.

We could use another form of falsificationism, perhaps something like “No theory is scientific unless it is possible in principle to have sufficiently strong evidence against it.” To use a specific example, consider intelligent design (ID) theory, which as applied to life on earth is the theory that a designing intelligence created certain features of life.[2] A popular criticism against ID is that it is not legitimately scientific and that ID is non-falsifiable. Why? Suppose we found a detailed, rigorous account for how undirected natural processes could evolve the first single-celled organism and how that could lead to the evolution of mice, reptiles, and all of life’s diversity on Earth. Does this refute ID theory? No, because even if natural processes are reasonably capable of doing the job, it’s still possible for an intelligent designer to do the same thing (albeit possibly needing technology far beyond ours). It’s hard to have real solid evidence against ID theory, and so (the criticism goes) it is in some sense non-falsifiable and therefore not legitimately scientific.

But that objection is problematic. To see why, consider the following scenario. Suppose we live in a technologically advanced society where a designing intelligence is capable of creating mice, gold (through subatomic rearrangement), electrons (via converting photons into electrons), and anything else nature could create within certain size limits. There is a box with something inside of it but neither of us knows what it is. The person who put the thing in the box has died and has destroyed all evidence of its origin, apart from the object itself. I present the theory, “What is in this box was created by a designing intelligence, as opposed to undirected natural processes like geological formation.” This theory is non-falsifiable, since anything that nature could have produced a designing intelligence could also have produced (note the similarity between this and ID). Still, if I open the box and find a desktop computer, it seems we would have rational grounds for thinking a designing intelligence created it. The fact that “What’s in this box was created by a designing intelligence” is non-falsifiable wouldn’t change that, nor would it be any reason to reject this sort of intelligent design theory.

Of course, human beings are not computers, but there do seem to be conceivable circumstances where scientists could have rational grounds for thinking ID theory is true. If for example scientists examined the physiology of organisms and discovered that all life forms on earth are actually robots (including us), it seems scientists would have good evidence for ID theory being true. One might say that if we had sufficient evidence for it, ID would be science, but since we don’t, it is not science. Even if that’s true, the fact remains that non-falsifiability is insufficient for preventing a theory from being legitimately scientific. If it were sufficient, then it would be the case that science should not accept ID even if the scientific evidence overwhelmingly supported it, and that doesn’t seem quite right.

In response, one could bite the bullet and say, “Yes, that may be the case, but science is a human activity and we can place whatever restrictions we wish.” Maybe that’s true, but according to the falsifiability criterion, even scientifically well-supported theories we know to be true must be rejected by science, and this seems no more sensible than saying scientifically well-supported theories we know to be true must be rejected by science if the theories are created by somebody named Bob. Science is indeed a human activity and we can set up any restrictions we wish, but if we set up restrictions like “a theory that is non-falsifiable is sufficient for science to reject it, truth and evidence be hanged” and “a theory being made by a guy named Bob is sufficient for science to reject it, truth and evidence be hanged” we can no longer say that science’s goal is to rationally obtain the truth about the natural world; it becomes more like a game. If one wants to have science still be a rational project in the truth business, one could say ID should be rejected because there is insufficient evidence for it. Maybe that’s true, but then ID should be rejected for that reason, not because it isn’t falsifiable. A wrong theory should be rejected for the right reasons.

Conclusion

Falsificationism sucks horribly as a criterion for meaning and rationality. There are a number of meaningful, rational, empirically verifiable statements that are not falsifiable. In science, demanding that beliefs/theories/hypotheses be capable of being proven false to count as scientific doesn’t work either; both abiogenesis and evolution cannot be proven false, even though it is conceivable to have evidence against them. A better form of falsificationism with respect to science is “it is possible in principle to have sufficiently strong evidence against it,” but even this doesn’t work if we’re to have science be in the business of truth and rationality. After all, it is possible for a claim to be meaningful, rational, and empirically verifiable without being falsifiable even in this weak sense, as the “What’s in this box” scenario illustrates.







[1] One might think, “Would anyone really be willing to go this far?” Consider this real-life case. To support a claim I made in an Internet dialogue with an atheist, I said that if things can pop into being uncaused out of nothing, it becomes inexplicable why anything and everything doesn’t pop into being uncaused out of nothing. The atheist suggested that perhaps such things did happen somewhere in the cosmos outside of our observable sphere; apparently that this would conflict with the conservation of mass-energy wasn’t enough for him to reject this as a live option. To be sure, I couldn’t rigorously prove that violations of physical laws as we know them aren’t happening somewhere far out in space, even though astrophysics tends to rely on such uniformity of nature.

[2] I’m actually being somewhat uncharitable to ID here, since more falsifiable formulations of the belief are possible. If for example we go by how the most prominent ID organization describes the theory, when applied to life ID holds that certain features “of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” That this is falsifiable seems difficult for ID critics to deny, for then they would have to give up the claim that they are justified in saying that ID is not the best explanation for certain features of living things, and they would have to give up any claim of us knowing that some rival theory explains it better. Another formulation one could make is “the theory that intelligent causes are necessary to explain certain features of living things.” This claim also appears falsifiable (at least in the “it is possible in principle to have sufficiently strong evidence against it” sense); if we show that natural processes can do the job, one will have falsified this sort of ID theory.

Sunday, July 15, 2012

Simplicity as Evidence of Truth: How Do We Know It?

Home  >  Philosophy  >  Philosophy Of Science

This is part 3 of a series on simplicity being evidence of truth.
  1. Simplicity as Evidence of Truth: Justifying Ockham’s Razor
  2. Simplicity as Evidence of Truth: Theories Tying Into Background Knowledge
  3. Simplicity as Evidence of Truth: How Do We Know It?
In this entry I’ll offer some concluding remarks to this series, focusing on this question: how do we know that simplicity is evidence of truth? Whereas in part 1 of simplicity as evidence of truth I offered some justification for thinking that simplicity is evidence of truth, here I’ll say a bit about how in practice we come to believe simplicity is such a guide to truth.

Recap

In part 1 of simplicity as evidence of truth I gave the following illustration to evoke the intuition that, ceteris paribus, the simplest explanation is the one most likely to be true. Suppose we investigate a new area of scientific research for which we have no background knowledge to tell us which theory is more probable and we study two variables: x and y. We have the following data:

Data Set 1

x:  0  1  2  3  4  5  6
y:  0  2  4  6  8  10  12


Let’s call the above situation the “Data Set 1 Scenario.” An equation presents itself for predicting the other values of x and y: y = 2x (we’ll call this equation 1), but that isn’t the only formula that fits the data. As Swinburne points out, all formulas of the following form (which we’ll call equation 2) yield the same data as well.[1]

Equation 2
 
y = 2x + (x − 1)(x − 2)(x − 3)(x − 4)(x − 5)(x − 6)z


Where z can be any constant or function of x. Although they agree with the data, the two equations may make very different predictions of unobserved data. For example, we can let z be x/720 to get the following equation that predicts unobserved data differently from equation 1:

Equation 3
 
y = 2x + (x − 1)(x − 2)(x − 3)(x − 4)(x − 5)(x − 6)(x/720)


Equation 1 predicts that when x = 9, y will be 18. Equation 3 predicts that when x = 9, y will be 270. If we were forced to go with either equation 1 or equation 3 to predict further data, which one would we choose? Obviously equation 1, and the reason seems clear: simplicity. There are literally infinitely many equations fitting Data Set 1 yielding infinitely many different y values for x = 9 (if nothing else, there are infinitely many choices for z), yet if one were forced to correctly predict what y would be for x = 9 and the consequences were sufficiently dire for predicting incorrectly (say, upon pain of ignominious death that one strongly wants to avoid), we would think it quite irrational to give any answer other than y = 18 for x = 9. This fact (if the stakes were sufficiently high it would be irrational to go with anything other than y = 18 for x = 9) suggests that simplicity is indeed a tool of rationality; we use simplicity as a guide for obtaining the truth.
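To make the divergence concrete, here’s a small Python sketch (mine, not Swinburne’s) that checks that equations 1 and 3 agree on every point of Data Set 1 and then evaluates both at the unobserved point x = 9:

```python
from fractions import Fraction

def equation_1(x):
    # Equation 1: y = 2x
    return 2 * x

def equation_3(x):
    # Equation 3: y = 2x + (x-1)(x-2)(x-3)(x-4)(x-5)(x-6) * (x/720)
    correction = 1
    for r in range(1, 7):                  # roots at x = 1, ..., 6
        correction *= (x - r)
    return 2 * x + correction * Fraction(x, 720)   # z = x/720, kept exact

# Both equations reproduce Data Set 1 (x = 0..6, y = 0..12) exactly...
for x, y in zip(range(7), range(0, 13, 2)):
    assert equation_1(x) == y
    assert equation_3(x) == y

# ...yet diverge on unobserved data:
print(equation_1(9))   # 18
print(equation_3(9))   # 270
```

Using Fraction keeps the arithmetic exact, so the perfect fit on the observed points isn’t an artifact of floating-point rounding.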

Past Success Justification

The law of parsimony has been widely used in the history of science. How is it that we know simplicity is evidence of truth? One possible response is, “In science’s history, we’ve found that simpler theories tend to be better predictors.” One could say that the reason we believe simplicity is a guide for truth is through experience. Simplicity has served us well in the past, so probably it will serve us well in the future.

There are some problems with that “past success justification” approach however. Swinburne says the “in the past, simpler scientific theories have been the better predictors” claim is doubtful, since in science simpler physical laws have on many occasions been supplanted by more complicated laws.[2] Swinburne also adds:
But even if simplest theories have usually proved better predictors, this would not provide a justification for subsequent use of the criterion of simplicity, for the reason that the justification itself already relies on the criterion of simplicity. There are different ways of extrapolating from the corpus of past data about the relative success which was had by actual theories and which would have been had by possible theories of different kinds, if they had been formulated. “Usually simpler theories predict better than more complex theories” is one way. [3]
There are many ways to form a pattern of “theories that predict best” (or for that matter, true theories) that fit past experience (many of these giving different predictions about which future theories are likely to be better predictors), and surprise, surprise, simplicity is a big factor when choosing the “right” pattern. Astute readers may recognize the similarity between this situation and the Data Set 1 scenario, where there are innumerable theories that fit Data Set 1, predict that data set with perfect success, but give different future predictions.

If you’re skeptical of the existence of innumerable patterns fitting past experience, note that if nothing else one can construct an outrageously complicated disjunction of specific descriptions like “If (a man named Newton does such-and-such at such-and-such time) or (a man named Einstein does such-and-such at such-and-such time) or (...) or (...) ..., then the resulting equation will be at least approximately true.” If one objects, saying, “Well, if Newton were named Smith, the same results would have obtained,” one should note that (a) this still doesn’t change the fact that the massive disjunction perfectly fits past successes; (b) we can add “would have been” matters to the disjunction to get something like “If (a man named Newton or Smith did such-and-such at such-and-such time) ..., then the resulting equation will be at least approximately true.”

What’s more, just as equation 2 used a relatively complex set of parenthetical units (x − 1)(x − 2)…(x − 6) to help perfectly fit the observed data set, yielding infinitely many equations for infinitely many different possible future results (largely thanks to there being infinitely many choices for z), so too one can adopt a similar sort of approach for past successes: there are infinitely many models that perfectly fit past knowledge (and knowledge that would have obtained in other circumstances), give detailed predictions for the future, yet contradict each other over those future expectations. If nothing else, it remains true that there are many massive disjunction-based rules (“If (A) or (B) or (C) or (D) ..., then the theory is a good one”) that perfectly fit past successes yet give different predictions of what is likely to be true in the future. Each of these infinitely many models, if it had been used in the past, would have yielded great results. So clearly there are infinitely many models that fit past successes, and at least some of these models (indeed, infinitely many) will contradict each other over future expectations. As noted in part 1 of this series, Swinburne notes that the criterion “Choose the theory which postulates that the future resembles the past” is empty (infinitely many theories that are inconsistent with each other do that), and that to have real content we should change it to “Choose the theory which postulates that the future resembles the past in the simplest respect.”[4]
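The same trick can be sketched in code. Below is a toy Python illustration (my own construction, not Swinburne’s, with a made-up record of past successes) of a whole family of rules that agree perfectly on the past yet disagree about the future:

```python
# Hypothetical record: years in which (let's suppose) the simpler of two
# rival theories turned out to be the better predictor.
past_record = {1687: "simpler", 1859: "simpler", 1915: "simpler"}

def make_rule(z):
    """Return one member of an infinite family of rules. Every member
    reproduces the past record exactly; members differ only over future
    cases, depending on the arbitrary parameter z."""
    def predict(year):
        if year in past_record:
            return past_record[year]      # perfect fit with the past
        return "simpler" if z == 0 else "more complex"
    return predict

# All members of the family fit the past record perfectly...
assert all(make_rule(z)(year) == verdict
           for z in range(10)
           for year, verdict in past_record.items())

# ...but contradict one another about a future case:
print(make_rule(0)(2050))  # simpler
print(make_rule(1)(2050))  # more complex
```

The past record alone cannot tell us which member of the family is the “right” one; picking the z = 0 rule over its rivals is itself an appeal to simplicity.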

So how do we know that simplicity is a tool for rationality? In practice, this is something we know via a priori intuition. Even if it is possible to construct a clever argument for why we should accept simplicity as a guide to truth, in practice that’s not why we do accept it. We accept it because it just seems to be true.

Not Just in Science

While a big focus in this series has been simplicity and science, the law of parsimony extends beyond science. Swinburne gives this nice illustration.
If we can explain an event as brought about by a person in virtue of powers of the same kind as other humans have, we do not postulate some novel power—we do not postulate that some person has a basic power of bending spoons at some distance away if we can explain the phenomenon of the spoons’ being bent by someone else bending them with his hands.[5]
Another illustration is the use of simplicity in the argument from ontological simplicity in my series on the moral argument.







[1] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 22.

[2] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 52.

[3] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 52.

[4] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 23.

[5] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 59.

Sunday, July 8, 2012

Simplicity as Evidence of Truth: Theories Tying Into Background Knowledge

Home  >  Philosophy  >  Philosophy Of Science

This is part 2 of a series on simplicity being evidence of truth.
  1. Simplicity as Evidence of Truth: Justifying Ockham’s Razor
  2. Simplicity as Evidence of Truth: Theories Tying Into Background Knowledge
  3. Simplicity as Evidence of Truth: How Do We Know It?
As mentioned in part 1, much of this series is taken from Richard Swinburne’s Simplicity as Evidence of Truth. This article will discuss how simplicity plays a role in judging how well a theory fits in with our background knowledge. Before moving on I’ll introduce some preliminaries.

Some Facets of Simplicity

In this article it’ll be nice to mention a few facets of simplicity borrowed from Swinburne.[1]
  1. Number of entities. A theory that postulates fewer entities is simpler than one that postulates more entities. As Swinburne notes, “The application of this facet in choosing theories is simply the use of Ockham’s razor.”[2]
  2. Number of kinds of things. A theory that postulates fewer different kinds of entities is simpler than if it postulated many different kinds of entities, e.g. a theory that postulates fewer different kinds of quark is simpler than a theory that postulates more of them.
  3. Number of laws. A theory with fewer separate laws is simpler than one with many separate laws.
More could be listed (and Swinburne does list more) but this small list will be enough for our purposes.

Simplicity Not the Only Factor

I’ve been using the phrase “ceteris paribus” an awful lot in this series precisely because there are other factors to consider when choosing a good explanation besides simplicity. The list below is largely taken from Richard Swinburne.[3]
  • Yielding the data. This category covers both explanatory scope (how much data the theory explains) and explanatory power (the probability of expecting the data if the explanation were true).
  • Content. All else held constant, the theory that makes more claims (or is a stronger claim in the sense that it “claims more”) is less likely to be true than one that makes fewer claims (or is a weaker claim in the sense that it “claims less”). For example, “At least a million animals of this species have black fur” is a stronger claim than “At least ten animals of this species have black fur.” All else held constant, the weaker claim “At least ten animals of this species have black fur” is more likely to be true than the stronger claim.
  • Fitting in with background knowledge. For example, “The hypothesis that John stole the money is rendered more probable if we know [due to our background knowledge] that John has stolen on other occasions and comes from a social group among whom stealing is widespread.”[4] The likelihood of such background beliefs being true plays a role in our judgments. However, simplicity has a role when determining how well a theory fits in with our background knowledge.
And it’s that last item that we’ll deal with next in regards to simplicity.

Simplicity and Background Knowledge

Where there is relevant background knowledge, simplicity is a factor in determining which theory “fits best” with such data. Swinburne even goes so far as to say that “fitting better” is “fitting more simply,” making for a simpler overall view of the world.[5] When discovering a new chemical substance, it’s possible that it has a special kind of quark or a special kind of chemical bond never before discovered that yields the same data, but it’s simpler not to posit “more than one kind of thing” and, ceteris paribus, simply to use what’s already available. Notice that in this case, positing a special kind of quark or chemical bond never seen before does tie in with our background knowledge to some extent (we already believe there are such things as quarks and chemical bonds), but fitting into our background knowledge more simply (using the same sorts of quarks and chemical bonds we know of) is the better option. Simplicity clearly plays a role when deciding how well a theory fits in with our background knowledge.

Theories often interact with each other: our current theory of genetics ties in with belief in DNA, belief in DNA ties in with belief in atoms, and so on. Also, when making predictions we rely on background knowledge; e.g. at one point people thought that if the earth were really moving, birds would be blown west when they let go of a tree branch. We no longer accept that as evidence that the earth is not moving, because we have a different background system of physics that allows us to make different predictions. Call this network of theories and assumptions a conceptual grid. When looking into which theories tie into our background knowledge, we are assuming that, ceteris paribus, the world is more likely to be simple than complex, and so, ceteris paribus, we prefer simpler conceptual grids to complex ones. Hence when discovering a new chemical, we don’t assume the chemical has entirely new types of quarks (thus giving us a more complex conceptual grid) when the chemical having the same sort of quarks we are familiar with will do (in terms of yielding the data, etc.).





[1] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), pp. 29-30, 31.

[2] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 29.

[3] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), pp. 18-19.

[4] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 18.

[5] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 45.

Thursday, July 5, 2012

Simplicity as Evidence of Truth: Justifying Ockham’s Razor

Home  >  Philosophy  >  Philosophy Of Science

This is part 1 of a series on simplicity as evidence of truth.
  1. Simplicity as Evidence of Truth: Justifying Ockham’s Razor
  2. Simplicity as Evidence of Truth: Theories Tying Into Background Knowledge
  3. Simplicity as Evidence of Truth: How Do We Know It?
In one blog entry where I argued that objective morality could be used as evidence for theism, I used what I called the argument from ontological simplicity. In that blog entry I mentioned the principle of rationality that, all else held constant, the simplest explanation is the best one, which incidentally is one of the formulations of Ockham’s razor (also known as Occam’s razor), named after the 14th-century philosopher William of Ockham. It has also been called, perhaps more accurately, the law of parsimony. A formulation of Ockham’s razor closer to what the philosopher originally stated, and one facet of simplicity, is to not multiply explanatory entities beyond necessity. Though both versions of Ockham’s razor are often used in science, philosophy, and everyday life, is it really true that, ceteris paribus, the simplest explanation is the one most likely to be true?

A Quick Justification for Simplicity

Much of this entry (and its sequels) will be taken from Richard Swinburne’s excellent little book Simplicity as Evidence of Truth, including the following illustration. Suppose we investigate a new area of scientific research for which we have no background knowledge to tell us which theory is more probable and we study two variables: x and y. We have the following data:

Data Set 1

x | 0 | 1 | 2 | 3 | 4 | 5 | 6
y | 0 | 2 | 4 | 6 | 8 | 10 | 12


Let’s call the above situation the “Data Set 1 Scenario.” An equation presents itself for predicting the other values of x and y: y = 2x (we’ll call this equation 1), but that isn’t the only formula that fits the data. As Swinburne points out, all formulas of the following form (which we’ll call equation 2) yield the same data as well.[1]

Equation 2
 
y = 2x + (x − 1)(x − 2)(x − 3)(x − 4)(x − 5)(x − 6)z


Where z can be any constant or function of x. (Strictly speaking, for equation 2 to fit the data point at x = 0, z must vanish at x = 0, as it does in the example below; there are still infinitely many such choices of z.) Although they agree with the observed data, such equations can make very different predictions about unobserved data. For example, we can let z be x/720 to get the following equation, which predicts unobserved data differently from equation 1:

Equation 3
 
y = 2x + (x − 1)(x − 2)(x − 3)(x − 4)(x − 5)(x − 6)(x/720)


Equation 1 predicts that when x = 9, y will be 18. Equation 3 predicts that when x = 9, y will be 270. If we were forced to go with either equation 1 or equation 3 to predict further data, which one would we choose? Obviously equation 1, and the reason seems clear: simplicity. There are literally infinitely many equations fitting Data Set 1, yielding infinitely many different y values for x = 9 (if nothing else, there are infinitely many choices of z). Yet if one were forced to correctly predict what y would be for x = 9, and the consequences of predicting incorrectly were sufficiently dire (say, an ignominious death one strongly wants to avoid), we would think it quite irrational to give any answer other than y = 18.
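To make the divergence concrete, here is a short Python sketch (my own illustration, not from Swinburne’s book) verifying that equation 1 and equation 3 (the latter taking z = x/720) agree on every point of Data Set 1 yet part ways at the unobserved x = 9:

```python
# Equation 1: the simple hypothesis y = 2x.
def eq1(x):
    return 2 * x

# Equation 3: equation 2 with z = x/720.
def eq3(x):
    correction = 1
    for r in range(1, 7):        # the factors (x-1)(x-2)...(x-6)
        correction *= (x - r)
    return 2 * x + correction * (x / 720)

# Both equations reproduce Data Set 1 exactly
# (for x = 1..6 the product vanishes; for x = 0 the z term does)...
for x in range(7):
    assert eq1(x) == eq3(x) == 2 * x

# ...but they disagree wildly about the unobserved point x = 9.
print(eq1(9), eq3(9))  # 18 270.0
```

The moral is that agreement with all observed data does nothing to separate the two hypotheses; only the simplicity criterion favors eq1.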

Examples like this strongly suggest that simplicity is among the tools of rationality. For equations in science (e.g. the multitude of equations in physics), there are literally infinitely many possible equations that perfectly fit the observed empirical data yet give different predictions about unobserved data, including wild ones like equation 2 (where the factors (x − 1)(x − 2)… are added so that the data come out right when x is 1, 2, etc.). Out of that multitude, scientists rationally prefer the simpler equations when making new predictions, even if they don’t do so consciously. If simplicity isn’t a guide to truth, why would it be irrational, when the stakes are sufficiently high, to go with anything other than y = 18 for x = 9?

Objections and Rebuttals

Objection: We don’t need simplicity; we just assume that the future is like the past to make the next prediction. That is how we can favor equation 1 over equation 3.

Rebuttal: Both equations assume the future resembles the past, e.g. both equations say that for any time in the future when x = 4, we will get y = 8. Swinburne notes that the criterion, “Choose the theory which postulates that the future resembles the past” is empty (infinitely many theories that are inconsistent with each other do that), and that to have real content we should change it to “Choose the theory which postulates that the future resembles the past in the simplest respect.”[2] But in that case we have the simplicity criterion in action.

Objection: We can test and eliminate alternate theories with further testing rather than relying on simplicity. For example, equation 3 predicts that when x = 7, y would be 21 whereas equation 1 predicts y would be 14. So if we observe that y = 14 when x = 7, then we’ve confirmed equation 1 over equation 3.

Rebuttal: Even with this approach, infinitely many theories will always remain. In philosophy of science, the fact that countless theories fit the empirical data is sometimes described as the empirical data underdetermining the theories. For example, suppose we do observe that y = 14 when x = 7, so that we get Data Set 2 below:

Data Set 2

x | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
y | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14


We can see that there will be infinitely many equations fitting Data Set 2 by noting equation 4 below, where z stands for any constant or function of x (so long as the added term still vanishes at the observed point x = 0).

Equation 4
 
y = 2x + (x − 1)(x − 2)(x − 3)(x − 4)(x − 5)(x − 6)(x − 7)z


Thus the “just keep on testing” approach simply isn’t enough to eliminate the alternatives in the Data Set 1 scenario. Of course, it is possible that the proposed simplest theory might later be shown to be false by further observations, but even when that occurs, if we are to choose between empirically identical theories, all else held constant we are rational to prefer the simplest theory as the most likely one.
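One way to see numerically that further testing never exhausts the rivals is the following Python sketch (my own illustration, taking z = c·x for various constants c so that the added term vanishes at every observed point, including x = 0). Every such member of the equation 4 family fits Data Set 2 exactly, yet each predicts a different y at the untested x = 8:

```python
# Equation 4 with z = c*x:  y = 2x + (x-1)(x-2)...(x-7) * c * x
def eq4(x, c):
    correction = 1
    for r in range(1, 8):        # the factors (x-1)...(x-7)
        correction *= (x - r)
    return 2 * x + correction * c * x

# Every constant c reproduces all eight points of Data Set 2...
for c in (0, 1, -2, 0.25):
    for x in range(8):
        assert eq4(x, c) == 2 * x

# ...while each c gives a different prediction at x = 8,
# since eq4(8, c) = 16 + 5040 * 8 * c.
print([eq4(8, c) for c in (0, 1, -2, 0.25)])  # [16, 40336, -80624, 10096.0]
```

Observing y at x = 8 would rule out all but one of these four, but infinitely many other values of c would still remain in play, which is the underdetermination point.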

Objection: There are reasons why we choose simpler theories that don’t have to do with simpler theories being more likely to be true; e.g. it is more convenient to work with simpler theories.

Rebuttal: Convenience is nice to have, but it’s still the case that in practice we rely on simpler theories as being true, ceteris paribus. If we want to know whether e.g. a bridge will be able to withstand trucks driving over it, we want a theory that gives us the truth, not one that is merely more convenient to work with. To illustrate further, consider the following case. A scientific experiment has gone horribly wrong and part of the building will be destroyed. You are trapped inside the building and have the option of being in either region #1 or region #2 of the building by the time the explosion occurs; one of those regions will be annihilated, and these are your only two options. Which region will be destroyed depends on which theory about the experiment is true: theory S (the simpler theory) or theory C (the more complicated theory). Both theories are equal in explanatory power, explanatory scope, how well they tie in with background knowledge, etc. Theory S says that region #1 will be destroyed, and theory C says region #2 will be destroyed. The rational thing to do would be to go with the simpler theory and move to region #2, not because the simpler theory is easier and more convenient to use, but because the simpler theory is more likely to be true.

To make things more concrete, suppose it came down to the Data Set 1 scenario, where the data set was this:

Data Set 1

x | 0 | 1 | 2 | 3 | 4 | 5 | 6
y | 0 | 2 | 4 | 6 | 8 | 10 | 12


Suppose guessing the right non-destroyed region depended on predicting the right answer for what y would be when x = 9. Again, it would be irrational to guess anything other than y = 18, even if it meant going to region #2 and even if the travel would be somewhat burdensome (e.g. you would have to rush up some flights of stairs as opposed to sitting on a comfortable couch).





[1] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 22.

[2] Swinburne, Richard. Simplicity as Evidence of Truth (Milwaukee, Wisconsin: Marquette University Press, 1996), p. 23.