Introduction
In my third Maverick Christian Vlog episode I refer to a scholarly paper called “A Folk Semantics Argument for Moral Non-Naturalism.” In this blog entry I’ll provide some of the technical background so that those of us who aren’t analytic philosophers can better understand it.
Why is it important that morality is nonnatural? One reason is that it reveals that there is more to reality than the natural, physical world. Another reason is that morality being nonnatural makes it so that atheism doesn’t fit in very well with the existence of morality, especially objective morality, for reasons I explain in my third vlog episode. In contrast, the existence of an objective and nonnatural morality makes perfect sense in a theistic worldview.
Next I’ll explain some philosophy lingo before explaining the math used in the paper.
Philosophical Terminology
Moral semantics is about how to define moral terms. In philosophy, the word “folk” refers to colloquial stuff that laypersons use; e.g. “folk psychology” is an (albeit somewhat derogatory) term for beliefs about the human mind that ordinary people accept. In the paper, “folk semantics” with respect to morality refers to what most ordinary people mean when they use terms like “morally wrong.”
A stipulative definition assigns a meaning to a particular word or phrase to be used in a given context (such as a philosophy paper). For example, in a philosophy paper one might give a stipulative definition of “fully justified” by saying, “I will say that a belief is fully justified to denote the belief being justified to the point where one can rationally say one knows it to be true.” Stipulative definitions are often used to conveniently assign a label to some concept and won’t necessarily match the lexical (“dictionary”) definition.
A hypothetical imperative takes the form of something like, “If you want to do X, you should do Y” and describes what to do as a matter of practical necessity to accomplish some goal. For example, “If you want to do well in school, you ought to study,” meaning something like, “As a matter of practical necessity, you need to study to do well in school.” The sort of ought used in hypothetical imperatives is called a hypothetical ought.
A category mistake (or category error) is attributing a characteristic to something that it can’t possibly have because it’s not of the right category; e.g. saying that the number six has mass or volume, when the category of abstract objects is such that they can’t have mass or volume.
Set Theory
Some Basics
Sets are collections of stuff where order and duplicates are irrelevant. For example, the following sets are all identical:
{1, 2, 3, 4}
{1, 2, 2, 3, 4}
{4, 3, 2, 1}
There’s also the empty set, sometimes symbolized as {}, which is the set that has no members.
To illustrate some set operations, suppose our “universe” consists entirely of natural numbers 1 through 9. Now let A, B, and C be the following:
A = {1, 5, 9}
B = {1, 5, 7, 8}
C = {2, 3}
Symbol | Example | Explanation
∈ (element of) | 1 ∈ A | For any set S, x ∈ S means that x is an element of S.
∉ (not an element of) | 1 ∉ C | For any set S, x ∉ S means that x is not an element of S.
∩ (intersection) | A ∩ B = {1, 5} | Given sets S and T, S ∩ T contains all the elements x such that x ∈ S and x ∈ T.
∪ (union) | A ∪ B = {1, 5, 7, 8, 9} | Given sets S and T, S ∪ T contains all the elements x such that x ∈ S or x ∈ T.
⊆ (subset) | {1, 5} ⊆ B | Given sets S and T, S is a subset of T if and only if each member of S is also a member of T.
⊄ (not a subset) | {2, 9} ⊄ B | Given sets S and T, S is not a subset of T if and only if it is not the case that S ⊆ T.
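These operations map directly onto Python’s built-in set type. Here’s a quick sketch using the sets A, B, and C from above (the variable names are just mine):

```python
# The example sets from above
A = {1, 5, 9}
B = {1, 5, 7, 8}
C = {2, 3}

print(1 in A)       # element of: True
print(1 not in C)   # not an element of: True
print(A & B)        # intersection, equals {1, 5}
print(A | B)        # union, equals {1, 5, 7, 8, 9}
print({1, 5} <= B)  # subset: True
print({2, 9} <= B)  # not a subset, so: False
```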
The set “All x such that x > 3” can be symbolized like this:

{ x | x > 3 }

The set “All x ∈ A such that x > 3” can be symbolized as:

{ x ∈ A | x > 3 }

The set described above would be {5, 9}.
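Set-builder notation has a direct analogue in Python’s set comprehensions; a small sketch (with A as defined earlier):

```python
A = {1, 5, 9}

# { x ∈ A | x > 3 } as a set comprehension
subset = {x for x in A if x > 3}
print(subset)  # equals {5, 9}
```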
Relations
Unlike sets, where order and duplicates don’t matter, order and duplicates do matter in tuples. The following are all different from each other:

(1, 2, 3, 4)
(1, 2, 2, 3, 4)
(4, 3, 2, 1)

Those who have taken algebra might remember the tuple known as the ordered pair:

(2, 3)
(11, 3)

Relations are sets of tuples, with a binary relation being a set of ordered pairs. For example, suppose we have this set:

{Diana, Steve, Barbara}

The relation “taller-than” could consist of this set of ordered pairs, where e.g. Diana is taller than Steve:

{(Diana, Steve), (Steve, Barbara), (Diana, Barbara)}

If we symbolize our taller-than relation as T, then we could say that (Diana, Steve) ∈ T.
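In Python, a binary relation can be modeled as a set of tuples; here’s the taller-than relation sketched that way (with the names stored as strings):

```python
# The taller-than relation as a set of ordered pairs
T = {("Diana", "Steve"), ("Steve", "Barbara"), ("Diana", "Barbara")}

print(("Diana", "Steve") in T)  # True: Diana is taller than Steve
print(("Steve", "Diana") in T)  # False: order matters in tuples
```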
Relations between different sets are also possible. Suppose we have these two sets:

L = {Reed, Scott, Clark}
F = {Sue, Jean, Lois}

And the “is-husband-of” relation H is a relation from set L to set F; e.g. Reed is the husband of Sue:

H = {(Reed, Sue), (Scott, Jean), (Clark, Lois)}

An inverse of a binary relation R goes like this:

R⁻¹ = {(y, x) | (x, y) ∈ R}

For example, the inverse of the “is-husband-of” relation would be the “is-wife-of” relation:

H⁻¹ = {(Sue, Reed), (Jean, Scott), (Lois, Clark)}

A relation from set A to set B is a function if each member of A is paired off with exactly one member of B. The “input” part of a function is the domain (set A) and the “output” part is called the range (set B). For instance, the “is-husband-of” relation is a function because each member of L is paired off with exactly one member of F, with L being the domain and F being the range, whereas an “is-husband-of” relation would not be a function if there were polygamous marriages.
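Both the inverse of a relation and the function test can be sketched in a few lines of Python (the helper name is_function is my own label, not anything from the paper):

```python
H = {("Reed", "Sue"), ("Scott", "Jean"), ("Clark", "Lois")}

# Inverse relation: swap the elements of each ordered pair
H_inv = {(y, x) for (x, y) in H}
print(H_inv == {("Sue", "Reed"), ("Jean", "Scott"), ("Lois", "Clark")})  # True

def is_function(relation):
    """A relation is a function if no left element is paired off more than once."""
    lefts = [x for (x, _) in relation]
    return len(lefts) == len(set(lefts))

print(is_function(H))                       # True
print(is_function(H | {("Reed", "Jean")}))  # False: a polygamous Reed breaks it
```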
Suppose relations S and T are the following:

S = {(1, 2), (10, 11)}
T = {(2, 3), (11, 12)}

A composition of two relations S and T can be symbolized as S ∘ T, and when the relations are binary, the set of ordered pairs in such a composition goes like this:

{(x, z) | (x, y) ∈ S and (y, z) ∈ T}

In our example, S ∘ T would be the following:

{(1, 3), (10, 12)}

Now suppose relation V is the following:

V = {(1, 2), (1, 3), (1, 9), (2, 3), (2, 4)}

Because the relation is binary, V(x, ⋅) is { y | (x, y) ∈ V }.
Examples:
V(1, ⋅) = {2, 3, 9}
V(2, ⋅) = {3, 4}
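Both the composition S ∘ T and the V(x, ⋅) “slice” can be computed directly from their set-builder definitions; here’s a sketch (the helper names compose and slice_at are mine):

```python
S = {(1, 2), (10, 11)}
T = {(2, 3), (11, 12)}
V = {(1, 2), (1, 3), (1, 9), (2, 3), (2, 4)}

def compose(S, T):
    """S ∘ T: all (x, z) such that (x, y) ∈ S and (y, z) ∈ T for some y."""
    return {(x, z) for (x, y1) in S for (y2, z) in T if y1 == y2}

def slice_at(V, x):
    """V(x, ⋅): all y such that (x, y) ∈ V."""
    return {y for (x2, y) in V if x2 == x}

print(compose(S, T) == {(1, 3), (10, 12)})  # True
print(slice_at(V, 1) == {2, 3, 9})          # True
print(slice_at(V, 2) == {3, 4})             # True
```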
Formal Logic
Deductive Arguments
A deductive argument tries to show that it’s logically impossible (i.e. self-contradictory, like a married bachelor) for the argument to have true premises and a false conclusion, and thus that the conclusion follows from the premises by the rules of logic. If it’s logically impossible for an argument to have true premises and a false conclusion, the argument is deductively valid, or just valid. An example of a deductively valid argument:
1. If it is raining, then my car is wet.
2. It is raining.
3. Therefore, my car is wet.

The form of this argument is called modus ponens:

1. If P, then Q
2. P
3. Therefore, Q.

A related valid form is called modus tollens:

1. If P, then Q
2. Not-Q
3. Therefore, not-P.

By contrast, the following argument is invalid; its premises could be true while the conclusion is false (e.g. the car is wet because someone sprayed it with a hose):

1. If it is raining, then my car is wet.
2. My car is wet.
3. Therefore, it is raining.
Basic Symbols and Rules of Inference
Here’s a summary of how the connectives in propositional logic work, where p and q represent propositions (claims that are either true or false):
Type of connective | English | Symbolic Logic | When it’s true/false
Conjunction | p and q | p ∧ q | True if both are true; otherwise false
Disjunction | p or q | p ∨ q | False if both are false; otherwise true
Conditional | If p, then q | p → q | False if p is true and q is false; otherwise true
Negation | Not-p | ¬p | True if p is false; false if p is true
As suggested in the above table, the symbols →, ¬, ∨, and ∧ are called connectives. It’s a somewhat misleading name, since ¬ doesn’t connect propositions even though the other three connectives do. Still, it’s a popular label a lot of logic textbooks use. While the terminology varies among writers, I’ll call a single letter a simple statement, and one or more simple statements combined with one or more connectives a compound statement. For example, “¬P” and “A ∧ B” are compound statements.
The type of conditional (p → q) being used here is called a material conditional. A material conditional is equivalent to “It is not the case that the antecedent (p) is true and the consequent (q) is false,” such that the only way for a material conditional to be false is for it to have a true antecedent with a false consequent. A material conditional might seem like a pretty weak claim (in the sense that it doesn’t claim very much), since the antecedent and consequent don’t even have to be related to each other for a material conditional to be true. Thus, “If there is a married bachelor, then Minnesota is awesome” constitutes a true material conditional since it is not the case that we have a true antecedent (there is a married bachelor) with a false consequent (Minnesota is awesome). But it turns out that a material conditional is enough for modus ponens and modus tollens to be valid rules of inference, since in a true material conditional if the antecedent is true, then the consequent is true as well.
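Since a material conditional is just “it is not the case that p is true and q is false,” it’s easy to compute; here’s a sketch (the function name implies is my own label):

```python
def implies(p, q):
    """Material conditional: false only when the antecedent is true and the consequent is false."""
    return (not p) or q

# Truth table for p -> q
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```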
Speaking of which, here are those rules of inference I’ve already mentioned in symbolic form:
modus ponens

In English | In Symbolic Logic
If p then q; p; therefore, q | p → q; p; ∴ q

modus tollens

In English | In Symbolic Logic
If p then q; not-q; therefore, not-p | p → q; ¬q; ∴ ¬p
In the convention I’m using, the lowercase letters p, q, r, ..., z are placeholders for both simple and compound statements. Thus, below is a valid instance of modus tollens.
1. (A ∧ B) → C
2. ¬C
3. ¬(A ∧ B) 1, 2, modus tollens

The order in which the premises are listed doesn’t matter; this is an equally valid instance:

1. ¬C
2. (A ∧ B) → C
3. ¬(A ∧ B) 1, 2, modus tollens
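Because validity means no truth assignment makes the premises true and the conclusion false, we can brute-force check modus tollens over all truth assignments; a sketch (not anything from the paper), using the material conditional:

```python
from itertools import product

def implies(p, q):
    # Material conditional: (not p) or q
    return (not p) or q

# Modus tollens: premises p -> q and not-q; conclusion not-p.
# Valid means no assignment has true premises and a false conclusion.
modus_tollens_valid = all(
    not (implies(p, q) and (not q) and p)  # p true means the conclusion not-p is false
    for p, q in product((True, False), repeat=2)
)
print(modus_tollens_valid)  # True

# Affirming the consequent (p -> q, q, therefore p) has a counterexample:
counterexample = any(
    implies(p, q) and q and (not p)
    for p, q in product((True, False), repeat=2)
)
print(counterexample)  # True, so that form is invalid
```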
disjunctive syllogism

In English | In Symbolic Logic
p or q; not-p; therefore, q | p ∨ q; ¬p; ∴ q
p or q; not-q; therefore, p | p ∨ q; ¬q; ∴ p

simplification

In English | In Symbolic Logic
p and q; therefore, p | p ∧ q; ∴ p
p and q; therefore, q | p ∧ q; ∴ q
Before moving forward, I’ll introduce a quick example of how to use some rules of inference. Suppose we wanted to get C from premises 1 and 2 below:
1. A ∨ (B ∧ C)
2. ¬A
3. B ∧ C 1, 2, disjunctive syllogism
4. C 3, simplification
conjunction

In Symbolic Logic
p; q; ∴ p ∧ q

hypothetical syllogism

In Symbolic Logic
p → q; q → r; ∴ p → r
Equivalences
In propositional logic, two statements are logically equivalent whenever the connectives make it so that they always have the same truth-value (i.e. both true or both false). Some rules of propositional logic are themselves equivalences, such as these:
equivalence | name of equivalence
p ⇔ ¬¬p | double negation
p → q ⇔ ¬q → ¬p | transposition (also called contraposition)
¬(p ∧ q) ⇔ ¬p ∨ ¬q | De Morgan’s laws
¬(p ∨ q) ⇔ ¬p ∧ ¬q | De Morgan’s laws
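Each equivalence can be verified by checking every combination of truth values; here’s a sketch doing that for De Morgan’s laws:

```python
from itertools import product

# De Morgan's laws hold under every assignment of truth values
for p, q in product((True, False), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))

print("De Morgan's laws check out")
```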
Equivalence rules can be used to replace statements “inline” wherever one side of the equivalence appears. As an example of how to use some equivalences, suppose we want to prove ¬C ∨ ¬D from premises 1 and 2 below:
1. A
2. (C ∧ D) → ¬A
3. ¬¬A → ¬(C ∧ D) 2, transposition
4. A → ¬(C ∧ D) 3, double negation
5. ¬(C ∧ D) 1, 4, modus ponens
6. ¬C ∨ ¬D 5, De Morgan’s laws
Conditional Proofs
The conditional is symbolized as p → q, where p is called the antecedent and q is called the consequent. A conditional proof aims to prove that a conditional is true: the antecedent of the conditional is taken as the conditional proof assumption, which is used to show that if the antecedent is true, then the consequent is true also. The structure of a conditional proof takes the following form:

conditional proof

p (conditional proof assumption)
...
q
∴ p → q
For example, suppose we want to prove A → (B ∧ C) from premises 1 and 2 below:

1. A → B
2. A → C
3. A conditional proof assumption
4. B 1, 3, modus ponens
5. C 2, 3, modus ponens
6. B ∧ C 4, 5, conjunction
7. A → (B ∧ C) 3–6, conditional proof
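We can also double-check this conditional proof semantically: over all eight truth assignments, whenever the premises A → B and A → C are true, the conclusion A → (B ∧ C) is true as well. A sketch:

```python
from itertools import product

def implies(p, q):
    # Material conditional
    return (not p) or q

# Check that the premises entail the conclusion under every assignment
entails = all(
    implies(implies(a, b) and implies(a, c), implies(a, b and c))
    for a, b, c in product((True, False), repeat=3)
)
print(entails)  # True
```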
Predicate Logic
To give an example of predicate logic, consider the following symbolization key:

B(x) = x is a bachelor.
U(x) = x is unmarried.

The letters B and U in these examples are predicates, which say something about the element they are predicating. Sometimes parentheses aren’t used; e.g. Bx being used to mean “x is a bachelor.” The symbol ∀ means “for all” or “for any,” such that the following basically means “All bachelors are unmarried”:
universal quantification

In English | In Symbolic Logic
For any x: [if x is B, then x is U] | ∀x[B(x) → U(x)]
The domain of discourse is the set of things we’re talking about when we make statements like ∀x[B(x) → U(x)], such that “∀x” means “for any x in the domain of discourse (i.e. the set of things we’re talking about here).” We can let an individual lowercase letter signify a specific element in our domain of discourse; e.g. c can signify a guy named “Charles,” and we can let B(c) signify that c is B (i.e. Charles is a bachelor).
A rule of predicate logic called Universal Instantiation allows us to instantiate a universal quantification (a ∀x[...] statement) for a specific individual, like so:
1. ∀x[B(x) → U(x)]
2. B(c)
3. B(c) → U(c) 1, universal instantiation
4. U(c) 2, 3, modus ponens
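Over a finite domain of discourse, a statement like ∀x[B(x) → U(x)] can be checked by just iterating over the domain; here’s a toy sketch (the domain and the sets of bachelors and unmarried people are made-up data):

```python
# Made-up finite domain of discourse
domain = ["Charles", "Diana", "Steve"]
bachelors = {"Charles"}            # the x such that B(x)
unmarried = {"Charles", "Steve"}   # the x such that U(x)

# For all x: B(x) -> U(x), i.e. every bachelor in the domain is unmarried
all_bachelors_unmarried = all(
    (x not in bachelors) or (x in unmarried)
    for x in domain
)
print(all_bachelors_unmarried)  # True

# Universal instantiation for c = "Charles": B(c) -> U(c)
c = "Charles"
print((c not in bachelors) or (c in unmarried))  # True
```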
Predicate logic also has a rule called universal generalization, which lets us go from a result proved for an arbitrary individual (here symbolized as t) back to a ∀x[...] statement. For example:

1. ∀x[A(x) → B(x)]
2. ∀x[B(x) → C(x)]
3. A(t) → B(t) 1, universal instantiation
4. B(t) → C(t) 2, universal instantiation
5. A(t) → C(t) 3, 4, hypothetical syllogism
6. ∀x[A(x) → C(x)] 5, universal generalization