Question: What do the following three assumptions have in common?
1. (-1)(-1) = -1.
2. The number infinity is defined by its property 0·x = 1.
3. r is the smallest rational number greater than 0.
Answer: All three assumptions would make mathematics less useful, or meaningless.
The first assumption would make the distributive property of multiplication for negative numbers useless:
(-1)(1 + -1) = (-1)(1) + (-1)(-1)
(-1)(0) = -1 + -1
0 = -2
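This derivation can be replayed as a quick sanity check in Python, using a hypothetical product function that encodes the broken assumption (the function `times` is of course just an illustration, not standard arithmetic):

```python
# Hypothetical multiplication where (-1)(-1) = -1; everything else as usual.
def times(a, b):
    if a == -1 and b == -1:
        return -1  # the broken assumption
    return a * b

# Distributivity demands: (-1)(1 + -1) == (-1)(1) + (-1)(-1)
lhs = times(-1, 1 + -1)             # (-1)(0) = 0
rhs = times(-1, 1) + times(-1, -1)  # -1 + -1 = -2
print(lhs, rhs)  # 0 -2: distributivity breaks under the assumption
```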
The second assumption is incompatible with the associative law for multiplication, and evident facts such as that 0*2=0:
1 = infinity*0 = infinity *(0*2) = (infinity*0)*2 = 1*2 = 2
1 = 2
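Incidentally, floating-point arithmetic (IEEE 754), which Python's floats follow, sidesteps exactly this contradiction by refusing to assign ∞·0 any numeric value at all; the product is "not a number":

```python
import math

inf = float('inf')
product = inf * 0           # IEEE 754 deliberately makes this NaN, not 1
print(math.isnan(product))  # True
```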
The third assumption leads to a contradiction:
x = r/2
r > x > 0
The number x = r/2 is itself a rational number greater than 0 but smaller than r, so r is not the smallest rational number greater than 0 after all.
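The halving argument can be replayed with Python's exact rational arithmetic; the candidate value chosen for r below is arbitrary, since the argument works for any positive rational:

```python
from fractions import Fraction

r = Fraction(1, 10**100)  # any candidate "smallest" positive rational
x = r / 2
print(0 < x < r)  # True: x is a strictly smaller positive rational than r
```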
Mathematics can be viewed either as a tool with practical value, or as a game. As a tool, it can for example be used to divide things into 2 equal pieces. Correspondingly, one of the rules of the game is that you can get smaller things by dividing them. What matters here is (a) that we want to be able to divide things into equal parts, and (b) that it is intuitively evident that things can be divided.
To get a better idea of how mathematics would become less useful, or even meaningless, if we were to accept those assumptions, let us now take a closer look at the third assumption.
Assumption three is the assertion (let us label it P) that there is a smallest rational number greater than 0, and that this number is r. But some of the intuitively evident properties of our tool (let us label these properties S) contradict this assumption. In other words, S implies that P is false (which we will label ¬P).
The problem with assumption three is that S and P together imply that P is true and false at the same time, P and ¬P (a contradiction). But this is absurd. Either you can divide the number r, in which case it is not the smallest rational number greater than 0, or that is not possible. How do we decide?
To resolve this contradiction we ask ourselves, what speaks in favor of P, and what against P? It seems that nothing speaks in favor of P, and everything in favor of ¬P. We not only want to be able to divide things, including r, but it also seems obvious that this should be possible. After all, what would stop us from dividing r by 2? Dividing r by 2 is a straightforward and intuitive operation, since r does not seem to have any special properties, other than the claim that it is the smallest rational number greater than 0. But that claim has not been justified.
Therefore we are going to reject P, and instead assume that it is true that P is false. In other words, we are going to accept that ¬P is true. Which means that S implies ¬P: It is true that there is no smallest rational number greater than 0.
Question: But are we justified to do so? There seems to be a lot of intuition involved here!
Answer: Well, sure! One way or the other, we are honing a tool, or playing a game. And we do whatever it takes to keep our tool as effective as possible, and to keep our game from becoming meaningless.
Question: Is there no alternative? Maybe there is a smallest number greater than 0 after all. We could just stay agnostic about this fact.
Answer: The particular game that we are playing requires that every proposition is either true or not true, and that no proposition can be both true and not true. Technically this is called the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P).
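Since there are only two possible truth values, this principle can be checked mechanically; here is a minimal Python sketch:

```python
# (P ∨ ¬P) ∧ ¬(P ∧ ¬P) holds for both truth values of P.
for P in (True, False):
    law = (P or not P) and not (P and not P)
    print(P, law)  # True in both cases
```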
Question: Why then are we playing that particular game, rather than another game?
Answer: Other people are playing different games. But this particular game proved to be a useful tool. And given our game, there is some justification for avoiding contradictions, called the principle of explosion. Explosion shows that everything can be proven from a contradiction, making it impossible to distinguish truth from falsehood. We obviously do not want that to happen, because it would render our tool useless, and the game meaningless. So if some assumption leads to a contradiction, we accept that it is false, to keep our tool from becoming useless, and to keep our game meaningful.
Question: This all seems really shaky. Are there no proofs to settle these issues?
Answer: No. To prove something true or false would presuppose that it must be either true or false, but not both or neither.
Question: But since the basic assumptions of our game are based on intuition, is it not possible that these rules themselves are already inconsistent?
Answer: That is possible. But those assumptions (axioms) have proved really useful so far, so that seems unlikely. And if our foundations eventually turn out to be inconsistent, then we will simply fix that inconsistency by rejecting certain axioms, or by adopting new ones, while trying to keep our tool useful.
Question: What about probability theory, can we use it to estimate how likely our assumptions are to be consistent?
Answer: Probabilistic reasoning (inductive reasoning), in contrast to the deductive reasoning above, allows for the possibility that the conclusion is false, even if all of the premises are true. Yet probabilistic reasoning is largely based on deductive foundations. For example, when you add or multiply probabilities (numbers), you rely on the validity of the Peano axioms, which are statements in first-order logic (a deductive system). So what you are asking is whether it is possible to use a system's capabilities to verify that its capabilities are valid.
It is indeed possible to use inductive reasoning to examine our deductive reasoning. And I did so above, when I said that the current axioms are probably consistent, because they have worked well so far. But the title of this post mentions intuition for a reason. When we justify inductive reasoning by the fact that it has worked in the past, we are relying on inductive reasoning itself. It is like trusting somebody because they claim to be trustworthy.
So the answer is that you always rely on some amount of intuition, and on acting in good faith that your intuitions are correct, since we are unable to prove our intuitions correct using arguments that themselves fundamentally rely on those intuitions.
We are now almost ready to formalize the above. Before we can do so we need to take a look at what is called ‘material implication’.
Intuitive explanation of material implication
Why is the material implication of classical logic (also known as material conditional or material consequence), p -> q, defined to be false only when its antecedent (p) is true and the consequent (q) is false? Here is an informal way to think about it.
You could view logic as metamathematics, a language designed to talk about mathematics. Logic as the “hygiene”, the grammar and syntax of mathematics.
In the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Now what if we want to express the natural language construction “If…then…” in this language? Well, there are exactly sixteen possible truth functions of two inputs p and q (since there are 2^2 = 4 possible input combinations, and 2^(2^2) = 16 ways to map them to outputs). And the candidate that best captures the connotations of what we mean by “If…then…” is the definition of material implication. Here is why.
By stating that p -> q is true we want to indicate that the truth of q can be inferred from the truth of p, but that nothing in particular can be inferred from the falsity of p. And this is exactly the meaning captured by the material conditional:
First, when “If p, q” is true, and we also know that p is true, then we want to be able to infer q. In other words, if we claim that if p is true then q is true, then if p is indeed true, q should be true as well. This basic rule of inference has a name, it is called modus ponens.
Second, if we claim “If p, q”, then if p is false, we did not say anything in particular about q. If p is false, q can either be true or false, our claim “If p, q” is still true.
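Both the count of sixteen candidate functions and the truth table of the chosen one can be verified in a few lines of Python:

```python
from itertools import product

# All truth functions of two inputs: 4 input rows, 2^4 = 16 output patterns.
rows = list(product((True, False), repeat=2))       # 4 input combinations
functions = list(product((True, False), repeat=4))  # 16 truth functions
print(len(functions))  # 16

# Material implication: false only when p is true and q is false.
for p, q in rows:
    print(p, q, (not p) or q)
```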
But notice that it is not possible to capture all notions of what we colloquially mean by “If…then…” statements as a two-valued truth function.
It is for example possible to make meaningless statements such as “If grass is red then the moon is made of cheese.” This is, however, unproblematic under the assumption that logic is an idealized language adequate for mathematical reasoning, since we are mainly interested in simplicity and clarity. Under this assumption, such nonsense implications are analogous to grammatically correct but meaningless sentences that can be formed in natural languages, such as “Colorless green ideas sleep furiously“.
To demonstrate its adequacy for mathematics, here is a mathematical example:
If n > 2 then n^2 > 4.
We claim that if n is greater than 2 then its square must be greater than 4. For n = 3, this is obviously true, as we claimed. But what about n not greater than 2? We didn’t say anything in particular about such n. Its square could be larger than 4 or not. And indeed, n = 1 yields a false consequent and n = -3 a true one. Yet the implication is true in both cases.
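The same check can be run in Python, encoding p -> q as (not p) or q:

```python
def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

for n in (3, 1, -3):
    print(n, implies(n > 2, n**2 > 4))  # True for all three values
```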
Intuitively more problematic are statements such as (p and not(p)) -> q: p and its negation imply q. Think about it this way. This implication is a tautology, it is always true. And you believe true statements. This does not, however, mean that you must believe that an arbitrary q is true too (as long as you stay consistent), since in case of the falsity of the antecedent you are not making any particular claim about the truth of the consequent (q). And since the statement that p is true and false, p AND not(p), is always false (remember that the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P), requires that every proposition is either true or not true, and that no proposition can be both true and not true), q can be false without invalidating the implication.
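That (p and not(p)) -> q is a tautology, whatever q happens to be, can again be confirmed by brute force over all truth-value combinations:

```python
def implies(p, q):
    return (not p) or q

# The antecedent p and not p is always false, so the implication always holds.
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p and not p, q))  # True in all four cases
```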
Another way to look at p -> q is by interpreting it as “p is a subset of q”. Then if it is true that x is an element of p, then it must be true that it is also an element of q (since q contains p). However, if x is not an element of p, then it might still turn out to be an element of q, since q can be larger than p.
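The subset reading can be illustrated with Python sets (the particular sets below are, of course, just an arbitrary example):

```python
p = {1, 2}     # the cases where p holds
q = {1, 2, 3}  # q contains p

print(p <= q)                 # True: p is a subset of q
print(2 in p and 2 in q)      # True: an element of p is also in q
print(3 in q and 3 not in p)  # True: an element of q need not be in p
```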
Proof by contradiction
Here is a formula which is known as ‘reductio ad absurdum’:
((¬p -> q) & (¬p -> ¬q)) -> p
This formula is a tautology, which means that we believe this formula.
Remember that in the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Also remember that an implication, such as p->q, is only false when its antecedent (p) is true and the consequent (q) is false.
Now suppose we assume ¬p to be true (that p is false). If this assumption allows us to prove both “If ¬p, q” (¬p implies q) and “If ¬p, ¬q” (¬p implies ¬q), then, since a conjunction p&q is true only if both of its operands are true, we have proved that the antecedent of the above formula, (¬p->q)&(¬p->¬q), is also true. And since we already know that the formula is always true, its consequent, p, must therefore be true as well.
Here is what happened:
(1) We started with a theory consisting of a set of axioms that we assumed to be true (e.g. the Peano axioms), tautologies that are true, and formulas that we proved to be true.
(2) We made an additional assumption, that p is false.
(3) We considered the case where this assumption, together with our theory, allowed us to prove two implications: ¬p -> q and ¬p -> ¬q.
(4) The truth of both of these implications means that we proved the antecedent of the tautology to be true by proving the conjunction of both implications to be true: (¬p -> q) & (¬p -> ¬q).
(5) We needed to find an interpretation of the initial formula under which it remains true when its antecedent is true.
(6) Since q and ¬q cannot be both true in our theory, as this would be absurd, the only interpretation that is left for both implications to be true is when their antecedent, ¬p, is false. Which also satisfies our tautology.
(7) Therefore p is true.
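The tautology driving this whole argument can itself be checked by enumerating all truth-value combinations, just as before:

```python
def implies(p, q):
    return (not p) or q

# ((¬p -> q) & (¬p -> ¬q)) -> p holds under every assignment.
for p in (True, False):
    for q in (True, False):
        antecedent = implies(not p, q) and implies(not p, not q)
        print(p, q, implies(antecedent, p))  # True in all four cases
```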