Mathematics



Here is a term I just learnt: Extraneous solutions.

Take for example the equation

A = B.

If you were to square both sides you would get

A^2 = B^2

or

A^2 - B^2 = 0.

Which is equivalent to

(A - B)(A + B) = 0 (by the difference of two squares).

Now the roots of this equation are the roots of the equations A = B and A = -B. This means that we generated an additional solution by squaring the original equation.

The reason for this is that squaring is not an injective function (injective means one-to-one: distinct inputs are mapped to distinct outputs), so it is not invertible. The function y = x^2 does not pass the horizontal line test. In other words, squaring preserves equality, if A = B then A^2 = B^2, but does not preserve inequality. It is not true that if A != B then A^2 != B^2, since both -1 and 1 are mapped to 1 when squared. This means that both x = 1 and x = -1 solve the squared equation x^2 = 1, while only one of them makes each pre-squared equation, x = 1 or x = -1, true.
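
To make this concrete, here is a minimal Python sketch (using the sympy library; the choice of tool is mine) showing how squaring an equation picks up an extra root:

import sympy as sp

x = sp.symbols('x')

# the original equation x = 2 has exactly one solution
print(sp.solve(sp.Eq(x, 2), x))     # [2]

# squaring both sides gives x^2 = 4, which picks up the extraneous root -2
print(sp.solve(sp.Eq(x**2, 4), x))  # [-2, 2]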

————–

Operation Crossroads

Video: http://youtu.be/Y53vDnNPiA4

————–

Milky Way may bear 100 million life-giving planets

The new Obama doctrine on climate change aims to reduce CO2 emissions from the power sector by approximately 30% from 2005 emission levels.

Parallel and Perpendicular Lines

(1) The distance between two parallel lines is the distance between the points of intersection of a third line that is perpendicular to both these lines.

(2) The slopes of two perpendicular lines are negative reciprocals of one another.

The two blueish triangles are copies of each other, one of which has been rotated 90 degrees about point A (the origin). This means that the line segment c1 is perpendicular to line segment c2.

Since line segment c2 is parallel to line segment e, the perpendicular distance between both line segments is the distance between the two points where the line segment c1 intersects c2 and e.

For non-visual proofs of 1 and 2, see below.

The givens:

Line segment c1 is perpendicular to line segment c2.

Line segment c2 is parallel to line segment e.

Coordinates of point A = (0, 0).

Length of line segment b = b.

Length of line segment a = a.

Length of line segments c1 = c2 = c = sqrt(b^2 + a^2) (by the Pythagorean theorem).

Length of line segment e = c.

The slope of line c1 = m1 = a/b.

The slope of line c2 = m2 = -b/a.

Coordinates of point B1 = (b, a) = (b, m1b).

Coordinates of point B2 = (-a, b) = (-a, m2*(-a)).

Length of line segment d = sqrt(c1^2 + c2^2) = sqrt(2c^2) = sqrt(2)*sqrt(c^2) = sqrt(2)*c = sqrt(2)*sqrt(b^2 + a^2) (by the Pythagorean theorem, since c1 is perpendicular to c2). Equivalently, d = sqrt((b - (-a))^2 + (a - b)^2) = sqrt((b - (-a))^2 + (m1b - m2*(-a))^2).

Proof that the slopes of two perpendicular lines are negative reciprocals of one another:

d^2 = (sqrt(2)*sqrt(b^2 + a^2))^2 = 2(b^2 + a^2) = (b^2 + (m1b)^2) + ((-a)^2 + (m2*(-a))^2) = (b^2 + (m1b)^2) + (a^2 + (m2a)^2)

d^2 = (b - (-a))^2 + (m1b - m2*(-a))^2

(b + a)^2 + (m1b - m2*(-a))^2 = b^2 + (m1b)^2 + a^2 + (m2a)^2

b^2 + 2ab + a^2 + (m1b)^2 + 2m1m2ab + (m2a)^2 = b^2 + (m1b)^2 + a^2 + (m2a)^2

2ab + 2m1m2ab = 0

2m1m2ab = -2ab

m1m2 = -1

m1m2 = (a/b)(-b/a) = -1
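
For those who like to double-check such derivations mechanically, here is a minimal sympy sketch (the choice of tool is mine, not part of the proof) that confirms the product of the two slopes symbolically:

import sympy as sp

a, b = sp.symbols('a b', positive=True)

m1 = a/b   # slope of line c1
m2 = -b/a  # slope of line c2

print(sp.simplify(m1*m2))  # -1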

Proof that the distance between two parallel lines is the distance between the points of intersection of a third line that is perpendicular to both these lines:

Consider any two parallel lines,

f(x) = y1 = mx + b1

g(x) = y2 = mx + b2,

and a third line that is perpendicular to both these lines,

h(x) = y3 = (-1/m)x = -x/m.

Then the intersection point of y1 and y3 is the solution to the system of linear equations,

f(x) = h(x)

mx + b1 = -x/m

(m^2)x + mb1 = -x

(m^2)x + x = -mb1

((m^2) + 1)x = -mb1

x = -mb1 / ((m^2) + 1)

h(-mb1 / ((m^2) + 1)) =  y1 = y3 = (-1/m)(-mb1 / ((m^2) + 1)) = b1 / ((m^2) + 1)

Intersection point 1: (x1, y1) = (-mb1 / ((m^2) + 1), b1 / ((m^2) + 1))

By the same logic, the intersection point of y2 and y3 is,

g(x) = h(x)

Intersection point 2: (x2, y2) = (-mb2 / ((m^2) + 1), b2 / ((m^2) + 1))

By the Pythagorean theorem, the distance between intersection point 1 and 2 is,

d = sqrt((x1 - x2)^2 + (y1 - y2)^2)

= sqrt(((-mb1 - (-mb2)) / ((m^2) + 1))^2 + ((b1 - b2) / ((m^2) + 1))^2)

= sqrt((-mb1 + mb2)^2 / (m^2 + 1)^2 + (b1 - b2)^2 / (m^2 + 1)^2)

= sqrt((m^2)(-b1 + b2)^2 / (m^2 + 1)^2 + (b1 - b2)^2 / (m^2 + 1)^2)

= sqrt(((m^2)(-b1 + b2)^2 + (b1 - b2)^2) / (m^2 + 1)^2)

= sqrt(((m^2)(b1^2 - 2b1b2 + b2^2) + (b1^2 - 2b1b2 + b2^2)) / (m^2 + 1)^2)

= sqrt((m^2 + 1)(b1^2 - 2b1b2 + b2^2) / (m^2 + 1)^2)

= sqrt((b1^2 - 2b1b2 + b2^2) / (m^2 + 1))

= sqrt((b1 - b2)^2 / (m^2 + 1))

= sqrt((b1 - b2)^2) / sqrt(m^2 + 1)

= |b1 - b2| / sqrt(m^2 + 1)
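
The same computation can be verified symbolically. A minimal sympy sketch (again, the tool is my choice):

import sympy as sp

m, b1, b2 = sp.symbols('m b1 b2', real=True)

# intersection points of y1 = m*x + b1 and y2 = m*x + b2 with y3 = -x/m
x1, y1 = -m*b1/(m**2 + 1), b1/(m**2 + 1)
x2, y2 = -m*b2/(m**2 + 1), b2/(m**2 + 1)

d = sp.sqrt((x1 - x2)**2 + (y1 - y2)**2)

# should print an expression equivalent to Abs(b1 - b2)/sqrt(m**2 + 1)
print(sp.simplify(d))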


Question: What do the following three assumptions have in common?

1. (-1)(-1) = -1.

2. The number infinity is defined by the property 0*infinity = 1.

3. r is the smallest rational number greater than 0.

Answer: All three assumptions would make mathematics less useful, or meaningless.

The first assumption would make the distributive property of multiplication for negative numbers useless:

(-1)(1 + -1) = (-1)(1) + (-1)(-1)

(-1)(0) = -1 + -1

0 = -2

The second assumption is incompatible with the associative law for multiplication, and evident facts such as that 0*2=0:

1 = infinity*0 = infinity*(0*2) = (infinity*0)*2 = 1*2 = 2

1 = 2

The third assumption leads to a contradiction:

x = r/2

r > x > 0, where x is itself a rational number, contradicting the assumption that r is the smallest rational number greater than 0.
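
You can make the contradiction vivid with Python’s exact rational numbers (the candidate value for r is of course arbitrary):

from fractions import Fraction

r = Fraction(1, 10**9)  # any candidate for the smallest positive rational
x = r / 2

print(0 < x < r)  # True: x is a positive rational smaller than r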

Mathematics can be viewed as either a tool, with practical value, or as a game. This tool can, for example, be used to divide things into 2 equal pieces. Correspondingly, one of the rules of the game is that you can get smaller things by dividing them. What matters here is (a) that we want to be able to divide things into equal parts, and (b) that it is intuitively evident that things can be divided.

To get a better idea of how mathematics would become less useful, or even meaningless, if we were to accept those assumptions, let us now take a closer look at the third assumption.

Assumption three is the assertion (let us label it P) that there is a smallest rational number greater than 0, and that this number is r. But some of the intuitively evident properties of our tool (let us label these properties S) contradict this assumption. In other words, S implies that P is false (which we will label ¬P).

The problem with assumption three is that S and P together imply that P is true and false at the same time, P and ¬P (a contradiction). But this is absurd. Either you can divide the number r, in which case it is not the smallest rational number greater than 0, or that is not possible. How do we decide?

To resolve this contradiction we ask ourselves, what speaks in favor of P, and what against P? It seems that nothing speaks in favor of P, and everything in favor of ¬P. We not only want to be able to divide things, including r, but it also seems obvious that this should be possible. After all, what would stop us from dividing r by 2? Dividing r by 2 is a straightforward and intuitive operation, since r does not seem to have any special properties, other than the claim that it is the smallest rational number greater than 0. But that claim has not been justified.

Therefore we are going to reject P, and instead assume that it is true that P is false. In other words, we are going to accept that ¬P is true. Which means that S implies ¬P: It is true that there is no smallest rational number greater than 0.

Q&A

Question: But are we justified in doing so? There seems to be a lot of intuition involved here!

Answer: Well, sure! One way or the other, we are honing a tool, or playing a game. And we do whatever it takes to keep our tool as effective as possible, and to keep our game from becoming meaningless.

Question: Is there no alternative? Maybe there is a smallest number greater than 0 after all. We could just stay agnostic about this fact.

Answer: The particular game that we are playing requires that every proposition is either true or not true, and that no proposition can be both true and not true. Technically this is called the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P).

Question: Why then are we playing that particular game, rather than another game?

Answer: Other people are playing different games. But this particular game proved to be a useful tool. And given our game, there is a justification for avoiding contradictions: the principle of explosion. Explosion shows that everything can be proven from a contradiction, making it impossible to distinguish truth from falsehood. We obviously do not want that to happen, because it would render our tool useless, and the game meaningless. So if some assumption leads to a contradiction, we accept that it is false, to keep our tool from becoming useless, and to keep our game meaningful.

Question: This all seems really shaky. Are there no proofs to settle these issues?

Answer: No. To prove something true or false would presuppose that it must be either true or false, but not both or neither.

Question: But since the basic assumptions of our game are based on intuition, is it not possible that these rules themselves are already inconsistent?

Answer: That is possible. But those assumptions (axioms) have proved really useful so far, so that seems unlikely. And if our foundations eventually turn out to be inconsistent, then we will simply fix that inconsistency by rejecting certain axioms, or by adopting new ones, while trying to keep our tool useful.

Question: What about probability theory, can we use it to estimate how likely our assumptions are to be consistent?

Answer: Probabilistic reasoning (inductive reasoning), in contrast to the deductive reasoning above, allows for the possibility that the conclusion is false even if all of the premises are true. Yet probabilistic reasoning is largely based on deductive foundations. For example, when you add or multiply probabilities (numbers), you rely on the validity of the Peano axioms, which are statements in first-order logic (a deductive system). So what you are asking is whether it is possible to use a system’s capabilities to verify that those very capabilities are valid.

It is indeed possible to use inductive reasoning to examine our deductive reasoning. And I did so above, when I said that the current axioms are probably consistent, because they have worked well so far. But the title of this post mentions intuition for a reason. When we use inductive reasoning, because it worked in the past, we rely on inductive reasoning. It is like trusting somebody because they claim to be trustworthy.

So the answer is that you always rely on some amount of intuition, and on acting in good faith that your intuitions are correct, since we are unable to prove our intuitions correct by using arguments that themselves fundamentally rely on those intuitions.

We are now almost ready to formalize the above. Before we can do so we need to take a look at what is called ‘material implication’.

Intuitive explanation of material implication

Why is the material implication of classical logic (also known as material conditional or material consequence), p -> q, defined to be false only when its antecedent (p) is true and the consequent (q) is false? Here is an informal way to think about it.

You could view logic as metamathematics, a language designed to talk about mathematics: logic as the “hygiene” of mathematics, its grammar and syntax.

In the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Now what if we want to express the natural language construction “If…then…” in this language? Well, there are exactly sixteen possible truth functions of two inputs p and q (since there are 2^2 = 4 possible combinations of input values, and 2^(2^2) = 16 ways to map them to outputs). And the candidate that best captures the connotations of what we mean by “If…then…” is the definition of material implication. Here is why.

By stating that p -> q is true we want to indicate that the truth of q can be inferred from the truth of p, but that nothing in particular can be inferred from the falsity of p. And this is exactly the meaning captured by the material conditional:

p q p->q
T T T
T F F
F T T
F F T

First, when “If p, q” is true, and we also know that p is true, then we want to be able to infer q. In other words, if we claim that if p is true then q is true, then if p is indeed true, q should be true as well. This basic rule of inference has a name, it is called modus ponens.

Second, if we claim “If p, q”, then if p is false, we did not say anything in particular about q. If p is false, q can be either true or false; our claim “If p, q” is still true.
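
Here is a small Python sketch (my own illustration) that counts the sixteen truth functions and reproduces the truth table above:

from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

# 2^2 = 4 combinations of input values, hence 2^(2^2) = 16 possible truth functions
print(len(list(product([True, False], repeat=4))))  # 16

# reproduce the truth table of the material conditional
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))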

But notice that it is not possible to capture all notions of what we colloquially mean by “If…then…” statements as a two-valued truth function.

It is for example possible to make meaningless statements such as “If grass is red then the moon is made of cheese.” This is however unproblematic under the assumption that logic is an idealized language, adequate for mathematical reasoning, in which we are mainly interested in simplicity and clarity. Under this assumption, such nonsense implications are analogous to grammatically correct but meaningless sentences that can be formed in natural languages, such as “Colorless green ideas sleep furiously”.

To demonstrate its adequacy for mathematics, here is a mathematical example:

If n > 2 then n^2 > 4.

We claim that if n is greater than 2 then its square must be greater than 4. For n = 3, this is obviously true, as we claimed. But what about n not greater than 2? We didn’t say anything in particular about such n; their squares could be larger than 4 or not. And indeed, n = 1 and n = -3 yield a false and a true consequent, respectively. Yet the implication is true in both cases.

Intuitively more problematic are statements such as (p and not(p)) -> q: p and its negation imply q. Think about it this way. This implication is a tautology, it is always true. And you believe true statements. This however does not mean that you must believe that an arbitrary q is true too (as long as you stay consistent), since in the case of a false antecedent you are not making any particular claim about the truth of the consequent (q). And since the statement that p is true and false, p AND not(p), is always false (remember that the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P), requires that every proposition is either true or not true, and that no proposition can be both true and not true), q can be false without invalidating the implication.

Another way to look at p -> q is by interpreting it as “p is a subset of q”. Then if it is true that x is an element of p, it must also be an element of q (since q contains p). However, if x is not an element of p, then it might still turn out to be an element of q, since q can be larger than p.

Proof by contradiction

Here is a formula which is known as ‘reductio ad absurdum’:

((¬p->q)&(¬p->¬q))->p

This formula is a tautology: it is true under every assignment of truth values to p and q. Which means that we believe this formula.
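
Since the formula involves only two propositions, the claim can be checked by brute force over all four truth assignments. A minimal Python sketch (my own check, reusing the material conditional from the sketch above):

from itertools import product

def implies(a, b):
    # material conditional: false only when a is true and b is false
    return (not a) or b

# ((not p -> q) and (not p -> not q)) -> p, evaluated on every truth assignment
print(all(
    implies(implies(not p, q) and implies(not p, not q), p)
    for p, q in product([True, False], repeat=2)
))  # True: the formula is a tautology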

Remember that in the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Also remember that an implication, such as p->q, is only false when its antecedent (p) is true and the consequent (q) is false.

Now suppose we assume ¬p to be true (that p is false). If this assumption allows us to prove “If ¬p, q” (¬p implies q) as well as “If ¬p, ¬q” (¬p implies ¬q), then, since a conjunction p&q is true only if both of its operands are true, we have proved that the antecedent of the above formula, (¬p->q)&(¬p->¬q), is also true. And since we already know that the formula is always true, its consequent, p, must therefore be true as well.

Here is what happened:

(1) We started with a theory consisting of a set of axioms that we assumed to be true (e.g. the Peano axioms), tautologies that are true, and formulas that we proved to be true.

(2) We made an additional assumption, that p is false.

(3) We considered the case where this assumption, together with our theory, allowed us to prove two implications:

(¬p->q)

(¬p->¬q)

(4) The truth of both of these implications means that we proved the antecedent of the tautology to be true by proving the conjunction of both implications to be true:

(¬p->q)&(¬p->¬q)

(5) We needed to find an interpretation of the initial formula under which it remains true when its antecedent is true.

(6) Since q and ¬q cannot both be true in our theory, as this would be absurd, the only interpretation left under which both implications are true is the one where their shared antecedent, ¬p, is false. Which also satisfies our tautology.

(7) Therefore p is true.


(The following is adapted from a scenario by Graham Priest, depicted in his book ‘Logic: A Very Short Introduction’.)

Suppose that at some point you find yourself in a posthuman hell. But you have one chance to get out of it. You can toss a coin; if it comes down heads, you are out and go to heaven. If it comes down tails, you stay in hell forever. The coin is not a fair one, however, and the posthuman entity that simulates the hell has control of the odds. If you toss it today, the chance of heads is 1/2 (i.e. 1-1/2). If you wait till tomorrow, the chances go up to 3/4 (i.e. 1-1/2^2). In general, if you toss the coin on day n, counting today as day 1, the chance of going to heaven is 1-1/2^n. How long are you going to wait before tossing the coin?

The associated values of remaining in hell or escaping are constant over time.
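
For concreteness, here is a tiny sketch of how the escape probability grows with the day of the toss:

# chance of heads if you toss the coin on day n (day 1 is today)
for n in (1, 2, 3, 10, 30):
    print(n, 1 - 0.5**n)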


This is a very basic introduction to odds, followed by a short introduction to log-odds and a few recommendations from where to go from here.

Odds

Imagine you were living in a universe with 3 gods. One benefactor and two nasty gods. Each time you want or need something, one of the gods is chosen at random to serve you.

So if you want to eat strawberries, and you get lucky, the benefactor is chosen to serve you some tasty strawberries. But that’s only going to happen, on average, 1/3 of the time. If one of the two nasty gods is chosen instead, you might or might not get something, but what’s for sure is that it won’t be what you want.

In other words, after a very long lifespan you can expect to have had 1/3 of your wants and needs satisfied. In 2 out of 3 cases you either got nothing, if you were lucky, or something nasty.

That means that if you ask for strawberries it is 2 times as likely that you won’t get strawberries, or that you’ll get something nasty instead, as it is that you are actually going to get what you want, because (2/3) / (1/3) = 2/1 = 2. In other words, in the long term you will be left with 2 unfulfilled wishes for each fulfillment, or 2:1.

In this undesirable universe that we imagine, the odds in favor of getting what you want are 1:2. Which means that the odds against receiving some tasty strawberries when you want them are 2:1.

Here is another more abstract example. Imagine that the probability of event A happening is

P(A) = 80% = 0.80 = 80/100 = 4/5.

Then the probability of it not happening, ~A, is

P(~A) = 1-P(A) = 20% = 0.20 = 20/100 = 1/5.

Now the odds are simply the ratio of the probabilities. ‘Odds’ are an expression of relative probabilities.

The odds in favor are the ratio of the probability that an event will happen to the probability that it will not happen. Which in our example equals

P(A)/P(~A) = P(A) / (1-P(A)) = 80%/20% = 0.80/0.20 = (80/100) / (20/100) = 80/20 = (4/5) / (1/5) = 4:1.

The other way round, the odds against event A are 1:4.
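
In code, converting a probability to odds is a one-liner. Here is a minimal Python sketch of the example above (using exact fractions to avoid rounding noise):

from fractions import Fraction

def odds(p):
    # odds in favor: the probability that an event happens over the probability that it does not
    return p / (1 - p)

print(odds(Fraction(4, 5)))      # 4, i.e. odds of 4:1 in favor
print(odds(1 - Fraction(4, 5)))  # 1/4, i.e. odds of 1:4 against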

Why use odds?

Using odds can help to illustrate the actual confidence accompanying various probabilities.

Take for example the difference between a probability of 99.98% and 99.99% of an event occurring. In odds we have,

99.98%/0.02% = 0.9998/0.0002 = (9998/10000) / (2/10000) = 9998/2 = 4999:1

and

99.99%/0.01% = 0.9999/0.0001 = (9999/10000) / (1/10000) = 9999/1 = 9999:1.

Which means that increasing your confidence from 99.98% to 99.99% is equivalent to saying that you believe the event to be 9999 times as likely to occur as not, instead of it being “just” 4999 times as likely.

As you can see, not only does using odds reveal that an increase from 99.98% to 99.99% means that you are actually twice as confident as before, but also how incredibly confident you must be to say that something is even 4999 times as likely to occur as not.

In the same way, converting probabilities to odds shows that the difference between 50.01% and 50.02% is negligible (under many circumstances). As 50.01% in odds are

50.01%/49.99% = 0.5001/0.4999 = (5001/10000) / (4999/10000) = 5001/4999 = 1.0004:1

and

50.02%/49.98% = 0.5002/0.4998 = (5002/10000) / (4998/10000) = 5002/4998 = 1.0008:1.

Which is almost the same, since 1.0008/1.0004 ≈ 1.

What odds reveal

To see why, you have to realize that using odds reveals something that might not be intuitively obvious, namely that incrementing the smallest factor has the largest effect.

In the first example we had 9998/2 and 9999/1, which is the same as 9998 * 1/2 and 9999 * 1. The larger factor only increased by 0.01% while the smaller factor increased by 100%, that is 9999/1 ≈ 9998 * 1.0001 * (1/2) * 2. Notice the factor of 2 at the end? That’s twice as much.

Whereas in the second example both factors are approximately equal and increased or decreased by a similarly tiny percentage. That is, 5002/4998 is approximately equal to (5001 * 1.0002) / (4999 / 1.0002). Which is an increase by a factor of almost exactly 1, or in other words no increase at all.

What odds reveal is that the relative effect of increasing or decreasing a factor by one unit becomes more pronounced as the absolute difference between the factors increases.

Log-odds

Another way of representing probabilities is in terms of log-odds, measured in decibels (dB).

To convert probabilities into log-odds, first convert percentages into odds. We have already talked about how to do that above.

Once you have the odds, all you have to do is take the base 10 logarithm of the odds ratio and multiply it by 10.

For example, 4:1 odds would translate to 6dB because 10*log(4) ≈ 6.
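
And in code (a minimal Python sketch; log10 is the base 10 logarithm):

import math

def log_odds_db(p):
    # probability -> odds -> 10 * log10(odds), in decibels
    return 10 * math.log10(p / (1 - p))

print(round(log_odds_db(0.8), 2))     # 6.02 dB, i.e. odds of 4:1
print(round(log_odds_db(0.9999), 2))  # 40.0 dB, i.e. odds of 9999:1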

Further Reading and Resources

I could go on to explain the advantages of using log-odds. But there are others who have already done so, probably better than I could. And you should now be able to follow and understand what they have written.

I recommend that you start with this short primer, maybe followed by this blog post.

For a precomputed lookup table of important probabilities and their approximate odds and log-odds (decibel) values see this PDF made by muflax.


The following is based on the book ‘Das Ziegenproblem’ by Gero von Randow.

Setup:

  • There are 3 doors.
  • 1 door has a car behind it.
  • 2 doors have a goat behind them.
  • The content behind the doors is randomly chosen.
  • There are 2 candidates, A and B.
  • There is 1 moderator who knows which door has a car behind it.

Actions:

  1. A and B are asked to choose a door and both choose the same door.
  2. The moderator opens one of the two remaining doors which has a goat behind it.
  3. A and B are asked if they would like to switch their choice and pick the remaining door.
  4. A always stays with his choice, the door that has been initially chosen by both A and B.
  5. B always changes her choice to the remaining third door.

Repeat the actions 999 times:

If you repeat the above list of actions 999 times, given the same setup, what will happen?

Candidate A always stays with his initial choice. Which means that he will on average win 1/3 of all games. He will win 1/3 * 999 = 333 cars.

But who won the remaining 666 cars?

Given the setup of the game, the moderator has to choose a door with a goat behind it. Therefore the moderator wins 0 cars.

Candidate B, who always switched her choice, after the moderator picked a door with a goat behind it, must have won the remaining 666 cars (2/3*999)!

1 candidate and 100 doors:

Alter the above setup of the game in the following way

  • There are 100 doors.
  • 1 door has a car behind it.
  • 99 doors have a goat behind them.
  • There is 1 candidate, A.

Alter the above actions in the following way

  • The moderator opens 98 doors with goats behind them.

Now let’s say the candidate picks door number 8. By the rules of the game the moderator now has to open 98 of the remaining 99 doors behind which there is no car.

Afterwards there is only one door left besides door 8 that the candidate has chosen.

You would probably switch your choice to the remaining door now. If so, the same should be the case with only 3 doors!

Further explanation:

Your chance of picking the car with your initial choice is 1/3, but your chance of choosing a door with a goat behind it, at the beginning, is 2/3. Thus on average, 2/3 of the time that you are playing this game you’ll pick a goat at first go. That also means that 2/3 of the time that you are playing this game, and by definition pick a goat, the moderator will have to pick the only remaining goat, because given the laws of the game the moderator knows where the car is and is only allowed to open a door with a goat behind it.

What does that mean?

On average, at first go, you pick a goat 2/3 of the time and hence the moderator is forced to pick the remaining goat 2/3 of the time. That means 2/3 of the time there is no goat left, only the car is left behind the remaining door. Therefore 2/3 of the time the remaining door has the car. Which makes switching the winning strategy.
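
If you do not trust the argument, you can replay the 999 games from above in code. Here is a short Python simulation of the setup described in this post (my own sketch):

import random

def play(switch, trials=999):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # the door hiding the car
        pick = random.randrange(3)  # the candidate's initial choice
        # the moderator opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins

print(play(switch=False))  # roughly 333 cars, like candidate A
print(play(switch=True))   # roughly 666 cars, like candidate B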


There are n = 4 sorts of candy to choose from and you want to buy k = 10 candies. How many ways can you do it?

This is a problem of counting combinations (order does not matter) with repetition (you can choose multiple items from each category). Below we will translate this problem into a problem of counting combinations without repetition, which can be solved by using a better understood formula that is known as the “binomial coefficient”.

First let us represent the 10 possible candies by 10 symbols ‘C’ and divide them into 4 categories by placing a partition wall, represented by a ‘+’ sign, between each sort of candy to separate them from each other

CC+CCCC+C+CCC

Note that there are 10 symbols ‘C’ and 3 partition walls, represented by ‘+’ signs. That is, there are n-1+k = 13, equivalently n+k-1, symbols in total. Further note that each arrangement of the 3 partition walls among these 13 positions represents one choice of candies. In other words, to represent the various choices of 10 candies from 4 categories, the positions of the partition walls can be rearranged by choosing n-1 = 3 of the n+k-1 = 13 positions

C++CCC+CCCCCC

CCCCCCCCCC+++

We have now translated the original problem into choosing 3 of 13 available positions.

Note that each position can only be chosen once. Further, the order of the positions does not matter, since choosing positions {1, 4, 12} separates the same choice of candies as the set of positions {4, 12, 1}. This means that we are now dealing with combinations without repetition.

Calculating combinations without repetition can be done using the formula that is known as the binomial coefficient

n!/k!(n-k)!

As justified above, to calculate combinations with repetition, simply replace n with n+k-1 and k with n-1,

(n+k-1)!/(n-1)!((n+k-1)-(n-1))!

In our example above this would be (4+10-1)!/(4-1)!((4+10-1)-(4-1))! = 13!/3!10!. Which is equivalent to

(n+k-1)!/k!(n-1)!

because (4+10-1)!/10!(4-1)! = 13!/10!3! = 13!/3!10!, which is the same result that we got above.
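
Python’s standard library can confirm the count; math.comb computes the binomial coefficient:

from math import comb

n, k = 4, 10

print(comb(n + k - 1, n - 1))  # 286, choosing positions for the 3 partition walls
print(comb(n + k - 1, k))      # 286, equivalently choosing positions for the 10 candies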
