The Goat, the Car, and the Bayesian Way of Thinking
Why Your Gut Says 50/50, But Bayes Says Grab the Other Door
There’s a game show that has broken more mathematical egos than trigonometric integrals: the Monty Hall problem. You face three doors. Behind one is a shiny car. Behind the other two: goats. You pick a door. The host, Monty, who knows exactly where the car is, opens one of the other doors and reveals a goat. He smiles.
Then comes the fateful question:
“Do you want to stick with your choice or switch to the other unopened door?”
Most people shrug and say: “Well, now there are two doors, so 50/50. Doesn’t matter.”
That’s where the Bayesian voice inside your head clears its throat.
First Principles: The Bayesian Whisper
Bayes’ theorem is simple in form:
P(H | E) = P(E | H) * P(H) / P(E)
It’s nothing more than an updating rule. Hypothesis H, evidence E. What you believed before becomes what you believe after, adjusted by how well the evidence fits.
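In code, that rule is only a couple of lines per hypothesis. Here is a minimal sketch (the function name bayes_posterior is purely illustrative, not from any library): multiply each prior by the likelihood of the evidence under that hypothesis, then normalize so the posteriors sum to one.
def bayes_posterior(priors, likelihoods):
    """P(H_i | E) for each hypothesis i, from P(H_i) and P(E | H_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]  # P(E | H_i) * P(H_i)
    evidence = sum(joint)                                  # P(E), by total probability
    return [j / evidence for j in joint]                   # normalize to get posteriors
The Monty Hall numbers below plug straight into this pattern.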
So, let’s talk hypotheses:
- H_1: The car is behind the door you first chose.
- H_2: The car is behind the other unopened door (equivalently: your first pick was wrong).
Before Monty does anything, your prior beliefs are straightforward:
- P(H_1) = 1/3
- P(H_2) = 2/3
Already the odds favor switching. You just don’t feel it yet, because Monty hasn’t played his hand.
The Evidence Arrives
Monty opens one of the goat doors. That’s the evidence E.
Now, how likely is that evidence under each hypothesis?
- If H_1 is true (the car is behind your door), Monty can open either of the two other goat doors, and he is certain to open one of them. So P(E|H_1) = 1.
- If H_2 is true (the car is behind the other unopened door), Monty is forced to open the one goat door that remains. So again P(E|H_2) = 1.
In both cases the evidence is guaranteed. So the likelihood doesn’t change the ratio of probabilities. The priors survive the update intact:
- P(H_1|E) = 1/3
- P(H_2|E) = 2/3
And that’s the Bayesian “aha”: nothing Monty does washes away the fact that you began with a 1/3 chance of being right and a 2/3 chance of being wrong. Switching simply means you inherit that 2/3 probability.
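To make that concrete, here is the same update spelled out numerically (a quick sketch; the variable names are purely illustrative):
# Monty Hall update with the numbers derived above.
priors = [1/3, 2/3]         # P(H_1), P(H_2)
likelihoods = [1.0, 1.0]    # P(E | H_1), P(E | H_2): a goat door opens either way
evidence = sum(p * l for p, l in zip(priors, likelihoods))            # P(E) = 1
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print(posteriors)           # [0.333..., 0.666...] -- the priors survive intact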
Let’s Simulate It
Math is nice, but let’s put the goats to work in code. We’ll simulate the game a couple hundred thousand times and let the law of large numbers speak.
import numpy as np

rng = np.random.default_rng(42)

def monty_hall_sim(n=200_000):
    # Each trial: where's the car, and what does the player pick?
    car = rng.integers(0, 3, size=n)
    pick = rng.integers(0, 3, size=n)

    # Monty opens a goat door; switching means taking the last unopened door.
    monty_open = []
    switch_to = []
    for i in range(n):
        doors = {0, 1, 2}
        goat_doors = list(doors - {car[i]} - {pick[i]})
        monty_open.append(rng.choice(goat_doors))
        switch_to.append(list(doors - {pick[i]} - {monty_open[-1]})[0])

    stay_win = np.mean(pick == car)
    switch_win = np.mean(np.array(switch_to) == car)
    return stay_win, switch_win

stay, switch = monty_hall_sim()
print(f"Stay win rate: {stay:.3f}")
print(f"Switch win rate: {switch:.3f}")
Run this, and the results will hover around:
- Stay win rate: ~0.333
- Switch win rate: ~0.667
Simulation agrees: switching doubles your chance of winning.
Walking Through the Code
Line by line, what’s happening here?
- car is a vector of random doors (0, 1, or 2) where the car hides each trial.
- pick is your initial choice.
- For each trial, Monty chooses a goat door that isn’t yours and isn’t the car’s.
- switch_to is the remaining unopened door.
- Finally, compare: did staying hit the car, or did switching?
The averages over 200,000 runs are as close to the truth as we need: the law of large numbers pulls them toward the theoretical 1/3 and 2/3.
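As a side note, the explicit loop isn’t strictly necessary. Because Monty always leaves exactly one unopened door, switching wins precisely when your first pick missed the car, so an equivalent vectorized sketch collapses to a single comparison:
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
car = rng.integers(0, 3, size=n)
pick = rng.integers(0, 3, size=n)
# Switching wins exactly when the first pick was a goat, i.e. pick != car.
print("Stay win rate:  ", np.mean(pick == car))
print("Switch win rate:", np.mean(pick != car))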
Why This Matters Beyond Game Shows
The goat problem isn’t about cars. It’s about intuition failing when conditional probability enters the room. Most of us collapse situations into a vague “50/50” because it feels balanced. But the Bayesian method keeps track of what you knew before and how evidence updates those beliefs.
In science, trading, medicine, or even daily decisions, this pattern repeats:
- Define hypotheses.
- Assign prior beliefs.
- Receive evidence.
- Update.
Bayes’ theorem doesn’t make decisions for you; it sharpens your odds.
Final Thought
The beauty of the Monty Hall problem is that it forces you to see probability as a process, not a snapshot. You don’t throw away the past (the prior); you bring it forward and weigh it with the present (the evidence). That’s the Bayesian way of thinking: step by step, never fooled by the surface appearance of 50/50 goats.