One of the big problems faced in economics, particularly in macroeconomics, is the bugbear of uncertainty. Actually, that’s not right: it’s one of the problems we try *not* to face. Typically, at most, we handle uncertainty by deciding on a set of possible events, assigning a probability to each, and then assuming that the agents in our model know both the events and their probabilities.

*Prima facie*, that seems wrong: real people typically don’t know the probability of an event. The defence of the assumption, however, goes something like this: people aren’t stupid, so as they watch reality unfold and uncertainty resolve itself, they will adjust their behaviour. Since we’re often modelling long-term repetitions of the same set of events, even if people initially guess the wrong probabilities, observing the events that actually occur will lead them to update their beliefs, converging to the true probabilities.
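The argument above doesn’t commit to a formal updating rule, but the standard way to make “beliefs converge to the true probabilities” precise is Bayesian updating. A minimal sketch, assuming a coin of unknown bias, a Beta prior over P(heads), and Bernoulli observations (the function name, prior, and numbers are mine, for illustration only):

```python
import random

def posterior_mean(prior_a, prior_b, flips):
    """Beta-Bernoulli updating: start from a Beta(a, b) prior over P(heads),
    add 1 to a for each head and 1 to b for each tail, and return the
    posterior mean estimate of P(heads)."""
    a, b = prior_a, prior_b
    for heads in flips:
        if heads:
            a += 1
        else:
            b += 1
    return a / (a + b)

rng = random.Random(42)
# A fair coin: true P(heads) = 0.5, unknown to the agent.
flips = [rng.random() < 0.5 for _ in range(10000)]

# An agent who starts badly wrong: a Beta(8, 2) prior implies P(heads) = 0.8.
print(posterior_mean(8, 2, flips[:10]))  # early on, still dominated by the prior
print(posterior_mean(8, 2, flips))       # after 10,000 flips, close to the true 0.5
```

With enough repetitions of the same experiment, even a badly miscalibrated prior is washed out by the data; that is the convergence the defence relies on.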

This still seems wrong to me: casual observation yields examples every day where people make the same mistake again and again, and don’t update their beliefs, or update them wrongly. For example, flip a coin 100 times. Straight, non-human-interpreted probability says that there is a better-than-even chance of seeing a string of 7 of the same side of the coin in a row somewhere in that 100-flip sequence, and nearly a 10% chance of seeing 10 in a row. Or take a pair of dice rolled 100 times: there’s a 15 percent chance that the same total will be rolled 4 times in a row somewhere in the 100-roll sequence. But we humans aren’t very good at dealing with probabilities: when we see 7 heads in a row, we start thinking the coin must be biased, or the dice must be loaded.
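These streak probabilities are easy to check by simulation. A quick Monte Carlo sketch (the function name and trial count are mine, chosen for illustration):

```python
import random

def longest_streak(seq):
    """Length of the longest run of identical consecutive values in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

rng = random.Random(0)
TRIALS = 5000

# Fraction of 100-flip sequences containing 7+ identical flips in a row.
coin_hits = sum(
    longest_streak([rng.random() < 0.5 for _ in range(100)]) >= 7
    for _ in range(TRIALS)
)
print(coin_hits / TRIALS)  # a bit over 0.5

# Fraction of 100-roll sequences of a dice pair where the same total
# comes up 4 times in a row.
dice_hits = sum(
    longest_streak([rng.randint(1, 6) + rng.randint(1, 6) for _ in range(100)]) >= 4
    for _ in range(TRIALS)
)
print(dice_hits / TRIALS)  # roughly 0.15
```

The point of the simulation is the same as the text’s: long streaks are far more common under pure chance than our intuitions suggest, so a run of 7 heads is weak evidence of a biased coin.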