# Economic modelling of behaviour

One of the big problems faced in economics, particularly in macroeconomics, is the bugbear of uncertainty. Actually, that’s not right: it’s one of the problems we try *not* to face. Typically, at most, we handle uncertainty by deciding upon a set of events, assigning a probability to each, and then assuming that an agent in our model knows both the set of events and the probability of each.

Prima facie, that seems wrong: real people typically don’t know the probability of an event. The defence of the assumption, however, goes something like this: people aren’t stupid, so as they watch reality unfold and uncertainty resolve itself, they will adjust their behaviour. Since we’re often modelling long-term repetitions of the same set of events, even if people initially guess the wrong probabilities, by observing the events that actually occur they will update their beliefs, converging to the true probabilities.
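The convergence story this defence relies on is essentially Bayesian updating. As a concrete illustration of how it is supposed to work, here is a textbook Beta-Bernoulli sketch (the function name, numbers, and choice of prior are mine, purely for illustration): an agent starts with a belief about an event’s probability, possibly a badly wrong one, and revises it after each observation.

```python
import random

def updated_belief(true_p=0.3, n_obs=10_000, prior=(1.0, 1.0), seed=0):
    """Beta-Bernoulli belief updating: `prior` holds pseudo-counts of
    successes and failures encoding the agent's initial belief. After each
    observed outcome the relevant count is incremented, and the posterior
    mean drifts toward the true probability as observations accumulate."""
    random.seed(seed)
    a, b = prior
    for _ in range(n_obs):
        if random.random() < true_p:
            a += 1
        else:
            b += 1
    return a / (a + b)  # posterior mean of the agent's belief
```

Even starting from a confident wrong prior such as `prior=(80.0, 20.0)` (an initial belief of 0.8), after enough observations the posterior mean ends up near the true probability of 0.3. That is the mechanism the standard defence takes for granted.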

This still seems wrong to me: casual observation yields examples every day of people making the same mistake again and again without updating their beliefs, or updating them wrongly. For example, flip a coin 100 times. Straight, non-human-interpreted probability says there is a better-than-even chance of seeing a string of 7 of the same side of the coin somewhere in that 100-flip sequence, and close to a 10% chance of seeing 10 in a row. Or take a pair of dice rolled 100 times: there’s about a 15% chance that the same total will be rolled 4 times in a row somewhere in the 100-roll sequence. But we humans aren’t very good at dealing with probabilities: when we see 7 heads in a row, we start thinking the coin must be biased, or the dice must be loaded.
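Figures like these are easy to sanity-check by simulation. A quick Monte Carlo sketch (the function names are mine; the thresholds checked are 7 and 10 for the coin and 4 for the dice):

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive elements in seq."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def estimate(trials=10_000, n=100, seed=12345):
    """Monte Carlo estimates of the run probabilities quoted above."""
    random.seed(seed)
    coin7 = coin10 = dice4 = 0
    for _ in range(trials):
        flips = [random.randint(0, 1) for _ in range(n)]
        r = longest_run(flips)
        coin7 += r >= 7                      # same side 7+ times in a row
        coin10 += r >= 10                    # same side 10+ times in a row
        totals = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]
        dice4 += longest_run(totals) >= 4    # same dice total 4+ times in a row
    return coin7 / trials, coin10 / trials, dice4 / trials
```

In my runs the three estimates settle in the neighbourhood of 54%, 9%, and 15% respectively, in line with the back-of-envelope probabilities above.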

The point is, people aren’t rational: even if we are willing to simplify the number of outcomes in a period, our subjective beliefs don’t line up with reality, and probably never will. Different people have different levels of “rationality”: some might realize that 10 heads in a row was bound to happen sooner or later and leave their beliefs alone, but I suspect a great many would change them. By taking our models so far from the way people actually behave, is it any wonder that our models have trouble matching up with reality?

There are two main reasons we model with such a firm attachment to mathematical rationality.  The first is a sort of faith in the purity of mathematics.  We want models to be relatively easy to solve, and we want the solutions to such models to be unique (or perhaps a unique set of solutions).  We want precise rules to follow in deriving a solution.  Mathematics gives us just such a set of rules: for example, 3 is always greater than 2, so a maximization problem will always favour a result of 3 over a result of 2.  Thus mathematics gives us a way to declare a model, drive through the algebra and calculus to a solution, then present the results with confidence: the solution follows exactly and precisely from the model assumptions.  Thus we obtain a sort of “pure” answer.  Once we have that, we reintroduce the world we are trying to model by assigning some real-world interpretation to model parameters or variables.  This lets us provide an answer to questions such as “How do increases in investment subsidies affect economic growth?”  (Of course, it’s important to remember that there’s an implicit “… under the strong mathematical assumptions of our model” at the end of that question.)  Uncertainty is not something that fits well with pure mathematics, and so we replace it with risk, which is just one very specific type of uncertainty.  It’s not a particularly good fit, but it’s one with clear mathematical rules, so it satisfies the desire for mathematical purity.
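To make the risk-versus-uncertainty substitution concrete, here is the kind of object “risk” hands us: a minimal expected-utility sketch. The states, returns, and log utility below are my own illustrative choices, not taken from any particular model.

```python
import math

# "Risk" in the economist's sense: a known, finite list of states of the
# world, each with a known probability. Genuine uncertainty (unknown states,
# unknown probabilities) has no place in this setup.
states = [(0.5, 1.5), (0.5, 0.7)]  # (probability, gross return on a risky asset)

def expected_utility(risky_share, wealth=1.0):
    """Expected log utility when `risky_share` of wealth goes into the risky
    asset and the remainder sits in a safe asset with gross return 1."""
    return sum(p * math.log(wealth * ((1 - risky_share) + risky_share * r))
               for p, r in states)

# A coarse grid search delivers the model's precise, unique answer.
best_share = max((s / 100 for s in range(101)), key=expected_utility)
```

With these numbers the optimum is a risky share of about two-thirds, and the answer is exact and unique given the assumptions, which is precisely the appeal of formalizing uncertainty away as risk.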

The second reason we do it is tractability, by which I mean having an actual way to obtain a solution to a model.  As long as we don’t have too many equations, we can solve mathematical systems, giving us a mathematically pure, precise answer.  But this need for tractability severely limits us.  For example, in macroeconomic models we solve for a “representative agent”, assuming that a single agent can represent the behaviour of an infinite set of agents in the economy.  We can allow individuals to have different states, e.g. rich and poor, but fundamentally we require that they be identical in their preferences: that a rich person in the poor man’s shoes would behave exactly as the poor man does, and vice versa.  Such a model has no room for diversity of preferences, abilities, or anything else.

Essentially we’re hobbled by the intersection of tractability and mathematics.  There is a theorem, called the Sonnenschein-Mantel-Debreu theorem, which essentially states that macroeconomic behaviour can only be guaranteed to reflect microeconomic behaviour when income expansion paths are identical straight lines for all agents: that is, when there are no wealth effects.  In less technical terms, this means that there can be neither “luxury goods” nor “inferior goods”: when an individual’s income increases, his spending on macaroni and cheese increases by the same proportion as his spending on Ferraris, no matter what his current income level.  If this seems absurd, that’s because it is: mathematical tractability requires our model to impose a condition under which poor people split their income between flour and diamonds (and every other good) in the same proportions rich people would split their income on those goods.
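To see what income-independent spending proportions look like in the flesh, here is a sketch using Cobb-Douglas preferences, the textbook homothetic case in which budget shares never depend on income. The goods, prices, and weights are invented for illustration.

```python
def cobb_douglas_demands(income, prices, alphas):
    """With utility u(x) = prod_i x_i ** alphas[i] (weights summing to 1),
    demand has the closed form x_i = alphas[i] * income / prices[i]."""
    return {g: alphas[g] * income / prices[g] for g in prices}

def budget_shares(income, prices, alphas):
    """Fraction of income spent on each good at a given income level."""
    x = cobb_douglas_demands(income, prices, alphas)
    return {g: prices[g] * x[g] / income for g in prices}

# Illustrative numbers only.
prices = {"macaroni": 2.0, "ferrari": 200_000.0}
alphas = {"macaroni": 0.3, "ferrari": 0.7}

poor_shares = budget_shares(1_000, prices, alphas)
rich_shares = budget_shares(1_000_000, prices, alphas)
# Both equal alphas: the spending split is identical at every income level,
# which is exactly the "no luxury goods, no inferior goods" condition.
```

A thousandfold increase in income leaves the macaroni/Ferrari split untouched. That is the behaviour aggregation-friendly models quietly impose on every consumer.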

As long as we, as economists, insist on models being purely mathematical in nature, we can’t avoid this problem: the math becomes too hard (i.e. impossible) very quickly.  Instead of accepting the bizarre assumptions necessary to avoid the “anything goes” implications of the Sonnenschein-Mantel-Debreu theorem, we need to break the mathematical stranglehold on economic theory.  Only then can we actually start moving to richer models that relax and purge these strong, unrealistic assumptions.

Fundamentally, economists need to remember (or perhaps learn) that economics is about the study of choices made by members of society (consumers, producers, etc.).  Mathematics is one tool to do that, but when it fails us, instead of throwing up our hands in despair, declaring the problem unsolvable, and retreating to mathematical tractability, we need to look at the other tools available to us.

One approach that seems promising to me is an increasing role for complex computational simulations in analysing economic problems.  These give us the freedom to program “agents” with whatever optimization rules and distinct preferences we can think of (which need not be purely mathematical), give them ways to interact, and see what happens.  The complexity that can emerge, so-called “emergent behaviour”, is not something to be feared but rather an outcome we should embrace.
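As a taste of what I mean, here is a deliberately tiny agent-based sketch, assuming nothing from any real model: agents differ in a single “saving propensity”, meet in random pairs, and re-split the wealth they bring to the table at a random fraction.

```python
import random

def simulate(n_agents=200, meetings=20_000, seed=42):
    """Toy agent-based exchange economy. Agent i saves a fraction lam[i] of
    its wealth at each meeting; the two agents in a random pair pool their
    non-saved wealth and split the pot at a random fraction. Total wealth
    is conserved at every step."""
    random.seed(seed)
    lam = [random.random() for _ in range(n_agents)]  # heterogeneous agents
    wealth = [1.0] * n_agents                         # everyone starts equal
    for _ in range(meetings):
        i, j = random.sample(range(n_agents), 2)
        pot = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]
        eps = random.random()
        wealth[i] = lam[i] * wealth[i] + eps * pot
        wealth[j] = lam[j] * wealth[j] + (1 - eps) * pot
    return sorted(wealth)

w = simulate()
top_decile_share = sum(w[-20:]) / sum(w)  # wealth share of the richest 10%
```

No agent is programmed to get rich, yet the richest 10% end up holding far more than 10% of total wealth: a skewed distribution emerges purely from heterogeneity and interaction, which is exactly the kind of outcome a representative-agent model cannot produce.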

Human beings aren’t constrained by mathematical rationality and the tractability of mathematics; modelling them shouldn’t be either.

(Author’s note: the first half of this post was written on 5 June 2012, during the time the author was studying for PhD comprehensive exams.  It was edited, extended, and published a year later (exactly a year to the day, purely by coincidence) when actual blog software was installed on this website).