I’ve been reading Daniel Kahneman’s book, Thinking, Fast and Slow, and thinking, mostly slowly, about the many ways in which the psychology of decision-making relates to strategic studies. In fact, Kahneman himself occasionally suggests military illustrations for his and others’ research on decision-making.
Some readers may have come across prospect theory – Kahneman and longtime collaborator Amos Tversky’s groundbreaking research that won Kahneman a Nobel prize in economics. I like the theory because it’s evidence-based, and comprehensively undermines the rational actor, expected utility models that were the basis of my undergrad education in social sciences.
In a nutshell, prospect theory holds that we don’t much like losing. In fact, we dislike losing a whole lot more than we like winning stuff. Truly, a bird in the hand is worth two in the bush. Moreover, when we are faced with losing something, we are more disposed to gamble on retaining it than we are when faced with a gamble to win something. We are, in the jargon, risk acceptant when in a domain of losses.
Stay with me, because there’s a strategic application to this that I’m by no means the first to see (Levy and Thompson point it out in their book Causes of War, for example). Whether we consider ourselves to be in the domain of losses or gains depends entirely on what our personal reference point is: the outcome with which we are satisfied. That might be the status quo ante bellum, for example. But it might not be – we might consider the status quo to be unjust, and the true reference point to be a situation that is more balanced in our favour than the present conditions suggest. In such a situation, we are, prospect theory holds, more likely to gamble on risky ventures than if the present situation were more advantageous than our perceived reference point.
So far, so Nazi Germany gambling like a mad thing against improbable odds in the 1930s. Your goal, as Kahneman says, is your reference point. And for Hitler, bent on revenge and prepared to bet the farm on restoring Germany to its proper place, that was pre-1918 Germany, or perhaps Germany after 1,000 years of the glorious Third Reich.
A second aspect of Kahneman’s research programme shows how we typically overweight improbable events. In expected utility theory, the standard social science model, the expected value we will derive from some gamble is proportional to the probability of that gamble paying off. It’s simply the probability of the payoff multiplied by what the payoff means to us – in terms of pleasure or pain.
In the real world, however, psychologists have found that our sense of proportion is skewed towards improbable events. We weight these much more heavily in deriving the expected value of an outcome – good or bad. It’s this that makes terrorists tick – we are most unlikely to be killed by them, and yet we spend a fortune to protect against their pinprick attacks – overweighting the improbable, in part because it seems so vivid, and easily brought to mind. The distortion happens at both ends of the probability spectrum – where things are nearly impossible, and conversely where they are almost certain, you can bet we’ll be overweighting that improbable bit at the end, compared to what a theoretically rational actor would do.
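To make the distortion concrete, here is a minimal sketch of a probability-weighting function. The functional form and the curvature parameter are Tversky and Kahneman’s 1992 median estimates for gains; the code itself is my illustration, not anything from the book.

```python
# Illustrative probability weighting, after Tversky & Kahneman (1992).
# GAMMA < 1 bends the curve: small probabilities get overweighted,
# near-certain ones underweighted, relative to a rational actor.

GAMMA = 0.61  # median estimate for gains; lower = more distortion

def weight(p: float, gamma: float = GAMMA) -> float:
    """Decision weight actually applied to a stated probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A rational actor values a 5% shot at 100 at exactly 0.05 * 100 = 5.
# A real decision-maker treats that 5% as if it were roughly 13%...
print(f"w(0.05) = {weight(0.05):.2f}  (linear: 0.05)")
# ...and treats a 95% near-certainty as if it were only about 79%.
print(f"w(0.95) = {weight(0.95):.2f}  (linear: 0.95)")
```

Note that the two ends are not symmetric: the sliver of probability near zero gains far more weight than the sliver near one loses, which is exactly the vividness effect described above.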
Combining these two fundamentals of cognitive psychology gives Kahneman a quadrant that looks like this:
Let’s go through it:
1. Think about the top right cell here. You’re in a domain of losses – some way distant from your reference point – you know, the political goal that Clausewitz told you to keep in mind at all times. Now, the enemy attacks, and does well. As things stand, you face the near certain prospect of defeat. There’s a 95% chance you’ll lose the war and with it your whole empire.
The enemy meanwhile makes you a peace offer that any rational actor would approve of. You will lose much of what you control. In fact, you will lose the exact expected value – 95% of your empire. You should take that certain deal, right? Rather than fight on crazily in an (almost) lost cause and risk losing the lot? 5% of empire is better than zero percent of empire, after all. That’s the rational thing to do, no?
Prospect theory says you are disposed to gamble. You hate losing, after all. What’s more, there’s a high probability of the event happening, and that distorts your weighting such that you are insensitive to the overwhelming probability of defeat. You overweight the benefits associated with that slim, 5% chance of victory, and not the losses consequent on fighting on. Prospect theory and the bias induced by improbable events are working in the same direction. Even if the enemy offered terms that let you keep 10% of your empire, you might still gamble, well against the odds, to lose nothing.
2. What about the cell bottom right? Overall, it’s still the case that things have been going badly, vis-à-vis your initial reference point. You’re in that domain of losses. Now the enemy has attacked, and you once again face the prospect of losing. But it’s not armageddon out there – there’s now only a low probability of your total defeat, if you don’t accept terms.
Here comes the enemy negotiator, and he’s offering sensible terms, at least to a rational actor. He only wants 5% of your empire, and you get to keep your head. He’s not likely to beat you, is he? Surely you should gamble on almost certainly not losing the war?
…but still, what if that 5% came good? You’d lose everything. You focus on that 5% chance; in fact, you over-focus on it. Suddenly it doesn’t seem so remote. The overweighting of remote possibilities kicks in, and you become, on balance, risk averse. Of course, a savvy enemy negotiator knows you’re going to worry about that small chance – and he can exploit that worry, and drive a harder bargain. Perhaps you give up 10% of your realm. Still, better safe than sorry. Effectively you’ve bought some insurance against regime change.
3. What’s going on top left? Now the boot is on the other foot – you’re ahead of the curve. ‘Mission Accomplished’ banners are streaming from your castle. And you are on the attack, and have a 95% prospect of winning the war and your enemy’s total unconditional surrender. That’s almost certain, right? So when the enemy comes crawling to you, what do you do?
You’re risk averse, of course. You’re in the domain of gains, already ahead of your reference point. You can gamble on getting more, through driving home this latest attack, but there’s an outside chance you’ll lose the battle if you do, and gain nothing. What’s more, the distortion induced by the certainty effect – operating very close to absolute certainty of winning – means that more attention than is strictly ‘rational’ gets focused on that slim residual chance of losing. Discretion is the better part of valour here, for sure. And the enemy, knowing that, can drive a hard bargain. If you settle, you still get a decent chunk of real estate, but you are likely to settle for much less than a rational actor would, with a keener appreciation of how unlikely defeat is, and more stomach for the fight.
4. In the last cell, you are still in the domain of gains, and mounting an attack. There’s only a 5% chance of it paying off, but boy, if it does! You win the whole caboodle.
Prospect theory says that in a domain of gains, you’re likely to be risk averse. But here, it’s just so tempting… the possibility effect – the slim possibility that victory might be yours – draws your attention disproportionately to it. Here comes the enemy negotiator, and he offers you a settlement to call off the attack and cease fighting that’s bang in line with the expected value – 5% of the gain you might make. A rational actor would accept. You know what happens next.
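The four cells above can be reproduced with a few lines of arithmetic. This is a sketch, not Kahneman’s own model of war termination: it bolts the 1992 value function (with its loss-aversion coefficient) onto the probability-weighting function, using Tversky and Kahneman’s median parameter estimates, and the empire-sized stakes are my illustrative numbers.

```python
# The fourfold pattern, sketched with Tversky & Kahneman's (1992)
# median parameter estimates. Stakes are normalised: "the whole
# empire" = 100 units relative to the reference point.

ALPHA = 0.88        # curvature of the value function
LAMBDA = 2.25       # loss aversion: losses loom ~2.25x larger
GAMMA_GAIN = 0.61   # weighting curvature for gains
GAMMA_LOSS = 0.69   # weighting curvature for losses

def value(x: float) -> float:
    """Subjective value of an outcome relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p: float, gamma: float) -> float:
    """Decision weight: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prefers_gamble(p: float, outcome: float) -> bool:
    """Compare a p-chance of `outcome` against settling for the sure
    expected value p * outcome (the negotiator's 'rational' offer)."""
    gamma = GAMMA_GAIN if outcome >= 0 else GAMMA_LOSS
    gamble = weight(p, gamma) * value(outcome)
    settle = value(p * outcome)
    return gamble > settle

for p, outcome, label in [
    (0.95, -100, "1. high-probability loss"),
    (0.05, -100, "2. low-probability loss "),
    (0.95, +100, "3. high-probability gain"),
    (0.05, +100, "4. low-probability gain "),
]:
    choice = "gamble on" if prefers_gamble(p, outcome) else "settle"
    print(f"{label}: {choice}")
```

With these parameters the model gambles in cells 1 and 4 and settles in cells 2 and 3 – the same diagonal pattern as the walk-through above: risk-seeking over probable losses and improbable gains, risk-averse over improbable losses and probable gains.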
A game two can play:
There are a number of implications for strategy of this sort of research. We like to think of strategy as being an intensely practical activity, using force instrumentally to link ends, ways and means.
A good strategist, we sometimes argue, should be a rational thinker, carefully judging the threats and assigning resources to meet them. In war, when the likely costs exceed the benefits, the rational actor draws stumps. If two rational actors with matching assessments of probabilities are fighting, we can see war as a negotiation with a rational, mutually acceptable outcome. Even if the information on payoffs and probabilities is initially uncertain, fighting is one way of providing the belligerents with a clearer picture.
But if two real, human actors are at war, things are more complicated. We can – for example – imagine someone in the top left pitched in battle against someone in the top right. The actual settlement, and the course of fighting, depend, among other things, on where those initial reference points are; on what their respective sensitivities to being in a domain of losses or gains are (which depend, among other things, on emotions); and on how extreme the probabilities associated with the various strategies are.
Model that, if you can, my quantitative chums.
In a paper under review, I argue that prospect theory is one aspect of cognitive psychology among others that can explain why states fight on past where you would think a rational actor would call it quits. Let me know if you want a shufty.