I am having one of those weeks where everything seems to be related. You know the kind: you are mulling over an idea, and almost everything you read or hear seems to fit in with it.
And, that Dear Reader, is what is happening to me. Fascinating, I know. Allow me to explain…
I took part in a great workshop last week on the issue of the Millennium Development Goals and what comes after them. There was talk about more goals, better goals, fewer goals, more inclusive processes, high-level championship, grass-roots consultations, etc., etc. After about an hour, I made an intervention, asking simply whether the time for fine-tuning was done. Maybe, I posited, the entire “development” project was finished: we’d tried all possible permutations over the last eight decades, and now maybe it was time to admit defeat and move on to something else.
In a room full of development practitioners you can imagine that this question was about as popular as…well, you get the idea. The notion was immediately dismissed, and the conversation returned to a debate about how to fix development, rather than discussing replacing it outright.
So with this in mind, I then encountered Dave Betz’s post on Dylan and how it relates to (and may indeed alter) Clausewitz’s notions of the links between war, policy, and society. Then I read Thomas Rid’s post here on KOW. He, too, asks a similar question, in a different field: when does deterrence end?
I have wondered about this before in these pages. What if the entire foundation of an idea is faulty, rather than just certain aspects of it, or the particular way in which it was implemented? How far do we go before ‘calling it’ and shifting our attention to developing a new idea or system? Why, though, do we tend to get caught up in ‘re-arranging the deck chairs on the Titanic’?
It occurred to me that what happens sometimes is that we base these larger projects (development, COIN, deterrence) on quite fundamental premises, and that it is sometimes these premises that are at fault. No amount of fine-tuning could ever resolve the shortcomings; the flaws are in the recipe, not the baking process, as it were.
For example, take the logic at the heart of the idea of deterrence:
If we are seen to have the ability to severely punish any would-be attackers, then they will make a rational calculation and refrain from attacking us in the first place.
This applies to deterrence at the micro (arming a bank guard, say) and macro (developing a hardened, redundant nuclear capability) levels. And if it holds true, then deterrence works.
But what if it doesn’t hold true? What if having an armed bank guard only escalates the problem by ensuring that any would-be bank robbers simply try to ‘outgun’ the guards? One could say that the problem was one of application: the guard was not sufficiently armed, and therefore did not really serve as a deterrent. The solution? Usually we would try to provide more and better weapons to the guard. And we know where this goes: arms race, escalation, measure/counter-measure, ad infinitum. At some point, maybe it would have been better to adopt a different strategy altogether, rather than fighting so hard to make the first strategy work.
As a logical matter, what we have here is an enthymeme: an argument that depends upon an unstated (or unsubstantiated) assumption. A classic enthymeme is “If I study hard, I will go on to attend King’s College London.” This statement relies utterly upon the unproven (yet widely and passionately held) assumption that there is a positive relationship (even a causal one) between studying hard and getting to go to university. In reality, there are a host of other factors at play: timing, funding for the university to provide places, funding for the student to afford to live in London, and so on.
Many of our complex endeavours (such as fighting wars, or developing societies) are also predicated on enthymemes. Despite their seriousness and cost (in terms of lives and finances), the fundamental ideas underpinning these activities are often untested.
“If we protect the population, then we will defeat the insurgents.”
“If we provide aid money and do a lot of projects in a country, then we can eliminate poverty and improve well-being.”
We believe in these foundational concepts and then work hard to bring them about, often ignoring the setbacks we encounter, chalking them up to faulty implementation, bad sequencing, or poor prioritization.
This is where we might want to try something different.
But wait, I hear you asking. Shouldn’t we try and be practical? As Rob Dover has written recently, don’t we, as academics, need to strive to be Thomas-like in our work?
Being useful can take many forms, though. As this is an academic blog, it may be useful to look at the scholarly literature for inspiration. Robert W. Cox speaks about this issue with some insight, distinguishing ‘problem solving’ approaches from ‘critical theory’ approaches:
Problem solving takes the world as it is and focuses on correcting certain dysfunctions, certain specific problems. Critical theory is concerned with how the world, that is all the conditions that problem solving theory takes as the given framework, may be changing. Because problem solving theory has to take the basic existing power relationships as given, it will be biased towards perpetuating those relationships…
Cox doesn’t rule out problem-solving approaches, but rather introduces the notion that sometimes they cause us to miss the forest for the trees, as it were. Too much fine-tuning, rather than just moving on altogether. (I have written a bit about this recently, looking at it through a Kuhnian lens.)
[As an aside, I think this relates to something else that Dave Betz has asked earlier here on KOW: "Is there any truth in the aphorism that good tactics can't save bad strategies?" For me, the answer is almost (but not quite) always yes: a badly conceived strategy can rarely be saved through excellent implementation. If it can be, then most likely the tactics themselves changed the strategy, rather than just executing it. More than singling out one aspect of the strategic process, though (again, whether we are talking about fighting wars or developing nations), we need to be properly mindful of the entirety of the process: the ends AND the ways AND the means AND the underlying causal logic have to be sound for something to work properly.]
By trying to be problem-solvers, rather than taking a more critical approach, perhaps we merely prolong the difficulty. What does that mean in practice? For a start:
- We need to learn not to fall in love with our own ideas. Wanting something to work is not enough. Drinking the Kool-Aid is not helpful. Being a ‘problem-solver’ can sometimes blind us to the reality that the core proposition is just plain dumb. A problem-solver shouldn’t become a cheerleader.
- We have to understand our assumptions. Sometimes we don’t dig deeply enough to expose the assumptions upon which our strategies hinge. We have to expose the building blocks of our strategies so that they can be scrutinized. We can’t get caught up in the rush to ‘get it done’; time spent on analysis of the key premises is seldom wasted, I always say.
- We should demand proof of causal relationships. It can be easy to fall under the spell of powerful ideas like “If you build it, they will come.” But as academics and advisors to practitioners of public policy, we need to be more critical. Does that causal link actually exist? Do we have any evidence to support that it does? If we don’t and we aren’t in a position to stop the project or strategy, at least we can spell out a programme for measuring or gathering evidence that would be useful to test the claims at a later stage.
- We need to learn when to ‘give up’. Flogging away for decades on a project that is doomed to fail from the outset is not a productive use of anybody’s time, talent, or money. We have to muster the moral courage to call it quits and move on to a different (and hopefully better) idea at some point.
Now that really would be useful.