Phil wrote, in part:
...I think you're being a bit unfair here, LFC. I can't speak for Sullivan, but myself, I would never give policy advice based on analogies. Nor would I ever assume that my simplifying model (be it statistical or theoretical) has captured all the relevant details, thus obviating the need for any appreciation of the specifics of any given case when deciding how to handle that case....I think you're turning statistical modelers into a caricature.

My post speculated about whether a scholar's epistemological and/or methodological orientation (or 'standpoint of inquiry'[1]) might affect how he or she would give advice to a policymaker. I wrote: "...if you are a scholar with a more nomothetic mindset (e.g. Sullivan), if a policymaker called you and asked you what to do, you might start thinking in terms of historical analogies, because you are used to homogenizing historical cases.... (emphasis added)."
Phil says he would never give advice based on analogies. OK. But I never said anyone would give such advice. I said might, not would. Maybe the modeler would just say to the policymaker: "Here's what my model suggests are the relevant considerations. How you weigh these considerations is up to you and to others who may know more about the specifics of this particular case or issue than I do." Maybe the modeler would say: "Sorry, I don't give policy advice. Goodbye." I don't know. I was speculating. Nonetheless, despite my explicitly tentative language, it was probably a mistake to bring Sullivan's name into this aspect of the post. I'll concede that much. As to whether I caricatured modelers, readers can make up their own minds on that.
Kindred wrote in his first comment (and Phil agreed with this):
We always approach new cases with reference to old ones. I don't think there's any way around it, and if there were I'm not sure the "on its own" approach would outperform the "look for similarities with past events and act accordingly". I don't think it's unreasonable to believe that we use mental models (i.e. heuristics) all the time anyway, so trying to create better models is better than relying on worse ones.

I don't think there is much real disagreement among us here, though there is probably some difference in emphasis. Do we always approach new cases with reference to old ones? Yes, probably at some level, but "approaching new cases with reference to old ones" can mean different things in operation, depending on one's particular bent. In any event I am not disparaging models if they are used as one tool or method. They are suited for answering or exploring some questions more than others. It's useful to know that the nature of the stronger state's objective in an asymmetric conflict affects, or may affect, the likelihood of its success (per Sullivan). But if all you had was the general proposition that the more coercive (i.e. less 'brute-force') the objective the lower the likelihood of success, that, standing alone, would not equip you either to analyze adequately a particular case or to give policy advice about any given situation. I think we probably agree on that. Which in turn raises the question whether this (i.e., our discussion) has all been a waste of time. Again, I'll let readers judge that.
---------
1. This phrase comes from a just-published article by Christopher Meckstroth in the August 2012 issue of APSR. More on that later.
19 comments:
Hey LFC.
With respect to your first point, I appreciate that you weren't saying that you thought it would always be the case. The reason I said you were being unfair is that I read your comment to mean that you thought this possibility was not purely abstract, and that the use of such models tends, on average, to encourage such behavior. Perhaps that was too much of an assumption on my part, but if I were to read your statement literally, you could just as easily have said that someone might look at Sullivan's results and conclude that the only response to the crisis in Iran is to nuke Tehran. That would be a huge leap. I would oppose that conclusion, as I'm sure you would, and I have every reason to believe Sullivan would. But the magic of the word "might" is such that the statement would still be true. If that is all you were claiming, then it would seem that you weren't really making any point at all, just saying something that is vacuously true. I saw no reason to assume that you were doing so.
At any rate, I apologize if I misinterpreted your comment. My position is that I see no reason to assume that people who conduct the type of analysis Sullivan offered in her article are any more or less prone to giving policy advice of the form you described. Perhaps we agree on that.
I think you are right that we mostly agree about Kindred's point. I'm not particularly inclined to think that there's no point to asking questions about general tendencies if we still need to know the details of any given case to be able to give careful advice about how to handle that case, but as you say, people can decide for themselves whether analysis that doesn't tell us everything we need to know is no better than analysis that tells us nothing at all.
"But if you all had was the general proposition that the more coercive (i.e. less 'brute-force') the objective the lower the likelihood of success, that, standing alone, would not equip you either to analyze adequately a particular case or to give policy advice about any given situation."
Really? Either the word "adequately" is doing a lot of work here or else I'd completely disagree.
(Note that I haven't read the Sullivan article so I'm not sure of the actual finding.) Suppose I were advising the President on whether the US should invade Iran with the goal of overthrowing the clerical regime. Suppose he told me "I think we can do it, our military is better and we have more allies". I would say, "Mr. President, given the nature of this objective the costs are likely to be very high and the probability of success is likely to be lower than it would be if we were simply stacking up capabilities. Here is some research that supports my point. Now, perhaps you think that there is something about this case that will pull it away from the central tendency, but *our starting point should be the central tendency, not the outlier*."
That last bit is really important, to me. Starting points are starting points. Perhaps there are extenuating circumstances or other reasons to think that This Time Will Be Different. But it's easy to trick yourself into believing in those when you shouldn't -- think the neocons in the Bush admin -- and a model that emphasizes a general result is therefore quite useful.
One more thing that gets at me a lot: the results of statistical models are almost always reported as average effects. But we can use models for much more. That is, we can take our model, plug in the actual values for Iran and the US, and get a prediction that might be fairly far from the mean.
In other words, the model might still be very useful even if the central tendency of the model does not approximate the situation under analysis. In that case you don't throw the model away... you simply adjust your prediction from the central tendency to the model's prediction *given the actual values you care about*.
Not many people actually do this, at least in academia, but it can certainly be done. That prediction will not be perfect, but if a model has performed well in the past I'd rather start with the model's prediction than just spit-ball it. Or delay all decision-making until some 24-year-old American can spend 2 years in the field in Iran learning every nuance of the place.
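To make the "plug in the actual values" point concrete, here is a minimal sketch in Python. This is not Sullivan's model: the variables (a hypothetical coercive-objective indicator and a capability ratio) and the simulated data are purely illustrative assumptions. The only point is that once a model is fit, you can generate a predicted probability for one specific covariate profile instead of reporting only the sample-wide average.

```python
# Illustrative sketch only -- hypothetical variables and simulated data,
# not Sullivan's actual model or findings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical historical cases: objective type and relative capabilities
coercive_objective = rng.integers(0, 2, n)      # 1 = coercive objective, 0 = brute-force
capability_ratio = rng.uniform(1, 10, n)        # strong state's capabilities / weak state's

# Simulated outcomes in which coercive objectives lower the chance of success
latent = 1.5 - 2.0 * coercive_objective + 0.1 * capability_ratio
success = (rng.uniform(size=n) < 1 / (1 + np.exp(-latent))).astype(int)

# Fit a simple logit on the "historical" data
X = sm.add_constant(np.column_stack([coercive_objective, capability_ratio]))
model = sm.Logit(success, X).fit(disp=False)

# Average prediction (the central tendency) vs. the prediction for one
# specific case: a coercive objective with a large capability advantage
print("Average predicted P(success):", model.predict(X).mean())
case = sm.add_constant(np.array([[1.0, 8.0]]), has_constant='add')
print("Predicted P(success) for this case:", model.predict(case)[0])
```

The design point is simply that the second number, not the first, is the relevant starting estimate for the case at hand, even though both come from the same fitted model.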
Let's think of this another way. People with their skin in the game -- sports teams, businesses, political campaigns -- rely heavily on models. Presumably they do so because they think they are useful. So why should we believe they would be less useful in other areas?
Ok. Perhaps it was unfair of me to suggest that building such models, on average or on balance, encourages a certain sort of policy advice, and I agree that a fair reading of my post is that I did suggest that.
Of course the fact is that the majority of scholars, of whatever persuasion, are never called on to give policy advice, so it's all necessarily rather speculative. Sullivan's article is obviously a piece written primarily to make a contribution/extension/revision of the literature on a particular question (it's in JCR not Foreign Affairs, after all) and I probably just should have treated it on its own terms. My recollection from perusing the article is that she doesn't claim it has any policy implications, or at least doesn't make a big point of them.
We need all different kinds of analysis asking all different sorts of questions -- I'm w/ PTJ in endorsing methodological pluralism (and also in some of his associated critique of the discipline, but that's another point). I don't quite get your last clause about analysis "that tells us nothing at all." Obviously analysis that tells us something is better than analysis that tells us nothing.
The problem that still lingers for me, however (now going back to the paragraph before the immediately preceding one), is this: Sullivan is writing a piece about why big states lose asymmetric conflicts. She refers to the then-ongoing war in Iraq (the piece was published in 2007). She refers to past wars and conflicts, e.g. Vietnam, Grenada, Somalia, I forget what else. You have to make some kind of heroic effort to read this piece, however scholarly and detached its language, and not at least think about whether it might have some relevance for U.S. or e.g. NATO policymakers. I mean, it's about a current issue of some importance. No one was going to read my diss. and call me about a current policy question b/c my diss. had really no possible relevance to any current policy question. But this article does, or at least it requires a real effort to pretend otherwise.
It seems to me that IR scholars who do this kind of work cannot really hide behind the claim, however facially valid, that they are inquiring into general tendencies and that their work therefore has no or minimal policy relevance. If that were indeed the case, why did she choose this particular topic? Why didn't she write about state-building in ancient China and early-modern Europe, for instance (cf. Victoria Hui)? Sullivan is writing about a 'real-world' issue. To address a real-world issue, no matter in how abstruse a way, and then to claim that your work has no effective, practical connection to the real world whatsoever strikes me as peculiar. The other thing is that I don't think her basic pt is all that startling or surprising -- obviously it's easier to invade a weaker country and conquer it or bomb it to smithereens than it is to persuade it via sanctions or coercive diplomacy to change its 'behavior' -- but I actually don't hold that against her b/c all scholars who want to succeed in their careers have to claim that their work is startling and innovative.
(P.s. Going out; back later.)
The above is directed to Phil.
Response to Kindred to follow a bit later.
P.p.s. Though note, e.g., the case of Kosovo, where bombing was used for a basically 'coercive' objective (i.e. getting Serbia to stop ethnic cleansing of Kosovar Albanians, or what was widely seen at the time as such, at any rate).
My final point was in reaction to "Which in turn raises the question whether this has all been a waste of time. Again, I'll let readers judge that." I don't want to make any assumptions about what your answer to this question is, but my answer is that it's pretty silly to answer that question in the affirmative. That one would need to be mindful of the specifics of a given case before making an informed decision about how to handle that case does not, in any way, imply that this has all been a waste of time.
As to your second point, I wasn't trying to suggest that there is no policy relevance of Sullivan's work, nor do I think that she would claim as much. As Kindred said, work such as this establishes a starting point. And one can generate predicted probabilities that are more appropriate for any given case by plugging in values of the independent variables that are more appropriate for that case. Of course, such predictions still will not account for everything, since they are generated by models of reality that by definition simplify away many aspects of reality. But no one is contending that these models are entirely silent on questions of policy. Rather, I was arguing that it is unfair to imply that people who analyze such models are guilty of failing to appreciate that their models do indeed simplify reality, and that thus it remains important to be mindful of the unique circumstances associated with any given case. I read your original post as implying that scholars who analyze such models are indeed prone to believing that there is no need to be mindful of the unique circumstances associated with any given case.
Regarding the obviousness of her argument, I agree. However, "obvious" is a hard thing to judge, since many scholarly contributions are deemed obvious after the fact by people who spent a great deal of time advancing arguments that would make no sense in light of the point they now deem obvious. The simple fact that people keep asking the question "do sanctions work" rather than asking "does coercion work" is strong evidence that many people implicitly assume that the tool you are using to pursue your objectives (military power versus economic statecraft) determines whether you succeed, when they should instead recognize that the *goal* you are *pursuing* is far more important than whether you pursue it through force of arms.
Perhaps if people were more mindful of the difficulty of coercion, irrespective of the policy instrument one employs when seeking to achieve it, we'd be more reluctant to call for the use of force in situations where the objectives cannot be directly achieved through brute force.
Re my "which in turn raises the question whether this has all been a waste of time" -- this was unclear, but I did *not* mean that establishing the general proposition was a waste of time. I meant that if we are, in the end, basically in agreement that you need the general proposition + something more specific, then maybe our dialogue has been less than useful, ie if we agree, what we are arguing about. But my phrasing was bad, sorry.
shd be: "what are we arguing about"
I see what you mean. Yes, in that case, we are in agreement.
Kindred:
"Or delay all decision-making until some 24 year old American can spend 2 years in the field in Iran learning every nuance of the place."
Well, there's another alternative, namely call in some specialists on Iran who already know a lot about the nuances.
But I take your point about starting from a best estimate of the central tendency.
P.s.
I think I really shdn't have dragged sanctions in here.
Her theory/argument has to do with the objectives and use of military force -- so the pt would be that some objectives are easier to achieve w force than others. (Now, the objectives difficult to achieve w force might or might not also be difficult to achieve w sanctions, but that's a separate question.)
I would've dragged them in anyway. :) In fact, I wrote a post on my blog where the very point I was trying to make was that we'd be better off asking whether coercion works than asking whether sanctions work. So if you feel we've conflated issues that are best seen as separate, I'll be glad to take the blame for it. :)
ok :)
Sure, calling in experts is fine. That is, in fact, exactly what Bruce Bueno de Mesquita does when he sets the parameters for his models.
I don't want to overstate my case because, in practice, I agree that there are a lot of issues with translating statistical and/or formal models into policy. But I see no evidence that policymakers are making systematic errors because they rely too much on statistical/formal models. I see all the time that policymakers make systematic errors by relying too much on experience, case-specific knowledge, and intuition. So my suspicion is that policymakers would benefit from using more stats/formal models rather than fewer.
This is the lesson the business community has learned over the past few decades. My sense, from the few people I know who work in/for the government, is that policymakers are starting to realize this as well. But this movement is in its infancy and still has quite a ways to go.
Addendum:
I really appreciate your contributions to the poli-sci internet community, LFC. I've gotten a lot both from our exchanges and your exchanges with others. I'm glad that, while you and I come from somewhat different perspectives, you're always trying to find a productive way to bridge the gap. And I've benefited from it.
So cheers.
Thanks.
And I appreciate your comments here and your own blogging (while you're working on a little distraction, i.e. a dissertation, at the same time).