OxBlog

Thursday, September 25, 2003

# Posted 7:49 PM by Ariel David Adesnik  

TEMPEST IN THE ACADEMIC TEAPOT: When I put up posts about theoretical and methodological issues in political science, I generally expect them to be ignored. After all, that kind of stuff is pretty distant from real politics.

But what I underestimated, I think, was how many bloggers have a dog in this fight. Thus, there were passionate responses to my post from both Dan Drezner and Chris Lawrence, as well as a pretty animated discussion in the comments section following Dan's post.

However, despite the rising temperature, I don't think I disagree with all that much in their posts. A few clarifications are in order, though. First of all, I was all but unaware of the "perestroika" movement in political science at the time of writing my post.

As a scholar of British extraction, I find that little of what happens on the far side of the Atlantic enters my stream of consciousness. However, in the past three weeks at Harvard, I have heard some murmuring about "perestroika" without really knowing what it's about.

What I can definitely say is that after reading the article about the movement which Dan recommended, I am fundamentally sympathetic to its objectives. (On the other hand, I find it strange to agree with John Mearsheimer about anything.)

Next up: Dan surmises that
There's a very big difference between creating new data and using new statistical techniques to analyze old data. I strongly suspect Adesnik's source of irritation is the latter. The former is way too rare in the discipline, especially in international relations. Mostly that's because building new data sets takes a lot of time and the rewards in terms of professional advancement are not great, whereas relying on old data has no fixed costs.
Actually, I'm far more frustrated by the new data sets than by the rehashing of the old ones. Just three days ago I was at a presentation in which a colleague described the data set she assembled on over 120 civil wars that have taken place since 1945. Since Latin America is the region I know best, I pulled the Latin American cases out of the data set to look at them.

What I found was that a very large proportion of the cases were "coded" in a misleading or flat-out wrong manner. Why? Because no one can study 120 civil wars. But pressure to come up with data sets leads scholars to do this anyway and do it poorly. Of course, since their work is evaluated mostly by other scholars who lack the historical knowledge to criticize their work, they get away with it. And so the academic merry-go-round spins merrily along.

Now for an actual disagreement: Chris Lawrence takes exception to my statement that "it is absolutely impossible to explain the tactics of Al Qaeda or Hamas without reference to their perverse ideologies." He responds:
It is? Actually, it’s pretty easy to explain their tactics—historically, they’ve been quite effective. What’s (slightly) more difficult to explain is why Al Qaeda and Hamas engage in terrorism while the Sierra Club and Libertarian Party don’t.
With apologies to Chris, his comment summarizes everything that is wrong with political science. Who but a political scientist could think that ideology is not a good explanation for the differences between the Sierra Club and Hamas?

Now, if Chris is still willing to talk to me after that cheap shot, I'd ask him where he's been spending the past month given that he
just came back from spending a month with people who told me that the absolute worst way to get a job in political science is to “invent statistics.”
Around Harvard, all one hears is that incorporating statistics into one's work significantly increases one's marketability (and I don't just mean at the p<.05 level -- we're talking p<.01 on a one-tailed test). Obviously, Harvard isn't the be-all and end-all of political science, but all the visiting fellows from Stanford, Columbia, etc. agree. Also, consider the following, taken from the Perestroika article that Dan recommended:
In their study “Methodological Bias in the APSR” David Pion-Berlin, a political scientist at the University of California at Riverside, an outspoken perestroikan, and his student Dan Cleary assessed APSR content from 1991 to 2000, finding that 74 percent of its articles were based on empirical statistical analysis or formal modeling. Only 25 percent involved political theory, and just 1 percent were qualitative case studies of particular governments or institutions. In a “publish or perish” world where jobs and research funding are doled out according to APSR appearances on c. vitae, qualitative researchers, as Mearsheimer puts it, “are considered dinosaurs.”
Yikes. Btw, I do need to concede one point Chris made. It is ironic that my anti-polisci jeremiad was provoked by a study that had comparatively few statistics in it -- something I would've noticed if I'd looked at the American Political Science Review instead of the New York Times. Still, the words "comparatively few" are important here. The study in question makes exactly those mistakes I ascribe to political science in general, even if it is not the worst offender.

Finally, the Edward Said challenge. I obviously agree that many area studies experts with extensive language training add little to our collective knowledge because of their political prejudices. But I am firm in my conviction that many of the simple errors political scientists make could be avoided through greater area expertise.

Take, for example, the flaws in the civil war data set mentioned above. I'm hardly a Latin America specialist, but even some knowledge of the region's history made it apparent that the data set was flawed. If political scientists had greater expertise in a given region, they would appreciate just how often in-depth study is necessary to get even the basic facts right. Thus, when putting together a global data set, no political scientist would even consider coding the data before consulting colleagues who are experts in the relevant regional subfields.

But is that enough? As Dan says,
I have no doubt that historians can, through closely argued scholarship, identify which groups are extremist -- ex post. The key is to find descriptive characteristics that can be identified ex ante. Without ex ante markers to identify proper explanatory variables, theories degenerate into tautologies.
It's sort of strange that Dan picked the identification of extremist groups as his example, since that's an easy case for me. Long before 9/11, almost everyone in the US government believed that Osama bin Laden was a menace because of his radical ideology. Included in that "everyone" are Steve Simon and Daniel Benjamin, NSC experts who published an article months before 9/11 arguing that bin Laden's ideology set him far apart from other terrorists precisely because he wanted to kill as many civilians as possible, rather than simply generating media coverage through small to medium-sized attacks.

Only now, two years after 9/11, does a generalist like Robert Pape come along and tell us that ideology isn't the primary cause of suicide terror attacks. Ah, political science.

Last but not least: I can agree with just about everything in Josh's response to my original post. That post was certainly more polemical than nuanced.