Dave M. emailed me about my post on battle intensity, which relied on a Wall Street Journal chart for estimating troop numbers in Afghanistan. He recalled a Washington Post article with different numbers and wondered what I made of the discrepancy. It was certainly odd.
The original chart from the WSJ showed:
Yet Dave noted a July Washington Post chart at odds with my numbers drawn from the above chart:
Whoa. The first thing I noticed is that the former is a Department of Defense-derived chart that shows changes over time. I liked it because I could eyeball the average for each year. The latter is a CENTCOM report that shows troop strength in December of each year. If both drew on the same underlying numbers, I'd expect the December snapshots to generally run higher than my yearly averages, since our troop strength has been going up. But since 2006, the CENTCOM numbers are way lower, not higher.
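To make that expectation concrete, here's a minimal sketch with invented monthly figures (not real data) showing why, in a year of rising troop strength, a December snapshot should sit above the yearly average rather than below it:

```python
# Invented monthly figures (not real data) showing how a December snapshot
# and a yearly average of the same series diverge when strength is rising.
monthly_troops = {
    "Jan": 20_000, "Feb": 21_000, "Mar": 23_000, "Apr": 25_000,
    "May": 27_000, "Jun": 28_000, "Jul": 30_000, "Aug": 31_000,
    "Sep": 32_000, "Oct": 33_000, "Nov": 34_000, "Dec": 35_000,
}

december_snapshot = monthly_troops["Dec"]  # CENTCOM-style point-in-time count
yearly_average = sum(monthly_troops.values()) / len(monthly_troops)  # eyeballed-average style

print(f"December snapshot: {december_snapshot:,.0f}")  # 35,000
print(f"Yearly average:    {yearly_average:,.0f}")     # ~28,250
# With strength rising, the snapshot exceeds the average. So snapshots
# running *lower* than averages suggests the two charts aren't counting
# the same population.
```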
Or could the confusion stem from counting US forces under OEF but not US forces under NATO's ISAF? That might explain the discrepancy in later years, given our ramping up of US forces for OEF under US command rather than under ISAF. (UPDATE: It looks like about 36,000 of our troops are not in ISAF, with about 30,000 in ISAF.) And the earlier-year differences could be explained by the simple difference between static December-to-December counts and my rough averages.
Looking around a little more, to my horror I found that the numbers game is really confusing, with different organizations giving out different numbers. For example, if your job is to count how many people drew combat pay in Afghanistan in a given year, rotations in and out of the country or theater complicate the tally: when one guy rotates out in June and is replaced by another guy in July, it would be totally legitimate to count that as two soldiers (at six months each) rather than as one soldier-equivalent for the entire year. Oh, and the definitions used to compile different counts are subject to change.
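Here's that counting difference spelled out with a hypothetical three-soldier roster (my own illustration, not any official methodology):

```python
# Hypothetical roster (invented, not any official methodology) showing why a
# "people who drew combat pay" headcount exceeds a soldier-equivalent count.
tours = [
    ("Soldier A", 6),   # rotated out in June: 6 months in country
    ("Soldier B", 6),   # replaced A in July: 6 months in country
    ("Soldier C", 12),  # in country all year
]

headcount = len(tours)                                   # everyone who served at all
soldier_years = sum(months for _, months in tours) / 12  # full-year equivalents

print(f"People drawing combat pay: {headcount}")      # 3
print(f"Soldier-year equivalents:  {soldier_years}")  # 2.0
# Both are legitimate answers to "how many troops were in Afghanistan this
# year?"; they just answer different questions.
```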
I couldn't find the Boots on the Ground (BOG) reports cited, which I think would be best for the purposes of my combat intensity analysis, since I don't really care to count troops near Afghanistan supporting the campaign. On the downside, BOG is a point-in-time number when I'd rather have average numbers of boots on the ground. The CRS report above does have monthly BOG numbers for Iraq and Afghanistan, which I may mine to refine my eyeballed yearly averages, but I don't know where to get new numbers.
In the short run, I plan to update my battle intensity post in the new year using the original WSJ data and actual 2009 OEF Afghanistan casualties. (Note that I already had to revise that post once: I originally used OEF-wide casualty data, briefly forgetting that Afghanistan is a subset of OEF, albeit the major component. Indeed, I've read articles that clearly used OEF data as Afghanistan data.) At least trends should be visible when using a consistent source of data, even if the troop data isn't the one I'd select as a basis of measurement.
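For what it's worth, here's a sketch of the kind of trend comparison I have in mind, assuming intensity is measured as casualties per thousand troops (that ratio is my assumption here, and every number below is an invented placeholder):

```python
# All figures below are invented placeholders, not real casualty or troop data.
# The point: as long as both series come from consistent sources, the trend in
# the ratio is meaningful even if the troop figures aren't my preferred basis.
data = {
    # year: (casualties, average troop strength)
    2007: (100, 25_000),
    2008: (150, 30_000),
    2009: (300, 50_000),
}

for year, (casualties, troops) in sorted(data.items()):
    intensity = casualties / troops * 1_000  # casualties per 1,000 troops
    print(f"{year}: {intensity:.1f} casualties per 1,000 troops")
```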
So if and when I do that exercise in the future, I won't just add whatever number I find for 2010 to either chart above, since I'll have no idea whether it was drawn from the same source or standard. I'd have to start from scratch. I'd like to use the BOG data, which appears to count actual people on the ground who can be shot at, despite its shortcoming as a point-in-time count; but I can't find new sources for that data.
An interesting problem, eh? You'd think there'd be one number to use, but there isn't. And given that even I didn't fully appreciate this (I mean, I knew there are different ways to report troops in the theater or supporting it, but I assumed numbers portrayed as in Afghanistan meant "in" Afghanistan), I bet darned few journalists are aware of the danger of mixing and matching.
Nobody is lying here. But the number you get depends on what your organization is supposed to count. So reporters need to be aware of what number they ask for, what number they actually receive, and what that number really means.