Validating Attrition

Continuing to comment on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 3 of 7; see Part 1, Part 2)

On the first page (page 28) in the third column they make the statement that:

Models of complex systems, especially those that incorporate human behavior, such as that demonstrated in combat, do not often lend themselves to empirical validation of output measures, such as attrition.

Really? Why can’t you? In fact, isn’t that exactly the model you should be validating?

More to the point, people have validated attrition models. Let me list a few cases (this list is not exhaustive):

1. Done by the Center for Army Analysis (CAA) for the CEM (Concepts Evaluation Model) using Ardennes Campaign Simulation Study (ARCAS) data. Take a look at this study done for the Stochastic CEM (STOCEM):

2. Done in 2005 by The Dupuy Institute for six different casualty estimation methodologies as part of Casualty Estimation Methodologies Studies. This was work done for the Army Medical Department and funded by DUSA (OR). It is listed here as report CE-1:

3. Done in 2006 by The Dupuy Institute for the TNDM (Tactical Numerical Deterministic Model) using Corps and Division-level data. This effort was funded by Boeing, not the U.S. government. This is discussed in depth in Chapter 19 of my book War by Numbers (pages 299-324) where we show 20 charts from such an effort. Let me show you one from page 315:


So, this is something that multiple people have done on multiple occasions. It is not so difficult that The Dupuy Institute was unable to do it. TRADOC is an organization with around 38,000 military and civilian employees, plus who knows how many contractors. I think this is something they could also do, if they had the desire.



Continuing to comment on the article in the December 2018 issue of the Phalanx by Jonathan Alt, Christopher Morey and Larry Larimer (this is part 2 of 7; see part 1 here).

On the first page (page 28) top of the third column they make the rather declarative statement that:

The combat simulations used by military operations research and analysis agencies adhere to strict standards established by the DoD regarding verification, validation and accreditation (Department of Defense, 2009).

Now, I have not reviewed what has been done on verification, validation and accreditation since 2009, but I did do a few fairly exhaustive reviews before then. One such review is written up in depth in The International TNDM Newsletter. It is Volume 1, No. 4 (February 1997). You can find it here:

The newsletter includes a letter dated 21 January 1997 from the Scientific Advisor to the CG (Commanding General) at TRADOC (Training and Doctrine Command). This is the same organization that the three gentlemen who wrote the article in the Phalanx work for. The Scientific Advisor sent a letter out to multiple commands to try to flag the issue of validation (the letter is on page 6 of the newsletter). My understanding is that he received few responses (I saw only one, and it was from Leavenworth). After that, I gather there was no further action taken. This was a while back, so maybe everything has changed, as I gather they are claiming with that declarative statement. I doubt it.

The issue to me is validation. Verification is often done. Actual validations are a lot rarer. In 1997, this was my list of combat models in the industry that had been validated (the list is on page 7 of the newsletter):

1. Atlas (using 1940 Campaign in the West)

2. Vector (using undocumented turning runs)

3. QJM (by HERO using WWII and Middle-East data)

4. CEM (by CAA using Ardennes Data Base)

5. SIMNET/JANUS (by IDA using 73 Easting data)


Now, in 2005 we did a report on Casualty Estimation Methodologies (it is report CE-1, listed here). We reviewed the listing of validation efforts, and from 1997 to 2005…nothing new had been done (except for a battalion-level validation we had done for the TNDM). So am I now to believe that since 2009 they have actively and aggressively pursued validation? Especially as most of this time was a period of severely declining budgets, I doubt it. One of the arguments against validation made in meetings I attended in 1987 was that they did not have the time or budget to spend on validating. The budget during the Cold War was luxurious by today’s standards.

If there have been meaningful validations done, I would love to see the validation reports. The proof is in the pudding…send me the validation reports; that will resolve all doubts.

Engaging the Phalanx

The Military Operations Research Society (MORS) publishes a periodical journal called the Phalanx. In the December 2018 issue was an article that referenced one of our blog posts. This took us by surprise. We only found out about it thanks to one of the viewers of this blog. We are not members of MORS. The article is paywalled and cannot be easily accessed if you are not a member.

It is titled “Perspectives on Combat Modeling” (page 28) and is written by Jonathan K. Alt, U.S. Army TRADOC Analysis Center, Monterey, CA; Christopher Morey, PhD, Training and Doctrine Command Analysis Center, Ft. Leavenworth, Kansas; and Larry Larimer, Training and Doctrine Command Analysis Center, White Sands, New Mexico. I am not familiar with any of these three gentlemen.

The blog post that appears to be generating this article is this one:

Wargaming Multi-Domain Battle: The Base Of Sand Problem

Simply by coincidence, Shawn Woodford recently re-posted this in January. It was originally published on 10 April 2017 and was written by Shawn.

The opening two sentences of the article in the Phalanx reads:

Periodically, within the Department of Defense (DoD) analytic community, questions will arise regarding the validity of the combat models and simulations used to support analysis. Many attempts (sic) to resurrect the argument that models, simulations, and wargames “are built on the thin foundation of empirical knowledge about the phenomenon of combat.” (Woodford, 2017).

It is nice to be acknowledged, although in this case, it appears that we are being acknowledged because they disagree with what we are saying.

Probably the word that gets my attention is “resurrect.” It is an interesting word, implying that this is an old argument that has somehow or other been put to bed. Granted, it is an old argument. On the other hand, it has not been put to bed. If a problem has been identified and not corrected, then it is still a problem. Age has nothing to do with it.

On the other hand, maybe they are using the word “resurrect” because recent developments in modeling and validation have changed the environment significantly enough that these arguments no longer apply. If so, I would be interested in what those changes are. The last time I checked, the modeling and simulation industry was using many of the same models it had used for decades. In some cases, it was going back to using simpler hex-based games for its modeling and wargaming efforts. We have blogged a couple of times about these efforts. So, in the world of modeling, unless there have been earthshaking and universal changes made in the last five years that have completely revamped the landscape…then the decades-old problems still apply to the decades-old models and simulations.

More to come (this is the first of at least 7 posts on this subject).

Afghan Security Forces Deaths Top 45,000 Since 2014

The President of Afghanistan, Ashraf Ghani, speaking with CNN’s Fareed Zakaria, at the World Economic Forum in Davos, Switzerland, 25 January 2019. [Office of the President, Islamic Republic of Afghanistan]

Last Friday, at the World Economic Forum in Davos, Switzerland, Afghan President Ashraf Ghani admitted that his country’s security forces had suffered over 45,000 fatalities since he took office in September 2014. This far exceeds the figure of 28,000 killed since 2015 that Ghani had previously announced in November 2018. Ghani’s cryptic comment in Davos did not indicate how the newly revealed total relates to previously released figures—whether it reflects new accounting, a sharp increase in recent casualties, or simply more forthrightness.

This revised figure casts significant doubt on the validity of analysis based on the previous reporting. Correcting it will be difficult. At the request of the Afghan government in May 2017, the U.S. military has treated security forces attrition and loss data as classified and has withheld it from public release.

If Ghani’s figure is, in fact, accurate, then it reinforces the observation that the course of the conflict is tilting increasingly against the Afghan government.


What Multi-Domain Operations Wargames Are You Playing? [Updated]

Source: David A. Shlapak and Michael Johnson. Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics. Santa Monica, CA: RAND Corporation, 2016.

[UPDATE] We had several readers recommend games they have used or would be suitable for simulating Multi-Domain Battle and Operations (MDB/MDO) concepts. These include several classic campaign-level board wargames:

The Next War (SPI, 1976)

NATO: The Next War in Europe (Victory Games, 1983)

For tactical level combat, there is Steel Panthers: Main Battle Tank (SSI/Shrapnel Games, 1996- )

There were also a couple of naval/air oriented games:

Asian Fleet (Kokusai-Tsushin Co., Ltd. (国際通信社) 2007, 2010)

Command: Modern Air Naval Operations (Matrix Games, 2014)

Are there any others folks are using out there?

A Mystics & Statistics reader wants to know what wargames are being used to simulate and explore Multi-Domain Battle and Operations (MDB/MDO) concepts.

There is a lot of MDB/MDO wargaming going on at all levels in the U.S. Department of Defense. Much of this appears to use existing models, simulations, and wargames, such as the U.S. Army Center for Army Analysis’s unclassified Wargaming Analysis Model (C-WAM).

Chris Lawrence recently looked at C-WAM and found that it uses a lot of traditional board wargaming elements, including methodologies for determining combat results, casualties, and breakpoints that have been found unable to replicate real-world outcomes (aka “The Base of Sand” problem).

C-WAM 4 (Breakpoints)

There is also the wargame used by RAND to look at possible scenarios for a potential Russian invasion of the Baltic States.

Wargaming the Defense of the Baltics

Wargaming at RAND

What other wargames, models, and simulations are there being used out there? Are there any commercial wargames incorporating MDB/MDO elements into their gameplay? What methodologies are being used to portray MDB/MDO effects?

An Administrative Weakness

Another post in response to the comments on this blog post:

The Afghan Insurgents

The comment was “…the insurgents are one side of the coin and the other is the credibility of the government we are trying to create in Afghanistan…If the central government is seen as corrupt and self serving then this also inspires the insurgents and may in fact be the decisive factor….”

This immediately brought to mind David Galula’s construct, which was based upon four major points (see pages 210-211 of America’s Modern Wars):

  1. Insurgents need a cause
  2. A police and administrative weakness
  3. A non-hostile geographic environment
  4. Outside support in the middle to late stages.

He specifically states that: “the first two are musts. The last is a help that may become a necessity.”

Now, the problem is that we never took the time to measure an “administrative weakness” or even define what it was. Nor did David Galula. Furthermore, there is probably an “administrative weakness” or two on the guerilla side as well. If the culture of Iraq/Afghanistan/Vietnam makes it difficult to create government structures and armed forces that are highly motivated, unified, and not corrupt, then I suspect some of those same problems exist among the guerillas drawn from that same culture. Therefore, measuring this requires not only defining what these “administrative weaknesses” are, but also quantifying them, and then determining how they affected both (or more) sides. Needless to say, this was not going to be done in the initial phase of our analysis. We were never funded to conduct follow-up analysis.

This is the problem with David Galula’s construct. There is no easy way to measure it or analyze it. Galula offers no definition of what an “administrative weakness” is. If he does not define it, then how do I define it for his “theory?”

One does note that Galula in his description of the Viet Cong in 1963 states that:

The insurgent has really no cause at all: he is exploiting the counterinsurgent’s weaknesses and mistakes….The insurgent’s program is simply: “Throw the rascals out.” If the “rascals” (whoever is in power in Saigon) amend their ways, the insurgent would lose his cause.

As I note on page 48 of my book:

This was a war that eventually resulted in over 2 million deaths and an insurgent force in excess of 300,000. As it is, one could infer from Galula’s statement that he felt that the insurgency could be easily defeated since it was based upon “no real cause.” We believe that this view has been proven incorrect by historical events.

Clearly identifying insurgent cause and administrative weakness was also a challenge for David Galula.

Hausser Wielding Chalk

The Battle of Prokhorovka took place on 12 July 1943 (and for several days after, depending on definition). The most famous part of the fighting was the attack from the Soviet XVIII Tank Corps and XXIX Tank Corps against the Leibstandarte SS Adolf Hitler Division.

Several stories posted on the web and I gather a few books mention something like: “Several German accounts mention that SS-Obergruppenführer Paul Hausser, commander of the SS Panzer Corps, had to use chalk to mark and count the huge jumble of 93 knocked-out Soviet tanks in the Leibstandarte sector alone.”

Now, this makes for an interesting scene: General Hausser, the 62-year-old founder of the Waffen SS, crawling around the battlefield marking up 93 tanks with chalk. With the Totenkopf SS Division having to continue the offensive on the 13th, and the Das Reich SS Division in the days after that, I would think that the SS Panzer Corps commander had a few more important things to do at this moment. I also suspect that significant parts of the battlefield were still under enemy observation. It gets a little hard to imagine that Hausser was out there with chalk, counting tanks.

Does anyone know the original source of this story?

Bernard Fall Quote

We have gotten several interesting comments to this blog post:

The Afghan Insurgents

One comment stated in part that “….I am thinking the road building, school building, and all that has zero impact on winning the people…..”

This reminds me of a Bernard Fall quote related to the Vietnam War. I used it as the introduction to Chapter 14 (page 147) of my book America’s Modern Wars:

Civic action is not the construction of privies or the distribution of anti-malaria sprays. One can’t fight an ideology; one can’t fight a militant doctrine with better privies. Yet this is done constantly. One side says, “Land reform,” and the other side says, “Better culverts.” One side says, “We are going to kill all of those nasty village chiefs and landlords.” The other side says, “Yes, but look, we want to give you prize pigs to improve your strain.” These arguments just do not match.  Simple but adequate appeals will have to be found sooner or later.

 Bernard Fall, 1967


Forecasting the Iraqi Insurgency

[This piece was originally posted on 27 June 2016.]

Previous posts have detailed casualty estimates by Trevor Dupuy or The Dupuy Institute (TDI) for the 1990-91 Gulf War and the 1995 intervention in Bosnia. Today I will detail TDI’s 2004 forecast for U.S. casualties in the Iraqi insurgency that began in 2003.

In April 2004, as simultaneous Sunni and Shi’a uprisings dramatically expanded the nascent insurgency in Iraq, the U.S. Army Center for Army Analysis (CAA) accepted an unsolicited proposal from TDI President and Executive Director Christopher Lawrence to estimate likely American casualties in the conflict. A four-month contract was finalized in August.

The methodology TDI adopted for the estimate was a comparative case study analysis based on a major data collection effort on insurgencies. Twenty-eight cases were selected for analysis based on five criteria:

  1. The conflict had to be post-World War II to facilitate data collection;
  2. It had to have lasted more than a year (as was already the case in Iraq);
  3. It had to be a developed nation intervening in a developing nation;
  4. The intervening nation had to have provided military forces to support or establish an indigenous government; and
  5. There had to be an indigenous guerilla movement (although it could have received outside help).

Extensive data was collected from these 28 cases, including the following ten factors used in the estimate:

  • Country Area
  • Orderliness
  • Population
  • Intervening force size
  • Border Length
  • Insurgency force size
  • Outside support
  • Casualty rate
  • Political concept
  • Force ratios

Initial analysis compared this data to insurgency outcomes, which revealed some startlingly clear patterns suggesting cause and effect relationships. From this analysis, TDI drew the following conclusions:

  • It is difficult to control large countries.
  • It is difficult to control large populations.
  • It is difficult to control an extended land border.
  • Limited outside support does not doom an insurgency.
  • “Disorderly” insurgencies are very intractable and often successful insurgencies.
  • Insurgencies with large intervening third-party counterinsurgent forces (above 95,000) often succeed.
  • Higher combat intensities do not doom an insurgency.

In all, TDI assessed that the Iraqi insurgency fell into the worst category in nine of the ten factors analyzed. The outcome would hinge on one fundamental question: was the U.S. facing a regional, factional insurgency in Iraq or a widespread anti-intervention insurgency? Based on the data, if the insurgency was factional or regional, it would fail. If it became a nationalist revolt against a foreign power, it would succeed.

Based on the data and its analytical conclusions, TDI provided CAA with an initial estimate in December 2004, and a final version in January 2005:

  • Insurgent force strength is probably between 20,000–60,000.
  • This is a major insurgency.
    • It is of medium intensity.
  • It is a regional or factionalized insurgency and must remain that way.
  • U.S. commitment can be expected to be relatively steady throughout this insurgency and will not be quickly replaced by indigenous forces.
  • It will last around 10 or so years.
  • It may cost the U.S. 5,000 to 10,000 killed.
    • It may be higher.
    • This assumes no major new problems in the Shiite majority areas.

When TDI made its estimate in December 2004, the conflict had already lasted 21 months, and U.S. casualties were 1,335 killed, 1,038 of them in combat.

When U.S. forces withdrew from Iraq in December 2011, the war had gone on for 105 months (8.7 years), and U.S. casualties had risen to 4,485 fatalities—3,436 in combat. The United Kingdom lost 180 troops killed and Coalition allies lost 139. There were at least 468 contractor deaths from a mix of nationalities. The Iraqi Army and police suffered at least 10,125 deaths. Total counterinsurgent fatalities numbered at least 15,397.
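The tallies above can be cross-checked with a few lines of arithmetic. This is just a sketch verifying the figures quoted in the text; since the contractor and Iraqi figures are “at least” floors, the total is likewise a floor:

```python
# Counterinsurgent fatality figures for Iraq, 2003-2011, as quoted above.
# Contractor and Iraqi Army/police figures are minimums ("at least").
fatalities = {
    "U.S. military": 4_485,
    "United Kingdom": 180,
    "Other Coalition allies": 139,
    "Contractors (minimum)": 468,
    "Iraqi Army and police (minimum)": 10_125,
}

total = sum(fatalities.values())
print(total)  # 15397 -- matches the "at least 15,397" total above

# Duration check: March 2003 through December 2011, inclusive of end month.
months = (2011 - 2003) * 12 + (12 - 3)
print(months)  # 105 -- roughly 8.7 years, as stated above
```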

As of this date, the conflict in Iraq that began in 2003 remains ongoing.


Christopher A. Lawrence, America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Philadelphia, PA: Casemate, 2015) pp. 11-31; Appendix I.

U.S. Army Releases New Iraq War History

On Thursday, the U.S. Army released a long-awaited history of its operational combat experience in Iraq from 2003 to 2011. The study, titled The U.S. Army in the Iraq War – Volume 1: Invasion – Insurgency – Civil War, 2003-2006 and The U.S. Army in the Iraq War – Volume 2: Surge and Withdrawal, 2007-2011, was published under the auspices of the U.S. Army War College’s Strategic Studies Institute.

This reflects its unconventional origins. Under normal circumstances, such work would be undertaken by either the U.S. Army Combat Studies Institute (CSI), which is charged with writing quick-turnaround “instant histories,” or the U.S. Army Center of Military History (CMH), which writes more deeply researched “official history,” years or decades after the fact.[1] Instead, these volumes were directly commissioned by then-Chief of Staff of the Army, General Raymond Odierno, who created an Iraq Study Group in 2013 to research and write them. According to Odierno, his intent was “to capture key lessons, insights, and innovations from our more than 8 years of conflict in that country. [I]t was time to conduct an initial examination of the Army’s experiences in the post-9/11 wars, to determine their implications for our future operations, strategy, doctrine, force structure, and institutions.”

CSI had already started writing contemporary histories of the conflict, publishing On Point: The United States Army in Operation IRAQI FREEDOM (2004) and On Point II: Transition to the New Campaign (2008), which covered the period from 2003 to January 2005. A projected third volume was advertised, but never published.

Although the Iraq Study Group completed its work in June 2016 and the first volume of the history was scheduled for publication that October, its release was delayed due to concerns within the Army historical community regarding its perspective and controversial conclusions. After external reviewers deemed the study fair and recommended its publication, claims were lodged after its existence was made public last autumn that the Army was suppressing it to avoid embarrassment. Making clear that the study was not an official history publication, current Army Chief of Staff General Mark Milley added his own foreword to Odierno’s, and publicly released the two volumes yesterday.


[1] For a discussion of the roles and mission of CSI and CMH with regard to history, see W. Shane Story, “Transformation or Troop Strength? Early Accounts of the Invasion of Iraq,” Army History, Winter 2006; Richard W. Stewart, “‘Instant’ History and History: A Hierarchy of Needs,” Army History, Winter 2006; Jeffrey J. Clarke, “The Care and Feeding of Contemporary History,” Army History, Winter 2006; and Gregory Fontenot, “The U.S. Army and Contemporary Military History,” Army History, Spring 2008.