Validating A Combat Model (Part VIII)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

The First Test of the TNDM Battalion-Level Validations: Predicting the Winners
by Christopher A. Lawrence

CASE STUDIES: WHERE AND WHY THE MODEL FAILED CORRECT PREDICTIONS

Modern (8 cases):

Tu-Vu—On the first run, the model predicted a defender win. Historically, the attackers (Viet Minh) won with a 2.8 km advance. When the CEV for the Viet Minh was put in (1.2), the defender still won. The real problem in this case is the horrendous casualties taken by both sides, with the defending Moroccans losing 250 out of 420 people and the attacker losing 1,200 out of 7,000 people. The model predicted only 140 and 208 respectively. This appears to point to a fundamental weakness in the model, which is that if one side is willing to attack (or defend) at all costs, the model cannot predict the extreme losses. This happens in some battles with non-first world armies, with the Japanese in WWII, and apparently sometimes with the WWI predictions. In effect, the model needs some mechanism to predict fanaticism that would increase the intensity and casualties of the battle for both sides. In this case, the increased casualties certainly would have resulted in an attacker advance after over half of the defenders were casualties.
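
For reference, the gap between the historical and predicted losses can be checked directly from the figures quoted above. The sketch below is simple arithmetic on those numbers, not TNDM output:

```python
# Loss fractions implied by the Tu-Vu figures cited above (not TNDM output).
cases = {
    "Defender (Moroccans)": {"strength": 420, "actual": 250, "predicted": 140},
    "Attacker (Viet Minh)": {"strength": 7000, "actual": 1200, "predicted": 208},
}

for side, d in cases.items():
    actual_pct = 100 * d["actual"] / d["strength"]
    predicted_pct = 100 * d["predicted"] / d["strength"]
    print(f"{side}: actual {actual_pct:.0f}% vs. predicted {predicted_pct:.0f}% of strength")
# Defender: actual 60% vs. predicted 33%; attacker: actual 17% vs. predicted 3%.
```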

Mapu—On the first run the model predicted an attacker (Indonesian) win. Historically, the defender (British) won. When the British are given a hefty CEV of 2.6 (as one would expect that they would have), the defender wins, although the casualties are way off for the attacker. This appears to be a case in which the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Bir Gifgafa II (Night)—On the first run the model predicted a defender (Egyptian) win. Historically the attacker (Israel) won with an advance of three kilometers. When the Israelis are given a hefty CEV of 3.5 (as historically they have tended to have), they win, although their casualties and distance advanced are way off. These errors are probably due to the short duration (one hour) of the model run. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run in order to replicate historical results.

Goose Green—On the first run the model predicted a draw. Historically, the attacker (British) won. The first run also included the “cheat” of counting the Milans as regular weapons rather than as anti-tank weapons. When the British are given a hefty CEV of 2.4 (as one could reasonably expect that they would have), they win, although their advance rate is too slow. Casualty prediction is quite good. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Two Sisters (Night)—On the first run the model predicted a draw. Historically the attacker (British) won yet again. When the British are given a CEV of 1.7 (as one would expect that they would have) the attacker wins, although the advance rate is too slow and the casualties a little low. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Mt. Longdon (Night)—On the first run the model predicted a defender win. Historically, the attacker (British) won as usual. When the British are given a CEV of 2.3 (as one would expect that they should have), the attacker wins, although as usual the advance rate is too slow and the casualties a little low. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Tumbledown—On the first run the model predicted a defender win. Historically the attacker (British) won as usual. When the British were given a CEV of 1.9 (as one would expect that they should have), the attacker wins, although as usual, the advance rate is too slow and the casualties a little low. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Cuatir River—On the first run the model predicted a draw. Historically, the attacker (the Republic of South Africa) won. When the South African forces were given a CEV of 2.3 (as one would expect that they should have), the attacker wins, with advance rates and casualties being reasonably close. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.
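
The pattern across these engagements is consistent: the baseline run gets the winner wrong, and crediting the historically more effective side with a plausible CEV flips the prediction. As a minimal sketch, assuming (purely for illustration, not as the TNDM's actual internal method) that a CEV simply multiplies one side's combat power before the outcome comparison:

```python
# Minimal sketch of how a CEV multiplier can flip a predicted outcome.
# Illustration only; this does not reproduce the TNDM's actual calculations.

def predict_winner(attacker_power: float, defender_power: float,
                   attacker_cev: float = 1.0, draw_band: float = 0.1) -> str:
    """Compare combat power after applying the attacker's CEV multiplier."""
    ratio = (attacker_power * attacker_cev) / defender_power
    if ratio > 1.0 + draw_band:
        return "attacker win"
    if ratio < 1.0 - draw_band:
        return "defender win"
    return "draw"

# Hypothetical strengths: a baseline run that favors the defender...
print(predict_winner(850, 1000))                    # defender win
# ...flips to an attacker win once a CEV of 2.4 (as at Goose Green) is credited.
print(predict_winner(850, 1000, attacker_cev=2.4))  # attacker win
```

The draw_band threshold here is an arbitrary stand-in for whatever separates a draw from a win in the model; the point is only that a CEV large enough will overturn the baseline result.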

Next: Predicting casualties.

Validating A Combat Model (Part VII)

A painting by a Marine officer present during the Guadalcanal campaign depicts Marines defending Hill 123 during the Battle of Edson’s Ridge, 12-14 September 1942. [Wikipedia]

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

The First Test of the TNDM Battalion-Level Validations: Predicting the Winners
by Christopher A. Lawrence

CASE STUDIES: WHERE AND WHY THE MODEL FAILED CORRECT PREDICTIONS

World War II (8 cases):

Overall, we got a much better prediction rate with WWII combat. We had eight cases where there was a problem. They are:

Makin Raid—On the first run, the model predicted a defender win. Historically, the attackers (US Marines) won with a 2.5 km advance. When the Marine CEV was put in (a hefty 2.4), this produced a reasonable prediction, although the advance rate was too slow. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run in order to replicate historical results.

Edson’s Ridge (Night)—On the first run, the model predicted a defender win. Historically, the battle must be considered at best a draw, or more probably a defender win, as the mission accomplishment score of the attacker is 3 while the defender’s is 5.5. The attacker did advance 2 kilometers, but suffered heavy casualties. The second run was done with a US CEV of 1.5. This maintained a defender win and even shifted the balance further in favor of the Marines. This is clearly a problem in defining who is the winner.

Lausdell X-Road (Night)—On the first run, the model predicted an attacker victory with an advance rate of 0.4 kilometer. Historically, the German attackers advanced 0.75 kilometer, but had a mission accomplishment score of 4 versus the defender’s mission accomplishment score of 6. A second run was done with a US CEV of 1.1, but this did not significantly change the result. This is clearly a problem in defining who is the winner.
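
Both of these cases turn on how a winner is defined when the side that gained ground is not the side with the higher mission accomplishment score. As a purely hypothetical illustration (not the criterion actually used in this validation), a rule that lets the mission accomplishment scores dominate and uses ground gained only to break near-ties would score both engagements as defender wins despite the attackers' advances:

```python
# Hypothetical winner-determination rule combining mission accomplishment
# scores with ground gained. NOT the criterion used in this validation; it
# only illustrates why engagements like these two are ambiguous.

def judge_outcome(attacker_score: float, defender_score: float,
                  attacker_advance_km: float, margin: float = 1.0) -> str:
    """Mission accomplishment scores dominate; advance breaks near-ties."""
    if attacker_score - defender_score > margin:
        return "attacker win"
    if defender_score - attacker_score > margin:
        return "defender win"
    return "attacker win" if attacker_advance_km > 0 else "draw"

# Edson's Ridge: scores 3 vs. 5.5, yet the attacker advanced 2 km.
print(judge_outcome(3, 5.5, 2.0))   # defender win (scores dominate)
# Lausdell X-Road: scores 4 vs. 6, attacker advanced 0.75 km.
print(judge_outcome(4, 6, 0.75))    # defender win
```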

VER-9CX—On the first run, the attacker is reported as the winner. Historically this is the case, with the attacker advancing 1.2 kilometers although suffering higher losses than the defender. On the second run, however, the model predicted that the engagement was a draw. The model assigned the defenders (German) a CEV of 1.3 relative to the attackers in an attempt to better reflect the casualty exchange. The model is clearly having a problem with this engagement due to the low defender casualties.

VER-2ASX—On the first run, the defender was reported as the winner. Historically, the attacker won. On the second run, the battle was recorded as a draw with the attacker (British) CEV being 1.3. This high CEV for the British is not entirely explainable, although they did fire a massive suppressive bombardment. In this case the model appears to be assigning a CEV bonus to the wrong side in an attempt to adjust a problem run. The model is still clearly having a problem with this engagement due to the low defender casualties.

VER-XHLX—On the first run, the model predicted that the defender won. Historically, the attacker won. On the second run, the battle was recorded as an attacker win with the attacker (British) CEV being 1.3. This high CEV is not entirely explainable; there is no clear explanation for these results.

VER-RDMX—On the first run, the model predicted that the attacker won. Historically, this is correct. On the second run, however, the battle was recorded as a defender win. This indicates an attempt by the model to get the casualties correct. The model is clearly having a problem with this engagement due to the low defender casualties.

VER-CHX—On the first run, the model predicted that the defender won. Historically, the attacker won. On the second run, the battle was recorded as an attacker win with the attacker (Canadian) CEV being 1.3. Again, this high CEV is not entirely explainable. The model appears to be assigning a CEV bonus to the wrong side in an attempt to adjust a problem run. The model is still clearly having a problem with this engagement due to the low defender casualties.

Next: Post-WWII Cases

Dispersion versus Lethality

This is a follow-up to the post discussing Trevor Dupuy’s work as compared to the Army Research Laboratory’s (ARL) current work:

The Evolution of Weapons and Warfare?

The work by ARL produced a graph similar to this one by Trevor Dupuy, except it was used to forecast the “figure of regularity” (which I gather means firepower or lethality). But note that there is another significant line on Trevor Dupuy’s graph besides the weapons’ “theoretical killing capacity.” It is labeled Dispersion. Note the left side of the graph, where it is labeled “Dispersion: Square Meters per Man in Combat.” It also goes up as the “theoretical killing capacity” of the weapons goes up.

This is the other side of the equation. For every action, there is an equal and opposite reaction, to paraphrase a famous theorist. This results in this chart from Col. Dupuy:

Now… this is pretty damn significant… for as firepower, or lethality, or “theoretical killing capacity” has gone up, even geometrically… daily casualty rates have declined. What is happening? Well, not only is there “an equal and opposite reaction” for every action; in fact, the reaction has outweighed the increase in firepower/lethality/killing capacity over time. This is worth thinking about. For as firepower has gone up, daily casualty rates have declined.

In fact, I did discuss this in my book War By Numbers (Chapter 13: The Effects of Dispersion on Combat). Clearly there was more to “dispersion” than just dispersion, and I tried to illustrate that with this chart:

To express it in simple English, people are dispersing, increasing engagement ranges and making more individual use of cover and concealment (page 166). Improvements in weapons, which occur on both sides, have also been counteracted by changes in deployment and defense. These changes have been more significant than the increases in lethality. See pages 166-169 of War by Numbers for a more complete explanation of this chart.
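
The relationship the charts describe can be reduced to a simple ratio: if the area each man occupies grows faster than the “theoretical killing capacity” directed at him, then the lethality actually brought to bear per exposed man falls, and with it the daily casualty rate. A minimal sketch, using invented numbers purely for illustration (not Col. Dupuy’s actual data points):

```python
# Purely hypothetical numbers illustrating the relationship in the charts above:
# when dispersion (square meters per man) grows faster than "theoretical killing
# capacity," lethality per exposed man falls even as weapons become more lethal.

eras = [
    # (era label, theoretical killing capacity, square meters per man) -- invented values
    ("Era A", 1_000,      10),
    ("Era B", 100_000,    4_000),
    ("Era C", 10_000_000, 2_000_000),
]

for label, lethality, dispersion in eras:
    exposure_adjusted = lethality / dispersion
    print(f"{label}: killing capacity {lethality:>10,} / {dispersion:>9,} sq m per man "
          f"= exposure-adjusted lethality {exposure_adjusted:.0f}")

# Killing capacity rises 10,000-fold, but dispersion rises faster, so the
# exposure-adjusted figure falls (100 -> 25 -> 5), consistent with declining
# daily casualty rates.
```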

The issues related to lethality, and to forecasting the future of lethality, get a little complex and multifaceted.