# How Attrition is Calculated in the QJM vs the TNDM

French soldiers on the attack, during the First World War. [Wikipedia]

[The article below is reprinted from December 1996 edition of The International TNDM Newsletter. It was referenced in the recent series of posts addressing the battalion-level validation of Trevor Dupuy’s Tactical Numerical Deterministic Model (TNDM).]

by Christopher A. Lawrence

There are two different attrition calculations in the Quantified Judgement Model (QJM), one for post-1900 battles and one for pre-1900 battles. For post-1900 battles, the QJM methodology detailed in Trevor Dupuy’s Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles (Indianapolis; New York: The Bobbs-Merrill Co., 1979) was basically:

(Standard rate in percent*) x (factor based on force size) x (factor based upon mission) x (opposition factor based on force ratios) x (day/night) x (special conditions**) = percent losses.

* Different for attacker (2.8%) and defender (1.5%)
** WWI and certain forces in WWII and Korea

For the attacker the highest this percent can be in one day is 13.44% not counting the special conditions, and the highest it can be for the defender is 5.76%.
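As a minimal Python sketch, the post-1900 QJM calculation is just a chain of multipliers. Only the standard rates (2.8% attacker, 1.5% defender) come from the formula above; the function name and any factor values a caller supplies are illustrative placeholders, not actual QJM table values.

```python
def qjm_percent_losses(attacker, size_factor, mission_factor,
                       opposition_factor, day_night_factor=1.0,
                       special_factor=1.0):
    """Daily percent personnel losses as the product of the QJM multipliers.

    Only the standard rates (2.8% attacker, 1.5% defender) are taken from
    the text; all factor arguments are caller-supplied placeholders, not
    actual QJM table values.
    """
    standard_rate = 2.8 if attacker else 1.5  # percent per day
    return (standard_rate * size_factor * mission_factor *
            opposition_factor * day_night_factor * special_factor)

# With every multiplier at 1.0, the attacker's base rate is 2.8% per day.
```

With each factor at its maximum table value, the product reaches the daily ceilings the text cites (13.44% for the attacker, 5.76% for the defender, special conditions excluded).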

The current Tactical Numerical Deterministic Model (TNDM) methodology is:

(Standard personnel loss factor*) x (number of people) x (factor based upon posture/mission) x (combat effectiveness value (CEV) of opponent, up to 1.5) x (factor for surprise) x (opposition factor based on force ratios) x (factor based on force size) x (factor based on terrain) x (factor based upon weather) x (factor based upon season) x (factor based upon rate of advance) x (factor based upon amphibious and river crossings) x (day/night) x (factor based upon daily fatigue) = Number of casualties

* Different for attacker (.04) and defender (.06)

The special conditions mentioned in Numbers, Predictions, and War are not accounted for here, although it is possible to insert them, if required.
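The TNDM version differs in that it multiplies a base loss factor by personnel strength and a longer chain of situational multipliers, and returns a casualty count rather than a percentage. A sketch under the same caveats: only the .04/.06 base factors come from the text, and the multipliers passed in are placeholders rather than TNDM table values.

```python
def tndm_casualties(people, attacker, multipliers):
    """Casualty count: base loss factor x strength x all situational multipliers.

    `multipliers` stands in for the TNDM's posture, CEV, surprise, opposition,
    size, terrain, weather, season, advance-rate, crossing, day/night, and
    fatigue factors; any values used here are illustrative, not table values.
    """
    base = 0.04 if attacker else 0.06  # standard personnel loss factors
    casualties = base * people
    for factor in multipliers:
        casualties *= factor
    return casualties

# With all multipliers at 1.0, a 10,000-man attacker loses 400 people.
```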

All these tables have been revised and reﬁned from Numbers, Predictions, and War.

In Numbers, Predictions and War, the highest multiplier for size was 2.0, and this was for forces of less than 5,000 men. From 5,000 to 10,000 men it was 1.5, and from 10,000 to 20,000 it was 1.0. This formulation certainly fit the data against which the model was validated.
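Those size multipliers amount to a simple step function. A sketch, restricted to the bands the text actually quotes (the exact boundary treatment is my assumption, and behavior above 20,000 men is not given here, so the sketch declines to guess):

```python
def qjm_size_factor(men):
    """Size multiplier from Numbers, Predictions and War, as quoted above.

    Only the three bands given in the text are encoded; whether boundaries
    are inclusive is not specified, so the cutoffs here are assumptions.
    """
    if men < 5000:
        return 2.0
    if men < 10000:
        return 1.5
    if men <= 20000:
        return 1.0
    raise ValueError("size factor above 20,000 men is not given in the text")
```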

The TNDM has the following table for force sizes below 15,000 men (at which point the factor is 1.0):

The highest percent losses the attacker can suffer in a force of greater than 15,000 men in one day is “over” 100%. If one leaves out the three large multipliers for special conditions—surprise, amphibious assault, and CEV—then the maximum percent losses is 18%. The multiplier for complete surprise is 2.5 (although this is degraded by historical period), 2.00 for an amphibious attack across a beach, and 1.5 for the enemy having a noticeably superior CEV. In the case of the defender, leaving out these three factors, the maximum percent casualties is 21.6% a day.

This means at force strengths of less than 2,000 it would be possible for units to suffer 100% losses without adding in conditions like surprise.

The following TNDM tables have been modiﬁed from the originals in Numbers, Predictions, and War to include a casualty factor, among other updates (numbers in quotes refer to tables in the TNDM, the others refer to tables in Numbers, Predictions, and War):

Table 1/“2”: Terrain Factors
Table 2/“3”: Weather Factors
Table 3/“4”: Season Factors
Table 5/“6”: Posture Factors
Table 6/“9”: Shoreline Vulnerability
Table 9/“11”: Surprise

The following tables have also been modiﬁed from the original QJM as outlined in Numbers, Predictions, and War:

Table “1”: OLIs
Table “16”: Opposition Factor
Table “17”: Strength/Size Attrition Factors
Table “20”: Maximum Depth Factor

The following tables have remained the same:

Table 4/“5”: Effects of Air Superiority
Table 7/“12”: Morale Factors
Table 8/“19”: Mission Accomplishment
Table “15”: River or Stream Factor

The following new tables have been added:

Table “7”: Qualitative Signiﬁcance of Quantity
Table “8”: Weapons Sophistication
Table “10”: Fatigue Factors
Table “18”: Velocity Factor
Table “20”: Maximum Depth Factor

The following tables have been deleted and their effects subsumed into another table:

unnumbered: Mission Factor
unnumbered: Mineﬁeld Factors

As far as I can tell, Table “20”: Maximum Depth Factor has a very limited impact on the model outcomes. Table “1”: OLIs has no impact on model outcomes.

I have developed a bad habit: whenever I want to understand or check something about the TNDM, I grab my copy of Numbers, Predictions, and War for reference. As these attrition calculations show, the TNDM has developed enough from its original form that the book is no longer a good description of it. The TNDM has added a level of sophistication that was not in the QJM.

The TNDM does not have any procedure for calculating combat from before 1900. In fact, the TNDM is not intended to be used in its current form for any combat before WWII.

# Comparing Force Ratios to Casualty Exchange Ratios

“American Marines in Belleau Wood (1918)” by Georges Scott [Wikipedia]

by Christopher A. Lawrence

There are three versions of force ratio versus casualty exchange ratio rules, such as the three-to-one rule (3-to-1 rule), as it applies to casualties. The earliest version of the rule as it relates to casualties that we have been able to ﬁnd comes from the 1958 version of the U.S. Army Maneuver Control manual, which states: “When opposing forces are in contact, casualties are assessed in inverse ratio to combat power. For friendly forces advancing with a combat power superiority of 5 to 1, losses to friendly forces will be about 1/5 of those suffered by the opposing force.”[1]

The RAND version of the rule (1992) states that: “the famous ‘3:1 rule,’ according to which the attacker and defender suffer equal fractional loss rates at a 3:1 force ratio if the battle is in mixed terrain and the defender enjoys ‘prepared’ defenses…” [2]

Finally, there is a version of the rule, dating from the 1967 Maneuver Control manual, that applies only to armor:

As the RAND construct also applies to equipment losses, this formulation is directly comparable to it.

Therefore, we have three basic versions of the 3-to-1 rule as it applies to casualties and/or equipment losses. First, there is a rule that states that there is an even fractional loss ratio at 3-to-1 (the RAND version). Second, there is a rule that states that at 3-to-1, the attacker will suffer one-third the losses of the defender. And third, there is a rule that states that at 3-to-1, the attacker and defender will suffer the same losses. These versions are highly contradictory, with the attacker suffering three times the losses of the defender, the same losses as the defender, or one-third the losses of the defender, depending on which is used.

Therefore, what we will examine here is the relationship between force ratios and exchange ratios. In this case, we will first look at The Dupuy Institute’s Battles Database (BaDB), which covers 243 battles from 1600 to 1900. We will chart on the y-axis the force ratio, as measured by a count of the number of people on each side of the forces deployed for battle. The force ratio is the number of attackers divided by the number of defenders. On the x-axis is the exchange ratio, which is measured by a count of the number of people on each side who were killed, wounded, missing, or captured during that battle. It does not include disease and non-battle injuries. It is calculated by dividing the total attacker casualties by the total defender casualties. The results are provided below:

As can be seen, there are a few extreme outliers among these 243 data points. The most extreme, the Battle of Tippermuir (1 Sep 1644), in which a Royalist force under Montrose routed an attack by Scottish Covenanter militia, causing about 3,000 casualties to the Scots in exchange for a single (allegedly self-inflicted) casualty to the Royalists, was removed from the chart. This 3,000-to-1 loss ratio was deemed too great an outlier to be of value in the analysis.
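Both quantities being charted are simple quotients. As a minimal sketch (the casualty figures in the example are Tippermuir’s, as given above; the strength figures are invented purely for illustration):

```python
def force_and_exchange_ratios(attackers, defenders,
                              attacker_casualties, defender_casualties):
    """Force ratio = attacker strength / defender strength;
    exchange ratio = attacker casualties / defender casualties."""
    return (attackers / defenders,
            attacker_casualties / defender_casualties)

# Tippermuir's lopsided result: ~3,000 Covenanter (attacker) casualties
# against a single Royalist one gives an exchange ratio of 3,000-to-1.
# The strengths (5,000 vs 2,000) are hypothetical placeholders.
fr, er = force_and_exchange_ratios(5000, 2000, 3000, 1)
```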

As it is, the vast majority of cases are clumped down into the corner of the graph, with only a few scattered data points outside of that clumping. If one did try to establish some form of curvilinear relationship, one would end up drawing a hyperbola. It is worthwhile to look inside that clump of data to see what it shows. Therefore, we will look at the graph truncated so as to show only force ratios at or below 20-to-1 and exchange ratios at or below 20-to-1.

Again, the data remains clustered in one corner, with the outlying data points again pointing to a hyperbola as the only real fitting curvilinear relationship. Let’s look a little deeper into the data by truncating it at 6-to-1 for both force ratios and exchange ratios. As can be seen, if the RAND version of the 3-to-1 rule is correct, then the data should show at a 3-to-1 force ratio a 3-to-1 casualty exchange ratio. There is only one data point that comes close to this out of the 243 points we examined.

If the FM 105-5 version of the rule as it applies to armor is correct, then the data should show that at a 3-to-1 force ratio there is a 1-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 1-to-2 casualty exchange ratio, and at a 5-to-1 force ratio a 1-to-3 casualty exchange ratio. Of course, there is no armor in these pre-WWI engagements, but again, no such exchange pattern appears.

If the 1958 version of the FM 105-5 rule as it applies to casualties is correct, then the data should show that at a 3-to-1 force ratio there is a 0.33-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 0.25-to-1 casualty exchange ratio, and at a 5-to-1 force ratio a 0.20-to-1 casualty exchange ratio. As can be seen, there is not much indication of this pattern, or for that matter any of the three patterns.
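The three constructs make distinct numerical predictions, which is what the scattergrams are being checked against. A sketch of the predicted exchange ratios (attacker losses divided by defender losses) follows. Two hedges: the RAND version is stated only at 3-to-1, so generalizing it via equal fractional loss rates is our extrapolation, and the closed form for the 1967 armor version is simply fitted to the three points quoted above.

```python
def predicted_exchange_ratio(force_ratio, rule):
    """Exchange ratio (attacker losses / defender losses) under each version."""
    if rule == "rand":
        # Equal fractional loss rates: the attacker's larger force suffers
        # proportionally larger absolute losses.
        return force_ratio
    if rule == "fm105-5-armor-1967":
        # 3-to-1 -> 1-to-1, 4-to-1 -> 1-to-2, 5-to-1 -> 1-to-3 (armor only);
        # 1/(ratio - 2) reproduces exactly those three quoted points.
        return 1.0 / (force_ratio - 2.0)
    if rule == "fm105-5-1958":
        # Casualties "in inverse ratio to combat power": 5-to-1 -> 1/5.
        return 1.0 / force_ratio
    raise ValueError(f"unknown rule: {rule}")
```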

Still, such a construct may not be relevant to data before 1900. For example, Lanchester claimed in 1914, in Chapter V, “The Principle of Concentration,” of his book Aircraft in Warfare, that there is greater advantage to be gained in modern warfare from concentration of fire.[3] Therefore, we will tap our more modern Division-Level Engagement Database (DLEDB) of 675 engagements, of which 628 have force ratios and exchange ratios calculated for them. These 628 cases are then placed on a scattergram to see if we can detect any similar patterns.

Even though this data covers from 1904 to 1991, with the vast majority of the data coming from engagements after 1940, one again sees the same pattern as with the data from 1600-1900. If there is a curvilinear relationship, it is again a hyperbola. As before, it is useful to look into the mass of data clustered into the corner by truncating the force and exchange ratios at 20-to-1. This produces the following:

Again, one sees the data clustered in the corner, with any curvilinear relationship again being a hyperbola. A look at the data further truncated to a 10-to-1 force or exchange ratio does not yield anything more revealing.

And, if this data is truncated to show only 5-to-1 force ratio and exchange ratios, one again sees:

Again, this data appears to be mostly just noise, with no clear patterns here that support any of the three constructs. In the case of the RAND version of the 3-to-1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional exchange rate) that RAND postulates. In fact, it almost looks like the data conspires to make sure it leaves a noticeable “hole” at that point. The other postulated versions of the 3-to-1 rules are also given no support in these charts.

Also of note is that the relationship between force ratios and exchange ratios does not appear to change significantly for combat during 1600-1900 when compared to the data from combat during 1904-1991. This does not provide much support for the intellectual construct developed by Lanchester to argue for his N-square law.

While we can attempt to torture the data to find a better fit, or try to argue that the patterns are obscured by various factors that have not been considered, we do not believe that such a clear pattern and relationship exists. More advanced mathematical methods may yet show such a pattern, but to date such attempts have not ferreted out these alleged patterns. For example, we refer the reader to Janice Fain’s article on Lanchester equations, The Dupuy Institute’s Capture Rate Study, Phase I & II, or any number of other studies that have looked at Lanchester.[4]

The fundamental problem is that there does not appear to be a direct cause-and-effect relationship between force ratios and exchange ratios. It appears to be an indirect relationship, in the sense that force ratios are one of several independent variables that determine the outcome of an engagement, and the nature of that outcome helps determine the casualties. As such, there is a more complex set of interrelationships that has not yet been fully explored in any study that we know of, although it is briefly addressed in our Capture Rate Study, Phase I & II.

NOTES

[1] FM 105-5, Maneuver Control (1958), 80.

[2] Patrick Allen, “Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models,” (N-3423-NA, The RAND Corporation, Santa Monica, CA, 1992), 20.

[3] F. W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm (Lanchester Press Incorporated, Sunnyvale, Calif., 1995), 46-60. One notes that Lanchester provided no data to support these claims, but relied upon an intellectual argument based upon a gross misunderstanding of ancient warfare.

[4] In particular, see page 73 of Janice B. Fain, “The Lanchester Equations and Historical Warfare: An Analysis of Sixty World War II Land Engagements,” Combat Data Subscription Service (HERO, Arlington, Va., Spring 1975).

# Simpkin on the Long-Term Effects of Firepower Dominance

To follow on my earlier post introducing British military theorist Richard Simpkin’s foresight in detecting trends in 21st Century warfare, I offer this paragraph, which immediately followed the ones I quoted:

Brieﬂy and in the most general terms possible, I suggest that the long-term effect of dominant ﬁrepower will be threefold. It will disperse mass in the form of a “net” of small detachments with the dual role of calling down ﬁre and of local quasi-guerrilla action. Because of its low density, the elements of this net will be everywhere and will thus need only the mobility of the boot. It will transfer mass, structurally from the combat arms to the artillery, and in deployment from the direct ﬁre zone (as we now understand it) to the formation and protection of mobile ﬁre bases capable of movement at heavy-track tempo (Chapter 9). Thus the third effect will be to polarise mobility, for the manoeuvre force still required is likely to be based on the rotor. This line of thought is borne out by recent trends in Soviet thinking on the offensive. The concept of an operational manoeuvre group (OMG) which hives off raid forces against C3 and indirect ﬁre resources is giving way to more fluid and discontinuous manoeuvre by task forces (“air-ground assault groups” found by “shock divisions”) directed onto ﬁre bases—again of course with an operational helicopter force superimposed. [Simpkin, Race To The Swift, p. 169]

It seems to me that, writing in the mid-1980s, Simpkin predicted the emergence of modern anti-access/area denial (A2/AD) defensive systems with reasonable accuracy, as well as the evolving thinking on the part of the U.S. military as to how to operate against them.

Simpkin’s vision of task forces (more closely resembling Russian/Soviet OMGs than rotary wing “air-ground assault groups” operational forces, however) employing “fluid and discontinuous manoeuvre” at operational depths to attack long-range precision firebases appears similar to emerging Army thinking about future multidomain operations. (It’s likely that Douglas MacGregor’s Reconnaissance Strike Group concept more closely fits that bill.)

One thing he missed was his belief that rotary wing combat forces would supplant armored forces as the primary deep operations combat arm. However, it is possible that drone swarms could take the place in Simpkin’s operational construct that he allotted to heliborne forces. Drones have two primary advantages over manned helicopters: they are far cheaper and they are far less vulnerable to enemy fires. With their unique capacity to blend mass and fires, drones could conceivably form the deep strike operational hammer that Simpkin saw rotary wing forces providing.

Just as interesting was Simpkin’s anticipation of the growing importance of information and electronic warfare in these environments. More on that later.

# Richard Simpkin on 21st Century Trends in Mass and Firepower

Anvil of “troops” vs. anvil of fire. (Richard Simpkin, Race To The Swift: Thoughts on Twenty-First Century Warfare, Brassey’s: London, 1985, p. 51)

For my money, one of the most underrated analysts and theorists of modern warfare was the late Brigadier Richard Simpkin. A retired British Army officer and World War II veteran, Simpkin helped design the Chieftain tank in the 1960s and 1970s. He is best known for his series of books analyzing Soviet and Western military theory and doctrine. His magnum opus was Race To The Swift: Thoughts on Twenty-First Century Warfare, published in 1985. A brilliant blend of military history, insightful analysis of tactics and technology as well as operations and strategy, and Simpkin’s idiosyncratic wit, the observations in Race To The Swift are becoming more prescient by the year.

Some of Simpkin’s analysis has not aged well, such as the focus on the NATO/Soviet confrontation in Central Europe, and a bold prediction that rotary wing combat forces would eventually supplant tanks as the primary combat arm. However, it would be difficult to find a better historical review of the role of armored forces in modern warfare and how trends in technology, tactics, and doctrine are interacting with strategy, policy, and politics to change the character of warfare in the 21st Century.

To follow on my previous post on the interchangeability of fire (which I gleaned from Simpkin, of course), I offer this nugget on how increasing weapons lethality would affect 21st Century warfare, written from the perspective of the mid 1980s:

While accidents of ground will always provide some kind of cover, the effect of modern ﬁrepower on land force tactics is equally revolutionary. Just as we saw in Part 2 how the rotary wing may well turn force structures inside out, ﬁrepower is already turning tactical concepts inside out, by replacing the anvil of troops with an anvil of ﬁre (Fig. 5, page 51)*. The use of combat troops at high density to hold ground or to seize it is already likely to prove highly costly, and may soon become wholly unproﬁtable. The interesting question is what effect the dominance of ﬁrepower will have at operational level.

One school of thought, to which many defence academics on both sides of the Atlantic subscribe, is that it will reduce mobility and bring about a return to positional warfare. The opposite view is that it will put a premium on elusiveness, increasing mobility and reducing mass. On analysis, both these opinions appear rather simplistic, mainly because they ignore the interchangeability of troops and ﬁre…—in other words the equivalence or complementarity of the movement of troops and the massing of ﬁre. They also underrate the part played by manned and unmanned surveillance, and by communication. Another factor, little understood by soldiers and widely ignored, is the weight of ﬁre a modern fast jet in its strike conﬁguration, ﬂying a lo-lo-lo proﬁle, can put down very rapidly wherever required. With modern artillery and air support, a pair of eyes backed up by an unjammable radio and perhaps a thermal imager becomes the equivalent of at least a (company) combat team, perhaps a battle group. [Simpkin, Race To The Swift, pp. 168-169]

Sound familiar? I will return to Simpkin’s insights in future posts, but I suggest you all snatch up a copy of Race To The Swift for yourselves.

* See above.

# Artillery Effectiveness vs. Armor (Part 5-Summary)

U.S. Army 155mm field howitzer in Normandy. [padresteve.com]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

Table IX shows the distribution of cause of loss by type of armored vehicle. From the distribution it might be inferred that better protected armored vehicles may be less vulnerable to artillery attack. Nevertheless, the heavily armored vehicles still suffered a minimum loss of 5.6 percent due to artillery. Unfortunately, the sample size for heavy tanks was very small, 18 of 980 cases or only 1.8 percent of the total.

The data are limited at this time to the seven cases.[6] Further research is necessary to expand the data sample so as to permit proper statistical analysis of the effectiveness of artillery versus tanks.

NOTES

[18] Heavy armor includes the KV-1, KV-2, Tiger, and Tiger II.

[19] Medium armor includes the T-34, Grant, Panther, and Panzer IV.

[20] Light armor includes the T-60, T-70, Stuart, armored cars, and armored personnel carriers.

# Artillery Effectiveness vs. Armor (Part 4-Ardennes)

Knocked-out Panthers in Krinkelt, Belgium, Battle of the Bulge, 17 December 1944. [worldwarphotos.info]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

NOTES

[14] From ORS Joint Report No. 1. An estimated total of 300 German armored vehicles were found following the battle.

[15] Data from 38th Infantry After Action Report (including “Sketch showing enemy vehicles destroyed by 38th Inf Regt. and attached units 17-20 Dec. 1944″), from 12th SS PzD strength report dated 8 December 1944, and from strengths indicated on the OKW brieﬁng maps for 17 December (1st [circa 0600 hours], 2d [circa 1200 hours], and 3d [circa 1800 hours] situation), 18 December (1st and 2d situation), 19 December (2d situation), 20 December (3d situation), and 21 December (2d and 3d situation).

[16] Losses include conﬁrmed and probable losses.

[17] Data from Combat Interview “26th Infantry Regiment at Dom Bütgenbach” and from 12th SS PzD, ibid.

# Artillery Effectiveness vs. Armor (Part 3-Normandy)

The U.S. Army 333rd Field Artillery Battalion (Colored) in Normandy, July 1944 (US Army Photo/Tom Gregg)

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

NOTES

[10] From ORS Report No. 17.

[11] Five of the 13 counted as unknown were penetrated by both armor piercing shot and by infantry hollow charge weapons. There was no evidence to indicate which was the original cause of the loss.

[12] From ORS Report No. 17

[13] From ORS Report No. 15. The “Pocket” was the area west of the line Falaise-Argentan and east of the line Vassy-Gets-Domfront in Normandy that was the site in August 1944 of the beginning of the German retreat from France. The German forces were being enveloped from the north and south by Allied ground forces and were under constant, heavy air attack.

# Artillery Effectiveness vs. Armor (Part 2-Kursk)

German Army 150mm heavy field howitzer 18 L/29.5 battery. [Panzer DB/Pinterest]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

Curiously, at Kursk, in the case where the highest percent loss was recorded, the German forces opposing the Soviet 1st Tank Army—mainly the XLVIII Panzer Corps of the Fourth Panzer Army—were supported by proportionately fewer artillery pieces (approximately 56 guns and rocket launchers per division) than the US 1st Infantry Division at Dom Bütgenbach (the equivalent of approximately 106 guns per division).[4] Nor does it appear that the German rate of fire at Kursk was significantly higher than that of the American artillery at Dom Bütgenbach. On 20 July at Kursk, the 150mm howitzers of the 11th Panzer Division achieved a peak rate of fire of 87.21 rounds per gun. On 21 December at Dom Bütgenbach, the 155mm howitzers of the 955th Field Artillery Battalion achieved a peak rate of fire of 171.17 rounds per gun.[5]

NOTES

[4] The US artillery at Dom Bütgenbach peaked on 21 December 1944 when a total of 210 divisional and corps pieces ﬁred over 10,000 rounds in support of the 1st Division’s 26th Infantry.

[5] Data collected on German rates of fire are fragmentary, but appear to be similar to that of the American Army in World War ll. An article on artillery rates of ﬁre that explores the data in more detail will be forthcoming in a future issue of this newsletter. [NOTE: This article was not completed or published.]

Notes to Table I.

[8] The data were found in reports of the 1st Tank Army (Fond 299, Opis‘ 3070, Delo 226). Obvious math errors in the original document have been corrected (the total lost column did not always agree with the totals by cause). The total participated column evidently reflected the starting strength of the unit, plus replacement vehicles. “Burned” in Soviet wartime documents usually indicated a total loss; however, it appears that in this case “burned” denoted vehicles totally lost due to direct fire antitank weapons. “Breakdown” apparently included both mechanical breakdown and repairable combat damage.

[9] Note that the brigade report (Fond 3304, Opis‘ 1, Delo 24) contradicts the army report. The brigade reported that a total of 28 T-34s were lost (9 to aircraft and 19 to “artillery”) and one T-60 was destroyed by a mine. However, this report was made on 11 July, during the battle, and may not have been as precise as the later report recorded by 1st Tank Army. Furthermore, it is not clear in the brigade report that “artillery” referred only to indirect fire HE and not to both direct and indirect fire guns.

# Artillery Effectiveness vs. Armor (Part 1)

A U.S. M1 155mm towed artillery piece being set up for firing during the Battle of the Bulge, December 1944.

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

The effectiveness of artillery against exposed personnel and other “soft” targets has long been accepted. Fragments and blast are deadly to those unfortunate enough to not be under cover. What has also long been accepted is the relative—if not total—immunity of armored vehicles when exposed to shell ﬁre. In a recent memorandum, the United States Army Armor School disputed the results of tests of artillery versus tanks by stating, “…the Armor School nonconcurred with the Artillery School regarding the suppressive effects of artillery…the M-1 main battle tank cannot be destroyed by artillery…”

This statement may in fact be true,[1] if the advancement of armored vehicle design has greatly exceeded the advancement of artillery weapon design in the last fifty years. [Original emphasis] However, if the statement is not true, then recent research by TDI[2] into the effectiveness of artillery shell ﬁre versus tanks in World War II may be illuminating.

The TDI search found that an average of 12.8 percent of tank and other armored vehicle losses[3] were due to artillery fire in seven cases in World War II where the cause of loss could be reliably identified. The highest percent loss due to artillery was found to be 14.8 percent, in the case of the Soviet 1st Tank Army at Kursk (Table II). The lowest percent loss due to artillery was found to be 5.9 percent, in the case of Dom Bütgenbach (Table VIII).

The seven cases are split almost evenly between those that show armor losses to a defender and those that show losses to an attacker. The first four cases (Kursk, Normandy I, Normandy II, and the “Pocket”) are engagements in which the side for which armor losses were recorded was on the defensive. The last three cases (Ardennes, Krinkelt, and Dom Bütgenbach) are engagements in which the side for which armor losses were recorded was on the offensive.

Four of the seven cases (Normandy I, Normandy II, the “Pocket,” and Ardennes) represent data collected by operations research personnel utilizing rigid criteria for the identification of the cause of loss. Specific causes of loss were only given when the primary destructive agent could be clearly identified. The other three cases (Kursk, Krinkelt, and Dom Bütgenbach) are based upon combat reports that—of necessity—represent less precise data collection efforts.

However, the similarity in results remains striking. The largest identifiable cause of tank loss found in the data was, predictably, high-velocity armor piercing (AP) antitank rounds. AP rounds were found to be the cause of 68.7 percent of all losses. Artillery was second, responsible for 12.8 percent of all losses. Air attack was third, accounting for 7.4 percent of the total lost. Unknown causes, which included losses due to hits from multiple weapon types as well as unidentified weapons, inflicted 6.3 percent of the losses and ranked fourth. Other causes, which included infantry antitank weapons and mines, were responsible for 4.8 percent of the losses and ranked fifth.
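As a quick arithmetic check, the five cause-of-loss shares quoted above should account for essentially all identified losses, since mechanical breakdown and abandonment were excluded from the percentages (see note [3] below):

```python
# Shares of armored vehicle losses by cause, as quoted in the text.
loss_shares = {
    "AP antitank rounds": 68.7,
    "artillery": 12.8,
    "air attack": 7.4,
    "unknown (multiple/unidentified weapons)": 6.3,
    "other (infantry AT weapons, mines)": 4.8,
}

total = sum(loss_shares.values())
assert abs(total - 100.0) < 0.1  # the quoted shares do sum to 100.0
```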

NOTES

[1] The statement may be true, although it has an “unsinkable Titanic,” ring to it. It is much more likely that this statement is a hypothesis, rather than a truism.

[2] As part of this article, a survey of the Research Analysis Corporation’s publications list was made in an attempt to locate data from previous operations research on the subject. A single reference to the study of tank losses was found: Alvin D. Coox and L. Van Loan Naisawald, Survey of Allied Tank Casualties in World War II, CONFIDENTIAL ORO Report T-117, 1 March 1951.

[3] The percentage loss by cause excludes vehicles lost due to mechanical breakdown or abandonment. If these were included, they would account for 29.2 percent of the total lost. However, 271 of the 404 (67.1%) abandoned were lost in just two of the cases. These two cases (Normandy II and the Falaise Pocket) cover the period in the Normandy Campaign when the Allies broke through the German defenses and began the pursuit across France.

# Artillery Survivability In Modern Combat

The U.S. Army’s M109A6 Paladin 155 mm Self-Propelled Howitzer. [U.S. Army]

[This piece was originally published on 17 July 2017.]

Much attention is being given in the development of the U.S. joint concept of Multi-Domain Battle (MDB) to the implications of recent technological advances in long-range precision fires. It seems most of the focus is being placed on exploring the potential for cross-domain fires as a way of coping with the challenges of anti-access/area denial strategies employing long-range precision fires. Less attention appears to be given to assessing the actual combat effects of such weapons. The prevailing assumption is that because of the increasing lethality of modern weapons, battle will be bloodier than it has been in recent experience.

I have taken a look in previous posts at how the historical relationship identified by Trevor Dupuy between weapon lethality, battlefield dispersion, and casualty rates argues against this assumption with regard to personnel attrition and tank loss rates. What about artillery loss rates? Will long-range precision fires make ground-based long-range precision fire platforms themselves more vulnerable? Historical research suggests that trend was already underway before the advent of the new technology.

In 1976, Trevor Dupuy and the Historical Evaluation and Research Organization (HERO; one of TDI’s corporate ancestors) conducted a study sponsored by Sandia National Laboratory titled “Artillery Survivability in Modern War.” (PDF) The study focused on historical artillery loss rates and the causes of those losses. It drew upon quantitative data from the 1973 Arab-Israeli War, the Korean War, and the Eastern Front during World War II.

Conclusions

1. In the early wars of the 20th Century, towed artillery pieces were relatively invulnerable, and they were rarely severely damaged or destroyed except by very infrequent direct hits.

2. This relative invulnerability of towed artillery resulted in general lack of attention to the problems of artillery survivability through World War II.

3. The lack of effective hostile counter-artillery resources in the Korean and Vietnam wars contributed to continued lack of attention to the problem of artillery survivability, although increasingly armies (particularly the US Army) were relying on self-propelled artillery pieces.

4. Estimated Israeli loss statistics of the October 1973 War suggest that because of size and characteristics, self-propelled artillery is more vulnerable to modern counter-artillery means than was towed artillery in that and previous wars; this greater historical physical vulnerability of self-propelled weapons is consistent with recent empirical testing by the US Army.

5. The increasing physical vulnerability of modern self-propelled artillery weapons is compounded by other modern combat developments, including:

a. Improved artillery counter-battery techniques and resources;
b. Improved accuracy of air-delivered munitions;
c. Increased lethality of modern artillery ammunition; and
d. Increased range of artillery and surface-to-surface missiles suitable for use against artillery.

6. Despite this greater vulnerability of self-propelled weapons, Israeli experience in the October war demonstrated that self-propelled artillery not only provides significant protection to cannoneers but also that its inherent mobility permits continued effective operation under circumstances in which towed artillery crews would be forced to seek cover, and thus be unable to fire their weapons.

7. Paucity of available processed, compiled data on artillery survivability and vulnerability limits analysis and the formulation of reliable artillery loss experience tables or formulae.

8. Tentative analysis of the limited data available for this study indicates the following:

a. In “normal” deployment, percent weapon losses by standard weight classification are in the following proportions:

b. Towed artillery losses to hostile artillery (counterbattery) appear in general to vary directly with battle intensity (as measured by percent personnel casualties per day), at a rate somewhat less than half of the percent personnel losses for units of army strength or greater; this is a straight-line relationship, or close to it; the stronger or more effective the hostile artillery is, the steeper the slope of the curve;

c. Towed artillery losses to all hostile anti-artillery means appear in general to vary directly with battle intensity at a rate about two-thirds of the percent personnel losses for units of army strength or greater; the curve rises slightly more rapidly in high intensity combat than in normal or low-intensity combat; the stronger or more effective the hostile anti-artillery means (primarily air and counter-battery), the steeper the slope of the curve;

d. Self-propelled artillery losses appear to be generally consistent with towed losses, but at rates at least twice as great in comparison to battle intensity.
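The tentative relationships in 8.b through 8.d can be sketched as simple linear functions of battle intensity. The sketch below is illustrative only: the study gives qualitative descriptions (“somewhat less than half,” “about two-thirds,” “at least twice”), so the coefficients here are placeholder assumptions, not fitted values from the study’s data.

```python
def towed_losses_counterbattery(personnel_pct_per_day, slope=0.45):
    """Daily towed artillery % losses to hostile counterbattery fire.

    The study describes a straight-line relationship at 'somewhat less
    than half' the percent personnel losses; slope=0.45 is an
    illustrative placeholder, not a fitted coefficient.
    """
    return slope * personnel_pct_per_day

def towed_losses_all_means(personnel_pct_per_day, slope=2 / 3):
    """Towed losses to all anti-artillery means: ~two-thirds of the
    percent personnel losses, per conclusion 8.c."""
    return slope * personnel_pct_per_day

def sp_losses_all_means(personnel_pct_per_day, multiplier=2.0):
    """Self-propelled losses: 'at least twice' the towed rate relative
    to battle intensity, per conclusion 8.d."""
    return multiplier * towed_losses_all_means(personnel_pct_per_day)

# Example: a high-intensity day with 2% personnel casualties per day.
intensity = 2.0
print(towed_losses_counterbattery(intensity))  # somewhat under 1% per day
print(towed_losses_all_means(intensity))       # about 1.3% per day
print(sp_losses_all_means(intensity))          # roughly double the towed rate
```

Note that the model also implies the study’s caveat about slope: a stronger hostile counter-artillery capability would steepen these lines, which is why the placeholder coefficients cannot be treated as constants across conflicts.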

9. There are available in existing records of US and German forces in World War II, and US forces in the Korean and Vietnam Wars, unit records and reports that will permit the formulation of reliable artillery loss experience tables and formulae for those conflicts; these, with currently available (and probably improved) data from the Arab-Israeli wars, will permit the formulation of reliable artillery loss experience tables and formulae for simulations of modern combat under current and foreseeable future conditions.

The study caveated these conclusions with the following observations:

Most of the artillery weapons in World War II were towed weapons. By the time the United States had committed small but significant numbers of self-propelled artillery pieces in Europe, German air and artillery counter-battery retaliatory capabilities had been significantly reduced. In the Korean and Vietnam wars, although most American artillery was self-propelled, the enemy had little counter-artillery capability either in the air or in artillery weapons and counter-battery techniques.

It is evident from vulnerability testing of current Army self-propelled weapons, that these weapons–while offering much more protection to cannoneers and providing tremendous advantages in mobility–are much more vulnerable to hostile action than are towed weapons, and that they are much more subject to mechanical breakdowns involving either the weapons mountings or the propulsion elements. Thus there cannot be a direct relationship between aggregated World War II data, or even aggregated Korean war or October War data, and current or future artillery configurations. On the other hand, the body of data from the October war where artillery was self-propelled is too small and too specialized by environmental and operational circumstances to serve alone as a paradigm of artillery vulnerability.

Despite the intriguing implications of this research, HERO’s proposal for follow-on work was not funded. HERO used only easily accessible primary and secondary source data for the study. It noted that much more primary source data was likely available, but that compiling it would require a significant research effort. (Research is always the expensive tent-pole in quantitative historical analysis. This seems to be why so little of it ever gets funded.) At the time of the study in 1976, no U.S. Army organization could identify any existing quantitative historical data or analysis on artillery losses, classified or otherwise. A cursory search of the Internet reveals no other such research, either. Like personnel attrition and tank loss rates, artillery loss rates would seem to be another worthwhile subject for quantitative analysis as part of the ongoing effort to develop the MDB concept.