Trevor Dupuy’s Definitions of Lethality

Two U.S. Marines with an M1919A4 machine gun on Roi-Namur Island in the Marshall Islands during World War II. [Wikimedia]

It appears that discussion of the meaning of lethality, as related to the use of the term in the 2018 U.S. National Defense Strategy document, has sparked up again. It was kicked off by an interesting piece by Olivia Gerard in The Strategy Bridge last autumn, “Lethality: An Inquiry.”

Gerard credited Trevor Dupuy and his colleagues at the Historical Evaluation and Research Organization (HERO) with codifying “the military appropriation of the concept” of lethality, which was defined as: “the inherent capability of a given weapon to kill personnel or make materiel ineffective in a given period, where capability includes the factors of weapon range, rate of fire, accuracy, radius of effects, and battlefield mobility.”

It is gratifying that Gerard attributed this to Dupuy and HERO, but some clarification is needed. The definition she quoted was, in fact, one provided to HERO for the purposes of a study sponsored by the Advanced Tactics Project (AVTAC) of the U.S. Army Combat Developments Command. The 1964 study report, Historical Trends Related to Weapon Lethality, provided the starting point for Dupuy’s subsequent theorizing about combat.

In his own works, Dupuy used a simpler definition of lethality.

He also used the terms lethality and firepower interchangeably in his writings. The wording of the original 1964 AVTAC definition tracks closely with the lethality scoring methodology Dupuy and his HERO colleagues developed for the study, known as the Theoretical Lethality Index/Operational Lethality Index (TLI/OLI). The original purpose of this construct was to permit a measurement of lethality by which weapons could be compared with one another (TLI), and compared across history (OLI). It worked well enough that he incorporated it into his combat models, the Quantified Judgment Model (QJM) and the Tactical Numerical Deterministic Model (TNDM).
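The multiplicative character of such a scoring method can be sketched in a few lines of Python. Everything here is an assumption for illustration: the factor names follow the AVTAC definition, but Dupuy's published TLI formula uses its own specific terms and calibrated values, which differ from this toy version.

```python
# Toy sketch of a TLI-style lethality score: a multiplicative combination
# of the factors named in the 1964 AVTAC definition. The factor names and
# all values are invented for illustration, not Dupuy's published formula.

def lethality_index(rate_of_fire, targets_per_strike, range_factor,
                    accuracy, reliability):
    """Notional multiplicative lethality score for one weapon."""
    return (rate_of_fire * targets_per_strike * range_factor
            * accuracy * reliability)

# Two hypothetical weapons, all inputs notional:
rifle = lethality_index(rate_of_fire=5, targets_per_strike=1,
                        range_factor=1.2, accuracy=0.6, reliability=0.95)
machine_gun = lethality_index(rate_of_fire=100, targets_per_strike=1,
                              range_factor=1.5, accuracy=0.3, reliability=0.9)
print(f"machine gun scores {machine_gun / rifle:.1f}x the rifle")
```

The point of such a construct is the ratio between weapons, not the absolute score, which is what made comparisons across weapons and across history possible.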

Dupuy’s Verities: Combat Power =/= Firepower

A U.S. 11th Marines 75mm pack howitzer and crew on Guadalcanal, September or October 1942. The lean condition of the crewmembers indicates that they had not been getting enough nutrition during this period. [Wikipedia]

The ninth of Trevor Dupuy’s Timeless Verities of Combat is:

Superior Combat Power Always Wins.

From Understanding War (1987):

Military history demonstrates that whenever an outnumbered force was successful, its combat power was greater than that of the loser. All other things being equal, God has always been on the side of the heaviest battalions and always will be.

In recent years two or three surveys of modern historical experience have led to the finding that relative strength is not a conclusive factor in battle outcome. As we have seen, a superficial analysis of historical combat could support this conclusion. There are a number of examples of battles won by the side with inferior numbers. In many battles, outnumbered attackers were successful.

These examples are not meaningful, however, until the comparison includes the circumstances of the battles and opposing forces. If one takes into consideration surprise (when present), relative combat effectiveness of the opponents, terrain features, and the advantage of defensive posture, the result may be different. When all of the circumstances are quantified and applied to the numbers of troops and weapons, the side with the greater combat power on the battlefield is always seen to prevail.

The concept of combat power is foundational to Dupuy’s theory of combat. He did not originate it; the notion that battle encompasses something more than just “physics-based” aspects likely originated with British theorist J.F.C. Fuller during World War I and migrated into U.S. Army thinking via post-war doctrinal revision. Dupuy refined and sharpened the Army’s vague conceptualization of it in the first iterations of his Quantified Judgment Model (QJM), developed in the 1970s.

Dupuy initially defined his idea of combat power in formal terms, as an equation in the QJM:

P = S x V x CEV

where:

P = Combat Power
S = Force Strength
V = Environmental and Operational Variable Factors
CEV = Combat Effectiveness Value

Essentially, combat power is the product of:

  • force strength as measured in his models through the Theoretical/Operational Lethality Index (TLI/OLI), a firepower scoring method for comparing the lethality of weapons relative to each other;
  • the intangible environmental and operational variables that affect each circumstance of combat; and
  • the intangible human behavioral (or moral) factors that determine the fighting quality of a combat force.
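A minimal sketch of how the equation behaves, with invented numbers; in the actual QJM, force strength is built up from TLI/OLI scores and the variable factors come from tabulated values, so nothing below should be read as calibrated.

```python
# Sketch of Dupuy's combat power relationship, P = S x V x CEV.
# All inputs are notional; in the QJM, S derives from OLI scores and
# V from tabulated environmental/operational factors.

def combat_power(strength, variables, cev):
    """P = S x V x CEV."""
    return strength * variables * cev

# An outnumbered defender whose posture/terrain (V) and fighting
# quality (CEV) more than offset the attacker's numerical edge:
attacker = combat_power(strength=100_000, variables=1.0, cev=1.0)
defender = combat_power(strength=60_000, variables=1.5, cev=1.3)
print("defender prevails" if defender > attacker else "attacker prevails")
```

The example reproduces the logic of the ninth verity: a side with inferior numbers can still field superior combat power once the intangible multipliers are accounted for.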

Dupuy’s theory of combat power and its functional realization in his models have two virtues. First, unlike most existing combat models, it incorporates the effects of those intangible factors unique to each engagement or battle that influence combat outcomes, but are not readily measured in physical terms. As Dupuy argued, combat consists of more than duels between weapons systems. A list of those factors can be found below.

Second, the analytical research on real-world combat data done by him and his colleagues allowed him to begin establishing the specific nature of combat processes and their interactions, which are only abstracted in other combat theories and models. Those factors and processes for which he had developed a quantification hypothesis are denoted by an asterisk below.

Dupuy’s Verities: The Inefficiency of Combat

The “Mud March” of the Union Army of the Potomac, January 1863.

The twelfth of Trevor Dupuy’s Timeless Verities of Combat is:

Combat activities are always slower, less productive, and less efficient than anticipated.

From Understanding War (1987):

This is the phenomenon that Clausewitz called “friction in war.” Friction is largely due to the disruptive, suppressive, and dispersal effects of firepower upon an aggregation of people. The pace of actual combat operations will be much slower than the progress of field tests and training exercises, even highly realistic ones. Tests and exercises are not truly realistic portrayals of combat, because they lack the element of fear in a lethal environment, present only in real combat. Allowances must be made in planning and execution for the effects of friction, including mistakes, breakdowns, and confusion.

While Clausewitz asserted that the effects of friction on the battlefield could not be measured because they were largely due to chance, Dupuy believed that its influence could, in fact, be gauged and quantified. He identified at least two distinct combat phenomena he thought reflected measurable effects of friction: the differences in casualty rates between large and small forces, and the diminishing returns from adding combat power beyond a certain point in battle. He also believed much more research would be necessary to fully understand and account for this.

Dupuy was skeptical of the accuracy of combat models that failed to account for this interaction between operational and human factors on the battlefield. He was particularly doubtful about approaches that started by calculating the outcomes of combat between individual small-sized units or weapons platforms based on the Lanchester equations or “physics-based” estimates, then used these as inputs for brigade- and division-level battles, the results of which in turn were used as the basis for determining the consequences of theater-level campaigns. He thought that such models, known as “bottom-up,” hierarchical, or aggregated concepts (and the prevailing approach to campaign combat modeling in the U.S.), would be incapable of accurately capturing and simulating the effects of friction.
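The kind of "physics-based" starting point Dupuy doubted can be illustrated with a minimal Lanchester square-law integration: each side's loss rate depends only on the opposing side's surviving strength and a fixed effectiveness coefficient, with no term anywhere for fear, suppression, or friction. All strengths and coefficients below are notional.

```python
# Minimal Euler integration of Lanchester's square law:
#   dA/dt = -beta * B,   dB/dt = -alpha * A
# Note what is absent: nothing here models fear, suppression,
# dispersion, or any other source of friction.

def lanchester_square(a0, b0, alpha, beta, dt=0.01, steps=10_000):
    """Integrate attrition until one side is annihilated or steps run out."""
    A, B = a0, b0
    for _ in range(steps):
        # simultaneous update: RHS uses the previous step's strengths
        A, B = max(A - beta * B * dt, 0.0), max(B - alpha * A * dt, 0.0)
        if A == 0.0 or B == 0.0:
            break
    return A, B

# Equal effectiveness, unequal numbers: the larger force not only wins,
# it finishes with a disproportionate share of survivors.
A, B = lanchester_square(a0=1000, b0=800, alpha=0.05, beta=0.05)
print(round(A), round(B))
```

The square law's clean, deterministic annihilation (here roughly 600 survivors from 1,000 against 800) is exactly the sort of outcome that, in Dupuy's view, real battlefields with fearful humans rarely produce.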

Dupuy’s Verities: The Effects of Firepower in Combat

A German artillery barrage falling on Allied trenches, probably during the Second Battle of Ypres in 1915, during the First World War. [Wikimedia]

The eleventh of Trevor Dupuy’s Timeless Verities of Combat is:

Firepower kills, disrupts, suppresses, and causes dispersion.

From Understanding War (1987):

It is doubtful if any of the people who are today writing on the effect of technology on warfare would consciously disagree with this statement. Yet, many of them tend to ignore the impact of firepower on dispersion, and as a consequence they have come to believe that the more lethal the firepower, the more deaths, disruption, and suppression it will cause. In fact, as weapons have become more lethal intrinsically, their casualty-causing capability has either declined or remained about the same because of greater dispersion of targets. Personnel and tank loss rates of the 1973 Arab-Israeli War, for example, were quite similar to those of intensive battles of World War II and the casualty rates in both of these wars were less than in World War I. (p. 7)

Research and analysis of real-world historical combat data by Dupuy and TDI has identified at least four distinct combat effects of firepower: infliction of casualties (lethality), disruption, suppression, and dispersion. All of them were found to be heavily influenced—if not determined—by moral (human) factors.

Again, I have written extensively on this blog about Dupuy’s theory about the historical relationship between weapon lethality, dispersion on the battlefield, and historical decline in average daily combat casualty rates. TDI President Chris Lawrence has done further work on the subject as well.

TDI Friday Read: Lethality, Dispersion, And Mass On Future Battlefields

Human Factors In Warfare: Dispersion

Human Factors In Warfare: Suppression

There appears to be a fundamental difference in interpretation of the combat effects of firepower between Dupuy’s emphasis on the primacy of human factors and Defense Department models that account only for the “physics-based” casualty-inflicting capabilities of weapons systems. While U.S. Army combat doctrine accounts for the interaction of firepower and human behavior on the battlefield, it has no clear method for assessing or even fully identifying the effects of such factors on combat outcomes.

Has The Army Given Up On Counterinsurgency Research, Again?


[In light of the U.S. Army’s recent publication of a history of its involvement in Iraq from 2003 to 2011, it may be relevant to re-post this piece from 29 June 2016.]

As Chris Lawrence mentioned yesterday, retired Brigadier General John Hanley’s review of America’s Modern Wars in the current edition of Military Review concluded by pointing out the importance of a solid empirical basis for staff planning in support of reliable military decision-making. This notion seems so obvious as to be a truism, but in reality, the U.S. Army has demonstrated no serious interest in remedying the weaknesses or gaps in the base of knowledge underpinning its basic concepts and doctrine.

In 2012, Major James A. Zanella published a monograph for the School of Advanced Military Studies of the U.S. Army Command and General Staff College (graduates of which are known informally as “Jedi Knights”), which examined problems the Army has had with estimating force requirements, particularly in recent stability and counterinsurgency efforts.

Historically, the United States military has had difficulty articulating and justifying force requirements to civilian decision makers. Since at least 1975, governmental officials and civilian analysts have consistently criticized the military for inadequate planning and execution. Most recently, the wars in Afghanistan and Iraq reinvigorated the debate over the proper identification of force requirements…Because Army planners have failed numerous times to provide force estimates acceptable to the President, the question arises, why are the planning methods inadequate and why have they not been improved?[1]

Zanella surveyed the various Army planning tools and methodologies available for determining force requirements, but found them all either inappropriate, only marginally applicable, or unsupported by real-world data. He concluded:

Considering the limitations of Army force planning methods, it is fair to conclude that Army force estimates have failed to persuade civilian decision-makers because the advice is not supported by a consistent valid method for estimating the force requirements… What is clear is that the current methods have utility when dealing with military situations that mirror the conditions represented by each model. In the contemporary military operating environment, the doctrinal models no longer fit.[2]

Zanella did identify the existence of recent, relevant empirical studies on manpower and counterinsurgency. He noted that “the existing doctrine on force requirements does not benefit from recent research” but suggested optimistically that it could provide “the Army with new tools to reinvigorate the discussion of troops-to-task calculations.”[3] Even before Zanella published his monograph, however, the Defense Department began removing any detailed reference or discussion about force requirements in counterinsurgency from Army and Joint doctrinal publications.

As Zanella discussed, there is a body of recent empirical research on manpower and counterinsurgency that contains a variety of valid and useful insights, but as I recently discussed, it does not yet offer definitive conclusions. Much more research and analysis is needed before the conclusions can be counted on as a valid and justifiably reliable basis for life-and-death decision-making. Yet, the last of these government-sponsored studies was completed in 2010. Neither the Army nor any other organization in the U.S. government has funded any follow-on work on this subject, and none appears forthcoming. This boom-or-bust pattern is nothing new, but the failure to do anything about it is becoming less and less understandable.


[1] Major James A. Zanella, “Combat Power Analysis is Combat Power Density” (Ft. Leavenworth, KS: School of Advanced Military Studies, U.S. Army Command and General Staff College, 2012), pp. 1-2.

[2] Ibid., p. 50.

[3] Ibid., p. 47.

What Multi-Domain Operations Wargames Are You Playing? [Updated]

Source: David A. Shlapak and Michael Johnson. Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics. Santa Monica, CA: RAND Corporation, 2016.

[UPDATE] We had several readers recommend games they have used or would be suitable for simulating Multi-Domain Battle and Operations (MDB/MDO) concepts. These include several classic campaign-level board wargames:

The Next War (SPI, 1976)

NATO: The Next War in Europe (Victory Games, 1983)

For tactical level combat, there is Steel Panthers: Main Battle Tank (SSI/Shrapnel Games, 1996- )

There were also a couple of naval/air oriented games:

Asian Fleet (Kokusai-Tsushin Co., Ltd. (国際通信社) 2007, 2010)

Command: Modern Air Naval Operations (Matrix Games, 2014)

Are there any others folks are using out there?

A Mystics & Statistics reader wants to know what wargames are being used to simulate and explore Multi-Domain Battle and Operations (MDB/MDO) concepts.

There is a lot of MDB/MDO wargaming going on at all levels in the U.S. Department of Defense. Much of this appears to use existing models, simulations, and wargames, such as the U.S. Army Center for Army Analysis’s unclassified Wargaming Analysis Model (C-WAM).

Chris Lawrence recently looked at C-WAM and found that it uses a lot of traditional board wargaming elements, including methodologies for determining combat results, casualties, and breakpoints that have been found unable to replicate real-world outcomes (aka “The Base of Sand” problem).

C-WAM 4 (Breakpoints)

There is also the wargame used by RAND to look at possible scenarios for a potential Russian invasion of the Baltic States.

Wargaming the Defense of the Baltics

Wargaming at RAND

What other wargames, models, and simulations are being used out there? Are there any commercial wargames incorporating MDB/MDO elements into their gameplay? What methodologies are being used to portray MDB/MDO effects?

Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he believed technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in an article published posthumously in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Based on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?
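The puzzle can be stated numerically: casualty-causing potential per target is, roughly, weapon lethality divided by target dispersion. The figures below are notional, chosen only to show the shape of the relationship, not Dupuy's published lethality scores or dispersion factors.

```python
# Notional illustration of lethality vs. dispersion across eras.
# Both columns are invented index numbers (antiquity = 1), not
# Dupuy's published values; only the pattern matters: dispersion
# grows at least as fast as lethality, so per-target casualty
# potential stays flat or falls.

eras = {
    # era: (relative weapon lethality, relative target dispersion)
    "antiquity":  (1,     1),
    "napoleonic": (15,    20),
    "wwi":        (200,   250),
    "wwii":       (2_000, 3_000),
}

for era, (lethality, dispersion) in eras.items():
    print(f"{era:>10}: casualty potential per target "
          f"{lethality / dispersion:.2f}")
```

With numbers shaped this way, a thousandfold increase in lethality coexists with a flat or declining casualty rate, which is the historical pattern Dupuy set out to explain.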

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties:

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has been exemplified more often by a discontinuity between weapons and tactical systems than effective continuity.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and material. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

The Great 3-1 Rule Debate

[This piece was originally posted on 13 July 2016.]

Trevor Dupuy’s article cited in my previous post, “Combat Data and the 3:1 Rule,” was the final salvo in a roaring, multi-year debate between two highly regarded members of the U.S. strategic and security studies academic communities, political scientist John Mearsheimer and military analyst/polymath Joshua Epstein. Carried out primarily in the pages of the academic journal International Security, Epstein and Mearsheimer argued over the validity of the 3-1 rule and other analytical models with respect to the NATO/Warsaw Pact military balance in Europe in the 1980s. Epstein cited Dupuy’s empirical research in support of his criticism of Mearsheimer’s reliance on the 3-1 rule. In turn, Mearsheimer questioned Dupuy’s data and conclusions to refute Epstein. Dupuy’s article defended his research and pointed out the errors in Mearsheimer’s assertions. With the publication of Dupuy’s rebuttal, the International Security editors called a time out on the debate thread.
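The empirical core of the dispute can be reduced to a toy scoring exercise: across a set of engagements, how often does a 3-1 force-ratio threshold actually predict the winner? The engagement records below are invented for illustration; Dupuy's article tested the rule against actual historical engagement data.

```python
# Toy test of the 3-1 rule as a predictor. Each record is
# (attacker:defender force ratio, did the attacker win?).
# These records are invented, not historical data.

engagements = [
    (1.5, True), (2.0, False), (3.2, True), (4.0, True),
    (1.2, True), (2.8, True), (3.5, False), (0.9, False),
]

# The rule as predictor: attacker wins iff ratio >= 3.0
correct = sum(
    1 for ratio, attacker_won in engagements
    if (ratio >= 3.0) == attacker_won
)
print(f"3-1 rule predicts {correct}/{len(engagements)} outcomes")
```

Framing the rule this way makes the debate's stakes concrete: a static threshold that misses outnumbered-attacker victories (or well-supplied defenders' wins at unfavorable ratios) is precisely the weakness Epstein and Dupuy pressed against Mearsheimer's use of it.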

The Epstein/Mearsheimer debate was itself part of a larger political debate over U.S. policy toward the Soviet Union during the administration of Ronald Reagan. This interdisciplinary argument, which has since become legendary in security and strategic studies circles, drew in some of the biggest names in these fields, including Eliot Cohen, Barry Posen, the late Samuel Huntington, and Stephen Biddle. As Jeffrey Friedman observed,

These debates played a prominent role in the “renaissance of security studies” because they brought together scholars with different theoretical, methodological, and professional backgrounds to push forward a cohesive line of research that had clear implications for the conduct of contemporary defense policy. Just as importantly, the debate forced scholars to engage broader, fundamental issues. Is “military power” something that can be studied using static measures like force ratios, or does it require a more dynamic analysis? How should analysts evaluate the role of doctrine, or politics, or military strategy in determining the appropriate “balance”? What role should formal modeling play in formulating defense policy? What is the place for empirical analysis, and what are the strengths and limitations of existing data?[1]

It is well worth the time to revisit the contributions to the 1980s debate. I have included a bibliography below that is not exhaustive, but is a place to start. The collapse of the Soviet Union and the end of the Cold War diminished the intensity of the debates, which simmered through the 1990s and then were obscured during the counterterrorism/counterinsurgency conflicts of the post-9/11 era. It is possible that the challenges posed by China and Russia amidst the ongoing “hybrid” conflict in Syria and Iraq may revive interest in interrogating the bases of military analyses in the U.S. and the West. It is a discussion that is long overdue and potentially quite illuminating.


[1] Jeffrey A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies 20 (2011)


(Note: Some of these are behind paywalls, but some are available in PDF format. Mearsheimer has made many of his publications freely available here.)

John J. Mearsheimer, “Why the Soviets Can’t Win Quickly in Central Europe,” International Security 7, no. 1 (Summer 1982)

Samuel P. Huntington, “Conventional Deterrence and Conventional Retaliation in Europe,” International Security 8, no. 3 (Winter 1983/84)

Joshua Epstein, Strategy and Force Planning (Washington, DC: Brookings, 1987)

Joshua M. Epstein, “Dynamic Analysis and the Conventional Balance in Europe,” International Security 12, no. 4 (Spring 1988)

John J. Mearsheimer, “Numbers, Strategy, and the European Balance,” International Security 12, no. 4 (Spring 1988)

Stephen Biddle, “The European Conventional Balance,” Survival 30, no. 2 (March/April 1988)

Eliot A. Cohen, “Toward Better Net Assessment: Rethinking the European Conventional Balance,” International Security 13, no. 1 (Summer 1988)

Joshua M. Epstein, “The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security Studies,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, “Assessing the Conventional Balance,” International Security 13, no. 4 (Spring 1989)

John J. Mearsheimer, Barry R. Posen, and Eliot A. Cohen, “Correspondence: Reassessing Net Assessment,” International Security 13, no. 4 (Spring 1989)

Trevor N. Dupuy, “Combat Data and the 3:1 Rule,” International Security 14, no. 1 (Summer 1989)

Stephen Biddle et al., Defense at Low Force Levels (Alexandria, VA: Institute for Defense Analyses, 1991)

Human Factors In Warfare: Fear In A Lethal Environment

Chaplain (Capt.) Emil Kapaun (right) and Capt. Jerome A. Dolan, a medical officer with the 8th Cavalry Regiment, 1st Cavalry Division, carry an exhausted Soldier off the battlefield in Korea, early in the war. Kapaun was famous for exposing himself to enemy fire. When his battalion was overrun by a Chinese force in November 1950, rather than take an opportunity to escape, Kapaun voluntarily remained behind to minister to the wounded. In 2013, Kapaun posthumously received the Medal of Honor for his actions in the battle and later in a prisoner of war camp, where he died in May 1951. [Photo Credit: Courtesy of the U.S. Army Center of Military History]

[This piece was originally published on 27 June 2017.]

Trevor Dupuy’s theories about warfare were sometimes criticized by those who thought his scientific approach neglected the influence of the human element and chance, and amounted to an attempt to reduce war to mathematical equations. Anyone who has read Dupuy’s work knows this is not, in fact, the case.

Moral and behavioral (i.e., human) factors were central to Dupuy’s research and theorizing about combat. He wrote about them in detail in his books. In 1989, he presented a paper titled “The Fundamental Information Base for Modeling Human Behavior in Combat” at a symposium on combat modeling that provided a clear, succinct summary of his thinking on the topic.

He began by concurring with Carl von Clausewitz’s assertion that

[P]assion, emotion, and fear [are] the fundamental characteristics of combat… No one who has participated in combat can disagree with this Clausewitzean emphasis on passion, emotion, and fear. Without doubt, the single most distinctive and pervasive characteristic of combat is fear: fear in a lethal environment.

Despite the ubiquity of fear on the battlefield, Dupuy pointed out that there is no way to study its impact except through the historical record of combat in the real world.

We cannot replicate fear in laboratory experiments. We cannot introduce fear into field tests. We cannot create an environment of fear in training or in field exercises.

So, to study human reaction in a battlefield environment we have no choice but to go to the battlefield, not the laboratory, not the proving ground, not the training reservation. But, because of the nature of the very characteristics of combat which we want to study, we can’t study them during the battle. We can only do so retrospectively.

We have no choice but to rely on military history. This is why military history has been called the laboratory of the soldier.

He also pointed out that using military history analytically has its own pitfalls and must be handled carefully lest it be used to draw misleading or inaccurate conclusions.

I must also make clear my recognition that military history data is far from perfect, and that—even at best—it reflects the actions and interactions of unpredictable human beings. Extreme caution must be exercised when using or analyzing military history. A single historical example can be misleading for either of two reasons: (a) The data is inaccurate, or (b) The example may be true, but also be untypical.

But, when a number of respectable examples from history show consistent patterns of human behavior, then we can have confidence that behavior in accordance with the pattern is typical, and that behavior inconsistent with the pattern is either untypical, or is inaccurately represented.

He then stated very concisely the scientific basis for his method.

My approach to historical analysis is actuarial. We cannot predict the future in any single instance. But, on the basis of a large set of reliable experience data, we can predict what is likely to occur under a given set of circumstances.

Dupuy listed ten combat phenomena that he believed were directly or indirectly related to human behavior. He considered the list comprehensive, if not exhaustive.

I shall look at Dupuy’s treatment of each of these phenomena in future posts.

Human Factors In Warfare: Suppression

Images from a Finnish Army artillery salvo fired by towed 130mm howitzers during an exercise in 2013. [Puolustusvoimat – Försvarsmakten – The Finnish Defence Forces/YouTube]

[This piece was originally posted on 24 August 2017.]

According to Trevor Dupuy, “Suppression is perhaps the most obvious and most extensive manifestation of the impact of fear on the battlefield.” As he detailed in Understanding War: History and Theory of Combat (1987),

There is probably no obscurity of combat requiring clarification and understanding more urgently than that of suppression… Suppression usually is defined as the effect of fire (primarily artillery fire) upon the behavior of hostile personnel, reducing, limiting, or inhibiting their performance of combat duties. Suppression lasts as long as the fires continue and for some brief, indeterminate period thereafter. Suppression is the most important effect of artillery fire, contributing directly to the ability of the supported maneuver units to accomplish their missions while preventing the enemy units from accomplishing theirs. (p. 251)

Official US Army field artillery doctrine makes a distinction between “suppression” and “neutralization.” Suppression is defined to be instantaneous and fleeting; neutralization, while also temporary, is relatively longer-lasting. Neutralization, the doctrine says, results when suppressive effects are so severe and long-lasting that a target is put out of action for a period of time after the suppressive fire is halted. Neutralization combines the psychological effects of suppressive gunfire with a certain amount of damage. The general concept of neutralization, as distinct from the more fleeting suppression, is a reasonable one. (p. 252)

Despite widespread acknowledgement of the existence of suppression and neutralization, the lack of interest in analyzing their effects was a source of professional frustration for Dupuy. As he commented in 1989,

The British did some interesting but inconclusive work on suppression in their battlefield operations research in World War II. In the United States I am aware of considerable talk about suppression, but very little accomplishment, over the past 20 years. In the light of the significance of suppression, our failure to come to grips with the issue is really quite disgraceful.

This lack of interest is curious, given that suppression and neutralization remain embedded in U.S. Army combat doctrine to this day. The current Army definitions are:

Suppression – In the context of the computed effects of field artillery fires, renders a target ineffective for a short period of time, producing at least 3-percent casualties or materiel damage. [Army Doctrine Reference Publication (ADRP) 1-02, Terms and Military Symbols, December 2015, p. 1-87]

Neutralization – In the context of the computed effects of field artillery fires, renders a target ineffective for a short period of time, producing 10-percent casualties or materiel damage. [ADRP 1-02, p. 1-65]

A particular source of Dupuy’s irritation was that these definitions were likely empirically wrong. As he argued in Understanding War,

This is almost certainly the wrong way to approach quantification of neutralization. Not only is there no historical evidence that 10% casualties are enough to achieve this effect, there is no evidence that any level of losses is required to achieve the psycho-physiological effects of suppression or neutralization. Furthermore, the time period in which casualties are incurred is probably more important than any arbitrary percentage of loss, and the replacement of casualties and repair of damage are probably irrelevant. (p. 252)

Thirty years after Dupuy pointed this problem out, the construct remains enshrined in U.S. doctrine, unquestioned and unsubstantiated. Dupuy himself was convinced that suppression probably had little, if anything, to do with personnel loss rates.

I believe now that suppression is related to and probably a component of disruption caused by combat processes other than surprise, such as a communications failure. Further research may reveal, however, that suppression is a very distinct form of disruption that can be measured or estimated quite independently of disruption caused by any other phenomenon. (Understanding War, p. 251)

He had developed a hypothesis for measuring the effects of suppression, but was unable to interest anyone in the U.S. government or military in sponsoring a study on it. Suppression as a combat phenomenon remains only vaguely understood.