
What Did James Mattis Mean by “Lethality?”

Then-Lt. Gen. James Mattis, commander of U.S. Marine Corps Forces, Central Command, speaks to Marines with Marine Wing Support Group 27, in Al Asad, Iraq, in May 2006. [Photo: Cpl. Zachary Dyer]

Ever since the U.S. National Defense Strategy published by then-Secretary of Defense James Mattis's Defense Department in early 2018 made the term "lethality" a foundational principle, there has been an open-ended discussion of what the term actually means.

In his recent memoir, co-written with Bing West, Call Sign Chaos: Learning to Lead (Random House, 2019), Mattis offered his own definition of lethality. Sort of.

At the beginning of Chapter 17 (pages 235-236), he wrote (emphasis added):

LETHALITY AS THE METRIC

History presents many examples of militaries that forgot that their purpose was to fight and win. So long as we live in an imperfect world, one containing enemies of democracy, we will need a military strictly committed to combat-effectiveness. Our liberal democracy must be protected by a bodyguard of lethal warriors, organized, trained, and equipped to dominate in battle.

The need for lethality must be the measuring stick against which we evaluate the efficacy of our military. By aligning the entire military enterprise—recruiting, training, educating, equipping, and promoting—to the goal of compounding lethality, we best deter adversaries, or if conflict occurs, win at lowest cost to our troops’ lives. …

Although Mattis does not define lethality explicitly, he appears to equate it with "combat-effectiveness," which he also leaves undefined but seems to understand as the ability "to dominate in battle." Mattis, it would seem, understands lethality not as the destructive quality of a weapon or weapon system, but as the performance of troops in combat.

More than once he also refers to lethality as a metric, which suggests that it can be quantified and measured, perhaps in terms of organization, training, and equipment. Mattis would likely object to that interpretation, however, given his hostility to Effects Based Operations (EBO) as implemented by U.S. Joint Forces Command; he banned the concept from joint doctrine in 2008, as he relates on pages 179-181 of Call Sign Chaos.

Trevor Dupuy’s Definitions of Lethality

Two U.S. Marines with an M1919A4 machine gun on Roi-Namur Island in the Marshall Islands during World War II. [Wikimedia]

It appears that discussion of the meaning of lethality, as related to the use of the term in the 2018 U.S. National Defense Strategy document, has flared up again. It was kicked off by an interesting piece by Olivia Gerard in The Strategy Bridge last autumn, "Lethality: An Inquiry."

Gerard credited Trevor Dupuy and his colleagues at the Historical Evaluation and Research Organization (HERO) with codifying "the military appropriation of the concept" of lethality, which was defined as: "the inherent capability of a given weapon to kill personnel or make materiel ineffective in a given period, where capability includes the factors of weapon range, rate of fire, accuracy, radius of effects, and battlefield mobility."

It is gratifying that Gerard attributed this to Dupuy and HERO, but some clarification is needed. The definition she quoted was, in fact, one provided to HERO for the purposes of a study sponsored by the Advanced Tactics Project (AVTAC) of the U.S. Army Combat Developments Command. The 1964 study report, Historical Trends Related to Weapon Lethality, provided the starting point for Dupuy's subsequent theorizing about combat.

In his own works, Dupuy used a simpler definition of lethality:

He also used the terms lethality and firepower interchangeably in his writings. The wording of the original 1964 AVTAC definition tracks closely with the lethality scoring methodology Dupuy and his HERO colleagues developed for the study, known as the Theoretical Lethality Index/Operational Lethality Index (TLI/OLI). The original purpose of this construct was to permit a measurement of lethality by which weapons could be compared to one another (TLI) and across history (OLI). It worked well enough that he incorporated it into his combat models, the Quantified Judgement Model (QJM) and the Tactical Numerical Deterministic Model (TNDM).
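To make the scoring idea concrete, here is a minimal illustrative sketch in which a theoretical lethality value is formed as a product of weapon-performance factors of the kind named in the 1964 AVTAC definition, and then discounted by a dispersion factor to yield an operational value. The factor names and numbers are hypothetical placeholders, not Dupuy's published TLI/OLI methodology or coefficients.

```python
# Illustrative sketch only: the factor names and values below are hypothetical
# placeholders, not Dupuy's published TLI/OLI coefficients or procedure.

def theoretical_lethality_index(rate_of_fire, targets_per_strike,
                                range_factor, accuracy, reliability):
    """Notional TLI: a product of weapon-performance factors of the kind
    named in the 1964 AVTAC definition."""
    return rate_of_fire * targets_per_strike * range_factor * accuracy * reliability

def operational_lethality_index(tli, dispersion_factor):
    """Notional OLI: theoretical lethality discounted by the dispersion of
    targets typical of the battlefield on which the weapon was used."""
    return tli / dispersion_factor

# Hypothetical comparison of two weapons across eras.
musket_tli = theoretical_lethality_index(3, 1, 0.2, 0.3, 0.9)          # ~0.16
machine_gun_tli = theoretical_lethality_index(500, 1, 0.6, 0.4, 0.8)   # ~96
print(operational_lethality_index(musket_tli, dispersion_factor=1))
print(operational_lethality_index(machine_gun_tli, dispersion_factor=3_000))
```

Even in this toy form the design intent is visible: a weapon can be vastly more lethal in theory while its operational value against dispersed targets grows far more slowly, or even shrinks.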

Dupuy’s Verities: The Complexities of Combat

“The Battle of Leipzig, 16-19 October 1813” by A.I. Zauerweid (1783-1844) [Wikimedia]
The thirteenth and last of Trevor Dupuy’s Timeless Verities of Combat is:

Combat is too complex to be described in a single, simple aphorism.

From Understanding War (1987):

This is amply demonstrated by the preceding [verities]. All writers on military affairs (including this one) need periodically to remind themselves of this. In military analysis it is often necessary to focus on some particular aspect of combat. However, the results of such closely focused analyses must then be evaluated in the context of the brutal, multifarious, overlapping realities of war.

Trevor Dupuy was sometimes accused of attempting to reduce war to a mathematical equation. A casual reading of his writings might give that impression, but anyone who honestly engages with his ideas quickly finds this to be an erroneous conclusion. Yet Dupuy believed the temptation to simplify and abstract combat and warfare to be common enough that he embedded a warning against doing so into his basic theory on the subject. He firmly believed that human behavior comprises the most important aspect of combat, yet it is all too easy to lose sight of the human experience of war amid the figuring of who won or lost and why, and the counting of weapons, people, and casualties. As a military historian, he was keenly aware that the human stories behind the numbers—however imperfectly recorded and told—tell us more about the reality of war than mere numbers on their own ever will.

Dupuy’s Verities: Combat Power =/= Firepower

A U.S. 11th Marines 75mm pack howitzer and crew on Guadalcanal, September or October 1942. The lean condition of the crewmembers indicates that they had not been getting enough nutrition during this period. [Wikipedia]

The ninth of Trevor Dupuy’s Timeless Verities of Combat is:

Superior Combat Power Always Wins.

From Understanding War (1987):

Military history demonstrates that whenever an outnumbered force was successful, its combat power was greater than that of the loser. All other things being equal, God has always been on the side of the heaviest battalions and always will be.

In recent years two or three surveys of modern historical experience have led to the finding that relative strength is not a conclusive factor in battle outcome. As we have seen, a superficial analysis of historical combat could support this conclusion. There are a number of examples of battles won by the side with inferior numbers. In many battles, outnumbered attackers were successful.

These examples are not meaningful, however, until the comparison includes the circumstances of the battles and opposing forces. If one takes into consideration surprise (when present), relative combat effectiveness of the opponents, terrain features, and the advantage of defensive posture, the result may be different. When all of the circumstances are quantified and applied to the numbers of troops and weapons, the side with the greater combat power on the battlefield is always seen to prevail.

The concept of combat power is foundational to Dupuy’s theory of combat. He did not originate it; the notion that battle encompasses something more than just “physics-based” aspects likely originated with British theorist J.F.C. Fuller during World War I and migrated into U.S. Army thinking via post-war doctrinal revision. Dupuy refined and sharpened the Army’s vague conceptualization of it in the first iterations of his Quantified Judgement Model (QJM) developed in the 1970s.

Dupuy initially defined his idea of combat power in formal terms, as an equation in the QJM:

P = (S x V x CEV)

Where:

P = Combat Power
S = Force Strength
V = Environmental and Operational Variable Factors
CEV = Combat Effectiveness Value

Essentially, combat power is the product of:

  • force strength as measured in his models through the Theoretical/Operational Lethality Index (TLI/OLI), a firepower scoring method for comparing the lethality of weapons relative to each other;
  • the intangible environmental and operational variables that affect each circumstance of combat; and
  • the intangible human behavioral (or moral) factors that determine the fighting quality of a combat force.
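A rough numerical illustration of how these three terms multiply together is given below; the force strengths, variable factors, and combat effectiveness values are invented for the example and are not drawn from the QJM's actual factor tables.

```python
# Illustrative sketch of P = S x V x CEV; all values are invented for the
# example and are not taken from the QJM's factor tables.

def combat_power(force_strength, variable_factors, cev):
    """P = S x V x CEV.
    force_strength   -- S: aggregate firepower score (e.g., summed OLI values)
    variable_factors -- V: environmental/operational multipliers
    cev              -- CEV: relative combat effectiveness of the force
    """
    v = 1.0
    for factor in variable_factors.values():
        v *= factor
    return force_strength * v * cev

attacker = combat_power(120_000, {"weather": 0.9, "terrain": 0.8, "surprise": 1.3}, cev=1.0)
defender = combat_power(80_000, {"terrain": 1.0, "defensive_posture": 1.5}, cev=1.2)

print(f"P(attacker) = {attacker:,.0f}")   # 112,320
print(f"P(defender) = {defender:,.0f}")   # 144,000
```

In this invented case the numerically weaker defender has the greater combat power, because posture and combat effectiveness more than offset the attacker's advantages in raw strength and surprise, which is precisely the distinction the verity draws between combat power and mere numbers.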

Dupuy’s theory of combat power and its functional realization in his models have two virtues. First, unlike most existing combat models, it incorporates the effects of those intangible factors unique to each engagement or battle that influence combat outcomes, but are not readily measured in physical terms. As Dupuy argued, combat consists of more than duels between weapons systems. A list of those factors can be found below.

Second, the analytical research on real-world combat data done by him and his colleagues allowed him to begin establishing the specific nature of combat processes and their interactions, which are only abstracted in other combat theories and models. Those factors and processes for which he had developed a quantification hypothesis are denoted by an asterisk below.

Dupuy’s Verities: The Inefficiency of Combat

The “Mud March” of the Union Army of the Potomac, January 1863.

The twelfth of Trevor Dupuy’s Timeless Verities of Combat is:

Combat activities are always slower, less productive, and less efficient than anticipated.

From Understanding War (1987):

This is the phenomenon that Clausewitz called "friction in war." Friction is largely due to the disruptive, suppressive, and dispersal effects of firepower upon an aggregation of people. The pace of actual combat operations will be much slower than the progress of field tests and training exercises, even highly realistic ones. Tests and exercises are not truly realistic portrayals of combat, because they lack the element of fear in a lethal environment, present only in real combat. Allowances must be made in planning and execution for the effects of friction, including mistakes, breakdowns, and confusion.

While Clausewitz asserted that the effects of friction on the battlefield could not be measured because they were largely due to chance, Dupuy believed that its influence could, in fact, be gauged and quantified. He identified at least two distinct combat phenomena he thought reflected measurable effects of friction: the differences in casualty rates between large and small sized forces, and diminishing returns from adding extra combat power beyond a certain point in battle. He also believed much more research would be necessary to fully understand and account for this.

Dupuy was skeptical of the accuracy of combat models that failed to account for this interaction between operational and human factors on the battlefield. He was particularly doubtful about approaches that started by calculating the outcomes of combat between individual small-sized units or weapons platforms based on the Lanchester equations or "physics-based" estimates, then used these as inputs for brigade- and division-level battles, the results of which were in turn used as the basis for determining the consequences of theater-level campaigns. He thought that such models, known as "bottom-up," hierarchical, or aggregated concepts (and the prevailing approach to campaign combat modeling in the U.S.), would be incapable of accurately capturing and simulating the effects of friction.
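For reference, the Lanchester square-law equations mentioned above couple two force levels through mutual attrition, dA/dt = -b*B and dB/dt = -a*A. The minimal sketch below integrates them numerically with purely illustrative strengths and coefficients; Dupuy's objection was that stacking many such "physics-based" duels into higher echelons leaves out exactly the frictional and human factors discussed here.

```python
# Minimal Lanchester square-law sketch: dA/dt = -b*B, dB/dt = -a*A.
# The strengths and attrition coefficients are purely illustrative.

def lanchester_square(a, b, a_eff, b_eff, dt=0.01, t_max=50.0):
    """Euler-integrate the square law until one side reaches zero or time runs out."""
    t = 0.0
    while a > 0 and b > 0 and t < t_max:
        a, b = a - b_eff * b * dt, b - a_eff * a * dt
        t += dt
    return max(a, 0.0), max(b, 0.0), t

a_final, b_final, t = lanchester_square(a=1000, b=800, a_eff=0.05, b_eff=0.05)
print(f"A = {a_final:.0f}, B = {b_final:.0f} at t = {t:.1f}")
# With equal effectiveness the larger force wins with roughly
# sqrt(1000**2 - 800**2) = 600 survivors, per the square law.
```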

Dupuy’s Verities: The Effects of Firepower in Combat

A German artillery barrage falling on Allied trenches, probably during the Second Battle of Ypres in 1915, during the First World War. [Wikimedia]

The eleventh of Trevor Dupuy’s Timeless Verities of Combat is:

Firepower kills, disrupts, suppresses, and causes dispersion.

From Understanding War (1987):

It is doubtful if any of the people who are today writing on the effect of technology on warfare would consciously disagree with this statement. Yet, many of them tend to ignore the impact of firepower on dispersion, and as a consequence they have come to believe that the more lethal the firepower, the more deaths, disruption, and suppression it will cause. In fact, as weapons have become more lethal intrinsically, their casualty-causing capability has either declined or remained about the same because of greater dispersion of targets. Personnel and tank loss rates of the 1973 Arab-Israeli War, for example, were quite similar to those of intensive battles of World War II and the casualty rates in both of these wars were less than in World War I. (p. 7)

Research and analysis of real-world historical combat data by Dupuy and TDI has identified at least four distinct combat effects of firepower: infliction of casualties (lethality), disruption, suppression, and dispersion. All of them were found to be heavily influenced—if not determined—by moral (human) factors.

Again, I have written extensively on this blog about Dupuy's theory of the historical relationship between weapon lethality, dispersion on the battlefield, and the long-term decline in average daily combat casualty rates. TDI President Chris Lawrence has done further work on the subject as well.

TDI Friday Read: Lethality, Dispersion, And Mass On Future Battlefields

Human Factors In Warfare: Dispersion

Human Factors In Warfare: Suppression
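The quantitative intuition behind those posts can be illustrated simply: if the aggregate lethality of weapons and the dispersion of their targets both grow over time, the casualty-causing capability that matters tracks their ratio rather than lethality alone. The era labels and numbers below are notional placeholders, not Dupuy's published lethality or dispersion figures.

```python
# Notional illustration only: these lethality and dispersion values are
# placeholders, not Dupuy's published historical figures.

eras = {
    # era: (aggregate lethality index, dispersion in square meters per soldier)
    "Napoleonic":   (5_000,         25),
    "World War I":  (500_000,    2_500),
    "World War II": (3_000_000, 30_000),
}

for era, (lethality, dispersion) in eras.items():
    # Casualty-causing capability roughly tracks lethality per unit of target area.
    print(f"{era:12s}  lethality/dispersion = {lethality / dispersion:7.1f}")
```

On such notional numbers, a 600-fold increase in weapon lethality yields no increase, and eventually a decline, in lethality per unit of battlefield area, which is the pattern Dupuy observed in the historical casualty data.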

There appears to be a fundamental difference in interpretation of the combat effects of firepower between Dupuy’s emphasis on the primacy of human factors and Defense Department models that account only for the “physics-based” casualty-inflicting capabilities of weapons systems. While U.S. Army combat doctrine accounts for the interaction of firepower and human behavior on the battlefield, it has no clear method for assessing or even fully identifying the effects of such factors on combat outcomes.

Dupuy’s Verities: The Requirements For Successful Defense

A Sherman tank of the U.S. Army 9th Armored Division heads into action against the advancing Germans during the Battle of the Bulge. [Warfare History Network]

The eighth of Trevor Dupuy’s Timeless Verities of Combat is:

Successful defense requires depth and reserves.

From Understanding War (1987):

Successful defense requires depth and reserves. It has been asserted that outnumbered military forces cannot afford to withhold valuable firepower from ongoing defensive operations and keep it idle in reserve posture. History demonstrates that this is specious logic, and that linear defense is disastrously vulnerable. Napoleon’s crossing of the Po in his first campaign in 1796 is perhaps the classic demonstration of the fallacy of linear (or cordon) defense.

The defender may have all of his firepower committed to the anticipated operational area, but the attacker’s advantage in having the initiative can always render much of that defensive firepower useless. Anyone who suggests that modern technology will facilitate the shifting of engaged firepower in battle overlooks three considerations: (a) the attacker can inhibit or prevent such movement by both direct and indirect means, (b) a defender engaged in a fruitless firefight against limited attacks by numerically inferior attackers is neither physically nor psychologically attuned to making lateral movements even if the enemy does not prevent or inhibit it, and (c) withdrawal of forces from the line (even if possible) provides an alert attacker with an opportunity for shifting the thrust of his offensive to the newly created gap in the defenses.

Napoleon recognized that hard-fought combat is usually won by the side committing the last reserves. Marengo, Borodino, and Ligny are typical examples of Napoleonic victories that demonstrated the importance of having resources available to tip the scales. His two greatest defeats, Leipzig and Waterloo, were suffered because his enemies still had reserves after his were all committed. The importance of committing the last reserves was demonstrated with particular poignancy at Antietam in the American Civil War. In World War II there is no better example than that of Kursk. [pp. 5-6]

Dupuy's observations about the need for depth and reserves for a successful defense take on even greater salience in light of the probable character of the near-future battlefield. Terrain lost by an unsuccessful defense may be extremely difficult to regain under prevailing circumstances.

The interaction of increasing weapon lethality and the operational and human circumstantial variables of combat continue to drive the long-term trend in dispersion of combat forces in frontage and depth.

Long-range precision firepower, ubiquitous battlefield reconnaissance and surveillance, and the effectiveness of cyber and information operations will make massing of forces and operational maneuver risky affairs.

As during the Cold War, the stability of alliances may depend on a willingness to defend forward in the teeth of effective anti-access/area denial (A2/AD) regimes that will make the strategic and operational deployment of reserves risky as well. The successful suppression of A2/AD networks might court a nuclear response, however.

Finding an effective solution for enabling a successful defense-in-depth in the future will be a task of great difficulty.

The Cold War Roots of the Integrated U.S./Japan/NATO Air Defense Network

Continental U.S. Air Defense Identification Zones [MIT Lincoln Laboratory]

My last post detailed how the outbreak of the Korean War in 1950 prompted the U.S. to undertake emergency efforts to bolster its continental air defenses, including the concept of the Air Defense Identification Zone (ADIZ). This post will trace the development of this network and its gradual integration with those of Japan and NATO.

In the early 1950s, U.S. continental air defense, which would later be automated as the Semi-Automatic Ground Environment (SAGE) system, resembled a scaled-up version of the Dowding System pioneered by Great Britain as it faced air attack by the Luftwaffe in 1940. The network was initially a rudimentary, manual affair:

The permanent network depended on each radar site to perform GCI [Ground Control & Intercept] functions or pass information to a nearby GCI center. For example, information gathered by North Truro Air Force Station on Cape Cod was transmitted via three dedicated land lines to the GCI center at Otis AFB, Massachusetts, and then on to the ADC Headquarters at Ent AFB, Colorado. The facility at Otis AFB was a regional information clearinghouse that integrated the data from North Truro and other regional radar stations, Navy picket ships, and the all-volunteer GOC [Ground Observer Corps]. The clearinghouse operation was labor intensive. The data had to be manually copied onto Plexiglas plotting boards. The ground controllers used this data to direct defensive fighters to their targets. It was a slow and cumbersome process, fraught with difficulties. Engagement information was passed on to command headquarters by telephone and teletype. At Ent AFB, the information received from the regional clearinghouses was then passed on to enlisted airmen standing on scaffolds behind the world’s largest Plexiglas board. Using grease pencils, these airmen etched the progress of enemy bombers onto the back of the Plexiglas board so that air defense commanders could evaluate and respond. This arrangement impeded rapid response to the air battle.

It is hard to imagine an air defense challenge of the magnitude that potentially faced the U.S. and USSR by 1955. The Strategic Air Command (SAC) bomber fleet peaked at over 2,500 aircraft in 1955-1965, with 2,000 B-47s (range of 2,013 statute miles) and 750 B-52s (range of 4,480 statute miles). The range of U.S. bombers was also extended considerably by the fleet of roughly 800 KC-135 aerial refueling tankers.

In spite of the much-publicized "bomber gap," taking Soviet production numbers (and liberally adding aircraft of shorter range or unavailable until 1962…) produces an approximate estimate of the Soviet bomber fleet:

  • M-4 "Bison" (range of 3,480 statute miles) = 93
  • Tu-16 "Badger" (range of 3,888 statute miles) = 1,507
  • Tu-22 "Blinder" (range of 3,000 statute miles) = 250-300
  • Tu-95 "Bear" (range of 9,400 statute miles) = 300+

That gave the U.S. an advantage in bombers of 2,750 to ~2,200 over the Soviets. Now imagine this air battle being conducted with manual tracking on Plexiglas boards with grease pencils…untenable!

Air Defense and Modern Computing

However, the problem proved amenable to solutions provided by the nascent computer revolution.

At the Lincoln Laboratory development continued on an automated command and control system centered around the 250-ton Whirlwind II (AN/FSQ-7) computer. Containing some 49,000 vacuum tubes, the Whirlwind II became a central component of the SAGE system. SAGE, a system of analog computer-equipped direction centers, processed information from ground radars, picket ships, early-warning aircraft, and ground observers onto a generated radarscope to create a composite picture of the emerging air battle. Gone were the Plexiglas™ boards and teletype reports. Having an instantaneous view of the air picture over North America, defense commanders would be able to quickly evaluate the threats and effectively deploy interceptors and missiles to meet the threat.

The SAGE system was continually upgraded through the mid-to-late 1950s.

By 1954, with several more radars in the northeast providing data, the Cambridge control center (a prototype SAGE center) gained experience in directing F-86D interceptors against B-47 bombers performing mock raids. Still much development, research, and testing lay ahead. Bringing together long-range radar, communications, microwave electronics, and digital computer technologies required the largest research and development effort since the Manhattan Project. During its first ten years, the government spent $8 billion to develop and deploy SAGE. By 1958, Lincoln Laboratory had a professional staff of 720 with an annual budget of $22.5 million, to conduct SAGE-related work. The contract with IBM to build sixty production models of the Whirlwind II at $30 million each provided about half of the corporation’s revenues for the 1950s and exposed the corporation to technologies that it would use in the 1960s to dominate the computer industry. In the meantime, scientists and electronic engineers in the defense industry strove to install better radars and make these radars invulnerable to electronic countermeasures (ECM), commonly called jamming.

The SAGE development effort became one of the foundations of modern computing, giving IBM the technological capability to dominate the computer industry for several decades, until it outsourced two key components: hardware to Intel and software to a young Microsoft, both of which became behemoths of the internet age. It is also estimated that this effort carried a price tag exceeding that of the Manhattan Project. SAGE also transformed the attitude of the USAF towards technology and computerization.

Current Air Defense Networks

In the 1950s and 1960s, the U.S. continental air defense network gradually began to expand geographically and to integrate with the NADGE and BADGE air defense networks of its NATO allies and Japan, respectively.

NATO Air Defense Ground Environment (NADGE): This was approved by NATO in December 1955, and became operational in 1962 with 18 radar stations. It eventually grew to 84 stations and provided an inter-connected network from Norway to Turkey before being superseded by the NATO Integrated Air Defense System (NATINADS) in 1972. NATINADS was further upgraded in the 1980s to incorporate data from E-3 Sentry AWACS aircraft under AEGIS (Airborne Early-warning/Ground Environment Integrated Segment), not to be confused with the U.S. Navy system of the same acronym.

Base Air Defense Ground Environment (BADGE): This was an automated system in the same fashion as SAGE, replacing the manual system the JASDF had operated since 1960. The requirement was stated in July 1961, and the system was modeled on the Naval Tactical Data System (NTDS) developed by Hughes for the U.S. Navy. BADGE was ordered in December 1964 and became operational in March 1969. It was superseded by the Japan Aerospace Defense Ground Environment (JADGE) in July 2009.

Japanese Air Defense and the Cold War Origins of Air Defense Identification Zones

Air Defense Identification Zones (ADIZ) in the South China Sea [Maximilian Dörrbecker (Chumwa)/Creative Commons/Wikipedia]

My previous posts have discussed the Japanese Air Self Defense Force (JASDF) and the aircraft used to perform the Defensive Counter Air (DCA) mission. To accomplish this, the JASDF is supported by an extensive air defense system which closely mirrors U.S. Air Force (USAF) and U.S. Navy (USN) systems and has co-evolved as technology and threats have changed over time.

Japan’s integrated air defense network and the current challenges it faces are both rooted in the Cold War origins of the modern U.S. air defense network.

On June 25, 1950, North Korea launched an invasion of South Korea, drawing the United States into a war that would last for three years. Believing that the North Korean attack could represent the first phase of a Soviet-inspired general war, the Joint Chiefs of Staff ordered Air Force air defense forces to a special alert status. In the process of placing forces on heightened alert, the Air Force uncovered major weaknesses in the coordination of defensive units to defend the nation’s airspace. As a result, an air defense command and control structure began to develop and Air Defense Identification Zones (ADIZ) were staked out along the nation’s frontiers. With the establishment of ADIZ, unidentified aircraft approaching North American airspace would be interrogated by radio. If the radio interrogation failed to identify the aircraft, the Air Force launched interceptor aircraft to identify the intruder visually. In addition, the Air Force received Army cooperation. The commander of the Army’s Antiaircraft Artillery Command allowed the Air Force to take operational control of the gun batteries as part of a coordinated defense in the event of attack.

In addition to North America, the U.S. unilaterally declared ADIZs to protect Japan, South Korea, the Philippines, and Taiwan in 1950. This action had no explicit foundation in international law.

Under the Convention on International Civil Aviation (the Chicago Convention), each State has complete and exclusive sovereignty over the airspace above its territory. While national sovereignty cannot be delegated, the responsibility for the provision of air traffic services can be delegated.… [A] State which delegates to another State the responsibility for providing air traffic services within airspace over its territory does so without derogation of its sovereignty.

This precedent set the stage for China to unilaterally declare ADIZs of its own in 2013 that overlap those of Japan in the East China Sea. China's ADIZs have the same international legal validity as those of the U.S. and Japan, which has muted criticism of China's actions by those countries.

Recent activity by the Chinese People’s Liberation Army Air Force (PLAAF) and nuclear and missile testing by the Democratic People’s Republic of Korea (DPRK, or North Korea) is prompting incremental upgrades and improvements to the Japanese air defense radar network.

In August 2018, six Chinese H-6 bombers passed between Okinawa’s main island and Miyako Island heading north to Kii Peninsula. “The activities by Chinese aircraft in surrounding areas of our country have become more active and expanding its area of operation,” the spokesman [of the Japanese Ministry of Defense] said.… “There were no units placed on the islands on the Pacific Ocean side, such as Ogasawara islands, which conducted monitoring of the area…and the area was without an air defense capability.”

Such actions by the PLAAF and the People's Liberation Army Navy (PLAN) provided a significant rationale for the Japanese decision to purchase the F-35B and retrofit the Izumo-class helicopter carriers to operate them, since the Pacific Ocean side of Japan is relatively less developed in terms of air defenses and airfields for land-based aircraft.

My next post will look at the development of the U.S. air defense network and its eventual integration with those of Japan and NATO.

TDI Friday Read: Engaging The Phalanx

The December 2018 issue of Phalanx, a journal published by the Military Operations Research Society (MORS), contains an article by Jonathan K. Alt, Christopher Morey, and Larry Larimer entitled "Perspectives on Combat Modeling" (the article is paywalled, but limited public access is available via JSTOR).

Their article was written partly as a critical rebuttal to a TDI blog post originally published in April 2017, which discussed an issue the combat modeling and simulation community has long been aware of but slow to address, known as the "Base of Sand" problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

In short, because so little is empirically known about the real-world structure of combat processes and the interactions among those processes, modelers have been forced to rely on the judgement of subject matter experts (SMEs) to fill in the blanks. No one really knows if the blend of empirical data and SME judgement accurately represents combat, because the modeling community has been reluctant to test its models against data on real-world experience, a process known as validation.
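As a purely illustrative picture of what validation entails, the sketch below compares a model's predicted outcomes against a handful of historical engagements and reports simple accuracy statistics. The engagement records, predictions, and metrics are hypothetical placeholders, not any particular model's results or TDI's actual validation procedure.

```python
# Illustrative sketch of validating a combat model against historical engagements.
# All engagement records and predictions are hypothetical placeholders.

historical = [
    # (engagement, actual winner, actual casualty rate in percent per day)
    ("Engagement A", "attacker", 4.2),
    ("Engagement B", "defender", 6.8),
    ("Engagement C", "attacker", 2.9),
]

model_predictions = {
    "Engagement A": ("attacker", 3.6),
    "Engagement B": ("attacker", 5.1),
    "Engagement C": ("attacker", 3.3),
}

correct_winners = 0
abs_pct_errors = []
for name, actual_winner, actual_rate in historical:
    predicted_winner, predicted_rate = model_predictions[name]
    correct_winners += (predicted_winner == actual_winner)
    abs_pct_errors.append(abs(predicted_rate - actual_rate) / actual_rate * 100)

print(f"Winners predicted correctly: {correct_winners}/{len(historical)}")
print(f"Mean absolute casualty-rate error: {sum(abs_pct_errors) / len(abs_pct_errors):.1f}%")
```

The essential point is that the comparison is made against real-world outcomes, not against other models or against the expectations of the SMEs who tuned the model in the first place.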

TDI President Chris Lawrence subsequently published a series of blog posts responding to the specific comments and criticisms leveled by Alt, Morey, and Larimer.

How are combat models and simulations tested to see if they portray real-world combat accurately? Are they actually tested?

Engaging the Phalanx

How can we know if combat simulations adhere to strict standards established by the DoD regarding validation? Perhaps the validation reports can be released for peer review.

Validation

Some claim that models of complex combat behavior cannot really be tested against real-world operational experience, but this has already been done. Several times.

Validating Attrition

If only the “physics-based aspects” of combat models are empirically tested, do those models reliably represent real-world combat with humans or only the interactions of weapons systems?

Physics-based Aspects of Combat

Is real-world historical operational combat experience useful only for demonstrating the capabilities of combat models, or is it something the models should be able to reliably replicate?

Historical Demonstrations?

If a Subject Matter Expert (SME) can be substituted for a proper combat model validation effort, then could not a SME simply be substituted for the model? Should not all models be considered expert judgement quantified?

SMEs

What should be done about the “Base of Sand” problem? Here are some suggestions.

Engaging the Phalanx (part 7 of 7)

Persuading the military operations research community of the importance of research on real-world combat experience in modeling has been an uphill battle with a long history.

Diddlysquat

And the debate continues…