TDI Friday Read: Links You May Have Missed, 30 March 2018

This week’s list of links is an odds-and-ends assortment.

David Vergun has an interview on the Army website with General Stephen J. Townsend, commander of the U.S. Army Training and Doctrine Command (TRADOC), about the need for smaller, lighter, and faster equipment for future warfare.

Defense News’s apparently inexhaustible Jen Judson details the Army’s newest forthcoming organization, “US Army’s Futures Command sets groundwork for battlefield transformation.”

At West Point’s Modern War Institute, Lionel Beehner, Liam Collins, Steve Ferenzi, Robert Person, and Aaron Brantly have a very interesting analysis of the contemporary Russian approach to warfare, “Analyzing the Russian Way of War: Evidence from the 2008 Conflict with Georgia.”

Also at the Modern War Institute, Ethan Olberding examines ways to improve the planning skills of the U.S. Army’s junior leaders, “You Can Lead, But Can You Plan? Time to Change the Way We Develop Junior Leaders.”

Kyle Mizokami at Popular Mechanics takes a look at the state of the art in drone defenses, “Watch Microwave and Laser Weapons Knock Drones Out of the Sky.”

Jared Keller at Task & Purpose looks into the Army’s interest in upgunning its medium-weight armored vehicles, “The Army Is Eyeing This Beastly 40mm Cannon For Its Ground Combat Vehicles.”

And finally, MeritTalk, a site focused on U.S. government information technology, has posted a piece, “Pentagon Wants An Early Warning System For Hybrid Warfare,” looking at the Defense Advanced Research Projects Agency’s (DARPA) ambitious Collection and Monitoring via Planning for Active Situational Scenarios (COMPASS) program. COMPASS will incorporate AI, game theory, modeling, and estimation technologies in an attempt to decipher the often subtle signs that precede a full-scale attack.

‘Love’s Tables’: U.S. War Department Casualty Estimation in World War II

The same friend of TDI who asked about “Evett’s Rates,” the British casualty estimation methodology used during World War II, also mentioned that the work of Albert G. Love III is now available online. Rick Atkinson also referenced “Love’s Tables” in The Guns At Last Light.

In 1931, Lieutenant Colonel (later Brigadier General) Love, then a Medical Corps physician in the U.S. Army Medical Field Services School, published a study of American casualty data in the recent Great War, titled “War Casualties.”[1] This study was likely the source for tables used for casualty estimation by the U.S. Army through 1944.[2]

Love, who had no advanced math or statistical training, undertook his study with the support of the Army Surgeon General, Merritte W. Ireland, and initial assistance from Dr. Lowell J. Reed, a professor of biostatistics at Johns Hopkins University. Love’s posting in the Surgeon General’s Office afforded him access to an array of casualty data collected from the records of the American Expeditionary Forces in France, as well as data from annual Surgeon General reports dating back to 1819, the official medical history of the U.S. Civil War, and U.S. general population statistics.

Love’s research was likely the basis for rate tables for calculating casualties that first appeared in the 1932 edition of the War Department’s Staff Officer’s Field Manual.[3]

Battle Casualties, including Killed, in Percent of Unit Strength, Staff Officer’s Field Manual (1932).

The 1932 Staff Officer’s Field Manual estimation methodology reflected Love’s sophisticated understanding of the factors influencing combat casualty rates. It indicated that the enemy’s resistance and combat strength (and all of the factors that comprised it), as well as the equipment, training, and discipline of the friendly troops, had to be taken into consideration. The text accompanying the tables pointed out that loss rates in small units could be quite high and variable over time, while larger formations took fewer casualties as a fraction of overall strength and their rates tended to become more constant over time. Casualties were not distributed evenly; they were concentrated most heavily among the combat arms, and in the front-line infantry in particular. Attackers usually suffered higher loss rates than defenders. Other factors to be accounted for included the character of the terrain, the relative amount of artillery on each side, and the employment of gas.

The 1941 iteration of the Staff Officer’s Field Manual, now designated Field Manual (FM) 101-10[4], provided two methods for estimating battle casualties. It included the original 1932 Battle Casualties table, but the associated text no longer included the section outlining factors to be considered in calculating loss rates. This passage was moved to a note appended to a new table showing the distribution of casualties among the combat arms.

Rather confusingly, FM 101-10 (1941) presented a second table, Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength. It included rates for front line regiments and divisions, corps and army units, reserves, and attached cavalry. The rates were broken down by posture and tactical mission.

Estimated Daily Losses in Campaign of Personnel, Dead and Evacuated, Per 1,000 of Actual Strength, FM 101-10 (1941)

Neither the source for this table nor the method by which it was derived is known. No explanatory text accompanied it, but a footnote stated that “this table is intended primarily for use in school work and in field exercises.” The rates in it were weighted toward the upper range of the figures provided in the 1932 Battle Casualties table.

The October 1943 edition of FM 101-10 contained no significant changes from the 1941 version, except for the caveat that the 1932 Battle Casualties table “may or may not prove correct when applied to the present conflict.”

The October 1944 version of FM 101-10 incorporated data obtained from World War II experience.[5] While it also noted that the 1932 Battle Casualties table might not be applicable, the experiences of the U.S. II Corps in North Africa and one division in Italy were found to be in agreement with the table’s division and corps loss rates.

FM 101-10 (1944) included another new table, Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), meaning that it now provided three distinct methods for estimating battle casualties.

Estimate of Battle Losses for a Front-Line Division (in % of Actual Strength), FM 101-10 (1944)

Like the 1941 Estimated Daily Losses in Campaign table, the sources for this new table were not provided, and the text contained no guidance as to how or when it should be used. The rates it contained fell roughly within the span of daily rates for severe (6-8%) to maximum (12%) combat listed in the 1932 Battle Casualty table, but if applied consistently they would produce overall rates far higher than the 1932 table’s 1% daily average.
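To see how quickly such daily percentage rates compound, here is a minimal sketch in Python. The division strength and the specific rates used are illustrative assumptions, not figures taken from the manuals; the point is simply the arithmetic of applying a constant daily loss rate over a 30-day period.

```python
# A minimal sketch (not from FM 101-10): cumulative effect of constant daily
# battle-loss rates over 30 days. Strength and rates below are illustrative.

def cumulative_losses(strength, daily_rate, days=30):
    """Apply a constant daily loss rate (fraction of current strength)."""
    remaining = strength
    for _ in range(days):
        remaining -= remaining * daily_rate
    return strength - remaining

division = 15_000  # notional division strength
for label, rate in [("1% average", 0.01), ("6% severe", 0.06), ("12% maximum", 0.12)]:
    lost = cumulative_losses(division, rate)
    print(f"{label:>12}: {lost:7,.0f} losses ({lost / division:.0%} of strength)")
```

Even the lower end of the severe range, applied day after day, consumes most of a notional division’s strength within a month, which underscores how far these figures sat above the 1932 table’s long-run average.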

FM 101-10 (1944) included a table showing the distribution of losses by branch for the theater based on experience to that date, except for combat in the Philippine Islands. The new chart was used in conjunction with the 1944 Estimate of Battle Losses for a Front-Line Division table to determine daily casualty distribution.

Distribution of Battle Losses–Theater of Operations, FM 101-10 (1944)

The final World War II version of FM 101-10, issued in August 1945,[6] contained no new casualty rate tables, nor any revisions to the existing figures. It did, at last, effectively invalidate the 1932 Battle Casualties table by noting that “the following table has been developed from American experience in active operations and, of course, may not be applicable to a particular situation.” (original emphasis)

NOTES

[1] Albert G. Love, War Casualties, The Army Medical Bulletin, No. 24 (Carlisle Barracks, PA: 1931)

[2] This post is adapted from TDI, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 314-317).

[3] U.S. War Department, Staff Officer’s Field Manual, Part Two: Technical and Logistical Data (Government Printing Office, Washington, D.C., 1932)

[4] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., June 15, 1941)

[5] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., October 12, 1944)

[6] U.S. War Department, FM 101-10, Staff Officer’s Field Manual: Organization, Technical and Logistical Data (Washington, D.C., August 1, 1945)

C-WAM 2

Here are two C-WAM documents from 2016: the game’s rule book and a CAA briefing:

C-WAM’s rule book: https://paxsims.files.wordpress.com/2016/10/c-wam-rules-version-7-29-jul-2016.docx

CAA briefing on C-WAM: https://paxsims.files.wordpress.com/2016/10/mors-wargame-cop-brief-20-apr-16.pptx

A few highlights (rule book):

  1. Grid size from 2 to 10 km, depending on terrain (section 2.2)
    1. Usually 5 km to a grid.
  2. There is an air-to-air combat table based upon force ratios (section 3.6.4).
  3. There is a naval combat table based upon force ratios (section 3.9.4).
  4. There are combat values of ground units (section 3.11.5.B)
  5. There is a ground combat table based upon force ratios (section 3.11.5.E)
  6. There is a “tactics degrade multiplier” which effectively divides one side’s combat power by up to 4 (section 3.11.5.P).
  7. These tables use different types of dice for probability generation (showing the influence of Gary Gygax on DOD M&S); a minimal illustrative sketch follows this list.
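For readers who have never seen a combat results table (CRT) expressed in code, here is a hypothetical sketch in Python of a force-ratio lookup with a tactics degrade divisor and a die roll. The ratio breakpoints, outcomes, and die type are invented for illustration; they are not C-WAM’s actual values from section 3.11.5.

```python
import random

# Hypothetical force-ratio combat results table (CRT). The breakpoints,
# outcomes, and d10 die are invented for illustration, not taken from C-WAM.
CRT = [  # (minimum attacker:defender ratio, outcomes by die-roll band)
    (3.0, ["defender retreats", "defender heavy losses", "defender destroyed"]),
    (2.0, ["no effect", "defender retreats", "defender heavy losses"]),
    (1.0, ["attacker heavy losses", "no effect", "defender retreats"]),
    (0.0, ["attacker destroyed", "attacker heavy losses", "no effect"]),
]

def resolve_ground_combat(attacker_cv, defender_cv, tactics_degrade=1.0):
    """Divide the attacker's combat value by a tactics degrade factor (1-4),
    compute the force ratio, find the CRT row, and roll a d10 for the result."""
    ratio = (attacker_cv / tactics_degrade) / defender_cv
    for min_ratio, outcomes in CRT:
        if ratio >= min_ratio:
            roll = random.randint(1, 10)
            band = 0 if roll <= 3 else 1 if roll <= 7 else 2
            return round(ratio, 2), roll, outcomes[band]

print(resolve_ground_combat(attacker_cv=24, defender_cv=10, tactics_degrade=2.0))
```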

A few highlights (briefing):

  1. Executes in 24- or 72-hour time steps (slide 3)
  2. Brigade-level (slide 18)
  3. Breakpoint at 50% strength (can only defend), removed at 30% strength (slide 18 and also rule book, section 5.7.2); a rough sketch of this rule follows the list.
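Here is a minimal sketch in Python of that breakpoint rule as I read it; the 50% and 30% thresholds come from the briefing and rule book, while the function itself (and the choice of strict thresholds) is my own illustration.

```python
# Breakpoint rule sketch: thresholds per the C-WAM briefing (slide 18) and rule
# book (section 5.7.2); whether the cutoffs are strict or inclusive is my guess.

def unit_status(current_strength: float, full_strength: float) -> str:
    fraction = current_strength / full_strength
    if fraction < 0.30:
        return "removed from play"
    if fraction < 0.50:
        return "defend only"
    return "fully capable"

print(unit_status(2000, 4500))  # ~44% strength -> 'defend only'
```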

Anyhow, interesting stuff, but still basically an old-style board game, in the vein of Avalon Hill or SPI.

 

Saudi Missile Defense

The Houthis in Yemen are lobbing missiles at Saudi Arabia. Saudi Arabia does have a missile defense system (I assume made in America). Apparently it is missing the incoming missiles: http://www.businessinsider.com/saudi-missile-defense-failed-video-2018-3

A few other points:

  1. One interceptor appears to have “pulled a U-turn” and exploded over Riyadh.
    1. This interceptor may have been the source of the Saudi casualties (one dead, two injured).
  2. This could be the largest barrage of missiles yet fired at Saudi Arabia by the Houthis.

I wonder what interceptor Saudi Arabia was using, and whether such failures are common with most missile defense systems (the situation with North Korea comes to mind here).

——————————————————————————————————————-

Update:

This is not the first time we have discussed this problem:

Did The Patriot BMD Miss Again In Saudi Arabia?

C-WAM 1

Linked here is an article about a wargame called C-WAM, the Center for Army Analysis (CAA) Wargaming Analysis Model: https://www.govtechworks.com/how-a-board-game-helps-dod-win-real-battles/#gs.ifXPm5M

A few points:

  1. It is an old-style board game.
  2. Results are fed into RAND’s JICM (Joint Integrated Contingency Model).
    1. Battle attrition is done using CAA’s COSAGE and ATCAL.
  3. Ground combat is brigade-level.

More to come.

‘Evett’s Rates’: British War Office Wastage Tables

Stretcher bearers of the East Surrey Regiment, with a Churchill tank of the North Irish Horse in the background, during the attack on Longstop Hill, Tunisia, 23 April 1943. [Imperial War Museum/Wikimedia]

A friend of TDI queried us recently about a reference in Rick Atkinson’s The Guns at Last Light: The War in Western Europe, 1944-1945 to a British casualty estimation methodology known as “Evett’s Rates.” There are few references to Evett’s Rates online, but as it happens, TDI did find out some details about them for a study on casualty estimation. [1]

British Army staff officers during World War II and the 1950s used a set of look-up tables which listed expected monthly losses in percentage of strength for various arms under various combat conditions. The origin of the tables is not known, but they were officially updated twice, in 1942 by a committee chaired by Major General Evett, and in 1951-1955 by the Army Operations Research Group (AORG).[2]

The methodology was based on staff predictions of one of three levels of operational activity: “Intense,” “Normal,” and “Quiet.” These could be applied to an entire theater, or to individual divisions. The three levels were defined the same way for both the Evett Committee and AORG rates.

The rates were broken down by arm and rank, and included battle and nonbattle casualties.

Rates of Personnel Wastage Including Both Battle and Non-battle Casualties According to the Evett Committee of 1942. (Percent per 30 days).

The Evett Committee rates were criticized during and after the war. After British forces suffered twice the anticipated casualties at Anzio, the British 21st Army Group adopted a “double intense rate,” twice the Evett Committee figure, intended for use in assaults. When this led to overestimates of casualties in Normandy, the double intense rate was discarded.

From 1951 to 1955, AORG undertook a study of casualty rates in World War II. Its analysis was based on casualty data from the following campaigns:

  • Northwest Europe, 1944
    • 6-30 June – Beachhead offensive
    • 1 July-1 September – Containment and breakout
    • 1 October-30 December – Semi-static phase
    • 9 February-6 May – Rhine crossing and final phase
  • Italy, 1944
    • January to December – Fighting a relatively equal enemy in difficult country. Warfare often static.
    • January to February (Anzio) – Beachhead held against severe and well-conducted enemy counter-attacks.
  • North Africa, 1943
    • 14 March-13 May – Final assault
  • Northwest Europe, 1940
    • 10 May-2 June – Withdrawal of BEF
  • Burma, 1944-45

From the first four cases, the AORG study calculated two sets of battle casualty rates as percentages of strength per 30 days. “Overall” rates included killed in action (KIA), wounded in action (WIA), and captured or missing in action (C/MIA). “Apparent” rates included these categories but subtracted troops returning to duty. AORG recommended that “overall” rates be used for the first three months of a campaign.

The Burma campaign data was evaluated differently. The analysts defined a “force wastage” category which included KIA, C/MIA, evacuees from outside the force operating area and base hospitals, and deaths from disease and non-battle injury (DNBI). “Dead wastage” included KIA, C/MIA, DNBI dead, and those discharged from the Army as a result of injuries.

The AORG study concluded that the Evett Committee underestimated intense loss rates for infantry and armor during periods of very hard fighting and overestimated casualty rates for other arms. It recommended that if only one brigade in a division was engaged, two-thirds of the intense rate should be applied; if two brigades were engaged, the full intense rate should be applied; and if all brigades were engaged, the intense rate should be doubled. It also recommended that an extra 2% casualties per month be added to all rates for all activities should the forces encounter heavy enemy air activity.[1]
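Taken together, those recommendations amount to a simple scaling rule. The sketch below (in Python) encodes them; the base intense rate is a placeholder parameter, since the actual AORG figures appear in the table that follows.

```python
# Sketch of the AORG adjustments described above. `base_intense_rate` is a
# placeholder (percent of strength per 30 days); actual figures are in the
# AORG table below.

def adjusted_monthly_rate(base_intense_rate, brigades_engaged, heavy_enemy_air=False):
    """Scale the intense rate by how many of a division's three brigades are
    engaged, then add 2 percentage points for heavy enemy air activity."""
    if brigades_engaged <= 1:
        rate = base_intense_rate * 2 / 3
    elif brigades_engaged == 2:
        rate = base_intense_rate
    else:  # all three brigades engaged
        rate = base_intense_rate * 2
    if heavy_enemy_air:
        rate += 2.0
    return rate

# Example: a notional 10% intense rate, all brigades engaged, heavy air activity.
print(adjusted_monthly_rate(10.0, brigades_engaged=3, heavy_enemy_air=True))  # 22.0
```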

The AORG study rates were as follows:

Recommended AORG Rates of Personnel Wastage. (Percent per 30 days).

If anyone has further details on the origins and activities of the Evett Committee and AORG, we would be very interested in finding out more on this subject.

NOTES

[1] This post is adapted from The Dupuy Institute, Casualty Estimation Methodologies Study, Interim Report (May 2005) (Altarum) (pp. 51-53).

[2] Rowland Goodman and Hugh Richardson, “Casualty Estimation in Open and Guerrilla Warfare” (London: Directorate of Science (Land), U.K. Ministry of Defence, June 1995), Appendix A.

TDI Friday Read: Links You May Have Missed, 23 March 2018

To follow on Chris’s recent post about U.S. Army modernization:

On the subject of future combat:

  • The U.S. National Academies of Sciences, Engineering, and Medicine has issued a new report emphasizing the need for developing countermeasures against multiple small unmanned aircraft systems (sUASs) — organized in coordinated groups, swarms, and collaborative groups — which could be used much sooner than the U.S. Army anticipates. [There is a summary here.]
  • National Defense University’s Frank Hoffman has a very good piece in the current edition of Parameters, “Will War’s Nature Change in the Seventh Military Revolution?,” that explores the potential implications of the combinations of robotics, artificial intelligence, and deep learning systems on the character and nature of war.
  • Major Hassan Kamara has an article in the current edition of Military Review contemplating changes in light infantry, “Rethinking the U.S. Army Infantry Rifle Squad.”

On the topic of how the Army is addressing its current and future challenges with irregular warfare and wide area security:

Perla On Dupuy

Dr. Peter Perla, noted defense researcher, wargame designer and expert, and author of the seminal The Art of Wargaming: A Guide for Professionals and Hobbyists, gave the keynote address at the 2017 Connections Wargaming Conference last August. His speech, which served as his valedictory address on the occasion of his retirement from government service, addressed the predictive power of wargaming. In it, Perla recalled a conversation he once had with Trevor Dupuy in the early 1990s:

Like most good stories, this one has a beginning, a middle, and an end. I have sort of jumped in at the middle. So let’s go back to the beginning.

As it happens, that beginning came during one of the very first Connections. It may even have been the first one. This thread is one of those vivid memories we all have of certain events in life. In my case, it is a short conversation I had with Trevor Dupuy.

I remember the setting well. We were in front of the entrance to the O Club at Maxwell. It was kind of dark, but I can’t recall if it was in the morning before the club opened for our next session, or the evening, before a dinner. Trevor and I were chatting and he said something about wargaming being predictive. I still recall what I said.

“Good grief, Trevor, we can’t even predict the outcome of a Super Bowl game much less that of a battle!” He seemed taken by surprise that I felt that way, and he replied, “Well, if that is true, what are we doing? What’s the point?”

I had my usual stock answers. We wargame to develop insights, to identify issues, and to raise questions. We certainly don’t wargame to predict what will happen in a battle or a war. I was pretty dogmatic in those days. Thank goodness I’m not that way any more!

The question of prediction did not go away, however.

For the rest of Perla’s speech, see here. For a wonderful summary of the entire 2017 Connections Wargaming conference, see here.

 

Artificial Intelligence (AI) And Warfare

Arnold Schwarzenegger and friend. [Image Credit Jordan Strauss/Invision/AP/File]

Humans are a competitive lot. With machines making such rapid progress (see Moore’s Law), the singularity approaches—see the discussion between Michio Kaku and Ray Kurzweil, two prominent futurologists. This is the “hypothesis that the invention of artificial super intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.” (Wikipedia). This has also been referred to as general artificial intelligence (GAI) by The Economist, and was previously discussed in this blog.

We humans also exhibit a tendency to anthropomorphize, or to endow any observed object with human qualities. The image above illustrates Arnold Schwarzenegger sizing up his robotic doppelgänger. This is further evidenced by statements made about the ability of military networks to spontaneously become self-aware:

The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn’t too far-fetched, one of the nation’s top military officers said this week. Nor is that kind of autonomy the stuff of the distant future. ‘We’re a decade or so away from that capability,’ said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.

This exhibits a fundamental fear, and I believe a misconception, about the capabilities of these technologies. This is exemplified by Jay Tuck’s TED talk, “Artificial Intelligence: it will kill us.” His examples of AI in use today include airline and hotel revenue management, aircraft autopilot, and medical imaging. He also holds up the MQ-9 Reaper’s Argus (aka Gorgon Stare) imaging system, as well as the X-47B Pegasus, previously discussed, as examples of modern AI and the pinnacle of capability. Among several claims, he states that the X-47B has an optical stealth capability, which is inaccurate:

[X-47B], a descendant of an earlier killer drone with its roots in the late 1990s, is possibly the least stealthy of the competitors, owing to Northrop’s decision to build the drone big, thick and tough. Those qualities help it survive forceful carrier landings, but also make it a big target for enemy radars. Navy Capt. Jamie Engdahl, manager of the drone test program, described it as ‘low-observable relevant,’ a careful choice of words copping to the X-47B’s relative lack of stealth. (Emphasis added).

Such inaccuracies undermine the credibility of these claims. I believe that this is little more than modern fear mongering, playing on ignorance. But Mr. Tuck is not alone. From the forefront of technology, Elon Musk is often held up as an example of commercial success in the field of AI, and he recently addressed the National Governors Association meeting on this topic, specifically on the need for regulation in the commercial sphere.

On the artificial intelligence [AI] front, I have exposure to the most cutting edge AI, and I think people should be really concerned about it. … AI is a rare case, I think we should be proactive in terms of regulation, rather than reactive about it. Because by the time we are reactive about it, it’s too late. … AI is a fundamental risk to human civilization, in a way that car crashes, airplane crashes, faulty drugs or bad food were not. … In space, we get regulated by the FAA. But you know, if you ask the average person, ‘Do you want to get rid of the FAA? Do you want to take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter? Hell no, that sounds terrible.’ Because robots will be able to do everything better than us, and I mean all of us. … We have companies that are racing to build AI, they have to race otherwise they are going to be made uncompetitive. … When the regulators are convinced it is safe then we can go, but otherwise, slow down.  [Emphasis added]

Mr. Musk also hinted at American exceptionalism: “America is the distillation of the human spirit of exploration.” Indeed, the link between military technology and commercial applications is an ongoing virtuous cycle. But the kind of regulation that exists in the commercial sphere, within the national, subnational, and local governments of humankind, does not apply so easily in the field of warfare, where no single authority exists. Any agreement to limit technology is consensus-based, such as a treaty.

The husky was mistakenly classified as a wolf because the classifier learned to use snow as a feature. [Machine Master blog]

In a recent TEDx talk, Peter Haas describes his work in AI and some of the challenges that exist within the state of the art of this technology. As illustrated above, when asked to distinguish between a wolf and a dog, the machine classified the husky in the above photo as a wolf. The humans developing the AI system did not know why this happened, so they asked the system to show the regions of the image that were used to make this decision; the result is depicted on the right side of the image. The fact that this dog was photographed with snow in the background is a form of bias – the presence of snow in a photo offers no conclusive proof that any particular animal is a dog or a wolf.
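One common way to generate that kind of explanation is occlusion analysis: hide one patch of the image at a time and watch how the classifier’s score changes. The sketch below (Python with NumPy) illustrates the idea with a deliberately biased stand-in classifier that keys on overall brightness, a proxy for snow; it is an assumption-laden illustration, not the actual method or model behind the husky example.

```python
import numpy as np

# Occlusion-based explanation sketch. The "classifier" below is a toy that keys
# on overall brightness (a stand-in for snow), which is exactly the kind of
# spurious feature the husky/wolf example illustrates.

def fake_wolf_score(image):
    """Toy classifier: brighter images (more 'snow') look more wolf-like."""
    return float(image.mean())

def occlusion_map(image, classifier, patch=8):
    """Importance of each patch = drop in score when that patch is grayed out."""
    baseline = classifier(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # neutral gray patch
            heat[i // patch, j // patch] = baseline - classifier(occluded)
    return heat

# Toy image: a dark "dog" in the center, bright "snow" everywhere else.
img = np.full((32, 32), 0.9)
img[12:20, 12:20] = 0.2
print(occlusion_map(img, fake_wolf_score).round(3))
# The largest score drops come from the snow patches, not the animal -- the bias made visible.
```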

Right now there are people – doctors, judges, accountants – who are getting information from an AI system and treating it like it was information from a trusted colleague. It is this trust that bothers me. Not because of how often AI gets it wrong; AI researchers pride themselves on the accuracy of results. It is how badly it gets it wrong when it makes a mistake that has me worried. These systems do not fail gracefully.

AI systems clearly have drawbacks, but they also have significant advantages, such as in the curation of a shared model of the battlefield.

In a paper for the Royal Institute of International Affairs in London, Mary Cummings of Duke University says that an autonomous system perceives the world through its sensors and reconstructs it to give its computer ‘brain’ a model of the world which it can use to make decisions. The key to effective autonomous systems is ‘the fidelity of the world model and the timeliness of its updates.‘ [Emphasis added]

Perhaps AI systems might best be employed in the cyber domain, where their advantages are naturally “at home”? Mr. Haas noted that machines at the current time have a tough time doing simple tasks, like opening a door. As was covered in this blog, former Deputy Defense Secretary Robert Work noted this same problem, and thus called for man-machine teaming as one of the key areas of pursuit within the Third Offset Strategy.

Just as the previous blog post illustrates, “the quality of military men is what wins wars and preserves nations.” Let’s remember Paul Van Riper’s performance in Millennium Challenge 2002:

Red, commanded by retired Marine Corps Lieutenant General Paul K. Van Riper, adopted an asymmetric strategy, in particular, using old methods to evade Blue’s sophisticated electronic surveillance network. Van Riper used motorcycle messengers to transmit orders to front-line troops and World-War-II-style light signals to launch airplanes without radio communications. Red received an ultimatum from Blue, essentially a surrender document, demanding a response within 24 hours. Thus warned of Blue’s approach, Red used a fleet of small boats to determine the position of Blue’s fleet by the second day of the exercise. In a preemptive strike, Red launched a massive salvo of cruise missiles that overwhelmed the Blue forces’ electronic sensors and destroyed sixteen warships.

We should learn lessons about overreliance on technology. AI systems are incredibly fickle, but they offer incredible capabilities. We should question and inspect the results produced by such systems. They do not exhibit emotions, they are not self-aware, and they do not spontaneously ask questions unless specifically programmed to do so. We should recognize their significant limitations and use them in conjunction with humans, who will retain command decisions for the foreseeable future.

Reinventing the Army

Interesting article: 2018 Forecast: Can the Army Reinvent Itself?

A few highlights:

  1. They are standing up the Army Futures Command this summer.
    1. Goal is to develop new weapons and new ways to use them.
    2. It has not been announced where it will be located.
  2. They currently have eight “Cross Functional Teams” already set up, led by general officers.
    1. Army Chief of Staff General Mark Milley has a list of “Big Six” modernization priorities. They are: 1) long-range missiles, 2) new armored vehicles, 3) high-speed replacements for current helicopters, 4) secure command networks, 5) anti-aircraft and missile defense, and 6) soldier equipment.
      1. There is a link for each of these in this article: https://breakingdefense.com/2017/12/army-shifts-1b-in-st-plans-modernization-command-undersec-mccarthy/
    2. This effort will start making its mark “in earnest” with the 2020 budget.
      1. The 2018 and 2019 budgets have been approved. In the current political environment, it is hard to say what the 2020 budget will look like [these are my thoughts, not part of the article].
    3. The U.S. Army has approved Active Protection Systems (APS) for its tanks to shoot down incoming missiles, as Russia and Israel are doing.
      1. Goal is to get a brigade of M1 Abrams tanks outfitted with the Israeli-made Trophy APS by 2020 [why do I get the sense from the wording that this date is not going to be met?].
      2. They are testing APS for Bradleys and Strykers.
        1. Also testing anti-aircraft versions of these vehicles.
        2. Also testing upgunned Strykers.
      3. The Army is building the Mobile Protected Firepower (MPF) light tank to accompany airborne troops.
        1. An RFP has been issued; contract award is expected in early 2019.
    4. The Army is the lead sponsor for Future Vertical Lift (FVL), intended to replace existing helicopters. Flight testing has started.
    5. This is all part of the Multi-Domain Battle concept.
      1. They are moving the thinkers behind the Multi-Domain Battle from the Training & Doctrine Command (TRADOC) to the Futures Command.
      2. Milley has identified Russia as the No. 1 threat. [We will note that several years ago some influential people were tagging China as the primary threat.]
      3. Still, Milley has stood up two advisor brigades [because we have wars in Afghanistan, Iraq, Syria, Niger/Mali, Somalia, Yemen, etc. that don’t seem to be going away].