Validating A Combat Model (Part VI)

Advancing Germans, halted by the 2nd Battalion, 5th Marines, on June 3, 1918, attacked the American lines through the wheat fields at Les Mares Farm, 2 1/2 miles west of Belleau Wood. From a painting by Harvey Dunn. [U.S. Navy]

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

The First Test of the TNDM Battalion-Level Validations: Predicting the Winners
by Christopher A. Lawrence

CASE STUDIES: WHERE AND WHY THE MODEL FAILED CORRECT PREDICTIONS

World War I (12 cases):

Yvonne-Odette (Night)—On the first prediction, the model selected the defender as the winner, with the attacker making no advance. The force ratio was 0.5 to 1. The historical results also show the attacker making no advance, but rate the attacker's mission accomplishment score as 6 while the defender's is rated 4. Therefore, this battle was scored as a draw.

On the second run, the Germans (Sturmgruppe Grethe) were assigned a CEV of 1.9 relative to the US 9th Infantry Regiment. This produced a draw with no advance.

This appears to be a result that was corrected by assigning the CEV to the side that would be expected to have that advantage. There is also a problem in defining who is the winner.

Hill 142—On the first prediction the defending Germans won, whereas in the real world the attacking Marines won. The Marines are recorded as having a higher CEV in a number of battles, so when this correction is put in, the Marines win with a CEV of 1.5. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run to replicate historical results.

Note that while many people would expect the Germans to have the higher CEV, at this juncture in WWI the German regular army was becoming demoralized, while the US Army was highly motivated, trained, and fresh. While I did not initially expect to see a superior CEV for the US Marines, when I did see it I was not surprised. I also was not surprised to note that the US Army had a lower CEV than the Marine Corps, or that the German Sturmgruppe Grethe had a higher CEV than the US side. As shown in the charts below, the US Marines' CEV is usually higher than the German CEV for the engagements of Belleau Wood, although this result is not very consistent in value. But this higher value does track with Marine Corps legend. I personally do not have sufficient expertise on WWI to confirm or deny the validity of the legend.

West Wood I—On the first prediction the model rated the battle a draw with a minimal advance (0.265 km) for the attacker, whereas historically the attackers were stopped cold with a bloody repulse. The second run predicted a very high CEV of 2.3 for the Germans, who stopped the attackers with a bloody repulse. The results are not easily explainable.

Bouresches I (Night)—On the first prediction the model recorded an attacker victory with an advance of 0.5 kilometer. Historically, the battle was a draw with an attacker advance of one kilometer. The attacker's mission accomplishment score was 5, while the defender's was 6. Historically, this battle could also have been considered an attacker victory. A second run with the German CEV increased to 1.5 records it as a draw with no advance. This appears to be a problem in defining who is the winner.

West Wood II—On the first run, the model predicted a draw with an advance of 0.3 kilometers. Historically, the attackers won and advanced 1.6 kilometers. A second run with a US CEV of 1.4 produced a clear attacker victory. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

North Woods I—On the first prediction, the model records the defender winning, while historically the attacker won. A second run with a US CEV of 1.5 produced a clear attacker victory. This appears to be a case where the side that would be expected to have the higher CEV needed that CEV input into the combat run.

Chaudun—On the first prediction, the model predicted the defender winning when historically the attacker clearly won. A second run with an outrageously high US CEV of 2.5 produced a clear attacker victory. The results are not easily explainable.

Medeah Farm—On the first prediction, the model recorded the defender as winning when historically the attacker won with high casualties. The battle consists of a small number of German defenders with lots of artillery defending against a large number of US attackers with little artillery. On the second run, even with a US CEV of 1.6, the German defender won. The model was unable to select a CEV that would get a correct final result yet reflect the correct casualties. The model is clearly having a problem with this engagement.

Exermont—On the first prediction, the model recorded the defender as winning when historically the attacker did, with both the attacker's and the defender's mission accomplishment scores being rated at 5. The model did rate the defender's casualties too high, so when it calculated what the CEV should be, it gave the defender a higher CEV so that it could bring down the defender's losses relative to the attacker's. Otherwise, this is a normal battle. The second prediction was no better. The model is clearly having a problem with this engagement due to the low defender casualties.

Mayache Ravine—The model predicted the winner (the attacker) correctly on the first run, with the attacker having an opposed advance of 0.8 kilometer. Historically, the attacker had an opposed rate of advance of 1.3 kilometers. Both sides had a mission accomplishment score of 5. The problem is that the model predicted higher defender casualties than attacker casualties, while in the actual battle the defender had lower casualties than the attacker. On the second run, therefore, the model put in a German CEV of 1.5, which resulted in a draw with the attacker advancing 0.3 kilometers. This brought the casualty estimates more in line, but turned a successful win/loss prediction into one that was "off by one." The model is clearly having a problem with this engagement due to the low defender casualties.

La Neuville—The model also predicted the winner (the attacker) correctly here, with the attacker advancing 0.5 kilometer. In the historical battle they advanced 1.6 kilometers. But again, the model predicted lower attacker losses than the defender losses, while in the actual battle the defender losses were much lower than the attacker losses. So, again on the second run, the model gave the defender (the Germans) a CEV of 1.4, which turned an accurate win/loss prediction into an inaccurate one. It still didn’t do a very good job on the casualties. The model is clearly having a problem with this engagement due to the low defender casualties.

Hill 252—On the first run, the model predicts a draw with an advance of 0.2 km, while the real battle was an attacker victory with an advance of 2.9 kilometers. The model's casualty predictions are quite good. On the second run, the model correctly predicted an attacker win with a US CEV of 1.5. The distance advanced increases to 0.6 kilometer, while the casualty prediction degrades noticeably. The model is having some problems with this engagement that are not really explainable, but the results are not far off the mark.

Next: WWII Cases

Data Used for the ARL Paper

This is a follow-up post to this on the work being done at the Army Research Laboratory (ARL) by Dr. Alexander Kott:

The Evolution of Weapons and Warfare?

Page 9 of Dr. Kott's paper provides the following table:

This is a sample of the data used for 8 weapons systems. He ended up using 195 weapon systems for his analysis. This is discussed in depth in his paper (referenced in his footnote 12): “Kott A. Initial datasets for explorations in long-range forecasting of military technologies. Adelphi (MD): Army Research Laboratory; 2019. 128 p. Report No.: ARL-SR-0417.” It is here:

https://www.arl.army.mil/arlreports/2019/ARL-SR-0417.pdf

These are all ground-based systems (no aircraft) that are either direct fire, or indirect fire systems using explosive rounds.

 

————-

P.S. Now, the figure of a rate of fire of 30 for the horse-mounted harquebusier got my attention, as no other muzzle-loading weapon has a rate of fire above 3 rounds per minute. I did discuss this with Dr. Kott. He has a note in his papers that states:

MFS048: I consider the harquebusier (see Wikipedia “Harquebusier”) of the early 17th century (taken as 1620) as light armored at 160 J of protection and with armament that is an interpolation between a light harquebus (which they often could fire only once at the beginning of the engagement and produced about 1600 J KE) and a sword/saber that produced about 100 J per hack (see data for gladius in Note MFS005). I take this intermediate effect as corresponding to about 500 J, and assign an artificial projectile mass and velocity to account for this. I assume that the maximum rate of sword blows could reach 30 per minute.

Note, his figures are based upon cyclic rate of fire, not sustained rate of fire. This will be the subject of a future post.
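Since the note interpolates between kinetic-energy figures, here is a minimal back-of-the-envelope sketch of that arithmetic, assuming the standard KE = 1/2 mv² relation. The 30-gram projectile mass in the example is my own illustrative choice; Dr. Kott's actual "artificial" mass and velocity are not given in the quoted note.

```python
# Back-of-the-envelope check of the kinetic-energy figures in the note above.
# The 0.030 kg mass is an assumed, illustrative value, not from Kott's dataset.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy in joules: KE = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

def velocity_for_energy(energy_j: float, mass_kg: float) -> float:
    """Velocity (m/s) a projectile of a given mass needs to carry a given energy."""
    return (2 * energy_j / mass_kg) ** 0.5

# Any mass/velocity pair satisfying 1/2*m*v^2 = 500 J reproduces the "intermediate"
# effect between the ~1600 J harquebus shot and the ~100 J sword hack.
v = velocity_for_energy(500, 0.030)
print(round(v, 1))                        # ~182.6 m/s for a 30 g projectile
print(round(kinetic_energy(0.030, v)))    # 500 J, as intended
```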

Validating A Combat Model (Part V)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

The First Test of the TNDM Battalion-Level Validations: Predicting the Winners
by Christopher A. Lawrence

Part II

CONCLUSIONS:

WWI (12 cases):

For the WWI battles, the nature of the prediction problems is summarized as:

CONCLUSION: In the case of the WWI runs, five of the problem engagements were due either to confusion in defining the winner or to a clear CEV existing for a side whose advantage should have been predictable. Seven out of the 23 runs have some problems, with three of those problems resolving themselves by assigning a CEV value to a side that may not have deserved it. One (Medeah Farm) was just off any way you look at it, and three suffered problems because historically the defenders (Germans) suffered surprisingly low losses. Two had the battle outcome predicted correctly on the first run, and then had the outcome incorrectly predicted after a CEV was assigned.

With 5 to 7 clear failures (depending on how you count them), one is led to conclude that the TNDM can be relied upon to predict the winner in a WWI battalion-level battle in about 70% of the cases.

WWII (8 cases):

For the WWII battles, the nature of the prediction problems is summarized as:

CONCLUSION: In the case of the WWII runs, three of the problem engagements were due either to confusion in defining the winner or to a clear CEV existing for a side whose advantage should have been predictable. Four out of the 23 runs suffered a problem because historically the defenders (Germans) suffered surprisingly low losses, and one case simply assigned a possibly unjustifiable CEV. This led to the battle outcome being predicted correctly on the first run, then incorrectly predicted after a CEV was assigned.

With 3 to 5 clear failures, one can conclude that the TNDM can be relied upon to predict the winner in a WWII battalion-level battle in about 80% of the cases.

Modern (8 cases):

For the post-WWII battles, the nature of the prediction problems is summarized as:

CONCLUSION: In the case of the modern runs, only one result was a problem. In the other seven cases, when the force with superior training is given a reasonable CEV (usually around 2), then the correct outcome is achieved. With only one clear failure, one can conclude that the TNDM can be relied upon to predict the winner in a modern battalion-level battle in over 90% of the cases.

FINAL CONCLUSIONS: In this article, the predictive ability of the model was examined only for its ability to predict the winner/loser. We did not look at the accuracy of the casualty predictions or the accuracy of the rates of advance. That will be done in the next two articles. Nonetheless, we could not help but notice some trends.

First and foremost, while the model was expected to be a reasonably good predictor of WWII combat, it did even better for modern combat. It was noticeably weaker for WWI combat. In the case of the WWI data, all attrition figures were multiplied by 4 ahead of time because we knew that there would otherwise be a fit problem.

This would strongly imply that there were more significant changes to warfare between 1918 and 1939 than between 1939 and 1989.

Secondly, the model is a pretty good predictor of winner and loser in WWII and modern cases. Overall, the model predicted the winner in 68% of the cases on the first run and in 84% of the cases in the run incorporating CEV. While its predictive powers were not perfect, there were 13 cases (17%) where it just wasn't getting a good result. Over half of these were from WWI, and only one was from the modern period.

In some of these battles it was pretty obvious who was going to win. Therefore, the model needed to do better than 50% to even be considered. Historically, in 51 out of 76 cases (67%), the larger side in the battle was the winner. One could predict the winner/loser with a reasonable degree of success just by looking at that rule. But the percentage of the time the larger side won varied widely with the period. In WWI the larger side won 74% of the time. In WWII it was 87%. In the modern period it was a counter-intuitive 47% of the time, yet the model was best at selecting the winner in the modern period.

The model's ability to predict WWI battles is still questionable. It obviously does a pretty good job with WWII battles and appears to be doing an excellent job in the modern period. We suspect that the difference in prediction rates between WWII and the modern period is caused by the selection of battles, not by any inherent ability of the model.

RECOMMENDED CHANGES: While it is too early to settle upon a model improvement program, just looking at the problems of winning and losing, and the ancillary data to that, leads me to three corrections:

  1. Adjust for times of less than 24 hours. Create a formula so that battles of six hours in length are not 1/4 the casualties of a 24-hour battle, but something greater than that (possibly the square root of time); see the sketch after this list. This adjustment should affect both casualties and advance rates.
  2. Adjust advance rates for smaller units, to account for the fact that smaller units move faster than larger units.
  3. Adjust for fanaticism to account for those armies that continue to fight after most people would have accepted the result, driving up casualties for both sides.
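As a rough illustration of item 1, here is a minimal sketch of the difference between linear time scaling and square-root-of-time scaling for battles shorter than 24 hours. Neither function is actual TNDM code; the square-root form is simply the possibility suggested above.

```python
# Illustrative only: two ways of scaling 24-hour casualty figures down to a
# shorter battle. The square-root version is the adjustment suggested in item 1.

def linear_time_fraction(hours: float) -> float:
    """Current behavior described above: a 6-hour battle gets 1/4 the casualties."""
    return hours / 24.0

def sqrt_time_fraction(hours: float) -> float:
    """Proposed behavior: scale by the square root of the time fraction."""
    return (hours / 24.0) ** 0.5

print(linear_time_fraction(6))   # 0.25
print(sqrt_time_fraction(6))     # 0.5 -- half, not a quarter, of the 24-hour casualties
```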

Next: Part III: Case Studies

The Evolution of Weapons and Warfare?

Many years ago, Trevor Dupuy wrote the book The Evolution of Weapons and Warfare. One of the great graphics from that book was:

This graphic either intrigued or excited the reader, or gave him serious heartburn. It was a little ambitious in a lot of people's minds.

Well, I found something more ambitious here: https://www.defenseone.com/technology/2019/09/formula-predicts-soldier-firepower-2050/159931/

It produces this graphic:

There is a “press release” here: https://scitechdaily.com/u-s-army-research-uncovers-pattern-in-progression-of-weapons-technologies/

The actual more detailed article is here: https://admin.govexec.com/media/universallaw.docx

This link leads to the 28-page article by Alexander Kott, chief scientist of the Army Research Laboratory (ARL). It is an interesting idea. It is an idea that I also toyed with at times, but never took the time to actually turn into a meaningful set of formulae.

I will probably have a few more comments on this work in the next couple of weeks.

Validating A Combat Model (Part IV)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

The First Test of the TNDM Battalion-Level Validations: Predicting the Winners
by Christopher A. Lawrence

Part I

In the basic concept of the TNDM battalion-level validation, we decided to collect data from battles from three periods: WWI, WWII, and post-WWII. We then made a TNDM run for each battle exactly as the battle was laid out, with both sides having the same CEV [Combat Effectiveness Value]. The results of that run indicated what the CEV should have been for the battle, and we then made a second run using that CEV. That was all we did. We wanted to make sure that there was no “tweaking” of the model for the validation, so we stuck rigidly to this procedure. We then evaluated each run for its fit in three areas:

  1. Predicting the winner/loser
  2. Predicting the casualties
  3. Predicting the advance rate

We did end up changing two engagements around. We had a similar situation in one WWII engagement (Tenaru River) and one modern-period engagement (Bir Gifgafa): in both, the defender received reinforcements part-way through the battle and counterattacked. In both cases we decided to run them as two separate battles (adding two more battles to our database), with the conditions at the end of the first engagement, plus the reinforcements, being the starting strength for the second engagement. Based on our previous experience with running Goose Green, for all the Falkland Islands battles we counted the Milans and Carl Gustavs as infantry weapons. That is the only "tweaking" we did that affected the battle outcome in the model. We also put in a casualty multiplier of 4 for WWI engagements, but that is discussed in the article on casualties.

This is the analysis of the first test, predicting the winner/loser. Basically, if the attacker won historically, we assigned the battle a value of 1, a draw was 0, and a defender win was -1. The TNDM results summary has a column called "winner" which records either an attacker win, a draw, or a defender win. We compared these two results. If they were the same, this is a "correct" result. If they are "off by one," this means the model predicted an attacker win or loss where the actual result was a draw, or the model predicted a draw where the actual result was a win or loss. If they are "off by two," then the model simply missed and predicted the wrong winner.
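A minimal sketch of this scoring scheme, assuming a simple three-way outcome coding, is below; the function and outcome labels are illustrative only, not TNDM output.

```python
# Sketch of the win/loss scoring described above: outcomes are coded 1 / 0 / -1
# and the model's prediction is compared against the historical result.

OUTCOME_VALUE = {"attacker win": 1, "draw": 0, "defender win": -1}

def score_prediction(historical: str, predicted: str) -> str:
    """Return 'correct', 'off by one', or 'off by two' for a single engagement."""
    diff = abs(OUTCOME_VALUE[historical] - OUTCOME_VALUE[predicted])
    if diff == 0:
        return "correct"        # model and history agree
    if diff == 1:
        return "off by one"     # a win or loss predicted as a draw, or vice versa
    return "off by two"         # the model picked the wrong winner outright

# Example: the model predicts a draw where the attacker actually won.
print(score_prediction("attacker win", "draw"))   # -> off by one
```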

The results are (the envelope please….):

It is hard to determine what separates good predictability from bad. Obviously, the initial WWI prediction of 57% right is not very good, while the modern second-run result of 97% is quite good. What I would really like to do is compare these outputs to some other model (like TACWAR) to see if it gets a closer fit. I have reason to believe that it will not do better.

Most cases in which the model was "off by one" were easily correctable by accounting for the different personnel capabilities of the armies. Therefore, to see where the model really failed, let's look only at where it simply got the wrong winner:

The TNDM is not designed or tested for WWI battles. It is basically designed to predict combat between 1939 and the present. The total percentages with the WWI data excluded are:

Overall, based upon this data I would be willing to claim that the model can predict the correct winner 75% of the time without accounting for human factors and 90% of the time if it does.

CEVs: Quite simply, a user of the TNDM must develop a CEV to get a good prediction. In this particular case, the CEVs were developed from the first run. This means that in the second run, the numbers have been juggled (by changing the CEV) to get a better result. This would make the effort meaningless if the CEVs were not fairly consistent over several engagements for one side versus the same opponent. Therefore, they are listed below in broad groupings so that the reader can determine whether the CEVs appear to be basically valid or are simply being used as a "tweak."

Now, let’s look where it went wrong. The following battles were not predicted correctly:

There are 19 night engagements in the database: five from WWI, three from WWII, and 11 modern. We looked at whether the mispredictions were clustered among night engagements, and that did not seem to be the case. Unable to find a pattern, we examined each engagement to see what the problem was. See the attachments at the end of this article for details.

We did obtain CEVs that showed some consistency. These are shown below. The Marines in World War I recorded the following CEVs in these battles:

Compare those figures to the performance of the US Army:

In the above two and in all following cases, the italicized battles are the ones with which we had prediction problems.

For comparison purposes, the following CEVs were recorded in the World War II battles between the US and Japan:

For comparison purposes, the following CEVs were recorded in Operation Veritable:

These are the other engagements versus Germans for which CEVs were recorded:

For comparison purposes, the following CEVs were recorded in the post-WWII battles between Vietnamese forces and their opponents:

Note that the Americans have an average CEV advantage of 1.6 over the NVA (only three cases) while having a 1.8 advantage over the VC (6 cases).

For comparison purposes, the following CEVs were recorded in the battles between the British and the Argentines:

Next: Part II: Conclusions

The Best and The Brightest

One of the seminal works coming out of the Vietnam War was David Halberstam's book The Best and the Brightest, about the highly intelligent, highly educated "whiz kids" who were brought into our national security structure in the 1950s and 1960s and ended up tangled in the unsolvable Vietnam War. This tendency for the foreign policy team to include highly educated specialists was reinforced by Nixon hiring the scholar Henry Kissinger as his National Security Advisor and later Secretary of State. This has become somewhat of a tradition, where the National Security Advisor is often a reputable academic like Rostow (PhD, Yale), Kissinger (PhD, Harvard), or Brzezinski (PhD, Harvard). Even Trump's second national security advisor, the legendary three-star general H. R. McMaster, had a PhD and had published one book.

So the tradition, for better or worse, is that the U.S. national security team consists of a smattering of "whiz kids," academics, and some of the "Best and the Brightest." This tradition does not appear to be closely adhered to now. The Secretary of State, Mike Pompeo, is a lawyer (although from Harvard) and career politician. The newly nominated National Security Advisor is Robert O'Brien, also a lawyer. The previous holder of that office, the infamous John Bolton, was also a lawyer. The head of the Defense Department is Mark Esper, who has a PhD in Public Policy.

I will leave it to the reader as to whether having a bunch of Harvard academics with a background in International Relations results in better foreign policy. I just note that this is now no longer the tradition. It is mostly lawyers now.

 

 

P.S. A few related posts:

Secretary of the Army, take 3

Secretary of Defense – 3

H. R. McMaster

McMaster vs Spector on Vietnam

Validating A Combat Model (Part III)

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

Numerical Adjustment of CEV Results: Averages and Means
by Christopher A. Lawrence and David L. Bongard

As part of the battalion-level validation effort, we made two runs with the model for each test case—one without the CEV [Combat Effectiveness Value] incorporated and one with the CEV incorporated. The printout of a TNDM [Tactical Numerical Deterministic Model] run has three CEV figures for each side: CEVt, CEVl, and CEVad. CEVt shows the CEV as calculated on the basis of battlefield results, as a ratio of the performance of side a versus side b. It measures performance based upon three factors: mission accomplishment, advance, and casualty effectiveness. CEVt is calculated according to the following formula:

P′ = Refined Combat Power Ratio (the sum of the modified OLIs). The ′ in P′ indicates that this ratio has been "refined" (modified) by two behavioral values already: the factor for Surprise and the Set Piece Factor.

CEVd = 1/CEVa (the reciprocal)

In effect the formula is relative results multiplied by the modified combat power ratio. This is basically the formulation that was used for the QJM [Quantified Judgement Model].

In the TNDM Manual, there is an alternate CEV method based upon comparative effective lethality. This methodology has the advantage that the user doesn't have to evaluate mission accomplishment on a ten-point scale. The CEVl is calculated according to the following formula:

In effect, CEVt is a measurement of the difference between the results predicted by the model and the actual historical results, based upon an assessment of three different factors (mission success, advance rates, and casualties), while CEVl is a measurement of the difference between predicted casualties and actual casualties. The CEVt and the CEVl of the defender are the reciprocals of those of the attacker.

Now the problem comes in when one creates the CEVad, which is the average of the two CEVs above. I simply do not know why it was decided to create an alternate CEV calculation from the old QJM method, and then average the two, but this is what is currently being done in the model. This averaging results in revised CEVs for the attacker and for the defender that are not reciprocals of each other, unless the CEVt and the CEVl were the same. We even have some cases where both sides had a CEVad of greater than one. Also, by averaging the two, we have heavily weighted casualty effectiveness relative to mission accomplishment and advance.
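A quick illustration of the non-reciprocity problem with made-up numbers (not validation data): suppose the attacker's CEVt is 2.0 and its CEVl is 1.2.

```python
# Made-up values showing why averaging CEVt and CEVl breaks the reciprocal
# relationship between the attacker's and defender's CEVs.

att_cevt, att_cevl = 2.0, 1.2
def_cevt, def_cevl = 1 / att_cevt, 1 / att_cevl      # defender's values are reciprocals

att_cevad = (att_cevt + att_cevl) / 2                 # 1.600
def_cevad = (def_cevt + def_cevl) / 2                 # 0.667

# 1/1.6 = 0.625, not 0.667, so the two CEVads are no longer reciprocals.
print(round(att_cevad, 3), round(def_cevad, 3), round(1 / att_cevad, 3))
```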

What was done in these cases (again based more on TDI tradition or habit, and not on any specific rule) was:

(1.) If CEVad are reciprocals, then use as is.

(2.) If one CEV is greater than one while the other is less than one, then add the higher CEV to the reciprocal of the lower CEV (1/x) and divide by two. This result is the CEV for the superior force, and its reciprocal is the CEV for the inferior force.

(3.) If both CEVs are above one, then we divide the larger CEVad value by the smaller, and use the result as the superior force's CEV.

In the case of (3.) above, this methodology usually results in a slightly higher CEV for the attacking side than if we averaged the larger CEVad with the reciprocal of the smaller (usually 0.1 or 0.2 higher). While the mathematical and logical consistency of the procedure bothered me, the logic for the different procedure in (3.) was that the model was clearly having a problem with predicting the engagement to start with, but that in most cases when this happened before (meaning before the validation), a higher CEV usually produced a better fit than a lower one. As this is what was done before, I accepted it as is, especially if one looks at the example of Medeah Farm. If one averages the reciprocal with the US's CEV of 8.065, one would get a CEV of 4.13. By the methodology in (3.), one comes up with a more reasonable US CEV of 1.58.
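The three rules can be summarized in a short sketch. The function below is my paraphrase of the TDI "tradition," not code from the TNDM itself, and the German CEVad of roughly 5.1 used in the example is inferred from the 4.13 average quoted above rather than read from a printout.

```python
# Sketch of the three house rules for reducing the two CEVad figures to a
# single CEV for the superior force (the inferior force gets the reciprocal).

def adjust_cevad(att_cevad: float, def_cevad: float) -> float:
    # (1) Already reciprocals: use as is.
    if abs(att_cevad - 1 / def_cevad) < 1e-6:
        return max(att_cevad, def_cevad)
    # (2) One above one, the other below one: average the higher value with
    #     the reciprocal of the lower value.
    if (att_cevad > 1) != (def_cevad > 1):
        hi, lo = max(att_cevad, def_cevad), min(att_cevad, def_cevad)
        return (hi + 1 / lo) / 2
    # (3) Both above one: divide the larger value by the smaller.
    return max(att_cevad, def_cevad) / min(att_cevad, def_cevad)

# Medeah Farm, roughly: a US CEVad of 8.065 (quoted above) and an inferred
# German CEVad of about 5.1 give a US CEV of about 1.58 under rule (3).
print(round(adjust_cevad(8.065, 5.1), 2))   # ~1.58
```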

The interesting aspect is that the TNDM rules manual explains how CEVt, CEVl, and CEVad are calculated, but it never explains which CEVad (attacker or defender) should be used. This is the first explanation of this process, and it was based upon the "traditions" used at TDI. There is a strong argument to merge the two CEVs into one formulation. I am open to another methodology for calculating the CEV. I am not satisfied with how the CEV is calculated in the TNDM and intend to look into this further. Expect another article on this subject in the next issue.

Losses of the 32nd and 31st Tank Brigades at Prokhorovka

Dr. Wheatley asked me to list the losses of the 32nd and 31st Tank Brigades on 12 July 1943. They were the two attacking tank brigades on the right flank of the XXIX Tank Corps, with the 32nd Tank Brigade in the first echelon and the 31st in the second echelon. Next to the 32nd Tank Brigade was the 25th Tank Brigade, and they were supported by the 53rd Motorized Rifle Brigade. Here are their reports (the text in italics consists of direct translations of the reports, done by Dr. Richard Harrison):

Operational Report #90, 0800 July 11, 1943. HQ 29th TC:

Corps material and supply situation:

25th TBde: 32 T-34s, 39 T-70s, 103 cars, 4 45mm guns, 3 37mm guns, 6 82mm mortars

31st TBde: 31 T-34s, 39 T-70s, 103 cars, 4 45mm guns, 2 37mm guns, 6 82mm mortars

32nd TBde: 63 T-34s, 102 cars, 4 45mm guns, 2 25mm guns, and 6 82mm mortars

53rd MotRBde: 293 cars, 17 BA-64 armored cars, 12 76mm guns, 12 45mm guns, 30 82mm mortars and 6 120mm mortars.

271st Mortar Rgt: 69 cars and 36 120mm mortars

1446th Self-Propelled ArtRgt: 28 cars, 9 76mm SP guns, 12 122mm howitzers

108th ATArtRgt: 37 cars, 12 76mm guns and 8 45mm guns

75th Motorcycle Bn: 10 BA-64s, 13 cars, 72 motorcycles, and 4 82mm mortars

38th Armored Bn: 7 T-70s, 12 BA-10s, 10 BA-64s and 12 cars

363rd Ind Communications Bn: 74 cars, 10 BA-64s, and 3 T-34s.

193rd Sapper Bn: 31 cars

69th (?) Reconnaissance Bn: 15 cars

72nd (?) Reconnaissance Bn: 10 cars

1st (?) Co: 45 cars

7th (?): 6 cars

Combat Report #73, 1600, July 11, 1943, HQ: 29th TC:

Type……………………….25th TBde…..31st TBde…..32nd TBde…..1446th SP Art Rgt

T-34…………………………31………………29……………….60………………-

T-34 (in repair)…………..1……………….3…………………..4……………….-

T-70………………………..36……………….38………………..-………………..-

T-70 (in repair)…………..3………………..1…………………-…………………-

KV……………………………1………………………………………………………..-

122mm SAU………………1……………………………………………………….11

76mm SAU………………..1………………………………………………………….8

Corps Strength 123 T-34s, 81 T-70s, 11 122mm SAUs, and 8 76mm SAU.

Note that this Corps Strength list does not match the list above in any category, in part because there were 7 T-70s with the 38th Armored Bn and 3 T-34s with the 363rd Ind Communications Bn.

Combat Report #75, 2400, July 12, 1943, HQ 29th TC:

25th Tank Brigade losses: 140 men killed, 180 wounded. 13 T-34s and 10 T-70s were irretrievably lost; 11 T-34s and 10 T-70s were knocked out or hit mines; 7 T-34s and 4 T-70s are out of action due to technical breakdowns.

32nd Tank Brigade losses: 100 men killed and 130 wounded. Overall, 54 T-34s were either burned, knocked out, or are in need of repair.

31st Tank Brigade losses: 20 T-34s and 18 T-70s knocked out and burned. Tanks in line: 3, with the location and condition of the remainder being investigated.

During the night 3 T-34s and 1 122mm SAU were repaired.

The evacuation of knocked-out tanks is being carried out by 3 turretless T-34s and a single M-3 "Grant". Four brigades are working to restore damaged equipment, with one working to repair self-propelled guns, 2 brigades working to repair the 32nd TBde's equipment, and 1 working for the 31st TBde.

Note the reference to evacuation of tanks, which does have some definite impact on the photo reconnaissance pictures taken on 16 July and 7 August 1943.

Operational Report #2, 0700, July 13, 1943. HQ 5th Gds Tank Army:

29th TC: Losses: 95 T-34s, 38 T-70s, 8 self-propelled platforms, 240 men killed and 610 wounded.

Combat Report #76, 1300, July 13, 1943, HQ 29th TC:

31st Tank Bde: Material Supply and condition: 8 T-34s and 20 T-70s in line; during the night 8 T-34s were evacuated from the field.

Losses for 12 July: 14 men killed, 27 wounded, and 15 missing. 1 45mm gun wrecked, 1 heavy MG, 2 SMGs and 1 rifle.

25th Tank Bde, consisting of 50th MotRBn, 11 T-70s and 2 guns from an antitank battalion, are defending 1 km east of Storzhevoye.

32nd Tk Bde: Tanks in line: 12 T-34s

1529th Self-Propelled Art Rgt is in Prokhorovka.

Operational Report #91, 0400 July 14, 1943. HQ 29th TC:

25th TBde losses: 40 men killed, 87 wounded, 2 T-70s burned, and 1 knocked out.

53rd MotRBde: Losses for July 12: 517 men killed and missing, and 572 wounded; 16 heavy MGs, 25 AT rifles, 2 45mm guns, 13 light MGs, and 2 cars.

1446th Self-Propelled ArtRgt turned over 2 guns to 25th TBde and 6 to 32nd TBde. Losses for July 12: 19 men killed, 14 wounded; 8 122mm SAUs and 3 76mm SAUs destroyed.

108th ATArtRgt is the corps commander’s reserve without losses

271st Mortar Rgt has been subordinated to 53rd MotRBde. Losses for July 12: 5 men killed and missing, with 4 wounded.

On July 12 1 man was killed and another wounded.

Material Condition:

On hand: 31 T-34s, 40 T-70s, 3 122mm SAUs, and 5 76mm SAUs

Losses: 58 T-34s, 23 T-70s, 8 122mm SAUs, and 3 76mm SAUs

Undetermined location: 18 T-34s and 9 T-70s

Needing major repairs: 11 T-34s and 5 T-70s

Needing lesser repairs: 13 T-34s and 8 T-70s

 

Operational Report #4, 0700, July 14, 1943. HQ 5th Gds Tank Army:

29th TC: Losses: 3 T-70s, of which 2 were irreplaceable; 40 men killed and 87 wounded. Tanks on hand: 31 T-34s, 40 T-70s.

Operational Report #92, 1600 July 14, 1943. HQ 29th TC:

25th TBde losses: 1 T-70 burned, 1 man killed and 5 wounded.

Equipment Strength:

On hand: 33 T-34s, 39 T-70s, 3 122mm SAUs, and 5 76mm SAUs.

 

Combat Report #77, 1900, July 14, 1943. HQ 29th TC:

25th TBde: Losses 1 T-70 burned, 1 man killed and 5 wounded.

Operational Report #5, 1900, July 14, 1943. HQ 5th Gds Tank Army:

29th TC: Losses: 1 T-70 burned, 1 man killed and 5 wounded. Tanks on hand: 33 T-34s and 39 T-70s.

Operational Report #6, 0700, July 15, 1943. HQ 5th Gds Tank Army:

29th TC: Tanks in line: 35 T-34s and 40 T-70s

Operational Report #94, 1600 July 15, 1943. HQ 29th TC:

31st TBde: Tanks on hand: 15 T-34s and 20 T-70s. Losses: 1 man killed.

53rd MotRBde: Losses 1 man killed, 17 wounded.

25th TBde: Tanks on hand: 5 T-34s and 19 T-70s. Losses: 1 T-70 knocked out, 1 man killed.

32nd TBde: Tanks on hand: 15 T-34s.

Operational Report #7, 0400, July 16, 1943. HQ 5th Gds Tank Army:

29th TC: Losses: 1 T-70 knocked out, 1 man killed. Tanks in line: 40 T-34s and 45 T-70s.

Combat Report #80, 1900 July 16, 1943 HQ 29th TC:

25th TBde: Losses: none. Material Status: 5 T-34s and 17 T-70s in the line; 4 antitank guns; 5 82mm mortars; 3 37mm AA guns.

31st TBde: Material Status: 16 T-34s and 21 T-70s in the line; 3 45mm guns, 2 37mm guns, 2 MBGs, and 3 82mm guns [probably mortars]

32nd TBde: Losses for July 16: 5 men killed, 5 wounded, 1 T-34. Enemy aircraft, in groups of up to 60 planes, bombed the brigade’s positions 4 times.

One notes that in most wargames, attacking a tank brigade with 120 or more Ju-87s and Fw-190s would probably result in more than 13 casualties (see below).

53rd MotRBde: Losses 2 men wounded. Material status: 11 76mm guns; 7 45mm guns; 51 AT rifles; 19 HMGs, 41 LMGs.

1446th Self-Propelled ArtRgt: Equipment on hand: 4 122mm SAUs and 6 76mm SAUs.

271st Mortar Rgt: Losses: 3 men wounded due to bombing and 3 cars damaged. Material condition: 33 120mm mortars.

108th ATArtRgt: Material status: 12 76mm and 8 45mm guns.

38th Armored Bn: Material status: 7 T-70s, 12 Ba-10s and 10 Ba-64s.

75th Motorcycle Bn: 9 BA-64s and 60 motorcycles.

Operational Report #95, 2400 July 16, 1943. HQ 29th TC:

Losses for July 16: 6 men killed and 19 wounded, 1 T-34, 3 cars knocked out and 3 damaged.

Material Condition: 42 T-34s, 47 T-70s, 1 KV, 4 122mm SAUs, 6 76mm SAUs, 23 76mm guns, 26 45mm guns, 5 37mm guns, 3 25mm guns, 39 120mm mortars, 44 82mm mortars. By 0600 on July 17, 5 T-34s and 3 T-70s will be restored.

Operational Report #96, 2400 July 16, 1943. HQ 29th TC:

Material Status: 42 T-34s, 50 T-70s, 1 KV, 4 122mm SAUs, 6 76mm SAUs, 23 76mm guns, 26 45mm guns, 5 47mm guns, 3 25mm guns, and 44 82mm mortars.

Operational Report #8, 0400, July 17, 1943. HQ 5th Gds Tank Army:

29th TC: Losses: 1 T-34, 5 men killed and 10 wounded. 6 cars smashed or knocked out. Tanks in line: 39 T-34s and 45 T-70s.

XXIX Tank Corps (Fond 332, Opis: 1943, Delo: 80, Pages 2-3):

Information on Equipment Losses and Strengths, July 12-16

Equipment Strength: July 12-16

T-34s: 56

T-70: 52

KV: 1

SU-122: 4

SU-76: 6

Irreplaceable losses (burned)

T-34: 60

T-70: 31

SU-122: 8

SU-76: 3

Transportation Equipment Strength

1.5 tons: 572

2.5-3 tons: 205

Irreplaceable Losses:

1.5 tons: 15

2.5-3 tons: 8

Jeeps: 2

Artillery Strength:

76mm: 23

45mm: 26

37mm AA: 5

25mm AA: 3

120mm Mortar: 39

82mm Mortar: 44

Irreplaceable Artillery Losses:

76mm gun: 1

45mm gun: 1

120mm mortar: 3

82mm mortar: 5

Readiness of Rifle Companies:

25th TBde: 50%

31st TBde: 55%

32nd TBde: 85%

53rd MotRBde: 40%

 

Note that I had to retype all these entries, and I am ham-fisted, so there might be a typo or two in them.

By the way, reviewing this just reinforces my opinion that the 31st Tank Brigade was in a second-echelon position and was used as such. It may not have ever gotten past the Oktyabrskii Sovkhoz.

Yemen, Saudi Arabia, Iraq and Iran

Saudi Fires From Outer Space (picture from NASA)

There are four countries that have been in the news lately, intertwined in a complex little dance that has resulted in the temporary shutdown of 5% of the world's oil production. Let us look at the four countries for a moment:

………………………………………Iran……….Iraq……….Saudi Arabia……..Yemen
Population (millions)……………….83…………37……………..33………………….28
GDP ($ billions)…………………..484………..250……………762………………….28
Per Capita Income ($)…………5,820………6,116………..23,566………………..925
% Shiite……………………………..90+…………60…………10-15……………..35-40

Now, there are also five other states in and around the Persian/Arabian Gulf (Kuwait, Qatar, Bahrain, UAE, and Oman). The most populous and richest of these is the UAE, with 9.6 million people and a nominal GDP of 433 billion dollars. Some of these states, like Bahrain, are majority Shiite.

While there might be some retaliatory strikes in response, this simple comparison shows that:

  1. Iran is the big guy in the region.
  2. Saudi Arabia is probably not in a position to wage war against Iran. It may conduct a military response, but nothing approaching full-scale war.
    1. Especially as the two do not share a common border, except across the Gulf.

P.S. Based upon Purchasing Power Parity (PPP)

………………………………………Iran………..Iraq…………Saudi Arabia……..Yemen
GDP ($ billions, nominal)………484…………250…………..762……………………28

Per Capita Income ($, nominal)…5,820………6,116……….23,566………………….925

GDP ($ billions, PPP)………….1,540…………734………..1,924……………………73

Per Capita Income ($, PPP)……18,504……..17,952………56,817………………..2,380

 

P.P.S. A related relevant earlier blog post:

Air Forces in the Persian/Arabian Gulf