
Do we need to validate training models? The argument is that because the model is being used for training (vice analysis), it does not require the rigorous validation that an analytical model would. In practice, I gather this means they are not validated. I first encountered this argument after 1997, so it is not addressed in my 1996 letters to TRADOC: see http://www.dupuyinstitute.org/pdf/v1n4.pdf
Over time, the modeling and simulation industry has shifted from using models for analysis to using models for training. The use of models for training has exploded, and these efforts certainly employ a large number of software coders. The question is, if the core of these analytical models has not been validated, and in some cases is known to have problems, then what are the models teaching people? To date, I am not aware of any training models that have been validated.
Let us consider the case of JICM. The core of the model's attrition calculation was Situational Force Scoring (SFS). Its attrition calculator for ground combat is based upon a version of the 3-to-1 rule comparing force ratios to exchange ratios. This is discussed in some depth in my book War by Numbers, Chapter 9, "Exchange Ratios." To quote from page 76:
If the RAND version of the 3 to 1 rule is correct, then the data should show a 3 to 1 force ratio and a 3 to 1 casualty exchange ratio. However, there is only one data point that comes close to this out of the 243 points we examined.

That was 243 battles from 1600-1900 drawn from our Battles Data Base (BaDB). We also tested it against our Division Level Engagement Data Base (DLEDB), covering 1904-1991, with the same result. To quote from page 78 of my book:
In the case of the RAND version of the 3 to 1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional exchange ratio) that RAND postulates. In fact it almost looks like the data conspire to leave a noticeable hole at that point.
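The test described in these two passages can be sketched in code: take each battle's force ratio and casualty exchange ratio, and count how many fall near the point the RAND version of the rule predicts (a roughly 3-to-1 force ratio paired with a roughly 3-to-1 exchange ratio against the attacker). The battle records and the tolerance below are invented placeholders for illustration; they are not the actual BaDB or DLEDB data.

```python
# Sketch of the 3-to-1 rule test. Each record is a tuple of
# (attacker/defender force ratio, attacker/defender casualty exchange ratio).
# These seven records are hypothetical, invented for demonstration only.
battles = [
    (1.5, 0.8), (3.1, 0.5), (2.0, 1.2), (3.0, 2.9),
    (4.5, 0.4), (0.9, 1.5), (2.8, 0.7),
]

def near_rand_prediction(force_ratio, exchange_ratio, tol=0.25):
    """True if a battle sits near the RAND crossover point:
    a ~3:1 force ratio paired with a ~3:1 exchange ratio.
    The tolerance is an arbitrary choice for this sketch."""
    return abs(force_ratio - 3.0) <= tol and abs(exchange_ratio - 3.0) <= tol

matches = [b for b in battles if near_rand_prediction(*b)]
print(f"{len(matches)} of {len(battles)} battles near the 3-to-1 crossover")
```

Run against the real databases, this kind of count is what produced the finding quoted above: one data point out of 243 in the BaDB, and one out of 628 in the combined test, anywhere near the predicted crossover.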

So, does this create negative learning? If the ground operations are such that an attacker ends up losing three times as many troops as the defender when attacking at 3-to-1 odds, does this mean that the model is training people not to attack below those odds, and in fact to wait until they have much more favorable odds? The model was/is (I haven't checked recently) being used at the U.S. Army War College. This is the advanced education institution that most promotable colonels attend before advancing to become a general officer. Is such a model teaching them incorrect relationships between force ratios and combat requirements?
You fight as you train. If we are using models to help train people, then it is certainly valid to ask what those models are doing. Are they properly training our soldiers and future commanders? How do we know they are doing this? Have they been validated?
