2 x 20 would be good for 40 shots; I just like more groups. The CMD multiplier is 1.15, and the SD is about 4.2% for 2 x 20.
---------------------------------------------------------------------
I would be remiss not to mention the proper statistical method for comparing grouping performance between different loads or rifles. Five groups of 10 shots each is an adequate sample for this method, which is called the Student's T-Test (another reason to use five 10-shot groups as a standard procedure). I will not go into the theory behind the T-Test, as most readers will not have the statistical background to follow its principles; you can Google "T-Test" and find out more should you desire. However, you really don't need to understand it to use it. It is built into Microsoft Excel, and performing a T-Test analysis there is very simple. Excel "help" will tell you everything you need to do.

The T-Test performs a mathematical comparison of two (or more) sets of data to determine the degree of similarity between them. Let's say you have fired a set of 50 shots, composed of five groups of 10 shots, for each of two different loads under identical conditions, and have determined the ES of each 10-shot group. Go into Excel and enter all of the ES data for the first load in five cells in one column, then enter the ES data for the second load in the adjacent column. You will need to read the Excel help for what to do next (look under "T-Test"), but you will then get a number between zero and one as a result. I will call this number the T-Statistic (strictly speaking, Excel reports it as a probability, the p-value of the test).

The smaller the number, the more different the data in the two columns are; the closer it gets to 1, the more similar the loads are. I cannot give an exact interpretation, but in general I'd say (my opinion) that if the T-Statistic is 0.2 or more, there is very little meaningful difference between the two loads, and if it is 0.1 or less, there is a statistically significant difference between them. The mathematics of statistics deals in probabilities and confidence levels, not exactness.
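For those more comfortable with a short script than with Excel, here is a minimal sketch of the same comparison using Python's SciPy. The ES values shown are made-up placeholders for illustration only; substitute the extreme spreads you actually measured for each 10-shot group.

```python
# Minimal sketch of the two-load comparison described above, done in Python
# instead of Excel. The ES values are invented placeholders (inches per
# 10-shot group), not real data.
from scipy import stats

load_a_es = [1.10, 1.25, 0.95, 1.30, 1.15]   # five 10-shot groups, load A (hypothetical)
load_b_es = [1.45, 1.60, 1.38, 1.52, 1.70]   # five 10-shot groups, load B (hypothetical)

# Two-sample, two-tailed t-test. equal_var=False applies Welch's correction,
# comparable to Excel's T.TEST(..., 2, 3); equal_var=True matches type 2.
t_stat, p_value = stats.ttest_ind(load_a_es, load_b_es, equal_var=False)

print(f"p-value: {p_value:.3f}")
# Using the rule of thumb above: 0.1 or less suggests a real difference
# between the loads; 0.2 or more suggests little practical difference.
```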
I earlier mentioned my analysis of U.S. Navy lot acceptance testing data for the MK262 5.56 round. This involved firing five 10-round groups for each of 10 production lots, in each of two test barrels, to determine the ES of each group and the average ES for each lot. I went through the exercise of comparing a large sample of these results to each other using the T-Test (not all of them, as that would require 380 comparisons); I chose 50 comparisons at random as a more manageable sample. As it turned out, 47 of the comparisons showed no statistically significant difference in grouping performance between different lots and different test barrels; only three had a T-Statistic indicating a statistically significant difference. That is nearly phenomenal, and speaks very highly of the manufacturing quality control exercised by Black Hills. Even so, none of the lots failed the Navy's grouping requirements.

The Navy is completely sold on this grouping acceptance test method versus all the other possible measures, such as horizontal plus vertical standard deviation, diagonal, figure of merit, mean radius, or anything else of a similar nature. They just do not add any practical value or advantage beyond the much simpler ES measurement, and they are far more difficult and time-consuming to apply. I agree with that opinion wholeheartedly.
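If you wanted to run the same kind of random pairwise comparison on your own lot data, a rough sketch follows. The lot names and ES values here are invented for illustration; they are not the Navy or Black Hills numbers.

```python
# Rough sketch of random pairwise T-Test comparisons across lots, in the
# spirit of the exercise described above. All lot names and ES values are
# hypothetical; each lot holds the ES of its five 10-round groups.
import itertools
import random
from scipy import stats

lots = {
    "LotA": [1.8, 2.0, 1.9, 2.1, 1.7],
    "LotB": [1.9, 2.2, 1.8, 2.0, 2.1],
    "LotC": [2.0, 1.9, 2.3, 1.8, 2.0],
    "LotD": [1.7, 2.1, 1.9, 2.2, 1.8],
}

# Build every possible lot-to-lot pairing, then pick a random subset of them.
all_pairs = list(itertools.combinations(lots, 2))
sample = random.sample(all_pairs, k=min(3, len(all_pairs)))

for a, b in sample:
    _, p = stats.ttest_ind(lots[a], lots[b], equal_var=False)
    verdict = "significant difference" if p <= 0.1 else "no significant difference"
    print(f"{a} vs {b}: p = {p:.2f} -> {verdict}")
```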