We all wish we didn't get variation in our loads with temperature. That would be a perfect world.
What I gleaned from the test is viable options for existing loads when our go-to powders are somewhat nonexistent.
Thanks for the suggestion of n=10. Do you have personal experience or published data you can post or cite to support your suggestion that 10 replicates will provide substantially better information than 5 at each temperature and powder point? I would need something more than a preconceived notion to justify the time and expense.
As the next test design evolves, it appears it will be n=5 comparing Varget and IMR 4166 at roughly 10-degree intervals between 60 and 100°F. I'll use my Autotrickler to dispense powder charges more precisely. Increasing replicates from 3 to 5, along with more precise powder measurement, should significantly reduce the error of the mean and permit greater confidence in the data.
Thank you for the descriptive statistics primer on standard deviation and the value of increasing sample size to reduce the standard error of the mean and to increase confidence that the calculated mean is close to the parametric mean. It is easy to see that 10 samples give a tighter estimate of the mean than 5, that 20 are better than 10, and that 100 would be better still.

There are several fundamental aspects to using the standard deviation (SD) of a normal distribution. It is well known that the average ±1 SD contains about 68% of the observations, ±2 SD about 95%, and so on. But in this case you are not concerned with the individual results; you are concerned with the behavior of the averages, which you want to explain versus temperature. So you need enough data to obtain a "good" average at each point, through which you can fit a smooth curve (or a straight line).

How much data makes a solid average? If you "practice" statistics, another well-known principle is SD(average) = SD(individuals) / sqrt(n), which tells you how much a larger sample size will reduce the SD of the averages. As an example, assume the velocity SD = 6 for the Varget (that is the average SD across the different temperatures, based on 3 shots each). For 4 shots, SD(avg) = 6/sqrt(4) = 6/2 = 3; so if the average was 2550, you would be 68% confident that the real average is within 2550 ± 3. This is what the automatically plotted error bars are telling you, and you can see that these overlap between adjacent temperatures, meaning the average can be anywhere within that bracket. So you want to reduce SD(average) until the error bars are tight enough to separate the signal (the temperature effect) from the noise (the SD).
I proposed 10 as a reasonable sample size to achieve a significant improvement, and using the formula above you can evaluate any sample size to find the point of diminishing returns. This is a rather short explanation of a very powerful principle!
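The sqrt(n) relationship described above is easy to tabulate. A minimal Python sketch, using the SD = 6 fps Varget figure from the example (the sample sizes shown are just illustrative):

```python
import math

def sem(sd, n):
    """Standard error of the mean: SD of individual shots / sqrt(sample size)."""
    return sd / math.sqrt(n)

# Assumed shot-to-shot velocity SD of 6 fps (the Varget example above)
sd_individual = 6.0

for n in (3, 5, 10, 20):
    print(f"n={n:2d}: SD(average) = {sem(sd_individual, n):.2f} fps")
```

Running it shows the diminishing returns directly: going from 3 to 10 shots cuts the SD of the average from about 3.5 fps to about 1.9 fps, while doubling again to 20 shots only gains another half fps or so.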
@BillC79 , out of curiosity, what loads in grains were you using on the original graph with the 200-20X bullet? And what barrel length and twist?
And thank you for posting your findings!
LRCampos.
The details you seek are on the preceding cited web page. It was a 30 inch 1:10 twist barrel.
I was able to get to the range this A.M. and get the remaining data I needed. Details should be posted by this evening.
Looking forward to the new report, thanks.
"Fear the one who has no chronograph nor bore scope. For all he knows is what the target shows..."

...or fear those seeking knowledge as they pass by you. Some folks already have the target part figured out.