
How much "Extreme Spread" is acceptable in competition?

Apparently Fredo thinks that playing with an internal ballistics program is as valid as the results of actual tests, and that he has been appointed the arbitrator of what is scientific. Test everything and, if it applies, believe your targets. We test to find differences between what we think and what is real.

This is not the first time that I have read that case weight is not a useful way to predict case capacity. Jim, Alex, and Tom, among others, do and have done a lot of actual testing. For that reason, and their successes in competition, I tend to pay a lot of attention to their posts.
 
Are all the cases exactly the same? How much carbon is in the neck? Is this one more brittle or hardened than the others? Is the case capacity of these 5 the same as those? It all matters. I really cannot believe anyone would even consider arguing the point. But to each his own. Enjoy your test and thanks for your time.
I can see where this is going
Statistically speaking, it may not matter. This is why you would want to shoot 40+ test shots for each instead of 5 or 10. If you shoot enough, your data pool is big enough to be statistically significant. What this means is that if the primers matter in a material way, you will be able to draw a conclusion from your test even though you could not perfectly eliminate all variables. Think about an economic model where there are billions of variables and data points. There is no possible way to control all of the variables, yet we can still draw conclusions about stimulus packages, tax cuts, etc.
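A rough sketch of that idea, in plain Python with made-up velocity numbers (not anyone's actual chronograph data): Welch's t-statistic measures how many standard errors separate two group averages, and with enough shots per group a real difference stands well clear of the shot-to-shot noise.

```python
import math
import statistics

def welch_t(a, b):
    # Welch's t-statistic: how many standard errors separate the two
    # sample means, allowing the groups to have unequal variances
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical chronograph strings (fps) for two primer lots
lot_a = [2798, 2805, 2801, 2793, 2800, 2807, 2795, 2802, 2799, 2804]
lot_b = [2810, 2817, 2812, 2806, 2813, 2819, 2808, 2815, 2811, 2816]
print(round(welch_t(lot_b, lot_a), 1))
```

A t value well above roughly 2 says the difference between the group averages is unlikely to be noise; more shots per group shrink the standard errors and make smaller true differences detectable.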
 
I think the word you're looking for is 'arbitrar' there Teach.... ;) (sorry, I just HADDA! . . . . lightenin' ya' up, not lightin' ya up......lol)
 
I like Boydallen’s take on the test, using two cases with 10 shots each, alternating light then heavy primers: any velocity change due to work hardening of the necks is a moot point, as both test samples will change linearly as the case necks change over the 10 shots.
It would be very interesting data to look over when done, especially how much the velocity moves between shots one and nine, then two and ten, and whether it shows up at all above the noise.
Allan
 

Apparently, you Sir, cannot grasp and/or accept that there are proven verifiable tests & ballistic programs that, without a shadow of a doubt, account for those very "confounding factors" I've mentioned...from the get go...

So, you're more than welcome to try shifting the narrative away from scientific fact, since as of yet, not a single solitary one of you gents can explain how ANY reloader/tester can subjectively choose to heed or ignore tangible data at one's own discretion.

This may be well & good in your own minds, for whatever justification you can cook up. 'Attacking the messenger' is the most elementary way to do so, especially when the actual facts of the matter cannot be refuted...

So, let's get back to the ballistic discussion, shall we?

Please explain why such a widely accepted & utilized program as Quickload includes a user input for "maximum case capacity"?
Who over @ Quickload camp nominated themselves "arbitrator" for the dispensation of that erroneous data point???

The nerve of 'em!
Those gents should know that case capacity only matters when (insert your name here) has decided it's convenient! To heck with all that science mumbo jumbo....:confused:

And, (again) why did those USAF scientists specifically structure their primer test to negate "confounding factors" from skewing the data they sought to collect on specific primers?

Here's yet another opportunity to explain your stance on THOSE facts.
Or, you could think hard about how you can spin this back onto me (the messenger), since the message of truth is too tough to swallow...

Tell ya one thing, if I'm ever proven WRONG, I will be the first to step up and admit it. So, please...take all the time you need to do just that!

Prove to me that variable case capacity has absolutely ZERO EFFECT on pressure, velocity & downrange POI shift. I've got the science to prove that it does, and conventional reloading practices agree, as well. If you care to debate that fact, here's NECOS Quickload email contact:
sales@neconos.com
I'm sure they'd love to learn from you, how their program is missing an "ignore case capacity" button next to the data field!

To get right back to the main point:
For our discussion, there are three main things that affect the pressure generated in a vessel:

1. Volume of the vessel
2. Volume of propellant
3. 'Energy' added by the primer to ignite the propellant

So, if one seeks to test the variation of any one of those three main factors, one must keep the other two constant. If that very simple, very obvious methodology is not adhered to, the test results are flawed.

How can it be, any other way?
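The vessel-volume part of the list above can at least be illustrated with a toy ideal-gas calculation. This is a gross simplification (interior ballistics involves burning propellant, a moving bullet, and changing temperature), and all the numbers below are made up purely for illustration, but it shows why vessel volume is one of the inputs:

```python
# Toy illustration only: for a fixed amount of gas at a fixed temperature,
# the ideal gas law P = nRT / V says pressure rises as volume shrinks.
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, temp_k, volume_m3):
    # Ideal-gas pressure in pascals
    return n_mol * R * temp_k / volume_m3

# Hypothetical numbers, chosen only to show the proportionality
base = pressure(0.002, 2500, 3.0e-6)
smaller = pressure(0.002, 2500, 2.9e-6)  # ~3% smaller vessel
print(f"{(smaller / base - 1) * 100:.1f}% pressure increase")
```

The same proportional reasoning is why a few percent of case-capacity spread translates directly into a pressure (and thus velocity) spread, all else held equal.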
 
fredo, I for one try to do as much as I can to get things as close as I can, but the other part you are leaving out is human error. As much as I try, I know there will be something I'm going to forget or just plain screw up, and I'd be the first to admit it. And yes, I weigh primers, weigh cases, measure bullets and all the other things known to man. But it still doesn't help the jerk behind the trigger, so your Quickload is flawed. And I'm still waiting for the guy that was making the tool that was supposed to measure case capacity! I told him to make two, to make sure they read the same every time.

Joe Salt
 

You're making this into more than it is. Everything matters. The question is whether you're shooting small enough for it to matter to YOU. So with your current process you test something; if you can't see it, then it doesn't matter. If the next guy is shooting small enough that he can see it, it matters to him. The target will tell you.
 
No, not really.
fredo is saying if you are going to test, test with no variables so the test is real.
It's not about target reading, it's about the TEST used to read the target.
 
The OP would like to do a test to see if he can measure the effect of sorting primers by weight. It seems that information that would be helpful to him in designing the test is an appropriate reply to his post. It seems like telling him it is a waste of time is not helpful. He may get meaningful results and he may not, either way he will probably learn something and enjoy the effort. If he reports the results, then we all have an opportunity to benefit.
 
I know what he is trying to say. However, that's not possible; in the real world there will always be variables, and case volume is one of them. I find it very hard to sort quality brass by volume. What I am saying is take the baseline that you know you're capable of and do the test. You can't shoot a group with the same case, as it would take too long and conditions would change, and I'd rather have the target data than chrono data.
 
Quite honestly, I'd like to see some of fredo's targets from 1k sanctioned matches. (Not just targets from home use) From the sounds of it he's got this figured out and must be pretty hard to beat.

And I don't think anyone is trying to discredit what fredo is saying, because everything does matter. The problem is where he has decided to place his focus; to me it seems the only thing he considers relevant is case capacity. You can take that further and try to quantify how much springback the same pressure vessel has, and its relation to how the combustion chamber absorbs the explosion. And you can say that a once-fired, sized, and verified case capacity is ideal, but now you're making an assumption about the elasticity of the brass being the same. Who's to say the brass was actually consistently annealed at the factory? What have you done to test the elasticity?

We can go on and on about the variables and what ifs, but we have to test to the best of our abilities.

If a top bench rest shooter tells you he isn't sorting primers, he's likely lying to you lol.
 
Here is my final take on this topic. There is no debate that there are a number of other variables which affect velocity, and the OP wants advice regarding methods and number of samples to answer this "better". Many have given opinions, but the primary question which has not been asked, and therefore not answered, is "how small of a difference in velocity do you want to be able to reliably detect?" This is one of the most fundamental aspects to consider when doing any type of testing, and it must be considered when deciding how to conduct the experiment and how many samples to measure, because all of this costs time and money. As a result, different folks have an intrinsic opinion about how to go about doing this without any regard to how good the results need to be.

If I tell you, using random brass, bullets, etc., that I used a powder charge of 40gr and measured 2800fps on one firing, and with 45gr measured 3200fps on another (one sample each), would you doubt that the charges give different velocities? I don't think anyone would. That is because we believe that the differences in brass volume and prep, bullet weight and dimensions, etc. give relatively small errors in velocity compared to the major impact of charge weight. In other words, the signal is sufficiently large so as to overshadow the noise in this case. But if I am not simply satisfied to judge that charge weight makes a difference, and I want to know the "true" average velocity at 45gr with a high degree of confidence such that I can use the result in a ballistic calculator, how many samples should I load and measure? And what is my confidence in the average?

These questions regarding how much data, and what degree of difference I want to be able to detect in order to separate the signal from the noise, can be answered using the standard deviation (sd). Did anyone else look at the velocity results the OP originally posted? For the groups of fouling shots, light, and heavy primers, the average sd = 9 for the individual shots within a group. This represents the noise due to brass prep, charge weight, bullet weight, chrono, etc. From this the number of samples can be determined to detect the desired level of resolution, or to say how confident we are in the average.

Most readers will have learned that the sd can be interpreted to provide information about the % of results that fall within certain ranges at 1, 2, and 3 sd. With the velocity sd = 9 we can say 68% of the shots will be within +/-9 fps, 95% within +/-18 fps, and 99% within +/-27 fps, based on the "bell curve". Therefore a shot that is 50 fps off would be considered a flyer, because it is 50/9 = 5.5 standard deviations out and not associated with normal behavior.

In this case we are not interested in the distribution of the individual shots, but whether the averages are different and how many samples are needed to reliably detect this. Early in this thread I mentioned the sd of the average, which is sd(average) = sd(individuals) / sqrt(n). So with n=4, sd(average) = 9/2 = 4.5. With n=9, sd(average) = 9/3 = 3. So what? Of course this means the variability of averages is less than that of individual shots, but the sd(average) can be interpreted the same way. In this case the OP used 4 shots per primer, so sd(average) = 9/2 = 4.5, but the difference in the averages was 26, or the averages are 26/4.5 = 5.8 standard deviations apart. Just as an individual shot 50 fps different, as noted above, was outside the bell-curve distribution (it is over 5 sd different too), the same criterion can be used to judge differences in averages against the sd(average). And I noted to the OP that his limited-sample-size test gave differences in velocity that were truly different.

So with n=4 and sd(average) = 9/2 = 4.5, a difference of 2*4.5 = 9 can reliably be detected with 95% (2sd) confidence. Not good enough? Then take 20 measurements and sd(average) = 9/sqrt(20) = 2.0, and you can detect a difference of 2 * 2.0 = 4.0 reliably. This sample size is the point of diminishing returns, and if a smaller degree of resolution is desired then it is prudent to reduce sd (the noise) using options which have been discussed (brass sorting, etc). Just because you go to the trouble to sort components and "make them the same" does not mean you have accomplished anything productive unless you demonstrate a reduction in sd (the noise).

I know this discussion of the statistics will not interest many, but the main takeaway in logic is: before jumping to a conclusion about how to conduct a proper trial, understand where you are (velocity sd, target group size, etc.) and set a criterion for how small of a difference (improvement or deterioration) you need to be able to detect, and plan accordingly.
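The arithmetic in the post above packs into a few lines of plain Python. The sd = 9 fps figure is the shot-to-shot noise quoted from the thread's data; the helper names are made up:

```python
import math

def sd_of_average(sd_individual, n):
    # The standard deviation of the mean of n shots shrinks with sqrt(n)
    return sd_individual / math.sqrt(n)

def detectable_difference(sd_individual, n, k_sigma=2):
    # Smallest difference in group averages distinguishable from noise
    # at roughly 95% confidence (2 sd)
    return k_sigma * sd_of_average(sd_individual, n)

sd = 9  # fps, shot-to-shot sd from the data discussed above
for n in (4, 9, 20):
    print(n, round(sd_of_average(sd, n), 2), round(detectable_difference(sd, n), 2))
```

Running it reproduces the post's figures: 4 shots resolve about a 9 fps difference in averages, and 20 shots about 4 fps, which is why more shots buy resolution only slowly.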
 
These types of tests are all about determining whether a source of error is limiting. Mulligan has done this test by a scientifically acceptable method. You can never eliminate all other variables in a test such as this. The best you can do is carry out the test in such a way that all other variables are equally distributed amongst the two sample populations in which your test variable is being analyzed. If you use a statistically significant number of samples, all other variables, such as neck tension, case volume, powder weight, bullet weight, etc., will be similarly distributed in both test groups. At that point, you will either detect a difference in velocity due to high/low primer compound weight above and beyond the contribution of all other random variables, or you will not.

For example, if you test a sufficient number of rounds, each sample set will have approximately the same amount of velocity error due to neck tension variance within the two sets of brass. The cumulative effect of all other variables on velocity will thus be represented as the +/- error in your control sample set and will be consistent between ALL sample sets if a statistically significant number of samples have been tested. By using heavy/light primer weight sets as the test variable, the contribution of primer compound weight to velocity variance will either be above and beyond the sum total of all other sources of velocity variance, or it will not. If it is not, trying to come up with a better mousetrap to completely isolate the effect of primer compound weight is useless, because it will never be the limiting source of error.

On the other hand, if you CAN measure an effect on velocity between extreme high/low primer compound weight sets, that strongly suggests that primer compound weight IS a limiting source of error. If statistically significant sample sizes have been tested, all other possible sources of velocity variance will be equally represented in both primer weight high/low sample sets. At that point, any statistically significant difference in velocity between the two sets can correctly be assumed to have originated from the difference in primer compound weight. Proper application of statistical analysis is a common method used to assess the potential effect of a single variable in a complex system of variables when it is not easy, or perhaps even possible, to experimentally remove all other potential sources of error.
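One concrete way to act on "all other variables equally distributed amongst the two sample populations" is a permutation (shuffle) test: pool both groups' velocities, reshuffle them many times, and ask how often a random split produces a mean difference as large as the one observed. This is a sketch of the general technique, not anyone's specific protocol, and the velocity numbers are hypothetical:

```python
import random
import statistics

def permutation_p_value(a, b, trials=10000):
    # If every nuisance variable is randomly spread across the two groups,
    # the p-value is the fraction of reshuffled splits whose mean difference
    # is at least as large as the one actually observed.
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        d = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if d >= observed:
            hits += 1
    return hits / trials

# Hypothetical light/heavy primer velocity strings (fps)
light = [2798, 2805, 2801, 2793, 2800, 2807, 2795, 2802]
heavy = [2816, 2824, 2819, 2811, 2821, 2815, 2826, 2818]
print(permutation_p_value(light, heavy, trials=5000))
```

A small p-value (conventionally below 0.05) means the observed difference between the primer-weight groups is unlikely to be an accident of how the other variables happened to fall.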
 
A little background,
Here is what I am thinking;
Sort a brick of primers by weight
separate the 10 heaviest primers and 10 lightest primers
10 shots across the chronograph of each loaded at the range with the same piece of brass.
Sound good?

If more shots are required, how many rounds is a piece of brass good for?????

My ask is this, can some of you help me frame this up to make the test valid, without breaking the bank?

Thanks for your input

CW

To avoid biasing the test by driving results, instead of observing natural phenomena, this is how I would run the test:

I would take whatever a normal set of cases is for you to reload and shoot in a day (50 for me). I would load them all up with my best practices and efforts to make them all exactly the same. I would record the weight of each primer and keep it associated with the specific case it went into. I would not sort primers, as that is biasing the test to drive results. Shoot all the rounds across a chrono and record the data. Take that data and scatter plot it, fps on y and primer weight on x. If you see a clear trend, great! If it doesn't show a visible trend, don't despair; you may just need to bring more sophisticated tools to bear on the problem to analyze what you've collected. Feel free to PM me at this point if you'd like my help processing the data.
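The scatter-plot-and-trend step above boils down to a least-squares line through the (primer weight, velocity) pairs. A minimal stdlib sketch, with made-up data and hypothetical function names:

```python
import statistics

def fit_line(x, y):
    # Ordinary least-squares fit: returns (slope, intercept) for the
    # fps-vs-primer-weight scatter described above
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical recorded pairs: primer weight (gr) vs velocity (fps)
weights = [5.28, 5.30, 5.31, 5.33, 5.35, 5.37]
speeds = [2796, 2799, 2798, 2803, 2805, 2808]
slope, intercept = fit_line(weights, speeds)
print(f"{slope:.0f} fps per grain of primer weight")
```

A slope clearly larger than the scatter around the fitted line is the "visible trend" the post describes; a slope near zero relative to the scatter means primer weight is lost in the noise for that rifle and load.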

What I like about this method: you can go have a normal, fun shooting day. No burdensome load and shoot testing on two cases that, to me, would make the day less fun.

What I don't like about this method: you may need more than 50 rounds to accurately model what is going on, which pushes you into multiple sessions that can have vastly different variables. You could repeat this test multiple times, but I wouldn't compile the data together at first; plot each day in a different color on the graph to make sure you haven't introduced something.

Personal pet peeve: utilizing Gaussian metrics, like SD, on data without confirming that the data is in fact normally distributed. If the data isn't Gaussian, then Gaussian metrics aren't accurate. Calculating SD from 5 points, while technically viable, is not acceptable in rigorous science. We can use rigorous mathematics on my method above. The data set will be large enough to actually analyze whether the set is Gaussian or not and choose the correct statistical tools to derive results, or at least information, from the data collected.
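A crude stdlib-only sanity check along these lines (not a substitute for a proper normality test such as Shapiro-Wilk) is to compare the fraction of shots within 1 and 2 standard deviations of the mean against the ~68% and ~95% expected for normal data:

```python
import statistics

def within_k_sd_fraction(data, k=1):
    # Fraction of points within k standard deviations of the mean;
    # roughly normal data should give ~0.68 at k=1 and ~0.95 at k=2
    m = statistics.mean(data)
    s = statistics.stdev(data)
    return sum(abs(v - m) <= k * s for v in data) / len(data)

# Hypothetical velocity string (fps)
velocities = [2798, 2805, 2801, 2793, 2800, 2807, 2795, 2802, 2799, 2804]
print(within_k_sd_fraction(velocities, 1), within_k_sd_fraction(velocities, 2))
```

Fractions far from 68%/95% suggest the data is skewed or heavy-tailed, in which case quoting SD-based confidence intervals would be misleading.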
 
No, not really.
fredo is saying if you are going to test, test with no variables so the test is real.
It's not about target reading, it's about the TEST used to read the target.

Yep, that's it.
Really, it is...

Go back, read this whole thread again.
Find where I've questioned anything, other than related to the testing method(s) suggested?

Yet, there seems to be a 'gang' here that likes to bully a narrative and influence by association, rather than by scientifically sound testing.
When did it become acceptable for 'ego' to overshadow logic & simple science? Just sayin', science does not, and cannot, draw a valid conclusion "because I said so"...

Fact remains that if ANYONE wishes to test for ANYTHING, whether it pertains to reloading or not, they must adhere to a scientific method which isolates the variable being tested for. To deviate from that is to contaminate test results.

Period, end of story.

But, since there seems to be no shortage of justifications for deviating from basic science, whether they be statistical, correlated, or outright ego driven, there's really nothing more to share.

I wish you all well in your "testing"...
 
Fredo,
The disagreement is about what constitutes scientifically correct testing. Specifically, I disagree with your idea of what that would be. You seem to think that by using the word scientific, your suggested method becomes so. IMO the magnitude of the difference shown in the first small tests shows that differences in primer weight result in differences in bullet velocity. While a slightly larger sample would have been desirable, and alternately loading the case from the heavy and light primer groups would have improved it, my experience tells me that the magnitude of difference between the primer groups would not have been produced by progressive hardening of the case as it was repeatedly fired and sized. For those that suggest very large samples, I think that if the effect had been more subtle, the difference less, that may have been required, but not in this case. Finally, I would like to thank Mulligan for sharing his information with all of us. I think the other group that should be grateful for this discussion is the vendors of highly accurate scales, which are a requirement for this sort of testing and sorting.
 
Thanks Boyd,
I’m still loaded up on pain meds and not all together “with it”.
I will start sorting primers as soon as I can navigate the stairs down to my little slice of paradise. You are right, the good folks at Cambridge are the winners thus far.
CW
 
2 pieces of brass total. Each piece is loaded 5 times with light and 5 times with heavy primers, for a total of twenty rounds fired, as you originally suggested. The purpose is to reduce the chances of component failure during testing while maintaining the ability of direct comparison within the overall data group. No matter what you do, it seems you have an eye for accurate data, so I'm sure you'll do fine.

I heard 74% of statistics are made up anyways.
Thanks for the advice, that is exactly how I did the test.
CW
 
All, the primer test is finished.
Thanks to all who helped with good advice. I have posted the test results below and I have 8 pages (with lots of pictures) of notes/explanation that are attached.
Primer graph PPC Fall 2018_2.jpg
@BoydAllen, Thanks Boyd.
@dkhunt14 Thanks Matt.
Should I start a new thread with a better title for this????

The report explains what I did and how I did it. Some might find this useful, it did answer a few questions I had, and I appreciate the help framing up the test.

CW

Grammatical edits.....
CW
 

