
Seating depth, PRS rifle

[I don't think it has been concluded that there is no statistical significance to these results.]

Maybe not concluded, but perhaps suggested, unless I'm not understanding my friend VT (that wouldn't be the first time). I'm not a PRS shooter either, just a dumb old LRBR shooter, but I can tune a rifle, and sometimes a .003-.004 window is all we get, and we're happy at that. One thing I would suggest: if a couple of consecutive depths are spitting rounds at 100, it won't get better at 800.
 
The best book I ever read on tuning a rifle was The Book of Rifle Accuracy by Tony Boyer. Although the book was written around shooting a 6 PPC in short-range benchrest competition, it taught me a lot, and I found that many of the principles he outlines also apply to other disciplines of rifle shooting. The one thing that stands out in my mind is that Mr. Boyer also tuned his gun using 3-shot groups, and nowhere in chapter 22, Tuning Your Rifle, did he mention SD or ES. My guess is that with 3-shot groups SD is meaningless, and if you are getting high ES values on velocity there is probably something wrong with the ignition system on your rifle.

To tune his gun he used a matrix arrangement: three fairly close powder charges (approximately 1/2 gn apart) for the vertical columns and different seating depths (approximately 0.003 apart) for the horizontal rows. The big difference is that he, like most other benchrest shooters, shoots with the bullets jammed. He looked for consecutive good groups at a given seating depth, indicating that the tune had some forgiveness in powder charge, and he also emphasized that the groups should all print at about the same height on the target. I have used his method on my 6BR and other rifles and it seems to work well. In general I have found that most of my rifles shoot their best either very slightly jumped or somewhere between 0.003 and 0.020 into the lands. You still have to work up your powder charge to avoid overpressure, but at 0.020 or less into the lands it is highly unlikely you will stick a bullet. I also assume you are using a short-action magazine with the 6BR and will have adequate space in the magazine.
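For anyone who wants to lay out that kind of charge-by-depth matrix before heading to the range, here is a minimal Python sketch. The charge and depth values are placeholders for illustration, not Boyer's actual numbers or anyone's worked-up load.

```python
# Sketch of a Boyer-style load test matrix: powder charges as columns,
# seating depths as rows. Values below are placeholders; substitute your
# own worked-up charges and measured depths.
charges_gr = [29.5, 30.0, 30.5]                         # hypothetical charges, ~1/2 gn apart
depths_in = [round(0.003 * i, 3) for i in range(1, 8)]  # 0.003" steps into the lands

print(f"{'jam (in)':>9} | " + " | ".join(f"{c:>6.1f} gn" for c in charges_gr))
print("-" * (12 + 12 * len(charges_gr)))
for d in depths_in:
    # One 3-shot group per cell; record group size and height on target.
    cells = " | ".join(f"{'___':>9}" for _ in charges_gr)
    print(f"{d:>9.3f} | {cells}")
```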
Just looking at your target, I would reshoot 11, 12, and 13. They are similar in height on the target, and 13 is really good, but 12 has too much vertical. I think many shooters give up a lot by not testing loads seated slightly into the lands. My 6BR loves 30.1 gn of Varget with a 105 about 0.008 into the lands.
 
That's like saying that if you look at velocity across 20 incremental charge weights, calculate the SD across those 20 charges, and find that those velocities are all within +/-3SD, you can infer that charge weight did not affect velocity. That's an incorrect basis to use.

That's a bad extension of the science here. Either you're trying to build a strawman or you're not understanding the stats being described.

The ERROR at any given velocity would have to be greater than +/-3SD for that to hold, which is typically never true when we have single-digit SDs for a given charge weight and can demonstrate, as an example, ~1 fps/kernel or ~6 fps per 0.1 gn (~20 fps per 0.1 gn is the highest I have ever observed, at an anti-node). So when our SD is 4, but every 0.2 gn step is ~12 fps on average, up to ~60 fps, which happens to be more than 3x the SD, yeah, we're not talking about non-differentiable statistics here...
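As a minimal sketch of that comparison, assuming a made-up velocity ladder with roughly 12 fps per 0.2 gn step and a shot-to-shot SD of 4 fps (the ladder itself is invented, not measured data), you can check directly whether the step size clears the +/-3SD noise band:

```python
# Hedged illustration: hypothetical velocity ladder, NOT measured data.
# Each charge-to-charge velocity step is compared against 3x the shot-to-shot
# SD to ask whether charge weight is plausibly "lost in the noise."
shot_sd_fps = 4.0                                    # assumed single-charge SD
charges_gr = [i * 0.2 for i in range(5)]             # 0.0 .. 0.8 gn offsets
mean_vel_fps = [2780 + 60 * c for c in charges_gr]   # ~12 fps per 0.2 gn step

for (c0, v0), (c1, v1) in zip(zip(charges_gr, mean_vel_fps),
                              zip(charges_gr[1:], mean_vel_fps[1:])):
    step = v1 - v0
    verdict = "exceeds" if step > 3 * shot_sd_fps else "within"
    print(f"{c0:.1f} -> {c1:.1f} gn: +{step:.0f} fps ({verdict} 3*SD = {3 * shot_sd_fps:.0f} fps)")
```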
 
I don't think it has been concluded that there is no statistical significance to these results, or that there are not strong indications for followup. There simply has not been an analysis of the data, and generally speaking, not many are interested in such an analysis. I personally spend as much time analyzing my target data as it takes to load the bullets for the testing, but I'm a nerd!

This underlined statement is kinda my point - sufficient analysis has not been done for us to be able to TRUST that these group sizes are actually different, but a lot of folks fall into a trap of believing they MUST be different.

The targets were presented and a bunch of guesses were chased based on one small 3-shot group, which was assumed to be different from one bigger 3-shot group - "different" meaning "differentiated" by the variable change, rather than simply "coincidentally not the same," which happens simply because one group out of two almost inevitably has to be smaller than the other. A scatter plot was presented to better visualize a trend, and the observation bias was confirmed for a lot of folks here, again, without doing any actual analysis...

Contrary to the quote here, I WOULD argue that there is evidence that there really is no "strong indication for followup," because there are no strong indicators of differentiation within the dataset presented, and rather, there ARE strong indicators that the samples presented are NOT differentiated.

Here's a visual depiction of what I'm describing (this isn't even evaluating the Mean Radius or the Standard Deviation of the Radii of the groups, just taking a simplified look at the extreme spread of the groups, since that was the data presented):

I've plotted the group sizes here, as was done above, but with +/-1SD error bars added on each group, along with the average of all groups and the ranges for +/-1SD and +/-2SD from that average. As a reminder, for a normally distributed population, meaning NO differentiation at all, we expect 68.2% of samples to fall within +/-1SD of the mean and 95.4% of samples to fall within +/-2SD of the mean.

[Plot: group size vs. seating depth step, with +/-1SD error bars on each group, the overall mean, and +/-1SD / +/-2SD bands]
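A minimal sketch of how a plot like that can be put together, assuming the 21 group sizes are sitting in a list (the values below are placeholders, not the actual test results):

```python
# Hedged sketch: error-bar plot of group sizes with mean and +/-1SD / +/-2SD bands.
# group_sizes holds placeholder values, not the real measurements.
import statistics
import matplotlib.pyplot as plt

group_sizes = [0.42, 0.31, 0.55, 0.28, 0.37, 0.49, 0.22, 0.61, 0.35, 0.40,
               0.30, 0.52, 0.13, 0.44, 0.38, 0.26, 0.48, 0.33, 0.57, 0.29, 0.41]
mean = statistics.mean(group_sizes)
sd = statistics.stdev(group_sizes)
x = range(1, len(group_sizes) + 1)

# Whole-set SD used as the error bar on each group, as described in the post.
plt.errorbar(x, group_sizes, yerr=sd, fmt='o', capsize=3, label='group size +/-1SD')
plt.axhline(mean, color='tab:blue', linestyle='--', label='mean')
for k, color in ((1, 'tab:orange'), (2, 'tab:purple')):
    plt.axhline(mean + k * sd, color=color, linestyle='--')
    plt.axhline(mean - k * sd, color=color, linestyle='--')
plt.xlabel('group number (seating depth step)')
plt.ylabel('group size (in)')
plt.legend()
plt.show()
```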

So evaluating ONLY the data presented, because that's all we have:
Mean Group size ("average") = 0.37" (I estimated ~0.35 earlier today)
Standard Deviation of Group Size = 0.136" (I estimated ~0.1" earlier today)
Range = 0.52 (largest group minus smallest group)

The mean group size is depicted by the light blue dashed line. We note here that the +/-1SD error bars on 14 of the samples cross the mean line - so 14 of those groups COULD have been 0.37" within just 1SD of where their group size actually landed...

The +/-1SD span for this data set runs from 0.234" to 0.507" groups, depicted by the span between the orange dashed lines - again, this captures 14 of the 21 data points, which is 66%, while a Normal Distribution would predict 68.2%...

The +/-2SD span is marked by the purple dashed lines, running from 0.098" to 0.643". This span captures 100% of the sample points, while a Normal Distribution would predict it to capture 95.4%... Since 4.6% of 21 is only 0.966 samples, we might have expected ONE sample outside of the purple bracket (but within a +/-3SD bracket); statistical predictions being what they are, it's fair to expect we might be off by ONE observation in 21...
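For anyone who wants to repeat that coverage check on their own targets, here's a minimal sketch, again assuming the group sizes are in a list (placeholder values, not the actual data):

```python
# Hedged sketch: treat all groups as one undifferentiated sample and count
# how many fall within +/-1SD and +/-2SD of the mean, versus the 68.2% and
# 95.4% a normal distribution would predict. Placeholder values below.
import statistics

group_sizes = [0.42, 0.31, 0.55, 0.28, 0.37, 0.49, 0.22, 0.61, 0.35, 0.40,
               0.30, 0.52, 0.13, 0.44, 0.38, 0.26, 0.48, 0.33, 0.57, 0.29, 0.41]
mean = statistics.mean(group_sizes)
sd = statistics.stdev(group_sizes)
n = len(group_sizes)

print(f"mean = {mean:.3f} in, SD = {sd:.3f} in, "
      f"range = {max(group_sizes) - min(group_sizes):.2f} in")
for k, expected in ((1, 68.2), (2, 95.4)):
    inside = sum(abs(g - mean) <= k * sd for g in group_sizes)
    print(f"within +/-{k}SD: {inside}/{n} = {100 * inside / n:.0f}% "
          f"(normal distribution predicts {expected}%)")
```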

So it sure seems uncanny to me that when we run statistics on this entire sample set as if the groups are NOT differentiated, they follow the Normal Distribution predictions NEARLY perfectly. So it REALLY doesn't look to me like there is any support for a "strong indications for followup" hypothesis to be derived from this data set.

It was mentioned above that we could consider a rolling average of the group sizes, which isn't a good means of determining differentiation between samples, BUT, just for some fun... I've added a 5-point and a 7-point rolling average to the plot: one curve averages each sample with its 2 nearest neighbors on each side (5 points), the other averages each sample with its 3 nearest neighbors on each side (7 points). The class favorite, #13, isn't supported as favorable; the general trend of the data shows smaller and smaller groups for longer and longer jumps, so #21 would be the best option if we believed this were a valid means of analysis.

[Plot: group sizes with 5-point and 7-point rolling averages overlaid]
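A centered rolling average like that is easy to reproduce; a minimal sketch, again with placeholder group sizes rather than the real data:

```python
# Hedged sketch: centered rolling averages (5-point and 7-point windows)
# over the sequence of group sizes. Placeholder values, not the real data.
def centered_rolling(values, window):
    """Average each value with its (window-1)//2 neighbors on each side,
    shrinking the window at the edges of the sequence."""
    half = (window - 1) // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

group_sizes = [0.42, 0.31, 0.55, 0.28, 0.37, 0.49, 0.22, 0.61, 0.35, 0.40,
               0.30, 0.52, 0.13, 0.44, 0.38, 0.26, 0.48, 0.33, 0.57, 0.29, 0.41]
for w in (5, 7):
    smoothed = centered_rolling(group_sizes, w)
    print(f"{w}-point:", [round(v, 2) for v in smoothed])
```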

Given more information about each group, enough to compile a T-test on Mean Radius, we could REALLY be cooking in determining whether these groups truly are differentiated or just coincidentally different. Repeating this test to compile multiple group sizes for each jump distance would also create enough sample size for a T-test. But in this quick and dirty analysis, it looks like we can't say ANY of these are truly different from one another.
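To make the T-test idea concrete, here's a minimal sketch assuming we had per-shot radii from the group centroid for two of the seating depths (the radii below are invented placeholders; scipy's two-sample t-test is one accessible way to run it):

```python
# Hedged sketch: two-sample t-test on per-shot radii from group centroid,
# asking whether two seating depths produced differentiated dispersion.
# The radii values are invented placeholders, not measured data.
from scipy import stats

radii_depth_a = [0.11, 0.18, 0.09]   # inches from group centroid, 3 shots
radii_depth_b = [0.22, 0.31, 0.27]

t_stat, p_value = stats.ttest_ind(radii_depth_a, radii_depth_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means we can't claim the two depths are differentiated;
# with only 3 shots per group, that will be the usual outcome.
```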
 
So I'll take the opportunity to recycle back to here:

It’s a PRS rifle, I wouldn’t overthink it.

The data doesn't look like any of those seating depths are better than the others, AND I'll point out the WEZ analysis done by Cal Zant for PRB a handful of years ago, relevant to the target sizes used in PRS competition - the difference in hit probability between our assumed 0.1 moa group and our assumed 0.62 moa group is 2.7%... And acknowledging that the reality of our group average is 0.37 moa, tightening up is only 0.7% better, and loosening to the worst group on the page is only about 2% worse...

[Chart: PRB "How Much Does Rifle Group Size Matter" WEZ hit-probability comparison]
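As a rough back-of-envelope illustration of that WEZ-style reasoning (this is NOT Cal Zant's model or his numbers), you can estimate hit probability on a circular plate by combining rifle dispersion with an assumed "everything else" error budget; every value below is a hypothetical assumption, only meant to show why group size moves the needle so little:

```python
# Back-of-envelope sketch (NOT the PRB WEZ model): hit probability on a
# circular plate when rifle dispersion is combined with an assumed aggregate
# of wind, ranging, and positional error. All numbers are hypothetical.
import math

def hit_probability(group_es_moa, other_sigma_moa, target_diam_moa):
    rifle_sigma = group_es_moa / 3.0                   # rough ES-to-sigma assumption
    total_sigma = math.hypot(rifle_sigma, other_sigma_moa)
    radius = target_diam_moa / 2.0
    # Rayleigh CDF for circular normal dispersion centered on the plate
    return 1.0 - math.exp(-(radius ** 2) / (2.0 * total_sigma ** 2))

for group in (0.10, 0.37, 0.62):
    p = hit_probability(group, other_sigma_moa=0.7, target_diam_moa=2.0)
    print(f"{group:.2f} moa group -> {100 * p:.1f}% hit probability")
```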


So yeah... I DID overthink it for you here, so in your shoes, I wouldn't overthink it... Pick a jump and party on.
 

The SD should be calculated based on deviations among replicates (which don't exist here), not between the test items. There is sufficient data to carry out a proper analysis if one has stat software for ANOVA, but not many are interested.
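As a minimal sketch of what that ANOVA could look like, assuming per-shot radii were measured for each seating depth (the radii below are invented placeholders; scipy's one-way ANOVA is one accessible option, no dedicated stats package required):

```python
# Hedged sketch: one-way ANOVA across seating depths using per-shot radii
# as the replicates within each group. Radii values are invented placeholders.
from scipy import stats

radii_by_depth = {
    0.020: [0.11, 0.18, 0.09],   # inches from group centroid
    0.023: [0.22, 0.31, 0.27],
    0.026: [0.15, 0.12, 0.20],
}

f_stat, p_value = stats.f_oneway(*radii_by_depth.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (say 0.05) means the test gives no reason
# to believe the seating depths produced different dispersion.
```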
 
CharlieNC said:
The SD should be calculated based on deviations among replicates (which don't exist here), not between the test items.

I mentioned that on page 1 - that we don't have sufficient replication to know a standard error to really do this analysis - and acknowledged the choice to use the whole-set SD as a proxy. It seems pretty safe to assume that this choice doesn't totally undermine the simple analysis I explained. We COULD use a proxy for the standard error of SDs derived from non-differentiated groups (one load, lots of groups), or we COULD shoot multiple replications of this same test - but we don't have that data. As I also mentioned, we DO have replicates within each group which could be used to run a T-test on the Mean Radii of each group, but that's not as quick, and is dirtier, than the quick and dirty analysis I explained here.

Naturally, most folks don't want to break out MATLAB or JMP to run stats on their groups, and don't want to spend hours measuring each shot to determine mean radius from the centroid - which is fine - but unfortunately, it seems apparent that a LOT of folks get trapped by under-analysis... We assume a variable change MUST cause a difference, so we TRUST that a difference on the paper is truly differentiated. I put up an analysis which shows support for the null hypothesis, meaning we AT LEAST have reason to question whether any of the results are truly differentiated. No T-test, no p-values, no variance calcs, just a simple Gaussian statistical check of whether the null hypothesis could be true - and it appears it could be, which SHOULD be enough evidence to suggest we can't stand on any differentiation of samples based on this test alone.

Maybe more simply stated - I know my position here goes against popular opinion, but if a quick and dirty statistical review shows a test is likely non-differentiated, what evidence do we have in this test that supports the results ARE differentiated? The popular trend among reloaders and shooters is ALWAYS to pick the smallest group, whether it's meaningful or not. We readily derive confidence from coincidence. We're seeing more and more detailed "mythbusting" science from groups like Hornady and Applied Ballistics showing that what we see on target is more often coincidence than real difference - and I've nearly become convinced that seating depth testing methods, the accepted methods we've all used for years and years, are one of those myths.
 

I agree there is sufficient radii data to perform an analysis, such as ANOVA, which few have the software to run. I do not agree that the SD between groups of different seating depths is a suitable proxy for the within-group SD, which is the approach suggested by Hornady and AB. Coincidentally, I am in the process of writing up how to test for significant differences in shot dispersion, and I look forward to your comments when it's posted.
 
