
Litz and Cortina - follow up on barrel tuner discussion

Cal has a great recruiting class coming in next year. If they stay healthy, a lot of people are predicting another national championship. The way the tourney works, though, even the very best team seldom wins. So even though the stats can predict who the best team is, the odds still favor the field.

So who do you bet on and why?
 
Fair enough. I mainly brought up calculating the SD of groups of SDs because you mentioned it in relation to calculating SDs of group sizes, and you certainly could calculate the SD of a group of SDs. The likelihood that it might not be particularly meaningful becomes more apparent with more thought.

I’m still not convinced that group size would not follow a normal distribution, and thus that calculating the probability that a group resulted from an improvement in the load rather than from the natural variation in group size would be rather straightforward and rather useful. Do you have any information suggesting that group size does not follow a normal distribution? I suppose I could analyze a bunch of match results and see for myself.
It is apparent that as the average group size gets very small, it is bounded below by zero, and a Rayleigh distribution has been proposed for its description. And we know that the range statistic (group size) is not normally distributed. All that being said, there are certainly sets of group sizes that appear normal, but this cannot be accepted as a universal rule. The main takeaway is that group size is a much less robust statistic for making decisions than the mean and SD of the individual radius results, and the latter doesn't require numerous groups either.
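To see why group size (extreme spread) misbehaves, here is a quick simulation sketch in Python. It is only an illustration of the point above, not anything from the thread's actual targets, and the dispersion value sigma is an assumed number:

```python
# Simulate 5-shot group sizes (extreme spread) from shots whose x/y impacts
# are normally distributed, then look at the shape of the resulting
# distribution of group sizes.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_groups, shots, sigma = 10_000, 5, 0.25   # sigma (inches) is an assumed value

x = rng.normal(0, sigma, (n_groups, shots))
y = rng.normal(0, sigma, (n_groups, shots))

def extreme_spread(gx, gy):
    """Largest center-to-center distance among the shots in one group."""
    d = np.hypot(gx[:, None] - gx[None, :], gy[:, None] - gy[None, :])
    return d.max()

es = np.array([extreme_spread(x[i], y[i]) for i in range(n_groups)])
print(f"mean {es.mean():.3f}  median {np.median(es):.3f}  skew {skew(es):.2f}")
# The skew comes out clearly positive: group size is bounded below by zero
# and has a long right tail, so it is not normally distributed.
```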
 
I agree. I was actually thinking about an hour ago that being bounded on one end by zero prevented it from following a normal distribution, but I had not had the opportunity to look for alternative distributions. I would not have come up with the correct answer rapidly. I’m too rusty. I’m sure I’ve made other mistakes in these posts as well. Nonetheless, the simple comment “serious shooters should brush up on their statistics” has resulted in a firestorm that demonstrates just how much we could benefit from exactly that.

I went to crack open the stats textbook and can’t find it. Might have to switch to Google myself, which is a little disappointing.
 

This should get you started:

 
Ok...so, if I accurately predict that it's gonna be 4 marks (thousandths of an inch) between completely in tune and as far out of tune as the gun can shoot, as well as predict the shape of the groups and POI on target for 4 marks or more, what is the statistical significance of me doing that? Nobody cares about free throws, but they do care about whether tuners do what I say, or they wouldn't be reading all this.
So, let's say there are 375 marks on my tuner and I pick the best and worst of 4-5 of those within that range, and the difference from worst to best is very clear on a target. Is it significant enough if I just go from a predicted number (4) and the results are as plain as the nose on your face, or what more is needed? Just trying to appease the crowd here that is hung up on stats, so what are the requirements on my part to satisfy the test... statistically? I don't have the time, money, or barrel life to just keep going with something I already know the result of, just to appease others, but I'll do what I can.

What if I have a rifle that is perfectly in tune at tuner mark 2 (for example) and it shoots mid-teens at 100. I predict it's completely out of tune 4 marks later, so I move it 4 marks and shoot, say, 5 groups at each setting. If the gun repeatably shoots teens at 2 and, let's say, mid .3s at 6, and I do that 5 times at each setting after predicting the results, is that statistically significant to the stats guys? That's only 50 rounds, but with predicted results.
Hey Mike,

Statistical significance is a concept that contemplates the possibility that two samples from the same population (effectively the same) can give differing results.

The real problem is how different the in-tune condition is compared to the out-of-tune condition. If the difference is very large, smaller samples can identify it to a reasonable confidence level. As the two drift closer together, as in your example, the required sample size starts to skyrocket.

For me, rigorous statistical significance is not needed to prove that the tune changes. If you shoot a test of five 5-shot groups at each setting, alternating between settings, there is a good opportunity to provide evidence of how it works. The further the two sets of groups stay from converging in size, the stronger the implied proof (e.g., the largest in-tune group is .2 and the smallest out-of-tune group is .25 or .3).

Add all of that to well-documented testing across multiple barrels, and voila! Statistical significance!
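To put rough numbers on Mike's example, here is a minimal sketch of the kind of two-sample comparison being described. The group sizes are hypothetical stand-ins (teens versus mid .3s), not his actual targets:

```python
# Compare five 5-shot group sizes at the "in tune" setting against five at
# the "out of tune" setting with a simple two-sample (Welch's) t-test.
from scipy import stats

in_tune  = [0.14, 0.17, 0.15, 0.19, 0.16]   # inches, assumed example data
out_tune = [0.33, 0.36, 0.31, 0.38, 0.35]

t, p = stats.ttest_ind(in_tune, out_tune, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A p-value below 0.05 means a difference this large is very unlikely to be
# chance alone; with non-overlapping sets like these it comes out far lower.
```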
 

It's easy to get rusty! There are many excellent YouTube series on stats and just about any other scientific topic. Aside from its role in judging competitions, group size is a poor statistic for characterization and testing. Yes, you can shoot numerous groups, determine its distribution, and, even assuming normality, find the SD between groups; that treats group size as the observation. But if you use the individual shot radius as the observation, its mean and SD within the group are adequate for significance testing. I plan on collecting targets from our next F-Class match to demonstrate this.
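For readers wondering what "radius as the observation" might look like in practice, here is a small sketch. The shot coordinates are made up for illustration, and treating within-group radii with a t-test is only an approximation (radii are bounded at zero and the group center is estimated from the same shots), but it shows the bookkeeping:

```python
# Measure each shot's (x, y) offset from the group center, then work with
# the per-shot radii instead of the extreme spread.
import numpy as np
from scipy import stats

# Hypothetical 5-shot groups (inches) for two loads
load_a = np.array([( 0.05, -0.10), (-0.08, 0.02), (0.12, 0.06),
                   (-0.03, -0.04), ( 0.01, 0.09)])
load_b = np.array([( 0.20, -0.15), (-0.25, 0.10), (0.18, 0.22),
                   (-0.12, -0.20), ( 0.05, 0.30)])

def radii(shots):
    center = shots.mean(axis=0)            # group center
    return np.hypot(*(shots - center).T)   # distance of each shot from it

r_a, r_b = radii(load_a), radii(load_b)
print(f"load A: mean radius {r_a.mean():.3f}, SD {r_a.std(ddof=1):.3f}")
print(f"load B: mean radius {r_b.mean():.3f}, SD {r_b.std(ddof=1):.3f}")
t, p = stats.ttest_ind(r_a, r_b, equal_var=False)
print(f"Welch t-test on the radii: p = {p:.3f}")
```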
 
If you don't get statistics, you're going to waste a lot of time and money doing stuff that doesn't work while thinking it does. I don't know anyone winning matches who doesn't have a good understanding of the topic. (Bryan has won more than his fair share of big matches and goes on about it constantly.) That may be formal mathematical knowledge, or it may just be intuitive knowledge. Intuition, of course, is just another word for experience. I don't know *any* good shooters who would take three shots at the same velocity to mean their load will have a variance of zero, and that's regardless of how good at math they are. Experience tells you it's a fluke. (Math alone, ironically, does not.) If it seems like experienced shooters don't talk about or care about statistics, it's because they don't have to; it's something they take for granted. New shooters should read up. Shooters tend to be biased against book knowledge, but it has its place.

The trouble is that both approaches can lead you astray. The mathematicians and engineers can get complacent about the assumptions that go into their formal theories. Intuitive, experiential statisticians can grossly overestimate or underestimate the significance of what they're seeing. Both are prone to seeing what they thought they were going to see. It's this last bit that's the most challenging, as no amount of mathematical rigor will solve the problem. It's an exercise in psychology and self-awareness.

All that is to say that none of this is as obvious as some would make it out to be.

Also, +1 on what Keith said - Bryan never said tuners don't work. People need to read what he wrote before repeating that misinformation.
 
Now that you PARTLY understand it, go read on a stats page just how true it is. The mistake you’re still making is that you’re centering your bell curve on X=0 again. The only way the curve would be centered on X=0 would be if the person’s average number of consecutive free throws were 0. That would mean he was equally likely to miss and to hit, and he would, once in some astronomical number of attempts, hit ten in a row, and he would miss ten in a row equally often.

But why would his average have to be zero? If his average were 1, then the curve would be centered on X=1, and he would make 11 in a row with the same frequency that he missed 9 in a row. His average does not have to be an integer. It could be 1.687. Then the curve would be centered on 1.687, but there would be no number of misses in a row that occurred with equal frequency, because 10 in a row is 8.313 from his average; going 8.313 in the opposite direction gives you -6.626, and it’s impossible to miss 6.626 times. What you can do is use the curve to determine what percentage of the time he would miss 7 in a row, and it would not be the same as the percentage of times that he made 10 in a row.

It’s not so wrong. It’s so true. You’re just as likely to underperform as you are to overperform, but that does not mean that your performance centers on zero. You might be just as likely to miss two in a row as you are to hit 5 in a row, and an NBA player might be just as likely to hit 27 in a row as to hit 23 in a row. Notice that you missed on the left portion of your bell curve, but the NBA player is still hitting on the left portion of his bell curve. Stop placing the center of the bell curve at X=0.

I don’t think that one is equally likely to overperform as to underperform, though, in many examples. Agree that skill level shifts the peak left and right. Agree that there will be outcomes on both sides. But I disagree that the “shape” of the outcomes is, or is close to, symmetrical (my understanding of your understanding of “normal distribution” being that deviations vary similarly side to side).

I originally picked free-throw strings, where a person could reliably (in half the engagements) make X number of shots, 5, because it seemed broadly relatable.

If half the efforts ended at 5, the other 50% of efforts ended at 0, 1, 2, 3, or 4 on the left, or at 6 or higher on the right.

I have understood you to be saying that a normal distribution splits the other 50% - the half that were “not 5 consecutive free throws” - onto both sides of 5, and symmetrically.

This is where I’d differ. Yes, I think normal distribution could split it that way, I just don’t think normal distribution applies to the example, or to many other elements of accuracy shooting.

I firmly believe, from observation, in a left-side bias in the number of outcomes, generally when the subject matter involves a test that is hard. Is this illogical? Naturally that line above, that you are just as likely to over- as to underperform, got my attention.

Tyler seemed to immediately agree that the half of efforts that did not end in 5 did not split “evenly,” meaning that 25% ended in 6 or better and 25% ended in 4 or worse. As I wrote, I predicted that far more of the endings that were not 5 were on the left side. Now, the “consecutive shots” of my example may turn out to be where you (hopefully) concur another layer of complication lies, but it wasn’t accidentally chosen. (In BR, for example, a single errant misfire flier renders any improvement from all remaining shots impossible. It can only get worse, never better, and one might as well stop and wait to start over; I suppose if it were the first shot, you could just aim at it, but not after that.)

In certain kinds of performance, as with batting outcomes, it is simply easier to fall short. The batting-outcomes example strongly supports my contention. I’ll grant the other can also happen; for example, a well-placed golf handicap ought to split the over and under outcomes.

It is perhaps not “easier” for a scale to err heavier than lighter, or a bullet to vary longer than shorter, or more shots to miss right than left, and so on, and in this regard I don’t take issue in principle with symmetry statistically occurring in a great many other things.
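For what it's worth, the shape of consecutive-make streaks can be checked with a quick simulation. This is only an illustration with an assumed make probability (chosen so the median streak lands near 5), not anyone's real data:

```python
# How are consecutive-make streak lengths distributed for a free-throw
# shooter with a fixed make probability?
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
p_make, n_trials = 0.89, 100_000    # make probability is an assumed value

# makes before the first miss = (attempts up to and including the miss) - 1
streaks = rng.geometric(1 - p_make, n_trials) - 1
print(f"mode {np.bincount(streaks).argmax()}, median {np.median(streaks):.0f}, "
      f"mean {streaks.mean():.1f}, skew {skew(streaks):.2f}")
# Mode near 0, mean well above the median, strongly positive skew: streak
# lengths follow a geometric distribution, not a normal one, so the outcomes
# do not split symmetrically around the typical result.
```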
 
I have a challenge for you statistics guys. (Yes, I have had a college course, years back.) Ask the winners of any benchrest match whether they used statistics in their load development. I don't mean just anyone, or some little match, but the big boys that win the big matches. I think that they will give you a funny look and tell you that it is not a part of their process. Most of what we "know" comes from anecdotal information. Sometimes I think that people have a tendency to forget this. On the other hand, you can enjoy your hobbies pretty much any way you want, but I think that most people who succeed in the shooting sports are not thinking about statistics at all while developing loads. They have learned how to make useful inferences from small samples, based on experience.
 

Whether for work or play, there is always a balance between experience and the need to experiment, and depending on where the pivot point lies, the needs and approach differ. Regardless of how you got there, if you know the answer there is little need for experimenting beyond fine tuning. On the other hand, when starting at the low end of the learning curve, efficient experimentation will win. I lived on both ends of the spectrum when working and try to find a proper balance.
 
It's a good problem to have, when the groups from two or more different loads are shooting so small that you need a statistician to decide which is actually the best of the choices. Benchrest Heaven!
I have a Jones group-measuring caliper attachment, like the ones used at group matches, that does an excellent job of sorting that sort of thing out. I bought it out of an estate many years ago.
 
Good stuff here. I was anything but ready to attempt a statistics lesson, and didn’t intend to make so many comments, especially about statistics, but then it seemed as though two camps formed rapidly: the camp that almost didn’t believe statistics works at all, and the camp that acted as though opening a statistics book would melt their gun barrels, even though I hadn’t even suggested a particular method, level of significance, or number of shots.

Having not competed in over a decade, and with no immediate plans to do so in the near future, it isn’t urgent for me, but this shows I need to brush up on a few things big time.

I totally agree that very high confidence is not necessarily what a competitive shooter should be looking for, at least not in a single barrel. Certain things likely transfer across barrels, especially if you use the same length, contour, and chamber. Things that are particular to a specific barrel we can’t be as thorough with. If I were starting up today, I would look into ways of analyzing my shots that are more efficient than group diameter, I would choose a much lower threshold of significance, like 70-80% instead of the traditional 95%, to save me some shooting, and I would log as much data as possible, perhaps in some sort of ballistic or target software, so that over the span of a few years I might actually have something that points toward a higher statistical significance level.
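To put a rough number on how much shooting a lower significance threshold saves, here is a back-of-the-envelope sketch using the standard two-sample normal approximation. The effect size is an assumed example value, not anything measured:

```python
# Approximate observations per load needed to detect a difference between two
# loads with a two-sample test at 80% power, for different significance levels.
from scipy.stats import norm

d, power = 0.8, 0.80                 # d = (difference in means) / SD, assumed
for alpha in (0.05, 0.20, 0.30):     # 95%, 80%, 70% confidence
    n = 2 * ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / d) ** 2
    print(f"alpha {alpha:.2f}: about {n:.0f} observations per load")
# Dropping from 95% to 70-80% confidence roughly halves the required sample
# size, at the cost of accepting more false positives.
```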
 

No time like the present to jump back in. Everything from then still works. You probably had a barrel from Wisconsin, Hodgdon Extreme powder, Berger bullets, and Lapua brass.
 
"statistics" doesn't have to be some sort of drawn out academic exercise. Measuring a group is statistics, deciding that it's better to use your average group size rather than your smallest or largest group size is statistics. Knowing that what you're looking at may or may not repeat itself in the future, and guessing at how likely that is - that's all statistics is.

The takeaway is that if you don't look at your shooting experiments in a way that allows for the same test to result in different outcomes, you're going to get confused quickly. Whether or not you choose to do that formally isn't really the point. I've not met a national level shooter who does not do this intuitively if not explicitly.
 
@Bryan Z. Do you have anything you can add here from your testing? I don't remember every detail but it seems you used the 95% standard threshold for probability in your tests so far. Thanks in advance.
You tagged me on a post that’s 8 pages long…ain’t nobody got time fah dat! :). Without reading the whole thing and just answering the question directly: yes, I do statistical tests in my testing and use the 95% probability standard as a threshold for whether the data are likely due to chance or not. I get so many questions about stats that I made a “simple stats” video trying to explain this in as simple terms as I can muster, the kind of language I used when I taught stats. The 95% standard is the industry standard and is used all the time in various research paradigms. For example, if you are taking medication for a chronic medical illness, cancer, or any other major medical condition, the research conducted to substantiate the effectiveness of that medication very likely used that standard when comparing active medication to placebo on medical outcome variables. There are other types of stats that do not even use probability thresholds and instead use “best fit,” but those are usually used in different types of applications. I plan to use the “best fit” statistical approach when analyzing the tuner testing data after I capture data in various atmospheric conditions. For now, I have only used the 95% standard because I was directly comparing high and low barometric pressure conditions. Once I have more data on temperature and humidity over an appreciable amount of time, and with sufficient data, I will use the “best fit” method to analyze the relationships among all variables, including tuner settings. Anyway, I’m happy to bore all of you with this whole statistics mumbo jumbo ;)
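For readers curious what a "best fit" analysis might look like in this kind of testing, here is a generic least-squares sketch with made-up numbers; it is not Bryan's data or method, just an illustration of fitting group size against tuner setting and one atmospheric variable:

```python
# Generic least-squares "best fit": model group size as a quadratic in tuner
# setting (to allow a sweet spot) plus a linear temperature term.
import numpy as np

tuner = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)            # marks
temp  = np.array([55, 60, 62, 70, 75, 80, 85, 90], dtype=float)    # deg F
group = np.array([0.42, 0.35, 0.28, 0.22, 0.24, 0.30, 0.37, 0.45]) # inches

X = np.column_stack([np.ones_like(tuner), tuner, tuner**2, temp])  # design matrix
coef, *_ = np.linalg.lstsq(X, group, rcond=None)
pred = X @ coef
print("coefficients:", np.round(coef, 4))
print("RMS residual:", round(float(np.sqrt(np.mean((group - pred) ** 2))), 4))
# The fitted coefficients describe how group size trends with each variable,
# rather than testing a single yes/no hypothesis at a fixed threshold.
```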

 

Upgrades & Donations

This Forum's expenses are primarily paid by member contributions. You can upgrade your Forum membership in seconds. Gold and Silver members get unlimited FREE classifieds for one year. Gold members can upload custom avatars.


Click Upgrade Membership Button ABOVE to get Gold or Silver Status.

You can also donate any amount, large or small, with the button below. Include your Forum Name in the PayPal Notes field.


To DONATE by CHECK, or make a recurring donation, CLICK HERE to learn how.

Forum statistics

Threads
165,828
Messages
2,203,914
Members
79,144
Latest member
BCB1
Back
Top