
Litz and Cortina - follow up on barrel tuner discussion

Well that’s great, at least we have that settled. Lol

Hopefully we can move on.
If I wasn't clear, I'll try to do a better job of explaining what I'm saying, but yes, it's very settled...to me. Temp and density combine to affect tune, so the effect of each degree of temp change on tune is contingent upon air density too, whereas each mark on the tuner has a given value of its own that doesn't change, per se.
 
I'm not saying you're wrong. I'm too rusty to plant a stake in the ground against your position. But I see no reason why it would be invalid to take the SD of a group of SDs. However, you're exactly making one of my points. That point being that group size is an inefficient way to evaluate a small number of shots. Ten five-shot groups require fifty shots, but you're basically only generating ten data points. You could look at the same fifty shots and analyze them differently, like distance from center or X-Y coordinates, and generate fifty data points. The data is the same, but the resolution would be better.

Because I see no reason that you can't take an SD of a group of SDs and get a result that has meaning, even if you didn't know what to call it, I also think that if you plotted an infinite number of groups, shot by the same shooter, using the same load, in a barrel that never wears, and then you graphed the percentage of the time a group was a certain size for every size group that he shot, you would get a normal distribution that looked like a bell curve, symmetric about his average group size. If that's the case, and I see no reason why it wouldn't be, then you can apply it to the real world. A shooter can take the measurement of a sample of groups with the same load and generate an SD of group size. Then he can make a change in his load and calculate the probability that the change in group size was due to the change in load or to the fact that he doesn't shoot groups that are all the same size. And he doesn't actually have to start over with each barrel. He can go through a log book of groups and compare his current load and gun to his past performance. He can examine match results from last week, go shoot a group, and see what probability there is that he is shooting in what percentile of the pack.

This is actually what experienced shooters are doing; they're just doing it by feel instead of by numbers. Numbers can give them better resolution than their feel does.
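
For anyone who wants to try the comparison described above, here is a minimal sketch of the arithmetic (with invented group sizes, and assuming group sizes really are roughly normally distributed, which the reply below disputes): take the mean and SD of past groups, then ask how unusual a new group would be if nothing about the load had actually changed.

```python
# Minimal sketch: compare one new group against the shooter's own history.
# Group sizes below are invented for illustration; assumes rough normality.
from statistics import mean, stdev
from math import erf, sqrt

past_groups = [0.31, 0.28, 0.35, 0.40, 0.26, 0.33, 0.30, 0.37, 0.29, 0.34]  # inches
new_group = 0.22  # group fired after a load change

mu = mean(past_groups)
sd = stdev(past_groups)            # the "SD of group size" discussed above
z = (new_group - mu) / sd

# Probability of shooting a group this small or smaller by chance alone,
# i.e., from normal group-to-group variation with no real load improvement.
p = 0.5 * (1 + erf(z / sqrt(2)))
print(f"mean={mu:.3f}  sd={sd:.3f}  z={z:.2f}  P(this small by chance)={p:.1%}")
```
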
I generally agree with you, except there is no statistical premise to calculate the SD of numerous SDs. It is NOT normally distributed. If this premise was valid and useful, don't you think it would be found in a text instead of a shooting forum? On the other hand, the pooled ("average") SD can be compared to another using the F-test, but what would that mean? Using individual shots, the mean radius plus its SD are easily used to test hypotheses about loads, etc. Group size is important in that it is used to judge competitions, but its use is otherwise butchered with misapplication.
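
For what the mechanics of those comparisons look like, here is a rough sketch with invented per-shot radii for two loads: the variance-ratio F-test compares the two SDs, and a Welch t-test on the individual radii compares the mean radius of each load. This is only an illustration of the tests named above, not anyone's prescribed workflow.

```python
# Sketch of an F-test on two SDs and a t-test on mean radius.
# Radii (distance of each shot from group center, inches) are invented.
import numpy as np
from scipy import stats

radii_load_a = np.array([0.21, 0.34, 0.18, 0.29, 0.41, 0.25, 0.33, 0.27, 0.22, 0.36])
radii_load_b = np.array([0.45, 0.31, 0.52, 0.38, 0.60, 0.44, 0.29, 0.48, 0.55, 0.41])

# Variance-ratio (F) test: are the two SDs plausibly the same?
F = radii_load_a.var(ddof=1) / radii_load_b.var(ddof=1)
df_a, df_b = len(radii_load_a) - 1, len(radii_load_b) - 1
p_f = 2 * min(stats.f.cdf(F, df_a, df_b), stats.f.sf(F, df_a, df_b))
print(f"F={F:.2f}  p={p_f:.3f}")

# Welch t-test: are the mean radii plausibly the same?
t, p_t = stats.ttest_ind(radii_load_a, radii_load_b, equal_var=False)
print(f"mean radius A={radii_load_a.mean():.3f}  B={radii_load_b.mean():.3f}  p={p_t:.3f}")
```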
 
Do folks use tuners on rail guns successfully? Thanks, John
Yes. There's a preconceived notion with some that super-stiff bbls don't respond to tuners. In fact, though, they vibrate at a slightly higher frequency, and sweet spots are slightly closer together as well as a bit narrower because of it. The difference isn't huge but it's there. I see it even on bag guns with 1.250 straight bbls vs, say, an LV contour. The bigger difference is in the clarity of the shape and POI of the groups on the target. A super-stiff bbl has less muzzle deflection (amplitude) and shows tune less clearly than a bbl that is somewhat less stiff...which makes it easier to recognize tune or lack of it.
 
Every day, as it's changing what is essentially a constant, with each mark worth its respective value. But the need to move it is a different story. Some days the gun will hold tune all day, while on others it does not, even over the same temp swing. I believe this is due to air density. Bottom line, each mark on the tuner has a value, but because air density may or may not be the same over a given temp change, there will be times that it doesn't go out of tune. Moving the tuner changes a constant, whereas each degree of temp is not necessarily worth the same from day to day. There MIGHT be more to it, but temp AND air density combined are absolutely the biggest driving factors to tune.

Temp's effects are easy to understand because powder turning from a solid into a gas is a chemical reaction, and all chemical reactions (AFAIK) are more or less temp dependent. Why density matters, to a point, is harder for me to understand. It may be as simple as denser air dampening vibration more than lighter air. And...it may be something totally different, but density does have a role in tune, albeit less than temp, IME.

Correct me if I'm wrong, but isn't temperature a component of air density?
 
They are intertwined, but they don't always follow the same path. Yes, warmer air is less dense, but what if there's a storm system in place and the temp doesn't change while pressure drops, for example?
Absolutely witnessed this a while back, and it was the ONLY time that "down and out" did not follow suit for adjusting the tuner. It was one of those days where the temp and air density were not normal, for lack of a better term.
 
You're misunderstanding over, and over, and over. Perhaps I am not the person to explain this to you. Perhaps this is a poor medium by which to attempt to explain. I'm not a teacher. I'm really trying. You'd be much better served by cracking open a textbook on statistics.

Bill, I totally now, just now, understand what you are saying. But it is soooo wrong. You suppose that because a person may be able to make 5 consecutive free throws often, they will somehow miss 5 free throws about the same percentage of the time.

Why would that be? That is absolutely implausible. Is it simply because the black and white Yin and Yang symbol balances all endeavors? I have some news…

A person who very often makes 5 free throws consecutively will almost never, and I mean "never," miss 5 in a row. I didn't get what you were saying, because what you are saying is so removed from actual outcomes "that it did not compute."

A person who rarely, but in a couple hours' time, can make 10 in a row, will NEVER miss 10 in a row. It is hard to make 10 in a row. The talent is rare. That person will, I assure you, never, ever, ever miss 10 in a row. Have you shot free throws? This, the coin's obverse, is not how it works, not even close.
 
I tend to agree with Erik that a repeatable change giving never-before-seen results is significant, and that predictable results from respective changes over a range of tuner adjustments are even more so. I'll leave the actual calculations of the likelihood of those things, statistically, to you all. I'll try to post some pics or videos soon, weather permitting, and we can all decide (or calculate) for ourselves. I'll go ahead and predict now, though, that 4 marks on my tuner will be the difference between shooting tiny and shooting as big as it will, on a yet-to-be-chambered bbl, in a combination I've never shot before. We'll see how it goes.
 
Misses are entirely material to the point. By only counting baskets, you only look at part of the data. But you do miss. When you hide the part of the data that is misses, you hide the portion of the bell curve that is left of hits. The bell curve is symmetrical. It looks perfectly like a bell curve. Just because the portion of the curve that you make visible isn't the midpoint doesn't mean the curve doesn't look exactly like a bell curve.

The portion of the data that is left of hits could be the center, if the player's performance was exactly equal at missing baskets compared to making baskets. If the player misses more baskets than he makes, the center of the bell is left of zero. If a player makes more baskets than he misses, then the center of the bell is right of zero. The bell still looks perfectly like a bell. It's perfectly symmetrical about its center. It's not symmetric about X=0. If you hide the portion of the data that is left of X=0 (which is what you do when you don't graph misses), then the only part of the bell that is visible is right of zero. It's still a bell curve. It still looks exactly like a bell curve; it just happens to look exactly like a portion of the curve has been hidden or cut off, and that's exactly what's been done.

You've clearly not done one of the things I suggested to help you visualize it. Draw a bell curve. Hold a piece of paper over it with the edge vertical. Slide the paper left and right, covering different portions of the bell curve. The visible portion is not symmetric to the invisible portion, but the bell curve is symmetric about its center, which is not always centered at X=0. You continue to insist that the "left side" would be higher than the "right side" in my examples, but that would only occur if you didn't draw the line defining the sides in the middle. It's a shame I can't show you what I'm talking about on paper.

Graphing the outcome of hitting or missing free throws is almost identical to flipping a coin, except for two minor complications. The first difference is that the coin flip is centered on zero, but skilled throwing moves the entire bell left or right. It remains symmetrical, but its center is not at zero. The hit is heads; the miss is tails. You're only counting heads and then saying that because the left side is missing, it isn't a bell curve. By graphing tails, you would immediately agree that it was a bell curve. Somehow, with free throws not being centered on zero, that complication (which doesn't change the shape of the curve at all) is making it harder for you to see that the exact same bell is there. Do the exercise. Cover part of a bell curve and look at it. That's what you're doing when you ignore some of the data. And it isn't necessarily half of the data. The portion of the curve that is left of zero is determined by the skill of the player.

The second complication is that the height of the bell (which remains a bell) is determined by how consistently the player achieves his average outcome. Two players' average is seven consecutive free throws made. One makes his seven free throws 50% of the time. His bell peaks at Y=50 and is narrow. The other player hits his average of seven only 25% of the time. His bell peaks at Y=25 and slopes down more gradually than the first player's. Both curves are bells. Both are symmetric about X=7. The area under both bells is 100 (which happens to be the percentage of the shots that you graphed if you include misses). The shooter that hits seven consecutive throws 25% of the time hits six and eight more often than the shooter that hits seven consecutive 50% of the time. Why? Because if that wasn't true, then they wouldn't both have an average of seven consecutive throws. You see, the first shooter isn't entirely better than the second. They both had the same average. If you had two shooters who hit their average number of consecutive free throws 25% of the time, but one shooter's average was six consecutive shots and the other's was eight consecutive shots, then the two bells would be completely identical in shape (height is Y=25), except they would be centered on X=6 for one and X=8 for the other.
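
For readers trying to picture the two bells being described, here is a small sketch that simply draws them as stated: both centered on X=7 with area 100, one peaking at Y=50 and one at Y=25 (each curve's spread is backed out from its peak height). It only illustrates the shapes under the post's own assumption that consecutive-makes counts are normally distributed, which is the very point being argued in this thread.

```python
# Draw the two shifted bells described in the post above.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(0, 14, 500)
area, center = 100, 7   # area under each curve = 100%, average = 7 consecutive makes

for peak, label in [(50, "consistent shooter (peak 50%)"),
                    (25, "less consistent shooter (peak 25%)")]:
    sd = area / (peak * np.sqrt(2 * np.pi))   # choose sd so the curve peaks at 'peak'
    plt.plot(x, area * norm.pdf(x, loc=center, scale=sd), label=label)

plt.axvline(center, linestyle="--", color="gray")
plt.xlabel("consecutive free throws made")
plt.ylabel("percent of attempts")
plt.legend()
plt.show()
```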


You're misunderstanding over, and over, and over. Perhaps I am not the person to explain this to you. Perhaps this is a poor medium by which to attempt to explain. I'm not a teacher. I'm really trying. You'd be much better served by cracking open a textbook on statistics.

The number that is considered elite is not "consecutive free throws made" but the percentage made of all attempts: 90% of free throws attempted. In real basketball, it doesn't matter how many are consecutive, but in the long run many have to be, if 9 out of 10 are made.

If you were going to plot hits to the right of the center and misses to the left, such that the "area under the curve" on each side maintains the 9:1 ratio of the two outcomes, AND it's going to be a symmetrical bell shape, might I see a rough semblance of what that graph will look like?
 
Ok...so, if I accurately predict that it's gonna be 4 marks (thousandths of an inch) between completely in tune and as far out of tune as the gun can shoot, as well as predict the shape of the groups and POI on target for 4 marks or more, what is the statistical significance of me doing that? Nobody cares about free throws, but they do care about whether tuners do what I say, or they wouldn't be reading all this.
So, let's say there are 375 marks on my tuner, I pick the best to worst of 4-5 of those within that range, and the difference from worst to best is very clear on a target. Is it significant enough if I just go from a predicted number (4) and the results are as plain as the nose on your face, or what more is needed? Just trying to appease the crowd here that is hung up on stats, so what are the requirements on my part to satisfy the test...statistically? I don't have the time, money, or bbl life to just keep going with something that I already know the result of, to appease others, but I'll do what I can. What if I have a rifle that is perfectly in tune at tuner mark 2 (for example) and it shoots mid-teens at 100? I predict it's completely out of tune 4 marks later, move it 4 marks, and shoot say 5 groups at each setting. If the gun repeatably shoots teens at 2 and, let's say, mid .3s at 6, and I do that 5 times at each setting, after predicting the results, is that statistically significant to the stats guys? That's only 50 rounds, but with predicted results.
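
As one way the stats-minded might score that exact scenario, here is a minimal sketch with invented group sizes matching the "teens at mark 2, mid .3s at mark 6" description, compared with a Welch t-test. Even with only five groups per setting, a difference that large and that consistent gives a very small p-value.

```python
# Five groups at each tuner setting, sizes invented to match the scenario above.
from scipy import stats

mark_2 = [0.14, 0.16, 0.15, 0.17, 0.13]   # group sizes (inches) at the in-tune setting
mark_6 = [0.34, 0.36, 0.31, 0.38, 0.35]   # group sizes at the predicted out-of-tune setting

t, p = stats.ttest_ind(mark_2, mark_6, equal_var=False)
print(f"t={t:.2f}, p={p:.5f}")  # p far below 0.05 -> very unlikely to be chance alone
```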
 
Bill, I totally now, just now, understand what you are saying. But it is soooo wrong. You suppose that because a person may be able to make 5 consecutive free throws often, they will somehow miss 5 free throws about the same percentage of the time.

Why would that be? That is absolutely implausible. Is it simply because the black and white Yin and Yang symbol balances all endeavors? I have some news…

A person who very often makes 5 free throws consecutively will almost never, and I mean "never," miss 5 in a row. I didn't get what you were saying, because what you are saying is so removed from actual outcomes "that it did not compute."

A person who rarely, but in a couple hours' time, can make 10 in a row, will NEVER miss 10 in a row. It is hard to make 10 in a row. The talent is rare. That person will, I assure you, never, ever, ever miss 10 in a row. Have you shot free throws? This, the coin's obverse, is not how it works, not even close.

Bill, let me reply to my own reply by saying that the way you explained in detail how you applied the normal distribution and shifted the curve by performance was appreciated and well done.

That's probably hard to tell from what I wrote, and we were at this all night. I did manage to travel 300 miles and shoot a 199, 198, and 196 at 1,000, F-Open. That was good for, I believe, about last place in HM. Tough crowd.

My strongest reactions to the bell curve have been when the subject matter concerns one actor's performance potential, implying an equal likelihood of opposite-end performance that's far less likely for that person than for any random average person, because that person is already at an end.

If a person's performance range is already at either low-probability "end" of a given bell curve, depending on the subject matter, allowance for crossover to the other end is, at best, stacking or multiplying fractional improbabilities, and at worst, overlooks actual diametric opposition between the ends.

I suppose asymmetrical curves with a blunt flange suit me better, especially for certain subject matter.

Respectfully, I don't think it's plausible to count both the consecutive number of baskets and the consecutive number of misses on one bell curve. I believe that the proper X axis would start at 1 basket and go to, say, 20, or whatever. The subject heading would be baskets made consecutively, the range would be those numbers, and the mode somewhere in the middle.

I believe that your example sets up two subject fields on one bell curve. We just don't see that practiced. It permits multiple peaks. A guy could cluster 80% of his baskets at 2 consecutive shots, sloping on either side to one and three, but 90% of his total shots are actually misses where he starts over, such that his hits are completely in the graph's "shadow" (curve) of his miss peak.

I also believe you could graph misses, starting over with hits. I believe that recording both is simply a scoreboard.

If you don't separate them, your one graph will overlap, yielding an increased chance for both a simultaneous hit and a miss to occur under given areas of the curves.
 
I generally agree with you, except there is no statistical premise to calculate the SD of numerous SDs. It is NOT normally distributed. If this premise was valid and useful, don't you think it would be found in a text instead of a shooting forum? On the other hand, the pooled ("average") SD can be compared to another using the F-test, but what would that mean? Using individual shots, the mean radius plus its SD are easily used to test hypotheses about loads, etc. Group size is important in that it is used to judge competitions, but its use is otherwise butchered with misapplication.
Fair enough. I mainly brought up calculating the SD of groups of SDs because you mentioned it in relation to calculating SDs of group sizes, and you could certainly calculate the SD of a group of SDs. The likelihood that it might not be particularly meaningful is more apparent with more thought.

I'm still not convinced that group size would not follow a normal distribution, and thus calculating the probability that a group resulted from an improvement in load or from the natural variation in group size would be rather straightforward and rather useful. Do you have any information suggesting that group size does not follow a normal distribution? I suppose I could analyze a bunch of match results and see for myself.
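
If someone does pull a pile of group sizes out of match results, a quick way to run that check is a normality test. The sketch below uses invented group sizes and the Shapiro-Wilk test; a large p-value only means there is no evidence against normality in that sample, not proof of it.

```python
# Test a sample of group sizes (invented here) for normality.
from scipy import stats

group_sizes = [0.31, 0.28, 0.35, 0.40, 0.26, 0.33, 0.30, 0.37,
               0.29, 0.34, 0.45, 0.27, 0.32, 0.38, 0.24, 0.36]  # inches, invented

W, p = stats.shapiro(group_sizes)
print(f"W={W:.3f}, p={p:.3f}")  # small p -> data look non-normal; large p -> no evidence against
```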
 
Bill, I totally now, just now, understand what you are saying. But it is soooo wrong. You suppose that because a person may be able to make 5 consecutive free throws often, they will somehow miss 5 free throws about the same percentage of the time.

Why would that be? That is absolutely implausible. Is it simply because the black and white Yin and Yang symbol balances all endeavors? I have some news…

A person who very often makes 5 free throws consecutively will almost never, and I mean "never," miss 5 in a row. I didn't get what you were saying, because what you are saying is so removed from actual outcomes "that it did not compute."

A person who rarely, but in a couple hours' time, can make 10 in a row, will NEVER miss 10 in a row. It is hard to make 10 in a row. The talent is rare. That person will, I assure you, never, ever, ever miss 10 in a row. Have you shot free throws? This, the coin's obverse, is not how it works, not even close.
Now that you PARTLY understand it, go read on a stats page just how true it is. The mistake you're still making is that you're centering your bell curve on X=0 again. The only way the curve would be centered on X=0 would be if the person's average number of consecutive free throws is 0. That would mean that he was equally likely to miss and to hit, and he would, once in some astronomical number, hit ten in a row, and he would miss ten in a row equally often. But why would his average have to be zero? If his average was 1, then the curve would be centered on X=1, and he would make 11 in a row with the same frequency that he missed 9 in a row.

His average does not have to be an integer. It could be 1.687. Then the curve would be centered on 1.687, but there would be no numbers that he hit in a row with equal frequency, because 10 in a row is 8.313 from his average. Going 8.313 in the opposite direction gives you -6.626, and it's impossible to miss 6.626 times. What you can do is use the curve to determine what percentage of the time he would miss 7 in a row, and it would not be the same as the percentage of times that he made 10 in a row.

It's not so wrong. It's so true. You're just as likely to underperform as you are to overperform, but that does not mean that your performance centers on zero. You might be just as likely to miss two in a row as you are to hit 5 in a row, and an NBA player might be just as likely to hit 27 in a row as to hit 23 in a row. Notice that you missed on the left portion of your bell curve, but the NBA player is still hitting on the left portion of his bell curve. Stop placing the center of the bell curve at X=0.
 
The number that is considered elite is not "consecutive free throws made" but the percentage made of all attempts: 90% of free throws attempted. In real basketball, it doesn't matter how many are consecutive, but in the long run many have to be, if 9 out of 10 are made.

If you were going to plot hits to the right of the center and misses to the left, such that the "area under the curve" on each side maintains the 9:1 ratio of the two outcomes, AND it's going to be a symmetrical bell shape, might I see a rough semblance of what that graph will look like?
I’ll draw a curve and explain it sometime. Perhaps in pm. You’ve at least made one step in the right direction. Back to the kids for now.
 
I’ll draw a curve and explain it sometime. Perhaps in pm. You’ve at least made one step in the right direction. Back to the kids for now.

Also, as I'm driving along, isn't there a logical problem with this notion:

If I've missed once, I have increased my chances of a second miss, but, here's the problem,

If I've missed once, I have increased the chances of "making" the next shot. Does that logically follow? Sloping lines imply both bordering outcomes are going to be captured.
 
Also, as I'm driving along, isn't there a logical problem with this notion:

If I've missed once, I have increased my chances of a second miss, but, here's the problem,

If I've missed once, I have increased the chances of "making" the next shot. Does that logically follow? Sloping lines imply both bordering outcomes are going to be captured.
Missing in the past does not change the probability of hitting in the future. Look into coin flip problems. If you just flipped tails, your odds of flipping heads on your next flip are 50%. If you just flipped heads 100,000 times in a row, your odds of flipping heads on your next toss remain 50%. It doesn’t feel right. The fact that the numbers do not coincide with our feelings is the reason that Vegas is so profitable.

If a basketball player is a 90% free throw shooter, and he hit his last 9, he is still 90% likely to hit his next one. He is not doomed to miss.
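
A tiny simulation makes the same point with numbers: model a 90% shooter whose attempts are independent, condition on having just made 9 straight, and look at the very next attempt. The shooter and attempt count here are invented for illustration.

```python
# Simulate a 90% free throw shooter with independent attempts and check the
# make rate on shots taken immediately after a streak of 9 straight makes.
import random

random.seed(1)
p_make = 0.90
shots = [random.random() < p_make for _ in range(1_000_000)]

after_streak = [shots[i] for i in range(9, len(shots)) if all(shots[i-9:i])]
print(f"make rate after 9 straight makes: {sum(after_streak)/len(after_streak):.3f}")
# Prints ~0.900: the streak does not change the odds of the next shot.
```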
 
