
Flyers - to drop or not to drop

222Jim

I, like so many of us, am often faced with the awkward question "was that me or my handload?" while analyzing my handload results. One approach I have taken is to decide BEFORE I look through my spotting scope if I messed up. If not, the shot's valid, no matter where it landed. If I think "Yeah, I pulled that one", I reject the shot, no matter where it landed.

Recently, I stumbled across the Analysis Factor website (www.theanalysisfactor.com/outliers-to-drop-or-not-to-drop), which identifies itself as "the practical approach to complex statistics". On that website I found this article that I think offers added insight into that age-old question of rejecting or keeping "flyers".

I'm sharing it both to hear your thoughts on the topic and the article, and simply because I found it informative.

Outliers: to drop or not to drop

by Karen Grace-Martin

Should you drop outliers? Outliers are one of those statistical issues that everyone knows about, but most people aren’t sure how to deal with. Most parametric statistics, like means, standard deviations, and correlations, and every statistic based on these, are highly sensitive to outliers.
And since the assumptions of common statistical procedures, like linear regression and ANOVA, are also based on these statistics, outliers can really mess up your analysis.
Despite all this, as much as you’d like to, it is NOT acceptable to drop an observation just because it is an outlier. They can be legitimate observations and are sometimes the most interesting ones. It’s important to investigate the nature of the outlier before deciding.
  1. If it is obvious that the outlier is due to incorrectly entered or measured data, you should drop the outlier:
    For example, I once analyzed a data set in which a woman’s weight was recorded as 19 lbs. I knew that was physically impossible. Her true weight was probably 91, 119, or 190 lbs, but since I didn’t know which one, I dropped the outlier.
    This also applies to a situation in which you know the datum did not accurately measure what you intended. For example, if you are testing people’s reaction times to an event, but you saw that the participant is not paying attention and randomly hitting the response key, you know it is not an accurate measurement.
  2. If the outlier does not change the results but does affect assumptions, you may drop the outlier. But note that in a footnote of your paper.
    Neither the presence nor absence of the outlier in the graph below would change the regression line:
    [graph 1: scatterplot where the regression line is the same with or without the outlier]
  3. More commonly, the outlier affects both results and assumptions. In this situation, it is not legitimate to simply drop the outlier. You may run the analysis both with and without it, but you should state in at least a footnote the dropping of any such data points and how the results changed.
    [graph 2: scatterplot where the outlier affects both the results and the assumptions]
  4. If the outlier creates a strong association, you should drop the outlier and should not report any association from your analysis.
    In the following graph, the relationship between X and Y is clearly created by the outlier. Without it, there is no relationship between X and Y, so the regression coefficient does not truly describe the effect of X on Y.
    [graph 3: scatterplot where the association between X and Y is created entirely by the outlier]
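For anyone who wants to see the sensitivity the article describes, here is a minimal sketch with made-up numbers (nothing here comes from the article's own data): one wild value moves the mean and standard deviation substantially, and one high-leverage point can manufacture a slope out of essentially flat data, as in case 4.

```python
# Illustrative only: how a single outlier moves the mean, the standard
# deviation, and a least-squares slope. All data are invented.
import statistics

y_clean = [10.1, 9.8, 10.3, 9.9, 10.0]
y_with_outlier = y_clean + [19.0]  # one wild value, like the 19 lb weight

print(statistics.mean(y_clean), statistics.stdev(y_clean))
print(statistics.mean(y_with_outlier), statistics.stdev(y_with_outlier))

# Slope of y on x by ordinary least squares, with and without an
# outlier at a high-leverage x (the situation in case 4 above).
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

x = [1, 2, 3, 4, 5]
y = [2.0, 1.9, 2.1, 2.0, 2.0]          # essentially flat: slope near 0
print(ols_slope(x, y))
print(ols_slope(x + [10], y + [6.0]))  # one point "creates" a relationship
```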
So in those cases where you shouldn’t drop the outlier, what do you do?
One option is to try a transformation. Square root and log transformations both pull in high numbers. This can make assumptions work better if the outlier is a dependent variable and can reduce the impact of a single point if the outlier is an independent variable.
Another option is to try a different model. This should be done with caution, but it may be that a non-linear model fits better. For example, in example 3, perhaps an exponential curve fits the data with the outlier intact.
Whichever approach you take, you need to know your data and your research area well. Try different approaches, and see which make theoretical sense.
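The transformation suggestion is also easy to try. A quick sketch, again with invented numbers, showing how a log transform pulls a high outlier in toward the rest of the data:

```python
# Illustrative only: a log transform pulls a high outlier in toward
# the bulk of the data, reducing its influence. Numbers are invented.
import math

data = [10, 12, 11, 9, 10, 95]          # one high outlier
logged = [math.log(v) for v in data]

# Gap between the outlier and the next-highest value, before and after
# the transform. On the raw scale the outlier sits far from the rest;
# on the log scale it is much closer.
raw_gap = max(data) - sorted(data)[-2]       # 95 - 12 = 83
log_gap = max(logged) - sorted(logged)[-2]   # log(95) - log(12), about 2.1
print(raw_gap, round(log_gap, 2))
```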
 
Like fishermen, golfers, and other liars, I reserve the right to add or delete whatever information is necessary to tell the story exactly how I think it ought to be told. Mulligan? No, that's just a club length. Slot limits? Well, those are for the guys that catch the little ones; it doesn't concern me. Flyers? What flyers?
 
Every shot counts.

Once you head down the path of dismissing shots you don't like, for whatever reason, you're in a bad place.
I only dismiss shots if I know the reason they are bad, and even then only for the sake of comparing loads during development. Such as a clean-bore shot. A few weeks ago I was shooting a rifle with a 1 oz trigger. The bipod slipped forward a bit as I was about to fire and caused the shot to pull high. For the sake of comparing different loads I decided not to count that shot. If that happened during a real competition I would be stuck with it.
 
I find the concept of dropping any valid data to be against sound analysis principles. As others have mentioned, if it's clear the data point is invalid, then, of course, discard it [and note it if sharing the data/results with others].

The case in #4 simply seems dishonest and calls into question the integrity of the analyst. If there isn't a valid reason for discarding it, it distorts the analysis. I can't believe someone really wrote that.
 
A miss is a miss. The deer, varmint, predator, or scorekeeper in a match doesn't care if it's a statistical outlier or any other rationalization one invents for a miss - a miss is a miss.

However, with that said, if you shoot enough and are skilled at calling shots, then shooter error on a called shot can be taken into account when analyzing a shot that goes awry.
 
Also, we have no knowledge of the quality of each bullet. It's tempting to think they are all identical.
 
Use a 30 caliber pencil (appropriate caliber) and get any group you want.
Why waste ammo practicing if you discount what you don't like?
 
When you are at Camp Perry, do they let you pick the shots that get scored? When hunting, does the animal come back and stand still for you to get another try? Really???
 
Every shot counts.

Once you head down the path of dismissing shots you don't like, for whatever reason, you're in a bad place.
I would say this view is valid if all shots were fired from a fixed mechanical rest. Once the fixed rest is removed and the flexible human element is inserted, that movable element needs to be accounted for.
 
Seems this thread has drifted from the OP's intent, "while analyzing my handload results". It's not competition or hunting; it's determining which load provides the best precision. If in doubt, regardless of the reason, why not retest to be sure of the results? Here is an opposite example. Last month I shot this three-round group.....
[image: 57-07.jpg - target showing the three-round group]

I know this rifle, I know the shooter (me), and I have been test-loading Bergers, and I have a huge suspicion something is not right. It was a good day, and this old Redneck Remington is a pretty good shooter, but maybe not this good!! Are these two bad shots that gathered with the first, or maybe three bad shots that just mingled together because the moon phase and stars lined up with me holding my tongue to the right?!?! If we get a group that is suspicious, regardless of why, the load should be shot again. Hopefully I will get out in the next few weeks to shoot this load and find out the truth.
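On the "shoot it again" point: a quick Monte Carlo sketch shows why one small (or large) group by itself proves little. It assumes shots are independent draws from a circular normal dispersion, which is a simplification, and the 1.0-unit standard deviation is arbitrary:

```python
# Sketch, not a ballistics model: if shots scatter as independent draws
# from a circular normal distribution, how much does the extreme spread
# of a 3-shot group vary from group to group? Units are arbitrary.
import math
import random

random.seed(1)

def extreme_spread(n_shots, sd=1.0):
    """Largest center-to-center distance among n simulated shots."""
    pts = [(random.gauss(0, sd), random.gauss(0, sd)) for _ in range(n_shots)]
    return max(math.dist(a, b) for a in pts for b in pts)

groups = sorted(extreme_spread(3) for _ in range(10_000))
print("smallest 5%:", round(groups[500], 2))
print("median:     ", round(groups[5000], 2))
print("largest 5%: ", round(groups[9500], 2))
```

The spread between a lucky group and an unlucky one from the *same* load is large, which is exactly why a suspiciously good (or bad) group deserves a retest.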
 
If not competition or hunting, what would these handloads be used for? I read here all the time "I only shoot against myself!" OK, but yourself just threw two shots; pretending they didn't happen doesn't make you shoot any better or teach you anything. The flyers tell you how to improve; ignoring them says you want instant success without learning or doing the work. There is a reason those shots went out; like you said, maybe it was you, maybe the load. Don't put it off for another second: figure out what caused it and learn how to eliminate the problem.
 
I read here all the time "I only shoot against myself!"
I do. Nearly all the time.
How does that relate to discounting flyers, etc.?
I count every friggin’ round. Every time.
 
@ebb if you are referring to my previous post, I'm not pretending anything; I'm shooting it again, because regardless of the reason I cannot take those two shots back, I can only do another test. No one is ignoring the why, or the lesson of whether it was our ability or inability. It's learning what shoots best out of the rifle at its best. No one gets a prize for shooting small or large groups while developing loads.

And BTW, if you do throw one round out of five, where do you zero? On the center of the five-round group, including the flier/outlier, or on the four tight rounds? Me, I don't zero on either until I have retested and am confident the results pretty much reflect the precision of the rifle, the ammo, and me shooting well on a good day, if I can pull all that off with confidence. It goes from initial load development to a whole lot more testing before I consider a rifle/load combination worthy of using. I also never compete against myself; if I did, I would never win!

I'm certainly not trying to convince you of doing it my way or any other way other than the way you like to do it. Your rifle, ammunition and your shooting. Do it the way you like.
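The zeroing question a few posts up can be made concrete with hypothetical shot coordinates (these numbers are invented for illustration): the group center including the flyer lands well away from the center of the four tight rounds.

```python
# Hypothetical 5-shot group, coordinates in inches relative to the point
# of aim; the last shot is a high flyer. Where is the group center with
# and without it?
shots = [(0.2, 0.1), (-0.1, 0.0), (0.1, -0.2), (0.0, 0.1), (0.3, 1.8)]

def center(points):
    """Mean point of impact of a list of (x, y) coordinates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

all_center = center(shots)        # pulled up by the flyer
tight_center = center(shots[:4])  # the four tight rounds only
print(all_center)    # vertical center shifted up by the flyer
print(tight_center)  # vertical center essentially on the point of aim
```

With these made-up numbers, the single flyer drags the vertical zero about a third of an inch, which is the whole argument for retesting before touching the turrets.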
 
I'm a believer in counting all shots when doing load testing, BUT if I know a shot erred due to a gust of wind just as the shot let off and it wandered as expected, I'm not going to hold that against what was otherwise a promising load. Likewise, if I see an errant shot and then realize my rifle was not fully returned to battery on the rest, or I KNOW I pulled the shot very badly, I make notes of those as well. To always include shots that one KNOWS were no fault of the load being tested is without merit.

If in a match, or testing one's skills in practice, all shots absolutely count. In load development, no. Of course, if I were telling that to someone who didn't have the skills to clearly identify what happened, I'd say they all count. Not picking on anyone here, but many shooters on this forum KNOW at the break of the trigger whether they did their part, excluding external influences. And as many or more don't.
 
