
Statistics are not lies

 
The attached photo is too low-quality to read when enlarged enough to see the smaller print. Interesting but .......
 
Not meaning to disparage your efforts at all. I appreciate your posting. Good info, and I really wish I could see/use it all, but the print is too small for my eyes.

Another hard pass on Facebook.
 
I'm not sure what the data is supposed to convey.
I thought accuracy of smallbore ammo in a given rifle was lot dependent - irrespective of ballistic tests.
Wildcat is correct. Hundreds or maybe thousands of rounds shot for nothing. Ammo is single-barrel dependent if you're talking accuracy. Typewriter shooting gets more popular every day. Never forget... figures lie and liars figure. That's what always comes to mind when I hear statistics.
 
In all of the statistical discussions I read, I have long questioned the assumption that shots fired in any test fall into a "normal" distribution. Why do we believe that? In my experience the curves sometimes look like they do, but often not at all. Also, I've shot enough RF to agree with jelenko above - results are totally barrel dependent. If it weren't so, why "lot test"?
 
Stats are only as good as their inputs. That fact seems to be lost these days. A gazillion rounds fired in a bad test will still yield garbage. There's a term for it... garbage in, garbage out... GIGO!
Somebody tell Litz this shitz!
 
I studied many courses at the PhD level dealing with
Numerical Statistical Analysis
Random Processes and Queuing Theory
Random Processes and Stochastic Analysis
And many more.

The sum of random processes converges to a normal distribution, with a mean and a standard deviation describing the outcome of the random process.
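The convergence claim above is easy to demonstrate numerically. A minimal sketch (all parameters are illustrative, not from the original post): sums of many draws from a decidedly non-normal distribution (uniform) end up distributed like a normal, matching the 68% coverage at one standard deviation.

```python
# Sketch: sums of independent non-normal (uniform) draws converge toward a
# normal distribution. N_TERMS and N_TRIALS are arbitrary illustrative values.
import random
import statistics

random.seed(42)

N_TERMS = 100      # uniform(0, 1) draws summed per trial
N_TRIALS = 20_000  # number of sums collected

# Each uniform(0, 1) draw has mean 0.5 and variance 1/12.
sums = [sum(random.random() for _ in range(N_TERMS)) for _ in range(N_TRIALS)]

mean = statistics.mean(sums)   # theory: N_TERMS * 0.5 = 50
sd = statistics.stdev(sums)    # theory: sqrt(N_TERMS / 12) ~ 2.89

# Empirical check of the normal 68-95-99.7 rule on the sums.
within_1sd = sum(abs(s - mean) <= sd for s in sums) / N_TRIALS
print(f"mean={mean:.2f}  sd={sd:.2f}  within 1 sd: {within_1sd:.1%}")
```

If the sums were still uniform-like, the one-sigma coverage would be noticeably different; the ~68% result is the normal-distribution signature.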

Proving otherwise is worthy of a PhD thesis.
 
I've seen some references that indicate a different (Rayleigh) distribution curve than the normal / Gaussian curve.

[Attached chart: normal distribution of radial dispersion]
This is a normal distribution of points of impact with unbiased radial dispersion of bullets impacting a target, with an SD of a given sigma.
What the chart is saying is that if you shoot enough rounds where the POI is driven by random processes, the radial distribution will follow a normal distribution with SD of sigma, with 99% of all shots falling within a radius of 3x sigma.
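The ~99%-within-3-sigma figure can be sanity-checked with a quick simulation. A sketch, assuming independent normal horizontal and vertical errors with equal sigma (the unbiased case described above); under that assumption the radial miss distance follows the Rayleigh distribution mentioned earlier in the thread, and the 3-sigma radius captures about 98.9% of shots:

```python
# Sketch: unbiased 2D normal dispersion -> radial miss distance is Rayleigh.
# SIGMA and N_SHOTS are illustrative values, not from the original post.
import math
import random

random.seed(1)

SIGMA = 1.0
N_SHOTS = 50_000

# Each shot: independent horizontal and vertical normal errors; the radius
# is the straight-line miss distance from the aim point.
radii = [math.hypot(random.gauss(0, SIGMA), random.gauss(0, SIGMA))
         for _ in range(N_SHOTS)]

# Rayleigh theory: P(R <= 3*sigma) = 1 - exp(-9/2) ~ 0.9889 (the "99%").
within_3sigma = sum(r <= 3 * SIGMA for r in radii) / N_SHOTS
print(f"fraction within 3 sigma: {within_3sigma:.4f}")

# Rayleigh median is sigma * sqrt(2 * ln 2) ~ 1.177 * sigma.
median_r = sorted(radii)[N_SHOTS // 2]
print(f"empirical median radius: {median_r:.3f}")
```

Note the subtlety: the x and y errors are each normal, but the radius itself is Rayleigh-distributed, which is why group-size statistics don't look like a bell curve even when the underlying dispersion is perfectly Gaussian.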

Note that if the MV or BC are considered random processes too, the radial dispersion becomes biased: variation in MV and BC makes the dispersion vertically elliptical.

Similarly, if the wind is changing in speed but fixed in direction (say, coming from 3 o'clock), the dispersion would be horizontally elliptical.

In all, for statistical analysis and simulation (e.g. Monte Carlo simulation), the user has to declare every input as either static or random. Thus, the more inputs are declared static (i.e. given a fixed value), the less realistic the analysis/simulation becomes.
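A toy Monte Carlo makes the static-vs-random point concrete. This is only a sketch under simplifying assumptions (vacuum trajectory, flat fire, no drag; the velocities and distances are hypothetical): declaring muzzle velocity static hides all of the vertical spread it actually contributes.

```python
# Sketch: Monte Carlo with one input (muzzle velocity) treated as static vs
# random. Drop model is a drag-free toy; all numbers are illustrative.
import random
import statistics

random.seed(7)

G = 9.81          # gravity, m/s^2
RANGE_M = 300.0   # distance to target, m
MV_MEAN = 800.0   # nominal muzzle velocity, m/s (hypothetical)
MV_SD = 5.0       # shot-to-shot MV variation, m/s (hypothetical)

def drop_m(mv: float) -> float:
    """Gravity drop at RANGE_M for muzzle velocity mv (vacuum, flat fire)."""
    t = RANGE_M / mv           # time of flight
    return 0.5 * G * t * t

# Input declared STATIC: every simulated shot uses the nominal MV.
static_drops = [drop_m(MV_MEAN) for _ in range(10_000)]

# Input declared RANDOM: MV varies shot to shot around the nominal value.
random_drops = [drop_m(random.gauss(MV_MEAN, MV_SD)) for _ in range(10_000)]

static_sd = statistics.pstdev(static_drops)  # exactly zero vertical spread
random_sd = statistics.pstdev(random_drops)  # real vertical spread appears
print(f"static SD: {static_sd*100:.2f} cm, random SD: {random_sd*100:.2f} cm")
```

The static run reports zero vertical dispersion from MV, which is exactly the kind of unrealistic result the post above warns about when too many inputs are fixed.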
 
Where does bad input fall? Not disagreeing, but I don't see anything predictable coming from a bad input... except a bad output.
You're going to need someone better than me to sift that out. I'm fairly certain it could be done; there are procedures specifically designed in metrology to determine how much variation in measurements comes from the user vs. the tool vs. the object being measured. I'm also pretty sure that it's way beyond what most people would be willing or able to do. Not sure what your point is, though. I don't want to get yelled at for 'assuming' again ;)
 
No yelling but I'm not sure how inputs are calculated. You have to start somewhere. That's what I'd call input source.
 
In all of the statistical discussions I read, I have long questioned the assumption that shots fired in any test fall into a "normal" distribution. Why do we believe that? In my experience the curves sometimes look like they do, but often not at all. Also, I've shot enough RF to agree with jelenko above - results are totally barrel dependent. If it weren't so, why "lot test"?
I think the bigger issue is separating systematic error from random error, which a normal distribution is very good at modeling. You need lots of data and systems analysis for that.

If you dig deep enough, we aren't testing enough and we aren't using the correct metrics for evaluation, but ain't nobody got time (or money) for that (except the government, who wrote the book back in 1964).
 