Closer to 3x on the X-ring: 7.07 vs 19.6.
I was misremembering the MIL target, not the F, my bad.
And the wind blows at 1K BR matches too,
BUT we do not get shots spotted. All is an educated guess once time starts.
Bottom line: you do not need the precision that BR shooters do at 1K.
The problem with your numbers is you do not measure the group or score of the first 5 or 10 shots.
It is a different game, but it ain't as precise.
This is all hand-waving, if F-Class was that much easier than BR, the BR shooters could show up at F-Class matches and clean up. That isn't happening, and here's the reason, which is very pertinent to the whole tenor of this thread:
the limiting sources of error are not the same in the two disciplines. Let me state that again, the limiting sources of error are not the same in the two disciplines.
In a 20-shot F-Class match, one might observe changes in the wind conditions that are worth anywhere from 1-2 MOA of increased dispersion in relatively benign conditions, up to as much as 4-5 MOA, or even more, when the conditions are challenging. Thus, wind-reading becomes the major source of error. The difference between a load that will shoot 0.25 MOA and one that will shoot 0.10 MOA is pretty much meaningless when the wind condition is capable of putting shots out in the 7- or 6-ring, or even off the target, at 1000 yd. In other words, the inherent precision of the load is often no longer the limiting source of error in an F-Class match that is fired over a time limit of 20+ minutes, where the wind conditions can change many times. BR shooters don't generally take 10-20 minutes to get their shots off, hence the difference between the two disciplines.

None of that means one is easier or harder than the other; they're just different. Shooters in both disciplines strive to obtain the utmost precision from their equipment, regardless of what the relative final precision at the target under match conditions may be.
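To put a rough number on why a dominant error source swamps the others: independent error sources combine approximately in quadrature (root-sum-square), so once the wind term dominates, shrinking the load term barely moves the total. A minimal sketch with assumed values (a 2 MOA wind error, taken from the benign-conditions figure above):

```python
import math

# Independent error sources combine roughly in quadrature (root-sum-square).
def combined_dispersion(sources_moa):
    """Approximate total dispersion (MOA) from independent error sources."""
    return math.sqrt(sum(s ** 2 for s in sources_moa))

wind_moa = 2.0        # assumed benign F-Class wind error (MOA)
load_quarter = 0.25   # a 0.25 MOA load
load_tenth = 0.10     # a 0.10 MOA load

total_a = combined_dispersion([wind_moa, load_quarter])
total_b = combined_dispersion([wind_moa, load_tenth])
# Both totals land near 2 MOA; the 0.25 vs 0.10 load difference
# changes the combined dispersion by only a few hundredths of an MOA.
```

The quadrature assumption treats wind error and load dispersion as independent, which is a simplification, but it illustrates why the better load buys almost nothing once the wind term dominates.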
The reason this is pertinent to the OP's original question is that it comes down to actually identifying the limiting source(s) of error, so that they can be dealt with, if possible. For example, identification of a charge weight window that keeps velocity within an acceptable window across a specific temperature range, identification of an optimal seating depth window, etc. All reloaders go through this, but some are even more concerned with sorting out the details at the finest possible level or increment. For those individuals, in order to understand when a limit has been reached past which no further appreciable effect on precision can be obtained via a given operation, one must have some knowledge of the limiting source(s) of error involved.
One example of this concept, based on the OP's question of "how much precision is required in a scale to be used for precise weight sorting?", would be the sorting of brass cases by weight as a surrogate for internal volume. In order to define the limits of each sorted weight group, one has to know how much brass weight variance corresponds to an internal capacity variance that is sufficiently large to alter velocity to such an extent that it will result in reduced precision. This can only be determined by weighing many pieces of brass AND determining their internal (water) volume. Only then can one get a feel for the relative size and number of weight-sorting groups that will provide some benefit to precision, without going so far that one is simply wasting time sorting to an increment so small that the effect cannot realistically ever be observed. Having done this for many years, I can tell you that in general, there is a pretty good correlation between case weight and case volume. That is not surprising, given that the case will expand to the limits of a given chamber upon firing. Thus, the only major places on a cartridge where weight can vary without affecting internal volume are the primer pocket and the extractor groove. I personally have never found that variance in the size of either of those two features is sufficient to introduce significant variance into the correlation between case weight and case volume. If I were using a brand of brass where the width or depth of the extractor groove or primer pocket was markedly non-uniform, I would be looking for a different brand of brass. Nonetheless, I have observed that the relationship between case weight and case volume is not the same among different brands of brass. In some, the correlation is much better than in others. Further, there will always be a few "outliers", even with the best brass available.
As I stated above, the only way to know this is to determine weight and water volume for a number of cases, make a scatter plot of the data, and let the software determine the correlation coefficient of the best straight line through the data points. Having done this for many years, I sort cases by weight into three groups: "light", "medium", and "heavy". Although it is certainly still possible that a "low" and a "high" volume outlier could happen to fall within just one of those sorting groups, in general I am stacking the odds in my favor that case volume will be more uniform in cases sorted by weight. I do not claim it is a perfect solution. However, the good news is that the overall volume range of any of the individual weight-sorting groups will never be higher than the total range for an entire batch of un-sorted cases. So I deem the practice to be worthwhile because it takes only a modest effort.
Another example of how identifying a limiting source of error can be used would be weighing powder. Reloaders commonly test charge weight in intervals ranging from about 0.1 gr to 0.5 gr, in part depending on the case capacity. Is there any reason to ever test charge weights using a finer increment than 0.1 gr? IMO, no. A change in charge weight of less than 0.1 gr across a pretty wide range of case capacities is unlikely to generate a change in velocity that can even be measured accurately by most chronographs: on the order of less than 5 fps, in some cases even less than 1 fps. Do some reloaders use charge weight increments of less than 0.1 gr? I'm sure some do. However, regardless of the perceived result of doing so, it would be next to impossible to ever prove that it actually made any difference when current chronograph technology is incapable of detecting such a small effect on velocity. Nonetheless, I have no doubt that those who do it are convinced that it does make a difference. Note that I am specifically referring to charge weight testing increments, not the weighing precision of a set charge weight in rounds loaded for competition. In the latter event, I would suggest weighing individual charges to the best precision you can generate. I typically strive for charge weight variance of less than +/- one kernel. Why? Because with a good setup it takes little more effort than does weighing to lesser precision, and then you never, ever have to worry about charge weight variance as a possible source of error when behind the rifle at a match. In other words, for a very minimal effort, I am generating precision in charge weights that is far below any other limiting source of error, so that in effect, charge weight variance ceases to be a source of error.
So I weigh charges to +/- one kernel or less for a developed match load, but I typically do the charge weight testing during load development in 0.1 to 0.3 gr increments, depending on what I'm doing. This might seem like a dichotomy, but it is not. It is based on my understanding of limiting sources of error as viewed through the lens of how much effort a given reloading step or process might involve. Testing charge weights in increments of 0.05 gr (or less) would require significantly more time, effort, reloading components, and barrel life. Weighing charges for a match to +/- one kernel requires hardly any extra effort at all with the right setup.
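A back-of-envelope check on what +/- one kernel means in velocity terms. Both inputs are assumptions for illustration: one kernel of extruded powder at roughly 0.02 gr, and a sensitivity of about 75 fps per grain (i.e. 7.5 fps per 0.1 gr, the midpoint of the 5-10 fps figure below):

```python
# Assumed values, not universal constants:
kernel_weight_gr = 0.02   # one kernel of extruded powder, roughly
fps_per_grain = 75.0      # ~7.5 fps per 0.1 gr of charge weight

fps_per_kernel = kernel_weight_gr * fps_per_grain   # ~1.5 fps
total_window_fps = 2 * fps_per_kernel               # +/- one kernel -> ~3 fps window
```

A velocity window of a few fps sits well below typical shot-to-shot velocity spread, which is the sense in which +/- one kernel removes charge weight as a source of error.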
So to finally get to the meat of it, in what I'm sure has already been an excruciating reading experience for some: how does one actually learn to identify and quantify limiting sources of error in the shooting/reloading process? Obviously, having a background in science, engineering, statistics, or mathematics would be beneficial. However, such a background is not necessarily "essential", and simple experience can be more than sufficient. For example, anyone can look at their charge weight test data and correlate a change in average velocity with a change in charge weight. In my hands, an increase of 0.1 gr charge weight is usually good for somewhere between about 5-10 fps difference in average velocity. Thus, I would be looking at a velocity difference of only 2.5-5 fps if I conducted charge weight testing in 0.05 gr increments. I'm not going to spend my time testing with an increment so small that the resultant velocity change is at, or even below, the limit of accuracy of most chronographs. Likewise, one can use a reloading program such as QuickLoad or GRT to predict the effect of a 0.1 gr difference in case volume on velocity with a given charge weight. Although such predictions are not "written in stone", they can provide a rough guide as to whether some given parameter might be a limiting source of error, in which case it might be addressed by tools readily available to the individual. One has to start by learning to identify the [major] limiting sources of error, which can vary widely depending on the cartridge/powder/bullet used. Only then can one determine the minimum resolution necessary for a balance that will be used to sort cases or bullets by weight, or to weigh powder to +/- one kernel, or to measure seating depth to +/- .001" or less. Sure, someone can list their own specific results at a shooting forum such as A.S. For example, my Lapua .223 Rem and .308 Win brass weight groups end up with a per-group range in the neighborhood of 0.5 gr.
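That sensitivity figure (roughly 5-10 fps per 0.1 gr) can be turned into a quick screen of which test increments a chronograph can plausibly resolve. The 75 fps/gr midpoint and the 5 fps chronograph limit below are assumptions for illustration, not fixed values:

```python
# Assumed sensitivity: ~75 fps per grain, i.e. 7.5 fps per 0.1 gr.
fps_per_grain = 75.0
# Assumed practical accuracy limit of common chronographs (fps).
chrono_limit_fps = 5.0

def velocity_change_fps(increment_gr):
    """Predicted average-velocity change for a charge-weight test increment."""
    return increment_gr * fps_per_grain

for inc in (0.3, 0.1, 0.05):
    dv = velocity_change_fps(inc)
    verdict = "resolvable" if dv > chrono_limit_fps else "lost in chrono noise"
    print(f"{inc:.2f} gr increment -> ~{dv:.1f} fps ({verdict})")
```

By this crude screen, 0.05 gr increments produce a predicted change at or below the chronograph's own accuracy, which is the post's argument for not testing finer than 0.1 gr.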
So a balance accurate to about 0.1 gr would probably be sufficient for my purpose of sorting brass. However, someone else's results may differ markedly if they are using a different cartridge/bullet/powder. In fact, a balance that weighs to +/- 0.001 g should be sufficient for most of what we do, if not everything. That is getting very close to the point (if not past it) where other features of the balance may be more important than the resolution. In any event, it is always a good idea IMO to learn to make these estimates for yourself. In the long run, you will benefit from being able to do so.
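As one illustration of making such an estimate yourself, using assumed numbers (a 1.5 gr total weight spread across the batch, and a rule of thumb, also an assumption, that the balance should resolve about a fifth of the per-group range):

```python
# Assumed inputs for the estimate:
total_spread_gr = 1.5   # heaviest minus lightest case in the batch
group_range_gr = 0.5    # per-group range, matching the ~0.5 gr figure above

num_groups = round(total_spread_gr / group_range_gr)   # light / medium / heavy

# Assumed rule of thumb: resolve roughly 1/5 of the group range,
# so cases near a group boundary land in the right bin.
needed_resolution_gr = group_range_gr / 5              # 0.1 gr
```

With these inputs the arithmetic lands on three groups and a 0.1 gr balance, consistent with the conclusion above; a different cartridge or brass lot would change both numbers.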