"
Nice explanation of wt. classifications and when each class is normally required."
What they don't cover is which weight values you need to confirm full scale versus which ones you need for small check weights or sensitivity tests. The referenced NIST document does.
I'll point out some logic errors many folks make.
Full Scale Calibration vs. a Full Scale CHECK.
A full-scale check, without invoking Cal, can confirm your scale is still "in cal."
Doing a calibration resets the scale factor, and you lose a stability data point.
Most scales will read the cal weight dead-on right after a re-cal, so that reading by itself proves nothing.
Why does accuracy at full scale matter? Consider an example:
Cal with a 100.0001 gram weight.
Now re-cal with a 100.0101 gram weight (10 milligrams heavier).
The scale measurement SLOPE will have changed by 0.01 PERCENT.
With a GOOD ZERO, that 0.01 percent slope change will make most readings LOW.
A true 50 grains will now read 49.995 grains.

That's about one count on my EJ-54D2 on its low range.
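
If you want to see that math, here is a quick sketch in Python using the numbers from the example; the only assumption is that the scale assigns its nominal 100.000 grams to whatever weight is on the pan during cal.

```python
# Sketch of the re-cal example above: a 10 mg heavier cal weight
# shifts the slope and drags every later reading low.
old_cal_g = 100.0001   # weight used for the original cal
new_cal_g = 100.0101   # re-cal weight, 10 milligrams heavier

# Assumption: the scale calls whichever weight it sees during cal
# exactly 100.000 g, so the gain changes by the ratio of the two.
slope_change_pct = (1 - old_cal_g / new_cal_g) * 100
print(f"slope change: {slope_change_pct:.3f} % low")            # ~0.010 %

true_charge_gr = 50.0
indicated_gr = true_charge_gr * old_cal_g / new_cal_g
print(f"a true 50.000 gr charge reads {indicated_gr:.3f} gr")   # ~49.995 gr
```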
Linearity errors, usually a limit of the design, are generally stable (relative to overall scale stability). Some scales use software and linearity-check weights to correct most of this error. For us, it's the weights at the low end of the scale's range that matter.
A 1, 2, 2, 5, 10 gram set gives you the ability to check every whole gram from 1 to 20.
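
A quick brute-force check (just a sketch; the weight values are the ones named above) confirms that every whole gram from 1 to 20 is covered:

```python
# Enumerate every subset of the 1, 2, 2, 5, 10 gram set and collect
# the sums, then confirm 1 through 20 grams are all reachable.
from itertools import combinations

weights = [1, 2, 2, 5, 10]
reachable = set()
for r in range(1, len(weights) + 1):
    for combo in combinations(weights, r):
        reachable.add(sum(combo))

print(sorted(reachable))
print(all(g in reachable for g in range(1, 21)))  # True
```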
A sensitivity weight doesn't need to be super accurate either, just small, on the order of one or two counts, to prove the scale can detect that magical "one kernel of Varget."
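
As a rough illustration of sizing that weight, here is the count math; the 1 mg count and the ~0.02 grain kernel figure are assumptions for the sketch, not specs for any particular scale or lot of powder.

```python
# Rough sizing of a sensitivity weight: it only needs to span a count or two.
GRAINS_PER_GRAM = 15.4323584

count_size_g = 0.001                        # assumed one display count on a 1 mg scale
count_size_gr = count_size_g * GRAINS_PER_GRAM
kernel_gr = 0.02                            # assumed weight of one kernel of Varget

print(f"one count  = {count_size_gr:.3f} gr")                        # ~0.015 gr
print(f"one kernel = {kernel_gr:.3f} gr "
      f"(~{kernel_gr / count_size_gr:.1f} counts)")                  # ~1.3 counts
```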
IF mo better scale accuracy isn't in your stars, then none of this really matters.