
My CM1500 Capability +/- 0.05 gr

CharlieNC

Many threads continue regarding the capability of the CM1500, upgrading, tuning, etc. I've used one for several years, starting with an initial assessment to understand how it works and then modifying it based on those results. Over time I have continued to monitor its capability, and for critical loads such as F-Class I confirm every charge on an independent scale. It is not perfect, and like all auto dispensers it is subject to overthrows, but with only a little work its capability is much better than most think. The following is my assessment.

Scale Resolution and Operation
After the pan is tared, the system performs an auto-zero each time the pan is placed on the platen. This can be seen when the pan is set down and the display reads something other than zero: the unit will not operate until it has reset itself to zero.

While the display resolution is 0.1 gr, the internal resolution is finer. The weighed result is not rounded, but truncated down to the next 0.1 gr for the logic that decides whether to add more powder or stop. How do I know this? Using a fine ball powder that weighs approximately 0.01 gr per kernel, kernels were added one at a time until the display increased by one 0.1 gr step. This took slightly more than 10 kernels, which is an important observation: the display registered a change only after approximately 0.11 gr had been added, and it did not register at approximately 0.09 gr. This means trickling to the desired result is not based on rounding (which would stop short of the goal); it stops once a small change in the 0.01 gr range exceeds the goal. Similarly, an overcharge of up to 0.09 gr does not register as a problem. Bottom line: fine trickling with the smallest increments possible will give less variable charge weights.
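
For anyone who wants to play with the idea, here is a minimal sketch (Python) of the stop logic as I infer it; the 0.01 gr internal step and all names and values are my assumptions for illustration, not anything published for the CM.

```python
# Minimal sketch of the inferred logic: the internal reading resolves to roughly
# 0.01 gr, the display truncates (not rounds) to 0.1 gr, and trickling stops only
# once the internal reading meets or exceeds the target. Assumptions, not firmware.
def display_value(internal_gr: float) -> float:
    """Truncate the internal reading down to the 0.1 gr display step."""
    return int(internal_gr * 10) / 10.0

def keep_trickling(internal_gr: float, target_gr: float) -> bool:
    """Add more powder only while the internal reading is below the target."""
    return internal_gr < target_gr

# 24.99 gr still shows 24.9 and keeps trickling; 25.01 gr stops and shows 25.0;
# 25.09 gr also shows 25.0, so an overage under 0.10 gr is never flagged.
for w in (24.99, 25.01, 25.09, 25.11):
    print(f"{w:.2f} -> display {display_value(w):.1f}, keep trickling: {keep_trickling(w, 25.0)}")
```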

Parameter Adjustment
Much has been reported already, so I will only add a couple of main points.

The weight factors HSBA1-A3 are well known and are used to decrease total charging time, while C1 sets a slower speed for the best trickling control. Optimum settings for these depend on the size of the charging tip and the powder coarseness (kernel size and weight).

In addition, the subsequent parameter MSPA2 can be decreased to maintain a higher charging speed until the unit enters trickle mode.

Tip Size
This has a significant effect on charge uniformity because it determines the weight of the incremental drop during trickling. I have evaluated the standard tip (0.31" ID), a commercial metal insert (0.18" ID), and a plastic tip cut from a Bic pen (0.16" ID). The size of the individual trickle dumps visually corresponds to the ID of the tip. A recent evaluation gave results similar to those I had determined previously, so this trial has effectively been replicated several times.

Capability
Weights were independently taken on a Bald Eagle scale with 0.01 gr resolution. A separate pan was used so that it could be tared before weighing every sample from the CM. To assess its capability, a 25 gr checkweight was weighed.

Then the CM capability was determined using a fine ball powder. Ten charges were made using each of the tips at a target of 25.0gr.

The summary statistics follow (all values in grains):

        BE       Std CM Tip   Bic Tip   Metal Tip
Mean    25.033   25.039       24.959    24.994
SD      0.005    0.072        0.030     0.033
ES      0.01     0.21         0.11      0.09
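
For reference, the table is just the mean, sample standard deviation, and extreme spread of each set of ten weighings. A quick way to reproduce those numbers (with placeholder data, not my actual measurements) is:

```python
# Mean, sample SD, and extreme spread (ES) for a set of charge weights.
# The list below is placeholder data for illustration, not the measured charges.
from statistics import mean, stdev

charges_gr = [25.02, 24.98, 25.00, 25.03, 24.97, 25.01, 24.99, 25.00, 25.02, 24.96]

print(f"Mean {mean(charges_gr):.3f}  SD {stdev(charges_gr):.3f}  "
      f"ES {max(charges_gr) - min(charges_gr):.2f}")
```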

Tared before every weighing, the BE's capability is excellent and well suited for this evaluation.

While the SDs of the metal and Bic tips appear better, are they statistically different?
[Attached: statistical comparison output for the tip SDs]

Both tips are statistically better than the standard CM tip because the incremental drops are smaller and more consistent. Taken together, this small evaluation and long-term experience show the CM will deliver most charges within +/- 0.05 gr when dispensing fine powder through a small tip.
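
For anyone who wants to repeat the check, a standard way to compare two sample SDs is an F-test on the variance ratio. The sketch below uses the SDs from the table above and assumes ten charges per tip; it requires scipy.

```python
# F-test comparing charge-weight variance of the standard tip vs. the Bic tip,
# using the SDs from the summary table and assuming n = 10 charges per tip.
from scipy.stats import f

n1 = n2 = 10
sd_standard, sd_bic = 0.072, 0.030

F = (sd_standard / sd_bic) ** 2              # ratio of sample variances
p_upper = f.sf(F, n1 - 1, n2 - 1)            # P(F >= observed) if variances were equal
p_two_sided = 2 * min(p_upper, 1 - p_upper)

print(f"F = {F:.2f}, two-sided p = {p_two_sided:.3f}")  # well under 0.05
```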

Fine trickling is the governing factor in achieving consistent results on the CM, and the tip configuration is a critical component. Extreme care is also necessary to avoid vibration during final trickling, as this can drop an unusually large clump, and the CM will not register that as a problem unless the overage exceeds 0.10 gr.
 

Yup, fine trickling is important to get the best results. And my little effort to see how different powders weigh out on the CM appears to bear out that finer powders give better measuring results, though I only went as fine as Varget.

[Attachment: Scale Comparison.jpg]
 
Interesting about truncating as opposed to rounding. It actually only makes a difference when comparing weights against a finer-resolution scale.

Thanks for the effort and sharing!
 
Good to see you are also getting similar SDs, verifying that the CM is capable of much better results than many think.
 
I see the CM as little different from other reloading scales in that you need to pay attention to it just the same. For instance, to get good use out of a triple beam balance (like the one in the Prometheus), you have to watch it, learn it, and get good with it.

You can get really good with the CM also, as the scale does at least respond well.
I slightly disturb my pan to force a re-reading with each completed charge, and I get vital validation from this.
I watch the trickle response anyway, which gives me a good or bad feeling about the result. Disturb the pan and watch again. If the scale wanders around my desired weight and takes a while to lock in, it isn't quite right; it's off a kernel or two (which I usually suspected by that point). If the disturbed scale locks dead onto my desired weight quickly, and I felt good about it while trickling, I know it's right.
If I believe I'm over, I dump the charge back in the dispenser. Not gonna fiddle with picking out a kernel.
If I believe I'm short a kernel, I jog one in, disturb the pan, and watch it lock on.
This is no different from how I operate a balance.

My CM has all the common mods.
 
...While the display is 0.1gr, the internal resolution is finer. The weighed result is not rounded off, but truncated to the lesser 0.1 result for the logic to either add more or stop. How do I know this?...
https://en.wikipedia.org/wiki/Truncation_error
"Occasionally, by mistake, round-off error (the consequence of using finite precision floating point numbers on computers), is also called truncation error, especially if the number is rounded by chopping. That is not the correct use of "truncation error"; calling it truncating a number may be acceptable, but then why create confusion."

...While the display is 0.1gr, the internal resolution is finer. ...
The converter in the CM is not better or finer than the display. Think of it like a grey-scale limit based on the number of bits of resolution: the more bits you have, the better the resolution of the conversion; if the bits are not there, the resolution does not exist.

1 LSB = FSR / (2^n - 1) for an n-bit converter. On top of that we add linearity error, quantization error, etc.
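
To put numbers on that (purely hypothetical, since the CM's converter bit count and full-scale range aren't published here):

```python
# LSB window = FSR / (2^n - 1) for an ideal n-bit converter.
# The full-scale range and bit counts below are hypothetical examples, not CM specs.
def lsb_window(full_scale_range_gr: float, n_bits: int) -> float:
    return full_scale_range_gr / (2 ** n_bits - 1)

for n_bits in (12, 14, 16):
    print(f"{n_bits}-bit over 1500 gr: {lsb_window(1500.0, n_bits):.3f} gr/LSB")
```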

When you are discussing statistical distributions expected to be normal or Gaussian, the sample size needs to be closer to 30, or extended until the SD-to-ES relationship looks more like 1 to 6 or the histogram fills in smoothly. Otherwise you are under-sampled for making extrapolations.
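
A quick simulation shows the underlying point: for normally distributed charge weights the ES-to-SD ratio keeps growing as more samples are taken, so a handful of throws understates the true spread. Illustrative only; the 0.03 gr sigma is an assumed value, not measured data.

```python
# Simulated ES/SD ratio vs. sample size for normally distributed charge weights.
# Assumed sigma of 0.03 gr; purely illustrative.
import random
from statistics import mean, stdev

random.seed(1)

def avg_es_over_sd(n: int, trials: int = 2000, sigma: float = 0.03) -> float:
    ratios = []
    for _ in range(trials):
        x = [random.gauss(25.0, sigma) for _ in range(n)]
        ratios.append((max(x) - min(x)) / stdev(x))
    return mean(ratios)

for n in (5, 10, 30, 100):
    print(f"n = {n:3d}: ES is about {avg_es_over_sd(n):.1f} x SD")
```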

Curious about the use of the settings, the tips, and the fine powder. What happens to the cycle time when optimized to something like Varget or H4350?
 

I really don't have the electronics background to know whether the display has an integrated A/D converter and operates as you describe, or whether those functions are electronically separate. It doesn't matter much to me, since experimentally I was able to decipher how it operates logically, which affects how to think about the optimization options. As I mentioned, the limited stats I presented replicate what was found in numerous other investigations, and even the limited sample size was quite sufficient to conclude that the tip selection had a significant effect on variability.

For larger extruded powders I have not found the parameter optimization to be much different, to a first approximation, meaning a degree of fine tuning is still possible. Finer incremental trickling is achieved with smaller-ID tips, but larger kernels lead to larger incremental dumps even with smaller tips, which can be observed by watching how many incremental dumps are required to change the display by 0.1 gr. Yes, smaller tips require a longer total cycle time; that is the cost of finer trickling, which is the primary limiting factor for reducing variability. The powder type may have an effect on cycle time, but because that is secondary to my primary objective of uniformity I have not looked at it closely. My tuning strategy is to run the initial charging as fast as it can physically go until about 0.2 gr short of the goal, and then transition to as slow as possible for trickling.
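
If it helps to picture the strategy, here is a toy sketch of the two-phase approach. The simulated dispenser and its increment sizes are made up for illustration; they are not the CM's actual internals or settings.

```python
# Toy model of the two-phase strategy: bulk-charge at full speed until ~0.2 gr
# short of the goal, then trickle the smallest increments possible to finish.
# The dispenser class and its increment sizes are assumptions, not CM internals.
import random

class SimulatedDispenser:
    def __init__(self):
        self.pan_gr = 0.0
    def read_weight(self) -> float:
        return self.pan_gr
    def dispense_fast(self):
        self.pan_gr += 0.10                          # coarse high-speed delivery
    def trickle_increment(self):
        self.pan_gr += random.uniform(0.005, 0.02)   # one small trickle drop

def charge(d: SimulatedDispenser, target_gr: float, slow_margin_gr: float = 0.2) -> float:
    while d.read_weight() < target_gr - slow_margin_gr:   # phase 1: run fast
        d.dispense_fast()
    while d.read_weight() < target_gr:                    # phase 2: fine trickle
        d.trickle_increment()
    return d.read_weight()

random.seed(2)
print(f"Final charge: {charge(SimulatedDispenser(), 25.0):.3f} gr")
```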
 
Thanks for posting the work. Don't worry about the internal design. You have friends here for that. Those design choices are always driven by cost versus performance trades. If the CM had another half digit, it would have cost more.

You don't need to be an engineer to test the scale or share your work. I just keep the post honest because some of us are scientists and engineers who try to point things in the right direction while trying not to go overboard.

My post isn't meant to do anything but make sure we don't drift into solidifying the wrong terminology or concepts when they detract from the good stuff you did bring to the table.

The concept is that when you test dispersive behavior that tends toward a bell curve, you run the test out far enough to fill in the curve. When the ratio between the ES and the SD isn't close to 6:1, take a look at the data and ask whether you ran enough samples. Testing some things costs a lot of money, but something like taking 30 samples isn't any trouble.

There is a difference between checking an established, well-known process that has a mountain of data behind it and testing one that is unknown. The test you did comparing the standard deviations of the stock tip and the modified tips was still a good one, in that it certainly points to the ability to tune the CM to a purpose and improve its performance. That is worth the trouble.

We are also trying to make sure folks don't get lost between a small sample of an unknown or unproven process and a small sample used to verify a restart or a new batch of an old, well-established process.

If you run a CM test out far enough, what you will find is that the 0.1 gr LSB window fills in. It just takes more samples than 10, and a noisy powder shows it faster, is all. If you had some hypothetical 0.0100 gr granules and could make a nearly straight-line test where you just added those kernels and recorded the window on the CM, it would look like the chart in Figure 1 of this document (but remember the actual load cell isn't perfect, so keep reading).

The CM is a load-cell-based scale. Once we know the full-scale range and the LSB window, it isn't hard to find out whether the overall uncertainty will be driven by that window. If they had an extra half digit in them, they would have shown it.

Load-cell-based scales are known well enough that we don't bother spending on 7-digit voltmeters or ratiometric meters for them, because the transducer has real linearity and noise issues that make it not worth chasing the noise. The CM is a fairly balanced design in that respect, which also makes it affordable.

I'm still a CM user and own things like the FX120i, Sartorius, Mettler, etc., and I even still run a tuned-up beam balance once in a while. However, when I run a CM, I run more than one at a time for a batch session. That gives me the time to wait for each one to trickle and finish reading out. With tuning and good calibration discipline, they do their job well enough for the money.

I like and agree with your post, I was just cleaning up the engineering and stats concepts. Thanks again for sharing the work.
 
