
Athlon Rangecraft Compact Chrono Tested

Here's the data promised by @Big_Daddy. As expected, ammo variation far overshadowed chronograph variation. All four chronographs performed well. You'll note missing shots in the data set; that's not the fault of the chronographs, the responsibility lands squarely on human shoulders. Forget to set the right velocity range and they will happily ignore the shot.

In my opinion, a decision between Garmin and Athlon should focus on the chronograph's usability, anticipated product support and one's budget. There's little difference in performance based on this test.


Nice work. My conclusion is that the Athlon does not pick up the projectile until it is 8-12 feet beyond the muzzle (same as the Oehler). The Garmin is a closer reflection of true muzzle velocity.
 

Maybe I missed it, but how did you conclude that? They work on the same principle and have extremely similar form factors.
 
I briefly scanned the data, and the Athlon lines up closer to the Oehler measurements taken 10'-12' from the muzzle, while both Garmins appear 8-10 fps faster.


Could be an aiming issue; compared to the other two radar units, the Athlon appears to be the outlier.
My conclusion may have been too hasty. It's a very small sample size, perhaps too small to conclude anything.
 
The observation by @59FLH misunderstands how Doppler radar ballistic chronographs actually work. ALL of these units 1) measure velocity downrange and back-calculate “muzzle velocity” (actually velocity-at-device), and 2) ping the bullet with multiple pulses, which are required to establish an appropriate decay ratio for this calculation.

Doppler radar ballistic chronographs, in every ping of the bullet, capture data which offers BOTH the distance at capture AND the velocity of the object. The internal chronometer knows when the pulse is transmitted and the echo returned, and the time between is used to calculate the distance of the object when pinged. The Doppler radar ALSO returns the instantaneous velocity of the object by measuring the frequency shift of the echo vs. the original signal. The units then record multiple pings (generally at around a 1300 Hz ping rate), which return multiple positions and velocities, allowing calculation of a decay ratio which is then used to back-calculate the “velocity at device” displayed as muzzle velocity. So “where the radar reads the bullet” is completely irrelevant - unlike optical chronographs, which only read the velocity at their physical position, Doppler radars know where they read each velocity, and read multiple positions and velocities to allow back-calculation of muzzle velocity.
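That back-calculation can be sketched in a few lines (a simplified illustration with made-up numbers, assuming a linear velocity decay over the short capture window; real units fit more sophisticated decay models across hundreds of pings):

```python
# Sketch: how a Doppler radar chronograph can back-calculate "muzzle velocity"
# from pings captured only downrange. Each ping yields (distance_ft, velocity_fps);
# a decay model fitted to those points is extrapolated back to distance 0
# (the device position). Hypothetical numbers; linear decay assumed.

def back_calculate_muzzle_velocity(pings):
    """Least-squares fit of velocity = v0 + slope * distance; returns v0."""
    n = len(pings)
    sum_d = sum(d for d, _ in pings)
    sum_v = sum(v for _, v in pings)
    sum_dd = sum(d * d for d, _ in pings)
    sum_dv = sum(d * v for d, v in pings)
    slope = (n * sum_dv - sum_d * sum_v) / (n * sum_dd - sum_d ** 2)
    v0 = (sum_v - slope * sum_d) / n  # velocity extrapolated back to the device
    return v0

# Even if the first capture is 8-12 ft downrange, the at-device velocity
# is still recovered, because each velocity's distance is known:
pings = [(8.0, 2795.2), (10.0, 2794.0), (12.0, 2792.8)]  # ~0.6 fps/ft decay
print(round(back_calculate_muzzle_velocity(pings), 1))  # → 2800.0
```

The point is that first-capture distance drops out of the math: the fit knows where each velocity was read, so the extrapolation to zero does not depend on where the first ping landed.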

I don’t want to overwhelm the thread, and I may not have shared these on this site yet, but here is some evidence from the experimentation I have been doing to compare several chronographs on the market:

I’m partway through a relatively large matrix of comparisons between several chronographs, and this “offset error” is a recurring problem for each of the 3 Athlon chronographs I have used for testing. It HAS improved since the original release: May and June were kinda ugly, with almost EVERY session showing these offset errors, but after the July firmware update that dropped to ~1/4-1/3 of sessions, and the magnitude of the offset shrank as well.

IMG_2142.jpeg

Here is an example of the offset error compared to other brands: the green lines are 2 Garmins, the orange lines are 2 LabRadar LX’s, and the pink lines are 2 Athlons.
IMG_2300.png

No, with this experiment I cannot definitively say whether any one unit is reading the “true velocity,” BUT these tests (multiple iterations so far) DO offer strong indication as to 1) whether one brand/model or another is delivering its promised precision, and 2) if so, whether BOTH units of a brand could be reading the “true velocity.”

Considering the promised precision of +/-0.1% for all of these units, for this 2800 fps load the two units of each brand (and really ALL units) should agree with one another within 5.6 fps, with neither exceeding +/-2.8 fps from “truth.” The Garmin units DID deliver that standard for every shot fired, never exceeding a 2.8 fps spread from one another (potentially +/-0.05% precision). The LabRadars averaged a 2.6 fps spread from one another, with a max of 15.2 fps disagreement (failing the +/-0.1% specification promise). The Athlons AVERAGED larger disagreement than their promised spread: 6.5 fps between the 2 units, with a max disagreement of 19.4 fps. Using a +/-0.1% specification, this data proves it is impossible for BOTH Athlons to be reading within spec of the “true velocity.” Seeing the agreement of the green and orange lines above, the Garmins and LabRadars agreed quite closely, with only a few shots across the LabRadars disagreeing by more than the promised spec, and NONE of the shots across the Garmins failing that promise.

In other words, both Garmins CAN be correct to true velocity, because they agree sufficiently within their spec that if one is right, the other also could be right. However, the Athlons both CANNOT be reading the true velocity on average, because if one is correctly reading within the promised 0.1%, the other is too far away to also be reading within that margin from truth.
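That agreement arithmetic can be sketched as follows (the 2800 fps load and +/-0.1% spec are from the test; the helper names and sample readings are illustrative):

```python
# Sketch: if each unit promises +/-0.1% of true velocity, two in-spec units can
# disagree by at most twice that tolerance (one at each edge of the band).

def max_allowed_spread(nominal_fps, spec_fraction=0.001):
    """Largest disagreement two in-spec units can show (+/-spec each side)."""
    return 2 * spec_fraction * nominal_fps

def both_can_be_in_spec(reading_a, reading_b, nominal_fps, spec_fraction=0.001):
    """Could some true velocity lie within spec of BOTH readings?"""
    return abs(reading_a - reading_b) <= max_allowed_spread(nominal_fps, spec_fraction)

print(round(max_allowed_spread(2800), 1))          # → 5.6 (fps)
print(both_can_be_in_spec(2800.0, 2819.4, 2800))   # 19.4 fps apart → False
print(both_can_be_in_spec(2800.0, 2802.8, 2800))   # 2.8 fps apart → True
```

So a 19.4 fps disagreement between two units rated +/-0.1% at 2800 fps means at least one of them must be out of spec, without ever needing to know the true velocity.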

IMG_3729.png

I have multiple replicates of these tests, and the same results persist - high or low offset disagreements by the Athlons, and higher volatility, which makes it impossible to consider the Athlons to be delivering their promised spec of +/-0.1% precision to truth.

I’m happy to answer any questions I can, and to fold additional scope into this matrix of comparison.
 

Nice work.

Were all six of those set up at the same time measuring the same shots?

If so, any concerns of interference or 'crosstalk'?

My Athlon should be here this weekend and I plan to run it parallel with my MagnetoSpeed to see how they correlate.
 
I really like the direct comparisons - it really shows the difference that the added price gets you. I am surprised the Athlons did so poorly. Usually the Chinese companies are better at reverse-engineering this kind of stuff.
 
Were all six of those set up at the same time measuring the same shots?

If so, any concerns of interference or 'crosstalk'?

Yes, and no.

The photo with 8 units mounted above was taken during my pre-experimental DoE phase, during which I determined interference potential, identified units which don’t play nice together, and worked out start-up and channel-assignment protocols to eliminate co-channel interference. The Caldwell didn’t play nice with one of the Athlons, so I cut it out. I had to reassign the LabRadar V1 channel, but aiming that thing is such a PITA that I cut it from that phase of the side-by-side test and added a specific phase to test those units individually against the Garmins as relative controls (I only have one V1 and one Caldwell on hand).

So the data series captured in the rest of the matrix were captured with the co-channel interference potential eliminated.

There are multiple replicates of these tests in which the Athlons bounce above and below the other brands, while the Garmins and LabRadars tend to track together within the expected precision. No, this agreement is not a guarantee that they are reading true velocity, but effectively, both Garmins and both LabRadars have to be WRONG if ONE of the Athlons is right (5 out of 6 must be wrong). Alternatively, all 4 of the LabRadar LXs and Garmins could be right.
 

Is one of your tests an isolation test of each pair to see how they performed without the other units? A baseline?

The linear velocity increase is also interesting. It looks like the Garmins have identified a couple of nodes :)

The Garmins are Taiwan manufactured, Athlon, China. Where are the LabRadars manufactured?
 
Is one of your tests an isolation test of each pair to see how they performed without the other units? A baseline?

Yes.

The linear velocity increase is also interesting. It looks like the Garmins have identified a couple of nodes :)

No nodes - the load is all the same. The shots are simply re-ordered by increasing velocity to make the data more visible. (Increasing average velocity across all 6 units - a relatively arbitrary choice, but I felt it best preserved visibility of the inherent noise of all units; if I had sorted by one individual unit’s velocity, that unit’s trend would look artificially smoother than the others.)
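That re-ordering amounts to a one-line sort (made-up readings; each row is one shot’s velocities across the units):

```python
# Sketch: sort shots by their mean velocity across all units, so per-unit
# offsets stay visible rather than being buried in shot-to-shot noise.
shots = [
    [2801.2, 2800.8, 2799.5],  # one row per shot: unit A, unit B, unit C
    [2795.0, 2794.6, 2793.9],
    [2803.1, 2802.5, 2801.0],
]
ordered = sorted(shots, key=lambda s: sum(s) / len(s))
print([round(sum(s) / len(s), 1) for s in ordered])  # → [2794.5, 2800.5, 2802.2]
```

Sorting by the cross-unit mean, rather than by any one unit's readings, avoids making a single unit's trace look artificially smooth.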

The Garmins are Taiwan manufactured, Athlon, China. Where are the LabRadars manufactured?

I believe in Canada still.
 
I am surprised the Athlons did so poorly.

Relatively speaking, the performance stability I am seeing from the Athlons remains as good as or better than we had from any of the ~2 ft long optical chronographs. They seem to deliver something around +/-0.35-0.45% precision. The offset error may improve in time, and is already intermittent, but the volatility really just looks so bad because it is being compared to the Garmin.
 
I'm waiting on Athlon's 2nd generation. Ahh, come on guys, you know there will be one.
In all reality it's probably plenty good enough for me now. I use a MagnetoSpeed V2.
 
Relatively speaking, the performance stability I am seeing from the Athlons remains as good as or better than we had from any of the ~2 ft long optical chronographs. They seem to deliver something around +/-0.35-0.45% precision. The offset error may improve in time, and is already intermittent, but the volatility really just looks so bad because it is being compared to the Garmin.
Have any of the firmware updates specifically included improvements to the offset error? I wonder if some of this amounts to “growing pains” similar to what the Garmin went through.
 
