The Dayton Hamvention 2018 Question – How Independent is that Test Lab anyway?
The run-up to Dayton 2018 was punctuated by competing claims about what measurements one lab got versus what other labs recorded on several radios.
I am certain other writers have covered the reported numbers, what those numbers mean in real life, and the ins & outs of why measurements might differ.
But I want to comment on a deeper intrigue that has developed. Not the numbers themselves, as the fact that the very same identical radios appear to produce different numbers when tested in different labs is somewhat interesting, but perhaps less intriguing when some background events are considered.
The real intrigue appears in well-correlated reports that the independence of one widely cited lab is now completely uncertain, after that supposedly independent lab sought payment from Manufacturers in the form of retainers.
Obviously a lab cannot claim to be independent while in the pay of one or more Manufacturers.
Some labs are transparent with any potential bias, handling it openly for all to see & understand.
For example, the league’s lab buys its test samples with league funds, tests them, and then eventually sells them at auction. They feel this best distances the lab from any appearance of bias arising from the league accepting advertising revenue from the makers of the very products it tests.
The bias of manufacturers’ in-house labs is fairly clear. I think we all understand that potential for bias, and even though they most likely test each other’s products, manufacturers don’t publish test results for competitors’ products.
Certification labs have an interesting bias, as they are unlikely to overstate performance or minimize problems: it is their reputation & certification that are potentially at risk. Of course we all understand that the Manufacturer foots the bill for certification testing, but the Manufacturer isn’t supposed to be able to exert control over the test results.
But what of a non-affiliated lab that asks to be paid by some of the manufacturers whose products it tests? And what if that request and any resulting payments are kept from the consumers who look to the resulting test rankings for guidance? And what if that payment proposal was to be put on retainer, essentially to be paid on a regular, ongoing basis to act as an advocate for the firm paying?
One might understand if a non-aligned lab looked to amortize its expenses by charging a known, set fee for any product to be tested. That is rather like the process behind many of the testing houses we trust for consumer goods. Generally the bias is handled in a way everyone is comfortable with, and it is removed from any trust concerns about the test results.
But if a lab has started asking for retainers, which in a testee-tester situation pretty much feel like backhanders, how can we trust that lab’s results now or ever again?
It simply isn’t possible.
In the run-up to Dayton 2018, one lab appears to have acted against a manufacturer who told them “No, we will not put you on retainer.” Radios that, when tested on automated, calibrated test gear, confirmed or exceeded that manufacturer’s advertised numbers were suddenly reported as deficient by this lab. At the same time, the lab went public with information provided ahead of Dayton, despite knowing the information had a Dayton release date. Then they didn’t even get the information they released early correct.
Oh, did I mention that it happened to be the exact individual radios whose specifications automated, certified gear had confirmed that suddenly had their individual performance questioned? Was there a real problem, was it an uncalibrated test gear issue, or was there something to the requested backhanders being refused?
And what do we make of the gear that did test well – are those “good tests” or payback-for-retainers?
What a mess.
To make the whole issue more of a mess, the levels of performance being tested exceed any discernible difference to the end user. Instead of these tests being real world, acknowledging a performance level above which differences, while measurable, are neither necessarily repeatable nor offer any discernible improvement, they have been hyped to suggest an end user could tell the difference.
The only end user that might be able to tell the difference in test results is another piece of test gear!
I happen to own radios from three brands whose real-world numbers I like well enough to continue owning them: FlexRadio Systems, TenTec and Collins. Specifically the Flex-6000 series, the TenTec Pegasus/Jupiter/Omni-VII radios, and the Collins S-Line/KWM-2A/380 radios.
But there are plenty of other operators who have found MANY other radios offering a performance package that THEY prefer. I know of one ham who makes a point to negatively comment on most every FlexRadio forum post (in forums that let him) as his experience and opinion truly run against FlexRadio. That’s A-Okay, for him. Let me repeat “for him.”
Ditto with hams who perhaps scorn all radios other than FlexRadio System’s radios.
Or ditto with those who favor radios that have high scores from testing, especially when we know that at least one supposedly non-affiliated test lab may in fact be affiliated by retainer payments.
If you are going to report tested results, the test methods, the testing lab, and the product acquisition all need to be trusted, beyond reproach. Being paid by, or even asking for payments from, ANY of the product manufacturers breaks our trust and makes the test reports a sham.
YMMV, and yes, I avoided specifically naming the exact lab, as the hobby perhaps owes them a chance to come clean, fix the trust issue, or simply retire.