Pt2: Independent evaluation versus the Four Horsemen of the Negative Relationship Apocalypse
Author: Matt Tett (Enex TestLab) March 2022
It’s perfectly normal to feel a mix of hope and apprehension when starting a new relationship with any business. But if you’re trying to build a strong relationship, it pays to be honest from the start.
If you’re a vendor, when I say ‘you’ I mostly mean the business and products you represent. But I do also mean the actual ‘you’ too.
So, though it’s common knowledge that Enex TestLab is all about independent testing – and you’d think vendors would be wise to this – we’ve seen and heard every excuse for not cooperating with us. None of them original.
We’ve seen the panic on vendors’ faces when they get an inkling there may be more than just a desktop audit of their slick marketing and pricing tables… and we’ve heard some mind-blowing blowback after we’ve gently explained we need to prove their shiny widget will deliver real value.
Hey, if you have nothing to hide, then our independent testing will help prove your claims are true! But if you do have something to hide and your widget fundamentally doesn’t widget the way it is supposed to… then by all means: panic!
Gottman gotcha: 4 negative relationship archetypes
All we’re asking is please don’t behave like any of The Four Horsemen of the Negative Relationship Apocalypse.
We’ve dealt with them all before. And they’re not big or clever:
It’s a given that projects need to be completed in a timely manner and independent evaluation is no different. It’s science, but not rocket science (unless we’re evaluating satellite technologies … which we’ve done).
You know the feeling when you’re trying to complete a report and you see a message that an application ‘unexpectedly quit’?
That’s what it’s like when we’re working on an independent evaluation and an account rep tries to block or slow down the process.
We’ve had stacks of legal documents thrown at us (NDAs, MNDAs, loan agreements, evaluation agreements, shipping requests, etc.), through to demands to Cease and Desist with the evaluation for a plethora of reasons. Some vendors have even claimed they have anti-testing/evaluation clauses in their widget EULA (despite offering evaluation licensing *sigh*).
We’ve been asked to sit through endless marketing PowerPoints and meetings with account reps, which seem to serve no other purpose than to try to persuade us they deserve some advantage for their widget over what the client has asked for.
And we’ve even been told (so many times) that the product is not “currently” available, it’s stuck in international transit, or that it would be better if we wait for the better v2+ widget coming out “shortly”. Stonewalling is as pointless as hitting a snooze button more than twice: you snooze, you lose.
The next step is typically defensiveness, along the lines of “Please don’t poke at our baby widget”. Then when we explain the whole point of independent testing, the smarter vendors drop the defensive barriers and let us get on with it. But the not-so-smart ones escalate things, sometimes bringing in their product specialists to spew criticism or contempt at our testing officers.
We’ve heard every accusation under the sun, but the stupidly common one is that our testing officers aren’t fully proficient in the nuances of a vendor’s special widget. In some ways they’re right: the testing engineers are very qualified generalist technology engineers. So if a widget really is special and they need help examining it, they defer to the experts working with the vendor, as it should be. No harm, no foul, no ego.
But that doesn’t excuse vendors who say things like:
- “A ‘true’ third party evaluation cannot be valid unless the assignment is to test every single bolt-on ‘value add’ – also, don’t dare overlook our USPs.”
(Sure, some people like heated seats in a utility vehicle but what their customer really wants to know is does the ute perform its core functionality well? Can it actually carry a ton of weight at 80 km/h reliably? That’s what the client wants verified, and they can decide themselves if their employees will benefit from warm bottoms.)
- “Have you looked at XYZ report or an industry-sponsored self-assessment method which shows the vendor is number one in its class?”
(Thrusting a whitepaper or other sponsored report at us to show a widget was tested in-house and passed 100% is pointless. The point is third-party evaluation, not first party.)
A doozy is when a very defensive vendor starts chucking their weight around, making demands like “You have to show me your exact methodology”, or “Tell me right now who you are competing against in the shortlist”.
Look, the bottom line is you have to be in it to win it. If a vendor doesn’t want their product evaluated against a competitor’s product we can happily remove them from the shortlist and the race if they like.
That’s not blackmail: it’s always their choice for evaluation to continue. Interestingly, in the three decades-plus we’ve been operating no one has ever withdrawn from the thousands of widget evaluations we’ve performed at the lab. Surprisingly, many have gone on to be selected as the preferred supplier.
The truth about what happens when we evaluate vendors
First up, when there’s an issue on our side, we’re honest about it: yes, sometimes test officers do hit engineering roadblocks and there are unexpected anomalies with the testing results.
When that happens, we re-check our method and rig, and (contrary to some of the 4 Horsemen’s belief) we lock down as many variables as can be controlled.
We develop multiple methods to verify the results, including correlating and comparing results from alternate measurement techniques to ensure rigour and accuracy. We want repeatable and accurate results – not anomalies.
We then take those consistent results and compare them with both the vendors’ claims and what the other widgets under evaluation have achieved.
If we’ve found discrepancies or unaccountable anomalies – and we cannot find fault with the methodology or test equipment – we will engage with the vendor and call on their technical expertise to determine if there is a configuration error or product fault in the particular widget we tested.
And we won’t proceed until we’ve found the cause of the anomaly.
So yes, benefit of doubt is absolutely given, as it should be.
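The cross-checking described above – repeatable runs, then correlation across alternate measurement techniques before engaging the vendor – can be sketched as a toy example. The function names, thresholds and throughput figures below are purely illustrative assumptions, not Enex TestLab’s actual methodology.

```python
# Toy sketch of cross-checking results from two measurement techniques.
# All names, thresholds and figures are illustrative assumptions only.
from statistics import mean, stdev

def is_repeatable(runs, max_cv=0.05):
    """Repeated runs count as repeatable if the coefficient of
    variation (stdev/mean) stays under max_cv."""
    return stdev(runs) / mean(runs) <= max_cv

def agrees(primary, alternate, tolerance=0.10):
    """Two measurement techniques agree if their averaged
    results differ by less than the relative tolerance."""
    a, b = mean(primary), mean(alternate)
    return abs(a - b) / max(a, b) <= tolerance

# Hypothetical example: throughput (Mb/s), five runs per technique.
rig_a = [941, 939, 944, 940, 942]   # e.g. hardware traffic generator
rig_b = [930, 935, 928, 933, 931]   # e.g. software-based capture

if is_repeatable(rig_a) and is_repeatable(rig_b) and agrees(rig_a, rig_b):
    print("Results correlate - accept")
else:
    print("Anomaly - re-check method and rig, then engage the vendor")
```

Only when both techniques are individually repeatable *and* mutually consistent would a result be compared against vendor claims; anything else triggers the re-check loop first.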
No vendor in our experience has ever been penalised by a client for re-submitting a product for evaluation because the first one they submitted happened to have a one-off fault. These things can happen with any technology.
Ultimately, customers need to know the products they’re investing in will match their requirements. Part of our job is to independently verify (or not) the key claims made by vendors. Another really important part is to help customers decide which products are the best fit for purpose.
So, what does independent procurement evaluation cost?
Here’s our spiel: independent evaluation of vendors and products adds value – and reduces risk. Our services represent a very small fraction of the total cost of the procurement, and far less than the cost of project failure.
What’s the point of third-party testing for vendors?
If you’re a vendor it pays to voluntarily have your widget independently tested – particularly if you have a unique product that doesn’t quite fit in a traditional technology niche, meaning your customers can’t compare apples with apples.
Third-party verification of your product is an excellent way to prove it’s better than an apple, or an orange. And that it’s not a lemon.
What’s the point of third-party testing for technology buyers?
If you are in contracts and procurement, independent evaluation of multiple vendors’ shortlisted products will help you make informed decisions – backed by scientific evidence – about which products will best meet the scope of requirements.
And you won’t have to wade through piles of marketing claims ever again.