I’m not an election pundit, but I am interested in how voting sentiment surveys are run.
And based on my experience so far, they’re hit and miss. That’s because they typically test small samples of the broad Australian population, then extrapolate those results to vaguely predict the real-world election outcome.
I say ‘vaguely predict’ because we’ve seen just how wildly pre-election polling inputs can differ from the only election data collection point that really matters: the ballot box.
Sure, Australian voters can be unpredictable. But some of the voting prediction results are ridiculously inaccurate because their data collection – and testing – methods aren’t accurate to begin with:
• Simplistic tests – you can’t reach good conclusions if you don’t ask all the questions you need answered to weigh up your options, let alone inform a final decision.
• Unrepresentative sample inputs – the decisions you make about which sample data inputs to include in your tests (surveys) can skew the results from the start. In the case of election polls, you need to cross-reference all kinds of demographic data to help you select a sample group of people who might be representative of all Australian voters. Yes, that’s a complex challenge, even for expert psephologists (political scientists who examine and analyse election data).
• Faulty testing – the old total quality management adage – ‘you can’t manage (or improve) what you don’t measure’ – is an apt warning for anyone who wants to test something. You need to choose your test inputs carefully, so you get relevant data to help you make decisions. But you also need the measurement method to be robust, repeatable and reproducible. And you need robust and accurate measuring tools.
Worse, though, is not even testing for the faults that appear when something is put under stress. Because when your customers experience those faults, your reputation is damaged at the very least. At worst, you cause damage to your customers.
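To put a number on the unrepresentative-sample problem, here’s a toy simulation in Python. Every figure in it is hypothetical: the point is simply that if one subgroup is easier to reach and happens to lean differently from the wider population, polling only that subgroup misses the true picture.

```python
# Toy sketch (hypothetical numbers): why an unrepresentative sample skews a poll.
import random

random.seed(1)

# Pretend 30% of voters belong to an easy-to-reach subgroup that prefers
# candidate A at 65%, while the other 70% prefer A at only 45%.
# Overall, true support for A works out to 51%.
population = ([1] * 65 + [0] * 35) * 30 + ([1] * 45 + [0] * 55) * 70

subgroup = population[:3000]  # the only people the poll can actually reach
biased_poll = sum(random.sample(subgroup, 500)) / 500
true_support = sum(population) / len(population)

print(f"true support = {true_support:.2f}, subgroup-only poll = {biased_poll:.2f}")
```

No amount of extra sampling from the same subgroup fixes this: the bias comes from who can be reached, not from how many responses are collected.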
Why robust test methods matter
The televised debates between political leaders before the Federal election were frustrating for a lot of reasons. As someone obsessed with robust testing, I was particularly frustrated by the methods one of the commercial networks used to test voting sentiment in real time.
The first input method wasn’t bad: a QR code on screen inviting viewers to scan it on their phone to open a survey on a linked webpage. Most Australians got comfortable with QR codes during the pandemic, so the risk of user error was low.
But then the weaknesses in the testing method were on full display. The network readily admitted some of these weaknesses: “these polls are not scientific and reflect the opinion only of visitors who choose to participate.” Here’s what I found frustrating:
• Simplistic tests – the survey asked very simple questions, such as: 1. Who would make the better PM? 2. Who won the debate? 3. Who are you more likely to vote for? Obviously, the test just measured viewer opinion at a moment in time, but the headline result was all about who won the debate: a 50/50 split. I wish they’d had something like the old data ‘worm’ visual showing real-time changes in sentiment throughout the bout, round by round, because then the final score for the debate would have been more conclusive. Still, it’s the result at the ballot box that matters.
• Unrepresentative sample inputs – there were two big issues with the sampling: 1. Its total potential inputs (900,000+ people, apparently) were skewed from the start, because it only measured the sentiment of people willing to watch live TV on that Sunday night AND take part in the poll. 2. Even if you chose to do the survey, you might not have been able to… because the input mechanism broke under pressure several times and a lot of people’s ‘votes’ weren’t counted, so there’s no way a representative sample of inputs was captured.
• Faulty testing – I’m not sure if the web service hosting the debate survey was load tested beforehand, but clearly it wasn’t up to the task of handling tens or hundreds of thousands of inputs all at once on that Sunday night. Is it ironic that the measurement tools for testing voter sentiment during an unrepeatable and unreproducible test (the debate) weren’t robust and accurate? Maybe.
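As a sketch of what basic load testing checks for, here’s a toy example in Python. The capacity limit, timings and function names are all hypothetical (nothing here reflects the network’s actual system): a service that fails fast once its concurrency limit is reached will drop submissions when far more arrive at once than it was built to handle.

```python
# Toy load-test sketch (hypothetical capacity and names): hammer a
# concurrency-limited "survey endpoint" and count dropped submissions.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 5  # pretend the web service can only handle 5 requests at once
_slots = threading.Semaphore(MAX_CONCURRENT)
_lock = threading.Lock()
accepted = rejected = 0

def submit_vote(choice: str) -> bool:
    """Simulated endpoint: rejects immediately when all capacity is in use."""
    global accepted, rejected
    if not _slots.acquire(blocking=False):
        with _lock:
            rejected += 1
        return False
    try:
        time.sleep(0.01)  # simulate the time taken to record a response
        with _lock:
            accepted += 1
        return True
    finally:
        _slots.release()

# The load test: fire 200 simulated viewers at the endpoint, 50 at a time,
# far beyond the 5-request capacity.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(submit_vote, ["A"] * 200))

print(f"accepted={accepted}, rejected={rejected}")
```

Running even a crude simulation like this before the broadcast would have revealed that submissions get dropped under peak load, which is exactly the failure viewers experienced on the night.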
Independent testing can answer questions you hadn’t thought to ask
Whether you’re evaluating a person, product, service or organisation, the quality of information available to you influences the quality of your decisions.
Before making a decision, testing your assumptions is a good start. Better still, get those assumptions tested independently.
Enex TestLab has been helping organisations make decisions about products and services for 30+ years by designing and conducting tests to answer their questions – plus we suggest important considerations they might have missed.
We scientifically test products and services to get robust data across a huge range of purchase considerations including suitability, performance, durability, compatibility and value.
We also help developers and vendors improve their offerings by testing all those purchase considerations, plus we offer independent certifications and claims validation.
If you don’t test, you won’t know
The public failure of the Boeing 737 Max airplane design is an extreme example of what can go wrong when testing methodologies fall short. Boeing’s corporate reputation, business and value plunged following accusations it was too focused on profits and on producing aircraft cost-efficiently.
Vendors and developers might run their own tests before going public with a new offering, but their cultures are driven more by selling products. So they risk testing only what they think the market needs tested, based on what they already know.
Our culture is driven by testing. Importantly, our independent testing methodologies are robust, repeatable and reproducible. Our methods are systematically tested and verified by independent third parties: Enex TestLab is ISO 9001 certified and ISO/IEC 17025 accredited. They’re scientific.
The aim of scientific testing isn’t just to test what you know – it’s also to find out what you don’t know. If a product isn’t scientifically tested by an independent third party, you won’t get all your assumptions tested, and you might not discover issues before you go public.
So, what don’t you know?