“The question seems simple – what about fraud and quality? But it masks a ton of complexity,” said Giles Palmer, CEO of Cint, when I chatted with him on my “Research Revolutionaries” podcast.
It’s such a complex and important issue that the industry is working to figure it out together, for example, through the Global Data Quality project.
So, how are panel vendors working to combat market research fraud and deliver quality? What role do research agencies and brands play as well? Based on my podcast episode with Giles, this article explores the nuanced landscape.
Defining the Scope: Types of Data Fraud
The first step is to separate the distinct issues that get erroneously bundled under the banner of survey fraud.
“There are poorly written surveys, bad incentives, technical glitches that have nothing to do with deceit,” Giles said. Painting these as fraud creates misguided solutions.
More precisely, there are three buckets:
Engagement issues – Respondents lose interest due to boring questions, repetitive items, overly long surveys, or insufficient pay. Dropping out or straightlining questions might say more about the survey than the respondent.
Technology gaps – Mismatches between respondent supply and demand mean respondents receive misrepresented or inappropriate surveys. That certainly creates frustration, but it isn't deceit.
Malicious fraud – Respondents deliberately misrepresent themselves or actively work around quality checks, including through bots and automation. This is the only category that constitutes true fraud. And while malicious fraud steals the spotlight, issues like poor design shouldn't escape blame.
“We as an industry do a lot of self-harm here that we need to solve first,” Giles urges. “Treating respondents with more care and respect would go a long way.”
Owning Accountability to Prevent Survey Fraud
Research partners, agencies/researchers, and brands can help solve this problem together. After all, the problems stem from multiple sources.
To combat malicious market research fraud, panel vendors implement ever-evolving technical measures for detecting bots and suspicious behavior. For example, Cint checks each survey response against an internal trust score, assessing hundreds of data points for signs of automated activity.
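The mechanics of such a trust score are proprietary, but the general idea can be illustrated with a minimal sketch. The signals, thresholds, and penalty weights below are hypothetical stand-ins, not Cint's actual model:

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    seconds_taken: int     # time spent completing the survey
    answers: list          # answers to a block of same-scale rating questions
    known_device: bool     # device fingerprint seen before with good history
    duplicate_ip: bool     # IP address already used for this survey

def trust_score(r: SurveyResponse, median_seconds: int = 300) -> float:
    """Return a 0-1 trust score; lower means more suspicious.

    Each check subtracts an illustrative penalty from a perfect score.
    Real systems weigh hundreds of signals, not four.
    """
    score = 1.0
    # Speeding: finishing far faster than the median suggests automation.
    if r.seconds_taken < median_seconds * 0.3:
        score -= 0.4
    # Straightlining: identical answers across a whole rating block.
    if len(r.answers) >= 5 and len(set(r.answers)) == 1:
        score -= 0.3
    # An unknown device and a reused IP each add suspicion.
    if not r.known_device:
        score -= 0.1
    if r.duplicate_ip:
        score -= 0.3
    return max(score, 0.0)
```

A response with varied answers, a typical completion time, and a known device keeps a full score, while a speeding, straightlining response from a reused IP drops to the floor and would be routed for rejection or review.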
However, panel providers have historically lacked visibility into the survey creation process, which creates the very designs and incentives catalyzing engagement issues.
“Cint doesn’t see the actual surveys,” Giles said. “So when we pass on poor designs that frustrate respondents, how can we take full responsibility for the outcomes?”
Here is where market researchers must own their central role. Survey design, length, reimbursement models, and communication directly impact the legitimacy and reliability of data. Overly long, boring, repetitive surveys guarantee disengaged responses no matter how ethically the panel vets respondents.
Surveys designed around extracting data rather than respecting respondents' time and perspective compound the problem. Building empathy for respondents internally helps shape designs and incentives more thoughtfully.
Lastly, brands also need to be accountable for budgets and specifications that allow only minimal quality control. Lower prices and faster field times put enormous strain on sample, frequently at the cost of added verification measures. A commitment to representation and reliability must be reflected in each link of the survey supply chain.
Getting Proactive: How Can Technology and Collaboration Help Drive Data Quality?
With engagement issues so pivotal, panel vendors seek ways to provide upstream design guidance to improve experiences proactively. That’s quite different from reactively screening out disengaged results.
“We really need to be able to review surveys before they go into the field to eliminate problems from the start,” Giles says. “Large language models, best practices, and other tools can help improve designs.”
He also points to creative formats beyond surveys, like chatbots and interactive storytelling, that produce higher engagement.
Collaboration is another opportunity, Giles highlights.
“We need open dialogues with researchers on how we can help each other get better,” Giles said.
Additionally, panel vendors continue to evolve advanced techniques for identifying and combating malicious fraud. Artificial intelligence models keep getting better at teasing out patterns, such as improbable response consistencies, that signal automation rather than human respondents. Other options include adding biometric checks like facial recognition and iris scans to confirm real users.
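One such consistency pattern is several "respondents" submitting identical answer sequences, a common signature of scripted automation. A hedged sketch of that single heuristic (the function name and threshold are illustrative, and production models combine far more signals):

```python
from collections import defaultdict

def flag_duplicate_patterns(responses: dict, min_group: int = 3) -> set:
    """Flag respondent IDs whose full answer sequence is shared by
    min_group or more respondents.

    `responses` maps respondent ID -> list of answers. Illustrative
    heuristic only; identical sequences can occur legitimately on
    short surveys, so real systems treat this as one signal among many.
    """
    groups = defaultdict(list)
    for respondent_id, answers in responses.items():
        groups[tuple(answers)].append(respondent_id)

    flagged = set()
    for ids in groups.values():
        if len(ids) >= min_group:
            flagged.update(ids)
    return flagged
```

Given four respondents where three share the exact sequence `[1, 2, 3]`, the function flags those three and leaves the fourth untouched.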
But Giles cautions against an arms race of fighting automated tools with ever more advanced tools.
“The evidence of actual AI bots posing as human respondents remains low,” he comments. “As threats emerge, focus stays on thoughtful and ethical precautions.”
Driving Lasting Results: What Does Progress Look Like from Here?
As an industry, let’s align to drive meaningful change. That can include:
Increase focus on respondent experience metrics.
“If we see uptake on tools helping reward respondents better and reduce repetitiveness, we’ll know researchers feel accountable too,” Giles said.
Ensure speed does not sacrifice quality.
“If clients worry less about benchmark sample sizes and give us the flexibility to ramp verification as needed, it’s progress,” notes Giles.
Collaboration matters.
He also predicts advisory roles emerging between panels and research teams – spanning preferred vendor lists down to bespoke consultation on optimizing specific survey performance.
Through collaboration and shared accountability, the agents powering the insights ecosystem can elevate quality to the forefront – earning back diminished trust in the underlying online sample model that makes market research so accessible for so many.