Market researchers have embraced artificial intelligence at a staggering pace, with 98% of professionals now incorporating AI tools into their work and 72% using them daily or more frequently, according to a new industry survey that reveals both the technology's transformative promise and its persistent reliability problems.
The findings, based on responses from 219 U.S. market research and insights professionals surveyed in August 2025 by QuestDIY, a research platform owned by The Harris Poll, paint a picture of an industry caught between competing pressures: the demand to deliver faster business insights and the burden of validating everything AI produces to ensure accuracy.
While more than half of researchers — 56% — report saving at least five hours per week using AI tools, nearly four in ten say they've experienced "increased reliance on technology that sometimes produces errors." An additional 37% report that AI has "introduced new risks around data quality or accuracy," and 31% say the technology has "led to more work re-checking or validating AI outputs."
The disconnect between productivity gains and trustworthiness has created what amounts to a grand bargain in the research industry: professionals accept time savings and enhanced capabilities in exchange for constant vigilance over AI's mistakes, a dynamic that may fundamentally reshape how insights work gets done.
How market researchers went from AI skeptics to daily users in less than a year
The numbers suggest AI has moved from experiment to infrastructure in record time. Across the full sample, 39% use AI about once per day, while another 33% use it "several times per day or more," according to the survey, which was conducted August 15–19, 2025. Adoption is accelerating: 80% of researchers say they're using AI more than they were six months ago, and 71% expect to increase usage over the next six months. Only 8% anticipate their usage will decline.
"While AI provides excellent assistance and opportunities, human judgment will remain vital," Erica Parker, Managing Director of Research Products at The Harris Poll, told VentureBeat. "The future is a teamwork dynamic where AI will accelerate tasks and quickly unearth findings, while researchers will ensure quality and provide high-level consultative insights."
The top use cases reflect AI's strength in handling data at scale: 58% of researchers use it for analyzing multiple data sources, 54% for analyzing structured data, 50% for automating insight reports, 49% for analyzing open-ended survey responses, and 48% for summarizing findings. These tasks, traditionally labor-intensive and time-consuming, now happen in minutes rather than hours.
Beyond time savings, researchers report tangible quality improvements. Some 44% say AI improves accuracy, 43% report it helps surface insights they might otherwise have missed, 43% cite increased speed of insights delivery, and 39% say it sparks creativity. The overwhelming majority — 89% — say AI has made their work lives better, with 25% describing the improvement as "significant."
The productivity paradox: saving time while creating new validation work
Yet the same survey reveals deep unease about the technology's reliability. The list of concerns is extensive: 39% of researchers report increased reliance on error-prone technology, 37% cite new risks around data quality or accuracy, 31% describe additional validation work, 29% report uncertainty about job security, and 28% say AI has raised concerns about data privacy and ethics.
The report notes that "accuracy is the biggest frustration with AI experienced by researchers when asked on an open-ended basis." One researcher captured the tension succinctly: "The faster we move with AI, the more we need to check if we're moving in the right direction."
This paradox — saving time while simultaneously creating new work — reflects a fundamental characteristic of current AI systems, which can produce outputs that appear authoritative but contain what researchers call "hallucinations," or fabricated information presented as fact. The challenge is particularly acute in a profession where credibility depends on methodological rigor and where incorrect data can lead clients to make costly business decisions.
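The shape of that validation work can be made concrete. The report names no specific tooling, so the sketch below is purely hypothetical: a researcher recomputes any statistic an AI summary asserts before it reaches a client. Every function and variable name here is invented for the example.

```python
# Hypothetical validation step: recompute an AI-claimed statistic from the
# raw data before trusting it. Names are invented for this sketch; the
# survey report describes no specific tooling.
import statistics

def mean_claim_holds(responses: list[float], claimed_mean: float,
                     tolerance: float = 0.01) -> bool:
    """Recompute the mean from raw responses and flag any gap between
    the AI's claim and the actual value beyond the given tolerance."""
    return abs(statistics.mean(responses) - claimed_mean) <= tolerance

# An AI summary claims average satisfaction is 4.2, but the raw scores
# average ~3.83: exactly the kind of plausible-sounding error that creates
# the re-checking work researchers describe.
raw_scores = [3.0, 4.0, 5.0, 4.0, 3.0, 4.0]
print(mean_claim_holds(raw_scores, claimed_mean=4.2))  # False
```

Checks like this catch arithmetic fabrications, but they only cover claims that map cleanly onto the raw data, which is part of why the oversight burden described below persists.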
"Researchers view AI as a junior analyst, capable of speed and breadth, but needing oversight and judgment," said Gary Topiol, Managing Director at QuestDIY, in the report.
That metaphor — AI as junior analyst — captures the industry's current operating model. Researchers treat AI outputs as drafts requiring senior review rather than finished products, a workflow that provides guardrails but also underscores the technology's limitations.
Why data privacy fears are the biggest obstacle to AI adoption in research
When asked what would limit AI use at work, researchers identified data privacy and security concerns as the greatest barrier, cited by 33% of respondents. This concern isn't abstract: researchers handle sensitive customer data, proprietary business information, and personally identifiable information subject to regulations like GDPR and CCPA. Sharing that data with AI systems — particularly cloud-based large language models — raises legitimate questions about who controls the information and whether it might be used to train models accessible to competitors.
Other significant barriers include time to experiment and learn new tools (32%), insufficient training (32%), integration challenges (28%), internal policy restrictions (25%), and cost (24%). An additional 31% cited lack of transparency in AI use as a concern, which could complicate explaining results to clients and stakeholders.
The transparency issue is particularly thorny. When an AI system produces an analysis or insight, researchers often cannot trace how the system arrived at its conclusion — a problem that conflicts with the scientific method's emphasis on replicability and clear methodology. Some clients have responded by including no-AI clauses in their contracts, forcing researchers to either avoid the technology entirely or use it in ways that don't technically violate contractual terms but may blur ethical lines.
"Onboarding beats feature bloat," Parker said in the report. "The biggest brakes are time to learn and train. Packaged workflows, templates, and guided setup all unlock usage faster than piling on capabilities."
Inside the new workflow: treating AI like a junior analyst who needs constant supervision
Despite these challenges, researchers aren't abandoning AI — they're developing frameworks to use it responsibly. The consensus model, according to the survey, is "human-led research supported by AI," where AI handles repetitive tasks like coding, data cleaning, and report generation while humans focus on interpretation, strategy, and business impact.
Nearly three in ten researchers (29%) describe their current workflow as "human-led with significant AI support," while 31% characterize it as "mostly human with some AI help." Looking ahead to 2030, 61% envision AI as a "decision-support partner" with expanded capabilities including generative features for drafting surveys and reports (56%), AI-driven synthetic data generation (53%), automation of core processes like project setup and coding (48%), predictive analytics (44%), and deeper cognitive insights (43%).
The report describes an emerging division of labor where researchers become "Insight Advocates" — professionals who validate AI outputs, connect findings to stakeholder challenges, and translate machine-generated analysis into strategic narratives that drive business decisions. In this model, technical execution becomes less central to the researcher's value proposition than judgment, context, and storytelling.
"AI can surface missed insights — but it still needs a human to judge what really matters," Topiol said in the report.
What other knowledge workers can learn from the research industry's AI experiment
The market research industry's AI adoption may presage similar patterns in other knowledge work professions where the technology promises to accelerate analysis and synthesis. The experience of researchers — early AI adopters who have integrated the technology into daily workflows — offers lessons about both opportunities and pitfalls.
First, speed genuinely matters. One boutique agency research lead quoted in the report described watching survey results accumulate in real time after fielding: "After submitting it for fielding, I literally watched the survey count climb and finish the same afternoon. It was a remarkable turnaround." That velocity enables researchers to respond to business questions within hours rather than weeks, making insights actionable while decisions are still being made rather than after the fact.
Second, the productivity gains are real but uneven. Saving five hours per week represents meaningful efficiency for individual contributors, but those savings can disappear if spent validating AI outputs or correcting errors. The net benefit depends on the specific task, the quality of the AI tool, and the user's skill in prompting and reviewing the technology's work.
Third, the skills required for research are changing. The report identifies future competencies including cultural fluency, strategic storytelling, ethical stewardship, and what it calls "inquisitive insight advocacy" — the ability to ask the right questions, validate AI outputs, and frame insights for maximum business impact. Technical execution, while still important, becomes less differentiating as AI handles more of the mechanical work.
The strange phenomenon of using technology intensively while questioning its reliability
The survey's most striking finding may be the persistence of trust issues despite widespread adoption. In most technology adoption curves, trust builds as users gain experience and tools mature. But with AI, researchers appear to be using tools intensively while simultaneously questioning their reliability — a dynamic driven by the technology's pattern of performing well most of the time but failing unpredictably.
This creates a verification burden that has no obvious endpoint. Unlike traditional software bugs that can be identified and fixed, AI systems' probabilistic nature means they may produce different outputs for the same inputs, making it difficult to develop reliable quality assurance processes.
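To see why, consider a minimal sketch, assuming a generic text generator rather than any real vendor SDK. The sampler below stands in for a model running at nonzero temperature: the same prompt can legitimately yield different strings, so an exact-match regression test will fail intermittently even when every output is acceptable.

```python
# Stand-in for a language model sampling at nonzero temperature; not a
# real vendor API. Same input, legitimately different outputs.
import random

def generate_summary(prompt: str) -> str:
    """Pretend model call: returns one of several equally valid
    completions, as a real model does when sampling."""
    completions = [
        "Satisfaction rose 5% quarter over quarter.",
        "Satisfaction increased about 5% versus last quarter.",
        "Quarterly satisfaction was up roughly five percent.",
    ]
    return random.choice(completions)

# A conventional QA check asserts output == expected. Ten calls with an
# identical prompt usually produce more than one distinct string, so the
# assertion fails unpredictably; QA has to compare meaning, not bytes.
outputs = {generate_summary("Summarize Q3 satisfaction trend") for _ in range(10)}
print(len(outputs))  # Usually 2 or 3, rarely 1
```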
The data privacy concerns — cited by 33% as the biggest barrier to adoption — reflect a different dimension of trust. Researchers worry not just about whether AI produces accurate outputs but also about what happens to the sensitive data they feed into these systems. QuestDIY's approach, according to the report, is to build AI directly into a research platform with ISO/IEC 27001 certification rather than requiring researchers to use general-purpose tools like ChatGPT that may store and learn from user inputs.
"The center of gravity is analysis at scale — fusing multiple sources, handling both structured and unstructured data, and automating reporting," Topiol said in the report, describing where AI delivers the most value.
The future of research work: elevation or endless verification?
The report positions 2026 as an inflection point when AI moves from being a tool researchers use to something more like a team member — what the authors call a "co-analyst" that participates in the research process rather than merely accelerating specific tasks.
This vision assumes continued improvement in AI capabilities, particularly in areas where researchers currently see the technology as underdeveloped. Only 41% currently use AI for survey design, 37% for programming, and 30% for proposal creation, yet most researchers consider these appropriate use cases, suggesting significant room for growth once the tools become more reliable or the workflows more structured.
The human-led model appears likely to persist. "The future is human-led, with AI as a trusted co-analyst," Parker said in the report. But what "human-led" means in practice may shift. If AI handles most analytical tasks and researchers focus on validation and strategic interpretation, the profession may come to resemble editorial work more than scientific analysis — curating and contextualizing machine-generated insights rather than producing them from scratch.
"AI gives researchers the space to move up the value chain – from data gatherers to Insight Advocates, focused on maximising business impact," Topiol said in the report.
Whether this transformation marks an elevation of the profession or a deskilling depends partly on how the technology evolves. If AI systems become more transparent and reliable, the verification burden may decrease and researchers can focus on higher-order thinking. If they remain opaque and error-prone, researchers may find themselves trapped in an endless cycle of checking work produced by tools they cannot fully trust or explain.
The survey data suggests researchers are navigating this uncertainty by developing a form of professional muscle memory — learning which tasks AI handles well, where it tends to fail, and how much oversight each type of output requires. This tacit knowledge, accumulated through daily use and occasional failures, may become as important to the profession as statistical literacy or survey design principles.
Yet the fundamental tension remains unresolved. Researchers are moving faster than ever, delivering insights in hours instead of weeks, and handling analytical tasks that would have been impossible without AI. But they're doing so while shouldering a new responsibility that previous generations never faced: serving as the quality control layer between powerful but unpredictable machines and business leaders making million-dollar decisions.
The industry has made its bet. Now comes the harder part: proving that human judgment can keep pace with machine speed — and that the insights produced by this uneasy partnership are worth the trust clients place in them.
Original Source: https://venturebeat.com/ai/98-of-market-researchers-use-ai-daily-but-4-in-10-say-it-makes-errors
