How Insight Teams Should Use AI Agents Without Losing the Plot

“Generative AI is a foundational technology like electricity was.” 

Speaking with James (JT) Turner on Research Revolutionaries, Tina Tonielli, a senior insights and analytics leader and former North America Lead for Consumer and Business Insights and Analytics at Haleon, uses that comparison to set expectations. There is no neat playbook for how to use AI in insights. You learn by trying things, seeing what holds up, and rebuilding what doesn't. 

The teams that get real value will be the ones that treat AI as something to shape, not something to consume passively. 

“If you can teach it your way of doing things, that agent is your new IP.” 

Agents, training, and the early “virtual agency” experiments she has seen matter because they change the starting point. But they only pay off if someone keeps the work tied to the business question and carries it through to application. As the mechanics get easier, that human job becomes more important, not less. 

You can watch or listen to the full podcast episode here: https://www.research-revolutionaries.com/e13-cheaper-faster-but-is-it-better-ais-impact-on-consumer-research/  

 

The “Virtual Agency” as a First Pass

 

Tina has seen plenty of AI experiments already, and some of them are starting to look familiar. 

“I’ve seen a lot of persona work,” she says. “There’s been a lot of experiments with personas.” 

Personas can be useful, but they’re not the part that really changes how work moves. The more interesting shift is what happens when AI starts behaving less like a tool you prompt once and more like a set of roles you can run a project through. 

She’s seen one approach that goes beyond personas. 

“A company created almost like a virtual agency where they had different agents actually filling the roles.” 

The useful part is not the label, but what it lets a team do early. The faster you can get something on the table, the faster the team can start working like a team, instead of circling around opinions and assumptions. 

“You could run almost like a pseudo project through the whole way to creative.” 

What comes out isn’t the finished work, but something you can push on. 

“It’s not perfect,” she says. But it can be enough to move the work forward: a rough first pass that gives the team something concrete to respond to. 

“I don’t see that as the endpoint,” she says. “I see that as a different starting point.” 

It’s a first-draft problem, not a final-answer problem. If a “virtual agency” output gets treated as finished thinking, it will flatten the work. If it gets treated as a first draft, it can do what first drafts are supposed to do: speed up the work that follows, not replace it. 

Training Is Where Agents Start to Feel Like Yours

 

A lot of teams are still using AI in a one-off way. Prompt in, output out. That can be handy, but it does not change much about how the work runs week to week. 

Tina gets interested when the model can be shaped over time. 

“When you train the models, you can give it a little more agency,” she says. “That’s an agent. If you can teach it your way of doing things, that agent is your new IP.” 

What she means by that is the real differentiators inside an insights function: how you frame questions, how you define a category, what you consider “good” evidence, the standard ways you pressure-test a brief, the patterns you trust because you’ve lived through enough cycles to know what holds. 

If an agent can start from that, you’re no longer starting from a generic internet version of your work. You’re starting from your own definitions and your own habits of thinking. 

But, “it needs to partner with a human.” 

Training can raise the quality of the first pass, but it doesn’t remove responsibility for judgment. It does not decide what matters or what will land in a room full of stakeholders. The agent can help you begin. The team still has to do the thinking that makes the work worth anything. 

Mechanics Are Not the Same as Value

 

Tina’s argument is not really about agents. It’s about where insights work breaks when organizations mistake capability for impact. 

She’s been in companies with serious analytics capability. Big teams that can run the models and produce the outputs. And still, something doesn’t connect. 

“What they didn’t have was as much of a grounding in the business and the application of what they were doing.” 

You can have strong analytics and still struggle to create impact if the work is disconnected from the business problem on the way in and from the business decision on the way out. 

“There’s this upfront business problem definition, and then this backend application.” 

If you lose either end, you can produce endless output and still struggle to create value. 

And when the mechanics get easier, that risk gets larger, with AI quietly making things worse by generating something that just looks good enough for people to stop asking questions. 

That surface neatness is exactly what can lull teams into skipping the work that makes insights usable: getting clear on the real problem and carrying the output into a decision. 

“When we fully decouple the mechanics of how you run the analysis from that front and back end,” she says, “I think you don’t get the full value out of it.” 

The Steward Role That Keeps the Chain Intact

 

AI will keep improving. The mechanics will keep getting easier. The part that will not get easier is navigating the work through the business. 

“That might mean you’ve got more of a steward role,” she says. “Someone who can move between data scientists, commercial partners, and data stewards.” Someone who can “navigate that through the funnel.” 

That’s the job a lot of insights leaders already do, even if it’s not written on the org chart. It’s the ability to hold the thread from the first business question to the final business moment. 

It’s also the ability to spot when the work is drifting: 

  • When the team is building something impressive that won’t be used. 
  • When people are producing output because they can, not because it’s tied to a decision. 
  • When the language gets technical enough that stakeholders stop engaging, or vague enough that everyone nods and nothing changes. 

A steward catches those early. They’re the ones who hold the problem steady while everything else speeds up. They’re also the ones who can sit with technical teams and business teams without flattening either side. Even in organizations with strong data science, the missing piece is often the connection work.  

“You don’t have to run the analytics yourself,” she says. “You just need to really understand business problems and really understand how to apply that.” 

Where AI Becomes Genuinely Useful

 

Tina doesn’t dismiss efficiency uses. Summaries can help. Drafting can help. The time savings are real. But when she talks about what she actually wants from AI, she keeps pulling away from “more” and toward “clearer.” 

She talks about effectiveness as pulling insight together, across places, and using it to sharpen thinking. 

“Effectiveness is maybe pulling insights from a bunch of different places,” she says. 

She’s also interested in tools that help with one of the most expensive failure points in insight work: vague problem statements. 

“I’ve seen some really interesting things where they can train models to help define problem statements.” 

That’s not glamorous, but it’s where projects live or die. A weak problem statement gives you a weak study, no matter how good the fieldwork is. A clear problem statement can make even a modest study useful. And she’s honest about the limits of “more data” as a default answer. 

“I struggle with that a little bit,” she says, “because I feel like I have so much data already.” 

That’s a familiar feeling inside insight teams. The challenge isn’t always access. It’s making sense of what’s already there, then translating it into something a stakeholder can act on. 

“If you can find something to help me pull that story together and influence my stakeholders and better define a problem statement,” she says, “that’s actually where I need the most help right now.” 

AI is worth the effort when it helps teams with synthesis, clarity, and influence. When it helps the work survive real meetings with real people. If it simply adds more output into an already crowded system, it can make things worse while looking like progress.

A Better Starting Point Doesn’t Mean a Smaller Role

 

Tina’s electricity analogy lands because it’s a reminder to stop waiting for certainty. There is no tidy “AI strategy” that lasts. There is only building, testing, learning, and adjusting. 

Agents and “virtual agency” experiments are useful because they change the starting point. They can help teams move faster into the part of the work that actually creates value: critique, refinement, sensemaking, decision support. 

But that only works if the human work gets taken more seriously, not less. The part that AI doesn’t solve is the part that decides whether insights matter inside a business. As the mechanics get easier, the steward role is what keeps the chain intact when it would be easier to let the work fragment into drafts, outputs, and abandoned ideas. 

Because agents can draft, but someone still has to drive. 

You can watch or listen to the full podcast episode here: https://www.research-revolutionaries.com/e13-cheaper-faster-but-is-it-better-ais-impact-on-consumer-research/  
