The story in 2026 is not going to be about more data or new tools. It’s going to be about the widening gap between what companies spend and what they can prove.
The challenge is not finding data but knowing which data to trust, how they connect, and when the evidence is strong enough to scale. The patterns are familiar:
- Overinvesting: budgets grow faster than validation.
- Overlooking: foundational work on data hygiene and definitions goes unnoticed and underfunded.
- Not yet capitalizing on: simple, high-impact opportunities sit unused.
Here we look at each gap: where it shows up and how leading teams are closing it.
1. Overinvesting
Overinvesting happens when budgets chase clean-looking charts before the evidence is there.
- When Retail Media Spend Runs Ahead of Proof
Retail media budgets keep climbing faster than the evidence behind them. Dashboards often promise lift that disappears once you strip out baseline demand or overlapping ads. When the numbers come from the seller, efficiency almost always looks better than it is.
That tension is starting to show. Finance teams are asking tougher questions about incrementality, and a growing number of advertisers are linking retail media results directly to loyalty or store data, so offline sales aren’t ignored. The leading indicator of maturity is visibility: marketing, insights, and finance can all see the same proof and agree on what counts as lift.
A few networks are also moving in the right direction by publishing incrementality reporting for qualified campaigns, sometimes even offsite. That’s where spend is now tilting, because buyers are rewarding transparency. In 2026, the path to budget is shifting from presentation to method.
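As a rough illustration of what agreeing on “what counts as lift” can look like in practice, here is a minimal sketch in Python: exposed versus holdout conversion rates, with the baseline made explicit. The column names and figures are illustrative, not any network’s schema.

```python
import pandas as pd

def incremental_lift(df: pd.DataFrame) -> dict:
    """Compare exposed vs. holdout conversion rates and report lift."""
    rates = df.groupby("group")["converted"].mean()
    exposed, holdout = rates["exposed"], rates["holdout"]
    return {
        "exposed_rate": round(exposed, 4),
        "holdout_rate": round(holdout, 4),  # the baseline demand dashboards often ignore
        "absolute_lift": round(exposed - holdout, 4),
        "relative_lift": round((exposed - holdout) / holdout, 4),
    }

# Illustrative data: 1,000 exposed households (5.8% converted)
# and 1,000 holdout households (5.0% converted).
df = pd.DataFrame({
    "group": ["exposed"] * 1000 + ["holdout"] * 1000,
    "converted": [1] * 58 + [0] * 942 + [1] * 50 + [0] * 950,
})
print(incremental_lift(df))
```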
- When Legacy Trackers Can’t Keep Up
Traditional brand trackers were built for slower marketing cycles. They’re still useful for measuring long-term brand health, but they can’t keep up with campaigns that change every few weeks.
That’s why many teams now use faster performance reviews for campaign decisions and keep trackers focused on brand equity. They use the same review rhythm and rules across both, so no one has to learn a new system.
Some teams also add simple creative recall questions to see whether people actually remember specific ads. That helps link changes in brand or sales performance to what was shown, not just to how much was spent.
- When Attention Data Gets Stuck
Attention data only adds value when it’s built into how teams work. Leading teams use it as a quality signal in their models and as a trigger for when to rotate creative. They also keep things consistent by using one provider per market so results are comparable. Before rolling out attention at scale, they test different vendors to see which one best predicts the outcomes they care about.
Industry studies have already shown that attention vendors can produce very different results, which is why local validation matters before those numbers influence real budgets.
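That local validation can be simple: line up each vendor’s scores against an outcome the team already trusts and see which one actually predicts it. A minimal sketch, with hypothetical vendors and figures:

```python
import pandas as pd

# Hypothetical placement-level data: each vendor's attention score alongside
# an outcome the team already trusts (e.g., measured lift per placement).
placements = pd.DataFrame({
    "vendor_a_attention": [0.62, 0.41, 0.77, 0.55, 0.30, 0.68],
    "vendor_b_attention": [0.50, 0.52, 0.49, 0.51, 0.48, 0.53],
    "observed_lift":      [0.08, 0.03, 0.11, 0.06, 0.01, 0.09],
})

for vendor in ["vendor_a_attention", "vendor_b_attention"]:
    corr = placements[vendor].corr(placements["observed_lift"])
    print(f"{vendor}: correlation with observed lift = {corr:.2f}")
```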
- When Data Systems Don’t Connect
Many teams still manage data partnerships in silos. Each partnership gets its own bespoke setup, which can seem efficient at first but quickly drives up cost and slows learning. Leading insights teams now design integrations that work across partners, using shared definitions and consistent data joins, so evidence can be compared side by side instead of being rebuilt each time.
When connections are planned from the start, tests between partners and retailers become faster, cheaper, and directly comparable, giving teams a clearer view of what’s really driving performance.
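In practical terms, “shared definitions and consistent data joins” can be as modest as mapping each partner’s export into one agreed schema before anything is compared. A minimal sketch; the field names and retailer data are hypothetical:

```python
import pandas as pd

# Shared definitions every partner integration maps into (names are assumptions).
SHARED_SCHEMA = ["household_id", "exposed", "sales"]

def normalize(df: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    """Rename a partner's export to the shared schema and keep only those fields."""
    return df.rename(columns=column_map)[SHARED_SCHEMA]

# Two partners, two different exports, one comparable table.
retailer_a = pd.DataFrame({"hh_key": [101, 102], "saw_ad": [1, 0], "spend_usd": [42.0, 18.5]})
retailer_b = pd.DataFrame({"household_id": [201, 202], "exposed": [1, 1], "sales": [30.0, 12.0]})

combined = pd.concat([
    normalize(retailer_a, {"hh_key": "household_id", "saw_ad": "exposed", "spend_usd": "sales"}),
    normalize(retailer_b, {}),
], ignore_index=True)
print(combined)
```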
2. Overlooking
Strong measurement depends on clean data and clear definitions. Without them, even the best dashboards rest on weak ground, and the most expensive mistakes often start here.
Survey data still supports most brand and campaign readouts. But opt-in panels can drift, copy-paste answers fill open-ends, and AI-assisted responses often look fine while adding no real value. Basic controls help but aren’t enough.
Leading teams now add calibration. At least once a year, they align their non-probability trackers with a probability benchmark, so they can tell real change from panel drift. Address-based studies such as NPORS exist for that purpose.
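The mechanics of that calibration can be modest: reweight the tracker so its mix matches the probability benchmark, then compare the raw and calibrated readings. A post-stratification sketch in Python, with made-up shares and awareness figures:

```python
import pandas as pd

# Illustrative tracker sample: skews young relative to the benchmark.
panel = pd.DataFrame({
    "age_band": ["18-34"] * 600 + ["35-54"] * 300 + ["55+"] * 100,
    "aware":    [1] * 360 + [0] * 240 + [1] * 150 + [0] * 150 + [1] * 40 + [0] * 60,
})

# Population shares taken from the probability benchmark (figures are assumptions).
benchmark_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

panel_share = panel["age_band"].value_counts(normalize=True)
panel["weight"] = panel["age_band"].map(lambda band: benchmark_share[band] / panel_share[band])

raw = panel["aware"].mean()
calibrated = (panel["aware"] * panel["weight"]).sum() / panel["weight"].sum()
print(f"raw awareness {raw:.1%} vs calibrated {calibrated:.1%}")
```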
- When Cross-Media Is Still in Pilot
Everyone wants a single view of reach and frequency across channels, but most programs are still in pilot. Origin in the UK has just moved beyond beta, and Aquila in the US continues to report progress. Although important, these efforts are not yet sufficient to close the gap.
That’s why many teams rely on their own view for now. They run overlap reports regularly, share the error margins, and apply simple spending rules that cut investment when reach stops growing and frequency climbs too high. As the new systems stabilize, those manual checks can shift to automated ones.
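Those spending rules do not need to be sophisticated to be useful. A sketch of one such rule; the thresholds are placeholders for whatever a team agrees on, not recommendations:

```python
def spend_adjustment(reach_growth_pct: float, avg_frequency: float,
                     reach_floor: float = 0.5, freq_cap: float = 6.0) -> str:
    """Simple rule: trim spend when incremental reach flattens and frequency climbs."""
    if reach_growth_pct < reach_floor and avg_frequency > freq_cap:
        return "cut: paying for repeat impressions, not new reach"
    if reach_growth_pct < reach_floor:
        return "hold: reach has plateaued"
    return "maintain or scale: reach is still growing"

# Example: reach grew 0.2 points last period while average frequency hit 7.5.
print(spend_adjustment(reach_growth_pct=0.2, avg_frequency=7.5))
```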
- When Mix Models Drift from Reality
Privacy changes and new open-source tools have brought mix modeling back, but even precise numbers can create false confidence. The best teams keep their models grounded in real data. They run at least one real experiment each quarter in the channels that drive the most spend. They use enough history to see patterns in trade, seasonality, and distribution clearly.
They also set their own assumptions for how ads wear out and how returns slow as budgets rise, instead of relying on default settings. When they have clean-room access, they pull in retailer data so the model reflects actual shopper behavior.
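Those two assumptions, carryover and diminishing returns, are easy to make explicit rather than leaving to a library default. A minimal sketch with illustrative parameters, not recommended values:

```python
import numpy as np

def adstock(spend: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """Geometric carryover: part of last week's effect persists into this week."""
    out = np.zeros_like(spend, dtype=float)
    for week, amount in enumerate(spend):
        out[week] = amount + (decay * out[week - 1] if week > 0 else 0.0)
    return out

def saturation(adstocked: np.ndarray, half_saturation: float = 100.0) -> np.ndarray:
    """Hill-style curve: response rises quickly at low spend, then flattens."""
    return adstocked / (adstocked + half_saturation)

weekly_spend = np.array([0, 50, 120, 120, 30, 0, 0], dtype=float)
print(saturation(adstock(weekly_spend)).round(2))
```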
Finally, they keep a simple table that shows how confident they are in each result. When experiments and models do not align, they fix the design, not the story.
3. Not Yet Capitalizing On
Some of the most valuable opportunities are already available. They remain underused not because they are complex, but because no one fully owns the loop from data to insight to decision.
- Using Retail Media Beyond Ads
Clean-room data can now shape more than media plans. Leading teams use it to test how a multipack performs across shopper groups, how a small price change affects new-to-brand buyers, and whether short promotions pull sales forward or drive real growth.
They feed these results back into pricing, pack design, and promotion planning, not just advertising. The result is a faster loop that links exposure to product and trade outcomes in the same quarter.
- Turning Attention Data into Action
Attention data only adds value when it informs real decisions. Teams that use it well treat attention as a quality signal in their models and as a rule for rotating creative, not as a slide headline.
They track when attention scores drop for a creative and switch assets before that decline wastes budget. If vendor data shows no link to business outcomes, they keep it in research and stop paying for it in-market.
Quarterly trackers cannot match today’s campaign speed. The technology already supports faster readouts; the difference is ownership. High-performing teams assign one person per market to run the readout, one creative lead who can act on it, and one finance partner who signs off on changes. With those roles defined, decisions move faster and loops close naturally.
- Embedding Experiments in Everyday Work
Strong teams treat every major campaign as a chance to add causal evidence. They run geo tests for TV and retail, randomized controls for digital, and holdouts for retail media. Plans are registered, promotions are locked, and results are logged along with the actions that followed.
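At its simplest, the geo-test readout is a difference-in-differences between test and control regions, before and during the campaign. A sketch with made-up figures:

```python
import pandas as pd

# Illustrative sales totals for matched test and control regions.
sales = pd.DataFrame({
    "region": ["test", "test", "control", "control"],
    "period": ["pre", "campaign", "pre", "campaign"],
    "sales":  [1000, 1180, 950, 980],
})

pivot = sales.pivot(index="region", columns="period", values="sales")
test_delta = pivot.loc["test", "campaign"] - pivot.loc["test", "pre"]
control_delta = pivot.loc["control", "campaign"] - pivot.loc["control", "pre"]
print(f"incremental sales attributable to the campaign: {test_delta - control_delta}")
```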
Over time, this evidence base becomes a shared record between insights and finance. Clean-room history now allows teams to study what happens after a lift, who stayed, who churned, and whether gains were pulled forward or truly incremental.
Some retailers, including Amazon, now provide up to five years of purchase signals, making these lifetime views more reliable than ever.