ChatGPT Just Introduced CPC Bidding. Performance Marketers Have a Problem That Isn't in the Pricing.

"Is the click worth $3 to $5 given the quality of the intent behind it?"

The channel is now accessible to the buyers who care most about attribution. The measurement infrastructure they need is not there yet.


ChatGPT advertising launched in February 2026 as an enterprise CPM product. Ten weeks later it has a self-serve ads manager, CPMs down from $60 to $25, minimum commitments down from $250,000 to $50,000, and, as of this week, CPC bidding available to a subset of pilot advertisers at $3 to $5 per click.

Each of those steps has opened the channel to a different category of buyer. CPM at $60 with a $250,000 minimum was a brand awareness play for enterprise marketing teams. CPC at $3 to $5 with a $50,000 minimum is a performance marketing product. And performance marketers, as Digiday notes, account for the majority of online ad spend.

This is the moment ChatGPT advertising becomes a real budget conversation for a much wider set of advertisers. It is also the moment the upstream measurement gap becomes an urgent commercial problem rather than a theoretical one.


What the SEJ piece correctly identifies, and where it stops

Search Engine Journal's coverage notes that measurement tools are "limited and inconsistent" and that advertisers will be "evaluating ChatGPT clicks largely on faith" until OpenAI's reporting improves. That is accurate. But the measurement problem runs deeper than reporting latency or attribution methodology.

The issue is not that OpenAI's measurement tools are incomplete. The issue is that they measure the wrong moment.

CPC measures what happens after the click. CPM measures impressions served. The pixel measures post-click behaviour. None of these tools measure what the model recommended organically before the ad appeared: whether the brand was selected, weakened, or displaced in the reasoning chain before paid placement entered the conversation.

For a search or social campaign, this does not matter in the same way. A Google Search click means the user typed a query and chose a result. The intent is expressed at the point of click. The click is a reasonably reliable signal of commercial interest.

For a ChatGPT campaign, the model has often already reasoned through the category, evaluated the options, applied decision filters, and formed a recommendation before the ad fires. The Economist reported this week that nearly a third of ChatGPT ads appear after the tenth turn in a conversation. By turn ten, the purchase recommendation has typically already been made organically. The ad is entering a conversation the model has already concluded, for or against the brand.

A $3 CPC in that environment tells you a user clicked. It does not tell you whether the model had already recommended your competitor three turns earlier.


The intent quality question, reframed

The SEJ article correctly raises the intent quality question: are ChatGPT clicks worth more or less than Google Search clicks, given that ChatGPT users are in a different mode from search users?

The better question for performance marketers is not whether the click is worth more or less. It is whether the organic inference position at the point the ad fires is working for the brand or against it.

A ChatGPT click from a user who has been organically guided toward your brand through the model's reasoning chain is worth significantly more than a click from a user the model has already steered toward a competitor. Both produce a CPC event. Neither OpenAI's current reporting nor any third-party measurement tool currently distinguishes between them.

This is the performance marketer's equivalent of buying clicks without knowing the quality score, except that the quality score in this environment is not a number OpenAI publishes. It is the organic inference position, and it is currently invisible to every standard measurement tool in the stack.


Three states before any CPC decision is made

Before a performance marketer commits budget to ChatGPT CPC inventory, there is a prior question to answer: what is the brand's organic inference position, and does it support paid amplification?

AIVO Meridian classifies every brand into one of three states based on structured multi-turn buying sequences run across ChatGPT, Perplexity, Gemini, and Grok.

Amplify. The brand wins the T4 purchase recommendation organically. The model selects it when a buyer is ready to purchase without prompting. CPC spend in this state amplifies a decision the model is already making in the brand's favour. This is the only state in which ChatGPT CPC bidding is performing at full efficiency.

Monitor. The T4 outcome is platform-specific or contested. The brand wins on some platforms and loses on others. Selective CPC spend matched to platforms where the organic position is strong can be justified. Broad spend across all inventory without knowing which platforms are hostile is a significant risk. A brand in Monitor state on ChatGPT may be in Caution state on Perplexity, and as self-serve opens across multiple platforms simultaneously, that distinction will matter for budget allocation.

Advertise with Caution. The brand is eliminated at T3. The criteria filter fires before the purchase recommendation stage and a competitor takes T4. CPC spend in this state enters a conversation the model has already resolved against the brand. The click may occur. The conversion environment is hostile before the ad appeared. Spend in this state requires organic remediation before CPC investment is justified.

Across more than 7,000 structured buying sequences covering 160+ brands over twelve months, AIVO has found that 19 of 20 brands are in the Monitor or Caution state, meaning their organic inference position does not cleanly support paid amplification without remediation.
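The three-state logic above can be sketched in a few lines. This is an illustrative approximation only: AIVO Meridian's actual scoring is proprietary, and the field names and boolean inputs here are assumptions for the sake of the sketch, not the platform's methodology.

```python
# Hypothetical sketch of the Amplify / Monitor / Caution classification.
# 'wins_t4' and 'eliminated_t3' are assumed, simplified inputs: whether the
# brand wins the T4 purchase recommendation organically, and whether the
# T3 criteria filter eliminates it before that stage.

def classify_brand(platform_results):
    """Map per-platform sequence outcomes to one of the three states."""
    states = {}
    for platform, result in platform_results.items():
        if result["eliminated_t3"]:
            states[platform] = "Caution"   # model has resolved against the brand
        elif result["wins_t4"]:
            states[platform] = "Amplify"   # paid spend amplifies an organic win
        else:
            states[platform] = "Monitor"   # contested or platform-specific
    return states

states = classify_brand({
    "ChatGPT":    {"wins_t4": True,  "eliminated_t3": False},
    "Perplexity": {"wins_t4": False, "eliminated_t3": True},
    "Gemini":     {"wins_t4": False, "eliminated_t3": False},
})
print(states)
# {'ChatGPT': 'Amplify', 'Perplexity': 'Caution', 'Gemini': 'Monitor'}
```

Note that elimination at T3 takes precedence: a brand filtered out before the purchase recommendation stage lands in Caution regardless of any other signal, mirroring the description above.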


What the CPC introduction actually means for measurement

The SEJ article notes that OpenAI is hiring its first advertising marketing science leader. That hire will eventually produce better reporting. Attribution will improve. Proxy measurement will be replaced by more direct signals.

But the measurement gap that matters for performance marketers is not a reporting latency problem that a marketing science hire will solve. It is a structural gap in what the pixel can see. Post-click attribution, however sophisticated, starts after the ad fires. The organic inference position (the state of the model's recommendation before paid placement entered the conversation) is upstream of the ad event and invisible to any measurement framework built on click data.

The CPC introduction accelerates this problem by bringing performance marketers into the channel. Brand advertisers could absorb the measurement uncertainty as part of an awareness investment. Performance marketers, who plan against CPC, ROAS, and attribution models, cannot. They need to know what they are buying.

The question that CPC pricing makes urgent ("is this click worth $3 to $5 given the quality of the intent behind it?") cannot be answered from click data alone. It requires knowing what the model recommended before the ad appeared.

That is the upstream measurement question. And it is the question AIVO Meridian is built to answer before the first dollar of CPC spend is committed.


The practical implication for this week

If you are a performance marketer being invited into the ChatGPT CPC pilot, or planning to participate as self-serve access expands, the measurement sequence should be:

1. Run a diagnostic on your brand's organic inference position across ChatGPT, Perplexity, Gemini, and Grok before allocating budget.
2. Understand which state your brand is in (Amplify, Monitor, or Caution) on each platform.
3. Allocate CPC budget to platforms where the organic position supports amplification.
4. For platforms where the brand is in Caution state, prioritise organic remediation before CPC spend.
5. Measure the organic inference position again after remediation to confirm the state has shifted before scaling paid spend.

This sequence does not replace post-click attribution. It precedes it. And it ensures that when the click happens, the model's reasoning chain is working with the campaign rather than against it.
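The budget-allocation step in that sequence can be sketched as a simple per-platform rule. The state weights and the equal base split below are assumptions for illustration, not AIVO methodology: full spend where the state is Amplify, reduced selective spend where it is Monitor, nothing where it is Caution, with the remainder held back for organic remediation.

```python
# Illustrative sketch: allocate a pilot CPC budget by organic inference state.
# Weights are hypothetical; actual allocation would depend on the advertiser's
# own CPC, ROAS, and remediation economics.

def allocate_cpc_budget(total_budget, platform_states, monitor_weight=0.5):
    """Give each platform an equal base share, scaled by how well its
    organic state supports paid amplification; hold back the remainder."""
    weights = {"Amplify": 1.0, "Monitor": monitor_weight, "Caution": 0.0}
    base = total_budget / len(platform_states)
    allocation = {p: base * weights[s] for p, s in platform_states.items()}
    held_back = total_budget - sum(allocation.values())
    return allocation, held_back

allocation, held_back = allocate_cpc_budget(50_000, {
    "ChatGPT": "Amplify",
    "Gemini": "Monitor",
    "Perplexity": "Caution",
})
# Roughly: full share to ChatGPT, half share to Gemini, nothing to
# Perplexity, with the held-back remainder funding remediation.
```

The point of the sketch is the ordering, not the arithmetic: the organic state is an input to the spend decision, determined before the first dollar of CPC budget is committed.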

aivomeridian.com


Tim de Rosen is CEO and Co-Founder of AIVO, Inc. AIVO Meridian is the agency-tier platform for AI inference intelligence and remediation, built on peer-archived research: WP-2026-01 (DOI: 10.5281/zenodo.19401584) and WP-2026-03 (DOI: 10.2139/ssrn.6606518).