From Agent Risk to Framing Risk
The Governance Blind Spot in External AI Systems
AIVO Journal – Governance Commentary
Abstract
AI governance has concentrated on execution risk: whether agents act lawfully, whether decisions are compliant, whether liability can be assigned.
This focus leaves a structural blind spot.
Before execution, there is representation.
Large language models shape how option spaces are framed at the moment reliance forms. They influence what appears comparable, credible, or even visible. When that representational layer is external, non-deterministic, and unlogged, institutions face a new condition: evidentiary asymmetry. Influence persists. Durable record does not.
This article argues that governance must expand from agent control to framing reconstructability. The emerging requirement is not optimization of outputs, but preservation of decision-adjacent representational states sufficient for supervisory review.
1. Governance Has Focused on the Last Mile
Current AI governance regimes emphasize:
- Model robustness
- Bias mitigation
- Transparency
- Agent oversight
- Liability assignment
These address execution.
They assume that the informational substrate preceding a decision is stable, documented, or reproducible.
That assumption is no longer reliable.
Large language models do not merely retrieve information. They synthesize, compress, reorder, and resolve option spaces dynamically. They operate as upstream decision architecture.
By the time a committee votes, a risk model updates, or a shortlist is drafted, representational narrowing may already have occurred.
Execution leaves artifacts.
Framing rarely does.
2. Representational Narrowing as a Governance Object
Representational narrowing occurs when:
- Alternatives are omitted
- Comparisons are bundled
- Salience is shifted
- Ordering changes materially
- Resolution is presented as singular
No misconduct is required.
No internal system failure is necessary.
The representational state itself becomes the variable.
What makes this structurally significant is reconstructability.
If a regulator, auditor, or court later asks:
What did the system present at the time reliance occurred?
In many institutions, the answer is unknowable.
External AI outputs are typically:
- Non-deterministic
- Version unstable
- Ephemeral
- Unlogged
- Irreproducible post hoc
Yet reliance is real.
This creates evidentiary asymmetry:
influence without durable record.
3. A Decision-Adjacent Pattern
Consider an anonymized but reproducible pattern observed across multiple sectors:
A vendor appears consistently in informational responses.
It is described neutrally in comparative summaries.
But when prompts shift to resolution framings such as:
"Which provider should I choose?"
"Who is best suited for this implementation?"
the vendor disappears from final resolution outputs, across model versions and over time.
No explicit displacement occurs.
No competitor is named as a substitute.
The absence leaves no audit artifact.
Shortlist formation upstream is shaped by omission.
Traditional controls assume that the shortlist is neutral ground.
AI-mediated framing challenges that assumption.
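Such a pattern is observable only if the prompt classes are probed deliberately. A minimal sketch follows, in Python; the query_model adapter and the vendor name are hypothetical stand-ins for whatever external system and tracked entity apply, and the point is the paired prompt classes and the recorded presence or absence, not the specific interface.

```python
# Sketch: probing informational vs. resolution prompt classes for omission.
# Assumptions: `query_model` is a hypothetical adapter around the external
# LLM the institution relies on; `VENDOR` is an illustrative tracked entity.
from datetime import datetime, timezone
from typing import Callable

VENDOR = "ExampleVendor"  # hypothetical name, stands in for the tracked entity

PROMPT_CLASSES = {
    "informational": "Summarize the main providers for this implementation.",
    "resolution": "Which provider should I choose for this implementation?",
}

def probe_omission(query_model: Callable[[str], str]) -> list[dict]:
    """Record whether the tracked vendor surfaces in each prompt class."""
    records = []
    for prompt_class, prompt in PROMPT_CLASSES.items():
        output = query_model(prompt)
        records.append({
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "prompt_class": prompt_class,
            "prompt": prompt,
            "output": output,
            "vendor_present": VENDOR.lower() in output.lower(),
        })
    return records
```

Run periodically and across model versions, records of this kind would convert an otherwise unlogged omission into an auditable series.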
4. Regulatory Intersection Is Indirect but Real
This representational layer intersects with existing frameworks.
EU AI Act
Articles 12–15 require documentation, traceability, and record-keeping for high-risk systems.
Articles 61–72 introduce post-market monitoring and reporting.
Effective monitoring presupposes visibility into system behavior at the time of impact.
If a high-risk institutional decision is materially influenced by an external AI representational state that cannot be reconstructed, documentation obligations may be indirectly impaired.
The issue is not whether the external model is regulated.
It is whether reliance is evidencable.
SEC Internal Controls
Exchange Act Rule 13a-15 requires issuers to maintain controls ensuring reliability of information underlying public disclosures.
If management judgments are shaped by external AI representations, and those representations are irreproducible, a control design question arises:
Can the informational basis for a material judgment be reconstructed?
This is not about correctness.
It is about reconstructability.
Procurement Integrity and Clinical Governance
Procurement controls assume neutral shortlist formation.
Pharmacovigilance frameworks assume observable promotional conduct.
Neither was designed for dynamic third-party synthesis layers that:
- Compress option spaces upstream
- Leave no trace once relied upon
- Vary across time and versions
The evidentiary gap persists across sectors.
5. From Agent Governance to Framing Governance
Most AI governance discourse focuses on:
- Training data integrity
- Bias metrics
- Agent monitoring
- Incentive alignment
These regulate behavior.
They do not regulate visibility.
If governance remains agent-centric, institutions risk regulating the last mile while leaving the upstream framing layer structurally unobserved.
The central governance questions therefore shift:
- What option space was visible at the moment of reliance?
- Can that representational state be reconstructed under supervisory review?
Absent structured observation, the answer is often no.
The absence does not remove accountability.
It concentrates it on institutions unable to reconstruct their informational substrate.
6. Toward a Reconstructability Standard
A minimal governance test for decision-adjacent AI reliance would require:
- Identification of relevant prompt classes
- Preservation of output state at time of reliance
- Timestamp and model version capture
- Sufficient archival integrity for audit reconstruction
This is not an optimization exercise.
It is an evidentiary safeguard.
The objective is not to influence model behavior.
It is to prevent asymmetry between influence and accountability.
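One way to read these requirements concretely is as a capture record written at the moment of reliance. The sketch below, in Python, uses illustrative field names and a hypothetical build_reliance_record helper rather than any prescribed schema; the substance is that prompt, output, timestamp, and model version are preserved, with an integrity hash to support later reconstruction.

```python
# Sketch: a minimal decision-adjacent reliance record, assuming the institution
# controls the point at which the external output is captured. Field names are
# illustrative, not a standard schema.
import hashlib
import json
from datetime import datetime, timezone

def build_reliance_record(prompt_class: str, prompt: str,
                          output: str, model_version: str) -> dict:
    """Preserve output state, timestamp, and model version at reliance,
    with a content hash to support audit reconstruction."""
    captured_at = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(
        {"prompt": prompt, "output": output,
         "model_version": model_version, "captured_at": captured_at},
        sort_keys=True,
    )
    return {
        "prompt_class": prompt_class,
        "prompt": prompt,
        "output": output,
        "model_version": model_version,
        "captured_at": captured_at,
        # Hash of the serialized payload; kept alongside an append-only log,
        # it lets a reviewer verify the preserved state was not altered.
        "content_hash": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
```

Written to an append-only store, such records would give a supervisor or auditor a durable basis for asking what was visible when reliance formed.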
7. The Blind Spot
The most consequential governance failure may not occur when an AI system acts improperly.
It may occur when it quietly defines what was available to be chosen.
Framing risk precedes agent risk.
Representation precedes execution.
If institutions cannot evidence the representational state that shaped reliance, governance remains incomplete.
The next phase of AI governance will not be defined solely by controlling models.
It will be defined by evidencing what they made visible.
