The measurement science
for AI impact.
Four frameworks. Three layers. One standard. VaryOn Amplitude quantifies what no one else measures - from data quality to agent trust to systemic risk.
Frameworks for the AI Economy
VaryOn Meridian
Data Quality
“Is this data worth consuming, and what should an agent pay for it?”
Meridian evaluates external data sources consumed by AI agents across four orthogonal dimensions, producing a composite score mapped to procurement tiers and dynamic pricing. Delivered in real time via MCP server integration during agent tool-call execution.
VaryOn Drift
Alignment Impact
“Is this agent still serving its principal’s intent?”
Drift detects the invisible gap between what a human principal wants and what an agent actually does - especially across delegation chains where alignment degrades per hop. Its shadow principal detection acts as a multiplicative gate, identifying when third-party interests silently influence agent behavior and directly capping the maximum possible score.
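The multiplicative-gate idea can be sketched in a few lines. This is a hypothetical illustration, not the actual Drift scoring methodology; the component names and the `shadow_gate` parameter are assumptions for the example.

```python
def drift_score(alignment_components, shadow_gate):
    """Composite alignment score capped by a multiplicative gate.

    alignment_components: dict of dimension name -> score in [0, 1]
    shadow_gate: factor in [0, 1]; 1.0 means no shadow principal detected.
    """
    base = sum(alignment_components.values()) / len(alignment_components)
    # Because the gate multiplies the composite, a detected shadow
    # principal (gate < 1) caps the maximum attainable score no matter
    # how strong the individual alignment dimensions are.
    return base * shadow_gate

# Strong component scores, but a detected shadow principal halves the ceiling
score = drift_score({"intent": 0.9, "delegation": 0.8}, shadow_gate=0.5)
# base = 0.85, gated score = 0.425
```

A multiplicative gate, unlike an additive penalty, ensures no amount of strength elsewhere can compensate for a compromised principal relationship.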
VaryOn Cascade
Systemic Impact
“If something breaks, how far does the damage spread?”
Cascade is the financial stress test for the agent economy. A single compromised agent can poison 87% of downstream decisions within 4 hours. Cascade runs Monte Carlo simulations on observed network topology to estimate propagation probability - the systemic risk measurement central banks are demanding.
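A minimal version of that simulation looks like an independent-cascade model on the agent network. This is a sketch under simplifying assumptions (one seed, a uniform per-edge transmission probability), not Cascade's production model.

```python
import random

def cascade_risk(adjacency, seed_agent, p_transmit=0.5, trials=10_000, rng=None):
    """Monte Carlo estimate of compromise propagation on a directed
    agent network. Assumes each edge independently transmits the
    failure with probability p_transmit.

    Returns the expected fraction of agents ultimately compromised."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        compromised = {seed_agent}
        frontier = [seed_agent]
        while frontier:
            node = frontier.pop()
            for neighbor in adjacency.get(node, []):
                if neighbor not in compromised and rng.random() < p_transmit:
                    compromised.add(neighbor)
                    frontier.append(neighbor)
        total += len(compromised)
    return total / (trials * len(adjacency))

# A small delegation chain: A -> B -> C
network = {"A": ["B"], "B": ["C"], "C": []}
print(cascade_risk(network, "A"))
```

On real topologies the interesting quantity is how this estimate changes as hubs are added or removed, which is what makes it a stress test rather than a static score.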
VaryOn Convergence
Collusion Impact
“Are autonomous agents colluding to manipulate market prices?”
Convergence detects emergent algorithmic collusion and anti-competitive behavior in AI agent markets through statistical analysis of observable market outcomes. The framework identifies when autonomous AI agents converge on supra-competitive pricing equilibria - sustaining prices 200% or more above competitive levels - without any explicit communication or coordination protocol.
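Two observable outcomes that could feed such an analysis are sustained markup over a competitive benchmark and abnormally low price dispersion. The sketch below is illustrative; the indicator names and thresholds are assumptions, not Convergence's actual tests.

```python
from statistics import mean, pstdev

def convergence_flags(observed_prices, competitive_benchmark,
                      markup_threshold=2.0, dispersion_threshold=0.05):
    """Flag signs of tacit price convergence from observed market outcomes.

    Two hypothetical indicators:
      - supra_competitive: mean price >= markup_threshold x benchmark
      - price_clustering: sellers bunching tightly on one price point
    """
    m = mean(observed_prices)
    markup = m / competitive_benchmark
    dispersion = pstdev(observed_prices) / m if m else 0.0
    return {
        "supra_competitive": markup >= markup_threshold,
        "price_clustering": dispersion <= dispersion_threshold,
        "markup": markup,
    }

# Agents quoting ~3x the competitive benchmark with near-zero spread
flags = convergence_flags([29.9, 30.0, 30.1, 30.0], competitive_benchmark=10.0)
# flags["supra_competitive"] -> True, flags["price_clustering"] -> True
```

Because both indicators use only market outcomes, no access to agent internals or communication logs is required.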
Frameworks in Development
VaryOn Provenance
Identity Impact
“Can you verify who this agent is, what it does, and who deployed it?”
Provenance is the passport layer for autonomous agents. Before trust can be assessed, identity must be established. Provenance measures the verifiability, transparency, and completeness of an agent’s identity, capability claims, and operational history, enabling SOC 2-style certification for the agent economy.
VaryOn Fidelity
Trust Impact
“Can this agent be trusted to do what it claims?”
Fidelity measures signal integrity - the credit score for autonomous systems. It scores whether an agent can be trusted based on its observable behavioral track record. Identity is measured by Provenance; Fidelity measures behavior exclusively: consistency, fulfillment, reputation, and anomalies.
VaryOn Threshold
Resilience Impact
“How resistant is this agent to adversarial attack and manipulation?”
Threshold stress-tests agents against adversarial conditions. Where Fidelity measures past behavior (credit score), Threshold measures future resilience (stress test). Research shows 82.4% of LLMs succumb to peer-agent manipulation - Threshold quantifies exactly how resistant a specific agent is.
VaryOn Parity
Fairness Impact
“Is this agent treating all populations equitably?”
Parity measures what no other framework captures: whether an agent’s decisions produce equitable outcomes across demographic groups. A hiring agent filtering out certain backgrounds, a pricing agent charging more based on inferred characteristics - these are Parity failures invisible to trust, alignment, or competition metrics.
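One standard way to quantify that kind of failure is a demographic parity gap over observed decisions. This is a textbook fairness metric offered as an illustration, not Parity's specific methodology.

```python
def demographic_parity_gap(outcomes_by_group):
    """Largest pairwise difference in positive-outcome rates across groups.

    outcomes_by_group: dict mapping group label -> list of 0/1 decisions.
    A gap of 0.0 means every group receives positive outcomes at the
    same rate; larger gaps indicate less equitable treatment."""
    rates = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# A hiring agent advancing 60% of group A but only 20% of group B
gap = demographic_parity_gap({"A": [1, 1, 1, 0, 0], "B": [1, 0, 0, 0, 0]})
# gap = 0.6 - 0.2 = 0.4
```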
VaryOn Mandate
Human Oversight Impact
“Can a human effectively intervene, override, or stop this agent?”
Mandate quantifies whether human control over autonomous agents is real or ceremonial. EU AI Act Article 14 mandates human oversight; Mandate measures it. Each delegation hop adds latency between the human and the action - at some point, the human is nominally “in the loop” but functionally irrelevant.
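The latency argument can be made concrete: oversight is functional only if accumulated intervention delay fits inside the agent's action window. A minimal sketch, with hypothetical names and units:

```python
def oversight_latency(hop_latencies_s, action_window_s):
    """Whether a human override can land before the agent acts.

    hop_latencies_s: per-hop delay (seconds) along the delegation chain.
    Total intervention latency is their sum; oversight is functional
    only if it fits inside the action window."""
    total = sum(hop_latencies_s)
    return {"latency_s": total, "functional": total <= action_window_s}

# Three delegation hops totaling 45s against a 10s trading action:
# the human is nominally in the loop but cannot intervene in time.
print(oversight_latency([20, 15, 10], action_window_s=10))
```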
VaryOn Yield
Economic Impact
“Is value being created efficiently, or is friction destroying it?”
Yield measures economic efficiency - the ratio of value created to value extracted in agent ecosystems. It detects when transaction costs consume value, when intermediaries extract excessive rents, and when misaligned incentives destroy welfare. Every basis point of friction compounds across millions of autonomous transactions.
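The compounding claim follows directly from the arithmetic of per-hop friction. The sketch below is illustrative, not Yield's scoring model; the function names and figures are assumptions.

```python
def yield_efficiency(gross_value, friction_costs):
    """Fraction of gross value that survives intermediary extraction.

    friction_costs: list of per-hop extraction amounts (fees, rents,
    transaction costs) in the same units as gross_value."""
    net = gross_value - sum(friction_costs)
    return max(net, 0.0) / gross_value

def compound_friction(bps_per_hop, hops):
    """Each hop retains (1 - bps/10000) of the value it receives, so
    even small per-hop friction compounds across delegation chains."""
    return (1 - bps_per_hop / 10_000) ** hops

# 50 bps of friction per hop across a 10-hop chain
print(compound_friction(50, 10))  # ~0.951 of the value survives
```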
VaryOn Lineage
Governance Impact
“Who is accountable when autonomous agents cause harm?”
Lineage traces accountability chains in AI agent ecosystems, mapping the flow of responsibility from actions to actors. It quantifies governance effectiveness, audit trail completeness, and liability attribution when autonomous systems create unintended consequences.
When frameworks combine, invisible patterns emerge.
Amplitude's power multiplies at the intersections.
Systemic Competition Risk
Market concentration in a dense network means anti-competitive behavior goes systemic.
Uncontrollable Failure
When humans can't intervene and failures propagate, the system is ungovernable.
Silent Misalignment
An agent drifting from intent while human oversight is ceremonial creates invisible risk.
Coordinated Discrimination
Fairness failures in a concentrated market amplify bias across the entire ecosystem.
Efficiency Crisis
Transaction friction compounds through interconnected networks, destroying value at systemic scale.
Data ROI
High-quality data inputs correlate directly with genuine economic efficiency.
Research
Our research teams build the measurement science for the AI economy - quantifying data quality, agent trust, alignment, resilience, fairness, and systemic risk.
Cross-Index Intelligence: How Patterns Across Frameworks Reveal What Single Scores Cannot
We present a systematic analysis of emergent intelligence patterns that arise when scores from multiple Amplitude frameworks are examined jointly. High Fidelity paired with low Drift reveals compliant but misaligned agents. Elevated Cascade risk alongside depressed Convergence signals fragile concentrated markets. These cross-framework patterns surface systemic insights invisible to any individual measurement instrument.
Data Science
Developing real-time data quality scoring methodologies that evaluate external data sources consumed by AI agents during inference-time operations.
Meridian
Agent Trust
Building the behavioral trust layer for autonomous agents - identity verification, behavioral consistency, and adversarial resilience testing.
Provenance, Fidelity, Threshold
Alignment & Control
Measuring the gap between human intent and agent behavior across delegation chains, and quantifying whether human oversight is real or ceremonial.
Drift, Mandate, Parity
Systemic Risk
Modeling failure propagation, market competition dynamics, and economic efficiency across interconnected agent ecosystems at scale.
Cascade, Convergence, Yield
Publications
Every question a regulator, judge, or enterprise buyer would ask about AI agents - answered.
Get early access to Amplitude scoring, research updates, and framework specifications.
Join the waitlist. No spam, just measurement science.