The shape of SEAM.
Three concepts to hold: an intelligence layer (the category), an intelligence model (the artefact), and the split between the open framework and the managed runtime.
Intelligence layer.
An intelligence layer sits between AI agents and the data sources they query. When an agent asks a question, the intelligence layer resolves what the question means against a governed model before any data is fetched. Same prompt, same answer, every time. Across every agent, every model, every team.
SEAM is the intelligence layer. Two halves: the open-source framework (schemas, CLI, resolver) and the Measurelab-managed runtime (MCP server, OAuth, audit pipeline).
Intelligence model.
An intelligence model is the specific configuration that represents your organisation’s intelligence layer: the YAML definitions that govern what your data means, where it lives, who owns it, and how AI agents should resolve queries against it.
An intelligence model is what seam init scaffolds, what Canvas helps you build, what gets validated and compiled, and what the runtime resolves through. Conceptually it’s a single Git repository with five kinds of definition file:
Metric
A quantifiable measurement (monthly_revenue, active_client). The unit of governance.
Entity
A business object identified differently across systems (customer, employee, course).
Connection
A downstream system the runtime can reach. Carries transport config plus governance metadata.
Resource
A discrete governed asset within a connection (a Slack channel, a GA4 stream, a BigQuery dataset).
CLI tool
A local CLI binary (gcloud, bq, gh) whose commands are exposed as governed MCP tools. Each safe-to-run command becomes a tool agents can call.
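To make the five kinds concrete, here is a sketch of what a single metric definition might look like. The field names below are illustrative only and are not the authoritative SEAM schema:

```yaml
# Illustrative sketch - field names are assumptions, not the real SEAM schema.
kind: metric
name: monthly_revenue
description: Total recognised revenue per calendar month.
owner: finance-team
synonyms:
  - mrr
  - monthly recurring revenue
source:
  connection: warehouse
  resource: billing_dataset
```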
From definitions to a manifest to an answer.
Three stages.
Compile
All definitions are loaded, validated and indexed into a single manifest. The manifest contains every metric, entity, connection and resource, plus cross-references between them (which metrics depend on which entities, which entities resolve through which sources).
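The compile stage can be pictured as loading every definition, validating it minimally, indexing it by kind and name, and recording cross-references. This is a minimal sketch under assumed definition shapes, not the real compiler:

```python
# Illustrative sketch of the compile stage. Definition shapes and the
# "entities" cross-reference field are assumptions, not the SEAM schema.
KINDS = {"metric": "metrics", "entity": "entities",
         "connection": "connections", "resource": "resources"}

def compile_manifest(definitions):
    """Validate, index by kind and name, and record cross-references."""
    manifest = {plural: {} for plural in KINDS.values()}
    for d in definitions:
        if "name" not in d or d.get("kind") not in KINDS:
            raise ValueError(f"invalid definition: {d}")
        manifest[KINDS[d["kind"]]][d["name"]] = d
    # Cross-reference: which metrics depend on which entities.
    manifest["metric_entities"] = {
        name: m.get("entities", [])
        for name, m in manifest["metrics"].items()
    }
    return manifest
```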
Resolve
When an agent asks a question, the resolver matches the query against every metric and entity in the manifest. Scoring uses signal weights: name (1.0), synonym (0.7), example (0.7), description (0.5). The resolver returns a confidence score (0 to 100) and the matched definition with its governance context.
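The weighted scoring above can be sketched as follows. The function names and the way each signal is matched are assumptions for illustration; the real resolver's matching and normalisation will differ:

```python
# Signal weights as described: name (1.0), synonym (0.7),
# example (0.7), description (0.5). Names here are illustrative.
WEIGHTS = {"name": 1.0, "synonym": 0.7, "example": 0.7, "description": 0.5}

def score(query, definition):
    """Return a 0-100 confidence score for one definition."""
    q = query.lower()
    best = 0.0
    if definition["name"].lower() in q:
        best = max(best, WEIGHTS["name"])
    for syn in definition.get("synonyms", []):
        if syn.lower() in q:
            best = max(best, WEIGHTS["synonym"])
    for ex in definition.get("examples", []):
        if ex.lower() in q:
            best = max(best, WEIGHTS["example"])
    desc_terms = set(definition.get("description", "").lower().split())
    if desc_terms & set(q.split()):
        best = max(best, WEIGHTS["description"])
    return round(best * 100)

def resolve(query, manifest):
    """Pick the highest-scoring definition, or None if nothing matches."""
    ranked = sorted(manifest, key=lambda d: score(query, d), reverse=True)
    top = ranked[0] if ranked else None
    return top if top and score(query, top) > 0 else None
```

A query like "show me mrr" would match monthly_revenue through its synonym signal at 70, while "what is monthly_revenue" would match the name signal at 100.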
Audit
Every resolution is logged with a unique audit ID. The audit record carries the question, the matched definition, the source consulted, the user and the timestamp. Records are available via seam__audit at runtime; the persistent BigQuery audit pipeline lives in the Measurelab-managed runtime.
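An audit record built from those fields might look like this. The field names are illustrative, not the real SEAM audit schema:

```python
import uuid
from datetime import datetime, timezone

def audit_record(question, matched_definition, source, user):
    """Build one audit entry for a resolution. Field names are illustrative."""
    return {
        "audit_id": str(uuid.uuid4()),       # unique per resolution
        "question": question,                # what the agent asked
        "matched_definition": matched_definition,
        "source": source,                    # the source consulted
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```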
Framework. Runtime.
SEAM ships in two halves with different licences.
SEAM Framework
The open-source half of SEAM. The schema standard, the loader, the validator, the compiler and the resolver. Everything you need to author and validate an intelligence model locally, then run queries against it on your own machine.
- @measurelab/seam-cli (the binary you install)
- @measurelab/seam-core (the library it uses)
SEAM Runtime
The MCP server in front of agents in production. Multi-tenant OAuth (per-user and shared). Audit logging to BigQuery. Downstream MCP proxying. Git-sync from your repository.
- Implementation engagement
- Managed Runtime (ongoing)
SEAM is MCP server and MCP client.
SEAM speaks Model Context Protocol on both sides.
Upstream - SEAM exposes itself as an MCP server to Claude (or any compatible agent). The agent calls native SEAM tools (seam__resolve, seam__lookup, seam__audit) plus governed proxies of every downstream tool.
Downstream - SEAM acts as an MCP client to other MCP servers (BigQuery, Slack, Notion, your warehouse, your CRM). Calls go through SEAM, get governance context attached, and are routed back to the agent.