A regulated financial services firm
Risk, finance and customer teams each point AI agents at the same question and get different numbers back. Counterparty exposure, customer lifetime value, regulatory capital - the definitions sit in spreadsheets, dashboards and the heads of senior analysts. Compliance can't explain to the regulator how any specific number was produced.
With SEAM: definitions are versioned, owned and explicit. Every agent across the firm resolves through the same map. The audit trail captures which definition, which source and which rule applied to every answer - the evidence FCA, AI Act and DORA reviewers ask for, ready on the day they ask.
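As a sketch of what "versioned, owned and explicit" might mean in practice - the shape and field names below are illustrative assumptions, not SEAM's published schema - a definition and its audit-trail entry could look like this:

```typescript
// Illustrative sketch only: field names are assumptions, not SEAM's schema.
interface Definition {
  name: string;     // e.g. "counterparty_exposure"
  version: string;  // versioned: changed by release, never silently edited
  owner: string;    // a named team accountable for the definition
  source: string;   // the system of record the number is drawn from
  rule: string;     // the explicit calculation or filter applied
}

// One audit-trail entry per answer: which definition, which source,
// which rule - the evidence a reviewer asks for.
interface AuditEntry {
  question: string;    // what the agent was asked
  definition: string;  // e.g. "counterparty_exposure@3.2"
  source: string;
  rule: string;
  answeredAt: string;  // ISO 8601 timestamp
}
```

Pinning each answer to a definition version is what makes the trail replayable: a reviewer can see exactly which rule was in force on the day the number was produced.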
A university or college
Student records live in SITS. The VLE is Moodle or Canvas. Finance runs in Unit4. The CRM is somewhere else again. Recruitment, registry and student-experience teams point AI agents at the same questions - "how are first-years progressing?", "which applicants need a second look?" - and get different answers. The Office for Students wants a story about how AI is being used; nobody can tell it cleanly.
With SEAM: definitions of "applicant", "active student", "at risk" and "completion" live once and apply to every system the institution owns. Recruitment, registry and student-experience agents all resolve through the same map. The audit trail is what the OfS, the data-protection lead and the AI Act all want.
A multi-academy trust
Pupil progress lives in Arbor. Behaviour lives in CPOMS. Attendance, observations and assessments each have their own home. Every report needs a teacher to stitch the same numbers together a different way.
With SEAM: definitions of "on track", "at risk" and "vulnerable" live once and apply to every system. A teacher asks "which Year 9 pupils need a check-in this week?" and gets one answer with a source they can trust. A trust CEO sees the same data rolled up consistently. Ofsted sees the statutory cut.
A professional services firm
A firm with multiple practice areas - audit, tax, advisory, sector teams. Each tracks utilisation, billable hours and partner economics differently. AI agents pull pipeline, capacity and revenue numbers that never quite reconcile across the firm. Risk and compliance inherit that mess at audit time.
With SEAM: definitions of "billable", "utilisation", "engagement" and "active client" live once and span every practice. Every agent - pipeline, finance, resource planning - resolves through the same map. Partner reports, regulatory submissions and the firm's own KPIs all come from a single governed source.
A sports apparel brand
Customer service, marketing, finance and merchandising each have an AI agent of some flavour, hooked into the e-commerce platform, Klaviyo, GA4 and the warehouse. The same questions - "which products are returning above expected rates?", "what's the LTV on the new running line?" - get different answers depending on whose agent is asking. Returns logic, attribution windows and customer cohorts have all drifted.
With SEAM: product, customer, order and channel definitions live once and are versioned. Every agent - service, marketing, finance, merch - resolves through the same map. Returns and lifetime value carry their reasoning into every report. The Black Friday post-mortem reads identically to finance, the merch team and the retained agency.
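"Carry their reasoning" could be as simple as a metric that never travels without its definition - a hypothetical shape with an invented figure, not SEAM's actual output format:

```typescript
// Hypothetical sketch: a metric value that travels with its own reasoning.
interface ExplainedMetric {
  value: number;
  definition: string;  // e.g. "ltv@2.1" - pinned definition and version
  cohort: string;      // which customers the number covers
  window: string;      // e.g. the attribution window applied
}

const runningLineLtv: ExplainedMetric = {
  value: 84.5,  // invented figure, for illustration only
  definition: "ltv@2.1",
  cohort: "first purchase in the new running line",
  window: "365-day attribution window",
};
```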
A charity or cultural-sector organisation
Funders, the board and regulators each ask for the same outcomes a different way. Programme data lives across membership, finance and case-management systems, each with its own definitions. Every report is a manual reconciliation done by someone whose job is supposed to be the work itself.
With SEAM: outcomes have a definition that's audience-aware. The same numbers come out, cut for whichever stakeholder is asking. Hours come back into the work, not into spreadsheet wrangling.
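One way to read "audience-aware" - a sketch with invented names, not SEAM's schema - is a single governed outcome carrying named cuts per stakeholder:

```typescript
// Hypothetical sketch: one rule, several audience-specific cuts of it.
interface OutcomeDefinition {
  name: string;
  rule: string;                  // the single underlying calculation
  cuts: Record<string, string>;  // audience -> how the same number is presented
}

const outcome: OutcomeDefinition = {
  name: "participants_supported",
  rule: "distinct participants with at least one completed session in period",
  cuts: {
    funder: "by programme and grant period",
    board: "by quarter, with year-on-year trend",
    regulator: "by statutory reporting category",
  },
};
```

The rule is computed once; only the presentation varies, which is what lets the same numbers answer every stakeholder.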
An in-house AI engineering team
You've wired Claude or GPT to your data sources via MCP. Some queries work. Many burn tokens trying to discover what tables exist, then guess wrongly. Your eval bench shows wild variance for what should be the same question. The platform team is asking who governs this.
With SEAM: the discovery loop is short-circuited by an entity layer that already knows where things live. Same question, same prompt, same answer. The audit trail is what the platform team and the AI Act both want, available on day one rather than as a multi-quarter build.
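Concretely - with invented names throughout, since this is not SEAM's published API - the difference is a single resolution call in place of a table-discovery loop:

```typescript
// Hypothetical sketch; EntityLayer, resolve() and query() are invented names.
type Resolution = { version: string; source: string; rule: string };

interface EntityLayer {
  resolve(entity: string): Promise<Resolution>;  // where does this entity live?
  query(r: Resolution, question: string): Promise<string>;
}

async function answer(seam: EntityLayer, question: string) {
  // No discovery loop: the map already knows where "active_customer"
  // lives and which definition version applies.
  const resolution = await seam.resolve("active_customer");
  const result = await seam.query(resolution, question);
  // Every answer carries its provenance, so the audit trail exists
  // from day one rather than as a separate build.
  return { result, definition: resolution.version, source: resolution.source };
}
```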