How we score AI readiness
A walkthrough of the pillar-and-maturity rubric behind our AI readiness audit — what we measure, how we score it, and why transparency is the entire product.
The ask behind the rubric
Most AI readiness assessments end the same way: a letter grade, a slide deck, a handshake. The client walks away reassured, unchanged, and one news cycle away from the exact failure the assessment was supposed to catch. We score differently because we were asked to.
The ask came from a specific kind of conversation. A head of engineering, six months into a production AI rollout, cornered at the quarterly board meeting by a question they'd been hoping nobody would ask: what controls are in place? Not the operational controls — the governance controls. Not uptime and latency — lineage and access and documentation. The kind of question you cannot answer with a dashboard.
A written report, scored across clearly defined pillars, with evidence cited for every finding — that is the artifact that ends that conversation. Everything in our methodology is designed to produce that artifact. Everything is in service of one job: give the people answering the hard questions something they can put in front of the people asking them.
The goal isn't to certify readiness. The goal is to produce a score the operator can defend — to a board, to a customer, to themselves.
Three pillars
Every audit evaluates three pillars: data architecture, access control, and process documentation. These are not arbitrary categories. They are the three operational surfaces where AI deployments most commonly fail in production — the three places where policy and reality come apart.
Pillar I: Data Architecture. Input risk. How information flows from source to model. Lineage, quality, transformation, reproducibility.

Pillar II: Access Control. Operational risk. Who can touch what. The gap between policy and practice is where breaches live.

Pillar III: Process Documentation. Continuity risk. Whether your organization can operate what it built, when incidents happen and people leave.
Data architecture covers how information flows from source to model. We trace lineage, evaluate quality gates, and examine storage patterns. If your training data cannot be reproduced or your feature pipeline has undocumented transforms, that is where it surfaces.
Access control examines who can touch what — and whether the stated policy matches the implemented reality. Identity management, role boundaries, permission models, audit trails. The gap between a written policy and an enforced one is the gap that lets breaches happen silently.
Process documentation assesses whether your organization can operate what it has built. SOPs, change management, incident response, deployment runbooks. The question isn't whether documents exist. The question is whether someone could actually follow them at 2 AM under load.
Four maturity levels
Each pillar is scored along a four-level ladder. Not a letter grade. Not a single composite number. A score per pillar, on a scale where every step is operationally defined — which means you can tell what a Level 2 looks like from across the room.
1. Ad hoc: knowledge lives in heads; outcomes depend on individuals.
2. Defined: processes exist on paper; enforcement is inconsistent.
3. Managed: processes are followed, measured, and improved.
4. Optimized: automated where possible, continuously refined.
A Level 2 in data architecture alongside a Level 3 in access control tells you something specific. It tells you where to invest next. It gives your board a defensible answer when they ask, "Are we ready?" It gives you a baseline you can measure against twelve months from now.
The report never assigns a composite score. There is no A-minus, no green-yellow-red, no single number. We resist the collapse. A composite score is a story — and the audit is not in the business of telling stories.
A Level 2 in one pillar and a Level 3 in another is not a contradiction. It is the truth. The report is in service of the truth.
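The scoring model above can be sketched as a small data structure. This is an illustrative sketch, not our actual tooling; the names (`Pillar`, `Maturity`, `Finding`, `AuditReport`) are invented for the example. The one design decision worth noticing is what is missing: there is no method that collapses the three pillar scores into a composite.

```python
from dataclasses import dataclass
from enum import Enum, IntEnum

class Pillar(Enum):
    DATA_ARCHITECTURE = "data architecture"
    ACCESS_CONTROL = "access control"
    PROCESS_DOCUMENTATION = "process documentation"

class Maturity(IntEnum):
    AD_HOC = 1      # knowledge in heads; outcomes depend on individuals
    DEFINED = 2     # processes exist on paper; enforcement inconsistent
    MANAGED = 3     # processes followed, measured, and improved
    OPTIMIZED = 4   # automated where possible, continuously refined

@dataclass
class Finding:
    pillar: Pillar
    level: Maturity
    evidence: list[str]  # every finding cites its evidence

@dataclass
class AuditReport:
    findings: dict[Pillar, Finding]

    def composite_score(self) -> float:
        # Deliberately unsupported: collapsing three pillars into one
        # number tells a story the evidence does not.
        raise NotImplementedError("No composite score; read the pillars.")

# Hypothetical report: different levels per pillar is the expected shape.
report = AuditReport(findings={
    Pillar.DATA_ARCHITECTURE: Finding(
        Pillar.DATA_ARCHITECTURE, Maturity.DEFINED,
        evidence=["feature pipeline has undocumented transforms"]),
    Pillar.ACCESS_CONTROL: Finding(
        Pillar.ACCESS_CONTROL, Maturity.MANAGED,
        evidence=["role boundaries enforced and audited"]),
})
```

A consumer of this structure has to read each pillar on its own terms; calling `composite_score` fails by design, which is the point the rubric makes in prose.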
Why transparency is the product
The rubric is not proprietary. You see every criterion before the engagement begins. You see every evidence citation in the final report. If you disagree with a score, the rationale is in the document — and we walk through it in the debrief.
This matters because the alternative is a black-box assessment where you are paying for a stamp of approval. That is not what we sell. We sell a score you can defend, challenge, and act on. The difference is the entire value of the product.
The scored report is not the finish line. The thirty-day follow-up check is built into every engagement because readiness is not a snapshot — it is a trajectory. The score tells you where you are. What you do next determines where you end up.