The Audit
Find out where your AI deployment will fail before someone else does. The audit produces a scored report with the evidence cited under every finding, so you can act on the gaps and defend the score on the merits.
Who this is for
- You ship AI in healthcare, finance, insurance, or defense — and one bad model is a regulator's letter, a lawsuit, or a recall.
- You bought or built an AI system and need someone outside the build team to check it before it scales — or before you put it in front of the board.
- The gap between "we have a model" and "we can defend this model" is the gap keeping your leadership up at night.
What you walk away with
- A ~30-page scored report. Every score backed by specific evidence, every recommendation prioritized.
- A 60-minute debrief with your technical and executive teams in the same room.
- A 30-day check-in. We come back to see what moved — and you don't pay extra for it.
Methodology
01 — Intake (Week 1)
Scope definition and non-disclosure agreement execution. We identify stakeholders, define the systems under review, and issue a structured document request covering architecture diagrams, access policies, data lineage maps, and operational runbooks.
02 — Evidence Review (Weeks 2–3)
Deep review of all submitted artifacts. We trace data flows end-to-end, evaluate access control implementations against stated policies, and assess process documentation for completeness and operational viability.
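To make "evaluate access control implementations against stated policies" concrete, here is a minimal sketch of what one such check can look like, assuming role grants export as simple principal-to-permission sets. The principal and permission names are hypothetical placeholders, not a real export format.

```python
# A minimal sketch of one evidence-review check: comparing the roles a
# written policy declares against the roles actually granted. All names
# here are hypothetical placeholders for real policy and IAM exports.
stated_policy = {
    "model-serving": {"read:features"},
    "training-job": {"read:features", "write:artifacts"},
}

granted = {
    "model-serving": {"read:features", "write:artifacts"},
    "training-job": {"read:features", "write:artifacts"},
}

for principal, actual in granted.items():
    allowed = stated_policy.get(principal, set())
    excess = actual - allowed
    if excess:
        # Each mismatch becomes a cited finding in the report.
        print(f"{principal}: grants exceed stated policy: {sorted(excess)}")
```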
03 — Pillar Scoring (Week 4)
Each of the three pillars — data architecture, access control, and process documentation — is scored against defined maturity levels. Every score is backed by specific evidence and documented rationale.
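As a sketch of what "backed by specific evidence and documented rationale" means in practice, one score could be represented as the record below. The field names and the example finding are hypothetical; the deliverable itself is a written report, not a data feed.

```python
# A minimal sketch of the shape of an evidence-backed pillar score.
# Field names and the example finding are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Finding:
    evidence: str   # the specific artifact the score cites
    rationale: str  # why that artifact supports the assigned level
    priority: int   # 1 = address first

@dataclass
class PillarScore:
    pillar: str
    level: int      # maturity level on the 1-4 scale
    findings: list[Finding] = field(default_factory=list)

example = PillarScore(
    pillar="Access control",
    level=2,
    findings=[
        Finding(
            evidence="IAM policy export, March snapshot",
            rationale="Policy states least privilege, but several service "
                      "accounts hold write access they never use.",
            priority=1,
        )
    ],
)
print(example.pillar, example.level, example.findings[0].evidence)
```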
04 — Report (Week 5)
Delivery of the scored written report. Findings include risk flags, gap analysis, and prioritized recommendations for each pillar.
05 — Debrief (Week 6)
A 60-minute walkthrough of findings with your stakeholders. Questions addressed, priorities clarified. Followed by a 30-day check-in to assess progress against flagged items.
Scoring rubric
Each pillar is evaluated across four maturity levels. The resulting score is transparent — you see exactly what was measured, what evidence was cited, and where your organization sits on each axis.
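For illustration only, a four-level scale often runs from ad hoc to optimized. The labels below are hypothetical stand-ins to show the shape of the scale; the audit's actual criteria live in the published rubric linked under "Go deeper."

```python
# Hypothetical four-level maturity scale, for illustration only.
# The real level definitions are in the published rubric.
MATURITY_LEVELS = {
    1: "Ad hoc: practice exists informally, nothing written down",
    2: "Defined: policy is written, enforcement is inconsistent",
    3: "Managed: policy is enforced, evidence is routinely produced",
    4: "Optimized: enforcement is measured and continuously improved",
}

for level, definition in MATURITY_LEVELS.items():
    print(f"Level {level}: {definition}")
```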
What a scored report looks like
The scored output is a triangle on three axes, one per pillar. An equilateral shape means balanced maturity. A lopsided shape tells you which pillar to invest in next — backed by the evidence citations that produced each score.
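For readers who want to reproduce the shape, here is a minimal plotting sketch, assuming matplotlib is available. The scores are hypothetical; a real report derives them from the cited evidence.

```python
# A minimal sketch of the three-axis score plot. The pillar scores
# below are hypothetical maturity levels on the 1-4 scale.
import math
import matplotlib.pyplot as plt

pillars = ["Data architecture", "Access control", "Process documentation"]
scores = [3, 2, 4]  # hypothetical scores

# Evenly spaced spokes; repeat the first point to close the triangle.
angles = [2 * math.pi * i / len(pillars) for i in range(len(pillars))]
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(pillars)
ax.set_ylim(0, 4)  # four maturity levels
ax.set_title("Pillar maturity (hypothetical scores)")
plt.show()
```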
Engagement terms
- Typical engagement duration: 3–6 weeks, depending on scope and organizational complexity; the week-by-week methodology above reflects a full six-week engagement.
- Access requirements: architecture documentation, access control policies, data pipeline configurations, stakeholder availability for interviews.
- Non-disclosure agreements executed at intake. All findings remain confidential.
Go deeper
Methodology
The full rubric, in public
Every criterion the audit scores against, at every maturity level — crosswalked to NIST AI RMF, ISO/IEC 42001, the EU AI Act, SR 11-7, OWASP LLM Top 10, and MITRE ATLAS.
Read the methodology →
Worked examples
Six AI deployments, scored
Customer-support LLM, fintech fraud model, medical-imaging assist, multi-agent code review, mature LLM deployment, EU AI Act employment screening — each scored across the three pillars, with the rationale behind every score.
Read the examples →
Pricing
Engagements sized to scope. Reach out to discuss your organization's requirements.