Quick Intro~7 MIN· GOV2

AI Governance & Compliance

Full Study

A scannable trailer of the 8-lesson course. Read top to bottom — no clicks needed.

INTROBLOCK · 01
GOV2 · 7 MIN PREVIEW

Governance is now code, not slideware.

EU AI Act high-risk regime kicks in 2 Aug 2026. ISO 42001 audits are happening every week. Italian Garante just annulled a €15M fine — and a US judge sent Anthropic to a $1.5B settlement table. The 2026 platform engineer ships policy as code, model cards, and audit logs by default.

CONCEPTBLOCK · 02

What changed in the last 12 months

Until 2024 'AI governance' was a slide deck legal owned. Twelve months later it is operational platform work: **EU AI Act lit the fuse.** GPAI provider obligations have been live since 2 Aug 2025. The AI Office enforcement powers (fines up to €15M / 3% of global turnover) and Annex III high-risk requirements activate **2 Aug 2026** — that is 95 days from today. Every team selling into the EU is scrambling to produce technical documentation per Annex IV. **ISO/IEC 42001:2023 became the certification companies actually buy.** Two-stage audit; ~50–75 evidence artifacts; surveillance audits in years 2 and 3. AICPA-HITRUST converged SOC 2 + AI assessment is the parallel North-American track. **Frontier model providers raised the bar.** Anthropic Responsible Scaling Policy v3.0 (Feb 2026), OpenAI Preparedness Framework, Google DeepMind Frontier Safety Framework v3.0 (Apr 2026) — every system card now publishes capability evaluations against shutdown-resistance, deception, and bio/chem/cyber thresholds. Their templates are now your templates. **Real money landed in court.** Bartz v. Anthropic settled $1.5B (final fairness hearing 14 May 2026). NYT v. OpenAI summary judgment briefs concluded 2 Apr 2026. Moffatt v. Air Canada (Feb 2024) confirmed common-law jurisdictions hold the deploying company liable for chatbot misrepresentation. The person who can wire model cards, fairness audits, prompt guardrails, lineage, and a 42001 evidence pipeline into a single platform is the 2026 hire.
DIAGRAMBLOCK · 03

The governance platform — five lanes

promote?evidenceDOCS / CARDSFAIRNESS / DPGUARDRAILSEVALS / CILINEAGEMODEL REGISTRYOPA / CEDAR GATEISO 42001 / SOC 2
Five evidence streams (cards, fairness/DP, guardrails, evals, lineage) feed a model registry. A policy-as-code gate decides promotion. The same evidence trail is what ISO 42001 / SOC 2 auditors sample.
CODEBLOCK · 04

A model registry promotion gate — 12 lines that catch every bad release

PYTHON
1# OPA + MLflow admission controller. Run on every promote-to-prod call.
2import requests, mlflow
3
4def gate(model_name: str, version: int) -> bool:
5 mv = mlflow.MlflowClient().get_model_version(model_name, version)
6 payload = {
7 "input": {
8 "model_card_present": "model_card_uri" in mv.tags,
9 "fairness_report_present": "fairness_report_uri" in mv.tags,
10 "atlas_threats_linked": "atlas_techniques" in mv.tags,
11 "owner_email": mv.tags.get("owner_email", ""),
12 "trained_on_pii": mv.tags.get("pii_class", "none"),
13 }
14 }
15 r = requests.post("http://opa:8181/v1/data/registry/promote", json=payload, timeout=2)
16 return r.json()["result"]["allow"]
Lines 7-12: every governance signal travels as a model-version tag. Line 14: OPA Rego is the authority — Legal owns the policy file, platform owns the gate. Line 15: a green return is your ISO 42001 promotion evidence — log it.
CHEATSHEETBLOCK · 05

The 5 rules every 2026 governance shipper knows

01Cards always — a model card per model, a datasheet per dataset. Auto-generated, not hand-written.
02Fairlearn for tabular fairness; Presidio for PII redaction; Opacus for differential privacy. The trifecta.
03Two guardrail layers: input scanner (jailbreak / PII) + output scanner (prompt-injection echo / topic). Both required.
04Evals run in CI on every prompt or model bump. Inspect AI is the 2026 reference; lm-eval-harness for academics.
05Policy-as-code gates promotion. OPA Rego or Cedar — pick one, version it, audit it like Terraform.
MINIGAME · RAPIDFIRETFBLOCK · 06

Quick check — true or false?

EU AI Act Annex III high-risk requirements are already in full force.
CLAIM 1/5 · READY · scroll into view
CONCEPTBLOCK · 07

What you'll ship in the full study

Eight lessons. Eight docker projects. Each is something you can drop into your real ML platform tomorrow: — A bias-audit lab that runs Fairlearn + AIF360 + Aequitas on a public tabular dataset (Folktables) and emits a model card + disparity report. — A differential-privacy training lab using Opacus DP-SGD with Ghost Clipping; visualises the (ε, δ) budget vs accuracy curve. — A prompt-injection guardrail gateway: NeMo Guardrails IORails + LLM Guard input/output scanners + LiteLLM upstream proxy. — A PII scrubber pipeline: Microsoft Presidio + spaCy + transformers redacting Slack / Zendesk-shape exports. — An LLM eval CI: Inspect AI 0.3.209 + Phoenix tracing, gating PRs that touch prompts or model versions. — A policy-as-code model-registry gate: MLflow + OPA Rego v1 + Cedar checking model-card / fairness / ATLAS-threats completeness before promotion. — A RAG lineage stack: OpenLineage + Marquez + Qdrant tracing doc → chunk → embedding → retrieval → response. — A model-card + risk-register generator that emits MODEL_CARD.md + RISK_REGISTER.csv + ISO 42001 SoA YAML. Every project is meant to be lifted into your real work, not just demoed.
LESSON COMPLETEBLOCK · 08

That's the trailer.

NEXTLesson 1 · The 2026 governance landscape
WHAT YOU'LL WALK AWAY WITH

Real skills, real career delta.

Skills you'll gain

10
  • Map the 2026 AI regulatory landscapeWorking

    Decode EU AI Act timelines (Annex III, Annex IV, Annex VI vs VII), NIST AI RMF + Generative AI Profile, ISO/IEC 42001:2023 vs 23894, US state laws (Colorado SB 24-205, CA AB 2013, NYC LL 144), GDPR Art. 22 + Art. 32, India DPDP Rules — and translate each into a concrete platform-engineering control.

  • Author audit-ready model cards & datasheetsWorking

    Generate a Mitchell et al model card and a Gebru et al datasheet from a model registry's metadata; align fields to EU AI Act Annex IV technical documentation; ship as part of CI; sample-tight against ISO 42001 evidence requirements.

  • Run a fairness audit with Fairlearn + AIF360Production

    Use MetricFrame + demographic_parity_difference + equalized_odds_difference + equal_opportunity_difference on tabular data; mitigate with ThresholdOptimizer / ExponentiatedGradient; emit disparity_report.html and a plain-English exec summary that survives a regulator's read.

  • Train with differential privacy in PyTorchProduction

    Wire Opacus DP-SGD + Ghost Clipping into a real training loop; tune noise_multiplier / max_grad_norm; explain (ε, δ) budgets to legal; visualise the privacy/utility curve; finetune a LoRA adapter on a foundation model with formal DP guarantees.

  • Stand up an LLM guardrail gatewayProduction

    Compose NeMo Guardrails 0.20 IORails + LLM Guard input/output scanners in front of a LiteLLM proxy; triage a real jailbreak corpus; report precision/recall against MITRE ATLAS techniques; publish per-tenant policy YAML.

  • Build PII scrubbing pipelinesProduction

    Deploy Microsoft Presidio analyzer + anonymizer with spaCy + transformer recognizers; add custom recognizers for product-specific identifiers; benchmark recall on synthetic + real corpora; integrate into log/ticket egress for GDPR Art. 32.

  • Eval-gate prompt and model changes in CIProduction

    Author Inspect AI Tasks + Solvers + Scorers; wire into GitHub Actions on PRs that touch prompts or model versions; trace runs in Phoenix (OpenInference / OpenTelemetry); publish a regression delta as a PR comment.

  • Write policy-as-code for model registriesProduction

    Author Rego v1 (or Cedar v4.5) policies that gate MLflow promotion on model-card / fairness-report / ATLAS-threats / owner-email presence; ship a tiny admission controller in Go or Python; version the policy file in Git like Terraform.

  • Trace data lineage end-to-endWorking

    Emit OpenLineage events from a RAG pipeline (loader → chunker → embedder → vector store → retriever → LLM); wire to a Marquez 0.51 server; produce a screenshot-able DAG that answers GDPR Art. 15 / DPDP source-tracing requests.

  • Drive an ISO 42001 / SOC 2 + AI engagementAdvanced

    Map the 38 ISO 42001 Annex A controls to your platform; produce a Statement of Applicability and AI Impact Assessment per system; pre-stage Stage 1 evidence; map to AICPA-HITRUST converged SOC 2 + AI controls (CC6/CC7/CC8/PI1); brief auditors and own the exception register.

Career & income delta

Career moves
  • Title yourself credibly as 'AI Governance Engineer', 'Responsible AI Engineer', or 'ML Platform & Compliance Lead' — the 2026 hiring channel for senior IC roles at every regulated industry, all major cloud Bedrock/Vertex/Azure-AI partner programs, and the algorithmic-auditing firms (Holistic AI, BABL AI, Saidot, ORCAA).
  • Lead the AI / ISO 42001 cert program — every Series-B+ company shipping into the EU is hiring a person to drive 42001 + AICPA-HITRUST converged SOC 2 + AI; cert-program owner is one of the best-paid IC roles in 2026.
  • Pick up consulting work at $250-450/hr — sim/real for robotics has its peak, but 'wire our LLM platform for the 2 Aug 2026 deadline' is the dominant 2026 inquiry. Six-week engagements are typical.
  • Become the bridge between Legal and ML — every team's Legal counsel is asking 'what is our exposure?' and most ML engineers can't translate. Speak both vocabularies and you become unfireable.
Income impact
  • $30-100K bump for senior ML / platform engineers adding production governance + ISO 42001 evidence pipeline + LLM guardrails to their resume in 2026.
  • $250-450K total comp for senior IC AI Governance Engineer at FAANG / financial / regulated SaaS (per April 2026 levels.fyi data + public job listings).
  • Freelance / consulting rates: $250-450/hr — running a 4–8 week ISO 42001 readiness sprint or wiring a multi-tenant guardrail gateway. Algorithmic-audit subcontracts pay $300-500/hr.
  • Sales-engineering uplift at any AI-platform vendor: closing a regulated-industry deal often hinges on a working bias-audit demo + a model-card generator + a policy gate — all of which this course ships.
  • EU bands: typically 50-70% of US — Berlin, Munich, Dublin, London, Paris, Amsterdam concentrate hiring (DPC + AI Office near-by, Microsoft / Anthropic / OpenAI EU offices). Senior €110-220K total.
Market resilience
  • Regulators outlast model providers. EU AI Act, ISO 42001, GDPR, India DPDP — these don't sunset when a foundation-model vendor pivots.
  • Policy-as-code (OPA / Cedar) is portable across model registries, MLOps stacks, and cloud providers; once you can author Rego v1, you can gate any change-management surface.
  • Bias / privacy / lineage skills compound — every dataset you audit, every PII scrubber you tune, every lineage DAG you draw becomes part of a portfolio that survives multiple job cycles.
  • Audit experience is durable — once you have shipped an ISO 42001 Stage 1 + Stage 2 you can do it for any system, sector, or country. Auditors and certification bodies will hire you.
  • Real incidents are accelerating, not slowing — Bartz, NYT v. OpenAI, DeepSeek bans, Italian Garante decisions. The demand curve for governance engineers points up through 2030 minimum.