GOV2Course

AI Governance & Compliance

Lessons8modules
Total86mfull study
Quick7mtrailer
Projects8docker labs
CHEATSHEET · 01Governance · master cheatsheet
Regulatory dates that matter (April 2026)
  • ·EU AI Act prohibitions — live since 2 Feb 2025
  • ·EU AI Act GPAI provider obligations + AI Office — live since 2 Aug 2025
  • ·EU AI Act Annex III high-risk + AI Office enforcement — 2 Aug 2026
  • ·EU AI Act embedded products (medical, machinery) — 2 Aug 2027
  • ·Colorado AI Act — enforcement delayed to 30 Jun 2026 (SB 25B-004)
  • ·California AB 2013 — generative-AI training-data disclosures live 1 Jan 2026
  • ·NYC LL 144 AEDT — bias audits live since Jul 2023; enforcement uplift in 2026 after Comptroller audit
  • ·PCI DSS 4.0.1 — future-dated requirements mandatory since 31 Mar 2025
  • ·India DPDP Rules — Phase 1 live 13 Nov 2025; Phase 2 Nov 2026; Phase 3 May 2027
ISO 42001 audit stages
  • ·Stage 1 (~2 days, mostly remote): docs review — AIMS Manual, AI Policy, Scope, AI Risk Register, SoA on the 38 Annex A controls, AI Impact Assessments
  • ·Stage 2 (1–3 weeks on-site): evidence sampling — interviews, training records, internal-audit reports, mgmt-review minutes, incident logs, supplier evals
  • ·Surveillance — abbreviated audits in years 2 and 3 (~1 week each)
  • ·Recertification — full audit in year 3
  • ·Common gaps in 2025/26: missing model cards, no AI-incident classification rubric, 'human oversight' is a checkbox not a role, no data-quality acceptance criteria
The 5 controls every AI gets audited on
  • ·Documentation: model card + datasheet + system card per release
  • ·Risk register: ATLAS technique → control mapping with named owner
  • ·Human oversight: role + frequency + decision authority — not a checkbox
  • ·Incident response: AIID-style classification, MTTA/MTTR per severity tier
  • ·Change management: model version → eval results → policy gate → promotion log
US state-law map
  • ·Colorado SB 24-205 (delayed to 30 Jun 2026 / possible repeal-and-replace under ADMT framework Jan 2027)
  • ·California AB 2013 — GenAI training-data transparency (live 1 Jan 2026; covers systems released after 1 Jan 2022)
  • ·California SB 1047 — vetoed Sep 2024
  • ·NYC LL 144 — automated employment decision tool bias audits (live; enforcement uplift in 2026)
  • ·Texas, Tennessee, Illinois, Utah — sectoral AI rules (employment, mental health, deepfake)
  • ·Federal — EO 14110 rescinded; NIST RMF + Generative AI Profile remain voluntary baselines
Incident classification rubric (AIID-aligned)
  • ·Severity 1 — physical harm, mass loss, regulatory action (e.g. SyRI, Apple Card DFS investigation)
  • ·Severity 2 — service-level legal liability (e.g. Moffatt v. Air Canada)
  • ·Severity 3 — privacy/safety harm without immediate regulatory escalation (e.g. PII leak in logs)
  • ·Severity 4 — internal red-team find, no external impact
  • ·Always log: timestamp, model version, prompt/input hash, ATLAS technique, mitigation
CHEATSHEET · 02Framework picks · 2026
Bias / fairness — pick by audience
  • ·Fairlearn (Microsoft) — scikit-learn-native; MetricFrame, equalized_odds_difference, ThresholdOptimizer, ExponentiatedGradient. Default for tabular.
  • ·AIF360 (IBM, LF AI) — broader algorithm catalog (~70 metrics, 11 mitigations); heavier surface area
  • ·Aequitas (Univ. Chicago) — web UI on disparity audit; best for stakeholder demos
  • ·What-If Tool — TensorBoard-era counterfactual explorer; legacy but readable for non-technical audiences
  • ·InterpretML — EBM (Explainable Boosting Machine) is the production-grade interpretable model
Privacy-preserving ML
  • ·Opacus (PyTorch) — DP-SGD; recent Ghost Clipping (Aug 2024) drastically cut memory; PEFT/LoRA tutorial Dec 2024
  • ·TF-Privacy — TensorFlow analogue; less actively developed
  • ·Flower (flower.ai) — federated-learning framework; production OSS option
  • ·PySyft / OpenMined — federated + secure aggregation, more research-leaning
  • ·Microsoft Presidio — PII analyzer + anonymizer; spaCy + transformers; mcr.microsoft.com images
LLM guardrails
  • ·NeMo Guardrails 0.20.0 (Jan 2026) — IORails parallel; OpenAI-compatible server; Colang 2.x DSL
  • ·LLM Guard (protectai) — fully OSS; ~15 input + 20 output scanners; runs offline
  • ·Lakera Guard — closed SaaS; sub-50ms; acquired by Check Point Sep 2025
  • ·Vigil (deadbits) — vector + YARA + transformer + canary tokens
  • ·Guardrails AI — Pydantic-style validators around outputs
Eval harnesses
  • ·Inspect AI 0.3.209 (UK AISI) — Solver/Scorer/Task decorators; 200+ evals via inspect_evals; reference choice for CI
  • ·lm-evaluation-harness (EleutherAI) — academic-benchmark default
  • ·HELM (Stanford CRFM) — holistic, scenario-based
  • ·OpenAI evals — open repo; less actively curated than Inspect now
  • ·MLCommons AILuminate v1.1 — 24K prompts × 12 hazard categories with 5-point safety grade; public scoreboard
Lineage + observability
  • ·OpenLineage + Marquez (LF AI & Data) — reference open implementations; Marquez 0.51.0 visualises the DAG
  • ·Arize Phoenix — OpenTelemetry + OpenInference; OSS, Docker; the free-tier LLM tracer
  • ·Langfuse — self-hostable; most-adopted OSS LLM-engineering platform
  • ·LangSmith — closed; default if you ship LangChain / LangGraph
  • ·Helicone — proxy w/ semantic caching (20–40% cost savings)
Policy as code
  • ·OPA 1.9.x — Rego v1; Rego→SQL WHERE compilation for row-level authz
  • ·Cedar v4.5 (AWS, Apache 2.0) — Amazon Verified Permissions, Bedrock AgentCore policy layer
  • ·Casbin / Themis — niche alternatives
  • ·Conftest — OPA on YAML/HCL for CI gates (terraform / Helm / model registry manifests)
Avoid / migrate
  • ·Hand-rolled spreadsheets for AI inventory — auto-generate from a registry
  • ·Hand-rolled PII regexes — Presidio's transformer recognizers outperform on recall
  • ·torch.load(..., weights_only=False) on untrusted weights — RCE class; pin weights_only=True (PyTorch 2.6+ default)
  • ·lm-eval-harness as your only safety eval — pair with AILuminate / Inspect for hazard coverage
  • ·OPA without Rego v1 strict mode — v1 is the future; turn it on now