GOV2 · Course

AI Governance & Compliance

Lessons · 8 modules
Total · 86m full study
Quick · 7m trailer
Projects · 8 docker labs

Bias-audit lab on Folktables ACSIncome

Fairlearn + AIF360 + Aequitas on Folktables ACSIncome, the auditor-grade successor to the UCI Adult dataset. Three artifacts: an HTML disparity report, a YAML fairness report, and an executive summary.

Image · snap/governance-compliance:bias-audit
Repo · snap-bias-audit-folktables
$ git clone https://github.com/snap-dev/snap-bias-audit-folktables.git
docker-compose.yml
services:
  audit:
    image: snap/governance-compliance:bias-audit
    environment:
      DATASET: ACSIncome
      STATE: CA
      YEAR: "2018"
      SENSITIVE_FEATURE: RAC1P
      GATE_METRIC: equalized_odds_difference
      GATE_THRESHOLD: "0.05"
    volumes:
      - ./out:/workspace/out
    command: ["python", "audit_folktables.py"]

  dashboard:
    image: snap/governance-compliance:bias-dashboard
    depends_on: [audit]
    volumes:
      - ./out:/workspace/out:ro
    ports: ["8090:8090"]
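The audit container's metric step can be approximated in plain NumPy. The function and variable names below are illustrative, not the lab's actual code; the toy arrays stand in for model predictions and the RAC1P group column:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rate between any two groups (DP)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in TPR or FPR between groups (the lab's gate metric, EO)."""
    gaps = []
    for label in (0, 1):  # label 0 drives the FPR gap, label 1 the TPR gap
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy data: two groups, binary predictions
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

dp = demographic_parity_difference(y_pred, group)        # 0.25
eo = equalized_odds_difference(y_true, y_pred, group)    # 0.5
```

In the real lab these values come from Fairlearn's own metric functions over the full ~195k-row CA/2018 slice; the hand-rolled versions here just make the definitions concrete.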
Run
~/snap-bias-audit-folktables · zsh
$ docker compose up --abort-on-container-exit
Folktables ACSIncome CA/2018 loaded; Fairlearn DP/EO/EOpp metrics computed; ThresholdOptimizer applied; three artifacts emitted under ./out.
What you'll observe
Folktables CA/2018 ACSIncome (~195k rows) downloads cleanly
Three Fairlearn metrics computed: demographic parity (DP), equalized odds (EO), and equal opportunity (EOpp) differences
ThresholdOptimizer mitigation runs when EO exceeds the gate; the post-mitigation value is reported
disparity_report.html renders 12 slices in Aequitas style
fairness_report.yaml is parseable by the model-card generator
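The release gate itself reduces to a few lines once the YAML report is parsed. The report's key layout below is an assumption for illustration; only the metric name and the 0.05 threshold come from the compose environment above:

```python
def gate(report: dict,
         metric: str = "equalized_odds_difference",
         threshold: float = 0.05) -> bool:
    """Return True (release allowed) when the gated disparity is within threshold."""
    value = report["metrics"][metric]
    return value <= threshold

# A report with EO above 0.05 fails the gate
report = {"metrics": {"demographic_parity_difference": 0.03,
                      "equalized_odds_difference": 0.08}}
assert not gate(report)
```

In CI the dict would come from loading fairness_report.yaml and a failing gate would exit nonzero to block the release.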
Lift this to your work

Drop your real production model in place of the GBClassifier and bring your own sensitive-attribute column. Fairlearn doesn't care about the model class: sklearn, XGBoost, LightGBM, even a wrapped torch model all work. Wire it into CI and gate releases on max equalized_odds_difference. The output is exactly what fair-lending, AEDT, and EU AI Act auditors want to see. Pair it with cardgen; the YAML auto-merges into the model card.
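The CI wiring can be one job. This GitHub Actions sketch is illustrative only: the check_gate.py script name and its flags are assumptions, not part of the lab repo, but the compose command, report path, metric, and threshold match the lab above:

```yaml
jobs:
  fairness-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run bias audit
        run: docker compose up --abort-on-container-exit
      - name: Gate release on equalized odds
        run: |
          python check_gate.py \
            --report out/fairness_report.yaml \
            --metric equalized_odds_difference \
            --threshold 0.05
```

A nonzero exit from the gate step fails the workflow and blocks the release, which is the behavior described above.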