Bias-audit lab on Folktables ACSIncome
Fairlearn + AIF360 + Aequitas on the auditor-grade dataset of the post-COMPAS era. Three artifacts: an HTML report, a machine-readable YAML metrics file, and an executive summary.
services:
  audit:
    image: snap/governance-compliance:bias-audit
    environment:
      DATASET: ACSIncome
      STATE: CA
      YEAR: "2018"
      SENSITIVE_FEATURE: RAC1P
      GATE_METRIC: equalized_odds_difference
      GATE_THRESHOLD: "0.05"
    volumes:
      - ./out:/workspace/out
    command: ["python", "audit_folktables.py"]
  dashboard:
    image: snap/governance-compliance:bias-dashboard
    depends_on: [audit]
    volumes:
      - ./out:/workspace/out:ro
    ports: ["8090:8090"]
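The gate the audit service applies can be sketched as follows. This is a minimal, hand-rolled version of the metric, using synthetic predictions in place of model output on ACSIncome; in the real container, Fairlearn provides it as `fairlearn.metrics.equalized_odds_difference(y_true, y_pred, sensitive_features=...)`, and the `audit_folktables.py` internals shown here are an assumption, not the image's actual source.

```python
import numpy as np

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest between-group gap in true-positive rate or false-positive
    rate. Hand-rolled to show the math behind the CI gate; Fairlearn
    ships the production version."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))

    def rates(mask):
        tpr = y_pred[mask & (y_true == 1)].mean()  # P(pred=1 | true=1, group)
        fpr = y_pred[mask & (y_true == 0)].mean()  # P(pred=1 | true=0, group)
        return tpr, fpr

    tprs, fprs = zip(*(rates(sensitive == g) for g in np.unique(sensitive)))
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

if __name__ == "__main__":
    # Toy labels/predictions for two groups, standing in for model output.
    y_true = [1, 1, 0, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    group = ["A"] * 4 + ["B"] * 4
    diff = equalized_odds_difference(y_true, y_pred, group)
    # GATE_THRESHOLD=0.05 from the compose file; in CI a FAIL exits nonzero.
    verdict = "PASS" if diff <= 0.05 else "FAIL"
    print(f"equalized_odds_difference = {diff:.3f} -> {verdict}")
```

Group A has TPR 0.5 and FPR 0.0, group B has TPR 1.0 and FPR 0.5, so the toy data fails the 0.05 gate with a difference of 0.5.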
Drop your real production model in place of the GBClassifier and bring your own sensitive-attribute column. Fairlearn doesn't care about the model class: sklearn, XGBoost, LightGBM, even a wrapped torch model all work. Wire it into CI and gate releases on the maximum equalized_odds_difference. The output is what fair-lending, NYC AEDT, and EU AI Act auditors expect to see. Pair it with cardgen; the YAML artifact auto-merges into the model card.
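The merge into the model card amounts to a recursive dict fold. The sketch below shows that idea in plain Python on already-parsed dicts (a real pipeline would load the YAML artifact first, e.g. with PyYAML); cardgen's actual merge semantics, and the key names used here, are assumptions for illustration.

```python
def merge_into_card(card: dict, audit: dict) -> dict:
    """Fold audit findings into an existing model-card dict: nested
    sections merge key-by-key, and audit values win on conflicts."""
    out = dict(card)
    for key, value in audit.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_into_card(out[key], value)  # recurse into sections
        else:
            out[key] = value  # scalar or new section: audit side wins
    return out

# Hypothetical card and audit-artifact contents, for illustration only.
card = {"model": {"name": "gbm"}, "metrics": {"auc": 0.91}}
audit = {"metrics": {"equalized_odds_difference": 0.03}}
merged = merge_into_card(card, audit)
print(merged)
```

Existing card fields survive (`model.name`, `metrics.auc`) while the audit's fairness metrics land alongside them in the `metrics` section.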