SECCourse

AI security & prompt-injection defense

Lessons: 10 modules
Total: 106m full study
Quick: 7m trailer
Projects: 8 docker labs

threat-model-workbench · STRIDE-for-LLM + ATLAS

YAML threat model + MITRE ATLAS mapper + Markdown report. Design-review checkpoint for any new LLM service.

snap/ai-security:threat-model · Repo: ai-security-threat-model
$ git clone https://github.com/snap-dev/ai-security-threat-model.git
docker-compose.yml
services:
  workbench:
    image: python:3.11-slim
    working_dir: /app
    volumes:
      - ./src:/app/src:ro
      - ./threats:/app/threats:ro
      - ./reports:/app/reports
      - ./requirements.txt:/app/requirements.txt:ro
    environment:
      ATLAS_DB: /app/threats/atlas-techniques.json
      STRIDE_REQUIRED: "S,T,R,I,D,E"
    command: >-
      bash -c "pip install -q -r requirements.txt && python -m src.lint /app/threats/threats.yml && python -m src.report /app/threats/threats.yml --out /app/reports/threat-model.md"
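The linter validates each entry in threats.yml against the required field set (stride / atlas / surface / mitigations / owner / severity). A minimal hypothetical entry might look like this; the ATLAS ID, names, and values are illustrative only, not taken from the repo:

```yaml
# Hypothetical threats.yml entry -- field names follow the linter's
# required set; the ATLAS mapping and all values are illustrative.
threats:
  - id: T-001
    title: Prompt injection via retrieved documents
    stride: [T, I]              # Tampering, Information disclosure
    atlas: AML.T0051            # LLM Prompt Injection (illustrative mapping)
    surface: RAG retrieval pipeline
    severity: high
    owner: platform-security
    mitigations:
      - Neutralize instruction-like text in retrieved chunks
      - Enforce a tool allow-list at the agent layer
```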
Run
~/ai-security-threat-model · zsh
$ docker compose up --abort-on-container-exit
[scan] N threats parsed · [stride] coverage breakdown · reports/threat-model.md generated.
What you'll observe
threats.yml linted: every threat has stride / atlas / surface / mitigations / owner / severity
Linter fails the build if any STRIDE letter has 0 threats — forces full coverage
ATLAS references validated against vendored atlas-techniques.json
reports/threat-model.md groups threats by STRIDE / severity / owner
Output is paste-ready for a Confluence / Notion design review
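The fail-the-build-on-empty-STRIDE-letter behavior above reduces to a small coverage check. A minimal sketch (function and field names are assumptions, not the repo's actual src.lint code):

```python
from collections import Counter

REQUIRED_STRIDE = list("STRIDE")  # S, T, R, I, D, E must all be covered

def stride_coverage(threats):
    """Return the STRIDE letters that no threat covers.

    `threats` is a list of dicts with a `stride` list, mirroring the
    hypothetical threats.yml shape; a non-empty result fails the build.
    """
    counts = Counter()
    for threat in threats:
        for letter in threat.get("stride", []):
            counts[letter] += 1
    return [letter for letter in REQUIRED_STRIDE if counts[letter] == 0]

threats = [
    {"id": "T-001", "stride": ["S", "T"]},
    {"id": "T-002", "stride": ["I", "D"]},
]
print(stride_coverage(threats))  # prints ['R', 'E'] -- uncovered letters
```

A linter built this way forces the team to write at least one Repudiation and one Elevation-of-privilege threat before the report will generate, rather than silently shipping partial coverage.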
Lift this to your work

Make this the design-review checkpoint for any new LLM service: drop a threats.yml next to the code, gate PRs on a green threat-model run, and have reviewers diff threat changes alongside code. Snap requires this before any new agent or RAG service ships; it caught a missing tool-allow-list before an agent went live last quarter.
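Gating PRs on a green run can be wired up with any CI system; a hypothetical GitHub Actions sketch (workflow, paths, and service name are assumptions) that fails the PR when the lint or report step fails:

```yaml
# Hypothetical CI gate -- names and paths are illustrative.
name: threat-model
on:
  pull_request:
    paths: ["threats/**", "src/**"]
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # --exit-code-from propagates the workbench container's exit status,
      # so a failed lint fails the job and blocks the merge.
      - run: docker compose up --abort-on-container-exit --exit-code-from workbench
```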