u/nilipilo

I got tired of missing things in 600-line Terraform PR reviews, so I built a free Action that posts an architectural diff back as a comment

Hey r/Terraform —

Long-time lurker, first-time poster. I built a tool called ArchiteX because I kept reviewing huge terraform plan diffs and missing the one line that mattered. Sharing it here because this is the audience that will tell me, honestly, whether it's actually useful or just my own itch.

What it does: drop-in GitHub Action. On every PR that touches *.tf files, it parses the HCL at both the base and head refs, builds a resource graph for each, computes the architectural delta (added / removed / changed nodes and edges), runs a set of weighted risk rules, and posts a sticky comment with:

  • a 0–10 risk score with explainable reasons (each rule weight is documented and capped at 10.0)
  • a plain-English summary of what changed and why a reviewer should care
  • a focused Mermaid diagram of only the changed nodes + one layer of context — not the whole topology
  • an optional CI gate (mode: blocking) for high-risk changes
  • an audit bundle uploaded as a workflow artifact (summary.md, score.json, egress.json, a self-contained report.html, and a SHA-256 manifest)
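For anyone who wants a picture of what "drop-in" means, a workflow along these lines is the shape of it. The action path, version tag, and the mode input below are my illustrative guesses, not the documented interface; the repo's README has the real names.

```yaml
# Sketch of a wiring, not the documented config -- action ref and
# input names are placeholders.
name: architex
on:
  pull_request:
    paths:
      - "**/*.tf"

permissions:
  pull-requests: write   # needed to post the sticky comment

jobs:
  arch-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0           # base ref must be available to diff against
      - uses: nilipilo/architex@v1 # placeholder ref
        with:
          mode: comment            # or "blocking" to gate high-risk changes
```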

Why I think it's different from tfsec / Checkov: those are great at "this line is misconfigured". ArchiteX answers "what changed in the architecture?" — a brand-new public entry point, an SG flipping from 10.0.0.0/16 to 0.0.0.0/0, a resource gated behind count = var.create ? 1 : 0 that you didn't notice was being toggled on. It's the architectural-delta layer on top of those tools, not a replacement. Run them side-by-side.
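To make those two examples concrete, here is the shape of change an architectural diff is meant to surface. Everything below is invented for illustration: names, CIDRs, the AMI.

```hcl
# A line-level scanner sees one changed attribute; the architectural
# diff flags that the SG edge now reaches the public internet.
resource "aws_security_group_rule" "app_ingress" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # was ["10.0.0.0/16"]
  security_group_id = aws_security_group.app.id
}

# The bastion's own block is untouched in the PR; only the variable
# default flips, which silently materializes the resource.
variable "create_bastion" {
  default = true # was false
}

resource "aws_instance" "bastion" {
  count         = var.create_bastion ? 1 : 0
  ami           = "ami-0abc1234" # placeholder
  instance_type = "t3.micro"
}
```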

Things I made deliberate calls on:

  • No LLM in the hot path. Template-based renderer. Same input → byte-identical output across runs, machines, contributors. I wanted a tool where re-running can never quietly change a score and erode reviewer trust.
  • Local-only. Raw HCL never leaves the runner. The only network call is the GitHub REST API call to post the comment. No SaaS, no telemetry, no account, no paid tier.
  • Conditional resources are first-class. Module-author repos have lots of count = var.x ? 1 : 0. Those resources get rendered as conditional phantoms (? prefix in the diagram) and explicitly excluded from per-resource rules so they can't false-positive.
  • Self-contained HTML report — no JS, no CDN, no remote fonts. Open it in an air-gapped browser, the full report renders.
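The determinism point can be sketched in a few lines of Go. This is not ArchiteX's actual renderer, just an illustration of the two choices that make output byte-identical: sort findings before rendering (Go deliberately randomizes map iteration order), and keep timestamps and random IDs out of the template.

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
	"text/template"
)

// finding is one triggered rule with its weight.
type finding struct {
	Rule   string
	Weight float64
}

// render produces a comment body deterministically: the findings map is
// flattened into a sorted slice before the template runs, so two calls
// with the same input always emit the same bytes.
func render(findings map[string]float64) string {
	keys := make([]string, 0, len(findings))
	for k := range findings {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	sorted := make([]finding, 0, len(keys))
	for _, k := range keys {
		sorted = append(sorted, finding{Rule: k, Weight: findings[k]})
	}

	tmpl := template.Must(template.New("comment").Parse(
		"{{range .}}- {{.Rule}}: +{{printf \"%.1f\" .Weight}}\n{{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, sorted); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	findings := map[string]float64{
		"public-ingress": 4.5,
		"new-iam-role":   2.0,
	}
	a, b := render(findings), render(findings)
	fmt.Println(a == b) // byte-identical across runs
	fmt.Print(a)
}
```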

Coverage today: 45 AWS resource types across 7 abstract roles (network, access control, compute, entry points, data, storage, identity), 18 weighted risk rules. Multi-provider (Azure/GCP) is on the roadmap.
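For the curious, a capped weighted score like the one described reduces to something this small. The rule weights here are invented, not the shipped calibration:

```go
package main

import "fmt"

// score sums the weights of triggered rules and caps the total at 10.0,
// matching the "explainable reasons, capped at 10.0" behavior the post
// describes. Weights below are illustrative only.
func score(weights []float64) float64 {
	total := 0.0
	for _, w := range weights {
		total += w
	}
	if total > 10.0 {
		return 10.0
	}
	return total
}

func main() {
	fmt.Println(score([]float64{4.5, 2.0}))      // 6.5
	fmt.Println(score([]float64{4.5, 4.5, 3.0})) // capped at 10
}
```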

Free + MIT. Single Go binary, single Action, zero config to start.

What I'd love your help with:

  1. What breaks it in your repo? Coverage gaps are the #1 thing I want to fix. If you have a Terraform pattern that ArchiteX mis-parses or misses entirely, the smallest reproducer you can paste in an issue is the highest-value contribution I can ask for.
  2. Are the rule weights sensible? They're calibrated to my own taste and a small group of testers. I'd love to hear "rule X at weight Y is too high/low for my team's risk tolerance."
  3. Module authors — does materializing conditional count resources as phantoms match what you'd want, or would you rather have a separate "module health" mode entirely?

Will answer every comment in the thread.
