Parts 2 & 3: Zero Secrets and Zero Trust on GKE (PCI-DSS follow-up)
Posted Part 1 last week, covering cluster hardening for a PCI-DSS setup on GKE.
Just finished Parts 2 & 3, this time focusing on the two areas that seem to break most “compliant” setups in practice:
- removing secrets from workloads entirely (workload identity instead of keys/env vars)
- locking down service-to-service communication (default deny + mTLS + identity-based access)
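For anyone skimming, here is a minimal sketch of both ideas as they'd land on a cluster (names like `payments-api` and `my-project` are placeholders, not from the posts):

```yaml
# Workload Identity: bind a Kubernetes SA to a Google SA via annotation,
# so pods get GCP credentials at runtime instead of mounted keys/env vars.
# (Hypothetical names; the IAM binding on the GCP side is done separately.)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api
  namespace: payments
  annotations:
    iam.gke.io/gcp-service-account: payments-api@my-project.iam.gserviceaccount.com
---
# Namespace-scoped default deny: empty podSelector matches all pods,
# and with no ingress/egress rules, all traffic is dropped until
# explicitly allowed by more specific policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

You'd then layer narrow allow policies per workload on top of the deny, which is where most of the real work is.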
One thing that stood out while going deeper: a hardened cluster doesn’t really mean much if
- pods still carry credentials
- everything inside the cluster can still talk to everything else
That’s usually where the real risk is, not the perimeter.
I’m trying to map this to how it would actually be implemented in a real fintech environment, not just to audit checklists.
Parts 2 & 3 are here:
https://medium.com/@rasvihostings/building-a-pci-dss-compliant-gke-framework-for-financial-institutions-1d1f2c003622
Curious how others are approaching this in real setups:
- Do you enforce default-deny network policies cluster-wide?
- Anyone running strict mTLS everywhere, or is it usually partial?
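For context on the mTLS question: if you're on Istio, mesh-wide strict mode is a single object in the mesh's root namespace (assuming the default `istio-system` setup; exceptions then have to be carved out per namespace or workload, which is usually where "partial" creeps in):

```yaml
# Mesh-wide strict mTLS: sidecars reject any plaintext traffic.
# Placed in the root namespace so it applies to the whole mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```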
Feels like this is where most setups drift away from what zero trust is supposed to be.