u/Human-Philosopher782

Do you trust AI-generated tests, or do they cause more problems than they solve?

Genuine question. I've tried a few AI test generation tools and the pattern is always the same: it generates a full spec file, I spend time tweaking mocks and edge cases, then on the next run it regenerates everything and my changes are gone.

It made me think — the problem isn't AI writing bad tests. The problem is AI writing tests into files that humans also edit, with no concept of ownership.

Has anyone found a tool or workflow that handles this well? Or is the whole approach of AI-generated tests fundamentally at odds with how developers actually work?

u/Human-Philosopher782 — 3 days ago

SilentSpec — VS Code extension that generates unit tests on save, only for uncovered functions

Generates unit tests when you save JS/TS files, but only for exported functions that don't already have coverage. It never overwrites existing tests; if you already have handwritten specs, it writes generated tests to a companion file instead.
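That companion-file rule is the interesting part for the ownership question upthread. Here's a minimal sketch of how such a check could work — the file-naming convention (`*.spec.ts` for handwritten, `*.generated.spec.ts` for the companion) is my assumption for illustration, not necessarily SilentSpec's actual scheme:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical companion-file rule: if a human-owned spec already
// exists next to the source file, generated tests go to a separate
// companion file instead of overwriting it.
function generatedSpecPath(srcFile: string): string {
  const dir = path.dirname(srcFile);
  const base = path.basename(srcFile).replace(/\.(ts|js)$/, "");
  const handwritten = path.join(dir, `${base}.spec.ts`);
  return fs.existsSync(handwritten)
    ? path.join(dir, `${base}.generated.spec.ts`) // companion file
    : handwritten; // no spec yet: generated file takes the default name
}
```

With a split like this, a regeneration pass can freely rewrite `*.generated.spec.ts` while treating the plain `*.spec.ts` as human-owned — which is exactly the ownership boundary the original question was missing.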

  • AST analysis detects exports, marker reconciliation tracks what's already covered, and only the gaps get generated
  • TypeScript compiler pass heals broken tests before writing
  • 9 AI providers: GitHub Models (free), OpenAI, Claude, Ollama (local), Azure, Bedrock, Vertex
  • Auto-detects Jest, Vitest, Mocha, Jasmine
  • No login, no telemetry, BYOK (bring your own API key)
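The "detect exports, generate only gaps" idea from the first bullet can be sketched in a few lines. This is a deliberate simplification: a regex stands in for real AST analysis, and the `covered` set would come from marker reconciliation against existing specs rather than being passed in by hand.

```typescript
// Naive stand-in for AST-based export detection: match top-level
// `export function` / `export async function` declarations.
function findExportedFunctions(source: string): string[] {
  const re = /export\s+(?:async\s+)?function\s+([A-Za-z_$][\w$]*)/g;
  return [...source.matchAll(re)].map((m) => m[1]);
}

// Given the exports and the set already covered by existing specs,
// only the uncovered names become candidates for test generation.
function uncoveredExports(source: string, covered: Set<string>): string[] {
  return findExportedFunctions(source).filter((name) => !covered.has(name));
}

const src = `
export function add(a: number, b: number) { return a + b; }
export async function fetchUser(id: string) { /* ... */ }
function internalHelper() {}
`;

console.log(uncoveredExports(src, new Set(["add"])));
// → ["fetchUser"]
```

A real implementation would use the TypeScript compiler API to walk the AST (catching `export const fn = () => {}`, re-exports, default exports, etc.), but the gap-filtering step looks the same either way.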

https://github.com/BHARADWAJ-MADDURI/silent-spec
