u/Deena_Brown81

How do you get ISO 26262 certification when your test suite wasn't built for compliance from day one?

We're a 22-person NEV startup and our software team is 6 people. We built and tested the HMI stack in about 10 months, which felt like a win. We thought the hard part was done. Then we hit ISO 26262 ASIL-B certification.

The software works and the test results are clean. But the traceability documentation (requirements mapped to test cases mapped to evidence artifacts) has to be produced manually. We don't have a dedicated QA team, and we don't have a technical writer with embedded experience. We have engineers who build things, and for the last 11 weeks some of them have been producing spreadsheets for auditors instead.

The manufacturing partner is waiting on us and our commercial pilot keeps slipping because we can't give anyone a confident timeline.

We started looking at whether any tooling could help. Polarion, Codebeamer, and DOORS Next are the obvious ones for requirements traceability. AskUI has a docgen layer that claims to generate traceability output from existing test artifacts rather than re-entering everything manually. We're still evaluating, nothing deployed yet.

What I haven't figured out is the retrofit problem: most of these tools assume you set up the requirements traceability infrastructure before the project starts, not 10 months in when you're trying to certify something that's already built.
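To make the retrofit concrete, here's roughly what I'm picturing (nothing deployed, just a sketch): tag each existing test with its requirement ID and script the matrix out of the test sources plus the JUnit results. The pytest marker convention and the REQ-HMI-* IDs below are placeholders I made up, not anything a specific tool requires.

```python
# Sketch: build a requirement -> test -> result matrix from an existing
# pytest suite. Assumes each test gets retrofitted with a marker like
#
#   @pytest.mark.requirement("REQ-HMI-001")
#   def test_brightness_ramp(): ...
#
# and that pytest was run with --junitxml=results.xml so there's
# pass/fail evidence per test. IDs, marker name, and paths are made up.

import csv
import re
import xml.etree.ElementTree as ET
from pathlib import Path

# Matches a requirement marker followed by the test it decorates.
MARKER = re.compile(
    r'@pytest\.mark\.requirement\("([^"]+)"\)\s*def\s+(test_\w+)'
)

def scan_tests(test_dir):
    """Map requirement ID -> list of test names found in the source files."""
    mapping = {}
    for path in Path(test_dir).rglob("test_*.py"):
        for req_id, test_name in MARKER.findall(path.read_text()):
            mapping.setdefault(req_id, []).append(test_name)
    return mapping

def scan_results(junit_xml):
    """Map test name -> PASS/FAIL from a JUnit XML report."""
    results = {}
    for case in ET.parse(junit_xml).iterfind(".//testcase"):
        failed = case.find("failure") is not None or case.find("error") is not None
        results[case.get("name")] = "FAIL" if failed else "PASS"
    return results

def write_matrix(mapping, results, out_csv="traceability.csv"):
    """Write one row per requirement/test pair, flagging untested reqs."""
    with open(out_csv, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Requirement", "Test Case", "Result"])
        for req_id in sorted(mapping):
            for test in mapping[req_id]:
                w.writerow([req_id, test, results.get(test, "NOT RUN")])

if __name__ == "__main__":
    write_matrix(scan_tests("tests/"), scan_results("results.xml"))
```

Obviously this only covers requirement → test case → result, not the links out to the rest of the evidence artifacts, which is where the spreadsheets are still doing the work.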

Has anyone gone through ASIL certification at a startup without a compliance team already in place? Specifically curious how you handled the traceability documentation when the test suite existed but wasn't built with certification in mind from day one.

u/Deena_Brown81 — 6 hours ago

AI videos still feel off in 2026. Anyone seeing believable output or are we still 2 years out?

So I've been testing this stuff every few months hoping the quality finally catches up, and just last week I tried 4 different tools again.

The image-to-video tools still produce that floaty, weightless motion. Faces drift, hands do that thing. Fine for cinematic shots, useless for anything that's supposed to feel like a real person talking.

The avatar tools are closer, but most still have that "hostage video" energy lol. You can tell: the eye contact is off and the cadence is too even.

The only stuff I've seen that actually fooled me was when the tool was clearly trained on a long enough sample of the actual person, like 2-3 minutes of real footage, not a single photo. The gestures and weird verbal tics came through. One creator I follow on TikTok has been doing this for months, and I only realized last week because he mentioned it.

So my read is: text-to-video and image-to-video, still uncanny. Clone-from-actual-video, getting weirdly good.

Am I missing something? Anyone using these in production?

u/Deena_Brown81 — 1 day ago