u/Express_Meal_2002

MIR 7.3.1 became mandatory across the EU 12 days ago. Has anyone hit implementation issues yet?

The updated Manufacturer Incident Report form 7.3.1 went live on 1 May 2026. For most teams it probably looked like an administrative update on paper, but depending on how your vigilance reporting is set up, the operational impact could be more than a simple form swap.

The main things worth flagging for anyone still catching up:

The new form comes with revised XSD and XSL files, which means if you have any automated reporting pipelines or database integrations built around the old structure, those connections need to be reviewed and updated, not just the form itself.
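One cheap guardrail while reworking those pipelines is a pre-submission sanity check that generated reports actually target the new schema. A minimal sketch, using the standard library; the `formVersion` attribute and element names here are placeholders I made up for illustration, not the real MIR 7.3.1 schema identifiers:

```python
# Hypothetical sanity check: confirm a generated MIR XML file declares the
# expected schema version before it enters the submission pipeline.
# Attribute/element names are illustrative, not the real MIR schema.
import xml.etree.ElementTree as ET

EXPECTED_VERSION = "7.3.1"  # assumed version marker

def check_mir_version(xml_text: str) -> bool:
    """Return True if the report's root element carries the expected version."""
    root = ET.fromstring(xml_text)
    return root.get("formVersion") == EXPECTED_VERSION

sample = '<mir formVersion="7.3.1"><incident/></mir>'
print(check_mir_version(sample))  # True
```

In practice you'd validate against the published XSD itself (e.g. with lxml's `XMLSchema`), but even a version gate like this catches reports still being emitted by the old pipeline.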

The form itself is also password-protected, which adds a small but real friction point for teams trying to integrate it into internal systems.

The broader issue is that vigilance reporting rarely lives in one place. If your MDR or IVDR compliance process touches multiple systems (QMS, regulatory databases, internal workflows), a form-level change can ripple in ways that aren't immediately obvious until something breaks.

I'm curious whether anyone has already run into mismatches between the new format and existing system integrations, or whether competent authorities are giving any grace period in practice for minor non-conformances during the transition.

reddit.com
u/Express_Meal_2002 — 2 days ago

Most of the conversation in this sub around SaMD focuses on regulatory pathways, AI/ML submissions, and post-market obligations. But the thing that quietly kills a lot of SaMD projects — or at minimum creates serious downstream pain — is data governance. Not exciting, not a conference keynote topic, but absolutely foundational.

Recently came across a new edition of a SaMD-focused book with a chapter dedicated entirely to data governance and management, and it covers some areas I rarely see addressed clearly in one place:

The full data lifecycle, not just the training phase

Most teams think hard about data during model development and then treat governance as someone else's problem post-deployment. But the regulatory expectation increasingly covers the entire arc—from initial data creation and processing through to secure archiving and eventual deletion. If you can't demonstrate control over the full lifecycle, you have a traceability problem when regulators come knocking.

The out-of-distribution problem and data partitioning

This is one of those issues that's well understood in ML research circles but often poorly handled in actual SaMD development. If your training, validation, and test sets aren't rigorously partitioned, or if your real-world deployment population doesn't match your training distribution, you have an algorithmic bias problem that no amount of post-market surveillance will cleanly fix. Good to see it framed explicitly as a governance issue rather than just a data science issue.
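The most common partitioning failure I see is leakage at the subject level: records from the same patient landing in both train and test. A minimal sketch of one way to prevent it, assigning each patient (not each record) to exactly one split via a stable hash; the function and field names are illustrative, not from any standard:

```python
# Leakage-safe partitioning sketch: assign each *patient* to exactly one
# split via a stable hash, so the same patient can never appear in both
# train and test regardless of how many records they contribute.
import hashlib

def assign_split(patient_id: str, val_frac: float = 0.15, test_frac: float = 0.15) -> str:
    # Stable hash -> number in [0, 1); deterministic across runs and machines.
    h = int(hashlib.sha256(patient_id.encode()).hexdigest(), 16)
    u = (h % 10_000) / 10_000
    if u < test_frac:
        return "test"
    if u < test_frac + val_frac:
        return "val"
    return "train"

# Both of p001's records land in the same split by construction.
records = [("p001", "scan_a"), ("p001", "scan_b"), ("p002", "scan_c")]
splits = {pid: assign_split(pid) for pid, _ in records}
```

The nice property for a technical file is that the assignment is reproducible from the patient ID alone, so you can document the split as an algorithm rather than a frozen list.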

GDPR, EU AI Act, and EHDS in the same frame

For anyone building for the European market, the regulatory intersection here is genuinely complicated. GDPR obligations around health data, incoming EU AI Act requirements for high-risk AI systems, and the European Health Data Space framework all touch data governance but come from different legislative directions. Teams that treat these as separate compliance workstreams are setting themselves up for gaps.

Traceability as a first-class deliverable

The chapter apparently frames traceability documentation not as a box-ticking exercise but as the mechanism by which you actually prove safety and effectiveness to regulators. That framing feels right; if you can't trace a model's outputs back to the data it was trained on, the decisions made during preprocessing, and the validation approach used, you don't really have a defensible submission.
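One concrete way to make traceability a deliverable rather than a narrative is a machine-readable manifest binding a model version to content hashes of the exact data and preprocessing config it used. A minimal sketch; the field names are my assumptions, not any regulator's required schema:

```python
# Illustrative traceability manifest: ties a model version to SHA-256
# digests of its training artifacts, so any later audit can verify that
# the archived data is byte-identical to what the submission describes.
import hashlib
import json

def manifest_entry(name: str, content: bytes) -> dict:
    return {"artifact": name, "sha256": hashlib.sha256(content).hexdigest()}

manifest = {
    "model_version": "1.2.0",
    "artifacts": [
        manifest_entry("train_set.csv", b"...training data bytes..."),
        manifest_entry("preprocessing.json", b'{"normalize": "z-score"}'),
    ],
}
print(json.dumps(manifest, indent=2))
```

The point isn't this particular format; it's that a hash-anchored record is verifiable years later, which is exactly what "proving" rather than "asserting" traceability looks like.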

The question I'd put to the sub

For those who've been through an SaMD submission that involved ML, how did you handle the data partitioning documentation? Did your technical file include explicit evidence of train/val/test separation, or was that largely taken on faith by reviewers? And has anyone had a notified body or FDA reviewer push back specifically on data governance practices rather than just clinical performance metrics?

Feels like this is an area where the bar is quietly rising and a lot of teams are still operating on older assumptions.

u/Express_Meal_2002 — 8 days ago

Came across the FDA's denial of a petition that asked for 510(k)-cleared AI device manufacturers to be exempt from filing new submissions when launching subsequent AI products. The petitioners' argument was essentially: we've already proven the technology works; reduce the burden as we scale.

The FDA said no, and the reasoning is more nuanced than just "rules are rules."

The CADe/CADx/CADt distinction matters more than people think

The FDA pushed back hard on treating computer-aided detection, diagnosis, and triage devices as interchangeable. Each category carries different clinical use cases, different evidence requirements, and different risk profiles. A blanket exemption that ignores those distinctions would essentially let a manufacturer parlay one clearance into a whole portfolio without ever demonstrating clinical validity for the new intended use. That's a real problem.

"No reported adverse events" isn't the same as "no risk"

This is the part that stuck with me. The FDA explicitly stated that absence of reported adverse events does not equal absence of risk and that relying on user oversight alone is not sufficient to manage issues like model drift. That's a direct signal that the agency is thinking about post-market AI risk in a more sophisticated way than just waiting for something to go wrong.

Intended use changes are still changes

Even small shifts in intended use can meaningfully alter the benefit-risk profile, and the FDA is holding the line that each intended use gets its own assessment. For teams building AI device portfolios, that has real planning implications: you can't assume an existing clearance covers adjacent use cases just because the underlying model is similar.

The broader question I keep coming back to

Is the 510(k) framework actually equipped to handle AI device portfolios long-term, or are we just patching a framework designed for hardware onto something that behaves very differently? The PCCP pathway helps at the margins, but this denial suggests the FDA isn't ready to create structural shortcuts, at least not yet.

Has anyone been through a 510(k) for a second or third AI device in the same product family? Curious whether the review process treated the previous history as meaningful context or essentially started from scratch each time.

u/Express_Meal_2002 — 16 days ago