u/Admirable-Tip-5221

Assalamu alaikum. Building AQiDA.ai, a verification-first AI, looking for serious Muslim builders/reviewers

Assalamu alaikum,

I come from a core computer science background with an MBA and have spent a lot of my career doing AI product strategy.

For some time now, I have been heads-down building something at the math and architecture level. It's called AQiDA (Autonomous Quasi-Unitary Inference Differentiable Architecture — I know, I know, but the name stuck and it's scientifically accurate).

I want to share what I'm working on because I think there are people in this community who will immediately understand why it matters.

The problem that won't go away -

We have all seen the governance stacks people are putting on top of AI agents. You've got deterministic execution kernels, neuro-symbolic separation (LLM proposes, symbolic layer authorizes), guardrails, firewalls, audit logs, HITL escalation. It's way better than raw LLM agents. But if you talk to anyone who's deployed these at scale, they'll tell you the same thing - it still breaks in ways that are structural, not patchable.

· Agents have no persistent identity. They spin up, act across boundaries, and disappear. What exactly touched what? Under whose authority? There's no clean answer. A recent survey straight-up said "no current technology or regulatory instrument" solves this for nondeterministic, boundary-crossing entities.

· Audit trails are mostly transcripts. You captured the prompt and the output. Cool. But did a policy check actually run? Was the retrieval authorized? ISACA's 2026 guidance says that's not an audit trail. It's a transcript. You recorded what happened, not whether it should have.

· Human review degrades at volume. At some point, someone stops reading the JSON payloads and starts clicking Approve. Every governance team knows this. O'Reilly called it "alert fatigue turning governance into manual throughput management."

And then there is the real-world example that stuck with me. In March 2026, a Meta engineer asks an AI agent to analyze a forum post. The agent does its job, then autonomously posts its response. Within minutes, unauthorized engineers can see piles of internal sensitive user and company data. Two hours of exposure. Nobody hacked anything. The agent wasn't misused. It did exactly what it was designed to do.

What I am building -

Every governance approach I have seen treats the AI and the governance as two separate things that need to talk to each other. The neural bit proposes, the symbolic bit checks, the audit layer writes it down, the human reviews. That separation is the bottleneck. And at scale, it breaks.

So I went a different route. I'm building a system where computation and governance are the same physical process.

The intuition (without the math) -

Imagine every possible decision is a wave. Evidence that supports it makes the wave stronger. Evidence that contradicts it creates an opposite wave. When those two waves meet, they cancel each other out. Not metaphorically, but actually, mathematically, inside the computation.

That means -

· Contradiction isn't something a rule catches after the fact. It's a physical event. If an agent proposes something that violates a constraint, the waves cancel. Zero. Blocked. No policy engine needed.

· The audit trail isn't a log. It's a mathematical witness. An auditor can recompute it themselves and verify that the governance ran. You don't need to trust that someone remembered to turn it on.

· The system can fail honestly. If the evidence is messy or contradictory, the waves don't converge. The system says "I don't know". Not because a guardrail caught it, but because the math literally won't resolve. The bad output was never born.
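To make the cancellation intuition concrete, here is a toy sketch of my own (an illustration of the idea, not AQiDA's actual internals): each hypothesis accumulates evidence as complex amplitudes, supporting evidence in phase and contradicting evidence in opposite phase, so contradiction cancels inside the sum rather than being caught by a rule afterwards.

```python
import cmath

def decide(evidence, threshold=0.5):
    """Toy interference decision. `evidence` is a list of (weight, phase)
    pairs: supporting evidence has phase 0, contradicting evidence has
    phase pi. Contributions are summed as complex amplitudes, so support
    and contradiction cancel physically instead of being rule-checked."""
    amplitude = sum(w * cmath.exp(1j * phase) for w, phase in evidence)
    if abs(amplitude) < threshold:
        return "I don't know"  # waves cancelled: the output is never born
    return "accept" if abs(cmath.phase(amplitude)) < cmath.pi / 2 else "reject"

# Consistent support converges to a decision:
print(decide([(1.0, 0.0), (1.0, 0.0)]))        # accept
# Equal support and contradiction cancel to ~zero:
print(decide([(1.0, 0.0), (1.0, cmath.pi)]))   # I don't know
```

The point of the toy is the failure mode: when evidence is balanced and contradictory, there is no low-confidence guess to emit, because the resultant amplitude is simply too small to resolve.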

It's not an LLM with guardrails.

It's a different foundation, where governance isn't a layer you add.

It's a property that emerges from how the computation itself works.
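One way to read the "mathematical witness" idea, sketched in ordinary code (my own hedged illustration, not AQiDA's actual scheme): instead of logging that a check ran, the system emits a commitment the auditor can recompute from the same inputs. If the recomputation matches byte-for-byte, the check provably executed over exactly those inputs.

```python
import hashlib
import json

def run_with_witness(inputs, policy_version, check):
    """Run a governance check and emit a recomputable witness: a hash
    over the inputs, the policy version, and the verdict. An auditor who
    re-runs `check` on the same inputs recomputes the same digest, so
    verification replaces trust in a log entry."""
    verdict = check(inputs)
    payload = json.dumps(
        {"inputs": inputs, "policy": policy_version, "verdict": verdict},
        sort_keys=True,  # canonical serialization, so hashes are stable
    ).encode()
    return verdict, hashlib.sha256(payload).hexdigest()

# Auditor side: recompute independently and compare.
check = lambda x: x["amount"] <= 100
verdict, witness = run_with_witness({"amount": 42}, "policy-v7", check)
_, recomputed = run_with_witness({"amount": 42}, "policy-v7", check)
assert witness == recomputed  # the audit trail verifies, not just records
```

A transcript records what happened; a witness like this lets a third party confirm the check itself was run, which is the distinction the ISACA-style criticism above is pointing at.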

Where things stand (public-safe version) -

I'm keeping the internals private for now (IP), but I can share what's been built and verified under clean protocols:

· Equation discovery. AQiDA solved a known symbolic-regression benchmark exactly (100%) across 100 runs, using only 20 training points, with zero leakage. The solver never saw the target. The published SOTA on that benchmark is 61%.

· Simulation repair. On a physics benchmark (Darcy Flow), AQiDA improved a baseline neural model's held‑out error from 0.184 → 0.0787, beating all classical and learned controls. No spatial smoothing. Just signal‑cancellation correction.

· Signal search. Early Costas array results: zero collisions, up to 6 dB better sidelobe suppression than published baselines. Cleaner signals, basically.
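For readers unfamiliar with Costas arrays: the "zero collisions" claim is mechanically checkable, because a permutation is a Costas array exactly when every displacement vector between pairs of dots is distinct. A minimal checker (my own sketch of the standard definition, nothing AQiDA-specific):

```python
def is_costas(perm):
    """A permutation is a Costas array iff every displacement vector
    (dx, dy) between pairs of dots (i, perm[i]) is distinct; a repeated
    vector is exactly a 'collision' in the autocorrelation sense."""
    n = len(perm)
    seen = set()
    for i in range(n):
        for j in range(i + 1, n):
            vec = (j - i, perm[j] - perm[i])
            if vec in seen:
                return False  # collision: two dot pairs share a displacement
            seen.add(vec)
    return True

print(is_costas([0, 1, 3, 2]))  # True: all 6 displacement vectors distinct
print(is_costas([0, 2, 1, 3]))  # False: (2, 1) appears twice
```

Distinct displacements are what give Costas arrays their thumbtack-like autocorrelation, which is why zero collisions translates into cleaner sidelobe behavior.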

I'm not claiming any broad SOTA. These are narrow, honest signals that the approach works.

Why I'm posting here -

Muslim developers should be building app-layer things — prayer apps, halal marketplaces, Islamic chatbots — absolutely.

But a few of us also need to be working on the fundamental stuff underneath: verification, safety, formal reasoning, new AI architectures. The communities doing that (Muslims in ML, Muslamic Makers) are growing, and AQiDA is my attempt to contribute to that side of the ecosystem.

I'm looking for…

Honest reviewers, curious builders, skeptical engineers. If you're into PyTorch/JAX, scientific ML, PDEs, signal processing, MLOps, AI governance, or turning research into real products, I'd love to hear from you. I'm also open to paid consulting work (AI strategy, GenAI governance, RAG evaluation) while I keep building.

One question for the group -

A lot of us have seen governed agentic AI up close. Have you hit these structural limits yourself? Where do you think the verification/governance problem is actually going over the next few years?

Drop a comment or DM if any of this resonates. Even if you just want to follow along, I'd appreciate it.

Jazakum Allahu khairan.

May Allah put barakah in work that benefits people and protects them from harm.

u/Admirable-Tip-5221 — 2 days ago

Assalamu alaikum. Building AQiDA.ai, a verification-first AI, looking for serious Muslim builders/reviewers

Assalamu alaikum,

I am building an independent, first-principles, novel-math-based foundational AI project called AQiDA.

AQiDA = Autonomous Quasi-Unitary Inference Differentiable Architecture.

In plain English - I am trying to build an AI system that does not just guess confidently. The goal is to build AI that tests possible answers, keeps track of contradictions, and only makes a claim when the result can be checked.

Think of it like this - every possible answer is a signal. Good answers keep matching the evidence. Bad answers collide with the evidence and get rejected. AQiDA is my attempt to make AI earn its claims instead of just sounding convincing.

Why this matters - in science, healthcare, engineering, finance, and industry, a confident black-box answer is NOT enough. These fields need AI that can be tested, audited, challenged, and improved.

I am keeping the core implementation private for now because there may be IP involved, but I can share the public-safe status.

1. Symbolic equation discovery
AQiDA recovered a known symbolic-regression benchmark equation exactly across 100 runs under a no-leakage protocol. The key point is not just the score; it's the discipline: the solver never received the target answer, and the final claim passed a verifier boundary.

2. PDE / simulation repair
AQiDA produced a scientific simulation-repair result where the latest held-out run beat local learned controls. I am not claiming broad SOTA yet; the honest claim is narrower: it produced a real technical signal on a hard scientific-modeling problem.

3. Costas / signal-search optimization
AQiDA has early results in Costas/radar-style signal search, producing valid arrays with 0 collisions and strong sidelobe metrics. In simple terms, this points toward cleaner signal patterns, which can matter in sensing, communications, and optimization.
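The "verifier boundary" discipline from point 1 can be sketched generically (my own illustration; the target equation, tolerance, and held-out points here are made up for the example): the proposer never sees the held-out targets, and the claim is only issued if an independent check passes on them.

```python
import math

def verifier_boundary(candidate, held_out, tol=1e-9):
    """Accept a proposed equation only if it reproduces every held-out
    point within tolerance. The proposer never sees these targets; the
    claim 'solved exactly' is issued by this check, not by the solver."""
    return all(abs(candidate(x) - y) <= tol for x, y in held_out)

# Hypothetical target the solver was searching for: y = x * sin(x)
held_out = [(x, x * math.sin(x)) for x in (0.1, 1.7, 3.2)]

exact = lambda x: x * math.sin(x)       # a correct candidate passes
approx = lambda x: x * (x - x**3 / 6)   # a Taylor-style guess fails
print(verifier_boundary(exact, held_out))   # True
print(verifier_boundary(approx, held_out))  # False
```

The design point is that "solved" is a property granted by the boundary, not a score the solver reports about itself, which is what separates a verified claim from a confident one.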

The bigger mission is this:

Muslim developers should not only build app-layer products. Those are valuable, but some of us should also work at the foundations: AI architecture, verification, safety, scientific reasoning, math, and high-stakes systems.

I have been encouraged seeing communities like Muslims in ML and Muslamic Makers grow. My hope is for AQiDA to contribute to the verification and scientific-reasoning side of that ecosystem.

I am looking for serious people in a few areas:

  • PyTorch / JAX / numerical computing
  • symbolic regression / scientific ML / PDEs
  • signal processing / optimization
  • MLOps, benchmarking, reproducibility
  • AI governance, evaluation, and safety
  • product/founder/operators who can help turn research into demos and products
  • warm intros to ethical grants, labs, advisors, funders, or paid AI consulting opportunities

For immediate practical reality, I am also open to paid AI product strategy, GenAI governance, RAG/evaluation, and applied AI consulting work while building AQiDA.

I am not asking anyone to believe hype.
I am looking for serious Muslims who can review, challenge, build, and help turn this into something useful.

Question for the group:

Should Muslim developers focus mainly on app-layer products, or should more of us also work on the foundations of AI, verification, safety, math, and scientific computing?

If you are interested, please comment or DM with:

  1. your technical background,
  2. which area you can help with,
  3. whether you are interested as a collaborator, reviewer, advisor, operator, intro source, or just want to follow the work.

Jazakum Allahu khairan.
I would love to connect with Muslim builders who care about AI that is not just powerful, but testable, truthful, and beneficial.
