u/EreNN_42

▲ 6 · r/ScientificComputing +2 crossposts

I built an open-source ML pipeline for lithium-ion cathode screening — looking for feedback

Hi everyone,

I’ve been working on an open-source machine learning pipeline for lithium-ion battery cathode screening:

https://github.com/ErenAri/CathodeX

The goal is not to replace DFT, but to act as a pre-screening layer before expensive DFT validation. The system predicts energy above hull (E_hull) for candidate cathode materials and classifies them into KEEP / MAYBE / KILL decisions based on uncertainty-aware thresholds.

Current technical direction:

- 5-member MACE-MP-0 fine-tuned ensemble

- CHGNet and CGCNN fallback support

- E_hull prediction for transition metal oxide cathode candidates

- Quantile outputs: q10 / q50 / q90

- Epistemic + aleatoric uncertainty estimation

- Conformal calibration for prediction intervals

- SOAP-LOCO-style validation to test generalization to structurally different materials

- Automated governance checks for ranking, calibration, false-kill rate, KEEP precision, and decision validity

- FastAPI backend + Next.js frontend

- DFT verification workflow direction using Quantum ESPRESSO
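To make the decision logic concrete, here is a rough sketch of how a conformal-calibrated quantile interval can drive KEEP / MAYBE / KILL. This is illustrative, not the repo's exact code: the function names and the 0.05 eV/atom stability cutoff are placeholders, and the margin is a basic split-conformal (CQR-style) adjustment.

```python
import math

def conformal_margin(cal_true, cal_q10, cal_q90, alpha=0.1):
    """Split-conformal margin for a (q10, q90) interval (CQR-style):
    the (1 - alpha) quantile of how far calibration targets fall
    outside the raw interval. A positive margin widens the interval."""
    scores = sorted(max(lo - y, y - hi)
                    for y, lo, hi in zip(cal_true, cal_q10, cal_q90))
    k = math.ceil((1 - alpha) * (len(scores) + 1)) - 1
    return scores[min(k, len(scores) - 1)]

def decide(q10, q50_unused, q90, margin, e_hull_cut=0.05):
    """KEEP if even the calibrated upper bound sits below the stability
    cutoff; KILL if the calibrated lower bound is already above it;
    otherwise MAYBE (escalate to DFT / expert review)."""
    lo, hi = q10 - margin, q90 + margin
    if hi < e_hull_cut:
        return "KEEP"
    if lo > e_hull_cut:
        return "KILL"
    return "MAYBE"
```

The point of the three-way split is that KEEP and KILL are only issued when the *calibrated* interval clears the threshold on both sides, so decisions degrade to MAYBE as uncertainty grows.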

The repository currently reports strong in-distribution test metrics, but also clearly shows a major limitation: LOCO generalization is much weaker. I’m trying to make the project honest about where the model is useful and where it should not be trusted without additional validation.
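For anyone unfamiliar with LOCO: the idea is that each test fold is an entire structural cluster (e.g. from SOAP-descriptor clustering) that the model never saw in training, which is why it is a much harsher test than a random split. A minimal sketch, assuming cluster labels are already assigned:

```python
from collections import defaultdict

def loco_splits(cluster_labels):
    """Leave-one-cluster-out: for each structural cluster, hold out
    every member for testing and train on all remaining clusters.
    Yields (held_out_cluster, train_indices, test_indices)."""
    groups = defaultdict(list)
    for i, g in enumerate(cluster_labels):
        groups[g].append(i)
    for g, test_idx in groups.items():
        train_idx = [i for i in range(len(cluster_labels))
                     if cluster_labels[i] != g]
        yield g, train_idx, test_idx
```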

I would especially appreciate feedback on:

  1. Whether the validation methodology is strict enough

  2. Whether the KEEP / MAYBE / KILL policy is scientifically reasonable

  3. Whether the uncertainty and calibration story is convincing

  4. What would make this more useful for actual computational materials researchers

  5. Whether the README communicates the limitations clearly enough

This is not a claim of discovering DFT-verified new cathodes yet. It is an open-source screening and model-governance pipeline intended to reduce the candidate space before deeper simulation or expert review.

Any criticism from materials science, computational chemistry, battery research, or scientific ML people would be very useful.

cathode-screening.vercel.app
u/EreNN_42 — 3 days ago
▲ 2 · r/QuantumComputing +1 crosspost

Built an open-source hybrid post-quantum messaging prototype — looking for protocol/security feedback

Hi everyone,

I’ve been working on an open-source research prototype for hybrid post-quantum asynchronous messaging:

https://github.com/ErenAri/post-quantum-messaging-app

The project is not intended to be a production-ready Signal replacement. The goal is protocol engineering and security validation: explicit wire formats, strict parser behavior, fail-closed client logic, reproducible tests, fuzzing, formal-modeling direction, and a clearer support boundary around what is and is not currently safe to use.
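By "strict parser behavior, fail-closed" I mean roughly the following shape (an illustrative sketch, not the project's actual wire format; the header layout, suite IDs, and names here are made up for the example):

```python
import struct

# (version, suite id) pairs the client explicitly supports — illustrative values
SUPPORTED = {(1, 0x01), (1, 0x02)}

class RejectMessage(Exception):
    """Fail-closed: anything not explicitly supported is rejected."""

def parse_header(buf):
    """Strict fixed-layout header: 1-byte version, 1-byte suite id,
    2-byte big-endian payload length. Truncated input, unknown
    version/suite, or a length mismatch all reject rather than
    falling back to a weaker interpretation."""
    if len(buf) < 4:
        raise RejectMessage("truncated header")
    version, suite, length = struct.unpack(">BBH", buf[:4])
    if (version, suite) not in SUPPORTED:
        raise RejectMessage(f"unsupported version/suite {(version, suite)}")
    if len(buf) - 4 != length:
        raise RejectMessage("length field does not match payload")
    return version, suite, buf[4:]
```

The design choice is that there is no "best effort" path: every branch either returns a fully validated message or raises, which is what makes downgrade-style tampering loud instead of silent.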

Current design direction:

- Hybrid initiation path using classical X25519 plus an ML-KEM-family post-quantum KEM

- PQXDH-style session setup

- One-time prekey consumption for replay resistance

- Hybrid identity authentication direction with Ed25519 + ML-DSA

- Minimal ratcheting channel with authenticated suite/version continuity

- Rust core with Android/web/desktop experimentation surfaces

- Documentation for threat model, wire format, API, crypto agility, deployment, and verification
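The core of the hybrid composition is a concatenation combiner of the kind PQXDH-style hybrids use. This sketch is illustrative only (a minimal HKDF-SHA256, with placeholder labels — it is not the repo's actual KDF schedule): the session key is derived from both shared secrets plus a transcript hash, so an attacker has to break both the classical and the post-quantum component.

```python
import hashlib
import hmac

def hkdf(key_material, info, length=32, salt=b"\x00" * 32):
    """Minimal HKDF-SHA256: extract, then a single expand block."""
    prk = hmac.new(salt, key_material, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def hybrid_secret(x25519_ss, mlkem_ss, transcript):
    """Concatenation combiner: the output depends on BOTH shared
    secrets and on a transcript binding the public values, so
    compromising only one KEM does not recover the session key."""
    return hkdf(x25519_ss + mlkem_ss, b"hybrid-v1" + transcript)
```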

Important note: I have currently paused the hosted backend that powered the public web demo, so the live demo may not be functional right now. The repository, documentation, and local development flow are still available. I am sharing this mainly for technical feedback, not as a consumer app launch.

I would especially appreciate feedback on:

  1. Whether the protocol boundaries and threat model are clear enough

  2. Whether the hybrid PQ/classical composition is documented in a reviewable way

  3. Whether the fail-closed behavior and support matrix are understandable

  4. What security or verification gaps should be prioritized next

  5. Whether the README communicates “research prototype” clearly enough without overclaiming

I’m aware that cryptographic applications should not be trusted without serious review. This is why I’m trying to make the project easier to inspect, criticize, and improve before making stronger claims.

Thanks for any technical feedback.

u/EreNN_42 — 3 days ago
▲ 10 · r/eBPF +5 crossposts

eBPF LSM runtime security agent for synchronous file/network denial — looking for technical feedback

I’m working on Aegis-BPF, an open-source Linux runtime security project built around eBPF LSM.

The goal is narrow: explore enforcement-first runtime security, where selected file and network operations can be denied before syscall completion, rather than only emitting post-event telemetry.

Current scope:

- BPF LSM-based file/network policy decisions

- cgroup-scoped policy

- OverlayFS/copy-up handling

- audit-mode fallback when enforcement is unavailable

- Prometheus metrics

- Kubernetes/Helm deployment path
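To show what "enforcement-first with audit fallback" means at the decision level, here is a userspace model of the flow (a sketch only — this is not the BPF program itself, and the cgroup paths, rule shape, and names are placeholders): look up cgroup-scoped rules, deny synchronously when BPF LSM enforcement is available, otherwise log without blocking.

```python
def evaluate(policy, cgroup, op, target, enforcement_available=True):
    """Model of the agent's decision flow: rules are scoped per cgroup
    as (operation, target-prefix) pairs. A match is denied when BPF LSM
    enforcement is available; in audit-mode fallback the same match is
    only logged ("audit"), never blocked. No match means allow."""
    rules = policy.get(cgroup, [])
    for rule_op, prefix in rules:
        if op == rule_op and target.startswith(prefix):
            return "deny" if enforcement_available else "audit"
    return "allow"
```

Keeping the deny/audit distinction in one place is deliberate: the policy match is identical in both modes, so audit mode exercises exactly the rules that enforcement mode would apply.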

I’m not claiming it is a production-ready replacement for Falco, Tetragon, or KubeArmor. I’m treating it as a focused enforcement model project and looking for criticism from people who understand eBPF, Linux security, or container runtime edge cases.

Main feedback I’m looking for:

- Are the hook choices reasonable?

- What enforcement edge cases am I probably missing?

- What would make the failure-mode model more trustworthy?

- What tests would you expect before taking this seriously?

- Are there obvious problems with cgroup-scoped policy or OverlayFS handling?

Repo:

https://github.com/ErenAri/Aegis-BPF

Technical criticism is more useful than general encouragement.

u/EreNN_42 — 5 days ago