There is a weird pattern forming in AI antibody design right now.
A small group of closed-source generative biology companies is raising huge amounts of money, publishing very impressive hit rates, and claiming major jumps over public methods. But almost all of the evidence still comes from preprints, technical reports, company announcements, or company-controlled benchmarks.
The main players I’m thinking about are: Absci, Chai Discovery, Latent Labs, and Nabla Bio.
| Company | Funding reported | Latest public system | Reported results |
|---|---|---|---|
| Absci | Public company. Reported ~$230M pre-IPO funding, then ~$230M IPO. | Origin-1 | Validated antibodies for 4 human protein targets from a 10-target zero-prior epitope panel. Fewer than 100 designs per target. Cryo-EM validation for COL6A3 and AZGP1 at 3.0-3.1 Å, with DockQ 0.73-0.83. IL36RA matured into a functional antagonist with 104 nM potency. |
| Chai Discovery | >$225M total funding, latest round $130M Series B. | Chai-2 | Earlier Chai-2 paper reported a 16% hit rate in fully de novo antibody design across 52 targets. The newer Chai-2 work reports full-length IgG design, >86% developability-like profiles, cryo-EM validation of multiple complexes, and strong results on difficult target classes like GPCRs and pMHCs. |
| Latent Labs | $50M total funding, including $40M Series A. | Latent-Y, powered by Latent-X2 | Latent-X2 reported VHH/scFv binders against 9 of 18 targets, testing 4-24 designs per target. Latent-Y later reported autonomous design campaigns producing lab-confirmed nanobody binders against 6 of 9 targets, with affinities reaching single-digit nM. |
| Nabla Bio | Nearly $37M total funding, latest round $26M Series A. | JAM-2 | Reported binders across 16 unseen targets, with 100% target coverage. Average reported success rates were 39% for VHH-Fcs and 18% for mAbs, using up to 45 designs per format per target. |
To be clear, I don’t think this means the work is fake. Some of this is clearly technically impressive, and the wet-lab validation is legit.
But it is getting harder to separate real progress from generative biology hype.
These are all closed-source models. The weights are not public. The models are not independently benchmarked. The failure modes are not fully visible. The target selection, filtering pipelines, assay definitions, and success criteria are usually controlled by the same companies reporting the results.
So when one company reports a per-design hit rate, another reports target-level success, another reports developability after filtering, and another reports only a selected campaign, are we really comparing models? Or are we comparing narratives?
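To see how much the choice of metric matters, here is a quick sketch with entirely made-up campaign numbers (not from any company's report): the same underlying data supports three very different headline figures depending on whether you pool designs, count targets, or average per-target rates.

```python
# Hypothetical campaign data: target -> (designs_tested, designs_that_bound).
# All numbers are invented for illustration only.
campaigns = {
    "TargetA": (24, 5),
    "TargetB": (24, 0),
    "TargetC": (10, 1),
    "TargetD": (45, 2),
}

designs = sum(n for n, _ in campaigns.values())   # 103 total designs
hits = sum(h for _, h in campaigns.values())      # 8 total binders

# Metric 1: pooled per-design hit rate (what a "16% hit rate" usually means)
per_design_hit_rate = hits / designs

# Metric 2: target-level success (what "binders against 6 of 9 targets" means)
target_level_success = sum(h > 0 for _, h in campaigns.values()) / len(campaigns)

# Metric 3: average of per-target hit rates (weights small campaigns heavily)
avg_per_target_rate = sum(h / n for n, h in campaigns.values()) / len(campaigns)

print(f"per-design hit rate:  {per_design_hit_rate:.0%}")   # 8/103  -> 8%
print(f"target-level success: {target_level_success:.0%}")  # 3/4    -> 75%
print(f"avg per-target rate:  {avg_per_target_rate:.0%}")   #        -> 9%
```

The same hypothetical lab results read as "8% hit rate" or "75% target coverage" depending on the chosen denominator, which is exactly why cross-company comparisons of headline numbers are shaky.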
The key question is not whether these systems can generate binders. They clearly can.
The question is whether they are producing real therapeutic candidates that survive specificity, developability, immunogenicity, manufacturability, in vivo biology, safety, and clinical translation. That part is still much less proven publicly.
This is where I think generative biology might be entering a mini-bubble. Not because the models are useless, but because the public claims are starting to sound much more mature than the public evidence.
It reminds me of binder design competitions where the headline can look like "generative design is solved," but the actual strategy is redesigning around known positive controls, optimizing for a benchmark, or picking assay-friendly target setups. Useful work, but not truly de novo generative design.
Isomorphic Labs probably belongs in the broader conversation too, but I would separate it from this table because IsoDDE is more of a broad proprietary drug-design engine than a direct de novo antibody hit-rate model.
My current view: these models may be genuinely important, but the field needs independent benchmarking, peer review, disclosed failures, and real candidate progression before we treat the highest reported hit rates as proof that therapeutic design is close to solved.
We may be having our first major hype cycle in this specific space.