Will BTC become vulnerable to quantum computing within the next 10 years?
Google says it wants to combine quantum computing, AI, and biology through a new initiative called REPLIQA, but the $10 million investment feels surprisingly small for a company of its size. Split across five universities, it almost comes off more like a cautious science experiment than a massive commitment to the future of medicine. Still, the idea of using quantum systems to model proteins, enzymes, and drug interactions is pretty fascinating if it ever becomes practical.
I might be wrong here, but do you guys think this constant metric of usefulness based on quantum advantage/speedup is slowing down progress in quantum algorithm development? We don't know the full boundary of what can efficiently be run on a quantum computer. Shouldn't the space focus on creating more "quantum" algorithms that get you to an answer, and reward them equally? This obsession with speedup seems to discourage creativity. Shouldn't coming up with creative quantum algorithms be rewarded and encouraged regardless of speedup?
Like, what if some of those "slower" algorithms have features or structures that, when combined in a certain way, actually unlock quantum advantage? You'd never know if you dismissed them early.
I'm not saying speedup doesn't matter. I'm saying what if we're treating it as a necessary condition when it's really just a sufficient one. No?
I found a serious, embarrassing error in a highly cited Nature Computational Science paper on quantum machine learning:
>The power of quantum neural networks (QNNs) https://www.nature.com/articles/s43588-021-00084-1
It goes like this: in Fig. 3b, the authors claim a training advantage of QNNs over classical neural networks (NNs) on the Iris dataset. I checked the GitHub repo and noticed that I am apparently not the first person to find the classical baseline suspicious. Someone already opened an issue pointing out that the authors used a strange classical NN architecture:
Screenshot of the open issue, from 2022
For an 8-parameter classical NN, the authors use 4 layers with neurons 4->1->1->1->2. This means the 4-dimensional Iris input is immediately compressed into one scalar. That is an extremely poor classical baseline.
Actually, the simplest classical NN baseline one can think of — a single linear layer from 4 inputs to 2 outputs — already has 8 parameters, as pointed out in the pull request.
The ridiculous 4->1->1->1->2 classical NN definition in the Nature paper's code
So I tried the same experiment using the original GitHub code: https://github.com/amyami187/effective_dimension/blob/master/Loss_plots/generate_data/classical_loss.py, but changed the definition of the classical NN to 4->2.
After this change, the classical NN converges much faster and reaches much lower loss than the quantum NN. So the training advantage shown in original Fig. 3b collapses completely once the classical baseline is changed to the obvious 8-weight linear layer.
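To make the parameter counting concrete, here is a minimal sketch (not the repo's code, just an illustration assuming PyTorch) of the two 8-parameter classical baselines. Biases and activation functions are omitted, since the 8-parameter count only works out without biases, and the 1-dimensional bottleneck is the structural problem either way.

```python
# Illustrative sketch only: parameter counting for the two classical baselines.
# Not the authors' training setup; biases and activation functions are omitted.
import torch.nn as nn

# The paper's baseline: 4 -> 1 -> 1 -> 1 -> 2. The 4-d Iris input is squeezed
# through a single scalar after the first layer (4 + 1 + 1 + 2 = 8 weights).
bottleneck = nn.Sequential(
    nn.Linear(4, 1, bias=False),
    nn.Linear(1, 1, bias=False),
    nn.Linear(1, 1, bias=False),
    nn.Linear(1, 2, bias=False),
)

# The obvious baseline: a single linear layer 4 -> 2 (4 * 2 = 8 weights),
# which keeps all four input features available to both outputs.
linear = nn.Linear(4, 2, bias=False)

for name, model in [("4->1->1->1->2", bottleneck), ("4->2", linear)]:
    print(name, sum(p.numel() for p in model.parameters()))  # both print 8
```

Both models have exactly 8 trainable weights, so swapping one for the other is a like-for-like comparison in parameter count; the only thing that changes is whether the input gets crushed down to a scalar.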
This is not a subtle quantum ML issue. This is basic ML benchmarking. The claimed “advantage” appears to come from comparing the QNN against an extremely weak classical NN, a ridiculous baseline that would be unacceptable even in an undergraduate ML final project.
Since this is the ONLY experiment in the paper supporting the claim, I believe this is a serious issue and a retraction should be discussed.
The codebase is public, so everyone can try it: https://github.com/amyami187/effective_dimension
I can now truly feel “the power” of quantum neural networks!
Hi, I'm a 13-year-old Belgian student curious about how quantum computing works and how different qubits are from bits. I'm not trying to sound smart or anything, I'm just curious how it works. I've tried to do research, but it's all too complicated for me.
Can somebody explain it to me in a less overwhelming way, please?
Thanks!
IBM, Cleveland Clinic, and RIKEN say they simulated a 12,635-atom protein (trypsin) using a hybrid quantum + classical approach, which is way beyond the tiny toy systems we usually hear about. They split the workload so classical supercomputers handle decomposition while quantum processors (up to ~94 qubits) tackle the hard quantum chemistry pieces, then stitch it back together. The scaling jump from ~10 atoms to 12k in a short time is wild, and they claim big accuracy gains too. That said, this still feels like early-stage hybrid HPC doing most of the heavy lifting, not quantum replacing anything yet, but it does look like quantum might finally be inching toward problems that actually matter for drug discovery.
As I understand it, one of the main advantages of superconducting quantum computers would be breaking RSA and ECC encryption, so what kind of specs would a superconducting quantum system need to achieve that?
How difficult would it be for a superconducting system to operate for longer times, like seconds?
Are there any tech advancements to overcome these decoherence challenges ?
Thanks.
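One widely cited reference point for the first question is Gidney & Ekerå (2019), who estimate that factoring RSA-2048 would take roughly 20 million noisy physical qubits running for about 8 hours, assuming ~10^-3 gate error rates and surface-code error correction; in that regime the computation is carried by error correction rather than raw qubit coherence. A very rough back-of-envelope sketch of where numbers of that order come from (every constant below is an illustrative assumption, not a careful resource analysis):

```python
# Very rough, illustrative back-of-envelope for Shor's algorithm on RSA-n with a
# surface-code machine. All constants are assumptions for illustration only;
# detailed estimates (e.g. Gidney & Ekera 2019) use far more careful models and
# land around 20 million physical qubits once routing and magic-state
# distillation are included.
n = 2048                                        # RSA modulus size in bits
logical_qubits = 3 * n                          # order-of-magnitude logical qubit count for Shor
code_distance = 27                              # assumed surface-code distance at ~1e-3 physical error
physical_per_logical = 2 * code_distance ** 2   # rough surface-code overhead per logical qubit

physical_qubits = logical_qubits * physical_per_logical
print(f"~{logical_qubits} logical qubits -> ~{physical_qubits / 1e6:.1f} million physical qubits")
```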
For people actually working with quantum hardware or simulators: what's the biggest gap between what you can do today and what you actually need? Is it qubit count, error rates, software tooling, something else?
You keep hearing about breakthroughs and new hardware, but at the same time it’s still not clear what actually works in practice. The gap between what sounds possible and what you can really run is still huge. What’s interesting is that a lot of the progress now feels less about big claims and more about error correction, stability, and making systems usable at all.
Curious how people here see it. Are we actually getting close to something practical, or is most of the hype still ahead of reality?
Good enough for double-blind IEEE QCNC 2026 proceedings:
https://www.ieee-qcnc.org/2026/accepted-papers.php
Now live on IEEE Xplore:
https://doi.org/10.1109/QCNC69040.2026.00181
…but not good enough for arXiv moderation apparently :P
Here’s the Zenodo stats json since we can’t post those links anymore lol.
For people who want the “Lupe Fiasco - Dumb It Down.mp3” version, here’s the conference presentation:
https://www.youtube.com/watch?v=da7NVwOvy6Y
```
curl -i "https://[bad repository!!!]/api/records/19468197" | tail -n 1 | python -m json.tool | tail -n 16
    "stats": {
        "downloads": 2429,
        "unique_downloads": 2303,
        "views": 1153,
        "unique_views": 1100,
        "version_downloads": 18,
        "version_unique_downloads": 18,
        "version_unique_views": 22,
        "version_views": 23
    },
    "status": "published",
    "submitted": true,
    "swh": {},
    "title": "A Clean 2D Floquet Logical Qubit from a Purely Imaginary Phase Drive",
}
```
Baez Crackpot Index Current Score: 35
I'm genuinely curious…
USA? CHINA? RUSSIA?
OR
PRIVATE COMPANIES NOT ASSOCIATED WITH ANY SPECIFIC “COUNTRY”?
What would be the best way, whether through connections or by applying directly, to find a good internship as a postgraduate student? Which topics should I consider for a thesis, and what projects should I build? What else should I learn?
I'm an undergrad doing research and I'm aiming to present some work at a conference sometime closer to winter. Obviously it's an uphill battle as an undergrad to get even an internship in QC, but I was just curious what people's experiences were with meeting recruiters and having that convert into j*b offers in QC? Or networking in general.
Been deep in QML literature lately and wanted to write up what I actually found vs. what gets hyped. Curious if the community agrees or pushes back.
Where things seem to actually stand:
Barren plateaus are still the core trainability problem. Local cost functions and layerwise training help but don't fully solve it (see the rough numerical sketch further down).
QRAM remains the data-loading wall. Without efficient quantum RAM, classical-to-quantum input kills most theoretical speedups before they start.
The one peer-reviewed practical QML advantage I found (early 2026) is Tindall et al. on spatiotemporal chaos prediction in Science Advances. Physics-flavored task, not general ML.
Quantum reservoir computing looks genuinely promising for temporal sequence tasks specifically.
My takeaway: QML has real potential in narrow physics-adjacent tasks but no generic ML advantage yet. The gap between theoretical speedup and practical implementation is still large.
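To make the barren plateau point a bit more concrete, here is a rough numerical sketch, assuming PennyLane is installed; the ansatz, observable, and constants are illustrative choices, not taken from any particular paper. The idea is simply that the variance of a gradient component of the cost shrinks rapidly as the qubit count grows, which is what makes training hard in practice.

```python
# Rough barren-plateau illustration: estimate the variance of one gradient
# component of a random parameterized circuit as the number of qubits grows.
# Illustrative sketch only; assumes PennyLane ("default.qubit" simulator).
import pennylane as qml
from pennylane import numpy as pnp

def grad_variance(n_qubits, n_layers=5, n_samples=50):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def cost(params):
        # Hardware-efficient-style ansatz: RY rotations plus a CNOT ladder per layer.
        for layer in range(n_layers):
            for w in range(n_qubits):
                qml.RY(params[layer, w], wires=w)
            for w in range(n_qubits - 1):
                qml.CNOT(wires=[w, w + 1])
        return qml.expval(qml.PauliZ(0) @ qml.PauliZ(n_qubits - 1))

    grad_fn = qml.grad(cost)
    samples = []
    for _ in range(n_samples):
        params = pnp.array(
            pnp.random.uniform(0, 2 * pnp.pi, size=(n_layers, n_qubits)),
            requires_grad=True,
        )
        # Gradient with respect to the very first rotation angle.
        samples.append(float(pnp.array(grad_fn(params)).flatten()[0]))
    return pnp.var(pnp.array(samples))

for n in [2, 4, 6, 8]:
    print(n, grad_variance(n))  # the variance drops quickly as n grows
```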
What am I getting wrong? Any recent results I should look at?
Weekly Thread dedicated to all your career, job, education, and basic questions related to our field. Whether you're exploring potential career paths, looking for job hunting tips, curious about educational opportunities, or have questions that you felt were too basic to ask elsewhere, this is the perfect place for you.
Are we overreacting to the risks associated with quantum computing, underreacting, or managing them appropriately?