u/unteachablecourses

▲ 8 r/UnteachableCourses+1 crossposts

A 2023 Nature Eco & Evo review found the wood wide web's central claims are "largely disconnected from evidence" — but the actual science of fungal cognition is arguably more interesting than the debunked narrative

The wood wide web became one of the most successful science communication stories of the century. Cooperative forests. Mother trees nurturing offspring through underground fungal networks. Trees sharing resources and sending warnings. Avatar, The Last of Us, a NYT bestselling memoir. It fundamentally changed how a generation understood forests.

Then in February 2023, Karst, Jones, and Hoeksema — three mycorrhizal ecologists with decades of combined field experience — published a systematic evaluation in Nature Ecology & Evolution and found the core claims largely unsupported.

They evaluated three claims. First, that common mycorrhizal networks are widespread and persistent in forests. With current technology, it's difficult to confirm continuous, non-transient fungal connections between trees in the field. DNA sequencing of fungal networks had been achieved in only five field studies, on limited ranges of fungi and tree species. The networks may exist, but their prevalence and permanence haven't been established.

Second, that resources transfer through these networks in ways that boost seedling growth. In the best-controlled experiments, fewer than 20% showed connected seedlings performing better than disconnected ones. In the remaining 80%, connected seedlings performed the same or worse. Even when tagged carbon from one tree appeared in a neighbor, much of it stayed in the mycorrhizal roots themselves — the fungi were receiving it, but whether they were meaningfully passing it along was undemonstrated.

Third, that mature trees preferentially send resources and defense signals to offspring through CMNs. The researchers stated flatly: this claim has no peer-reviewed, published evidence. Zero field studies.

They also documented a structural problem in the literature. Across 1,676 citations of the original CMN field studies, fewer than half of the citing statements in 2022 papers accurately represented what those studies showed. A 2009 study mapping fungal distribution was routinely cited as evidence of nutrient transfer — though it never investigated nutrient transfer. A scientific game of telephone.

Here's the part that doesn't get enough attention: the debunking of the cooperative narrative doesn't mean mycorrhizal fungi aren't ecologically essential. The symbiosis is real and has existed for 400+ million years. Fungi access phosphorus and nitrogen that roots can't reach, receiving photosynthetic sugars in return. What's in dispute is whether the relationship is cooperative or primarily transactional — and whether fungi have their own agenda. The evidence increasingly supports fungi as active agents pursuing their own nutritional interests. Some mycorrhizal relationships are parasitic — certain orchids and understory herbs steal sugars from connected trees through CMNs. The network may not be a commune. It might be a marketplace. Or a protection racket. Or something with no human analogy.

Meanwhile, the cognition research on fungi and fungus-adjacent organisms has gotten genuinely strange. A 2024 Tohoku University study showed Phanerochaete velutina recognizing spatial patterns in resource environments — distinguishing between inward and outward directions when growing across blocks arranged in shapes. A 2025 study demonstrated context-dependent food preferences in the slime mold Physarella oblonga, including violations of rational choice theory that mirror human decision-making biases. A July 2025 paper showed Physarum polycephalum memory isn't just reflexive — it's overwritable in light of new information, meeting accepted criteria for navigational memory. All without a single neuron.

In 2025, SPUN (Society for the Protection of Underground Networks) released the Underground Atlas — the first high-resolution predictive biodiversity map of Earth's mycorrhizal communities, using over 2.8 billion fungal DNA sequences from 130 countries. Finding: 83% of Earth's climate-critical fungi remain unknown to science, identified only by DNA sequences with no corresponding described species. The underground world is vastly more complex and less understood than even enthusiastic mycologists suspected.

The real story of mycelial networks isn't cooperative trees whispering through the soil. It's a kingdom of organisms processing information without brains, making decisions without neurons, forming networks whose structure we're only beginning to map, and playing roles in carbon cycling we can't quantify because we haven't identified most of the species involved. The wood wide web was a beautiful story. The truth is stranger.

Longer analysis covering the full Karst et al. findings, the fungal cognition research, the SPUN atlas, and why the most interesting question in cognitive science may be "what was thinking before brains existed":

https://unteachablecourses.com/mycelial-networks-wood-wide-web-2026/

For the ecologists here — has the Karst paper changed how CMN research is being designed? Specifically, are new field studies incorporating the controls and alternative hypotheses (soil pore transport, direct root transfer) that the review identified as missing from earlier work, or is the field still largely operating under the old framework?

reddit.com
u/unteachablecourses — 13 hours ago

The longest carbon nanotube ever made is 0.5 meters. A space elevator tether needs to be 100,000 km. But a newer candidate — graphene super laminate — builds on graphene already produced at kilometer lengths, and 2025 lab results showed spot-welded layers with diamond-like properties.

The concept has existed for 130 years and the bottleneck has always been one thing: the tether. Everything else — the climber system, the anchor station, the counterweight, the power delivery — is hard but solvable with existing or near-term engineering. The tether requires a material with a specific strength of roughly 50-60 GPa·cm³/g. For reference, steel is about 0.25. Kevlar is about 2.5. The best carbon fiber composites hit maybe 4. You need something 15-25x stronger per unit weight than the best structural material in common industrial use.
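A quick back-of-the-envelope on that gap, using the approximate specific strengths quoted above (a sketch on the post's own round numbers, not authoritative material data):

```python
# Specific strengths in GPa·cm³/g, as quoted above.
required_low = 50.0  # low end of the ~50-60 GPa·cm³/g tether requirement
materials = {
    "steel": 0.25,
    "Kevlar": 2.5,
    "carbon fiber composite": 4.0,
}

# How far short each material falls of the low end of the requirement.
for name, strength in materials.items():
    print(f"{name}: {required_low / strength:.0f}x short of the requirement")
```

Even the best composite on the list is an order of magnitude short, and that's against the low end of the requirement range.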

Carbon nanotubes have been the poster child since the 1990s. Their theoretical tensile strength is ~150 GPa — more than adequate. The manufacturing reality: the longest single nanotube ever publicly reported is 0.5 meters. Nanotube "forests" have reached 14 cm at Waseda University, growing at one meter every 186 hours. The gap between 0.5 meters and 100,000 kilometers isn't something incremental improvements close on any human timescale. Google X investigated space elevators around 2014, concluded nobody had made a perfect nanotube strand longer than a meter, and put the project in "deep freeze."

Here's what's shifted. The International Space Elevator Consortium has increasingly moved its focus to graphene. Graphene's theoretical tensile strength is ~130 GPa — comparable to nanotubes. The critical difference: polycrystalline graphene is already being manufactured commercially at kilometer lengths and speeds of two meters per minute. The material isn't at tether quality — you need single-crystal graphene with zero grain boundaries, manufactured as a continuous sheet at industrial scale — but the trajectory from lab curiosity to industrial product is incomparably more advanced than the nanotube trajectory.

ISEC's leading candidate is "graphene super laminate" — multiple layers of single-crystal graphene bonded through covalent carbon-carbon spot welding. Each layer retains graphene's extraordinary in-plane strength while the interlayer bonds prevent the shearing weakness of regular multilayer graphene. In September 2025, ISEC reported that the spot-welding process had been demonstrated in the lab and produced a material with diamond-like properties. In February 2026, they published research on atomic oxygen corrosion resistance of the material — addressing one of the critical environmental hazards a tether faces in LEO.

Whether this can be manufactured at 100,000 km continuous lengths, at tether-quality purity, at production speeds that don't require decades, at viable cost — entirely undemonstrated. But the International Academy of Astronautics projected in 2013 that tether materials could achieve the necessary specific strength "within 20 years," putting the breakthrough at roughly 2033. The graphene trajectory is at least consistent with that timeline in the sense that the path is visible, even if the destination hasn't been reached.

The other engineering problems are worth cataloguing because the tether gets all the attention:

The climber needs to ascend 35,786 km to GEO. At reasonable speeds that's an eight-day journey (per Obayashi Corporation's design). It can't carry enough onboard energy for the climb — proposed solutions include ground-based lasers beaming power to photovoltaic cells, which leaves atmospheric attenuation, beam-tracking accuracy across thousands of km, and "what happens when something flies through the beam path" as open questions.

Space debris. The tether passes through LEO where objects travel at ~7.8 km/s. A marble-sized fragment hitting a tether the thickness of plastic wrap is catastrophic. ISEC published a June 2025 analysis on this — the ribbon design helps because stress redistributes across width after small punctures, but routine avoidance maneuvers for tracked debris would be necessary. A ribbon can't exactly dodge.

Atmospheric hazards — wind loads, lightning, weather on the bottom; atomic oxygen corrosion in LEO; Van Allen radiation degrading molecular bonds (though studies suggest carbon nanotubes could survive radiation for 1,000+ years); gravitational perturbations from the Moon and Sun creating tether oscillations that need damping.

An ocean-based equatorial anchor platform under millions of newtons of continuous tension, maintained indefinitely, is itself a major engineering project.

Obayashi Corporation maintains a 2050 target for an operational space elevator. A March 2026 market report values the "space elevator market" — mostly materials research, climber design, and tether dynamics modeling — at $720 million, projected to reach $1.16 billion by 2030.

The comparison I keep coming back to: in 1903, the Wrights flew at Kitty Hawk. In 1969, Apollo 11 landed on the Moon. Sixty-six years from first powered flight to lunar landing. The space elevator concept has existed for 130 years. The materials science has been actively researched for 30. The physics is sound, the engineering challenges are understood, and the materials are making measurable progress. But the gap between "the physics works" and "we can build it" is still measured in orders of magnitude — and the payoff (cost per kg to GEO dropping from ~$20,000 to ~$500, 170,000 metric tons to orbit per year on a mature system) is so transformative that the question isn't whether it's worth pursuing but whether the timeline is measured in decades or generations.

Longer analysis covering the full engineering breakdown, graphene vs nanotube trajectories, the economics of $500/kg to GEO, and where space elevators sit in the broader landscape of space access moonshots:

https://unteachablecourses.com/space-elevators-2026/

For the materials scientists here — is graphene super laminate a realistic path to tether-grade specific strength, or is the grain boundary / defect problem at manufacturing scale essentially the same bottleneck that killed the nanotube approach, just wearing a different hat?

reddit.com
u/unteachablecourses — 13 hours ago
▲ 5 r/UnteachableCourses+2 crossposts

Zipline has completed 2 million+ deliveries across 125 million autonomous miles with zero serious injuries. Amazon Prime Air has completed roughly 16,000 deliveries and has had seven significant incidents including two drones hitting a construction crane and one crashing into an apartment building.

The drone delivery industry in 2026 has essentially sorted into two tiers, and the dividing line is simpler than most analyses make it: how much does your aircraft weigh when something goes wrong?

Zipline's P2 and Wing's delivery drones weigh between 10 and 40 pounds. Amazon's MK30 has a maximum takeoff weight of 83 pounds. When a 15-pound drone has a problem, it's an inconvenience. When an 83-pound drone hits an apartment building at speed, people smell smoke and watch propeller fragments fall to the sidewalk. That's not a metaphor — that's what happened in Richardson, Texas in February 2026. Five days later, Amazon launched in Kansas City.
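The weight contrast can be made concrete with a rough impact-energy comparison (a sketch; the 20 m/s speed is an assumed illustrative figure, not from either company's published specs):

```python
# Rough kinetic-energy comparison of the two weight classes described above.
# The 20 m/s speed is an assumption for illustration only.
def impact_energy_joules(weight_lb, speed_m_s=20.0):
    mass_kg = weight_lb * 0.4536   # pounds to kilograms
    return 0.5 * mass_kg * speed_m_s ** 2

light = impact_energy_joules(15)   # Zipline/Wing class
heavy = impact_energy_joules(83)   # Amazon MK30 class
print(f"{heavy / light:.1f}x the impact energy at the same speed")
```

At equal speed the ratio is just the mass ratio — roughly 5.5x — before accounting for the heavier aircraft's higher cruise speeds, which scale energy quadratically.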

The incident catalog for Amazon Prime Air since resuming operations in April 2025 is worth laying out because the pattern tells you something about how differently the FAA treats operators at different safety profiles. A controlled landing at an Arizona apartment complex in May. A package dropped into a swimming pool in July. Two drones crashing into a construction crane in Tolleson in October — sparking a fire and hazmat response. A drone landing five feet from a resident checking his mailbox. A severed internet cable during ascent in Waco in November. The apartment building crash in Richardson in February 2026. Multiple FAA and NTSB investigations opened. Amazon resumed flights within 48 hours of the crane incident and launched new markets days after the apartment crash.

Compare that to how the FAA treats Part 107 operators — fines reaching $36,770 for a single violation, license suspensions for flying near stadiums. Amazon operates under Part 135 air carrier certification with different oversight mechanisms, but the optics of multiple federal investigations in one year while new markets keep launching on schedule are hard to ignore.

Internal cost projections reported in late 2024 showed Amazon spending roughly $63 per delivery against customer pricing of $4.99 to $9.99. Amazon can absorb that because it's Amazon. As of February 2026, total Prime Air deliveries sit at around 16,000 across operations in Texas, Michigan, Arizona, Florida, and Kansas.

Meanwhile, Zipline hit 2 million commercial deliveries in January 2026, has flown over 125 million autonomous miles with zero serious injuries, raised over $600 million in January 2026 (valuation now $7.6 billion), holds BVLOS authorization across all 50 states, and was producing a new drone every hour at its manufacturing facility by end of 2025. The company is expanding to Houston and Phoenix in early 2026. Its safety architecture includes acoustic detect-and-avoid with microphone arrays that can hear other aircraft up to two miles away and 500+ safety checks per second during every flight. The P2 platform uses a tether system that keeps the main aircraft at 300 feet while lowering a smaller delivery droid to the doorstep.

Wing, Alphabet's subsidiary, has now passed 750,000 deliveries and covers a service area reaching over 2 million customers across Houston, Atlanta, Dallas-Fort Worth, Charlotte, and — as of March 2026 — the San Francisco Bay Area. In DFW and Metro Atlanta, the top 25% of customers order three times per week. Delivery volume tripled in the second half of 2025 compared to the first half. Wing extended operating hours to 9 AM through 9 PM in Charlotte and DFW with FAA approval. The 150-store Walmart expansion announced in January 2026 adds Los Angeles, St. Louis, Cincinnati, and Miami to the pipeline.

The underlying regulatory question is Part 108 — the proposed rulemaking that would create a permanent, standardized framework for routine BVLOS operations instead of the current waiver-by-waiver system. It was announced in August 2025 and is still working through the process. Until it's codified, expansion pace is gated by regulatory bandwidth rather than technological capability. The FAA recently approved Wing and Zipline to operate simultaneously over the same DFW suburbs without visual observers — the first time that's happened — which suggests the regulatory direction is toward enabling multi-operator airspace, even if the formal rule isn't done yet.

The honest trajectory for drone delivery isn't the one on any company's investor deck. It's not 500 million deliveries by 2030. It's a specific tool for specific use cases — medical supplies in areas with poor road infrastructure, urgent small-package delivery in suburban markets, high-frequency low-weight consumer goods in neighborhoods where the economics and approvals align. Zipline's origin story is instructive: the company that actually scaled drone delivery built its operation delivering blood and vaccines in Rwanda and Ghana, not same-day retail in suburban Texas. The safety record, the operational discipline, and the regulatory credibility all came from solving a genuinely hard logistics problem where the alternative was people dying because roads were washed out.

Longer analysis covering the full regulatory landscape, the engineering constraints (noise, weather, payload limits, airspace integration), and why the companies scaling carefully are outperforming the one scaling fastest:

https://unteachablecourses.com/drone-delivery-2026/

Question for this community: how much of Amazon's aggressive expansion schedule is driven by the belief that first-mover market presence matters more than safety record, and how much risk does that create for the broader industry if a serious incident involving a bystander triggers a regulatory clampdown that affects operators who haven't had problems?

reddit.com
u/unteachablecourses — 13 hours ago

A photovoltaic retinal implant the thickness of half a human hair restored meaningful central vision in 80% of legally blind AMD patients at 12 months — the first treatment to restore form vision in geographic atrophy. Published in NEJM, CE mark and FDA applications now filed.

The PRIMAvera trial results, published in the New England Journal of Medicine in October 2025, represent the first clinical evidence that an electronic implant can restore central vision in patients with geographic atrophy due to age-related macular degeneration. GA is the end stage of dry AMD — the photoreceptors are dead, the damage was previously considered irreversible, and no approved therapy, investigational approach, or cell therapy had ever produced meaningful visual improvement. The NEJM editorial called PRIMA "the first treatment to restore vision" in this population.

The trial enrolled 38 legally blind patients across 17 sites in five European countries. Of 32 patients assessed at 12 months, 26 (81%) demonstrated clinically meaningful improvement in visual acuity. Mean improvement was 23 letters — roughly 4.6 lines on an eye chart. The best responder gained 59 letters (11.8 lines). Patients could read large print, recognize objects, and perform tasks like cooking and playing cards that they couldn't before implantation. Natural visual acuity without the device remained stable, confirming the improvement was attributable to the implant.

The mechanism: a 2×2mm crystalline silicon chip, 30 micrometers thick, comprising 378 photovoltaic pixels, is implanted beneath the retina within the atrophic lesion. Augmented-reality glasses with a front-facing camera project near-infrared light (880nm) onto the chip. Each pixel converts infrared light into electrical current that stimulates surviving bipolar cells — the retinal neurons downstream of the dead photoreceptors. The bipolar cells relay the signal through the remaining visual pathway to the brain. The infrared light simultaneously carries visual information and powers the chip. No battery, no wires, no external power threading through the eye. The brain learns to merge the prosthetic central vision with whatever peripheral natural vision remains.

The wireless design is significant because the field's most prominent prior device — Second Sight's Argus II, FDA-approved in 2013 — required wired connections that created durability problems. More critically, Second Sight went bankrupt in 2020 and ceased operations in 2022, leaving ~350 patients with orphaned implants and no manufacturer support. The Argus II cautionary tale is why commercial viability matters as much as clinical efficacy in this field — patients make a decades-long commitment to hardware in their body.

Science Corporation (founded by Neuralink co-founder Max Hodak), which acquired Pixium Vision's PRIMA assets in 2024, appears to be addressing the sustainability question aggressively. In March 2026, the company closed an oversubscribed $230M Series C — total funding now roughly $490M — with investors including Lightspeed, Khosla Ventures, Y Combinator, and IQT. CE mark application has been submitted to the EU, with European commercial launch expected later in 2026. FDA application is filed in the US. The company has also expanded PRIMA trials to retinitis pigmentosa and Stargardt disease at Sydney Eye Hospital in Australia, led by Dr. Matthew Simunovic — the first time the device is being tested in inherited retinal degenerations rather than AMD alone.

Caveats worth noting: the PRIMAvera trial was open-label, single-arm, and baseline-controlled — not placebo-controlled. An anonymous retinal-degeneration researcher told Nature that the intensive training and motivation from receiving a novel device might inflate results. The restored vision is grayscale, not color, and limited to central-field perception. Resolution is 378 pixels versus the roughly 6 million cones in a healthy fovea — four to five orders of magnitude below natural vision. Serious adverse events occurred in 19 of 38 patients, though 81% of events occurred within the first two months and 95% of those resolved within two months. One patient required surgery for retinal detachment and proliferative vitreoretinopathy.
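The resolution gap quoted above can be quantified directly from the two figures given:

```python
import math

# 378 photovoltaic pixels versus ~6 million cones in a healthy fovea,
# as quoted above.
pixels = 378
cones = 6_000_000

ratio = cones / pixels
print(f"~{ratio:,.0f}x fewer elements "
      f"({math.log10(ratio):.1f} orders of magnitude)")
```

By raw element count the gap is a bit over four orders of magnitude; effective perceptual resolution depends on stimulation selectivity and cortical processing, so the count is a floor, not the whole story.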

The resolution gap is the fundamental limitation of every bionic eye in 2026. Daniel Palanker, the Stanford researcher whose work underlies PRIMA, draws the comparison to cochlear implants — early devices provided crude sound perception, decades of refinement enabled speech comprehension and music appreciation. The trajectory for retinal implants may follow a similar arc: first generation establishes the principle, subsequent generations improve resolution, and the technology becomes standard practice over decades. Next-generation PRIMA designs are pursuing smaller pixels for higher density, along with electronic zoom and image stabilization.

The broader landscape includes suprachoroidal implants (Bionics Institute, Australia — FDA breakthrough device designation, 97% electrode survival over 2.7 years), cortical visual prostheses that bypass the eye entirely (Neuralink's Blindsight system targeting first human volunteers in 2026; Cortigent's Orion with five-year feasibility data), and Science Corporation's own hybrid Science Eye combining retinal implants with optogenetic gene therapy. But PRIMA is the only device with NEJM-published efficacy data in a multicenter controlled trial.

AMD affects roughly 200 million people globally. GA specifically affects approximately 5 million and is responsible for ~20% of legal blindness in North America. The only approved GA therapies — complement inhibitors pegcetacoplan and avacincaptad pegol — slow progression but require monthly or bimonthly injections and have never restored lost vision. PRIMA is the first device to cross from "slowing the damage" to "reversing the outcome."

Longer analysis covering the full device landscape, the Argus II failure, the resolution problem, and the cochlear implant comparison framework for understanding the technology's trajectory:

https://unteachablecourses.com/retinal-implants-bionic-eyes-2026/

For anyone in ophthalmology or retinal research — how significant is the expansion to RP and Stargardt? The retinal damage in those conditions is more diffuse than the focal atrophy in GA, which seems like it would complicate subretinal implant positioning and potentially limit efficacy. Curious whether anyone has a view on how transferable the PRIMA results are to those populations.

reddit.com
u/unteachablecourses — 14 hours ago
▲ 2 r/UnteachableCourses+1 crossposts

Quantum computing in 2026 is where classical computing was in the early 1950s — room-sized machines solving academic problems, with a transformative future visible in theory and invisible in daily life. The difference is the 1950s scientists didn't have quarterly earnings calls.

Google's Willow chip completed a benchmark calculation in five minutes that would take a classical supercomputer 10^25 years — a number that exceeds the age of the universe by 15 orders of magnitude. IBM promised quantum advantage by end of 2026. Microsoft debuted the first topological qubit processor in February 2025. D-Wave's stock is up 200% in a year. The headlines suggest the revolution has arrived.

The practical reality: quantum computers are not commercially useful at scale. Most real-world applications remain experimental. They are expected to outperform classical computers in specific, commercially meaningful tasks sometime after 2030, not before.

Here's where things actually stand in April 2026, stripped of the press releases.

The field sits in the NISQ era — Noisy Intermediate-Scale Quantum computing. Current processors operate with dozens to a few hundred physical qubits, and those qubits are fragile. They're sensitive to temperature (superconducting quantum computers operate near absolute zero, about 15 millikelvins), electromagnetic interference, vibration, and any interaction with their environment. These interactions cause errors — qubits lose their quantum state through decoherence — and current error rates are high enough that computations longer than a few thousand operations become unreliable.

IBM's Nighthawk processor, delivered late 2025, achieves roughly 5,000 reliable gate operations. IBM expects 7,500 by late 2026, 10,000 by 2027. Those are genuine improvements. They're also roughly five to six orders of magnitude below what's needed for the applications that justify the investment.

The path from "interesting but impractical" to "commercially useful" runs through quantum error correction — using multiple physical qubits to encode a single logical qubit protected against errors. Google's Willow demonstrated "below threshold" error correction where adding more qubits decreased errors rather than increasing them. That's foundational. But the demonstration was limited to quantum memory, not gate operations, and logical error rates are still orders of magnitude from practical.
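The "below threshold" behavior can be illustrated with the standard surface-code scaling approximation (an illustrative sketch — the threshold, prefactor, and error rates here are made-up round numbers, not Willow's measured values):

```python
# Standard approximation for the surface-code logical error rate:
#   p_logical ≈ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the threshold, d the code distance.
def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th): growing the code suppresses errors exponentially.
below = [logical_error_rate(0.001, d=d) for d in (3, 5, 7)]

# Above threshold (p > p_th): adding qubits makes things worse.
above = [logical_error_rate(0.02, d=d) for d in (3, 5, 7)]

print(below)  # each distance step cuts the logical rate by ~10x
print(above)  # each distance step roughly doubles it
```

This is why crossing the threshold is foundational: it flips the sign of what more hardware buys you. Before the crossing, scale hurts; after it, scale is the path to the orders-of-magnitude improvements the applications need.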

One telling detail about where the field stands: there's no consensus on what a qubit should even be made of. In classical computing, the transistor won decades ago. In quantum computing, at least five competing technologies are under active development with billions behind each — superconducting qubits (IBM, Google), trapped ions (IonQ, Quantinuum), neutral atoms (QuEra, Atom Computing, Pasqal), photonic approaches (PsiQuantum, Xanadu), and Microsoft's largely unproven topological qubits.

A few things have happened since the Willow announcement that are worth tracking:

In January 2026, a multi-university paper in Science (UChicago, Stanford, MIT, Innsbruck, Delft) explicitly compared the current state of quantum technology to the pre-transistor era of classical computing — foundational physics established, functional systems exist, but scaling to utility requires engineering breakthroughs that could take years or decades. They called it a "transistor moment," which sounds optimistic until you remember how long it took from the first transistor to the first useful computer.

In February, Fermilab and MIT Lincoln Lab demonstrated trapped ions controlled by in-vacuum cryoelectronics — a key step toward scalable ion-trap quantum computing, because current systems rely on impractical wiring between room-temperature electronics and cryogenic traps that breaks down as you add qubits.

In March, IBM released the first published quantum-centric supercomputing reference architecture — a blueprint for integrating quantum processors alongside GPUs and CPUs in hybrid systems. This is significant because it acknowledges what the field has quietly accepted: quantum computers aren't going to replace classical computers. They're going to work alongside them, handling specific subtasks where quantum offers advantage. The hybrid model is the realistic path, and IBM formalizing an architecture for it matters.

On the neutral atom front, Microsoft and Atom Computing plan to deliver an error-corrected quantum computer to Denmark's Novo Nordisk Foundation in 2026. QuEra delivered a machine ready for error correction to Japan's AIST and plans global availability this year. Both teams expect to put 100,000 atoms into a single vacuum chamber within a few years — a scalability advantage that superconducting approaches can't easily match.

D-Wave claimed an industry-first in scalable on-chip cryogenic control for gate-model qubits in January, addressing the wiring bottleneck. Their stock reflects the hype cycle more than the technical reality, but the underlying engineering is genuine.

What quantum computers actually can do today: simulate molecular behavior (the most natural application — using a quantum system to simulate a quantum system), certain optimization problems, and cryptography research. What they cannot do: run AI models, replace cloud computing, speed up databases, or accomplish any general-purpose task more efficiently than a classical machine. NIST finalized post-quantum cryptography standards in 2024 because the threat to current encryption is real — it just requires millions of error-corrected qubits that don't exist yet.

IBM's roadmap targets fault-tolerant quantum computing — their Quantum Starling machine, ~200 logical qubits across ~10,000 physical qubits — by 2029. IBM has been hitting interim milestones consistently, which matters because roadmap credibility is rare in this field. Their 2025 Loon processor demonstrated the key hardware components, and they achieved real-time error decoding in under 480 nanoseconds, a year ahead of schedule.

The pattern is familiar if you've followed fusion or autonomous vehicles: genuine technical progress, consistent milestone achievement, and a commercial timeline that keeps resolving into "a few more years." The most honest framing isn't that quantum computing doesn't work — the physics absolutely works. It's that the gap between where we are and where we need to be is measured in orders of magnitude, and orders of magnitude don't close on schedule.

Longer analysis covering the error correction problem, the qubit technology competition, IBM/Google/Microsoft roadmaps, and what "quantum advantage" actually means versus how it's marketed:

https://unteachablecourses.com/quantum-computing-2026/

Genuine question for the technical people here: does the neutral atom approach (QuEra, Atom Computing) end up winning the qubit race specifically because of the scalability advantage — 100,000 atoms in a single chamber vs. the wiring nightmare of scaling superconducting systems — or is the gate speed disadvantage too steep for it to matter?

reddit.com
u/unteachablecourses — 14 hours ago

The Line's construction was suspended in September 2025 after completing 2.4 km of foundations out of 170 km. In March 2026, three more major contracts totaling $6B+ were cancelled. An internal audit leaked to the WSJ projected final costs of $8.8 trillion and a completion timeline stretching to 208

The original spec: two parallel mirrored walls, each 500 meters tall, extending 170 km in a straight line through the Saudi desert. 200 meters wide. Nine million residents. No cars, no streets. Population density of 260,000 people per square kilometer — six times denser than Manila, the densest city on Earth. Vertical farms, flying taxis, AI managing the city like a cognitive organism. Estimated cost: $500 billion. Estimated completion: 2030-ish.

What actually happened: the Saudi sovereign wealth fund paused construction on September 16, 2025. The NEOM CEO was relieved of duties in November. The 2029 Asian Winter Games at Trojena — a ski resort on manufactured snow in the Saudi mountains — were indefinitely postponed in January 2026 and relocated to Almaty. Workforce cut roughly 35%. Over 1,000 employees relocated from the construction site to Riyadh. The PIF recorded an $8 billion write-down.

Then in March 2026, three more major contracts were terminated: Webuild's $4.7 billion dam and lake project for Trojena, Eversendai's structural steel contract for the Trojena ski village, and Hyundai's $1 billion tunnel contract for The Line's transport infrastructure. That's $6+ billion in cancellations in a single month for a project that's supposedly "a strategic priority."

The engineering problems were identifiable from the announcement. An Imperial College London analysis noted that delivering The Line to spec within the proposed timeline would require a construction rate roughly 15,000 times that of typical U.K. projects. The enclosed volume, roughly 17 billion cubic meters, at standard high-rise construction costs of ~$1,000/m³ implies structural costs alone of $17 trillion. The mirrored glass exterior would create a solar concentrator effect between the walls. The structural loads on a 500-meter-tall continuous wall extending 170 km (wind loading, thermal expansion, seismic forces in a region with active fault lines) exceed anything ever built. Water supply for nine million people in the Tabuk desert would require the largest desalination infrastructure ever constructed.
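The arithmetic behind those figures is easy to check. A quick sketch using only the numbers already quoted in the post (170 km length, 200 m width, 500 m height, nine million residents, ~$1,000/m³ structural cost):

```python
# Back-of-envelope check of The Line's quoted figures.
LENGTH_M = 170_000    # 170 km
WIDTH_M = 200
HEIGHT_M = 500
RESIDENTS = 9_000_000
COST_PER_M3 = 1_000   # ~$1,000/m3, standard high-rise structural cost

volume_m3 = LENGTH_M * WIDTH_M * HEIGHT_M          # enclosed volume
structural_cost = volume_m3 * COST_PER_M3          # structure alone
footprint_km2 = (LENGTH_M / 1_000) * (WIDTH_M / 1_000)
density_per_km2 = RESIDENTS / footprint_km2

print(f"Enclosed volume: {volume_m3:.1e} m^3")         # ~1.7e10 (17 billion)
print(f"Structural cost: ${structural_cost:.1e}")      # ~$1.7e13 ($17 trillion)
print(f"Density: {density_per_km2:,.0f} people/km^2")  # ~265,000
```

The 17 billion m³, $17 trillion, and ~260,000 people/km² figures all fall straight out of the stated dimensions, which is part of why the analysis was possible from the announcement alone.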

Each of these is solvable in isolation. Together, at this scale, in this timeline, in this location, they compose something approaching impossibility. The Financial Times reported that MBS has now privately accepted the original vision will be realized as something "far smaller." One former employee, quoted anonymously, said the situation is now about "letting MBS down gently."

What's interesting from an urban planning perspective is the pattern. This is the same trajectory as every ambitious planned-from-scratch city in modern history, just at a larger budget. Brasília works but is widely considered sterile. Naypyidaw is a ghost town. Masdar City in Abu Dhabi — billed as the world's first zero-carbon city in 2006 — has been quietly scaled back to a small neighborhood. Songdo in South Korea is roughly half-occupied a decade after opening.

The consistent lesson: planned cities that succeed tend to be modest in scope and flexible in design. Planned cities that lead with a grand vision and a promotional video tend to become very expensive lessons in the difference between rendering and reality. The Line followed the same pattern as Fordlandia, the Concorde, and the Superconducting Super Collider: vision first, engineering second, constraints never.

The pivot is telling, though. Architects have been tasked with repurposing existing infrastructure — the trench, foundations, and cores — into something deliverable. The leading candidates are a much shorter coastal section (2.4-5 km) at reduced height, with remaining earthworks potentially converted to AI data centers. A $5 billion DataVolt partnership for data center infrastructure at Oxagon was announced in February 2026. Bloomberg reported additional deals with AWS and Google Cloud are in negotiations. NEOM's green hydrogen plant is 80% complete. The project may end up as a tech infrastructure hub rather than a city — which is arguably more useful than what was originally proposed, but bears almost no resemblance to the mirrored canyon city in the 2022 video.

PIF construction contracts fell from $71 billion to $30 billion, a reduction of nearly 60%, as capital is reallocated to stadiums for the 2034 FIFA World Cup and Expo 2030.

I wrote a longer analysis covering the full engineering breakdown, the history of planned-from-scratch cities, and where this fits in the broader pattern of utopian megaprojects:

https://unteachablecourses.com/neom-and-the-line-2026-update/

For the planners here: what's the most instructive comparison case? I keep landing on Masdar City because the arc is almost identical — Gulf state money, zero-carbon branding, renders that looked like a different planet, quiet scale-back to something functional but unrecognizable — but curious whether anyone sees a closer analog.

u/unteachablecourses — 14 hours ago
▲ 14 r/UnteachableCourses+1 crossposts

Two-thirds of an octopus's neurons are in its arms, not its brain — and a 2024 3D molecular atlas of the arm nerve cord revealed regional specializations and neurochemical complexity far beyond what anyone expected from a "peripheral" nervous system

The standard model of animal intelligence is centralized processing. Sensory input goes to the brain, the brain decides, commands go to the body. Every vertebrate on Earth runs this architecture. The octopus doesn't. It distributes roughly 350 million of its 500 million neurons across eight arms, each containing a neural network complex enough to taste, touch, decide, and act semi-autonomously. A severed octopus arm continues responding to stimuli, reaching for food, and retracting from threats for up to an hour. The arm doesn't know it's been separated.

What's changed recently is that we're starting to understand how this actually works at a cellular level, and it's more complex than the "each arm is a simple mini-brain" framing suggests.

In 2024, researchers at SF State published two papers in Current Biology that produced the first 3D molecular and anatomical maps of the octopus arm nerve cord. The key finding: the cells at the tip of an arm are neurochemically different from those at the base near the central brain, with distinct regional specializations along the length. The arm nerve cord isn't a relay cable. It's a processing center with its own spatial organization, neurotransmitter systems, and computational architecture — a brain in miniature running local operations while communicating with the central brain through what appears to be relatively narrow bandwidth.

A September 2025 study in Scientific Reports quantified what marine biologists had long suspected: octopus arms show functional specialization. Researchers analyzed nearly 7,000 arm deformations across 25 wild octopuses filmed in six habitats and catalogued 12 distinct movement types. Front arms primarily handle exploration while rear arms focus on locomotion — but all arms retain full behavioral flexibility. The architecture is hierarchical distributed control: local ganglia handle immediate sensorimotor loops while the central brain sets broad strategic priorities.
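That "hierarchical distributed control" architecture can be sketched in code. This is a toy illustration only: the class names, methods, and thresholds below are invented for clarity, not taken from the papers. The point it captures is that reflexes resolve in the arm's local loop without a round-trip to the brain, while the brain only sets slow-changing priorities:

```python
# Toy model of hierarchical distributed control: local arm controllers
# run their own sense->act loops; a central "brain" only sets priorities.
from dataclasses import dataclass

@dataclass
class Arm:
    name: str
    priority: str = "idle"  # set by the central brain, not per-stimulus

    def local_loop(self, stimulus: str) -> str:
        # Immediate sensorimotor reflexes resolve locally, analogous to
        # the arm's own ganglia handling the tight feedback loop.
        if stimulus == "noxious":
            return "retract"
        if stimulus == "food" and self.priority != "locomote":
            return "grasp"
        return "explore" if self.priority == "explore" else "hold"

class CentralBrain:
    def set_strategy(self, arms: list[Arm]) -> None:
        # Broad strategic priorities: front arms explore, rear arms
        # handle locomotion (mirroring the 2025 specialization finding),
        # while every arm keeps its full local behavioral repertoire.
        for i, arm in enumerate(arms):
            arm.priority = "explore" if i < len(arms) // 2 else "locomote"

arms = [Arm(f"arm{i}") for i in range(8)]
CentralBrain().set_strategy(arms)
print(arms[7].local_loop("noxious"))  # retract: reflex overrides priority
print(arms[0].local_loop("food"))     # grasp: front arm on explore duty
```

Note what the toy gets right about the biology: a rear arm still retracts from a noxious stimulus instantly, because that decision never routes through the central controller at all.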

Then there's the molecular convergence that's hard to stop thinking about. Octopus brains and human brains share the same "jumping genes" — LINE transposons — active in their respective learning and memory regions. In humans, these transposable elements are particularly active in the hippocampus. In octopuses, the same family is active in the vertical lobe. Two organisms separated by 500 million years of evolution, using the same molecular mechanism in functionally analogous brain structures. Researchers at SISSA in Trieste and Stazione Zoologica Anton Dohrn in Naples found this independently in both Octopus vulgaris and Octopus bimaculoides.

An August 2025 paper in Trends in Ecology & Evolution introduced a framework for tactical deception in cephalopods — the capacity to mislead other organisms through deliberate behavioral manipulation, something previously attributed almost exclusively to primates and corvids. A January 2026 paper in Biological Reviews updated the assessment of cephalopod sentience, building on the Cambridge Declaration on Consciousness that included cephalopods among animals capable of conscious experience. The UK formally recognized octopuses as sentient beings in 2022.

Meanwhile, the engineering side is accelerating. The US Office of Naval Research funded a $7.5 million "Cyberoctopus" initiative to computationally model distributed intelligence. A May 2025 paper in Science Robotics from the University of Bristol demonstrated a soft robot using "embodied suction intelligence" — mimicking the neuromuscular structure of octopus suckers to sense its environment and control its own actions without a central computer. Published research on octopus-inspired technology grew from 760 papers in 2021 to 1,170 in 2024.

The part that gets me is the lifespan constraint. Most octopus species live one to two years. They're solitary. There's essentially no cultural transmission or social learning across generations. Every octopus that opens a jar, navigates a maze, recognizes a human face, or carries coconut shells across the seafloor for future shelter figured it out alone, within a life measured in months. In vertebrates, high intelligence is almost always paired with long lifespans and social learning. Octopuses break both rules and still arrive at problem-solving, tool use, observational learning, and what increasingly looks like individual personality.

The last common ancestor between octopuses and humans was a flatworm-like organism roughly 500-600 million years ago. Everything the octopus brain can do, it evolved independently. If intelligence can diverge this dramatically on the same planet, under the same physics, the range of possible cognitive architectures elsewhere is essentially unbounded.

Longer deep-dive covering the distributed cognition model, the LINE transposon convergence, the Cyberoctopus project, and what all of this implies for the search for extraterrestrial intelligence:

https://unteachablecourses.com/octopus-intelligence/

What's everyone's read on the functional specialization findings? The fact that front arms explore while rear arms locomote, but all arms retain full flexibility, seems like it sits in an interesting middle ground between true modularity and full equipotentiality — curious whether anyone here has a framework for thinking about that.

u/unteachablecourses — 1 day ago

China didn't corner the rare earth market because rare earths are rare — they cornered it because they spent 40 years building out processing while the rest of the world was content to buy the output

The most important thing about rare earth elements is that the name is wrong. They're not rare. Cerium is more common in Earth's crust than copper. Deposits exist on every continent, including in the United States, Australia, Canada, Brazil, and throughout Scandinavia and Africa. What's rare is the willingness to process them, because rare earth processing is one of the most chemically demanding and environmentally destructive industrial operations that exists — and China decided in the 1980s that it was worth dominating.

The standard framing of this issue treats China's position as a geology story. It's not. It's an industrial policy story. China didn't just mine rare earths. It built every link in the value chain: mining, concentration, separation, oxide production, metal refining, alloy manufacturing, and finished magnet production. Mining the ore is step one. Separating it into individual oxides — which requires hundreds of stages of solvent extraction because the 17 rare earth elements have nearly identical chemical properties — is step two. Reducing oxides to metals is step three. Manufacturing NdFeB permanent magnets is step four. Each step requires specialized expertise, equipment, and chemical processes that take years to develop. China built all four. The rest of the world outsourced all four.
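The "hundreds of stages" point follows directly from the chemistry: when two adjacent lanthanides differ so little that each extraction stage enriches their ratio by only a few percent, purity compounds slowly. A rough illustration — the separation factor of 1.1 below is an assumed, representative value for a difficult element pair, not a measured one:

```python
import math

def stages_needed(separation_factor: float, target_purity: float) -> int:
    # Counter-current solvent extraction enriches the ratio of two
    # elements by ~separation_factor per stage. Starting from a 50/50
    # mix (ratio 1), the ratio after n stages is separation_factor**n,
    # so n = log(target_ratio) / log(separation_factor).
    target_ratio = target_purity / (1 - target_purity)
    return math.ceil(math.log(target_ratio) / math.log(separation_factor))

# Adjacent rare earths are chemically similar enough that per-stage
# separation factors sit barely above 1.
print(stages_needed(1.1, 0.999))  # ~73 stages for one difficult pair
```

Seventy-odd stages for a single element pair, multiplied across a feedstock containing many of the 17 elements, is how plants end up with hundreds of mixer-settler stages — and why the expertise to run them can't be bought off the shelf.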

Mountain Pass in California was the world's largest rare earth producer. It shut down because it couldn't compete on price with Chinese operations running on lower labor costs, lower environmental standards, and state subsidies. Japan was the world's leading magnet manufacturer — then GM's magnet subsidiary Magnequench was acquired by Chinese groups in 1997 and the equipment was eventually relocated to China. By the early 2010s, China controlled over 95% of global production.

In April 2025, China imposed export licensing requirements on seven rare earth elements. Export volumes dropped roughly 74% within a month. European rare earth prices hit six times the Chinese domestic price. Some carmakers in the US and Europe cut production or shut down factories. Then in October, China added five more elements and — this is the part that changed the game — applied the foreign direct product rule to rare earths for the first time. That mechanism, which the US had pioneered to restrict semiconductor exports to China, now worked in reverse: products made anywhere in the world using Chinese-origin rare earth materials or processing technology required an export license from Beijing. China wasn't just controlling what left its borders. It was claiming jurisdiction over what happened to its materials after they left.

The controls were partially suspended in November 2025 as part of broader trade negotiations. But the demonstration was complete.

The current response — MP Materials at Mountain Pass, the Lynas-Noveon partnership, the Pentagon's $620 million loan to Vulcan Elements and ReElement, the EU's Critical Raw Materials Act — is real and necessary. It also amounts to a rounding error. MP Materials' Independence facility in Fort Worth, at full magnet production capacity, would represent less than half a percent of global NdFeB supply. Building a separation plant from scratch takes 3-5 years and costs over a billion dollars. Qualifying the output for defense-grade applications adds more years. The engineers who know how to run these processes at commercial scale are overwhelmingly in China.

This isn't even the first time this playbook was used. In 2010, China informally restricted rare earth exports to Japan during the Senkaku/Diaoyu dispute. The global response was alarm, a burst of alternative supply chain investment, and then the investment faded as soon as prices normalized. Fifteen years later, the same vulnerability was exploited with more comprehensive controls, new extraterritorial provisions, and a geopolitical context that suggests the restrictions will recur regardless of any temporary suspension.

The honest assessment is that none of the current Western responses will meaningfully reduce China's leverage within five years. The processing infrastructure takes years to build, the workforce takes years to train, and the volumes required to replace Chinese supply are orders of magnitude beyond current Western capacity. The monopoly isn't a market failure. It's a strategic outcome — achieved through decades of deliberate policy, tolerated by decades of Western indifference.

I wrote a longer analysis covering the processing chemistry, the 2025 export controls, the foreign direct product rule application, and the specific Western response efforts in detail:

https://unteachablecourses.com/china-rare-earth-monopoly/

The question I keep coming back to: is the "build alternative supply chains" strategy viable on any timeline that actually matters for the current geopolitical cycle, or is the processing gap simply too wide to close before the next time these controls get activated?

u/unteachablecourses — 1 day ago
▲ 1 r/UnteachableCourses+1 crossposts

After LK-99 and five Ranga Dias retractions, the legitimate superconductivity field is quietly making real progress — nickelates stabilized at ambient pressure, AI-driven materials screening, and a new 151 K record in Hg-1223

The two biggest room-temperature superconductor stories of the 2020s were both fraudulent, and the damage they did to the field's credibility is hard to overstate. But strip away LK-99 and the Dias retractions, and the actual science is in a more interesting place than the fraud cycle suggests.

Quick recap for anyone who's moved on: LK-99, the copper-doped lead apatite from a Korean lab called Q-Centre, generated mass hysteria in July 2023. Twitch streamers watched replication attempts live. A Chinese researcher's levitation video hit 4.5 million views on Bilibili in nine hours. Within a month, labs worldwide had synthesized LK-99 and found no superconductivity. The partial levitation was a ferromagnetic impurity — copper sulfide. A comprehensive rebuttal by Georgescu et al., published in Chemistry of Materials, dismantled the claims point by point. It was a semiconductor with interesting magnetic properties. Not a superconductor at any temperature.

Ranga Dias at the University of Rochester was worse because it was deliberate. Five papers retracted. Nature published his first claim over the majority objection of its own peer reviewers. Rochester doubled his salary. His startup raised $17 million. His own graduate students eventually contacted Nature with concerns about data validity. An external NSF investigation concluded he engaged in falsification, fabrication, and plagiarism. As of late 2024, he's no longer employed at Rochester. Not a single reproducible result across any of the five papers.

Here's what's actually happening in the field now:

In February 2025, SLAC and Stanford stabilized a nickelate superconductor at ambient pressure for the first time. Nickelates are chemically similar to the cuprates that hold the ambient-pressure temperature record (~135 K), but had previously only shown superconducting behavior under extreme pressure in diamond anvil cells. The team demonstrated that lateral compression from a substrate could stabilize the material without the diamond anvils that make high-pressure experiments impractical. This doesn't mean nickelates superconduct at room temperature — they don't. But it means researchers can now study them using X-ray scattering and other advanced techniques that were impossible when the materials only existed under crushing pressure. The constraint shifted from "can we make it" to "can we understand it well enough to improve it."

Penn State followed in October 2025 with a framework called zentropy theory — merging statistical mechanics with quantum physics and computational modeling — that can predict superconducting behavior from a material's electronic structure. It correctly identified known superconductors and offers a method for screening candidates computationally rather than synthesizing thousands of compounds by trial and error.

Then in March 2026, a multi-institutional team published a programmatic roadmap in PNAS arguing for a coordinated global push. The key claim: no fundamental physical law prevents room-temperature ambient-pressure superconductivity. The barrier is materials science and engineering, not physics. Recent pressure quenching of the cuprate Hg-1223 hit 151 K at ambient pressure — a new record. The authors argued that ab-initio computational simulations, now capable of modeling materials at the nanometer scale (a tenfold improvement over capabilities just a few years ago), combined with AI-driven materials screening, could systematically push critical temperatures higher. The paper reads less like a research summary and more like a call to arms.

The practical stakes are enormous and specific. About 5% of U.S. electricity is lost in transmission — tens of billions of dollars annually. MRI machines require liquid helium cooling for their superconducting magnets, and the helium supply chain is genuinely fragile. Fusion reactors depend on superconducting magnets for plasma confinement. Quantum computers currently need millikelvin temperatures to maintain superconducting qubits.
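The "tens of billions" claim survives a napkin check. The generation and price figures below are rough assumptions of mine, not numbers from the post; only the 5% loss fraction comes from the text:

```python
# Rough estimate of the dollar value of US transmission losses.
US_GENERATION_KWH = 4_000e9   # assumed: ~4,000 TWh/year total generation
LOSS_FRACTION = 0.05          # ~5% lost in transmission (from the post)
PRICE_PER_KWH = 0.12          # assumed average retail price, $/kWh

lost_kwh = US_GENERATION_KWH * LOSS_FRACTION
print(f"~${lost_kwh * PRICE_PER_KWH / 1e9:.0f} billion/year")  # ~$24 billion
```

Even halving the assumed price still lands in the tens-of-billions range, which is why lossless transmission is the canonical superconductor application.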

The deeper problem is that we don't fully understand how high-temperature superconductivity works. Conventional superconductors follow BCS theory — Cooper pairs mediated by lattice vibrations, described in 1957. The cuprates that hold the temperature record don't follow this mechanism. Something else creates the electron pairing, and after nearly 40 years, there's no consensus on what it is. You can't engineer your way to a higher critical temperature when you don't have a complete theory for why the current record holders work. The nickelate breakthrough matters because it gives researchers a second family of materials in the same neighborhood of the periodic table with potentially different mechanisms — more data points for the theorists.

The fraud problem is also structural. A single Nature paper in this field can double a salary, launch a startup, and generate millions in grants. Confirming a superconductor requires demonstrating zero resistance, the Meissner effect, flux pinning, temperature-dependent critical field and current, and a specific heat anomaly. LK-99's original papers demonstrated none of these. The PNAS roadmap implicitly addresses this by calling for tighter integration between theory, computation, and experiment — treating room-temperature superconductivity as an engineering program rather than a lottery ticket.

I wrote a longer deep-dive on this covering the full timeline from LK-99 through the March 2026 roadmap, including how it connects to fusion, quantum computing, and the helium supply chain:

https://unteachablecourses.com/room-temperature-superconductors-2026/

Genuinely curious where people here land on the timeline question. The PNAS roadmap is optimistic about AI-accelerated materials screening changing the pace of discovery, but "no physical law prevents it" and "we'll have it in our lifetimes" are very different statements.

u/unteachablecourses — 1 day ago
▲ 3 r/zoology+1 crossposts

Bottlenose dolphins extract identity from signature whistles even when all voice features are removed — they recognize the contour alone, which is structurally closer to how human names work than anything else in animal communication

Most of the animal kingdom identifies individuals by voice cues — timbre, resonance, the physical characteristics of the vocal apparatus. Dolphins don't. They developed a system where each animal constructs a unique frequency-modulated whistle in the first months of life, and other dolphins learn it, remember it, and copy it to get that specific individual's attention. The pattern is the identity, not the voice.

The part that gets interesting from a neuroscience perspective: Janik, Sayigh, and Wells (2006, PNAS) synthesized signature whistles using computer-generated tones that preserved only the frequency contour and stripped every voice feature. The dolphins still recognized them. They responded preferentially to synthetic versions of whistles belonging to individuals they knew. That's not how most mammalian recognition works. That's closer to reading a name on a nametag than recognizing someone's voice across a room.
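The "contour, not voice" distinction can be made concrete: if identity lives in the frequency-modulation pattern, then the same whistle transposed to a different pitch (a different "voice") should still match, while a different pattern at similar pitch should not. A minimal sketch using Pearson correlation of frequency contours — purely illustrative, with made-up contour values; real bioacoustic analyses use far richer contour-matching methods:

```python
# Compare whistle identity by contour shape, ignoring absolute pitch.
def correlation(a: list[float], b: list[float]) -> float:
    # Pearson correlation: mean-centering removes absolute pitch,
    # normalization removes amplitude of modulation; shape remains.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

original = [8.0, 10.0, 14.0, 11.0, 9.0]   # kHz contour of one whistle
transposed = [f + 3.0 for f in original]  # same shape, higher pitch
different = [14.0, 9.0, 8.0, 12.0, 10.0]  # a different contour

print(round(correlation(original, transposed), 3))  # 1.0 -> same identity
print(correlation(original, different) < 0.9)       # True -> no match
```

The transposed whistle correlates perfectly with the original despite sharing no absolute frequencies with it — a crude analog of what the synthetic-tone playbacks demonstrated.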

King et al. (2013, PNAS) then showed that dolphins copy each other's signature whistles — but almost exclusively between closely bonded individuals, and almost exclusively when separated. One pair of allied males was recorded copying each other's whistles 12 years apart with the fine acoustic details preserved. When a dolphin copies another's whistle, it introduces minor but consistent modifications — subtle enough to preserve the referential content while potentially marking the production as a copy rather than the original. Researchers are still working out whether this functions like quotation marks — a meta-communicative distinction between "I'm producing your name" and "I am you." If that interpretation holds, we're looking at something beyond labeling.

The 2023 PNAS finding from the Sarasota Dolphin Research Program added another layer: mothers modify their signature whistles specifically when their calves are nearby — shifting to higher maximum frequencies in a pattern that parallels human motherese. The modification is calf-directed, not a general arousal effect. Whether it serves the same developmental function as infant-directed speech in humans is an open question, but the structural parallel is hard to dismiss.

As of 2025, the Sarasota team is now cataloguing shared "non-signature whistles" — stereotyped whistle types that aren't individually distinctive but are produced by multiple dolphins in the community. They've identified 22 shared types so far. If signature whistles are names, non-signature whistles may be something closer to words — shared acoustic signals with community-wide meaning rather than individual identity. Playback experiments filmed with drones are underway.

Dolphins aren't alone anymore either. A 2024 Nature Ecology & Evolution paper showed African elephants addressing individuals with name-like calls — not through copying but through arbitrary learned labels, which is structurally even closer to human naming. A separate 2024 Science paper showed vocal labeling in marmosets. The evidence has gone from a single-species curiosity to a cross-taxon pattern in two years.

For anyone wanting to go deeper on the comparative neuroscience — how vocal learning, fission-fusion social structure, and the constraints of acoustic communication in murky water converged to produce this system — I wrote a longer treatment covering dolphins alongside octopus distributed cognition, corvid tool use, and electroreception:

https://unteachablecourses.com/dolphin-signature-whistles/

Curious what people here make of the non-signature whistle findings. If those turn out to be referentially stable across the community, the implications for dolphin communication complexity go well beyond naming.

u/unteachablecourses — 1 day ago