r/UnteachableCourses

A 2023 Nature Eco & Evo review found the wood wide web's central claims are "largely disconnected from evidence" — but the actual science of fungal cognition is arguably more interesting than the debunked narrative

The wood wide web became one of the most successful science communication stories of the century. Cooperative forests. Mother trees nurturing offspring through underground fungal networks. Trees sharing resources and sending warnings. Avatar, The Last of Us, a NYT bestselling memoir. It fundamentally changed how a generation understood forests.

Then in February 2023, Karst, Jones, and Hoeksema — three mycorrhizal ecologists with decades of combined field experience — published a systematic evaluation in Nature Ecology & Evolution and found the core claims largely unsupported.

They evaluated three claims. First, that common mycorrhizal networks are widespread and persistent in forests. With current technology, it's difficult to confirm continuous, non-transient fungal connections between trees in the field. DNA sequencing of fungal networks had been achieved in only five field studies, on limited ranges of fungi and tree species. The networks may exist, but their prevalence and permanence haven't been established.

Second, that resources transfer through these networks in ways that boost seedling growth. In the best-controlled experiments, fewer than 20% showed connected seedlings performing better than disconnected ones. In the remaining 80%, connected seedlings performed the same or worse. Even when tagged carbon from one tree appeared in a neighbor, much of it stayed in the mycorrhizal roots themselves — the fungi were receiving it, but whether they were meaningfully passing it along was undemonstrated.

Third, that mature trees preferentially send resources and defense signals to offspring through CMNs. The researchers stated flatly: this claim has no peer-reviewed, published evidence. Zero field studies.

They also documented a structural problem in the literature. Examining 1,676 citations of the original CMN field studies, they found that fewer than half of the statements made in 2022 papers about those studies were accurate. A 2009 study mapping fungal distribution was routinely cited as evidence of nutrient transfer — though it never investigated nutrient transfer. Scientific game of telephone.

Here's the part that doesn't get enough attention: the debunking of the cooperative narrative doesn't mean mycorrhizal fungi aren't ecologically essential. The symbiosis is real and has existed for 400+ million years. Fungi access phosphorus and nitrogen that roots can't reach, receiving photosynthetic sugars in return. What's in dispute is whether the relationship is cooperative or primarily transactional — and whether fungi have their own agenda. The evidence increasingly supports fungi as active agents pursuing their own nutritional interests. Some mycorrhizal relationships are parasitic — certain orchids and understory herbs steal sugars from connected trees through CMNs. The network may not be a commune. It might be a marketplace. Or a protection racket. Or something with no human analogy.

Meanwhile, the cognition research on fungi and fungus-adjacent organisms has gotten genuinely strange. A 2024 Tohoku University study showed Phanerochaete velutina recognizing spatial patterns in resource environments — distinguishing between inward and outward directions when growing across blocks arranged in shapes. A 2025 study demonstrated context-dependent food preferences in the slime mold Physarella oblonga, including violations of rational choice theory that mirror human decision-making biases. A July 2025 paper showed Physarum polycephalum memory isn't just reflexive — it's overwritable in light of new information, meeting accepted criteria for navigational memory. All without a single neuron.

In 2025, SPUN (Society for the Protection of Underground Networks) released the Underground Atlas — the first high-resolution predictive biodiversity map of Earth's mycorrhizal communities, using over 2.8 billion fungal DNA sequences from 130 countries. Finding: 83% of Earth's climate-critical fungi remain unknown to science, identified only by DNA sequences with no corresponding described species. The underground world is vastly more complex and less understood than even enthusiastic mycologists suspected.

The real story of mycelial networks isn't cooperative trees whispering through the soil. It's a kingdom of organisms processing information without brains, making decisions without neurons, forming networks whose structure we're only beginning to map, and playing roles in carbon cycling we can't quantify because we haven't identified most of the species involved. The wood wide web was a beautiful story. The truth is stranger.

Longer analysis covering the full Karst et al. findings, the fungal cognition research, the SPUN atlas, and why the most interesting question in cognitive science may be "what was thinking before brains existed":

https://unteachablecourses.com/mycelial-networks-wood-wide-web-2026/

For the ecologists here — has the Karst paper changed how CMN research is being designed? Specifically, are new field studies incorporating the controls and alternative hypotheses (soil pore transport, direct root transfer) that the review identified as missing from earlier work, or is the field still largely operating under the old framework?

reddit.com
u/unteachablecourses — 11 hours ago

Zipline has completed 2 million+ deliveries across 125 million autonomous miles with zero serious injuries. Amazon Prime Air has completed roughly 16,000 deliveries and has had seven significant incidents including two drones hitting a construction crane and one crashing into an apartment building.

The drone delivery industry in 2026 has essentially sorted into two tiers, and the dividing line is simpler than most analysis makes it: how much does your aircraft weigh when something goes wrong?

Zipline's P2 and Wing's delivery drones weigh between 10 and 40 pounds. Amazon's MK30 has a maximum takeoff weight of 83 pounds. When a 15-pound drone has a problem, it's an inconvenience. When an 83-pound drone hits an apartment building at speed, people smell smoke and watch propeller fragments fall to the sidewalk. That's not a metaphor — that's what happened in Richardson, Texas in February 2026. Five days later, Amazon launched in Kansas City.

The incident catalog for Amazon Prime Air since resuming operations in April 2025 is worth laying out because the pattern tells you something about how differently the FAA treats operators with different safety profiles. A controlled landing at an Arizona apartment complex in May. A package dropped into a swimming pool in July. Two drones crashing into a construction crane in Tolleson in October — sparking a fire and hazmat response. A drone landing five feet from a resident checking his mailbox. A severed internet cable during ascent in Waco in November. The apartment building crash in Richardson in February 2026. Multiple FAA and NTSB investigations opened. Amazon resumed flights within 48 hours of the crane incident and launched new markets days after the apartment crash.

Compare that to how the FAA treats Part 107 operators — fines reaching $36,770 for a single violation, license suspensions for flying near stadiums. Amazon operates under Part 135 air carrier certification with different oversight mechanisms, but the optics of multiple federal investigations in one year while new markets keep launching on schedule are hard to ignore.

Internal cost projections reported in late 2024 showed Amazon spending roughly $63 per delivery against customer pricing of $4.99 to $9.99. Amazon can absorb that because it's Amazon. As of February 2026, total Prime Air deliveries sit at around 16,000 across operations in Texas, Michigan, Arizona, Florida, and Kansas.
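A quick back-of-envelope on what those numbers imply, using only the figures in this post (the midpoint of the price range is my assumption, and this ignores fixed costs on both sides):

```python
# Back-of-envelope on Amazon Prime Air's per-delivery economics,
# using the reported figures cited above.
cost_per_delivery = 63.00            # reported internal cost, late 2024
price_low, price_high = 4.99, 9.99   # customer pricing range
deliveries = 16_000                  # approximate total as of Feb 2026

# Assumption: customers pay the midpoint of the quoted range on average.
subsidy_per_delivery = cost_per_delivery - (price_low + price_high) / 2
cumulative_subsidy = subsidy_per_delivery * deliveries

print(f"subsidy per delivery: ~${subsidy_per_delivery:.2f}")
print(f"cumulative subsidy so far: ~${cumulative_subsidy / 1e6:.1f}M")
```

Roughly $55 of subsidy per package — the losses to date are trivial at Amazon's scale, which is exactly why "Amazon can absorb that" does the work in this argument.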

Meanwhile, Zipline hit 2 million commercial deliveries in January 2026, has flown over 125 million autonomous miles with zero serious injuries, raised over $600 million in January 2026 (valuation now $7.6 billion), holds BVLOS authorization across all 50 states, and was producing a new drone every hour at its manufacturing facility by end of 2025. The company is expanding to Houston and Phoenix in early 2026. Its safety architecture includes acoustic detect-and-avoid with microphone arrays that can hear other aircraft up to two miles away and 500+ safety checks per second during every flight. The P2 platform uses a tether system that keeps the main aircraft at 300 feet while lowering a smaller delivery droid to the doorstep.

Wing, Alphabet's subsidiary, has now passed 750,000 deliveries and covers a service area reaching over 2 million customers across Houston, Atlanta, Dallas-Fort Worth, Charlotte, and — as of March 2026 — the San Francisco Bay Area. In DFW and Metro Atlanta, the top 25% of customers order three times per week. Delivery volume tripled in the second half of 2025 compared to the first half. Wing extended operating hours to 9 AM through 9 PM in Charlotte and DFW with FAA approval. The 150-store Walmart expansion announced in January 2026 adds Los Angeles, St. Louis, Cincinnati, and Miami to the pipeline.

The underlying regulatory question is Part 108 — the proposed rulemaking that would create a permanent, standardized framework for routine BVLOS operations instead of the current waiver-by-waiver system. It was announced in August 2025 and is still working through the process. Until it's codified, expansion pace is gated by regulatory bandwidth rather than technological capability. The FAA recently approved Wing and Zipline to operate simultaneously over the same DFW suburbs without visual observers — the first time that's happened — which suggests the regulatory direction is toward enabling multi-operator airspace, even if the formal rule isn't done yet.

The honest trajectory for drone delivery isn't the one on any company's investor deck. It's not 500 million deliveries by 2030. It's a specific tool for specific use cases — medical supplies in areas with poor road infrastructure, urgent small-package delivery in suburban markets, high-frequency low-weight consumer goods in neighborhoods where the economics and approvals align. Zipline's origin story is instructive: the company that actually scaled drone delivery built its operation delivering blood and vaccines in Rwanda and Ghana, not same-day retail in suburban Texas. The safety record, the operational discipline, and the regulatory credibility all came from solving a genuinely hard logistics problem where the alternative was people dying because roads were washed out.

Longer analysis covering the full regulatory landscape, the engineering constraints (noise, weather, payload limits, airspace integration), and why the companies scaling carefully are outperforming the one scaling fastest:

https://unteachablecourses.com/drone-delivery-2026/

Question for this community: how much of Amazon's aggressive expansion schedule is driven by the belief that first-mover market presence matters more than safety record, and how much risk does that create for the broader industry if a serious incident involving a bystander triggers a regulatory clampdown that affects operators who haven't had problems?


The longest carbon nanotube ever made is 0.5 meters. A space elevator tether needs to be 100,000 km. But a newer candidate — graphene super laminate — is already produced at kilometer lengths, and 2025 lab results showed spot-welded layers with diamond-like properties.

The concept has existed for 130 years and the bottleneck has always been one thing: the tether. Everything else — the climber system, the anchor station, the counterweight, the power delivery — is hard but solvable with existing or near-term engineering. The tether requires a material with a specific strength of roughly 50-60 GPa·cm³/g. For reference, steel is about 0.25. Kevlar is about 2.5. The best carbon fiber composites hit maybe 4. You need something 15-25x stronger per unit weight than the best structural material in common industrial use.
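The specific-strength gap can be laid out in one place, using the figures above (a rough sketch from the quoted numbers, not engineering data):

```python
# Specific strength comparison (GPa·cm³/g), figures from the text.
requirement = (50, 60)   # space-elevator tether requirement range
materials = {
    "steel": 0.25,
    "Kevlar": 2.5,
    "carbon fiber composite": 4.0,
}

# How far short each material falls of the low and high requirement.
for name, strength in materials.items():
    lo, hi = requirement[0] / strength, requirement[1] / strength
    print(f"{name}: needs {lo:.0f}-{hi:.0f}x improvement")
```

Even the best composites are more than an order of magnitude short, which is why the tether dominates every feasibility discussion.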

Carbon nanotubes have been the poster child since the 1990s. Their theoretical tensile strength is ~150 GPa — more than adequate. The manufacturing reality: the longest single nanotube ever publicly reported is 0.5 meters. Nanotube "forests" have reached 14 cm at Waseda University, growing at one meter every 186 hours. The gap between 0.5 meters and 100,000 kilometers isn't something incremental improvements close on any human timescale. Google X investigated space elevators around 2014, concluded nobody had made a perfect nanotube strand longer than a meter, and put the project in "deep freeze."
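To make the scale of that gap concrete: naively extrapolating the Waseda growth rate to a single continuous tether (which is not how anyone proposes to make one, so treat this purely as an illustration of the distance involved):

```python
# Illustration only: time to grow a 100,000 km strand at the
# Waseda forest growth rate of one meter every 186 hours.
growth_rate_m_per_hr = 1 / 186
tether_length_m = 100_000 * 1000   # 100,000 km in meters

hours = tether_length_m / growth_rate_m_per_hr
years = hours / (24 * 365)
print(f"continuous growth time at current rate: ~{years:,.0f} years")
```

About two million years of uninterrupted growth — the kind of number that explains Google X's "deep freeze" decision better than any qualitative summary.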

Here's what's shifted. The International Space Elevator Consortium has increasingly moved its focus to graphene. Graphene's theoretical tensile strength is ~130 GPa — comparable to nanotubes. The critical difference: polycrystalline graphene is already being manufactured commercially at kilometer lengths and speeds of two meters per minute. The material isn't at tether quality — you need single-crystal graphene with zero grain boundaries, manufactured as a continuous sheet at industrial scale — but the trajectory from lab curiosity to industrial product is incomparably more advanced than the nanotube trajectory.

ISEC's leading candidate is "graphene super laminate" — multiple layers of single-crystal graphene bonded through covalent carbon-carbon spot welding. Each layer retains graphene's extraordinary in-plane strength while the interlayer bonds prevent the shearing weakness of regular multilayer graphene. In September 2025, ISEC reported that the spot-welding process had been demonstrated in the lab and produced a material with diamond-like properties. In February 2026, they published research on atomic oxygen corrosion resistance of the material — addressing one of the critical environmental hazards a tether faces in LEO.

Whether this can be manufactured at 100,000 km continuous lengths, at tether-quality purity, at production speeds that don't require decades, at viable cost — entirely undemonstrated. But the International Academy of Astronautics projected in 2013 that tether materials could achieve the necessary specific strength "within 20 years," putting the breakthrough at roughly 2033. The graphene trajectory is at least consistent with that timeline in the sense that the path is visible, even if the destination hasn't been reached.

The other engineering problems are worth cataloguing because the tether gets all the attention:

The climber needs to ascend 35,786 km to GEO. At reasonable speeds that's an eight-day journey (per Obayashi Corporation's design). It can't carry enough onboard energy for the climb — proposed solutions include ground-based lasers beaming power to photovoltaic cells, which introduces atmospheric attenuation, beam tracking accuracy across thousands of km, and "what happens when something flies through the beam path" as open questions.

Space debris. The tether passes through LEO where objects travel at ~7.8 km/s. A marble-sized fragment hitting a tether the thickness of plastic wrap is catastrophic. ISEC published a June 2025 analysis on this — the ribbon design helps because stress redistributes across width after small punctures, but routine avoidance maneuvers for tracked debris would be necessary. A ribbon can't exactly dodge.

Atmospheric hazards — wind loads, lightning, weather on the bottom; atomic oxygen corrosion in LEO; Van Allen radiation degrading molecular bonds (though studies suggest carbon nanotubes could survive radiation for 1,000+ years); gravitational perturbations from the Moon and Sun creating tether oscillations that need damping.

An ocean-based equatorial anchor platform under millions of newtons of continuous tension, maintained indefinitely, is itself a major engineering project.
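Back to the climber for a moment, because the Obayashi figures above imply a speed worth seeing explicitly:

```python
# Average climb speed implied by the Obayashi design figures cited above.
geo_altitude_km = 35_786   # altitude of geostationary orbit
trip_days = 8              # Obayashi's quoted journey time

avg_speed_kmh = geo_altitude_km / (trip_days * 24)
print(f"average climb speed: ~{avg_speed_kmh:.0f} km/h")
```

Roughly high-speed-rail pace, sustained continuously for eight days on externally beamed power — which is the real content of the power-delivery problem.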

Obayashi Corporation maintains a 2050 target for an operational space elevator. A March 2026 market report values the "space elevator market" — mostly materials research, climber design, and tether dynamics modeling — at $720 million, projected to reach $1.16 billion by 2030.

The comparison I keep coming back to: in 1903, the Wrights flew at Kitty Hawk. In 1969, Apollo 11 landed on the Moon. Sixty-six years from first powered flight to lunar landing. The space elevator concept has existed for 130 years. The materials science has been actively researched for 30. The physics is sound, the engineering challenges are understood, and the materials are making measurable progress. But the gap between "the physics works" and "we can build it" is still measured in orders of magnitude — and the payoff (cost per kg to GEO dropping from ~$20,000 to ~$500, 170,000 metric tons to orbit per year on a mature system) is so transformative that the question isn't whether it's worth pursuing but whether the timeline is measured in decades or generations.
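A rough sketch of that payoff, using the post's figures (the savings line assumes the mature tonnage would otherwise fly at today's prices, which it wouldn't, so read it as a scale illustration only):

```python
# Launch-cost delta cited above, per kg and at mature throughput.
cost_now, cost_elevator = 20_000, 500   # $/kg to GEO
annual_tonnes = 170_000                 # mature-system throughput

reduction = cost_now / cost_elevator
# Illustration: what the delta is worth if that tonnage flew at today's prices.
annual_delta = (cost_now - cost_elevator) * annual_tonnes * 1000  # tonnes -> kg

print(f"cost reduction: {reduction:.0f}x")
print(f"implied annual delta at mature throughput: ~${annual_delta / 1e12:.1f}T")
```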

Longer analysis covering the full engineering breakdown, graphene vs nanotube trajectories, the economics of $500/kg to GEO, and where space elevators sit in the broader landscape of space access moonshots:

https://unteachablecourses.com/space-elevators-2026/

For the materials scientists here — is graphene super laminate a realistic path to tether-grade specific strength, or is the grain boundary / defect problem at manufacturing scale essentially the same bottleneck that killed the nanotube approach, just wearing a different hat?


A photovoltaic retinal implant the thickness of half a human hair restored meaningful central vision in 80% of legally blind AMD patients at 12 months — the first treatment to restore form vision in geographic atrophy. Published in NEJM, CE mark and FDA applications now filed.

The PRIMAvera trial results, published in the New England Journal of Medicine in October 2025, represent the first clinical evidence that an electronic implant can restore central vision in patients with geographic atrophy due to age-related macular degeneration. GA is the end stage of dry AMD — the photoreceptors are dead, the damage was previously considered irreversible, and no approved therapy, investigational approach, or cell therapy had ever produced meaningful visual improvement. The NEJM editorial called PRIMA "the first treatment to restore vision" in this population.

The trial enrolled 38 legally blind patients across 17 sites in five European countries. Of 32 patients assessed at 12 months, 26 (81%) demonstrated clinically meaningful improvement in visual acuity. Mean improvement was 23 letters — roughly 4.6 lines on an eye chart. The best responder gained 59 letters (11.8 lines). Patients could read large print, recognize objects, and perform tasks like cooking and playing cards that they couldn't before implantation. Natural visual acuity without the device remained stable, confirming the improvement was attributable to the implant.
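The headline numbers reconstruct cleanly from the raw figures, using the standard convention of five letters per line on an ETDRS eye chart:

```python
# Reconstructing the PRIMAvera headline figures from the raw numbers above.
assessed, improved = 32, 26            # patients assessed at 12 months
mean_gain_letters = 23                 # mean visual acuity improvement
best_gain_letters = 59                 # best responder
letters_per_line = 5                   # ETDRS chart convention

responder_rate = improved / assessed
print(f"responder rate: {responder_rate:.0%}")
print(f"mean gain: {mean_gain_letters / letters_per_line:.1f} lines")
print(f"best responder: {best_gain_letters / letters_per_line:.1f} lines")
```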

The mechanism: a 2×2mm crystalline silicon chip, 30 micrometers thick, comprising 378 photovoltaic pixels, is implanted beneath the retina within the atrophic lesion. Augmented-reality glasses with a front-facing camera project near-infrared light (880nm) onto the chip. Each pixel converts infrared light into electrical current that stimulates surviving bipolar cells — the retinal neurons downstream of the dead photoreceptors. The bipolar cells relay the signal through the remaining visual pathway to the brain. The infrared light simultaneously carries visual information and powers the chip. No battery, no wires, no external power threading through the eye. The brain learns to merge the prosthetic central vision with whatever peripheral natural vision remains.

The wireless design is significant because the field's most prominent prior device — Second Sight's Argus II, FDA-approved in 2013 — required wired connections that created durability problems. More critically, Second Sight went bankrupt in 2020 and ceased operations in 2022, leaving ~350 patients with orphaned implants and no manufacturer support. The Argus II cautionary tale is why commercial viability matters as much as clinical efficacy in this field — patients make a decades-long commitment to hardware in their body.

Science Corporation (founded by Neuralink co-founder Max Hodak), which acquired Pixium Vision's PRIMA assets in 2024, appears to be addressing the sustainability question aggressively. In March 2026, the company closed an oversubscribed $230M Series C — total funding now roughly $490M — with investors including Lightspeed, Khosla Ventures, Y Combinator, and IQT. CE mark application has been submitted to the EU, with European commercial launch expected later in 2026. FDA application is filed in the US. The company has also expanded PRIMA trials to retinitis pigmentosa and Stargardt disease at Sydney Eye Hospital in Australia, led by Dr. Matthew Simunovic — the first time the device is being tested in inherited retinal degenerations rather than AMD alone.

Caveats worth noting: the PRIMAvera trial was open-label, single-arm, and baseline-controlled — not placebo-controlled. An anonymous retinal-degeneration researcher told Nature that the intensive training and motivation from receiving a novel device might inflate results. The restored vision is grayscale, not color, and limited to central-field perception. Resolution is 378 pixels versus the roughly 6 million cones in a healthy fovea — four to five orders of magnitude below natural vision. Serious adverse events occurred in 19 of 38 patients, though 81% of events occurred within the first two months and 95% of those resolved within two months. One patient required surgery for retinal detachment and proliferative vitreoretinopathy.
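The pixel-count side of that gap, worked out (this compares raw element counts only, which understates the functional difference, since cones and electrodes are not equivalent units):

```python
import math

# The resolution gap: implant pixels vs foveal cones, figures from the text.
pixels, cones = 378, 6_000_000

ratio = cones / pixels
orders = math.log10(ratio)
print(f"pixel-count gap: ~{ratio:,.0f}x ({orders:.1f} orders of magnitude)")
```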

The resolution gap is the fundamental limitation of every bionic eye in 2026. Daniel Palanker, the Stanford researcher whose work underlies PRIMA, draws the comparison to cochlear implants — early devices provided crude sound perception, decades of refinement enabled speech comprehension and music appreciation. The trajectory for retinal implants may follow a similar arc: first generation establishes the principle, subsequent generations improve resolution, and the technology becomes standard practice over decades. Next-generation PRIMA designs are pursuing smaller pixels for higher density, along with electronic zoom and image stabilization.

The broader landscape includes suprachoroidal implants (Bionics Institute, Australia — FDA breakthrough device designation, 97% electrode survival over 2.7 years), cortical visual prostheses that bypass the eye entirely (Neuralink's Blindsight system targeting first human volunteers in 2026; Cortigent's Orion with five-year feasibility data), and Science Corporation's own hybrid Science Eye combining retinal implants with optogenetic gene therapy. But PRIMA is the only device with NEJM-published efficacy data in a multicenter controlled trial.

AMD affects roughly 200 million people globally. GA specifically affects approximately 5 million and is responsible for ~20% of legal blindness in North America. The only approved GA therapies — complement inhibitors pegcetacoplan and avacincaptad pegol — slow progression but require monthly or bimonthly injections and have never restored lost vision. PRIMA is the first device to cross from "slowing the damage" to "reversing the outcome."

Longer analysis covering the full device landscape, the Argus II failure, the resolution problem, and the cochlear implant comparison framework for understanding the technology's trajectory:

https://unteachablecourses.com/retinal-implants-bionic-eyes-2026/

For anyone in ophthalmology or retinal research — how significant is the expansion to RP and Stargardt? The retinal damage in those conditions is more diffuse than the focal atrophy in GA, which seems like it would complicate subretinal implant positioning and potentially limit efficacy. Curious whether anyone has a view on how transferable the PRIMA results are to those populations.


Quantum computing in 2026 is where classical computing was in the early 1950s — room-sized machines solving academic problems, with a transformative future visible in theory and invisible in daily life. The difference is the 1950s scientists didn't have quarterly earnings calls.

Google's Willow chip completed a benchmark calculation in five minutes that would take a classical supercomputer 10^25 years — a number that exceeds the age of the universe by 15 orders of magnitude. IBM promised quantum advantage by end of 2026. Microsoft debuted the first topological qubit processor in February 2025. D-Wave's stock is up 200% in a year. The headlines suggest the revolution has arrived.
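The "15 orders of magnitude" line checks out arithmetically; the age-of-universe value (~13.8 billion years) is my assumed input, not from Google's announcement:

```python
import math

# Sanity check on the Willow benchmark comparison cited above.
classical_runtime_years = 1e25
age_of_universe_years = 1.38e10   # ~13.8 billion years (assumed value)

orders = math.log10(classical_runtime_years / age_of_universe_years)
print(f"exceeds the age of the universe by ~{orders:.0f} orders of magnitude")
```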

The practical reality: quantum computers are not commercially useful at scale. Most real-world applications remain experimental. They are expected to outperform classical computers in specific, commercially meaningful tasks sometime after 2030, not before.

Here's where things actually stand in April 2026, stripped of the press releases.

The field sits in the NISQ era — Noisy Intermediate-Scale Quantum computing. Current processors operate with dozens to a few hundred physical qubits, and those qubits are fragile. They're sensitive to temperature (superconducting quantum computers operate near absolute zero, about 15 millikelvins), electromagnetic interference, vibration, and any interaction with their environment. These interactions cause errors — qubits lose their quantum state through decoherence — and current error rates are high enough that computations longer than a few thousand operations become unreliable.

IBM's Nighthawk processor, delivered late 2025, achieves roughly 5,000 reliable gate operations. IBM expects 7,500 by late 2026, 10,000 by 2027. Those are genuine improvements. They're also roughly five to six orders of magnitude below what's needed for the applications that justify the investment.
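To see where "five to six orders of magnitude" comes from: the ~10^9 figure below is an assumed ballpark for commercially relevant fault-tolerant algorithms, not a number from IBM's roadmap.

```python
import math

# Gap between today's reliable gate counts and application scale.
reliable_ops_2025 = 5_000   # IBM Nighthawk, per the text
needed_ops = 1e9            # assumed ballpark for commercially useful algorithms

gap_orders = math.log10(needed_ops / reliable_ops_2025)
print(f"gap: ~{gap_orders:.1f} orders of magnitude")
```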

The path from "interesting but impractical" to "commercially useful" runs through quantum error correction — using multiple physical qubits to encode a single logical qubit protected against errors. Google's Willow demonstrated "below threshold" error correction where adding more qubits decreased errors rather than increasing them. That's foundational. But the demonstration was limited to quantum memory, not gate operations, and logical error rates are still orders of magnitude from practical.

One telling detail about where the field stands: there's no consensus on what a qubit should even be made of. In classical computing, the transistor won decades ago. In quantum computing, at least five competing technologies are under active development with billions behind each — superconducting qubits (IBM, Google), trapped ions (IonQ, Quantinuum), neutral atoms (QuEra, Atom Computing, Pasqal), photonic approaches (PsiQuantum, Xanadu), and Microsoft's largely unproven topological qubits.

A few things have happened since the Willow announcement that are worth tracking:

In January 2026, a multi-university paper in Science (UChicago, Stanford, MIT, Innsbruck, Delft) explicitly compared the current state of quantum technology to the pre-transistor era of classical computing — foundational physics established, functional systems exist, but scaling to utility requires engineering breakthroughs that could take years or decades. They called it a "transistor moment," which sounds optimistic until you remember how long it took from the first transistor to the first useful computer.

In February, Fermilab and MIT Lincoln Lab demonstrated trapped ions controlled by in-vacuum cryoelectronics — a key step toward scalable ion-trap quantum computing, because current systems rely on impractical wiring between room-temperature electronics and cryogenic traps that breaks down as you add qubits.

In March, IBM released the first published quantum-centric supercomputing reference architecture — a blueprint for integrating quantum processors alongside GPUs and CPUs in hybrid systems. This is significant because it acknowledges what the field has quietly accepted: quantum computers aren't going to replace classical computers. They're going to work alongside them, handling specific subtasks where quantum offers advantage. The hybrid model is the realistic path, and IBM formalizing an architecture for it matters.

On the neutral atom front, Microsoft and Atom Computing plan to deliver an error-corrected quantum computer to Denmark's Novo Nordisk Foundation in 2026. QuEra delivered a machine ready for error correction to Japan's AIST and plans global availability this year. Both teams expect to put 100,000 atoms into a single vacuum chamber within a few years — a scalability advantage that superconducting approaches can't easily match.

D-Wave claimed an industry-first in scalable on-chip cryogenic control for gate-model qubits in January, addressing the wiring bottleneck. Their stock reflects the hype cycle more than the technical reality, but the underlying engineering is genuine.

What quantum computers actually can do today: simulate molecular behavior (the most natural application — using a quantum system to simulate a quantum system), certain optimization problems, and cryptography research. What they cannot do: run AI models, replace cloud computing, speed up databases, or accomplish any general-purpose task more efficiently than a classical machine. NIST finalized post-quantum cryptography standards in 2024 because the threat to current encryption is real — it just requires millions of error-corrected qubits that don't exist yet.

IBM's roadmap targets fault-tolerant quantum computing — their Quantum Starling machine, ~200 logical qubits across ~10,000 physical qubits — by 2029. IBM has been hitting interim milestones consistently, which matters because roadmap credibility is rare in this field. Their 2025 Loon processor demonstrated the key hardware components, and they achieved real-time error decoding in under 480 nanoseconds, a year ahead of schedule.

The pattern is familiar if you've followed fusion or autonomous vehicles: genuine technical progress, consistent milestone achievement, and a commercial timeline that keeps resolving into "a few more years." The most honest framing isn't that quantum computing doesn't work — the physics absolutely works. It's that the gap between where we are and where we need to be is measured in orders of magnitude, and orders of magnitude don't close on schedule.

Longer analysis covering the error correction problem, the qubit technology competition, IBM/Google/Microsoft roadmaps, and what "quantum advantage" actually means versus how it's marketed:

https://unteachablecourses.com/quantum-computing-2026/

Genuine question for the technical people here: does the neutral atom approach (QuEra, Atom Computing) end up winning the qubit race specifically because of the scalability advantage — 100,000 atoms in a single chamber vs. the wiring nightmare of scaling superconducting systems — or is the gate speed disadvantage too steep for it to matter?


The Line's construction was suspended in September 2025 after completing 2.4 km of foundations out of 170 km. In March 2026, three more major contracts totaling $6B+ were cancelled. An internal audit leaked to the WSJ projected final costs of $8.8 trillion and a completion timeline stretching to 208

The original spec: two parallel mirrored walls, each 500 meters tall, extending 170 km in a straight line through the Saudi desert. 200 meters wide. Nine million residents. No cars, no streets. Population density of 260,000 people per square kilometer — six times denser than Manila, the densest city on Earth. Vertical farms, flying taxis, AI managing the city like a cognitive organism. Estimated cost: $500 billion. Estimated completion: 2030-ish.
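The density figure follows directly from the stated geometry (a sanity check on the spec, nothing more):

```python
# The Line's population density from its stated footprint.
length_km, width_km = 170, 0.2   # 170 km long, 200 m wide
residents = 9_000_000

footprint_km2 = length_km * width_km
density = residents / footprint_km2
print(f"footprint: {footprint_km2:.0f} km^2")
print(f"density: ~{density:,.0f} people/km^2")
```

Nine million people on 34 km² works out to about 265,000 per km², matching the quoted figure.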

What actually happened: the Saudi sovereign wealth fund paused construction on September 16, 2025. The NEOM CEO was relieved of duties in November. The 2029 Asian Winter Games at Trojena — a ski resort on manufactured snow in the Saudi mountains — were indefinitely postponed in January 2026 and relocated to Almaty. Workforce cut roughly 35%. Over 1,000 employees relocated from the construction site to Riyadh. The PIF recorded an $8 billion write-down.

Then in March 2026, three more major contracts were terminated: Webuild's $4.7 billion dam and lake project for Trojena, Eversendai's structural steel contract for the Trojena ski village, and Hyundai's $1 billion tunnel contract for The Line's transport infrastructure. That's $6+ billion in cancellations in a single month for a project that's supposedly "a strategic priority."

The engineering problems were identifiable from the announcement. An Imperial College London analysis noted that building The Line to spec within the proposed timeline would require construction at 15,000 times the rate of normal U.K. construction. Pricing the enclosed volume — roughly 17 billion cubic meters — at a standard high-rise construction cost of ~$1,000/m³ implies structural costs alone of $17 trillion. The mirrored glass exterior would create a solar-concentrator effect between the walls. The structural loads on a continuous 500-meter-tall wall extending 170 km — wind loading, thermal expansion, seismic forces in a region with active fault lines — exceed anything ever built. Water supply for nine million people in the Tabuk desert would require the largest desalination infrastructure ever constructed.
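The $17 trillion figure is simple arithmetic on the stated dimensions; a sketch of the calculation (the ~$1,000/m³ unit cost is the analysis's assumed high-rise rate, not a quoted contract price):

```python
# Structural-cost estimate for The Line from its published dimensions.
length_m = 170_000        # 170 km
width_m = 200
height_m = 500

volume_m3 = length_m * width_m * height_m   # 17 billion m^3 enclosed
cost_per_m3 = 1_000                         # assumed high-rise $/m^3

structural_cost = volume_m3 * cost_per_m3
print(f"volume: {volume_m3:.1e} m^3, cost: ${structural_cost / 1e12:.0f} trillion")
# → volume: 1.7e+10 m^3, cost: $17 trillion
```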

Each of these is solvable in isolation. Together, at this scale, on this timeline, in this location, they compound into something approaching impossibility. The Financial Times reported that MBS has now privately accepted the original vision will be realized as something "far smaller." One former employee, quoted anonymously, said the situation is now about "letting MBS down gently."

What's interesting from an urban planning perspective is the pattern. This is the same trajectory as every ambitious planned-from-scratch city in modern history, just at a larger budget. Brasília works but is widely considered sterile. Naypyidaw is a ghost town. Masdar City in Abu Dhabi — billed as the world's first zero-carbon city in 2006 — has been quietly scaled back to a small neighborhood. Songdo in South Korea is roughly half-occupied a decade after opening.

The consistent lesson: planned cities that succeed tend to be modest in scope and flexible in design. Planned cities that lead with a grand vision and a promotional video tend to become very expensive lessons in the difference between rendering and reality. The Line followed the same pattern as Fordlandia, the Concorde, and the Superconducting Super Collider: vision first, engineering second, constraints never.

The pivot is telling, though. Architects have been tasked with repurposing existing infrastructure — the trench, foundations, and cores — into something deliverable. The leading candidate is a much shorter coastal section (2.4-5 km) at reduced height, with remaining earthworks potentially converted to AI data centers. A $5 billion DataVolt partnership for data center infrastructure at Oxagon was announced in February 2026. Bloomberg reported additional deals with AWS and Google Cloud are in negotiations. NEOM's green hydrogen plant is 80% complete. The project may end up as a tech infrastructure hub rather than a city — which is arguably more useful than what was originally proposed, but bears almost no resemblance to the mirrored canyon city in the 2022 video.

PIF construction contracts fell from $71 billion to $30 billion — a nearly 60% reduction — as capital gets reallocated to 2034 FIFA World Cup stadiums and Expo 2030.

I wrote a longer analysis covering the full engineering breakdown, the history of planned-from-scratch cities, and where this fits in the broader pattern of utopian megaprojects:

https://unteachablecourses.com/neom-and-the-line-2026-update/

For the planners here: what's the most instructive comparison case? I keep landing on Masdar City because the arc is almost identical — Gulf state money, zero-carbon branding, renders that looked like a different planet, quiet scale-back to something functional but unrecognizable — but curious whether anyone sees a closer analog.

reddit.com
u/unteachablecourses — 13 hours ago