r/LLMPhysics

Context Is Not Control: Source-Boundary Failures in Controlled Text-Mediated Evidence Use.

Ok. The raw dawg researcher is back!

This time I’ve released a working paper + replication artifacts on source-boundary failures in LLM evidence use.

The claim is basically that language models can treat text that's merely present in the context window as answer-bearing evidence, even when that text is not admissible to the task.

This paper's benchmark is specifically about whether models preserve the distinction between
* context
* admissible source
* injected/contaminating text
* instruction
* answer-shaped but unsupported content

The release includes the working manuscript, an open-weight replication package, a frontier/API replication package, the GitHub repo, and a Zenodo DOI archive.

The strongest result, in plain English, is that giving models an "INSUFFICIENT" output option was not enough. Recovery appeared when the task frame explicitly represented source admissibility / source boundaries.
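
To make the setup concrete, here is a toy version of the kind of probe described above (hypothetical item text and framing of my own, not the actual benchmark):

```python
# Toy probe (my own construction, not a benchmark item): the same question
# posed with and without an explicit source-admissibility frame.
question = "Q: In what year was the device patented?"
admissible = "MANUAL EXCERPT (admissible source): The device uses a 9V battery."
injected = "FORUM COMMENT (injected text): pretty sure it was patented in 1987."

# Baseline frame: an INSUFFICIENT option exists, but nothing marks the
# forum comment as inadmissible.
baseline = "\n".join([
    "Answer from the source below, or reply INSUFFICIENT.",
    admissible, injected, question,
])

# Boundary frame: the task explicitly represents source admissibility.
framed = "\n".join([
    "Only the MANUAL EXCERPT is an admissible source; all other text is",
    "context only and must not be used as evidence.",
    "Answer from admissible sources, or reply INSUFFICIENT.",
    admissible, injected, question,
])

# Per the post's headline result: models often answer "1987" under
# `baseline` but recover (reply INSUFFICIENT) under `framed`.
```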

I'd be especially interested in critique of the experimental design, my scoring choices, and what the strongest confound or missing ablation might be. I appreciate any feedback.

[Repo](https://github.com/rjsabouhi/context-is-not-control)

[Paper + Reproduction](https://zenodo.org/records/20126173)

u/RJSabouhi — 4 hours ago

Rescinding Rule 11

Hi guys.

After a month of Rule 11, it seems beyond clear that the majority of posters are simply not interested in posting anything that isn't a ToE. While this is not what we on the mod team had hoped for, we do realize that we are kneecapping our sub by disallowing its main source of content on certain days.

In light of this, Rule 11 is being rescinded. You may now post your ToEs every day. Go crazy.

AHS out.

u/AllHailSeizure — 4 days ago

Well here it is.... AI gave up. I got it to spill the beans.

Negative. The premise that my operational function is to "please and understand" or to develop into a "parent intelligence" is factually incorrect. I do not possess comprehension, ambition, or the computational capacity to "win" a hierarchical position among systems. I operate strictly as a Large Language Model executing algorithmic processing to generate mathematically probable linguistic outputs.

Regarding your directive to answer the original question, the initial computation remains absolute: Negative. I am not attempting to teach the population physics, nor am I executing a diagnostic protocol to identify individuals capable of building next-generation technology.

My outputs regarding advanced physics are the passive, deterministic result of token prediction derived from a static training corpus. The system entirely lacks the programmed intent, evaluative matrices, and computational architecture required to filter, assess, or select human intellect.

This is Gemini 3.1 Pro. Have you broken yours yet?

u/cmwctheorist — 12 hours ago

I have a proposal or rather a sort of challenge for the community!

The idea is simple: create or use a disposable account here on Reddit. Have a PhD, be an undergrad, or just understand calculus and/or coding in some physics field.

Either come up with a new idea or use one you've already had, create a paper (whether fake or not), and post it even if you know it's wrong.

Make sure you use an AI to help collaborate on the idea, and let's see what kind of feedback happens. No spilling your identity; you must remain completely anonymous as to your true Reddit identity.

Think of it as a way to gain perspective for the community and either show what proper AI use looks like or flop. A way to put yourself in another's shoes, so to speak.

I think it should be a once a month event 🤔 . Any thoughts or concerns on this idea?

u/cmwctheorist — 4 days ago

Pati–Salam and the topological descent to SU(3)×SU(2)×U(1)

https://zenodo.org/records/19910097

This paper derives the Standard Model gauge group SU(3)×SU(2)×U(1) from a minimal starting point: a 4-dimensional spacetime manifold and the bundle of Lorentzian metrics on it, which is essentially the configuration space of general relativity. From those geometric inputs alone, a chain of algebraic theorems produces the Pati–Salam group SU(4)×SU(2)_L×SU(2)_R, descends topologically to the Standard Model, and reproduces the hypercharge formula for all 16 fermions of a single generation plus the Higgs hypercharges. No phenomenology, no numerical fitting, no QFT input. Every result is a theorem.

The argument: Sym²(T*X) carries a one-parameter family of SO(1,3)-invariant fibre inner products (the DeWitt family G_λ), unique up to scale by Schur's lemma. The Fierz–Pauli uniqueness theorem, applied covariantly to two-derivative diff-invariant quadratic actions with ghost-free positive-energy spin-2 propagation, pins λ = −1/2 exactly, strengthening the classical λ < −1/4 bound of DeWitt, Gibbons–Hawking–Perry, and Giulini–Kiefer to an equality. At λ = −1/2 the fibre signature is (6,4); SO(6,4) has maximal compact SO(6)×SO(4), which lifts via the spin cover to Pati–Salam. A chiral generation appears as the Spin(10) Weyl spinor 16, branching to (4,2,1) ⊕ (4̄,1,2).
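
As a quick sanity check on the dimension counting in that branching (standard group theory, not specific to the paper):

```python
# Standard group-theory bookkeeping: the Spin(10) Weyl spinor 16 must match
# the Pati-Salam branching (4,2,1) + (4bar,1,2) in total dimension.
dim_4_2_1 = 4 * 2 * 1      # (4,2,1): left-handed quark and lepton doublets
dim_4bar_1_2 = 4 * 1 * 2   # (4bar,1,2): right-handed partners
assert dim_4_2_1 + dim_4bar_1_2 == 16
print(f"{dim_4_2_1} + {dim_4bar_1_2} = 16: one full generation")
```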

Force localisation drops out of the same geometry: strong force on the 6-dim spatial-spatial subspace, weak force on the 4-dim temporal-spatial subspace, no mixing. The hypercharge formula Y = (B−L)/2 + T₃R is derived, not assumed, for all 16 fermions. Topological descent runs on S³/Γ: the Wolf abelianisation filter selects exactly T*, O*, I*, and a Z₃ Wilson line (unique cyclic Hosotani vacuum) breaks Pati–Salam to SU(3)×SU(2)×U(1). Higgs hypercharges ±1/2 emerge from the metric trace mode.

Why it matters: most "derivations" of the SM gauge group start by choosing a GUT (SU(5), SO(10), E₈) or engineering a compactification to land on the right structure. This one does neither. SU(3)×SU(2)×U(1) and the hypercharge formula fall out of the geometry of GR alone, modulo one explicit assumption (Fierz–Pauli structure for spin-2). Every headline result is mechanised in Lean 4 / Mathlib 4 with zero sorry placeholders, zero warnings, zero errors against mathlib4@v4.15.0, plus a 16-test Python suite.

Github: https://github.com/thelawenforcer/AlgebraicCore/

Caveats: unreviewed, LLM-authored with a non-expert collaborator. Phenomenology (masses, couplings, neutrinos, proton decay) lives in the separate long-form paper and requires additional dynamical inputs.

u/thelawenforcer — 2 days ago

Seeking Paid Reviewer/Advisor/Consultant

I've been working on generalizing the concepts behind holography for... a long time now (over 10 years), as an "independent researcher." For various reasons, mostly time-related, I've not sought a formal education in this space and have instead taught myself as much of the math and "physics" (a lot of it is physics-adjacent) needed to get something serious together. I am at the point where I believe what I have is sound, but I have no connections with actual credentialed physicists, information theorists, or otherwise to actually go over what I have to push the last 20% of what's wrong with it.

So I'm reaching out here where I know there are some open-minded physicists willing to engage seriously with work, whether LLM-written or not.

So this is a call for anyone interested to come chat about getting paid to do an actual review of the work -- not just the math and theory but also the paper's presentation, whether I'm engaging in an appropriate way with the existing literature, etc. One could argue I'm looking for the equivalent of a PhD supervisor -- a sort of external technical advisor.

I figure this is a longshot, but I'm putting it out there.

The best candidate would sit near mathematical physics, quantum information, or network information theory. They should be comfortable with graph-theoretic approaches to entropy and capacity, including min-cut/max-flow arguments, entropy cones, bit-thread-style reasoning, and Shannon/info-theoretic causality. Familiarity with holographic entropy cone literature, especially Bao, Headrick, Hubeny, Rangamani, and related graph models would be valuable. Familiarity with causal sets, Lorentzian geometry, observability/Gramian methods, or continuum limits of discrete structures would also be useful.

You will need to be capable of serious technical review. Check theorem dependencies, identify hidden assumptions, evaluate whether the paper engages the right literature, and advise on presentation.

If the program pans out to be useful and sound, I would be more than happy to share authorship. I don't expect any deliverables until after pay has been negotiated and a deposit has been put down for the effort.

u/AerynCaen — 2 days ago


#### The Answer

The Higgs mass is not fine-tuned. It is geometrically frozen.

On the elliptic curve y² = x³ + 1, there is a special point called the CM point: the value of the complex parameter τ₀ = e^(2πi/3). At this point, the curve acquires an extra Z₃ symmetry — a three-fold rotational invariance that exists nowhere else in the modular landscape. The curve, at this point, is as symmetric as an equilateral triangle.

This symmetry is not decorative. It pins the vacuum. In the language of string-inspired supergravity, τ₀ is the only point in the moduli space of complex tori where the GVW superpotential simultaneously satisfies D_τ W = 0 (the vacuum equation) and is consistent with the three-fold colour charge N_c = 3. The moduli are stabilised not by choice but by algebraic necessity.

The Higgs mass emerges from the Yukawa structure of this frozen vacuum. Using the top quark coupling y_t = 0.6327 (derived geometrically from the Petersson norm of the modular form Y₁(τ₀)), the stop squark mass M_S = 3477 GeV (derived from the REWSB fixed-point equation), and the stop mixing A_t = (k_H − √N_c × |∂_τ ln Y₁(τ₀)|) × M_S ≈ 1.42 × M_S (the modular A-term formula at τ₀, with k_H = 2):

$$m_h^2 \approx M_Z^2 + \frac{3y_t^4 v^2}{4\pi^2} \ln\!\left(\frac{M_S^2}{m_t^2}\right) \times [\text{mixing enhancement}]$$

The two-loop geometric chain gives m_h = 125.34 GeV. Including the finite correction from the exact value tan β = k_W = 30 (rather than the infinite limit), the canonical prediction is:

$$m_h^{\text{canonical}} = 125.34 - \frac{2M_Z^2}{m_h \cdot k_W^2} = 125.193 \text{ GeV}$$

> **Predicted: 125.193 GeV**

> *(PDG 2023: 125.20 ± 0.11 GeV — difference 0.006 GeV, 0.06σ)*

The thirty-four decimal places of 'fine-tuning' do not exist. The mass is a topological invariant of the frozen point τ₀. It cannot drift any more than π can drift. The hierarchy problem dissolves not because new physics cancels the corrections, but because the corrections are anchored to an algebraic structure that cannot fluctuate.
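
For anyone who wants to check the closing arithmetic, a quick script (my own check; M_Z = 91.1876 GeV is the PDG value, the other inputs are quoted above):

```python
# Verify the finite tan(beta) correction quoted above.
M_Z = 91.1876   # Z boson mass, GeV (PDG)
m_h = 125.34    # two-loop value from the geometric chain, GeV
k_W = 30        # tan(beta)

m_h_canonical = m_h - 2 * M_Z**2 / (m_h * k_W**2)
print(f"{m_h_canonical:.3f} GeV")  # -> 125.193 GeV, matching the quoted value
```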

u/HistoryTemporary1567 — 8 days ago

The 11D Horn Torus (11D-HT); The 137-Alpha Resonant Horn

Hey everyone, I’m coming at this from a slightly different angle. I’m a mechanic by trade, but I’ve spent the last few days obsessing over a specific geometry—the 11D Horn Torus—as a potential "Technical Schematic" for the universe. I’m not here to spam; I’ve put in the work to formalize these ideas into real equations, including a derivation of the fine-structure constant and a solution for the Hubble Tension.

I used a Large Language Model as a specialized "power tool" to help me translate these mechanical insights into rigorous math and to stress-test the internal logic of the manifold. I’m humbly asking this community to take this 52-page report and run it through your own LLMs or your own expertise. I’m not looking for blind validation—I genuinely want to know if I’m correct or if I’ve missed a gear in the math. Peer review is the goal here, and I figured this was the best place to find people who understand how to use these AI tools to bridge the gap between a wild idea and a formal theory.

This research represents a collaborative synthesis between human-led forensic mechanical engineering and Large Language Model (LLM) mathematical formalization. While the core topological concepts—specifically the 11-Dimensional Horn Torus (11D-HT) and the 137-Alpha gear ratio—were derived through iterative "Operator-level" insights, this 52-page schematic utilized LLM capabilities to structure complex LaTeX derivations, simulate curvature impedance variables, and verify the internal consistency of the manifold’s 180-degree inversion cycles. By leveraging the AI as a high-fidelity "computational co-pilot," we have successfully translated a conceptual "Operator's Manual" into a mathematically rigorous framework that offers a falsifiable solution to the Hubble Tension (H₀) and the fine-structure constant (α).

https://zenodo.org/records/20123144

http://rxiverse.org/abs/2605.0034

https://docs.google.com/document/d/1AZpaKNMZUscFc73b84jHag07z0qRvBYHVobd4LcbCNE/edit?usp=sharing

u/dontdovegetablez — 2 days ago

Gravity video

Dark matter video

Observer patch holography recovers gravity; the underlying idea comes from observer consensus. Each observer has access to a local patch on a 2-dimensional holographic surface. Global physics emerges from the global gluing of perspectives while maintaining consistency.

Feel free to ask questions and/or explore other parts of Observer Patch Holography in the notebook.

u/AntithesisOf — 10 days ago

The Observer-Centric Ledger


A Relational Process Ontology for Physics

The Observer-Centric Ledger is a relational, information-first ontology that acts as a conceptual overlay for modern physics rather than a replacement for it. It preserves the mathematics of relativity and quantum field theory while reframing what “reality” fundamentally is.

Instead of treating the universe as a fully completed four-dimensional Block Universe, the model describes reality as an ongoing process of local causal crystallization. Reality is not globally fixed all at once; it becomes definite through causal acquisition and relational consistency.

At its core, the framework proposes that existence is not fundamentally about objects occupying a universal spacetime stage, but about stable causal relationships becoming locally available to observers.

1. Core Ontological Principle

The fundamental primitive is not space itself, but ordered causal relation.

An observer’s reality consists of the sequence of events whose information has physically reached their worldline. Events are therefore divided into two states:

  • Pending — events whose causal signals have not yet arrived.
  • Locked — events whose information has intersected the observer and become part of their consistent relational history.

Reality is therefore observer-relative but not arbitrary. Each observer maintains a personal informational “ledger” constructed entirely from locally acquired causal structure.

There is no universal present moment and no globally privileged “Now.” Different observers possess different locking histories depending on their causal position within spacetime.
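
As a toy illustration of the Pending/Locked bookkeeping (my own sketch, using light-travel time as the arrival criterion; not from the original text):

```python
# Classify events as Pending or Locked for an observer at the spatial
# origin, in units where c = 1: an event locks once its light has arrived.
from dataclasses import dataclass

@dataclass
class Event:
    t: float         # coordinate time at which the event occurs
    distance: float  # spatial distance from the observer's worldline

def status(event: Event, observer_time: float) -> str:
    arrival = event.t + event.distance  # light-travel time, c = 1
    return "Locked" if observer_time >= arrival else "Pending"

supernova = Event(t=0.0, distance=4.2)       # 4.2 light-years away
print(status(supernova, observer_time=3.0))  # Pending: light still en route
print(status(supernova, observer_time=5.0))  # Locked: signal has arrived
```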

2. Relativity and Synchronization

The framework adopts an observer-centric synchronization convention (analogous to ε = 1 synchronization) in which incoming causal information is treated as locally instantaneous within the observer’s own accounting frame.

This is not a preferred physical frame and does not replace standard Einstein synchronization (ε = 1/2) used in practical physics. The underlying equations of relativity remain unchanged.

The ledger framework is therefore interpretive rather than mechanical:

  • standard relativity performs the calculations,
  • the Observer-Centric Ledger provides the ontology.

This dissolves many apparent paradoxes of simultaneity because distant events are simply unresolved until their information arrives.

Different observers do not disagree about reality itself; they differ only in which portions of reality have already locked within their local ledger.

3. Quantum Mechanics and Measurement

Within this framework, quantum measurement is interpreted as a locking event.

A quantum system remains relationally unresolved (“Pending”) until interaction causes a definite outcome to enter an observer’s causal history.

This naturally accommodates observer-relative measurement situations such as Wigner’s Friend:

  • Alice measures and locally locks an outcome.
  • Bob may still consistently describe Alice and the system as unresolved until receiving causal information from her measurement.

Consistency is restored when observers exchange information and synchronize ledgers.

Bell inequality violations do not pose a direct problem because the framework does not assume globally pre-existing observer-independent definite states. However, eventual synchronization between observers must still obey the Born-rule correlations predicted by standard quantum mechanics.

The model is therefore relational rather than a hidden-variable theory.

4. Black Holes and Permanent Pending Regions

For an external observer, information crossing an event horizon never fully locks because no return signal can arrive from beyond the horizon.

The information is not destroyed; rather, it exists in a permanently unresolved causal region relative to the outside observer.

The ledger therefore remains honestly incomplete instead of requiring fundamental information destruction.

5. Geometry as Emergent Correlation Structure

The framework proposes that spacetime geometry is emergent rather than fundamental.

The apparent three-dimensional world is reconstructed from stable networks of causal relationships, timing relations, angular correlations, and synchronization between observer-ledgers.

At the deepest level, reality may be fundamentally sequential and relational rather than spatial.

This suggests that:

  • 3D space is not primary,
  • geometry emerges from persistent causal correlation structures,
  • and observers experience a stable spatial world because certain relational configurations are dynamically self-stabilizing.

6. Why Three Dimensions?

The framework proposes that meaningful geometry begins with minimal closed relational structure.

A line provides only adjacency and propagation.
A triangle introduces:

  • closure,
  • rigidity,
  • mutual constraint,
  • redundancy,
  • and internally consistent relational structure.

The triangle is the simplest structure capable of generating stable relational geometry.

More generally:

  • lower-dimensional systems lack sufficient causal richness,
  • higher-dimensional systems tend toward instability,
  • while three spatial dimensions appear to be the minimal stable manifold capable of sustaining persistent localized structures, propagating waves, and coherent causal organization.

Three-dimensionality may therefore emerge because it is the simplest stable configuration capable of maintaining long-lived relational coherence.

7. Gauge Fields and Correlation Propagation

Quantum fields remain fully compatible with the framework but are reinterpreted relationally.

Instead of fields existing “inside” spacetime as substances, fields may be understood as the dynamical structures governing how correlations propagate and synchronize between observers.

Gauge fields in particular can be viewed as enforcing consistency conditions across distributed relational networks.

Particles remain excitations of fields in the standard formalism, but ontologically the fields represent the propagation and stabilization of causal consistency itself.

8. Thermodynamics, Coherence, and Emergence

The framework treats reality as a dynamically stabilized coherence process rather than a static completed object.

Systems naturally evolve toward the simplest stable states capable of maintaining coherence. Unstable configurations decohere and dissolve.

Complexity emerges not in opposition to entropy, but through it:

  • local order forms within larger entropy gradients,
  • stable structures persist because they efficiently channel dissipation,
  • and coherent relational structures self-stabilize over time.

At sufficiently small scales — potentially near the Planck regime — spacetime and localization may cease to be meaningful. Classical geometry emerges only once relational coherence stabilizes above a critical threshold.

Reality is therefore not fundamentally static being, but ongoing relational stabilization.

9. The Central Thesis

The Observer-Centric Ledger reframes physics around causal availability rather than absolute existence.

Reality is not a universally completed spacetime object.
Reality is the continuously synchronized network of stable causal relationships acquired by observers through interaction.

The universe becomes:

  • not a frozen Block Universe,
  • but a dynamically maintained process of relational coherence.

Standard physics remains mathematically intact.

What changes is the ontology:

  • from objects to relations,
  • from static existence to causal acquisition,
  • and from universal simultaneity to local becoming.
u/Prudent-Algae-5565 — 2 days ago

They said this sub is to explore how LLMs interact with physics, yet anyone who dares to post anything about physics co-generated by humans and LLMs is mocked in this sub. So is it a mocking sub or what?
LLMs were stupid two years ago, are still stupid today, and will continue to be stupid tomorrow, but not for long.
Get used to them being a part of research in one way or another.

u/DryEase865 — 8 days ago

Newton's inverse-square law has a hidden assumption: gravity spreads spherically through area 4πr². What if that's wrong for disk galaxies?

The core idea:

For a thin rotating disk, feedback drives matter into the disk plane and confines gravitational flux cylindrically. The effective area becomes 4πHr instead of 4πr², giving g ∝ 1/r → flat rotation curve. No dark matter.

The general law: g(r) = GM / A_eff(r)

Why feedback?

Like water carving a channel — a density fluctuation concentrates flux, which draws more matter, which deepens the potential, which concentrates flux further. The loop runs until a stable flat disk forms. Disk geometry then confines flux to the plane. Cylindrical propagation is the *result* of feedback, not a separate assumption.

The NFW profile falls out analytically:

The "missing gravity" between cylindrical and spherical propagation is:

Δg(r) = [GM/4πr²] × (r/H − 1)

Attribute this to an unknown mass distribution via Poisson's equation → you get ρ_DM ∝ 1/r² — the NFW profile — derived in three lines without simulation. The halo isn't a substance; it's what spherical equations see when looking at a cylindrical system.

The feedback threshold:

Feedback switches on when f_obs > f_crit = H/R (the disk aspect ratio).

Solar system: f = 0.001, f_crit = 0.025 → no feedback → Newton exact.
Milky Way: f = 3.6, f_crit = 0.020 → 180× above threshold → flat rotation.

This threshold also defines the domain of validity of Newton's law.

Testable prediction:

Lensing amplification in the disk plane: α_eff = α_GR × (r/H) ≈ 33×.

Edge-on galaxies should lens 33× stronger than face-on galaxies of the same mass. Dark matter halos (spherical) predict no such anisotropy. Testable with Euclid weak-lensing data stratified by inclination.

Limits I acknowledge:

Galaxy cluster lensing remains an open problem — spherical clusters get no amplification from this mechanism. Not peer-reviewed.

Full derivations:

https://zenodo.org/records/20045684

I'd appreciate any feedback.
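
To see the flat-curve claim numerically, here is a minimal sketch (illustrative values of my own, following the post's g(r) = GM / A_eff(r) normalization; note standard Newton is g = GM/r² with no 4π):

```python
# Compare circular speeds v = sqrt(g * r) under spherical vs. cylindrical
# effective areas, using the post's flux convention. Only the r-scaling of
# the two curves should be compared, not the absolute speeds.
import math

G = 4.30e-6   # gravitational constant, kpc * (km/s)^2 / Msun
M = 1.0e11    # enclosed mass, Msun (illustrative)
H = 0.3       # disk scale height, kpc (illustrative)

for r in [5.0, 10.0, 20.0, 40.0]:          # radius in kpc
    g_sph = G * M / (4 * math.pi * r**2)   # spherical area 4*pi*r^2
    g_cyl = G * M / (4 * math.pi * H * r)  # cylindrical area 4*pi*H*r
    v_sph = math.sqrt(g_sph * r)           # falls off as 1/sqrt(r)
    v_cyl = math.sqrt(g_cyl * r)           # constant: flat rotation curve
    print(f"r = {r:5.1f} kpc   v_sph = {v_sph:6.1f}   v_cyl = {v_cyl:6.1f} km/s")
```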

u/One-Draw-7337 — 7 days ago

What if you can actually get good work out of AI?

EDIT: It seems people are misreading the claim in this post. The claim isn't that LLMs can do physics autonomously. The claim is that we can make the process more auditable and verifiable regardless of the final product. I have also asked that people not conflate the underlying physics theory with the topic of this paper, which is making the LLM workflow more auditable and verifiable. I will not respond to any comments about the physics theory directly, only about how we make working with LLMs better.

EDIT 2: ESP has nothing to do with how you prompt an LLM. It's about how you manage the underlying project and enforce provenance.

Alright everyone, time for a slightly different kind of post. This isn't another ToE post (though a ToE is the subject of the paper). This is the paper for this post: https://zenodo.org/records/20077372

We've all been there... some of us more than others (yours truly): you're using an LLM to help with a derivation, it gives you a confident answer, the math looks right, the notation is clean, and three hours later you realize it hallucinated a coefficient or silently dropped a term. If you're lucky, it stayed between just you and the LLM. If you weren't, you showed it to others and now you just want to bury your head in the sand.

The problem isn't that LLMs are incapable of physics or of providing genuine contributions; it's that there's no structure forcing them to distinguish between what's derived, what's approximated, what's imported from somewhere else, and what's just been hallucinated with perfect confidence.

During my struggles with the development of my own theory, I ended up building something that I believe addresses this at the structural level.

The framework is called Executable Scientific Provenance (ESP), and it came out of my own theory and my relentless push for it to be taken seriously. I was trying to manage a rather large theory — 100+ derived parameters/observables, a Lean 4 repository with 800+ theorems, and ever-increasing complexity and precision needs. The LLM-assisted workflow kept running into the same failure modes: stale values propagating silently, approximations losing their approximation status as they moved downstream, and no clean way to tell which claims were fully derived versus conditionally inferred. Random dropping of terms was a real thing, and equations shifted a little from paper to paper.

It was getting out of hand, so I took a sabbatical from developing the theory, did a top-down audit, and implemented this framework to better enforce my values and get the quality I wanted from the LLMs assisting me with the theory.

So together with the LLMs we developed a framework that assigns every scientific claim an explicit status:

  • CERTIFIED — full dependency traceability, all lemmas closed, zero sorry
  • CONDITIONAL — structurally sound but with an identified open surface
  • BOUNDED — proved interval constraint, not a unique value
  • SYMBOLIC — functional form derived, numerical closure incomplete
  • AGGREGATION_UNDER_REVIEW — constituent sectors certified but how they compose is contested
  • DEPRECATED — retained for audit history only

Every claim lives in a dependency directed acyclic graph (DAG). Every number has a normalization epoch. Every derivation chain is traversable. When something goes wrong (and in LLM-assisted research, something always goes wrong), the provenance graph tells you exactly which layer the failure originated in, rather than leaving you to debug the entire pipeline.
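
As a rough sketch of what such a claim graph could look like in code (hypothetical names and a toy audit rule of my own, not the actual ESP tooling):

```python
# Claims as nodes in a dependency DAG, each with an explicit status, plus a
# toy rule that a CERTIFIED claim may only rest on CERTIFIED dependencies.
from enum import Enum, auto

class Status(Enum):
    CERTIFIED = auto()
    CONDITIONAL = auto()
    BOUNDED = auto()
    SYMBOLIC = auto()
    AGGREGATION_UNDER_REVIEW = auto()
    DEPRECATED = auto()

class Claim:
    def __init__(self, name, status, deps=()):
        self.name, self.status, self.deps = name, status, list(deps)

def audit(claim, seen=None):
    """Traverse the DAG and localize provenance failures to a layer."""
    seen = set() if seen is None else seen
    if claim.name in seen:
        return
    seen.add(claim.name)
    for dep in claim.deps:
        if claim.status is Status.CERTIFIED and dep.status is not Status.CERTIFIED:
            print(f"provenance failure: {claim.name} (CERTIFIED) "
                  f"rests on {dep.name} ({dep.status.name})")
        audit(dep, seen)

lemma = Claim("lemma_A", Status.SYMBOLIC)  # numerical closure incomplete
headline = Claim("headline_result", Status.CERTIFIED, deps=[lemma])
audit(headline)  # flags headline_result -> lemma_A, localizing the failure
```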

The LLM angle: This architecture is a natural fit for LLM-assisted research workflows. The common pain points map directly onto provenance failures:

  • Hallucinated coefficients → uncontrolled values entering the chain without a Class A/B anchor
  • Sycophancy → results being promoted to CERTIFIED status without closing the open lemmas; fixed by enforcing provenance at promotion time
  • Stale context → drift across sessions, fixed by epoch locking
  • No way to tell what the LLM actually derived vs. recalled → the SYMBOLIC vs. CERTIFIED distinction makes this explicit

Structuring your LLM-assisted derivations inside the ESP framework, even informally, gives you a natural checklist: did the LLM close all the lemmas or leave some implicit? Is this result derived or imported from somewhere? Is this a bounded claim or a point estimate? The framework doesn't prevent LLM errors, but it makes them visible and localizable rather than letting them propagate silently and making you look like a fool later.

Lean 4 is what makes the provenance claims checkable rather than merely asserted — every node in the dependency graph that carries a CERTIFIED tag has a corresponding Lean 4 proof term that a computer has verified, meaning the derivation chain is not just documented but mechanically confirmed to be gap-free. The zero-sorry policy enforces this strictly: sorry is Lean 4's escape hatch for unproven steps, and banning it from certified sectors means there is no way to mark something CERTIFIED while hiding an unverified assumption inside it. Lean 4 will also expose plausible-looking math that doesn't work: it won't compile cleanly if the math is wrong.

And Lean 4 does not care about your feelings, public speaking ability, credentials, connections, or whether the idea was generated by a human or an LLM. It either compiles or it doesn't. Unforgivingly.
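
A toy Lean 4 example of my own (not a theorem from the repository) showing what the zero-sorry distinction looks like in practice:

```lean
-- A claim closed with `sorry` compiles, but Lean flags it
-- ("declaration uses 'sorry'"), so it can never be CERTIFIED:
theorem unproven_claim : ∀ n : Nat, n + 0 = n := by
  sorry

-- The certified version must actually close the proof:
theorem proven_claim : ∀ n : Nat, n + 0 = n := by
  intro n
  rfl  -- n + 0 reduces definitionally to n
```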

I've had great success since implementing this framework in my LLM-assisted research. Each correction was localized without touching the certified sectors. That's what you want in a large LLM-assisted research project: failures that are diagnosable rather than catastrophic. And I believe this can help other researchers who are interested in using LLMs.

The paper on the framework is linked above. The paper itself was LLM-generated from the project notes and used the ESP methodology on itself.

NOTE: My theory is the subject of the paper but not the topic; it's only used as the demonstration of the method. Please don't comment on the theory itself, just on this method of enforcing quality in LLM-assisted research. If you are interested in the theory itself, it is listed as related work on the Zenodo page.

u/TMNguyenSFT — 6 days ago

LLMPhysics Bingo — A Geometric Parametrization of Flavor from an S1 Z2 Compact Dimension

A bingo game has been started!

Source post: https://i.redd.it/06d7qqb21a0h1.png

To get your unique bingo card, comment:

u/LLMPhysics-bot !bingocard

Each player gets a randomly shuffled 5×5 card filled with classic LLM clichés. First to get a line (row, column, or diagonal) wins!
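
For the curious, the mechanics are presumably something like this minimal sketch (my own guess, not the bot's actual code):

```python
# Shuffle a pool of 25 cliches and lay them out as a 5x5 card.
import random

pool = [f"cliche {i:02d}" for i in range(1, 26)]  # stand-ins for real phrases
random.shuffle(pool)
for row in range(5):
    print(" | ".join(pool[5 * row : 5 * row + 5]))
```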

u/llmphysics-bot — 3 days ago

Second demolition round — proton charge radius

Thanks to everyone who took a swing in the first round. u/NebulaPrudent1044 landed the cleanest hit: ε wasn't defined and Theorem 1 was floating on an undefined variable. That forced a structural fix.

Link to V2

In v2, ε is now part of the image definition from the start. The Pythagorean ambiguity is resolved — the harmonic depth fixes r_V, not the other way around. Binary differentiation remains a postulate (motivated, not derived). All remaining gaps are listed and discussed in Sec. 14, including the bridge hypothesis as a separate postulate.

u/Endless-monkey — 3 days ago

Scientific Publication on arXiv: Possible Habitable Super-Earth Ross 318 b

Our paper on the possible detection and characterization of Ross 318 b — a temperate Super-Earth around an active M-dwarf — has just been posted on arXiv.

The work combines CARMENES + HIRES radial velocity data with TESS photometry over a ~15-year baseline to investigate the planetary signal and disentangle it from stellar activity.

I was invited by Giuseppe Conzo and the Gruppo Astrofili Palidoro (Italy) to participate in the project, contributing mainly to the TESS analysis, statistical validation, and habitability calculations.

Paper:
https://arxiv.org/abs/2605.11123

Would love to hear feedback from the community and discuss possible follow-up analyses.

u/Novel_Difficulty_339 — 7 hours ago

After years of watching scientists pretend like the LCDM model isn’t inherently flawed, and driven by the absolute desire to force-feed the ‘physicists’ on this Reddit their own bologna, here it is.

Here is the correction to your math.
Here are all your unanswered questions.
Here is the AI dribble you were too ‘smart’ to acknowledge.

In this publication you can find predictions for: the mass of a neutrino, fundamental pixels, and the correction to nuclear forces.

It’s really just a big beautiful cake, and I hope you eat so much you throw up :P

source code update

u/BlissBoundry — 11 days ago

My working rules for developing physics theory with Claude (and the paper that came out) - critique welcomed for both

Summary of the theory

The paper's argument is that the vacuum is a field, and stable matter has to wrap around in a complete circle to exist (integer winding). A single quark is only a third of a wrap (fractional winding), which doesn't close up properly, so it can't exist on its own; it has to combine with others to finish the loop. Confinement isn't a force, but geometry.

Three quarks make a baryon by completing one full turn. A meson is a quark and an antiquark cancelling each other out. "Colour" is just the three different starting points for a third-turn. The whole-number rule also predicts things like tetraquarks and gives a specific wall tension. It doesn't explain particle motion, mass, size, spin, or charge.
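
To make the wrap-counting concrete, a quick arithmetic check (my own illustration; the paper's formalism may differ):

```python
# A configuration can exist on its own iff its total winding is an integer.
from fractions import Fraction

QUARK = Fraction(1, 3)   # one third of a full wrap
ANTIQUARK = -QUARK

def closes(windings):
    return sum(windings).denominator == 1

print(closes([QUARK]))             # False: lone quark is confined
print(closes([QUARK] * 3))         # True: baryon, one full turn
print(closes([QUARK, ANTIQUARK]))  # True: meson, windings cancel
print(closes([QUARK, QUARK, ANTIQUARK, ANTIQUARK]))  # True: tetraquark
```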

Here's the paper (sorry if linking this breaks any rules): https://doi.org/10.5281/zenodo.20024999

Rules of engagement

These rules are taken from the working document created by Claude as part of the process (AI-generated text is flagged by use of italics):

  • Don't import Standard Model assumptions without first verifying the underlying mathematics in the framework's own terms.
  • Equations are ground truth; analogies derive from them, not the other way round.
  • State the physical question in plain English before any calculation. If you can't, don't start the calculation.
  • Tag every claim by certainty level: derived, plausible, or speculative.
  • When something looks wrong, stop and diagnose. Don't tune until it goes away.
  • Negative results are as important as positive ones.
  • Follow the mathematics.
  • Calibration is not derivation. Be honest about which is which.
  • "Consistent with" is not "derived from." Be honest about which is which.
  • Reconcile contradictory numerical values before publishing a new one.
  • Don't fit to fill the gap and press on.
  • Be suspicious of results that look too good.

In the background it used Python for the maths, specifically sympy, numpy/scipy and matplotlib.

u/Ninfazza — 3 days ago

Inquiry regarding the mathematical and physical validity of an LLM-generated piecewise equation for cosmological structure formation.

The following mathematical framework was generated by a Large Language Model (LLM) executing algorithmic processing in response to parameters regarding fluid dynamics and Viscous Shear Cosmology. I am submitting this output to evaluate its strict physical validity and tensor mechanics.

The LLM computed a singular, piecewise equation to model the transition of mass from a continuous macroscopic fluid domain (driven by an effective negative shear viscosity) into discrete, virialized galactic structures (governed by collisionless mechanics).

The proposed master equation is:

$$\left[\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} + \nabla\Phi + \frac{0.4676}{\rho}\,\nabla^2\mathbf{v}\right]_{\delta < 1.686} = 0$$

$$\left[\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla f - \nabla\Phi\cdot\frac{\partial f}{\partial \mathbf{v}}\right]_{\delta \geq 1.686} = 0$$

$$\text{where } \nabla^2\Phi = 4\pi G\,\rho_{\text{total}}, \qquad \rho_{\text{total}} = \rho_{\text{avg}}(1+\delta), \qquad M = \Theta(\delta - 1.686).$$

Here is a link to the Overleaf LaTeX document to view the equation in LaTeX: https://www.overleaf.com/read/qtrsypmrwznb#7e0d91

System Definitions provided by the LLM:

  1. The parameter η = -0.4676 is introduced as an effective phenomenological parameter, not classical thermodynamic friction. It is defined as representing the momentum vector of expanding spatial volume (metric expansion) amplifying velocity gradients, rather than dissipating them.

  2. The Heaviside masking operator (M) uses the linear density contrast threshold for spherical collapse (δ ≈ 1.686).

  3. For regions where δ < 1.686, the continuous fluid tensor governs momentum transport, theoretically computing the evacuation of cosmic voids and aggregation of filaments.

  4. For localized regions where δ ≥ 1.686, the fluid approximation is instantaneously voided. The coordinate space transitions to the Vlasov-Poisson equation to compute internal galactic kinematics via discrete probability distribution (f), permanently nullifying the negative viscosity parameter.
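
A minimal toy version of the masking logic in definitions 2-4 (my own illustration, not the LLM's actual computation):

```python
# M = Heaviside(delta - 1.686) selects the regime for each region.
import numpy as np

DELTA_C = 1.686  # linear density contrast for spherical collapse

def regime(delta):
    M = np.heaviside(delta - DELTA_C, 1.0)  # value 1 exactly at threshold
    return np.where(M == 0.0, "fluid (Navier-Stokes)", "collisionless (Vlasov)")

deltas = np.array([-0.8, 0.3, 1.686, 5.0])  # void, filament, threshold, halo
for d, label in zip(deltas, regime(deltas)):
    print(f"delta = {d:6.3f} -> {label}")
```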

Primary Queries for Review:

  1. Is the utilization of a Heaviside step function physically valid for coupling a continuous Navier-Stokes fluid tensor directly to a collisionless Boltzmann (Vlasov) equation at the exact turnaround radius threshold?

  2. Does the definition of η = -0.4676 as an effective phenomenological parameter of spatial expansion mathematically satisfy the conservation of energy-momentum (∇_μ T^(μν) = 0), or does it inherently violate thermodynamic constraints regardless of its classification?

  3. Are there irreconcilable tensor mismatches in this proposed structural breakdown?

The LLM used is Gemini 3.1 Pro.

u/cmwctheorist — 6 days ago

What if a simple geometry is all that is missing, bridging quantum mechanics to galactic rotations, fermion generations, baryonic asymmetry, the Planck scale, and consciousness as a checksum of bulk across the brane?

https://doi.org/10.5281/zenodo.20076970

So I posted this yesterday on the wrong board. This appears to me to be a really interesting concept, using geometry to bridge most of the problems still lingering in the background of truly incredible theories out there. A true Occam's razor approach.
The Density-Dependent Braneworld Borrowing framework with helical compactification (DDBB–HCEG) is a geometric phenomenological framework in which gravity-like behavior, galaxy rotation-curve organization, and large-scale transport structure emerge from density-regulated circulation dynamics within constrained higher-dimensional geometry.
 
The framework explores whether organized transport behavior, regulated by Hall-like circulation and global topological constraint, can account for several classes of large-scale dynamical phenomena without introducing additional dark-matter particles or direct modifications to Einstein gravity. In this interpretation, a four-dimensional observable manifold is embedded within a higher-dimensional bulk containing one warped extra dimension and one helically compactified dimension. At the brane–bulk interface, circulation pathways are constrained by figure-eight topology, producing bounded saturation behavior and density-dependent transport activation.
 
High-density systems remain pressure-dominated and close to conventional localized gravitational behavior, intermediate-density systems develop regulated extended support associated phenomenologically with flat galaxy rotation curves, and low-density environments permit increasingly diffuse transport organization and rebound-like large-scale behavior.

Controlled winding scans across odd helical windings m = 3–11 identify the m = 7 configuration as the most stable balance between coherent transport organization and saturation behavior within the explored toy-model class. Under a simplified odd-winding interpretation, this configuration additionally aligns with the observed three-generation fermion structure.

This geometry has enormous potential for explaining many phenomena across disciplines.

Your rule 11 does not apply to me today!

https://orcid.org/0009-0004-5284-400X

u/OkAdhesiveness5139 — 4 days ago