r/neuro

▲ 216 r/neuro+2 crossposts

So a new paper just published in Frontiers in Human Neuroscience proposes that self-referential thinking (which can be thought of as the ego) functions as the biological switch between System 1 and System 2 in the brain, which it casts as quantum and classical modes respectively.

It proposes that the brain operates under a tight metabolic budget, and that the DMN's process of sustaining boundaries through self-referential activity consumes a substantial portion of that budget, which is where it connects to Carhart-Harris's entropic brain hypothesis work.

It describes it like this: when the ego runs hot, the energy needed for the pumping that maintains quantum coherence in microtubule tryptophan networks is unavailable, and the brain falls back into classical, sequential computation (System 2). When the ego quiets, metabolic resources free up for energy pumping (the way a laser is pumped) to sustain coherence, and the brain enters the parallel processing mode (System 1), which the paper connects to flow states and insights. It then points to the significant implications this has for consciousness and positions itself as an alternative to Roger Penrose and Stuart Hameroff's Orch OR theory.

Paper here: https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2026.1783138/full

u/SalvationsElite — 5 days ago
▲ 30 r/neuro

Can Neuroscience Help You Understand Human Behavior Better?

Does Neuroscience Training Change Social Perception and Emotional Regulation?

I’m curious about whether studying neuroscience significantly changes the way people interpret human behavior and social interaction.

For example, does deeper knowledge of cognition, emotion, and neural processing improve someone’s ability to recognize deception, discomfort, emotional suppression, or nonverbal communication patterns?

I’m also interested in whether neuroscience training influences self-regulation. Do people in the field generally become more aware of their own cognitive biases, emotional reactions, and behavioral patterns, or does having scientific knowledge of the brain not necessarily translate into greater emotional control in everyday life?

u/Worried-Pen7857 — 2 days ago
▲ 190 r/neuro+6 crossposts

The peptide therapeutics conversation has spent years on a small set of targets: AMPK (the cell's energy switch), mitochondrial peptides like MOTS-c and humanin, the gut-brain axis, and BDNF (a growth factor that protects neurons). Metformin hits all four. It gets into the brain. It triggers natural GLP-1 release from the gut (the same hormone Ozempic and Wegovy mimic). It calms brain inflammation. In animals, it grows new neurons in the hippocampus. In two large human studies, long-term users had slower memory decline and lower dementia rates.

What the field does not have is a clean look at the brain on actual scans. No one has measured how well metformin users' brains clear waste, how much inflammation is sitting in the wiring, or how healthy the wiring is, compared to matched non-users at scale.

Dr. Faye McKenna's lab at Albert Einstein / Montefiore is about to. They're using UK Biobank, the world's largest brain scanning study (around 100,000 brain MRIs plus full prescription records and 325 blood markers). They'll match people on metformin to similar people who aren't taking it, matching on age, sex, weight, blood sugar, blood pressure, smoking, activity, neighborhood, and the main genetic risk factor for Alzheimer's. They'll measure three things on the brain scans (waste clearance, inflammation in the wiring, wiring health), run the same comparison for body fat (liver, belly, muscle), and then test whether body changes lead to brain changes, and brain changes to better thinking and memory.
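
The analysis pipeline isn't public yet, so purely for intuition, here is a minimal sketch of what 1:1 propensity-style matching on covariates like these can look like (column names are hypothetical, not from the study):

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical column names standing in for the covariates listed above.
    COVARIATES = ["age", "sex", "bmi", "hba1c", "systolic_bp",
                  "smoking", "activity", "deprivation", "apoe4"]

    def match_users_to_controls(df: pd.DataFrame) -> pd.DataFrame:
        """Pair each metformin user with the closest unused non-user."""
        X = df[COVARIATES].to_numpy(dtype=float)  # assumes numeric encoding
        treated = df["metformin"].to_numpy(dtype=bool)
        # Propensity score: P(on metformin | covariates).
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        users, controls = np.where(treated)[0], np.where(~treated)[0]
        taken, pairs = set(), []
        for u in users:
            # Greedy 1:1 nearest-neighbour match on the propensity score.
            for c in controls[np.argsort(np.abs(ps[controls] - ps[u]))]:
                if c not in taken:
                    taken.add(c)
                    pairs.append((u, c))
                    break
        return pd.DataFrame(pairs, columns=["user_idx", "control_idx"])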

Plan is locked in writing before the data is opened. Code, results, and a preprint will all be public.

If the brain results hold, metformin becomes the first drug on this pathway with real human imaging evidence linking body fat changes to brain waste clearance to memory and thinking. If they don't, that's a publishable answer too, and it tells the rest of the longevity drug field something important.

u/cryptarsh — 7 days ago
▲ 35 r/neuro

My understanding is that the brain has the ability to self-monitor in a few different ways. I guess what I’m asking is whether some people are able to bring those monitoring mechanisms into conscious awareness.

Like, can you use your firsthand conscious state and activity—not just your intellectual knowledge—to deduce what’s going on neurologically?

An analogy might be a mechanic driving a stick shift who knows both on an intuitive level and an intellectual level what’s happening when he shifts gears.

EDIT: Thank you so much for your thoughtful answers! It was a subtle question that I’m still unable to fully articulate but I appreciate the responses :-)

u/azamraa — 8 days ago
▲ 32 r/neuro

I’m contemplating what I want to go to college for, and I’ve always had an interest in brains and why they function the way they do. I love learning but always struggled in school, especially during high school (covid fucked a lot of it up for me mentally), but now that I’m in community college I’ve been flourishing in my classes. I want to pursue a harder education and push myself more, since I never gave myself the chance to in high school, between bad mental health, getting little to no help with school from family, and pot use. I’ve quit, I’ve been on medication, my productivity has skyrocketed, and I’ve rediscovered my love for school.

I’m in Oregon and am considering UO, as they have a good neuroscience program. I plan on doing as many classes in community college as I can to save money before transferring over, but reading through this subreddit has made me worried about my possible choice. I excel in writing and reading; math has never been my strong suit, but I enjoy physics and science. I would love to get into research and eventually transition to being a professor, as I love helping people and teaching others new things.

Mental health and neurodivergence have always interested me, as I have struggled with both in my life and want to better understand these things, not just for me but for others. I would love to study these things and write about them, but everyone here seems to say it is a bad idea. I don’t know what other career I could possibly go towards that involves something I am passionate about, and I’m at a loss currently. I could use some positive possibilities lol, anything helps though

u/Opposite-Pilot-556 — 8 days ago
▲ 16 r/neuro

Interest in Neuroimmunology

Hi! I’m finishing up my junior year of college as a neuroscience major. I’m really interested in neuroimmunology, though I’m not going to be able to take a course in it. Does anyone have any textbook/learning material recommendations for self-study? If so, I’d really appreciate it!

u/whimsicalturbulence — 2 days ago
▲ 11 r/neuro+1 crossposts

Biology student trying to understand how to get into neurotech. What’s the reality of the field?

Hey everyone,

I’m a biology undergrad and I’ve recently become really interested in neurotech (BCIs, neural decoding, neuroAI-type work). I’m trying to figure out what the actual path into this field looks like and whether it’s realistic for someone like me.

A few things I’m confused about:

Can people from biology/neuroscience backgrounds realistically break into neurotech, or is it mostly CS/EE students?

What do neuroscience trained people typically end up doing in industry roles?

How is the neurotech industry actually doing right now? Is it growing, stable, or still very niche? And realistically, what does the pay range look like at different levels (entry-level to senior roles)?

What skills matter most early on if I want to move in this direction (Python, ML, math, research experience, etc.)?

Right now I’m planning to start learning Python and try to join a lab that works with neural data, but I’m not fully sure if that’s the right direction or just one of several possible paths.

Would really appreciate honest perspectives from people actually in the field especially how they transitioned and what they’d do differently.

u/Radiant-Rain2636 — 2 days ago
▲ 62 r/neuro

Hello! I am still figuring out where I want to go and what I want to do after graduation in two-ish years, but so far I am really interested in doing research and am thinking about going into research-related fields! But I don't exactly know what that looks like, or if it is a sustainable career to have at all.

I am currently doing a double major in neuroscience and psychology, but my strengths seem to lie in psychology-related courses, while I am lagging quite a bit in neuroscience (around average). So I am wondering whether it would be worth it at all to continue pushing on with the neuroscience major for the research, since research in both fields really interests me and makes me think a lot (silly, I know).

So sorry if this post is disjointed! I don't know how to put my thoughts to words at all, so they're quite all over the place. And thank you so much for reading and especially if you comment!

PS. Thank you to the people who replied to my last post on here! Y'all's advice on studying for neuroscience has been very helpful, and I did much better this semester compared to the last one overall! Thank you, you kind people!

u/MintyMents — 12 days ago
▲ 10 r/neuro

(Question for Europeans) I want to study neuroscience, but I don't know how (more info below)

Hi, I'm from a non-EU country in eastern Europe and I'm currently a first-year CS student. I wanted to study medicine initially, but some things happened not long before I needed to start applying, and I ended up not having the financial support to study for 6+ years anymore. I hastily chose CS because I didn't know what I wanted to do, but now I'm realizing that neuroscience might be the field I want to get into. Unfortunately you can't study it in my country.

My question is: what should I do? Can I proceed with the CS degree and try to get a Master's in another country after graduation? (Can I even be accepted with a CS degree in Europe?) I know that it's possible in the US, but I haven't seen a European perspective on it. Any advice would be helpful.

u/Sector-Difficult — 3 days ago
▲ 0 r/neuro

The cellular vault. A 40-year biological mystery that may be the quantum material basis of the mind.

First-time poster here. Would welcome feedback on a quantum cognition hypothesis I've been developing.

I've developed an information-theoretic framework grounded in the Free Energy Principle that proposes a direct physical basis for consciousness at the subcellular level. A systematic search for the biological hardware capable of instantiating this led me to the cellular vault, an enigmatic ribonucleoprotein complex that has resisted functional characterization for 40 years despite being among the most conserved and abundant structures in eukaryotic biology. Its precise function remains unknown, it has been excluded from leading textbooks, and very few researchers study it. This framework proposes it is the quantum material basis of the mind, advances a specific quantum mechanism that addresses the core structural weaknesses of Orch OR, and generates falsifiable predictions. The core argument is outlined here and the preprint is linked below.

The biological mystery

The vault is present in tens of thousands of copies per nucleated cell, conserved across two billion years of eukaryotic evolution at significant metabolic cost. Yet its precise function remains unknown.

Consider what this means in practice. A slime mold with three times the human MVP gene expression, no neurons, no synapses, and no nervous system of any kind, solved the Tokyo rail network optimization problem in a day, as documented in a 2010 paper in Science. This is vault-dense computation with no other hardware present and no explanation in mainstream cell biology.

In a 2024 Science article, one of the vault's co-discoverers noted that macrophages carry the highest vault density of any human cell type, cells that must recognize and respond to molecular threats with no prior template. Vault density tracks computational demand, from a forest floor slime mold to the front line of the human immune system.

The proposed mechanism

The vault is a 13-megadalton ribonucleoprotein complex whose architecture is unlike any other cellular organelle. Precisely engineered, massively abundant, and architecturally unchanged for billions of years, its hollow interior functions not as a container but as a shielded resonant cavity: a protected computational medium in which internal RNA undergoes precise geometric transitions insulated from the thermal noise of the cytoplasm. Its emptiness is its operational state.

This framework proposes that its function is not biochemical in the conventional sense but physical: a dipole geometric interferometer whose opposing caps load discrete informational states, one the genetic prediction from nuclear instruction, the other the sensory observation from the cytoplasmic environment, with the barrel computing their physical interference. When prediction and observation align, the system resolves into a geometric ground state. When they conflict, torsional shear accumulates and the system signals error. The vault does not execute a fixed program. It constrains the available space of cellular responses until only coherent futures remain.

Why the vault and not microtubules

Quantum coherence in biological systems requires shielding from thermal noise, defect-free assembly, and a physically grounded mechanism for ground state stabilization. Microtubules, the substrate proposed by Orch OR, self-assemble stochastically with documented lattice defects and sit fully exposed to cytoplasmic thermal noise with no shielding mechanism. These are structural objections Orch OR has never convincingly answered.

The vault addresses each one directly. Its shell is deterministically printed by a polyribosome nanoprinting mechanism with atomic geometric precision. The protein barrel and the nanoconfined biointerfacial water layer of the cap provide the physical shielding. Four independent condensed matter physics papers published in 2026 establish that the vault's 39-to-13 symmetry ratio generates a moiré superlattice whose 3-to-1 periodicity matches the crystallization condition for a thermodynamically stable bosonic exciton crystal at physiological temperature. The vault is a more physically defensible quantum candidate than the microtubule in every sense.

How this engages existing frameworks

This framework engages its competitors at the level of mechanistic grounding. GWT describes the broadcasting architecture by which information becomes globally available across the brain, but does not specify the subcellular engine that generates the content it broadcasts. This framework proposes that engine. IIT identifies integration as central to consciousness and has produced important formal tools, but makes no contact with molecular biology, offers no evolutionary rationale, and leaves unanswered why integrated information should feel like anything. This framework grounds all three.

Why this reframes the brain's role

This framework proposes that every nucleated cell runs vault-based predictive inference locally, making the brain not the seat of cognition but the integrator of a distributed computation running across the body's forty quadrillion synchronized oscillators. The thalamocortical system phase-locks this distributed output into a unified interference pattern across the neural vault swarm. Brain waves visible in EEG are the macroscopic signature of that integration.

Three otherwise inexplicable anomalies dissolve within this framework. The brain's twenty-watt energy budget, xenon anesthesia's complete erasure of subjective time despite having no receptor pharmacology, and the divergent cognitive effects of chemically identical lithium isotopes each find a specific proposed mechanism in the vault framework that classical models cannot provide.

If the vault is the quantum material basis of healthy cognition, disease is what happens when that material degrades. The framework proposes specific mechanisms for Alzheimer's, depression, and autism as distinct modes of vault material failure, detailed in the paper.

I've shared insights from this project in a series of papers. Here is the final preprint. I recognize many will naturally be skeptical of a framework that crosses this many disciplines and challenges this many longstanding paradigms. But this is the result of a rigorous, systematic theoretical synthesis, and the explanatory power of the resulting model is compelling. All feedback would be much appreciated.

zenodo.org
u/zocolos — 4 days ago
▲ 9 r/neuro

Hi everyone!

I'm the freelancer who made a post a few days ago with questions about the temporal "barcode" on the weight matrix. I wanted to share some updates on my project and the progress I've made. I've now given the run more informative plots for a better view of the network.

So, the recap:

I built an SNN with the following specs:

Architecture: 2x256 hidden layers. AdLIF neurons, feed-forward.

Encoding: Pure Latency Coding (Time-to-First-Spike). I moved away from Poisson to capture the temporal structure of the input.

Learning Rule: Purely local STDP for the internal layers. Weight clamp (0.001 to 1.0). Synaptic scaling: multiplicative normalization (L1-norm based). "Surprise-driven" learning-rate gating mechanism for the STDP. (A sketch of these pieces follows the spec list.)

Readout: A simple linear readout head with a leaky lowpass filter. This is the only part of the system that uses the optimizer for supervised classification.

Dataset: MNIST (0-9 digits, 28x28).

The learning rates were 2e-2 for the STDP and 2.0 for the linear readout.
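
For anyone who wants the mechanics, here is a minimal numpy sketch of those pieces, trace-based pair STDP plus the weight clamp and multiplicative L1 scaling (a simplified reconstruction with illustrative constants, not my exact code):

    import numpy as np

    # Simplified pair-based STDP with weight clamp and multiplicative L1
    # synaptic scaling, reconstructed from the spec list above.
    rng = np.random.default_rng(0)
    n_pre, n_post = 784, 256
    W = rng.uniform(0.001, 1.0, (n_pre, n_post))
    pre_trace = np.zeros(n_pre)    # decaying record of presynaptic spikes
    post_trace = np.zeros(n_post)  # decaying record of postsynaptic spikes
    TAU, LR, W_MIN, W_MAX = 20.0, 2e-2, 0.001, 1.0
    TARGET_L1 = W.sum(axis=0)      # per-neuron L1 budget preserved by scaling

    def stdp_step(pre_spikes, post_spikes, gate=1.0):
        """One 1 ms timestep of local STDP; `gate` is the learning-rate valve."""
        global W, pre_trace, post_trace
        pre_trace += -pre_trace / TAU + pre_spikes
        post_trace += -post_trace / TAU + post_spikes
        # Pre-before-post potentiates, post-before-pre depresses.
        dW = np.outer(pre_trace, post_spikes) - np.outer(pre_spikes, post_trace)
        W = np.clip(W + gate * LR * dW, W_MIN, W_MAX)
        # Multiplicative synaptic scaling: restore each column's L1 norm.
        W = np.clip(W * (TARGET_L1 / W.sum(axis=0)), W_MIN, W_MAX)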

And now, my project progression update:

First I replaced the STDP learning-rate gating mechanism with a more refined, accuracy-driven one. I found using the readout loss as the gating signal unstable, so I switched to gating on accuracy: if accuracy is dropping, the gate opens up; if it is increasing, the gate closes. I set the activation threshold at 80% accuracy.
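
Stripped of smoothing details, the gate logic is roughly this sketch (the step size is illustrative; the threshold is the 80% mentioned above):

    # Simplified accuracy-driven learning valve. Above the activation
    # threshold, rising accuracy closes the gate and falling accuracy
    # reopens it; the returned value multiplies the STDP learning rate.
    class LearningValve:
        def __init__(self, threshold=0.80, step=0.02):
            self.threshold, self.step = threshold, step
            self.gate, self.prev_acc = 1.0, 0.0

        def update(self, acc: float) -> float:
            if acc >= self.threshold:
                self.gate += self.step if acc < self.prev_acc else -self.step
                self.gate = min(max(self.gate, 0.0), 1.0)
            self.prev_acc = acc
            return self.gate

The Valve column in the logs below is this gate value: once the network locked in above 96%, it throttled learning down to around 0.2 and stayed there.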

Then, because I was curious, I did an ablation study on the weight-decay parameter the readout uses. The optimal value turned out to be an interesting one, 1e-1. It seems these strange ranges are this SNN's personal quirk.

After that I thought: I want to see the run and the logs with my own eyes. What I saw made me think my logger or the system was broken. That's not possible, I thought, something must have gone wrong, I must be screwing something up. I checked, and there was no error or bug; this was the real performance. I've never seen a neural net hit accuracy this high literally at the beginning of training:

Steps:   500 | Loss: 1.160 | Acc: 96.40% | Valve: 0.42 | Sp L1: 0.070 | Sp L2: 0.068 | Sp Tot: 0.069 | W-Delta: 26.1120
Steps:  1000 | Loss: 1.122 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.056 | Sp L2: 0.048 | Sp Tot: 0.052 | W-Delta: 8.2499
Steps:  1500 | Loss: 0.914 | Acc: 96.40% | Valve: 0.20 | Sp L1: 0.059 | Sp L2: 0.043 | Sp Tot: 0.051 | W-Delta: 5.6242
Steps:  2000 | Loss: 0.524 | Acc: 96.60% | Valve: 0.20 | Sp L1: 0.053 | Sp L2: 0.035 | Sp Tot: 0.044 | W-Delta: 4.9330
Steps:  2500 | Loss: 0.603 | Acc: 96.00% | Valve: 0.23 | Sp L1: 0.050 | Sp L2: 0.028 | Sp Tot: 0.039 | W-Delta: 6.1300
Steps:  3000 | Loss: 0.476 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.048 | Sp L2: 0.024 | Sp Tot: 0.036 | W-Delta: 4.8336
Steps:  3500 | Loss: 0.624 | Acc: 96.80% | Valve: 0.18 | Sp L1: 0.052 | Sp L2: 0.031 | Sp Tot: 0.041 | W-Delta: 5.4503

Then I let it run for 30,000 steps. The last steps:

Steps: 27000 | Loss: 0.361 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.8722
Steps: 27500 | Loss: 0.354 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.8889
Steps: 28000 | Loss: 0.325 | Acc: 96.40% | Valve: 0.21 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.7192
Steps: 28500 | Loss: 0.360 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.8478
Steps: 29000 | Loss: 0.333 | Acc: 96.40% | Valve: 0.21 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.7299
Steps: 29500 | Loss: 0.362 | Acc: 96.20% | Valve: 0.22 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.6118
Steps: 30000 | Loss: 0.353 | Acc: 96.40% | Valve: 0.21 | Sp L1: 0.040 | Sp L2: 0.016 | Sp Tot: 0.028 | W-Delta: 0.8433

It turns out that when you push the system to its mathematical extremes, the "dead end" of neuromorphic learning (because that's what a lot of people call it) might not be so dead after all.

Key takeaways from this run:

Instant Lock-in: The network reached 96.40% accuracy by step 500. It didn't just learn; it practically "recognized" the patterns immediately. Although there was a slight fluctuation in accuracy, the network was incredibly confident. Accuracy was over 96% throughout the entire run.

Structural Purge: By applying a brutal weight decay combined with fast learning rates, the network became extremely sparse. L1 sparsity dropped to 4% and L2 sparsity dropped to 1.6%, leaving only the most critical, high-precision synapses alive. Like a ruthless digital Darwinism.

Temporal Stability: Despite the discrete 1.0ms timesteps, the 25ms integration window and Latency Coding created a causal chain so robust that the confusion matrix is almost a perfect diagonal.

New adaptive gating: I refined the "Learning Valve" to be accuracy-driven. As it turned out, this was a good decision. As soon as the network hit the 96% mark, the valve throttled down learning to 0.20, effectively freezing the successful internal state.

It is fascinating to see how biological principles (STDP, latency, etc.) combined with “cruel” mathematical constraints can create such efficiency and how they can outperform complex surrogate gradient methods in this task, at least in speed, as we can see.

Now imagine the learning speed and efficiency of this if it ran on real neuromorphic hardware rather than von Neumann hardware, although the entire 30,000 steps took no more than ~30 seconds on a laptop processor anyway.

However, I'm having a bit of trouble interpreting the receptive fields. Going by them, since the dark spot is in the middle, the neurons don't look at the digit itself but at the outline of the digit and its surroundings. Did the neurons learn to recognize the silhouette? Am I understanding this correctly?

I remain open to any exchange of ideas, criticism, or explanation that I can learn from.

u/Androo_94 — 6 days ago
▲ 13 r/neuro

Hi everyone!

First off, I’m not a professional CompNeuro researcher—just a very enthusiastic and somewhat obsessed freelancer diving deep into the neuromorphic direction. I’ve been spending my time reading studies and conducting my own research, and I’ve reached a point where the results honestly blew me away. I’d love to get some expert eyes on this.

I built an SNN with the following specs:

  • Architecture: 2x256 hidden layers.
  • Encoding: Pure Latency Coding (Time-to-First-Spike). I moved away from Poisson to capture the temporal structure of the input. (A minimal encoder sketch follows this list.)
  • Learning Rule: Purely local STDP for the internal layers.
  • Readout: A simple linear readout head with a leaky lowpass filter. This is the only part of the system that uses the Adam optimizer for supervised classification. I used readout loss-dependent gating for the STDP learning rate (surprise-based learning) as neuromodulation, although it was open throughout the run.
  • Dataset: N-MNIST (0-9 digits, 28x28).
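
For readers new to latency coding: the idea is that each pixel spikes at most once, with brighter pixels spiking earlier. A minimal sketch of such an encoder for a static image (simplified; the window and timestep values are illustrative):

    import numpy as np

    # Time-to-first-spike (latency) encoding: one spike per active pixel,
    # brighter pixels fire earlier, zero pixels never fire.
    def ttfs_encode(image: np.ndarray, t_window: float = 25.0, dt: float = 1.0):
        """image: floats in [0, 1]. Returns a (timesteps, pixels) spike array."""
        px = image.ravel()
        n_steps = int(t_window / dt)
        spike_step = np.round((1.0 - px) * (n_steps - 1)).astype(int)
        spikes = np.zeros((n_steps, px.size), dtype=np.uint8)
        fire = px > 0.0
        spikes[spike_step[fire], np.where(fire)[0]] = 1
        return spikes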

After a logarithmic grid search for stability and jitter, I found a configuration that produced these metrics:

  • Peak Accuracy: 89.40% (at step 20,500).
  • Emergent Sparsity: I didn't enforce a specific sparsity level, but the L2 layer self-organized into an extreme 0.75% activity rate (0.0075 sparsity), and L1 settled at 4.0% (0.04).
  • Stability: Even with a high readout learning rate, the system remained mathematically stable with a Stability Score of 0.6703 and a Jitter (Std) of 0.0611.

The learning rates were 2e-2 for the STDP and 2.0 for the linear readout.

My Questions to the Experts:

The most fascinating part for me is the attached weight map. The L1 weights developed these incredibly sharp, vertical "barcode-like" patterns. To my eyes, it looks like the network physically carved out specific temporal windows to respond to the latency-encoded input.

As someone coming from a non-academic background, I find this visual structure beautiful, but I want to understand it better:

  1. Biological Analogy: How does this vertical striping correlate with biological visual processing (like the optic nerve or V1)? Is this a known phenomenon when STDP meets latency coding?
  2. Sparsity: Is a 0.75% emergent sparsity typical for these types of networks, or is my "Readout Loss" acting as a pressure cooker that forces this extreme efficiency?

I’m genuinely impressed by how the math and the biological inspiration converged here. Looking forward to your insights and critiques!

Postscript: I used an LLM to write and format this post because English is not my native language and I didn't want to annoy you with my possibly bad English.

u/Androo_94 — 8 days ago
▲ 10 r/neuro

I thought I'd share this with the good folks here. I've been exploring a cognitive pattern present in human reasoning, particularly dialectical reasoning: the Socratic and Hegelian principle of using conflicting viewpoints to filter and synthesize more effective and truthful conclusions. This type of dialectical reasoning appears to heavily recruit the dorsal anterior cingulate cortex (dACC), a region known for conflict monitoring.

The Medium article I linked here discusses the dACC and how its conflict rerouting system can be "simulated" inside an AI large language model using prompt engineering to improve the quality of the AI's reasoning. This has the effect of "augmenting" the LLM's legacy transformer architecture with a systematic way of thinking that it didn't have before.
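
To make the pattern concrete, the prompting loop amounts to something like this sketch, where `ask_llm` is a hypothetical stand-in for whatever chat-completion API you use, and the synthesis step plays the dACC-style conflict-resolution role:

    # Sketch of a dialectical (thesis -> antithesis -> synthesis) prompt loop.
    # `ask_llm` is a hypothetical placeholder, not a real library call.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("wire this to your LLM API of choice")

    def dialectical_answer(question: str) -> str:
        thesis = ask_llm(f"Give your best answer to: {question}")
        antithesis = ask_llm(
            "Argue against this answer as strongly as you can, flagging "
            f"every weak point:\n{thesis}"
        )
        # Conflict resolution: the model must reconcile the two positions
        # rather than simply pick one, mirroring dACC conflict monitoring.
        return ask_llm(
            f"QUESTION: {question}\nANSWER: {thesis}\nCRITIQUE: {antithesis}\n"
            "Write a revised answer that resolves the conflicts between them."
        )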

Here's some neuroscience research I was able to find that supports my theory:

1. Wang et al. (2016) – "The Dorsal Anterior Cingulate Cortex Modulates Dialectical Self-Thinking"

Published in Frontiers in Psychology.

Key finding: Higher dispositional dialectical thinking correlates with increased dACC (dorsal ACC) activity when processing self-relevant conflicting information. The dACC plays a crucial role in monitoring and resolving conflict in dialectical self-thinking.

Link: PubMed | Full paper

2. Botvinick et al. (2004) – "Conflict Monitoring and Anterior Cingulate Cortex: An Update"

Classic review in Trends in Cognitive Sciences.

Key finding: Establishes the dACC as a key region for detecting and signaling conflicts in information processing (including cognitive dissonance and competing representations).

Link: PubMed

3. Hu et al. (2025) – "The Neural Basis of Dialectical Thinking: Recent Advances and Future Directions"

Link: PubMed

Key finding: Reviews evidence that the dACC is central to conflict monitoring in dialectical thinking and proposes a "dialectical-integration network" (DIN) with the dACC as a core hub.

A more systematic and expansive argument is in the linked Medium article.

Would welcome thoughts and constructive criticism from the larger neuroscience community to stress-test my theory.

Thank you.

u/RazzmatazzAccurate82 — 11 days ago
▲ 3 r/neuro

Hi everyone, I’m looking for advice on a strong justification for my choice of methods. The details:

This is for EEG: it’s a salience-attribution and reward-learning task, and I’m doing decoding/machine learning as part of my analysis. I’ve chosen to decode the entire epoch rather than doing time-resolved decoding; I’m not looking at spatiotemporal dynamics because I’m averaging across all time points. Since the analysis is already done, I need a strong justification for this choice that isn’t about access to temporal dynamics (i.e., later vs. earlier responses), because I’m averaging those out. I’ve considered arguing that full-epoch decoding provides more robust decoding accuracy in general, but it feels like a weak point. I’ve read as many papers as I can find, since this is such a new thing, and I can’t find any other argument that’s more sound. Please don’t suggest time-resolved or ERP-related signatures, as it’s far too late for that. I’ve also mentioned the larger signal-to-noise ratio, but it’s quite a broad/general point. Any help is greatly appreciated. Thank you!
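
For concreteness, the contrast between what I did and the time-resolved alternative is roughly this sketch (scikit-learn; X is the usual epochs x channels x times array):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Full-epoch decoding (what I did): average over time first, decode once.
    # The classifier sees one spatial pattern per trial, pooling signal
    # across the whole epoch (higher SNR, no temporal resolution).
    def full_epoch_decode(X: np.ndarray, y: np.ndarray) -> float:
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(clf, X.mean(axis=2), y, cv=5).mean()

    # Time-resolved decoding (the alternative): one fit per time point,
    # noisier per fit, but yields a decoding-over-time curve.
    def time_resolved_decode(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                         for t in range(X.shape[2])])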

u/silveronyxx — 12 days ago