u/Pixedar

Interactive online demo of brain information flow

Link for online interactive demo:
https://pixedar.github.io/ai/mindvisualizer/

Main GitHub repo:
https://github.com/Pixedar/MindVisualizer

This is a follow-up to my open-source brain information flow exploration repo from this post:

https://www.reddit.com/r/compmathneuro/comments/1sy150g/open_source_brain_information_flow_exploration

I decided to make a small online demo of the repo to make the idea more accessible to a broader group of people, and to give them an easier first way to interact with the visualization.

I see the web demo mostly as an entry point into the broader effort and repo. More broadly, I see this as part of a larger effort to build better intuition and mental models for large-scale brain dynamics. I know the current technology and methods may not be fully there yet, but I think this kind of exploratory / collaborative tooling is emerging and worth trying.

However, a few caveats:

  • The current flow data is not peer-reviewed. It is based on real brain data from my preprint / Zenodo record (https://zenodo.org/records/18200415). In the future, it would be nice to turn this into a more rigorous version, possibly with higher-quality data, better-validated flow models, or collaboration with people who work more directly on this kind of problem.
  • Please remember that the online demo is only a limited demo. It currently shows only one of the three modes from the full repo. The other modes in the repo may actually be more important / relevant than the one currently shown in the browser demo, especially for the broader brain-manifold and information-propagation idea. For the full functionality, please check the actual GitHub repo: https://github.com/Pixedar/MindVisualizer
  • The real repo is the main project, not the web demo. It contains the three modes, the broader brain-manifold / information-propagation idea, the LLM/RAG interpretation part, and the informal observations file (https://github.com/Pixedar/MindVisualizer/blob/master/OBSERVATIONS.md). The observations file is there so people can add interesting flow paths, perturbation effects, or intuitions about resting-state organization. The hope is to slowly build a shared record of patterns that might help us think about how the brain works internally.
  • The site is intended for demo / accessibility purposes only. The web version was made more quickly just to make the idea easier to try in the browser. The GitHub repo is the more complete version of the project, with more functionality and better code structure. For anything beyond just trying the browser demo, please look at the repo.
  • I do not expect a huge amount of traffic, but since the LLM analysis costs tokens, I included only a small amount of my own credits, so it may run out over time if people use it.

The original repo post was basically about combining brain information flow derived from real fMRI and tractography data with an LLM, including RAG-based interpretation of this flow and propagation of information in the brain.

It is still not peer-review quality and should instead be treated as a tool for building intuition about the brain and a mental model of brain dynamics.

Feedback is very welcome, especially from people who know the field better or have ideas about validation, better data, better flow models, or how to make the observation/collaboration part more useful.

u/Pixedar — 3 days ago

What is this?

Long story short, it shows current trends in AI research and how they tend to change over time.

The idea is that we can map text into a point location in semantic space. Then, if we have textual data that changes over time, the consecutive point locations create a trajectory in that semantic space. From many such paths, we can compute a generalized flow model that shows where the trends tend to go.

What I did here: for each arXiv paper category, I created a path showing how the papers' meanings and topics changed over the last six months. The generalized flow model was then computed from many such paths.
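To make the idea concrete, here is a minimal numpy sketch of the pipeline described above. It is an illustration of the concept, not TraceScope's actual code: the random embeddings stand in for real sentence embeddings of each category's papers over time, and the kernel-smoothed vector field is one simple way to build a "generalized flow model" from many trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: one embedding per month per category.
# In practice these would come from a sentence-embedding model
# applied to each category's monthly batch of paper abstracts.
n_categories, n_months, dim = 20, 6, 64
embeddings = rng.normal(size=(n_categories, n_months, dim))
embeddings = np.cumsum(embeddings, axis=1)  # add drift so each category traces a path

# Project all points into a shared low-dimensional semantic space via PCA.
flat = embeddings.reshape(-1, dim)
flat = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
points3d = (flat @ vt[:3].T).reshape(n_categories, n_months, 3)

# Consecutive points of each category form a trajectory; each
# month-to-month displacement is one local sample of the flow.
positions = points3d[:, :-1, :].reshape(-1, 3)
velocities = np.diff(points3d, axis=1).reshape(-1, 3)

def flow_at(query, bandwidth=1.0):
    """Generalized flow model: kernel-weighted average of nearby
    displacement samples, giving a smooth vector field."""
    w = np.exp(-np.sum((positions - query) ** 2, axis=1) / (2 * bandwidth**2))
    return (w[:, None] * velocities).sum(axis=0) / w.sum()

direction = flow_at(np.zeros(3))  # where trends near the origin tend to go
```

The three PCA axes here are the analogue of the X/Y/Z components mentioned below; in the real tool their interpretation (abstraction level, perception emphasis, agentic emphasis) comes from inspecting what varies along each axis.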

What it found:

The three main components that seem to govern the current AI research space are:

X: abstraction level
Y: perception emphasis
Z: agentic emphasis

It also found two distinct global attractor basins.

The first attractor basin seems to represent AI research moving toward grounded perception and interaction with the real world. This is less about abstract model behavior and more about making AI systems understand messy, changing environments, where inputs are noisy, incomplete, distributed, or constrained by deployment conditions.

The second attractor basin seems to represent AI research moving toward agentic behavior, reasoning, and control of model objectives. This is more about making models follow the intended goal, avoid shortcut solutions, and behave reliably when trained or evaluated through imperfect signals.

So, roughly speaking, one attractor is about AI becoming better at perceiving and operating in the physical world, while the other is about AI becoming better controlled as an agentic reasoning system.
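For readers unfamiliar with the term, an "attractor basin" is the region of the space from which trajectories converge to the same resting point. The following toy sketch (not how TraceScope detects basins, and the two-attractor field is invented for illustration) shows the basic idea: integrate many seed points forward through a vector field and group them by where they settle.

```python
import numpy as np

# Toy flow with two point attractors; each point is pulled
# toward whichever attractor is nearer.
A = np.array([-2.0, 0.0, 0.0])
B = np.array([2.0, 0.0, 0.0])

def flow(p):
    target = A if np.linalg.norm(p - A) < np.linalg.norm(p - B) else B
    return target - p

# Integrate many seeds forward with Euler steps; where they
# settle reveals the attractor basins.
rng = np.random.default_rng(1)
seeds = rng.uniform(-4, 4, size=(200, 3))
endpoints = []
for p in seeds:
    for _ in range(100):
        p = p + 0.1 * flow(p)
    endpoints.append(tuple(np.round(p, 1)))

basins = set(endpoints)  # distinct settling points = distinct basins
```

In the real semantic-flow setting, the vector field comes from the fitted flow model rather than a hand-written function, but the basin-finding logic is the same: follow the flow and see where it ends up.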

The video is from this interactive web version, which you can try here:
https://pixedar.github.io/ai/tracescope/

The tool that was used to build these semantic flows is my open source repo here:
https://github.com/Pixedar/TraceScope

If you are interested in the details of how the points are projected and how the axes are computed, there is an explanation in the repo README as well. I also explained more in my previous post about semantic flow, where I mapped step-by-step LLM reasoning and explained the details in the comments:
https://www.reddit.com/r/learnmachinelearning/comments/1suorcm/mapped_the_semantic_flow_of_stepbystep_llm

I made this web demo version to make the semantic flow concept more accessible.

Limitations:

One issue is that the paper data might not be ideal: there is a lot of randomness in when a given paper gets published, which introduces noise. Nevertheless, it should still approximate the global trends. The TraceScope open-source repo works better with natively time-series-like data, such as step-by-step reasoning.

This result cannot be treated as a peer-reviewed-quality finding about current research directions, since proper statistical validation would take a lot of time. So if you want to use it for research, you should experiment with the model parameters and validate it statistically.

u/Pixedar — 6 days ago
▲ 131 · r/compmathneuro + 2 crossposts

I made an open-source repo that combines brain information flow derived from real fMRI data with an LLM, including RAG-based interpretation of this flow and of information propagation in the brain: https://github.com/Pixedar/MindVisualizer

It is not peer-review quality and should instead be treated as a tool for building intuition about the brain and a mental model of brain dynamics. It is more of an exploratory visualization / intuition-building tool, and I would be happy to hear feedback from people who know the field better.

u/Pixedar — 16 days ago