
I taught AI the 13 thinking tools that Einstein and Picasso used — it independently discovered laws I spent months extracting manually
What it is
An open-source framework where AI uses the same 13 cognitive tools that history's greatest minds used (from the book "Sparks of Genius" by Root-Bernstein, 1999): observe, imagine, abstract, find patterns, analogize, empathize, play, transform, synthesize, etc.
You give it a goal + data. It thinks through the data using all 13 tools and extracts core principles.
GitHub: https://github.com/PROVE1352/cognitive-sparks
Why I built it
Every AI agent framework (LangGraph, CrewAI, AutoGPT) teaches agents what to do — call tools, manage state, follow workflows.
Nobody teaches them how to think.
I wanted to see: if the 13 thinking tools are truly universal (used by scientists, artists, and engineers identically), can we implement them as AI primitives?
The weird part: it has a nervous system
Most frameworks use a "CEO pattern" — one orchestrator tells tools what to run in what order. That's how corporations work, not how intelligence works.
Sparks has an actual neural circuit (~30 neuron populations, ~80 learned connections). Tools don't run in a fixed order. The execution sequence emerges from neural dynamics:
- Empty state → "observation hunger" signal drives the observe tool to fire first
- After observations → pattern recognition neurons activate highest
- After patterns → abstraction neurons win
- No code says "observe then patterns then abstract." It just happens.
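The idea of order emerging from activations rather than a scripted pipeline can be sketched in a few lines. Everything below is illustrative (tool names, weights, and dynamics are hypothetical, not Sparks' actual code): populations carry activation levels, learned connections excite downstream populations, and whichever tool's population is most active fires next.

```python
# Hypothetical sketch of activation-driven tool sequencing.
# All names, weights, and dynamics are illustrative, not the framework's code.

TOOLS = ["observe", "find_patterns", "abstract"]

# Learned connection weights: firing one tool excites the next population.
weights = {
    ("observe", "find_patterns"): 0.9,
    ("find_patterns", "abstract"): 0.8,
}

def next_tool(activations):
    """Pick the most active tool population (winner-take-all)."""
    return max(activations, key=activations.get)

def run_session(steps=3):
    # An empty state leaves "observation hunger" as the dominant drive.
    activations = {"observe": 1.0, "find_patterns": 0.1, "abstract": 0.1}
    fired = []
    for _ in range(steps):
        tool = next_tool(activations)
        fired.append(tool)
        activations[tool] = 0.0          # refractory: a fired tool resets
        for (pre, post), w in weights.items():
            if pre == tool:
                activations[post] += w   # excite downstream populations
    return fired

print(run_session())  # -> ['observe', 'find_patterns', 'abstract']
```

Note there is no "observe then patterns then abstract" rule anywhere: the sequence falls out of the initial state and the connection weights.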
The connections learn via STDP (spike-timing-dependent plasticity) and evolve across sessions. The framework literally gets smarter with every use.
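For readers unfamiliar with STDP: the classic rule strengthens a connection when the pre-synaptic spike precedes the post-synaptic one (causal pairing) and weakens it otherwise. A minimal sketch of the textbook exponential form, with illustrative constants (not Sparks' actual values):

```python
import math

# Illustrative STDP constants: potentiation/depression amplitudes and a
# time constant in milliseconds. Not the framework's actual parameters.
A_PLUS, A_MINUS, TAU = 0.1, 0.12, 20.0

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    Pre fires before post (dt > 0) -> potentiate (causal pairing).
    Post fires before pre (dt < 0) -> depress (anti-causal pairing).
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

print(stdp_delta(10.0, 15.0))  # positive: "observe then patterns" strengthens
print(stdp_delta(15.0, 10.0))  # negative: the reverse pairing weakens
```

Applied to tool populations, this means a tool ordering that keeps producing useful downstream activity gets reinforced across sessions.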
The validation that convinced me
I had 15 months of densely analyzed market data. Over that period, I manually extracted 3 "core laws" governing market behavior, which took months of work.
I fed the raw data to Sparks: "find the fundamental laws."
It found 12 principles. The top 3 matched my manually-extracted laws. Plus 9 additional principles I hadn't formalized.
| Metric | Standard (7 tools) | Deep (13 tools) |
|---|---|---|
| Principles | 7 | 12 |
| Avg confidence | 80% | 91% |
| Coverage | 68% | 85% |
| Cost | $6 | $9 |
The 6 "creative" tools (imagine, body-think, empathize, play, shift-dimension, transform) contributed 5 principles that the analytical-only pipeline missed.
What makes it different
LangGraph/CrewAI: Conductor tells musicians what to play and when
Sparks: No conductor. Musicians hear each other. Order emerges.
- 13 cognitive primitives (not just "call this API")
- Neural circuit drives execution (not if-else rules)
- Self-optimization: it analyzes its own output quality and fixes its own prompts
- Full loop: extract → validate → evolve → predict → feedback
- Multi-model: Claude, GPT-4o, Gemini, Ollama — any LLM backend
- Cross-session learning: connection weights persist and evolve
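The extract → validate → evolve → predict → feedback loop can be sketched as a simple driver. Every function below is an illustrative stub (not Sparks' actual API), just to show how the stages chain together and how learned principles feed back into the working data:

```python
# Hypothetical sketch of the extract -> validate -> evolve -> predict ->
# feedback loop. All functions here are illustrative stubs.

def extract_principles(data):
    # Stand-in for the 13-tool extraction pass.
    return [{"text": f"principle from {d}", "confidence": 0.9} for d in data]

def evolve_weights(principles):
    # Stand-in for the STDP-style connection update between sessions.
    pass

def run_loop(data, rounds=2, confidence_floor=0.8):
    principles = []
    for _ in range(rounds):
        candidates = extract_principles(data)
        # Validate: keep only principles above the confidence floor.
        principles = [p for p in candidates
                      if p["confidence"] >= confidence_floor]
        evolve_weights(principles)
        # Predict + feedback: fold what was learned back into the working data
        # so the next round extracts against an enriched view.
        data = data + [p["text"] for p in principles]
    return principles

print(len(run_loop(["obs1", "obs2"])))  # -> 4
```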
Try it
```shell
git clone https://github.com/PROVE1352/cognitive-sparks
cd cognitive-sparks
pip install -e .
sparks run --goal "Find the core principles" --data ./your-data/ --depth standard
```
Works with Claude Code CLI (free with subscription), OpenAI, Google Gemini, or any OpenAI-compatible API (Ollama, Groq).
What's next
- Google Colab notebook (try without installing)
- Benchmark against GPT-Researcher, STORM
- Embedding-based convergence detection
Built solo with Claude Code over a long weekend. Happy to answer any questions about the architecture or results.