r/AI_Application

Which AI app are you actually lowkey addicted to right now?

Curious what everyone here is actually using daily, not just testing once and forgetting.

For me, it ended up being this AI news app CuriousCats AI. I installed it thinking I would try it for a day or two, and now it has quietly become part of my routine. I open it in the morning, skim a few summaries, maybe tap into one or two stories, and that is pretty much my news time for the day.

It is not the flashiest AI thing I use, and it definitely is not perfect, but it is the one I keep coming back to without thinking about it. If you want to try it, it's available on iPhone and Android.

u/saalipagal — 24 minutes ago
[P] I built an AI framework with a real nervous system (17 biological principles) instead of an orchestrator — inspired by a 1999 book about how geniuses think

I'm a CS sophomore who read "Sparks of Genius" (Root-Bernstein, 1999) — a book about the 13 thinking tools shared by Einstein, Picasso, da Vinci, and Feynman.

I turned those 13 tools into AI agent primitives, and replaced the standard orchestrator with a nervous system based on real neuroscience:

- Threshold firing (signals accumulate → fire → reset, like real neurons)

- Habituation (repeated patterns auto-dampen)

- Hebbian plasticity ("fire together, wire together" between tools)

- Lateral inhibition (tools compete, most relevant wins)

- Homeostasis (overactive tools auto-inhibited)

- Autonomic modes (sympathetic=explore, parasympathetic=integrate)

- 11 more biological principles
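A minimal sketch of how a few of these principles could combine in one unit, assuming a hypothetical `ToolNeuron` class (not the repo's actual code): signals accumulate until a threshold is crossed, then the neuron fires and resets (threshold firing); repeated identical inputs contribute less (habituation); and frequent firing raises the threshold (homeostasis).

```python
from collections import Counter

class ToolNeuron:
    """Hypothetical tool neuron: threshold firing + habituation + homeostasis."""

    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.potential = 0.0
        self.seen = Counter()       # pattern counts, used for habituation
        self.fire_count = 0

    def receive(self, pattern, strength):
        # Habituation: each repeat of the same pattern contributes less.
        damp = 1.0 / (1 + self.seen[pattern])
        self.seen[pattern] += 1
        self.potential += strength * damp
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing, like a real neuron
            self.fire_count += 1
            # Homeostasis: an overactive neuron raises its own threshold.
            self.threshold *= 1.05
            return True             # fire -> the tool would run now
        return False

neuron = ToolNeuron("observe", threshold=1.0)
fired = [neuron.receive("new-data", 0.6) for _ in range(4)]
# Fires only on the third signal: habituation slows accumulation,
# and after firing the raised threshold inhibits the fourth.
```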

No conductor. Tools sense shared state and self-coordinate — like a starfish (no brain, 5 arms coordinate through local rules).

What it does: Give it a goal + any data → it observes, finds patterns, abstracts to core principles (Picasso Bull method), draws structural analogies, builds a cardboard model, and synthesizes.

Demo: I analyzed the Claude Code source leak (3 blog posts). It extracted 3 architecture laws with analogies to the Maginot Line and Chernobyl reactor design.

**What no other framework has:**

- 17 biological nervous system principles (LangGraph: 0, CrewAI: 0, AutoGPT: 0)

- Picasso Bull abstraction (progressively remove non-essential until essence remains)

- Absent pattern detection (what's MISSING is often the strongest signal)

- Sleep/consolidation between rounds (like real sleep — prune noise, strengthen connections)

- Evolution loop (AutoAgent-style: mutate → benchmark → keep/rollback)
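The evolution loop in the last bullet could look something like the following sketch, where `mutate` and `benchmark` are illustrative stand-ins for prompt mutation and quality scoring (not functions from the repo):

```python
import random

def mutate(config, rng):
    # Illustrative mutation: nudge one knob at random.
    new = dict(config)
    new["temperature"] = min(1.0, max(0.0,
        config["temperature"] + rng.uniform(-0.1, 0.1)))
    return new

def benchmark(config):
    # Stand-in objective: pretend output quality peaks at temperature 0.7.
    return 1.0 - abs(config["temperature"] - 0.7)

def evolve(config, rounds=20, seed=0):
    rng = random.Random(seed)
    best_score = benchmark(config)
    for _ in range(rounds):
        candidate = mutate(config, rng)
        score = benchmark(candidate)
        if score > best_score:
            config, best_score = candidate, score   # keep the mutation
        # otherwise: rollback, i.e. silently discard the candidate
    return config, best_score

cfg, score = evolve({"temperature": 0.2})
```

The keep/rollback rule guarantees the benchmark score is monotonically non-decreasing across rounds.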

Built entirely with Claude Code. No human wrote a single line.

GitHub: https://github.com/PROVE1352/cognitive-sparks

Happy to answer questions about the neuroscience mapping or the architecture.

u/RadiantTurnover24 — 20 hours ago
I taught AI the 13 thinking tools that Einstein and Picasso used — it independently discovered laws I spent months extracting manually

What it is

An open-source framework where AI uses the same 13 cognitive tools that history's greatest minds used (from the book "Sparks of Genius" by Root-Bernstein, 1999): observe, imagine, abstract, find patterns, analogize, empathize, play, transform, synthesize, etc.

You give it a goal + data. It thinks through the data using all 13 tools and extracts core principles.

GitHub: https://github.com/PROVE1352/cognitive-sparks

Why I built it

Every AI agent framework (LangGraph, CrewAI, AutoGPT) teaches agents what to do — call tools, manage state, follow workflows.

Nobody teaches them how to think.

I wanted to see: if the 13 thinking tools are truly universal (used by scientists, artists, and engineers identically), can we implement them as AI primitives?

The weird part: it has a nervous system

Most frameworks use a "CEO pattern" — one orchestrator tells tools what to run in what order. That's how corporations work, not how intelligence works.

Sparks has an actual neural circuit (~30 neuron populations, ~80 learned connections). Tools don't run in a fixed order. The execution sequence emerges from neural dynamics:

  • Empty state → "observation hunger" signal drives the observe tool to fire first
  • After observations → pattern recognition neurons activate highest
  • After patterns → abstraction neurons win
  • No code says "observe then patterns then abstract." It just happens.
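A toy sketch of that emergence, with hypothetical relevance rules (not the repo's circuit): each tool scores itself against shared state, lateral inhibition picks a single winner, and the observe → patterns → abstract sequence falls out of the scoring, not out of an explicit ordering.

```python
def relevance(tool, state):
    # Each tool senses shared state and reports its own urgency.
    if tool == "observe":
        return 1.0 if not state["observations"] else 0.1   # "observation hunger"
    if tool == "find_patterns":
        return 0.9 if state["observations"] and not state["patterns"] else 0.1
    if tool == "abstract":
        return 0.8 if state["patterns"] else 0.1
    return 0.0

OUTPUT_KEY = {"observe": "observations",
              "find_patterns": "patterns",
              "abstract": "principles"}

def step(state):
    tools = ["observe", "find_patterns", "abstract"]
    # Lateral inhibition: the most relevant tool wins; the rest stay silent.
    winner = max(tools, key=lambda t: relevance(t, state))
    state[OUTPUT_KEY[winner]].append(winner)   # stand-in for the tool's output
    return winner

state = {"observations": [], "patterns": [], "principles": []}
order = [step(state) for _ in range(3)]
# order == ["observe", "find_patterns", "abstract"] without any
# code prescribing that sequence.
```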

The connections learn via STDP (spike-timing dependent plasticity) and evolve across sessions. The framework literally gets smarter with every use.
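The classic STDP update rule looks like this; constants here are illustrative, not the framework's actual values. A connection strengthens when the pre-synaptic tool fires shortly before the post-synaptic one, and weakens when the order is reversed:

```python
import math

def stdp(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """STDP update. dt = t_post - t_pre; positive dt -> potentiation."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)    # pre before post: strengthen
    elif dt < 0:
        weight -= a_minus * math.exp(dt / tau)    # post before pre: weaken
    return max(0.0, min(1.0, weight))             # clamp to [0, 1]

w = 0.5
w_up = stdp(w, dt=5.0)      # "observe" fired just before "find_patterns"
w_down = stdp(w_up, dt=-5.0)  # reversed order weakens the connection
```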

The validation that convinced me

I had 15 months of densely analyzed market data. Over those months, I manually extracted 3 "core laws" governing market behavior. Took months of work.

I fed the raw data to Sparks: "find the fundamental laws."

It found 12 principles. The top 3 matched my manually-extracted laws. Plus 9 additional principles I hadn't formalized.

                 Standard (7 tools)   Deep (13 tools)
Principles       7                    12
Avg confidence   80%                  91%
Coverage         68%                  85%
Cost             $6                   $9

The 6 "creative" tools (imagine, body-think, empathize, play, shift-dimension, transform) contributed 5 principles that the analytical-only pipeline missed.

What makes it different

LangGraph/CrewAI:  Conductor tells musicians what to play and when
Sparks:            No conductor. Musicians hear each other. Order emerges.
  • 13 cognitive primitives (not just "call this API")
  • Neural circuit drives execution (not if-else rules)
  • Self-optimization: it analyzes its own output quality and fixes its own prompts
  • Full loop: extract → validate → evolve → predict → feedback
  • Multi-model: Claude, GPT-4o, Gemini, Ollama — any LLM backend
  • Cross-session learning: connection weights persist and evolve
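Cross-session learning can be as simple as round-tripping the connection weights through a file; this is a hedged sketch with an illustrative file name and default weight, not the repo's persistence format:

```python
import json
import os
import tempfile

def load_weights(path):
    # Resume from the previous session if a weights file exists.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"observe->find_patterns": 0.5}   # fresh-circuit default

def save_weights(weights, path):
    with open(path, "w") as f:
        json.dump(weights, f)

# Session 1: load defaults, learn, persist.
path = os.path.join(tempfile.mkdtemp(), "weights.json")
w1 = load_weights(path)
w1["observe->find_patterns"] += 0.05   # learned during this session
save_weights(w1, path)

# Session 2: starts from the evolved weights, not the defaults.
w2 = load_weights(path)
```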

Try it

pip install -e .
sparks run --goal "Find the core principles" --data ./your-data/ --depth standard

Works with Claude Code CLI (free with subscription), OpenAI, Google Gemini, or any OpenAI-compatible API (Ollama, Groq).

What's next

  • Google Colab notebook (try without installing)
  • Benchmark against GPT-Researcher, STORM
  • Embedding-based convergence detection

Built solo with Claude Code over a long weekend. Happy to answer any questions about the architecture or results.
