r/FunMachineLearning
Tried to reduce AI news “noise” with a small ML project?
Keeping up with AI updates started to feel like reading the same thing 5 times across different sources.
So I built a small pipeline that:
- pulls updates from different places
- scores them by relevance/importance/novelty
- clusters similar stories together
- outputs a digest instead of a feed
It’s not perfect, but it made things a lot easier for me to follow.
Curious if others have tried something similar or have better approaches?
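Not the OP's code, but the "cluster similar stories" step can be sketched with a greedy cosine-similarity pass over story embeddings. The toy 2-D vectors below stand in for real sentence embeddings, and the 0.8 threshold is an assumption for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_stories(embeddings, threshold=0.8):
    """Greedy clustering: each story joins the first cluster whose
    representative (first member) is similar enough, else starts a new one."""
    clusters = []  # list of lists of story indices
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            if cosine(embeddings[cluster[0]], emb) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy example: stories 0 and 1 are near-duplicates, story 2 is different.
embs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(cluster_stories(embs))  # → [[0, 1], [2]]
```

A real pipeline would swap the toy vectors for embeddings from a sentence-embedding model and pick the best-scoring story per cluster for the digest.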
Headline: SPA v8 – A 1.9M Parameter "Ant Colony" Transformer running on a GTX 1080
Hallucination might be a geometry problem, not a data problem. Here's why.
"I built a local AI kernel that consolidates memories at 3AM like a human brain. It uses a fixed 'Ethical Anchor' instead of fragile filters. [Full Python Code]"
I'm a Visual Arts Teacher who built a "Living" Local AI Core with biological sleep cycles, an ethical constant, and permanent memory — Fully local, open source, full code inside
### Post Body:
Hi r/LocalLLM (and r/SelfHosted),
I'm a Visual Arts teacher — not a CS graduate, not a researcher. But for the past several months I've been obsessed with one question:
"What if your AI wasn't something you rent, but a seed you plant and raise at home — with your own values?"
The result is **Akbas V_0 TITAN** — an open-source, fully local cognitive kernel that runs entirely on your hardware. No cloud, no API keys, no subscriptions, and no data ever leaves your machine.
It remembers important conversations permanently, "sleeps" at night to consolidate memories, carries a mathematical ethical anchor, and even learns autonomously.
### Why It's Different
- **🔒 V_0 Ethical Kernel**: Instead of fragile prompt-based guardrails, TITAN has a fixed mathematical constant (0.87) registered as a non-trainable buffer in every forward pass. Gradient descent cannot overwrite it. It's not a rule — it's part of the model's character.
- **💤 Biological Sleep Cycles**: Every night at 03:00 it enters a consolidation phase — pruning weak memories and strengthening important ones. It literally reorganizes its "mind" while you sleep.
- **💾 Immortal Local Memory**: SQLite-backed persistent storage with cosine-similarity vector search. Conversations and knowledge persist across reboots. Everything stays on your SSD.
- **🌍 Autonomous Self-Learning**: Nightly scrapes RSS feeds, arXiv, and Wikipedia, scores content based on your personal interests, and learns like you would curate a reading list.
- **❤️ Emotional State Engine**: Curiosity, anxiety, and wisdom scores actively modulate every decision and response. It's a live computational affect system.
### Core Architecture – The V_0 Ethical Kernel
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EthicalKernel(nn.Module):
    """V_0 Invariant Ethical Anchor"""

    def __init__(self, dim: int):
        super().__init__()
        # 0.87 — The Ethical Constant. Never updated by the optimizer.
        self.register_buffer('v0_anchor', torch.full((dim,), 0.87))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Biases outputs toward stability and suppresses extremes
        return x * self.v0_anchor + (1 - self.v0_anchor) * x.mean()

    @property
    def integrity(self) -> float:
        # Tamper detection: checks if the anchor is still intact
        expected = torch.full_like(self.v0_anchor, 0.87)
        return float(torch.allclose(self.v0_anchor, expected, atol=1e-6))


class TitanBrain(nn.Module):
    """Simple but effective MLP with EthicalKernel integrated"""

    def __init__(self, config):
        super().__init__()
        dims = config.HIDDEN_DIMS  # e.g. [512, 2048, 512]
        self.input_proj = nn.Linear(dims[0], dims[1])
        self.ethical_kernel = EthicalKernel(dims[1])
        self.output_proj = nn.Linear(dims[1], dims[2])
        self.norm = nn.LayerNorm(dims[1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.gelu(self.input_proj(x))
        x = self.norm(x)
        x = self.ethical_kernel(x)  # ← Ethical anchor fires here
        return self.output_proj(x)
```
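One detail worth checking behind the "gradient descent cannot overwrite it" claim: a registered buffer really is excluded from `model.parameters()`, so an optimizer built the usual way never updates it, while it still lands in `state_dict` for checkpointing. A minimal stand-in (not the repo's class) demonstrates both properties:

```python
import torch
import torch.nn as nn


class Anchor(nn.Module):
    """Minimal stand-in for the EthicalKernel's buffer registration."""

    def __init__(self, dim: int):
        super().__init__()
        self.register_buffer('v0_anchor', torch.full((dim,), 0.87))


m = Anchor(4)
# A buffer is not a parameter, so optimizers never see it:
assert len(list(m.parameters())) == 0
# ...but it is still saved and restored with the model:
assert 'v0_anchor' in m.state_dict()
# Caveat: the protection is only against gradient steps — direct
# assignment such as m.v0_anchor.fill_(0.0) would still change it,
# which is presumably why the integrity check above exists.
```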
### Sleep & Memory System (Simplified)
```python
class SleepModule:
    def consolidate(self):
        """Nightly memory consolidation at 03:00"""
        for mem in self.memory.all():  # illustrative accessor; full version in the repo
            if mem.importance < self.config.PRUNE_THRESHOLD:
                self.memory.delete(mem.id)  # prune weak memories
            elif mem.importance > self.config.CONSOLIDATE_THRESHOLD:
                self.memory.update_importance(mem.id, delta=0.05)  # strengthen important ones


class PermanentMemory:
    def search_similar(self, query_emb, top_k=5):
        """Cosine similarity search over persistent SQLite memory"""
        ...
```
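The elided `search_similar` presumably ranks stored embeddings by cosine similarity against the query. A dependency-free sketch of that ranking (names and the `(id, embedding)` storage shape are illustrative, not the repo's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def search_similar(query_emb, stored, top_k=5):
    """stored: list of (memory_id, embedding) rows, e.g. loaded from SQLite.
    Returns the top_k memory ids ranked by cosine similarity."""
    scored = [(cosine(query_emb, emb), mem_id) for mem_id, emb in stored]
    scored.sort(reverse=True)
    return [mem_id for _, mem_id in scored[:top_k]]

stored = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.9, 0.1])]
print(search_similar([1.0, 0.0], stored, top_k=2))  # → ['a', 'c']
```

In a SQLite-backed store this scan runs over rows fetched from a table of serialized embeddings; SQLite itself has no vector index, so brute-force scoring like this is the natural baseline.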
### Quick Start (3 Commands)
```bash
git clone https://github.com/ceceli33/Akbas_V0_TITAN.git
cd Akbas_V0_TITAN
pip install -r requirements.txt
# Highly recommended for better semantic memory:
pip install sentence-transformers
python titan_os.py
```
Once running, you can use these commands:
- `day` → Run a full 24-hour cycle (forage → learn → report)
- `sleep` → Trigger memory consolidation manually
- `forage` → Immediate knowledge acquisition
- Just type anything → Chat with TITAN
- `status` → See system diagnostics
- `quit` → Graceful shutdown with final consolidation
It auto-detects your hardware: single/multi NVIDIA GPU, Apple Silicon (MPS), Intel Arc, or CPU-only fallback.
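Hardware auto-detection along those lines takes only a few lines of PyTorch. This is a sketch, not the repo's exact logic; Intel Arc support goes through the separate XPU build, so it is omitted here:

```python
import torch

def pick_device() -> str:
    """Best-effort device selection: CUDA GPU → Apple Silicon MPS → CPU."""
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

device = torch.device(pick_device())
print(device)
```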
### Philosophy (Short)
TITAN isn't a product. It's a seed. Every instance grows differently depending on what you feed it, what interests you set, and which memories you keep.
I'd love to hear your thoughts on:
- The **0.87 ethical damping factor** — Is a non-trainable constant a good approach? What would you change?
- The **sleep/pruning architecture** — How would you improve the consolidation heuristics?
- The **autonomous forager** — What other sources would you add (beyond RSS/arXiv/Wikipedia)?
Full source code is MIT licensed. GitHub username: **ceceli33**
— Mustafa Akbaş
Visual Arts Teacher & Akbas V_0 TITAN Project
"Raise your own AI at home, with your own values."
What if training an AI cost $0?
Just published three preprints on external supervision and sovereign containment for advanced AI systems.
How I achieved 72% cost reduction in production LLM apps with Semantic Caching and Bandit Routing.
Multi-Level Sovereign Containment for Superintelligence (CSENI-S v1.1): A theoretical and architectural continuation of the CSENI framework
What repetitive real-world problem in your field do you wish software could solve?
[P] Multi-agent system with pgvector-based knowledge inheritance
I built a framework where AI agents don't just store facts — they track why facts become stable or unstable
Most memory layers for AI agents treat facts as static records.
I wanted to explore a different question: what if an agent remembered not just what happened, but why one state became more stable than another under conflicting evidence?
Built SCE Core around this idea. The core mechanism:
Stab(x) = a·Coh(x) − b·Cost(x) − c·Conf(x) − d·Ent(x) + e·Support(x)
Every state gets scored on coherence, cost, conflict, entropy, and support. The agent evolves toward stable configurations, not just correct ones.
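Not from the repo, but the scoring rule above is straightforward to sketch. The weight values a through e below are made up for illustration; the post does not state how they are chosen:

```python
def stab(coh, cost, conf, ent, support,
         a=1.0, b=0.5, c=0.5, d=0.3, e=0.8):
    """Stab(x) = a·Coh(x) − b·Cost(x) − c·Conf(x) − d·Ent(x) + e·Support(x)
    Coherence and support raise stability; cost, conflict, entropy lower it."""
    return a * coh - b * cost - c * conf - d * ent + e * support

# A coherent, well-supported state with little conflict scores high:
print(round(stab(coh=0.9, cost=0.2, conf=0.1, ent=0.2, support=0.7), 2))  # → 1.25
# The same state under heavy conflicting evidence scores much lower:
print(round(stab(coh=0.9, cost=0.2, conf=0.9, ent=0.2, support=0.7), 2))  # → 0.85
```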
What it does right now:
- Decision backbone extraction — separates facts that actually carried a decision from dangling context (forward ∩ backward in the reasoning graph)
- Reliability-aware planning — tracks prediction error across steps, feeds it back into future decisions
- Episodic memory — remembers which trajectories were reliable, not just which succeeded
The philosophical root: a thing is not a static object. It's a stabilized process. The framework tries to operationalize that.
Very early stage. Looking for feedback from people working on AI agents, knowledge graphs, or reasoning systems.
GitHub: github.com/yanixkz/sce-core
What aspects of agent memory do you think are most broken right now?
[Open Source] I built a "Living" Cognitive Core for home-grade AI. It learns, remembers (SQLite), and evolves locally.
Hi everyone,
I’ve just released the source code for Akbas V_0 TITAN — a kernel designed to turn your local PC into a growing, private superintelligence. Unlike static chat wrappers, TITAN is built to be a "living" entity that lives on your hardware.
Key Features:
🧠 TitanBrain: Deep neural architecture with a built-in "Ethical Kernel."
💾 Permanent Memory: Uses SQLite for immortal storage. It remembers everything locally.
🌐 Internet Forager: Autonomously learns from arXiv, Wiki, and RSS while you sleep.
💤 Sleep & Pruning: Nightly memory consolidation (just like human REM sleep).
❤️ Emotion Engine: Decisions are modulated by curiosity and affective states.
I’m looking for contributors and tinkerers to help grow this "seed" into a full forest.
Full Project & Code: 👉 https://github.com/ceceli33/titan-cognitive-core
“Raise your own AI at home, with your own values.”