u/BackInternational743

I vibe-coded a no-history browser that shows the noise around every page

I’ve been playing with an idea that started as a small question:

What if a browser didn’t treat “searching” and “going somewhere” as the same thing?

So I built a tiny Electron prototype called Nubea.

It has two separate actions:

  • Go to a website — when you already know the destination
  • Search the web — when you actually need a search engine

The other idea is that Nubea keeps no local browsing history, by design. Instead, it shows live metrics about what the current page is trying to load:

  • external requests
  • attempted cookies
  • trackers / analytics
  • ad networks
  • redirects
  • identity syncs
  • blocked requests
  • “cognitive pollution” score
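
To make the metrics concrete, here is a minimal sketch of how a request could be classified relative to the page origin. The blocklists and names here are hypothetical illustrations, not Nubea's actual data or API:

```javascript
// Sketch: classify one outgoing request against the page's origin.
// trackerHosts / adHosts are placeholder blocklists, not real Nubea data.
function classifyRequest(requestUrl, pageUrl, trackerHosts, adHosts) {
  const req = new URL(requestUrl);
  const page = new URL(pageUrl);
  // Matches a host exactly or as a subdomain (a simplification of real
  // blocklist matching, which usually uses registrable domains).
  const matches = (hosts) =>
    hosts.some(h => req.hostname === h || req.hostname.endsWith('.' + h));
  return {
    external: req.hostname !== page.hostname,
    tracker: matches(trackerHosts),
    adNetwork: matches(adHosts),
  };
}
```

Counting the `external`, `tracker`, and `adNetwork` flags across all requests on a page would give the raw numbers behind metrics like these.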

It has three modes:

Normal
Loads the web normally and only observes.

Clean
Blocks known analytics / ads / tracking.

Mirror
Blocks third-party requests aggressively, so you can see how much of the page depends on external systems.
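
The three modes boil down to a per-request blocking policy. A minimal sketch, assuming exact-host matching (real blocklist matching is more involved):

```javascript
// Sketch of the per-mode policy. Function and parameter names are
// illustrative, not Nubea's actual internals.
function shouldBlock(mode, requestHost, pageHost, knownTrackerHosts) {
  const thirdParty = requestHost !== pageHost;
  const isTracker = knownTrackerHosts.includes(requestHost);
  switch (mode) {
    case 'normal': return false;      // observe only, block nothing
    case 'clean':  return isTracker;  // block known analytics/ads/tracking
    case 'mirror': return thirdParty; // block all third-party requests
    default: throw new Error(`unknown mode: ${mode}`);
  }
}
```

In Electron, a policy like this could be wired into `session.webRequest.onBeforeRequest`, cancelling requests the active mode rejects.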

The funny part is that after testing news sites, the browser started showing hundreds of invisible requests around a single article. That became the real concept: a browser that shows you the noise around every page.

It’s still rough. It has bugs. It uses Chromium through Electron, so I’m not pretending I built a browser engine from scratch.

But the prototype works, and the idea feels strong:

Nubea is not trying to be another Chrome.
It is trying to separate navigation from capture.

Repo: https://github.com/Hanzzel-corp/nubea

Would love feedback, especially on the UX and the idea of separating “go to a site” from “search the web”.

u/BackInternational743 — 2 days ago

I am sharing this mainly for technical feedback on the validation design, not as a claim of state-of-the-art performance.

Repo:

https://github.com/Hanzzel-corp/nct-depth-motif

The idea is to test whether local depth-map structure can be represented as discrete 3D symbolic motifs across X/Y/Z components, and whether those motifs survive statistically against random baselines.
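
As a toy illustration of what "discrete symbolic motifs" could look like, here is one way to quantize a local depth patch into a symbol. This is my guess at the flavor of the idea, not the repo's actual encoding:

```javascript
// Hypothetical: encode a 2x2 depth patch as a symbol by quantizing the
// sign of local differences into {-, 0, +}. The x/y/diagonal choice is
// illustrative only; the repo's X/Y/Z components may differ.
function motifSymbol(patch) {
  // patch: [[d00, d01], [d10, d11]] depth values
  const sign = v => (v > 0 ? '+' : v < 0 ? '-' : '0');
  const dx = patch[0][1] - patch[0][0]; // difference along x
  const dy = patch[1][0] - patch[0][0]; // difference along y
  const dd = patch[1][1] - patch[0][0]; // diagonal difference
  return sign(dx) + sign(dy) + sign(dd);
}
```

Discrete symbols like these can then be counted and compared against randomly generated motifs, which is where the statistical testing comes in.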

It includes:

- RGB-D / depth-map experiments

- grouped split validation

- RGB-cluster leave-one-out validation

- CUDA-accelerated random baseline evaluation

- empirical p-values

- reproducibility scripts

- documented limitations
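
The point of the grouped split is that whole scenes land entirely in train or entirely in test, so near-duplicate frames from one scene can never leak across the split. A minimal sketch (names like `sceneId` and `testFraction` are illustrative, not the repo's API):

```javascript
// Small seeded PRNG so the split is reproducible.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Group-aware split: shuffle scene ids, then assign a prefix of whole
// scenes to the test set.
function groupedSplit(samples, testFraction = 0.2, seed = 1) {
  const rand = mulberry32(seed);
  const scenes = [...new Set(samples.map(s => s.sceneId))];
  for (let i = scenes.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(rand() * (i + 1));
    [scenes[i], scenes[j]] = [scenes[j], scenes[i]];
  }
  const nTest = Math.max(1, Math.round(scenes.length * testFraction));
  const testScenes = new Set(scenes.slice(0, nTest));
  return {
    train: samples.filter(s => !testScenes.has(s.sceneId)),
    test: samples.filter(s => testScenes.has(s.sceneId)),
  };
}
```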

The strongest current variant is motif_survival_binary, which showed a consistent but modest positive signal against random motif baselines.
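
For context, an empirical p-value against random baselines is just the fraction of baselines that score at least as well as the observed result, usually with a +1 correction so the estimate is never exactly zero. A minimal sketch (not the repo's code, which runs this on CUDA):

```javascript
// Empirical p-value of an observed score against random baseline scores,
// with the standard +1 correction in numerator and denominator.
function empiricalPValue(observed, baselineScores) {
  const atLeast = baselineScores.filter(b => b >= observed).length;
  return (atLeast + 1) / (baselineScores.length + 1);
}
```

With 999 random baselines and none reaching the observed score, this gives p = 1/1000, which is also the smallest p-value that many baselines can resolve.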

Important clarification: this is not a peer-reviewed result. I am especially interested in feedback on the validation design, the baselines, and whether the scene-split methodology is strong enough.

Feedback is very welcome.

u/BackInternational743 — 12 days ago
r/node

Built a personal project I'm finally sharing publicly.

**Blue Arrow** is a modular PC automation framework. The core idea: instead of parsing text commands ambiguously, the system uses a state machine (IDLE → INTENT → PLANNING → EXECUTING → VERIFYING → COMPLETED).
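
That pipeline can be sketched as a transition table. Note the VERIFYING → PLANNING edge is my assumption (a failed verification presumably replans), not something the post confirms:

```javascript
// Sketch of the state machine as an explicit transition table.
const TRANSITIONS = {
  IDLE:      ['INTENT'],
  INTENT:    ['PLANNING'],
  PLANNING:  ['EXECUTING'],
  EXECUTING: ['VERIFYING'],
  VERIFYING: ['COMPLETED', 'PLANNING'], // retry edge is an assumption
  COMPLETED: ['IDLE'],
};

// Reject any transition the table does not allow.
function transition(state, next) {
  if (!(TRANSITIONS[state] || []).includes(next)) {
    throw new Error(`invalid transition: ${state} -> ${next}`);
  }
  return next;
}
```

Making illegal transitions throw is what buys the unambiguity: a parsed intent can never jump straight to EXECUTING without passing PLANNING.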

**Architecture highlights:**

- Runtime in Node.js orchestrates 30 modules via a JSON Lines message bus

- Each module declares its ports (inputs/outputs) in a manifest – no cross-imports

- A Python Verifier Engine calculates confidence scores after each action (checks process PID, window ID, focus state via wmctrl/xdotool)

- Local AI via Ollama/LLaMA handles intent parsing and text generation

- Core vs Satellite module classification – satellite modules fail gracefully without taking down the system
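
For readers unfamiliar with JSON Lines framing: each message on the bus is one JSON object per line, so a decoder only needs to buffer partial chunks until a newline arrives. A minimal sketch (not Blue Arrow's actual bus code):

```javascript
// Sketch of a JSON Lines decoder for a stream-based message bus.
// Returns a feed function; onMessage fires once per complete line.
function makeDecoder(onMessage) {
  let buf = '';
  return (chunk) => {
    buf += chunk;
    let idx;
    while ((idx = buf.indexOf('\n')) !== -1) {
      const line = buf.slice(0, idx).trim();
      buf = buf.slice(idx + 1); // keep any partial trailing message
      if (line) onMessage(JSON.parse(line));
    }
  };
}
```

The nice property for a module bus is that framing survives arbitrary chunk boundaries: a message split across two socket reads is reassembled transparently.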

**Why I built this:**

I wanted automation that *verifies* it worked, not just fires and forgets.

**Repo:** https://github.com/Hanzzel-corp/blue-arrow

**Stack:** Node.js 20 / Python 3.11 / Linux

**License:** MIT

v0.1.0 just released. Feedback welcome.

u/BackInternational743 — 13 days ago