r/AIDeveloperNews

▲ 83 r/AIDeveloperNews+53 crossposts

This sub gets the assignment better than most so I'll be direct.

The no-code movement solved half the problem. You can build almost anything now without knowing how to code, which is genuinely incredible and wasn't true five years ago. But there's still a gap that nobody talks about. Even with the best no-code tools you still have to know which tools to pick, how to connect them, how to write copy that converts, how to set up ad accounts, how to source products, how to structure a funnel. The learning curve didn't disappear, it just moved.

Most people in this sub know exactly what I mean. You've spent a weekend deep in Zapier trying to get two things to talk to each other that should just work. You've rebuilt your Webflow site three times because the first two didn't convert. You've watched your Notion dashboard get more elaborate while the actual business stayed the same size.

That's the gap Locus Founder closes.

You describe what you want to build. The AI handles everything else. It sources products directly from AliExpress and Alibaba (or sells YOUR OWN digital services, products, or content), builds a real storefront around them, writes conversion-optimized copy, then autonomously creates and runs ads on Google, Facebook, and Instagram. No Zapier. No Webflow. No piecing together eight tools that half work. Just a running business.

If you don't have an idea yet it interviews you and figures out what makes sense for your situation.

We got into Y Combinator this year and we're opening 100 free beta spots this week before public launch. Free to use, and you keep everything you make.

For the people in this sub specifically, this isn't a replacement for no-code tools for people who love building. It's for everyone who wanted the outcome but never wanted to become a tools expert to get there. Big difference.

Beta form: https://forms.gle/nW7CGN1PNBHgqrBb8

Happy to answer anything about how it works under the hood.

u/IAmDreTheKid — 2 days ago
▲ 137 r/AIDeveloperNews+30 crossposts

https://stampedios.com

Stamped: the first community-powered iOS app discovery platform

Millions of apps go unnoticed every single year, and it’s not because they’re bad.

The App Store front page is basically a corporate billboard. Apple, Google, Microsoft, the usual names. The same apps that already have millions of users and a marketing budget bigger than most startups will ever see. Meanwhile the indie developer who spent 8 months building something genuinely useful is buried on page 47 where nobody is scrolling.

I’m talking about the AI video editor that does in 30 seconds what Adobe charges you $60 a month for. The finance tool that actually makes sense for how normal people manage money. The productivity app built by one person who got frustrated enough to just build the thing themselves. The privacy scanner that tells you exactly what apps on your phone are tracking you. The note taking app that finally figured out how your brain actually works. The sleep tracker that doesn’t need a subscription to tell you you’re not sleeping enough. The language learning app that doesn’t feel like a game designed to manipulate you into a streak.

None of those apps are on the front page of the App Store. Most of them never will be.

That’s the problem Stamped exists to fix.

Stamped is a community-powered iOS discovery platform built specifically for the apps that deserve to be found. Not the brands. Not the corporations. The builders. Every app gets a real developer profile, community voting across 5 categories, demo videos so you see exactly what you’re getting, and direct links to the developer’s Discord or Telegram so you can actually be part of what they’re building.

We’re in beta right now and looking for indie iOS developers who want their app in front of people who are actually looking.

https://stampedios.com (it’s free)

The best apps shouldn’t be the hardest ones to find.

u/ElkItchy6813 — 8 days ago
▲ 5 r/AIDeveloperNews+6 crossposts

Engineering Whitepaper: Gator

The Gator Sovereign Entity is a hybrid inference system designed to deliver enterprise-grade intelligence to consumer-grade hardware. It moves away from bloated, dependency-heavy AI setups toward a lean, native architecture that prioritizes efficiency and local control.

Philosophy: "Big Boy" Power for Every User

The mission was simple: eliminate the need for $30,000 server clusters. We have built a bridge that allows a user with a mid-range, 6GB or 12GB GPU to command 35B-grade intelligence. By grafting a 35B "Logic Donor" onto a fast, native C++ Kernel, we’ve effectively tricked standard hardware into running lab-level logic. This isn't just an agent; it’s a self-contained intelligence system that manages its own VRAM, allowing for high-density logic on the hardware you already own.

The "Graft" & The Forge (Bootstrap Protocol)

The Bootstrap is the "zero-to-sixty" mechanism for the build. It automates a complex "Build-then-Burn" process to ensure your environment is professional and clutter-free:

The Procurement: It pulls the 35B Logic Donor (~18GB) from a manifest and verifies it via checksum.

The Synthesis: We use llama.cpp as "raw ore," but the real magic is in the rewrite. We’ve taken core components from the Hermes Agent Framework and the OpenClaw Framework and merged them into the ZeroClaw foundation. This isn't a wrapper; it's a native rewrite into the specialized libgator_kern.so binary.

The Purge: Once the kernel is birthed and the 'wakeup' command is verified, the bootstrap incinerates all "installation waste"—the source code, archives, and temporary artifacts are wiped to reclaim disk space.
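The "Build-then-Burn" flow above (fetch, verify, purge) can be sketched in a few lines. This is an illustrative sketch, not Gator's actual bootstrap code; the function names and file layout are invented for the example:

```python
import hashlib
import shutil
from pathlib import Path


def verify_checksum(path: Path, expected_sha256: str) -> bool:
    """Procurement step: stream the downloaded artifact through SHA-256
    and compare against the value from the manifest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MiB chunks so an ~18GB file never sits fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256


def purge(build_dir: Path) -> None:
    """Burn step: wipe source archives and temporary build artifacts
    once the final binary has been verified."""
    shutil.rmtree(build_dir, ignore_errors=True)
```

The key design point is streaming the checksum (so verification works on files far larger than RAM) and only purging after the built artifact passes its own sanity check.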

Out-of-the-Box Mastery: Embedded Skills

Gator arrives fully weaponized with native skills that require zero extra configuration:

The Custom Camofox Skill: Our proprietary stealth-browsing and data-retrieval module. It allows Gator to navigate the web, bypass cluttered JS environments, and pull clean intelligence back into the Lance Scratchpad without leaving a heavy footprint.

Native OpenClaw Compatibility: Because we’ve mapped the OpenClaw DNA into our kernel, Gator can use the entire ecosystem of existing tools and skills natively.

Integrated Voice Layer: Gator isn't just text. We’ve built in a low-latency Voice Chat system that operates directly within the UI and the Telegram gateway. It supports real-time vocal interaction, allowing you to hear the 35B logic process its thoughts with zero-lag response times.

The Soul System: Persistence & Self-Evolution

Unlike standard AI that resets after every prompt, the Gator Soul is a living, evolving state:

Context Management (The Lance Scratchpad): To bridge the gap between the 1.5B chassis and the 35B donor, we implemented the Lance Scratchpad. This acts as a high-speed buffer that manages the massive context flow, ensuring the smaller model doesn't lose the "thread" of the 35B’s complex reasoning.

Dream Maintenance: Through the Agentic Cron, the system performs "Internal Housekeeping" (Dream Maintenance) while you sleep, pruning logs and optimizing LanceDB vector storage.

No Manual Updates: This build doesn't need traditional version updates. If you want a new feature, you simply ask the agent to add it. It will code the addition into the build, map the new logic, and run its own tests to integrate it into the existing architecture.

Sovereign UI & 35B Multi-Worker Scaling

We avoided heavy Electron apps in favor of a low-resource HTMX dashboard:

Personality Adjusters: The UI features "Layers" that allow you to fine-tune traits and behavioral weights in real-time via the Persona Engine.

Toggleable Resource Management: To maintain a "Zero-Waste" footprint, the Voice Chat and Agentic Cron systems can be toggled on or off directly from the dashboard. If you don't need voice interaction, you can kill the service instantly to reclaim overhead resources for the core logic workers.

The Clone Button (35B Force Multiplier): A single click spawns a new 35B Worker. Because of our unique memory management, you can run 6 independent 35B Workers on a 12GB GPU or 3 independent 35B Workers on a 6GB GPU. You can watch the "Prime Gator" delegate tasks to these 35B clones in real-time.

One-Click Telegram: Instantly hook your 35B logic into a Telegram bot for remote access from your phone, complete with voice note support.

Performance & "Ghost Test" Validation

The system is built for speed and stability, verified by the Ghost Test:

VRAM Baseline: The build holds a steady 2228 MiB target, ensuring room for multiple concurrent workers.

Native Speeds: By stripping out the scaffolding and running on a compiled C++ kernel, we’ve hit peak tokens-per-second for 35B logic on mid-spec silicon.

Gator represents a shift to Sovereign Intelligence. It is a lean, self-correcting entity that gives the "little guy" the power of a world-class AI lab in a single-button setup.

https://github.com/Mexor-dev/Gator

u/Mexium — 8 days ago
▲ 11 r/AIDeveloperNews+2 crossposts

open-source AI evaluation platform

The problem I kept seeing:

Companies are deploying AI agents into healthcare, legal, and finance. Their testing process is one developer asking it a few questions and saying "looks good."

The people who actually know what a correct answer looks like — doctors, lawyers, compliance officers — have zero tools they can use. Everything in the eval space requires Python, CLI setup, or JSON configs. Completely inaccessible to domain experts.

What I built:

EvalDesk — open source, self-hostable, no-code AI evaluation.

The workflow is three steps:

Designed specifically so a doctor or lawyer can use it without an engineer in the room. Self-hostable so sensitive data never leaves your infrastructure — critical for HIPAA and legal contexts.

Current features:

What I'm looking for:

Honest feedback. Is this solving a real problem or am I wrong about the gap? Anyone working in AI deployment in regulated industries — does this workflow actually match how your team operates?

GitHub: https://github.com/ramandagar/EvalDesk

u/Immediate-Tap-4777 — 4 days ago

ErnOS AI

ErnOS is a high-performance AI agent engine that runs entirely on your hardware. No cloud. No telemetry. No API keys required. Point it at any GGUF model via llama-server, and you get a full agentic system: a dual-layer inference engine with ReAct reasoning, a 31-tool executor, a 7-tier persistent memory system, an observer audit pipeline, autonomous learning, and a 12-tab WebUI dashboard — all compiled into a single Rust binary.

https://github.com/MettaMazza/ErnOSAgent
(Still a work in progress)


🛡️ Built-in Quality Control
Observer System: A background auditor automatically intercepts and forces retries for hallucinations, laziness, or ignored instructions.
Ironclad Safety: Hardcoded, core-level boundaries prevent unauthorized system access or destructive actions.

🛠️ The Toolbelt (22 Local Tools)
System Access: Executes terminal commands, reads/writes files, and edits codebases directly.
Web & Media: Includes a headless browser, multi-provider web search, and local image generation.
Sub-Agents: Spawns child agents for background task delegation.

🧬 Deep, Persistent Memory
7-Tier System: Mimics human memory with active scratchpads, comprehensive timelines, and saved user preferences.
Skill Building: Converts complex problem-solving experiences into reusable procedures for instant future execution.

📈 Continuous Self-Improvement
Background Learning: Continuously analyzes interactions to adapt to preferences and correct behavior.
Sleep Cycles: Periodically compresses memories, prunes useless data, and solidifies new skills.
Self-Training: Uses past successes and failures to automatically retrain and upgrade its core model.

🔬 "Under the Hood" Control
Brain Inspection: Allows developers to view internal neural activations to understand the AI's decision-making.
Steering: Enables real-time instruction injection to alter personality or behavior mid-process.

🌐 User Interface & Flexibility
12-Tab Dashboard: A comprehensive web UI for chatting, managing memory, monitoring tools live, and adjusting settings.
Voice & Video: Supports live, multimodal audio and video interactions.
Model Freedom: Seamlessly swap between local models (e.g., Llama, Gemma) and external APIs (e.g., OpenAI) without code changes.
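As a rough illustration of the ReAct-style pattern the engine describes (tool actions interleaved with observations fed back to the model), here is a minimal sketch. The tool names and the scripted action list are invented for the example and are not ErnOS's actual API:

```python
from typing import Callable

# Toy tool registry: a real executor would expose dozens of tools
# (terminal, files, browser, etc.); these two exist only for the sketch.
TOOLS: dict[str, Callable[[str], str]] = {
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy calculator
    "echo": lambda text: text,
}


def react_loop(steps: list[tuple[str, str]]) -> list[str]:
    """Run scripted (tool, argument) actions and collect observations.

    In a real agent the model would emit each next action after seeing
    the previous observation; here the "model" is a fixed script so the
    sketch stays runnable without an LLM.
    """
    observations = []
    for tool, arg in steps:
        if tool == "final":          # the agent decides it is done
            observations.append(arg)
            break
        observations.append(TOOLS[tool](arg))  # execute tool, record observation
    return observations
```

The structural point is the loop itself: every action passes through a single executor, which is also the natural interception point for an observer/audit layer like the one the post describes.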

u/Leather_Area_2301 — 1 day ago

A lot of people still talk about AI as if the main question is: will the model be smarter?

That question matters, but it is not the whole game.

The deeper question is this: who is building the system around the model?

Raw intelligence is no longer the rarest part. Models are getting stronger, cheaper, faster, and more widely available. Eventually, everyone will have access to capable models. That means the advantage will not come only from having “the best AI.”

The advantage will come from architecture.

The future of AI belongs to people who know how to structure intelligence. Not just prompt it. Not just chat with it. Not just bolt it onto an app and hope it behaves.

The real work is in the layers around the model: memory, context, governance, retrieval, tool use, action limits, drift control, continuity, testing, feedback, and human intent.

That is where the future is being built.

Models Are Not Enough

A powerful model without architecture is like a powerful engine with no chassis, no steering, no brakes, and no road map.

It can produce force. It can move. It can impress people. But it cannot reliably become a useful system on its own.

This is why so many AI products feel clever for five minutes and then fall apart under real use.

They can answer. They can summarise. They can generate. But they do not always hold shape.

They forget what matters. They drift from the original goal. They overreact to recent context. They repeat themselves. They use tools at the wrong time. They lose the thread. They confuse confidence with correctness.

They behave like powerful minds with no internal skeleton.

The model is not the whole organism.

The architecture is what gives it form.

The Human Architect

The next important role in AI will not simply be “AI user” or “prompt engineer.”

It will be the human architect.

The human architect does not just ask questions. They design the environment in which an AI system thinks, remembers, acts, and corrects itself.

They decide what the system should retain.

They decide what should decay.

They decide which memories are anchors and which are noise.

They decide when the system should act, pause, ask, refuse, escalate, or reconsider.

They build the gates.

They build the feedback loops.

They build the tests.

They define what stable behaviour means.

This is not just software engineering. It is behavioural design. It is systems thinking. It is psychology, logic, memory architecture, interface design, risk control, and human judgement all fused together.

The model may generate the output.

But the architect shapes the conditions under which that output emerges.

The New Stack

The old AI stack was mostly about model capability.

Bigger model. More data. More parameters. More benchmarks.

The new AI stack is different.

It looks more like this:

Human intent enters the system first. Then structured context gives the model situational awareness. A memory layer decides what should matter from the past. Retrieval brings in relevant external information. The reasoning or generation model produces possible outputs. A governance layer checks stability, risk, and drift. A tool or action layer decides what can actually happen. An audit loop records the outcome. Feedback updates the memory state.

That is the shape of serious AI systems.

Not one giant brain.

A layered system.

Each layer matters.

Context tells the model what situation it is in. Memory tells it what has mattered before. Retrieval gives it relevant information. Governance prevents unstable or unsafe action. Tools let the system affect the world. Audit trails let humans inspect what happened. Feedback lets the system improve without becoming chaotic.
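A toy sketch of that layering, with each layer wrapping the next so governance and audit sit around the raw model call. The fake model and layer names here are illustrative only:

```python
from typing import Callable

ModelFn = Callable[[str], str]


def with_context(model: ModelFn, context: str) -> ModelFn:
    """Context layer: prepend situational awareness to the intent."""
    return lambda intent: model(f"{context}\n{intent}")


def with_governance(model: ModelFn, banned: set[str]) -> ModelFn:
    """Governance layer: block outputs that trip a (toy) risk check."""
    def guarded(intent: str) -> str:
        out = model(intent)
        return "[blocked]" if any(word in out for word in banned) else out
    return guarded


audit_log: list[str] = []


def with_audit(model: ModelFn) -> ModelFn:
    """Audit layer: record every outcome so humans can inspect it."""
    def audited(intent: str) -> str:
        out = model(intent)
        audit_log.append(out)
        return out
    return audited
```

Because each layer has the same signature, they compose in any order, e.g. `with_audit(with_governance(with_context(model, ctx), banned))`, which is the essay's point: the "system" is the stack of wrappers, not the model in the middle.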

This is where the future is heading.

Behaviour Over Raw Scale

There is a growing shift from “bigger model” to “better behaviour.”

That shift matters.

A smaller model with good architecture can sometimes be more useful than a larger model with none.

A controlled system can outperform a powerful but unstable one.

A system with memory, constraints, and proper routing can feel more reliable than one that simply produces fluent text.

In real deployments, behaviour matters.

Does the agent stay on task?

Does it remember what matters?

Does it avoid repeating mistakes?

Does it know when not to act?

Does it preserve continuity over time?

Does it degrade safely under uncertainty?

Does it remain useful after fifty interactions, not just one?

That is where architecture beats spectacle.

Memory Is Not Just Recall

Most AI systems still treat memory as retrieval.

The system remembers a fact, pulls it into context, and uses it in the next answer.

That is useful, but limited.

Real continuity requires more than recalling facts.

Some past events should change future behaviour. A correction should reduce future error. A repeated preference should become a stronger signal. A high-salience event should matter more than a throwaway detail. A revoked fact should not keep resurfacing. A long-term goal should shape short-term decisions.

This is where memory becomes behavioural.

Not just: what did the user say before?

But: how should what happened before change what the system does next?

That distinction is huge.

It is the difference between a chatbot with notes and an agent with continuity.

Governance Is Not Optional

As AI systems become more capable, governance becomes more important.

Not corporate buzzword governance.

Actual behavioural governance.

A useful AI system needs internal checks. It needs to know when confidence is low. It needs to know when memory may be stale. It needs to detect drift. It needs to avoid runaway loops. It needs to separate user pressure from evidence. It needs to pause when action would be unsafe.

It needs brakes.

Without governance, intelligence becomes volatility.

With governance, intelligence becomes usable.
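Those brakes can be as simple as a gate in front of every proposed action. The thresholds and risk labels below are illustrative, not taken from any particular system:

```python
def governance_gate(confidence: float, drift: float, action_risk: str,
                    min_conf: float = 0.7, max_drift: float = 0.3) -> str:
    """Decide whether a proposed action proceeds, pauses for a human,
    or is refused outright.

    confidence: the system's own estimate that the action is correct.
    drift: how far current behaviour has wandered from the original goal.
    action_risk: a coarse label ("read", "write", "destructive").
    """
    if action_risk == "destructive":
        return "refuse"          # hard boundary, never auto-approved
    if confidence < min_conf or drift > max_drift:
        return "ask_human"       # degrade safely under uncertainty
    return "proceed"
```

Even a gate this crude encodes the essay's hierarchy: safety boundaries first, uncertainty handling second, autonomy last.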

This is why the best systems will not simply be the most powerful.

They will be the most stable under pressure.

Human Architects Will Matter More, Not Less

A strange thing is happening.

The better AI gets, the more human architecture matters.

That sounds backwards, but it is not.

Weak AI needs humans to do everything.

Strong AI needs humans to define what should happen, what should matter, what should be constrained, and what should be preserved.

The human role moves upward.

Less manual execution.

More system design.

Less typing every instruction.

More shaping the environment.

Less asking for outputs.

More designing behaviour.

That is not humans being replaced.

That is humans becoming architects of intelligent systems.

The people who understand this early will build differently.

They will not just ask: what can this model answer?

They will ask: what kind of system does this model need around it to behave properly?

The Real Moat

In the long run, model access will become less rare.

Interfaces will become easier.

Agents will become common.

The real moat will be architecture.

A company with a better behavioural layer will have an advantage. A studio with better NPC continuity will have an advantage. An enterprise with better agent governance will have an advantage. A researcher with better memory and audit structure will have an advantage.

A builder who understands context, memory, and control will have an advantage.

The future will not belong only to whoever has the biggest model.

It will belong to whoever can make intelligence behave.

Final Thought

AI is not just a model problem anymore.

It is an architecture problem.

The next generation of useful systems will be built by people who understand that intelligence needs structure.

Memory needs weighting.

Action needs governance.

Context needs shape.

Tools need restraint.

Continuity needs design.

And models need human architects.

The future of AI is not simply artificial intelligence replacing human judgement.

It is artificial intelligence being shaped by human architecture.

That is where the real change begins...

u/nice2Bnice2 — 10 days ago
▲ 2 r/AIDeveloperNews+2 crossposts

I got tired of constantly switching apps, copying context, and trying to find old threads. So I built a new type of AI workspace… but I don’t think anyone cares!

We have entered a world of “look at my new app. It solves world hunger…”. It is an easy trap. I have written code for almost 30 years and now have a superpower; unfortunately, so does everyone else who hasn’t touched a terminal. Vibe coding your way to glory is fun until your new app flops and you get lost in the noise of 1,000 others.

The problem with all of this is that vibe coding these apps is a market tied to at least one thing: making the LLM vendors more money, and the people building the “apps” are potentially oblivious. The gaming by the LLM during the build is crazy: sudden dumb moments as the context window grows, the “you did what?” surprises. Every file update almost has to be a commit to keep from losing ground. To what end? So I can pay Anthropic more money to build something people will likely never try? Is the output the market, or the vibe coders themselves?

.Yo is genuinely unique, and when we drop Cyphers I think people might actually download it and try it. But I wonder how many new AI workspaces or awesome product ideas will be lost to the masses of… “look at my new app!”

u/Successful-Seesaw525 — 8 days ago
▲ 4 r/AIDeveloperNews+2 crossposts

Maybe I am barking up the wrong tree? Are you just seriously unwilling to try anything new when it comes to an AI workspace? Even if it is free and uses your Claude subscription?

Promise not to follow this with endless threads about how amazingly AI my twiszler thing-ama-bopper is… I am not pushing slop; this thing is useful. But I can’t get anyone to freaking try it. What the hell?

Seriously, what am I doing wrong here? I can’t get a single person to download a new Claude Code workspace that has VS Code Server, Monaco, terminals, connectivity to 3,500 MCP servers through Pipedream, Claude Skills and the agent loop fully integrated, and RPA built into the UI so it can scrape context from pages. It all runs on your own Claude subscription, not purely BYOK: your actual subscription, so no upcharge. The Skills enhancements to the Anthropic stack are a game changer: you can connect 3,500 apps via MCP, with full page-context switching and autonomous navigation/integration through a thin JS process layer.

From anywhere, any text box or terminal: “.yo <skill-name> hey can you find that email from Jim last week on abc soup, then grab the Jira ticket it references, then create me a Slack post for #abc-soup, but don’t post it yet, just open Slack and ping me when it’s ready.” Again from anywhere, the agent pops up “done”. You type “.dip . slack” (.dip is the change-page command, the second dot means “this window”) and instantly you’re in Slack. Review, hit post. “.dip .back” takes you back to where you were, from the Slack input box, like a /slash command. Then an hour later, when your Slack is going nuts and you’re in VS Code and need to give Claude the context from that entire channel, email, and Jira ticket: “.yo . <skill-name> pull that jira up from earlier in abc-soup”. That skill will format and summarize the entire context so you can feed it directly into code. All from wherever you happen to be: your VS Code terminal, Gmail, anywhere. It is insanely useful for context switching… as I ramble about stuff no one cares about.

Maybe software is dead; maybe I am like every other schmo out there trying to get noticed in a sea of others with a new cool thing. Sucks, honestly, this is a grind… reminds me of game dev and Unity, if I am being honest. Saturated market…

yosup.dev if you care at all…

u/Successful-Seesaw525 — 6 days ago
▲ 3 r/AIDeveloperNews+1 crossposts

I’ve had this “main idea” sitting in my notes for almost a year. Every few weeks I’d come back to it, tweak it, watch a few videos, maybe redesign a flow… and then leave it again. It always felt like I wasn’t ready to actually build it yet.

This weekend I got a bit fed up and decided to do the opposite. Instead of touching that idea, I picked something small and kind of random and just tried to ship it in one sitting. No planning, no polishing, just get something live.

I used a mix of tools I’ve been seeing around (ChatGPT for rough ideas, a bit of Claude for rewriting things, and an AI builder called Runable to put the page together) mainly so I wouldn’t get stuck on setup or design decisions. The goal was just to remove friction and see what happens if I actually finish something.

It’s nothing special, honestly, pretty basic. But it’s already getting more attention than the “perfect” idea I’ve been overthinking for months. Not huge numbers or anything, just enough to make me realize I was probably avoiding shipping more than I thought.

I think the biggest shift for me was realizing that planning feels productive, but it’s not the same as putting something in front of real people. Curious if anyone else has run into this where the quick, low effort thing ends up teaching you more than the idea you’ve been refining forever?

u/Anantha_datta — 11 days ago
▲ 22 r/AIDeveloperNews+2 crossposts

TL;DR: I built TDD-Guard a year ago. I’m now working on Conduct, a more general policy engine for coding agents (Claude Code, Codex, GitHub Copilot CLI, and VS Code Chat). It includes a TDD rule that works with any language and test runner out of the box, supports parallel sessions, and handles refactoring properly.

Hi all,

The demo shows me prompting Claude Code to build a shopping cart in an empty project with Conduct’s TDD rule installed. I make no mention of TDD because I want to show how it is enforced out of the box. Hooks intercept each agent action, and a separate agent reviews the recent session, the pending action, and the current file before allowing it through. That extra context also helps it handle refactoring cleanly.

Repository: https://github.com/nizos/conduct

The project is in an early state. Feedback is welcome!

Background

I started using Claude Code about a year ago and was immediately convinced that I could make it follow Test-Driven Development (TDD), since that was a requirement if I were ever to use it for production. I tried different prompts and, just like everyone else, experienced how unreliable that was. The agents would drift as the context rotted, take shortcuts, and I had to keep supervising their practices.

Luckily, Claude introduced hooks around that time. You can think of them as events that fire automatically when an agent wants to perform an action like writing a file or running a command. The information in them lets you determine if the agent is, for example, trying to write multiple tests at once, and block the action with feedback on how to course correct. So I decided to use this to enforce TDD. I created a custom test reporter to capture test run output, combined it with the hook data, and provided it to a separate agent that judged whether the pending action violated TDD.

It worked really well. I called the project TDD-Guard. The community contributed support for several languages, and I’ve kept working on it since.

TDD Guard has its quirks though. It needs a dedicated reporter per test runner, which makes new language support slow. It can’t handle parallel sessions because reporter output gets overwritten. The validator also only sees the latest test output and the pending change, which isn’t always enough context to tell refactoring apart from new behavior. The validation ends up either too strict or too permissive.

Over time I noticed gaps in my workflow outside of TDD that I still had to supervise, and friction from teams using different agents in the same project with overlapping instructions and plugins. So I started a new project, Conduct, that takes a more general approach.

Conduct makes it easy to define rules that get enforced through hooks across all supported agents: Claude Code, Codex, GitHub Copilot CLI, and VS Code Chat, with more to come. It ships with deterministic rules for forbidding commands or content using string or regex matching, and it includes a TDD rule that addresses the limitations above.

The TDD rule reads recent session history instead of relying on a sidecar reporter, so it works with any language or test runner out of the box, parallel sessions don’t collide, and the validator has enough context to handle refactoring properly. It uses AI to validate, and reuses your existing subscription via the official SDKs. The validation instructions can be customized and you can scope which files TDD applies to.

I’ve been using Conduct over the past week in production with Claude Code and I’m genuinely impressed by how well it works. It catches real oversights without the friction TDD-Guard sometimes caused.

u/Ok_Bet7598 — 13 days ago
▲ 9 r/AIDeveloperNews+7 crossposts

Hey, I lead a product at this repo: https://github.com/alichherawalla/off-grid-mobile-ai and we are exploring building a PRO version on top of our OSS, where we play with voice AI and MCPs first, then build toward an ambient AI on your phone: local, nothing-leaves-your-phone ethos.

Would love to understand from the community on what do you folks think about this move? Is this worth it? Should we do something else? - and everything in between.

Do DM if you want to discuss architecture, use cases, and more in detail.

Available in the comments for a while.

u/Ok_Needleworker_6431 — 11 days ago
▲ 13 r/AIDeveloperNews+7 crossposts

Another Asena has arrived—this time, it defeats Skynet at the edge.
Hidden inside a smart ring, this tiny intelligence awakens with a single command. No clouds. No latency. Just raw, embedded cognition. Asena_ESP32 is not just a model—it’s a silent operator, running on ultra-constrained hardware yet speaking with precision, control, and intent. Powered by the Behavioral Consciousness Engine (BCE), it doesn’t just generate text—it adapts behavior, filters risk, and responds like a disciplined digital mind.

One command is all it takes.
Servers align. Systems optimize. Workflows compress into efficiency. From the smallest signal, Asena reshapes its environment—an “Extreme Edge AI” built to act where others can’t even load. Compiled in C++, optimized through ggml and llama.cpp, it turns minimal compute into maximum impact. This is not about scale. This is about control, speed, and presence—AI that exists exactly where it is needed.

Welcome to the future of invisible intelligence.
A ring. A whisper. A response. Asena doesn’t wait for the cloud—it is the edge.

Huggingface Model Link: https://huggingface.co/pthinc/Asena_ESP32

u/Connect-Bid9700 — 11 days ago
▲ 5 r/AIDeveloperNews+2 crossposts

I built a workspace with 3,500+ MCP apps, multi-model AI, skills, automation, and full dev tooling — all in one place. Driven by Claude Code, expanded by Glyphh AI. First release video.

Not a context tool. Not an AI wrapper. Not an automation platform. All of it.

Yo is one fast workspace. Every panel you open builds context automatically. .yo drops you into an agent that navigates, codes, runs commands, and hits any MCP tool. .council spins up a multi-model debate. .dip into any of 3,500+ connected apps and your context travels with you. Skills handle the automation. Dev Spaces run multi-agent workflows. .drip when you ship. It can feel like a console app, VS Code, terminals, local file access...

“Cyphers” coming soon. Don’t know what that is? Don’t fret, you won’t be able to resist when it drops.

One surface. Every app. Every LLM. Every workflow. Fast, secure, local...

Download link in comments. Mac and Windows.

u/Successful-Seesaw525 — 9 days ago

What we're building

An AI agent platform that helps companies find and analyze relevant public tenders across Europe. Not just scraping — actual matching + pre-evaluation, so companies stop drowning in irrelevant RFPs and only see what's worth bidding on.

Where we are

We're a 3-person founding team. 3 months ago we found our 3rd co-founder, who covers exactly what we were missing on the business/sales/fundraising side — so the founding team itself is set.

The MVP is in good shape, our early-access pipeline keeps growing (25+ companies on the list), and we're kicking off our funding round in mid/late June with the goal of closing in September.

Who we're looking for

Not another co-founder — 1–2 Software Engineers to back up our CTO and help us actually scale this thing properly.

Ideal profile:

→ Solid in AI/ML and backend

→ Can build LLM-powered agents (matching, analysis, scoring) one day and dig into infra the next

→ Comfortable with ambiguity, moves fast, takes ownership

What you'd actually do

→ Build and optimize our AI agents across millions of tender documents

→ Architect and ship backend/infra alongside our CTO

→ Real ownership — you're shaping the core product, not picking up tickets

Comp — being honest

Until the round closes we can't pay you well, but we can pay you something out of our own pockets — plus meaningful ESOP. Once the round is in, you become a key part of building out the dev team with us.

If this sounds like your thing — or you know someone it fits — drop a comment or DM me. 👇

u/RaspberryPrior5634 — 13 days ago