r/bun

The JS Event Loop isn't just a queue; here's the mental model most tutorials get wrong
▲ 12 r/bun+1 crossposts


Most explanations of the event loop teach you the mechanics but leave you with the wrong intuition. They show you a call stack and a task queue and say "JavaScript runs one thing at a time", which is true but incomplete.

What they miss:

The microtask queue is not part of the event loop cycle in the way the task queue is. It drains completely after every task, including microtasks queued by other microtasks, before the loop moves on. This is why Promise chains never interleave with setTimeout callbacks.

The render step sits between tasks, not between microtasks. Queue enough microtasks and you'll block painting without blocking the call stack in any obvious way.

setTimeout(fn, 0) is a task queue entry. Promise.resolve().then(fn) is a microtask. These are fundamentally different lanes, not just different timings.
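The two lanes are easy to see in code. A minimal sketch, runnable in any modern JS runtime (the labels are mine):

```javascript
// Sketch: tasks (macrotasks) vs microtasks are separate lanes.
// The microtask queue drains completely -- including microtasks
// queued by other microtasks -- before the next task runs.
function demo() {
  const order = [];
  return new Promise((resolve) => {
    setTimeout(() => {
      order.push("task: setTimeout(fn, 0)");
      resolve(order);
    }, 0);

    Promise.resolve()
      .then(() => order.push("microtask: then #1"))
      // queued *by* a microtask, yet it still runs before the task above
      .then(() => order.push("microtask: then #2"));

    order.push("sync: end of script");
  });
}

demo().then((order) => console.log(order.join("\n")));
// Order: sync first, then BOTH microtasks, then the setTimeout task.
```

Swap the `.then` chain for a third `setTimeout` and the interleaving changes completely, which is the whole point: the lanes, not the delays, decide the order.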

I wrote a deep dive on this with an interactive visualizer that animates every queue in real time as you run snippets. The framing is unconventional (I mapped it to Vedic karma and dharma as a mental-model layer), but the mechanics are spec-accurate.

If you've ever been surprised by async execution order, this should close the gap permanently.

Interactive version: https://beyondcodekarma.in/javascript/js-event-loop/

u/svssdeva — 3 days ago
🔥 Hot ▲ 95 r/bun

Bun is neither stable enough for production nor faster than Node in production: a crude investigation into memory leaks

I'd like to start by saying that I'm still pretty new to the JavaScript world and sometimes don't know what I'm talking about, so despite my best efforts please excuse any mistakes in my research. That said, I've now read enough, watched enough YouTube videos, and seen enough complaints pointing to the same story.

I'd also like to say that Bun is one of the greatest things to happen in the JS world in years. I don't want to move away from it back to Node.js; I'd like to keep using it despite its flaws and perhaps help make it better. It's the only reason I haven't jumped to Go and left backend JS. As a package manager and as a runtime I deeply enjoy Bun, and leaving it would mean leaving JS for me.

Still, I think Bun is currently deeply flawed and unstable for some long-running production workloads; this may be most apparent to people with long-running Next.js apps. I could have been alone in this, but once you start seeing the same class of problems come up across official Bun docs, Bun release notes, GitHub issues, SSR repros, DB-related workloads, child-process workloads, and even production posts from people who actually like Bun, it becomes deeply concerning that the issues are not brought into the spotlight and that the community and the developers have not put the pieces together.

Any new runtime will have real maturity problems that get ironed out with time, but I am concerned that Bun's development roadmap looks more like adding features on top of features while ignoring stability issues and bug fixes. Bun has grown very complex, and without these fixes I doubt it will ever reach the production-grade maturity of Node.

The first thing that pushed me in this direction was Bun's own documentation, followed by a YouTube video where the problem turned out to be not a JS flaw but a Bun flaw, and the content creator didn't realise it:

https://youtu.be/gNDBwxeBrF4?si=4t8r8FtPo06GcGim

In Bun’s official docs, they explicitly separate:

  • JavaScript heap
  • non-JavaScript (native) memory
  • RSS
  • native heap stats
  • mimalloc stats

That alone tells you something important:

With Bun, "my JS heap looks okay" does not automatically mean "my process memory is healthy." Source: official Bun docs, especially the "Benchmarking" page and Bun's memory debugging material.
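A minimal sketch of what that distinction looks like in practice, using `process.memoryUsage()` (available in both Node and Bun); the helper name is mine:

```javascript
// Sketch: the JS heap and the process RSS are different numbers.
// A flat heapUsed says nothing about native allocations, which are
// counted in rss -- the number the container and OOM killer see.
function memorySnapshot() {
  const { rss, heapUsed } = process.memoryUsage();
  const toMB = (n) => Math.round((n / 1024 / 1024) * 10) / 10;
  return {
    heapUsedMB: toMB(heapUsed), // what "my JS heap looks okay" measures
    rssMB: toMB(rss),           // what actually gets the process killed
  };
}

console.log(memorySnapshot());
// rss is always >= heapUsed: native memory, code pages, stacks and
// allocator overhead all live outside the JS object graph.
```

Logging both numbers side by side over time is the cheapest way to tell a JS-level leak (heapUsed climbs) from the native-retention pattern described below (heapUsed flat, rss climbs).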

And that matters because a lot of the reports follow the exact same pattern:

The heap is not exploding that badly; GC runs, but RSS keeps climbing anyway. Then the container gets pressured.

Then performance degrades to the point where Node.js would actually be much faster. Then the process restarts, crashes, or gets OOM-killed.

That is a very different kind of story from a simple beginner mistake where someone forgot to clear an array.

The repeated smell here is native retention, allocator behaviour, runtime internals, or cleanup bugs outside the normal JS object graph. That is an inference on my part, but it is an inference strongly supported by how Bun itself tells people to debug memory.

Then I started looking at Bun’s own release notes:

The official Bun v1.3.12 release notes explicitly say they fixed per-query memory leaks in the `bun:sql` MySQL adapter that caused RSS to grow without bound until OOM on Linux.

That is Bun itself admitting there were native leaks bad enough to push RSS until the process died. Source: official Bun blog, v1.3.12 release notes.

The same v1.3.12 notes also mention a memory leak in `Bun.serve()` when a `Promise<Response>` never settles after client disconnect.

Again, that is important because it shows the problem is not only “some random third party package did something stupid.”

There have been real leaks in Bun’s own serving/runtime paths. Source: official Bun blog, v1.3.12 release notes.
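The `Bun.serve()` case has a recognisable shape. Here is a hedged sketch of that leak class in plain JavaScript (this is not Bun's internals; the names and sizes are made up):

```javascript
// Sketch of the leak class described above: if the promise behind a
// response never settles, whatever it keeps reachable can never be
// garbage collected -- even after the client has disconnected.
const abandoned = []; // stands in for requests whose clients vanished

function neverSettlingHandler() {
  const perRequestBuffer = Buffer.alloc(10 * 1024 * 1024); // ~10 MB per request
  return new Promise((resolve) => {
    // The resolver closes over the buffer and is parked here forever,
    // so the buffer stays reachable and the promise never settles.
    abandoned.push(() => resolve(perRequestBuffer));
  });
}

const pending = [];
for (let i = 0; i < 3; i++) pending.push(neverSettlingHandler());
// Each abandoned request now pins ~10 MB; RSS grows with request count.
console.log(`pending handlers: ${pending.length}`);
```

The fix in a server is the same regardless of runtime: tie every pending response promise to a disconnect or timeout signal so it always settles.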

Then there is the most important named example I found: Trigger.dev.

Nick, the Founding Engineer at Trigger.dev, wrote a post in March 2026 called “Why we replaced Node.js with Bun for 5x throughput.” They said they also found a memory leak that only exists in Bun’s HTTP model. Even more interesting, the post was updated on March 30, 2026 saying Bun shipped a fix shortly after the article went live.

That tells me two things at once:

  1. Bun can be genuinely fast.

  2. Bun can also still have production-relevant memory bugs in core runtime behaviour. Source: Trigger.dev engineering post by Nick.

That Trigger.dev example is actually one of the strongest pieces of evidence because it is not written by someone uninitiated like me.

So when even a pro-Bun migration story still contains "we found a Bun-specific memory leak," that should make people slow down before declaring Bun ready to deploy, at least until they have tested for the same memory problems themselves.

Source again: Trigger.dev’s Firestarter writeup.

Then you get into the issue reports...

Not all of these are named companies, so I am not going to overstate them. Most are GitHub issue reporters, not polished case studies. But there are enough of them, across enough different workloads, that they are worth taking seriously as a pattern.

Example one:

Issue #17723 on Bun’s GitHub, opened by `@rbilgil` in February 2025.

The report says moving from Node to Bun caused a service on GKE to spike from roughly 500 MB on Node to roughly 1.2 GB on Bun until restart, with high CPU and memory usage and no application errors.

Example two:

Issue #14664, opened by `@boomNDS` in October 2024.

This one reports memory-leak behaviour when using Prisma with Bun on an API server handling around 30 requests per second. The reporter says CPU usage rises over time, server performance degrades, and a restart temporarily fixes it. (Typical Bun behaviour.)

Example three:

Issue #15518, opened by `@ricardojmendez` in late 2024.

This one describes an Elysia + Prisma setup processing hundreds or thousands of requests per second, where terminal memory use continually increases over a couple of hours.

(This issue is slightly older, but Bun exhibits the same behaviour today.)

Example four:

Issue #21560, opened by `@Playys228` in August 2025.

This one is especially interesting because it is about spawned child processes. The reporter says RSS keeps creeping up over hours even when the JS heap is flat, and that it is not fixed by GC.

Once again the pattern is:

  • heap relatively flat
  • RSS rising
  • long-running unhealthy process
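That recurring pattern can at least be detected from inside the process. A hypothetical watchdog sketch (the function name and thresholds are made up, not from any of the issues):

```javascript
// Hypothetical sketch: sample RSS vs heap growth from a baseline and
// flag the "heap flat, RSS rising" pattern. Not a diagnosis tool --
// just the shape of the check; the thresholds are invented.
function makeRssWatchdog({ rssGrowthLimitMB = 200 } = {}) {
  const baseline = process.memoryUsage();
  return function check() {
    const now = process.memoryUsage();
    const rssGrowthMB = (now.rss - baseline.rss) / 1024 / 1024;
    const heapGrowthMB = (now.heapUsed - baseline.heapUsed) / 1024 / 1024;
    return {
      rssGrowthMB,
      heapGrowthMB,
      // RSS well past the limit while the heap stays roughly flat
      // points at native retention rather than a JS-level leak.
      suspectNativeLeak:
        rssGrowthMB > rssGrowthLimitMB && heapGrowthMB < rssGrowthMB / 4,
    };
  };
}

const check = makeRssWatchdog({ rssGrowthLimitMB: 200 });
// In a real service you would call check() on an interval and alert,
// dump diagnostics, or recycle the worker when it fires.
console.log(check());
```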

Example five:

Issue #24118, opened in October 2025.

This report isolates RSS growth with the MongoDB Node module under Bun. The issue text says heap inspection shows Bun is performing garbage collection, but RSS still rises by around 8 to 12 MB per hour per application with little more than an open Mongo connection. They even note that reconnecting does not reduce RSS and that the only reliable control is an application restart.

Example six:

Issue #25948, opened in January 2026.

This one reports Mongoose related memory growth in Docker with no hot reload, where memory rises even while the server is idle and not receiving requests.

Example seven:

Issue #29267, opened in April 2026:

“Memory leak in Next.js SSR under `bun --bun next start`”

The reporter says concurrent SSR requests cause the heap not to be reclaimed properly and memory keeps rising. There is also a duplicate issue and a linked Next.js side issue around the same repro.

---------

So what do I think is going on?

I do not think there is one magical single Bun bug causing all of this.

I think it is more likely a cluster of maturity problems that can show up differently depending on workload.

Possible buckets:

  1. Native memory retention

  2. Allocator or page release behaviour

  3. Bugs in Bun internal runtime paths

  4. Framework integration edge cases

  5. Certain I/O or DB patterns exposing cleanup issues

  6. Long running workloads amplifying problems that short benchmarks never reveal

That is my interpretation, not something I am claiming Bun itself has officially stated. But I think it is the fairest reading of the evidence, supported by Bun's memory-model docs, the official leak fixes, and the issue pattern above.

Here is the part people keep getting wrong in these debates:

A runtime can be genuinely faster than Node in short benchmarks and still be slower than Node for long-running services.

With Bun you can win the first 60 seconds and still lose the next 24 hours.
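A back-of-envelope sketch with made-up numbers shows how that can happen: a runtime that starts twice as fast but bleeds throughput every hour to memory pressure can still serve fewer requests over a day.

```javascript
// Made-up numbers, purely illustrative: total requests served over a
// soak when throughput decays hour over hour (GC pauses, allocator
// pressure, swap) versus a slower but flat baseline.
function totalRequests(initialRps, hourlyDecay, hours) {
  let total = 0;
  let rps = initialRps;
  for (let h = 0; h < hours; h++) {
    total += rps * 3600;    // requests served this hour
    rps *= 1 - hourlyDecay; // throughput lost to memory pressure
  }
  return Math.round(total);
}

const flatRuntime = totalRequests(10_000, 0.0, 24);      // slower but steady
const decayingRuntime = totalRequests(20_000, 0.08, 24); // "2x faster" at t=0

console.log({ flatRuntime, decayingRuntime });
// Over the first hour the decaying runtime is far ahead; over the full
// 24 hours the steady one has served more total requests.
```

This is exactly why a 60-second benchmark and a 24-hour soak test can crown different winners.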

I'd like the community and other Bun users to report similar issues so that someone far more knowledgeable about runtimes than me can look into this, correct me where I'm wrong, and bring it to the official devs. I don't think Bun will go anywhere near long-running production workloads if long-running memory bugs come with it. Every few months there are loads of new feature drops, but no one is talking about overall stability first. It is the main thing holding this runtime back.

Sources used:

[1]: https://bun.com/docs/project/benchmarking "Benchmarking"

[2]: https://bun.com/blog/bun-v1.3.12 "Bun v1.3.12"

[3]: https://trigger.dev/blog/firebun "Why we replaced Node.js with Bun for 5x throughput"

[4]: https://github.com/oven-sh/bun/issues/17723 "Moving from Node to Bun spikes container CPU and ..."

[5]: https://github.com/oven-sh/bun/issues/14664 "Memory leak when using Prisma · Issue #14664"

[6]: https://github.com/oven-sh/bun/issues/15518 "Memory leak with Elysia + Prisma project · Issue #15518"

[7]: https://github.com/oven-sh/bun/issues/21560 "Memory (RSS) in Bun Spawned Child Process Grows ..."

[8]: https://github.com/oven-sh/bun/issues/24118 "isolated memory leak with mongodb nodejs module #24118"

[9]: https://github.com/oven-sh/bun/issues/25948 "Memory leak with Mongoose and Bun (Production build / ..."

[10]: https://github.com/oven-sh/bun/issues/29267 "Memory leak in Next.js SSR under `bun ..."

u/Xtergo — 5 days ago
▲ 15 r/bun

Memory leak in Bun versions 1.3.9 to 1.3.12 in some virtual environments

I've been trying to run a project of mine on my server for quite a while, but Bun failed to start every time, and when I looked on Google for answers, I found GitHub issues with no human answers (all the testing had been done by AI).

It turns out that when going back to older versions:

On version 1.3.8, running `bun -e "console.log('hello')"` prints hello after 0.032 s.

On versions 1.3.9 to 1.3.12, the same command hangs. Checking htop shows Bun filling up memory until it runs out, at which point the kernel kills it and it prints "Killed".

On versions 1.3.9 to 1.3.12, though, `bun install` and `bun repl` work with no issues.

Also, a note about AI in this case:
When asking different AIs about this (Gemma 4 31B, Nemotron 3 Super and GLM 5.1), they suggest increasing swap and RAM, increasing the kernel's swappiness, and removing various kernel memory guardrails to stop the OOM kill from happening, while the problem is clearly a memory leak in the code that can't be fixed even by disabling the OOM killer entirely.
This has also been the case with "robobun", the automated issue-checking bot that tries to reproduce the issue and respond to the user with a solution before the team does. The bot can't seem to reproduce this issue on its end, so it blames the user's Linux configuration. (This bot apparently runs on Claude Code.)

If you're hitting this problem and don't know what to do, try version 1.3.8 until the issue is resolved.

u/HoseanRC — 3 days ago
▲ 9 r/bun

OneBun: NestJS-style application framework, Bun-native, with built-in observability

Hey r/bun. Author here. I've been building a full application framework on Bun and wanted to share it with the people who'll actually know what I'm talking about.

OneBun is what I wished existed when I moved from NestJS/Node to Bun: DI container, module system, decorators — the architecture patterns that make large codebases manageable — but native on Bun, not ported from Node.

Highlights:

  • Full DI with constructor injection, module system, guards, exception filters
  • ArkType validation → runtime checks + auto-generated OpenAPI 3.1 (no DTO classes needed)
  • Prometheus metrics (@Timed, @Counted) + OpenTelemetry tracing (@Span) built in
  • Drizzle ORM, Redis cache, NATS queues — first-party packages
  • Zero build step, runs TS directly
  • Uses native Bun APIs: WebSocket, SQLite, Redis, router, file I/O — no Node.js compatibility shims
  • ~2x faster than NestJS+Fastify on Node in CI benchmarks
  • 2500+ tests, ~90% coverage, full suite in ~14s

It's opinionated by design — one ORM, one queue, one validation library. Less choice, more integration.

v0.3.x, pre-1.0, just me building it. Looking for early adopters.

Specifically curious what r/bun thinks about:

  • Which native Bun APIs would you want deeper integration with? (I already use Bun.serve, WebSocket, SQLite, Redis, file I/O — what's missing?)
  • Thoughts on the Effect.ts trade-off — I use it internally for DI/resource management but keep it out of user-facing API. Good call or should it be exposed?

https://github.com/RemRyahirev/onebun | https://onebun.dev

u/Top-Kitchen-6635 — 5 days ago
▲ 2 r/bun

my first ever saas with bun

I'd just like to share my first SaaS ever, built with Hono and Bun! Hono is the only dependency; everything else this tool uses comes from Bun: https://découvrez.me/

u/Ok-Delivery307 — 4 days ago
▲ 9 r/bun+1 crossposts

Memory Leak with bun and mongodb

I am using the Bun React template (`bun init --react`), Hono, and a MongoDB Atlas database.

I see the RSS memory usage keeps on increasing.

If I do not connect to MongoDB, it is stable.

So the issue seems to be with Mongoose and Bun.

Is there any solution? I am using it in production and it crashes my server every few weeks due to high memory usage.

Thank you for your time.

EDIT:

Versions
Bun : 1.3.7
Mongoose : 9.3.3
Hono : 4.12.9

u/BhavyajainTheBest — 6 days ago
▲ 0 r/bun+1 crossposts

Release v1.6.0 — Bun Runtime Support · kasimlyee/dotenv-gad

dotenv-gad can now be used on the Bun runtime, so Bun users get the same advantages of dotenv-gad.

u/Individual-Wave7980 — 4 days ago
▲ 0 r/bun

Is vibe coding really the future?

I was working on a Bun project and needed a module, so I searched GitHub and Google for something ready to use. In the end, I asked Claude AI to write it from scratch, and honestly, it was a perfect fit, fast, and exactly what I needed.

Later, I started using Claude AI for almost everything, and I even paid for the Pro tier.

Now I’ve hit a weird problem: the code works perfectly, but I do not fully understand how it works, so modifying it manually is hard.

I’m honestly confused. Is vibe coding really the future?

u/Connect-Fall6921 — 5 days ago
▲ 5 r/bun

Kesha Voice Kit — fully local STT + TTS for agent stacks

Been annoyed for a while with the friction of plugging voice into agent workflows without round-tripping to the cloud. So I built kesha-voice-kit — a local voice toolkit built for Bun and optimized for Apple Silicon.

This CLI gets invoked by LLM agents (OpenClaw routes voice messages through it) and from shell scripts. Every `kesha audio.ogg` invocation pays the cold-start tax. Bun's JS startup is noticeably faster than Node's, and when an agent fires off 5 tool calls in parallel, those milliseconds compound. Not scientific numbers here, but Bun felt instant from day one; Node felt sluggish.

The whole app is a subprocess wrapper around kesha-engine (Rust binary). Twelve Bun.* calls across six files — Bun.spawn, Bun.file, Bun.write, Bun.which. No async/sync ceremony, no pipe-handling weirdness, pipe-friendly by default. Writing Bun.file(path).json() feels like it should’ve always been this way.

Voice in: NVIDIA Parakeet TDT 0.6B for speech-to-text (25 languages, not Whisper).
Voice out: Kokoro-82M for English, Piper for Russian. Auto-routed by detected text language — just kesha say "Привет" and it picks Piper automatically.

Fully on-device — no cloud, no API keys, no telemetry. Ships as an npm package + a ~20 MB Rust engine binary; first-class on macOS arm64 (CoreML via FluidAudio), also runs on Linux and Windows x64 (ONNX).

Numbers (M3 Pro)

Compared against whisper large-v3-turbo:

  • ~15× faster on M3 Pro (CoreML / Apple Neural Engine)
  • ~2.5× faster on CPU
  • Real-time factor small enough for live dictation and responsive voice UX

Full methodology, fixtures, and exact commands in BENCHMARK.md.

OpenClaw agents receive voice on Telegram/WhatsApp/Slack today but can only reply in text. Kesha closes that loop:

bun install -g @drakulavich/kesha-voice-kit
brew install espeak-ng
kesha install --tts               # one-time, opt-in (~390 MB)
kesha voice.ogg                    # transcribe Russian voice message
kesha say "Hello World" > reply.wav   # and talk back

The existing OpenClaw plugin path already hooks into tools.media.audio.models for input; the output side is a matter of a few lines of TS.

Happy to share more detailed numbers, tweak the API for real use cases, or walk through how the bidirectional voice pipeline is wired up.

u/drakulavich — 3 days ago