
Opened Dart SDK discussion on server runtime hot-path overhead (dart-zig PoC + benchmarks)
I opened a Dart SDK issue here: https://github.com/dart-lang/sdk/issues/63352
This is a discussion about backend runtime architecture for high-concurrency HTTP workloads.
I built an experimental PoC (dart-zig) in which Dart code stays at the handler/business-logic level, while some HTTP hot-path runtime work moves to the native side: the event loop, request framing/parsing, batched completions, and process-per-worker with SO_REUSEPORT.
Initial HttpArena snapshot (AOT, throughput-focused):
| Test | Connections | dart:io RPS | dart-zig RPS | Speedup |
|---|---|---|---|---|
| baseline | 512 | 601,780 | 1,353,265 | ~2.25x |
| baseline | 4096 | 583,020 | 1,665,927 | ~2.86x |
| pipelined | 512 | 998,153 | 1,364,400 | ~1.37x |
| pipelined | 4096 | 997,674 | 1,477,162 | ~1.48x |
Notes:
- This is an initial PoC snapshot, intended as a directional signal rather than a final result.
- Memory footprint is not optimized yet.
- Claims are limited to HTTP hot-path behavior.
Also important: this is not a “replace dart:io” claim. I want to discuss whether an official or experimental server-optimized runtime profile could make sense for Dart backend workloads, and what upstream criteria and hook points would be appropriate.
Would love feedback from people doing high-load Dart backend work.