r/erlang

▲ 12 r/erlang+1 crossposts

ex_data_sketch v0.8.0 — Deterministic Foundations

ex_data_sketch v0.8.0 is out. This release invests entirely in the substrate that all 15 existing sketches share, laying the groundwork for v0.9.0, where we add streaming integrations (Broadway / GenStage) and ETS / DETS / Zarr support.

What's new:

  • Deterministic hashing. Every sketch now goes through a validated, byte-stable hash layer. HLL, ULL, Theta, and CMS accept hash_strategy: :murmur3 for Apache DataSketches interop — this was silently ignored in v0.7.x. XXHash3 remains the default and fastest path (~30 M items/sec at p=14 on the Rust NIF).

  • Binary stability & corruption detection. Serialized sketches now carry a CRC32C trailer and an embedded hash metadata block (EXSK v2). Bit-flip corruption that previously would silently produce wrong estimates is now caught and returns a structured DeserializationError. v0.8.0 reads v1 frames; v0.7.x cannot read v2 — stage your rollout accordingly.

  • Murmur3 hot path. 8 new Rust NIFs extend in-Rust hashing to Murmur3. The Murmur3 path is within 8% of XXH3 throughput. No more falling off the fast path when you select :murmur3.

  • Precompiled NIFs for Windows. x86_64 and ARM64 MSVC targets join the matrix. 16 artifacts total (8 targets x 2 NIF versions). No Rust toolchain needed on any supported platform.

  • Property-locked guarantees. 14 StreamData properties lock HLL/ULL monotonicity and error bounds, KLL/REQ rank consistency, CMS overestimation-only, and Bloom/XorFilter/Cuckoo no-false-negative. A 200-mutation fuzz suite verifies that binary v2 corruption never silently propagates.
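
For a feel of why the CMS overestimation-only property holds unconditionally, here's a toy count-min sketch in Python — a generic illustration of the data structure, not ex_data_sketch's implementation (the depth/width and SHA-256 bucketing are arbitrary choices):

```python
import hashlib
import random

# Toy count-min sketch: D rows of W counters; each item increments one
# counter per row; the estimate is the minimum over its D counters.
# Hash collisions only ever ADD to counters, so estimate >= true count.

D, W = 4, 64

def _bucket(item: str, row: int) -> int:
    h = hashlib.sha256(f"{row}:{item}".encode()).digest()
    return int.from_bytes(h[:8], "big") % W

class CountMin:
    def __init__(self):
        self.rows = [[0] * W for _ in range(D)]

    def add(self, item: str):
        for r in range(D):
            self.rows[r][_bucket(item, r)] += 1

    def estimate(self, item: str) -> int:
        return min(self.rows[r][_bucket(item, r)] for r in range(D))

# Property check in the spirit of a StreamData run: random workload,
# then assert the invariant for every key we actually inserted.
rng = random.Random(0)
cms, truth = CountMin(), {}
for _ in range(2000):
    item = f"key-{rng.randrange(100)}"
    cms.add(item)
    truth[item] = truth.get(item, 0) + 1
assert all(cms.estimate(k) >= n for k, n in truth.items())
```

Collisions can only inflate counters, never deflate them, which is exactly what makes the property lockable under any input distribution.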

Breaking changes (2):

  1. EXSK v2 is one-way. v0.7.x readers can't decode v2 frames. Deploy readers first, then producers.
  2. hash_strategy: :murmur3 is no longer silently overridden to :xxhash3. Sketches that specified Murmur3 will now actually use it — estimates are correct but differ from v0.7.x.
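
For anyone curious what the v2 trailer buys you, the verify-before-decode pattern can be sketched generically in Python. This is only the idea, not the EXSK v2 wire format, and it uses stdlib CRC32 since Python has no built-in CRC32C:

```python
import struct
import zlib

# Append a 4-byte CRC trailer on serialize; verify before trusting the
# payload on read. A bit flip then raises a structured error instead of
# silently feeding garbage registers into the estimator.

class DeserializationError(Exception):
    pass

def seal(payload: bytes) -> bytes:
    return payload + struct.pack(">I", zlib.crc32(payload))

def open_frame(frame: bytes) -> bytes:
    payload, trailer = frame[:-4], frame[-4:]
    if zlib.crc32(payload) != struct.unpack(">I", trailer)[0]:
        raise DeserializationError("CRC mismatch: frame is corrupt")
    return payload

frame = seal(b"sketch-registers...")
assert open_frame(frame) == b"sketch-registers..."

corrupt = bytes([frame[0] ^ 0x01]) + frame[1:]  # single bit flip
# open_frame(corrupt) now raises DeserializationError rather than
# silently producing wrong estimates.
```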

One-liner upgrade:

{:ex_data_sketch, "~> 0.8.0"}

Most users need no code changes. Full migration guide ships in HexDocs.

Stats: 1,317 tests, 171 properties, 92.7% coverage, 0 credo issues.

GitHub | Hex | Docs

reddit.com
u/Shoddy_One4465 — 1 day ago
▲ 38 r/erlang+1 crossposts

Edge Core: a self-hostable control plane for distributed Linux fleets, built in Elixir

Hey guys! We finally opened up the codebase for something we've been working on for over a year.

I joined a company that spent 3 years (and counting) trying to ship products on locked-down edge hardware. Every product kept hitting the same walls: deployments and monitoring were a black box, machines on the same LAN couldn't reliably find each other, and every new app had to reimplement the same WS/MQTT logic just to stay in touch with the cloud.

So we built Edge Core to solve these pain points. In V1 we used Headscale/Tailscale for the VPN. It mostly worked for what we wanted (remote execution, SSH, metrics aggregation, etc.), but it couldn't scale past ~100 nodes (full-mesh explosion, O(n²) tunnels) and gave us no isolation between projects (each project had to spin up its own core, though ACLs exist). In V2 (the current version) we moved to Netmaker for proper mesh/network segmentation, added a forward proxy plus dynamic proxy chaining for cloud-to-edge communication, and built the whole orchestration layer on top.
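
For anyone wondering where the ~100-node wall comes from: a full mesh keeps a tunnel per node pair, so state grows quadratically. A quick back-of-envelope in Python (the node counts are just illustrative):

```python
# Full mesh: every pair of n nodes keeps a tunnel, so n*(n-1)/2 links
# fleet-wide and n-1 peers of state on every single node.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, mesh_links(n))
# 100 nodes already means 4,950 tunnels; 1,000 nodes means 499,500 --
# plus key rotation and NAT keepalives for every one of them.
```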

OpenAPI/Swagger Docs

AsyncAPI docs

Some Elixir specific stuff that might interest you:
- Masterless clustering for the control plane: no (strong) leader election, no Raft consensus. Admins coordinate via `:syn` registry and Postgres. Each admin runs the same deterministic sharding algorithm and converges independently.
- Oban and Quantum for async background jobs
- API-first control plane with clustered HTTP/SOCKS5 proxy servers and first-class, Prometheus-compatible fleet metrics discovery and scraping
- MCP server that mirrors the full REST API: every API endpoint is also an MCP tool, so AI agents can drive the whole fleet
- Webhook system and event broker integration for async system events with 7 adapters (NATS, Kafka, AMQP 0.9.1/RabbitMQ, Redis, MQTT, AWS SNS, and GCP Pub/Sub).
- Agent and shared libs are Apache 2.0. Admin is ELv2.
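
The "same deterministic algorithm, independent convergence" idea can be illustrated with rendezvous (highest-random-weight) hashing. This is a generic Python sketch of the technique, not Edge Core's actual sharding code, and all the names are made up:

```python
import hashlib

# Every admin node, given the same membership view, computes the same
# owner for every shard -- no leader election, no consensus round.
def owner(shard: str, admins: list) -> str:
    def weight(admin: str) -> int:
        h = hashlib.sha256(f"{admin}:{shard}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return max(admins, key=weight)

admins = ["admin-a", "admin-b", "admin-c"]
# Each admin runs this locally and agrees without talking to the others:
assignments = {s: owner(s, admins) for s in ("fleet-1", "fleet-2", "fleet-3")}

# When a node leaves, only its shards move (a property plain mod-N lacks):
survivors = [a for a in admins if a != "admin-b"]
moved = {s for s, prev in assignments.items()
         if prev != "admin-b" and owner(s, survivors) != prev}
assert not moved  # shards owned by survivors stay put
```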

Links:
- Repo: https://github.com/wenet-ec/edge-core
- Docs: https://wenet-ec.github.io/edge-core/
- Edge Core concepts: https://wenet-ec.github.io/edge-core/guide/
- Architecture: https://wenet-ec.github.io/edge-core/architecture/

u/Best_Recover3367 — 3 days ago
▲ 55 r/erlang+1 crossposts

I was in the Golang subreddit speaking about my new business and they got mad because I told them that the BEAM is better than Golang in certain problem sets. I built a Golang SDK just for them so they could get the reliability the BEAM could offer in serverless, and they downvoted my post. So I was curious what you guys think. I'm open to all criticism and feedback, but I truly cannot imagine any design coming close. I may have learned the BEAM with AI after getting laid off, but I feel like my years of operations work with Java, Golang, and Python stacks in serverless make me more pissed off than you all. Lol JK, I just wanted to get you excited for the design. I promise I'm humble.

If you have no idea what the internet was like before TCP — to send a file over the internet you had to know C. It was a huge pain in the ass. If you missed a chunk of data you had to rewrite the program. Every developer had their own custom retry logic. Everyone just sent packets as fast as possible with no appreciation for pacing. Then a small team — Vint Cerf and Bob Kahn — wrote TCP and it became the most foundational algorithm of the entire internet. We built the modern web, APIs, and databases on top of that algorithm. Sending email became trivial.

One more story that is important before my design: before Ericsson, in order to make a call there was a human switch operator. If the switch was full, the caller would be rejected. It didn't matter if the caller had an emergency — the model was first come, first served. Ericsson built the runtime to deal with concurrent processes with isolation boundaries that would make sure telecom systems were resilient to crashes.

Now on to my design. Agent retry storms are coming for everyone's APIs. A human being might visit 10–30 websites and call APIs maybe 20 times. The Cloudflare CEO said it plainly at SXSW this year: "Your agent will often go to a thousand times the number of sites a human would — it might go to 5,000 sites. And that's real traffic, and that's real load." Those agents will call APIs as fast as possible. The APIs will throw 429s, which only prompts the clients to retry and send more requests. The API's servers will slow down, which prompts the clients to send more still, until something crashes. A fleet that was already over capacity at N machines becomes a ticking time bomb the moment N-1 machines have to handle the same load. The autoscalers will provision new machines, but they'll crash before they finish warming up, until the entire fleet is down.

Enter EZThrottle. The same way Ericsson absorbs bursts of call requests and routes them to the best switch, EZThrottle queues, paces, and reroutes API calls past partial outages — in both directions. It protects the APIs you call and the API you run. It solves the noisy neighbor problem by giving each user their own queue. When it receives a 500, it uses the Fly.io network to send directly to another region to see if it works over there. It's what Cloudflare is for inbound traffic, but for your outbound API calls. Stripe, Google, OpenAI, and your gateway server could all be having partial outages and EZThrottle will fight to get each call through. No cold starts. No performance choking on retry storms. No spiky traffic — just smooth, predictable requests sent at the pace the API can actually handle. The resilience of the BEAM in your non-BEAM services.
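
Strip away the pitch and the core mechanic is a token bucket in front of a per-user queue. Here's a minimal Python sketch of that idea — my reading of the design, not EZThrottle's code, with arbitrary rates:

```python
import collections
import time

# One token bucket per user: requests drain at the rate the upstream API
# can absorb, so a burst becomes a smooth, paced stream instead of a
# 429 storm, and one noisy user can't starve anyone else's queue.
class PacedQueue:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()
        self.pending = collections.deque()

    def submit(self, call):
        self.pending.append(call)  # enqueue, never reject

    def drain(self):
        """Release only as many calls as the bucket allows right now."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        released = []
        while self.pending and self.tokens >= 1.0:
            self.tokens -= 1.0
            released.append(self.pending.popleft())
        return released

queues = collections.defaultdict(lambda: PacedQueue(rate_per_sec=5, burst=2))
for i in range(10):
    queues["alice"].submit(f"req-{i}")  # noisy user
queues["bob"].submit("req-0")           # unaffected neighbor

assert len(queues["alice"].drain()) == 2  # burst only; the rest stays queued
assert len(queues["bob"].drain()) == 1    # bob's queue is independent
```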

I've linked the actual writeups below, but tell me — have you ever seen a more elegant architecture on the BEAM?

https://ezthrottle.network/blog/making-failure-boring-again
https://ezthrottle.network/blog/serverless-2-rip-operations
https://ezthrottle.network/blog/a-queue-per-user-at-scale

u/Noobcreate — 8 days ago
▲ 17 r/erlang

I wanted Ecto's ergonomics in Erlang without writing Elixir, so I wrote Kura. It sits on pgo. You define a schema, build queries, get changesets, run migrations.

-module(user).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0]).

table() -> ~"users".

fields() ->
    [#kura_field{name = id, type = id, primary_key = true},
     #kura_field{name = email, type = string, nullable = false},
     #kura_field{name = name, type = string},
     #kura_field{name = age, type = integer},
     #kura_field{name = inserted_at, type = utc_datetime}].

Querying:

Q = kura_query:from(user),
Q1 = kura_query:where(Q, fun(U) -> U#user.age > 18 end),
my_repo:all(kura_query:order_by(Q1, [{desc, inserted_at}])).

It does the things you'd expect: schemas, changesets, composable queries with joins/CTEs/subqueries/window functions, migrations, associations with preloading, embedded JSONB, transaction pipelines, multitenancy via schema prefix, optimistic locking, audit trail, cursor streaming, pagination.

It's not a port. Records and functions, no macro DSL. The README has a coming-from-Ecto cheatsheet.

https://github.com/Taure/kura

u/taure1 — 5 days ago
▲ 26 r/erlang+2 crossposts

Hi everyone!

TL;DR: I’ve just released Hexy, a simple and efficient app to track and monitor your Hex.pm package downloads. I’d love for you to try it out; it would honestly make my day! If you want to hear the story and the "why" behind the tech choices, keep reading below.

App Store: https://apps.apple.com/it/app/hexy-watcher/id6762607967

screenshot

If you've made it this far, congratulations! You've unlocked the long, confusing version.

First off, please bear with me if I’m terrible at this "social media" thing. Writing sensible announcements, convincing people, or trying to "sell" a product isn't really my forte. I’m way too much of a DIY/maker person—the kind who’d rather spend time at the workbench or glued to the keyboard than figuring out how to communicate.

Actually, I’ve realized over time that the things I find genuinely cool, useful, or interesting usually don’t resonate with most people. I’m a niche person, often excited about details that others don't even notice. But if there’s one place where "niche and passionate" is the norm, it’s here.

So, let's start:

Since I started diving into the BEAM world (Elixir, Erlang, Gleam), I’ve been blown away by the energy. This community has a vibe that’s just different: welcoming, active, and genuinely cool. I’ve felt at home here from day one.

I wanted to make a little something to say thank you. No strings attached, nothing pretentious, just a small gift for all of us who build and share: Hexy Watcher (or "Hexy" for friends).

We all know the feeling: you run mix hex.publish (or, in my case, gleam publish), you close the terminal, and that’s it. But a download isn’t just a stat; it’s a dev on the other side of the world trusting your code to build their dream (so heartwarming). It’s a sign that your work is out there, breathing and moving. I built this so we can keep those trends a bit closer, making the life of a package feel a little more "real" and visible.

The Tech Stack (and why native): 
The app is written in Swift. I know, I know... I could have used a cross-platform framework like Flutter, React Native, Tauri, or even Elixir Desktop.

So, please, don’t ask me things like "Why didn’t you build it for Windows/Linux/Android?" just yet. Please: after years of C# and Windows native dev, I felt the need to get my hands "dirty" with a completely new ecosystem from the ground up. I didn't want to hide behind a multi-platform abstraction. I wanted to experience the full, raw process of publishing something entirely "mine" from the first line of code to the final App Store submission.

I wanted to be responsible for every single pixel and every bit of sync logic, rather than just being the dev who builds a small piece of a larger machine (a terrible idea, I know). That’s why I chose to ignore the "build once, run everywhere" path for a moment: I opened Xcode and went full native, focusing on macOS first and then iOS, using iCloud to keep everything in sync between devices without any setup.

Status:

  • iOS: Live now!
  • macOS: Currently stuck in the "Apple Review Maze" (they’re taking their sweet time!). I might release it outside the App Store soon if they don’t hurry up.
  • Android: I haven’t forgotten you! Once the Apple dust settles, I’ll see if I can embark on that journey.
  • Linux: (P.S. I’m experimenting with Rust + eww, so something might pop up there too!)

I’d love your feedback! If you have a moment to try it out, please let me know what you think. Honestly, even just some "emotional support" would mean the world to me: putting your own work out there for the first time is always a bit nerve-wracking! 😅

I hope you find it useful. It’s just my way of giving back to a community that’s been so great to me.

App Store: https://apps.apple.com/it/app/hexy-watcher/id6762607967

Happy coding, everyone! 💜

One last thing (the "awkward" part): 
I know, I might sound a bit desperate here... but hey, if you appreciate the effort of building a (simple) app for free, with no ads, no tracking, and nothing spying on you, maybe consider buying me a coffee? It would help keep the DIY spirit (and my caffeine levels) alive while I figure out the Android/Linux versions! ☕️

Ko-fi.com/lupodevelop

u/lupodevelop — 13 days ago