r/elixir

wrote a phoenix liveview app that searches across youtube video transcripts and the real time search feels absurdly good
▲ 24 r/elixir

i work at a mid-size marketing agency and we have about 200 youtube videos. client case study recordings, internal strategy sessions, conference talks from our founders, onboarding walkthroughs for new hires. all unlisted and shared through notion. nobody can find anything because the only way to search is by video title, which is usually something useless like "Q3 strategy call sept 14."

i've been looking for an excuse to build something real in phoenix liveview so i used this.

the app is a single liveview page. search box at the top, results below. as you type, results update live through the socket. each result shows the video title, date, speaker, and a snippet of the transcript around the matching text with the match highlighted. click the result and it opens the youtube video.

the backend is postgres with full text search. tsvector on the transcript column, GIN index, ts_headline for the snippet extraction. the liveview handles the search with a debounce on the phx-change event so it's not hammering postgres on every keystroke. i set it to 250ms which feels right. fast enough that it seems instant but not so aggressive that it fires on every character.
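A minimal sketch of the kind of query this describes, using Ecto's `fragment` for the tsvector match and `ts_headline` snippet. The table, column, and module names here are assumptions, not the author's actual code:

```elixir
import Ecto.Query

# Hedged sketch: full-text search over a "transcripts" table that has a
# precomputed tsvector column ("search_vector") with a GIN index on it.
def search(term) do
  query =
    from t in "transcripts",
      where: fragment("search_vector @@ plainto_tsquery('english', ?)", ^term),
      select: %{
        title: t.title,
        # ts_headline extracts a snippet around the match and wraps the
        # matching words in <mark> tags for highlighting in the template
        snippet:
          fragment(
            "ts_headline('english', body, plainto_tsquery('english', ?), 'StartSel=<mark>,StopSel=<\/mark>')",
            ^term
          )
      }

  Repo.all(query)
end
```

The debounce itself is LiveView's built-in binding: `phx-debounce="250"` on the search input, no custom code needed.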

for pulling the actual transcripts i use transcript api:

npx skills add ZeroPointRepo/youtube-skills --skill youtube-full

i wrote a mix task for ingestion. give it a youtube url and it pulls the transcript, parses it, and inserts it into the database. added a --file flag so i could point it at a text file with all 200 urls and let it run through them. the whole ingestion took maybe 3 minutes.
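A hedged sketch of what such an ingestion task could look like. The transcript-fetching step is elided and every module and function name here is hypothetical, not the author's code:

```elixir
defmodule Mix.Tasks.Transcripts.Ingest do
  use Mix.Task

  @shortdoc "Ingest one YouTube URL, or a file of URLs via --file"
  def run(args) do
    # Start the app so Repo and friends are available inside the task
    Mix.Task.run("app.start")

    case args do
      ["--file", path] ->
        path
        |> File.stream!()
        |> Enum.each(&ingest(String.trim(&1)))

      [url] ->
        ingest(url)
    end
  end

  defp ingest(url) do
    # fetch_transcript/1 stands in for whatever transcript tool you use;
    # create!/2 parses and inserts the transcript into the database
    {:ok, transcript} = MyApp.Transcripts.fetch_transcript(url)
    MyApp.Transcripts.create!(url, transcript)
  end
end
```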

the thing that sold my coworkers on it was the liveview search. i demoed it in a meeting and people immediately started shouting out search terms to try. someone typed in a client name and found every video where that client was discussed. someone else searched for "attribution modeling" and found a conference talk from 2022 that nobody remembered existed.

the codebase is small. one liveview module, one context module with the search query, the mix task for ingestion, and two templates. maybe 300 lines of elixir total. deployed it on fly.io on the free tier since it's just internal and the traffic is light.

the part i keep coming back to is how well liveview fits this use case. server rendered search with live updates over websockets and zero javascript. the search box, the debounce, the result list, the highlighting, all just liveview. i would have needed react or vue for this in any other framework.

u/scheemunai_ — 24 hours ago
▲ 27 r/elixir

Where do Elixir devs find good remote jobs?

Hey folks,

I’m an Elixir developer with 3.5+ years of experience, mostly working with Phoenix + LiveView. Lately I’ve been thinking about switching jobs because I feel a bit underpaid in my current role, but I’m mainly looking for good remote opportunities.

Most openings I come across are either “US/UK remote only” or ask for 5–8+ years of experience, so I wanted to ask:

Where do Elixir devs usually find solid remote jobs that are open internationally?

Also, if anyone here knows of openings, is hiring, or wouldn’t mind referring me / giving me a shot at an interview, I’d really appreciate it.

Would also love to hear how others in the Elixir community found their remote roles. Btw I'm from India 🙂

Thanks!

reddit.com
▲ 10 r/elixir+1 crossposts

ex_data_sketch v0.8.0 — Deterministic Foundations

ex_data_sketch v0.8.0 is out. This release invests entirely in the substrate that all 15 existing sketches share, preparing the ground for v0.9.0, where we add streaming integrations (Broadway / GenStage) and storage backends (ETS / DETS / Zarr).

What's new:

  • Deterministic hashing. Every sketch now goes through a validated, byte-stable hash layer. HLL, ULL, Theta, and CMS accept hash_strategy: :murmur3 for Apache DataSketches interop — this was silently ignored in v0.7.x. XXHash3 remains the default and fastest path (~30 M items/sec at p=14 on the Rust NIF).

  • Binary stability & corruption detection. Serialized sketches now carry a CRC32C trailer and an embedded hash metadata block (EXSK v2). Bit-flip corruption that previously would silently produce wrong estimates is now caught and returns a structured DeserializationError. v0.8.0 reads v1 frames; v0.7.x cannot read v2 — stage your rollout accordingly.

  • Murmur3 hot path. 8 new Rust NIFs extend in-Rust hashing to Murmur3. The Murmur3 path is within 8% of XXH3 throughput. No more falling off the fast path when you select :murmur3.

  • Precompiled NIFs for Windows. x86_64 and ARM64 MSVC targets join the matrix. 16 artifacts total (8 targets x 2 NIF versions). No Rust toolchain needed on any supported platform.

  • Property-locked guarantees. 14 StreamData properties lock HLL/ULL monotonicity and error bounds, KLL/REQ rank consistency, CMS overestimation-only, and Bloom/XorFilter/Cuckoo no-false-negative. A 200-mutation fuzz suite verifies that binary v2 corruption never silently propagates.

Breaking changes (2):

  1. EXSK v2 is one-way. v0.7.x readers can't decode v2 frames. Deploy readers first, then producers.
  2. hash_strategy: :murmur3 is no longer silently overridden to :xxhash3. Sketches that specified Murmur3 will now actually use it — estimates are correct but differ from v0.7.x.
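For anyone affected by breaking change 2, opting in explicitly would look something like this. The constructor name and option shape are assumptions inferred from the release notes, not documented API:

```elixir
# Hedged sketch: an HLL sketch that actually hashes with Murmur3 in
# v0.8.0 (in v0.7.x this option was silently overridden to :xxhash3).
hll = ExDataSketch.HLL.new(precision: 14, hash_strategy: :murmur3)
```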

One-liner upgrade:

{:ex_data_sketch, "~> 0.8.0"}

Most users need no code changes. Full migration guide ships in HexDocs.

Stats: 1,317 tests, 171 properties, 92.7% coverage, 0 credo issues.

GitHub | Hex | Docs

reddit.com
u/Shoddy_One4465 — 1 day ago
▲ 38 r/elixir+1 crossposts

Edge Core: a self-hostable control plane for distributed Linux fleets, built in Elixir

Hey guys! We finally opened up the codebase for something we've been working on for over a year.

I joined a company that spent 3 years (and counting) trying to ship products on locked down edge hardware. Every product kept hitting the same walls: deployments and monitoring were a black box, machines on the same LAN couldn't reliably find each other, and every new app had to reimplement the same WS/MQTT logic just to stay in touch with the cloud.

So we built Edge Core to solve these pain points. In V1, we used Headscale/Tailscale for the VPN. It mostly worked for what we wanted (remote execution, SSH, metrics aggregation, etc.), but couldn't scale past ~100 nodes (mesh explosion, O(n²) connections) and gave us no isolation between different projects (each project had to spin up its own core, though ACLs exist). In V2 (current version), we moved to Netmaker for a proper mesh/network segmentation solution, added a forward proxy + dynamic proxy chaining for cloud-to-edge communication, and built the whole orchestration layer on top.

OpenAPI/Swagger Docs

AsyncAPI docs

Some Elixir specific stuff that might interest you:
- Masterless clustering for the control plane: no (strong) leader election, no Raft consensus. Admins coordinate via `:syn` registry and Postgres. Each admin runs the same deterministic sharding algorithm and converges independently.
- Oban and Quantum for async background jobs
- API-first control plane with clustered HTTP/SOCKS5 proxy servers and first-class, Prometheus-compatible fleet metrics discovery + scraping
- MCP server that mirrors the full REST API: basically every API endpoint is also an MCP tool, so AI agents can drive the whole fleet
- Webhook system and event broker integration for async system events with 7 adapters (NATS, Kafka, AMQP 0.9.1/RabbitMQ, Redis, MQTT, AWS SNS, and GCP Pub/Sub).
- Agent and shared libs are Apache 2.0. Admin is ELv2.
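The masterless clustering point can be illustrated with rendezvous (highest-random-weight) hashing: every admin node computes the same owner for a key independently, so they converge without leader election. This is an illustration of the idea under that assumption, not Edge Core's actual code:

```elixir
defmodule Shard do
  # Rendezvous hashing: each admin scores itself against the key with a
  # portable hash; the highest score wins. :erlang.phash2/1 gives the same
  # result on every node, so all admins agree with no coordination.
  def owner(key, admins) when admins != [] do
    Enum.max_by(admins, fn admin -> :erlang.phash2({admin, key}) end)
  end
end
```

Because the score depends only on `{admin, key}`, removing one admin reassigns only the keys that admin owned; everything else stays put.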

Links:
- Repo: https://github.com/wenet-ec/edge-core
- Docs: https://wenet-ec.github.io/edge-core/
- Learn about edge core's concepts: https://wenet-ec.github.io/edge-core/guide/
- Architecture: https://wenet-ec.github.io/edge-core/architecture/

reddit.com
u/Best_Recover3367 — 3 days ago
▲ 55 r/elixir+1 crossposts

I was in the Golang subreddit speaking about my new business and they got mad because I told them that the BEAM is better than Golang in certain problem sets. I built a Golang SDK just for them so they could get the reliability the BEAM could offer in serverless, and they downvoted my post. So I was curious what you guys think. I'm open to all criticism and feedback, but I truly cannot imagine any design coming close. I may have learned the BEAM with AI after getting laid off, but I feel like my years of operations work with Java, Golang, and Python stacks in serverless make me more pissed off than you all. Lol JK, I just wanted to get you excited for the design. I promise I'm humble.

If you have no idea what the internet was like before TCP — to send a file over the internet you had to know C. It was a huge pain in the ass. If you missed a chunk of data you had to rewrite the program. Every developer had their own custom retry logic. Everyone just sent packets as fast as possible with no appreciation for pacing. Then a small team — Vint Cerf and Bob Kahn — wrote TCP and it became the most foundational algorithm of the entire internet. We built the modern web, APIs, and databases on top of that algorithm. Sending email became trivial.

One more story that is important before my design: before Ericsson, in order to make a call there was a human switch operator. If the switch was full, the caller would be rejected. It didn't matter if the caller had an emergency — the model was first come, first served. Ericsson built the runtime to deal with concurrent processes with isolation boundaries that would make sure telecom systems were resilient to crashes.

Now on to my design. Agent retry storms are coming for everyone's APIs. A human being might visit 10–30 websites and call APIs maybe 20 times. The Cloudflare CEO said it plainly at SXSW this year: "Your agent will often go to a thousand times the number of sites a human would — it might go to 5,000 sites. And that's real traffic, and that's real load." Those agents will call APIs as fast as possible. The APIs will throw 429s, which only inspires the clients to send more requests. The API's servers will slow down, which inspires clients to send more still, until one crashes. A fleet that was at capacity with N machines becomes a ticking time bomb once N-1 machines have to handle the same load. The autoscaler provisions new machines, but they get crushed during warmup, and the cascade continues until the entire fleet is down.

Enter EZThrottle. The same way Ericsson absorbs bursts of call requests and routes them to the best switch, EZThrottle queues, paces, and reroutes API calls past partial outages — in both directions. It protects the APIs you call and the API you run. It solves the noisy neighbor problem by giving each user their own queue. When it receives a 500, it uses the Fly.io network to send directly to another region to see if it works over there. It's what Cloudflare is for inbound traffic, but for your outbound API calls. Stripe, Google, OpenAI, and your gateway server could all be having partial outages and EZThrottle will fight to get each call through. No cold starts. No performance choking on retry storms. No spiky traffic — just smooth, predictable requests sent at the pace the API can actually handle. The resilience of the BEAM in your non-BEAM services.

I've linked the actual writeups below, but tell me — have you ever seen a more elegant architecture on the BEAM?

https://ezthrottle.network/blog/making-failure-boring-again
https://ezthrottle.network/blog/serverless-2-rip-operations
https://ezthrottle.network/blog/a-queue-per-user-at-scale

u/Noobcreate — 8 days ago
▲ 32 r/elixir+1 crossposts

Anyone here running Elixir + Rust in production?

New BEAM There, Done That episode with Florian Gilcher (Ferrous Systems) and Leandro Pereira (MDEx, BeaconCMS) dives into where Rust actually fits in Elixir systems — NIFs, ports, performance bottlenecks, and hybrid architectures.

case system do
  :cpu_bound -> Rust
  :distributed -> Elixir
  :both -> "why not both?"
end

Good discussion on when to stay on the BEAM vs when Rust genuinely helps instead of just adding complexity.

https://youtu.be/w5Pl09lpSmE

u/rtrusca — 6 days ago
▲ 15 r/elixir

As the title suggests, I'm relatively new to elixir. I've read books and I've done some projects here and there. I've dedicated this year to getting my hands into elixir, and boy am I being radicalized lol. I love elixir, erlang/otp, the community, and the advancements. I actually just finished watching Chris McCord's 2025 keynote, inspired by a recent post about the DurableServer, good stuff. Anyway, onto the actual post.


Long story short: to try out elixir + wasm (using popcorn), I've decided to implement the game Snake. I've also decided to do it with no coding agents. The first phase was just to create a genserver to use from iex, to get the logic down. (Small aside: I tried to make a full TUI version of it, but elixir doesn't seem to have great TUI libs/support. I'm using iTerm2 if that helps.)

Part of the snake implementation is the logic to place the food pellet. The only constraints are that the pellet needs to be on the grid and it cannot land on the snake.

def move_food(%__MODULE__{grid_height: height, grid_width: width} = state) do
  new_food_spt =
    Enum.reduce_while(1..(height * width), nil, fn _, _ ->
      spt = %{x: Enum.random(0..width-1), y: Enum.random(0..height-1), direction: nil}

      if spt_full?(state, spt.x, spt.y) do
        {:cont, nil}
      else
        {:halt, spt}
      end
    end)

  if new_food_spt == nil do
    raise "No space for food?"
  end

  %{state | snake_food: new_food_spt}
end

defp spt_full?(%__MODULE__{snake_head: head, snake_tail: tail}, x, y)
     when x >= 0 and y >= 0 do
  [head | tail] |> Enum.any?(fn spt -> spt.x == x and spt.y == y end)
end

spt is short for snake point, it's different from just a point as it also has a direction. In my head it reads as "spot."

This is the implementation I went with. As I'm new to elixir, I'm unsure if this is a good way to go about it. I know that this is a super small thing, and I don't need to have it hyper optimized, I'm mainly asking to see other implementations or any thoughts.

The idea here is to randomly generate a point, then check whether the spot is already taken; if it is, try again. Do this area-of-the-grid times.

I know this is super simple, but it feels wrong somehow to use reduce like this. Also, since the point generator is random, it could technically retry the same taken spot(s) on every iteration, so the iteration count isn't a real guarantee. I just feel that it's ... so unlikely ... until you've properly "snaked" and have filled up the board lol. Should I add a "seen" set inside the reduce fn? Any thoughts? Implementations? Admonishments?
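One alternative worth considering, offered as a sketch rather than the "right" answer: enumerate the free cells and pick one at random. It's O(area) per placement, but it can never retry a taken spot, and it only fails when the board is genuinely full:

```elixir
# Hedged alternative to the rejection-sampling loop above, reusing the
# poster's own spt_full?/3 helper and struct fields.
def move_food(%__MODULE__{grid_height: height, grid_width: width} = state) do
  free =
    for x <- 0..(width - 1),
        y <- 0..(height - 1),
        not spt_full?(state, x, y),
        do: %{x: x, y: y, direction: nil}

  case free do
    [] -> raise "No space for food?"
    cells -> %{state | snake_food: Enum.random(cells)}
  end
end
```

Rejection sampling is actually fine (and faster on a mostly empty board); the comprehension only wins near the endgame, when most cells are snake.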

u/amzwC137 — 8 days ago
▲ 25 r/elixir

Hey folks,

Hologram UI is in the works - the official component library for Hologram (my full-stack Elixir framework that compiles Elixir to run in the browser) - and the waiting list is now open.

A common reaction when I mention this is: "Just generate it with an LLM."

But there's more to a good component than markup - accessibility, responsive design, cross-browser compatibility, and the dozen edge cases nobody remembers until someone hits them. And it's not just individual components, it's them working together: consistent API, shared design tokens, predictable behavior, reusable composition patterns. A library designed around Hologram's conventions gives you idiomatic components - not generic ones bolted on.

If rolling your own works, no reason to stop. But not everyone wants to spend time on components when they could be building their actual product.

What you get on the waiting list:

  • Early access before the general launch
  • Help shape what ships in v1 (input on which components get priority)
  • Proceeds sustain Hologram's development

Join the waiting list: https://hologram.page/ui

Curious what you'd prioritize - what components do you find yourself rebuilding most often? Forms, modals, data tables, navigation? Would love to hear what your typical Hologram (or Elixir frontend in general) project actually needs.

u/BartBlast — 8 days ago
▲ 12 r/elixir

Kubernetes scales compute. It doesn't solve fairness or spiky traffic. When an agentic burst hits your services, pods get hammered, one noisy tenant crowds out everyone else, and the autoscaler takes 2 minutes to respond while requests are already failing.

EZThrottle Local is a single BEAM node that queues inbound jobs in memory and drains them to your upstream at a pace the service controls — via response headers. Your service responds with X-EZTHROTTLE-RPS: 20 and the queue drains at 20 RPS. As Kubernetes scales up, you raise the number. No redeploy, no config change.

Each user gets their own independent pace when you need it. A premium user can run at 50 RPS while a free-tier user runs at 2, in parallel, without either affecting the other — opt-in via a single response header.

On a 32GB machine it holds 3–32 million jobs in memory. That's hours of buffer for most agentic workloads — enough for any autoscaler to catch up before a single request is dropped. Built on the BEAM, so hot code reloads preserve the in-memory queue across deploys.
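For the header contract described above, an upstream service advertising its drain rate might look like this in Plug terms. The module name is hypothetical; only the header name comes from the post:

```elixir
defmodule MyService.Pacing do
  import Plug.Conn

  def init(opts), do: opts

  # Tell the queue to drain at 20 RPS; raise the number as you scale up.
  def call(conn, _opts) do
    put_resp_header(conn, "x-ezthrottle-rps", "20")
  end
end
```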

https://github.com/rjpruitt16/ezthrottle-local

u/Noobcreate — 9 days ago
▲ 26 r/elixir+2 crossposts

Hi everyone!

TL;DR: I’ve just released Hexy, a simple and efficient app to track and monitor your Hex.pm package downloads. I’d love for you to try it out; it would honestly make my day! If you want to hear the story and the "why" behind the tech choices, keep reading below.

App Store: https://apps.apple.com/it/app/hexy-watcher/id6762607967

screenshot

If you've made it this far, congratulations! You've unlocked the long, confusing version.

First off, please bear with me if I’m terrible at this "social media" thing. Writing sensible announcements, convincing people, or trying to "sell" a product isn't really my forte. I’m way too much of a DIY/maker person—the kind who’d rather spend time at the workbench or glued to the keyboard than figuring out how to communicate.

Actually, I’ve realized over time that the things I find genuinely cool, useful, or interesting usually don’t resonate with most people. I’m a niche person, often excited about details that others don't even notice. But if there’s one place where "niche and passionate" is the norm, it’s here.

So, Let's start:

Since I started diving into the BEAM world (Elixir, Erlang, Gleam), I’ve been blown away by the energy. This community has a vibe that’s just different: welcoming, active, and genuinely cool. I’ve felt at home here from day one.

I wanted to make a little something to say thank you. No strings attached, nothing pretentious, just a small gift for all of us who build and share: Hexy Watcher (or "Hexy" for friends).

We all know the feeling: you run mix hex.publish (or, in my case, gleam publish), you close the terminal, and that’s it. But a download isn’t just a stat; it’s a dev on the other side of the world trusting your code to build their dream (so heartwarming). It’s a sign that your work is out there, breathing and moving. I built this so we can keep those trends a bit closer, making the life of a package feel a little more "real" and visible.

The Tech Stack (and why native): 
The app is written in Swift. I know, I know... I could have used a cross-platform framework like Flutter, React Native, Tauri, or even Elixir Desktop.

So, please, don’t ask me things like "Why didn’t you build it for Windows/Linux/Android?" just yet. After years of C# and Windows native dev, I felt the need to get my hands "dirty" with a completely new ecosystem from the ground up. I didn't want to hide behind a multi-platform abstraction. I wanted to experience the full, raw process of publishing something entirely "mine", from the first line of code to the final App Store submission.

I wanted to be responsible for every single pixel and every bit of sync logic, rather than just being the dev who builds a small piece of a larger machine. That’s why I chose to ignore the "build once, run everywhere" path for a moment: I opened Xcode and went full native, focusing on macOS first and then iOS, using iCloud to keep everything in sync between devices without any setup.

Status:

  • iOS: Live now!
  • macOS: Currently stuck in the "Apple Review Maze" (they’re taking their sweet time!). I might release it outside the App Store soon if they don’t hurry up.
  • Android: I haven’t forgotten you! Once the Apple dust settles, I’ll see if I can embark on that journey.
  • Linux: (P.S. I’m experimenting with Rust + eww, so something might pop up there too!)

I’d love your feedback! If you have a moment to try it out, please let me know what you think. Honestly, even just some "emotional support" would mean the world to me; putting your own work out there for the first time is always a bit nerve-wracking! 😅

I hope you find it useful. It’s just my way of giving back to a community that’s been so great to me.

App Store: https://apps.apple.com/it/app/hexy-watcher/id6762607967

Happy coding, everyone! 💜

One last thing (the "awkward" part): 
I know, I might sound a bit desperate here... but hey, if you appreciate the effort of building a (simple) app for free, with no ads, no tracking, and no spy attached, maybe consider buying me a coffee? It would help keep the DIY spirit (and my caffeine levels) alive while I figure out the Android/Linux versions! ☕️

Ko-fi.com/lupodevelop

reddit.com
u/lupodevelop — 14 days ago