u/Dizzy-Bus-6044

Lately I’ve been spending a lot of time thinking about how fragmented transaction flow has become across high-performance chains.

If you’re building anything latency-sensitive (MEV, arb, liquidation bots, even just aggressive trading infra), you probably already feel this:

  • Public RPC is too slow / inconsistent
  • Private relays are opaque and fragmented
  • Direct validator relationships don’t scale cleanly
  • Mempool visibility is partial at best

So everyone ends up duct-taping their own setup:
multiple RPCs, custom routing logic, some relay integrations, maybe a few validator connections if you’re deep enough.

It works… until it doesn’t.

You start seeing weird behavior:

  • Transactions landing inconsistently across similar conditions
  • Same payload performing differently depending on route
  • Latency variance killing otherwise profitable strategies
  • No clear attribution of why something failed or got outcompeted

At some point it stops being about strategy and starts being about distribution.

Feels like we’re heading toward a world where:

  • Transaction routing becomes a first-class layer (not just infra glue)
  • Builders care less about where they send from, more about how intelligently it’s routed
  • “Best execution” starts to include path selection across relays/validators, not just price
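If routing becomes a first-class layer, the routing decision itself is the interesting code. A minimal sketch of score-based path selection, assuming you track recent latency and failures per route (all names here are hypothetical, not a real library):

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    """One possible submission path: a public RPC, a private relay, etc."""
    name: str
    latencies_ms: list[float] = field(default_factory=list)  # recent round-trips
    failures: int = 0
    attempts: int = 0

    def score(self) -> float:
        """Lower is better: median observed latency, penalized by failure rate."""
        if not self.latencies_ms:
            return float("inf")  # never measured; handle via exploration elsewhere
        median = sorted(self.latencies_ms)[len(self.latencies_ms) // 2]
        fail_rate = self.failures / max(self.attempts, 1)
        return median * (1 + 4 * fail_rate)  # failure weight is a tunable guess

def pick_route(routes: list[Route]) -> Route:
    """Send the next transaction down the currently best-scoring path."""
    return min(routes, key=lambda r: r.score())

public_rpc = Route("public-rpc", latencies_ms=[120, 140, 110], failures=2, attempts=10)
relay = Route("private-relay", latencies_ms=[60, 80, 70], failures=0, attempts=10)
print(pick_route([public_rpc, relay]).name)  # private-relay wins on both axes
```

In practice you'd probably fan out to the top-k routes instead of one, which is exactly the redundancy-vs-latency (and cost) tradeoff below.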

Curious how others here are approaching this right now:

  • Are you running your own routing logic or relying on a single path?
  • How are you thinking about redundancy vs latency tradeoffs?
  • Anyone experimenting with dynamic routing based on slot/leader conditions?
  • Or is everyone just quietly building this in-house and not talking about it? 🙂

Feels like an area where a lot is happening, but very little is openly discussed.
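On the slot/leader question, the simplest version I can picture (purely a sketch; the names are made up, and a real leader schedule would come from the chain's RPC) is a lookup from the upcoming leader to whatever direct or relay path you happen to have for that validator:

```python
def route_for_slot(slot: int, leader_schedule: dict[int, str],
                   validator_routes: dict[str, str], default: str) -> str:
    """Pick a submission path based on who leads the given slot.

    leader_schedule: slot -> validator identity (fetched ahead of time)
    validator_routes: validator -> direct/relay endpoint, where one exists
    """
    leader = leader_schedule.get(slot)
    return validator_routes.get(leader, default)

schedule = {101: "validatorA", 102: "validatorB"}           # hypothetical schedule
direct = {"validatorA": "direct-conn-A"}                    # only A has a direct path
print(route_for_slot(101, schedule, direct, "public-rpc"))  # direct-conn-A
print(route_for_slot(102, schedule, direct, "public-rpc"))  # public-rpc (no route to B)
```

The real complexity is upstream of this lookup: keeping the schedule fresh and deciding how far ahead of the slot to commit to a path.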

u/Dizzy-Bus-6044 — 19 days ago · r/ethdev

A few years ago, I was exactly where these candidates are now. Getting that first real opportunity mattered a lot to me, so I’ve tried to give the same chance to others: I’ve been bringing in younger candidates for internship roles, mostly students and early-career folks.

Here’s the pattern I keep seeing:

  • They do really well in assignments/assessments during the hiring process
  • They seem sharp, responsive, and capable
  • Then within a few days or weeks of actually working… everything drops off

Output quality dips, ownership disappears, and the same people who looked great in evaluation suddenly struggle with basic execution.

I’m trying to figure out what’s actually going wrong here.

Is this:

  1. A flaw in how I’m hiring and evaluating?
  2. A gap between “test performance” and real-world work ability?
  3. The impact of AI tools helping them clear assessments but not actually building skills?
  4. Or just normal early-career inconsistency that I’m underestimating?

I don’t want to become cynical and stop giving people early opportunities, but this pattern is too consistent to ignore.

Curious if others hiring at the junior/intern level are seeing the same thing, and what you’ve changed (if anything) to fix it.

u/Dizzy-Bus-6044 — 19 days ago

Everyone calls themselves a “marketer” but:

  • can’t ship a landing page
  • can’t read on-chain data
  • can’t write a single SQL query
  • and needs 3 days to draft one tweet thread

Meanwhile founders are out here doing everything themselves.

Hot take:
If you can’t code even a little, you’re not a Web3 marketer. You’re just doing social media.

We’re looking for someone different:

  • you ship, not suggest
  • you experiment without being told
  • you can write, analyze, and execute
  • you’re comfortable getting your hands dirty across ops, data, and growth

If you’ve ever:

  • scraped data to find alpha
  • built your own dashboards
  • or automated your own workflows

you already stand out.

If this offends you, you’re probably not a marketer.

u/Dizzy-Bus-6044 — 22 days ago