u/am0123

I implemented Raft, a KV store, and a sharded system in Go (MIT 6.5840)

I recently completed the labs from MIT 6.5840 Distributed Systems and implemented everything in Go, including:

  • The Raft consensus algorithm
  • A replicated Key/Value store
  • A sharded KV system with dynamic reconfiguration

The implementation focuses heavily on concurrency and failure handling:

  • goroutines for RPC handling and background tasks
  • channels for coordination between Raft and the state machine
  • dealing with unreliable networks (dropped / delayed / out-of-order RPCs)
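The channel-based coordination between Raft and the state machine roughly follows the apply-channel pattern: a single goroutine drains committed entries so state-machine updates stay serialized even though RPC handlers run concurrently. A minimal sketch (the `ApplyMsg`/`Op` types and `applyLoop` name here are illustrative, not my actual lab code):

```go
package main

import "fmt"

// ApplyMsg mirrors the message Raft delivers on the apply channel once
// an entry is committed (simplified for this sketch).
type ApplyMsg struct {
	CommandValid bool
	Command      Op
	CommandIndex int
}

// Op is a hypothetical command type for a Put operation.
type Op struct {
	Key   string
	Value string
}

// applyLoop is the single goroutine that drains the apply channel, so
// store updates are serialized even though RPC handlers and Raft
// internals run on other goroutines.
func applyLoop(applyCh <-chan ApplyMsg, store map[string]string, done chan<- struct{}) {
	for msg := range applyCh {
		if !msg.CommandValid {
			continue // e.g. a snapshot message in the real labs
		}
		store[msg.Command.Key] = msg.Command.Value
	}
	close(done)
}

func main() {
	applyCh := make(chan ApplyMsg, 16)
	store := map[string]string{}
	done := make(chan struct{})
	go applyLoop(applyCh, store, done)

	applyCh <- ApplyMsg{CommandValid: true, Command: Op{Key: "x", Value: "1"}, CommandIndex: 1}
	close(applyCh)
	<-done
	fmt.Println(store["x"]) // prints "1"
}
```

The key design choice is that only this one goroutine mutates the store; everything else communicates with it through the channel.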

Some interesting challenges:

  • ensuring commitIndex never goes backward under out-of-order RPC responses
  • handling retries safely with client/request IDs (idempotency)
  • keeping deduplication state consistent across snapshots and shard transfers
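The idempotency point deserves a concrete shape. A common approach (sketched here with illustrative names, not my exact lab types) is a per-client table of the last applied sequence number and its result; a retried request is answered from the table instead of being executed twice. This table is exactly the state that must travel with snapshots and shard transfers:

```go
package main

import "fmt"

// dedupEntry remembers the last applied sequence number and its result
// for one client.
type dedupEntry struct {
	Seq    int64
	Result string
}

// KVServer is a minimal sketch of the server-side state: the KV map
// plus the per-client deduplication table. Both must be saved in
// snapshots and handed over during shard transfers together, or a
// retried request can execute twice after a restart or migration.
type KVServer struct {
	store map[string]string
	dedup map[int64]dedupEntry // clientID -> last applied request
}

// applyPut executes a Put at most once per (clientID, seq): a retry of
// an already-applied request replays the cached result instead of
// mutating the store again.
func (kv *KVServer) applyPut(clientID, seq int64, key, value string) string {
	if e, ok := kv.dedup[clientID]; ok && seq <= e.Seq {
		return e.Result // duplicate: replay cached result
	}
	kv.store[key] = value
	kv.dedup[clientID] = dedupEntry{Seq: seq, Result: value}
	return value
}

func main() {
	kv := &KVServer{store: map[string]string{}, dedup: map[int64]dedupEntry{}}
	kv.applyPut(1, 1, "x", "a")
	kv.applyPut(1, 1, "x", "a") // retried RPC: ignored, not re-applied
	fmt.Println(kv.store["x"])  // prints "a"
}
```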

I wrote a detailed README explaining both the design and the tricky edge cases I encountered.

github.com
u/am0123 — 11 hours ago

The journey of a request in a Raft-based KV store (from client to commit)

After implementing the MIT 6.5840 distributed systems labs, I wanted to better understand what actually happens when a client sends a request to a replicated key-value store built on Raft.

I wrote a short article where I follow the full path of a request:
client → leader → replication → commit → apply → response

What surprised me is how quickly this “simple” flow breaks in practice:

  • the leader can change mid-request
  • network partitions create stale leaders
  • retries can lead to duplicate execution


A lot of the complexity isn’t in Raft itself, but in making the system behave correctly under these conditions.
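From the client's side, much of that correctness work collapses into one retry loop. A minimal sketch, assuming a hypothetical `sendPut` per-server call and server-side deduplication on (clientID, seq) — the real labs use labrpc, so the signature here is invented for illustration:

```go
package main

import "fmt"

// sendPut abstracts one RPC attempt to one server; it returns ok=false
// when that server is not (or no longer) the leader.
type sendPut func(clientID, seq int64, key, value string) (ok bool, result string)

// Clerk is a minimal client: it remembers the last known leader and,
// on failure, rotates to the next server. The (clientID, seq) pair
// makes blind retries safe as long as the servers deduplicate.
type Clerk struct {
	servers  []sendPut
	leader   int
	clientID int64
	seq      int64
}

func (ck *Clerk) Put(key, value string) string {
	ck.seq++ // one seq per logical request, reused across all retries
	for {
		ok, result := ck.servers[ck.leader](ck.clientID, ck.seq, key, value)
		if ok {
			return result
		}
		// Wrong leader, stale leader behind a partition, or dropped
		// reply: try the next server with the same seq.
		ck.leader = (ck.leader + 1) % len(ck.servers)
	}
}

func main() {
	notLeader := func(_, _ int64, _, _ string) (bool, string) { return false, "" }
	leader := func(_, _ int64, _, value string) (bool, string) { return true, value }
	ck := &Clerk{servers: []sendPut{notLeader, notLeader, leader}}
	fmt.Println(ck.Put("x", "42")) // prints "42"
}
```

Reusing the same seq across retries is what turns "retries can lead to duplicate execution" into a solvable server-side problem.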

I'd be interested in feedback, especially if you've built something similar.

abdellani.dev
u/am0123 — 2 days ago