A few months ago I realized I'd been writing networked code for years and couldn't actually explain how TCP works. I could repeat the words "reliable, ordered, connection-oriented" but I couldn't have implemented retransmission or congestion control if my life depended on it.
So I wrote one. In Java, on top of UDP. It's not novel, it's not faster than TCP, and it's not production-ready. None of those was the point. The point was to find out what I'd been pattern-matching on without actually understanding.
A few things that surprised me.
The textbook is misleading on cumulative ACKs.
Every textbook explains them in a paragraph with a diagram, and they look obvious. Implementing one is genuinely weird. You aren't acknowledging the packet you just received; you're acknowledging the next byte you expect. That means the receiver's ACK number can stay stuck at the same value across many incoming packets in a row if there's a gap in the sequence space, and the sender has to read that not as "the network ate my ACKs" but as "the receiver has a hole it hasn't filled yet." The first time through, your gut gets both of those backwards.
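Roughly the shape of the receiver-side bookkeeping (a simplified sketch, not the code from the repo; the names are made up):

```java
// Simplified sketch of cumulative-ACK bookkeeping on the receiver.
// Not the repo's actual code; class and field names are invented for illustration.
import java.util.TreeMap;

class Receiver {
    private long nextExpected = 0;                                     // next byte we expect = the ACK number we send
    private final TreeMap<Long, byte[]> outOfOrder = new TreeMap<>();  // segments buffered beyond a gap

    /** Handle one incoming segment; returns the cumulative ACK to send back. */
    long onSegment(long seq, byte[] payload) {
        if (seq == nextExpected) {
            deliver(payload);
            nextExpected += payload.length;
            // Drain any buffered segments that are now contiguous.
            while (outOfOrder.containsKey(nextExpected)) {
                byte[] p = outOfOrder.remove(nextExpected);
                deliver(p);
                nextExpected += p.length;
            }
        } else if (seq > nextExpected) {
            // Gap: buffer it, but the ACK number does NOT move.
            outOfOrder.put(seq, payload);
        }
        // seq < nextExpected is a duplicate; either way we re-send the same cumulative ACK.
        return nextExpected;
    }

    private void deliver(byte[] p) { /* hand bytes to the application */ }
}
```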
"Slow start" doesn't mean what I assumed.
I'd spent years assuming slow start meant "TCP starts conservatively and stays cautious." That's mostly wrong. Slow start grows the congestion window exponentially. It's "slow" only in absolute terms (one MSS at first) and only relative to "blast everything immediately." In a low-loss session it ramps up fast. The thing that actually is slow is recovery after a loss event, not the start.
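In code the distinction is tiny. A textbook-style sketch, not the repo's exact implementation, with made-up constants:

```java
// Textbook-style congestion window sketch (not the repo's exact code; constants are made up).
class CongestionControl {
    static final int MSS = 1200;          // illustrative segment size
    double cwnd = MSS;                    // start at one segment: "slow" only in absolute terms
    double ssthresh = 64 * MSS;

    void onAck() {
        if (cwnd < ssthresh) {
            cwnd += MSS;                              // slow start: +1 MSS per ACK, i.e. doubling every RTT
        } else {
            cwnd += (double) MSS * MSS / cwnd;        // congestion avoidance: roughly +1 MSS per RTT
        }
    }

    void onTimeout() {
        ssthresh = Math.max(cwnd / 2, 2 * MSS);
        cwnd = MSS;                                   // this is the part that's actually slow to come back from
    }
}
```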
The hardest part wasn't the algorithm. It was measuring it honestly.
I went through three different loss-simulation strategies before I had numbers I trusted.
The first was application-layer drops in the server. Easy to implement, but only my protocol felt them, so any "comparison to TCP" was fake.
The second was a userspace UDP forwarding proxy that dropped datagrams. My protocol felt this, but TCP didn't (the proxy was passthrough for TCP), so still not a fair comparison.
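The core of that proxy is small enough to show. This is a stripped-down, one-direction version (ports and drop rate are made up); the real one also had to forward the reverse path so ACKs could get back:

```java
// Stripped-down, one-direction lossy UDP forwarder (ports and loss rate are illustrative).
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.util.concurrent.ThreadLocalRandom;

public class LossyUdpProxy {
    public static void main(String[] args) throws Exception {
        double lossRate = 0.02;                                  // drop 2% of datagrams
        try (DatagramSocket in = new DatagramSocket(9000);       // clients send here
             DatagramSocket out = new DatagramSocket()) {
            InetSocketAddress server = new InetSocketAddress("127.0.0.1", 9001);
            byte[] buf = new byte[65535];
            while (true) {
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                in.receive(pkt);
                if (ThreadLocalRandom.current().nextDouble() < lossRate) {
                    continue;                                    // "the network" ate this one
                }
                out.send(new DatagramPacket(pkt.getData(), pkt.getLength(), server));
            }
        }
    }
}
```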
The third was supposed to be the real one: kernel-level loss with pfctl + dummynet on macOS. I ran the whole sweep, got pretty "TCP wins, my protocol loses" charts, and then noticed the loopback fast-path on macOS 14+ bypasses the pf hook entirely. Nothing was actually being dropped. The charts were lying to me. I had to throw the data out, rewrite the README to say "I tried this, here's how I know it didn't work, here's what I'd do on Linux instead," and rebuild the chart to be protocol-only with a disclaimer. That was the most uncomfortable change in the project and probably the most useful one.
Writing the spec down forced a different kind of honesty.
Until I had to put a packet format and a receiver state machine into the README, I was hand-waving over half a dozen edge cases. Forcing yourself to write "the receiver MUST do X when Y" turns out to be a different cognitive task from coding something that handles whatever case you happen to be thinking about right now. It surfaced bugs my tests had been silently letting through.
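For a sense of what "write it down" means in practice, this is the kind of thing that has to get pinned down field by field (illustrative layout only, not necessarily the repo's actual format):

```java
// Illustrative fixed-header layout; the repo's actual field order and widths may differ.
import java.nio.ByteBuffer;

final class Header {
    // 13-byte fixed header: seq (4) | ack (4) | flags (1) | recvWindow (2) | payloadLen (2)
    static final int SIZE = 13;
    static final byte FLAG_SYN = 0x1, FLAG_ACK = 0x2, FLAG_FIN = 0x4;

    final int seq, ack;
    final byte flags;
    final int recvWindow, payloadLen;

    Header(int seq, int ack, byte flags, int recvWindow, int payloadLen) {
        this.seq = seq; this.ack = ack; this.flags = flags;
        this.recvWindow = recvWindow; this.payloadLen = payloadLen;
    }

    void writeTo(ByteBuffer buf) {
        buf.putInt(seq).putInt(ack).put(flags)
           .putShort((short) recvWindow).putShort((short) payloadLen);
    }

    static Header readFrom(ByteBuffer buf) {
        return new Header(buf.getInt(), buf.getInt(), buf.get(),
                          Short.toUnsignedInt(buf.getShort()),
                          Short.toUnsignedInt(buf.getShort()));
    }
}
```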
Code (Java, MIT-ish): https://github.com/bhaveshGhanchi/leap
I'm writing it up as a series of posts; happy to drop the link to chapter 1 in a comment if anyone wants more detail. Also happy to take questions or pushback on any of this.