this model is kinda half right, but it's wrong in a crucial way. It suggests that if you have a 1 gigabit/sec pipe, you can have ten 100 megabit/sec flows all flowing "simultaneously", like a kind of stacking or mixing.
The window says "hey, feel free to send me this many packets, even if I haven't acknowledged them yet". Ideally you want the window to be as big as the number of free slots between you and the recipient.
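Here's a rough Python sketch of that, if it helps: the "number of free slots" is basically the bandwidth-delay product. The link speed and round-trip time below are made-up example numbers, nothing more.

    # Rough sketch: the ideal window is roughly the bandwidth-delay product -
    # how many bytes can be "on the belt" at once.
    link_speed_bits_per_sec = 1_000_000_000   # example: a 1 gigabit/sec pipe
    round_trip_time_sec = 0.010               # example: 10 ms there and back

    bdp_bytes = link_speed_bits_per_sec / 8 * round_trip_time_sec
    print(f"ideal window is roughly {bdp_bytes / 1024:.0f} KiB")   # ~1221 KiB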
So TCP tries to find this value: it starts out slow, and increases until it detects a drop - a packet that didn't make it, which it assumes is because there aren't enough slots. When that happens it reduces the size of the window (often by a lot). It sort of "homes in".
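If you want to see the "homes in" shape, here's a toy Python sketch: additive increase, multiplicative decrease. The capacity and the drop model are completely made up; real TCP congestion control is a lot more subtle than this.

    import random

    # Toy sketch of "homing in": grow the window until a drop, then cut it.
    # The capacity and the random-drop model are invented for illustration.
    capacity = 100          # pretend the path has 100 free slots
    window = 1

    for rtt in range(40):
        dropped = window > capacity or random.random() < 0.01
        if dropped:
            window = max(1, window // 2)   # multiplicative decrease
        else:
            window += 1                    # additive increase
        print(f"rtt {rtt:2d}: window = {window}")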
On modern big fast networks, like AWS's, the default for this window size on common OSes is often too small, as is the number of "slots" in other layers. This is why increasing the TCP window size, the TCP read/write buffers, and the ethernet queue lengths can make such a dramatic difference.
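From an application you can at least ask for bigger per-socket buffers; a quick Python sketch below. The OS may clamp these to its own limits (e.g. net.core.rmem_max / wmem_max on Linux), and the 4 MB figure is just an example value, not a recommendation.

    import socket

    # Sketch: ask for bigger per-socket send/receive buffers. The OS may
    # clamp these to its own limits; 4 MB is just an example value.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))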
O.k. the next thing our model shows is that acknowledging every packet is kind of dumb; sending back a sushi roll for every one that you receive is just wasteful. We can acknowledge what we got, and what we didn't get, in batches!
The modern form of this is selective acknowledgements (SACK) ... where we can basically just scribble a note back on the sushi belt that tells the sender "hey, here's what I got and didn't get". The sender then re-transmits only what it needs to.
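On Linux, SACK is controlled by the net.ipv4.tcp_sack sysctl, and it's usually on by default on modern kernels. A tiny Python sketch to check (the /proc path is Linux-specific):

    from pathlib import Path

    # Sketch: on Linux, SACK is governed by the net.ipv4.tcp_sack sysctl;
    # "1" means selective acknowledgements are on.
    sack = Path("/proc/sys/net/ipv4/tcp_sack")
    if sack.exists():
        print("tcp_sack =", sack.read_text().strip())
    else:
        print("no /proc/sys/net/ipv4/tcp_sack here (not Linux?)")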
So if you do those two things: make your window size big, and make sure selective acknowledgements are on, you can make a big difference to your performance! That video can get to you more quickly.
Of course we do this kind of stuff ourselves for our own services, but if you're transferring data between your own machines or whatever, take a look!
now let's extend the model: I said that a pipe or link is like a sushi belt, but there are lots of links interconnecting! So it's like a stadium full of sushi belts, with packets hopping belts. It's like Tim Burton and the Coen brothers made a movie together.
when the belts interconnect, they might be moving at different speeds, or one might have less capacity than the other, so we have little holding areas, we call these "buffers".
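A toy Python sketch of one of those holding areas, if it helps: a bounded, drop-tail buffer between a fast belt and a slow one. The buffer size and packet count are arbitrary.

    from collections import deque

    # Toy sketch: a bounded "holding area" between a fast belt and a slow one.
    # When it's full, new packets are simply dropped (drop-tail).
    BUFFER_SLOTS = 8
    buffer = deque()

    def enqueue(packet):
        if len(buffer) >= BUFFER_SLOTS:
            return False            # buffer full: packet is dropped
        buffer.append(packet)
        return True

    dropped = sum(not enqueue(f"pkt-{i}") for i in range(12))
    print(f"{len(buffer)} packets buffered, {dropped} dropped")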
Generally packets enter and leave these buffers in order, but in some networks you can have priority lanes here, giving priority to some packets over others. At AWS, our belts move so quickly and there are so many free slots that we don't need to do this, it'd be pointless.
But if there is congestion, and slots are busy, it's because senders are sending too much; it's key that they find out quickly, so we make these buffers small. The problem caused by making these buffers too big is called buffer bloat (https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_do_about_Bufferbloat/)
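Here's the rough arithmetic behind that, as a Python sketch: a packet at the back of a full buffer waits roughly (buffer size / link speed) before it even gets onto the belt. The link speed and buffer sizes are made-up examples.

    # A packet at the back of a full buffer waits (buffer size / link speed)
    # before it even leaves. Example numbers only.
    link_speed_bytes_per_sec = 10_000_000 / 8          # a 10 megabit/sec link
    for buffer_bytes in (64 * 1024, 1024 * 1024):
        delay_ms = buffer_bytes / link_speed_bytes_per_sec * 1000
        print(f"{buffer_bytes // 1024:4d} KiB buffer -> ~{delay_ms:.0f} ms of queueing delay")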
This can all be very confusing without the right mental model. Because we generally want one kind of buffer - the window size - to be big, but another kind of buffer - the buffers between links - to be small.
But if you see the network as train-cars or sushi on a belt, what you can see is that what we *really* want is to fill as many slots as we can when we're sending data! That's really all that's going on.
One problem with the metaphor: packets don't actually go in loops, they come off at the other end, so unlike a sushi belt, there's a kind of off-ramp at each end. Also, packets only enter and exit at the ends. There's really no perfect metaphor.
I'm going to meditate on better metaphors, so that's it for now :)