Often your clients will have started to retry their requests. Retries pile up over time, more and more of them, and you can easily end up with far more requests than you would ordinarily see, right as you are trying to recover. Ugh.
IME, how it "really" works is that the folks who build clients and services have a sort of code-of-honor system. Everyone recognizes that anything else would be a wasteful race to the bottom, and self-policing mostly works. So browsers, SDKs, etc. all do sane, safe, decent things.
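The "sane, safe, decent" behavior usually boils down to capped exponential backoff with jitter on retries. A minimal Python sketch, where fetch_with_backoff and its parameters are illustrative rather than taken from any particular SDK:

```python
import random
import time
import urllib.request
import urllib.error

def fetch_with_backoff(url, max_attempts=6, base=0.5, cap=30.0):
    """Retry a GET with capped exponential backoff plus full jitter,
    so a fleet of clients doesn't hammer a recovering service in lockstep."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to the exponential bound.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The jitter is the part that matters most here: it spreads the retries out so a recovering service doesn't get hit by synchronized waves.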
I don't really have a big technical lesson there; I just find it fascinating that huge sections of the economy can get by like that. It's very inspiring and reassuring! /EOF
End of conversation
New conversation
Blocking isn't the only option out there. There are also server-side throttling and AQM (active queue management). Yes, those might duplicate some of the retry/backoff logic at the clients, but IMO that's worth it to do better than blocking outright.
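A minimal sketch of what server-side throttling can look like, assuming a simple concurrency cap that sheds excess requests with a 429 instead of blocking clients entirely; AdmissionGate and the limit here are illustrative, not from any specific system:

```python
import threading

class AdmissionGate:
    """Server-side throttle: admit at most `limit` concurrent requests,
    shed the rest instead of letting a retry storm queue up unbounded."""
    def __init__(self, limit):
        self._slots = threading.BoundedSemaphore(limit)

    def try_admit(self):
        return self._slots.acquire(blocking=False)

    def release(self):
        self._slots.release()

def handle(gate, request, do_work):
    # Tell the client to back off (429) rather than blocking it outright.
    if not gate.try_admit():
        return 429
    try:
        return do_work(request)
    finally:
        gate.release()
```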
I've even deployed a combination of a CoDel-ish algorithm (but delay instead of drop) and CFQ purely on the server side, in a large system where the protocol (NFSv3) didn't even support client-side retry/backoff, with some success.
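A heavily simplified, hypothetical sketch of the "CoDel-ish, but delay instead of drop" idea; real CoDel tracks the minimum sojourn time over an interval and uses a control law for its drop schedule, so this only captures the flavor, and all names and constants are made up:

```python
import time

class DelayCoDel:
    """Track how long requests sit queued; once queueing delay has stayed
    above `target` for a full `interval`, pace admissions by sleeping
    instead of dropping (the original CoDel action), since the protocol
    can't retry on a drop."""
    def __init__(self, target=0.005, interval=0.100, penalty=0.010):
        self.target, self.interval, self.penalty = target, interval, penalty
        self.first_above = None  # when sojourn time first exceeded target

    def on_dequeue(self, enqueue_ts):
        sojourn = time.monotonic() - enqueue_ts
        if sojourn < self.target:
            self.first_above = None  # queue has drained; stop penalizing
            return
        if self.first_above is None:
            self.first_above = time.monotonic()
        elif time.monotonic() - self.first_above >= self.interval:
            time.sleep(self.penalty)  # push back by delaying, not dropping
```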
End of conversation
New conversation
Just a thought: wouldn't adaptive throttling help here? Think of the service reducing the throttling limits for ALL clients during an event/recovery and gradually increasing them back to the original limits once recovery is done.
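As a sketch of that suggestion (the class name and numbers are made up): drop everyone's limit to a fraction of normal when the event starts, then ramp it back up a step at a time once things look healthy. A periodic health check would call enter_recovery() when overload is detected and tick() while recovering.

```python
class AdaptiveLimit:
    """Clamp the global limit during a recovery event, then ramp it back
    toward the normal limit one step at a time."""
    def __init__(self, normal_limit, recovery_fraction=0.2, ramp_step=0.1):
        self.normal = normal_limit
        self.fraction = 1.0  # share of the normal limit currently allowed
        self.recovery_fraction = recovery_fraction
        self.ramp_step = ramp_step

    def enter_recovery(self):
        self.fraction = self.recovery_fraction

    def tick(self):
        # Called periodically (e.g. once a second) while the system is healthy.
        self.fraction = min(1.0, self.fraction + self.ramp_step)

    @property
    def current_limit(self):
        return int(self.normal * self.fraction)
```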
Surely a per-client rate limit fixes this? That’s what we do to guarantee fairness of requests across all our internal platform consumers.
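One common way to implement a per-client limit is a token bucket keyed by client id; a small sketch, with PerClientLimiter and its rates purely illustrative:

```python
import time
from collections import defaultdict

class PerClientLimiter:
    """Token bucket per client id: each consumer gets its own rate,
    so one noisy client can't starve the rest."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        # Each entry is (available tokens, timestamp of last update).
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, client_id):
        tokens, last = self.buckets[client_id]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[client_id] = (tokens, now)
            return False
        self.buckets[client_id] = (tokens - 1, now)
        return True
```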