Consuming rate-limited APIs at scale is like interacting with a database that has pretty painful latency. I keep "inventing" what are surely preexisting design patterns / solutions to this problem. Can anyone point me to a good guide so that I can stop wasting my time?
3. Returning responses: here’s the latency thing. If a request will take many minutes, if not hours or days, it’s either a callback or a push to a message broker or something. But there’s a resource-usage issue for queued requests and responses!
My most recent pass essentially used a map[uuid]*Response per request, with a timeout that deletes stale entries and polling for completion. But it feels fragile AF and I’m not going to use it. So now that I have intuition I’d like to skip to other people’s good ideas.