I guess you could divide it up into a bunch of separate files rather than relying on range requests. However, I'm not sure how well it will handle caching content that changes so rapidly. If it's not cached, I think it will just pass the request through to the origin; I don't think it will merge requests.
You need the cache to be warmed up by usage. There isn't a way to push something to their CDN. It acts as a caching reverse proxy using their custom nginx fork rather than a traditional CDN that you push to in advance, which I don't think would fit with real-time usage.
I guess you could log and analyze how they request data from your server to figure out a lot of it.
They also add a CF-Cache-Status header showing whether a cache hit occurred.
For example, try this, which should be a HIT:
curl -v "www2.coinbase.com/assets/16x16.p" |& grep -i cf-cache
Then, the following, which should be a MISS:
curl -v "www2.coinbase.com/assets/16x16.p" |& grep -i cf-cache
Then repeat that request a dozen times; it should become a HIT after 1-3 tries.
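That repeat-until-HIT probe can be sketched offline. `fetch_headers` below is a hypothetical stub standing in for `curl -sD - -o /dev/null "$URL"`; it's wired to MISS on the first couple of requests and HIT afterwards, mirroring the warm-up behavior described above:

```shell
# Stub fetcher: real usage would be  curl -sD - -o /dev/null "$URL"
# Hard-coded to miss twice and then hit, to illustrate cache warm-up.
fetch_headers() {
  if [ "$1" -le 2 ]; then
    printf 'cf-cache-status: MISS\n'
  else
    printf 'cf-cache-status: HIT\n'
  fi
}

# Probe loop: repeat the request and watch CF-Cache-Status flip.
for n in 1 2 3 4 5; do
  status=$(fetch_headers "$n" | awk -F': ' 'tolower($1)=="cf-cache-status"{print $2}')
  echo "request $n: $status"
done
# prints:
# request 1: MISS
# request 2: MISS
# request 3: HIT
# request 4: HIT
# request 5: HIT
```

Against a live URL, just swap the stub for the real curl pipeline and keep the same loop.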
On that note, I really hate Twitter's link handling... it shouldn't include the ... in copy-pasted text. They can and should do that kind of truncation with CSS so links copy exactly as they were entered...
So, from client-side testing: if you do a range request for an uncached file, the edge ends up pulling the whole file from the origin and caching it. That isn't quite the old behavior. They still only cache files as a whole, but a range request can now trigger pulling the whole file from the origin to cache it.
Is the range request by any chance satisfiable before the whole file finishes transferring?
I was testing with that tiny image. It would be more interesting to see what it does on a much larger file. It definitely handles -H "Range: bytes=50-100" differently than it used to, though.
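A rough way to eyeball this from the client side is to dump the response headers of a range request and read the status line, CF-Cache-Status, and Content-Range together. The `summarize` helper and the canned 206 header dumps below are illustrative stand-ins; in real use you'd feed it `curl -s -o /dev/null -D - -H "Range: bytes=50-100" "$URL"` against an asset you control:

```shell
# Keep only the interesting header lines; strip CR from the curl dump.
summarize() {
  grep -iE '^HTTP|^cf-cache-status|^content-range' | tr -d '\r'
}

# Canned dump 1 (hypothetical): cold cache. The 206 range response is
# served while the edge pulls and caches the whole file from the origin.
printf 'HTTP/2 206\ncf-cache-status: MISS\ncontent-range: bytes 50-100/1048576\n' | summarize
echo '---'
# Canned dump 2 (hypothetical): warm cache. The same range is now
# answered out of the cached whole file.
printf 'HTTP/2 206\ncf-cache-status: HIT\ncontent-range: bytes 50-100/1048576\n' | summarize
```

On a large enough file, timing the cold-cache request would also hint at whether the range is satisfied before the whole-file origin pull finishes.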

