The more time I spend writing serverless functions, the more I understand why container images are overkill.
Replying to @kelseyhightower
Last time I tried serverless (AWS Lambda with Zappa) the cold-start times were prohibitive :(
Replying to @Jasonprogrammer @kelseyhightower
What was the cutoff point for you? What's better than that?
Replying to @alexellisuk @kelseyhightower
In my tests I was measuring seconds (not just milliseconds) of cold-start delay in serving requests after a deploy. If the delay had been a couple hundred milliseconds at most, that would have been workable.
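A measurement like the one described above can be sketched as timing the first invocation after a deploy against later ones. This is a minimal, hypothetical sketch — the `invoke` callable stands in for whatever HTTP request or Lambda invocation was actually being measured:

```python
import time

def measure_latency(invoke) -> float:
    """Time a single invocation in seconds."""
    start = time.perf_counter()
    invoke()
    return time.perf_counter() - start

def cold_vs_warm(invoke, warm_samples: int = 5):
    """Treat the first call after a deploy as the cold start;
    take the best of several subsequent calls as the warm baseline."""
    cold = measure_latency(invoke)
    warm = min(measure_latency(invoke) for _ in range(warm_samples))
    return cold, warm
```

A seconds-vs-milliseconds gap between the two numbers is the cold-start penalty being discussed.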
Replying to @Jasonprogrammer @kelseyhightower
After a deployment, or after a "sleep" (i.e. an inactive period)? It should be doing a rolling update for you.
Replying to @alexellisuk @kelseyhightower
Right after a deploy only. Zappa has ways to keep lambda functions "warm", and we were using that.
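Zappa's keep-warm mechanism schedules a periodic no-op invocation so the function stays resident. A minimal sketch of the relevant `zappa_settings.json` keys — the stage name and `app_function` are placeholders, and `rate(4 minutes)` is Zappa's documented default schedule:

```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "keep_warm": true,
        "keep_warm_expression": "rate(4 minutes)"
    }
}
```

As the thread notes, this keeps an already-deployed function warm, but it doesn't help the very first invocations of a freshly deployed version.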
Replying to @Jasonprogrammer
This is typical for the very first invocations after a deploy, but with ongoing traffic it shouldn't be a problem. Did you test further after deploys at all?
Replying to @chrismunns
No, I didn't test further. We needed a guarantee of near-real-time responses, so even a few slow invocations were a non-starter for us.
Replying to @Jasonprogrammer @chrismunns
Wouldn't the ability to do canary deployments and only send a small amount of traffic to the new version to pre-warm it help? https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
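The linked post describes Lambda alias traffic shifting, where a weighted fraction of invocations is routed to the new version before full cutover. The routing behaviour can be sketched locally — the version labels and the 5% weight here are hypothetical, and the CLI line in the comment uses the real `--routing-config` / `AdditionalVersionWeights` option from the Lambda alias API:

```python
import random

def route_invocation(stable_version: str, canary_version: str,
                     canary_weight: float) -> str:
    # Alias traffic shifting sends a small fraction of invocations to the
    # new version, warming its execution environments before full rollout.
    return canary_version if random.random() < canary_weight else stable_version

# Roughly analogous to:
#   aws lambda update-alias --function-name my-func --name live \
#     --routing-config '{"AdditionalVersionWeights": {"2": 0.05}}'
```

With a 5% weight, about 1 in 20 invocations hits (and warms) the new version while the stable one keeps serving the rest.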
Thanks for sending that, I'll take a look