I have little experience with event-driven workloads, and I can see serverless stacks being well suited for them (data pipelines, etc.). My perspective comes from building user-facing apps running at human scale: an e-commerce site, a productivity web app, a mobile app, etc.
Replying to @dvassallo @log4code
I’m not talking about event-driven; that wouldn’t even be a fair comparison. Even in standard RESTful request-response architectures, managed service designs will drag your servers into a dark alley and beat them senseless.
That's probably true. But at what cost? (And I don't mean $ cost.) I believe in a Lambda/Fargate future. I really do. But today, in 2019, there are too many restrictions for the benefits to outweigh the cons. (And IMO, by a big margin.) At least for the kind of things I work on.
Replying to @dvassallo @log4code
I think we just disagree on what’s a “restriction” and what’s an “abstraction” I’m willing to pay for. I don’t care about the underlying filesystem, so I don’t see much value in having unfettered access to it.
The abstraction is the restriction. Quick example of something I'm working on right now: I'm using a stateful websocket session for a real-time app. With Lambda/APIG I would have to fetch & store the session state (what each client has seen, etc.) in DDB on every msg! ...
Replying to @dvassallo @rchrdbyd
... Not only does that make things ~50x slower (from ~1ms per msg to ~50ms) and more costly (1 DDB read/write per msg), but it also introduces a new failure mode that I would have to handle on every message (DDB failures).
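To make that trade-off concrete, here is a minimal sketch (not the author's actual code) of what a per-message Lambda handler behind an API Gateway WebSocket route would have to do. The `ws_sessions` table, its key, and the `process_message` logic are invented for illustration; the point is the mandatory DynamoDB round trip on every single message.

```python
# Hypothetical sketch: per-message handler for an API Gateway WebSocket
# route. Lambda is stateless, so the session state (the client's cursor,
# etc.) must round-trip through DynamoDB on every message.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ws_sessions")  # assumed table, keyed on connection_id

def handler(event, context):
    connection_id = event["requestContext"]["connectionId"]

    # 1. Fetch session state from DynamoDB: an extra network round trip,
    #    a read unit, and a failure mode that in-process state never had.
    item = table.get_item(Key={"connection_id": connection_id}).get("Item", {})
    cursor = item.get("cursor", 0)

    # 2. Do the actual work the message asked for.
    cursor = process_message(json.loads(event["body"]), cursor)

    # 3. Write the updated state back: another round trip and a write unit.
    table.put_item(Item={"connection_id": connection_id, "cursor": cursor})
    return {"statusCode": 200}

def process_message(msg, cursor):
    # Placeholder for the app logic; advances this client's cursor.
    return cursor + msg.get("consumed", 1)
```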
Replying to @dvassallo @log4code
And you’re using some kind of distributed in-memory cache to persist the state across your whole ASG?
Replying to @rchrdbyd @dvassallo
Or you’re leaving the socket connected for the whole UX and just persisting the state on that host until the session ends, then sending some long-term state to a persistent data store?
The websocket state is basically a cursor on a data stream pushed by the server. Each client would have a different position because each consumes at a different rate. There's no need to persist this state. If the connection drops, the client restarts it based on its own state. ...
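A sketch of the client side of that design, assuming a made-up `wss://example.com/stream` endpoint and message shape: the client holds its own cursor and simply resumes from it after a reconnect, so the server never needs to persist anything per session.

```python
# Hypothetical sketch of the client: the cursor is the only state that
# matters, and the client owns it. On disconnect, reconnect and resume.
import asyncio
import json
import websockets

async def consume(uri="wss://example.com/stream"):  # assumed endpoint
    cursor = 0
    while True:
        try:
            async with websockets.connect(uri) as ws:
                # Tell the server where to resume the stream from.
                await ws.send(json.dumps({"resume_from": cursor}))
                async for message in ws:
                    event = json.loads(message)
                    handle(event)
                    cursor = event["position"]  # advance the local cursor
        except websockets.ConnectionClosed:
            await asyncio.sleep(1)  # back off, then reconnect and resume

def handle(event):
    print(event)  # placeholder for the app logic

asyncio.run(consume())
```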
Replying to @dvassallo @rchrdbyd
Simply broadcasting data via WS on APIG/Lambda seems nearly impossible (or maybe possible but very convoluted). A very common use case. Oh, and how about this for planetary scale. A 500 conn *HARD* limit per account:
pic.twitter.com/gYc5fBniIS
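For a sense of the convolution: broadcasting on APIG/Lambda roughly means maintaining your own registry of connection IDs (API Gateway doesn't enumerate them for you) and calling the Management API once per client. A rough sketch, with the endpoint URL and `ws_connections` table invented for illustration:

```python
# Hypothetical sketch of "broadcast to all clients" on APIG/Lambda.
import boto3

dynamodb = boto3.resource("dynamodb")
connections = dynamodb.Table("ws_connections")  # assumed registry table
apig = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",  # made up
)

def broadcast(payload: bytes):
    # Page through the registry, then push to each connection one API
    # call at a time. Stale connections raise GoneException and have to
    # be cleaned up by hand.
    kwargs = {"ProjectionExpression": "connection_id"}
    while True:
        page = connections.scan(**kwargs)
        for item in page["Items"]:
            try:
                apig.post_to_connection(
                    ConnectionId=item["connection_id"], Data=payload
                )
            except apig.exceptions.GoneException:
                connections.delete_item(
                    Key={"connection_id": item["connection_id"]}
                )
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```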
That's 500 new connections per second, not concurrent connections per account.
OK, so about the capacity of 1x c5.4xlarge.
But that limit is probably OK, and not really my point. The problem is that WS makes server-side push easy to implement and reason about, but doing it on Lambda/APIG is significantly harder. I'm sure it will get better, but it's not there right now. ...
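For contrast, a minimal sketch of the same server-side push on a plain server that holds its sockets in memory (the `websockets` library and port here are an illustrative choice, not the author's stack): broadcast is one loop over live connections, with no registry table and no per-message API calls.

```python
# Hypothetical sketch: in-memory server push on a single long-lived host.
import asyncio
import websockets

CLIENTS = set()

async def on_connect(ws):
    # Each live socket is just an object in process memory.
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.remove(ws)

async def broadcast(message: str):
    # Push is a plain loop over the sockets this host already holds.
    for ws in list(CLIENTS):
        try:
            await ws.send(message)
        except websockets.ConnectionClosed:
            pass  # the handler removes it when the socket closes

async def main():
    async with websockets.serve(on_connect, "0.0.0.0", 8765):
        while True:  # demo: push a tick to every client each second
            await broadcast("tick")
            await asyncio.sleep(1)

asyncio.run(main())
```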
Replying to @dvassallo @colmmacc
And I work in the present, not in the future. It's just trade-offs. I don't disagree with the benefits of APIG, but I'm weighing the disadvantages as well. And I'm still using EC2 & ELBs. I'm still a customer on the platform.