Kubernetes Borg/Omega history topic 6: Watch. This is a deep topic. It's a follow-up to the controller topic. I realized that I forgot to link to the doc about Kubernetes controllers: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/controllers.md …
We also wanted Kubernetes to run with a small number of dependencies, and with bounded compute and storage capacity: if we assumed a managed message bus that could store a week of events and an elastic compute platform to process them in parallel, the design would be different
Watch works well for our typical scenario of mostly active entities with high rates of change per entity, and not a vast number of inactive entities (as opposed to, say, sales catalog entries), since it assumes access to all the relevant state. At some point, we'll need to shard
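The list+watch pattern described above can be sketched with a toy in-memory store (all names here are hypothetical; the real API server persists state in etcd and serves watches over HTTP): the store stamps every write with a monotonically increasing resourceVersion, a client lists the full state once, and then watches for only the events newer than its snapshot.

```python
# Minimal sketch of a Kubernetes-style list+watch store (hypothetical,
# in-memory; the real API server backs this with etcd).

class Store:
    def __init__(self):
        self.rv = 0        # global resourceVersion counter
        self.objects = {}  # name -> (resourceVersion, value)
        self.events = []   # append-only log of (rv, name, value)

    def write(self, name, value):
        self.rv += 1
        self.objects[name] = (self.rv, value)
        self.events.append((self.rv, name, value))

    def list(self):
        # Snapshot of current state plus the version to watch from.
        return dict(self.objects), self.rv

    def watch(self, since_rv):
        # Deliver only events newer than the client's snapshot.
        return [e for e in self.events if e[0] > since_rv]

store = Store()
store.write("pod-a", "Pending")
snapshot, rv = store.list()      # client syncs full state once...
store.write("pod-a", "Running")  # ...then changes arrive as events
changes = store.watch(rv)
print(changes)                   # [(2, 'pod-a', 'Running')]
```

Note how this assumes access to all the relevant state, as the tweet says: the store holds every live entity, which is why sharding eventually becomes necessary.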
There is an interesting symmetry here. With message buses or pubsub, the topics are dumb and the messages are rich. With watch it is the other way around: topics/entries are rich and notifications/messages are dumb.
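A toy contrast makes the symmetry concrete (all names here are hypothetical illustrations, not any real API): in pubsub the full payload travels inside the message and the topic is just a label, while in watch only the changed key travels and the client reads the rich entry back out of the store.

```python
from collections import defaultdict

# Pubsub: dumb topic, rich messages.
subscribers = defaultdict(list)

def publish(topic, payload):
    for deliver in subscribers[topic]:
        deliver(payload)          # the payload travels in the message

# Watch: rich entries, dumb notifications.
store = {}
watchers = []

def put(key, value):
    store[key] = value            # the state lives in the store
    for notify in watchers:
        notify(key)               # only the key travels

received = []
subscribers["orders"].append(lambda payload: received.append(payload))
publish("orders", {"id": 7, "status": "shipped"})

observed = []
watchers.append(lambda key: observed.append((key, store[key])))  # re-read entry
put("orders/7", {"status": "shipped"})

print(received)  # [{'id': 7, 'status': 'shipped'}]
print(observed)  # [('orders/7', {'status': 'shipped'})]
```

One consequence of the watch side: because the notification is dumb, a late or coalesced notification still leads the client to the current state, rather than to a stale payload.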
What about the issue that message-bus-based systems typically hit scaling limits, with the bus itself becoming the bottleneck? Were you aware of the problems associated with that approach, and did you factor them into your decision process as well?
All distribution methods have scaling challenges. A lot depends on the required semantics of the data and of update delivery: atomicity, serialization, consistency, freshness, etc. For instance, if you modify 7 entities of different types, will clients observe the updates in the same order?
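The ordering question above can be sketched (a hypothetical setup, not how any particular system works): if updates to different entity types are delivered over independent per-type streams, each stream is ordered internally, but nothing forces two clients to interleave the streams the same way, so they can observe different total orders.

```python
# Per-type streams: each is internally ordered by resourceVersion...
pods = ["pod@rv1", "pod@rv3"]
services = ["svc@rv2"]
# ...but there is no ordering guarantee across streams.

def interleavings(a, b):
    # All total orders consistent with both per-stream orders.
    if not a:
        return [b]
    if not b:
        return [a]
    return [[a[0]] + rest for rest in interleavings(a[1:], b)] + \
           [[b[0]] + rest for rest in interleavings(a, b[1:])]

views = interleavings(pods, services)
print(len(views))  # 3 distinct client-observed orders
for v in views:
    print(v)
```

Even with only three updates across two types there are three possible client-observed orders; with 7 entities of different types the divergence grows quickly, which is why the required consistency semantics drive the design.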