Kubernetes Borg/Omega history topic 6: Watch. This is a deep topic. It's a follow-up to the controller topic. I realized that I forgot to link to the doc about Kubernetes controllers: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/controllers.md
Borgmaster had 2 models: built-in logic used synchronous edge-triggered state machines, while external components were asynchronous and level-based. More on level vs. edge triggering: https://hackernoon.com/level-triggering-and-reconciliation-in-kubernetes-1f17fe30333d
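The difference can be sketched in a few lines (my illustration, not Borgmaster code): an edge-triggered handler reacts to each delivered state transition, so a dropped notification leaves a permanent error, while a level-based reconciler compares observed state to desired state and converges regardless of which events arrived.

```python
# Illustrative sketch of edge- vs. level-triggered handling.
# Scenario: desired replica count is 3, two pods died, but only one
# death notification was actually delivered.

desired = 3
actual = 1            # observed live replicas after two deaths
delivered_deaths = 1  # the second death event was dropped

# Edge-triggered: create one replacement per delivered event.
# The lost event means we end up one replica short.
edge_result = actual + delivered_deaths

# Level-based: ignore the event stream; observe actual state and
# converge on desired state. Missed events are corrected automatically.
def reconcile(actual, desired):
    return actual + max(0, desired - actual)

level_result = reconcile(actual, desired)

print(edge_result)   # 2: still short one replica
print(level_result)  # 3: converged despite the lost event
```

This is why "missed" events are survivable in a level-based design: the next reconciliation pass repairs the discrepancy.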
One of the first things I did when joining the Borgmaster team back in 2009 was to parallelize the handling of read requests. Something like 99% of requests were reads, primarily from polling external controllers and monitoring systems.
Only BNS (analogous to K8s Endpoints) was written to Chubby, which enabled replicated caching and update notification. That enabled it to scale to much larger numbers of readers (~every container in Borg) and reduced latency, which for polling could be tens of seconds.
Watch-like notification APIs (aka sync and tail) were common for storage systems such as Chubby, Colossus, and Bigtable. In 2013, a generalized Watch API was designed so that each system wouldn't need to reinvent the wheel. A variant "Observe" added per-entity sequencing
We built Kubernetes upon Etcd due to its similarities to Chubby and to the Omega store. When we exposed Etcd's watch (https://coreos.com/etcd/docs/latest/learning/api.html) through the K8s API, we let more Etcd details bleed through than originally intended. We need to clean up some of those details soon.
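One of those bled-through details is the resourceVersion. The resulting list+watch pattern can be simulated with a toy in-memory store (a sketch of the semantics, not the real etcd or client-go API): a client lists to get a consistent snapshot plus the version it reflects, then watches from that version; if that version has been compacted away, the watch fails and the client must relist.

```python
# Toy store simulating list+watch semantics with a resourceVersion.
# Illustrative only; names and behavior are simplified assumptions.

class Store:
    def __init__(self):
        self.rv = 0               # monotonically increasing version
        self.objects = {}
        self.events = []          # (rv, key, value) history
        self.compacted_below = 0  # events older than this are gone

    def write(self, key, value):
        self.rv += 1
        self.objects[key] = value
        self.events.append((self.rv, key, value))

    def list(self):
        # Consistent snapshot plus the resourceVersion it reflects.
        return dict(self.objects), self.rv

    def watch(self, since_rv):
        if since_rv < self.compacted_below:
            # Analogous to the "410 Gone: too old resourceVersion"
            # error real clients must handle by relisting.
            raise LookupError("resourceVersion too old; relist")
        return [e for e in self.events if e[0] > since_rv]

store = Store()
store.write("pod-a", "Running")
snapshot, rv = store.list()   # start from full state, not from events
store.write("pod-b", "Pending")
missed = store.watch(rv)      # only events newer than the snapshot
```

Here `snapshot` holds only pod-a, and the watch from `rv` delivers exactly the pod-b event, so the client's view stays complete without replaying all history.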
The Kubernetes model is described here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/resource-management.md#declarative-control
Some other systems use message buses for notifications. Why didn't we? Controllers need to start from the initial state, we also don't want them to fall behind or operate on state that's too stale, and they need to be able to handle "missed" events -- the level-based rationale
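The "start from the initial state" requirement is the crux, and a small sketch (my illustration, with made-up data) shows why events alone aren't enough: a controller that joins late and only sees recent bus messages never learns about objects created before its retention window, whereas listing current state first gives a complete starting point.

```python
# Sketch: why a retention-limited event bus fails a late-joining controller.
# All names and data here are hypothetical.

bus_history = [("add", "pod-a"), ("add", "pod-b")]  # published before startup
bus_retained = bus_history[-1:]                     # bus only kept the latest event

# Events-only view: pod-a is invisible forever.
events_only_view = {key for _, key in bus_retained}

# Level-based alternative: LIST the store for current state first,
# then apply subsequent events on top of that snapshot.
current_state = {"pod-a", "pod-b"}   # from a LIST against the store
level_view = set(current_state)

print(sorted(events_only_view))  # ['pod-b']
print(sorted(level_view))        # ['pod-a', 'pod-b']
```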
We also wanted Kubernetes to run with a small number of dependencies, and with bounded compute and storage capacity: if we assumed a managed message bus that could store a week of events and an elastic compute platform to process them in parallel, the design would be different
Watch works well for our typical scenario of mostly active entities with high rates of change per entity, and not a vast number of inactive entities (as opposed to, say, sales catalog entries), since it assumes access to all the relevant state. At some point, we'll need to shard.