Hundreds to thousands of clients interfaced with this API. Many were asynchronous controllers or monitoring agents, as discussed in previous threads; there was also a simple command-line tool and two widely used configuration CLIs.
The APIs were manually mapped into the two Turing-complete configuration languages, and there was also a hand-crafted diff library for comparing the previous and new desired states. The sets of concepts, RPC operations, and configurable resource types were not easily extended.
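To make the desired-state comparison concrete, here is a minimal Python sketch of what such a diff library does at its core. The function name and the recursive dict walk are assumptions for illustration, not the actual Borg implementation:

```python
def diff_state(old, new, path=""):
    """Recursively diff two desired-state dicts, returning a list of
    (path, old_value, new_value) tuples; None marks an absent field."""
    changes = []
    for key in sorted(set(old) | set(new)):
        sub = f"{path}.{key}" if path else key
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            changes.extend(diff_state(a, b, sub))
        elif a != b:
            changes.append((sub, a, b))
    return changes

previous = {"job": {"replicas": 3, "priority": 100}}
desired = {"job": {"replicas": 5, "priority": 100}}
print(diff_state(previous, desired))  # [('job.replicas', 3, 5)]
```

The point of such a diff is that only the changed fields need to be pushed to the system, rather than resubmitting the whole configuration.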
Some extensions of the core functionality, such as for batch scheduling and vertical autoscaling, used the Borgmaster as a configuration store by manually adding substructures stored with Job objects, which were then retrieved by polling Jobs.
Others, such as for load balancing, built independent services with their own service APIs and configuration mechanisms. This enabled teams to evolve their services independently, but created a heterogeneous, inconsistent management surface.
Omega supported an extensible object model, and @davidopp had proposed putting an API in front of the persistent store, as we later did in Kubernetes, but it wasn't declarative. Separate work on a common configuration store was discontinued as Google Cloud became the focus.
GCP was composed of independent services with some common standards, such as the org hierarchy and authz. They used REST APIs, as did the rest of the industry; gRPC didn't exist yet. But GCP's APIs were not natively declarative, and Terraform didn't exist, either.
@jbeda proposed layering an aggregated config store/service with consistent, declarative CRUD REST APIs over the underlying GCP and third-party service APIs. This, more or less, later evolved into Deployment Manager.
We folded learnings from these 5+ systems into the Kubernetes Resource Model, which now supports arbitrarily many built-in types, aggregated APIs, and centralized storage (CRDs), and can be used to configure 1st-party and 3rd-party services, including GCP: http://youtu.be/s_hiFuRDJSE
KRM is consistent and declarative. Metadata and verbs are uniform. Spec and status are distinctly separated. Resource identifiers, modeled closely after Borgmaster's (http://issues.k8s.io/148), provide declarative names. Label selectors enable declarative sets.
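A minimal Python sketch of the idea behind equality-based label selection: a selector declares a set by predicate rather than by enumerating members, so objects join or leave the set simply by changing their labels. The `matches` helper and the sample objects are illustrative assumptions, not Kubernetes code:

```python
def matches(selector, labels):
    """An object matches an equality-based selector when every
    key/value pair in the selector appears in the object's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1", "labels": {"app": "db", "tier": "backend"}},
]
selector = {"app": "web"}
selected = [p["name"] for p in pods if matches(selector, p["labels"])]
print(selected)  # ['web-1']
```

This is why a ReplicaSet or Service never stores a member list: the set is recomputed from labels on every evaluation.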
For the most part, controllers know which fields to propagate from one resource instance to another and wait gracefully on declarative object (rather than field) references, without assuming referential integrity, which enables relaxed operation ordering.
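A sketch of what "waiting gracefully without assuming referential integrity" means in practice, assuming a hypothetical `reconcile` function and an in-memory stand-in for the object store: if a referenced object doesn't exist yet, the controller requeues and retries later rather than failing, so the two objects can be created in either order:

```python
# Stand-in for the API server's object store, keyed by object name.
store = {}

def reconcile(obj):
    """Hypothetical reconcile step: tolerate a dangling object reference
    by requeuing instead of erroring, enabling relaxed creation order."""
    ref = obj["spec"].get("templateRef")
    if ref is not None and ref not in store:
        return "requeue"  # referenced object absent; wait and retry
    # ...propagate fields from store[ref] into the objects this one manages...
    return "done"

pod_set = {"spec": {"templateRef": "web-template"}}
print(reconcile(pod_set))  # 'requeue' -- template not created yet
store["web-template"] = {"spec": {}}
print(reconcile(pod_set))  # 'done'
```

Referencing whole objects by name (rather than individual fields) is what lets the reference resolve lazily, whenever the target eventually appears.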
There are some gaps in the model (e.g., http://issues.k8s.io/34363, http://issues.k8s.io/30698, http://issues.k8s.io/1698, http://issues.k8s.io/22675), but for the most part it facilitates generic operations on arbitrary resource types.
In the next thread, I’ll cover more about configuration itself, such as the origin of kubectl apply.
BTW, when I was digging through old docs/decks, I found a diagram from the Dec 2013 API proposal. Sunit->Pod, SunitPrototype->PodTemplate, Replicate->ReplicaSet, Autoscale->HorizontalPodAutoscaler. pic.twitter.com/oOd84Lzw3B