Others, such as those for load balancing, built independent services with their own service APIs and configuration mechanisms. This enabled teams to evolve their services independently, but created a heterogeneous, inconsistent management surface.
-
Omega supported an extensible object model, and @davidopp had proposed putting an API in front of the persistent store, as we later did in Kubernetes, but it wasn't declarative. Separate work on a common configuration store was discontinued as Google Cloud became the focus.
-
GCP comprised independent services, with some common standards, such as the org hierarchy and authz. They used REST APIs, as did the rest of the industry; gRPC didn't exist yet. But GCP's APIs were not natively declarative, and Terraform didn't exist, either.
-
@jbeda proposed layering an aggregated config store/service with consistent, declarative CRUD REST APIs over underlying GCP and third-party service APIs. This later evolved, more or less, into Deployment Manager.
-
Replying to @bgrant0607 @jbeda
Related to this, since I've always wanted to pick your brains on this: not all REST operations in K8s signal async behavior (e.g. HTTP 202). Delete returns 200 (https://github.com/kubernetes/kubernetes/issues/33196). "k run" returns 200, where 202 would be more precise. Create is 201. Appreciate any feedback.
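A minimal sketch of observing those codes with client-go, assuming a reachable cluster and a kubeconfig at the default location; the pod name `status-demo` and the image are hypothetical. The raw REST client is used here because the typed client hides the HTTP status code:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "status-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}

	ctx := context.Background()
	var createCode, deleteCode int

	// POST /api/v1/namespaces/default/pods
	// Returns 201 Created as soon as the object is persisted, long
	// before any container actually runs.
	clientset.CoreV1().RESTClient().Post().
		Namespace("default").Resource("pods").
		Body(pod).Do(ctx).StatusCode(&createCode)

	// DELETE /api/v1/namespaces/default/pods/status-demo
	// Returns 200 OK even though graceful termination is asynchronous.
	clientset.CoreV1().RESTClient().Delete().
		Namespace("default").Resource("pods").Name("status-demo").
		Do(ctx).StatusCode(&deleteCode)

	fmt.Println("create:", createCode, "delete:", deleteCode)
}
```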
-
Replying to @embano1 @bgrant0607
This is one of the tricky parts of an eventually consistent declarative system. There is no strong idea of "done". When you ask for a pod to be created, the *resource* is created in the data store. But the actions implied by that resource aren't complete.
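A short continuation of the sketch above (reusing `clientset` and the same hypothetical pod spec) makes that gap visible: the create call returns once the object is persisted, and the returned resource typically still reports phase Pending:

```go
// Continuing the sketch above: clientset and pod as before.
created, err := clientset.CoreV1().Pods("default").
	Create(context.Background(), pod, metav1.CreateOptions{})
if err != nil {
	panic(err)
}
// The call "succeeded": the resource now exists in the store. But the
// actions it implies (scheduling, image pull, container start) have not
// happened yet, so this typically prints "Pending".
fmt.Println("phase right after create:", created.Status.Phase)
```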
-
But there is no well-understood idea of what "complete" means for a resource. There *are* no terminal states. We can say that a resource has converged and that the controller is idle, but that could change at any time.
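So the best a client can do is pick its own, provisional definition of done and poll or watch for it. A sketch, continuing from above (plus the standard time package), with that caveat in the comments:

```go
// A caller-chosen notion of "converged": the pod reports Running.
// Note this is not a terminal state; the pod could be evicted or
// restarted a moment after the loop exits.
for {
	p, err := clientset.CoreV1().Pods("default").
		Get(context.Background(), "status-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if p.Status.Phase == corev1.PodRunning {
		fmt.Println("converged (for now)")
		break
	}
	time.Sleep(2 * time.Second) // a real client would use a watch instead
}
```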
-
In addition, the resource may *never* converge. That could be the result of a temporary condition (one that resolves in 15 seconds) or of a permanent condition. Sometimes the controller/system in question can't tell.
-
Example: you could ask the system to schedule a Pod and it doesn't get scheduled. Is that because a machine is rebooting and you'll have capacity in a matter of seconds? Or do you have more machines on order from Dell and need to wait a week?
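The closest thing to a signal here is the pod's `PodScheduled` condition, and it illustrates the ambiguity: it records that scheduling failed and why, but not whether the cause is transient. Continuing the sketch above:

```go
// Continuing the sketch: inspect why the pod has not been scheduled.
p, err := clientset.CoreV1().Pods("default").
	Get(context.Background(), "status-demo", metav1.GetOptions{})
if err != nil {
	panic(err)
}
for _, cond := range p.Status.Conditions {
	if cond.Type == corev1.PodScheduled && cond.Status == corev1.ConditionFalse {
		// Reason is typically "Unschedulable", with a message like
		// "0/3 nodes are available: ...". Nothing here distinguishes a
		// rebooting node from capacity that is a week away.
		fmt.Println(cond.Reason, cond.Message)
	}
}
```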
-
Replying to @jbeda @bgrant0607
Fully agree, eventual consistency does not imply (guarantee) that the system will converge to the desired state. That's why I thought 202s would be semantically more correct than 200/201s?
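For contrast, a toy net/http handler (not Kubernetes code) showing what 202 semantics express: per RFC 7231, 202 Accepted only asserts that the request was accepted for processing, which maps closely onto "persisted but not converged":

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/pods", func(w http.ResponseWriter, r *http.Request) {
		// Persist the desired state here, then answer before acting on it.
		// 202 Accepted: the request has been accepted for processing,
		// but the processing has not been completed (RFC 7231).
		w.WriteHeader(http.StatusAccepted)
		fmt.Fprintln(w, "desired state recorded; convergence not guaranteed")
	})
	http.ListenAndServe(":8080", nil)
}
```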
-
The inconsistency is a bug, but disruptive to fix.