Yeah, Deployments originated in OpenShift and were ported to upstream, from my understanding.
-
Replying to @AlexB138
The story I heard (and this was back in 2016ish, secondhand and in passing, so take however many grains of salt), was:
-
Replying to @caffeinepresent @AlexB138
When the community decided to rework replica management, DeploymentConfig managing ReplicationControllers was proposed by the folks from Red Hat, but the community eventually went with a different proposal of Deployment managing a new resource called a ReplicaSet.
-
Replying to @caffeinepresent @AlexB138
The key issue seems to be kubernetes/kubernetes#1743, which almost five years later reads as
@smarterclayton proposing the adoption of DeploymentConfig as a result of existing discussion and @bgrant0607 taking off from there, paring it down and bending it into a different shape.
-
Replying to @caffeinepresent @AlexB138
The OpenShift doc text makes it sound like Kubernetes Deployment is a lineal descendant of OpenShift DeploymentConfig.
-
Deployment was inspired by DeploymentConfig, but I wanted to make updates continuous and the rollout constraints intent-oriented. Rather than the typical rate limit and max-in-flight, I proposed minReadySeconds and maxUnavailable. It was a big effort to get it to beta in 1.2.
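(For context, those intent-oriented knobs survive in today's Deployment API as minReadySeconds and the RollingUpdate strategy's maxUnavailable/maxSurge. A minimal sketch; the name, labels, image, and counts are placeholders, not from the thread:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # placeholder name
spec:
  replicas: 3
  minReadySeconds: 10            # a new pod must stay Ready this long before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most one replica may be unavailable during a rollout
      maxSurge: 1                # at most one extra replica may exist above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
)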
-
I remember that, I was new to Kubernetes then and was going “aw man, I just learned this and they’re changing it already?” Little did I know...
-
I think the end result of Deployments is the right model for most flows (try to keep forcing a rollout forever). Some of the DC mindset was colored by the more conservative rollout attitudes of large enterprise deployments, where trying forever felt too radical.
-
I remember that pre-DaemonSet, the only way to schedule one replica per node was scheduling tricks (binding a host port was popular at one point, I think) plus creating an RC with a larger-than-node-count number of replicas, but that inflicted performance penalties on the scheduler.
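(A rough illustration of the workaround described above; the resource name, image, port, and replica count are made up. An oversized ReplicationController whose pods bind a host port means the scheduler can place at most one per node, and the surplus replicas simply stay Pending:

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-agent                    # placeholder name
spec:
  replicas: 50                        # deliberately larger than the expected node count
  selector:
    app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example/agent:latest   # placeholder image
        ports:
        - containerPort: 9100
          hostPort: 9100              # binding a host port limits each node to one such pod

A DaemonSet expresses the same one-pod-per-node intent directly, without leaving surplus Pending pods for the scheduler to churn on.)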
-
Yeah, and that’s better today. However we’re still fixing fundamental kube stuff - i.e. it’s hard to have a service load balancer offer zero-disruption rolling updates and preserve source IP (traffic policy = Local doesn’t play nice with rolling deploys).
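(The setting being referred to is externalTrafficPolicy on a Service; a minimal sketch, with the name, selector, and ports as placeholders:

apiVersion: v1
kind: Service
metadata:
  name: web                      # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep traffic on the node it arrives at, preserving the client source IP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

With Local, a node only passes the load balancer's health check while it has a ready local endpoint, so during a rolling update connections to a node whose pod is terminating can be dropped - the disruption mentioned above.)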
-
There's definitely a lot of room for improvement. The surface area expands faster than we can paint it.