I think I proposed the shared network namespace and IP-per-pod along with the per-container filesystem/chroot. It might have been Tim though. Lots of bikeshedding to come up with the name Pod.
Replying to @jbeda @stephenaugustus and
Originally we got a ton of pushback on it as people didn't get it. Had to hack it into Docker with a pause container. It was only after there were common sidecars that it clicked for folks. The Docker issue from 2014 is still open: https://github.com/moby/moby/issues/8781
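The point of the pause-container hack was to give all of a pod's containers one shared network namespace, so they can reach each other over localhost. As a rough illustration (not the Docker mechanism itself), here is a minimal Python sketch where two threads stand in for two containers sharing the same loopback interface:

```python
import socket
import threading

# Sketch: in a pod, all containers share one network namespace, so a
# sidecar reaches the main container over localhost. Two threads here
# stand in for two containers sharing the same loopback interface.

def serve(sock):
    conn, _ = sock.accept()
    conn.sendall(b"hello from main container")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve, args=(srv,))
t.start()

# The "sidecar" connects via localhost -- no cross-host networking needed.
cli = socket.create_connection(("127.0.0.1", port))
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply.decode())
```

In Docker terms, the pause container holds the namespace open and the other containers join it (roughly `--network container:pause`); the sketch above only shows why localhost communication falls out of that.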
Replying to @jbeda @stephenaugustus and
Borg "alloc" -> Omega "scheduling unit" -> Kubernetes "pod". The name itself was from a brainstorm. Docker's logo is a whale. A group of whales is called a ... Also, it was short. The network model was explored in Borg a few years prior, but wasn't feasible at the time.
Anyone know when alloc was born? Was this capability an epiphany or an accretion? If this isn’t lost to history, I would love to do the archeology.
Replying to @littleidea @thockin and
The Borg folks would know better, but here is my guess: alloc was originally there to create a kind of "mini personal Borg" that you could schedule into. It was a bit of a recursive concept. You could create the alloc to set aside space and then schedule into it later.
Replying to @jbeda @littleidea and
But the usage evolved such that folks would put coupled pairs of services in there. The canonical examples were a log saver and a server, or a search server and a search data loader. You could use constraints to ensure that one of each ended up in each alloc.
Replying to @jbeda @littleidea and
But the behavior here was somewhat confusing. Allocs were indexed (similar to k8s StatefulSets) and the things that scheduled into them were indexed too. Because there was this recursive scheduling system, the indexes wouldn't line up.
Replying to @jbeda @littleidea and
Specifically: alloc #3 might have logsaver #10 and server #8. IIRC this could cause problems when doing rolling upgrades. Over time this pairing pattern was the dominant use of alloc, and provisions were made so that all of the indexes lined up cleanly.
Replying to @jbeda @littleidea and
Pods killed that "sub allocation" capability and instead went straight to a clearer concept of having explicitly co-located resources.
Replying to @jbeda @littleidea and
Another thing -- at the time Borg generally didn't use Linux namespaces. As such, the network was shared across the node, not across the alloc/container. Ports were assigned to containers, and containers were kept honest about which ports they were allowed to use.
Yes, dynamic port allocation had pervasive impact, on scheduling, configuration, discovery, monitoring, health checking, load balancing, various proxies, authentication, network isolation, and probably other things.
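To make the pervasive impact concrete: with dynamic ports, a task's port is only known at placement time, so every downstream system has to look it up rather than assume a well-known number. A minimal sketch (all names here are hypothetical, with a plain dict standing in for a discovery service):

```python
import socket

# Hypothetical sketch: the "scheduler" launches replicas on dynamically
# assigned ports and publishes them to a registry. Discovery, health
# checking, and load balancing must consult the registry instead of
# hardcoding a well-known port.

registry = {}  # stand-in for a discovery service: task name -> (host, port)

def launch(task_name):
    s = socket.socket()
    s.bind(("127.0.0.1", 0))   # port 0: the kernel picks any free port
    s.listen(1)
    port = s.getsockname()[1]
    registry[task_name] = ("127.0.0.1", port)  # publish, don't hardcode
    return s

tasks = [launch(f"server/{i}") for i in range(3)]

# Each replica got a distinct, unpredictable port.
ports = [addr[1] for addr in registry.values()]
print(sorted(registry))

for s in tasks:
    s.close()
```

This is the shift the thread describes: the port stops being configuration and becomes runtime state that the scheduler owns and everything else must query.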
Replying to @bgrant0607 @jbeda and
This is the general pattern though, and I'm waiting for it to hit k8s. You start out by keeping a port map (Google did the same), and at some point you hit a wall where you can't maintain that list anymore and need something more dynamic.
Replying to @josebiro @bgrant0607 and
Dynamic ports changed everything because they were a massive shift from the way systems were designed up until that point. To be fair, most systems still don't require that sort of scale. However, maintaining that list is still a PITA and should be offloaded to the scheduler.