Worth remembering: Kubernetes was built around Google's needs, and Google runs with a shared network space on any given VM, assigning an entire /24 to the VM running Docker. Each container gets one of those addresses. [1] This probably won't work for everyone - be sure to read the fine-grained details before drinking the Kool-Aid.
They're also at least two build versions behind Docker.[2]
Yes, it has! Again, I don't think the design decisions were made to lock out other vendors. It's very reasonable to me that the first platform to be supported by Google engineers is Google's platform.
Side note: I should be a better citizen and link to specific commits next time.
Indeed, the networking requirements of Kubernetes are very different from Docker's.
However, it isn't too difficult to set up a unifying Layer 2 overlay network. I've had success with flannel[1]; it's pretty easy to set up and supports VXLAN.
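For what it's worth, the whole thing is only a few commands. A sketch from memory, assuming etcd is already running and reachable locally; the key path and subnet.env variables are flannel's documented defaults, and the CIDR here is just an example:

```shell
# Write flannel's network config into etcd (flannel reads this key by default).
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'

# Start flanneld on each host; it leases a per-host subnet from the range
# above and writes the lease details to /run/flannel/subnet.env.
flanneld &

# Point the Docker daemon at the leased subnet so containers get
# cluster-routable IPs over the VXLAN overlay.
source /run/flannel/subnet.env
docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```

After that, containers on different hosts can reach each other directly, which is the flat-network assumption Kubernetes makes.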
The current Docker is 1.6.2, released about 20 days ago. There was also a 1.6.1, which makes it two versions. Also, v1.6.0 was released almost two months ago. [0]
Users of the Google Cloud run Docker in VMs, since VMs are what the Google Cloud Platform sells.
(as does every public cloud provider [e.g. AWS])
For now, VMs are required to ensure a security barrier between different users' containers on the same physical machine. See some of Dan Walsh's posts on the subject (e.g. https://opensource.com/business/14/9/security-for-docker) for more context.
It's most likely that even the "CM"s from both providers are actually Virtual Machines running on a hypervisor running on bare metal. You just can't tell and don't need to care (for most workloads).
Because you're the one setting them up. Basically, you run an Amazon-provided agent on an EC2 instance and ECS will see that instance as a host.
Amazon also bills you for that EC2 instance like any other instance.
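To make that concrete, the agent itself ships as a container. A rough sketch (the image name and ECS_CLUSTER variable are from Amazon's docs; the instance still needs an IAM role with ECS permissions, and "default" is just the example cluster name):

```shell
# On an EC2 instance with Docker installed and an ECS-capable IAM role,
# launch the agent; it registers the instance with the named cluster.
docker run --name ecs-agent -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e ECS_CLUSTER=default \
  amazon/amazon-ecs-agent:latest
```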
Personally, I have a hard time understanding the benefits of running Docker in a public cloud: you still run a VM and you still pay for that VM. It's just one extra abstraction layer, which increases the complexity of your infrastructure and also reduces performance.
I do understand the benefits of using containers in your own data center, where you run them on bare hosts. There's simplicity and lower cost (because you don't have the VM layer), and you have more resources, which lets you run more containers than you could VMs on that host.
> Personally, I have a hard time understanding the benefits of running Docker in a public cloud: you still run a VM and you still pay for that VM. It's just one extra abstraction layer, which increases the complexity of your infrastructure and also reduces performance.
Simpler deployment, and it basically forces "12-factor", as well as easier development-environment setup. Nothing you can't achieve with other tooling, but it's nice to be able to guarantee that your dev environment is identical to your prod.
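E.g., the exact same image runs in dev and prod, with all config injected through the environment, 12-factor style. A hypothetical minimal Dockerfile (app.py and requirements.txt are placeholder names):

```dockerfile
FROM python:2.7
WORKDIR /app
# Install dependencies first so they're cached across code-only rebuilds.
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
# All config comes from the environment, not from baked-in files.
ENV PORT 8000
CMD ["python", "app.py"]
```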
My problem is that I don't believe you can use Docker without using containers. And if you want to simplify the pipeline, why not just use rpm-maven-plugin[1]? You can easily deploy including dependencies, it is fast, and you can easily upgrade or downgrade. And there's no need to figure out the complexities imposed by involving LXC.
[1] https://github.com/GoogleCloudPlatform/kubernetes/blob/maste...
[2] https://github.com/GoogleCloudPlatform/kubernetes/blob/maste...