Keeping containers together in a scale-out world with Kubernetes
Are you ready for a world of apps run in containers? Following his presentation at TechSummit Amsterdam we spoke with NetApp Solutions Architect Christopher Madden about deploying stateful apps in containers with Kubernetes. App owners build stateful apps which must be underpinned by infrastructure resources like persistent volumes and storage classes. If your stateful app is a single-instance database the basics might be enough, but with a scale-out database architecture, application-specific “glue” is needed to manage this gracefully. There are a few approaches to providing this glue when deploying a scale-out database on Kubernetes with Trident, NetApp’s dynamic storage provisioning tool.
Why are app developers so drawn to using containers today?
Developers are attracted to the ease of testing and deployment that containers give. They offer portability and repeatability and so are a very convenient way to package an application. There was a trend years ago to run everything from a USB stick to avoid the messiness of installing apps on different machines. Containers keep your environment clean so you can have a known state. An image is read-only and helps prevent configuration and version drift.
Containers are hot right now, but in the long term would we have a conference on executables or RPMs? Containers are like that: it’s the bigger ecosystem that is more important and more interesting. Kubernetes is more interesting than the containers themselves. It’s similar to how transport containers changed the shipping world. The containers are just one piece of the puzzle – there needed to be a whole ecosystem of ships, ports and transport networks working together.
Where are the gaps between what developers are doing with containers and what cloud admins need to deploy them in a scalable manner?
Most developers are more concerned with making their app than how it is scaled and deployed in a cloud. The developer doesn’t realize the app has to be written such that it can fail and restart at any moment in a way that’s not disruptive to the user experience. Developers are used to running something on infrastructure that is more stable, like a virtual machine. In a container world, they are starting and stopping all the time and the app has to be written to cope with that. The gap here is the developer has to know this.
Another gap is historically developers have been able to save state locally to files. In containers that is not the case and so some developers might make assumptions that are not valid. To save state they will need to specifically define persistent storage.
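In Kubernetes terms, that explicit definition is a PersistentVolumeClaim that a pod mounts instead of writing to local disk. As a minimal sketch, the claim can be expressed as a plain Python dict matching the Kubernetes API schema; the names, size, and storage class here are illustrative, not from the article:

```python
# A minimal PersistentVolumeClaim manifest, built as a plain Python dict.
# The name "app-data", the 1Gi size, and the "netapp-trident" storage class
# are all illustrative assumptions.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # one node mounts it read-write
        "resources": {"requests": {"storage": "1Gi"}},
        "storageClassName": "netapp-trident",  # assumes such a class exists
    },
}

# A pod then saves state by mounting the claim by name,
# rather than assuming the local filesystem survives a restart.
pod_volume = {
    "name": "data",
    "persistentVolumeClaim": {"claimName": pvc["metadata"]["name"]},
}
```

If the claim is bound to a dynamically provisioned volume, the data outlives any individual container restart.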
What can both developers and admins do with containers to help close this classic DevOps gap?
More testing will help and a mindset shift to make the app tolerant of regular restarts. Also minimise the use of persistent storage unless a container is a backing store like a database.
How is Kubernetes different to other container platforms? What are its strengths and weaknesses for developers and admins?
Kubernetes has a lot of momentum behind it and its maturity and feature set make it attractive. The high speed of new releases and the many industry players contributing to it are also good.
Kubernetes was open sourced by Google in 2014 and was created by the same engineers who worked on Google’s internal Borg container platform. It has years of Google thinking and code behind it, along with 12 years of experience running containers. Experience is hard to beat. Kubernetes is also available as a service on numerous public clouds.
The weak point of Kubernetes today is the user interface. There is a basic GUI, but it could be improved. However, if you’re doing things in a repeatable way it’s best to put configuration in code anyway.
Why is managing scale out apps more challenging with containers? How does Kubernetes help overcome this?
Kubernetes has storage as a first-class resource type, so you can create a storage resource in the cluster and any pod or app can tap into it. That’s a great foundation: for single-instance databases you can make an app like WordPress fault tolerant.
Scale out is harder and more costly. For single-instance setups we are good to go, but for scale out there is a lot of app awareness that needs to happen as to what nodes are in the database. By default, Kubernetes doesn’t give pods stable names, and most scale-out apps require them. A StatefulSet overcomes this and gives each pod that predictability.
Also, requests for storage are unique per app node. This is good for getting independent storage to scale out the app or database.
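A StatefulSet covers both points: stable pod names and an independent storage claim stamped out per replica. As a sketch, with illustrative names, image, and sizes:

```python
# Sketch of a StatefulSet: stable pod identities (db-0, db-1, db-2) plus an
# independent PersistentVolumeClaim per replica via volumeClaimTemplates.
# The "db" name, "example/db:1.0" image, and 10Gi size are illustrative.
replicas = 3
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",  # headless service that owns the pod DNS names
        "replicas": replicas,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {
                "containers": [{
                    "name": "db",
                    "image": "example/db:1.0",  # placeholder image
                    "volumeMounts": [{"name": "data",
                                      "mountPath": "/var/lib/db"}],
                }],
            },
        },
        # One claim per pod: db-0 gets data-db-0, db-1 gets data-db-1, ...
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

# The stable, ordinal pod names Kubernetes will assign:
pod_names = [f"db-{i}" for i in range(replicas)]
print(pod_names)  # ['db-0', 'db-1', 'db-2']
```

Because each pod keeps its name and its claim across restarts, a database node that comes back finds its own data waiting for it.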
The next step is you have to have some way for nodes to find out about one another and a ‘headless service’ can help here. Then there are design choices and techniques to glue them together. All those decisions are app-specific logic and with scale out you need to have that logic somewhere. That’s why it’s more challenging and actually not a container specific challenge either.
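A headless service is an ordinary Service with no cluster IP: instead of load-balancing, it publishes a DNS record per pod so database nodes can discover each other by stable names. A minimal sketch, with illustrative names and port:

```python
# A headless Service (clusterIP set to the string "None") gives each
# StatefulSet pod its own DNS record for peer discovery.
# The "db" name, port, and namespace below are illustrative.
headless_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "db"},
    "spec": {
        "clusterIP": "None",  # "headless": no load-balanced virtual IP
        "selector": {"app": "db"},
        "ports": [{"port": 9042, "name": "native"}],
    },
}

# With this in place, pod db-0 in the default namespace is reachable
# at a stable DNS name of the form:
peer = "db-0.db.default.svc.cluster.local"
```

Each node can then be configured with its peers' DNS names and the app-specific glue takes it from there.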
With the sidecar technique inside the pod you have a second manager container which reaches out to other nodes to find out how they are working. It’s all about building consensus and app-level awareness. For example, if the master went offline it will promote another node to be the master. This logic is database specific. Another example is when it comes to upgrading the database some logic might also be needed and can be provided by the sidecar.
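Structurally, the sidecar technique just means a second container in the same pod spec. A sketch, where the image names and the manager's behaviour are hypothetical:

```python
# Sketch of the sidecar technique: the pod runs the database container plus
# a second "manager" container holding the database-specific glue logic.
# Both image names are placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "db-0", "labels": {"app": "db"}},
    "spec": {
        "containers": [
            {
                "name": "db",
                "image": "example/db:1.0",  # the database itself
            },
            {
                "name": "manager",  # the sidecar
                "image": "example/db-manager:1.0",
                # Database-specific logic lives here: reaching out to peers,
                # building consensus, promoting a new master if the current
                # one goes offline, orchestrating upgrades, and so on.
            },
        ],
    },
}

assert len(pod["spec"]["containers"]) == 2  # app + sidecar share the pod
```

The two containers share the pod's network and lifecycle, which is what lets the sidecar observe and manage the database alongside it.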
With the Operator model a new Kubernetes resource type is created which understands how to manage the scale-out app. The user then creates resources of this type and the Operator handles lifecycle management of the app. The interface is clean and there are a few examples out there, such as etcd and Elasticsearch, that can be used as references.
What more can public clouds do to streamline the develop-to-deploy process with containers?
Kubernetes Operators could be an area where public clouds do more. If they have services outside of Kubernetes, they could make them accessible inside Kubernetes. They can also help by supporting recent versions of Kubernetes so customers can stay current.
Kubernetes is designed to be open, so if you’re a cloud provider and you start doing too many tweaks you could alienate your customers. The hyperscalers can help by making Kubernetes as easy to deploy and streamlined as possible, including things like scaling underlying cloud-specific resources and so on.
Cloud providers could do more with physical resources like compute, storage and network. At NetApp we support Kubernetes so it can dynamically provision our storage. Public clouds can do the same. At NetApp we can also make snapshots, replicate storage synchronously or asynchronously, and provide shared storage between pods using NFS. None of these are currently services offered by the public cloud providers.
Kubernetes will dynamically allocate resources in response to business demand. It is moving from automation to autonomous computing, where you tell it your desired state and it will do it. For example, it can automatically scale your replicas if they are out of CPU resource. Cloud providers can also scale the infrastructure underneath to meet that new demand.
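That desired-state scaling is expressed as a HorizontalPodAutoscaler. A minimal sketch, where the target Deployment name and the thresholds are illustrative:

```python
# Sketch of a HorizontalPodAutoscaler: declare the desired state (CPU target
# plus replica bounds) and Kubernetes adjusts the replica count for you.
# The "web" deployment name and the numbers are illustrative.
hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "web",
        },
        "minReplicas": 2,
        "maxReplicas": 10,
        "targetCPUUtilizationPercentage": 80,  # scale out above 80% CPU
    },
}
```

You state the bounds and the target; the controller does the arithmetic and the scaling on your behalf.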
You could do this in the VM world, but it would be more complex, whereas in Kubernetes it is a native feature of the platform. In the future, it can go even further. For example, imagine setting parameters like desired response times in specific geographical markets, or a ceiling on cloud costs for a batch job, and Kubernetes and the public cloud platform will make it so.
How are containers and Kubernetes evolving to deal with scale out apps? What will future technology look like?
The Operator model seems to work well. Kubernetes has defined resource types (such as volume or node) and you can create a third-party resource (or, in v1.7, a CustomResourceDefinition) where you are responsible for writing the resource. For example, Kubernetes uses etcd for persistent storage of the cluster state. With an Operator and etcd you can say “how many replicas would you like” and the Operator handles the lifecycle of the pods.
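The two halves of that contract can be sketched as a CustomResourceDefinition plus a user-created resource of the new type. The group and kind names here are illustrative (the real etcd Operator defines its own schema):

```python
# Sketch of the Operator pattern, matching the v1beta1 API of that era.
# The "example.com" group and "EtcdCluster" kind are illustrative.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "etcdclusters.example.com"},  # must be plural.group
    "spec": {
        "group": "example.com",
        "version": "v1",
        "scope": "Namespaced",
        "names": {"kind": "EtcdCluster", "plural": "etcdclusters"},
    },
}

# The user's side of the contract: "how many replicas would you like?"
cluster = {
    "apiVersion": "example.com/v1",
    "kind": "EtcdCluster",
    "metadata": {"name": "my-etcd"},
    "spec": {"size": 3},  # the Operator creates and manages 3 etcd pods
}
```

The user declares only the desired size; the Operator watches resources of this type and handles pod creation, failover, and upgrades behind that clean interface.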
NetApp gives the ability to integrate storage with Kubernetes in a dynamic way on-premises, in the cloud with our storage software, or in a hybrid model with both.