Understand Kubernetes cluster upgrade process
The upgrade process is provider-specific.
- generic cookbook (talks about GCE procedures mostly):
- ubuntu specific
- Patch upgrades (e.g. 1.7.0 -> 1.7.1) cause no disruption to the cluster
- Minor version upgrades (e.g. 1.7.1 -> 1.8.0) are more complex:
- separate etcd upgrades to be done first
- master upgrades
- worker upgrades may be done in an “in-place” or a “blue-green” way
- Flannel and easyrsa can be upgraded at any time, independently of k8s cluster upgrades.
- with kubeadm:
- General instructions (action order) same as for ubuntu
- install a k8s cluster with kubeadm at https://labs.play-with-k8s.com/
- deploy some pod
- perform an upgrade from 1.7 to 1.8
- verify that all works
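The kubeadm exercise above can be sketched roughly as follows. This is a hedged outline, not an exact procedure: package versions and names (`kubeadm=1.8.0-00` etc.) are illustrative and depend on the distro and repository in use, and `kubeadm upgrade` itself only exists from kubeadm 1.8 onward.

```shell
# On the master node: upgrade the kubeadm binary first, then plan/apply.
# (Version strings are illustrative; pick the actual target release.)
apt-get update && apt-get install -y kubeadm=1.8.0-00

kubeadm upgrade plan           # shows which versions can be upgraded to
kubeadm upgrade apply v1.8.0   # upgrades the control-plane components

# Afterwards, upgrade the kubelet on every node (drain workers first):
apt-get install -y kubelet=1.8.0-00
systemctl restart kubelet

# Verify: nodes should report the new version and pods should be Running.
kubectl get nodes
kubectl get pods --all-namespaces
```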
Facilitate operating system upgrades
- this one is generic and not too useful except the part on node maintenance https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-container-cluster
- Node rebooting may cause pod rescheduling (depending on the controller manager's `--pod-eviction-timeout`), so for short reboots nothing has to be done
- Better control over upgrade process with e.g.
- “kubectl drain $NODENAME”, to migrate pods from the node
- then “kubectl uncordon $NODENAME” to make the node schedulable again
- task 1. kubectl cordon & uncordon
- task 2. kubectl drain & uncordon
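Tasks 1 and 2 can be sketched as below. `$NODENAME` is a placeholder for a real node name; the `drain` flags shown (`--ignore-daemonsets`, `--delete-local-data`) are the ones commonly needed, but which are required depends on the workloads running on the node.

```shell
NODENAME=node-1   # hypothetical node name; use `kubectl get nodes` to find yours

# Task 1: cordon marks the node unschedulable but does NOT evict running pods.
kubectl cordon "$NODENAME"
kubectl get nodes            # the node now shows SchedulingDisabled
kubectl uncordon "$NODENAME" # make it schedulable again

# Task 2: drain cordons the node AND evicts/deletes its pods,
# so they get rescheduled elsewhere before maintenance starts.
kubectl drain "$NODENAME" --ignore-daemonsets --delete-local-data
# ... perform maintenance (OS upgrade, reboot, etc.) ...
kubectl uncordon "$NODENAME"
```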
Implement backup and restore methodologies
There are differences between etcd v2 and v3; this section focuses on v3.
- About etcd, look at the chapter focused on backup&restore:
- Etcd disaster recovery documentation
- create some k8s objects (pods etc.), back up etcd, reinstall it, restore the backup, and verify that everything works.
- Should there be something on master backup (keys?)? Is anything else missing?
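The etcd v3 backup/restore exercise can be sketched with `etcdctl snapshot save` and `etcdctl snapshot restore`. The endpoint and certificate paths below are assumptions (they match a typical kubeadm layout) and must be adjusted to how the cluster was actually installed.

```shell
# etcdctl must talk the v3 API for snapshot operations.
export ETCDCTL_API=3

# Back up: write a snapshot of the current keyspace to a file.
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backup/etcd-snapshot.db

# Restore: unpack the snapshot into a fresh data directory;
# etcd must then be restarted pointing at that directory.
etcdctl snapshot restore /var/backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored
```

After the restore, update the etcd manifest (or service unit) to use the new data directory, restart etcd, and verify with `kubectl get pods` that the previously created objects are back.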