Many of our services run more than one container in a single Pod to properly serve their endpoint or execute the task at hand. In Kubernetes this is encouraged: you’re taught to think of a Pod as a single unit of work representing your overall service. For example, you might have a PHP-FPM container fronted by an Nginx container; both of these would live in a single Pod.
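As a concrete sketch, such a Pod spec might look like the following (the names, image tags, and ports are illustrative assumptions, not a production example):

```yaml
# Hypothetical two-container Pod: Nginx fronting PHP-FPM.
# Names, images, and ports are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: php-fpm
      image: php:8.3-fpm
      ports:
        - containerPort: 9000
```

In practice you’d wrap this in a Deployment rather than create a bare Pod; the bare Pod just keeps the sketch short.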
Note that this article focuses only on Pods with more than one container.
At present, a Pod in which one of these containers is failing will still report a phase of Running and never move to Failed, which means a controller object like a Deployment won’t create a replacement Pod. In other words, the Pod doesn’t “eventually die and get recreated”. Instead, the kubelet keeps restarting the failing container within the Pod, trying to bring it back to life. To the outside world, the Pod is still alive and counts toward the overall Deployment.
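You can see this in the Pod’s status. The fragment below is a hand-trimmed illustration of what `kubectl get pod web -o yaml` might report (the container names and restart counts are invented): the phase stays Running even though one container is stuck in a crash loop.

```yaml
# Illustrative, hand-trimmed Pod status; names and counts are invented.
status:
  phase: Running          # the Pod as a whole still reports Running
  containerStatuses:
    - name: nginx
      ready: true
      restartCount: 0
    - name: php-fpm
      ready: false        # this container keeps failing...
      restartCount: 12    # ...and kubelet keeps restarting it
      state:
        waiting:
          reason: CrashLoopBackOff
```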
This is important for two reasons:
- You now have a service instance that can’t actually serve its purpose.
- A new Pod is a fresh start: init containers run again, it may land on another host, and so on.
Readiness probes (which run during the entire container lifecycle, not just at startup) tell Kubernetes when it’s safe to send traffic to the Pod, so they do work as expected for problematic containers: while a container is unready, the Pod is removed from Service endpoints.
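A minimal readiness probe sketch for the PHP-FPM side could look like this (the port and timings are assumptions; your own health check will differ):

```yaml
# Illustrative readiness probe; port and timings are assumptions.
containers:
  - name: php-fpm
    image: php:8.3-fpm
    readinessProbe:
      tcpSocket:
        port: 9000
      initialDelaySeconds: 5
      periodSeconds: 10   # probed for the whole container lifetime, not just startup
```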
While a Pod with only one container can easily die outright with `restartPolicy: Never`, this isn’t an option for Pods with many containers, since the policy applies to every container in the Pod. There is an open KEP (Kubernetes Enhancement Proposal) to enable this through a critical container concept.
While this isn’t the end of the world, this method of handling failure is something to be aware of. In some cases your overall application could misbehave, or other residual side effects of running half-healthy instances might surface.