Most people don’t seem to fully understand Kubernetes probes beyond “they make sure my service is running”. Through my DevOps journeys, I’ve discovered probes can be incredibly powerful when leveraged effectively for your particular service. Here are some of the things I’ve learned debugging and applying optimized probes to our deployments.

General Probe Guidance

Probes can be enabled for each and every container in a pod. It’s important to note that probes don’t apply to a Pod, only to the containers within it. (See my article deep-diving Pod healthiness here)

Because of this, it’s also important to remember that probes don’t require any explicit port, service, or ingress declaration. You don’t even need to provide a containerPort entry in your Podspec. If a service in the container is listening on a port, that’s all you need to write a TCP or HTTP probe verifying that it’s operating properly.
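As a minimal sketch of what that looks like (the image name, port, and paths here are assumptions for illustration, not from any particular deployment), notice that no containerPort is declared and no Service or Ingress references this Pod, yet both probes still work because the kubelet probes the container directly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                    # hypothetical Pod name
spec:
  containers:
    - name: app
      image: example.com/my-app:1.0   # hypothetical image
      # No containerPort declared -- probes don't need one.
      livenessProbe:
        tcpSocket:
          port: 8080                  # whatever port the process listens on
      readinessProbe:
        httpGet:
          path: /healthz              # hypothetical health endpoint
          port: 8080
```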

Additionally, that service is reachable at the Pod’s IP address (which you can find with kubectl describe pod/blah). See the container object docs for more info.

Finally, both readiness and liveness probes run throughout the entire lifecycle of a particular container. Configured similarly, they operate in much the same way; the difference is in what Kubernetes and the kubelet do with the container when a probe fails.

Liveness Probes

Liveness probes provide the functionality most people think of when they hear “health checks”: running the given check against the service and, when it fails, restarting the container.

Without liveness probes, Kubernetes has no idea whether your application is actually meeting its goals as a service unless the process exits completely (either successfully or otherwise).
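As a sketch of the configuration, a liveness probe might hit a lightweight endpoint and let the kubelet restart the container after a few consecutive failures. The endpoint, port, and timings below are illustrative, not recommendations:

```yaml
livenessProbe:
  httpGet:
    path: /livez              # hypothetical lightweight "am I alive?" endpoint
    port: 8080
  initialDelaySeconds: 10     # give the process time to start before probing
  periodSeconds: 10           # probe every 10 seconds
  failureThreshold: 3         # after 3 consecutive failures, restart the container
```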

Readiness Probes

Readiness probes work slightly differently. Rather than affecting the container itself, they report upstream whether traffic from Services should continue flowing to that particular Pod.
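Declaring one looks much like a liveness probe; only the consequence of failure differs (the Pod is marked NotReady and taken out of rotation rather than restarted). A sketch, with an assumed /ready endpoint and illustrative timings:

```yaml
readinessProbe:
  httpGet:
    path: /ready              # hypothetical endpoint that also checks dependencies
    port: 8080
  periodSeconds: 5            # check frequently so traffic is cut off quickly
  failureThreshold: 2         # two consecutive failures -> Pod marked NotReady
```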

By Service, I’m talking about a Kubernetes Service which provides an abstraction for efficiently directing traffic to your running Pods (instances of your app).
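To make that concrete, here is a minimal, hypothetical Service sketch: it selects Pods by label, and traffic only flows to the Pods whose readiness probes are currently passing. The names, labels, and ports are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical Service name
spec:
  selector:
    app: my-app               # matches the labels on your Pods
  ports:
    - port: 80                # port clients hit on the Service
      targetPort: 8080        # port the container actually listens on
```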

Without these probes, your users could be hitting instances of your application which aren’t actually working properly.

How Best To Leverage Probes

I like to think of these two options as application-focused and service-focused. Since a liveness probe only triggers action on that specific container, it is focused on a single question: Is the application running inside healthy?

On the other front, a readiness probe provides wider coverage: it affects overall service reliability, which in turn affects the actual consumers of the service. This turns the question into: Is the application able to perform its job to the fullest, including interacting with dependencies like object storage or third-party APIs, to accurately process requests?
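One pattern that follows from this split is pointing the liveness probe at a cheap “the process is alive” check and the readiness probe at a deeper check that exercises dependencies. This is a sketch rather than a prescription; the endpoints and timings are assumptions:

```yaml
containers:
  - name: app
    image: example.com/my-app:1.0    # hypothetical image
    livenessProbe:                   # application-focused: is the process healthy?
      httpGet:
        path: /livez                 # cheap check, no dependency calls
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:                  # service-focused: can it serve real requests?
      httpGet:
        path: /ready                 # verifies object storage, third-party APIs, etc.
        port: 8080
      periodSeconds: 5
      failureThreshold: 2
```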

Of course, these questions may vary depending on your tech stack, principles around development, and the types of clients. If you’re looking for modern guidance in the microservice realm, I strongly recommend both 12factor.net and Production Ready Microservices.

Hopefully this clears up some common misconceptions about how probes work and helps you build the most rock-solid deployments possible in your Kubernetes environments. Feel free to ask more questions in the comments or tell me something I don’t know!

Mario Loria is a builder of diverse infrastructure with modern workloads on both bare-metal and cloud platforms. He's traversed roles in system administration, network engineering, and DevOps. You can learn more about him here.