
Liveness Probe TCP Socket in Kubernetes

A TCP socket liveness probe is a Kubernetes feature that lets you monitor the health of your pods. If the kubelet can open a TCP connection to the specified port of the container, the container is considered healthy; otherwise, the probe reports a failure.

In this article, I will show you an example of using Kubernetes Liveness Probe TCP Socket and available options.

TCP Socket settings

A liveness probe restarts the container when the probe fails. This feature also has some useful settings:

  1. initialDelaySeconds – time after which the liveness probe starts polling the endpoint
  2. periodSeconds – how often the endpoint is polled
  3. timeoutSeconds – number of seconds after which a probe attempt times out
  4. successThreshold – minimum number of consecutive successful attempts for the probe to consider the container healthy (must be 1 for liveness probes)
  5. failureThreshold – number of consecutive failed attempts after which the container is restarted
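The way these settings interact can be sketched as a simple loop. This is a simplified model of the kubelet's behavior, not its actual implementation; the `check` callable is a stand-in for a real probe attempt:

```python
import time

def run_probe(check, initial_delay=0, period=10, failure_threshold=3):
    """Simplified model of the kubelet liveness-probe loop.

    Returns "restart" once `failure_threshold` consecutive probe
    attempts have failed. (For liveness probes, successThreshold
    is always 1, so a single success resets the failure count.)
    """
    time.sleep(initial_delay)          # initialDelaySeconds
    failures = 0
    while True:
        if check():                    # one probe attempt (timeoutSeconds applies here)
            failures = 0               # a single success resets the count
        else:
            failures += 1
            if failures >= failure_threshold:
                return "restart"       # failureThreshold reached: restart the container
        time.sleep(period)             # periodSeconds
```

For example, a probe that always fails hits the threshold after exactly `failure_threshold` attempts.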

TCP Socket options: 

  1. port – the port the liveness probe tries to connect to

To show how it works, we will walk through two paths: a positive one and a negative one (which returns a failure).
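Under the hood, a TCP socket probe simply tries to open a connection to the given port. A minimal Python sketch of that check (an illustration, not kubelet code):

```python
import socket

def tcp_probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port can be opened
    within the timeout, i.e. the probe would report the container
    as healthy; return False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An open port yields `True`; a closed port raises "connection refused" internally and yields `False` — exactly the two paths demonstrated below.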

Liveness Probe – examples

Positive path:

# YAML example
# liveness-pod-example.yaml
#
apiVersion: v1 
kind: Pod 
metadata: 
  name: liveness-tcpsocket
spec: 
  containers: 
  - name: liveness 
    image: nginx 
    ports:
    - containerPort: 80
    livenessProbe: 
      tcpSocket:
        port: 80
      initialDelaySeconds: 2 #Default 0 
      periodSeconds: 2 #Default 10 
      timeoutSeconds: 1 #Default 1 
      successThreshold: 1 #Default 1 
      failureThreshold: 3 #Default 3

Create pod:

kubectl create -f liveness-pod-example.yaml

Describe the pod:

kubectl describe pod liveness-tcpsocket  
Restart Count:  0
.
.
.
Events:
  Type    Reason     Age        From                          Message
  ----    ------     ----       ----                          -------
  Normal  Scheduled  <unknown>  default-scheduler             Successfully assigned jenkins/liveness-tcpsocket to dcpoz-d-sou-k8swor2
  Normal  Pulling    3s         kubelet, dcpoz-d-sou-k8swor2  Pulling image "nginx"
  Normal  Pulled     1s         kubelet, dcpoz-d-sou-k8swor2  Successfully pulled image "nginx"
  Normal  Created    1s         kubelet, dcpoz-d-sou-k8swor2  Created container liveness
  Normal  Started    0s         kubelet, dcpoz-d-sou-k8swor2  Started container liveness

The pod works fine and the restart count stays at 0, because nginx listens on port 80 by default, so the probe can open the socket.

Delete the pod:

kubectl delete -f liveness-pod-example.yaml

Negative path:

# YAML example
# liveness-pod-example2.yaml
#
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcpsocket
spec:
  containers:
  - name: liveness
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 8888
      initialDelaySeconds: 2 #Default 0
      periodSeconds: 2 #Default 10
      timeoutSeconds: 1 #Default 1
      successThreshold: 1 #Default 1
      failureThreshold: 3 #Default 3

Create pod:

kubectl create -f liveness-pod-example2.yaml

Describe the pod:

kubectl describe pod liveness-tcpsocket

Restart Count: 1 # It will keep growing!
.
.
.
Events:
  Type     Reason     Age              From                          Message
  ----     ------     ----             ----                          -------
  Normal   Scheduled  <unknown>        default-scheduler             Successfully assigned jenkins/liveness-tcpsocket to dcpoz-d-sou-k8swor2
  Normal   Pulling    7s               kubelet, dcpoz-d-sou-k8swor2  Pulling image "nginx"
  Normal   Pulled     5s               kubelet, dcpoz-d-sou-k8swor2  Successfully pulled image "nginx"
  Normal   Created    4s               kubelet, dcpoz-d-sou-k8swor2  Created container liveness
  Normal   Started    4s               kubelet, dcpoz-d-sou-k8swor2  Started container liveness
  Warning  Unhealthy  0s (x2 over 2s)  kubelet, dcpoz-d-sou-k8swor2  Liveness probe failed: dial tcp 10.32.0.11:8888: connect: connection refused

The pod keeps restarting because nginx listens only on its default port 80, while the liveness probe tries to connect to port 8888, which is closed.
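With the settings used in this example, you can roughly estimate how long the kubelet takes to trigger the first restart of an unhealthy container (an approximation that ignores probe timeouts and restart backoff):

```python
# Values from the liveness-pod-example2.yaml manifest above.
initial_delay_seconds = 2
period_seconds = 2
failure_threshold = 3

# First restart happens after the initial delay plus one period
# per failed attempt.
worst_case = initial_delay_seconds + period_seconds * failure_threshold
print(worst_case)  # 8 seconds
```

Tightening periodSeconds or failureThreshold makes the restart happen sooner, at the cost of more probe traffic and less tolerance for slow starts.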

Curious to see what else you can test with Kubernetes probes? Check this.

Author: Wojciech Tokarski
 