
You have tried to create an HTTP liveness probe to check the state of your WordPress container, but you are getting a strange connection refused error.
You are getting something like this:
```
Warning  Unhealthy  3m46s (x4 over 4m56s)  kubelet  Liveness probe failed: Get "http://10.28.1.25:80/": dial tcp 10.28.1.25:80: connect: connection refused
Warning  Unhealthy  3m46s (x8 over 4m56s)  kubelet  Readiness probe failed: Get "http://10.28.1.25:80/": dial tcp 10.28.1.25:80: connect: connection refused
Warning  BackOff    13s (x10 over 2m57s)   kubelet  Back-off restarting failed container
```

It says connection refused, and you are really confused because you are sure that the WordPress Apache server is listening on port 80.
You are indeed correct.
What is actually happening is that WordPress is redirecting the liveness probe to the canonical domain (your real website domain name), because the probe's request doesn't use that domain. WordPress doesn't like that and tries to redirect the request to the real domain, and inside the container that redirect will most likely fail with a connection refused error.
The fix is to add an X-Forwarded-Host and a Host header to ensure that the Apache server doesn't attempt to redirect the liveness probe to the canonical domain.
Here is how the httpGet section of an HTTP liveness probe should look:
```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
    httpHeaders:
      - name: X-Forwarded-Proto
        value: https
      - name: X-Forwarded-Host
        value: {{ .host }}
      - name: Host
        value: {{ .host }}
```
Note that this httpGet configuration also applies to readiness and startup probes.
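For instance, a readiness probe for the same container would use the exact same headers, only under a different key (this sketch assumes the same Helm template context providing `.host`):

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
    httpHeaders:
      - name: X-Forwarded-Proto
        value: https
      - name: X-Forwarded-Host
        value: {{ .host }}
      - name: Host
        value: {{ .host }}
```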
Getting a timeout exception in your liveness probe
Another thing that can catch you out: the Kubernetes liveness probe might fail if your WordPress container is too slow to respond.
You might see an error like this:
```
Warning  Unhealthy  2m29s  kubelet  Readiness probe failed: Get "http://10.28.2.11:80/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
```
By default, the liveness probe's timeoutSeconds is set to 1 second. This might not be enough time for your WordPress container to respond, and the kubelet might end up restarting it repeatedly. This only worsens the problem: the more containers are restarting, the more competition there is for resources, which can in turn cause more containers to fail.
An obvious solution is to increase timeoutSeconds to a value high enough that your liveness probe doesn't fail on the occasional slow response. You can also increase failureThreshold, to ensure that one or two outliers don't immediately restart the container due to a slow response.
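A relaxed liveness probe might look like this (the values here are illustrative, not recommendations; tune them to your workload):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  timeoutSeconds: 5    # default is 1
  failureThreshold: 5  # default is 3
  periodSeconds: 10
```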
If you are only concerned about making sure that your WordPress container is available to receive requests after a fresh deploy, then you might want to use a startup probe.
Here is an example of a startup probe I created for my WordPress container:
```yaml
startupProbe:
  httpGet:
    path: /
    port: 80
    httpHeaders:
      - name: X-Forwarded-Proto
        value: https
      {{- with (index .Values.ingress.hosts 0) }}
      - name: X-Forwarded-Host
        value: {{ .host }}
      - name: Host
        value: {{ .host }}
      {{- end }}
  failureThreshold: 30
  periodSeconds: 5
  timeoutSeconds: 10
```
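With these values the kubelet gives the container up to failureThreshold × periodSeconds seconds to start responding before the startup probe is considered failed. A quick sanity check of that budget:

```python
# Startup probe settings from the example above
failure_threshold = 30
period_seconds = 5

# Maximum time the container has to start responding
# before the startup probe is considered failed
max_startup_seconds = failure_threshold * period_seconds
print(max_startup_seconds)  # 150 seconds, i.e. 2.5 minutes
```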
Resources
Kubernetes Documentation for Readiness, Liveness and Startup Probes
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Stack Overflow post where I found this solution:
https://stackoverflow.com/questions/59280829/kubernetes-http-liveness-probe-fails-with-connection-refused-even-though-url-w