How to set up an internal and external load balancer with Nginx Ingress on Kubernetes on GKE


In this article I will show you two methods for configuring Nginx Ingress to expose a Kubernetes service through a Google Kubernetes Engine (GKE) public load balancer, through a Kubernetes internal load balancer, or through both at the same time.

There are two ways to set up a separate load balancer for Nginx Ingress on GKE to handle public and internal traffic: the hard way, without Helm, and the easy way, with Helm.

There are also two ways of handling internal and external traffic in Nginx Ingress. You can set up multiple ingress controllers, as the documentation suggests, or you can have a single Nginx Ingress installation provision both an internal and an external load balancer. As far as I can tell, the second method is only documented in the Helm chart documentation.

The disadvantage of setting up multiple ingress controllers is that you need to define separate Ingress resources for each controller, even if the paths being served are exactly the same.

The HARD WAY – Set Up Multiple Ingress Controllers Using kubectl

To set up multiple Nginx ingress controllers in Google Kubernetes Engine (GKE), we first follow the standard installation instructions for Nginx Ingress:

Create a cluster role binding for the Nginx Ingress service account

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

The step above is very important. If you skip it, you will find that your ingresses don't work and the nginx-ingress-controller will log errors like the following:

E1209 10:33:40.596779       7 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
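
To confirm this, you can tail the controller logs (assuming the standard deployment name created by the ingress-nginx manifest below):

kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=20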

Deploy ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml

Applying the manifest above creates a namespace named ingress-nginx, with an ingress controller and its associated objects inside that namespace. A public load balancer is also created for the ingress controller; this public load balancer will be used to serve external traffic.
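
You can confirm that the public load balancer has been assigned an external IP (provisioning can take a minute or two):

kubectl get svc ingress-nginx-controller -n ingress-nginx --watch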

Now let's say you have a requirement to expose some websites via a private VPC IP address. To be clear, these could be the same websites as the ones exposed externally, but in this case we want to allow access to additional pages, for instance a CMS, and only via a VPN.

What we need is a second Nginx ingress controller in a separate namespace (e.g. ingress-nginx-internal), with a separate ingress class (nginx-internal) assigned to it:

args:
  - /nginx-ingress-controller
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx-internal
  - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  - --validating-webhook=:8443
  - --validating-webhook-certificate=/usr/local/certificates/cert
  - --validating-webhook-key=/usr/local/certificates/key

You also need to add an annotation to the service definition so that GKE knows to provision an internal load balancer:

# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    helm.sh/chart: ingress-nginx-3.10.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.41.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx-internal
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
....

Because we are using a separate namespace, there is no need to rename anything other than the namespace we deploy to.

I have prepared a YAML file based on the ingress-nginx manifest we deployed earlier. It is essentially the result of renaming the namespace to ingress-nginx-internal and the ingress class to nginx-internal, plus the change described above to provision an internal load balancer.

The resulting YAML is below:

https://gist.github.com/armindocachada/ac07111f9b0eacbcef2124bc37a6b716
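
If you prefer to reproduce the renaming yourself instead of using my file, here is a minimal sketch with GNU sed (you would still need to hand-edit the name of the Namespace object itself and add the cloud.google.com/load-balancer-type annotation shown above):

# download the original cloud manifest
curl -sLo nginx-ingress-internal-gke.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
# point every namespaced object at the new namespace
sed -i 's/namespace: ingress-nginx$/namespace: ingress-nginx-internal/' nginx-ingress-internal-gke.yaml
# switch the controller to the new ingress class
sed -i 's/--ingress-class=nginx$/--ingress-class=nginx-internal/' nginx-ingress-internal-gke.yaml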

Now it is time to apply the YAML:

kubectl apply -f https://gist.githubusercontent.com/armindocachada/ac07111f9b0eacbcef2124bc37a6b716/raw/e1dd5549208175e50c1f0be20fbdd0edf44cc8b7/nginx-ingress-internal-gke.yaml

namespace/ingress-nginx-internal created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx configured
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission configured
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

To make sure everything works as expected, let's create a simple deployment with an nginx pod and two ingresses: one to serve traffic via the external IP address and another to serve traffic via the internal IP address.

1. We create the development namespace:

kubectl create -f https://k8s.io/examples/admin/namespace-dev.json

2. We deploy a pod into the development namespace:

$ kubectl create deployment my-nginx --image nginx --namespace development
deployment.apps/my-nginx created

Now we expose our deployment as a Service of type NodePort, on port 80 to match the servicePort that the ingresses below will reference:

$ kubectl expose deployment my-nginx -n development --type=NodePort --port=80
service/my-nginx exposed
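
A quick sanity check that the Service exposes the port the ingresses will reference:

$ kubectl get svc my-nginx -n development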

Let's create an example Ingress for the public Nginx ingress controller:

public-ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: my-nginx-external
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80

And another for the internal controller.

internal-ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-internal
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  rules:
    - host: my-nginx-internal
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80

Notice the difference between the two ingresses: they look exactly the same except for the kubernetes.io/ingress.class annotation. This annotation is the only way to prevent an ingress from being picked up by the wrong ingress controller.

Let’s deploy both ingresses:

$ kubectl apply -f public-ingress.yaml
ingress.networking.k8s.io/nginx-ingress created

$ kubectl apply -f internal-ingress.yaml
ingress.networking.k8s.io/nginx-ingress-internal created

After a few minutes we can check the state of the ingresses:

$ kubectl get ingress --all-namespaces

NAMESPACE     NAME                     HOSTS               ADDRESS               PORTS   AGE
development   nginx-ingress            my-nginx-external   <public ip address>   80      5m
development   nginx-ingress-internal   my-nginx-internal                         80      5m

Note that in my case I was only able to see a public IP address associated with the first ingress; the second ingress did not show any IP. However, when I listed the load balancer services (kubectl get svc --all-namespaces), I saw this:

NAMESPACE                NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-internal   ingress-nginx-controller   LoadBalancer   <ClusterIP1>   <VPC IP>      80:31418/TCP,443:30845/TCP   34m
ingress-nginx            ingress-nginx-controller   LoadBalancer   <ClusterIP2>   <Public IP>   80:31566/TCP,443:30084/TCP   4h12m

Both load balancers have IP addresses, which is good. To test that the ingresses work, modify your hosts file (e.g. /etc/hosts) and add two entries:

<Public IP> my-nginx-external
<VPC IP> my-nginx-internal

These are the two virtual hosts that we have configured.
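
Alternatively, if you would rather not edit your hosts file, you can pass the Host header to curl directly (substitute the real IP addresses):

curl -H "Host: my-nginx-external" http://<Public IP>/
curl -H "Host: my-nginx-internal" http://<VPC IP>/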

Let's test the external hostname with curl:

curl http://my-nginx-external

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


And now the internal one:

$ curl http://my-nginx-internal

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

I am assuming here that you are on a machine inside the same VPC as your Kubernetes cluster; otherwise you will not have access to the internal VPC IP and will need to run the curl test from somewhere that does.
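
For example, you could run the internal test from a VM in the same VPC (the instance name and zone below are hypothetical):

gcloud compute ssh test-vm --zone europe-west2-a \
  --command 'curl -H "Host: my-nginx-internal" http://<VPC IP>/'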

So, following these exact steps on GKE, I was able to configure two ingress controllers: one for public traffic and the other for internal traffic.

The EASY WAY – Using Helm Charts

If you are looking to set up an internal and an external load balancer that serve the same paths, there is a far easier way to do it, using Helm charts. The example below is for GKE, but a similar method works on AWS with some adjustments. Here are the steps you need to follow:

Create config.yaml with the following content:

controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        cloud.google.com/load-balancer-type: "Internal"
        # Any other annotation can be declared here.
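
Before running the install, make sure the ingress-nginx Helm repository has been added (the repository URL is the one from the official documentation linked at the end):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update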

Now it's time to install ingress-nginx:

helm install -f config.yaml my-release ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace

To verify that the Nginx ingress services have been provisioned correctly:

$ kubectl get svc -n ingress-nginx

NAMESPACE       NAME                                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx   my-release-ingress-nginx-controller             LoadBalancer   <A CLUSTER IP>   <A PUBLIC IP>   80:31867/TCP,443:30132/TCP   33s
ingress-nginx   my-release-ingress-nginx-controller-admission   ClusterIP      <A CLUSTER IP>   <none>          443/TCP                      33s
ingress-nginx   my-release-ingress-nginx-controller-internal    LoadBalancer   <A CLUSTER IP>   <INTERNAL IP>   80:30626/TCP,443:30629/TCP   33s

Note that it might take some time to provision the external and internal IP addresses for the load balancers, so you may have to wait a couple of minutes after running helm install before the requested IP addresses appear.
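
You can watch the services until the addresses are assigned:

kubectl get svc -n ingress-nginx --watch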

Resources:

Ingress Nginx documentation: https://kubernetes.github.io/ingress-nginx/deploy/#using-helm

Helm Configuration for Ingress-nginx: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#configuration