Using nginx-ingress for cross-namespace services

Support for externalName services in nginx-ingress has been requested for a while. It was finally released for NGINX Plus (https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/externalname-services/README.md) and, AFAIK, it does not work on the standard version. I tried to set it up anyway, but the upstream always ends up on a 127.0.0.1:8181 endpoint if you try to configure an externalName as an upstream.
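
For reference, this is roughly the kind of ExternalName Service I was trying to point the ingress at (the names here are purely illustrative):

apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: test-namespace
spec:
  # An ExternalName Service simply returns a CNAME to the target DNS name
  type: ExternalName
  externalName: test.test.svc.cluster.local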

So I came up with this workaround. It's probably not the most elegant solution, but it works, and the Service itself acts as a load balancer.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  namespace: test-namespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.org/server-snippets: |
      location /exchangerates/ {
        proxy_set_header Host test.test.svc.cluster.local;
        proxy_pass http://test.test.svc.cluster.local:80/;
      }
spec:
  rules:
  - host: 8.8.8.8.xip.io
    http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /test

Here I used xip.io (http://xip.io/) to build a valid hostname; it's really useful for quick tests, give it a try!
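
For completeness, the target of the proxy_pass above is just a plain ClusterIP Service living in the other namespace, something along these lines (selector and ports are made up for the example):

apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test           # hypothetical label on the backend pods
  ports:
  - port: 80
    targetPort: 8080    # hypothetical container port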

RWX persistent volumes on GKE using NFS

One of the first things you'll notice when you start deploying your apps on GKE is that persistent volumes cannot be accessed in read-write mode if the PVC is shared by multiple pods. In fact, GKE does not offer RWX mode on its GCE Persistent Disks (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). As of today, you can use these in RWO (Read Write Once: connected to a single pod in read-write mode) or ROX (Read Only Many: connected to multiple pods in read-only mode).
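
If you want to see this for yourself, ask for ReadWriteMany against GKE's default GCE PD storage class: the claim should simply stay Pending, with the provisioner refusing the access mode. Something like this (a throwaway example, not part of the setup below):

# This claim will not be provisioned: GCE PDs do not support ReadWriteMany
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx-on-gce-pd
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "standard"   # GKE's default GCE PD storage class
  resources:
    requests:
      storage: 1Gi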

Kubernetes was born with stateless workloads in mind, so having an RWX volume wasn't really a concern; still, there are (rare) situations where it is a requirement. Consider this scenario: you are offering a geolocation service based on the MaxMind GeoIP database. You code your own API that reads this binary database and exposes a set of routes, and the application can scale up to accommodate incoming requests. This sounds good, and you don't really need RWX in this scenario... but what happens when you periodically need to update the database? A possible solution would be to create a new GCP persistent disk and run a provisioner pod that downloads the new MaxMind release; then you perform a rolling upgrade, changing the claimName in the deployment. This path is hard to automate, since it requires a chain of operations (provision a new persistent disk, create a PV and a PVC, run a provisioner deployment/pod, change the claimName in the running deployment, delete the old PV, PVC and persistent disk). This is where NFS comes into play.

NFS sits in the middle of the architecture, providing a layer between a single RWO PVC and the multiple pods mounting the exposed NFS export, possibly in read-write mode.

DEPLOY NFS FROM HELM CHART

First of all, we need a namespace to deploy everything into. You might want to use a YAML manifest and run kubectl apply:

apiVersion: v1
kind: Namespace
metadata:
  name: examplenfs
  labels:
    app.kubernetes.io/name: examplenfs
    app.kubernetes.io/instance: examplenfs
    app.kubernetes.io/version: "0.0.1"
    app.kubernetes.io/managed-by: manual

or just forget the fancy stuff and run:

$ kubectl create namespace examplenfs

Now, the easiest way to run an NFS provisioner is the NFS Server Provisioner Helm chart (https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner). Let's download the values.yaml from the repo and change the following lines:

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 1Gi

and install the chart:

$ helm install stable/nfs-server-provisioner --namespace examplenfs -f values.yaml
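
If you are on Helm 3, which requires an explicit release name, the equivalent command should be something like:

$ helm install nfs-server-provisioner stable/nfs-server-provisioner --namespace examplenfs -f values.yaml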

In a few moments you will have the nfs-server-provisioner-0 pod and nfs-server-provisioner service up and running:

$ kubectl get pod

NAME                           READY   STATUS      RESTARTS   AGE
nfs-server-provisioner-0       1/1     Running     0          4d1h


$ kubectl get svc

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                     AGE
nfs-server-provisioner       ClusterIP   10.52.143.59   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   4d1h
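
You can also double-check that the chart registered its StorageClass:

$ kubectl get storageclass nfs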

PROVIDE AN APPLICATION PVC

The Helm chart will create a new StorageClass named “nfs”. Now we just need to create a PVC for our application with the following:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  namespace: examplenfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 500Mi

Please note that the requested storage size needs to be at least a little lower than the one set in the chart's values.yaml, since some space is lost to filesystem formatting.
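
Apply the manifest (here I assume you saved it as pvc.yaml) and check that the claim gets bound to a dynamically provisioned volume:

$ kubectl apply -f pvc.yaml
$ kubectl get pvc -n examplenfs

The claim should show up as Bound after a few seconds.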

DEPLOY THE APPLICATION

Now you might want to deploy Secrets, ConfigMaps, Services and the Deployment itself. This is an excerpt with the relevant parts of a sample deployment:

[...]

        volumeMounts:
        - mountPath: /data/
          name: data
          readOnly: true
[...]
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs
          readOnly: true
[...]

In this case, the application mounts the PVC read-only, since it does not need write access.
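
To give a better idea, here is a minimal sketch of what the whole Deployment could look like, with multiple replicas all mounting the same NFS-backed claim read-only (the image name and labels are made up for the example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: geoip-api
  namespace: examplenfs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: geoip-api
  template:
    metadata:
      labels:
        app: geoip-api
    spec:
      containers:
      - name: app
        image: example/geoip-api:latest   # hypothetical API image
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /data/
          name: data
          readOnly: true
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs
          readOnly: true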

THE UPDATER JOB

We will make use of a Kubernetes CronJob and the maxmindinc/geoipupdate image. In this case, we mount the volume in read-write mode:

apiVersion: v1
kind: Secret
metadata:
  name: maxmind
  namespace: examplenfs
type: Opaque
data:
  GEOIPUPDATE_ACCOUNT_ID: REDACTED_BASE64
  GEOIPUPDATE_LICENSE_KEY: REDACTED_BASE64
  GEOIPUPDATE_EDITION_IDS: REDACTED_BASE64
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: geoipupdate
  namespace: examplenfs
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: app
            image: maxmindinc/geoipupdate
            env:
              - name: GEOIPUPDATE_ACCOUNT_ID
                valueFrom:
                  secretKeyRef:
                    name: maxmind
                    key: GEOIPUPDATE_ACCOUNT_ID
              - name: GEOIPUPDATE_LICENSE_KEY
                valueFrom:
                  secretKeyRef:
                    name: maxmind
                    key: GEOIPUPDATE_LICENSE_KEY
              - name: GEOIPUPDATE_EDITION_IDS
                valueFrom:
                  secretKeyRef:
                    name: maxmind
                    key: GEOIPUPDATE_EDITION_IDS
            volumeMounts:
            - name: nfs
              mountPath: /usr/share/GeoIP
          restartPolicy: Never
          volumes:
          - name: nfs
            persistentVolumeClaim:
              claimName: nfs
              readOnly: false
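
If you don't want to wait for the schedule, you can trigger a one-off run of the CronJob manually:

$ kubectl create job geoipupdate-manual --from=cronjob/geoipupdate -n examplenfs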