One of the first things you’ll notice when you start deploying your apps on GKE is that persistent volumes cannot be accessed in read-write mode when the PVC is shared by multiple pods. In fact, GKE does not offer RWX mode on its GCE Persistent Disks (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). As of today, you can use them either in RWO (Read Write Once, attached to a single pod in read-write mode) or ROX (Read Only Many, attached to multiple pods in read-only mode).
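To make the limitation concrete, here is a minimal sketch of a claim (the name and size are made up for illustration) asking for ReadWriteMany from GKE’s default “standard” storage class; since that class is backed by a GCE Persistent Disk, a claim like this will not give you a usable RWX volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany          # not supported by GCE Persistent Disks
  storageClassName: standard # GKE default, provisioned as a GCE Persistent Disk
  resources:
    requests:
      storage: 10Gi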
Kubernetes was born with stateless workloads in mind, so having an RWX volume wasn’t really a concern; still, there are (rare) situations where it is a requirement. Consider this scenario: you offer a geolocation service based on the MaxMind GeoIP database. You write your own API that reads this binary database and exposes a set of routes, and the application scales up to accommodate incoming requests. This sounds good, and you don’t really need RWX in this scenario… but what happens when you periodically need to update the database? A possible solution would be to create a new GCP persistent disk and run a provisioner pod that downloads the new MaxMind release, then perform a rolling upgrade changing the claimName in the deployment. This path is hard to automate, since it requires a chain of operations (provision a new persistent disk, create a PV and a PVC, run a provisioner deployment/pod, change the claimName in the running deployment, delete the old PV, PVC and persistent disk). This is where NFS comes into play.
NFS sits in the middle of the architecture, providing a layer between a single RWO PVC and the multiple pods mounting the exposed NFS export (possibly) in read-write mode.
DEPLOY NFS FROM HELM CHART
First of all, we need a namespace to deploy everything into; you might want to define it in a YAML manifest and apply it with kubectl. Then we install the chart, passing it a values.yaml with our configuration:
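As a minimal sketch (assuming the chart’s persistence.* and storageClass.name options), the namespace manifest and values.yaml could look like this; the namespace name simply matches the one passed to helm install below, and the values back the NFS server with a regular GCE Persistent Disk while naming the resulting storage class “nfs”:

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: v

$ kubectl apply -f namespace.yaml

# values.yaml
persistence:
  enabled: true
  storageClass: standard   # GKE default class, backed by a RWO GCE Persistent Disk
  size: 10Gi
storageClass:
  name: nfs                # the class our applications will request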
$ helm install stable/nfs-server-provisioner --namespace v -f values.yaml
In a few moments you will have the nfs-server-provisioner-0 pod and nfs-server-provisioner service up and running:
$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          4d1h
$ kubectl get svc
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                     AGE
nfs-server-provisioner   ClusterIP   10.52.143.59   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   4d1h
PROVIDE AN APPLICATION PVC
The helm chart will create a new storageclass named “nfs”. Now we just need to create a PVC for our application with the following:
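A minimal sketch of such a claim (the name and size are placeholders) only needs to reference the “nfs” storage class and ask for ReadWriteMany:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: geoip-data        # hypothetical claim for the example application
spec:
  accessModes:
    - ReadWriteMany       # now possible, since the volume is served over NFS
  storageClassName: nfs   # the class created by nfs-server-provisioner
  resources:
    requests:
      storage: 5Gi

Any pod mounting this claim gets read-write access, while under the hood the provisioner serves the data from its own single RWO persistent disk over NFS.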