One of the first things you’ll notice when you start deploying your apps on GKE is that persistent volumes cannot be accessed in read-write mode when the PVC is shared by multiple pods. In fact, GKE does not offer RWX mode on its GCE Persistent Disks (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). As of today, you can use them in RWO (ReadWriteOnce: attached to a single node in read-write mode) or ROX (ReadOnlyMany: mounted by multiple pods in read-only mode).
Kubernetes was born with stateless workloads in mind, so an RWX volume was never really a concern; still, there are (rare) situations where it is a requirement. Consider this scenario: you offer a geolocation service based on the MaxMind GeoIP database. You write your own API that reads this binary database and exposes a set of routes, and the application can scale up to accommodate incoming requests. This sounds good, and you don’t really need RWX in this scenario… but what happens when you periodically need to update the database? A possible solution is to create a new GCP persistent disk and run a provisioner pod that downloads the new MaxMind release, then perform a rolling upgrade changing the claimName in the deployment. This path is hard to automate, since it requires a chain of operations (provision a new persistent disk, create a PV and a PVC, run a provisioner deployment/pod, change the claimName in the running deployment, delete the old PV, PVC and persistent disk). This is where NFS comes into play.
NFS sits in the middle of the architecture, providing a layer between a RWO PVC and multiple pods mounting the exposed NFS export (possibly) in read-write mode.
DEPLOY NFS FROM HELM CHART
First of all, we need a namespace to deploy everything into. You might want to use a yaml and run a kubectl apply:
apiVersion: v1
kind: Namespace
metadata:
  name: examplenfs
  labels:
    app.kubernetes.io/name: examplenfs
    app.kubernetes.io/instance: examplenfs
    app.kubernetes.io/version: "0.0.1"
    app.kubernetes.io/managed-by: manual
or just forget the fancy stuff and run:
$ kubectl create namespace examplenfs
Now, the easiest path to run an NFS provisioner is to use the NFS Server Provisioner Helm chart (https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner). Let’s download the values.yaml from the repo and change the following lines:
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 1Gi
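This persistence block backs the NFS server itself with a regular RWO disk (on GKE, a GCE Persistent Disk from the default storage class). The same values.yaml also controls the StorageClass the provisioner registers for its clients; assuming the chart’s defaults, the relevant block looks roughly like this:
storageClass:
  defaultClass: false
  name: nfs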
Then install the chart:
$ helm install stable/nfs-server-provisioner --namespace examplenfs -f values.yaml
In a few moments you will have the nfs-server-provisioner-0 pod and nfs-server-provisioner service up and running:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-server-provisioner-0 1/1 Running 0 4d1h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nfs-server-provisioner ClusterIP 10.52.143.59 <none> 2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP 4d1h
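You can also verify that the new StorageClass has been registered (assuming the default name “nfs” from the chart’s values.yaml):
$ kubectl get storageclass nfs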
PROVIDE AN APPLICATION PVC
The Helm chart will create a new StorageClass named “nfs”. Now we just need to create a PVC for our application with the following:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  namespace: examplenfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 500Mi
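Apply the manifest and check that the claim gets bound (the file name pvc.yaml is just an example):
$ kubectl apply -f pvc.yaml
$ kubectl get pvc nfs -n examplenfs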
Please note the requested storage size needs to be at least a little lower than the one set in the chart’s values.yaml, because filesystem formatting consumes part of the backing volume.
DEPLOY THE APPLICATION
Now you might want to deploy Secrets, ConfigMaps, Services and the Deployment. This is an excerpt with the relevant parts of a sample deployment:
[...]
          volumeMounts:
            - mountPath: /data/
              name: data
              readOnly: true
[...]
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs
            readOnly: true
[...]
In this case, the application mounts the PVC read-only, since it does not require write access.
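For reference, a complete minimal Deployment built around that excerpt could look like the one below; the image name example-geoip-api:latest and the replica count are placeholders, the point being that several replicas can mount the same NFS-backed PVC at the same time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: geoip-api
  namespace: examplenfs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: geoip-api
  template:
    metadata:
      labels:
        app: geoip-api
    spec:
      containers:
        - name: app
          # placeholder image: any API reading the GeoIP database from /data/
          image: example-geoip-api:latest
          volumeMounts:
            - mountPath: /data/
              name: data
              readOnly: true
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs
            readOnly: true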
THE UPDATER JOB
We will make use of a Kubernetes CronJob and the maxmindinc/geoipupdate image. This time, we mount the volume in read-write mode:
apiVersion: v1
kind: Secret
metadata:
  name: maxmind
  namespace: examplenfs
type: Opaque
data:
  GEOIPUPDATE_ACCOUNT_ID: REDACTED_BASE64
  GEOIPUPDATE_LICENSE_KEY: REDACTED_BASE64
  GEOIPUPDATE_EDITION_IDS: REDACTED_BASE64
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: geoipupdate
  namespace: examplenfs
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: app
              image: maxmindinc/geoipupdate
              env:
                - name: GEOIPUPDATE_ACCOUNT_ID
                  valueFrom:
                    secretKeyRef:
                      name: maxmind
                      key: GEOIPUPDATE_ACCOUNT_ID
                - name: GEOIPUPDATE_LICENSE_KEY
                  valueFrom:
                    secretKeyRef:
                      name: maxmind
                      key: GEOIPUPDATE_LICENSE_KEY
                - name: GEOIPUPDATE_EDITION_IDS
                  valueFrom:
                    secretKeyRef:
                      name: maxmind
                      key: GEOIPUPDATE_EDITION_IDS
              volumeMounts:
                - name: nfs
                  mountPath: /usr/share/GeoIP
          restartPolicy: Never
          volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs
                readOnly: false
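If you don’t want to wait for the schedule, you can trigger a one-off run to make sure the update lands on the shared volume (the job name geoipupdate-manual is arbitrary):
$ kubectl create job geoipupdate-manual --from=cronjob/geoipupdate -n examplenfs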