Support for externalName services in the NGINX ingress controller has been requested for a while on nginx-ingress. It was finally released for NGINX Plus (https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/externalname-services/README.md) and, AFAIK, it does not work on the standard version. I tried to set it up anyway, but if you configure an externalName as an upstream, the upstream always ends up on a 127.0.0.1:8181 endpoint.
So I came up with this workaround. It is probably not the most elegant solution, but it works, and the Service itself acts as a load balancer.
One of the first things you’ll notice when you start deploying your apps on GKE is that persistent volumes cannot be accessed in read-write mode if a PVC is exposed to multiple pods. In fact, GKE does not offer RWX mode on its GCE Persistent Disks (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). As of today, you can use these in RWO (Read Write Once, connected to a single pod in read-write access mode) or ROX (Read Only Many, connected to multiple pods in read-only access mode).
Kubernetes was born with stateless workloads in mind, so having an RWX volume was never really a concern; still, there are (rare) situations where it is a requirement. Consider this scenario: you are offering a geolocation service based on the MaxMind GeoIP database. You write your own API that accesses this binary database and exposes a set of routes, and the application can scale up to accommodate incoming requests. This sounds good, and you don’t really need RWX in this scenario… but what happens when you periodically need to update the database? A possible solution would be to create a new GCP persistent disk and run a provisioner pod that downloads the new MaxMind release; then you perform a rolling upgrade, changing the claimName in the deployment. This path is hard to automate, since it requires a chain of operations (provision a new persistent disk, create a PV and a PVC, run a provisioner deployment/pod, change the claimName in the running deployment, delete the old PV, PVC and persistent disk). This is where NFS comes into play.
NFS sits in the middle of the architecture, providing a layer between the RWO PVC and the multiple pods mounting the exposed NFS export (possibly) in read-write access mode.
DEPLOY NFS FROM HELM CHART
First of all, we need a namespace to deploy everything into. You might want to define it in a YAML manifest and run kubectl apply:
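The namespace manifest itself is not reproduced here; a minimal sketch (the name below is a placeholder and should match whatever you pass to --namespace in the helm command that follows) could be:

apiVersion: v1
kind: Namespace
metadata:
  name: <your-namespace>

$ kubectl apply -f namespace.yaml

Then install the chart: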
$ helm install stable/nfs-server-provisioner --namespace v -f values.yaml
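The values.yaml passed with -f above is not reproduced here either. As a rough sketch, the stable/nfs-server-provisioner chart exposes persistence and storage class settings along these lines (key names and defaults should be double-checked against the chart version you are installing; the size and backing storage class are placeholders):

persistence:
  enabled: true
  storageClass: "standard"
  size: 100Gi

storageClass:
  name: nfs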
In a few moments you will have the nfs-server-provisioner-0 pod and nfs-server-provisioner service up and running:
$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          4d1h
$ kubectl get svc
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                     AGE
nfs-server-provisioner   ClusterIP   10.52.143.59   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   4d1h
PROVIDE AN APPLICATION PVC
The helm chart will create a new StorageClass named "nfs". Now we just need to create a PVC for our application along these lines:
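The exact manifest is not reproduced here; a minimal sketch (claim name and size are placeholders) that requests a ReadWriteMany volume from the nfs storage class could be:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi

Any number of pods can now mount this claim in read-write mode, while the NFS provisioner keeps the actual data on a single RWO GCE Persistent Disk behind the scenes.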
In this case, we need to echo a variable to stdout; the pipe feeds stdin (which corresponds to /proc/self/fd/0 on Linux), and install reads it back through the proc filesystem. This might not be terribly useful here, but for a simple create/copy of a directory or file it comes in handy:
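For example (a sketch: the variable name, mode and destination path are made up for illustration), this writes the content of a shell variable straight into a file with the desired permissions, creating the leading directories along the way:

echo "$MY_CONTENT" | install -D -m 600 /proc/self/fd/0 /tmp/myapp/conf/config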
Sometimes it might be useful to run a command on a subset of pods inside a namespace. This is where the following script (called kexall, mimicking the common kubectl aliases) comes in handy.
#!/usr/bin/env bash

PROGNAME=$(basename "$0")

function usage {
    echo "usage: $PROGNAME [-n NAMESPACE] [-m MAX-PODS] -s SERVICE -- COMMAND"
    echo "  -s SERVICE    K8s service, i.e. a pod selector (required)"
    echo "  COMMAND       Command to execute on the pods"
    echo "  -n NAMESPACE  K8s namespace (optional)"
    echo "  -m MAX-PODS   Max number of pods to run on (optional; default=all)"
    echo "  -q            Quiet mode"
    echo "  -d            Dry run (don't actually exec)"
}

function header {
    if [ -z "$QUIET" ]; then
        >&2 echo "###"
        >&2 echo "### $PROGNAME $*"
        >&2 echo "###"
    fi
}

while getopts :n:s:m:qd opt; do
    case $opt in
        d)
            DRYRUN=true
            ;;
        q)
            QUIET=true
            ;;
        m)
            MAX_PODS=$OPTARG
            ;;
        n)
            NAMESPACE="-n $OPTARG"
            ;;
        s)
            SERVICE=$OPTARG
            ;;
        \?)
            usage
            exit 0
            ;;
    esac
done

if [ -z "$SERVICE" ]; then
    usage
    exit 1
fi

# Everything left after the options (and the optional --) is the command to run
shift $((OPTIND - 1))
while test "$#" -gt 0; do
    if [ "$REST" == "" ]; then
        REST="$1"
    else
        REST="$REST $1"
    fi
    shift
done

if [ "$REST" == "" ]; then
    usage
    exit 1
fi

# Collect the pods whose name starts with the given service name
PODS=()
for pod in $(kubectl $NAMESPACE get pods --output=jsonpath={.items..metadata.name}); do
    if echo "$pod" | grep -qe "^$SERVICE"; then
        PODS+=("$pod")
    fi
done

if [ ${#PODS[@]} -eq 0 ]; then
    echo "service not found in ${NAMESPACE:-default}: $SERVICE"
    exit 1
fi

# Optionally limit the run to the first MAX_PODS pods
if [ -n "$MAX_PODS" ]; then
    PODS=("${PODS[@]:0:$MAX_PODS}")
fi

header "{pods: ${#PODS[@]}, command: \"$REST\"}"
for i in "${!PODS[@]}"; do
    pod=${PODS[$i]}
    header "{pod: \"$(($i + 1))/${#PODS[@]}\", name: \"$pod\"}"
    if [ "$DRYRUN" != "true" ]; then
        kubectl $NAMESPACE exec "$pod" -- $REST
    fi
done
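For example (assuming the script is saved somewhere in your PATH as kexall; the namespace and service names are placeholders), running a quick command on at most three pods of a service would look like:

$ kexall -n production -m 3 -s my-api -- date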
A simple script to locally clone a list of repos and bundle them. It skips empty repos and bundles wikis as well. A file named list is required in the same path, with one repo per line in the format: https://git:MYGITTOKEN@git.server.address/path/repo.git
#!/bin/bash

# Create the directory that hosts the bundles
mkdir -p bundles

# Bundle all non-empty repos
while read -r line; do
    git clone --mirror "$line" folder || continue
    cd folder
    filename=$(echo "$line" | awk -F/ '{print $NF}')
    # Is the repo empty?
    if [ -n "$(git rev-list -n 1 --all 2>/dev/null)" ]; then
        echo "Git repo has commits, bundling.."
        git bundle create ../bundles/$filename.bundle --all
    else
        echo "Git repo has no commits, skipping"
    fi
    cd ..
    rm -rf folder
done < list

# Bundle all non-empty wikis (we might as well have empty repos with wikis)
while read -r line; do
    wikiurl=$(echo "$line" | sed 's/\.git/\.wiki\.git/g')
    git clone --mirror "$wikiurl" folder || continue
    cd folder
    filename=$(echo "$line" | awk -F/ '{print $NF}')
    # Is the wiki empty?
    if [ -n "$(git rev-list -n 1 --all 2>/dev/null)" ]; then
        echo "Git repo has commits, bundling.."
        git bundle create ../bundles/$filename.wiki.bundle --all
    else
        echo "Git repo has no commits, skipping"
    fi
    cd ..
    rm -rf folder
done < list
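To get a repository back from one of these bundles later (the file name here follows the naming produced above), it can simply be cloned like any other remote:

$ git clone bundles/repo.git.bundle restored-repo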
Let’s begin by saying that this should never, ever happen. It is really bad practice, and it completely defeats the purpose of using a version control system, but you know, it isn’t all puppy dogs and rainbows out there.
Sometimes a partner accesses an on-premise piece of software of ours in order to update some configs on their own. And yes, this is done in production, by changing a huge config file, often during the night. For our convenience this file is versioned on our GitLab server: devs submit a merge request and a CI/CD pipeline calls an Ansible script that triggers a git pull after testing.
But hey, the pipeline obviously fails if the local file has modifications. We don’t really want to use a git reset --hard and destroy all the work done in production by our partner, but we don’t want our pipelines to fail miserably because of local changes either.
So here is a script that does this nasty job:
#!/bin/bash
cd /etc/my-onpremise-software || exit 1
/bin/git fetch --all
# If the working copy has local modifications, commit them and force-push
if [[ $(git status --porcelain --untracked-files=no | wc -l) -gt 0 ]]; then
    /bin/git add configfile
    /bin/git commit -m "Automatic forced push"
    /bin/git push --force
fi
You might be wondering why we do a git push --force. We want the actual working copy stored in production to take precedence over the developers’ modifications to the config file; in this case devs are aware that their work might get lost, which is why working on branches and submitting a new MR is essential.
Just add this to your crontab. The frequency depends on how often changes are applied in production; since this is a relatively sporadic event in our case, running it once a day at 6am was fine for us.
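For reference (the script path is a placeholder), the corresponding crontab entry for a daily 6am run would be something like:

0 6 * * * /usr/local/bin/force-push-config.sh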
But please, remember: do this only if it is a matter of life or death. Extremis malis, extrema remedia.
Letsencrypt has changed the way we think about SSL certificates over the last few years. Do you remember those dark (and expensive) days when you needed to buy a yearly certificate from their majesties the Certificate Authorities and manually deploy it on all of your websites? This led to two consequences: first, SSL was only implemented when really needed, and second, expiry deadlines quickly turned out to be as critical as due dates for fees.
After Letsencrypt was born, with its short 90-day renewal period, it became clear that we needed some kind of automation. Certbot was one of the most promising solutions, being straight to the point and easy to automate. The standard HTTP challenge was trouble-free and could automagically change your web server’s configuration. The DNS challenge became available as well, supporting wildcard certificates, but it required you to add a specific TXT record to your DNS for every issuance and renewal. Certbot provides a complete list of plugins to support DNS challenges on major cloud and on-premise DNS providers. Additionally, Docker images with preloaded plugins are available on Docker Hub, making the renewal process an effortless one-liner. But how do you seamlessly integrate certificate renewals with DNS challenges in a cloud and on-premise DNS environment, without messing up your servers by installing certbot and its Python dependencies?
How would you add custom records to your Bind9 installation, which does not expose an API? In this article, we will focus on renewing certificates backed by an on-premise BIND9 DNS server.
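As a preview of the approach (a sketch only: the server IP, key name, domain and paths are placeholders, and the flags belong to certbot’s dns-rfc2136 plugin, which talks to BIND9 via RFC2136 dynamic updates), a dockerized issuance could look roughly like this:

$ cat rfc2136.ini
dns_rfc2136_server = 203.0.113.10
dns_rfc2136_port = 53
dns_rfc2136_name = certbot-key.
dns_rfc2136_secret = <TSIG key secret>
dns_rfc2136_algorithm = HMAC-SHA512

$ docker run --rm \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/lib/letsencrypt:/var/lib/letsencrypt \
    -v $(pwd)/rfc2136.ini:/rfc2136.ini \
    certbot/dns-rfc2136 certonly \
    --dns-rfc2136 --dns-rfc2136-credentials /rfc2136.ini \
    -d example.com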