Let’s begin by saying that this should never, ever happen. It is really bad practice and it completely defeats the purpose of using a version control system. But, you know, it isn’t all puppy dogs and rainbows out there.
Sometimes a partner accesses one of our on-premise applications to update some configs on their own. And yes, this is done in production, by changing a huge config file, often during the night. For our convenience this file is versioned on our GitLab server: devs submit a merge request and a CI/CD pipeline calls an Ansible script that triggers a git pull on the production machine after testing.
But hey, the pipeline obviously fails if the local file has been modified. We don’t want to run a git reset --hard and destroy all the work done in production by our partner, but we don’t want our pipelines to fail miserably because of local changes either.
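To give an idea of what the pipeline runs into, this is roughly what the pull step sees when the working copy is dirty:

cd /etc/my-onpremise-software
git pull
# error: Your local changes to the following files would be overwritten by merge:
#         configfile
# Please commit your changes or stash them before you merge.
# Aborting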
So here is a script that does this nasty job:
#!/bin/bash
# Commit and force-push any local changes made to the config file in production.
cd /etc/my-onpremise-software || exit 1
/bin/git fetch --all
# Only act if tracked files have local modifications.
if [[ $(git status --porcelain --untracked-files=no | wc -l) -gt 0 ]]; then
  /bin/git add configfile
  /bin/git commit -m "Automatic forced push"
  /bin/git push --force
fi
You might be wondering why we do a git push --force. We want the actual working copy in production to take precedence over developers’ modifications to the config file; devs are aware that work pushed directly to that branch might get lost, which is why working on feature branches and opening a new MR is essential.
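On the devs’ side this is just the usual branch-and-MR workflow (the branch name and commit message below are only examples):

git checkout -b update-partner-config
git add configfile
git commit -m "Tune config for partner"
git push -u origin update-partner-config
# then open a merge request towards the default branch on GitLab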
Just add this to your crontab. The frequency depends on how often changes are applied in production; since this is a relatively sporadic event in our case, once a day at 6am was fine for us.
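Assuming the script is saved somewhere like /usr/local/bin/force-push-config.sh (the path and log file are just examples), the crontab entry looks like this:

# run every day at 6am, appending output to a log file
0 6 * * * /usr/local/bin/force-push-config.sh >> /var/log/force-push-config.log 2>&1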
But please, remember: do this only if it is a matter of life or death. Extremis malis, extrema remedia (extreme ills call for extreme remedies).