A Job in Kubernetes is a one-off, immutable task to be carried out during deployment. But what if a job needs to run for each deployment? A new job with the same name can't be deployed on top of the existing one if it is in a Completed or Failed state. Since Kubernetes 1.23, a TTL (Time To Live) can be set on a job so it is cleaned up after the given number of seconds (see the sketch after the list below). I personally prefer ArgoCD's alternative solution, Resource Hooks, for a few reasons:
- The migration job stays visible there until the next deploy
- With `PreSync`, if the migration job fails or gets stuck, the main app won't be deployed, which prevents the app from running against a stale database schema
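For comparison, the built-in TTL approach looks like this; a minimal sketch, with the 100-second value as an arbitrary placeholder:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  # delete the Job 100 seconds after it finishes, whether Completed or Failed
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: django-job
          image: my-django-app:v1.0
          command: ["python3", "manage.py", "migrate"]
```

The job then disappears on its own schedule, and nothing ties its success to the rest of the deployment.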
In the following example, the job is cleaned up before a new deploy so a new version of the same job can be deployed. This is perfect for the scenario where a DB migration runs before each release.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    # set to trigger before a sync starts
    argocd.argoproj.io/hook: PreSync
    # delete the existing one before sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    metadata:
      name: db-migration
      labels:
        app: django
      annotations:
        # no, we don't want Istio for a job
        sidecar.istio.io/inject: "false"
    spec:
      restartPolicy: Never
      containers:
        - name: django-job
          image: my-django-app:v1.0
          command:
            - /bin/bash
            - -c
            - |
              # run the migration
              python3 manage.py migrate
              # load fixtures
              python3 manage.py loaddata my-project/fixtures/*.yaml
          volumeMounts:
            - name: django-config-volume
              # django settings from a mounted secret
              mountPath: /var/www/django/my-project/settings.py
              subPath: settings.py
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
      volumes:
        - name: django-config-volume
          secret:
            secretName: django-settings
```
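To check on the hook while ArgoCD syncs, plain kubectl works; this assumes the job lands in your app's namespace:

```bash
# watch the hook job get recreated and (hopefully) complete
kubectl get jobs -w
# read the migration output, useful when the sync hangs in PreSync
kubectl logs job/db-migration
```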
This works really well for my tiny Django app.
Edit: applied this to the Mastodon DB migration job too. It worked perfectly.