Handle Schema Migrations In Kubernetes The Proper Way

Quick Problem Recap

You have a service. In order to operate, it depends on a database.

Once the database schema gets an update, you also want to deploy a new version of your service.

There are different approaches to this problem. In my experience, I have seen these solutions:

  • run migrations before the app starts (see the sketch after this list) - what if you want to start 100 replicas at the same time?
  • run migrations before the app starts as an initContainer - does it really solve anything more than the one above?
  • a separate job
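For context, the first approach usually boils down to a wrapper entrypoint baked into the application image; a minimal sketch (the migrate binary and app paths are illustrative):

#!/bin/sh
# entrypoint.sh: run migrations, then start the app.
# With 100 replicas starting at once, every pod races to run the same migrations.
./migrate up && exec ./app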

So far, the separate job approach works best. But what happens if your migration fails? Will the new version of the service work with a semi-broken schema?

Below I’ll show you how to make consistent zero-downtime database migrations and service deployments in one go.

Kubernetes Job

job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrations-$VERSION
  labels:
    app: migrations
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: migrations
    spec:
      containers:
      - name: migrations
        image: myrepo/migrations:$VERSION
        imagePullPolicy: Always
        env:
        - name: DBHOST
          value: dbhost.local
        - name: DBUSER
          value: user
        - name: DBPASS
          value: pass
        - name: DBNAME
          value: database
      restartPolicy: Never

backoffLimit

Specifies how many times the Job will be retried after a failure before the whole Job is marked as failed.

If the migrations fail on the second attempt, the Job will fail completely and the new version of the application won’t run.
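If you want to check whether the Job has been hitting failures, you can look at its failure count (the Job name follows the $VERSION convention from job.yaml; an empty result means no failed pods):

kubectl get job migrations-1.2.3 -o jsonpath='{.status.failed}'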

restartPolicy

Indicates whether the pod itself should be restarted in case of failure.

This is separate from the Job-level retry: with restartPolicy: Never, a failed pod is not restarted in place; the Job controller creates a new pod instead, which counts against backoffLimit.

Testing the job

VERSION=1.2.3 envsubst < job.yaml | kubectl apply -f -

➜  ~ k get job -l app=migrations
NAME               COMPLETIONS   DURATION   AGE
migrations-1.2.3   1/1           4s         4s

➜  ~ k get pod -l app=migrations
NAME                     READY   STATUS             RESTARTS   AGE
migrations-1.2.3-mxj2h   0/1     Completed          0          12s
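
To inspect what the migrations actually did, you can read the Job's logs (the Job name follows the $VERSION convention from job.yaml):

kubectl logs job/migrations-1.2.3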

Using init container to delay pod startup

We will include a container with the k8s-wait-for script as an initContainer in all services that depend on database schema migrations.

deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service
spec:
  selector:
    matchLabels:
      app: service
  template:
    metadata:
      labels:
        app: service
    spec:
      # Wait for the migrations Job of the same version to complete
      # before starting the main container
      initContainers:
      - name: wait-for-job
        image: groundnuty/k8s-wait-for:v1.3
        imagePullPolicy: Always
        args:
        - job
        - migrations-$VERSION
      # The application container itself (image name is illustrative)
      containers:
      - name: service
        image: myrepo/service:$VERSION

Once you have updated your deployment definition, apply it to test:

VERSION=1.2.3 envsubst < deploy.yaml | kubectl apply -f -
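
Until the migrations Job for that version has completed, the service pods will sit in the Init phase; once it succeeds, they move on to running the main container. You can watch this happen (the app=service label matches the deployment above):

kubectl get pods -l app=service --watch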

This might not be obvious, but you'll need to allow the pod's ServiceAccount to read Job statuses by creating a Role and a RoleBinding:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-status-reader
  namespace: $NAMESPACE
rules:
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-status-reader
  namespace: $NAMESPACE
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-status-reader
subjects:
- kind: ServiceAccount
  name: $SERVICE_ACCOUNT
  namespace: $NAMESPACE
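
The RBAC manifest can be applied with the same envsubst pattern (the rbac.yaml filename and the variable values here are illustrative):

NAMESPACE=default SERVICE_ACCOUNT=default envsubst < rbac.yaml | kubectl apply -f -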

Congrats! That's your first fully completed zero-downtime database migration and deployment with Kubernetes!