Managing MySQL Persistence with Docker Containers

I recently started using Docker to manage deployment of some applications I maintain as part of my work. Docker makes developing on a Mac and deploying to a Linux machine much simpler, and obviates a lot of configuration (absolute paths work fine). Moreover, it makes integration tests far less painful; I have Jenkins rebuild my containers and end-to-end test them each time I push to my repositories (they’re small apps so e2e tests are still quick).

Containerising applications is nice, but it does have some problems. The religion^Wconvention that all containers are supposed to be ephemeral is great until you want to store anything. My applications typically consist of two or three containers: an application server, the database, and optionally some processing backend (abusing the database as a job queue). When it comes to testing these, my containers really are ephemeral, so I can tear them down and rebuild them from scratch.

Unfortunately, I can’t tear down the deployed apps. This means a changed database schema ruins the nice workflow. This post shows how I got around that problem.

MySQL in Docker

The MySQL team has made a MySQL image available on Docker Hub. This makes it really easy to get a database running. Using docker-compose makes knitting the application to it a breeze too. The nice thing about the MySQL Docker images is that they work completely out of the box. There’s no need to layer your own stuff on top of the image. Instead, you can specify some scripts that run (once) to populate your database.
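As a single-container sketch, assuming the image's documented environment variables and init directory (names and passwords here are placeholders):

```shell
# Minimal sketch: run the official mysql image and seed it on first start.
# Any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d run once,
# when the data directory is first initialised.
docker run -d --name database \
  -e MYSQL_ROOT_PASSWORD=root_password \
  -e MYSQL_DATABASE=database \
  -v "$PWD/schema":/docker-entrypoint-initdb.d \
  mysql
```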

The MySQL image's documentation recommends you put the database on a volume. Doing this means that when you tear down the container, you can attach a new one to the same data. Compose does this for you automatically, so you can docker-compose stop database && docker-compose rm -f database at your leisure (assuming you’ve set the volume up in your compose file!).

This design does mean that your init scripts will never run more than once. That is probably a good thing, since they tend to start with DROP DATABASE foo. Unfortunately, it also means there’s no way to change your schema out of the box.

Database Migrations

The solution to the problem is fairly straightforward: schema changes need to be replayed on top of the deployed version. Each time you change the schema, create an SQL script with the relevant ALTER TABLE statements. Then, when the application is restarted, run the sequence of changes against the database.
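For instance, a migration script might look like this (the table and column names are made up for illustration):

```sql
-- migrations/001_add_created_at.sql (hypothetical file and table)
-- Adds a timestamp column to an existing table without touching its data.
ALTER TABLE users
  ADD COLUMN created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
```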

Using docker-compose, this consists of defining a new container that runs the mysql image, connects to the database, and runs each migration in turn. Naming the scripts so they sort lexically guarantees they run in a known order, so each migration only has to cope with the state left by its predecessor rather than being totally bulletproof. That doesn’t mean the SQL scripts shouldn’t contain defensive checks (especially if they transform existing data rather than just adding new, empty fields).
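The core of such a runner fits in a few lines of shell. This is not the actual mathewhall/mysql_migration script, just a sketch of the lexical-ordering idea (the real mysql invocation is left as a comment):

```shell
# Sketch of a migration runner. Assumed layout: migrations/NNN_description.sql.
# Shell glob expansion is lexical, so zero-padded numeric prefixes fix the order.
run_migrations() {
  for f in migrations/*.sql; do
    [ -e "$f" ] || continue   # no migrations present yet
    echo "applying $f"
    # A real runner would do something like:
    # mysql -h"$MYSQL_HOST" -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" \
    #   "$MYSQL_DATABASE" < "$f"
  done
}
```

Because 001 sorts before 002 sorts before 010, the loop applies migrations in the order they were written.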

I’ve included an example docker-compose.yml file that shows how this works. This uses the current mysql image to run a database and uses an image that includes the script to run the migrations (derived from the original mysql init script). The Dockerfile for the image is available on GitHub and the image is available on Docker Hub.

  database:
    image: mysql
    expose:
      - 3306
    volumes:
      - ./schema:/docker-entrypoint-initdb.d
      - /var/lib/mysql
    environment:
      MYSQL_DATABASE: database
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: root_password

  migration:
    image: mathewhall/mysql_migration
    volumes:
      - ./migrations:/docker-entrypoint-migrations.d
    links:
      - database
    environment:
      MYSQL_HOST: database
      MYSQL_DATABASE: database
      MYSQL_USER: user
      MYSQL_PASSWORD: password

When this compose file is run, the migration container will start and wait for the database to come up. Once it does, it runs the migrations in the host-linked migrations directory. The directory link ensures new changes will be picked up without having to rebuild an image or recreate the container. As long as the migration container is run (either automatically with docker-compose up or manually with docker-compose run migration) the database schema will be up to date.
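The wait-for-the-database step can be sketched generically. wait_for here is a hypothetical helper, not part of the published image; it retries a command until it succeeds or the attempt budget runs out:

```shell
# Sketch: retry a command until it succeeds, up to a given number of attempts.
# Usage: wait_for <attempts> <command> [args...]
wait_for() {
  attempts=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1   # gave up: the command never succeeded
    fi
    sleep 1
  done
}

# In the migration container, the check would be something like:
# wait_for 30 mysqladmin ping -h"$MYSQL_HOST" --silent
```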