Building and tagging container images in CI


Been thinking a lot recently about how to manage versioning and deployment using Docker for a small-scale containerised solution. It’s different from a traditional release pipeline: the build artifacts are the container images with the latest code and configuration, rather than the CI producing a zip of the built application.

In a completely ideal containerised microservice solution all containers are loosely coupled and can be tested and built independently. Their CI configuration can be kept independent as well, with the CI and testing setup for the entire orchestrated solution taking the latest safe versions of the containers and performing integration/smoke tests against test/staging environments.

If your solution is smaller scale and the containers are linked together, this is my proposed setup.

Build

Images should be built consistently, so dependencies should be resolved and fixed at the point of build. For Node this is done with npm shrinkwrap, which generates a file pinning npm install to specific dependency versions. This should be done as part of development each time package.json is updated, to ensure all developers, as well as the images, use exactly the same package versions.
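As a minimal sketch of that workflow (assuming a standard Node project layout):

# run whenever package.json changes, then commit the generated file so
# developers and image builds all install identical dependency versions
npm install
npm shrinkwrap
git add npm-shrinkwrap.json
git commit -m "Update shrinkwrapped dependencies"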

On each commit to develop the image is built and tagged twice: once with “develop”, to mark it as the latest version of the develop branch code, and then with the version number from VERSION.md in the git repo (“1.0.1”). You cannot currently build with multiple tags, but building images with the same content/instructions does not duplicate image storage, thanks to Docker image layers.
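A rough sketch of that CI step, with made-up image names and assuming VERSION.md contains just the version number:

# build once tagged for develop, then apply the fixed version tag
VERSION=$(cat VERSION.md)   # e.g. 1.0.1
docker build -t myregistry/myservice:develop .
docker tag myregistry/myservice:develop myregistry/myservice:$VERSION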

Tagging

The “develop” tagged image is used as the latest current version of the image, deployed as part of automated builds to the Development environment; in the develop branch docker-compose.yml, all referenced images use that tag.

The version-number-tagged image, “1.0.1”, is kept as a fixed historic version for traceability, so for specific releases the tagged master docker-compose.yml will reference specific versioned images. This means we have a store of built, versioned images which can be deployed on demand to re-create and trace issues.
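Re-creating a historic release on demand might look something like this (hypothetical image name, assuming the master docker-compose.yml pins its images to that version):

# pull the fixed version and bring up the environment it describes
docker pull myregistry/myservice:1.0.1
docker-compose -f docker-compose.yml up -d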

On each significant release, the latest version image will be pushed to the image repository with the tag “latest” (corresponding to the code in the master branch).
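A hedged sketch of that release step, again with made-up names:

# promote the versioned image to latest and push both tags to the registry
docker tag myregistry/myservice:1.0.1 myregistry/myservice:latest
docker push myregistry/myservice:1.0.1
docker push myregistry/myservice:latest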

Infrastructure as code, containers as artifacts

As a developer, one of the things I love about containers is how fast they are to create, spin up and then destroy. For rapid testing of my code in production-like environments this is invaluable.

But containers offer another advantage: stability. Containers can be versioned, defined with static build-artifact copies of the custom components they will host and explicitly versioned dependencies (e.g. the nginx version). This allows for greater control in release management: you know exactly what version you have on an environment, not just of the custom code but of the infrastructure running it. Your containers become an artifact of your build process.
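As an illustration of that traceability (the image names and tags below are invented), each environment can report exactly which versions it is running:

# list running containers with the exact image version behind each one
docker ps --format '{{.Names}}\t{{.Image}}'
# nginx        myregistry/nginx-proxy:1.2.0
# dropwizard   myregistry/myservice:1.0.1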

While managing versions of software has long been standard practice, this isn’t commonly extended to the use of infrastructure as code (environment creation/update by scripts and tools like Puppet). Environments are commonly moving targets: separation of development and operations teams means software and environment releases are done independently, with environment dependencies and products being patched independently of functionality releases (security patching, version updates etc.). This can cause major regression issues which often can’t be anticipated until they hit pre-production (if you are lucky).

By using containerisation with versioning you can precisely control the release of environmental changes, something that is very important when dealing with a complex distributed architecture. You can release and test changes to individual servers, then trace issues back to the changes that introduced them. The containers that make up your infrastructure become build artifacts, which can be identified and updated like any other.

Here’s a sequence diagram showing how this can be introduced into your build process:

[Sequence diagram: containers as artifacts in the build process]

At the end of this process you have a fixed release deployed into production, with traceable changes to both custom code and infrastructure. Following this pattern allows upfront testing of infrastructure changes (including at developer level) and makes it very difficult to accidentally introduce differences between your test and production environments.

Docker nginx/dropwizard with Travis CI

Source

Update: I can’t recommend using Travis for this kind of CI; it turned out very flaky once I used real images rather than the simple echo tool images. Builds would sometimes time out after 15 minutes of hanging, either on building images or on attaching. Not sure why, but I imagine Travis isn’t really intended for loading multiple containers ~400 MB in size (JVM + deps).

An example using Docker to create an nginx image and a Dropwizard image, then link them together so nginx acts as a reverse proxy for Dropwizard. This can be extended to link together multiple Dropwizard applications. Docker Compose is used to create and configure the images.

[Terminal demo gif]

Requires:

To run locally:

gradle run
# ./go

To run containers:

gradle buildJar
docker-compose up -d

# retrieve your docker host IP from boot2docker
boot2docker ip

# curl dropwizard/nginx containers using docker host IP
curl http://192.168.59.103:8080/hello
curl http://192.168.59.103:8090/hello

Details

The docker-compose.yml file configures the two images, creating a dropwizard container and linking it to an nginx container. With the link in place, Docker creates a hosts entry for the dropwizard container, which can be used in the nginx config volumes-nginx-conf.d/default.conf when setting up the reverse proxy.
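If you want to see the link for yourself, something like the following (the container name is whatever Compose generated) shows the hosts entry from inside the nginx container:

# the linked dropwizard container is resolvable by its link name
docker exec -it <nginx-container-name> cat /etc/hosts
# ...
# 172.17.0.2    dropwizard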

The Travis build is based on moul/travis-docker.

A client image is used to test the nginx/dropwizard images, as you cannot curl them directly from Travis CI. Ideally, once Travis has started the containers in daemonised form, I would run a test script which uses curl/selenium to test the various endpoints exposed by nginx and hit Dropwizard. If this needs to be done via the client, then the results of the tests can be output to a write-enabled volume and parsed to determine the build result, as docker-compose will always return exit code 0 if the containers run.
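A rough sketch of that approach, assuming a client service is defined in docker-compose.yml with a write-enabled volume shared between the client container (/results) and the build workspace (./results); all names, ports and hostnames here are illustrative:

# start the nginx/dropwizard containers in the background
docker-compose up -d

# run the client once; it records PASS/FAIL in the shared volume rather than
# relying on exit codes
docker-compose run --rm client sh -c 'curl -fs http://nginx:8080/hello && echo PASS > /results/outcome || echo FAIL > /results/outcome'

# fail the CI build based on the recorded result
grep -q PASS results/outcome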

Ruby CI on Jenkins – Installation steps

Steps for setting up Ruby CI on Jenkins running on EC2 micro instance, Ubuntu 12.04.

  1. sudo apt-get update
  2. Install jre/jdk
    • sudo apt-get install openjdk-6-jre openjdk-6-jdk
  3. Install jenkins
  4. Switch user to jenkins user
    • sudo su - jenkins
  5. Install git
    • sudo apt-get install git-core
  6. Set jenkins git user/email
    • git config --global user.email "jenkins@none.com"
    • git config --global user.name "jenkins"
  7. Install rvm and install/use ruby 1.9.2 as jenkins user (takes ages…)
  8. Set up the Jenkins project and add a bash script build step to call rake (needs all of the following lines to set paths, install bundler, get gems and run rake)
    • #!/bin/bash
      source ~/.bash_profile
      rvm use 1.9.2
      gem install bundler
      bundle install
      bundle exec rake

Call explicit rake tasks in the build step to perform specific build/deploy tasks.
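For example, a deploy build might swap the final line for explicit tasks (the task names below are hypothetical and depend on your Rakefile):

#!/bin/bash
# same setup as above, but calling specific rake tasks instead of the default
source ~/.bash_profile
rvm use 1.9.2
gem install bundler
bundle install
bundle exec rake spec deploy:staging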