CoreOS Rkt compared to Docker

Looking at CoreOS Rkt to get a comparison with Docker, as quite a few of the experienced operations developers I’ve spoken to about containers have raised concerns with Docker and prefer the pure, standardised container approach that Rkt (Rocket) is aimed at over Docker’s proprietary format and platform.

Comparison with Docker

Rkt is purely concerned with creating and running containers securely in isolation; it is not attempting to become a wider containerisation platform like Docker. It’s an implementation of the App Container (appc) specification and uses the specification’s ACI container images. Rkt can also run other types of container, such as Docker images. At the time of writing it is at v0.5.5 and is not recommended for production use.

Docker (as of v1.5.0) does not attempt to implement the appc specification and does not support ACI images. This makes Docker images an open but proprietary format, so users need third-party tools to generate ACI images from Docker images to allow their use in other container tools. Docker offers a large amount of tooling that Rkt cannot, such as a public repository for images and tools for integrating with hosting providers. For developers Docker is easier to use, as there are tools (boot2docker on Mac etc.) for using Docker directly from the command line on the host machine, while with Rkt you need to ssh onto a VM before running any Rkt commands. Docker’s documentation is better, as is the range of images available.

Rkt is considered more secure, as it runs without a daemon, whereas Docker needs a daemon running as root to manage containers and allocate resources. Rkt also uses a trusted key model to verify that an image you have downloaded is what you expect and has not been tampered with.

Conclusion

In practice, creating and using Rkt containers is very similar to Docker: a simple command line interface using a JSON container definition file that plays a similar role to a Dockerfile. The lack of documentation is a problem, as it’s hard to find examples of using Rkt or guides on best practices. Kubernetes has recently announced support for Rkt for managing containers instead of Docker, which is great as it gives you a choice of container format/tools.

Rkt is quite immature compared to Docker; right now it would be irresponsible to use it in production. When it reaches v1 and has been used in serious production environments it could well be a better choice than Docker due to its strong focus on standards and security. This may be a deciding factor between the two when containerisation becomes more widespread and hosting providers begin offering virtualisation via containers rather than virtual machines, to take advantage of the cost/density benefits.

Docker currently has a big advantage in ease of use, documentation and the platform it has built, which makes it a much better option for learning about containerisation and trying it out. Also, many cloud hosting providers are falling over themselves to offer containerisation, and right now Docker is the only option, giving it a huge lead over the alternatives. As there are tools to convert Docker images into ACI format, starting to develop and deploy containerised applications with Docker is not a huge lock-in risk, as it will be possible to change your mind later.

For developers, I’d recommend you start with Docker to get your head around the containerisation concepts, as the documentation is great, but you should be aware that Docker is not the only option. Containerisation has been around for ages, and Docker seems to be trying to position itself as a bridge between IaaS and PaaS, abstracting away important details that, as a developer, you need to understand to write production-ready code.

Useful links

Authentication and authorisation on Kubernetes cluster

This is a description of the steps to deploy the Docker authentication and authorisation solution (from an earlier blog post here) on a Kubernetes cluster, hosted on Google Cloud Platform, fully split into pods/services so it can be scaled and load balanced.

The original source is here. I used MySQL pods for persisting data to make the session/person pods stateless, which is described here.

Kubernetes architecture diagram

Based on the Kubernetes Guestbook tutorial.

Requires:

Setup

  1. Build microservices and copy jar/config into volumes
gradle buildJar

  2. Log in to gcloud and set the project/zone
gcloud auth login
gcloud config set project PROJECTID
gcloud config set compute/zone europe-west1-b
  3. Build the container images and publish to Google Container Registry
# Build
docker build -t stevena/replicated-nginx-lua:latest      -f kubernetes/replicated/dockerfiles/Dockerfile-nginx-lua .
docker build -t stevena/replicated-frontend:latest       -f kubernetes/replicated/dockerfiles/Dockerfile-frontend .
docker build -t stevena/replicated-authentication:latest -f kubernetes/replicated/dockerfiles/Dockerfile-authentication .
docker build -t stevena/replicated-authorisation:latest  -f kubernetes/replicated/dockerfiles/Dockerfile-authorisation .
docker build -t stevena/replicated-session:latest        -f kubernetes/replicated/dockerfiles/Dockerfile-session .
docker build -t stevena/replicated-person:latest         -f kubernetes/replicated/dockerfiles/Dockerfile-person .

# Tag
docker tag stevena/replicated-nginx-lua      gcr.io/PROJECTID/replicated-nginx-lua
docker tag stevena/replicated-frontend       gcr.io/PROJECTID/replicated-frontend
docker tag stevena/replicated-authentication gcr.io/PROJECTID/replicated-authentication
docker tag stevena/replicated-authorisation  gcr.io/PROJECTID/replicated-authorisation
docker tag stevena/replicated-session        gcr.io/PROJECTID/replicated-session
docker tag stevena/replicated-person         gcr.io/PROJECTID/replicated-person

# Publish
gcloud preview docker push gcr.io/PROJECTID/replicated-nginx-lua
gcloud preview docker push gcr.io/PROJECTID/replicated-frontend
gcloud preview docker push gcr.io/PROJECTID/replicated-authentication
gcloud preview docker push gcr.io/PROJECTID/replicated-authorisation
gcloud preview docker push gcr.io/PROJECTID/replicated-session
gcloud preview docker push gcr.io/PROJECTID/replicated-person
  4. Create the Google Cloud persistent disks for the MySQL databases
# size is 200GB for performance recommendations https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance
gcloud compute disks create --size 200GB replicated-session-mysql-disk
gcloud compute disks create --size 200GB replicated-person-mysql-disk
  5. Create the cluster, pods and services, and allow external web access
# Create cluster
gcloud alpha container clusters create replicated-ms-auth --num-nodes 7 --machine-type n1-standard-1
# 8 instances is the default max quota, so 7 nodes plus 1 master
gcloud config set container/cluster replicated-ms-auth

# Create Mysql pods and services
gcloud alpha container kubectl create -f kubernetes/replicated/pods/person-mysql-pod.yaml
gcloud alpha container kubectl create -f kubernetes/replicated/pods/session-mysql-pod.yaml

gcloud alpha container kubectl create -f kubernetes/replicated/services/person-mysql-service.yaml
gcloud alpha container kubectl create -f kubernetes/replicated/services/session-mysql-service.yaml

# Create microservices pods and services
gcloud alpha container kubectl create -f kubernetes/replicated/pods/frontend-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/frontend-service.json

gcloud alpha container kubectl create -f kubernetes/replicated/pods/authentication-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/authentication-service.json

gcloud alpha container kubectl create -f kubernetes/replicated/pods/authorisation-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/authorisation-service.json

gcloud alpha container kubectl create -f kubernetes/replicated/pods/session-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/session-service.json

gcloud alpha container kubectl create -f kubernetes/replicated/pods/person-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/person-service.json

# Create Nginx pod and service with load balancer
gcloud alpha container kubectl create -f kubernetes/replicated/pods/nginx-lua-pod.json
gcloud alpha container kubectl create -f kubernetes/replicated/services/nginx-lua-service.json

# Allow external web access
gcloud compute firewall-rules create k8s-replicated-ms-auth-node-80 --allow tcp:80 --target-tags k8s-replicated-ms-auth-node

# DEBUG
# check pod logs with `gcloud alpha container kubectl log nginx-lua-2mspc`,
# ssh onto node with `gcloud compute ssh k8s-replicated-ms-auth-node-1`, access container `sudo docker exec -it CONTAINERID bash`

Clean up

gcloud alpha container clusters delete replicated-ms-auth
gcloud compute firewall-rules delete k8s-replicated-ms-auth-node-80
gcloud compute disks delete replicated-session-mysql-disk
gcloud compute disks delete replicated-person-mysql-disk

# NOTE you must delete these as if you try to recreate your service with `createExternalLoadBalancer=true` it will fail
# silently if existing target-pools and forwarding-rules exist for your nodes
# Get IDs from `gcloud compute forwarding-rules list` and `gcloud compute target-pools list`
gcloud compute forwarding-rules delete RULEID
gcloud compute target-pools delete POOLID

Conclusion

I’m very impressed with how easy it was to make pods and services for each of my microservices. While there is an individual JSON/YAML definition file per pod and service, most are identical except for names/labels and minor configuration. Kubernetes wires services (which are available across the cluster) to pods using labels, so any pod labelled “name=authentication” is selected as part of the Authentication service. Traffic to services is distributed randomly across the pods by kube-proxy running on each node.
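To illustrate the label wiring, below is a rough sketch of a pod and its matching service. This is not one of the repo’s actual files, the layout follows the v1-style Kubernetes API (which may differ from the version available at the time), and the port number is an assumption.

# Hypothetical pod/service sketch showing how labels connect them.
apiVersion: v1
kind: Pod
metadata:
  name: authentication
  labels:
    name: authentication          # the label the service selects on
spec:
  containers:
    - name: authentication
      image: gcr.io/PROJECTID/replicated-authentication
      ports:
        - containerPort: 8082     # assumed port for the Dropwizard service
---
apiVersion: v1
kind: Service
metadata:
  name: authentication
spec:
  selector:
    name: authentication          # any pod carrying this label receives traffic
  ports:
    - port: 8082

Any number of pods can carry the name=authentication label, and kube-proxy will spread requests to the service across them.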

It took a little time to understand that the pods are hosted randomly across the cluster nodes; their host location isn’t meant to be important, as pods can be created and destroyed as needed on the cluster. In this setup you could kill individual pods and the replication controllers would just spin up replacements. Even killing individual cluster nodes should not affect the system, as there are redundant pods spread across the other nodes.
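That replacement behaviour comes from the replication controllers. A sketch along these lines (again, not the repo’s exact file) keeps two nginx-lua pods alive and recreates them if they or their nodes die, which is also why pod names end up with generated suffixes like nginx-lua-2mspc.

# Hypothetical replication controller sketch; Kubernetes keeps 'replicas' pods
# matching the selector running, recreating any that disappear.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-lua
spec:
  replicas: 2
  selector:
    name: nginx-lua
  template:
    metadata:
      labels:
        name: nginx-lua           # pods created from this template match the nginx-lua service
    spec:
      containers:
        - name: nginx-lua
          image: gcr.io/PROJECTID/replicated-nginx-lua
          ports:
            - containerPort: 80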

While I’ve created this solution using Google Cloud Platform and its tools, Kubernetes is platform independent, so you can create your own cluster on any servers (see here for a Red Hat tutorial). This means you aren’t locked into a provider: you can deploy your solution using Kubernetes to control your containers on multiple providers and still meet project-specific hosting requirements in production (security etc.).

Future improvements and questions:

  • Use environment variables for configuration rather than hard-coding values in the Dockerfiles
  • Find a better way to manage logging/monitoring (graphite container?)
  • Find a better way to startup and restart pods
  • Find a way to force nodes to retrieve the latest version of the image from a repository (currently it’s caching and not checking for updates)
  • Fully script building the cluster
  • How to do rolling updates
  • How to manage database updates/migrations/backups

Kubernetes persisting data using MySql and Persistent Disk

This is a description of the steps to deploy a microservice on a Kubernetes cluster with persistent storage via a database pod.

Source here.

I’m using an existing microservice I created for storing session details, described here.

Based on the Kubernetes Using Persistent Disks tutorial.

Requires:

Setup

  1. Build microservices and copy jar/config into volumes
gradle buildJar

You can test DB access locally by spinning up a MySQL container, setting the database environment variable to create the session DB, then starting the microservice container linked to the MySQL container. This mirrors how it will connect to the Kubernetes data pod when it is run as a service (connecting by service host name).

# MySql container with environment variable to create DB and link to session microservice container
docker run --name session-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=session -d mysql:latest
docker run -it --link session-mysql:session-mysql --rm -p 8084:8084 stevena/persistent-disks-session:latest
  2. Log in to gcloud and set the project/zone
gcloud auth login
gcloud config set project PROJECTID
gcloud config set compute/zone europe-west1-b
  3. Build the container images and publish to Google Container Registry

These container images are largely the same as the definitions from the Development Docker Compose file with the exception that the jars/config files are copied into the images to avoid needing mounted volumes.

# Build
docker build -t stevena/persistent-disks-session:latest -f kubernetes/persistent-disks/Dockerfile-session .

# Tag
docker tag stevena/persistent-disks-session gcr.io/PROJECTID/persistent-disks-session

# Publish
gcloud preview docker push gcr.io/PROJECTID/persistent-disks-session
  4. Create the Google Cloud persistent disk for MySQL
# size is 200GB for performance recommendations https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance
gcloud compute disks create --size 200GB persistent-disks-session-mysql-disk
  5. Create the cluster and pods, and allow external web access
# Create cluster
gcloud alpha container clusters create session-persist --num-nodes 2 --machine-type g1-small
gcloud config set container/cluster session-persist

# MySql pod and Service
gcloud alpha container kubectl create -f kubernetes/persistent-disks/mysql-pod.yaml
gcloud alpha container kubectl create -f kubernetes/persistent-disks/mysql-service.yaml

# Session pod
gcloud alpha container kubectl create -f kubernetes/persistent-disks/session-pod.yaml
# see pod and get external IP from HOST column `gcloud alpha container kubectl get pods`

# Allow external web access
gcloud compute firewall-rules create k8s-session-persist-node-80 --allow tcp:80 --target-tags k8s-session-persist-node

# cURL service
curl 'http://PODIP/api/sessions' --data-binary '1234'
curl 'http://PODIP/api/sessions/ACCESSTOKENFROMABOVE'

# DEBUG
# check pod logs with `gcloud alpha container kubectl log single-session-persist session`,
# ssh onto node with `gcloud compute ssh k8s-session-persist-node-1`, access container `sudo docker exec -it CONTAINERID bash`

# Delete MySql pod and restart to test persisted data
gcloud alpha container kubectl delete -f kubernetes/persistent-disks/mysql-pod.yaml
# curl service and see error as DB is down
gcloud alpha container kubectl create -f kubernetes/persistent-disks/mysql-pod.yaml
# curl service with previously created token and see it has restarted with data from persistent disk

Clean up

gcloud alpha container clusters delete session-persist
gcloud compute firewall-rules delete k8s-session-persist-node-80
gcloud compute disks delete persistent-disks-session-mysql-disk

Conclusion

I like the idea of easily creating database containers per microservice, saving all persisted data on a store that can be easily backed up and controlled. Using this approach avoids the performance trap of a monolithic database holding all the data for the entire solution. The MySQL pod started up extremely fast and there were no problems when I deleted and then restarted it.
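For reference, the MySQL pod follows the pattern sketched below. This is not the repo’s exact mysql-pod.yaml: the naming, database settings and root account mirror the local docker run above, the layout follows the v1-style Kubernetes API, and the mount path is an assumption.

# Hypothetical MySQL pod sketch backed by the Google Cloud persistent disk.
apiVersion: v1
kind: Pod
metadata:
  name: session-mysql
  labels:
    name: session-mysql            # selected by the MySQL service the session pod connects to
spec:
  containers:
    - name: mysql
      image: mysql:latest
      env:
        - name: MYSQL_ROOT_PASSWORD    # root access used for simplicity, as noted above
          value: root
        - name: MYSQL_DATABASE
          value: session
      ports:
        - containerPort: 3306
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql    # MySQL's data directory lives on the persistent disk
  volumes:
    - name: mysql-persistent-storage
      gcePersistentDisk:
        pdName: persistent-disks-session-mysql-disk   # the disk created with gcloud above
        fsType: ext4

Because the data directory sits on the persistent disk rather than inside the container, deleting and recreating the pod (as in the test above) brings the data back.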

Still, a lot of things would need to be considered: replication, user control (I used root access for simplicity) and data migration/updates. Most likely you would need a custom Dockerfile per data store, containing the logic for how to set up and control that container. I think I would need to sit down with a DBA to discuss how you could manage and maintain production databases using this approach.

Kubernetes authentication and authorisation single pod

This is a description of the steps to deploy a complex microservice solution to Google Cloud Platform using Kubernetes, as a single pod.

This is based on my Docker authentication and authorisation solution, which I’ve written about here.

Source

The solution is a web application using microservices to implement authentication and authorisation, consisting of 5 microservices and Nginx with Lua scripting, described here.

Requires:

Setup

  1. Build microservices and copy jar/config into volumes
gradle buildJar
  2. Log in to gcloud and set the project/zone
gcloud auth login
gcloud config set project PROJECTID
gcloud config set compute/zone europe-west1-b
  3. Build the container images and publish to Google Container Registry

These container images are largely the same as the definitions from the Development Docker Compose file with the exception that the jars/config files are copied into the images to avoid needing mounted volumes.

# Build
docker build -t stevena/single-pod-nginx-lua:latest      -f kubernetes/single-pod-solution/Dockerfile-nginx-lua .
docker build -t stevena/single-pod-frontend:latest       -f kubernetes/single-pod-solution/Dockerfile-frontend .
docker build -t stevena/single-pod-authentication:latest -f kubernetes/single-pod-solution/Dockerfile-authentication .
docker build -t stevena/single-pod-authorisation:latest  -f kubernetes/single-pod-solution/Dockerfile-authorisation .
docker build -t stevena/single-pod-session:latest        -f kubernetes/single-pod-solution/Dockerfile-session .
docker build -t stevena/single-pod-person:latest         -f kubernetes/single-pod-solution/Dockerfile-person .

# Tag
docker tag stevena/single-pod-nginx-lua      gcr.io/PROJECTID/single-pod-nginx-lua
docker tag stevena/single-pod-frontend       gcr.io/PROJECTID/single-pod-frontend
docker tag stevena/single-pod-authentication gcr.io/PROJECTID/single-pod-authentication
docker tag stevena/single-pod-authorisation  gcr.io/PROJECTID/single-pod-authorisation
docker tag stevena/single-pod-session        gcr.io/PROJECTID/single-pod-session
docker tag stevena/single-pod-person         gcr.io/PROJECTID/single-pod-person

# Publish
gcloud preview docker push gcr.io/PROJECTID/single-pod-nginx-lua
gcloud preview docker push gcr.io/PROJECTID/single-pod-frontend
gcloud preview docker push gcr.io/PROJECTID/single-pod-authentication
gcloud preview docker push gcr.io/PROJECTID/single-pod-authorisation
gcloud preview docker push gcr.io/PROJECTID/single-pod-session
gcloud preview docker push gcr.io/PROJECTID/single-pod-person
  4. Create the cluster and pod, and allow external web access
# Create cluster
gcloud alpha container clusters create ms-auth --num-nodes 1 --machine-type g1-small
gcloud config set container/cluster ms-auth

# Pods
gcloud alpha container kubectl create -f kubernetes/single-pod-solution/single-pod.json
# see pod and get external IP from HOST column `gcloud alpha container kubectl get pods`

# Allow external web access
gcloud compute firewall-rules create k8s-ms-auth-node-80 --allow tcp:80 --target-tags k8s-ms-auth-node

# View site by pod external IP

# DEBUG
# check pod logs with `gcloud alpha container kubectl log single-ms-auth person`,
# ssh onto node with `gcloud compute ssh k8s-ms-auth-node-1`, access container `sudo docker exec -it CONTAINERID bash`

Clean up

gcloud alpha container clusters delete ms-auth
gcloud compute firewall-rules delete k8s-ms-auth-node-80

Conclusion

This architecture is a single pod running on a single node with all the components. It’s basically the same as running it locally for development and doesn’t offer any horizontal scaling. It didn’t require many changes to the configuration, only updating server names to localhost (containers in a pod can address each other locally). It could be scaled by creating a data persistence pod for a database, updating the config to point all the services which need persistence at it, then placing a load balancer in front of the pod and creating multiple nodes hosting the web/data persistence pods.
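To make the localhost point concrete, here is a cut-down sketch of the single pod definition (the actual single-pod.json is JSON and lists ports and image versions; the container names other than person are assumptions, as is nginx-lua being the only container with an exposed port).

# Hypothetical single-pod sketch: all six containers share the pod's network
# namespace, so they reach each other on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: single-ms-auth
spec:
  containers:
    - name: nginx-lua
      image: gcr.io/PROJECTID/single-pod-nginx-lua
      ports:
        - containerPort: 80        # the only port exposed outside the pod
    - name: frontend
      image: gcr.io/PROJECTID/single-pod-frontend
    - name: authentication
      image: gcr.io/PROJECTID/single-pod-authentication
    - name: authorisation
      image: gcr.io/PROJECTID/single-pod-authorisation
    - name: session
      image: gcr.io/PROJECTID/single-pod-session
    - name: person
      image: gcr.io/PROJECTID/single-pod-person
# Nginx proxies to the other containers via http://localhost:PORT, which is why
# the server names in its config could simply be changed to localhost.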

Going forward I would like to work with volumes, split the services into individual pods/services to allow full scaling and try out separate data persistence.

I noticed a couple of things while trying to get this working:

  • You can’t open firewall access to the pod on non-standard ports (e.g. port 8081); something blocks access to those ports from outside the node. I ended up ssh’ing onto the node and testing these locally.
  • The nodes won’t automatically check for the latest version of the image if you update latest and push it to the repository. There is probably some command to force this, or you need to name your versions explicitly (see the sketch below).
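One likely way around the image-caching issue (an assumption on my part, not something I have verified on this cluster) is to stop relying on the latest tag: push an explicitly versioned image and reference that version in the pod definition, possibly combined with an image pull policy if the Kubernetes version supports it.

# Hypothetical container spec fragment: pin an explicit tag instead of latest,
# and/or force a pull when the pod starts.
containers:
  - name: person
    image: gcr.io/PROJECTID/single-pod-person:v2   # pushed with an explicit tag rather than latest
    imagePullPolicy: Always                        # support depends on the Kubernetes version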

Publishing custom docker image and running on cluster in Kubernetes

Steps to create a custom Nginx instance with Lua scripting running on a Kubernetes cluster with a single Pod.

It serves as a simple example of how to create and publish a custom Docker image to the Google Container Registry.

Source here.

Requires:

Building and publishing the custom Nginx image

To use a non-standard image in Google Cloud you will first need to build and publish it to Google Container Registry.

# Build
docker build -t stevena/nginx-lua-example:latest kubernetes/nginx-lua-example/.
# Test image locally by running 'docker run -i -p 80:80 stevena/nginx-lua-example'

# Tag image with Google Cloud project id
docker tag stevena/nginx-lua-example gcr.io/PROJECTID/nginx-lua-example

# Push the image to the registry
gcloud preview docker push gcr.io/PROJECTID/nginx-lua-example

Setup

# Login
gcloud auth login

# Set project (created through dashboard)
gcloud config set project PROJECTID

# Set your zone
gcloud config set compute/zone europe-west1-b

# Create cluster with one node
gcloud alpha container clusters create nginx-lua --num-nodes 1 --machine-type g1-small

# Set to use the cluster
gcloud config set container/cluster nginx-lua

# Create the Pod
gcloud alpha container kubectl create -f kubernetes/nginx-lua-example/nginx-lua-example-pod.json

# allow external traffic
gcloud compute firewall-rules create nginx-lua-node-80 --allow tcp:80 --target-tags k8s-nginx-lua-node

# get IP from the HOST column for the Pod (not IP column)
gcloud alpha container kubectl get pod nginx-lua-example

You should now be able to curl/visit the external IP of your pod and see “Hello world by Lua!”.

Clean up

gcloud alpha container clusters delete nginx-lua

gcloud compute firewall-rules delete nginx-lua-node-80

Microservice authentication and authorisation using Docker

I’ve created a sample implementation of the microservice authentication and authorisation pattern I described in previous blog posts (here for the pattern, here for how it could scale). It uses Nginx with Lua and Dropwizard for the microservices, provisioned into containers using Docker.

Source: here

Requires:

I created this project to test using Docker as part of the development process to reduce the separation between developers and operations. The idea is that developers create and maintain both the code and the containers their code will run in, including the scripts/tools used to configure and set up those containers. Hopefully this reduces the knowledge gap that forms a barrier between developers and operations on projects, causing problems when developers push code that breaks in production (“throwing it over the wall” to operations).

I’m aware that Docker and containers in general are not a cure-all for ‘devOps’; they are only an abstraction that tries to make your applications run in an environment as similar to production as possible and make deployment/setup more consistent. Containers running locally or on a test environment are not the same as the solution running in production. There are concerns about performance, networking, configuration and security which developers need to understand in order to produce truly production-ready code that de-risks regular releases. Creating a ‘devOps’ culture that reduces the time needed to release and increases the frequency of releases requires a change in process and thinking, not just technology.

Running the containers

# Build microservices and copy their files into volume directories
gradle buildJar

# Run containers with dev architecture
docker-compose -f dev-docker-compose.yml up

# curl your boot2docker VM IP on port 8080 to get the login page, logs are stored in docker/volume-logs

Details

The solution is composed of microservices, using Nginx as a reverse proxy and Lua scripts to control authentication/sessions. It uses Docker and Docker Compose to build container images which are deployed onto a Docker host VM.

Microservices

The solution is split into small web services focused on a specific functional area so they can be developed and maintained individually. Each one has its own data store and can be deployed or updated without affecting the others.

  • Authentication – used to authenticate users against a set of stored credentials
  • Authorisation – used to check authenticated users’ permissions to perform actions
  • Frontend – HTML UI wrapper for the login/person functionality
  • Person – used to retrieve and update person details, intended as a simple example of an entity focused microservice which links into the Authorisation microservice for permissions
  • Session – used to create and validate accessTokens for authenticated users

There is an Api library containing objects used by multiple services (for a real solution this should be broken up into API-specific versioned libraries for use by the various clients, e.g. personApi, authorisationApi).

Nginx reverse proxy

Nginx is used as the reverse proxy to access the Frontend microservice, and it also wires together the authentication and session management using Lua scripts. To provision the Nginx container I created a Dockerfile which installs Nginx with OpenResty.

  • Dockerfile – defines the Nginx container image, with modules for Lua scripting
  • nginx.conf – main config for Nginx, defines the endpoints available and calls the Lua scripts for access and authentication
  • access.lua – run every time a request is received; defines a list of endpoints which do not require authentication, and for all other endpoints checks for an accessToken cookie in the request header, then validates it against the Session microservice
  • authenticate.lua – run when a user posts to /login; calls the Authentication microservice to check the credentials, then calls the Session microservice to create an accessToken for the newly authenticated session, and finally returns a 302 response with the accessToken in a cookie for future authenticated requests
  • logout.lua – run when a user calls /logout; calls the Session microservice to delete the user’s accessToken

Authentication and authorisation sequence diagram


Docker containers and volumes

The interesting thing about using Docker with microservices is that you can define a container image per microservice, then host those containers in various arrangements of Docker host machines to form your architecture. The containers can be created and destroyed easily, give guarantees of isolation from other containers, and only expose what you define (ports/folders etc.). This makes them easily portable between hosts compared to something like a Puppet module, which needs more care and configuration to ensure it can operate on the Puppet host.

To develop and test the solution locally I used a development architecture defined in a Docker Compose yaml file (here). This creates a number of containers with volumes and exposed ports, then links them together appropriately.
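As a rough idea of what that Compose file contains, here is a cut-down sketch (not the real dev-docker-compose.yml; the base image, command and volume paths are assumptions):

# Hypothetical cut-down dev Compose file (Compose v1 syntax).
nginxlua:
  build: docker/nginx-lua              # the custom Nginx + Lua image
  ports:
    - "8080:80"                        # exposed on the boot2docker VM, as curled above
  links:
    - frontend
    - authentication
    - session
  volumes:
    - ./docker/volume-logs:/var/log/nginx

frontend:
  image: java:8                        # assumed base image running the Dropwizard jar
  volumes:
    - ./docker/volume-frontend:/app    # jar/config copied here by gradle buildJar
  command: java -jar /app/frontend.jar server /app/frontend.yml

# authentication, authorisation, session and person follow the same pattern as
# frontend, each with its own volume directory and port.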

The diagrams below show architectures which can be built using these containers.

Development architecture

Development architecture diagram

This is a small-scale architecture intended for local development. Developers can spin it up quickly and work on the full application stack. It uses a single Docker host (the boot2docker VM) with a single container for each microservice, which means that if any container or service fails there is no redundancy.

Small scaled architecture

Small scaled architecture diagram

This is a larger scale architecture, using HAProxy to load balance and introduce redundancy. This architecture allows scaling the business microservices to handle increasing/decreasing load.

Large scaling architecture

Large scaling architecture diagram

This is an example production architecture, running on multiple Docker hosts with redundancy for all microservices and load balancing for the web servers. The number of instances of each container can be increased or decreased dynamically based on the load on each service, and each container can be updated without downtime via rolling updates.

On a real production architecture you would want to include:

  • Healthchecks
  • Monitoring (e.g. Dropwizard Metrics pushing to Graphite)
  • Dynamic scaling based on load monitoring
  • Periodic backups of persisted data
  • Security testing

Conclusions

I found working with Docker extremely easy; the tooling and available images made it simple to create containers that did what I needed. For development, the speed at which I could create and start containers for the microservices was amazing: 5 seconds to spin up the entire 6-container solution with Docker Compose. Compared to development using individual VMs provisioned by Puppet and Vagrant this was lightning fast. Accessing the data/logs on the containers was also simple, making debugging a lot easier, and remote debugging by opening ports was possible too.

I still have some concerns about how production-ready my containers would be and what I would need to do to make them secure. I did not touch on a lot of the work which would be necessary to create and provision the Docker hosts themselves, including configuring the microservice and Nginx containers per host. For a reasonably sized architecture this would require a tool like Puppet anyway, so it would not save much effort on the operations side.

I would like the chance to use some sort of containerisation in a real project and see how it works out on the development side, in operations for deployment to environments, and in actual production use. For now I’d definitely recommend developers try it out for defining and running their local development environments as an alternative to complex Boxen/Vagrant setups.