Managing data store changes in containers


When creating microservices it's common to keep their persistence stores loosely coupled, so that changes to one service do not affect others. Each service should manage its own concerns, own the retrieval and updating of its data, and define how and where it gets it from.

When using a relational database as a store there is an additional problem: each release may require schema or data updates. This is the well-known problem of database migration (also called schema migration or database change management).

There are a large number of tools for doing this: Flyway, Liquibase, DbUp. They allow you to define the schema/data for your service as a series of ordered migration scripts, which can be applied to your database regardless of its state, whether it's a fresh DB or an existing one with production data.
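The core idea behind these tools can be sketched in a few lines of Node. This is a minimal sketch, not any tool's real API: the script names and the `execute` callback below are hypothetical stand-ins for real SQL execution, and `applied` stands in for the schema-history table the tools keep.

```javascript
// Minimal sketch of the ordered-migration idea behind Flyway/Liquibase/DbUp.
// `applied` stands in for the tool's schema-history table; the names and
// shapes here are hypothetical, not any specific tool's API.
function runMigrations(migrations, applied, execute) {
  // Sort by name so "001_...", "002_..." always run in a deterministic order.
  const pending = migrations
    .filter((m) => !applied.includes(m.name))
    .sort((a, b) => a.name.localeCompare(b.name));

  for (const migration of pending) {
    execute(migration.sql); // run the script against the database
    applied.push(migration.name); // record it so it is never re-run
  }
  return pending.map((m) => m.name);
}

// A fresh DB gets every script; one that has already run 001 gets only 002.
const migrations = [
  { name: "001_create_people", sql: "CREATE TABLE people (id INT)" },
  { name: "002_add_middle_name", sql: "ALTER TABLE people ADD middle_name VARCHAR(50)" },
];
const ran = runMigrations(migrations, ["001_create_people"], () => {});
// ran is ["002_add_middle_name"]
```

Because the runner only ever applies scripts it hasn't recorded, the same command is safe against an empty database and a production one, which is what makes it suitable for a container's startup path.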

When your container service needs a relational database with a specific schema and you are performing continuous delivery, you will need to handle this problem. Traditionally it is handled separately from the service by CI, where a Jenkins/TeamCity task runs the database migration tool before the task that deploys the updated release code for the service. You will have similar problems with containers that require config changes to non-relational stores (Redis/Mongo etc.).

This is still possible in a containerised deployment, but it has disadvantages. Your CI will need knowledge of, and a connection to, each container's data store, and must run the migration task for every container with a store. As the number of containers increases this adds more and more complexity to your CI, which has to be aware of all their needs and release changes.

To prevent this, the responsibility for updating a container's persistence store should sit with the developers of the container itself, as part of the container's definition, code and orchestration details. This lets the developers define what their persistence store is and how it should be updated each release, leaving CI responsible only for deploying the latest version of the containers.

node_modules/.bin/knex migrate:latest --env development

As an example of this I created a simple People API Node application and container, which has a dependency on a MySQL database with people data. Using Knex for database migration, the source defines the scripts necessary to set up the database or upgrade it to the latest version. The Dockerfile startup command waits for the database to be available, then runs the migration before starting the Node application. The containers necessary for the solution and the dependency on MySQL are defined and configured in the docker-compose.yml.
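The "wait for the database, migrate, then start" startup step can be sketched like this. It's a hedged sketch under assumptions: `connect`, `migrate` and `start` are hypothetical stand-ins for the real calls (a MySQL ping, shelling out to `knex migrate:latest`, and the app bootstrap), not a real driver API.

```javascript
// Hedged sketch of the container startup sequence described above: retry until
// the database accepts connections, run migrations, then start the app.
// The three callbacks are hypothetical stand-ins, not a real driver API.
async function startWhenReady(connect, migrate, start, retries = 10, delayMs = 1000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await connect(); // e.g. open and close a test connection to MySQL
      await migrate(); // e.g. shell out to `knex migrate:latest`
      return start();  // only start serving once the schema is current
    } catch (err) {
      if (attempt === retries) throw err; // give up; let the orchestrator restart us
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Putting this logic in the container's startup command is what keeps CI out of the picture: the container brings its schema up to date itself every time it starts.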

docker-compose up

For a web application example I created the People Web Node application, which wraps the API and displays the results as HTML. It has a docker-compose.yml that spins up containers for MySQL, node-people-api (using the latest image pushed to Docker Hub) and itself. As node-people-api manages its own store inside the container, node-people-web doesn't need any knowledge of the migration scripts used to set up the MySQL database.


Session data is evil

I’ve been working in ASP.MVC recently after working in Java for a long time. One of the things that struck me was the common use of session data in web applications.

Now I know that people can and do abuse sessions in Java, but the default routing and ease of access make using them more tempting in ASP.MVC. The standard routing convention of “/Controller/Action/:id” means you need to explicitly code RESTful paths that give you multiple IDs in URLs, like “order/2/item/3”, for non-trivial scenarios, and out-of-the-box convenience methods like “TempData” seem to offer magical persistence between requests. These incentives combine to make session data the path of least resistance in ASP.MVC.


Any data stored in session is inherently unreliable, and using it makes load balancing and scaling your application much more difficult. Once you use it, each instance of your web application must be able to find the user’s session data to reliably handle requests. Since it’s now extremely common to use multiple instances even for small applications (it would be irresponsible not to, for disaster recovery and redundancy), you will need to think about this before you deploy into production.

It also adds hidden complexity to testing your application. Each endpoint which relies on state stored in session needs to be tested with that application state simulated. This means you have at least two places in your code defining and using the same semi-structured data, which makes your tests complex/fragile and your code harder to maintain.


Once you make using session data part of your architecture it’s very hard to refactor and remove it. That little innocent use of TempData to store details from the last request will spread as Developers think “If it was ok there then it’s ok here…” and “one more can’t hurt” (the broken windows theory). Now your user flows in the web application rely on session stored details to go from screen A to B to C, and refactoring them means re-writing and testing a lot of the view/controller logic to replace the data held in session.

There are acceptable uses for session data in web applications; authentication is the obvious one. What they have in common is having alternative flows to cope when session data is not found, without breaking functionality.

If you over-rely on session data you are building a flaky application that is hard to scale and maintain, one that will at best limp into production. At worst it will fall over and take your users’ data with it.

There are common patterns and methods to avoid needing session data, below are some links to help:

Design for Devs – Change sequence diagrams

I’ve been asked a few times by junior developers how to get started in designing code, as if it’s some sort of art technique. In truth every developer is doing design, no one spontaneously writes complex formal logic. Most just do it in their head based on experience and patterns they’ve used before. For small well understood problems this is fine, but when you are dealing with larger or more complex changes doing a little design work up-front can really help clarify things and ultimately save time while giving a better solution.

I’m writing this to document a simple type of design I’ve used on many projects: a change sequence diagram. It’s one you can do quickly on paper or a whiteboard in ten minutes, and I’ve found it very helpful in thinking about what changes are required, the size of the overall change, and promoting re-use of existing code.

Here’s an example:

It’s a pretty simple variation of a sequence diagram, where you show the sequence of events which should occur as a series of interactions between the people/components involved. It normally starts with a person, like someone clicking a link on a web page, then shows how the system responds. The change part is about highlighting what components in each part of the system need to change to handle the functionality: what parts need to be added/updated/removed.

Doing this forces you to think up-front about what you will need to change and how the system will interact to get the end result. It ties the individual component changes to the overall user requirement, e.g. you’re not just adding a new database column and view field, you’re adding them so the user can see and update their middle name on the personal details screen. This helps you understand how the parts of your system interact and use consistent implementations and design patterns in your changes, plus identify the unit tests and test scenarios.

When you are done, the number and type of changes shows the scale of the overall change, useful for estimates, and breaks it down into manageable chunks of work. You’ll get the best results if you do it paired with someone or get someone else to review your design. Doing this checks that you aren’t breaking existing patterns in the code or missing something that could increase or decrease the complexity. You can expand it to include alternate flows and consider NFR’s for security and performance.

Next time you’re looking at a new requirement or user story give this a try, you’ll be surprised how easy it is to do and what you’ll get out of it.

Hadoop summit Dublin 2016


Just back from the Hadoop summit in Dublin, I thought I would give a write-up of the talks I went to and my impressions. All of the videos have been put up, so this could help you decide what to spend time on.

Overall it was good, with a nice spread of speakers covering highly technical topics, new products and business approaches. The keynotes were good: some promotional talks by sponsors, but balanced with some very good speakers covering interesting topics.

One impression I got was that no one was trying to answer the question of why people should use big data anymore; that has been accepted, and the topics have moved on to how best to use it. There were a lot of talks about security, governance and how to efficiently roll out big data analytics across organisations; loads of new products to manage analytics workflows and simplify user access to multiple resources; and organisational approaches, treating analytics as part of business strategy rather than cool new tools for individual projects.

One nitpick is they tried to push using a conference mobile app which required crazy permissions. No. I just wanted a schedule. A mobile first web site would have done the job and been more appropriate for the data conscious audience.


Enterprise data lake – Metadata and security – Hortonworks

Mentions of ‘Data lakes’ were all over the conference, as were security concerns about how to manage and govern data access when you start to roll out access across your organisation. This talk covered Hortonworks projects which are attempting to address these concerns, Apache Atlas and Ranger.

Atlas is all about tagging data in your resources to allow you to classify it, e.g. putting a ‘personal’ tag on columns in your Hive table which identify people, or an ‘expiry(1/1/2015)’ tag on tax data from 2012. Ranger is a security policy manager which uses the Atlas tags and has plugins that you add to your resources to control access, e.g. only Finance users can access tax data, and expiries are enforced.

You create policies to restrict and control who can do what to your resource data based on this metadata. This approach scales and follows your data as it is used, rather than attempting to control access at each individual resource as data is ingested, which becomes unmanageable as your data and resources grow; it also provides a single place to manage your policies and audits of access. Later talks suggested using automation to detect and tag data based on content, such as identifiable or sensitive data, to avoid having to find it manually.

Querying the IoT with streaming SQL

This talk was a bait and switch, not really about IoT, but it was still interesting. It was really about streaming SQL, which the presenter thinks will become a popular way to query streaming data across multiple tools. I agree with the idea: SQL is such a common query language, and most users would prefer not to keep learning tool-specific query languages.

The push for using streaming data is that your data is worth most when it is new and new data plus old is worth more. This means you should be trying to process your data as you get it, producing insights as quickly as possible. Streaming makes this possible.

He went into a lot of technical detail and examples of how you would use it as a super-set of SQL. Mentioned using Apache Calcite as a query optimiser to run these queries across your multiple data sources.

Advanced execution visualisation of Spark job – Hungarian Academy of Sciences

Talk from researchers who worked with local telcos to try and analyse mobile data. They won a Spark community award for their work in creating visualisations of Spark jobs to help find anomalies in data that cause jobs to finish slower.

They named this the ‘Bieber’ effect, after the spike in tweets caused by Justin Bieber (and other celebrities). This spike can hurt job execution if you are using Spark’s default random bucket allocation based on key hashes, as suddenly a load of work needs to be aggregated across multiple nodes, where it would be more efficient to partition it onto specific nodes closer to the data. The real example they found was cell tower usage spiking during a local football match.

They’ve created tools to view these spikes and to test in advance using samples, and aim to create ways to dynamically allocate and partition tasks based on these spikes to improve the efficiency of their jobs.

Award blog

Real world NoSQL schema design – MapR

Talk about how you can take advantage of the flexible schemas in NoSQL DBs to improve data read times, doing things like putting your data into the column names and keys.

Very useful for people doing analysis on large amounts of simple data or large de-normalised datasets.

TensorFlow – Large scale deep learning for intelligent computer systems – Google

Could you recognise a British Shorthair cat?

If not, you know less about cats than a server rack at Google. Expect that list of things to grow.

Good talk on how they are using machine learning at Google, using classified, tagged image data to train models that can recognise objects in other images, including things like whether the people in the picture are smiling.

Talked about TensorFlow, their open source deep learning project. If you are interested in machine learning I’d take a look at the videos they have up.

Migrating Hundreds of Pipelines in Docker containers – Spotify

I like containers, so was looking forward to this.

Good talk. It covered Spotify’s use of big data over the last 6 years, going from AWS-hosted Hadoop as a service, to running their own cluster, to their current move to Google Compute using Docker with their own container orchestration solution, Helios.

They are now working on a service, Styx (don’t search for Spotify Styx, you’ll just get heavy metal), which will allow them to do “execution as a service”. This is a very exciting idea, allowing users to define jobs to run along with the Docker images to execute them. It’s a great way to manage dependencies and resources for complex big data tasks, making it easier to offer self-service to users with governance.

Hadoop helps deliver high quality, low cost healthcare services – Healtrix

After a load of talks mainly about ROI and getting revenue it was nice to hear a talk about trying to give something back and improve quality of life. The speaker grew up in a poor Indian village and had experience of poor access to healthcare.

His talk was about providing at risk people with healthcare sensors (for blood sugar/pressure etc.) that connect to common mobile devices and send sensor data to backend servers that analyse the data. This can be used as part of predictive and preventative care to reduce the cost of unplanned hospital visits. Using this, healthcare providers can monitor patients with Diabetes or heart conditions, vary their drug prescriptions or advise appointments without waiting for the standard time between appointments.

This is especially important in areas with poor health coverage and bad transport links, as the data can move a lot easier than the patient can see a doctor.


Apache Hadoop YARN and the Docker container runtime – Hortonworks

Nice to know about this but the talk was pretty dry unless you are really interested in YARN. YARN supports running in Docker containers, and there are now some resources, provided by new Docker and YARN releases, on how to manage security and networking.

It did show that it’s possible to run YARN in YARN (YARNception), which apparently has real world uses for testing new versions of YARN without updating your existing version of YARN. YARN.


Organising the data lake – Information governance in a Big Data world – Mike Ferguson

More coverage of governance and security when using big data in your organisation, mainly from a business view rather than technical. If you are interested in how to roll out access of data across your organisation while centralising control and governance you should watch this talk.


Using natural language processing on non-textual data with MLLib – Hortonworks

Good talk, mainly about using Word2Vec (a Google algorithm for finding relationships between words) to analyse medical records and find links between diagnosis codes (US data). This can be used to find previously undocumented links between conditions to aid diagnosis or even predict undiagnosed conditions (note, not a doctor).

The approach could be used in many other contexts and seems very straightforward to apply.

How do you decide where your customer was – Turkcell

Slightly creepy talk (from the implications; the speaker was very nice and genuine) about how Turkcell, a telco, is using mobile cell tower data to analyse their customers’ movements, currently to predict demand for growth and roll out LTE upgrades. But they are also using it to get extremely valuable data about the movement and locations of customer demographics, which they can provide to businesses like shopping centres.

From a technical point of view it was interesting and gave a good perspective on the challenges of processing very high volumes of data in the real world.

Made me think, is my mobile company doing this? Then I realise of course, they would be stupid not to.


Using sequence statistics to fight advanced persistent threats – Ted Dunning – MapR

Great talk by an excellent speaker, highly recommend watching the video. Real world examples of large hacking attacks on businesses and insight into how large companies manage those threats.

It was about using very simple counting techniques with massive volumes of data and variables, comparing how often certain conditions (such as header values/orders, request timings etc.) occur together. Using these you can identify patterns of how normal requests look and detect the anomalous patterns used by attackers. This approach is simple and works “well enough” to detect attackers, who cannot know the internals of your servers in advance in order to mask themselves.

If you have an interest in security take a look at the talk.

Ted Dunning is a rock star, notice the abnormal co-occurrence of female audience members in the front row.


Videos for the keynotes aren’t published but I thought I should recognise some of the really interesting ones.

Data is beautiful – David McCandless

Standout talk from David McCandless, focused on how to use visualisations to show complex relationships and share insights: data visualisation as a language. Lots of great visual examples, showing the scale of costs of recent high-profile events, a timeline of ‘fear’ stories in the press, and the most common breakup times from scraping Facebook.

Very interesting talk and I’m going to look up more about his work. Check out his visualisation Twitter account for examples:

Technoethics – Emer Coleman – Irish gov

Talk about how big data is shaping society and how we should be considering the ethics of the software and services we are creating, both in private companies and government. Mentioned the UK’s snoopers’ charter, Facebook’s experiment in manipulating users’ emotions and Google’s ability to skew election results.

I do think we should consider the ethical impact of our work more in software development, trying to do more good and to stop ignoring the negative effects.

The incremental complexity trap

This is a common problem which developers face, particularly in agile projects where it’s normal for user stories to build on previous user stories, adding new functions to existing screens and flows.

It happens all the time. You start with something like a simple screen and do the necessary work to make it. Then you get a few new requirements: add some more fields, new validation for an alternate user flow, etc. You do the same, adding the fields to your screens and logic to the controllers. Then it happens again, then again and again. More user flows are added, more fields appear in some of those flows, the validation gets more complex. Your original simple screen, controller and validation logic is now a monster, unmaintainable and a nightmare to debug.

Often teams don’t do anything to solve it and just live with the problems. Commonly, by the time they realise what’s happening it’s easier to keep layering on the complexity than to deal with it properly. No one wants to be holding the bag when it’s time to stop and refactor everything.

This can be particularly bad if it’s happening in multiple places at the same time during a project. The effects of shortsighted decisions start to snowball, affecting development speed and increasing regression issues, and it’s not feasible to refactor everything (try selling a sprint full of that to a Product Owner).

So how do you avoid the trap?

  • Keep it simple

Establish code patterns that encourage separation of concerns, and make the team aware of them and how to repeat them.

  • Unit tests

Test complexity helps highlight when things are getting out of hand before it’s too late and make it much easier to refactor by reducing the chance of regression issues.

  • Anticipate it before it happens and design

This is the job of the technical architect: to know what requirements and stories are coming and how they will be implemented. If an area is going to accumulate a lot of complexity, it needs to be handled up-front or you will end up in the trap.

What can you do if you are already in it?

  • Stop making it worse

For complex classes, stop adding new layers of logic. Obvious, but the temptation will be there. Stop and plan a better approach; the sooner you do this, the easier it will be.

  • Don’t try for perfection and refactor everything

Big-bang refactors are risky and time consuming. Take part of the complexity and split it out, establishing a pattern for removing more.

  • Focus on your goals

Overly complex classes aren’t bad on principle. They are bad because they slow development and give bugs places to hide. Refactoring and creating a framework for adding new functionality can speed development and reduce the occurrence of defects, which also eat time. You should focus your effort on refactoring areas which will need more complexity later, investing time now to save more later.

Infrastructure as code, containers as artifacts

As a developer, one of the things I love about containers is how fast they are to create, spin up and then destroy. For rapid testing of my code in production-like environments this is invaluable.

But containers offer another advantage: stability. Containers can be versioned, defined with static build-artifact copies of the custom components they will host and explicitly versioned dependencies (e.g. the nginx version). This allows for greater control in release management: you know exactly what version you have on an environment, not just of the custom code but of the infrastructure running it. Your containers become an artifact of your build process.

While managing versions of software has long been standard practice, this isn’t commonly extended to the use of infrastructure as code (environment creation/update by scripts and tools like Puppet). Environments are commonly moving targets; the separation of development and operations teams means software and environment releases are done independently, with environment dependencies and products being patched independently of functionality releases (security patching, version updates etc.). This can cause major regression issues which often can’t be anticipated until they hit pre-production (if you are lucky).

By using containerisation with versioning you can release environmental changes with precise control, something that is very important when dealing with a complex distributed architecture. You can release and test changes to individual servers, then trace issues back to the changes introduced. The containers that make up your infrastructure become build artifacts, which can be identified and updated like any other.

Here’s a sequence diagram showing how this can be introduced into your build process:

Containers as artifacts (1)

At the end of this process you have a fixed release deployed into production, with traceable changes to both custom code and infrastructure. Following this pattern allows upfront testing of infrastructure changes (including developer level) and makes it very difficult to accidentally cause any differences between your test and production environments.

ASP.MVC Datatables server-side

This is an example implementation of JQuery Datatables with server-side processing. The source is here.



jQuery Datatables is a great tool: attach it to a table of results and it gives you quick and easy sorting/searching. Against a small dataset this works fine, but once you have more than 1000 records your page load is going to take a long time. To solve this, Datatables recommends server-side processing.

This code is an example of implementing server-side processing for an ASP.MVC web application, using a generic approach with Linq so that you can re-use it for different entities easily, with little code repetition. It also shows an implementation of full-word search across all columns, which is something the Javascript processing version offers but is very tricky to implement on the database side with decent performance. It’s a C# .NET implementation, but you can take the interfaces and calls from the controllers and convert the approach for Java or Ruby (missing the nice Linq stuff though).


I’ll skip the basic view/js details as that is easily available on the datatables documentation.

The request comes into the controller as a GET with all the sort/search details as query parameters (see here), and it expects a result matching this interface:

public interface IDatatablesResponse<T>
{
    int draw { get; set; }
    int recordsTotal { get; set; }
    int recordsFiltered { get; set; }
    IEnumerable<T> data { get; set; }
    string error { get; set; }
}

The controller extracts the parameters, creates the DB context and repository and makes three calls asynchronously:

  • get the total records
  • get the total filtered records
  • get the searched/sorted/paged data

The data is returned and Datatables Javascript uses it to render the table and controls for the correct searched/sorted/paged results.
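On the client side, the wiring that triggers those GET requests is a small piece of DataTables configuration. This is a hypothetical illustration: the URL and column names below are assumptions, not taken from the example source.

```javascript
// Hypothetical client-side setup for server-side processing. With
// serverSide: true, DataTables sends draw/start/length/search/order as
// query parameters on each interaction and renders whatever the
// controller returns in the IDatatablesResponse shape.
const dtConfig = {
  processing: true,  // show a "processing" indicator during requests
  serverSide: true,  // delegate paging/sorting/searching to the server
  ajax: { url: "/People/GetData", type: "GET" }, // assumed controller route
  columns: [
    { data: "Id" },
    { data: "Name" },
    { data: "DateOfBirthFormatted" },
  ],
};
// In the page: $('#peopleTable').DataTable(dtConfig); // needs jQuery + DataTables loaded
```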

The magic happens in the DatatablesRepository objects which handle those calls.

DatatablesRepository classes


public interface IDatatablesRepository<TEntity>
{
    Task<IEnumerable<TEntity>> GetPagedSortedFilteredListAsync(int start, int length, string orderColumnName, ListSortDirection order, string searchValue);
    Task<int> GetRecordsTotalAsync();
    Task<int> GetRecordsFilteredAsync(string searchValue);
    string GetSearchPropertyName();
}

The base class DatatablesRepository has a default implementation which provides generic logic for paging, searching and ordering an entity:

protected virtual IQueryable<TEntity> CreateQueryWithWhereAndOrderBy(string searchValue, string orderColumnName, ListSortDirection order)
{
    var query = CreateQuery(); // hypothetical helper: creation of the base entity query is elided in this excerpt
    query = GetWhereQueryForSearchValue(query, searchValue);
    query = AddOrderByToQuery(query, orderColumnName, order);
    return query;
}

protected virtual IQueryable<TEntity> GetWhereQueryForSearchValue(IQueryable<TEntity> queryable, string searchValue)
{
    string searchPropertyName = GetSearchPropertyName();
    if (!string.IsNullOrWhiteSpace(searchValue) && !string.IsNullOrWhiteSpace(searchPropertyName))
    {
        var searchValues = Regex.Split(searchValue, "\\s+");
        foreach (string value in searchValues)
        {
            if (!string.IsNullOrWhiteSpace(value))
            {
                queryable = queryable.Where(GetExpressionForPropertyContains(searchPropertyName, value));
            }
        }
    }
    return queryable;
}

protected virtual IQueryable<TEntity> AddOrderByToQuery(IQueryable<TEntity> query, string orderColumnName, ListSortDirection order)
{
    var orderDirectionMethod = order == ListSortDirection.Ascending
            ? "OrderBy"
            : "OrderByDescending";

    var type = typeof(TEntity);
    var property = type.GetProperty(orderColumnName);
    var parameter = Expression.Parameter(type, "p");
    var propertyAccess = Expression.MakeMemberAccess(parameter, property);
    var orderByExp = Expression.Lambda(propertyAccess, parameter);
    var filteredAndOrderedQuery = Expression.Call(typeof(Queryable), orderDirectionMethod, new Type[] { type, property.PropertyType }, query.Expression, Expression.Quote(orderByExp));

    return query.Provider.CreateQuery<TEntity>(filteredAndOrderedQuery);
}

The default implementation of the Where query (for searching) will only work if you provide a SearchPropertyName for a property that exists in the database and is a concatenation of all the values you want to search, in the format displayed.

You can override this with a custom method if your entity does not support this; here is an example from the Person entity:

public class PeopleDatatablesRepository : DatatablesRepository<Person>
{
    protected override IQueryable<Person> GetWhereQueryForSearchValue(IQueryable<Person> queryable, string searchValue)
    {
        return queryable.Where(x =>
                // id column (int)
                SqlFunctions.StringConvert((double)x.Id).Contains(searchValue)
                // name column (string)
                || x.Name.Contains(searchValue)
                // date of birth column (datetime, formatted as d/M/yyyy) - a SQL limitation prevented leading zeros in day or month
                || (SqlFunctions.StringConvert((double)SqlFunctions.DatePart("dd", x.DateOfBirth)) + "/" + SqlFunctions.DatePart("mm", x.DateOfBirth) + "/" + SqlFunctions.DatePart("yyyy", x.DateOfBirth)).Contains(searchValue));
    }
}

The same is true of the order-by query, which may need customisation to sort correctly for the data, e.g. dates. Here is an example from the PersonDepartmentListViewRepository, which swaps the formatted date column for the raw date:

public class PersonDepartmentListViewRepository : DatatablesRepository<PersonDepartmentListView>
{
    protected override IQueryable<PersonDepartmentListView> AddOrderByToQuery(IQueryable<PersonDepartmentListView> query, string orderColumnName, ListSortDirection order)
    {
        if (orderColumnName == "DateOfBirthFormatted")
        {
            orderColumnName = "DateOfBirth";
        }
        return base.AddOrderByToQuery(query, orderColumnName, order);
    }
}

Using a view will make life much easier, as the data can be pre-formatted and you can supply a search column for the full-word searching. Here’s the view I used to combine results from two tables:

CREATE VIEW [dbo].[PersonDepartmentListView]
AS
SELECT dbo.Person.Id,
dbo.Person.Name,
CONVERT(varchar(10), CONVERT(date, dbo.Person.DateOfBirth, 106), 103) AS DateOfBirthFormatted,
dbo.Department.Name AS DepartmentName,
CONVERT(varchar(10), dbo.Person.Id) + ' ' + dbo.Person.Name + ' ' + CONVERT(varchar(10), CONVERT(date, dbo.Person.DateOfBirth, 106), 103) + ' ' + dbo.Department.Name AS SearchString
FROM dbo.Person
INNER JOIN dbo.Department ON dbo.Person.DepartmentId = dbo.Department.Id


  • If you are displaying date values, be aware that you will need to format the date for display before returning it in JSON, and the date format will affect how you sort the column on the backend, as you will need to identify the actual date column property rather than the formatted string
  • For effort and performance you are better off creating a view than using complex Linq queries
  • I created the initial example with the help of Stephen Anderson