Adding Metrics to Java Play on Heroku

Recently I’ve been working on adding metric reporting to an existing application using the great Metrics library from Coda Hale. Adding it to Dropwizard applications is extremely easy, but adding it to Play is trickier, so I’ve created a sample project to record how to do this.

Source

Metrics are a vital tool for monitoring the health of your application, but they are often overlooked early in development. Without some way of seeing how your application behaves under use you end up relying on your users to tell you what’s going on, reacting to problems instead of proactively monitoring and taking steps to prevent them. Metrics can be as simple as the number of active operations, or as detailed as JVM usage and a breakdown of request results: anything you think will help you monitor the health of your application.

[Image: metrics-traffic]

Once you have some metrics being produced you need a way to see them. In this example I’m using open source Graphite for storing and graphing the metric data; Metrics has a reporter library which periodically sends the metric data to Graphite. Once your data is in, you can create custom graphs that suit your monitoring needs. Heroku offers a free hosted Graphite instance (with usage limitations), so I’m using it in this application as an easy way to set up and try Graphite.

To test the reporting I ran some ApacheBench scripts, varying the number of concurrent requests to represent increasing and decreasing load. The graph below shows the 2xx response metrics:

[Image: response-graph]

Detail

See the source for full instructions on running and deploying the application to Heroku.

I based the implementation on the metrics-play Play plugin, which is written in Scala. I wanted a clear Java Play implementation that gave me control over the metric names, but if you want to quickly add metrics to your Play application without fuss, that plugin is a good choice.

This example creates metric registries for the JVM, Logback and request details by hooking into the Play application through the Global.java file, using its filters() and onStart() methods.

public class Global extends GlobalSettings {
...
    // Apply the metrics filter to every request handled by the application
    @Override
    public <T extends EssentialFilter> Class<T>[] filters() {
        return new Class[]{MetricsFilter.class};
    }
...
    @Override
    public void onStart(Application application) {
        super.onStart(application);

        setupMetrics(application.configuration());

        setupGraphiteReporter(application.configuration());
    }
...
    // Register JVM, Logback and console metrics according to the configuration flags
    private void setupMetrics(Configuration configuration) {
        ...
        if (metricsJvm) {
            metricRegistry.registerAll(new GarbageCollectorMetricSet());
            metricRegistry.registerAll(new MemoryUsageGaugeSet());
            metricRegistry.registerAll(new ThreadStatesGaugeSet());
        }

        if (metricsLogback) {
            InstrumentedAppender appender = new InstrumentedAppender(metricRegistry);

            ch.qos.logback.classic.Logger logger = 
                (ch.qos.logback.classic.Logger)Logger.underlying();
            appender.setContext(logger.getLoggerContext());
            appender.start();
            logger.addAppender(appender);
        }

        if (metricsConsole) {
            ConsoleReporter consoleReporter = ConsoleReporter.forRegistry(metricRegistry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
            consoleReporter.start(1, TimeUnit.SECONDS);
        }
    }
...
    // Create and start a Graphite reporter when graphite.enabled is set in the configuration
    private void setupGraphiteReporter(Configuration configuration) {
        boolean graphiteEnabled = configuration.getBoolean("graphite.enabled", false);

        if (graphiteEnabled) {
            ...
            final Graphite graphite = new Graphite(new InetSocketAddress(host, port));
            graphiteReporter = GraphiteReporter.forRegistry(metricRegistry)
                .prefixedWith(prefix)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .filter(MetricFilter.ALL)
                .build(graphite);

            graphiteReporter.start(period, periodUnit);
        }
    }
}

Metrics about the requests are captured using a Play Filter, MetricsFilter, which is applied to every request hitting the application and can see both the request header and the result data.

public class MetricsFilter implements EssentialFilter {

    private final MetricRegistry metricRegistry = SharedMetricRegistries.getOrCreate("play-metrics");

    private final Counter activeRequests = metricRegistry.counter(name("activeRequests"));
    private final Timer   requestTimer   = metricRegistry.timer(name("requestsTimer"));

    private final Map<String, Meter> statusMeters = new HashMap<String, Meter>() {{
        put("1", metricRegistry.meter(name("1xx-responses")));
        put("2", metricRegistry.meter(name("2xx-responses")));
        put("3", metricRegistry.meter(name("3xx-responses")));
        put("4", metricRegistry.meter(name("4xx-responses")));
        put("5", metricRegistry.meter(name("5xx-responses")));
    }};

    // Wrap the action so requests are counted, timed and their response statuses metered
    public EssentialAction apply(final EssentialAction next) {

        return new MetricsAction() {

            @Override
            public EssentialAction apply() {
                return next.apply();
            }

            @Override
            public Iteratee<byte[], Result> apply(final RequestHeader requestHeader) {
                // Count the request as active and start timing it
                activeRequests.inc();
                final Context requestTimerContext = requestTimer.time();

                // When the result is available, update the metrics and return the result unchanged
                return next.apply(requestHeader).map(new AbstractFunction1<Result, Result>() {

                    @Override
                    public Result apply(Result result) {
                        activeRequests.dec();
                        requestTimerContext.stop();
                        String statusFirstCharacter = String.valueOf(
                            result.header().status()).substring(0,1);
                        if (statusMeters.containsKey(statusFirstCharacter)) {
                            statusMeters.get(statusFirstCharacter).mark();
                        }
                        return result;
                    }

                    @Override
                    public <A> Function1<Result, A> andThen(Function1<Result, A> result) {
                        return result;
                    }

                    @Override
                    public <A> Function1<A, Result> compose(Function1<A, Result> result) {
                        return result;
                    }

                }, Execution.defaultExecutionContext());
            }


        };
    }

    // Adapter so the anonymous action above is both a Scala Function1 and a Play EssentialAction
    public abstract class MetricsAction extends
        AbstractFunction1<RequestHeader, Iteratee<byte[], Result>>
        implements EssentialAction {}
}

Customisation and improvements

This example gives basic metrics for the application, but for your own solution you will probably want specific metrics about controller actions. You can do this either by creating your own Play Filters and attaching them to the action methods, or by coding metrics directly into the actions (see the sketch below). I followed Dropwizard Metrics’ own naming style for reporting on requests (e.g. 2xx-responses), but you may be interested in specific results or requests and can use the Filter to intercept and report on those.
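
As a rough sketch of the second option, a timer coded directly into an action could look like this. The controller, action and metric names here are hypothetical and not part of the sample project; it reuses the "play-metrics" registry created in Global so any configured reporters pick the timer up.

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.SharedMetricRegistries;
import com.codahale.metrics.Timer;

import play.mvc.Controller;
import play.mvc.Result;

public class HomeController extends Controller {

    // Reuse the registry created in Global so the Graphite/console reporters see this metric
    private static final MetricRegistry metricRegistry =
        SharedMetricRegistries.getOrCreate("play-metrics");

    private static final Timer indexTimer = metricRegistry.timer("home.index-requests");

    public static Result index() {
        // Time just this action's body; stopping the context records the duration
        final Timer.Context context = indexTimer.time();
        try {
            return ok("Hello");
        } finally {
            context.stop();
        }
    }
}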

Microservice authentication and authorisation scaling

In a previous post I put up the sequence diagram below describing a design for implementing authentication and authorisation using Microservices.

[Image: Microservice authentication and authorisation sequence diagram]

What I didn’t cover was the advantages of this approach when scaling your services. Authentication and authorisation are needed by most parts of your system, so they easily become a performance bottleneck. Any service in your system that needs to authenticate a user or check their permissions has to access a central data source holding this data. Outside a monolith architecture (which has its own problems) this can be difficult, as a varying number of services will need to perform these functions, so the authentication and authorisation services need to scale with them.

This is one of the classic arguments for microservices: it’s easier to scale a small, focused service doing one thing than a large application with many dependencies and data sources.

Here’s the most basic architecture using the microservice authentication and authorisation design above:

[Image: MS auth and authorisation - simple architecture]

This architecture can only scale vertically, by increasing the specification of the single web server. If just one of the services hosted on the box gets a lot of requests, such as the authorisation service dealing with permission checks from ten business services, then the performance of the whole application is affected. Increasing processor and memory can only help so much in this situation, and of course the system has multiple single points of failure.

Now here’s what is possible if you use load balancers and partition your microservices onto separate servers:

[Image: MS auth and authorisation - scaled architecture]

This architecture can scale horizontally, by increasing the number of server instances for the specific services experiencing heavy load. It may seem overly complex, but if your application needs to scale well it is the most practical way to do it. It can also save hosting costs: as well as scaling up (adding instances) you can scale down (removing instances) when individual services are not under much load. The cost of a single high-spec server running all the time is normally higher than that of multiple small instances being started and stopped automatically.

The tools necessary to implement this architecture are now very mature (HAProxy, Puppet, Docker, etc.), and cloud IaaS providers offer increasingly good tools for managing your instances automatically.

Useful links

ANTLR based rules evaluator

[Image: ANTLR4 grammar]

While investigating how to handle complex business rules in a project, a colleague of mine came up with the idea for this, and I created this library as a proof of concept.

The problem it’s trying to solve is quite common:

  • An application needs to evaluate data against a large number of complex/simple business rules
  • The business rules are mostly concerned with a limited set of values within a single business domain
  • The business rules need to be maintained and are updated regularly (with mostly small changes)
  • The users who define and maintain the rules are non-technical and cannot code to implement rule changes

Normally a problem like this is solved by either custom code or adding a large Rules Engine product, but both of these have a number of downsides.

Custom code disadvantages:

  • Requires custom code for each business rule
  • Rules cannot be changed without code release
  • Rules cannot be maintained by non-technical users

Rules Engine disadvantages:

  • Requires installation and maintenance of a new complex product (e.g. Drools)
  • Requires developer up-skilling to use correctly
  • Rules cannot be maintained by non-technical users (in practice)

Bad experiences in the past with large Rules Engine products discouraged us from using one, and in practice we would not need anything like the full set of features they provide. Custom code would quickly become a maintenance nightmare, and would add barriers between our users and the implementation.

The rules themselves were normally defined in English in documents and spreadsheets, so why not use something closer to their “natural” state? The users aren’t idiots; they already use Excel formulas to calculate all of this manually, so why couldn’t we find a compromise closer to what they understood?

Enter ANTLR, an open source, Java-based parser generator. It’s used in a lot of places to convert text from one well-defined language to another, such as in Hibernate to generate SQL from HQL. You define a grammar, generate parsers from it and apply them to text to validate it against the grammar and build a tree structure matching the elements of your grammar.

The idea was to use ANTLR to define a limited, domain-specific English grammar for our business rules that covered everything we needed inside our small business domain. Users could then write rules in almost natural English, which we could parse and convert into executable business rules in code. That way users can define rules close to the way they normally would, and maintain them on the system whenever they need updating.

e.g.

In our grammar we define a specification, with a rule being one or more specifications, as something like:

value_expr 'equals' string_comparison_value # StringEqualsComparisonSpecificationExpression

So when ANTLR parses the string “status equals approved”, it can identify:

  • “status” as the value_expr
  • “approved” as the string_comparison_value
  • The specification as type StringEqualsComparisonSpecificationExpression

This can then be easily used to build a Rule expression out of Java objects that can be evaluated against a set of data (e.g. evaluating the JSON data {"status": "approved"} gives true).
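
As a rough illustration, the kind of specification object built for that expression could look like the sketch below. The Specification interface and class here are simplified, hypothetical versions of the classes under com.example.rules.grammar.specification, and the sketch uses JsonPath (covered later) to read the value out of the JSON data.

import com.jayway.jsonpath.JsonPath;

// Hypothetical minimal interface for an evaluatable rule specification
interface Specification {
    boolean isSatisfiedBy(String json);
}

// Sketch of the object built for "status equals approved"
class StringEqualsComparisonSpecification implements Specification {

    private final String valueExpression;   // e.g. "status"
    private final String comparisonValue;   // e.g. "approved"

    StringEqualsComparisonSpecification(String valueExpression, String comparisonValue) {
        this.valueExpression = valueExpression;
        this.comparisonValue = comparisonValue;
    }

    @Override
    public boolean isSatisfiedBy(String json) {
        // Read the value from the JSON document and compare it with the expected string
        Object actual = JsonPath.read(json, "$." + valueExpression);
        return comparisonValue.equals(actual);
    }
}

Evaluating new StringEqualsComparisonSpecification("status", "approved") against {"status": "approved"} would then return true.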

The grammar can be made to parse more complex statements, allowing complex business rules to be defined as a combination of simple specifications.

e.g.

(applicationArea / totalAvailableArea * 100 ) greater than 50 and options contains 'GRASSLAND'
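
Composite expressions such as the and above can be represented by combining specifications; this is again an illustrative sketch built on the hypothetical Specification interface from the previous example, not the library’s actual classes.

// Hypothetical composite specification: satisfied only when both sides are satisfied
class AndSpecification implements Specification {

    private final Specification left;
    private final Specification right;

    AndSpecification(Specification left, Specification right) {
        this.left = left;
        this.right = right;
    }

    @Override
    public boolean isSatisfiedBy(String json) {
        return left.isSatisfiedBy(json) && right.isSatisfiedBy(json);
    }
}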

As the rules are simply strings, they can be persisted and edited using a CRUD UI, web-based or otherwise. The UI can use knowledge of the grammar to aid users when editing rules, validating against the grammar (as sketched below), testing against known data and auto-completing valid syntax. If necessary, rules can be versioned to maintain audit trails and published to control when they come into effect.
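
For example, validating a rule string before saving it only needs the ANTLR-generated lexer and parser plus an error listener. The sketch below is an assumption rather than the project’s actual code: RuleSetLexer and RuleSetParser are the classes ANTLR generates from RuleSet.g4, and rule_set is assumed to be the grammar’s entry rule.

import com.example.rules.RuleSetLexer;
import com.example.rules.RuleSetParser;

import org.antlr.v4.runtime.BaseErrorListener;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.RecognitionException;
import org.antlr.v4.runtime.Recognizer;

import java.util.concurrent.atomic.AtomicBoolean;

public class RuleValidator {

    public boolean isValid(String ruleText) {
        RuleSetLexer lexer = new RuleSetLexer(CharStreams.fromString(ruleText));
        RuleSetParser parser = new RuleSetParser(new CommonTokenStream(lexer));

        // Collect syntax errors instead of printing them to the console
        final AtomicBoolean failed = new AtomicBoolean(false);
        parser.removeErrorListeners();
        parser.addErrorListener(new BaseErrorListener() {
            @Override
            public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
                                    int line, int charPositionInLine,
                                    String msg, RecognitionException e) {
                failed.set(true);
            }
        });

        parser.rule_set();   // attempt a full parse from the assumed entry rule
        return !failed.get();
    }
}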

This approach has its own set of disadvantages:

  • Have to code business specific grammar and rule specification logic covering required rules
  • Grammar cannot cover all possible scenarios without excessive code
  • Requires users to learn the grammar and understand how it is applied to the data used in the system

I believe this approach is a good fit when the set of business rules you are dealing with is well known and applied to similar data sets, changes frequently in small, repetitive ways, and users need to be able to quickly test and apply changes. Giving the people who understand the rules best the ability to edit and test them directly is extremely useful, and avoids the need for rule requirements documents and long periods of testing every time the rules are updated.

Implementation details

I’d recommend reading up on ANTLR before diving into the code, as you need to understand the grammar and how it parses rules in order to follow how the tree builder constructs the expressions and applies data to them.

ANTLR4 is included in the project via sbt-antlr4. The ANTLR grammar file is located at src/main/antlr4/RuleSet.g4 and the ANTLR classes generated from that grammar are in target/scala-2.11/classes/com/example/rules. The generated parser is used in the RuleSetCompiler, and a listener, RuleSetTreeBuilder, is attached to it to react to events when parsing rules.

RuleSetTreeBuilder has a number of methods that are fired when the parser enters and exits identified tokens and labelled elements from the grammar, such as enterRule_set and exitArithmeticExpressionPlus. The logic inside these methods builds the logical rule expressions that can be applied to the data. Classes for specifications are under the package com.example.rules.grammar.specification.
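
To give a feel for this, a listener callback in that style might look roughly like the following. This is a hedged sketch rather than the project’s actual implementation: the context class and accessors are what ANTLR would generate for the StringEqualsComparisonSpecificationExpression label shown earlier, and the stack-based assembly and Specification types are the simplified ones from the sketches above.

import com.example.rules.RuleSetBaseListener;
import com.example.rules.RuleSetParser;

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a listener reacting to parse events and building specifications bottom-up
public class RuleSetTreeBuilder extends RuleSetBaseListener {

    private final Deque<Specification> stack = new ArrayDeque<>();

    @Override
    public void exitStringEqualsComparisonSpecificationExpression(
            RuleSetParser.StringEqualsComparisonSpecificationExpressionContext ctx) {
        // Pull the matched value expression and comparison value out of the parse tree
        String valueExpression = ctx.value_expr().getText();
        String comparisonValue = ctx.string_comparison_value().getText();
        stack.push(new StringEqualsComparisonSpecification(valueExpression, comparisonValue));
    }

    // The fully built rule is whatever remains on the stack once parsing has finished
    public Specification build() {
        return stack.peek();
    }
}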

JsonPath, an XPath-like query language for JSON, is used to allow complex queries of the JSON for cases where the data being evaluated isn’t simple.

e.g.

$.options[?(@.code=='G1')].area equals 3
SUM($.options[*].area) greater than 4
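
These queries are evaluated with the JsonPath library; a minimal standalone example of the first query above, using hypothetical data, looks like this:

import com.jayway.jsonpath.JsonPath;

import java.util.List;

public class JsonPathExample {

    public static void main(String[] args) {
        // Hypothetical data representing the JSON being evaluated against the rule
        String json = "{\"options\":[{\"code\":\"G1\",\"area\":3},{\"code\":\"W1\",\"area\":5}]}";

        // Filter queries return a list of matches, here the area of every option with code G1
        List<Integer> areas = JsonPath.read(json, "$.options[?(@.code=='G1')].area");

        System.out.println(areas);   // prints [3], which the rule compares against the expected value
    }
}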

The grammar can be expanded to include specific business evaluations, rather than generic operations, based on knowledge of the business domain and data. This allows the rules to read more like English instead of generic formulas. In the same way, custom expressions can be added to extract or process the data, e.g. GRASS options area instead of $.options[?(@.code=='G1' || @.code=='G2')].area.

Useful Links