Functional Testing and Microservices: A Journey To Consumer Driven Contracts

23 January 2018 | About a 7 minute read
Tags: functional, microservices, squad lead, testing


Introduction

Having worked for a number of years with clients attempting to move away from monolithic applications in a waterfall environment, we have seen that one common architectural pattern to adopt is microservices. A sensible, Domain Driven Design (DDD) led approach to adoption allows companies to migrate their legacy systems piece by piece into a set of services which can be sensibly maintained, modified, deployed, and scaled.

One of the key challenges this adoption brings is how to sensibly automate tests to ensure that the system as a whole is not impacted by changes to one individual service.

In this post we will describe a journey to microservices from a test automation point of view, and how solving problems as and when they appear leads naturally to the idea of consumer driven contracts (CDC).

Test automation maturity

We’ll start this journey from a point which should be familiar to most. For organisations coming from a manual testing background, a first attempt at automation will often follow this pattern:

 

  • Push for developers to deliver more unit-level tests with their code. This would be validated through the development process, with code reviews and tools like SonarQube.
  • Expect manual testers to create automated functional tests to replace their existing test plans, using tools like WebDriver for web UIs or Apache HttpClient for API testing (see the sketch after this list).
  • Create a simple CI pipeline which handles build, deploy, test and publish operations.
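
As a concrete illustration of this level of maturity, the sketch below shows roughly what one of those first automated functional tests might look like, using Selenium WebDriver with JUnit 5. It is a hypothetical example: the URL, element IDs and expected text are assumptions, not details from any real system.

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class LoginJourneyTest {

        private WebDriver driver;

        @BeforeEach
        void openBrowser() {
            driver = new ChromeDriver();
        }

        @Test
        void userCanLogIn() {
            // Drive the deployed application through the browser, exactly as a user would.
            driver.get("https://test.example.com/login"); // hypothetical test environment URL
            driver.findElement(By.id("username")).sendKeys("test-user");
            driver.findElement(By.id("password")).sendKeys("correct-horse");
            driver.findElement(By.id("submit")).click();

            // Assert on what the user actually sees, not on implementation details.
            assertEquals("Welcome back", driver.findElement(By.id("greeting")).getText());
        }

        @AfterEach
        void closeBrowser() {
            driver.quit();
        }
    }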

 

At this level of maturity functional tests will almost always focus on the UI and treat the underlying architecture as a black box. This is perfectly valid, and allows us to genuinely not care about the architecture. The tests are doing exactly what functional/acceptance tests should do: verifying that the system behaves correctly where it matters most – when users are interacting with it.

 

This level of maturity delivers most of what is needed from a first-pass automation approach, but it often struggles to provide fast feedback: a functional test suite focused on the web UI can take hours to run.

A shift towards Microservices

How does our test maturity affect us when we start to deliver a microservice application?

We still want our system to be fully tested at the functional level with regression checks, but we now have an architecture that allows us to rapidly deliver new functionality with minimal impact on other parts of the system. When a developer makes a change, we run the unit and integration tests for that service and then trigger the functional test suite.

 

If our functional test suite is running UI tests for a couple of hours on every new deployment, then a change to any one microservice prevents any other service from being changed within the same time frame. We have created a bottleneck in our deployment pipeline. At this point we can slim our functional tests down to a bare-bones smoke check and rely more heavily on integration tests, as mentioned previously.

 

Given we’ve already moved away from a monolithic application, why then do we still have a monolithic test suite? With a well implemented set of microservices we should be able to produce sets of functional tests (either at the API or UI level depending on the service) specific to each business area which validate interactions around a given service. These would still be running against deployed infrastructure rather than being mocked, but being more targeted means you can run subsets when a service changes.

 

If you are communicating by HTTP between your services then it’s easy to create much faster functional tests that validate the behaviour of that business area at the API level.
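
For example, an API-level functional test for a single business area can be a plain HTTP call and a handful of assertions, which runs in milliseconds rather than the minutes a browser journey needs. The sketch below is a hypothetical example using JUnit 5 and the JDK's built-in HTTP client against a deployed (not mocked) service; the base URL, endpoint and response field are assumptions.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class PaymentsApiFunctionalTest {

        // Hypothetical base URL of the deployed Payments service in the test environment.
        private static final String BASE_URL = "https://test.example.com/payments";

        private final HttpClient client = HttpClient.newHttpClient();

        @Test
        void existingPaymentCanBeRetrieved() throws Exception {
            // Exercise the service over HTTP, scoped to this business area,
            // rather than driving a full end-to-end UI journey.
            HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/42"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("\"paymentId\":42")); // assumed response field
        }
    }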

We are now starting to evolve the team's test automation maturity and to break up the tests in line with the principles that apply to a microservices architecture. Splitting out our test suite means each microservice effectively has its own pipeline, so we have unblocked ourselves and are now getting much faster test feedback. Our full functional test suite can now either run on a schedule or be slimmed down to the bare minimum needed to validate that multi-service journeys work correctly.

An example pipeline with service specific functional tests and scheduled E2E tests

The full picture

Unfortunately, there’s still a problem. Changes to a service will kick off the functional tests specific to that business area, ensuring that the service itself and everything it consumes work as expected. Those tests do not, however, validate that the service's own consumers are unaffected.

The problem we fail to catch is not with the service itself, but with how a consuming service is interacting with it. This kind of issue would not be captured until our full set of tests has been run for each service.

 

We can address this by kicking off all of our test sets at the same time, but that almost lands us back where we started with our test monolith. We do have some advantages now: because we can run subsets, we only need to run the tests for the modified service and its consumers. However, we still have a bottleneck in our deployment pipeline.

A pipeline where we are running all our functional test sets on each deployment

 

In the image above we can see an example of what happens if we have a build pipeline which triggers tests for consumers on deploy. The User service is consumed by the Payments service, so we trigger both of these test sets when the User service changes. While those tests are running, the Payments service is modified, but we now have to wait for the first test run to complete before we can validate that change. In reality, we wouldn't even be able to deploy the Payments service until the tests complete. As more services are added to our architecture, the problem is compounded.

Enter consumer driven contracts

So now we are running most (if not all) of our test jobs every time we make a change. Our deployment pipelines are complex, but what we have is working. We have gained some advantages from breaking up our tests and we’re still catching breaking changes.

However, we still haven’t enabled ourselves to take advantage of independent releases for our microservices.

 

A key part of microservices is using contracts to dictate how services will interact with each other. These contracts detail the expected structure of the consumer request and provider response, and allow the two to be modified independently – both just need to remain aligned with the contract. We can write mocked tests against some documentation of the contract (e.g. Swagger documentation), but the best source of truth for these contracts is the service itself.

 

One answer is to treat the contract itself as an entity which can be used in tests. If every service that interacts with that contract (as a provider or a consumer) executes its tests not against its own understanding of the contract (e.g. the documentation) but against a single source of truth, then problems like those discussed above can be identified at the integration-test level.

 

This is the essence of consumer driven contract (CDC) development. CDC enables you to validate that contracts have been adhered to as part of your unit testing, identifying problems related to contract changes much earlier in the process while reducing the dependence upon end-to-end integration tests, which rely on running all services together (and cannot reliably be triggered in a microservice deployment pipeline).
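
Pact is one widely used tool for this style of testing (the choice of tool here is an assumption, not something prescribed above). The sketch below shows roughly what a consumer-side contract test could look like with Pact JVM's JUnit 5 support: the Payments service, as a consumer, records its expectations of the User service against a local mock, and the resulting pact file becomes the contract the provider later verifies. The service names, endpoint and fields are hypothetical, and package names vary between Pact versions.

    import au.com.dius.pact.consumer.MockServer;
    import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
    import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
    import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
    import au.com.dius.pact.consumer.junit5.PactTestFor;
    import au.com.dius.pact.core.model.RequestResponsePact;
    import au.com.dius.pact.core.model.annotations.Pact;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    @ExtendWith(PactConsumerTestExt.class)
    @PactTestFor(providerName = "user-service")
    class UserServiceContractTest {

        @Pact(provider = "user-service", consumer = "payments-service")
        public RequestResponsePact userById(PactDslWithProvider builder) {
            // The consumer states exactly what it needs from the provider.
            return builder
                    .given("user 42 exists")
                    .uponReceiving("a request for user 42")
                        .path("/users/42")
                        .method("GET")
                    .willRespondWith()
                        .status(200)
                        .body(new PactDslJsonBody()
                                .integerType("id", 42)
                                .stringType("name", "Jane Doe"))
                    .toPact();
        }

        @Test
        @PactTestFor(pactMethod = "userById")
        void fetchesUser(MockServer mockServer) throws Exception {
            // The consumer's real HTTP call runs against Pact's mock of the provider;
            // the recorded interaction is written out as the contract (pact) file.
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(mockServer.getUrl() + "/users/42")).build(),
                    HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }

Because this runs entirely against a local mock, it is as fast as any other unit test, yet the pact file it produces captures a real consumer expectation rather than someone's reading of the documentation.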


We can now go back to running just the tests for a specific business area, plus the contract tests within that area, and we no longer depend on our end-to-end functional tests for fast feedback. The full batch of functional tests can instead run as an overnight job, with confidence that it will still catch edge cases and UI flow changes.


Consumer Driven Contracts define the capabilities of our providers from the perspective of our consumers and the interactions between the two, while providing the test itself which defines successful contract adherence. When developing consumers and providers, using CDC means defining this contract up front and developing towards adherence to it. That is, developing with consumer driven contracts is test driven development, but for services.
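
On the other side of the relationship, the provider replays every interaction its consumers have recorded against its real implementation. Continuing the same assumed Pact setup, a provider verification test might look something like the sketch below, with pact files read from a local folder and the service running on an assumed local port; in practice contracts are usually shared between pipelines via a Pact Broker.

    import au.com.dius.pact.provider.junit5.HttpTestTarget;
    import au.com.dius.pact.provider.junit5.PactVerificationContext;
    import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
    import au.com.dius.pact.provider.junitsupport.Provider;
    import au.com.dius.pact.provider.junitsupport.State;
    import au.com.dius.pact.provider.junitsupport.loader.PactFolder;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.TestTemplate;
    import org.junit.jupiter.api.extension.ExtendWith;

    @Provider("user-service")
    @PactFolder("pacts") // assumed location of the contracts generated by consumers
    class UserServiceVerificationTest {

        @BeforeEach
        void before(PactVerificationContext context) {
            // Point the verification at a running instance of the provider (assumed port).
            context.setTarget(new HttpTestTarget("localhost", 8080));
        }

        @State("user 42 exists")
        void user42Exists() {
            // Set up whatever data the replayed interaction needs.
        }

        @TestTemplate
        @ExtendWith(PactVerificationInvocationContextProvider.class)
        void verifyEachInteraction(PactVerificationContext context) {
            // One invocation per interaction recorded by the consumers.
            context.verifyInteraction();
        }
    }

If a provider change breaks an expectation that a consumer actually relies on, this test fails in the provider's own pipeline – long before anything reaches a shared environment or an end-to-end run.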
