Isolated Integration Tests: Oxymoron or Best Practice?



Whenever I discuss integration testing, I find that I first have to define the scope of the conversation. It is a vast topic with many, seemingly contradictory definitions. In the broadest terms, an integration test is any test that exercises the interactions between a set of components, modules, or applications in order to expose faults within those interactions. For this article I am focused on end-to-end testing of an entire application, such as a service layer, microservice, or customer-facing website, that has dependencies on other systems.


It's 2018, so I'm going to assume that we all agree that automating tests is the best way to ensure the quality and stability of any system. It gives developers the ability to build and refactor with confidence, and to produce stable, verified releases that earn the trust of stakeholders. There are more tools for building and executing tests than I can count, including utilities, libraries, frameworks, runtimes, even some markup-like script interpreters dedicated to the purpose of testing: fitnesse, selenium, jmeter, soapui, junit, nunit, xunit, cunit, hunit, dbunit, … I'm sensing a pattern here. The list goes on.


Despite this abundance of tools, writing testable code still requires good design and discipline. Among the myriad coding best practices are a set that aid testability by facilitating isolation. These include designing APIs that are deterministic, meaning the same set of inputs always produces the same set of outputs. Dependency injection lets you code to a contract and distance modules from their concrete dependencies. Last, but not least, high cohesion and low coupling are vital at the class level, and just as important at higher levels of abstraction. All of these practices aid in isolating code for testing.
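To make the connection between dependency injection, determinism, and isolation concrete, here is a minimal sketch. All of the names (`RateClient`, `InvoiceService`, the currency rates) are hypothetical, invented purely for illustration; the point is that the service codes to a contract, so a deterministic test double can stand in for the real dependency.

```python
from dataclasses import dataclass

class RateClient:
    """Contract the service depends on; the concrete client is injected."""
    def get_rate(self, currency: str) -> float:
        raise NotImplementedError

class FixedRateClient(RateClient):
    """Deterministic test double: the same input always yields the same output."""
    def __init__(self, rates):
        self._rates = rates

    def get_rate(self, currency):
        return self._rates[currency]

@dataclass
class InvoiceService:
    rates: RateClient  # injected dependency, not a hard-coded concrete class

    def total_in(self, amount_usd: float, currency: str) -> float:
        return round(amount_usd * self.rates.get_rate(currency), 2)

# In a test, swap in the double; in production, inject the real client.
service = InvoiceService(rates=FixedRateClient({"EUR": 0.85}))
print(service.total_in(100.0, "EUR"))  # deterministic: always 85.0
```

Because `InvoiceService` never names its concrete dependency, the test exercises its logic in complete isolation.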


As you start testing larger sets of code, such as an entire subsystem or application, isolation can be hard to achieve in all but trivial cases. A standalone application is generally easier to test than an enterprise service layer. To be honest, my bread and butter is the integration between systems. That's where the problems get interesting: vendor services, microservices, cloud and social media authentication providers, software as a service. How can we possibly isolate a single system in modern, complex, integrated environments?



For some, the answer is deceptively simple: "don't".


For those who prefer not to isolate their systems, I have seen the result: fully integrated, always-on integration testing environments. The data in such an environment is manufactured so that every system residing in it can share data and coexist peacefully. When unexpected test failures appear due to a change in one or more of those systems, or even data pollution from another test in the same environment, that is indicative of a problem with the integrations. This inevitably leads to a meeting between developers from various teams to figure out what changed and why it affected their systems in that unexpected way. It can easily be argued that this is a good thing, as it forces us to have these conversations about unintended consequences perhaps earlier than we otherwise would.


While this type of forced, early integration does have the potential to be beneficial, I don't like this tactic for a few reasons. Teams quickly become adept at choosing data sets that they know won't clash with other systems, thus dodging the very benefit they are supposed to realize from this approach. Even for teams with the discipline to do it correctly, it does not scale well: the complexity of this type of setup increases dramatically as the number of entities, systems, employees, teams, and projects that access and overlap within the environment grows. Then there is the issue of systems outside your realm of control, like vendors, third-party providers, legacy mainframe systems, or even that newly acquired sister company that doesn't participate.


As I was trying to solve this problem for one of my clients, I bumped into a simple platform for endpoint virtualization called Mountebank. To my knowledge it is the only open-source tool of its kind, so I think it is worth sharing what I have discovered. Mountebank has a healthy following on GitHub with 111 forks and 34 contributors, is written and maintained primarily by Brandon Byars of ThoughtWorks, and is released under the MIT license. With this tool you get on-demand test doubles over common internet protocols: you can virtualize all of your dependent endpoints and test both success and failure scenarios. Simple configuration changes can repoint a service layer or application to Mountebank virtual endpoints at test time. It also gives you the flexibility to run as many tests in parallel as you can handle, testing against different predicates, different ports, and so on. You can mimic production by inserting a delay into your virtual endpoints, and you can even put Mountebank into proxy mode to record, export, and replay actual production calls. This list is far from exhaustive; it has many more features and is extensible if you need to add behavior. In my research, the other tools I found that offer this functionality were generally robust and mature, but saddled with a hefty price tag.
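To give a feel for what "virtualizing an endpoint" looks like, here is a sketch of an imposter definition of the kind you would POST to Mountebank's admin API (port 2525 by default) at /imposters. The imposter port, path, response body, and delay value below are illustrative assumptions, not from any real system; the predicate/response/behavior shape follows Mountebank's JSON contract.

```python
import json

# Hypothetical imposter: an HTTP test double on port 4545 that answers one
# inventory lookup with a canned response and a simulated production delay.
imposter = {
    "port": 4545,
    "protocol": "http",
    "stubs": [
        {
            # Predicate: only requests matching this method and path hit this stub
            "predicates": [{"equals": {"method": "GET", "path": "/inventory/42"}}],
            "responses": [
                {
                    "is": {
                        "statusCode": 200,
                        "headers": {"Content-Type": "application/json"},
                        "body": json.dumps({"sku": 42, "inStock": True}),
                    },
                    # Mimic production latency: wait 250 ms before responding
                    "_behaviors": {"wait": 250},
                }
            ],
        }
    ],
}

payload = json.dumps(imposter, indent=2)
print(payload)
# With a Mountebank server running, this payload would be registered with
# something like: curl -X POST http://localhost:2525/imposters -d "$payload"
```

Swapping the stubbed `is` response for a `proxy` response is how the record-and-replay mode mentioned above is configured.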


In fact, for that particular client, part of the project involved the design and build of a greenfield enterprise service layer. Having recently discovered Mountebank, we decided to design the system with Mountebank in mind. Its configuration is stored in a database or YAML files, and objects that cache those values are injected into the classes that need them. This allows us to repoint the application to Mountebank virtual endpoints very cleanly. We also run the service layer in memory during our tests so that we can run many tests in parallel with little overhead. Each test uses a different predicate to avoid clashes between virtual endpoints on the Mountebank server, isolating the tests from each other.
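The per-test predicate idea can be sketched as follows. This is an illustration of the pattern rather than our actual test harness: each test tags its outgoing requests with a unique correlation header (the header name `X-Test-Id` and the response bodies are invented for this example), and its stub's predicate matches only that tag, so parallel tests sharing one Mountebank imposter never see each other's responses.

```python
import json
import uuid

def stub_for_test(expected_body: dict):
    """Build a Mountebank stub whose predicate matches only this test's requests."""
    test_id = str(uuid.uuid4())  # unique per test run
    stub = {
        # Predicate keyed on a correlation header unique to this test
        "predicates": [{"equals": {"headers": {"X-Test-Id": test_id}}}],
        "responses": [{"is": {"statusCode": 200, "body": json.dumps(expected_body)}}],
    }
    return test_id, stub

# Two tests running in parallel against the same imposter:
id_a, stub_a = stub_for_test({"status": "ok"})
id_b, stub_b = stub_for_test({"status": "backordered"})
assert id_a != id_b  # distinct predicates keep the tests isolated
```

Each test then sends its own `X-Test-Id` value on every request, guaranteeing it receives only the responses it set up.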


I know this sounds too good to be true, and there is a catch. Specifically, if you adopt this testing strategy with reckless abandon, you may be in for some long nights when one of the systems you depend on promotes an API change to production. If you aren't proactive, your tests will continue to hum along happily green, and you will be unaware of your impending doom until it is too late. In reality, these events don't occur in a vacuum. Vendors will generally communicate with their customers before making an API change, and other application teams in your organization ought to share upcoming changes with their clients. This really comes down to good organizational governance systems and structures: structured communication of planned changes raises awareness of dependencies both within an organization and without.


There will be cases of unintended consequences, or teams that make breaking changes to their APIs without warning, but that is a challenge whether you use this approach or not. If you have a vendor making unannounced API changes, I highly recommend choosing a new vendor! Regardless, you have to protect yourself from the inevitable integration SNAFU. After using this strategy on a few different applications, I recommend coupling virtual-endpoint testing with an additional small set of tests that verify the stability of your integration points by calling the actual applications and comparing the results against expected output. These tests can run on a schedule outside of your normal CI strategy and don't need to be exhaustive. They will be your early warning sirens and your insurance policy. They can also be helpful when the other guy tries to claim that nothing changed at all!
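One way such an early-warning test might compare results is a simple shape check: diff a live response against a recorded expectation and flag missing or retyped fields. This is a minimal sketch under assumed data; in practice the `live` body would come from actually calling the real endpoint on a schedule, whereas here it is hardcoded to show a vendor silently changing a field's type.

```python
def contract_drift(expected: dict, actual: dict):
    """Return a list of shape differences between a recorded and a live response."""
    problems = []
    for key, value in expected.items():
        if key not in actual:
            problems.append(f"missing field: {key}")
        elif type(actual[key]) is not type(value):
            problems.append(f"type changed: {key}")
    return problems

recorded = {"sku": 42, "inStock": True}   # expectation captured when the test was written
live = {"sku": 42, "inStock": "yes"}      # vendor silently changed a boolean to a string
print(contract_drift(recorded, live))     # → ['type changed: inStock']
```

A scheduled job that runs checks like this and alerts on a non-empty result is cheap insurance, and the diff output doubles as evidence when "nothing changed at all."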


Please check out Mountebank and see how it can help your organization. It is freely available from the Mountebank website at http://www.mbtest.org/. Mountebank developer Brandon Byars is also working on a new book, "Testing Microservices with Mountebank", due out in February 2018, and I highly recommend checking it out.


Happy testing!


Written by Jeff Gitter
