Testing microservices can be difficult. Often, we underestimate these difficulties when first working with microservices.
As the number of microservices grows and complexity increases, many of us wonder if our existing testing strategies still make sense. Although each individual microservice is smaller and simpler than a monolithic application, ensuring that the microservice ecosystem as a whole works correctly still presents challenges.
As microservices become more prevalent, rethinking our testing strategies becomes critical to building confidence. We want to make sure our tests allow us to push code to production without fear or worry. If you have doubts every time code ships to production, it’s time to reassess your testing strategies and look to improve.
Microservices and Testing Complexity
First, let’s briefly cover microservices and why testing strategies can be tricky.
A microservice is a small service with a focused function or domain. It works in concert with other services to create a larger product. For example, you may have one microservice that acts as an online shopping cart and another that manages a store's inventory. Those work in cooperation with other microservices to let your customer purchase items and have the order shipped.
Microservices can communicate with each other and with other dependencies in various ways. They often use a mix of synchronous and asynchronous communication and a combination of protocols as well. Perhaps some microservices communicate using REST, while some use RPC, GraphQL, or others. Asynchronous microservices can use those methods or publish and subscribe to a message broker like Kafka or RabbitMQ. With all that variation, testing becomes incredibly complex.
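The difference between the two styles can be sketched in a few lines. This is a toy contrast between synchronous request/response and asynchronous fire-and-forget communication, using an in-process queue as a stand-in for a broker like Kafka or RabbitMQ; all names here are invented for illustration.

```python
import queue

def get_inventory_sync(sku):
    """Synchronous style: the caller blocks until it has an answer."""
    return {"sku": sku, "count": 3}  # stand-in for a REST or RPC call

events = queue.Queue()  # stand-in for a broker topic

def publish_order_placed(order_id):
    """Asynchronous style: publish the event and move on."""
    events.put({"type": "order_placed", "order_id": order_id})

# The synchronous caller gets an immediate result...
count = get_inventory_sync("sku-1")["count"]
# ...while the asynchronous consumer picks the event up later.
publish_order_placed("order-42")
event = events.get(timeout=1)
assert count == 3
assert event["type"] == "order_placed"
```

Tests for the synchronous path can assert on the returned value directly; tests for the asynchronous path have to observe the effect of the event, which is one of the sources of complexity discussed below.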
Further factors complicate the testing landscape. First, the increase in surface area and integration between services means that there are more ways an app can fail. You also have to account for network failures. And microservices can involve a lot of asynchronous choreography, which results in brittle, flaky tests that take too much effort to maintain.
Despite these complications, there are still ways to test microservices. Let’s cover common testing strategies now. We’ll also briefly cover tips and tricks and what you should watch out for.
Combine these strategies based on your needs to create an overarching strategy that improves deployment confidence. Consider what works for your product and organization through metrics, and don’t follow a generic recommendation.
1. Unit Tests
At the base of every testing pyramid, we find the humble unit test. Unit tests run fast and validate functions, methods, or classes within a microservice. The speed makes them ideal candidates for frequent local runs and the first step in any automated testing pipeline.
Your unit tests validate the business logic, state, and output of functions or methods. They also validate the interactions between a microservice's internal components, often called the plumbing.
As unit tests make up the bulk of most testing strategies, let’s review a few recommendations.
Unit Testing Recommendations
Two important characteristics of unit tests are speed and coverage. Unit tests must run quickly; your engineering team should set a target for total suite runtime, typically measured in seconds. Coverage ideally falls between 80 and 100%.
As mentioned above, unit tests can test state or interactions. When testing functionality that involves state, use real objects and functions. For interaction (or plumbing) tests, use mocks, test doubles, and spies to ensure that the coordination of processes works correctly.
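The two styles can be illustrated side by side. In this minimal sketch (the `Cart` class and its collaborator are invented for illustration), one test checks state with real objects, while the other uses a mock to verify an internal interaction.

```python
import unittest
from unittest.mock import Mock

class Cart:
    """Hypothetical cart logic; names are assumptions for this example."""
    def __init__(self, notifier=None):
        self.items = []
        self.notifier = notifier

    def add_item(self, sku, price):
        self.items.append((sku, price))
        if self.notifier:
            self.notifier.item_added(sku)  # internal "plumbing" call

    def total(self):
        return sum(price for _, price in self.items)

class CartTests(unittest.TestCase):
    def test_total(self):
        # State-based test: real object, verify the output.
        cart = Cart()
        cart.add_item("sku-1", 10.0)
        cart.add_item("sku-2", 5.5)
        self.assertEqual(cart.total(), 15.5)

    def test_notifier_called_on_add(self):
        # Interaction ("plumbing") test: mock the collaborator,
        # verify the coordination happened.
        notifier = Mock()
        cart = Cart(notifier=notifier)
        cart.add_item("sku-1", 10.0)
        notifier.item_added.assert_called_once_with("sku-1")

# Run both tests; debug() raises immediately on any failure.
CartTests("test_total").debug()
CartTests("test_notifier_called_on_add").debug()
```

Note how the second test is coupled to the implementation: rename the `item_added` call and the test breaks even though the cart still works, which is exactly the maintenance cost described below.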
You may find that collaborative (or plumbing) tests don’t provide much value, and you may want to limit them. When determining which tests to invest in, consider the effort in creating and maintaining them. Ultimately, unit tests should not need to change when the underlying implementation changes. However, with plumbing tests, you almost always change the tests with the code, even when functionality doesn’t change.
Though testing strategies for microservices vary, it’s fairly unusual to see a codebase that doesn’t have more unit tests than any other tests.
2. Integration Tests
Next, let’s take a look at integration tests. In short, integration tests validate how our microservices integrate with dependencies.
For example, integration tests can validate that the integration with dependencies like the database or cache works correctly. These tests can validate how the service reacts when dependencies respond with errors or are unreachable.
Typically, we run into problems when integration tests try to validate the functionality of a dependency. So to be explicit, integration tests validate the communication between your service and a dependency or even a framework. Therefore, they should not attempt to validate that a dependency works correctly. To help keep that separation clear, we use mocks, test doubles, or spies.
Integration tests can be a bit slower than unit tests if they require a framework like Spring to start up, but execution time should still be quick. These tests usually don't run repeatedly during local development. Instead, they run before pushing code to CI and again as part of the CI pipeline.
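One way to test the communication itself, rather than the dependency's logic, is to run a fake dependency and exercise real network calls against it. This sketch uses only the Python standard library; the inventory endpoint and function names are invented for illustration.

```python
import json
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeInventoryHandler(BaseHTTPRequestHandler):
    """Stand-in for a real dependency (e.g. an inventory service)."""
    def do_GET(self):
        if self.path == "/stock/sku-1":
            body = json.dumps({"sku": "sku-1", "count": 7}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(503)  # simulate a failing dependency
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def stock_count(base_url, sku):
    """Code under test: returns a count, or None on dependency errors."""
    try:
        with urllib.request.urlopen(f"{base_url}/stock/{sku}") as resp:
            return json.loads(resp.read())["count"]
    except urllib.error.URLError:
        return None

server = HTTPServer(("127.0.0.1", 0), FakeInventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

happy = stock_count(base, "sku-1")    # real HTTP round trip
missing = stock_count(base, "other")  # dependency responds with an error
server.shutdown()

assert happy == 7
assert missing is None
```

The test validates that the service parses a real response correctly and degrades gracefully on a 503; it makes no claims about whether the actual inventory service computes stock correctly.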
3. Component Tests
Component tests validate subsystems of a microservice. Some teams treat unit and component tests as the same thing. Typically, component tests cover the functionality of many unit tests, ensuring that a particular business function or flow works correctly. For example, a component test might verify the onboarding process for a new customer within a microservice that manages the customer.
Component tests also validate the service in isolation from its real dependencies. These tests typically use test doubles, mocks, or fake external services that mimic actual dependencies. They can include testing tools that record and play back transactions that occur in a staging or production environment.
Depending on the complexity of your microservice, you may not need many component tests. The functionality could be covered by unit tests and integration tests.
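As a minimal sketch of the idea, a component test exercises a whole business flow inside the service while fakes stand in for every external dependency. All class and method names here are invented for illustration.

```python
class InMemoryCustomerRepo:
    """Fake standing in for the real database."""
    def __init__(self):
        self.saved = {}
    def save(self, customer):
        self.saved[customer["email"]] = customer

class FakeEmailGateway:
    """Fake standing in for the real email provider."""
    def __init__(self):
        self.sent = []
    def send_welcome(self, email):
        self.sent.append(email)

class OnboardingService:
    """The subsystem under test: the customer onboarding flow."""
    def __init__(self, repo, email_gateway):
        self.repo = repo
        self.email = email_gateway
    def onboard(self, name, email):
        customer = {"name": name, "email": email, "status": "active"}
        self.repo.save(customer)
        self.email.send_welcome(email)
        return customer

repo, gateway = InMemoryCustomerRepo(), FakeEmailGateway()
service = OnboardingService(repo, gateway)
result = service.onboard("Ada", "ada@example.com")

# Assert on the business outcome of the whole flow,
# not on individual units.
assert result["status"] == "active"
assert "ada@example.com" in repo.saved
assert gateway.sent == ["ada@example.com"]
```

Unlike the unit tests earlier, this test survives internal refactoring as long as the onboarding flow still produces a saved, active customer and a welcome email.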
4. Contract Tests
For our next testing type, we'll review contract tests. Contract testing validates that APIs work as expected. For example, contract tests may exercise third-party dependencies to ensure the contract doesn't break with new releases and remains backward compatible.
In practice, contract tests might not fully prevent errors involving API contracts. Contract tests often simply notify you of a broken contract, which may be out of your control. However, they still provide value in knowing that a newer dependency or API version will continue to work correctly.
Since this typically implies a third party changed the contract, you’re giving yourself an early warning signal to quickly change or ask for backward compatibility.
Contract tests only test inputs and outputs. Ideally, the consumer of the API creates the contract test, and the producer of the API runs the test as part of their deployment pipeline. In that scenario, contract tests can prevent broken contracts before the dependency ships. Unfortunately, the same team that creates the contract often maintains the contract test. And if they update their tests to match the contract, issues can fall through and affect consumers of the API. To avoid this scenario, consider contract tests as static to an API version, and add additional tests when the contract changes.
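As a minimal consumer-side sketch (written without a contract-testing framework; the API shape is invented for illustration), the contract pins the fields and types the consumer relies on for a given API version, and the provider runs the check in its pipeline against a real response:

```python
# Fields and types the consumer depends on for v1 of a hypothetical
# customer API. This dict should stay static for the lifetime of v1.
CUSTOMER_V1_CONTRACT = {
    "id": int,
    "email": str,
    "status": str,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    # Extra fields are allowed (backward compatible); missing or
    # re-typed fields break the contract.
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )

# Compatible provider response: adds a field, keeps the contract.
ok = satisfies_contract(
    {"id": 1, "email": "a@b.com", "status": "active", "plan": "pro"},
    CUSTOMER_V1_CONTRACT,
)
# Breaking response: "id" changed from int to str.
broken = satisfies_contract(
    {"id": "1", "email": "a@b.com", "status": "active"},
    CUSTOMER_V1_CONTRACT,
)
assert ok is True
assert broken is False
```

Keeping the contract object static per API version captures the recommendation above: when the contract changes, add a new versioned contract and test rather than editing the old one to match.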
5. End-to-End/Acceptance Tests
Next, let’s consider end-to-end, or acceptance, tests. These tests use actual dependencies whenever possible. They’re the final check that we haven’t broken vital functionality for our customers. End-to-end tests increase our confidence that significant issues are caught before going to production.
As a best practice, keep these tests coarse and only test the most likely success paths and a couple of common error paths. When it comes to these acceptance tests, the system is a black box. And validation on state changes or events happens at the perimeter of the system. To further simplify these tests, consider excluding the UI, and test a headless build of the system to reduce issues with complex interactions.
In microservices, the main difficulty here involves flaky or brittle tests. If a complete business function involves multiple microservices, small changes throughout the system can make the tests unreliable and prone to error—especially for asynchronous processes that spawn events and fire-and-forget interactions. Minimize the number of acceptance tests to the most likely scenarios to reduce flakiness.
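One common way to reduce that flakiness is to poll for the expected state with a timeout instead of asserting immediately after an asynchronous action. This is a sketch with an in-process stand-in for the system under test; the order-system names are invented for illustration.

```python
import time

def eventually(predicate, timeout=5.0, interval=0.1):
    """Poll until predicate() is truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())

class FakeOrderSystem:
    """Stand-in for an async system: the order only reaches 'shipped'
    after simulated background work completes."""
    def __init__(self):
        self._placed_at = None
    def place_order(self):
        self._placed_at = time.monotonic()
    def order_status(self):
        if self._placed_at and time.monotonic() - self._placed_at > 0.3:
            return "shipped"
        return "processing"

system = FakeOrderSystem()
system.place_order()
# Asserting order_status() == "shipped" immediately would fail
# intermittently; polling at the perimeter makes the test stable.
assert eventually(lambda: system.order_status() == "shipped")
```

The validation happens purely at the system's perimeter, treating everything behind `order_status()` as a black box.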
6. Performance Tests
Many organizations also need performance tests. These can be run automatically, on a schedule, or ad hoc, depending on the organization's maturity. If you have a system with large variances in load, you may want to automate stress and load tests as part of a deployment cycle or periodic schedule.
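The shape of an automated load check can be sketched with the standard library alone: fire N concurrent requests at an operation and assert on a latency percentile. A real load test would target a deployed endpoint with a dedicated tool; the operation and threshold below are assumptions for illustration.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the operation under load."""
    start = time.monotonic()
    time.sleep(0.01)  # simulate work
    return time.monotonic() - start

N = 50
# Drive concurrent load and collect per-request latencies.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(lambda _: handle_request(), range(N)))

# quantiles(n=100) yields 99 cut points; index 94 is the 95th percentile.
p95 = statistics.quantiles(latencies, n=100)[94]
assert len(latencies) == N
assert p95 < 1.0  # coarse SLO-style check; threshold is an assumption
```

Wiring a check like this into a deployment pipeline or schedule turns latency regressions into failed builds rather than production surprises.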
7. Manual or Exploratory Tests
And finally, let’s talk about manual or exploratory testing.
First, don’t rely on manual testing to ship code. Most tests should be automated, and manual testing should be left for a quick confidence boost before shipping.
With exploratory testing, we want to answer the question “What if?” For example, what if the customer didn’t follow the normal flow of operation? What if the data is in a bad state when a process occurs? This type of testing helps find unusual issues or even feature improvements and relies heavily on the tester’s imagination. Exploratory testing should happen periodically but often doesn’t happen on a regular schedule.
We’ve covered many test strategies and considerations. In the end, you’ll have to experiment with what tests you perform and how many of each type. Which tests add no value and require heavy maintenance? Which tests repeatedly catch issues that weren’t caught during development?
With all these tests, automation is critical. It’s important to automate executions and reporting and make the results visible to all.
You can add visibility to both targets and actuals in your OpsLevel Service Maturity levels, making it easy to see if a microservice is on track. By tailoring your testing strategy to the specifics of your microservice ecosystem, you can incorporate the right kinds of tests in the right ratios to gain the confidence you and your engineers need to ship quickly and frequently.
This post was written by Sylvia Fronczak. Sylvia is a software developer who has worked in various industries with various software methodologies. She's currently focused on design practices that the whole team can own, understand, and evolve over time.