Context
In today’s world, we develop systems that interact with applications developed by other teams or people. This is even more the case in the world of Microservices and Serverless setups. The systems we develop might in some cases be consumers of other services, while in others they are providers consumed by other people’s services.
No matter whether we are building providers (systems that provide and share data) or consumers (systems that fetch data from another system and use it), making everything work perfectly when we hit production is a constant difficulty. It is made even harder by the fact that teams can develop and push applications to production at different frequencies.
What we do today to mitigate this challenge
This challenge isn’t new and is not unique to the microservices era.
As soon as we have two systems that need to communicate, there is the question of how to keep both implementations in sync. If both systems are developed by the same team, it is easier, but the problem still exists. If the systems are developed by different teams, some form of synchronization needs to exist between the developers.
The common approach and its limitations
The most logical solution to this problem is to have a clear specification of the interactions and for both teams to cover it with tests.
In the majority of cases we are dealing with APIs, so the use of some sort of API specification is expected, for example the OpenAPI spec.
Teams might go with my favorite approach to development: specification first.
In this case, once both teams agree on the specification, each team can go its own way and start developing according to it.
They can also use the API spec to create mocks that can be used during development. This helps when teams move at different speeds, and allows providers and consumers to be built in different programming languages.
Since quality is important, teams write unit tests and integration tests (sometimes also called end-to-end tests) to validate that everything is good before reaching production.
Things, of course, can still go wrong.
Things break during the integration testing phase
The first problem that can happen is that things break during the integration phase, in a pre-prod or acceptance environment (different names can be used).
This might seem like a strange problem to have, since catching problems before they hit production is exactly what we want. The problem isn’t the fact that things broke; the problem is figuring out which team made a mistake and where: the team building the API or the team consuming it.
Looking at the errors and the agreed API specification should, in most cases, tell us which team needs to go back to the development phase.
The limitation is that in most cases this needs to be done manually, by someone looking at the errors and the specifications. For this to be as painless as possible, people also need to be pedantic about making sure the specification reflects the real state of things, that changes are communicated to all parties, and that all parties know which is the latest spec to use.
In the past, I have encountered teams that were flawless at this, and at the same time I have seen teams who wrote specifications a few years ago and never bothered to update them.
So this can be anywhere from a trivial to a huge issue.
In the end, the more teams invest in getting this part right, the fewer issues they will have down the road.
Things break in production
The reality is that things can still break in production.
This usually leads to the revelation that some use case wasn’t covered in the testing phase by either team: insufficient quality and a lack of testing.
Again, resolving this problem might be anywhere from trivial to very complex, depending on the use case and the problem at hand.
The bigger problem is the fact that the error landed in production and was exposed to end consumers.
As systems grow in complexity and in the number of interactions between them, the possibility of hitting this issue grows exponentially.
In the end, we as developers need to think of and write tests to cover all these use cases, and since there will be a lot of variations, overlooking some of them isn’t so unlikely.
Changes need to be made to the API specification
Whenever there is a need to modify or enhance existing APIs, it can be anywhere from a trivial to an almost impossible task.
If you are adding new endpoints and new ways for people to interact with the API, it usually shouldn’t be a problem. However, when you want to modify an existing endpoint or remove an old one, there is always the doubt of whether you have identified all consumers and told them that the endpoint is changing or being removed.
Unless you have good monitoring, there is also the question of whether those endpoints are used at all. It might be that no one is actually calling the endpoint, in which case removing it won’t really cause problems.
How Pact and Contract testing try to solve this problem
Now that we have a better understanding of what we want to solve, let us look at Pact and Contract testing and how they can help.
In a nutshell, there should be a contract agreed upon between the provider and the consumer that defines their interaction.
The difference between a specification and this contract is subtle, yet very important. A specification tries to document everything that a provider exposes to the world: what a request should look like and what the response will look like in each case. As already discussed, the fact that some API has a certain endpoint doesn’t guarantee that anyone will call it. The contract, on the other hand, describes only the interactions a consumer actually relies on.
In the case of Pact, it all starts with the consumer.
How it works
The consumer creates unit tests using the Pact DSL and maps its side of the interaction with the provider. This can be an API-based interaction or an interaction defined over messages.
Once this is done, the consumer can run the unit tests, and the Pact implementation will create a mock provider that replies to the consumer’s calls according to what was defined in the Pact DSL. This keeps things automated and makes it easy to spot when the consumer makes a mistake.
The good thing about Pact is that it is language agnostic, and there are implementations in multiple languages, so consumers and providers can be implemented in different technologies.
In the case of Java, we can look in more detail at how an example looks in JUnit5 at https://docs.pact.io/implementation_guides/jvm/consumer/junit5, or, if you use JUnit4, at the example here: https://docs.pact.io/implementation_guides/jvm/consumer/junit
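To make this more concrete, here is a minimal sketch of what such a consumer test can look like with the Pact JVM JUnit5 support. The names UserProvider and UserConsumer, the /users/42 endpoint, the provider state and the returned fields are illustrative assumptions rather than anything prescribed by Pact, and exact annotations can vary slightly between Pact JVM versions.

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "UserProvider") // hypothetical provider name
class UserConsumerPactTest {

    // The consumer describes the interaction it expects from the provider.
    @Pact(provider = "UserProvider", consumer = "UserConsumer")
    public RequestResponsePact getUserPact(PactDslWithProvider builder) {
        return builder
                .given("user 42 exists")                  // illustrative provider state
                .uponReceiving("a request for user 42")
                .path("/users/42")                        // hypothetical endpoint
                .method("GET")
                .willRespondWith()
                .status(200)
                .headers(Map.of("Content-Type", "application/json"))
                .body(new PactDslJsonBody()
                        .integerType("id", 42)
                        .stringType("name", "Jane Doe"))
                .toPact();
    }

    // Pact starts a mock provider; the consumer code is pointed at it.
    @Test
    @PactTestFor(pactMethod = "getUserPact")
    void shouldFetchUser(MockServer mockServer) throws Exception {
        // In a real test this would be your own HTTP client code.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(mockServer.getUrl() + "/users/42")).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}

Running such a test verifies the consumer’s expectations against the Pact mock provider and, when it passes, generates the Pact file discussed next.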
Once the unit tests run and pass, a Pact file will be created. This file captures, as JSON, the interactions defined with the Pact DSL and can then be sent over to the provider.
The consumer has created a contract describing how it will interact with the provider, informing the provider of its intentions.
The provider then uses the Pact framework to run all Pact files as unit tests and validate that everything will be good. The Pact framework plays the role of the mock consumer, sending the appropriate requests to the provider and validating whether the responses meet the expectations written in the Pact file.
An example of provider testing can be seen here: https://docs.pact.io/implementation_guides/jvm/provider/junit5
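As a rough sketch, and again with illustrative assumptions (the provider name UserProvider, a provider instance running on localhost:8080, Pact files placed in a local pacts folder), a JUnit5 provider verification test can look like this:

import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.loader.PactFolder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;

@Provider("UserProvider")   // must match the provider name used by the consumer
@PactFolder("pacts")        // load Pact files shared by consumers from a local folder
class UserProviderPactTest {

    @BeforeEach
    void setTarget(PactVerificationContext context) {
        // Point Pact at the running provider (assumed here on localhost:8080).
        context.setTarget(new HttpTestTarget("localhost", 8080));
    }

    // One test is generated per interaction found in the Pact files; Pact replays
    // the request and compares the real response with the contract.
    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyPact(PactVerificationContext context) {
        context.verifyInteraction();
    }

    // Prepares the provider state referenced by the consumer's given(...) clause.
    @State("user 42 exists")
    void userExists() {
        // e.g. insert the test user into a database or stub the repository
    }
}

In real projects the Pact files are often fetched from a Pact Broker rather than a local folder, but the verification flow stays the same.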
All of this is done in an automated way, using mocks and unit tests, which allows everything to be run on a developer machine or as part of the pipeline.
Conclusion
As we saw, Pact and Contract testing are not a replacement for the things we already have in our toolbox. Instead, they are a very useful addition that can help us solve some tricky problems we face in day-to-day life, like easily identifying potential integration problems without the need for full environments, since everything is done in unit tests. They also make sure that when we make changes on the provider side, we don’t accidentally break anything on the consumer side.
Pact has implementations in multiple programming languages, which increases the value of having it in our toolbox.
Resources – Additional read
- http://www.itshark.xyz/posts/2022/12/20/How_to_Tackle_the_Pyramid_of_Quality_in_the_Real_World
- https://docs.pact.io/
- https://docs.pact.io/implementation_guides/jvm/consumer/junit5
- https://docs.pact.io/implementation_guides/jvm/provider/junit5
- https://swagger.io/specification/
Author: Vladimir Dejanovic
Founder and leader of AmsterdamJUG.
JavaOne Rock Star, CodeOne Star speaker
Storyteller
Software Architect, Team Lead and IT Consultant working in the industry since 2006, developing high-performance software in multiple programming languages and technologies, from desktop to mobile and web with high-load traffic.
Enjoys developing software mostly in Java and JavaScript, but has also written a fair share of code in Scala, C++, C, PHP, Go, Objective-C, Python, R, Lisp and many others.
Always interested in cool new stuff and Free and Open Source software.
Likes giving talks at conferences such as JavaOne, Devoxx BE, Devoxx US, Devoxx PL, Devoxx MA, Java Day Istanbul, Java Day Minsk, Voxxed Days Bristol, Voxxed Days Bucharest, Voxxed Days Belgrade, Voxxed Days Cluj-Napoca and others.