JVM Advent

Events: Love Triangle in Integration Testing

Some years ago, events appeared in the IT world with the idea of decoupling applications and microservices, improving performance, reducing complexity, and allowing flows to change. Some companies adopted this new paradigm instead of the classic synchronous model, where the client needs to wait until the provider answers; initially, everything looked fine because it solved many problems and provided the flexibility to create or change flows without trouble.

Developers adopted this new paradigm quickly, but it introduced a new problem directly related to the type of communication: in the past, most tests executed an HTTP request and waited for the response to validate the results, but now this approach is not possible because things happen asynchronously. Some questions may come to mind: How do I verify that the application published an event? How long should a test wait for a message to appear? How do I check what happened in the database after an event was processed?

Depending on your application, many other questions could arise. However, this reveals that changing one communication approach to another involves more than adding dependencies and creating topics.

In this article, you will learn different approaches to tackle these problems of testing asynchronous communication in a way that is agnostic to the framework or library you use in your application.

CONTEXT OF THE SITUATION

Imagine that you work for a travel agency selling flights; this scenario allows different situations to happen on the same microservice. Your team is responsible for developing and maintaining one microservice, which manages all the information about the different reservations. The microservice is simple because it offers a CRUD for all the standard operations. Still, there is one particular consideration: the reservation service needs to listen to all the payment events to confirm or cancel the record.

The following figure represents the most relevant flows on the microservice and the things you need to consider to validate them.

Different flows could happen in the system.

Considering these different scenarios, you have several minor problems that combine into a huge one; the idea is to tackle them in parts to keep the solution simple and reusable.

NOTE: You can access this GitHub repository to find the complete solution.

PROBLEM #1 – HOW TO CREATE THE TESTS?

The first problem is choosing a library that helps to test the application in an agnostic way. There are tons of different testing libraries, but one of the most recognized is Karate, which uses Gherkin, the same syntax Cucumber uses behind the scenes, to write tests. This library has some advantages, like the simple syntax used to write tests and the possibility of integrating with other tools like Playwright or Gatling to cover different types of tests.

To use this library, you first need to add the dependency on your application. The following block represents how to do it on a Maven project:

<dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-junit5</artifactId>
    <version>${karate.version}</version>
    <scope>test</scope>
</dependency>

As a recommendation, always check which version of this library is the latest on the official webpage or in a repository like Maven Central.
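The ${karate.version} placeholder assumes a property declared elsewhere in your pom.xml; a minimal sketch (the version number is only illustrative, so verify the latest release first):

```xml
<properties>
    <!-- Illustrative value: check the latest Karate release before using it -->
    <karate.version>1.4.1</karate.version>
</properties>
```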

The next step is to define the request you will use to create a reservation. A good practice is to define an external file containing the request’s body so you can reuse it for multiple scenarios and keep the tests simple. Let’s create a file called create_reservation_request.json in the test’s resources folder with the following information:

{
  "passengers" : [
    {
      "firstName" : "Andres",
      "lastName" : "Sacco",
      "documentNumber" : "31434284",
      "documentType" : "DNI",
      "birthday" : "1985-01-01"
    }
  ],
  "itinerary" : {
    "segment" : [
      {
        "origin" : "BUE",
        "destination" : "MIA",
        "departure" : "2023-12-31",
        "arrival" : "2024-01-01",
        "carrier" : "AA"
      }
    ],
    "price" : {
      "totalPrice" : 30.0,
      "totalTax" : 20.0,
      "basePrice" : 10.0
    }
  }
}

The request is not the only file you must create; a good practice is validating the entire response. To do this, let’s make a file called  create_reservation_response.json, which contains the following information:

{
  "id" : "#notnull",
  "passengers" : [
    {
      "firstName" : "Andres",
      "lastName" : "Sacco",
      "documentNumber" : "31434284",
      "documentType" : "DNI",
      "birthday" : "1985-01-01"
    }
  ],
  "itinerary" : {
    "segment" : [
      {
        "origin" : "BUE",
        "destination" : "MIA",
        "departure" : "2023-12-31",
        "arrival" : "2024-01-01",
        "carrier" : "AA"
      }
    ],
    "price" : {
      "totalPrice" : 30.0,
      "totalTax" : 20.0,
      "basePrice" : 10.0
    }
  },
  "creationDate" : "#notnull",
  "status" : "CREATED"
}

As you can see, the response looks like a regular JSON file but with some strange little things like #notnull. The idea is to declare these fields as wildcards so Karate only does a simple validation, checking presence or type and ignoring the actual values. There are many other wildcards for other types of validation, so check the official documentation to obtain more information.
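To give an idea of what else is possible, the following hypothetical fragment combines a few of the other markers described in the Karate documentation (#string, #number, #uuid, #regex, and the ## prefix for optional fields); the frequentFlyerCode field is invented purely for illustration:

```json
{
  "id" : "#uuid",
  "firstName" : "#string",
  "documentNumber" : "#regex [0-9]{8}",
  "totalPrice" : "#number",
  "frequentFlyerCode" : "##string"
}
```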

The core of this problem is creating a simple test that makes a POST request to a specific endpoint using Karate. Create a file called create_reservation.feature, which contains the following information:

Feature: Create a new reservation

  Background:
    * def api_URL = 'http://localhost:8080/api/'
    * def response_ok = read('./response/create_reservation_response.json')
    * def request_ok = read('./request/create_reservation_request.json')

  Scenario: Persist the information
    Given url api_URL + 'reservation'
    And request request_ok
    And header Accept = '*/*'
    And header Content-Type = 'application/json'
    When method POST
    Then status 201
    And match response == response_ok

In the Background section of the previous block of code, some variables use the read function, which is native to Karate and loads the content of a file. The syntax is simple, but let’s summarize the idea of each part in the following table:

Keyword – Description

Feature – Contains a high-level description of the idea behind the different scenarios.

Background – Indicates something that needs to be done before executing one or multiple tests. It’s a good idea to put all the variables you will use across the scenarios inside it.

Scenario – Represents a particular test or case that you want to validate.

Given – Describes the initial context or preconditions for the scenario. It is the starting point of the test.

When – Specifies the action or event that triggers the behavior in the scenario.

Then – Describes the expected outcome or result of the action performed in the When step.

And – Used to chain multiple Given, When, or Then steps in a scenario.

def – Declares a variable.

match – Validates something in the response. You can validate the entire response or just one attribute.

The last part of this problem is to create a class responsible for executing all the Karate files; the class always needs to use the annotation @Karate.Test to indicate that the method runs Karate features instead of a regular JUnit test.

import com.intuit.karate.junit5.Karate;

class APITest {

    @Karate.Test
    Karate runAllTests() {
        return Karate.run("flow/create_reservation.feature").tags("~@ignore").relativeTo(getClass());
    }
}

As you can see in the class, the route and the file name indicate which tests to run, but if you prefer, you can omit the name, and Karate will scan the directories and find all files with the .feature extension.
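The tags("~@ignore") filter in the runner excludes any scenario marked with the @ignore tag, which is handy for parking work-in-progress scenarios; a sketch of how that tag could look in a feature file:

```gherkin
@ignore
Scenario: Work in progress, skipped by the runner
  Given url api_URL + 'reservation'
  When method GET
  Then status 200
```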

SIMULATING external communications

One particularity of this testing scenario is that the endpoint that creates a reservation interacts with another microservice. Thus, you have two options: interact with the real microservice in a nonproductive environment or use a tool to mock or simulate its behavior.

The best option is always to simulate the behavior of the external communications; to do this, you can use tools like Microcks, Hoverfly, or Wiremock. There are many reasons for choosing one over another, but for its simplicity, this article uses Wiremock.

The first thing to do is modify or create a docker-compose file with the image of Wiremock exposing the same port as the actual application, reducing the number of changes you need to introduce.

version: "3.1"
services:

  api-catalog:
    image: wiremock/wiremock:2.32.0
    ports:
      - 6070:8080
    volumes:
      - ./wiremock:/home/wiremock
    restart: always

The next step is to declare the different stubs or mocks that Wiremock needs to return depending on the parameters and the URL the application invokes. To do this, let’s create a file called operation-success.json, which will contain the following information:

{
  "mappings": [
    {
      "request": {
        "method": "GET",
        "urlPath": "/api/flights/catalog/city/BUE",
        "headers": {
          "Content-Type": {
            "equalTo": "application/json"
          }
        }
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "bodyFileName": "api-catalog/response/response-BUE.json"
      }
    },
    {
      "request": {
        "method": "GET",
        "urlPath": "/api/flights/catalog/city/MIA",
        "headers": {
          "Content-Type": {
            "equalTo": "application/json"
          }
        }
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "bodyFileName": "api-catalog/response/response-MIA.json"
      }
    }
  ]
}

The file contains all the stubs; each stub has a request part that the tool tries to match against the incoming request and a response part that it returns when the match succeeds. As you can see, you need to indicate the URL, HTTP method, and some headers on the request to have a granular stub that only matches this specific request.
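Conceptually, the matching works like the following simplified Java sketch (an illustration of the idea only, not Wiremock’s actual implementation):

```java
import java.util.Map;

public class StubMatcher {

    // Simplified version of how a stub decides whether it matches a request:
    // the method, the path, and every header listed in the stub must be equal.
    static boolean matches(String method, String urlPath, Map<String, String> headers,
                           String stubMethod, String stubPath, Map<String, String> stubHeaders) {
        if (!method.equals(stubMethod) || !urlPath.equals(stubPath)) {
            return false;
        }
        // Every header declared in the stub must be present with the same value
        return stubHeaders.entrySet().stream()
                .allMatch(e -> e.getValue().equals(headers.get(e.getKey())));
    }

    public static void main(String[] args) {
        Map<String, String> request = Map.of("Content-Type", "application/json");
        Map<String, String> stub = Map.of("Content-Type", "application/json");

        System.out.println(matches("GET", "/api/flights/catalog/city/BUE", request,
                "GET", "/api/flights/catalog/city/BUE", stub)); // true
        System.out.println(matches("POST", "/api/flights/catalog/city/BUE", request,
                "GET", "/api/flights/catalog/city/BUE", stub)); // false
    }
}
```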

In the response section, you could indicate the entire response or create files containing the response. To do this, let’s make a file called response-BUE.json with the following information:

{
  "name": "Buenos Aires",
  "code": "BUE",
  "timeZone": "America/Argentina/Buenos_Aires"
}

As you can see, the file format is just a simple JSON file with nothing strange, so you can copy and paste a real response from the application.

There are two different requests in this scenario, so you need to create two files, one for each mapping, considering that you don’t always want to return the same response.

Let’s create a file called response-MIA.json with the following information to complete the scenario:

{
  "name": "Miami",
  "code": "MIA",
  "timeZone": "America/New_York"
}

If you want more information about the parameters you could indicate on the request, look at the official documentation, particularly this link. At the same time, if you need more information about the format of the response on the mock, you can read this article.

PROBLEM #2 – How to simulate events?

Most applications use events to communicate with one another, and in many cases, they use tools like Kafka. Still, when most of the infrastructure is based on AWS (Amazon Web Services), companies often decide to use that cloud provider’s tools, such as SQS/SNS. The problem with this second approach is that it is impossible to use the real infrastructure without affecting the environment, so you need to find a way to reduce the impact on all the applications of running a simple integration test.

In this case, an excellent approach to solving the problem is using a tool like Localstack. This tool emulates most of the services that exist in AWS, so you get more or less the exact behavior of the real infrastructure, although with some limitations.

There are many ways to use Localstack, but a possible approach is to create a Dockerfile that configures everything for us instead of doing it on each docker-compose file. So let’s make it with the following information:

FROM localstack/localstack:0.14.5

ENV SERVICES=sns,sqs DEBUG=1 DEFAULT_REGION=us-east-1 HOSTNAME_EXTERNAL=localhost DOCKER_HOST=unix:///var/run/docker.sock

VOLUME /docker-entrypoint-initaws.d/
VOLUME /var/run/docker.sock

EXPOSE 4566

The problem when you use Localstack is that you need to run some specific commands, like creating queues or topics, with the AWS CLI inside the container; this is a problem because someone would have to run those commands each time you want to execute the tests. A solution is to create a script that runs at the container’s start and creates everything you need, like the topics and queues. Let’s create a file called init.sh with the following information:

#!/usr/bin/env bash

set -euo pipefail

# enable debug
# set -x

aws configure set aws_access_key_id "test"
aws configure set aws_secret_access_key "test"

echo "configuring sns/sqs"
echo "==================="
# https://gugsrs.com/localstack-sqs-sns/
LOCALSTACK_HOST=localhost
AWS_REGION=us-east-1
LOCALSTACK_DUMMY_ID=000000000000

get_all_queues() {
awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sqs list-queues
}

create_queue() {
local QUEUE_NAME_TO_CREATE=$1
awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sqs create-queue --queue-name ${QUEUE_NAME_TO_CREATE} --attributes FifoQueue=true,ContentBasedDeduplication=true
}

get_all_topics() {
awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sns list-topics
}

create_topic() {
local TOPIC_NAME_TO_CREATE=$1
awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sns create-topic --name ${TOPIC_NAME_TO_CREATE} --attributes FifoTopic=true,ContentBasedDeduplication=true
}

link_queue_and_topic() {
local TOPIC_ARN_TO_LINK=$1
local QUEUE_ARN_TO_LINK=$2
awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sns subscribe --topic-arn ${TOPIC_ARN_TO_LINK} --protocol sqs --notification-endpoint ${QUEUE_ARN_TO_LINK} --attributes RawMessageDelivery=true
}

guess_queue_arn_from_name() {
local QUEUE_NAME=$1
echo "arn:aws:sqs:${AWS_REGION}:${LOCALSTACK_DUMMY_ID}:$QUEUE_NAME"
}

guess_topic_arn_from_name() {
local TOPIC_NAME=$1
echo "arn:aws:sns:${AWS_REGION}:${LOCALSTACK_DUMMY_ID}:$TOPIC_NAME"
}

PAYMENTS_IN_PROCESS_QUEUE_NAME="payments_in_process.fifo"
PAYMENTS_CONFIRMED_QUEUE_NAME="payments_confirmed.fifo"
RESERVATION_CONFIRMED_TOPIC_NAME="reservation_confirmed.fifo"
ASSERTIONS_QUEUE_NAME="reservation_confirmed-assertions.fifo"

echo "creating queue: $PAYMENTS_IN_PROCESS_QUEUE_NAME"
QUEUE_ARN=$(create_queue ${PAYMENTS_IN_PROCESS_QUEUE_NAME})
echo "created queue: $QUEUE_ARN"


echo "creating queue: $PAYMENTS_CONFIRMED_QUEUE_NAME"
QUEUE_ARN=$(create_queue ${PAYMENTS_CONFIRMED_QUEUE_NAME})
echo "created queue: $QUEUE_ARN"


echo "creating topic: $RESERVATION_CONFIRMED_TOPIC_NAME"
TOPIC_ARN=$(create_topic ${RESERVATION_CONFIRMED_TOPIC_NAME})
echo "created topic: $TOPIC_ARN"

echo "creating queue: $ASSERTIONS_QUEUE_NAME"
QUEUE_ARN=$(create_queue ${ASSERTIONS_QUEUE_NAME})
echo "created queue: $QUEUE_ARN"


echo "linking topic $RESERVATION_CONFIRMED_TOPIC_NAME to queue $ASSERTIONS_QUEUE_NAME"
LINKING_RESULT=$(link_queue_and_topic $(guess_topic_arn_from_name $RESERVATION_CONFIRMED_TOPIC_NAME) $(guess_queue_arn_from_name $ASSERTIONS_QUEUE_NAME))
echo "linking done:"
echo "$LINKING_RESULT"


echo "all topics are:"
echo "$(get_all_topics)"

echo "all queues are:"
echo "$(get_all_queues)"

After configuring Localstack with the topics and queues, it’s time to create a docker-compose file to run the tests, so that you can run them many times without depending on a real database or the network. The docker-compose file will contain the Localstack image and the database, as appears on the following block:

version: "3.1"
services:
  localstack:
    build: localstack/
    ports:
      - 4566:4566
    volumes:
      - ./localstack/:/docker-entrypoint-initaws.d/
      - /var/run/docker.sock:/var/run/docker.sock

  api-reservation-db:
    image: mongo:5
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: muppet
      MONGO_INITDB_DATABASE: flights_reservation
    ports:
      - 27017:27017

  api-catalog:
    image: wiremock/wiremock:2.32.0
    ports:
      - 6070:8080
    volumes:
      - ./wiremock:/home/wiremock
    restart: always

One possible way to execute all the containers when you run the tests is using Testcontainers, which has support for many databases and brokers but can also manage docker-compose files. The library is not just for Java; there are versions for other languages like .NET, Rust, and Go.

To use Testcontainers, you must first add the dependency to your POM file. The latest version is available in the official documentation.

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>${testcontainers.version}</version>
    <scope>test</scope>
</dependency>

The next step is to create a base class that all the tests in the application can extend. Consider that this class will contain the annotations related to Testcontainers to indicate that there are docker containers to manage inside.

Let’s create a class called BaseTest with the following content:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.springframework.boot.test.context.SpringBootTest;
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.io.File;

@Testcontainers
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
public class BaseTest {

    static DockerComposeContainer dockerComposeContainer = new DockerComposeContainer(
            new File("src/test/resources/docker/docker-compose.yml"))
            .waitingFor("localstack", Wait.forLogMessage(".*all queues are.*\\n", 1))
            .waitingFor("api-reservation-db",
                    Wait.forLogMessage(".*MongoDB init process complete; ready for start up.*\\n", 1))
            .withLocalCompose(true);

    @BeforeAll
    static void setUp() {
        dockerComposeContainer.start();
    }

    @AfterAll
    static void tearDown() {
        dockerComposeContainer.stop();
    }
}

In the previous block of code, some remarkable things appear, like the wait strategies in the container declaration; the idea is not to run anything until all the containers, like the database and the queues, are ready. Consider that you need to modify the APITest class to extend BaseTest to get all the benefits of the containers.

Before writing any test to check the behavior of the events, there is one slight modification: let’s change the application.yml to connect with Localstack instead of the real infrastructure. To do this, you only need to change the AWS endpoint and the location of the queues, as appears on the following block:

spring:
  main:
    allow-bean-definition-overriding: true
  data:
    mongodb:
      uri: "mongodb://root:muppet@localhost/flights_reservation?authSource=admin"
  cloud:
    aws:
      endpoint: http://localhost:4566
      region:
        static: us-east-1
      credentials:
        access-key: test
        secret-key: test

events:
  queues:
    payments-in-process: http://localhost:4566/000000000000/payments_in_process.fifo
    payments-confirmed: http://localhost:4566/000000000000/payments_confirmed.fifo
  topics:
    reservation-confirmed: arn:aws:sns:us-east-1:000000000000:reservation_confirmed.fifo

The first step after all the modifications is to create the scenario to send a message to a queue, so let’s create a file with the name payment_confirmed_query.txt, which contains a message like the following:

Action=SendMessage&MessageBody=${reservation_id}&QueueUrl=http%3A%2F%2Flocalhost%3A4566%2F000000000000%2Fpayments_confirmed.fifo&MessageGroupId=group-id
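The QueueUrl parameter in the body is percent-encoded. As a quick sanity check, you can reproduce (or verify) that encoding with the JDK’s own URLEncoder; this is just a sketch, not part of the project:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QueueUrlEncoding {
    public static void main(String[] args) {
        // The plain queue URL that Localstack exposes for the FIFO queue
        String queueUrl = "http://localhost:4566/000000000000/payments_confirmed.fifo";

        // URLEncoder produces the form-encoded value used in the request body
        String encoded = URLEncoder.encode(queueUrl, StandardCharsets.UTF_8);

        System.out.println(encoded);
        // http%3A%2F%2Flocalhost%3A4566%2F000000000000%2Fpayments_confirmed.fifo
    }
}
```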

The next step is to create a file called reservation_confirmed_query.txt, which represents the message that the application will send after doing some task. The file content looks like this:

Action=ReceiveMessage&VisibilityTimeout=10&MaxNumberOfMessages=1

The last step is to create a file called payment_confirmed.feature, which contains the test itself. The idea of this scenario is to create a reservation on the application; after that, send a message that the application will listen to and process; and at the end, check if a new message appears on another topic.

Feature: Check the process of confirming the payments

  Background:
    * def api_URL = 'http://localhost:8080/api/'
    * def localstack_URL = 'http://localhost:4566/000000000000/payments_confirmed.fifo'
    * def localstack_assertions_URL = 'http://localhost:4566/000000000000/reservation_confirmed-assertions.fifo'

  Scenario: Check the confirmation of the payments
    # Create reservation
    * def response_ok = read('./response/create_reservation_response.json')
    * def request_ok = read('./request/create_reservation_request.json')

    Given url api_URL + 'reservation'
    And request request_ok
    And header Accept = '*/*'
    And header Content-Type = 'application/json'
    When method POST
    Then status 201
    * def reservationId = response.id
    And match response == response_ok

    # Send message to the queue
    * def payment_event = read('./events/payment_confirmed_query.txt')
    * replace payment_event.${reservation_id} = reservationId

    Given url localstack_URL
    And header Content-Type = 'application/x-www-form-urlencoded'
    And request payment_event
    When method POST
    Then status 200

    # Check if the message exists
    * def assertion_event = read('./events/reservation_confirmed_query.txt')
    * replace assertion_event.${reservation_id} = reservationId

    * print localstack_assertions_URL + '?' + assertion_event

    Given url localstack_assertions_URL + '?' + assertion_event
    And retry until karate.match("response/ReceiveMessageResponse/ReceiveMessageResult/Message/Body == '#present'").pass == true
    When method GET
    Then status 200
    And match response/ReceiveMessageResponse/ReceiveMessageResult/Message/Body == '#present'

As you can see, the first part of the test looks similar to the scenario in Problem #1. Still, there are some differences, like reading a file with the message of an event, replacing one variable in that message, and doing a POST to the URL that Localstack exposes for services like SQS/SNS.

The last part of the test checks whether the execution result is okay. To do this, it verifies that a new message exists on a specific topic. The test retries until the message appears, considering that this is not a synchronous scenario, so the message could take some seconds to appear on the topic.
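Outside Karate, the same retry-until-present idea can be sketched in plain Java; the helper below is hypothetical and only illustrates the pattern of polling until a value appears or the attempts run out:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class RetryUntilPresent {

    // Polls the supplier until it yields a value or the attempts run out,
    // mirroring the 'retry until' behavior used for async assertions.
    static <T> Optional<T> poll(Supplier<Optional<T>> source, int attempts, long waitMillis) {
        for (int i = 0; i < attempts; i++) {
            Optional<T> result = source.get();
            if (result.isPresent()) {
                return result;
            }
            try {
                Thread.sleep(waitMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Fake "queue" that only has a message on the third read
        int[] calls = {0};
        Supplier<Optional<String>> fakeQueue =
                () -> ++calls[0] < 3 ? Optional.empty() : Optional.of("reservation-confirmed");

        System.out.println(poll(fakeQueue, 10, 10).orElse("no message"));
        // prints: reservation-confirmed
    }
}
```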

Consider that if a test does not consume the messages on the queues, you should create some mechanism to remove them, because leftover messages could affect the execution of another test.

PROBLEM #3 – How to check the database?

The last part of this scenario, and one of the most relevant, is checking what happens in the database after some flow is executed. There are multiple approaches to solving this problem, like using an existing endpoint or creating a new one just for the testing process. Still, as you can imagine, that approach is impossible or not a good idea in many cases. In this situation, the best alternative is to access the database directly to obtain the information and check that everything is okay.

Not all databases have a tool that provides a REST API solving all the problems related to accessing the database, like the query language and the way to obtain the data. However, in the case of MongoDB, a tool called Restheart connects with the database and exposes a simple REST interface to access the different collections and documents.

The first step is to add to the docker-compose.yml the image of Restheart connected to the database, which, in this case, is part of the same docker-compose file.

  api-reservation-db-rest:
    image: softinstigate/restheart:6.3.3
    ports:
      - 8082:8080
    volumes:
      - ./restheart:/opt/restheart/etc
    depends_on:
      - api-reservation-db

After that, you must create two files containing all the tool’s configuration. It’s unnecessary to change many things in either file, but let’s start with the file default.properties, which you can download from here.

The second file is restheart.yml, which you can download from here. It contains most of the tool’s default configuration with just one minor modification, the URI, as appears in the following block.

mongo-uri: mongodb://root:muppet@api-reservation-db/flights_reservation?authSource=admin

With these modifications, if you run the docker-compose file and try to access localhost:8082, you will see all the collections in the database, as appears on the following image:

Restheart output

The last step after all the modifications is to create the scenario to check the database, so let’s create a file with the name payment_in_process_query.txt, which contains a message like the following:

Action=SendMessage&MessageBody=${reservation_id}&QueueUrl=http%3A%2F%2Flocalhost%3A4566%2F000000000000%2Fpayments_in_process.fifo&MessageGroupId=group-id

The next step is to define, in a file called in_process_reservation_database.json, the JSON content that represents a document in the database:

{
  "_id": {
    "$oid": "#notnull"
  },
  "passengers": [
    {
      "first_name": "Andres",
      "last_name": "Sacco",
      "document_number": "31434284",
      "document_type": "DNI",
      "birthday": {
        "$date": 473396400000
      }
    }
  ],
  "itinerary": {
    "segment": [
      {
        "origin": "BUE",
        "destination": "MIA",
        "departure": "2023-12-31",
        "arrival": "2024-01-01",
        "carrier": "AA"
      }
    ],
    "price": {
      "total_price": "30.0",
      "total_tax": "20.0",
      "base_price": "10.0"
    }
  },
  "_class": "com.twa.reservations.model.Reservation",
  "creation_date": {
    "$date": "#notnull"
  },
  "status": "IN_PROCESS"
}

The last part of creating the scenario is developing the test that uses all these files. Let’s make a file with the name payment_in_process.feature, which contains the reservation creation and sends the event that modifies the information in the database, as you can see in the following block of code.

Feature: Check the processing of the payment

  Background:
    * def api_URL = 'http://localhost:8080/api/'
    * def restheart_URL = 'http://localhost:8082/flights_reservation/reservation/'
    * def localstack_URL = 'http://localhost:4566/000000000000/payments_in_process.fifo'

  Scenario: Check the creation and processing of payment events
    # Create reservation
    * def response_ok = read('./response/create_reservation_response.json')
    * def request_ok = read('./request/create_reservation_request.json')

    Given url api_URL + 'reservation'
    And request request_ok
    And header Accept = '*/*'
    And header Content-Type = 'application/json'
    When method POST
    Then status 201
    * def reservationId = response.id
    And match response == response_ok

    # Send message to the queue
    * def payment_event = read('./events/payment_in_process_query.txt')
    * replace payment_event.${reservation_id} = reservationId

    Given url localstack_URL
    And header Content-Type = 'application/x-www-form-urlencoded'
    And request payment_event
    When method POST
    Then status 200

    # Check the modification in the database
    * configure retry = { count: 10, interval: 3000 }
    * def change_database = read('./database/in_process_reservation_database.json')

    Given url restheart_URL + reservationId
    And retry until karate.match("response contains change_database").pass == true
    When method GET
    Then status 200
    And match response == change_database

As you can see, the last step of the test is to check the data in the database with a series of retries until the result is correct.

After all these changes related to the different problems in this article, the last step is to run the tests to see what happens. To do this, you only need to run the command mvn test, as appears on the following block:

~$ mvn test
..................
10:59:13.794 [main] INFO tc.docker-compose - Docker Compose has finished running
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.16 s -- in com.twa.reservations.APITest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ---------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ---------------------------------------------------------
[INFO] Total time: 29.020 s
[INFO] Finished at: 2024-11-21T10:59:17-03:00
[INFO] ---------------------------------------------------------

The execution result shows that all the tests pass without problems or require some fundamental infrastructure. If you want to see the results in another way, remember that Karate generates a report during each execution to simplify the validation process for each test step. 

The graphical result of the previous execution looks like the following image:

Results of Karate Execution

If you click on any of these rows, you will obtain all the information about one particular test, including all the steps, as shown in the following image.

Details of a Karate Test

Remember that both options, CLI and UI, contain the same information, just represented in different ways.

WHAT’S NEXT?

There are many resources about topics connected with other types of testing, but only a few tackle the problems associated with creating applications that use events and databases, so it is worth searching for articles, books, and talks that cover event-driven architecture and asynchronous communication in depth. Consider that whatever you find is just a small part of all the available material; if something is unclear, look for another video or resource.

CONCLUSION

Leveraging events on a platform can be highly beneficial, offering advantages such as enabling parallel collaboration with other teams and allowing the flexibility to adapt workflows. However, you need to find a way to validate all the possible scenarios related to this new paradigm before deploying something new to production, because the risk of something going wrong is higher than with traditional approaches.

There are tons of different databases, and not all of them have a tool that exposes the information as a simple REST API; if that is your case, think of an alternative, like creating an endpoint that exposes the information and replacing the request to Restheart with a request to that API.

One last thing related to the selection of technologies: There is no unique way to solve the issue of using events. Still, all your decisions must be documented and discussed with other partners or colleagues. Hence, I suggest using ADR (Architecture Decision Record), an excellent way to track architectural choices.

Author: Andres Sacco

Andres Sacco has been a developer since 2007, working in different languages, including Java, PHP, NodeJs, Scala, and Kotlin. His background is mostly in Java and the libraries and frameworks associated with this language. In most of the companies he worked for, he researched new technologies to improve the performance, stability, and quality of each company’s applications. In 2017 he started to look for new ways to optimize the transfer of data between applications to reduce infrastructure costs, and he suggested some actions, some applicable to all the microservices and others to just a few. All this work concluded with the creation of a series of theoretical-practical projects, which are available on Manning.com. Recently he published a book with Apress about the latest version of Scala, as well as a set of theoretical-practical projects about uncommon ways of testing, like architecture tests and chaos engineering. He has taught internal courses to different audiences, including developers, business analysts, and commercial people, and he participates as a technical reviewer for books from the publishers Manning, Apress, and Packt.