The Art and Benefits of Computing Eventfully

Science is what we understand well enough to explain to a computer. Art is all the rest.

Donald E. Knuth, Things a Computer Scientist Rarely Talks About

The term “event” has become overloaded in today’s computing world: event streaming, event processing, event messaging, event sourcing, event storming, event-driven architecture, and so on. Each represents a different aspect of eventful computing. As with anything, there are challenges, but the benefits of the event-based approach should outweigh the obstacles in the long run, and it will prove to be a viable and dynamic solution for today’s modern systems that are “hungry” for data.

A Brief Look Back in Time

The event-driven style of programming appeared in the early days of computing, when the need arose to respond to external inputs. These inputs primarily came from user interactions with the machine, such as clicking the mouse, resizing a window, or pressing a key on the keyboard. Inputs also came from devices, for example at the instruction-set level, where events complement interrupts.

As we move up the computing layers closer to the current era, we see event-based models used practically everywhere. A common example is the Model-View-Controller (MVC) pattern, whose variants are at the heart of graphical user interface (GUI) design. Briefly speaking, the controller mediates between the model and the view: it takes input from the user and passes the data to the model for processing; in turn, it receives the results from the model and passes them to the view for rendering.
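To make that event-handling flavor of MVC concrete, here is a minimal, toolkit-agnostic sketch in Java. The class and method names (Model, View, Controller, onUserEvent) are purely illustrative and not taken from any particular GUI framework:

// Hypothetical MVC sketch: the controller reacts to a user event,
// asks the model to process the input, and hands the result to the view.
class Model {
    String process(String input) {
        return input.toUpperCase();           // some domain processing
    }
}

class View {
    void render(String result) {
        System.out.println("Rendering: " + result);
    }
}

class Controller {
    private final Model model = new Model();
    private final View view = new View();

    // Called whenever a user event (e.g. a button click) arrives
    void onUserEvent(String input) {
        String result = model.process(input); // pass the data to the model
        view.render(result);                  // pass the result to the view
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        Controller controller = new Controller();
        controller.onUserEvent("hello");      // simulate a user interaction
    }
}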

Events are indeed ubiquitous in almost every aspect of computing. In fact, we live in an eventful world. Everything that happens in our lives is an event: the birth of a baby, a World Cup game, a rock star performing at a concert, starting a new job, and so on. However, our minds have been conditioned to think procedurally from our initial training with computers. In order to adapt to the event-driven world, we need to shift our thinking to a different level.

The Current Cloud Era

Fast forward to the present time: event-style programming and systems design are no longer dealing with a single processor. The explosive expansion of the cloud and the exponential growth of data from all imaginable sources have elevated event-based computing to new heights. These days we are handling messages between disparate devices at massive scale and velocity, possibly across different geographical areas. Event streaming refers to the ongoing delivery of data streams to their destinations in near real time. The beauty of the approach is that no time is wasted in between: data gets ingested and processed as it arrives.

As a hands-on tech person you may be asking: “All this talk won’t buy me anything, show me some code!” So let’s look at a very basic example of the publish-and-subscribe messaging pattern used by Apache Pulsar, a cloud-native event streaming platform. The same pattern is used by a number of other messaging and event streaming platforms, such as Apache Kafka, MQTT brokers, Google Pub/Sub, and so on. For this particular example, in order to simplify the setup, I am using DataStax’s Astra Streaming, a managed Apache Pulsar cloud platform. (Note: anyone can register with the Astra platform and get $25 of free-tier access.)

The Publish/Subscribe Messaging Pattern

The Pub/Sub messaging pattern is one of the most efficient approaches for transmitting messages from a sender to one or more receivers. Notice that there can be more than one receiver. Because of the loose coupling between the sender and the receiver(s), this setup scales extremely well. At the heart of the approach is the broker, a stateless component that primarily handles message dispatching and delivery. The sender, also referred to as the publisher, does not send its message directly to the receiver(s) but publishes it to a topic; it does not even need to know who or where the receivers are. It lets the broker handle delivery to the receiver(s) accordingly. It is up to the receivers, or subscribers, to subscribe to the topic(s) in order to receive the messages that are of interest to them.
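To illustrate the decoupling at the core of the pattern, independent of any particular platform, here is a minimal, hypothetical in-memory broker sketch in Java. The names (ToyBroker, PubSubSketch) are made up for illustration only; real brokers like Pulsar add persistence, acknowledgements, and network transport on top of this idea:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy in-memory broker: publishers and subscribers only know topic names, never each other.
class ToyBroker {
    private final Map<String, List<Consumer<String>>> subscribersByTopic = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> subscriber) {
        subscribersByTopic.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(subscriber);
    }

    void publish(String topic, String message) {
        // The broker dispatches the message to every subscriber of the topic
        subscribersByTopic.getOrDefault(topic, List.of()).forEach(s -> s.accept(message));
    }
}

public class PubSubSketch {
    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        broker.subscribe("movies", msg -> System.out.println("Subscriber A got: " + msg));
        broker.subscribe("movies", msg -> System.out.println("Subscriber B got: " + msg));
        broker.publish("movies", "Hello World"); // the publisher never references the subscribers
    }
}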

Publisher / Producer

Writing an Apache Pulsar publisher client does not involve too many steps, as illustrated in the following example of a basic producer:

  • Establish the client connection (PulsarClient)
  • Create the publisher client (Producer)
    • associate it with the topic name
  • Send the message (note: here we use the asynchronous sendAsync(); send() is the one to use for a synchronous send. A short follow-up example appears right after the code.)
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

import java.io.IOException;

public class SimpleProducer {

    private static final String SERVICE_URL = "pulsar+ssl://pulsar-gcp-uscentral1.streaming.datastax.com:6651";

    // Placeholder: supply your own Astra Streaming / Pulsar token here
    private static final String YOUR_PULSAR_TOKEN = "<your-pulsar-token>";

    public static void main(String[] args) throws IOException {

        // Create the client object
        PulsarClient client = PulsarClient.builder()
                .serviceUrl(SERVICE_URL)
                .authentication(
                    AuthenticationFactory.token(YOUR_PULSAR_TOKEN)
                )
                .build();

        // Create a producer on the topic
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://mg-twitch-tenant1/astracdc/data-9569760f-a558-4db4-8b05-7b50e57cdf94-mgtwitchkeyspace.movies_and_tv")
                .create();

        // Send a message to the topic (asynchronously)
        producer.sendAsync("Hello World".getBytes());

        // Close the producer
        producer.close();

        // Close the client
        client.close();
    }
}
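Because sendAsync() returns a CompletableFuture, a real producer would typically attach callbacks to the result rather than ignoring it. The following is a sketch of what that could look like, reusing the producer from above; the log messages are illustrative only:

        // Sketch: handle the CompletableFuture returned by sendAsync()
        producer.sendAsync("Hello World".getBytes())
                .thenAccept(messageId -> System.out.println("Published message with ID: " + messageId))
                .exceptionally(throwable -> {
                    System.err.println("Failed to publish: " + throwable.getMessage());
                    return null;
                });

When throughput matters, this non-blocking style is generally preferable to calling the synchronous send() for every single message.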

Subscriber / Consumer

Likewise, putting together a basic Apache Pulsar consumer client involves only a few steps:

  • Establish the client connection (PulsarClient)
  • Create the consumer client (Consumer)
    • associate it with the topic name
  • Loop until a message arrives, then consume and acknowledge it (a listener-based alternative is sketched after the code)
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class SimpleConsumer {

    private static final String SERVICE_URL = "pulsar+ssl://pulsar-gcp-uscentral1.streaming.datastax.com:6651";

    // Placeholder: supply your own Astra Streaming / Pulsar token here
    private static final String YOUR_PULSAR_TOKEN = "<your-pulsar-token>";

    public static void main(String[] args) throws IOException {

        // Create the client object
        PulsarClient client = PulsarClient.builder()
                .serviceUrl(SERVICE_URL)
                .authentication(
                    AuthenticationFactory.token(YOUR_PULSAR_TOKEN)
                )
                .build();

        // Create a consumer on the topic with a subscription
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("mg-twitch-tenant1/astracdc/data-9569760f-a558-4db4-8b05-7b50e57cdf94-mgtwitchkeyspace.movies_and_tv")
                .subscriptionName("my-subscription")
                .subscribe();

        boolean receivedMsg = false;
        // Loop until a message is received
        do {
            // Block for up to 1 second waiting for a message
            Message<byte[]> msg = consumer.receive(1, TimeUnit.SECONDS);

            if (msg != null) {
                System.out.printf("Message received: %s%n", new String(msg.getData()));

                // Acknowledge the message to remove it from the message backlog
                consumer.acknowledge(msg);

                receivedMsg = true;
            }

        } while (!receivedMsg);

        // Close the consumer
        consumer.close();

        // Close the client
        client.close();
    }
}
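Instead of polling with receive(), a consumer can also register a MessageListener when it is built and let the Pulsar client invoke a callback for each incoming message. The following is a sketch of that push style, assuming the same client, topic, and subscription name (and imports) as in SimpleConsumer above:

        // Sketch: a push-style consumer using a MessageListener callback
        Consumer<byte[]> listeningConsumer = client.newConsumer()
                .topic("mg-twitch-tenant1/astracdc/data-9569760f-a558-4db4-8b05-7b50e57cdf94-mgtwitchkeyspace.movies_and_tv")
                .subscriptionName("my-subscription")
                .messageListener((c, msg) -> {
                    System.out.printf("Message received: %s%n", new String(msg.getData()));
                    c.acknowledgeAsync(msg); // acknowledge without blocking the listener thread
                })
                .subscribe();

This keeps the subscriber fully event-driven: messages are pushed to the callback as they arrive, which fits the spirit of eventful computing described above.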

Closing Thoughts

This article has only scratched the surface of one of the most fascinating subject areas in computing: event computing, which may not be talked about as much as other, more “glamorous” areas in the market. The reason could be that event computing often addresses the “hidden” spots and solves problems behind the scenes, much like the plumbing pipes hidden underneath a building.

Think of events as the air that we breathe, without which there would be no life. So it is with event-driven systems, in which all of the moving parts (messages, streams, pipelines, connectors, transformers, remediators, etc.) operate independently, without direct dependencies on one another, and yet, as a whole, come together to produce the coherent results that we expect.

Image: Water Flowing Over Derwent Dam by Tim Hallam is licensed under CC-BY-SA 2.0

Author: Mary Grygleski

Currently the AI Practice Lead at Callibrity. She will always be a passionate, globe-trotting technical advocate in topic areas such as Java, open source, cloud, event “stuff” such as streams and data pipelines, and now also AI/ML, including GenAI. By night, you will find her busy as an active tech community builder, the President of the Chicago JUG, and an assistant organizer of the Chicago chapter of the GenAI Collective. She has been grateful to be a Java Champion since 2021.
