Java in 2015 – Major happenings

2015 was a year in which the Java language, platform, ecosystem and community continued to dominate the software landscape, with only JavaScript having a similar-sized impact on the industry. In case you missed the highlights of 2015, here are some of the major happenings that occurred.

Java 20 years old and still not dead yet!

Java turned 20 this year and swept back to the top of the Tiobe index in December 2015. Although the Tiobe index is hardly a 100% peer-reviewed scientific methodology, it is seen as a pretty strong barometer for the health of a language/platform. So what the heck happened to boost Java so dramatically again?

Firstly, Java 8, released the previous year, was adopted by mainstream enterprise Java shops. The additional functional capabilities of lambdas, combined with the new Streams API and enhancements to the collections framework, breathed new life into the language. Although Java 8 is not as rich in its feature set as, say, Scala or Python, it is seen as the steady workhorse that now has at least some feature parity with more aggressive languages. Enterprises love a stable platform, and it’s unlikely that Java will be disappearing any time soon.
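If you haven’t yet tried the new style, a few lines give a taste of it (the snippet below is purely illustrative and not from any particular project):

import java.util.Arrays;
import java.util.List;

public class Java8Taste {
    public static void main(String[] args) {
        List<String> languages = Arrays.asList("Java", "Scala", "Python", "JavaScript");

        // A lambda plus the Streams API: filter and transform declaratively
        languages.stream()
                 .filter(name -> name.startsWith("J"))
                 .map(String::toUpperCase)
                 .forEach(System.out::println); // prints JAVA and JAVASCRIPT
    }
}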

Secondly, Java has become a strong platform for infrastructure frameworks. Many popular NoSQL and data grid solutions, such as Apache Cassandra and Hazelcast, are written in Java, again due to its stability and strong threading and networking support. CI tools such as Jenkins are widely adopted, and of course business productivity tools such as Atlassian’s JIRA are again Java based.

Oracle guts its Java evangelism team

Oracle fired much of its Java evangelism team just before JavaOne, which wasn’t the greatest PR move by the stewards of Java. Over the subsequent months it became clearer that this wasn’t a step by Oracle to reduce its engineering efforts in Java, but there were nervous times for much of the community as they feared the worst. A salient reminder that big corporations don’t always get their left hand talking to their right!

Java 9 delay announced

In the “We’re not really surprised” bucket came the announcement that Java 9 will be delayed until March 2017 in order to ensure that the new modularisation system will not break the millions of Java applications running out there today.

Although the technical work of Jigsaw is progressing nicely, the entire ecosystem will need to test on the new system. The Quality group in OpenJDK is leading this effort. I highly recommend you contact them to be part of the early access and feedback loop.

OpenJDK supports further mobile platforms

The creation of the OpenJDK mobile project came as a surprise to many, and although it doesn’t represent a change in Oracle’s business direction, it was a welcome release of code to enable Java on ARM, Android and iOS platforms. There’s much technical work to do, but it will be interesting to watch whether the software community at large picks up on this new support and tries Java out as a language for the iOS and Android platforms in 2016 and beyond. There is a possibility that OpenFX (JavaFX) combined with Java mobile on iOS or Android may entice a slew of developers to this ‘new’ platform.

Was I right about 2015?

It’s always fun to look at past predictions, so let’s see how I did!

  1. I expected 2015 to be a little bit quieter. Well, I clearly got that wrong! Despite no major releases for ME, SE or EE, the excitement of celebrating 20 years of Java and a surge of new developers using Java 8 meant 2015 was busier than ever.
  2. Embracing JavaScript for the front end. This trend continues, and stacks such as JHipster show the new love affair that Java developers have with JavaScript.
  3. DevOps toolchains to the fore. Docker continues to steamroll ahead in terms of popularity, and Java developers are especially starting to use Docker in test environments to avoid polluting environments with variations in Java runtimes, web servers, data stores, etc.
  4. IoT and Java to be a thing. Nope, not yet! Perhaps in 2016, with the new mobile Java project in OpenJDK and further refinement of Java ME, we may start to see serious inroads.

I’m not going to make any predictions for 2016 as I clearly need to stick to my day job :-)

One final important note. Project Jigsaw is the modularisation story for Java 9 that will massively impact tool vendors and day-to-day developers alike. The community at large needs your help to test out early builds of Java 9 and to help OpenJDK developers and tool vendors ensure that IDEs, build tools and applications are ready for this important change. You can join us in the Adoption Group at OpenJDK. I hope everyone has a great holiday break – I look forward to seeing the Twitter feeds and the GitHub commits flying around in 2016 :-).

Martijn (CEO – jClarity, Java Champion & Diabolical Developer)

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Quick Web App Prototyping with Spring Boot & MongoDB

Back in one of my previous projects I was asked to produce a little contingency application. The schedule was tight and the scope simple. The in-house coding standard is PHP, so trying to get a classic Java EE stack in place would have been a real challenge and, to be really honest, completely oversized. So, what then? I took the chance and gave Spring a try. I had used it before, but only in old versions, hidden away in the tech stack of the portal software I was plagued with at the time.

My goal was to have something the WebOps could simply put on a server with Java installed and run. No fiddling with dozens of XML configurations and memory fine-tuning; just as easy as java -jar application.jar.
It was the perfect call for “Spring Boot”. This Spring project is all about making it easy to bring you, the developer, up to speed, and taking away the need for loads of configuration and boilerplate coding.

Another thing my project was crying out for was document-oriented data storage. I mean, the main purpose of the application was to offer a digital version of a real-world paper form. So why create a relational mess if we can represent the document as a document?! I had used MongoDB in a couple of small projects before, so I decided to go with it.

What has this got to do with this article? Well, I will show you how quickly you can bring together all the bits and pieces needed for a web application. Spring Boot will make a lot of things fairly easy and will keep the code minimal. And at the end you will have a JAR file, which is executable and can be deployed by just dropping it onto a server. Your WebOps will love you for it.

Let’s imagine we are about to create the next big product administration web application. As it is the next big thing, it needs a big name: Productr (this is the reason I am a software engineer and not in sales or marketing…).
Productr will do amazing things and this article will show you its early stages, which are:

  • providing a simple REST interface to query all available products
  • loading these products from a MongoDB
  • providing a production-ready monitoring facility
  • displaying all products by using a JavaScript UI

All you need to start is:

  • Java 8
  • Maven
  • Your favourite IDE (IntelliJ, Eclipse, vi, edlin, a butterfly…)
  • A browser (ok, or Internet Explorer / MS Edge, but who would really want this?!)

And for the impatient, the code is also available on GitHub.

Let’s get started

Create a pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>demo</groupId>
    <artifactId>productr</artifactId>
    <version>1.0-SNAPSHOT</version>

    <!-- The Spring Boot parent brings in tested default dependencies and
         plugin configuration (the version number here is illustrative) -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.0.RELEASE</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- Enables `mvn spring-boot:run` and executable JAR packaging -->
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>





In these few lines a lot of stuff is already happening. Most important is the defined parent project. This will bring us a lot of useful and needed dependencies like logging, the Tomcat runtime and lots more. Thanks to Spring’s modularity, everything is re-configurable via pom.xml or dependency injection. For getting everything up quickly the defaults are absolutely fine. (Convention over configuration, anybody?)

Now, create the obligatory Maven folder structure:

mkdir -p src/main/java src/main/resources src/test/java src/test/resources

And we are settled.

Start the engines

Let’s get to work. We want to offer a REST interface to get access to our huge amount of products. So let’s start with creating a REST collection available under /api/products. To do so we have to do a few things:

  1. Our “data model” holding all information about our incredible products needs to be created
  2. We need a controller offering a method which does everything necessary to answer a GET request
  3. Create the main entry point for our application

The data model is pretty simple and done quickly. Just create a package called demo.model and a class called Product in it. The Product class is very straightforward:

package demo.model;

import java.io.Serializable;

/**
 * Our very important and sophisticated data model
 */
public class Product implements Serializable {

    String productId;
    String name;
    String vendor;

    public String getProductId() {
        return productId;
    }

    public void setProductId(String productId) {
        this.productId = productId;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getVendor() {
        return vendor;
    }

    public void setVendor(String vendor) {
        this.vendor = vendor;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Product product = (Product) o;

        if (getProductId() != null ? !getProductId().equals(product.getProductId()) : product.getProductId() != null)
            return false;
        if (getName() != null ? !getName().equals(product.getName()) : product.getName() != null) return false;
        return !(getVendor() != null ? !getVendor().equals(product.getVendor()) : product.getVendor() != null);
    }

    @Override
    public int hashCode() {
        int result = getProductId() != null ? getProductId().hashCode() : 0;
        result = 31 * result + (getName() != null ? getName().hashCode() : 0);
        result = 31 * result + (getVendor() != null ? getVendor().hashCode() : 0);
        return result;
    }
}

Our product has the incredible number of three properties: an alphanumeric product ID, a name and a vendor (just the vendor’s name, to keep things simple). It is serialisable, and the getters, setters and the equals() & hashCode() methods are implemented using my IDE’s code generation.

Alright, so now to create a controller with a method to serve the GET requests. Go back to your favourite IDE and create the package demo.controller and a class called ProductsController with the following content:

package demo.controller;

import demo.model.Product;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping("/api/products")
public class ProductsController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        List<Product> products = new ArrayList<>();

        return products;
    }
}


This is really everything you need to provide a REST interface. Ok, at the moment, an empty list is returned, but it is that easy to define.

The last thing missing is an entry point for our application. Just create a class called ProductrApplication in the package demo and give it the following content:

package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * This is the entry point of our application
 */
@SpringBootApplication
public class ProductrApplication {

    public static void main (String... opts) {
        SpringApplication.run(ProductrApplication.class, opts);
    }
}


Spring Boot saves us a lot of keystrokes. @SpringBootApplication does a few things we would need for every web application anyway. This annotation is shorthand for the following ones:

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan

Now it is time to start our application for the first time. Thanks to Spring Boot’s maven plugin, which we configured in our pom.xml, starting the application is as easy as: mvn spring-boot:run. Just run this command in your project root directory. You prefer the lazy point-n-click way provided by your IDE? Alright, just instruct your favourite IDE to run ProductrApplication.

Once it is started, use a browser, a REST client (you should check out Postman, I love this tool) or a command line tool like curl. The address you are looking for is: http://localhost:8080/api/products/. So, with curl, the command looks like this:

curl http://localhost:8080/api/products/

Data please

Ok, returning an empty list isn’t that shiny, is it? So let’s bring in data.
In many projects a classic relational database is usually overkill (and painful if you have to use it AND scale out). This may be one reason for the hype around NoSQL databases. One (in my opinion good) example is MongoDB.

Getting MongoDB up and running is pretty easy. On Linux you can use your package manager to install it. For Debian / Ubuntu, for example, simply do: sudo apt-get install mongodb.

For Mac, the easiest way is homebrew: brew install mongodb and follow the instructions in the “Caveats” section.

Windows users should go with the MongoDB installer (and fingers crossed).

Alright, we just got our data store sorted. It is about time to use it.
There is one particular Spring project dealing with data, called Spring Data. And by sheer coincidence a sub-project called Spring Data MongoDB is just waiting for us. Even better, Spring Boot provides a dependency package to get up to speed instantly. No wonder the following few lines in the pom.xml’s <dependencies> section are enough to bring in everything we need:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
Now, create a new package called demo.domain and put in a new interface called ProductRepository. Spring provides a pretty neat way to get rid of the code which is usually needed to interact with a data source. Most of the basic queries are generated by Spring Data; all you need to do is define an interface. A couple of query methods are available without even declaring them. One example is the findAll() method, which will return all entries in the collection.
But hey, let’s see it in action instead of talking about it. The aforementioned ProductRepository interface should look like this:

package demo.domain;

import demo.model.Product;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 * This interface lets Spring generate a whole Repository implementation for
 * Products.
 */
public interface ProductRepository extends MongoRepository<Product, String> {

}


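As a side note, Spring Data can also derive richer queries from the method name alone. A hypothetical finder for our Product collection could look like this (findByVendor is an illustration, not part of the demo):

package demo.domain;

import demo.model.Product;
import org.springframework.data.mongodb.repository.MongoRepository;

import java.util.List;

public interface ProductRepository extends MongoRepository<Product, String> {

    // Spring Data derives the MongoDB query from the method name
    List<Product> findByVendor(String vendor);
}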
Next, create a class called ProductService in the same package. The purpose of this class is to actually provide some useful methods to query products. For now, the code is as easy as this:

package demo.domain;

import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * This is a little service class we will let Spring inject later.
 */
@Service
public class ProductService {

    @Autowired
    private ProductRepository repository;

    public List<Product> getProducts() {
        return repository.findAll();
    }
}


See how we can use repository.findAll() without even defining it in the interface? Pretty slick, isn’t it? Especially if you are in a hurry and need to get things up quickly.

Alright, so far we prepared the foundation for the data access. I think it is time to wire it together. To do so, simply head back to our class demo.controller.ProductsController and modify it slightly. All we have to do is to inject our shiny new ProductService service and call its getProducts() method. The class will look like this afterwards:

package demo.controller;

import demo.domain.ProductService;
import demo.model.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * This controller provides the REST methods
 */
@RestController
@RequestMapping("/api/products")
public class ProductsController {

    // Let Spring DI inject the service for us
    @Autowired
    private ProductService productService;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public List<Product> getProducts() {
        // Ask the data store for a list of products
        return productService.getProducts();
    }
}


That’s it. Start MongoDB (if not already running), start our application again (remember the mvn spring-boot:run thingy?!) and send another GET request to http://localhost:8080/api/products/:

$ curl http://localhost:8080/api/products/

Wait, still an empty list? Yes, or do you remember us putting anything into the database? Let’s change this by using the following command:

mongo localhost/test --eval "db.product.insert({productId: 'a1234', name: 'Our First Product', vendor: 'ACME'})"

This adds one product called “Our First Product” to our database. Ok, so what is our service returning now? This:

$ curl http://localhost:8080/api/products/
[{"productId":"5657654426ed9d921affc3c0","name":"Our First Product","vendor":"ACME"}]

Easy, wasn’t it?!

Looking for a little more data but no time to create it yourself? Alright, it’s nearly Christmas, so take my little test selection:

curl | mongoimport -d test -c product --jsonArray

Basic requirements at your fingertips

In today’s hectic days, and with the “microservice” culture spreading, it is getting harder and harder to keep an eye on what is really running on your servers or cloud environments. So in nearly all the environments I have worked on over the last few years, monitoring was a big thing. One common pattern is to provide health check endpoints. You can find everything from simple ping endpoints to health metrics returning a detailed overview of business-relevant metrics.
All of this is most of the time a copy-n-paste adventure and involves tackling a lot of boilerplate code. Here is what we have to do: simply add the following dependency to your pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
and restart the service. Let’s have a look at what happens if we query http://localhost:8080/health:

$ curl http://localhost:8080/health

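With MongoDB on board, the response will be a small JSON document along these lines (the values below are made up; the exact fields depend on which components Actuator detects on the classpath):

{"status":"UP","diskSpace":{"status":"UP","total":499046809600,"free":83382423552,"threshold":10485760},"mongo":{"status":"UP","version":"3.0.7"}}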
This should provide sufficient data for a basic health check. If you followed the startup log messages, you probably spotted a number of other endpoints. Experiment a bit and check the Actuator documentation for more information.

Show it to me

Ok, we got ourselves a REST service and some data. But we want to show this data to our users. So let’s go on and provide a page with an overview of our awesome products.

Thank Santa that there is a really active web UI community working on loads of nice and easily usable frontend frameworks and libraries. One pretty popular example is Bootstrap. It is easy to use and all the needed bits and pieces are provided via open CDNs.

We want to have a short overview of our products, so a table view would be nice. Bootstrap Table will help us with that. It is built on top of Bootstrap and also available via CDNs. What a world we live in…

But wait, where to put our HTML file? Spring Boot makes it easy, again. Just create a folder called src/main/resources/static and create a new HTML file called index.html with the following content:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- Import Bootstrap CSS from CDNs -->
    <link rel="stylesheet" href="//">
    <link rel="stylesheet" href="//">
</head>
<body>
<nav class="navbar navbar-inverse">
    <div class="container">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Productr</a>
        </div>
        <div id="navbar" class="collapse navbar-collapse">
            <ul class="nav navbar-nav">
                <li class="active"><a href="#">Home</a></li>
                <li><a href="#about">About</a></li>
                <li><a href="#contact">Contact</a></li>
            </ul>
        </div><!--/.nav-collapse -->
    </div>
</nav>
<div class="container">
    <table data-toggle="table" data-url="/api/products/">
        <thead>
        <tr>
            <th data-field="productId">Product Reference</th>
            <th data-field="name">Name</th>
            <th data-field="vendor">Vendor</th>
        </tr>
        </thead>
    </table>
</div>

<!-- Import Bootstrap, Bootstrap Table and JQuery JS from CDNs -->
<script src="//"></script>
<script src="//"></script>
<script src="//"></script>
</body>
</html>

This file isn’t very complex. It is just an HTML file which includes the minimised CSS files from the CDNs. If you see a reference like // for the first time: it is not a mistake that the protocol (http or https) is missing. A resource referenced that way will be loaded via the same protocol the main page was loaded with. Say, if you use http://localhost:8080/, it will use http: to load the CSS files.

The <body> block contains a navigation bar (using the HTML5 <nav> tag) and a table. The interesting part of this table definition is the provided data-url attribute. It is interpreted by Bootstrap Table to load the data. Our definition points to our previously created REST endpoint.
Which part of our JSON objects is used in which column is defined via the data-field attributes on the <th> definitions. Can you spot the matching attribute names?

Last but not least, we load the needed JavaScript libraries. All Bootstrap-related JavaScript functionality needs jQuery, so this is the first library to load, followed straight away by the main Bootstrap and the Bootstrap Table JavaScript files. Each of these library files is loaded in its minimised version, to keep download times to a minimum.

Where to go now

It is fair to say that we have a really simple web application now. Well, the main purpose of this article was to show you how to get up to speed with as little code as possible. You’ve seen that sometimes just a dependency in your POM file brings you a complete new feature, without the need of any additional line of code.
Take a step back, look at what we’ve built so far and think about the next steps needed. And just start to take a look around in the Spring universe.

I think one of the most crucial next steps, besides adding the missing tests, is to bring in security. Check out Spring Security and its subproject Spring Security OAuth.
More interested in “classic” web pages? Check out Spring MVC and how easy it is to integrate quite sophisticated template engines (e.g. by following this guide).

Hopefully you enjoyed this article as much as I enjoyed its creation. I wish you all a merry Christmas, and if one or the other of you wants to get in touch, you can find me e.g. on Twitter, G+ and LinkedIn.

Functional vs Imperative Programming. Fibonacci, Prime and Factorial in Java 8

There are multiple programming styles/paradigms, but two well-known ones are Imperative and Functional.

Imperative programming is the most dominant paradigm, as nearly all mainstream languages (C++, Java, C#) promote it. But in the last few years functional programming has started to gain attention. One of the main driving factors is that nearly all new computers ship with 4, 8, 16 or more cores, and it’s very difficult to write a parallel program in the imperative style that utilises all cores. The functional style moves this difficulty down to the runtime level and frees developers from this hard and error-prone work.

Wait! So what’s the difference between these two styles?

Imperative programming is a paradigm where you tell the machine/runtime how exactly to achieve the desired result, and which exact statements to execute.

Functional programming is a form of the declarative programming paradigm where you tell what you would like to achieve, and the machine/runtime determines the best way to do it.

Functional style moves the how part to the runtime level and helps developers focus on the what part. By abstracting the how part we can write more maintainable and scalable software.

To handle the challenges introduced by multicore machines and to remain attractive for developers, Java 8 introduced the functional paradigm alongside the imperative one.

Enough theory; let’s implement a few programming challenges in imperative and functional style using Java and see the difference.

➤ Fibonacci Sequence: Imperative vs Functional (The Fibonacci Sequence is the series of numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, … The next number is found by adding up the two numbers before it.)

Fibonacci Sequence in iterative and imperative style

public static int fibonacci(int number) {
  int fib1 = 1;
  int fib2 = 1;
  int fibonacci = fib1;
  for (int i = 2; i < number; i++) {
    fibonacci = fib1 + fib2;
    fib1 = fib2;
    fib2 = fibonacci;
  }
  return fibonacci;
}

for (int i = 1; i <= 10; i++) {
  System.out.print(fibonacci(i) + " ");
}
// Output: 1 1 2 3 5 8 13 21 34 55

As you can see, here we are focusing a lot on how (iteration, state) rather than what we want to achieve.

Fibonacci Sequence in iterative and functional style

IntStream fibonacciStream = Stream.iterate(
    new int[]{1, 1},
    fib -> new int[] {fib[1], fib[0] + fib[1]}
  ).mapToInt(fib -> fib[0]);

fibonacciStream.limit(10).forEach(fib ->  
    System.out.print(fib + " "));
// Output: 1 1 2 3 5 8 13 21 34 55 

In contrast, you can see here we are focusing on what we want to achieve.

➤ Prime Numbers: Imperative vs Functional (A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself.)

Prime Number in imperative style

public boolean isPrime(long number) {
  for (long i = 2; i <= Math.sqrt(number); i++) {
    if (number % i == 0) return false;
  }
  return number > 1;
}
isPrime(9220000000000000039L) // Output: true

Again here we are focusing a lot on how (iteration, state).

Prime Number in functional style

public boolean isPrime(long number) {
  return number > 1 &&
    LongStream
      .rangeClosed(2, (long) Math.sqrt(number))
      .noneMatch(index -> number % index == 0);
}
isPrime(9220000000000000039L) // Output: true

Here again we are focusing on what we want to achieve. The functional style helped us to abstract away the process of explicitly iterating over the range of numbers.

You might now think, hmmm, is this all we can have? Let’s see how we can use all our cores (gain parallelism) in functional style.

public boolean isPrime(long number) {
  return number > 1 &&
    LongStream
      .rangeClosed(2, (long) Math.sqrt(number))
      .parallel()
      .noneMatch(index -> number % index == 0);
}
isPrime(9220000000000000039L) // Output: true

That’s it! We just added .parallel() to the stream. You can see how library/runtime handles complexity for us.

➤ Factorial: Imperative vs Functional (The factorial of n is the product of all positive integers less than or equal to n.)

Factorial in iterative and imperative style

public long factorial(int n) {
  long product = 1;
  for (int i = 1; i <= n; i++) {
    product *= i;
  }
  return product;
}
factorial(5) // Output: 120

Factorial in iterative and functional style

public long factorial(int n) {
  return LongStream
    .rangeClosed(1, n)
    .reduce(1, (a, b) -> a * b);
}
factorial(5) // Output: 120

It’s worth repeating that by abstracting the how part we can write more maintainable and scalable software.

To see all the functional goodies introduced by Java 8, check out the Lambda Expressions, Method References and Streams guide.

Composing Multiple Async Results via an Applicative Builder in Java 8

A few months ago I put out a publication where I explained in detail an abstraction I came up with named Outcome, which helped me A LOT to code without side-effects by enforcing the use of semantics. By following this simple (and yet powerful) convention, I ended up turning any kind of failure (a.k.a. Exception) into an explicit result from a function, making everything much easier to reason about. I don’t know about you, but I was tired of dealing with exceptions that tore everything down, so I did something about it, and to be honest, it worked really well. So before I keep going with my tales from the trenches, I really recommend going over that post. Now let’s solve some asynchronous issues by using eccentric applicative ideas, shall we?

Something wicked this way comes

Life was real good: our coding was fast-paced, cleaner and as composable as ever, but, out of the blue, we stumbled upon a “missing” feature (evil laughs please): we needed to combine several asynchronous Outcome instances in a non-blocking fashion…


Excited by the idea, I got down to work. I experimented for a fair amount of time seeking a robust and yet simple way of expressing these kinds of situations; while the new CompletableFuture API turned out to be much nicer than I expected (though I still don’t understand why they decided to use names like applyAsync or thenComposeAsync instead of map or flatMap), I always ended up with implementations too verbose and repetitive compared to some stuff I did with Scala. But after some long “Mate” sessions, I had my “Hey!” moment: why not use something similar to an applicative?

The problem

Suppose that we have these two asynchronous results:
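In sketch form they look something like this, where Outcome is the success-or-failure wrapper from the earlier post and Outcome.ok is an assumed factory method for wrapping a successful value:

// Hypothetical sketch; Outcome.ok(...) stands in for however the
// Outcome abstraction wraps a successful value
CompletableFuture<Outcome<String>> textf =
    CompletableFuture.supplyAsync(() -> Outcome.ok("The number is %d"));
CompletableFuture<Outcome<Integer>> numberf =
    CompletableFuture.supplyAsync(() -> Outcome.ok(42));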

and a silly entity called Message:
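Something minimal along these lines would do (the field names are assumptions):

public class Message {

    private final String text;
    private final Integer number;

    public Message(String text, Integer number) {
        this.text = text;
        this.number = number;
    }

    // Renders the final, formatted message
    public String render() {
        return String.format(text, number);
    }
}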

I need something that, given textf and numberf, will give me back something like:

//After combining textf and numberf
CompletableFuture<Outcome<Message>> message = ....

So I wrote a letter to Santa Claus:

  1. I want to asynchronously format the string returned by textf using the number returned by numberf, only when both values are available, meaning that both futures completed successfully and neither of the outcomes failed. Of course, we need to be non-blocking.
  2. In case of failures, I want to collect all failures that took place during the execution of textf and/or numberf and return them to the caller, again, without blocking at all.
  3. I don’t want to be constrained by the number of values to be combined; it must be capable of handling a fair amount of asynchronous results. Did I say without blocking? There you go…
  4. Not die during the attempt.


Applicative builder to the rescue

If you think about it, one simple way to put what we’re trying to achieve is as follows:

// Given a String -> Given a number -> Format the message
f: String -> Integer -> Message

Checking the definition of f, it is saying something like: “Given a String, I will return a function that takes an Integer as a parameter, which, when applied, will return an instance of type Message”. This way, instead of waiting for all values to be available at once, we can partially apply one value at a time, getting an actual description of the construction process of a Message instance. That sounded great.

To achieve that, it would be really awesome if we could take the constructor reference Message::new and curry it, boom!, done!, but in Java that’s impossible (to do in a generic, beautiful and concise way), so for the sake of our example I decided to go with our beloved Builder pattern, which kinda does the job:
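A sketch of such a builder, reusing the Message fields from before (in the article’s code, Message.applicative() presumably returns an instance of it; the method names here are assumptions):

public class Builder implements WannabeApplicative<Message> {

    private String text;
    private Integer number;

    public Builder withText(String text) {
        this.text = text;
        return this;
    }

    public Builder withNumber(Integer number) {
        this.number = number;
        return this;
    }

    @Override
    public Message apply() {
        return new Message(text, number);
    }
}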

And here’s the WannabeApplicative<T> definition

public interface WannabeApplicative<V> {
    V apply();
}

Disclaimer: for those functional freaks out there, this is not an applicative per se, I’m aware of that, but I took some ideas from it and adapted them according to the tools that the language offered me out of the box. So, if you’re feeling curious, go check this post for a more formal example.

If you’re still with me, we could agree that we’ve done nothing too complicated so far, but now we need to express a building step, which, remember, needs to be non-blocking and capable of combining any previous failures that might have taken place in other executions with potentially new ones. So, in order to do that, I came up with something as follows:
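A rough sketch of the two pieces described below (the exact names and signatures are assumptions, and Outcome is again the wrapper from the earlier post):

import java.util.function.BiFunction;

@FunctionalInterface
public interface Partial<B> {
    // Lazily applies an already-available value to a builder,
    // producing the next state in the building sequence
    B apply(B builder);
}

@FunctionalInterface
public interface MergingStage<B, V> extends BiFunction<Outcome<B>, Outcome<V>, Partial<B>> {
    // How to combine the current builder state with a newly available value
}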

First of all, we’ve got two functional interfaces: one is Partial<B>, which represents a lazy application of a value to a builder, and the second one, MergingStage<B,V>, represents the “how” to combine both the builder and the value. Then we’ve got a method called value that, given an instance of type CompletableFuture<Outcome<V>>, will return an instance of type MergingStage<B,V>, and believe it or not, here’s where the magic takes place. If you remember the MergingStage definition, you’ll see it’s a BiFunction, where the first parameter is of type Outcome<B> and the second one is of type Outcome<V>. Now, if you follow the types, you can tell that we’ve got two things: the partial state of the building process on one side (type parameter B) and a new value that needs to be applied to the current state of the builder (type parameter V), so that, when applied, it will generate a new builder instance with the “next state in the building sequence”, which is represented by Partial<B>. Last but not least, we’ve got the stickedTo method, which basically is an (awful Java) hack to stick to a specific applicative type (builder) while defining a building step. For instance, having:

I can define partial value applications to any Builder instance as follows:

See that we haven’t built anything yet; we just described what we want to do with each value when the time comes. We might want to perform some validations before using the new value (here’s where Outcome plays an important role) or just use it as it is; it’s really up to us, but the main point is that we haven’t applied anything yet. In order to do so, and to finally tie up all loose ends, I came up with another definition, which looks as follows:

Hope it’s not too overwhelming, but I’ll try to break it down as clearly as possible. In order to start specifying how you’re going to combine the whole thing together, you start by calling begin with an instance of type WannabeApplicative<V>, which, in our case, has type parameter V equal to Builder.

FutureCompositions<Message, Builder> ab = begin(Message.applicative());

See that, after you invoke begin, you will get a new instance of FutureCompositions with a lazily evaluated partial state inside of it, making it the one and only owner of the whole building process state. That was the ultimate goal of everything we’ve done so far: to fully gain control over when and how things get combined. Next, we must specify the values that we want to combine, and that’s what the binding method is for:
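In sketch form, assuming binding accepts the future together with its merging specification (textToMessage and numberToMessage stand for the partial value applications defined earlier; the exact signature is an assumption):

ab.binding(textf, textToMessage)
  .binding(numberf, numberToMessage);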


This is how we supply our builder instance with all the values that need to be merged together, along with the specification of what’s supposed to happen with each one of them, by using our previously defined Partial instances. Also see that everything is still lazily evaluated; nothing has happened yet, but we have stacked all the “steps” until we finally decide to materialise the result, which will happen when you call perform.

CompletableFuture<Outcome<Message>> message = ab.perform();

From that very moment everything will unfold: each building stage will get evaluated, where failures can be returned and collected within an Outcome instance, or the newly available values will be supplied to the target builder instance. One way or the other, all steps will be executed until nothing is left to be done. I will try to depict what just happened as follows:


If you pay attention to the left side of the picture, you can easily see how each step gets “defined” as I showed before, following the “declaration” arrow direction, meaning, how you actually described the building process. Now, from the moment that you call perform, each applicative instance (remember, Builder in our case) will be lazily evaluated in the opposite direction: it will start by evaluating the last specified stage in the stack, which will then proceed to evaluate the next one and so forth, up to the point where we reach the “beginning” of the building definition. From there, evaluation unfolds (rolls out) each step back up to the top, collecting everything it can by using the MergingStage specification.

And this is just the beginning….

I’m sure a lot could be done to improve this idea, for example:

  • The two consecutive calls to dependingOn at CompositionSources.values() suck; too verbose for my taste, I must do something about it.
  • I’m not quite sure about keeping Outcome instances passed to a MergingStage; it would look cleaner and easier if we unwrapped the values to be merged before invoking it and just returned Either<Failure, V> instead; this would reduce complexity and increase flexibility on what’s supposed to happen behind the scenes.
  • Though using the Builder pattern did the job, it feels old-school. I would love to easily curry constructors, so on my to-do list is to check whether jOOλ or Javaslang have something to offer on that matter.
  • Better type inference, so that any unnecessary noise gets removed from the code. For example, the stickedTo method really is a code smell, something that I hated from the first place. I definitely need more time to figure out an alternative way to infer the applicative type from the definition itself.

You’re more than welcome to send me any suggestions and comments you might have. Cheers and remember…..





Migrating Spring App to MicroServices App on AWS



The company I am working for has recently gone through a migration, refactoring our code base from a monolithic application (a Java Spring WAR) into a microservices application hosted on the Amazon PaaS (specifically Beanstalk and CloudFront). As part of this blog post I have provided a small and simple sales demo application, and I will discuss the steps required to refactor the application so that it can run within the Beanstalk/S3/CloudFront environments.

For the purposes of this blog, I will be using a SalesTax demo application; the code can be found on GitHub. This site provides users with a list of products and gives them the ability to create an order and apply sales tax. I have created a more detailed guide, which includes steps for creating the different services in AWS. The following is a diagram of the Spring architecture:



The above architecture is a pretty standard Spring architecture for most monolithic web applications. In our migration, we broke up our code and separated the backend services from the front-end content: JSPs (now HTML), CSS and JS. The following is a diagram illustrating our model of how we controlled access:


Amazon Web Services

I am going to start by explaining at a high-level what these different components in AWS are and how we integrate them together.


Route 53

Route 53 is a Domain Name Service which allows you to route traffic to different internal AWS services. In our model we used Route 53 to host our DNS servers.



Amazon S3 is a simple storage service which allows you to store content (HTML, CSS and JS files in buckets in the cloud). In this demo we will be using Amazon S3 to host the static content (HTML, CSS and JS).



Beanstalk is an application stack which will be used to host our individual services. Beanstalk supports multiple stacks (Tomcat, PHP, Node, Ruby, Go, .NET). In this demo we will be using Beanstalk to host our different web services (as Spring WARs running on Tomcat).



Amazon Relational Database Service (RDS) will be used to host our database. We will create an RDS database, and our web services will connect to it.



Amazon CloudFront is the glue that will tie all your different services together under one common URL. We will define an origin, which will correspond to our URL defined in Route 53. When the user hits this URL, Route 53 will route the traffic to CloudFront. CloudFront will host the content and push it to edge locations around the world. In CloudFront you are able to redirect traffic based on URL patterns. For example, anyone coming to the default pattern (/*) can be redirected to a bucket in S3 which hosts your static content (i.e. HTML, CSS, images). If they come to, say, an API URL (/api/products), you can route them to a Beanstalk service in the backend.

Infrastructure Security

In our production systems we have all our web services hidden behind different VPCs and have implemented network rules to restrict access to our backend services. I do not think I will have time to address this in this blog, but I will try to talk about it in my next one.


Application Security

One major component I have not included in the sales demo is Spring Security. In our application, we removed Spring Security and replaced its access control with an API Gateway. I will discuss this concept briefly at the end of this blog.


NOTE: AWS is a very sophisticated and complex ecosystem that provides multiple ways to integrate these different services. The model I will be discussing is similar to the model which we implemented at our company.


SalesTax Application Overview


The SalesTax demo application will look like a traditional Spring application, with one exception: the JSP pages do not follow the traditional Spring MVC model, with data being passed from the controller and the JSP pages rendering the view. Instead we are using Angular, which makes REST calls to the backend controllers and renders the content in the browser. The reason we are doing this is so that we can migrate our static content (HTML, CSS, JS files) to S3 buckets and have our backend services run in Beanstalk.


I have created a guide which provides step-by-step instructions, with pictures, on how to set up your environment in AWS; you can find the document on GitHub. The rest of this post will provide a summary of the process, with references to the guide. If you would like to try this on your own AWS setup, I recommend you look at the detailed guide.


Migration Process


The following section will provide a high-level overview of the migration process. Again if you would like to try this out for yourself, I would recommend using the detailed guide.


Deploy Application to Beanstalk


The first step will be to build the application and deploy it into a Beanstalk instance. To check out the code, run the following command:

git clone step0


You can import the project into your IDE (Eclipse, NetBeans, STS, etc) or you can just build this from the command line. To build the project run the following commands:


mvn clean install


Once the WAR has been built, log into the AWS Administration console and deploy your WAR in a new Beanstalk instance. For detailed instructions, see the install guide.


Configure CloudFront to point to your Beanstalk instance


Log into the Amazon Console and click on the CloudFront link. At this point you have two options:

- Use your own domain name

- Use the default provided by CloudFront (this will look something like a generated cloudfront.net address)

If you already have your own domain name, you can add it to Route 53. The following link provides detailed instructions on how to do this. If you do not have your own, you can just create a CloudFront Origin and it will give you a URL.


The goal of this step is to use CloudFront to map your URL (either your own or a generated one) to your hosted application in Beanstalk. In CloudFront you will define a Web Distribution, and for that distribution you will define an Origin. Origins in CloudFront represent backend services (i.e. S3 buckets which host static content, or Beanstalk applications which host your Spring apps). Finally, you will create a Behavior that instructs CloudFront to map all requests of a certain URL pattern to a specific Beanstalk instance. For the first step we will map all requests (/*) to the Beanstalk instance. In future steps we will map all requests of the format (/api/*) to your Beanstalk instance, and the rest (/*) will go to your S3 bucket. Below is an image of what the screen for creating a Behavior looks like.


Create RDS Postgres instance and connect to Beanstalk


In this step we create a publicly accessible RDS instance and then connect to it from our pgAdmin tool to create the database. The SQL script and updated code can be found by pulling down the step1 branch as follows:


git clone step1


The SQL create script can be found in the following location:

src/resources/sql/createSalesTax-DB-Postgres.sql


Once your database is created, you can rebuild your project with Maven using the following command:

mvn clean install


Log back into your Amazon console and redeploy your latest WAR file. You will also need to append environment properties to your Beanstalk instance so it knows where to find your database. This can be done by clicking on Configuration, then Software Configuration, and adding them to Environment Properties.
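On the application side, those Beanstalk environment properties surface as plain environment variables; a rough sketch of reading them when opening a database connection (the property names DB_URL, DB_USERNAME and DB_PASSWORD are assumptions, not the demo’s actual keys):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DataSourceConfig {

    // Sketch: resolve the RDS connection settings from Beanstalk
    // environment properties (the names here are hypothetical)
    public static Connection openConnection() throws SQLException {
        String url = System.getenv("DB_URL"); // e.g. a jdbc:postgresql:// URL pointing at the RDS endpoint
        String user = System.getenv("DB_USERNAME");
        String password = System.getenv("DB_PASSWORD");
        return DriverManager.getConnection(url, user, password);
    }
}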


If you reload your application you will see that it is now pulling the products from the database instance in AWS.


Create an S3 Bucket and deploy Static Content to it


In this step we are going to create an S3 bucket and move our static content (HTML, CSS, images, etc.) to it. To get the latest code, we will need to pull down the latest changes from git. Run the following command:



git clone step2


Log back into the Amazon Console and click on S3. Click on Create Bucket and create a new bucket.



Once your bucket is created, click on Properties (upper right corner) and click on Static Website Hosting to enable hosting of content. Once your S3 bucket is ready, you can transfer the static content of the project to S3; the content to transfer lives in the project’s web directory.

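If you prefer the command line to the console’s upload dialog, the AWS CLI can push the files in one go (the bucket name below is a placeholder):

aws s3 sync web/ s3://your-bucket-name/ --acl public-read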

Update Cloud Front to reflect new origins

We will need to update CloudFront to redirect requests to their appropriate origins. The first step is to log into CloudFront and create an Origin for your newly created bucket. Once your Origin has been created, you will need to modify the Behaviors so that your default Behavior (/*) now points to your static content in S3 and your API requests (/api/*) are redirected to your Elastic Beanstalk instance. The following is a diagram of the proposed changes to CloudFront.


Redeploy Application

Once CloudFront has been updated and the status has changed to deployed, your static content, which is hosted in S3, will be accessible via your CloudFront URL. The only thing left to do is rebuild the sales demo application and redeploy it into Beanstalk. At this stage, all the front-end code (HTML, JS, CSS) has been moved to the web directory and the backend functionality is in the services directory. To rebuild your application, run the Maven command in the services directory:


mvn clean install


Log back into the Amazon Console and redeploy your Beanstalk application with the new WAR.

The above architecture is a good starting point for anyone who is looking at migrating their Spring application to cloud-based microservices. As part of your migration I would suggest looking at incorporating an API Gateway. There is a series of open source and commercially available API Gateways (Amazon released their own API Gateway in July 2015, among others). The API Gateway will sit between CloudFront and your backend services, handle authentication and access control, and redirect your requests to the appropriate Beanstalk instance. I have included a picture of the API Gateway below.





Reactive Development Using Vert.x

Lately, it seems like we’re always hearing about the latest and greatest frameworks for Java: tools like Ninja, SparkJava, and Play. But each one is opinionated and makes you feel like you need to redesign your entire application to make use of their wonderful features. That’s why I was so relieved when I discovered Vert.x. Vert.x isn’t a framework, it’s a toolkit: it’s unopinionated and it’s liberating. Vert.x doesn’t want you to redesign your entire application to make use of it, it just wants to make your life easier. Can you write your entire application in Vert.x? Sure! Can you add Vert.x capabilities to your existing Spring/Guice/CDI applications? Yep! Can you use Vert.x inside of your existing JavaEE applications? Absolutely! And that’s what makes it amazing.


Vert.x was born when Tim Fox decided that he liked a lot of what was being developed in the NodeJS ecosystem, but he didn’t like some of the trade-offs of working in V8: single-threadedness, limited library support, and JavaScript itself. Tim set out to write a toolkit which was unopinionated about how and where it is used, and he decided that the best place to implement it was on the JVM. So Tim and the community set out to create an event-driven, non-blocking, reactive toolkit which in many ways mirrored what could be done in NodeJS, but also took advantage of the power available inside of the JVM. Node.x was born, and it later progressed to become Vert.x.


Vert.x is designed to implement an event bus, which is how different parts of the application can communicate in a non-blocking/thread-safe manner. Parts of it were modeled after the Actor methodology exhibited by Erlang and Akka. It is also designed to take full advantage of today’s multi-core processors and highly concurrent programming demands. As such, by default, all Vert.x VERTICLES are implemented as single-threaded. Unlike NodeJS though, Vert.x can run MANY verticles in MANY threads. Additionally, you can specify that some verticles are “worker” verticles and CAN be multi-threaded. And to really add some icing on the cake, Vert.x has low-level support for multi-node clustering of the event bus via the use of Hazelcast. It has gone on to include many other amazing features which are too numerous to list here, but you can read more in the official Vert.x docs.

The first thing you need to know about Vert.x is, similar to NodeJS, never block the current thread. Everything in Vert.x is set up, by default, to use callbacks/futures/promises. Instead of doing synchronous operations, Vert.x provides async methods for doing most I/O and processor-intensive operations which might block the current thread. Now, callbacks can be ugly and painful to work with, so Vert.x optionally provides an API based on RxJava which implements the same functionality using the Observer pattern. Finally, Vert.x makes it easy to use your existing classes and methods by providing the executeBlocking(Function f) method on many of its asynchronous APIs. This means you can choose how you prefer to work with Vert.x instead of the toolkit dictating how it must be used.
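As a quick illustration of that last point, wrapping an existing blocking call looks roughly like this in Vert.x 3 (loadCustomerFromDb is a made-up stand-in for your legacy code):

vertx.executeBlocking(future -> {
    // Runs on a worker thread, so blocking here is safe
    Customer result = loadCustomerFromDb(42L); // hypothetical legacy call
    future.complete(result);
}, res -> {
    // Back on the event loop once the blocking work finishes
    if (res.succeeded()) {
        System.out.println("Loaded: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});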

The second thing to know about Vert.x is that it is composed of verticles, modules, and nodes. Verticles are the smallest unit of logic in Vert.x, and are usually represented by a single class. Verticles should be simple and single-purpose, following the UNIX philosophy. A group of verticles can be put together into a module, which is usually packaged as a single JAR file. A module represents a group of related functionality which, taken together, could represent an entire application or just a portion of a larger distributed application. Lastly, nodes are single instances of the JVM which are running one or more modules/verticles. Because Vert.x has clustering built in from the ground up, Vert.x applications can span nodes either on a single machine or across multiple machines in multiple geographic locations (though latency can hinder performance).

Example Project

Now, I’ve been to a number of Meetups and conferences lately where the first thing they show you when talking about reactive programming is how to build a chat room application. That’s all well and good, but it doesn’t really help you to completely understand the power of reactive development. Chat room apps are simple and simplistic. We can do better. In this tutorial, we’re going to take a legacy Spring application and convert it to take advantage of Vert.x. This has multiple purposes: it shows that the toolkit is easy to integrate with existing Java projects, it allows us to take advantage of existing tools which may be entrenched parts of our ecosystem, and it also lets us follow the DRY principle in that we don’t have to rewrite large swathes of code to get the benefits of Vert.x.

Our legacy Spring application is a contrived, simple example of a REST API using Spring Boot, Spring Data JPA, and Spring REST. The source code can be found in the “master” branch HERE. There are other branches which we will use to demonstrate the progression as we go, so it should be simple for anyone with a little experience with git and Java 8 to follow along. Let’s start by examining the Spring configuration class for the stock Spring application.

@SpringBootApplication
@Slf4j
public class Application {

    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);

        System.out.println("Let's inspect the beans provided by Spring Boot:");

        String[] beanNames = ctx.getBeanDefinitionNames();
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setGenerateDdl(true);

        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setPackagesToScan("com.zanclus"); // base package is illustrative
        factory.setDataSource(dataSource());
        factory.afterPropertiesSet();

        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(final EntityManagerFactory emf) {
        final JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(emf);
        return txManager;
    }
}

As you can see at the top of the class, we have some pretty standard Spring Boot annotations. You’ll also see an @Slf4j annotation, which is part of the lombok library and is designed to help reduce boilerplate code. We also have @Bean annotated methods for providing access to the JPA EntityManager, the TransactionManager, and the DataSource. Each of these methods provides an injectable object for the other classes to use. The remaining classes in the project are similarly simplistic. There is a Customer POJO which is the entity type used in the service. There is a CustomerDAO which is created via Spring Data. Finally, there is a CustomerEndpoints class which is the JAX-RS annotated REST controller.

As explained earlier, this is all standard fare in a Spring Boot application. The problem with this application is that, for the most part, it has limited scalability. You would either run this application inside of a servlet container, or with an embedded server like Jetty or Undertow. Either way, each request ties up a thread and is thus wasting resources when it waits for I/O operations.

Switching over to the Convert-To-Vert.x-Web branch, we can see that the Application class has changed a little. We now have some new @Bean annotated methods for injecting the Vertx instance itself, as well as an instance of ObjectMapper (part of the Jackson JSON library). We have also replaced the CustomerEnpoints class with a new CustomerVerticle. Pretty much everything else is the same.

The CustomerVerticle class is annotated with @Component, which means that Spring will instantiate that class on startup. It also has its start method annotated with @PostConstruct so that the verticle is launched on startup. Looking at the actual content of the code, we see our first bit of Vert.x code: Router.

The Router class is part of the vertx-web library and allows us to use a fluent API to define HTTP URLs, methods, and header filters for our request handling. Adding the BodyHandler instance to the default route allows a POST/PUT body to be processed and converted to a JSON object which Vert.x can then process as part of the RoutingContext. The order of routes in Vert.x CAN be significant. If you define a route which has some sort of glob matching (* or regex), it can swallow requests for routes defined after it unless you implement chaining. Our example shows 3 routes initially.

    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        // One of the three routes (the path is illustrative); the other two are defined similarly
        router.post("/v1/customer")
              .consumes("application/json")
              .produces("application/json")
              .blockingHandler(this::addCustomer);
    }

Notice that the HTTP method is defined, the “Accept” header is defined (via consumes), and the “Content-Type” header is defined (via produces). We also see that we are passing the handling of the request off via a call to the blockingHandler method. A blocking handler for a Vert.x route accepts a RoutingContext object as its only parameter. The RoutingContext holds the Vert.x Request object, Response object, and any parameters/POST body data (like “:id”). You’ll also see that I used method references rather than lambdas to insert the logic into the blockingHandler (I find it more readable). Each handler for the three request routes is defined in a separate method further down in the class. These methods basically just call the methods on the DAO, serialise or deserialise as needed, set some response headers, and end() the request by sending a response. Overall, pretty simple and straightforward.

    private void addCustomer(RoutingContext rc) {
        try {
            String body = rc.getBodyAsString();
            Customer customer = mapper.readValue(body, Customer.class);
            Customer saved =;
            if (saved != null) {
                // return the saved entity as JSON (reconstructed branch)
                rc.response().setStatusCode(201).end(mapper.writeValueAsString(saved));
            } else {
                rc.response().setStatusMessage("Bad Request").setStatusCode(400).end("Bad Request");
            }
        } catch (IOException e) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", e);
        }
    }

    private void getCustomerById(RoutingContext rc) {"Request for single customer");
        Long id = Long.parseLong(rc.request().getParam("id"));
        try {
            Customer customer = dao.findOne(id);
            if (customer == null) {
                rc.response().setStatusMessage("Not Found").setStatusCode(404).end("Not Found");
            } else {
                // serialize and return the customer (reconstructed branch)
                rc.response().end(mapper.writeValueAsString(customer));
            }
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

    private void getAllCustomers(RoutingContext rc) {"Request for all customers");
        List<Customer> customers =, false)
                                                .collect(Collectors.toList());
        try {
            // serialize and return the full list (reconstructed)
            rc.response().end(mapper.writeValueAsString(customers));
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

“But this is more code and messier than my Spring annotations and classes”, you might say. That CAN be true, but it really depends on how you implement the code. This is meant to be an introductory example, so I left the code very simple and easy to follow. I COULD use an annotation library for Vert.x to implement the endpoints in a manner similar to JAX-RS. In addition, we have gained a massive scalability improvement. Under the hood, Vert.x Web uses Netty for low-level asynchronous I/O operations, thus providing us the ability to handle MANY more concurrent requests (limited by the size of the database connection pool).

We’ve already made some improvement to the scalability and concurrency of this application by using the Vert.x Web library, but we can improve things a little more by implementing the Vert.x EventBus. By separating the database operations into Worker Verticles instead of using blockingHandler, we can handle request processing more efficiently. This is shown in the Convert-To-Worker-Verticles branch. The Application class has remained the same, but we have changed the CustomerVerticle class and added a new class called CustomerWorker. In addition, we added a new library called Spring Vert.x Extension which provides Spring dependency injection support to Vert.x Verticles. Start off by looking at the new CustomerVerticle class.

    public void start() throws Exception {"Successfully create CustomerVerticle");
        DeploymentOptions deployOpts = new DeploymentOptions().setWorker(true).setMultiThreaded(true).setInstances(4);
        vertx.deployVerticle("java-spring:com.zanclus.verticles.CustomerWorker", deployOpts, res -> {
            if (res.succeeded()) {
                Router router = Router.router(vertx);
                router.route().handler(BodyHandler.create());
                final DeliveryOptions opts = new DeliveryOptions();
                router.get("/v1/customer/:id")          // route paths reconstructed for this excerpt
                        .handler(rc -> {
                            opts.addHeader("method", "getCustomer")
                                    .addHeader("id", rc.request().getParam("id"));
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                router.put("/v1/customer")
                        .handler(rc -> {
                            opts.addHeader("method", "addCustomer");
                            vertx.eventBus().send("com.zanclus.customer", rc.getBodyAsJson(), opts, reply -> handleReply(reply, rc));
                        });
                router.get("/v1/customer")
                        .handler(rc -> {
                            opts.addHeader("method", "getAllCustomers");
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                vertx.createHttpServer().requestHandler(router::accept).listen(8080);
            } else {
                log.error("Failed to deploy worker verticles.", res.cause());
            }
        });
    }

The routes are the same, but the implementation code is not. Instead of using calls to blockingHandler, we have now implemented proper async handlers which send out events on the event bus. None of the database processing is happening in this Verticle anymore. We have moved the database processing to a Worker Verticle which has multiple instances to handle multiple requests in parallel in a thread-safe manner. We are also registering a callback for when those events are replied to so that we can send the appropriate response to the client making the request. Now, in the CustomerWorker Verticle we have implemented the database logic and error handling.

    public void start() throws Exception {
        // register a consumer on the event bus address used by the REST verticle
        vertx.eventBus().consumer("com.zanclus.customer").handler(this::handleDatabaseRequest);
    }

    public void handleDatabaseRequest(Message<Object> msg) {
        String method = msg.headers().get("method");

        DeliveryOptions opts = new DeliveryOptions();
        try {
            String retVal;
            switch (method) {
                case "getAllCustomers":
                    retVal = mapper.writeValueAsString(dao.findAll());
                    msg.reply(retVal, opts);
                    break;
                case "getCustomer":
                    Long id = Long.parseLong(msg.headers().get("id"));
                    retVal = mapper.writeValueAsString(dao.findOne(id));
                    msg.reply(retVal, opts);
                    break;
                case "addCustomer":
                    retVal = mapper.writeValueAsString(
                                                        ((JsonObject) msg.body()).encode(), Customer.class)));
                    msg.reply(retVal, opts);
                    break;
                default:
                    log.error("Invalid method '" + method + "'");
                    opts.addHeader("error", "Invalid method '" + method + "'");
          , "Invalid method");
            }
        } catch (IOException | NullPointerException e) {
            log.error("Problem parsing JSON data.", e);
  , e.getLocalizedMessage());
        }
    }

The CustomerWorker worker verticles register a consumer for messages on the event bus. The string which represents the address on the event bus is arbitrary, but it is recommended to use a reverse-tld style naming structure so that it is simple to ensure that the addresses are unique (“com.zanclus.customer”). Whenever a new message is sent to that address, it will be delivered to one, and only one, of the worker verticles. The worker verticle then calls handleDatabaseRequest to do the database work, JSON serialization, and error handling.

There you have it. You’ve seen that Vert.x can be integrated into your legacy applications to improve concurrency and efficiency without having to rewrite the entire application. We could have done something similar with an existing Google Guice or JavaEE CDI application. All of the business logic could remain relatively untouched while we bring in Vert.x to add reactive capabilities. The next steps are up to you. Some ideas for where to go next include Clustering, WebSockets, and VertxRx for ReactiveX sugar.

Merry Christmas everyone!

This is the first year of the Java Advent Project and I am really grateful to all the people that got involved, published articles, tweeted, shared, +1ed etc. etc.
It was an unbelievable journey and all the glory needs to go to the people that took some time from their loved ones to give us their wisdom. As they say, the Class of 2014 of Java Advent comprises (in order of publishing date):

Thank you girls and guys for making it happen yet once more. And sorry for stressing you out and pushing you. Also, last but not least, thanks to Voxxed editors Lucy Carey and Mite Mitreski.

A Musical Finale

What could be more fitting than Christmas music for Christmas Eve?

In this post I want to discuss the joy of making music with Java and why/how I have come to use Python…

But first, let us celebrate the season!

We are all human and irrespective of our beliefs, it seems we all enjoy music of some form. For me some of the most beautiful music of all was written by Johann Sebastian Bach. Between 1708 and 1717 he wrote a set of pieces which are collectively called Orgelbüchlein (Little Organ Book). For this post, and to celebrate the Java Advent Calendar, I tasked Sonic Field to play this piece of music, modelling the sounds of an 18th century pipe organ. If you did not know, yes, some German organs of about that time really were able to produce huge sounds with reed pipes (for example, the Passacaglia And Fugue on the Trost Organ). The piece here is a ‘Choral Prelude’ which is based on what in English we would commonly call a Carol, to be sung by an ensemble.

BWV 610 Jesu, meine Freude [Jesus, my joy]
This performance is dedicated to the Java Advent Calendar
and was created exclusively on the JVM using pure Sonic Field.

How was this piece created?
Step one is to transcribe the score into midi. Fortunately, someone else already did this for me using automated score reading software. Not so fortunately, this software makes all sorts of mistakes which have to be fixed. The biggest issue with automatically generated midi files is that they end up with overlapped notes on the same channel; that is strictly impossible in midi and ends up with an ambiguous interpretation of what the sound should be. Midi considers audio as note on, note off. So Note On, Note On, Note Off, Note Off is ambiguous; does it mean:

One note overlapping the next or:

One note entirely contained in a longer note?

Fortunately, tricks can be used to try and figure this out based on note length etc. The Java decoder always treats notes as fully contained. The Python method looks for very short notes which are contained in long ones and guesses the real intention was two long notes which ended up overlapped slightly. Here is the python (the Java is here on github).

def repareOverlapMidi(midi, blip=5):
    print "Interpretation Pass"
    while mute:
        print "Demerge pass:", endAt
        midi = sorted(midi, key=lambda tup: tup[0])
        midi = sorted(midi, key=lambda tup: tup[3])
        while index < endAt:
            # Merge interpretation
            if dif < blip and tkey == nkey and ttickOff >= ntickOn and ttickOff <= ntickOff:
                print "Separating: ", this, next, " Diff: ", (ttickOff - ntickOn)
                midiOut.append([ttickOn, ntickOn, tnote, tkey, tvelocity])
            elif dif < blip:
                print "Removing blip: ", (ttickOff - ttickOn)
            # iterate the loop
            if index == endAt:
                if not mute:
                    return midiOut

[This AGPL code is on Github]

Then comes some real fun. If you know the original piece, you might have noticed that the introduction is not original. I added that in the midi editing software Aria Maestosa. It does not need to be done this way; we do not even need to use midi files. A lot of the music I have created in Sonic Field is coded directly in Python. However, midi is how it was done here.

Once we have a clean set of notes they need to be converted into sounds. That is done with ‘voicing’. I will talk a little about that to set the scene then we can get back into more Java code oriented discussion. After all, this is the Java advent calendar!

Voicing is exactly the sort of activity which brings Python to the fore. Java is a wordy language which has a large degree of strictness. It favours well constructed, stable structures. Python relies on its clean syntax rules and layout and the principle of least astonishment. For me, this Pythonic approach really helps with the very human process of making a sound:

def chA():
    global midi, index
    print "##### Channel A #####"


Above is a ‘voice’. Contrary to what one might think, a synthesised sound does not often consist of just one sound source. It consists of many. A piece of music might have many ‘voices’ and each voice will be a composite of several sounds. To create just the one voice above I have split the notes into long notes and short notes. Then the actual notes are created by a call to doMidi. This takes advantage of Python’s ‘named arguments with default values’ feature. Here is the signature for doMidi:

def doMidi(voice,vCorrect,pitchShift=1.0,qFactor=1.0,subBass=False,flatEnv=False,pure=False,pan=-1,rawBass=False,pitchAdd=0.0,decay=False,bend=True):

The most complex (unsurprisingly) voice to create is that of a human singing. I have been working on this for a long time and there is a long way to go; however, here is a spectrogram of a piece of music which does a passable job of sounding like someone singing.

The first argument is actually a reference to a function which will create the basic tone. The rest of the arguments describe how that tone will be manipulated in the note formation. Whilst an approach like this can be mimicked using a builder pattern in Java, the latter language does not lend itself to the ‘playing around’ nature of Python (at least for me).
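To see why, here is a hedged sketch of what the builder-pattern equivalent might look like in Java (the class and its fields mirror the Python signature above; the class itself is hypothetical, not part of Sonic Field):

// Hypothetical Java builder mirroring doMidi's named arguments with defaults.
public class NoteSettings {
    private double pitchShift = 1.0;
    private double qFactor = 1.0;
    private boolean subBass = false;
    private boolean flatEnv = false;
    private boolean pure = false;

    public NoteSettings pitchShift(double v) { this.pitchShift = v; return this; }
    public NoteSettings qFactor(double v)    { this.qFactor = v;    return this; }
    public NoteSettings subBass(boolean v)   { this.subBass = v;    return this; }
    public NoteSettings flatEnv(boolean v)   { this.flatEnv = v;    return this; }
    public NoteSettings pure(boolean v)      { this.pure = v;       return this; }
}

// Usage: new NoteSettings().pitchShift(0.5).flatEnv(true) -- considerably more
// ceremony than Python's doMidi(voice, vCorrect, pitchShift=0.5, flatEnv=True).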

For example, I could just run the script, add flatEnv=True to the arguments, run it again and compare the two sounds. It is an intuitive way of working.

Anyhow, once each voice has been composited from many tones and tweaked into the shape and texture we want, they turn up as a huge list of lists of notes which are all mixed together and written out to disk as a flat file format which is basically just a dump of the underlying double data. At this point it sounds terrible! Making the notes is often only half the story.

Voice Synthesis by Sonic Field
played specifically for this post.

You see, real sounds happen in a space. Our Choral is expected to be performed in a church. Notes played without a space around them sound completely artificial and lack any interest. To solve this we use impulse response reverberation. The mathematics behind this is rather complex and so I will not go into it in detail. However in the next section I will start to look at this as a perfect example of why Java is not only necessary but ideal as the back end to Python/Jython.

You seem to like Python Alex – Why Bother With Java?

My post might seem a bit like a Python sales job so far. What has been happening is simply a justification of using Python when Java is so good as a language (especially when written in a great IDE like Eclipse for Java). Let us look at something Python would be very bad at indeed. Here is the code for performing the Fast Fourier Transform, which is at the heart of putting sounds into a space.


public class CacheableFFT {

    private final int n, m;

    // Lookup tables. Only need to recompute when size of FFT changes.
    private final double[] cos;
    private final double[] sin;
    private final boolean forward;

    public boolean isForward() {
        return forward;
    }

    public int size() {
        return n;
    }

    public CacheableFFT(int n1, boolean isForward) {
        this.forward = isForward;
        this.n = n1;
        this.m = (int) (Math.log(n1) / Math.log(2));

        // Make sure n is a power of 2
        if (n1 != (1 << m)) throw new RuntimeException(Messages.getString("CacheableFFT.0")); //$NON-NLS-1$

        cos = new double[n1 / 2];
        sin = new double[n1 / 2];
        double dir = isForward ? -2 * Math.PI : 2 * Math.PI;

        for (int i = 0; i < n1 / 2; i++) {
            cos[i] = Math.cos(dir * i / n1);
            sin[i] = Math.sin(dir * i / n1);
        }
    }

    public void fft(double[] x, double[] y) {
        int i, j, k, n1, n2, a;
        double c, s, t1, t2;

        // Bit-reverse permutation
        j = 0;
        n2 = n / 2;
        for (i = 1; i < n - 1; i++) {
            n1 = n2;
            while (j >= n1) {
                j = j - n1;
                n1 = n1 / 2;
            }
            j = j + n1;

            if (i < j) {
                t1 = x[i];
                x[i] = x[j];
                x[j] = t1;
                t1 = y[i];
                y[i] = y[j];
                y[j] = t1;
            }
        }

        // FFT butterflies
        n1 = 0;
        n2 = 1;

        for (i = 0; i < m; i++) {
            n1 = n2;
            n2 = n2 + n2;
            a = 0;

            for (j = 0; j < n1; j++) {
                c = cos[a];
                s = sin[a];
                a += 1 << (m - i - 1);

                for (k = j; k < n; k = k + n2) {
                    t1 = c * x[k + n1] - s * y[k + n1];
                    t2 = s * x[k + n1] + c * y[k + n1];
                    x[k + n1] = x[k] - t1;
                    y[k + n1] = y[k] - t2;
                    x[k] = x[k] + t1;
                    y[k] = y[k] + t2;
                }
            }
        }
    }
}

[This AGPL code is on Github]

It would be complete lunacy to implement this mathematics in Jython (dynamic late binding would give unusably bad performance). Java does a great job of running it quickly and efficiently. In Java this runs just about as fast as it could in any language, plus the clean, simple object structure of Java means that using the ‘caching’ system is straightforward. The caching comes from the fact that the cos and sin multipliers of the FFT can be re-used when the transform is the same length. Now, in the creation of reverberation effects (those effects which put sound into a space) FFT lengths are the same over and over again due to windowing. So the speed and object oriented power of Java have both fed into creating a clean, high performance implementation.
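To make the caching point concrete, here is a small usage sketch of the class above (the sizes are illustrative):

// The cos/sin tables are built once per FFT length, then re-used for
// every window of that length -- which is the whole point of the cache.
CacheableFFT fft = new CacheableFFT(4096, true);   // forward transform, length 4096
double[] re = new double[4096];                    // real part of one grain
double[] im = new double[4096];                    // imaginary part (zero for audio)
re[0] = 1.0;                                       // an impulse, for illustration
fft.fft(re, im);                                   // in-place transform
// ...process thousands of further 4096-sample grains with the same instance...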

But we can go further and make the FFT parallelised:

def reverbInner(signal, convol, grainLength):
    def rii():
        if mag > 0:
            if newMag > 0:
                # tail out clicks due to amplitude at end of signal
                return sf.Realise(signal_)
            return sf.Silence(sf.Length(signal_))
        return signal
    return sf_do(rii)

def reverberate(signal, convol):
    def revi():
        grainLength = sf.Length(+convol)
        for grain in sf.Granulate(signal_, grainLength):
            pass  # per-grain FFT convolution elided in the original post
        return sf.Clean(sf.FixSize(sf.MixAt(out)))
    return sf_do(revi)

Here we have the Python which performs the FFT to produce impulse response reverberation (convolution reverb is another name for this approach). The second function breaks the sound into grains. Each grain is then processed individually and they all have the same length. This performs that windowing effect I talked about earlier (I use a triangular window, which is not ideal but works well enough due to the long window size). If the grains are long enough, the impact of lots of little FFT calculations is basically the same as the effect of one huge one. However, FFT is an n·log(n) process, so lots of little calculations are a lot faster than one big one: a signal of length N cut into grains of length g costs (N/g) windows at g·log(g) each, i.e. N·log(g), which is linear in N for a fixed grain size. In effect, windowing makes FFT a linear scaling calculation.

Note that the granulation process is performed in a future. We define a closure called revi and pass it to sf_do(), which executes it at some point in the future based on demand and the number of threads available. Next we can look at the code which performs the FFT on each grain – rii. That again is performed in a future. In other words, the individual windowed FFT calculations are all performed in futures. The expression of a parallel windowed FFT engine in C or FORTRAN ends up very complex and rather intractable, and I have not personally come across one which is integrated into a generalised, thread pooled, future based scheduler. Nevertheless, the combination of Jython and Java makes such a thing very easy to create.
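For readers more comfortable in Java, here is a rough sketch of the kind of future-based scheduling that sf_do() provides, assuming a shared thread pool (names here are illustrative, not Sonic Field’s actual internals):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A toy scheduler: closures are submitted to a shared pool and only
// materialise their result when (and if) it is actually demanded.
public class ToyScheduler {
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Equivalent in spirit to sf_do(closure): run later, on demand.
    public static <T> Future<T> doLater(Callable<T> closure) {
        return POOL.submit(closure);
    }

    public static void main(String[] args) throws Exception {
        Future<double[]> grain = doLater(() -> new double[1024]); // e.g. one windowed FFT grain
        double[] data = grain.get();                              // demand the result
        System.out.println("Grain length: " + data.length);
    }
}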

How are the two meshed?

Now that I hope I have put a good argument for hybrid programming between a great dynamic language (in this case Python) and a powerful mid level static language (in this case Java) it is time to look at how the two are fused together. There are many ways of doing this but Sonic Field picks a very distinct approach. It does not offer a general interface between the two where lots of intermediate code is generated and each method in Java is exposed separately into Python; rather it uses a uniform single interface with virtual dispatch.

Sonic Field defines a very (aggressively) simple calling convention from Python into Java which initially might look like a major pain in the behind but works out to create a very flexible and powerful approach.

Sonic Field defines ‘operators’ which all implement the following interface:

/* For Copyright and License see LICENSE.txt and COPYING.txt in the root directory */
package com.nerdscentral.sython;

import; *
 * @author AlexTu
 */
public interface SFPL_Operator extends Serializable {

    /**
     * <b>Gives the key word which the parser will use for this operator</b>
     *
     * @return the key word
     */
    public String Word();

    /**
     * <b>Operate</b> Whatever this operator does when SFPL is running is done by this method. The execution loop calls this
     * method with the current execution context and the passed forward operand.
     *
     * @param input
     *            the operand passed into this operator
     * @param context
     *            the current execution context
     * @return the operand passed forward from this operator
     * @throws SFPL_RuntimeException
     */
    public Object Interpret(Object input, SFPL_Context context) throws SFPL_RuntimeException;
}
The Word() method returns the name of the operator as it will be expressed in Python. The Interpret() method processes arguments passed to it from Python. As Sonic Field comes up, it creates a Jython interpreter and then adds the operators to it. The mechanism for doing this is a little involved so rather than go into detail here, I will simply give links to the code on github:
The result is that every operator is exposed in Python as sf.xxx where xxx is the return from the Word() method. With clever operator overloading and other syntactical tricks in Python I am sure that the approach could be refined. Right now, there are a lot of sf.xxx calls in Sonic Field Python (I call it Synthon) but I have not gotten around to improving on this simple and effective approach.

You might have noticed that everything passed into Java from Python is just ‘Object’. This seems a bit crude at first take. However, as we touched on in the section on futures in the previous post, it offers many advantages because the translation from Jython to Java is orchestrated via the Caster object and a layer of Python which transparently performs many useful translations. For example, the code automatically translates multiple arguments in Jython to a list of objects in Java:

def run(self, word, input, args):
    if len(args) != 0:
        pass  # multiple args are marshalled into a Java list (elided here)
    return ret

Here we can see how the arguments are processed into a list (which in Jython is implemented as an ArrayList) if there is more than one, but passed as a single object if there is only one. We can also see how the Python stack trace is passed into a thread local in the Java SFSignal object. Should an SFSignal not be freed or be double collected, this Python stack is displayed to help debug the program.

Is this interface approach a generally good idea for Jython/Java Communication?

Definitely not! It works here because of the nature of the Sonic Field audio processing architecture. We have processors which can be chained. Each processor has a simple input and output. The semantic content passed between Python and Java is quite limited. In more general purpose programming, this simple architecture, rather than being flexible and powerful, would be highly restrictive. In such cases, the normal Jython interface with Java would be much more effective. Again, we can see a great example of this simplicity in the previous post when talking about threading (where Python accesses Java Future objects). Another example is the direct interaction of Python with SFData objects in this post on modelling oscillators in Python.

from import SFData  # (module name lost in the post's formatting)
for x in range(0, length):
    pass  # (per-sample generation elided in the post)

This violates the programming model of Sonic Field by creating audio samples directly from Jython, but at the same time it illustrates the power of Jython! It also created one of the most unusual soundscapes I have so far achieved with the technology:

Engines of war, sound modelling
from oscillators in Python.

Wrapping It Up

Well, that is all folks. I could ramble on for ever, but I think I have answered most if not all of the questions I set out in the first post. The key ones that really interest me are about creativity and hybrid programming. Naturally, I am obsessed with performance as I am by profession an optimisation consultant, but moving away from my day job, can Jython and Java be a creative environment, and do they offer more creativity than pure Java?

Transition State Analysis using
hybrid programming

Too many years ago I worked on a similar hybrid approach in scientific computing. The GRACE software, which I helped develop as part of the team at Bath, was able to break new ground because it was easier to explore ideas in the hybrid approach than by writing raw FORTRAN constantly. I cannot present in deterministic, reductionist language a consistent argument for why this applied then to science or now to music; nevertheless, experience from myself and others has shown this to be a strong argument.

Whether you agree or disagree; irrespective of if you like the music or detest it; I wish you a very merry Christmas indeed.

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

The Java Ecosystem – My top 5 highlights of 2014

1. February the 1st – RedMonk Analyst firm declares that Java is more popular & diverse than ever!

The Java Ecosystem started off with a hiss and a roar in 2014 with the annual meeting of the Free Java room at FOSDEM. As well as the many fine deep technical talks on OpenJDK and related topics, there was also a surprise presentation on the industry from Steve O’Grady (RedMonk Analyst). Steve gave a data-led insight into where Java ranked in terms of both popularity and scope at the start of 2014. The analysis of where Java is as a language is repeated on RedMonk’s Blog. The fact it remains a top two language didn’t surprise anyone, but it was the other angle that really surprised even those of us heavily involved in the ecosystem. Steve’s talk clearly showed that Java is aggressively diverse, appearing in industries such as social media, messaging, gaming, mobile, virtualisation, build systems and many more, not just the Enterprise apps that people most commonly think about. Steve also showed that Java is being used heavily in new projects (across all of those industry sectors), which certainly killed the myth of Java being a legacy enterprise platform.

2. March the 18th – Java 8 arrives

The arrival of Java 8 ushered in a new Functional/OO hybrid direction for the language, giving it a new lease of life. The adoption rates have been incredible (see Typesafe’s full report on this); it was clearly the release that Java developers were waiting for.

Some extra thoughts around the highlights of this release:

  • Lambdas (JSR 335) – There has been so much written about this topic already, with a ton of fantastic books and tutorials to boot. For me the clear benefit to most Java developers was that they’re finally able to express the correct intent of behaviour with collections without all of the unnecessary boilerplate that imperative/OO constructs forced upon them. It boils down to the old joke that there are only two hard problems in computer science: cache invalidation, naming things, and off-by-one errors. The new streams API for collections, in conjunction with Lambdas, certainly helps with the last two! (See the short sketch after this list.)
  • Project Nashorn (JSR 223, JEP 174) – The JavaScript runtime which allows developers to embed JavaScript code within their Java applications. Although I personally won’t be using this anytime soon, it was yet another boost to the JVM in terms of first class support for dynamically typed languages. I’m looking forward to this trend continuing!
  • Date and Time API (JSR 310, JEP 150) – This is the sort of bread-and-butter API that a blue collar language like Java just needs to get right, and this time (take 3) they did! It’s been great to finally be able to work with timezones correctly, and it also set a new precedent of Immutable First as a conscious design decision for new APIs in Java.
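To illustrate the Lambdas/Streams point above, a tiny before-and-after sketch (the domain here is invented):

    import java.util.Arrays;
    import java.util.List;

    public class StreamsExample {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("duke", "juggy", "mascot");

            // Java 7 and earlier: imperative boilerplate
            int count7 = 0;
            for (String n : names) {
                if (n.length() > 4) count7++;
            }

            // Java 8: the intent, and nothing else
            long count8 =
                               .filter(n -> n.length() > 4)
                               .count();

            System.out.println(count7 + " == " + count8);
        }
    }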

3. ~July – ARM 64 port (AArch64)

Red Hat has led the effort to get the ARMv8 64-bit architecture supported in Java. This is clearly an important step to keep Java truly “Run anywhere”, and alongside SAP’s PowerPC/AIX port it represents two major ports that are primarily maintained by non-Oracle participants in OpenJDK. If you’d like to get involved, see the project page for more details.

Java still has a way to go before becoming a major player in the embedded space, but the signs in 2014 were encouraging with Java SE Embedded featuring regularly on the Raspberry Pi and Java ME Embedded getting a much needed feature parity boost with Java SE APIs.

4. Sept/Oct – A Resurgence in the JCP & its 15th Anniversary

The Java Community Process (JCP) is the standards body that defines what goes into Java SE, Java EE and Java ME. It re-invented itself as a much more open, community based organisation in 2013 and continued that good work in 2014, reversing the dropping membership trend. Most importantly, it now truly represents the incredible diversity that the Java ecosystem has. Looking at the make-up of the existing Executive Committee, you can see institutions like Java User Groups sitting alongside industry and end-user heavyweights such as IBM, Twitter and Goldman Sachs.

5. Community Collaboration at an all time high & Microsoft joins OpenJDK

The number of new joiners to OpenJDK (see Mani’s excellent post on this) was higher than ever. OpenJDK now represents a huge melting pot of major tech companies such as Red Hat, IBM, Oracle, Twitter and of course the shock entry this year of Microsoft.

The Adopt a JSR and Adopt OpenJDK programmes continue to get more day-to-day developers involved in guiding the future of various APIs, with regular workshops now being organised globally to test new APIs and ideas out early and feed that back into OpenJDK and the Java EE specifications in particular.

Community conferences and Java User Groups continue to rise in number, with JavaOne in particular having its strongest year in recent memory. It was also heartening to see a large number of community efforts helping kids learn to code, with after school and weekend programmes such as Devoxx for Kids.

What for 2015?

I’ll expect 2015 to be a little bit quieter in terms of changes for the core language or exciting new features to Java EE or Java ME, as their next major releases aren’t due until 2016. On the community front I expect to see Java developers having to firmly embrace web/UI technologies such as AngularJS, more systems/DevOps toolchains such as Docker, AWS, Puppet etc., and of course migrate to Java 8 and all of the functional goodness it now brings! The community I’m sure will continue to thrive, and the looming spectre of IoT will start to come into the mainstream as well. Java developers will likely have to wait until Java 9 to get a truly first class platform for embedded, but early adopters will want to begin taking a look at early builds throughout 2015.

Java/JVM applications now tend to be complex, with many moving parts and distributed deployments. It can often take poor frustrated developers weeks to fix issues in production. To combat this there is a new wave of interesting analysis tools dealing with Java/JVM based applications and deployments. Oracle’s Mission Control is a powerful tool that can give lots of interesting insights into the JVM, and other tools like XRebel from ZeroTurnaround and jClarity’s Censum and Illuminate take the next step of applying machine-learned analysis to the raw numbers.

One final important note: Project Jigsaw is the modularisation story for Java 9 that will massively impact tool vendors and day to day developers alike. The community at large needs your help to test out early builds of Java 9 and to help OpenJDK developers and tool vendors ensure that IDEs, build tools and applications are ready for this important change. You can join us in the Adoption Group at OpenJDK:

I hope everyone has a great holiday break – I look forward to seeing the Twitter feeds and the GitHub commits flying around in 2015 :-).
Martijn (CEO – jClarity, Java Champion & Diabolical Developer)

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

A persistent KeyValue Server in 40 lines and a sad fact

Advent time again .. picking up Peter’s well-written overview on the uses of Unsafe, I’ll have a short fly-by on how low level techniques in Java can save development effort by enabling a higher level of abstraction, or allow for Java performance levels probably unknown to many.

My major point is to show that the conversion of objects to bytes and vice versa is an important fundamental, affecting virtually any modern Java application.

Hardware likes to process streams of bytes, not object graphs connected by pointers, as “All memory is tape” (M. Thompson, if I remember correctly ..).

Many basic technologies are therefore hard to use with vanilla Java heap objects:

  • Memory Mapped Files – a great and simple technology to persist application data safely, fast and easily (see the sketch just after this list).
  • Network communication is based on sending packets of bytes
  • Interprocess communication (shared memory)
  • The large main memory of today’s servers (64GB to 256GB) causes GC issues when kept as objects on the Java heap
  • CPU caches work best on data stored as a continuous stream of bytes in memory
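A minimal sketch of the memory-mapped-file bullet using plain JDK APIs (path and sizes illustrative):

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MMapSketch {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch ="/tmp/data.bin"),
                    StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                // map 4KB of the file into memory; the OS persists dirty pages
                MappedByteBuffer buf =, 0, 4096);
                buf.putLong(0, 42L);                 // write a value at offset 0
                System.out.println(buf.getLong(0));  // read it straight back from mapped memory
            }
        }
    }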

so use of the Unsafe class in most cases boils down to helping transform a Java object graph into a continuous memory region and vice versa, either using

  • [performance enhanced] object serialization or
  • wrapper classes to ease access to data stored in a continuous memory region.

(source of examples used in this post can be found here, messaging latency test here)

    Serialization based Off-Heap

    Consider a retail WebApplication where there might be millions of registered users. We are actually not interested in representing the data in a relational database, as all that’s needed is quick retrieval of user-related data once a user logs in. Additionally one would like to traverse the social graph quickly.

    Let’s take a simple user class holding some attributes and a list of ‘friends’ making up a social graph.

    The easiest way to store this on heap is a simple huge HashMap.
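    A minimal sketch of such a user class and the naive on-heap variant might look like this (field names are assumptions, not the post’s actual code):

        import;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // A simple user record; the friends list is what forms the social graph.
        class User implements Serializable {
            String userId;
            String name;
            List<String> friends = new ArrayList<>();
        }

        public class OnHeapStore {
            // easiest -- and most heap-hungry -- variant: one huge HashMap
            static final Map<String, User> USERS = new HashMap<>();
        }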

    Alternatively one can use off heap maps to store large amounts of data. An off heap map stores its keys and values inside the native heap, so garbage collection does not need to track this memory. In addition, native heap can be told to automagically get synchronized to disk (memory mapped files). This even works in case your application crashes, as the OS manages write back of changed memory regions.

    There are some open source off heap map implementations out there with various feature sets (e.g. ChronicleMap), for this example I’ll use a plain and simple implementation featuring fast iteration (optional full scan search) and ease of use.

    Serialization is used to store objects, deserialization is used in order to pull them to the java heap again. Pleasantly I have written the (afaik) fastest fully JDK compliant object serialization on the planet, so I’ll make use of that.
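    The library in question is fast-serialization (fst); a rough usage sketch (assuming fst’s FSTConfiguration entry point) looks like this:

        import org.nustaq.serialization.FSTConfiguration;

        public class FstSketch {
            // one configuration instance, re-used: creating it is expensive
            static final FSTConfiguration CONF = FSTConfiguration.createDefaultConfiguration();

            public static void main(String[] args) {
                User u = new User();                      // the user record from above
                byte[] bytes = CONF.asByteArray(u);       // object graph -> bytes
                User copy = (User) CONF.asObject(bytes);  // bytes -> object graph
                System.out.println(bytes.length + " bytes");
            }
        }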


    Using the off heap map this way buys us:

    • persistence by memory mapping a file (the map will reload upon creation).
    • Java Heap still empty to serve real application processing, with Full GC < 100ms.
    • Significantly less overall memory consumption. A serialized user record is ~60 bytes, so in theory 300 million records fit into 180GB of server memory. No need to raise the big data flag and run 4096 hadoop nodes on AWS ;).

    Comparing a regular in-memory Java HashMap and a fast-serialization based persistent off heap map holding 15 million user records will show the following results (on an older 3GHz XEON 2×6):

                  consumed Java Heap (MB)   Full GC (s)   Native Heap (MB)   get/put ops per s   required VM size (MB)
    HashMap       6.865,00                  26,039        0                  3.800.000,00
    OffheapMap (Serialization based)

    [test source / blog project] Note: You’ll need at least 16GB of RAM to execute them.

    As one can see, even with fast serialization there is a heavy penalty (~factor 5) in access performance; anyway, compared to other persistence alternatives it is still superior (1-3 microseconds per “get” operation, “put()” very similar).

    Use of JDK serialization would perform at least 5 to 10 times slower (direct comparison below) and therefore render this approach useless.

    Trading performance gains against higher level of abstraction: “Serverize me”

    A single server won’t be able to serve (hundreds of) thousands of users, so we somehow need to share data amongst processes, even better: across machines.

    Using a fast implementation, it’s possible to generously use (fast-) serialization for over-the-network messaging. Again: if this ran like 5 to 10 times slower, it just wouldn’t be viable. Alternative approaches require an order of magnitude more work to achieve similar results.

    By wrapping the persistent off heap hash map with an Actor implementation (async ftw!), some lines of code make up a persistent KeyValue server with a TCP-based and an HTTP interface (uses kontraktor actors). Of course the Actor can still be used in-process if one decides so later on.

    Now that’s a micro service. Given it lacks any attempt at optimization and is single threaded, it’s reasonably fast [same XEON machine as above]:

    • 280_000 successful remote lookups per second 
    • 800_000 in case of fail lookups (key not found)
    • serialization based TCP interface (1 liner)
    • a stringy webservice for the REST-of-us (1 liner).

    [source: KVServer, KVClient] Note: You’ll need at least 16GB of RAM to execute the test.

    A real world implementation might want to double performance by directly putting the received serialized object byte[] into the map instead of encoding it twice (encode/decode once for transmission over the wire, then decode/encode for the off-heap map).

    “RestActorServer.Publish(..);” is a one liner to also expose the KVActor as a webservice in addition to raw tcp:

    C like performance using flyweight wrappers / structs

    With serialization, regular Java Objects are transformed to a byte sequence. One can do the opposite: create wrapper classes which read data from fixed or computed positions of an underlying byte array or native memory address. (E.g. see this blog post).

    By moving the base pointer it’s possible to access different records by just moving the wrapper’s offset. Copying such a “packed object” boils down to a memory copy. In addition, it’s pretty easy to write allocation free code this way. One downside is that reading/writing single fields has a performance penalty compared to regular Java Objects. This can be made up for by using the Unsafe class.
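    To make the flyweight idea concrete, here is a hand-rolled sketch over a ByteBuffer (layout and names invented for illustration; real implementations often use Unsafe instead):

        import java.nio.ByteBuffer;

        // A flyweight 'record' view over flat memory: fields live at fixed
        // offsets from a movable base, so iterating records is just pointer
        // arithmetic and copying a record is a plain memory copy.
        public class PriceFlyweight {
            static final int RECORD_SIZE = 12;  // 8-byte price + 4-byte quantity
            private ByteBuffer buf;
            private int base;                   // offset of the current record

            public PriceFlyweight wrap(ByteBuffer buf, int recordIndex) {
                this.buf = buf;
                this.base = recordIndex * RECORD_SIZE;  // move the base pointer
                return this;
            }

            public double price()         { return buf.getDouble(base); }
            public int    quantity()      { return buf.getInt(base + 8); }
            public void   price(double v) { buf.putDouble(base, v); }
            public void   quantity(int v) { buf.putInt(base + 8, v); }
        }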

    “flyweight” wrapper classes can be implemented manually as shown in the blog post cited; however, as code grows this starts getting unmaintainable.
    Fast-serialization provides a byproduct, “struct emulation”, supporting creation of flyweight wrapper classes from regular Java classes at runtime. Low level byte fiddling in application code can be avoided for the most part this way.

    How a regular Java class can be mapped to flat memory (fst-structs):

    Of course there are simpler tools out there to help reduce manual programming of encoding (e.g. Slab) which might be more appropriate for many cases and use less “magic”.

    What kind of performance can be expected using the different approaches (sad fact incoming)?

    Let’s take the following struct-class, consisting of a price update and an embedded struct denoting a tradable instrument (e.g. stock), and encode it using various methods:

    a ‘struct’ in code

    Pure encoding performance (messages per second):

    Structs         fast-Ser (no shared refs)   fast-Ser       JDK Ser (no shared)   JDK Ser
    26.315.000,00   7.757.000,00                5.102.000,00   649.000,00            644.000,00

    Real world test with messaging throughput:

    In order to get a basic estimation of differences in a real application, I ran an experiment on how the different encodings perform when used to send and receive messages at a high rate via reliable UDP messaging:

    The Test:
    A sender encodes messages as fast as possible and publishes them using reliable multicast, a subscriber receives and decodes them.

    Structs        fast-Ser (no shared refs)   fast-Ser       JDK Ser (no shared)   JDK Ser
    6.644.107,00   4.385.118,00                3.615.584,00   81.582,00             79.073,00

    (Tests done on I7/Win8, XEON/Linux scores slightly higher, msg size ~70 bytes for structs, ~60 bytes serialization).

    Slowest compared to fastest: a factor of 82. The test highlights an issue not covered by micro-benchmarking: encoding and decoding should perform similarly, as factual throughput is determined by Min(encoding performance, decoding performance). For unknown reasons JDK serialization manages to encode the message tested at like 500_000 messages per second, but its decoding performance is only 80_000 per second, so in the test the receiver gets dropped quickly:

    ***** Stats for receive rate:   80351   per second *********
    ***** Stats for receive rate:   78769   per second *********
    SUB-ud4q has been dropped by PUB-9afs on service 1
    fatal, could not keep up. exiting

    (Creating backpressure here probably isn’t the right way to address the issue ;-) )


    In conclusion:

    • a fast serialization allows for a level of abstraction in distributed applications which is impossible if the serialization implementation is either
      – too slow
      – incomplete, e.g. cannot handle any serializable object graph
      – requiring manual coding/adaptions (which would put many restrictions on actor message types, Futures, Spores; a maintenance nightmare)
    • Low level utilities like Unsafe enable different representations of data, resulting in extraordinary throughput or guaranteed latency boundaries (allocation free main path) for particular workloads. These are impossible to achieve by a large margin with the JDK’s public tool set.
    • In distributed systems, communication performance is of fundamental importance. Removing Unsafe is not the biggest fish to fry looking at the numbers above .. JSON or XML won’t fix this ;-).
    • While the HotSpot VM has reached an extraordinary level of performance and reliability, CPU is wasted in some parts of the JDK like there’s no tomorrow. Given we are living in the age of distributed applications and data, moving stuff over the wire should be easy to achieve (not manually coded) and as fast as possible.
    Addendum: bounded latency

    A quick Ping Pong RTT latency benchmark showing that Java can compete with C solutions easily, as long as the main path is allocation free and techniques like those described above are employed:

    [credits: charts+measurement done with HdrHistogram]

    This is an “experiment” rather than a benchmark (so do not read: ‘Proven: Java faster than C’); it shows that low-level Java can compete with C in at least this low-level domain.
    Of course it’s not exactly idiomatic Java code; however, it’s still easier to handle, port and maintain compared to a JNI or pure C(++) solution. Low latency C(++) code won’t be that idiomatic either ;-)

    About me: I am a solution architect freelancing at an exchange company in the area of realtime GUIs, middleware, and low latency CEP (Complex Event Processing).
    I am blogging at,
    hacking at

    This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!