Java in 2015 – Major happenings

2015 was a year in which Java (the language, platform, ecosystem and community) continued to dominate the software landscape, with only JavaScript having a similar-sized impact on the industry. In case you missed the highlights of 2015, here are some of the major happenings that occurred.

Java 20 years old and still not dead yet!

Java turned 20 this year and swept back to the top of the Tiobe index in December 2015. Although the Tiobe index is hardly a 100% peer-reviewed scientific methodology, it is seen as a pretty strong barometer for the health of a language/platform. So what the heck happened to boost Java so dramatically again?

Firstly, Java 8, released the previous year, was adopted by mainstream enterprise Java shops. The functional capabilities of lambdas, combined with the new Streams API and enhanced Collections framework, gave the language a new lease of life. Although Java 8 is not as rich in its feature set as, say, Scala or Python, it is seen as the steady workhorse that now has at least some feature parity with more aggressive languages. Enterprises love a stable platform, and it's unlikely that Java will be disappearing any time soon.
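To make that concrete, here's a small, generic illustration (not from any particular codebase) of the lambda-plus-Streams style that won so many developers over:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> languages = Arrays.asList("Java", "Scala", "Python", "JavaScript");

        // Filter, transform and collect declaratively, with no external loop:
        List<String> shortNames = languages.stream()
                .filter(name -> name.length() <= 5)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());

        System.out.println(shortNames); // [JAVA, SCALA]
    }
}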

Secondly, Java has become a strong choice for infrastructure platforms and frameworks. Many popular NoSQL and data grid solutions, such as Apache Cassandra and Hazelcast, are written in Java, again thanks to its stability and strong threading and networking support. CI tools such as Jenkins are widely adopted, and business productivity tools such as Atlassian's JIRA are likewise Java based.

Oracle guts its Java evangelism team

Oracle fired much of its Java evangelism team just before JavaOne, which wasn't the greatest PR move by the stewards of Java. Over the subsequent months it became clear that this wasn't a step by Oracle to reduce its engineering investment in Java, but there were nervous times for much of the community as they feared the worst. A salient reminder that big corporations don't always have their left hand talking to their right!

Java 9 delay announced

In the "We're not really surprised" bucket came the announcement that Java 9 will be delayed until March 2017, in order to ensure that the new modularisation system will not break the millions of Java applications running out there today.

Although the technical work of Jigsaw is progressing nicely, the entire ecosystem will need to test on the new system. The Quality group in OpenJDK is leading this effort. I highly recommend you contact them to be part of the early access and feedback loop.

OpenJDK supports further mobile platforms

The creation of the OpenJDK mobile project came as a surprise to many, and although it doesn't represent a change in Oracle's business direction, it was a welcome release of code to enable Java on ARM, Android and iOS platforms. There's much technical work to do, but it will be interesting to watch whether the software community at large picks up on this new support and tries Java out as a language for the iOS and Android platforms in 2016 and beyond. There is a possibility that OpenFX (JavaFX) combined with Java mobile on iOS or Android may entice a slew of developers to this 'new' platform.

Was I right about 2015?

It's always fun to look back at past predictions, so let's see how I did!

  1. I expected 2015 to be a little bit quieter. Well, I clearly got that wrong! Despite no major releases for ME, SE or EE, the excitement of celebrating 20 years of Java and a surge of new developers using Java 8 meant 2015 was busier than ever.
  2. Embracing JavaScript for the front end. This trend continues, and stacks such as JHipster show the new love affair that Java developers have with JavaScript.
  3. DevOps toolchains to the fore. Docker continues to steamroll ahead in terms of popularity, and Java developers are especially starting to use Docker in test environments to avoid polluting environments with variations in Java runtimes, web servers, data stores, etc.
  4. IoT and Java to be a thing. Nope, not yet! Perhaps in 2016, with the new Mobile Java project in OpenJDK and further refinement of Java ME, we may start to see serious inroads.

I'm not going to make any predictions for 2016 as I clearly need to stick to my day job :-)

One final important note. Project Jigsaw is the modularisation story for Java 9 that will massively impact tool vendors and day-to-day developers alike. The community at large needs your help in testing early builds of Java 9 and in helping OpenJDK developers and tool vendors ensure that IDEs, build tools and applications are ready for this important change. You can join us in the Adoption Group at OpenJDK. I hope everyone has a great holiday break – I look forward to seeing the Twitter feeds and the GitHub commits flying around in 2016 :-).

Cheers,
Martijn (CEO – jClarity, Java Champion & Diabolical Developer)

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Adopt OpenJDK & Java community: how can you help Java!

Introduction

I want to take this opportunity to show what we have been doing over the last year, and what we have done so far, as members of the community. Unlike in other years, I have decided to keep this post less technical compared to past years and to the other posts on Java Advent this year.

In the beginning

This year marks the fourth year since the first OpenJDK hackday was held in London (supported by the LJC and its members) and since the Adopt OpenJDK programme was started. Four years is a small number set against 20 years of Java, and the same goes for the size of the Adopt OpenJDK community, which forms a small part of the Java community (9+ million users). Although this post is non-technical in nature, the message herein is fairly important for the future growth and progress of our community and the next generation of developers.

Creations of the community


Over the past several months, a number of members of our community have contributed and passed on their good work to us. In no specific order, I have listed these from memory. I know there are more to name, and you can help us by sharing those with us (we will list them here). So here are some of the contributions we can talk about and be proud of, with thanks to those who were involved:

  • Getting Started page – created to enable two-way communication with the members of the community; these include a mailing list, an IRC channel, a weekly newsletter, a Twitter handle, and other social media channels and collaboration tools.
  • Adopt OpenJDK project: jitwatch – a great tool created by Chris Newland; it's one of a kind, ever growing in features, and helps developers fine-tune the performance of their Java/JVM applications running on the JVM.
  • Adopt OpenJDK: GSK – a community effort gathering knowledge and experience from hackday attendees and OpenJDK developers on how to work with OpenJDK, from building it to creating your own version of the JDK. Many JUG members have been involved in the process, and this is now an e-book available in many languages (5 languages, with 2 to 3 more in progress).
  • Adopt OpenJDK vagrant scripts – a collection of Vagrant scripts initially created by John Patrick from the LJC, later improved by community members who added more scripts and refactored existing ones. These scripts help build OpenJDK projects in a virtualised environment (VirtualBox), making building and testing OpenJDK, as well as running and testing Java/JVM applications, much easier, more reliable and well isolated.
  • Adopt OpenJDK docker scripts – a collection of Docker scripts created with the help of the community; these are now also receiving contributions from a number of members like Richard Kolb (SA JUG). Just like the Vagrant scripts mentioned above, the Docker scripts have similar goals, and need your DevOps fu!
  • Adopt OpenJDK project: mjprof – mjprof is a monadic jstack analysis tool set. That is a fancy way of saying it analyses jstack output using a series of simple composable building blocks (monads). Many thanks to Haim Yadid for donating it to the community.
  • Adopt OpenJDK project: jcountdown – a tool built by the community that mimics the spirit of ie6countdown.net: that is, to encourage users to move to the latest and greatest Java! Many thanks to all those involved, as you can see from the commit history.
  • Adopt OpenJDK CloudBees Build Farm – thanks to the folks at CloudBees for helping us host our build farm on their CI/CD servers. This one was initially started by Martijn Verburg and, with the later help of a number of JUG members, has come to the point that major Java projects are built against different versions of the JDK. These projects include building the JDKs themselves (versions 1.7, 1.8, 1.9, Jigsaw and Shenandoah). This project has also helped support the Testing Java Early project and the Quality Outreach programme.

These are just a handful of the creations and contributions from members of the community, and some of these projects would certainly welcome your help. One more thing we could do well as a community is celebrate our victories and successes, and especially credit those who have been involved, whether as individuals or as a community, so that our next generation of contributors feel inspired and encouraged to do more good work and share it with us.

Contributions from the community


In a recent tweet and in posts to various Java/JVM and developer mailing lists, I asked the community to come forward and share their contribution stories, or those of others, with our community. The purpose was two-fold: one, to share them with the community, and the other, to write this post (which in turn is shared with the community). I was happy to see a handful of messages sent to me and the mailing lists by a number of community members. I'll share some of these with you (in the order I received them).

Sebastian Daschner:

I don't know if that counts as a contribution, but I've hacked on the OpenJDK compiler for fun several times. For example I added a new thought-up 'maybe' keyword which produces randomly executed code: https://blog.sebastian-daschner.com/entries/maybe_keyword_in_java

Thomas Modeneis:

Thanks for writing, I like your initiative, its really good to show how people are doing and what they have been focusing on. Great idea.
From my part, I can tell about the DevoxxMA last month, I did a talk on the Hacker Space about the Adopt the OpenJDK and it was really great. We had about 30 or more attendees, it was in a open space so everyone that was going to any talk was passing and being grabbed to have a look about the topic, it was really challenging because I had no mic. but I managed to speak out loud and be listen, and I got great feedback after the session. I’m going to work over the weekend to upload the presentation and the recorded video and I will be posting here as soon as I have it done! ūüôā

Martijn Verburg:

Good initiative.  So the major items I participated in were Date and Time and Lambdas Hackdays (reporting several bugs), submitted some warnings cleanups for OpenJDK.  Gave ~10 pages of feedback for jshell and generally tried to encourage people more capable than me to contribute :-).

Andrii Rodionov:

Olena Syrota and Oleg Tsal-Tsalko from Ukraine JUG: Contributing to the JSR 367 test code-base (https://github.com/olegts/jsonb-spec), promoting 'Adopt a JSR' and the JSON-B spec at JUG UA meetings (http://jug.ua/2015/04/json-binding/) and also at the JavaDay Lviv conference (http://www.slideshare.net/olegtsaltsalko9/jsonb-spec).

Contributors


As you have seen, out of a community of 9+ million users, only a handful came forward to share their stories. I can, however, point you to another list of contributors who have been paramount in their contributions to the Adopt OpenJDK GitBook; for example, take a look at the list of contributors and also the committers on the git repo. They have contributed not just to the book but to Java and the OpenJDK community, especially those who have helped translate the book into multiple languages. And then there are a number of people who haven't come forward to add their names to the list, even though they have made valuable contributions.

From this I can say that contributors can be like unsung heroes, either due to their shy or low-profile nature, or because they just don't get noticed by us. So it would only be fair to encourage them to come forward, or to share their contributions with the community, however simple or small those may be. In addition to the above list I would like to add a number of names (again, apologies if I have missed your name or not mentioned all of your contributions). These are in no particular order, just as they come to mind, as their contributions have been invaluable:

  • Dalibor Topic (OpenJDK Project Lead) & the OpenJDK team
  • Mario Torre & the RedHat OpenJDK team
  • Tori Wieldt (Java Community manager) and her team
  • Heather Vancura & the JCP team
  • NightHacking, vJUG and RebelLabs (and the great people behind them)
  • Nicolaas & the team at CloudBees
  • Chris Newland (JitWatch developer)
  • Lucy Carey, Ellie & Mark Hazell (Devoxx UK & Voxxed)
  • Richard Kolb (JUG South Africa)
  • Daniel Bryant, Richard Warburton, Ben Evans, and a number of others from LJC
  • Members of SouJava (Otavio, Thomas, Bruno, and others)
  • Members of Bulgarian JUG (Ivan, Martin, Mitri) and neighbours
  • Oti, Ludovic & Patrick Reinhart
  • and a number of other contributors who for some reason I can’t remember…

I have named them for their contributions to the community: helping organise hackdays during the week and at weekends, running workshops and hands-on sessions at conferences, giving lightning talks, speaking at conferences, allowing us to host our CI and build farm servers, travelling to different parts of the world holding the Java community flag, writing books, giving Java and advanced-level training, giving feedback on new technologies and features, and innumerable other activities that support and push forward the Java/JVM platform.

How can you make a difference? And why?


You can make a difference by doing something as simple as clicking the like button (on Twitter, LinkedIn, Facebook, etc.) or responding to a message on a mailing list by expressing your opinion about something you see or read about: why you think about it that way, or how it could be different.

The answer to the question "And why?" is simple: because you are part of a community and 'you care', and want to share your knowledge and experience with others, just like those above who have spared free moments of their valuable time for us.

Is it hard to do? Where do you start? What needs the most attention?

The answer is that it's not hard to do; if so many have done it, you can do it as well. Where to start and what can you do? I have written a page on this topic, and it's worth reading before going any further.

There is a dynamic list of topics that is worth considering when thinking of contributing to OpenJDK and Java, and recently I have filtered this list down to a few topics (in order of precedence).

We need you!

With that I would like to close by saying:


Not just “I”, but we as a community need you.

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Composing Multiple Async Results via an Applicative Builder in Java 8

A few months ago, I put out a publication where I explained in detail an abstraction I came up with named Outcome, which helped me A LOT to code without side effects by enforcing the use of semantics. By following this simple (and yet powerful) convention, I ended up turning any kind of failure (a.k.a. exception) into an explicit result from a function, making everything much easier to reason about. I don't know about you, but I was tired of dealing with exceptions that tore everything down, so I did something about it, and to be honest, it worked really well. So before I keep going with my tales from the trenches, I really recommend going over that post. Now let's solve some asynchronous issues by using eccentric applicative ideas, shall we?

Something wicked this way comes

Life was real good; our coding was fast-paced, cleaner and as composable as ever. But, out of the blue, we stumbled upon a "missing" feature (evil laughs please): we needed to combine several asynchronous Outcome instances in a non-blocking fashion…


Excited by the idea, I got down to work. I experimented for a fair amount of time, seeking a robust and yet simple way of expressing these kinds of situations. While the new CompletableFuture API turned out to be much nicer than I expected (though I still don't understand why they decided to use names like thenApplyAsync or thenComposeAsync instead of map or flatMap), I always ended up with implementations too verbose and repetitive compared to some stuff I did with Scala. But after some long "mate" sessions, I had my "Hey!" moment: why not use something similar to an applicative?

The problem

Suppose that we have these two asynchronous results:
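(The original snippet was embedded as a gist; here's a minimal sketch of what it describes, where Outcome is the abstraction from the earlier post and the fetch methods are hypothetical placeholders.)

// Two independent async computations, each wrapping its result in an Outcome:
CompletableFuture<Outcome<String>> textf = fetchTextAsync();      // hypothetical
CompletableFuture<Outcome<Integer>> numberf = fetchNumberAsync(); // hypothetical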

and a silly entity called Message:
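(Again, the original snippet is elided; a plausible minimal sketch follows, with the two fields inferred from the problem statement below.)

public class Message {

    private final String text;
    private final Integer number;

    public Message(String text, Integer number) {
        this.text = text;
        this.number = number;
    }
}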

I need something that, given textf and numberf, will give me back something like this:

//After combining textf and numberf
CompletableFuture<Outcome<Message>> message = ....

So I wrote a letter to Santa Claus:

  1. I want to asynchronously format the string returned by textf using the number returned by numberf, only when both values are available, meaning that both futures completed successfully and neither of the outcomes failed. Of course, we need to be non-blocking.
  2. In case of failures, I want to collect all failures that took place during the execution of textf and/or numberf and return them to the caller, again, without blocking at all.
  3. I don't want to be constrained by the number of values to be combined; it must be capable of handling a fair amount of asynchronous results. Did I say without blocking? There you go…
  4. Not to die during the attempt.


Applicative builder to the rescue

If you think about it, one simple way to put what we’re trying to achieve is as follows:

// Given a String -> Given a number -> Format the message
f: String -> Integer -> Message

Checking the definition of f, it is saying something like: "Given a String, I will return a function that takes an Integer as a parameter, which, when applied, will return an instance of type Message." This way, instead of waiting for all values to be available at once, we can partially apply one value at a time, getting an actual description of the construction process of a Message instance. That sounded great.
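Expressed with plain JDK 8 functional interfaces, a hand-rolled curried version of f for this fixed arity could look like the following sketch (using the Message entity from above); it's exactly the generic version of this that Java can't give us:

import java.util.function.Function;

// f: String -> Integer -> Message
Function<String, Function<Integer, Message>> f =
        text -> number -> new Message(text, number);

// Partially apply one value at a time, as each becomes available:
Function<Integer, Message> withText = f.apply("Total so far: %d");
Message message = withText.apply(42);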

To achieve that, it would be really awesome if we could take the construction lambda Message::new and curry it. Boom! Done! But in Java that's impossible (to do in a generic, beautiful and concise way), so for the sake of our example, I decided to go with our beloved Builder pattern, which kinda does the job:
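(The builder was also shown as a gist; here's an assumed sketch of its shape. It implements the WannabeApplicative interface defined right below, and the static applicative() factory is what gets used later via Message.applicative(); the original may have parameterised things differently.)

// Inside Message: a conventional builder that knows how to finally
// materialise the instance.
public static class Builder implements WannabeApplicative<Message> {

    private String text;
    private Integer number;

    public Builder withText(String text) {
        this.text = text;
        return this;
    }

    public Builder withNumber(Integer number) {
        this.number = number;
        return this;
    }

    @Override
    public Message apply() {
        return new Message(text, number);
    }
}

// Also on Message, as used later via Message.applicative():
public static Builder applicative() {
    return new Builder();
}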

And here’s the WannabeApplicative<T> definition

public interface WannabeApplicative<V>
{
    V apply();
}

Disclaimer: for those functional freaks out there, this is not an applicative per se. I'm aware of that, but I took some ideas from it and adapted them according to the tools that the language offered me out of the box. So, if you're feeling curious, go check this post for a more formal example.

If you're still with me, we can agree that we've done nothing too complicated so far. But now we need to express a building step which, remember, needs to be non-blocking and capable of combining any previous failure that might have taken place in other executions with potentially new ones. So, in order to do that, I came up with something as follows:
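(The definitions were embedded as a gist; reconstructing them from the description that follows gives roughly this shape. The names come from the prose, the exact signatures are assumptions, and the value factory and stickedTo hack discussed below are omitted since their signatures aren't fully recoverable.)

import java.util.function.BiFunction;

// A lazy application of an already-captured value to a builder.
@FunctionalInterface
interface Partial<B> {
    B apply(B builder);
}

// The "how" of combining the building state so far (Outcome<B>) with a
// newly arrived value (Outcome<V>), yielding the next building step.
@FunctionalInterface
interface MergingStage<B, V> extends BiFunction<Outcome<B>, Outcome<V>, Partial<B>> {
}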

First of all, we've got two functional interfaces: one is Partial<B>, which represents a lazy application of a value to a builder, and the second one, MergingStage<B,V>, represents the "how" to combine both the builder and the value. Then, we've got a method called value that, given an instance of type CompletableFuture<Outcome<V>>, will return an instance of type MergingStage<B,V>, and believe it or not, here's where the magic takes place. If you remember the MergingStage definition, you'll see it's a BiFunction, where the first parameter is of type Outcome<B> and the second one is of type Outcome<V>. Now, if you follow the types, you can tell that we've got two things: the partial state of the building process on one side (type parameter B) and a new value that needs to be applied to the current state of the builder (type parameter V), so that, when applied, it will generate a new builder instance with the "next state in the building sequence", which is represented by Partial<B>. Last but not least, we've got the stickedTo method, which is basically an (awful Java) hack to stick to a specific applicative type (builder) while defining a building step. For instance, having:

I can define partial value applications to any Builder instance as follows:

See that we haven't built anything yet; we've just described what we want to do with each value when the time comes. We might want to perform some validations before using the new value (here's where Outcome plays an important role) or just use it as it is; it's really up to us, but the main point is that we haven't applied anything yet. In order to do so, and to finally tie up all the loose ends, I came up with another definition, which looks as follows:

Hope it's not too overwhelming, but I'll try to break it down as clearly as possible. In order to start specifying how you're going to combine the whole thing together, you start by calling begin with an instance of type WannabeApplicative<V>, which, in our case, means the type parameter V is equal to Builder.

FutureCompositions<Message, Builder> ab = begin(Message.applicative());

See that, after you invoke begin, you get a new instance of FutureCompositions with a lazily evaluated partial state inside of it, making it the one and only owner of the whole building process state. That was the ultimate goal of everything we've done so far: to fully gain control over when and how things will be combined. Next, we must specify the values that we want to combine, and that's what the binding method is for:

ab.binding(textToApply)
  .binding(numberToApply);

This is how we supply our builder instance with all the values that need to be merged together, along with the specification of what's supposed to happen with each one of them, via our previously defined Partial instances. Also see that everything's still lazily evaluated; nothing has happened yet, but we have stacked all the "steps" until we finally decide to materialise the result, which happens when you call perform.

CompletableFuture<Outcome<Message>> message = ab.perform();

From that very moment everything will unfold: each building stage will get evaluated, where failures can be returned and collected within an Outcome instance, or the newly available values will be supplied to the target builder instance. One way or the other, all steps will be executed until nothing is left to be done. I will try to depict what just happened as follows:

(Figure: the applicative building process – stages are declared top-down, then evaluated back up from perform().)

If you pay attention to the left side of the picture, you can easily see how each step gets "defined" as I showed before, following the "declaration" arrow direction: that's how you actually describe the building process. Then, from the moment you call perform, each applicative instance (remember: Builder, in our case) is lazily evaluated in the opposite direction. Evaluation starts with the last specified stage in the stack, which then proceeds to evaluate the next one, and so forth, up to the point where we reach the "beginning" of the building definition; from there it starts to unfold, rolling out the evaluation of each step back up to the top, collecting everything it can by using the MergingStage specification.

And this is just the beginning….

I’m sure a lot could be done to improve this idea, for example:

  • The two consecutive calls to dependingOn at CompositionSources.values() suck; too verbose for my taste. I must do something about it.
  • I'm not quite sure whether to keep passing Outcome instances to a MergingStage; it would look cleaner and easier if we unwrapped the values to be merged before invoking it and just returned Either<Failure, V> instead. This would reduce complexity and increase flexibility regarding what's supposed to happen behind the scenes.
  • Though using the Builder pattern did the job, it feels old-school. I would love to be able to easily curry constructors, so on my to-do list is to check whether jOOλ or Javaslang have something to offer on that matter.
  • Better type inference, so that any unnecessary noise gets removed from the code; for example, the stickedTo method really is a code smell, something that I hated from the start. I definitely need more time to figure out an alternative way to infer the applicative type from the definition itself.

You’re more than welcome to send me any suggestions and comments you might have. Cheers and remember…..


@gszeliga

Sources


Reactive Development Using Vert.x

Lately, it seems like we're always hearing about the latest and greatest frameworks for Java: tools like Ninja, SparkJava and Play. But each one is opinionated and makes you feel like you need to redesign your entire application to make use of their wonderful features. That's why I was so relieved when I discovered Vert.x. Vert.x isn't a framework, it's a toolkit; it's un-opinionated and it's liberating. Vert.x doesn't want you to redesign your entire application to make use of it, it just wants to make your life easier. Can you write your entire application in Vert.x? Sure! Can you add Vert.x capabilities to your existing Spring/Guice/CDI applications? Yep! Can you use Vert.x inside of your existing JavaEE applications? Absolutely! And that's what makes it amazing.

Background

Vert.x was born when Tim Fox decided that he liked a lot of what was being developed in the NodeJS ecosystem, but he didn’t like some of the trade-offs of working in V8: Single-threadedness, limited library support, and JavaScript itself. Tim set out to write a toolkit which was unopinionated about how and where it is used, and he decided that the best place to implement it was on the JVM. So, Tim and the community set out to create an event-driven, non-blocking, reactive toolkit which in many ways mirrored what could be done in NodeJS, but also took advantage of the power available inside of the JVM. Node.x was born and it later progressed to become Vert.x.

Overview

Vert.x is designed to implement an event bus, which is how different parts of the application communicate in a non-blocking, thread-safe manner. Parts of it were modelled after the actor methodology exhibited by Erlang and Akka. It is also designed to take full advantage of today's multi-core processors and highly concurrent programming demands. As such, all Vert.x verticles are single-threaded by default. Unlike NodeJS though, Vert.x can run MANY verticles in MANY threads. Additionally, you can specify that some verticles are "worker" verticles and CAN be multi-threaded. And to really add some icing on the cake, Vert.x has low-level support for multi-node clustering of the event bus via the use of Hazelcast. It has gone on to include many other amazing features which are too numerous to list here, but you can read more in the official Vert.x docs.

The first thing you need to know about Vert.x is, similar to NodeJS, never block the current thread. Everything in Vert.x is set up, by default, to use callbacks/futures/promises. Instead of doing synchronous operations, Vert.x provides async methods for doing most I/O and processor-intensive operations which might block the current thread. Now, callbacks can be ugly and painful to work with, so Vert.x optionally provides an API based on RxJava which implements the same functionality using the Observer pattern. Finally, Vert.x makes it easy to use your existing classes and methods by providing the executeBlocking method on many of its asynchronous APIs. This means you can choose how you prefer to work with Vert.x instead of the toolkit dictating to you how it must be used.
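As a quick illustration, here's a sketch of wrapping an existing blocking call using the standard Vert.x 3 executeBlocking API (the DAO call is hypothetical):

vertx.<Customer>executeBlocking(future -> {
    // Runs on a worker thread, so blocking here is safe:
    Customer customer = dao.findOne(42L); // hypothetical blocking DAO call
    future.complete(customer);
}, res -> {
    // Back on the event loop with an async result:
    if (res.succeeded()) {
        System.out.println("Loaded customer: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});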

The second thing to know about Vert.x is that it is composed of verticles, modules, and nodes. Verticles are the smallest unit of logic in Vert.x, and are usually represented by a single class. Verticles should be simple and single-purpose, following the UNIX philosophy. A group of verticles can be put together into a module, which is usually packaged as a single JAR file. A module represents a group of related functionality which, taken together, could represent an entire application or just a portion of a larger distributed application. Lastly, nodes are single instances of the JVM which are running one or more modules/verticles. Because Vert.x has clustering built in from the ground up, Vert.x applications can span nodes either on a single machine or across multiple machines in multiple geographic locations (though latency can hinder performance).
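To give a feel for how small the smallest unit can be, here's a minimal verticle using the standard Vert.x 3 API (a sketch, separate from the example project below):

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // A single-purpose verticle: serve one endpoint from the event loop.
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("Hello from a verticle"))
             .listen(8080);
    }
}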

Example Project

Now, I've been to a number of meetups and conferences lately where the first thing they show you when talking about reactive programming is how to build a chat room application. That's all well and good, but it doesn't really help you to completely understand the power of reactive development. Chat room apps are simplistic. We can do better. In this tutorial, we're going to take a legacy Spring application and convert it to take advantage of Vert.x. This has multiple purposes: it shows that the toolkit is easy to integrate with existing Java projects, it allows us to take advantage of existing tools which may be entrenched parts of our ecosystem, and it also lets us follow the DRY principle in that we don't have to rewrite large swathes of code to get the benefits of Vert.x.

Our legacy Spring application is a contrived, simple example of a REST API using Spring Boot, Spring Data JPA, and Spring REST. The source code can be found in the "master" branch HERE. There are other branches which we will use to demonstrate the progression as we go, so it should be simple for anyone with a little experience with git and Java 8 to follow along. Let's start by examining the Spring configuration class for the stock Spring application.


@SpringBootApplication
@EnableJpaRepositories
@EnableTransactionManagement
@Slf4j
public class Application {
    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);

        System.out.println("Let's inspect the beans provided by Spring Boot:");

        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setGenerateDdl(true);

        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setPackagesToScan("com.zanclus.data.entities");
        factory.setDataSource(dataSource());
        factory.afterPropertiesSet();

        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(final EntityManagerFactory emf) {
        final JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(emf);
        return txManager;
    }
}

As you can see at the top of the class, we have some pretty standard Spring Boot annotations. You'll also see an @Slf4j annotation, which is part of the Lombok library and is designed to help reduce boilerplate code. We also have @Bean annotated methods for providing access to the JPA EntityManager, the TransactionManager, and the DataSource. Each of these methods provides an injectable object for the other classes to use. The remaining classes in the project are similarly simplistic: there is a Customer POJO which is the entity type used in the service, there is a CustomerDAO which is created via Spring Data, and finally there is a CustomerEndpoints class which is the JAX-RS annotated REST controller.

As explained earlier, this is all standard fare in a Spring Boot application. The problem with this application is that, for the most part, it has limited scalability. You would either run this application inside of a servlet container, or with an embedded server like Jetty or Undertow. Either way, each request ties up a thread and thus wastes resources while it waits for I/O operations.

Switching over to the Convert-To-Vert.x-Web branch, we can see that the Application class has changed a little. We now have some new @Bean annotated methods for injecting the Vertx instance itself, as well as an instance of ObjectMapper (part of the Jackson JSON library). We have also replaced the CustomerEndpoints class with a new CustomerVerticle. Pretty much everything else is the same.

The CustomerVerticle class is annotated with @Component, which means that Spring will instantiate that class on startup. It also has its start method annotated with @PostConstruct so that the verticle is launched on startup. Looking at the actual content of the code, we see our first bit of Vert.x code: Router.

The Router class is part of the vertx-web library and allows us to use a fluent API to define HTTP URLs, methods, and header filters for our request handling. Adding the BodyHandler instance to the default route allows a POST/PUT body to be processed and converted to a JSON object which Vert.x can then process as part of the RoutingContext. The order of routes in Vert.x CAN be significant. If you define a route which has some sort of glob matching (* or regex), it can swallow requests for routes defined after it unless you implement chaining. Our example shows 3 routes initially.


    @PostConstruct
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.get("/v1/customer/:id")
                .produces("application/json")
                .blockingHandler(this::getCustomerById);
        router.put("/v1/customer")
                .consumes("application/json")
                .produces("application/json")
                .blockingHandler(this::addCustomer);
        router.get("/v1/customer")
                .produces("application/json")
                .blockingHandler(this::getAllCustomers);
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }

Notice that the HTTP method is defined, the accepted "Content-Type" is defined (via consumes), and the produced "Content-Type" is defined (via produces). We also see that we are passing the handling of the request off via a call to the blockingHandler method. A blocking handler for a Vert.x route accepts a RoutingContext object as its only parameter. The RoutingContext holds the Vert.x request object, the response object, and any parameters/POST body data (like ":id"). You'll also see that I used method references rather than lambdas to insert the logic into the blockingHandler (I find it more readable). Each handler for the 3 request routes is defined in a separate method further down in the class. These methods basically just call the methods on the DAO, serialize or deserialize as needed, set some response headers, and end() the request by sending a response. Overall, pretty simple and straightforward.


    private void addCustomer(RoutingContext rc) {
        try {
            String body = rc.getBodyAsString();
            Customer customer = mapper.readValue(body, Customer.class);
            Customer saved = dao.save(customer);
            if (saved!=null) {
                rc.response().setStatusMessage("Accepted").setStatusCode(202).end(mapper.writeValueAsString(saved));
            } else {
                rc.response().setStatusMessage("Bad Request").setStatusCode(400).end("Bad Request");
            }
        } catch (IOException e) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", e);
        }
    }

    private void getCustomerById(RoutingContext rc) {
        log.info("Request for single customer");
        Long id = Long.parseLong(rc.request().getParam("id"));
        try {
            Customer customer = dao.findOne(id);
            if (customer==null) {
                rc.response().setStatusMessage("Not Found").setStatusCode(404).end("Not Found");
            } else {
                rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(dao.findOne(id)));
            }
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

    private void getAllCustomers(RoutingContext rc) {
        log.info("Request for all customers");
        List<Customer> customers = StreamSupport.stream(dao.findAll().spliterator(), false).collect(Collectors.toList());
        try {
            rc.response().setStatusMessage("OK").setStatusCode(200).end(mapper.writeValueAsString(customers));
        } catch (JsonProcessingException jpe) {
            rc.response().setStatusMessage("Server Error").setStatusCode(500).end("Server Error");
            log.error("Server error", jpe);
        }
    }

"But this is more code and messier than my Spring annotations and classes", you might say. That CAN be true, but it really depends on how you implement the code. This is meant to be an introductory example, so I kept the code very simple and easy to follow. I COULD have used an annotation library for Vert.x to implement the endpoints in a manner similar to JAX-RS. In addition, we have gained a massive scalability improvement. Under the hood, Vert.x Web uses Netty for low-level asynchronous I/O operations, thus providing us the ability to handle MANY more concurrent requests (limited by the size of the database connection pool).

We've already made some improvements to the scalability and concurrency of this application by using the Vert.x Web library, but we can improve things a little more by implementing the Vert.x event bus. By separating the database operations into worker verticles instead of using blockingHandler, we can handle request processing more efficiently. This is shown in the Convert-To-Worker-Verticles branch. The Application class has remained the same, but we have changed the CustomerVerticle class and added a new class called CustomerWorker. In addition, we added a new library called Spring Vert.x Extension, which provides Spring dependency injection support to Vert.x verticles. Start off by looking at the new CustomerVerticle class.


    @PostConstruct
    public void start() throws Exception {
        log.info("Successfully create CustomerVerticle");
        DeploymentOptions deployOpts = new DeploymentOptions().setWorker(true).setMultiThreaded(true).setInstances(4);
        vertx.deployVerticle("java-spring:com.zanclus.verticles.CustomerWorker", deployOpts, res -> {
            if (res.succeeded()) {
                Router router = Router.router(vertx);
                router.route().handler(BodyHandler.create());
                final DeliveryOptions opts = new DeliveryOptions()
                        .setSendTimeout(2000);
                router.get("/v1/customer/:id")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getCustomer")
                                    .addHeader("id", rc.request().getParam("id"));
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                router.put("/v1/customer")
                        .consumes("application/json")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "addCustomer");
                            vertx.eventBus().send("com.zanclus.customer", rc.getBodyAsJson(), opts, reply -> handleReply(reply, rc));
                        });
                router.get("/v1/customer")
                        .produces("application/json")
                        .handler(rc -> {
                            opts.addHeader("method", "getAllCustomers");
                            vertx.eventBus().send("com.zanclus.customer", null, opts, reply -> handleReply(reply, rc));
                        });
                vertx.createHttpServer().requestHandler(router::accept).listen(8080);
            } else {
                log.error("Failed to deploy worker verticles.", res.cause());
            }
        });
    }

The routes are the same, but the implementation code is not. Instead of using calls to blockingHandler, we have now implemented proper async handlers which send out events on the event bus. None of the database processing happens in this verticle anymore. We have moved the database processing to a worker verticle which has multiple instances to handle multiple requests in parallel in a thread-safe manner. We also register a callback for when those events are replied to, so that we can send the appropriate response to the client making the request. Now, in the CustomerWorker verticle we have implemented the database logic and error handling.

@Override
public void start() throws Exception {
    vertx.eventBus().consumer("com.zanclus.customer").handler(this::handleDatabaseRequest);
}

public void handleDatabaseRequest(Message<Object> msg) {
    String method = msg.headers().get("method");

    DeliveryOptions opts = new DeliveryOptions();
    try {
        String retVal;
        switch (method) {
            case "getAllCustomers":
                retVal = mapper.writeValueAsString(dao.findAll());
                msg.reply(retVal, opts);
                break;
            case "getCustomer":
                Long id = Long.parseLong(msg.headers().get("id"));
                retVal = mapper.writeValueAsString(dao.findOne(id));
                msg.reply(retVal);
                break;
            case "addCustomer":
                retVal = mapper.writeValueAsString(
                                    dao.save(
                                            mapper.readValue(
                                                    ((JsonObject)msg.body()).encode(), Customer.class)));
                msg.reply(retVal);
                break;
            default:
                log.error("Invalid method '" + method + "'");
                opts.addHeader("error", "Invalid method '" + method + "'");
                msg.fail(1, "Invalid method");
        }
    } catch (IOException | NullPointerException e) {
        log.error("Problem parsing JSON data.", e);
        msg.fail(2, e.getLocalizedMessage());
    }
}

The CustomerWorker worker verticles register a consumer for messages on the event bus. The string which represents the address on the event bus is arbitrary, but it is recommended to use a reverse-TLD style naming structure so that it is simple to ensure that the addresses are unique ("com.zanclus.customer"). Whenever a new message is sent to that address, it will be delivered to one, and only one, of the worker verticles. The worker verticle then calls handleDatabaseRequest to do the database work, JSON serialization, and error handling.

There you have it. You've seen that Vert.x can be integrated into your legacy applications to improve concurrency and efficiency without having to rewrite the entire application. We could have done something similar with an existing Google Guice or JavaEE CDI application. All of the business logic could remain relatively untouched while we tie in Vert.x to add reactive capabilities. The next steps are up to you. Some ideas for where to go next include clustering, WebSockets, and VertxRx for ReactiveX sugar.

The Java Ecosystem – My top 5 highlights of 2014

1. February the 1st – RedMonk Analyst firm declares that Java is more popular & diverse than ever!

The Java ecosystem started off with a hiss and a roar in 2014 with the annual meeting of the Free Java room at FOSDEM. As well as the many fine deep technical talks on OpenJDK and related topics, there was also a surprise presentation on the industry from Steve O'Grady (RedMonk analyst). Steve gave a data-led insight into where Java ranked in terms of both popularity and scope at the start of 2014. The analysis of where Java is as a language is repeated on RedMonk's blog. The fact that it remains a top two language didn't surprise anyone, but it was the other angle that really surprised even those of us heavily involved in the ecosystem. Steve's talk clearly showed that Java is aggressively diverse, appearing in industries such as social media, messaging, gaming, mobile, virtualisation, build systems and many more, not just the enterprise apps that people most commonly think about. Steve also showed that Java is being used heavily in new projects (across all of those industry sectors), which certainly killed the myth of Java being a legacy enterprise platform.

2. March the 18th – Java 8 arrives

The arrival of Java 8 ushered in a new functional/OO hybrid direction for the language, giving it a new lease of life. The adoption rates have been incredible (see Typesafe's full report on this); it was clearly the release that Java developers were waiting for.

Some extra thoughts around the highlights of this release:

  • Lambdas (JSR 335) – There has been so much written about this topic already, with a ton of fantastic books and tutorials to boot. For me the clear benefit for most Java developers was that they're finally able to express the correct intent of behaviour with collections without all of the unnecessary boilerplate that imperative/OO constructs forced upon them. It boils down to the old joke that there are only two hard problems in computer science: cache invalidation, naming things, and off-by-one errors. The new Streams API for collections, in conjunction with lambdas, certainly helps with the last two!
  • Project Nashorn (JSR 223, JEP 174) – The JavaScript runtime which allows developers to embed JavaScript code within their Java applications. Although I personally won't be using this anytime soon, it was yet another boost to the JVM in terms of first-class support for dynamically typed languages. I'm looking forward to this trend continuing!
  • Date and Time API (JSR 310, JEP 150) – This is the sort of bread-and-butter API that a blue collar language like Java just needs to get right, and this time (take 3) they did! It's been great to finally be able to work with timezones correctly, and it also set a new precedent of "immutable first" as a conscious design decision for new APIs in Java (see the small snippet after this list).
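For instance, a small taste of the java.time API's immutable, timezone-aware style (standard JDK 8, illustrative values):

import java.time.LocalDateTime;
import java.time.Month;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimeDemo {
    public static void main(String[] args) {
        ZonedDateTime meeting = ZonedDateTime.of(
                LocalDateTime.of(2014, Month.DECEMBER, 24, 18, 30),
                ZoneId.of("Europe/London"));

        // Every "mutation" returns a new immutable instance:
        ZonedDateTime inNewYork = meeting.withZoneSameInstant(ZoneId.of("America/New_York"));

        System.out.println(meeting);   // 2014-12-24T18:30Z[Europe/London]
        System.out.println(inNewYork); // 2014-12-24T13:30-05:00[America/New_York]
    }
}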

3. ~July – ARM 64 port (AArch64)

Red Hat have led the effort to get the ARMv8 64-bit architecture supported in Java. This is clearly an important step to keep Java truly "run anywhere", and alongside SAP's PowerPC/AIX port it is one of two major ports that are primarily maintained by non-Oracle participants in OpenJDK. If you'd like to get involved, see the project page for more details.

Java still has a way to go before becoming a major player in the embedded space, but the signs in 2014 were encouraging with Java SE Embedded featuring regularly on the Raspberry Pi and Java ME Embedded getting a much needed feature parity boost with Java SE APIs.

4. Sept/Oct – A Resurgence in the JCP & its 15th Anniversary

The Java Community Process (JCP) is the standards body that defines what goes into Java SE, Java EE and Java ME. It re-invented itself as a much more open, community-based organisation in 2013 and continued that good work in 2014, reversing the dropping membership trend. Most importantly, it now truly represents the incredible diversity that the Java ecosystem has. Looking at the make-up of the current Executive Committee, you can see institutions like Java User Groups sitting alongside industry and end-user heavyweights such as IBM, Twitter and Goldman Sachs.

5. Community Collaboration at an all-time high & Microsoft joins OpenJDK

The number of new joiners to OpenJDK (see Mani's excellent post on this) was higher than ever. OpenJDK now represents a huge melting pot of major tech companies such as Red Hat, IBM, Oracle, Twitter and, of course, the shock entry this year: Microsoft.

The Adopt a JSR and Adopt OpenJDK programmes continue to get more day-to-day developers involved in guiding the future of various APIs, with regular workshops now being organised around the world to test new APIs and ideas out early and feed that back into OpenJDK and the Java EE specifications in particular.

Community conferences and Java User Groups continue to rise in number, with JavaOne in particular having its strongest year in recent memory. It was also heartening to see a large number of community efforts helping kids learn to code, with after-school and weekend programmes such as Devoxx for Kids.

What for 2015?

I expect 2015 to be a little bit quieter in terms of changes to the core language or exciting new features for Java EE or Java ME, as their next major releases aren't due until 2016. On the community front, I expect to see Java developers having to firmly embrace web/UI technologies such as AngularJS, more systems/DevOps toolchains such as Docker, AWS, Puppet, etc., and of course migrate to Java 8 and all of the functional goodness it now brings! The community, I'm sure, will continue to thrive, and the looming spectre of IoT will start to come into the mainstream as well. Java developers will likely have to wait until Java 9 to get a truly first-class platform for embedded, but early adopters will want to begin taking a look at early builds throughout 2015.

Java/JVM applications now tend to be complex, with many moving parts and distributed deployments. It can often take poor frustrated developers weeks to fix issues in production. To combat this there is a new wave of interesting analysis tools dealing with Java/JVM based applications and deployments. Oracle's Mission Control is a powerful tool that can give lots of interesting insights into the JVM, while tools like XRebel from ZeroTurnaround and jClarity's Censum and Illuminate take the next step of applying machine-learned analysis to the raw numbers.

One final important note: Project Jigsaw is the modularisation story for Java 9 that will massively impact tool vendors and day-to-day developers alike. The community at large needs your help in testing early builds of Java 9 and in helping OpenJDK developers and tool vendors ensure that IDEs, build tools and applications are ready for this important change. You can join us in the Adoption Group at OpenJDK: http://adoptopenjdk.java.net

I hope everyone has a great holiday break – I look forward to seeing the Twitter feeds and the GitHub commits flying around in 2015 :-).
Cheers,
Martijn (CEO – jClarity, Java Champion & Diabolical Developer)

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

How jOOQ Allows for Fluent Functional-Relational Interactions in Java 8

In this year’s Java Advent Calendar, we’re thrilled to have been asked to feature a mini-series showing you a couple of advanced and very interesting topics that we’ve been working on when developing jOOQ.

The series consists of:

Don’t miss any of these!

How jOOQ allows for fluent functional-relational interactions in Java 8

In yesterday's article, we saw How jOOQ Leverages Generic Type Safety in its DSL when constructing SQL statements. Much more interesting than constructing SQL statements, however, is executing them.

Yesterday, we saw a sample PL/SQL block that reads like this:

BEGIN
    FOR rec IN (
        SELECT first_name, last_name FROM customers
        UNION
        SELECT first_name, last_name FROM staff
    )
    LOOP
        INSERT INTO people (first_name, last_name)
        VALUES (rec.first_name, rec.last_name);
    END LOOP;
END;

And you won’t be surprised to see that the exact same thing can be written in Java with jOOQ:

for (Record2<String, String> rec :
    dsl.select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
       .union(
        select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF))
) {
    dsl.insertInto(PEOPLE, PEOPLE.FIRST_NAME, PEOPLE.LAST_NAME)
       .values(rec.getValue(CUSTOMERS.FIRST_NAME), rec.getValue(CUSTOMERS.LAST_NAME))
       .execute();
}

This is a classic, imperative-style, PL/SQL-inspired approach to iterating over result sets and performing actions on them, record by record.

Java 8 changes everything!

With Java 8, lambdas appeared, and much more importantly, Streams did, along with tons of other useful features. The simplest way to migrate the above foreach loop to Java 8's "callback hell" would be the following:

dsl.select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
   .union(
    select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF))
   .forEach(rec -> {
       dsl.insertInto(PEOPLE, PEOPLE.FIRST_NAME, PEOPLE.LAST_NAME)
          .values(rec.getValue(CUSTOMERS.FIRST_NAME), rec.getValue(CUSTOMERS.LAST_NAME))
          .execute();
   });

This is still very simple. How about this: let's fetch a couple of records from the database, stream them, map them using some sophisticated Java function, and reduce them into a batch update statement! Whew… here's the code:

dsl.selectFrom(BOOK)
   .where(BOOK.ID.in(2, 3))
   .orderBy(BOOK.ID)
   .fetch()
   .stream()
   .map(book -> book.setTitle(book.getTitle().toUpperCase()))
   .reduce(
       dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null))),
       (batch, book) -> batch.bind(book.getTitle(), book.getId()),
       (b1, b2) -> b1
   )
   .execute();

Awesome, right? Again, with comments

// Here, we simply select a couple of books from the database
dsl.selectFrom(BOOK)
   .where(BOOK.ID.in(2, 3))
   .orderBy(BOOK.ID)
   .fetch()

// Now, we stream the result as a Java 8 Stream
   .stream()

// Now we map all book titles using the "sophisticated" Java function
   .map(book -> book.setTitle(book.getTitle().toUpperCase()))

// Now, we reduce the books into a batch update statement...
   .reduce(

       // ... which is initialised with empty bind variables
       dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null))),

       // ... and then we bind each book's values to the batch statement
       (batch, book) -> batch.bind(book.getTitle(), book.getId()),

       // ... this is just a dummy combiner function, because we only operate on one batch instance
       (b1, b2) -> b1
   )

// Finally, we execute the produced batch statement
   .execute();

Awesome, right? Well, if you’re not too functional-ish, you can still resort to the “old ways” using imperative-style loops. Perhaps, your coworkers might prefer that:

BatchBindStep batch = dsl.batch(update(BOOK).set(BOOK.TITLE, (String) null).where(BOOK.ID.eq((Integer) null)));

for (BookRecord book :
    dsl.selectFrom(BOOK)
       .where(BOOK.ID.in(2, 3))
       .orderBy(BOOK.ID)
) {
    batch.bind(book.getTitle(), book.getId());
}

batch.execute();

So, what’s the point of using Java 8 with jOOQ?

Java 8 might change a lot of things. Mainly, it changes the way we reason about functional data transformation algorithms. Some of the above ideas might’ve been a bit over the top. But the principal idea is that whatever is your source of data, if you think about that data in terms of Java 8 Streams, you can very easily transform (map) those streams into other types of streams as we did with the books. And nothing keeps you from collecting books that contain changes into batch update statements for batch execution.

Another example is one where we claimed that Java 8 also changes the way we perceive ORMs. ORMs are very stateful, object-oriented things that help manage database state in an object-graph representation, with lots of nice features like optimistic locking, dirty checking, and implementations that support long conversations. But they're quite terrible at data transformation. First off, they're much, much inferior to SQL in terms of data transformation capabilities. This is topped by the fact that object graphs and functional programming don't really work well together either.

With SQL (and thus with jOOQ), you’ll often stay on a flat tuple level. Tuples are extremely easy to transform. The following example shows how you can query an H2 database’s INFORMATION_SCHEMA for meta information such as table names, column names, and data types, collect that information into a data structure, and then map that data structure into new CREATE TABLE statements:

DSL.using(c)
   .select(
       COLUMNS.TABLE_NAME,
       COLUMNS.COLUMN_NAME,
       COLUMNS.TYPE_NAME
   )
   .from(COLUMNS)
   .orderBy(
       COLUMNS.TABLE_CATALOG,
       COLUMNS.TABLE_SCHEMA,
       COLUMNS.TABLE_NAME,
       COLUMNS.ORDINAL_POSITION
   )
   .fetch()  // jOOQ ends here
   .stream() // Streams start here
   .collect(groupingBy(
       r -> r.getValue(COLUMNS.TABLE_NAME),
       LinkedHashMap::new,
       mapping(
           r -> r,
           toList()
       )
   ))
   .forEach(
       (table, columns) -> {
           // Just emit a CREATE TABLE statement
           System.out.println(
               "CREATE TABLE " + table + " (");

           // Map each column record into a String
           // containing the column specification,
           // and join them using comma and
           // newline. Done!
           System.out.println(
               columns.stream()
                      .map(col -> "  " + col.getValue(COLUMNS.COLUMN_NAME) +
                                  " " + col.getValue(COLUMNS.TYPE_NAME))
                      .collect(Collectors.joining(",\n"))
           );

           System.out.println(");");
       }
   );

The above statement will produce something like the following SQL script:

CREATE TABLE CATALOGS (
  CATALOG_NAME VARCHAR
);
CREATE TABLE COLLATIONS (
  NAME VARCHAR,
  KEY VARCHAR
);
CREATE TABLE COLUMNS (
  TABLE_CATALOG VARCHAR,
  TABLE_SCHEMA VARCHAR,
  TABLE_NAME VARCHAR,
  COLUMN_NAME VARCHAR,
  ORDINAL_POSITION INTEGER,
  COLUMN_DEFAULT VARCHAR,
  IS_NULLABLE VARCHAR,
  DATA_TYPE INTEGER,
  CHARACTER_MAXIMUM_LENGTH INTEGER,
  CHARACTER_OCTET_LENGTH INTEGER,
  NUMERIC_PRECISION INTEGER,
  NUMERIC_PRECISION_RADIX INTEGER,
  NUMERIC_SCALE INTEGER,
  CHARACTER_SET_NAME VARCHAR,
  COLLATION_NAME VARCHAR,
  TYPE_NAME VARCHAR,
  NULLABLE INTEGER,
  IS_COMPUTED BOOLEAN,
  SELECTIVITY INTEGER,
  CHECK_CONSTRAINT VARCHAR,
  SEQUENCE_NAME VARCHAR,
  REMARKS VARCHAR,
  SOURCE_DATA_TYPE SMALLINT
);

That’s data transformation! If you’re as excited as we are, read on in this article to see exactly how this example works.

Conclusion

Java 8 has changed everything in the Java ecosystem. Finally, we can easily implement functional, transformative algorithms using Streams and lambda expressions. SQL, too, is a very functional and transformative language. With jOOQ and Java 8, you can extend data transformation directly from your type safe SQL results into Java data structures and back into SQL. These things aren’t possible with JDBC. These things weren’t possible prior to Java 8.

jOOQ is free and Open Source for use with Open Source databases, and it offers commercial licensing for use with commercial databases.

For more information about jOOQ or jOOQ’s DSL API, consider these resources:

Stay tuned for tomorrow’s article “How jOOQ helps pretend that your stored procedures are a part of Java”
This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

How jOOQ Leverages Generic Type Safety in its DSL

In this year’s Java Advent Calendar, we’re thrilled to have been asked to feature a mini-series showing you a couple of advanced and very interesting topics that we’ve been working on while developing jOOQ.

The series consists of:

Don’t miss any of these!

How jOOQ leverages generic type safety in its DSL

Few Java developers are aware of this, but SQL is a very type safe language. In the Java ecosystem, if you’re using JDBC, you’re operating on dynamically constructed SQL strings, which are sent to the server for execution – or failure. Some IDEs have started introspecting parts of your static SQL, but often you’re concatenating predicates to form a very dynamic query:

String sql = "SELECT a, b, c FROM table WHERE 1 = 1";

if (someCondition)
sql += " AND id = 3";

if (someOtherCondition)
sql += " AND value = 42";

These concatenations quickly turn nasty and are one of the reasons why Java developers don’t really like SQL.

SQL as written via JDBC. Image (c) by Greg Grossmeier. License CC-BY-SA 2.0
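By contrast, jOOQ lets you compose the same dynamic query from typed Condition objects instead of strings. Here’s a minimal sketch, assuming hypothetical generated meta-model references TABLE, TABLE.ID, TABLE.VALUE, and columns A, B, C:

// Start from a neutral "true" predicate and compose further conditions onto it
Condition condition = DSL.trueCondition();

if (someCondition)
    condition = condition.and(TABLE.ID.eq(3));

if (someOtherCondition)
    condition = condition.and(TABLE.VALUE.eq(42));

// The query is assembled from typed parts, not concatenated strings
dsl.select(TABLE.A, TABLE.B, TABLE.C)
   .from(TABLE)
   .where(condition)
   .fetch();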

But interestingly, PL/SQL or T-SQL developers never complain about SQL in this way. In fact, they feel quite the opposite. Look at how SQL is nicely embedded in a typical PL/SQL block:

BEGIN

    -- The record type of "rec" is inferred by the compiler
    FOR rec IN (

        -- This compiles only when I have matching
        -- degrees and types of both UNION subselects!
        SELECT first_name, last_name FROM customers
        UNION
        SELECT first_name, last_name FROM staff
    )
    LOOP

        -- This compiles only if rec really has
        -- first_name and last_name columns
        INSERT INTO people (first_name, last_name)

        -- Obviously, VALUES must match the above target table
        VALUES (rec.first_name, rec.last_name);
    END LOOP;
END;

Now, we can most certainly discuss syntax. Whether you like SQL’s COBOLesque syntax or not is a matter of taste and a matter of habit, too. But one thing is clear, SQL is absolutely type safe, and most sane people would consider that a very good thing. Read The Inconvenient Truth About Dynamic vs. Static Typing for more details.

The same can be achieved in Java!

JDBC’s lack of type safety is a brilliant feature for the low-level API that JDBC is. At some point, we need an API that can simply send SQL strings over the wire without knowing anything about the wire protocol, and retrieve back cursors of arbitrary / unknown type. However, if we don’t execute our SQL directly via JDBC, but maintain a type safe SQL AST (Abstract Syntax Tree) prior to query execution, then we might actually anticipate the returned type of our statements.

jOOQ’s DSL API (Domain-specific language) works exactly like that. When you create SQL statements with jOOQ, you’re implicitly creating an AST both for your Java compiler, but also for your runtime environment. Here’s how that works:

DSL.using(configuration)
   .select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
   .union(
        select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF))
   .fetch();

If we look closely at what the above query really does, we’ll see that we’re calling one of several overloaded select() methods on jOOQ’s DSLContext class, namely DSLContext.select(Field, Field), the one that takes two argument columns.

The whole API looks like this, and we’ll see immediately after why this is so useful:

<T1> SelectSelectStep<Record1<T1>>
    select(Field<T1> field1);

<T1, T2> SelectSelectStep<Record2<T1, T2>>
    select(Field<T1> field1, Field<T2> field2);

<T1, T2, T3> SelectSelectStep<Record3<T1, T2, T3>>
    select(Field<T1> field1, Field<T2> field2, Field<T3> field3);

// and so on...

So, by explicitly passing two columns to the select() method, you have chosen the second one of the above methods that returns a DSL type that is parameterised with Record2, or more specifically, with Record2<String, String>. Yes, the String parameter bindings are inferred from the very columns that we passed to the select() call, because jOOQ’s code generator reverse-engineers your database schema and generates those classes for you.

The generated Customers class really looks like this (simplified):

// All table references are listed here:
class Tables {
    Customers CUSTOMERS = new Customers();
    Staff STAFF = new Staff();
}

// All tables have an individual class each, with columns inside:
class Customers {
    final Field<String> FIRST_NAME = ...
    final Field<String> LAST_NAME = ...
}

As you can see, all type information is already available to you, automatically, as you have defined those types only once in the database. No need to define them again in Java.
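Here’s a short sketch (our own) of what that inferred type information buys you when fetching:

// The Result type carries the full generic type of the projection
Result<Record2<String, String>> result =
DSL.using(configuration)
   .select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME)
   .from(CUSTOMERS)
   .fetch();

// value1() and value2() are statically typed as String - no casts,
// no column-index guessing as with JDBC's ResultSet
for (Record2<String, String> rec : result) {
    String firstName = rec.value1();
    String lastName = rec.value2();
}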

Generic type information is ubiquitous

The interesting part is the UNION. The union() method on the DSL API simply looks like this:

public interface SelectUnionStep<R extends Record> {
    SelectOrderByStep<R> union(Select<? extends R> select);
}

If we go back to our statement, we can see that the type of the object upon which we call union() is really this type:

SelectUnionStep<Record2<String, String>>

… thus, the method union() that we’re calling is really expecting an argument of this type:

union(Select<? extends Record2<String, String>> select);

… which essentially means that we’ll get a compilation error if we don’t provide two string columns also in the second subselect:

DSL.using(configuration)
   .select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
   .union(
   // ^^^^^ doesn't compile, wrong argument type!
        select(STAFF.FIRST_NAME).from(STAFF))
   .fetch();

or also:

DSL.using(configuration)
   .select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS)
   .union(
   // ^^^^^ doesn't compile, wrong argument type!
        select(STAFF.FIRST_NAME, STAFF.DATE_OF_BIRTH).from(STAFF))
   .fetch();

Static type checking helps find bugs early

… indeed! All of the above bugs can be found at compile-time, because your Java compiler will not accept the wrong SQL statements. When writing dynamic SQL, this can be incredibly subtle, as the different UNION subselects may not all be created at the same place. You may have a complex DAO that generates the SQL across several methods. With this kind of generic type safety, you can continue to do so, safely.
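For example, here’s a minimal sketch of such a DAO (method names are hypothetical). Each subselect is built in its own method, and the compiler still verifies that the two can be unioned:

// Both methods could live in different classes or modules
Select<Record2<String, String>> customerNames() {
    return select(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).from(CUSTOMERS);
}

Select<Record2<String, String>> staffNames() {
    return select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF);
}

// This compiles only because both subselects have the same degree and
// column types; any mismatch would be caught right here
Result<Record2<String, String>> result =
DSL.using(configuration)
   .fetch(customerNames().union(staffNames()));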

As mentioned before, this extends through the whole API. Check out…

IN predicates

This compiles:

// Get all customers whose first name corresponds to a staff first name
DSL.using(configuration)
   .select().from(CUSTOMERS)
   .where(CUSTOMERS.FIRST_NAME.in(
        select(STAFF.FIRST_NAME).from(STAFF)
   ))
   .fetch();

This doesn’t compile:

DSL.using(configuration)
   .select().from(CUSTOMERS)
   .where(CUSTOMERS.FIRST_NAME.in(
   // ^^ wrong argument type!
        select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF)
   ))
   .fetch();

But this compiles:

// Get all customers whose first and last names both correspond
// to a staff member's first and last names
DSL.using(configuration)
   .select().from(CUSTOMERS)
   .where(row(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).in(
        select(STAFF.FIRST_NAME, STAFF.LAST_NAME).from(STAFF)
   ))
   .fetch();

Notice the use of row() to construct a row value expression, an extremely useful but little-known SQL feature.
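Row value expressions get the same treatment: row() produces a typed Row2, whose comparison methods check both degree and types. A quick sketch of our own:

// Compiles: row degree (2) and types (String, String) match
Condition ok = row(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME)
    .eq("John", "Doe");

// Doesn't compile: only one value for a degree-2 row
// row(CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME).eq("John");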

INSERT statements

This compiles:

DSL.using(configuration)
   .insertInto(CUSTOMERS, CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME)
   .values("John", "Doe")
   .execute();

This doesn’t compile:

DSL.using(configuration)
   .insertInto(CUSTOMERS, CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME)
   .values("John")
   // ^^^^^^ Invalid number of arguments
   .execute();
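And it’s not just the number of arguments: the argument types are checked as well. A hedged sketch:

DSL.using(configuration)
   .insertInto(CUSTOMERS, CUSTOMERS.FIRST_NAME, CUSTOMERS.LAST_NAME)
   .values("John", 42)
   // ^^ doesn't compile either: 42 is not a String
   .execute();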

Conclusion

Internal domain-specific languages can express a lot of type safety in Java, almost as much as the external language itself implements. In the case of SQL, a very type safe language, this is particularly true and interesting.

jOOQ has been designed to create as little cognitive friction as possible for any Java developer who wants to write embedded SQL in Java, i.e. the Java code will look and feel exactly like the SQL code that it represents. At the same time, jOOQ has been designed to offer as much compile-time type safety as possible in the Java language (or also in Scala, Groovy, etc.).

jOOQ is free and Open Source for use with Open Source databases, and it offers commercial licensing for use with commercial databases.

For more information about jOOQ or jOOQ’s DSL API, consider these resources:

Stay tuned for tomorrow’s article “How jOOQ allows for fluent functional-relational interactions in Java 8”
This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!