JVM Advent

5 cool applications you can build with Java and GraalVM

There are many cool apps you can build with Java, and GraalVM can make them even better: faster, smaller, more secure. In recent years, GraalVM and Native Image have gained significant traction in the Java ecosystem, so building applications with them is now easier than ever.

There are use cases for which GraalVM is well known and widely adopted, such as microservices, cloud deployments, and CLI tools, but you can do so much more. Read this article to find out what exactly!

What is GraalVM and why GraalVM?

GraalVM is many things; here is your 1-minute refresher about GraalVM, just so we are on the same page.

GraalVM is a JDK, like the other JDKs you might know, but with unique features and capabilities. One such capability is ahead-of-time compilation of applications with Native Image.

Native Image can compile your application into a native executable with the following advantages:

- instant startup, with no warmup needed
- lower memory and CPU usage
- smaller packaging and deployment sizes
- an additional layer of security

Now, let’s see how we can build applications with it!

A blazing fast web server 🚀

If there’s one thing you’ve heard about GraalVM, it’s probably that it gives Java applications instant startup. Native Image lets you move all the overhead work of loading, analyzing, profiling, and compiling your code to build time so your applications start instantaneously.

As an example, let’s look at a basic Micronaut web app. After compiling the app with mvn package -Dpackaging=native-image, we get the following:

➜ native-micronaut-web ./target/webserver 
__ __ _ _ 
| \/ (_) ___ _ __ ___ _ __ __ _ _ _| |_ 
| |\/| | |/ __| '__/ _ \| '_ \ / _` | | | | __|
| | | | | (__| | | (_) | | | | (_| | |_| | |_ 
|_| |_|_|\___|_| \___/|_| |_|\__,_|\__,_|\__|
10:42:38.556 [main] INFO io.micronaut.runtime.Micronaut - Startup completed in 33ms. Server Running: http://localhost:8080

 

On my rather average Linux instance that I use for demo purposes, the application starts in 33 ms. On a more powerful machine, this time can go even below 20 ms!🔥 Not only is this a functional Java web server, it’s framework-based, meaning that you can easily extend it with various Micronaut modules and dependencies (they have an excellent Testcontainers integration)!

Not only is our application fast, it’s also very efficient in terms of memory usage. Even under a load of 500,000 requests sent via hey, max RSS stays at around 150 MB with no tuning or profiling, and it can be cut down further by limiting the maximum heap size with -Xmx.
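For orientation, the endpoint behind numbers like these can be tiny. As a dependency-free stand-in for the Micronaut controller (this is not Micronaut’s API; the class and path names here are made up), here is the same idea using only the JDK’s built-in HTTP server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A single GET endpoint served by the JDK's built-in HTTP server.
public class HelloServer {
    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello from Java!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080); // then try: curl localhost:8080/hello
    }
}
```

A framework like Micronaut adds routing, dependency injection, and modules on top, but the compiled unit stays just as amenable to Native Image’s closed-world analysis.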

CLI app 👾

What qualities would you like to see in CLI apps? Small, fast, responsive. Sounds familiar, right? Those are exactly the benefits that Native Image can give you, so it’s not surprising that in Thomas Vitale’s recent poll about CLI tools, GraalVM was one of the favorites 🙂

Thanks everyone for sharing your experience building CLI applications with #Java ☕️ keep sharing 🙏🏻📚 Picocli, Spring Shell, or Jcommander for building CLIs🐇 @graalvm.org for native executables🚀 Nobody mentioned it yet, but I find @jreleaser.org key to manage releases of your CLI tools

Thomas Vitale ☀️ (@thomasvitale.com) 2024-11-28T15:11:43.413Z

When talking about CLI apps and GraalVM, I want to start with an honorable mention of picocli, one of the community’s favorites for building powerful, user-friendly, GraalVM-ready command-line apps. It’s a great tool with many rich features: give it a try!

Additionally, many frameworks offer their own CLI modules. As an example, let’s look at Spring Shell, which is intended to help Spring users easily build custom CLI tools and comes with several built-in commands.

Let’s take Spring Shell for a spin by implementing a cute version of ls. Our application is rather straightforward: most of its logic lives in the ls method, which walks the current directory and lists files and directories in different formats. Here it is running as a native executable:

➜ native-spring-shell git:(main) ✗ ./target/native-spring-shell 

⢠⣤⣀⠀⠀⠀⠀⠀⠀⠀⢰⢦⡀⠀⢰⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠙⣎⠙⠲⢤⣀⠀⠀⠀⡌⠈⣗⣄⡌⠘⣷⣀⣠⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠈⢦⡀⠀⠈⠙⠲⢄⣣⡖⣯⡷⣧⠀⢸⢿⣦⡀⠈⠳⣦⡀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠳⡄⠀⠀⠀⠈⣿⣼⣯⣄⣿⣦⣸⢸⣏⠻⣤⣀⣈⣷⡄⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠈⠳⢤⠤⠴⠻⣯⠈⠋⠙⠃⠀⠈⣿⠻⣾⡿⠛⠁⠉⣦⡀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⡰⠋⠀⠀⠀⠀⣠⣴⣄⠀⠀⠀⠈⠷⣤⡛⠛⢿⣿⡟⢹⡄⠀⠀
⠀⠀⠀⠀⢀⡜⠁⠀⠀⠀⠀⠀⢻⠿⠿⠀⠀⠀⠀⠀⠈⢻⡙⢹⡏⠿⣆⣹⠀⠀
⠀⠀⠀⢀⡞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⠾⢷⡞⠁⣀⠙⢿⠂⠀
⠀⠀⢀⢾⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢷⡀⣿⠀⢸⡄⠀
⠀⠀⡞⠘⢧⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⡾⣷⣾⠇⠀
⠀⢸⠈⠿⠀⠙⠲⣄⠀⠀⢀⣀⣀⣀⣀⣠⠀⠀⠀⠀⠀⠀⠀⠀⣸⠇⢀⣼⠀⠀
⠀⠀⠳⣄⠀⠀⠀⣘⠷⠊⠉⠀⠀⠀⠀⠀⢃⠀⠀⠀⠀⠀⢴⣶⠿⠖⠛⣿⡄⠀
⠀⠀⠀⠀⠉⠛⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠘⣄⠀⢀⣠⠖⠋⠀⠀⠀⠀⠀⠉⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

2024-12-04T15:52:29.523Z INFO 44103 --- [native-spring-shell] [ main] c.e.demo.NativeSpringShellApplication : Starting AOT-processed NativeSpringShellApplication using Java 23 with PID 44103 (/home/opc/demo-central/native-spring-shell/target/native-spring-shell started by opc in /home/opc/demo-central/native-spring-shell)
2024-12-04T15:52:29.523Z INFO 44103 --- [native-spring-shell] [ main] c.e.demo.NativeSpringShellApplication : No active profile set, falling back to 1 default profile: "default"
2024-12-04T15:52:29.557Z INFO 44103 --- [native-spring-shell] [ main] c.e.demo.NativeSpringShellApplication : Started NativeSpringShellApplication in 0.042 seconds (process running for 0.045)
shell:>ls
🤖.git/
🦄.gitignore
🤖.mvn/
🦄LICENSE
🦄README.md
🦄mvnw
🦄mvnw.cmd
🦄pom.xml
🤖src/
🤖target/

shell:>help
AVAILABLE COMMANDS

Built-In Commands
help: Display help about available commands
stacktrace: Display the full stacktrace of the last error.
clear: Clear the shell screen.
quit, exit: Exit the shell.
history: Display or save the history of previously run commands
version: Show version info
script: Read and execute commands from a file.

Cute LS
ls: Lists files in the specified directory, or in the current directory as default.



shell:>history
[ls, help, history]
shell:>

help reveals available commands — ls implemented by us, and several convenient commands courtesy of Spring Shell.
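The ls logic itself needs very little code. Here is a hypothetical plain-Java sketch of what such a method might look like outside of Spring Shell (the class name and formatting are my own, not the demo’s actual code): directories get a 🤖, files get a 🦄, sorted by name.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Stand-alone sketch of the "cute ls" logic.
public class CuteLs {
    static List<String> list(Path dir) throws IOException {
        try (Stream<Path> entries = Files.list(dir)) {
            return entries
                .sorted(Comparator.comparing(p -> p.getFileName().toString()))
                .map(p -> Files.isDirectory(p)
                        ? "🤖" + p.getFileName() + "/"
                        : "🦄" + p.getFileName())
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        list(Path.of(".")).forEach(System.out::println);
    }
}
```

In the Spring Shell version, a method like this would simply be annotated as a shell command so it shows up next to the built-in ones.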

A look at our pom.xml reveals another interesting detail: the -Os configuration flag, which stands for “optimize for size”. This new flag, introduced in GraalVM for JDK 23, optimizes native executables for size, which can be particularly interesting for CLIs and other apps meant for distribution. It’s quite impactful: it can help you decrease the executable size by about 30%.
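With the Native Build Tools Maven plugin, such flags are typically passed as build arguments; a sketch (adjust to your project’s actual plugin setup):

```xml
<plugin>
    <groupId>org.graalvm.buildtools</groupId>
    <artifactId>native-maven-plugin</artifactId>
    <configuration>
        <buildArgs>
            <buildArg>-Os</buildArg>
        </buildArgs>
    </configuration>
</plugin>
```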

By the way, did you know that in Spring Boot you can have your own custom banner printed upon startup just by placing it as banner.txt under src/main/resources?😍

LLM Inference Engine 🤯

You can build a complete LLM inference engine in Java and GraalVM! No C, no Python, no dependencies, no calls to cloud-based LLMs — a complete inference engine, which along with a model gives you a full-blown local LLM assistant. How cool is that!

This is one of my favorite projects this year, built by my brilliant colleague Alfonso² Peterssen — llama3.java. It contains all the inference components, such as the tokenizer, GGUF file parser, sampler, and more, in one fairly short (2K LOC) Java file. There are no external dependencies — all the inference logic is implemented in Java, leveraging the latest APIs, such as the Foreign Function and Memory API for interoperability between Java code and native code, and the Vector API for fast vector computations. I believe this project is also a great showcase of how powerful and versatile the Java platform is!
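To get a feel for what one of those inference components involves, here is a hypothetical plain-Java sketch of a temperature-based sampler (not llama3.java’s actual code): softmax over the model’s output logits, then sample a token index from the resulting distribution.

```java
import java.util.Random;

// Temperature sampling: higher temperature flattens the distribution,
// lower temperature concentrates probability on the top token.
public class Sampler {
    static double[] softmax(double[] logits, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double[] probs = new double[logits.length];
        double sum = 0;
        for (int i = 0; i < logits.length; i++) {
            // Subtracting max keeps exp() numerically stable.
            probs[i] = Math.exp((logits[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    static int sample(double[] probs, Random rng) {
        double r = rng.nextDouble(), acc = 0;
        for (int i = 0; i < probs.length; i++) {
            acc += probs[i];
            if (r < acc) return i;
        }
        return probs.length - 1;
    }
}
```

The real project pairs logic like this with the Vector API for the heavy matrix math and the FFM API for memory-mapping model weights.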

Now, what’s cool is that GraalVM makes this project even better!🚀 Even though our Vector API implementation is still a work in progress, it’s already blazing fast: on average, applications using the Vector API are ~10% faster on the Oracle GraalVM JIT!🔥 Better yet, our LLM inference application is fully Native Image-compatible out of the box!

You can give it a try yourself by following the link above. Here’s what I get upon running the compiled application:

➜  llama3.java git:(main) ✗ ./llama3 --model Llama-3.2-1B-Instruct-Q8_0.gguf --chat
Parse Llama-3.2-1B-Instruct-Q8_0.gguf: 1179 millis
Load LlaMa model: 1397 millis
> hello
Hello! How can I assist you today?
34.49 tokens/s (21)

> what can you do?
I can do a wide range of things! Here are some examples:
**Conversations**
* Answer questions on various topics, from science and history to entertainment and culture
* Provide definitions and explanations for words and phrases
* Engage in simple conversations and chat about your day
* Discuss current events and news

**Writing and Language**
* Generate text on a given topic or topic ideas
* Help with writing tasks, such as proofreading and editing
* Provide grammar and spelling suggestions

The speed of the engine is expressed in tokens/s, and what we see here, 34.49 tokens/s, again on a rather mid-range machine with no GPU, is really impressive. For reference, this is way above normal human reading speed, and on par with, or even slightly faster than, llama.cpp 🔥

Now here comes a truly fascinating part. Native Image offers a unique combination of Java’s programming model and the AOT optimizations of native programs. Therefore, we can preload the model’s metadata at build time and avoid parsing it at runtime, completely eliminating any startup overhead — and this kind of optimization is possible only with Java and Native Image:

➜  llama3.java git:(main) ✗ ./llama3 --model Llama-3.2-1B-Instruct-Q8_0.gguf --chat
Load tensors from pre-loaded model: 0 millis
> hello
Hello! How can I assist you today?
37.41 tokens/s (21)

> give me a nice Christmas greeting as a poem

Here's a Christmas poem for you:

'Tis the season of joy and cheer,
A time for love and laughter clear.
The tree is lit, the stockings too,
A festive atmosphere, for me and you.

The fireplace crackles, warm and bright,
As snowflakes fall gently through the night.
The room is filled with scents so sweet,
Of pine and cinnamon, a treat to greet.

The startup time went down to 0 ms!🔥🤯
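The mechanism behind this is Native Image’s build-time initialization: static initializers of classes marked with --initialize-at-build-time run during the image build, and the objects they create are stored directly in the image heap. A hypothetical sketch of the idea (not llama3.java’s actual code, and with made-up metadata values):

```java
import java.util.Map;

// With Native Image and --initialize-at-build-time=ModelMetadata, this
// static initializer runs during the image build, so METADATA is baked
// into the image heap and costs nothing to produce at run time.
// (On a regular JVM it simply runs at class-load time.)
public class ModelMetadata {
    static final Map<String, String> METADATA = parse();

    static Map<String, String> parse() {
        // Stand-in for parsing GGUF headers from the model file.
        return Map.of("architecture", "llama", "quantization", "Q8_0");
    }

    public static void main(String[] args) {
        System.out.println("Loaded " + METADATA.size() + " metadata entries");
    }
}
```

Because the parsed objects live in the image heap, “Load tensors from pre-loaded model: 0 millis” is exactly what you’d expect to see.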

Give llama3.java a try (and also watch our Devoxx talk!). I truly believe that this is the most convenient (and fun!) way to have a local LLM assistant implemented in Java.

Extra-secure application 🛡️

We see more and more attention to Native Image because of the additional security layer it provides. Why are natively compiled applications more secure?

Native Image builds your application under a closed-world assumption: only the code found to be reachable at build time makes it into the executable, which reduces the attack surface. On top of that, a native executable doesn’t load classes or generate bytecode at runtime, which takes entire categories of attacks that rely on injecting and executing unknown code off the table.

Additionally, Native Image includes support for a software bill of materials (SBOM). The SBOM file can be either embedded in the executable or made available as a classpath resource, and can be used to analyze the components of your application and scan for vulnerabilities. It’s also integrated with several tools and frameworks. For example, starting with Spring Boot 3.4, all you need to do is build your application with the --enable-sbom=classpath flag, and Spring Boot will automatically🤌 pick it up and expose it in Actuator:

SBOM support in GraalVM Native Image

See the complete example here: github.com/alina-yur/native-spring-boot-sbom.
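If you’re using the Native Build Tools plugin directly, the flag from the example above can be passed as a build argument; a sketch (adjust to your build):

```xml
<configuration>
    <buildArgs>
        <buildArg>--enable-sbom=classpath</buildArg>
    </buildArgs>
</configuration>
```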

Green application 🌿

Another aspect of Native Image where we see increasing interest is resource savings thanks to AOT compilation and optimizations. This applies to memory and CPU, but also to energy consumption 🔋. As an example, we measured energy consumption of Spring PetClinic running on JIT and on Native Image in several scenarios with increasing load:

Energy consumption of Spring PetClinic, JIT vs AOT

As we can see, the natively compiled version of the application consistently consumes less energy, even under sustained load. Our findings also align with a community study performed by Ionut Balosin.

As Josh Long says, save the planet and the turtles, use GraalVM!🌍

Bonus applications

This article is getting out of hand, but there are a few more fun applications that you can build with GraalVM that also deserve honorable mentions. So here’s a lightning round:

Observability of native applications via Micrometer

Conclusion

I hope this article was useful for you, and inspired you to create something similar (or completely different!) with GraalVM. With all the ecosystem love and support, and the optimizations and features brought by our team, there has never been a better time to build applications with GraalVM.

Let us know what you build, and have great holidays!🎄

Author: Alina Yurenko

Alina is a developer advocate for GraalVM at Oracle Labs, a research & development organization at Oracle. She loves both programming and natural languages, compilers, and open source.
