When many Java developers hear the word WebAssembly, the first thing they think is “browser technology”. The second thing: “it’s the JVM all over again”. After all, for a Java developer, in-browser apps are prehistory.
In the last few weeks, there have been quite a few announcements around WebAssembly, such as the Docker+Wasm Technical Preview. As a Java geek myself, I think we should not dismiss this technology as just a fad.
Indeed, WebAssembly is “a bytecode for the Web” (I mean, that’s the name after all), but the similarities between Java and Wasm (lower-cased: it’s a contraction, not an acronym!) really end here.
If you want to know more about how we came to define the WebAssembly standard, you can learn more about its history on my own blog. In the following, I will try to argue that there is more to WebAssembly than “just the web”.
First of all, a WebAssembly runtime is only shallowly similar to a JVM. For instance, WebAssembly was always meant to be a proper compilation target for different programming languages, while the JVM was not, at least, not originally.
Myth #1: The JVM Is A Polyglot Compilation Target
Of course, everyone knows the JVM is one of the richest, most interoperable language ecosystems there is. We don’t just have Java: we also have Scala, Jython, JRuby, Clojure, Groovy, Kotlin, and many others.
However, the sad, sad reality is that Java bytecode was never really meant to be a general-purpose compilation target. In fact, you can find references in the literature that spell this out clearly; in “Bytecodes meet combinators: invokedynamic on the JVM”, John Rose writes (bold mine):
The Java Virtual Machine (JVM) has been widely adopted in part because of its classfile format, which is portable, compact, modular, verifiable, and reasonably easy to work with. However, it was designed for just one language—Java— and so when it is used to express programs in other source languages, there are often “pain points” which retard both development and execution.
The paper describes how and why the invokedynamic opcode was introduced in the JVM; in fact, it was specifically introduced to support dynamic languages targeting the JVM as a runtime. At the time, those were many: JRuby, Jython, Groovy, etc. This opcode was not added because the JVM was supposed to support such languages, but because people were targeting it anyway: so, it was better to just acknowledge it!
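Incidentally, invokedynamic has since become central to Java itself: since Java 8, every lambda expression compiles to an invokedynamic call site that is linked at run time through java.lang.invoke.LambdaMetafactory. You can see this for yourself with a minimal class (the class and variable names below are mine) and javap:

import java.util.function.IntUnaryOperator;

public class IndyDemo {
    public static void main(String[] args) {
        // javac compiles this lambda to an invokedynamic instruction;
        // the call site is bootstrapped at run time via LambdaMetafactory.
        IntUnaryOperator addTwo = x -> x + 2;
        System.out.println(addTwo.applyAsInt(40)); // prints 42
    }
}

Disassembling the compiled class with javap -c IndyDemo shows an invokedynamic instruction at the point where the lambda is created; before Java 8, the compiler would have had to emit an anonymous class instead.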
In other words, the JVM, as it was at the time, was not an adequate compilation target for dynamic languages. We may even argue that the JVM became a compilation target not because it was the best one available, but because its adoption and ecosystem made it worth interoperating with… just like JavaScript!
GraalVM: One VM to Rule Them All
The GraalVM project has recently gone mainstream. This project includes a just-in-time compiler for regular Java bytecode (the Graal compiler), an API for building efficient language interpreters (Truffle), and, more recently, a native image compiler.
One of the original goals for GraalVM was to be “One VM to rule them all”, i.e. to be a polyglot runtime.
But Truffle does not define a polyglot compilation target. Instead, the Truffle API allows you to build an efficient, JITting interpreter for dynamic programming languages using a very high-level representation (an AST-based interpreter, if you are interested).
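To give a flavor of what an AST-based interpreter looks like, here is a deliberately naive sketch in plain Java. This is not the Truffle API (Truffle adds node specialization, partial evaluation and JIT compilation on top of a broadly similar structure); it only illustrates the shape of the idea:

// Each language construct is a node that evaluates itself by evaluating its children.
interface Node { int execute(); }

record Constant(int value) implements Node {
    public int execute() { return value; }
}

record Add(Node left, Node right) implements Node {
    public int execute() { return left.execute() + right.execute(); }
}

record Mul(Node left, Node right) implements Node {
    public int execute() { return left.execute() * right.execute(); }
}

public class ToyInterpreter {
    public static void main(String[] args) {
        // (5 + 2) * 3
        Node program = new Mul(new Add(new Constant(5), new Constant(2)), new Constant(3));
        System.out.println(program.execute()); // prints 21
    }
}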
Note for the nitpicker. Now, once you enter the programming-language-rabbit-hole everything gets kind of “meta”. Indeed, with Truffle you can write a JITting interpreter for some other “proper” bytecode format.
In fact, there is a Truffle-based interpreter for LLVM (Sulong); and, sure, LLVM bitcode is meant to be a multi-platform/multi-target compilation target. So, by the transitive property, you may argue that GraalVM/Truffle do support a multi-platform compilation target.
This is technically correct (which is the best kind of correct), but there are many considerations to be made, and there is not enough space here to discuss them all. In short, LLVM bitcode is meant to be a compilation target, but it was not necessarily meant to be a cross-platform runtime language (e.g., there are slight variations in the instructions you may have to use, depending on the CPU/OS you want to target). Moreover, as opposed to WebAssembly, which is a multi-vendor standard, GraalVM and Truffle are, to this day, open source, community-driven, but single-implementation efforts (work has recently started to bring it to the OpenJDK and possibly to the Java Language Specification).
Ultimately, WebAssembly is also another language that GraalVM/Truffle is able to support, so if you want to use GraalVM, you might even target Wasm!
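For example, with the GraalWasm language installed, a compiled .wasm module can be loaded through GraalVM’s standard polyglot API. The sketch below is indicative only: demo.wasm and its add export are hypothetical, and the exact way exported functions are looked up has changed across GraalWasm releases, so treat the getMember chain as an assumption:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.io.ByteSequence;
import java.nio.file.Files;
import java.nio.file.Path;

public class WasmOnGraal {
    public static void main(String[] args) throws Exception {
        // Hypothetical module exporting an "add" function (i32, i32) -> i32.
        byte[] binary = Files.readAllBytes(Path.of("demo.wasm"));
        try (Context context = Context.newBuilder("wasm").build()) {
            Source source = Source.newBuilder("wasm", ByteSequence.create(binary), "demo").build();
            context.eval(source);
            // Assumption: the module's exports are published in the "wasm" bindings under its name.
            Value add = context.getBindings("wasm").getMember("demo").getMember("add");
            System.out.println(add.execute(2, 3).asInt()); // 5
        }
    }
}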
Myth #2: It’s Just Another Stack-based Language VM
WebAssembly is defined as a virtual instruction set architecture (ISA) for a structured stack-based virtual machine.
The word structured here is key, because it is a very significant departure from the way, say, the JVM works. In practice, in a structured stack machine most computations use a stack of values, but control flow is expressed in structured constructs such as blocks, ifs, and loops. Moreover, in the WebAssembly text format, some instructions can be written both in a “simple” (flat) form and in a “nested” form.
Let’s see an example: the expression

( x + 2 ) * 3

On the JVM, javac compiles the equivalent method int exp(int x) { return (x + 2) * 3; } to the following bytecode (as shown by javap -c):

int exp(int);
  Code:
     0: iload_1
     1: iconst_2
     2: iadd
     3: iconst_3
     4: imul
     5: ireturn

In the stack-based Wasm machine, the same expression could be translated into the following sequence of instructions:

(local.get $x) (i32.const 2) i32.add (i32.const 3) i32.mul
- local.get puts the value of the local variable $x on the stack
- then the i32.const pushes the 32-bit integer (i32) constant 2 on the stack
- i32.add pops the two values from the stack, and pushes the result $x+2 on the stack
- we then push the integer constant 3
- i32.mul pops the two integer values and pushes the i32 result of the multiplication (($x+2)*3)
You may have noticed that instructions taking at least one argument are parenthesized. What we just saw is the “linearized” version of the WebAssembly text format: it is the one that translates straightforwardly into its binary representation in a .wasm file. There is, however, another, semantically equivalent “nested” representation:
(i32.mul (i32.add (local.get $x) (i32.const 2)) (i32.const 3))
The nested representation is particularly interesting because it shows a peculiar difference from other kinds of bytecode (such as the JVM’s): operations nest and read like expressions in a more conventional programming language. Well, for some definition of conventional: it reads like Scheme (a language in the LISP family), and the parenthesization convention is a clear homage to it. Of course, this is not by accident; if you know a bit about JavaScript’s evil origin story you’ll definitely know that it was originally written in 10 days, and you may also know that Brendan Eich was initially hired to develop a Scheme dialect.
However, the even more interesting detail (at least to me) is that the nested version naturally linearizes to the other one; in fact, if you follow the precedence rules for parenthesized expressions, you have to start at the innermost parentheses:

(i32.add (local.get $x) (i32.const 2))

so first you get $x, then you evaluate the constant 2, then you sum them; then you continue with the outermost expression:

(i32.mul (i32.add ...) (i32.const 3))

Now that you have evaluated the inner i32.add, you evaluate the constant 3 and you can multiply them. That’s exactly the same order of evaluation as the stack-based version!
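In compiler terms, the flat form is simply a post-order traversal of the nested form: emit the operands first, then the operator that consumes them. Here is a small illustration of my own (using Java 21 pattern matching; the class names are made up):

// A post-order walk of the nested expression yields the flat (stack) form.
sealed interface Expr permits Local, Const, BinOp {}
record Local(String name) implements Expr {}
record Const(int value) implements Expr {}
record BinOp(String op, Expr left, Expr right) implements Expr {}

public class Linearize {
    static void emit(Expr e) {
        switch (e) {
            case Local l -> System.out.println("local.get $" + l.name());
            case Const c -> System.out.println("i32.const " + c.value());
            case BinOp b -> { emit(b.left()); emit(b.right()); System.out.println("i32." + b.op()); }
        }
    }

    public static void main(String[] args) {
        // (i32.mul (i32.add (local.get $x) (i32.const 2)) (i32.const 3))
        Expr nested = new BinOp("mul",
                new BinOp("add", new Local("x"), new Const(2)),
                new Const(3));
        emit(nested);
    }
}

Running it prints local.get $x, i32.const 2, i32.add, i32.const 3, i32.mul, one per line: exactly the flat sequence we started from.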
We have also mentioned structured control flow. The reason for this choice is, again, safety; but also simplicity:
The WebAssembly stack machine is restricted to structured control flow and structured use of the stack. This greatly simplifies one-pass verification, avoiding a fixpoint computation like that of other stack machines such as the Java Virtual Machine (prior to stack maps). This also simplifies compilation and manipulation of WebAssembly code by other tools.
Let’s see an example:
void print(boolean x) {
    if (x) {
        System.out.println(1);
    } else {
        System.out.println(0);
    }
}
This translates to the bytecode:
void print(boolean);
  Code:
     0: iload_1
     1: ifeq          14
     4: getstatic     #7   // java/lang/System.out:Ljava/io/PrintStream;
     7: iconst_1
     8: invokevirtual #13  // java/io/PrintStream.println:(I)V
    11: goto          21
    14: getstatic     #7   // java/lang/System.out:Ljava/io/PrintStream;
    17: iconst_0
    18: invokevirtual #13  // java/io/PrintStream.println:(I)V
    21: return
You will notice the unstructured jump instructions ifeq and goto, which are missing from the equivalent WebAssembly definition, replaced instead by proper if...then...else blocks!
(module
  ;; import the browser console object,
  ;; you'll need to pass this in from JavaScript
  (import "console" "log" (func $log (param i32)))

  (func
    ;; change to positive number (true)
    ;; if you want to run the if block
    (i32.const 0)
    (call 0))

  (func (param i32)
    local.get 0
    (if
      (then
        i32.const 1
        call $log ;; should log '1'
      )
      (else
        i32.const 0
        call $log ;; should log '0'
      )))

  (start 1) ;; run the first function automatically
)
You can see and play with the original example on the Mozilla Developer Network.
Obviously, this also linearizes to a non-nested version:
(module
  (type (;0;) (func (param i32)))
  (type (;1;) (func))
  (import "console" "log" (func (;0;) (type 0)))
  (func (;1;) (type 1)
    i32.const 1
    call 0)
  (func (;2;) (type 0) (param i32)
    local.get 0
    if  ;; label = @1
      i32.const 1
      call 0
    else
      i32.const 0
      call 0
    end)
  (start 1))
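To see why structured control flow enables single-pass checking, consider a deliberately simplified checker of my own (a real Wasm validator also tracks value types, block arities and unreachable code). Because if, else and end are explicit, it can walk the instruction list once, remember the stack depth at each block entry, and verify that both branches rejoin at that depth; no fixpoint over arbitrary jump targets is needed:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ToyVerifier {
    static void verify(List<String> instructions) {
        int depth = 0;
        Deque<Integer> blocks = new ArrayDeque<>();
        for (String ins : instructions) {
            switch (ins) {
                case "i32.const", "local.get" -> depth++;      // push one value
                case "i32.add", "i32.mul" -> depth--;          // pop two, push one
                case "call $log" -> depth--;                   // (param i32), no result
                case "if" -> { depth--; blocks.push(depth); }  // pop the condition, remember depth
                case "else", "end" -> {
                    int expected = ins.equals("end") ? blocks.pop() : blocks.peek();
                    if (depth != expected) throw new IllegalStateException("unbalanced stack at " + ins);
                }
                default -> throw new IllegalArgumentException("unknown instruction: " + ins);
            }
            if (depth < 0) throw new IllegalStateException("stack underflow at " + ins);
        }
    }

    public static void main(String[] args) {
        // The body of the if/else function above, with operands omitted for brevity.
        verify(List.of("local.get", "if", "i32.const", "call $log",
                       "else", "i32.const", "call $log", "end"));
        System.out.println("looks well-structured");
    }
}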
More Differences: Memory Management
For better or worse, another area where WebAssembly virtual machines greatly differ from a JVM is memory management. As you probably know, JVM languages do not require you to allocate and deallocate memory, or to care about stack vs. heap allocation; at least in general: you can care about those things, and there are ways to deal with them explicitly if you really need to, but the reality is that most people won’t.
This is not just a language-level feature, it is really how the VM works. The bytecode gives you no primitives for raw memory management; facilities for explicit allocation do exist, but they are exposed as JDK APIs rather than VM instructions. There is no way for you to opt out of managed memory: you cannot just say “I don’t care about the garbage-collected heap, I am going to do my own memory management”.
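For the curious, those JDK-level escape hatches look roughly like the sketch below: a minimal example of explicit, deterministically released allocation using the Foreign Function & Memory API (final since JDK 22; ByteBuffer.allocateDirect is an older alternative). Note that even here you never fully leave the managed world:

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class OffHeapDemo {
    public static void main(String[] args) {
        // Explicitly allocate 16 bytes outside the garbage-collected heap...
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(16);
            segment.set(ValueLayout.JAVA_INT, 0, 42);
            System.out.println(segment.get(ValueLayout.JAVA_INT, 0)); // 42
        } // ...and release it deterministically when the arena closes.
        // The Arena and MemorySegment handles themselves are still ordinary,
        // garbage-collected Java objects: there is no opting out of the GC.
    }
}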
At this time, WebAssembly is quite the opposite. It is no coincidence that most languages targeting WebAssembly today manage their own memory. Some of them are garbage collected; but in those cases, they have to ship their own garbage collection routines, because the VM does not provide such a facility.
Instead, with WebAssembly you get a slice of linear memory, and then you can do whatever you want with it. Allocate, deallocate; even move it around if you’d like. While this is, in a way, more powerful than what the JVM provides, it also comes with caveats.
For instance, the JVM does not require you to specify the memory layout of an object, because it is up to the VM to deal with structure packing, word alignment, etc. In the case of WebAssembly, you deal with those issues.
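To make that concrete, here is what a compiler targeting linear memory has to decide for even a trivial struct: byte offsets, padding and endianness, all chosen by hand. The sketch below is my own Java illustration (the specific layout is an assumption, not something Wasm mandates), using a direct ByteBuffer as a stand-in for linear memory, which Wasm defines as little-endian:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ManualLayout {
    // A hand-picked layout for: struct Point { int x; int y; byte tag; }
    static final int OFF_X = 0;    // bytes 0..3
    static final int OFF_Y = 4;    // bytes 4..7
    static final int OFF_TAG = 8;  // byte 8
    static final int SIZE = 12;    // padded to 12 bytes to keep 4-byte alignment

    public static void main(String[] args) {
        ByteBuffer memory = ByteBuffer.allocateDirect(SIZE).order(ByteOrder.LITTLE_ENDIAN);
        memory.putInt(OFF_X, 10);
        memory.putInt(OFF_Y, 20);
        memory.put(OFF_TAG, (byte) 1);
        System.out.println(memory.getInt(OFF_X) + ", " + memory.getInt(OFF_Y)); // 10, 20
    }
}

The JVM never asks you for any of this: field offsets, padding and alignment are the VM’s business.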
On the one hand, this makes it perfect as a target for manually-managed programming languages, where a higher degree of control is expected and desired. On the other hand, it could make it harder for such languages to interoperate with each other.
Now, structure and object layout is an ABI concern: a thing of the past for JVM developers, except for some very limited and notable exceptions.
Interestingly enough, the draft GC spec for WebAssembly has recently moved forwards, and it does not just deal with garbage collection, but it effectively describes how to deal with structures, and how to make them interoperate, regardless of the originating language. So, while this is still not ready, things are continuously evolving and multiple concerns are being addressed.
More Than Web
Now, in all we have learned so far, you might have noticed that I never mentioned the word Web once.
Indeed, it took me a while to get to the point, but this is where I tell you, the Java Geek, why you should care.
Even if you do not care about front-end, you should not dismiss WebAssembly as a purely front-end technology. There is nothing in the design and specification of WebAssembly that makes it specifically tied to the front-end. In fact, most mainstream JavaScript runtimes are now able to load and link WebAssembly binaries, even outside the browser; so you can run a Wasm executable in a Node.js runtime, with a thin layer of JS glue code to interact with the rest of the platform.
But there are also many pure-WebAssembly runtimes, such as Wasmtime, WasmEdge, and wazero, that are completely independent of a JavaScript host. These runtimes are usually lighter-weight than a full-blown JavaScript engine, and they are often easy to embed inside larger projects.
In fact, many projects are starting to embrace WebAssembly as a polyglot platform to host extensions and plug-ins.
One notable example is the Envoy proxy: the codebase is mostly C++; it does support plug-ins, but with the same caveats as browser plug-ins: you have to compile them, you have to ship them, they may not run at the right level of privilege, and they may even tear down the entire process in case of a fatal fault. Now, you could embed a Lua or a JS interpreter and let your users script their way to success: the interpreter is safer because it is isolated from your main business logic and only interacts with the host environment in a controlled way; the main downside is that you have to pick a language for your users.
Or, you could just embed a WebAssembly runtime, let your users pick their own language and just compile it to Wasm. You will have the same safety guarantees, and happier users.
These pure WebAssembly runtimes are not just for extensions. Many projects are creating thin layers of Wasm-native APIs to provide stand-alone platforms.
For instance, wasmCloud is a distributed platform for writing portable business logic that can run anywhere from the edge to the cloud.
Fastly has developed a platform for serverless computing at the edge, where the serverless functions are implemented by user-provided WebAssembly executables.
Fermyon is a startup that is developing a rich ecosystem of tooling and Web-based APIs to write Web apps using only Wasm. One of their latest announcements is their Fermyon Cloud offering.
These solutions offer custom, ad-hoc APIs for specific use cases; and this is indeed one way to use WebAssembly. But that is not the end of it. In 2019, Docker founder Solomon Hykes wrote:
If WASM+WASI existed in 2008, we wouldn’t have needed to created Docker. That’s how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task! https://t.co/wnXQg4kwa4
— Solomon Hykes (@solomonstre) March 27, 2019
If you pull this out of context your first question may be “What the hell has Wasm to do with Docker?” and, of course, “What the hell is WASI?”.
WASI is the WebAssembly System Interface. You can think of it as a collection of (POSIX-like) APIs that allow a Wasm runtime to interact with the operating system. Is this like the JDK class library? Not quite: it is a thin layer of capability-oriented APIs for interaction with the operating system. You can read more in the Mozilla announcement blog post, but, in short, this is the last piece of the puzzle: WASI makes it possible to write backend applications that interact directly with the operating system, without any extra layer and without ad-hoc APIs. The current effort is to make WASI widely adopted and, in a way, a de facto standard for backend development.
WASI APIs include things like file system access, networking, and even threading. These APIs work hand-in-hand with the lower-level capabilities of the runtime, making it easier to port existing software to the platform.
Porting Java
With all its challenges, for the first time we have a technology with the potential to become a truly multi-vendor, multi-platform, safe, polyglot programming platform. I believe that we, as Java geeks, should not miss the chance to be relevant in this space.
The WebAssembly specification and the WASI effort are still in flux, but all these pieces together are paving the way for easier ports of any programming language, not just those with manual memory management.
Indeed, some garbage-collected languages are already available, although not all of them take the same approach. For instance, Go can be compiled to Wasm (albeit with some limitations). The Python port, on the other hand, is a port of the interpreter: the CPython interpreter is compiled to Wasm, and that is then used to evaluate Python scripts, just as in a traditional execution environment.
In fact, memory management is really just part of the story, and only one of the many caveats involved in porting Java. You can always embed a GC in your executable (indeed, this is how GraalVM Native Image currently works); in my opinion, however, it is harder to support other CPU features or system calls that are currently still unstable or not widely supported.
For instance:
- threading support is still lacking or experimental in most stand-alone Wasm runtimes; even in the browser it is experimental, and simulated through Web Workers;
- there is no standardized support for socket access: services that let you write custom HTTP handlers usually provide a pre-configured socket, limiting low-level access;
- exception handling is another experimental feature that is harder to simulate, because of the lack of unstructured jumps in the Wasm bytecode: this will likely need proper support in Wasm VMs before it can be adopted;
- each language brings its own constraints on memory layout and object shapes: it is therefore harder for languages to share data across boundaries, hindering compatibility between different languages and thus limiting the suitability of Wasm as a polyglot platform (this is, however, being addressed as part of the GC spec itself).
In short, there are many challenges to porting Java to the WebAssembly platform inside and outside the browser.
Java Support on WebAssembly
There are currently multiple projects and libraries that deal with WebAssembly and Java. I have compiled a list of those that I found around the web. At this time, however, most of these are hobby projects.
Running Java in the Browser
Many projects target Java translation to WebAssembly. Most of them, however, do not emit code that is compatible with leaner Wasm runtimes: in general, they are meant for running in the browser.
- Bytecoder, JWebAssembly, and TeaVM are all translators from Java bytecode into WebAssembly, each taking a slightly different approach to producing browser-friendly code. Among them, TeaVM seems the most promising, as shown by Fermyon’s fork, which includes initial support for WASI.
- CheerpJ is a very promising, albeit proprietary, attempt to support the full extent of Java, including Swing. There is also a Chrome extension to run good ol’ applets through Web tech.
Here are also some honorable mentions of projects that target browser runtimes (with experimental Wasm support in some cases):
- J2CL (successor to GWT) is a source-to-source translator (i.e. a transpiler) from Java to JavaScript, which has recently gained support for Wasm. This compiler has also bleeding-edge support for the GC spec.
- Bck2Brwsr is another compiler from bytecode that targets JavaScript and the browser
- Kotlin/Native also supports being compiled to Wasm via LLVM. It comes with all the caveats of Kotlin/Native (e.g. it may not support all of your Java libraries)
- DoppioJVM is an interesting project that I wish to mention because it takes a completely different approach, similar to Python’s: instead of compiling bytecode to Wasm, it is instead an in-browser VM (written in JavaScript) that is able to interpret JVM bytecode. Unfortunately, the project is currently unmaintained.
Running WebAssembly on the JVM
We have been talking about running Java programs on a Wasm runtime. But of course, you may want to do the opposite, too. In all fairness, the JVM already hosts quite a few programming languages, and the current programming model that most Wasm runtimes offer (with manual memory management) seems a bit off when hosted on a JVM. But I still want to mention these for completeness, and because they may still be interesting in general.
- The prime candidate is obviously the aforementioned Truffle-based WebAssembly interpreter in GraalVM, which benefits from all the JIT superpowers and polyglot interoperability of the GraalVM/Truffle platform
- asmble is a suite of tools that includes a compiler from Wasm to bytecode and a Wasm interpreter
- Happy New Moon With Report (JVM) is a WebAssembly runtime for the JVM (that I am including in this list because I just love the silly name!)
- There are also bindings to native Wasm runtimes, such as kawamuray/wasmtime-java
- The Extism project has recently launched: it provides a unified API across different host languages to interface with a native WebAssembly runtime (Wasmtime)
- Kaitai WebAssembly is a Wasm parser written using the Kaitai Struct binary parser generator that I am currently maintaining (PRs welcome!): this is not necessarily meant for running Wasm on the JVM, but it is actually useful when you want to manipulate or query Wasm executables for information. In fact, a Kaitai grammar allows one to generate a binary parser for any supported language, so not just Java, but also Python, Ruby, Go, C++, and many others.
Conclusion
I hope that this post sparked some interest in you. It is still early days for Java-on-Wasm, but I invite you to explore this brand-new world with an open mind: it may surprise you!
Author: Edoardo Vacchi
After my PhD at the University of Milan on programming language design and implementation, I worked for three years at UniCredit Bank’s R&D department.
Later, I joined Red Hat, where I worked on the Drools rule engine, the jBPM workflow engine, and the Kogito cloud-native business automation platform.
I joined Tetrate to work on the wazero WebAssembly runtime for Go.
Now, at Dylibso, I still contribute to wazero, and I am also working on the Chicory Wasm runtime for the JVM, and other runtimes!
I sometimes write on my own personal blog.
sandoradam December 23, 2022
Big thanks for this! As a Java geek I learned more about wasm from this article than from the previous 10 I read.
Thomas Darimont December 24, 2022
Great article!
There is another option for running Webassembly workloads in the JVM.
The Extism plugin framework https://extism.org/ recently added a Java SDK that can be used to call functions in wasm binaries from JVM-based applications:
https://extism.org/docs/integrate-into-your-codebase/java-host-sdk
This could be added to the list “running Webassembly in the JVM”
Cheers,
Thomas