u/minamoto108

▲ 8 r/WebAssembly+1 crossposts


Hexana started life as a plugin for JetBrains IDEs (IntelliJ IDEA, RustRover, WebStorm, GoLand, CLion, PyCharm, etc.) that treats .wasm and .wit as first-class IDE artifacts. It now also ships as a VS Code extension — version 0.0.2 just landed on Open VSX.

Install (VS Code command palette):

ext install JetBrains.hexana-wasm

Or from the VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=JetBrains.hexana-wasm or from Open VSX: https://open-vsx.org/extension/JetBrains/hexana-wasm

Below: what's in the VS Code release on day one.

Custom binary editor for .wasm

Opens .wasm files in a dedicated read-only editor instead of the default VS Code hex view. The editor auto-detects whether the binary is a Core Wasm module, a Component Model binary, or a generic Wasm file. The structural analysis panel adjusts based on which kind it is.
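The detection itself falls out of the binary header. As a minimal sketch (not Hexana's actual implementation): every wasm binary starts with the magic `\0asm`, and the upper half of the 4-byte version field is a layer marker — layer 0 is a core module, layer 1 is a Component Model binary.

```rust
// Sketch of header-based kind detection, assuming the standard wasm
// preamble: 4 magic bytes, then a little-endian version/layer field.

#[derive(Debug, PartialEq)]
enum WasmKind {
    CoreModule,
    Component,
    Unknown,
}

fn detect_kind(bytes: &[u8]) -> WasmKind {
    // Too short or wrong magic: not a wasm binary at all.
    if bytes.len() < 8 || &bytes[0..4] != b"\0asm" {
        return WasmKind::Unknown;
    }
    // Bytes 6-7 hold the layer as a little-endian u16.
    match u16::from_le_bytes([bytes[6], bytes[7]]) {
        0 => WasmKind::CoreModule,
        1 => WasmKind::Component,
        _ => WasmKind::Unknown,
    }
}

fn main() {
    // Core module: version 1, layer 0.
    assert_eq!(
        detect_kind(&[0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]),
        WasmKind::CoreModule
    );
    // Component: version 13, layer 1 (current component-model encoding).
    assert_eq!(
        detect_kind(&[0x00, 0x61, 0x73, 0x6d, 0x0d, 0x00, 0x01, 0x00]),
        WasmKind::Component
    );
}
```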

Hex viewer

Virtual-scrolling hex dump. Byte selection via click, shift-click, drag. Keyboard navigation. Text search across the byte stream.
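The row rendering behind a viewer like this is the classic 16-bytes-per-line layout — offset column, hex column, printable-ASCII column; virtual scrolling just means only visible rows are formatted. A sketch (not the extension's code):

```rust
// One row of a hex dump: "00000000  00 61 73 6d ...  |.asm....|".
fn hex_row(offset: usize, row: &[u8]) -> String {
    let hex: Vec<String> = row.iter().map(|b| format!("{:02x}", b)).collect();
    let text: String = row
        .iter()
        .map(|&b| if (0x20..0x7f).contains(&b) { b as char } else { '.' })
        .collect();
    // Pad the hex column (16 bytes * 3 chars - 1) so short final rows line up.
    format!("{:08x}  {:<47}  |{}|", offset, hex.join(" "), text)
}

fn hex_dump(bytes: &[u8]) -> String {
    bytes
        .chunks(16)
        .enumerate()
        .map(|(i, row)| hex_row(i * 16, row))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    println!("{}", hex_dump(b"\0asm\x01\x00\x00\x00"));
}
```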

Structural analysis — up to 11 tabbed views

Surfaced based on the binary kind:

  • Summary — section table + binary statistics
  • Exports — kind, name, index, function signature
  • Imports — kind, module, name
  • Functions — index, name, signature
  • Data — data segments
  • Custom — custom sections
  • Top — largest contributors by size
  • Monos — monomorphisation analysis
  • Garbage — unreferenced / dead code detection
  • Modules — clickable nested-module drill-down (component model)
  • WAT — WebAssembly Text rendered in a native VS Code editor tab with syntax highlighting

Every table sorts by column and supports text search.
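The Summary-style section table comes almost for free from the binary format itself (this mirrors the spec, not Hexana's internals): after the 8-byte header, a core module is a flat sequence of sections — one id byte, a LEB128 size, then that many payload bytes — so the table can be built without decoding any section contents.

```rust
// Decode an unsigned LEB128 integer starting at `pos`;
// returns (value, position after the last byte read).
fn read_uleb128(bytes: &[u8], mut pos: usize) -> (u32, usize) {
    let (mut result, mut shift) = (0u32, 0);
    loop {
        let b = bytes[pos];
        pos += 1;
        result |= ((b & 0x7f) as u32) << shift;
        if b & 0x80 == 0 {
            return (result, pos);
        }
        shift += 7;
    }
}

/// Walk a core module's sections, returning (section id, payload size) pairs.
fn section_table(module: &[u8]) -> Vec<(u8, u32)> {
    let mut sections = Vec::new();
    let mut pos = 8; // skip magic + version
    while pos < module.len() {
        let id = module[pos];
        let (size, next) = read_uleb128(module, pos + 1);
        sections.push((id, size));
        pos = next + size as usize; // skip the payload entirely
    }
    sections
}

fn main() {
    // Header, then an empty type section (id 1) and an empty
    // function section (id 3), each with a 1-byte payload.
    let module = [
        0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
        0x01, 0x01, 0x00,
        0x03, 0x01, 0x00,
    ];
    assert_eq!(section_table(&module), vec![(1, 1), (3, 1)]);
}
```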

Run support

Run a .wasm from the editor toolbar via wasmtime. The Run dialog asks which export to call and what program arguments to pass.

  • Core modules → import stubs auto-generated.
  • Component-Model binaries → dependencies resolved and composed before run.

Component Model

  • Automatic dependency resolution by scanning workspace directories for matching .wasm files, transitively.
  • Open a nested module inside a component binary in its own editor tab — same custom editor, full structural analysis.
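The workspace-scanning half of that resolution step can be sketched as a recursive directory walk that collects `.wasm` candidates (hypothetical: the matching of candidates against a component's unresolved imports is omitted here):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Recursively collect every .wasm file under `dir` into `out`.
// Unreadable directories are skipped rather than treated as errors.
fn find_wasm_files(dir: &Path, out: &mut Vec<PathBuf>) {
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            find_wasm_files(&path, out); // recurse into subdirectories
        } else if path.extension().map_or(false, |e| e == "wasm") {
            out.push(path);
        }
    }
}

fn main() {
    // Demo layout in a temp dir: two .wasm files (one nested), one non-wasm file.
    let root = std::env::temp_dir().join("wasm-scan-demo");
    fs::create_dir_all(root.join("deps")).unwrap();
    fs::write(root.join("a.wasm"), b"\0asm").unwrap();
    fs::write(root.join("deps/b.wasm"), b"\0asm").unwrap();
    fs::write(root.join("notes.txt"), b"ignored").unwrap();

    let mut found = Vec::new();
    find_wasm_files(&root, &mut found);
    assert_eq!(found.len(), 2); // only the two .wasm files are picked up
}
```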

Day-one scope

This is the day-one VS Code feature set. The JetBrains plugin has been around longer and currently has additional capabilities not yet in the VS Code extension — experimental WASM debugging (shipped in JetBrains 0.9, also out today), DWARF source mapping, WIT language support, JS↔Wasm type inference, Java embedder support (Chicory, GraalWasm), and additional runtimes for Run (WAMR, GraalVM).

If you need any of those today, the JetBrains plugin: https://plugins.jetbrains.com/plugin/29090-hexana

File issues if you hit something

If a .wasm should open and doesn't, or a section doesn't parse, the "doesn't load on this binary" reports are exactly what helps right now — ideally with a reproducer.

Install: ext install JetBrains.hexana-wasm
Web listing: https://open-vsx.org/extension/JetBrains/hexana-wasm

VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=JetBrains.hexana-wasm

reddit.com
u/minamoto108 — 7 days ago


Hexana is a plugin for JetBrains IDEs (built on the IntelliJ Platform — works in IntelliJ IDEA, RustRover, WebStorm, GoLand, CLion, PyCharm, etc.) that treats `.wasm` and `.wit` as first-class IDE artifacts: explorer tree, hex view, WAT view, navigation, MCP API for AI assistants. Free on the JetBrains Marketplace.

0.9 just shipped. Highlights below; per-version detail on the Marketplace listing: https://plugins.jetbrains.com/plugin/29090-hexana

Experimental WASM debugging

You can step through .wasm from the IDE — pause, inspect, continue. It's experimental and the constraints are explicit:

  • LLVM 22.1 or newer required
  • Works with Wasmtime and WAMR only
  • The target has to be debuggable with lldb

Within those bounds, it works. If you've been doing wasm debugging via printf-into-host-imports, this should feel like a real upgrade. If your toolchain is older than LLVM 22.1, you're out for now.

WAMR support for run + debug

WAMR is now a selectable runtime in run configurations alongside Wasmtime (which shipped in 0.8). Same UI, pick a runtime, hit Run or Debug.

Custom GraalVM home

Until 0.9 the GraalVM run option used the bundled Graal only. You can now point at any GraalVM install on your machine.

UX

  • Information bar across the top of the binary view: file size (hover for stats), module kind, inline Run/Debug buttons.
  • Top tab: proper headers, sortable columns, scrolling.
  • Nested modules: opening one now shows a backreference to the containing module so you can navigate back out.

Java embedder support

If you're embedding wasm in Java:

  • Chicory (Red Hat): Java completion + inspections specific to Chicory APIs
  • GraalWasm (Oracle): same, for GraalWasm

File issues if you hit something

If you've got a .wasm that should debug and doesn't (LLVM ≥ 22.1, wasmtime or WAMR target, lldb-debuggable), the "doesn't work" reports are exactly what helps right now — ideally with a reproducer.

Plugin: https://plugins.jetbrains.com/plugin/29090-hexana

reddit.com
u/minamoto108 — 7 days ago
▲ 14 r/WebAssembly+1 crossposts


Hexana is a JetBrains IntelliJ plugin that treats .wasm binaries (and .wit definitions) as first-class IDE artifacts: explorer tree, hex view, WAT view, navigation, MCP API for AI assistants. Free on the JetBrains Marketplace. Below is a consolidated changelog from 0.5 → 0.8.2 — six weeks, five releases.

Major features added since 0.5

  • Component Model + WIT support. Component sections, instances, type definitions, imports, exports, interfaces, and worlds all show up in the explorer tree. WIT files get full language support — go-to-definition, find usages, hover docs, keyword completion, formatting — and cross-navigate from WIT into the corresponding .wasm definitions.
  • DWARF source mapping. Hexana detects and parses DWARF in .wasm and maps functions back to source files and lines. Click a function in the binary, land in the source.
  • Code-Size Profiler for WebAssembly. See exactly which functions, sections, and data segments are eating bytes in your .wasm, right in the IDE.
  • JS interop with Wasm awareness. Real code completion and type inference for instance.exports.*, import namespaces, and property names — derived from the actual .wasm module, not a stale .d.ts.
  • Run configurations. Pick Wasmtime or GraalVM, hit Run.
  • WAT view that's actually usable. Offset-based line numbers matching byte positions, IDE zoom, text selection, search, smooth scrolling.
  • Hex view polish. Text selection across hex and text columns, arrow keys behave.
  • Search across imports / exports / functions in any table view (filter-as-you-type).
  • Broader opcode coverage in WAT and MCP. reference-types and bulk-memory instruction families, plus Legacy Exception Handling parsing/rendering.
  • MCP improvements. Tool descriptions tightened for cleaner AI-assisted binary analysis.

Stability picked up alongside this — Go-compiled .wasm modules load, KDoc rendering doesn't break with Hexana enabled, shared-memory limits handled correctly, big WAT files don't lag, run configs work on Windows, and a long-running data race on the shared byte buffer that caused sporadic UnParsedOpcodeExceptions on larger modules is gone.

Chronological breakdown

0.6 — Component Model + WIT (2026-03-18)

Added

  • Component Model binary support: component sections, instances, type definitions, imports, exports, interfaces, worlds — all parsed and shown in the explorer tree
  • WIT language support: code model, go-to-definition, find usages, hover documentation
  • Cross-navigation from WIT to Wasm: click an export in .wit, jump to its definition in .wasm

Fixed

  • Several MCP-side issues affecting AI-assisted analysis

0.7 — WAT usability + search (2026-03-31)

Added

  • WAT files now show offset-based line numbers that match byte positions in the binary — finally makes WAT ↔ hex correlation trivial
  • Search across imports, exports, and functions in any table view (filter-as-you-type, no shortcut)
  • Arrow-key navigation, scrolling, layout fixes across all table views
  • WIT basic editing: keyword completion, code formatting

Fixed

  • KDoc rendering no longer breaks when Hexana is enabled
  • Go-compiled .wasm modules load without crashing
  • Shared memory limits handled correctly

0.7.1 — UX polish (2026-04-09)

Added / improved

  • IDE zoom now works in the WAT tab (presentations, screenshots, blog posts — readable at last)
  • Big WAT files: line numbers, text selection, search, smooth scrolling — proper editor instead of a flat dump
  • Hex view: text selection works across hex and text columns, arrow keys behave

Fixed

  • Unbalanced tree parsing in WAT no longer trips the plugin
  • .wasm/.wat served over HTTP (local debug scenarios) handled correctly
  • WIT folding with empty ranges

0.8 — DWARF + profiler + JS interop + run configs (2026-04-21)

Added

  • DWARF support. Detects and parses DWARF in .wasm, maps functions back to source files and lines. Click a function in the binary, land in the source.
  • Code-Size Profiler. See exactly which functions, sections, and data segments are consuming bytes in your .wasm.
  • JS interop with Wasm awareness. Real code completion and type inference for instance.exports.*, import namespaces, and property names — derived from the actual .wasm module, not a stale .d.ts.
  • Run configurations for Wasmtime and GraalVM. Pick a runtime, hit Run.
  • Explorer integration: Hexana views slot into the Project tool window
  • MCP tool descriptions optimized for cleaner AI-assisted analysis

Fixed

  • IJPL-242167 (Project tool window crash on certain configurations)
  • WIT ClassCastException

0.8.2 — patch (2026-04-30)

Added

  • Legacy EH (exception handling) parsing/rendering — for modules built against the older proposal
  • WAT/MCP rendering of reference-types and bulk-memory instruction families

Fixed

  • Run configurations now work on Windows (Wasmtime / GraalVM run configs in 0.8 didn't actually launch on Windows — they do now)
  • Wasm parser fixes (vector, table)
  • Element segment type 6 now reads the reference-type per WebAssembly 3.0 spec §5.5.12
  • Data race on shared CommonByteBuffer causing sporadic UnParsedOpcodeExceptions on larger modules — fixed

(0.8.1 didn't ship publicly — the Windows fix needed an extra revision before going out.)

Where this is going

Short list of what's actively in progress, in case anyone has opinions to share before it's frozen:

  • WASM debugging via DWARF — read-only inspection works; stepping through wasm in the IntelliJ debugger is next
  • Cross-navigation from Wasm imports back to WIT (the inverse of what shipped in 0.6)
  • More opcodes / proposals coverage in WAT and MCP (threads, tail-call, GC types are the obvious gaps)

Plugin: https://plugins.jetbrains.com/plugin/29090-hexana
Issues / feature requests: https://github.com/JetBrains/hexana/issues

If you've hit something that should be here and isn't — ideally with a .wasm reproducer — file it. The "doesn't load" / "crashes on" tickets get prioritized over feature work.

reddit.com
u/minamoto108 — 13 days ago
▲ 30 r/WebAssembly+2 crossposts


We've been running wasm modules inside a JVM application (a Rust wasmprinter embedded via GraalWasm) and the obvious follow-up question was: how does this compare to the alternatives, and when should we actually pick something else?

So I built a small JMH harness that runs the same proxy.wasm artifact through six execution paths and wrote up the results. Sharing here because I couldn't find a head-to-head comparison covering all of these in one place, and I'd genuinely like to hear if anyone has reasons to expect different numbers on different workloads.

The workload

A tiny Rust crate compiled to wasm32-wasip1 exposing one export:

#[no_mangle]
pub unsafe extern "C" fn decode_jpeg(
    in_ptr: *const u8, in_len: usize,
    out_ptr: *mut u8, out_cap: usize,
) -> i32 { /* jpeg-decoder → RGB8 */ }

Input: a 320×240 JPEG baked into the wasm via include_bytes!. Output: 230,400 bytes of RGB. Steady-state ~1 ms of native CPU — small enough to expose call/dispatch overhead, big enough that the JIT actually kicks in. Cross-variant correctness check: every backend produces byte-identical output (sha256 matches across all six).
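The 230,400-byte figure is just the RGB8 buffer contract the export implies: the host has to pre-allocate `out_ptr`/`out_cap` at width × height × 3 before calling `decode_jpeg`, since the export writes into caller-provided memory.

```rust
const BYTES_PER_PIXEL: usize = 3; // RGB8: one byte each for R, G, B

// How large the host must size out_cap for a decoded frame.
fn rgb8_buffer_size(width: usize, height: usize) -> usize {
    width * height * BYTES_PER_PIXEL
}

fn main() {
    // 320×240 input → exactly the 230,400 bytes quoted above.
    assert_eq!(rgb8_buffer_size(320, 240), 230_400);
}
```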

The six backends

Backend              What it actually is
chicory              Chicory's pure-Java interpreter
chicory-aot          Chicory + MachineFactoryCompiler.compile(...) at JVM startup
chicory-aot-plugin   Chicory build-time AOT via chicory-compiler-maven-plugin (wasm → JVM .class at mvn compile)
graalwasm            GraalWasm with Truffle JIT enabled (libgraal)
graalwasm-interp     GraalWasm with engine.Compilation=false
native-ffm           Wasmtime/Cranelift in a Rust cdylib, called via Java's FFM API
JVM: Oracle GraalVM 25 (25+37-LTS-jvmci-b01), Apple Silicon. JMH 5×1s warmup + 5×2s measurement, 1 fork, single thread.

Results (µs/op, lower is better)

Backend                                        Mean (µs/op)    vs Wasmtime
nativeFfm — Wasmtime/Cranelift via FFM            971 ± 10         1.00×
graalwasm — GraalWasm Truffle JIT               1,275 ± 332        1.31×
chicoryAot — Chicory runtime AOT                9,037 ± 118        9.31×
chicoryAotPlugin — Chicory build-time AOT       9,198 ± 131        9.47×
graalwasmInterp — GraalWasm Truffle no-JIT     69,992 ± 1,204      72.1×
chicory — Chicory pure interpreter            240,707 ± 2,560       248×
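The "vs Wasmtime" column is each backend's mean divided by the native-FFM baseline; recomputing it from the quoted means is a quick sanity check on the table:

```rust
// Relative slowdown versus the baseline backend.
fn slowdown(mean_us: f64, baseline_us: f64) -> f64 {
    mean_us / baseline_us
}

fn main() {
    let baseline = 971.0; // nativeFfm mean, µs/op
    assert!((slowdown(1_275.0, baseline) - 1.31).abs() < 0.01);   // graalwasm
    assert!((slowdown(9_037.0, baseline) - 9.31).abs() < 0.01);   // chicoryAot
    assert!((slowdown(240_707.0, baseline) - 248.0).abs() < 0.5); // chicory
}
```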

A few things worth pulling out

GraalWasm JIT is almost native. 1.31× of Wasmtime/Cranelift is genuinely good — I expected a bigger gap given that Truffle goes through partial evaluation while Cranelift goes wasm → CLIF → assembly directly. After warmup, libgraal produces code competitive with Cranelift's output for this workload. The ±25% CI on graalwasm is the only weak number here, probably tier-promotion noise that more forks would smooth out.

Build-time vs runtime AOT in Chicory is a wash. 9,037 vs 9,198 µs/op, CIs overlap. They run identical bytecode — Chicory's compiler produces the same .class content whether invoked at mvn compile or at JVM startup. Choose based on deployment story, not perf.

The calibration trap. graalwasm-interp at 70,000 µs/op is what you get on stock OpenJDK without JVMCI / libgraal. Truffle prints exactly one warning at startup and then runs at interpreter speed. If you benchmark GraalWasm on Temurin or Corretto and conclude it's unusable, you're running it without its compiler. The fix on most platforms is to install Oracle GraalVM 25 (or GraalVM CE) — the Graal compiler ships in the JDK and Truffle picks it up automatically. If you can't change vendor, the "jargraal" path with org.graalvm.compiler:compiler + org.graalvm.truffle:truffle-compiler on --upgrade-module-path and -XX:+EnableJVMCI works but is fiddly.

Pure interpreters aren't benchmarks. 248× slower means Chicory's interpreter isn't a viable production path for non-trivial workloads. It's still the right default for "run untrusted user wasm with a 100 ms budget" sandbox scenarios — instant startup, no codegen step.

Bonus silliness

While I had the harness open: I compiled Cranelift's codegen library itself to wasm32-wasip1, AOT'd that 2.7 MB wasm artifact via chicory-compiler-maven-plugin into a JVM .class file, and used the resulting Chicory-hosted, JVM-resident Cranelift to emit native machine code for all six host triples. Output sizes for an add(i32,i32) -> i32 test function:

Triple                       Object bytes   Format
aarch64-apple-darwin              320       Mach-O
aarch64-unknown-linux-gnu         600       ELF
aarch64-pc-windows-msvc           126       COFF
x86_64-apple-darwin               328       Mach-O
x86_64-unknown-linux-gnu          608       ELF
x86_64-pc-windows-msvc            130       COFF

Six of Cranelift's ~4,000 internal functions exceed the JVM's 64 KB method-size limit and fall back to Chicory's interpreter; the rest AOT cleanly into a single 2.6 MB .class. Not (yet) a wasm-to-CLIF translator inside the sandbox — cranelift-wasm was deprecated at 0.112 and the translator now lives inside Wasmtime, so a real wasm-compiling-wasm pipeline would mean pinning to deprecated 0.112 or hand-rolling it on wasmparser. Separate project.

Caveats

One workload (small JPEG, ~1 ms of native CPU), one platform (Apple Silicon, GraalVM 25), one JMH config. These generalize well for "small to medium pure-compute wasm modules that don't touch WASI on the hot path" but will shift for: large modules (GraalWasm setup cost grows with module size), WASI-heavy workloads (host-call cost differs across runtimes), JIT-cold workloads (you're measuring tier-up, not steady state), and other JVMs (J9, Zing not measured).

Harness

Source: https://github.com/minamoto79/webasm-java-integration-benchmark

Switching backends in the harness is two lines of Kotlin — happy to take PRs adding workloads or runtimes I missed (wasmer-java? wazero-on-JVM via JNI? would love numbers on those if anyone has them). And if you're seeing materially different ratios on a different workload or JDK, please post — would help calibrate where these numbers actually generalize.

reddit.com
u/minamoto108 — 13 days ago