---
name: tauri-js-runtime
description: Add JavaScript runtime backend capabilities to Tauri v2 desktop apps. Covers both using the tauri-plugin-js plugin and building from scratch. Use when integrating Bun, Node.js, or Deno as backend processes in Tauri, setting up type-safe RPC between frontend and JS runtimes, creating Electron-like architectures in Tauri, or managing child processes with stdio communication.
version: 1.0.0
license: MIT
metadata:
  domain: desktop-apps
  tags:
    - tauri
    - bun
    - node
    - deno
    - rpc
    - kkrpc
    - electron-alternative
    - process-management
    - compiled-sidecar
---
# Tauri + JS Runtime Integration
Give Tauri apps full JS runtime backends (Bun, Node.js, Deno) with type-safe bidirectional RPC. This covers two approaches: using the tauri-plugin-js plugin, and building the integration from scratch.
## When to Use
- User wants to run JS/TS backend code from a Tauri desktop app
- User asks about Electron alternatives or "Electron-like" features in Tauri
- User needs to spawn/manage child processes (Bun, Node, Deno) from Rust
- User wants type-safe RPC between a Tauri webview and a JS runtime
- User needs stdio-based IPC between Rust and a child process
- User asks about kkrpc integration with Tauri
- User wants multi-window apps where windows share backend processes
- User needs runtime detection (which runtimes are installed, paths, versions)
- User wants to ship a Tauri app without requiring JS runtimes on user machines (compiled sidecars)
- User asks about `bun build --compile` or `deno compile` with Tauri
## Core Architecture

```
Frontend (Webview) <-- Tauri Events --> Rust Core <-- stdio --> JS Runtime
```
- Rust spawns child processes, pipes their stdin/stdout/stderr, and relays data via Tauri events
- Rust never parses RPC payloads — it forwards raw newline-delimited strings
- kkrpc handles the RPC protocol on both ends (frontend webview + backend runtime)
- Frontend IO adapter bridges Tauri events to kkrpc's IoInterface (read/write/on/off)
- Multi-window works because all windows receive the same Tauri events; kkrpc request IDs handle routing
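The framing and routing contract can be sketched in isolation. The function and ID names below are illustrative only, and this is not kkrpc's actual wire format; the point is the idea the architecture relies on: one JSON object per line, with the request ID carried back in the response so concurrent windows can share a single stdio stream.

```typescript
// A window frames a request as one JSON object per line.
function frameRequest(id: string, method: string, args: number[]): string {
  return JSON.stringify({ id, method, args }) + "\n";
}

// Rust forwards raw lines verbatim; splitting on "\n" recovers frames.
function splitFrames(chunk: string): string[] {
  return chunk.split("\n").filter((line) => line.length > 0);
}

// The backend answers each frame, echoing the id so the response can be
// routed back to whichever window issued the request.
function handleFrame(line: string): string {
  const { id, args } = JSON.parse(line) as { id: string; args: number[] };
  const result = args.reduce((a, b) => a + b, 0);
  return JSON.stringify({ id, result }) + "\n";
}

// Two windows issue "add" requests on the same stream:
const stream =
  frameRequest("win1-1", "add", [5, 3]) + frameRequest("win2-1", "add", [10, 20]);

const results: Record<string, number> = {};
for (const line of splitFrames(stream)) {
  const { id, result } = JSON.parse(handleFrame(line));
  results[id] = result; // the id routes each response to the right caller
}
```

Because responses carry the request ID, it does not matter that every window receives every event; each caller only resolves its own pending requests.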
## Approach A: Using tauri-plugin-js (Recommended)
The plugin handles process management, stdio relay, event emission, and provides a frontend npm package with typed wrappers and an IO adapter.
### Step 1: Install

Rust — add to `src-tauri/Cargo.toml`:

```toml
[dependencies]
tauri-plugin-js = "0.1"
```

Frontend — install npm packages:

```bash
pnpm add tauri-plugin-js-api kkrpc
```
### Step 2: Register the plugin

In `src-tauri/src/lib.rs`:

```rust
pub fn run() {
    tauri::Builder::default()
        .plugin(tauri_plugin_js::init())
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```
### Step 3: Add permissions

In `src-tauri/capabilities/default.json`:

```json
{
  "permissions": [
    "core:default",
    "js:default"
  ]
}
```
### Step 4: Define a shared API type

Create a type definition shared between frontend and backend workers:

```typescript
// backends/shared-api.ts
export interface BackendAPI {
  add(a: number, b: number): Promise<number>;
  echo(message: string): Promise<string>;
  getSystemInfo(): Promise<{
    runtime: string;
    pid: number;
    platform: string;
    arch: string;
  }>;
}
```
### Step 5: Write backend workers
Each runtime has its own IO adapter from kkrpc:
**Bun** (`backends/bun-worker.ts`):

```typescript
import { RPCChannel, BunIo } from "kkrpc";
import type { BackendAPI } from "./shared-api";

const api: BackendAPI = {
  async add(a, b) { return a + b; },
  async echo(msg) { return `[bun] ${msg}`; },
  async getSystemInfo() {
    return { runtime: "bun", pid: process.pid, platform: process.platform, arch: process.arch };
  },
};

const io = new BunIo(Bun.stdin.stream());
const channel = new RPCChannel(io, { expose: api });
```
**Node** (`backends/node-worker.mjs`):

```javascript
import { RPCChannel, NodeIo } from "kkrpc";

const api = {
  async add(a, b) { return a + b; },
  async echo(msg) { return `[node] ${msg}`; },
  async getSystemInfo() {
    return { runtime: "node", pid: process.pid, platform: process.platform, arch: process.arch };
  },
};

const io = new NodeIo(process.stdin, process.stdout);
const channel = new RPCChannel(io, { expose: api });
```
**Deno** (`backends/deno-worker.ts`):

```typescript
import { DenoIo, RPCChannel } from "npm:kkrpc/deno";
import type { BackendAPI } from "./shared-api.ts"; // .ts extension required by Deno

const api: BackendAPI = {
  async add(a, b) { return a + b; },
  async echo(msg) { return `[deno] ${msg}`; },
  async getSystemInfo() {
    return { runtime: "deno", pid: Deno.pid, platform: Deno.build.os, arch: Deno.build.arch };
  },
};

const io = new DenoIo(Deno.stdin.readable);
const channel = new RPCChannel(io, { expose: api });
```
### Step 6: Frontend — spawn and call

```typescript
import { spawn, createChannel, onStdout, onStderr, onExit } from "tauri-plugin-js-api";
import { resolve } from "@tauri-apps/api/path";
import type { BackendAPI } from "../backends/shared-api";

// Spawn
const cwd = await resolve("..", "backends");
await spawn("my-worker", { runtime: "bun", script: "bun-worker.ts", cwd });

// Events
onStdout("my-worker", (data) => console.log(data));
onStderr("my-worker", (data) => console.error(data));
onExit("my-worker", (code) => console.log("exited", code));

// Type-safe RPC
const { api } = await createChannel<Record<string, never>, BackendAPI>("my-worker");
const result = await api.add(5, 3); // compile-time checked
```
### Step 7: Compiled binary sidecars (no runtime on user machine)
Both Bun and Deno can compile TS workers into standalone executables. The compiled binaries preserve stdin/stdout behavior, so kkrpc works unchanged.
Compile with target triple suffix:

```bash
TARGET=$(rustc -vV | grep host | cut -d' ' -f2)

# Bun — compile directly from the project directory
bun build --compile --minify backends/bun-worker.ts --outfile src-tauri/binaries/bun-worker-$TARGET

# Deno — MUST compile from a separate Deno package (see pitfall #8 below)
deno compile --allow-all --output src-tauri/binaries/deno-worker-$TARGET path/to/deno-package/main.ts
```
Configure Tauri to bundle sidecars in `src-tauri/tauri.conf.json`:

```json
{
  "bundle": {
    "externalBin": ["binaries/bun-worker", "binaries/deno-worker"]
  }
}
```
Tauri automatically appends the current platform's triple when resolving externalBin paths, so the binary is included in the app bundle and runs on the user's machine without any runtime installed.
Spawn with `sidecar` instead of `runtime`:

```typescript
import { spawn, createChannel } from "tauri-plugin-js-api";
import type { BackendAPI } from "../backends/shared-api";

await spawn("compiled-worker", { sidecar: "bun-worker" });

// RPC works identically
const { api } = await createChannel<Record<string, never>, BackendAPI>("compiled-worker");
await api.add(5, 3); // => 8
```
Key points:

- `config.sidecar` resolves the binary via Tauri's sidecar mechanism — it looks next to the app executable, trying both the plain name (production) and `{name}-{triple}` (development)
- The same worker TS source compiles into a binary that runs identically to the runtime-based version
- `getSystemInfo()` still reports `runtime: "bun"` or `runtime: "deno"` — the runtime is embedded in the binary
- No filesystem path resolution is needed on the frontend — just pass the sidecar name
### Step 8: Runtime detection (optional)

```typescript
import { detectRuntimes, setRuntimePath } from "tauri-plugin-js-api";

const runtimes = await detectRuntimes();
// [{ name: "bun", available: true, version: "1.2.0", path: "/usr/local/bin/bun" }, ...]

// Override path for a specific runtime
await setRuntimePath("node", "/custom/path/to/node");
```
## Plugin API Summary

| Command | Description |
|---|---|
| `spawn(name, config)` | Start a named process |
| `kill(name)` | Kill by name |
| `killAll()` | Kill all |
| `restart(name, config?)` | Restart with optional new config |
| `listProcesses()` | List running processes |
| `getStatus(name)` | Get process status |
| `writeStdin(name, data)` | Write raw string to stdin |
| `detectRuntimes()` | Detect bun/node/deno availability |
| `setRuntimePath(rt, path)` | Set custom executable path |
| `getRuntimePaths()` | Get custom path overrides |
| Event | Payload |
|---|---|
| `js-process-stdout` | `{ name: string, data: string }` |
| `js-process-stderr` | `{ name: string, data: string }` |
| `js-process-exit` | `{ name: string, code: number \| null }` |
## Approach B: Building from Scratch
When you need full control or a different architecture (e.g., single shared process instead of named processes, Tauri event relay instead of direct stdio, or SvelteKit/other frameworks).
### Step 1: Rust — spawn and relay

Add tokio to `src-tauri/Cargo.toml`:

```toml
[dependencies]
tokio = { version = "1", features = ["process", "io-util", "sync", "rt"] }
```
Core Rust pattern in `src-tauri/src/lib.rs`:

```rust
use std::sync::Arc;
use tauri::{async_runtime, AppHandle, Emitter, Listener, Manager};
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, ChildStdin, Command};
use tokio::sync::Mutex;

struct ProcessState {
    child: Child,
    stdin: ChildStdin,
}

struct AppState {
    process: Arc<Mutex<Option<ProcessState>>>,
}

fn spawn_runtime(app: &AppHandle) -> Result<ProcessState, String> {
    let mut cmd = Command::new("bun");
    cmd.args(["src/backend/main.ts"]);
    cmd.stdin(std::process::Stdio::piped());
    cmd.stdout(std::process::Stdio::piped());
    cmd.stderr(std::process::Stdio::piped());

    let mut child = cmd.spawn().map_err(|e| e.to_string())?;
    let stdin = child.stdin.take().ok_or("no stdin")?;
    let stdout = child.stdout.take().ok_or("no stdout")?;
    let stderr = child.stderr.take().ok_or("no stderr")?;

    // Relay stdout to all frontend windows via Tauri events
    let handle = app.clone();
    async_runtime::spawn(async move {
        let reader = BufReader::new(stdout);
        let mut lines = reader.lines();
        while let Ok(Some(line)) = lines.next_line().await {
            let _ = handle.emit("runtime-stdout", &line);
        }
    });

    // Relay stderr
    let handle2 = app.clone();
    async_runtime::spawn(async move {
        let reader = BufReader::new(stderr);
        let mut lines = reader.lines();
        while let Ok(Some(line)) = lines.next_line().await {
            eprintln!("[runtime stderr] {}", line);
            let _ = handle2.emit("runtime-stderr", &line);
        }
    });

    Ok(ProcessState { child, stdin })
}
```
Listen for frontend-to-runtime messages:

```rust
// In the .setup() closure; state_clone is a clone of the
// Arc<Mutex<Option<ProcessState>>> captured before the listener.
app.listen("frontend-to-runtime", move |event| {
    let payload = event.payload().to_string();
    let state = state_clone.clone();
    async_runtime::spawn(async move {
        let mut guard = state.lock().await;
        if let Some(ref mut proc) = *guard {
            let msg: String = serde_json::from_str(&payload).unwrap_or(payload);
            let mut to_write = msg;
            if !to_write.ends_with('\n') {
                to_write.push('\n');
            }
            let _ = proc.stdin.write_all(to_write.as_bytes()).await;
            let _ = proc.stdin.flush().await;
        }
    });
});
```
### Step 2: Frontend IO adapter

Bridge Tauri events to kkrpc's `IoInterface`:

```typescript
import { emit, listen, type UnlistenFn } from "@tauri-apps/api/event";

export class TauriEventIo {
  name = "tauri-event-io";
  isDestroyed = false;
  private listeners: Set<(msg: string) => void> = new Set();
  private queue: string[] = [];
  private pendingReads: Array<(value: string | null) => void> = [];
  private unlisten: UnlistenFn | null = null;

  async initialize(): Promise<void> {
    this.unlisten = await listen<string>("runtime-stdout", (event) => {
      // CRITICAL: re-append \n that BufReader::lines() strips
      const message = event.payload + "\n";
      for (const listener of this.listeners) listener(message);
      if (this.pendingReads.length > 0) {
        this.pendingReads.shift()!(message);
      } else {
        this.queue.push(message);
      }
    });
  }

  async read(): Promise<string | null> {
    if (this.isDestroyed) return new Promise(() => {}); // hang, don't spin
    if (this.queue.length > 0) return this.queue.shift()!;
    return new Promise((resolve) => this.pendingReads.push(resolve));
  }

  async write(message: string): Promise<void> {
    await emit("frontend-to-runtime", message);
  }

  on(event: "message" | "error", listener: (msg: string) => void) {
    if (event === "message") this.listeners.add(listener);
  }

  off(event: "message" | "error", listener: Function) {
    if (event === "message") this.listeners.delete(listener as any);
  }

  destroy() {
    this.isDestroyed = true;
    this.unlisten?.();
    this.pendingReads.forEach((r) => r(null));
    this.pendingReads = [];
    this.queue = [];
    this.listeners.clear();
  }
}
```
### Step 3: Connect kkrpc

```typescript
import { RPCChannel } from "kkrpc/browser";
import type { BackendAPI } from "../backend/types";

const io = new TauriEventIo();
await io.initialize();

const channel = new RPCChannel<{}, BackendAPI>(io, { expose: {} });
const api = channel.getAPI() as BackendAPI;

// Type-safe calls
const result = await api.add(5, 3);
```
### Step 4: Clean shutdown

```rust
// In the .build().run() callback:
.run(move |app_handle, event| {
    if let RunEvent::ExitRequested { .. } = &event {
        let state = app_handle.state::<AppState>();
        let proc = state.process.clone();
        async_runtime::block_on(async {
            let mut guard = proc.lock().await;
            if let Some(mut proc) = guard.take() {
                drop(proc.stdin); // drop stdin first
                let _ = proc.child.kill().await;
                let _ = proc.child.wait().await;
            }
        });
    }
});
```
### Step 5: Capabilities for multi-window

```json
{
  "windows": ["main", "window-*"],
  "permissions": [
    "core:default",
    "core:event:default",
    "core:webview:allow-create-webview-window"
  ]
}
```
Note: `WebviewWindow` from `@tauri-apps/api/webviewWindow` requires `core:webview:allow-create-webview-window`, not `core:window:allow-create`.
## Critical Pitfalls
### 1. Newline framing

Rust's `BufReader::lines()` strips the trailing `\n`. kkrpc (and every newline-delimited JSON protocol) needs `\n` to delimit messages. The frontend IO adapter MUST re-append `\n` to every payload received from Tauri events.
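A few lines reproduce the failure mode (the payload values here are made up for illustration): without the re-appended newline, consecutive payloads fuse into a blob that no longer parses.

```typescript
// Payloads as delivered by Tauri events: BufReader::lines() on the Rust
// side has already stripped the trailing "\n".
const payloadsFromTauri = ['{"id":1,"result":8}', '{"id":2,"result":9}'];

// Broken: concatenating stripped payloads yields one unparseable blob.
let brokenParses = true;
try {
  JSON.parse(payloadsFromTauri.join(""));
} catch {
  brokenParses = false;
}

// Fixed: re-append "\n" before handing each payload to the
// newline-delimited protocol; splitting on "\n" then recovers messages.
const framed = payloadsFromTauri.map((p) => p + "\n").join("");
const messages = framed
  .split("\n")
  .filter((line) => line.length > 0)
  .map((line) => JSON.parse(line));
```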
### 2. kkrpc read loop spin

kkrpc's internal `listen()` loop continues on `null` reads — it only stops if the IO adapter has `isDestroyed === true`. If `read()` returns `null` without `isDestroyed` being set, the loop spins at 100% CPU. Solution: `read()` should return a never-resolving promise when destroyed, and the adapter should expose `isDestroyed`.
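The safe contract can be demonstrated standalone. `SafeReader` below is a hypothetical minimal adapter, not the plugin's class; it shows only the queue-plus-park pattern the pitfall calls for.

```typescript
// read() resolves from a queue while alive; after destroy() it returns a
// promise that never settles, so a naive `while (await read())` consumer
// parks instead of spinning at 100% CPU on repeated nulls.
class SafeReader {
  isDestroyed = false;
  private queue: string[] = [];
  private pendingReads: Array<(v: string | null) => void> = [];

  push(msg: string) {
    const waiter = this.pendingReads.shift();
    if (waiter) waiter(msg);
    else this.queue.push(msg);
  }

  async read(): Promise<string | null> {
    if (this.isDestroyed) return new Promise(() => {}); // park forever, never spin
    const queued = this.queue.shift();
    if (queued !== undefined) return queued;
    return new Promise((resolve) => this.pendingReads.push(resolve));
  }

  destroy() {
    this.isDestroyed = true;
    this.pendingReads.forEach((r) => r(null)); // unblock in-flight reads once
    this.pendingReads = [];
  }
}
```

A read issued after `destroy()` simply never settles, which is what keeps the read loop from busy-waiting.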
### 3. Channel cleanup

Call `channel.destroy()` (not just `io.destroy()`) to properly reject pending RPC promises. The channel's `destroy()` calls `io.destroy()` internally.
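A toy model shows why the channel-level destroy matters (`ModelChannel` is illustrative, not kkrpc's implementation): pending request promises live at the channel layer, so tearing down only the IO adapter strands every in-flight call.

```typescript
// Each outgoing call parks a promise in a pending map keyed by request
// id. channel.destroy() rejects them all, so awaiting callers fail fast
// instead of hanging forever.
class ModelChannel {
  private pending = new Map<number, (err: Error) => void>();
  private nextId = 0;

  // Issue a call whose response will never arrive in this model.
  call(): Promise<never> {
    const id = this.nextId++;
    return new Promise((_resolve, reject) => this.pending.set(id, reject));
  }

  destroy() {
    for (const reject of this.pending.values()) {
      reject(new Error("channel destroyed"));
    }
    this.pending.clear();
    // A real channel would also tear down its IO adapter here.
  }
}
```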
### 4. Mutex contention in Rust

The Tauri event listener for stdin writes and the kill/restart commands both need the process mutex. Take the process handle out of the lock scope before kill/wait. Drop stdin first to unblock pending writes.
### 5. Tauri event serialization

Tauri events serialize payloads as JSON strings. When the Rust event listener receives a message to forward to stdin, it may need to deserialize the outer JSON string wrapper: `serde_json::from_str::<String>(&payload)`.
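The double serialization is easy to reproduce (the message values are illustrative): what the listener receives is the JSON encoding of the string the frontend emitted, so one string-level parse, the TypeScript analogue of `serde_json::from_str::<String>`, unwraps it.

```typescript
// The frontend emits an RPC message string...
const rpcMessage = '{"id":1,"method":"add","args":[5,3]}';

// ...but Tauri serializes event payloads as JSON, so the listener sees
// a quoted, escaped string rather than the message itself:
const wirePayload = JSON.stringify(rpcMessage);

// One unwrap (serde_json::from_str::<String> on the Rust side) recovers
// the original message to forward to the child's stdin:
const unwrapped: string = JSON.parse(wirePayload);
```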
### 6. Vite pre-bundle cache

When using the plugin with `file:` dependency links, Vite caches the pre-bundled version. After rebuilding the plugin's guest-js, delete `node_modules/.vite` in the consuming app and run `pnpm install` to pick up new exports.
### 7. Deno imports

Deno workers must use `npm:kkrpc/deno` as the import specifier and `.ts` file extensions for local imports (e.g., `./shared-api.ts`).
### 8. deno compile and node_modules

`deno compile` will crash with a stack overflow if run from a directory that contains `node_modules` — Deno attempts to traverse and compile the entire directory tree. Deno worker source must live in a separate directory that is set up as its own Deno package, with a `deno.json` declaring its dependencies (e.g., kkrpc).
Example setup:

```
examples/
  deno-compile/        # Separate Deno package — no node_modules here
    deno.json          # { "imports": { "kkrpc/deno": "npm:kkrpc/deno" } }
    main.ts            # Deno worker source
    shared-api.ts      # Type definitions (copy from backends/)
  tauri-app/
    backends/          # Contains node_modules from npm
      deno-worker.ts   # Used for dev mode (deno run), NOT for deno compile
```

Run `deno install` in the Deno package directory to cache dependencies before compiling. The build script should compile from the separate directory:

```bash
deno compile --allow-all --output src-tauri/binaries/deno-worker-$TARGET ../deno-compile/main.ts
```
### 9. Dev vs Prod mode

In dev mode, spawn runtimes directly (`bun script.ts`) or use compiled binaries via `config.sidecar` (see Step 7 above). In production, consider:

- **Compiled sidecar (recommended):** `bun build --compile` / `deno compile` produces a standalone binary — use `config.sidecar` to spawn it and Tauri's `externalBin` to bundle it. No runtime needed on user machines.
- **Bundled JS scripts:** Worker scripts import `kkrpc`, which needs `node_modules`. Bundle them first with `bun build --target bun/node` to inline dependencies, then add them as Tauri resources via `bundle.resources`. Resolve at runtime with `resolveResource()` from `@tauri-apps/api/path`.
- The Rust code checks for a sidecar first, then falls back to bundled JS with a system runtime.
## Production Deployment
### Option 1: Compiled sidecar (no runtime needed on user machine)

```bash
TARGET=$(rustc -vV | grep host | cut -d' ' -f2)
bun build --compile src/backend/main.ts --outfile src-tauri/binaries/backend-$TARGET
```

Add to `tauri.conf.json`:

```json
{ "bundle": { "externalBin": ["binaries/backend"] } }
```

Spawn with:

```typescript
await spawn("backend", { sidecar: "backend" });
```
### Option 2: Bundled JS as resource (requires runtime on user machine)

Bundle the worker to inline dependencies (kkrpc):

```bash
bun build backends/worker.ts --target bun --outfile src-tauri/workers/worker.js
```

Add to `tauri.conf.json`:

```json
{ "bundle": { "resources": { "workers/worker.js": "workers/worker.js" } } }
```

Spawn with:

```typescript
import { resolveResource } from "@tauri-apps/api/path";

const script = await resolveResource("workers/worker.js");
await spawn("worker", { runtime: "bun", script });
```
## References

- kkrpc — cross-runtime RPC library
- Tauri v2 Plugin Guide
- Tauri v2 Capabilities