Interop Should Be the New Rewrite
The Future of Language Interop: Rust as the Performance Core
Modern software is increasingly multilingual, not in syntax, but in runtime needs.
We write applications in high-level languages like JavaScript and Python for speed of development, but we rely on systems languages like Rust for performance, safety, and control. The future is not about replacing high-level languages; it’s about composing them with low-level ones correctly.
This is where language interoperability (interop) becomes the most important technical problem of the next decade.
Why Interop Matters Now
There is a growing industry-wide push away from writing code in unsafe languages like C/C++ in favor of modern, safer languages like Rust. This shift has been driven by rising concerns around memory safety, security vulnerabilities, and long-term maintainability.
Rust’s influence has become difficult to overstate. Memory safety has caught the attention of governments in the EU and the US, fearless concurrency matches today’s demand for parallel systems, and developers consistently report better tooling, reliability, and productivity.
According to JetBrains’ 2025 Rust report, Rust adoption is accelerating far beyond systems programming into backend services, infrastructure, and tooling. Rust is no longer just “fast C++”; it’s becoming a foundational layer for modern stacks.
At the same time:
- JavaScript dominates application logic
- Python dominates data and ML
- But both hit performance ceilings
Interop allows us to:
- Keep developer ergonomics
- While pushing heavy computation into native code
FFI: The Real Bridge Between Languages
At the heart of interop is FFI (Foreign Function Interface).
FFI is how one language calls into another language’s compiled code. It’s not about syntax; it’s about:
- Memory layout
- ABI compatibility
- Ownership and lifetime rules
- Type translation
Traditionally, FFI was painful and unsafe (C headers, manual bindings, undefined behavior). Rust changes this by providing:
- Strong typing
- Explicit ownership
- Compile-time guarantees
Rust is increasingly being used as an FFI-safe core, not just an application language.
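As a minimal sketch (the function name and logic are hypothetical, chosen only to show the boundary mechanics), an FFI-safe Rust surface exported over the C ABI looks like this:

```rust
// Exporting Rust over the C ABI, the lowest common denominator
// that virtually every host language can call.
// #[no_mangle] keeps the symbol name stable; extern "C" pins the
// calling convention.

/// # Safety
/// `ptr` must point to `len` valid, initialized bytes.
#[no_mangle]
pub unsafe extern "C" fn checksum(ptr: *const u8, len: usize) -> u32 {
    let bytes = std::slice::from_raw_parts(ptr, len);
    bytes.iter().map(|&b| u32::from(b)).sum()
}
```

Notice what the signature forces: raw pointers make the unsafe contract explicit at the boundary. Binding generators like napi-rs exist precisely to automate this layer away.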
Node.js + Rust: Why napi-rs Exists
JavaScript runs on a VM (V8). Rust compiles to native machine code.
They cannot talk directly.
Node.js solves this via Node-API (N-API): a stable ABI that allows native modules to interact with JavaScript without depending on V8 internals.
napi-rs builds on this idea.
What napi-rs actually does
- Lets you write Rust functions
- Compiles them into native binaries
- Exposes them to Node.js safely
- Automatically generates TypeScript types
You write Rust once. Node consumes it as if it were JavaScript.
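A minimal sketch of what that looks like, assuming napi-rs v2 and its derive macro (the function itself is hypothetical):

```rust
// lib.rs: the crate is built as a cdylib, with the `napi` and
// `napi-derive` dependencies (and the napi CLI driving the build).
use napi_derive::napi;

// #[napi] generates all the N-API glue: argument conversion, error
// propagation, and the TypeScript declaration for this function.
#[napi]
pub fn fibonacci(n: u32) -> u32 {
    match n {
        0 | 1 => n,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}
```

From Node, this is just an import: the generated declaration is roughly export function fibonacci(n: number): number, and calling it crosses into native code transparently.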
What Gets Built Under the Hood
A napi-rs project is still a Rust crate, built by Cargo.
During build:
- Rust compiles your code as a cdylib
- Cargo produces platform-native binaries: .so (Linux), .dylib (macOS), .dll (Windows)
- These are packaged as .node files (Node’s native module format)
- napi-rs generates a matching .d.ts file for TypeScript
- A JS loader resolves the correct binary at runtime
Result:
Rust (source of truth)
↓
Native binary (.node)
↓
JS runtime + TypeScript types
The binary is never lost.
TypeScript is just a projection.
Cross-Platform Without Recompiling
One of the hardest problems in native interop is distribution.
napi-rs solves this by:
- Building binaries for each target (Linux, macOS, Windows)
- Publishing them as part of the package
- Letting the runtime load the correct one automatically
This avoids:
- User-side compilation
- Toolchain mismatches
- Node version breakage
This is critical for real-world adoption.
Type Generation Is Not an Afterthought
A key insight of modern interop tooling is:
Types are part of the API contract, not documentation
napi-rs inspects Rust function signatures and maps them to TypeScript:
- String → string
- Vec<u8> → Uint8Array
- Option<T> → T | null
This keeps:
- Rust as the canonical API
- TypeScript always in sync
- No handwritten bindings
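A sketch of the projection in practice; the TypeScript shown in the comments is paraphrased rather than verbatim generator output:

```rust
use napi_derive::napi;

#[napi]
pub fn greet(name: String) -> String {
    // → export function greet(name: string): string
    format!("Hello, {name}!")
}

#[napi]
pub fn find_port(service: String) -> Option<u32> {
    // → export function findPort(service: string): number | null
    // (snake_case is projected to camelCase by default)
    if service == "http" { Some(80) } else { None }
}
```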
The Hidden Cost of Interop
Type Mismatch
Interop brings many advantages, but the point where most teams hesitate is type mismatch.
Every interop approach involves tradeoffs in how types are represented across language boundaries. These tradeoffs can quietly weaken the guarantees that made the original system attractive in the first place.
Tools like napi-rs address this by enforcing a strict separation between the Rust side and the TypeScript side. Rust remains the canonical source of truth, while TypeScript types are generated as a projection, not a redefinition.
This distinction matters because Rust’s type system is inherently expressive. Ownership, lifetimes, borrowing, and thread-safety guarantees cannot be fully encoded in most host languages. The core risk is not that mismatches exist, but that they are discovered late, often at runtime or during integration testing.
Any interop design must therefore consider not just how types are translated, but when mismatches surface. Preserving Rust’s strengths means pushing failure as early as possible in the development lifecycle.
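napi-rs offers one concrete illustration of failing early (a sketch, using its standard Result and Error types): a fallible function declares its failure mode in the signature, and the binding layer turns an Err into a thrown JavaScript exception.

```rust
use napi::bindgen_prelude::*;
use napi_derive::napi;

// The Result in the signature makes failure part of the generated
// contract: on Err, the binding throws a JS exception rather than
// silently returning a malformed value.
#[napi]
pub fn parse_port(input: String) -> Result<u32> {
    input
        .parse::<u32>()
        .map_err(|e| Error::from_reason(e.to_string()))
}
```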
Monorepo Ambiguity
Rust itself is monorepo-friendly, with workspace support and clear crate boundaries. In practice, however, polyglot monorepos often contain:
- Rust crates under crates/
- TypeScript packages under packages/
- Python modules elsewhere
While this structure works, it introduces ambiguity around how runtime boundaries, build steps, and API contracts align across languages. Each organization tends to invent its own conventions, and there is little standardization around how interop-focused monorepos should be structured.
As systems grow, this ambiguity becomes a source of friction, especially when APIs evolve independently across languages.
Fragmentation Across Ecosystems
The primary challenge is not re-exporting functionality for multiple languages; it is that each ecosystem requires a different structural model.
For example:
- napi-rs relies on Rust attributes and Node-specific conventions
- JVM interop introduces an entirely different toolchain and mental model
- Python bindings come with their own packaging and lifecycle constraints
In practice, this often means:
- Writing a shared Rust core
- Implementing separate bindings per ecosystem
- Building, testing, and shipping each variant independently
This creates an additional interop layer that must be maintained alongside the product itself, increasing long-term complexity and coordination costs.
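A sketch of that layering: one pure-Rust core crate, plus thin binding crates per ecosystem. Crate and module names here are hypothetical, and the PyO3 side assumes its recent Bound API:

```rust
// crates/mylib-core/src/lib.rs: pure Rust, no binding attributes.
pub fn checksum(data: &[u8]) -> u32 {
    data.iter().map(|&b| u32::from(b)).sum()
}

// crates/node-binding/src/lib.rs: napi-rs projection for Node.js.
use napi::bindgen_prelude::Buffer;
use napi_derive::napi;

#[napi]
pub fn checksum(data: Buffer) -> u32 {
    mylib_core::checksum(&data)
}

// crates/py-binding/src/lib.rs: PyO3 projection for Python.
use pyo3::prelude::*;

#[pyfunction]
fn checksum(data: &[u8]) -> u32 {
    mylib_core::checksum(data)
}

#[pymodule]
fn mylib(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(checksum, m)?)
}
```

The core stays binding-free and fully testable in plain Rust; each projection is a thin, mechanical layer, which is exactly where the per-ecosystem maintenance cost concentrates.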
A Possible Direction Forward
One emerging idea is to treat Rust itself as an interface definition layer, rather than maintaining separate contracts per language.
In this model:
- Rust defines the canonical API surface
- Other languages consume generated, typed projections
- Ownership and safety rules are encoded once, at the source
- Monorepo structure and build coordination become explicit, not ad hoc
This approach does not replace existing tools like napi-rs or PyO3. Instead, it builds on their lessons, aiming to reduce duplication and fragmentation as systems scale.
Interop, in this sense, becomes infrastructure, designed to grow with the codebase rather than fight against it.
Memory Safety: Because Who Doesn't Love Fixing Bugs in Prod?
Large-scale security studies show that vulnerabilities decay exponentially over time. Most vulnerabilities exist in new or recently modified code, not in mature codebases.
This leads to two key insights:
- The problem is overwhelmingly with new code
- Rewriting old code offers diminishing returns over time
For example, 5-year-old code can have a 3.4x to 7.4x lower vulnerability density than newly written code.
This challenges the long-held belief that rewriting everything is the only path to safety. Instead:
Interop is the new rewrite.
By creating a language boundary and writing new components in Rust, teams can achieve similar security benefits without the massive cost of full rewrites.
The "Unstoppable Forces" of Language Adoption (a.k.a., Good Luck Changing Anything)
Language adoption follows three primary axes:
- Social – developer motivation
- Economic – resource tradeoffs
- Technical – what is feasible
In some environments, Rust adoption is feasible:
Feasible: New Rust-First Codebases
Greenfield projects or full rewrites where Rust-only dependencies are viable.
Feasible: Interprocess Boundaries
Microservices or IPC-based systems allow incremental Rust adoption.
Feasible: Small Intraprocess APIs
When a small C ABI surface exists, Rust can be integrated manually.
However, many environments lack these natural boundaries.
Infeasible: Breaking Out of Well-Established Ecosystems
Rust adoption becomes extremely difficult in ecosystems dominated by large, mature C++ codebases.
Examples include:
- Gaming (Unreal, Unity)
- Robotics (ROS)
- Compiler backends (LLVM, MLIR)
- Large monorepos (e.g. Google)
- Embedded systems
These projects rely on rich C++ APIs, templates, inheritance, CRTP, and massive dependency graphs.
Rewriting is infeasible.
Bindings are expensive to maintain.
Interop tooling struggles with language mismatches.
Case Study: The Rust Compiler
Rust itself relies on hand-maintained bindings to LLVM’s C++ APIs.
A small working group maintains this bridge.
Without them, Rust would not exist.
Even so, Rust cannot fully leverage MLIR or LLVM’s advanced pass infrastructure due to:
- Heavy template usage
- Deep inheritance hierarchies
- C++-specific design patterns
This demonstrates the true interop challenge:
It’s not just calling functions;
it’s reconciling two language philosophies.
Beyond Node: Rust as an Interop Language
There is growing momentum to treat Rust as:
- A language-neutral core
- A safer replacement for C/C++
- A generator of bindings for many ecosystems
The same Rust code can power:
- Node.js (via N-API)
- Python (via C-ABI or PyO3)
- Kotlin (via JNI)
- Dart (via FFI)
Rust becomes the shared execution layer.
Interop Is Paramount for Brownfield Adoption
Interoperability with other languages is of paramount importance for Rust adoption in the industry, especially on brownfield projects. Network, FFI, WASM: each can be chosen to fit the use case and minimize friction.
This highlights something crucial: Rust adoption isn't just about greenfield projects. Most real-world software lives in brownfield environments: existing codebases, legacy systems, and established ecosystems. The key insight here is that Rust doesn't need to replace everything. It needs to integrate with everything.
The Three Interop Paths
There are three primary interop strategies:
1. Network-based interop
- Rust runs as a separate service or microservice
- Communication happens over HTTP, gRPC, or message queues
- Zero coupling at the language level
- Best for: Large-scale systems, independent services, when you need process isolation
2. FFI (Foreign Function Interface)
- Direct function calls across language boundaries
- Shared memory, same process
- Best for: Performance-critical paths, tight integration, when latency matters
- This is what napi-rs provides for Node.js
3. WASM (WebAssembly)
- Rust compiles to WASM, runs in a sandbox
- Portable across platforms and languages
- Best for: Browser integration, plugin systems, when you need isolation without separate processes (see the sketch after this list)
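To make the WASM path concrete, here is a minimal wasm-bindgen sketch (assuming the wasm32-unknown-unknown target and the wasm-bindgen crate; the function is hypothetical):

```rust
use wasm_bindgen::prelude::*;

// Exposed to JavaScript through wasm-bindgen's generated glue; the
// same compiled module runs in browsers, Node, or other WASM hosts.
#[wasm_bindgen]
pub fn word_count(text: &str) -> u32 {
    text.split_whitespace().count() as u32
}
```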
Why This Matters for Brownfield Projects
Brownfield projects have constraints that greenfield projects don't:
- Existing codebases you can't rewrite
- Established patterns teams are familiar with
- Legacy dependencies that must continue working
- Risk tolerance that favors incremental change
In these environments, you can't just say "let's rewrite everything in Rust." You need to:
- Incrementally adopt Rust for new features
- Integrate with existing code without breaking it
- Choose the right interop strategy for each use case
- Minimize friction for developers who aren't Rust experts
This is why having multiple interop options matters. Network interop might work for a new microservice, FFI might work for a performance-critical library, and WASM might work for a browser component. The flexibility to choose minimizes the friction of adoption.
The Friction Problem
"Minimizing friction" is the key phrase here. Every interop boundary adds:
- Cognitive overhead (understanding two languages)
- Build complexity (managing multiple toolchains)
- Runtime costs (serialization, marshalling, context switching)
- Debugging challenges (tracing across language boundaries)
The best interop tooling doesn't just make it possible to call Rust from other languages; it makes it easy. Type generation, automatic bindings, and clear error messages all reduce friction.
This is why tools like napi-rs matter. They don't just provide FFI; they provide ergonomic FFI that feels native to the host language.
Rust as the New IDL
Historically, we used:
- C headers
- Protobuf
- Thrift
- OpenAPI
Now, Rust itself is becoming the interface definition language:
- Structs define data contracts
- Traits define behavior
- The compiler enforces correctness
Other languages consume projections of Rust APIs.
Instead of “Rust binding to JS”, we get “JS projecting Rust”.
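A sketch of what “structs define data contracts” means through napi-rs (the projected interface is paraphrased):

```rust
use napi_derive::napi;

// The Rust struct is the canonical contract; the binding layer
// projects it into the host language as a plain object type.
#[napi(object)]
pub struct User {
    pub id: u32,
    pub name: String,
    pub email: Option<String>,
}
// Paraphrased TypeScript projection:
// export interface User { id: number; name: string; email?: string | null }
```

The compiler, not a handwritten IDL file, enforces that every projection stays consistent with this definition.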
The Direction We’re Heading
The future stack:
- Rust for performance-critical logic
- High-level languages for orchestration and UX
- Tooling that hides the boundary
- Cargo and compilers as SDK generators
Interop becomes infrastructure, not a hack.
Final Thought
Rust is not replacing JavaScript or Python.
Rust is becoming the engine they rely on.
And tools like napi-rs signal a larger shift:
one high-performance core, many language frontends
That is the future of language interoperability.