Making React Server Components "Stateful": Trial #1
Before talking about "stateful RSC," it's important to be clear about what React Server Components actually are — and what they are not.
React Server Components are not components that live on the server. They are not mounted, they are not hydrated, and they do not persist between navigations. Instead, RSC is best understood as a serialization and streaming model for UI.
At its core, RSC turns a React tree into data, streams that data over HTTP, and lets different runtimes reconstruct it for different purposes.
You can roughly model RSC as:
(request + data sources) → serialized UI stream
That framing is critical for everything that follows.
RSC as a Streaming Protocol (Not a Runtime)
When an RSC request is executed, the server does not produce HTML.
Instead, it:
- Executes the React tree
- Serializes the result into a Flight stream
- Streams it incrementally over HTTP
This stream is a structured protocol containing:
- References to client components
- Serialized props
- Module identifiers
- Suspense boundaries
- Instructions for how to reconstruct the tree
Think of it less like HTML, and more like:
"A binary UI description that React knows how to replay."
Once a chunk is streamed:
- The server forgets it
- There is no retained component instance
- There is no memory between requests
This design is intentional — and it's what allows RSC to scale across Node, Edge, and serverless environments.
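For intuition, the stream is roughly a series of newline-delimited rows, each an id followed by a serialized value. The sketch below is a caricature, not a spec: the exact wire format is an internal React implementation detail and changes between versions.

```txt
1:I["src/Counter.tsx",["/assets/Counter-abc123.js"],"Counter"]
0:["$","html",null,{"children":["$","body",null,{"children":[["$","h1",null,{"children":"Test"}],["$","$L1",null,{"initialCount":0}]]}]}]
```

Row 1 is a reference to a client component (a module id plus the chunks needed to load it), and row 0 is the element tree, where "$L1" points back at that reference. Serialized props travel in the stream; client component code does not.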
How the RSC Stream Is Generated (RSC Environment)
In a Vite-powered RSC setup, the RSC environment is responsible for producing this stream.
In your RSC entry (e.g. the file that runs on the server to render the tree), the flow is very explicit:
```tsx
const root = (
  <html>
    <body>
      <h1>Test</h1>
    </body>
  </html>
)

const rscStream = renderToReadableStream(root)
```
This step:
- Executes the React tree
- Serializes it into a Flight-compatible stream
- Produces a ReadableStream
At this point:
- No HTML exists
- No browser-specific logic runs
- Only React's server renderer is involved
If the request explicitly asks for .rsc, the server responds with that stream directly:
```ts
return new Response(rscStream, {
  headers: {
    'Content-Type': 'text/x-component;charset=utf-8',
  },
})
```
That's the purest form of RSC: UI as streamed data.
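Putting the two responses together, a minimal RSC entry can be sketched as a plain request handler. Treat this as a sketch, not any plugin's actual API: the import path for the Flight renderer depends on your bindings (react-server-dom-webpack/server.edge also takes a client manifest, which a Vite RSC plugin normally wires up for you), and renderHtml is a hypothetical bridge into the SSR environment covered in the next section.

```tsx
// entry.rsc.tsx runs in the RSC (react-server) environment.
import { renderToReadableStream } from 'react-server-dom-webpack/server.edge'
// Hypothetical bridge into the SSR environment (name is illustrative).
import { renderHtml } from './entry.ssr'

export default async function handler(request: Request): Promise<Response> {
  const root = (
    <html>
      <body>
        <h1>Test</h1>
      </body>
    </html>
  )

  // Serialize the React tree into a Flight stream.
  const rscStream = renderToReadableStream(root)

  // Client navigations ask for the raw stream directly.
  if (new URL(request.url).pathname.endsWith('.rsc')) {
    return new Response(rscStream, {
      headers: { 'Content-Type': 'text/x-component;charset=utf-8' },
    })
  }

  // Initial document requests hand the same stream to the SSR environment.
  return renderHtml(rscStream)
}
```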
SSR: Consuming the Same Stream for HTML
For an initial page load, users still expect HTML. This is where the SSR environment comes in.
Instead of re-running the React tree, the SSR environment:
- Consumes the RSC stream
- Deserializes it back into a React tree
- Renders HTML from that tree
In your SSR entry (the script that runs to produce the first HTML response):
```ts
const root = await createFromReadableStream(rscStream)

const htmlStream = await renderToReadableStream(root, {
  bootstrapScriptContent,
})
```
This is a key insight:
SSR does not re-execute your app. It replays the RSC stream.
The same serialized UI is now:
- Turned into HTML
- Sent to the browser
- Embedded with bootstrap scripts
This keeps server-rendered HTML and client navigation perfectly consistent.
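One practical detail this hides: the SSR environment typically needs the RSC payload twice, once to turn into HTML and once to inline into the page so the client can hydrate from the exact same data. A common way to handle that is to tee the stream. The sketch below assumes react-server-dom-webpack/client.edge and react-dom/server.edge bindings; a Vite RSC setup may expose equivalents under different paths.

```tsx
// entry.ssr.tsx runs in the SSR environment.
import { createFromReadableStream } from 'react-server-dom-webpack/client.edge'
import { renderToReadableStream } from 'react-dom/server.edge'

export async function renderHtml(rscStream: ReadableStream<Uint8Array>): Promise<Response> {
  // One copy is deserialized into a React tree for HTML rendering,
  // the other is kept so it can be inlined for client-side hydration.
  const [forHtml, forInline] = rscStream.tee()

  const root = await createFromReadableStream(forHtml)
  const htmlStream = await renderToReadableStream(root, {
    bootstrapScriptContent: '/* hydration entry, inlined RSC payload, etc. */',
  })

  // A real setup would interleave `forInline` into the HTML (e.g. as script
  // tags) so the client can hydrate without an extra network round trip.
  void forInline

  return new Response(htmlStream, {
    headers: { 'Content-Type': 'text/html;charset=utf-8' },
  })
}
```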
Client: Hydration and Navigation
On the client, the story is similar — but with a different goal.
During the initial load:
- The browser receives HTML
- Client components hydrate
- Server components are never hydrated
For navigation (or revalidation), the client fetches the RSC stream directly:
```ts
const rscResponse = await fetch(window.location.href + '.rsc')
const root = await createFromReadableStream(rscResponse.body)

hydrateRoot(document, root)
```
Here, the client:
- Fetches the same .rsc endpoint
- Deserializes the Flight stream
- Reconciles it into the existing React tree
No HTML parsing. No full reload. Just applying a streamed UI delta.
This explains why RSC navigation feels instant — and also why the client never owns the full UI history.
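Here is one way the client side can be wired up end to end, as a sketch: the bindings are assumed to be react-server-dom-webpack/client.browser, and the link-click interception is deliberately simplified (a real router would also handle popstate, scroll restoration, and so on).

```tsx
// entry.browser.tsx runs in the browser.
import { startTransition } from 'react'
import { hydrateRoot } from 'react-dom/client'
import {
  createFromFetch,
  createFromReadableStream,
} from 'react-server-dom-webpack/client.browser'

async function main() {
  // Initial load: get the Flight payload (inlined or fetched) and hydrate
  // the server-rendered HTML against it.
  const initial = await fetch(window.location.href + '.rsc')
  const initialTree = await createFromReadableStream(initial.body!)
  const root = hydrateRoot(document, initialTree)

  // Navigation: fetch a fresh Flight payload and reconcile it into the
  // existing root. No HTML parsing, no full reload.
  async function navigate(href: string) {
    const nextTree = await createFromFetch(fetch(href + '.rsc'))
    startTransition(() => {
      history.pushState(null, '', href)
      root.render(nextTree)
    })
  }

  // Simplified link interception for same-origin navigations.
  document.addEventListener('click', (event) => {
    const link = (event.target as Element | null)?.closest('a')
    if (link && link.origin === window.location.origin) {
      event.preventDefault()
      void navigate(link.pathname)
    }
  })
}

void main()
```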
One Stream, Three Consumers
This is the most important mental model:
| Environment | What it does with the RSC stream |
|---|---|
| RSC | Generates it (React → Stream) |
| SSR | Replays it (Stream → HTML) |
| Client | Replays it (Stream → React tree) |
The stream is:
- Generated once per request
- Consumed once
- Never retained
- Never replayed automatically
Which leads to the core constraint.
Why RSC Is Stateless by Design
Because the stream:
- Is generated per request
- Is consumed immediately
- Is never cached by React
- Is never replayed across navigations
There is no place inside React to store navigation memory.
Every navigation:
- Re-executes the tree
- Re-serializes the UI
- Streams a new result
This is not a limitation — it's the contract.
But real applications need memory.
Why This Experiment Exists
This is the first in a series of experiments exploring what people usually mean when they say "stateful RSC."
Not in theory — in practice.
The goal isn't to bend React Server Components into something they're not, but to understand where state can live without breaking the model, and how far we can push that boundary before things stop making sense.
This first trial focuses on infinite scroll, server state, and Edge KV.
The Core Problem
With a purely stateless RSC setup:
/feed?page=50
Forces the server to:
- Re-fetch pages 1 → 50
- Rebuild the tree
- Re-stream everything
Correct — but expensive.
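To make the cost concrete, here is a sketch of what a purely stateless feed component ends up doing. fetchFeedPage and the Item type are hypothetical stand-ins for whatever data source the feed reads from.

```tsx
// Feed.tsx is a server component; everything here re-runs on every request.
import { fetchFeedPage, type Item } from './data'

export async function Feed({ page }: { page: number }) {
  const items: Item[] = []

  // /feed?page=50 means re-fetching pages 1..50, even though the previous
  // request already fetched 1..49. Nothing remembers the earlier work.
  for (let p = 1; p <= page; p++) {
    items.push(...(await fetchFeedPage(p)))
  }

  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.title}</li>
      ))}
    </ul>
  )
}
```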
So the question becomes:
Can the server remember what it already fetched without making the RSC stream stateful?
Trial #1 Hypothesis
RSC doesn't need state. The server does.
If we externalize navigation memory into a server-side store, the RSC tree can stay pure:
UI = f(request, server_state)
Why Edge KV (Not Just Redis)
Traditional Redis works, but it introduces:
- Centralized latency
- Cross-region hops
- Hot keys
Edge KV is a better fit.
Edge KV is:
- Globally distributed
- Read-optimized
- Extremely fast
- Resource-efficient
- Designed for partial failure
Most importantly:
KV operations are non-blocking and retriable by design.
Robustness by Default
In this trial, KV is treated as:
- A best-effort cache
- Not a source of truth
Which gives strong guarantees:
- KV read fails → fall back to the DB (if you want to be aggressive about it)
- KV write fails → UI still renders
- Writes can be retried asynchronously
- No request is blocked on consistency
KV failure degrades performance — not correctness.
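In code, "best effort" is just a try/catch around the read and a fire-and-forget write. The sketch below assumes a Cloudflare Workers-style KV binding (KVNamespace from @cloudflare/workers-types) and an execution context with waitUntil; both are assumptions about the runtime, not something RSC provides.

```ts
// kv-cache.ts: best-effort reads and fire-and-forget writes around Edge KV.
import type { KVNamespace } from '@cloudflare/workers-types'

export interface Item {
  id: string
  title: string
}

export interface FeedState {
  loadedPages: number
  items: Item[]
}

export async function readFeedState(kv: KVNamespace, sessionId: string): Promise<FeedState | null> {
  try {
    // A miss and an error mean the same thing: fall back to the data source.
    return await kv.get<FeedState>(`feed:${sessionId}`, 'json')
  } catch {
    return null
  }
}

export function writeFeedState(
  kv: KVNamespace,
  ctx: { waitUntil(promise: Promise<unknown>): void },
  sessionId: string,
  state: FeedState,
) {
  // The write happens off the critical path. If it fails, the response has
  // already streamed; the next request just refetches a few more pages.
  ctx.waitUntil(
    kv.put(`feed:${sessionId}`, JSON.stringify(state), { expirationTtl: 30 * 60 }).catch(() => {}),
  )
}
```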
The Model: Stateful Server, Stateless Stream
State stored in Edge KV:
```
feed:{sessionId} = {
  loadedPages: number,
  items: Item[]
}
```
Request flow:
- Read KV
- Fetch only missing pages
- Append state
- Generate RSC stream from updated inputs
The RSC stream itself remains stateless.
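Putting that flow into one place, here is a sketch of the feed loader in the RSC environment. It reuses the hypothetical helpers from the previous sketch (readFeedState, writeFeedState) and the hypothetical fetchFeedPage data source; only the shape of the flow is the point.

```ts
// feed-loader.ts runs in the RSC environment, once per request.
import type { KVNamespace } from '@cloudflare/workers-types'
import { readFeedState, writeFeedState, type Item } from './kv-cache'
import { fetchFeedPage } from './data'

export async function loadFeed(
  kv: KVNamespace,
  ctx: { waitUntil(promise: Promise<unknown>): void },
  sessionId: string,
  page: number,
): Promise<Item[]> {
  // 1. Read KV (best effort): what has this session already loaded?
  const cached = (await readFeedState(kv, sessionId)) ?? { loadedPages: 0, items: [] }

  // 2. Fetch only the missing pages.
  const missingPages: number[] = []
  for (let p = cached.loadedPages + 1; p <= page; p++) missingPages.push(p)
  const newItems = (await Promise.all(missingPages.map((p) => fetchFeedPage(p)))).flat()

  // 3. Append and write back asynchronously; the response never waits on KV.
  const state = {
    loadedPages: Math.max(cached.loadedPages, page),
    items: [...cached.items, ...newItems],
  }
  writeFeedState(kv, ctx, sessionId, state)

  // 4. The server component just renders these items. Where they came from
  //    (KV hit, partial hit, or full refetch) is invisible to the RSC tree.
  return state.items
}
```

The RSC stream generated from these items is identical to what a fully stateless render would have produced; only the work done to get there changes.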
Topology-Aware State
State must be proxied to the closest server.
Requests:
- Hit the nearest edge
- Read local KV
- Fetch missing data once
- Let replication propagate
This keeps state:
- Local
- Cheap
- Safe to miss
Intercepting Requests with Metadata
Requests are intercepted to attach:
- Session ID
- Feed context
- Navigation depth
This metadata becomes the KV lookup key.
From the RSC perspective:
- It just reads data
- It doesn't know where it came from
Which is exactly the abstraction RSC wants.
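A sketch of that interception step, assuming a Workers-style fetch pipeline; the cookie name, header names, and the way the RSC entry consumes them are all illustrative:

```ts
// middleware.ts attaches session metadata before the RSC entry runs.
export function withFeedMetadata(request: Request): Request {
  const url = new URL(request.url)

  // Session ID: reuse the cookie if present, otherwise mint a new one.
  const cookies = request.headers.get('cookie') ?? ''
  const sessionId = /feed-session=([^;]+)/.exec(cookies)?.[1] ?? crypto.randomUUID()

  // Forward the metadata as headers; the RSC entry turns them into the KV key
  // (feed:{sessionId}) and the navigation depth it needs to diff against.
  const headers = new Headers(request.headers)
  headers.set('x-feed-session', sessionId)
  headers.set('x-feed-page', url.searchParams.get('page') ?? '1')

  return new Request(request, { headers })
}
```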
What This Trial Is Not
This is not:
- Stateful components
- In-memory server hacks
- Long-lived processes
It is:
- Server-owned navigation state
- Externalized memory
- Edge-local and disposable
Tradeoffs
You pay for this with:
- Session growth, though it can be kept resource-efficient with stale-entry deletion
- Expiry logic
- Invalidation complexity, though it's solvable by attaching params to the proxied metadata
You gain:
- Efficient deep navigation
- No refetch storms
- Clean RSC semantics
What Comes Next (Trial #2)
True statefulness is effectively impossible in a serverless environment. Each invocation is isolated; there is no long-lived process, no in-memory store that survives between requests. The moment the function finishes, that context is gone.
That doesn’t mean we’re stuck. We can treat other serverless building blocks as our state and cache layer. Managed databases, Edge KV, object storage, and serverless-friendly caches (e.g. DynamoDB, Upstash, Cloudflare KV, Vercel KV) are all “pieces of serverless” that persist across invocations. They don’t make RSC itself stateful — they give the server a place to remember things. So in Trial #2 we’ll explore how to lean on these services to store and manage data and cache without pretending the RSC stream has memory. The goal stays the same: stateful server, stateless stream.
Want to collaborate on this post? Hit me up on Telegram. I'd love to see you there.