Why I Built @farming-labs/orm: One Schema, Many Stacks
This is the thinking behind orm.farming-labs.dev: why I wanted a storage-agnostic layer, why the schema DSL is the center of it, and why I think reusable libraries should not have to rewrite their storage story for every stack.
There are already a lot of ORMs, query builders, and schema tools.
So if I was going to build orm.farming-labs.dev, it had to solve a different problem.
I was not trying to compete with "how do I talk to Postgres" in the most basic sense. I was trying to solve a problem I kept seeing in libraries, frameworks, auth systems, billing kits, and internal platform packages:
the moment you want reusable storage logic across many app stacks, your data model starts splitting apart.
That is the real pain this project is aimed at.
Why I Built It
Most storage tools assume one app, one stack, one preferred persistence story.
That is a fine default if you are building a single product and you already know you will stay on one ORM or one database layer forever.
But that is not the shape of every problem.
If you are building:
- an auth package
- a billing module
- an organization or role system
- an internal platform package
- or a framework that other apps are supposed to consume
then you usually do not control the final storage decision.
One consumer wants Prisma. Another wants Drizzle. Another already has raw SQL. Another is on Cloudflare. Another is using Redis, KV, DynamoDB, Firestore, or Mongo.
At that point, the problem stops being "how do I query this one database?" and becomes:
- how do I keep one schema story?
- how do I avoid rewriting adapters for every stack?
- how do I keep one runtime-facing API?
- how do I stop duplicating docs, examples, migrations, and setup guides for every consumer?
That is why I built this project.
The Experience That Made This Pain Very Real
Part of what pushed me toward this was my experience maintaining storage adapters around the Better Auth ecosystem.
That work teaches you something quickly:
- every adapter becomes its own maintenance track
- every backend has slightly different behavior
- every integration grows its own setup story
- every docs path starts multiplying
And even then, an adapter is usually still limited.
Some adapters only really make sense for one database family. Some work for SQL but not for key-value or document storage. Some fit one runtime model but not another. Some are technically "supported" but still force very different assumptions once you get into transactions, indexes, relation loading, or setup flows.
So the maintenance burden is not only "there are many adapters."
It is also that each adapter tends to be narrow, and the library or framework still ends up owning all the storage complexity at the edges.
That was a big motivation here. I did not want to keep repeating the same pattern where we build one more adapter, then one more adapter-specific doc, then one more adapter-specific setup path, and still never really get one coherent storage story.
The Main Idea
The core idea is simple:
- define the schema once
- generate what each app's stack needs
- run one typed query surface through whichever storage runtime the app actually owns
That means one schema, one query API, and many possible outputs or runtime drivers.
This is not "pretend every storage system is identical."
It is "keep the contract unified, then translate carefully at the edges."
Why The Unified DSL Matters
I wanted the schema DSL to be the center of the system, not one more layer sitting next to a dozen other schema definitions.
That is why the CLI reads schema objects exported from @farming-labs/orm directly. It does not invent a second schema language.
You define the model once in TypeScript:
```ts
import {
  belongsTo,
  datetime,
  defineSchema,
  hasMany,
  id,
  model,
  string,
} from "@farming-labs/orm";

export const authSchema = defineSchema({
  user: model({
    table: "users",
    fields: {
      id: id(),
      email: string().unique(),
      name: string(),
      createdAt: datetime().defaultNow(),
    },
    relations: {
      sessions: hasMany("session", { foreignKey: "userId" }),
    },
  }),
  session: model({
    table: "sessions",
    fields: {
      id: id(),
      userId: string().references("user.id"),
      token: string().unique(),
      expiresAt: datetime(),
    },
    relations: {
      user: belongsTo("user", { foreignKey: "userId" }),
    },
  }),
});
```
From there, generators and runtime drivers both read from the same source of truth.
That matters because once the schema becomes fragmented, drift shows up everywhere:
- docs drift
- adapter logic drifts
- relation behavior drifts
- generated artifacts drift
- examples drift
I wanted to reduce that drift at the root.
Why Storage-Agnostic Matters
Storage-agnostic does not mean "lowest common denominator."
It means the package or framework should not force one storage commitment on every consuming app.
That is especially important when the package is supposed to be shared.
An auth library should not have to ship:
- Prisma adapter logic
- Drizzle adapter logic
- Kysely adapter logic
- raw SQL examples
- Mongo examples
- docs for every separate storage path
just to say "here is how users and sessions work."
I wanted that package to be able to say:
- here is the schema
- here is the storage contract
- here is one query API
- now choose the runtime or generated output that fits your app
That is a much cleaner model.
One Schema, Two Translation Paths
One thing I like about the project is that it does not force a single integration style.
It supports two useful paths.
1. Generator-first
If the app wants generated artifacts, the CLI can emit Prisma, Drizzle, or SQL from the same schema source.
```ts
import { defineConfig } from "@farming-labs/orm-cli";
import { authSchema } from "./src/schema";

export default defineConfig({
  schemas: [authSchema],
  targets: {
    prisma: {
      out: "./generated/prisma/schema.prisma",
      provider: "postgresql",
    },
    drizzle: {
      out: "./generated/drizzle/schema.ts",
      dialect: "pg",
    },
    sql: {
      out: "./generated/sql/0001_init.sql",
      dialect: "postgres",
    },
  },
});
```
That path is useful when the app already has a Prisma or Drizzle workflow and wants generated files to drop into the stack it already knows.
2. Runtime-first
If the app already owns a live client, the ORM can run through a runtime driver and keep the same typed API.
That can mean Prisma, Drizzle, Kysely, MikroORM, TypeORM, Sequelize, direct SQL pools, Cloudflare D1, Cloudflare KV, Redis and Upstash Redis, Firestore, DynamoDB, Unstorage, MongoDB, or Mongoose.
That split was intentional. Some teams want generation. Some teams want runtime adapters. Some need both.
I did not want the project to collapse those into one rigid workflow.
Why I Wanted It To Work Across Very Different Storage Units
If I only cared about SQL databases, this could have stopped much earlier.
But real products do not only store data in one place anymore.
Some things belong in a relational database. Some things live better in Redis. Some package-owned state is fine in KV or Unstorage. Some apps are on document databases. Some are on edge platforms.
So I wanted the model to be broad enough to say:
- the schema is still one contract
- the query surface is still one contract
- the storage runtime can vary
That does not erase the differences between backends. It just stops those differences from infecting every package boundary in the system.
This also connects to a bigger direction I care about: a meta framework where storage is a first-class concern, not an afterthought.
When I think about framework-owned storage, I am not only thinking about a long-term relational database.
I am also thinking about:
- cache
- rate limits
- short-term state
- auth and session state
- framework metadata
- event or audit records
- long-term persistent application data
Those are different storage shapes, different durability needs, and often different runtime environments.
If the framework is going to treat storage as first-class, it needs a model that can speak to all of them without forcing every feature into one backend or one ORM-specific contract.
That is one of the biggest reasons this project is broader than "another SQL ORM."
Why This Is Good For Libraries And Frameworks
This is the part I care about the most.
I think this project is strongest when the storage story is not local to one app, but shared across many consumers.
That is why the docs keep pointing at use cases like:
- auth-like libraries
- billing and organization kits
- full-stack frameworks
- internal platforms
- multi-package monorepos
Those are exactly the places where storage duplication gets expensive.
A billing module should be able to define plans, subscriptions, seats, organizations, and memberships once, then let each app generate or run them in the way that matches its stack.
A platform team should be able to own roles, audit logs, provisioning state, and feature-flag models once, then reuse that storage contract across products.
That is much higher leverage than rebuilding the same schema over and over inside every app.
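To make that concrete, here is a sketch of what a shared billing schema could look like using the same DSL. The models and field choices are mine for illustration; only the helpers already shown in the auth schema (defineSchema, model, id, string, datetime, hasMany, belongsTo) are assumed to exist.

```ts
import {
  belongsTo,
  datetime,
  defineSchema,
  hasMany,
  id,
  model,
  string,
} from "@farming-labs/orm";

// Hypothetical billing models, defined once and reused by every consumer app.
export const billingSchema = defineSchema({
  plan: model({
    table: "plans",
    fields: {
      id: id(),
      name: string().unique(),
    },
    relations: {
      subscriptions: hasMany("subscription", { foreignKey: "planId" }),
    },
  }),
  subscription: model({
    table: "subscriptions",
    fields: {
      id: id(),
      planId: string().references("plan.id"),
      organizationId: string(),
      status: string(),
      createdAt: datetime().defaultNow(),
    },
    relations: {
      plan: belongsTo("plan", { foreignKey: "planId" }),
    },
  }),
});
```

Each consuming app can then generate Prisma, Drizzle, or SQL from this package, or run it against whichever runtime driver it already owns.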
And this is where the meta-framework angle becomes important to me.
If storage is first-class at the framework level, then framework modules should be able to own and compose storage for:
- caching layers
- rate-limit buckets
- short-lived framework state
- persistent business data
- audit and observability records
without turning each one into a completely separate adapter ecosystem.
That is the kind of experience I want the framework layer to have: storage is present everywhere it needs to be, but it still feels coherent.
Why The CLI Exists
I did not want this to be just a philosophical schema layer.
If the project is going to be practical, it needs a real CLI that turns the schema into artifacts teams can actually use.
That is why @farming-labs/orm-cli exists.
It reads exported schema objects directly, merges multiple schema packages, rejects duplicate model names, and can:
- generate Prisma schema output
- generate Drizzle schema output
- generate safe SQL
- check generated output in CI so drift shows up early
This makes monorepo package composition especially useful, because several reusable packages can contribute models into one generated application surface.
That is one of the highest-leverage workflows in the whole project.
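As a sketch of that composition workflow: an app's CLI config can list schemas from several packages, and the CLI merges them into one generated surface. The package paths here are hypothetical; only the defineConfig shape shown earlier is taken from the project.

```ts
import { defineConfig } from "@farming-labs/orm-cli";
// Hypothetical package paths — in a monorepo these would be workspace packages.
import { authSchema } from "@acme/auth/schema";
import { billingSchema } from "@acme/billing/schema";

export default defineConfig({
  // Both packages contribute models; duplicate model names are rejected.
  schemas: [authSchema, billingSchema],
  targets: {
    prisma: {
      out: "./generated/prisma/schema.prisma",
      provider: "postgresql",
    },
  },
});
```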
Why The Runtime Layer Matters Too
The runtime is what makes the unified query API real.
createOrm(...) attaches the schema to a driver, and from there the app gets typed model clients like orm.user, orm.session, or orm.subscription.
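As a sketch of that wiring, assuming a Prisma-backed app: the driver factory name and its import path below are assumptions for illustration, not the library's confirmed API; createOrm and the typed model clients are what the project describes.

```ts
import { createOrm } from "@farming-labs/orm";
// Assumed driver entry point — the real import path and factory may differ.
import { prismaDriver } from "@farming-labs/orm/drivers/prisma";
import { PrismaClient } from "@prisma/client";
import { authSchema } from "./schema";

// Attach the shared schema to whichever client the app already owns.
const orm = createOrm({
  schema: authSchema,
  driver: prismaDriver(new PrismaClient()),
});

// The typed model clients come from the schema, not from the driver:
const ada = await orm.user.findUnique({
  where: { email: "ada@farminglabs.dev" },
});
```

Swapping the driver line is the only change needed to move the same package onto a different backend.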
I also wanted the query API itself to feel stable and straightforward.
That means the storage layer should not only share one schema contract. It should also give packages one readable way to do queries, nested reads, mutations, and transactions.
Something like this is the kind of surface I wanted:
```ts
const user = await orm.user.findUnique({
  where: { email: "ada@farminglabs.dev" },
  select: {
    id: true,
    email: true,
    sessions: {
      select: {
        token: true,
        expiresAt: true,
      },
    },
  },
});

const session = await orm.session.create({
  data: {
    userId: user!.id,
    token: crypto.randomUUID(),
    expiresAt: new Date(Date.now() + 1000 * 60 * 60),
  },
});

await orm.$driver.transaction(async () => {
  await orm.rateLimitBucket.upsert({
    where: {
      scope_identifier: {
        scope: "auth.login",
        identifier: "ada@farminglabs.dev",
      },
    },
    create: {
      scope: "auth.login",
      identifier: "ada@farminglabs.dev",
      remaining: 4,
      resetAt: new Date(Date.now() + 60_000),
    },
    update: {
      remaining: 4,
      resetAt: new Date(Date.now() + 60_000),
    },
  });
});
```
I did not want every consumer package to have one query style for Prisma, another one for Drizzle, another one for SQL, and another one for key-value or document backends. Even a small amount of API consistency matters a lot once several packages and teams depend on the same storage contract.
I also care a lot about what this does to operational shape, not just developer ergonomics.
The runtime-helper path is intentionally light and lazy. That matters because once a package stops dragging several adapter branches into the same surface area, a few good things get easier:
- bundle shape can get smaller
- serverless and edge-friendly integration gets easier
- startup behavior can improve because less backend-specific wiring has to happen eagerly
- the app can load the storage pieces it actually uses instead of carrying every adapter-shaped path at once
That part was important to me because reusable libraries and framework layers often end up in environments where cold-start pressure and deployment size actually matter.
I do want to be precise here though: this does not magically make every database faster.
Postgres is still Postgres. Prisma is still Prisma. Mongo is still Mongo.
The more realistic wins usually come from:
- less duplicated adapter logic
- fewer eagerly-wired branches
- lighter integration surfaces for serverless and edge-style environments
- one shared place to improve runtime behavior instead of repeating fixes adapter by adapter
I also wanted the runtime layer to be honest about capability differences.
That is why the runtime exposes metadata on orm.$driver so higher layers can inspect what the backend actually supports instead of pretending every storage engine behaves the same.
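To illustrate what consuming that metadata could look like: the interface and field names below are my assumptions about a plausible capabilities shape, not the library's documented API. The point is only that a higher layer branches on declared capabilities instead of assuming SQL semantics everywhere.

```typescript
// Assumed shape for driver metadata — the real fields on orm.$driver may differ.
interface DriverCapabilities {
  transactions: boolean;
  joins: boolean;
  kind: "sql" | "document" | "kv";
}

interface Driver {
  capabilities: DriverCapabilities;
}

// A higher layer can pick a relation-loading strategy per backend
// instead of pretending every storage engine supports joins.
function loadStrategy(driver: Driver): "joined-read" | "multi-read" {
  return driver.capabilities.joins ? "joined-read" : "multi-read";
}

const sqlDriver: Driver = {
  capabilities: { transactions: true, joins: true, kind: "sql" },
};
const kvDriver: Driver = {
  capabilities: { transactions: false, joins: false, kind: "kv" },
};

console.log(loadStrategy(sqlDriver)); // "joined-read"
console.log(loadStrategy(kvDriver)); // "multi-read"
```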
That part matters.
Abstraction gets dangerous when it hides important differences.
I want this project to unify the developer surface, but I do not want it to lie about the backend.
What I Did Not Want This To Become
I did not want this to become a vague "universal database abstraction" where everything feels portable until the moment you need real behavior.
That is why I think the project works best when it stays anchored in a few principles:
- one logical schema contract
- one typed query surface
- explicit generators
- explicit runtime drivers
- explicit capability boundaries
That combination gives you portability where portability is useful, without pretending every storage engine has the same semantics.
When This Is Not The Right Tool
I also think it is important to say where this approach is probably overkill.
If you have:
- one app
- one storage layer
- one ORM-specific schema you are happy with
- no package-level reuse
- no need to publish or share a storage-facing contract
then this might be more framework than you need.
That is fine.
I did not build this because every project needs a storage-agnostic layer. I built it because some projects absolutely do, and the pain there is real.
The value of this approach rises as these rise:
- number of consumer apps
- number of package boundaries
- number of supported stacks
- amount of duplicated storage docs and adapter work
If your recurring pain is "we keep rewriting the same storage story for every consumer," then this project is aimed directly at that problem.
Final Thought
I built orm.farming-labs.dev because I wanted reusable software to have a better storage story.
I wanted one schema definition, one runtime-facing API, generated outputs when apps want them, runtime drivers when apps already own the client, and enough flexibility to work across SQL, document, key-value, and edge-friendly systems without rewriting the whole package every time.
For me, that is the real point:
not one database forever, not one ORM forever, but one coherent contract that can survive many stacks.