Engineering

Why Bun Is Ready for Production (And Better Than Node)

Jan 19, 2026 · 15 min read
bun · typescript · tooling · performance

The Production Readiness Question

"Is Bun ready for production?" gets asked on every Reddit thread about runtime alternatives. The question misses the point. Bun shipped its 1.0 in September 2023 and has been stable since. The real question is whether Bun gives you something Node doesn't.

It does. Bun ships builtins that eliminate entire dependency categories. We run Bun across a mobile app's build tooling, a documentation scraping pipeline, and various CLI utilities. The dependency count dropped, the build times dropped, and the surface area for CVEs shrank along with them.

The Package Manager Alone Is Worth It

Before touching runtime features, consider the package manager. bun install resolves and installs dependencies faster than npm, yarn, or pnpm. Our monorepo lockfile is 405 KB. A single bun install handles the root Expo app, the Hono API, the TypeScript scraping pipeline, and the Nuxt marketing site.

The lockfile is now JSON and human-readable, and it's deterministic, with SHA-512 checksums on every dependency. No binary format to debug when things go wrong.

CI pipelines feel the difference. Cold installs that took 45 seconds with npm now take 12 seconds. Warm installs with a cached lockfile finish in under 3 seconds. These numbers compound across dozens of daily builds.

Dependency Elimination

The runtime builtins matter more than the package manager speed. Here's what we replaced:

npm Package        | Bun Builtin          | Why It Matters
-------------------|----------------------|------------------------------------------
better-sqlite3     | bun:sqlite           | No node-gyp, no native compilation in CI
zstd-codec         | Bun.zstdCompress     | No WASM overhead, sync and async variants
@aws-sdk/client-s3 | S3Client from "bun"  | Zero dependencies for R2/S3 uploads
glob               | Bun.Glob             | Sync file discovery, no callback hell
execa / shelljs    | Bun.$                | Tagged template shell with JSON parsing
js-yaml            | YAML from "bun"      | Built-in YAML parsing
toml               | Direct .toml imports | Parsed at import time, zero runtime cost

Each replacement removes a dependency tree. Each dependency tree is a vector for supply chain attacks, version conflicts, and maintenance burden. The aggregate effect is a lighter, faster, more secure codebase.

bun:sqlite vs better-sqlite3

SQLite without compilation is the killer feature for CI/CD pipelines. better-sqlite3 requires native bindings. Every CI run needs node-gyp, Python, and a C++ compiler. Bun's sqlite module is built into the runtime.

Our documentation pipeline builds SQLite databases with FTS5 search indexing. The schema:

database.ts
import { Database } from "bun:sqlite";

const db = new Database("docs.db");

db.exec(`
  CREATE TABLE IF NOT EXISTS dl_content (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    page_name TEXT NOT NULL,
    slug TEXT NOT NULL,
    chunked_html_zst BLOB NOT NULL,
    raw_text_fts TEXT NOT NULL
  );

  CREATE INDEX IF NOT EXISTS idx_dl_content_slug ON dl_content(slug);
`);

The API mirrors better-sqlite3 and prepared statements work identically. The difference is deployment. Native modules need compilation, which means CI runners need build tools, and platform-specific binaries mean debugging why it works on your Mac but fails in the Docker container. bun:sqlite just runs.

Tip

Use .prepare() once outside loops and .run() for bulk inserts. This compiles the query plan once instead of on every iteration.
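
A minimal sketch of that pattern against the schema above (the row shape and the final call are illustrative):

import { Database } from "bun:sqlite";

interface Row {
  pageName: string;
  slug: string;
  blob: Uint8Array;
  text: string;
}

const db = new Database("docs.db");

// Compile the query plan once, outside the loop
const insert = db.prepare(
  "INSERT INTO dl_content (page_name, slug, chunked_html_zst, raw_text_fts) VALUES (?, ?, ?, ?)"
);

// db.transaction() wraps the loop in a single commit instead of one per row
const insertAll = db.transaction((rows: Row[]) => {
  for (const row of rows) {
    insert.run(row.pageName, row.slug, row.blob, row.text);
  }
});

// insertAll(batch); // call with your rows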

TypeScript support is built in. Generic query types work without additional packages:

pipeline.ts
const rows = db
  .query<{ slug: string; page_name: string }, []>(
    "SELECT slug, page_name FROM dl_content"
  )
  .all();

The :memory: URI creates transient databases for tests that disappear when the connection closes. Our test suite runs 70 test files with zero database cleanup logic because each test gets a fresh in-memory database that vanishes automatically.
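
A hedged sketch of how a test reads with this (the table and assertion are illustrative):

import { expect, test } from "bun:test";
import { Database } from "bun:sqlite";

test("slug lookups", () => {
  // fresh database per test; no files, no cleanup
  const db = new Database(":memory:");
  db.exec("CREATE TABLE dl_content (slug TEXT NOT NULL, page_name TEXT NOT NULL)");
  db.query("INSERT INTO dl_content (slug, page_name) VALUES (?, ?)").run("array", "Array");

  const row = db
    .query<{ page_name: string }, [string]>(
      "SELECT page_name FROM dl_content WHERE slug = ?"
    )
    .get("array");

  expect(row?.page_name).toBe("Array");
});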

Native Compression

Bun now ships zstd and gzip compression without external dependencies. Our pipeline compresses HTML chunks with zstd at level 5:

compressor.ts
const COMPRESSION_LEVEL = 5;

export function compressSync(input: string | Uint8Array): Uint8Array {
  const data = typeof input === "string"
    ? new TextEncoder().encode(input)
    : input;
  return Bun.zstdCompressSync(data, { level: COMPRESSION_LEVEL });
}

export async function compress(input: string | Uint8Array): Promise<Uint8Array> {
  const data = typeof input === "string"
    ? new TextEncoder().encode(input)
    : input;
  return Bun.zstdCompress(data, { level: COMPRESSION_LEVEL });
}

Level 5 balances compression ratio and decompression speed. Higher levels compress smaller but decompress slower. On mobile devices, decompression speed matters more than file size. A 50 KB MDN page decompresses in 5-15ms on an iPhone 14 Pro.
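
A rough way to sanity-check that tradeoff on your own corpus (the input path is illustrative; timings vary by machine):

// compare compressed size and time at a few zstd levels
const sample = new Uint8Array(await Bun.file("sample-page.html").arrayBuffer());

for (const level of [1, 5, 12, 19]) {
  const start = performance.now();
  const out = Bun.zstdCompressSync(sample, { level });
  console.log(`level ${level}: ${out.byteLength} bytes in ${(performance.now() - start).toFixed(1)}ms`);
}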

The sync variants (Bun.zstdCompressSync, Bun.gzipSync) work well for batch processing, while the async variants avoid blocking when handling user requests.

This replaces zstd-codec, a WASM-based npm package. WASM carries overhead that native compression avoids, and when a pipeline processes thousands of HTML files during documentation builds, the aggregate time savings add up.
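
Decompression is the mirror image, which also makes round-trip tests trivial. A sketch, assuming the blob came from compressSync above and that the decompress counterpart follows the same naming:

// verify that a stored chunk survives the round trip
export function decompressToString(blob: Uint8Array): string {
  return new TextDecoder().decode(Bun.zstdDecompressSync(blob));
}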

S3Client from "bun"

Cloudflare R2 uses the S3 API. The standard Node approach requires @aws-sdk/client-s3, which pulls in dozens of transitive dependencies. Bun's native S3Client is a single import:

r2-uploader.ts
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: config.accessKey,
  secretAccessKey: config.secretKey,
  bucket: config.bucket,
  endpoint: `https://${config.accountId}.r2.cloudflarestorage.com`,
  region: "auto",
});

await client.write(key, Bun.file(dbGzPath), {
  type: "application/gzip",
});

The Bun.file() integration means no manual stream handling. Pass a file reference directly to the S3 client and it handles the upload efficiently. Our package.json has zero AWS SDK dependencies, and the entire R2 uploader is 50 lines of code including error handling and logging.
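
Reads and presigned URLs are similarly terse. A sketch against the same client (the key names and expiry are illustrative):

// lazy reference; bytes are fetched only when consumed
const manifest = await client.file("docsets/manifest.json").json();

// presigned GET URL, valid for one hour
const url = client.presign("docsets/mdn.db.gz", { expiresIn: 60 * 60 });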

HTMLRewriter

Streaming HTML transformation without loading a full DOM tree. The API comes from Cloudflare Workers, but Bun implements it natively.

Our documentation pipeline extracts base64 images from HTML, processes them separately, then restores them. HTMLRewriter handles this without parsing the entire document:

preservation.ts
export function extractBase64Images(html: string) {
  const images: Array<[string, string, Record<string, string>]> = [];
  let imageIndex = 0;

  const rewriter = new HTMLRewriter().on("img", {
    element(el) {
      const src = el.getAttribute("src");
      if (!src?.startsWith("data:")) return;

      const marker = `data:image/marker;base64,DNIMG_${imageIndex++}`;
      const attrs: Record<string, string> = {};

      for (const [name, value] of el.attributes) {
        attrs[name] = value;
      }

      images.push([marker, src, attrs]);
      el.setAttribute("src", marker);
    },
  });

  return { html: rewriter.transform(html), images };
}

HTMLRewriter shines for simple, targeted transformations. We also use it to strip interactive elements from MDN documentation that don't work offline: custom elements, iframes, live code playgrounds. Chain multiple .on() selectors for different removal operations.
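
A sketch of that removal chain (the custom-element tag name is illustrative):

// chain selectors; each handler removes its match from the stream
export function stripInteractive(html: string): string {
  return new HTMLRewriter()
    .on("iframe", { element(el) { el.remove(); } })
    .on("interactive-example", { element(el) { el.remove(); } })
    .transform(html);
}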

For complex DOM manipulation (creating elements, traversing node trees), linkedom remains the better tool. HTMLRewriter handles the streaming cases efficiently.

YAML and TOML Imports

Configuration files shouldn't require parsing libraries. Bun agrees.

Both YAML and TOML are first-class citizens. You can import them directly as modules, and Bun parses at import time with hot reloading support in watch mode:

metadata.ts
import rawMetadata from "../../metadata.toml";
import config from "./config.yaml";

// Both are already JavaScript objects
// No parsing, no runtime cost
const entry = rawMetadata.javascript;

When you need to parse strings (fetched content, user input), Bun.YAML.parse handles it:

mdn.ts
const yamlContent = await response.text();
const parsed = Bun.YAML.parse(yamlContent);

Our docset metadata lives in TOML files. License information, attribution text, minimum app versions. The import statement handles everything. If the file is malformed, the import fails at build time rather than crashing at runtime.

Bun.$ for Shell Execution

This one deserves attention because it replaces more than just execa. Every project accumulates shell scripts and Python one-offs: a bash script to sync database backups, a Python script to generate license files, a shell script to run wrangler commands with the right flags. These scripts work, but they live outside your main codebase, lack type checking, and require different tooling to maintain.

Bun.$ lets you write all of that in TypeScript. Tagged template shell commands with ergonomic output handling. The following is from a script to generate our App Store screenshots:

appstore-icon-grid.ts
import { $ } from "bun";

// Convert SVG to PNG at specific dimensions
await $`rsvg-convert -w ${pngWidth} -h ${pngHeight} ${svgPath} -o ${pngPath}`;

// ImageMagick for blur effects
await $`magick ${pngPath} -adaptive-blur 0x16 ${blurredPath}`;

// Composite images with a gradient mask
await $`magick ${pngPath} ${blurredPath} ${maskPath} -composite ${pngPath}`;

// Optimize and cleanup
await $`oxipng -o 4 -s ${pngPath}`.quiet();
await $`rm ${blurredPath} ${maskPath}`;

Variables interpolate safely into the template, .quiet() suppresses stdout for clean logs, and the whole pipeline lives in a TypeScript file with the rest of your codebase. When a path changes or a flag needs adjustment, you refactor code instead of hunting through shell scripts.

Compare this to execa:

// execa approach
import { execa } from "execa";
const { stdout } = await execa("bunx", ["license-checker", "--production", "--json"]);
const licenseInfo = JSON.parse(stdout);

// Bun approach
const licenseInfo = await $`bunx license-checker --production --json`.json();

Template literals feel natural for shell scripting, variables interpolate safely, and the API surface is smaller than execa's while producing cleaner output.
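
Error handling stays compact too. Commands throw on a non-zero exit by default; .nothrow() lets you branch instead (a sketch; the path is illustrative):

import { $ } from "bun";

const pngPath = "icon.png";

const result = await $`oxipng -o 4 -s ${pngPath}`.nothrow();
if (result.exitCode !== 0) {
  console.error(`oxipng failed: ${result.stderr.toString()}`);
}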

And That's Not All

Quick hits on other builtins we use daily:

Bun.hash / xxHash64 generates content hashes for deduplication. We hash SVGs and documentation chunks to avoid storing duplicates. One line: Bun.hash.xxHash64(input).toString(16).

Bun.serve creates HTTP servers with Unix socket support. Our language detection service runs on a Unix socket for fast IPC between TypeScript and Python workers.

bun:test runs our 70 test files with zero configuration. Jest-compatible API, faster execution. The bunfig.toml for our test setup is four lines.

Bun.Glob handles file pattern matching. new Glob("**/*.html").scanSync() finds files without callback gymnastics. Replaces the glob npm package.

Bun.file / Bun.write are cleaner than fs. await Bun.file(path).text() reads a file. await Bun.write(path, content) writes it. The API is obvious.

Bun.sleep is a promisified delay. await Bun.sleep(100) pauses for 100ms. We use it to rate-limit documentation fetches.
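
Several of these compose naturally. A sketch that deduplicates HTML files by content hash (paths and pacing are illustrative):

import { Glob } from "bun";

const seen = new Set<string>();

for (const path of new Glob("build/**/*.html").scanSync()) {
  const html = await Bun.file(path).text();
  const hash = Bun.hash.xxHash64(html).toString(16);

  if (seen.has(hash)) continue; // duplicate content, skip it
  seen.add(hash);

  await Bun.write(`deduped/${hash}.html`, html);
  await Bun.sleep(10); // illustrative pacing between writes
}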

Migrating a Pipeline from Python

We're in the middle of migrating our entire documentation scraping pipeline from Python to TypeScript on Bun. The Python version is 9,500 lines across 60 files with dependencies that Bun and TypeScript now replace: pydantic becomes TypeScript interfaces, sqlmodel becomes bun:sqlite, boto3 becomes the native S3Client, httpx and requests become fetch, xxhash becomes Bun.hash, rustoml becomes direct TOML imports, and nh3 (a Rust-based HTML sanitizer) becomes HTMLRewriter. Even trio for async and orjson for fast JSON are unnecessary when the runtime handles both natively.

The TypeScript version consolidates everything into the same language as our mobile app. The same type definitions that describe our SQLite schema in the React Native app also describe the schema in the build pipeline. When we change a column name, TypeScript catches the mismatch everywhere.
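
A simplified illustration of that sharing, as a hypothetical module both sides import:

// shared/content-row.ts — one definition for the app and the pipeline
export interface DlContentRow {
  id: number;
  page_name: string;
  slug: string;
  chunked_html_zst: Uint8Array;
  raw_text_fts: string;
}

Rename a column in the schema without updating this interface and every query and component that still uses the old name fails to compile.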

Building bespoke scrapers turned out to be faster in TypeScript than Python. Bun's native fetch replaces httpx, HTMLRewriter handles streaming transformations faster than BeautifulSoup ever could, and we use linkedom when we need full DOM manipulation. Extracting navigation trees from documentation sites with Playwright feels natural when your scraping code and your app code share the same language. The CLI uses @clack/prompts for interactive multiselects and spinners that update as the pipeline progresses, which is a nicer experience than Click ever provided.

The migration also eliminates cross-language friction. The Python pipeline used SQLModel with Pydantic for database access, a heavyweight ORM for what amounts to simple inserts and queries. The TypeScript version uses bun:sqlite directly with raw SQL, no ORM overhead. The Python version used a WASM-based zstd codec. The TypeScript version uses native Bun compression. Each boundary we removed was a source of subtle bugs and deployment complexity.

We're not finished yet. The TypeScript pipeline handles MDN, Swift, Nuxt, and several other documentation sources, but some edge cases remain in Python. The architecture is proven, though, and the trajectory is clear: one language, one runtime, one set of tools.

Where Node Still Wins

Production-ready doesn't mean perfect. Know when to fall back.

Serverless platforms haven't caught up. Cloudflare Workers doesn't support the Bun runtime. Our API runs on Hono, which works great on Workers, but we can't use Bun.serve or bun:sqlite in that environment. The serverless ecosystem remains Node-first across the board, from AWS Lambda to Vercel Functions to Cloudflare Workers, and Bun's deploy story is weaker as a result.

Immaturity gaps exist. Bun is production-ready but not bug-free. Our Nuxt marketing site can't use the --bun flag because of a longstanding compatibility issue with ultrahtml, which means the entire site runs on Node because of one dependency conflict. The ecosystem is smaller, obscure packages sometimes break, and when they do you're debugging unfamiliar territory.

The workaround is hybrid deployment. Our scraping pipeline runs on Bun, our API deploys to Workers (Node-compatible), and our marketing site runs on Node. Each component uses the best available runtime for its deployment target. Dogma about "all Bun" or "all Node" misses the point. Ship what works.

What's Next

Bun keeps shipping. Recent versions added native S3 support, improved Windows compatibility, and better Node.js API coverage. The trajectory suggests more dependency elimination ahead.

The question isn't whether Bun is ready. Bun has been ready. The question is whether your deployment targets support it. If they do, the dependency reduction and developer experience improvements are substantial. If they don't, Bun's package manager and local development speed still make it worth installing.

We built DocNative's documentation pipeline entirely on Bun. The scraping, cleaning, database building, compression, and upload steps use native APIs throughout, which meant no better-sqlite3 compilation, no WASM compression overhead, and no AWS SDK dependency trees to manage. The pipeline processes MDN, Swift, Python, and React documentation into offline-ready SQLite databases. Pages load in under 200ms on a phone because we optimized the entire chain, and Bun made that optimization possible without accumulating dependencies.

The app ships documentation that works on airplanes, subways, and anywhere else the network doesn't. The phone never touches raw HTML; it queries a local database, decompresses a chunk, and renders. Bun built the databases, and the builtins made it clean.

Out Now

Bun and TypeScript documentation are both included in DocNative at launch. Read the APIs that built the app, offline.

Read Docs Anywhere

Download DocNative and access documentation offline on iOS and Android.
