PART 1: TypeScript
Chapter 1: Getting Started with TypeScript
JavaScript (the precursor to TypeScript) started life in the browser as a light scripting language that worked inside tiny windows of time. It handled button clicks, page effects, and simple logic. The early web needed something quick and flexible, so JavaScript arrived, small and nimble, ready to run anywhere a browser existed.
Over the years that small language became the backbone of enormous applications. It shifted from sprinkling effects across pages to driving whole interfaces, handling networking, storing data, and powering collaborative tools. This expansion created new pressure. Teams needed stronger tools around structure and meaning. They needed a way to describe their intentions so editors and build systems could help. These needs shaped the world into which TypeScript stepped.
Although JavaScript still runs the web and sits inside every TypeScript file you compile, the modern developer works in a landscape where clarity and safety matter. This chapter opens the door to that world. It explains why TypeScript exists, how to install the tools, how to shape a project, and how to run your first program. Everything else grows from these first steps.
Why TypeScript exists and when to use it
TypeScript grew from a simple idea. JavaScript needed stronger footing for large projects, so TypeScript added a type system that could guide the shape of data and the intent of functions. This change helped developers catch problems early and encouraged designs that felt tidy rather than tangled. TypeScript also introduced modern language features that arrived before they landed in JavaScript itself, giving teams a smoother path toward reliable codebases.
The language is most useful when projects grow bigger than a single file or when several people share the same code. It helps when APIs must stay predictable and when refactoring should feel like shifting a puzzle piece rather than pulling out a wire from a crowded circuit. TypeScript also shines when you want clarity inside editor tooling. Features such as autocomplete, parameter hints, and inline documentation become far more helpful once types begin shaping the world around them.
Although TypeScript transforms into JavaScript before running, most developers treat it as its own language. This approach gives the reader a clean start and avoids the usual pattern where TypeScript is taught as an afterthought. In practice, developers pick up TypeScript first and meet JavaScript naturally as the compiled output. This book follows that pattern.
Installing Node, npm, and the TypeScript compiler
TypeScript needs a runtime that understands the compiled JavaScript it produces. Node provides that environment. Installing Node also installs npm, the Node package manager, so you can fetch packages, manage dependencies, and install the TypeScript compiler. The goal in this section is to create a steady foundation before you write any code.
Most systems use the official installers from the Node website (nodejs.org/en/download). These installers place Node and npm in your system path so you can use them from any terminal. After installation you can confirm everything is ready by asking for version numbers.
node --version
npm --version
Once Node is working, you can install TypeScript globally or within a project folder. Global installation makes the tsc command available everywhere.
npm install -g typescript
You can confirm that the TypeScript compiler works by checking its version.
tsc --version
npm may occasionally notify you about new patch versions or minor fixes. These updates are usually safe to install because they contain improvements that smooth out small issues. Accepting them keeps your tools steady and avoids problems that older versions sometimes create.
At this point the tools are in place. You have a runtime, a package manager, and a compiler ready to translate your TypeScript into clean JavaScript.
Project layout and first tsconfig.json
A TypeScript project grows smoother when the folder structure is simple. Most developers create a root directory with two main parts: a src folder that holds the TypeScript files and an out or dist folder that holds the compiled JavaScript. The TypeScript compiler can produce that second folder automatically once you set up the configuration file:
my-project
├── src
│   ├── main.ts
│   └── …
├── dist
│   └── (compiled files appear here after running tsc)
├── package.json
└── tsconfig.json
The tsconfig.json file tells the compiler what rules to follow. It describes the language level, the strictness settings, the output folder, and any features you want enabled. Start with a small version and expand later once the project grows.
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "ES2020",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src/…"]
}
This configuration keeps the project predictable. The strict option gives you clearer boundaries, the target setting controls the JavaScript flavor, and the outDir separates clean output from edited source files.
tsconfig.json acts like a contract between your code and the compiler. It keeps expectations clear as the project grows.
Once the configuration file exists, the project begins to feel like a real workspace. You can add TypeScript files to the src folder and trust that the compiler will take care of the rest.
Compiling and running a simple program
Now you can create your first real TypeScript file. The file can be as small as a few lines, but even this short example shows the full pipeline from TypeScript to JavaScript to the Node runtime. Create a file named src/main.ts and add a simple greeting.
const message = "Hello TypeScript";
console.log(message);
Next you tell the compiler to translate the file. The compiler reads the configuration, scans the src folder, and drops compiled JavaScript into the output directory.
tsc
You should now see a dist folder containing main.js. This file is pure JavaScript so Node can run it without any additional support.
node dist/main.js
This small loop shows the entire workflow. You write TypeScript, the compiler produces JavaScript, and Node runs the result. Everything else in the language builds on this pattern.
Chapter 2: The Type System Fundamentals
TypeScript’s type system gives shape to your ideas so the compiler can reason about your code before it ever runs. This chapter introduces the core building blocks. You start with the simplest values, then see how TypeScript infers types, how different types can combine, and how to describe structures using aliases and interfaces. These foundations make larger programs steadier and far easier to maintain.
Primitive types, literals, and type annotations
Primitive types describe the simplest values in TypeScript. These include string, number, boolean, null, undefined, bigint, and symbol. When you add a type annotation to a variable, you tell the compiler exactly what kind of data belongs there. Literal types add an extra layer by restricting a value to a specific string or number, which can be useful for states or configuration.
let title: string = "TypeScript Guide";
let count: number = 5;
let ready: boolean = true;
let direction: "left" = "left";
let retryLimit: 3 = 3;
Literal types help the compiler narrow possibilities. A value with the type "left" cannot suddenly become "right". This clarity becomes powerful when building state machines or command systems because each branch becomes more predictable.
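A short sketch of that idea (the Direction union and the step function are illustrative names, not part of any library):

```typescript
// Direction is restricted to exactly two literal strings.
type Direction = "left" | "right";

// Each branch is predictable because dir can only be one of two literals.
function step(from: number, dir: Direction): number {
  return dir === "left" ? from - 1 : from + 1;
}

const pos = step(5, "left"); // 4
// step(5, "up"); // Error: "up" is not assignable to Direction
```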
Type inference and best practices
TypeScript often knows the type of a value without any annotation. This behaviour is called type inference. If you assign a string literal to a variable the compiler recognises it as a string. If you initialise an array of numbers the compiler tracks that pattern automatically. This reduces clutter and keeps your code readable while still giving you the safety of static types.
let username = "robin"; // inferred as string
let scores = [10, 15, 20]; // inferred as number[]
let active = false; // inferred as boolean
Inference works best when variables are declared close to where they are used. It is safer to add explicit annotations when values begin empty or when the intent is not immediately clear. For example a variable intended to hold a number later should not begin as null without an annotation because inference would fix its type too narrowly.
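A sketch of that last point (parsedPort and readPort are hypothetical names): an explicit union annotation keeps a later number assignment legal, where inference from null alone would not.

```typescript
// Annotated as a union; inference alone would fix the type from the initial null.
let parsedPort: number | null = null;

function readPort(raw: string): void {
  const n = Number(raw);
  // Assigning a number is fine because number is part of the declared union.
  if (!Number.isNaN(n)) { parsedPort = n; }
}

readPort("8080"); // parsedPort now holds 8080
```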
Union, intersection, and nullable types
Union types describe values that may take one form or another. This is useful when a function may accept more than one shape. Intersections work in the opposite direction by combining multiple types into one. Nullable types are simply unions that include null or undefined. These patterns let you express real world flexibility while keeping your rules steady.
let value: string | number = "start";
value = 42;
type WithId = { id: number };
type WithName = { name: string };
type User = WithId & WithName;
let user: User = { id: 1, name: "Robin" };
let maybe: string | null = null;
Union types help you model behaviour that branches based on conditions. Intersection types help merge independent structures into a single coherent shape. Nullable types remind you to check for missing values, which avoids runtime surprises.
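A brief sketch of union-driven branching (describeValue is an illustrative name; the narrowing checks themselves are covered in Chapter 3):

```typescript
// The union parameter forces each branch to handle one concrete type.
function describeValue(input: string | number): string {
  if (typeof input === "string") {
    return `text of length ${input.length}`; // input is a string here
  }
  return `number ${input.toFixed(1)}`; // and a number here
}

describeValue("abc"); // "text of length 3"
```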
Type aliases and interfaces
Type aliases and interfaces both describe object shapes. A type alias assigns a name to any kind of type including unions, intersections, and primitives. An interface describes the structure of an object and can be extended by other interfaces. Interfaces communicate intent clearly and type aliases offer flexibility for complex combinations. Both approaches give the compiler a clear contract to enforce.
type Point = {
  x: number;
  y: number;
};
interface Person {
  name: string;
  age: number;
}
interface Employee extends Person {
  id: number;
}
Interfaces are often preferred for modelling objects because they support extension. Type aliases are ideal when building more advanced types such as unions or template literal types. You can choose the form that best matches your design and mix them freely as your project grows.
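A small sketch of mixing the two (these type names are illustrative): an alias carries a union while an interface carries the extensible object shape.

```typescript
// The alias names a union of literals.
type Status = "active" | "inactive";

// The interface models an extensible object shape.
interface Account {
  name: string;
}

interface AdminAccount extends Account {
  status: Status; // the alias slots into the interface naturally
}

const admin: AdminAccount = { name: "Robin", status: "active" };
```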
Chapter 3: Functions and Control Flow
Functions are the building blocks of most TypeScript programs; control flow guides how and when those functions run. This chapter focuses on typing function parameters and return values, practical narrowing techniques, how to model impossible states with never, and patterns for callbacks and typed events.
Function types, parameters, and return types
TypeScript lets you describe a function’s shape precisely. You can annotate parameter types, return types, and the overall callable signature. Clear function types make call sites safer, improve editor assistance, and simplify refactoring.
Function declarations and arrow functions
Both declaration and arrow styles support full type annotations. Prefer explicit return types for exported functions; they help prevent unintended changes from leaking into public APIs.
function add(a: number, b: number): number { return a + b; }
const multiply = (a: number, b: number): number => a * b;
// Function type variable
let op: (x: number, y: number) => number;
op = add; // OK
Optional, defaulted, and rest parameters
Optional parameters use ?. Default values fill in when callers omit the argument. Rest parameters capture the remaining arguments as an array; annotate them with an array type such as number[].
function greet(name?: string): string { return `Hello ${name ?? "friend"}`; }
function power(base: number, exp: number = 2): number { return base ** exp; }
function sumAll(...nums: number[]): number { return nums.reduce((t, n) => t + n, 0); }
this types and call signatures
You can annotate the expected this for a function using a special first parameter named this; this parameter is erased from the compiled JavaScript. The annotation helps when a function is intended to be called with an explicit receiver through call or apply.
interface Point { x: number; y: number; }
function move(this: Point, dx: number, dy: number): void { this.x += dx; this.y += dy; }
const p: Point = { x: 0, y: 0 };
move.call(p, 5, 3);
Overloads for multiple call shapes
Overload signatures describe distinct ways to call the same function. Provide one or more overloads, then implement a single body that narrows parameters at runtime and returns a value compatible with the overload list.
function format(input: number): string;
function format(input: Date): string;
function format(input: number | Date): string {
  if (typeof input === "number") { return input.toFixed(2); }
  return input.toISOString();
}
const a = format(3.14); // string
const b = format(new Date()); // string
Return type inference
TypeScript infers return types from the implementation. For public APIs, write an explicit return type. Use void for functions whose result callers should ignore; reserve undefined for cases where a function intentionally returns that value.
function log(message: string): void { console.log(message); }
// do not rely on a value
function maybeFind(): string | undefined { return Math.random() > 0.5 ? "value" : undefined; }
Narrowing and equality checks
Narrowing refines a union type to a specific member inside a code block. The compiler trusts well known runtime checks such as typeof, in, and equality comparisons; this unlocks property access and method calls that would otherwise be unsafe.
Primitive guards
Use typeof for primitives such as string, number, boolean, bigint, and symbol. The compiler narrows within the guarded branch.
function len(x: string | string[]): number {
  if (typeof x === "string") { return x.length; }
  return x.reduce((t, s) => t + s.length, 0);
}
Property existence
The in operator narrows based on whether a property exists. This works well for object unions where each variant has a distinguishing key.
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };
type Shape = Circle | Square;
function area(s: Shape): number {
  if ("radius" in s) { return Math.PI * s.radius ** 2; }
  return s.side * s.side;
}
Equality checks and discriminated unions
When unions share a literal tag such as kind, compare that tag to narrow precisely. This is a common pattern for state machines and reducers.
type State =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "success"; data: string };
function render(s: State): string {
  if (s.status === "idle") { return "Start"; }
  if (s.status === "loading") { return "Loading…"; }
  if (s.status === "error") { return `Error: ${s.message}`; }
  return `OK: ${s.data}`;
}
Truthiness and nullish checks
Checks like if (value) narrow away null and undefined. For values where empty strings or zero are valid, prefer value ?? fallback instead of truthiness to avoid accidental replacement.
function label(x: string | null | undefined): string {
  if (x) { return x.toUpperCase(); }
  return "UNKNOWN";
}
function safePort(p: number | null | undefined): number { return p ?? 3000; }
Exhaustiveness and never
Exhaustive handling ensures every variant of a union is covered. When a switch or if chain handles all cases, the leftover path becomes never. Use this to detect missing cases during refactoring.
Exhaustive switch with a never guard
Place a final check that assigns the remaining value to a never variable. If a new variant appears, the compiler flags the assignment.
type Result =
  | { type: "ok"; value: number }
  | { type: "fail"; reason: string };
function handle(r: Result): string {
  switch (r.type) {
    case "ok": return `Value ${r.value}`;
    case "fail": return `Fail: ${r.reason}`;
    default: {
      const _exhaustive: never = r; // compile error if a case is missing
      return _exhaustive;
    }
  }
}
assertNever helper
A small utility improves readability. It turns an impossible value into a thrown error at runtime and a compile time error for missing cases.
function assertNever(x: never): never { throw new Error(`Unexpected: ${String(x)}`); }
type Op = { kind: "add"; a: number; b: number } | { kind: "mul"; a: number; b: number };
function compute(op: Op): number {
  switch (op.kind) {
    case "add": return op.a + op.b;
    case "mul": return op.a * op.b;
  }
  return assertNever(op);
}
Intentional non-returning functions
Annotate functions that never produce a value with never. Examples include functions that always throw or endlessly loop. The annotation communicates intent; it also helps control flow analysis.
function fail(msg: string): never { throw new Error(msg); }
function loopForever(): never { for (;;) { /* do work … */ } }
Callbacks, higher order functions, and typed events
Callbacks and event handlers connect parts of your program. With careful types, producers and consumers agree on data shapes; this prevents subtle bugs when parameters change.
Typing callbacks
Express callbacks as function types. For Node style error-first callbacks, model the first parameter as Error | null; this encourages explicit null checks at call sites.
type Callback<T> = (err: Error | null, value?: T) => void;
function asyncRead(path: string, cb: Callback<string>): void {
  setTimeout(() => { cb(null, "file contents"); }, 10);
}
asyncRead("data.txt", (err, value) => {
  if (err) { console.error(err.message); return; }
  console.log(value);
});
Higher order functions
Higher order functions accept functions or return functions. Generics keep types connected across parameters and results; this preserves inference at call sites.
function mapValues<A, B>(xs: A[], f: (a: A) => B): B[] { return xs.map(f); }
const lengths = mapValues(["a", "bb", "ccc"], s => s.length);
Event emitters with typed payloads
You can create a simple typed event system by mapping event names to payload types. The emitter enforces correct payloads for emit and correct handlers for on.
type Events = {
  "connected": { id: string; at: Date };
  "message": { from: string; text: string };
};
class Emitter {
  private handlers: { [K in keyof Events]?: Array<(p: Events[K]) => void> } = {};
  on<K extends keyof Events>(name: K, h: (p: Events[K]) => void): void {
    (this.handlers[name] ??= []).push(h);
  }
  emit<K extends keyof Events>(name: K, payload: Events[K]): void {
    for (const h of this.handlers[name] ?? []) { h(payload); }
  }
}
const bus = new Emitter();
bus.on("message", m => console.log(m.text));
bus.emit("message", { from: "alice", text: "Hi" });
The shared Events map keeps on and emit aligned; both methods are constrained by the same key, so handlers and payloads cannot drift apart.
Typing Promise-based APIs and async functions
async functions return Promise<T> automatically. Annotate the resolved type when clarity helps. When adapting callbacks to promises, express the wrapper’s return type as a Promise.
async function fetchJson(url: string): Promise<unknown> {
  const res = await fetch(url);
  return res.json();
}
function readFileP(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve("contents"), 5);
  });
}
Control flow with guards and early returns
Guard clauses simplify complex branches. Return early when preconditions fail; the remaining code benefits from narrowed types and fewer nested blocks.
function parseId(x: string | null | undefined): number {
  if (x == null) { return -1; }
  const n = Number(x);
  if (Number.isNaN(n)) { return -1; }
  return n;
}
Chapter 4: Objects, Classes, and Interfaces
TypeScript builds large codebases around object shapes, class features, and contracts defined by interfaces. This chapter explains how structural typing guides compatibility, how classes provide fields and accessors, how implements and extends guide reuse, and how abstract classes and mixin patterns help you model flexible hierarchies.
Structural typing
TypeScript uses structural typing; two values are compatible when their shapes match. This differs from nominal typing where names decide compatibility. When you create object literals, the compiler also performs excess property checks to catch likely mistakes such as misspelled properties.
Structural compatibility
If a value has at least the required members of a target type, it is assignable. Extra members usually do not block assignment when the value is not a fresh object literal.
type Point = { x: number; y: number };
const p1 = { x: 1, y: 2, label: "A" };
let pt: Point = p1; // OK; p1 has x and y
Excess property checks for fresh literals
When assigning an object literal directly, the compiler treats it as fresh. Unknown properties trigger a diagnostic which often reveals typos.
type Options = { timeout: number; verbose?: boolean };
const good: Options = { timeout: 5000 };
const bad: Options = { timeout: 5000, verbsoe: true }; // Error; excess property
If you intend a flexible bag of properties, add an index signature or widen with an intermediate variable.
type Bag = { [key: string]: string | number };
const cfg: Bag = { host: "localhost", port: 5432, … };
Readonly properties and exactness
readonly marks intent and prevents writes. Structural typing still applies; a type with readonly members is compatible with another that matches those members as readable fields.
type ReadonlyPoint = { readonly x: number; readonly y: number };
const rp: ReadonlyPoint = { x: 0, y: 0 };
// rp.x = 1; // Error; cannot assign to readonly
Classes, fields, accessors, and visibility
Classes in TypeScript compile to JavaScript classes. You can declare fields, use parameter properties, add get and set accessors, and control visibility with public, private, and protected. You can also mark fields readonly to communicate intent and prevent mutation.
Declaring fields
Fields can be declared with types and initializers. Parameter properties let you declare and initialize a field from a constructor parameter in one step.
class User {
  readonly id: string;
  constructor(public name: string, id: string) {
    this.id = id;
  }
}
const u = new User("Riley", "u-001");
Accessors with get and set
Accessors expose derived values or validate assignments. A setter can narrow input and guard invariants.
class Rectangle {
  constructor(private _w: number, private _h: number) {}
  get area(): number { return this._w * this._h; }
  get width(): number { return this._w; }
  set width(v: number) {
    if (v <= 0) throw new Error("width must be positive");
    this._w = v;
  }
}
Managing visibility
public is the default visibility. private hides members from all external code. protected exposes members to subclasses while keeping them hidden from other consumers.
class Base {
  public a = 1;
  protected b = 2;
  private c = 3;
}
class Derived extends Base {
  demo() {
    this.a; // OK
    this.b; // OK
    // this.c; // Error; private
  }
}
Use private for invariants and protected for subclass hooks; this keeps APIs small and intent clear.
implements and extends
implements checks that a class satisfies an interface. extends creates a subclass that inherits members and behavior. These features combine structural contracts with reuse.
Using implements for contracts
A class can implement one or more interfaces. The compiler verifies that required members exist with compatible types.
interface Serializer<T> { serialize(value: T): string; }
class JsonSerializer<T> implements Serializer<T> {
  serialize(value: T): string { return JSON.stringify(value); }
}
Using extends for inheritance
Subclasses add or refine behavior. Use super to call the base constructor or methods.
class Animal {
  constructor(public name: string) {}
  speak(): string { return "..."; }
}
class Dog extends Animal {
  speak(): string { return "woof"; }
}
Multiple contracts
A class may implement many interfaces, and it may extend exactly one base class. Combine both when you need reuse and guarantees.
interface Describable { describe(): string; }
class Service extends JsonSerializer<unknown> implements Describable {
  describe(): string { return "Service using JSON"; }
}
Abstract classes
Abstract classes define partial implementations with abstract members that subclasses must provide. Mixin patterns compose behavior by creating class factories that add members to a base class. Use abstract classes when you want inheritable behavior with enforced overrides; use mixins when you prefer composition with small reusable traits.
Defining and extending an abstract class
Mark a class abstract to prevent direct construction. Mark methods abstract to require overrides.
abstract class Formatter {
  abstract format(value: unknown): string;
  prefix(v: string): string { return "[[" + v + "]]"; }
}
class UpperFormatter extends Formatter {
  format(value: unknown): string { return this.prefix(String(value).toUpperCase()); }
}
const f = new UpperFormatter();
// const bad = new Formatter(); // Error; abstract
A simple mixin factory
A mixin is a function that takes a constructor and returns a new constructor with added members. This composes features without deep inheritance.
type Ctor<T = {}> = new (...args: any[]) => T;
function Timestamped<TBase extends Ctor>(Base: TBase) {
  return class extends Base {
    createdAt = new Date();
    touch() { this.createdAt = new Date(); }
  };
}
class Model { id = crypto.randomUUID(); }
class Note extends Timestamped(Model) {
  text = "";
}
const n = new Note();
n.touch();
You can add implements on the class that extends the mixin to advertise the added shape publicly.
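One way to do that, sketched here with a hypothetical HasTimestamp interface (the Timestamped factory is repeated so the sketch stands alone):

```typescript
type Ctor<T = {}> = new (...args: any[]) => T;

// The members the mixin adds, stated as a public contract.
interface HasTimestamp {
  createdAt: Date;
  touch(): void;
}

function Timestamped<TBase extends Ctor>(Base: TBase) {
  // implements verifies the returned class provides the advertised members.
  return class extends Base implements HasTimestamp {
    createdAt = new Date();
    touch() { this.createdAt = new Date(); }
  };
}

class Task {}
class TrackedTask extends Timestamped(Task) {}

// Callers can accept the capability instead of a concrete class.
function age(x: HasTimestamp): number {
  return Date.now() - x.createdAt.getTime();
}

const t = new TrackedTask();
age(t);
```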
Choosing between abstract classes and mixins
Prefer an abstract base when subclasses share a core algorithm with required hooks; prefer mixins when you want orthogonal capabilities such as logging, events, timestamps, or caching that apply to unrelated classes.
Chapter 5: Generics and Advanced Types
Generics let you write code that carries meaning from one type to another; advanced types reshape those meanings into new forms. This chapter explores constraints for safer generic functions, the expressive power of mapped and conditional types, the precision of keyof and infer, and the design of reusable APIs built from TypeScript’s utility types.
Generic functions
Generics describe relationships between input and output types. Without them, functions lose context and collapse into any. Constraints protect your logic by limiting what a generic type parameter must support.
Basic generic functions
A simple generic function preserves type information across parameters and return values. The caller’s argument drives the type parameter, which gives predictable results.
function identity<T>(value: T): T { return value; }
const a = identity(42); // number
const b = identity("hello"); // string
Constraints with extends
Use a constraint when your logic depends on certain members. A constraint does not change the type itself; it simply restricts what callers can provide.
function lengthOf<T extends { length: number }>(value: T): number {
  return value.length;
}
lengthOf("abc");
lengthOf([1, 2, 3]);
Generic types
Generic interfaces and classes capture the shape of reusable containers such as stores, caches, or collections. They preserve type relationships between operations.
interface Box<T> { value: T; }
class Store<T> {
  constructor(private item: T) {}
  get(): T { return this.item; }
  set(value: T): void { this.item = value; }
}
const s = new Store<number>(10);
Default type parameters
Default type parameters smooth over common use cases. They let callers omit a parameter when the default communicates a reasonable choice.
interface Response<T = unknown> {
  status: number;
  data: T;
}
const r: Response = { status: 200, data: { … } };
Mapped, conditional, and template literal types
Advanced types reshape old structures into new ones. Mapped types transform properties, conditional types branch on type relationships, and template literal types blend string patterns with type safety.
Mapped types
Mapped types iterate over property keys and produce new objects that reuse those keys with transformed rules. This is the mechanism behind built in utility types.
type Readonlyify<T> = { readonly [K in keyof T]: T[K] };
type User = { id: string; name: string };
type LockedUser = Readonlyify<User>;
Conditional types
Conditional types follow the form A extends B ? X : Y. They describe logic that produces different shapes depending on relationships between types.
type IsString<T> = T extends string ? true : false;
type A = IsString<string>; // true
type B = IsString<number>; // false
Distributive conditional types
When a conditional type receives a union, it distributes across each member. This builds powerful transformations that feel like type level iteration.
type ToArray<T> = T extends any ? T[] : never;
type C = ToArray<string | number>; // string[] | number[]
Template literal types
Template literal types combine unions and string interpolation to describe allowable string patterns. They are useful for event names, route keys, or command identifiers.
type EventName = `user:${"create" | "delete" | "update"}`;
const ok: EventName = "user:create";
// const bad: EventName = "user:login"; // Error
keyof, indexed access, and infer
The keyof operator extracts property names. Indexed access retrieves the type of a specific property. The infer keyword captures types inside conditional branches, allowing you to pull out pieces such as return types or parameter types.
keyof for property keys
keyof produces a union of the keys of an object type. This feeds into generic utilities that operate on specific members.
type Config = { host: string; port: number };
type Keys = keyof Config; // "host" | "port"
Indexed access for property types
Use indexed access to reach into object types the same way you reach into values.
type HostType = Config["host"]; // string
Capturing types with infer
infer introduces a placeholder inside a conditional type. This extracts a hidden type from another type.
type ReturnTypeOf<F> = F extends (...args: any[]) => infer R ? R : never;
function f() { return { ok: true }; }
type R = ReturnTypeOf<typeof f>; // { ok: true }
infer is a quiet sidekick; it gives you the missing pieces when building expressive type transformations.
Extracting tuple members
Conditional types with infer can deconstruct tuples or argument lists. This proves useful when modelling decorators or function wrappers.
type First<T> = T extends [infer A, ...any[]] ? A : never;
type X = First<[string, number, boolean]>; // string
Utility types
Utility types are packaged patterns built on advanced types. Studying them helps you design your own reusable APIs. Combine them with constraints and generics to produce consistent and predictable interfaces.
Core utility types
Most projects rely on Partial, Required, Readonly, Pick, and Record. They cover common transformations such as optionality, immutability, and selection.
type PartialUser = Partial<User>;
type RequiredUser = Required<User>;
type UserRecord = Record<string, User>;
Composing utility types
Combine utilities to create specialised versions of data models. This lets you carve precise shapes from broad interfaces.
type UserProfile = { id: string; name: string; email: string };
type UpdateUser = Partial<Pick<UserProfile, "name" | "email">>;
Designing fluent and predictable APIs
When building libraries, rely on patterns that preserve relationships across calls. Use generics to link inputs and outputs; use constraints to enforce shape expectations. This creates APIs that feel consistent and help readers follow your intent.
interface Query<T> {
  where<K extends keyof T>(key: K, value: T[K]): Query<T>;
  select<K extends keyof T>(...keys: K[]): Pick<T, K>[];
}
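A minimal in-memory sketch of how such an interface might be implemented (ArrayQuery is a hypothetical name, not a library class; the Query interface is repeated so the sketch stands alone):

```typescript
interface Query<T> {
  where<K extends keyof T>(key: K, value: T[K]): Query<T>;
  select<K extends keyof T>(...keys: K[]): Pick<T, K>[];
}

// Filters rows in memory; each where call returns a narrowed query.
class ArrayQuery<T> implements Query<T> {
  constructor(private rows: T[]) {}

  where<K extends keyof T>(key: K, value: T[K]): Query<T> {
    return new ArrayQuery(this.rows.filter(r => r[key] === value));
  }

  select<K extends keyof T>(...keys: K[]): Pick<T, K>[] {
    return this.rows.map(r => {
      const out = {} as Pick<T, K>;
      for (const k of keys) { out[k] = r[k]; }
      return out;
    });
  }
}

const q = new ArrayQuery([{ id: 1, name: "a" }, { id: 2, name: "b" }]);
const names = q.where("id", 1).select("name"); // [{ name: "a" }]
```

Because where is typed against keyof T, a misspelled key or a value of the wrong type fails at compile time rather than at runtime.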
Balancing precision with usability
Precise types feel impressive, but the goal is trust and clarity. A tiny reduction in precision can make an API easier for real callers. Aim for types that explain themselves and match the runtime behavior without overwhelming readers.
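As a sketch of that trade-off (both function names are illustrative), compare an over-precise signature with a simpler one that most callers will find friendlier:

```typescript
// Over-precise: the conditional return type is accurate but hard to read.
function firstStrict<T extends readonly unknown[]>(
  xs: T
): T extends readonly [infer A, ...unknown[]] ? A : undefined {
  // A cast is needed because the compiler cannot prove the conditional branch.
  return xs[0] as T extends readonly [infer A, ...unknown[]] ? A : undefined;
}

// Simpler: slightly less precise, far easier to understand and use.
function first<T>(xs: readonly T[]): T | undefined {
  return xs[0];
}

const head = first([1, 2, 3]); // number | undefined
const exact = firstStrict([1, 2, 3] as const); // literal type 1
```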
Chapter 6: Modules and Project Organization
Modular code keeps projects understandable and maintainable; it also enables incremental builds and clean public surfaces. TypeScript compiles to JavaScript while preserving your module boundaries, so understanding formats, resolution rules, and project layout is essential. This chapter explains how module kinds affect runtime behavior, how to organize exports and imports, how to describe ambient shapes with .d.ts files, and how to ship type definitions that work for consumers using different toolchains.
ES modules, CommonJS, and compilerOptions
JavaScript has two major module systems in real projects. ES modules use import and export syntax with strict semantics and static analysis; CommonJS uses require and module.exports with dynamic semantics. TypeScript supports both. The combination of your tsconfig.json settings and your package.json determines how the emitted code behaves at runtime.
| Aspect | ES modules | CommonJS |
| Syntax | import { x } from "./x.js" | const x = require("./x") |
| Interop | Default and named imports; live bindings | module.exports object; values copied on require |
| Top level await | Supported in modern runtimes | Not natively supported |
| Node detection | "type": "module" in package.json or .mjs files | Default in Node when no "type": "module" and files use .cjs or .js |
For TypeScript, the module compiler option selects the emitter target. The moduleResolution option selects how imports are resolved. Modern Node workflows usually choose "module": "NodeNext" and "moduleResolution": "NodeNext" to align with current Node behavior. Browser builds often use bundler resolution through tools that understand bare specifiers and aliases.
Enable esModuleInterop to smooth default import semantics; it keeps your source readable while the codebase transitions.
tsconfig.json choices that map to runtime behavior
Your compiler options should reflect the environment where the output runs. Pick a module target that your runtime understands; pick a resolution strategy that matches how your runtime or bundler finds files.
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"rootDir": "src",
"outDir": "dist",
"declaration": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"skipLibCheck": true,
"strict": true
}
}
With NodeNext, TypeScript respects package.json fields like type and exports, and it requires file extensions in import paths that match runtime expectations.
package.json alignment for Node and bundlers
Your package metadata influences module loading. Use "type" to set the default format and use "exports" to define public entry points. Provide both import and require conditions when you publish libraries that support dual consumers.
{
"name": "my-lib",
"version": "1.0.0",
"type": "module",
"main": "./dist/index.cjs",
"module": "./dist/index.js",
"types": "./dist/index.d.ts",
"exports": {
".": {
"import": "./dist/index.js",
"require": "./dist/index.cjs",
"types": "./dist/index.d.ts"
}
}
}
With NodeNext, include file extensions in relative import paths, such as import "./util.js"; omitting the extension may compile but fails at runtime in strict ESM setups.
Examples of ESM and CJS source
The following snippets show equivalent exports and imports using each style. Prefer one style within a project for clarity; mixed styles make resolution rules harder to reason about.
// ESM: src/math.ts
export function add(a: number, b: number) { return a + b; }
export default function mul(a: number, b: number) { return a * b; }
// ESM consumer
import mul, { add } from "./math.js";
console.log(add(2, 3), mul(2, 3));
// CJS: src/math.cjs
function add(a, b) { return a + b; }
function mul(a, b) { return a * b; }
module.exports = { add, default: mul };
// CJS consumer
const { add, default: mul } = require("./math.cjs");
console.log(add(2, 3), mul(2, 3));
Barrel files and path mapping
A barrel groups exports from multiple modules into a single public surface. This reduces import verbosity and makes refactors simpler; the trade off is that careless wildcards can reexport more than intended. Path mapping shortens import strings by creating aliases that map to real directories during compilation.
Creating a focused barrel
Export exactly what callers need. Avoid blanket reexports from deep trees where side effects are not obvious. Keep the barrel small and intentional.
// src/index.ts
export { parseUser } from "./users/parseUser.js";
export { saveUser } from "./users/saveUser.js";
export type { User, UserId } from "./users/types.js";
Re-export types with export type so the emitter can erase them; this keeps the JavaScript output minimal.
paths and baseUrl in tsconfig.json
Path mapping replaces long relative segments with short aliases. The compiler rewrites these during type checking; your runtime still needs matching behavior through a bundler or a loader.
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@lib/*": ["src/lib/*"],
"@models/*": ["src/models/*"]
}
}
}
After configuring the mapping, import from aliases instead of long relative paths.
// before
import { Widget } from "../../../lib/widgets/widget.js";
// after
import { Widget } from "@lib/widgets/widget.js";
Note that tsc does not rewrite paths in emitted output by itself. Use a bundler that honors tsconfig paths or install a runtime resolver such as a loader that maps aliases to real paths.
Project references with barrels
Large repositories often split code into buildable units that reference each other. Combine project references with barrels to expose crisp entry points while keeping incremental builds fast.
// tsconfig.build.json
{
"files": [],
"references": [{ "path": "./packages/core" }, { "path": "./packages/web" }]
}
Each referenced package should emit declarations so the top level barrel can surface types across boundaries without extra wiring.
Ambient declarations and .d.ts files
Ambient declarations describe shapes that exist at runtime without providing implementations. Use them for globals, non TS modules, and augmentation. Keep them separate from implementation files so consumers can rely on them without pulling code.
Declaring globals with declare
Put global types in a dedicated .d.ts file. Use declare global to avoid accidental pollution of the module scope.
// global.d.ts
export {};
declare global {
interface Window {
appVersion: string;
}
}
The empty export marks the file as a module; this prevents unintended global declarations that leak during compilation.
Declaring modules for untyped packages
When a dependency has no types, stub the module with an ambient declaration. Start minimal; refine later as usage grows.
// types/external-lib/index.d.ts
declare module "external-lib" {
export interface Options { verbose?: boolean; }
export function run(options?: Options): Promise<void>;
}
Keep such stubs in a dedicated types folder and add it to typeRoots if you want strict control over discovery.
typeRoots, types, and library filtering
Use typeRoots to point TypeScript at folders that contain package style types; use types to enable a specific list of global packages. These options reduce noise in projects that only need a subset of declarations.
{
"compilerOptions": {
"typeRoots": ["./types", "./node_modules/@types"],
"types": ["node", "jest"]
}
}
If you omit types, TypeScript includes all packages under @types by default, which may add unwanted globals.
Publishing type definitions
Libraries should publish stable .d.ts files so users get IntelliSense and safety. You can emit declarations from your source or you can hand write them. Include proper entry points in package.json so all toolchains find the types.
Emit .d.ts from source
Turn on declaration emit and keep your public API explicit. The compiler will generate matching .d.ts files that mirror your exports.
// tsconfig.json
{
"compilerOptions": {
"declaration": true,
"declarationMap": true,
"emitDeclarationOnly": false,
"outDir": "dist"
}
}
// src/index.ts
export interface Config { retries?: number; }
export function connect(url: string, cfg?: Config) { /* … */ }
After compilation, publish dist/index.d.ts along with your JavaScript files. Declaration maps help editors jump to source when available.
types entry and dual support
Point consumers at your primary declaration file by adding a types entry. For packages with multiple entry points, prefer conditional exports with a types key to keep everything aligned.
{
"name": "my-lib",
"version": "1.0.0",
"type": "module",
"exports": {
".": {
"import": "./dist/index.js",
"require": "./dist/index.cjs",
"types": "./dist/index.d.ts"
},
"./cli": {
"import": "./dist/cli.js",
"require": "./dist/cli.cjs",
"types": "./dist/cli.d.ts"
}
}
}
Keep the paths in exports consistent with what you publish. A missing declaration path breaks editor tooling even when the runtime code works.
Publishing to @types versus bundling your own
If your package is authored in JavaScript, you can ship a separate @types/your-package definition on DefinitelyTyped or bundle .d.ts files directly in your package. Bundling is simple for consumers because npm install delivers runtime code and types in one step; hosting on DefinitelyTyped works when you do not control the original package or you want the community to maintain the declarations.
// package.json for JS source that includes types
{
"name": "cool-js-lib",
"version": "1.0.0",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"files": ["dist", "README.md", "LICENSE"]
}
When contributing to DefinitelyTyped, follow its guidelines and versioning rules; declaration packages track the major version of the upstream library to keep compatibility clear.
Chapter 7: Working with the JavaScript Ecosystem
TypeScript sits inside the wider JavaScript world like a lens that sharpens everything passing through it. You keep writing regular JavaScript; TypeScript adds structure and safety without changing the underlying runtime. This chapter explores how to adopt stricter rules gradually, how to work with third party libraries that may or may not have types, how JSDoc can add structure to plain JavaScript, and how to blend both languages in projects that grow organically.
Strict mode and incremental adoption
Strict mode in TypeScript is a bundle of checks that tighten your code. It does not change runtime behavior; the effect is entirely in the compiler. Teams often start with partial strictness then gradually enable the full set as the project matures. This approach keeps friction low while guiding the codebase toward predictable behavior.
| Option | Description |
| strictNullChecks | Ensures null and undefined are handled explicitly |
| noImplicitAny | Flags values whose type would silently fall back to any |
| strictFunctionTypes | Makes function assignments safer by checking parameter variance |
| alwaysStrict | Emits JavaScript in strict mode and parses files as strict code |
The safest path is to switch on strict then override individual items only if the migration becomes difficult. Keep exceptions temporary; revisit them as code stabilizes.
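For example, under strictNullChecks the compiler refuses code that forgets to handle a possible null; firstWord here is an illustrative helper.

```typescript
// Illustrative: strictNullChecks forces the null case to be handled.
function firstWord(text: string | null): string {
  // Without this guard, calling text.split below would be a compile error
  // because text might be null.
  if (text === null) return "";
  return text.split(" ")[0] ?? "";
}
```

The same function compiles silently without strictNullChecks, which is exactly the class of latent bug the flag exists to surface.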
Gradual rollout inside large repositories
For monorepos or multi package setups, you can adopt strict mode package by package. Turn on strict in leaf packages first; this limits the blast radius while helping you strengthen the foundation.
// packages/widget/tsconfig.json
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"strict": true
}
}
After finishing the leaf packages, move upward until every package compiles with the same set of strict rules.
Typing third party libraries
Many JavaScript libraries ship with bundled types; many others rely on community maintained definitions. TypeScript can only offer strong checks when it knows the shapes it is working with, so understanding where types come from is essential.
| Source | How types are delivered |
| Bundled with the library | package.json includes a types entry |
| Community maintained | Install from @types/… on DefinitelyTyped |
| Absent entirely | Create a .d.ts stub with partial shapes |
When a library does not ship types, you can install a matching definition from DefinitelyTyped. These packages follow the library version through a convention where the major version stays aligned. If no suitable definition exists, write your own ambient module.
// minimal types for an untyped package
declare module "cool-js-lib" {
export function init(id: string): void;
export function shutdown(): void;
}
Contributing upstream
If you discover missing or incorrect definitions, you can improve the ecosystem by contributing fixes. For libraries you control, add .d.ts or migrate to TypeScript. For libraries maintained by others, contribute to DefinitelyTyped. Clear versioning and small, focused changes keep these contributions easy to review.
JSDoc types and JS projects
JavaScript projects can benefit from TypeScript checking without converting to .ts files. JSDoc annotations let you describe shapes inside plain JavaScript. With checkJS enabled, the compiler treats JSDoc comments as type hints and reports mismatches in the same way it reports them for TypeScript.
Annotating functions
JSDoc tags give JavaScript a thin layer of structure. They are lightweight, readable, and friendly to mixed teams where not everyone works directly with TypeScript.
// utils.js
/**
* @param {number} a
* @param {number} b
* @returns {number}
*/
export function add(a, b) {
return a + b;
}
When checkJS is active, TypeScript validates calls to add and warns if arguments or return values do not match the annotated shapes.
@typedef or @template to create reusable structures or generics that span several functions.
Mixed codebases
Real world projects often mix .js and .ts files. TypeScript handles these setups smoothly. allowJs lets the compiler include JavaScript files; checkJs turns on full semantic checking for them.
{
"compilerOptions": {
"allowJs": true,
"checkJs": true,
"outDir": "dist"
},
"include": ["src"]
}
You can migrate files one by one from .js to .ts; TypeScript uses JSDoc types until you are ready to switch to full annotations.
Sharing types between TS and JS
Shared type definitions help both file kinds understand a common domain. Place reusable types in .d.ts files and let both JavaScript and TypeScript import them or receive them through global augmentation.
// types/domain.d.ts
export interface UserProfile {
id: string;
email: string;
}
JavaScript consumers can import these types purely for documentation or use @typedef to leverage them indirectly.
Interoperating with plain JavaScript
Most teams adopt TypeScript gradually; the language is designed to fit alongside existing JavaScript rather than replace it in one step. Interoperation works well when boundaries are clear and when you plan for a little flexibility at the edges.
Calling JS from TS
You can call JavaScript files directly from TypeScript. The compiler infers types when possible then falls back to any if the shapes are unclear. Adding JSDoc or ambient types strengthens the connection without rewriting the file.
// js/logger.js
export function log(msg) {
console.log("[log]", msg);
}
// ts/app.ts
import { log } from "./logger.js";
log("ready");
The import resolves normally. The only difference is that TypeScript works with inferred shapes rather than declared ones.
Calling TS from JS
JavaScript consumers can import TypeScript compiled output like any other module. Ship the JavaScript files and keep the .d.ts definitions alongside them so editors supply IntelliSense. From the JavaScript side, the code behaves like a normal library.
// js/main.js
import { sum } from "../dist/math.js";
console.log(sum(3, 4));
This flow preserves a clean runtime story. TypeScript remains an authoring tool while JavaScript continues to run unmodified.
Soft edges between both worlds
As projects evolve, the boundary between JavaScript and TypeScript shifts. Some teams keep utility files in JavaScript for fast sketching; others keep legacy directories untouched until time allows for migration. Both approaches work when you maintain clear interfaces and publish accurate types. The goal is to let both languages cooperate without friction.
Chapter 8: Tooling, Linting, and Formatting
Great tooling makes TypeScript projects feel responsive during development and predictable in production. This chapter focuses on the TypeScript compiler and build strategies, linting with ESLint, formatting with Prettier, and creating reliable debugging experiences with source maps. The goal is a feedback loop that is fast, consistent, and clear.
tsc and project references
The TypeScript compiler tsc is more than a transpiler. It can type check, emit JavaScript, create incremental builds, and orchestrate multi package repositories. Thoughtful configuration in tsconfig.json gives major speed improvements and clearer boundaries between packages.
Core tsconfig.json options
Start with sensible compiler options. The combination of incremental and composite enables caching and project references. Use skipLibCheck to reduce work during inner loop development, then re enable for release builds if your quality bar requires it.
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"rootDir": "src",
"declaration": true,
"sourceMap": true,
"inlineSources": true,
"strict": true,
"incremental": true,
"composite": true,
"skipLibCheck": true,
"tsBuildInfoFile": ".tsbuildinfo",
"isolatedModules": true
},
"include": ["src/…"]
}
Consider keeping skipLibCheck true during development and running a separate full type check in CI so daily edits stay quick, while releases still receive thorough verification.
Multi package repos
Project references let one package build only after its dependencies have compiled. Each package becomes a standalone project with an output directory and declarations. The repository root then drives a coordinated build using the -b flag.
// ./packages/util/tsconfig.json
{
"compilerOptions": {
"outDir": "dist",
"rootDir": "src",
"declaration": true,
"composite": true
},
"include": ["src/…"]
}
// ./packages/app/tsconfig.json
{
"compilerOptions": {
"outDir": "dist",
"rootDir": "src",
"composite": true
},
"references": [{ "path": "../util" }],
"include": ["src/…"]
}
// ./tsconfig.json at repo root
{
"files": [],
"references": [
{ "path": "packages/util" },
{ "path": "packages/app" }
]
}
Build everything using a single command that respects reference order and caches results between runs.
npx tsc -b
Speed tactics that make a difference
Small adjustments compound to large gains. Target a modern ECMAScript level to reduce transform work. Avoid unnecessary type widening by keeping files focused. Prefer paths with baseUrl for clean imports and predictable cache keys. Keep your outDir clean to avoid stale artifacts.
After changing target or module, clear old outputs. Stale JavaScript can mask problems and slow the next build while the compiler tries to reconcile mismatched artifacts.
ESLint recommended rules
ESLint enforces code quality and consistency that go beyond formatting. With the TypeScript parser, it can understand types to catch logic issues. Use a minimal core plus well chosen plugin rules so checks run fast while still guiding good practices.
Parser and plugin setup
Install the parser and the main plugin, then create a config file that points to your tsconfig.json. The parser uses your settings to understand path mapping and module resolution.
npm i -D eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin
// eslint.config.js (flat config)
import ts from "@typescript-eslint/eslint-plugin";
import tsParser from "@typescript-eslint/parser";
export default [
{
files: ["**/*.ts", "**/*.tsx"],
languageOptions: {
parser: tsParser,
parserOptions: {
project: "./tsconfig.json",
tsconfigRootDir: import.meta.dirname
}
},
plugins: { "@typescript-eslint": ts },
rules: {
"@typescript-eslint/consistent-type-imports": "warn",
"@typescript-eslint/no-unused-vars": ["warn", { "argsIgnorePattern": "^_" }],
"@typescript-eslint/no-floating-promises": "error",
"no-console": "off"
}
}
];
Recommended rule themes
Choose rules that prevent common pitfalls in TypeScript codebases. Focus on unused entities, unsafe promises, and clarity around imports. Prefer warning severity for stylistic nudges and error severity for correctness.
| Concern | Example rule | Why it helps |
| Unawaited async work | @typescript-eslint/no-floating-promises | Prevents silent failures and lost errors |
| Type only imports | @typescript-eslint/consistent-type-imports | Improves tree shaking and intent |
| Dead code | @typescript-eslint/no-unused-vars | Keeps surfaces small and clear |
| Any leakage | @typescript-eslint/no-explicit-any as warn | Highlights escaping the type system |
Running ESLint
Wire scripts so developers can lint changed files locally and CI can run the full set. Keep the command parameters simple and predictable so contributors do not have to remember flags.
// package.json
{
"scripts": {
"lint": "eslint .",
"lint:fix": "eslint . --fix"
}
}
Run eslint --max-warnings 0 in CI when you want warning free merges. This makes the bar explicit and avoids slow back and forth about which violations are acceptable.
Prettier integration
Prettier removes style debates by formatting code to a consistent standard. Let Prettier own layout and whitespace while ESLint focuses on logic quality. Avoid overlapping responsibilities to keep feedback clear.
Install and configure
Install the formatter and a small config. Keep options minimal. The fewer toggles you use, the easier it is for teams to accept automatic formatting everywhere.
npm i -D prettier eslint-config-prettier
// .prettierrc.json
{
"printWidth": 100,
"singleQuote": true,
"trailingComma": "all"
}
// eslint.config.js (add last)
import prettier from "eslint-config-prettier";
export default [
/* … existing entries … */,
prettier
];
Editor and script workflow
Enable format on save in your editor and expose a project script so contributors can format on demand. This ensures consistent output across platforms and avoids manual adjustments.
// package.json
{
"scripts": {
"format": "prettier --write .",
"format:check": "prettier --check ."
}
}
Avoid lint rules that conflict with Prettier formatting. Keep eslint-config-prettier as the final extension in your config so formatting related lint rules are disabled.
Ignoring files and regions
Sometimes generated files or embedded content should not be reformatted. Use ignore files and inline directives sparingly so exceptions remain rare and obvious.
// .prettierignore
dist
coverage
*.min.js
// Inline ignore next line
// prettier-ignore
const layout = [1, 2, 3];
Use prettier --cache for large repositories. The cache speeds up repeated runs by skipping unchanged files.
Source maps and debugging
Source maps link emitted JavaScript back to your original TypeScript so stack traces and breakpoints make sense. Configure maps in the compiler, ensure Node understands them, and set up editor launch settings for a smooth workflow.
Compiler settings that help stack traces
Enable sourceMap and inlineSources so debuggers can display the original files. Keep outputs and sources side by side using outDir so paths stay predictable.
{
"compilerOptions": {
"outDir": "dist",
"sourceMap": true,
"inlineSources": true
}
}
Running Node with source map support
Modern Node reads source maps from //# sourceMappingURL comments. Use the --enable-source-maps flag to improve stack traces from unhandled errors and promise rejections.
node --enable-source-maps dist/index.js
During development, many teams use tsx or ts-node to run TypeScript directly. Both tools produce stack traces that map to .ts files when configured with source maps.
npm i -D tsx
// package.json
{
"scripts": {
"dev": "tsx watch src/index.ts"
}
}
Debugging in VS Code
Create a launch configuration that points to the built file or uses a runtime that understands TypeScript. Keep the mapping simple so breakpoints bind reliably with or without a watch task.
// .vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Run built app",
"type": "node",
"request": "launch",
"program": "${workspaceFolder}/dist/index.js",
"cwd": "${workspaceFolder}",
"runtimeArgs": ["--enable-source-maps"]
},
{
"name": "Run via tsx",
"type": "node",
"request": "launch",
"runtimeExecutable": "tsx",
"runtimeArgs": ["watch"],
"program": "${workspaceFolder}/src/index.ts",
"cwd": "${workspaceFolder}"
}
]
}
If breakpoints stop binding, delete dist and your .tsbuildinfo file, then rebuild. Old maps cause confusing breakpoint behavior.
Chapter 9: Testing and Types
Types reduce whole classes of bugs; tests prevent regressions and document intent. Together they create a fast, confident workflow where small edits stay safe, and large refactors remain predictable. This chapter shows how to choose a runner, wire configuration for TypeScript, write type aware tests and safe test doubles, use types as executable specifications, and measure coverage so quality does not drift.
Choosing a test runner and configuration
Your test runner should feel invisible during day to day work. The three most common choices in the TypeScript and Node ecosystem are jest, vitest, and the built in node:test module. Each can run TypeScript through a transformer or a just in time compiler; each can also run the JavaScript output produced by tsc. The best choice depends on the mix of speed, features, and ecosystem plugins you need.
Feature comparison
This quick table highlights common selection criteria. Treat it as a starting point rather than a rule.
| Runner | Philosophy | TS strategy | Mocking | Coverage | Watch mode |
| vitest | Fast unit tests in Node or browser like environments | Transform with esbuild or tsconfig paths; also runs JS built by tsc | Built in vi.fn, spies, timers | V8 coverage via --coverage | Yes; smart reruns |
| jest | Batteries included test platform | Transform via ts-jest or babel-jest; also runs JS built by tsc | Built in jest.fn, module mocking | Istanbul with configuration | Yes; filters and patterns |
| node:test | Minimal core module for simple tests | Run JS output from tsc; pair with tsx or ts-node if needed | Manual or library based | Via c8 or Node flags | Yes; simple reruns with external tools |
Whichever runner you choose, keep TypeScript compilation simple. Either compile once with tsc -p tsconfig.json then run tests against dist, or rely on a fast transformer and keep tsconfig.json aligned with your test environment.
Project layout and tsconfig.json
Place tests beside source files or under a tests directory; both patterns work as long as file globs are clear. Add a small test specific tsconfig that extends the main one so editor tooling knows about globals and paths.
{
"extends": "./tsconfig.json",
"compilerOptions": {
"types": ["node", "vitest/globals"],
"noEmit": true
},
"include": ["src", "tests"]
}
For jest change the types entry to include jest. For node:test keep only node and ensure the test runner loads files from your built output directory when you compile first.
Minimal configurations
These examples assume ESM with path mapping. Adjust the parts inside braces as needed; unknown sections are shown as ….
// vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
include: ['src/**/*.test.ts'],
environment: 'node'
},
resolve: { alias: { '@': new URL('./src', import.meta.url).pathname } }
});
// jest.config.ts
import type { Config } from '@jest/types';
const config: Config.InitialOptions = {
testMatch: ['<rootDir>/src/**/*.test.ts'],
transform: { '^.+\\.tsx?$': ['ts-jest', { tsconfig: 'tsconfig.json' }] },
moduleNameMapper: { '^@/(.*)$': '<rootDir>/src/$1' }
};
export default config;
// package.json
{
"type": "module",
"scripts": {
"build": "tsc -p tsconfig.json",
"test": "vitest",
"test:watch": "vitest --watch",
"coverage": "vitest --coverage"
}
}
Keep target and module in tsconfig.json compatible with your runtime. If Node executes native ESM, set "module": "esnext" and point the runner at ESM output or a transformer that understands ESM.
Type aware tests
Type aware tests fail fast when public shapes drift. You can assert value behavior with the runner, and assert type intent with compile time checks that run alongside tests. Use helpers that encode contracts as types and keep doubles aligned with the interfaces they pretend to implement.
Asserting types during tests
Two practical patterns exist. First, compile test files so any misuse of APIs fails the build. Second, add explicit type assertions inside tests using utilities such as expectTypeOf in vitest or small custom helpers that force compile time evaluation.
// sample.spec.ts
import { expect, describe, it, expectTypeOf } from 'vitest';
import { makeUser } from '@/user';
describe('makeUser', () => {
it('returns an immutable user', () => {
const u = makeUser({ id: 'u1', name: 'Ada' } as const);
expect(u.id).toBe('u1');
expectTypeOf(u).toMatchTypeOf<{ id: string; name: string }>();
// @ts-expect-error: name is readonly
u.name = 'Grace';
});
});
If you prefer framework neutral checks, use @ts-expect-error and satisfies so the compiler enforces intent without extra libraries.
// types-only checks compile or fail
type User = { id: string; name: string };
const good = { id: '1', name: 'Ada' } satisfies User;
// @ts-expect-error: missing name
const bad: User = { id: '2' };
Prefer @ts-expect-error over @ts-ignore. It tells the compiler an error is expected and fails if the error disappears, which protects intent during refactors.
Designing safe test doubles
Spies and fakes should match the real interface so production code and test code evolve together. Represent the dependency as a TypeScript interface, then implement a minimal fake in tests that satisfies the interface. This avoids casting and keeps editor completion accurate.
// service.ts
export interface Mailer {
send(to: string, subject: string, body: string): Promise<void>;
}
export class Notifier {
constructor(private mailer: Mailer) {}
async welcome(email: string) {
await this.mailer.send(email, 'Welcome', 'Thanks for joining');
}
}
// service.spec.ts (vitest)
import { describe, it, expect, vi } from 'vitest';
import { Notifier, type Mailer } from './service';
class FakeMailer implements Mailer {
sent: Array<[string, string, string]> = [];
async send(to: string, subject: string, body: string) {
this.sent.push([to, subject, body]);
}
}
describe('Notifier', () => {
it('sends a welcome email', async () => {
const mailer = new FakeMailer();
const n = new Notifier(mailer);
await n.welcome('a@example.com');
expect(mailer.sent[0]?.[1]).toBe('Welcome');
});
it('can spy on a single call', async () => {
const mailer: Mailer = { send: vi.fn().mockResolvedValue(undefined) };
const n = new Notifier(mailer);
await n.welcome('a@example.com');
expect(mailer.send).toHaveBeenCalledOnce();
});
});
Type fakes with satisfies or explicit interface annotations so the fake breaks when the real contract changes.
Property based checks
For pure functions, property based strategies generate many inputs and assert invariants. Libraries can feed random data into your functions so a small test describes a large input space. When pairing this with TypeScript, define the generator’s output type to match the function’s input type so the compiler assists discovery.
// property based test using the fast-check library
import { describe, it, expect } from 'vitest';
import * as fc from 'fast-check';
const reverse = (s: string) => [...s].reverse().join('');
describe('reverse', () => {
it('is its own inverse', () => {
fc.assert(fc.property(fc.string(), s => {
expect(reverse(reverse(s))).toBe(s);
}));
});
});
Contract tests with types as specifications
Public types are living documentation. You can turn them into contract tests by checking that implementations conform at compile time, and by validating real data at runtime using the same source of truth. The key is to avoid duplication so type level promises and runtime checks stay aligned.
Compile time conformance
Expose an interface or type in your package and require implementations to satisfy it during the build. A small test-contract file can import an implementation and fail compilation when the shape drifts.
// contract.ts in a shared library
export interface PaymentProvider {
charge(amountCents: number, token: string): Promise<{ id: string }>;
}
// contract.spec.types.ts used only for type checking
import type { PaymentProvider } from '@acme/contracts';
import { StripeProvider } from '@/stripe';
const provider: PaymentProvider = new StripeProvider(); // fails if methods or types differ
void provider;
Run tsc --noEmit against the contract tests in CI. No runtime is required, yet breaking changes surface immediately.
Runtime validation from a single source
Types alone cannot reject malformed input at the boundary. Generate validators from type like declarations so inbound data is checked at runtime while editors still see rich types. Schema libraries provide this bridge.
// schema.ts
import { z } from 'zod';

export const UserSchema = z.object({
  id: z.string().uuid(),
  name: z.string().min(1),
  email: z.string().email()
});
export type User = z.infer<typeof UserSchema>;

// handler.ts
import type { User } from './schema';
import { UserSchema } from './schema';

export function handle(input: unknown): User {
  const user = UserSchema.parse(input);
  return user;
}
Keep these declarations in a shared location such as src/contracts and export both the schema and its inferred type. Application code imports the type for compile time safety and uses the schema at the boundary for runtime checks.
Consumer driven contracts
When multiple services interact, write minimal examples that represent how consumers call providers. Store these examples as JSON and validate them against shared schemas during CI so providers cannot ship a change that violates a consumer’s expectations.
// example-consumer.json
{ "id": "u_123", "name": "Ada", "email": "ada@example.com" }
// pact-like check …
import { UserSchema } from '@/contracts';
import example from './example-consumer.json' assert { type: 'json' };
UserSchema.parse(example);
Measuring coverage and maintaining confidence
Coverage shows which code executes during tests; confidence comes from what those tests assert. Use coverage to locate blind spots, not as a score to game. Track lines, branches, and functions, then set pragmatic thresholds so the team keeps intent high without chasing perfection.
Source maps and accurate coverage
Enable high quality source maps so coverage points to TypeScript lines rather than transpiled output. Include maps in dev builds, and keep them external in production builds if needed for size.
// tsconfig.json
{
  "compilerOptions": {
    "sourceMap": true,
    "inlineSources": true,
    "declarationMap": false
  }
}
With Vitest, pass --coverage to record V8 coverage. With Jest, enable coverage in its configuration. For node:test, pair the test runner with c8 so V8’s native coverage is reported clearly.
// package.json …
{
  "scripts": {
    "coverage": "vitest --coverage",
    "coverage:html": "vitest --coverage --reporter=html"
  }
}
Beware that any casts and unchecked as assertions can make coverage look healthy while meaningful assertions are missing. Prefer precise types and runtime checks at boundaries so coverage reflects real behavior.
Thresholds and what to measure
Set thresholds that encourage intent. Lines and statements prevent dead regions. Branch coverage encourages tests for success and failure paths. Function coverage highlights untested utilities. Keep thresholds slightly below current numbers so improvements land incrementally.
// vitest.config.ts …
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      statements: 90,
      branches: 85,
      functions: 90,
      lines: 90,
      reporter: ['text', 'html']
    }
  }
});
Mutation testing for deeper signals
Mutation testing flips operators and changes literals to see whether tests notice. A high mutation score is a strong signal that assertions are specific. Use it sparingly on core modules where confidence matters most; it is slower than regular unit tests.
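To see why mutation testing is a stronger signal than line coverage, consider a hand-made mutant. Both functions below are fully executed by either test, so coverage is identical, yet only a specific assertion distinguishes the original from the mutant. This is an illustrative sketch; real tools such as Stryker generate and run mutants automatically.

```typescript
// Original function and a hand-made "mutant" with one flipped operator.
const add = (a: number, b: number): number => a + b;
const addMutant = (a: number, b: number): number => a - b; // the mutation

// A weak test executes the code (full coverage) but kills no mutants.
const weakTest = (f: (a: number, b: number) => number): boolean =>
  typeof f(2, 2) === "number";

// A specific assertion "kills" the mutant: it fails for addMutant.
const strongTest = (f: (a: number, b: number) => number): boolean =>
  f(2, 3) === 5;

console.log(weakTest(add), weakTest(addMutant));     // both pass: mutant survives
console.log(strongTest(add), strongTest(addMutant)); // true false: mutant killed
```

A surviving mutant points at an assertion that is too loose, even when coverage reports the line as tested.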
Continuous integration and fast feedback
Run type checks, unit tests, and coverage in parallel during pull requests. Cache node_modules and the TypeScript build info so rechecks are quick. Keep a short path for local development with watch modes and a longer path in CI that includes contract checks and coverage thresholds.
Chapter 10: Advanced Patterns and Performance
Advanced type patterns help you model domain rules faithfully while keeping code ergonomic to use. Performance includes quick compiles, snappy watch mode, and efficient runtime output. This chapter explores nominal typing tricks, robust state encodings, API design for good DX, and compiler options that affect speed.
Branded types
TypeScript is structurally typed; two shapes that match are compatible. Sometimes you want nominal semantics so only values created through a constructor or helper are accepted. A common approach is to intersect with a unique brand that callers cannot forge without a helper.
// brand helpers
declare const Brand: unique symbol;
type BrandOf<T, B extends string> = T & { [Brand]: B };
type UserId = BrandOf<string, 'UserId'>;

function makeUserId(raw: string): UserId {
  // validate format …
  return raw as UserId;
}
function getUser(id: UserId) { /* … */ }

// compile time safety
const s: string = 'abc';
getUser(makeUserId(s)); // ok
// getUser(s); // error: string is not UserId
Opaque responses
When parsing untrusted input, return an opaque branded type after validation so downstream code cannot mix raw and parsed values by accident.
type Email = BrandOf<string, 'Email'>;
function parseEmail(s: string): Email | null { return /@/.test(s) ? (s as Email) : null; }
Discriminated unions
Discriminated unions model finite states with exhaustive checks. Use a literal tag field as the discriminator and let the compiler guide you to handle every case.
type LoadState<T> =
  | { tag: 'idle' }
  | { tag: 'loading' }
  | { tag: 'success'; data: T }
  | { tag: 'error'; error: string };

function renderUser(s: LoadState<{ name: string }>) {
  switch (s.tag) {
    case 'idle': return 'Idle';
    case 'loading': return 'Loading';
    case 'success': return s.data.name;
    case 'error': return s.error;
    default: { const _exhaustive: never = s; return _exhaustive; }
  }
}
Transitions encoded as functions
Keep transitions pure; accept the prior state and an event and return the next state. This mirrors reducer patterns and makes testing straightforward.
type Event =
  | { type: 'start' }
  | { type: 'ok'; data: string }
  | { type: 'fail'; error: string };

function step(s: LoadState<string>, e: Event): LoadState<string> {
  if (e.type === 'start') return { tag: 'loading' };
  if (e.type === 'ok') return { tag: 'success', data: e.data };
  if (e.type === 'fail') return { tag: 'error', error: e.error };
  return s;
}
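Because transitions are pure, a whole interaction can be replayed by folding an event log over step. The definitions are repeated here so the snippet stands alone.

```typescript
type LoadState<T> =
  | { tag: 'idle' }
  | { tag: 'loading' }
  | { tag: 'success'; data: T }
  | { tag: 'error'; error: string };

type Event =
  | { type: 'start' }
  | { type: 'ok'; data: string }
  | { type: 'fail'; error: string };

function step(s: LoadState<string>, e: Event): LoadState<string> {
  if (e.type === 'start') return { tag: 'loading' };
  if (e.type === 'ok') return { tag: 'success', data: e.data };
  if (e.type === 'fail') return { tag: 'error', error: e.error };
  return s;
}

// Replay a recorded event log from the initial state.
const log: Event[] = [{ type: 'start' }, { type: 'ok', data: 'hello' }];
const final = log.reduce(step, { tag: 'idle' } as LoadState<string>);
console.log(final); // { tag: 'success', data: 'hello' }
```

Tests become trivial: feed a list of events and assert on the resulting state, with no mocks or timers involved.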
API ergonomics and DX considerations
Design APIs that infer types naturally, make common cases short, and advanced cases explicit. Favor option objects for extensibility; provide overloads that preserve inference for function call styles you expect users to prefer.
Options object
Defaults reduce boilerplate while allowing precise control when needed. Keep the return type stable so callers do not fight generics.
interface RetryOptions {
  retries?: number;
  backoffMs?: number;
}
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

async function withRetry<T>(op: () => Promise<T>, opts: RetryOptions = {}): Promise<Result<T>> {
  const { retries = 2, backoffMs = 100 } = opts;
  let lastErr: unknown = null;
  for (let i = 0; i <= retries; i++) {
    try { return { ok: true, value: await op() }; }
    catch (e) { lastErr = e; if (i < retries) await new Promise(r => setTimeout(r, backoffMs)); }
  }
  return { ok: false, error: String(lastErr) };
}
Generics that infer from arguments
Write generic parameters so they flow from inputs. Avoid explicit generics when inference can determine them.
function mapValues<T extends object, R>(obj: T, f: (v: T[keyof T]) => R) {
  const out = {} as Record<keyof T, R>;
  for (const k in obj) out[k] = f(obj[k]);
  return out;
}
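A call site shows inference at work: both the key set and the result type flow from the arguments, with no explicit type parameters needed. The definition is repeated so the example stands alone.

```typescript
function mapValues<T extends object, R>(obj: T, f: (v: T[keyof T]) => R) {
  const out = {} as Record<keyof T, R>;
  for (const k in obj) out[k] = f(obj[k]);
  return out;
}

// T is inferred as { a: number; b: number } and R as string.
const labels = mapValues({ a: 1, b: 2 }, n => `#${n}`);
console.log(labels); // { a: '#1', b: '#2' }
```

Hovering labels in an editor shows Record<"a" | "b", string>, which is exactly the ergonomics the surrounding text recommends designing for.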
Compiler options
Compiler and project settings affect how quickly your codebase builds, how accurate type checking is, and the size and shape of emitted JavaScript. Aim for fast feedback in watch mode and reproducible builds for CI.
incremental, composite, and project references
Enable incremental builds to store build info on disk so repeated checks are faster. Use composite projects and references to split a monorepo into leaf packages so each builds independently.
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "incremental": true,
    "composite": true,
    "declaration": true,
    "outDir": "dist"
  },
  "references": [{ "path": "./packages/core" }, { "path": "./packages/web" }]
}
skipLibCheck and strictness trade offs
skipLibCheck speeds builds by skipping type checks of dependencies. Keep it on for large projects; keep core strict options enabled for your own code so safety remains high.
// tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "skipLibCheck": true
  }
}
isolatedModules and transform compatibility
isolatedModules ensures each file can be transpiled without cross file information, which aligns with Babel, SWC, and esbuild. This can surface issues early that would otherwise appear only in bundlers.
// tsconfig.json
{ "compilerOptions": { "isolatedModules": true } }
Output targets
Pick a target and module that match your Node version so emitted code is smaller and faster. Avoid downleveling features you do not need.
// Node 20 friendly
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext"
  }
}
The target setting can affect which helpers and polyfills your transformer produces. Verify output size and startup time after a change.
PART 2: Node
Chapter 11: The Node Runtime
Node brings together the V8 JavaScript engine, the libuv library, and a small standard library to provide a fast, portable runtime for building servers, tools, and scripts. This chapter explains where Node came from, how its core pieces work, and which everyday facilities you will use to build and diagnose applications.
A brief history and where Node fits
Node began in 2009 when Ryan Dahl combined the V8 engine with a non-blocking I/O layer so JavaScript could handle many concurrent connections efficiently. The community quickly added npm for sharing packages. Today Node is a general purpose runtime that fits in several places: web servers and APIs, command line tools, build pipelines, desktop apps through wrappers, and edge or on-device utilities where a small single binary is helpful.
Node’s model is event driven and single threaded at the JavaScript level. Work that would block (file and network I/O, DNS, timers, compression, and similar) is delegated to a native layer so that your code stays responsive. This does not make Node automatically faster than other runtimes. It makes it a good fit for I/O-heavy tasks where many requests stay in flight while little CPU is used per request.
Here is a tiny server to illustrate the programming model.
import http from "node:http";

const server = http.createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain; charset=utf-8" });
  res.end("Hello from Node\n");
});
server.listen(8080, () => {
  console.log("Listening on http://localhost:8080");
});
Compilation, events and threading
V8 compiles JavaScript to machine code and runs it. libuv provides the cross-platform event loop and thread pool. The loop pulls I/O and timer events as they complete and schedules your callbacks. Microtasks (Promise reactions and queueMicrotask) run between macrotasks. Some scheduling primitives fire earlier than others, which matters when you coordinate work.
Task ordering in practice
The following program prints in a specific order because microtasks run between turns of the loop, and because process.nextTick is handled before Promise reactions.
setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
Promise.resolve().then(() => console.log("promise"));
process.nextTick(() => console.log("nextTick"));
console.log("sync");
A typical output is:
sync
nextTick
promise
timeout
immediate
The last two lines can swap depending on where the loop is when they are scheduled. When both are scheduled from inside an I/O or timer callback, the immediate reliably runs first because the check phase follows in the same loop iteration. When the main script schedules both, the outcome depends on process startup timing, so the timer often, but not always, fires first.
The event loop
Conceptually the loop progresses through phases that process different queues. The real implementation is optimized and platform specific, although this model is useful for reasoning.
| Phase | What runs |
| Timers | setTimeout and setInterval callbacks whose thresholds have elapsed |
| Pending I/O | Some system callbacks that were deferred from the previous tick |
| Idle/Prepare | Internal housekeeping |
| Poll | I/O events are pulled; the loop can block here briefly when waiting |
| Check | setImmediate callbacks |
| Close | 'close' events for sockets and handles |
Between callbacks the microtask queue runs: first process.nextTick tasks then Promise reactions. Use microtasks to finish a small unit of work before yielding back to the loop. Avoid long microtask chains that starve I/O.
Worker threads
Node has two sources of parallelism. First, libuv manages a fixed-size thread pool for operations such as file I/O and crypto. Second, you can use node:worker_threads to run JavaScript on additional threads when you need parallel CPU work. Workers communicate through messages and shared memory buffers.
import { Worker } from "node:worker_threads";

const w = new Worker(new URL("./worker.js", import.meta.url));
w.on("message", (msg) => console.log("result:", msg));
w.postMessage({ task: "sha256", data: "..." });
ES modules support
Node supports both ES modules and CommonJS. The loader is selected per file and per package. File extensions decide the default (.mjs is ES modules, .cjs is CommonJS). A package can opt in globally using "type" in package.json. Exports are described with "exports" and optional "imports" maps for internal specifiers.
Choosing a module mode
When you set "type": "module" the default for .js becomes ES modules. You import with import and you export with export. When you set "type": "commonjs" the default for .js becomes CommonJS, and you use require and module.exports. You can always override per file with .mjs or .cjs.
{
  "name": "my-lib",
  "version": "1.0.0",
  "type": "module",
  "main": "./dist/index.cjs",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
With the export map above, consumers that use import will receive ./dist/index.js. Consumers that use require will receive ./dist/index.cjs. The same package supports both ecosystems cleanly.
Resolution and file layout
ES module resolution in Node is strict. File extensions are required. Bare specifiers resolve using "imports" and "exports" maps first, then node_modules lookup. CommonJS has more permissive resolution. Keep your published layout explicit to avoid surprises.
Useful package.json fields for Node
These are common fields that affect Node’s behavior at runtime or during install.
| Field | Purpose |
"type" | Sets the default module format for .js files |
"main" | Entry point for CommonJS consumers |
"exports" | Public entry points for ESM and CJS; supports condition maps |
"imports" | Private alias map for your own package (for example #utils) |
"bin" | Command name to executable path mapping for CLI tools |
"types" | Path to .d.ts types for TypeScript users |
"engines" | Declares supported Node versions |
Here is a cross-platform CLI layout that wires a command name to a module entry.
{
  "name": "greeter",
  "version": "1.0.0",
  "type": "module",
  "bin": { "greeter": "./bin/greeter.js" }
}
// bin/greeter.js
#!/usr/bin/env node
import { greet } from "../dist/index.js";

const name = process.argv[2] ?? "world";
console.log(greet(name));
Diagnostics and built in tools
The node REPL is a fast way to test code and inspect values. It supports top-level await, input editing, and a small set of REPL commands. Diagnostics options help you profile, trace, and report problems in production.
Using the node REPL
Start the REPL by running node with no arguments. Special commands begin with a dot. The underscore holds the last result. The editor mode lets you paste multiline input comfortably.
$ node
> 2 + 2
4
> .editor
// paste a function…
function add(a, b) {
return a + b;
}
^D
> add(2, 3)
5
Use .save file.js to write the current REPL session to disk, and .load file.js to load one back in.
Checking and running code
Use node --check file.js to parse a script without running it. Use node --eval "…" to evaluate a short snippet. For built in tests you can use node --test to run test files that use the node:test module.
$ node --check server.js
$ node --eval "console.log(1 + 2)"
$ node --test
Inspecting and profiling
Use the Inspector protocol to debug. Run node --inspect app.js and connect from a Chromium-based browser’s DevTools, or start with --inspect-brk to pause before the first line. For CPU profiles and heap snapshots you can capture artifacts that DevTools understands.
$ node --inspect-brk app.js
Debugger listening on ws://…
Keep the inspector bound to 127.0.0.1 or use an SSH tunnel so that only trusted users can connect.
Tracing and reports
Runtime traces can reveal costly operations. The --trace-… flags print events as they occur. Diagnostic reports capture a point-in-time snapshot of the process when you call the API or when specific conditions fire.
$ node --trace-gc app.js
$ node --trace-deprecation app.js
import process from "node:process";
process.report.writeReport(); // writes a JSON report to the current directory
Common command line patterns
These short commands help during development.
$ node --version
$ node --require dotenv/config app.js
$ NODE_OPTIONS="--max-old-space-size=2048" node app.js
When you understand how the loop schedules tasks, how modules load, and which diagnostics are available, you can reason about performance and correctness with confidence. The rest of this part uses these tools freely as we build servers, CLIs, and services.
Chapter 12: Modules, Packages, and Dependency Management
The Node ecosystem revolves around packages. These packages form the building blocks of applications, frameworks, and command line tools. Good dependency management keeps an application stable and predictable. This chapter shows how different package managers behave, how semver shapes upgrades, how workspaces support larger projects, and how native addons integrate with Node through the Node API.
npm, pnpm, and Yarn
Most projects install dependencies through a package manager. These tools share the same registry design although their installation strategies differ. npm installs a hoisted node_modules tree and deduplicates shared versions where it can. pnpm stores package content in a global content addressable store and creates links into each project. Yarn began as an alternative to npm with a different approach to offline installs and lockfile stability.
All three tools read package.json, write a lockfile, and support scripts. Their differences matter when you rely on monorepos or when installation performance becomes noticeable as the project grows.
This example shows common commands across the tools.
# install dependencies
npm install
pnpm install
yarn install
# add a dependency
npm install lodash
pnpm add lodash
yarn add lodash
# run a script named "build"
npm run build
pnpm build
yarn build
Lockfiles and reproducible installs
Lockfiles map each dependency to a resolved version along with integrity hashes. They guarantee the same tree on every machine. npm writes package-lock.json, pnpm writes pnpm-lock.yaml, and Yarn writes yarn.lock. The formats differ, although all serve the same purpose. Commit lockfiles to version control so the entire team shares predictable builds.
When a dependency changes upstream, the lockfile determines whether your local tree updates or stays pinned. For production environments this predictability avoids risk from new transitive versions.
Semver, peer dependencies, and workspaces
Semver describes a version with three numbers: major, minor, and patch. Patch versions fix bugs. Minor versions add features without breaking existing code. Major versions change behavior in ways that may require updates in your code. Package managers interpret ranges so that patch and minor updates can install automatically when allowed.
Version ranges
Common operators include caret, tilde, and exact pins. The caret allows changes that do not modify the first non-zero part. The tilde allows only patch updates. Exact pins use a fixed version.
| Range | Meaning |
| ^1.4.2 | Accept any 1.x.x that is at least 1.4.2 |
| ~1.4.2 | Accept 1.4.x that is at least 1.4.2 |
| 1.4.2 | Install exactly 1.4.2 |
| >=1.4.2 | Install any version at least 1.4.2 |
You can combine operators to express advanced constraints. Keep ranges simple unless a library has compatibility conditions that require more care.
A wide range such as * or >=0.0.0 invites unexpected updates. Use ranges that reflect the contract your code depends on.
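To make the caret rule concrete, here is a simplified checker for ^ ranges. It handles only the common case where the major version is at least 1 and is an illustration, not a substitute for a real resolver such as the semver package.

```typescript
// Simplified: does `version` satisfy `^base`? Assumes major >= 1 and x.y.z input.
function caretSatisfies(base: string, version: string): boolean {
  const [bMaj, bMin, bPat] = base.split(".").map(Number);
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  if (vMaj !== bMaj) return false;       // major must match exactly
  if (vMin !== bMin) return vMin > bMin; // a newer minor is acceptable
  return vPat >= bPat;                   // same minor: patch must not go backward
}

console.log(caretSatisfies("1.4.2", "1.5.0")); // true: minor bump allowed
console.log(caretSatisfies("1.4.2", "2.0.0")); // false: major change
console.log(caretSatisfies("1.4.2", "1.4.1")); // false: older patch
```

Real range grammars are richer (pre-release tags, 0.x special cases, compound ranges), which is why production code should lean on a tested resolver.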
Peer dependencies
Peer dependencies describe packages that must be installed by the consumer rather than bundled inside your package. They enforce compatibility between frameworks and plugins. For example, a React plugin may require the same version of React that the host application uses.
Package managers warn when peer requirements are unmet. Some managers attempt to install missing peers although this depends on the tool and its configuration. Declare peers when your library expects to operate in the same environment as another library or framework.
Workspaces for multi-package projects
Workspaces group several packages in a single repository. The package manager installs dependencies once at the root and links local packages to each other. This avoids repetitive installs and makes development flows like refactoring easier.
Most tools support a definition similar to this layout.
{
  "name": "monorepo",
  "private": true,
  "workspaces": ["packages/…"]
}
Packages inside the workspace can reference each other using regular version ranges. The manager then symlinks or hard links local paths during installation so edits in one package appear immediately in dependants. Workspace tools also provide filtering commands so you can run a script across a subset of packages.
Publishing
Publishing a package involves preparing the build, verifying the package.json metadata, and running an authentication step with the registry. npm remains the most common tool for this workflow. Versioning follows semver and must match the version declared in package.json.
Preparing a package
A typical publish script builds the project and ensures only necessary files enter the tarball. You can list included files explicitly with a files array or exclude files with .npmignore. Keep your published package clean so consumers do not receive stray test data or build artifacts.
{
  "name": "example-lib",
  "version": "1.0.0",
  "type": "module",
  "files": ["dist/…"],
  "scripts": {
    "build": "tsc",
    "prepare": "npm run build"
  }
}
Run npm publish to ship a new version. Private registries provide similar interfaces with their own authentication flows.
Run npm pack to preview the exact contents that will publish. This avoids surprises when consumers install your work.
Managing versions
Semantic versioning helps consumers understand how risky an upgrade might be. Increment the patch version when fixing defects, the minor version when adding backward compatible features, and the major version when introducing breaking changes. Tools such as npm version patch update the version and create a matching git tag.
Automated release pipelines often build, test, and publish from continuous integration. These pipelines ensure the published package is reproducible from the repository rather than from a local machine.
Auditing
Package managers include scanners that spot known vulnerabilities. npm provides npm audit. pnpm and Yarn integrate with external databases or services. These scanners classify issues by severity and suggest upgrades. Not every advisory is relevant to your code path, although the report helps you review transitive dependencies.
You can also use tools such as npx license-checker or npx npmgraph to inspect licenses and dependency trees. For production code review these output reports help you understand risk.
Native addons
Native addons let you call compiled code from JavaScript. These addons are useful when performance matters or when you must bind to a system library. Node offers two main interfaces: the older NAN layer and the modern Node API. The Node API focuses on ABI stability so addons compiled today continue to work on future Node versions that support the same ABI.
How native addons load
An addon compiles to a platform specific shared library with a .node extension. Node loads this file when you import it. The addon’s entry point registers exported functions that JavaScript code can call. Build systems such as node-gyp or CMake configure compilation options, platform abstractions, and include paths.
// binding.gyp
{
  "targets": [
    {
      "target_name": "myaddon",
      "sources": ["src/….cc"]
    }
  ]
}
Once compiled you can load the addon from JavaScript. An ES module cannot import a .node binary directly, so create a CommonJS require for it.
import { createRequire } from "node:module";

const require = createRequire(import.meta.url);
const addon = require("./build/Release/myaddon.node");
console.log(addon.doWork("input"));
The Node API
The Node API (often called N-API) exposes types and functions for creating values, calling functions, and managing memory safely. It abstracts the underlying engine details so your addon does not depend on V8 specifics. This makes long-term compatibility more stable and reduces maintenance.
The API uses opaque handles and status codes. A simple function might receive arguments, perform native work, and return a new value back to JavaScript. Errors use dedicated APIs so they map correctly to JavaScript exceptions.
// pseudo-code sketch, C++ with Node API
napi_value DoWork(napi_env env, napi_callback_info info) {
  napi_value arg;
  napi_get_cb_info(env, info, …, &arg, …);
  // perform native logic
  napi_value result;
  napi_create_string_utf8(env, "done", …, &result);
  return result;
}
Native addons bridge the gap between JavaScript and system capabilities. With the Node API you can write extensions that continue to work as the JavaScript engine evolves. This keeps low level performance code stable and usable across the lifetime of a project.
Chapter 13: Files, Paths, and Processes
Node gives you tools to work with the file system, manage paths, handle process details, and run other programs. These capabilities form the backbone of servers, build tools, and command line utilities. This chapter surveys the modern fs interfaces, path handling for both CommonJS and ES modules, configuration through environment variables, and ways to run parallel work through child processes and worker threads.
fs modern APIs: promises and streams
The node:fs module provides three styles of interaction: callbacks, promises, and streams. The promise based API lives in node:fs/promises and offers a clean way to write asynchronous file code without callback nesting. Streams handle large files efficiently because they transfer data in chunks rather than loading entire files into memory.
Here is a simple example using the promise API to read and write text files.
import { readFile, writeFile } from "node:fs/promises";
const text = await readFile("input.txt", "utf8");
await writeFile("output.txt", text.toUpperCase());
Streams are useful when you want backpressure, piping between transformations, or incremental parsing. A readable stream emits data events each time a chunk is ready. A writable stream processes those chunks one at a time, respecting its internal queue so memory stays stable.
import { createReadStream, createWriteStream } from "node:fs";

createReadStream("large.txt")
  .pipe(createWriteStream("copy.txt"))
  .on("finish", () => console.log("done"));
Use pipeline from node:stream/promises when you need a single promise that resolves or rejects once the stream chain completes.
Watching files
The watch function observes file system changes; the version exported from node:fs/promises returns an async iterator of events. Events can coalesce or behave differently across platforms because each system exposes its own kernel notifications. For cross-platform tooling, debounce events or monitor entire directory trees when possible.
import { watch } from "node:fs/promises";

for await (const event of watch("src")) {
  console.log(event.eventType, event.filename);
}
path, URL, and ESM file handling
The node:path module resolves, joins, and normalizes path strings. It accounts for platform rules such as drive letters on Windows. When you write scripts that run across different systems, path helpers keep your logic portable. ES modules add another layer because import.meta.url uses URLs rather than plain file paths.
To convert between file URLs and native paths use fileURLToPath and pathToFileURL. This is essential for loaders, configuration scripts, and anything that calculates paths relative to the current module.
import { fileURLToPath } from "node:url";
import { dirname, join } from "node:path";
const here = dirname(fileURLToPath(import.meta.url));
const configPath = join(here, "config.json");
Resolving relative paths
path.resolve computes an absolute path by interpreting segments from right to left. path.join concatenates segments and then normalizes them. When writing cross-platform scripts avoid hard coded separators and always use these helpers.
| Function | Purpose |
| path.join | Concatenate segments then normalize |
| path.resolve | Produce an absolute path based on working directory |
| path.basename | Return last path component |
| path.extname | Return the extension including the dot |
A file URL string such as file:///path/to/file is not the same as a plain path string. Convert it with fileURLToPath before passing it to fs APIs.
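A short comparison makes the join versus resolve difference concrete (POSIX-style output shown; separators differ on Windows):

```typescript
import { join, resolve, basename, extname, isAbsolute } from "node:path";

// join concatenates and normalizes, but stays relative when the inputs are relative.
console.log(join("src", "..", "lib", "util.ts")); // lib/util.ts on POSIX

// resolve always produces an absolute path, using the working directory as the base.
console.log(isAbsolute(resolve("lib", "util.ts"))); // true

console.log(basename("/tmp/report.pdf")); // report.pdf
console.log(extname("/tmp/report.pdf"));  // .pdf
```

Because resolve depends on the current working directory, prefer deriving anchors from import.meta.url, as shown earlier, when a path must be stable regardless of where the process was launched.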
process, env, and configuration
The process object exposes information about the running program: environment variables, arguments, current working directory, memory usage, and signals. Many applications load configuration from process.env so that deployments can override settings without modifying code.
This example pulls a port number from the environment with a fallback.
const port = Number(process.env.PORT) || 3000;
console.log("Server will start on port", port);
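A small helper makes the pattern reusable and fails fast on malformed values instead of silently running with NaN. The envNumber name is illustrative, not a Node API.

```typescript
// Read a numeric setting from the environment with a default and validation.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return fallback;
  const n = Number(raw);
  if (!Number.isFinite(n)) {
    throw new Error(`environment variable ${name} must be a number, got "${raw}"`);
  }
  return n;
}

const port = envNumber("PORT", 3000);
console.log("Server will start on port", port);
```

Throwing at startup for a bad value is usually better than discovering the problem later when the server binds to an unexpected port.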
The process also emits signals. For example, SIGINT fires when the user presses Ctrl+C. A graceful shutdown sequence closes open resources before exiting.
process.on("SIGINT", () => {
  console.log("Shutting down");
  process.exit(0);
});
Call process.exit() sparingly. Let the event loop empty naturally unless you are in a fatal condition that cannot recover.
Command line arguments
The array process.argv holds the command line arguments. Index zero is the Node executable, index one is the script path, and the rest are the arguments. CLI tools often slice the first two positions away and parse the remainder.
const args = process.argv.slice(2);
console.log(args);
For structured parsing consider a library that handles flags and help text. This keeps the command interface predictable and user friendly.
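Node itself ships a small parser, util.parseArgs (available since Node 18.3), which covers flags and positionals without a dependency. A sketch, parsing an explicit argument array:

```typescript
import { parseArgs } from "node:util";

// Parse an explicit argv array; a real CLI would pass process.argv.slice(2).
const { values, positionals } = parseArgs({
  args: ["--verbose", "--port", "8080", "build"],
  options: {
    verbose: { type: "boolean", short: "v" },
    port: { type: "string" }
  },
  allowPositionals: true
});

console.log(values.verbose); // true
console.log(values.port);    // "8080"
console.log(positionals);    // [ 'build' ]
```

For subcommands, generated help text, and richer validation, a dedicated library still earns its place; parseArgs handles the simple cases.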
Child processes and worker threads
Node can run other programs or additional JavaScript threads. Child processes execute external commands or scripts through spawn, exec, and fork. Worker threads run JavaScript in parallel within the same process. Each option fits a slightly different situation.
Running external programs
The spawn function starts a program and streams its output. The exec function buffers output then returns the entire result. The fork function launches another Node script with an IPC channel for structured messaging.
import { spawn } from "node:child_process";

const ls = spawn("ls", ["-l"]);
ls.stdout.on("data", (chunk) => {
  process.stdout.write(chunk);
});
Use external commands when the task already exists as a system utility or when you script build pipelines. Handle exit codes carefully so failures do not pass silently.
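For scripts where blocking is acceptable, spawnSync returns the exit status directly, which makes failure checks explicit rather than easy to forget. The child command below just exits with a known code for demonstration.

```typescript
import { spawnSync } from "node:child_process";

// Run a child Node process that deliberately exits with code 3.
const result = spawnSync(process.execPath, ["-e", "process.exit(3)"]);

if (result.status !== 0) {
  console.error(`child failed with exit code ${result.status}`);
}
console.log(result.status); // 3
```

In asynchronous code the equivalent is listening for the "close" event on a spawned child and inspecting the exit code it reports.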
Worker threads for parallel CPU work
The node:worker_threads module lets you run compute heavy tasks on additional threads. Workers communicate through messages and can share memory using SharedArrayBuffer or typed arrays when appropriate.
// main.js
import { Worker } from "node:worker_threads";

const worker = new Worker(new URL("./task.js", import.meta.url));
worker.on("message", (msg) => console.log("result", msg));
worker.postMessage({ n: 1000000 });
Worker threads shine when tasks are CPU bound or when you want to isolate parts of an application from each other. They add concurrency without blocking the event loop.
Files, paths, processes, and parallel work form the practical toolkit that most Node applications depend on. With these pieces in place you can move on to networking, HTTP servers, and the design of services that run reliably in production.
Chapter 14: Networking and HTTP
Networking is one of Node’s core strengths. This chapter focuses on building and tuning networked programs with the standard library. You will learn the differences between http, https, and http2; how to create a minimal server; how to add WebSockets for real time features; and how to improve throughput with keep alive and caching.
http, http2, and https modules
Node provides three related server side modules. The http module implements HTTP 1.1 over TCP. The https module is the same API layered over TLS. The http2 module adds HTTP 2 features such as multiplexing and header compression, which can reduce latency under concurrent load.
| Module | Protocol | Key capabilities | Typical use |
| http | HTTP/1.1 | Request per connection or pipelining; simple API; ubiquitous tooling | Internal services; simple APIs; dev servers |
| https | HTTP/1.1 + TLS | TLS termination; certificates; same surface as http | Public endpoints; auth flows; anything crossing networks you do not control |
| http2 | HTTP/2 | Multiplexed streams; HPACK compression; server push | High concurrency; resource heavy pages; microservices behind a proxy |
Creating a basic http server
The http.createServer API handles a request object and a response object. You write headers and a body, then end the response. The following example is TypeScript friendly.
import http from "node:http";
const server = http.createServer((req, res) => {
res.setHeader("Content-Type", "application/json; charset=utf-8");
res.writeHead(200);
res.end(JSON.stringify({ ok: true, path: req.url }));
});
server.listen(3000, () => {
console.log("http server on http://localhost:3000");
});
Upgrading to https
Switch to https and provide key material. Use strong ciphers and modern TLS versions. In production, many teams terminate TLS at a reverse proxy, although you can also do it in Node.
import https from "node:https";
import fs from "node:fs";
const options = {
key: fs.readFileSync("./certs/server.key"),
cert: fs.readFileSync("./certs/server.crt"),
// ca: fs.readFileSync("./certs/ca.crt") // for client auth …
};
const server = https.createServer(options, (req, res) => {
res.writeHead(200, { "content-type": "text/plain; charset=utf-8" });
res.end("secure hello");
});
server.listen(3443, () => {
console.log("https server on https://localhost:3443");
});
Multiplexing
Instead of separate request and response objects, the http2 module exposes streams. Each stream is an independent bidirectional channel over one TCP connection. Browsers only allow HTTP/2 over TLS, so use TLS for public sites.
import http2 from "node:http2";
import fs from "node:fs";
const server = http2.createSecureServer({
key: fs.readFileSync("./certs/server.key"),
cert: fs.readFileSync("./certs/server.crt")
});
server.on("stream", (stream, headers) => {
// Each stream handles one request
stream.respond({ ":status": 200, "content-type": "text/plain; charset=utf-8" });
stream.end("hello over http/2");
});
server.listen(3444, () => {
console.log("http2 server on https://localhost:3444");
});
Building a minimal server
A minimal server reads the method and path, routes to a handler, and returns a structured result. Start small, keep the surface simple, then add middleware style helpers as needed.
A tiny router
This example shows a compact approach that parses JSON and routes by method and url. It is easy to extend with more paths and helpers.
import http, { IncomingMessage, ServerResponse } from "node:http";
type Handler = (req: IncomingMessage, res: ServerResponse) => void;
const routes = new Map<string, Handler>();
routes.set("GET /", (req, res) => {
res.writeHead(200, { "content-type": "text/html; charset=utf-8" });
res.end("<h1>Home</h1>");
});
routes.set("POST /echo", async (req, res) => {
const chunks: Buffer[] = [];
for await (const c of req) chunks.push(c as Buffer);
const body = Buffer.concat(chunks).toString("utf8");
res.writeHead(200, { "content-type": "application/json; charset=utf-8" });
res.end(JSON.stringify({ youSent: body }));
});
const server = http.createServer((req, res) => {
const key = `${req.method} ${req.url}`;
const handler = routes.get(key);
if (handler) return handler(req, res);
res.writeHead(404, { "content-type": "application/json; charset=utf-8" });
res.end(JSON.stringify({ error: "not found" }));
});
server.listen(3001, () => console.log("listening on http://localhost:3001"));
A useful next step is to wrap res.writeHead and res.end in small helpers so your handlers return plain objects.
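For example, a tiny helper can centralise status, headers, and serialisation in one place; sendJson is an illustrative name, not a framework API.

```typescript
import { ServerResponse } from "node:http";

// Route handlers hand over plain data; the helper owns the response details.
export function sendJson(res: ServerResponse, status: number, data: unknown): void {
  res.writeHead(status, { "content-type": "application/json; charset=utf-8" });
  res.end(JSON.stringify(data));
}
```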
Making outbound requests
Modern Node exposes a global fetch which is convenient for high level requests. For fine control or streaming, use http.request or https.request.
// High level
const r = await fetch("https://api.example.com/data");
if (!r.ok) throw new Error(`http ${r.status}`);
const data = await r.json();
console.log(data);
// Low level with streaming
import https from "node:https";
const req = https.request("https://api.example.com/stream", res => {
res.on("data", chunk => process.stdout.write(chunk));
});
req.on("error", err => console.error(err));
req.end();
WebSockets and event driven patterns
WebSockets provide full duplex messaging over a single connection. You can add WebSockets to an existing HTTP server and share the same port. The usual pattern is event driven: listen for connections, subscribe to message events, and broadcast updates to peers.
Server side
The ws library integrates cleanly with a Node server. It handles the WebSocket handshake during the HTTP upgrade step and then exposes a simple message event API.
import http from "node:http";
import { WebSocketServer } from "ws";
const httpServer = http.createServer();
const wss = new WebSocketServer({ server: httpServer });
wss.on("connection", socket => {
console.log("client connected");
socket.send(JSON.stringify({ hello: "world" }));
socket.on("message", msg => {
// Broadcast to everyone
for (const client of wss.clients) {
if (client.readyState === 1) client.send(msg);
}
});
});
httpServer.listen(3080, () => console.log("ws on ws://localhost:3080"));
Browser client and heartbeat
Keep connections healthy with a lightweight heartbeat. Send periodic pings or app messages. Close idle sockets to reclaim resources gracefully.
// Browser code
const ws = new WebSocket("ws://localhost:3080");
ws.addEventListener("open", () => ws.send(JSON.stringify({ type: "hello" })));
ws.addEventListener("message", ev => console.log("msg:", ev.data));
// heartbeat
setInterval(() => {
if (ws.readyState === WebSocket.OPEN) ws.send(JSON.stringify({ type: "ping" }));
}, 10000);
Performance management
Small improvements can multiply under load. Keep connections open when useful, compress responses, and use caching headers. Profile with realistic concurrency and payloads.
Client side keep alive
Use a shared agent with pooling and timeouts. This reduces TCP handshakes and can improve throughput significantly.
import { Agent } from "undici";
// Node's built in fetch is powered by undici, so connection pooling is
// configured through an undici Agent rather than http.Agent or https.Agent.
const agent = new Agent({
  connections: 100,          // max sockets per origin
  keepAliveTimeout: 30_000   // how long idle sockets stay open for reuse
});
export async function getJson(url: string) {
  const r = await fetch(url, { dispatcher: agent } as any);
  if (!r.ok) throw new Error(`http ${r.status}`);
  return r.json();
}
Server side keep alive
Enable keep alive on responses and compress textual data. Many proxies handle compression, although you can also do it in Node.
import http from "node:http";
import { createGzip } from "node:zlib";
import { pipeline, Readable } from "node:stream";
const server = http.createServer((req, res) => {
  res.setHeader("Connection", "keep-alive");
  const accept = req.headers["accept-encoding"] ?? "";
  const payload = Buffer.from("lots of text …");
  if (accept.includes("gzip")) {
    res.writeHead(200, { "content-encoding": "gzip", "content-type": "text/plain; charset=utf-8" });
    // Wrap the buffer in an array so Readable.from emits it as a single chunk
    pipeline(Readable.from([payload]), createGzip(), res, err => err && console.error(err));
  } else {
    res.writeHead(200, { "content-type": "text/plain; charset=utf-8" });
    res.end(payload);
  }
});
server.listen(3002);
HTTP caching
Caching cuts latency and reduces server load. Use strong validators when content changes and explicit lifetimes when it does not.
| Header | Purpose | Example |
| Cache-Control | Freshness lifetime and rules | Cache-Control: public, max-age=300, stale-while-revalidate=30 |
| ETag | Content validator for conditional requests | ETag: "sha256-…" |
| Last-Modified | Timestamp validator alternative | Last-Modified: Tue, 01 Oct 2025 10:00:00 GMT |
import http from "node:http";
import crypto from "node:crypto";
const body = Buffer.from(JSON.stringify({ message: "cache me" }));
const etag = `"${crypto.createHash("sha256").update(body).digest("hex")}"`;
http.createServer((req, res) => {
if (req.headers["if-none-match"] === etag) {
res.writeHead(304);
return res.end();
}
res.writeHead(200, {
"content-type": "application/json; charset=utf-8",
"cache-control": "public, max-age=300",
etag
});
res.end(body);
}).listen(3003);
Time outs, back pressure, and limits
Protect servers with sensible time outs and body size limits. Apply back pressure by pausing reads when downstream is slow, and by rejecting excessively large payloads early.
import http from "node:http";
const MAX = 1_000_000; // 1MB
const server = http.createServer((req, res) => {
req.socket.setTimeout(30_000);
let size = 0;
const chunks: Buffer[] = [];
req.on("data", c => {
size += (c as Buffer).length;
if (size > MAX) {
res.writeHead(413);
res.end("payload too large");
req.destroy();
return;
}
chunks.push(c as Buffer);
});
req.on("end", () => {
res.writeHead(200, { "content-type": "text/plain; charset=utf-8" });
res.end(`received ${size} bytes`);
});
});
server.maxConnections = 1000;
server.listen(3004);
Measure under realistic traffic. Tune socket and server limits gradually. A small change to idle time outs or pool size can have a large effect on latency and memory use.
Chapter 15: Frameworks and Middleware
Node’s HTTP core is intentionally small, which leaves room for frameworks that add routing, middleware, and helpful patterns. This chapter compares popular servers, explains how the request lifecycle works, and shows practical techniques for validation, logging, errors, templates, and static files. The examples use TypeScript so that types guide your design and catch mistakes early.
Express, Fastify, and Koa compared
Express, Fastify, and Koa each provide a thin layer over Node networking with different opinions about speed, extensibility, and ergonomics. The right choice depends on your needs around plugins, performance, and how you want to model middleware.
| Framework | Philosophy | Middleware model | Routing style | Types | Logging | Validation |
| Express | Mature and minimal with a huge ecosystem | app.use stack with req, res, next | Methods like app.get, app.post | Community types via @types/express | External middleware like morgan | Libraries like zod, joi |
| Fastify | Speed and developer experience with schema first design | Hook system; encapsulated plugins and decorators | fastify.route or fastify.get | First class TS helpers | Built in via pino | Built in JSON Schema with fast validators |
| Koa | Tiny core with composable async functions | Onion style async (ctx, next) | Router as a separate package | Community types via @types/koa | External middleware | Libraries or router layer |
When to prefer each
Choose Express when you want maximal compatibility and a stable ecosystem. Choose Fastify when performance, JSON Schema, and plugin encapsulation matter. Choose Koa when you prefer small building blocks and the onion model of composition.
Project setup in TypeScript
All three work well with TypeScript once you add types and compiler options. Enable esModuleInterop if a package expects default imports, and target a modern runtime that matches your Node version.
// Example common tsconfig bits
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "Bundler",
"esModuleInterop": true,
"strict": true,
"skipLibCheck": true
}
}
Routing, request lifecycle, and middleware
Frameworks deliver a pipeline that receives a request, transforms it through middleware, then produces a response. Ordering is significant since earlier functions prepare context for later ones. Understanding the chain lets you place logging, parsing, and guards in predictable spots.
The pipeline
Each request flows through a stack. In Express it is the app.use chain. In Koa it is nested await next() calls. In Fastify it is hooks like onRequest, preHandler, and route handlers. Place cross cutting concerns early, keep handlers focused, and return quickly on failures.
Writing app.use middleware
Express middleware reads or augments req and res, then calls next. Keep it pure and side effect aware so that sequencing remains clear.
// express-middleware.ts
import express, { Request, Response, NextFunction } from "express";
const app = express();
app.use((req: Request, _res: Response, next: NextFunction) => {
req.headers["x-start"] = Date.now().toString();
next();
});
app.get("/ping", (_req, res) => {
res.json({ ok: true });
});
app.listen(3000);
Koa’s onion model
Koa composes async functions so that code before await next() runs on the way in, and code after runs on the way out. This makes timing and error wrapping natural.
// koa-onion.ts
import Koa from "koa";
const app = new Koa();
app.use(async (ctx, next) => {
const t = Date.now();
await next();
ctx.set("x-time", String(Date.now() - t));
});
app.use(async ctx => {
ctx.body = { ok: true };
});
app.listen(3000);
Fastify hooks and encapsulation
Fastify encourages plugins that register routes and hooks with local scope. Encapsulation keeps concerns modular and reduces accidental cross talk between features.
// fastify-hooks.ts
import Fastify from "fastify";
const app = Fastify({ logger: true });
app.addHook("onRequest", async (req, _res) => {
req.headers["x-start"] = Date.now().toString();
});
app.get("/ping", async () => {
return { ok: true };
});
app.listen({ port: 3000 });
Async errors and central handlers
Async functions can throw. Central handlers prevent leaks and keep responses uniform. In Express, write error middleware with four parameters; note that Express 4 does not forward rejected promises automatically, so async handlers must pass errors to next, while Express 5 forwards them for you. In Koa, wrap await next() in a try/catch at a top layer. In Fastify, use the built in error handling with a custom setErrorHandler when you want structured output.
// express-errors.ts
import express, { Request, Response, NextFunction } from "express";
const app = express();
app.get("/fail", async (_req, _res, next) => {
  try {
    throw new Error("Boom");
  } catch (err) {
    next(err); // Express 4 needs this; Express 5 forwards rejections itself
  }
});
app.use((err: unknown, _req: Request, res: Response, _next: NextFunction) => {
  res.status(500).json({ error: (err as Error).message });
});
app.listen(3000);
Validation, logging, and error handling
Validation rejects bad inputs early. Logging records what happened and why. Error handling produces consistent responses. Combine these so that every request either succeeds clearly or fails clearly.
Schema validation with zod in Express
Define schemas once, infer types for handlers, and validate requests at the edge. This reduces boilerplate and prevents unsafe access to body fields.
// express-zod.ts
import express from "express";
import { z } from "zod";
const app = express();
app.use(express.json());
const CreateUser = z.object({
email: z.string().email(),
name: z.string().min(1)
});
type CreateUser = z.infer<typeof CreateUser>;
function validate<T>(schema: z.ZodSchema<T>) {
return (req: express.Request, res: express.Response, next: express.NextFunction) => {
const parsed = schema.safeParse(req.body);
if (!parsed.success) {
return res.status(400).json({ errors: parsed.error.issues });
}
// attach typed data for downstream handlers
(req as any).data = parsed.data;
next();
};
}
app.post("/users", validate(CreateUser), (req, res) => {
const data = (req as any).data as CreateUser;
res.status(201).json({ id: "u_"+Date.now(), ...data });
});
app.listen(3000);
Built in validation and logging
Fastify integrates JSON Schema and high speed validation by default, and uses pino for structured logs. This combination helps when you want predictable formats and performance.
// fastify-schema.ts
import Fastify from "fastify";
const app = Fastify({ logger: true });
app.post("/users", {
schema: {
body: {
type: "object",
required: ["email","name"],
properties: {
email: { type: "string", format: "email" },
name: { type: "string", minLength: 1 }
}
}
}
}, async (req) => {
return { id: "u_"+Date.now(), ...(req.body as any) };
});
app.setErrorHandler((err, _req, res) => {
res.status(500).send({ error: err.message });
});
app.listen({ port: 3000 });
Request scoped correlation IDs
Attach a correlation ID so logs across services can be tied together. Use a header if present, or generate one for internal calls.
// correlation.ts (Express)
import { randomUUID } from "node:crypto";
export function correlation() {
return (req: any, res: any, next: any) => {
const id = req.get("x-correlation-id") || randomUUID();
req.correlationId = id;
res.set("x-correlation-id", id);
next();
};
}
Error taxonomy and mapping
Create a small set of error classes for common outcomes, then map them to HTTP status codes in one place. This prevents conditional scattering across handlers and makes responses predictable for clients.
// errors.ts
export class NotFound extends Error {}
export class Forbidden extends Error {}
export class BadRequest extends Error {}
export function toStatus(err: unknown): number {
if (err instanceof BadRequest) return 400;
if (err instanceof Forbidden) return 403;
if (err instanceof NotFound) return 404;
return 500;
}
Template rendering
Server rendered pages remain useful for dashboards, emails, and simple apps. Even in API first projects you often serve a health page and static files like a favicon, robots.txt, and documentation. Handle caching and content types carefully so clients receive fresh content when needed.
View engines in Express
Express supports template engines through app.set and a render call on the response. Choose a simple engine for small pages or a component based engine when you want reuse and layout control.
// express-views.ts
import express from "express";
import path from "node:path";
const app = express();
app.set("views", path.join(process.cwd(), "views"));
app.set("view engine", "pug");
app.get("/", (_req, res) => {
res.render("index", { title: "Home", message: "Hello" });
});
app.listen(3000);
Static files and caching
Serve static assets from a dedicated directory and apply cache headers that fit your deployment. Fingerprinted files can be cached for a long time, while HTML usually needs shorter lifetimes.
// static.ts (Express)
import express from "express";
import path from "node:path";
const app = express();
app.use("/assets", express.static(path.join(process.cwd(), "public"), {
etag: true,
lastModified: true,
maxAge: "1h",
setHeaders: (res, filepath) => {
if (filepath.endsWith(".html")) {
res.setHeader("Cache-Control", "no-cache");
}
}
}));
app.listen(3000);
Sending correct content types
Always send accurate content types so browsers and intermediaries do not guess. Most frameworks handle this automatically for common formats, but custom file types need an explicit header.
// content-type.ts (Koa)
import Koa from "koa";
const app = new Koa();
app.use(async ctx => {
ctx.set("Content-Type", "application/json; charset=utf-8");
ctx.body = JSON.stringify({ ok: true });
});
app.listen(3000);
Chapter 16: Data and Persistence
Applications become useful once they store state safely and retrieve it quickly. This chapter explores relational databases, NoSQL systems, migrations, schema tools, caching layers, and message queues. Each section highlights patterns that work cleanly with TypeScript so your data access remains predictable and strongly typed.
Relational databases and drivers
Relational engines such as PostgreSQL, MySQL, MariaDB, SQLite, and SQL Server provide structured storage with transactions and powerful query languages. Node offers several drivers and higher level abstractions that make these engines fit naturally into a TypeScript project.
| Database | Strengths | Driver | Notes |
| PostgreSQL | Rich types, strong correctness, extensions | pg | Excellent JSON support and indexing |
| MySQL and MariaDB | Broad hosting availability and speed | mysql2 | Fast prepared statements and pooling |
| SQLite | Zero setup and file based | better-sqlite3 | Great for dev tools, tests, and small sites |
Using pg with TypeScript
The pg client provides pooled connections and parameterised queries. Map rows to typed interfaces so handlers treat database output safely.
// db.ts
import { Pool } from "pg";
const pool = new Pool({
connectionString: process.env.DATABASE_URL
});
export async function getUser(id: number) {
const r = await pool.query("select id, email, name from users where id = $1", [id]);
return r.rows[0] as { id: number; email: string; name: string } | undefined;
}
Type safe query builders
Tools like Kysely and Drizzle infer types from table definitions. They help prevent mistakes by checking column names and result shapes at compile time. These tools are helpful when schemas evolve often.
// drizzle-example.ts
import { drizzle } from "drizzle-orm/node-postgres";
import { users } from "./schema";
import { Pool } from "pg";
const db = drizzle(new Pool({ connectionString: process.env.DATABASE_URL }));
export async function allUsers() {
return db.select().from(users);
}
NoSQL options and use cases
NoSQL databases store documents, key value pairs, graphs, or other flexible shapes. They work well when schemas change rapidly or when relationships are shallow. However, transactional guarantees and complex queries vary widely across systems.
| System | Model | Strengths | Typical uses |
| MongoDB | Document | Flexible shapes and indexing | Content stores and user profiles |
| Redis | Key value | Speed and primitives like lists and sets | Caches and rate limits |
| Cassandra | Wide column | Large scale writes | Event logs and time series |
MongoDB with the official driver
The official driver exposes collections as typed generics. This keeps inserts, updates, and queries consistent with your TypeScript interfaces.
// mongo.ts
import { MongoClient } from "mongodb";
const client = new MongoClient(process.env.MONGO_URL!);
const db = client.db("example");
export interface User {
email: string;
name: string;
created: Date;
}
export async function createUser(u: User) {
const coll = db.collection<User>("users");
await coll.insertOne(u);
return u;
}
Redis for ephemeral state
Redis excels when you need simple data structures that respond quickly. Use it for tokens, rate limits, counters, and caches.
// redis.ts
import { createClient } from "redis";
const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();
export async function bumpCounter(key: string) {
return redis.incr(key);
}
Migrations and schema management
Schemas shift over time. Migrations bring structure to those changes so different environments stay aligned. Track migrations in version control and run them reliably during deployment.
Migration tools
Tools such as knex, drizzle-kit, and umzug provide structured migration files. Use sequential files or timestamped names to control ordering.
// knex-migration.js
export function up(knex) {
return knex.schema.createTable("users", t => {
t.increments("id");
t.string("email").notNullable();
t.string("name").notNullable();
t.timestamp("created").defaultTo(knex.fn.now());
});
}
export function down(knex) {
return knex.schema.dropTable("users");
}
Safe rollouts
Prefer additive changes. Add new columns, migrate data gradually, then remove old columns once clients have switched. Heavy destructive changes slow deployments and risk downtime.
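As a sketch of that additive pattern with knex (the table and column names are illustrative): add the new column as nullable first so old code keeps working, backfill from existing data, and only tighten constraints or drop the old column in a later migration once every client writes the new field.

```javascript
// expand step: old code keeps working because the new column is nullable
export async function up(knex) {
  await knex.schema.alterTable("users", t => {
    t.string("display_name").nullable();
  });
  // backfill from the existing column so reads never see missing data
  await knex("users").whereNull("display_name").update({ display_name: knex.ref("name") });
}

export async function down(knex) {
  await knex.schema.alterTable("users", t => t.dropColumn("display_name"));
}
```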
Versioning strategies
Use one of two approaches. You can keep a linear sequence of migration files or use timestamp based files that sort automatically. Both approaches work well as long as you test them in staging before production.
Caching layers and queues
Caching reduces repeated work and improves response times. Queues separate slow tasks from fast paths so users do not wait. Combined, these tools make applications smoother under load.
Layered caching
Combine in memory caches with distributed stores for speed and durability. In memory caches respond instantly while Redis or Memcached share state between processes. Use careful expiration policies so stale data remains rare.
// simple-cache.ts
const local = new Map<string, { value: unknown; expire: number }>();
export async function remember<T>(key: string, ttlMs: number, compute: () => Promise<T>): Promise<T> {
  const item = local.get(key);
  if (item && item.expire > Date.now()) return item.value as T;
  const value = await compute();
  local.set(key, { value, expire: Date.now() + ttlMs });
  return value;
}
Queues for background tasks
Use a queue to offload work that does not need to block an HTTP request. Jobs such as email, image resize, and report generation fit well here. Popular queue systems include RabbitMQ, Redis streams, and cloud managed queues.
// bullmq-example.ts
import { Queue } from "bullmq";
import IORedis from "ioredis";
const connection = new IORedis(process.env.REDIS_URL!);
const jobs = new Queue("emails", { connection });
export async function sendEmailJob(to: string, body: string) {
await jobs.add("sendEmail", { to, body });
}
Workers and concurrency
Workers pull tasks from a queue and process them independently. Use concurrency settings that match your CPU and IO profile so each worker remains responsive without overwhelming the system.
// worker.ts
import { Worker } from "bullmq";
import IORedis from "ioredis";
const worker = new Worker("emails", async job => {
const { to, body } = job.data;
// send email …
}, {
connection: new IORedis(process.env.REDIS_URL!),
concurrency: 4
});
By choosing appropriate storage engines, planning schema changes, and placing caches and queues carefully, your application becomes more reliable and more predictable as it grows.
Chapter 17: Observability and Operations
Once an application reaches production, observability becomes the compass that guides every fix and improvement. Clear logs, useful metrics, distributed traces, and predictable configuration help you understand what the system is doing even when you are far from the terminal. This chapter brings these pieces together so your Node services remain transparent, measurable, and steady under load.
Logging strategy and structure
Logs are the first signal you check during an incident. They should remain structured, easy to search, and context rich. A few predictable fields make dashboards meaningful and give operators a reliable trail to follow.
Shaping structured logs
Emit logs as structured objects rather than free text. Include fields such as method, path, status, duration, and a correlation ID. This improves filtering and chart building in tools such as Loki, Elastic, or cloud providers.
// logger.ts
export function logEvent(data: Record<string, any>) {
process.stdout.write(JSON.stringify({ time: Date.now(), ...data }) + "\n");
}
Integrating logging with frameworks
Fastify bundles structured logging with pino. Express and Koa require external helpers. Wrap your logger in middleware so every request receives a consistent lifecycle entry.
// express-logger.ts
import { logEvent } from "./logger";
export function requestLogger(req, res, next) {
const start = Date.now();
res.on("finish", () => {
logEvent({
method: req.method,
path: req.url,
status: res.statusCode,
duration: Date.now() - start
});
});
next();
}
Log levels and rotation
Use levels such as info, warn, error, and debug. Keep production logs tight by reducing noise at the debug level. Rotate logs by size or time so disks stay healthy and old files compress cleanly.
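One way to gate output by level is a numeric ordering, a minimal sketch in which the threshold comes from a LOG_LEVEL environment variable and enabled is a hypothetical helper:

```typescript
type Level = "debug" | "info" | "warn" | "error";
const order: Record<Level, number> = { debug: 10, info: 20, warn: 30, error: 40 };

// True when a message at `level` should be emitted given the threshold
export function enabled(level: Level, threshold: Level): boolean {
  return order[level] >= order[threshold];
}

const threshold = (process.env.LOG_LEVEL as Level) ?? "info";
export function log(level: Level, data: Record<string, unknown>): void {
  if (!enabled(level, threshold)) return; // drop below-threshold noise
  process.stdout.write(JSON.stringify({ time: Date.now(), level, ...data }) + "\n");
}
```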
Metrics, tracing, and profiling
Metrics provide continuous summaries of system health, traces show how a request moves through the system, and profilers reveal where CPU or memory slows the service. Together these tools create a full picture of runtime behaviour.
Metrics with Prometheus format
Export counters, gauges, and histograms so Prometheus and similar collectors can scrape them. Metrics help you notice slow climbs in error counts or latency before they become incidents.
// metrics.ts — a hand rolled exporter kept deliberately tiny; production
// services usually use a client library such as prom-client
let requests = 0;
export function metrics(req, res, next) {
  if (req.url === "/metrics") {
    res.setHeader("Content-Type", "text/plain; version=0.0.4");
    return res.end(`requests_total ${requests}`);
  }
  requests++;
  next();
}
Distributed tracing
Distributed traces connect spans across services. Each span records timing and context so you can follow a request from edge to database. OpenTelemetry provides the shared vocabulary and exporters for popular backends.
// otel-setup.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
const sdk = new NodeSDK({
instrumentations: [getNodeAutoInstrumentations()]
});
sdk.start();
Profiling CPU and memory
Profilers identify hot paths and leaks. Use Node’s built in --inspect and Chrome DevTools or sampling profilers through clinic when performance dips. Symptoms often appear under concurrency so reproduce load while profiling.
// run with: node --inspect app.js
// then open chrome://inspect
Health checks and graceful shutdown
Reliable services advertise their state and shut down gently when asked. Health checks help orchestrators place traffic safely, and graceful shutdown prevents dropped requests during deploys or restarts.
Liveness and readiness endpoints
Expose two endpoints. Liveness confirms the process is running. Readiness confirms the process is ready to accept traffic. Kubernetes and other orchestrators use these signals to route or withhold requests.
// health.ts
export function liveness(_req, res) {
res.end("ok");
}
export function readiness(_req, res) {
const healthy = true; // check db or cache here
res.statusCode = healthy ? 200 : 503;
res.end(healthy ? "ready" : "not-ready");
}
Graceful shutdown
On signals such as SIGTERM, stop accepting new connections and finish in flight requests before exiting. This avoids dropped responses during deploys.
// graceful.ts
export function attachGraceful(server) {
const shutdown = () => {
server.close(() => process.exit(0));
};
process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);
}
Dependency checks
Check key dependencies such as the database, cache, or message queue. Readiness should fail if any critical component becomes unreachable for more than a short span.
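A sketch of aggregating dependency probes into one readiness verdict; the probe names used in the usage comment are illustrative.

```typescript
type Probe = () => Promise<boolean>;

// Run every probe; a throw counts as unhealthy. Readiness is the AND of all.
export async function checkReadiness(
  probes: Record<string, Probe>
): Promise<{ ok: boolean; detail: Record<string, boolean> }> {
  const detail: Record<string, boolean> = {};
  for (const [name, probe] of Object.entries(probes)) {
    try {
      detail[name] = await probe();
    } catch {
      detail[name] = false;
    }
  }
  return { ok: Object.values(detail).every(Boolean), detail };
}

// Usage: checkReadiness({ db: pingDb, cache: pingRedis }) where pingDb and
// pingRedis are whatever probes your service defines.
```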
Managing configuration across environments
Configuration varies between development, staging, and production. The aim is to keep application code static while configuration flows in from the environment. This keeps deployments reproducible and secure.
Environment variables and safe parsing
Read values from the environment and validate them on startup so errors appear early. Use a small schema to enforce presence and type safety.
// config.ts
import { z } from "zod";
const Schema = z.object({
PORT: z.string().transform(Number),
DATABASE_URL: z.string().url(),
REDIS_URL: z.string().url()
});
export const config = Schema.parse(process.env);
Secret management
Avoid storing sensitive values inside repositories. Use secret managers, encrypted files, or cloud vaults. Retrieve secrets at startup and keep them in memory for the lifetime of the process.
Configuration layering
Layer defaults with environment overrides. For example, include a base file with harmless defaults, then layer environment specific files, then apply environment variables. This pattern keeps development smooth without leaking production values.
Observability closes the loop between development and operations. By combining logs, metrics, traces, health checks, and reliable configuration, you build services that remain transparent and trustworthy even as they grow.
Chapter 18: Security and Reliability
Security and reliability are shared responsibilities between your code, your platform, and your operations. This chapter gives you a practical baseline for real projects that run on the open internet. You will define a clear threat model, validate every input, authenticate users correctly, protect secrets, keep transport secure, keep dependencies healthy, slow bad actors, and build in resilience so that partial failure does not become total failure.
Threat model, input validation, and auth
You cannot defend what you have not defined. A small, written threat model helps you decide which controls matter now. From there, validate inputs at every boundary and authenticate callers in a way that suits your application shape and risk profile.
Define assets, actors, and trust boundaries
List sensitive assets such as account data, payment intents, and API keys. List likely actors such as regular users, curious tinkerers, automated scrapers, and targeted attackers. Draw the trust boundaries between browser, API, internal services, and databases. Any crossing of a boundary needs verification and logging.
Input validation at the boundary
Validate all external inputs before they reach business logic. Treat query strings, path params, JSON bodies, headers, cookies, and environment variables as untrusted. Prefer declarative schemas so you can parse and transform in one pass.
// zod example for a login payload
import { z } from "zod";
const LoginSchema = z.object({
email: z.string().email().min(6).max(254),
password: z.string().min(8).max(128),
remember: z.boolean().optional()
});
type LoginInput = z.infer<typeof LoginSchema>;
// Express handler using safe parse
app.post("/api/login", async (req, res) => {
const parsed = LoginSchema.safeParse(req.body);
if (!parsed.success) {
return res.status(400).json({ error: "Invalid input", details: parsed.error.flatten() });
}
const input: LoginInput = parsed.data;
// proceed with auth …
});
Output encoding and query safety
Encode outputs for the context they will be rendered in such as HTML, attributes, URLs, or JSON. Use parameterized queries for databases and safe templating for views. Never build SQL by concatenating strings. The same principle applies to shell commands and file paths.
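As a minimal sketch of context aware encoding, the function below escapes untrusted text for an HTML body; attribute, URL, and JSON contexts each need their own rules, and the database comment shows the parameterized-query shape rather than a specific driver's API.

```typescript
// Escape untrusted text for an HTML body context.
export function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// For databases, keep values separate from the SQL text, e.g. with pg:
//   pool.query("SELECT id FROM users WHERE email = $1", [email]);
```

Note that `&` must be replaced first, otherwise the entities produced by the later replacements would be double escaped.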
Authentication choices and session scope
Pick an approach that fits your topology. Traditional web apps often use server sessions plus cookies. Distributed APIs often use structured tokens. Scope tokens narrowly, include expiry, and prefer revocation levers.
// Verifying a JWT with audience and clock tolerance
import jwt from "jsonwebtoken";
const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY!;
function verifyAccessToken(token: string) {
return jwt.verify(token, PUBLIC_KEY, {
algorithms: ["RS256"],
audience: "api",
issuer: "https://auth.example.com",
clockTolerance: 5
});
}
Password storage and second factors
Never store raw passwords. Use a memory hard password hasher such as Argon2id with strong parameters. Add optional second factors such as TOTP or WebAuthn for higher value actions.
// Argon2id example with reasonable defaults
import argon2 from "argon2";
export async function hashPassword(pw: string) {
return argon2.hash(pw, { type: argon2.argon2id, timeCost: 3, memoryCost: 19456, parallelism: 1 });
}
export async function verifyPassword(hash: string, pw: string) {
return argon2.verify(hash, pw);
}
Secrets, TLS, and dependency hygiene
Secrets deserve careful handling at rest and in transit. Transport protection keeps data confidential and authentic between parties. Healthy dependencies reduce your exposure to upstream risk.
Secret storage options and trade offs
Choose storage that fits your deployment maturity. Keep a single source of truth and automate rotation wherever possible.
| Option | Pros | Cons | Use when |
| .env files | Simple; works locally | Easy to leak; hard to rotate | Local dev only |
| Environment vars | Standard; easy to inject | Process wide; appears in dumps | Small apps and staging |
| Cloud secret manager | Audit; rotation; IAM | Service coupling | Production services |
| HSM / KMS | Strong keys; envelopes | Cost; complexity | High assurance systems |
// Fetching a secret at runtime with a placeholder client
const apiKey = await secretClient.get("payments/apiKey"); // resolves to latest version …
Environment variables and process.env
Load configuration from the environment at process start then validate with a schema. Avoid accessing process.env deep inside business logic. Pass typed config down through constructors so code is testable.
// Validate config once at boot
import { z } from "zod";
const ConfigSchema = z.object({
NODE_ENV: z.enum(["development","test","production"]),
PORT: z.coerce.number().int().positive().default(3000),
JWT_PUBLIC_KEY: z.string().min(1),
DATABASE_URL: z.string().url()
});
export const CONFIG = ConfigSchema.parse(process.env);
TLS everywhere
Use HTTPS between client and edge, and between edge and origin whenever possible. Prefer modern TLS settings and short lived certificates. Terminate at the load balancer or ingress, then re encrypt traffic to services that cross host boundaries.
// Minimal HTTPS server for internal admin endpoints
import fs from "node:fs";
import https from "node:https";
import app from "./app.js";
const server = https.createServer({
key: fs.readFileSync("./tls/key.pem"),
cert: fs.readFileSync("./tls/cert.pem")
}, app);
server.listen(8443);
Dependency hygiene and supply chain care
Pin versions with a lockfile, update on a schedule, and scan continuously. Treat install hooks and transitive dependencies as part of your attack surface. Prefer small, well maintained packages over large bundles that you barely use.
# Common tasks you can automate in CI
# verify integrity and engines
npm ci
# look for known issues
npm audit --production
# check for orphaned or extraneous modules
npm ls --all
Rate limiting and abuse prevention
Abuse controls make your service predictable under pressure. You will limit how often expensive actions can run, detect patterns across identities, and slow down automated attacks without punishing legitimate users.
Choosing a limiting strategy
Pick a windowing algorithm that fits your needs. Fixed windows are simple; sliding windows reduce burstiness; token buckets allow controlled bursts; leaky buckets smooth output. Use keys that reflect risk, such as IP plus user ID for authenticated traffic.
| Algorithm | Behavior | Good for |
| Fixed window | Count per interval | Simple endpoints |
| Sliding window | Weight recent hits | Public APIs |
| Token bucket | Budget with refill | Bursty clients |
| Leaky bucket | Constant outflow | Queues and workers |
Implementing a Redis token bucket
Centralized stores allow consistent enforcement across pods. Keep the operation atomic to prevent race conditions and include metadata for observability. Use TTLs so idle keys disappear.
// Pseudo implementation for a Redis token bucket
// KEYS[1] = bucket key, ARGV = capacity, refillPerSec, now, cost
// Returns remaining tokens and reset time …
const lua = `
-- token bucket logic here …
`;
const [remaining, reset] = await redis.eval(lua, { keys: ["rl:u:"+userId], arguments: ["100","10",Date.now().toString(),"1"] });
if (remaining < 0) {
res.setHeader("Retry-After", Math.ceil((reset - Date.now()) / 1000));
return res.status(429).json({ error: "Too many requests" });
}
Login hardening and automated abuse
Use progressive delays, sliding limits, and device binding for authentication flows. Consider challenges for anonymous high risk endpoints. Log decisions with opaque identifiers so you can debug without leaking private data.
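A progressive delay can be sketched as below. The in-memory map is an assumption for illustration; production systems keep this state in a shared store such as Redis so every instance sees the same counts.

```typescript
// Failure counts per key (e.g. email plus IP). Illustrative only;
// shared deployments need a central store with TTLs.
const failures = new Map<string, number>();

export function recordFailure(key: string): void {
  failures.set(key, (failures.get(key) ?? 0) + 1);
}

export function recordSuccess(key: string): void {
  failures.delete(key);
}

// Delay doubles per recent failure and is capped so legitimate
// users who mistype a password recover quickly.
export function delayMs(key: string, baseMs = 250, capMs = 30_000): number {
  const count = failures.get(key) ?? 0;
  if (count === 0) return 0;
  return Math.min(capMs, baseMs * 2 ** (count - 1));
}
```

Before checking credentials, the login handler would sleep for delayMs(key) and call recordFailure or recordSuccess afterwards.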
Resource quotas and fairness
For multi tenant systems, set per tenant quotas for requests, concurrent jobs, storage, and bandwidth. Enforce at admission time so you can reject early and cheap. Expose headers like X-RateLimit-Remaining and Retry-After to help well behaved clients adapt.
Resilience patterns and backpressure
Reliability assumes that things fail. Add timeouts, retries with jitter, isolation between components, and pressure relief valves so your system stays within safe limits during incidents.
Timeouts and bounded work
Every network call needs a timeout. Timeouts must be smaller than the caller’s timeout so failures propagate predictably. Bound CPU heavy tasks with worker pools and queue long running jobs.
// Fetch with AbortController and a hard timeout
import fetch from "node-fetch";
async function getJson(url: string, ms = 2000) {
const ctrl = new AbortController();
const t = setTimeout(() => ctrl.abort(), ms);
try {
const res = await fetch(url, { signal: ctrl.signal });
if (!res.ok) throw new Error("Bad status "+res.status);
return await res.json();
} finally {
clearTimeout(t);
}
}
Retries with jitter
Use a small number of retries for transient errors only. Add full jitter to avoid thundering herds. Never retry non idempotent actions unless you have an idempotency key.
// Exponential backoff with full jitter
function sleep(ms: number) { return new Promise(r => setTimeout(r, ms)); }
async function withRetries(fn: () => Promise<unknown>) {
const base = 100;
for (let i = 0; i < 4; i++) {
try { return await fn(); }
catch (e) {
const wait = Math.floor(Math.random() * (base * (2 ** i)));
await sleep(wait);
}
}
throw new Error("Exhausted retries");
}
Bulkheads, pools, and isolation
Separate resources so one noisy neighbor cannot sink the ship. Use connection pools per dependency, separate thread pools for blocking work, and independent queues for different priorities. Apply admission control at each bulkhead.
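A bulkhead can be sketched as a small semaphore: at most a fixed number of calls reach the wrapped dependency at once, and extra callers wait in line rather than piling onto it.

```typescript
// A tiny semaphore acting as a bulkhead around one dependency.
export class Bulkhead {
  private active = 0;
  private queue: Array<() => void> = [];
  constructor(private limit: number) {}

  async run<T>(fn: () => Promise<T>): Promise<T> {
    // Re-check after waking, since another caller may take the slot first.
    while (this.active >= this.limit) {
      await new Promise<void>(resolve => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await fn();
    } finally {
      this.active--;
      this.queue.shift()?.(); // wake the next waiter, if any
    }
  }
}
```

Giving each dependency its own Bulkhead instance keeps a slow database from exhausting the capacity reserved for, say, the cache.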
Circuit breakers and graceful degradation
Fail fast when a dependency is unhealthy. A circuit breaker tracks recent failures and opens the circuit to reject calls quickly, then probes occasionally to see if the dependency has recovered. Provide a fallback when possible such as a cached value or a reduced feature.
// Tiny circuit breaker sketch
type State = "closed" | "open" | "half";
class Breaker {
state: State = "closed";
failures = 0; openedAt = 0;
constructor(private threshold = 5, private coolMs = 5000) {}
async exec<T>(fn: () => Promise<T>): Promise<T> {
const now = Date.now();
if (this.state === "open" && now - this.openedAt > this.coolMs) this.state = "half";
if (this.state === "open") throw new Error("Circuit open");
try {
const res = await fn();
this.failures = 0; this.state = "closed";
return res;
} catch (e) {
this.failures++;
// A failure while probing half open, or too many failures, opens the circuit
if (this.state === "half" || this.failures >= this.threshold) { this.state = "open"; this.openedAt = now; }
throw e;
}
}
}
Backpressure in streams
Use Node streams to avoid loading entire payloads into memory. Respect backpressure by pausing the source when write() returns false and resuming on drain, and by tuning highWaterMark. Propagate pressure across hops so a slow downstream slows the upstream naturally.
// Pausing the source until drain to respect backpressure
import { createReadStream, createWriteStream } from "node:fs";
const src = createReadStream("big.json", { highWaterMark: 64 * 1024 });
const dst = createWriteStream("copy.json", { highWaterMark: 64 * 1024 });
src.on("data", chunk => {
if (!dst.write(chunk)) {
src.pause();
dst.once("drain", () => src.resume());
}
});
src.on("end", () => dst.end());
Load shedding and queues
When you cannot keep up, shed load deliberately. Reject low priority work first, queue work that can be deferred, and cap queue length so latency does not grow without bound. Expose saturation metrics so autoscaling can react intelligently.
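Admission-time shedding can be sketched as a simple in-flight counter with a cap; the Express wiring in the trailing comment is an assumption about how you would attach it.

```typescript
// Reject new work once the in-flight count crosses a cap, so queueing
// delay stays bounded instead of growing without limit.
export class Shedder {
  private inFlight = 0;
  constructor(private cap: number) {}

  tryAdmit(): boolean {
    if (this.inFlight >= this.cap) return false;
    this.inFlight++;
    return true;
  }

  release(): void {
    this.inFlight = Math.max(0, this.inFlight - 1);
  }
}

// Hypothetical Express wiring:
// const shed = new Shedder(100);
// app.use((req, res, next) => {
//   if (!shed.tryAdmit()) return res.status(503).set("Retry-After", "1").end();
//   res.on("finish", () => shed.release());
//   next();
// });
```

Exporting the in-flight count as a gauge gives autoscaling the saturation signal mentioned above.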
With a clear model, strong boundaries, careful secret handling, healthy dependencies, fair usage controls, and resilience patterns, your service remains safe and steady even when conditions are rough.
Chapter 19: Packaging and Deployment
Getting your TypeScript and Node projects into users' hands requires thoughtful packaging and dependable deployment. This chapter walks through command line packaging, container images, serverless and edge targets, and release automation that produces repeatable, auditable artifacts.
CLI apps and shebangs
Command line tools start like any other Node program; the difference is how the entry file is invoked and distributed. A shebang allows a script to run as an executable on Unix like systems. Packaging with npm also needs a bin map so installs can place shims on the user's PATH.
Entry file layout
Place your CLI's entry in src/ and compile to dist/. Keep parsing, I/O, and business logic in separate modules so you can test logic without invoking the process layer.
Adding a shebang safely
Use #!/usr/bin/env node for portability. When compiling TypeScript, output the shebang in the built file; do not rely on the TypeScript compiler to copy comments unless configured.
#!/usr/bin/env node
import { main } from "./cli-main.js";
main(process.argv).catch(err => {
console.error(err instanceof Error ? err.message : String(err));
process.exit(1);
});
Declaring bin in package.json
Map one or more command names to built files. Local development is easiest with npm link or pnpm link.
{
"name": "@acme/greeter",
"type": "module",
"bin": {
"greeter": "./dist/cli.js"
},
"scripts": {
"build": "tsc -p tsconfig.build.json"
}
}
On Unix like systems, make the built entry executable with chmod +x dist/cli.js. Windows does not use executable bits; npm creates a .cmd shim that forwards to Node.
Packaging with npm pack
npm pack creates a tarball that matches what would be published. Add a .npmignore or use files in package.json to include only necessary artifacts.
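For instance, a files allowlist keeps the tarball down to the built output and documentation; the exact entries are illustrative for the @acme/greeter package used earlier.

```json
{
  "name": "@acme/greeter",
  "files": ["dist", "README.md"],
  "bin": { "greeter": "./dist/cli.js" }
}
```

Running npm pack --dry-run prints the file list without writing the tarball, which is a quick way to verify the allowlist before publishing.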
Docker images and multi stage builds
Containers provide a reproducible runtime with system level dependencies. A multi stage build reduces image size by compiling in one stage and copying only the runtime files into a minimal base image.
Choosing a base image
Use an LTS Node image for builds and a smaller base for runtime such as gcr.io/distroless/nodejs or an Alpine variant. Validate native dependencies before switching bases.
A minimal multi stage Dockerfile
This example installs dependencies with a clean install, builds TypeScript, and copies only production files into the final stage.
# syntax=docker/dockerfile:1.7
FROM node:22-bookworm AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src ./src
RUN npm run build && npm prune --omit=dev
FROM gcr.io/distroless/nodejs22
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
USER 1000
EXPOSE 3000
CMD ["dist/server.js"]
Reproducible builds
Pin exact versions in your lockfile and prefer npm ci rather than npm install. Enable Docker layer caching with separate COPY steps so dependency layers are reused.
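A .dockerignore file also helps keep builds reproducible by shrinking the build context and preventing local state from invalidating cache layers; the entries below are a typical starting point rather than a complete list.

```text
# .dockerignore — keep the build context small and cache friendly
node_modules
dist
.git
*.log
.env
```

Excluding .env here also prevents local secrets from being copied into an image by a broad COPY instruction.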
Minimal images such as distroless ship without a shell, so plan for observability from the outside: expose a /health endpoint and structured logs; do not expect to sh into the container.
Serverless targets and edge runtimes
Serverless platforms execute functions in response to events; edge runtimes run JavaScript close to users with a browser like API. Both have constraints around cold starts, startup size, and available Node modules.
Packaging a function handler
Export a small handler surface and keep imports shallow so the platform bundles minimal code. Avoid large optional dependencies.
export async function handler(event: { path: string; body: string | null }) {
const start = Date.now();
const result = await route(event.path, event.body);
return { statusCode: 200, headers: { "content-type": "application/json" }, body: JSON.stringify({ result, took: Date.now() - start }) };
}
Edge compatible code
Edge environments often expose fetch, Request, and Response rather than Node's http module. Use Web APIs when possible and guard any Node specific imports with conditional code.
export default async function handle(request: Request): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/") return new Response("ok", { status: 200 });
return new Response("not found", { status: 404 });
}
Prefer named imports such as import { specific } over importing whole libraries; verify the result with your bundler's analyze report.
Environment and secrets
Use platform specific secret stores rather than plaintext environment files. Keep configuration injection simple; prefer process.env.NAME ?? "default" and validate at startup.
Versioning, releases, and automation
Reliable releases use semantic versioning, repeatable changelog generation, and automated pipelines that build, test, and publish artifacts on tagged commits. The same commit should produce the same package and image.
Semantic versioning
Increment major for breaking changes, minor for new features, and patch for fixes. Communicate deprecations clearly in the changelog and keep the public API surface explicit.
Automated release pipeline
This example workflow runs tests, builds, publishes a package to the registry, and pushes a Docker image tagged with the version. Adjust registry logins and permissions for your environment.
name: release
on:
push:
tags:
- "v*.*.*"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
registry-url: https://registry.npmjs.org
- run: npm ci
- run: npm test --if-present
- run: npm run build
- run: npm publish --provenance --access public
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- run: |
VERSION=${GITHUB_REF_NAME#v}
docker build -t acme/app:$VERSION .
docker push acme/app:$VERSION
Changelogs and provenance
Generate changelogs from conventional commits so readers can scan changes quickly. Enable provenance or attestations where your tooling supports it so consumers can verify how the artifact was built.
Chapter 20: Testing for Node Services
Service testing is about confidence. Each layer of a Node application sees the world through different cracks in the door, so each test style shines a light on different behaviour. This chapter sketches where to draw the borders, how to exercise HTTP surfaces, how to bend time and IO without breaking the truth of your tests, and how to check that a service stays steady when traffic rolls in like a spring tide.
Unit, integration, and end to end boundaries
Clear boundaries keep tests honest. A unit test checks a single piece of logic with all dependencies replaced. An integration test checks two or more real components working together. An end to end test runs the whole service through its public interface. Each layer has a cost profile, and a healthy suite keeps a larger number of fast tests and a smaller number of slow, sweeping ones.
Unit tests as tight loops
Unit tests thrive on small inputs and predictable outputs. They do not touch the file system or network. This purity keeps them quick and portable.
import { sum } from "./math.js";
test("sum adds numbers", () => {
expect(sum(2, 3)).toBe(5);
});
Integration tests as trust builders
An integration test confirms that multiple modules talk to each other correctly. For a Node service, this could involve a real database, a message queue, or a cache. Spin up ephemeral resources when possible so you do not fight stale state.
// Connecting to a test Postgres database
import { createClient } from "./db.js";
test("user lookup works", async () => {
const db = await createClient(globalThis.TEST_DB_URL);
const user = await db.findUser("alice");
expect(user.email).toBe("alice@example.com");
});
End to end checks
End to end tests mirror real behaviour. They start the service, send real HTTP requests, and verify responses, logs, and side effects. Run these sparingly because they can be slow, but keep them around because they anchor the contract your users depend on.
import supertest from "supertest";
import { start } from "../dist/server.js";
let server;
beforeAll(async () => { server = await start(0); });
afterAll(async () => { await server.close(); });
test("GET /health returns ok", async () => {
const res = await supertest(server).get("/health");
expect(res.status).toBe(200);
expect(res.body.status).toBe("ok");
});
HTTP contract tests and fixtures
Contract tests ensure that your HTTP API behaves exactly as documented. These tests treat the service as a small universe: requests go in, responses come out, and fixtures help you shape the universe into predictable states.
Contract focused assertions
Contract tests pay attention to status codes, headers, body shape, and field semantics. They do not care about internal structures. Keep assertions crisp so failures point to real behaviour shifts.
test("POST /login returns typed payload", async () => {
const res = await supertest(server)
.post("/login")
.send({ email: "alice@example.com", password: "pw123456" });
expect(res.status).toBe(200);
expect(res.headers["content-type"]).toContain("application/json");
expect(res.body).toEqual({
userId: expect.any(String),
token: expect.any(String)
});
});
Using fixtures for repeatable states
Fixtures load data into the system so you can work with known scenarios. They act like small theatre sets that you raise and strike between tests. Keep fixtures small and disposable.
// A simple fixture loader for local Postgres
import fs from "node:fs/promises";
export async function loadFixture(db: { query(sql: string): Promise<unknown> }, name: string) {
const sql = await fs.readFile(`./fixtures/${name}.sql`, "utf8");
await db.query(sql); // seeds tables then …
}
Schema drift detection
When clients or services evolve independently, schemas can drift. Use contract tests to detect missing fields, changed types, or broken status codes. A failing contract test is early warning that something upstream or downstream needs attention.
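A dependency free sketch of such a check is shown below; real suites usually pin a zod or JSON Schema definition instead, but the idea is the same: the test fails loudly on missing fields, wrong types, or unexpected extras.

```typescript
// Expected field names and primitive types for a response body.
type FieldSpec = Record<string, "string" | "number" | "boolean">;

// Returns a list of drift problems; an empty list means the payload
// still matches the pinned contract.
export function conforms(payload: unknown, spec: FieldSpec): string[] {
  if (typeof payload !== "object" || payload === null) {
    return ["payload is not an object"];
  }
  const obj = payload as Record<string, unknown>;
  const problems: string[] = [];
  for (const [field, type] of Object.entries(spec)) {
    if (!(field in obj)) problems.push(`missing field: ${field}`);
    else if (typeof obj[field] !== type) problems.push(`wrong type for ${field}`);
  }
  for (const key of Object.keys(obj)) {
    if (!(key in spec)) problems.push(`unexpected field: ${key}`);
  }
  return problems;
}
```

A contract test then asserts that conforms(res.body, spec) is empty, which turns silent drift into a readable failure message.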
Mocking IO and time safely
Many behaviours rely on clocks, randomness, or IO. Mocking these carefully keeps logic deterministic without turning tests into fragile illusions. The trick is to mock only the external edges and keep the core honest.
Mocking file and network calls
Replace IO at the boundary such as fs or HTTP clients. Provide minimal doubles that return controlled outputs. Keep mocks narrow so you do not simulate full systems through fragile stubs.
import { readFile } from "node:fs/promises";
import { loadUser } from "./loader.js";
jest.mock("node:fs/promises", () => ({
readFile: jest.fn()
}));
const mockedRead = readFile as jest.Mock;
test("loader parses JSON", async () => {
mockedRead.mockResolvedValue('{"name":"Ada"}');
const data = await loadUser("./user.json");
expect(data.name).toBe("Ada");
});
Time control and timers
Use a fake clock when logic depends on time so you can advance or freeze it at will. This keeps delays, expiry logic, and retry intervals predictable.
jest.useFakeTimers();
test("token expires", () => {
const t = new Token(1000); // expires in 1 second
jest.advanceTimersByTime(1000);
expect(t.expired()).toBe(true);
});
Randomness and reproducibility
When randomness affects behaviour, inject a seedable source. This yields the same sequence on every run and makes tests tidy to debug.
// A tiny seedable RNG
class RNG {
constructor(private s: number) {}
next(): number { this.s = (this.s * 16807) % 2147483647; return this.s; }
}
Load testing and reliability verification
Load tests reveal how a service behaves under steady pressure and sudden bursts. They tell you where bottlenecks hide, how memory patterns evolve, and where latency bends out of shape. Reliability checks help you confirm that backpressure, timeouts, and retries behave as intended.
Steady load profiles
Start with a clean baseline. Pick a realistic requests per second figure and run it long enough to observe memory plateaus, CPU trends, and connection churn. Steady load shows you how the service behaves on a calm day.
// Example k6 script
import http from "k6/http";
export const options = {
vus: 20,
duration: "2m"
};
export default function () {
http.get("http://localhost:3000/health");
}
Burst and spike scenarios
Real traffic is rarely smooth. Test short spikes and longer bursts so you can see how the service handles queuing, saturation, and recovery. Monitor logs and metrics to spot slowdowns before errors start to appear.
Resilience verification
Use controlled fault injection to test timeouts, circuit breakers, and backpressure. These tests act like storm drills. Run them at safe intervals to confirm that safeguards behave as designed.
// Injecting delays in a fake dependency
app.get("/slow", async (_req, res) => {
await sleep(2000); // delay for test …
res.json({ ok: true });
});
Observing saturation
Track saturation signals such as queue depth, event loop lag, and connection wait times. These clues help you predict where bottlenecks form when traffic climbs.
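Event loop lag can be sampled with Node's built-in perf_hooks histogram; a sketch that exposes the mean and p99 in milliseconds for export to your metrics system:

```typescript
import { monitorEventLoopDelay } from "node:perf_hooks";

// Sample event loop delay; sustained growth in the p99 is an early
// saturation signal worth exporting as a gauge.
export function startLagMonitor() {
  const histogram = monitorEventLoopDelay({ resolution: 20 });
  histogram.enable();
  return {
    // The histogram reports nanoseconds; convert to ms for dashboards.
    snapshotMs: () => ({
      mean: histogram.mean / 1e6,
      p99: histogram.percentile(99) / 1e6,
    }),
    stop: () => histogram.disable(),
  };
}
```

Reading a snapshot on each metrics scrape, then resetting the histogram, keeps the numbers representative of the recent window rather than the whole process lifetime.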
With a thoughtful palette of tests across layers and load shapes, your service grows into something you can trust even before the first user arrives.
PART 3: TypeScript in Node
Chapter 21: Project Setup for TS in Node
TypeScript and Node work well together once the project chooses a runtime module system, a matching compiler configuration, and a layout that keeps build artifacts separate from sources. This chapter gives you a practical baseline that you can apply to new apps or libraries; you will also see the key options that change runtime behavior in Node, plus a short guide to monorepos and project references.
Choosing ESM or CommonJS and why it matters
Node can execute JavaScript in two module systems. ESM uses import and export with URL like resolution rules and file extensions. CommonJS uses require and module.exports with different resolution rules. This choice affects your package.json, file extensions, how TypeScript emits modules, and how tools resolve dependencies.
Package shape and file extensions
For ESM, mark the package as a module and prefer .mts or .ts inputs that compile to .mjs or .js with ESM syntax. For CommonJS, either omit the package type or set it to commonjs, then compile to .cjs or .js with CJS syntax. You can also publish dual entry points using exports maps with conditions.
{
"name": "my-app",
"type": "module",
"exports": {
".": {
"import": "./dist/index.js",
"require": "./dist/index.cjs"
}
}
}
Include explicit file extensions in import paths once compiled, or rely on a bundler that rewrites them. Omitting extensions causes runtime errors in pure Node ESM.
ESM and CommonJS APIs differ
In ESM you cannot use require directly, and __dirname does not exist; you derive it from import.meta.url. In CommonJS you use require and have __dirname available. The following helper gives you a portable directory name in ESM.
// file: src/dir.ts (ESM)
import { fileURLToPath } from "node:url";
import { dirname } from "node:path";
export const here = dirname(fileURLToPath(import.meta.url));
Quick comparison
| Aspect | ESM | CommonJS |
| Package marker | "type": "module" | "type": "commonjs" or omitted |
| Syntax | import and export | require and module.exports |
| Extensions at runtime | Required; use .js or .mjs | Optional; resolution is more permissive |
| Interop | createRequire for CJS; default export wrapping varies | require on transpiled ESM may need .default |
| TypeScript module | NodeNext or ESNext | CommonJS |
Packages that support both systems publish an exports map with import and require conditions.
Compiler options that affect runtime behavior
Several tsconfig.json options change what Node executes, not only how TypeScript type checks. The module target, module resolution strategy, and JSX or decorators settings all influence emitted code and import paths. Choose settings that match your chosen module system and your runtime environment.
module and moduleResolution
Use module: "NodeNext" with moduleResolution: "NodeNext" for projects that follow Node’s ESM and CJS rules in the same repo. This lets TypeScript mirror Node’s behavior for .ts, .mts, .cts inputs and their emitted files. For strictly ESM without CJS inputs, module: "ESNext" with moduleResolution: "Bundler" suits bundler driven apps.
{
"compilerOptions": {
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"rootDir": "src",
"resolveJsonModule": true,
"allowJs": false
}
}
isolatedModules and verbatimModuleSyntax
isolatedModules forces each file to be emit ready by itself, which aligns with many transpilers. verbatimModuleSyntax tells the compiler to keep your imports and exports as written; imports used only for their types must be written with import type.
{
"compilerOptions": {
"isolatedModules": true,
"verbatimModuleSyntax": true
}
}
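In practice that means spelling out which imports exist only for the type checker. The sizeOf helper below is a made-up example; the point is the two import forms.

```typescript
// With verbatimModuleSyntax, type only imports must say so explicitly;
// the compiler then drops them entirely from the emitted JavaScript.
import type { Stats } from "node:fs"; // erased at emit time
import { statSync } from "node:fs";   // kept: used as a value

export function sizeOf(path: string): number {
  const info: Stats = statSync(path);
  return info.size;
}
```

Without the type modifier on the Stats import, the compiler would reject the file, because it can no longer decide on your behalf whether the import is safe to drop.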
Path mapping and resolution
Path aliases improve ergonomics but can break at runtime if Node cannot resolve them. When you use paths, either compile with outDir and run the output where paths are rewritten, or use a resolver at runtime. Keep aliases shallow and stable.
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@lib/*": ["src/lib/*"]
}
}
}
Emit format and sourceMap
Enable sourceMap for debugging and set inlineSources so error stacks map back to TypeScript during local development. Ensure your process manager enables source maps at runtime through flags or environment variables when needed.
{
"compilerOptions": {
"sourceMap": true,
"inlineSources": true
}
}
Strictness, target, and lib settings
Strict types give better feedback in Node services and libraries. The JavaScript features you can use depend on the target you emit and the lib you include for type checking. Choose the newest target your runtime supports, then include DOM libraries only when you actually run in a browser or a hybrid environment.
strict and friends
strict enables several checks at once. You can refine behavior with targeted flags such as noUncheckedIndexedAccess for safer dictionary lookups and exactOptionalPropertyTypes for precise optional fields.
{
"compilerOptions": {
"strict": true,
"noUncheckedIndexedAccess": true,
"exactOptionalPropertyTypes": true
}
}
target for your Node version
Pick a target that matches your minimum Node runtime. Newer Node versions support modern syntax and built ins, which lets you compile less and run faster.
| Minimum Node | Suggested target | Notes |
| Node 16 | ES2021 | Top level await in ESM; Promise.any |
| Node 18 | ES2022 | Error cause; class fields |
| Node 20+ | ES2023 or ESNext | Array findLast; structuredClone |
lib picks for Node
For a pure Node service, include ES2022 or newer libs and DOM only when you truly access browser APIs. Many projects choose ["ES2022"] or ["ES2023"] for server code.
{
"compilerOptions": {
"lib": ["ES2023"],
"skipLibCheck": true
}
}
Adding DOM to lib masks accidental use of browser globals in server code. Keep server and browser targets in separate tsconfig files when you share types across environments.
Monorepos and project references
Project references let TypeScript compile multiple packages in a monorepo in topological order. This improves build speed and keeps type boundaries clear. Arrange each package with its own tsconfig.json, then create a root tsconfig.json with references so tsc -b can coordinate builds.
Directory layout
my-workspace/
├─ package.json
├─ tsconfig.base.json
├─ packages/
│ ├─ utils/
│ │ ├─ package.json
│ │ ├─ tsconfig.json
│ │ └─ src/…
│ └─ api/
│ ├─ package.json
│ ├─ tsconfig.json
│ └─ src/…
Per package tsconfig.json
Each package extends a base and sets an outDir. Mark composite builds to enable references, and set declaration so downstream packages consume types.
// packages/utils/tsconfig.json
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"composite": true,
"declaration": true,
"outDir": "dist",
"rootDir": "src",
"module": "NodeNext",
"moduleResolution": "NodeNext"
},
"include": ["src"]
}
Root tsconfig.json with references
The build config sits at the workspace root and points to each package. The -b flag uses this to build all packages in dependency order and to watch them efficiently.
// tsconfig.json
{
"files": [],
"references": [
{ "path": "./packages/utils" },
{ "path": "./packages/api" }
]
}
Run tsc -b --watch at the root for a fast incremental developer loop across all packages. It tracks which leaf changed and rebuilds only the affected dependents.
Publishing and exports maps
For publishable packages, emit types into dist and define an exports map that points to ESM and CJS entries. This keeps consumers on stable paths and lets Node pick the right file.
// packages/utils/package.json
{
"name": "@scope/utils",
"version": "0.1.0",
"type": "module",
"main": "./dist/index.cjs",
"module": "./dist/index.js",
"types": "./dist/index.d.ts",
"exports": {
".": {
"import": "./dist/index.js",
"require": "./dist/index.cjs",
"types": "./dist/index.d.ts"
}
}
}
Testing and tooling in a monorepo
Keep local package links through your package manager’s workspace support so tooling resolves source during development. Run tests at the package level for quick feedback, then run a workspace wide test pass in continuous integration.
Chapter 22: Build and Run Strategies
Once your project structure is in place, you need a reliable way to turn TypeScript into something Node can execute. Some teams prefer a classic compile step, others run code on the fly during development, and larger projects often adopt fast builders that compress the edit to feedback loop. This chapter explores these approaches so you can pick the style that fits your workflow.
Transpile and run: tsc outDir workflow
This workflow compiles TypeScript files into a dedicated output directory and points Node at the generated JavaScript. It keeps the boundary between source and build products clear, which is helpful for libraries and production deployments. You write code in src, then run tsc to emit into dist or another folder of your choice.
Clean separation of source and output
A split between directories avoids confusion when debugging and removes the risk of shipping TypeScript files by mistake. Most teams use a structure similar to the one below, where src holds the TypeScript and dist receives the compiled output.
my-app/
src/…
dist/…
tsconfig.json
Configuring the compiler
Set rootDir and outDir so that TypeScript mirrors your folder structure and emits files consistently. This also helps tools like nodemon or pm2 watch the correct location.
{
"compilerOptions": {
"rootDir": "src",
"outDir": "dist",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"sourceMap": true
}
}
Keeping build products in dist also makes packaging steps such as npm publish predictable, since only compiled output ships to consumers.
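The package.json scripts that drive this loop typically map build to tsc and start to the emitted entry point (the script names here are illustrative):

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "start": "node dist/index.js",
    "dev": "tsc --watch"
  }
}
```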
On the fly execution with tsx or ts-node
Sometimes you want instant execution without a manual build step. Tools like tsx and ts-node execute TypeScript directly by compiling each file just before running it. This keeps iteration fast and cuts down on the number of moving parts during development.
Using tsx for ESM friendly projects
tsx handles modern ESM resolution rules and supports speedy startup. It respects your tsconfig.json and keeps imports stable, which suits apps that lean on Node’s current resolution model.
npx tsx src/index.ts
ts-node for classic environments
ts-node hooks into Node’s loader chain and works well for CommonJS or mixed projects. While not as fast as dedicated builders, it is simple and dependable for small apps or command line tools.
npx ts-node src/index.ts
Fast builders: esbuild and swc
For large codebases, build speed becomes part of developer happiness. Builders like esbuild and swc compile TypeScript extremely quickly and can bundle code when needed. They skip type checking by default, so many teams pair them with tsc --noEmit in continuous integration.
esbuild for ultrafast bundling and transforms
esbuild is designed for speed and can collapse an entire dependency graph into a single output. You can use it to build server code or to prepare a hybrid project that mixes client and server elements.
esbuild src/index.ts --platform=node --format=esm --outfile=dist/index.js
swc for a Rust powered pipeline
swc offers fast transforms and a flexible configuration file. It is a strong choice when you want speed with fine grained control over emitted JavaScript targets.
{
"jsc": {
"target": "es2022",
"parser": { "syntax": "typescript" }
}
}
Pair the fast builder with tsc --noEmit so type checking stays part of the pipeline. This gives you speed without sacrificing safety.
Source maps and production debugging
Source maps connect emitted JavaScript back to your TypeScript files. When configured correctly, error stacks point to the original source, which saves time during debugging. You can use inline maps during development and external maps for production deployments.
Compiler options for clean debugging
Enable sourceMap and inlineSources so your editor and Node’s inspector can locate original lines. External maps are often better in production because they reduce payload size while still giving traceability when you capture logs.
{
"compilerOptions": {
"sourceMap": true,
"inlineSources": true
}
}
Runtime support in Node
Node translates stack traces through source maps when you run it with the --enable-source-maps flag, and many dev runners and loaders enable equivalent behavior for you. If you use older runtime tools or process managers, confirm that source maps remain available in the deployed environment.
Chapter 23: Typing Node Core and Ecosystem
TypeScript turns Node’s dynamic landscape into something steadier by layering types over core modules, community packages, and your own glue code. This chapter shows how to use the official Node types, how to write ambient declarations when the ecosystem falls short, and how to maintain internal type packages that keep large projects tidy.
@types/node and DOM libs
The @types/node package provides type definitions for Node’s standard library, including modules such as fs, path, crypto, and many others. It describes callbacks, promises, streams, buffers, and event emitters in a way that lines up with the actual runtime. Most projects install it either directly or indirectly through workspace tooling.
Enabling Node types
Ensure your compiler includes Node’s types in the global namespace. When the types field is missing from compilerOptions, TypeScript automatically pulls declarations from all installed @types packages, which often works fine. In more controlled setups, list the Node types explicitly.
{
"compilerOptions": {
"types": ["node"],
"lib": ["ES2023"]
}
}
Avoiding accidental DOM usage
Including DOM libraries brings in types for document, window, and many browser specific globals. These can mask mistakes because they let Node code type check while still failing at runtime. Only add DOM libs when you intend to run in a browser or a hybrid environment.
When Node code needs fetch, resist the urge to add the DOM library automatically. Prefer installing a proper fetch implementation for Node or use the built in global fetch when your runtime supports it.
Writing and consuming ambient declarations
Ambient declarations fill the gaps where JavaScript modules have no built in types. You describe the shape of an API with declare statements so the compiler can understand how your code interacts with those modules. These declarations can live in .d.ts files within your source tree or in a dedicated types folder.
Module declarations
To type a module that exports a function, object, or class, write a declare module block. The compiler uses these shapes whenever it encounters an import that matches the module name.
// types/legacy-lib.d.ts
declare module "legacy-lib" {
export function init(config: unknown): void;
export const version: string;
}
Global augmentations
Some packages expect you to extend Node’s global objects or add new fields to process.env. Use a global declaration file for this so the changes stay isolated and easy to track.
// types/globals.d.ts
declare namespace NodeJS {
interface ProcessEnv {
API_URL?: string;
FEATURE_FLAG?: string;
}
}
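With the augmentation in place, reads from process.env are typed as optional strings, which nudges you toward explicit defaults. A short usage sketch (the fallback URL is illustrative):

```typescript
// Both fields type as string | undefined, so handle the missing case explicitly.
const apiUrl: string = process.env.API_URL ?? "http://localhost:3000";
const flagOn: boolean = process.env.FEATURE_FLAG === "on";
```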
Dealing with untyped or poorly typed packages
Not every npm package comes with solid type definitions. Some include incomplete shapes, some drift out of date, and some have no definitions at all. You can manage these cases with targeted strategies that keep your project safe without drowning in boilerplate.
Adopting @types/* packages
When a package lacks built in types, DefinitelyTyped often fills the gap. Installing the relevant @types package provides community maintained type definitions that follow the published API. These packages usually track upstream releases, although occasionally they lag behind.
npm install --save-dev @types/left-pad
Using any sparingly
When definitions are broken or too loose, you may fall back on any as a temporary measure. Apply it to the smallest possible surface area so that the rest of your code continues to benefit from strong checking. This isolates the uncertainty instead of letting it spread.
import broken from "untyped-lib";
const result: any = broken.doSomething();
Partial rewrites for stability
For important dependencies, consider rewriting small portions of the package’s types locally. You can patch definitions by creating a module declaration file that overrides or narrows the upstream shapes.
// types/untyped-lib.d.ts
declare module "untyped-lib" {
export function doSomething(): { ok: boolean };
}
Maintaining internal type packages
Larger codebases often share common types across several projects. Instead of duplicating definitions, create an internal types package that holds interfaces, helpers, and ambient declarations. This gives every team a consistent language for domain objects and service contracts.
Setting up a shared types package
Create a new workspace or package that contains only .ts and .d.ts files. Mark it as a composite project so it can participate in project references, then export a clear surface for consumers to import.
internal-types/
package.json
tsconfig.json
src/
index.ts
domain/…
events/…
Publishing and versioning internal types
You can publish this package to a private registry or keep it within a monorepo. Treat it like a normal dependency: tag releases, bump versions, and document breaking changes. This prevents type drift across independent services.
// internal-types/package.json
{
"name": "@org/internal-types",
"version": "1.4.0",
"type": "module",
"types": "./dist/index.d.ts"
}
Chapter 24: APIs with Confidence
Robust APIs thrive on clarity. With TypeScript in Node you can express request shapes, response contracts, and error conditions in a way that helps both humans and machines understand the boundaries of your service. This chapter looks at how to apply type safety to common HTTP frameworks, how to validate data at runtime, how to generate OpenAPI specifications from your definitions, and how to model errors in a clean and predictable way.
Type safe HTTP with Fastify or Express
Fastify and Express let you shape web servers with minimal code, although their approaches differ. Fastify includes built in schema features that align naturally with TypeScript, while Express keeps things open and flexible. Both can become type safe once you pair them with strongly typed handlers and clear request contracts.
Fastify built in schema support
Fastify encourages you to define JSON schemas for requests and responses. When paired with TypeScript, those schemas become hints that guide your handlers toward safer code. You register a route with a schema property, and with a schema aware type provider TypeScript can infer parameter and body types from it.
fastify.route({
method: "POST",
url: "/items",
schema: {
body: {
type: "object",
properties: { name: { type: "string" } },
required: ["name"]
}
},
handler: async (req, reply) => {
const item = { id: 1, name: req.body.name };
return reply.send(item);
}
});
Type safe Express handlers
Express does not enforce request shapes, so you supply them manually with generics. This works well for smaller services where you do not need a full schema driven approach. Define interfaces for query strings, parameters, bodies, and responses, then attach them to your handlers.
import { Request, Response } from "express";
interface Body {
title: string;
}
function create(req: Request<{}, { ok: boolean; title: string }, Body>, res: Response) {
res.json({ ok: true, title: req.body.title });
}
Runtime validation with Zod or similar
Compile time checks catch many mistakes, but they cannot protect you from malformed data arriving over the network. Runtime validators such as Zod let you enforce contracts at the boundary, which keeps your API honest and prevents silent failures.
Defining a schema once and using it twice
A Zod schema describes the allowed shape of a payload. TypeScript infers the static type from that schema, so you avoid writing the same structure twice. When requests arrive, you parse the input through the schema and get a safe value or a validation error.
import { z } from "zod";
const Item = z.object({
name: z.string(),
price: z.number().positive()
});
type Item = z.infer<typeof Item>;
Integrating validation into handlers
The validator becomes a small guard at the top of each route. Failed validations produce clear error messages that callers can rely on. Successful validations give you strong types for the rest of the handler.
const parsed = Item.safeParse(req.body);
if (!parsed.success) {
return reply.status(400).send({ error: parsed.error.format() });
}
const item = parsed.data;
OpenAPI generation from types
OpenAPI documents describe your service in a machine readable format. Tools that link TypeScript types to OpenAPI let you create this documentation automatically. This avoids drift between code and documentation and gives client generators a reliable roadmap.
Schema driven generators
Libraries such as ts-json-schema-generator or framework specific plugins read your types and produce JSON schema or OpenAPI definitions, while openapi-typescript works in the opposite direction, generating TypeScript types from an existing OpenAPI document. These artifacts become inputs for documentation servers, test suites, and client stubs.
npx openapi-typescript schema.json --output types.ts
Embedding OpenAPI metadata in routes
Some frameworks let you embed OpenAPI metadata directly inside route definitions. This approach keeps documentation close to your implementation and reduces the chance of mismatches.
{
method: "GET",
url: "/items",
schema: {
response: {
200: {
type: "array",
items: { type: "string" }
}
}
}
}
Error modeling and typed results
APIs need a calm and structured way to communicate failure. Typed errors turn vague exceptions into clear signals about what went wrong and how clients should respond. Instead of throwing unstructured messages, define a small set of error shapes and return them consistently.
Defining result types
A discriminated union separates successful paths from failures. Each failure case carries enough context to help debugging while staying small enough to serialize safely. Clients can inspect the tag field to decide on the right action.
type Result<T> =
| { ok: true; value: T }
| { ok: false; error: "NotFound" | "InvalidInput" | "Conflict"; detail?: string };
Returning typed errors in handlers
Your handler returns either a successful result or a typed failure. This makes error handling explicit and gives upstream callers a pattern that is simple to test.
function getItem(id: string): Result<string> {
if (id === "") {
return { ok: false, error: "InvalidInput" };
}
return { ok: true, value: "Item " + id };
}
Chapter 25: Databases with Type Safety
Databases shape the heart of many Node services. When you combine TypeScript’s type system with the structure of your tables, you gain a calmer way to reason about queries, migrations, and data access layers. This chapter looks at typed SQL patterns, gives an overview of popular ORMs and query builders, covers schema aware migrations, and shows how to write predictable tests with lightweight factories.
Type safe SQL access patterns
Writing SQL directly can feel close to the metal, yet type safety is not lost when you use the right tools. Several libraries parse SQL template literals and infer result types through TypeScript. This lets you keep full control over your queries while still enjoying strong compile time guarantees.
Tagged template SQL helpers
Libraries such as slonik or @databases/pg let you mark SQL with a special function. These helpers validate parameters and return rows with inferred shapes. This keeps your queries readable without sacrificing safety.
const rows = await sql<{ id: number; name: string }>`
select id, name
from items
where id = ${itemId}
`;
Parameter binding and injection safety
Using parameters avoids the hazards of string concatenation. Query helpers insert values safely and the database driver handles the binding. The type checker ensures that each placeholder receives the right kind of value.
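To see why binding is safe, here is a toy sketch of what a tagged template helper does under the hood (this is not the slonik API, just the underlying idea): the SQL text and the values stay separate, so user input never becomes part of the query string.

```typescript
// Toy sql tag: builds "$1", "$2", … placeholders and collects values for the driver to bind.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce((acc, s, i) => acc + (i ? `$${i}` : "") + s, "");
  return { text, values };
}

const itemId = 42;
const q = sql`select id, name from items where id = ${itemId}`;
// q.text keeps a placeholder; q.values carries [42] for the driver.
```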
ORM and query builders overview
Object relational mappers and query builders offer another route to type safety. They generate queries behind the scenes and produce TypeScript types that follow your schema. Each tool balances convenience, performance, and flexibility in different ways.
Prisma and its generated client
Prisma reads a declarative schema file and generates a client that exposes strongly typed CRUD operations. The client tracks relations, nullability, enums, and field types. This approach suits teams that want predictable queries and a clear domain model.
const item = await prisma.item.findUnique({
where: { id: itemId }
});
Drizzle, Kysely, and other query builders
Query builders like Drizzle and Kysely let you write SQL shaped code with full type inference. They avoid the sometimes heavy runtime of ORMs and give you control over the generated queries. This works well when you want type safety without a high level abstraction layer.
const rows = await db
.selectFrom("items")
.select(["id", "name"])
.where("id", "=", itemId)
.execute();
Migrations and schema types
Schema changes move in step with your code. Type safe migrations describe how tables evolve over time and let you synchronize your database structure across environments. Many migration tools integrate directly with TypeScript or allow you to attach types to the schema definitions they generate.
Code first migrations
Tools such as Prisma Migrate or Drizzle Kit create migrations based on your schema files. They detect differences, generate SQL, and give you a structured history of changes. This makes upgrades predictable and prevents accidental drift.
npx prisma migrate dev --name add_price_column
Schema types across services
When multiple services share tables or common views, export types that match your schema. These types reflect the authoritative shape of your data and help independent teams coordinate changes.
export interface Item {
id: number;
name: string;
price: number;
}
Testing data layers with factories
Database tests become lighter when you generate fixtures with small factories. These factories create valid records that follow your schema, which keeps test setup quick and expressive. The type system ensures that each record matches the shape of your domain.
Simple factories with overrides
A factory returns an object that fits a table or entity. Optional overrides give you control over edge cases. This avoids duplicating large data blocks across tests.
function makeItem(overrides: Partial<Item> = {}): Item {
return {
id: 1,
name: "Example",
price: 10,
...overrides
};
}
Seeding the database for integration tests
For integration tests that use a real database, seed the tables with a small number of stable fixtures. Clear the tables between runs so tests do not bleed into one another. The type system makes sure your fixtures stay aligned with the evolving schema.
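Building on the makeItem factory above, a small seed helper can produce distinct, stable fixtures; the returned array stands in for real table inserts, which a test harness would write and truncate between runs:

```typescript
interface Item { id: number; name: string; price: number }

function makeItem(overrides: Partial<Item> = {}): Item {
  return { id: 1, name: "Example", price: 10, ...overrides };
}

// Generate n distinct fixtures with predictable ids and names.
function seedItems(n: number): Item[] {
  return Array.from({ length: n }, (_, i) => makeItem({ id: i + 1, name: `Item ${i + 1}` }));
}

const fixtures = seedItems(3);
```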
Chapter 26: Concurrency and Streams
Concurrency in Node comes from cooperative asynchronous execution, worker threads for CPU bound tasks, and streaming primitives that move data in small chunks. TypeScript adds confidence through typed iterators, typed messages, and safe resource handling. This chapter ties these together and shows practical patterns that avoid contention while maintaining backpressure and predictable flow.
Async iteration and typed streams
Async iteration lets you consume data chunk by chunk with for await. When you pair it with Node streams, you get a natural way to process files, sockets, and generated values without buffering everything. TypeScript types document each chunk and reduce accidental mixing of object and byte modes.
AsyncIterator fundamentals
An async iterable exposes [Symbol.asyncIterator]() and yields values over time. This meshes well with I/O and timers because each await cooperates with the event loop; nothing blocks between chunks.
async function* lines(src: AsyncIterable<Buffer>): AsyncIterable<string> {
let acc = "";
for await (const buf of src) {
acc += buf.toString("utf8");
let idx;
while ((idx = acc.indexOf("\n")) >= 0) {
yield acc.slice(0, idx);
acc = acc.slice(idx + 1);
}
}
if (acc) yield acc;
}
Bridging Node stream.Readable to async iteration
Every modern Node Readable is also an async iterable, so you can loop with for await. This avoids manual data handlers and keeps control flow linear.
import { createReadStream } from "node:fs";
async function* readUtf8(path: string): AsyncIterable<string> {
  const rs = createReadStream(path, { encoding: "utf8" });
  // With an encoding set, each chunk arrives as a string; the stream closes itself when iteration ends.
  for await (const chunk of rs) yield chunk as string;
}
Typed object mode streams
When you set objectMode: true, a stream passes arbitrary JavaScript values instead of bytes. Define a type for the chunk shape and keep it consistent across your pipeline.
type Row = { id: string; name: string; email?: string };
import { Readable, Transform, Writable } from "node:stream";
const data: Row[] = [
  { id: "1", name: "Ada" },
  { id: "2", name: "Linus", email: "l@…" }
];
const rows = Readable.from(data, { objectMode: true });
const toCsv = new Transform({
objectMode: true,
transform(chunk: Row, _enc, cb) {
const line = [chunk.id, chunk.name, chunk.email ?? ""].map(v => JSON.stringify(v)).join(",") + "\n";
cb(null, line);
}
});
const out = new Writable({
write(chunk, _enc, cb) {
process.stdout.write(chunk);
cb();
}
});
rows.pipe(toCsv).pipe(out);
| Aspect | Byte mode | Object mode |
| --- | --- | --- |
| Chunk type | Buffer or string | Arbitrary typed values |
| highWaterMark | Bytes per internal buffer | Items per internal buffer |
| Use cases | Files, sockets, compression | Parsed rows, domain events |
Use Readable.from for quick typed sources. It respects backpressure and keeps types with the data.
Worker threads with typed messages
Worker threads run JavaScript in parallel on separate threads. Use them for CPU heavy tasks such as parsing, compression, or cryptography. Structure your messages with a discriminated union so both sides get exhaustive checks and safe narrowing.
Message protocol as a discriminated union
A small schema goes a long way. Include a type field for narrowing and a reqId so you can correlate responses to requests when concurrency increases.
type Req =
| { type: "hash"; reqId: string; input: string; algo: "sha256" | "sha512" }
| { type: "compress"; reqId: string; input: Buffer };
type Res =
| { type: "ok"; reqId: string; result: Buffer | string }
| { type: "error"; reqId: string; message: string };
Main thread creating a worker
The main thread posts typed messages and awaits responses. Keep a pending map keyed by reqId. This pattern supports many in flight jobs while staying type safe.
import { Worker } from "node:worker_threads";
import { randomUUID } from "node:crypto";
const worker = new Worker(new URL("./worker.js", import.meta.url));
const pending = new Map<string, (res: Res) => void>();
worker.on("message", (res: Res) => {
const cb = pending.get(res.reqId);
if (cb) { pending.delete(res.reqId); cb(res); }
});
function call(req: Omit<Req, "reqId">): Promise<Res> {
const reqId = randomUUID();
const full = { ...req, reqId } as Req;
return new Promise(resolve => {
pending.set(reqId, resolve);
worker.postMessage(full);
});
}
Worker thread implementation
The worker listens for messages, narrows by type, and posts a response. Avoid synchronous CPU bound loops on the main thread; move them here.
// worker.ts
import { parentPort } from "node:worker_threads";
import { createHash } from "node:crypto";
import { brotliCompress } from "node:zlib";
parentPort!.on("message", async (req: Req) => {
try {
if (req.type === "hash") {
const h = createHash(req.algo).update(req.input, "utf8").digest("hex");
parentPort!.postMessage({ type: "ok", reqId: req.reqId, result: h } satisfies Res);
} else if (req.type === "compress") {
const out = await new Promise<Buffer>((res, rej) => brotliCompress(req.input, (e, b) => e ? rej(e) : res(b)));
parentPort!.postMessage({ type: "ok", reqId: req.reqId, result: out } satisfies Res);
}
} catch (e) {
parentPort!.postMessage({ type: "error", reqId: req.reqId, message: (e as Error).message } satisfies Res);
}
});
Transfer large binary payloads instead of copying them by listing their buffers: postMessage(value, [value.buffer]). Copying big buffers increases latency and memory pressure.
Queues and background jobs
Background jobs decouple request handling from work execution. A queue accepts items fast and processes them at a stable rate. You can keep it in process for simplicity or delegate to a shared broker when you scale beyond one instance.
A minimal in process queue
This queue uses an async generator to pull jobs and a concurrency limiter to run a few at a time. The type parameter ensures producers and consumers agree on the job shape.
type Job = { id: string; payload: { userId: string; email: string } };
class AsyncQueue<T> {
private buf: T[] = [];
private resolvers: Array<(v: IteratorResult<T>) => void> = [];
enqueue(item: T) {
const r = this.resolvers.shift();
if (r) r({ value: item, done: false });
else this.buf.push(item);
}
async *[Symbol.asyncIterator](): AsyncIterator<T> {
while (true) {
if (this.buf.length) yield this.buf.shift()!;
else yield await new Promise<T>(res => this.resolvers.push(v => res(v.value)));
}
}
}
async function runQueue(q: AsyncQueue<Job>, concurrency = 4) {
const running = new Set<Promise<unknown>>();
for await (const job of q) {
const p = handle(job).finally(() => running.delete(p));
running.add(p);
if (running.size >= concurrency) await Promise.race(running);
}
}
async function handle(job: Job) {
await sendEmail(job.payload); // pretend to send; returns a promise
}
Scheduling and retries with jitter
Retries should back off to avoid thundering herds. Add jitter so many jobs do not wake at once.
function backoff(attempt: number, baseMs = 250, capMs = 10_000) {
const exp = Math.min(capMs, baseMs * 2 ** attempt);
const jitter = Math.floor(Math.random() * baseMs);
return exp + jitter;
}
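The backoff helper slots into a small retry wrapper; this sketch (withRetry is illustrative, not a library API) waits a capped, jittered delay between attempts and rethrows the last error once attempts run out:

```typescript
function backoff(attempt: number, baseMs = 250, capMs = 10_000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  const jitter = Math.floor(Math.random() * baseMs);
  return exp + jitter;
}

const sleep = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

// Retry an async task, sleeping backoff(attempt) between failures.
async function withRetry<T>(
  task: () => Promise<T>,
  maxAttempts = 5,
  baseMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await task();
    } catch (e) {
      lastError = e;
      if (attempt < maxAttempts - 1) await sleep(backoff(attempt, baseMs));
    }
  }
  throw lastError;
}
```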
Persisting jobs safely
In memory queues lose jobs on process exit. When durability matters, persist to a database table or a broker. A simple durable table uses a status field and a lease to keep one worker per job.
-- SQL shape for durability
CREATE TABLE job_queue (
id TEXT PRIMARY KEY,
kind TEXT NOT NULL,
payload JSON NOT NULL,
status TEXT NOT NULL DEFAULT 'queued',
run_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
lease_until TIMESTAMP NULL
);
Index (status, run_at) and claim jobs with a short lease inside a transaction. Renew the lease while working to handle long tasks.
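Building on the job_queue table above, a Postgres flavored claim query can take one due job atomically; SKIP LOCKED lets concurrent workers pass over rows another worker is claiming (the 30 second lease is illustrative):

```sql
-- Claim one due job inside a transaction; other workers skip locked rows.
UPDATE job_queue
SET status = 'running',
    lease_until = CURRENT_TIMESTAMP + INTERVAL '30 seconds'
WHERE id = (
  SELECT id FROM job_queue
  WHERE status = 'queued' AND run_at <= CURRENT_TIMESTAMP
  ORDER BY run_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id, kind, payload;
```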
Backpressure and flow control
Backpressure keeps producers from outrunning consumers. Streams implement this with internal buffers, highWaterMark, and return values from write(). Flow control protects memory, reduces latency variance, and improves throughput by keeping queues short.
Understanding write() return values
When writable.write(chunk) returns false, the buffer is full. Pause the producer and wait for drain before continuing. This pattern respects downstream limits.
import { Writable } from "node:stream";
function writeMany(w: Writable, chunks: Buffer[], start = 0) {
  for (let i = start; i < chunks.length; i++) {
    // A false return means the chunk was buffered and the internal buffer is full.
    if (!w.write(chunks[i])) {
      w.once("drain", () => writeMany(w, chunks, i + 1));
      return;
    }
  }
  w.end();
}
Using stream.pipeline for safety
pipeline wires streams with proper error forwarding and cleanup. It also pairs well with async generators and transforms.
import { pipeline } from "node:stream/promises";
import { createReadStream, createWriteStream } from "node:fs";
import { createGzip } from "node:zlib";
await pipeline(
createReadStream("input.txt"),
createGzip(),
createWriteStream("output.txt.gz")
);
Tuning highWaterMark and object sizes
Right size your buffers. Large chunks reduce overhead but increase latency for the last byte and risk memory pressure. Small chunks improve interactivity but raise syscall and scheduling cost. Measure with realistic inputs before fixing values.
import { createReadStream } from "node:fs";
// bytes in byte mode; items in object mode
const rs = createReadStream("big.bin", { highWaterMark: 1024 * 64 }); // 64 KiB
Building an async transform with backpressure
You can implement transforms as async generators and still participate in flow control. Each yield waits for the downstream consumer to pull the next value.
async function* mapAsync<A, B>(src: AsyncIterable<A>, fn: (a: A) => Promise<B> | B) {
for await (const a of src) {
yield await fn(a);
}
}
const upper = mapAsync(readUtf8("input.txt"), s => s.toUpperCase());
for await (const chunk of upper) {
process.stdout.write(chunk);
}
With these tools you can keep I/O flowing, keep CPU heavy work off the event loop, and keep your types aligned from edges to core. Concurrency and streams become predictable once you respect backpressure, define message protocols, and measure buffer sizes under load.
Chapter 27: Frameworks and Architectures
Architectural choices shape how your TypeScript and Node services evolve. This chapter focuses on patterns and frameworks that encourage clear boundaries, reliable contracts, and repeatable delivery. The goal is to keep the codebase small in surface area while allowing features to grow in a predictable way.
NestJS and modular design
NestJS offers a structured approach that feels familiar to developers who like dependency injection, explicit modules, and annotations. Under the surface it uses standard Node libraries and TypeScript features, which means you can keep control of primitives while gaining helpful conventions. The core idea is to isolate concerns in modules, export providers that act as stable interfaces, and keep controllers as thin orchestration layers.
Modules and providers
Think of a module as a boundary that groups related use cases, entities, and infrastructure details. Providers inside a module represent capabilities that can be swapped or mocked. Keep module public APIs small; only export what other modules need.
// app.module.ts
import { Module } from '@nestjs/common';
import { BillingModule } from './billing/billing.module';
import { OrdersModule } from './orders/orders.module';
@Module({
imports: [BillingModule, OrdersModule]
})
export class AppModule {}
Controllers and validation
Controllers coordinate input, call application services, and then format output. Validation should occur at the edges with schemas or DTOs. If you prefer schema based validation that stays close to types, wire a library such as zod or class validator adapters.
// orders.controller.ts
import { Controller, Post, Body } from '@nestjs/common';
import { CreateOrder } from './use-cases/create-order';
import { z } from 'zod';
const CreateOrderDto = z.object({
customerId: z.string().uuid(),
lines: z.array(z.object({ sku: z.string(), qty: z.number().int().positive() }))
});
@Controller('orders')
export class OrdersController {
constructor(private readonly createOrder: CreateOrder) {}
@Post()
async create(@Body() body: unknown) {
const dto = CreateOrderDto.parse(body);
const result = await this.createOrder.execute(dto);
return { id: result.id, status: 'created' };
}
}
Dependency injection and testing
Abstractions help you swap implementations during tests. Define an interface for outbound ports, then bind either a production adapter or a fake. Keep the binding at the module level so tests can rebind easily.
// billing.port.ts
export interface BillingPort {
charge(cents: number, source: string): Promise<{ authId: string }>;
}
// billing.module.ts
import { Module } from '@nestjs/common';
import { BillingPort } from './billing.port';
import { StripeAdapter } from './stripe.adapter';
@Module({
providers: [{ provide: 'BillingPort', useClass: StripeAdapter }],
exports: ['BillingPort']
})
export class BillingModule {}
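A test double for the port needs no framework at all; this sketch redeclares BillingPort so it stands alone, and the fake simply records charges in place of StripeAdapter:

```typescript
interface BillingPort {
  charge(cents: number, source: string): Promise<{ authId: string }>;
}

// In-memory fake that records every charge, handy for unit tests.
class FakeBilling implements BillingPort {
  calls: Array<{ cents: number; source: string }> = [];
  async charge(cents: number, source: string) {
    this.calls.push({ cents, source });
    return { authId: `auth-${this.calls.length}` };
  }
}

const billing = new FakeBilling();
```

In a Nest test module you would rebind the 'BillingPort' token to this class; elsewhere you can pass it straight to a use case constructor.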
Hexagonal architecture with types
Hexagonal architecture separates the domain from delivery and infrastructure. The domain holds pure types and logic. Adapters translate between the outside world and those types. Explicit TypeScript interfaces for ports make these boundaries visible and enforceable.
Domain core and ports
Keep the domain free of libraries that deal with I/O. Express domain intent with value objects, entities, and functions. Ports describe what the domain needs from the outside, while adapters satisfy those needs.
// domain/order.ts
export type Money = { cents: number; currency: 'USD' | 'EUR' };
export type OrderLine = { sku: string; qty: number; price: Money };
export function total(lines: OrderLine[]): Money {
const cents = lines.reduce((acc, l) => acc + l.qty * l.price.cents, 0);
return { cents, currency: 'USD' };
}
Adapters and mappers
Adapters convert protocols and storage formats into domain types. Mappers ensure only valid values cross the boundary. If an upstream payload contains optional or unknown fields, prefer explicit mapping to avoid leaking shapes into the core.
// adapters/http-order-mapper.ts
import { z } from 'zod';
import { OrderLine } from '../domain/order';
const HttpLine = z.object({
sku: z.string(),
qty: z.number().int().positive(),
priceCents: z.number().int().nonnegative()
});
export function fromHttp(input: unknown): OrderLine[] {
const arr = z.array(HttpLine).parse(input);
return arr.map(l => ({ sku: l.sku, qty: l.qty, price: { cents: l.priceCents, currency: 'USD' } }));
}
Use cases as application services
Use cases coordinate domain operations and ports. Each use case should expose a single method such as execute that accepts a typed request and returns a typed result. This pattern supports synchronous calls and also fits queues or events.
// app/create-order.ts
import { BillingPort } from '../ports/billing.port';
import { total, OrderLine } from '../domain/order';
export type CreateOrderRequest = { customerId: string; lines: OrderLine[] };
export type CreateOrderResult = { id: string; authId: string };
export class CreateOrder {
constructor(private billing: BillingPort) {}
async execute(req: CreateOrderRequest): Promise<CreateOrderResult> {
const amount = total(req.lines).cents;
const { authId } = await this.billing.charge(amount, req.customerId);
return { id: crypto.randomUUID(), authId };
}
}
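Because the use case depends only on an interface, it can be exercised end to end with a fake port and no framework at all. A minimal sketch — the port, domain helper, and use case are restated compactly here so the example runs standalone, and the fake simply records calls:

```typescript
import { randomUUID } from 'node:crypto';

interface BillingPort {
  charge(cents: number, source: string): Promise<{ authId: string }>;
}

type Money = { cents: number; currency: 'USD' | 'EUR' };
type OrderLine = { sku: string; qty: number; price: Money };

function total(lines: OrderLine[]): Money {
  return { cents: lines.reduce((acc, l) => acc + l.qty * l.price.cents, 0), currency: 'USD' };
}

class CreateOrder {
  constructor(private billing: BillingPort) {}
  async execute(req: { customerId: string; lines: OrderLine[] }) {
    const amount = total(req.lines).cents;
    const { authId } = await this.billing.charge(amount, req.customerId);
    return { id: randomUUID(), authId };
  }
}

// a fake port that records calls instead of talking to a payment provider
class FakeBilling implements BillingPort {
  calls: Array<[number, string]> = [];
  async charge(cents: number, source: string) {
    this.calls.push([cents, source]);
    return { authId: 'fake_' + source };
  }
}

const fake = new FakeBilling();
const useCase = new CreateOrder(fake);
const result = await useCase.execute({
  customerId: 'c1',
  lines: [{ sku: 'A', qty: 2, price: { cents: 500, currency: 'USD' } }]
});
// the fake saw the computed total: 2 × 500 = 1000 cents
```

Nothing here touches the network, so the whole flow runs in a unit test in microseconds.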
Microservices, RPC, and events
Service boundaries are communication boundaries. Choose between request response protocols such as RPC and asynchronous messaging such as events. Both benefit from shared types and generated clients that prevent drift. Keep the internal domain decoupled even when transport libraries change.
Comparing protocols
Pick a protocol that matches latency, payload shape, and evolution needs. Human readable payloads help during early development. Binary protocols can reduce overhead once schemas stabilize.
| Style | Pros | Trade offs |
| --- | --- | --- |
| HTTP JSON | Easy to inspect; broad tooling | Payload size; schema drift if types are not enforced |
| tRPC | End to end typing; client generation at compile time | Tight coupling to TypeScript runtime assumptions |
| gRPC or Connect | Strong schemas; streaming; fast | Extra tooling; binary frames are harder to debug |
| Events | Loose coupling; natural for fan out | Event ordering and idempotency require careful design |
Type safe RPC
Define a service contract once, then generate clients and servers. With TypeScript first approaches, you can author the contract in code and derive routers. With schema first approaches, you can compile from .proto or similar into TypeScript definitions.
// trpc/router.ts
import { initTRPC } from '@trpc/server';
import { z } from 'zod';
const t = initTRPC.create();
export const appRouter = t.router({
charge: t.procedure
.input(z.object({ cents: z.number().int().positive(), source: z.string() }))
.output(z.object({ authId: z.string() }))
.mutation(async ({ input }) => {
return { authId: 'auth_' + input.source };
})
});
export type AppRouter = typeof appRouter;
Event driven contracts
Events represent facts that already happened. Encode a compact, versioned envelope with a type, version, and an immutable data payload. Consumers should treat events as append only signals and handle duplicates safely.
// events.ts
export type EventEnvelope<TType extends string, T> = {
id: string;
type: TType;
version: number;
occurredAt: string;
data: T;
};
export type OrderCreated = EventEnvelope<'order.created', {
orderId: string;
customerId: string;
totalCents: number;
lines: Array<{ sku: string; qty: number }>;
}>;
// publisher.ts
import { OrderCreated } from './events';
// 'send' stands in for a broker producer (Kafka, NATS, etc.) supplied elsewhere
declare function send(topic: string, payload: string): Promise<void>;
export async function publishOrderCreated(e: OrderCreated): Promise<void> {
  // headers may include tracing ids
  await send('order.created', JSON.stringify(e));
}
Include a version field in the envelope; bump it only when a breaking change occurs, and keep consumers tolerant of additional fields.
Idempotency and retries
Distributed systems fail in partial ways. Design handlers to be safe under repeated delivery. Use an idempotency key per business action; store results keyed by this value to ignore duplicates.
// idempotency.ts
// a minimal in-memory store for illustration; production code would use Redis or a database
const store = new Map<string, unknown>();
export async function withIdempotency<T>(key: string, work: () => Promise<T>): Promise<T> {
  const existing = await store.get(key);
  if (existing !== undefined) return existing as T;
  const result = await work();
  await store.set(key, result);
  return result;
}
Shared types across services
Strongly typed boundaries depend on shared contracts that evolve deliberately. A monorepo can host shared packages that publish type definitions and small runtime helpers. Even in a polyrepo, you can publish versioned packages with semantic versioning that match API changes.
Monorepo workspaces
Workspaces let you develop shared packages and services together while keeping deploy units independent. Use project references so TypeScript understands build order and emits once per package.
// package.json
{
"private": true,
"workspaces": ["packages/*", "services/*"]
}
// packages/contracts/tsconfig.json
{
"compilerOptions": { "composite": true, "declaration": true, "outDir": "dist" },
"include": ["src"]
}
API extraction and stability
Automated API reports help you detect accidental breaking changes. Generate declarations from contracts, then compare against the previous report in CI. Treat a changed public type as a potential breaking change and bump the major version if required.
// contracts/src/index.ts
export type ChargeRequest = { cents: number; source: string };
export type ChargeResult = { authId: string };
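The comparison step itself reduces to a diff over exported names: anything that disappeared from the report is a potential breaking change, anything new is additive. Real projects would lean on a tool such as API Extractor; this hand-rolled sketch only illustrates the idea.

```typescript
// compare two API reports (lists of exported names) from successive builds
export function diffApi(previous: string[], current: string[]) {
  const cur = new Set(current);
  const prev = new Set(previous);
  return {
    removed: previous.filter(name => !cur.has(name)), // breaking: bump major
    added: current.filter(name => !prev.has(name))    // additive: bump minor
  };
}

const report = diffApi(['ChargeRequest', 'ChargeResult'], ['ChargeRequest', 'RefundRequest']);
// report.removed → ['ChargeResult'], report.added → ['RefundRequest']
```

A CI step can fail the build whenever `removed` is non-empty and the proposed version is not a major bump.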
Schema evolution
Forward compatible readers should ignore unknown fields. Backward compatible writers should prefer adding optional fields over renaming or removing. For events, consider an upcaster that upgrades older versions into the newest shape at read time.
// upcast.ts
type V1 = { orderId: string; totalCents: number };
type V2 = { orderId: string; totalCents: number; currency?: 'USD' | 'EUR' };
export function upcastToV2(input: V1 | V2): V2 {
return 'currency' in input ? input : { ...input, currency: 'USD' };
}
Runtime validation for shared packages
Compile time types are not enough at process boundaries. Export parsers along with TypeScript types so both sides share the same single source of truth. The caller validates before sending, and the callee validates upon receipt.
// contracts/src/billing.ts
import { z } from 'zod';
export const ChargeRequestSchema = z.object({
cents: z.number().int().positive(),
source: z.string()
});
export type ChargeRequest = z.infer<typeof ChargeRequestSchema>;
Versioning and compatibility policy
Adopt a clear policy that maps API and event changes to semantic versions. Publish a short document in the repository that describes what counts as a breaking change. Teams move faster when they can trust that a patch means safety and a minor version means additive features.
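The mechanics of such a policy can even be encoded so scripts and reviewers agree on what each change kind implies. A small sketch — the three change kinds here are illustrative, not a standard taxonomy:

```typescript
type ChangeKind = 'breaking' | 'feature' | 'fix';

// compute the next semantic version for a given kind of change
export function nextVersion(version: string, kind: ChangeKind): string {
  const [major, minor, patch] = version.split('.').map(Number);
  if (kind === 'breaking') return `${major + 1}.0.0`;
  if (kind === 'feature') return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}
// nextVersion('1.4.2', 'breaking') → '2.0.0'
// nextVersion('1.4.2', 'feature')  → '1.5.0'
// nextVersion('1.4.2', 'fix')      → '1.4.3'
```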
Chapter 28: Testing in TypeScript for Node
Testing turns a codebase from a fragile puzzle into a predictable machine. With TypeScript you also gain a layer of static assurance that pairs well with runtime checks. This chapter focuses on practical tools and patterns for fast, reliable feedback as you build services, libraries, and command line tools.
Jest, Vitest, Node test runner
Most TypeScript projects settle on one of three runners. Jest offers a full ecosystem with mocks and global APIs. Vitest stays close to Jest but runs inside Vite, which makes it fast and lightweight. The built in Node test runner provides minimalism for projects that avoid heavy frameworks. All three can run TypeScript through ts-node, ahead of time compilation, or an integrated transformer.
Jest with TypeScript
Jest gives you an expressive test vocabulary and snapshot capabilities. Pair it with ts-jest or a transform so you can run TypeScript without building first. Keep configuration small; most setups work with a few lines.
// jest.config.ts
import type { Config } from 'jest';
const config: Config = {
preset: 'ts-jest',
testEnvironment: 'node'
};
export default config;
// sum.test.ts
import { sum } from './sum';
test('adds two numbers', () => {
expect(sum(2, 3)).toBe(5);
});
Vitest for speed
Vitest shares Jest like syntax but keeps boot time low. It integrates naturally with modern ESM projects and offers smart caching. Because it uses Vite under the hood, imports work as they do in production.
// example.test.ts
import { expect, test } from 'vitest';
import { greet } from './greet';
test('greets with a name', () => {
expect(greet('Ada')).toBe('Hello Ada');
});
Node test runner
The Node test runner lives under node:test. It offers structure without extra layers. This suits small libraries or projects that already rely on native Node facilities. Assertions come from node:assert by default.
// math.test.ts
import test from 'node:test';
import assert from 'node:assert/strict';
import { div } from './math';
test('safe division', () => {
assert.equal(div(6, 3), 2);
});
Type aware mocks and spies
Mocks give you controlled stand ins for dependencies. With TypeScript you can use interfaces to express expectations and let your mocks track calls without drifting from the real contract. A well typed mock prevents accidental mismatches and encourages small, clear interfaces.
Mocking functions safely
A simple mock records calls and returns predictable values. Capture the generic signature so your helper mirrors the real function type.
function createMockFn<T extends (...args: any[]) => any>() {
  const calls: Array<Parameters<T>> = [];
  let impl: ((...a: Parameters<T>) => ReturnType<T>) | null = null;
  function fn(...args: Parameters<T>): ReturnType<T> {
    calls.push(args);
    if (impl) return impl(...args);
    throw new Error('No implementation set');
  }
  fn.setImpl = (f: typeof impl) => (impl = f);
  fn.calls = calls;
  // the cast exposes setImpl as a setter function, not as the implementation type itself
  return fn as T & { setImpl: (f: typeof impl) => void; calls: Array<Parameters<T>> };
}
}
Spying on methods
Spies wrap existing functions rather than replacing them. This is useful when you want to observe behaviour without modifying logic. Vitest and Jest both offer spy helpers, while hand written spies work well in the Node runner.
// spy example with Vitest
import { vi, test, expect } from 'vitest';
import * as fs from 'node:fs';
test('reads a file', () => {
const spy = vi.spyOn(fs, 'readFileSync');
fs.readFileSync('some.txt', 'utf8');
expect(spy).toHaveBeenCalled();
spy.mockRestore();
});
Mocking ports and adapters
Abstract interfaces make it easy to replace infrastructure during tests. Configure your container or module to inject a fake that records calls. Typed interfaces ensure the fake respects the correct signature.
// billing.port.ts
export interface BillingPort {
charge(cents: number, source: string): Promise<{ authId: string }>;
}
// fake billing for tests
export class FakeBilling implements BillingPort {
public calls: Array<[number, string]> = [];
async charge(cents: number, source: string) {
this.calls.push([cents, source]);
return { authId: 'fake_' + source };
}
}
Property based and contract testing
Property based tests explore input ranges automatically. Instead of checking a single example, you describe an invariant and let the framework generate many cases. Contract tests ensure that interfaces shared between services behave as expected, independent of implementation details.
Property based testing with fast check
Fast check generates random values and shrinks failures. This reveals edge cases that example based tests might miss. Focus on algebraic laws, inverse functions, or behaviours that must hold for all valid inputs.
// reverse(reverse(s)) is s
import { test } from 'vitest';
import fc from 'fast-check';
import { reverse } from './reverse';
test('double reverse returns original', () => {
fc.assert(fc.property(fc.string(), s => reverse(reverse(s)) === s));
});
Contract tests for API clients
Contract tests confirm that a client obeys the API specification and that the server fulfils it. Use schemas or routers as the source of truth. Test the client against mocked servers that enforce the contract.
// using a schema
import { appRouter } from '../router';
import { z } from 'zod';
const ChargeRequest = z.object({
cents: z.number().int().positive(),
source: z.string()
});
// thin contract test
function validateRequest(input: unknown) {
return ChargeRequest.parse(input);
}
Event contract and versioning tests
Event contracts must stay stable over time. Write tests that ensure older versions can still be read and that new consumers tolerate extra fields. Keep fixture events in version numbered files to prevent accidental shape drift.
// upcast test
import { upcastToV2 } from './upcast';
const legacy = { orderId: '1', totalCents: 500 };
const upgraded = upcastToV2(legacy);
if (upgraded.currency !== 'USD') throw new Error('Upcast failed');
CI pipelines and fast feedback
A healthy test suite runs quickly and fails loudly. The faster you catch mistakes, the more confidently you can change code. CI pipelines should be clear, repeatable, and tuned for minimal latency.
Parallel and focused runs
CI runners often provide several cores. Configure your runner to split work across workers so long test suites finish sooner. Also use path filters to skip jobs when irrelevant files change.
// GitHub Actions sketch
name: ci
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with: { node-version: 20 }
- run: npm ci
- run: npm test --workspaces
Caching dependencies and builds
Cache node_modules or the output of your package manager to reduce install time. Cache tsc output when using project references. The faster CI can rebuild only what changed, the more often you can push without delay.
Static checks in CI
Static checks catch issues before runtime tests run. Include tsc --noEmit and linters early in the pipeline. Failing early saves cycles and keeps logs shorter.
// package.json scripts
{
"scripts": {
"check": "tsc --noEmit && eslint .",
"test": "vitest run"
}
}
Coverage and thresholds
Coverage does not measure correctness, but it highlights blind spots. Set thresholds that reflect the criticality of each module. Track coverage trends over time so regressions do not slip past reviews.
// vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
  test: {
    coverage: { provider: 'v8', thresholds: { lines: 85 } }
  }
});
With a balanced mix of runners, mocks, property tests, and tight CI loops, your TypeScript and Node projects can grow safely and stay enjoyable to maintain.
Chapter 29: Delivery and Maintenance
Building software is one thing; getting it into the world and keeping it healthy is another. Delivery shapes how users experience your tools, while maintenance determines how peacefully your team sleeps at night. With TypeScript you can describe deployment inputs, model release behaviour, and guide safe upgrades with helpful types and scripts.
Distribution formats and build artifacts
Node applications travel in several shapes. Each shape balances portability, size, and runtime traits. TypeScript’s build step produces JavaScript, declaration files, and source maps. From there you can bundle your code, pack it into archives, or publish to registries. The right choice depends on how the software will be used and how fast it must start.
Plain Node packages
A plain package includes compiled JavaScript, .d.ts files, and metadata. This works well for libraries and services that run in environments with Node. Keep your output directory clean and avoid shipping tests or raw TypeScript unless users expect them.
// tsconfig.build.json
{
"compilerOptions": {
"outDir": "dist",
"declaration": true,
"sourceMap": true,
"module": "esnext"
},
"include": ["src"]
}
Bundled artifacts
Bundling with tools such as esbuild or Rollup produces a single file that starts quickly. This suits CLI utilities, lightweight services, and tasks that run in short lived environments. Minification trims size, although you must keep source maps when debugging matters.
// esbuild.mjs
import { build } from 'esbuild';
await build({
entryPoints: ['src/cli.ts'],
outfile: 'dist/cli.js',
bundle: true,
platform: 'node',
sourcemap: true
});
Containers and serverless packages
Containers wrap your program with its runtime. Keep images small and deterministic. Serverless targets often require compact bundles and explicit handler exports. In both cases, ensure dependencies are pinned and builds are reproducible.
# Dockerfile
FROM node:20-slim
WORKDIR /app
COPY dist/ .
CMD ["node", "server.js"]
Runtime configuration typing
Every deployed system needs configuration. Typed configuration reduces surprises and helps catch mistakes before they leak into production. Represent the shape of environment variables and external settings with schemas, then validate at startup.
Typed configuration objects
Keep configuration near the entry point of your service. Use a schema so you can read from process.env, parse values, and supply defaults. Invalid configuration should stop the program early.
// config.ts
import { z } from 'zod';
const ConfigSchema = z.object({
port: z.number().int().positive(),
dbUrl: z.string().url(),
featureFlag: z.boolean().default(false)
});
export type Config = z.infer<typeof ConfigSchema>;
export function load(): Config {
const raw = {
port: Number(process.env.PORT),
dbUrl: process.env.DB_URL,
featureFlag: process.env.FEATURE_FLAG === 'true'
};
return ConfigSchema.parse(raw);
}
Separating secrets safely
Configuration and secrets belong together logically but should be retrieved from different sources. Load secrets from encrypted stores or environment variables. Keep them typed so misuse is easy to detect during reviews.
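One way to keep the split visible is to type the two sources separately and load each from its own place. A hedged sketch, assuming the secret source is handed in as a plain record (in production it would come from a vault or encrypted store rather than the environment):

```typescript
// plain settings and secrets get distinct types, so misuse is visible in review
export interface AppConfig { port: number; dbHost: string; }
export interface Secrets { dbPassword: string; apiKey: string; }

function required(source: Record<string, string | undefined>, name: string): string {
  const value = source[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required setting: ${name}`);
  }
  return value;
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  return { port: Number(required(env, 'PORT')), dbHost: required(env, 'DB_HOST') };
}

// in production this record would come from a secret manager, not process.env
export function loadSecrets(source: Record<string, string | undefined>): Secrets {
  return { dbPassword: required(source, 'DB_PASSWORD'), apiKey: required(source, 'API_KEY') };
}

const config = loadConfig({ PORT: '8080', DB_HOST: 'db.internal' });
const secrets = loadSecrets({ DB_PASSWORD: 'hunter2', API_KEY: 'k-123' });
```

Because both loaders fail fast on missing values, a misconfigured deployment dies at startup rather than mid-request.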
Runtime feature flags
Feature flags let you ship code early and enable features gradually. Model flags with a small interface. Keep flag checks simple so behaviour stays readable.
// flags.ts
export interface FeatureFlags {
newCheckout: boolean;
useFastPath: boolean;
}
export function enabled(f: FeatureFlags, name: keyof FeatureFlags) {
return f[name];
}
Automated releases and changelogs
Releases document how your project evolves. Automated pipelines reduce manual work and keep tags, versions, and notes in sync. Changelog generation encourages good commit messages and shows users what changed in each version.
Semantic versioning with automation
Semantic release tools analyse commit messages, decide the next version, update the changelog, publish packages, and tag the repository. This keeps releases predictable and keeps human effort on content rather than mechanics.
// .releaserc
{
"branches": ["main"],
"plugins": [
"@semantic-release/commit-analyzer",
"@semantic-release/release-notes-generator",
"@semantic-release/npm",
"@semantic-release/github"
]
}
Keeping changelogs readable
A clear changelog groups changes into sections such as added, fixed, changed, and removed. Automated tools can generate this, but you can refine the language when publishing a major release. Readers appreciate clarity over volume.
// excerpt from CHANGELOG.md
## 1.4.0
- Added new schema validator for configs
- Fixed race condition in job scheduler
- Improved startup time by bundling CLI
Release candidates and verification
When large changes arrive, publish release candidates with a suffix such as -rc.1. This pattern encourages early testing without surprising production systems.
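A release pipeline can branch on whether a version is a candidate. A tiny sketch, assuming the -rc.N suffix convention from the text (other prerelease tags would need their own patterns):

```typescript
// detect release candidate versions such as 2.0.0-rc.1
export function isReleaseCandidate(version: string): boolean {
  return /-rc\.\d+$/.test(version);
}

// start a candidate series, or bump an existing candidate number
export function nextCandidate(version: string): string {
  const match = version.match(/^(.*-rc\.)(\d+)$/);
  if (match) return match[1] + (Number(match[2]) + 1);
  return `${version}-rc.1`;
}
// isReleaseCandidate('2.0.0-rc.1') → true
// nextCandidate('2.0.0')           → '2.0.0-rc.1'
// nextCandidate('2.0.0-rc.1')      → '2.0.0-rc.2'
```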
Deprecations and upgrading safely
All software grows and sheds old behaviour. Deprecation policies guide your users toward safer patterns. This requires clear communication, migration paths, and tooling that detects old calls at compile time when possible.
Marking deprecated APIs
TypeScript supports the @deprecated tag in JSDoc. Editors flag these APIs with warnings. Deprecate when you know a better alternative exists and when the old API will be removed in a predictable future release.
// example.ts
/**
* @deprecated Use calculateTotal instead
*/
export function legacyTotal(x: number, y: number) {
return x + y;
}
Migration helpers
Provide codemods or small scripts that rewrite old imports and calls. This reduces friction when upgrading and helps teams avoid manual hunting for outdated patterns.
// codemod sketch
// find legacyTotal and replace with calculateTotal
export function rewrite(source: string) {
return source.replace(/legacyTotal/g, 'calculateTotal');
}
Safe removal windows
When removing a deprecated feature, define a window where both behaviours coexist. Communicate the timeline in changelogs and documentation. Do not surprise downstream consumers with silent removals.
Tracking upgrade pain points
Major version upgrades reveal rough edges. Keep a list of common breakages, note where types changed unexpectedly, and publish a migration guide. Continuous improvements make future upgrades easier.
With strong distribution habits, typed configuration, automated releases, and thoughtful deprecations, your TypeScript and Node projects can evolve smoothly and stay dependable long after the first deployment.
Chapter 30: Putting It All Together
This final chapter braids together the ideas that shaped the book. TypeScript gives structure, Node provides runtime power, and sound architectural habits keep projects calm as they grow. What follows is a practical blueprint for building something real, observing it in motion, surfacing threats early, and guiding future work with purpose.
End to end project blueprint
An end to end project has several moving parts. The goal is not complexity but clarity. Each layer supports the next without leaking its details upward. Keep modules simple, types explicit, and behaviours testable. A good blueprint makes features easier to add and issues easier to diagnose.
Project structure
Start with a folder layout that separates domain logic, adapters, configuration, and infrastructure. Arrange packages or directories so boundaries are visible at a glance. Use project references if you have multiple build targets.
.
├─ packages/
│ ├─ contracts/
│ └─ domain/
├─ services/
│ ├─ api/
│ ├─ worker/
│ └─ jobs/
└─ infra/
└─ deploy/
Flow of data
Trace how a request moves through the system. A client calls an endpoint; a controller validates input; the application layer coordinates domain logic; ports handle side effects; events or responses go back out. Understanding this flow keeps complexity in check.
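That flow can be sketched as a chain of typed stages, where each stage only sees the shape the previous one produced. The stage names and the flat 500 cent price are illustrative:

```typescript
type Validated = { customerId: string; qty: number };
type DomainResult = { orderId: string; totalCents: number };
type HttpResponse = { status: number; body: DomainResult };

// controller boundary: reject malformed input before it reaches the domain
function validate(body: unknown): Validated {
  const b = body as Record<string, unknown>;
  if (typeof b?.customerId !== 'string' || typeof b?.qty !== 'number') {
    throw new Error('Invalid request body');
  }
  return { customerId: b.customerId, qty: b.qty };
}

// application layer: coordinate pure domain logic (price per unit assumed at 500 cents)
function execute(req: Validated): DomainResult {
  return { orderId: 'ord_' + req.customerId, totalCents: req.qty * 500 };
}

// delivery boundary: translate the domain result back into a transport shape
function respond(result: DomainResult): HttpResponse {
  return { status: 201, body: result };
}

const response = respond(execute(validate({ customerId: 'c1', qty: 3 })));
// response.status → 201, response.body.totalCents → 1500
```

Each arrow in the chain is a plain function, which keeps the flow easy to test stage by stage.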
Cross cutting concerns
Logging, metrics, tracing, and validation should wrap the system at its boundaries. Use middlewares, decorators, or helper functions rather than scattering these concerns inside the domain.
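A higher-order wrapper is one way to keep such concerns at the boundary. This sketch logs around any async handler without touching its body; the log shape is illustrative:

```typescript
type Handler<TIn, TOut> = (input: TIn) => Promise<TOut>;

// wrap a handler so logging stays outside the domain logic
export function withLogging<TIn, TOut>(name: string, handler: Handler<TIn, TOut>): Handler<TIn, TOut> {
  return async (input: TIn) => {
    console.log(JSON.stringify({ level: 'info', message: `${name} started` }));
    try {
      const result = await handler(input);
      console.log(JSON.stringify({ level: 'info', message: `${name} succeeded` }));
      return result;
    } catch (err) {
      console.log(JSON.stringify({ level: 'error', message: `${name} failed` }));
      throw err;
    }
  };
}

const double = withLogging('double', async (n: number) => n * 2);
const doubled = await double(21); // 42
```

The same shape works for metrics and tracing: stack wrappers at the edge and leave the handler pure.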
Observability and SLOs from day one
Observability is not a luxury; it is an early companion. Systems that ship with metrics, logs, and traces reveal their behaviour from the first production minute. This makes debugging feel like reading a map instead of guessing in the dark.
Structured logs
Emit logs as structured objects, not free form strings. Use levels such as info, warn, and error. Include request identifiers and contextual fields so you can follow the story of a request through the system.
// logger.ts
export function log(level: 'info' | 'warn' | 'error', message: string, fields: Record<string, unknown> = {}) {
process.stdout.write(JSON.stringify({ level, message, ...fields, ts: Date.now() }) + "\n");
}
Metrics that matter
Track metrics for throughput, latency, error counts, and resource usage. Choose dimensions that reflect user experience rather than internal details. A simple histogram for request durations and a counter for failures cover most early needs.
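A counter and a duration histogram need very little machinery to start with. A minimal in-process sketch; real systems would export these to Prometheus or a similar backend rather than keep them in memory:

```typescript
// in-process metrics: a failure counter and a latency histogram
class Counter {
  private value = 0;
  inc(by = 1) { this.value += by; }
  get() { return this.value; }
}

class Histogram {
  private samples: number[] = [];
  observe(ms: number) { this.samples.push(ms); }
  // nearest-rank percentile over the recorded samples
  percentile(p: number): number {
    const sorted = [...this.samples].sort((a, b) => a - b);
    const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
    return sorted[Math.max(0, idx)];
  }
}

const failures = new Counter();
const latency = new Histogram();
[12, 48, 30, 95, 22].forEach(ms => latency.observe(ms));
failures.inc();
// latency.percentile(50) picks the median sample (30); failures.get() → 1
```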
Tracing spans
Distributed tracing connects events across services. Wrap important operations in spans. Include attributes such as user ids or order ids so traces show the context behind each hop.
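The core of a span can be sketched without any tracing library: record a name, attributes, and duration, then hand the record to an exporter. OpenTelemetry provides the production version of this idea; the array here is a stand-in for a real exporter:

```typescript
type Span = { name: string; attributes: Record<string, string>; durationMs: number };
const finishedSpans: Span[] = []; // stand-in for a real exporter

// wrap an operation in a span, recording duration and context attributes
async function withSpan<T>(
  name: string,
  attributes: Record<string, string>,
  op: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await op();
  } finally {
    finishedSpans.push({ name, attributes, durationMs: Date.now() - start });
  }
}

const order = await withSpan('create-order', { orderId: 'ord_1' }, async () => ({ id: 'ord_1' }));
// finishedSpans now holds one span named 'create-order' carrying the orderId attribute
```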
Setting SLOs
Service level objectives define the quality bar. Pick targets such as percentiles for latency or uptime windows. SLOs guide investment: if the error budget runs out, pause feature work and improve reliability.
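Error budgets fall straight out of the SLO arithmetic: a 99.9 percent availability target over 30 days leaves roughly 43 minutes of allowed downtime. A one-function sketch:

```typescript
// minutes of downtime permitted by an availability SLO over a window
export function errorBudgetMinutes(sloPercent: number, windowDays: number): number {
  const totalMinutes = windowDays * 24 * 60;
  return totalMinutes * (1 - sloPercent / 100);
}
// errorBudgetMinutes(99.9, 30) ≈ 43.2 minutes
// errorBudgetMinutes(99, 30)   ≈ 432 minutes
```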
Security reviews and threat modeling
Security grows best when baked in early. A small threat model before each major change helps protect data and users. Treat it as a conversation about possibilities and safeguards rather than a checklist.
Identifying assets and actors
List what you need to protect. This includes user data, secrets, and infrastructure credentials. Then list who interacts with the system: users, internal services, automated scripts, and potential attackers.
Entry points and trust boundaries
Every HTTP endpoint, queue consumer, or cron job is an entry point. Map these and note where trust changes. Enforce strict validation anytime data crosses a trust boundary.
Common risks
Consider risks such as injection, insecure defaults, broken access controls, or forgotten debug settings. Review logs for suspicious patterns. Keep dependencies updated and lockfile hashes stable.
Reviewing fixes and regressions
Every fix should include a regression test so the issue does not reappear. Track security issues in a separate section of the changelog if the project is public.
Roadmap and next steps
A roadmap creates momentum. It gives contributors direction and helps users anticipate future changes. Plan short horizons for near term improvements and longer arcs for architectural shifts.
Technical debt and refactoring
Track debt explicitly. When a part of the codebase starts resisting changes, refactor with tests at your side. Small refactors done often keep entropy low.
Feature planning and experiments
Plan features in stages. Start with a narrow slice, run it through the system, and collect feedback. If the idea proves useful, extend it. If not, retire it gracefully.
Documentation as a living guide
Documentation must grow with the project. Keep architecture notes, diagrams, and reasoning near the code. Treat docs as first class citizens so knowledge flows to new team members.
Community and stewardship
Healthy projects welcome contributions with clear guidelines, helpful issue templates, and steady communication. Stewardship means guiding the project without over controlling it, letting good ideas rise from users and contributors.
With these final pieces, you have a complete view of how TypeScript and Node can support projects from first file to large scale systems. The craft lives in the small decisions: shaping types, keeping boundaries clean, observing real behaviour, and planning thoughtfully. A steady rhythm of delivery and care lets your software grow with confidence and clarity.
© 2025 Robin Nixon. All rights reserved
No content may be re-used, sold, given away, or used for training AI without express permission