Chapter 1: Introduction to JavaScript
JavaScript is one of the most influential programming languages ever created. It began in 1995 when Netscape engineer Brendan Eich was tasked with adding a lightweight scripting language to the web browser. The result was a simple language designed to make web pages more dynamic and interactive. Though it started small, JavaScript quickly became an essential part of the web’s foundation, alongside HTML and CSS.
In its early years, JavaScript was mainly used to add basic effects such as image rollovers, simple validation, and pop-ups. Over time, as browsers improved and standardisation arrived through the ECMAScript specification, the language evolved into a robust, full-featured tool. Each new version brought refinements and features that made JavaScript faster, more reliable, and easier to use for complex applications.
Today, JavaScript is everywhere. It runs in every major web browser, on almost every device, and even on servers. Thanks to Node.js, developers can use JavaScript not just for front-end interfaces but also for backend logic, APIs, and command-line tools. This single-language approach allows entire systems to be built using the same core language from end to end.
One reason for JavaScript’s continuing popularity is the rich ecosystem that has grown around it. Frameworks such as React, Vue, and Angular simplify the creation of complex interfaces, while libraries like Express make building server applications straightforward. Package managers such as npm provide access to millions of reusable components, encouraging fast and efficient development.
Another strength is JavaScript’s constant evolution. The ECMAScript standard is updated regularly, introducing features such as arrow functions, classes, modules, async/await, and optional chaining. These improvements keep the language modern and expressive while remaining compatible with older code. Few languages have managed to grow so rapidly without losing their core simplicity.
Because JavaScript runs in browsers by default, it has become the most accessible language for new programmers. You can start experimenting immediately in your browser’s developer console, or install Node.js to run scripts locally. Whether you are creating a small interactive feature or building a large-scale web application, JavaScript adapts easily to your needs.
In this chapter, we will look at what JavaScript is today, how it runs in different environments, and how to execute simple code examples yourself. By understanding its origins and the forces that shaped it, you will be better prepared to explore its syntax, structure, and unique features in the chapters that follow.
Who this book is for
This book is for anyone who wants to learn JavaScript in a clear and straightforward way. It avoids unnecessary jargon and long-winded explanations, focusing instead on what the language does and how to use it effectively. Each topic is presented simply, with examples that can be understood without any prior experience of web development.
If you already know another programming language, you will find many familiar concepts here. Variables, loops, functions, and data structures all exist in JavaScript, though often with their own syntax and behaviour. The book highlights these differences so you can quickly adapt your existing knowledge to JavaScript’s style.
For those new to programming, JavaScript is a friendly place to start. You can try code instantly in your browser, see the results immediately, and make changes without installing complex tools. The examples are short, the explanations are direct, and everything builds step by step from first principles to more advanced features.
JavaScript also suits anyone who wants to understand how the modern web works. It is the language that drives interactivity in browsers and increasingly powers the backend through Node.js. Whether your goal is to write front-end scripts, automate simple tasks, or build complete web applications, this book provides the foundation you need.
By the end of the book you will be able to read, write, and reason about JavaScript with confidence. You will understand its syntax, its key features, and its modern capabilities, and be ready to explore frameworks, libraries, or more advanced tools if you wish.
What JavaScript is today
Modern JavaScript is the result of almost three decades of steady growth and refinement. What began as a small browser scripting tool has become a mature, general-purpose programming language. Its official name is ECMAScript, a standard maintained by Ecma International, which ensures that JavaScript behaves consistently across browsers and environments.
The ECMAScript specification defines the syntax, data types, operators, and core features that make up the language. The name “JavaScript” refers to the common implementation of this standard in browsers and other platforms. Each new version of ECMAScript adds features that make the language cleaner, safer, and more powerful, while preserving compatibility with older code so that decades of existing scripts still run correctly.
Significant milestones include ES5 (released in 2009), which stabilised the language and introduced strict mode, and ES6 (formally called ECMAScript 2015), which brought major improvements such as classes, arrow functions, modules, promises, and template literals. Since then, new versions have appeared yearly, introducing smaller but important updates like optional chaining, nullish coalescing, and top-level await.
This continuous evolution keeps JavaScript modern without forcing constant rewrites. Developers can gradually adopt new features as needed, confident that older syntax remains supported. The result is a language that feels both familiar and forward-looking—a rare combination in software development.
Although the ECMAScript specification defines the language itself, real-world JavaScript also includes APIs provided by the environments where it runs. In browsers, these include the Document Object Model (DOM) for interacting with web pages, and in Node.js, modules for file handling, networking, and system access. Together they extend JavaScript far beyond its original role as a simple scripting tool.
Where JavaScript runs
JavaScript originally ran only inside web browsers, where it allowed developers to add interactivity to otherwise static pages. Each browser provides its own JavaScript engine that reads and executes code. Google Chrome uses the V8 engine, Firefox uses SpiderMonkey, Safari runs JavaScriptCore, and Microsoft Edge, now built on Chromium, also uses V8 (its legacy version used Chakra). These engines compile JavaScript into machine code for speed, making the language much faster than it was in its early years.
When JavaScript runs in a browser, it has access to the Document Object Model (DOM)—a live representation of the web page. Through this model, scripts can modify elements, respond to user input, and communicate with web services. This is how modern websites achieve animations, form validation, interactive menus, and dynamic content without reloading the page.
Beyond browsers, JavaScript also powers servers, tools, and applications thanks to Node.js. Node uses the same V8 engine as Chrome but runs outside the browser. It provides modules for working with files, the network, and the operating system. This allows developers to build web servers, command-line utilities, or complete back-end systems using the same language they use in the browser.
Each environment also adds its own globals on top of the core language: browsers expose document and window, while Node provides fs and http.
The emergence of Node.js marked a major turning point in JavaScript’s history. It made JavaScript a true general-purpose language, suitable for almost any task. From automating builds to managing databases and deploying APIs, Node has made it possible to use one consistent language across both client and server.
In addition, JavaScript now runs in many other environments: mobile applications built with React Native, desktop apps powered by Electron, embedded systems, and even IoT devices. Its versatility and small footprint have made it the universal language of modern computing.
Running JavaScript
Because JavaScript is built into every major browser and supported by Node.js on the desktop, you can begin experimenting immediately without installing complex tools. There are several easy ways to write and run JavaScript code, each suited to different kinds of learning and development.
Using the browser console
The quickest way to try JavaScript is in your browser’s developer console. Every modern browser includes one, and it can be opened with a keyboard shortcut such as F12 or Ctrl + Shift + I. Select the Console tab, type a command like 2 + 2, and press Enter. The result appears instantly below your input. You can run multiple lines, define functions, and inspect variables interactively.
This environment is ideal for short tests, exploring new syntax, or checking how built-in functions behave. You can also execute scripts that are already part of a web page and see how they interact with the DOM in real time.
The most useful command to know here is console.log(), which prints values to the console. It is the simplest and most widely used debugging tool in JavaScript.
Using browser developer tools
Developer tools go beyond the console, allowing you to step through code, watch variables, and inspect network activity. The Sources or Debugger tab lets you view and edit JavaScript files, set breakpoints, and run code line by line to observe how it behaves. These tools are invaluable for understanding how programs execute and for finding logic errors.
You can also inject your own code into any page through the console to test small changes or experiments. Note that edits made inside the developer tools are temporary: reloading the page restores the original scripts unless you save them through the browser’s override or workspace features.
Running code with Node.js
Node.js provides a command-line environment for running JavaScript outside the browser. Once installed, you can run single files by typing node filename.js, or enter the REPL (Read-Eval-Print Loop) by typing node with no filename. The REPL works much like the browser console: type a command, press Enter, and see the result straight away.
This makes Node ideal for trying examples from the book, running small utilities, or learning how JavaScript behaves without a web page. It also lets you explore server-side features such as file handling and networking when you reach those topics later.
Keep in mind that the available globals differ by environment: browsers expose document and window, while Node offers process, require(), and module.
Including JavaScript in web pages
The most common way to use JavaScript on a website is by placing it inside <script>…</script> tags within an HTML document. You can put these tags in the <head> or at the end of the <body> section. For example:
<script>
  console.log("Hello from JavaScript!");
</script>
Alternatively, you can keep your scripts in separate files and include them with a src attribute:
<script src="script.js"></script>
This method keeps your HTML cleaner and makes it easier to reuse code across pages. Browsers load and run the script automatically when they reach the tag.
Strict mode and common pitfalls it prevents
JavaScript has a flexible design that allows beginners to write useful programs quickly, but this flexibility can sometimes lead to errors that are hard to detect. To improve consistency and catch mistakes earlier, ECMAScript 5 introduced strict mode, which enforces a stricter set of rules on how JavaScript is written and executed.
Strict mode is enabled by placing the line "use strict"; at the top of a script or at the beginning of a function. Once active, it changes how some parts of the language behave, making JavaScript safer, clearer, and easier to debug.
How to enable strict mode
You can apply strict mode to an entire script file or just to a single function:
// Entire script in strict mode
"use strict";
let x = 10;
// Only this function in strict mode
function example() {
  "use strict";
  let y = 20;
}
If you use modern tools or frameworks, strict mode is often applied automatically, but it is still good practice to understand what it does.
What strict mode changes
Strict mode removes or restricts several features that have historically caused confusion or subtle bugs. Among the most important changes are:
- It prevents the use of undeclared variables.
- It disallows deleting variables, functions, or parameters.
- It makes this behave more predictably inside functions that are not methods (it will be undefined instead of the global object).
- It stops silent errors from being ignored and converts them into exceptions.
- It forbids duplicate parameter names in functions.
For example, assigning to an undeclared variable throws an error in strict mode:
"use strict";
x = 5; // ReferenceError: x is not defined
Without strict mode, the same statement would quietly create a global variable named x, which is a frequent source of hard-to-find bugs.
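The change to this can also be seen in a short sketch: in a plain function call, strict mode leaves this undefined rather than falling back to the global object.

```javascript
function whatIsThis() {
  "use strict";
  // In strict mode, a plain function call does not receive the
  // global object as `this` — it receives undefined instead.
  return this;
}

console.log(whatIsThis()); // undefined
```

This makes accidental writes to global properties through this fail loudly instead of silently polluting the global scope.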
Putting "use strict"; at the start of your code or module is a simple way to avoid many logic errors. It enforces habits that modern JavaScript already expects.
Strict mode in modern JavaScript
All JavaScript modules and class definitions automatically run in strict mode, even if you do not include the directive. This means that most modern code already benefits from its safety improvements by default. You only need to enable it explicitly in older scripts or inline code that is not treated as a module.
Overall, strict mode helps make JavaScript more reliable and less error-prone. It encourages cleaner code and better practices, which is why it has become the standard mode for all modern JavaScript development.
Chapter 2: JavaScript Foundations
Every programming language is built on a set of core ideas that define how its code is written, read, and executed. Before learning about data types, functions, or objects, it is important to understand the basic structure of the language itself. This chapter explores the essential building blocks that form the foundation of all JavaScript programs.
Here you will learn what statements and expressions are, how JavaScript uses tokens and identifiers to form valid code, and how comments can make your programs easier to read. You will also see how JavaScript organises code into blocks and manages scope, and how its automatic semicolon insertion feature can sometimes change the way your code behaves.
These topics may seem simple, but together they define how the language interprets your instructions. A solid understanding of them helps you read other people’s code, avoid subtle syntax errors, and write your own programs with greater precision.
JavaScript’s foundations are not complicated, yet they contain details that matter. By mastering them early, you will gain a clear sense of how the interpreter works and how your code becomes a sequence of actions that the engine performs step by step.
Statements and expressions
Every JavaScript program is built from statements and expressions. They are the smallest meaningful units of execution in the language, and understanding the distinction between them helps you predict how code behaves and where certain constructs are valid.
Expressions
An expression is a fragment of code that produces a value. It can be as simple as a literal number or string, or as complex as a function call or logical comparison. When the JavaScript engine evaluates an expression, it always resolves to a single value.
5 // a literal expression
"Hello" // a string expression
a + b // an arithmetic expression
myFunction() // a function call expression
user.age > 18 // a comparison expression
Expressions can appear anywhere a value is expected. They can be assigned to variables, passed as arguments to functions, or combined with other expressions to create new ones. This makes them flexible building blocks for all JavaScript code.
Statements
A statement performs an action. It may declare a variable, make a decision, start a loop, or call a function. Statements usually end with a semicolon, although JavaScript can sometimes insert them automatically (as explained later in this chapter).
let total = 0; // variable declaration statement
if (total < 10) { // if statement
  total++;
}
console.log(total); // expression statement
Some statements, such as if, for, and while, control the flow of a program. Others, like return or break, affect how execution moves within functions or loops.
Expression statements
Many expressions can also be used as standalone statements. When this happens, they are called expression statements. For example, a function call can be both an expression and a statement depending on how it is used:
myFunction(); // expression used as a statement
let result = myFunction();
// same expression used in a larger expression
JavaScript allows expression statements because they often cause useful side effects, such as logging output or modifying data. However, not all expressions can stand alone; for instance, an arithmetic expression like 5 + 2 on its own does nothing unless its result is used.
Building programs from statements and expressions
JavaScript programs are sequences of statements that may contain expressions. When a program runs, the JavaScript engine evaluates each statement in order, computing the results of any expressions as needed. This simple but consistent model makes it easy to reason about the flow of execution.
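As a small illustration, the following program combines declaration statements, a loop statement, and expressions evaluated along the way:

```javascript
let prices = [4, 8, 15]; // declaration statement; the array literal is an expression
let total = 0;           // declaration statement

for (let p of prices) {  // loop statement controlling the flow
  total += p;            // expression statement with a side effect
}

console.log(total); // 27
```

Each line is a statement; the values it computes (the array, each sum) are expressions evaluated as the engine works through the program in order.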
Tokens, identifiers, and comments
When you write JavaScript code, the interpreter reads it as a stream of small, meaningful elements called tokens. These tokens form the building blocks of the language: names, numbers, symbols, and punctuation that together create statements and expressions. Understanding how JavaScript breaks code into tokens helps explain how it parses and executes your programs.
Tokens in JavaScript
A token is the smallest unit of syntax that carries meaning. Examples include keywords, variable names, operators, and literals. When the JavaScript engine reads your code, it divides it into tokens before interpreting or compiling it.
let total = 5 + 3;
In this line, the tokens are let, total, =, 5, +, 3, and the semicolon ;. The interpreter recognises each as a separate piece of syntax and processes them accordingly.
Identifiers
An identifier is the name you give to something in your code, such as a variable, function, or class. Identifiers must begin with a letter, underscore (_), or dollar sign ($), and can contain digits after the first character. They are case-sensitive, so total and Total refer to different variables.
let name = "Robin";
let Name = "Alex"; // different identifier
Identifiers cannot use JavaScript keywords such as if, for, class, or return. Choosing clear, descriptive names makes code easier to read and maintain. Although technically valid, names like x or data1 rarely explain their purpose well.
Names such as userCount or calculateTotal() immediately convey meaning and make code easier to follow.
Unicode in identifiers
JavaScript identifiers can include Unicode characters, meaning you can use letters and symbols from many languages, not just English. While this feature allows flexibility, it is best to use standard ASCII letters and digits for clarity and compatibility.
let café = "open"; // valid, but may confuse readers
let cafe = "open"; // clearer and more common
Comments
Comments are parts of code ignored by the interpreter. They are used to explain what code does, mark sections, or temporarily disable lines during testing. JavaScript supports two styles of comments: single-line and multi-line.
// This is a single-line comment
let total = 5; // You can also comment after a statement
/*
This is a multi-line comment.
It can span several lines and is often used for longer explanations.
*/
Comments do not affect how the program runs, but they greatly improve readability and maintainability, especially in collaborative projects.
Whitespace and readability
JavaScript generally ignores spaces, tabs, and line breaks outside strings. These are used mainly to make code readable. Consistent indentation and spacing help others understand the structure of your code at a glance.
// Easy to read
if (count < 10) {
  total += count;
}
// Harder to read
if(count<10){total+=count;}
Clean, well-formatted code is easier to debug and maintain, even though the interpreter does not require strict spacing rules.
Blocks and lexical structure
JavaScript programs are organised into blocks of code that group related statements together. These blocks define the structure of your program, control the visibility of variables, and determine where certain pieces of code can be used. The rules that govern this organisation form JavaScript’s lexical structure.
Blocks
A block is a section of code enclosed in curly braces {…}. It can contain one or more statements and is used in functions, loops, and conditionals to group actions that should execute together.
if (count > 0) {
  console.log("Count is positive");
  count--;
}
In this example, both statements inside the braces belong to the if block. If the condition is true, the entire block runs; otherwise, it is skipped. Even when a block contains only a single statement, using braces can improve clarity and prevent logical errors when you later expand the code.
Lexical structure
The lexical structure defines how JavaScript code is written and read at the most basic level. It includes the use of tokens, whitespace, comments, line breaks, and punctuation. The interpreter follows these rules to separate and interpret parts of your code correctly.
Because JavaScript is a lexically scoped language, the placement of code within its textual structure determines which variables and functions are visible at any point. This means the physical layout of your code (where you declare things) matters to how they behave at runtime.
Lexical scope
Lexical scope defines where variables can be accessed based on where they are declared in the source code. Each function, block, or module introduces a new scope. Variables declared with let and const are confined to the block in which they appear, while variables declared with var are visible throughout the entire function or global scope.
let outer = "outside";
{
  let inner = "inside";
  console.log(outer); // accessible
}
// console.log(inner); // ReferenceError: inner is not defined
Here, inner is available only inside its block, but outer is visible everywhere within the same function or script. This is a key part of how modern JavaScript manages variable lifetimes and prevents accidental interference between sections of code.
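The different scoping of var and let can be sketched side by side:

```javascript
function compare() {
  if (true) {
    var fnScoped = "var";    // function-scoped: visible outside the block
    let blockScoped = "let"; // block-scoped: gone once the block ends
  }
  console.log(fnScoped); // "var" — still accessible here
  // console.log(blockScoped); // ReferenceError: blockScoped is not defined
  return fnScoped;
}

console.log(compare()); // "var"
```

Because var ignores block boundaries, modern code prefers let and const, which keep variables confined to the smallest scope that needs them.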
Nested blocks and shadowing
Blocks can be nested inside one another. When two variables share the same name in different scopes, the inner declaration shadows the outer one, temporarily hiding it within that block.
let message = "outer";
{
  let message = "inner";
  console.log(message); // "inner"
}
console.log(message); // "outer"
Shadowing is often used intentionally to create temporary variables without affecting the outer scope, but using distinct names can make code easier to follow.
Global scope
Variables declared outside any function or block belong to the global scope. They are accessible from anywhere in your code, including functions and nested blocks. However, overusing global variables can lead to naming conflicts and unintended side effects, especially in large programs.
let version = "1.0";
function showVersion() {
  console.log(version); // accessible everywhere
}
In modern JavaScript, the use of modules and block-scoped declarations (let and const) helps reduce reliance on the global scope, keeping programs more predictable and modular.
Together, blocks and lexical structure define the framework that gives meaning and order to JavaScript code. They determine how variables are created, where they exist, and how different parts of a program interact with one another.
Automatic semicolon insertion
JavaScript statements usually end with semicolons, but unlike many languages, the interpreter can automatically insert them when they are missing. This feature, known as Automatic Semicolon Insertion (ASI), was designed to make the language more forgiving and easier for beginners. However, it can also cause unexpected behaviour if you rely on it too much.
When semicolons are automatically inserted
The JavaScript engine inserts semicolons in a few specific situations where it detects the end of a statement. The most common cases are:
- When a line break appears where a semicolon is expected.
- At the end of a block (}) if no semicolon follows.
- At the end of a script or function.
For example, the following code runs correctly even though there are no semicolons:
let a = 5
let b = 10
console.log(a + b)
JavaScript automatically inserts semicolons after each line. While this may seem convenient, the rules behind ASI are subtle, and there are cases where it can change the meaning of your code.
When ASI can cause problems
One of the most common pitfalls occurs when a line begins with an expression that could be interpreted as a continuation of the previous line. In such cases, the missing semicolon can lead to incorrect grouping of statements.
let result = getValue()
[1, 2, 3].forEach(console.log)
Here, JavaScript parses the two lines as a single statement: the brackets are treated as a property access on the return value of getValue() (the comma operator reduces 1, 2, 3 to just 3), which typically causes a runtime error. To fix it, insert a semicolon before the array or at the end of the previous line:
let result = getValue();
[1, 2, 3].forEach(console.log);
Return statements and ASI
ASI can also change the behaviour of return statements if you place the return value on the next line. JavaScript automatically inserts a semicolon after return when it encounters a line break, returning undefined instead of the intended value.
function getItem() {
  return
  {
    name: "Book"
  }
}
console.log(getItem()); // undefined
To fix this, place the opening brace on the same line as return:
function getItem() {
  return {
    name: "Book"
  };
}
As a rule of thumb, if a line begins with (, [, /, +, or -, ensure the previous line ends with a semicolon. This avoids nearly all ASI-related issues.
Best practices
Most experienced developers include semicolons explicitly. Although modern JavaScript engines handle ASI well, writing them yourself makes your intent unambiguous and prevents subtle bugs. Consistency is more important than the style itself. Whether you use semicolons or omit them, do so the same way throughout your project.
Automatic semicolon insertion was introduced to make JavaScript friendlier to new programmers, but understanding its limits helps you write more reliable and predictable code.
Value vs reference
In JavaScript, the way data is stored and passed between variables depends on whether it is a value type or a reference type. Understanding this distinction is essential for predicting how changes to variables will affect your program. It determines whether copying or modifying one variable also affects another.
Primitive values
JavaScript’s primitive types (number, string, boolean, null, undefined, symbol, and bigint) are always stored and passed by value. This means each variable holds its own independent copy of the data.
let a = 10;
let b = a; // copy the value of a
b = 20;
console.log(a); // 10
console.log(b); // 20
Changing b does not affect a because they store separate values. Each assignment of a primitive creates a new copy in memory.
Reference types
Objects, arrays, and functions are reference types. When you assign one variable to another, you are copying a reference (a pointer) to the same underlying object, not the object itself. Both variables then refer to the same data in memory.
let x = { count: 5 };
let y = x; // copy the reference, not the object
y.count = 10;
console.log(x.count); // 10 — both point to the same object
Because x and y reference the same object, modifying one affects the other. This behaviour often surprises new developers coming from languages that default to value semantics.
Copying reference types safely
To create a real copy of an object or array, you must explicitly duplicate its contents. A common way to do this is by using the spread syntax (...) or functions such as Object.assign().
let original = { a: 1, b: 2 };
let copy = { ...original };
copy.a = 99;
console.log(original.a); // 1
console.log(copy.a); // 99
Here, copy holds a new object with its own data. However, this is a shallow copy: if an object contains other objects, the nested ones are still shared by reference. For deep copying, utility libraries or structured cloning functions can be used.
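The difference between a shallow and a deep copy can be sketched with structuredClone, a built-in deep-cloning function available in modern browsers and in Node.js 17 and later:

```javascript
let original = { user: { name: "Robin" } };

// Spread copies only the top level: the nested object is still shared
let shallow = { ...original };
shallow.user.name = "Alex";
console.log(original.user.name); // "Alex" — the nested object was shared

// structuredClone duplicates nested objects as well
let source = { user: { name: "Robin" } };
let deep = structuredClone(source);
deep.user.name = "Alex";
console.log(source.user.name); // "Robin" — the copy is fully independent
```

Note that structuredClone cannot copy functions or DOM nodes; for plain data structures, however, it is the simplest standard way to get a deep copy.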
The same applies to arrays: copy them with [...array] or array.slice() to avoid modifying the original.
Comparing values and references
When comparing primitives, JavaScript checks whether the actual values are equal. When comparing objects or arrays, it checks whether the references point to the same memory location.
let m = [1, 2, 3];
let n = [1, 2, 3];
console.log(m === n); // false — different arrays
console.log(m == n); // false — same reason
let p = n;
console.log(p === n); // true — both reference the same array
This distinction explains why two identical objects or arrays are not considered equal unless they are literally the same instance.
When to use each
Primitive values are ideal for simple, independent data. Reference types are suited for collections, structured information, and objects that may be shared or updated. Knowing which kind you are working with helps you manage memory safely and avoid accidental side effects.
By distinguishing between values and references, you gain a clearer understanding of how JavaScript handles data internally. This knowledge will help you reason about your code more confidently as you move on to variables, functions, and more advanced structures in later chapters.
Chapter 3: Types and Values
Everything in JavaScript is a value, and every value has a type. Understanding the difference between types and values, and how JavaScript interprets and converts them, is essential to writing clear and predictable code. Unlike many strictly typed languages, JavaScript uses dynamic typing, which means that variables are not bound to any particular type. A single variable can hold different kinds of values during execution, and the language automatically converts between types when necessary.
JavaScript’s type system includes a set of primitive types for representing basic data, and a single non-primitive type, the object, which acts as a container for collections of values and more complex structures. You will also encounter operators and constructs that help you identify, compare, and safely convert between these types. Together, these concepts define how values behave, how they interact with one another, and how they are stored and referenced in memory.
This chapter explores the fundamental types and their characteristics, how values are stored and compared, and how JavaScript performs type coercion under the hood. You will learn to use operators such as typeof and instanceof effectively, understand equality distinctions, and handle tricky conversions safely and predictably.
Primitives: undefined, null, boolean, number
Primitive types are the simplest forms of data in JavaScript. They are immutable values that are stored directly, not as references, and include undefined, null, boolean, number, string, symbol, and bigint. This section introduces the first four of these core primitives, which are fundamental to all JavaScript programs.
undefined
The special value undefined represents a variable that has been declared but not assigned a value. It is also the default return value of functions that do not explicitly return anything. Accessing a property or element that does not exist also results in undefined.
let x;
console.log(x); // undefined
function test() {}
console.log(test()); // undefined
Do not confuse a declared variable that holds no value (undefined) with a missing variable, which causes a ReferenceError.
null
The value null represents the intentional absence of any object value. It is often used to indicate that a variable should be empty or that a reference points to nothing.
let user = null;
if (user === null) {
  console.log("No user data available");
}
typeof null returns "object" due to a design bug that remains for backward compatibility. Always check for null using strict equality (===) instead.
boolean
Boolean values are either true or false. They commonly appear as results of comparisons or conditions in control statements. JavaScript can also convert other types to booleans when used in logical expressions, based on their “truthy” or “falsy” nature.
let active = true;
let ready = false;
console.log(Boolean(0)); // false
console.log(Boolean("Hi")); // true
The falsy values are false, 0, -0, 0n, "", null, undefined, and NaN. Everything else is truthy.
number
JavaScript uses a single numeric type, number, which represents both integers and floating-point values using the IEEE 754 double-precision format. This means that certain arithmetic operations can introduce rounding errors.
let x = 10;
let y = 3.5;
let z = x / y;
console.log(z); // 2.857142857142857
Special numeric values include Infinity, -Infinity, and NaN (Not-a-Number). These are valid number values and can result from invalid or overflow operations.
console.log(1 / 0); // Infinity
console.log(-1 / 0); // -Infinity
console.log("abc" * 2); // NaN
Use Number.isNaN(value) to check for NaN, since NaN === NaN is always false.
string, symbol, bigint
In addition to numbers and booleans, JavaScript provides several other primitive types that handle text, unique identifiers, and very large integers. These are string, symbol, and bigint. Each serves a specific purpose and behaves differently from objects, even though they can sometimes appear similar when used in expressions or converted implicitly.
string
A string represents a sequence of characters enclosed in single quotes, double quotes, or backticks. Strings are immutable, meaning once created they cannot be changed. Any operation that appears to modify a string actually produces a new one. Template literals, introduced in ES6, use backticks (`) and support interpolation with ${...} expressions.
let name = "Robin";
let greeting = `Hello, ${name}!`;
console.log(greeting); // Hello, Robin!
Strings can be concatenated with the + operator or manipulated using built-in methods such as slice(), toUpperCase(), and includes(). They are commonly used to represent textual data, keys, and serialized information.
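A few of those methods in action; note that each call returns a new value and leaves the original string unchanged:

```javascript
const title = "JavaScript Basics";

console.log(title.slice(0, 10));    // "JavaScript"
console.log(title.toUpperCase());   // "JAVASCRIPT BASICS"
console.log(title.includes("Bas")); // true
console.log(title);                 // "JavaScript Basics" (unchanged)
```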
symbol
The symbol type represents a unique and immutable value, often used as an identifier for object properties. Each call to Symbol() creates a new value that is guaranteed to be distinct, even if given the same description. Symbols do not coerce to strings, which helps prevent accidental key collisions in objects.
const id = Symbol("id");
const user = {
name: "Alice",
[id]: 123
};
console.log(user[id]); // 123
Symbol-keyed properties are not listed by for...in loops or returned by Object.keys(). To access them, use Object.getOwnPropertySymbols().
bigint
The bigint type allows you to represent integers larger than the maximum safe value for number (2^53 - 1, available as Number.MAX_SAFE_INTEGER). A bigint is created by appending an n to the end of an integer literal or by calling BigInt().
const large = 9007199254740993n;
console.log(large + 2n); // 9007199254740995n
Unlike number, bigint values cannot be mixed directly with non-bigint values in arithmetic expressions. Attempting to do so will cause a TypeError. They are primarily used when working with very large counts, identifiers, or precise integer arithmetic.
Use BigInt() carefully, as some APIs and libraries may not yet support bigint values.
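A short illustration of the mixing rule: arithmetic between bigint and number throws, so convert one operand explicitly first.

```javascript
const big = 10n;
const small = 5;

try {
  console.log(big + small); // throws: cannot mix BigInt and other types
} catch (err) {
  console.log(err instanceof TypeError); // true
}

// Convert explicitly before mixing:
console.log(big + BigInt(small)); // 15n
console.log(Number(big) + small); // 15
```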
objects and property attributes
Unlike primitive values, which are immutable and stored directly, an Object is a mutable collection of key–value pairs. Keys are usually strings or symbols, and values can be any type, including functions and other objects. Most of JavaScript’s complex data structures, such as arrays, maps, and functions, are specialized kinds of Objects.
const person = {
name: "Sam",
age: 30
};
person.job = "Developer";
console.log(person.name); // Sam
Objects are passed by reference, not by value. This means that if two variables reference the same object, modifying it through one variable also affects the other.
const a = { x: 1 };
const b = a;
b.x = 2;
console.log(a.x); // 2
Copying an Object by assignment only copies its reference. To clone an object, use structuredClone(), Object.assign(), or spread syntax ({ ...obj }).
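For instance (using illustrative field names), spread syntax copies only the top level of an object, while structuredClone() copies nested structures as well:

```javascript
const original = { name: "Sam", address: { city: "Oslo" } };

const shallow = { ...original };
shallow.name = "Kim";
console.log(original.name); // "Sam" (top-level property was copied)

shallow.address.city = "Bergen";
console.log(original.address.city); // "Bergen" (nested object is shared)

const deep = structuredClone(original);
deep.address.city = "Tromsø";
console.log(original.address.city); // "Bergen" (deep copy is independent)
```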
Property attributes
Each property of an Object has internal attributes that define its behavior. These include:
- value: the property’s stored data
- writable: whether the value can be changed
- enumerable: whether it appears in loops and enumerations
- configurable: whether the property can be deleted or modified
These attributes can be inspected or defined explicitly using Object.getOwnPropertyDescriptor() and Object.defineProperty().
const user = {};
Object.defineProperty(user, "id", {
value: 42,
writable: false,
enumerable: true,
configurable: false
});
console.log(user.id); // 42
user.id = 99; // silently ignored (throws TypeError in strict mode)
Non-enumerable properties do not appear in for...in loops or when spreading ({ ...obj }). Use descriptors to create hidden or read-only fields safely.
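These attributes can be read back with Object.getOwnPropertyDescriptor(). As a small illustration, properties created by plain assignment default to all attributes being true, while Object.defineProperty() defaults omitted attributes to false:

```javascript
const obj = { visible: 1 };
console.log(Object.getOwnPropertyDescriptor(obj, "visible"));
// { value: 1, writable: true, enumerable: true, configurable: true }

// Omitted attributes (writable, configurable) default to false here
Object.defineProperty(obj, "hidden", { value: 2, enumerable: false });
console.log(Object.keys(obj)); // ["visible"], "hidden" is skipped
```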
Accessors: getters and setters
In addition to data properties, objects can define accessor properties using get and set functions. These allow computed or controlled access to internal data without exposing it directly.
const product = {
_price: 100,
get price() {
return `$${this._price}`;
},
set price(value) {
if (value > 0) this._price = value;
}
};
product.price = 150;
console.log(product.price); // $150
Unlike data properties, accessor properties have no value or writable attribute. Instead, their behavior is defined by their get and set functions.
Prototype chain
Every Object has an internal link to another object known as its prototype. When accessing a property that does not exist directly on the object, JavaScript looks for it along this prototype chain. This mechanism enables inheritance and shared behavior across object types.
const base = { kind: "base" };
const derived = Object.create(base);
console.log(derived.kind); // "base"
console.log(Object.getPrototypeOf(derived) === base); // true
Use Object.create(null) to create an object with no prototype, useful for pure key–value maps without inherited properties.
typeof, instanceof, and equality
JavaScript provides several operators to identify the type or class of a value and to compare values for equality. Understanding how these operators behave is crucial, especially since some may produce results that seem counterintuitive due to JavaScript’s dynamic typing and type coercion rules.
typeof
The typeof operator returns a string describing the type of its operand. It works for both primitives and objects, though it has a few historical quirks.
console.log(typeof 123); // "number"
console.log(typeof "Hello"); // "string"
console.log(typeof true); // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof null); // "object" (legacy bug)
console.log(typeof {}); // "object"
console.log(typeof Symbol()); // "symbol"
The "object" result for null is a long-standing language bug preserved for backward compatibility. Use strict checks like value === null when you specifically need to detect null.
instanceof
The instanceof operator tests whether an object’s prototype chain includes the prototype property of a given constructor function. It is useful for checking the class of objects and custom types.
function Person(name) {
this.name = name;
}
const alice = new Person("Alice");
console.log(alice instanceof Person); // true
console.log(alice instanceof Object); // true
This operator only works with objects, not primitives, and relies on the internal prototype chain, so it may not behave as expected when dealing with objects across different JavaScript contexts (such as iframes or workers).
Prefer value.constructor === Type or Object.prototype.toString.call(value) for more predictable type checks, especially when working with multiple execution contexts.
Equality operators
JavaScript has two sets of equality operators: loose equality (==) and strict equality (===). The loose version performs type coercion before comparison, while the strict version does not.
console.log(5 == "5"); // true
console.log(5 === "5"); // false
console.log(null == undefined); // true
console.log(null === undefined); // false
Using === and !== is generally safer, since they prevent unexpected conversions. Only use == when you explicitly want coercion between compatible types, such as comparing null and undefined.
NaN is never equal to anything, including itself. Use Number.isNaN() to test for it.
Object identity
Objects are compared by reference, not by value. Two distinct objects are never equal, even if their properties have identical contents.
const a = { x: 1 };
const b = { x: 1 };
const c = a;
console.log(a === b); // false
console.log(a === c); // true
To compare object contents, you must check individual properties manually or use utilities such as structuredClone() combined with serialization, or libraries that perform deep comparisons.
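As a rough illustration, a minimal recursive comparison might look like the following sketch. It deliberately ignores edge cases (cycles, Dates, NaN, symbol keys) that real deep-comparison libraries handle:

```javascript
function deepEqual(a, b) {
  if (a === b) return true; // identical primitives or same reference
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Every key of a must exist in b with a deeply equal value
  return keysA.every((key) => deepEqual(a[key], b[key]));
}

console.log(deepEqual({ x: 1, y: { z: 2 } }, { x: 1, y: { z: 2 } })); // true
console.log(deepEqual({ x: 1 }, { x: "1" })); // false
```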
Object.is() provides a more accurate comparison for some edge cases, such as distinguishing +0 from -0 and correctly treating NaN as equal to itself.
Coercion rules and safe checks
Type coercion is the process by which JavaScript automatically converts values from one type to another when required by an operation. This happens in many contexts, such as comparisons, arithmetic, and string concatenation. While coercion allows flexible code, it can also introduce subtle bugs if not fully understood.
Implicit coercion
JavaScript performs implicit conversions when combining values of different types. For example, when a string and a number are used together with the + operator, the number is converted to a string. In contrast, other arithmetic operators (-, *, /) convert both operands to numbers.
console.log("5" + 2); // "52"
console.log("5" - 2); // 3
console.log("5" * "2"); // 10
Boolean contexts also trigger coercion. Values like 0, "", null, undefined, and NaN become false, while all others become true.
if ("hello") {
console.log("This runs"); // non-empty string is truthy
}
Explicit coercion
Explicit coercion occurs when you deliberately convert a value to another type using functions or constructors like String(), Number(), Boolean(), or BigInt(). These make conversions clear and intentional.
let n = Number("42"); // 42
let s = String(42); // "42"
let b = Boolean(0); // false
When precision matters, always prefer explicit conversions to make your intent obvious and avoid automatic type juggling.
The unary + operator is a concise way to convert a value to a number: +"5" becomes 5. Use it sparingly for clarity.
Loose equality and coercion
The loose equality operator (==) triggers coercion when the operands differ in type. Some of its rules are non-intuitive and can lead to errors if used without care.
console.log(false == 0); // true
console.log("" == 0); // true
console.log(null == undefined); // true
console.log([] == false); // true
For predictable comparisons, always use strict equality (===) instead, which compares both type and value without conversion.
Surprisingly, [] == ![] evaluates to true due to the multi-step coercion process. Avoid constructs that rely on implicit truthy/falsy conversions in equality checks.
Safe type checking
When working in a dynamically typed environment, it is often necessary to confirm that a value has the expected type before performing operations. The safest approaches combine typeof, Array.isArray(), and constructor checks.
if (typeof value === "string") {
console.log("It is a string");
}
if (Array.isArray(value)) {
console.log("It is an array");
}
if (value instanceof Date) {
console.log("It is a Date object");
}
Always prefer clear checks over assumptions, especially when handling user input, API responses, or loosely structured data.
Combine value != null with type checks to safely exclude both null and undefined before accessing properties.
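For example, a guard such as the following (getName and its user argument are illustrative) rejects both null and undefined before touching a property:

```javascript
function getName(user) {
  // user != null excludes both null and undefined in one check
  if (user != null && typeof user.name === "string") {
    return user.name;
  }
  return "unknown";
}

console.log(getName({ name: "Ada" })); // "Ada"
console.log(getName(null));            // "unknown"
console.log(getName(undefined));       // "unknown"
```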
Chapter 4: Variables and Scope
Variables are the foundation of any program, serving as named references to values stored in memory. In JavaScript, variables are not simply containers for data. Their behavior depends on how and where they are declared. The language provides three distinct keywords for variable declaration (var, let, and const), each with different scope rules and lifetimes that affect how code executes and interacts across functions and blocks.
Scope determines where variables are visible and accessible within your program. JavaScript uses a model called lexical scoping, which ties variable visibility to the structure of the code as written. Understanding how scope works is essential for writing predictable, maintainable programs and avoiding subtle bugs caused by shadowing or unintended reuse of variable names.
This chapter explores the evolution of JavaScript’s variable system from the early days of var to the modern, block-scoped let and const. It explains how declarations are processed by the interpreter through a mechanism known as hoisting, why the temporal dead zone exists, and how closures form when inner functions retain access to variables from their outer scope.
var, let, and const
JavaScript provides three ways to declare variables: var, let, and const. Each differs in how it handles scope, reassignment, and hoisting. Understanding these differences is essential to writing safe, predictable code. Although all three keywords create variables, only let and const follow modern block-scoping rules introduced in ECMAScript 2015 (ES6). The older var keyword uses function scope and behaves differently in several subtle ways.
var
The var keyword declares a variable that is function-scoped (or globally scoped if declared outside a function). Variables declared with var are hoisted to the top of their scope and initialized with undefined before any code runs. This can cause unexpected results if you attempt to use them before their declaration appears.
function example() {
console.log(a); // undefined (hoisted)
var a = 10;
console.log(a); // 10
}
Because var does not respect block boundaries, it can leak outside of loops and conditionals. Use let or const instead for predictable scoping.
let
The let keyword declares a variable with block scope, meaning it is only accessible within the nearest set of curly braces ({…}). Unlike var, let declarations are not initialized until the line of execution reaches them, creating what is known as the temporal dead zone (covered later in this chapter).
{
let x = 5;
console.log(x); // 5
}
console.log(x); // ReferenceError: x is not defined
Variables declared with let can be reassigned, making them suitable for values that change over time, such as counters or mutable state.
Prefer let over var unless you have a specific reason to rely on function scope. It helps prevent accidental overwriting and makes scope boundaries explicit.
const
The const keyword declares a block-scoped variable that cannot be reassigned after its initial definition. It is ideal for values that should remain constant throughout a program, such as configuration data or references that should not be replaced.
const pi = 3.14159;
pi = 3; // TypeError: Assignment to constant variable
However, const does not make objects or arrays immutable — it only prevents reassignment of the variable binding itself. The contents of a referenced object can still change.
const user = { name: "Robin" };
user.name = "Alex"; // allowed
console.log(user.name); // Alex
Use Object.freeze() if you need to prevent changes to the contents of an object or array, not just its reference. Note that freezing is shallow: nested objects remain mutable.
Choosing the right declaration
In modern JavaScript, the general rule is simple: use const by default, and let only when reassignment is necessary. Avoid var entirely except when maintaining legacy code or working in older environments that do not support block-scoped declarations.
Reaching for const first encourages clarity and helps catch unintended mutations early, making your programs more reliable.
Lexical scope
JavaScript uses a lexical scoping model, meaning that the scope of a variable is determined by its position in the written code, not by where or when it is executed. In other words, the structure of your program defines which variables are visible in which parts of the code. This is a fundamental concept that underlies closures, module systems, and modern JavaScript’s block-scoped declarations.
How lexical scope works
When JavaScript code runs, it creates a series of nested scopes, forming what is known as a scope chain. Each function or block introduces a new scope. When you reference a variable, JavaScript looks for it first in the current scope, then moves outward through the parent scopes until it finds a matching declaration, or throws a ReferenceError if it doesn’t find one.
let globalVar = "global";
function outer() {
let outerVar = "outer";
function inner() {
let innerVar = "inner";
console.log(globalVar); // found in global scope
console.log(outerVar); // found in outer scope
console.log(innerVar); // found in inner scope
}
inner();
}
outer();
In this example, each level of the nested function chain has access to variables declared in the levels above it, but not below. The inner() function can see outerVar and globalVar, but the reverse is not true because the outer scopes cannot access variables defined within inner().
Global, function, and block scope
JavaScript has three main kinds of scope:
- Global scope: variables declared outside any function or block, accessible everywhere.
- Function scope: variables declared inside a function (using var) are visible only within that function.
- Block scope: variables declared with let or const are limited to the nearest block ({…}).
{
let blockScoped = "visible only here";
var functionScoped = "visible outside block";
}
console.log(functionScoped); // works
console.log(blockScoped); // ReferenceError
Declaring a variable with var inside a block does not restrict it to that block, which can lead to leaks into the containing scope. Always use let or const for block-local variables.
Nested scopes and shadowing
When a variable in an inner scope shares the same name as one in an outer scope, the inner variable shadows the outer one within its scope. This allows for local redefinition without altering the outer variable, but can also cause confusion if overused.
let x = 10;
function test() {
let x = 5; // shadows outer x
console.log(x); // 5
}
test();
console.log(x); // 10
Variable shadowing is often used deliberately in small, self-contained scopes, but in larger codebases it can obscure meaning and lead to mistakes if the same identifier is reused unintentionally.
Hoisting explained
Hoisting is JavaScript’s process of moving variable and function declarations to the top of their scope during compilation. This means that you can reference certain identifiers before their declaration appears in the source code. However, the behavior differs between var, let, const, and function declarations, and misunderstanding these differences can lead to confusing results.
What hoisting really means
When a JavaScript file or function is executed, the interpreter performs two passes over the code. In the first pass (the creation phase), all variable and function declarations are registered in memory. In the second pass (the execution phase), the code runs line by line. This is what allows you to call a function before it is defined, or access a variable that appears later in the code, although the value may not yet be initialized.
console.log(x); // undefined
var x = 5;
console.log(x); // 5
In this example, the declaration var x is hoisted to the top of its scope, but the assignment (x = 5) is not. This is why the first console.log shows undefined instead of throwing an error.
Hoisting registers declarations, not assignments: reading a hoisted variable before its declaration yields undefined (for var) or throws a ReferenceError (for let and const).
Hoisting with var
All var declarations are hoisted to the top of their enclosing function or global scope and automatically initialized to undefined. This can lead to unexpected results if you assume variable declarations occur in order.
function demo() {
console.log(a); // undefined
var a = 10;
console.log(a); // 10
}
Internally, JavaScript treats the example above as if it were written like this:
function demo() {
var a;
console.log(a);
a = 10;
console.log(a);
}
Hoisting with let and const
Variables declared with let and const are also hoisted to the top of their block, but unlike var, they are not automatically initialized. Accessing them before their declaration results in a ReferenceError. The period between the start of the block and the actual declaration is called the temporal dead zone (covered next).
{
console.log(value); // ReferenceError
let value = 42;
}
Although let and const are technically hoisted, it’s clearer to think of them as unavailable until their declaration line is reached.
Hoisting and functions
Function declarations are fully hoisted, including their definitions. This means you can safely call a function before its declaration in the code. Function expressions, on the other hand, follow variable hoisting rules depending on whether they use var, let, or const.
sayHello(); // Works fine
function sayHello() {
console.log("Hello!");
}
But if the function is defined as an expression, it behaves differently:
sayHi(); // TypeError: sayHi is not a function
var sayHi = function() {
console.log("Hi!");
};
Here, sayHi is hoisted like any other var variable and initialized to undefined, so attempting to call it before assignment fails.
Temporal dead zone
The temporal dead zone (TDZ) refers to the period between entering a block and the point where a let or const variable is declared. During this time, the variable exists in memory but cannot be accessed. Attempting to use it results in a ReferenceError. The TDZ enforces safer coding practices by preventing access to variables before they are properly initialized.
How the TDZ works
When JavaScript begins executing a block, it sets up space for all variables declared with let and const but does not initialize them. They remain in the temporal dead zone until execution reaches their declaration line. This ensures that no variable can be read or written before it has been explicitly created.
{
// TDZ begins
console.log(a); // ReferenceError
let a = 5; // TDZ ends here
console.log(a); // 5
}
Unlike var (which initializes to undefined when hoisted), let and const provide stricter guarantees about variable use. This helps prevent bugs caused by uninitialized or shadowed variables.
Accessing a let or const variable before its declaration always throws a ReferenceError, even if the same name exists in an outer scope.
Example with shadowing
The TDZ also applies when an inner block redeclares a variable name already used in an outer scope. Within the TDZ, the new declaration temporarily hides (or “shadows”) the outer variable, even before it becomes initialized.
let value = "outer";
{
// TDZ for inner 'value' starts
// console.log(value); // ReferenceError
let value = "inner";
console.log(value); // "inner"
}
This rule ensures that variables behave consistently according to their lexical scope and that no code can accidentally read a variable before its valid initialization point.
const and the TDZ
Because const variables must be initialized at declaration, they remain in the TDZ until the exact moment they are defined. After initialization, they cannot be reassigned, but their presence in the TDZ still prevents access beforehand.
{
// TDZ active
// console.log(MAX); // ReferenceError
const MAX = 100;
console.log(MAX); // 100
}
The TDZ helps catch initialization errors that would pass silently in older var-based code.
Introduction to closures
A closure is one of JavaScript’s most powerful and defining features. It occurs when a function retains access to variables from its outer (enclosing) scope even after that outer function has finished executing. Closures enable functions to “remember” their environment, making them essential for encapsulation, data privacy, and many functional programming techniques.
How closures form
Whenever a function is created, it captures the variables that were in scope at the time of its creation. This captured environment travels with the function wherever it goes, even if the original scope has already been exited. The function can still access and modify those variables later.
function makeCounter() {
let count = 0;
return function() {
count++;
return count;
};
}
const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
console.log(counter()); // 3
In this example, the inner function forms a closure over the variable count, keeping it alive between calls. Each invocation of makeCounter() creates a new, independent closure with its own private count.
Practical uses of closures
Closures make it possible to write functions that preserve state without relying on global variables. They are commonly used for data privacy, event handlers, iterators, and factory functions that generate customized behavior.
function greeter(name) {
return function() {
console.log(`Hello, ${name}!`);
};
}
const greetAlice = greeter("Alice");
const greetBob = greeter("Bob");
greetAlice(); // Hello, Alice!
greetBob(); // Hello, Bob!
Here, each returned function remembers its own name value. Even though greeter() has finished executing, the inner function continues to reference its closed-over variables.
Common closure pitfalls
Because closures retain references to variables, they can also keep memory alive longer than expected. Careless use of closures (especially inside loops or long-lived contexts) can lead to unintended side effects or memory leaks if variables are not released when no longer needed.
for (var i = 0; i < 3; i++) {
setTimeout(function() {
console.log(i);
}, 100);
}
// Outputs: 3, 3, 3
Each function above closes over the same i from the outer scope, which has already reached 3 by the time the callbacks execute. Using let creates a new binding for each iteration, fixing the issue.
for (let i = 0; i < 3; i++) {
setTimeout(function() {
console.log(i);
}, 100);
}
// Outputs: 0, 1, 2
Closures and scope
Closures demonstrate how JavaScript’s lexical scoping model persists beyond function execution. The inner function retains a live link to variables from its defining environment, not just a copy. This makes closures an essential concept for understanding asynchronous code, callbacks, and functional programming patterns.
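A small demonstration of this live link: the closure reads the current value of the captured variable, not a snapshot taken when the function was created.

```javascript
let message = "first";

function read() {
  return message; // live link to the outer binding, not a copy
}

console.log(read()); // "first"
message = "second";  // reassigning the outer variable...
console.log(read()); // "second", the closure sees the new value
```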
Chapter 5: Operators and Expressions
JavaScript programs are built from expressions that produce values, and operators that combine or transform them. Whether performing arithmetic, testing equality, chaining logic, or managing control flow, operators define how values interact and how results are derived. Understanding these relationships is essential to writing predictable and efficient code.
This chapter explores JavaScript’s wide range of operators: from basic arithmetic and assignment, through comparison and logical operations, to newer features like the nullish coalescing and optional chaining operators. You will also learn how operator precedence affects the order of evaluation, and how to use grouping to make intentions clear.
Operators form the connective tissue of the language. They turn individual values into relationships, enable decision-making, and allow concise manipulation of data. Mastering them lays the groundwork for confident, expressive coding throughout all your JavaScript projects.
Arithmetic and assignment
Arithmetic operators are among the most familiar in JavaScript, allowing you to perform standard mathematical operations such as addition, subtraction, multiplication, division, and remainder. These operators work with both numbers and, in some cases, strings that can be converted to numbers.
let a = 10;
let b = 3;
console.log(a + b); // 13
console.log(a - b); // 7
console.log(a * b); // 30
console.log(a / b); // 3.333...
console.log(a % b); // 1
JavaScript also supports the unary increment (++) and decrement (--) operators, which add or subtract one from a variable. These can appear before or after the variable, depending on whether you want the change to occur before or after the value is returned.
let count = 5;
console.log(++count); // 6 (increment, then return)
console.log(count--); // 6 (return, then decrement)
console.log(count); // 5
Avoid using ++ and -- inside complex expressions. Their side effects can make code harder to read and reason about, especially when used within loops or conditionals.
Assignment operators
Assignment operators let you store values in variables, with = being the most basic form. However, JavaScript provides compound assignment operators that combine assignment with arithmetic for brevity and clarity.
let x = 10;
x += 5; // x = x + 5 → 15
x -= 2; // x = x - 2 → 13
x *= 3; // x = x * 3 → 39
x /= 3; // x = x / 3 → 13
x %= 5; // x = x % 5 → 3
When readers see += or *=, they instantly know the variable is being updated relative to its current value.
Exponentiation
JavaScript includes the exponentiation operator (**) for raising a value to a power. It is right-associative, which means that in chained operations it evaluates from right to left.
console.log(2 ** 3); // 8
console.log(2 ** 3 ** 2); // 512 (2 ** (3 ** 2))
Before ** was introduced in ES2016, code used Math.pow() for the same effect.
Unary plus and minus
The unary plus (+) converts its operand to a number if possible, while unary minus (-) negates it after conversion. This makes them useful for ensuring numeric types before performing calculations.
let n = "42";
console.log(+n); // 42 (string converted to number)
console.log(-n); // -42
Using unary + as a quick conversion to number can be more concise than Number(), but be cautious: it can produce NaN if the value cannot be converted.
Comparison and equality
Comparison operators evaluate the relationship between two values and return a Boolean result (true or false). They are fundamental to decision-making in JavaScript, allowing programs to branch and react to data conditions.
let a = 10;
let b = 20;
console.log(a < b); // true
console.log(a > b); // false
console.log(a <= b); // true
console.log(a >= b); // false
These operators work on numbers, strings, and other comparable types. When comparing strings, JavaScript uses Unicode code point order, so comparisons are case-sensitive and may not align with natural language sorting.
console.log("apple" < "banana"); // true
console.log("Zoo" < "apple"); // false
For natural-language ordering, use localeCompare() instead.
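For example, localeCompare() orders strings by language conventions rather than raw code points:

```javascript
const words = ["Zoo", "apple", "Banana"];

// Default sort uses code point order: all uppercase letters sort first
console.log([...words].sort()); // ["Banana", "Zoo", "apple"]

// localeCompare respects language rules, ignoring case for ordering
console.log([...words].sort((a, b) => a.localeCompare(b, "en")));
// ["apple", "Banana", "Zoo"]
```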
Equality vs identity
JavaScript has two types of equality comparison: loose equality (==) and strict equality (===). The difference lies in whether type conversion occurs before comparison.
console.log(5 == "5"); // true (values are coerced)
console.log(5 === "5"); // false (types differ)
Loose equality attempts to convert operands to a common type before comparing. This can lead to unexpected results due to JavaScript’s coercion rules. Strict equality compares both value and type directly, making it the safer and more predictable option.
Prefer === and !== unless you have a specific reason to allow type coercion.
Inequality operators
Just as with equality, JavaScript provides both loose (!=) and strict (!==) forms of inequality.
console.log(5 != "5"); // false (values coerced)
console.log(5 !== "5"); // true (types differ)
These behave exactly like their equality counterparts, differing only in the negation of the result.
Object and reference comparison
When comparing objects, arrays, or functions, JavaScript checks whether both operands reference the same underlying object, not whether their contents are identical.
let x = [1, 2, 3];
let y = [1, 2, 3];
let z = x;
console.log(x === y); // false (different objects)
console.log(x === z); // true (same reference)
Comparing null and undefined
The values null and undefined are equal to each other under loose equality but not under strict equality.
console.log(null == undefined); // true
console.log(null === undefined); // false
This subtle distinction is a common source of confusion. In modern code, prefer strict comparisons or use the nullish coalescing operator (??) when you need to handle both null and undefined together.
To check whether a value is either null or undefined, you can use value == null safely, since it will match both and nothing else.
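For example, ?? keeps legitimate falsy values such as 0 that || would discard:

```javascript
const count = 0;
const name = null;

console.log(count || 10);     // 10, because 0 is falsy and || replaces it
console.log(count ?? 10);     // 0, because 0 is neither null nor undefined
console.log(name ?? "Guest"); // "Guest", null triggers the fallback
```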
Logical operators and short-circuiting
Logical operators combine or negate Boolean values, allowing expressions to express multiple conditions or default behaviors. In JavaScript, the main logical operators are && (AND), || (OR), and ! (NOT). They return one of their operands rather than a strict Boolean value, which makes them especially powerful for control flow and value selection.
let a = true;
let b = false;
console.log(a && b); // false
console.log(a || b); // true
console.log(!a); // false
The && operator returns its first falsy operand, or its last operand if all are truthy. The || operator does the opposite: it returns its first truthy operand, or its last operand if none are truthy. Because they return operand values rather than strict Booleans, they are often used for short-circuiting and fallback expressions.
let username = "";
let display = username || "Guest";
console.log(display); // "Guest"
This is called short-circuit evaluation: in a && b, if a is falsy, b is never evaluated. In a || b, if a is truthy, b is skipped.
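Short-circuiting means the right-hand operand may never run at all, which matters when it has side effects. A small sketch with a counter makes this visible:

```javascript
let calls = 0;
function probe() {
  calls++;        // records that the operand was actually evaluated
  return true;
}

false && probe(); // && stops at the falsy left side; probe never runs
true || probe();  // || stops at the truthy left side; probe never runs
console.log(calls); // 0

true && probe();  // now the right side must be evaluated
console.log(calls); // 1
```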
Truthy and falsy values
Any value in JavaScript can be treated as either true or false when evaluated in a Boolean context. Values considered falsy include false, 0, -0, "" (empty string), null, undefined, and NaN. Everything else is truthy.
if ("text") {
console.log("Truthy");
}
if (0) {
console.log("Falsy"); // not executed
}
Be careful: 0 and "" are falsy, which can lead to unintended results in conditions that test for presence or length.
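Passing each falsy value through Boolean() confirms the full list, and shows two values that often surprise newcomers:

```javascript
const falsyValues = [false, 0, -0, "", null, undefined, NaN];
console.log(falsyValues.map(Boolean)); // seven times false

console.log(Boolean("0")); // true: a non-empty string is truthy
console.log(Boolean([]));  // true: an array is an object, hence truthy
```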
Logical assignment operators
ES2021 introduced logical assignment operators that combine logical evaluation with assignment. These allow a variable to be updated only when certain logical conditions apply.
let user = null;
let defaultUser = "Anonymous";
user ||= defaultUser; // assigns if user is falsy
console.log(user); // "Anonymous"
let config = { debug: true };
config &&= { verbose: true }; // assigns if config is truthy
console.log(config); // { verbose: true }
let count = 0;
count ??= 10; // assigns if count is null or undefined
console.log(count); // 0 (not changed)
The logical assignment operators (&&=, ||=, and ??=) are concise ways to update variables conditionally without writing explicit if statements.
Using logic for control and defaults
Because logical operators return operand values, they can act as elegant control structures or defaulting mechanisms within expressions. For instance, || is often used to provide a fallback value, while && can execute code conditionally.
let debug = true;
debug && console.log("Debug mode active");
let userPort;                // may have been set elsewhere
let port = userPort || 8080; // falls back whenever userPort is falsy
Nullish coalescing and optional chaining
Modern JavaScript introduces two powerful operators that make working with uncertain data safer and more expressive: the nullish coalescing operator (??) and the optional chaining operator (?.). Both were added in ECMAScript 2020 and are especially useful when dealing with optional data structures, configuration objects, or JSON responses where some values may not exist.
The ?? (nullish coalescing) operator
The nullish coalescing operator returns the right-hand operand only when the left-hand operand is null or undefined. This differs from the logical OR operator (||), which treats all falsy values (such as 0, "", or false) as triggers for the fallback.
let count = 0;
console.log(count || 10); // 10 (because 0 is falsy)
console.log(count ?? 10); // 0 (because 0 is not null or undefined)
Use ?? when you only want to provide a default for truly missing values, not for all falsy values.
This way, legitimate falsy values such as 0 or "" are preserved.
The ?. (optional chaining) operator
The optional chaining operator allows you to safely access deeply nested properties without causing a runtime error if an intermediate value is null or undefined. Instead of throwing an exception, the expression simply evaluates to undefined.
let user = {
profile: {
name: "Robin"
}
};
console.log(user.profile?.name); // "Robin"
console.log(user.account?.email); // undefined
console.log(user.account.email); // TypeError
Optional chaining can also be used for method calls and array indexing, which makes it particularly versatile when navigating uncertain data structures.
user.getSettings?.(); // calls only if getSettings exists
let theme = user.prefs?.themes?.[0];
Note that ?. guards only against null or undefined. If a property exists but its access triggers another kind of error (such as a thrown exception), that error still propagates.
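For instance, a throwing getter is not silenced by ?. (the account object here is purely illustrative):

```javascript
const account = {
  get balance() {
    throw new Error("balance unavailable"); // simulates a failing accessor
  }
};

let message = "";
try {
  account.balance?.amount; // the getter throws before ?. can short-circuit
} catch (e) {
  message = e.message;
}
console.log(message); // "balance unavailable"
```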
Combining ?. and ??
These two operators often work best together: optional chaining safely accesses a value, and nullish coalescing provides a fallback if that value is missing.
let config = {
database: {
host: "localhost"
}
};
let host = config.database?.host ?? "default.local";
let port = config.database?.port ?? 3306;
console.log(host); // "localhost"
console.log(port); // 3306
Combining ?. and ?? helps create resilient expressions that gracefully handle missing or incomplete data, especially when working with external sources such as APIs or user-submitted content.
Together, nullish coalescing and optional chaining reduce the need for repetitive if checks and defensive coding. They make your logic cleaner, safer, and easier to maintain when navigating unpredictable object hierarchies.
Bitwise and exponentiation
JavaScript includes a set of bitwise operators that perform operations on numbers at the binary level. These operators treat operands as 32-bit signed integers, working directly on their bit representations rather than on their numeric values. Although bitwise operations are less common in everyday JavaScript, they are useful in performance-sensitive code, low-level data manipulation, and certain algorithmic techniques such as masking and flags.
Bitwise fundamentals
Bitwise operators first convert their operands to 32-bit integers, perform the specified operation on each bit, and then return a new 32-bit integer. The main bitwise operators are as follows:
- & (AND)
- | (OR)
- ^ (XOR)
- ~ (NOT)
- << (left shift)
- >> (sign-propagating right shift)
- >>> (zero-fill right shift)
let a = 5; // 0101
let b = 3; // 0011
console.log(a & b); // 1 (0001)
console.log(a | b); // 7 (0111)
console.log(a ^ b); // 6 (0110)
console.log(~a); // -6 (bitwise NOT)
Bitwise shifts move bits to the left or right. Left shifts fill with zeros on the right, while right shifts can either preserve the sign bit (>>) or fill with zeros (>>>).
let value = 8; // 0000 1000
console.log(value << 1); // 16 (0001 0000)
console.log(value >> 1); // 4 (0000 0100)
console.log(value >>> 1); // 4 (0000 0100)
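The difference between >> and >>> only shows up with negative numbers, where the vacated bits are either copies of the sign bit or zeros:

```javascript
let neg = -8;            // ...11111000 in 32-bit two's complement
console.log(neg >> 1);   // -4: the sign bit is copied in from the left
console.log(neg >>> 1);  // 2147483644: zeros are filled in from the left
console.log(8 << 2);     // 32: each left shift doubles the value
```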
Practical uses of bitwise operators
Bitwise logic can be used to represent multiple Boolean states within a single number, a technique known as bit masking. Each bit acts as a flag that can be set, cleared, or checked efficiently.
const READ = 1; // 0001
const WRITE = 2; // 0010
const EXEC = 4; // 0100
let permissions = READ | WRITE; // 0011
console.log(permissions & READ); // 1 (bit set, truthy)
console.log(permissions & EXEC); // 0 (bit not set, falsy)
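The same mask constants can be combined with the other bitwise operators to set, clear, and toggle flags in place:

```javascript
const READ = 1, WRITE = 2, EXEC = 4;
let permissions = READ | WRITE; // 0011

permissions |= EXEC;            // set EXEC:    0111
console.log(permissions);       // 7

permissions &= ~WRITE;          // clear WRITE: 0101
console.log(permissions);       // 5

permissions ^= READ;            // toggle READ: 0100
console.log(permissions);       // 4
```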
Exponentiation revisited
The exponentiation operator (**) raises a number to a given power. It is right-associative, meaning that expressions with multiple exponentiations evaluate from right to left.
console.log(2 ** 3); // 8
console.log(2 ** 3 ** 2); // 512 → 2 ** (3 ** 2)
This operator is equivalent to Math.pow() but more concise and readable. It can also be combined with assignment as **= to apply exponentiation in place.
let n = 4;
n **= 3;
console.log(n); // 64
Note that ** can quickly produce values larger than Number.MAX_SAFE_INTEGER, beyond which integers lose precision.
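A quick check shows where that limit lies and where exactness ends (BigInt, shown for contrast, keeps arbitrary precision):

```javascript
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991, i.e. 2 ** 53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);  // true: precision is already lost
console.log(2n ** 53n + 1n);           // 9007199254740993n: BigInt stays exact
```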
Although bitwise operations and exponentiation are rarely used together, both give fine-grained control over numerical behavior. They represent JavaScript’s bridge between high-level convenience and low-level precision.
Operator precedence and grouping
When multiple operators appear in a single expression, JavaScript follows a defined order of evaluation known as operator precedence. Operators with higher precedence execute before those with lower precedence. Understanding this order prevents subtle bugs and makes complex expressions predictable.
If operators share the same precedence level, JavaScript applies associativity rules to determine whether evaluation proceeds from left to right or right to left. Most operators are left-associative, but assignment and exponentiation are notable exceptions since they evaluate from right to left.
Parentheses () always take precedence and can be used to make evaluation order explicit. When in doubt, group sub-expressions for clarity.
Highest to lowest precedence
| Operator | Description |
| --- | --- |
| () | Parentheses for grouping and function calls |
| [], ., ?. | Array indexing, property access, and optional chaining |
| new (with arguments) | Object instantiation when called with parentheses |
| ++, -- (postfix) | Postfix increment and decrement |
| ++, -- (prefix), +, -, ~, !, typeof, void, delete | Prefix increment/decrement, unary plus/minus, bitwise NOT, logical NOT, type queries, and property deletion |
| ** | Exponentiation (right-associative) |
| *, /, % | Multiplication, division, remainder |
| +, - | Addition and subtraction (numeric, or string concatenation) |
| <<, >>, >>> | Bitwise shift operators |
| <, <=, >, >=, in, instanceof | Relational comparisons and membership/type checking |
| ==, !=, ===, !== | Equality and inequality comparisons |
| & | Bitwise AND |
| ^ | Bitwise XOR |
| \| | Bitwise OR |
| && | Logical AND (short-circuiting) |
| \|\| | Logical OR (short-circuiting) |
| ?? | Nullish coalescing (short-circuiting on null or undefined) |
| ? : | Ternary conditional operator |
| =, +=, -=, *=, /=, %=, **=, <<=, >>=, >>>=, &=, \|=, ^=, \|\|=, &&=, ??= | Assignment and compound assignment (right-associative) |
| , | Comma operator (evaluates expressions in sequence, returning the last value) |
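The comma operator at the very bottom is rarely seen outside for loop headers; it evaluates each expression in turn and yields only the last one:

```javascript
let last = (1 + 1, 2 + 2, 3 + 3);
console.log(last); // 6: earlier results are computed but discarded

// Note: ?? may not be mixed with || or && without explicit parentheses;
// a || b ?? c is a SyntaxError, so write (a || b) ?? c instead.
```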
Grouping for clarity
Even though operator precedence determines evaluation order, adding parentheses often makes expressions easier to read and maintain. Parentheses do not change results when used correctly, but they remove ambiguity for readers and tools alike.
// Without grouping
let result = a + b * c;
// With grouping for clarity
let result = (a + b) * c;
Operator precedence and grouping shape how JavaScript interprets every expression. A clear grasp of these rules helps prevent logical errors and ensures that your code executes exactly as you intend.
Chapter 6: Control Flow
Control flow determines the order in which a program’s statements are executed. By default, JavaScript runs code from top to bottom, one statement after another. Control structures interrupt or redirect that flow, allowing programs to make decisions, repeat actions, or exit early when conditions are met.
These constructs (such as if, switch, and looping statements) form the backbone of logical structure in JavaScript. They allow you to describe not just what your code should do, but under what circumstances and how many times it should do it.
In this chapter, you will explore JavaScript’s main flow-control tools: conditional branching with if and switch, iterative looping with while and for variations, and fine-grained loop control using break, continue, and labels. You will also examine the conditional (ternary) operator for concise decision-making within expressions.
Together, these elements let your programs respond dynamically to data, automate repetitive actions, and maintain readable, predictable logic: the hallmarks of effective JavaScript code.
if, else if, else
The if statement is the foundation of decision-making in JavaScript. It evaluates a condition and executes a block of code only if that condition is true. Optional else if and else clauses allow you to test multiple conditions in sequence and define fallback behavior when none of them match.
let temperature = 25;
if (temperature > 30) {
console.log("It’s hot today.");
} else if (temperature >= 20) {
console.log("The weather is mild.");
} else {
console.log("It’s cool outside.");
}
Each condition is tested in order, and only the first true condition runs its corresponding block. If no conditions evaluate to true, the else block (if present) executes by default.
When the body is a single statement, the braces may be omitted:
if (loggedIn)
console.log("Welcome back!");
Although this works, always prefer the explicit, brace-enclosed form:
if (loggedIn) {
console.log("Welcome back!");
}
Nested if statements
When multiple related conditions depend on different levels of logic, if statements can be nested. Each block can contain further decision points, allowing fine-grained control of behavior.
let user = { name: "Robin", role: "admin" };
if (user) {
if (user.role === "admin") {
console.log("Access granted.");
} else {
console.log("Access limited.");
}
}
Deeply nested if statements can quickly become difficult to read. When logic grows complex, consider using switch statements or returning early from functions to simplify control paths.
Falsy conditions
JavaScript treats certain values as falsy. These include false, 0, "", null, undefined, and NaN. Any falsy value causes an if condition to fail, while all other values are treated as truthy.
let name = "";
if (name) {
console.log("Name provided.");
} else {
console.log("No name given.");
}
This behavior allows concise tests but can also introduce subtle bugs if falsy values like 0 or "" are legitimate inputs.
If such values are legitimate inputs, test for them explicitly (for example, if (value !== null && value !== undefined)) instead of relying on truthiness.
The if family of statements remains the most direct way to guide JavaScript’s flow. Clear, well-structured conditions are key to making intent obvious and maintaining logical control throughout your programs.
switch with fallthrough control
The switch statement provides a clean way to handle multiple possible values of a single expression. It compares the result of an expression against a list of case labels and executes the matching block. A default label handles any unmatched cases.
let color = "green";
switch (color) {
case "red":
console.log("Stop");
break;
case "yellow":
console.log("Get ready");
break;
case "green":
console.log("Go");
break;
default:
console.log("Unknown color");
}
Each case is checked in order, and execution stops when a break statement is encountered. If break is omitted, JavaScript continues executing the following cases until it finds one with break or the switch block ends. This is known as fallthrough.
Using fallthrough intentionally
Fallthrough can be useful when multiple cases should execute the same code. Instead of duplicating logic, you can stack case labels together.
let day = "Saturday";
switch (day) {
case "Saturday":
case "Sunday":
console.log("Weekend");
break;
default:
console.log("Weekday");
}
Always place a break after the shared section to prevent unwanted continuation.
Fallthrough hazards
Unintentional fallthrough is a common source of bugs, especially when a break is accidentally omitted. JavaScript’s interpreter will continue executing subsequent cases until a break or return is encountered.
let code = 1;
switch (code) {
case 1:
console.log("One");
case 2:
console.log("Two");
case 3:
console.log("Three");
break;
}
Output:
One
Two
Three
Check your switch cases for missing break statements. Unintended fallthrough can execute multiple blocks and produce confusing output.
Using return in functions
When a switch is used inside a function, you can use return instead of break to exit immediately. This approach is often cleaner and eliminates the risk of fallthrough.
function getMessage(status) {
switch (status) {
case 200:
return "OK";
case 404:
return "Not Found";
case 500:
return "Server Error";
default:
return "Unknown";
}
}
console.log(getMessage(404));
Strict comparison
switch uses strict equality (===) when comparing the target expression with each case label. No type coercion occurs, so types must match exactly.
let value = "1";
switch (value) {
case 1:
console.log("Number one");
break;
case "1":
console.log("String one");
break;
}
This will output String one because "1" === 1 is false.
If you need switch cases to match values flexibly, convert the expression or the cases explicitly before comparison, for example using String() or Number().
The switch statement is ideal for matching discrete values, state codes, or user selections. When used with careful break placement, it produces cleaner, more readable logic than long chains of if...else if statements.
while, do while
Loops allow a block of code to run repeatedly while a condition remains true. The while and do while statements are the simplest looping constructs in JavaScript. Both evaluate a condition before (or after) executing their loop body, making them ideal for scenarios where repetition depends on dynamic data.
The while loop
The while loop checks its condition before each iteration. If the condition is true, the loop body executes; if false, the loop ends immediately.
let count = 0;
while (count < 5) {
console.log("Count is " + count);
count++;
}
This loop prints values from 0 to 4. Because the condition is tested before every iteration, it is possible for the body to never run at all if the condition starts as false.
If nothing ever changes the condition, a while loop can run indefinitely. Always make sure something inside the loop modifies a value that affects the condition.
The do while loop
The do while loop guarantees that its body executes at least once, even if the condition is initially false. This is because the condition is evaluated after each iteration rather than before.
let value = 0;
do {
console.log("Value is " + value);
value++;
} while (value < 3);
Here, the output will include values 0, 1, and 2. Even if value began greater than 3, the loop body would still execute once before stopping.
Use do while when you must run the loop body at least once, such as when reading user input or performing an initial setup before validation.
Breaking or skipping within loops
Both while and do while loops support break and continue for finer control. break exits the loop entirely, while continue skips to the next iteration.
let n = 0;
while (n < 10) {
n++;
if (n === 3) continue; // skip this iteration
if (n === 8) break; // stop the loop
console.log(n);
}
This example skips printing 3 and stops altogether when n reaches 8.
break and continue can make logic concise but should be used sparingly. Overuse can obscure loop behavior and make control flow harder to follow.
The while and do while constructs provide simple, condition-driven repetition. They are best suited for loops where the number of iterations is not known in advance and depends entirely on runtime conditions.
for and for...of
The for loop is one of JavaScript’s most versatile looping constructs. It allows you to define initialization, condition, and iteration expressions all in a single statement. This makes it ideal for loops that execute a specific number of times or iterate through a sequence with predictable bounds.
The classic for loop
The traditional for loop includes three parts inside parentheses, separated by semicolons:
for (initialization; condition; final-expression) {
// loop body
}
Each section controls a different part of the loop:
- Initialization: executed once before the loop starts
- Condition: checked before each iteration; if false, the loop ends
- Final expression: executed at the end of each iteration
for (let i = 0; i < 5; i++) {
console.log("Iteration " + i);
}
This outputs values from 0 to 4. When i reaches 5, the condition i < 5 becomes false and the loop stops.
Loop variables declared with let are block-scoped, meaning they exist only inside the loop. If you use var instead, the variable will remain accessible after the loop finishes.
The for...of loop
Introduced in ES2015, the for...of loop provides a clean, direct way to iterate over iterable objects such as arrays, strings, maps, and sets. It retrieves each value in sequence without using an index.
const colors = ["red", "green", "blue"];
for (const color of colors) {
console.log(color);
}
This loop prints each element in colors. It works with any object that implements the iterable protocol, which includes most built-in collections.
Use for...of when you only need element values, not their indices. It is more readable and less error-prone than managing counters manually.
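When you occasionally do need the index as well, the array's entries() method yields [index, value] pairs that can be destructured directly in the loop head:

```javascript
const colors = ["red", "green", "blue"];
const lines = [];
for (const [index, color] of colors.entries()) {
  lines.push(index + ": " + color);
}
console.log(lines); // [ '0: red', '1: green', '2: blue' ]
```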
Iterating over strings
Because strings are iterable, for...of can loop through each character directly:
for (const ch of "JavaScript") {
console.log(ch);
}
This is often simpler than using an index-based loop like for (let i = 0; i < str.length; i++).
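for...of also iterates by Unicode code point rather than by UTF-16 code unit, so characters outside the Basic Multilingual Plane (such as many emoji) are not split in half:

```javascript
const text = "a😀b";
console.log(text.length);  // 4: the emoji occupies two UTF-16 code units

const chars = [...text];   // spread uses the same iteration protocol as for...of
console.log(chars.length); // 3: one entry per code point
```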
Comparing for, for...of, and for...in
Although similar in appearance, for...in is different: it iterates over property names of an object, not values. It is useful for objects but should not be used with arrays when element order matters.
const obj = { a: 1, b: 2, c: 3 };
for (const key in obj) {
console.log(key + ": " + obj[key]);
}
Use for...in only for plain objects or when enumerating property names. For arrays or other collections, prefer for...of to get the actual values.
break, continue, and labels
JavaScript provides several keywords for fine-grained control of loops and blocks. The break and continue statements alter the normal flow of iteration, while labels (used rarely) allow control to move out of specific named blocks or nested loops.
Exiting a loop early
The break statement stops execution of the nearest enclosing loop or switch immediately, jumping to the statement that follows it. It is often used when a certain condition has been met or when continuing further would be unnecessary.
for (let i = 0; i < 10; i++) {
if (i === 5) {
break; // stop when i reaches 5
}
console.log(i);
}
This prints values from 0 through 4, then exits the loop when i equals 5.
Use break to improve efficiency by halting loops as soon as their purpose is fulfilled, rather than letting unnecessary iterations run.
Skipping an iteration
The continue statement ends the current loop iteration immediately and moves on to the next one. It does not exit the loop completely, just skips the remainder of that iteration’s body.
for (let i = 1; i <= 6; i++) {
if (i % 2 === 0) {
continue; // skip even numbers
}
console.log(i);
}
In this example, only the odd numbers (1, 3, 5) are printed. The continue statement is particularly useful when you need to ignore certain data without breaking the loop entirely.
Take care not to use continue in a way that skips the code responsible for updating loop variables, which can cause an infinite loop.
Labeled statements
JavaScript allows you to assign a label to a statement or block. A label is an identifier followed by a colon. You can then reference that label with break or continue to control which loop or block to affect, which is useful in nested structures.
outerLoop: for (let i = 0; i < 3; i++) {
for (let j = 0; j < 3; j++) {
if (i === j) {
continue outerLoop; // skip to the next outer loop iteration
}
console.log(i, j);
}
}
This skips any pair where i equals j by continuing the outer loop rather than the inner one. Labels can also be used with break to exit multiple nested loops at once.
search:
for (let i = 0; i < 5; i++) {
for (let j = 0; j < 5; j++) {
if (i * j === 6) {
console.log("Found at", i, j);
break search; // exit both loops
}
}
}
In general, break and continue keep your loops efficient and expressive, while labels provide an escape hatch for complex iteration patterns. Used carefully, these tools help you manage control flow cleanly and precisely.
Conditional operator
The conditional operator (? :), often called the ternary operator, is a compact way to perform simple conditional evaluations within an expression. It offers a concise alternative to an if...else statement, returning one of two values depending on a given condition.
let age = 18;
let status = (age >= 18) ? "Adult" : "Minor";
console.log(status); // "Adult"
The syntax consists of three parts:
condition ? expressionIfTrue : expressionIfFalse
First, the condition is evaluated. If it is truthy, the expression before the colon executes; otherwise, the expression after the colon executes. The entire construct is an expression, meaning it produces a value and can be used anywhere a value is expected—such as in assignments, function arguments, or return statements.
Nested conditional expressions
Conditional operators can be chained or nested to handle multiple outcomes, though this can quickly reduce readability. Each additional condition appears as another ? : sequence inside one of the branches.
let score = 72;
let grade = (score >= 90) ? "A"
: (score >= 75) ? "B"
: (score >= 60) ? "C"
: "D";
console.log(grade); // "C"
If a chain grows beyond two or three levels, prefer if...else if statements for clarity.
Using the operator in expressions
Because the conditional operator returns a value, it can be embedded directly inside other expressions, allowing elegant one-liners when appropriate.
function greet(user) {
return "Hello, " + (user ? user.name : "Guest") + "!";
}
console.log(greet({ name: "Robin" })); // "Hello, Robin!"
console.log(greet(null)); // "Hello, Guest!"
This example chooses between user.name and a default string based on whether user is truthy.
The conditional operator’s ability to condense decision-making into a single line makes it a valuable tool for writing expressive, minimal code, especially when used judiciously and with readability in mind.
Chapter 7: Functions
Functions are one of the most important parts of JavaScript. They let you group related code into named blocks that can be called and reused whenever needed. Every serious JavaScript program, no matter how small or large, relies on functions to organize logic and reduce repetition.
Functions in JavaScript are first-class citizens. This means you can store them in variables, pass them to other functions, and even return them as results. They are objects with their own properties and methods, and they form the backbone of modular and functional programming styles.
In this chapter you will explore the many ways to define and use functions. You will see how declarations differ from expressions, how this behaves inside arrow functions, and how to work with parameters, defaults, and rest arguments. You will also learn about return values, closures, and how to design functions that are predictable and free from unwanted side effects.
By the end of this chapter, you will have a clear understanding of how functions shape the structure and behavior of JavaScript programs, and how to use them effectively to build maintainable, reliable code.
Function declarations and expressions
In JavaScript, a function can be defined in two main ways: as a declaration or as an expression. Both create functions, but they behave differently in how and when they become available to the code around them.
Function declarations
A function declaration defines a named function using the function keyword, followed by the function name, a parameter list, and a block of code in braces. It looks like this:
function greet(name) {
console.log('Hello, ' + name + '!');
}
Because declared functions are hoisted, you can call them even before their definition appears in the code. This happens because the JavaScript interpreter loads all function declarations before executing the rest of the script.
sayHi('Robin');
function sayHi(name) {
console.log('Hi, ' + name);
}
Function expressions
A function expression creates a function as part of a larger expression. It may be anonymous (without a name) or named, and it is not hoisted in the same way as declarations. A common pattern is to assign the function to a variable:
const greet = function(name) {
console.log('Hello, ' + name + '!');
};
greet('Robin');
Unlike declarations, function expressions are only available after the line where they are defined. Trying to call them before that point will result in a reference error.
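Making the early call shows the difference concretely; with const, the variable sits in the temporal dead zone until its declaration runs:

```javascript
let caughtEarlyCall = false;
try {
  greet("Robin"); // greet is declared below but not yet initialized
} catch (e) {
  caughtEarlyCall = e instanceof ReferenceError;
}
console.log(caughtEarlyCall); // true

const greet = function(name) {
  console.log("Hello, " + name + "!");
};

greet("Robin"); // "Hello, Robin!" — now the expression has been evaluated
```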
Immediate invocation
A special kind of function expression is the immediately invoked function expression (IIFE). It runs as soon as it is defined, often to create a private scope:
(function() {
console.log('This runs immediately');
})();
IIFEs were widely used before modern block scope with let and const became available. They still appear in legacy code and in libraries that encapsulate logic without polluting the global scope.
Arrow functions and this
Arrow functions provide a shorter, cleaner syntax for writing functions. They were introduced in ES6 and are often used for inline operations, callbacks, and expressions that return a single value.
Basic syntax
An arrow function uses the => operator instead of the function keyword. If there is only one parameter, the parentheses can be omitted, and if the body contains only one expression, the braces and return keyword can also be omitted:
const square = x => x * x;
console.log(square(4)); // 16
For multiple parameters or longer bodies, parentheses and braces are required:
const add = (a, b) => {
const result = a + b;
return result;
};
No binding of this
One of the key differences between arrow functions and regular functions is that arrow functions do not create their own this binding. Instead, they inherit this from the enclosing scope (known as lexical this binding). This makes them ideal for callbacks or methods that rely on the surrounding context.
const counter = {
count: 0,
start() {
setInterval(() => {
this.count++;
console.log(this.count);
}, 1000);
}
};
counter.start();
In the example above, the arrow function inside setInterval uses the same this value as the surrounding start method. If a regular function had been used instead, this would refer to the global object (or be undefined in strict mode).
Calling an arrow function with new will throw a TypeError because arrow functions do not have their own prototype property.
When not to use arrow functions
Although concise and convenient, arrow functions are not always appropriate. They should be avoided when a function needs its own this, such as in class constructors or object methods that depend on method binding.
const user = {
name: 'Robin',
greet: () => {
console.log('Hello, ' + this.name);
}
};
user.greet(); // "Hello, undefined"
Here, this does not refer to user because the arrow function inherits this from the outer (global) scope. For methods that rely on this, use a regular function instead.
const user = {
name: 'Robin',
greet() {
console.log('Hello, ' + this.name);
}
};
user.greet(); // "Hello, Robin"
Parameters, defaults, and rest
Functions in JavaScript can accept any number of arguments, regardless of how many parameters are declared. The language is flexible about arguments: extra ones are ignored unless you capture them, and missing ones are simply undefined.
Basic parameters
Parameters are the named variables listed in a function’s definition. When a function is called, the arguments you provide are assigned to those parameters in order:
function greet(first, last) {
console.log('Hello, ' + first + ' ' + last);
}
greet('Robin', 'Nixon'); // Hello, Robin Nixon
If fewer arguments are passed than there are parameters, the remaining parameters become undefined by default.
greet('Robin'); // Hello, Robin undefined
When arguments may be missing, check for undefined values or supply defaults before using them.
Default parameters
You can assign default values to parameters directly in the function definition. If no argument (or undefined) is provided for that parameter, the default will be used:
function greet(name = 'Guest') {
console.log('Welcome, ' + name);
}
greet(); // Welcome, Guest
greet('Robin'); // Welcome, Robin
Default parameters can also depend on earlier parameters:
function repeat(str, times = 2) {
console.log(str.repeat(times));
}
repeat('Hi'); // HiHi
repeat('Hi', 3); // HiHiHi
Defaults apply only when the argument is missing or explicitly undefined. Passing null will override the default with a null value.
The arguments object
Before modern syntax, JavaScript functions could access their arguments through the special arguments object. It is array-like (not a true array) and contains all arguments passed to the function:
function showAll() {
console.log(arguments.length + ' arguments:');
for (let i = 0; i < arguments.length; i++) {
console.log(arguments[i]);
}
}
showAll('a', 'b', 'c');
The arguments object still works in regular functions, but it is not available inside arrow functions. Modern JavaScript instead uses the rest parameter syntax.
Rest parameters
Rest parameters gather remaining arguments into an array. They are declared with three dots (...) before the parameter name, and they make handling variable argument lists much simpler:
function sum(...nums) {
return nums.reduce((a, b) => a + b, 0);
}
console.log(sum(1, 2, 3, 4)); // 10
Rest parameters must appear last in the parameter list, and only one rest parameter is allowed per function.
Rest syntax uses the same three dots as the spread operator (...), but rest gathers values while spread expands them.
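The two uses can be seen side by side in this sketch, where a hypothetical tag() function gathers with rest and the call site expands with spread:

```javascript
function tag(label, ...values) {    // rest: gathers trailing arguments
  return label + ': ' + values.join(', ');
}
const nums = [1, 2, 3];
console.log(tag('data', ...nums));  // spread: expands the array
// data: 1, 2, 3
```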
Return values and early returns
A function’s return statement determines what value it produces when called. If a function does not explicitly return anything, it automatically returns undefined. The return keyword can be used to pass back any value or expression.
Returning values
The simplest use of return is to end a function and hand a result back to its caller:
function add(a, b) {
return a + b;
}
const result = add(2, 3);
console.log(result); // 5
Once a return statement is reached, the function stops executing immediately, and no further statements in its body are run.
function demo() {
return 'done';
console.log('This never runs');
}
Arrow functions with a single-expression body return that value implicitly, without needing the return keyword:
const square = x => x * x;
console.log(square(4)); // 16
Returning objects
When returning an object literal from an arrow function, you must wrap it in parentheses so that it is not confused with a function block:
const makeUser = (name, age) => ({ name, age });
console.log(makeUser('Robin', 55));
// { name: 'Robin', age: 55 }
Early returns
Sometimes you only want to continue a function if certain conditions are met. Using early return statements helps simplify logic and avoid deep nesting.
function divide(a, b) {
  if (b === 0) {
    console.log('Cannot divide by zero');
    return;
  }
  console.log(a / b);
}
divide(10, 2); // 5
divide(10, 0); // Cannot divide by zero
This pattern improves readability by handling invalid or edge cases first, then proceeding only when the main logic applies.
Avoid scattering return statements through overly complex functions. While early returns can simplify flow, too many can make the code harder to follow.
Returning multiple values
JavaScript functions can only return one value, but you can bundle multiple values together in an array or object:
function getStats(a, b) {
  return {
    sum: a + b,
    product: a * b
  };
}
const { sum, product } = getStats(3, 4);
console.log(sum, product); // 7 12
Returning structured data makes your functions more flexible and keeps related results together.
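An array works just as well when the order of results matters more than their names; this sketch pairs an array return with array destructuring:

```javascript
function minMax(values) {
  // Return a two-element array: smallest first, largest second.
  return [Math.min(...values), Math.max(...values)];
}
const [min, max] = minMax([3, 7, 1]);
console.log(min, max); // 1 7
```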
Closures in practice
A closure is one of JavaScript’s most powerful and distinctive features. It occurs when an inner function remembers and can access variables from its outer function, even after that outer function has finished running. This allows data to be kept private and preserved between function calls.
How closures work
When a function is defined inside another, it forms a lexical scope chain. The inner function has access not only to its own variables, but also to those of any outer functions and the global scope. This relationship persists for as long as the inner function exists.
function makeCounter() {
  let count = 0;
  return function() {
    count++;
    return count;
  };
}
const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
console.log(counter()); // 3
In this example, the inner function maintains a reference to count, even though makeCounter has already completed execution. The variable lives on through the closure.
Using closures to encapsulate state
Closures are often used to encapsulate private data or configuration. Because the outer variables are not exposed directly, they can only be modified through the inner function’s logic:
function createAccount(initialBalance) {
  let balance = initialBalance;
  return {
    deposit(amount) {
      balance += amount;
      console.log('Deposited ' + amount);
    },
    withdraw(amount) {
      if (amount > balance) {
        console.log('Insufficient funds');
        return;
      }
      balance -= amount;
      console.log('Withdrew ' + amount);
    },
    getBalance() {
      return balance;
    }
  };
}
const account = createAccount(100);
account.deposit(50);
account.withdraw(25);
console.log(account.getBalance()); // 125
Here, balance is private. It cannot be accessed directly from outside the returned object, but it persists through the closure inside the object’s methods.
Closures inside loops
Before let and const were introduced, closures in loops were a common source of confusion because var was function-scoped, not block-scoped. Using let fixes this by binding a new variable for each loop iteration:
for (let i = 1; i <= 3; i++) {
  setTimeout(() => {
    console.log('Iteration ' + i);
  }, i * 1000);
}
This correctly prints 1, 2, and 3 in sequence, one per second. Using var instead would print 4 three times, because every callback would share the same variable, which already holds its final value (the one that ended the loop) by the time the timers fire.
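To see the var behaviour without waiting on timers, this sketch stores the callbacks and invokes them after the loop has finished:

```javascript
const callbacks = [];
for (var i = 1; i <= 3; i++) {
  callbacks.push(() => 'Iteration ' + i);
}
// Every callback closes over the one shared i, which the final loop
// test left at 4, so all three report the same value.
console.log(callbacks.map(cb => cb()));
// ['Iteration 4', 'Iteration 4', 'Iteration 4']
```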
Practical closure patterns
Closures underpin many useful patterns in JavaScript, such as module creation, function factories, and memoization. For example, this function remembers previous results to speed up repeated calls:
function memoize(fn) {
  const cache = {};
  return function(x) {
    if (x in cache) {
      console.log('Returning cached result');
      return cache[x];
    }
    const result = fn(x);
    cache[x] = result;
    return result;
  };
}
const square = memoize(x => x * x);
console.log(square(4)); // 16
console.log(square(4)); // Returning cached result
By retaining access to cache between calls, the closure allows results to persist without polluting the global scope. This technique is common in optimization and library design.
Pure functions and side effects
In programming, a pure function is one that always produces the same output for the same input and does not modify any state outside itself. Pure functions are predictable, testable, and central to functional programming techniques. In contrast, a side effect occurs when a function changes something in the outside world, such as a variable, file, or DOM element.
What makes a function pure
A pure function depends only on its parameters and does not rely on or alter external data. Given the same input, it will always return the same output:
function add(a, b) {
return a + b;
}
console.log(add(2, 3)); // 5
console.log(add(2, 3)); // 5 (always the same)
Because it affects no external state and uses only its inputs, add() is completely pure.
Examples of side effects
A side effect occurs when a function interacts with or modifies something outside its scope. Common examples include writing to a file, changing a global variable, or updating the user interface:
let counter = 0;
function increment() {
counter++;
console.log(counter);
}
increment(); // modifies global variable
Even though this function works correctly, it is not pure because it changes counter, which exists outside the function’s local scope.
Avoiding unintended side effects
Side effects are sometimes necessary, especially in user interfaces and applications that interact with the real world. However, where possible, functions should be designed to minimize them. One common approach is to return a new value instead of modifying an existing one:
function addItem(list, item) {
return [...list, item];
}
const original = ['apple', 'banana'];
const updated = addItem(original, 'cherry');
console.log(original); // ['apple', 'banana']
console.log(updated); // ['apple', 'banana', 'cherry']
This version returns a new array rather than altering the original, making the function pure and the data flow predictable.
When side effects are acceptable
Not all side effects are bad. Some are required for a program to do anything useful, such as logging information, updating the DOM, or saving data to a database. The key is to manage and isolate them so they are intentional and predictable.
function logMessage(message) {
console.log('[LOG] ' + message);
}
This function has a clear and limited side effect (writing to the console), which is acceptable because it is part of its defined purpose.
Balancing purity and practicality
Completely pure programs are rare, but striving for purity within individual components can lead to cleaner, more reliable code. In many cases, a hybrid approach works best: keep most logic pure and centralize side effects in well-defined places, such as event handlers or API layers.
// Pure function
function calculateTotal(items) {
return items.reduce((sum, item) => sum + item.price, 0);
}
// Controlled side effect
function displayTotal(items) {
const total = calculateTotal(items);
document.querySelector('#total').textContent = total.toFixed(2);
}
Here, calculateTotal() is pure and reusable, while displayTotal() performs the necessary side effect of updating the web page. Keeping these roles separate makes the program both predictable and practical.
Chapter 8: Objects and Prototypes
Objects are the foundation of JavaScript. Almost everything in the language is built on objects, from arrays and functions to dates and regular expressions. An object is a collection of key–value pairs, where each key is a string (or symbol) and each value can be any data type, including another object or function.
JavaScript’s object system is unique among major languages because it is prototype-based rather than class-based at its core. Although modern syntax allows classes for convenience, under the surface everything still works through prototypes and delegation along a chain of linked objects.
This chapter explores how objects are created, extended, and connected through prototypes. You will learn how to use object literals, control property behavior with descriptors, and understand how prototype delegation defines inheritance in JavaScript. You will also examine how this works within object methods, how to create new objects from existing ones, and how JSON represents data structures in a portable text format.
By mastering objects and prototypes, you gain a deeper understanding of how JavaScript actually works under the hood, allowing you to design data structures, manage inheritance, and build maintainable, extensible programs with confidence.
Object literals and property access
Objects in JavaScript are usually created using object literals—a simple, readable syntax that defines key–value pairs inside curly braces. Each key is called a property name, and each value can be of any type, including another object, array, or function.
Creating objects with literals
An object literal is written as a list of property definitions inside braces:
const user = {
name: 'Robin',
age: 55,
isAuthor: true
};
You can create an empty object with {} or use new Object(), though the literal form is preferred for its clarity and conciseness.
const empty = {};
const another = new Object();
Accessing properties
JavaScript provides two main ways to access object properties: dot notation and bracket notation. Dot notation is the most common and is used when the property name is a valid identifier:
console.log(user.name); // Robin
console.log(user.age); // 55
Bracket notation is required when the property name is dynamic (stored in a variable) or contains spaces or other special characters:
const key = 'isAuthor';
console.log(user[key]); // true
const data = { 'user name': 'Robin' };
console.log(data['user name']); // Robin
Adding, changing, and deleting properties
Objects are mutable, so you can freely add, modify, or remove properties after creation:
user.country = 'UK';
user.age = 56;
delete user.isAuthor;
console.log(user);
// { name: 'Robin', age: 56, country: 'UK' }
The delete operator removes a property entirely; alternatively, you can set a property to null or undefined if you only need to mark it as unused.
Nested objects and property chains
Objects can contain other objects, creating nested structures for representing more complex data:
const book = {
  title: 'This is JavaScript',
  author: {
    first: 'Robin',
    last: 'Nixon'
  }
};
console.log(book.author.first); // Robin
Attempting to access a property on undefined or null will cause an error. The optional chaining operator (?.) can safely check nested properties without throwing:
console.log(book.publisher?.name); // undefined, no error
Optional chaining (?.) and nullish coalescing (??) make working with nested objects safer and more expressive. Together they help avoid long, repetitive checks for undefined or null.
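The two operators combine naturally, as in this self-contained sketch using a book object like the one above:

```javascript
const book = {
  title: 'This is JavaScript',
  author: { first: 'Robin', last: 'Nixon' }
};
// Optional chaining stops safely at the missing branch...
const publisher = book.publisher?.name;  // undefined, no error
// ...and nullish coalescing supplies a fallback for null/undefined.
const label = book.publisher?.name ?? 'Unknown publisher';
console.log(publisher, label); // undefined Unknown publisher
```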
JavaScript Descriptors
Every property in a JavaScript object has an associated set of attributes known as a property descriptor. These descriptors control how the property behaves, such as whether it can be changed, listed in loops, or deleted. Understanding them gives you precise control over object behavior and immutability.
Viewing property descriptors
You can inspect the descriptor of a property using Object.getOwnPropertyDescriptor():
const user = { name: 'Robin' };
const descriptor = Object.getOwnPropertyDescriptor(user, 'name');
console.log(descriptor);
This outputs an object describing the property:
{
  value: 'Robin',
  writable: true,
  enumerable: true,
  configurable: true
}
These three Boolean attributes (writable, enumerable, and configurable) define the property’s mutability, visibility, and flexibility.
writable
The writable flag determines whether the property’s value can be changed. If it is set to false, any attempt to modify the value will silently fail in non-strict mode or throw an error in strict mode:
const user = {};
Object.defineProperty(user, 'id', {
  value: 1001,
  writable: false
});
user.id = 2002; // silently ignored (or error in strict mode)
console.log(user.id); // 1001
Note that writable: false only affects the property's value. It does not prevent deleting or reconfiguring the property unless configurable is also false.
enumerable
The enumerable flag determines whether a property appears in loops such as for...in and in methods like Object.keys() or JSON.stringify(). Non-enumerable properties remain accessible but are hidden from iteration:
const user = {};
Object.defineProperty(user, 'secret', {
  value: 'classified',
  enumerable: false
});
user.visible = 'shown';
for (const key in user) {
console.log(key); // only "visible" is shown
}
console.log(Object.keys(user)); // ['visible']
console.log(user.secret); // accessible directly
Most built-in object properties (such as toString()) are non-enumerable. This is why looping through an object does not list them.
configurable
The configurable flag determines whether a property can be deleted or redefined. Once configurable is set to false, it cannot be changed back, and the property cannot be removed or reconfigured:
const user = {};
Object.defineProperty(user, 'role', {
  value: 'admin',
  configurable: false
});
delete user.role; // has no effect
Object.defineProperty(user, 'role', { value: 'editor' }); // throws error
Once configurable is false, even writable and enumerable cannot be altered (the single exception is that writable may still be switched from true to false). Use it only when you are certain a property should remain fixed.
Defining multiple properties at once
You can define several properties in one operation using Object.defineProperties(). This is useful when setting detailed descriptors during object initialization:
const book = {};
Object.defineProperties(book, {
  title: {
    value: 'This is JavaScript',
    writable: false
  },
  author: {
    value: 'Robin Nixon',
    enumerable: true
  }
});
console.log(Object.getOwnPropertyDescriptors(book));
Descriptors allow you to design objects with carefully controlled interfaces, making them ideal for secure APIs, library design, or immutable configurations.
Prototype chain and delegation
Every JavaScript object has an internal link to another object known as its prototype. When you try to access a property that does not exist on an object, JavaScript automatically looks for it up the prototype chain. This mechanism is called delegation and it forms the basis of JavaScript’s inheritance model.
The prototype link
You can view or access an object’s prototype using Object.getPrototypeOf(), and set it explicitly using Object.setPrototypeOf() (though doing so dynamically can affect performance):
const parent = { kind: 'parent' };
const child = { name: 'Robin' };
Object.setPrototypeOf(child, parent);
console.log(child.kind); // inherited from parent
console.log(Object.getPrototypeOf(child) === parent); // true
In this example, the child object delegates missing property lookups to its prototype (parent), forming a simple two-link chain.
You may also encounter __proto__. It is a legacy accessor for the prototype, equivalent to Object.getPrototypeOf(), but should not be used in production code.
Prototype delegation in action
When JavaScript encounters a property access like child.kind, it checks the object itself first. If the property is not found, it follows the prototype link and continues searching up the chain until it either finds the property or reaches the top of the chain at Object.prototype.
const grandparent = { species: 'human' };
const parent = Object.create(grandparent);
parent.role = 'developer';
const child = Object.create(parent);
child.name = 'Robin';
console.log(child.species); // found on grandparent
console.log(child.role); // found on parent
console.log(child.name); // found on child
The lookup sequence continues up the chain until no prototype remains, at which point JavaScript returns undefined.
The end of the chain
All standard objects in JavaScript ultimately inherit from Object.prototype, which provides fundamental methods like toString(), valueOf(), and hasOwnProperty():
console.log(Object.getPrototypeOf({}) === Object.prototype); // true
console.log(Object.getPrototypeOf(Object.prototype)); // null
This means that every object has access to those shared methods unless they are shadowed (redefined) further down the chain.
Be careful when shadowing inherited methods such as toString or constructor. Overriding them can cause unexpected results or break built-in behavior.
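Shadowing is sometimes deliberate, of course; this sketch overrides toString() for one object to control how it converts to a string:

```javascript
const point = {
  x: 1,
  y: 2,
  // Shadows Object.prototype.toString for this object only.
  toString() {
    return '(' + this.x + ', ' + this.y + ')';
  }
};
console.log(String(point)); // (1, 2)
console.log('' + point);    // (1, 2)
```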
Creating custom prototypes
The most common way to create objects with a specific prototype is using Object.create(). It allows you to define a prototype and optionally provide property descriptors at creation time:
const animal = {
  speak() {
    console.log('Generic sound');
  }
};
const dog = Object.create(animal);
dog.speak(); // Generic sound
dog.speak = function() {
console.log('Woof!');
};
dog.speak(); // Woof!
In this case, dog delegates to animal for any missing properties, but can override them locally when needed.
How delegation differs from class inheritance
Unlike class-based inheritance found in languages such as Java or C++, JavaScript’s prototype system uses delegation rather than copying. No properties or methods are duplicated since objects simply link to one another dynamically.
const base = { greet() { console.log('Hello'); } };
const derived = Object.create(base);
derived.greet(); // Hello
base.greet = function() { console.log('Hi'); };
derived.greet(); // Hi (delegation reflects the change)
Because delegation is dynamic, changes to a prototype are immediately visible to all objects that inherit from it. This flexibility makes prototypes both powerful and potentially hazardous if modified carelessly at runtime.
Modern class syntax (class and extends) is built on top of this same prototype mechanism. Classes simply provide a cleaner, more structured way to define prototype chains.
this binding rules
In JavaScript, the value of this is determined by how a function is called, not where it is defined. Understanding these binding rules is crucial when working with object methods, event handlers, and constructors, because this can change depending on the calling context.
Default binding
When a function is called in the global scope or as a plain function (not attached to an object), this refers to the global object (window in browsers, global in Node.js). In strict mode, however, this becomes undefined instead.
function showThis() {
console.log(this);
}
showThis(); // window (or undefined in strict mode)
Strict mode removes this accidental fallback; only non-strict code defaults this to the global object.
Implicit binding (object methods)
When a function is called as a property of an object, this is automatically set to the object before the dot:
const user = {
  name: 'Robin',
  greet() {
    console.log('Hello, ' + this.name);
  }
};
user.greet(); // Hello, Robin
However, if the method is extracted and called separately, the implicit binding is lost:
const greetFn = user.greet;
greetFn(); // TypeError in strict mode; otherwise this is the global object
Explicit binding with call, apply, and bind
You can manually control what this refers to by using Function.prototype.call(), apply(), or bind(). These methods allow precise context management:
function greet() {
console.log('Hello, ' + this.name);
}
const user = { name: 'Robin' };
const guest = { name: 'Alex' };
greet.call(user); // Hello, Robin
greet.apply(guest); // Hello, Alex
const boundGreet = greet.bind(user);
boundGreet(); // Hello, Robin
call() and apply() invoke the function immediately, while bind() returns a new function with this permanently set to the specified value.
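bind() can also preset leading arguments in addition to this, giving a partially applied function (a sketch; the introduce() function is hypothetical):

```javascript
function introduce(greeting, punctuation) {
  return greeting + ', ' + this.name + punctuation;
}
const user = { name: 'Robin' };
// Both `this` and the first argument are fixed in advance.
const hello = introduce.bind(user, 'Hello');
console.log(hello('!')); // Hello, Robin!
```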
Constructor binding
When a function is used with the new keyword, it acts as a constructor. A new object is automatically created and assigned to this inside the function:
function Person(name) {
this.name = name;
}
const user = new Person('Robin');
console.log(user.name); // Robin
In this case, this refers to the newly created instance. If the constructor explicitly returns an object, that object replaces the default this.
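The replacement rule applies only to object returns; a returned primitive is ignored, as this sketch shows:

```javascript
function Wrapped(name) {
  this.name = name;
  return { wrapped: name };  // object return replaces `this`
}
function Plain(name) {
  this.name = name;
  return 42;                 // primitive return is ignored
}
console.log(new Wrapped('Robin'));    // { wrapped: 'Robin' }
console.log(new Plain('Robin').name); // Robin
```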
Lexical binding in arrow functions
Arrow functions behave differently from regular functions because they do not have their own this. Instead, they inherit this from the surrounding scope (lexical binding):
const counter = {
  value: 0,
  start() {
    setInterval(() => {
      this.value++;
      console.log(this.value);
    }, 1000);
  }
};
counter.start();
Here, the arrow function inside setInterval uses the same this as the enclosing start() method, so it correctly refers to counter.
An arrow function's this value cannot be changed using call(), apply(), or bind().
Binding precedence summary
When multiple binding rules could apply, JavaScript follows a clear order of precedence:
- new binding (highest priority)
- explicit binding (call, apply, bind)
- implicit binding (object method)
- default binding (global or undefined)
Arrow functions bypass this order because they always use lexical binding from their defining scope.
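The top of that order can be demonstrated directly: new overrides even a bind()-supplied context, as in this sketch:

```javascript
function Person(name) {
  this.name = name;
}
const Bound = Person.bind({ name: 'ignored' });
const p = new Bound('Robin');
// `new` wins: this is the fresh instance, not the bound object.
console.log(p.name);              // Robin
console.log(p instanceof Person); // true
```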
Keeping these rules in mind ensures that this always refers to what you expect.
Creating and assigning objects
JavaScript provides several built-in methods for creating and combining objects. Two of the most important are Object.create() and Object.assign(). The first lets you create an object with a specific prototype, while the second copies properties from one or more source objects into a target object.
Creating objects with Object.create()
Object.create() creates a new object whose internal prototype is set to a given object. This allows precise control over inheritance without using constructor functions or classes:
const animal = {
  speak() {
    console.log('Generic sound');
  }
};
const dog = Object.create(animal);
dog.bark = function() {
console.log('Woof!');
};
dog.speak(); // Generic sound (from prototype)
dog.bark(); // Woof!
Here, dog inherits from animal. Any properties not found on dog are delegated up the prototype chain to animal.
You can also pass a second argument to Object.create() containing property descriptors, allowing you to define properties with full control from the start:
const cat = Object.create(animal, {
  sound: {
    value: 'Meow',
    writable: false
  }
});
console.log(cat.sound); // Meow
cat.sound = 'Purr'; // Ignored (not writable)
Copying properties with Object.assign()
Object.assign() copies all enumerable own properties from one or more source objects into a target object. It returns the target object after merging:
const defaults = { theme: 'light', fontSize: 14 };
const settings = { theme: 'dark' };
const finalConfig = Object.assign({}, defaults, settings);
console.log(finalConfig); // { theme: 'dark', fontSize: 14 }
If multiple sources contain the same key, later ones overwrite earlier values. Non-enumerable properties are ignored, and property descriptors are not preserved (only values are copied).
Be aware that Object.assign() performs a shallow copy. If a property holds a nested object, only the reference is copied, not a deep clone of its contents.
const original = { info: { name: 'Robin' } };
const copy = Object.assign({}, original);
copy.info.name = 'Alex';
console.log(original.info.name); // Alex (same reference)
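When a true deep copy is required, modern environments (current browsers and Node.js 17+) provide structuredClone(), which duplicates nested objects as well:

```javascript
const source = { info: { name: 'Robin' } };
const deep = structuredClone(source);  // clones nested objects too
deep.info.name = 'Alex';
console.log(source.info.name); // Robin (original untouched)
```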
Combining patterns
Object.create() and Object.assign() can be used together to build objects with inherited behavior and custom overrides:
const baseUser = {
  role: 'user',
  describe() {
    console.log(this.name + ' (' + this.role + ')');
  }
};
const admin = Object.create(baseUser);
Object.assign(admin, { name: 'Robin', role: 'admin' });
admin.describe(); // Robin (admin)
This pattern is lightweight and flexible. It avoids the need for classes or constructors while still providing inheritance and customization through simple composition.
Object.create() and Object.assign() form the foundation of many modern design patterns, such as mixins and prototype-based factories. They offer a clean, low-level way to model relationships and behavior in JavaScript.
JSON as a data format
JSON (JavaScript Object Notation) is a lightweight, text-based format for storing and transmitting data. It is based on JavaScript object syntax, but it is used independently of the language. JSON has become a universal standard for data exchange between applications, APIs, and databases.
Structure of JSON
JSON represents data as key–value pairs enclosed in curly braces, much like a JavaScript object literal. Keys must always be strings wrapped in double quotes, and values can be strings, numbers, booleans, arrays, objects, or null:
{
  "name": "Robin",
  "age": 55,
  "isAuthor": true,
  "books": ["This is Python", "This is JavaScript"],
  "details": { "country": "UK" }
}
Although JSON looks like JavaScript, it is purely data: it cannot include functions, comments, or special values such as undefined.
Converting objects to JSON
To convert a JavaScript object into a JSON string, use JSON.stringify(). This is commonly done when sending data over a network or saving it to a file:
const user = {
name: 'Robin',
age: 55,
active: true
};
const json = JSON.stringify(user);
console.log(json); // {"name":"Robin","age":55,"active":true}
You can also control how objects are stringified by providing a replacer function or an array of property names, as well as a number of spaces for readable formatting:
console.log(JSON.stringify(user, null, 2));
Use JSON.stringify(object, null, 2) for a neatly formatted output.
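As a sketch of the replacer in action, an array of names acts as a whitelist, which is handy for omitting sensitive fields:

```javascript
const account = { name: 'Robin', age: 55, password: 'secret' };
// Only the listed keys are serialized; password never appears.
console.log(JSON.stringify(account, ['name', 'age']));
// {"name":"Robin","age":55}
```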
Parsing JSON back into objects
To convert a JSON string back into a JavaScript object, use JSON.parse():
const text = '{"name":"Robin","age":55}';
const obj = JSON.parse(text);
console.log(obj.name); // Robin
Parsing invalid JSON will throw a SyntaxError, so it is good practice to wrap JSON.parse() in a try...catch block when handling data from unreliable sources.
try {
  const data = JSON.parse(invalidText);
} catch (e) {
  console.error('Invalid JSON data');
}
JSON and data interchange
Because JSON is language-independent, it is supported by almost every programming environment (and is even a useful way to convey complex instructions to generative AI). APIs and web services typically send and receive data in JSON format, which can easily be parsed or generated by both client and server applications.
// Example of sending JSON data with fetch
fetch('/api/user', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Robin', active: true })
});
JSON’s simplicity and portability have made it the de facto standard for configuration files, REST APIs, and structured data storage in both browsers and servers.
JSON bridges the gap between JavaScript objects and universal data exchange formats. Using JSON.stringify() and JSON.parse(), you can serialize and deserialize data with ease, enabling communication across systems, services, and platforms in a consistent, language-neutral way.
Chapter 9: JavaScript Classes
Classes in JavaScript provide a cleaner, more structured way to work with objects and inheritance. They build on the existing prototype system, but present it in a syntax that is easier to read and maintain. A class defines the blueprint for creating objects, bundling together data (fields) and behavior (methods) within a single structure.
Although class syntax was introduced in ECMAScript 2015 (ES6), it does not replace JavaScript's prototype-based system underneath. Instead, it offers syntactic sugar that allows you to declare constructors, inheritance chains, and methods in a way that will feel familiar to programmers coming from languages such as Java, C++, or Python.
This chapter introduces the full class syntax and demonstrates how it integrates with existing object behavior. You will learn how to declare and instantiate classes, define fields and accessors, extend other classes, and use static members for shared functionality. Together these features allow you to design clear, reusable structures for your programs.
Class syntax and constructors
The class keyword defines a template for creating objects with shared structure and behavior. Inside a class body, you can declare a constructor() method, which runs automatically whenever a new instance is created using the new keyword. The constructor is typically used to initialize instance fields and set up the object’s initial state.
Here is a simple example that defines a class and creates an instance of it:
class Person {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
  greet() {
    console.log(`Hello, my name is ${this.name}`);
  }
}
const robin = new Person('Robin', 42);
robin.greet(); // Hello, my name is Robin
The constructor() is special: each class can have only one. If you attempt to declare multiple constructors, JavaScript will throw a syntax error. However, you can use default parameters or conditional logic to handle different initialization scenarios within the same constructor.
If you do not declare a constructor(), JavaScript automatically provides an empty one that takes no arguments and simply calls super() if the class extends another.
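Here is a sketch of covering several initialization patterns with the single allowed constructor, using default parameters (the Point class is illustrative):

```javascript
class Point {
  // Defaults let one constructor handle zero, one, or two arguments.
  constructor(x = 0, y = x) {
    this.x = x;
    this.y = y;
  }
}
console.log(new Point());     // Point { x: 0, y: 0 }
console.log(new Point(5));    // Point { x: 5, y: 5 }
console.log(new Point(2, 7)); // Point { x: 2, y: 7 }
```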
Class declarations are not hoisted like functions. You must define the class before you can use it, or you will encounter a ReferenceError. This ensures that class definitions behave predictably and cannot be accessed before declaration.
// This will cause an error
const p = new Person('Alex');
class Person {
  constructor(name) {
    this.name = name;
  }
}
Classes can also be defined as expressions and assigned to variables. This is known as a class expression and can be useful when creating classes dynamically or conditionally:
const Animal = class {
  constructor(type) {
    this.type = type;
  }
  speak() {
    console.log(`${this.type} makes a sound`);
  }
};
const dog = new Animal('Dog');
dog.speak(); // Dog makes a sound
Whether you use a class declaration or expression, both forms are functionally equivalent. The difference is mainly one of style and flexibility.
Fields, getters, and setters
Classes can define not only methods, but also fields (properties declared directly inside the class body) as well as getters and setters that control how values are accessed and updated. These features help you encapsulate data and provide a clear interface for working with class instances.
Defining instance fields
Instance fields belong to each object created from a class. They can be declared directly in the class body, outside of the constructor. This makes code shorter and clearer, especially when combined with initial default values.
class Counter {
  count = 0;
  increment() {
    this.count++;
  }
}
const c = new Counter();
c.increment();
console.log(c.count); // 1
In older JavaScript versions, you had to assign such properties manually inside the constructor. Field declarations now provide a concise, modern alternative while still allowing you to override or extend behavior as needed.
Field initializers run as each instance is created, so they can reference this safely or be assigned computed values.
Private fields
Private fields are declared using a leading hash sign (#) and are accessible only within the class where they are defined. They cannot be read, written, or deleted from outside the class.
class User {
  #password;
  constructor(name, password) {
    this.name = name;
    this.#password = password;
  }
  checkPassword(value) {
    return value === this.#password;
  }
}
const u = new User('Alice', 'secret');
console.log(u.checkPassword('secret')); // true
console.log(u.#password); // SyntaxError
Getters and setters
Getters and setters are special methods that define how a property is read or written. They allow you to add logic around property access, such as validation or computed values, while still using normal property syntax.
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  get area() {
    return this.width * this.height;
  }
  set area(value) {
    throw new Error('Area is read-only');
  }
}
const r = new Rectangle(4, 5);
console.log(r.area); // 20
r.area = 30; // Error: Area is read-only
When a getter or setter is defined, it takes the place of a normal property with the same name. This makes it possible to design flexible APIs that act like fields but are backed by logic.
Combining fields and accessors
Fields, getters, and setters can be freely mixed within the same class. A typical pattern uses private fields for data storage and public accessors for controlled access:
class Product {
#price = 0;
get price() {
return this.#price;
}
set price(value) {
if (value < 0) throw new Error('Price must be positive');
this.#price = value;
}
}
const item = new Product();
item.price = 9.99;
console.log(item.price); // 9.99
This pattern provides a clean balance between encapsulation and ease of use. The external interface appears simple, while internal logic remains safely protected inside the class definition.
Inheritance with extends and super
Inheritance allows one class to build upon another, reusing and extending its behavior. In JavaScript, this is done using the extends keyword to create a subclass, and the super keyword to call the constructor or methods of the parent class. Together, they provide a powerful and expressive way to organize related classes in a hierarchy.
Creating a subclass with extends
The extends keyword establishes a prototype chain between the subclass and its parent. This means that all methods and fields of the parent class are available to instances of the subclass, unless explicitly overridden.
class Animal {
constructor(name) {
this.name = name;
}
speak() {
console.log(`${this.name} makes a sound`);
}
}
class Dog extends Animal {
speak() {
console.log(`${this.name} barks`);
}
}
const d = new Dog('Rex');
d.speak(); // Rex barks
In this example, Dog inherits from Animal. It overrides the speak() method but still retains access to other methods defined on Animal.
Using super() in constructors
If a subclass defines its own constructor(), it must call super() before accessing this. The super() call invokes the parent class’s constructor, allowing inherited properties to initialize properly.
class Vehicle {
constructor(type) {
this.type = type;
}
}
class Car extends Vehicle {
constructor(type, brand) {
super(type); // must call before using 'this'
this.brand = brand;
}
info() {
console.log(`${this.brand} is a ${this.type}`);
}
}
const car = new Car('sedan', 'Toyota');
car.info(); // Toyota is a sedan
You must call super() before referencing this in a subclass constructor. Failing to do so results in a ReferenceError.
Calling parent methods with super
The super keyword can also be used within methods to access the parent class’s implementation of a function. This is useful when you want to extend the parent behavior rather than completely replace it.
class Shape {
describe() {
console.log('This is a shape');
}
}
class Circle extends Shape {
describe() {
super.describe(); // call parent method
console.log('Specifically, a circle');
}
}
const c = new Circle();
c.describe();
// This is a shape
// Specifically, a circle
This pattern allows a subclass to enhance or modify its inherited behavior while preserving the original functionality of its parent class.
Inheritance and the prototype chain
Under the surface, extends sets up the subclass prototype to point to its parent, maintaining JavaScript’s long-standing prototype-based design. Every class instance inherits from its class’s prototype, and that prototype in turn inherits from its parent’s prototype, forming a chain.
console.log(Object.getPrototypeOf(Car.prototype) === Vehicle.prototype); // true
Through this prototype chain, subclasses gain access to methods and properties defined higher up in the hierarchy. The extends keyword simply automates what could otherwise be done manually using Object.create() and explicit prototype assignment.
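As an illustrative sketch, the same wiring can be done by hand with constructor functions and Object.create(); the names here mirror the earlier Animal/Dog example:

```javascript
// What `extends` automates, done manually with constructor functions
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return `${this.name} makes a sound`;
};

function Dog(name) {
  Animal.call(this, name); // plays the role of super(name)
}
// Link the prototype chain, as `class Dog extends Animal` would
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

const rex = new Dog('Rex');
console.log(rex.speak()); // Rex makes a sound
console.log(Object.getPrototypeOf(Dog.prototype) === Animal.prototype); // true
```

The class syntax performs the same linkage but also enforces details such as calling super() before this.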
Static fields and methods
Static fields and methods belong to the class itself rather than to its instances. They are defined using the static keyword and are accessed directly through the class name, not through objects created from the class. This makes them ideal for utility functions, shared constants, or operations that apply to the class as a whole.
Defining static methods
A static method is declared by prefixing the method name with static. It can be called directly on the class, but not on instances of that class.
class MathUtils {
static square(x) {
return x * x;
}
static cube(x) {
return x * x * x;
}
}
console.log(MathUtils.square(3)); // 9
console.log(MathUtils.cube(2)); // 8
In this example, the methods square() and cube() are accessed on MathUtils itself. Creating an instance of MathUtils would serve no purpose because the behavior does not depend on any object state.
Static fields
Static fields work in a similar way, holding data that is shared across all instances of a class. They are declared using the static keyword as well.
class Counter {
static total = 0;
constructor() {
Counter.total++;
}
}
const a = new Counter();
const b = new Counter();
console.log(Counter.total); // 2
Each time a new Counter instance is created, the shared total field is incremented. This allows classes to track global counts or maintain shared state easily and cleanly.
Static inheritance
Static members can also be accessed or overridden by subclasses. The super keyword can be used within static methods to call static members of the parent class.
class Shape {
static describe() {
return 'A general shape';
}
}
class Circle extends Shape {
static describe() {
return super.describe() + ', specifically a circle';
}
}
console.log(Shape.describe()); // A general shape
console.log(Circle.describe()); // A general shape, specifically a circle
This pattern lets subclasses modify or extend shared static behavior while still reusing logic from their parent classes.
Private static fields and methods
Just like instance fields, static members can also be made private by prefixing them with #. These can only be accessed from within the class definition itself.
class Config {
static #secret = 'xyz123';
static getSecret() {
return Config.#secret;
}
}
console.log(Config.getSecret()); // xyz123
console.log(Config.#secret); // SyntaxError
Private static members help maintain internal consistency and prevent external access to data or functions meant for internal use only.
Together, static fields and methods provide a clean way to define class-level data and functionality. They complement instance-based features by separating what belongs to the whole class from what belongs to individual objects.
Private fields and methods
Private fields and methods allow you to strictly control access to internal data and helper functions within a class. They are declared by prefixing their names with a hash sign (#), and cannot be accessed from outside the class where they are defined. This feature adds true encapsulation to JavaScript’s class model, ensuring internal details stay hidden and protected.
Declaring private fields
A private field is created by placing # before its name inside the class body. It can only be accessed by methods or constructors defined within the same class.
class Account {
#balance = 0;
deposit(amount) {
if (amount > 0) this.#balance += amount;
}
getBalance() {
return this.#balance;
}
}
const a = new Account();
a.deposit(100);
console.log(a.getBalance()); // 100
console.log(a.#balance); // SyntaxError
Here, #balance cannot be read or written directly from outside the class. Any attempt to do so results in a syntax error during parsing, not at runtime.
Private fields do not appear in Object.keys() or for...in loops, and cannot be dynamically created or deleted.
Private methods
Private methods are declared in the same way, using a leading #. They can be called only from other methods within the same class and are invisible to subclasses and external code.
class PasswordValidator {
#isStrong(password) {
return password.length >= 8 && /\d/.test(password);
}
validate(password) {
return this.#isStrong(password)
? 'Valid password'
: 'Too weak';
}
}
const v = new PasswordValidator();
console.log(v.validate('abc123')); // Too weak
console.log(v.validate('abc12345')); // Valid password
This approach lets you hide supporting logic that should not be part of the public interface, keeping your API clean and focused on what matters to users of the class.
Access from subclasses
Private fields and methods are not inherited or accessible by subclasses. Each class that declares them creates its own distinct private storage. If a subclass defines a private field with the same name, it is treated as a separate member.
class Base {
#secret = 'hidden';
reveal() {
console.log(this.#secret);
}
}
class Derived extends Base {
#secret = 'new';
revealDerived() {
console.log(this.#secret);
}
}
const d = new Derived();
d.reveal(); // hidden
d.revealDerived(); // new
In this example, the #secret field of Derived does not override the one in Base. Both exist independently, even though they share the same name.
Private static members
You can also combine the static and # keywords to define private static members that belong to the class itself rather than instances. They are useful for storing internal configuration or shared state that should not be exposed.
class ConfigManager {
static #apiKey = 'abc123';
static getKey() {
return ConfigManager.#apiKey;
}
}
console.log(ConfigManager.getKey()); // abc123
console.log(ConfigManager.#apiKey); // SyntaxError
Private static fields and methods remain hidden from both instances and subclasses, providing strict control over sensitive data and helper logic.
Private names (prefixed with #) are lexically scoped and cannot be created dynamically. You must declare them explicitly in the class body before they can be used.
Private fields and methods bring JavaScript classes in line with the encapsulation features long found in other object-oriented languages. They make it easier to design robust, maintainable APIs with clear boundaries between internal and external behavior.
Chapter 10: Modules
As JavaScript applications grew in size and complexity, the need for a standardized way to organize and reuse code became clear. Modules solve this problem by letting you split programs into separate files, each with its own scope and responsibilities. A module can export specific values, functions, or classes for use in other modules, and import what it needs from them in turn.
The modern module system, known as ES modules (or ESM), was introduced in ECMAScript 2015. It provides a consistent, declarative syntax that works both in browsers and in Node.js environments. Modules encourage clean separation of concerns, making your codebase easier to read, test, and maintain.
Modules are loaded once per program execution, and their exports are cached. This means that even if you import the same module in multiple places, it is evaluated only once, improving efficiency and predictability.
This chapter explores how to define and use modules with the import and export statements, the difference between default and named exports, how module resolution works, and how modern JavaScript allows await expressions at the top level of a module. It concludes with interoperability notes for working with older systems such as CommonJS and hybrid environments.
ES modules: import and export
ES modules provide a native way to divide code into separate files and share functionality between them. Each module can explicitly define which parts of its code are visible to other modules using the export keyword, and can bring in code from other modules using import. This creates a clear, declarative structure for managing dependencies.
Exporting from a module
Any value, variable, function, or class can be made available to other modules by exporting it. You can export items individually as they are declared, or group them together at the end of the file.
// math.js
export const PI = 3.14159;
export function square(x) {
return x * x;
}
function cube(x) {
return x * x * x;
}
export { cube };
In this example, the constants and functions marked with export become accessible to other modules. Anything not exported remains private to math.js.
Importing into another module
To use exported values from another module, you use the import keyword. You must specify both the names to import and the module file they come from.
// main.js
import { PI, square, cube } from './math.js';
console.log(PI); // 3.14159
console.log(square(4)); // 16
console.log(cube(3)); // 27
Imported bindings are read-only references to the exported values. You cannot reassign them, but if the exported value is an object or array, its contents can still be modified (subject to normal JavaScript rules).
Import paths must include the file extension (such as .js) when used in browsers. Node.js allows extensionless imports in certain configurations, but ESM files typically use explicit paths.
Renaming imports and exports
Sometimes you may want to use different names when importing or exporting items. The as keyword provides an alias mechanism for this purpose.
// shapes.js
export function areaCircle(r) {
return Math.PI * r * r;
}
export function areaSquare(s) {
return s * s;
}
// main.js
import { areaCircle as circleArea } from './shapes.js';
console.log(circleArea(5)); // 78.5398...
This can help avoid naming conflicts or improve clarity when combining functionality from multiple modules.
Importing everything from a module
You can import an entire module’s exports under a single namespace object using the * as syntax.
// main.js
import * as math from './math.js';
console.log(math.PI);
console.log(math.square(6));
This approach keeps related functions grouped under one name, which can be useful when the module defines many related items.
Import side effects
Sometimes a module performs setup or configuration when it loads, without exporting anything. You can import such a module simply for its side effects by omitting any binding list.
// log-setup.js
console.log('Logging initialized');
// main.js
import './log-setup.js';
This pattern is occasionally used for initializing global settings, polyfills, or event hooks, though modern design usually favors explicit exports and imports.
Default vs named exports
ES modules support two kinds of exports: named exports and default exports. Named exports let you export multiple values from a single module, while a default export designates one primary value or object to be exported. Understanding the difference is key to writing clear and consistent module APIs.
Named exports
Named exports explicitly identify each item you want to make available. You can export several items from the same module, and they must be imported using their exact exported names (unless renamed with as).
// utils.js
export function greet(name) {
return `Hello, ${name}!`;
}
export const VERSION = '1.0';
// main.js
import { greet, VERSION } from './utils.js';
console.log(greet('Robin')); // Hello, Robin!
console.log(VERSION); // 1.0
Each module can have as many named exports as you like. Together they form the module’s namespace, from which importers can pick individual bindings or take everything at once.
Default exports
Default exports are used when a module has one primary thing to expose, such as a class, function, or configuration object. Only one default export is allowed per module.
// person.js
export default class Person {
constructor(name) {
this.name = name;
}
greet() {
console.log(`Hi, I'm ${this.name}`);
}
}
// main.js
import Person from './person.js';
const robin = new Person('Robin');
robin.greet(); // Hi, I'm Robin
Note that the import statement for a default export does not use braces. You can choose any name you like for the imported value, since it refers to the module’s default export only.
A module may contain only one export default declaration. Attempting to declare multiple default exports will cause a syntax error.
Mixing default and named exports
You can combine both default and named exports in the same module, but it is generally clearer to pick one approach per module for consistency. If you do mix them, they can be imported together as follows:
// shapes.js
export const CIRCLE = 'circle';
export const SQUARE = 'square';
export default function draw(shape) {
console.log(`Drawing a ${shape}`);
}
// main.js
import draw, { CIRCLE, SQUARE } from './shapes.js';
draw(CIRCLE);
Here, draw is the module’s default export, while CIRCLE and SQUARE are named exports. This pattern works well when a module provides one main function plus a few helper constants or types.
Re-exporting from other modules
Modules can also re-export values imported from elsewhere. This helps you create aggregators or index files that gather exports from multiple sources into a single entry point.
// math.js
export const PI = 3.14;
export function square(x) { return x * x; }
// index.js
export { PI, square } from './math.js';
// main.js
import { PI, square } from './index.js';
You can also re-export a default export under your own default or named export using the default keyword:
export { default as Library } from './library.js';
Module resolution basics
When you use an import statement, JavaScript must locate and load the requested module before execution continues. The process by which it finds the correct file or package is called module resolution. Understanding how this works helps avoid common issues such as missing files, circular dependencies, and unexpected module paths.
Relative and absolute paths
When importing local modules, you typically use a relative path that begins with ./ or ../. This tells the JavaScript engine to look for the module in the filesystem relative to the current file’s location.
import { helper } from './utils.js';
import Logger from '../lib/logger.js';
Absolute paths (starting from the project root or a base URL) can also be used, but they require specific configuration in your environment, such as a module root in Node.js or a base path in a web bundler.
Always include the file extension (such as .js) when using ES modules in browsers. Node.js can infer the extension in some configurations, but explicit filenames are the safest and most portable choice.
Bare specifiers
When you import a package by name rather than by file path, such as import _ from 'lodash', this is called a bare specifier. Bare specifiers are resolved according to the host environment’s rules:
- In browsers: Bare specifiers are not supported directly. They must be mapped to URLs using an import map, or resolved ahead of time by a bundler (for example, Rollup, Vite, or Webpack).
- In Node.js: Bare specifiers are resolved by looking inside the node_modules directory and following the module’s package.json configuration.
// Node.js example
import express from 'express';
Here, Node.js looks for an express directory inside node_modules, reads its package.json, and uses the "exports" or "main" field to locate the entry file.
Module file extensions and types
JavaScript modules can have different extensions and formats, depending on their type and the runtime environment:
- .js — Standard JavaScript (treated as an ES module in browsers when loaded with type="module").
- .mjs — Always treated as an ES module in Node.js.
- .cjs — Always treated as CommonJS (the legacy Node.js module format).
In Node.js, the type field in package.json determines the default module system for files ending in .js. For example:
{
"type": "module"
}
With this setting, Node.js treats .js files as ES modules. Without it, Node assumes they are CommonJS files unless they use the .mjs extension.
Circular dependencies
A circular dependency occurs when two modules import each other, directly or indirectly. When this happens, one of the modules may be partially evaluated, meaning some exports may still be undefined when accessed.
// a.js
import { bar } from './b.js';
export const foo = 'foo';
console.log(bar);
// b.js
import { foo } from './a.js';
export const bar = 'bar';
console.log(foo);
This pattern can cause unpredictable results because of the order in which modules are initialized. If circular dependencies are unavoidable, design them so that shared data flows through functions or objects rather than top-level exports.
Caching and re-evaluation
Once a module has been successfully loaded and executed, it is cached. Future imports of the same module return the same instance rather than re-running the module’s code. This ensures consistency across your application and improves performance.
import './setup.js';
import './setup.js'; // runs only once
Because of this caching, any module-level state you maintain (such as counters, configuration, or open connections) persists across all imports within the same runtime session.
Top-level await
Traditionally, the await keyword could only be used inside async functions. However, modern JavaScript allows await to appear directly at the top level of an ES module. This feature, known as top-level await, enables modules to perform asynchronous setup operations before their exports become available to importing modules.
Using await at the top level
When used at the top level, await pauses evaluation of the module until the awaited promise resolves. This makes it possible to perform asynchronous initialization in a clean, linear style without wrapping everything in an async function.
// config.js
const response = await fetch('./settings.json');
export const settings = await response.json();
// main.js
import { settings } from './config.js';
console.log(settings.theme);
Here, the main.js module automatically waits for config.js to finish fetching and parsing its data before continuing execution. This makes asynchronous resource loading straightforward in modular codebases.
Top-level await is supported only in ES modules, not in traditional script files. If you need it, ensure your environment treats files as ESM (for example, with type="module" in HTML or "type": "module" in package.json).
How it affects module loading
When a module uses top-level await, any other module that imports it must wait for it to complete before continuing. This can make module loading asynchronous, even if other modules use only synchronous imports.
// data.js
console.log('Fetching data...');
export const data = await fetch('./data.json').then(r => r.json());
console.log('Data loaded');
// main.js
import { data } from './data.js';
console.log('Data ready', data);
In this sequence, main.js will not log 'Data ready' until the fetch completes and data.js finishes its top-level await.
Top-level await can cause import delays if used heavily or across multiple dependent modules. Use it carefully in performance-sensitive code, and consider lazy loading or deferred initialization for large datasets.
Awaiting dynamic imports
Top-level await works especially well with dynamic import() expressions, allowing modules to load dependencies only when needed. The import() function returns a promise that resolves to the imported module’s namespace object.
// main.js
const { default: greet } = await import('./greeter.js');
greet('Robin');
This pattern is often used for conditional imports, feature detection, or optimizing bundle sizes in web applications. Because the syntax is asynchronous, it integrates smoothly with top-level await.
Combining with initialization logic
You can use top-level await to initialize state before exporting anything from the module. For example, you might connect to a database, read configuration files, or prepare API clients.
// db.js
const connection = await connectToDatabase();
export const db = connection;
// main.js
import { db } from './db.js';
db.query('SELECT * FROM users');
This ensures dependent modules receive a fully initialized resource, with no need to manage promises manually.
Top-level await simplifies module setup, but it also means importers must wait for the module to resolve. For predictable performance, reserve it for essential one-time initialization tasks.
Interoperability notes
While ES modules (ESM) are now the standard format for modern JavaScript, many existing projects and libraries still use the older CommonJS (CJS) system, especially in Node.js environments. Understanding how these two systems interact helps ensure compatibility when mixing module types or transitioning older codebases to ESM.
ESM vs CommonJS overview
CommonJS was designed for server-side JavaScript long before ESM existed. It uses the require() function to import dependencies and module.exports or exports to expose values. ES modules, by contrast, use import and export statements that are statically analyzed at load time.
// CommonJS (cjs)
const fs = require('fs');
module.exports = { read: fs.readFileSync };
// ES module (esm)
import fs from 'fs';
export function read(file) {
return fs.readFileSync(file, 'utf8');
}
CommonJS modules are loaded synchronously, while ES modules are loaded asynchronously. This difference affects how inter-module dependencies are resolved and when exports become available.
Importing CommonJS from ESM
In Node.js, ES modules can import CommonJS modules using a default import. The entire module.exports object is exposed as the default export.
// utils.cjs
module.exports = {
greet(name) {
console.log(`Hello, ${name}`);
}
};
// main.mjs (ESM)
import utils from './utils.cjs';
utils.greet('Robin');
This works because Node.js wraps the CommonJS module.exports object in a default binding for ES module consumers. Named imports sometimes work when Node.js can statically detect the exported names, but the default import is the most reliable approach.
Use property access on the default import (such as utils.greet()) to call specific functions or retrieve values.
Importing ES modules from CommonJS
CommonJS cannot use import statements directly. Instead, you can dynamically load an ES module using the asynchronous import() function, which returns a promise that resolves to the module’s namespace object.
// main.cjs
(async () => {
const { sayHello } = await import('./greeter.mjs');
sayHello('Robin');
})();
This pattern allows CJS code to interoperate with modern ESM modules while maintaining backward compatibility. It is the recommended approach for gradually migrating legacy code to ES module syntax.
Package configuration and type field
In Node.js, the type field in package.json controls how files with the .js extension are treated:
{
"type": "module"
}
With this setting, all .js files in the package are interpreted as ES modules. Without it, they default to CommonJS. You can still use .mjs for ESM and .cjs for CommonJS to explicitly control the module type per file.
Be consistent with file extensions and package.json settings. Mixing ESM and CJS within the same directory without clear naming or configuration often causes resolution errors.
Browser vs Node.js interoperability
In browsers, only ES modules are supported natively. CommonJS modules must be bundled or converted using tools like Rollup, Webpack, or esbuild. These tools can analyze CJS dependencies and output ESM-compatible bundles.
<script type="module" src="main.js"></script>
Node.js supports both formats, but requires explicit handling of the type field or file extensions. Browser-targeted ESM files generally work in Node.js as long as they avoid web-specific APIs like document or window.
Interoperability between ESM and CommonJS is now well supported but can still introduce subtle differences in behavior. Keeping your codebase modular, explicit, and consistent helps ensure that imports and exports behave predictably across environments.
Chapter 11: Strings and Text
Strings are the primary way JavaScript represents text. Whether you are displaying messages, manipulating user input, or constructing templates for output, strings are essential to almost every program. They are sequences of Unicode characters enclosed in quotes and can be created with single, double, or backtick marks. While simple in concept, JavaScript strings have many subtleties that arise from their immutability and Unicode handling.
Because strings cannot be changed once created, any operation that seems to alter a string actually produces a new one. This behaviour keeps data consistent and predictable, but it also means that performance can become a consideration in large-scale text processing.
JavaScript also provides template literals, a modern syntax using backticks that allows embedded expressions, multiline text, and improved readability for code that generates dynamic content. Template interpolation greatly simplifies string composition compared to manual concatenation with +.
Since JavaScript is Unicode-based, strings can contain any script or symbol, from emojis to accented characters. However, because some characters are represented by multiple code units, working with text at the character level can be less straightforward than it first appears.
This chapter explores how strings are represented and manipulated in JavaScript. It begins with the basics of string creation and immutability, then moves to template literals and interpolation, followed by an overview of Unicode concepts. It continues with common methods for searching, slicing, and transforming text, and concludes with an introduction to regular expressions, the powerful pattern-matching system built into the language.
String basics and immutability
Strings in JavaScript are sequences of characters enclosed by single quotes ('), double quotes ("), or backticks (`). All three forms produce string primitives, although backticks are special and enable template features that will be discussed later. As a primitive type, strings are immutable and compared by value rather than by reference.
// Using single and double quotes
let name = 'Robin';
let greeting = "Hello, world!";
// Using backticks (template literal)
let message = `Welcome, ${name}!`;
When a string is said to be immutable, it means its content cannot be altered once created. Any operation that appears to modify a string actually produces a new one. For example, concatenating or replacing parts of a string returns a new value instead of updating the original.
let text = 'Java';
text += 'Script'; // Creates a new string, does not modify 'Java'
console.log(text); // JavaScript
Attempting to change a character by index, such as text[0] = 'Y', will silently fail (or throw a TypeError in strict mode). JavaScript strings are not arrays and cannot be mutated this way.
You can access characters in a string using bracket notation or the charAt() method. Both return the character at the given position, starting at index 0.
let lang = 'JavaScript';
console.log(lang[0]); // J
console.log(lang.charAt(1)); // a
Strings have a length property, which counts the number of UTF-16 code units in the string. For simple text this matches the character count, but for characters outside the Basic Multilingual Plane (such as many emojis) or letters built from combining accents, the two can differ.
console.log('Hello'.length); // 5
console.log('💡'.length); // 2
To iterate over full characters rather than code units, use for...of or Array.from(), both of which understand surrogate pairs.
Understanding that strings are immutable is crucial to writing efficient and predictable text-processing code. Each time you use concatenation, slicing, or replacement, you are creating a new string object rather than modifying the original one. This becomes especially important when manipulating large volumes of text or within tight loops.
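One common mitigation, sketched here, is to collect the pieces in an array and join them once at the end, instead of concatenating repeatedly inside a loop:

```javascript
// Building large text by collecting parts, then joining once
const parts = [];
for (let i = 1; i <= 3; i++) {
  parts.push(`item ${i}`); // cheap array appends instead of repeated string copies
}
const list = parts.join('\n'); // a single final string construction
console.log(list); // "item 1\nitem 2\nitem 3"
```

For small strings the difference is negligible; the pattern matters mainly when assembling many pieces.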
Templates and interpolation
Template literals were introduced in ECMAScript 2015 to make working with strings more expressive and less error-prone. They use backticks (`) instead of single or double quotes and can span multiple lines without the need for escape characters. This makes them ideal for constructing readable text blocks or dynamic strings that include variable data.
let name = 'Robin';
let language = 'JavaScript';
let message = `Hello ${name},
welcome to ${language}!`;
console.log(message);
// Output:
// Hello Robin,
// welcome to JavaScript!
The key feature of template literals is interpolation: the ability to embed expressions directly inside a string using the ${...} syntax. Anything between the braces is evaluated as JavaScript and then converted to a string.
let a = 5;
let b = 7;
console.log(`The total is ${a + b}`); // The total is 12
Any JavaScript expression can appear inside a ${...} placeholder, including function calls and ternary expressions.
Because template literals support multiline text, they are often used for creating formatted output or inline HTML fragments without cumbersome concatenation operators.
let html = `
<div class="user">
<h2>${name}</h2>
<p>Learning ${language}</p>
</div>`;
Traditional strings require \n escape sequences for line breaks or manual concatenation with +. Template literals remove that overhead, keeping your code clear and easier to maintain.
In addition to interpolation, template literals can also use tagged templates, a more advanced feature where a function processes the literal’s content and expressions before producing output. This mechanism enables custom formatting or filtering but will be discussed in more detail in later chapters.
Unicode, code points, and iteration
JavaScript strings are based on the Unicode standard, which assigns a unique number, or code point, to every character in virtually every writing system. Internally, JavaScript stores these strings using UTF-16 encoding, where most common characters use a single 16-bit unit, but some (such as emojis or certain symbols) require a pair of units known as a surrogate pair.
let smile = '😊';
console.log(smile.length); // 2 (two UTF-16 code units)
console.log(smile.charCodeAt(0)); // 55357 (first half of surrogate pair)
console.log(smile.charCodeAt(1)); // 56842 (second half)
This dual-unit representation means that some string operations, like indexing or measuring length, can produce confusing results with characters outside the Basic Multilingual Plane (BMP). For instance, slicing or reversing such strings without care can break them into invalid halves.
Methods such as substring() or split('') can separate surrogate pairs incorrectly.
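A quick demonstration of the breakage: split('') cuts the emoji into two lone surrogate halves, while spreading the string keeps the pair intact.

```javascript
let halves = '💡'.split('');
console.log(halves.length);      // 2 — two lone surrogate halves
console.log(halves[0] === '💡'); // false — neither half is the original character
console.log([...'💡'].length);   // 1 — iteration keeps the pair together
```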
To handle Unicode accurately, JavaScript provides modern features for working with full code points. The codePointAt() and fromCodePoint() methods correctly interpret characters regardless of how many code units they use.
let heart = '💖';
console.log(heart.codePointAt(0)); // 128150
console.log(String.fromCodePoint(128150)); // 💖
Iteration over strings has also improved with the introduction of for...of loops, which automatically respect Unicode boundaries. This ensures each visible symbol is treated as a single item, even if it uses surrogate pairs internally.
for (let ch of 'A💡B') {
console.log(ch);
}
// Output:
// A
// 💡
// B
The for...of loop and Array.from() both iterate over entire Unicode characters, making them safer for text processing than traditional index-based loops.
JavaScript also exposes the normalize() method, which helps standardize characters that can be represented in multiple ways (for example, accented letters). Unicode normalization ensures that two visually identical strings will compare equally, even if their internal byte sequences differ.
let a = 'é'; // single code point
let b = 'e\u0301'; // 'e' + combining accent
console.log(a === b); // false
console.log(a.normalize() === b.normalize()); // true
Understanding code points, surrogate pairs, and normalization is essential when working with multilingual or symbol-rich text. These features ensure that string manipulation in JavaScript remains consistent across the full range of Unicode characters.
Common string methods
JavaScript’s String object provides a wide range of methods for inspecting, transforming, and extracting parts of text. Since strings are immutable, these methods always return new strings or values rather than altering the original content.
Some of the most frequently used methods deal with case conversion and trimming whitespace:
let text = ' JavaScript ';
console.log(text.toUpperCase()); // " JAVASCRIPT " (case changes, spaces remain)
console.log(text.toLowerCase()); // " javascript "
console.log(text.trim()); // "JavaScript" (surrounding whitespace removed)
To locate substrings, use indexOf(), lastIndexOf(), or includes(). These help find or check for specific sequences within a string.
let lang = 'JavaScript';
console.log(lang.indexOf('a')); // 1
console.log(lang.lastIndexOf('a')); // 3
console.log(lang.includes('Script')); // true
When you need to extract sections of text, the most common tools are slice(), substring(), and substr() (the last of which is now deprecated but still widely seen). slice() is generally preferred for its clarity and flexibility.
let word = 'JavaScript';
console.log(word.slice(0, 4)); // Java
console.log(word.slice(-6)); // Script
Both slice() and substring() return a new string. The original remains unchanged, reinforcing the concept of immutability.
String splitting and joining are essential for text processing. split() divides a string into an array, while join() (used on arrays) reconstructs text from elements.
let csv = 'red,green,blue';
let colors = csv.split(',');
console.log(colors); // ['red', 'green', 'blue']
console.log(colors.join(' | ')); // red | green | blue
Replacing or transforming parts of a string can be done with replace() or replaceAll(). Both return a new string, leaving the original untouched.
let phrase = 'I like cats. Cats are cute.';
console.log(phrase.replace('cats', 'dogs'));
// I like dogs. Cats are cute.
console.log(phrase.replaceAll('Cats', 'Dogs'));
// I like cats. Dogs are cute.
The replace() method changes only the first match unless a regular expression with the global flag (/g) is used, or you switch to replaceAll().
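The three forms compared side by side:

```javascript
let sentence = 'cats and cats';
console.log(sentence.replace('cats', 'dogs'));    // dogs and cats — first match only
console.log(sentence.replace(/cats/g, 'dogs'));   // dogs and dogs — global regex
console.log(sentence.replaceAll('cats', 'dogs')); // dogs and dogs — every occurrence
```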
Finally, there are several convenient methods for checking prefixes, suffixes, or matching patterns:
let file = 'index.html';
console.log(file.startsWith('index')); // true
console.log(file.endsWith('.html')); // true
console.log(file.match(/\.html$/)); // ['.html']
Together, these methods cover most everyday string operations: finding, slicing, replacing, trimming, matching, and joining. They form the backbone of text handling in JavaScript and are used in nearly every kind of program that processes human-readable data.
Regular expressions
Regular expressions (often called regex or regexp) are patterns used to search, match, and manipulate text. In JavaScript, they are represented by the RegExp object and can be created either with literal syntax, using slashes (/pattern/), or by calling the constructor new RegExp().
// Two equivalent forms
let re1 = /cat/;
let re2 = new RegExp('cat');
Regular expressions can be used with several string methods, including match(), search(), replace(), and split(). When applied, they allow sophisticated text processing far beyond simple substring checks.
let text = 'The cat sat on the mat.';
console.log(text.match(/cat/)); // ['cat']
console.log(text.search(/sat/)); // 8
console.log(text.replace(/mat/, 'rug')); // The cat sat on the rug.
Regex patterns can include special characters known as metacharacters that define how text should be matched. For example, . matches any single character, \d matches digits, and \s matches whitespace. Quantifiers like +, *, and {n,m} specify repetition.
let sample = 'ID: A12345';
console.log(sample.match(/A\d+/)); // ['A12345']
The g flag makes the regex global, allowing multiple matches, while the i flag makes it case-insensitive. You can combine them as /pattern/gi.
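The effect of the flags is easiest to see with match():

```javascript
let mixed = 'Cat and cat';
console.log(mixed.match(/cat/g));  // ['cat'] — case-sensitive, lowercase only
console.log(mixed.match(/cat/gi)); // ['Cat', 'cat'] — all matches, any case
```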
Grouping and alternation provide further control. Parentheses (...) group parts of a pattern, and the pipe | offers a choice between alternatives.
let str = 'apple or orange';
console.log(str.match(/apple|orange/)); // ['apple']
Regex captures matched groups, which can be accessed by number or by name (with modern syntax). This is useful for extracting parts of structured text, such as dates or tags.
let date = '2025-10-28';
let re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/;
let match = date.match(re);
console.log(match.groups.year); // 2025
Regular expressions can also be used directly through the test() method of a RegExp object, which simply returns true or false depending on whether the pattern matches.
let valid = /^\d{4}-\d{2}-\d{2}$/; // yyyy-mm-dd
console.log(valid.test('2025-10-28')); // true
console.log(valid.test('28/10/2025')); // false
Regular expressions are an indispensable tool for validating, parsing, and transforming text. They can appear cryptic at first, but even a basic grasp of their syntax greatly expands what you can do with strings in JavaScript. Later chapters will revisit regex in more advanced contexts, including data extraction and text validation in real-world applications.
Chapter 12: Arrays and Collections
Arrays and collection types are at the heart of JavaScript programming. They allow you to group related values, organize data, and perform operations across lists, sequences, or sets of items. From storing a few numbers to managing complex datasets, JavaScript’s array and collection features provide the tools needed for efficient data manipulation.
The array is the most common of these structures. It is an ordered, zero-indexed list that can hold any type of value such as numbers, strings, objects, or even other arrays. Although arrays are technically objects, they come with a rich set of methods designed for traversal, transformation, and aggregation.
Beyond traditional arrays, modern JavaScript includes higher-order methods such as map(), filter(), and reduce() that enable expressive functional programming techniques. These methods help process collections declaratively, reducing the need for manual loops and temporary variables.
It is also important to understand which array operations modify the original data (mutating methods) and which return new arrays instead. Knowing this distinction helps prevent subtle bugs and keeps your code predictable when working with shared or reused data.
Methods such as push(), pop(), and splice() modify the original array. Others like map(), filter(), and slice() return a new one, leaving the source untouched.
In addition to arrays, JavaScript provides specialized collection types such as Map, Set, WeakMap, and WeakSet. These offer efficient ways to store key-value pairs or unique elements and are especially useful for caching, lookup tables, and managing object references safely.
This chapter explores all these structures in turn. It begins with array creation and iteration, moves on to functional transformation methods, examines the difference between mutating and non-mutating operations, introduces typed arrays for binary data, and concludes with a look at the modern collection classes that extend JavaScript’s handling of structured data.
Array creation and iteration
Arrays in JavaScript are ordered lists of values, indexed from zero, that can hold elements of any type. You can create them using array literals, the Array constructor, or from existing iterable objects. Array literals are by far the most common and readable form.
// Literal syntax
let numbers = [1, 2, 3, 4];
// Constructor form
let fruits = new Array('apple', 'banana', 'cherry');
// From another iterable
let chars = Array.from('hello'); // ['h', 'e', 'l', 'l', 'o']
Arrays are dynamic, so they can grow or shrink as elements are added or removed. You can assign values by index or use methods such as push() and pop().
let list = [];
list.push('a');
list.push('b');
console.log(list); // ['a', 'b']
list[2] = 'c';
console.log(list.length); // 3
Assigning to an index beyond the current length creates gaps of undefined elements, and some methods skip them during iteration.
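A short illustration of such a sparse array:

```javascript
let sparse = [];
sparse[3] = 'x';            // indexes 0–2 become holes
console.log(sparse.length); // 4
console.log(1 in sparse);   // false — a hole, not a stored undefined
sparse.forEach(v => console.log(v)); // logs only 'x'; forEach skips holes
```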
JavaScript provides several ways to iterate through arrays, each suited to different purposes. The classic for loop gives fine control over indexes, while the more modern for...of loop iterates directly over values.
let nums = [10, 20, 30];
// Traditional for loop
for (let i = 0; i < nums.length; i++) {
console.log(nums[i]);
}
// for...of loop
for (let value of nums) {
console.log(value);
}
Array iteration methods offer a declarative alternative. forEach() runs a callback function once for every element.
nums.forEach(n => console.log(n * 2));
// Output: 20, 40, 60
The for...in loop is designed for object properties, not arrays. It iterates over keys and can include inherited properties, so prefer for...of or forEach() when working with arrays.
Arrays also expose useful factory methods for creation and filling. Array.of() creates an array from given arguments, while fill() and copyWithin() can modify contents efficiently.
let arr = Array.of(1, 2, 3); // [1, 2, 3]
arr.fill(0); // [0, 0, 0]
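Since copyWithin() was mentioned but not shown, here is a small example: it copies part of the array over another part, in place.

```javascript
let seq = [1, 2, 3, 4, 5];
seq.copyWithin(0, 3); // copy elements from index 3 onward over the start
console.log(seq);     // [4, 5, 3, 4, 5] — modified in place
```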
Finally, array destructuring allows quick unpacking of values into separate variables, improving clarity and conciseness when working with lists of known size.
let coords = [12, 34];
let [x, y] = coords;
console.log(x, y); // 12 34
These techniques form the foundation for all further work with arrays. Once you understand how to create and iterate through them, you can move on to transforming, filtering, and reducing collections with functional methods such as map(), filter(), and reduce().
map, filter, and reduce
Arrays in JavaScript include a set of higher-order methods that make it easy to transform, filter, and aggregate data without manual loops. These methods (map(), filter(), and reduce()) are central to a functional programming style, where operations are expressed in terms of data transformations rather than step-by-step instructions.
The map() method creates a new array by applying a function to every element of the original. It does not modify the source array.
let numbers = [1, 2, 3, 4];
let squares = numbers.map(n => n * n);
console.log(squares); // [1, 4, 9, 16]
The map() method is perfect for one-to-one transformations where every input element becomes exactly one output element.
The filter() method returns a new array containing only those elements for which a given function returns true. It is ideal for selecting subsets of data.
let words = ['apple', 'bat', 'banana', 'cat'];
let longWords = words.filter(w => w.length > 3);
console.log(longWords); // ['apple', 'banana']
The reduce() method is more general. It processes all elements and combines them into a single result, carrying forward an accumulated value with each iteration. This is useful for totals, averages, or any type of aggregation.
let prices = [10, 15, 20];
let total = prices.reduce((sum, value) => sum + value, 0);
console.log(total); // 45
Always provide an initial value to reduce() (the second argument). Without it, the first array element becomes the initial accumulator, which can cause subtle bugs or errors when the array is empty.
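The empty-array case makes the difference concrete:

```javascript
let empty = [];
// empty.reduce((a, b) => a + b); // TypeError: reduce of empty array with no initial value
console.log(empty.reduce((a, b) => a + b, 0)); // 0 — safe with an initial value
```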
Two closely related methods, some() and every(), test whether any or all elements satisfy a condition.
let nums = [2, 4, 6];
console.log(nums.every(n => n % 2 === 0)); // true
console.log(nums.some(n => n > 5)); // true
Another useful method, find(), returns the first element that matches a condition, while findIndex() returns its position in the array.
let people = [
{ name: 'Alice', age: 30 },
{ name: 'Bob', age: 40 }
];
let result = people.find(p => p.age > 35);
console.log(result.name); // Bob
For flattening nested arrays, use flat() and flatMap(). The latter combines mapping and flattening into one concise step.
let nested = [1, [2, 3], [4, [5]]];
console.log(nested.flat(2)); // [1, 2, 3, 4, 5]
let doubled = [1, 2, 3].flatMap(n => [n, n * 2]);
console.log(doubled); // [1, 2, 2, 4, 3, 6]
These methods can be chained into concise processing pipelines:
let result = [1, 2, 3, 4, 5]
.filter(n => n % 2 === 0)
.map(n => n * n)
.reduce((a, b) => a + b, 0);
console.log(result); // 20
Together, map(), filter(), and reduce() form the core of functional-style array processing in JavaScript. When combined with other helpers like some(), every(), find(), and flatMap(), they provide a concise, expressive, and immutable approach to working with data collections.
Mutating vs non-mutating methods
Some array methods in JavaScript change the array they are called on, while others leave the original untouched and instead return a new array. Understanding which methods mutate and which do not is essential for writing predictable and bug-free code, especially when data is shared across different parts of a program.
Mutating methods directly modify the array’s contents. They may add, remove, or rearrange elements in place, altering both the data and its length. Examples include:
push(): adds elements to the end
pop(): removes the last element
shift(): removes the first element
unshift(): adds elements to the start
splice(): adds or removes elements at specific positions
sort(): sorts the array in place
reverse(): reverses element order
fill(): replaces all values with a given one
copyWithin(): overwrites part of the array with its own elements
let fruits = ['apple', 'banana', 'cherry'];
fruits.push('date');
console.log(fruits); // ['apple', 'banana', 'cherry', 'date']
fruits.reverse();
console.log(fruits); // ['date', 'cherry', 'banana', 'apple']
Non-mutating methods return new arrays or values without touching the original data. These methods are safer in functional-style programming and make it easier to reason about program state.
concat(): merges arrays into a new one
slice(): extracts a section into a new array
map(): transforms elements
filter(): selects elements based on a condition
reduce(): combines values into one result
flat(): flattens nested arrays
toSorted(): returns a sorted copy (ES2023+)
toReversed(): returns a reversed copy (ES2023+)
let nums = [3, 1, 2];
let sorted = nums.toSorted();
console.log(sorted); // [1, 2, 3]
console.log(nums); // [3, 1, 2] — original unchanged
ES2023 introduced non-mutating counterparts to sort() and reverse(), namely toSorted() and toReversed(). These preserve immutability while offering familiar functionality.
When working on large projects or functional-style codebases, preferring non-mutating methods helps maintain data integrity and avoids unintentional changes. However, mutating methods can still be useful for small, localized operations where performance or simplicity matters.
The key is to know which approach you are using at any given time. Mutating methods alter the existing array in place, while non-mutating methods return a new array, leaving the original untouched. Choosing between them is as much about code style and predictability as it is about performance.
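The hazard of mutation is easiest to see when two variables refer to the same array; a sketch:

```javascript
let scores = [3, 1, 2];
let alias = scores;  // two names, one array
alias.sort();        // mutates in place
console.log(scores); // [1, 2, 3] — the change is visible through both names

let safe = [6, 4, 5];
let sortedCopy = safe.slice().sort(); // copy first, then sort the copy
console.log(safe);       // [6, 4, 5] — original untouched
console.log(sortedCopy); // [4, 5, 6]
```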
Typed arrays
Typed arrays provide a way to handle raw binary data in JavaScript. Unlike regular arrays, which can hold any type of value and have variable element sizes, typed arrays are fixed-length and store elements of a single numeric type. They are designed for efficient interaction with low-level data, such as graphics buffers, audio streams, and files.
A typed array is not a standalone structure but a view into an underlying ArrayBuffer, which represents a fixed-size block of memory. Each typed array interprets that buffer as a sequence of a specific numeric format, such as 8-bit integers or 32-bit floats.
// Create a buffer of 8 bytes
let buffer = new ArrayBuffer(8);
// Create a 32-bit integer view on the buffer
let view = new Int32Array(buffer);
view[0] = 12345;
view[1] = 67890;
console.log(view); // Int32Array(2) [12345, 67890]
Each typed array type corresponds to a specific data representation. Common variants include:
Int8Array: 8-bit signed integers
Uint8Array: 8-bit unsigned integers
Int16Array: 16-bit signed integers
Uint16Array: 16-bit unsigned integers
Int32Array: 32-bit signed integers
Uint32Array: 32-bit unsigned integers
Float32Array: 32-bit floating point
Float64Array: 64-bit floating point
You can also create a typed array directly from a standard array or iterable. The values are automatically converted to the target numeric type.
let floats = new Float32Array([1.5, 2.5, 3.5]);
console.log(floats[1]); // 2.5
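The conversion can be lossy: integer types wrap values outside their range and discard fractional parts.

```javascript
let units = new Uint8Array(2);
units[0] = 300; // out of range: stored modulo 256
units[1] = -1;  // wraps around to the top of the range
console.log(units[0]); // 44
console.log(units[1]); // 255

let whole = new Int32Array(1);
whole[0] = 3.9; // fractional part truncated
console.log(whole[0]); // 3
```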
Typed arrays are especially useful in contexts such as WebGL, audio processing, or working with binary file formats, where precise control over memory layout and performance is essential.
You can also create multiple views over the same buffer, allowing the same memory region to be interpreted in different ways:
let buffer2 = new ArrayBuffer(4);
let bytes = new Uint8Array(buffer2);
let ints = new Int32Array(buffer2);
bytes[0] = 0x78;
bytes[1] = 0x56;
bytes[2] = 0x34;
bytes[3] = 0x12;
console.log(ints[0].toString(16)); // 12345678 (depends on endianness)
While most JavaScript code works comfortably with regular arrays, typed arrays bridge the gap between JavaScript and systems-level performance. They allow efficient number crunching and binary manipulation while still using familiar array-like syntax.
Map, Set, WeakMap, and WeakSet
Beyond arrays, JavaScript provides several collection types designed for more specialized data storage: Map, Set, WeakMap, and WeakSet. These structures offer powerful alternatives to plain objects and arrays, each with its own unique characteristics and use cases.
Map
A Map is a collection of key–value pairs, similar to an object, but with significant differences. Unlike objects, map keys can be of any type, including objects, functions, or even NaN. Maps also maintain the order in which entries were inserted.
let capitals = new Map();
capitals.set('France', 'Paris');
capitals.set('Italy', 'Rome');
capitals.set('Japan', 'Tokyo');
console.log(capitals.get('Italy')); // Rome
console.log(capitals.size); // 3
Maps can be created directly from arrays of pairs, and they provide built-in iteration methods such as forEach() and for...of.
let nums = new Map([[1, 'one'], [2, 'two'], [3, 'three']]);
for (let [key, value] of nums) {
console.log(key, value);
}
Use a Map when keys are not guaranteed to be strings, or when you need predictable insertion order and efficient lookups.
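Object keys are where Map differs most from a plain object; keys are compared by reference, as this sketch shows:

```javascript
let metadata = new Map();
let user = { name: 'Alice' };
metadata.set(user, { role: 'admin' });
console.log(metadata.get(user).role);         // admin
console.log(metadata.get({ name: 'Alice' })); // undefined — a different object
```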
Set
A Set is a collection of unique values; duplicate entries are ignored automatically, and iteration follows insertion order. Sets are ideal for membership testing or deduplicating arrays.
let colors = new Set(['red', 'green', 'blue', 'red']);
console.log(colors.size); // 3
colors.add('yellow');
console.log(colors.has('blue')); // true
You can also convert sets to arrays using the spread operator or Array.from().
let arr = [...colors];
console.log(arr); // ['red', 'green', 'blue', 'yellow']
A Set stores objects by reference, so two separate but structurally identical objects are considered distinct entries.
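That reference behaviour in practice:

```javascript
let seen = new Set();
seen.add({ id: 1 });
seen.add({ id: 1 });    // structurally identical, but a different object
console.log(seen.size); // 2

let item = { id: 2 };
seen.add(item);
seen.add(item);         // same reference — ignored
console.log(seen.size); // 3
```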
WeakMap
WeakMap is similar to Map, but it only accepts objects as keys, and those keys are held “weakly.” This means that if the key object becomes unreachable elsewhere in your program, it can be garbage-collected automatically, freeing memory. WeakMap is not iterable and does not expose its size, because its contents can be reclaimed by the garbage collector at any moment.
let wm = new WeakMap();
let obj = {};
wm.set(obj, 'secret data');
console.log(wm.get(obj)); // secret data
obj = null; // key and value may now be garbage-collected
WeakMap is often used to store metadata or private data associated with objects without preventing their cleanup.
WeakSet
WeakSet is the set equivalent of WeakMap. It stores only objects, and its entries are also held weakly, allowing garbage collection when objects are no longer referenced elsewhere. Like WeakMap, it is not iterable and has no size property.
let ws = new WeakSet();
let user = { name: 'Alice' };
ws.add(user);
console.log(ws.has(user)); // true
user = null; // object may now be collected
WeakMap and WeakSet are useful for managing memory safely when storing associations or flags for objects that have independent lifetimes.
Together, Map, Set, WeakMap, and WeakSet give JavaScript developers a versatile collection toolkit. Use Map for key–value data, Set for unique lists, and their weak counterparts for temporary or memory-sensitive object tracking.
Chapter 13: Iteration and Generators
Iteration is a cornerstone of JavaScript's design, allowing you to process data sequentially and perform actions on each element in a collection. From arrays and maps to custom objects, JavaScript provides a rich set of mechanisms for traversing data structures efficiently and expressively. Understanding these mechanisms is key to writing modern, elegant code.
This chapter explores how iteration works under the hood, beginning with the iterable and iterator protocols that define the language’s iteration behaviour. You will learn how constructs such as for...of and the spread operator rely on these protocols to access values one at a time. The chapter then introduces generator functions (special functions that can pause and resume their execution) along with the yield keyword and its practical applications. Finally, you will see how asynchronous generators extend these concepts for working with streams of data that arrive over time.
Iterable and iterator protocols
Iteration in JavaScript is built on two formal agreements known as the iterable and iterator protocols. These protocols define how a data structure can be stepped through, one element at a time, in a predictable and standard way. They are not special syntax, but conventions that objects can implement so that language features such as for...of and the spread operator know how to access their contents.
An iterable is any object that has a method keyed by Symbol.iterator. When this method is called, it must return an iterator object. That iterator provides a next() method, which returns an object of the form { value, done }. Each call to next() yields the next value in the sequence until done becomes true, signalling that iteration has finished.
const iterable = {
data: [10, 20, 30],
[Symbol.iterator]() {
let i = 0;
return {
next: () => {
if (i < this.data.length) {
return { value: this.data[i++], done: false };
} else {
return { value: undefined, done: true };
}
}
};
}
};
for (const num of iterable) {
console.log(num);
}
This example defines a custom iterable object. Its Symbol.iterator method creates an iterator that returns numbers from the internal data array one by one. Each iteration retrieves the next value until there are no more, at which point the loop stops automatically.
Built-in types such as arrays, strings, maps, and sets already implement these protocols, which is why they work with for...of loops and the spread operator.
The distinction between iterables and iterators makes iteration flexible. A single iterable can produce multiple independent iterators, each tracking its own progress through the data. This allows separate loops to traverse the same collection without interfering with one another.
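For example, two iterators obtained from the same string advance independently:

```javascript
let word = 'hi';
let first = word[Symbol.iterator]();
let second = word[Symbol.iterator]();
console.log(first.next().value);  // h
console.log(first.next().value);  // i
console.log(second.next().value); // h — a fresh iterator starts from the beginning
```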
Do not confuse for...of with for...in. The former retrieves values from iterables using these protocols, while the latter enumerates property names of objects.
for...of and spread
The for...of loop is JavaScript’s most direct way to consume iterable objects. It automatically calls the object’s Symbol.iterator method to obtain an iterator, then repeatedly calls its next() method until done becomes true. Each value returned is assigned to the loop variable in sequence.
const colors = ["red", "green", "blue"];
for (const color of colors) {
console.log(color);
}
In this example, the array’s built-in iterator supplies each element in order. You do not need to handle Symbol.iterator or next() manually as the loop takes care of that behind the scenes. The for...of construct works with any object that implements the iterable protocol, including strings, maps, sets, and even custom iterables.
const message = "Hi!";
for (const ch of message) {
console.log(ch);
}
Here, the loop walks through each character of the string, because strings are also iterable.
The for...of loop reads values, not keys. To iterate over property names of an object, use for...in or Object.keys().
The spread operator (...) also uses the iterable protocol. When applied to an iterable, it expands the sequence into individual values. This makes it useful for copying, concatenating, or unpacking data from collections.
const nums = [1, 2, 3];
const copy = [...nums];
console.log(copy); // [1, 2, 3]
const extended = [...nums, 4, 5];
console.log(extended); // [1, 2, 3, 4, 5]
The spread operator can appear in array literals, function calls, and destructuring assignments. It provides a concise, readable way to work with iterables without manually looping through them.
function sum(a, b, c) {
return a + b + c;
}
const values = [10, 20, 30];
console.log(sum(...values)); // 60
Spreading a non-iterable value (such as a plain object) in an array literal or function call throws a TypeError. Use { ...obj } for object spreading instead, which is a distinct feature.
Generator functions and yield
Generators provide a special way to write functions that can pause and resume their execution. Instead of returning a single value and finishing immediately, a generator can produce multiple values over time, each separated by a yield expression. This makes them ideal for representing sequences, streams, and lazy computations.
A generator function is declared using an asterisk (*) after the function keyword. When called, it does not run its body immediately but instead returns an iterator-like object, known as a generator object. Each time the generator’s next() method is called, the function runs until it reaches a yield statement, pauses, and returns the yielded value.
function* numbers() {
yield 1;
yield 2;
yield 3;
}
const gen = numbers();
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: undefined, done: true }
Each call to next() resumes execution from where the generator last paused. When the function reaches its end (or a return statement), the iteration finishes with done: true.
Because generator objects are themselves iterable, you can consume them with a for...of loop without calling next() yourself.
for (const n of numbers()) {
console.log(n);
}
You can also send values into a generator by passing arguments to next(). Each yield expression acts like a two-way channel: it pauses the function and returns a value outward, then resumes with whatever value is passed back in.
function* greeter() {
const name = yield "What is your name?";
yield `Hello, ${name}!`;
}
const g = greeter();
console.log(g.next().value); // "What is your name?"
console.log(g.next("Robin").value); // "Hello, Robin!"
This feature allows generators to maintain internal state across multiple calls, effectively turning them into resumable coroutines that exchange data with the caller.
Generator functions cannot be called with new. They are designed to produce sequences, not objects.
By combining yield with control logic, you can model complex iterative processes cleanly and lazily, generating values only when needed rather than computing everything upfront.
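A classic illustration of this laziness is an infinite Fibonacci generator: the sequence is unbounded, yet each value is computed only when requested.

```javascript
function* fibonacci() {
  let a = 0, b = 1;
  while (true) { // infinite, but only runs on demand
    yield a;
    [a, b] = [b, a + b];
  }
}

const fib = fibonacci();
const firstFive = [];
for (let i = 0; i < 5; i++) {
  firstFive.push(fib.next().value);
}
console.log(firstFive); // [0, 1, 1, 2, 3]
```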
Generator use cases
Generators are more than just an alternative way to write loops. Because they can pause, resume, and maintain internal state, they enable elegant solutions to problems that would otherwise require complex callbacks, iterators, or manual bookkeeping. Their ability to produce values lazily (only when requested) makes them efficient for handling large or infinite data sequences.
Producing sequences lazily
Generators are perfect for generating sequences one step at a time. Instead of constructing an entire array in memory, a generator yields values as needed. This is particularly useful for potentially unbounded data, such as number ranges or stream-like structures.
function* countUpTo(limit) {
let n = 1;
while (n <= limit) {
yield n++;
}
}
for (const n of countUpTo(5)) {
console.log(n);
}
Each call to next() retrieves the next number without keeping the whole range in memory. The iteration stops automatically when the generator’s done flag becomes true.
Generators produce values lazily, but you can use [...countUpTo(5)] or Array.from() to collect their results when needed.
Stateful iteration
Generators can also encapsulate and preserve internal state across iterations. This makes them suitable for tasks such as producing unique IDs, repeating patterns, or managing timed events.
function* idGenerator() {
let id = 0;
while (true) {
yield ++id;
}
}
const ids = idGenerator();
console.log(ids.next().value); // 1
console.log(ids.next().value); // 2
console.log(ids.next().value); // 3
Unlike regular functions, which lose their local variables after returning, generators remember where they left off. Each subsequent call continues execution with the same local state intact.
Controlling asynchronous flow
Before the introduction of async/await, generators were commonly used to simplify asynchronous code. By combining them with promises and a small runner function, developers could write asynchronous logic in a synchronous-looking style.
function* fetchSequence() {
const a = yield fetch("/data1.json").then(r => r.json());
const b = yield fetch("/data2.json").then(r => r.json());
return [a, b];
}
Each yield pauses the generator until the corresponding promise resolves. Although async functions now offer a cleaner approach, this technique remains useful for understanding how asynchronous control flow evolved in JavaScript.
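A minimal version of such a runner might look like this. This is a sketch: run is an illustrative name, and Promise.resolve stands in for real fetch calls so the example is self-contained.

```javascript
// Drives a generator that yields promises, feeding each resolved
// value back into the generator until it finishes.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(input) {
      let result;
      try {
        result = gen.next(input);
      } catch (err) {
        return reject(err);
      }
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(step, reject);
    }
    step();
  });
}

// Usage with stand-in promises instead of network requests:
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  return a + b;
}).then(sum => console.log(sum)); // 3
```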
Custom iteration logic
Generators can define custom iteration rules for objects, allowing you to control how they are traversed by for...of or the spread operator. By implementing [Symbol.iterator] as a generator, you can make objects iterable with minimal code.
const team = {
members: ["Alex", "Robin", "Sam"],
*[Symbol.iterator]() {
for (const name of this.members) {
yield name;
}
}
};
for (const person of team) {
console.log(person);
}
In this case, the generator acts as the object’s iterator, providing a concise and readable way to expose its contents for iteration.
Implementing [Symbol.iterator] as a generator function is one of the cleanest ways to make custom data structures work naturally with for...of, spread, and destructuring.
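That claim is easy to demonstrate. Assuming the team object from the example above, the same iterable works directly with spread and destructuring:

```javascript
// Same shape as the team example above: a generator-based iterator.
const team = {
  members: ["Alex", "Robin", "Sam"],
  *[Symbol.iterator]() {
    for (const name of this.members) {
      yield name;
    }
  }
};

const everyone = [...team];        // ["Alex", "Robin", "Sam"]
const [lead, ...rest] = team;      // lead = "Alex", rest = ["Robin", "Sam"]
console.log(everyone, lead, rest);
```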
Async generators
Asynchronous generators extend the concept of regular generators to support asynchronous data streams. They allow you to yield promises and consume results as they resolve, which is ideal when working with data that arrives over time, such as network responses, file streams, or real-time events.
An asynchronous generator is declared by adding both async and * to the function keyword. It returns an asynchronous iterator, which is an object that has an async next() method returning a promise for each { value, done } pair. This allows iteration to be paused until each asynchronous step completes.
async function* fetchData() {
const urls = ["/data1.json", "/data2.json", "/data3.json"];
for (const url of urls) {
const response = await fetch(url);
const data = await response.json();
yield data;
}
}
To consume an asynchronous generator, use a for await...of loop (inside an async function, or at the top level of an ES module). This form of loop automatically waits for each promise to resolve before moving to the next iteration.
for await (const item of fetchData()) {
console.log(item);
}
Each yield in an async generator can represent an asynchronous operation. This design pattern simplifies working with streams of data that may not all be available at once, allowing you to process each item as soon as it arrives rather than waiting for the full set.
Async generators combine the familiarity of for...of iteration with the power of asynchronous control flow. They provide a clean alternative to manual promise chaining or nested then() calls when handling sequential asynchronous tasks.
Manual consumption of async generators
You can also consume async generators manually by calling their next() method, which returns a promise that resolves to a result object. This gives fine-grained control over when each step executes.
const gen = fetchData();
(async () => {
console.log(await gen.next()); // first item
console.log(await gen.next()); // second item
console.log(await gen.next()); // third item
})();
Manual iteration is useful when you need to coordinate generator progress with other asynchronous operations or manage flow explicitly.
Error handling
Because async generators operate asynchronously, errors within them are raised as rejected promises. You can handle these errors using try...catch blocks inside the generator body or around the for await...of loop that consumes it.
async function* getData() {
try {
yield await fetch("/ok.json").then(r => r.json());
yield await fetch("/missing.json").then(r => r.json());
} catch (err) {
console.error("Fetch failed:", err);
}
}
for await (const item of getData()) {
console.log(item);
}
Each await inside an async generator pauses execution until its promise resolves. Use caution when combining many awaits inside long-running loops, as this can delay results unnecessarily if operations could run concurrently.
Async generators elegantly merge iteration and asynchronous programming. They allow you to stream results progressively, responding to data as it arrives rather than waiting for entire collections to load into memory.
Chapter 14: Errors and Exceptions
Even the best-written programs can fail. Networks disconnect, files go missing, input is invalid, or assumptions simply turn out to be wrong. In JavaScript, such unexpected situations are handled through a structured mechanism known as exceptions. By learning how to detect, throw, and handle errors properly, you can make your programs far more resilient and predictable.
JavaScript provides a consistent model for signalling and recovering from problems. You can raise an error intentionally with the throw statement, intercept it using try and catch blocks, and ensure necessary cleanup through finally. This approach allows you to separate normal logic from failure handling, keeping code both robust and readable.
In this chapter, you will explore how exceptions work, the different Error types available, and how to define your own custom error classes for clearer intent. You will see how control flow behaves when exceptions occur, and how to use defensive coding techniques to prevent small problems from cascading into major failures.
By the end of this chapter, you will be able to write code that anticipates failure, responds appropriately, and provides helpful feedback instead of leaving users with cryptic messages or silent errors. Robust programs fail predictably, not mysteriously.
throw and Error types
JavaScript uses the throw statement to signal that something has gone wrong. When a value is thrown, normal execution stops immediately and control passes to the nearest matching catch block higher in the call stack. If no handler is found, the program (or the current script context) terminates and displays an error message.
function divide(a, b) {
if (b === 0) {
throw new Error("Cannot divide by zero");
}
return a / b;
}
console.log(divide(10, 2)); // 5
console.log(divide(10, 0)); // Uncaught Error: Cannot divide by zero
In this example, the throw statement interrupts execution of the divide() function when an invalid argument is detected. The thrown object (in this case an instance of Error) contains information about what went wrong. You can throw any value, but using Error objects ensures consistent formatting and useful diagnostic data.
Prefer throwing Error instances rather than plain strings or other values. They include a stack trace and integrate better with debugging tools.
The Error constructor
The Error constructor creates an error object with a message property and, in most environments, a stack trace showing where the error occurred.
const err = new Error("Something went wrong");
console.log(err.name); // "Error"
console.log(err.message); // "Something went wrong"
console.log(err.stack); // stack trace
Different kinds of errors inherit from Error, allowing more specific handling of distinct failure types.
Built-in error types
JavaScript defines several standard error types for common situations:
Error — a generic error for general-purpose use.
ReferenceError — accessing an undeclared variable.
TypeError — performing an invalid operation on a value, such as calling something that is not a function.
RangeError — providing a number outside an allowable range.
SyntaxError — invalid JavaScript syntax, usually thrown during parsing.
URIError — invalid use of URI-handling functions such as decodeURIComponent().
EvalError — historically used for eval() errors, now rarely encountered.
try {
let x = undeclared; // ReferenceError
} catch (e) {
console.log(e.name); // "ReferenceError"
console.log(e.message); // "undeclared is not defined"
}
By identifying the name property, you can distinguish between different error types and handle them appropriately.
SyntaxError exceptions raised while a script is being parsed occur before execution begins, so they cannot be caught by try...catch in that same script. Runtime SyntaxErrors, such as those thrown by JSON.parse(), can be caught normally.
Understanding how to throw and identify errors is the foundation for effective exception handling. In the next section, you will see how to catch and manage them cleanly with try, catch, and finally blocks.
try, catch, and finally
The try...catch statement allows your code to anticipate and respond to runtime errors without halting execution. Code that might throw an exception is placed inside a try block, and any resulting error is passed to the catch block, where it can be handled gracefully. An optional finally block runs after either of these, regardless of whether an error occurred, making it ideal for cleanup operations.
try {
console.log("Starting calculation...");
const result = divide(10, 0); // throws an Error
console.log("Result:", result); // skipped
} catch (err) {
console.error("An error occurred:", err.message);
} finally {
console.log("Calculation complete.");
}
In this example, once the divide() function throws an error, control jumps immediately to the catch block. The finally section then executes regardless of what happened inside the try block, ensuring that essential code (such as closing connections or releasing resources) always runs.
Use finally for cleanup tasks that must happen even when errors occur, such as closing files, resetting state, or releasing locks.
Catching and inspecting errors
The catch block receives a single parameter representing the thrown value. In most cases, this will be an Error object. You can inspect its properties to determine the cause and act accordingly.
try {
JSON.parse("invalid");
} catch (err) {
console.log(err.name); // "SyntaxError"
console.log(err.message); // e.g. "Unexpected token i in JSON at position 0" (exact wording varies by engine)
}
If you only need to handle a problem without referencing the specific error object, you can omit the identifier entirely:
try {
riskyOperation();
} catch {
console.log("Something went wrong.");
}
Nested and rethrown errors
You can nest try...catch blocks to handle different levels of failure, or rethrow an error after performing partial handling. This is useful when you need to log information but still want the error to propagate upward.
function processData() {
try {
riskyStep();
} catch (err) {
console.warn("Minor issue:", err.message);
throw err; // rethrow so outer handler can respond
}
}
try {
processData();
} catch (err) {
console.error("Fatal error:", err.message);
}
Rethrowing preserves the original stack trace, which helps in diagnosing where an error originated.
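A related, more modern option (standardised in ES2022) is the cause option to the Error constructor, which lets you wrap a low-level error in a higher-level one without losing the original:

```javascript
// Wrap a low-level error in a domain-specific one, keeping the
// original available on the .cause property (ES2022).
function loadProfile() {
  try {
    JSON.parse("{ bad json");
  } catch (err) {
    throw new Error("Could not load profile", { cause: err });
  }
}

try {
  loadProfile();
} catch (err) {
  console.log(err.message);     // "Could not load profile"
  console.log(err.cause.name);  // "SyntaxError"
}
```

This gives callers a readable high-level message while preserving the underlying diagnostic detail for logging.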
The finally block
The finally block executes after both try and catch blocks finish, regardless of whether an error was thrown or caught. It runs even if you use return statements inside the preceding blocks.
function example() {
try {
return "try result";
} finally {
console.log("This still runs");
}
}
console.log(example()); // logs "This still runs" then "try result"
This guarantees that cleanup code executes predictably, even when control flow exits early.
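This power comes with a sharp edge: a return statement inside finally overrides any pending return value or exception, silently discarding thrown errors:

```javascript
function risky() {
  try {
    throw new Error("boom");
  } finally {
    // A return inside finally wins over any pending exception
    // or return value from the try block.
    return "recovered";
  }
}

console.log(risky()); // "recovered" — the thrown error is silently discarded
```

For this reason, most style guides advise against returning (or throwing) from inside a finally block.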
Use finally sparingly, for essential post-processing tasks. Placing too much logic in it can make code flow harder to follow.
Together, try, catch, and finally give JavaScript a robust mechanism for managing unexpected conditions while keeping programs stable and responsive.
Creating custom errors
While JavaScript provides several built-in Error types, there are times when defining your own can make debugging and handling problems clearer. Custom error classes let you describe failures more precisely, distinguishing between different kinds of issues in your program. This approach is especially helpful in larger codebases or libraries where users need to know exactly what went wrong.
Custom errors are typically created by extending the Error class. Your subclass can define a specific name and include any extra properties that describe the problem in more detail.
class ValidationError extends Error {
constructor(message, field) {
super(message);
this.name = "ValidationError";
this.field = field;
}
}
function validateUser(user) {
if (!user.name) {
throw new ValidationError("Missing name", "name");
}
if (user.age < 18) {
throw new ValidationError("User must be 18 or older", "age");
}
return true;
}
try {
validateUser({ age: 15 });
} catch (err) {
if (err instanceof ValidationError) {
console.error(`${err.name} in field "${err.field}": ${err.message}`);
} else {
throw err;
}
}
This pattern allows you to differentiate between validation errors, network errors, and other unexpected issues, and handle each one appropriately.
Always call super(message) in the constructor of your custom error class. This ensures that the standard error message and stack trace are created correctly.
Adding context to errors
Custom errors can also include extra context such as error codes, status values, or metadata to help identify and report problems more effectively.
class HttpError extends Error {
constructor(status, url) {
super(`Request failed with status ${status}`);
this.name = "HttpError";
this.status = status;
this.url = url;
}
}
async function fetchData(url) {
const response = await fetch(url);
if (!response.ok) {
throw new HttpError(response.status, url);
}
return response.json();
}
try {
await fetchData("/missing.json");
} catch (err) {
if (err instanceof HttpError) {
console.error(`Failed to load ${err.url}: ${err.status}`);
} else {
console.error("Unknown error:", err);
}
}
By using descriptive error names and extra fields, you make debugging easier and allow calling code to handle different failure types intelligently.
Clear, structured error classes communicate intent far better than generic messages. They help ensure that when something fails, it does so in a way that is easy to understand, diagnose, and recover from.
Control flow with errors
Exceptions are not only for signalling fatal problems; they can also influence the flow of execution. In certain situations, you may intentionally throw and catch errors to exit deeply nested operations or redirect logic when specific conditions arise. Used carefully, this technique can simplify complex control paths that would otherwise require multiple checks or flags.
function findItem(items, target) {
for (const item of items) {
if (item === target) {
throw new Error("Found target");
}
}
console.log("Target not found");
}
try {
findItem(["a", "b", "c", "d"], "c");
} catch (err) {
if (err.message === "Found target") {
console.log("Search complete: item found.");
} else {
throw err;
}
}
Here, throwing an error provides a quick escape from a loop when the target is found. Although this works, it is better viewed as a special-case control mechanism, not a substitute for normal conditionals or returns.
Stack unwinding
When an error is thrown, JavaScript immediately stops the current function and searches up the call stack for a matching catch block. This process is called stack unwinding. It continues until either a suitable handler is found or the stack is exhausted.
function level1() {
level2();
}
function level2() {
level3();
}
function level3() {
throw new Error("Something failed in level3");
}
try {
level1();
} catch (err) {
console.error("Caught:", err.message);
}
In this example, the error originates inside level3(), but it propagates upward until it reaches the try...catch surrounding level1(). Understanding this process is key to designing reliable functions that either handle or forward errors appropriately.
Propagating errors
Not every function should handle the errors it encounters. Sometimes, it is better to let them propagate so that higher-level code (such as a central controller or user interface layer) can decide what to do. This approach avoids duplicating error-handling logic in many places.
function parseJSON(data) {
return JSON.parse(data); // may throw SyntaxError
}
function loadData() {
const raw = '{ "name": "Robin" }';
return parseJSON(raw);
}
try {
console.log(loadData());
} catch (err) {
console.error("Failed to load data:", err.message);
}
Here, the SyntaxError thrown by JSON.parse() travels up until it reaches the outer catch. Each layer delegates responsibility to the next, keeping functions focused on their main tasks.
Intentional error throwing for validation
It is sometimes useful to throw specific errors as part of input validation or contract enforcement. This makes failures explicit and consistent, helping detect invalid use of functions early.
function setAge(age) {
if (typeof age !== "number" || age < 0) {
throw new RangeError("Age must be a non-negative number");
}
console.log("Age set to", age);
}
try {
setAge(-3);
} catch (err) {
if (err instanceof RangeError) {
console.warn("Invalid input:", err.message);
} else {
throw err;
}
}
Using targeted exceptions like this keeps validation logic concise and allows calling code to handle bad input predictably.
Handled properly, exceptions provide a structured form of control flow that separates normal operation from failure states. They allow programs to recover gracefully, skip unnecessary steps, and report problems clearly without losing track of what went wrong.
Defensive coding patterns
Defensive programming is the practice of writing code that anticipates and resists failure. Rather than assuming everything will work as expected, defensive code checks inputs, validates assumptions, and handles unexpected conditions gracefully. This approach reduces the likelihood of runtime errors and makes your programs more predictable and maintainable.
In JavaScript, defensive coding often means validating arguments before using them, providing sensible defaults, and catching errors in places where external data or unpredictable conditions are involved. The goal is not to eliminate all errors, which is impossible in real-world systems, but to contain and control them.
Validating inputs
Many runtime errors can be prevented by validating data at the boundaries of your program, which is where user input, network responses, or file contents enter your system.
function calculateTotal(price, quantity) {
if (typeof price !== "number" || typeof quantity !== "number") {
throw new TypeError("Both price and quantity must be numbers");
}
if (price < 0 || quantity < 0) {
throw new RangeError("Values must be non-negative");
}
return price * quantity;
}
try {
console.log(calculateTotal(10, 3)); // 30
console.log(calculateTotal("10", 3)); // throws
} catch (err) {
console.error(err.message);
}
By checking values early, you make errors explicit and easier to diagnose, rather than letting them cause subtle logic problems later in execution.
Using default values and guards
When input may be missing or incomplete, providing safe defaults prevents code from failing unexpectedly. Guard clauses (early exits based on simple checks) make functions easier to read and reduce nested logic.
function greet(name) {
if (!name) {
console.warn("No name provided, using default.");
name = "Guest";
}
console.log(`Hello, ${name}!`);
}
greet("Robin");
greet();
Guard clauses can also stop invalid operations before they start, improving clarity and reducing the chance of cascading errors.
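The same intent can be expressed declaratively with default parameters. Note the subtle difference: a default parameter only replaces undefined, whereas the truthiness guard above also treats values like "" as missing:

```javascript
// Default parameter: applied only when the argument is undefined.
function greet(name = "Guest") {
  return `Hello, ${name}!`;
}

console.log(greet());        // "Hello, Guest!"
console.log(greet("Robin")); // "Hello, Robin!"
console.log(greet(""));      // "Hello, !" — "" is NOT replaced by the default
```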
Failing fast and clearly
When a serious problem occurs, fail fast. Stop execution at the point of failure instead of letting the program continue in an uncertain state. This helps ensure that errors are visible during development and testing, not hidden or ignored.
function loadConfig(config) {
if (!config || typeof config !== "object") {
throw new Error("Invalid configuration object");
}
// continue safely knowing config is valid
}
Failing early gives a clear signal about what went wrong and where. Combined with well-designed error messages, it simplifies debugging and speeds up problem resolution.
Graceful degradation
In production environments, failing completely is not always acceptable. Graceful degradation ensures that even if part of the system fails, the rest continues functioning in a limited but stable way.
function getUserSetting(key) {
try {
const value = localStorage.getItem(key);
return value ? JSON.parse(value) : null;
} catch {
console.warn("Could not read setting, using default.");
return null;
}
}
Here, an invalid JSON string in localStorage will not crash the application. Instead, a warning is shown and a fallback value is used, allowing the program to continue safely.
Centralising error handling
Large applications benefit from having a single, consistent point for handling unexpected errors. Centralised handlers can log details, alert developers, or show user-friendly messages without duplicating logic across many parts of the program.
window.addEventListener("error", e => {
console.error("Global error:", e.message);
});
window.addEventListener("unhandledrejection", e => {
console.error("Unhandled promise rejection:", e.reason);
});
This pattern ensures that even unanticipated problems are captured and logged, improving observability and stability in production.
By combining clear validation, early failure, and robust handling, you can build software that behaves predictably under pressure and provides clear information when things go wrong. Defensive coding keeps your JavaScript reliable, maintainable, and ready for the unpredictable nature of real-world execution.
Chapter 15: Asynchronous JavaScript
JavaScript is single-threaded, meaning it executes one operation at a time in sequence. This design keeps the language simple and predictable, but it also means that long-running tasks (such as network requests, timers, or file operations) could otherwise block execution and make applications unresponsive.
To overcome this, JavaScript introduced asynchronous programming models that let code continue running while waiting for other operations to complete. These include callbacks, promises, and the async and await keywords, which together form the foundation of modern non-blocking JavaScript.
This chapter explores the evolution of asynchronous handling in JavaScript from early callback functions and their inversion-of-control issues, through to the elegance and clarity provided by promises and async/await. You will also learn about handling errors in asynchronous flows, and how to manage concurrent tasks with utilities such as Promise.all() and Promise.race().
Callbacks and the inversion problem
Before promises and async/await, the primary way to handle asynchronous operations in JavaScript was through callbacks. A callback is simply a function passed as an argument to another function, to be executed after a task completes. This allows you to delay execution of certain code until the required data or event is ready.
function getData(callback) {
setTimeout(() => {
callback("Data received");
}, 1000);
}
getData((result) => {
console.log(result);
});
In this example, the getData() function simulates an asynchronous operation using setTimeout(). It takes a callback function that runs once the simulated delay is over. This pattern became the standard way to manage non-blocking behavior in JavaScript for many years.
However, as applications grew in complexity, callbacks often led to a structure known as “callback hell,” where nested asynchronous operations became deeply indented and hard to manage. Each step depended on the previous one, resulting in tangled, hard-to-read code.
doStep1((result1) => {
doStep2(result1, (result2) => {
doStep3(result2, (result3) => {
doStep4(result3, (result4) => {
console.log("All done");
});
});
});
});
This inversion of control, where you hand over execution flow to multiple callbacks, made debugging, error handling, and reasoning about program order extremely difficult. The more dependent tasks there were, the worse the structure became.
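One convention that brought some order to callbacks is the Node.js "error-first" style, where the callback's first parameter is reserved for an error (or null on success) and results follow:

```javascript
// Error-first callback convention: callback(error, result).
function readConfig(callback) {
  setTimeout(() => {
    const succeeded = true; // simulate a successful asynchronous read
    if (succeeded) {
      callback(null, { retries: 3 });
    } else {
      callback(new Error("read failed"));
    }
  }, 10);
}

readConfig((err, config) => {
  if (err) {
    console.error("Error:", err.message);
    return;
  }
  console.log("Config:", config); // { retries: 3 }
});
```

Checking err first in every callback made error handling at least consistent, but it still had to be repeated manually at each level of nesting.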
The limitations of this model motivated the creation of promises, which offer a cleaner, more predictable way to manage asynchronous behavior without the deep nesting that callbacks require.
Promises and chaining
Promises were introduced to make asynchronous code more structured and readable. A Promise represents a value that will be available now, later, or never, depending on whether the asynchronous operation succeeds or fails. It acts as a placeholder for a future result and provides methods to handle completion or failure in a clean, sequential manner.
const dataPromise = new Promise((resolve, reject) => {
setTimeout(() => {
resolve("Data loaded");
}, 1000);
});
dataPromise.then((result) => {
console.log(result);
});
Here, new Promise() takes a function with two parameters: resolve (to indicate success) and reject (to indicate failure). When resolve() is called, any .then() handler attached to the promise is executed. This allows asynchronous tasks to be expressed in a linear, chainable fashion.
Promises also allow chaining through multiple .then() calls. Each .then() returns a new promise, so you can process results step by step in a clear, top-down manner.
fetch("data.json")
.then((response) => response.json())
.then((data) => {
console.log("Parsed data:", data);
})
.catch((error) => {
console.error("Error:", error);
});
Each .then() receives the resolved value from the previous step. If any error occurs, the .catch() at the end handles it, no matter where the rejection happened in the chain. This makes error propagation simpler and avoids multiple nested callback levels.
If a .then() handler throws an error or returns a rejected promise, the chain skips directly to the next .catch(). Always include .catch() when chaining promises to prevent unhandled rejections.
By structuring asynchronous operations as promise chains, code becomes easier to read and maintain. Instead of deeply nested callbacks, each asynchronous step forms part of a logical sequence, setting the stage for the cleaner async and await syntax that builds on this foundation.
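The value-passing behaviour of a chain is easiest to see without any network calls; each handler's return value becomes the resolved value of the next step:

```javascript
Promise.resolve(2)
  .then((n) => n * 3)   // the returned 6 resolves the next promise in the chain
  .then((n) => n + 1)   // receives 6, returns 7
  .then((n) => console.log("Final:", n)); // Final: 7
```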
async and await
The async and await keywords provide a modern, elegant way to work with promises in JavaScript. They let you write asynchronous code that looks and behaves more like synchronous code, making it easier to follow the program’s logic and handle results or errors.
Declaring a function as async automatically makes it return a promise. Inside that function, the await keyword can be used to pause execution until a promise resolves, then resume with the resolved value.
async function loadData() {
const response = await fetch("data.json");
const data = await response.json();
console.log("Data:", data);
}
loadData();
In this example, the function waits for the fetch request to complete and then for the JSON to be parsed, without blocking the main thread. The code appears linear, but each await allows other operations to run while waiting for the asynchronous task to finish.
The await keyword can only be used inside functions declared with async (or at the top level of an ES module). If you try to use it elsewhere, a syntax error will occur.
Because async functions return promises, you can still chain them or combine them with other asynchronous patterns. This makes it simple to mix older and newer styles of asynchronous handling in the same codebase.
async function getUserData() {
try {
const res = await fetch("/user");
const user = await res.json();
return user;
} catch (err) {
console.error("Failed to load user:", err);
}
}
getUserData().then((u) => console.log("User:", u));
Here, errors are handled using standard try and catch syntax, just as in synchronous code. Any error thrown inside an async function rejects the returned promise, which can then be caught with .catch() if desired.
When you await multiple independent asynchronous calls one after another, the code runs slower than necessary. Use Promise.all() to execute them concurrently when they do not depend on each other.
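The difference is easiest to see with a small illustrative delay() helper standing in for real asynchronous work:

```javascript
// delay() is an illustrative helper: resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function sequential() {
  const a = await delay(50, "a"); // second timer starts only after the first finishes
  const b = await delay(50, "b"); // total ≈ 100 ms
  return [a, b];
}

async function concurrent() {
  // Both timers start immediately, then we wait for both: total ≈ 50 ms
  return Promise.all([delay(50, "a"), delay(50, "b")]);
}

concurrent().then((results) => console.log(results)); // ["a", "b"]
```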
The combination of async and await represents the most readable and natural approach to asynchronous programming in JavaScript. It combines the clarity of synchronous code with the non-blocking behavior of promises, forming the backbone of most modern JavaScript applications.
Error handling with promises
When working with asynchronous code, handling errors effectively is just as important as managing results. Promises provide a built-in mechanism for dealing with both successful and failed outcomes, making error propagation predictable and easy to control.
A promise can either resolve successfully or reject with an error. You can handle both cases by chaining .then() and .catch() methods, or by using async and await with try/catch blocks.
fetch("missing.json")
.then((response) => {
if (!response.ok) {
throw new Error("Network error");
}
return response.json();
})
.then((data) => {
console.log("Data:", data);
})
.catch((error) => {
console.error("Error caught:", error.message);
});
Here, if the file cannot be fetched or if response.ok is false, the throw statement rejects the promise. The .catch() at the end of the chain then receives the error, ensuring that all asynchronous issues are handled in one consistent place.
An error thrown inside a .then() callback automatically rejects the promise it returns, so you do not need to wrap every operation in try/catch when using promise chains.
When using async and await, the same principle applies, but you can handle errors using familiar synchronous syntax:
async function loadData() {
try {
const response = await fetch("data.json");
const data = await response.json();
console.log("Loaded:", data);
} catch (error) {
console.error("Failed:", error.message);
}
}
loadData();
If any awaited promise rejects, control jumps immediately to the nearest catch block. This makes asynchronous error handling as straightforward as in regular synchronous code.
Always attach a .catch() or use try/catch around awaited code to prevent silent failures.
By treating rejections like exceptions, promises and async/await provide a clear and reliable way to handle errors in asynchronous programs. This consistency allows developers to build robust systems without resorting to deeply nested callback logic.
Concurrency with Promise.all and race
While await makes asynchronous code easier to read, using it sequentially can cause unnecessary delays when multiple tasks can run at the same time. JavaScript provides built-in utilities such as Promise.all() and Promise.race() to handle concurrency efficiently, allowing several asynchronous operations to proceed together.
Promise.all() takes an array (or iterable) of promises and returns a single promise that resolves when all of them have fulfilled, or rejects immediately if any of them fail.
const p1 = fetch("/user");
const p2 = fetch("/posts");
const p3 = fetch("/comments");
Promise.all([p1, p2, p3])
.then(async ([userRes, postRes, commentRes]) => {
const [user, posts, comments] = await Promise.all([
userRes.json(),
postRes.json(),
commentRes.json()
]);
console.log("User:", user);
console.log("Posts:", posts);
console.log("Comments:", comments);
})
.catch((err) => {
console.error("One or more requests failed:", err);
});
In this example, all three fetch requests start at the same time, and the program waits for all to complete before proceeding. This reduces total waiting time compared with awaiting them one by one.
With Promise.all(), if any promise rejects, the entire operation fails immediately. If you want all results regardless of success or failure, use Promise.allSettled() instead.
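Promise.allSettled() never rejects; instead it reports each promise's outcome as a { status, value } or { status, reason } object:

```javascript
Promise.allSettled([
  Promise.resolve(1),
  Promise.reject(new Error("nope"))
]).then((results) => {
  console.log(results[0]); // { status: "fulfilled", value: 1 }
  console.log(results[1].status, results[1].reason.message); // "rejected" "nope"
});
```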
Promise.race() works differently. It resolves or rejects as soon as any of the supplied promises settles, returning the value or error from that first-completed promise. This can be useful for implementing timeouts or picking the fastest response among several sources.
const fetchWithTimeout = Promise.race([
fetch("/slow-endpoint"),
new Promise((_, reject) =>
setTimeout(() => reject(new Error("Timeout after 2s")), 2000)
)
]);
fetchWithTimeout
.then((res) => console.log("Response received:", res))
.catch((err) => console.error(err.message));
Here, whichever operation finishes first (either the network request or the timeout) determines the outcome of the race. This technique is especially helpful for improving responsiveness in web applications.
Because Promise.race() settles after the first result, it does not cancel other promises still running. Those operations will continue in the background unless explicitly aborted.
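If you do need to stop the losing operation, an AbortController can be wired in. The abortableDelay() helper below is illustrative; the same signal can be passed to APIs such as fetch() that support it:

```javascript
// A cancellable delay that respects an AbortSignal (illustrative helper).
function abortableDelay(ms, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);               // stop the pending timer
      reject(new Error("Aborted"));      // settle the promise as rejected
    });
  });
}

const controller = new AbortController();
abortableDelay(5000, controller.signal).catch((err) => {
  console.log(err.message); // "Aborted"
});
controller.abort(); // cancel immediately instead of waiting 5 seconds
```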
Using Promise.all() and Promise.race() wisely lets you balance concurrency, reliability, and speed. Together with async and await, they complete the core set of tools for managing asynchronous execution in modern JavaScript.
Chapter 16: The Event Loop and Concurrency Model
JavaScript’s concurrency model is one of its most distinctive features. Although JavaScript runs on a single thread, it can still handle multiple tasks efficiently without blocking the main execution flow. This is made possible by the event loop, a mechanism that coordinates the execution of synchronous code, asynchronous callbacks, promises, and scheduled events.
Understanding how the event loop works is essential for mastering asynchronous programming in JavaScript. It explains why some operations appear to happen “in the background,” why certain callbacks run later than others, and how tasks, microtasks, and timers all fit together into a predictable execution model.
This chapter explores how JavaScript manages concurrency behind the scenes. You will learn about the call stack and event queues, the difference between macrotasks and microtasks, how timers and scheduling functions like setTimeout() and setImmediate() behave, and how web workers and worker threads can offload computation onto parallel processes.
By the end of this chapter, you will understand how JavaScript’s single-threaded engine can handle multiple operations at once through non-blocking design, and how to use this knowledge to avoid timing issues and race conditions in your own programs.
Call stack, queue, and microtasks
At the heart of JavaScript’s concurrency model lies the call stack and the event loop. The call stack is where the JavaScript engine keeps track of the currently executing functions. Each time a function is called, a new frame is pushed onto the stack. When the function finishes, that frame is popped off, and control returns to the previous one.
function first() {
second();
console.log("First done");
}
function second() {
console.log("Second done");
}
first();
When this code runs, first() is pushed onto the call stack, followed by second(). Once second() finishes, it is removed, and first() resumes. This strict top-down order ensures predictable, synchronous execution.
But what happens when you use asynchronous operations like setTimeout() or a fetch() request? Those functions hand off their work to the browser or runtime environment, allowing JavaScript to continue running other code. When the operation completes, a callback or promise resolution is queued for later execution.
console.log("Start");
setTimeout(() => {
console.log("Timeout callback");
}, 0);
console.log("End");
Even though the timeout is set to zero, the output will always be:
Start
End
Timeout callback
This happens because the callback is placed in the task queue, waiting for the call stack to clear before being executed. The event loop continually checks whether the stack is empty, then processes queued callbacks in order.
In addition to the regular task queue, there is also a microtask queue (sometimes called the job queue), which holds promise callbacks and certain other operations. Microtasks run after the current script completes but before the next task from the main queue is processed.
console.log("A");
Promise.resolve().then(() => console.log("B"));
console.log("C");
This code prints:
A
C
B
The promise callback runs as a microtask—after the current stack (which prints A and C) is complete, but before the next macrotask begins. This ordering ensures promise resolution remains tightly coupled to the code that scheduled it.
By understanding how the call stack, task queue, and microtask queue interact through the event loop, you gain a deeper grasp of how JavaScript schedules work, and why asynchronous behavior in the language is both powerful and predictable.
Timers and scheduling
JavaScript provides several functions for scheduling code to run at a later time, including setTimeout(), setInterval(), and in some environments, setImmediate(). These functions enable delayed or repeated execution without blocking the main thread, forming the basis for animations, polling, and deferred logic.
The most common is setTimeout(), which schedules a callback to run once after a specified delay in milliseconds:
console.log("Start");
setTimeout(() => {
console.log("Timeout fired");
}, 1000);
console.log("End");
Even if the delay is set to zero, the callback is not executed immediately. Instead, it is placed in the task queue, and will only run once the call stack is clear and all microtasks have finished.
The delay passed to setTimeout() is the minimum wait time before the callback runs. Actual timing depends on system load, current queue length, and browser throttling behavior.
The setInterval() function works similarly but repeats execution at a given interval until cancelled. It is useful for periodic updates such as refreshing data or updating a timer display.
let count = 0;
const timer = setInterval(() => {
count++;
console.log("Tick", count);
if (count === 3) {
clearInterval(timer);
console.log("Stopped");
}
}, 1000);
This example prints “Tick” three times, then stops the interval using clearInterval(). Each callback runs as a separate task, meaning that if one iteration takes longer than the interval, the next call will be delayed until the stack is free again.
For smooth animations, avoid timer-based scheduling and use the requestAnimationFrame() API instead, which runs callbacks in sync with the browser's repaint cycle.
Some environments (like Node.js) provide additional scheduling mechanisms such as setImmediate() and process.nextTick(). These differ slightly in timing. process.nextTick() queues a microtask (executed before other tasks), while setImmediate() queues a regular macrotask to run after I/O events complete.
setTimeout(() => console.log("Timeout"));
setImmediate(() => console.log("Immediate"));
process.nextTick(() => console.log("Next tick"));
In Node.js, this will typically print:
Next tick
Timeout
Immediate
Understanding when these scheduling functions execute relative to the call stack, task queue, and microtask queue helps ensure that your asynchronous code behaves predictably and efficiently across both browsers and server environments.
Tasks vs microtasks in practice
Although both tasks and microtasks represent units of work scheduled by the event loop, they run at different times and have distinct priorities. Understanding how they interact is crucial for writing responsive and predictable JavaScript applications.
A task (sometimes called a macrotask) is a general piece of work that enters the event queue, such as the execution of a setTimeout() callback, a DOM event handler, or a message from a web worker. A microtask, on the other hand, is a smaller job that runs immediately after the current task completes but before the next one begins. Promises, queueMicrotask(), and process.nextTick() in Node.js all schedule microtasks.
console.log("A");
setTimeout(() => console.log("B"), 0);
Promise.resolve().then(() => console.log("C"));
console.log("D");
Even though both setTimeout() and the promise are asynchronous, the output will always be:
A
D
C
B
This happens because microtasks (like the promise callback) run after the current task completes but before any new tasks are taken from the queue. The event loop gives them priority to ensure that small, immediate follow-up actions are completed as soon as possible.
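The queueMicrotask() function mentioned above schedules a microtask directly, without creating a promise. A small sketch showing that it shares the same FIFO microtask queue as promise callbacks:

```javascript
console.log("script start");

// Both of these join the microtask queue, in the order they were scheduled.
queueMicrotask(() => console.log("microtask via queueMicrotask"));
Promise.resolve().then(() => console.log("microtask via promise"));

// This joins the (macro)task queue and runs after all microtasks.
setTimeout(() => console.log("macrotask via setTimeout"), 0);

console.log("script end");
// Output:
// script start
// script end
// microtask via queueMicrotask
// microtask via promise
// macrotask via setTimeout
```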
Here's a compact and intuitive way to make the event loop's task scheduling visible:
const unsorted = [1, 10, 7, 3, 4, 8, 2, 9, 6, 5];
unsorted.forEach((item) => {
setTimeout(() => {
console.log(item);
}, item * 10);
});
Each number’s delay determines when its callback joins the task queue. Smaller delays expire first, so the numbers 1 through 10 appear in ascending order on the console, even though the array itself was never sorted. This illustrates how asynchronous scheduling can appear to organize output purely through timing.
Consider another example that shows how microtasks can delay other queued operations:
setTimeout(() => console.log("Timeout"), 0);
Promise.resolve().then(function handler() {
console.log("Microtask");
Promise.resolve().then(handler);
});
This creates an infinite loop of microtasks that will continuously execute and prevent the timeout from ever running, effectively blocking the event loop. It demonstrates that although microtasks run quickly, too many of them can starve the task queue.
In practice, you can think of tasks as major steps in execution (like handling an event or completing an API call), and microtasks as smaller, immediate reactions (like promise resolutions or cleanup steps). This distinction is what allows JavaScript to appear concurrent while still executing in a single-threaded, deterministic order.
Web workers and worker threads
Although JavaScript itself is single-threaded, it can still perform true parallel work using workers. Web workers (in browsers) and worker threads (in Node.js) allow code to run on separate threads, offloading computation-intensive or blocking operations from the main event loop. This keeps user interfaces responsive and prevents long-running tasks from freezing an application.
A web worker runs in its own isolated context, without access to the DOM or main thread variables. Communication occurs through message passing, using postMessage() and the message event. Data is copied or transferred between contexts, not shared directly.
// main.js
const worker = new Worker("worker.js");
worker.onmessage = (event) => {
console.log("From worker:", event.data);
};
worker.postMessage("Hello from main thread");
// worker.js
onmessage = (event) => {
console.log("Worker received:", event.data);
postMessage("Message processed");
};
In this example, both files execute independently. The main thread sends a message to the worker, which performs its task and sends back a response. This message-based model avoids shared state, eliminating many concurrency hazards.
Node.js offers a similar concept through the worker_threads module, enabling multiple threads within the same process. Unlike web workers, Node’s worker threads can share memory through SharedArrayBuffer objects, allowing more efficient data exchange when needed.
// main.js
import { Worker } from "node:worker_threads";
const worker = new Worker("./task.js");
worker.on("message", (msg) => console.log("From worker:", msg));
worker.postMessage({ count: 5 });
// task.js
import { parentPort } from "node:worker_threads";
parentPort.on("message", (msg) => {
let result = 0;
for (let i = 0; i < msg.count; i++) result += i;
parentPort.postMessage({ result });
});
Each worker thread has its own event loop and memory space but can exchange data efficiently with the parent thread. This makes worker threads ideal for CPU-heavy operations in server-side JavaScript, where performance and concurrency matter.
Workers expand JavaScript’s capabilities beyond the single-threaded model, allowing parallel computation while preserving the event loop’s simplicity. They are the preferred tool for heavy background processing in both browser and Node.js environments.
Avoiding race conditions
Even though JavaScript runs on a single thread, asynchronous code can still cause race conditions, which are situations where the timing of events produces unpredictable results. These occur when multiple operations depend on shared data or sequence, but execute in an unexpected order due to asynchronous scheduling.
For example, two asynchronous requests might try to update the same variable:
let userName = "";
fetch("/user/1").then((res) => res.json()).then((data) => {
userName = data.name;
console.log("Loaded user:", userName);
});
fetch("/user/2").then((res) => res.json()).then((data) => {
userName = data.name;
console.log("Loaded user:", userName);
});
If the second request completes before the first, it will overwrite userName unexpectedly. Both tasks run asynchronously, and their completion order cannot be guaranteed.
To avoid such issues, ensure that asynchronous logic explicitly defines its dependencies and ordering. One simple approach is to use await or promise chaining to control sequence:
async function loadUsers() {
const res1 = await fetch("/user/1");
const user1 = await res1.json();
const res2 = await fetch("/user/2");
const user2 = await res2.json();
console.log("Loaded users:", user1.name, "and", user2.name);
}
This guarantees that /user/1 finishes before /user/2 starts. For independent requests, you can still run them concurrently with Promise.all(), then process results together safely:
async function loadUsersConcurrently() {
const [res1, res2] = await Promise.all([
fetch("/user/1"),
fetch("/user/2")
]);
const [user1, user2] = await Promise.all([
res1.json(),
res2.json()
]);
console.log("Loaded both users:", user1.name, "and", user2.name);
}
Use Promise.all() for operations that do not depend on each other, and sequential await for those that do. This ensures predictable execution order without blocking performance unnecessarily.
In more advanced scenarios (such as shared resources, cached data, or stateful applications) you may need to introduce locks, semaphores, or controlled update queues to ensure consistency. Libraries that manage concurrency (for example, mutexify or async-mutex in Node.js) can help coordinate access to shared data safely.
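The idea behind such a lock can be sketched with plain promises. The Mutex class below is an illustrative queue-based design, not the API of any particular library: each job waits for the previous one to finish before touching shared state.

```javascript
// A minimal promise-based mutex: jobs run one at a time, in order.
class Mutex {
  constructor() {
    this.last = Promise.resolve();
  }
  // Runs fn only after all previously queued functions have finished.
  run(fn) {
    const result = this.last.then(() => fn());
    // Keep the chain alive even if fn throws.
    this.last = result.catch(() => {});
    return result;
  }
}

const mutex = new Mutex();
let shared = 0;

// Without the mutex, both jobs could read 0 and the final value would be 1.
mutex.run(async () => {
  const value = shared;
  await Promise.resolve(); // simulate asynchronous work
  shared = value + 1;
});
mutex.run(async () => {
  const value = shared;
  await Promise.resolve();
  shared = value + 1;
});
mutex.run(async () => console.log("Final value:", shared)); // 2
```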
Race conditions can be subtle and difficult to detect, because they depend on timing. By structuring asynchronous logic carefully, keeping operations independent where possible, and clearly defining execution order, you can maintain predictable, race-free behavior even in highly concurrent JavaScript programs.
Chapter 17: Standard Library Essentials
JavaScript ships with a rich set of built-in objects that form its standard library. These objects handle common needs such as working with numbers, dates, URLs, and data serialization. They are always available in any JavaScript environment, making them essential tools for reliable, portable code.
This chapter explores several of the most widely used core utilities. You will learn how the Number, Math, and BigInt objects help perform calculations and represent numeric data of various sizes. You will also see how the Date object manages time and scheduling, and how Intl provides language-aware formatting for numbers, currencies, and dates. Later sections cover the URL and URLSearchParams interfaces for parsing and manipulating web addresses, and conclude with practical data handling using structuredClone() and JSON.
Although these features are simple in concept, they cover a wide range of use cases from everyday data formatting to advanced web applications. Mastering them ensures your programs remain efficient, portable, and idiomatic across browsers and runtimes.
Number, Math, and BigInt helpers
JavaScript provides several built-in objects for handling numeric data. The Number type represents all standard floating-point and integer values, while BigInt enables safe computation with integers beyond the limits of the 64-bit range. The Math object complements both by offering constants and utility methods for calculations such as rounding, trigonometry, and logarithms.
Because JavaScript’s Number type is based on IEEE 754 double-precision floating-point arithmetic, it can represent extremely large and small values, but may introduce rounding errors in certain decimal operations. BigInt avoids this by representing integers of arbitrary size, though it cannot be mixed directly with regular numbers without explicit conversion.
// Working with Number
let x = 0.1 + 0.2;
console.log(x); // 0.30000000000000004
// Using BigInt for large integers
let big = 123456789012345678901234567890n;
console.log(big + 10n); // 123456789012345678901234567900n
// Math utilities
console.log(Math.round(4.7)); // 5
console.log(Math.max(3, 8, 2)); // 8
console.log(Math.sqrt(9)); // 3
Mixing Number and BigInt values in arithmetic is not allowed. You must convert one type to the other explicitly using Number() or BigInt().
In addition to the familiar trigonometric and rounding functions, Math also includes modern methods such as Math.sign(), Math.trunc(), and Math.hypot() for more robust calculations. Constants like Math.PI and Math.E are also provided for convenience.
console.log(Math.sign(-3)); // -1
console.log(Math.trunc(3.9)); // 3
console.log(Math.hypot(3, 4)); // 5
Unlike Number values, which are limited to about 15–16 digits of precision, BigInt values are unbounded. This means they can grow to any size that memory allows, making them ideal for tasks that involve very large integers. Examples include cryptographic algorithms such as RSA, where numbers often exceed a thousand bits in length, or precise arithmetic on large datasets where overflow would otherwise occur. Although BigInt operations are slower than those on regular numbers, they maintain exact integer accuracy regardless of magnitude.
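A factorial function makes the difference concrete, since factorials overflow Number precision almost immediately:

```javascript
// Factorial with BigInt: exact results far beyond Number's safe range.
function factorial(n) {
  let result = 1n;
  for (let i = 2n; i <= n; i++) {
    result *= i;
  }
  return result;
}

console.log(factorial(25n));             // 15511210043330985984000000n
console.log(Number.MAX_SAFE_INTEGER);    // 9007199254740991
```

Computing 25! with regular numbers would silently lose precision; with BigInt the result is exact at any magnitude memory allows.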
Together, Number, Math, and BigInt form the numeric backbone of JavaScript, providing both simplicity and flexibility for numerical computation across all environments.
Date and time basics
JavaScript provides the Date object to represent and manipulate points in time. A Date instance stores a single moment as the number of milliseconds since midnight on January 1, 1970 UTC (the Unix epoch). You can create, format, and modify these values using the object’s constructor and methods.
// Current date and time
let now = new Date();
console.log(now.toString());
// Create specific dates
let d1 = new Date('2028-12-31T23:59:59Z');
let d2 = new Date(2028, 11, 31, 23, 59, 59); // Months are 0-based
console.log(d1.getFullYear()); // 2028
console.log(d2.getMonth()); // 11 (December)
Because Date internally stores UTC time, most methods can return either UTC or local values. For instance, getHours() gives the local hour, while getUTCHours() returns the corresponding UTC hour. Timezone handling can therefore differ between systems depending on user settings.
The Date API is not fully timezone-aware. For accurate global time calculations, use libraries or newer internationalization APIs such as Intl.DateTimeFormat.
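The UTC/local split described above is easy to observe with a fixed moment in time:

```javascript
// A fixed moment, specified in UTC with the trailing "Z".
const moment = new Date("2028-10-29T16:00:00Z");

console.log(moment.getUTCHours());        // 16, the same on every machine
console.log(moment.getHours());           // varies with the local timezone
console.log(moment.getTimezoneOffset());  // local offset from UTC, in minutes
```

The stored value never changes; only the getters interpret it through either UTC or the system's local timezone.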
Besides its constructor and getters/setters, Date includes utility methods such as Date.now() (which returns the current time in milliseconds since the epoch) and toISOString() for producing standardized date strings.
console.log(Date.now()); // milliseconds since 1970-01-01
console.log(now.toISOString()); // e.g. "2028-10-29T16:00:00.000Z"
When comparing dates, use getTime() to compare their numeric millisecond values rather than relying on string forms.
While the Date object covers most everyday needs, modern projects that require consistent time zones, formatting, or calendar arithmetic often use Intl or third-party libraries. These provide more robust handling of locales and cultural variations in date and time representation.
Internationalization
The Intl namespace provides a suite of internationalization features built into JavaScript. It enables programs to format numbers, currencies, dates, and text according to the rules and conventions of different locales without the need for external libraries. This ensures that your output looks correct to users around the world, matching local languages, scripts, and cultural norms.
// Number formatting example
let amount = 1234567.89;
let formatter = new Intl.NumberFormat('en-GB');
console.log(formatter.format(amount)); // "1,234,567.89"
// Currency formatting
let usd = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });
let eur = new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' });
console.log(usd.format(99.95)); // "$99.95"
console.log(eur.format(99.95)); // "99,95 €"
The same pattern works for dates and times using Intl.DateTimeFormat. You can specify which components to show, such as weekday, month, or time zone, while letting the system choose regionally appropriate formatting.
// Date formatting
let now = new Date();
let dtFormat = new Intl.DateTimeFormat('fr-FR', {
weekday: 'long',
year: 'numeric',
month: 'long',
day: 'numeric'
});
console.log(dtFormat.format(now)); // e.g. "mercredi 29 octobre 2025"
Intl avoids the pitfalls of manual string formatting. It automatically applies correct punctuation, separators, and ordering for the chosen locale.
The Intl API also includes Intl.Collator for locale-aware string comparison and sorting, Intl.RelativeTimeFormat for expressions like “3 days ago”, and Intl.ListFormat for joining items with natural conjunctions such as “apples, oranges, and bananas”.
// Relative time
let rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
console.log(rtf.format(-3, 'day')); // "3 days ago"
// List formatting
let lf = new Intl.ListFormat('en', { style: 'long', type: 'conjunction' });
console.log(lf.format(['HTML', 'CSS', 'JavaScript'])); // "HTML, CSS, and JavaScript"
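Intl.Collator, mentioned above, deserves its own sketch, because default array sorting compares raw UTF-16 code units and mishandles accented characters (the German word list here is illustrative):

```javascript
const words = ["äpfel", "apfel", "zucker"];

// Default sort compares code units, so "ä" (U+00E4) lands after "z".
console.log([...words].sort());
// ["apfel", "zucker", "äpfel"]

// A German collator applies the alphabetical order readers expect.
const collator = new Intl.Collator("de");
console.log([...words].sort(collator.compare));
// ["apfel", "äpfel", "zucker"]
```

Passing collator.compare as the comparison function is the idiomatic pattern; it is also faster than calling "a".localeCompare(b) repeatedly, because the collator is constructed once.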
The Intl API relies on browser and runtime locale data. Some older environments may not include every locale or option, so verify support if targeting embedded or legacy systems.
By leveraging Intl, your applications can easily adapt numeric, monetary, and textual output to diverse audiences, enhancing clarity and professionalism without extra dependencies.
URL and URLSearchParams
Modern JavaScript includes built-in support for working with web addresses through the URL and URLSearchParams classes. These provide a structured way to parse, inspect, and modify URLs without relying on string manipulation. Both are available in browsers and most server environments such as Node.js.
// Create and inspect a URL
let url = new URL('https://example.com:8080/path/page.html?user=robin&debug=true#section2');
console.log(url.protocol); // "https:"
console.log(url.hostname); // "example.com"
console.log(url.port); // "8080"
console.log(url.pathname); // "/path/page.html"
console.log(url.search); // "?user=robin&debug=true"
console.log(url.hash); // "#section2"
You can modify any component directly by setting its corresponding property, and the href updates automatically to reflect the change. This makes the URL object ideal for building and adjusting links dynamically.
url.pathname = '/new/page.html';
url.searchParams.set('debug', 'false');
console.log(url.href);
// "https://example.com:8080/new/page.html?user=robin&debug=false#section2"
The URL object automatically encodes parameters to maintain valid formatting, so you never need to call encodeURIComponent() manually for query strings.
The URLSearchParams interface is designed specifically for handling the query string portion of a URL. It supports adding, deleting, updating, and iterating over parameters, much like a map.
let params = new URLSearchParams('?page=5&sort=desc');
console.log(params.get('page')); // "5"
params.set('filter', 'active');
params.delete('sort');
for (let [key, value] of params) {
console.log(key, value);
}
// page 5
// filter active
You can also serialize parameters back to a string with toString(), which produces a correctly encoded query segment ready for concatenation or reassignment.
console.log(params.toString()); // "page=5&filter=active"
URLSearchParams always encodes keys and values in UTF-8, ensuring consistency across environments. However, duplicate keys are allowed, so use getAll() if a parameter may appear multiple times.
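The getAll() behavior just mentioned looks like this in practice:

```javascript
// Two values share the key "tag".
const params = new URLSearchParams("tag=js&tag=web&page=1");

console.log(params.get("tag"));    // "js" (only the first match)
console.log(params.getAll("tag")); // ["js", "web"]

// append() adds another value without replacing the existing ones,
// unlike set(), which would overwrite them all.
params.append("tag", "dom");
console.log(params.getAll("tag")); // ["js", "web", "dom"]
```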
Together, URL and URLSearchParams provide a robust, native alternative to manual parsing. They simplify building APIs, manipulating navigation links, and handling web requests in a standards-compliant way.
structuredClone() and JSON methods
When copying or transferring data in JavaScript, it is important to distinguish between shallow and deep copies. A shallow copy only duplicates the top level of an object, while nested structures remain shared. The structuredClone() function provides a native deep-cloning mechanism that recursively copies complex objects, including arrays, maps, sets, and even circular references.
let original = {
user: 'Robin',
data: [1, 2, 3],
nested: { key: 'value' }
};
let copy = structuredClone(original);
console.log(copy.nested === original.nested); // false
This method is particularly useful when you need to safely duplicate data structures without linking references between them. It follows the same structured serialization algorithm used by web workers and postMessage(), which allows it to handle more data types than JSON can, including Map, Set, Blob, and File.
structuredClone() cannot clone functions, DOM nodes, or objects with internal platform references such as Window. These will cause an exception if encountered.
Before structuredClone() became available, JSON serialization was the common approach for deep copying. The combination of JSON.stringify() and JSON.parse() converts data into a string and back again, though it supports only plain objects, arrays, numbers, strings, booleans, and null.
let source = { a: 1, b: { c: 2 } };
let clone = JSON.parse(JSON.stringify(source));
console.log(clone.b === source.b); // false
Beyond cloning, JSON methods are used widely for data storage and network communication. The JSON.stringify() function supports options for indentation and replacer filters, while JSON.parse() can apply a reviver function to transform values during decoding.
// Pretty-printing JSON
let person = { name: 'Alex', age: 30, active: true };
console.log(JSON.stringify(person, null, 2));
// Parsing with a reviver
let text = '{"timestamp": "2028-10-29T16:00:00Z"}';
let data = JSON.parse(text, (key, value) => {
if (key === 'timestamp') return new Date(value);
return value;
});
console.log(data.timestamp instanceof Date); // true
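The replacer parameter mentioned above comes in two forms, a function or an array of allowed keys. A brief sketch of both (the record fields are illustrative):

```javascript
const record = { name: "Alex", password: "secret", age: 30 };

// Function replacer: returning undefined drops a property.
const safe = JSON.stringify(record, (key, value) =>
  key === "password" ? undefined : value
);
console.log(safe); // {"name":"Alex","age":30}

// Array replacer: keeps only the listed keys, in the listed order.
console.log(JSON.stringify(record, ["name", "age"]));
// {"name":"Alex","age":30}
```

Replacers are a convenient way to strip sensitive or irrelevant fields before data leaves your program.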
Use structuredClone() for fast, type-safe duplication of in-memory data, and JSON methods for serialization across storage or network boundaries.
Together, these tools cover the two most common needs in modern JavaScript: cloning live objects and encoding structured data for persistence or transfer.
Chapter 18: Working in the Browser
JavaScript’s most familiar environment is the web browser. Within it, scripts can interact directly with the page, respond to user input, communicate with servers, and store data locally. This chapter explores the essential browser APIs that make this possible, covering everything from the Document Object Model (DOM) to persistent storage mechanisms.
The DOM gives JavaScript access to the live structure of an HTML document, allowing scripts to read and modify elements dynamically. Event handling enables interactivity, letting your code respond to user actions such as clicks, keypresses, and form submissions. You will also see how to perform asynchronous network operations with the fetch API, and how to save data on the client using localStorage and sessionStorage.
Each section of this chapter focuses on one major part of the browser interface. By the end, you will have the foundation needed to build interactive pages, handle user input, and manage client-side data in a clean, standards-based way.
DOM access and querying
The Document Object Model (DOM) represents a web page as a structured hierarchy of nodes. Each HTML element, attribute, and piece of text corresponds to a node that JavaScript can access and modify at runtime. Through this model, your scripts can change page content, style, and structure dynamically.
// Accessing elements
let heading = document.querySelector('h1');
let items = document.querySelectorAll('ul li');
heading.textContent = 'Updated title';
items[0].classList.add('highlight');
Two key methods dominate modern DOM querying: querySelector(), which returns the first element matching a CSS selector, and querySelectorAll(), which returns a static collection of all matches. These replace older methods such as getElementById() and getElementsByTagName() for most modern code.
Because querySelector() uses standard CSS selectors, you can target elements by class, attribute, hierarchy, or any other CSS-compatible pattern.
Once an element is selected, its properties and methods allow complete control over its content and attributes. For example, you can modify text, inject HTML, or alter style declarations directly.
let para = document.querySelector('p');
para.textContent = 'This paragraph has been changed.';
para.style.color = 'steelblue';
DOM collections like those from querySelectorAll() can be iterated with for...of or their own forEach() method, and can be converted to real arrays (for example with Array.from()) when you need methods such as map() or filter(). This makes it easy to apply the same change to multiple elements at once.
document.querySelectorAll('.item').forEach(el => {
el.classList.toggle('active');
});
Shadow DOM overview
The Shadow DOM is a more advanced feature that provides encapsulation for parts of a document. It allows components to have their own isolated DOM subtree that is separate from the main page structure. Styles and scripts in a shadow tree do not leak out to the main document, and external styles do not affect its internal content.
// Creating a shadow root
let host = document.querySelector('#widget');
let shadow = host.attachShadow({ mode: 'open' });
// Add content inside the shadow tree
shadow.innerHTML = '<p>This is inside the shadow DOM</p>';
Shadow DOM is primarily used by Web Components and custom elements, but understanding its principles helps explain how modern UI libraries achieve encapsulation and predictable styling.
Elements inside a shadow tree are not visible to document.querySelector(). You must access them through the shadow root itself.
Together, the regular DOM and the Shadow DOM form the core of the browser’s document model, offering both flexibility and structure for building interactive, maintainable user interfaces.
Events and propagation
Events are central to how JavaScript interacts with the browser. They notify your code when something happens (such as a click, a keypress, a page load, or a network response) allowing you to react dynamically. Each event originates at a target element and then travels through the DOM in a process known as propagation.
// Basic event listener
let button = document.querySelector('button');
button.addEventListener('click', () => {
console.log('Button clicked');
});
The addEventListener() method attaches a handler function to a specific event type. Most modern code uses this approach instead of older inline attributes like onclick. You can attach multiple listeners to the same element and remove them later with removeEventListener().
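Removing a listener requires a reference to the same handler function, so anonymous functions cannot be removed. A minimal sketch, using a bare EventTarget (the same interface DOM elements implement) so the example also runs outside a browser:

```javascript
// EventTarget stands in here for a DOM element such as a button.
const target = new EventTarget();

let clicks = 0;
function handleClick() {
  clicks++;
}

target.addEventListener("click", handleClick);
target.dispatchEvent(new Event("click")); // handler runs, clicks becomes 1

target.removeEventListener("click", handleClick);
target.dispatchEvent(new Event("click")); // handler no longer runs

console.log(clicks); // 1
```

This is why named functions (or stored references) are preferred over inline arrow functions whenever a listener may need to be detached later.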
Common event types include 'click', 'keydown', and 'input'.
When an event occurs, it follows a three-phase journey through the DOM:
- Capturing phase – the event descends from the window down to the target element.
- Target phase – the event is handled at the element where it occurred.
- Bubbling phase – the event ascends back up the DOM tree, allowing parent elements to respond.
document.body.addEventListener('click', () => {
console.log('Body handler (bubbling)');
});
document.body.addEventListener('click', () => {
console.log('Body handler (capturing)');
}, true); // true = capture phase
By default, listeners run during the bubbling phase. Setting the third argument of addEventListener() to true moves the listener to the capturing phase instead.
Most events bubble, but some, such as focus and blur, do not. Their bubbling equivalents are focusin and focusout.
You can stop an event’s propagation using event.stopPropagation(), or prevent its default action (such as following a link or submitting a form) with event.preventDefault().
document.querySelector('a').addEventListener('click', event => {
event.preventDefault();
console.log('Default navigation prevented');
});
Event delegation is a powerful pattern that takes advantage of bubbling. Instead of attaching a separate handler to every child element, you attach one listener to a common ancestor and detect which child triggered it using event.target.
document.querySelector('ul').addEventListener('click', event => {
if (event.target.tagName === 'LI') {
console.log('Clicked item:', event.target.textContent);
}
});
Understanding event propagation is key to building efficient, responsive interfaces. This model forms the foundation for user interaction throughout the browser.
Forms and validation
Forms are the main way users input data into web pages. JavaScript provides full access to form elements, allowing scripts to inspect, validate, and respond to user input dynamically before it is submitted. This improves both user experience and data quality by catching errors early.
// Access form elements
let form = document.querySelector('form');
let input = document.querySelector('input[name="email"]');
// Handle form submission
form.addEventListener('submit', event => {
event.preventDefault(); // Stop automatic submission
console.log('Form submitted with:', input.value);
});
Each input, select, and textarea element exposes its current value through the value property. You can read or change it directly, and the browser will reflect that change in the interface. Events like input and change trigger whenever the user edits a field, enabling real-time feedback and validation.
input.addEventListener('input', () => {
if (input.value.includes('@')) {
input.style.borderColor = 'green';
} else {
input.style.borderColor = 'red';
}
});
The input event fires on every keystroke, while change fires only when the user leaves the field or confirms an edit.
Modern browsers include built-in form validation through HTML attributes such as required, minlength, pattern, and type. JavaScript can check or override these constraints using the checkValidity() and reportValidity() methods.
<form id="signup">
<input type="email" required>
<button>Submit</button>
</form>
<script>
let form = document.getElementById('signup');
form.addEventListener('submit', event => {
if (!form.checkValidity()) {
event.preventDefault();
form.reportValidity(); // Show browser error message
}
});
</script>
Custom validation logic can supplement or replace the built-in checks. This is useful when the rules are dynamic or complex, such as confirming password fields or verifying server data asynchronously.
form.addEventListener('submit', event => {
let password = document.querySelector('#pass').value;
let confirm = document.querySelector('#confirm').value;
if (password !== confirm) {
event.preventDefault();
alert('Passwords do not match');
}
});
By combining HTML constraints, event handling, and script-based checks, you can provide responsive and user-friendly forms that guide users toward correct input without unnecessary friction.
fetch and HTTP requests
The fetch() API provides a modern, promise-based interface for making HTTP requests in JavaScript. It replaces the older XMLHttpRequest with a cleaner syntax that integrates naturally with async and await. You can use it to retrieve or send data to servers, handle JSON APIs, or load resources dynamically.
// Basic GET request
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => console.log(data))
.catch(error => console.error('Error:', error));
The fetch() function returns a Promise that resolves to a Response object. You typically call one of its methods, such as .text(), .json(), or .blob(), to extract the body content. Each of these also returns a promise, making async/await syntax especially convenient.
// Using async/await for readability
async function getData() {
try {
let response = await fetch('https://api.example.com/items');
if (!response.ok) throw new Error('Network error');
let items = await response.json();
console.log(items);
} catch (err) {
console.error(err.message);
}
}
getData();
You can configure the request method, headers, and body by passing an options object as the second argument. This allows you to send POST, PUT, or DELETE requests and to specify content types or authorization headers.
// Sending JSON data with POST
async function sendData() {
let payload = { name: 'Robin', score: 42 };
let response = await fetch('https://api.example.com/submit', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
console.log('Status:', response.status);
}
sendData();
fetch() does not reject on HTTP error statuses (like 404 or 500); the returned promise rejects only if the network itself fails. Always check response.ok or response.status to detect these conditions.
By default, fetch() follows redirects, applies the browser’s cross-origin (CORS) rules, and rejects only on network-level errors. You can adjust behaviour with options such as mode, credentials, and cache to fine-tune how requests interact with the browser’s security and caching systems.
// Fetch with credentials and no-cache
fetch('/user/profile', {
credentials: 'include',
cache: 'no-store'
});
Cross-origin requests succeed only if the server permits them with CORS headers such as Access-Control-Allow-Origin. If not, the browser will block the response even if the server returns data.
For streaming or large downloads, the Response body also supports the Streams API, letting you process data incrementally. This enables efficient file transfers, progressive rendering, and real-time updates without waiting for the entire response to finish downloading.
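A minimal sketch of incremental reading follows; a locally constructed Response stands in for a fetch() result so the example runs without a network connection, but any Response works the same way.

```javascript
// Read a Response body chunk by chunk with the Streams API.
// A locally constructed Response stands in for a fetch() result here.
async function readIncrementally(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk can be processed as soon as it arrives
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

const result = await readIncrementally(new Response('streamed content'));
console.log(result); // "streamed content"
```

In a real download you would act on each chunk inside the loop, for example updating a progress indicator, rather than accumulating the whole body.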
Whether for APIs, form submissions, or data synchronization, fetch() provides a simple and consistent foundation for all HTTP communication in the browser.
localStorage and sessionStorage
The browser’s Web Storage API provides two key-value storage mechanisms: localStorage and sessionStorage. Both let JavaScript store small amounts of data directly in the browser, preserving it across page reloads without the need for cookies or a server round trip.
Data is stored as strings, indexed by key names, and can be retrieved or removed using straightforward methods. Each origin (protocol, domain, and port) gets its own isolated storage space, ensuring that sites cannot access each other’s data.
// Store data
localStorage.setItem('username', 'Robin');
// Retrieve data
let name = localStorage.getItem('username');
console.log(name); // "Robin"
// Remove data
localStorage.removeItem('username');
// Clear everything
localStorage.clear();
localStorage persists indefinitely: data remains even after the browser is closed and reopened. In contrast, sessionStorage lasts only for the duration of the browser tab or window, disappearing once it is closed.
// sessionStorage example
sessionStorage.setItem('session', 'active');
console.log(sessionStorage.getItem('session')); // "active"
Web Storage holds only strings, so serialize objects with JSON.stringify() and recover them using JSON.parse().
// Storing objects safely
let settings = { theme: 'dark', fontSize: 16 };
localStorage.setItem('settings', JSON.stringify(settings));
let restored = JSON.parse(localStorage.getItem('settings'));
console.log(restored.theme); // "dark"
Each storage area typically holds about 5 MB per origin, though the exact limit varies by browser. Because the data is stored locally, reads and writes are fast and synchronous, meaning large operations can briefly block the main thread. For larger or asynchronous needs, consider IndexedDB instead.
Never store sensitive data such as passwords or authentication tokens in localStorage or sessionStorage; any script running on the page can read them.
Used wisely, these mechanisms are ideal for lightweight persistence, user preferences, or cached data that improves page performance without requiring a network connection.
Chapter 19: Node.js Essentials
Node.js lets you run JavaScript outside the browser on your own machine and on servers. You use the language you already know, with a fast event loop, a useful standard library, and a very large package ecosystem.
In this chapter you will learn how to run scripts, load modules, and manage dependencies. You will be able to read files with fs, build safe paths with path, and work with the current process through process, environment variables, and command line arguments in process.argv. You will also learn to send and receive HTTP requests and will see a simple client and a small server with http.createServer, and you will learn where the modern fetch API fits in Node.js.
Finally you will be able to set up a project with package.json. You will install packages with npm, choose versions with semantic version ranges, and create useful run targets in scripts. The goal is portable and predictable code that is easy to share.
Verify your installation by running node -v and npm -v before you begin.
Node.js supports two module systems. CommonJS uses require and module.exports. ECMAScript modules use import and export. Each file chooses one system. Opt into ECMAScript modules with "type": "module" in package.json or by using the .mjs extension. Top level await works only in ECMAScript modules.
Prefer path.join and URL helpers over manual string building. When you see braces with unknown content, such as {"name": "…"}, treat it as a placeholder and keep the structure unchanged.
By the end you will be comfortable writing small utilities, simple servers, and repeatable workflows in Node.js. You will know how to run code, use modules, make network calls, read from the filesystem, and manage packages with confidence.
Running scripts and using modules
Node.js runs JavaScript directly from files on your system. Any file with a .js or .mjs extension can be executed from the command line. This makes it simple to test snippets, build utilities, or run full programs without a browser.
To run a file, type:
node app.js
The Node.js runtime reads the file, compiles it to machine code using the V8 engine, and executes it. Any calls to console.log() or other I/O functions appear in your terminal.
You can also start Node.js without a file by typing node alone. This opens the Read-Eval-Print Loop (REPL), useful for quick testing and inspection.
When scripts grow beyond a few lines, it makes sense to split them into modules. Node.js supports both CommonJS and ECMAScript module systems. CommonJS is older and still common in many projects, while ECMAScript modules follow the modern JavaScript standard.
CommonJS modules
In a CommonJS module, you export values using module.exports and import them with require():
// math.js
module.exports.add = function(a, b) {
return a + b;
};
// app.js
const math = require('./math.js');
console.log(math.add(2, 3));
The require() function loads a file once and caches its exports, so repeated imports are fast and consistent across a program.
ECMAScript modules
ECMAScript modules (ESM) use import and export statements. To enable ESM, either name your files with a .mjs extension or set "type": "module" in your package.json file.
// math.mjs
export function add(a, b) {
return a + b;
}
// app.mjs
import { add } from './math.mjs';
console.log(add(2, 3));
Mixing the two systems requires care: CommonJS code can load ECMAScript modules only through dynamic import() or compatibility loaders.
Both systems isolate code by file, provide local scope, and prevent variable collisions. They also simplify testing and maintenance by allowing small, focused pieces of functionality to be developed independently.
Later in this chapter, you will see how modules combine with npm packages to form reusable components across many projects.
Filesystem and path basics
Node.js provides built-in modules for working with the filesystem and file paths. The most important are fs for reading and writing files, and path for building and manipulating file paths in a platform-independent way.
Both modules are part of the Node.js standard library, so no installation is required. You can import them directly:
import fs from 'fs';
import path from 'path';
In older CommonJS code, you would use require() instead:
const fs = require('fs');
const path = require('path');
Reading and writing files
Use fs.readFile() and fs.writeFile() to work with file content asynchronously. Each function takes a path, optional encoding, and a callback. Promise-based variants are also available through fs.promises.
import { readFile, writeFile } from 'fs/promises';
const input = await readFile('input.txt', 'utf8');
await writeFile('output.txt', input.toUpperCase());
console.log('File written successfully');
Prefer the fs/promises API for modern code. It avoids nested callbacks and integrates cleanly with async and await.
When you only need quick synchronous access, such as reading configuration files at startup, the synchronous forms (fs.readFileSync() and fs.writeFileSync()) are available but block execution until complete.
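A self-contained sketch of the synchronous forms follows; it writes to a temporary file so it can run anywhere (the filename is illustrative).

```javascript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Synchronous calls block the event loop until they finish, which is
// acceptable for one-off startup work such as loading configuration.
const file = path.join(os.tmpdir(), 'demo-config.txt');
fs.writeFileSync(file, 'debug=true');

const text = fs.readFileSync(file, 'utf8');
console.log(text); // "debug=true"

fs.unlinkSync(file); // tidy up the temporary file
```

In a long-running server, prefer the asynchronous forms so a slow disk does not stall every other request.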
Working with directories
You can inspect and manipulate directories with fs.readdir(), fs.mkdir(), and related functions. For example:
import { readdir, mkdir } from 'fs/promises';
await mkdir('logs', { recursive: true });
const files = await readdir('logs');
console.log(files);
The recursive option ensures that all required parent directories are created if they do not yet exist.
Building safe paths
File paths differ between operating systems, using either forward slashes (/) or backslashes (\). To avoid errors, always use the path module to join directories and filenames safely:
import path from 'path';
const logPath = path.join('logs', 'server.log');
console.log(logPath); // Works on all platforms
Other useful helpers include path.basename(), path.dirname(), and path.extname(), which extract parts of a file path. You can also use path.resolve() to compute absolute paths relative to the current working directory.
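These helpers can be combined to decompose any path; the directory and file names below are illustrative.

```javascript
import path from 'node:path';

// Build a path with join(), then take it apart again. The helpers
// respect the platform's separator automatically.
const file = path.join('reports', '2024', 'summary.txt');

console.log(path.basename(file)); // "summary.txt"
console.log(path.extname(file));  // ".txt"
console.log(path.dirname(file));  // "reports/2024" (with "\" on Windows)
```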
Always build paths with path.join() or new URL() to stay portable and safe across systems.
When using ECMAScript modules, __dirname and __filename are not defined. To reproduce their behavior, create equivalents from import.meta.url:
import { fileURLToPath } from 'url';
import path from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
These small details make your scripts reliable and consistent whether you are reading data, generating reports, or serving files from a web server.
Process, environment, and args
Every Node.js program runs inside a single process. The process object gives you access to information about that process, including environment variables, command line arguments, and standard input and output streams.
This global object is always available without importing it, and provides a way to configure or inspect your program’s runtime environment.
Inspecting the current process
You can print details about the process directly from any script:
console.log(process.pid); // Process ID
console.log(process.platform); // OS platform
console.log(process.cwd()); // Current working directory
The process.cwd() function returns the directory from which Node.js was launched, not necessarily the directory containing the script file. This is useful when resolving relative paths or logging context information.
Use process.exit(code) to end a program with a specific exit code. A zero value means success, while non-zero values indicate errors or abnormal termination.
Accessing environment variables
Environment variables are stored in process.env. They are key-value pairs set by the operating system or shell and are commonly used for configuration, such as database credentials, API keys, or log settings.
console.log(process.env.PATH);
console.log(process.env.NODE_ENV);
You can set environment variables when running a command:
NODE_ENV=production node app.js
On Windows, use the set command first:
set NODE_ENV=production && node app.js
Environment variables are always strings. Convert them manually to numbers or booleans when needed:
const port = Number(process.env.PORT) || 3000;
const debug = process.env.DEBUG === 'true';
Reading command line arguments
Arguments passed to your script appear in the array process.argv. The first two entries are the Node.js executable and the script filename. The rest are the actual user arguments.
// Run: node app.js hello world
console.log(process.argv);
// → ['/usr/bin/node', '/path/app.js', 'hello', 'world']
To simplify argument handling, slice off the first two entries:
const args = process.argv.slice(2);
console.log(args); // ['hello', 'world']
For more complex command line tools, use a library such as yargs or commander to define options, flags, and help text.
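Before reaching for a library, simple --flag and --key=value arguments can be handled by hand. The parseFlags helper below is a hypothetical sketch, not a Node.js API; tools like yargs and commander add validation, type coercion, and help text on top of the same idea.

```javascript
// Minimal parsing of --flag and --key=value arguments (hypothetical
// helper for illustration; real CLI tools use a library instead).
function parseFlags(args) {
  const flags = {};
  for (const arg of args) {
    if (arg.startsWith('--')) {
      // '--port=8080' → key 'port', value '8080'; bare '--verbose' → true
      const [key, value = true] = arg.slice(2).split('=');
      flags[key] = value;
    }
  }
  return flags;
}

console.log(parseFlags(['--verbose', '--port=8080']));
// → { verbose: true, port: '8080' }
```

In a real script you would pass process.argv.slice(2) to the helper.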
Standard input and output
Every Node.js process has access to three standard streams: stdin, stdout, and stderr. You can use them for text input and output directly:
process.stdout.write('Enter your name: ');
process.stdin.on('data', data => {
console.log('Hello', data.toString().trim());
process.exit(0);
});
These streams provide fine-grained control for CLI tools, where user interaction, logging, and piping data between processes are common.
By combining process methods, environment variables, and command line arguments, you can make your scripts flexible, configurable, and ready for real-world automation tasks.
HTTP client and server (overview)
Node.js was originally built for networked applications. Its event-driven design and non-blocking I/O make it ideal for serving many concurrent HTTP connections efficiently. The built-in http module allows you to create both simple servers and basic clients without external dependencies.
Creating a simple HTTP server
You can create a working web server in just a few lines. The http.createServer() function takes a callback that runs each time a client connects:
import http from 'http';
const server = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello from Node.js\n');
});
server.listen(3000, () => {
console.log('Server running at http://localhost:3000');
});
This server listens on port 3000 and responds with plain text to any request. The req object provides request details, such as the URL and headers, while res is used to send back a response.
Use res.setHeader() to define custom headers, and always call res.end() to finish the response. Without it, the client will wait indefinitely.
By adding simple conditionals, you can serve different content based on the request URL:
const server = http.createServer((req, res) => {
if (req.url === '/about') {
res.end('About page');
} else {
res.end('Home page');
}
});
Making HTTP requests as a client
Node.js can also make HTTP requests. You can use the built-in http.request() or https.request() functions, though most developers now prefer the modern fetch() API, which is available globally in recent Node.js versions.
const response = await fetch('https://example.com');
const text = await response.text();
console.log(text);
This works exactly as it does in browsers, supporting JSON responses, headers, and streaming data. You can also use await response.json() to decode JSON directly.
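Decoding a JSON body can be sketched without a network connection: a locally constructed Response (with illustrative data) stands in for a fetch() result and behaves identically.

```javascript
// A locally constructed Response stands in for a fetch() result so
// this sketch runs offline; response.json() parses the body as JSON.
const response = new Response('{"ok": true, "items": [1, 2, 3]}', {
  headers: { 'Content-Type': 'application/json' }
});

const data = await response.json();
console.log(data.ok);           // true
console.log(data.items.length); // 3
```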
In Node.js versions before 18, fetch may not be defined. To use it in those environments, install a lightweight polyfill such as node-fetch.
Serving files and JSON
To serve more useful content, combine the fs module with http to read and send files or structured data:
import http from 'http';
import fs from 'fs/promises';
http.createServer(async (req, res) => {
if (req.url === '/data') {
const json = { message: 'Hello, JSON!' };
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(json));
} else {
const html = await fs.readFile('index.html');
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(html);
}
}).listen(3000);
This demonstrates how dynamic and static responses can coexist in a minimal Node.js server.
Modern frameworks and utilities
Although the core http module is powerful, many developers use frameworks such as express or fastify for higher-level routing, middleware, and better error handling. These frameworks build upon the same foundations while improving readability and maintainability.
For client-side requests beyond simple fetch(), libraries like axios offer convenient wrappers that support automatic JSON parsing, interceptors, and timeout control.
Together, these tools make Node.js an effective environment for building APIs, backend services, or automation scripts that communicate over the web.
Packages with npm and scripts
Node.js uses the Node Package Manager (npm) to install, share, and manage external modules. The npm registry hosts millions of open-source packages, covering everything from web servers to testing tools.
Each Node.js project is defined by a package.json file that records metadata, dependencies, and script commands. You can create this file automatically using:
npm init -y
This generates a minimal package.json that looks like this:
{
"name": "myapp",
"version": "1.0.0",
"type": "module",
"scripts": {
"start": "node index.js"
}
}
The type field determines whether the project defaults to ECMAScript modules or CommonJS. The scripts section defines custom commands that can be run with npm run <name>.
Installing and using packages
To add a dependency, use npm install followed by the package name. For example, to install Express:
npm install express
This creates a node_modules directory and updates your package.json and package-lock.json files. The module can then be imported into your script:
import express from 'express';
const app = express();
app.get('/', (req, res) => res.send('Hello, world!'));
app.listen(3000);
Use --save-dev to install development-only dependencies, such as testing frameworks or build tools. These will not be included in production installations.
You can also install specific versions or version ranges:
npm install lodash@4.17.21
Use npm outdated to check for newer versions and npm update to upgrade packages safely.
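Installed packages are recorded in package.json with semantic version ranges. As a sketch (version numbers here are illustrative), a caret (^) accepts compatible minor and patch updates, while a tilde (~) accepts only patch updates:

```json
{
  "dependencies": {
    "lodash": "^4.17.21",
    "express": "~4.18.2"
  }
}
```

Here ^4.17.21 matches any 4.x.y at or above 4.17.21, and ~4.18.2 matches any 4.18.x at or above 4.18.2.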
Running scripts
Scripts are predefined command shortcuts stored in package.json. They are run with npm run, and can be used for tasks such as testing, building, or linting.
{
"scripts": {
"start": "node index.js",
"dev": "node server.js",
"test": "node test.js"
}
}
npm run start
npm run test
The start script is special: it can be run as npm start without the run keyword.
Avoid naming your scripts after built-in npm commands, such as install or update. Doing so overrides the default behavior and may cause confusion.
Global vs local packages
Packages installed locally go into your project’s node_modules directory. To install tools for global use (such as command line utilities), use the -g flag:
npm install -g nodemon
Global packages add executables to your system’s PATH, allowing you to run them directly from the terminal.
Lock files and reproducibility
npm automatically maintains a package-lock.json file, which records the exact versions of all installed dependencies. This ensures that everyone working on the same project installs identical versions, improving consistency across environments.
When sharing your project, commit package-lock.json to version control so that collaborators and CI systems reproduce your dependency tree exactly.
Together, npm, package.json, and the scripts system form the backbone of Node.js project management. They make it easy to install tools, automate workflows, and share your work with others in a reliable and standardised way.
Chapter 20: Testing, Debugging, and Tooling
As JavaScript projects grow, so does the need for reliability, readability, and maintainability. Testing, debugging, and automation tools help you detect mistakes early, maintain consistent style, and ship stable code with confidence. In this chapter, you will learn the essential techniques and utilities that make this possible.
JavaScript’s flexibility is one of its strengths, but it can also allow silent errors or subtle logic flaws to slip through. Effective debugging and testing practices bring structure to your workflow and catch issues before users ever see them.
Clean code is not just about correctness, but also about consistency. Tools like eslint and prettier enforce style rules and formatting automatically, reducing friction in team projects and reviews.
By the end of this chapter, you will have a clear picture of how professional JavaScript development ties together the cycle of writing, testing, debugging, and preparing code for release. These practices help you move from experimentation toward dependable, production-quality software.
Console and breakpoints
The simplest and most widely used debugging tool in JavaScript is still the humble console object. Whether running in the browser or Node.js, console provides an immediate way to print messages, inspect data, and trace execution flow without interrupting the program.
Logging and inspection
Use console.log() to output general information, console.warn() for potential issues, and console.error() for serious errors. You can log multiple values at once, including objects and arrays.
const user = { name: 'Robin', age: 42 };
console.log('User info:', user);
console.warn('Password field missing');
console.error('Failed to load data');
In Node.js, messages appear in your terminal. In browsers, they appear in the developer console, often with color and filtering options.
Use console.table() to print arrays or objects in a structured, readable grid format. It is especially useful for comparing multiple data items.
To track when code executes, use console.time() and console.timeEnd() to measure duration:
console.time('loop');
for (let i = 0; i < 1e6; i++) {}
console.timeEnd('loop');
This will print the label and total milliseconds taken to run the code block.
Tracing execution
If you are unsure where a particular call originates, use console.trace(). It prints the call stack at the point of execution, showing which functions led to that line.
function level1() {
level2();
}
function level2() {
console.trace('Trace example');
}
level1();
The output shows each function in the chain with file names and line numbers, helping you locate where logic diverges or loops unexpectedly.
Using breakpoints
Breakpoints let you pause execution and inspect program state interactively. You can set them visually in browser developer tools or in Node.js via an editor like Visual Studio Code. When execution stops, you can view variable values, step through code, and watch expressions in real time.
You can also insert a breakpoint directly in your source using the debugger statement:
function calculateTotal(items) {
let sum = 0;
for (const item of items) {
debugger; // Execution pauses here
sum += item.price;
}
return sum;
}
When this line is reached, execution halts if a debugger is attached. Otherwise, it is ignored and execution continues normally.
Remove debugger statements before releasing code. Leaving them in can cause unexpected pauses in production environments.
Debugging Node.js applications
Node.js includes a built-in inspector that works with many browsers and IDEs. To start a program in debug mode, run:
node --inspect app.js
Then open chrome://inspect in Chrome or Edge to attach a debugger. You can set breakpoints, step through functions, and examine variables just like in a browser context.
For a faster workflow in Visual Studio Code, launch with the “Run and Debug” panel and configure breakpoints directly within your project files.
Combined with console logging, the debugger gives you precise insight into program flow and helps locate logical or performance issues efficiently.
Common testing approaches
Testing ensures that your JavaScript code behaves as expected. Even simple tests can catch logic errors, detect regressions, and confirm that updates do not break existing functionality. Testing can be done manually or automatically, depending on your workflow and project scale.
Manual testing
Manual testing involves running scripts yourself and verifying the results by observation. It is fast for small experiments or quick prototypes, but unreliable for larger projects because human checks are inconsistent and easily skipped.
In early development, you might write a simple driver script:
import { add } from './math.js';
console.log(add(2, 3) === 5 ? 'Pass' : 'Fail');
This approach works but becomes difficult to maintain once you have many test cases or modules. Automated testing removes this burden.
Unit testing
Unit tests check small, isolated pieces of functionality—typically one function or class at a time. Each test runs independently and reports whether the output matches the expected result.
// math.test.js
import { add } from './math.js';
test('adds numbers correctly', () => {
expect(add(2, 3)).toBe(5);
});
This example uses the jest testing framework, which provides global functions like test() and expect(). The same principle applies to other frameworks such as mocha or vitest.
To run tests with Jest after installing it (npm install --save-dev jest), add a script to your package.json:
{
"scripts": {
"test": "jest"
}
}
npm test
Integration testing
Integration tests verify that modules work together correctly. They often simulate a complete workflow, such as creating a record, updating it, and verifying the result.
import app from './app.js';
import request from 'supertest';
test('GET / responds with Hello', async () => {
const res = await request(app).get('/');
expect(res.statusCode).toBe(200);
expect(res.text).toBe('Hello');
});
This example uses the supertest library to simulate HTTP requests without launching an external server. Integration tests are slower than unit tests but essential for validating overall system behavior.
Test organization and coverage
Organize your test files alongside the source code or in a dedicated tests directory. Many tools recognize files ending in .test.js or .spec.js automatically.
To measure how much of your codebase is covered by tests, use a coverage report:
npm test -- --coverage
This command generates a report showing which lines or branches of your program are exercised by tests. Higher coverage indicates better verification, but aim for meaningful tests rather than chasing a perfect percentage.
End-to-end (E2E) testing
End-to-end tests simulate full user interactions, often through browsers or APIs. Tools like Playwright, Cypress, or Puppeteer automate these workflows, verifying that the entire system behaves as expected from a user’s perspective.
import { test, expect } from '@playwright/test';
test('homepage has title', async ({ page }) => {
await page.goto('https://example.com');
await expect(page).toHaveTitle(/Example/);
});
Although slower and more complex than unit or integration tests, E2E tests catch issues that only appear when all parts of the system interact together.
By combining these testing levels (unit, integration, and end-to-end) you can build a reliable foundation of confidence in your codebase and detect errors early, before deployment.
Linting and formatting
Linting and formatting tools help maintain a consistent and error-free codebase. They automatically detect potential problems, enforce style conventions, and ensure that all contributors write code in a uniform way. This improves readability, reduces merge conflicts, and catches mistakes early in development.
Linting with eslint
eslint is the most popular JavaScript linter. It analyses your code for syntax issues, bad practices, and deviations from established rules. You can install it as a development dependency and initialize a configuration file in your project:
npm install --save-dev eslint
npx eslint --init
The initialization process asks a few questions about your project type, preferred syntax (CommonJS or modules), and style conventions. It then creates a .eslintrc configuration file that defines rules and environments.
To lint your files, run:
npx eslint .
By default, ESLint checks all .js files in your project. You can also target specific directories or files.
Rules are configurable. You can enable, disable, or customize their severity (for example, "error", "warn", or "off"). Here is a minimal example:
{
"env": { "es2023": true, "node": true },
"extends": "eslint:recommended",
"rules": {
"no-unused-vars": "warn",
"eqeqeq": "error"
}
}
This configuration warns about unused variables and enforces the use of strict equality (===) checks. ESLint also supports plugins such as eslint-plugin-import or eslint-plugin-react for framework-specific rules.
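To see why the eqeqeq rule exists, compare loose and strict equality directly:

```javascript
// Loose equality (==) coerces types before comparing;
// strict equality (===) does not.
console.log(0 == '');    // true  — '' is coerced to 0
console.log(0 === '');   // false — different types, no coercion

console.log(null == undefined);  // true
console.log(null === undefined); // false
```

Enforcing === removes an entire class of coercion surprises from a codebase.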
Automatic fixing
Many common issues can be fixed automatically. To let ESLint apply safe corrections, run:
npx eslint . --fix
This reformats spacing, removes unused variables, and applies stylistic corrections where possible, saving time and avoiding manual edits.
Formatting with prettier
While ESLint enforces correctness and style rules, prettier focuses purely on formatting. It ensures consistent indentation, line wrapping, and punctuation across your entire project, regardless of who writes the code.
npm install --save-dev prettier
Then create a simple configuration file named .prettierrc:
{
"semi": true,
"singleQuote": true,
"trailingComma": "es5"
}
Run it manually or as a script:
npx prettier --write .
Prettier rewrites code according to its rules without requiring you to make stylistic decisions. This keeps pull requests clean and focused on logic rather than formatting.
Use eslint-config-prettier to disable conflicting style rules so the tools work together without overlap.
Integrating with scripts
You can automate linting and formatting by adding them to your package.json scripts:
{
"scripts": {
"lint": "eslint .",
"format": "prettier --write ."
}
}
Now you can run them directly with npm run lint or npm run format. For collaborative projects, include these checks in version control hooks or continuous integration pipelines to ensure consistent code quality.
By applying linting and formatting as regular habits, your code remains readable, consistent, and easier to maintain, no matter how large the project becomes.
Type checking with JSDoc or TypeScript
JavaScript is dynamically typed, which means variables can hold any type of value at any time. While this flexibility is useful, it also makes type-related bugs hard to catch before runtime. Adding type information helps tools and editors catch problems early and improves code completion and documentation.
You can introduce types into JavaScript projects without rewriting them by using JSDoc annotations or adopting TypeScript. Both provide a layer of static analysis that checks function signatures, variable types, and object structures.
Using JSDoc for type hints
JSDoc uses structured comments to describe the expected types of variables, parameters, and return values. Editors such as Visual Studio Code and tools like tsc can read these annotations and validate your code.
/**
 * Adds two numbers.
 * @param {number} a - First number
 * @param {number} b - Second number
 * @returns {number} Sum of a and b
 */
function add(a, b) {
  return a + b;
}
When type checking is enabled, incorrect usage will raise warnings:
add('2', 3); // Type warning: expected number, got string
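JSDoc can also describe object shapes with @typedef, which editors then use to check property access and offer completion (the User type and names here are illustrative):

```javascript
/**
 * @typedef {Object} User
 * @property {string} name - Display name
 * @property {number} age - Age in years
 */

/**
 * Summarizes a user for display.
 * @param {User} user - The user to describe
 * @returns {string} A readable summary
 */
function describe(user) {
  return `${user.name} (${user.age})`;
}

console.log(describe({ name: 'Ann', age: 30 })); // Ann (30)
```

With checking enabled, passing an object that lacks a declared property, or has one of the wrong type, is flagged in the editor.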
To enable type checking through JSDoc, create a jsconfig.json file in your project root and include:
{
  "compilerOptions": {
    "checkJs": true
  }
}
This tells supporting editors to analyze your JavaScript files for type safety while still allowing plain .js syntax.
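Alternatively, you can opt in a single file by placing a // @ts-check comment at the top; supporting editors then check just that file, even without a jsconfig.json. A small sketch:

```javascript
// @ts-check

/** @type {number[]} */
const scores = [88, 92, 75];

// The checker infers that reduce() over numbers yields a number
const total = scores.reduce((sum, n) => sum + n, 0);
console.log(total); // 255
```

Pushing a string into scores, or treating total as a string, would now be flagged before the code ever runs.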
Migrating gradually to TypeScript
TypeScript is a superset of JavaScript that adds a full static type system. It compiles to regular JavaScript and integrates deeply with modern editors, providing autocomplete, refactoring, and compile-time safety.
To begin, install TypeScript as a development dependency:
npm install --save-dev typescript
npx tsc --init
This creates a tsconfig.json configuration file that defines compiler settings. You can then start converting .js files to .ts one by one. TypeScript understands plain JavaScript syntax, so migration can happen gradually.
A simple example looks like this:
function greet(name: string): string {
  return `Hello, ${name}`;
}
console.log(greet('Robin'));
The compiler will immediately warn about incorrect usage:
greet(42);
// Error: Argument of type 'number' is not assignable to parameter of type 'string'
Checking JavaScript with the TypeScript compiler
You do not need to fully switch to TypeScript files to benefit from type checking. The TypeScript compiler (tsc) can check standard JavaScript if you enable checkJs in your configuration. Run:
npx tsc --allowJs --checkJs --noEmit
This scans your code for type errors without producing compiled output, acting purely as a static checker.
Balancing flexibility and safety
Adding typing to JavaScript code is not an all-or-nothing decision. JSDoc comments suit lightweight projects and incremental improvements, while TypeScript fits larger or long-term applications where correctness and maintainability matter most.
Both approaches give your tools and editors more context about your code. They reduce accidental errors, improve navigation, and make your functions self-documenting, enhancing quality without changing how the code runs.
Build and bundle basics
Modern JavaScript projects often use multiple files, libraries, and assets that must be combined into optimized bundles before deployment. The build process prepares your code for production by transpiling new syntax, compressing files, and packaging everything efficiently for the browser or server.
Node.js applications may not always need bundling, but front-end and hybrid projects benefit greatly from smaller file sizes and faster loading times. Tools such as webpack, esbuild, rollup, and vite automate these tasks and integrate with most development workflows.
Transpiling modern JavaScript
Newer ECMAScript features are not always supported in older browsers. A transpiler converts modern JavaScript into a version that runs everywhere. The most widely used tool is Babel.
npm install --save-dev @babel/core @babel/cli @babel/preset-env
Then create a configuration file named .babelrc:
{
  "presets": ["@babel/preset-env"]
}
To compile all scripts from src to dist:
npx babel src --out-dir dist
This step rewrites your code using older syntax as needed, preserving behavior while improving compatibility.
The @babel/preset-env preset automatically targets environments based on the browser support list in your package.json (or a separate browserslist configuration), avoiding unnecessary conversions.
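As a small illustration (the exact output depends on your targets), a modern arrow function might be rewritten for older engines roughly like this:

```javascript
// src/double.js — modern syntax
const double = (n) => n * 2;
console.log(double(21)); // 42

// dist/double.js after transpiling for ES5 targets (roughly):
// "use strict";
// var double = function double(n) { return n * 2; };
// console.log(double(21));
```

Both versions behave identically; the transpiled one simply avoids syntax that pre-ES6 engines cannot parse.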
Bundling multiple modules
Most projects import code from many modules. Bundlers combine them into one or more output files, resolving dependencies and minimizing duplication. The classic tool for this is webpack, which is configured through a simple JavaScript file.
// webpack.config.js
import { fileURLToPath } from 'node:url';

export default {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: fileURLToPath(new URL('./dist', import.meta.url))
  },
  mode: 'production'
};
Then build the project:
npx webpack
This produces a compressed bundle.js suitable for deployment. Webpack can also process CSS, images, and other assets using plugins and loaders.
Faster bundlers: esbuild and vite
esbuild is a newer, extremely fast bundler and transpiler written in Go. It performs the same tasks as Webpack or Rollup but in a fraction of the time, making it ideal for rapid development.
npm install --save-dev esbuild
npx esbuild src/index.js --bundle --outfile=dist/bundle.js --minify
Vite builds on top of esbuild during development and Rollup for production. It provides an instant dev server with hot reloading, then bundles efficiently for release.
npm create vite@latest myapp
cd myapp
npm install
npm run dev
Automating builds with scripts
You can define build steps directly in package.json so they run with a single command. For example:
{
  "scripts": {
    "build": "babel src --out-dir dist && webpack",
    "dev": "vite"
  }
}
This lets you run npm run build for a production build or npm run dev to launch a live development server. Many teams also add lint and test tasks here to maintain code quality as part of the same workflow.
The full toolchain
Together, transpilers, bundlers, and automation scripts form a toolchain that transforms your source code into efficient, deployable output. A well-configured build process not only improves compatibility and performance, but also enforces consistency across teams and environments.
With testing, debugging, and tooling in place, you now have the complete picture of professional JavaScript development, and are ready to build, optimize, and release high-quality applications with confidence.
© 2025 Robin Nixon. All rights reserved
No content may be re-used, sold, given away, or used for training AI without express permission