Docker for JavaScript Developers: The Only Guide You Need
Docker eliminates "works on my machine" forever. Here is everything a JavaScript developer needs to know to containerize their apps and run them consistently anywhere.

DevForge Team
AI Development Educators

"Works on my machine" is the most expensive sentence in software development. It means your development environment and your production environment behave differently — and you won't know exactly how until something breaks in production.
Docker solves this by packaging your application and everything it needs to run — Node.js version, system libraries, environment configuration — into a container that runs identically on your laptop, your teammate's laptop, and your production server.
This guide covers everything a JavaScript developer needs to containerize their applications, from writing a first Dockerfile to running multi-service applications with Docker Compose.
The Core Concepts
Image: A snapshot of a filesystem containing your application and its runtime. Images are built from a Dockerfile and stored in a registry (Docker Hub, GitHub Container Registry, AWS ECR).
Container: A running instance of an image. You can run many containers from the same image simultaneously. Containers are isolated from each other and from the host system.
Dockerfile: A text file with instructions for building an image. Each instruction creates a layer in the image.
Docker Compose: A tool for defining and running multi-container applications. Replaces running multiple `docker run` commands with a single `docker compose up`.
Your First Dockerfile
Here is a Dockerfile for a Node.js Express application:
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Line by line:
`FROM node:20-alpine` — Start from the official Node.js 20 image based on Alpine Linux. Alpine is a minimal Linux distribution (~5MB) that keeps image sizes small. The alternative, `node:20`, is based on Debian and is roughly 10x larger.
`WORKDIR /app` — Set the working directory inside the container. All subsequent commands run from this directory.
`COPY package*.json ./` — Copy package.json and package-lock.json into the container before copying the rest of the code. This is critical for layer caching (explained below).
`RUN npm ci --only=production` — Install production dependencies only. `npm ci` installs exactly what's in package-lock.json, unlike `npm install`, which may update versions. (On npm 8 and later, `--omit=dev` is the non-deprecated spelling of `--only=production`.)
`COPY . .` — Copy the rest of your application code into the container.
`EXPOSE 3000` — Document that the container listens on port 3000. This doesn't actually publish the port — that happens when you run the container.
`CMD ["node", "server.js"]` — The command to run when the container starts.
Layer Caching: Why Order Matters
Docker builds images layer by layer. If a layer hasn't changed, Docker reuses the cached version from the previous build. This makes subsequent builds much faster.
The critical insight: once a layer changes, all subsequent layers must be rebuilt.
This is why we copy package.json before copying the application code. Node modules rarely change (only when you add or remove dependencies), but application code changes constantly. If we copied everything first:
```dockerfile
COPY . .
RUN npm ci
```

Every code change would invalidate the cache and trigger a fresh dependency install, which might take 30-60 seconds. With the optimized ordering, code changes only require copying the new files — the install layer is rebuilt only when package.json or package-lock.json changes.
Building and Running Your Container
Build the image:
```bash
docker build -t my-app:latest .
```

The `-t` flag tags the image with a name and version. The `.` sets the build context to the current directory, which is also where Docker looks for the Dockerfile by default.
Run a container from the image:
```bash
docker run -p 3000:3000 my-app:latest
```

The `-p 3000:3000` flag maps port 3000 on your machine to port 3000 in the container. The format is `host-port:container-port`.
Run in detached mode (background):
```bash
docker run -d -p 3000:3000 --name my-app my-app:latest
```

View running containers:

```bash
docker ps
```

Stop a container:

```bash
docker stop my-app
```

Environment Variables
Never bake environment variables into your Docker image. Pass them at runtime:
```bash
docker run -e DATABASE_URL=postgres://... -e NODE_ENV=production my-app:latest
```

Or use an env file:

```bash
docker run --env-file .env.production my-app:latest
```

In your application, access them the same way you always have: `process.env.DATABASE_URL`.
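In application code it often pays to validate required variables once at startup, so a missing `DATABASE_URL` fails fast with a clear message instead of surfacing later as a cryptic connection error. A small hypothetical config module (the variable names follow the examples above):

```javascript
// config.js — a hypothetical startup check, not a required pattern.
function loadConfig(env = process.env) {
  const required = ["DATABASE_URL"];
  for (const name of required) {
    if (!env[name]) {
      // Crashing at startup is visible in `docker logs` immediately.
      throw new Error(`Missing required environment variable: ${name}`);
    }
  }
  return {
    databaseUrl: env.DATABASE_URL,
    nodeEnv: env.NODE_ENV || "development",
  };
}

module.exports = { loadConfig };
```

Passing `env` as a parameter (defaulting to `process.env`) keeps the function easy to test without mutating real environment variables.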
The .dockerignore File
Just like .gitignore, .dockerignore prevents files from being copied into your image. Create a .dockerignore file in your project root:
```
node_modules
.git
.env
.env.local
dist
*.log
README.md
```

Without this, your `node_modules` directory gets copied into the container, which is both slow and wrong — you want the container to install its own fresh dependencies.
Multi-Stage Builds for Frontend Apps
Frontend applications need to be built (compiled, bundled) before they're served. Multi-stage builds let you use one image for building and a smaller image for serving, without the build tools ending up in production.
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The first stage (`builder`) installs all dependencies including devDependencies and runs the build. The second stage starts fresh from an nginx image and copies only the built output. The final image contains nginx and your compiled files — not Node.js, not your source code, not your dev dependencies.
Docker Compose for Multi-Service Apps
Most real applications need more than one service. A full-stack app might need a Node.js API server, a PostgreSQL database, and a Redis cache. Running these manually with docker run commands and keeping them connected is cumbersome. Docker Compose solves this.
```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
  cache:
    image: redis:7-alpine

volumes:
  postgres_data:
```

Start all services:
```bash
docker compose up
```

Start in the background:

```bash
docker compose up -d
```

Stop all services:

```bash
docker compose down
```

View logs:

```bash
docker compose logs -f api
```

Notice how the DATABASE_URL uses `db` as the hostname — that's the service name from the Compose file. Docker Compose creates an internal network where services can reach each other by their service names.
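You can see this in the connection string itself. Parsing the `DATABASE_URL` from the Compose file with Node's built-in `URL` class shows that the host is the service name, not `localhost` (a driver such as `pg` would simply be handed the full string via `process.env.DATABASE_URL`):

```javascript
// The connection string from the Compose file above.
const dbUrl = new URL("postgres://postgres:password@db:5432/myapp");

// The hostname is the Compose service name, resolved by
// Docker's internal DNS — not localhost or an IP address.
console.log(dbUrl.hostname); // "db"
console.log(dbUrl.port);     // "5432"
console.log(dbUrl.pathname); // "/myapp"
```

This is also why the same connection string fails when the API runs on your host machine outside Compose: `db` only resolves inside the Compose network.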
Development vs Production Compose Files
Use separate Compose files for development and production. A development configuration often mounts source code as a volume (so code changes take effect without rebuilding) and enables hot reloading:
```yaml
services:
  api:
    build:
      context: ./api
      target: development
    volumes:
      - ./api:/app
      - /app/node_modules
    command: npm run dev
    ports:
      - "3000:3000"
```

The volume mount `./api:/app` syncs your local files into the container in real time. The `/app/node_modules` entry is an anonymous volume that prevents your local `node_modules` from overwriting the dependencies installed inside the container.
Common Patterns for JavaScript Apps
Next.js with standalone output:
Add to `next.config.js`: `output: 'standalone'`. Then in the Dockerfile:

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
```

Node.js API with health check:

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
```

What to Learn Next
With the fundamentals covered, the next steps are:
- Container registries: Push your images to Docker Hub or GitHub Container Registry so they can be deployed from any server
- Kubernetes: Orchestrate containers at scale across multiple servers
- CI/CD integration: Automatically build and push Docker images in your GitHub Actions pipeline
- Container security scanning: Use tools like Trivy or Docker Scout to scan images for vulnerabilities before deployment
Docker is the foundation of modern deployment. Once your application runs in a container, deploying it to any cloud provider becomes a matter of telling that provider where to find your image.