BytePane

Docker for Developers: Containers, Images & Docker Compose Guide

Docker · 12 min read

Why Docker Matters for Developers

Docker solves the oldest problem in software: "it works on my machine." By packaging an application with its entire runtime environment -- OS libraries, language runtime, dependencies, and configuration -- Docker guarantees that code runs identically on every developer's laptop, in CI/CD, and in production. No more debugging environment mismatches.

Unlike virtual machines, Docker containers share the host OS kernel, making them start in milliseconds and consume a fraction of the memory. A typical development setup with a web server, database, and cache requires three containers totaling 200MB of RAM, compared to 6GB+ for three VMs.

Core Concepts

Concept    | What It Is                                  | Analogy
-----------|---------------------------------------------|--------------------------------
Image      | Read-only template with code + dependencies | A class definition
Container  | Running instance of an image                | An object (instance of a class)
Dockerfile | Build instructions for an image             | A recipe / Makefile
Volume     | Persistent storage outside the container    | An external hard drive
Network    | Virtual network connecting containers       | A private LAN
Registry   | Repository for sharing images (Docker Hub)  | npm / PyPI for containers

Essential Docker Commands

# Pull an image from Docker Hub
docker pull node:20-alpine

# Run a container (interactive + terminal)
docker run -it --rm node:20-alpine sh

# Run a web server (detached, port mapping)
docker run -d -p 3000:3000 --name my-app my-image

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs my-app
docker logs -f my-app          # follow (tail)

# Execute a command inside a running container
docker exec -it my-app sh

# Stop and remove a container
docker stop my-app
docker rm my-app

# Build an image from a Dockerfile
docker build -t my-app:latest .

# Remove unused images and containers
docker system prune -a

Writing a Dockerfile: Node.js Example

A Dockerfile is a text file with instructions to build an image. Each instruction creates a layer that Docker caches for fast rebuilds. Order matters -- put instructions that change frequently (like COPY . .) at the end to maximize cache hits.
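To make the caching rule concrete, here is a sketch of the same steps ordered badly and well (two illustrative fragments, not the complete Dockerfile shown below):

```dockerfile
# Bad: COPY . . comes first, so ANY source change invalidates the
# npm ci layer and dependencies are reinstalled on every build
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci
```

```dockerfile
# Good: package files first -- the npm ci layer is reused from cache
# until package.json or package-lock.json actually changes
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
```

With the second ordering, editing a source file triggers only the final COPY layer on rebuild; the dependency install is served from cache.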

Basic Dockerfile

# Use slim Node.js image (not full -- saves 300MB)
FROM node:20-alpine

# Set working directory inside the container
WORKDIR /app

# Copy package files first (cache layer for dependencies)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application source code
COPY . .

# Build the application (if using TypeScript, Next.js, etc.)
RUN npm run build

# Expose the port the app listens on
EXPOSE 3000

# Define the command to run the application
CMD ["node", "dist/server.js"]

Multi-Stage Build (Production-Optimized)

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (only the built output)
FROM node:20-alpine AS production
WORKDIR /app

# Copy only production dependencies
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Copy built output from builder stage
COPY --from=builder /app/dist ./dist

# Non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

EXPOSE 3000
CMD ["node", "dist/server.js"]

# Result: ~80MB instead of ~500MB
# No TypeScript, no devDependencies, no source code

.dockerignore

# .dockerignore -- exclude from build context
node_modules
.git
.gitignore
*.md
.env
.env.*
coverage
.nyc_output
tests
__tests__
*.test.js
*.spec.js
Dockerfile
docker-compose*.yml
.dockerignore

The .dockerignore file works like .gitignore -- it keeps unnecessary files out of the build context sent to the Docker daemon, which speeds up builds and keeps secrets like .env out of your images. Always exclude node_modules, .git, and test files. Note that the patterns are glob-style (with ** for nested directories and ! for exceptions), not regular expressions.
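For simple flat patterns like *.test.js, the shell's own case globbing behaves much like .dockerignore matching, which makes it easy to sanity-check a pattern locally. A rough sketch (matches_pattern is a hypothetical helper; this approximation ignores ** and ! negation):

```shell
# Hypothetical helper: test a filename against a simple glob pattern
# using the shell's case statement (approximates .dockerignore matching
# for flat patterns; does not handle ** or ! negation)
matches_pattern() {
  file="$1"
  pattern="$2"
  case "$file" in
    $pattern) echo "ignored" ;;
    *)        echo "kept" ;;
  esac
}

matches_pattern "app.test.js" "*.test.js"   # -> ignored
matches_pattern "server.js"   "*.test.js"   # -> kept
```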

Docker Compose: Multi-Container Applications

Docker Compose defines multi-container applications in a single YAML file. Instead of running multiple docker run commands with complex flags, you declare your entire stack and start everything with docker compose up.

Full-Stack Development Setup

# docker-compose.yml
services:
  # Node.js API server
  api:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app                    # hot-reload: mount source code
      - /app/node_modules         # prevent overwriting container modules
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
      - NODE_ENV=development
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    command: npm run dev

  # PostgreSQL database
  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data    # persist data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

  # Redis cache
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:                         # named volume for database

# Docker Compose commands
docker compose up              # start all services
docker compose up -d           # start in background (detached)
docker compose down            # stop and remove containers
docker compose down -v         # also remove volumes (reset data)
docker compose logs api        # view logs for one service
docker compose exec api sh     # shell into a running service
docker compose build           # rebuild images
docker compose ps              # list running services
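
One convention worth knowing: docker compose up automatically merges a docker-compose.override.yml on top of docker-compose.yml, which is a common place to keep dev-only settings out of the base file. A sketch (the api service name matches the stack above; the DEBUG variable is illustrative):

```yaml
# docker-compose.override.yml -- merged automatically by `docker compose up`
# Dev-only overrides for the api service defined in docker-compose.yml
services:
  api:
    volumes:
      - .:/app                 # hot-reload bind mount, dev only
    environment:
      - DEBUG=true             # illustrative dev-only variable
    command: npm run dev
```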

Docker Compose uses YAML for configuration. If you work with JSON configs alongside YAML, check our guide on YAML vs JSON to understand the syntax differences. Format your JSON config files with our JSON Formatter.

Volumes: Persistent Data

Containers are ephemeral -- when you remove a container, its filesystem is gone. Volumes solve this by storing data outside the container's lifecycle. There are three types of mounts.

Mount Type   | Syntax               | Best For
-------------|----------------------|--------------------------------------
Named volume | pgdata:/var/lib/data | Database storage, persistent data
Bind mount   | ./src:/app/src       | Development hot-reload, config files
tmpfs mount  | tmpfs: /tmp          | Temporary data, secrets (RAM only)
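
All three mount types can also be written in Compose's more explicit long syntax, which makes the type visible at a glance. A sketch (service and volume names are illustrative):

```yaml
services:
  app:
    volumes:
      - type: volume           # named volume -> survives container removal
        source: pgdata
        target: /var/lib/data
      - type: bind             # bind mount -> host directory mapped in
        source: ./src
        target: /app/src
      - type: tmpfs            # tmpfs -> RAM only, gone when container stops
        target: /tmp

volumes:
  pgdata:
```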

Docker Networking

Docker Compose automatically creates a network for your services. Containers can reach each other by service name (DNS resolution). The api service connects to the database as db:5432, not localhost:5432.

# Container networking in Docker Compose

# From the api container:
# db:5432     -> reaches PostgreSQL (service name = hostname)
# cache:6379  -> reaches Redis
# localhost   -> only the api container itself

# Port mapping: host:container
ports:
  - "3000:3000"   # host port 3000 -> container port 3000
  - "8080:3000"   # host port 8080 -> container port 3000

# Internal-only service (no host port)
# (no 'ports' key = only reachable from other containers)
db:
  image: postgres:16-alpine
  # no ports = not exposed to host, only to other services

Dockerfile Best Practices

  1. Use specific image tags -- always pin versions (node:20.11-alpine) instead of node:latest. The latest tag changes unpredictably and can break builds.
  2. Order instructions by change frequency -- put rarely changing layers first (base image, system deps) and frequently changing layers last (source code) to maximize cache hits.
  3. Use multi-stage builds -- separate build tools and dev dependencies from the production image. This cuts image size by 70-90%.
  4. Run as non-root -- create a dedicated user with USER. Running as root inside containers is a security risk. See our guide on Linux file permissions for user management.
  5. Use .dockerignore -- exclude build artifacts, tests, documentation, and version control from the build context.
  6. One process per container -- run your application as PID 1. Do not run supervisord or multiple processes. Use Docker Compose for multi-process stacks.
  7. Add health checks -- use HEALTHCHECK in Dockerfiles or healthcheck in Compose so orchestrators know when your app is ready.
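
Item 7 looks like this in a Dockerfile (a sketch, assuming your app serves a /health endpoint on port 3000 -- adjust the path and port to your app; wget is available in Alpine-based images):

```dockerfile
# Mark the container unhealthy if /health stops responding
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```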

Docker vs Alternatives

Tool             | Use Case                      | Compared to Docker
-----------------|-------------------------------|----------------------------------------
Podman           | Drop-in Docker replacement    | Daemonless, rootless by default
Kubernetes       | Production orchestration      | Scaling, self-healing, rolling deploys
VMs (VirtualBox) | Full OS isolation             | Heavier, slower, stronger isolation
nix / devbox     | Reproducible dev environments | No containers, package-level isolation
Dev Containers   | VS Code remote development    | Docker-based, IDE-integrated

Frequently Asked Questions

What is the difference between a Docker image and a container?
A Docker image is a read-only template containing the application code, runtime, libraries, and configuration. Think of it as a blueprint or a class in OOP. A container is a running instance of an image -- it has its own filesystem, network, and process space. You can create multiple containers from the same image, just like creating multiple objects from the same class. Images are built with docker build and stored in registries; containers are created with docker run, and a stopped container still exists on disk (visible with docker ps -a) until you remove it with docker rm.
How do I reduce Docker image size?
Use multi-stage builds to separate the build environment from the production image. Start from slim or Alpine base images (node:20-alpine is 50MB vs node:20 at 350MB). Combine RUN commands to reduce layers. Add a .dockerignore file to exclude node_modules, .git, and test files from the build context. Install only production dependencies (npm ci --omit=dev). These practices can reduce image size by 80-90%.
Should I use Docker Compose or Kubernetes?
Docker Compose is designed for local development and single-server deployments. It defines multi-container applications in a single YAML file and is simple to learn. Kubernetes is an orchestration platform for production workloads that need auto-scaling, self-healing, rolling deployments, and multi-node clusters. For most development teams, use Docker Compose locally and for staging, and Kubernetes (or a managed service like ECS, Cloud Run) for production if you need horizontal scaling.

Format Your Docker Configs

Working with JSON config files, environment variables, or API responses in your containers? Format and validate them instantly with our free developer tools.
