Docker Cheat Sheet: Essential Commands for Developers (2026)
For the first time in the history of the Stack Overflow Developer Survey, Docker ranked as the #1 most-used cloud development tool in 2025 — with 59% of professional developers reporting regular use, a jump of 17 percentage points year-over-year. The CNCF 2025 Annual Cloud Native Survey (15.6 million cloud native developers globally) found that containers are now running in 92% of professional IT environments, up from 80% in 2024.
Docker Hub has recorded 318 billion all-time image pulls — a 145% year-over-year increase — across 7.3 million accounts and 8.3 million container repositories. The number of companies using Docker globally now exceeds 108,000.
This reference covers the commands you will actually reach for, organized by workflow phase; the flags that matter and why; and the troubleshooting commands that turn black-box containers into debuggable processes. It assumes Docker Engine 23.0+ (BuildKit default) with Compose V2 (docker compose as a plugin — the old docker-compose V1 Python binary reached end-of-life in July 2023).
Key Takeaways
- Docker is now the #1 dev tool per the 2025 Stack Overflow survey (59% usage, +17pp YoY). Docker Hub hit 318B all-time pulls.
- BuildKit is the default since Docker 23.0 — parallel stage builds and --mount=type=cache can reduce CI build times by 70%+.
- docker init (GA since Docker Desktop 4.27) generates a production-ready Dockerfile, Compose file, and .dockerignore automatically.
- Run docker system df before docker system prune — know what you are deleting before the nuclear option.
- Named volumes survive docker rm; bind mounts do not. Use named volumes for any data that must persist across container replacements.
Container Lifecycle
The most commonly looked-up commands — the day-to-day container start/stop/inspect cycle:
# Run a container (creates + starts in one step)
docker run nginx:alpine
# Common flags on docker run:
docker run -d nginx:alpine # detached (background)
docker run -it ubuntu:24.04 bash # interactive shell
docker run --rm alpine echo "hello" # delete on exit (one-offs)
docker run -p 8080:80 nginx:alpine # map host:container ports
docker run -p 127.0.0.1:8080:80 nginx # bind to localhost only
docker run -v /host/path:/app/data nginx # bind mount
docker run -v myvolume:/app/data nginx # named volume
docker run -e NODE_ENV=production myapp # env variable
docker run --name my-nginx nginx:alpine # human-readable name
docker run --restart unless-stopped nginx # auto-restart policy
# List containers
docker ps # running containers only
docker ps -a # all containers (including stopped)
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Stop / start / restart
docker stop my-nginx # SIGTERM → waits 10s → SIGKILL
docker stop -t 30 my-nginx # custom grace period (30s)
docker kill my-nginx # SIGKILL immediately (no grace period)
docker start my-nginx # restart a stopped container
docker restart my-nginx # stop + start
# Remove containers
docker rm my-nginx # must be stopped first
docker rm -f my-nginx # force-remove running container
docker rm $(docker ps -aq) # remove all containers (running ones error without -f)
docker container prune # remove all stopped containers, with confirmation prompt
Understanding docker run Flags
| Flag | Full Form | Purpose | When to Use |
|---|---|---|---|
| -d | --detach | Run in background; print container ID | Long-running services (web servers, databases) |
| -it | --interactive --tty | Keep STDIN open + allocate pseudo-TTY | Interactive shells, debugging sessions |
| --rm | — | Delete container when it exits | One-off commands, CI steps, scripts |
| -p | --publish | Map host:container port | Exposing services to the host or network |
| -v | --volume | Mount host path or named volume | Persisting data, live code reloading |
| -e | --env | Set environment variable | Config, API keys (use --env-file for many) |
| --name | — | Assign human-readable name | Any container you'll reference by name later |
Critical gotcha: -d and -it pull in opposite directions. Use -it without -d when you need a shell right now; use -d for background services you will check on later with docker logs. (Combining them, as in -dit, is valid — it keeps STDIN open on a detached container — but it will not drop you into a shell.)
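The --env-file option mentioned in the flags table deserves a concrete sketch. The file name app.env and the variable values below are illustrative; the docker step is skipped where no daemon is available:

```shell
# app.env — one KEY=value per line (no quotes, no export)
cat > app.env <<'EOF'
NODE_ENV=production
LOG_LEVEL=debug
EOF

# Pass the whole file instead of repeating -e for each variable
command -v docker >/dev/null && docker run --rm --env-file app.env alpine env || true
```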
Image Management
# List local images
docker images
docker image ls
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
# Pull / push
docker pull nginx:alpine # from Docker Hub
docker pull ghcr.io/org/image:tag # from GitHub Container Registry
docker push myrepo/myimage:latest
# Build
docker build -t myapp:latest .
docker build -t myapp:latest -f Dockerfile.prod . # custom Dockerfile
docker build --no-cache -t myapp:latest . # skip cache
# Tag
docker tag myapp:latest myrepo/myapp:1.0.0
# Remove
docker rmi nginx:alpine # by name
docker rmi abc123def456 # by image ID
docker image prune # remove dangling (untagged) images
docker image prune -a # remove ALL unused images
# Inspect an image (layers, config, env, entrypoint)
docker inspect nginx:alpine
docker history nginx:alpine # show all layers + sizes
Building Images: BuildKit & Multi-Stage Builds
BuildKit has been the default build backend since Docker Engine 23.0 (February 2023). Its key advantages: it builds independent Dockerfile stages in parallel using a Directed Acyclic Graph (DAG), and it offers cache mounts that persist across builds without being committed to image layers.
According to Netdata's Docker benchmarks, enabling Docker Layer Caching with BuildKit in CI/CD pipelines reduces build times by 70% or more for typical Node.js and Python applications. The pattern is the same in every language: copy the dependency manifest first, install, then copy source code — so the install layer caches until dependencies actually change.
Layer Cache Ordering (the most impactful Dockerfile habit)
# BAD — cache busted on every source code change
FROM node:20-alpine
WORKDIR /app
COPY . . # invalidates everything below when ANY file changes
RUN npm ci
CMD ["node", "index.js"]
# GOOD — npm ci only reruns when package*.json changes
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./ # copy manifests first
RUN npm ci # cached until deps change
COPY . . # source code last
CMD ["node", "index.js"]
Multi-Stage Build (Node.js → Alpine)
# Stage 1: full SDK image for building
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: minimal runtime image
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Only copy compiled output + prod dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Result: ~120MB vs ~1.1GB for the single-stage node:20 image
BuildKit Cache Mount (persistent across builds)
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# --mount=type=cache keeps the pip cache between builds
# without the cache ending up inside the image layer
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
BuildKit Remote Cache (CI/CD)
# Push build cache to registry (GitHub Actions / GitLab CI)
docker buildx build \
  --cache-from type=registry,ref=myrepo/myimage:cache \
  --cache-to type=registry,ref=myrepo/myimage:cache,mode=max \
  -t myrepo/myimage:latest --push .
# mode=max caches ALL stages, not just the final image's layers
docker init — Auto-Generate a Production Dockerfile
Introduced in beta in 2023 and reaching General Availability in Docker Desktop 4.27, docker init scans your project directory, detects the tech stack (Go, Node.js, Python, Java, Rust, .NET, PHP), and generates a Dockerfile, compose.yaml, and .dockerignore with production defaults:
cd my-project
docker init
# Interactive: select platform, port, entrypoint
# Generates: Dockerfile (multi-stage), compose.yaml, .dockerignore
The generated Dockerfile uses multi-stage builds by default — something most developers skip when writing their first Dockerfile from scratch. It also populates .dockerignore with sensible exclusions (node_modules, .git, test files), which prevents the most common source of unnecessary cache busting.
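For reference, the exclusions docker init writes for a Node.js project look roughly like this — the exact contents vary by detected stack and version, so treat this as an illustrative sketch:

```
# .dockerignore — keeps the build context small and the COPY cache stable
node_modules
.git
*.log
**/*.test.js
Dockerfile
compose.yaml
.dockerignore
README.md
```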
Volumes and Networks
Volumes
# Named volumes (Docker-managed, persists across container removal)
docker volume create mydata
docker volume ls
docker volume inspect mydata # shows mount path: /var/lib/docker/volumes/
docker volume rm mydata
docker volume prune # remove all unused volumes
# Bind mounts (host filesystem — live code reloading in dev)
docker run -v $(pwd):/app node:20 # current directory → /app
# tmpfs mount (in-memory, not persisted to disk or image)
docker run --tmpfs /tmp myapp
Networks
# List and create networks
docker network ls
docker network create mynet
docker network create --driver bridge mynet # default driver
docker network inspect mynet
# Connect containers to a network
docker run --network mynet --name api myapp
docker run --network mynet --name db postgres
# Containers on the same network resolve each other by name:
# Inside 'api', postgres is reachable at hostname 'db'
# Connect a running container to an additional network
docker network connect mynet existing-container
# Remove
docker network rm mynet
docker network prune # remove all unused networks
Docker Compose V2 Commands
Compose V2 is written in Go and ships as a Docker CLI plugin (docker compose with a space). The old Python-based docker-compose reached end-of-life July 2023 and was removed from Docker Desktop in 2024. Use docker compose (no hyphen) going forward.
# Start services
docker compose up # foreground, streams logs
docker compose up -d # detached (background)
docker compose up --build # rebuild images before starting
docker compose up api db # start only specific services
# Stop / tear down
docker compose down # stop + remove containers and networks
docker compose down -v # also remove named volumes (⚠️ deletes data)
docker compose down --rmi all # also remove all images
# View status and logs
docker compose ps
docker compose logs # all services
docker compose logs -f # follow/stream
docker compose logs api # specific service only
docker compose logs --tail=50 api
# Run one-off commands
docker compose run --rm api bash # interactive shell in service
docker compose run --rm api npm run migrate # one-off command
# Rebuild and restart a single service
docker compose build api
docker compose up -d --no-deps api # restart api without restarting deps
# Scale a service
docker compose up -d --scale worker=4
# Config validation
docker compose config # validate and print resolved compose.yaml
Minimal compose.yaml Reference
# compose.yaml (preferred filename — docker-compose.yml still works)
services:
api:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- DATABASE_URL=postgres://user:pass@db:5432/mydb
volumes:
- .:/app # bind mount for live reload
- /app/node_modules # anonymous volume — keeps container's node_modules
depends_on:
db:
condition: service_healthy
networks:
- backend
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: mydb
volumes:
- pgdata:/var/lib/postgresql/data # named volume — survives docker compose down
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
interval: 5s
timeout: 3s
retries: 5
networks:
- backend
volumes:
pgdata:
networks:
backend:
System Commands and Disk Cleanup
Docker accumulates disk usage quickly. A single active project with several builds can consume 10–15GB. Run docker system df before any prune command — know what you are deleting.
# Check disk usage (containers, images, volumes, build cache)
docker system df
docker system df -v # verbose: breakdown per image/container/volume
# Prune options (from conservative to nuclear)
docker container prune # stopped containers only
docker image prune # dangling images only (untagged, unreferenced)
docker image prune -a # ALL unused images (nothing pointing to them)
docker volume prune # unused named volumes
docker network prune # unused networks
docker builder prune # build cache only
# The nuclear option — reclaim maximum space
# Removes: stopped containers, unused networks, ALL unused images,
# build cache. Does NOT remove named volumes by default.
docker system prune -a
# Include volumes (careful — deletes database data)
docker system prune -a --volumes
Troubleshooting Commands
Containers are black boxes until you know how to see inside them. These commands convert opaque container failures into debuggable processes.
docker logs — Reading Output
docker logs my-container # full log output
docker logs -f my-container # follow / stream live
docker logs --tail 100 my-container # last 100 lines
docker logs --timestamps my-container # RFC3339Nano timestamps
docker logs --since 1h my-container # logs from last hour
docker logs --since 2026-03-29T10:00:00 c # since specific time
docker exec — Shell Access
docker exec -it my-container bash # interactive shell (full distro)
docker exec -it my-container sh # Alpine / minimal images (no bash)
# Run specific commands without interactive session
docker exec my-container env # list all env vars
docker exec my-container cat /etc/hosts # check networking config
docker exec my-container ls -la /app # inspect filesystem
# Debug a distroless or minimal container (Docker Desktop only)
docker debug my-container # attaches debug sidecar — no shell required in image
docker inspect — Full Metadata
docker inspect my-container # full JSON: networking, mounts, env, state
docker inspect my-image # inspect an image — layers, config, entrypoint
# Filter with --format (Go template) — faster than piping to jq
docker inspect --format '{{.State.Status}}' my-container
docker inspect --format '{{.NetworkSettings.IPAddress}}' my-container
docker inspect --format '{{range .Mounts}}{{.Source}} → {{.Destination}}{{"\n"}}{{end}}' c
# With jq for complex queries
docker inspect my-container | jq '.[0].State'
docker inspect my-container | jq '.[0].HostConfig.PortBindings'
docker stats — Live Resource Monitoring
docker stats # live CPU/mem/net/IO for all containers
docker stats my-container # specific container
docker stats --no-stream # single snapshot (scriptable)
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"
# Companion commands
docker top my-container # running processes inside (like ps aux)
docker events # real-time Docker daemon event stream
docker events --filter type=container # filter by object type
docker events --since 30m # last 30 minutes of events
Copying Files Out of a Container
# Copy a file from a running or stopped container to the host
docker cp my-container:/app/logs/error.log ./error.log
# Copy a directory
docker cp my-container:/app/dist ./dist-backup
# Copy from host into container
docker cp ./config.json my-container:/app/config.json
Registry Authentication and Operations
# Docker Hub
docker login
echo mytoken | docker login -u myuser --password-stdin # non-interactive (CI) — -p is deprecated
docker logout
# GitHub Container Registry
docker login ghcr.io -u USERNAME --password-stdin <<< $GITHUB_TOKEN
# AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
# Tag for a specific registry
docker tag myapp:latest ghcr.io/myorg/myapp:latest
docker tag myapp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
# Save image to file (for air-gapped environments)
docker save myapp:latest | gzip > myapp.tar.gz
# Load image from file
docker load < myapp.tar.gz
Docker Compose vs Kubernetes: When to Migrate
Per the CNCF 2025 Annual Cloud Native Survey, 82% of container users now run Kubernetes in production — up from 66% in 2023. But this figure skews heavily toward enterprises with dedicated platform teams. The practical decision framework:
| Criterion | Docker Compose | Kubernetes |
|---|---|---|
| Team size (infra) | 1–6 engineers | Dedicated platform team |
| Host count | Single host | Multi-node required |
| Autoscaling | Manual scale flag | HPA / KEDA built-in |
| Uptime SLA | Best-effort single host | Multi-zone HA |
| Onboarding time | Minutes (one command) | Days to weeks |
| GPU workloads | Possible, manual | CDI-based scheduling (K8s + Docker Engine 29) |
The practical rule: start with Docker Compose for dev and production, migrate to Kubernetes when you hit one of three walls — multi-host scheduling requirements, autoscaling needs, or strict multi-zone HA uptime SLAs. The migration is incremental; Compose files can be converted with kompose convert as a starting point.
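The kompose starting point is a one-liner; the generated manifests still need manual review for volumes, secrets, and resource limits:

```shell
# Convert compose.yaml into one Kubernetes manifest per service/volume
kompose convert -f compose.yaml
# Review the generated *.yaml files, then apply:
# kubectl apply -f .
```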
For deeper coverage of Compose patterns for local development including hot reloading, service dependencies, and database initialization, see Docker Compose for Local Development.
Docker Desktop Licensing (2022 Change — Still Relevant in 2026)
Docker Desktop became a paid product for professional use in January 2022. Organizations with 250+ employees or $10M+ in annual revenue must purchase a subscription. The tiers:
| Tier | Price | Key Features |
|---|---|---|
| Personal (free) | $0 | Individual use, education, non-commercial OSS |
| Pro | $5/user/mo | Single commercial user |
| Team | $9/user/mo | Up to 100 users (raised cap in Oct 2022) |
| Business | $24/user/mo | SAML SSO, centralized management, image access control |
Free alternatives for enterprise environments: Podman Desktop (Red Hat, free, rootless by default), Rancher Desktop (SUSE, free, supports containerd or moby), and OrbStack (macOS, fast alternative). Docker Engine itself — the CLI and daemon without the GUI — remains Apache 2.0 licensed and free on Linux.
FAQ
What is the difference between docker stop and docker kill?
docker stop sends SIGTERM to PID 1, giving the process up to 10 seconds (configurable with -t) to shut down cleanly. If it does not exit in time, Docker sends SIGKILL. docker kill sends SIGKILL immediately with no grace period. Always prefer docker stop for databases and message brokers — abrupt SIGKILL can corrupt write-ahead logs and uncommitted transactions.
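A related gotcha: whether PID 1 ever receives that SIGTERM depends on the CMD form in your Dockerfile. A minimal illustration (server.js is a placeholder):

```
# Shell form: /bin/sh -c becomes PID 1 and does not forward SIGTERM —
# docker stop will always wait out the grace period, then SIGKILL:
#   CMD node server.js

# Exec form: node itself is PID 1 and receives SIGTERM directly:
CMD ["node", "server.js"]

# If the app shuts down on a different signal, declare it:
STOPSIGNAL SIGINT
```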
What is the difference between a bind mount and a named volume?
A bind mount (-v /host/path:/container/path) maps a specific host directory — changes are visible on both sides instantly, useful for live code reloading in development. A named volume (-v myvolume:/container/path) is managed by Docker in /var/lib/docker/volumes/, survives docker rm, and is not host-path-dependent. Use named volumes for database data that must persist across container replacements.
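The persistence difference is easy to see in three commands (the volume name mydata and the file path are illustrative; requires a running Docker daemon):

```shell
# Write into a named volume from a throwaway container, then remove it
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/f.txt'
# A brand-new container sees the same data — the volume outlived the container
docker run --rm -v mydata:/data alpine cat /data/f.txt
docker volume rm mydata   # cleanup
```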
How do multi-stage builds reduce image size?
Multi-stage builds let you compile or bundle in a full SDK image, then COPY --from=builder only the production artifacts into a minimal base (alpine, distroless, or scratch). The final image contains no compiler, source files, or dev dependencies. A Node.js app goes from ~1.1GB (node:20) to ~120MB (node:20-alpine). A compiled Go binary using FROM scratch can be under 20MB.
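To illustrate the scratch case, a minimal static Go build might look like this (module layout and binary name are placeholders):

```
# Stage 1: compile a fully static binary (CGO off → no libc needed)
FROM golang:1.22 AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: empty base — the final image is essentially just the binary
FROM scratch
COPY --from=builder /bin/app /app
ENTRYPOINT ["/app"]
```

If the app makes outbound TLS calls, also copy CA certificates (e.g. /etc/ssl/certs/ca-certificates.crt) from the builder stage, since scratch contains nothing at all.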
What does docker system prune remove?
By default: stopped containers, networks not used by any container, dangling images (untagged, not referenced), and the build cache. It does NOT remove named volumes or tagged images unless you add -a (all unused images) and --volumes. Always run docker system df first to see breakdown by category before running prune.
When should I use Docker Compose instead of Kubernetes?
Docker Compose is the right choice for teams with up to about six infra engineers, single-host deployments with 1-5 services, and projects where developer onboarding speed is the priority. Kubernetes makes sense when you need multi-node scheduling, autoscaling (HPA/KEDA), multi-zone HA, or GPU workload scheduling. Per CNCF 2025, 82% of container users run Kubernetes in production — but that skews heavily toward enterprises with dedicated platform teams.
Is Docker Desktop still free in 2026?
Free for personal use, education, and non-commercial open source. Since January 2022, commercial use by organizations with 250+ employees or $10M+ revenue requires a paid subscription (Pro $5/user/mo, Team $9/user/mo, Business $24/user/mo). Docker Engine (the daemon + CLI, no GUI) remains Apache 2.0 licensed and fully free on Linux for all use cases.
What is BuildKit and should I enable it?
BuildKit is the default build backend since Docker Engine 23.0. It builds independent Dockerfile stages in parallel, offers cache mounts that persist across builds without being committed to layers, and supports secrets injection without leaking into the image. Netdata benchmarks show BuildKit layer caching can reduce CI build times by 70%+. On Docker 23.0+, it is already active — use docker buildx build to access full BuildKit features including multi-platform builds.
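The secrets feature works via a mount that exists only for the duration of one RUN step — the id npmrc and the target path here are illustrative:

```
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The secret file is visible only during this RUN and is never stored in a layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
```

Build with docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp . — the source file stays on the host and never enters the image history.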
Related Developer References
Container workflows often involve environment variables, JSON config files, and SSH keys. BytePane covers the full stack.