Linux Command Cheat Sheet: 50+ Essential Terminal Commands
According to the Stack Overflow Developer Survey 2025, Bash/shell scripting is used by 49% of all developers — fifth most-used language overall, ahead of C, Rust, and Kotlin. That number reflects a reality that most developers learn gradually and painfully: the terminal is not optional. Linux commands are the interface to every production server, CI/CD pipeline, Docker container, and cloud VM you will ever manage.
Linux powers 96.3% of the top 1 million web servers worldwide (W3Techs, 2024) and runs on 72.6% of Fortune 500 mission-critical workloads (Linux Foundation). Even if you develop on macOS or Windows, you deploy to Linux. Knowing the terminal fluently is the difference between a 30-second fix and a 30-minute debugging session.
This cheat sheet covers the commands you actually need — organized by use case, with real examples, and notes on modern Rust-based alternatives that are faster than their POSIX counterparts.
Key Takeaways
- 49% of developers use Bash/shell scripting (Stack Overflow 2025) — it is the language of automation, regardless of your primary stack.
- Modern Rust-based tools (ripgrep, fd, bat, eza) are 8–23× faster than their classic counterparts and significantly more ergonomic.
- Always use SIGTERM (-15) before SIGKILL (-9). Processes should be given the chance to clean up.
- The pipe ( | ) operator is the most powerful feature of the shell — composing small, focused tools into complex operations.
- ss replaces netstat, ip replaces ifconfig — if you are still using the legacy tools, upgrade your mental model.
File and Directory Management
File operations are the foundation. These commands work identically on Ubuntu, Debian, Arch, Fedora, and macOS (which runs a POSIX-compliant BSD userland with most of the same tools).
# List files
ls -la # All files including hidden, long format
ls -lhS # Sort by size, human-readable sizes
ls -lt # Sort by modification time, newest first
# Navigate
cd /var/log # Absolute path
cd .. # Parent directory
cd - # Previous directory (toggle)
cd ~ # Home directory
# Create
mkdir -p a/b/c # Create nested dirs, no error if exists
touch file.txt # Create empty file or update timestamp
cp -r src/ dst/ # Copy directory recursively
cp -a src/ dst/ # Archive copy — preserves timestamps, permissions
# Move & rename
mv old.txt new.txt # Rename file
mv file.txt /tmp/ # Move file to /tmp/
mv -n src dst # No clobber — skip if destination exists
# Delete (CAREFUL)
rm file.txt # Delete file
rm -rf dir/ # Recursively delete directory and contents
rm -i *.log # Interactive: prompt before each deletion
# View file info
file image.png # Detect file type from magic bytes, not extension
stat file.txt # Full metadata: size, inodes, access times, permissions
du -sh ./ # Disk usage of current directory, human-readable
du -sh * | sort -rh # Largest entries first — good for finding disk hogs
Viewing File Contents
cat file.txt # Print entire file
head -n 20 file.txt # First 20 lines
tail -n 50 file.txt # Last 50 lines
tail -f /var/log/app.log # Follow live — essential for log monitoring
less file.txt # Paginated viewer (q to quit, /pattern to search)
# Concatenate and redirect
cat a.txt b.txt > combined.txt # Merge two files into one
cat /dev/null > file.txt # Empty a file without deleting it
# Compression and archives
tar -czf archive.tar.gz dir/ # Create gzip-compressed tarball
tar -xzf archive.tar.gz # Extract gzip tarball to current dir
tar -xzf archive.tar.gz -C /opt # Extract to specific directory
tar -tf archive.tar.gz # List contents without extracting
zip -r archive.zip dir/ # Create zip
unzip -l archive.zip # List zip contents
Text Processing: The Shell Pipeline
The Unix philosophy — small tools that do one thing well, composed via pipes — reaches its most practical form in text processing. Mastering this set replaces entire Python scripts for data manipulation tasks.
# grep — search for patterns
grep "ERROR" app.log # Lines containing ERROR
grep -i "error" app.log # Case-insensitive
grep -r "TODO" ./src # Recursive search through directory
grep -n "function" main.js # Show line numbers
grep -v "DEBUG" app.log # Invert: lines NOT matching
grep -c "404" access.log # Count matching lines
grep -A 3 -B 3 "FATAL" app.log # Show 3 lines before and after match
grep -E "^[0-9]{4}" data.csv # Extended regex
# sed — stream editor (search, replace, transform)
sed 's/foo/bar/g' file.txt # Replace all occurrences of foo → bar
sed 's/foo/bar/g' file.txt > new.txt # Write to new file (sed doesn't edit in-place by default)
sed -i 's/localhost/prod-db/g' config.yml # In-place edit (Linux)
sed -i '' 's/localhost/prod-db/g' config.yml # In-place edit (macOS BSD sed)
sed -n '10,20p' file.txt # Print only lines 10–20
sed '/^#/d' config.ini # Delete comment lines (starting with #)
# awk — columnar data processing
awk '{print $1, $3}' data.txt # Print columns 1 and 3
awk -F: '{print $1}' /etc/passwd # Parse colon-delimited: print usernames
awk '{sum += $2} END {print sum}' data.txt # Sum column 2
awk 'NR > 1' file.csv # Skip header (print from row 2)
awk '$3 > 100 {print $1, $3}' data.txt # Filter rows where column 3 > 100
# Sorting and deduplication
sort file.txt # Alphabetical sort
sort -n file.txt # Numeric sort
sort -rn file.txt # Reverse numeric sort
sort -t, -k2 -n data.csv # Sort CSV by column 2 numerically
sort data.txt | uniq # Remove duplicate lines (must be sorted first)
sort data.txt | uniq -c | sort -rn # Count occurrences, sort by frequency
# Other essentials
cut -d, -f1,3 data.csv # Extract columns 1 and 3 from CSV
wc -l file.txt # Count lines
wc -w file.txt # Count words
tr 'a-z' 'A-Z' < input.txt # Translate lowercase to uppercase
tr -d '\r' < windows.txt > unix.txt # Remove carriage returns (Windows → Unix)
xargs -I{} echo "Processing: {}" < list.txt # Pass each line as an argument
tee output.log # Pipe to both stdout and a file simultaneously
Practical One-Liners
# Top 10 most frequent IPs in access log
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -10
# Find all files larger than 100MB
find . -size +100M -type f
# Count total lines of code in a project (excluding node_modules)
find . -name "*.ts" -not -path "*/node_modules/*" | xargs wc -l | tail -1
# All unique HTTP status codes in nginx log
awk '{print $9}' access.log | sort -u
# Watch a log file and highlight ERROR lines in red
tail -f app.log | grep --color=always -E "ERROR|$"
# Find and replace across all files in a directory
find . -name "*.json" -exec sed -i 's|/v1/api|/v2/api|g' {} + # Use | as delimiter when the pattern contains /
# Extract all email addresses from a file
grep -oE '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' contacts.txt | sort -u
For pattern testing before running on production data, use the BytePane Regex Tester to validate your grep/sed patterns interactively.
Process Management
Understanding processes is non-negotiable for production debugging. Memory leaks, CPU spikes, zombie processes, and port conflicts — these all require process management commands.
# View processes
ps aux # All running processes: user, PID, %CPU, %MEM, command
ps aux | grep nginx # Find nginx processes
ps -ef # Full-format: shows PPID (parent process ID)
ps -o pid,ppid,cmd,%cpu --sort=-%cpu # Custom columns, sorted by CPU
# Real-time monitoring
top # Basic real-time process view
htop # Enhanced top with colors and F5 tree view (install separately)
# In top/htop: press 'k' to kill, 'r' to renice, 'q' to quit
# Sending signals
kill -15 <PID> # SIGTERM — graceful shutdown (preferred)
kill -9 <PID> # SIGKILL — force kill (last resort)
kill -1 <PID> # SIGHUP — reload config (used by nginx, apache)
pkill nginx # Kill by process name
pkill -f "node server.js" # Kill by full command match
killall node # Kill all processes named "node"
# Background jobs
./server.js & # Run in background, prints PID
jobs # List background jobs for current shell
fg %1 # Bring job 1 to foreground
bg %1 # Continue stopped job 1 in background
nohup ./script.sh & # Survive terminal close, output → nohup.out
disown %1 # Remove job from shell's job table (survives shell exit)
# Priority
nice -n 10 ./heavy-task.sh # Start with reduced priority (nice: -20 to +19)
renice +5 -p <PID> # Change priority of running process
# Checking port usage
ss -tlnp # All listening TCP ports + owning processes
ss -tlnp | grep :3000 # What is listening on port 3000?
lsof -i :3000 # Alternative: open files on port 3000
The critical rule on signals: always send SIGTERM (-15) first, wait 5–10 seconds, then escalate to SIGKILL (-9) only if the process is still alive. SIGTERM allows the process to flush buffers, close database connections, and write state to disk. SIGKILL bypasses all of that — you risk data corruption or half-written files if you jump straight to -9.
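The escalation rule above can be sketched as a small bash function. This is an illustrative sketch, not a standard command — `graceful_kill` is a name invented here, and the default timeout is arbitrary:

```shell
# graceful_kill PID [TIMEOUT] — SIGTERM first, SIGKILL only if still alive
graceful_kill() {
  local pid=$1 timeout=${2:-10}
  kill -TERM "$pid" 2>/dev/null || return 0   # already gone
  for _ in $(seq "$timeout"); do
    kill -0 "$pid" 2>/dev/null || return 0    # kill -0: probe only, sends no signal
    sleep 1
  done
  kill -KILL "$pid" 2>/dev/null               # last resort
  return 0                                    # best effort — nothing left to do
}
```

Tune the timeout to your service: a database flushing large buffers may need far more than 10 seconds.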
Networking Commands
Linux networking tools have a clear legacy/modern split. ifconfig and netstat are deprecated on most modern distros; use ip and ss instead.
# Network interfaces
ip addr # Show all interfaces and IP addresses (replaces ifconfig)
ip addr show eth0 # Specific interface
ip link # Show link layer (MAC addresses, state)
ip route # Routing table (replaces route)
ip route show default # Default gateway
# Connectivity testing
ping -c 4 google.com # Send 4 ICMP packets
ping -i 0.2 google.com # Ping every 200ms
traceroute google.com # Trace route to host (may need install)
mtr google.com # traceroute + ping combined, real-time
# HTTP
curl https://api.example.com # GET request
curl -X POST -H "Content-Type: application/json" -d '{"key":"value"}' https://api.example.com # POST with JSON body
curl -I https://example.com # HEAD request (headers only)
curl -o file.zip https://example.com/file.zip # Download to file
curl -L https://example.com # Follow redirects
curl -v https://example.com # Verbose: show request/response headers
# DNS
dig example.com # DNS lookup (full response)
dig +short example.com # Just the IP(s)
dig MX gmail.com # Mail exchange records
nslookup example.com # Alternative DNS lookup
host example.com # Quick lookup
# Ports and connections
ss -tlnp # Listening TCP ports + owning process
ss -tulnp # TCP + UDP listening
ss -tnp state established # Established connections
netstat -tlnp # Legacy equivalent of ss -tlnp
# File transfer
scp file.txt user@host:/path/ # Secure copy to remote
scp -r dir/ user@host:/path/ # Recursive copy
rsync -avz ./src/ user@host:/dst/ # Sync: archive mode, verbose, compressed
rsync -avz --delete src/ dst/ # Sync + delete files at dest not in src
# SSH
ssh user@host # Connect
ssh -p 2222 user@host # Custom port
ssh -L 5432:localhost:5432 user@host # Local port forward (tunnel DB to local)
ssh -i ~/.ssh/id_rsa user@host # Specify key
# Packet analysis
tcpdump -i eth0 port 443 # Capture HTTPS traffic on eth0
tcpdump -i any -w capture.pcap # Write all traffic to file for Wireshark
File Search: find and the Modern Alternative
# find — POSIX standard, available everywhere
find . -name "*.ts" # Files by name pattern
find . -iname "*.ts" # Case-insensitive
find . -type f -name "*.log" # Files only (not dirs)
find . -type d -name "node_modules" # Directories only
find . -mtime -7 # Modified in the last 7 days
find . -size +10M -type f # Files larger than 10MB
find . -perm 777 # Files with exact permissions
find . -not -path "*/node_modules/*" -name "*.ts" # Exclude path
# Execute a command on each result
find . -name "*.log" -exec rm {} + # Delete all .log files
find . -name "*.txt" -exec chmod 644 {} + # Set permissions on all .txt files
# Combine conditions
find . -type f \( -name "*.jpg" -o -name "*.png" \) # OR condition (parens must be escaped)
find . -type f -name "*.ts" -a -newer package.json # AND: .ts newer than package.json
For searching file contents, see the grep section above and consider ripgrep. For quick content search without installing ripgrep, the Git cheat sheet also covers git grep, which searches only tracked files and is faster than recursive grep for repos.
System Information and Disk Management
# System info
uname -a # Kernel version, architecture, hostname
hostname # System hostname
uptime # How long running, load averages
whoami # Current user
id # User ID, group IDs
cat /etc/os-release # Distro name and version
lscpu # CPU architecture, cores, threads
lsmem # Memory layout
nproc # Number of available CPU cores (useful for make -j$(nproc))
# Memory and CPU
free -h # RAM and swap usage, human-readable
vmstat 1 5 # VM statistics — 5 snapshots, 1 second apart
cat /proc/meminfo # Detailed memory information
cat /proc/cpuinfo # CPU details per core
# Disk usage
df -h # Disk usage per filesystem, human-readable
df -h /var # Usage for specific mount point
du -sh /var/log # Total size of /var/log
du -sh * | sort -rh # Largest items in current directory
lsblk # Block devices and mount points
fdisk -l # Disk partition information (requires root)
# Environment variables
env # Print all environment variables
printenv PATH # Print a specific variable
echo $HOME # Expand a variable
export MY_VAR=value # Set variable for current session and subprocesses
unset MY_VAR # Remove variable
For managing environment variables in applications and CI/CD pipelines, see the environment variables guide — it covers .env file patterns, secrets management, and the difference between shell-level and process-level environment variables.
Permissions and User Management
Linux permissions are a three-level model: owner (u), group (g), and other (o). Each level can have read (r=4), write (w=2), and execute (x=1) permissions. The octal value is the sum: 7 = rwx, 6 = rw-, 5 = r-x, 4 = r--. For a deep dive into special permissions (setuid, setgid, sticky bit) and ACLs, see the Linux file permissions guide.
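The octal model is easy to verify yourself. A quick demo in a scratch directory (GNU `stat` shown; on macOS the equivalent flag is `stat -f '%Lp'`):

```shell
touch demo.txt
chmod 640 demo.txt         # owner rw- (4+2=6), group r-- (4), other --- (0)
stat -c '%a %A' demo.txt   # prints: 640 -rw-r-----
rm demo.txt
```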
# chmod — change file mode bits
chmod 755 script.sh # rwxr-xr-x (owner: full, group/other: read+execute)
chmod 644 config.yaml # rw-r--r-- (standard for config files)
chmod 600 ~/.ssh/id_rsa # rw------- (SSH refuses private keys with looser permissions)
chmod +x script.sh # Add execute bit for all
chmod u+x,g-w,o= script.sh # Symbolic: add exec to owner, remove write from group, clear other
# chown — change ownership
chown alice file.txt # Change owner to alice
chown alice:developers file.txt # Change owner and group
chown -R www-data:www-data /var/www/html # Recursive
# sudo
sudo command # Run as root
sudo -u alice command # Run as specific user
sudo !! # Re-run previous command with sudo
sudo -l # List allowed sudo commands for current user
sudo -i # Interactive root shell (use sparingly)
sudo -k # Invalidate cached sudo credentials
# User management
useradd -m -s /bin/bash bob # Create user with home dir and bash shell
passwd bob # Set password for bob
usermod -aG docker bob # Add bob to docker group (append, don't replace)
id bob # Show user/group IDs
groups bob # Show group memberships
su - alice # Switch to user alice (full login environment)
Shell Productivity: Shortcuts and Features
# History
history # Show command history
history | grep docker # Search history for docker commands
!! # Repeat last command
!ssh # Repeat last command starting with "ssh"
ctrl+r # Reverse-search history interactively (type to filter)
# Navigation shortcuts
ctrl+a # Move cursor to beginning of line
ctrl+e # Move cursor to end of line
ctrl+u # Delete from cursor to beginning of line
ctrl+k # Delete from cursor to end of line
ctrl+w # Delete word before cursor
alt+f / alt+b # Move forward/backward by word
ctrl+l # Clear screen (same as clear)
# Useful operators
command1 && command2 # Run command2 ONLY if command1 succeeds
command1 || command2 # Run command2 ONLY if command1 fails
command1 ; command2 # Run both, regardless of success
command > file.txt # Redirect stdout to file (overwrite)
command >> file.txt # Redirect stdout to file (append)
command 2> error.log # Redirect stderr to file
command > out.log 2>&1 # Redirect both stdout and stderr to file
command1 | command2 # Pipe stdout of command1 to stdin of command2
# Brace expansion
mkdir -p project/{src,tests,docs} # Create three dirs in one command
cp config{,.backup} # Copy config to config.backup
touch file{1..10}.txt # Create file1.txt through file10.txt
# Aliases (put in ~/.bashrc or ~/.zshrc)
alias ll='ls -la'
alias gs='git status'
alias ..='cd ..'
alias ...='cd ../..'
# Functions
deploy() {
  git push origin "$1" && echo "Deployed branch: $1"
}
Modern Rust-Based CLI Alternatives
A cohort of Rust-written CLI tools has emerged since 2019 that outperform their POSIX counterparts significantly. These are not cosmetic replacements — the performance differences are measurable and reproducible.
| Modern Tool | Replaces | Speed Gain | GitHub Stars | Key Advantage |
|---|---|---|---|---|
| ripgrep (rg) | grep | 8–10× faster | 61,930 | Parallel search, respects .gitignore |
| fd | find | 13–23× faster | 42,346 | Case-insensitive by default, simpler syntax |
| bat | cat | Similar | 57,978 | Syntax highlighting, Git diff integration |
| eza | ls | Similar | 21,034 | Colors, icons, Git status per file, tree view |
| zoxide | cd | N/A | 35,254 | Jump to any visited dir with partial name: z proj |
| delta | diff / git diff | Similar | 22,000+ | Syntax-highlighted, line-numbered diffs |
The ripgrep benchmark (per Andrew Gallant's methodology at burntsushi.net, the tool's author) on the Linux kernel source tree (~75,000 files): ripgrep at 0.082 seconds vs grep at 0.671 seconds. ripgrep uses CPU SIMD instructions to scan 16–32 bytes per cycle for literal patterns and parallelizes across all CPU cores using a work-stealing thread pool.
fd's benchmark (from its GitHub README, measured by sharkdp): on a home directory with ~4 million files, fd is 13× faster than find -iname and 23× faster than find -iregex. It achieves this via parallel directory traversal and early exclusion of .gitignore-listed paths before opening them.
# Installation (Ubuntu/Debian)
sudo apt install ripgrep fd-find bat eza
# On Debian/Ubuntu, fd installs as fdfind (name conflict with fdclone) and bat as batcat
# Create aliases: alias fd='fdfind'; alias bat='batcat'
# Usage examples
rg "TODO" ./src # Find TODOs in ./src (respects .gitignore)
rg -t ts "useState" # Search only .ts files
rg -F "console.log" --hidden # Fixed-string match (-F), include hidden files
fd ".env" # Find files named .env (case-insensitive)
fd -e ts -E node_modules # TypeScript files, excluding node_modules
fd -t d "components" # Find directories named components
bat src/index.ts # View with syntax highlighting
bat -l json config.json # Force language detection
eza -la --git # ls -la with Git status per file
eza --tree --level=2 # Directory tree, 2 levels deep
# Shell aliases to replace classic tools
echo 'alias grep="rg"' >> ~/.bashrc
echo 'alias fd="fdfind"' >> ~/.bashrc
echo 'alias cat="batcat"' >> ~/.bashrc
echo 'alias ls="eza"' >> ~/.bashrc
systemd and Service Management
# systemctl — manage services
systemctl start nginx # Start a service
systemctl stop nginx # Stop a service
systemctl restart nginx # Restart (stop + start)
systemctl reload nginx # Reload config without downtime (if supported)
systemctl enable nginx # Start at boot
systemctl disable nginx # Don't start at boot
systemctl enable --now nginx # Enable and start immediately
systemctl status nginx # Show status, recent log lines
systemctl is-active nginx # Returns "active" or "inactive" (scriptable)
systemctl list-units --type=service --state=running # All running services
# Logs
journalctl -u nginx # All logs for nginx
journalctl -u nginx -f # Follow live
journalctl -u nginx --since "1 hour ago" # Recent logs
journalctl -p err # Only error-level messages
journalctl --disk-usage # How much disk logs are using
journalctl --vacuum-size=500M # Delete old logs to stay under 500MB
Frequently Asked Questions
What are the most important Linux commands for developers?
The core set you need daily: ls/eza (list), cd/zoxide (navigate), grep/rg (search), find/fd (find files), ps/htop (processes), kill (stop processes), ssh (remote access), curl (HTTP), chmod/chown (permissions), and tar (archives). systemctl and journalctl are also essential for server management.
What percentage of developers use Linux?
According to the Stack Overflow Developer Survey 2024, Ubuntu alone is used by 27.7% of developers. WSL adds another 17.1%, Debian 9.8%, Arch Linux 8%, and Fedora 4.8%. Bash/shell scripting is used by 49% of all developers per the 2025 survey. Linux powers 96.3% of top web servers globally (W3Techs 2024), so even developers not running Linux locally work with it in production.
Is ripgrep faster than grep?
Yes — 8–10× faster on the Linux kernel source tree (~75,000 files) per Andrew Gallant's benchmarks at burntsushi.net. ripgrep uses SIMD instructions to scan 16–32 bytes per CPU cycle for literal patterns, parallelizes across all cores, and skips binary files and .gitignore-listed paths before opening them. grep still wins for piped stdin processing and strict POSIX compliance requirements.
What is the difference between kill -9 and kill -15?
kill -15 sends SIGTERM, requesting graceful shutdown — the process can catch it, flush buffers, close database connections, and exit cleanly. kill -9 sends SIGKILL, enforced immediately by the kernel with no chance for cleanup. Always send SIGTERM first and wait; escalate to SIGKILL only if the process is unresponsive. Jumping straight to -9 risks data corruption.
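You can watch the difference yourself with a small sketch: a child shell that traps SIGTERM and exits cleanly. The loop and timings here are illustrative only:

```shell
# Child installs a TERM trap, then loops; we shut it down gracefully
bash -c 'trap "echo cleanup done; exit 0" TERM; while :; do sleep 0.2; done' &
pid=$!
sleep 0.5                     # give the child time to install its trap
kill -15 "$pid"               # SIGTERM — the trap runs, prints "cleanup done", exits 0
status=0; wait "$pid" || status=$?
echo "exit status: $status"   # 0 with -15; kill -9 would give 137 and skip the trap
```

SIGKILL cannot be trapped at all, which is exactly why the cleanup code never gets a chance to run.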
What does sudo !! do in Linux?
!! expands to the last command in your shell history. So sudo !! re-runs your previous command prefixed with sudo — the classic fix when you forget to use sudo. If systemctl restart nginx fails with a permission error, run sudo !! and it executes sudo systemctl restart nginx.
How do I find which process is using a port on Linux?
Use ss -tlnp | grep :3000 (replace 3000 with your port). Alternatively, lsof -i :3000 shows the process name, PID, and user. ss is the modern replacement for netstat and reads directly from the kernel, so it is always accurate. netstat parses /proc files and can show stale data.
What is the difference between > and >> in Linux shell?
> redirects stdout to a file, overwriting it if it exists. >> appends to the file. For stderr: 2> redirects stderr, 2>> appends stderr. To capture both stdout and stderr: command > output.log 2>&1. The 2>&1 means redirect file descriptor 2 (stderr) to wherever file descriptor 1 (stdout) currently points.
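A minimal demonstration of the four forms (the paths under /tmp are arbitrary scratch files):

```shell
# ls exits non-zero because /nonexistent is missing; || true keeps the script going
ls /tmp /nonexistent > /tmp/out.log 2> /tmp/err.log || true  # split the two streams
ls /tmp /nonexistent > /tmp/both.log 2>&1 || true            # stderr follows stdout into one file
echo "first"  > /tmp/demo.txt    # > truncates the file, then writes
echo "second" >> /tmp/demo.txt   # >> appends — demo.txt now has two lines
```

Note that order matters: `2>&1 > out.log` does NOT capture stderr, because stderr was duplicated before stdout was redirected.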
How do I run a Linux command in the background?
Append & to run in background: ./script.sh &. Use nohup command & to keep it running after terminal close. For processes that survive SSH disconnects, use tmux or screen — create a session, start your process, detach with Ctrl+B D, and reattach later with tmux attach.
Related Developer Tools
Pair this cheat sheet with the BytePane developer toolbox — regex tester, JSON formatter, base64 encoder, hash generator, and more. All tools run in-browser with no sign-up required.