Linux Commands Cheat Sheet: Essential Terminal Commands (2026)
Key Takeaways
- Per the Stack Overflow Developer Survey 2025, 47.2% of professional developers work with Linux as their primary OS — the highest share of any platform, and growing.
- ripgrep (rg) is 8–35× faster than GNU grep in benchmarks by Andrew Gallant (BurntSushi) — and respects .gitignore by default. It should be your primary code search tool.
- fd replaces find with 23× better performance on warm-cache benchmarks, a human-friendly syntax, and parallel execution — published by the fd-find project on GitHub.
- The pipe (|) is Linux's most powerful primitive — chaining grep, awk, sort, and uniq replaces entire scripts that would take hours to write in another language.
- In benchmarks by David Lyness, sed outperformed awk and Python by 42 seconds over 10 iterations on two-billion-character base64 parsing — use the right tool for each task.
Why Terminal Fluency Matters More Than Ever in 2026
The Stack Overflow Developer Survey 2025 (90,000+ respondents) found that Linux is now the primary development OS for 47.2% of professional developers — up from 40.2% in 2020. This shift is driven by Docker adoption, cloud-native development workflows, and the dominance of Linux-based CI/CD infrastructure. Even developers on macOS spend the majority of their work time in terminals that behave like Linux.
Yet the same survey reveals a persistent gap: a significant portion of developers who use Linux daily describe themselves as “terminal users” rather than “power users” — comfortable with basic navigation but slow on text processing, process management, and pipeline composition. This cheat sheet bridges that gap.
The commands here are organized by workflow, not alphabetically. We cover file operations, text processing (the grep/sed/awk triad), process management, networking, permissions, SSH, and modern Rust-based alternatives that have become standard in high-performance developer environments. Every command includes the flags you actually use, not every flag in the man page.
File and Directory Operations
# Navigate directories
cd /var/log # absolute path
cd .. # up one level
cd - # switch to previous directory
cd ~ # home directory

# List files
ls -la # long format, including hidden (dot) files
ls -lh # human-readable sizes (KB, MB, GB)
ls -lt # sort by modification time, newest first
ls -lS # sort by size, largest first
ls --color=auto # colorized output (usually aliased)

# Modern alternative: exa (Rust-based)
exa --long --git --header # shows git status per file
exa --tree --level=2 # directory tree, 2 levels deep

# Show current directory path
pwd

# Change to directory and run command
cd /app && npm start

# Navigate with fzf (fuzzy finder — install separately)
cd $(find . -type d | fzf) # fuzzy-select a directory
# Create
mkdir -p /path/to/nested/dir # -p creates parents as needed
touch file.txt # create empty file or update timestamp
touch {a,b,c}.txt # brace expansion: creates a.txt, b.txt, c.txt
# Copy
cp source.txt dest.txt
cp -r source_dir/ dest_dir/ # recursive (directory)
cp -a source/ dest/ # -a preserves permissions, timestamps, symlinks
cp -v source.txt dest.txt # verbose — shows what was copied
# Move / Rename
mv old.txt new.txt # rename
mv file.txt /tmp/ # move to directory
mv -n source dest # -n: never overwrite existing file
# Delete
rm file.txt
rm -r directory/ # recursive delete
rm -rf directory/ # force (skip confirmation, ignore missing)
# WARNING: rm -rf with a typo can delete critical data. Consider trash-cli.
# Safer delete alternatives
trash file.txt # moves to trash (apt install trash-cli)
# Hard links and symlinks
ln -s /path/to/real /path/to/link # symbolic link
ln /path/to/real /path/to/hard # hard link (same inode, different name)

# Print full file
cat file.txt # concatenate to stdout
cat -n file.txt # with line numbers

# Modern alternative: bat (cat with syntax highlighting)
bat file.txt # syntax highlighting, git diff markers, line numbers
bat --plain file.txt # no decorations (pipe-safe)

# Paged reading
less file.txt # scroll with j/k, search with /, q to quit
# less is better than more — it loads incrementally (huge files)

# First / last lines
head -n 20 file.txt # first 20 lines
tail -n 20 file.txt # last 20 lines
tail -f /var/log/nginx/access.log # follow (stream new lines in real time)
tail -F logfile.log # follow, even if file is rotated/recreated

# File metadata
file image.png # detect file type by content (not extension)
wc -l file.txt # line count
wc -c file.txt # byte count
wc -w file.txt # word count
stat file.txt # full metadata: size, inode, permissions, timestamps
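A pattern that combines the follow mode above with filtering — a minimal sketch, assuming GNU grep (--line-buffered keeps matches flowing immediately if you pipe the output further):

tail -F /var/log/nginx/error.log | grep --line-buffered "upstream" # live-filter a rotating log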
Finding Files: find, fd, and locate
The standard find command is powerful but has a notoriously verbose syntax. The fd tool (from the fd-find project on GitHub) solves this with a simpler interface and dramatically better performance — the fd GitHub README documents 23× faster warm-cache performance versus find on real codebases, using parallel execution internally.
# find — GNU standard, always available
find . -name "*.log" # files ending in .log
find . -name "*.log" -mtime -7 # modified in last 7 days
find . -type f -size +100M # files over 100MB
find . -type f -name "*.py" -exec wc -l {} \; # count lines in all .py files
find /var -name "*.log" -delete # find and delete
find . -type f -not -path "./.git/*" # exclude a directory (e.g., .git)
# fd — faster, respects .gitignore, regex support
fd ".log$" # files matching pattern (regex)
fd -e py # files with extension py
fd -t f --changed-within 7d # files modified last 7 days
fd -S +100m # files over 100MB (--size)
fd -x wc -l # parallel exec with {}
fd . --exec-batch ls -lh # batch exec
# fd ignores .git/ and respects .gitignore by default
# locate — uses pre-built database, instant results
locate nginx.conf # instant (database search, not filesystem)
sudo updatedb # update the database (usually runs via cron)
Text Processing: grep, sed, awk, and Pipelines
The grep/sed/awk trinity is Linux's core text-processing toolkit. Each solves a different problem:
- grep — find lines matching a pattern (read-only, no transformation)
- sed — stream editor: substitute, delete, insert lines
- awk — field-based text processing with programming constructs (conditions, arithmetic, output formatting)
Per benchmarks by David Lyness (published on blog.davidlyness.com), sed outperformed awk by 42 seconds over 10 iterations on two billion base64 characters — for simple substitution tasks, sed is the right choice. AWK wins when you need multi-field processing, counting, or conditional logic.
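To make that division of labor concrete, a minimal sketch against a hypothetical app.log (the field layout is assumed):

sed 's/WARN/WARNING/g' app.log > normalized.log # pure substitution: sed
awk '$3 == "ERROR" {n++} END {print n, "errors"}' app.log # field test + counting: awk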
grep "error" access.log # lines containing "error" grep -i "error" access.log # case-insensitive grep -n "error" access.log # with line numbers grep -c "error" access.log # count matching lines grep -v "200" access.log # INVERT: lines NOT matching grep -r "TODO" ./src # recursive search in directory grep -rl "TODO" ./src # only filenames, not content grep -A 3 "error" access.log # 3 lines AFTER match (context) grep -B 3 "error" access.log # 3 lines BEFORE match grep -C 3 "error" access.log # 3 lines before AND after grep -E "error|warn" access.log # extended regex (ERE) — same as egrep grep -F "exact string" file # fixed string (no regex — faster) grep -P "(?<=error: )\w+" file # Perl-compatible regex (PCRE) grep -o "\d+" file # print only the matched part, not whole line grep --color=auto "error" file # highlight match # ripgrep — 8-35x faster, respects .gitignore rg "error" . # recursive by default rg -i "error" . # case-insensitive rg --type py "import" . # only Python files rg -l "TODO" . # filenames only rg -g "!*.test.js" "console.log" . # exclude test files rg -P "(?<=\/\/).*TODO" . # PCRE2 patterns rg --stats "error" . # show search statistics
# Substitute (most common sed use)
sed 's/old/new/' file.txt # replace first occurrence per line
sed 's/old/new/g' file.txt # replace ALL occurrences per line
sed 's/old/new/gi' file.txt # case-insensitive, all occurrences
sed -i 's/old/new/g' file.txt # in-place edit (modifies file)
sed -i.bak 's/old/new/g' file.txt # in-place with backup (.bak created)

# Multiple substitutions
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt

# Delete lines
sed '/pattern/d' file.txt # delete lines matching pattern
sed '5d' file.txt # delete line 5
sed '5,10d' file.txt # delete lines 5 to 10
sed '/^$/d' file.txt # delete empty lines

# Print specific lines
sed -n '5,10p' file.txt # print lines 5–10 only (-n suppresses default output)
sed -n '/start/,/end/p' file.txt # print from "start" to "end"

# Insert and append
sed '3i\inserted line' file.txt # insert before line 3
sed '3a\appended line' file.txt # append after line 3

# Real-world example: remove ANSI color codes from a log
sed 's/\x1B\[[0-9;]*m//g' colored.log > plain.log
# awk divides each line into fields: $1, $2, ..., $NF (last field)
# Default field separator: whitespace
# Print specific columns from CSV / log
awk '{print $1, $4}' access.log # print columns 1 and 4
awk -F',' '{print $2}' data.csv # comma-separated, print column 2
awk -F: '{print $1}' /etc/passwd # print usernames from passwd
# Conditional processing
awk '$3 > 1000 {print $0}' file # print lines where column 3 > 1000
awk '/error/ {print $0}' file # print lines matching "error"
awk '!/^#/ {print}' config.conf # skip comment lines
# Computation
awk '{sum += $5} END {print "Total:", sum}' file # sum column 5
awk 'END {print NR}' file # count lines (like wc -l)
awk 'NR==5' file # print line 5
# Multiple actions
awk 'BEGIN {print "Start"} /error/ {count++} END {print count " errors"}' app.log
# Real-world: top 10 IPs from Nginx access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10
# Real-world: disk usage report — filter filesystems over 80% full
df -h | awk '$5+0 > 80 {print $0}' # $5+0 coerces "82%" to the number 82
Pipeline Patterns: Combining Commands
The Unix pipe (|) connects the stdout of one command to the stdin of the next. Mastery of pipes turns individual commands into powerful one-liners. Here are the pipeline patterns senior engineers use regularly:
# Most common 404 URLs
cat access.log | grep " 404 " | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
# Count unique visitors by IP
awk '{print $1}' access.log | sort -u | wc -l
# Find the largest files in a directory
du -sh ./* | sort -rh | head -10
# -s: summarize, -h: human-readable, -r: reverse, -h on sort: human sort (MB/GB)
# Extract all email addresses from a file
grep -oE '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' file.txt | sort -u
# Find processes using the most CPU
ps aux --sort=-%cpu | head -10
# Monitor a command's output in real time
watch -n 2 'df -h' # refresh every 2 seconds
# Count occurrences of each log level
grep -oE '(ERROR|WARN|INFO|DEBUG)' app.log | sort | uniq -c | sort -rn
# Join two files on a common field
join -t',' -1 1 -2 1 <(sort file1.csv) <(sort file2.csv)
# Remove duplicate lines while preserving order
awk '!seen[$0]++' file.txt
# Tee: write to file AND continue piping
cat access.log | tee raw.log | grep "error" | wc -l
Process Management
# View processes
ps aux # all processes: user, PID, CPU%, MEM%, command
ps aux | grep nginx # filter for specific process
pgrep nginx # get PID(s) of named process
pgrep -a nginx # PID and full command line

# Interactive process viewers
top # real-time process list (q to quit)
htop # enhanced top with mouse support (apt install htop)
btop # modern resource monitor (shows graphs)

# Kill processes
kill 1234 # send SIGTERM (graceful shutdown) to PID 1234
kill -9 1234 # send SIGKILL (immediate) — use as last resort
kill -HUP 1234 # SIGHUP: reload config without restart (common for nginx)
pkill nginx # kill all processes named "nginx"
pkill -9 -f "python app.py" # kill by matching full command string
killall nginx # similar to pkill

# Background and foreground
command & # run command in background
jobs # list background jobs
fg %1 # bring job 1 to foreground
bg %1 # resume job 1 in background
Ctrl+Z # suspend foreground job → run bg %1 to resume
nohup command & # run in background, immune to hangup (won't die when SSH closes)
disown %1 # detach job from shell (it continues after shell closes)

# Process priority
nice -n 10 command # run with lower priority (0=normal, 19=lowest)
renice -n 15 -p 1234 # change priority of running process
Disk Usage and System Information
# Disk space
df -h # filesystem disk usage (human-readable)
df -h / # just the root filesystem
du -sh /var/log # total size of a directory
du -sh ./* # size of each item in current directory
du -sh * | sort -rh | head -20 # top 20 largest items, sorted

# Interactive disk usage: ncdu
ncdu /var # TUI browser for disk usage (apt install ncdu)

# Memory
free -h # RAM usage: total, used, free, buffers, cache
vmstat 1 5 # virtual memory stats, 5 samples 1 second apart

# CPU and system
lscpu # CPU architecture, cores, threads, speed
nproc # number of available processors
uptime # system uptime and load averages
# Load averages: 1m, 5m, 15m — compare to nproc. >1.0 per core = saturated

# System info
uname -a # kernel version, architecture
hostnamectl # hostname, OS, kernel info
lsb_release -a # Linux distribution info
cat /etc/os-release # distribution info (universal)

# Hardware
lshw -short # hardware summary
lsblk # block devices (disks, partitions)
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT # custom columns
fdisk -l # partition table (requires root)
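To apply the load-average rule of thumb above without mental math, a one-liner sketch (Linux-specific — it assumes /proc/loadavg exists):

awk -v cores="$(nproc)" '{printf "1-min load per core: %.2f\n", $1/cores}' /proc/loadavg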
Networking Commands
# Network interfaces
ip addr # show IP addresses (modern replacement for ifconfig)
ip addr show eth0 # specific interface
ip link show # interface status (up/down)
ip route # routing table
ifconfig # legacy — not installed by default on modern Linux
# DNS
nslookup example.com # DNS lookup
dig example.com # detailed DNS query
dig +short example.com # just the IP
dig MX example.com # mail exchange records
dig @8.8.8.8 example.com # query specific DNS server (Google)
host example.com # simple hostname → IP
# Connectivity testing
ping -c 5 8.8.8.8 # 5 packets to Google DNS
ping -i 0.2 8.8.8.8 # fast ping (0.2s interval)
traceroute google.com # trace network hops to destination
mtr google.com # live traceroute (combines ping + traceroute)
# Ports and connections
ss -tulnp # listening ports (modern replacement for netstat)
# -t: TCP, -u: UDP, -l: listening, -n: numeric, -p: show process
netstat -tulnp # legacy equivalent
lsof -i :8080 # what's using port 8080
lsof -i -P -n | grep LISTEN # all listening ports with process names
# HTTP testing
curl -I https://example.com # HTTP headers only (HEAD request)
curl -v https://example.com # verbose: show request/response headers
curl -o /dev/null -w "%{http_code}" https://example.com # just status code
curl -s -f https://api.example.com/health && echo "OK" # health check
# File transfer
wget -O output.tar.gz https://example.com/file.tar.gz # download, saving under a chosen filename
scp user@host:/remote/path ./local/ # secure copy from remote
rsync -avz user@host:/remote/ ./local/ # sync: archive mode, verbose, compressed transfer
File Permissions and Ownership
Linux permissions follow a 9-bit model: owner (rwx), group (rwx), others (rwx). Each set of three bits maps to read (4), write (2), and execute (1). The octal sum determines the permission number.
| Octal | Symbolic | Meaning | Typical Use |
|---|---|---|---|
| 644 | -rw-r--r-- | Owner: rw. Group+others: r | Config files, static assets |
| 755 | -rwxr-xr-x | Owner: rwx. Group+others: rx | Executables, web directories |
| 600 | -rw------- | Owner: rw. Group+others: none | SSH private keys, secrets |
| 700 | -rwx------ | Owner: rwx. Group+others: none | User scripts, private dirs |
| 777 | -rwxrwxrwx | Full access for everyone | Avoid in production — security risk |
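To verify the octal arithmetic from the table, a quick check (assumes GNU stat; report.sh is a placeholder file):

# -rwxr-xr-- → owner 4+2+1=7, group 4+0+1=5, others 4+0+0=4 → 754
chmod 754 report.sh
stat -c '%a %A' report.sh # prints: 754 -rwxr-xr--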
# chmod — change permissions
chmod 755 script.sh # octal notation
chmod +x script.sh # add execute bit for all (u+x, g+x, o+x)
chmod u+x,g-w,o-rwx script.sh # symbolic: u=user, g=group, o=others
chmod -R 644 ./static/ # recursive

# chown — change owner
chown user:group file.txt # change owner and group
chown -R www-data:www-data /var/www/html # recursive (web server)
chown :developers file.txt # change group only

# Check permissions
ls -la file.txt # shows permissions in ls output
stat file.txt # full permission detail including octal

# umask — default permissions for new files
umask # show current umask (e.g., 022)
# New file permission = 666 - umask. New dir = 777 - umask.
# umask 022 → files get 644, dirs get 755 (common default)

# Special permission bits
chmod u+s script # setuid: runs as owner (not caller) — use with caution
chmod g+s directory/ # setgid: new files inherit group of directory
chmod +t /tmp # sticky bit: only owner can delete their own files
Archive and Compression
# tar — tape archive (container format, no compression by itself)
tar -czf archive.tar.gz directory/ # create: compress with gzip
tar -cjf archive.tar.bz2 directory/ # create: compress with bzip2
tar -cJf archive.tar.xz directory/ # create: compress with xz (best ratio)
tar -cf archive.tar directory/ # create: no compression
tar -xzf archive.tar.gz # extract gzip tar
tar -xzf archive.tar.gz -C /target/ # extract to specific directory
tar -tf archive.tar.gz # list contents without extracting
tar -xzf archive.tar.gz specific/file # extract single file
# Quick memory trick: c=create, x=extract, t=list, z=gzip, j=bzip2, J=xz, f=file

# gzip / gunzip
gzip file.txt # compress → file.txt.gz (original deleted)
gzip -k file.txt # keep original
gunzip file.txt.gz # decompress
gzip -d file.txt.gz # same as gunzip
gzip -9 file.txt # maximum compression (slowest)

# zip / unzip
zip archive.zip file1 file2 # create zip
zip -r archive.zip directory/ # recursive
unzip archive.zip # extract
unzip -l archive.zip # list contents
unzip archive.zip -d /target/ # extract to directory

# Compression comparison (typical ratios on source code)
# gzip: fast, ~60% size reduction — standard for tarballs
# bzip2: slower, ~65% reduction — legacy
# xz: slowest, ~70-75% reduction — best for distribution packages
# zstd: fast AND ~65-70% reduction — modern choice (apt install zstd)
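GNU tar (1.31+) can drive zstd directly; a minimal sketch:

tar --zstd -cf archive.tar.zst directory/ # create zstd-compressed archive
tar --zstd -xf archive.tar.zst # extract
zstd -19 file.txt # standalone: level 19 (default 3) → file.txt.zst, source kept by default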
Shell Productivity and Shortcuts
# History
history # list command history
!! # repeat last command
!grep # repeat last command starting with "grep"
!42 # repeat command 42 from history
Ctrl+R # reverse-i-search: fuzzy search history (type to filter)
history | grep docker # search history for "docker"
# Keyboard shortcuts
Ctrl+L # clear screen (same as clear)
Ctrl+C # interrupt/kill current command
Ctrl+D # EOF / logout
Ctrl+A # jump to start of line
Ctrl+E # jump to end of line
Ctrl+W # delete word before cursor
Ctrl+U # delete from cursor to start of line
Ctrl+K # delete from cursor to end of line
Alt+. # insert last argument of previous command
# Aliases (put in ~/.bashrc or ~/.zshrc)
alias ll='ls -la'
alias la='ls -A'
alias gs='git status'
alias gp='git push'
alias dc='docker compose'
alias k='kubectl'
# Variable usage
MY_DIR="/var/log/nginx"
ls "$MY_DIR" # always quote variables
# Brace expansion
echo {a..z} # a b c ... z
echo {1..10} # 1 2 3 ... 10
mkdir -p /app/{frontend,backend,shared}/src # create multiple dirs
# Process substitution (Bash/Zsh only)
diff <(ls dir1) <(ls dir2) # compare directory listings without temp files
# Heredoc — multi-line input
cat <<EOF > config.txt
key1=value1
key2=value2
EOF
Modern Rust-Based Alternatives to Classic Tools
The Rust systems programming language has produced a wave of drop-in replacements for classic Unix tools that are significantly faster and more user-friendly. These are not toys — they are production tools used daily by engineering teams at major companies.
| Classic Tool | Modern Alternative | Key Advantage | GitHub Stars (2026) |
|---|---|---|---|
| grep | ripgrep (rg) | 8–35× faster, respects .gitignore, PCRE2 | 50k+ |
| find | fd | 23× faster warm cache, parallel, .gitignore | 35k+ |
| cat | bat | Syntax highlighting, git diff markers, paging | 50k+ |
| ls | eza (fork of exa) | Git status per file, icons, tree view | 12k+ |
| du | dust | Visual tree of disk usage, parallel | 8k+ |
| top/htop | btop | Full TUI with graphs, mouse support | 17k+ |
| sed/awk | sd | Simpler regex substitution syntax | 5k+ |
The ripgrep performance data comes from the ripgrep GitHub repository's benchmarks by Andrew Gallant (BurntSushi), who tested against GNU grep, ag (The Silver Searcher), and The Platinum Searcher on Linux with warm caches. Ripgrep won every benchmark, with 8× to 35× improvements depending on the pattern type. It now powers VS Code's built-in text search and the search features of several other editors and IDEs.
These tools are installable via package managers: apt install ripgrep fd-find bat on Ubuntu/Debian (note: fd installs as fdfind on Ubuntu). On macOS: brew install ripgrep fd bat eza.
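Because of that Ubuntu/Debian naming clash, the fd README suggests linking the binary to its usual name; a sketch, assuming ~/.local/bin is on your PATH:

mkdir -p ~/.local/bin
ln -s "$(command -v fdfind)" ~/.local/bin/fd # make `fd` available under its usual name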
See also our Git cheat sheet for version control commands that complement your terminal workflow, and the Docker cheat sheet for container management commands.
Frequently Asked Questions
What is the difference between grep, sed, and awk?
grep searches for pattern matches and outputs matching lines — read-only. sed transforms text: substitutions, deletions, insertions. awk is a full programming language for field-based text processing with arithmetic, conditionals, and output formatting. Rule of thumb: grep to find, sed to transform single patterns, awk when you need multi-field logic or computation. Per Linode's documentation, these three tools are the core of Linux text processing pipelines.
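A minimal side-by-side sketch (app.log and its field layout are hypothetical):

grep "ERROR" app.log # find: print matching lines
sed 's/ERROR/FAILURE/g' app.log # transform: rewrite matching text on the way through
awk '$2 == "ERROR" {n++} END {print n}' app.log # compute: count by field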
How do I search for text inside files recursively in Linux?
Use grep -r "pattern" /path/ for recursive search. For better performance on large codebases, use ripgrep (rg "pattern") — 8–35× faster than grep (benchmarked by BurntSushi on GitHub), respects .gitignore by default, and supports PCRE2 patterns with -P. Install with apt install ripgrep or brew install ripgrep.
What does chmod 755 mean?
chmod 755 sets permissions: owner gets read+write+execute (7 = 4+2+1), group gets read+execute (5 = 4+1), others get read+execute (5). Written symbolically: -rwxr-xr-x. Common for executable scripts and web server directories. Use chmod 644 for files that need to be readable but only writable by the owner, and chmod 600 for SSH private keys.
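The octal and symbolic forms are interchangeable; a quick sketch (deploy.sh is a placeholder):

chmod 755 deploy.sh # octal
chmod u=rwx,go=rx deploy.sh # symbolic — identical result
stat -c '%a %A' deploy.sh # → 755 -rwxr-xr-x (GNU stat)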
How do I kill a process in Linux?
Find the PID with ps aux | grep processname or pgrep processname. Then kill PID (SIGTERM, graceful). If unresponsive, kill -9 PID (SIGKILL, immediate). pkill processname kills by name without looking up PIDs. Use htop for an interactive process tree with kill functionality.
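A common escalation pattern — try SIGTERM first and only force-kill if the process survives (a sketch; $PID is assumed to hold the target PID):

kill "$PID" # graceful: SIGTERM
sleep 5 # give it time to clean up
kill -0 "$PID" 2>/dev/null && kill -9 "$PID" # kill -0 = existence check; escalate if still alive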
What is the difference between > and >> in Linux?
> redirects stdout to a file, overwriting existing content. >> appends stdout to a file. For stderr: 2> redirects errors, 2>> appends. To redirect both stdout and stderr: command > file.log 2>&1 or with Bash 4+: command &> file.log. Both create the file if it does not exist.
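A minimal demonstration (out.log is a scratch file; build.sh is hypothetical):

echo "first" > out.log # overwrite: out.log contains one line
echo "second" >> out.log # append: out.log now has two lines
./build.sh > build.log 2>&1 # stdout and stderr both captured in build.log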
How do I find large files in Linux?
find / -type f -size +100M -exec ls -lh {} \; lists files over 100MB. du -sh ./* | sort -rh | head -20 shows top 20 largest items in current directory. ncdu /var provides an interactive TUI browser. df -h shows filesystem-level disk usage.
What does the pipe | symbol do in Linux?
The pipe (|) connects the stdout of one command to the stdin of the next. cat access.log | grep "404" | awk '{print $7}' | sort | uniq -c | sort -rn prints 404 URLs ranked by frequency — five commands that together replace dozens of lines of application code. The shell executes each segment of a pipeline concurrently in its own subshell.
More Developer Tools & References
BytePane provides free, no-signup developer tools: JSON formatter, Base64 encoder, regex tester, color converter, URL encoder, and more. All run in your browser — no data sent to any server.
Git Cheat Sheet →