Binary to Decimal Converter: Convert Binary Numbers Online
The Subnet Mask Problem
You're debugging a network issue. A colleague says the server can't reach the database because they're on different subnets. The CIDR notation in the config reads 192.168.1.0/24. The subnet mask is 255.255.255.0. To understand what that mask actually means — which IP addresses are allowed through, what the broadcast address is, how many hosts fit in the subnet — you need to understand binary.
The /24 means 24 bits are the network prefix. In binary: 11111111.11111111.11111111.00000000. Three octets of all-ones (255 each), one octet of all-zeros (0). The zero bits are host space — 8 bits of host space means 2⁸ = 256 addresses, minus 2 reserved (network and broadcast) = 254 usable host IPs. That calculation only makes sense in binary.
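You can sanity-check this arithmetic with Python's standard ipaddress module. A minimal sketch, using the 192.168.1.0/24 network from the scenario above:
import ipaddress
net = ipaddress.ip_network('192.168.1.0/24')
print(net.netmask)            # 255.255.255.0
print(net.num_addresses)      # 256 (2^8 addresses in the host space)
print(net.num_addresses - 2)  # 254 usable (network + broadcast reserved)
print(net.broadcast_address)  # 192.168.1.255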
Binary isn't just academic. File permissions (chmod 755), bitwise flags in low-level code, IEEE 754 floating-point representation, WebAssembly memory layout, TCP/IP packet headers — all of these demand binary literacy from working developers. This article covers the conversion math from first principles, the contexts where it matters, and the standard library calls for every common language.
Key Takeaways
- ▸Binary (base 2) uses only digits 0 and 1. Each digit's value is 2 raised to its position, starting at 0 from the right. Sum the positional values of all 1-bits to get the decimal equivalent.
- ▸8 bits = 1 byte = decimal range 0–255. This is why IPv4 octets, RGB channels, and ASCII character codes all max at 255.
- ▸Hex is compressed binary: one hex digit = 4 binary digits. Programmers use hex because it is vastly shorter while still mapping directly to bit patterns.
- ▸In JavaScript: parseInt('1101', 2) → 13. In Python: int('1101', 2) → 13. In Go: strconv.ParseInt("1101", 2, 64).
- ▸Bitwise operations are binary operations — AND (&), OR (|), XOR (^), NOT (~), and bit shifts (<<, >>) all operate on the binary representation of integers.
Positional Notation: The Math Behind Binary
Every number system uses positional notation — each digit's contribution is its face value multiplied by the base raised to its position (starting at 0 from the right). Decimal uses base 10; binary uses base 2.
In decimal, the number 149 means:
1 × 10² + 4 × 10¹ + 9 × 10⁰ = 1 × 100 + 4 × 10 + 9 × 1 = 100 + 40 + 9 = 149
In binary, the same value 149 is written as 10010101. To convert back to decimal, apply the same positional formula but with base 2:
1 0 0 1 0 1 0 1 ← binary digits
│ │ │ │ │ │ │ └─ position 0: 1 × 2⁰ = 1
│ │ │ │ │ │ └──── position 1: 0 × 2¹ = 0
│ │ │ │ │ └─────── position 2: 1 × 2² = 4
│ │ │ │ └────────── position 3: 0 × 2³ = 0
│ │ │ └───────────── position 4: 1 × 2⁴ = 16
│ │ └──────────────── position 5: 0 × 2⁵ = 0
│ └─────────────────── position 6: 0 × 2⁶ = 0
└────────────────────── position 7: 1 × 2⁷ = 128
───
149

The algorithm: read binary digits from right to left. If the digit is 1, add the corresponding power of 2. If it's 0, add nothing. Sum all the contributions.
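The algorithm transcribes directly into code. A minimal Python sketch (the built-in int(bits, 2), shown in the language reference below, does the same thing):
def binary_to_decimal(bits: str) -> int:
    # Walk the digits right to left; add 2^position for every 1-bit
    total = 0
    for position, digit in enumerate(reversed(bits)):
        if digit == '1':
            total += 2 ** position
    return total
binary_to_decimal('10010101')  # 149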
Short-Cut: Powers of 2 You Should Know
Memorizing the first 16 powers of 2 makes mental binary-to-decimal conversion practical. You don't need all of them — just the ones you encounter regularly.
| Position (n) | 2ⁿ | Where you see this value |
|---|---|---|
| 0 | 1 | Bit flag: off/on |
| 1 | 2 | Two states (binary digit) |
| 2 | 4 | UNIX permission group (read/write/execute = 4+2+1) |
| 3 | 8 | Bits in a byte, octal digits (0–7) |
| 4 | 16 | Hex digit range (0–F) |
| 5 | 32 | ASCII printable range start (space = 0x20) |
| 6 | 64 | 64-bit architecture, Base64 alphabet size |
| 7 | 128 | MSB of a byte, ASCII code points (0–127) |
| 8 | 256 | 1 byte — IPv4 octet, RGB channel, ASCII range |
| 10 | 1,024 | 1 KiB (kibibyte) |
| 16 | 65,536 | 2 bytes — TCP/UDP port range (0–65535) |
| 20 | 1,048,576 | 1 MiB |
| 24 | 16,777,216 | RGB color space (256³) |
| 32 | 4,294,967,296 | 4 bytes — IPv4 address space, 32-bit int range |
| 64 | 1.84 × 10¹⁹ | 8 bytes — 64-bit pointer, most modern integer types |
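Every row in the table is a single left shift, which is worth internalizing. A quick Python check:
# 1 << n moves a single 1-bit left by n positions, which equals 2**n
for n in (0, 8, 10, 16, 24, 32):
    print(n, 1 << n)
# 0 1
# 8 256
# 10 1024
# 16 65536
# 24 16777216
# 32 4294967296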
Step-by-Step Conversion Examples
Example 1: 1101 → 13
1 1 0 1
│ │ │ └─ position 0: 1 × 1 = 1
│ │ └──── position 1: 0 × 2 = 0
│ └─────── position 2: 1 × 4 = 4
└────────── position 3: 1 × 8 = 8
──
13

Example 2: 11111111 → 255 (one byte, all bits set)
1+2+4+8+16+32+64+128 = 255
or equivalently: 2⁸ - 1 = 256 - 1 = 255
This is the maximum value of a single byte.
It is:
- each 255 octet in the 255.255.255.0 subnet mask
- the max RGB channel value (rgb(255,255,255) = white)
- 0xFF in hex
- -1 in signed two's complement (same bit pattern)

Example 3: 11000000.10101000.00000001.00000001 → 192.168.1.1
IPv4 addresses are four 8-bit octets separated by dots. Each octet converts independently:

11000000 → 128+64 = 192
10101000 → 128+32+8 = 168
00000001 → 1
00000001 → 1

Result: 192.168.1.1 ← a typical home router default gateway
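In code, the same per-octet conversion is a one-liner. A Python sketch:
binary_ip = '11000000.10101000.00000001.00000001'
'.'.join(str(int(octet, 2)) for octet in binary_ip.split('.'))
# '192.168.1.1' (each octet converted independently)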
Example 4: 0111 1111 1111 1111 1111 1111 1111 1111 → 2,147,483,647
This 32-bit value is 0x7FFFFFFF — the maximum signed 32-bit integer. MSB (bit 31) = 0 → positive number in two's complement. Bits 0–30 all set → 2³¹ - 1 = 2,147,483,647. You'll see this value as Integer.MAX_VALUE in Java, INT_MAX in C/C++, and i32::MAX in Rust. Adding 1 causes integer overflow → -2,147,483,648 (or undefined behavior in C).
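Python integers never overflow, but you can simulate the 32-bit wraparound by masking. A sketch, with to_int32 as an illustrative helper name:
def to_int32(n: int) -> int:
    # Keep the low 32 bits, then reinterpret the MSB as the sign bit
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n & 0x80000000 else n
to_int32(0x7FFFFFFF)      # 2147483647 (INT_MAX)
to_int32(0x7FFFFFFF + 1)  # -2147483648 (the wraparound)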
Where Binary Shows Up in Real Development Work
IPv4 Addresses and Subnet Masks
Every IPv4 address is a 32-bit binary number. Per IANA (Internet Assigned Numbers Authority), the entire IPv4 address space contains exactly 2³² = 4,294,967,296 addresses — a count that falls directly out of the 32-bit width. Subnet masks use consecutive 1-bits to define the network portion:
# CIDR notation and its binary subnet mask
/8 → 11111111.00000000.00000000.00000000 → 255.0.0.0
/16 → 11111111.11111111.00000000.00000000 → 255.255.0.0
/24 → 11111111.11111111.11111111.00000000 → 255.255.255.0
/25 → 11111111.11111111.11111111.10000000 → 255.255.255.128
/30 → 11111111.11111111.11111111.11111100 → 255.255.255.252
# Usable hosts in /25: 2^7 - 2 = 126
# Usable hosts in /30: 2^2 - 2 = 2 (point-to-point links)
# JavaScript: check if IP is in subnet
function inSubnet(ip, network, prefix) {
const ipBits = ip.split('.').map(Number)
.reduce((acc, octet) => (acc << 8) | octet, 0) >>> 0
const netBits = network.split('.').map(Number)
.reduce((acc, octet) => (acc << 8) | octet, 0) >>> 0
const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0
return (ipBits & mask) === (netBits & mask)
}
inSubnet('192.168.1.100', '192.168.1.0', 24) // true
inSubnet('192.168.2.1', '192.168.1.0', 24) // false

Unix File Permissions
Unix permission bits are a 12-bit field. The lower 9 bits control read (4), write (2), and execute (1) for owner, group, and others. According to the POSIX specification (IEEE Std 1003.1), these values are deliberately chosen to be non-overlapping powers of 2 so they can be combined with bitwise OR and tested with bitwise AND:
# chmod 755 in binary
7 = 111 (owner: read=4 + write=2 + execute=1)
5 = 101 (group: read=4 + execute=1)
5 = 101 (others: read=4 + execute=1)
Binary: 111 101 101 = 0b111101101 = decimal 493 = octal 0755
# ls -la output Octal Binary
# -rwxr-xr-x 0755 111 101 101
# -rw-r--r-- 0644 110 100 100
# -rw------- 0600 110 000 000
# drwxrwxr-x 0775 111 111 101
# Python: check if file is executable by owner
import stat
import os
mode = os.stat('/usr/bin/python3').st_mode
is_executable = bool(mode & stat.S_IXUSR)  # 0o100 = 0b001000000
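The other half of the POSIX claim above, composing a mode with bitwise OR, looks like this (a sketch using the stdlib stat constants):
import stat
# Compose 0o755 by OR-ing non-overlapping permission bits
mode_755 = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR  # owner: rwx = 7
            | stat.S_IRGRP | stat.S_IXGRP               # group: r-x = 5
            | stat.S_IROTH | stat.S_IXOTH)              # others: r-x = 5
oct(mode_755)  # '0o755'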
Bitwise Operations in Application Code

Bitwise operations are essential in feature flags, permissions systems, color manipulation, and protocol parsing. The Stack Overflow Developer Survey 2025 reports that 38% of developers regularly work with systems programming or embedded code, where bitwise operations are routine:
// Feature flags packed into a single integer — common in low-memory embedded systems
const FEATURE_DARK_MODE = 0b00000001 // bit 0
const FEATURE_ANALYTICS = 0b00000010 // bit 1
const FEATURE_BETA_API = 0b00000100 // bit 2
const FEATURE_ADMIN_PANEL = 0b00001000 // bit 3
// User has dark mode + beta API:
let userFlags = FEATURE_DARK_MODE | FEATURE_BETA_API // 0b00000101 = 5
// Check if dark mode is enabled:
const hasDarkMode = (userFlags & FEATURE_DARK_MODE) !== 0 // true
// Toggle analytics:
userFlags ^= FEATURE_ANALYTICS // XOR flips the bit → 0b00000111 = 7
// Disable beta API:
userFlags &= ~FEATURE_BETA_API // AND with NOT → clears bit 2 → 0b00000101 = 5
// How many features are enabled? (Hamming weight / popcount)
const popcount = (n: number) => {
let count = 0
while (n) { count += n & 1; n >>>= 1; }
return count
}
popcount(userFlags) // 2

IEEE 754 Floating-Point: Binary Beneath the Decimal
The reason 0.1 + 0.2 !== 0.3 in JavaScript (and every other language using IEEE 754) is binary. The IEEE 754-2008 standard defines a 64-bit double-precision float as 1 sign bit + 11 exponent bits + 52 mantissa bits. The decimal fraction 0.1 cannot be represented exactly in binary — the closest binary approximation is 0.100000000000000005551115… which accumulates error in addition:
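You can inspect those three fields directly with Python's struct module. A sketch; float_bits is an illustrative helper, not a stdlib function:
import struct
def float_bits(x: float) -> str:
    # Reinterpret the 8 bytes of a double as one 64-bit unsigned integer
    (raw,) = struct.unpack('>Q', struct.pack('>d', x))
    bits = f'{raw:064b}'
    return f'{bits[0]} | {bits[1:12]} | {bits[12:]}'  # sign | exponent | mantissa
float_bits(0.1)
# '0 | 01111111011 | 1001100110011001100110011001100110011001100110011010'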
// 0.1 in binary (64-bit IEEE 754)
// Sign: 0 Exponent: 01111111011 Mantissa: 1001100110011001... (repeating)
//
// Just like 1/3 = 0.333... repeating in decimal,
// 1/10 = 0.0001100110011... repeating in binary.
// The classic JavaScript gotcha:
0.1 + 0.2 // 0.30000000000000004 — NOT 0.3
// Safe comparison: use tolerance (epsilon)
const EPSILON = Number.EPSILON // 2.220446049250313e-16
Math.abs(0.1 + 0.2 - 0.3) < EPSILON // true
// For financial calculations, use integer arithmetic:
// Store $12.99 as integer cents: 1299
// Never store currency as floating-point
const price = 1299 // cents
const tax = Math.round(price * 0.0825) // 107 cents
const total = price + tax // 1406 cents = $14.06
// Or use a decimal library: npm install decimal.js
import Decimal from 'decimal.js'
new Decimal('0.1').plus('0.2').toString() // '0.3' — exact

Binary in TCP/IP Protocol Headers
Network protocol headers are densely packed binary structures. The TCP header, defined in RFC 793, packs port numbers, sequence numbers, flags, and window size into a minimum 20-byte binary field. Parsing network packets requires understanding exactly which bits represent which fields:
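Before the flag bits, the first four header bytes are the source and destination ports, two big-endian 16-bit integers (hence the 0-65535 port range). A Python sketch with hypothetical sample bytes:
import struct
# 20 hypothetical header bytes: src port 443, dst port 54321, SYN flag set
raw = bytes.fromhex('01bbd431' + '00' * 8 + '50027210' + '00' * 4)
src_port, dst_port = struct.unpack('!HH', raw[:4])
# src_port = 443, dst_port = 54321
# raw[12:14] = 0x5002, and 0x5002 & 0x1FF = 2 = SYN (see parse_tcp_flags below)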
# TCP control flags: 9 bits, the low 9 bits of the 16-bit field at header bytes 12-13:
# Bit 8: NS (ECN-nonce concealment)
# Bit 7: CWR (Congestion Window Reduced)
# Bit 6: ECE (ECN-Echo)
# Bit 5: URG (Urgent pointer field significant)
# Bit 4: ACK (Acknowledgment field significant)
# Bit 3: PSH (Push function)
# Bit 2: RST (Reset the connection)
# Bit 1: SYN (Synchronize sequence numbers)
# Bit 0: FIN (No more data from sender)
# A SYN packet (TCP handshake first step): flags = 0b000000010 = 2
# A SYN-ACK packet: flags = 0b000010010 = 18
# A FIN-ACK packet: flags = 0b000010001 = 17
# Python: parse TCP flags from raw packet bytes
import struct
def parse_tcp_flags(raw_header: bytes) -> dict:
# Unpack bytes 12-13 of TCP header (data offset + flags)
data_offset_flags = struct.unpack('!H', raw_header[12:14])[0]
flags = data_offset_flags & 0x1FF # mask lower 9 bits
return {
'FIN': bool(flags & 0x001),
'SYN': bool(flags & 0x002),
'RST': bool(flags & 0x004),
'PSH': bool(flags & 0x008),
'ACK': bool(flags & 0x010),
'URG': bool(flags & 0x020),
}Binary ↔ Decimal Conversion Code in Every Language
JavaScript / TypeScript
// Binary string → decimal number
parseInt('1101', 2) // 13
parseInt('11111111', 2) // 255
parseInt('10010101', 2) // 149
parseInt('0b1101', 2) // 0 — parseInt does NOT handle the 0b prefix (it stops at 'b')
Number('0b1101') // 13 — Number() does understand binary string literals
// Decimal number → binary string
(13).toString(2) // '1101'
(255).toString(2) // '11111111'
(149).toString(2) // '10010101'
// Pad to fixed width (e.g., always 8 bits for a byte display)
(13).toString(2).padStart(8, '0') // '00001101'
(255).toString(2).padStart(8, '0') // '11111111'
// For 64-bit values, use BigInt (Number only safely handles 53 bits)
BigInt('0b' + '1'.repeat(53)).toString() // OK for 53-bit numbers
parseInt('1'.repeat(54), 2) // UNSAFE — beyond Number.MAX_SAFE_INTEGER
// Correct 64-bit approach:
const binary64 = '1'.repeat(64)
BigInt('0b' + binary64) // 18446744073709551615n
// Convert IPv4 address to binary representation
function ipToBinary(ip: string): string {
return ip.split('.').map(octet =>
    parseInt(octet, 10).toString(2).padStart(8, '0')
).join('.')
}
ipToBinary('192.168.1.1')
// '11000000.10101000.00000001.00000001'

Python
# Binary string → int
int('1101', 2) # 13
int('11111111', 2) # 255
int('0b1101', 2) # 13 — 0b prefix handled
int('0b1101', 0) # 13 — base 0 auto-detects 0b prefix
# Binary literal → int (evaluated at parse time)
value = 0b1101 # int 13
# int → binary string
bin(13) # '0b1101'
bin(13)[2:] # '1101' — strip 0b prefix
f'{13:b}' # '1101'
f'{13:08b}' # '00001101' — zero-padded to 8 chars
f'{255:08b}' # '11111111'
f'{149:08b}' # '10010101'
# bytes object → binary string
data = b'\xde\xad'  # two bytes: 0xDE, 0xAD
' '.join(f'{byte:08b}' for byte in data)
# '11011110 10101101'
# int → bytes (big-endian)
(255).to_bytes(4, 'big')  # b'\x00\x00\x00\xff'