BytePane

Binary to Decimal Converter: Convert Binary Numbers Online

Number Systems · 14 min read

The Subnet Mask Problem

You're debugging a network issue. A colleague says the server can't reach the database because they're on different subnets. The config's CIDR notation reads 192.168.1.0/24, and the subnet mask is 255.255.255.0. To understand what that mask actually means — which IP addresses are allowed through, what the broadcast address is, how many hosts fit in the subnet — you need to read it in binary.

The /24 means 24 bits are the network prefix. In binary: 11111111.11111111.11111111.00000000. Three octets of all-ones (255 each), one octet of all-zeros (0). The zero bits are host space — 8 bits of host space means 2⁸ = 256 addresses, minus 2 reserved (network and broadcast) = 254 usable host IPs. That calculation only makes sense in binary.
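That host-count arithmetic fits in a few lines of Python — a quick sketch, not a networking library (the hypothetical `usable_hosts` helper is just for illustration; the stdlib `ipaddress` module does this robustly):

```python
def usable_hosts(prefix: int) -> int:
    """Usable host IPs in an IPv4 subnet: 2^(host bits) - 2.
    /31 and /32 have no network/broadcast reservation to subtract."""
    host_bits = 32 - prefix
    if host_bits < 2:              # /31 (point-to-point, RFC 3021) and /32
        return 2 ** host_bits
    return 2 ** host_bits - 2      # subtract network and broadcast addresses

print(usable_hosts(24))  # 254
print(usable_hosts(30))  # 2
```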

Binary isn't just academic. File permissions (chmod 755), bitwise flags in low-level code, IEEE 754 floating-point representation, WebAssembly memory layout, TCP/IP packet headers — all of these demand binary literacy from working developers. This article covers the conversion math from first principles, the contexts where it matters, and the standard library calls for every common language.

Key Takeaways

  • Binary (base 2) uses only digits 0 and 1. Each digit's value is 2 raised to its position, starting at 0 from the right. Sum the positional values of all 1-bits to get the decimal equivalent.
  • 8 bits = 1 byte = decimal range 0–255. This is why IPv4 octets, RGB channels, and ASCII character codes all max at 255.
  • Hex is compressed binary: one hex digit = 4 binary digits. Programmers use hex because it is vastly shorter while still mapping directly to bit patterns.
  • In JavaScript: parseInt('1101', 2) → 13. In Python: int('1101', 2) → 13. In Go: strconv.ParseInt("1101", 2, 64).
  • Bitwise operations are binary operations — AND (&), OR (|), XOR (^), NOT (~), and bit shifts (<<, >>) all operate on the binary representation of integers.

Positional Notation: The Math Behind Binary

Every number system uses positional notation — each digit's contribution is its face value multiplied by the base raised to its position (starting at 0 from the right). Decimal uses base 10; binary uses base 2.

In decimal, the number 149 means:

1 × 10² + 4 × 10¹ + 9 × 10⁰
= 1 × 100 + 4 × 10 + 9 × 1
= 100 + 40 + 9
= 149

In binary, the same value 149 is written as 10010101. To convert back to decimal, apply the same positional formula but with base 2:

1  0  0  1  0  1  0  1   ← binary digits
│  │  │  │  │  │  │  └─ position 0: 1 × 2⁰ =   1
│  │  │  │  │  │  └──── position 1: 0 × 2¹ =   0
│  │  │  │  │  └─────── position 2: 1 × 2² =   4
│  │  │  │  └────────── position 3: 0 × 2³ =   0
│  │  │  └───────────── position 4: 1 × 2⁴ =  16
│  │  └──────────────── position 5: 0 × 2⁵ =   0
│  └─────────────────── position 6: 0 × 2⁶ =   0
└────────────────────── position 7: 1 × 2⁷ = 128
                                              ───
                                              149

The algorithm: read binary digits from right to left. If the digit is 1, add the corresponding power of 2. If it's 0, add nothing. Sum all the contributions.
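That right-to-left algorithm translates almost word for word into code. A minimal Python sketch (the built-in `int(s, 2)` does the same job in one call):

```python
def binary_to_decimal(bits: str) -> int:
    total = 0
    # enumerate from the rightmost digit, so position matches the exponent
    for position, digit in enumerate(reversed(bits)):
        if digit == '1':
            total += 2 ** position  # add this position's power of 2
    return total

print(binary_to_decimal('10010101'))  # 149 — matches the worked example
print(binary_to_decimal('1101'))      # 13
```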

Shortcut: Powers of 2 You Should Know

Memorizing the first 16 powers of 2 makes mental binary-to-decimal conversion practical. You don't need all of them — just the ones you encounter regularly.

Position (n)   2ⁿ                 Where you see this value
 0             1                  Bit flag: off/on
 1             2                  Two states (one binary digit)
 2             4                  Unix read permission bit (rwx = 4+2+1)
 3             8                  Bits in a byte, octal digit range (0–7)
 4             16                 Hex digit range (0–F)
 5             32                 ASCII printable range start (space = 0x20)
 6             64                 Base64 alphabet size, 64-bit architecture
 7             128                MSB of a byte, ASCII code count (0–127)
 8             256                1 byte — IPv4 octet, RGB channel
10             1,024              1 KiB (kibibyte)
16             65,536             2 bytes — TCP/UDP port range (0–65535)
20             1,048,576          1 MiB
24             16,777,216         RGB color space (256³)
32             4,294,967,296      4 bytes — IPv4 address space, 32-bit int range
64             1.84 × 10¹⁹        8 bytes — 64-bit pointer, most modern integer types
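You don't need to memorize these cold: in most languages `1 << n` (left shift) computes 2ⁿ, so the table is easy to regenerate on demand. A Python sketch:

```python
# 1 shifted left n places has a single 1-bit at position n, i.e. 2**n
for n in (7, 8, 10, 16, 24, 32):
    print(n, 1 << n)
```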

Step-by-Step Conversion Examples

Example 1: 1101 → 13

1  1  0  1
│  │  │  └─ position 0: 1 × 1 =  1
│  │  └──── position 1: 0 × 2 =  0
│  └─────── position 2: 1 × 4 =  4
└────────── position 3: 1 × 8 =  8
                                 ──
                                 13

Example 2: 11111111 → 255 (one byte, all bits set)

1+2+4+8+16+32+64+128 = 255
─ or equivalently: 2⁸ - 1 = 256 - 1 = 255

This is the maximum value of a single byte. The same bit pattern appears as:
  • each 255 octet in the 255.255.255.0 subnet mask
  • the max RGB channel value (rgb(255,255,255) = white)
  • 0xFF in hex
  • −1 as a signed byte in two's complement (same bits, different interpretation)

Example 3: 11000000.10101000.00000001.00000001 → 192.168.1.1

IPv4 addresses are four 8-bit octets separated by dots.
Each octet converts independently:

11000000 → 128+64 = 192
10101000 → 128+32+8 = 168
00000001 → 1
00000001 → 1

Result: 192.168.1.1   ← a typical home router default gateway
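The per-octet conversion is one line of Python — a sketch for illustration (the stdlib `ipaddress` module is the robust route for real address handling):

```python
binary_ip = '11000000.10101000.00000001.00000001'
# convert each octet independently, then rejoin with dots
dotted = '.'.join(str(int(octet, 2)) for octet in binary_ip.split('.'))
print(dotted)  # 192.168.1.1
```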

Example 4: 0111 1111 1111 1111 1111 1111 1111 1111 → 2,147,483,647

This 32-bit value is 0x7FFFFFFF — the maximum signed 32-bit integer.
MSB (bit 31) = 0 → positive number in two's complement.
Bits 0–30 all set → 2³¹ - 1 = 2,147,483,647.

You'll see this value as Integer.MAX_VALUE in Java,
INT_MAX in C/C++, and i32::MAX in Rust.
Adding 1 causes integer overflow → -2,147,483,648 (or undefined behavior in C).
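Python integers never overflow, but the 32-bit wraparound can be simulated by masking to 32 bits and reinterpreting bit 31 as the sign. A sketch of two's-complement reinterpretation (the `wrap_int32` helper is hypothetical, not a stdlib function — and it models wrapping, not C's undefined behavior):

```python
def wrap_int32(n: int) -> int:
    """Reinterpret the low 32 bits of n as a signed two's-complement value."""
    n &= 0xFFFFFFFF                        # keep only the low 32 bits
    return n - 2**32 if n >= 2**31 else n  # bit 31 set → negative

print(wrap_int32(2**31 - 1))  # 2147483647  (INT_MAX)
print(wrap_int32(2**31))      # -2147483648 (the overflow wrap)
```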

Where Binary Shows Up in Real Development Work

IPv4 Addresses and Subnet Masks

Every IPv4 address is a 32-bit binary number. Per IANA (Internet Assigned Numbers Authority), the entire IPv4 address space contains exactly 2³² = 4,294,967,296 addresses — a number only meaningful in binary. Subnet masks use consecutive 1-bits to define the network portion:

# CIDR notation and its binary subnet mask
/8   → 11111111.00000000.00000000.00000000 → 255.0.0.0
/16  → 11111111.11111111.00000000.00000000 → 255.255.0.0
/24  → 11111111.11111111.11111111.00000000 → 255.255.255.0
/25  → 11111111.11111111.11111111.10000000 → 255.255.255.128
/30  → 11111111.11111111.11111111.11111100 → 255.255.255.252

# Usable hosts in /25: 2^7 - 2 = 126
# Usable hosts in /30: 2^2 - 2 = 2 (point-to-point links)

# JavaScript: check if IP is in subnet
function inSubnet(ip, network, prefix) {
  const ipBits = ip.split('.').map(Number)
    .reduce((acc, octet) => (acc << 8) | octet, 0) >>> 0
  const netBits = network.split('.').map(Number)
    .reduce((acc, octet) => (acc << 8) | octet, 0) >>> 0
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0
  return (ipBits & mask) === (netBits & mask)
}

inSubnet('192.168.1.100', '192.168.1.0', 24)  // true
inSubnet('192.168.2.1', '192.168.1.0', 24)    // false

Unix File Permissions

Unix permission bits are a 12-bit field. The lower 9 bits control read (4), write (2), and execute (1) for owner, group, and others. According to the POSIX specification (IEEE Std 1003.1), these values are deliberately chosen to be non-overlapping powers of 2 so they can be combined with bitwise OR and tested with bitwise AND:

# chmod 755 in binary
7 = 111  (owner: read=4 + write=2 + execute=1)
5 = 101  (group: read=4 + execute=1)
5 = 101  (others: read=4 + execute=1)

Binary: 111 101 101 = 0b111101101 = decimal 493 = octal 0755

# ls -la output   Octal  Binary
# -rwxr-xr-x      0755   111 101 101
# -rw-r--r--      0644   110 100 100
# -rw-------      0600   110 000 000
# drwxrwxr-x      0775   111 111 101

# Python: check if file is executable by owner
import stat
import os

mode = os.stat('/usr/bin/python3').st_mode
is_executable = bool(mode & stat.S_IXUSR)  # 0o100 = 0b001000000

Bitwise Operations in Application Code

Bitwise operations are essential in feature flags, permissions systems, color manipulation, and protocol parsing. The Stack Overflow Developer Survey 2025 reports that 38% of developers regularly work with systems programming or embedded code, where bitwise operations are routine:

// Feature flags packed into a single integer — common in low-memory embedded systems
const FEATURE_DARK_MODE   = 0b00000001  // bit 0
const FEATURE_ANALYTICS   = 0b00000010  // bit 1
const FEATURE_BETA_API    = 0b00000100  // bit 2
const FEATURE_ADMIN_PANEL = 0b00001000  // bit 3

// User has dark mode + beta API:
let userFlags = FEATURE_DARK_MODE | FEATURE_BETA_API  // 0b00000101 = 5

// Check if dark mode is enabled:
const hasDarkMode = (userFlags & FEATURE_DARK_MODE) !== 0  // true

// Toggle analytics:
userFlags ^= FEATURE_ANALYTICS  // XOR flips the bit → 0b00000111 = 7

// Disable beta API:
userFlags &= ~FEATURE_BETA_API  // AND with NOT → clears bit 2 → 0b00000101 = 5

// How many features are enabled? (Hamming weight / popcount)
const popcount = (n: number) => {
  let count = 0
  while (n) { count += n & 1; n >>>= 1; }
  return count
}
popcount(userFlags)  // 2

IEEE 754 Floating-Point: Binary Beneath the Decimal

The reason 0.1 + 0.2 !== 0.3 in JavaScript (and every other language using IEEE 754) is binary. The IEEE 754-2008 standard defines a 64-bit double-precision float as 1 sign bit + 11 exponent bits + 52 mantissa bits. The decimal fraction 0.1 cannot be represented exactly in binary — the closest binary approximation is 0.100000000000000005551115… which accumulates error in addition:

// 0.1 in binary (64-bit IEEE 754)
// Sign: 0  Exponent: 01111111011  Mantissa: 1001100110011001... (repeating)
//
// Just like 1/3 = 0.333... repeating in decimal,
// 1/10 = 0.0001100110011... repeating in binary.

// The classic JavaScript gotcha:
0.1 + 0.2  // 0.30000000000000004 — NOT 0.3

// Safe comparison: use tolerance (epsilon)
const EPSILON = Number.EPSILON  // 2.220446049250313e-16
Math.abs(0.1 + 0.2 - 0.3) < EPSILON  // true

// For financial calculations, use integer arithmetic:
// Store $12.99 as integer cents: 1299
// Never store currency as floating-point
const price = 1299  // cents
const tax = Math.round(price * 0.0825)  // 107 cents
const total = price + tax  // 1406 cents = $14.06

// Or use a decimal library: npm install decimal.js
import Decimal from 'decimal.js'
new Decimal('0.1').plus('0.2').toString()  // '0.3' — exact

Binary in TCP/IP Protocol Headers

Network protocol headers are densely packed binary structures. The TCP header, defined in RFC 793, packs port numbers, sequence numbers, flags, and window size into a minimum 20-byte binary field. Parsing network packets requires understanding exactly which bits represent which fields:

# TCP flag bits (9 bits — the low 9 bits of the 16-bit data-offset/flags word):
# Bit 8: NS  (ECN-nonce concealment)
# Bit 7: CWR (Congestion Window Reduced)
# Bit 6: ECE (ECN-Echo)
# Bit 5: URG (Urgent pointer field significant)
# Bit 4: ACK (Acknowledgment field significant)
# Bit 3: PSH (Push function)
# Bit 2: RST (Reset the connection)
# Bit 1: SYN (Synchronize sequence numbers)
# Bit 0: FIN (No more data from sender)

# A SYN packet (TCP handshake first step): flags = 0b000000010 = 2
# A SYN-ACK packet: flags = 0b000010010 = 18
# A FIN-ACK packet: flags = 0b000010001 = 17

# Python: parse TCP flags from raw packet bytes
import struct

def parse_tcp_flags(raw_header: bytes) -> dict:
    # Unpack bytes 12-13 of TCP header (data offset + flags)
    data_offset_flags = struct.unpack('!H', raw_header[12:14])[0]
    flags = data_offset_flags & 0x1FF  # mask lower 9 bits
    return {
        'FIN': bool(flags & 0x001),
        'SYN': bool(flags & 0x002),
        'RST': bool(flags & 0x004),
        'PSH': bool(flags & 0x008),
        'ACK': bool(flags & 0x010),
        'URG': bool(flags & 0x020),
    }

Binary ↔ Decimal Conversion Code in Every Language

JavaScript / TypeScript

// Binary string → decimal number
parseInt('1101', 2)           // 13
parseInt('11111111', 2)       // 255
parseInt('10010101', 2)       // 149
parseInt('0b1101', 2)         // 0 — parseInt stops at 'b'; it does NOT handle the 0b prefix
Number('0b1101')              // 13 — use Number() for 0b-prefixed strings

// Decimal number → binary string
(13).toString(2)              // '1101'
(255).toString(2)             // '11111111'
(149).toString(2)             // '10010101'

// Pad to fixed width (e.g., always 8 bits for a byte display)
(13).toString(2).padStart(8, '0')   // '00001101'
(255).toString(2).padStart(8, '0')  // '11111111'

// For 64-bit values, use BigInt (Number only safely handles 53 bits)
BigInt('0b' + '1'.repeat(53)).toString()  // OK for 53-bit numbers
parseInt('1'.repeat(54), 2)  // UNSAFE — beyond Number.MAX_SAFE_INTEGER

// Correct 64-bit approach:
const binary64 = '1'.repeat(64)
BigInt('0b' + binary64)  // 18446744073709551615n

// Convert IPv4 address to binary representation
function ipToBinary(ip: string): string {
  return ip.split('.').map(octet =>
    parseInt(octet).toString(2).padStart(8, '0')
  ).join('.')
}
ipToBinary('192.168.1.1')
// '11000000.10101000.00000001.00000001'

Python

# Binary string → int
int('1101', 2)          # 13
int('11111111', 2)      # 255
int('0b1101', 2)        # 13 — 0b prefix handled
int('0b1101', 0)        # 13 — base 0 auto-detects 0b prefix

# Binary literal → int (evaluated at parse time)
value = 0b1101          # int 13

# int → binary string
bin(13)                 # '0b1101'
bin(13)[2:]             # '1101' — strip 0b prefix
f'{13:b}'               # '1101'
f'{13:08b}'             # '00001101' — zero-padded to 8 chars
f'{255:08b}'            # '11111111'
f'{149:08b}'            # '10010101'

# bytes object → binary string
data = b'\xde\xad'
' '.join(f'{byte:08b}' for byte in data)
# '11011110 10101101'

# int → bytes (big-endian vs little-endian)
(255).to_bytes(4, 'big')      # b'\x00\x00\x00\xff'
(255).to_bytes(4, 'little')   # b'\xff\x00\x00\x00'

# bytes → int
int.from_bytes(b'\xff', 'big')   # 255

# IPv4 address ↔ integer
import ipaddress
int(ipaddress.IPv4Address('192.168.1.1'))   # 3232235777
ipaddress.IPv4Address(3232235777)           # IPv4Address('192.168.1.1')

Go

import (
    "fmt"
    "strconv"
)

// Binary string → int64
n, err := strconv.ParseInt("1101", 2, 64)     // 13, nil
m, err := strconv.ParseInt("11111111", 2, 64) // 255, nil

// Unsigned binary string → uint64
u, err := strconv.ParseUint("10000000", 2, 64) // 128, nil

// int → binary string
s := strconv.FormatInt(13, 2)  // "1101"
t := strconv.FormatInt(255, 2) // "11111111"

// With fmt (Sprintf)
a := fmt.Sprintf("%b", 13)   // "1101"
b := fmt.Sprintf("%08b", 13) // "00001101" — zero-padded to 8 chars

// Binary literal (Go 1.13+)
n := 0b1101  // int 13

// Bit manipulation: set, clear, toggle, check
func setBit(n int, pos uint) int    { return n | (1 << pos) }
func clearBit(n int, pos uint) int  { return n &^ (1 << pos) }
func toggleBit(n int, pos uint) int { return n ^ (1 << pos) }
func hasBit(n int, pos uint) bool   { return n&(1<<pos) != 0 }

n := 0b00000101  // 5
n = setBit(n, 1)    // 0b00000111 = 7 (set bit 1)
n = clearBit(n, 0)  // 0b00000110 = 6 (clear bit 0)
hasBit(n, 2)         // true

Rust

// Binary literal (Rust uses _ as visual separator)
let n: u32 = 0b1101;            // 13
let byte: u8 = 0b1111_1111;     // 255 — readable grouping

// Binary string → integer
let n = u32::from_str_radix("1101", 2).unwrap();    // 13
let n = u8::from_str_radix("11111111", 2).unwrap(); // 255

// integer → binary string
let s = format!("{:b}", 13u32);     // "1101"
let s = format!("{:08b}", 13u32);   // "00001101"
let s = format!("{:032b}", 149u32); // zero-padded to 32 bits

// Bit counting (hardware intrinsic when available)
let n: u32 = 0b1101;
n.count_ones()   // 3 (popcount)
n.count_zeros()  // 29
n.leading_zeros()  // 28
n.trailing_zeros() // 0

// Rotate bits (useful in hashing, cryptography)
let n: u32 = 0b1000_0000_0000_0000_0000_0000_0000_0001;
n.rotate_right(1)  // 0b1100_0000_0000_0000_0000_0000_0000_0000

Common Binary Conversion Mistakes

JavaScript: 32-bit Ceiling on Bitwise Operators

JavaScript's bitwise operators (&, |, ^, ~, <<, >>) convert their operands to signed 32-bit integers, even though JavaScript numbers are 64-bit floats. This means any bitwise operation on a value above 2³¹ − 1 (2,147,483,647) will produce unexpected results. Use BigInt for 64-bit flag fields.

// WRONG — parseInt loses float precision past 53 bits; bitwise ops truncate to 32
parseInt('1'.repeat(64), 2)  // 18446744073709552000 — WRONG (precision lost)
(2**32) | 0                  // 0 — truncated to 32-bit signed

// CORRECT — use BigInt for large binary values
BigInt('0b' + '1'.repeat(64))  // 18446744073709551615n — correct

Signed vs. Unsigned: Two's Complement

The binary pattern 11111111 is 255 as an unsigned byte but −1 as a signed byte (two's complement). Context determines interpretation. In C, unsigned char gives 255; signed char gives −1. JavaScript's >> does signed shift; >>> does unsigned shift.
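Python's struct module makes the dual reading easy to demonstrate — format code 'B' is unsigned char, 'b' is signed char:

```python
import struct

raw = bytes([0b11111111])            # one byte, all bits set
unsigned, = struct.unpack('B', raw)  # read as unsigned char
signed, = struct.unpack('b', raw)    # read as signed char (two's complement)
print(unsigned, signed)  # 255 -1 — same bits, two interpretations
```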

Byte Order (Endianness)

Multi-byte values can be stored with the most significant byte first (big-endian) or least significant byte first (little-endian). Network protocols (TCP/IP) use big-endian ("network byte order"). x86/x64 CPUs use little-endian. The integer 256 in little-endian bytes is 00 01; in big-endian it's 01 00. This matters when parsing binary file formats or network packets across architectures.
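Python's int.to_bytes and int.from_bytes show the swap directly (a quick sketch):

```python
n = 256
print(n.to_bytes(2, 'big').hex(' '))     # '01 00' — network byte order
print(n.to_bytes(2, 'little').hex(' '))  # '00 01' — x86 memory order

# Reading the same two bytes both ways gives different values:
print(int.from_bytes(b'\x01\x00', 'big'))     # 256
print(int.from_bytes(b'\x01\x00', 'little'))  # 1
```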

Binary vs. Hex vs. Decimal: When to Use Each

Binary, hexadecimal, and decimal are different representations of the same underlying values. The choice of representation is purely about human readability and convenience for the context. For number system conversions beyond binary and decimal, the BytePane number converter handles all bases simultaneously.

  • Bit flags, permission masks → Binary. Each bit is directly visible — easy to see which flags are set.
  • Memory addresses, hash digests → Hex. Far more compact than binary; still maps 1:1 to nibbles.
  • IP addresses (human-readable) → Decimal. Convention: 192.168.1.1 is easier to type than 11000000.10101000.00000001.00000001.
  • IPv6 addresses → Hex. 128 bits in hex = 32 hex chars; in decimal = 39 chars; in binary = 128 chars.
  • Byte values (network protocols) → Hex. Each byte = 2 hex chars — clean visual grouping.
  • File permissions (Unix) → Octal. Each 3-bit group = one octal digit — convenient for the rwx triad.
  • Floating-point bit layout → Binary/Hex. Both expose the sign/exponent/mantissa structure; decimal obscures it.
  • User-visible numbers → Decimal. Humans are trained on base 10; all other formats need translation.

For the hex side of number conversions, the hex to decimal guide covers the math and every common use case — CSS colors, memory addresses, cryptographic hashes, and IPv6.

Binary and Base64: Encoding Binary Data for Text Transmission

Base64 encoding takes binary data (arbitrary bytes) and encodes it as printable ASCII text. According to the RFC 4648 specification, Base64 groups input bytes into 6-bit chunks (2⁶ = 64 possible values, hence "base 64") and maps each to one of 64 printable characters. This 6-bits-per-character encoding inflates the size by 4/3 (3 bytes → 4 characters).

# How Base64 works — binary grouping
Input: "Man" = bytes 77, 97, 110
Binary: 01001101 01100001 01101110

Group into 6-bit chunks:
010011 010110 000101 101110
  19     22     5     46

Map to Base64 alphabet (A=0, B=1, ..., Z=25, a=26, ..., z=51, 0=52, ..., 9=61, +=62, /=63):
  19 → T
  22 → W
   5 → F
  46 → u

Result: "TWFu"

# Every 3 bytes of binary → 4 Base64 characters
# 24 bits binary → 24 bits encoded (no loss), just different alphabet
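The grouping above can be checked with Python's standard library, and redone by hand in two lines:

```python
import base64

print(base64.b64encode(b'Man').decode())  # 'TWFu' — matches the manual result

# The 6-bit grouping by hand:
bits = ''.join(f'{byte:08b}' for byte in b'Man')         # 24 bits as a string
chunks = [int(bits[i:i + 6], 2) for i in range(0, 24, 6)]
print(chunks)  # [19, 22, 5, 46]
```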

Base64 is used to embed binary assets in JSON (images in data URIs, PDF attachments in email), in JWT tokens (header and payload are Base64url-encoded), and in TLS certificates. See the Base64 encoding guide for the full treatment.

Frequently Asked Questions

How do you convert binary to decimal?

Multiply each binary digit by 2 raised to its positional power (starting at 0 from the right), then sum the results. For 1101: (1 × 8) + (1 × 4) + (0 × 2) + (1 × 1) = 13. In JavaScript: parseInt("1101", 2) → 13. In Python: int("1101", 2) → 13.

How many decimal values can 8 binary bits represent?

8 bits (one byte) represent 256 distinct values: 0 through 255 (2⁸ = 256). As a signed integer using two's complement, the range is −128 to +127. This 0–255 range is why IPv4 octets max at 255, RGB color channels max at 255, and ASCII characters fit in a single byte.

What is an IPv4 subnet mask in binary?

A subnet mask is a 32-bit binary number where all network bits are 1 and all host bits are 0. The /24 mask (255.255.255.0) is 11111111.11111111.11111111.00000000 — 24 consecutive 1-bits. ANDing an IP address with the mask extracts the network portion, determining if two addresses are on the same subnet.

Why does a computer use binary instead of decimal?

Computer hardware is built from transistors with exactly two states: on (1) and off (0). Binary maps directly to these physical states. Decimal would require analog circuitry (10 distinct voltage levels) that is far less reliable and harder to manufacture. Binary's simplicity enables billions of reliable transistor switches in a modern CPU.

What is the difference between binary and hexadecimal?

Binary (base 2) uses digits 0–1. Hexadecimal (base 16) uses digits 0–9 and A–F. One hex digit = exactly 4 binary digits. So 0xFF = 11111111 in binary = 255 in decimal. Hex is compressed binary — humans use hex because it is much shorter while still mapping directly to bit patterns.

How does chmod 755 relate to binary?

Unix file permissions are stored as binary flags. chmod 755 is octal — each octal digit represents 3 bits: 7 = 111 (read+write+execute), 5 = 101 (read+execute). The full 9-bit permission mask for 755 is 111 101 101 in binary. Three groups of three bits — one group each for owner, group, and others.

Convert Binary Numbers Instantly

Paste any binary number and get the decimal, hex, and octal equivalents with a step-by-step breakdown. Also handles IP address binary conversion.

Open Number Converter →