
Hexadecimal Calculator

Add, subtract, and convert hexadecimal numbers. Perform hex arithmetic and convert results to decimal or binary.


Understanding Hexadecimal (Base-16): Foundations

Hexadecimal (hex) is a positional numeral system with base 16. It uses sixteen distinct symbols: the digits 0–9 represent values zero through nine, and the letters A–F (or a–f) represent values ten through fifteen. One hexadecimal digit represents exactly four binary bits (a "nibble"), making hex the most natural way to represent binary data in a human-readable format.

The positional value of each hex digit is a power of 16. For the hex number 2F4: 2×16² + F×16¹ + 4×16⁰ = 2×256 + 15×16 + 4×1 = 512 + 240 + 4 = 756 in decimal.

Converting decimal to hex: repeatedly divide by 16 and record remainders (rightmost digit first). 756 ÷ 16 = 47 R 4; 47 ÷ 16 = 2 R 15 (F); 2 ÷ 16 = 0 R 2. Read remainders upward: 2F4₁₆ ✓. In most programming languages, hex literals are prefixed with 0x: 0x2F4 = 756.
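The repeated-division procedure above can be sketched in Python (the `to_hex` helper name is illustrative; Python's built-in `hex()` does the same job with a lowercase result):

```python
def to_hex(n: int) -> str:
    """Convert a non-negative integer to hex by repeated division by 16."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"
    out = []
    while n > 0:
        n, r = divmod(n, 16)       # remainder is the next digit, rightmost first
        out.append(digits[r])
    return "".join(reversed(out))  # read the remainders upward

print(to_hex(756))   # 2F4
print(hex(756))      # 0x2f4 (built-in, with the 0x prefix)
```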

Hex to Decimal Conversion Table

Quick reference for converting single hex digits and common hex values to decimal:

| Hex | Decimal | Binary | Common Meaning |
|---|---|---|---|
| 0x00 | 0 | 0000 0000 | Null byte, false, off |
| 0x01 | 1 | 0000 0001 | True, enabled |
| 0x0A | 10 | 0000 1010 | Newline character (LF) |
| 0x0D | 13 | 0000 1101 | Carriage return (CR) |
| 0x1F | 31 | 0001 1111 | Unit separator |
| 0x20 | 32 | 0010 0000 | Space character (ASCII) |
| 0x41 | 65 | 0100 0001 | 'A' in ASCII |
| 0x61 | 97 | 0110 0001 | 'a' in ASCII (lowercase) |
| 0x7F | 127 | 0111 1111 | DEL character; maximum signed byte value |
| 0x80 | 128 | 1000 0000 | Sign bit set (negative in signed byte) |
| 0xFF | 255 | 1111 1111 | Max unsigned byte; all bits set |
| 0x100 | 256 | 1 0000 0000 | One more than max byte |
| 0xFFFF | 65,535 | 16 ones | Max 16-bit unsigned value |
| 0xFFFFFF | 16,777,215 | 24 ones | Max 24-bit (16M colors) |

The relationship between hex and binary is direct: each hex digit maps to exactly 4 bits. A = 1010, B = 1011, C = 1100, D = 1101, E = 1110, F = 1111. Converting 0xAB to binary: A=1010, B=1011 → 10101011₂ = 171₁₀.
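Because of this digit-to-nibble mapping, hex–binary conversion never needs arithmetic on the whole number; each digit converts independently. A quick Python sketch (helper names are illustrative):

```python
def hex_to_bin(h: str) -> str:
    """Convert each hex digit to its 4-bit binary group."""
    return "".join(format(int(d, 16), "04b") for d in h)

def bin_to_hex(b: str) -> str:
    """Pad to a multiple of 4 bits, then convert each nibble to one hex digit."""
    b = b.zfill((len(b) + 3) // 4 * 4)
    return "".join(format(int(b[i:i+4], 2), "X") for i in range(0, len(b), 4))

print(hex_to_bin("AB"))        # 10101011
print(bin_to_hex("10101011"))  # AB
print(int("10101011", 2))      # 171
```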

Hex Arithmetic: Addition, Subtraction, Multiplication

Hexadecimal arithmetic follows the same rules as decimal arithmetic but carries and borrows at 16 instead of 10. Understanding hex arithmetic is essential for assembly programming, embedded systems, and reading compiler output.

Addition example: 3A + 27. Units: A + 7 = 10 + 7 = 17 = 1×16 + 1 → write 1, carry 1. Sixteens: 3 + 2 + 1 (carry) = 6. Result: 61₁₆ = 97₁₀. Verify: 58 + 39 = 97 ✓.

Subtraction example: C3 − 5F. Units: 3 < F (15), so borrow: 3 + 16 − 15 = 4, and borrow 1 from the next column. Sixteens: C (12) − 5 − 1 (borrow) = 6. Result: 64₁₆ = 100₁₀. Verify: 195 − 95 = 100 ✓.

Multiplication example: 1A × 3. A × 3 = 30 = 1E₁₆ (write E, carry 1). 1 × 3 + 1 = 4. Result: 4E₁₆ = 78₁₀. Verify: 26 × 3 = 78 ✓.

For complex hex calculations, converting to decimal, computing, and converting back is often more reliable unless you are deeply practiced. However, understanding hex arithmetic builds intuition for memory layout, CPU registers, and data representation.
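The three worked examples above can be checked directly in any language with 0x literals; hex literals are ordinary integers, so the carries at 16 happen automatically. In Python:

```python
# The same three problems as the worked examples, verified by the machine.
print(format(0x3A + 0x27, "X"))  # 61  (58 + 39 = 97)
print(format(0xC3 - 0x5F, "X"))  # 64  (195 - 95 = 100)
print(format(0x1A * 0x3, "X"))   # 4E  (26 * 3 = 78)
```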

Hex in Web Design: Color Codes

HTML and CSS color codes are one of the most visible applications of hexadecimal outside of programming. Colors are expressed as #RRGGBB where each channel ranges from 00 (0 intensity) to FF (255 = full intensity). This gives 256³ = 16,777,216 possible colors.

| Hex Color | Red | Green | Blue | Color Name |
|---|---|---|---|---|
| #FF0000 | 255 | 0 | 0 | Pure red |
| #00FF00 | 0 | 255 | 0 | Pure green (lime) |
| #0000FF | 0 | 0 | 255 | Pure blue |
| #FFFF00 | 255 | 255 | 0 | Yellow |
| #FF00FF | 255 | 0 | 255 | Magenta |
| #00FFFF | 0 | 255 | 255 | Cyan |
| #FFFFFF | 255 | 255 | 255 | White |
| #000000 | 0 | 0 | 0 | Black |
| #808080 | 128 | 128 | 128 | Medium gray |
| #FF5733 | 255 | 87 | 51 | Vivid orange-red |

CSS also supports 4-digit (#RGBA) and 8-digit (#RRGGBBAA) hex colors where AA is the alpha channel (00 = transparent, FF = opaque). Short form colors (#RGB) expand by repeating each digit: #F3A = #FF33AA.

Web designers frequently adjust colors by modifying hex values. Adding to the red channel makes colors warmer; subtracting makes them cooler. Hex colors with equal R, G, and B values always produce shades of gray. A color like #7F7F7F is exactly 50% gray (127 out of 255 on each channel).
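Splitting a #RRGGBB value into its channels is a two-hex-digits-per-channel slice, as a short Python sketch shows (the `hex_to_rgb` name is illustrative):

```python
def hex_to_rgb(color: str) -> tuple:
    """Split #RRGGBB into (r, g, b) decimal channel values."""
    c = color.lstrip("#")
    return tuple(int(c[i:i+2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#FF5733"))  # (255, 87, 51) — the vivid orange-red above
print(hex_to_rgb("#7F7F7F"))  # (127, 127, 127) — equal channels: a mid gray
```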

Hex in Programming: Memory Addresses and Bit Manipulation

In systems programming, hex is the natural language for memory addresses, bit flags, and hardware registers. Every programmer working in C, C++, assembly, or embedded systems encounters hex constantly.

Memory addresses: On a 32-bit system, addresses range from 0x00000000 to 0xFFFFFFFF (4 GB). Common address ranges: 0x00000000–0x00FFFFFF (low memory), 0x7FFFFFFF (max positive signed 32-bit int), 0x80000000 (start of negative space in signed interpretation), 0xFFFFFFFF (max unsigned 32-bit). On 64-bit systems, user space typically occupies 0x0000000000000000–0x00007FFFFFFFFFFF.

Bit manipulation with hex masks: Bit operations are expressed naturally in hex because they align with nibble boundaries.

| Operation | Expression | Effect |
|---|---|---|
| Set bit 3 | x \|= 0x08 | Force bit 3 to 1, leave others unchanged |
| Clear bit 3 | x &= ~0x08 | Force bit 3 to 0, leave others unchanged |
| Toggle bit 3 | x ^= 0x08 | Flip bit 3, leave others unchanged |
| Check bit 3 | (x & 0x08) != 0 | Test if bit 3 is set |
| Extract low nibble | x & 0x0F | Get lower 4 bits |
| Extract high nibble | (x >> 4) & 0x0F | Get upper 4 bits of byte |

Famous hex constants: Programmers have created memorable constants for debugging and initialization: 0xDEADBEEF (used to mark uninitialized memory in old IBM systems), 0xCAFEBABE (Java class file magic number), 0xFEEDFACE (Mach-O binary format magic), 0x0BADF00D (memory debugging sentinel), 0xDEADC0DE (used in iOS crash detection). These constants appear unambiguously in hex dumps, making them easy to spot during debugging.

File Format Signatures and Hex Forensics

Every file format has a characteristic sequence of bytes at its beginning called a "magic number" or file signature. Hex editors and digital forensics tools use these signatures to identify file types regardless of file extension. Knowing file signatures is essential for data recovery, malware analysis, and digital forensics.

| File Type | Hex Signature (first bytes) | ASCII Representation |
|---|---|---|
| JPEG image | FF D8 FF | ÿØÿ |
| PNG image | 89 50 4E 47 0D 0A 1A 0A | .PNG.... |
| PDF document | 25 50 44 46 | %PDF |
| ZIP archive | 50 4B 03 04 | PK.. |
| GIF image | 47 49 46 38 | GIF8 |
| ELF executable (Linux) | 7F 45 4C 46 | .ELF |
| Windows PE executable | 4D 5A | MZ |
| MP3 audio | FF FB or 49 44 33 | ÿû or ID3 |
| SQLite database | 53 51 4C 69 74 65 | SQLite |

Security professionals use hex to examine binary files without trusting the file extension. A file named "document.pdf" that starts with 4D 5A is actually a Windows executable — a common malware trick. Hex analysis of network packets reveals protocol structures, encryption headers, and potential exploits.
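A minimal sketch of this signature check in Python, using a few of the magic numbers from the table (the `SIGNATURES` dict and `sniff` helper are illustrative names, not a library API):

```python
# Identify a file type from its leading bytes, ignoring the extension.
SIGNATURES = {
    b"\xFF\xD8\xFF": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"%PDF": "PDF",
    b"PK\x03\x04": "ZIP",
    b"\x7fELF": "ELF",
    b"MZ": "PE executable",
}

def sniff(data: bytes) -> str:
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

# A "document.pdf" whose bytes start with 4D 5A is really an executable:
print(sniff(b"MZ\x90\x00"))   # PE executable
```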

Number Base Comparison: Decimal, Binary, Octal, Hex

Computers use different number bases for different purposes. Understanding how they relate helps in reading technical documentation, debugging, and system programming.

| Decimal | Binary (base 2) | Octal (base 8) | Hex (base 16) |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 8 | 1000 | 10 | 8 |
| 10 | 1010 | 12 | A |
| 15 | 1111 | 17 | F |
| 16 | 0001 0000 | 20 | 10 |
| 64 | 0100 0000 | 100 | 40 |
| 128 | 1000 0000 | 200 | 80 |
| 255 | 1111 1111 | 377 | FF |
| 256 | 1 0000 0000 | 400 | 100 |
| 1024 | 100 0000 0000 | 2000 | 400 |

Octal (base 8) was once common in computing (it appears in Unix file permissions: chmod 755 = 111 101 101 in binary = rwxr-xr-x). Hex largely replaced octal for most purposes because 4 bits per digit (hex) aligns better with modern 8-bit, 16-bit, 32-bit, and 64-bit architectures than 3 bits per digit (octal).
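Most languages can move between these bases directly; in Python, `format()` converts out of decimal and `int(s, base)` converts back, and `0o755` is the octal literal used in the chmod example:

```python
n = 255
print(format(n, "b"), format(n, "o"), format(n, "X"))    # 11111111 377 FF
print(int("FF", 16), int("377", 8), int("11111111", 2))  # 255 255 255

# Unix permissions: each octal digit is one 3-bit rwx group.
print(format(0o755, "b"))  # 111101101 -> rwx r-x r-x
```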

Frequently Asked Questions

How do I convert between hex and binary quickly?

Each hex digit corresponds to exactly 4 binary bits: 0=0000, 1=0001, 2=0010, 3=0011, 4=0100, 5=0101, 6=0110, 7=0111, 8=1000, 9=1001, A=1010, B=1011, C=1100, D=1101, E=1110, F=1111. To convert 0xB7: B=1011, 7=0111 → 10110111₂. To convert 11001010₂: split into nibbles: 1100=C, 1010=A → 0xCA.

Why do CSS colors use hexadecimal?

CSS uses hex because each RGB channel (0-255) fits neatly in exactly 2 hex digits (00-FF). The #RRGGBB format is compact, unambiguous, and maps directly to the 24-bit color model used by display hardware. HTML adopted hex colors in the early 1990s from X11 color definitions, and the convention has remained standard ever since.

What does 0xFF equal in decimal?

0xFF = 15×16 + 15 = 240 + 15 = 255. In binary, FF = 1111 1111, meaning all 8 bits are set. 255 is the maximum value of an unsigned byte (uint8), the maximum intensity for each RGB color channel, and appears constantly in networking (255.255.255.255 is the broadcast address) and computing.

What is the difference between 0x1F and 0xF1?

These are different numbers with the same digits in different order. 0x1F = 1×16 + 15 = 31 decimal. 0xF1 = 15×16 + 1 = 241 decimal. In binary: 0x1F = 0001 1111; 0xF1 = 1111 0001. The positional value matters, just as 19 ≠ 91 in decimal.

How many hex digits do I need to represent a 32-bit number?

Exactly 8 hex digits (since 16⁸ = 2³², covering all 32-bit values from 0x00000000 to 0xFFFFFFFF). Memory addresses on 32-bit systems are shown as 8-digit hex numbers. For 64-bit numbers, you need 16 hex digits (0x0000000000000000 to 0xFFFFFFFFFFFFFFFF).

What does the "0x" prefix mean in hex numbers?

The "0x" prefix is a notation convention used in C and most programming languages to indicate that the following number is hexadecimal. "0x" stands for "hex" (the 'x' suggests hexadecimal). Other notations: trailing 'h' in assembly (FFh), leading '#' in CSS and some contexts (#FF0000), and the $ prefix in some older languages ($FF).

How is hex used in IP addresses?

IPv4 addresses (e.g., 192.168.1.1) can be expressed in hex by converting each octet: 192.168.1.1 = 0xC0 0xA8 0x01 0x01 = 0xC0A80101. IPv6 addresses are already written in hex: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Fluency in hex therefore makes IPv6 addresses much easier to read and manipulate.

What is a hex editor and when would I use one?

A hex editor displays and edits files as raw bytes in hexadecimal. Common uses: examining file format signatures to identify file types, editing binary game save files, reverse engineering software, analyzing network captures, recovering data from damaged files, and digital forensics. Popular hex editors include HxD (Windows), Hex Fiend (Mac), and xxd (command-line Unix tool).

Why is hexadecimal used instead of decimal in computing?

Because computers operate in binary (base 2), and 16 = 2⁴ — hex aligns perfectly with binary. One hex digit = 4 bits, two hex digits = 1 byte (8 bits), four hex digits = 16 bits, eight hex digits = 32 bits. Decimal has no such clean correspondence with powers of 2, making hex far more natural for expressing binary data compactly.

How do I convert a hex color to RGB values?

Split the 6-digit hex color into three 2-digit groups: #RRGGBB. Convert each from hex to decimal. Example: #4A90E2 → R=0x4A=74, G=0x90=144, B=0xE2=226. So this color is rgb(74, 144, 226) — a medium blue. To reverse: convert each decimal value to 2-digit hex and concatenate: rgb(255, 87, 51) → #FF5733.

Hexadecimal in Networking and Protocols

Computer networking uses hexadecimal extensively. MAC addresses — the hardware identifiers for network interfaces — are written as 6 hex bytes separated by colons or hyphens: for example, 00:1A:2B:3C:4D:5E. The first three bytes (00:1A:2B) identify the manufacturer (Organizationally Unique Identifier, OUI), while the last three (3C:4D:5E) identify the specific device.

IPv6 addresses are 128 bits expressed as 8 groups of 4 hex digits: 2001:0DB8:AC10:FE01:0000:0000:0000:0000. Leading zeros within groups can be omitted and consecutive all-zero groups compressed with "::", giving 2001:DB8:AC10:FE01::. Understanding hex is essential for reading IPv6 addresses, subnet masks, and routing tables.

Ethernet frames, IP packets, TCP segments — all have fields expressed in hex in network analysis tools like Wireshark. A TCP SYN packet shows the flags field as 0x002 (only the SYN bit set); a SYN-ACK shows 0x012 (SYN + ACK bits set). Reading these hex values directly from packet captures is a fundamental network troubleshooting skill.
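Testing those TCP flag values is ordinary bit masking; a short Python sketch using the standard flag bits (SYN = 0x002, ACK = 0x010, as shown in tools like Wireshark):

```python
SYN, ACK = 0x002, 0x010

flags = 0x012                  # flags field from a captured SYN-ACK
print((flags & SYN) != 0)      # True — SYN bit set
print((flags & ACK) != 0)      # True — ACK bit set
print(flags == (SYN | ACK))    # True — exactly SYN + ACK, nothing else
```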

Unicode and Hex: Character Encoding

Unicode, the universal character encoding standard, assigns each character a code point expressed in hex. The Basic Multilingual Plane spans U+0000 to U+FFFF. Examples:

| Character | Unicode Code Point | UTF-8 Encoding (hex) | Description |
|---|---|---|---|
| A | U+0041 | 41 | Latin capital letter A |
| α | U+03B1 | CE B1 | Greek small letter alpha |
| € | U+20AC | E2 82 AC | Euro sign |
| 中 | U+4E2D | E4 B8 AD | Chinese character "middle" |
| 😀 | U+1F600 | F0 9F 98 80 | Grinning face emoji |
| © | U+00A9 | C2 A9 | Copyright symbol |

UTF-8 encodes ASCII characters (U+0000 to U+007F) in 1 byte identical to their ASCII value. Characters U+0080 to U+07FF use 2 bytes, U+0800 to U+FFFF use 3 bytes, and characters beyond U+FFFF (like most emoji) use 4 bytes. This variable-length encoding, all represented in hex, is why understanding hex helps when debugging text encoding issues in web applications.
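These byte lengths are easy to confirm in Python, where `encode()` produces the UTF-8 bytes and `ord()` gives the code point:

```python
# Code point and UTF-8 bytes for characters from each length class.
for ch in ["A", "α", "€", "中", "😀"]:
    print(ch, hex(ord(ch)), ch.encode("utf-8").hex(" ").upper())
# 'A' is 1 byte (ASCII range), '€' is 3 bytes (E2 82 AC), '😀' is 4 bytes.
```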
