Hexadecimal Calculator
Add, subtract, and multiply hexadecimal numbers, and convert between hexadecimal, decimal, and binary.
Understanding Hexadecimal (Base-16): Foundations
Hexadecimal (hex) is a positional numeral system with base 16. It uses sixteen distinct symbols: the digits 0–9 represent values zero through nine, and the letters A–F (or a–f) represent values ten through fifteen. One hexadecimal digit represents exactly four binary bits (a "nibble"), making hex the most natural way to represent binary data in a human-readable format.
The positional value of each hex digit is a power of 16. For the hex number 2F4:
- 2 is in the 16² (256) position: 2 × 256 = 512
- F (15) is in the 16¹ (16) position: 15 × 16 = 240
- 4 is in the 16⁰ (1) position: 4 × 1 = 4
- Total: 512 + 240 + 4 = 756 decimal
Converting decimal to hex: repeatedly divide by 16 and record remainders (rightmost digit first). 756 ÷ 16 = 47 R 4; 47 ÷ 16 = 2 R 15 (F); 2 ÷ 16 = 0 R 2. Read remainders upward: 2F4₁₆ ✓. In most programming languages, hex literals are prefixed with 0x: 0x2F4 = 756.
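The divide-by-16 procedure can be sketched in Python. The explicit loop below is illustrative; in practice the built-ins `hex()` and `int(s, 16)` do the same job:

```python
def dec_to_hex(n: int) -> str:
    """Convert a non-negative integer to hex via repeated division by 16."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"
    out = []
    while n > 0:
        n, r = divmod(n, 16)
        out.append(digits[r])  # remainders come out least-significant first
    return "".join(reversed(out))  # so read them upward

print(dec_to_hex(756))  # 2F4
print(int("2F4", 16))   # 756 -- round trip with the built-in parser
```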
Hex to Decimal Conversion Table
Quick reference for converting single hex digits and common hex values to decimal:
| Hex | Decimal | Binary | Common Meaning |
|---|---|---|---|
| 0x00 | 0 | 0000 0000 | Null byte, false, off |
| 0x01 | 1 | 0000 0001 | True, enabled |
| 0x0A | 10 | 0000 1010 | Newline character (LF) |
| 0x0D | 13 | 0000 1101 | Carriage return (CR) |
| 0x1F | 31 | 0001 1111 | Unit separator |
| 0x20 | 32 | 0010 0000 | Space character (ASCII) |
| 0x41 | 65 | 0100 0001 | 'A' in ASCII |
| 0x61 | 97 | 0110 0001 | 'a' in ASCII (lowercase) |
| 0x7F | 127 | 0111 1111 | DEL character; max positive signed byte |
| 0x80 | 128 | 1000 0000 | Sign bit set (negative in signed byte) |
| 0xFF | 255 | 1111 1111 | Max unsigned byte; all bits set |
| 0x100 | 256 | 1 0000 0000 | One more than max byte |
| 0xFFFF | 65,535 | 16 ones | Max 16-bit unsigned value |
| 0xFFFFFF | 16,777,215 | 24 ones | Max 24-bit (16M colors) |
The relationship between hex and binary is direct: each hex digit maps to exactly 4 bits. A = 1010, B = 1011, C = 1100, D = 1101, E = 1110, F = 1111. Converting 0xAB to binary: A=1010, B=1011 → 10101011₂ = 171₁₀.
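Because each hex digit maps to exactly one nibble, hex-to-binary conversion is a per-digit lookup. A minimal Python sketch:

```python
def hex_to_bin(h: str) -> str:
    """Expand each hex digit into its 4-bit binary pattern."""
    return "".join(format(int(d, 16), "04b") for d in h)

def bin_to_hex(b: str) -> str:
    """Pad to whole nibbles, then convert 4 bits at a time."""
    b = b.zfill(-(-len(b) // 4) * 4)
    return "".join(format(int(b[i:i+4], 2), "X") for i in range(0, len(b), 4))

print(hex_to_bin("AB"))       # 10101011
print(bin_to_hex("10101011")) # AB
```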
Hex Arithmetic: Addition, Subtraction, Multiplication
Hexadecimal arithmetic follows the same rules as decimal arithmetic but carries and borrows at 16 instead of 10. Understanding hex arithmetic is essential for assembly programming, embedded systems, and reading compiler output.
Addition example: 3A + 27. Units: A + 7 = 10 + 7 = 17 = 1×16 + 1 → write 1, carry 1. Sixteens: 3 + 2 + 1 (carry) = 6. Result: 61₁₆ = 97₁₀. Verify: 58 + 39 = 97 ✓.
Subtraction example: C3 − 5F. Units: 3 < F (15), so borrow 16: 3 + 16 − 15 = 4, marking a borrow against the next column. Sixteens: C (12) − 5 − 1 (borrow) = 6. Result: 64₁₆ = 100₁₀. Verify: 195 − 95 = 100 ✓.
Multiplication example: 1A × 3. A × 3 = 30 = 1E₁₆ (write E, carry 1). 1 × 3 + 1 = 4. Result: 4E₁₆ = 78₁₀. Verify: 26 × 3 = 78 ✓.
For complex hex calculations, converting to decimal, computing, and converting back is often more reliable unless you are deeply practiced. However, understanding hex arithmetic builds intuition for memory layout, CPU registers, and data representation.
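Python hex literals make it easy to check worked examples like these; `format(n, "X")` renders a result back in hex:

```python
# The three worked examples above, verified directly.
a, b = 0x3A, 0x27
print(format(a + b, "X"), a + b)  # 61 97
print(format(0xC3 - 0x5F, "X"))   # 64
print(format(0x1A * 3, "X"))      # 4E
```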
Hex in Web Design: Color Codes
HTML and CSS color codes are one of the most visible applications of hexadecimal outside of programming. Colors are expressed as #RRGGBB where each channel ranges from 00 (0 intensity) to FF (255 = full intensity). This gives 256³ = 16,777,216 possible colors.
| Hex Color | Red | Green | Blue | Color Name |
|---|---|---|---|---|
| #FF0000 | 255 | 0 | 0 | Pure red |
| #00FF00 | 0 | 255 | 0 | Pure green (lime) |
| #0000FF | 0 | 0 | 255 | Pure blue |
| #FFFF00 | 255 | 255 | 0 | Yellow |
| #FF00FF | 255 | 0 | 255 | Magenta |
| #00FFFF | 0 | 255 | 255 | Cyan |
| #FFFFFF | 255 | 255 | 255 | White |
| #000000 | 0 | 0 | 0 | Black |
| #808080 | 128 | 128 | 128 | Medium gray |
| #FF5733 | 255 | 87 | 51 | Vivid orange-red |
CSS also supports 4-digit (#RGBA) and 8-digit (#RRGGBBAA) hex colors where AA is the alpha channel (00 = transparent, FF = opaque). Short form colors (#RGB) expand by repeating each digit: #F3A = #FF33AA.
Web designers frequently adjust colors by modifying hex values. Adding to the red channel makes colors warmer; subtracting makes them cooler. Hex colors with equal R, G, and B values always produce shades of gray. A color like #7F7F7F is exactly 50% gray (127 out of 255 on each channel).
Hex in Programming: Memory Addresses and Bit Manipulation
In systems programming, hex is the natural language for memory addresses, bit flags, and hardware registers. Every programmer working in C, C++, assembly, or embedded systems encounters hex constantly.
Memory addresses: On a 32-bit system, addresses range from 0x00000000 to 0xFFFFFFFF (4 GB). Common address ranges: 0x00000000–0x00FFFFFF (low memory), 0x7FFFFFFF (max positive signed 32-bit int), 0x80000000 (start of negative space in signed interpretation), 0xFFFFFFFF (max unsigned 32-bit). On 64-bit systems, user space typically occupies 0x0000000000000000–0x00007FFFFFFFFFFF.
Bit manipulation with hex masks: Bit operations are expressed naturally in hex because they align with nibble boundaries.
| Operation | Expression | Effect |
|---|---|---|
| Set bit 3 | x |= 0x08 | Force bit 3 to 1, leave others unchanged |
| Clear bit 3 | x &= ~0x08 | Force bit 3 to 0, leave others unchanged |
| Toggle bit 3 | x ^= 0x08 | Flip bit 3, leave others unchanged |
| Check bit 3 | (x & 0x08) != 0 | Test if bit 3 is set |
| Extract low nibble | x & 0x0F | Get lower 4 bits |
| Extract high nibble | (x >> 4) & 0x0F | Get upper 4 bits of byte |
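The mask operations in the table translate directly into code. A Python sketch applying each one to an example byte, using bit 3 (mask 0x08):

```python
x = 0xB2  # 1011 0010 -- bit 3 is currently 0

x_set = x | 0x08           # set bit 3    -> 0xBA (1011 1010)
x_clr = x_set & ~0x08      # clear bit 3  -> 0xB2 again
x_tog = x ^ 0x08           # toggle bit 3 -> 0xBA
is_set = (x & 0x08) != 0   # test bit 3   -> False
low = x & 0x0F             # low nibble   -> 0x2
high = (x >> 4) & 0x0F     # high nibble  -> 0xB

print(hex(x_set), hex(low), hex(high))  # 0xba 0x2 0xb
```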
Famous hex constants: Programmers have coined memorable constants for debugging and initialization: 0xDEADBEEF (historically used to mark uninitialized memory, notably on IBM RS/6000 systems), 0xCAFEBABE (the Java class file magic number), 0xFEEDFACE (the 32-bit Mach-O binary magic number), 0xBAADF00D (Microsoft's debug-heap fill pattern for uninitialized allocations), and 0xDEADC0DE (used by OpenWrt as a firmware partition marker). These constants stand out unambiguously in hex dumps, making them easy to spot during debugging.
File Format Signatures and Hex Forensics
Every file format has a characteristic sequence of bytes at its beginning called a "magic number" or file signature. Hex editors and digital forensics tools use these signatures to identify file types regardless of file extension. Knowing file signatures is essential for data recovery, malware analysis, and digital forensics.
| File Type | Hex Signature (first bytes) | ASCII Representation |
|---|---|---|
| JPEG image | FF D8 FF | ÿØÿ |
| PNG image | 89 50 4E 47 0D 0A 1A 0A | .PNG.... |
| PDF document | 25 50 44 46 | %PDF |
| ZIP archive | 50 4B 03 04 | PK.. |
| GIF image | 47 49 46 38 | GIF8 |
| ELF executable (Linux) | 7F 45 4C 46 | .ELF |
| Windows PE executable | 4D 5A | MZ |
| MP3 audio | FF FB or 49 44 33 | ÿû or ID3 |
| SQLite database | 53 51 4C 69 74 65 | SQLite |
Security professionals use hex to examine binary files without trusting the file extension. A file named "document.pdf" that starts with 4D 5A is actually a Windows executable — a common malware trick. Hex analysis of network packets reveals protocol structures, encryption headers, and potential exploits.
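A signature check like the one described can be sketched in Python. The `SIGNATURES` map below is a small illustrative subset of the table above:

```python
# Identify a file type from its leading bytes, ignoring the extension.
SIGNATURES = {
    b"\xFF\xD8\xFF": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"%PDF": "PDF",
    b"PK\x03\x04": "ZIP",
    b"MZ": "Windows PE",
    b"\x7fELF": "ELF",
}

def sniff(first_bytes: bytes) -> str:
    """Return the file type whose magic number prefixes the given bytes."""
    for magic, name in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return name
    return "unknown"

# A "document.pdf" that actually begins with MZ is an executable.
print(sniff(b"MZ\x90\x00"))  # Windows PE
```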
Number Base Comparison: Decimal, Binary, Octal, Hex
Computers use different number bases for different purposes. Understanding how they relate helps in reading technical documentation, debugging, and system programming.
| Decimal | Binary (base 2) | Octal (base 8) | Hex (base 16) |
|---|---|---|---|
| 0 | 0000 | 0 | 0 |
| 8 | 1000 | 10 | 8 |
| 10 | 1010 | 12 | A |
| 15 | 1111 | 17 | F |
| 16 | 0001 0000 | 20 | 10 |
| 64 | 0100 0000 | 100 | 40 |
| 128 | 1000 0000 | 200 | 80 |
| 255 | 1111 1111 | 377 | FF |
| 256 | 1 0000 0000 | 400 | 100 |
| 1024 | 100 0000 0000 | 2000 | 400 |
Octal (base 8) was once common in computing (it appears in Unix file permissions: chmod 755 = 111 101 101 in binary = rwxr-xr-x). Hex largely replaced octal for most purposes because 4 bits per digit (hex) aligns better with modern 8-bit, 16-bit, 32-bit, and 64-bit architectures than 3 bits per digit (octal).
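Python's built-ins display the same value in all four bases, which makes the comparison table easy to reproduce:

```python
n = 255
print(bin(n), oct(n), hex(n))  # 0b11111111 0o377 0xff

# Unix permissions are octal: 0o755 groups as 111 101 101 = rwxr-xr-x
print(int("755", 8))           # 493 (the same value in decimal)
print(format(0o755, "b"))      # 111101101
```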
Frequently Asked Questions
How do I convert between hex and binary quickly?
Each hex digit corresponds to exactly 4 binary bits: 0=0000, 1=0001, 2=0010, 3=0011, 4=0100, 5=0101, 6=0110, 7=0111, 8=1000, 9=1001, A=1010, B=1011, C=1100, D=1101, E=1110, F=1111. To convert 0xB7: B=1011, 7=0111 → 10110111₂. To convert 11001010₂: split into nibbles: 1100=C, 1010=A → 0xCA.
Why do CSS colors use hexadecimal?
CSS uses hex because each RGB channel (0-255) fits neatly in exactly 2 hex digits (00-FF). The #RRGGBB format is compact, unambiguous, and maps directly to the 24-bit color model used by display hardware. HTML adopted hex colors in the early 1990s from X11 color definitions, and the convention has remained standard ever since.
What does 0xFF equal in decimal?
0xFF = 15×16 + 15 = 240 + 15 = 255. In binary, FF = 1111 1111, meaning all 8 bits are set. 255 is the maximum value of an unsigned byte (uint8), the maximum intensity for each RGB color channel, and appears constantly in networking (255.255.255.255 is the broadcast address) and computing.
What is the difference between 0x1F and 0xF1?
These are different numbers with the same digits in different order. 0x1F = 1×16 + 15 = 31 decimal. 0xF1 = 15×16 + 1 = 241 decimal. In binary: 0x1F = 0001 1111; 0xF1 = 1111 0001. The positional value matters, just as 19 ≠ 91 in decimal.
How many hex digits do I need to represent a 32-bit number?
Exactly 8 hex digits (since 16⁸ = 2³², covering all 32-bit values from 0x00000000 to 0xFFFFFFFF). Memory addresses on 32-bit systems are shown as 8-digit hex numbers. For 64-bit numbers, you need 16 hex digits (0x0000000000000000 to 0xFFFFFFFFFFFFFFFF).
What does the "0x" prefix mean in hex numbers?
The "0x" prefix is a convention from C, adopted by most programming languages, indicating that the number that follows is hexadecimal. The leading 0 ensures the token starts with a digit, and the 'x' signals hexadecimal. Other notations: a trailing 'h' in assembly (FFh), a leading '#' in CSS and some other contexts (#FF0000), and a '$' prefix in some older languages and assemblers ($FF).
How is hex used in IP addresses?
IPv4 addresses (e.g., 192.168.1.1) can be expressed in hex by converting each octet: 192.168.1.1 = 0xC0 0xA8 0x01 0x01, or 0xC0A80101 as a single 32-bit value. IPv6 addresses are written in hex by definition: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Fluency in hex therefore makes both address formats easier to read and manipulate.
What is a hex editor and when would I use one?
A hex editor displays and edits files as raw bytes shown in hexadecimal. Common uses: examining file signatures to identify file types, editing binary game save files, reverse engineering software, analyzing network captures, recovering data from damaged files, and digital forensics. Popular tools include HxD (Windows), Hex Fiend (Mac), and xxd (a command-line Unix utility).
Why is hexadecimal used instead of decimal in computing?
Because computers operate in binary (base 2), and 16 = 2⁴ — hex aligns perfectly with binary. One hex digit = 4 bits, two hex digits = 1 byte (8 bits), four hex digits = 16 bits, eight hex digits = 32 bits. Decimal has no such clean correspondence with powers of 2, making hex far more natural for expressing binary data compactly.
How do I convert a hex color to RGB values?
Split the 6-digit hex color into three 2-digit groups: #RRGGBB. Convert each from hex to decimal. Example: #4A90E2 → R=0x4A=74, G=0x90=144, B=0xE2=226. So this color is rgb(74, 144, 226) — a medium blue. To reverse: convert each decimal value to 2-digit hex and concatenate: rgb(255, 87, 51) → #FF5733.
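A sketch of both directions in Python, including the #RGB shorthand expansion described earlier:

```python
def hex_to_rgb(color: str) -> tuple:
    """Parse #RRGGBB (or #RGB shorthand) into an (r, g, b) tuple."""
    h = color.lstrip("#")
    if len(h) == 3:                    # expand shorthand: F3A -> FF33AA
        h = "".join(c * 2 for c in h)
    return tuple(int(h[i:i+2], 16) for i in (0, 2, 4))

def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Format three 0-255 channel values as a #RRGGBB string."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_to_rgb("#4A90E2"))    # (74, 144, 226)
print(rgb_to_hex(255, 87, 51))  # #FF5733
```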
Hexadecimal in Networking and Protocols
Computer networking uses hexadecimal extensively. MAC addresses — the hardware identifiers for network interfaces — are written as 6 hex bytes separated by colons or hyphens: for example, 00:1A:2B:3C:4D:5E. The first three bytes (00:1A:2B) identify the manufacturer (Organizationally Unique Identifier, OUI), while the last three (3C:4D:5E) identify the specific device.
IPv6 addresses are 128 bits expressed as 8 groups of 4 hex digits: 2001:0DB8:AC10:FE01:0000:0000:0000:0000. Leading zeros within groups can be omitted and consecutive all-zero groups compressed with "::", giving 2001:DB8:AC10:FE01::. Understanding hex is essential for reading IPv6 addresses, subnet masks, and routing tables.
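Python's standard `ipaddress` module applies these zero-omission and "::" compression rules automatically:

```python
import ipaddress

addr = ipaddress.ip_address("2001:0DB8:AC10:FE01:0000:0000:0000:0000")
print(addr.compressed)  # 2001:db8:ac10:fe01::
print(addr.exploded)    # 2001:0db8:ac10:fe01:0000:0000:0000:0000
```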
Ethernet frames, IP packets, TCP segments — all have fields expressed in hex in network analysis tools like Wireshark. A TCP SYN packet shows the flags field as 0x002 (only the SYN bit set); a SYN-ACK shows 0x012 (SYN + ACK bits set). Reading these hex values directly from packet captures is a fundamental network troubleshooting skill.
Unicode and Hex: Character Encoding
Unicode, the universal character encoding standard, assigns each character a code point expressed in hex. The Basic Multilingual Plane spans U+0000 to U+FFFF. Examples:
| Character | Unicode Code Point | UTF-8 Encoding (hex) | Description |
|---|---|---|---|
| A | U+0041 | 41 | Latin capital letter A |
| α | U+03B1 | CE B1 | Greek small letter alpha |
| € | U+20AC | E2 82 AC | Euro sign |
| 中 | U+4E2D | E4 B8 AD | Chinese character "middle" |
| 😀 | U+1F600 | F0 9F 98 80 | Grinning face emoji |
| © | U+00A9 | C2 A9 | Copyright symbol |
UTF-8 encodes ASCII characters (U+0000 to U+007F) in 1 byte identical to their ASCII value. Characters U+0080 to U+07FF use 2 bytes, U+0800 to U+FFFF use 3 bytes, and characters beyond U+FFFF (like most emoji) use 4 bytes. This variable-length encoding, all represented in hex, is why understanding hex helps when debugging text encoding issues in web applications.
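The per-character byte counts can be confirmed in Python, which exposes both the code point (`ord`) and the UTF-8 bytes (`str.encode`):

```python
for ch in ["A", "©", "€", "中", "😀"]:
    encoded = ch.encode("utf-8")
    # e.g. the euro sign: U+20AC -> E2 82 AC (3 bytes)
    print(f"U+{ord(ch):04X} -> {encoded.hex(' ').upper()} ({len(encoded)} bytes)")
```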