Bits to Bytes (bit to B)
Bits-to-bytes conversions translate the bit-level figures that network bandwidth, encryption-key sizes, audio bitrates and protocol-frame specs use natively into the byte-level figures that file sizes, storage capacities and CPU-word-width specs use. A 256-bit AES encryption key is 32 bytes of underlying material; a 128-bit IPv6 address is 16 bytes on the wire; a 1024-bit RSA public modulus is 128 bytes of raw key material. The math is a clean division by 8: there are exactly 8 bits per byte by universal modern computing convention, dating to the IBM System/360 architecture of the 1960s and reinforced by the networking RFCs' adoption of the 8-bit octet. The bit-vs-byte distinction is the most-confused pairing in IT measurement: bandwidth is quoted in bits per second (Mbps, Gbps), file sizes in bytes (MB, GB), and the eightfold gap is the source of the persistent "why is my 1 Gbps connection only 125 MB/s?" confusion.
How to convert Bits to Bytes
Formula
bytes = bits ÷ 8
To convert bits to bytes, divide the bit figure by 8: there are exactly 8 bits per byte by universal modern computing convention, established by the IBM System/360 architecture in 1964 and standardised across every subsequent computing platform. The factor is exact and has not varied across hardware generations since. For mental math, "bits ÷ 8" lands the byte figure cleanly: 256 bits is 32 bytes, 1024 bits is 128 bytes, 1,000,000 bits is 125,000 bytes. The conversion appears in every cross-layer translation between bit-denominated specifications (cryptographic keys, network bandwidth, audio bitrate, CPU register width) and byte-denominated storage and memory layouts, making the 8-bits-per-byte ratio one of the most-used conversion factors in IT: it occurs in cryptographic-library API calls, packet-decoder logic, codec calculations and CPU-architecture documentation across every operating system and programming language in modern use.
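The division is simple enough to inline anywhere, but here is a minimal sketch in Python; the function name and the exact-fraction return type are illustrative choices, not any particular library's API.

```python
from fractions import Fraction

def bits_to_bytes(bits: int) -> Fraction:
    """Convert a bit count to bytes; exact, because 8 bits = 1 byte."""
    return Fraction(bits, 8)

print(bits_to_bytes(256))         # 32
print(bits_to_bytes(1_000_000))   # 125000
print(float(bits_to_bytes(20)))   # 2.5 -- bit counts that are not multiples of 8 give fractional bytes
```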
Worked examples
Example 1 — 8 bit
Eight bits converts to exactly 1 byte by universal modern computing convention. That is the canonical "1 byte" reference, and the eightfold ratio between bits and bytes has been universal since the IBM System/360 architecture of the 1960s established the 8-bit byte as the standard addressable unit of memory. The same 8-bit byte underlies every file-system block, every memory address, every network protocol frame and every storage-capacity figure on modern computing platforms.
Example 2 — 256 bit
256 bits, the key length of an AES-256 encryption key or an Ed25519 private key and the digest length of SHA-256, converts to 256 ÷ 8 = 32 bytes. That is the number of bytes the key occupies in memory, the number of bytes the SHA-256 digest API returns, and the number of bytes the cryptographic library reads from the secure random-number generator on key generation. The 32-byte figure is one of the most common cryptographic key and digest byte counts on modern systems.
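As a quick sanity check of the 32-byte figure, Python's standard hashlib reports digest sizes in bytes; the sample message below is arbitrary.

```python
import hashlib

digest = hashlib.sha256(b"example message").digest()
print(len(digest))                    # 32 bytes
print(len(digest) * 8)                # 256 bits
print(hashlib.sha256().digest_size)   # 32 -- hashlib reports digest sizes in bytes
```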
Example 3 — 1000000 bit
One million bits — a typical 1 Mbps bandwidth-second of network traffic — converts to 1,000,000 ÷ 8 = 125,000 bytes or 125 KB. That is the figure that translates a 1 Mbps streaming bandwidth into a per-second byte-throughput display: a 1 Mbps audio stream delivers 125 KB of underlying audio data per second of playback, and a 1-second 1 Mbps clip on disk is 125,000 bytes of stored data.
bit to B conversion table
| bit | B |
|---|---|
| 1 bit | 0.125 B |
| 2 bit | 0.25 B |
| 3 bit | 0.375 B |
| 4 bit | 0.5 B |
| 5 bit | 0.625 B |
| 6 bit | 0.75 B |
| 7 bit | 0.875 B |
| 8 bit | 1 B |
| 9 bit | 1.125 B |
| 10 bit | 1.25 B |
| 15 bit | 1.875 B |
| 20 bit | 2.5 B |
| 25 bit | 3.125 B |
| 30 bit | 3.75 B |
| 40 bit | 5 B |
| 50 bit | 6.25 B |
| 75 bit | 9.375 B |
| 100 bit | 12.5 B |
| 150 bit | 18.75 B |
| 200 bit | 25 B |
| 250 bit | 31.25 B |
| 500 bit | 62.5 B |
| 750 bit | 93.75 B |
| 1000 bit | 125 B |
| 2500 bit | 312.5 B |
| 5000 bit | 625 B |
Common bit to B conversions
- 8 bit = 1 B
- 16 bit = 2 B
- 32 bit = 4 B
- 64 bit = 8 B
- 128 bit = 16 B
- 256 bit = 32 B
- 512 bit = 64 B
- 1024 bit = 128 B
- 2048 bit = 256 B
- 4096 bit = 512 B
What is a Bit?
One bit is the information content of a single binary digit — equivalently, the Shannon entropy of an outcome with probability ½, such as a fair coin flip whose result is then revealed. Formally, for a discrete random variable X with probability mass function p(x), the Shannon entropy is H(X) = −Σ p(x) log₂ p(x) bits. The choice of log base 2 fixes the unit as the bit; log base e gives the nat (natural unit, ≈ 1.443 bits); log base 10 gives the hartley or ban (≈ 3.322 bits). The bit is the foundational unit in coding theory, channel-capacity calculations (the Shannon–Hartley theorem expresses maximum reliable channel capacity in bits per second as a function of bandwidth and signal-to-noise ratio), in cryptography (key lengths and hash output sizes are denominated in bits), and in computer-architecture word widths (32-bit and 64-bit address spaces, 8-bit and 16-bit microcontroller cores). The bit is conceptually distinct from the binary digit considered as a written symbol — the digit "0" or "1" on a page carries 1 bit of information only when the two outcomes are equally likely; a coin biased 90/10 has Shannon entropy of just 0.469 bits per toss.
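A minimal sketch of the entropy formula in Python, using only the standard library; the helper name and the example distributions are illustrative.

```python
from math import log2

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution given as a list of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit -- a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits -- the biased coin mentioned above
```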
The bit is the fundamental unit of information, and unlike the byte it has a specific intellectual paternity rather than a hardware-engineering one. The word "bit", a contraction of "binary digit", was coined by the American statistician John W. Tukey in a Bell Labs internal memorandum dated 9 January 1947, in which Tukey rejected the alternative coinages "binit" and "bigit" as cumbersome. Tukey, then splitting his time between Princeton and Bell Labs, would go on to be credited with the first published use of "software" in its computing sense in 1958 and to co-develop the Cooley–Tukey FFT in 1965; the bit was an early entry in a remarkable lifetime of terminology. The unit's formal mathematical definition came eighteen months later, when Claude E. Shannon published "A Mathematical Theory of Communication" in the Bell System Technical Journal across the July and October 1948 issues, the founding document of information theory. Shannon defined the information content of an outcome with probability p as −log₂ p, measured in bits, and the entropy of a discrete random variable as the expectation H = −Σ p log₂ p, also in bits. Shannon's 1948 paper rested on two earlier Bell Labs predecessors: Harry Nyquist's 1924 paper "Certain Factors Affecting Telegraph Speed" introduced the relationship between channel bandwidth and signalling rate, and Ralph Hartley's 1928 paper "Transmission of Information" introduced a logarithmic measure of information content using log base 10 (the unit later renamed the hartley or ban). Shannon chose log base 2 because it aligned naturally with binary signalling and with the relay-and-switching circuits he had analysed in his 1937 MIT master's thesis "A Symbolic Analysis of Relay and Switching Circuits", written while he worked under Vannevar Bush on the differential analyser; the thesis mapped Boolean algebra onto electrical-relay networks and is widely regarded as the most important master's thesis of the twentieth century. The lowercase "b" is the conventional symbol for the bit (IEEE 1541), distinguishing it from the byte's uppercase "B"; the IEC standards themselves use the spelled-out symbol "bit" to avoid ambiguity.
Networking is the bit's defining domain in everyday computing: every transmission rate published by an ISP, telecommunications carrier, the Wi-Fi Alliance or a fibre-backbone operator is denominated in bits per second by long industry convention. The 8:1 ratio between bits-for-networking and bytes-for-storage is the single most consequential unit-system boundary in consumer technology: a "1 Gbps" gigabit Ethernet link delivers 125 MB/s of file-transfer throughput; a "200 Mbps" residential broadband connection delivers 25 MB/s; a "5G NR" sub-6 GHz radio link sustaining 1 Gbps delivers 125 MB/s; a "Wi-Fi 6E AX5400" router advertises 5,400 Mbps of aggregate radio capacity, and the per-device usable throughput still passes through the same ÷8 conversion when quoted in MB/s. The convention is universal across the IEEE 802.3 (Ethernet), 802.11 (Wi-Fi) and 3GPP (cellular) standards documents, in DOCSIS cable specifications, and in undersea-cable capacity quotes (the MAREA Atlantic cable launched in 2018 at 160 Tbps, equivalent to 20 TB/s of byte throughput).

Cryptography denominates key strength, hash output and signature-scheme parameters entirely in bits. AES (FIPS 197) is specified at 128-, 192- and 256-bit key lengths; SHA-2 produces 224-, 256-, 384- or 512-bit digests; RSA key lengths run 2048, 3072 and 4096 bits; the elliptic curve Curve25519 operates with 256-bit private keys; and the post-quantum algorithms selected at the close of NIST's PQC Round 3 in 2022 (CRYSTALS-Kyber and CRYSTALS-Dilithium, since standardised as ML-KEM and ML-DSA) specify their parameter sets in bits of classical and quantum security. The 256-bit figure has become the familiar baseline across AES-256, SHA-256 and the 256-bit elliptic curves, although the security level those 256 bits buy differs by algorithm class.

Display and imaging use bits to describe per-channel colour depth and high-dynamic-range gradation. Standard sRGB displays are 8-bit per channel (24-bit total RGB, 16.7 million colours); HDR10 and HDR10+ specify 10-bit per channel (1.07 billion colours, the Rec. 2020 colour space); Dolby Vision specifies 12-bit per channel (68.7 billion colours); cinema-grade colour grading and DCP delivery work in 16-bit per channel. The visible banding artefacts in 8-bit gradients on HDR-capable televisions are the consumer-visible reason 10-bit panels are now standard on premium displays.

Audio bit depth determines dynamic range. CD audio is 16-bit / 44.1 kHz (roughly 96–98 dB of theoretical dynamic range); high-resolution streaming on Apple Music, Tidal and Qobuz delivers 24-bit / 96–192 kHz lossless (a theoretical 144 dB); professional studio recording and mastering work in 32-bit float for effectively unlimited internal headroom. Telephone-grade voice is 8-bit μ-law or A-law companded at 8 kHz, the 64 kbps legacy rate of the public switched telephone network.

Computer architecture uses bit widths to describe processor classes: 8-bit microcontrollers (AVR, PIC, 8051) dominate cost-sensitive embedded designs; 32-bit ARM Cortex-M cores dominate IoT and industrial control; 64-bit ARM, x86-64 and RISC-V cores dominate phones, laptops, servers and supercomputers. The CPU bit width describes both the natural integer register size and the addressable memory space (a 32-bit address space tops out at 4 GiB, the practical RAM ceiling that drove the 32-to-64-bit consumer transition between 2003 and 2010).
What is a Byte?
One byte equals exactly 8 bits, encoding 2⁸ = 256 distinct values (0–255 unsigned, or −128 to +127 in two's-complement signed representation). The byte is the smallest individually addressable unit of memory in essentially all modern computer architectures, and is the unit in which file sizes, memory capacities and storage-media capacities are reported. The unambiguous standards name for the 8-bit unit is the "octet" (IEC 60027-2:2005, IEC 80000-13:2008), used in international standards documents and in networking RFCs to disambiguate from historical machines where "byte" meant other widths. ASCII (ANSI X3.4-1968) encodes a single character in 7 bits, conventionally stored within an 8-bit byte; UTF-8 (RFC 3629, 2003) encodes the full Unicode code-point repertoire in 1 to 4 bytes per code point, with the ASCII subset occupying 1 byte for backward compatibility. Under IEC 80000-13 the symbol B denotes the byte, and it is distinguished from the bit's conventional lowercase b by capitalisation alone, a typographic distinction that carries the entire weight of the difference between internet speeds advertised in Mbps (megabits per second) and storage capacities quoted in MB (megabytes).
The byte was coined in June 1956 by Werner Buchholz, an IBM engineer working on the IBM Stretch (7030), the company's first transistorised supercomputer project. Buchholz needed a name for an addressable group of bits smaller than a machine word and proposed "byte", deliberately respelled from "bite" so a single typographical slip could not collapse the new unit into "bit". The respelling was not cosmetic: in early documentation handwritten and typed in mixed lower and upper case, "bit" and "bite" were dangerously similar, and the "y" gave the word a unique fingerprint at a glance. Through the late 1950s the byte was variable-width: Stretch's byte ranged from 1 to 8 bits depending on the operation, and contemporary machines such as the IBM 7090 used 6-bit characters. The 8-bit byte was standardised by the IBM System/360 architecture announced on 7 April 1964, which addressed memory in 8-bit bytes and grouped four bytes into a 32-bit word. System/360's commercial success made the 8-bit byte the de facto industry standard, and ANSI X3.4-1968 (ASCII) ratified the 7-bit character code that in practice occupies an 8-bit byte. The IEC adopted "octet" as the unambiguous name for the 8-bit unit in IEC 60027-2 to remove residual ambiguity in international standards work, although in everyday English-language usage "byte" universally means 8 bits. The byte's defining structural problem appeared once storage capacities scaled past the kilobyte: SI prefixes (kilo, mega, giga) are powers of 1000 by long-standing physics convention, but powers of 1024 are mathematically natural for byte-addressed memory and were used by operating systems from CP/M onwards. The unambiguous binary prefixes, kibibyte (KiB = 1024 B), mebibyte (MiB = 1024² B), gibibyte (GiB) and tebibyte (TiB), were introduced in a 1999 amendment to IEC 60027-2 and carried into IEC 80000-13 (published in 2008), but adoption outside Linux distributions and a handful of standards-conscious vendors has been slow.
The byte is the foundational unit of digital information across every computing system in use. Character encoding is its most universal application: every plain-text file, source-code file, email body, JSON document and HTTP header is denominated in bytes, with ASCII assigning 1 byte per English-language character and UTF-8 using 1–4 bytes depending on the Unicode code point (Latin letters and digits remain 1 byte; Cyrillic and Greek take 2 bytes; CJK ideographs 3 bytes; emoji and other supplementary-plane characters 4 bytes). File-system sizes, memory allocations, network buffer sizes and CPU cache lines are all reported in bytes or byte multiples. The byte is also where the binary/decimal prefix war originates and propagates. Storage manufacturers (Seagate, Western Digital, SanDisk, Samsung) follow the SI convention literally, advertising 1 TB = 10¹² = 1,000,000,000,000 bytes, the convention that gives the largest possible marketing number. Operating systems historically use binary multiples: Microsoft Windows reports a "1 TB" drive as 931 GB because Windows interprets "GB" as 2³⁰ = 1,073,741,824 bytes, exposing a roughly 7% gap that has launched a generation of consumer-support escalations. Apple macOS realigned with the SI convention in OS X 10.6 Snow Leopard (2009), reporting drive capacities in decimal gigabytes that match the manufacturer's marketing. Linux tools vary: `ls -h` and `df -h` report binary multiples (powers of 1024) with SI-style "K/M/G" suffixes, while `ls --si` and `df --si` switch to decimal powers of 1000, and the GNU coreutils documentation spells out the IEC binary-prefix distinction explicitly. Network engineering, by long convention, uses decimal multiples for bandwidth (Mbps in millions of bits per second), a convention covered in detail under the bit, mbps and gbps entries.
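Two of the byte-level facts above are easy to verify in a few lines of Python; the sample characters and the 1 TB figure are purely illustrative.

```python
# UTF-8 byte lengths for one character from each range mentioned above
for label, ch in [("Latin", "A"), ("Cyrillic", "Ж"), ("CJK", "漢"), ("emoji", "🙂")]:
    print(label, len(ch.encode("utf-8")), "byte(s)")   # 1, 2, 3, 4

# The decimal-vs-binary gap: a drive sold as 1 TB (10^12 bytes) expressed in GiB
print(10**12 / 2**30)   # ≈ 931.3 -- the "931 GB" figure Windows reports
```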
Real-world uses for Bits to Bytes
Cryptographic key-size translation across spec and storage layers
Cryptographic specifications denominate key sizes in bits (AES-128, AES-192, AES-256, RSA-2048, RSA-4096, ECDSA P-256, Ed25519), but the underlying key material is stored, transmitted and processed in bytes in every implementation. A 256-bit AES key occupies exactly 32 bytes of memory and base64-encodes to 44 characters; a 2048-bit RSA modulus occupies 256 bytes raw and base64-encodes to 344 characters (a full PEM-encoded key is longer because of its ASN.1 wrapping); a 32-byte Ed25519 private key derives from the same 256 bits of underlying entropy. The bits-to-bytes conversion runs at every key generation, every spec-to-implementation translation, and every cryptographic-library API call (OpenSSL, libsodium, Web Crypto API).
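A minimal sketch of the bit-spec to byte-implementation translation in Python; `os.urandom` stands in here for a cryptographic library's key generator, and the byte and character counts are the only point being illustrated.

```python
import base64
import os

key = os.urandom(256 // 8)              # a 256-bit key is 32 random bytes
print(len(key))                         # 32
print(len(base64.b64encode(key)))       # 44 base64 characters

modulus = os.urandom(2048 // 8)         # a 2048-bit RSA modulus is 256 bytes raw
print(len(base64.b64encode(modulus)))   # 344 characters before any PEM/ASN.1 wrapping
```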
IPv6 address handling between spec and packet-byte layout
IPv6 addresses are specified as 128-bit fields under RFC 4291 but represented in packet headers as 16-byte fields in big-endian network byte order. Network engineers translating IPv6 spec figures (the 128-bit address space, the 64-bit interface identifier, the variable-length routing prefix and subnet identifier) into packet-decoder byte counts run the bits-to-bytes conversion at every protocol-layer transition. The same applies to IPv4 addresses (32-bit / 4-byte), MAC addresses (48-bit / 6-byte), and Ethernet frame headers, whose per-field bit widths translate to byte-aligned storage.
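A short sketch using Python's standard `ipaddress` module shows the 128-bit to 16-byte packing directly; the sample addresses are drawn from the documentation and test ranges.

```python
import ipaddress

v6 = ipaddress.IPv6Address("2001:db8:85a3::8a2e:370:7334")
print(len(v6.packed))   # 16 bytes -- 128 bits in big-endian network byte order

v4 = ipaddress.IPv4Address("192.0.2.1")
print(len(v4.packed))   # 4 bytes -- 32 bits
```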
Audio, video and streaming-media bitrate conversion to byte-throughput
Audio and video bitrate specifications are denominated in kilobits-per-second (kbps) for codec specs and streaming-quality tiers, but the underlying file storage and CDN delivery is measured in bytes-per-second. A 320 kbps MP3 stream delivers 40 KB/s of underlying byte throughput; a 5 Mbps HD video stream delivers 625 KB/s; a 25 Mbps 4K stream delivers 3,125 KB/s. The bits-to-bytes conversion runs at every codec-spec to file-size calculation: a 4-minute 320 kbps song is 320 × 240 ÷ 8 = 9,600 KB ≈ 9.6 MB on disk, with the per-second bit-rate translating to per-second byte-throughput at the eightfold ratio.
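The same arithmetic as a minimal Python sketch; the helper name is illustrative rather than any codec library's API.

```python
def stream_bytes(bitrate_bps: float, seconds: float) -> float:
    """Bytes delivered by a stream of the given bit rate over the given duration."""
    return bitrate_bps * seconds / 8

print(stream_bytes(320_000, 240) / 1000)         # 9600.0 KB -- a 4-minute 320 kbps MP3
print(stream_bytes(5_000_000, 1) / 1000)         # 625.0 KB per second of 5 Mbps HD video
print(stream_bytes(25_000_000, 1) / 1_000_000)   # 3.125 MB per second of a 25 Mbps 4K stream
```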
CPU register and cache-line specifications in bits versus bytes
CPU architecture specifications denominate register and SIMD-lane widths in bits (the 64-bit ARM and x86-64 general-purpose registers, the 512-bit AVX-512 SIMD register, the 128-bit ARM NEON register), and the corresponding byte-level layouts are 8 bytes, 64 bytes and 16 bytes respectively in main memory. CPU performance engineers, compiler authors and assembly programmers translate between the bit-width spec figures and the byte-aligned memory layouts at every cache-line and SIMD-load operation. The 64-byte L1 cache line on modern CPUs is 512 bits wide, and the bits-to-bytes conversion runs at every memory-access optimisation pass.
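A small illustrative check with Python's standard `struct` module; the SIMD and cache-line figures are plain ÷8 arithmetic rather than a hardware query.

```python
import struct

print(struct.calcsize("q"))   # 8  -- a 64-bit integer occupies 8 bytes in memory
print(128 // 8)               # 16 -- one 128-bit NEON register in bytes
print(512 // 8)               # 64 -- one 512-bit AVX-512 register, the same size as a cache line
```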
When to use Bytes instead of Bits
Use bytes whenever the destination is a memory layout, file-storage size, packet-encoding byte count, cryptographic-library API parameter, codec-output file calculation or CPU-architecture cache-line layout. Bytes are the natural unit of storage and memory across every modern computing platform, and the byte figure is what file systems, RAM allocators, packet decoders and cryptographic libraries all expect at their API boundaries. Stay in bits when the source is a cryptographic key-size specification, a network-bandwidth figure, an audio or video codec bitrate, a CPU register-width spec, or any protocol-field bit-width definition. The conversion is at the spec-vs-implementation boundary, with the bit figure on the spec side and the byte figure on the implementation side. The bit-vs-byte distinction is the single most-confused pairing in IT measurement and the conversion runs constantly to disambiguate the two layers.
Common mistakes converting bit to B
- Confusing kilobits and kilobytes when reading bandwidth-vs-storage figures. A 1 Mbps connection delivers 125 KB/s of byte throughput, not 1 MB/s — the eightfold gap is the source of the persistent "why is my 1 Gbps connection only 125 MB/s?" confusion. Bandwidth specs use bits-per-second (Mbps, Gbps); file-transfer rates and storage capacities use bytes-per-second (MB/s, GB/s); the unit difference is critical at every bandwidth-to-throughput calculation.
- Assuming all bytes are 8 bits across all historical platforms. The 8-bit byte became universal with the IBM System/360 in 1964, but other platforms of the era (the DEC PDP-10 with 36-bit words and variable byte sizes, the IBM 1401 with 6-bit characters, various machines built around 7-bit ASCII) used different character and word widths. Modern computing is universally 8-bit-byte, but legacy data-format documentation and protocol-archaeology work occasionally surface non-8-bit byte references, and the ÷8 conversion factor only applies under the modern 8-bit-byte standard.
Frequently asked questions
How many bits in a byte?
One byte equals exactly 8 bits by universal modern computing convention. The 8-bit byte became the universal standard with the IBM System/360 architecture in 1964 and has been preserved across every subsequent computing platform — x86, ARM, RISC-V, MIPS, PowerPC and others all use the 8-bit byte as the fundamental unit of memory addressing. Earlier computing platforms used different word-widths but the modern industry has been universally 8-bit-byte for over six decades.
How many bytes is a 256-bit AES key?
A 256-bit AES key equals 256 ÷ 8 = 32 bytes of underlying key material. That is the number of bytes the key occupies in memory, the number of bytes the cryptographic library reads from the secure random-number generator on key generation, and the number of bytes the key serialises to on disk in raw binary format. Base64-encoded, the same 32-byte key is 44 characters; hex-encoded it is 64 characters.
How many bytes is a 1 Mbps bandwidth-second?
One million bits converts to 1,000,000 ÷ 8 = 125,000 bytes or 125 KB. That is the figure that translates a 1 Mbps streaming bandwidth into a per-second byte-throughput display: a 1 Mbps audio stream delivers 125 KB of underlying audio data per second of playback. A 1 Gbps connection delivers 125 MB/s at theoretical maximum, and the eightfold ratio between Mbps bandwidth and MB/s throughput is universal.
Why are there 8 bits in a byte rather than 4 or 16?
The 8-bit byte became the universal standard with the IBM System/360 architecture in 1964, when IBM chose 8 bits per byte to provide convenient encoding of the EBCDIC character set (allowing both upper and lower case Latin letters plus digits, punctuation and control characters in a single byte) and to align cleanly with the 32-bit and 64-bit word widths IBM was targeting for scientific computing. The 8-bit byte then became the de-facto standard across every subsequent computing platform. Earlier machines used 6-bit, 7-bit and 9-bit bytes for specific character-encoding needs, but the System/360's commercial success made the 8-bit byte universal by the early 1970s.
Quick way to convert bits to bytes in my head?
Divide the bit figure by 8. The mental shortcut for common bit values: 8 bits is 1 byte, 64 bits is 8 bytes, 128 bits is 16 bytes, 256 bits is 32 bytes, 512 bits is 64 bytes, 1024 bits is 128 bytes, 2048 bits is 256 bytes. Most cryptographic and protocol-spec bit values are powers of 2, which makes the divide-by-8 mental math trivial.
What is the difference between Mbps and MB/s?
Mbps (megabits per second, lowercase b) measures network bandwidth at the bit level, while MB/s (megabytes per second, uppercase B) measures file-transfer throughput at the byte level. The two differ by a factor of 8 because there are 8 bits per byte. A 100 Mbps connection delivers 12.5 MB/s at theoretical maximum, and a 1 Gbps connection delivers 125 MB/s. The bit-vs-byte distinction is the most common cause of consumer-facing bandwidth confusion.
How many bytes does an IPv6 address occupy?
An IPv6 address is specified as 128 bits under RFC 4291 and occupies exactly 128 ÷ 8 = 16 bytes in packet headers and memory storage. The address is typically displayed as eight 16-bit groups separated by colons (each group is 2 bytes), giving the familiar "2001:0db8:85a3::8a2e:0370:7334" notation that compresses to fewer characters when leading zeros and consecutive zero-groups are omitted. IPv4 addresses by contrast are 32 bits or 4 bytes and display as four dotted decimal octets.