
Bytes to Bits (B to bit)


Bytes-to-bits conversions translate the byte-level figures that file sizes, storage capacities and memory layouts use natively into the bit-level figures that network bandwidth, encryption-key specs and audio bitrate use. A 32-byte Ed25519 key is 256 bits of underlying entropy; a 16-byte IPv6 address is 128 bits on the wire; a 64-byte L1 cache line is 512 bits of memory width. The math is a clean multiplication by 8, fixed by the 8-bit byte that has been the universal standard since the 1964 IBM System/360 architecture. The conversion runs at every cross-layer translation from storage-layer byte counts up to spec-layer bit counts: a security analyst describing a 32-byte key as "256-bit AES" for an audit report, a network engineer translating a 16-byte IPv6 address into the "128-bit address space" terminology of an architecture document, a CPU performance engineer translating a 64-byte cache line into the "512-bit memory width" of a vendor benchmark sheet.

How to convert Bytes to Bits

Formula

bits = bytes × 8

To convert bytes to bits, multiply the byte figure by 8 — there are exactly 8 bits per byte by universal modern computing convention, established by the IBM System/360 architecture in 1964 and standardised across every subsequent computing platform. The factor is exact and unchanging. For mental math, "bytes × 8" lands the bit figure cleanly: 32 bytes is 256 bits, 64 bytes is 512 bits, 128 bytes is 1024 bits, 256 bytes is 2048 bits. Most cryptographic, protocol-spec and CPU-architecture bit values are powers of 2, which makes the multiply-by-8 mental math trivial. The conversion runs across every cross-layer translation from byte-denominated storage and memory layouts up to bit-denominated specifications, with the byte figure on the implementation side and the bit figure on the spec side. The 8-bits-per-byte ratio is one of the most-used conversion factors in IT, occurring in security audits, network-protocol documentation, CPU vendor specs and codec-bitrate calculations across every operating system and programming language.
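
The formula is a one-liner in any language; here it is as a minimal Python sketch (the function name is illustrative, not from any particular library):

```python
def bytes_to_bits(n_bytes: int) -> int:
    """Exact conversion: 8 bits per byte, no rounding involved."""
    return n_bytes * 8

# Spot-check the mental-math anchors from the text:
for n in (32, 64, 128, 256):
    print(f"{n} B = {bytes_to_bits(n)} bit")
# 32 B = 256 bit
# 64 B = 512 bit
# 128 B = 1024 bit
# 256 B = 2048 bit
```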

Worked examples

Example 1: 1 B

One byte converts to exactly 8 bits by universal modern computing convention. That is the canonical "1 byte" reference, and the eightfold ratio underlies every cryptographic-key, network-protocol-field, audio-bitrate, video-bitrate, CPU-cache-line and memory-width specification on modern computing platforms. The 8-bit byte has been universal since the 1964 IBM System/360 architecture and is preserved across every modern CPU, every operating system and every programming language.

Example 2: 32 B

32 bytes — the size of an Ed25519 private key, an AES-256 encryption key, or a SHA-256 hash output — converts to 32 × 8 = 256 bits. That is the figure that translates the byte-level storage of the cryptographic key into the bit-level "256-bit" terminology that every security audit, cryptographic-strength claim and standards-body specification uses. A 256-bit AES key is the same physical 32-byte object described from the spec layer rather than the storage layer.

Example 3: 1,000,000 B

One million bytes — a typical 1 MB file size for a small image, document or data export — converts to 1,000,000 × 8 = 8,000,000 bits, or 8 Mb. That is the figure that translates a 1 MB file size into transfer time over bit-denominated network bandwidth: at 1 Mbps the 8 Mb file takes 8 seconds, at 10 Mbps it takes 0.8 seconds, at 100 Mbps it takes 0.08 seconds. The bytes-to-bits conversion is the first step of every download-time calculation.
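
The same arithmetic as a short Python sketch (the rates chosen mirror the example above):

```python
FILE_BYTES = 1_000_000  # the 1 MB file from the example

file_bits = FILE_BYTES * 8  # 8,000,000 bits

# Transfer time = bits / (bits per second), ignoring protocol overhead.
for rate_mbps in (1, 10, 100):
    seconds = file_bits / (rate_mbps * 1_000_000)
    print(f"{rate_mbps} Mbps -> {seconds} s")
# 1 Mbps -> 8.0 s
# 10 Mbps -> 0.8 s
# 100 Mbps -> 0.08 s
```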

B to bit conversion table

1 B = 8 bit
2 B = 16 bit
3 B = 24 bit
4 B = 32 bit
5 B = 40 bit
6 B = 48 bit
7 B = 56 bit
8 B = 64 bit
9 B = 72 bit
10 B = 80 bit
15 B = 120 bit
20 B = 160 bit
25 B = 200 bit
30 B = 240 bit
40 B = 320 bit
50 B = 400 bit
75 B = 600 bit
100 B = 800 bit
150 B = 1200 bit
200 B = 1600 bit
250 B = 2000 bit
500 B = 4000 bit
750 B = 6000 bit
1000 B = 8000 bit
2500 B = 20000 bit
5000 B = 40000 bit

Common B to bit conversions

  • 1 B = 8 bit
  • 8 B = 64 bit
  • 16 B = 128 bit
  • 32 B = 256 bit
  • 64 B = 512 bit
  • 128 B = 1024 bit
  • 256 B = 2048 bit
  • 512 B = 4096 bit
  • 1024 B = 8192 bit
  • 4096 B = 32768 bit

What is a Byte?

One byte equals exactly 8 bits, encoding 2⁸ = 256 distinct values (0–255 unsigned, or −128 to +127 in two's-complement signed representation). The byte is the smallest individually addressable unit of memory in essentially all modern computer architectures, and is the unit in which file sizes, memory capacities and storage-media capacities are reported. The IEC formal name for the 8-bit byte is the "octet" (IEC 60027-2:2005, IEC 80000-13:2008), used in international standards documents and in networking RFCs to disambiguate from historical machines where "byte" meant other widths. ASCII (ANSI X3.4-1968) encodes a single character in 7 bits within an 8-bit byte; UTF-8 (RFC 3629, 2003) encodes the full Unicode code-point repertoire in 1 to 4 bytes per code point, with the ASCII subset occupying 1 byte for backward compatibility. By IEC 80000-13, the symbol B denotes the byte and is distinguished from the bit (symbol b) by capitalisation alone — a typographic distinction that carries the entire difference between internet-speed advertising in Mbps (megabits per second) and storage capacities in MB (megabytes).
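
The variable UTF-8 widths are easy to verify in Python (a quick sketch; the sample characters are arbitrary):

```python
# UTF-8 byte length varies by code point; the ASCII subset stays at 1 byte.
for ch in ("A", "ф", "中", "😀"):  # Latin, Cyrillic, CJK, emoji
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), "byte(s) =", len(encoded) * 8, "bits")
# A 1 byte(s) = 8 bits
# ф 2 byte(s) = 16 bits
# 中 3 byte(s) = 24 bits
# 😀 4 byte(s) = 32 bits
```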

The byte was coined in June 1956 by Werner Buchholz, an IBM engineer working on the IBM Stretch (7030), the company's first transistorised supercomputer project. Buchholz needed a name for an addressable group of bits smaller than a machine word, and proposed "byte" — deliberately respelled from "bite" so a single typographical slip could not collapse the new unit into "bit". The respelling was not cosmetic: in early documentation handwritten and typed in mixed lower- and upper-case, "bit" and "bite" were dangerously similar, and the "y" gave the word a unique fingerprint at a glance. Through the late 1950s the byte was variable-width — Stretch's byte ranged from 1 to 8 bits depending on the operation, and contemporary machines such as the IBM 7090 used 6-bit characters. The 8-bit byte was standardised by the IBM System/360 architecture announced on 7 April 1964, which addressed memory in 8-bit bytes and grouped four bytes into a 32-bit word. System/360's commercial success made the 8-bit byte the de facto industry standard, and ANSI X3.4-1968 (ASCII) ratified the 7-bit character within an 8-bit byte for North American computing. International standards work adopted "octet" as the unambiguous name for the 8-bit byte, although in everyday English-language usage "byte" universally means 8 bits. The byte's defining structural problem appeared once storage capacities scaled past the kilobyte: SI prefixes (kilo, mega, giga) are powers of 1000 by long-standing physics convention, but powers of 1024 are mathematically natural for byte-addressed memory and were used by operating systems from CP/M onwards. The unambiguous binary prefixes — kibibyte (KiB = 1024 B), mebibyte (MiB = 1024² B), gibibyte (GiB), tebibyte (TiB) — were introduced in IEC 60027-2 Amendment 2 (1999) and carried forward into IEC 80000-13 (March 2008), but adoption outside Linux distributions and a handful of standards-conscious vendors has been slow.

The byte is the foundational unit of digital information across every computing system in use. Character encoding is its most universal application: every plain-text file, source-code file, email body, JSON document and HTTP header is denominated in bytes, with ASCII assigning 1 byte per English-language character and UTF-8 using 1–4 bytes depending on the Unicode code point (Latin letters and digits remain 1 byte; Cyrillic and Greek 2 bytes; CJK ideographs 3 bytes; emoji and supplementary planes 4 bytes). File-system sizes, memory allocations, network buffer sizes and CPU cache lines are all reported in bytes or byte multiples. The byte is also where the binary/decimal prefix war originates and propagates. Storage manufacturers (Seagate, Western Digital, SanDisk, Samsung) follow the SI convention literally, advertising 1 TB = 10¹² = 1,000,000,000,000 bytes — the convention that gives the largest possible marketing number. Operating systems historically use binary multiples: Microsoft Windows reports a "1 TB" drive as 931 GB because Windows interprets "GB" as 2³⁰ = 1,073,741,824 bytes, exposing a roughly 7% gap that has launched a generation of consumer-support escalations. Apple macOS realigned with the SI convention in OS X 10.6 Snow Leopard (2009), reporting drive capacities in decimal gigabytes that match the manufacturer's marketing. Linux distributions vary by tool: `ls -lh` and `df -h` use binary kibibyte/mebibyte/gibibyte semantics with SI-style "K/M/G" symbols, while `ls --si` and the GNU `coreutils` documentation expose the IEC distinction explicitly. Network engineering, by long convention, uses decimal multiples for bandwidth (Mbps in millions of bits per second) — a convention covered in detail under the bit, mbps and gbps entries.

What is a Bit?

One bit is the information content of a single binary digit — equivalently, the Shannon entropy of an outcome with probability ½, such as a fair coin flip whose result is then revealed. Formally, for a discrete random variable X with probability mass function p(x), the Shannon entropy is H(X) = −Σ p(x) log₂ p(x) bits. The choice of log base 2 fixes the unit as the bit; log base e gives the nat (natural unit, ≈ 1.443 bits); log base 10 gives the hartley or ban (≈ 3.322 bits). The bit is the foundational unit in coding theory, channel-capacity calculations (the Shannon–Hartley theorem expresses maximum reliable channel capacity in bits per second as a function of bandwidth and signal-to-noise ratio), in cryptography (key lengths and hash output sizes are denominated in bits), and in computer-architecture word widths (32-bit and 64-bit address spaces, 8-bit and 16-bit microcontroller cores). The bit is conceptually distinct from the binary digit considered as a written symbol — the digit "0" or "1" on a page carries 1 bit of information only when the two outcomes are equally likely; a coin biased 90/10 has Shannon entropy of just 0.469 bits per toss.
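
A minimal Python sketch of the entropy formula (the function name is illustrative):

```python
from math import log2

def shannon_entropy(probs):
    """H(X) = -sum p(x) * log2 p(x), in bits; zero-probability terms contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit   (fair coin)
print(shannon_entropy([0.9, 0.1]))  # ~0.469 bits (the biased coin from the text)
```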

The bit is the fundamental unit of information, and unlike the byte it has a specific intellectual paternity rather than a hardware-engineering one. The word "bit" — a contraction of "binary digit" — was coined by the American statistician John W. Tukey in a Bell Labs internal memorandum dated 9 January 1947, in which Tukey rejected the alternative coinages "binit" and "bigit" as cumbersome. Tukey was at the time consulting at Bell Labs and would go on to coin "software" in 1958 and to co-develop the Cooley–Tukey FFT in 1965; the bit was an early entry in a remarkable lifetime of coinages. The unit's formal mathematical definition came eighteen months later, when Claude E. Shannon published "A Mathematical Theory of Communication" in the Bell System Technical Journal across the July and October 1948 issues — the founding document of information theory. Shannon defined the information content of an outcome with probability p as −log₂ p, measured in bits, and the entropy of a discrete random variable as the expectation H = −Σ p log₂ p, also in bits. Shannon's 1948 paper rested on the work of two Bell Labs predecessors. Harry Nyquist's 1924 paper "Certain Factors Affecting Telegraph Speed" introduced the relationship between channel bandwidth and signalling rate; Ralph Hartley's 1928 paper "Transmission of Information" introduced a logarithmic measure of information content using log base 10 (the unit later renamed the hartley or ban). Shannon chose log base 2 because it aligned naturally with binary signalling and with the relay-and-switching circuits he had analysed in his 1937 MIT master's thesis "A Symbolic Analysis of Relay and Switching Circuits" — the thesis, supervised by Vannevar Bush, that mapped Boolean algebra onto electrical-relay networks and is widely regarded as the most important master's thesis of the twentieth century. The lowercase "b" is the conventional symbol for the bit (recommended in IEEE 1541), distinguishing it from the byte's uppercase "B"; the IEC standards themselves use the full word "bit" as the symbol.

Networking is the bit's defining domain in everyday computing — every transmission rate published by an ISP, telecommunications carrier, Wi-Fi alliance or fibre-backbone operator is denominated in bits per second by long industry convention. The 8:1 ratio between bits-for-networking and bytes-for-storage is the single most consequential unit-system boundary in consumer technology: a "1 Gbps" gigabit Ethernet link delivers 125 MB/s of file-transfer throughput; a "200 Mbps" residential broadband connection delivers 25 MB/s; a 5G NR sub-6 GHz radio link sustaining 1 Gbps delivers 125 MB/s; a "Wi-Fi 6E AX5400" router advertises 5,400 Mbps of aggregate radio capacity, and per-device usable throughput still goes through the same ÷8 conversion to land at MB/s figures. The convention is universal across the IEEE 802.3 (Ethernet), 802.11 (Wi-Fi) and 3GPP (cellular) standards documents, in DOCSIS cable specifications, and in undersea-cable capacity quotes (the MAREA Atlantic cable launched in 2018 at 160 Tbps, equivalent to 20 TB/s of byte throughput).

Cryptography denominates key strength, hash output and signature-scheme parameters entirely in bits. AES (FIPS 197) is specified at 128-, 192- and 256-bit key lengths; SHA-2 produces 224-, 256-, 384- or 512-bit digests; RSA key lengths run 2048, 3072 and 4096 bits; the elliptic curve Curve25519 operates with 256-bit private keys; and the post-quantum schemes NIST selected for standardisation in 2022 (CRYSTALS-Kyber, CRYSTALS-Dilithium) specify parameter sets in bits of classical and quantum security. The 256-bit figure recurs across the familiar "AES-256, SHA-256, P-256" terminology, though the strengths are not equivalent: a 256-bit elliptic-curve key provides roughly 128 bits of classical security, while a 256-bit symmetric key provides the full 256.

Display and imaging use bits to describe per-channel colour depth and high-dynamic-range gradation. Standard sRGB displays are 8-bit per channel (24-bit total RGB, 16.7 million colours); HDR10 and HDR10+ specify 10-bit per channel (1.07 billion colours, the Rec. 2020 colour space); Dolby Vision specifies 12-bit per channel (68.7 billion colours); cinema-grade colour grading and DCP delivery work in 16-bit per channel. The visible banding artefacts in 8-bit gradients on HDR-capable televisions are the consumer-visible reason 10-bit panels are now standard on premium displays.

Audio bit depth determines dynamic range. CD audio is 16-bit / 44.1 kHz (roughly 96–98 dB theoretical SNR); high-resolution streaming on Apple Music, Tidal and Qobuz delivers 24-bit / 96–192 kHz lossless (theoretical 144 dB SNR); professional studio recording and mastering work in 32-bit float for effectively unlimited internal headroom. Telephone-grade voice is 8-bit μ-law or A-law companded at 8 kHz, a 64 kbps stream that is the legacy rate of the public switched telephone network.

Computer architecture uses the same bit vocabulary: 8-bit microcontrollers (AVR, PIC, 8051) dominate cost-sensitive embedded designs; 32-bit ARM Cortex-M cores dominate IoT and industrial control; 64-bit ARM, x86-64 and RISC-V cores dominate phones, laptops, servers and supercomputers. The CPU bit width describes both the natural integer register size and the addressable memory space (a 32-bit address space tops out at 4 GiB, the practical RAM ceiling that drove the 32-to-64-bit consumer transition between 2003 and 2010).

Real-world uses for Bytes to Bits

Security audit reporting from cryptographic byte storage to bit-strength claims

Security audits and penetration-test reports describe cryptographic key strength in bits — AES-128, AES-192, AES-256, RSA-2048, RSA-4096 — even though the underlying key material is stored, transmitted and processed in bytes throughout the implementation. A 16-byte AES key in a software vault becomes "AES-128" in the audit report; a 32-byte AES key becomes "AES-256"; a 256-byte RSA modulus becomes "RSA-2048" in the public-key-strength claim. The bytes-to-bits conversion runs at every audit-finding write-up, with the bit-level figure providing the industry-standard cryptographic-strength terminology.
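
As a sketch of that write-up step in Python — `strength_label` is a hypothetical helper for illustration, not a real audit-tool API:

```python
def strength_label(key: bytes, algorithm: str) -> str:
    """Label stored key material by its bit length, audit-report style."""
    return f"{algorithm}-{len(key) * 8}"

# Placeholder key material of the sizes named above (all-zero bytes):
print(strength_label(bytes(16), "AES"))   # AES-128
print(strength_label(bytes(32), "AES"))   # AES-256
print(strength_label(bytes(256), "RSA"))  # RSA-2048 (256-byte modulus)
```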

File-size to bandwidth-time calculation for streaming and download estimates

Streaming-quality calculators, download-time estimators and CDN-delivery planning tools translate per-asset byte sizes into bit-rate figures for bandwidth-time calculations. A 1 GB video file (1,000,000,000 bytes) converts to 8,000,000,000 bits, which divided by a 25 Mbps streaming-quality tier gives 320 seconds of streaming time. The bytes-to-bits conversion runs at every "how long will this download take?" calculator and every "what bandwidth do I need for this video quality?" capacity-planning tool. The eightfold ratio between byte storage and bit-bandwidth specs is critical at every translation.
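
The same arithmetic run in the other direction — from target duration back to required bandwidth — as a small Python sketch (`required_mbps` is an illustrative name):

```python
GB = 1_000_000_000  # decimal SI gigabyte

def required_mbps(size_bytes: int, seconds: float) -> float:
    """Minimum bandwidth to move size_bytes in the given time, ignoring overhead."""
    return size_bytes * 8 / seconds / 1_000_000

print(required_mbps(1 * GB, 320))  # 25.0 Mbps, matching the example above
```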

CPU performance benchmarks translating byte cache lines to bit memory widths

CPU vendor benchmarks (Intel ARK, AMD Ryzen specs, Apple Silicon documentation, ARM Cortex datasheets) describe memory subsystem performance in mixed bit-width and byte-count terms — 512-bit memory width per L1 cache line, 64-byte cache-line size, 256-bit AVX2 SIMD register, 32-byte SIMD-load operation. CPU performance engineers translate between the byte-level memory layout and the bit-level vendor-spec terminology at every benchmark interpretation and every cache-line optimisation. The 64-byte cache line and the 512-bit memory width are the same physical structure described from two different specification layers.

Network protocol decoders translating byte frames to bit-width field specs

Network protocol decoders (Wireshark, tcpdump, network-engineering documentation) display packet contents byte-by-byte but reference protocol specifications by bit-width — the IPv4 header's 4-bit version field, the 8-bit TTL field, the 16-bit length field, the 32-bit source and destination addresses. Network engineers translating between Wireshark's byte-display and RFC bit-width specs run the bytes-to-bits conversion at every protocol-archaeology pass. The same applies to TCP (16-bit ports, 32-bit sequence numbers), Ethernet (48-bit MAC addresses), and every modern protocol.
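
A minimal Python sketch of that byte-to-bit-field translation, using an illustrative hand-built IPv4 header rather than a live capture:

```python
import struct

# A 20-byte IPv4 header (values illustrative): version 4, IHL 5,
# total length 40, TTL 64, protocol 6 (TCP), 192.0.2.1 -> 192.0.2.2.
header = bytes([
    0x45, 0x00, 0x00, 0x28,  # version/IHL, DSCP/ECN, 16-bit total length
    0x00, 0x00, 0x40, 0x00,  # identification, flags/fragment offset
    0x40, 0x06, 0x00, 0x00,  # 8-bit TTL, protocol, header checksum
    192, 0, 2, 1,            # 32-bit source address
    192, 0, 2, 2,            # 32-bit destination address
])

version = header[0] >> 4      # top 4 bits of the first byte
ihl_words = header[0] & 0x0F  # bottom 4 bits: header length in 32-bit words
ttl = header[8]               # a whole byte, so no bit surgery needed
total_length = struct.unpack_from("!H", header, 2)[0]  # 16-bit big-endian field

print(version, ihl_words, ttl, total_length)  # 4 5 64 40
```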

When to use Bits instead of Bytes

Use bits whenever the destination is a cryptographic-key specification, a network-bandwidth or protocol-field figure, an audio or video codec bitrate, a CPU register-width or memory-width spec, or any spec-layer document that uses bit-denominated terminology. The bit figure is the industry-standard specification language at the cryptographic, networking and CPU-architecture layers, and translation up from byte-level storage to bit-level spec terminology is required at every audit, vendor-benchmark interpretation and standards-body specification reading. Stay in bytes when the destination is a memory layout, file-storage size, packet-encoding byte count, cryptographic-library API parameter, or codec-output file calculation. The conversion is at the implementation-vs-spec boundary, with the byte figure on the storage and memory side and the bit figure on the cryptographic, networking and architecture-spec side.

Common mistakes converting B to bit

  • Misreporting an "AES-128" key as 128 bytes rather than 128 bits. The 128 in "AES-128" refers to the bit-length of the key, which equals 16 bytes of underlying material — an eightfold size difference. Confusing the two leads to over-allocated key buffers, mismatched API parameters and incorrect cryptographic-strength claims in audit reports. The byte-level figure is always smaller than the bit-level figure by exactly the eightfold ratio.
  • Multiplying file sizes by 8 to estimate "transfer time at 100 Mbps" without converting to per-second figures. A 1 GB file is 8,000,000,000 bits, which at 100 Mbps takes 8,000,000,000 ÷ 100,000,000 = 80 seconds at theoretical maximum. Skipping the bandwidth-per-second division and treating "100 Mbps for 1 GB" as if the answer were "8 seconds" produces a tenfold underestimate. The bytes-to-bits conversion is one step; the bandwidth-per-second division is a separate step (both mistakes are sketched in code after this list).
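
Both pitfalls reduce to one-line fixes, sketched in Python:

```python
# Pitfall 1: "AES-128" is 128 bits of key, i.e. 16 bytes — not 128 bytes.
key_bits = 128
key_bytes = key_bits // 8      # 16, the correct buffer size

# Pitfall 2: transfer time needs two steps — convert to bits, THEN divide by rate.
file_bits = 1_000_000_000 * 8  # 1 GB file as bits
rate_bps = 100 * 1_000_000     # 100 Mbps as bits per second
seconds = file_bits / rate_bps # 80.0 s, not 8 s

print(key_bytes, seconds)      # 16 80.0
```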

Frequently asked questions

How many bits in a byte?

One byte equals exactly 8 bits by universal modern computing convention. The 8-bit byte became the universal standard with the IBM System/360 architecture in 1964 and has been preserved across every subsequent CPU, operating system and programming language. The eightfold ratio is exact and unchanging, and underlies every cross-layer translation between byte-level storage and bit-level specification across modern computing.

How many bits in a 32-byte key?

32 bytes equals 32 × 8 = 256 bits. That is the size of an Ed25519 private key, an AES-256 encryption key, or a SHA-256 hash output, and the figure that security audits and standards-body specs report under the "256-bit" cryptographic-strength terminology. The byte-level 32 and the bit-level 256 are the same physical object described from two different specification layers.
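
The 32-byte-to-256-bit identity can be checked directly against Python's standard library:

```python
import hashlib

# A SHA-256 digest is always 32 bytes, i.e. the "256-bit" of its name.
digest = hashlib.sha256(b"example").digest()
print(len(digest), "bytes =", len(digest) * 8, "bits")  # 32 bytes = 256 bits
```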

How many bits in a 1 GB file?

One gigabyte (1,000,000,000 bytes in decimal SI) converts to 1,000,000,000 × 8 = 8,000,000,000 bits or 8 Gb (lowercase b for bits). That is the figure that translates a 1 GB file size into transfer time over bit-denominated network bandwidth: at 1 Gbps the 8 Gb file takes 8 seconds at theoretical maximum, at 100 Mbps it takes 80 seconds, at 10 Mbps it takes 800 seconds (about 13 minutes). Real-world transfer times run 5-12% longer than these theoretical figures after Ethernet, IP and TCP protocol overhead.

Quick way to convert bytes to bits in my head?

Multiply the byte figure by 8. The mental shortcut for common byte values: 1 byte is 8 bits, 4 bytes is 32 bits, 8 bytes is 64 bits, 16 bytes is 128 bits, 32 bytes is 256 bits, 64 bytes is 512 bits, 128 bytes is 1024 bits. Most cryptographic-key and protocol-spec byte values are powers of 2, which makes the multiply-by-8 mental math trivial and the resulting bit figure another power of 2.

Why does "AES-128" mean 16 bytes and not 128 bytes?

The "128" in AES-128 refers to the bit-length of the encryption key, which equals 128 ÷ 8 = 16 bytes of underlying material. AES specifications denominate key sizes in bits because the cryptographic-strength terminology is bit-length-based — a 128-bit key has 2^128 possible values, providing 128 bits of brute-force resistance. The byte-length representation (16 bytes for AES-128, 32 bytes for AES-256) is the implementation-layer storage size, while the bit-length representation is the cryptographic-strength specification.

How long does a 1 GB download take at various speeds?

One GB equals 8 Gb (gigabits). At 1 Gbps the file takes 8 seconds at theoretical maximum, at 100 Mbps it takes 80 seconds, at 10 Mbps it takes 800 seconds (13 minutes), at 1 Mbps it takes 8000 seconds (2.2 hours). Real-world download speeds are typically 88-95% of the theoretical maximum after Ethernet, IP and TCP protocol overhead, so add 5-12% to the times above for typical wired-Ethernet or fibre connections.

What is the difference between MB and Mb?

MB (megabyte, uppercase B) measures storage and file size at the byte level (1 MB = 8 Mb), while Mb (megabit, lowercase b) measures network bandwidth and codec bitrate at the bit level. The two differ by a factor of 8 and the case-sensitive distinction is critical: a 100 Mbps connection delivers 12.5 MB/s of byte throughput, not 100 MB/s. Writing "100 MBps" with an uppercase B in a bandwidth spec is almost always an error, since link speeds are conventionally quoted in lowercase-b bits per second.