Data Transfer Rate Converters — Mbps, Gbps, network bandwidth
Data-rate conversions cover the units that describe how fast bits and bytes move through networks, storage interfaces, and streaming-media pipelines. The category is anchored in two units — megabits per second (Mbps) and gigabits per second (Gbps) — that together cover the bandwidth range from residential broadband (10–1000 Mbps) through enterprise dedicated internet (1–100 Gbps) and into data-centre fabric uplinks (40 Gbps to 800 Gbps).

The most important and most-confused distinction in the category is bits-vs-bytes: bandwidth is measured in bits per second (lowercase b, as in Mbps and Gbps) while file-transfer rates and storage-interface speeds are often quoted in bytes per second (uppercase B, as in MB/s and GB/s), and the two differ by a factor of 8 because there are 8 bits per byte. A 1 Gbps connection has a theoretical maximum throughput of 125 MB/s, not 1 GB/s — and after protocol overhead the real-world figure is typically 110–120 MB/s. The bit-vs-byte distinction is the single most common cause of bandwidth-arithmetic confusion in consumer and enterprise IT alike.

Conversion between the bandwidth-bits and storage-bytes layers happens at every download-progress display, every speed-test result, and every capacity-vs-throughput planning calculation. Network engineers, ISP product managers, streaming-media architects, and consumer broadband shoppers all encounter the bit-vs-byte and Mbps-vs-Gbps conversions at the boundary between marketing-tier figures and operational-throughput numbers.
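The factor-of-8 relationship above can be sketched as a pair of helper functions (the names are illustrative, not from any particular library):

```python
BITS_PER_BYTE = 8  # exact: 8 bits per byte

def mbps_to_mb_per_s(mbps: float) -> float:
    """Bit-rate in Mbps -> theoretical max byte-rate in MB/s (decimal SI)."""
    return mbps / BITS_PER_BYTE

def mb_per_s_to_mbps(mb_per_s: float) -> float:
    """Byte-rate in MB/s -> bit-rate in Mbps."""
    return mb_per_s * BITS_PER_BYTE

print(mbps_to_mb_per_s(1000))  # 125.0 -> a 1 Gbps link peaks at 125 MB/s, not 1 GB/s
```

The same functions work at any prefix tier, since the 8:1 factor is independent of scale.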
Units in this category
Megabits per second (Mbps)
One megabit per second (Mbps) equals 1,000,000 bits transmitted per second under the SI decimal convention used universally by network engineers, ISPs, and standards bodies — not the binary 2²⁰ that the storage-prefix convention implies for bytes. IEEE 802.3 (Ethernet), IEEE 802.11 (Wi-Fi), 3GPP cellular standards, ITU-T G.9961 powerline, DOCSIS cable, and the relevant ITU-R radio-spectrum recommendations all treat "Mbps" as 10⁶ bps. The Mbps-to-MB/s conversion is exact: 1 Mbps ÷ 8 bits/byte = 0.125 MB/s, so a 100 Mbps connection delivers a maximum of 12.5 MB/s before TCP/IP, Ethernet, and link-layer protocol overhead reduces effective throughput by 5–15% to roughly 11.0–11.9 MB/s.
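The headline-rate-to-effective-throughput calculation can be sketched as follows, assuming the 5–15% overhead band quoted above (the function name and default band are illustrative):

```python
def effective_mb_per_s(mbps: float,
                       overhead_low: float = 0.05,
                       overhead_high: float = 0.15) -> tuple[float, float]:
    """Return (worst, best) effective MB/s after a protocol-overhead band."""
    peak_mb_per_s = mbps / 8  # theoretical byte throughput before overhead
    return peak_mb_per_s * (1 - overhead_high), peak_mb_per_s * (1 - overhead_low)

print(effective_mb_per_s(100))  # (10.625, 11.875) MB/s on a 100 Mbps link
```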
Gigabits per second (Gbps)
One gigabit per second (Gbps) equals 1,000,000,000 bits transmitted per second under the SI decimal convention used universally by IEEE, ITU-T, 3GPP, and OIF standards. The Gbps-to-GB/s conversion is exact: 1 Gbps ÷ 8 bits/byte = 0.125 GB/s = 125 MB/s of theoretical maximum file-transfer throughput, with TCP/IP, Ethernet, and link-layer overhead reducing practical effective throughput by 5–15% to roughly 110–120 MB/s under favourable conditions. The symbol Gbps (uppercase G, lowercase b, lowercase ps) is distinct from GB/s (uppercase G, uppercase B, slash s) by the same factor of 8 that separates bits from bytes at every prefix tier.
Kilobits per second (kbps)
The kilobit-per-second (kbps, kbit/s) is defined as exactly 1000 bits per second under the SI decimal prefix convention used in telecommunications and data-rate measurement, distinguishing it from the binary-prefix kibibit per second (Kibit/s, 1024 bits per second) used in some legacy storage-context documentation. The factor is exact rather than measured. Equivalently, 1 kbps = 0.001 Mbps = 0.000001 Gbps = 0.125 KB/s (kilobytes per second, since 1 byte = 8 bits).
Megabytes per second (MB/s)
The megabyte-per-second (MB/s, MBps) is defined as exactly 10⁶ bytes per second = 1,000,000 bytes per second under the SI prefix decimal-convention used in modern storage-and-network-and-internet measurement. Equivalently, 1 MB/s = 8 Mbps (megabits per second, since 1 byte = 8 bits) = 8,000,000 bits per second = 1000 KB/s = 0.001 GB/s = 8000 kbps. The "8:1 byte-to-bit" conversion is the canonical factor between MB/s (byte-based, user-facing download-rate) and Mbps (bit-based, network-bandwidth).
History of data transfer rate measurement
Network-bandwidth measurement in bits-per-second traces back to the early days of telegraph and telephone signalling at Bell Labs, where the underlying serial transmission of binary symbols gave the bit-rate its natural unit. Modem speeds in the consumer era were quoted in bits per second from the start: 300 baud, 1200 bps, 14.4 kbps, 28.8 kbps, 56k, all the way up to broadband. Ethernet evolved through a sequence of bit-rate steps: 10 Mbps in 1980, 100 Mbps in 1995, 1000 Mbps (Gigabit Ethernet) in 1999, 10 Gbps in 2002, 25/40/100 Gbps in the 2010s, and 400/800 Gbps in the 2020s. The convention of measuring bandwidth in bits rather than bytes was preserved through every step because of historical inertia and because serial-bit transmission remains the underlying physical model of network signalling. Modern fibre-optic and coherent-optical transmission systems push individual wavelengths into the multi-Tbps range, with subsea cables aggregating dozens of wavelengths into Pbps total system capacity.
Where data transfer rate conversions matter
Data-rate conversions appear at every layer of the modern network and streaming stack. Internet service providers package residential broadband in Mbps tiers (100 Mbps, 500 Mbps, 1 Gbps fibre) and enterprise dedicated internet access in Gbps tiers (1 Gbps DIA, 10 Gbps, 100 Gbps), with the conversion at the residential-vs-enterprise boundary running constantly. Data-centre network architects design leaf-spine fabrics with per-server NICs at Gbps rates (10 Gbps, 25 Gbps, 100 Gbps per server) aggregating to spine-switch capacity in Tbps. Streaming-media engineers compute concurrent-viewer bandwidth at the per-stream Mbps level (5 Mbps for 1080p HD, 15 Mbps for 4K, 25 Mbps for HDR 4K) and aggregate against CDN-edge Gbps capacity. Mobile networks specify LTE peaks in hundreds of Mbps and 5G in low Gbps. SaaS platforms meter per-tenant bandwidth in Mbps against org-wide Gbps allocations. Software-defined-WAN admins allocate per-application QoS bandwidth in Mbps from Gbps-class WAN edges. Last-mile fibre engineers run GPON and XGS-PON splits sharing Gbps backhaul across 32 or 64 subscribers at Mbps-class service tiers, with subscription-ratio sizing at the per-network-element level. Network analysis tools report per-flow Mbps against aggregate Gbps interface capacity, and capacity-planning models reconcile across the per-flow, per-tenant, and per-edge layers at every quarterly review. Consumer broadband shoppers comparing tier prices, IT teams sizing new branch-office circuits, and CDN engineers planning live-event capacity all run cross-scale Mbps-Gbps conversions as part of every capacity decision.
How to convert data transfer rate units
In the decimal SI interpretation used universally for network bandwidth, each scale step is exactly 1000: 1 kbps = 1000 bps, 1 Mbps = 1000 kbps, 1 Gbps = 1000 Mbps, 1 Tbps = 1000 Gbps. The conversion is a pure decimal-place shift in either direction. The bit-to-byte conversion is fixed at 8 bits per byte, so a 1 Gbps link has a theoretical maximum byte throughput of 125 MB/s (1000 Mbps ÷ 8). Real-world throughput is lower because of protocol overhead — Ethernet, IP, and TCP headers plus inter-frame gaps consume 4–6% of the gross bit rate, leaving typically 940–960 Mbps of usable application data on a 1 Gbps Ethernet link. For mental math, dividing Mbps by 10 gives a quick MB/s estimate that runs about 20% low (the true divisor is 8); dividing by 8 gives the precise byte-throughput figure. Subscription ratios and oversubscription conventions in last-mile and data-centre networks mean that aggregate provisioned bandwidth often exceeds simultaneous deliverable bandwidth by 2:1 to 10:1, with the gap absorbed by time-of-day usage patterns.
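The decimal-place shifts and the divide-by-8 byte conversion can be combined in one small table-driven converter (the unit table and function name are illustrative):

```python
# Decimal SI multipliers, expressed in bits per second.
BIT_UNITS = {"bps": 1.0, "kbps": 1e3, "Mbps": 1e6, "Gbps": 1e9, "Tbps": 1e12}
# Byte-based units carry the extra factor of 8 bits per byte.
BYTE_UNITS = {"B/s": 8.0, "KB/s": 8e3, "MB/s": 8e6, "GB/s": 8e9}

def convert_rate(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between any two data-rate units via a common bits-per-second base."""
    units = {**BIT_UNITS, **BYTE_UNITS}
    return value * units[from_unit] / units[to_unit]

print(convert_rate(1, "Gbps", "Mbps"))    # 1000.0
print(convert_rate(1, "Gbps", "MB/s"))    # 125.0
print(convert_rate(940, "Mbps", "MB/s"))  # 117.5 -> usable gigabit throughput
```

Routing every conversion through a single base unit avoids maintaining a quadratic table of pairwise factors.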
All data transfer rate conversions
Frequently asked questions
Why is bandwidth measured in bits per second instead of bytes per second?
Network bandwidth has been measured in bits per second since the earliest days of telegraph and modem signalling, where serial transmission of binary symbols gave the bit-rate its natural physical unit. Ethernet, fibre, and modern wireless systems all preserve the bit-rate convention because the underlying transmission is still serial bit-by-bit at the physical layer. The bit-rate convention is universal across ISPs, network equipment manufacturers, and standards bodies, even though end-user file-transfer experiences are typically quoted in MB/s or GB/s.
Why does my "1 Gbps" connection only get about 940 Mbps in real-world testing?
The 1 Gbps figure is the gross link rate — the raw bit rate of the underlying Ethernet signalling. Protocol overhead (Ethernet frame headers, IP packet headers, TCP segment headers, inter-frame gap) consumes about 4–6% of the gross rate, leaving typically 940–960 Mbps of usable application data. Cumulative TCP-flow inefficiencies, buffer-bloat issues, and per-connection limits can lower the practical throughput further depending on the application protocol. The 940 Mbps figure is what speed-test apps measure on healthy gigabit links.
How long does a 1 GB file download take on a 100 Mbps connection?
One gigabyte equals 8000 megabits (1000 MB × 8 bits/byte), so the theoretical minimum download time at 100 Mbps is 8000 ÷ 100 = 80 seconds. Real-world downloads add protocol overhead and TCP-flow inefficiencies, typically extending the actual time to 85–95 seconds on a healthy 100 Mbps connection. The "GB-to-Mbps timing math" runs constantly in download-progress estimates and capacity-planning calculations.
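The timing math in this answer can be sketched as a small calculator, assuming decimal gigabytes and an optional efficiency factor to model overhead (names and defaults are illustrative):

```python
def download_seconds(file_gb: float, link_mbps: float, efficiency: float = 1.0) -> float:
    """Seconds to move file_gb decimal gigabytes over a link_mbps link.

    efficiency < 1.0 models protocol overhead (e.g. 0.90 for ~10% lost to headers).
    """
    megabits = file_gb * 1000 * 8  # GB -> MB -> Mb
    return megabits / (link_mbps * efficiency)

print(download_seconds(1, 100))                   # 80.0 s theoretical minimum
print(round(download_seconds(1, 100, 0.90), 1))   # 88.9 s with 10% overhead
```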
What is the difference between Mbps and MB/s?
Mbps (megabits per second, lowercase b) measures network bandwidth at the bit level, while MB/s (megabytes per second, uppercase B) measures file-transfer throughput at the byte level. The two differ by a factor of 8 because there are 8 bits per byte. A 100 Mbps connection delivers 12.5 MB/s at theoretical maximum, and a 1 Gbps connection delivers 125 MB/s. ISPs and network equipment use Mbps; download progress bars and storage-interface specs typically use MB/s; the bit-vs-byte distinction is the most common cause of consumer-facing bandwidth confusion.
What is a baud and how does it relate to bits per second?
Baud is the symbol rate of a transmission system — how many signal symbols are transmitted per second — while bps is the data-bit rate, which can be higher than the baud rate when each symbol encodes multiple bits. Early modems with binary signalling had baud and bps numerically equal (300 baud equalled 300 bps), but later modems used multi-bit signalling so a 56,000 bps modem might run at 8000 baud with each symbol encoding 7 bits. Modern usage almost always quotes bps rather than baud, and the two are no longer interchangeable.
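The baud-vs-bps relationship reduces to a single multiplication; the values below mirror the examples in this answer:

```python
def bps_from_baud(baud: int, bits_per_symbol: int) -> int:
    """Data-bit rate = symbol rate x bits encoded per symbol."""
    return baud * bits_per_symbol

print(bps_from_baud(300, 1))   # 300   -> binary signalling: baud == bps
print(bps_from_baud(8000, 7))  # 56000 -> multi-bit symbols: bps > baud
```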
Why are PON and broadband networks oversubscribed?
Last-mile passive optical networks (PON) and broadband ISPs rely on the time-of-day usage pattern of residential subscribers to deliver advertised speeds without provisioning the full simultaneous capacity. A 10 Gbps PON serving 32 subscribers at 1000 Mbps would need 32 Gbps of dedicated capacity for full simultaneous speed, but typical peak-hour aggregate usage stays well below the per-subscriber max because subscribers are not all at peak simultaneously. Subscription ratios of 4:1 to 10:1 are standard, with the gap absorbed by the usage-pattern statistics.
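The oversubscription arithmetic can be sketched as follows (a hypothetical helper for illustration, not a PON-vendor sizing formula):

```python
def oversubscription_ratio(subscribers: int, tier_mbps: float, backhaul_gbps: float) -> float:
    """Ratio of total provisioned subscriber bandwidth to shared backhaul capacity."""
    provisioned_gbps = subscribers * tier_mbps / 1000  # Mbps -> Gbps
    return provisioned_gbps / backhaul_gbps

print(oversubscription_ratio(32, 1000, 10))  # 3.2 -> 32 Gbps sold over a 10 Gbps PON
```

A ratio above 1.0 means the network is relying on subscribers not all peaking simultaneously, as the answer describes.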
What is the highest data-rate unit in common use?
Modern data-centre and ISP backbone networks operate at Tbps (terabits per second) and increasingly Pbps (petabits per second) at the system level. A single 400 GbE port runs at 400 Gbps, a 100-port 400 GbE switch handles 40 Tbps of aggregate capacity, and a major subsea cable system carries multiple Tbps per fibre pair with dozens of fibre pairs in the cable. Pbps figures appear in published capacity totals for hyperscale data-centre interconnects and global fibre-optic system aggregates.