
Why do ISPs show speed in bits and not in bytes? It's confusing - frade33
It is very confusing, because for everything else, e.g. file sizes, the standard is bytes.
======
brudgers
Utilities are sized by capacity not the use of that capacity. The local water
works doesn't bill based on mugs of coffee.

How many bytes go over the network is a function of the protocol used to send
bytes as bits. If some bits are used for error detection, correction, or
handshaking, then they are like the water used to rinse the mug and wash the
coffee pot, left in the sink, or turned into steam. In other words, what is
or isn't a byte is subject to debate, but thanks to Shannon, bits are
objectively measurable.

------
csense
My guess is marketing reasons. If you use a smaller unit, you get a bigger
number.

EDIT: Why the downvote? It's a perfectly logical explanation.

~~~
27182818284
This seems very likely given that I now see "100% fiber-backed" advertisements
for the same ol' DSL that has existed in areas for a decade.

------
DanBC
For years we measured speed in bits. 300 bps; 1200 bps.

It is confusing, especially because B per second is 8 times faster than b per
second.

Also, what happens if you're using 7E2 or 8N1 or whatever? What counts as a
byte?
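The 7E2 / 8N1 point can be made concrete with a little arithmetic. A minimal sketch (the 9600 bps line rate is just an illustrative number): each character on an async serial link carries start, parity, and stop bits on top of its data bits, so the byte rate you get from a given bit rate depends on the framing.

```python
# Sketch: effective character rate of an async serial link depends on framing.
# 8N1 = 1 start + 8 data + 0 parity + 1 stop = 10 bits per character;
# 7E2 = 1 start + 7 data + 1 parity + 2 stop = 11 bits per character.

def frame_bits(data_bits, parity_bits, stop_bits, start_bits=1):
    """Total bits on the wire per character."""
    return start_bits + data_bits + parity_bits + stop_bits

def chars_per_second(line_rate_bps, data_bits, parity_bits, stop_bits):
    """Characters delivered per second at a given raw line rate."""
    return line_rate_bps / frame_bits(data_bits, parity_bits, stop_bits)

# On a 9600 bps line:
print(chars_per_second(9600, 8, 0, 1))  # 8N1 -> 960.0 characters/s
print(chars_per_second(9600, 7, 1, 2))  # 7E2 -> ~872.7 characters/s
```

Same 9600 bps line, two different "bytes per second" figures, which is exactly why the bit rate is the unambiguous number.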

------
mikegogulski
The best one-word answer is, simply, "tradition".

However, it's tradition that's backed up by a huge raft of prior work in the
data transmission and information theory sciences. When you break all of that
stuff down, you're left facing questions like "how can I design a protocol so
that over a lossy channel I can unambiguously ascertain the correct
transmission and reception of a given piece of information?" The natural unit
which falls out of this is the indivisible bit, zero or one, the presence or
absence of a signal, over a certain measured interval of time.

The theoreticians and engineers who produced the first systems for digital
data transmission were all informed by this work, and their fundamental
challenge was to produce systems capable of reliable transmission and
reception of streams of bits -- not necessarily 8-bit bytes, that being a
higher-order concern left to higher-order devices in the network, for sake of
modularity. (For a bewildering example as to why telecom techs
wanted^H^H^H^H^H^Hstill want to leave this kind of thing to others to puzzle
out, see, e.g.
[https://en.wikipedia.org/wiki/36-bit](https://en.wikipedia.org/wiki/36-bit))

As a result, today, network engineers pay attention first to the bit-rate
rather than the byte-rate capabilities of their equipment, and this is
reflected in everything from the names of low-level protocols to ways of
talking about circuits to the specifications of concrete product
implementations, from the data-carrying capacities of network interfaces to
the speeds offered to consumers.

I come from the network engineering world, as you might have guessed :) So, on
the opposite side of your question, I find it terribly distressing when
software like my Bittorrent client reports speeds to me in megabytes-per-
second. I don't have an intuitive _feel_ for what a megabyte per second is,
but I can take that figure and multiply by eight and say "aha!" Roughly the
same as old-school 10Mbps Ethernet.
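The multiply-by-eight trick above is trivial to write down. A minimal sketch (decimal prefixes assumed, protocol overhead ignored):

```python
# Sketch: converting between the two conventions.
# Decimal prefixes assumed throughout; real-world throughput will be lower
# once protocol overhead is accounted for.

def mbps_to_mbyteps(megabits_per_second):
    """Line rate in Mb/s -> MB/s, as a BitTorrent client might report it."""
    return megabits_per_second / 8

def mbyteps_to_mbps(megabytes_per_second):
    """MB/s -> Mb/s, the unit network gear and ISPs quote."""
    return megabytes_per_second * 8

# 1.25 MB/s reported by a client is roughly old-school 10 Mb/s Ethernet:
print(mbyteps_to_mbps(1.25))  # 10.0
```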

------
codemonkeymike
My networking professor worked for AT&T for years, before and after it was
sold, and on into the internet age. His explanation was that the phone
companies (really, the phone company) had existed since the late 1800s, before
computers, and even when they built and bought the first computers and
computer networks, the byte was not a unit of measurement. So: the phone
company's network, the phone company's rules.

Edit: I'd like to add that the first networks had dumb devices on the
periphery (your phones) and smart devices in the core (AT&T's computer-
controlled routers). There was no concept of bytes in that network.

------
mariuolo
I think it has to do with avoiding ambiguity.

Granted, the byte hasn't been redefined, but given how every architecture and
language in the past had a different standard for each ADS, one might want to
be proactive.

Also see the mess SI vs. binary prefixes created.

As a sidenote, early modem producers used the baud
([http://en.wikipedia.org/wiki/Baud](http://en.wikipedia.org/wiki/Baud)) which
coincides with bps only in some cases.
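The baud/bps distinction from that link can be sketched in a couple of lines (constellation sizes here are just illustrative): the baud is the symbol rate, and each symbol carries log2(M) bits for a modulation with M distinct symbols, so baud and bps coincide only when M = 2.

```python
import math

# Sketch: bit rate = symbol rate (baud) * bits per symbol.
# A modulation with M distinct symbols carries log2(M) bits per symbol.

def bits_per_second(baud, constellation_size):
    return baud * math.log2(constellation_size)

print(bits_per_second(2400, 2))   # binary signalling: 2400 baud = 2400 bps
print(bits_per_second(2400, 16))  # 16 symbols (e.g. 16-QAM): 9600 bps
```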

------
abhilash0505
One of the reasons for it is actually marketing. If I were to provide an 8
Mbps connection, an average person would not know that 8 b = 1 B. From an
ISP's view, 8 Mbps sounds like a better speed than 1 MBps to an average
non-tech customer.

~~~
saddestcatever
My assumption: No ISP wants to be the first to make the transition to using
MBps. And thus, legacy metrics are used forever.

------
lutusp
> It is very confusing, because for everything else i.e file sizes the
> standard is bytes.

No, this isn't true. It was true once, but things have changed. Now there are
two standards, one based on powers of two, the other based on powers of
ten:

[http://en.wikipedia.org/wiki/Mebibit](http://en.wikipedia.org/wiki/Mebibit)

And, more topically,

[http://en.wikipedia.org/wiki/Data_rate_units](http://en.wikipedia.org/wiki/Data_rate_units)

1 kibibit = 1.024 kilobit

I'm not saying this is why ISPs use bits per second, but it would certainly
serve as an explanation in the absence of a better one. A bit per second means
the same thing in all current schemes for describing a quantity of data or its
velocity.
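The kibibit/kilobit gap is easy to check, and it grows with the prefix. A minimal sketch:

```python
# Sketch: the two prefix systems applied to bits.
KILOBIT = 1_000   # SI decimal prefix
KIBIBIT = 1_024   # IEC binary prefix

print(KIBIBIT / KILOBIT)        # 1.024, i.e. 1 kibibit = 1.024 kilobit
print(1_024**2 / 1_000**2)      # 1.048576: the gap widens at the mega scale
```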

------
angrok
for computer storage bandwidth, the unit is bytes. for transmission bandwidth,
the unit is bits.

i think this is mostly because on the network, transmission actually happens a
bit at a time, whereas in a computer system, data is addressed at the byte
level (wikipedia says this evolved because in the early days a byte was used
to store each character of text).

hence files have sizes in MBs etc, while ISPs have speeds in Mbps

~~~
tonyarkles
Not strictly true that data is transferred a bit at a time!
[http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation](http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation)
for example transmits multiple bits on the same wire at the same time.

~~~
angrok
totally agree, QAM does transfer multiple bits

in fact in today's world everything from modulation to multiplexing is done
digitally using network processors, which actually work with bytes internally,
so technically it should be possible to give network bandwidth in bytes

but my point is that transmission on a network or data bus always happens in
"frames", and a frame of length x bits usually contains y bits of actual data
and (x-y) bits of parity/redundancy. transmitting 1 byte almost never takes
exactly 8 bits on the wire; a few additional bits are needed to ensure correct
reception
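The x-bits-of-frame, y-bits-of-payload point amounts to a simple goodput formula. A minimal sketch (the 10-bits-per-byte figure is illustrative, in the spirit of 8b/10b-style line coding):

```python
# Sketch: usable data rate ("goodput") of a framed link.
# If a frame is x bits long and carries y bits of payload,
# only y/x of the raw bit rate is usable data.

def goodput_bps(line_rate_bps, frame_len_bits, payload_bits):
    return line_rate_bps * payload_bits / frame_len_bits

# Illustrative: a coding that sends 10 line bits per 8-bit data byte
# leaves 80% of a 1 Mb/s line as payload:
print(goodput_bps(1_000_000, 10, 8))  # 800000.0 usable bits/s
```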

------
simlevesque
It's baiting. Most people do not know they are being ripped off.

~~~
humbledrone
This argument makes no sense. The only way that an ISP could rip people off
this way would be if the standard unit of measurement across the ISP industry
was e.g. bytes per second, and one rogue ISP listed their figures in bits per
second. Thus, that one ISP would misleadingly appear to be 8 times faster than
the other ISPs.

But in reality, all ISPs measure their bandwidth speeds in bits per second.
Thus there's no ripoff happening -- if you compare the speed numbers from two
competing ISPs, and they're both in bits per second (which they are), then
everything works out fine and you can choose the faster of the two.

And anyway, it's not just ISPs or whatever. Basically the entire networking
industry measures speed in bits per second. It is totally standard.

