
IEEE P802.3bz Approval for 2.5G/5G Ethernet - cm2187
http://www.anandtech.com/show/10720/nbaset-receives-boost-with-ieee-p8023bz-approval
======
mjevans
I think most of us still just want 10G Ethernet to drop in price to the point
where using it in a budget SAN makes sense.

Last I checked, it was starting to get there for single NICs; it was the
switches that kept even high-speed in-rack deployment from making sense.

~~~
tostaki
Does anyone have insight into why there is such a gap between 1G ethernet
and 10G? I have gigabit internet at home, so why can't I have a faster
local network for a reasonable price? Is it just too soon, or is there a
real technological gap?

~~~
wmf
My impression is that a 10Gbase-T PHY requires far more than 10x as many
transistors as a 1000base-T PHY; maybe closer to 100x. This caused a death
spiral: high prices lead to low volume, which keeps prices from dropping.

~~~
DiabloD3
There is also the problem that 10gbe over copper requires interleaving,
whereas optical doesn't. A lot of 10gbit uses are latency sensitive; 2.5
and 5 are not likely to be used for latency-sensitive work.

2.5 and 5 are based on the existing 10gbase-t technology and can be deployed
over existing Cat5e and Cat6, whereas 10gbase-t only works reliably over
existing Cat6, sometimes at less than 100 ft.

802.3bz's genius is basically running 10gbase-t at half or a quarter of the
clock speed, requiring lower-spec cabling but also lower-power, lower-heat
parts. I don't know why this wasn't done as part of the original 10gbase-t
specification (802.3an, which is now 10 years old; in comparison, 802.3ab,
which defines 1000base-t, is 17).
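
Back-of-envelope, the half/quarter-clock rate math works out like this
(a minimal sketch, assuming the usual published 10gbase-t figures: four
pairs at 800 MBd, roughly 3.125 effective info bits per symbol after
coding overhead):

    # Sketch: 2.5G/5G as down-clocked 10gbase-t. The 800 MBd and
    # ~3.125 info bits/symbol figures are the commonly published ones.
    PAIRS = 4
    INFO_BITS_PER_SYMBOL = 3.125  # 4-bit PAM16 minus coding overhead

    def throughput_gbps(symbol_rate_mbd):
        return symbol_rate_mbd * 1e6 * PAIRS * INFO_BITS_PER_SYMBOL / 1e9

    for name, mbd in [("10gbase-t", 800), ("5gbase-t", 400), ("2.5gbase-t", 200)]:
        print(f"{name}: {mbd} MBd/pair -> {throughput_gbps(mbd):.1f} Gbit/s")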

In addition, for embedded hardware, 2.5 and 5 are a better fit for common on-
chip SerDes implementations when you need to wire stuff up that way.

As for power usage, a modern 10gbase-t controller run at 1/4 the clock speed
(with voltage also appropriately adjusted) would use less than 1/4 the power,
yet be more power efficient, per Gbit/sec, than an existing modern 1000base-t
part.
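
To make that concrete, here's a rough sketch using the standard CMOS
dynamic-power relation P ~ C * V^2 * f; the capacitance and voltage
numbers are made up purely for illustration:

    # CMOS dynamic power scales as P ~ C * V^2 * f.
    # Illustrative numbers only; C_EFF and the voltages are assumptions.
    C_EFF = 1.0  # normalized effective switched capacitance

    def dyn_power(v, f):
        return C_EFF * v ** 2 * f

    p_full = dyn_power(v=1.0, f=1.0)      # 10gbase-t at full clock
    p_quarter = dyn_power(v=0.8, f=0.25)  # same PHY at 1/4 clock, lower V

    print(f"quarter-clock power: {p_quarter / p_full:.0%} of full")  # 16%
    # Quartering f alone gives 25%; dropping V from 1.0 to 0.8 squares
    # to another 0.64x, landing at ~16% -- less than 1/4, as claimed.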

The real question is: when will all the cheap "we need an ethernet port but
nothing fancy" controllers (which are all gigabit in anything recognizable as
a computer, with 100mbit becoming more and more rare in anything not
recognizable as a computer) become baby 10gbit? That's all I really care
about.

~~~
walrus01
10GBaseT only works reliably over cat6a and cat7 cable that costs close to
$350 per 1000 ft (305 meters), whereas two-strand singlemode G.657.A2-type
cable now costs less than $85 per 1000 ft, or less than cat5e. This is why
most datacenter environments use fiber and not copper. Also, cat6a and cat7
will fill overhead cabling trays a great deal faster because of their huge
diameter compared to singlemode patch cables.

~~~
DiabloD3
Yes, this is true. I work in this industry, and I also recommend fibre over
copper for these reasons. However, for rack-local wiring (i.e., wiring
servers into top-of-rack switches), cat6 for 10gbase-t is acceptable,
although I don't particularly care for it.

~~~
snuxoll
Eh, with the plethora of DAC SFP+ cables at extremely affordable prices, I
don't really see many people using CAT6 - especially considering the
substantially higher power draw that 10GBase-T has over the DACs.

~~~
mjevans
This doesn't quite fulfill the role either though...

A quick search for SFP+ cards that can handle 10Gbit/sec shows that the
typical price is ~200 USD/card right now (this is still at least 3x the cost
per card I'd prefer to see).

Given the pricing on individual cards I'd hate to consider the price of a
switch, if such a thing even exists.

~~~
snuxoll
We're talking about datacenter usage, not home. Also, there are plenty of
older-generation cards out there at reasonable prices; the Mellanox
ConnectX-2 cards I have in my home servers cost me like $40 for the pair.
And no matter how you slice it, 10GbE switching is expensive right now -
like, crazy expensive - to the point that I don't even bother with it. My
VMware box is connected directly to my FreeNAS box with a 1M SFP+ DAC, as a
switch with even two 10GbE uplink ports would have cost considerably more
than my cheap 24-port TP-Link managed switch ($150).

------
rootbear
Running Cat6a through the older buildings here at NASA Goddard can be a
nightmare. I can see us using this, especially in buildings that are planned
for replacement down the road. And let's not talk about getting funding for
Cat6a upgrades; infrastructure money is always in short supply.

~~~
walrus01
IMHO, if you're going to go to the trouble of recabling a NASA research
facility, run singlemode everywhere. It's really cheap, easy enough to
terminate, and if the light path is done correctly it is future-proof to
100GbE, 400GbE, and 1TbE. Singlemode fiber circuits built in very ordinary
datacenters today carrying a 1 x 10GbE LX circuit will function just fine
for coherent 100GbE QPSK, as long as the fiber termination is OK and the
connectors are reasonably clean.

~~~
rootbear
All of the 10Gbit stuff I'm aware of here at Goddard is over fibre. I assume
it's single-mode, but that isn't really my area of expertise. I mentioned
Cat6a above because I know that some projects here are using it; I didn't
mean to imply that it's the preferred medium for 10Gbit here generally.

------
AceJohnny2
It's crazy to think of the high-speed signals we're now able to send over
copper. Think about it: 5Gbit used to be possible only over very expensive
hardware and very specialized/isolated cabling. Now it's all over the place:
it started with PCIe, then Thunderbolt, then USB3, now Ethernet. And this is
thanks to advances in silicon technology that allow us to manufacture PHYs
that are performant enough to send and receive these signals and cheap
enough to include in consumer hardware. It's not just Moore's law that
allows this, but also advances in signal processing.

In fact, I once read that all these high-speed signals basically reuse the
PCIe PHY. Does anyone with more knowledge of that area of tech know more
about it?

~~~
wmf
PCIe, SATA, SAS, Fibre Channel, Infiniband, and data center flavors of
Ethernet are all based on very similar serdes technology that uses multi-GHz
binary encoding over shielded cabling.
http://www.design-reuse.com/articles/10541/multi-gigabit-serdes-the-cornerstone-of-high-speed-serial-interconnects.html

1G/2.5G/5G/10G Base-T Ethernet uses fairly different sub-GHz multi-level
encoding over unshielded cabling.
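
A quick sketch of that contrast, using the commonly published line rates
(10.3125 GBd NRZ with 64b/66b coding for an SFP+ serdes lane, 800 MBd per
pair for 10GBase-T); treat the numbers as illustrative:

    # Two ways to carry 10 Gbit/s: serdes-style vs. Base-T-style.

    # SFP+/SFI serdes lane: binary NRZ at 10.3125 GBd, 64b/66b coded
    serdes_gbps = 10.3125e9 * (64 / 66) / 1e9

    # 10GBase-T: 4 pairs, 800 MBd each, ~3.125 info bits/symbol after coding
    baset_gbps = 800e6 * 4 * 3.125 / 1e9

    print(f"serdes: 10.3125 GBd binary      -> {serdes_gbps:.2f} Gbit/s")
    print(f"Base-T: 800 MBd x4, multi-level -> {baset_gbps:.2f} Gbit/s")
    # The serdes runs >10x the symbol rate but only makes binary
    # decisions; Base-T stays sub-GHz by packing more bits per symbol
    # (multi-level signaling) at the cost of heavy DSP.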

~~~
AceJohnny2
Thanks! That's exactly what I was looking for!

I'm surprised that Base-T is unshielded. I thought Cat-5 and above required
shielding over the full cable; the cable does, however, contain 4 twisted
pairs of wire. Are those pairs of wires considered the "unshielded cabling"
for the purposes of this application?

~~~
Vendan
Cat-5, Cat-5e, and Cat-6 are all unshielded by default, but there are shielded
versions for high EMI environments.

~~~
AceJohnny2
Huh, I didn't know that. Thanks!

~~~
DannyBee
They use differential signaling to reduce noise.
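
A toy model of why that works: interference couples into both wires of a
pair roughly equally, so subtracting the two at the receiver cancels it
(numbers made up for illustration):

    # Toy differential-signaling model: common-mode noise cancels
    # when the receiver subtracts the two wires of a pair.
    import random

    signal = 1.0  # transmitted differential level
    for _ in range(3):
        noise = random.uniform(-0.5, 0.5)  # couples into both wires
        v_plus = +signal / 2 + noise       # wire A
        v_minus = -signal / 2 + noise      # wire B (inverted signal)
        print(f"received: {v_plus - v_minus:+.2f}")  # always +1.00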

------
klagermkii
Is there a licensing reason that one can't get a Thunderbolt switch to
connect a few nearby computers, laptops, and a NAS and have a network? Even
if one's not going for the full 40Gbit, just at least let me move files
around on the emulated 10Gbit network card that Thunderbolt supports.

Does this just start to get too close to InfiniBand for Intel's liking?

Thunderbolt cables and adapters are (relatively) cheap, and I find it a pity
not to see them used for these kinds of cases.

~~~
walrus01
Thunderbolt is basically an extension of the PCI bus; it's a _very_ different
thing from standards-based Ethernet. Building a full network stack across it
for computer-to-computer connectivity would be kind of pointless, considering
how cheap 10GbE optical is nowadays (I just paid $28/piece for 10GbE SFP+
1310nm LX optics).

edit: and what are you going to do, reinvent the wheel by creating a
thunderbolt switch/hub? good luck getting a thing like that to capture
economies of scale for manufacturing and more than 0.05% of market share
compared to existing infiniband or ethernet based interconnects.

~~~
jamesfmilne
I agree the transceivers and fibre are cheap nowadays, as are SFP+ DAC
cables. The expense is in the Thunderbolt-to-Ethernet adapters.

You can link two Macs with Thunderbolt, but in my experience the performance
is underwhelming compared to a good 10GbE card (no send/receive offload, etc).

I'm beginning to hate Macs.

------
TheAdamist
Other than reusing existing cables, it isn't clear to me how this is going
to be cheaper than 10GBASE-T.

Can they make switches and NICs reasonably priced compared to 10G? If so,
what in the spec allows them to do that?

~~~
luma
This technology isn't designed to compete with 10gbit in the datacenter.
10GBASE-T has very strict requirements that can't be met by most installed
cable plants except for very short runs, and as a result it is almost
exclusively used within a single room/datacenter. The Netgear switch they
call out near the beginning of the article hints at the source of the
problem: 802.11ac Wave 2 now allows just over 2gbps of throughput for
current-generation APs.
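
For scale, a back-of-envelope sketch of where that figure comes from,
using the published 802.11ac VHT MCS 9 parameters (80 MHz channel,
256-QAM, short guard interval); real-world throughput is lower:

    # Rough 802.11ac Wave 2 PHY rate (80 MHz, 256-QAM, 4 spatial streams).
    data_subcarriers = 234   # 80 MHz channel
    bits_per_symbol = 8      # 256-QAM
    coding_rate = 5 / 6
    symbol_time_s = 3.6e-6   # OFDM symbol with short guard interval
    streams = 4

    per_stream = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
    print(f"per stream: {per_stream / 1e6:.0f} Mbit/s")            # ~433
    print(f"4 streams:  {streams * per_stream / 1e9:.2f} Gbit/s")  # ~1.73
    # More streams or a 160 MHz channel pushes past 2 Gbit/s -- more
    # than a single gigabit uplink can carry.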

Access-layer switches are generally already backhauled with 10gbit where
necessary, and even if they aren't, running a fiber pair into the IDF
doesn't cost a lot. There is little call for 10gbit to the desktop for most
fixed-station users. APs, on the other hand, are getting swamped; they are
widely distributed and almost universally connected into the access layer
with cat5-class cabling. Existing Wave 2 devices on the market solve this by
aggregating two cat5 cables, but this isn't ideal and oftentimes still
requires running new cables.

The goal, then, is to provide a swap-out replacement for existing APs and
switches over existing cable plants. Cisco et al. want there to be no
friction when finally talking their customers into upgrading their existing
gigabit switches, and Wave 2 provides a nice incentive to do so, but only if
you can get that bandwidth to the APs when nobody is interested in running
new cables across the entire building.

Hence, NBase-T.

~~~
walrus01
Anecdotally, from the perspective of an ISP, I almost _never_ see 10GBaseT,
even in the datacenter. I see a great deal of singlemode, and some special
multimode cables for 100GbE intra-building links. And of course cat5e/cat6
for stuff like 1000BaseT top-of-rack switches to ordinary dedicated servers
in colocation. If you tried to run cat6a or cat7 cable in some facilities
here (or ordered your technicians to do it), you'd be laughed at and shamed
by your peers for wasting overhead cable tray space and for lacking the
foresight to predict low-cost 100GbE optical in a few years.

~~~
luma
I work as a datacenter consultant and I can count on one finger the number of
customers I've seen deploy 10GBASE-T at any scale (and they aren't a customer
I would hold out as an example for how to do anything right). As others have
mentioned, 10gbit is almost exclusively deployed using optics or direct-attach
twinax.

Your point about upcoming fiber technologies is well taken and I hadn't
considered the situation in that light.

