IEEE P802.3bz Approval for 2.5G/5G Ethernet (anandtech.com)
103 points by cm2187 on Oct 3, 2016 | 55 comments



I think most of us still just want 10G Ethernet to drop in price to the point where using it in a budget SAN makes sense.

Last I checked it was starting to get there, for single NICs; it was the switches that made even high-speed in rack deployment not make sense.


Does somebody have some insight about why there is such a gap between 1G ethernet and 10G ? I mean I have gigabit internet at my home why can't I have a faster local network for a reasonable price ? Is it just too soon or there is a real technological gap ?


Consumer demand for faster ethernet has essentially stalled at the same time as wired broadband stopped getting faster. People don't expect to get > 1G broadband any time soon. 10 years ago broadband was doubling in speed regularly. Instead of moving the last mile to faster fiber, demand was channeled into iPhones with choppy 4G.


Here in Japan, the NURO Hikari service of Sony Network Communications offers FTTH with 10 Gbps downlink and 2.5 Gbps uplink speeds for around ¥6500 (~65 USD) per month. It's only available in the central wards of Tokyo and a part of Kanagawa prefecture, but you can get it right now.


My impression is that a 10Gbase-T PHY requires far more than 10x as many transistors as a 1000base-T PHY; maybe closer to 100x. This caused a death spiral where high prices lead to low volume that prevents prices from dropping.


There is also the problem that 10gbe over copper requires interleaving, whereas optical doesn't. A lot of 10gbit uses are latency sensitive; 2.5 and 5 usage is not likely to be.

2.5 and 5 are based on the existing 10gbase-t technology and can be deployed over existing Cat5e and Cat6, whereas 10gbase-t only works reliably over existing Cat6 for short runs, sometimes under 100ft.

802.3bz's genius is basically running 10gbase-t at half or a quarter of the clock speed, requiring lower-spec cabling but also lower-power, lower-heat parts. I don't know why this wasn't done as part of the original 10gbase-t specification (802.3an, which is now 10 years old; in comparison, 802.3ab, which defines 1gbase-t, is 17).
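A rough sketch of the rate scaling (Python; purely illustrative arithmetic, not anything from the spec itself):

```python
# Illustrative only: NBASE-T (802.3bz) reuses the 10GBASE-T PHY design
# at reduced symbol rates, so the nominal data rate scales with the clock.
BASE_RATE_GBPS = 10.0  # 10GBASE-T nominal data rate

# Clock (symbol-rate) fraction -> resulting nominal data rate
for fraction in (1.0, 0.5, 0.25):
    print(f"{fraction} x clock -> {BASE_RATE_GBPS * fraction} Gbit/s")
```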

In addition, for embedded hardware, 2.5 and 5 are a better fit for common on-chip SerDes implementations when you need to wire stuff up that way.

As for power usage, a modern 10gbase-T controller run at 1/4 the clock speed (with the voltage also appropriately adjusted) would use less than 1/4 the power, and would be more power efficient, per gbit/sec, than an existing modern 1gbase-t.
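As a first-order sanity check (Python; the normalized capacitance and voltage figures below are made up for illustration, using the standard P ≈ C·V²·f dynamic-power model):

```python
# Hedged sketch: first-order CMOS dynamic power, P ~ C * V^2 * f.
# All values are normalized/illustrative, not real PHY numbers.
def dynamic_power(c_eff, v, f):
    """First-order switching power of a CMOS block."""
    return c_eff * v**2 * f

p_full = dynamic_power(1.0, 1.0, 1.0)      # full-rate 10G mode, normalized
p_quarter = dynamic_power(1.0, 0.8, 0.25)  # quarter clock, slightly lower V
print(p_quarter / p_full)  # 0.16 -> well under 1/4 of full-rate power
```

Because power scales with V² as well as f, dropping both gives a better-than-linear saving.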

The real question is: when will all the cheap "we need an ethernet port but nothing fancy" controllers (which are all gigabit in anything recognizable as a computer, with 100mbit becoming rarer and rarer even in things that aren't) become baby 10gbit? That's all I really care about.


10GBaseT only works reliably over "cat6a" and cat7 cable that costs close to $350 per 1000 ft (305 meters), whereas two strand singlemode G.657.A2 type cable now costs less than $85 per 1000 ft, or less than cat5e. This is why most datacenter environments use fiber and not copper. Also cat6a and cat7 will fill overhead cabling trays a great deal faster because of its huge diameter compared to singlemode patch cables.
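Back-of-the-envelope per-meter costs from those quoted figures (Python; illustrative arithmetic only):

```python
# Per-meter cost from the quoted per-spool prices above.
METERS_PER_SPOOL = 305  # ~1000 ft

cat6a_per_m = 350 / METERS_PER_SPOOL  # $350 per 1000 ft of cat6a/cat7
smf_per_m = 85 / METERS_PER_SPOOL     # $85 per 1000 ft of 2-strand singlemode
print(f"cat6a: ${cat6a_per_m:.2f}/m, singlemode: ${smf_per_m:.2f}/m")
```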


Yes, this is true. I work in this industry, and I also recommend fibre over copper for these reasons. However, for rack local wiring (ie, wiring servers into top of rack switches), cat6 for 10gbase-t is acceptable, although I don't particularly care for it.


Eh, with the plethora of DAC SFP+ cables at extremely affordable prices I don't really see many people using CAT6 - especially considering the substantially higher power draw that 10GBase-T has compared to DACs.


This doesn't quite fulfill the role either though...

A quick search for SFP+ cards that can handle 10Gbit/sec shows that the typical price is ~200 USD/card right now (this is still at least 3x the cost per card I'd prefer to see).

Given the pricing on individual cards I'd hate to consider the price of a switch, if such a thing even exists.


We're talking about datacenter usage, not home. Also, there are plenty of older-generation cards out there at reasonable prices; the Mellanox ConnectX-2 cards I have in my home servers cost me like $40 for a pair. And no matter how you slice it, 10GbE switching is expensive right now, like, crazy expensive - to the point that I don't even bother with it. My VMware box is connected directly to my FreeNAS box with a 1M SFP+ DAC, as a switch with even two 10GbE uplink ports would have cost me considerably more than my cheap 24-port TP-Link managed switch ($150).


Agree. $15 for the whole thing.

http://www.fs.com/products/30851.html

Same price of $15 for the 3 meter length. Slightly more if you're scared of buying things direct from mainland China.


It's not transistor count, it's the process node (speed) and the resulting cost of tapeouts.


Or 100G...


How fast can data transmission over copper become? What is the limit of physics - just for wire transport, not the switching logic.

Essentially, I guess that is treating the wire as a capacitor, and asking how many times per second an ASIC can detect if it's at a positive (1) or negative (0) voltage - for LVDS at least.


At last year's DesignCon there was a lot of talk about switching to PAM-4 encoding versus the now-standard NRZ. This would almost double the bandwidth given the same clock speed.

NRZ encoding means on a clock tick you either get a 1 or a 0. With PAM-4 a clock tick can be a 0, 1, 2, or 3.
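In other words, bits per symbol is log2 of the number of signal levels (quick Python sketch):

```python
import math

# Bits carried per symbol for an m-level line code: log2(m).
def bits_per_symbol(levels):
    return math.log2(levels)

nrz = bits_per_symbol(2)   # NRZ: two levels -> 1 bit/symbol
pam4 = bits_per_symbol(4)  # PAM-4: four levels -> 2 bits/symbol
print(pam4 / nrz)  # 2.0 -> double the data rate at the same symbol clock
```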


This sounds similar to MLC in SSD. And both feel like a step back towards analog computing.


50 Gbps per lane has been demonstrated and should be commercially available in 2018. QSFP cables will be carrying 200 Gbps.


Fiber is fine.


Running Cat6a through the older buildings here at NASA Goddard can be a nightmare. I can see us using this, especially for buildings that are planned for replacement down the road. And let's not talk about getting funding for Cat 6a upgrades, infrastructure money is always in short supply.


IMHO if you're going to go to the trouble of recabling a NASA research facility, run singlemode everywhere. It's really cheap, easy enough to terminate, and if the light path is done correctly is future proof to 100GbE, 400GbE and 1TbE. Singlemode fiber circuits built in very ordinary datacenters today carrying 1 x 10GbE LX circuit will function just fine for coherent 100GbE QPSK as long as the fiber termination is OK and the connectors are reasonably clean.


All of the 10Gbit stuff I'm aware of here at Goddard is over fibre. I assume it's single mode, but that isn't really my area of expertise. I mentioned Cat 6a above because I know that some projects here are using it, I didn't mean to imply that it was the preferred media for 10Gbit here generally.


It's crazy to think of the high-speed signals we're now able to send over copper. Think about it: 5Gbit used to only be possible over very expensive hardware and very specialized/isolated cabling. Now it's all over the place: it started with PCIe, then Thunderbolt, then USB3, now Ethernet. And this is thanks to advances in silicon technology that allow us to manufacture PHYs that are both fast enough to send/receive these signals and cheap enough to include in consumer hardware. It's not just Moore's law that allows this, but also advances in signal processing.

In fact I once read that all these high-speed signals basically reuse the PCIe PHY. Does anyone with more knowledge of that area of tech know more about it?


PCIe, SATA, SAS, Fibre Channel, Infiniband, and data center flavors of Ethernet are all based on very similar serdes technology that uses multi-GHz binary encoding over shielded cabling. http://www.design-reuse.com/articles/10541/multi-gigabit-ser...

1G/2.5G/5G/10G Base-T Ethernet uses fairly different sub-GHz multi-level encoding over unshielded cabling.


Thanks! That's exactly what I was looking for!

I'm surprised that Base-T is unshielded. I thought Cat-5 and above required shielding over the full cable; the cable does, however, contain 4 twisted pairs of wire. Are those pairs of wires considered the "unshielded cabling" for the purposes of this application?


Cat-5, Cat-5e, and Cat-6 are all unshielded by default, but there are shielded versions for high EMI environments.


Huh, I didn't know that. Thanks!


They use differential signaling to reduce noise.
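A toy model of why that works: noise couples onto both wires of the pair roughly equally, and the receiver subtracts one from the other, so the common-mode term cancels (Python; the voltages are illustrative):

```python
# Toy model: common-mode noise couples equally onto both wires of a
# twisted pair; the differential receiver takes the difference.
signal = 0.5  # differential signal amplitude (volts, illustrative)
noise = 0.3   # common-mode noise picked up along the cable

wire_p = +signal / 2 + noise  # positive leg
wire_n = -signal / 2 + noise  # negative leg

received = wire_p - wire_n    # differential receiver output
print(round(received, 6))  # 0.5 -> the common-mode noise term is gone
```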


Is there a licensing reason that one can't get a Thunderbolt switch to connect a few close computers, laptops, and a NAS and be able to have a network? Even if one's not going for the full 40Gbit, just at least let me move files around on an emulated 10Gbit network card as Thunderbolt supports.

Does this just start to get too close to InfiniBand for Intel's liking?

Thunderbolt cables and adapters are (relatively) cheap and I find it a pity not seeing it used for these kinds of cases.


Thunderbolt is basically an extension of the PCI bus, it's a very different thing than standards based Ethernet. The effort required to build a full network stack across it for computer-to-computer connectivity would be kind of useless, considering how cheap 10GbE optical is nowadays (I just paid $28/piece for 10GbE SFP+ 1310nm LX optics).

edit: and what are you going to do, reinvent the wheel by creating a thunderbolt switch/hub? good luck getting a thing like that to capture economies of scale for manufacturing and more than 0.05% of market share compared to existing infiniband or ethernet based interconnects.


I agree the transceivers and fibre are cheap nowadays, as are SFP+ DAC cables. The expense is the Thunderbolt to Ethernet adapters.

You can link two Macs with Thunderbolt, but in my experience the performance is underwhelming compared to a good 10GbE card (no send/receive offload, etc).

I'm beginning to hate Macs.


Thunderbolt is an encapsulation protocol. By default it carries displayport and PCIe, but it can also carry Ethernet frames. This is already implemented on OSX, just connect two macs together with a thunderbolt cable and you'll see a new interface appear with a link-local address. People have already used this for networking by using a mac with lots of thunderbolt ports as a router.


Other than reusing existing cables, how this is going to be cheaper than 10GBASE-T isn't clear to me.

Can they make switches and NICs reasonably priced compared to 10G? If so, what in the spec allows them to do that?


This technology isn't designed to compete with 10gbit in the datacenter. 10GBASE-T has very strict requirements that can't be met by most installed cable plants except for very short runs, and as a result is almost exclusively used within a single room/datacenter. The Netgear switch they call out near the beginning of the article hints towards the source of the problem: 802.11ac Wave 2 now allows for just over 2gbps of throughput for current-generation APs.

Access layer switches are generally already backhauled with 10gbit where necessary, and even if they aren't, running a fiber pair into the IDF doesn't cost a lot. There is little call for 10gbit to the desktop for most fixed-station users. APs on the other hand are getting swamped; they are widely distributed and almost universally connected into the access layer with cat5-class cabling. Existing Wave 2 devices on the market solve this by aggregating two cat5 cables, but this isn't ideal and oftentimes still requires running new cables.

The goal then is to provide a swap-out replacement for existing APs and switches over existing cable plants. Cisco, et al want there to be no friction to finally talking their customers into an upgrade for their existing gigabit switches, and Wave 2 provides a nice incentive to do so, but only if you can get that bandwidth to the APs when nobody is interested in running new cables across the entire building.

Hence, NBase-T.
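To make the bottleneck concrete, a hedged sketch (Python; the 1.73 Gbit/s figure is the nominal 4-stream 80 MHz 802.11ac PHY rate, used here as an assumed upper bound rather than a measured throughput):

```python
# Hedged sketch: does a Wave 2 AP's aggregate demand fit a given uplink?
AP_AGGREGATE_GBPS = 1.73  # assumed peak PHY rate, not measured throughput

for uplink in (1.0, 2.5, 5.0):
    verdict = "fits" if uplink >= AP_AGGREGATE_GBPS else "bottleneck"
    print(f"{uplink} Gbit/s uplink: {verdict}")
```

On these assumptions, gigabit is the one uplink that can't keep up, which is exactly the gap 2.5G fills over existing cable.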


Anecdotally, from the perspective of an ISP, I almost never see 10GBaseT. Even in the datacenter. I see a great deal of singlemode, and some special multimode cables for 100GbE intra-building links. And of course cat5e/cat6 for stuff like 1000BaseT top of rack switches to ordinary dedicated servers in colocation. If you tried to run cat6a or cat7 cable in some facilities here (or order your technicians to do it) you'd be laughed at and shamed by your peers for being wasteful of overhead cable tray space, and lacking the foresight to predict low cost 100GbE optical in a few years.


I work as a datacenter consultant and I can count on one finger the number of customers I've seen deploy 10GBASE-T at any scale (and they aren't a customer I would hold out as an example for how to do anything right). As others have mentioned, 10gbit is almost exclusively deployed using optics or direct-attach twinax.

Your point about upcoming fiber technologies is well taken and I hadn't considered the situation in that light.


I've mostly seen 10GbaseT from top-of-rack to the server. Sure, you could go twin-ax copper too, but if your servers have 10GbaseT as an option, no need to go to SFP+. That's great if you're doing rolling upgrades of networking gear ahead of server refreshes so you need to support 1GbaseT at the same time, too.


I wouldn't discount the potential huge cost wins of reusing existing cables, both from a cabling installation cost perspective, and from the ability to perform a phased migration. Even at cost parity w/ 10G Ethernet it would still be a net savings versus re-cabling.


The cost to re-cable an old building that has cat5e trapped in the walls and inside suspended ceilings is immense. There are a LOT of buildings that were cabled with ordinary cat5e UTP by low voltage contractors in the 1999-2002 era.


In theory there's less signal processing required for the lower throughput, but I doubt that translates to much savings in practice (and the Aquantia PHY still supports 10G mode so there's no savings there). The price may come down due to higher volume, though. The perception that 2.5G/5G is only for APs may mean that few or no NICs will ever be released.


Everything can be a bit slower and I think it is using a simpler encoding scheme on the wire, but even if that doesn't translate to a noticeable price difference on switch ports it still has a lot of benefit for the target market (which for now is pretty clearly wireless). 10GbE switch ports are cheap in comparison to large-scale cabling changes, or even worse running power to all those spots served by PoE before.


My employer still runs a 100Mbit network. Which is actually fine since the network drives top 3MB/s when everyone's not using them... At least they upgraded me from Windows XP last month. It was kind of overdue, not because it wasn't supported, but rather because I had more RAM available on my smartphone than on my $2000 workstation.

I like to think the IT department sources our hardware on flea markets.


> I like to think the IT department sources our hardware on flea markets.

For the really old legacy systems, NASA actually searches (online) flea markets: http://www.geek.com/chips/nasa-needs-8086-chips-549867/


That raises interesting "chain of custody" problems if some of these chips are reprogrammable and are coming from obscure online resellers.


Why? It doesn't even save money to use 100Mbit. You could argue staying on XP is actually costing more money.


Bureaucratic processes, low work standard, "good enough for the business". IT in a large financial company is a pretty appalling thing to observe. Everything that would take minutes to fix in a small organisation is taking years.

For instance the network drives hang regularly, sometimes for several minutes. That affects everyone on the floor. Thanks to a Windows bug that was never fixed, this also causes Windows itself to hang for minutes, so basically when it happens it's coffee time. Everyone I know in the organisation is affected, and has been for years. One would have thought that IT would do something about it, not least because they must experience it themselves. And that's not counting the "I don't give a shit" attitude that I pretty much always see in that part of the organisation.

It's not about money. If we need to make even the most minor changes, it becomes a multi-year project costing millions. As far as I can tell these millions are spent on project managers who understand neither the tech nor the business and ensure that the maximum amount of confusion is maintained. They are spent on paying industry-standard salaries to bottom-of-their-class developers who were not smart enough to work for Google or Facebook, and have no idea what they are doing. Some of the things I see are pretty shocking. I have faced developers who are literally mild amateurs. And I would say developers are the "elite" of the IT crowd in these organisations.

Outsourcing didn't help with the I-don't-give-a-shit attitude. I doubt that the fact that a guy in London has to make 10 unnecessary clicks every time he needs to do something frequently, or has his computer completely unusable for minutes, much affects a contractor in Bangalore. One would assume that a guy who actually meets his "victims" would have a little more compassion, but as things stand these problems are very much virtual to those guys.


100Mbit can save loads of money if you're fine with used hardware. You can get old Dell Powerconnect 3548s for next to nothing and it's a decent managed switch for SMB use. They're cheap to the point where you don't care too much if they ever do break because you can easily buy double for very little cash and always have hardware on hand to replace anything.

I'm not saying it's worth it but if 100Mbit works for you and you're short on cash you can get decent managed switches for very little cash. Gigabit switches aren't really at rock bottom prices for somewhat obvious reasons.


From the article it sounded like PoE, general power consumption and cable length were the issues.


Quite right, but those points seem more like PR from the 2.5G/5G group. Does anyone really need PoE and >1Gb speeds? Or care that copper cable lengths are limited to 100m?


The target is pretty clearly wireless. We are reaching a point in WLAN tech where a 1Gbit uplink is starting to become a bottleneck (especially if an AP is creating multiple networks, or one cable serves multiple APs, but in theory a single 11ac network can do >1Gb/s).

APs commonly use PoE, might be in locations where there is no external power available and are connected with lots of individual cabling in often awkward spaces.


802.11ac APs need PoE+ and 2.5G. That's really the only purpose of the standard.


Not all 802.11ac APs. Just the more expensive dual-radio/3x3 MIMO and dual-band ones. Most ordinary 802.11ac APs never get anywhere near 650Mbps real-world speeds, let alone 1000 Mbps.

When you start looking at things like higher cost Ruckus and Xirrus APs that might have four independent radios operating in one physical unit, on the far end of a 1000BaseT link and PoE, it absolutely can approach 1.5 Gbps aggregate throughput.


I think that when you serve multiple users, it is much easier to use up 1Gb/s of bandwidth wirelessly.


I don't think this is meant solely for data centers, more like offices, branches and home where it'd be cheaper to simply replace the switch/NICs than to rewire the entire place where they're in the walls. The savings are in the labor costs.


The article mentions power usage, and I would bet cost is also a factor - this is the reason many cheap SoCs only have USB2. One day I'm sure the cost will be low enough such that it will start appearing on "prosumer" equipment, then filter down.

However sometimes cost isn't the only factor, like in the case of two-pair gigabit https://en.wikipedia.org/wiki/Gigabit_Ethernet#1000BASE-TX.



