Last I checked it was starting to get there for single NICs; it was the switches that made even high-speed in-rack deployment not make sense.
2.5 and 5 are based on the existing 10gbase-t technology and can be deployed over existing Cat5e and Cat6, whereas 10gbase-t only works reliably over existing Cat6, and sometimes only at runs under 100 ft.
802.3bz's genius is basically running 10gbase-t at half or a quarter of the clock speed, allowing lower-spec cabling and also lower-power, lower-heat parts. I don't know why this wasn't done as part of the original 10gbase-t specification (802.3an, which is now 10 years old; in comparison, 802.3ab, which defines 1000base-t, is 17).
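To make the clock-scaling arithmetic concrete, here's a quick sketch. The 800 Msymbol/s figure is the 10gbase-t per-pair signaling rate; the rest just divides it down the way 802.3bz does:

```python
# 10GBASE-T signals at 800 Msymbols/s on each of 4 pairs; 802.3bz reuses
# essentially the same PHY design at half and quarter clock for 5G and 2.5G.
SYMBOL_RATE_MBD = 800   # per pair, 10GBASE-T
DATA_RATE_GBPS = 10.0

for div, name in [(1, "10GBASE-T"), (2, "5GBASE-T"), (4, "2.5GBASE-T")]:
    print(f"{name}: {SYMBOL_RATE_MBD // div} MBd/pair -> {DATA_RATE_GBPS / div} Gbit/s")
```

Lower symbol rate means less high-frequency content on the wire, which is why the relaxed cabling requirement falls out almost for free.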
In addition, for embedded hardware, 2.5 and 5 are a better fit for common on-chip SerDes implementations when you need to wire stuff up that way.
As for power usage, a modern 10gbase-t controller run at 1/4th the clock speed (with the voltage also appropriately adjusted) would use less than 1/4th the power, and would be more power efficient than an existing modern 1gbase-t controller per Gbit/s.
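The "less than 1/4th" claim follows from the usual CMOS dynamic-power model, P ≈ C·V²·f. Here's a sketch with purely illustrative numbers (the 20% voltage reduction is an assumption, not a figure from any datasheet):

```python
# Rough CMOS dynamic-power model: P ~ C * V^2 * f.
# Illustrative numbers only: baseline 10GBASE-T PHY at normalized V and f,
# then the same design at quarter clock with the voltage scaled down ~20%.
def dynamic_power(c, v, f):
    return c * v**2 * f

c = 1.0                                # normalized switched capacitance
p_10g = dynamic_power(c, 1.0, 1.0)     # baseline at full clock
p_2g5 = dynamic_power(c, 0.8, 0.25)    # quarter clock, reduced voltage

print(p_2g5 / p_10g)                   # 0.16 -> well under 1/4 the power
print((p_2g5 / 2.5) / (p_10g / 10.0))  # 0.64 -> per-Gbit/s power vs baseline
```

Because power scales with V², any voltage headroom gained from the slower clock drops power superlinearly, which is the whole argument for quarter-clocked parts being the efficiency sweet spot.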
The real question is, when will all the cheap "we need an ethernet port but nothing fancy" controllers (which are all gigabit in anything recognizable as a computer, with 100mbit becoming more and more rare even in things not recognizable as a computer) become baby 10gbit? That's all I really care about.
A quick search for SFP+ cards that can handle 10Gbit/sec shows that the typical price is ~200 USD/card right now (this is still at least 3x the cost per card I'd prefer to see).
Given the pricing on individual cards I'd hate to consider the price of a switch, if such a thing even exists.
Same price of $15 for a 3 meter cable. Slightly more if you're scared of buying things direct from mainland China.
Essentially, I guess that is treating the wire as a capacitor and asking how many times per second an ASIC can detect whether it's at a positive (1) or negative (0) voltage, for LVDS at least.
NRZ encoding means that on a clock tick you get either a 1 or a 0. With PAM-4, a clock tick can carry a 0, 1, 2, or 3.
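In other words, PAM-4 packs 2 bits into each symbol, doubling throughput at the same symbol rate. A minimal sketch (using the naive bit-pair mapping; real links typically Gray-code the four levels so adjacent levels differ by one bit):

```python
# NRZ: 1 bit per symbol (two levels). PAM-4: 2 bits per symbol (four levels).
def to_pam4(bits):
    """Pack a list of bits into PAM-4 symbols (values 0-3), 2 bits each."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [bits[i] * 2 + bits[i + 1] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print(to_pam4(bits))   # [2, 3, 0, 1] -- 8 bits in 4 symbols instead of 8
```

The trade-off is that the four voltage levels sit closer together than NRZ's two, so PAM-4 needs a better signal-to-noise ratio to hit the same error rate.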
In fact, I once read that all these high-speed signals basically re-use the PCIe PHY. Does anyone with more knowledge of that area of tech know more about it?
1G/2.5G/5G/10G Base-T Ethernet uses fairly different sub-GHz multi-level encoding over unshielded cabling.
I'm surprised that Base-T is unshielded. I thought Cat-5 and above required shielding over the full cable; the cable does, however, contain 4 twisted pairs of wire. Are those pairs of wires what counts as the "unshielded cabling" for the purposes of this application?
Does this just start to get too close to InfiniBand for Intel's liking?
Thunderbolt cables and adapters are (relatively) cheap, and I find it a pity that they aren't used for these kinds of cases.
edit: and what are you going to do, reinvent the wheel by creating a Thunderbolt switch/hub? Good luck getting a thing like that to capture economies of scale in manufacturing, or more than 0.05% of market share compared to existing InfiniBand- or Ethernet-based interconnects.
You can link two Macs with Thunderbolt, but in my experience the performance is underwhelming compared to a good 10GbE card (no send/receive offload, etc).
I'm beginning to hate Macs.
Can they make switches & NICs reasonably priced compared to 10G? If so, what in the spec allows them to do that?
Access layer switches are generally already backhauled with 10gbit where necessary, and even if they aren't, running a fiber pair into the IDF doesn't cost a lot. There is little call for 10gbit to the desktop for most fixed-station users. APs, on the other hand, are getting swamped; they are widely distributed and are almost universally connected into the access layer with cat5-class cabling. Existing Wave 2 devices on the market solve this by aggregating two cat5 cables, but this isn't ideal and often still requires running new cables.
The goal, then, is to provide a swap-out replacement for existing APs and switches over existing cable plants. Cisco et al. want there to be no friction in finally talking their customers into upgrading their existing gigabit switches, and Wave 2 provides a nice incentive to do so, but only if you can get that bandwidth to the APs when nobody is interested in running new cables across the entire building.
Your point about upcoming fiber technologies is well taken and I hadn't considered the situation in that light.
I like to think the IT department sources our hardware on flea markets.
For the really old legacy systems, NASA actually searches (online) flea markets: http://www.geek.com/chips/nasa-needs-8086-chips-549867/
For instance, the network drives hang regularly, sometimes for several minutes. That affects everyone on the floor. Because of a never-fixed bug in Windows that also causes Windows itself to hang for minutes, it's basically coffee time whenever it happens. Everyone I know in the organisation is affected, and has been for years. You would have thought IT would do something about it, not least because they must experience it themselves. But that's without counting the "I don't give a shit" attitude that I pretty much always see in that part of the organisation.
It's not about money. If we need to make even the most minor changes, they become multi-year projects costing millions. As far as I can tell, these millions are spent on project managers who understand neither the tech nor the business and ensure that the maximum amount of confusion is maintained. They are spent on paying industry-standard salaries to bottom-of-their-class developers who were not smart enough to work for Google or Facebook and have no idea what they are doing. Some of the things I see are pretty shocking. I've faced developers who are literally mild amateurs. And I would say developers are the "elite" of the IT crowd in these organisations.
Outsourcing didn't help with the "I don't give a shit" attitude. I doubt that a guy in London having to make 10 unnecessary clicks every time he needs to do something frequent, or having his computer completely unusable for minutes, much affects a contractor in Bangalore. One would assume that a guy who actually met his "victims" would have a little more compassion, but as things stand these problems are very much virtual to those guys.
I'm not saying it's worth it, but if 100mbit works for you and you're short on cash, you can get decent managed switches for very little money. Gigabit switches aren't really at rock-bottom prices yet, for somewhat obvious reasons.
APs commonly use PoE, may be in locations where no external power is available, and are connected with lots of individual cabling in often awkward spaces.
When you start looking at things like higher cost Ruckus and Xirrus APs that might have four independent radios operating in one physical unit, on the far end of a 1000BaseT link and PoE, it absolutely can approach 1.5 Gbps aggregate throughput.
However sometimes cost isn't the only factor, like in the case of two-pair gigabit https://en.wikipedia.org/wiki/Gigabit_Ethernet#1000BASE-TX.