
I think most of us still just want 10G Ethernet to drop in price to the point where using it in a budget SAN makes sense.

Last I checked it was starting to get there for single NICs; it was the switches that made even high-speed in-rack deployments not make sense.




Does somebody have some insight into why there is such a gap between 1G Ethernet and 10G? I mean, I have gigabit internet at home; why can't I have a faster local network for a reasonable price? Is it just too soon, or is there a real technological gap?


Consumer demand for faster Ethernet has essentially stalled at the same time as wired broadband stopped getting faster. People don't expect to get > 1G broadband any time soon. Ten years ago broadband was doubling in speed regularly. Instead of moving the last mile to faster fiber, demand was channeled to iPhones with choppy 4G.


Here in Japan, the NURO Hikari service of Sony Network Communications offers FTTH with 10 Gbps downlink and 2.5 Gbps uplink speeds for around ¥6500 (~65 USD) per month. It's only available in the central wards of Tokyo and a part of Kanagawa prefecture, but you can get it right now.


My impression is that a 10Gbase-T PHY requires far more than 10x as many transistors as a 1000base-T PHY; maybe closer to 100x. This caused a death spiral: high prices lead to low volume, which keeps prices from dropping.


There is also the problem that 10GbE over copper requires interleaving, which adds latency, whereas optical doesn't. A lot of 10Gbit uses are latency-sensitive; 2.5 and 5 Gbit usage is not likely to be.

2.5 and 5 are based on the existing 10GBASE-T technology and can be deployed over existing Cat5e and Cat6, whereas 10GBASE-T only works reliably over existing Cat6 at limited distances, sometimes less than 100 ft.

802.3bz's genius is basically running 10GBASE-T at half or a quarter of the clock speed, requiring lower-spec cabling but also lower-power, lower-heat parts. I don't know why this wasn't done as part of the original 10GBASE-T specification (802.3an, which is now 10 years old; in comparison, 802.3ab, which defines 1000BASE-T, is 17).
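
A rough sketch of how that scaling works out (the symbol rate and bits-per-symbol figures below are my assumptions for illustration, not something stated above):

    # Sketch: 2.5G/5GBASE-T as 10GBASE-T run at 1/4 and 1/2 of its symbol rate.
    # Assumed figures: ~800 MBaud per pair, 4 pairs, ~3.125 info bits per symbol per pair.
    BAUD_10G = 800e6
    INFO_BITS_PER_SYMBOL_PER_PAIR = 3.125
    PAIRS = 4

    for name, scale in [("10GBASE-T", 1.0), ("5GBASE-T", 0.5), ("2.5GBASE-T", 0.25)]:
        rate_gbps = BAUD_10G * scale * INFO_BITS_PER_SYMBOL_PER_PAIR * PAIRS / 1e9
        print(f"{name}: {rate_gbps:.1f} Gbit/s")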

In addition, for embedded hardware, 2.5 and 5 are a better fit for common on-chip SerDes implementations when you need to wire stuff up that way.

As for power usage, a modern 10GBASE-T controller run at 1/4 the clock speed (with the voltage also appropriately adjusted) would use less than 1/4 the power, and still be more power-efficient, per Gbit/s, than an existing modern 1000BASE-T controller.
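
A hedged illustration of that claim using the usual dynamic-power model P ≈ C·V²·f (the capacitance and voltage numbers here are made up for the example):

    # Dynamic power scales roughly as P ~ C * V^2 * f.
    def dyn_power(c, v, f):
        return c * v ** 2 * f

    C = 1e-9                                  # effective switched capacitance (arbitrary)
    full = dyn_power(C, v=1.0, f=800e6)       # hypothetical full-speed operating point
    quarter = dyn_power(C, v=0.8, f=200e6)    # 1/4 clock, slightly lower supply voltage

    print(f"power ratio at 1/4 clock: {quarter / full:.2f}")   # ~0.16, i.e. well under 1/4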

The real question is, when will all the cheap "we need an ethernet port but nothing fancy" controllers (which are all gigabit in anything recognizable as a computer, 100mbit becoming more and more rare in anything not recognizable as a computer) become baby 10gbit? That's all I really care about.


10GBaseT only works reliably over "cat6a" and cat7 cable that costs close to $350 per 1000 ft (305 meters), whereas two-strand singlemode G.657.A2 type cable now costs less than $85 per 1000 ft, or less than cat5e. This is why most datacenter environments use fiber and not copper. Also, cat6a and cat7 will fill overhead cabling trays a great deal faster because of their huge diameter compared to singlemode patch cables.
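
Back-of-the-envelope on those prices, using the per-1000 ft figures quoted above and an arbitrary 100 m run:

    # Cost per meter and per 100 m run, from the per-1000 ft prices quoted above.
    M_PER_1000_FT = 304.8
    prices_per_1000_ft = {
        "cat6a/cat7 copper": 350.0,
        "two-strand singlemode G.657.A2": 85.0,
    }

    run_m = 100  # arbitrary example run length
    for cable, price in prices_per_1000_ft.items():
        per_m = price / M_PER_1000_FT
        print(f"{cable}: ${per_m:.2f}/m, about ${per_m * run_m:.0f} per {run_m} m run")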


Yes, this is true. I work in this industry, and I also recommend fibre over copper for these reasons. However, for rack local wiring (ie, wiring servers into top of rack switches), cat6 for 10gbase-t is acceptable, although I don't particularly care for it.


Eh, with the plethora of DAC SFP+ cables at extremely affordable prices I don't really see many people using CAT6 - especially considering the substantially higher power draw that 10GBase-T has over DACs.


This doesn't quite fulfill the role either though...

A quick search for SFP+ cards that can handle 10Gbit/sec shows that the typical price is ~200 USD/card right now (this is still at least 3x the cost per card I'd prefer to see).

Given the pricing on individual cards I'd hate to consider the price of a switch, if such a thing even exists.


We're talking about datacenter usage, not home. Also, there are plenty of older-generation cards out there at reasonable prices; the Mellanox Connect-X 2 cards I have in my home servers cost me like $40 for the pair. And no matter how you slice it, 10GbE switching is expensive right now, like, crazy expensive - to the point that I don't even bother with it and my VMWare box is connected directly to my FreeNAS box with a 1m SFP+ DAC, since a switch with even two 10G uplink ports would have cost me considerably more than my cheap 24-port TP-Link managed switch ($150).


Agree. $15 for the whole thing.

http://www.fs.com/products/30851.html

Same price of $15 for the 3-meter version. Slightly more if you're scared of buying things direct from mainland China.


It's not transistor count, it's the process node (speed) and the resulting cost of tapeouts.


Or 100G...


How fast can data transmission over copper become? What is the limit of physics - just for wire transport, not the switching logic.

Essentially, I guess that is treating the wire as a capacitor, and asking how many times per second an ASIC can detect if it's at a positive (1) or negative (0) voltage - for LVDS at least.
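
One hedged way to put a number on that limit: treat the wire as a band-limited, noisy channel and use the Shannon capacity formula C = B·log2(1 + SNR). The bandwidth and SNR figures below are illustrative guesses, not measurements of any real cable:

    import math

    # Shannon capacity: C = B * log2(1 + SNR), with SNR as a linear ratio.
    def shannon_capacity_bps(bandwidth_hz, snr_db):
        return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

    # Illustrative guesses: ~500 MHz usable bandwidth per pair at ~30 dB SNR.
    per_pair = shannon_capacity_bps(500e6, 30)
    print(f"per pair: {per_pair / 1e9:.1f} Gbit/s, 4 pairs: {4 * per_pair / 1e9:.1f} Gbit/s")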


At last year's DesignCon there was a lot of talk about switching to PAM-4 encoding versus the now-standard NRZ. This would almost double the bandwidth given the same clock speed.

NRZ encoding means that on a clock tick you get either a 1 or a 0. With PAM-4, a clock tick can be a 0, 1, 2, or 3.
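
A minimal sketch of why that roughly doubles throughput at the same clock (the 25 GBaud symbol rate is just an example, not something from the comment):

    import math

    # Bits per symbol is log2(number of distinguishable levels).
    def throughput_gbps(baud, levels):
        return baud * math.log2(levels) / 1e9

    baud = 25e9  # example symbol rate
    print(f"NRZ   (2 levels): {throughput_gbps(baud, 2):.0f} Gbit/s")
    print(f"PAM-4 (4 levels): {throughput_gbps(baud, 4):.0f} Gbit/s")
    # Same clock, twice the bits per tick; the trade-off is tighter noise margins per level.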


This sounds similar to MLC in SSD. And both feel like a step back towards analog computing.


50 Gbps per lane has been demonstrated and should be commercially available in 2018. QSFP cables will be carrying 200 Gbps.
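
The arithmetic behind the 200 Gbps figure, assuming the usual four lanes per QSFP:

    # QSFP carries four lanes; at 50 Gbps per lane that is 200 Gbps aggregate.
    lanes = 4
    gbps_per_lane = 50
    print(f"{lanes * gbps_per_lane} Gbps per QSFP cable")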


Fiber is fine.



