
New Optical Form Factors for 400 Gigabit Ethernet - walrus01
http://www.lightwaveonline.com/articles/2017/12/surveying-the-new-optical-form-factors-for-400-gigabit-ethernet.html
======
francoisLabonte
The real debate is between OSFP and QSFP-DD, as they are the only ones with a
shot at high density.

QSFP-DD is backwards compatible, so you can run your 400G port as 100G or 40G
with 4 lanes at 25G/10G. OSFP will be able to do the same thing, but only with
an OSFP-to-QSFP adapter.

But what this article fails to mention is that OSFP is the connector of the
future: it will support 8 lanes of 100G for 800G Ethernet, with good signal
integrity and the power headroom to support high-power optics for longer
distances. (This looks to be in 2020.)

So QSFP-DD is a one-generation connector...

OSFP will see us through 400G and 800G.
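
Back-of-the-envelope lane math for how the generations line up (a rough
sketch; per-lane rates are the commonly quoted nominal figures, ignoring FEC
and encoding overhead):

    # Nominal port speed = electrical lanes x per-lane rate.
    # NRZ signalling up to 25G/lane, PAM4 at 50G and 100G/lane.
    form_factors = {
        "QSFP+   (40G)": (4, 10),   # 4 lanes x 10G NRZ
        "QSFP28 (100G)": (4, 25),   # 4 lanes x 25G NRZ
        "QSFP-DD (400G)": (8, 50),  # 8 lanes x 50G PAM4
        "OSFP   (400G)": (8, 50),   # 8 lanes x 50G PAM4
        "OSFP   (800G)": (8, 100),  # 8 lanes x 100G PAM4 (the 2020-ish target)
    }

    for name, (lanes, per_lane) in form_factors.items():
        print(f"{name}: {lanes} x {per_lane}G = {lanes * per_lane}G")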

Disclaimer: I work at Arista Networks, which is bullish on OSFP.

For more info, see Andy Bechtolsheim's talk on 400G optics at OCP:

[https://www.youtube.com/watch?v=Kotu6B7AQpk](https://www.youtube.com/watch?v=Kotu6B7AQpk)

~~~
walrus01
I am also optimistic about OSFP as compared to the QSFP size for 400G. I'm not
a laser engineer, but I do have a pretty good understanding of how 400G will
be achieved over two strands of single-mode. For medium-reach optics it's
actually an 8-channel CWDM with a prism built into each optic, so the optic
body needs to be large enough to dissipate the heat from eight separate
50 Gbps (PAM4) lasers. The CFP size will obviously handle that, but it's a
true first-generation solution and is huge.

The only bad thing about OSFP is that its highly similar name will confuse the
hell out of network engineers who never see OSI layer 1 of a network and deal
only with things at OSI layer 3.

OSFP != OSPF (the routing protocol).

~~~
agoodthrowaway
100G/wave devices will be the norm moving forward, and 8-lane devices for 400G
won't be around much except in niche applications.

~~~
walrus01
For long reach, yes, but not for ISP-to-ISP interconnection... the cost of
coherent (QPSK, 8PSK, 16QAM) 100G SerDes and modulation/demodulation is
considerably higher and only justified when going a great many km. For intra-
peering-facility connections, I don't see coherent 100G (or multiples of
coherent 100G bundled by CWDM into 200G or 400G interfaces) becoming the norm
anytime soon.
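
To put rough numbers on the difference: a coherent line rate is roughly symbol
rate x bits per symbol x polarizations. A quick sketch (nominal figures; real
transponders add FEC and framing overhead):

    # Coherent line rate ~= baud x bits/symbol x polarizations.
    bits_per_symbol = {"QPSK": 2, "8PSK": 3, "16QAM": 4}
    POLARIZATIONS = 2   # dual-polarization is standard for coherent
    BAUD_GHZ = 28       # a common symbol rate for 100G-class coherent

    for mod, bits in bits_per_symbol.items():
        rate = BAUD_GHZ * bits * POLARIZATIONS
        print(f"DP-{mod} at {BAUD_GHZ} Gbaud: ~{rate} Gbps raw")
    # DP-QPSK at 28 Gbaud -> ~112 Gbps raw, i.e. a 100G payload after overhead.
    # All that DSP is what you're paying for; hence direct-detect wins in-building.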

~~~
wmf
I was looking at this the other day; it sounds like 100G-DR1 and 400G-DR4 are
not coherent, and vendors are aiming to make them a commodity. Coherent is
more like 200-600G per lambda.

~~~
walrus01
At present, 100G-LR4 is the de facto standard for ISP peering interconnections
(such as to an IX switch in the same building, or for PNIs between big players
within the same carrier hotel), because it is very low cost. The cheap
25 Gbps x 4 approach and the low-cost CWDM prism are a big part of that. Since
100G will soon move from the QSFP format down to the same size as SFP and SFP+
(fitting 48 in a 1RU-height, 17.5"-wide line card in a chassis-based Arista),
I don't foresee them becoming any less popular... not when the optics are so
affordable.

------
hesdeadjim
I remember having to schedule a 780Kb download overnight and hoping no one
picked the phone up. Or years later having to download shows in advance
(illegally because they weren’t available) because streaming wasn’t possible.

Now I casually (and legally) boot up any show I feel like within seconds at HD
resolutions. I love thinking about the sheer amount of data that is traversing
around the internet now. And it’s more fun to imagine what becomes possible
when you bump up another order of magnitude or two.

In games, for instance, a significant limiting factor for implementing large
player counts in fast-paced multiplayer comes down to bandwidth. The amount
needed per player, for the most part, scales linearly as the player count
increases (so total server bandwidth grows roughly quadratically). Add more
bandwidth (and some more CPU power) and games can evolve entirely new
experiences.

Battle royale games like PUBG are a great example, but due to bandwidth and
CPU constraints the server tick rate is much slower than in normal fast-paced
games (20 Hz vs. 60 Hz), and the experience suffers quite a bit as a result.
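
A rough sketch of the scaling (the bytes-per-entity figure is an illustrative
guess, not a measured number): each client must hear about every other player,
so per-client bandwidth grows roughly linearly with player count and total
server egress roughly quadratically.

    # Total server egress for a naive snapshot model:
    # every tick, each of N clients receives state for the other N-1 players.
    def server_egress_mbps(players, tick_hz, bytes_per_entity=60):
        per_client_bps = (players - 1) * bytes_per_entity * 8 * tick_hz
        return players * per_client_bps / 1e6

    for players, tick in [(16, 60), (100, 20), (100, 60)]:
        print(f"{players} players @ {tick} Hz: "
              f"~{server_egress_mbps(players, tick):.0f} Mbps")
    # 16 @ 60 Hz: ~7 Mbps; 100 @ 20 Hz: ~95 Mbps; 100 @ 60 Hz: ~285 Mbps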

~~~
Klinky
Multiplayer gaming has more issues with latency than a lack of raw bandwidth.

~~~
hesdeadjim
Not necessarily; depending on how the game is implemented, a player with bad
ping can be the only one who suffers the majority of the effects of their bad
connection.

I am currently working with this tech:

[https://www.photonengine.com/en-US/Quantum](https://www.photonengine.com/en-US/Quantum)

And it does deliver on its promises; impressive stuff.

~~~
Lorin
This looks neat. Do you think it'd do well with upcoming projects like Star
Citizen from CIG? Physics sims have to be limited dramatically at the moment.

------
Osiris
I wonder when we will start seeing >1 Gbps Ethernet in consumer products. I've
read about initiatives for 5 or 10 Gb consumer Ethernet. I'd love to be able
to push/pull from my NAS at >100 MB/s.
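
For reference, a quick sketch of why gigabit Ethernet tops out right around
that figure (the ~7% framing overhead is an approximation):

    # Usable file-transfer rate over Ethernet, minus protocol overhead.
    def usable_mb_per_s(link_gbps, overhead=0.07):
        return link_gbps * 1000 / 8 * (1 - overhead)  # MB/s

    for gbps in (1, 2.5, 5, 10):
        print(f"{gbps} GbE: ~{usable_mb_per_s(gbps):.0f} MB/s")
    # 1 GbE: ~116 MB/s -- right at the NAS ceiling described above.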

~~~
dweekly
It's a fair question; I suspect the reason for the lag is threefold. Consumer
internet connections rarely offer >1 Gbps. Consumer LAN applications requiring
>1 Gbps of traffic haven't emerged (for NAS/SAN, a directly connected USB 3
drive is probably the way to get the 10 Gbps transfer you're looking for
here). And WiFi, which is how most people connect their consumer devices to
the network, doesn't offer more than a gigabit of goodput in almost any
consumer configuration (e.g. 802.11ac with two spatial streams and 80 MHz
channels gets ~700 Mbps at MCS8).

That said, the NBASE-T and MGBASE-T efforts did consolidate to produce the
802.3bz standard, which can carry up to 2.5 Gbps on a Cat 5e line and 5 Gbps
on Cat 6, and we are starting to see consumer and SMB switches, APs, and NICs
that support bz.

Moral of the story: wire for Cat 6 and look to upgrade to bz in the next year
or so.

~~~
walrus01
One of the major use cases for 802.3bz is high-bandwidth WAPs, specifically
ones that _can_ actually pull more than 1 Gbps of data... achieving that with
current WiFi standards usually requires:

1) dual, separate 2.4 and 5.x GHz radios

2) 3x3 MIMO

3) 802.11ac Wave 2 (256-QAM, MU-MIMO)

4) use of really wide channels; even an 80 MHz wide 802.11ac channel won't get
near 1 Gbps aggregate throughput, so it needs to be the ridiculous 160 MHz
channel size in the 5.x GHz Part 15 frequencies (see the rate sketch after
this list)

5) all of the above combined, or more than two radios in an AP, such as
expensive high-density Xirrus or Ruckus WAPs, which can have two or three
separate 5.x GHz radios in one physical body.
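
To put numbers on items 2-4, a rough sketch of the 802.11ac PHY rate
arithmetic (standard values, short guard interval; real goodput is typically
only 50-70% of PHY):

    # PHY rate = data subcarriers x QAM bits x coding rate x streams / symbol time.
    DATA_SUBCARRIERS = {80: 234, 160: 468}  # per channel width in MHz
    SYMBOL_US = 3.6                         # short-GI symbol duration, microseconds

    def phy_mbps(width_mhz, streams, qam_bits, coding):
        return DATA_SUBCARRIERS[width_mhz] * qam_bits * coding * streams / SYMBOL_US

    # MCS9 = 256-QAM (8 bits/subcarrier) with rate-5/6 coding
    print(f"2x2, 80 MHz:  ~{phy_mbps(80, 2, 8, 5/6):.0f} Mbps PHY")   # ~867
    print(f"3x3, 80 MHz:  ~{phy_mbps(80, 3, 8, 5/6):.0f} Mbps PHY")   # ~1300
    print(f"2x2, 160 MHz: ~{phy_mbps(160, 2, 8, 5/6):.0f} Mbps PHY")  # ~1733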

~~~
dweekly
Spot on... and it is going to be a while before a typical home has those needs
or capabilities. Broader FTTH could change bz and ac Wave 2 adoption curves as
people end up annoyed that they are paying for a gigabit but not actually
receiving it.

------
samstave
What are the distance limitations at 400G?

~~~
dogecoinbase
Trans-Pacific :) But more seriously, this article is about the device-side
electrical interfaces -- the modules will transceive over any one of a number
of physical interfaces depending on the specific need (ultra-short-range for
in-rack/in-DC use, though that's a less likely application of this
technology). Most multi-lane paths like these use multi-strand optical cabling
for short range, and typically multiplex the signal using on-board DWDM
in/around the 1550 nm band for two-strand long range (which is primarily
constrained by the power available to the module). That band is used because
it can also be amplified inline (ref: erbium-doped fiber amplifier) to extend
the reach.
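
A rough loss-budget sketch of why 1550 nm plus amplification matters for reach
(launch power and receiver sensitivity below are assumed, illustrative
values):

    # Unamplified reach ~= (tx power - rx sensitivity) / fiber attenuation.
    TX_POWER_DBM = 3        # assumed launch power
    RX_SENS_DBM = -24       # assumed receiver sensitivity
    LOSS_DB_PER_KM = 0.2    # typical single-mode attenuation near 1550 nm

    budget_db = TX_POWER_DBM - RX_SENS_DBM
    print(f"Unamplified reach: ~{budget_db / LOSS_DB_PER_KM:.0f} km")
    # ~135 km; beyond that, inline EDFAs (spaced every ~80-100 km) reset the budget.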

(this is an oversimplification, I am not your network engineer, I am not a
network engineer, consult a qualified network engineer in your jurisdiction)

------
bjoli
Does anyone here work with consuming (not just passing on) these amounts of
traffic? How do you manage to get anything meaningful out of it?

~~~
walrus01
Network engineer here: nobody connects a 200 or 400GbE interface line card off
a router to a single anything... The purpose is ISP-to-ISP interconnection for
truly huge amounts of traffic -- for example, direct peering in the same
IX/carrier hotel between a huge source of content (YouTube/Google, Netflix,
etc.) and a huge ISP (Comcast, CenturyLink).

For a big ISP, it's also things like 100GbE connections from a city's pair of
core routers to slightly smaller aggregation routers.

edit: I don't know that anything "consumes" data in the way that the
questioner is asking. It's more about the aggregate amount of data. One
Netflix 4K stream to an Xbox One S is about 15.75 Mbps. Now multiply that by a
hundred thousand Netflix subscribers in a typical Comcast service area, any of
whom at any given time might be sitting around and watching Altered Carbon or
Breaking Bad.
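
A rough sketch of that aggregate (subscriber count and peak concurrency are
illustrative assumptions, not Comcast's actual numbers):

    # Aggregate streaming load for one metro service area.
    STREAM_MBPS = 15.75        # one Netflix 4K stream, per the figure above
    SUBSCRIBERS = 100_000
    PEAK_CONCURRENCY = 0.30    # assume 30% watching at once at peak

    aggregate_gbps = SUBSCRIBERS * PEAK_CONCURRENCY * STREAM_MBPS / 1000
    print(f"~{aggregate_gbps:.0f} Gbps at peak")   # ~473 Gbps: 400GbE territory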

~~~
kijiki
Cumulus (disclaimer: I co-founded it) has customers that use these:
[http://www.mellanox.com/page/products_dyn?product_family=260...](http://www.mellanox.com/page/products_dyn?product_family=260&mtag=connectx_5_en_card)

to dual-attach their servers at 100G to a pair of ToR switches. As a result,
they'd really love to have something faster than 100G for their ToR to spine
uplinks, but that isn't quite available yet.

Insanely (and awesomely), Mellanox already has these:
[http://www.mellanox.com/page/products_dyn?product_family=266...](http://www.mellanox.com/page/products_dyn?product_family=266&mtag=connectx_6_en_card)
-- a dual 200G NIC.

~~~
walrus01
100G NICs and dual port of the same version are a really good argument for the
need for more pci-express 3.0 lanes in single socket servers. The AMD epyc is
a step in the right direction. If you assume a server that might have two m.2
nvme SSD in it, those take up lanes, and then the nic will eat an x16
interface.
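
A rough sketch of the lane math (PCIe 3.0 figures are the standard 8 GT/s with
128b/130b encoding):

    # Usable PCIe 3.0 slot bandwidth vs. a dual-port 100G NIC.
    GT_PER_LANE = 8.0
    ENCODING = 128 / 130   # 128b/130b line coding

    def slot_gbps(lanes):
        return lanes * GT_PER_LANE * ENCODING

    print(f"x16 slot: ~{slot_gbps(16):.0f} Gbps usable")   # ~126 Gbps
    # A dual-port 100G NIC can source 200 Gbps, so even a full x16 slot
    # can't keep up -- and that's before the NVMe drives take their lanes.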

