
Designing 100G optical connections
https://code.facebook.com/posts/1633153936991442/designing-100g-optical-connections/
======
en4bz
[http://www.cwdm4-msa.org/](http://www.cwdm4-msa.org/)

Nowhere does this mention Facebook as a founding member. This dates back to
at least 2015.

> We created a 100G single-mode optical transceiver solution, and we're
> sharing it through the Open Compute Project.

What gives? Looks to me like someone may be taking more credit than they
deserve.

EDIT:

> The starting point for this specification is the CWDM4 MSA, a standard that
> was agreed upon in 2014 by several optical transceiver suppliers. It uses a
> wider wavelength grid (CWDM = 20 nm spacing) and, for many of the different
> technology approaches, does not require a cooler inside the module to keep
> the laser wavelength stable.

So Facebook took something that already existed and tweaked it slightly,
relaxing the constraints so that it only needs to work inside the data center.

~~~
walrus01
The big difference is that it is lower cost than the 4x25 CWDM MSA because it
is specced for 500m reach through normal G.652.D single-mode and a reasonable
number of clean, properly terminated SC and LC/UPC patch panels. It is not
specced to work at 2000 metres.

~~~
en4bz
I guess the real question is whether these cost savings actually save them
money in the long run, considering they had to do all the extra engineering
work. Or is this just to thumb their nose at the big manufacturers? Just
another case of NIH syndrome?

~~~
walrus01
If you have hundreds of optics and the difference is $1100 vs $2000 per unit,
that could be $180,000 to $200,000 saved. Enough for the cost of one serious
core router (half of a twin pair) capable of 400Gbps full duplex per slot.
500m should be more than enough for any optical path within a flat horizontal
datacenter.
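
To put rough numbers on that, a minimal sketch of the arithmetic; the per-unit
prices are the example figures above, and the unit count is an assumption for
"hundreds of optics":

    # Rough savings estimate; prices are the example figures from the
    # comment, and the quantity is an assumed stand-in for "hundreds".
    units = 200
    price_2km_msa = 2000    # USD, hypothetical 2 km CWDM4 MSA part
    price_500m_ocp = 1100   # USD, hypothetical 500 m OCP-spec part

    savings = units * (price_2km_msa - price_500m_ocp)
    print(f"Total saved: ${savings:,}")  # -> Total saved: $180,000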

Facebook didn't do any extra engineering work; they just specced less
sensitive Rx parts with the optics OEMs, and a 1 dB less powerful Tx.

~~~
crispyambulance
> Facebook didn't do any extra engineering work...

I guess that depends on how you define "engineering."

They're basically trying to create a new class of transceiver. It remains to
be seen if this will take off or not, but since it is part of the OCP effort,
the chances are good that it will be taken seriously by QSFP vendors.

OCP is generating a lot of activity and change on the networking side. Whether
it just becomes a race to the bottom where only the giant suppliers survive,
or whether it creates a new ecosystem with more players and interesting
technology, remains to be seen.

~~~
mstresh
I think it's great that a data center operator is willing to relax their
requirements. For too long we've been designing against telecom specs and
operating environments.

I think in order to bring more OEM vendors in, we need to see the other big
players also accept the relaxed specs.

Hopefully, we don't end up with another dozen different 100G or 200G MSAs that
work from 15-55 °C.

------
thomseddon
If you're wondering what this kind of optic costs the rest of us:
[http://www.fs.com/products/65219.html](http://www.fs.com/products/65219.html)
(£1,081)

Although you'd pay at least 10-20x that if it had a big vendor name on the
top.

~~~
ericd
Ha awesome! Is this switch-to-switch only, or are there 100 gig optical
connectors for servers as well? I like the 2 km max cable run length.

~~~
thomseddon
Looks like you can get 100Gb NICs for servers, e.g. [https://www.hpe.com/uk/en/product-catalog/servers/server-ada...](https://www.hpe.com/uk/en/product-catalog/servers/server-adapters/pip.hpe-100gb-1-port-op101-qsfp28-x8-pcie-gen3-with-intel-omni-path-architecture-adapter.1008830960.html)
(Intel, Mellanox etc. also seem to have their own)

On the distance, there are other specs that support 100Gb up to 20km on there!

~~~
ericd
Haha for when you need another DC that's sort of across town, or you're trying
to add artificial latency intra DC with enormous coils of fiber.

Not too unreasonably expensive - roughly in line with a GeForce 1080, and
that's after the Official HP Hardware markup. I wonder how much the
switch/cabling is...

~~~
penagwin
> trying to add artificial latency intra DC with enormous coils of fiber.

Do people really do this?! (I don't have DC experience, serious question)

~~~
Dylan16807
I believe that's a reference to IEX's magic shoebox. 61 kilometers of cable
that provide 350 microseconds of delay for stock market voodoo purposes.
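
For a rough sanity check on those numbers, a quick sketch; the fiber group
index here is an assumed typical value:

    # delay = length * n_group / c
    c = 3.0e8        # m/s, speed of light in vacuum
    n_group = 1.47   # assumed group index of single-mode fiber
    coil_m = 61_000  # ~61 km of coiled fiber

    delay_us = coil_m * n_group / c * 1e6
    print(f"~{delay_us:.0f} us of one-way propagation delay")
    # -> ~299 us from propagation alone; the quoted 350 us figure
    #    presumably includes some extra length or equipment delay.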

------
jpmattia
Note that "100G optical connections" is physically implemented as "4x25G
optical connections". The 4 wavelengths are coarsely spaced, which allows
cheaper optical components to be employed. (As opposed to long-haul optics,
where it is economical to squeeze many more wavelengths on the same fiber with
costlier optics.)
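
As a concrete sketch of the lane arithmetic: the grid centers below are the
standard CWDM4 wavelengths, and 25.78125 Gb/s is the usual 100GbE serialized
lane rate after 64b/66b line coding:

    # 100GbE CWDM4: four coarsely spaced lanes on one single-mode pair.
    lane_wavelengths_nm = [1271, 1291, 1311, 1331]  # 20 nm CWDM spacing
    lane_rate_gbps = 25.78125                       # 25G payload + 64b/66b overhead

    aggregate = len(lane_wavelengths_nm) * lane_rate_gbps
    print(f"{len(lane_wavelengths_nm)} lanes x {lane_rate_gbps} Gb/s "
          f"= {aggregate} Gb/s serialized on one fiber pair")
    # -> 4 lanes x 25.78125 Gb/s = 103.125 Gb/s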

~~~
kpil
I must say that I'm fascinated by the fact that each bit is around 1 cm long
while in transit, at near the speed of light (in vacuum).

Single-channel 100 Gb/s would mean 2-3 mm long bits. Imagine that :-)
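
The arithmetic behind that, as a quick sketch; the group index of ~1.47 for
single-mode fiber is an assumed typical value:

    # Physical bit length in flight: v * T, with T = 1 / bit rate.
    c = 3.0e8       # m/s in vacuum
    n_group = 1.47  # assumed group index of single-mode fiber

    for rate_gbps in (25, 100):
        bit_time = 1.0 / (rate_gbps * 1e9)   # seconds per bit
        mm_vacuum = c * bit_time * 1e3
        mm_fiber = (c / n_group) * bit_time * 1e3
        print(f"{rate_gbps} Gb/s: {mm_vacuum:.1f} mm in vacuum, "
              f"{mm_fiber:.1f} mm in fiber")
    # -> 25 Gb/s: 12.0 / 8.2 mm; 100 Gb/s: 3.0 / 2.0 mm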

~~~
walrus01
Single-channel 100GbE is already done, but not with OOK; it's coherent
modulated QPSK, 16QAM, or 64QAM on a single wavelength. It occupies a wider
channel in THz than a single-wavelength 1550 nm 10GbE OOK circuit, though.

------
madengr
The OCP link loss is 3.5 dB @ 500m, which would be 9.5 dB @ 2000m.

The MSA is 5 dB @ 2000m, so where is the additional loss coming from in the
OCP spec? Different connectors?

~~~
ADefenestrator
It's coming from loosened tolerances. Higher/lower temperatures may mean
higher loss. I'm assuming there's a bit more tolerance in the alignment of the
up optics/connectors as well. So, if you get lucky it might well be 5db@2km,
but it's not guaranteed/required by spec.
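
A toy link-budget decomposition makes the margin question concrete; the
attenuation and per-connector figures below are assumed typical values, not
numbers from either spec:

    # total loss ~= connector losses + fiber attenuation * distance
    atten_db_per_km = 0.4   # assumed G.652.D loss near 1310 nm
    connector_db = 0.5      # assumed loss per mated connector pair
    n_connectors = 2

    def ideal_loss(km):
        return n_connectors * connector_db + atten_db_per_km * km

    for km in (0.5, 2.0):
        print(f"{km} km: ~{ideal_loss(km):.1f} dB best-case loss")
    # -> ~1.2 dB and ~1.8 dB, well under the spec budgets
    #    (3.5 dB @ 500 m, 5 dB @ 2 km); the remaining headroom is
    #    what absorbs temperature, aging, and connector tolerances.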

------
z3t4
This high bandwidth enables some interesting consumer services. The latency is
also very good: less than a millisecond over 10 km, which is ten times faster
than the hop from your computer to your monitor.
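
A quick sketch of the propagation math; the fiber group index of ~1.47 is an
assumed typical value:

    # One-way delay over fiber: distance / (c / n_group).
    c = 3.0e8       # m/s in vacuum
    n_group = 1.47  # assumed for single-mode fiber

    delay_us = 10_000 / (c / n_group) * 1e6   # 10 km run
    print(f"10 km of fiber: ~{delay_us:.0f} us one way")
    # -> ~49 us, comfortably under a millisecond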

