
Researchers achieve optical data transmission speed of 44.2 terabits per second - martonlanga
https://www.independent.co.uk/life-style/gadgets-and-tech/news/internet-speed-world-record-fastest-download-a9527236.html
======
rbinv
Link to paper (PDF):
[https://www.nature.com/articles/s41467-020-16265-x.pdf](https://www.nature.com/articles/s41467-020-16265-x.pdf)

~~~
martonlanga
Abstract:

> Micro-combs – optical frequency combs generated by integrated micro-cavity
> resonators – offer the full potential of their bulk counterparts, but in an
> integrated footprint. They have enabled breakthroughs in many fields
> including spectroscopy, microwave photonics, frequency synthesis, optical
> ranging, quantum sources, metrology and ultrahigh capacity data
> transmission. Here, by using a powerful class of micro-comb called soliton
> crystals, we achieve ultra-high data transmission over 75 km of standard
> optical fibre using a single integrated chip source. We demonstrate a line
> rate of 44.2 Terabits s−1 using the telecommunications C-band at 1550 nm
> with a spectral efficiency of 10.4 bits s−1 Hz−1. Soliton crystals exhibit
> robust and stable generation and operation as well as a high intrinsic
> efficiency that, together with an extremely low soliton micro-comb spacing
> of 48.9 GHz enable the use of a very high coherent data modulation format
> (64 QAM - quadrature amplitude modulated). This work demonstrates the
> capability of optical micro-combs to perform in demanding and practical
> optical communications networks.
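For context on the 64-QAM and spectral-efficiency figures: 64-QAM carries log2(64) = 6 bits per symbol, and with two polarizations the raw spectral efficiency is 12 b/s/Hz, so the reported 10.4 b/s/Hz implies roughly 13% of the raw rate goes to coding and other overheads. A toy sketch (not the paper's actual transmitter DSP):

```python
import math

# Toy square 64-QAM constellation: an 8x8 grid of complex symbols.
levels = [-7, -5, -3, -1, 1, 3, 5, 7]
constellation = [complex(i, q) for i in levels for q in levels]

bits_per_symbol = int(math.log2(len(constellation)))   # 6 bits/symbol
raw_se_dual_pol = 2 * bits_per_symbol                  # 12 b/s/Hz over two polarizations

# The abstract reports a net 10.4 b/s/Hz; 10.4 / 12 ~ 0.87, i.e. ~13%
# of the raw rate is consumed by FEC and other overheads.
overhead_fraction = 1 - 10.4 / raw_se_dual_pol
```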

------
dasudasu
It is not particularly hard to do a "hero" experiment like this. Shannon
limits to fiber transmission have pretty much been reached experimentally a
long time ago. Muxing several wavelengths together is also of course the
backbone of fiber optics transmission since its very origin. The current buzz
in the field is to use micro-combs like they did as opposed to an array of
lasers to provide the multiple wavelengths - but it still comes with its
particular set of challenges to make it practical. The micro-comb provides the
other wavelengths from a single source through a nonlinear process.
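For a rough sense of where that Shannon limit sits, here's a back-of-the-envelope sketch using only the figures from the abstract; the occupied bandwidth and required SNR below are derived for illustration, not quoted from the paper:

```python
import math

line_rate = 44.2e12    # 44.2 Tb/s line rate (from the abstract)
spectral_eff = 10.4    # 10.4 b/s/Hz spectral efficiency (from the abstract)

# Occupied bandwidth implied by rate / spectral efficiency:
bandwidth_hz = line_rate / spectral_eff      # ~4.25 THz of C-band

# Shannon for dual-polarization transmission: SE = 2 * log2(1 + SNR),
# so the SNR needed to hit 10.4 b/s/Hz at the Shannon limit is:
snr_linear = 2 ** (spectral_eff / 2) - 1
snr_db = 10 * math.log10(snr_linear)         # roughly 15.5 dB
```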

My understanding from just looking quickly at the paper is that they don't
modulate all the wavelengths independently, meaning that they duplicate the
info they send several times to reach that high terabit rate. The laser source
is only one part of a transceiver, and once you have 400+ independent
modulators/receivers, the laser source becomes a much smaller concern when it
comes to making it practical. A conventional laser source can be made very
compact too (in a semiconductor platform) and integrated with the rest of the
transceiver on the same chip. This is still where the industry is putting its
efforts. These micro-combs come with some disadvantages too, relating to
stability, low SNR, and uneven power among the wavelengths (which then needs
to be equalized).

~~~
cycomanic
I agree with you that microcombs come with their specific kind of challenges
(and SNR is the big one), however making semiconductor lasers small enough to
fit 100 on a single chip poses lots of challenges too (in particular thermal
management and wavelength stability). That said, combs (not necessarily
microcombs) offer opportunities for additional functionality/optimizations.
Because the comb lines are locked to each other (individual lasers normally
have small wavelength fluctuations), you can space channels even closer
together, as well as process multiple channels at the same time.

Regarding your comment about these hero experiments not being hard, I would
argue it's actually the other way around: we are now so close to the limits
that it is becoming incredibly hard to demonstrate further gains. Also,
regarding not modulating lines independently, this is how everyone (even the
industry labs) demonstrates these systems; using 100 independent transceivers
would be prohibitively expensive. Moreover, research has shown that you
actually incur a penalty from using this approach, so the demonstration is a
lower bound on what could be achieved with individual tx modules.

~~~
dasudasu
Modulating individual lines is the only way for such a scheme to become
practical in a real environment and achieve the claimed data rates. Making
hundreds of modulators and receivers fit on a single chip is about as hard as
making hundreds of lasers fit on a single chip, hence why it's not
realistically being pursued by the industry. My point is that if you already
require separate chips for the rest of the transceiver, integrating the laser
itself becomes much less of an issue, and the benefits of a single laser
source common to all are much more muted.

------
PopeDotNinja
And even at that speed, I bet pre-roll ads on videos will still end up buffering.

~~~
Denvercoder9
Yes, we need better software (engineers).

~~~
tudorw
And we need better consumers, kidding, we love you.

------
vaer-k
According to
[https://hpbn.co/primer-on-latency-and-bandwidth/#bandwidth-in-core-networks](https://hpbn.co/primer-on-latency-and-bandwidth/#bandwidth-in-core-networks)

> As of early 2010, researchers have been able to multiplex over 400
> wavelengths with the peak capacity of 171 Gbit/s per channel, which
> translates to over 70 Tbit/s of total bandwidth for a single fiber link!

So why/how is 44 Tbps an improvement?

~~~
vaer-k
Ah ok, so according to the article
([https://www.nature.com/articles/s41467-020-16265-x.pdf](https://www.nature.com/articles/s41467-020-16265-x.pdf)),

> To dramatically increase bandwidth capacity, ultrahigh capacity transmission
> links employ massively parallel wavelength division multiplexing

and

> All of this is driving the need for increasingly compact, low-cost and
> energy-efficient solutions

and

> The ability to supply all wavelengths with a single, compact integrated
> chip, replacing many parallel lasers, will offer the greatest benefits

So the news is not really that existing speeds over fiber have been improved,
but rather that the speed produced by this single chip makes it a viable
compact, low-cost and energy-efficient alternative to many parallel chips.

------
ganzuul
I have this funny idea about a waveguide interconnect, where MIMO radios
address each other inside the manifold. You could get pretty decent bus width
through e.g. QAM, and with beam steering probably simultaneous data links.

Of course it could be made to look cool as hell, complex microwave plumbing
with integrated heatsink replacing a plain old mainboard. :)

~~~
cycomanic
This has actually been a big research topic over the last 8 years or so. The
keywords are space-division multiplexing (SDM) and in particular mode-division
multiplexing (MDM).

------
bobajeff
I feel like the bottleneck in the future will be hard drive/SSD file
read/write speeds.

Honestly, I'd love to get my hands on a terabyte drive with 1 TB/s speeds.

~~~
zepearl
I'm sending 7 TB right now at home from my server to my NAS and it's taking
ages over my internal 1 Gb/s ethernet network.

Am I right in thinking that there are (still) no SOHO network switches that
can handle faster speeds (at least 2 Gb/s), that don't have active fans, don't
get hot, and that aren't super expensive? The last time I checked, about a
year ago, I didn't manage to find anything.

~~~
jlgaddis
I've not used one and can't speak to their quality but:

> _The CRS305 is a compact yet very powerful switch, featuring four SFP+
> ports, for up to 10 Gbit per port. The device has a 1 Gbit copper ethernet
> port for management access and two DC jacks for power redundancy. The device
> is a very sleek and compact metallic case without any fans, for silent
> operation._ [0]

> _Suggested price $149.00_

\---

[0]:
[https://mikrotik.com/product/crs305_1g_4s_in](https://mikrotik.com/product/crs305_1g_4s_in)

~~~
zepearl
Thank you - looks interesting, but they don't say in the specs whether it has
an active fan or not, right? (e.g. I see in the pics that this random model
has fans
[https://mikrotik.com/product/crs328_4c_20s_4s_rm#fndtn-downloads](https://mikrotik.com/product/crs328_4c_20s_4s_rm#fndtn-downloads)
but they're not mentioned in the specs)

~~~
jlgaddis
I didn't look through the specs in detail so I'm not sure, but according to
the copy I quoted above:

> _... without any fans ..._

------
The_rationalist
Wasn't the last record (2014) 255 Tbps?
[https://www.extremetech.com/extreme/192929-255tbps-worlds-fastest-network-could-carry-all-the-internet-traffic-single-fiber](https://www.extremetech.com/extreme/192929-255tbps-worlds-fastest-network-could-carry-all-the-internet-traffic-single-fiber)

~~~
oarsinsync
That was over 1 km, utilising 7 cores. Typical fibre plants use one core per
direction (transmit / receive).

This is over 75 km, utilising a single core per direction, i.e. this is
actually something that has the potential to be deployed in the real world
without having to replace all the existing fibre plants (e.g. undersea cables).

~~~
souterrain
It would be interesting to see whether the EDFAs would also not require a retrofit.

------
mcnamaratw
The article seems to be comparing a hero experiment to access rates. Why not
at least compare to telecom backbone rates? You can do at least 1.6 Tbps per
fiber, long haul, with commercially available gear.

If we use telecom hero experiments as the standard, 44 Tbps is not the record:
[https://en.wikipedia.org/wiki/Fiber-optic_communication#Standard_fibre_cables](https://en.wikipedia.org/wiki/Fiber-optic_communication#Standard_fibre_cables)

------
rayiner
It’s not “internet speed” if you can’t route at that speed.

~~~
wbl
Bundled links via ECMP or even round robin are a valid strategy.

~~~
myrandomcomment
A single flow hashes to a single link, so a file transfer runs at the speed of
a single link in an ECMP group if the ECMP method is normal Ethernet switching.
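That flow-to-link pinning can be sketched like this (a toy 5-tuple hash for illustration, not any particular vendor's implementation):

```python
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, proto, n_links):
    """Hash the 5-tuple and pick one link out of the ECMP group.

    Every packet of a given flow carries the same 5-tuple, so the
    whole flow always lands on the same link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return digest % n_links

# All packets of one TCP flow map to the same link, so a single file
# transfer can never exceed one link's capacity, even though the ECMP
# group has 4 links in aggregate.
link = pick_link("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 4)
```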

------
lostlogin
> The highest commercial internet speed anywhere in the world is currently in
> Singapore, where the average download speed is 197.3 megabits per second
> (mbps).

I’m very surprised by this. I would have assumed the leading country would
have had something a lot closer to gigabit. ‘Good enough’ must be the user
reaction. Years of terrible connections have left me chasing down every last
bit, even though fibre is now installed.

~~~
ksec
Because that assumption would mean everyone is on a GPON / fibre network. In
reality, even if only 20% of the nation is still connected via ADSL, your
average speed will be significantly lowered.

I still think we haven't fully solved the last-mile problem yet. Fibre
installation still sucks for most people, and the vast majority of new homes
don't have additional pipes for fibre built in.

------
keenmaster
Is anyone knowledgable on the implications of the underlying tech once it gets
commercialized? Obviously things like game streaming would be improved.

~~~
TheSpiceIsLife
Aren’t gamers primarily concerned with _latency_ rather than _speed_?

~~~
TheAdamAndChe
They said game _streaming_, which makes me think they're talking about things
like Twitch and Mixer, livestreaming platforms that do depend on throughput
for high-quality video.

~~~
TheSpiceIsLife
Sorry, yes, I see that now.

------
kraussvonespy
Even this wouldn't be enough bandwidth for my clueless wifi users who complain
when speed tests on their smartphones show 70 Mbps.

"Is there enough bandwidth to do what you need to do? Yes? Then shut up."

------
billme
Given the cost of laying fiber lines across the ocean, and that this tech
appears to double the capacity of an existing line, why would there not be a
push to get this into use? What am I missing?

~~~
detaro
This is a field that's constantly being worked on, not sure why you say there
isn't a "push" in it.

This specific thing is not faster than previous results, but more compact.

Long-distance fiber lines also have amplifiers along the way, so you can't
just scale them up by changing the endpoints if it doesn't match the
capability of the in-line hardware.

~~~
billme
Cost of swapping the amplifiers, not just the end-points, makes sense as an
issue. Thanks!

As for it being “only more compact” rather than a capacity increase: for a
comparable single coherent optical fiber line, are existing fibers filled to
capacity due to the limits of tech, economics, physics, etc.? If physics, then
I assume all fibers are at capacity, right?

~~~
dasudasu
If you're going to lay a fiber across the ocean, then yeah, that capacity is
going to approach the Shannon limit, but at some point there is a calculation
to be made about how expensive it is to use all that capacity vs. using
multi-core fibers or just laying out more fibers.

The economics of it are pretty interesting. A single fiber (non-submarine)
costs about 8¢ a meter in raw cost, and it is said that so many were laid
during the telecom bubble of the late 1990s that there are still many unused
(so-called dark) fiber networks throughout the US. See for example
[https://www.ofsoptics.com/lighting-up-dark-fiber/](https://www.ofsoptics.com/lighting-up-dark-fiber/)

------
acd
Looking forward to a 1 terabit/second home Internet connection.

~~~
andarleen
One can barely get 1 Gbps connections in the UK, and in most parts even
10 Mbps is a lot. Tbps may have to wait a few eons.

------
fnord77
> The highest commercial internet speed anywhere in the world is currently in
> Singapore, where the average download speed is 197.3 megabits per second
> (mbps).

what?

~~~
notaplumber
Yeah, this confuses me. Gigabit is available in several places in North
America. I had to check the date on the article... posted 20 hours ago. Yep.
Still confused.

~~~
z3t4
In Sweden we have 10GbE consumer Internet for $40/month.

~~~
thejynxed
To be fair, here we have some counties as large as Sweden with fewer people
living in them.

~~~
sollewitt
Sweden is the size of California.

~~~
jsjohnst
True. It also has one quarter the population.

------
DyslexicAtheist
tl;dr: _" chips with friggin laser beams attached to their head"_

------
amelius
This is not directly internet related. The original title is better:

> Ultra-dense optical data transmission over standard fibre with a single chip
> source

As a compromise, I'd propose:

> 44.2 terabit/s optical data transmission over standard fibre with a single
> chip source

~~~
YayamiOmate
Much better, because what the heck is "internet speed"? The most sensible
definition to me is payload over the IP protocol, possibly on an existing
commercial link. That's the only way I see a relation to the internet.

~~~
jlgaddis
In the past, the "Internet speed record" was measured in units such as
"terabit meters-per-second":

> _... they had managed to send nearly 840 gigabytes of data across a distance
> of 16,346 kilometers (10,157 miles) in less than 27 minutes, at an average
> speed of 4.23 gigabits per second._

> _This was equal to 69,073 terabit meters per second (or 69,073 trillion bits
> sent through one meter in a second), which exceeded the previous record set
> by CalTech and CERN earlier this year._ [0]
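As a sanity check on the arithmetic in [0] (a rough recomputation, taking "less than 27 minutes" as exactly 27 minutes, which is why the numbers come out slightly below the quoted figures):

```python
# Rough recomputation of the "terabit meters per second" figure in [0].
data_bits = 840e9 * 8     # ~840 GB transferred, in bits
seconds = 27 * 60         # "less than 27 minutes", taken as exactly 27
distance_m = 16_346_000   # 16,346 km in meters

rate_bps = data_bits / seconds            # average bit rate, bits/second
tbm_per_s = rate_bps * distance_m / 1e12  # terabit-meters per second

# rate_bps comes out near 4.15e9 (the article's 4.23 Gbit/s reflects a
# transfer time somewhat under 27 minutes), and tbm_per_s lands in the
# high 60-thousands, consistent with the quoted 69,073 Tb-m/s.
```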

\---

> _The team successfully transferred data at a rate of 8.80Gbps, which is
> equal to 264,147 terabit-meters per second (Tb-m /s)._ [1]

\---

> _Internet2 ... has this week announced a stunning new record speed of
> 9.08Gbps - equal to 272,400 terabit-meters per second (Tb-m /s)_ [2]

\---

No idea if it's still done that way or not but I don't see any mention of
distance in this article (haven't looked at the paper).

\---

[0]:
[https://www.cnet.com/news/internet-speed-record-broken/](https://www.cnet.com/news/internet-speed-record-broken/)

[1]:
[http://www.startap.net/translight/pages/applications/2006/datareservoir.html](http://www.startap.net/translight/pages/applications/2006/datareservoir.html)

[2]:
[https://www.hindustantimes.com/india/the-speed-fantasy/story-GiZC93UFUEql7xoz12d98O.html](https://www.hindustantimes.com/india/the-speed-fantasy/story-GiZC93UFUEql7xoz12d98O.html)

~~~
rejberg
> In the past, the "Internet speed record" was measured in units such as
> "terabit meters-per-second":

I like this unit better, because then a jetliner full of hard drives could be
a valid competitor.
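Indeed, the jetliner wins by a wide margin. A toy comparison, with every input loudly invented for illustration (500 drives of 10 TB each, a 10,000 km flight taking 12 hours):

```python
# Hypothetical "jetliner full of hard drives", scored in terabit-meters
# per second. All inputs below are invented assumptions.
drives = 500
bits_per_drive = 10e12 * 8    # 10 TB per drive, in bits
distance_m = 10_000_000       # 10,000 km flight
flight_s = 12 * 3600          # 12 hours in the air

rate_bps = drives * bits_per_drive / flight_s   # effective bit rate
tbm_per_s = rate_bps * distance_m / 1e12        # terabit-meters per second

# This comes out in the millions of Tb-m/s, orders of magnitude beyond
# the ~272,400 Tb-m/s record quoted above: huge bandwidth, terrible latency.
```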

~~~
ChuckMcM
Which is exactly why it was chosen: the 'purpose' of networks is moving data
from point A to point B, so the 'goodness' of a network is how much data it
moves from point A to point B _and_ how far apart A and B are.

Then the Internet became a transport for time-sensitive data (movies, voice,
etc.) and so the latency between bits gets wedged in sometimes.

