Record-breaking chip can transmit 1.8 petabits per second (newatlas.com)
264 points by typeofhuman on Oct 24, 2022 | 116 comments



I have not dug deeply into the technical content, but the headline as written is pretty far off the mark.

I believe the press release is here: https://www.dtu.dk/english/news/all-news/new-data-transmissi...

The innovation: Normally, data over a fiber is multiplexed using many wavelengths of light (wave-division multiplexing, or WDM for short). These wavelengths are generated from an array of lasers, forming a frequency comb.

The result here creates a frequency comb from a single laser, and uses that for the transmission. It saves all the power associated with the many lasers traditionally used for WDM. All the "chips" that do the modulation, transmission, reception, and de-modulation are still there, but you've cut out all but one laser from the system. It's a nice result.

That was my quick take, please correct if you have more info.


The key point is the petabit per second rate they achieved:

>Using only a single light source, scientists have set a world record by transmitting 1.8 petabits per second.

In 2021 the world record was about 300 Tbps [0]. Why is the headline misleading? For reference, the headline is currently "Record-breaking chip can transmit entire internet's traffic per second." This seems to be correct:

>According to a study from global telecommunications market research and consulting firm TeleGeography, global internet bandwidth has risen by 28% over the course of 2022, with a four-year compound annual growth rate (CAGR) of 29%, and is now standing at 997Tbps (terabits per second).[1]

>Normally, data over a fiber is multiplexed using many wavelengths of light (wave-division multiplexing, or WDM for short). These wavelengths are generated from an array of lasers, forming a frequency comb.

I think that is a relatively new technique. For example, see https://www.nature.com/articles/s41467-019-14010-7 :

>Optical frequency combs were originally conceived for establishing comparisons between atomic clocks and as a tool to synthesize optical frequencies, but they are also becoming an attractive light source for coherent fiber-optical communications, where they can replace the hundreds of lasers used to carry digital data

So "normally" might give the wrong impression. As far as I know, no commercial service is using it. One reason is the cost, which this article addresses by proposing a chip based apporach which makes it cheaper and easier.

[0]https://www.nict.go.jp/en/press/2021/07/12-1.html

[1]https://www.computerweekly.com/news/252524883/New-networking...

Edit: I should point out that the "previous" record was with a 4-core optical fiber, whereas this one uses a 37 core one. They are really two different things: one about the cable and the other about the transmitter. So this one doesn't "beat" the other.


> Why is the headline misleading? for reference, the headline is currently "Record-breaking chip can transmit entire internet's traffic per second."

The "chip" is a CW laser, so it transmits no data.

It's a little hard to tell from the article + PR, but I think the result is a laser with a stabilized frequency-comb output suitable for WDM that has been implemented on a single die (which is still a nice result.)

Perhaps I missed that they implemented an entire transmitter chain on the "chip", but I believe the chip innovation is the continuous photon source, not the data transmission.


The chip which produced the laser is indeed “just” CW with data modulated on separately. And novelty indeed lies in the width of the comb source and the SNRs of the obtained channels.

(Worked on this project.)


Congrats to you and team on these results.

> And novelty indeed lies in the width of the comb source and the SNRs of the obtained channels.

Can you expand on this? I'd be curious how it compares to a traditional (multi-laser) WDM system, probably others would be too.


Thanks! I’ve reached out to my colleague who worked on the chip side of this project.


Ok, so this is one step in such hyper-fast data transmission. What would be the other hurdles?


Maybe I'm missing a nuance here but WDM with one laser per wavelength is bread and butter tech used everywhere. The base case (n=2) even forms the basis of PON networks.


Frequency combs are derived from a single light source.

>Current fibre optic communication systems owe their high-capacity abilities to the wavelength-division multiplexing (WDM) technique, which combines data channels running on different wavelengths, and most often requires many individual lasers. Optical frequency combs, with equally spaced coherent comb lines derived from a single source, have recently emerged as a potential substitute for parallel lasers in WDM systems[0](2021)

So "These wavelengths are generated from an array of lasers, forming a frequency comb" is using "frequency comb" to mean something else in that sentence.

[0]https://www.degruyter.com/document/doi/10.1515/nanoph-2020-0...


> So "These wavelengths are generated from an array of lasers, forming a frequency comb" is using "frequency comb" to mean something else in that sentence.

Yes, "frequency grid" would have been better terminology. Common spacing for WDM is 50 GHz between adjacent frequencies (it's ITU spec'd iirc), and those rely on feedback system to maintain the spacing precision.


I'm eventually foreseeing a whole new form of cache. A coil of optical fiber with the cache data constantly inflight around that loop. With denser optical data transmissions the amount of data per meter of coil starts increasing.

At this speed, we are already talking about 2% of the entire Internet's traffic being in flight in a single fiber run along the shortest path between the UK and the USA. That's just a single fiber. As transceivers with this capability get cheaper and cheaper, all those unused dark fibers start to offer up alternative uses as in-flight caches. Think of how much memory would be needed to store that amount of data and how much that costs; even against the cost of fiber, things start to look interesting.


Everything old is new again - delay line memory at the speed of light.



1/1.4 * the speed of light :) Moves a bit slower in glass fiber.


if you want to be pedantic, it is the speed of light. Just not the speed of light in vacuum :)


I don't really agree. If you say "the" speed of light and don't make other qualifications, that means c.


Even at 1.84 Pbps, you can only store about a gigabyte per km, so this doesn't seem very practical.

https://www.wolframalpha.com/input?i=1.84Pbps+*+1km+%2F+c


Speed of light in fiber is not c, but about 2/3 c.
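
Rough numbers, assuming a group index of about 1.5 (so roughly 2/3 c in the fiber):

    # Bits "in flight" in a fiber delay line: rate * (length / group velocity).
    # The 1.5 group index and the ~5500 km transatlantic length are rough assumptions.
    C = 299_792_458        # m/s, speed of light in vacuum
    GROUP_INDEX = 1.5
    RATE_BPS = 1.84e15     # 1.84 Pbit/s

    def bits_in_flight(length_m):
        return RATE_BPS * length_m * GROUP_INDEX / C

    print(bits_in_flight(1_000) / 8e9, "GB per km")              # ~1.15 GB
    print(bits_in_flight(5_500_000) / 8e12, "TB over ~5500 km")  # ~6.3 TB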


So you're saying it's... about a gigabyte per km?


Reminds me of the harderdrive based on the ping protocol: https://youtu.be/JcJSW7Rprio


225 terabytes in 1 second of fiber. 225TB of DRAM doesn't actually cost all that much, O(million), or maybe O(10 million) if you're also talking the servers required to drive that much.

Ping between USA and EU might be what, 100ms round trip? So how much would you think 20x spools of transatlantic fiber plus all the repeaters and power to run it will cost you? And for that you get a memory with O(1s) latency as opposed to DRAM which is 10 million times better latency.

Or you could go with NAND for about twenty thousand dollars for that much storage with only 1 million times better latency. Or HDD at a few thousand and still get 100x better latency.

Maybe my math is wrong, but I'm not quite seeing the niche for this new form of slow billion dollar cache.


You can technically do this today. Just target a remote server and run pingfs. Store your data in the transatlantic fibres! https://github.com/yarrick/pingfs


Not sure if this is relevant:

In 2004, researchers at UC Berkeley first demonstrated slow light in a semiconductor, with a group velocity of 9.6 kilometers per second.[5] Hau and her colleagues later succeeded in stopping light completely, and developed methods by which it can be stopped and later restarted.

https://en.wikipedia.org/wiki/Slow_light



You wouldn’t want fiber though. It’s designed with low latency in mind, whereas for a delay line you want high latency (but not too high).


Fiber Token Ring?


Late to the thread, but I took part in this research (7th author in the list). I worked on the signal processing, information coding etc. and am happy to answer any questions :-)


Does this work imply that the same tech could create ultra-high-speed switches that could match this bandwidth, thereby routing and propagating, and not just flow between two points?

BTW, congrats on your success.


The short answer is yes. (1)

Optical saves a heck of a lot of power, and is obviously much faster than copper, so that's the way it's all going.

The longer answer is that it requires reliable, appropriately sized and priced transceivers to get the data back to electrical at a rate that matches the optics, and those are going to be a while coming; this tech is still in the lab.

At the top end subsea cables have very high cost and traditionally bulky transceivers, and it's all about data volume, not switching.

At the other end of the scale, inside the data centre, where most switching needs to occur, there is a move towards optical interconnections and co-packaged switches. (1 and 2)

1: https://www.intel.com/content/www/us/en/newsroom/news/intel-... 2: https://www.intel.in/content/www/in/en/architecture-and-tech...


Thanks :-)

It is a while since I have been into optical signal processing, but I will ask my colleague who is much more well-versed.


what are the optical link budgets in this 8 km dark fiber path?

what's the tx launch power?

what's the frequency bottom end and top end, in nanometers or THz? does this all run in the normal ITU DWDM range from approx. 1528nm up to 1568nm, or wider than that?

https://www.fiberoptics4sale.com/blogs/itu-standards/1004345...

what's the expected path loss? I assume this is some normal 9/125 singlemode fiber and two strands.

what's the usable RSL threshold on the far end?


What do you think the time lag is for this actually being deployed in a non-research context (either small scale or full-blown rollout)?


Wrote another reply here: https://news.ycombinator.com/item?id=33321669

I’d say that there is at least a 10 year delay between the lab and commercial deployment. Even then we are talking about deployment in large fiber systems and not to the home.

However, not all ideas in the lab ever make it into deployment.


Congrats!

What modulation, bitrate and spectral efficiency did you use per WDM channel?

Was that rate achieved in real-time or with massive post processing?


We used constellation shaping and a rate-adaptive code to tailor the bitrate of each channel. It varied between roughly 64-QAM and 256-QAM, depending on the SNR of the channel.

Post-processing times were not too bad. It ran on a standard desktop computer and gave an estimate of the data rate in about a minute (can't remember exactly). Of course, compared to actual transmission that is terribly slow, but that was only due to the implementation and the needs of this experiment.
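
As a very rough illustration of why the modulation order tracks SNR (this is just the idealized Shannon bound, not our actual rate-adaptation rule):

    import math

    # Idealized (Shannon) minimum SNR to support a given number of bits per
    # symbol per polarization; a real system needs a few dB of margin on top.
    def min_snr_db(bits_per_symbol):
        return 10 * math.log10(2 ** bits_per_symbol - 1)

    for bits, label in [(6, "64-QAM"), (7, "128-QAM"), (8, "256-QAM")]:
        print(f"{label}: needs > {min_snr_db(bits):.1f} dB SNR (ideal lower bound)")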


For us n00bs, how do you see this being applied? And in what time frame?


I can’t answer for the chip aspect (which is the truly novel part of this research), but many of the signal processing and coding techniques are being deployed in new optical transmission systems. Constellation shaping and rate adaptive coding were two techniques we used in this paper to ensure that individual channels were as ideally utilized as possible.


Devil's advocate here. How do you feel about the social significance of this type of work? Do you think "enough bandwidth" is a thing? If only the cost drops further, will it affect society? If we can already stream anything in the collective consciousness within seconds, what is the purpose of more? Is it likely to enable unnecessary levels of video surveillance by state actors?


I must confess that I have never been concerned along those lines.

I have thought a lot more about the environmental impact of transmission technology. It is a massively energy-consuming industry, and the expectation is to provide ever more capacity, while the expected efficiency gains do not add up to an actual reduction in energy use.

For what it is worth, I work on Alzheimer’s research today: https://optoceutics.com


I appreciate your honesty. You are not alone in working without considering social impact; it's rife in tech, and I have been guilty of it too.

Alzheimer's seems a challenge! Here in China they apparently approximate it for research purposes by dosing primates with MDMA... should be easy to find volunteers!


Great, now webdevs will get even lazier and ship an entire docker image in every html tag.


"The user needs to be able to edit some audio in the browser"

Next thing you know, you have linux compiled to WASM running a docker container built to host ffmpeg for you.



Well it's too late to switch to that now. We've already built all this other infrastructure to run it in a docker container which is more secure anyway.


It gives the <img> tag a whole new meaning.


Please don't give them ideas...


It's an old idea called: object oriented programming.


How do they generate data at that rate to transmit? I assume it's synthetic data and probably duplicated a lot? But how do they generate it and receive it to count it?


> How do they generate data at that rate to transmit?

In the lab, the most common scenario is to have a pseudo-random bit sequence (PRBS), and usually the sequence is 2^31-1 bits long. This makes both the generation (on the transmit side) and error-rate detection (on the receive side) reasonably straightforward, although it can be tricky to read out every one of the receive channels to check the bit-error rate (BER).

Here's typical PRBS BER equipment: https://www.anritsu.com/en-us/test-measurement/products/mp19...

Spoiler alert: The test equipment isn't cheap.

Edit: I probably should mention that a PRBS from a linear-feedback shift register is used because, in a PRBS of length 2^N-1, you are guaranteed every possible N-bit pattern except N zeroes in a row. This exercises the wideband system, so if there are spurious resonances in the wide passband, errors will result.
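
A minimal sketch of such a generator (PRBS7, i.e. x^7 + x^6 + 1, so the whole period is small enough to inspect; PRBS31 works the same way, just with a longer register and taps at 31 and 28):

    # Fibonacci LFSR for PRBS7 (x^7 + x^6 + 1); the period is 2^7 - 1 = 127 bits
    # and, viewed cyclically, contains every 7-bit pattern except seven zeros in a row.
    def prbs7(seed=0x7F):
        state = seed
        while True:
            new_bit = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | new_bit) & 0x7F
            yield new_bit

    gen = prbs7()
    period = [next(gen) for _ in range(127)]
    print(sum(period))   # 64 ones, 63 zeros in one full period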


Actually, we tend not to use PRBSs anymore for these sorts of experiments; instead you use a randomly generated symbol/bit sequence which fits into the memory of the DAC. Similarly, you don't use a BERT anymore but instead use a real-time oscilloscope (even more expensive than the BERTs) and do offline digital signal processing (in real systems this is done by very expensive ASICs). PRBSs and BERTs are still used in so-called datacom experiments, where latency is often an issue and only very lightweight FEC is used, so one wants to measure down to error rates of 10^-9, unlike coherent systems.


I find that interesting, and am curious:

> instead you use a randomly generated symbol/bit sequence which fits into the memory of the DAC.

How do you guarantee coverage of the entire spectrum? As I mentioned above, PRBS(N) contains every possible N-bit sequence, which would expose any dropouts or resonances.

> so one wants to measure down to error rates of 10e-9 unlike coherent systems.

Back in the day, for OC-768 (40 Gbps/43 Gbps with FEC) equipment was measured to 10^-12. Has that relaxed? [IIRC, to gain 95% confidence that BER is 10^-9, you had to measure 10^10 bits. Similarly, for 95% confidence of BER 10^-12 you had to measure 10^13 bits. It's been a while though.]
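
For what it's worth, the zero-errors rule of thumb works out to roughly 3/BER bits for 95% confidence:

    import math

    # Bits needed for 95% confidence that BER <= target, given zero observed
    # errors: (1 - BER)^n <= 0.05  =>  n >= ln(0.05) / ln(1 - BER) ~ 3 / BER.
    def bits_needed(target_ber, confidence=0.95):
        return math.log(1 - confidence) / math.log(1 - target_ber)

    print(f"{bits_needed(1e-9):.1e} bits for BER 1e-9")    # ~3e9
    print(f"{bits_needed(1e-12):.1e} bits for BER 1e-12")  # ~3e12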


Very late reply, so you might not see it. Modern systems use coherent modulation (phase & amplitude and polarisation, typically QAM formats), together with relatively high-overhead (~20%) FEC. The consequence is that for demonstration experiments you do offline processing, which significantly constrains how many bits you can measure (good experiments measure a few million per measurement point). So what people do is measure to the FEC threshold (10^-2) and assume the FEC works (usually a valid assumption). Sometimes people implement the FEC and show error-free operation down to 10^-6 or so. We also use Generalized Mutual Information as a more accurate measure of the information content (it essentially measures how much information can be transmitted given bit-wise FEC coding).

In other experiments, people run FEC implementations on banks of FPGAs to show that we actually get down to a BER of 10^-12, but these take weeks on a large number of high-end FPGAs.


As someone who's become a bit of a test equipment nerd, that is _very_ neat.


The two other comments gave very good general answers, but I happen to have worked on this specific project, so I can give some very specific details (as far as my memory goes.)

Lab testing of this scale of transmission involves a bit of "educated simplification". We had some hundreds of wavelength channels, 37 fiber cores and two polarizations to fill with data. That is not realistic to actually do within our budget, so instead we split the system into components where there is no interference. For example, if there is different data on all neighboring cores compared to the core under test, then we dare to assume that the interference is random, without considering the neighbors' neighbors, etc.

This reduces our perspective to a single channel under test with known data and then at least one other channel which is just there as “noise” for the other channels. The goal is to make the channel-under-test have a realistic “background noise” from neighboring interference. This secondary signal is sometimes a time-delayed version, sometimes a completely independent (but real) data signal.

This left us with a single signal of 32 GBd (giga symbols / s). This is doable on high-performance signal generators and samplers.
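
A toy illustration of the time-delay decorrelation trick (a hypothetical numpy sketch, not our actual lab code; the delay and symbol count are made up):

    import numpy as np

    # Toy illustration: the "interfering" neighbour channel is just the test
    # sequence circularly delayed by enough symbols that it looks statistically
    # independent over the correlation lengths that matter.
    rng = np.random.default_rng(0)
    symbols = rng.integers(0, 64, size=120_000)     # e.g. 64-QAM symbol indices

    channel_under_test = symbols
    emulated_neighbour = np.roll(symbols, 10_000)   # delay >> channel memory

    # Sanity check: essentially zero correlation between the two streams.
    print(np.corrcoef(channel_under_test, emulated_neighbour)[0, 1])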


Ah ok so you just extrapolate the capacity of the pipe based on that, you don't actually generate petabytes of data. That makes a lot of sense, thanks!


I should clarify that we did measure every channel (polarization, wavelength and fiber core) individually. It would not be fair if we just measured one and multiplied ;)

(And yes, that took forever. A shout out to A. A. Jørgensen and D. Kong for their endurance in that.)


That's a good question! I assume that their test run is very short, like maybe a nanosecond. A petabit is 10^15 bits, which means they only needed to generate 10^6 bits (a megabit) for such a run. But even then, I'd be curious to know how you feed a laser 10^6 bits of configuration data in 10^-9 seconds! Definitely a paper I'd like to read.


So the way you do these experiments is that at the transmitter you use an arbitrary waveform generator with ~4 DAC channels, which lets you modulate a single wavelength channel in I/Q and two polarizations (4 dimensions). These devices typically have a memory of around 500k samples and rates of up to 120 GS/s (the newest one actually does 256 GS/s; Google "Keysight AWG" if you are interested). So you generate a sequence of ~120k symbols (depending on symbol rate/oversampling) with 12 bits per symbol (assuming 64-QAM). That sequence repeats over and over. You then use the multiplexing/emulation techniques described in other posts to emulate the other channels. This is essentially due to limitations of the measurement equipment: you can't just convert a random incoming bitstream into analogue symbols (with FEC coding) in real time.

In a deployed system this would be done by dedicated ASICs that take millions to develop and are comparatively inflexible. Thus, if you want to test or research methods, you use the above-mentioned equipment, which gives much more flexibility.
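
As a hedged sketch of just the "random symbols in DAC memory" part (pulse shaping, pre-emphasis and FEC are all left out, and the numbers are simply the ones mentioned above):

    import numpy as np

    # Sketch: build a repeating dual-polarization 64-QAM pattern that fits in
    # an AWG's waveform memory. Only the "random symbols in memory" idea is shown.
    N_SYMBOLS = 120_000                               # fits a ~500k-sample memory
    LEVELS = np.array([-7, -5, -3, -1, 1, 3, 5, 7])   # 8 levels per I/Q rail

    rng = np.random.default_rng(1)

    def random_64qam(n):
        return LEVELS[rng.integers(0, 8, n)] + 1j * LEVELS[rng.integers(0, 8, n)]

    pol_x = random_64qam(N_SYMBOLS)   # drives DAC channels 1 & 2 (I/Q)
    pol_y = random_64qam(N_SYMBOLS)   # drives DAC channels 3 & 4 (I/Q)

    # 6 bits per symbol per polarization -> 12 bits per dual-pol symbol.
    print(pol_x[:4], pol_y[:4])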


Also discussed 2 days ago:

Chip can transmit all of the internet's traffic every second

https://news.ycombinator.com/item?id=33296750

(56 points, 17 comments)


Radio astronomy always needs more bandwidth. International arrays like LOFAR or the SKA pathfinders generate a comparable amount of information/second as the entire internet. They could definitely benefit from small scale production of extremely high bandwidth optical networking components.


the main problem with massive data transfer from radio astronomy sites to other places is not throughput in fiber, but economically purchasing transport capacity from carrier-of-carrier ISPs to get it from the place where the radio telescopes are at, to various research labs and datacenters.

radio telescopes tend to be located in very remote places with very few dedicated dark fiber options.

from the perspective of somebody in the ISP business, go try to buy a 100GbE transport circuit from $random_radio_astronomy_telescope_site to a meet-me room/traffic exchange point at a major internet infrastructure site....

you're going to run into economic problems really quick.


I was thinking more of the interconnects between the FFT/channelizer/correlation boards and the storage, which are currently 100 GbE or the like. But now that I think about it, it's probably not the interconnect cost that limits things, but storage.


This is cool, but note that it's only enough to feed the floating point units on about 1000 consumer grade GPUs.

I know cloud is all the rage and stuff, but the thing that really surprised me from the article is at how (relatively) slow the internet backbone is.


I'm guessing you're talking VRAM bandwidth, which is just over 1 TB/s on a 4090, while the "internet backbone" is apparently ~1 Pb/s, lowercase B, so actually only 128 4090s have the memory bandwidth to match the internet backbone. Of course, they would fill up in 0.2 ms, at only 24GB each running in parallel.


Those are over $2K. I meant "normal" consumer grade stuff in the $200-$400 range, as opposed to "enthusiast" stuff.

Either way, it's no more than a few racks of server-grade GPUs, which is probably where applications would actually want 1PBit/sec of VRAM bandwidth.


great news in theory, but in practice, problems remain; chiefly, that google analytics & hubspot still reduce this to 0.9MB/s


And 21.5 years ago, we were (or at least, I was) celebrating mere multi-terabit photonic switching:

https://hardware.slashdot.org/story/01/04/23/1233235/multite...


For reference, the global internet bandwidth has been estimated at just shy of 1 Pbit/s

The entire Internet is using the same as 1 million residential 1 gigabit connections could max out? I don't know why, but that sounds far below what I would have expected.


I wonder how that estimate was made. Maybe they are counting it as one transmission when something non-unique is broadcast to many endpoints? Or does every fetch of an asset from a CDN count?

Either way, the bulk of the web is structured to put data as close to where it's needed as possible, to keep things quick and uncongested. So, it doesn't surprise me that internet backbones are much thinner than the aggregate of last mile connections.


I went to a five alarm fire in Seattle once. It started in the late afternoon. Pillar of black smoke into the sky. The fire was still going after sunset.

The interesting thing about a five alarm fire is that it turns out that it takes about 10 fire trucks to run 5 hoses. The city water system, like the Internet, is designed to deliver a certain amount of volume per day, and to be able to move a reasonable amount of it to arbitrary locations, but not a large fraction of its total capacity to one spot.

A city hydrant can't keep up with multiple fire hoses, even with a pumper truck there to give it enough pressure to go onto the fire. So what you have to do is daisy chain trucks, hooking trucks up to hydrants on separate water mains, or opposite ends of the same loop, then pump that water to another truck that pumps it onto the fire.

You can't overbuild capacity without passing those costs on to customers, so you do what you can to keep the system working smoothly and have workarounds for situations where the abstraction leaks, like a once a decade five alarm fire, or an Internet Hug of Death.


> the researchers claim that it could eventually reach eye-watering speeds of up to 100 Pbit/s

We have no ceiling in sight in terms of optical bandwidth improvements. The cost of bandwidth continues to go down, and we have roadmaps showing this will continue for at least another 10 years, if not 20. And if we are optimistic, including the trickle-down effect to consumers, we are easily looking at 30 years of improvement.

But that is bandwidth. I hope more research goes into latency: getting closer to the speed of light in vacuum (c) rather than the ~2/3 c of glass fiber.


What does “entire internet’s traffic” really mean? There isn’t one single measurement point through which all traffic flows, so what set of connections are they measuring? Maybe traffic between BGP peers?


Is it possible to calculate the maximum upper bound on the amount of data possible here?


The upper bound is still the Shannon limit. The experiment does a lot of multiplexing: spatial multi-core fiber, spectral multi channel multiplexing across wavelength, dual polarization.

Each of the multiplexed channels is individually limited by the Shannon limit, and at higher power the fiber's Kerr effect introduces interference, which creates a sweet spot for the optimal optical launch power.

The novelty here is that the spectral channels are all generated from a single laser source rather than a laser per channel.



It depends on what you mean by "possible" and what future improvements you're considering, because otherwise the answer is just 1.84 Pbit/s.

But very roughly, you have around 200 THz of range for these infrared lasers. So on a single core, I'd expect the max to be within an order of magnitude of 200 Tbps. They're using 37 cores, and they're getting about 50 Tbps per core right now.

Order of magnitude, because it's not super hard to approach a bit per Hz of bandwidth from the bottom side (though difficult at very high frequencies), while it gets exponentially hard to exceed it. Here are a couple of relevant charts on how fiber is extra self-limiting: https://i.stack.imgur.com/bwTy2.png http://opticalcloudinfra.com/wp-content/uploads/2017/07/Nonl...
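
Plugging those rough assumptions in (200 THz of range and ~1 bit/s/Hz are the hand-wavy figures above, not measured limits):

    # Order-of-magnitude only: assume ~200 THz of usable optical range and
    # ~1 bit/s/Hz as an easy-to-reach spectral efficiency; real C+L band
    # systems use a much narrower window at a higher efficiency.
    BANDWIDTH_HZ = 200e12
    SE_BITS_PER_HZ = 1.0
    CORES = 37

    per_core = BANDWIDTH_HZ * SE_BITS_PER_HZ   # ~200 Tbit/s
    total = per_core * CORES                   # ~7.4 Pbit/s

    print(f"{per_core/1e12:.0f} Tbit/s per core, {total/1e15:.1f} Pbit/s over {CORES} cores")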


>We also present a theoretical analysis that indicates that a single, chip-scale light source should be able to support 100 Pbit s–1 in massively parallel space-and-wavelength multiplexed data transmission systems.


The supporting circuitry & equipment - to get 1.84 petabits per second (Pbit/s) to & from the transmit/receive chips they demonstrated - will be a bit $$$extra...


It says they transmit over a 37-core fiber, so 1.84 / 37 is about 50 terabits per second per core? Is it common for optical PHYs to encode/decode at this rate?


They also multiplex across 200+ wavelength channels (wavelength-division multiplexing).

Not sure what the baud rate of a single channel was in their experiment, but probably between 32 and 80 GBd, which is common for university lab equipment. The industry is knocking on 100-400 Gb/s per channel, where massive parallelism is applied in the actual decoding and signal processing to reduce the rate even further.


Unless I misunderstood, this is the number that matters. The 1+ Pb/s figure is like a headline-grabbing statistic that a highway can carry 10 million passengers per hour, only to add below that it's a 100-lane highway. The advance seems to be that the de/multiplexing is done on a single module at each end.


Are these speeds just "tested" maximums or can they be utilized in practicality?


Not practical yet; the novelty is the frequency comb, which allows 200+ wavelength channels with only a single laser, where before one needed 200 lasers.

In an experiment like this, only the initial light source is modulated and therefore all channels carry the same data. The equipment for the transmitter and receiver chain is so expensive that university labs can barely afford one of each.


Almost correct. You typically need 2-4 transmitters to emulate the system. So you modulate one or two channels under test, modulate the rest of the band with a single modulator, and use some decorrelation tricks to be realistic. Then you scan your channels under test through the whole band. This typically gives a lower bound on performance, i.e. a real system would likely perform better. As you said, using individual transmitters is economically unfeasible even for the best-equipped industry labs.


Does that mean "We experimentally demonstrate transmission of 1.84 Pbit s–1" in the paper abstract is a lie?


I worked on this project and cycomanic summarizes the practice well. I’ve written more on it here: https://news.ycombinator.com/item?id=33321506


Well, the technology is just as impressive either way, but I think "we experimentally demonstrate transmission of 1.84 Pbit s–1" is misleading. The capacity was demonstrated piecewise but that data rate was not demonstrated.


Anybody here following photonic or optical processors closely?


Traffic is amount per second. "traffic per second" is amount per second^2.

What does it mean for a chip to "transfer an amount per second^2"?


I think it's pretty obvious what was meant by the title and you're disguising pedantry as confusion.

It's poorly worded, sure, I'll give you that. But anyone should be able to understand that what they meant was "The internet on average transfers a certain amount of data per second, and this chip is capable of transferring at that rate."


> I think it's pretty obvious what was meant by the title and you're disguising pedantry as confusion.

I think it's pretty obvious it was a challenge, not a display of fake confusion.


[flagged]


The second sentence in the article:

Engineers have transmitted data at a blistering rate of 1.84 petabits per second (Pbit/s), almost twice the global internet traffic per second.


Traffic is already measured in bit/s so “traffic per second” would be something like data acceleration. Of course this is wrong but journalists have no idea.


The most popular topic is so often the post title.


Why be wrong if you can be right.


Yeah, it's like saying "this spaceship can go 10% of the speed of light per second"


Sure it does. It says per-second.

What is confusing about it?


Per-second doesn't make sense in this context. Either it can transmit all of the internet traffic, so it has sufficient bandwidth to theoretically mirror the whole internet traffic, or it can't. A time unit doesn't make sense here.

The alternative interpretation would be that it can transmit the whole amount of data ever sent through the internet in its existence per second, but this seems rather unlikely.


> Either it can transmit all of the internet traffic, so it has sufficient bandwidth to theoretically mirror the whole internet traffic, or it can't. A time unit doesn't make sense here.

It’s not a time unit. It’s a rate. The rate is twice the rate of traffic on the internet. Therefore it can transmit all the traffic on the internet.

Ideas like traffic only make sense in the context of per-unit-time, because they’re fundamentally about a flow.


Yes, it's a rate. The aspect of time is already baked in. Adding an additional unit of time is either redundant or means you're talking about acceleration.


I can run twice the speed of Usain Bolt.. per second


Well, the title aside, they define it in the article and say it can match the raw speed of all current internet traffic. It doesn't matter what units you use as long as they're based on bits/second. Pretty straightforward. Can it keep it up? Probably not currently. Can it handle similar amounts of switching? Also probably not.


"Car X is capable of the same velocity as car Y, per hour."


Why would you ever say that "car X is capable of the same velocity as car Y if you measure car X's velocity by km/h and car Y's velocity by mph"?


The word traffic already implies a rate


> What is confusing about it?

If it can transmit it per-second, then it can also transmit it per-hour, so it's redundant and doesn't add anything, which means it's confusing as to why it's there.


What are the units of internet traffic?


It can transmit one internet of traffic per second. So the unit is an internet.

They should have used a more common unit like encyclopedia britannicas.


But how long of an internet?

That's the tricky bit: "internet traffic" is already a measure of units over time.


How many tricky bits are there in 1.84 petabits?


They use Pbit/s in the article.


Data volume transmitted per time increment.


10000 libraries of congress per second.


The Library of Congress claims† to host 21 petabytes of digital content. That would take†† a little over a minute and a half to send over the link described in the article, assuming, of course, that the content has been put in a ready-to-send form.

†https://www.loc.gov/programs/digital-collections-management/....

††https://www.google.com/search?q=21+petabytes+%2F+1.84+petabi...
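
The arithmetic, for reference:

    # 21 PB of digital content over a 1.84 Pbit/s link, ignoring any overhead.
    content_bits = 21e15 * 8     # 21 petabytes in bits
    link_bps = 1.84e15
    print(content_bits / link_bps, "seconds")   # ~91 s, a bit over a minute and a half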


I want my 8k streaming video.


So, will the life of an average dweller of the Earth become happier because of this?



