Don't optical links to space suffer terribly from atmospheric distortion?
Imagine looking at a shell on the bottom of a swimming pool while there are ripples in the water....
Usually the shell is a bit distorted. But at some points in time, you see two shells... And other points in time, none.
If the water represents the atmosphere's shimmering due to changing density, and the shell represents the satellite you're trying to receive data from, then at some points in time, you won't be able to receive any data at all, because the receiver cannot see the satellite.
Network links that are up and down every few milliseconds aren't very useful for much apart from bulk science data download. Perhaps that's why this is marketed for science missions rather than space internet?
Yes, weather is still a challenge for space-to-ground communication. However, there are numerous techniques being developed to handle this. One is to use adaptive optics similar to what telescopes use; a similar approach can be done digitally (like MIMO antennas in wireless). If the sky is overcast, none of these techniques help, but you only need relatively few ground stations to get above 99% availability.
That said, the first rollout of optical comms is for intersatellite links. There the only issue is loss due to diffraction (and pointing etc., but that's a different discussion). Optics has a big advantage because diffraction-limited beam divergence is proportional to wavelength, and the wavelength of light is several orders of magnitude smaller than radio. This determines the size of the transmit and receive antennas/apertures needed, or the loss for a given size. For realistic parameters optics would have 30-50 dB less loss, which translates directly to SNR and thus allows the much higher rates.
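To get a feel for the scaling, here's a back-of-the-envelope sketch in Python. All of it is assumption: the apertures, the wavelengths, the range, and the crude "spot size vs receiver area" model; it ignores antenna efficiencies, pointing loss, and everything else in a real link budget.

    import math

    def geometric_loss_db(wavelength_m, tx_aperture_m, rx_aperture_m, range_m):
        # Diffraction-limited beam spreads at roughly lambda / D_tx;
        # the receiver collects whatever fraction of the spot its aperture covers.
        divergence = wavelength_m / tx_aperture_m            # half-angle, rad (approx.)
        spot_diameter = 2 * divergence * range_m + tx_aperture_m
        fraction = min(1.0, (rx_aperture_m / spot_diameter) ** 2)
        return -10 * math.log10(fraction)

    RANGE = 40_000e3   # m, a GEO-ish crosslink distance (assumed)
    print(geometric_loss_db(0.01, 0.3, 0.3, RANGE))      # ~30 GHz RF, 30 cm dishes: ~139 dB
    print(geometric_loss_db(1550e-9, 0.1, 0.1, RANGE))   # 1550 nm, 10 cm telescopes: ~82 dB

With equal apertures the optical advantage would be even larger; the quoted 30-50 dB is for realistic hardware, where the RF side typically flies a much bigger dish than the optical telescope.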
Can confirm this from my own experience.
Just elaborating on the availability requirements, the idea is that clouds aren't everywhere at the same time, so if you have enough in-space bandwidth, you can hop your traffic around the constellation until you reach a satellite that is above a cloud-free area.
Can orbital angular momentum modulation help here? I'm a deep noob here, but IIRC some forms of polarisation are more or less affected by 'water in the sky' and masks.
I'm not an expert in this area, but if I remember correctly a lot of satellites use radio links in the neighborhood of 10 GHz for ground communication because they've found that band isn't affected much by atmospheric conditions.
10-100 GHz has very few bands where water is an absorber. That range tends to fall between the molecular and atomic absorption modes of many molecules, so you only suffer free-space loss (which is still significant over long range at high frequency).
"traditional" geostationary satellites use various c, ku, ka and high ka bands. Anywhere from 6 to 30 GHz. Depends a lot on how new it is, what its intended use is and transponder/antenna configuration. and also depending on the intended size/type of earth stations and remote terminals.
in addition to tracking/telemetry/control low data rate radios in the L/S-bands.
A large reason why these are aimed at scientific missions is that the single-photon detectors used for receiving these transmissions are still incredibly expensive. They are basically liquid-helium-cooled arrays of nanowires (SNSPDs) which act like microbolometers - pretty sweet tech, but not commercial grade just yet.
Is there a reason you can’t use an avalanche photo diode biased in Geiger mode to achieve this?
If biased properly, an APD can be used for single-photon counting, and they only cost $70-250; I'm just not sure if they hold up in space without modification.
The nanowire localizes the photons in space, time, and energy. APDs can't do energy.
The cryogenics allow using cryogenic-level bandgaps/carrier-energies, so each optical photon turns into many carriers which then flow out the wire in both directions, the delay between the pulse on both ends localizes the impact along the wire.
SNSPDs have lower dark count rate, and especially lower timing jitter in the near-infrared than APDs.
The deep space optical demonstrations I know of all work in the near infrared (e.g. 1550 nm), just like most of terrestrial fiber optics used for communication. Probably because there's lots of infrastructure at that wavelength already, and it's a good passband through earth's atmosphere.
In network terms the result is packet loss, which is unavoidable. You could shoot through the atmosphere with lower-frequency light like infrared, but that means lower bandwidth and it's still subject to packet loss, just less of it.
I have had to live off a personal satellite in the middle of uncivilized nowhere for a year (literally). Packet loss is the true regulator of speed irrespective of bandwidth numbers.
There's a big section of the article that talks about that (at a high level): essentially the entire second half, under the heading "From radio waves to laser light".
There's an entire field of Adaptive Optics for Free-Space Communications (AO-FSOC) where the turbulence of the atmosphere is actively monitored in real-time, and the laser wavefront is pre-compensated to minimise the effects of atmospheric distortion.
Is this likely to work at lunar distances anytime soon? I saw that the James Webb telescope people were unhappy about losing much of their communication time on the Deep Space Network to the Artemis 1 mission. Could this be more cost-effective than a major upgrade to the Deep Space Network?
I would expect L1-to-earth communication to be problematic because you'd have to distinguish the signal from the background radiation of the sun.
It'd be interesting to know what the technical limits are in terms of output power and aim/focus. Generally, doubling distance means the signal power drops to a quarter, and the maximum data capacity of a link grows with the signal-to-noise ratio (roughly linearly when the SNR is low, only logarithmically once it's high). So, in the power-starved regime, a 100 Gbps link might drop to something like 25 Gbps. You might be able to bring the SNR back up with a better detector, a more powerful laser, or better aiming. Or maybe the 100 Gbps data rate is limited by the transceiver, and there's actually plenty of SNR margin that can be traded for range without affecting the data rate at all.
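For anyone who wants to poke at the Shannon side of that trade-off, here's a tiny sketch; the 10 GHz of usable bandwidth and the SNR values are just assumed numbers for illustration.

    import math

    def capacity_gbps(bandwidth_hz, snr_linear):
        # Shannon: C = B * log2(1 + SNR)
        return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

    B = 10e9                                 # Hz of usable bandwidth (assumed)
    for snr in (0.5, 2.0, 8.0):              # each step is 4x (+6 dB) more power
        print(snr, round(capacity_gbps(B, snr), 1))
    # -> 5.8, 15.8, 31.7 Gbps: roughly linear in power at low SNR,
    #    but only +2 bit/s/Hz per 4x of power once the SNR is already high

That's why the power margin matters most in the photon-starved regime, which is exactly where deep-space links live.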
If L1 is a problem, then L2 would be a problem as well. After all, at L2 the spacecraft is receiving its commands from the direction of the sun.
However, the problem is not quite as bad as it seems. Spacecraft at L1 and L2 Lagrange points actually are in a halo orbit that "orbits" around the Lagrange point. Attempting to stay at exactly the L1 or L2 point is unstable, since gravitational forces tend to knock you away from that point. The halo orbits are much more stable. And for a spacecraft in a halo orbit, you never have to point your antenna directly at the sun.
The problem is solvable for radio communication at least. There are currently 4 spacecraft orbiting the Earth-Sun L1 point (ACE, DSCOVR, SOHO, and WIND) as well as 3 spacecraft at the L2 point (Gaia, James Webb, and Spektr-RG).
Optical to Orion (O2O) is a plan to do a lasercomm demo on one of the future Orion moon missions.
When the Psyche spacecraft launches and heads to the asteroid belt (it was supposed to launch in August), it will do the farthest (by far) lasercomm demo. I work in the group that made the SNSPD ground receiver. As my boss says, with a distance 1000x farther than previous space laser comm demos, closing the link is 1 million times harder...
Fun fact: when the Psyche comm laser is pointed at Earth, the size of the spot will be roughly as large as California. Even with the largest optical telescopes, the loss in this link will be insane. That's why you need single photon detectors.
As you get to farther and farther distances, one thing you can do is shift from on/off keying to large-M pulse position modulation (PPM). This way you can save up the power on your satellite to send fewer but higher-power laser pulses, each of which carries more bits of data. I believe the DSOC mission will go up to M=256, meaning each pulse of photons received on Earth will carry 8 bits of information based on when it arrives within an alphabet of 256 time bins.
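A toy sketch of the idea, not the actual DSOC signal chain; it just shows the mapping from bytes to pulse slots for M=256:

    M = 256                               # PPM order; DSOC reportedly goes up to this
    BITS_PER_PULSE = M.bit_length() - 1   # log2(256) = 8

    def ppm_encode(data: bytes):
        # Toy 256-ary PPM: each byte picks which of 256 time slots gets the pulse.
        frames = []
        for byte in data:
            frame = [0] * M
            frame[byte] = 1               # exactly one pulse per frame
            frames.append(frame)
        return frames

    frames = ppm_encode(b"hi")
    print(BITS_PER_PULSE, frames[0].index(1))   # 8 bits per pulse; 'h' lands in slot 104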
The issue is not so much background radiation (you'd have similar issues with RF); your SNR is going to be reduced because of diffraction (which, as you rightly point out, goes as r^2). However, the SNR would still be much better for optics because diffraction spreading scales with wavelength, and optical wavelengths are far shorter.
The reason why JWST didn't get an optical link is that people developing these things are rightfully conservative, and optical links in space are really still under heavy development.
Are you able to talk about what kind of hardware is on the satellite? I'm curious if it's commodity like the Mars helicopter or something made for the purpose.
Not OP, so I'm guessing purely on my knowledge of how most of the industry works, but there's likely some sort of FPGA with custom IP at the center, connected to a powerful optical transmitter/receiver.
It sounds like this test payload was part of a larger CubeSat built by NASA, but the actual datacom components seem pretty much off the shelf (besides the optics). 100Gbps transceivers, an optical mux, and an EDFA - all common in terrestrial telecom - and some IR optics to collimate the beam.
Fun and somewhat related fact - this is one of the main advantages of hollow-core transmission fibers! As far as I know, they're currently only used by HFT orgs chasing lower latency...
Hollow core fiber? Total internal reflection requires the core to be a higher index of refraction than the cladding, so I wonder how that works!
Turns out, it functions differently. Instead of total internal reflection, it has to rely on weird physics like photonic crystals. The pictures are absolutely wild.
This isn't that hard to do, even with 1960s technology.
It takes about 3 days with the Apollo launch system to get from the Earth to the Moon. So if you load roughly 800 4TB portable hard drives into one of these and fly it to the Moon, this will result in an average transmission speed of 100Gbps, if my math is correct.
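The arithmetic roughly checks out:

    drives = 800
    bytes_per_drive = 4e12               # 4 TB
    transit_s = 3 * 24 * 3600            # ~3-day Apollo-style transit
    print(drives * bytes_per_drive * 8 / transit_s / 1e9)   # ~98.8 Gbit/s average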
The purpose of this is to go the other way, from orbital sensor platforms that aren't intending to ever return intact. To do it your way, you'd need to send a disk-carrying vehicle into orbit, dock with the satellite, download onto the disk, and then return. Since the satellites don't exactly have USB ports on them, you couldn't do that anyway, and in practice, would still need to retrofit them with some form of high-speed data transfer mechanism to get the data onto your disks. The best option would very likely still be optical transmission, as trying to insert a cable into a USB port on something moving between 7000 to 17000 MPH is not exactly trivial.
At that point, what advantage is there in sending a rocket to receive the transmission and then carry the data back when you can just use the exact same link to send the data straight to Earth?
Theoretically, they could carry some number of disks into orbit, fill them, and then drop them back on parachutes. Something like that is actually what the earliest surveillance satellites did. They dropped film from orbit that the military retrieved and developed.
Huh? What a bizarre nitpick. I could fit 800 4TB portable hard drives into my van, and drive to the next city. That would be a quicker data transfer than via the internet. I guess that means fiber isn’t anything notable?
I wonder if re-transmission is the best solution to corrupt blocks. Some students who commuted to school on a train (MIT?) figured out that if they transmitted ECC-style blocks combined with RAID-style parity blocks, they could rebuild corrupt data instead of retransmitting it.
It all depends on the kind of corruption. Periodic spikes, white noise, blackouts - different problems need different solutions
This is known as "forward error correction". If the nature of the corruption is known and predictable, then you can design a reasonably efficient scheme to mitigate it. Otherwise, I suspect some amount of acknowledgement and retransmission at the MAC layer or above is a good idea. It really depends on the latency of the link, though, and the probability of corruption. If a round-trip delay is larger than a reasonable window size, and corruption is frequent, then FEC would help a lot. It's the only option for one-way comms such as digital broadcast television.
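As a minimal sketch of the RAID-style parity idea mentioned above: pure XOR parity, one recoverable block per group, with made-up block contents. Real FEC schemes (RS, LDPC) are far more capable, but the shape of the trick is the same.

    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"ABCD", b"EFGH", b"IJKL"]
    parity = xor_blocks(data)                  # sent alongside the data blocks

    # Receiver: block 1 never arrived; rebuild it from the surviving blocks + parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]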
Fun factoid: Reed and Solomon were working at MIT Lincoln Laboratory (ie where the OP result is from) when they invented RS codes. ISTR they were also working on satellite comm, in which Lincoln has a long history.
[Source: I spent a decade there myself and drank the kool aid.]
Indeed. It's fun to look at the MCS table for something like 802.11ac. A lot of different rates are actually the same underlying modulation, but with varying amounts of overhead from forward error correction.
802.11 data frames also have an acknowledgement at the MAC layer. The radios dynamically ramp up the MCS rate until packets start dropping, then ramp the rate back down.
And still, wifi links are among the top sources of packet loss. Without sophisticated forward error correction it would probably be much worse, but it shows that this is not an either-or question. The correct answer is: do both.
I think communications theorists would argue that retransmission is essentially a form of FEC, and likely a suboptimal one for these links. That said, real-world constraints don't always match up with theory. While modern optical communication makes extensive use of FEC (though RS codes are not used much anymore; in long-haul it's LDPC codes), AFAIK FECs for these types of "intermittent" links are very underdeveloped. Moreover, if the probability of long outages is too high, one would need a lot of memory for the FEC, which becomes difficult to do in real-time processing (remember we need to process at >100 Gb/s).
US geointelligence uses error-correcting codes, but beyond that, individual detectors on sensor arrays fail all the time, and you can't go up there to replace them and you don't want to bring down an entire satellite when it still mostly works, so ground processing needs to be robust to missing data anyway. It's fairly straightforward to just interpolate pixels, but the full process is a bit more sophisticated and also involves building overlay layers rating the quality of each pixel, so follow-on processing like object and change detection is able to take into account not just hey, what am I looking at, but also how reliable is each individual part of the scene. To a human looking at the final image, though, you'd never know the difference.
I'm assuming most of what this would be used for is imagery collected from space. For whatever reason, other commenters seem to think this is for holding conversations, but it clearly says it's for data from science missions. Even if you were trying to talk to someone on the other end, though, it's rarely that big a deal if part of a word cuts out. That happens all the time with existing ground-based calls or even just two people in a loud room and the human brain knows how to handle it.
Another comment has pointed this out, but Reed-Solomon coding was invented as a method of forward error correction. It was only applied later to storage systems as an "erasure code" because it can detect and correct extremely long runs of bad bits. Comparatively, it can detect and correct many fewer random bad bits.
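To make the errors-vs-erasures distinction concrete, here are the numbers for the classic RS(255, 223) code used in the CCSDS standard (a well-known example, not something from the article):

    n, k = 255, 223        # RS(255, 223), 8-bit symbols
    parity = n - k         # 32 parity symbols per codeword
    print(parity // 2)     # corrects up to 16 symbol errors (positions unknown)
    print(parity)          # or up to 32 symbol erasures (positions known)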
Nearby to me is the Microsoft HQ campus. Few miles away is Amazon, not to mention most every other major software company. Even SpaceX has an office here. My home has one available ISP, and that's Comcast. I pay monthly for 100/30 what other homes pay for 1000 up & down. The home 50ft behind mine has fiber. It's insane to me.
Happy to pay up to £5,000 for setup/digging and then up to £60/month for 1 Gbps in central London. G.Network, Pure Fibre, BT (not a business address), and Hyperoptic all don't want to bite, even though some have fibre in a street a short distance (~100 metres) from my house.
It's tough in the UK, especially London, especially especially central London.
I moved from NZ where we'd done probably one of the best fiber rollouts of any country; I had 1Gbps into an apartment from about 2016 or so. Before that I'd had 200ish mbps into a house in the suburbs in about 2014 thanks to the previous tenants having run a business from the place and paid for a line in.
Now that I live in the UK, I can see why it's hard: there's too much history, and the housing is either built up far too much or not built in a modern way. Also, when it comes to obtaining permission from property owners in the case of flats, I think Kiwis are generally more likely to approve of something like that than people here.
Exactly. Which is why I'm always less than enthused by articles such as this. With the telecommunications industry, it seems breakthroughs such as this never make their way to the average or even slightly high-end user such as myself.
Wake me when telecommunications industry gets their shit together.
I might be wrong here (for lack of in-depth knowledge), but isn't the full-body pointing of the CubeSat (for aiming) and the ARQ protocol the real development here?
I know the laser links themselves might not be new, and while it hasn't been done quite like this, we knew it would be possible considering we've done a ton of variants (sat-to-sat, station-to-station, over fiber, over glass, in open air, in a vacuum). Even just hitting the retroreflectors on the Moon and measuring the reflection somewhat shows that long-distance links can be done. Maybe the newness in the laser aspect is the small satellite and energy package compared to the high-powered lasers you might expect?
Useless but slightly relevant aside: I was almost part of the first ecommerce transaction from outer space. This one civilian guy was in the middle of a large luxury item purchase before his space ship took him to outer space. My bosses emailed him and I think he responded from space somehow. They wanted him to sign off on the purchase to make it official, but he didn't until he got back to Earth Prime. Almost!
I don't always want to be the party pooper, and this tech does have a lot of benefits when trying to transmit large amounts of data. However, you aren't going to be video conferencing to the Moon or Mars with it. Not a single word about latency in the article.
For those curious, I went and looked it up, and the one way (not round trip!) light travel time to the moon is about 1.3 seconds, and one way light travel time to mars is 3-22 minutes or so, depending on how far they are at the time.
So maaaaaybe you could have a really painful conversation with someone on the Moon, but not Mars.
edit: This is WITHOUT any latency introduced by the link/protocol, or if you have to then route the message from one side of the earth to the other, just time-of-flight distance calculations, so the absolute minimum possible latency.
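For reference, the raw numbers behind those figures (distances are rough averages/extremes):

    C = 299_792.458                        # km/s
    moon = 384_400                         # km, mean Earth-Moon distance
    mars_min, mars_max = 54.6e6, 401e6     # km, roughly closest and farthest
    print(moon / C)                        # ~1.28 s one way
    print(mars_min / C / 60, mars_max / C / 60)   # ~3 to ~22 minutes one way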
This is already sending data via lasers traveling through free space (and air, which has nearly the same speed of light as free space). You're not going to get a medium with a faster speed of light than free space unless you get into space-warping theoretical stuff. Even if you managed to run a fiber optic cable between a satellite and the earth, it would be around a third slower than lasers through space.
Speed of light is still a limit, fastest possible roundtrip time between the Earth and moon is 2.5 seconds. Round trip between the Earth and Mars would be over 6 minutes when Mars is at its very closest to Earth.
In space, latency will be dominated by distance. In low orbits, lasers will have low latency, good enough for interactive applications. Geosynchronous, Moon, and farther can't be helped.
Laser link (not RF) implies a whole bunch of things:
1. speed of light through atmosphere, so basically c
2. line-of-sight is required, likely stationary base stations. Probably also subject to atmospheric and weather conditions
Humans will adapt. If you want to call Mars, an AI will predict what the question or next sentence will be, so you can answer the predicted question in one go. Problem solved.
The satellite is only 300 miles up. The processing power on the satellite itself probably matters more for latency. At 200Gbps max (2x100), it's probably no slouch.
Encouraging to see the CubeSat program producing such notable progress. Hopefully, we'll see many more reports of engineering advancements from miniaturization and commercial components adapted to the rigors of space.
I don't think they have any demonstrated space-to-ground comms yet. There are a number of players working on this, though. I would say Lincoln Lab is still the leader in this field.
Yes, but at what latency? That's the big pie in the sky with space intercommunication. Typical (pre-Starlink) satellite latencies are something like 3,000 milliseconds.
On what service? I have done a lot of military SATCOM, and our normal round-trip latencies to geostationary could be as low as ~560 ms. Not anything like as low as LEO (which is far, far closer to Earth), but not unusable.
Why is latency the “big pie in the sky”? We’re not talking about counter strike or Halo CE matches here, we’re talking about the ability to transmit large amounts of data from deep space very fast.
The fastest way to transmit data (albeit not from space) is FedEx (mailing hard drives). This is not ideal. Why? Latency. That's what makes networks better than pigeons.
I'm curious how many households this would support, taking into account usage distributions over time. Would this be sufficient for a medium city? A large town?
Zero households. This system is meant to communicate with one ground station at a time, and only for a very brief window. It's really only good for downlinking bulk data that was collected by the satellite, not Internet access.
I've seen numbers of about 2.7Mbps[1] for average peak traffic rates per subscriber on cable, which would give you about 37,000 users if I didn't mess up my bits and bytes.
But as the FSO link is point to point, you would need something like a high-altitude platform station (HAPS) like a blimp, UAVs with RF or a RF tower on the ground to receive the FSO signal, and then broadcast it to many users.
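Quick sanity check on that subscriber estimate, using the 2.7 Mbps average-peak figure cited above:

    link_gbps = 100
    per_sub_mbps = 2.7                      # average peak traffic per subscriber (cited above)
    print(link_gbps * 1000 / per_sub_mbps)  # ~37,000 subscribers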
How so? Stock markets are using lasers to reduce trading latency versus everything else available, so what do you mean it's not usable for real-time communication?
There's an inherent limit to how fast you can communicate with objects in space, so a video call where you will have 1-2 s of lag (guaranteed by the laws of physics, not just occasionally) might be unusable. There is also the impact of distance: you might have even higher latency to the same object depending on the time of year (Earth-Mars today vs. Earth-Mars a few months out).
Your figures are totally wrong. Geostationary orbit is about 36,000 km up, so at the speed of light (~300,000 km/s) that's roughly 120 ms each way.
Low Earth orbit tops out around 2,000 km, so that's under 7 ms each way; Starlink, at ~550 km, is on the order of 2 ms.
Odds are good that if you had a mesh network of low earth orbit satellites (like Starlink) you could actually get an antipodal point-to-point video call with less latency than with terrestrial fiber. That's not a function of bad terrestrial switching/routing: it's the fact that light travels faster through vacuum.
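A crude geometry check of that claim. The fibre index, orbital altitude, and the assumption that the satellite hops follow a great circle at constant altitude are all simplifications, and real routing adds overhead on both sides.

    import math

    C = 299_792.458            # km/s in vacuum
    N_FIBER = 1.47             # typical silica fibre group index (assumed)
    R_EARTH = 6371             # km
    LEO_ALT = 550              # km, Starlink-like altitude (assumed)

    surface = math.pi * R_EARTH                  # ~20,000 km antipodal great circle
    fiber_ms = surface * N_FIBER / C * 1000
    # crude LEO path: up, hop along a great circle at orbital altitude, back down
    leo_ms = (math.pi * (R_EARTH + LEO_ALT) + 2 * LEO_ALT) / C * 1000
    print(round(fiber_ms), round(leo_ms))        # ~98 ms vs ~76 ms one way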
You're dreaming, buddy. 5G can't even be successfully utilized in the richest nation on earth (U.S.). This brand new tech is decades from customer applications. Cool display of technology, but I won't get excited over something that won't improve the lives of anyone except the military industrial complex and its benefactors while internet services and cell services continue to degrade every year.
The US is notoriously bad for fast internet in general though. Here in Asia 5G is mostly rolled out in some places, I was getting 100Mbps on my phone the other day.
Since you've already started breaking the site guidelines again, I've banned this account.
If you don't want to be banned, you're welcome to email us at hn@ycombinator.com with reason to believe that you'll stick to the site guidelines in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
I don't think that because a computer uses 1000W of power it would produce 1000W of heat though. I think if that were true then there could be no computation done since all the energy of the electrons would be converted to heat. I'm not sure what the efficiency rating of computers is, or how much is turned into heat.
Not to rain on your back-of-the-napkin math. Although I think your sentiment is right, the cooling systems for computers in space would need to be bigger, since you can't use convection to move the heat away.
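To put a number on it: in vacuum the only way to dump waste heat is radiation, and the Stefan-Boltzmann law gives a feel for the radiator area needed. The emissivity and radiator temperature here are assumed, not from any real spacecraft.

    SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
    emissivity = 0.9       # assumed radiator coating
    T = 300.0              # K, assumed radiator temperature
    waste_heat = 1000.0    # W; to first order, all of a computer's input power ends up as heat

    flux = emissivity * SIGMA * T**4       # ~413 W/m^2
    print(waste_heat / flux)               # ~2.4 m^2 of single-sided radiator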
Yes, this is why many high-frequency traders have installed laser or microwave links to their nearest stock exchanges, simply to gain a few microseconds' advantage in their trades. Fibre isn't as fast as a laser going through the air: light in standard fibre is roughly a third slower (as the parent comment states), or put the other way, free space is about 50% faster (unless you use special hollow-core fibres, which are uncommon for now). More importantly, fibres rarely go in a straight line between two points: they wind their way through buildings, down into basements, through buried pipes, etc., and this all adds extra distance to the route and hence more delay to the signals. A line-of-sight link is the shortest route between two points.
That question does not really make sense. Fibre links use lasers. Maybe what you're asking about is free-space vs fibre links. Yes, some HFT players use free-space optical or microwave links because of the improvement in latency; they typically also eliminate all (or most) FEC, which introduces significant latency as well. However, they don't really care that much about data rate.
For almost all other applications the factor of 1.5 you gain from free-space vs fibre is not worth it. A trip around the equator in an optical fibre takes 200 ms. That is acceptable for almost everything. Moreover, the latency of doing something like that over free space would likely not be much better, because one would need to regenerate the signal at least 3 times (which might add ~10 ms or so for each regeneration).
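Quick check on those figures (1.47 is a typical silica-fibre group index; the equator length is the standard ~40,075 km):

    C = 299_792.458                      # km/s in vacuum
    equator = 40_075                     # km
    print(equator * 1.47 / C * 1000)     # ~197 ms around the equator in fibre
    print(equator / C * 1000)            # ~134 ms if it could go line-of-sight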
Lasers are easier to set up (you just need two end stations) and don't need dedicated lines (you don't have to lay any fiber). If you have an existing connection between two points, then fiber might be better or the same, but if you had to set something up new for cost/speed, then laser would win.
And yet here in 2022 cell reception quality is quite literally worse than it was in 2010. Telecommunications industry is really baffling at times. Just like when people were excited for 5G, I'm very skeptical that this technology will ever actually improve our lives in the short to medium term.
Overall, services across the board seem to be worse than they were 10 years ago. And yet the tech is certainly more advanced. Really disappointed as a whole with the telecommunications industry. Maybe that's the field I should have focused more on, as they seem to be struggling to improve things even with technological breakthroughs such as this.
This is 1 point-to-point link (laser) with direct aim required. No forwarding, no more than 1 user.
I don't see how this relates to the use-case of millions of broadband users that you are talking about where you are routing fiber cables all over the place. We do have 800-gigabit fiber in core networks, we just don't route it to every home b/c why would we. And 5G radiates to 1000s of users simultaneously.. regularly getting multi-gigabit speeds on mmWave as a regular user is pretty amazing to me.
>Overall, services across the board seem to be worse than they were 10 years ago.
Do you live in the US? The pathetic state of telecoms in the US is a US problem, not a global problem. In many other places, our internet and cellular service is fast and cheap. But you guys don't like regulation or competition, so this is what you get.
MKBHD recently shared the same sentiment in a video. He feels, anecdotally, that 5G is worse than LTE. I feel the same way and recently switched my preferred network to LTE.
You might find a better experience doing the same. I've found 5G to be truly awful and I live in one of the biggest cities in America where you would expect better infrastructure.