SpaceX Gets FCC Approval to Sell Wireless High-Speed Home Internet from Space (cordcuttersnews.com)
797 points by hsnewman 20 days ago | 524 comments



Every story about SpaceX satellite Internet (Starlink) is plagued by questions about high latency. I think Elon needs to plan on spending a lot of marketing money to overcome this specific misconception.

Starlink latency is quite good. LEO is a lot closer than GEO (under ~1,000 km* vs. 35,786 km). SpaceX actually intends to compete with terrestrial systems on latency; many important routes will have significantly lower latency than any feasible terrestrial system.

Remember, fiber isn't latency-free. It isn't even speed of light; light in fiber travels at about 70% of c. Radio, on the other hand, travels at c (or so close it doesn't matter). So there is a crossover point where the latency of a LEO satellite system is actually superior despite the uplink/downlink path.

*depending on which orbital "shell" is involved. figures range from 340 km to 1200 km.
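As a sanity check on those numbers, here's a minimal Python sketch; the 550 km shell and the 0.7c fiber figure are assumptions taken from the ranges above, and the crossover formula is a rough upper bound that ignores slant angles and routing:

```python
C = 299_792.458  # speed of light in vacuum, km/s

def one_way_ms(distance_km, speed_fraction=1.0):
    """One-way propagation delay in milliseconds."""
    return distance_km / (C * speed_fraction) * 1000

leo_alt = 550     # km, a mid-range Starlink shell (assumed)
geo_alt = 35_786  # km, geostationary altitude

print(f"LEO hop: {one_way_ms(leo_alt):.1f} ms")   # ~1.8 ms
print(f"GEO hop: {one_way_ms(geo_alt):.1f} ms")   # ~119.4 ms

# Crossover vs. fiber at 0.7c: satellite path ~ ground distance + 2 * altitude,
# traversed at full c.  Equal when d / 0.7 = d + 2 * alt.
d_crossover = 2 * leo_alt / (1 / 0.7 - 1)
print(f"fiber loses beyond roughly {d_crossover:.0f} km")  # ~2570 km
```

So even with the up-and-down detour, a LEO link can beat fiber over continental distances.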


No consumer will worry about latency or any other component of the physical feat of delivering internet via Starlink. Consumers are currently buying downstream bandwidth by the mbps from one or maybe two providers with no other options. Only technology oldheads who remember satphones and Hughesnet will worry, then research, then try it out for themselves. Starlink has the added benefit of being very easy to give a “consumer trial period.” It will sell itself, in the USA at least.


What do you mean remember HughesNet? People are still on it. Some of those people are young. All of them have been taught what latency is by having satellite internet.

There are plenty of non-technical consumers who have had a very unwilling education in what latency is and how it differs from speed.


EDIT: Sounds like this site's measurements for satellite internet are incorrect. Maybe HughesNet partners with fiber providers in some areas, which brings the average latency down.

We could use objective reporting of internet speeds in finer detail. Any ideas?

Original comment:

--HughesNet's average latency is now 62ms, 58 ms jitter,--

https://broadbandnow.com/HughesNet-speed-test

--That's quite reasonable.--


That is incorrect. The round trip is still 20k+ miles; that measurement is the ping to the ground station before anything is sent to one of the sats. It will still be 600ms+.


That's unfortunate. Looks like we don't have an objective measurement of satellite internet speeds, then.


Sure we do, we have the laws of physics. HughesNet has geostationary satellites. They orbit at ~22,200 miles up. Assuming speed of light, it's going to take ~120ms per hop. There's a minimum of 4 hops (you -> satellite -> ground station -> satellite -> you), or 480ms.

And that's assuming perfect conditions and ignoring the rest of the normal latency encountered online. It's physically impossible to have a latency lower than ~480ms with geostationary satellites.
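That arithmetic, spelled out as a sketch (best case: satellite directly overhead, vacuum speed of light, zero processing delay):

```python
C_KM_S = 299_792.458  # speed of light, km/s
GEO_ALT_KM = 35_786   # geostationary altitude

hop_ms = GEO_ALT_KM / C_KM_S * 1000  # one ground<->satellite leg: ~119 ms
rtt_ms = 4 * hop_ms                  # you -> sat -> ground station -> sat -> you
print(f"minimum GEO round trip: {rtt_ms:.0f} ms")  # ~477 ms
```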


Why can't you avoid the ground station and just do satellite to satellite for relay? I believe that's precisely how starlink works.


You're pinging a device on the ground:

  t=0 You send out ICMP request
  t=120ms it hits the satellite
  t=240ms it arrives at the ground station, ICMP reply occurs
  t=360ms hits the satellite
  t=480ms arrives back at the originating machine
The only way doing satellite-to-satellite relay at GEO would be useful is if:

1) You're pinging a device on satellite in GEO (one you don't have LOS to)

2) You're pinging a device that's also connected via a different GEO 120 degrees away from the first

In this case you'll see:

  t=0 You send out ICMP request
  t=120ms it hits the satellite
  t=240ms it arrives at ground station 1
  t=280ms arrives at ground station 2
  t=400ms hits satellite #2
  t=520ms arrives at the destination
  t=640ms response arrives at sat #2
  t=800ms response arrives at ground station 2
  t=840ms response arrives at GS #1
  t=960ms response arrives at sat #1
  t=1080ms response arrives back at the originating machine

You could change that, but you won't save much in GEO, as you're connecting to two different satellites 120 degrees apart, or ~70,000 km (235 ms):

  t=0 You send out ICMP request
  t=120ms it hits the satellite
  t=355ms hits satellite #2
  t=475ms arrives at the destination
  t=595ms response arrives at sat #2
  t=830ms response arrives at sat #1
  t=950ms response arrives back at the originating machine

The reality, of course, is that typical satellite latency is 800 ms or more (depending how much you pay to skip buffering) rather than 480 ms.


In GP's post the second satellite is the same satellite, just on the way back. The post assumes for demonstration that the ground station is the destination.


Color me skeptical that Starlink will reach its theoretical maximum capabilities, or that it is competitive with 5G. I'm fine with fiber until then. It's affordable in most of the world. The problem of broadband monopolies in the US is not going to be resolved by a new privately-owned technology. We should hold local politicians' feet to the fire, not throw money at new tech from a monorail salesman.


> it's affordable in most of the world

I'm afraid you are drastically underestimating how much it actually costs to install fiber.


Good point. I meant the developed world.

Regarding the 3rd world, Facebook offered India free internet with strings attached, and IIRC they rejected it. That idea wasn't too popular there or here.

Would Musk be given a pass for offering discounted or free internet to 3rd world countries, and what strings would investors suggest he attach?


> Would Musk be given a pass for offering discounted or free internet to 3rd world countries, and what strings would investors suggest he attach?

Why would he want to do that? Facebook wanted to do that, because that would drive more users to their platform. SpaceX wants to use Starlink in order to bankroll BFR[0], so what they want is to simply sell the service at a price point that generates them enough money. It's not beneficial to them to discount it, make it free, or play other shenanigans.

--

[0] - Or whatever it's called now; it'll always be the BFR in my heart.


> Why would he want to do that?

Because granting internet fast lanes to certain content providers is a way to generate revenue.

Zero-rating is already happening [1].

[1] https://en.wikipedia.org/wiki/Zero-rating


If you have a lot of market power and you want to make as much money as possible the rational thing to do is to charge people according to their willingness to pay[1]. You can't do this perfectly but this is what is behind senior discounts and Intel's huge number of SKUs. Since people in Botswana can't profitably resell their internet in England, since Starlink can easily charge different amounts in different countries, and because the marginal cost of servicing a new region is approximately zero it would be crazy for Starlink not to do geographic pricing tiers.

[1]https://en.wikipedia.org/wiki/Price_discrimination


That sounds like a yes.

If money is your only goal, that's completely rational. However, that's not how I've typically heard people champion Musk. He's been lauded as a progressive person whose aims are meant to serve humanity, not merely his wallet. Supposedly.


All human beings are complex and you can't just sort them into good people and bad people. Elon seems to be more idealist than most about the fate of humanity as a whole but also seems to be willing to work his employees to the bone, drive sharp bargains, etc to pursue that goal. He's said that he sees Starlink as a cash cow that'll fund future Mars colonies and I don't have any reason to believe that he isn't being sincere about that.


> All human beings are complex and you can't just sort them into good people and bad people.

Some would make the argument that if everyone worked with only their wallet in mind, we'd be better off. I wouldn't.

> Elon seems to be more idealist than most

Elon is not an idealistic engineer, he is a businessman. He actively misleads about products he must have been told will not meet the timelines he promotes.

> about the fate of humanity

Not sure where you get this idea. The way he talks about AGI taking over the world is creepy. He thinks it's inevitable, nevermind that we have no idea how to design anything close to AGI.


It's not like serving humanity and making money are mutually exclusive. If there are markets in developing countries he can serve cheaper than anyone else, both sides profit: he makes money, the people get cheaper internet.

That said, internet already tends to be the infrastructure demand best satisfied in developing countries. Building cell phone towers is much easier than building roads, and with cheap labor and no purchasing power comes cheap internet.


> If there are markets in developing countries he can serve cheaper than anyone else both sides profit: he makes money, the people get cheaper internet.

Let's see whether it is net-neutral internet or not, then judge. My guess is it will have exclusive offers to content distributors in order to lower cost.


Australia would like to argue this point, but the latency of our replies would make it seem like they're coming from the late 1990s.


>Good point. I meant the developed world.

I think your parent also meant the developed world.

In developing countries it's sometimes much easier to get fibre. No or very little infrastructure means no old copper wires lying around. You can go fibre instantly this way.


It’s cheap to get fiber in developing countries because labor costs are extremely low and government regulation is lax.


Fiber is affordable in cities (a market nearly impossible to service by satellite anyways). Even developed countries struggle with deploying fiber in rural environments.

Every satellite internet offering targets rural settings, ships and planes. StarLink is no different, just that they will be competitive in a wider range of such settings.


>> We should hold local politicians' feet to the fire, not throw money at new tech from a monorail salesman.

We need to do both, as Seattle has proven with its sporadic availability of gigabit fiber and the shutdown of third parties attempting to bring it here.

In fact, doing the latter IS doing the former.


Centurylink covers 70+% of the city with gigabit for $65 a month; take a look at the cable franchise map before throwing stones: http://www.seattle.gov/tech/services/cable-service/cable-fra...


Yeah, I happen to live in an area that CL offers 6mbps maximum to, and it is weirdly in a minority-dense part of Seattle metropolitan. They have zero plans to expand to our area, they've stated repeatedly.

I'll throw stones as much as I like, thanks.


The city contract requires they service the lowest income parts of the city first, hence the extremely patchwork rollout over the last few years.

Due to the mandatory client base skew, the city is causing them to redline middle- and high-income areas of the city. Fiber is in Georgetown and Rainier Valley, but you're SOL in Fremont's built-up areas and in Sand Point.

Universal coverage should have been mandated, but it's easy to end up like Kenmore (which required total FiOS coverage from Verizon, now Frontier) and has seen Finn Hill go unserviced for years. Enforcement of the franchise contract is just as critical as the initial negotiation.

Outside of a few select areas, most of Seattle only ever saw ADSL (or no DSL in parts of the north end) as we didn't allow large VDSL2 cabinets in the public ROW. I think this has caused Centurylink to build fiber in the city, as they've rarely overbuilt VDSL2 areas like Bellevue, Kent & Auburn.


>> Rainier Valley

Nope. I live square in the middle of it, can't get double digit mbps download speeds from them.


I would encourage you to double-check during the daytime on https://shop.centurylink.com/MasterWebPortal/freeRange/shop/...

I've noticed they seem to offer ADSL (6mbps) after 6pm or so at addresses that already have fiber, presumably their prequal database for fiber goes offline for a few hours a day.


I've called multiple times and schedule a check every three months to see if it's been updated. Been doing that for years. Won't give you my cross streets but I assure you, they have nothing for me.

They say it's available just a few blocks over. That's been the case for a long time and they can't seem to get it about 0.25 miles further. Been that way for years.


Who's the "we" that is throwing money at this? Starlink is privately funded.


You are correct, it’s more prevalent than I realized, but those people will most likely welcome Starlink with wide open arms.


Starlink (and other similar systems that are hopeful to launch) are the real competitive threat that Google Fiber only partially lived up to being.

It threatens the existing entrenched providers everywhere. They can't hide from it. They will have no choice but to respond. They face a scenario of losing tens of millions of access customers over time to this approach. And the existing options are priced so high in the US that it provides a big fat margin opportunity for Starlink & Co. to target (and to ride for the benefit of paying for the buildout).

We had to go to space to route around the cable oligopoly, beyond their local crony jurisdictions.


Nope, Starlink would be crushed by tens of millions of customers (mostly concentrated in major metro areas) in the US and those customers would switch right back to cable. The capacity isn't there.


Can you show me the definitive proof that Starlink (along with its competitors, who will plausibly also launch a vast number of satellites) can't support eg 20 million customers in the US over time? I'd be very interested in that demonstration of the limits on the market.

Also can you support the premise that their customers would be concentrated in major metros, when the biggest beneficiaries will be outside of metros where broadband options are drastically worse (which is why the primary market for HughesNet the past two decades has not been concentrated in major metros).

Access providers in metros can soundly compete with Starlink. They'll lower prices, increase speeds, improve bundles, etc. Their infrastructure and customers are already in place there; they won't just ignore Starlink, they'll compete. That heavily limits the upside potential for the Starlink concept in metros. It's everywhere else, mostly lacking any real broadband, that Starlink & Co. will face minimal competition and will particularly lure customers. It's why HughesNet still has over a million customers today.

I've seen a lot of people claim - with very little supporting evidence thus far - that the market is extremely limited due to capacity restraint.

While you're proving out your claim, if you don't mind given that you've got a strong handle on the market ceiling, please provide what you believe to be the maximum subscriber potential for Starlink - and the approach in general - in the US over time.


I think Starlink is for rural customers and can't compete with decent wired broadband. You were the one who seems to be putting it against cable (which exists in the metros, not in rural areas).

The MIT work linked in this thread calculates a total worldwide capacity in the ~20 Tbps range for Starlink.


I didn't see the link, but here's the presentation[0] I believe you're referring to. It estimates Starlink's max total system forward capacity at 23.7 Tbps.

0. http://www.mit.edu/~portillo/files/Comparison-LEO-IAC-2018-s...


> All of them have been taught what latency is by having satellite internet.

Yes, life is a great teacher.

HughesNet sends me a postcard every quarter or so. Sure, I live in the middle of nowhere and would seem a likely customer. What makes me laugh is that even the old people in the area, who you'd think wouldn't know how to use a computer, will warn friends, neighbors, and the community not to use them. They may not know the term latency, but they know that the quality of calls is poor and they know uploading videos from their phones takes way too long.


Customers of HughesNet know about latency but it's low on their list of concerns. The top three being data caps, price, and data caps.


I was going to ask, "Didn't HughesNet try this?" .. I didn't realize they were still around, although it does make sense for a lot of rural areas with limited broadband options.


Or RVs. Even chain stores use the service: easier to manage 1 big Hughes subscription instead of 500 or 5000 different local ISP contracts, each with their own quirks.

LTE may have put a dent in that model though.


LTE hasn't replaced rural home broadband as much as you'd think, largely due to tower congestion. Cell site density is so low, plus deprioritization kicks in at 30/50 GB.

In addition you’ve usually got only 1-2 networks in deep rural.

Source : I run an MVNO.


> No consumer will worry about latency

I think this would only be true if internet gamers weren't a thing, but they are, and they'll care.


Competitive gamers make up a very small percentage of all internet users (or even of gamers!). Like professional athletes, they will move or go to a special facility for this sort of latency requirement.

Many games are not as latency sensitive as you think - people used to play over modems!


If by "competitive" you mean anyone who plays any sort of real-time game such as RTS, FPS, or even real-time mobile game. Many consumers may take a while to figure out that latency is the reason they lose/die when it doesn't look like they should, but they will feel the effects immediately.

Granted, users in rural areas are likely the most apt to benefit from this service, and those users likely don't have fantastic internet to begin with. However, if SpaceX markets this to areas with decent-quality internet, they're likely to get a bad reputation for "crappy" internet.


Yes, even those people make up a tiny fraction of the internet. Satellite internet typically will not work for those people, and there is no point in trying to sell to them.


People outside the western world still very much use internet cafés / game cafés to play games on decent hardware and with a good connection.


Kids who play Fortnite are absolutely not “a tiny fraction of the Internet”.


2.4% of internet users play Fortnite. Is that a large percentage to you?


That's the same percentage as Linux users, and just yesterday I closed my Box.com membership because they don't have a Linux client "due to low demand".


It pretty much is, imho, because that's just this one game, and there are more.


Kids who play fortnite don't have any decision authority when it comes to whether or not to use cable or satellite.


Ehhh, not sure about this. If you're a parent and your kid is begging you to get faster internet, you might listen to them a little bit. If their marketing can get those kids to do that, then that might be a successful campaign.


I agree in principle that kids can influence parental purchasing, and that marketing is geared to take advantage of that. But the effect appears to be most effective for items that are exciting and tangible objects in front of the child; think checkout aisle candy, not “boring” adult things like finance and bills.

Here’s an interesting parallel; how much say does any child have in the cell phone plan that a family uses? Probably damn close to 0. That’s a decision that the parents make on their own, and dictate the rules of usage to their children. I suspect that internet service will remain that way as well.


Sample of things kids who play Fortnite have (indirect) decision authority on:

fortnite v-bucks default gift for birthdays, displacing the movie gift cards

racking up hundreds of dollars worth of v-bucks per year

getting family to buy a Samsung S10+ so they can claim the associated "skin" for their Fortnite account

graphics cards upgrades

(expensive) gaming mice and other gear pushed by youtubers

general merch like toys


The huge amounts of marketing aimed at kids would suggest kids on average have a lot of influence on decisions in their household


For candy and toys, sure. But there are a wide range of goods and services where the child has 0 say. Cars, homes, and even cell phone companies never market to kids in order to influence the parents, implying that children have no meaningful impact in these areas.

Now these services advertise to adults with children, such as being able to call your kids whenever with the unlimited cell phone plan, but that’s still advertising to the adult, not the child.


I play League of Legends and have only DSL but my ping is only 35 ms and I can't complain -- many other players have it worse than me.

What did make it impossible for me to get kills was having 30 ms of latency in my display system. I think the network latency is well-compensated (if the typical user has 50 ms of latency, just delay me by 20 ms) but the display latency kills me. When I switched to a different monitor I started playing much better in about five games.

I think the game generates a 10 MB log for a 30 minute session so the bandwidth requirement is not much.

I had the same experience playing Titanfall earlier; in that case turning on the "Game Mode" on the TV turned me from someone who couldn't win at all to somebody who could get halfway up the ladder.


Display (and input!) latency are real things, and they matter quite a bit. There have been studies in the past where people measured the full button-press-to-frame-update time using high-speed cameras, and not all games are equal. As for game mode on TVs, that's specifically why it exists. As a passive medium a little latency doesn't matter, but when a system is responding to input, latency is noticeable.

If you're interested in how different TV's perform, segregated by their individual input ports (in cases where some ports are optimized) and modes, check out https://www.rtings.com/. I used it a couple years ago to pick out a good 4k TV as a monitor, and the very thorough information helped me pick out something that works well as a computer monitor (and also for the occasional gaming).


This is a legit question.

How does 30ms of display latency affect LoL gameplay? I don't play it, but I've seen videos of it a few times and with it not being a FPS I wonder what aspects of the game are that greatly affected with an additional 30ms of display latency.

Mind you, any mouse lag drives me up the wall. I've never measured at what point I perceive that mouse lag, but it's a fairly low number I'm sure.


I guess it depends on the game being played. When I played C&C in the 90s over modem, the latency was huge but it wasn't a major concern for that type of game.

Now that I play real time sports, and I'm no professional, it still makes a big difference in my enjoyment of the game when I'm playing at home with ethernet and top tier internet service vs playing at my parent's house on wifi and basic internet. The lag can really affect my ability to play my best and it sucks losing because you don't have a low latency connection.

That said, I'm very curious to see what the latency will be, because if it's within a reasonable range it could be acceptable. Either way, I don't think even casual gamers will have a significant impact on their sales, but it will have some if the lag is too great for some of today's games.


If the latency is 700ms you can’t even have a decent Skype call.


Try having an RDP session to fix Windows servers based in Australia with 400+ ms latency (Europe is unfortunately nowhere near Australia) on your GUI, the ordeal will leave a permanent scar on your psyche.


The speed of light is insignificant for such a short distance. Even if the signal travels as much as the circumference of the earth (40,000 km) at 70% lightspeed, it shouldn't take more than ~190ms one way. The lowest possible one-way latency between Australia and Europe would be around 60ms. Congested networks and routers with suboptimal routes introduce far more latency than the distance does. I regularly get between 100ms and 200ms latency between California and Germany, and the latter number includes the realtime encoding/decoding of the video stream.
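For reference, here's a rough sketch of that math in Python; the Sydney and London coordinates are my assumption for the "Australia to Europe" endpoints:

```python
import math

C = 299_792.458  # speed of light in vacuum, km/s

def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Haversine great-circle distance in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Sydney -> London (approximate coordinates):
d = great_circle_km(-33.87, 151.21, 51.51, -0.13)
print(f"distance: {d:.0f} km")                                   # ~16,990 km
print(f"one-way at c: {d / C * 1000:.0f} ms")                    # ~57 ms
print(f"one-way in fiber (0.7c): {d / (0.7 * C) * 1000:.0f} ms") # ~81 ms
```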


I've found RDP to be infinitely better for shitty networks than say VNC, if that's the other option.


Starcraft on my dial-up!


The latency is in the copper to your home. Dial-up is as low-latency as broadband; it's the throughput that's low.


You're forgetting about decoding the tone signals and telephone exchange load. Long-distance calls have very high latency for a reason.


You might want to do some googling, that is absolutely not true for a few reasons that are not about throughput.


Internet gamers are fine with double-digit millisecond latencies, and this is what Starlink will offer. It takes less than 4ms to reach 1000km LEO orbit, so 16ms for the full round-trip (you -> sat, sat -> server, server -> sat, sat -> you). Make it 2ms ground-to-satellite and 8ms round-trip if you believe the model from this thread[0] (satellites at ~600km). Using its numbers, add 2x 2ms for each inter-satellite link you have to travel back and forth.

It still adds up to sub-100ms latencies. It ain't bad.

--

[0] - https://www.reddit.com/r/Starlink/comments/9wrkhm/starlink_l...
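A back-of-the-envelope version of that estimate; the 2 ms per-hop penalty is an assumption from the linked thread's model, not a published SpaceX figure:

```python
C = 299_792.458  # speed of light in vacuum, km/s

def round_trip_ms(sat_altitude_km, inter_sat_hops=0, hop_penalty_ms=2.0):
    """Rough LEO round trip: four ground<->satellite legs
    (you -> sat -> server and back), plus an assumed per-hop cost
    for any inter-satellite laser links traversed each way."""
    leg_ms = sat_altitude_km / C * 1000
    return 4 * leg_ms + 2 * inter_sat_hops * hop_penalty_ms

print(f"{round_trip_ms(1000):.0f} ms")    # ~13 ms, 1000 km shell, no laser hops
print(f"{round_trip_ms(550, 3):.0f} ms")  # ~19 ms, 550 km shell, 3 hops each way
```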


With the right type of architecture, low-earth satellite could offer better latency than terrestrial.

terrestrial internet goes through optic fiber in which the speed of light is maybe half the speed of light in a vacuum. Already HFTs use microwave links to shave off a few milliseconds between New York and Chicago. From Boston to Seattle the extra 600 miles up and down from the constellation would be overcome by the fast path along that route.

Note that requires switching from one satellite to the next, not the "bent pipe" architecture to your local rent seeking wireless company that StarLink and other LEO constellations plan.
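A rough sketch of that comparison (one-way; assumes 0.68c in fiber, a hypothetical 550 km shell, the ground-track distance traversed at full c, and ignores switching, queuing, and hand-off delays; route distances are approximate):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def fiber_ms(ground_km, v_frac=0.68):
    """One-way latency over fiber at ~0.68c (assumed index of refraction)."""
    return ground_km / (C * v_frac) * 1000

def leo_ms(ground_km, alt_km=550):
    """Crude LEO estimate: ground-track distance at c, plus up and down."""
    return (ground_km + 2 * alt_km) / C * 1000

for route, km in [("NY-Chicago", 1150), ("Boston-Seattle", 4000), ("NY-London", 5570)]:
    print(f"{route}: fiber {fiber_ms(km):.1f} ms, LEO {leo_ms(km):.1f} ms")
```

The crossover behavior is the point: short routes favor fiber, long routes favor the constellation.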


I've read the paper that started this idea of Starlink being a low-latency alternative for HFT (http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf). There are a couple of problems with the paper.

The first is that they establish a range of acceptable latency as lying between the great circle route and the fastest ping time on the internet between two points. For example, they found that NY to London was 75ms round trip on the internet. However, the long-time standard bearer for a pure fiber path had been 64ms until the latest-gen cables arrived in 2016. At that point the latency was sub-60, very comparable to the 55ms great circle route. But it's important to understand that the Starlink point-to-point route will be constantly changing, so only in one or two instances will that latency even come close to the existing terrestrial routes. More often than not the latency would trend closer to the 70ms range, well beyond anything considered competitive in the current generation of routes, much less the prior generation. It's as if the authors simply had no idea about the HFT market and what latencies might be needed (all of which has been published publicly).

But there's another problem with the study. For the solution to work, SpaceX would need to dedicate one of its laser links to enable communications across other satellites and dedicate that capacity to this application. And further, the study did not address the issue of optronic delays within the satellites. Most microwave manufacturers care little about HFT latency tolerances, and intermachine delays can be 1-2ms per hop. Assuming Starlink is not much different (although I don't know this to be true or not), that would definitely add even more latency to the equation.

So all in all, while it's interesting and will do amazing things for consumers and enterprises, I don't see Starlink being a competitive HFT solution unless the route in question is really long and a lot of other engineering stuff can be made to work to enable it. Call me a skeptic, but I don't think we will see it.


> terrestrial internet goes through optic fiber in which the speed of light is maybe half the speed of light in a vacuum.

What's your source for this? According to my quick search, it's typically 0.66c to 0.76c, while researchers have reached 0.99c.


Terrestrial microwave has almost the same speed as light in a vacuum, so I think it's possible a microwave network beats Starlink.


Remember, every ~25 km you'd need a relay once you hit the horizon. That adds non-zero latency vs. a longer-range through-vacuum transmission.
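For context, the hop length follows from the radio horizon. A minimal sketch, assuming two 60 m towers and the standard 4/3 effective-Earth-radius factor for microwave refraction:

```python
import math

R_EARTH_KM = 6371.0

def radio_horizon_km(tower_height_m, k=4/3):
    """Distance to the radio horizon from one tower; k = 4/3 is the
    standard effective-Earth-radius factor for microwave refraction."""
    return math.sqrt(2 * k * R_EARTH_KM * tower_height_m / 1000)

# Two towers of equal height can see each other at twice one horizon:
d = 2 * radio_horizon_km(60)  # 60 m towers, an assumed height
print(f"max line-of-sight hop: {d:.0f} km")  # ~64 km
```

Real microwave relay hops often run 40-80 km, so ~25 km is on the short side.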


25 km is less than typical, but remember each direct LEO path between popular destinations is likely to be congested.


I'm just pointing out that the slice of consumers who are a) internet gamers, b) concerned about Starlink latency, and c) still think Starlink is old-school geosynchronous satellite is small. Gamers will of course worry about latency, just like they optimize every other aspect of their setup, but not to the extent that SpaceX needs to mount a marketing campaign about it.


Yeah, and they'll care enough to actually find out what the latency is rather than reflexively shooting things down. And don't forget word of mouth. All it takes is one prominent streamer that you follow, or anyone in your friend group, saying good things about it -- and now it's on the table for you.


That's a pretty small market and definitely not the target of this technology.


I don't think you're allowing for the temerity of terrestrial services. They will conflate the poor performance of legacy satellite systems with this system. They will exaggerate the latency deficit over short paths and let people think this is the general case. They will do this to leverage the preexisting perception of poor satellite service performance that so obviously prevails.

Only a solid marketing campaign can deal with this problem.


Another thing is jitter. Satellites moving across the sky are, by design, going to create jitter, and even with good latency, that can be bad for stable video streaming, video calls, etc.


The jitter will be unlike regular jitter, though.

The moving satellites will lead to the latency of all packets being the same at any given moment, but drifting over many minutes.

The bad kind of jitter is the stuff that varies on a packet-to-packet basis.


That's true, and bandwidth is actually something I've been wondering about: what kind of bandwidth can an inexpensive ground-based system get, reliably? Sending bits in a fiber is famously free of interference, while hundreds of miles of atmosphere is notoriously bad as a transmission medium. I haven't seen any kind of calculations around this. How many clients can hook up to one satellite concurrently, and what kind of bandwidth will each get, realistically (theoretical max bandwidth is not so interesting)?

Without more information, my first thought has been that Starlink sounds like an excellent satellite-based internet technology, ideal for connectivity in isolated places (far from urban areas, on ships and airplanes, etc.), but one that may not really be competitive for consumer internet access in urban areas. Hope I'm wrong.


> what kind of bandwidth can an inexpensive ground-based system get, reliably?

Theoretically 56 Kbit, but practically closer to 40 Kbit. I'm being serious! Given the choice between hugely expensive, high-latency satellite internet and dial-up, many have opted for dial-up, even in this day and age! We're not even talking that far away from urban centers, either, especially in hilly areas. A couple hours' drive outside of San Francisco is enough to get to cell-phone network dead spots, never mind LTE. There are some hyper-regional wireless ISPs using 802.11-based gear (Ubiquiti), but those are the exception.

Given that, (which is so extreme that I'm sure that not everybody that reads this will even believe me), any competition to HughesNet will be very welcome. Prior to SpaceX, the cost of getting a constellation of satellites into orbit was prohibitively expensive, but with this being done by SpaceX, the launches for this will be done "at cost", (corporate accounting and cost centers notwithstanding). (Which, mind you, is still several million dollars per launch in rocket fuel.)

Hopefully that means a genuine competitor to HughesNet, but that's a low bar. There is a large initial investment (of time and money), but FCC approval is an important first step towards providing this service.


The atmosphere shouldn’t be too much of a problem. It thins out very rapidly over a few km altitude, and the beam is going almost straight up. It’s nowhere near as bad as the same distance through the atmosphere at near ground level.


I am excited about the possibility and think it will (eventually) sell itself through word of mouth (how exactly do you advertise better Internet access to a market with poor Internet access? Billboards and USPS mailers?). I doubt most consumers have the background to know the difference between latency and bandwidth, so hopefully this is price competitive with HughesNet!


I think that the key competitive advantage is exactly how cheap it is to acquire new users. Anyone that is off the main grid can still get decent internet without signing up to a long term landline contract. This removes one of the big barriers to entry and changes the industry as a whole.


> Consumers are currently buying downstream bandwidth by the mbps from one or maybe two providers with no other options.

Isn't this going to improve with 5G? Verizon already has plans to offer 5G home internet in 10 cities by the end of this year.


"in cities". There are a lot of people who don't live in major cities.


At some point Verizon and AT&T will get the service requirements relaxed so that 5G counts as servicing a home. At that point they'll run through and service all of the old rural installations. Hopefully the rural power companies do the same with the fiber they run inside their steel lines. That's my guess, anyway.


There is a reason for this, as there are a lot of customers in one place. Satellite-based internet allows you to have your infrastructure where you need it, though even this requires close control of your orbits. And then the question is whether satellite internet will be a) faster, b) more reliable, and c) cheaper than ground-based fibre and 5G. If yes, it might give these two alternatives a hard time even in metropolitan areas. If not, satellite-based internet will only be interesting for rural areas, and it's not clear the potentially higher prices offset the smaller customer base. In developing countries, higher prices are basically off the table anyway.

Satellites are still in space, so everything is more expensive up there: getting things up, repairing things, building things for space, operating things in space...


I would say there are a lot of people who live in cities/towns instead. You need the scale for things to work. I highly doubt this project is only aiming at people not living in cities.


VOIP. 150 ms max is recommended, with noticeable degradation of UX at 250 ms. Many phone operators use wifi to connect when they can. If people's phones don't work well, they'll complain.


To get better than 10 Mbps down and 2 Mbps up, I'd have to buy it from another country. Please let me be the first customer.

How can it be worse than literally all I can get?


No consumer will worry about latency or any other component of the physical feat of delivering internet via Starlink.

Gamers and nerds of various kinds will worry.


The article says 15ms. Even 2x that at 30ms, or 4x at 60ms, is totally playable on nearly every game.


30ms round trip is very good. 60ms round trip is quite noticeable for expert players.


>It will sell itself, in the USA at least.

Sorry for this shallow comment but I've read the title several times now, my mind races to the same thought:

>...to sell wireless high-speed internet from space!

"I wish them wild success, it's promising to see the opportunity for disruption of our telcom mafia as wireless comms tech evolves, but selling internet from space... total horseshit! Success or not, most space internet subscriptions will be sold from a call center in India." /s


[flagged]


Please accept my apologies. I was taken aback by your reply. It was not my intention to express bigotry; I thoughtlessly offered an unintelligible attempt at poor pedantic humor, absent context and comedy. You replied quickly, and so I chose not to delete it.

I enjoy watching this technology as it moves toward becoming an available option. The article is 9 sentences of an information-scarce announcement, but as is often the case, I returned here to find plenty of useful information in these comments. Interestingly, ATSC 3.0, which is packet based and already overflowing with DRM, now looks to be nearing a consensus. As early as 2020 (US) it could prove to satisfy enough content distribution traffic to reduce some downstream bandwidth demand and relieve congestion. SpaceX in partnership with AT&T or CenturyLink for a dedicated up-link over DSL would be an interesting offering. https://www.atsc.org/wp-content/uploads/2018/01/A360-2018-Se...


As someone who has run a tiny ISP in a tiny town, I assure you "people" are much more knowledgeable than you assume. I was actually able to disable all monitoring systems, since I would get a phone call immediately following any event that caused high latency or any kind of internet disturbance. I learned that people who live in smaller rural towns are in some ways much more dependent on the internet for things like entertainment, news, info, and media, since there are fewer social events, theaters, etc.


> "I would get a phone call, immediately following any event that caused high latency"

Also at like 04:00 in the morning? :- )


Probably means someone thinks your service is shoddy and they set up their own monitoring.


He's halving altitude to 550 km, in ideal cases bringing latency to 15 ms. Some satellites will be even lower, 350 km, for populated areas. https://arstechnica.com/tech-policy/2019/04/spacex-changes-b...


Is it 15ms to the satellite? If so, that's like 30ms to land, which would negatively impact gaming if it still needs to travel from where it lands to the end goal (via normal pathing?)


1000 km / speed of light = 3.34 ms. Suppose you want to ping a server next door to you and your ISP is a satellite 1000 km away. The distance your signal travels is 1000km from client to satellite, 1000 km from satellite to server, 1000km from server to satellite and 1000km satellite back to client. 4000 km at the speed of light is 13.34ms (the transceiving latency is probably negligible in comparison, though I'm not 100% sure about the DSP requirements for this sort of system)

If the target server is far away and the signal goes 7000 km from satellite to satellite in space before coming back down, then your signal will need at least 46.7ms more in space for a 14,000 km round trip up there, which results in a minimum possible ping of 60ms to a server 7000 km away. Fiber optics are 30% slower than radio links, so this 60ms minimum is 10% lower latency than the best fiber optics can do: a 14,000 km round trip at 0.7x the speed of light ≈ 66.7ms.

If the distance to your server is more than about 4.7x the altitude of your satellite, you will have lower latency with radio than fiber optics (set 4h + 2d at the speed of light equal to 2d at 0.7x the speed of light: d = 14h/3 ≈ 4.67h). Bringing the altitude of the satellites down will make a big difference, but you also need a lot more of them in a constellation to keep average distances low.
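FWIW the 7000 km example above checks out; a quick sketch in Python (my assumptions: straight-line paths, 1000 km altitude, the inter-satellite hop equal to the ground distance, fiber at 0.7c):

```python
# Assumed model from the comment above: 4 altitude legs (up/down on
# both ends of the round trip) plus the inter-satellite hop, all at ~c.
C = 299_792.458  # speed of light in vacuum, km/s

def fiber_rtt_ms(dist_km, fiber_fraction=0.7):
    """Round trip through fiber: out and back at ~0.7c."""
    return 2 * dist_km / (C * fiber_fraction) * 1000

def satellite_rtt_ms(dist_km, altitude_km=1000):
    """Round trip via satellites: 4 up/down legs plus the hop, at ~c."""
    return (4 * altitude_km + 2 * dist_km) / C * 1000

print(round(fiber_rtt_ms(7000), 1))      # ~66.7 ms
print(round(satellite_rtt_ms(7000), 1))  # ~60.0 ms
```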


But for something more realistic for gaming, a player in Boston hitting a server in NY. The fiber message travels something like 200 miles, while the satellite message travels 2,200 miles.


Radio waves are faster than light going through a fiber optic cable, and the satellite mesh network will have fewer hops (switching packets on fiber is expensive time-wise).

The difference in latency won't be nearly as bad as a naive comparison of these two numbers would suggest. Depending on what hops your terrestrial traffic is going through, the satellite transmission could well be faster. To give you a trivial example, I'm seeing an 11 ms ping right now to a server within my own city that's only a couple miles away. This is ~0% signal transmission delay and ~100% switching delay.


It's still transmission delay; you are underestimating how fast routers forward packets today. Likely the server has a different provider, and they and your ISP do not peer in your city, so the traffic has to go somewhere else where they peer (or where their transit peers).


I think no ISP would connect to others with a huge coil of fiber to add that amount of latency.


No? But they do not connect to each other in every city either, which is why you can get this delay even if both parties are in the same city.


All the uplinks of my ISP, with one exception (carrying traffic through various lines up to European IXs), are connected to the Internet through the same MSK-IX, which is 900 miles away from my city and adds 4ms latency. I don't think GGP's traffic route is that long, though.


Not sure what you are talking about. 900 miles is ~1500km, and 1500km at the speed of light in fiber (2/3 of SOL) takes 7.5 milliseconds, so 15ms roundtrip.

But I don't see why it's relevant. Forwarding in modern ISP routers is on the level of tens to hundreds of microseconds, so even many hops don't add up to much. So what he has is likely not "switching delay", and saying that 11ms is 100% switching delay within the same city (a low number of hops even if multiple ISPs are involved) is just ridiculous.


> takes 7,5 milliseconds, so 15ms roundtrip.

I took that number from memory (it was for a one-way trip, though), so it might be scrambled. Just checked: RTT 17ms with 3ms jitter for me.

22ms RTT is still too much, though. Even with a centralized network for cross-border traffic like in my country, there are IXs way closer than those 1500km to any town in the European part, and obviously east of the Urals (the longest distance between IXPs is ~3000km for Krasnoyarsk-Khabarovsk; the longest distance to the closest IXP, ~1500km, belongs to Norilsk, the northernmost city on the globe, and its satellite Dudinka).

edit: fixed some english.


So maybe satellites simply aren't the best-use case for gaming?

I don't think Elon is counting on gaming to be the backbone of this service.


Boston to New York is 215mi, or a 430-mile round trip. At 200 miles up, where they intend to put the satellites orbiting over major cities, it's going to be about a 1000-mile round trip.

I thought that the US servers for most games are in the middle of the country to get the best average ping for a cross-country audience. Blizzard is in Chicago and Texas, for example.


> Boston to New York is 215mi or a 430 mile round trip.

Assuming terrestrial lines travel as the crow flies, which is not all that likely (but probably not too far off).

> At 200 miles up where they intend to put the satellites orbiting over major cities, it's going to be about 1000 mile round trip.

Other people here are noting that fiber signals travel at ~70% the speed of light, while radio waves travel at very close to the speed of light. If that's true, it puts the optimal time for land-based optical communication at ~3.3 ms[1], and the optimal time for a low-orbit satellite at ~5.4 ms[2] (if I didn't screw up the math). That's slower, but not by a whole lot, so it might even be better than you're predicting.

Also, are people actually getting sub 4ms round trip times between Boston and New York? If not, then the difference here might be entirely subsumed by other factors.

1: 430 miles / 186,000 miles/s / 0.70 ≈ 0.0033 s

2: 1000 miles / 186,000 miles/s ≈ 0.0054 s
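The footnote arithmetic looks right; a three-line check (same assumed figures as above, c ≈ 186,282 miles/s):

```python
C_MILES = 186_282  # speed of light in vacuum, miles/s

fiber_ms = 430 / (C_MILES * 0.70) * 1000  # 430 mi round trip at 0.7c
satellite_ms = 1000 / C_MILES * 1000      # 1000 mi round trip at ~c
print(round(fiber_ms, 1), round(satellite_ms, 1))  # 3.3 5.4
```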


Nope, the biggest AWS region is in Virginia (as well as a bunch of others, energy is cheap there). There are also lots of datacenters in Dallas, TX and San Jose, CA. NYC has a lot of connectivity (throughout the northeast, and transatlantic) but the cost of real estate is so high that it is uneconomical to operate lots of large datacenters there.

The majority of the population in the US is in the eastern half (1/3 in states on the East coast), so you don't actually decrease median latency by putting datacenters in the geographic center: https://commons.wikimedia.org/wiki/File:United_States_popula...


Nope, the mean population center of the contiguous United States is somewhere west of St. Louis.

https://en.wikipedia.org/wiki/Mean_center_of_the_United_Stat...

Actually not too far away from the geographic center which is somewhere in Kansas.


That doesn't disagree with what I said. The majority is still in the East (the mean is East of the geographic center), and you don't want to optimize for average latency anyway. You'll get the best median latency with two datacenters by putting one on the mid-Atlantic coast (~180M people East of the Mississippi) and one on the mid-Pacific coast (another 50M in Pacific Coast states). That means the latency for ~2/3 of the population is way lower, and the max latency is still about the same as putting them in the center of the country.


Not usually. In my experience, they usually follow the typical popular datacenter areas e.g. East Coast/West Coast. I've mostly only seen really popular games offer Central locations like Dallas/Chicago. But it's getting a lot better now that vultr (COD uses them) and other VPS providers get more and more locations.


A 550km trip at the speed of light should take just under 2ms. It looks like they've accounted for the other stuff.


I mean, I'm a gamer, but really... it's pretty low on the list of important reasons to support such a project.


Rural customers are used to 100+ ms response times. Even getting 60 ms would be a major improvement for many customers.


30ms is a good ping for games...

edit - yeah I forgot about the endpoint then to the game servers data center


The 30ms figure is just the earth->satellite->earth part though. It still has to go through terrestrial wires to the server, and then be processed, and back through wires to the transmitter.


Datacenters make deals with network providers. Why should it be any different with Starlink?


30ms would be you and the game server both on Starlink, on the same satellite. Otherwise, you have to add that to the latency between the ground station and the target server.


IIRC, the plan with Starlink is to have many stations that are supposedly easy to operate in many large-ish datacenters in major areas as opposed to systems like Iridium which have few, complex downlink stations in comparison.


That's not 30ms ping, that's add 30ms ping.


Not always I don't think.

Maybe from me to a geographic location 400 miles away, yes. But what about 4k miles away, like coast to coast?

That would probably depend on how many satellites it has to bounce through and how far it has to travel terrestrially, but I would think there is a geographic distance beyond which that 30ms is faster.


Up to three. After one bounce, it can be redirected via a geostationary or a polar satellite, where the final bounce would reach its LEO peer again.


That statement assumes no transmission latency on your current setup. Kinda doubt that's the case.


Has anyone published the expected lifetime of those satellites? At the lower end of that altitude range the drag is going to be pretty severe.


I presume that's just one direction?


The high latency questions are only from people who really don't understand the industry. The question most analysts in the industry ask are about how they plan to be profitable.


"People who don't understand the industry" describes nearly all potential customers. I don't think this refutes the GPs point.


Most potential customers don't know the difference between bandwidth and latency, they just think of it as speed.


I disagree. I think people equate bandwidth with speed, because that's what all of the commercials and marketing materials say. Latency is hardly ever mentioned.


Put the average consumer on a gigabit connection with 5 sec of latency and ask them how fast it is.


Yes people have been burned by HughesNet and garbage like that. But it won’t take much to educate them, people can understand the difference. A simple graph showing the ping times and people will get it.


I suppose this argument works really well on HN. How many people do you think know or care about ping time? If they did, don't you think we would see broadband commercials bragging about their latency instead of bandwidth? Streaming media dominates the internet today, and that's unaffected by high latency.


Streaming media is a particularly poor application for Starlink, which is starved for bandwidth.


ATSC 3.0 is a possible avenue for offloading some streaming traffic. It seems that the consumer hostile packages we've come to love are taking shape and 2020 may see some adoption...

https://www.atsc.org/newsletter/atsc-3-0-where-we-stand/


I agree, but that's the opposite of what people think.


High latency has been the dealbreaker for satellite internet. No one wants to use satellite internet if they can avoid it.


Is there a writeup anywhere of what the bandwidth and latency are likely to look like? I assumed the latency would be worse than that of a land line.


Latency, yes: http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf with a nice video of the simulation here: https://www.youtube.com/watch?v=AdKNCBrkZQ4&feature=youtu.be and a bunch of videos that include updated information here: https://www.youtube.com/channel/UC-k1j7M2-hBfXeECd9YAQ_g

Bandwidth, not to my knowledge. As the nice paper above says, it's too speculative.

Nothing official.


Thanks, great video.

Imagine every one of those satellites running BGP!


They probably won't... links between satellites will use a different routing scheme. Traditional network protocols will "disappear" at the sending ground station, only to "reappear" at the receiving one.

A video link of a simulation was posted on a different comment that shows the "routing" between multiple satellites.


Why wouldn’t they use something like BGP to route between the satellites? Because the topology is explicitly known ahead of time and so the routing can be more “hard coded?”

An interesting side effect of this network is that latency experienced by any two users, anywhere on the network, should be close to equal.


It depends. If you're connecting to a data center across the street, the latency will probably be worse. If you're connecting to one on the other side of the world (or somewhere where the packets have to take a more circuitous route), the latency might be better. The satellites aren't that high up so latency won't be a huge issue.

I think bandwidth limitations will be more problematic. The Earth is really big, and there's not really a good way with LEO orbits to focus your satellites over urban vs rural areas. IOW you can focus coverage on lower latitudes but not really certain longitudes.

IIRC previous FCC filings said each satellite would have ~20 Gbps of bandwidth. That'll be great if you're in a rural area, and I think the gigabit speeds some people are hoping for are actually possible in certain situations. However a lot of people think that they're going to be able to replace the Wifi router in their apartment with a Starlink antenna... I don't think that will be practical any time soon. I'm not an RF person but I doubt there's enough spectrum for each satellite to have tens of thousands of users simultaneously.

edit: also, like with 5G the signal quality is going to be crappier when it is rainy or cloudy.
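To put the ~20 Gbps figure in perspective, a back-of-envelope sketch (every number below is an assumption: the per-satellite figure from the filings mentioned above, a 25 Mbps average plan, and a 20:1 oversubscription ratio of the kind consumer ISPs commonly use):

```python
satellite_gbps = 20    # assumed per-satellite capacity (FCC filing figure)
avg_plan_mbps = 25     # assumed average subscriber plan
oversubscription = 20  # assumed contention ratio, typical for consumer ISPs

# Users who could run at full plan speed simultaneously on one satellite:
concurrent_full_rate = satellite_gbps * 1000 // avg_plan_mbps
# Subscribers one satellite could plausibly serve with statistical sharing:
subscribers = concurrent_full_rate * oversubscription
print(concurrent_full_rate, subscribers)  # 800 16000
```

So "tens of thousands of users per satellite" is right at the edge of plausible even before spectrum constraints, which matches the skepticism above.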


It's going to be a similar situation to cell phones. When you purchase the product, you know that it will be slow some times outside of your control, but that's the trade off for being able to get connectivity almost anywhere. With no hassle as long as you can see the sky. On your boat? Sure! On your farm 30 miles from anything? Sure!

It will be interesting to see if they geo-lock the base stations or will allow you to move them and then re-calibrate/re-register your location for load balancing.


There will need to be some level of geo-locking, since the price variation between poor/rich countries will likely be 10x


Being profitable is their problem (I can purchase a connection from someone else), but latency is something I have to worry about.


SpaceX actually intends to compete with terrestrial systems on latency; many routes will have significantly lower latency than any feasible terrestrial system.

I would like to see a framework where the backend of a cloud system can migrate opposite to the direction of orbit, so the same virtual cloud server could remain more or less "geostationary" (by 'hops' where it serializes a copy of itself, then fails over to the copy). Such migrations would happen as seldom as once per 10 minutes.

My game server cluster isn't too far away from having such a capability. (I was planning a variety of capabilities, such as using cheap GCP "ephemeral" instances by quickly failing over to such copies.)


Why would you need to do this? The constellation still needs downlinks - and the downlink nearest the physical location of the user should always be where that traffic pops back onto the Internet.


For a game server cluster, I would be running a game loop implemented as a series of pure-function, functor-like objects located "very near" a spatial database. In that case, I would be migrating to provide the very lowest latency, as if the system were running on a "geostationary" server just a few hundred miles overhead (which has to pause for a quarter second once every several minutes).


That sounds like a terrible experience. You want to avoid latency spikes at all costs. People might not like latencies as high as 100ms to 200ms, but it is definitely possible to enjoy a laggy game like that. Suddenly getting 500ms latency every minute will cause a lot of weird and unfair situations that can basically ruin the entire game.


That sounds like a terrible experience.

The usual latency should be on the order of 30-40ms round trip.

Suddenly getting a 500ms latency every minute

No one said anything about that. The client should have some form of dead reckoning, and there should be an edge server that maintains connections. With the right kind of game mechanics, no one notices the server disappearing then reappearing behind the scenes. I cite 250ms as a worst case. It should be possible to get the impact as low as the server skipping one frame every few minutes. The clients don't need to skip any frames.
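The dead-reckoning part, at least, is simple on the client side. A minimal sketch (all names hypothetical, linear extrapolation only; real games blend back smoothly when fresh updates arrive):

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    x: float      # last authoritative x position
    y: float      # last authoritative y position
    vx: float     # last authoritative x velocity
    vy: float     # last authoritative y velocity
    stamp: float  # server time of that update

def predicted_position(s: EntityState, now: float) -> tuple:
    """Extrapolate from the last update; papers over a short server gap."""
    dt = now - s.stamp
    return (s.x + s.vx * dt, s.y + s.vy * dt)

# Last update at t=10.0; server silent for 0.25 s during a migration "hop".
s = EntityState(x=100.0, y=50.0, vx=4.0, vy=0.0, stamp=10.0)
print(predicted_position(s, 10.25))  # (101.0, 50.0)
```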


the edge server in this case would need to be in space, on a satellite.

That's going to be a very expensive edge server!


I agree, jitter bigger than the synchronization time is worse than high latency. When a player has high latency (and I was top 3 on the NA Solo ladder in League of Legends while playing from the Urals with 220ms), he can only predict the moves of the opponent, because what you see is far behind the server state. But with stable latency he knows the exact delay between an input and its registration on the server, which puts less pressure on the mind to deal with desynchronization between the picture on the display and the actual state.


I agree, jitter bigger than a synchronization time is worse than high latency.

I'll always grant that fast twitch FPS will always have a problem with this. That's not the genre I'm interested in creating games for, and it's not what my server framework is designed to support. (It is real-time action, but the pacing is more akin to RTS than FPS.)


I didn't say that high latency isn't frustrating. But you can effectively shoot at the air along the route of the opponent, where you'd think he would be by the time the server gets your input.


A bit off topic but what does the word 'constellation' mean? A quick Google didn't yield much, and I'm not entirely sure what field it's from. My guess is "a set of connected satellites"? Or is it more specific than that?



Ahh thank you!


If only there was a large mass of rock and mud that was geostationary.


This is basically VMware's vMotion feature.


That's quite cool. I'm doing my feature at more of an API level. That said, many applications still need to be architected to do this sort of migration well. The turnkey version just guarantees a demo version. (As technology develops, the turnkey capability will increase, but people's demands on the technology will also increase.)


Wait, what? Are you planning to put your game server in orbit? Because otherwise I don't understand what you're asking - a server on the ground will always be "geostationary" - it's the satellites it talks to that will change.


Here is a neat simulation of a proposed network: https://www.youtube.com/watch?v=QEIUdMiColU


Admittedly I haven't researched it that well, but I'm curious how often they would have to aim their sensors at the sun to receive from a neighboring satellite. I'd have to assume those links would need to be subtracted from the system, which could slightly drive up latency around sun up and sun down for a part of the planet.


The sun is pretty small compared to space, something like .0005% of the full sky sphere, and almost half that time it's covered by Earth. Since there's no atmospheric glare in space, I suspect it's not a problem given decent engineering.
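The ~0.0005% figure checks out if you work it through (assuming a solar angular radius of ~0.267 degrees as seen from Earth orbit):

```python
import math

theta = math.radians(0.267)                  # angular radius of the Sun
omega = 2 * math.pi * (1 - math.cos(theta))  # its solid angle, steradians
fraction = omega / (4 * math.pi)             # share of the full sky sphere
print(f"{fraction:.1e}")                     # ~5.4e-06, i.e. ~0.0005%
```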


There are still internal reflections within the optical system - glare from that will likely be much larger than atmospheric glare


Given communication is point-to-point, and very coherent, I'd think this is extremely easy to solve if glare in the system was a real issue. Aim the receiver through a tube (or array of tubes) painted black, for instance. Certainly easier than building an equally precise telescope.


Latency does not worry me, but the economics do.

An LEO satellite constellation has a bad urban/rural problem, because most of the Earth's surface is ocean, uninhabited, or uninhabitable. Meanwhile, people from New York City and Detroit who hate their cable company will compete for network resources with the people in the middle.

To take advantage of those thousands of satellites, subscribers will need to install high-performance phased-array antennas that will probably cost at least $6000. There will be capital costs for the space segment as well, and since the system is just a bent pipe to Verizon, AT&T, or whoever your local rent-seeker is, they will take a cut.

Thus it is likely to be a very expensive service.


> at least $6000

SpaceX is working on making a low-cost phased-array antenna the size of a pizza box, targeting $100-300.

Direct source is a presentation Elon Musk made in 2015, video linked in this faq: https://www.reddit.com/r/Starlink/comments/ayec7p/starlink_f...


I just spent half an hour searching for that too...

Transcript for those who prefer text: https://web.archive.org/web/20150329174815/http://shitelonsa...


Cool idea, but it sounds a lot like your typical Elon announcement. SpaceX is doing great in the rocket business due to a clear advantage in development costs (I still have my doubts regarding their true launch costs, though). Pushing them to develop new antennas shifts the focus; not sure if that is such a good idea...


> will probably cost at least $6000

What is your source for that figure?

If it's military/aviation/naval phased-array radar, this is a very different use case. The targets are transceiving from well-known locations, not maneuvering to reduce signature. The orbiting target is well above the horizon, not hidden in clutter. There are well-defined frequency bands, so high frequency agility is not necessary. There are no purpose-built enemy jammers to overcome, so extraordinary power levels or heroic filters aren't necessary. There are no afterburners or naval catapults to withstand.

About the worst case I know of with the bands involved in this system is rain fade or snow cover.

We already have MIMO beamforming transceivers at nearly disposable prices in the form of wifi routers. I think it's entirely feasible to develop a fixed function civilian phased array at a reasonable cost.

Anyhow, with any luck we'll know soon.


Any source on needing a $6,000 antenna to get SpaceX internet? That would be a deal killer, but I haven't heard that.


Why do you think people who live in cities will want satellite internet?


Because they hate the cable company, because some of them have outrageous amounts of money, because some of them would like to have a backup connection, and because they are crammed in tight, so even if the uptake is only 1% they will still consume a lot.


Isn't a lot of demand good for the economics of this service?


Elon should really do a John Henry if he can afford to set it up: run a head-to-head data transmission with checksum validation in some theatrical setting. For bonus points he could (if it's possible) run a self-driving car demo where the computation is being done and relayed over a LEO uplink.

Also, I hear what you're saying, but... right now the algorithmic traders all use fiber, and new fiber is constantly being rolled out to support even lower-latency connections, so I'm skeptical. If it was competitive, wouldn't G&S be using it to contact the London Stock Exchange? Or is it that fiber can produce lower latency in idyllic scenarios, but congestion pushes the latency beyond what an LEO setup can accomplish for us plebs?


> if it was competitive, wouldn't G&S be using it to contact the London stock exchange?

They will, but no such satellite network currently exists. Right now, all of the satellite internet services are using satellites in high orbits. That means they can cover more of the Earth's surface per-satellite, and the satellites last a long time. This was necessary when rocket launches cost >$10,000/kg.

SpaceX is drastically lowering the prices of launching satellites ($1k-$2k/kg, Starship will probably be <$500/kg), and they are taking advantage of that by launching thousands of satellites into LEO (Low Earth Orbit). These satellites will have a 5-7 year service life, which will reduce the cost per satellite and allow for regular upgrades via replacement.

Newer hardware and lower altitudes means less latency and more bandwidth than previous satellites. It will definitely beat transatlantic fiber on latency, but bandwidth will be more limited.


I remember reading about line-of-sight microwave links being used for trading here on HN. I think line of sight is the key difference between fiber and radio.


I don't know if I'd call it direct line of sight, but low-bandwidth directional microwave communication is definitely used over the horizon, from, say, Chicago to London. This [1] Bloomberg article has a great rundown on microwave in finance.

Bandwidth is low, but latency beats fibre, and you can bounce microwave off the ionosphere between London and Chicago (and various other routes). This article doesn't mention the cross Atlantic transmissions, but it is possible, and does happen.

Disclosure: I have worked at several of the Chicago based firms mentioned in the linked Bloomberg article.

Edit: grammar, formatting.

I wasn't directly involved in any of the microwave networking, but ask me what you want and I'll answer to the best of my ability.

[1] https://www.bloomberg.com/news/features/2019-03-08/the-gazil...


More than you can possibly ever want to know about this subject here:

https://sniperinmahwah.wordpress.com/2014/09/22/hft-in-my-ba...


Microwave, or even HF. Definitely using direct RF links though.


As far as I'm aware, there aren't any LEO satellite internet providers at the moment (a few in GSO, but the latency there is probably too long to make it relevant for trading), so I think that it just isn't an option. It'll be interesting to see what happens when the first LEO constellations come online.


Serious algorithmic traders use point-to-point microwave links, because the speed of radio waves in air is about 50% faster than light in a glass fiber.


The majority of the latency improvement from these HFT networks comes from the path they take being more direct. Fiber routes follow roads and railroad tracks; a series of point-to-point antennas takes something much closer to a straight line.
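A sketch of the two effects side by side (the distances are my assumptions, loosely modeled on a Chicago-to-New-Jersey style corridor; real routes vary):

```python
C = 299_792.458  # speed of light in vacuum, km/s

straight_km = 1_140     # assumed great-circle distance of the corridor
fiber_route_km = 1_330  # assumed fiber route following roads/rail

microwave_ms = straight_km / C * 1000         # straight line at ~c
fiber_ms = fiber_route_km / (0.7 * C) * 1000  # longer path at ~0.7c
print(round(microwave_ms, 1), round(fiber_ms, 1))  # 3.8 6.3
```

Under these assumptions the microwave chain wins on both counts: a shorter path and a faster medium.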


With a phased array could you link this through a LEO swarm to hit targets hidden beyond the horizon?

I'm super unfamiliar when it comes to this sort of hardware.


> So there is a cross over point where the latency of a LEO satellite system is actually superior despite the uplink/downlink path.

There is, but given the rise of edge caching and edge compute for latency-sensitive applications, with edge locations placed to optimize access to masses of users, that crossover is unlikely to actually be reached in many of the cases where latency is a concern. Starlink is going to be stuck behind unless it can both get edge caching and edge compute onto its satellites and get content/service providers to use it, and comsats aren't exactly optimal locations for that.


Latency is also much less important with highly cacheable content. At least, I can't really think of examples to the contrary. My patience when loading pages while browsing is many orders of magnitude higher than for realtime uses. Bandwidth seems to dominate in this context.

Latency of multiple seconds, like you can see in some places around the world, is much more frustrating than the kind of latency discussed wrt Starlink.

The lack of a close edge node is kind of interesting, though, so i’m not trying to dismiss your comment in the least.

Edit: wording


Honestly I will be fine with bad latency if I can get reliable wireless internet anywhere on earth. If I need good latency, like for playing games, I will go to a location with fiber.

I really, really think SpaceX Internet can do a lot of good. It may force competition. It may give internet access to many countries where it's not an option. It may even help fight censorship, or open the door to an alternative to the current internet network.

Lots of potential.


>many important routes will have significantly lower latency than any feasible terrestrial system

Is that accounting for recent developments in hollow core fibers?

>"The researchers successfully demonstrated the first high-capacity, low-latency data transmission experiment performed using a hollow-core fibre. In this, they found that light propagated 31 per cent quicker than in a solid core fibre, increasing from 70 per cent of its full speed in a vacuum to 99.7 per cent. To put this in context, this means that data propagating in this fibre would arrive 1.54 microseconds per km earlier than it would in an equivalent length of conventional solid fibre. Not only did the light almost travel at its fastest possible speed, but it did so with a very low loss of 3.5 dB per kilometre."

https://www.southampton.ac.uk/news/2013/04/speed-of-light-fi...
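For what it's worth, plugging the quoted speed fractions into a quick script roughly reproduces that per-km gap (this is my own back-of-envelope calculation, not the article's):

```python
# Per-km one-way latency for solid-core vs hollow-core fibre, using the
# speed fractions quoted above (~70% of c vs ~99.7% of c).
C = 299_792_458  # speed of light in vacuum, m/s

def us_per_km(speed_fraction):
    """One-way propagation delay in microseconds per kilometre."""
    return 1_000 / (speed_fraction * C) * 1e6

solid = us_per_km(0.70)    # conventional solid-core fibre, ~4.77 us/km
hollow = us_per_km(0.997)  # hollow-core fibre from the article, ~3.35 us/km

print(f"difference: {solid - hollow:.2f} us/km")  # ~1.42 us/km
```

That comes out near 1.4 µs/km; the article's 1.54 µs/km figure presumably assumes a slightly slower effective speed in the solid core.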


> Is that accounting for recent developments in hollow core fibers?

No, it doesn't.

You're citing a story from 2013. Has this seen deployment in the field? I haven't found any evidence of this. That makes me wonder what is missing in this story.


That story was the first ever measurement of light speed in a new fiber technology, so it was obviously still in the lab at that stage.

It is still really an emerging research field, here's something more up to date -

https://www.laserfocusworld.com/articles/print/volume-54/iss...

But you can purchase it these days -

https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=91...

Also, I would think that the first transatlantic cable of this stuff might be kept pretty exclusive, given the economic opportunity presented by pissing around on the stock exchange.

edit - here's some more recent press from the Southampton lot - https://www.ses.ac.uk/2019/04/03/fibre-optics/


Why do you think that satellite to satellite routing will be particularly fast? There are a lot of hops, some satellites are moving relative to one another, and the satellites are power and bandwidth constrained. It sounds like a latency nightmare.


Consumer connections will use bent pipes. Satellite to satellite links are limited and will be mostly used by high paying customers like high frequency traders.


How badly does "weather" affect TCP latency over an LEO satellite internet pipe?


Depends on the frequencies being used. Starlink is in the Ku and Ka bands, so it will be affected by weather, especially heavy rain or snow (called rain or snow fade in the field).

Standard solution to rain fade is to make sure you have a power budget and can ramp power when heavy rain starts.
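A toy illustration of that power-budget idea (all numbers here are invented for illustration):

```python
# Toy rain-fade margin check: a link stays up if the fade is within the
# clear-sky margin plus whatever transmit-power headroom can be ramped.
CLEAR_SKY_MARGIN_DB = 6   # extra link margin budgeted in clear sky (assumed)
MAX_POWER_RAMP_DB = 3     # headroom to ramp transmit power (assumed)

def link_up(rain_fade_db):
    """True if the link survives a given rain fade, in dB."""
    return rain_fade_db <= CLEAR_SKY_MARGIN_DB + MAX_POWER_RAMP_DB

print(link_up(4))    # light rain: True
print(link_up(12))   # heavy downpour: False -> outage or reduced data rate
```

Real systems also drop to more robust modulation/coding before giving up entirely, which trades bandwidth for availability.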


Thanks! Especially thanks for the terminology, that's very helpful to look up more about it.


Given that speed of light is ~3×10⁸ m/s an additional ~1,000 km (10⁶ m) up and down and back again would increase latency on the order of ~10 ms (10⁻² s).

~10 ms of additional latency isn’t wildly out-of-sync with real world latencies of ~40 ms within North America and ~70 ms across the Atlantic. [1] And if we factor in the ~30% reduction of speed of light in fiber, the delta might actually be less than ~10 ms.

[1]: https://enterprise.verizon.com/terms/latency/
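A quick script version of this back-of-the-envelope math (assuming a ~1000 km shell and an idealized straight 6,000 km transatlantic path; both are simplifications):

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def rtt_ms(one_way_km, fraction_of_c=1.0):
    """Round-trip time in ms for a given one-way path length."""
    return 2 * one_way_km / (fraction_of_c * C_KM_S) * 1e3

# Extra satellite path: ~1000 km up + ~1000 km down, each direction.
extra_rtt = rtt_ms(2 * 1000)              # ~13 ms of added RTT at c

# Fiber penalty over an idealized 6000 km transatlantic route:
fiber = rtt_ms(6000, 0.70)                # light at ~70% of c in glass
radio = rtt_ms(6000, 1.0)                 # radio/vacuum at c
print(f"added satellite RTT:   {extra_rtt:.1f} ms")
print(f"fiber penalty vs c:    {fiber - radio:.1f} ms")
```

With these (rough) numbers the ~13 ms bent-pipe penalty is already smaller than the ~17 ms the fiber loses to its slower propagation, which is the crossover the grandparent comment describes.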


Local latency might be okay (worse than fiber, slightly worse than cable) but using space based routing is going to be a poor experience. There are a lot of latency-adding hops in space, and the terrestrial link bandwidth is so poor that popular destinations (LA or NYC) are going to be congested at their terminal endpoints, meaning you'll go the long way around and exit the satellites at a farther, more rural base station than is ideal, and use fiber for the final hops.


The problem I have heard isn't so much latency but path to profitability. Unlike GEO satellites, LEO based internet is not going to be fully operational until a significant number of satellites are up there to provide continuous coverage at a single point on earth. Financially, this makes for a very front-loaded-capital / debt-intensive business which will falter unless subscriber numbers and technology all pan out exactly as estimated.


If you could actually compete against fiber I feel like the funding would be very easy to acquire if your first step was linking up the various stock exchanges - those guys pay through the nose for even slight latency decreases.


This isn't going to be lower latency than existing low latency solutions.

EDIT: For more context, the old low latency fiber path between Chicago and New York was under 7ms, and current low latency radio solutions are under 4ms. SpaceX's solution is marketed as being "as low as 25ms".


It will matter for international trading, however.


> fiber isn't latency free. It isn't even speed of light. It's about 70% SOL.

There are use cases (high-frequency trading) where maintaining faster-than-fiber microwave links makes a lot of sense.

https://arstechnica.com/information-technology/2016/11/priva...


Not everyone has realtime needs. If you're not voice chatting or playing realtime games it's not that bad.


Even in voice chat's case latency isn't a huge issue until you get really high numbers. But you're right, even higher latencies can fit the 80% or even the 90% use case.


And ground infrastructure doesn't exactly take a straight path either. Add 5-10% slack in the line for loops/splice trays/pole movement, and then going around artificial and natural barriers...


When AWS introduced Ground Station [0] I was unsure as to their motivation. Looking back on it, this kind of endeavour might have been the impetus for the service. In fact, given how much of the "Internet" depends on AWS, SpaceX using Ground Station might actually lower latency to your favorite website.

[0] https://aws.amazon.com/ground-station/


A misconception probably due to an overestimation of how far away space really is; the ISS only orbits at roughly 240 +/- 30 miles, and the (admittedly arbitrary) boundary of space is at 60 mi (100 km).


The boundary of space varies depending on who you ask.

https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n_line


The article implies more uncertainty than there is. To my knowledge everyone agrees to define the edge of space at 100km (about 62 miles), with the exception of the US where for historical reasons the edge of space is seen as 50 miles above sea level. There is no international agreement, but even though the line is arbitrary there is no huge variety of widely held opinions.


I feel like a majority of people falsely think the cell phone in their pocket talks to satellites anyway. So a space delivered ISP might not be a big shock for people.


Could you define the acronyms you use please? LEO, GEO, SLO


I believe:

LEO - Low earth orbit (close to the earth)

GEO - Geostationary equatorial orbit (further from the earth, more efficient in terms of energy required to maintain a fixed position[1])

SOL - Speed of Light (a limiting factor in fiber optic communications - and over the air communications as well)

[1] From reading a bit, each orbital altitude corresponds to exactly one orbital period when minimizing fuel usage, so only one altitude matches Earth's rotation; holding a fixed position relative to the ground at any lower altitude would require constant velocity adjustment in the form of continuous fuel expenditure.


Re [1], you generally can't do fixed positions anywhere else but GEO. Trying to stay stationary at a lower altitude would require using amounts of fuel that are way beyond what we can put in space.

Note that GEO satellites do carry fuel with them for "stationkeeping" - between errors in navigation, variation of Earth's gravity (due to its irregular shape and composition) and influence of solar radiation and other celestial bodies, those satellites do have to occasionally and very slightly adjust their orbit to stay stationary.
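A quick sketch of the single-altitude point: Kepler's third law ties orbital period to altitude, so only one altitude matches a sidereal day (constants below are standard published values):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86_164.0905    # sidereal day, s
R_EQ = 6_378.137e3          # Earth's equatorial radius, m

# Kepler's third law solved for the semi-major axis of a one-sidereal-day orbit:
a = (MU_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EQ) / 1e3
print(f"GEO altitude: {altitude_km:,.0f} km")   # ~35,786 km, the familiar GEO figure
```

Any other altitude means a different period, so the satellite drifts relative to the ground unless it burns fuel continuously.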


Sorry yes, I was trying to express that but I think it came across unclearly - maintaining an unstable orbit requires a constant expenditure of fuel.


LEO - Low Earth Orbit

GEO - Geosynchronous Earth Orbit

SOL - Speed of Light


I wonder if this is a temporary software design problem. I mean, we have response time to burn these days, so we design with that in mind. If latency became a fact of life due to demand for a better deal on Internet access, maybe people will figure out how to make software that doesn't depend so much on a tight server loop.


Reminds me of the story recently which mentioned Grace Hopper and her famous visual representation of how far information could travel in a nanosecond when answering a question about why satellite communications takes so long.


It will only take a few real customers talking about the latency being fine to make this a non-issue. It's very easy for people to test on their own. I doubt they'll have to do much specific marketing about this issue.


Sorry no offence but why is this even a conversation? I use Hughes which is geostationary. 600 to 900ms latency. That is around 35000 km altitude. These will be hundreds of km altitude. Kinda basic math to get the answer here.


>many important routes will have significantly lower latency than any feasible terrestrial system.

Neutrinos brah.


Latency's not going to be a problem. What is going to be a problem is what nobody is talking about: LEO satellites move. This means the downlink antenna has to have motors and move too, or it has to be a phased array and be expensive.


> phased array and be expensive

There isn't any question about the type of antenna. The details have been filed with the FCC. The antenna will be phased array.

Obviously there is a high front loaded capital cost to developing a phased array transceiver for this system. The scale of the market, however, is vast; hundreds of millions of units. I believe the per unit cost of manufacturing these transceivers will be low due to this scale.

I can't know that however, so we'll have to wait and see.


Why hundreds of millions? At most this service will have a couple million people on the entire constellation if you want the speeds musk is saying. Otherwise, you have much lower speeds and cram more people, which is a trade-off all satellite technology has.


We can't compute credible figures at this point because a lot of the technical information is proprietary and the business model isn't known. I don't believe Elon has committed to any specific bandwidth; searching yields claims of "up to" 1 Gbps.

Given the rates cited here[1] (20 Gbps/satellite) I seriously doubt that 1 Gbps is going to be offered at a price level that allows it to be widespread. More likely we'll see service offered in the 10s of Mbps.

So the best we may achieve is some napkin level estimates. Let us assume 100 Mbps as a realistic subscriber rate. 12000 satellites[2] * 20 Gbps/satellite * 100/1 over-subscription (typical, see here[2]) / 100 Mbps = 240 million theoretical subscribers.

Obviously every figure here is open to endless dispute, but we're in the right order of magnitude. I believe this is the scale Elon is thinking at; he's trying to fund interplanetary space travel with this. In any case there will need to be a couple mountains worth of transceivers for the plan to work, and making fixed function electronics at that scale is typically highly cost effective.

[1] https://www.reddit.com/r/Starlink/comments/7xzkl5/starlink_s...

[2] https://en.wikipedia.org/wiki/Starlink_(satellite_constellat...

[3] http://www.ctcnet.us/wp-content/uploads/2014/02/CTC-Connecti... "Cable modem and DSL providers often have a 100:1 or greater oversubscription ratio for residential users and a 50:1 ratio for business users."
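Spelling the napkin math out (every input is one of the assumptions above):

```python
# Theoretical subscriber count from the stated assumptions.
satellites = 12_000          # planned constellation size [2]
per_sat_gbps = 20            # downlink capacity per satellite [1]
oversubscription = 100       # typical residential ratio [3]
per_sub_mbps = 100           # assumed subscriber rate

total_capacity_mbps = satellites * per_sat_gbps * 1_000
subscribers = total_capacity_mbps * oversubscription // per_sub_mbps
print(f"{subscribers:,} theoretical subscribers")   # 240,000,000
```

Again, every figure is disputable, but the order of magnitude holds.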


I'm on a small island in the Pacific. If I can get 100 Mbps, by any means, whatever the latency, it will be competitive. Right now I pay for 25, the two ISPs on island swear they're building out to 50, and I average 15 down and 1.3 up. My latency sits around 250-300 ms to North America and 150-250 to Asia.


Your math left out the biggest factor, which is usable bandwidth. Most satellites will be over water at any given time, so the number of users will be far less. And you're talking about full build out, which is a decade away. Internet consumption has been growing nearly exponentially.


That's a great point. Figure 80% of the Earth is ocean, 240e6 * 0.2 = 48 million subscribers. Still large enough to justify, for instance, the custom ASICs necessary to achieve a low cost phased array.


Elon has previously said they are targeting $100-300 for a pizza box sized antenna.


Thanks. I suspected it would be PA, but I have not read the filing.


That's the innovation this is taking advantage of -- the downlink antenna will be tiny and can be embedded.

This is a new market, not entering into an existing market.


It will not be tiny, but they are designing a pizza box sized phased array antenna with a target price of $100-300.


I'm sorry but you are incorrect. You cannot innovate your way around the laws of physics. Do you want 5 Mbps from a LEOSAT? Fine, you need a pointable antenna. An Iridium-style omni antenna cannot accomplish this. The S/N ratio is simply too low.


Sorry, but there are no laws of physics preventing innovation in the antenna space.


I suggest you study information theory and antenna theory. What you are suggesting is theoretically impossible. If a satellite could paint the continent with a megawatt, it might work. But satellites transmit with tens of watts, which gets spread over a large geographic area. The ground antenna must be pointed for decent bandwidth, either physically (with motors) or virtually (phased array) to achieve the requisite S/N ratio.
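For the curious, a rough Friis-style link budget illustrates the gap (all inputs are invented but order-of-magnitude-plausible numbers for a Ku-band downlink from ~1,100 km, not Starlink's actual figures):

```python
import math

def fspl_db(d_km, f_ghz):
    """Free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

PT_DBW = 16                  # ~40 W transmitter (assumed)
GT_DB = 30                   # satellite spot-beam antenna gain (assumed)
LOSS = fspl_db(1_100, 12)    # ~175 dB path loss

for name, gr_db in [("omni (0 dBi)", 0), ("pointed dish/phased array (33 dBi)", 33)]:
    pr_dbw = PT_DBW + GT_DB + gr_db - LOSS
    print(f"{name}: received power ~{pr_dbw:.0f} dBW")
```

Those ~33 dB of receive-antenna gain are the difference between a usable carrier-to-noise ratio and sitting near the noise floor, and an omni antenna can't provide them - hence motors or a phased array.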


This is correct. Basic antenna theory covers C/N and how it relates to antenna size. There's no magic allowed.


> What you are suggesting is theoretically impossible

But then...

> it might work

Are you an expert in this field? I'm guessing no based on this exchange...


Your deceptive edits are...deceptive. The "it might work" was preceded by a hypothetical condition and then followed by the real condition. It wasn't "it's theoretically impossible but it might work", but rather "it's theoretically impossible; if satellites could carry transmitters two orders of magnitude more powerful than they actually can, it might work."


That's not what "theoretically impossible" means. You literally just described a scenario where what I said would be theoretically possible, based on your own understanding of the technology, which is decidedly incomplete.

You also didn't answer the question about your expertise, I'm assuming because your answer is, "No". Maybe leave the correcting of others to the actual experts...


> That's not what "theoretically impossible" means.

Sure it is, if the hypothetical is pointing out a constraint on the boundaries of theoretical impossibility (or, in theory, if the hypothetical is itself possible to describe but theoretically impossible because of physical limits, though that doesn't appear to be the case here; there might be a theoretical lower limit on the size of a transmitter of the required power output or the supporting power system, but in theory there is no limit to the size of a satellite except the bound at which it becomes the primary because it's larger than the thing it is supposed to orbit.)

> You also didn't answer the question about your expertise

A careful examination of the thread will find that no such question was previously raised. There are more than two people on HN, so just because you get a response that doesn't make the person responding necessarily the person to whom you were responding. HN shows comment authors handles in case you ever need to know to whom it is you are responding, which you might want to do before attacking them based on assumptions about their identity.


If you can't see how something can't both be theoretically possible and impossible at the same time, we have nothing more to discuss.


If getting paid to design ground stations for LEO satellite constellations counts as expertise, then yes, I'm an expert in this field.


Most people think their cell phone signals go to space.


I don't know about the LEO numbers but a quick back of the envelope calculation assuming 1000 km means this one-way latency would be roughly 3 ms.

