SpaceX just got FCC approval to launch 7,518 satellites (fastcompany.com)
438 points by crunchlibrarian 3 months ago | 280 comments



Too bad the public markets have given Elon such a lashing with Tesla, because I would love to buy into SpaceX.

But then, I’m not the only one:

“There is an unlimited amount of funding that the company could probably access globally in private markets," Hilmer said, adding that he has personally met many of "a diverse group" interested in SpaceX. "Everywhere I travel around the world, investors of all types — individuals, family offices, hedge funds, sovereign wealth funds or private equity — want to get into SpaceX," Hilmer said. "It's almost all investors I talk to."

Of course at the same time I’m happy they aren’t public. The market couldn’t handle the time horizons that SpaceX operates under, nor the mission statement that drives them.

[1] - https://www.cnbc.com/2018/04/13/equidate-spacex-27-billion-v...


Anybody who sees a CEO who plans to spend a large chunk of company profits on colonizing Mars for the sake of humanity might think to themselves, "I wonder if I can start a similar company, avoid the Mars colonization expenses, and undercut them on price."


You misunderstand just slightly, and through no fault of your own I might add since SpaceX’s messaging is deliberately murky here.

SpaceX has no plans right now to colonize Mars. In fact, they keep saying, “Look! We want other companies or nations to step up and plan for how to establish a colony. We’re only going to do it if we absolutely have no other choice.”

SpaceX is truly establishing a financially viable solar transport system that may eventually extend beyond our solar system. This is analogous to the birth of the U.S. railroad system. We don't yet know what we don't know is possible.

SpaceX is poised to own space transport outright. That’s major!

Now what does Mars have to do with this? It’s just a helpful organizing goal. People love a good milestone. Something to reach for with meaning. I mean if you’ve been following SpaceX’s 15-year history you’d see that they’re nothing if not methodical in their planning and attainment of milestones.

SpaceX is one of the most well-run companies in the world. And right now they have the best prices, the best technology, the best pace, the best outlook, the best...

They have no competition. Literally. I’d invest the entirety of my lifetime earnings in SpaceX if I could. We’re witnessing historic achievements in the making.


I think for Musk, Mars is more than just a milestone, but he's realistic enough to know that SpaceX needs to focus on the rockets for now.


I agree SpaceX is way ahead, but I wouldn't say they have no competition. Blue Origin is making some very interesting moves.


Interesting, yes; however, in tech, products, partnerships, experience, or any other quantifiable measure, Blue Origin is far, far behind SpaceX.

SpaceX has launched many orbital missions.

Blue Origin has launched none.

SpaceX has two functional, tested, orbital spacecraft in production.

Blue Origin has a sub-orbital pod with goldfishbowl windows that's not quite ready yet.

SpaceX has successfully partnered with NASA for commercial cargo missions to the ISS.

Blue Origin has not.

SpaceX has landed more than a dozen rockets post-flight.

Blue Origin has landed < 10 sub-orbital rockets.

SpaceX has designs for rockets and spacecraft that might plausibly target inter-planetary travel.

Blue Origin has not gotten to orbit yet.

Don't get me wrong; I want Blue Origin to succeed. However, the slope of their progress graph has been much shallower than SpaceX's. I don't see them ever catching or overtaking SpaceX unless something drastically unforeseen happens.


> SpaceX has landed more than a dozen rockets post-flight.

It's 20 (with 5 failed attempts).


Blue Origin was founded in 2000, two years before SpaceX, and they still have yet to reach LEO. That's a really bad sign for actual progress. They are optimistic that they might be able to launch four years from now, assuming schedules don't slip. But 6+ years from now is more likely.


I'm not sure what's "interesting" about having been around for longer and still having achieved absolutely nothing.


The engine that Blue Origin has developed is very powerful. See https://en.wikipedia.org/wiki/BE-4. It'll be used by various companies for their rockets.

The SpaceX rocket has certain trade-offs. Basically, the upper stage is not that powerful compared to various other rockets. This is usually not too bad, because customers go for the cheapest option. Blue Origin might have a huge advantage there, though.

Blue Origin is not in need of money, which drastically changes things. It seems they went first for nailing a super powerful engine, then moved on to developing something close to the SpaceX BFR, called New Glenn (https://en.wikipedia.org/wiki/New_Glenn).

Note that in various cases people compare expendable versions with reusable versions. Further, SpaceX is often better for lower orbits.

I suggest following /r/spacex on reddit. The above is what I learned from reading discussions there.


New Glenn is not really a BFR-class design. It falls somewhere between Falcon 9 and Falcon Heavy.


The advantage of rocket science as a business is that there's a pretty big barrier to entry for anyone who wants to spend a lot of money to get in. Plus a lot of the rocket scientists who work for SpaceX (as opposed to ULA or Astrium) are there specifically because of the Mars colonization mission.


Gravity is quite the barrier!


Once you have full scale mining of space materials and infinite automated production of anything, why “on earth” would you need to sell anything to anyone ever? Wealth is something you need only if you lack something else.


Wealth? Isn't that just capital? Just like the infinite automated production would be capital as well.

I would rephrase this as: what need is there for markets and money?

It's not about personal gain. It's about how to manage the resource pools of an entire civilization.

What you are describing is the perfectly planned economy. From empirical experience, national economies cannot be efficiently planned. The USSR did not have the computational capability we have now, but we don't know whether there is any way to control a national economy, let alone a planetary or solar one, by any means other than markets.

So, the question to me is: if a solar economy can be planned, then there is no need for markets and money; but if it's computationally intractable, or the market way of organization is the cheapest and best way to do it, then we are stuck with markets and money up to the heat death of the universe.

Interestingly, if from an algorithmic point of view markets and money are the best way to organize a civilization's resources, then it's likely alien civilizations (if they exist) will have a market economy too.

I would hope the question of resource allocation is tractable, and that the future of mankind is more like Iain Banks's Culture rather than "robber barons of the Oort cloud".


>avoid the mars colonization expenses and undercut them on price.

It would also lead to "undercutting" on technology. Look at Bezos: without such a powerful Mars "hyper-drive," the best he can do is be an engine supplier to ULA. A hugely respectable achievement on its own, no doubt, yet nothing close to SpaceX, which has already really advanced our civilization and is on track to advance it even further.

In particular, I think the Mars-mission-based POV lets them filter for the best architectures long-term, like the modular F9/Heavy construction, which wouldn't necessarily be the best in the short term of just servicing Earth satellites.


Then why haven't we seen competitors to SpaceX besides the money burning pit that is Blue Origin?


I read that as there is a lot of money that wants to piggyback on the success of SpaceX instead of fundamentally developing anything via their own investments.


Which is fine; if there's someone who can create, it's amazing that we can support them.


Back then the rocket scientists from JPL and MIT had dozens of aerospace contractors where they could go, and science would be moved forward. Nowadays, the rocket scientists join any of the dozens of hedge funds, but some go work for SpaceX. This is the stagnation of outsourcing.


> This is the stagnation of outsourcing.

This is the stagnation of paying your scientists poorly.

One of the reasons for so many microbreweries in San Diego was paying microbiology types so poorly. More than one microbrewer here has said: "Well, I was getting paid kinda crappy in biology. So, I started a microbrewery. Now I get paid about the same, don't have to put up with managers, and get to drink beer."


This is the stagnation of paying your scientists poorly.

All the same. If there are no jobs for scientists, salaries drop, and the oversupply isn't going to go away anytime soon. So what's this STEM shortage they are talking about in Washington?


What do you think investment is exactly?


I think that systemic interest in investment in fundamental economic improvements has declined while systemic investment in financial vehicles (pardon the pun) is rampant. This would be ok if the financial vehicles were connected to economic well being, but that link is increasingly tenuous.

It basically takes a crazy, put-it-all-on-the-line billionaire to assemble technologies, but why doesn't the economic system regularly do this? Musk is great, but I think his skill, much like Jobs's, is excellent integration of a number of existing technological possibilities. Why is Musk, as an individual, one of the few who can do that at scale, versus industries that in actuality have more resources than Musk?


I think salesmanship and being able to sell a daring vision is key. Why else would anyone invest in risky long term projects like space travel when they could just buy an ETF instead?


The guy they’re quoting sells SpaceX stock on the secondary market. Of course he’ll say it’s selling like hot cakes—it’s what he’s hired to sell.


Yeah, it seemed like hyperbole but fit my narrative and was the first hit on Google.

Oops, am I not supposed to admit that?


Hah, upvoted for honesty!


Sure it can. Those who don't like it sell to those who do, just like in the private market.


You can buy into them. They are allowed to sell on secondary markets.


The fact that SpaceX can piggyback on its own customers' launches to put these in orbit is a significant advantage over competing communication constellations.

Depending on the cost of their satellites, it might make sense to use launching them as a means of testing the upper limits of their rockets' reusability. I.e., they might not want to risk a customer payload on a rocket that has made 10 launches. But if they are going to build 7,518 satellites, the marginal cost is likely to be rather low, so it might be worth it to push the risk threshold to stretch the number of trips per rocket. Also, it could be a good opportunity to clear out their inventory of pre-Block 5 Falcon 9s.


I would be surprised if they launch many of the satellites on customer launches. They have some very specific orbits, and are planning on dropping the satellites off in orbits much lower than where they will actually be operating. It would probably be more fuel efficient to drop off a bunch of them in the right inclination all at once, then boost them up into spaced out orbits.


It's possible that they will use reflown Falcon 9s that have exceeded their reuse capability (~10 launches) to deliver these satellites. You're only paying for the propellant, the oxidizer, the second stage, and your fixed operations cost; the rest of the vehicle has been amortized through paying customer flights.


They only reuse the first stage at this point, so still a lot of cost there.


It's not about piggybacking on customer launches - these will mostly be launched on dedicated launches.

Rather, it's about using spare capacity freed up by the long-foreseen slowdown in the geostationary launch business that has historically been SpaceX's (and everyone else's) bread and butter, without crashing market launch prices.


All pre-Block 5 boosters have either been retired or expended.

https://old.reddit.com/r/spacex/wiki/cores


This video [0] really puts into perspective what SpaceX is trying to accomplish. Incredible stuff if they pull it off!

[0] https://www.youtube.com/watch?feature=youtu.be&v=AdKNCBrkZQ4...


FYI, the constellation was revised and an updated video was made. https://www.youtube.com/watch?v=QEIUdMiColU&feature=youtu.be


That is amazing. Seeing the complexity of the grid is mind-blowing, and the realization that the blocks representing the satellites are the size of cities makes you realize how big space is. That said, future launches are going to have to run a gauntlet...


It seems like that, doesn't it, but space is MASSIVE. Imagine if there were only 7,000 people spread out on the face of the earth: would you be worried you'd accidentally run into one of them? These things are up another few hundred miles, so they're even more spaced out than that.


This is true, but not as useful an intuition as it first appears. The major difference for satellites is that everything in low-Earth orbit is traveling at ~8 km/s, so the spatial volume they traverse in any amount of time can be surprisingly large. A cubic-meter satellite traverses 250 cubic kilometers of volume in a year. (If you want an order-of-magnitude estimate for collision frequency, imagine one satellite occupies its normal volume and the other occupies the traversal volume, and then just ask what the chance is that those volumes overlap, given the amount of space above the Earth within the altitude range of the orbits.)

And indeed, there has already been an accidental satellite-satellite collision (not just satellite-debris collision).

https://en.wikipedia.org/wiki/2009_satellite_collision

The Iridium constellation has about 80 satellites, and one passes within 5 km of another satellite about 50 times a day.
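As a rough sanity check, the traversal-volume argument above can be run numerically. All the parameters here (cross-section, altitude band) are illustrative assumptions, not real constellation figures:

```python
import math

SAT_SPEED_KM_S = 8.0      # typical LEO orbital speed
CROSS_SECTION_M2 = 1.0    # assume a ~1 m^2 satellite cross-section
SECONDS_PER_YEAR = 3.156e7

# Volume one satellite sweeps out in a year (km^3): path length x cross-section
swept_km3 = SAT_SPEED_KM_S * SECONDS_PER_YEAR * CROSS_SECTION_M2 * 1e-6
# comes out to roughly 250 km^3, matching the figure above

# Volume of a 20 km thick spherical shell around 550 km altitude (km^3)
R_EARTH = 6371.0
shell_km3 = 4 / 3 * math.pi * ((R_EARTH + 560) ** 3 - (R_EARTH + 540) ** 3)

# Chance per year that the swept volume overlaps one fixed 1 m^3 target
# sharing the shell; ignores orbital correlation, which changes this a lot
p_collision = swept_km3 / shell_km3
```

The naive per-pair probability is tiny, but with thousands of satellites the number of pairs grows quadratically, and correlated orbits can make the real risk far from this estimate in either direction, which is why conjunction screening exists.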


If Starlink didn't route data in space and instead used space only as a last mile, would it be significantly cheaper, or have higher bandwidth, or be feasible to provide without data caps? What are the economic and technical tradeoffs here?

I don't mind the latency of today's fiber or cable Internet, and if it's a problem, it can be solved by moving servers closer to users.

That is, what if, instead of bypassing the fiber backbone, Starlink just tried to connect everyone to it? Because I'm assuming that sending a terabit of data between two major cities, like Chennai to NYC is going to be cheaper via undersea fiber.


The problem is certainly going to be uplink/downlink - even with this massive constellation, you're fairly limited by the available RF spectrum. If you had a decently high subscription rate in a reasonably densely populated area, you'd probably be down to dozens of megabits per second per user at most.

This could deliver absolutely massively better internet in less populated areas - rural, shipping, etc. - but I feel like people are perhaps getting the wrong idea and thinking this could supersede terrestrial networks in urban/suburban areas, in which case there will be a lot of disappointed people.
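The "dozens of megabits at most" intuition is easy to reproduce with back-of-envelope numbers. Everything here is an assumption (spectrum per beam, spectral efficiency, count of active users), not a Starlink spec:

```python
# Back-of-envelope per-user throughput in one satellite beam.
SPECTRUM_HZ = 1e9    # assume ~1 GHz of downlink spectrum per beam
BITS_PER_HZ = 2.0    # assume ~2 bit/s/Hz average spectral efficiency

beam_capacity_bps = SPECTRUM_HZ * BITS_PER_HZ

# Shared among simultaneously active users in the beam's footprint
for users in (50, 500, 5000):
    print(users, "users:", beam_capacity_bps / users / 1e6, "Mbit/s each")
```

With 500 active users sharing an assumed 2 Gbit/s beam, each gets about 4 Mbit/s, which is why density, not total capacity, is the limiting factor in cities.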


I didn't ask about users in densely populated areas. Imagine two villagers on opposite sides of the world communicating via Starlink. Would it be cheaper per bit for Starlink to offload the data to the nearest metro area, from where it can travel around the world by fiber, till the closest metro area for the receiver, at which point it gets sent via satellite? In other words, use satellite only as the last mile on each end, not for the backbone.


If you're in the middle of the Pacific it won't work unless the satellites talk to each other and route the data to a ground station that may be thousands of miles away.


Sure but most people don't live in the middle of the Pacific, so is an optimised system possible for them?


80msec is not a good outcome considering the cost and complexity of the investment. Aren't there alternative technologies to achieve this connectivity?


I think you might be underestimating latencies in the real world.

These [0] are statistics from Verizon showing 90ms RTT for trans-atlantic connections. Trans-pacific is > 100ms. 80ms seems highly competitive to me in this context.

[0] https://enterprise.verizon.com/terms/latency/
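A sketch of why ~80 ms can compete with fiber: light in glass travels about a third slower than in vacuum, so a satellite path can win on long routes, at least in the idealized geometry below. Distances, altitude, and the fiber index are round-number assumptions:

```python
# Rough latency comparison: LEO satellite hop vs terrestrial fiber.
C_VACUUM_KM_S = 299_792
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47   # refractive index of glass ~1.47

def fiber_rtt_ms(route_km):
    # Round trip through glass over the full cable route
    return 2 * route_km / C_FIBER_KM_S * 1000

def leo_rtt_ms(ground_km, altitude_km=550):
    # Up to the satellite, along the constellation, and back down,
    # all at vacuum light speed (ignores processing/queuing delays)
    path = 2 * altitude_km + ground_km
    return 2 * path / C_VACUUM_KM_S * 1000

# London to New York is ~5600 km great-circle; real fiber routes run longer
print(fiber_rtt_ms(6500))
print(leo_rtt_ms(5600))
```

On this idealized trans-Atlantic route the fiber RTT lands in the mid-60s of milliseconds and the vacuum path in the mid-40s; real systems add routing and processing delay on both sides, which is how Verizon's 90 ms figure arises.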


There is a video posted in the comments that compares latencies to cables. SpaceX advertises latencies from 50-80 ms, but cable is in all cases faster (rather obvious, I think).


It is a good outcome considering the speed of light and existing technologies. I don't know what you rightfully expected if you were hoping for serious improvement on that number, other than breaking the laws of physics.


My question is whether the slight improvement in latencies makes the investment worthwhile.


It's still a somewhat good value, compared to GEO satellites. It's no fun to use those.


Another interesting watch is the TED talk by SpaceX COO Gwynne Shotwell earlier this year.

https://www.youtube.com/watch?v=Dar8P3r7GYA

Say what you will about Musk, but the guy is truly ambitious and willing to shoot for the stars (or Mars at the moment).


Curious how this will pan out.

It's a shocking contrast to be in or near a city and have broadband speeds, and then be just a few dozen miles outside one and have... literally nothing.

I just loaned my Iridium phone to a friend who was going to the jungle, and although he was able to make the data connection work, even doing email at 2400 baud(!) proved useless. Inmarsat is faster, but vastly more expensive.

Outside of those two, there is no global solution.


It depends on what you mean by "email". "Doing email" as in SMTP/POP at 2400 baud is feasible, "doing email" as in accessing a https webmail service is not.

You can communicate effectively over 2400 baud if (and only if) you use protocols and services designed for low bandwidth; luckily we have those protocols built and tested, even if we have mostly abandoned some of them.


It's workable if you take your MTU down to 256 or 128.


Are you actually suggesting dropping SSL? Sure, Iridium is somewhat encrypted, and indeed provides its own Facebook and Twitter apps.


Over 2400 baud? Yes, I probably would suggest dropping SSL and instead transmitting pre-encrypted data over a very plain, very simple connection, to avoid the overhead of the SSL handshake (something like 5 kB for current key sizes?) and its latency.

GPG-encrypted email over plain POP would work; and you can have automatic "remailers" on both sides of the link that handle the encryption/decryption once the communications are done over a "normal" connection.


You pay full handshake setup only if you don't have a shared key. If reducing bandwidth is the priority you would always set up a key on first connection and use the maximum allowable expiry time (10 days?) so long as you reconnect this often you never do another full handshake since you have a key (and you're allowed to get a new key with the old one, resetting the expiry timestamp each time).

With this slimming approach you also wouldn't use these grotesque "... and here's a portrait of my dog" type certificates we sometimes see, that may be several kilobytes in size, nor use the lengthy RSA keys. You'd do the minimum possible certificate for a single name and an EC key.

So it'll probably cost ~1 kB on first connection and then only a few hundred bytes on subsequent connections, without any special components; this all just works with generic software you already have, if configured appropriately.

You can shave a little more off if you agree to custom-configure everything, since proof-by-certificate isn't mandatory in TLS per se; you can do everything with shared keys if you agree out of band on what those keys are. This is intended for IoT-type applications.
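To put the byte counts in this thread into time terms: at 2400 baud with 8N1 framing you get about 240 bytes/s, so the handshake figures above translate directly into seconds on the wire. The sizes below are the rough numbers from the discussion, not measured values:

```python
# Transfer times at 2400 baud with 8N1 framing (start + 8 data + stop bits)
BYTES_PER_SEC = 2400 / 10

for label, size_bytes in [
    ("trimmed first handshake (~1 kB)", 1000),
    ("resumed handshake (~300 B)", 300),
    ("typical RSA cert chain (~5 kB)", 5000),
]:
    print(f"{label}: {size_bytes / BYTES_PER_SEC:.1f} s")
```

A few seconds of handshake per connection is tolerable if you hold the connection open; the 5 kB certificate chain that costs ~20 s is what makes untrimmed TLS painful at this speed.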


With a 1-RTT TLS 1.3 handshake, small ECC keys, and a certificate chain optimized for size, TLS shouldn't be a problem for syncing mail over a slow connection.


Simple and reliable PGP encryption will outlast TLS 1.3, and rural users (quite reasonably) expect more life from their devices than city folks. No one is going to download an update with a 200 MB image at 2400 bps, after all.


PGP email is basically deprecated by the security community. Security people do not use and do not recommend PGP email. TLS is used by everything, and a version of it will be used for a long time; for example, everything that will use IETF QUIC, or HTTP/3+ on top of IETF QUIC.


TLS is transport layer encryption. PGP is mail content encryption. TLS can't replace PGP, as it deals with encryption on another layer.

While you might feel that it's deprecated (says who?) it's certainly in use, even by those "security people".


even doing email at 2400 baud(!) proved useless

In the late '90s I did a lot of my grad school coding and implemented a website for a state agency through 9600 baud modems. Emacs was great in those days, precisely because it's a Lisp machine OS masquerading as a text editor. I used to read Usenet news, read my email, do my work, and shuffle between multiple remotely logged-in shells, all through Emacs.


Starlink could be a godsend for online gamers. Anything under 100 ms is quite playable for FPS games, and you don't need much bandwidth for those UDP packets. Just guarantee some minimum bandwidth allotment of, say, 1 Mbps, and make sure the satellites are not so oversubscribed as to cause unnecessarily long or unoptimized routing. Games between the US west coast and the EU, or the EU and Asia, would be playable.


If you're going to another continent, you're going to see a big latency savings. If you're going to another state, you're probably going to get lower latency on fiber. But really, a frame at 60 Hz is 16.7 ms, and many online games advance their world state at 60 Hz too, so it's easy to rack up 67 ms of latency just sitting still.
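A sketch of where a figure like ~67 ms can come from, assuming (purely for illustration) four pipeline stages each quantized to a 60 Hz frame:

```python
FRAME_MS = 1000 / 60   # one 60 Hz frame is ~16.7 ms

# Worst case: input sampling, server tick, client render, and display
# scan-out can each add up to one full frame of waiting before anything
# moves on screen, even with zero network latency
STAGES = 4
worst_case_ms = STAGES * FRAME_MS
print(round(worst_case_ms, 1))
```

This is why a satellite link's extra round trip is often smaller than the latency the game pipeline itself already imposes.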


You are right that games have latency inherent to the processing of game engine ticks; depending on when you press the button, the server might have just advanced to the next frame when your input gets processed, and you're looking at another delay of potentially tens of milliseconds on top of the network round trip, plus however long it takes the game to draw the updated info on your screen once the information gets back.

Wonder if it would make sense nowadays to send world state updates to clients at 1000 Hz or if it's even feasible?


A lot of people play at 144 Hz (what I play on) and some even higher (240 Hz), which is just under 7 ms and just over 4 ms per frame, respectively.


He's talking about network frames, not graphics frames.


Is this really true? I haven't played video games in a long time, but I remember when playing Counter-Strike that someone with a 20 ms ping was at a significant advantage over someone with an 80 ms ping.


I think playable is meant more in a "we can have a fun game" sense and less of a "I can be competitive at a high level" sense. Even the former becomes hard at higher latencies.


Sure, I'm using 4G mobile internet now and occasionally have to play Call of Duty at around 80 ms, and I can still kick ass. There's no place like LAN for hardcore competitive gaming, but if the game's netcode is done well, 80 ms is good enough. The trick is to have some sort of lag compensation, which CS should have as far as I know, where you don't have to lead your shots, so the feel of the game is mostly the same; you just get the reports a bit later than you would at 20 ms.


Maybe this will help boost the "game as a service" offerings.


As far as I know, they still serve different purposes. The starlink antenna needs to see a large portion of the sky and needs to be stationary.


Iridium already employs SpaceX for their launches.


What's perplexing is why Iridium is not doing way more. They are essentially just letting SpaceX cannibalise them. Sure, they don't have fancy rockets, but they could still scale their satellite net before Elon catches up on sat tech, or outsource launches to an even cheaper Indian launch provider.

P.S. The story of Iridium and how they recovered is quite amazing.


I think the more amazing part of Iridium’s story is that they got the money to launch their constellation in the first place. Its crushing debt and inevitable bankruptcy, with the assets being acquired for something like 1/20th of a cent on the dollar, was to be expected, really.


Iridium is around now, both their next gen and the old gen. SpaceX will take at least 5-7 years to get full coverage. The customer base of Iridium is very different from what SpaceX is targeting; there should be room for more than one player.


Is there anyone who knows more about radio stuff who can explain why they want to have both Ku and V band for users? Is it just a matter of having more spectrum? Or are some bands more suited to crowded areas, like a city where there might be lots of customers sharing bandwidth? Do you need different hardware for using the different frequencies?


There are a few possible reasons for using multiple bands. With the higher frequency of the V band you can send very high data rates down to the ground. However, higher-frequency RF is heavily attenuated by rainfall [0]. So the Ku band could be used as a backup to the V band.

The V band (or optical links) will also likely be used for the inter-satellite communication.

You do typically need different hardware for using the different frequencies. With some of the more advanced software-defined radios, you could handle both at the same baseband within the same radio, but you will still need some sort of frequency conversion [1]. You will likely need different antennas for each band as well. And to get an efficient system you also want to add filters for each band.

[0] http://happy.emu.id.au/lab/rep/rep/9510/txtspace/9510_032.ht... [1] https://en.wikipedia.org/wiki/Low-noise_block_downconverter
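The frequency trade-off shows up even in the simplest link-budget term, free-space path loss, before rain fade is counted. The slant range and band centers below are illustrative assumptions:

```python
import math

def fspl_db(distance_km, freq_ghz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# ~550 km slant range; Ku-band downlink near 12 GHz vs V band near 40 GHz
print(fspl_db(550, 12))
print(fspl_db(550, 40))
```

The V-band path loses roughly 10 dB more than Ku at the same range (20·log10(40/12) ≈ 10.5 dB), and rain attenuation scales up with frequency on top of that, which is why V band buys spectrum at the cost of link margin.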


The higher data rates come from the fact that there is a lot more spectrum available in the V band; from what I remember, there is about 4 to 5 GHz usable there. Ka band has about 2-3 GHz, and Ku about 1 GHz. As you mentioned, attenuation is one of the factors why you wouldn't use the V band. Another reason is that it uses quite a bit more power, and the RF electronics are much more expensive and rare.


This is such a cool problem.

We already have a ton of GEO satellites that can do this kind of communication, but they are super expensive and have limited bandwidth.

These satellites will actually have a ton of limitations in how much data they can send around and how they'll have to balance out their signals. GEO satellites are easier to point to because, well, they don't move.

But these are going to be moving and changing all the time, so you'll have to connect to multiple satellites every day. I'm spitballing here, but they'll probably be overhead for 10 minutes? Think about switching your router every 10 minutes. Or you get a rainy day and your signal clarity goes down. Or you are over the equator, in a band that is used strictly for GEO.

This is going to be a super cool problem to solve. And I'm sure I don't even understand the half of it.

Edit: Sorry, the router example is pretty bad. It's more like running your phone, but you have to specifically aim your antenna at each tower you're passing while driving. The complexity is the moving nature of the network and the targeting nature of the antennas. I have zero clue if phone signals are targeted, but I believe they are radial signals, more like a beacon than a laser.

Load balancing these can be a pain as well, because if you get too much signal on an antenna, it can actually block all signal.
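The "overhead for 10 minutes?" guess is checkable with basic orbital mechanics. This computes the longest possible horizon-to-horizon pass for a circular orbit; the 550 km altitude is a hypothetical, and real passes are shorter once you require a minimum elevation angle:

```python
import math

MU = 398_600.4   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0

def max_pass_minutes(altitude_km):
    # Longest horizon-to-horizon pass: the fraction of the orbital period
    # spent inside the Earth-central angle subtended by the horizon
    a = R_EARTH + altitude_km
    period_s = 2 * math.pi * math.sqrt(a ** 3 / MU)
    half_angle = math.acos(R_EARTH / a)   # central angle to the horizon
    return period_s * (half_angle / math.pi) / 60

print(max_pass_minutes(550))
```

For 550 km this gives roughly 12 minutes as the absolute maximum; with a practical elevation mask of 25 to 40 degrees, usable passes are a few minutes, so handoffs are even more frequent than the comment's guess.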


>> Think about switching your router every 10 minutes.

Your cell phone switches towers all the time while you're on the road. My cellphone (republic) switches from WiFi to cellular network mid-call if I'm on it when I leave a building. This is not a new problem.


Well dang. That's a really good point. I remember early cell phones that would drop signal when running between towers.

The hard part here is that you have to target the beam at the satellite. It's not a wide-angle beam that's used; it's a focused beam. (This might not be true, depending on what they implement.)

But for GEO it's a targeted beam. So this isn't a perfect analogy, and my example was pretty bad lol.


Geo satellites also do beamforming. See epic ng or quantum.


Satellites shooting lasers to talk to each other and create a global network in space is not a new problem?

Damn, I guess we ran out of interesting problems to solve then /s.


>My cellphone (republic) switches from WiFi to cellular network mid-call if I'm on it when I leave a building. This is not a new problem.

Networking technology switching is unrelated to a call, which always uses cellular.


My cellular provider (T-Mobile) uses UMA/GAN to switch between WiFi-based cellular network access and cellular network access proper on the fly.


Republic specifically does this; it's their whole marketing gimmick. You pay a low amount for a modest data cap, but the phone seamlessly prioritizes WiFi networks when available.

I've been using them for years and it works fine.


Not sure about republic, but many phones implement WiFi calling.


His network probably allows Wi-Fi calling and text messaging. So it would be switching.


Motorola solved this problem with its Iridium constellation back in the 90s.

Handoff between satellites isn't really that hard. Your phone knows where it is and what birds are overhead (and which are about to appear/disappear) so they can adjust automatically. Routing through the satellite constellation gets kind of hairy, but is also solvable via orbital mapping and some arcane routing algorithms.

The real trick is getting enough link margin with omni antennas to get good bandwidth. As fun as they would look, nobody wants a cell phone with a little cartoon satellite dish on top that whirls around to track birds passing overhead.


The real trick is getting enough link margin with omni antennas to get good bandwidth.

What about phased arrays?

Also, why not have a completely different scale of ISPs? How about something the size of a soda vending machine or something the size of a suitcase which contains all the hardware for a very small scale internet service provider?


The latest rumor I'd heard was that that was exactly the plan, though still not at phone scale. The modem was supposed to be a pizza-box-sized piece of hardware with a phased array.


Phased arrays are nice, but have a slight problem with being stupendously expensive. It's going to be hard to convince people to jump on board if the starter hardware pack is $20k.


Kymeta is coming out with a directional flat antenna. It wouldn't suit a phone, but I don't think these are meant for phones. I wasn't saying cell phones would have spinny dishes on top.

The last part is where I think the real fun will emerge. Doing everything above, but optimized based on the load of each satellite, with a link budget computation involved, will be cool. It may turn out that the optimization isn't needed, but I think that kind of stuff is fun.

https://www.kymetacorp.com/markets/connected-car/kalo-trek/


> nobody wants a cell phone with a little cartoon satellite dish on top

Speak for yourself! ;)


> Think about switching your router every 10 minutes.

Cellular and WiFi AP roaming are solved problems. For SpaceX, this will be rather straightforward as the design specifies fixed ground stations that will handle negotiating/maintaining upstream connection(s).

> Or you get a rainy day and your signal clarity goes down.

That's more likely to be an issue, but having multiple satellites overhead, plus the fact that the constellation will be located in LEO (thus higher signal strength due to proximity) will help mitigate signal attenuation due to clouds/precipitation.


> Cellular and WiFi AP roaming are solved problems.

While you may be right regarding the context in which you said this — satellites — as a blanket statement, that's not true: my smartphone loses Internet connectivity when I leave my house, and takes a few seconds before it connects to the cellular data network.

I'm surprised that phones don't establish a backup 4G connection when Wifi signal strength decreases suddenly, but before it disconnects. This has been talked about for years, but for whatever reason it hasn't been deployed or doesn't work well, at least with Android P.


Sorry that wasn't clear. I meant cellular roaming between towers, and WiFi roaming between access points -- not roaming between the two technologies.

It's currently possible to maintain a single connection (including IP address, open sockets, ip tables, etc.) while roaming between cell towers and WiFi APs. That's not true when switching carriers/providers/technologies, although I agree that would be a great next step for cell phones.


Makes sense, thanks for clarifying.


Bleh - the router example was bad. Sorry I didn't think that one through too much. I did an update though.

You're right on the signal degradation being significantly lower. But the LEO aspect means that there might be a higher percent of total loss due to rain (and probably adjacent signal interference).

Either way it's a cool problem.


My knowledge of satellite internet is circa 2004 when I was in the desert of Iraq.

For the client premises equipment it's a hard problem but entirely solvable. Frequency-hopping radios have been around a long time. Link bonding has been around a long time. Which satellites will be in view, where, and when is knowable. I think the harder part will be the backhaul... in space.

I'm curious if they'll use beam-shaping antennas for the CPE. Surely they won't ship antennas with moving parts, right? Also I'm curious about the power requirements of the CPE.

Another problem will be security. This is a known problem with satellite downlinks: they basically spray everyone's traffic all over God's green Earth, and you used to be able to grab lots of data right out of the sky just by listening in. These days it should all be encrypted, but even then some of the encryption was easily cracked.


There is a startup called... googling... Kymeta, which is creating shape-changing antennas for cars, planes, and ships.

Encryption might be handled by default given the HTTPS nature of the web now, but it'll just make HTTPS even more important going forward. It would still leave a lot of metadata open to sniffing, which I don't think people would want either.


Kymeta is not really a startup. They have been around for a very long time.


Or you are over the equator in a band that is used strictly for GEO.

It's my understanding that this constellation is going to be significantly lower than GEO, so that's not a problem.

Also, the dead zones are toward the poles, but those are mitigated in phase two.


There is another constellation system (I forget the name) and they're going to have a partnership with GEO to beam signals for customers as the satellites pass over the equator.


There was recently a video posted theorizing this here: https://www.youtube.com/watch?v=AdKNCBrkZQ4

Very interesting watch.

Based off the paper: https://twitter.com/awm22/status/1044512585599602688


This is awesome thanks.


The bandwidth of geosats can be vastly improved with spotbeams:

https://www.montanasatellite.com/support/satellite-footprint...


>We already have a ton of Geo sat which can do this kind of communication but they are super expensive and have a limited bandwidth.

And high delay, which is the main problem.


With hundreds of satellites above your horizon at any given time, I don't think it'll be an issue to roam between them. In a metropolitan area it's not rare to be around dozens of cell sites, and cellular roaming is quite transparent.
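For a rough sense of how many satellites are actually above the horizon at once, here's a spherical-cap estimate (assumed values: a uniformly populated shell at ~1,150 km and the full 7,518-satellite constellation; treat it as order-of-magnitude):

```python
import math

R_EARTH = 6371.0   # km, mean Earth radius
ALT = 1150.0       # km, assumed shell altitude
N_SATS = 7518      # headline constellation size

# Central angle from the sub-observer point to the 0-degree-elevation
# horizon: cos(theta) = R / (R + h).
theta = math.acos(R_EARTH / (R_EARTH + ALT))

# A uniformly populated shell puts this fraction of satellites inside
# the visible spherical cap: (1 - cos(theta)) / 2.
visible_fraction = (1 - math.cos(theta)) / 2
print(f"~{N_SATS * visible_fraction:.0f} satellites above the horizon")
```

This pencils out to several hundred visible at a time rather than a full thousand, but that's still plenty of candidates for smooth handoff.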


Yes, but your cell towers aren't moving around relative to each other. And in order to get the data rates they need, at the speed they need, the signals between the satellites will use more focused RF or laser communication. So all of the satellites have to constantly know which satellites are closest to them and how to point their antennas and/or lasers.


> Yes, but your cell towers aren't moving around relative to each other.

Depends on the observer. If you move in a circle around a set of radios (driving around town), either you're stationary and the other radios are spinning, or vice versa. Also, many satellites will share the same orbit; those will be stationary relative to each other.


True, the satellites within an orbital plane will remain close to stationary relative to each other. There will be some perturbations, but they will generally hold position. In one of the plans I saw floating around, the inter-satellite links within an orbital plane will basically be fixed laser links, and the cross-links between orbital planes will be done with phased-array antennas, so they can be steered electrically instead of physically.


A GPS receiver is constantly switching between the satellites in the GPS constellation. This sounds like that, on a bigger scale.


Actually, without proper filtering you do see large jumps in position when you reacquire satellites. It's a solved problem, and the router example is not ideal, but yeah.


Does anyone have any context on what sort of internet speeds satellite internet has been benchmarked to provide? I can't imagine the latency being anywhere near good enough for many of the high-bandwidth things people use the internet for today.


Traditional satellite internet uses satellites in geostationary orbit, 35,786 km above the equator. This is great because satellites in that orbit are stationary relative to the ground, so you just need a parabolic antenna pointed at a fixed point in the sky to get great reception. But it's also terrible for latency, because the signal takes about 120 ms just to get up and another 120 ms to get down, and that's assuming you live on the equator, directly underneath the satellite. For someone living in the temperate regions, typical real RTTs are in the 700 ms range.

Starlink and OneWeb are different in that they intend to use a lot of satellites in low orbits to maintain constant coverage. This is technically much harder, not the least because you need thousands of satellites to get reasonably good coverage, but also because the ground station and the satellite transceiver both need to track each other. This was not technically feasible before, but modern AESA antennas can steer their signal without having to move the antenna, and can both transmit and receive multiple simultaneous beams and very rapidly move the beams around when doing time-sharing.

The minimum round-trip time to something very close to you, and also close to its own ground station, will be on the order of 10 ms. However, where the system really shines is long-distance communication. The satellites will pass the signal between each other using lasers, and will get the signal to the other side of the Earth much faster than terrestrial fiber, both because light in vacuum travels substantially faster than light in fibre, and because fibres don't get to follow ideal great-circle paths.

Independent researchers have evaluated the likely latencies of the system, and the results are frankly shocking. For example, today on the existing fiber network, the RTT between London and Singapore is ~160 ms. On Starlink, the RTT will be ~90 ms.
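A quick back-of-envelope check on why vacuum paths win (illustrative figures, not measurements: ~10,850 km great-circle distance for London-Singapore, a typical fiber refractive index of ~1.47, and a ~1.4x route-stretch factor for real cable paths):

```python
# Back-of-envelope propagation-delay comparison (illustrative figures).
C_VACUUM = 299_792.458            # km/s, speed of light in vacuum
C_FIBER = C_VACUUM / 1.47         # km/s, light slowed by fiber's index ~1.47

def rtt_ms(path_km, speed_km_s):
    """Round-trip propagation delay in milliseconds."""
    return 2 * path_km / speed_km_s * 1000

great_circle = 10_850             # km, approx. London-Singapore great circle
fiber_path = great_circle * 1.4   # real cable routes stretch well past ideal

print(f"ideal vacuum RTT:    {rtt_ms(great_circle, C_VACUUM):.0f} ms")
print(f"stretched fiber RTT: {rtt_ms(fiber_path, C_FIBER):.0f} ms")
```

Even before counting routers, the slower medium plus the longer route roughly doubles the propagation delay, which is where the ~160 ms vs. ~90 ms gap comes from (the satellite path adds its own up/down hops on top of the ideal figure).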


Not all satellite internet, currently, is dependent on geostationary. Read up on o3b's network and architecture, which has been operational for several years now. The cost, however, is basically "if you have to ask, you can't afford it". O3b is a market option for ISPs and telecoms in very remote or island areas that were previously dependent upon leasing Ku or C band transponder megahertz.



> Traditional satellite internet uses satellites in geostationary orbit, 35,786 km above the equator. This is great because satellites in that orbit are stationary relative to the ground, so you just need a parabolic antenna pointed at a fixed point in the sky to get great reception. But it's also terrible for latency, because the signal takes about 120 ms just to get up and another 120 ms to get down, and that's assuming you live on the equator, directly underneath the satellite. For someone living in the temperate regions, typical real RTTs are in the 700 ms range.

This doesn't check out.

The radius of Earth is 6,300km. Assuming a receiver on the north pole, the total distance should only be sqrt((35786 + 6300)^2 + 6300^2) = 42555km, which isn't notably larger than the 36000km at the equator.

I'm sure you're still right about the latency, but it can't be just distance doing it.
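A small law-of-cosines sketch makes the geometry concrete (assumed values: mean Earth radius 6,371 km, ground station at the satellite's longitude):

```python
import math

R_EARTH = 6371.0      # km, mean Earth radius
GEO_ALT = 35_786.0    # km, geostationary altitude above the equator
C = 299_792.458       # km/s, speed of light

def slant_range_km(lat_deg):
    """Distance from a ground station at the satellite's longitude
    to a GEO satellite, as a function of latitude (law of cosines)."""
    r_sat = R_EARTH + GEO_ALT
    lat = math.radians(lat_deg)
    return math.sqrt(R_EARTH**2 + r_sat**2
                     - 2 * R_EARTH * r_sat * math.cos(lat))

for lat in (0, 50, 90):
    d = slant_range_km(lat)
    print(f"lat {lat:2d}: {d:8.0f} km, one-way {d / C * 1000:5.1f} ms")
```

Even at the pole the slant range only grows from ~35,800 km to ~42,600 km, about 23 ms extra each way, so the bulk of the 700 ms must come from modems, framing, and queuing rather than distance.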


In practical real-world use, the absolute minimum ping time you'll see from a 1:1 dedicated-capacity (SCPC) link via geostationary, from end terminal to a larger earth station, is 489 to 491 ms. This will vary up to 495 ms depending on modem modulation/framing and FEC type, and actual line-of-sight distance to the satellite (a low-inclination aim angle for the antenna if you're not directly under the satellite, vs. two sites that are directly under it as viewed from geostationary).

Ignoring TDMA oversubscribed VSAT networks for the moment, there are a number of different possible modulation schemes, FEC types, and FEC code rates (payload vs. FEC percentages), plus things unique to different SCPC modems, which will vary the latency by a few ms beyond pure speed-of-light delay.


The latency from the distance light travels does not vary that much. The rest of the ~600 ms latency is caused by the same thing as latency on ground systems (routers, etc.). And the processors on spacecraft are not particularly fast.


Who builds routers that go in space? Cisco? That's got to be some crazy hardware.


Cisco has built routers for space multiple times before. The first time was way back in 2003: https://en.wikipedia.org/wiki/CLEO_(router)


I believe companies like Boeing or Space Systems Lorale do that kind of work in their satellite design groups. Same with Orbital ATK.


Apparently Elon is gonna try to use consumer hardware or build their own; that's kinda part of its cost advantage.


Is lower latency the main benefit? Because there are other ways to combat that, like locating servers closer to where users are. Yes, there are some applications like HFT or gaming that need lower latency, but I think it would be more useful to the world at large if Starlink provided people all over the world with affordable broadband Internet access at a minimum speed of, say, 20 Mbps. People far from cities, people in developing countries with bad infrastructure (till a few years back, I had a 4 Mbps connection), people on islands with bad connectivity, trains, ships, planes, etc. Does it do the latter?


>The satellites will pass the signal between each other using lasers

I thought there were regulatory barriers to doing that


SpaceX released a video about a week ago estimating that SF-London times would be around 60 ms, IIRC.


That sounds like a realistic number, considering the measured latency for the test satellites they launched earlier this year:

https://www.reddit.com/r/spacex/comments/8mccic/musk_tintin_...


"Regular old satellite internet" is very high latency because it's few satellites in high orbit. SpaceX is doing many satellites in low orbit, resulting in much shorter paths and low latency.

Latency for medium to long distances could be better than regular fiber, both because of straighter paths and because the speed of light in the near-vacuum of the satellites' orbits is higher than the speed of light in fiber.

We don't know yet what kind of equipment the satellites will have, but fundamentally throughput is constrained by available bandwidth per covered area. I guess they will sell gigabit or better to cargo ships at sea, similarly good speeds to rural regions, but nothing interesting to urban areas. Providing fast internet to a city via satellite would be very challenging; providing lightning-fast internet to lone people in a desert is much easier and more profitable (no competition).


I'd expect the first and easy goal to be to sell high speed access to airline passengers.


I'd rather expect it to be first sold to high frequency traders. HFT firms have paid astronomical amounts to shave milliseconds off of their connection times.


Which is why I would expect them to already have better-latency options than going to space and back. Does anyone even bother setting up an HFT firm outside fiber range of a major exchange?


Considering how hard it is to get a cell signal inside a plane on the ground, and that these will require special CPE, and the airlines can't change the equipment on a plane without a ruling by the FAA, I wouldn't bet on it.


Onboard internet provided by ground-based cellular or satellite already requires aftermarket hardware -- external antennas, routers inside the plane, and more.

It does require extensive testing and permission by the FAA (and the aircraft manufacturer, and the airline, and others) and extensive support teams in terminals but it's done semi-routinely.


Not easy inside a Faraday cage. In fact, how does SpaceX plan to deliver the internet? Would people need access to a roof antenna? That rules out a lot of use cases.


When is the last time you traveled on an airplane? In-flight wifi is pretty routine now; aircraft have lots of infrastructure to enable this. It would definitely be possible to adapt that hardware to connect to the satellite system launched by SpaceX instead. All it would take is a different antenna and radio system, which the sat operator would be happy to provide.


I thought you meant some kind of direct passenger connection. I think airplanes already use satellites for the internet connection, no? In any case, I don't see the huge benefit here: they will still all be sharing the same routers and probably have the same connection speed.


The amount of bandwidth available to a single aircraft would be much larger than it is today with this system.


I had Hughes satellite service for a while. Bandwidth was good. Latency was terrible. For surfing the web and streaming content it was fine. For anything twitchy, like driving a computer desktop through terminal services or any kind of gaming, it was nigh unusable. Plus, there were fairly low data caps that limited how much streaming one could do.

The big difference with Hughes and SpaceX is that Hughes satellites are geostationary and SpaceX's will be low earth orbit, so SpaceX's transmission distances and times should be much better.

Hughes provided a decent option for very remote areas. SpaceX appears to be putting together a first class option for everyone.


SpaceX and OneWeb both have an unfortunately low amount of capacity per square km, so remote areas can have a receiver for everyone, while urban areas will max out the number of receivers pretty quickly.


You'd be surprised: this article describes round-trip times of 25-35 ms and gigabit speeds: https://arstechnica.com/information-technology/2018/02/space...

Who knows if they'll actually reach these speeds/latencies in practice though, that remains to be seen.


Yeah, a significant portion of the latency is just the distance to the existing satellites in geosynchronous orbit. For light to get out to the geo belt and back takes around 240 ms, excluding any other factors. Getting up to the promised gigabit speed will be a significant challenge; the original demo satellites that SpaceX launched only achieved data rates of ~15 Mbps.


These satellites are planned for below 2,000 km altitude, where the radio signal delay is more like 3 milliseconds. It should be comparable to accessing AWS us-east datacenters from St. Louis, MO.
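A sketch of the pure propagation delay at a few of the LEO altitudes being discussed (straight-overhead geometry; real slant paths, inter-satellite hops, and processing all add more):

```python
C = 299_792.458  # km/s, speed of light

for alt_km in (550, 1150, 2000):
    one_way = alt_km / C * 1000  # ms, satellite directly overhead
    # user -> sat -> gateway -> sat -> user: four vertical-ish legs
    print(f"{alt_km:4d} km: {one_way:.1f} ms one way, "
          f"{4 * one_way:.1f} ms minimum RTT")
```

At 550 km the floor is under 2 ms per leg, so even a four-leg round trip through a ground gateway stays in the single digits, consistent with the 25-35 ms end-to-end estimates once routing overhead is included.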


Perhaps their true market will be for industrial IOT devices that need to connect from all over the world.

If I'm selling large HVAC units with cloud monitoring functionality, it'd be attractive to ship units that are pre-connected to SpaceX's satellite network. I wouldn't have to worry about setting up agreements with multiple cellular providers or having to depend on site-provided Wifi/ethernet.

The point is, those devices could be very lucrative for SpaceX without requiring as much bandwidth (no need to stream Netflix).


There is definitely a market for this. Orbcomm already does this for a lot of machine to machine communication. However, their data rates are quite slow on the order of kbps.


Same with Iridium Certus/Next - their devices are rumoured to be tiny chips (even with current constellation the hotspot is just slightly larger than modern 4G ones).

You don't really need much bandwidth for most IoT apps.


Presumably those gigabit speeds are shared by the thousands/millions of people on your node. Wireless is a shared medium.


You're conflating bandwidth and latency.

Not much needs both high bandwidth and low latency.

The latency has the potential to be better than the current internet backbone when going intercontinental due to straighter paths, but it'll be higher when going to your local AWS, probably.

Bandwidth is anyone's guess.


The amount of bandwidth is based on how much spectrum you have (see the Shannon-Hartley theorem). I am not sure how much they have. Also, I am not sure they will have enough power to provide in-building service.


> Also, I am not sure they will have enough power to provide in building service.

The issue for that is the wavelengths they are using. Their properties basically completely rule out any kind of in-building service, there needs to be a clear uninterrupted path between the receiver and the transmitter. (But they penetrate water well, so should not have a problem with clouds.)


FCC-18-38A1.pdf says 10.7-12.7 GHz; that's a lot of spectrum. Depending on the SINR they get, which depends on power, antenna sizes, etc., you could get some pretty good speeds, probably above 100 Mbps. But at those frequencies you're not likely to have any in-car or in-building coverage. You can penetrate a building even at higher frequencies if you have high enough power levels, but not from a satellite. So to use it you will have to install an antenna on the roof of your house.
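As a rough illustration of what 2 GHz of spectrum could support, here's the Shannon-Hartley upper bound at a few assumed SNR values (hypothetical numbers; real per-user throughput is far lower since beams are shared and link budgets are worse than the bound):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr)

bw = 2e9  # 2 GHz, the 10.7-12.7 GHz allocation mentioned above
for snr_db in (0, 10, 20):
    c = shannon_capacity_bps(bw, snr_db)
    print(f"SNR {snr_db:2d} dB -> {c / 1e9:.1f} Gbit/s upper bound")
```

Even at a modest 10 dB SNR the theoretical ceiling is several Gbit/s per beam, so the 100 Mbps-plus guess above is about spectrum sharing and link budget, not an information-theory limit.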


I was actually just curious about bandwidth, I've only ever used satellite internet through the context of being on an airplane, and was curious if satellite internet was good at supporting things like netflix.


Yes, there is plenty of bandwidth to watch Netflix on satellite.


Over long distances, their latency is going to be significantly better, as the speed of light in fiber is over 30% lower than in vacuum.


There were recently two simulations done on possible latencies based on the FCC filings. You can find the video simulations in the links at the bottom of this post.

You can see international links that have latencies lower than the existing internet by a significant margin.

Unfortunately neither of those simulations take into account that the first iteration has been changed to only use radio and no laser links.

[0]: https://www.youtube.com/watch?v=AdKNCBrkZQ4

[1]: https://www.youtube.com/watch?v=QEIUdMiColU


The first video mentions at 6m55s[1] that as part of the third phase they're deploying satellites in high inclination orbits, among other reasons to provide coverage for Alaska, since this is an FCC requirement.

I wonder if it's exclusively because of that reason, and if so that the reason international customers will enjoy coverage at high northern and southern latitudes is because of the accident of history that the US ended up acquiring Alaska, and that there's some FCC regulation that says you need to provide coverage for all 50 states.

1. https://www.youtube.com/watch?v=AdKNCBrkZQ4#t=6m55s


This might be true of geostationary satellites, but not of LEO satellites, which are close enough to even consider video or voice conferencing, albeit with latency and jitter worse than Bluetooth.


This scares me a bit. The idea of having LEO internet is great and all, but the idea of quadrupling the number of functioning satellites, all operating in different orbits but at a similar altitude seems incredibly dangerous to me.

The article mentions the movie Gravity, which is a bit unrealistic in that it portrays multiple large bodies all orbiting at about the same altitude (not the case in reality today). But that changes with this web of satellites. If a chain reaction of collisions does occur, it would create a field of tiny, fast, deadly debris all orbiting in similar orbital "planes", pretty much blanketing the planet. Wouldn't this cause a large problem for anything attempting to reach orbit? What am I missing here?

EDIT: A lot of replies here mentioning the fact that LEO spacecraft decay more quickly than higher orbits. Please note that not all LEO orbits are low enough to guarantee a quick decay without powered retrograde thrust. Stuff can hang up there in LEO a long, long time depending on the actual altitude.


Well, since it's in LEO, and since destroyed or damaged sats won't be able to maintain their orbits, the debris orbits will be decaying from the moment of the incident and should re-enter within a matter of months. I'm sure the scenario will be simulated by numerous people/orgs by the time this project is in full swing.


This is not true of the orbit that the SpaceX satellites are going into. LEO is a bit of a general term that covers altitudes anywhere from 100 km to 2,000 km. While it is true that something at 400 km altitude will deorbit in a few months, these will be orbiting at 1,100-1,300 km. At that altitude it will take hundreds of years for them to deorbit; drag falls off roughly exponentially with altitude. The FCC filing says that the SpaceX satellites will keep enough fuel in reserve to deorbit themselves once they reach the end of their usable mission life.

This was a big concern for NASA though. Their typical guidelines were that commercial satellites have to have >90% reliability to deorbit the satellite. But they want to increase this for the SpaceX satellites because if 10% (~400) of the satellites end up failing then they are effectively up there forever.


I believe you're thinking about their initial application. They've updated it [0] to an altitude of 550km, so atmospheric drag should still de-orbit broken satellites fairly promptly.

[0] - <https://fcc.report/IBFS/SAT-MOD-20181108-00083>


Thank you, I was thinking about their initial application. Yes, de-orbit times from 550 km are on the order of years to possibly tens of years. The requirement from the FCC is to deorbit your satellite within 25 years after the end of mission.

Edit: Their most recent application only moved the Phase 1 satellites into the lower orbit. In later phases they are still planning on having more than 1,000 satellites at the higher altitude.


> Yes, de-orbit times from 550 km are on the order of years to possibly tens of years.

So it wouldn't stop progress for a century, but might it put a damper on things on the same scale as the Great Depression?


SpaceX recently updated their plans to put most satellites between 300 and 400km. That ensures a quick deorbit of the satellites and debris and cuts down the ping times.


Interesting. That does make me feel a bit more secure about the whole thing, but it does raise the question of how much they will need to invest in orbital upkeep/replacement of decayed spacecraft. I can see this getting expensive quick.


Definitely expensive, but it is likely still much less expensive than putting all of the satellites into the higher orbit. It takes a lot more fuel to get the satellites into the higher orbit, while still keeping enough fuel in reserve to bring them back down and de-orbit at end of life.

Plus, in the ~1100 km orbit you are getting into the lower Van Allen belt, where radiation will degrade electronics much more quickly. This can be mitigated with radiation-resistant parts and more shielding, but it adds a lot more cost.


There are new technologies like air breathing ion engines and electrodynamic tethers, which have the potential to sustain low orbit satellites for years.


They filed that because it's believed they have mechanical problems on the satellites, not because they wanted to:

https://mobile.twitter.com/TMFAssociates/status/106076080100...


Isn't the lower orbit even better for them in terms of latency though? So long as the orbit doesn't decay faster than the planned 5 year life span of the sats, it's actually a win.


The lower orbit means each satellite's coverage footprint on Earth is smaller, so they need more satellites to deliver the same capacity in a given area. It's better in terms of latency, but it's not a dramatic difference.


Very good point, thanks for responding.


Not to mention a struck satellite (probably) no longer has the capability to deorbit, at least not in a controlled manner.


Little pieces of smashed satellites are not obligated by the rules of physics to decay.

An example of how long shit can stay in space: a Saturn V third stage is still out there, slowly orbiting: https://en.wikipedia.org/wiki/J002E3


I'd think that only a fraction of the debris would head down into the influence of atmospheric drag. The rest would fly off at angles that keep it parallel to the original orbit or carry it higher (however short-lived that may be). These satellites, while in LEO, I would assume are still high enough that there is a decent amount of room before significant drag comes into play; otherwise it would not be economical to have them in orbit, as they would decay and need replacement too quickly to justify the cost.


They're already at an altitude that has enough drag. The satellites will deorbit in months to years without periodic boosts. Debris will come down faster, since fragments have more surface area relative to their mass. And a single impulse always results in an orbit that intersects the original orbit at the location of the impulse, so it's impossible for a collision to raise debris entirely above that altitude.
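That last point can be sketched with the vis-viva equation. A quick illustration (assumed values, purely for the example: a debris fragment picking up a 100 m/s prograde kick from a collision in a 550 km circular orbit):

```python
import math

MU = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6371.0    # km, mean Earth radius

def apogee_after_prograde_kick(alt_km, dv_km_s):
    """New apogee altitude after an instantaneous prograde delta-v
    in a circular orbit (vis-viva); perigee stays pinned at alt_km."""
    r = R_EARTH + alt_km
    v = math.sqrt(MU / r) + dv_km_s   # speed right after the kick
    a = 1 / (2 / r - v**2 / MU)       # semi-major axis from vis-viva
    return 2 * a - r - R_EARTH        # apogee altitude above the surface

print(f"new apogee: {apogee_after_prograde_kick(550, 0.1):.0f} km")
```

Even a 100 m/s kick only raises the apogee a few hundred kilometres, while the perigee stays at the collision altitude, so the fragment keeps dipping back into the denser atmosphere on every orbit.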


SpaceX gave their proposals for this (very valid) concern here:

https://licensing.fcc.gov/myibfs/download.do?attachment_key=...


Think how big the Earth is, and how hard it would be to find a Xerox machine in the Amazon or the middle of the Atlantic if someone stuck it there. Space is much, much bigger.


> How hard it would be to find a Xerox machine in the Amazon or the middle of the Atlantic if someone stuck it there.

Part of the problem, though, is that said Xerox machine is traveling at 17,000 miles per hour.

We've already had a collision (https://en.wikipedia.org/wiki/2009_satellite_collision) between two of the ~3-4k satellites in orbit. SpaceX's constellation is planned to be double that number.

That single collision turned two satellites into two thousand high-velocity projectiles, too.


> That single collision turned two satellites into two thousand high-velocity projectiles, too.

Just to add to that point: https://en.wikipedia.org/wiki/Envisat#Space_safety

A single 10 kg chunk of debris could result in over 8 tonnes of additional debris, triggering a rather nasty chain reaction.


I'm no stranger to the vast physical space at play here (amateur astronomer, here). I'm aware there is a LOT of room for this stuff to move around, but there is still a threshold, however high, to where the amount of debris would become dangerously numerous.


And you think that we can only fit <10k objects in an area larger than the surface of the earth?


<10K isn't the problem. It's when that 10K becomes much, much larger because of an unforeseen collision with an unrelated spacecraft/piece of space junk. It only takes one big collision to utterly litter that particular orbit, and thousands of slightly different orbits caused by the new trajectories of the various bits of the original spacecraft.


So what's your solution? Never put a bunch of satellites in space?


He's not necessarily saying he knows a solution, just that such a problem should be considered. Satellites do collide - https://en.wikipedia.org/wiki/2009_satellite_collision

The idea of satellite collisions and space junk is an actual concern https://en.wikipedia.org/wiki/Kessler_syndrome .

Sure, a few washing machines in space aren't a big concern. Now imagine a few collide: hundreds of bits and bolts flying around Earth on unknown trajectories at thousands of miles per hour.


There are a variety of potential mitigations. Legal requirements that satellites deorbit or park in a safe orbit before going dark, technology like https://en.wikipedia.org/wiki/E.Deorbit to capture and deorbit large chunks, better tracking of orbital debris, etc.


At any one time there are usually <10k airplanes in the sky, and they still hit each other sometimes, despite going 23x slower than satellites.


The vast majority of those collisions occur in areas where planes are forced to congregate (around airports, etc).


Isn't there high traffic where polar orbits intersect? See: https://en.wikipedia.org/wiki/2009_satellite_collision


To be fair, a lot of that is because of the fact that planes follow paths that are designed to keep collisions from happening, but aren't always adhered to.


There has been a collision between an Iridium satellite and another one before. It's a valid concern.


If they're travelling 7 km a second, maybe??


Now imagine that Xerox machine is constantly moving at 17,000 mph. How interested would you be in figuring out where it's going?


I think this issue is over played a bit sometimes.

1. Satellites can kill themselves by dropping back into the atmosphere using their own propulsion systems.

2. Even if they fail, most of these are low enough that the time until they fall is reasonably short.

3. The objects are known and tracked so even if they are out of control, the others will avoid them.

4. We are moving into a world with cheap access to space and lots of satellites. We need servicing satellites anyway, and multiple are in development. This could be a legal requirement, but even without one it will develop, because SpaceX doesn't want debris flying around in 'their' plane.


This is a well informed answer to your question:

https://www.quora.com/Why-dont-satellites-crash-into-each-ot...


Thank you for the article, but it really doesn't answer my question. That thread's answers assume the current situation we have around the Earth. We have no precedent for this many satellites orbiting at the same altitude but in different planes and phases (which negates the safeguard of everything moving at the same speed; matched speed doesn't help if the paths can intersect). The satellites would all need to be moving at both the same speed AND in the same direction to eliminate the collision risk, which would of course be a single circle, useless for this application.


First, check out the video explaining the orbits: https://www.youtube.com/watch?feature=youtu.be&v=AdKNCBrkZQ4...

Doing some quick math, the shell in which they'll be orbiting (at roughly 1,150 km altitude) has a surface area of about 710 million km². With ~7,000 satellites, an even distribution would give each one roughly 100,000 km² to itself, about 320 km between neighbours. All the satellites have station-keeping thrusters to hold their orbits against drag. Keeping them from running into each other is probably pretty simple compared to the other challenges they'll need to solve.
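Redoing that shell arithmetic explicitly (assumed values: a ~1,150 km shell and ~7,000 satellites; order-of-magnitude only):

```python
import math

R_EARTH = 6371.0   # km, mean Earth radius
ALT = 1150.0       # km, assumed shell altitude
N_SATS = 7000

r = R_EARTH + ALT
shell_area = 4 * math.pi * r ** 2     # km^2 of the orbital shell
per_sat = shell_area / N_SATS         # km^2 of shell per satellite
spacing = math.sqrt(per_sat)          # rough tile edge if evenly spread

print(f"shell area:    {shell_area:,.0f} km^2")
print(f"per satellite: {per_sat:,.0f} km^2 (~{spacing:.0f} km apart)")
```

That works out to roughly 100,000 km² of shell per satellite, on the order of 300 km between neighbours, which is why nominal station-keeping is the easy part; the hard part is debris on crossing orbits.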


Sure, but this assumes nothing ever collides with one and causes debris to be introduced to the web. The web itself will clearly be ok as long as it remains intact, but if something does cause one of these to RUD catastrophically, we now have debris following a similar, but just different enough orbit to be dangerous.


You won't find an answer to your question because you don't seem willing to drop your alarmist attitude.

There is just far too much space for 100k objects, let alone 10k, to ever run the risk of collision. These satellites are all meant to be in line of sight of each other. They will communicate with optics instead of, or in conjunction with, radio.

You fear unforeseen space particles destroying one and causing a chain reaction. Why don't you hold that same fear for existing satellites, which typically operate at orbits with far lower decay rates? The orbit for Starlink is very low; small particles would not sustain their energy long enough to be a risk factor. The Starlink sats maintain orbit with fuel, and everything else without fuel in that orbit will fall. The chance that a particle will end up in the same orbit, on the same plane, at the same time is astronomically low.

Your comments just echo the delusions of people who, in their brilliance, think they've found problems others have not foreseen. If SpaceX is really just full of amateurs, you ought to submit your findings.


> There is just far too much space for 100k objects, let alone 10k, to ever run the risk of collision.

No, there isn't. Satellites can and do collide despite the vast amount of space in a given orbit. It is improbable, but likely enough to warrant at least some concern.

> Why don't you hold that same fear for existing satellites, which typically operate at orbits with far lower decay rates?

I do, as does the FCC. This is evident in the fact that they now require that any body wishing to launch a spacecraft guarantee the craft can be put into a graveyard orbit.

> Your comments just echo the delusions of people who, in their brilliance, think they've found problems others have not foreseen. If SpaceX is really just full of amateurs, you ought to submit your findings.

I'm genuinely sorry this post made you feel this way. If I already had the brilliance to answer this question as I'm sure many bright minds at SpaceX do, I would not have asked the question and rebutted the responses to begin with.


> Satellites can and do collide despite the vast amount of space in a given orbit.

Excluding planned collisions, or collisions during failed dockings, I found four instances of collisions between satellites and space debris. Ever. Now, a couple of things:

* Yes this larger constellation will increase those odds in the future.

* No, a catastrophic doomsday scenario did not occur when those happened.

* SpaceX will have complete control of these satellites through their operational lifespan (excluding collisions or malfunctions).

* Orbital decay for debris at the altitude they're operating at is ~10 years.

Also, have to ask... is all this concern a result of reading Seveneves? :)


I'm afraid I haven't heard of Seveneves before today, although it does look like an interesting read.

My concern is just from a general interest in spaceflight. More of a wish to protect a resource rather than a Chicken Little fear of the sky literally falling out.


Satellites don't just cross paths by magic. If their orbits don't intersect, they can't collide. Satellites don't collide with each other; it has never happened.

Satellites do collide with debris, but that's a different issue. Furthermore, they don't need a guarantee that they can be put into a graveyard orbit: the proposed satellites (and any debris, if one exploded for whatever reason) will effectively be in a disposal orbit from day one; once they stop being boosted up, they deorbit.


> Satellites don't collide with each other, it has never happened.

Actually, they do, and it has indeed happened before (https://en.wikipedia.org/wiki/2009_satellite_collision). Far from a Kessler syndrome scenario, but certainly something that should be examined.


I'm guessing LEO satellites have much shorter orbital decay times, so they're more likely to burn up in the atmosphere if something goes wrong.


These types of satellites typically sag out of orbit and burn up after a few years


This is known as Kessler syndrome.

It could make orbital planes unusable, but is generally thought not to pose a substantial problem for launching through them, so worst case we just have to start putting satellites a bit higher.


According to Bloomberg, since there are currently fewer than 2,000 operating satellites, SpaceX’s new additions will dominate space.

This number surprised me, much lower than I expected. Looked it up and I'm seeing varying numbers, but generally in the 1k-4k ballpark.


This is for operating satellites; the Air Force currently tracks ~21,000 pieces of debris (roughly anything larger than a baseball) in orbit around the Earth.


SpaceX satellites will be much, much smaller though.


Do they need more than US approval to launch satellites into international space?


This FCC approval is to be able to talk between orbit and the ground in the US. SpaceX needs approval from other countries for the same, in order to offer service there.

Launch approvals are a separate thing.


Per the Outer Space Treaty, no. Under it, space is free for the use and exploration of all countries. The US is liable for SpaceX in-space fuckups, though - if they crash a satellite into something, the US is responsible.


Good question. I doubt the Russians or Chinese consult the FCC for launches.


goodbye, great firewall of china...


Why? China can just force SpaceX to turn off over China. Unless this is part of a political 'Radio Free Europe'-style operation, China will control this as much as everything else.


> Why? China can just force SpaceX to turn off over China.

How? By shooting down satellites?


As others have pointed out regulations. I believe Musk also really wants Tesla to succeed there (it is a very large market for them) and I have no doubt the Chinese government would punish another company of Musk's if they wanted to.


Regulation. Do you think SpaceX would operate an illegal network in a foreign country? Do you not think China has the pull to tell the US to keep SpaceX in line?

They can ban the sale of terminals. They can scan for the transmissions.


One of the consequences of the Sputnik launch was that it basically enshrined the principle that overflying another country with an active satellite was not an illegal act. (That is, the US government decided not to protest, because it meant they wouldn't have to apologize for their own future spy satellites.) I doubt China wants to break this precedent.


No. There is a difference between flying over and operating communication satellites.


As far as international law is concerned, I doubt it. Sputnik was transmitting the entire time it overflew the US.


It's about offering a service in China, not about the technical act of communicating.


This thread also applies to Russia, where complaints about the coming constellation have already been voiced in the government.

In both cases I assume the issue at stake is freedom of communications. I'd hope to have both the technical means to communicate (and hide those communications) and enough support on the US side not to pressure SpaceX into denying service worldwide.

We'll see how it goes.


Yes, they can do that [0].

However, the easier option is to just scan for the right EM waves on the ground and go after the people using the illegal network.

[0]: https://en.wikipedia.org/wiki/2007_Chinese_anti-satellite_mi...


> Yes, they can do that [0].

But it's a crappy enough move, it's not certain they would.

> However, the easier option is to just scan for the right EM waves on the ground and go after the people using the illegal network.

So someone just has to figure out a way to hide the uplink.


"not certain they would."

This is a country that has the Great Firewall. One that routinely oppresses its citizens. One that gives its citizens a social rating that can prevent them from traveling, etc. If there's one thing I'm sure of, it's that the CCP will find a way to stop access to this. If it takes hacking into SpaceX, shooting down 12K satellites, etc., it will. Maintaining power is what the CCP does.

"hide the uplink"

Kind of hard to hide EM transmitters that aren't going to be designed for special ops teams...


This is probably a stupid thing to ask, but: Could this setup be hacked to provide a loose form of GPS?


Not stupid. Sort of. The term you're looking for is multilateration (edit: I'm twisting the term a bit for my own use here; it's not exactly multilateration, but it's not exactly triangulation either). Without looking at the RF specs in the FCC docs or the satellite hardware part list, I'm not sure if you could do it on the receiver, but it would be straightforward for the constellation visible to the receiver to triangulate the receiver's position and provide it over a maintenance channel to the terminal.
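For anyone curious, the core of that idea can be sketched as an ordinary nonlinear least-squares fit: given known satellite positions and measured ranges (from signal delays), solve for the point whose distances best match. This is a toy sketch with made-up positions, ignoring the receiver clock bias that a real GNSS solution also has to estimate:

```python
import numpy as np

def multilaterate(sat_positions, measured_ranges, guess, iters=20):
    """Gauss-Newton least squares: find the point whose distances to the
    known satellite positions best match the measured ranges (km)."""
    sats = np.asarray(sat_positions, dtype=float)
    rho = np.asarray(measured_ranges, dtype=float)
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diff = x - sats                      # vectors from each satellite to estimate
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges
        J = diff / dist[:, None]             # Jacobian of range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, rho - dist, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

# Hypothetical geometry: four satellites at ~1,150 km altitude (Earth radius ~6,371 km)
sats = [(7521, 0, 0), (0, 7521, 0), (0, 0, 7521), (5317, 5317, 0)]
truth = np.array([6371.0, 0.0, 0.0])         # receiver on the surface
measured = [np.linalg.norm(np.array(s) - truth) for s in sats]
est = multilaterate(sats, measured, guess=(6000.0, 100.0, 100.0))
```

A real system would add the clock offset as a fourth unknown and feed in pseudoranges rather than true ranges, but the geometry is the same.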


The answer is probably yes, but it would be a side effect, since data transmission will be the primary purpose. Actually, it may turn out to be a precondition for getting the best throughput: knowing when and where the sats are, and when your transmission window is, to within a tight tolerance will be necessary, and that's exactly what you need for GPS.


Thanks! I didn't know the term ( https://en.wikipedia.org/wiki/Multilateration ).

I was thinking of WW2 radio pinging when I asked. I'm imagining a software simulation with real-time updates of sat positions (I assume more than three in any useful domain) that checks delay/ping against where the sats actually are.


You need more than just any satellites to provide GPS. Each of them needs a very precise atomic clock (among other things I'm sure I'm not aware of).


You can build GPS (well, a GNSS would be more accurate) system without atomic clocks.

One way to do so requires user equipment that can transmit back to the satellites, but that will already be the case for SpaceX's customers.

You can also use Doppler measurements to get your location, although this tends to be less accurate, especially if you're moving quickly


An interesting use case for a "backup" GPS is to rely on GPS itself for the accuracy you need, but look at the other constellations to verify whether GPS is being spoofed. That way you can protect yourself against the GPS misdirection attacks (e.g. relaying the signal with a carefully calculated delay) that have been performed multiple times now; you'd know that you're getting a fake GPS signal and can't rely on it.


> You can also use Doppler measurements to get your location, although this tends to be less accurate, especially if you're moving quickly

There would still be a market for lower-cost, lower-resolution location.


But GPS is cheap enough and widely enough deployed. Devices like laptops that don't usually have GPS are usually used indoors. I don't see the market.


I wonder if the resolution would be better than cellular and/or wifi triangulation. Obviously those won't work in remote areas..


Yes. Iridium (the closest analogue) can already do this. It's not very good, though. Worse than cell tower triangulation, I think.


Why do they need approval for this? It's not like Space is owned by anyone?


Why do owners of airplanes need approval to fly? It's not like the air is owned by anyone?

Just because someone doesn't own something doesn't mean you can launch stuff up there with no regard for other things already launched or about to be launched. Things can and do collide, and the ramifications are actually much more dangerous when you don't have an atmosphere to pull the pieces down in the event of a collision.


The radio spectrum is limited, and belongs to the people. Careful regulation ensures it remains usable. Likewise, regulating orbits is important. For example, to limit potential damage from space debris.


For the communications, not the launch.


Political power comes from the barrel of a gun, and SpaceX is well within many countries' sights.

e.g. that's the law.


You mean "i.e."


But the path into space from the US is.


Governments are responsible for stuff launched into space from their territory.


When nobody owns something, we collectively own it. This means we need to make sure that one bad or reckless actor does not ruin it for everyone else. [1]

[1] https://en.wikipedia.org/wiki/Kessler_syndrome
