But then, I’m not the only one;
"There is an unlimited amount of funding that the company could probably access globally in private markets," Hilmer said, adding that he has personally met many of "a diverse group" interested in SpaceX. "Everywhere I travel around the world, investors of all types — individuals, family offices, hedge funds, sovereign wealth funds or private equity — want to get into SpaceX," Hilmer said. "It's almost all investors I talk to."
Of course at the same time I’m happy they aren’t public. The market couldn’t handle the time horizons that SpaceX operates under, nor the mission statement that drives them.
 - https://www.cnbc.com/2018/04/13/equidate-spacex-27-billion-v...
SpaceX has no plans right now to colonize Mars. In fact, they keep saying, “Look! We want other companies or nations to step up and plan for how to establish a colony. We’re only going to do it if we absolutely have no other choice.”
SpaceX is truly establishing a financially viable solar transport system that may eventually extend beyond our solar system. This is analogous to the birth of the U.S. railroad system. We don’t yet know what we don’t know is possible.
SpaceX is poised to own space transport outright. That’s major!
Now what does Mars have to do with this? It’s just a helpful organizing goal. People love a good milestone. Something to reach for with meaning. I mean if you’ve been following SpaceX’s 15-year history you’d see that they’re nothing if not methodical in their planning and attainment of milestones.
SpaceX is one of the most well-run companies in the world. And right now they have the best prices, the best technology, the best pace, the best outlook, the best...
They have no competition. Literally. I’d invest the entirety of my lifetime earnings in SpaceX if I could. We’re witnessing historic achievements in the making.
SpaceX has launched many orbital missions.
Blue Origin has launched none.
SpaceX has two functional, tested, orbital spacecraft in production.
Blue Origin has a sub-orbital pod with goldfishbowl windows that's not quite ready yet.
SpaceX has successfully partnered with NASA for commercial cargo missions to the ISS.
Blue Origin has not.
SpaceX has landed more than a dozen rockets post-flight.
Blue Origin has landed < 10 sub-orbital rockets.
SpaceX has designs for rockets and spacecraft that might plausibly target inter-planetary travel.
Blue Origin has not gotten to orbit yet.
Don't get me wrong; I want Blue Origin to succeed. However, the slope of their progress graph has been much shallower than SpaceX's. I don't see them ever catching or overtaking SpaceX unless something drastically unforeseen happens.
The SpaceX rocket has certain trade-offs. Basically, the upper stage is not that powerful compared to various other rockets'. This is usually not too bad, because they go for the cheapest option. Blue Origin might have a huge advantage there, though.
Blue Origin is not in need of money, which drastically changes things. It seems they went first for nailing a super-powerful engine, and are now developing something close to the SpaceX BFR called New Glenn (https://en.wikipedia.org/wiki/New_Glenn).
Note that in various cases people compare expendable versions with reusable versions. Further, SpaceX is often better for lower orbits.
I suggest following /r/spacex on reddit. The above is what I learned from reading discussions there.
I would rephrase this as: what need is there for markets and money?
It's not about personal gain. It's about how to manage the resource pools of an entire civilization.
What you are describing is the perfectly planned economy. Empirical experience says national economies cannot be efficiently planned. The USSR did not have the computational capability we have now, but we don't know whether there is any way to coordinate a national (let alone a planetary or solar) economy by any means other than markets.
So, the question to me is: if a solar economy can be planned, then there is no need for markets and money; but if planning is computationally intractable, or if market organization is the cheapest and best way to do it, then we are stuck with markets and money up to the heat death of the universe.
Interestingly, if from an algorithmic point of view markets and money are the best way to organize a civilization's resources, then it's likely that alien civilizations (if they exist) will have market economies too.
I would hope the question of resource allocation is tractable and that the future of mankind is more like Iain Banks's Culture than "robber barons of the Oort cloud".
It would also lead to "undercutting" on technology. Look at Bezos: without such a powerful Mars "hyper-drive", the best he can do is be an engine supplier to ULA. A hugely respectable achievement on its own, no doubt, yet nothing close to SpaceX, which has already genuinely advanced our civilization and is on track to advance it even further.
In particular, I think the Mars-mission-based point of view lets them filter for the best long-term architectures, like the modular F9/Heavy construction, which wouldn't necessarily be the best in the short term of just servicing Earth satellites.
This is the stagnation of paying your scientists poorly.
One of the reasons for so many microbreweries in San Diego was paying microbiology types so poorly. More than one microbrewer here has said: "Well, I was getting paid kinda crappy in biology. So, I started a microbrewery. Now I get paid about the same, don't have to put up with managers, and get to drink beer."
All the same: if there are no jobs for scientists, salaries drop, and the oversupply isn't going away anytime soon. So what's this STEM shortage they keep talking about in Washington?
It basically takes a crazy, put-it-all-on-the-line billionaire to assemble these technologies, but why doesn't the economic system do this regularly? Musk is great, but I think his skill, much like Jobs's, is excellent integration of a number of existing technological possibilities. Why is Musk, as an individual, one of the few who can do that at scale, versus industries that in actuality have more resources than Musk?
Oops, am I not supposed to admit that?
Depending on the cost of their satellites, it might make sense to use launching them as a means of testing the upper limits of their rockets' reusability. That is, they might not want to risk a customer payload on a rocket that has made 10 launches. But if they are going to build 7,518 satellites, the marginal cost per satellite is likely to be rather low, so it might be worth pushing the risk threshold to stretch the number of trips per rocket. It could also be a good opportunity to clear out their inventory of pre-Block 5 Falcon 9s.
Rather, it's about using spare capacity freed up by the long-foreseen slowdown in the geostationary launch business that has historically been SpaceX's (and everyone else's) bread and butter, without crashing market launch prices.
And indeed, there has already been an accidental satellite-satellite collision (not just satellite-debris collision).
The Iridium constellation has ~80 satellites, and one passes within 5 km of another satellite about 50 times per day.
I don't mind the latency of today's fiber or cable Internet, and if it's a problem, it can be solved by moving servers closer to users.
That is, what if, instead of bypassing the fiber backbone, Starlink just tried to connect everyone to it? I'm assuming that sending a terabit of data between two major cities, say Chennai and NYC, is going to be cheaper via undersea fiber.
This could deliver massively better internet in less populated settings (rural areas, shipping, etc.), but I feel people are perhaps getting the wrong idea and thinking this could supersede terrestrial networks in urban/suburban areas, in which case there will be a lot of disappointed people.
These are statistics from Verizon showing 90 ms RTT for trans-Atlantic connections. Trans-Pacific is > 100 ms. 80 ms seems highly competitive to me in this context.
Say what you will about Musk, but the guy is truly ambitious and willing to shoot for the stars (or Mars, at the moment).
It's a shocking contrast to be in or near a city and have broadband speeds, and then be just a few dozen miles outside one and have... literally nothing.
I just loaned my Iridium phone to a friend who was going to the jungle, and although he was able to make the data connection work, even doing email at 2400 baud(!) proved useless. Inmarsat is faster, but vastly more expensive.
Outside of those two, there is no global solution.
You can communicate effectively over 2400 baud if (and only if) you use protocols and services designed for low bandwidth; luckily we have those protocols built and tested, even if we have mostly abandoned some of them.
GPG-encrypted email over plain POP would work; and you can have automatic "remailers" on both sides of the link that handle the encryption/decryption once the communications are done over a "normal" connection.
With this slimming approach you also wouldn't use these grotesque "... and here's a portrait of my dog" type certificates we sometimes see, that may be several kilobytes in size, nor use the lengthy RSA keys. You'd do the minimum possible certificate for a single name and an EC key.
So it'll probably cost ~1kB on first connection and then only a few hundred bytes on subsequent connections, without any special components, this all just works with generic software you already have if configured appropriately.
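To put that "~1 kB on first connection" in perspective, here's a quick back-of-envelope calculation. This is a sketch; the 8N1 framing assumption (10 bits on the wire per payload byte, so ~240 bytes/s at 2400 baud) is mine, not from the thread:

```python
# Rough transfer-time estimate for a slim TLS handshake over a 2400 baud modem.
# Assumes 8N1 serial framing: 10 bits on the wire per payload byte.
def transfer_seconds(payload_bytes: int, baud: int = 2400, bits_per_byte: int = 10) -> float:
    bytes_per_second = baud / bits_per_byte  # 240 B/s at 2400 baud
    return payload_bytes / bytes_per_second

first_connect = transfer_seconds(1024)  # ~1 kB slim first handshake
resumed = transfer_seconds(300)         # a few hundred bytes on resumption

print(f"first connection: ~{first_connect:.1f} s")
print(f"resumed session:  ~{resumed:.1f} s")
```

So a slim handshake costs roughly four seconds up front and about a second on resumption, which is painful but entirely usable for store-and-forward email.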
You can shave a little more off if you agree to custom configure everything since the proof-by-certificate isn't mandatory in TLS per se, you can do everything with shared keys if you agree out of band what those keys are. This is intended for IoT-type applications.
While you might feel that it's deprecated (says who?) it's certainly in use, even by those "security people".
In the late 90's I did a lot of my grad school coding, and implemented a website for a state agency, through 9600 baud modems. Emacs was great in those days, precisely because it's a Lisp machine OS masquerading as a text editor. I used to read USENET news, read my email, do my work, and shuffle between multiple remotely logged-in shells, all through Emacs.
I wonder if it would make sense nowadays to send world-state updates to clients at 1000 Hz, or if it's even feasible?
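A rough feasibility check on that idea. This is a sketch, and the per-update payload sizes are made-up assumptions purely for illustration:

```python
# Bandwidth needed to stream fixed-size world-state updates at a given tick rate.
def required_kbps(update_bytes: int, hz: int) -> float:
    """Raw payload bandwidth in kbit/s, ignoring protocol overhead."""
    return update_bytes * hz * 8 / 1000

# A hypothetical 100-byte delta at 1000 Hz is ~800 kbit/s per client
# before any packet overhead; at a typical 60 Hz game tick it's ~48 kbit/s.
print(required_kbps(100, 1000))
print(required_kbps(100, 60))
```

So 1000 Hz is feasible bandwidth-wise on broadband, but per-packet overhead (headers alone can exceed a small delta) and receive-side processing are likely the real constraints.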
p.s. the story of Iridium and how they recovered is quite amazing.
The V band (or optical links) will also likely be used for the inter-satellite communication.
You do typically need different hardware to use the different frequencies. With some of the more advanced software-defined radios, you could use both of them at the same baseband within the same radio, but you will still need some sort of frequency conversion. You will likely need different antennas for each band as well, and to get an efficient system you also want to add filters for each band.
We already have a ton of GEO sats which can do this kind of communication, but they are super expensive and have limited bandwidth.
These satellites will actually have a ton of limitations on how much data they can send around and how they'll have to balance out their signals. GEO sats are easier to point to because, well, they don't move.
But these are going to be moving and changing all the time, so you'll have to connect to multiple satellites every day. I'm spitballing here, but each will probably be overhead for about 10 minutes. Think about switching your router every 10 minutes. Or you get a rainy day and your signal clarity goes down. Or you are over the equator in a band that is used strictly for GEO.
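The "overhead for 10 minutes" guess is actually close for an ideal pass. A sketch of the geometry, assuming a circular orbit and a zero-elevation horizon (the altitudes are the two figures discussed in the thread):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m

def max_pass_minutes(altitude_m: float) -> float:
    """Longest possible horizon-to-horizon pass for a circular orbit
    going directly over the observer (0-degree elevation mask)."""
    a = R_EARTH + altitude_m
    period = 2 * math.pi * math.sqrt(a**3 / MU)   # orbital period, s
    half_angle = math.acos(R_EARTH / a)           # Earth-central visibility half-angle
    return period * (2 * half_angle) / (2 * math.pi) / 60

print(f"550 km:  ~{max_pass_minutes(550e3):.1f} min")
print(f"1100 km: ~{max_pass_minutes(1100e3):.1f} min")
```

That gives roughly 12 minutes at 550 km for a perfect overhead pass; real passes use a 25-40 degree elevation mask and are a few minutes at most, so handoffs are even more frequent than the 10-minute guess suggests.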
This is going to be a super cool problem to solve. And I'm sure I don't even understand the half of it.
Edit: Sorry, the router example is pretty bad. It's more like using your phone, but having to specifically aim your antenna at each tower you pass while driving. The complexity is the moving nature of the network and the targeting nature of the antennas. I have no clue whether phone signals are targeted, but I believe they are radial signals, more like a beacon than a laser.
Load balancing these can be a pain as well, because if an antenna gets too much signal it can actually block all signal.
Your cell phone switches towers all the time while you're on the road. My cellphone (republic) switches from WiFi to cellular network mid-call if I'm on it when I leave a building. This is not a new problem.
The hard part here is that you have to target the beam at the satellite. It's not a wide angle beam that's used it's a focused beam. (This might not be true based on what they implement)
But for GEO it's a targeted beam. So this isn't a perfect analogy, and my example was pretty bad, lol.
Damn, I guess we ran out of interesting problems to solve then /s.
Networking technology switching is unrelated to a call, which always uses cellular.
I've been using them for years and it works fine.
Handoff between satellites isn't really that hard. Your phone knows where it is and what birds are overhead (and which are about to appear/disappear) so they can adjust automatically. Routing through the satellite constellation gets kind of hairy, but is also solvable via orbital mapping and some arcane routing algorithms.
The real trick is getting enough link margin with omni antennas to get good bandwidth. As fun as they would look, nobody wants a cell phone with a little cartoon satellite dish on top that whirls around to track birds passing overhead.
What about phased arrays?
Also, why not have a completely different scale of ISPs? How about something the size of a soda vending machine or something the size of a suitcase which contains all the hardware for a very small scale internet service provider?
The last part is where I think the real fun will emerge. Doing everything above, but optimized based on the load of each satellite with a link budget computation involved, will be cool. It may turn out that the optimization isn't needed, but I think that kind of stuff is fun.
Speak for yourself! ;)
Cellular and WiFi AP roaming are solved problems. For SpaceX, this will be rather straightforward as the design specifies fixed ground stations that will handle negotiating/maintaining upstream connection(s).
> Or you get a rainy day and your signal clarity goes down.
That's more likely to be an issue, but having multiple satellites overhead, plus the fact that the constellation will be located in LEO (thus higher signal strength due to proximity) will help mitigate signal attenuation due to clouds/precipitation.
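The proximity advantage is easy to quantify: free-space path loss scales with distance squared, so LEO vs. GEO is a large fixed gain. A sketch; the 550 km altitude and the 12 GHz Ku-band downlink frequency are assumed figures for illustration:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

leo = fspl_db(550e3, 12e9)      # straight-down LEO slant range
geo = fspl_db(35_786e3, 12e9)   # straight-down GEO slant range

print(f"LEO (550 km):   {leo:.1f} dB")
print(f"GEO (35786 km): {geo:.1f} dB")
print(f"LEO advantage:  {geo - leo:.1f} dB")
```

That ~36 dB difference is link margin that can be spent on smaller antennas, higher-order modulation, or riding through rain fade.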
While you may be right regarding the context in which you said this — satellites — as a blanket statement, that's not true: my smartphone loses Internet connectivity when I leave my house, and takes a few seconds before it connects to the cellular data network.
I'm surprised that phones don't establish a backup 4G connection when Wifi signal strength decreases suddenly, but before it disconnects. This has been talked about for years, but for whatever reason it hasn't been deployed or doesn't work well, at least with Android P.
It's currently possible to maintain a single connection (including IP address, open sockets, ip tables, etc.) while roaming between cell towers and WiFi APs. That's not true when switching carriers/providers/technologies, although I agree that would be a great next step for cell phones.
You're right on the signal degradation being significantly lower. But the LEO aspect means that there might be a higher percent of total loss due to rain (and probably adjacent signal interference).
Either way it's a cool problem.
For the client premises equipment it's a hard problem, but entirely solvable. Frequency-hopping radios have been around a long time. Link bonding has been around a long time. Which satellites will be in view, where, and when is knowable. I think the harder part will be the backhaul... in space.
I'm curious if they'll use beam shaping antennas for CPE. Surely they won't ship antennas with moving parts right? Also I'm curious about the power requirements of the CPE.
Another problem will be security. This is a known problem with satellite uplinks: they basically spray everyone's traffic all over God's green Earth, and if you listened in you used to be able to grab lots of data right out of the sky. These days it should all be encrypted, but even then, some of the encryption was easily cracked.
Encryption might be handled by default given the HTTPS nature of the web now, but it'll just make HTTPS even more important going forward. It would also leave a lot of metadata open to sniffing, which I don't think people would want either.
It's my understanding that this constellation is going to be significantly lower than GEO, so that's not a problem.
Also, the dead zones are toward the poles, but those are mitigated in phase two.
Very interesting watch.
Based off the paper:
And high delay, which is the main problem.
Depends on the observer. If you move in a circle around a set of radios (driving around town) either you're stationary and the other radios are spinning or vice versa. Also, many satellites will share the same orbit, they will be stationary as far as each other are concerned.
Starlink and OneWeb are different in that they intend to use a lot of satellites in low orbits to maintain constant coverage. This is technically much harder, not the least because you need thousands of satellites to get reasonably good coverage, but also because the ground station and the satellite transceiver both need to track each other. This was not technically feasible before, but modern AESA antennas can steer their signal without having to move the antenna, and can both transmit and receive multiple simultaneous beams and very rapidly move the beams around when doing time-sharing.
The minimum round-trip time to something very close to you, and also close to its own ground station, will be on the order of 10 ms. However, where the system really shines is long-distance communication. The satellites will pass the signal between each other using lasers, and will get the signal to the other side of the Earth much faster than terrestrial fiber, both because light in vacuum travels substantially faster than light in fibre, and because fibres don't get to follow ideal great-circle paths.
Independent researchers have evaluated the likely latencies of the system, and the results are frankly shocking. For example, today on the existing fiber network, the RTT between London and Singapore is ~160 ms. On Starlink, the RTT should be ~90 ms.
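Those numbers pass a sanity check. A sketch using great-circle distance, light at c in vacuum vs. ~2/3 c in silica fiber; the 1.4x path-stretch factor for real-world fiber routes is my assumption:

```python
import math

C = 299_792.458       # km/s, light in vacuum
C_FIBER = C / 1.47    # km/s, light in silica fiber (refractive index ~1.47)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 6371 * 2 * math.asin(math.sqrt(a))

# London (51.5N, 0.1W) to Singapore (1.35N, 103.8E)
d = great_circle_km(51.5, -0.1, 1.35, 103.8)

rtt_fiber_ideal = 2 * d / C_FIBER * 1000        # ms, perfectly straight fiber
rtt_fiber_real = 2 * d * 1.4 / C_FIBER * 1000   # ms, assumed 1.4x route stretch
rtt_vacuum = 2 * d / C * 1000                   # ms, ideal vacuum path, no hops

print(f"distance:          {d:,.0f} km")
print(f"fiber (ideal):     {rtt_fiber_ideal:.0f} ms RTT")
print(f"fiber (stretched): {rtt_fiber_real:.0f} ms RTT")
print(f"vacuum (ideal):    {rtt_vacuum:.0f} ms RTT")
```

The ideal vacuum figure comes out near 70 ms; the up/down hops and inter-satellite routing are roughly where the remaining margin in the ~90 ms estimate goes, while the stretched-fiber figure lands close to the observed ~160 ms.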
This doesn't check out.
The radius of Earth is ~6,300 km. Assuming a receiver at the north pole, the total distance should only be sqrt((35786 + 6300)^2 + 6300^2) ≈ 42,555 km, which isn't notably larger than the ~36,000 km at the equator.
I'm sure you're still right about the latency, but it can't be just distance doing it.
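The geometry does check out. A sketch reproducing the slant-range comparison, assuming a spherical Earth and a geostationary satellite in the equatorial plane on the observer's meridian:

```python
import math

R_EARTH = 6371.0     # km
GEO_ALT = 35_786.0   # km above the equator

def slant_range_km(observer_lat_deg: float) -> float:
    """Distance from a surface observer at the given latitude to a
    geostationary satellite on the same meridian (law of cosines
    between the two geocentric position vectors)."""
    r_sat = R_EARTH + GEO_ALT
    lat = math.radians(observer_lat_deg)
    return math.sqrt(r_sat**2 + R_EARTH**2 - 2 * r_sat * R_EARTH * math.cos(lat))

equator = slant_range_km(0)   # directly below the satellite
pole = slant_range_km(90)
extra_ms_one_way = (pole - equator) / 299_792.458 * 1000

print(f"equator: {equator:,.0f} km, pole: {pole:,.0f} km")
print(f"extra one-way delay at the pole: ~{extra_ms_one_way:.0f} ms")
```

So the pole adds only about 23 ms one way over the straight-down case; the bulk of GEO latency is the ~120 ms each way from the altitude itself, plus modem, FEC, and routing overhead.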
Ignoring TDMA-oversubscribed VSAT networks for the moment, there are a number of possible modulation schemes, FEC types, and FEC code rates (payload vs. FEC percentages), plus things unique to different SCPC modems, which will add a few ms of latency beyond pure speed-of-light delay.
I thought there were regulatory barriers to doing that.
Latency for medium to long distances could be better than regular fiber, both because of straighter paths and because the speed of light in the relative vacuum of the satellites' orbits is higher than the speed of light in fiber.
We don't know yet what kind of equipment the satellites will have, but fundamentally throughput is constrained by bandwidth per covered area. I guess they will sell gigabit or better to cargo ships at sea, and similarly good speeds to rural regions, but nothing interesting to urban areas. Providing fast internet to a city via satellite would be very challenging; providing lightning-fast internet to lone people in a desert is much easier and more profitable (no competition).
It does require extensive testing and permission by the FAA (and the aircraft manufacturer, and the airline, and others) and extensive support teams in terminals but it's done semi-routinely.
The big difference between Hughes and SpaceX is that Hughes satellites are geostationary and SpaceX's will be in low Earth orbit, so SpaceX's transmission distances and times should be much better.
Hughes provided a decent option for very remote areas. SpaceX appears to be putting together a first class option for everyone.
Who knows if they'll actually reach these speeds/latencies in practice though, that remains to be seen.
If I'm selling large HVAC units with cloud monitoring functionality, it'd be attractive to ship units that are pre-connected to SpaceX's satellite network. I wouldn't have to worry about setting up agreements with multiple cellular providers or having to depend on site-provided Wifi/ethernet.
The point is, those devices could be very lucrative for SpaceX without requiring much bandwidth (no need to stream Netflix).
You don't really need much bandwidth for most IoT apps.
Not much needs both high bandwidth and low latency.
The latency has the potential to be better than the current internet backbone when going intercontinental due to straighter paths, but it'll be higher when going to your local AWS, probably.
Bandwidth is anyone's guess.
The issue for that is the wavelengths they are using. Their properties basically rule out any kind of in-building service; there needs to be a clear, uninterrupted path between the receiver and the transmitter. (But they penetrate clouds well enough that cloud cover should not be a problem.)
You can see international links that have latencies lower than the existing internet by a significant margin.
Unfortunately neither of those simulations take into account that the first iteration has been changed to only use radio and no laser links.
I wonder if it's exclusively because of that reason, and if so, whether the only reason international customers will enjoy coverage at high northern and southern latitudes is the accident of history that the US ended up acquiring Alaska, together with some FCC regulation that says you need to provide coverage for all 50 states.
The article mentions the movie Gravity, which is a bit unrealistic in that it portrays multiple large bodies all orbiting at about the same altitude (which isn't the case in reality). With this web of satellites, though, that actually would be the case. If a chain reaction of collisions does occur, it would cause a field of tiny, fast, deadly debris all orbiting on a similar orbital "plane". It would pretty much blanket the planet. Wouldn't this cause a large issue for anything attempting to reach orbit? What am I missing here?
EDIT: A lot of replies here mentioning the fact that LEO spacecraft decay more quickly than higher orbits. Please note that not all LEO orbits are low enough to guarantee a quick decay without powered retrograde thrust. Stuff can hang up there in LEO a long, long time depending on the actual altitude.
This was a big concern for NASA, though. Their typical guideline was that commercial satellites have to have >90% reliability in deorbiting themselves. But they want to increase this for the SpaceX satellites, because if 10% (~400) of the satellites end up failing, then they are effectively up there forever.
 - <https://fcc.report/IBFS/SAT-MOD-20181108-00083>
Edit: Their most recent application only moved the Phase 1 satellites into the lower orbit. In later phases they are still planning on having more than 1,000 satellites at the higher orbital altitude.
So it wouldn't stop progress for a century, but might it put a damper on things on the same scale as the Great Depression?
Plus, in the ~1100 km orbit you are getting into the lower Van Allen belt, where radiation will degrade electronics much more quickly. This can be mitigated with radiation-resistant parts and more shielding, but it adds a lot more cost.
An example of how long shit can stay in space: A Saturn V booster rocket is still out there, slowly orbiting the planet: https://en.wikipedia.org/wiki/J002E3
Part of the problem, though, is that said Xerox machine is traveling at 17,000 miles per hour.
We've already had a collision (https://en.wikipedia.org/wiki/2009_satellite_collision) between two of the ~3-4k satellites in orbit. SpaceX's constellation is planned to be double that number.
That single collision turned two satellites into two thousand high-velocity projectiles, too.
Just to add to that point:
A single 10 kg chunk of debris could result in over 8 tonnes of additional debris, triggering a rather nasty chain reaction.
The idea of satellite collisions and space junk is an actual concern: https://en.wikipedia.org/wiki/Kessler_syndrome
Sure, a few washing machines in space aren't a big concern. Now imagine a few collide: hundreds of bits and bolts flying around Earth, on unknown trajectories, at thousands of miles per hour.
1. Satellites can deorbit themselves, dropping back into the atmosphere using their own propulsion systems.
2. Even if they fail, most of these are low enough that the time until they fall is reasonably short.
3. The objects are known and tracked, so even if they are out of control, the others will avoid them.
4. We are moving into a world with cheap access to space and lots of satellites, so we need servicing satellites anyway, and multiple are in development. This could become a legal requirement, but even without one it will happen, because SpaceX doesn't want debris flying around on 'their' plane.
Doing some quick math: at ~550 km altitude, the orbital shell has a surface area of roughly 600 million km². With ~7,000 satellites, an even distribution would give each satellite about 86,000 km² to itself, i.e. an average spacing on the order of 300 km. All the satellites have station-keeping thrusters to stay in LEO despite drag. Keeping them from running into each other is probably pretty simple, in comparison to the other challenges they'll need to solve.
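For anyone who wants to redo the shell-area arithmetic, here's a sketch; the ~550 km altitude and 7,000-satellite count are assumptions based on the Phase 1 filing discussed above:

```python
import math

R_EARTH_KM = 6371.0

def mean_spacing_km(altitude_km: float, n_sats: int) -> tuple[float, float]:
    """Shell surface area per satellite, and the corresponding mean
    spacing (side of an equal-area square) for an even distribution."""
    r = R_EARTH_KM + altitude_km
    shell_area = 4 * math.pi * r**2
    area_per_sat = shell_area / n_sats
    return area_per_sat, math.sqrt(area_per_sat)

area, spacing = mean_spacing_km(550, 7000)
print(f"area per satellite: ~{area:,.0f} km^2")
print(f"mean spacing:       ~{spacing:,.0f} km")
```

Of course the satellites aren't evenly distributed (they cluster in planes and the planes cross), so real conjunction analysis is far more involved, but this gives the right order of magnitude.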
There is just far too much space for 100k objects, let alone 10k, to ever run the risk of collision. These satellites are all meant to be in line of sight of each other. They will communicate with optics instead of, or in conjunction with, radio.
You fear unforeseen space particles destroying one and causing a chain reaction. Why don't you hold that same fear for existing satellites, which typically operate at orbits with far lower decay rates? The orbit for Starlink is very low; small particles would not sustain their energy long enough to be a risk factor. The Starlink sats maintain orbit with fuel; everything else in that orbit without fuel will fall. The chance that a particle will end up in the same orbit, on the same plane, at the same time is astronomically low.
Your comments just echo the same kind of delusions of people who think that in their brilliance they've found problems others have not foreseen. SpaceX is really just full of amateurs; you ought to submit your findings.
No, there isn't. Satellites can and do collide despite the vast amount of space in a given orbit. It is improbable, but likely enough to warrant at least some concern.
> Why don't you hold that same fear for existing satellites, which typically operate at orbits with far lower decay rates?
I do, as does the FCC. This is evident in the fact that they now require any body wishing to launch a spacecraft guarantee that the craft can be put into a graveyard orbit.
> Your comments just echo the same kind of delusions of people who think that in their brilliance they've found problems others have not foreseen. SpaceX is really just full of amateurs; you ought to submit your findings.
I'm genuinely sorry this post made you feel this way. If I already had the brilliance to answer this question as I'm sure many bright minds at SpaceX do, I would not have asked the question and rebutted the responses to begin with.
Excluding planned collisions, or collisions during failed dockings, I found 4 instances ever of collisions between satellites and space debris. Now, a couple of things:
* Yes this larger constellation will increase those odds in the future.
* No, a catastrophic doomsday scenario did not occur when those happened.
* SpaceX will have complete control of these satellites through their operational lifespan (excluding collisions or malfunctions).
* Orbital decay for debris at the altitude they're operating at is ~10 years.
Also, have to ask... is all this concern a result of reading Seveneves? :)
My concern is just from a general interest in spaceflight. More of a wish to protect a resource rather than a Chicken Little fear of the sky literally falling out.
Satellites do collide with debris, but that's a different issue. Furthermore, they don't need a guarantee that they can be put into a graveyard orbit - the proposed satellites (and even any debris, if they'd explode for whatever reason) will be in a graveyard orbit from day one until they stop being pushed up and deorbit.
Actually, they do, and it has indeed happened before (https://en.wikipedia.org/wiki/2009_satellite_collision). Far from a Kessler syndrome scenario, but certainly something that should be examined.
It could make orbital planes unusable, but is generally thought not to pose a substantial problem for launching through them, so worst case we just have to start putting satellites a bit higher.
This number surprised me, much lower than I expected. Looked it up and I'm seeing varying numbers, but generally in the 1k-4k ballpark.
Launch approvals are a separate thing.
How? By shooting down satellites?
They can ban the sale of terminals. They can scan for the radio transmissions.
In both cases I assume the issue at stake is freedom of communication. I'd hope both for technical means to communicate (and hide those communications) and for enough support on the US side not to pressure SpaceX into denying service worldwide.
We'll see how it goes.
However, the easier option is to just scan for the right EM waves on the ground and go after the people using the illegal network.
But it's a crappy enough move, it's not certain they would.
So someone just has to figure out a way to hide the uplink.
This is a country that has the Great Firewall. One that routinely oppresses its citizens. One that gives its citizens a social rating that can prevent them from traveling, etc. If there's one thing I'm sure of, it's that the CCP will find a way of stopping access to this. If it takes hacking into SpaceX, shooting down 12K satellites, etc., it will. Maintaining power is what the CCP does.
"hide the uplink"
Kind of hard to hide EM transmitters that aren't going to be designed for special ops teams...
I was thinking of WW2-era radio direction finding when I asked. I'm imagining a software simulation with real-time updates of satellite positions (I assume more than 3 are visible in any useful domain) that checks delay/ping against where the sats actually are.
One way to do so requires user equipment that can transmit back to the satellites, but that will already be the case for SpaceX's customers.
You can also use Doppler measurements to get your location, although this tends to be less accurate, especially if you're moving quickly.
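The delay-based idea above is essentially time-of-arrival multilateration. Here's a toy 2D sketch of it, assuming known transmitter positions and perfectly synchronized clocks (real systems must also solve for the receiver's clock bias, which is why GPS needs a fourth satellite); all positions and delays are made up for illustration:

```python
import math

C = 299_792.458  # km/s, speed of light

# Hypothetical transmitter positions (km) and a receiver to locate.
ANCHORS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
TRUE_POS = (300.0, 450.0)

def delay_ms(tx, rx):
    """One-way propagation delay between two points, in ms."""
    return math.dist(tx, rx) / C * 1000

measured = [delay_ms(a, TRUE_POS) for a in ANCHORS]

def locate(delays, guess=(500.0, 500.0), iters=20):
    """Gauss-Newton on the range residuals r_i = |x - a_i| - c*t_i."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J d = J^T r.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), t in zip(ANCHORS, delays):
            d = math.hypot(x - ax, y - ay)
            r = d - C * t / 1000
            jx, jy = (x - ax) / d, (y - ay) / d  # gradient of |x - a_i|
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r;   b2 += jy * r
        det = a11 * a22 - a12 * a12
        x -= ( a22 * b1 - a12 * b2) / det  # 2x2 solve by Cramer's rule
        y -= (-a12 * b1 + a11 * b2) / det
    return x, y

est = locate(measured)
print(f"estimated position: ({est[0]:.1f}, {est[1]:.1f}) km")
```

With clean measurements the solver recovers the true position; in practice, noise, geometry (dilution of precision), and clock error set the accuracy floor, which is where Doppler can supplement delay measurements.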
There would still be a market for lower-cost, lower-resolution location.
Just because someone doesn't own something doesn't mean you can launch stuff up there with no regard for other things already launched or about to be launched. Things can and do collide, and the ramifications of that are actually much more dangerous when you don't have an atmosphere to pull the pieces down in the event of a collision.
e.g. that's the law.