- Rapid, passive disposal in the unlikely event of a failed spacecraft
- Self-cleaning debris environment in general
- Reduced fuel requirements and thruster wear
- Benign ionizing radiation environment
- Fewer NGSO operators affected by the SpaceX constellation
The first two are because there is more atmospheric drag. I believe that orbital debris in the case of a collision was something SpaceX was struggling to mitigate (everyone struggles, but no one has put up this many satellites before).
The third is because originally the plan was to launch to a 400km orbit and then have the satellites lift themselves to a 1150km orbit. Now they intend to launch to a 300-350km orbit and lift themselves to 550km. They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.
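The fuel-savings claim can be sanity-checked with an impulsive Hohmann transfer estimate. This is only a rough sketch: the real orbit raise is a slow low-thrust spiral with ion engines, and the 325 km starting altitude is just the midpoint of the 300-350 km range mentioned above.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000   # mean Earth radius, m

def hohmann_dv(alt1_km: float, alt2_km: float) -> float:
    """Total delta-v (m/s) of an impulsive Hohmann transfer between circular orbits."""
    r1 = R_EARTH + alt1_km * 1000
    r2 = R_EARTH + alt2_km * 1000
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    v1 = math.sqrt(MU / r1)
    v2 = math.sqrt(MU / r2)
    # burn 1: circular orbit -> transfer ellipse, at the low point
    dv1 = math.sqrt(MU * (2 / r1 - 1 / a)) - v1
    # burn 2: transfer ellipse -> circular orbit, at the high point
    dv2 = v2 - math.sqrt(MU * (2 / r2 - 1 / a))
    return dv1 + dv2

old = hohmann_dv(400, 1150)  # original plan
new = hohmann_dv(325, 550)   # revised plan
print(f"400 -> 1150 km: {old:.0f} m/s")
print(f"325 ->  550 km: {new:.0f} m/s")
```

The revised plan needs roughly a third of the raise delta-v, which is consistent with the idea that less lifting leaves more propellant for station-keeping and deorbit.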
The fourth is apparently just "there's less radiation lower, and radiation is bad for electronics".
The fifth is just "fewer of the other theoretical internet constellations are at this height" as far as I can tell.
(All information sourced from the technical information attachment)
I'm curious: why will this provide a longer life? Is it the lift burn itself that affects the lifetime (so reduced lift burn is better for the satellite)?
If you want to keep a satellite in a mostly circular orbit from 350x350km to 600x600 km you do periodic very small boost maneuvers.
From my layman's understanding, because the effect of aerodynamic shaping breaks down at very low pressure, exhaust speed would have to be roughly travel speed (TAS) x (total cross section / intake cross section) to maintain orbit. Assessing whether that puts the concept in the realm of feasible technology is beyond my skills, but at least there seem to be projects working on that question. If it does work out, it would completely change the economics of LEO use.
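A back-of-envelope version of that momentum balance, with made-up areas and with the drag coefficient and intake capture efficiency both taken as 1:

```python
def required_exhaust_velocity(v_orbit: float, a_total: float, a_intake: float) -> float:
    """Exhaust velocity (m/s) an air-breathing thruster needs to cancel drag.

    Momentum balance (Cd and capture efficiency assumed to be 1):
      drag   ~ rho * v^2 * A_total
      thrust = mdot * v_e, with mdot ~ rho * v * A_intake
    Setting thrust = drag gives v_e = v * A_total / A_intake.
    """
    return v_orbit * (a_total / a_intake)

# e.g. a satellite at ~7.8 km/s whose intake is a quarter of its frontal area
ve = required_exhaust_velocity(7800, a_total=4.0, a_intake=1.0)
print(f"required exhaust velocity: {ve / 1000:.1f} km/s")
```

Tens of km/s of exhaust velocity is well beyond chemical rockets but within the range of electric propulsion, which is why the concept is not obviously dead on arrival.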
Basically, there's already nothing left at their new planned altitude. In either scenario they'd need to burn fuel in order to compensate for irregularities in Earth's gravitational field, but I assume they've run the numbers internally.
The higher the orbit, the more fuel you need to deorbit.
Even over short distances, the latency (which the "Legal Narrative" PDF quotes as 15ms) is negligible for almost all applications.
Paper: http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf / video: https://www.youtube.com/watch?v=AdKNCBrkZQ4
If we're talking about theoretical limitations then hollow core waveguide fibers should get you close to vacuum speed.
But 4,400+ satellites! How many launches is that going to take?
Also, glad to see that Alaska doesn't get shafted in this. As the computer voice noted, it's an FCC requirement.
Using a Falcon 9 at 25 satellites per launch it would take 177 flights, about 36 flights per year.
Using a Falcon Heavy with 40 satellites it would take 111 flights; over 5 years that's about 22 flights per year.
Using a BFR, assuming 350 satellites per launch (until someone comes up with a better number), it would take 13 flights total.
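These flight counts are just ceiling divisions over the revised 4,409-satellite total (the per-launch capacities are the guesses above):

```python
import math

SATELLITES = 4409  # total after the proposed modification

flights = {vehicle: math.ceil(SATELLITES / per_launch)
           for vehicle, per_launch in [("Falcon 9", 25),
                                       ("Falcon Heavy", 40),
                                       ("BFR", 350)]}
for vehicle, n in flights.items():
    print(f"{vehicle:12s}: {n:3d} flights (~{n / 5:.0f} per year over 5 years)")
```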
Only a little </s>.
Imagine a business plan for 4,400 satellites being sane. What a world.
In those 10 years they went from being literally laughed out of rooms to being at the forefront of the industry. And even now that they are arguably "on top" in many ways, they keep trying to do these insane things.
They are currently trying to build a rocket with the largest launch capacity in history, launch and maintain a satellite network that is the largest in history (if successful, it won't just be the largest, it will be at or near half of all satellites in orbit!), and still have a goal of getting humans to Mars.
Say what you will about Elon Musk, the guy knows how to set goals.
In my view, that's kind of the point. SpaceX internet service might make money, or might not; let's presume it loses a little bit of money over time. That's fine, because its true purpose is to reduce costs for SpaceX the rocket manufacturer and refurbisher.
Manufacturing facilities operate best when they have an even load. Scaling up and down, laying off then hiring, etc., is bad for business. By having this perfectly flexible customer, SpaceX can do a lot less of that. They can scale up at an even pace.
The end goal will be SpaceX launching every X days exactly, always with a mix of external and internal customers on those launches.
if you follow the motion of the boxes.
"On March 29, 2018, the Commission authorized Space Exploration Holdings, LLC, a wholly owned subsidiary of Space Exploration Technologies Corp. (collectively, “SpaceX”), to construct, deploy, and operate a constellation of 4,425 non-geostationary orbit (“NGSO”)satellites using Ku- and Ka-band spectrum. With this application, SpaceX seeks to modify its license to reflect constellation design changes resulting from a rigorous, integrated, and iterative process that will accelerate the deployment of its satellites and services. Specifically, SpaceX proposes to relocate 1,584 satellites previously authorized to operate at an altitude of 1,150 km to an altitude of 550 km, and to make related changes to the operations of the satellites in this new lower shell of the constellation."
"Under the modification proposed herein, SpaceX would reduce the number of satellites and relocate the original shell of 1,600 satellites authorized to operate at 1,150 km to create a new lower shell of 1,584 satellites operating at 550 km"
This shell will also now use 24 orbital planes instead of the originally planned 32 (per a table in the technical information PDF).
The total number of satellites in the constellation goes from 4,425 to 4,409.
So, while it is a valid concern... until we put up “millions” of items, I think astronomers will be pretty safe. However, orbital debris avoidance... much bigger issue.
Edit: Clarification. The above is only true if having a satellite in the field of view is enough to be a problem.
That said, this project will approximately double the number of man-made satellites. I believe these satellites will be smaller than normal, but also closer than normal.
I'm not an expert, but I think the number of satellites is actually a really bad predictor for impact. In particular there was one set of satellites that had really bad effects for some reason. See more here.
They have not publicly stated anything about the inter-satellite links, but a research paper published by independent researchers (2) estimated that, based on other state-of-the-art systems, they get >100Gbps of bandwidth between two communicating satellites. The total trans-Pacific capacity between NZ and the US will be better than the SC fiber line because there will be many different non-sharing paths using different satellites.
(1): Also likely for crossing connections. The paper mentioned below assumed use of lasers for those too, but that is IMHO unlikely because steering would be too hard. Lasers work great for satellites on the same plane and on neighboring ones, because the angular rate the system needs to track stays very low -- the satellites near them are almost stationary from their point of view. In contrast, satellites on crossing planes zip by very fast and have high angular motion, especially when they pass close by.
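To put rough numbers on that, here's a back-of-envelope estimate of the peak line-of-sight slew rate toward a satellite in a crossing plane. The crossing angle and separation are made-up illustrative values, not anything from the filing.

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R = 6_371_000 + 550_000    # orbital radius at 550 km altitude, m

v = math.sqrt(MU / R)      # circular orbital speed, ~7.6 km/s

def peak_angular_rate_deg_s(crossing_angle_deg: float, separation_m: float) -> float:
    """Rough peak slew rate (deg/s) toward a satellite whose plane crosses
    at the given angle, when it passes at the given separation."""
    # relative speed of two circular orbits crossing at angle theta
    v_rel = 2 * v * math.sin(math.radians(crossing_angle_deg) / 2)
    return math.degrees(v_rel / separation_m)

# same plane: zero relative speed, so essentially no tracking needed
print(f"same plane:       {peak_angular_rate_deg_s(0, 500_000):.2f} deg/s")
# planes crossing at ~60 degrees, passing 500 km away
print(f"60-deg crossing:  {peak_angular_rate_deg_s(60, 500_000):.2f} deg/s")
```

Nearly a degree per second for the crossing case, versus essentially zero for in-plane neighbors, which is the asymmetry the comment is pointing at.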
Those speeds may be great for individual endusers, but if you're a datacenter which needs a lot of bandwidth then starlink would be limited by the shared radio bandwidth to the sats, which could be quite crowded in an urban environment, while fiber wouldn't be. That's why I only see them as competing on latency, not on bandwidth.
Their plan to launch O(1000) satellites is to get lower latency and higher bandwidth, which would render the current generation of satellite internet obsolete.
It's a great example of the sort of business plan that's only possible with cheap launches that SpaceX's reusable rockets have provided.
Here's a more detailed primer:
Why not? Just saying "you are wrong" doesn't really add much to the conversation. I'm interested to know more.
The other issue with LEO is that if you want to double your satellite capacity, you need to launch twice as many as you currently have in the sky, instead of just one or two more large ones. This presents logistical problems, and technical ones as well to some extent.
When people typically refer to GEO satellites, they're unfairly making the assumption that the old crop of GEO satellites are where technology is today; namely fixed, low-capacity satellites. This is not the case. With the HTS (High-throughput satellites) and XTS (Extreme throughput satellites) that have movable capacity, not only do you have a comparable amount of usable capacity to LEO, but you can also move it as business needs change. The latency issue will never change, but if you see my other comments, I'm skeptical they'll be able to achieve the latency everyone is quoting.
> Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
I.e., it's an asymptotic upper bound.
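For reference, the standard formal definition:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 : 0 \le f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0
```

So O() only promises that the function eventually stays below some constant multiple of g; it says nothing about a lower bound.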
It's also interesting to compare with Ω() (Big Omega — asymptotic lower bound) and Θ() (Big Theta) (big-O AND big-Omega): https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachm...
A good textbook on this subject is Introduction to Algorithms: https://www.amazon.com/dp/0262033844/ .
If you want to be a purist, the only fault I see in my definition is that instead of using a generic function I assumed a linear one - but that's for explaining the colloquial use.
It's most commonly applied to worst case running time, but is often applied to expected running time ("hash table insertions run in O(1)"), space complexity, communication overhead, numerical accuracy and any number of other metrics.
order of...order of
yeah I know there is no "growth"
And I wasn't saying that "on the order of" is an approximation of what O() actually is in CS, merely that that is how OP used it.
LEO latency best case: 6ms
I've used those Hughes satellite connections before, I never got anything close to 250ms. More like 400ms.
Consumer grade hughesnet stuff will vary anywhere from 495ms in the middle of the night to 1100ms+ during peak periods due to oversubscription.
It would be technically possible, but uneconomical and an inefficient use of space segment transponder kHz to have customers in Wyoming moving traffic through a teleport in the Chicago area. Here's an illustration of Ka-band spot beams on a typical state of the art geostationary satellite:
Applying the same concept to starlink, telesat's proposed system, and oneweb, if they build a number of teleports geographically distributed near rural areas, it will allow individual satellites to serve as bent-pipe architecture from CPE --> Teleport, within the same moving LEO spot beams, or to have customer traffic take only one hop through space to an adjacent satellite before it hits the trunk link to an earth station. For example customers in a really rural area of north Idaho along US95 might "see" a set of moving satellites that also have visibility to an earth station in Lewiston, ID, where carrier grade terrestrial fiber links are available. Or a customer in a remote mountainous area of eastern Oregon may uplink/downlink through a teleport in Bend.
The ultimate capacity of the system will be determined by how few hops through space they can get the traffic to do. Since every satellite will be identical and capable of forming a trunk link to a starlink-operated earth station, when it's overhead of it, they have an incentive to build a large number of earth stations geographically distributed around the world.
It's basically the same idea as o3b's architecture but at a much smaller scale.
Another consideration: adding another 50ms to GEO latency isn't really going to change anyone's opinion. It's still targeted towards streaming, and latency doesn't matter as much since they're not targeting real-time gamers. SpaceX needs the latency to be very low to hit that market. There's a world of difference going from a 30ms ping to an 80ms ping, and once you're past a certain point, it puts you in the same camp as GEO.
This is wrong. From their FCC filing (1), they use AESA phased-array antennas, and each satellite is capable of simultaneously maintaining "many" (unspecified) steered beams that are less than 2.5 degrees wide.
Also, the receiver is capable of distinguishing between multiple beams covering it so long as there is more than 10 degrees of angular separation between them from its point of view. If I understood it correctly, this will allow nearly every visible satellite at the same orbital height (except the ones very nearest the horizon) to communicate with targets that are geographically very near each other at full bandwidth. After the very first phase has been launched, they can provide a total of ~500 Gbps of downlink bandwidth to any spot target that lies between 40 and 60 degrees latitude. The later additions at higher orbits help with total capacity and especially with targeting multiple targets relatively near each other, but do not provide more bandwidth per city, as that is limited by the 10-degree angular separation requirement.
The VLEO (330km-ish) constellation will help with that by reducing the size of each spot.
if I had to guess on the earth station siting, they are picking locations which are medium-sized cities with decent terrestrial fiber connectivity, which will be within the same satellite view footprint as adjacent rural areas. Such as an earth station in Boise may serve mountainous remote areas of ID.
This 200-Earth-station figure also leads me to believe that the first manufacturing run of satellites may not have any satellite-to-satellite trunk link ability at all, but that they will ALL be bent-pipe architecture. This means that if SpaceX wants to serve a particular area, they need to have an earth station on terrestrial fiber in the same region, simultaneously visible to both satellites and end users.
If the space segment only adds 120ms to an otherwise-identical round-trip ping, it's not so bad; people in the US have been spoiled by having CDNs very near all major IX points.
I'm not sure what the justification is to assume that Starlink will not be horribly oversubscribed, either. Last I checked it was supposed to be about 32Tbps with all satellites operational. A substantial amount of that is completely wasted over water, so the effective capacity for customers that can actually generate revenue for SpaceX is very small. The types of services people need in remote areas, whether on a plane or in a village that has never had internet, are not those that require low latency. They are either streaming media (plane) or web browsing. In that sense, I don't see how Starlink has an advantage there.
I would be shocked if they could deliver something better than cable on DOCSIS 3 to even 10% of cable customers with comparable service. My guess is it will be tailored more to high-paying customers that happen to not be able to get decent cable.
A lot of the technology press has misunderstood the most desirable applications and locations for it. People think that it's going to compete for residential internet service in a suburb of a city like Portland, Sacramento, or Denver. If you can get 300 megabit per second DOCSIS3 service in one of those locations, that will be drastically better. Where it is going to be a game-changer is all the locations that right now depend on highly oversubscribed geostationary small-VSAT services, and extremely rural areas where there isn't even a single last-mile terrestrial wireless ISP. And for ships in the middle of the ocean, if the cost per gigabyte is significantly less than Inmarsat or other options.
I agree that the service could be better in theory, but at the same time, existing satellite internet service also could be better by taking on fewer customers and not being as congested. But that's a cost trade-off. And in this case, I believe SpaceX has a higher cost per customer to recoup, so it seems in their best interest to also be congested to increase revenue.
It will be ~80Tbps after the first three phases (the LEO constellation), ~240Tbps after VLEO.
I agree with you that they probably cannot offer enough bandwidth to compete with residential internet in densely habited areas.(1) The system is really interesting in less densely inhabited places, and for backhaul. The complete system has more transcontinental bandwith between almost any two (distant enough) places than all submarine cables between them put together. This alone will likely pay for the whole system, with plenty to spare.
(1) With a few exceptions. After the full constellation is up, New Zealand will have ~30 times more downlink capacity than the country's entire current bandwidth use, and will also have tens of times more connecting capacity with North America and Australia than it currently has. But that requires a country of only ~5 million in the Starlink sweet spot that gets all the bandwidth of all visible satellites to its east.
Light travels about 1800km in 6ms, but that's just one way. Straight up and straight down at 550km is about 3.7ms.
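The arithmetic, for anyone who wants to plug in other path lengths:

```python
C = 299_792.458  # speed of light in vacuum, km/s

def light_time_ms(path_km: float) -> float:
    """One-way light travel time over a path, in milliseconds."""
    return path_km / C * 1000

print(f"1800 km one way:         {light_time_ms(1800):.1f} ms")
print(f"550 km up + 550 km down: {light_time_ms(2 * 550):.1f} ms")
```

Real slant paths to a satellite near the edge of coverage are much longer than the 550 km nadir distance, which is where the quoted ~6ms best case comes from.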
Maybe using the ones in lower orbit to cover more densely populated locations?