> also control the infrastructure running a huge portion of the internet
AWS has less market share than Microsoft. The Cloud is a pretty competitive space.
> are working on creating a new ISP
This is great, look what Google Fiber did to fiber availability in the US. More competition is great in this space.
I find it ironic that you're complaining about FAANGs in the context of ISPs, who control literal monopolies in several regions.
FAANG is huge and market caps already greatly exceed that of the old-school monopolies.
Also, at its peak Standard Oil was worth about 6% of the US stock market. Apple is currently worth 5.3%. This comparison should be robust to differences in interest rates. In this light, Apple is slightly smaller than Standard Oil was. But the market is broader now and there are more lines of business in existence, so a company that dominates an entire line of business would be expected to have a lower proportion of total stock market cap now than in 1910.
At the very least 2021 Apple and 1910 Standard Oil are very comparable in size. There might not be a clear way to tell which is bigger.
They didn't say a "majority" though, they just said a huge portion.
In the context of the internet and how many devices are out there, I think he would absolutely be right in saying "a huge portion."
According to Gartner, AWS has over double Azure's market share by revenue.
What metric are you using for your stat?
Dumb question: why do people use inflation? Like, Rockefeller would take the cash and put it in a 2.5% account for 100 years? He probably would have put it in an investment gaining at minimum 7-10% a year for 100 years, giving him multiple trillions.
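The compounding point is easy to make concrete. A rough sketch, using the rates from the comment above (the $1M principal is just an illustrative figure; no taxes, fees, or drawdowns are modeled):

```python
# How $1M grows over 100 years at different annual rates.
# Rates are the ones from the comment; all figures are illustrative.
def compound(principal, rate, years):
    return principal * (1 + rate) ** years

million = 1_000_000
print(f"2.5% (inflation-ish): ${compound(million, 0.025, 100):,.0f}")
print(f"7%   (equity-ish):    ${compound(million, 0.07, 100):,.0f}")
print(f"10%  (optimistic):    ${compound(million, 0.10, 100):,.0f}")
```

At 7% the multiplier over a century is roughly 870x, versus roughly 12x at an inflation-like 2.5%, which is where the "multiple trillions" intuition comes from when applied to a fortune measured in billions.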
A common refrain is that Carnegie Hall only cost $1M ($29,581,868.13 when adjusted for inflation) to build, so why does it still bear its founder's name? Shouldn't we rename it in honor of someone who's contributed more to the Hall?
What this doesn't take into account is what it would cost to _build_ a new Carnegie Hall today. Labor is far more expensive today (highest $/hr ever in 2019, if I'm not mistaken), and so are building materials.
So it's true he'd see compounding returns from investing, but to do what they did back then would cost significantly more today. I.e., their dollars went further back then.
Also, worth noting Rockefeller donated 6% of his salary to charity every paycheck, every single year of his life, not just when he could "afford" it. So if you take into account the _missed_ compound returns of those charitable contributions, you can start to get a sense for just how otherworldly their charitable efforts were.
Not trying to say these guys were angels. And yet, as rich as they were, I think too often that overshadows the gargantuan contributions to charity they made.
Is it really such a great thing to give away money you don't need? IMO that should be the absolute minimum baseline expectation for someone who controls so much wealth.
Because it is very hard to think about the currency values in terms of purchasing power.
Inflation is the lowest metric in that count, but reflects a somewhat uniform drop in purchasing power (per dollar) across the whole market.
My friend's mom told me her mortgage on a Bay Area house was $75 a month, and the two cars parked in front were worth more than a two-bedroom house was when she got them.
So it was much cheaper to buy a house, but much more expensive to get a car and I can't even have a ballpark figure for what a gigabyte of computer memory would have cost in 1971.
So, inflation adjustment is at best a rough proxy for purchasing power at current costs.
Walmart meanwhile does also have a marketplace, but I think still sells the overwhelming majority of their stuff direct in their physical stores.
Remember Walmart has revenue of like 600 billion a year or something crazy.
At most, Starlink can replace one T-Mobile.
amz seem to be buying into everything under the sun and trampling usage agreements (i.e. "reselling" consumer broadband, with people joining via dark patterns despite it being against their ISP's terms of service, via the amz Sidewalk product)... and that can scale up to O.G. Bell levels of monopoly.
Other than Kuiper, Amazon has shown no signs of becoming a serious player in the consumer ISP space. Sidewalk is not a consumer ISP service in its own right, it needs to piggyback on a third-party consumer ISP service. Amazon is nowhere near becoming anything resembling Bell.
Capacity would be aggregate bandwidth, which is determined by the satellite's communications technology (number of transponders, spectrum allocated, modulation used, etc) rather than directly by which orbit it is in.
Comparing constellation sizes does make sense in determining total capacity (aggregate bandwidth of the constellation) assuming that Starlink and Kuiper use similar capacity satellites. I think that is a reasonable assumption, but we have limited info on what Kuiper are planning. Current Starlink satellites have on average 20 Gbps capacity each. I can't find any public info on the planned capacity of Kuiper's satellites. Even if Kuiper has higher bandwidth satellites, they'd need a fourfold advantage in bandwidth to overcome Starlink's fourfold advantage in satellite count. And SpaceX is likely to increase the capacity of Starlink in future iterations, so even if Kuiper did have such a bandwidth advantage, it might not last.
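A quick back-of-envelope on what that fourfold count advantage means, using the ~20 Gbps per-satellite figure from above and the commonly cited planned constellation sizes (~12,000 Starlink, ~3,236 Kuiper — both subject to change):

```python
# Aggregate constellation capacity = satellite count x per-satellite bandwidth.
# All inputs are the rough planning figures discussed in this thread.
def aggregate_tbps(sat_count, gbps_per_sat):
    return sat_count * gbps_per_sat / 1000  # Gbps -> Tbps

starlink_total = aggregate_tbps(12_000, 20)
# Per-satellite bandwidth Kuiper would need just to match that total:
kuiper_breakeven_gbps = 12_000 * 20 / 3_236

print(f"Starlink aggregate: ~{starlink_total:,.0f} Tbps")
print(f"Kuiper breakeven per-sat bandwidth: ~{kuiper_breakeven_gbps:.0f} Gbps")
```

In other words, each Kuiper satellite would need to carry roughly 3.7x the bandwidth of a current Starlink satellite just to reach parity.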
The satellite capability is one thing, but spectrum is another. SpaceX doesn't have priority on a lot of the spectrum they have, so it might be completely unusable by the time they go live.
I feel like we are arguing two different things. I was saying Kuiper can't beat Starlink. You are arguing Starlink is not going to succeed. I'm more optimistic about its prospects than you are, but even if you are right, it wouldn't change my original point – if both fail, then Kuiper still doesn't beat Starlink.
Blue Origin is two years older than SpaceX, and yet has achieved far less than SpaceX has in the same timeframe despite a two year head start. Over that timeframe, SpaceX has successfully delivered 124 Falcon 9 payloads to orbit. Blue Origin hasn't sent anything to orbit yet, New Glenn is targeted to "late 2022" but most observers think it won't launch until 2023 (or even later).
Blue Origin is the one that needs to prove itself here. SpaceX has a proven capacity to deliver payload to orbit. With Starship, that capacity is going to greatly increase. But even if Starship runs into trouble and gets delayed, they still have a proven capacity with Falcon 9 to rely on while any issues with Starship get worked out. Blue Origin is working through the issues on New Glenn without any fallback.
But to be clear, none of this matters. The business case to sell a $1000 terminal for under $300 isn't closing. They're subsidized right now, and public perception will be very different once the real sticker shock comes in.
Is Blue Origin good at its goals? On New Glenn, which is what matters here (not New Shepard), the jury is still out. By contrast, for Falcon 9, the jury returned a verdict long ago.
> The business case to sell a $1000 terminal for under $300 isn't closing. They're subsidized right now, and public perception will be very different once the real sticker shock comes in.
They say they've been cutting the manufacturing cost from $3000 to $1500 to $1000; you are assuming they aren't going to succeed in cutting it further. Even if they can't get it all the way down to $500 (the current sticker price), if they can get it down to $600 or $700, a $100-$200 price rise for new customers caused by withdrawing the subsidy is unlikely to be a huge issue. (The $300 figure was just an aspirational goal Musk set, nobody is paying that little right now, who knows if anyone ever will.)
Some stuff I read sounded rather bleak for the likes of FAANG.
So, how did antitrust work out for Bell? Not too terribly badly, in the long run. (Though there were huge benefits for us consumers. . . )
Honestly I'm not sure of the main corporate opposition to anti-trust other than ego and/or laziness. It'll force the resulting business to get even better. If you got your position by being the best, you can still be the best in the new world.
Even in slightly rural places, it's not cost-effective for traditional ISPs to dig a cable, so what's the market's solution? Launch thousands of satellites! I don't know why you're being downvoted. The market has failed, requiring external forces (the government) to intervene. Whether it be Title II-ing them or allowing municipal competition, something needs to happen.
'messed it up'- care to elaborate?
Working at FAANG is easily regarded as one of the higher-end jobs in the SWE world (in terms of comp and prestige) and does generally require you to be in the top percentile of people who know how to code. That much is fairly indisputable, so "unachievable" is a good way to put it for most folks (even if they are into programming already).
It's really not. There's thousands and thousands of engineers, from mediocre to truly great. With the right preparation, a lot of it just comes down to getting lucky with who interviews, how they're feeling that day, etc -- luck.
You might find this interesting: https://www.youtube.com/watch?v=r8RxkpUvxK0
You can't get in without both luck and preparation. That part I won't deny, but most exclusive things in life involve that bit of luck since everyone starts to hit the "threshold" of qualifications pretty fast
No, you cannot say the same thing about Ivy League admissions, at least not at the current point in time. Ivy League has a massive supply of qualified candidates to fill the demand (aka the fixed number of seats they have) many many times over. Tech companies don't have that luxury. They have a lot of candidate supply, but the "qualified" candidate supply is much more difficult to come by.
Ivy League admission officers admitted multiple times in public interviews and public statements that there are a lot of people that would qualify and succeed at those schools just fine, but the schools have very limited numbers of seats, so they have to have the bar be set higher and higher in order to accommodate truly the "best" (based on whatever metrics they use to determine the best).
With FAANG? They are desperate to hire good engineers. As someone who interviewed candidates at one of them, for every single interview I went with the mindset that I want to hire that person, as long as they demonstrate they are competent enough to do the job. There is no such thing as "limited" number of spots (as it is with Ivy League admissions). If every single candidate we interview is qualified, we will hire every single one of them, we won't be artificially raising the bar just to accommodate a specific X number of people because we cannot fit more. We can. Of course, my specific team cannot hire 50 people, but if all 50 candidates are competent, we will try to set them up with one of our sister teams or any other team in the company that desperately needs engineers, and there are tons of those teams.
The real problem is that a lot of those people we interview tend to not even be able to do fizzbuzz, and I am not exaggerating. I used to think that people said that statement as a joke back then, but after interviewing enough candidates, I realized that they weren't kidding.
TL;DR: they aren't similar, because Ivy League has a fixed number of seats available, and even if every candidate is qualified, they can only accommodate a predetermined number of them, so they have to artificially raise the cutoff metrics. For every candidate my team interviews, we can easily hire every single one of them if they are competent, without artificially raising the bar just to have a fixed number of hires.
You have to be capable and interested in passing a FAANG interview. Presumably those that failed the FAANG interview process are either incapable or uninterested. My money says it's mostly the former.
You have to pass as what FAANG deems the top 1%. Most people don't.
There is a reason that most people do not work at FAANG.
And I want to echo a point filoleg made. You don't need to be the top 1%. It's not like Ivy League school acceptance where they are artificially throwing people out. FAANG is dying to hire. My team has been trying to hire multiple people for months and we have gotten exactly 0. Every hiring manager goes into interviews wanting to say yes and the requirements are extremely clear and manageable for anyone willing to put in the time.
AND I am only talking about the algorithm and behavioral parts of the interview and working on the assumption that I will pass any general questions on system design, ML, etc.
Other than the CS degree, you are also making a 130+ IQ assumption here.
1. IQ is going to be in the 120-130 range for MIT and other FAANG target schools (https://www.iqcomparisonsite.com/occupations.aspx and factor in the GPA necessary and those correlations). If you are NOT from those schools, you are less competitive and have more to catch up on (like me). Whatever you say, either you had a lot of study free time or your IQ is higher than you think, as IQ translates directly to learning speed, especially on computer-related tasks.
2. Big deal. For a young student out of school? No, it's the rational choice, though a lot of people out of school/college NEED to work to keep the bills paid. For older people - dedicating a good 3-4 hours a day is going to lead them to underperform at work (at least it would for me), so it's kind of a big risk.
The pay increase is also "debatable", arguments like COL, vesting, decrease in quality of life due to commute, relatively-high (by non-FAANG) standards salary, taxes, etc come up. And that's without mentioning potential kids.
Other companies don't propagate their corporate culture so explicitly, so it seems a little weird and cultish. But if you're a big company doing things the same way other big companies do them, you're going to get the same results that other big companies get. Amazon wants to get better results than other big companies.
When I started at Amazon, I felt like 14 was simply too many. I tried to do a Carlin-style winnowing, but darned if I could get the number below twelve without starting to lose things that the company obviously thinks are important. (They recently added two more, by the way.)
As a bonus, after a stint at Amazon, your familiarity with their leadership principles can make you very desirable to other companies. I know a number of people who were individual contributors at Amazon and were snagged by Microsoft for leadership positions, either team leads or product managers.
It seems like this is a sorta "pre-PIP"?
> If Amazon employees don’t improve while unknowingly in the Focus program, they are then placed into the “Pivot” program, according to previous reporting from Business Insider. Employees told Business Insider that if they were placed in Pivot, they were either offered a severance package or given a chance to be put on a performance improvement plan.
So "Focus" is more a manager-focused thing and then if the manager isn't able to turn things around, out comes the real PIP, is my impression.
Honestly, I've had to put someone on a PIP before, and more support and training on what to do before it got that bad would've been very appreciated.
I think the manager needs to be transparent about "you need to do better" but not about internal management practices necessarily at that point.
And “instruct managers to hide from employees when they're on a PIP”
This is not as clear cut. The document cited by Seattle times clearly states to not go into details of the Focus application, and instead go into how they can improve.
“Do not discuss Focus with employees. Instead, tell the employee that their performance is not meeting expectations, the specific areas where they need to improve, and offer feedback and support to help them improve.”
Start looking for another job before I get fired. Which I assume is about 10x easier than doing so after the fact.
Class is also a generalization:
I'm a gainfully employed SWE, and I (by choice) live in a neighborhood where I hear gunshots nightly. I don't own any new clothing, my nicest pair of shoes are my work boots, and my friends are almost entirely blue-collar people. I could easily afford to live like an upper-middle-class yuppie, but I border on lower class at first glance.
Is class entirely economic, or is there a social aspect?
Assuming Starlink is similar, are there any risks of "imprisonment" on Earth having ~8k low-orbit satellites flying around? In that they gravely affect efforts to fly Humans to the Moon/Mars?
So, just from an "unusable space" point of view, it's on the order of 10000x less of a problem than airplanes. The caveats here are that the satellites are moving much, much faster than airplanes, and they stay in the sky much, much longer.
But it's not really a huge problem unless stuff goes wrong and you get Kessler Syndrome. This is more of a risk with the higher constellations like Amazon's and OneWeb's than it is with SpaceX Starlink (which is in a low enough orbit to de-orbit all debris within a few years, rather than centuries).
This doesn't feel right. Shouldn't the right frame of reference be the distance to the center of the earth, not the sea level? Then it is not a 100x difference, but more like a 10% one.
Area of sphere: 4πr^2
Airplane cruising height: ~10km
Satellite orbit height: ~550km
Radius of earth: 6371km
thus, relative increase: (6371+550)/(6371+10) = 1.084 = 8.4% increase
Squared (because of first formula) that corresponds with a 1.084^2 = 1.175 = 17.5% increase in area.
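The same arithmetic as a quick script (figures copied from the calculation above):

```python
import math

R_EARTH = 6371.0   # km, mean radius of Earth
ALT_SAT = 550.0    # km, Starlink-ish orbital altitude
ALT_PLANE = 10.0   # km, typical airliner cruising altitude

def shell_area(alt_km):
    """Area of the spherical shell at the given altitude above sea level."""
    return 4 * math.pi * (R_EARTH + alt_km) ** 2

increase = shell_area(ALT_SAT) / shell_area(ALT_PLANE) - 1
print(f"Satellite shell is {increase:.1%} larger than the airliner shell")
```

The radius ratio is ~1.084, and squaring it gives the ~17.5% area increase quoted above.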
Still, a few caveats:
1. Earth is huge. 510.1 million km² is a lot of space (+17.5% at 550km altitude). We could have a million sats with each having more than 1km^2 to themselves.
2. Satellites orbit at different heights. Amazon's and SpaceX's satellites will not be on the same orbit.
3. Starlink satellites are at a sufficiently low altitude that Kessler Syndrome is not a problem; even if they all simultaneously turned into millions of pieces of dead debris, atmospheric drag would lower their orbits and they would burn up in just a few months.
You are standing in a field.
There is a 10x10 meter plate hovering 10 meters above you. There is a 1 square meter target on it. You fire a gun upwards at a random location on the plate. There is a 1 in 100 probability you hit the target.
Now imagine there is a 20x20 meter plate hovering 20 meters above you. It is perfectly occluded by the original 10x10 meter plate. It also has a 1 square meter target on it. When you fire at a random location on the original plate, the bullet passes through it and continues on to the higher plate. What is the probability you hit the target on the higher plate? I believe it is 1 in 400.
From this thought experiment it seems that altitude from launch point is what counts.
Clearly the 10x10 plates are lining up (modulo curvature). But the 20x20 plates are not, they're overlapping. So when I shoot through a random location of my own 10x10 plate, there's a chance that I'll hit a target from somebody else's 20x20 plate. Sum up those additional chances, and they'll cancel out the 4x difference you found.
This feels like it would make a nice puzzle, your phrasing makes for a great misdirect / sleight of hand.
When talking about the area of the spherical shells, it conflates what is above me as equally relevant to things that are on the other side of the planet from me.
That is, satellite X and Y may be over me at 500km and 1000km distance, respectively. Later, they may be directly through the earth from me at distance 13200km and 13700km distance.
In the first case, if I shine a laser straight up, my probability of hitting X is 4 times higher than my probability of hitting Y.
In the second case, (if I could somehow shine a laser straight through the earth), the probabilities are nearly equal.
But my intuition is that for the purpose of escaping earth, this second case does not matter, because we are just dealing with what is above us, not the entire spherical shell.
If I am launching a rocket and there is a 1x1 meter satellite orbiting 1000km above me, what is the probability that my rocket hits that satellite compared to an identical satellite 2000km above me? The angular area of sky covered by the 2000km-altitude satellite is 1/4 that of the 1000km-altitude satellite.
That is, you 2x the height of the target which results in the probability of hitting being 4x less.
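A quick Monte Carlo of the plate thought experiment confirms the 1-in-400 figure (assuming the 1 m² target is centered on the upper plate, which the original setup leaves unspecified):

```python
import random

random.seed(42)

# Fire through a uniformly random point on a 10x10 m plate at 10 m altitude;
# the ray continues to a 20x20 m plate at 20 m altitude with a centered
# 1x1 m target. Expected hit probability: 1/400 = 0.0025.
N = 1_000_000
hits = 0
for _ in range(N):
    x = random.uniform(-5, 5)  # random point on the lower plate
    y = random.uniform(-5, 5)
    # A ray from the origin through (x, y, 10) crosses z = 20 at (2x, 2y).
    if abs(2 * x) <= 0.5 and abs(2 * y) <= 0.5:
        hits += 1

print(f"hit probability ~ {hits / N:.4f} (expected 1/400 = 0.0025)")
```

The scaling factor of 2 in the ray geometry is exactly the "2x the height, 4x less likely" relationship described above.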
Every satellite is in a very deterministic orbit which requires energy to change (enormous amounts of energy for a significant change) so they don't change their orbit significantly nor often.
We really don't. We mostly know for commercial flights, but not so much for general aviation. What aviation does is have zones around airports with restricted airspace that is well-controlled.
This is also true for satellites.
Do you think they don't know where their satellites are?
* Those 20k planes don't all fly at the same time.
* Retired/broken planes don't fly; satellites may stay in orbit for decades.
* There is no debris flying through the sky at 11,000 km/h.
* Planes can adjust their path at will instantly for avoidance.
* Planes can be grounded instantly if we need to.
Orbital space is a limited resource that gets depleted very fast and recovers very slowly. We are talking about launching in the next ten years 5x the total number of satellites that were ever launched so far. I am sure humans in 50 years would still be able to launch a thing or two in space as well.
This is an actual problem, and unlike for planes, once the problem is apparent, you can't just take some of them out of the sky while you figure out a solution.
They absolutely do. Check out https://www.flightradar24.com, there are currently 15,673 planes in the air worldwide at the moment I am typing this comment.
The main reason is that the Starlink satellites fly at a very low altitude, such that even if they lose all control, they will deorbit in a few years. Which means, if something went horrifically wrong, the Starlink system debris would clear itself within a short period of time.
It looks like the Amazon Project Kuiper satellites will be slightly higher up, but still have a natural orbital decay time of between 5-7 years. https://spacenews.com/amazon-lays-out-constellation-service-...
So, the long term risks from these kinds of low-orbit mega constellations are fairly low. If anything goes catastrophically wrong, we wait a decade and it's gone.
I think personally, the longer term risks we should be wary of are medium altitude and geostationary orbits that won't naturally clear themselves for decades or centuries if something goes wrong.
SpaceX satellites are at around 550km altitude. If someone puts satellites higher than that, collision debris will last longer, and the satellites' fuel will last longer so they won't have to be replaced as often, but network latencies will be higher. Seems like 500-600km is the optimal zone for the primary constellations of internet satellites.
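The latency side of that tradeoff is easy to bound. A sketch of the minimum round-trip time contributed by altitude alone (straight up and back down; it ignores slant range, routing, and processing, so real latencies are higher):

```python
# Physical floor on round-trip latency imposed by orbital altitude.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km):
    """Round-trip light travel time straight up to altitude and back."""
    return 2 * altitude_km / C_KM_S * 1000

print(f"LEO @ 550 km:    ~{min_rtt_ms(550):.1f} ms minimum RTT")
print(f"GEO @ 35,786 km: ~{min_rtt_ms(35_786):.1f} ms minimum RTT")
```

This is why LEO constellations can offer fiber-competitive latency while geostationary satellite internet never could: the ~36,000 km GEO altitude alone costs nearly a quarter second per round trip.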
Video showing decay of debris vs. its altitude. The "X" shape is because each piece of debris is plotted twice, once at its perigee and again at its apogee (describing the ellipse of the orbit, as they generally are not perfectly circular).
Higher altitude satellites would be a bigger concern but these aren't a big deal.
In general, the question of collision risk and debris is something evaluated for every launch/constellation. Starlink, for example, mostly avoids it being an issue by flying so low that debris quickly falls to earth and burns up in the atmosphere (they also design their satellites to fully burn up in the atmosphere). On the flip side, Starlink is planning on an order of magnitude more satellites than this.
Even the worst case though doesn't really impact humans ability to fly to the moon/mars. You can make low earth orbit relatively unusable because there is a high collision risk if you hang out there for a year, but you should basically always be able to fly through low earth orbit to a higher orbit with negligible collision risk.
Space junk is an issue, but it's not anywhere near crisis level yet.
I think it's sometimes hard to reason about the vastness of space, but imagine if the planet Earth had exactly 20,000 cars on its surface. Even if you crossed the street without looking, your odds of getting hit by a car would be incredibly low. And of course, in reality, low earth orbit is bigger than the surface of the earth AND we know where every obstacle is located. If humanity ever gets to the point where we decide it's too crowded, most of these constellations are low enough that they'd naturally deorbit in less than a decade.
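That 20,000-cars analogy works out to an enormous amount of room per car. A quick check (using Earth's surface area at sea level, so it slightly understates the room available at orbital altitude):

```python
# How much surface area does each "car" get if Earth held exactly 20,000?
EARTH_SURFACE_KM2 = 510.1e6  # total surface area of Earth, km^2
cars = 20_000

print(f"~{EARTH_SURFACE_KM2 / cars:,.0f} km^2 per car")
```

That's roughly 25,000 km² per object, an area comparable to a small country, for each satellite in a 20,000-strong constellation.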
Yet, SpaceX has already launched 1,730 satellites, 1,630 of which are active, with a planned constellation of 12,000 satellites.
Amazon's Kuiper Systems hasn't even launched yet, and they're going with ULA for their first launch, which, AFAIK, is much more expensive than SpaceX, with only 9 satellites at a time as opposed to 60.
 See https://en.wikipedia.org/wiki/Starlink
 See https://en.wikipedia.org/wiki/Kuiper_Systems
Also, they've submitted authorization for up to 42,000 total starlink satellites: https://spacenews.com/spacex-submits-paperwork-for-30000-mor...
These low earth orbit constellations will naturally experience orbital decay and at the end of their useful life will simply burn up on re-entry. They're explicitly designed to prevent Kessler Syndrome:
See also: https://en.wikipedia.org/wiki/Kessler_syndrome
a) debris damaging other satellites or space stations. There is currently no proper liability framework, and the various monitoring systems are still in development
b) astronomy from earth will see problems.
As long as they are on their orbits there is enough space (haha) and if they don't cause conflict with radio frequencies they also don't cause issues
Starlink operates at 550km altitude:
Most launches are timed to minimize the fuel needed to achieve the desired orbit, and there's only a few seconds of wiggle room for a launch window. So, a particular launch window may be preferred over another depending on the relative probability of a collision. Nobody is explicitly timing their launches or ascent profile with regard to other satellites, other than for space stations and other explicit destinations. On most launches, the rocket just gets the satellites up into roughly the right orbit, and then the satellites use their own propulsion systems to maneuver into precise orbits over several months.
They do this thing called "COLA", Collision On Launch Assessment, an analysis of the launch trajectory to ensure it won't hit known objects.
Just as the development of the combustion engine created the transportation and assembly-line revolution some ~60 years later, the newly created peripheral markets led to massive profits in specific sectors (oil, steel)... Flush with cash, those companies started horizontal and vertical acquisitions, leading to the massive corps of the time (1950s).
We now have the internet. We are now roughly 60 years into the internet revolution cycle.
We have seen how this movie plays out.
10x faster than anything that has ever been available in many regions of the globe that currently have some form of limited access? Yes.
And then of course this Internet being available absolutely everywhere on Earth.
O3b's mPOWER constellation promises 10 Gbps terminals, but those terminals are very likely to go to big enterprise customers, as they will cost several $10k's.
Telesat's Lightspeed constellation is aimed at rural areas of Canada and 5G backhaul. To do the latter, they'll have to deliver at least 1 Gbps. Telesat's been in the game for a long time, so I wouldn't doubt that they'll deliver.
There's two Chinese constellations going up. They'll probably deliver anything and everything they can, but it remains to be seen what the satellite and terminal capacities will be. The west still has an edge here.
Starlink started with residential service, but they'll try to expand into everything they can. One application I expect to see is using Starlink to get data out of Teslas and back to the ML team so they can improve their self-driving code.
Kuiper? I mean, who knows. It's Amazon, so, they'll probably also try to expand into everything they can. I wouldn't be surprised if their delivery trucks will use it as backhaul.
You might notice that none of the above, so far, are actually aimed at connecting the unconnected. That's because the terminals, so far, are too expensive and too power-hungry. The only two initiatives I'm aware of that are actually trying to connect the unconnected are:
OneWeb, which has truly global coverage (including the poles) and has a quite smart design, including working crosslinks and a relatively affordable terminal. Also, in the arctic, they can provide connectivity to militaries, so there's some good cashflow there.
Curvanet, an initiative by Tom Choi. They promise a sub-$100 terminal that can run on 5 W (!) and does not require an ESA but can still deliver up to 50 Mbps. If they can actually pull this off, I will buy a unit and stick it on my roof just for fun. Also, if they succeed, they would be absolutely best-positioned in the market, as no one can (so far) match their cost or power consumption.
Ultimately, we're using radio similar to any 4G or 5G connection. There are some differences, but they probably work against satellite internet more than they work for it. Terrestrial wireless networks like Verizon or T-Mobile can easily split cells to get more capacity (install another tower and split traffic between them). It's harder to do that with satellites. Plus the satellites will be over ocean so much of the time when their capacity can't be used (and other satellites in the network will take that traffic). You do get better line-of-sight which is helpful, but the amount of capacity is somewhat limited. Elon Musk has even said that they'll "most likely" be able to serve 500,000 customers. "More of a challenge when we get into the several million user range."
These are unlikely to be services that offer a faster internet connection for people who are already well-served. Of course, there are a lot of people who aren't well-served.
Over the coming years, 5G home internet is going to become more common. T-Mobile is looking to sign up 7-8M customers which would make them the 4th largest home broadband provider. Verizon has announced that they want to cover 50M households for home internet by the end of 2024 (T-Mobile already offers home internet coverage to 30M households; also, remember that while there are 330M people in the US, there's only around 130M households). While 5G won't reach everywhere that satellite will, it will fill in some of the broadband gaps that we currently have. 5G will also offer an alternative to wired home internet in many areas.
A lot of the interest in satellite broadband is driven by government money. The US government is offering $9.2B to expand access to 5.2M home/businesses in this one instance and there's a lot more money where that came from. The Universal Service Fund in the US shells out billions every year to companies providing rural connectivity. It might not even be that there's much interest in these programs beyond government subsidies. Of course, once the satellites are traveling over other areas, might as well make the service available and get some extra money.
Musk has said that Starlink might require a $30B investment to be viable, but might become cash-flow positive after $5-10B. $5-10B is probably well within the realm of government subsidy in the US. I mean, the government will definitely be spending much more than that subsidizing rural broadband over the coming years, but whether Starlink/SpaceX will get that money remains to be seen.
I wouldn't say this is just about poor regions. There are plenty of far-flung places that I wouldn't classify as "poor" that don't have great internet.
> "The average launch cost for Atlas was about $225 million per lift," Bruno told FLORIDA TODAY this week. "Where Atlas V is at today is almost half of that just by virtue of all the changes we have done in this business."
from , so around $100 million per launch.
The same article cites $50-60 million per Falcon 9 launch.
 says the payload mass to LEO is between 9.5 and 18.5 metric tonnes for the Atlas V (depending on the configuration, mostly how many boosters are attached), and 16.8 tonnes for a reusable Falcon 9.
So I'd say Amazon pays at least twice as much as Starlink, measured by mass in LEO.
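A quick $/kg sketch using the figures quoted above (launch prices are approximate, and Atlas V payload varies heavily by configuration):

```python
# Rough $/kg to LEO from the figures quoted in this thread:
# ~$100M per Atlas V launch (9.5-18.5 t to LEO depending on configuration)
# and ~$55M per Falcon 9 (midpoint of the quoted $50-60M; 16.8 t reusable).
def cost_per_kg(launch_cost_usd, payload_tonnes):
    return launch_cost_usd / (payload_tonnes * 1000)

atlas_best = cost_per_kg(100e6, 18.5)    # fully-boosted Atlas V
atlas_worst = cost_per_kg(100e6, 9.5)    # base Atlas V configuration
falcon = cost_per_kg(55e6, 16.8)

print(f"Atlas V:  ~${atlas_best:,.0f} to ~${atlas_worst:,.0f} per kg")
print(f"Falcon 9: ~${falcon:,.0f} per kg")
```

Depending on configuration, that puts Atlas V at roughly 1.7x to 3.2x Falcon 9's cost per kilogram of payload.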
As far as I could tell, these were serious efforts, but it's important to understand the organizational goals. Facebook benefits from increased connectivity regardless of who operates the network (as long as it's not actively hostile to FB, anyway). So there was a focus on research and publication rather than build-out. If FB can find a viable improvement in networks, and convince networks to use it, that increases connectivity, which is good for FB. Additionally, if network providers use FB-developed technology in their networks, it might improve relations with those networks, which are sometimes strained because of competition in the messaging space.
In my mind, this is the same as Google Fiber. Google Fiber was a terrible business for Google, but as a result of announcing their plans to build out in specific cities, the incumbent networks built out high speed fiber in most (or all) of those cities and maybe a few other places, which increases penetration of high speed connectivity, which is good for Google as a whole, so it's still a win.
Edit: For added context, I worked on this along with several other FB Connectivity projects.
I mean, sure, nobody would have done anything like that without Starlink. But in a way it's like every mobile provider setting up their own antennas.
Maybe if the military had done this first, a similar path would follow (not likely though). As it is, the military is planning its own massive LEO satellite system.
I understand all of that. I guess it's just wishful thinking that this could be one thing where the world gets their shit together.
However, having the option to switch networks increases the efficiency of administration, customer service, etc.
TL;DR: it's not black and white
I don't see how satellite internet falls victim to such egregious market failures.