
Musk explains why SpaceX prefers clusters of small engines - BerislavLopac
https://arstechnica.com/science/2018/02/musks-inspiration-for-27-engines-modern-computer-clusters/
======
mhandley
There are two other reasons which SpaceX has mentioned before:

- When you're landing a rocket, you need to be able to throttle down quite
low. Even a single Merlin 1-D engine, throttled down, is too much thrust to be
able to hover with a nearly empty booster. It's really hard to get stable
combustion at very low throttle settings. Having only one engine out of nine
running for landing makes this much more manageable.
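To put rough numbers on the hover problem (all figures here are approximate public estimates, not official SpaceX data):

```python
# Why a Falcon 9 booster can't hover on one engine: even at minimum throttle,
# thrust exceeds the nearly-empty stage's weight. Approximate public figures;
# treat them as illustrative only.

g = 9.81                      # m/s^2
merlin_max_thrust_n = 845e3   # sea-level Merlin 1D thrust, approx
min_throttle = 0.4            # approximate lower throttle limit
booster_dry_mass_kg = 27e3    # first stage dry mass plus residuals, approx

min_thrust = merlin_max_thrust_n * min_throttle
weight = booster_dry_mass_kg * g

# Ratio > 1 means the stage accelerates upward even at minimum throttle,
# so it must time the burn to reach zero velocity exactly at touchdown.
print(min_thrust / weight)
```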

- There are economies of scale and reliability when you're building large
numbers of something. Ariane 5 only launched 6 times in 2017. It uses one
first stage engine, so they're only building one engine every 2 months. Falcon
9 launched 18 times in 2017, with 9 first stage engines per launch, that's
roughly an engine every 2 days. More continuous construction, better economies
of scale, more repeatable.
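The production-cadence arithmetic above works out roughly like this (2017 figures as quoted, not official manufacturing data):

```python
# Engine production cadence implied by 2017 launch rates.
# Launch counts are the ones quoted above.

ariane5_launches = 6              # 2017
ariane5_first_stage_engines = 1   # one first stage engine per launch

falcon9_launches = 18             # 2017
falcon9_first_stage_engines = 9   # nine Merlins per booster

days_per_year = 365

days_per_engine_ariane = days_per_year / (ariane5_launches * ariane5_first_stage_engines)
days_per_engine_falcon = days_per_year / (falcon9_launches * falcon9_first_stage_engines)

print(f"Ariane 5: one engine every {days_per_engine_ariane:.0f} days")   # ~61 days
print(f"Falcon 9: one engine every {days_per_engine_falcon:.1f} days")   # ~2.3 days
```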

~~~
vannevar
_There are economies of scale and reliability when you're building large
numbers of something._

Yes. One of the biggest predictors of reliability in aerospace systems in
general is time in operation---the longer you've actually run something, the
closer it approaches the upper limit of reliability for that component.
Running 27 small copies gives you 27x the time in operation vs a single big-
ass engine.

The downside is complexity, and in particular unknown failure interactions
which could bring down the entire system in a cascade. As long as many of the
components are identical, though, an improvement in reliability of that
component should translate directly into lower probability of failure
interactions since there are fewer failures, period.

~~~
bunderbunder
How does one quantify the risk of cascading failure?

My initial instinct, informed by computing, is to say that it's easier to
avoid cascading failures in the system that is composed of more, smaller
parts. All other things being equal, in a rocket with 5 engines, if one of
them fails, then each of the remaining ones needs to pick up 1/4 of the slack
to compensate. In a rocket with 9 engines, not only would each of the
remaining engines need to pick up only 1/8 of the slack, but the total size of
the gap would be a little over half as much, too.

(This is obviously ignoring the possibility that the failure in question is
catastrophic.)
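The engine-out arithmetic can be sketched directly (assuming identical engines and that total thrust must be held constant):

```python
# Per-engine "slack" after one engine-out, for clusters of different sizes.
# Assumes identical engines and constant required total thrust.

def engine_out_slack(n_engines):
    lost_fraction = 1 / n_engines                  # share of total thrust lost
    remaining = n_engines - 1
    extra_per_engine = lost_fraction / remaining   # throttle-up each survivor needs
    return lost_fraction, extra_per_engine

for n in (5, 9):
    lost, extra = engine_out_slack(n)
    print(f"{n} engines: lose {lost:.1%} of total thrust, "
          f"each survivor picks up {extra:.1%} of total")
```

With 5 engines the survivors each take 1/4 of the lost thrust; with 9 engines only 1/8, and the lost thrust itself is 1/9 instead of 1/5 of the total.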

~~~
neltnerb
To answer the first question, you use a fault tree analysis to predict
potential failure starting points (like a broken component) and then describe
how those failures will propagate through the system.

[https://en.wikipedia.org/wiki/Fault_tree_analysis](https://en.wikipedia.org/wiki/Fault_tree_analysis)

For an example, say I'm building a system that needs to hold a block of
aluminum at 550C, 99% of the time. Okay, so you add a thermocouple and a
heater to it, easy.

What if the thermocouple fails?

Well, if the thermocouple fails open then the temperature will read infinity
and the heater will shut down and probably produce a non-catastrophic failure.

If the thermocouple fails closed, the temperature will read room temp and the
heater will blast full on until the aluminum melts at 660C, which is a
catastrophic failure.

If the relay in the temperature controller fails, the furnace probably turns
off but theoretically could fail on if the relay switch gets fused.

Okay, so I can see that there is an unlikely but possible chain of events that
could cause a catastrophic failure. So I add a second thermocouple to act as a
safety shutoff using a second redundant relay and controller if it reads a
temperature above 600C.

Total probability then is estimated by either using real world performance
metrics or best-guesses. I'd say the odds of a thermocouple failing in 10
years of operation at 550C are nearly 100%, so this failure will almost
certainly occur.

Or consider an LED array with 10 of them in parallel. If one blows open, the
remaining 9 each get 10% more current so are more likely to fail. So your
first branch of the tree might be that the odds are 10% that an LED will fail
at design current within five years. That may well not qualify as a failure,
especially since the other 9 LEDs are ~10% brighter due to the higher-than-
spec current. But now your probability for the next failure is 20% within five
years. So you do need to define different outcomes, usually by severity of
impact and probability of outcome in event of a predicted possible failure
point.
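The LED cascade above can be written as a chain of conditional probabilities (the 10%/20% figures are the rough guesses from the example, with the doubling-per-failure assumption made explicit, not a physical model):

```python
# Toy cascade model for the parallel LED array described above.
# Each failure pushes more current through the survivors, so the
# per-period failure probability of the next LED is assumed to double.

def prob_cascade(p_first=0.10, k=3):
    """P(at least k sequential failures), doubling the per-period
    failure probability after each failure."""
    p_total = 1.0
    p = p_first
    for _ in range(k):
        p_total *= p
        p = min(1.0, 2 * p)
    return p_total

print(prob_cascade(k=1))  # 0.1
print(prob_cascade(k=2))  # 0.1 * 0.2  = 0.02
print(prob_cascade(k=3))  # 0.1 * 0.2 * 0.4 = 0.008
```

So the fault tree lets you see that the deep-cascade outcomes are rare, and lets you weigh each branch by the severity of its outcome.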

~~~
ethbro
Is there a name for (or keywords to search for) weighing the tradeoffs between
attempting to reduce failure effects in a component itself vs addressing them
at the system level?

E.g. while you _could_ mechanically debounce a button itself, it's usually
easier to engineer the system in such a way that trigger bounce doesn't cause
issues

I'm wondering how the call is made on where the appropriate fix to increase
reliability should be made. Or is it all bespoke / gut feeling?

~~~
neltnerb
So the proper way to think about this is in terms of where you put
abstractions. Much like you'd write a function or library, you can abstract
physical machines by idealizing the component in a system.

I don't think it's really a separate keyword to search for. This is all
probabilities.

The math isn't complex, the hard part is writing down a complete graph of all
the connections between different components, environments, and failure
scenarios. If your valve is made of five parts, and one of them has a 10%
chance of failing per year, then your valve has a 10% chance of failing per
year. If it has two parts that have a 10% chance of failing in a year, then
assuming independent failures the total probability of failure of that
component is 19% in a year.
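That series-failure arithmetic, assuming independent failures:

```python
# Probability that a component with several independent single-point-of-failure
# parts fails within a year: 1 - P(every part survives).

def series_failure_prob(part_probs):
    p_all_survive = 1.0
    for p in part_probs:
        p_all_survive *= (1 - p)
    return 1 - p_all_survive

print(series_failure_prob([0.10]))         # one critical part: 10%
print(series_failure_prob([0.10, 0.10]))   # two critical parts: 1 - 0.9^2 = 19%
```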

These numbers are rarely known with so much precision during initial design.
Consider it akin to estimating the probability of certain kinds of predictable
bugs in a library you're using. How much do you trust that GitHub repository
vs Intel? The most robust thing to do is typically to design around your best
guesses but then do validation testing to refine your guesses.

So if I think a critical valve or seal has a high probability of failure but
have low confidence in what the probability is, I'll take that valve or seal
and literally set up a test case to make sure it performs as expected. Then I
can collect real statistics and go from there. Data >> Guesses, but the
systems are so complex that guesses are where you have to start.

Then you'd basically put the system together, one part at a time, and validate
with each added part that the entire system still behaves as expected. And you
throw in some edge cases to ensure that controls are working properly, like
perhaps in the aluminum heater case you'd simply break a thermocouple yourself
to ensure that the safety system works. But you'd do that in testing, not in
production.
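That "break a thermocouple yourself" step maps naturally onto a test of the control logic. A hypothetical sketch (the names, thresholds, and bang-bang control scheme are all assumptions based on the aluminum-heater example above):

```python
# Sketch of the safety-cutoff logic from the aluminum-heater example,
# plus checks that simulate a thermocouple failing "closed" (reading room
# temperature) while the block overheats. All names and thresholds are
# hypothetical illustrations.

SETPOINT_C = 550
SAFETY_CUTOFF_C = 600

def heater_command(primary_tc_c, safety_tc_c):
    """Return True if the heater should be on."""
    if safety_tc_c >= SAFETY_CUTOFF_C:
        return False                      # redundant safety channel trips
    return primary_tc_c < SETPOINT_C      # normal bang-bang control

# Normal operation: below setpoint, heater on.
assert heater_command(540, 540) is True

# Primary thermocouple fails closed (reads room temp) while the block
# overheats: the redundant safety channel must still shut the heater off.
assert heater_command(25, 610) is False
```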

It's really very analogous to unit tests, unsurprisingly because the need is
similar. I've had vendors ship me special custom thermocouples that they
claimed would run for 10 years at 600C. We threw them in an oven as a trivial
validation test. They caught fire. We didn't use that vendor again. By
analogy, the same goes for a firmware blob you get from a vendor. They sure
claim it does something, but until you've done real testing with it, who knows?

As you pin down the true probabilities of different failures, you just
propagate them through your graph of possible failures to estimate the
probability of different scenarios and focus on the high risk and high
likelihood events. Sometimes the risk is as simple as "the system will be down
for an hour while we replace a failed component". No biggie, maintenance is an
expected cost. Sometimes the risk is a nuclear plant meltdown.

EDIT: The goal of the above is to identify which _causes_ result in critical
failures with high likelihood. Once you've identified them, then you focus
down on addressing the root cause. It's more about identifying where problems
would start if there were a bad scenario, so you know where to spend more
attention in quality control.

If you identify debouncing as a point where failure means your machine
doesn't work as needed, the actual solutions could be software or hardware.
What's the relative probability that each solution will work? How
costly is a failure? How much does it cost to implement? At that point you're
talking cost models with reliability requirements as an input.

~~~
thisacctforreal
These comments are incredibly helpful, thank you for taking your time to write
them.

Please correct me if I'm wrong, but I think this is called Reliability
Engineering / Safety Engineering? Those might be good search terms for people
who are interested.

------
simonh
One thing not mentioned is throttle range. Larger engines can’t throttle down
as low as smaller ones. That’s irrelevant for launches, but matters crucially
for landing.

I know the F9 engines can’t throttle down to a hover, even on a single engine
with the fuel tanks mostly empty, but I’m sure they did all their landings up
until the last few on a single engine that throttled down to its lowest
setting for a reason. Controllability matters, and while max-power suicide
burns are theoretically ideal, in practice landing at full thrust on all
engines would be highly unlikely to ever be workable. If F9 had, say, 3 larger
engines, I doubt landing would be possible at all. Also you need to be able to
build a configuration with an engine in the middle.

~~~
rocqua
The recent falcon heavy launch (and some falcon 9 launches before it) actually
had the boosters land using 3 engines. So going by ratio, it should still be
able to land if it had 3 larger engines.

This 3 engine landing failed for the centre core, which is why it was lost.
Specifically, there wasn't enough 'lighter fluid' to relight all 3 engines
required; only one engine was lit. Thus the booster tried a 3-engine landing
on a single engine, and hit the water at 300mph IIRC.

~~~
simonh
Sure, which is why I said up until the last few, but if you look closely at
the footage of the two boosters landing, they didn't simply do a 3-engine
landing burn.

They powered up the centre engine, then lit two engines either side of it, but
then turned those off a few seconds before landing and still actually landed
on a single engine for the last ten seconds or so. It’s hard to be sure
exactly because the telephoto footage of the booster catches the start of the
burns but misses the side engines shutting down. But when it pans back on to
the engines, it’s clear only one of them is still lit (with one other flaring
off some unspent fuel). That’s a very precise and tuneable thrust curve you
wouldn’t be able to do with one bigger engine.

~~~
rocqua
Ah I see.

Interesting about the final part of the landing occurring with a single engine
even with a 3 engine landing. Makes me wonder why they don't do something like
9 engines for the braking burn and then switch to a single engine for the
landing part. It might just be that this is their final plan, but 3 is easier
to test than 9.

~~~
simonh
It’s possible, they might gradually start using the three engines for longer,
then maybe even use more engines. I doubt the last part though, the burns
aren’t for very long already and I think 3 engines probably gives plenty of
kick. There’s also the issue of fuel flow dynamics to the engines, but only
SpaceX will have any idea about that.

------
Robotbeat
We can mark this inaugural Falcon Heavy launch as the point when people
stopped laughing at BFR.

The latest version of BFR is only about twice as much thrust as the final FH
variant (which will launch in a few months with slight thrust upgrades) and
around the same number of engines. Recovery, even with such a complicated
bunch of stages, seems to work pretty well, validating SpaceX's knowledge of
reentry and reuse.

Launching crew within about a year from now is when people will stop laughing
about SpaceX sending people to Mars.
[https://www.youtube.com/watch?v=0qo78R_yYFA](https://www.youtube.com/watch?v=0qo78R_yYFA)

------
aurizon
A large portion of the efficiency of small engines comes from the reduction of
"hoop stress". This is the linear tension in the wall of a pressure
vessel(rocket engine) which varies as the square of the diameter. Twice the
diameter = 4 times the pressure (hoop stress) the walls must take. A rocket is
a special case of a balloon - with an expansion nozzle attached to couple the
impedance of the combustion chamber to the outside environment. This means
that 9 smaller rocket engines will weigh less than 1 large rocket engine of
the same thrust. On top of this is the small engine basic throttling
capability with the overlay of cutting out engines to add greater capability.

~~~
theothermkn
> This is the linear tension in the wall of a pressure vessel(rocket engine)
> which varies as the square of the diameter. Twice the diameter = 4 times the
> pressure (hoop stress) the walls must take.

This is incorrect. Cutting the cylinder in half lengthwise and taking a unit
length, we see that the cross section of the walls of the chamber (unit length
x 2 x wall thickness) resists the pressure force from the contained fluid (2 x
radius x unit length x pressure). Since the pressure and unit length are
constant under scaling, this resulting force grows linearly with the radius.
Thus, wall thickness has to grow proportionally to the radius, and there are
no mass penalties from the chamber wall from either a smaller or larger
engine. A similar result holds for spheres. Indeed, the particular result is
independent of the geometry of the pressure vessel.

The larger immediate result is that pressure vessels scale just fine with
size, up or down, and there's no benefit at either end of the length scale, in
relation to contained volume.

What you do get for a smaller engine is more manageable combustion
instabilities. Look at the 5-fold symmetry (odd, rather than even) of the
injector baffles in the SSMEs. This is to stop a tangential oscillation mode,
an important failure mode of larger engines. What you lose for smaller engines
is that you have more of the fluid "close" to the walls. The boundary layers
don't grow linearly, so you get more heat transfer at the boundary in
proportion to the contained fluid. (Radiant transfer in the engines
complicates the picture, but convective transfer is definitely proportionately
worse for smaller engines.)
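The pressure-vessel scaling claim above can be checked numerically with the thin-wall hoop stress relation σ = P·r/t (the pressure, stress, and density values below are illustrative, not real engine figures):

```python
# Thin-wall hoop stress: sigma = P * r / t, so the wall thickness needed to
# contain a fixed pressure at a fixed allowable stress grows linearly with
# radius -- and wall mass per unit of contained volume is scale-invariant.

import math

P = 10e6             # chamber pressure, Pa (illustrative)
sigma_allow = 400e6  # allowable wall stress, Pa (illustrative)
rho_wall = 8000      # wall material density, kg/m^3 (illustrative)

def wall_mass_per_volume(r, length):
    t = P * r / sigma_allow                            # required wall thickness
    wall_mass = rho_wall * 2 * math.pi * r * t * length
    contained_volume = math.pi * r**2 * length
    return wall_mass / contained_volume

small = wall_mass_per_volume(r=0.2, length=1.0)
big = wall_mass_per_volume(r=0.4, length=2.0)
print(small, big)  # identical: no wall-mass penalty at either scale
```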

~~~
Robotbeat
It's true that pressure vessels in principle don't care about scale when it
concerns mass per unit volume (at a constant pressure). But a combustion
chamber's thrust is (to zeroth order) proportional to cross sectional area,
not volume.

The combustion chamber only needs to be a certain length (L star) to achieve
efficient combustion. Any longer and you're just adding mass with no benefit.
But there are practical limits to the shape of the combustion chamber. You can't
have it too squat or it loses structural efficiency. Thus, above a certain
size, you're better off from a mass efficiency standpoint with having a bunch
of smaller combustion chambers than one big huge one.

And this is true even more for the nozzle. You can use a much shorter, and
thus lighter, nozzle if you have a smaller engine. For the same expansion
ratio, therefore, clustering a bunch of smaller engines is more mass efficient
than a single big engine.

(But if you go REALLY small, you have minimum gauge issues and you lose
thermal and combustion efficiency.)

~~~
theothermkn
> You can't have it too squat or it loses structural efficiency.

I'm not sure what you mean. Over "squatness," if we mean l-star to cross-
sectional area, the 'structural efficiency' remains constant, in that we've
contained (square) more fluid for (thickness x perimeter = square) more wall
material, and done so for (square) more thrust.

> You can use a much shorter, and thus lighter, nozzle if you have a smaller
> engine.

A nozzle, again, can be modeled as a pressure vessel. (Neglecting shear stress
in the first analysis.) So, if we concede that pressure vessel mass ratios are
invariant under scale, then so are exhaust nozzles. What we are really worried
about is the amount of fluid that is in the boundary layer for heat transfer
and shear reasons, and this gets slightly worse with engine size. The area-
Mach relations govern the size of the exit bell, so the exit surface for a
larger engine grows linearly with the throat area, and we're back to the
pressure vessel scaling laws.

> You can use a much shorter, and thus lighter, nozzle if you have a smaller
> engine.

The point being that, if your nozzles are 9 times smaller, they're only 9
times lighter.

It's difficult to talk to everyday SpaceX enthusiasts, who seem to have most
of their information second-hand. When, for example, do I bring up frozen flow
vs. equilibrium flow in this discussion? When do I point out the combustion
instability limitations of larger engines? When do I point out the exorbitant
research costs associated with taming those instabilities? When will any of
that ever dissuade a layperson from their enthusiasm for the square-cube law?

~~~
Robotbeat
Pressure vessel equation means structural efficiency is invariant based on
_volume_ but we're talking (to zeroth order) _area_ , thus smaller pressure
vessels (nozzles, chambers, etc) are more structurally efficient (at the large
size limit...).

To expound: Mass ∝ Volume, Volume ∝ length^3.

Thrust ∝ cross sectional area, area ∝ length^2.

Thus, Mass ∝ Thrust^(3/2), or:

Thrust to weight ratio ∝ 1/sqrt(thrust) for a single engine.
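A quick numeric check of those proportionalities (toy units, with a hypothetical mass constant):

```python
# Zeroth-order engine scaling: thrust ~ cross-sectional area (length^2),
# mass ~ volume (length^3), so mass = k * thrust**1.5.
# Toy units and an illustrative constant, not real engine data.

def twr(thrust, k_mass=1.0):
    """Thrust-to-weight under the toy scaling mass = k * thrust**1.5."""
    return thrust / (k_mass * thrust**1.5)   # = 1 / (k * sqrt(thrust))

one_big = twr(9.0)      # one engine delivering 9 units of thrust
nine_small = twr(1.0)   # each of nine engines delivering 1 unit

print(one_big, nine_small)  # the small engines have 3x the thrust-to-weight
```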

> The point being that, if your nozzles are 9 times smaller, they're only 9
> times lighter.

This is incorrect, for the reason I shared earlier. Thrust is proportional to
cross section, not volume. Thus scaling laws (at some point) favor smaller
engines. (Note that this is assuming we're already big enough that we have
full combustion and are not experiencing minimum gauge issues, etc.)

My knowledge doesn't come from being a SpaceX enthusiast. In addition to
having a physics degree, years ago I was part of a very early-stage startup
(never got off the ground) at one point looking at launch vehicles, and so I
read Sutton's Rocket Propulsion Elements (a very nice introductory text, I
highly recommend) among others and did a bunch of scaling analysis. (SpaceX
would've been a competitor.)

------
SpuriousSignals
It all depends on how an engine fails. A failure to provide thrust is
survivable with more engines. But a vulnerability that causes a fuel line to
go boom, taking out the whole rocket, is made more likely by having more
engines.
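Both effects can be put into one toy model (the per-engine probabilities below are made up purely for illustration):

```python
# Toy model: n engines, each independently suffering a benign shutdown with
# probability p_benign or a catastrophic failure with probability p_boom.
# The mission tolerates at most one benign shutdown and zero catastrophic
# failures. All probabilities are invented for illustration.

from math import comb

def mission_success(n, p_benign=0.01, p_boom=0.001):
    p_ok = 1 - p_benign - p_boom   # engine runs clean
    # Either all engines run clean, or exactly one shuts down benignly:
    return p_ok**n + comb(n, 1) * p_benign * p_ok**(n - 1)

# More engines make benign shutdowns survivable, but every extra engine is
# another chance at the non-survivable failure mode:
print(mission_success(9), mission_success(27))
```

Under these made-up numbers the 27-engine cluster does worse, which is exactly why the cluster approach depends on containing a catastrophic engine failure so that it degrades into the benign case.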

~~~
jballanc
IIRC, the engines in the Falcon 9 are pretty well separated from each other,
such that even a catastrophic failure of one engine is survivable by the rest.
Indeed, that seems to be what happened the one time an engine did fail (albeit
on a very early version of the Falcon 9 with the "square" engine layout):
[https://www.youtube.com/watch?v=dvTIh96otDw](https://www.youtube.com/watch?v=dvTIh96otDw)

~~~
sebazzz
It wasn't an explosion. From the first comment on that video:

Engine didn't explode, it detected an anomaly and shut itself down. Here's
spacex's statement concerning the event-

"Approximately one minute and 19 seconds into last night's launch, the Falcon
9 rocket detected an anomaly on one first stage engine. Initial data suggests
that one of the rocket's nine Merlin engines, Engine 1, lost pressure suddenly
and an engine shutdown command was issued. We know the engine did not explode,
because we continued to receive data from it. Panels designed to relieve
pressure within the engine bay were ejected to protect the stage and other
engines."

~~~
gizmo385
> Panels designed to relieve pressure within the engine bay were ejected to
> protect the stage and other engines.

Does anyone have details on this? I'm just curious how these work because it
sounds fascinating.

------
Klathmon
It makes sense from a reliability perspective to have more smaller semi-
redundant parts, especially since it sounds like they have the whole control
system down, "scaling up" from managing and controlling 3 engines probably
isn't all that different from managing 9, and eventually 31. (obviously this
is still rocket science, and nothing is "easy")

I'm curious if there are other benefits. I'd imagine that manufacturing the
smaller engines gets easier and faster and more reliable as they make more of
them.

Also, I wonder if this plays into their reusability? If there is a defect
found in one of the 9 F9 engines, theoretically you could replace just that
one and refly the rest. I have no idea if this is even possible, but it seems
like it could be.

~~~
hinkley
But there has been talk in the past, including from Musk if memory serves,
about the limits of reliability.

We aren’t talking about hard drives here. You throw a couple more in and if
one fails you just turn it off. Hard drives don’t explode and destroy the hard
drive next to them or cause the enclosure to fail.

More rockets is more things trying to explode in only the same direction.

The other responder talked about the operational excellence that can’t be
achieved if the numbers get too small. That seems more likely.

~~~
hinkley
Just an addendum, while Musk in fact compares the rockets to computers, I have
found that what the management takes away from conversations with engineers is
often not the most important part.

But typically everyone on the team has opinions about success or failure of
the project and we all discount something important. Managers doubly so. They
rarely credit the spotters who keep them from falling when their process has
huge holes in it.

------
avmich
> The Russian Soyuz rocket has five engines, each of which has six thrust
> chambers.

RD-107 on the four side blocks has 6 chambers, RD-108 on the central block has
8 chambers.

------
ksk
> For computers, Musk said, using large numbers of small computers ends up
> being a more efficient, smarter, and faster approach than using a few
> larger, more powerful computers.

I always figured, in theory, a super-super-powerful single threaded single
processor would out compete the multi-threaded multi processor design on
efficiency, because of the hardware and software inefficiencies in inter
processor hardware connections and communication. In practice, I suppose there
is always a physical limit on the total capacity of low latency addressable
ram and storage you can manage to shove into an architecture.
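That tradeoff is usually framed with Amdahl's law; a sketch with assumed serial fractions:

```python
# Amdahl's law: speedup from n processors when a fraction s of the work is
# inherently serial. The serial fractions below are illustrative only.

def amdahl_speedup(n_procs, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_procs)

for s in (0.05, 0.5):
    print(f"serial fraction {s}: "
          f"64 cores -> {amdahl_speedup(64, s):.1f}x, "
          f"limit -> {1/s:.0f}x")
```

So the answer depends on how much of the workload is serial: a single huge processor wins only when the serial fraction dominates.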

~~~
bluedino
Depends on the task - it turns out rocket thrust is quite parallelizable

------
allthenews
There's something I don't understand about space travel.

We made it to the moon in 1969, on the first computer to use ICs[1]. Since
then we've seen monumental advances in computer science, material science,
manufacturing, rocketry, and just about any other component of space flight.

Why 50 years later is it still such a relatively difficult task to launch a
rocket into space? Why is it still so expensive and failure prone, when we
were able to launch so many vessels with substantially less capable
technology? It seems like space travel simply has not scaled with the rest of
our technology, but I imagine I'm missing something.

Perhaps the value we place on human life relative to acceptable risk has
increased, such that we effectively spend more time and resources designing
away failures, and refuse to launch with the level of risk that was acceptable
decades ago?

1.
[https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Desig...](https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Design)

~~~
adventured
Apollo cost ~$120-$150 billion in today's dollar.

Falcon Heavy cost $500m-$1b to get right.

Given some time, Musk could probably make it so a trip to the Moon costs $300
million, maybe less.

We've (_they've_) accomplished an extraordinary cost improvement.

It's still such a monumental task, because the laws of physics have not
changed. Building a huge rocket, putting people in it, and firing it at the
Moon, is not the hard part per se (not killing them in the process, and
bringing them back safely, is). We could have done that all over again at any
time if we desired to spend the money. The next level of difficulty, is
turning it into a truly routine task, and building something on the Moon.
That's a dramatic leap up from only exploring the surface of the Moon. We've
avoided doing that, not because we can't, but because it's a cost benefit
equation, and the benefit has not been considered worth doing at the cost,
even as the cost gradually declined. Now that cost benefit equation has
improved dramatically enough to favor it being worth doing.

Simply put, as a society we're not willing to spend $500 billion or $1
trillion to build a Moon base. But we might be willing to spend $50 or $100
billion over time to do it. We're not willing to spend $10 or $20 billion for
a trip to the Moon, but we are probably willing to spend $300 million or even
a billion.

~~~
simonh
That’s not quite a fair comparison. The $500m to $1bn was just for the heavy
variant, it doesn’t include the cost of developing the F9 itself. Musk has
previously estimated the cost of just developing the landing capability by
itself at about $1bn. But still, yes it’s a lot less than Saturn V.

------
craig1f
TL;DR: you get more fault tolerance with a large number of smaller engines.

~~~
nine_k
Interestingly, the Soviet N1 Moon-bound rocket [1] used a similar setup, and
it was plagued by reliability problems: making many smaller parts work
reliably _at the same time_ is harder for obvious reasons.

Either engine reliability went up significantly, or software control
(impossible in the early 1960s) made it possible to operate a bunch of less-
than-ideal engines successfully.

[1]:
[https://en.wikipedia.org/wiki/N1_(rocket)](https://en.wikipedia.org/wiki/N1_\(rocket\))

~~~
orbital-decay
Actually, the N1 failures had little to do with the complexity itself, and the
engines were reliable enough for the time.

Due to a lack of funding (the Soviet lunar program wasn't given priority until
well into the Moon race), they used the old methodology of testing in actual
flight, with no static fires and only a bare minimum of ground testing. Saturn
V, on the other hand, relied heavily on ground testing before launch. That's
why Saturn V mostly worked and N1 failed. Energia worked perfectly much later,
because it was developed with the proper amount of ground testing.

~~~
Florin_Andrei
The Soviets won the first round of the space race (until the mid-60s) because
of multiple factors, but mainly because of the laser-focus at the highest
levels to push the technology as far as it could go. It helped a lot that they
had an engineering genius heading the program (Sergey Korolev), and the top
politician during that time (Nikita Khrushchev) was a forward-thinking
progressive (relatively speaking - please keep it in context) who was a big
fan of space.

Korolev (pronounce: Karalyov) died in the mid-60s, just before the Moon
program had started to gear up for the big time. Khrushchev was also ousted
during the mid-60s by retrograde bureaucrats.

With both the political and the technical leadership in turmoil, the program
fell on very hard times. They didn't get enough funds, could not get proper
testing done, and pushed a lot of QA to the live launches. Predictably, the
results were "spectacular" - but in a bad way.

A little before that time America finally got its resolve together ("We choose
to go to the Moon in this decade and do the other things, not because they are
easy, but because they are hard...") and started pouring massive amounts of
financial and engineering efforts into its space program. Again predictably,
the results were spectacular - but in a good way.

If your leadership is indifferent and you don't have the stuff you need, you
lose. If you work hard and put all your energies into it, you win. And that
applied to both sides, each in its turn. Who knew?

Good book on this topic (and related):

[https://www.amazon.com/Korolev-Masterminded-Soviet-Drive-America/dp/0471327212/](https://www.amazon.com/Korolev-Masterminded-Soviet-Drive-America/dp/0471327212/)

---

I wish Korolev was around these days so he could see Elon Musk's multi-engine
design. I think he would like it. In a (somewhat vague) sense, I see the
Falcon Heavy as late vindication for the tremendous efforts, against all odds,
of the engineers who busted their asses trying to shoot the N1 into the Moon.
The idea was sound, it was just not yet the right time for it.

~~~
cuspycode
Korolev's contribution is even more impressive when you consider that he was
imprisoned in the Soviet Gulag for many years (for political reasons),
suffering under living conditions which probably shortened his life.

[https://en.wikipedia.org/wiki/Sergei_Korolev](https://en.wikipedia.org/wiki/Sergei_Korolev)

~~~
Florin_Andrei
Seriously, I can't comprehend the guy.

He starts studying liquid fuel rockets in the '30s, and does some amazing work
probably trailing only the top German engineers in this field.

He's denounced by some envious low-lifer who wanted his job and is arrested
(along with Valentin Glushko, another great rocket scientist) during the
stalinist Great Purge at the end of the '30s, when a simple anonymous note was
enough to get someone disappeared. They torture him, sentence him to death -
but then his sentence is commuted to hard labor in a gold mine, where the
poisonous environment and poor conditions meant the average life expectancy
was barely over one year. He loses all his teeth to scurvy.

Meanwhile his friends back in Moscow are lobbying Lavrenti Beria (the NKVD
boss) to release him - they succeed and he's placed in the "easy prison" where
a bunch of intellectuals were doing essentially white collar slave labor (with
pencil on paper, sure, but no choice in the nature of the work) for the Soviet
government. He's released towards the end of WW2.

Then Stalin figures he needs to catch up to the Germans in rocketry, so
Korolev is rehabilitated, made colonel of the Red Army, and finally starts
working again on his rocket engines. They copy a bunch of German designs
first, use some German engineers (who were prisoners) to get them started.
Then continue on their own.

He develops the first Soviet ICBMs, but that was just what paid the bills. He
keeps pushing for a real space program. Launches Sputnik 1 into space. Leads
the Soviet space program until the mid-60s.

When he died, he was working on plans for manned missions to Mars and beyond.

I mean, what motivates a person to keep forging ahead against such adversity?
Death sentence, hard labor in the poison mine, years of imprisonment and
disgrace - and then he builds and launches the world's first ever satellite.
To say nothing of the fact that, like Elon Musk, he was a man of many talents:
great engineer, very effective leader, and a good politician and lobbyist.
It's amazing.

------
XR0CSWV3h3kZWg
> using large numbers of small computers ends up being a more efficient,
> smarter, and faster approach than using a few larger, more powerful
> computers

Huh? Using large numbers of small computers is definitely smarter and more
cost effective. I'm surprised by the claim that it's more efficient and
faster. The moment what you are doing needs to hit the network you eat some
real costs in terms of efficiency and latency.

------
jlebrech
And soon it'll be clusters of boosters; there's space to fit another 4 in a
hexagonal pattern. You could lift a mini hexagonal Mars base fully assembled
that way, land it, and land the next one quite close.

~~~
marktangotango
In the post Falcon Heavy press conference, he specifically mentioned scaling
up to a Super Heavy with two additional side boosters, four in all. Sounded
like they had designed for that, from the way he just threw it out there.

Seems odd to me, since FH is an interim vehicle until BFR comes on line in
5-10 years.

~~~
gizmo385
> Seems odd to me, since FH is an interim vehicle until BFR comes on line in
> 5-10 years.

I thought they were targeting launches much sooner than 5-10 years out for the
BFR?

~~~
Maybestring
They are, but that doesn't mean you should expect them to hit that target.

Musk doesn't seem to set timelines with the expectation that they will be
attained. They are always best case, but development never is.

~~~
jlebrech
yep, they still need to make money.

------
RachelF
Perhaps another reason is that they already have these small engines?

Why not use what you have already built and debugged, rather than build a
bigger, more expensive engine for only marginal gains?

------
grondilu
Would it then make sense to plan for even more smaller engines? Like while BFR
is meant to have 31 Raptor engines, could it have 60 Merlin engines instead or
something?

~~~
ygra
One goal of BFR is Mars, from which you can only get back if you can produce
fuel on the ground. Which kinda rules out RP-1 as propellant. Hence the need
for a methane engine.

Unrelated to the number of engines here, obviously; just another point to
consider. And if they end up designing a new engine they could just as well
apply what they have learned in the meantime. Merlin is a very conservative
design, favouring simplicity and cost over thrust. If you plan for re-using
your rocket a thousand times, cost of an engine isn't that much of an issue
anymore and you can prioritise other aspects.

~~~
philipwhiuk
> from which you can only get back if you can produce fuel on the ground

Is this actually realistic?

------
pensivemood
Can we have less spacex here please? Thank you.

~~~
always_good
Be a big boy and click the downvote/hide buttons instead of hoping the
universe adapts to you.

------
DanCarvajal
Hey it worked for the N1.....wait.....

