I think the Voyager probes teach us a lot about what's wrong with corporate America. Imagine devices that don't have built-in obsolescence, or licensing that expires, or that aren't built by committee where requirements just keep getting shipped around from one department to another. Imagine if it weren't all proprietary, and if it had public and complete documentation.
Think of all the problems we have instead: Boeing airplanes that need to be rebooted if their computers are up for too long, an Ariane 5 blowing up because using the old thing should be "good enough", Microsoft Windows on ATMs and vending devices that literally can't not have pop-ups. It's like we've ceded control of our ability to do things to "methods" that corporations insist upon, even though they've been proven worse.
Heck - if businesses had their way, would the Internet be run on Novell, with millions of Novell admins all around the world constantly needing to fuss with things just to keep it running?
It's nice to see when science takes priority over everything else, and the hardware reflects that.
> Imagine devices that don't have built-in obsolescence, or licensing that expires, [...]
You can buy long lasting stuff just fine, if you are willing to pay. You can get eternal licenses (or an outright sale of rights), if you are willing to pay.
It's all about trade-offs, and many people have priorities that are different from 'last as long as possible'. E.g. many computers from the 1980s are still perfectly usable today, but who would want to use them? I'm sure you can probably still use your old rugged Nokia phone today, especially since they had easily replaceable batteries. But who would want to use a 25 year old dumb phone?
There are some enthusiasts who do use these old devices. And it's great that they can do so! Never say corporations don't deliver!
But by and large people have different priorities than wanting their obsolete tech to live forever.
Yup and in the UK a lot of our card payment boxes for car parks will stop working when 3G is shut down, and will be replaced with an app. People without phones need not apply!
One can see how quickly people could drop through the bottom of society once you can't afford to pay your phone or car bills. None of that technology comes cheap, at least not if you hit hard times for a while, and you will become a second class citizen.
Really? Oh man ... I was in the UK this summer and had to install a different app to pay for parking (or car charging) in every city I went to, and some of them wouldn't accept a non-UK licence plate :( High demand for developers keeps my salary high, but it's all such a waste.
Dumb phones are still being sold that are 2G only.
Not just that, but “landlines” are still being offered that are really a 2G phone restricted to work only in the vicinity of a home that's not (yet?) covered by actual landlines.
It's also used by emergency services.
It's considered a technology of last resort, and the service provider that wins the concession to provide it, with 100% coverage of a given area, probably finds it useful that it can meet the conditions with such tech.
Denmark does the same. 3G has been shut down, or mostly at least. 2G is kept running, with no plans to shut it down in the future.
Things that use 2G:
- Electricity meters
- Alarm systems
- Cooling and heating systems
- Rat traps (?)
- And an absolute crap-ton of early "IoT" devices.
And it's considered a backup network, with better coverage and range than 4G/5G. It also makes sense as the country is actively removing the old landlines; the first cables are already being pulled out of the ground.
Yeah, there was a lot of debate when they started relying on cell networks for alarm systems (mostly for the elderly) as a way to not have to maintain the copper.
I don't know that it's necessarily that, simply that SMS became very heavily used _and still is_ for basic functionality, essential services, billing, notifications etc. in Europe, Africa, Asia - unlike the US. Compare to e.g. the US requirement for universal lifeline (landline) phone service.
That's not a good example of 'planned obsolescence'. 2G wasn't 'planned' to be obsolete, it was superseded by better technologies. Also because 2G is a *service* and not a product, it was a business decision to sunset the *service* since there is a maintenance cost to 2G *service*, as well as an opportunity cost in not being able to reserve 2G radio frequencies for 4G and 5G networks.
Yes, and this perfectly illustrates my point that putting in the extra effort to make your product ultra-sturdy is neither a good engineering nor a good business decision, if it means the product outlasts the surrounding ecosystem it needs to function.
You're being overly pedantic. But OK, 2G is a bunch of infrastructure that is then rented out creating a cell service. Everything else holds true in that there is a maintenance cost and an opportunity cost.
The point is that it has been around so long that so many things depend on it. That makes sunsetting it a public policy decision and not just a business decision.
Yes, that's stuff from 25 years ago. It still works in the sense that the hardware was built to last.
It's not the fault of the phone's hardware that the world has moved on. But this underscores my point: if the hardware lasts longer than the ecosystem around it, that's useless, and the company should probably have saved customers a few pennies by going with something less sturdy.
> You can buy long lasting stuff just fine, if you are willing to pay. You can get eternal licenses (or an outright sale of rights), if you are willing to pay.
Where do you buy long lasting stuff? This is a serious question, I'm looking for new appliances and I am willing to pay more, within reason.
Let's say I'd pay double what an appliance with similar technical characteristics would normally cost, which I want to go towards higher quality materials, craftsmanship, and quality assurance. I expect such an appliance to work and be economically repairable for at least 20 years.
As far as I can tell, this is nearly impossible. Many brands that used to produce higher quality products have downgraded the quality of their materials and craftsmanship in order to juice quarterly profits for investors. Their reputation seems to lag the quality of their products by about 10 years.
> Where do you buy long lasting stuff? This is a serious question, I'm looking for new appliances and I am willing to pay more, within reason.
Miele is good for household appliances like washing machines and dishwashers. Kärcher is good for vacuum cleaners. There's lots of other German (and Swiss and Japanese etc) brands like that.
But in any case, I didn't (and can't) promise that you can get that kind of quality by merely paying double.
Btw, almost all cars these days last a lot longer than they used to, and with fewer repairs. Quality has gone up across the board. Planes also fall out of the sky less often. (And that includes Boeing; despite their recent troubles, their track record would have been seen as unachievably good about 20 years ago.)
Miele was my first thought as well because they were historically regarded as an expensive, but high quality brand. Now they have a Trustpilot rating of 1.6.
Our Miele appliances work just fine after using them for 7 years and after several house moves. I have never heard of this Trustpilot before, interesting. Seems to be an American website?
I have worked with some people in the defence industry who got just that from Adobe and a few similar vendors. They had to negotiate with Adobe and sign NDAs both ways, and they paid through the nose for it. But you can do it.
"Be the defence forces of (checks profile) one of the five nation states" is not a negotiating position I deem attainable for just about anyone that isn't one of the five or the EU as a union.
Fair enough, but how life-crucial can an old copy of Adobe be?... I'm assuming a project like the Voyager mission relies on something a bit more bespoke than a copy of Adobe Creative Whatever. I really hope the defence forces' core mission doesn't depend on Adobe Creative Cloud.
I guess the main reason here isn't "keeping an old version", but having a version that doesn't require an internet connection to be activated and doesn't send any data to Adobe.
But having an older version can be useful too because some features from previous releases may be missing in current ones, so that's a way to ensure access to the old files. A couple of years ago all the Pantone colours used in Photoshop just became black after an update because Adobe stopped licensing Pantone stuff.
Right, "willing to pay" is not enough. You also need to know which products will actually last a long time and which products are just the regular crap sold at a high markup. Often this is impossible to tell if you are not an expert in the field.
Their data is held hostage, they lack the knowledge and the skill to migrate services, and the several companies they could buy from are often owned by one huge owner, which prevents any meaningful competition. Consumers have a choice only in the placebo sense of which colour to choose in the company store with the company scrip.
> You can buy long lasting stuff just fine, if you are willing to pay. You can get eternal licenses (or an outright sale of rights), if you are willing to pay.
Would that 'twere so simple. Most previously reliable premium manufacturers have jumped on the enshittification bandwagon. You can get better design and a more exclusive branding if you are willing to pay more, but where exactly would you turn if you are looking, for example, for a dish washer that will just work for the next ten years? Preferably one that doesn't come with cloud integration. There are still a very small number of ultra-premium brands that really care about their customers, but apart from being way too expensive for most people, they are increasingly hard to find. One of the side-effects of so many companies trading their reputations for "shareholder value" is that trust is eroded for everyone, including those that still deserve it.
> [...] but where exactly would you turn if you are looking, for example, for a dish washer that will just work for the next ten years?
Miele works well for us.
I understand your complaint. I'm also sure it's really hard to get high quality buggy whips these days without paying through the nose, now that everyone has switched to these newfangled horseless carriages.
That's what I would have said as well, until recently. Last year we had to replace ours, and it's been an absolute nightmare. It cost us endless hours of wrangling with technicians and customer service to have it installed and configured properly (and it has already been repaired once). ¯\_(ツ)_/¯
While I agree that "corporate America" is problematic, the lessons from Voyager are not really applicable. For starters, it was designed to take advantage of something that predictably happens every couple of hundred years. Very few corporations last that long. Then there is the element of predictability. The solar system pretty much works like clockwork. You can predict simple things, like the orbits of major bodies, centuries in advance with a great degree of accuracy.
Compare that to the tech industry. IBM itself can trace its origins back to a data processing company about 140 years ago. They probably had the most natural evolution into a modern tech company because of those origins. Yet no one at the original mechanical tabulator company could have predicted what IBM would be producing 70 years later, never mind a century later. Nintendo is roughly the same age and is more-or-less doing the same sort of thing (gaming), but it has gone from low-tech to high-tech, so their path was not predictable. Yet those are outliers. Almost anything that old is an institution of some form (e.g. government or education). Perhaps the closest thing we had to predictable was Moore's Law, but that wasn't a pure law in the physics sense. It was reliable enough to build an industry upon for many decades, but it could have failed much earlier than it did due to hurdles in technological development.
The other thing about Voyager is that its intrinsic value grew with time. While we could develop a mission that could go as far as it did faster, and have better instruments to conduct research, it would take decades to build the probe and get there. It also makes sense to support it to get what little data it can return to us because it would help us develop better missions when we do decide to go that route.
There are plenty of 100 year old companies doing basically what they did 100 years ago (Exxon, Coke, Hershey, Wells Fargo, etc.). They’re just not tech companies.
I was thinking about companies approaching 200 years old since I knew my point would be more difficult to make if I said 100 years (and because they had a launch window that comes up roughly once every 200 years, though I checked that out and it is 175 years). That said, you're right about my basically overlooking companies that weren't based in technology.
Exxon is a cute example though. At one point, they owned Zilog.
* Exxon: Massive amounts of technology have been ingested by the oil majors in the last 50 years. It is basically a high tech business at this point. Also, if we only focus on the physical work, fracking didn't exist 50 years ago, and now it is a major part of the industry. Again: Huge changes.
* Coke/Hershey: Think about how much automation exists in their manufacturing. Do any humans touch a "bottle" of Coca-Cola before it is packaged? I doubt it in highly developed countries.
* Wells Fargo: the same as Exxon. Commercial banking has a fraction the number of employees _in proportion to their assets under management (AUM)_ compared to 50 years ago. How/Why? Automation / computers!
Their point wasn’t that innovation doesn’t exist. Obviously things change. Still, Exxon, Coke and Wells Fargo are all still fundamentally in the same business as they were before. That’s not the case for other companies. IBM became big selling punch-card tabulating systems. Have you seen a punch card recently?
> IBM was not in the punch card machine making business. It was in the data processing business, and still is.
IBM absolutely was in the punch card machine making business, in that they were absolutely selling punch card machines and not data processing consulting. And no, they are not in the data processing business today. They are clearly a consulting company first, with some software development bolted on.
They actually pivoted twice from a mostly hardware company to a mostly software one and once again to a mostly service based one.
Wells Fargo is still doing financial services but banking has very different meaning today. Computer is no longer a job title after all.
So sure those brands are 100 years old, but the underlying companies have been restructured several times with major mergers and spin-offs etc. ExxonMobil even did the baby bell thing of splitting off from a single entity only to merge again and again in the coming decades.
I would say that IBM's business has changed less than Wells Fargo's. IBM still supplies number-crunching devices and services to large corporations and government agencies. The technicalities have changed, but the core business is the same. But running a bank under a gold standard is very different from running a bank under a fiat money and fractional reserve regime.
I was thinking about how we can predict the location of planets with a high degree of accuracy. They are pretty close to an ideal physical system and are governed by a single physical law. The dynamics are so well understood that we have been able to predict the existence of, and find, other planets based on small deviations. We have even been able to refine our understanding of the structure of planets, even of gravity itself, based upon small deviations.
That said, it certainly includes our ability to use gravity assists when planning a mission.
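To make the predictability point concrete: Kepler's third law alone, evaluated in a couple of lines, gives the outer planets' periods, and from those you can roughly estimate how often a two-planet alignment recurs. A toy Python sketch (the real Grand Tour window depends on all four outer planets and the trajectory details, hence the 175-year figure quoted elsewhere in the thread):

    # Kepler's third law for orbits around the Sun, in convenient units:
    # period in years = (semi-major axis in AU) ** 1.5
    a_uranus, a_neptune = 19.19, 30.07   # semi-major axes, AU
    t_uranus = a_uranus ** 1.5           # ~84 years
    t_neptune = a_neptune ** 1.5         # ~165 years

    # Synodic period: how often Uranus and Neptune return to the same
    # relative geometry -- a crude stand-in for the Grand Tour cadence.
    synodic = 1 / (1 / t_uranus - 1 / t_neptune)
    print(t_uranus, t_neptune, synodic)  # ~84 y, ~165 y, repeats ~every 171 y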
Especially for a first-time-in-all-of-humanity type of mission, half a century ago, which yielded brand new data on faraway objects we'd never had before, and considering it's still going and reporting data, it's arguably a bargain basement price for such a thing.
Not to mention that it has a chunk of highly radioactive plutonium that acts as a battery / power source. That whole thing is nuts that it got built, "shipped", and still works ~50 years later, approximately 1.21 jiga-kilometers away from Earth.
Everything is designed to last between a few years and a decade, with the expectation that there's no point engineering for longer than that because something better will replace it by then. This feels bad but it's generally correct. Practically every aspect of technology is improving so fast there's little point trying to engineer things that last, because a stronger, cheaper, lighter, thinner thing will be available long before the thing has expired.
Planned obsolescence isn't only a thing with fast moving technology either. Engineers design bridges with a finite lifespan in mind as well, and it's not because they think there'll be a much better bridge in X years. It's just that the cost to develop a thing grows roughly exponentially with its expected lifespan.
What's wrong with the Boeing case of needing to be rebooted?
It doesn't seem much worse than memory leaks in missile guidance tracking systems that exceed flight time. We have finite resources, if the effort to correct is minimal what's the harm?
> We have finite resources, if the effort to correct is minimal what's the harm?
That mentality at Boeing has, literally, cost many lives.
The harm is that nobody knows why there's a memory leak requiring a reboot (or if it's even a memory leak). What happens when that very same issue is combined with a rare case and causes the death of hundreds of people?
"Have you tried to turn it off and on again" may be fine for a $20 Internet-of-insecure-and-shitty-Thing bought on alibaba. Not so much when lives are at play.
> This condition is caused by a software counter internal to the GCUs that will overflow after 248 days of continuous power. We are issuing this AD to prevent loss of all AC electrical power, which could result in loss of control of the airplane.
> A simple guess suggests the problem is a signed 32-bit overflow, as 2^31 is roughly the number of seconds in 248 days multiplied by 100, i.e. a counter in hundredths of a second.
This seems sane given that the planes don't operate constantly for anywhere near that long (I'm not in aerospace so please correct me, but it seems a reboot could occur with refueling without issue).
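To make the arithmetic concrete, here's a toy model of such a counter in Python (the actual GCU code isn't public, so the names and structure below are pure guesswork; only the numbers come from the AD):

    # A signed 32-bit counter of hundredths of a second. All names are
    # invented; only the arithmetic matches the directive's numbers.
    INT32_MAX = 2**31 - 1
    TICKS_PER_DAY = 24 * 60 * 60 * 100        # centiseconds per day

    print(INT32_MAX / TICKS_PER_DAY)          # ~248.55 days until overflow

    def tick(counter):
        # Advance by one centisecond, wrapping like a signed int32 would.
        counter = (counter + 1) & 0xFFFFFFFF
        return counter - 2**32 if counter > INT32_MAX else counter

    # Past 248.55 days the counter jumps to a large negative number, and
    # anything comparing timestamps against it misbehaves.
    print(tick(INT32_MAX))                    # -2147483648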
Usually planes are turned around too fast to be waiting for them to fully reboot every time they fuel. Typically they would prefer to keep them running for weeks at a time to minimize any issues or delays with extra steps when the clock is ticking and customers are waiting to board and fly
Commercial aircraft need continual software updates to operate. They are, in a sense, living, breathing machines. Things like navigation and terrain databases are updated inside of 30 days.
Adding a scheduled reboot is one more item on a checklist that was already being run through.
It's counterintuitive, but performing a reboot as a scheduled maintenance item is far more risk averse than going in and touching code that has been otherwise thoroughly tested and signed off by regulatory authorities.
The chances of introducing a new bug when attempting to repair the former presents additional risk to what amounts to a convenience issue.
Mainframes in the late 80s got so good nobody was rebooting them. Then in the 90s someone's mainframe had the power-backup generators fail in a power outage and the system went down (a once in 500 year event, but with more than 500 mainframes around the world it was statistically bound to happen). The system didn't boot correctly, and it took months to figure out how to start all the services it was running, because the person who started them had left without adding them to the startup configs. Now everyone reboots a couple times a year, so that when things don't restart correctly the person who knows about them still remembers something about it.
> it took months to figure out all how to start all the services it was running
Having had to migrate a 12 year old dying server this weekend, yeah, I was 24/7 strongly cursing the idiot who didn't document anything[0]. On the plus side I did get to update a bunch of stuff to more modern practices.
[0] You will not be surprised to learn that idiot was me.[1]
[1] My other servers are much better - anything that hasn't yet been properly service'd has its own `RUNME.sh` which runs whatever it is in the correct way.
Also in case of emergency, eg after a power loss or whatever, you might have to do a reboot anyway. So you might as well make sure that this code path is well exercised.
I'd rather deal with a Groundhog Day of the system being in its first day of operation for the millionth time, than deal for the first time with the system being in its millionth day of operation.
> Usually planes are turned around too fast to be waiting for them to fully reboot every time they fuel.
To be clear, this affected the Boeing 787, a plane usually focused on long-haul routes between medium-sized cities. It is incredibly rare to see a long-haul flight turned around immediately. Normally, they have max two flights per day, and for longer routes, just one flight per day. There was plenty of time to reboot. I don't think anyone was ever in danger.
Also, I am starting to grow tired of "anything Boeing does is bad" on HN in the last 6-12 months. The Boeing 787 was a huge hit, both technically and commercially. (I would say the same for the Airbus A350.) I certainly never worked on anything as important or cool in my career. The endless booing from the HN peanut gallery adds little new and/or useful information to the discussion. Yes, I expect to be downvoted for this last paragraph.
TIL. I had thought a computer reboot was snappy compared to filling those fuel tanks, that's so counterintuitive to me. That does make it more of an issue then.
"Let's build something that we KNOW will catastrophically fail, because we deliberately ignore to take account limited resource availability of that system."
For a critical systems, that's just lazy and unacceptable.
> Microsoft Windows on ATMs and vending devices that literally can't not have pop-ups.
I know of several ATMs that still run OS/2 2.1 or OS/2 Warp. They're a slow dying breed that is being replaced by Unix (Linux or similar), but they still exist. Chances are, if you're in Europe, and the ATM you're using is one of the "slow screen refresh rate" ones, it's still running OS/2.
I have a hard time feeling complainy about stuff like this because if engineers are doing their jobs properly, they’re implementing the defined requirements. Requirements are guided by business decisions on what matters and what doesn’t matter for a project.
So ultimately this means that if the argument is that the business is making the wrong decisions, then that means there’s an opportunity for someone to profit by proving them wrong.
I love Voyager but it also cost something like a billion dollars. So it’s a bit unsurprising that it is such a resilient system.
> I have a hard time feeling complainy about stuff like this because if engineers are doing their jobs properly, they’re implementing the defined requirements. Requirements are guided by business decisions on what matters and what doesn’t matter for a project.
Well, it's a bit of back and forth between business and engineering.
> So ultimately this means that if the argument is that the business is making the wrong decisions, then that means there’s an opportunity for someone to profit by proving them wrong.
Generally yes, but sometimes it's hard because of double-sided network effects. The classic example being expensive but prestigious scientific journals:
All the scientists and funding agencies would benefit from moving to cheaper journals, but an individual scientist will try to publish their best work in the most prestigious journal they can get.
(And often the complainers mix in some good old paternalism, too: 'oh, those customers are ill-informed and the companies are exploiting them by tricking them into buying inferior products and getting trapped.' or some story like that.)
> I love Voyager but it also cost something like a billion dollars. So it’s a bit unsurprising that it is such a resilient system.
The money was necessary, but not sufficient. Compare the great performance of the Voyagers with the disaster that was the Space Shuttle program or the ongoing farce that is the International Space Station.
The profit argument is not great for everything humans need; not everything can be driven by profit motives. Look where that's gotten us in medical care in America, social insurance, etc. Profit driven enterprise is efficient, but it needs balance that doesn't seem to be there today in order to limit the extremes.
I agree. Voyager happened because of what you say. But then we cannot compare apples to orangutans by pointing out how Voyager rocked but all these for-profit endeavours sucked.
> Imagine devices that don't have built-in obsolescence, or licensing that expires, [...]
We have that, sort of, in open source by now. Basically the base implementation is free, but for more advanced extras you pay extra. Then the company providing the extras collapses, and the extras become open source in one way or another.
I mean, the Voyager probes are completely proprietary. They're completely custom, built for a singular purpose.
Building long lasting hardware makes a lot of sense when you expect to use that technology for a long time. I’m sure there was a time when people thought we’d be using steam power forever.
It makes less sense if the rate of technological innovations makes hardware or software obsolete every 5-10 years.
Principal-agent problem. Look at the final outcomes. Businesses exist to make money, even if that means the most malicious ways (e.g. planned obsolescence); science projects are not there to make money.
> if businesses had their way, would the Internet be run on Novell, with millions of Novell admins all around the world constantly needing to fuss with things just to keep it running?
No, I doubt it. The businesses that make all the hardware and software that powers the internet are perfectly fine with it being an open protocol.
Expanding a tiny bit on your point, from where I am sitting, it can all be summed up with greed, plain and simple... and it extends further to Capitalism as a whole, not just corporate America. I work for a US-based manufacturing company that has customers around the world, as the guy who fixes the things the engineers build when they inevitably break. I have encountered so many design flaws that could have been done far, far better but were not, largely due to cost, that I have come to just expect it. This is admittedly sad.
Some other commenter mentioned we can have all these great long-lasting things if we "are willing to pay." We, the consumer, might be, but the manufacturer and distributor who make the product not only want to sell you something once, they want to sell it to you repeatedly while maximizing their profit margins. This would not be so bad if a good chunk of those yearly record profits reached the folks on the work floor actually making the product (thus allowing them to buy more expensive things), but instead the money gets sucked up and stays up, while costs get passed on to the consumer, who likely is not making enough to afford the longer-lasting version. Which, I guess, is the point of Capitalism, in a nutshell.
I'm not saying you're wrong, but it's also possible that technology, whether created by public agencies or private companies, is simply more complicated now and, because of that, more likely to fail over time.
But if you think logically from first principles, it makes no sense to have both an A320 and a 737.
All this competition and "corporate greed", where we literally have two airplanes that are functionally identical rebuilt from the ground up, is a huge, massive waste of human capital, enabled by the government through the monopoly of copyright and patents.
While I think creators should get paid, I wonder if there is a better way to manage copyright than what we do.
I think the government granting this extreme of a monopoly on copyright is one of the weirdest things we tolerate.
There has to be a way where we can statutorily authorize copying at x% fee or something.
That way we don’t build two copies of the same thing!
The A320 and 737 are functionally similar only in the sense of "a flying machine moving you through the air at 900 kilometers per hour". Both Airbus and Boeing are exploring a space of different design and engineering concepts, where there is no clear winner.
> Imagine devices that don't have built-in obsolescence
There is no such thing as "planned obsolescence". The trade-off is price vs quality. You can buy or source a hammer that will last you a lifetime, but it will cost 10x the standard rate for a hammer ... most people opt for the $20 option vs the $200 option, knowing full well the $20 option may not last as long.
>Boeing airplanes that need to be rebooted if their computers are up for too long, an Ariane 5 blowing up because using the old thing should be "good enough", Microsoft Windows on ATMs and vending devices that literally can't not have pop-ups.
Couple things here:
1) None of those are examples of 'planned obsolescence'.
2) You really do not want to have every software application built the same way you build software for NASA. It would make software development incredibly expensive and slow.
3) I think everyone agrees that Boeing has major quality problems when compared with Airbus. But again, not an example of 'planned obsolescence', and it is not necessary for Boeing to build software like NASA builds software for the space shuttle, in order to produce quality airplanes (with quality software).
>if businesses had their way, would the Internet be run on Novell,
Huh? The software business is fully private. Most companies are not running on Novell because other businesses 'had their way' and competed.
Yes. As a left-libertarian, I criticize corporations and their profit motive all the time. I see the concentration of power being married to the inherent incentives of increasing share value for the shareholder class. The only way I see we can reform the system — and fix many of the problems — is to gradually replace shares with utility tokens, i.e. the accounting system inside an ecosystem that producers and consumers of a marketplace use, and gradually buy out the parasitic class of “investors” who just want to extract rents and enshittify everything. It is a bit like the efforts Europe undertook at peacefully buying out slaves from masters or serfs from feudal lords. Just get out gradually from an economic system that encouraged exploitation.
We have just been inculcated with this exploitation as a “natural” part of our capitalist system for so long, since childhood for most of us, that many have a knee-jerk reaction to shut down this line of criticism and not hear the simple solution. Many on HN have built a startup when they’re young, been part of the VC industry, akin to a music prodigy or young actor or model being “discovered” by a “talent scout”, and sold a dream which only a few achieve. For every 1 person who makes it big, 1000 who do the same thing fail. But they keep licking the boots of the system hoping they’ll be among the lucky ones, so they better say nice things. If they see words like “utility tokens” they immediately think “grift”, when in fact the system they’re supporting has grift and exploitation built in. Having utility token holders buy out shareholders also may be an element in furthering SDG goals and “stakeholder capitalism”, and saving the planet. But hey, as soon as words are uttered which may trigger pushback, the system starts to protect itself, in the form of downvotes or strawman attacks etc.
Yeah, planned obsolescence is one feature of the corporate control and profit system. As is surveillance capitalism, or pollution, or pushing people to work long hours, or the gig economy, or neglecting their kids, or having people in echo chambers be angry at each other online because anger and misunderstanding lead to more “engagement”, etc. All the ways that represent the gradual “enshittification” that Cory Doctorow coined are not an accident. These are all negative externalities, which all trace back to one main factor: the profit motive of shareholders as a core feature of the system, forever. The rents aren’t going to extract themselves.
I haven’t yet had a chance to work on aerospace (or beyond) equipment, though I do work on medical grade things, and some of the parts are aerospace certified, though they haven’t been put through the wringer. In particular, I work with high voltage/amperage, high liquid pressure, and high temperature systems. Some of it has been running for close to 25-30 years with very few issues. Only recently have I had to open a power supply that had degraded since it was first put into service (over 20 years ago). I’ve gotten into a habit of identifying parts that, through experience, I know will fail or can fail, and replacing them with ones that are all but guaranteed to outlive me. I want the person, if anyone, who comes after to see work of a higher standard than what came out of the factory. No machine can continue working forever, but one can get much closer to having fewer failure points. As an aside, getting here was a struggle of obtaining all sorts of things I shouldn’t have access to, because the manufacturer started locking access down heavily. Story for another time.
> I’ve gotten into a habit of identifying parts that, through experience, I know will fail or can fail, and replacing them with ones that are all but guaranteed to outlive me.
Depending upon your NDA/security level, can you share a specific example? Maybe you see a steel/aluminum chain between gears that you know will wear out in X years, but could be replaced with something that is tungsten/titanium that will last for 5X years.
I'm reading Pale Blue Dot to my kids at night currently so this is really awesome. (The Voyager missions are described in excellent detail in ways that I never appreciated fully before.)
It blows my mind that these are machines from the 8-track era. And they have fallbacks and redundancies that were completely ahead of their time.
NASA loves to downplay expectations in case something goes wrong, but people really underappreciate just how overengineered these things are, which makes sense when a bad mission can be political suicide for their future funding.
Single digit dollars sounds more like the Apollo program. I think it's been a long time since the entire NASA budget was more than a penny per tax dollar.
NASA says the Voyager mission cost 865 million dollars from the start in 1972 to the Neptune encounter in 1989, and currently runs at 7 million dollars per year.
Cool! I do the same thing with books like Asimov’s Earth and Space (science) or Lois Lowry’s Number the Stars (fictional history). What other books can you recommend?
Yeah, it's really amazing that these come from a time when normal people had never even heard of bits and bytes. And now they're the farthest man-made objects and their data link still works.
One feels a terrible disappointment that Sagan didn't live to see the future mission projects he talks about in the book get finished; most of them succeeded, IIRC. By happenstance, the 2024 Solar System BBC series mostly uses animation, but it has some real photos and videos to document a lot of what happened since the book was published.
The last time I read Cosmos I hit the part about the Cassini-Huygens mission where he wondered what we might find under the atmosphere of Titan, and was able to immediately just find out.
I imagine a lot of people who work on space missions do not outlive their work - which feels sad but also ... inspiring?
I think Carl Sagan would overall just kind of be deeply sad about how it is turning out.
“In the demon-haunted world that we inhabit by virtue of being human, this may be all that stands between us and the enveloping darkness. I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive.”
Whenever such an announcement is made, I keep asking myself something along the lines: "Just how much stuff did they put on board that thing, that there is always some way of using something differently or something different, to get back a working connection???" Incredible engineering.
Working in the government space sector, generally speaking, you're given a mission lifetime which is seen as a minimum: your payload/spacecraft/instrument needs to last at least this long. Some things end up being limited by the physics of the system (see the RTG on the Voyager probes for a common example, or CEMs on plasma spectroscopy instruments for a more obscure example). However, in pretty much all cases where the physics doesn't put a hard cap on the thing you're building, nobody wants the thing they're building to be the first, or critical, point of failure of the whole system.
jval43's quote puts it pretty well. It's not just that you're designing something to last, you're designing it not to fail. It also tends to help when you have a bunch of really smart folks from a number of disciplines working on the same problem.
With that said, you need to walk a fine line between the level of redundancy and fallbacks you put in place and the overall SWaP (Size, Weight and Power) of the system. I can go somewhat deeper into SWaP issues if you would like.
I remember learning about Hubble telescope losing its gyroscopes in one axis so it couldn’t balance and was effectively dead until someone clever figured out you could use the pressure from the sun to act as a third axis of motion to keep the craft stable. Sometimes it’s not just the stuff onboard but the ingenuity of the engineers that saves the day
I didn’t realize the Voyagers relied on a once-in-175-years planetary alignment. What a lucky break that technology had advanced to the point we could make use of it.
It wasn't just a lucky break, it was the result of furious efforts by scientists to lobby years in advance to take advantage of the alignment, as well as engineers who repurposed two Mariner probes to save money, as well as canny NASA bureaucrats who sneakily downplayed the possibility of a full Grand Tour in order to reduce estimated costs (with the full intent to underpromise and overdeliver, which Voyager 2 successfully did by "coincidentally" being on course to visit Neptune and Uranus after completing its primary mission to Jupiter and Saturn (to this day, Voyager 2 remains the only visitor to Uranus and Neptune)).
Still a huge amount of luck involved. If that planetary alignment had happened even 10 years earlier NASA probably wouldn't have had the capability to do anything with it. If it had been sometime recently, say after the Challenger disaster, I doubt it would have got funding...
That literally isn’t true. A number of satellites had to be delayed or redesigned due to the Challenger disaster. Those satellites were planned to be launched on the Space Shuttle, and the Challenger disaster grounded the Space Shuttle for an extended period. And even when the Shuttle returned to flight, certain planned capabilities were cancelled as a result of Challenger, including West Coast launches (from Vandenberg) and the Shuttle-Centaur (an additional rocket stage designed to be carried in the Shuttle payload bay). Due to these changes, some orbits became inaccessible to the Space Shuttle, forcing those missions on to alternative launch vehicles. And switching the launch vehicle for a satellite isn’t simple, it requires redoing engineering analysis (every launch vehicle causes somewhat different stresses on the payload, so in the worst case the payload may actually require design changes to handle a different launch vehicle) and integration testing (the payload and launch vehicle need to talk to each other, and you don’t want to get that wrong, or else you can get failures like the first Boeing Starliner test flight.)
Even today, launch vehicles are shared between both crewed and uncrewed missions (Dragon uses Falcon 9, Starliner uses Atlas V), so a launch vehicle grounding due to a failure in one mission type absolutely can impact the other type too
> There’s no way that a disaster on a manned mission would affect a satellite launch. They share nothing in common.
It would definitely affect satellites, since the shuttle was a major satellite launch system. But even a deep space probe project like Voyager would be at risk, even though it doesn't seem directly related, because by affecting the political prestige of NASA, it affects everything that has to get funded at NASA; the politics of legislation are not bounded by rational direct impacts.
Normally I assume that privatization is always bad, but can you honestly look at Artemis, Orion, and the fucking Lunar Gateway and think, oh yes more of that please? Commercial space produced almost all successes of the past decade. (That said nationalizing Boeing might be an improvement, but then I’d worry about Lockheed buying America with America’s money.)
You misunderstood: The problem isn't private enterprise, it's organized crime.
It's the conversion of public property for pennies on the dollar. It's theft and graft, except it will be legitimized by the government and supreme court. And you'll never get it back, because that would be considered "nationalization" of private property.
It was Putin's playbook after the (orchestrated?) collapse of the Soviet Union, and they're apparently reusing it for the US as well.
Oh, did I mention that Putin was democratically elected?
If they used the same booster, say something like a Falcon 9, that was the cause of the disaster, anything using that booster would be put on hold until it was cleared. It just so happens the examples used very different systems, but that's not what you were referring to with your self-assured declarative, which just doesn't have as much weight as you want it to in modern launch systems.
Not just any foreword, a pretty big endorsement. You can borrow the book from the internet archive to read the full foreword, but here's the last bit:
And in the end, it turns out that something will happen in 1982 that just may—
No, no, read it for yourself. Read it carefully and you'll find it far more fascinating than the tale of any millionaire found stabbed in any library, locked or otherwise. And far more important, too, especially if you live in California.
ISAAC ASIMOV
17 April 1974
The scenario where a gravity assist doesn't work is if turning sharply enough would require your minimum planetcentric approach distance to be less than the radius of the planet - you'd crash into it instead.
This is why Voyager 2 couldn't also do Pluto - it would have needed to change course by roughly 90º at Neptune, which would have required going closer to the center of Neptune than Neptune's own radius.
The most unusual gravity-assist alignment that we did was for Pioneer 11 going from Jupiter to Saturn. The encounters were separated by roughly 120º of heliocentric longitude. Pioneer 11 used Jupiter to bend its path "up" out of the ecliptic plane and encountered Saturn on the way back "down". Nowadays we wouldn't bother doing that (we'd wait for a more direct launch window instead), but the purpose of this was to get preliminary Jupiter and Saturn encounters done in time before Voyager's launch window for the grand tour alignment.
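For anyone who wants the numbers behind that turn limit: for a hyperbolic flyby, the bend angle delta satisfies sin(delta/2) = 1/e, with eccentricity e = 1 + r_p * v_inf^2 / mu, so the lowest safe periapsis (the planet's radius) caps how sharply you can turn. A quick Python sketch, using an illustrative approach speed rather than Voyager 2's actual v_inf:

    import math

    # Hyperbolic flyby: e = 1 + r_p * v_inf^2 / mu, and the total bend
    # angle obeys sin(delta / 2) = 1 / e. The bend is maximized by flying
    # as low as possible, i.e. r_p = planet radius (any lower, you crash).
    mu_neptune = 6.836e15   # GM of Neptune, m^3/s^2
    r_neptune = 2.46e7      # radius, m (lowest safe periapsis)
    v_inf = 15e3            # m/s -- illustrative approach speed only

    e = 1 + r_neptune * v_inf**2 / mu_neptune
    delta = 2 * math.degrees(math.asin(1 / e))
    print(delta)            # ~67 degrees, well short of the ~90 needed for Pluto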
Could Voyager have reduced velocity, glanced off Neptune, waited for the return path on a narrow elliptical orbit, and then boosted to effectively make the 90° turn to Pluto at that time? Was that impossible given its Neptune approach trajectory, or would glancing off and waiting have been more fuel?
And, why didn't this vortical model that includes the forward velocity of the sun make a difference for Voyager's orbital trajectory and current position relative to earth? https://news.ycombinator.com/item?id=42159195 :
Breaking that down: Voyager itself couldn't have reduced velocity, it had nowhere near enough reaction mass to do that. Hypothetically a spacecraft could but you might be talking about orders of magnitude more reaction mass. (Which means multiples more of the fuel to launch and accelerate that mass itself, which could quickly escalate beyond any chemical rocket capabilities.)
It also likely wasn't possible to get to Pluto on some future Pluto orbital pass. The limiting factor is likely that Voyager's incoming trajectory to Neptune was already too far beyond solar escape velocity to get into that narrow elliptical orbit you propose. (You'd have to slingshot so close to Neptune's center that you'd hit the planet instead.)
Designing from the beginning to come in slower to Neptune and adjust to encounter Pluto on some future Pluto orbital pass was probably possible, but yeah you might be talking about time scales of Pluto's entire orbit or even multiples of that. (We do similar things for inner solar system missions, like several encounters with Venus separated by multiple Venus-years, but that's on the order of single-digit years and not hundreds.)
The common answer to a lot of these outlandish slingshot questions is usually, yes it's eventually possible by orbital mechanics, but it gets so complicated and lengthy that you may as well just build another separate spacecraft instead. We talk about Voyager's grand tour alignment because it's captivating, but realistically if that hadn't happened we would have just done separate Jupiter-Uranus and Jupiter-Neptune missions instead.
The sun's motion relative to the galaxy doesn't matter for any of this - nothing else in the galaxy is remotely close enough to affect anything, the nearest star is still over 1000x Voyager's distance.
Is it possible to focus on a reflection on the Voyager spacecraft; or how aren't communications ever affected by lack of line of sight?
So there was no way to flip around and counter-thrust due to the velocity by that point in Voyager's trajectory (without a gravitationally-assisted slowdown or waiting for planetary orbits to align the same or in a feasible way)
> How can I intuitively understand gravity assists?:
> Aim closer to the planet for a lower pass for a greater change in direction (and velocity from an external frame of reference), farther from the planet for a smaller change; aim ahead of the planet for a slower resulting external velocity, behind for a higher velocity: gravity assist guide (image from this KSP tutorial)
- Vindication! KSP is what I probably would have used to answer questions like this; though KSP2 doesn't work in Proton-GE on Steam on Linux and they've since disbanded / adjourned the KSP2 team fwiu.
- > various SPICE lessons provided by the NAIF translated to use python code examples
TY for the explanation.
Hopefully cost effective solar sails are feasible.
FWIU SQR Superfluid Quantum Relativity doesn't matter for satellites at Earth-Sun Lagrangian points either; but, at Lagrangian points, don't they have to use thrust to rotate to account for the gravitational 'wind' due to additional local masses in the (probably vortical) n-body attractor system?
The alignment greatly reduced the amount of energy/propellant/weight to launch the Voyager missions with all four of the outer planets on the menu. Alignments that allow you to reach a smaller set of the outer planets with the same budget are more common: years or decades.
(Edit) Another thought, since you mentioned time—numerical computing power and the math required to exercise it have advanced greatly since the 1970s, and it's likely that some of the trajectories and maneuvers feasible (again with the same fuel budget) today, even if they took 100+ years to complete, weren't even calculable back then.
The alignment was important because it allowed visiting four planets with one spacecraft. So you only had to launch one spacecraft. (We launched two anyway.)
If you are willing to launch four spacecraft to visit four planets, the alignment restrictions are much relaxed. You do need to be careful about your launch window to get a nice boost, but it's measured in years between windows, not so much centuries.
> The alignment was important because it allowed visiting four planets with one spacecraft. So you only had to launch one spacecraft. (We launched two anyway.)
IIRC the second probe was mainly intended as a backup to the first one, but visiting Titan and visiting Uranus/Neptune were mutually exclusive, and visiting Titan was higher priority, so if the first probe succeeded the backup could be (and was) sent on the four planet track.
No, there's no pingponging. There's only so much momentum you can exchange with a planet, because there's only so close a flyby that you can make before it stops being a flyby and becomes a fly-into.
Ignoring the feasibility of the physics, imagine trying to calculate all the resulting trajectories to point antennas at to talk to the probe. With computers and equipment of the 1970s.
“We didn’t design them to last 30 years or 40 years, we designed them not to fail,” John Casani, Voyager project manager from 1975 to 1977, says in a NASA statement.
> this is HN, a bunch of computer programmers think they know more than <figure of authority>
And they are correct
At least on the programming part, bearing in mind the huge advances in computers since the Voyagers were built.
Any professional computer programmer here knows more in their field than a programmer from 70's. A Voyager built today with similar resources would be much better, 100% guaranteed.
Voyager 1 was a fantastic machine made by a terrific team, but let's not pretend that the state of the art hasn't changed. Anybody with computer skills polished towards building a machine in 1977 would be basically unemployable for building a machine in 2024.
You might be surprised to read the papers written by those early software developers. They were writing in the late 1960's and early 1970's about fundamental issues most developers of today don't fully grasp.
You might think, for example "waterfall, ewwww", but if you go back and re-read the first paper on waterfall development, it makes clear that waterfall development is in fact an anti-pattern. How many here are stuck on "modern" teams that think waterfall is a good idea, and yet those clueless old folks had figured out it was a dead end 50+ years ago.
One of the most critical aspects of managing software development is Conway's law. For distributed scalable systems, if you aren't thinking about Amdahl's law you're just a hacker and not actually an engineer. Check the dates on those papers.
They built incredibly sophisticated systems using incredibly primitive building blocks. If you honestly think they couldn't ramp up on PHP or Python or k8s, you need to spend a bit more time around some actual badasses.
I seriously doubt that the average dev today knows more than the average dev in the 70s. In fact, I would happily wager money on it, if it were somehow provable one way or the other.
There are so many abstractions today that you don’t _have_ to know how computers work. They did.
Re: state of the art, you don’t need or want state of the art for building things that go into space, or underwater (closest analogue I can think of that I have experience in [operating, not coding for]). You want them to be rock-steady, with as close to zero bugs as possible, and to never, ever surprise you.
The average dev knows far far less today. Just how deeply people had to know hardware back then is a massive difference.
And if we look at the average dev today, most code is in the framework, or a node package, or composer package, and the average dev gets by via stack overflow or AI.
There are certainly devs that actually understand coding, but the average dev barely does. Most devs don't understand the underlying OS, the hardware, or externals such as databases at all. It's all abstracted away.
And coders back then had almost nothing abstracted away.
> Any professional computer programmer here knows more in their field than a programmer from 70’s
Programming for regimes with different constraints is…very different. In a very real sense, “their field” for most modern programmers isn’t even the same field as developing software for 1970s deep space probes. Plus, the issue wasn’t even about software but about end-to-end building and launching space probes (that was both what the quote was about and the field of the Voyager project manager).
But thanks for demonstrating the kind of hubris that the post you were responding to described.
But this is about the overall engineering of Voyager, not just the programming. Also, I'm skeptical how much better modern hardware will fare in deep space conditions, considering the use of finer and more fragile electronics. Since you're talking about general people instead of specialists, also consider how the median software developer seem to focus less on correctness, reliability, and optimization, compared to the standards of spacecraft engineering.
1) It was sophisticated indeed, top of its game, but let's not lie to ourselves. We have better engineers and better programmers today. Just to put things in context: in that time we moved from "Pong" to "World of Warcraft".
And it's not just software. Reducing an entire computer room to the palm of your hand, but with better storage, graphics and computing power, is basically black magic. I can't imagine what Voyager could do with a current Nvidia chip.
2) Just because people are not trained in some specific domain does not mean that they couldn't be motivated to do it. I bet the people who built the Voyagers weren't born with the instructions engraved in their brains. And if they could learn, other people can too.
If I've learned something after lurking on HN for a lot of years, it is to never, ever, underestimate this community. This place still keeps surprising me in good ways.
> Also, I'm skeptical how much better modern hardware will fare in deep space conditions, considering the use of finer and more fragile electronics.
Since then, we have had massive advances in manufacturing. Maybe COTS parts aren't as usable in space as they were back then, but we can now easily manufacture something more resilient or, as a fallback, simply use those old parts. Also, basically all current electronics are designed to be, and are, used on earth ~100% of the time. Over-engineering them for use in space is just a waste.
> skeptical how much better modern hardware will fare in deep space conditions
Why? Deep space radiation is only 4x the dosage compared to LEO. Starlink satellites use modern tech and they've spent >10,000 collective years in space since we launched more than 2 of them. The whole "modern electronics are more fragile" issue is overblown. The CPUs are tiny and easy to shield. The MMICs use huge features that you can see with a normal microscope.
Where did you get the number of 4x from? It seems different than what I understand, but I don't have any sources handy.
"Modern electronics are more fragile" issue really is not overblown. One of my peers have tested different types of non volatile memory in LEO and the TLC NAND sample gets totally wiped by ionizing radiation within the first week. CPUs, while being mostly logic and less susceptible to low energy events, can still be easily destroyed especially if radiation causes latchup. MMICs and discrete devices have huge features in comparison yes, but the junctions still degrade notably under radiation.
From my opinion as someone working on LEO satellite hardware, it's easy to have opinions about stuff like correctness and reliability because it is not naturally intuitive and usually requires observation of many samples over a long time that it doesn't affect most engineers. However, I've definitely seen a strong correlation between the effort spent on correctness and reliability, and the success of missions.
What is Starlink’s failure rate? Genuinely asking; I don’t know. My point is that if it’s > 0, that’s a problem for something designed to go billions of miles away.
The longevity of Voyager has only a little to do with software engineering, and latest software engineering has even less to do with building spacecraft like Voyager.
If anything I expect modern software engineering to have significantly higher risks of failure than programmers of ye olde days.
Why? The first step today would be installing Discord[1], the second step would be updating code live 420 no scope[2], and the third step would be figuring out how many JavaScript abstractions are desired.
I think the pushback is b/c this falls on common stereotypes about modern software being bloated and unnecessarily fragile. Those are justified stereotypes often enough, but spacecraft software is such a different animal that it just doesn't really apply very often.
That's an argument from authority, even if in this case a strong one. What you seem to be overlooking is that, from this one quote alone, we cannot conclude whether that is really all the methodology that went into building Voyager 1 and 2. So while it is a witty quote, it doesn't actually tell us much without the additional statement that we need look no further for other methods that were applied.
>Nothing lasts forever, and if you don't figure out when it's going to fail, it's going to be sooner rather than later.
You might be surprised about the reality of the situation.
I had a professor who worked on the design and fabrication of the Apollo Guidance Computers, which likely was a somewhat similar process to the one being discussed here. It's been quite a few years since his lecture on it, but the process went something like this:
They started with an analysis of the predicted lifetime/reliability of every chip type/component available to potentially include in the design.
The design was constrained to only use components with the top x% of predicted life.
Then they surveyed each manufacturer of each of those component types to find the manufacturer with the highest lifetime components for each of the highest lifetime component types.
Then they surveyed the manufacturing batches of that manufacturer, to identify the batches with the highest lifetimes from that manufacturer.
Then they selected components from the highest-lifetime batches of the highest-lifetime manufacturers of the highest-lifetime component types.
Using those components, they assembled a series of guidance computers, in batches.
They tested those batches, pushing units from each batch to failure.
They then selected the highest quality manufacturing batch as the production units.
When he gave this talk, decades after the Apollo era, NASA had been continuing to run lifetime failure analyses on other units from the production batch, to try to understand the ultimate failure rate for theoretical purposes.
Several decades after the Apollo program ended, they had still never seen any failure events in these systems, and shortly before the time of his lecture, I believe NASA had finally shut off the failure testing of these systems, as they were so remote from then "modern" technology (this was decades ago, hence the quotes around "modern").
This is what happens when you have the best minds committed to designing systems that don't fail. Yes, the systems probably will fail before the heat death of the universe. No, we don't have any idea when that failure time will be. Yes, it's likely to be a very long time in the future.
(And, of course, this is typed from memory about a lecture decades ago on events happening decades before that. This being HN, someone here probably worked on those systems, in which case hopefully they can add color and fix any defects in the narrative above).
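As a sketch of the winnowing procedure described above - all names, data structures, and numbers here are mine for illustration, not NASA's actual records - the process is essentially a repeated "keep the best" filter at each level (component type, then manufacturer, then batch):

    # Hypothetical sketch of the selection process described above.
    from dataclasses import dataclass

    @dataclass
    class Batch:
        component: str              # e.g. "NOR-gate IC" (invented example)
        manufacturer: str
        batch_id: str
        predicted_life_hours: float

    def select_flight_batches(batches: list[Batch],
                              top_fraction: float = 0.1) -> list[Batch]:
        # Step 1: rank component types by their best predicted lifetime
        # and keep only the top x%.
        best_life: dict[str, float] = {}
        for b in batches:
            best_life[b.component] = max(best_life.get(b.component, 0.0),
                                         b.predicted_life_hours)
        ranked = sorted(best_life, key=best_life.get, reverse=True)
        keep = ranked[:max(1, int(len(ranked) * top_fraction))]

        # Steps 2-3: within each surviving component type, pick the best
        # manufacturer, then the best batch from that manufacturer.
        flight = []
        for component in keep:
            candidates = [b for b in batches if b.component == component]
            best = max(candidates, key=lambda b: b.predicted_life_hours)
            mfr_batches = [b for b in candidates
                           if b.manufacturer == best.manufacturer]
            flight.append(max(mfr_batches,
                              key=lambda b: b.predicted_life_hours))
        return flight

The point of the sketch is just that nothing exotic is happening at any single step; the reliability comes from applying the same selection pressure again and again.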
Incidentally, you have the question backwards: no one really cares when it's going to fail. We care when it's not going to fail: will the spacecraft make it to its destination or not? It doesn't really matter what happens after that.
This might seem like a nitpick, but changes in approach and mindset like this are often the difference between success and failure with "impossible" problems like this. So it's critical to get your approach right!
You don't do it that way. You figure out how it's going to fail and when that failure is likely, and then engineer it not to do that in the relevant timeframe.
FWIW his quote also applies to a lot of devices here on Earth. For example, guns are not designed to last forever, but they are designed not to fail. You don't want to hear a click when you expect a bang, or vice versa. As a side effect, they last forever: it's fairly common for a 100-year-old gun to work perfectly in 2024.
There are also expectations about maintenance and some notion of a “standard” environment. For example, unmaintained firearms work less well (or fail) when exposed to humid conditions.
Despite my lack of space knowledge, I find it fascinating that Voyager 1 has run on a nuclear power source for 40+ years, offering steady and reliable power without any moving parts that could degrade over time.
In contrast, India's decision to rely on solar panels left the Vikram lander dead after just 14 days, due to lack of sunlight (afaik).
I'm curious about the rationale behind this choice when nuclear power seems like a far superior option. Can someone shed light on this decision?
First, the nuclear power source is a giant hunk of plutonium. It is expensive to get, dangerous to use, and due to concerns about further refinement, is restricted internationally.
Second, it is inherently toxic - the source is continuously radioactive at a level hazardous to humans, plutonium itself has acute and long-term toxic effects aside from the radioactivity, and if a launch fails, the RTG will disintegrate and poison hundreds of miles (see Kosmos 954, which disintegrated over Canada).
Third, it is HEAVY. They produce 40W per kilogram. Solar panels produce three times that much on Mars, and can be folded compact for launch.
Voyager used an RTG because its planned mission took it far beyond where sunlight can generate power, and it could do so because it had the budget of NASA and plutonium from the Department of Energy.
Solar panels are way cheaper, lighter, easier to procure, easier to launch, and tend not to cause international incidents.
Kosmos-954 didn't poison hundreds of miles, square or otherwise.
They only found a dozen radioactive bits, each dangerous only within a very small area around it, and not really leaching anything due to their ceramic nature. Most of the fuel dispersed and became harmless by dilution, and probably never even reached the surface.
I wonder if you could do a hybrid approach, where the nuclear device is very small, but able to charge the battery over a longer duration to the point where the solar panels can be repositioned and utilized again.
Lots of missions use radioisotopic heaters, where you don't bother with the thermocouples and just have the material get warm and protect components which are vulnerable to low temperatures.
That's the main reason why spacecraft don't survive a temporary power outage: terrible environmentals.
But at this point, we don't have a lot of Pu-238, which is one of the only decent candidates.
I don't know the exact reasons why Vikram didn't get a fission reactor. But I can assume from similar missions:
1. Solar is pretty good as far out as Mars, and it gets worse as you travel further from the Sun. This is why most probes that travel past Mars need a nuclear reactor (Voyager, Pioneer, Cassini, etc.). Going closer to the Sun, solar gets even better.
2. Sending radioactive materials up on rockets presents a risk and is avoided if possible. Lunar probes are usually cheaper and can still benefit from solar, so there's no need for nuclear. Imagine throwing plutonium into the atmosphere in the case of an accident.
3. Nuclear reactors in probes are small and rely on decay radiation; they _usually_ have a pretty small power output, while solar offers a lot.
4. And last but not least, price: solar is much cheaper than nuclear.
Am I wrong that the plutonium in the Voyagers is not in a fission reactor but in an RTG (Radioisotope Thermoelectric Generator), which converts the heat from the plutonium into electricity?
I suppose the heat is the result of fission, but I don't think an RTG is what is meant by a fission reactor?
Using plutonium works great, but there are two issues. 1) They don't output that much power - a few hundred watts at most - and they decay at a fixed rate. 2) You need to get your hands on a decent amount of plutonium. Great for dirty bombs, hard to source.
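For a rough sense of point 1: Pu-238's half-life is about 87.7 years, so the output from decay alone falls predictably. A minimal sketch - the ~470 W launch figure for Voyager's three combined RTGs is my recollection and should be treated as illustrative, and the real decline is steeper because the thermocouples degrade too:

    PU238_HALF_LIFE_YR = 87.7   # Pu-238 half-life in years

    def rtg_power_watts(p0: float, years: float) -> float:
        """Output attributable to isotope decay alone; actual RTGs decline
        faster because the thermocouples also degrade over time."""
        return p0 * 0.5 ** (years / PU238_HALF_LIFE_YR)

    # ~470 W combined electrical output at the 1977 launch (assumed figure):
    print(rtg_power_watts(470.0, 47.0))   # ~324 W from decay alone by 2024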
Both Canada and the US have restarted production specifically to produce RTGs for NASA, but the process takes time to scale up and automate. It's gone up 4x in 4 years and continues to increase, so this is a problem that will eventually be "fixed".
Isn't it something like: space-reactor plutonium is a waste product of nuclear weapons production, and since we don't really make nuclear weapons at scale anymore, we aren't really making (refining?) plutonium anymore. NASA has some amount in reserve, but they're rationing it out carefully. So the Clipper probe had to go with a massive solar array (100 ft - the length of a basketball court) because they would rather save their plutonium for some future rover mission.
A good reason is the lack of availability of the needed isotope (Pu238).
The Europa Clipper has a huge array of solar panels instead of an RTG due to the last of the available supply going into the New Horizons spacecraft.
Pu238 was a cast-off isotope from nuclear weapons development, so it was more readily available during the Cold War. We should be happy that it's scarce now.
Also solar panels have gotten a lot better than they were when Voyager was launched, but even today anything going out past Saturn is not going to be able to use solar energy.
RTGs need plutonium-238, and I've read even the US doesn't have a lot available. The Europa Clipper will be using solar panels, for example. India could also use batteries and a standby mode during the 14 days without sunlight, but any extra weight would add to the launch cost. Maybe in future missions, as they get confident with successful landings, they will have a bigger lander that can survive the lunar night. Even the early Mars rovers from NASA were tiny and solar powered (e.g. Sojourner in 1997).
Availability :) An RTG requires plutonium-238, which needs to be created almost on purpose in a nuclear reactor. Not all nations have this ability or run such expensive programs. Also, in the USA, RTGs are reserved for programs where there is very little light available.
Would there not be some kind of benefit to sending a chaser after the probes to act as a relay as they get further away? Or is the ground-based array just as good as anything we could put in space at this time?
The Voyager probes were built in an incredible hurry to take advantage of a once-in-a-few-hundred-years optimal gravitational boosting path. Something launched later would never have been able to keep up, plus the antenna would be so, so much worse. The ground station networks have effective antenna lengths of miles.
Can we send a Voyager 3 with much more advanced batteries and sensors, and at a much faster speed, so that it may reach farther than Voyager 1 and 2 in, let's say, just a couple of years?
Strictly speaking, the Voyager missions already are the fastest spacecraft that caught up with other spacecraft - Pioneer 10 and 11.
Looks like we lost contact with Pioneer 11 in 1995 and Pioneer 10 in 2003. They both ran out of power and shut down.
The Voyager missions used a rare planetary alignment to get boosts, and a radioisotope thermoelectric generator - a technology that has gotten pushback in later spacecraft designs, although they have ceramic fuel forms now meant to address most of the issues.
That said, New Horizons, which gave us those lovely shots of Pluto, was launched in 2006. It is traveling faster than the Pioneers but slower than the Voyagers, so it'll be the third farthest away at some point.
1. Our rocket/propulsion technology today may be cheaper (thanks largely to SpaceX), but it doesn't necessarily provide MUCH more delta-V. Meanwhile,
2. The Voyager launches relied on a once-in-a-blue-moon (not quite once in two centuries) alignment of various planetary bodies to give spectacular orbital slingshot boosts.
So my limited understanding is that we can't really overtake the Voyagers very easily. Whether our current technology could be made more reliable in the long term is another good point of discussion :-)
The alignment isn't what matters for overtaking Voyager. Jupiter alone is enough, and New Horizons nearly did. Jupiter accounts for the vast majority of any gravity-assist plan. Saturn has 30% the mass and 2/3 of the orbital velocity, so it could only add 20% more over what you get from Jupiter alone, and the ice giants are smaller and slower yet.
New Horizons didn't nearly overtake Voyager. It's currently traveling about 4 km/s slower, and as it's still closer to the Sun, it's also decelerating more.
Yes, I wrote that short for brevity. If you must have the full thing spelled out: New Horizons nearly got enough velocity at Jupiter to eventually overtake Voyager, and 4 km/s is "nearly" in astronomical terms. New Horizons wasn't really trying to optimize for speed at Jupiter (its closest approach was over 10 million km), and a spacecraft that did could easily overtake Voyager using Jupiter alone, with no need for an alignment with Saturn or anything else.
The alignment was important in that it allowed Voyager 2 to visit Jupiter, Saturn, Neptune and Uranus in a single mission using gravity assists. The final velocity of Voyager could be matched in other ways, but not exceeded by so much that a probe could catch up to the Voyagers any time soon.
As far as I understand their speed mostly comes from gravity assists from Jupiter, so you'd have to wait for suitable orbital alignments and the basic technology for "much faster speed" doesn't exist.
> the basic technology for "much faster speed" doesn't exist.
…but we're very close. The next technique will be flying near the Sun and then deploying a solar sail for a huge speed boost. Voyager does about 3 AU/yr; a solar sail boost with today's technology would enable 7-9 AU/yr.
Highly recommend watching Slava Turyshev discuss his work on an SGL telescope, which employs this technique.
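For scale, here's a quick conversion of those figures to km/s (using 1 AU ≈ 1.496e8 km and ~3.156e7 seconds per year - standard constants, nothing assumed beyond the AU/yr figures quoted above):

    AU_KM = 1.496e8      # kilometres per astronomical unit
    YEAR_S = 3.156e7     # seconds per year

    def au_yr_to_km_s(v_au_yr: float) -> float:
        return v_au_yr * AU_KM / YEAR_S

    print(au_yr_to_km_s(3.0))   # ~14.2 km/s (Voyager 1's usual 17 km/s is ~3.6 AU/yr)
    print(au_yr_to_km_s(7.0))   # ~33.2 km/s
    print(au_yr_to_km_s(9.0))   # ~42.7 km/s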
> and the basic technology for "much faster speed" doesn't exist.
Various forms of nuclear propulsion have been investigated. None of them are anywhere near ready for missions, but that seems to be more due to lack of investment in their development (and environmental/legal/regulatory/geopolitical/etc concerns) than any scientific obstacle. If NASA/etc were really serious about it (as in willing to spend multiple billions a year on it), it could probably be made to work in only a few years.
This probably can happen as soon as there's some manufacturing and mining capacity beyond LEO, so that it's safe to use nuclear technology without any environmental impact. The Moon is an option here, so it's likely 5-15 years after the first Moon base is established (I'd expect exponential growth of it).
> This probably can happen as soon as there's some manufacturing and mining capacity beyond LEO, so that it's safe to use nuclear technology without any environmental impact
The idea pursued nowadays is you launch using chemical propulsion and then only turn on the nuclear propulsion once you reach a safe distance from Earth. This is different from the original 1950s Project Orion, which proposed to use nuclear pulse propulsion (i.e. repurposing nuclear weapons for propulsion) from the surface to orbit, which would have produced enough fallout to likely kill a handful of people per launch (in the long run, through higher cancer rates). The question then is: is it safe to launch nuclear material to orbit using chemical propulsion? Yes, we can secure it in containers designed to survive catastrophic loss of the launch vehicle. But will the general public believe it is safe, even if it actually is? Possibly not - which is a political obstacle rather than a technical one.
The other issue is that nuclear propulsion systems can be too large/heavy to launch on a single chemical rocket, but you could launch them as multiple modules assembled together in orbit.
I don’t think this need or should depend on off-Earth manufacturing or mining capacity. I think it is going to be a long time before the highly complex manufacturing supply chains needed to turn raw materials into cutting edge technology like nuclear space propulsion systems exists off Earth. But we should be able to manufacture them modularly on Earth, such that in space we’d be doing module assembly rather than manufacturing.
I think it will be a long time before local manufacturing of high-tech goods on the Moon is cheaper than shipping it from Earth. Lunar manufacturing is likely to be limited to proof-of-concepts (demonstrating that we can manufacture X on the Moon, even if more expensively than shipping it from Earth), basic construction materials (“mooncrete”), rocket fuel (H2O=>H2+O2, maybe CH4 from carbon-bearing lunar ices). Likely small scale greenhouse agriculture too (to improve the crew diet), maybe even some aquaculture
Yes, but mostly no, also no, and, frankly, why would we.
Yes: we could lift off a much heavier spacecraft, give it plenty of fuel, and many of its parts would be lighter than their 1970s equivalents, giving us lots of room for modern sensors.
Also no: the old Voyagers benefited from a lot of gravity assists from half the solar system, thanks to an alignment which won't happen again until 2151 (https://space.stackexchange.com/questions/5075/when-is-the-n...), so unless you're in no particular hurry, we won't have that.
Why would we: the _point_ of the Voyager crafts was to do close flybys and collect plenty of data from the outer planets, not to go as far away and as fast as possible. You want to be as slow as possible near them, so you have science time. You're rushing this part in order to get right away to the centuries of nothing which follow?
Just a small aside, are Stirling engines used on other spacecraft? The Wikipedia article suggests development was largely abandoned a decade ago. My first glance concern would be that because it's reliant on moving parts it is liable to fail sooner than a purely thermoelectric RTG (due to part wear, lubricant leaks, fatigue, etc). This would seem to be quite important on the timeline of a probe that is going to take a long time to reach the subject of its investigation.
Which, to your point, only works against the idea of hypothetical Voyager 3.
Unfortunately a gravity assist maneuver is almost "incompatible" with an ion thruster.
If you approach e.g. Jupiter you gain speed (it's pulling you in), which you then lose as you get away from it (as it's still pulling you in, meh). Gravity assists work because you use your chemical rocket right when you are closest and speed away, "robbing" Jupiter of the chance to claim the energy it lent you on approach.
Ion thrusters have very low thrust, so you would accelerate veeeeery slowly away from Jupiter's gravity well - and in this time it will keep affecting you and slowing you down, and the whole thing would be barely worth doing.
You could bolt on a very simple solid rocket booster just for the gravity assist, of course, but its Isp will be lower and you'll have to carry its mass until you can expend it.
I think you are mixing up gravity assists with the Oberth effect. I'm pretty sure the Voyagers did no massive burns close to the giant planets, for example. A (so far theoretical) solar Oberth maneuver, on the other hand...
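For what it's worth, here is a toy 1-D sketch of the frame-change arithmetic behind a pure (no-burn) gravity assist - the numbers are illustrative only, and the real geometry is 3-D:

    def flyby_exit_speed(v_in_sun_frame: float, v_planet: float) -> float:
        """Idealized 1-D 'elastic bounce' picture of a gravity assist.

        In the planet's frame the craft arrives and leaves at the same
        speed, just reversed in direction. Transforming back to the
        Sun's frame is what adds speed - up to 2 * v_planet head-on.
        """
        v_rel_in = v_in_sun_frame - v_planet   # velocity relative to planet
        v_rel_out = -v_rel_in                  # magnitude conserved, flipped
        return v_rel_out + v_planet            # back to the Sun's frame

    # Toy head-on case: Jupiter orbits at ~13 km/s; a craft approaching at
    # 10 km/s the opposite way leaves at 36 km/s in the Sun's frame.
    print(flyby_exit_speed(-10.0, 13.0))  # -> 36.0

No burn is needed anywhere in that picture; the craft steals a tiny bit of the planet's orbital momentum.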
Yes, it's being tested (NASA ran one continuously for ~15 years, which seems promising; I don't know if the test is still running), but I don't know if any were actually launched and used in space.
We could make a Voyager 3, but I don't think there is any way to expect it to catch up with currently feasible technology. And it could only be launched at specific times.
It certainly could have more advanced sensors and batteries. I don't know if the battery improvement would really matter on a decades long mission.
Even if we could there isn't much point. There just isn't very much out where the Voyagers are, and there won't be for a long time. We're shutting down sensors on Voyager because there just isn't much to see. It's just vacuum.
If we had something capable of getting to Voyager in a year that might be worthwhile, because it would stand a chance of getting somewhere interesting in a few decades. But we are absolutely nowhere near that level.
Not a couple of years. The Voyagers have been doing around 38,000 mph since the late 1970s, which is roughly 17 km/s. The proposed Interstellar Probe mission aims to do 20 km/s or slightly more. It would take decades to overtake either of them (no actual overtaking will probably take place, though, as its trajectory will most likely be different).
As others have pointed out, Voyager 1's speed came from gravity assists at Jupiter and Saturn, while Voyager 2 got assists from Jupiter, Saturn, Uranus, and Neptune. Jupiter and Saturn line up relatively often, so we could try to outdo Voyager 1's speed a bit with a lighter spacecraft, thanks to various advancements. But it's unclear we'd learn anything really new from having sensors that reached further out, and our propulsion technology really hasn't meaningfully advanced beyond gravity assists from Jupiter + Saturn. There are some proposals to use nuclear explosions behind a probe to achieve speeds of ~10k km/s, which would be substantially faster, but there are numerous obstacles (cost, plus international treaties banning the use of nuclear weapons in space).
Were engineers 50 years ago just much smarter than we are now? It’s pretty unbelievable that these things still work. Or is there something systemic about how they were able to achieve so much so long ago with basically no computers to help.
>Were engineers 50 years ago just much smarter than we are now?
Were the 2006 Sony Santa Monica programmers smarter than us when they delivered God of War 2 for the PS2 on its tiny resources?
Usually constraints force you to become inventive and smart. Put a hard limit of 128 MB per tab on web browsers, and suddenly both the browser developers and the web developers will become extremely smart.
The overabundance of resources has made the majority of software engineers lazy. If we once again lived in times of scarcity, people would suddenly start to ask why we need 8 abstraction layers to do something.
I would guess they were a quite smart bunch, but not superhuman - just people given an extremely challenging task, trying to jury-rig something on the edge of possibility. My guess is that they took their tasks more seriously and to heart.
> The over abundance of resources has made the majority of software engineers lazy.
I don't think "lazy" is the right word; it's just that cheap hardware makes most optimizations not worthwhile. Why spend weeks optimizing an application, costing thousands of dollars and potentially introducing complexity bugs, when paying $50 instead of $5 for a server is also an option? Not to mention the importance of time to market and the time value of money (even if the server cost will amortize, spending the money later might be worth it).
There obviously is some laziness and useless complexity in our industry, but a lot of it simply is the rational choice. We all love the ingenuity of assembler-hacking the last few bits of performance out of the CPU, but for most use cases, this is simply wasted effort.
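A back-of-envelope version of that tradeoff, with invented numbers just to show its shape (the rates and durations are assumptions, not data):

    # Made-up illustrative numbers: the point is the shape of the tradeoff.
    engineer_cost = 2 * 40 * 75      # two weeks of work at $75/hour = $6,000
    monthly_savings = 50 - 5         # downsized server: $45/month saved

    break_even_months = engineer_cost / monthly_savings
    print(break_even_months)         # ~133 months, i.e. over a decade

Under those assumptions the bigger server wins easily; the calculus only flips when the workload is large enough that hardware costs dominate engineering costs.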
They had a lot less resources and knowledge with which to estimate the right amount of resilience for achieving the mission goals, and erred on the side of caution.
You can see something similar happening in bridges: load weight per meter has come down considerably since the 60s, because material understanding is much more advanced and they need less margin for the same safety factor.
> The Kurilpa Bridge is a multiple-mast, cable-stay structure based on principles of tensegrity producing a synergy between balanced tension and compression components to create a light structure which is incredibly strong.
Delighted to see this here — the Kurilpa Bridge is less than 50 meters from my front door and is something to behold each time I get the chance to use it.
There was much less overall complexity (and capability). These engineers interacted directly with capacitors and circuits to accomplish their tasks. Integrated circuits enable a lot of modern society, but it also segregates engineering between fabrication time and everything afterwards.
I find the more analogue/mechanical type machines even more amazing in some ways. Like how they say it 'decided' to switch to its backup radio - how are those decisions made on such basic hardware? I guess thinking mechanically is just a very foreign concept for me, mostly making software.
The Voyagers weren't exactly cheap, but by NASA standards they were made on a shoestring budget because all the money at the time was being diverted to development of the space shuttle.
Most of our car engines run at a small fraction of their capability, even at highway speeds. Most servers run at 15-25% utilization to handle failover situations. If you have a beefy as fuck setup with extra-wide tolerance margins, things can last for a very long time. Car engines, with chambers that essentially contain explosions, still end up running for decades with very little maintenance - well, some longer than others. This is also true for consumer electronics and appliances: you still have things working just fine from almost a century ago. Yes, some things fail faster than others, and the designers were aware of the limitations and trade-offs, but most components continue working.
What would be wise in this case is to look at how Voyager was designed for reliability, to understand why it hasn't completely gone off the rails. Meaning: what are the common approaches, why do they fail, and why doesn't Voyager fail? What does it do that's different from the usual method? Lots to be learned there, but I don't think the people were wizards; they probably dealt with very brittle things, had tight specs, and knew how to design certain parts to very close to perfect.
The manufacturing of parts plays a huge role in reliability. If you know you can achieve better manufacturing output and it's just a matter of cost, that's easy - the government can pay whatever. But if you don't know, that's a harder situation to be in, and that's guesswork. If you have a capacitor that's normally 20% off its rating as bought in a store, and you know you can get that down to 1% off or even less, then you go for that with everything you've got. And all parts are made with as wide a tolerance as you can get, so that as they degrade, things still work. Easier said than done, but there it is. Things will eventually break and stop working. But for now, Voyager has bought time, quite literally.
It's not nonsense. Many of the radio receiver setups made use of diversity or dual-diversity configurations. Also, transmitter and receiver blocks from different manufacturers were wired in switchable primary/backup configurations.
It's just amazing how little progress there has been in space flight since then and since the Apollo missions. I was just looking at some newspapers from 1969 (saved by my mother-in-law), and it was all about the Moon mission and what awaits us - they talked about settlements throughout the Solar system, flying cars, and so on. Given what has happened since, it was a huge disappointment. And now the ancient Voyager is our last, best hope to explore the outer rim.
We have no way to know if it's actually the fastest and farthest object: a 1957 nuclear test was conducted underground, and the scientists decided to cap the borehole with a sizable concrete and metal plug.
The nuclear explosion may or may not have caused said plug to reach space - the data from the cameras indicates it had at least 6 times the needed escape velocity, but it is difficult to estimate whether it would completely disintegrate, or if enough of it would survive the atmosphere, and whether it would "count".
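As a quick sanity check on that figure (escape velocity at Earth's surface is about 11.2 km/s; the test is usually identified as Plumbbob Pascal-B):

    V_ESCAPE_KM_S = 11.2        # escape velocity at Earth's surface
    print(6 * V_ESCAPE_KM_S)    # ~67 km/s, in line with the ~66 km/s figure
                                # usually quoted for the Pascal-B plate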
I was struggling to think up any other plausible scenario where one unit in a continuously operating piece of electronic hardware/software was rebooted after such a long time. Voyager most likely has no competition in this regard, or in a number of others. It speaks volumes for how well the NASA crew of that era knew their stuff and got it right - on budget, on time, and now doing inter-generational space research - making the actual designs and blueprints of the spacecraft very much worth re-examining.
AFAIK launch vehicle costs were never the problem for deep space probes. Launch windows still are - Voyager was extremely lucky in this regard. And then there is probe design and construction - SpaceX made launches cheaper, but that just shifts most of the cost into the payload. And there is no point in having such a fleet if there isn't enough funding for scientists to process the results.
Most of the probes would be identical. Building two identical probes would cost only marginally more than 1, perhaps 10% more.
> but that just shift most of the cost into the payload
It reduces the cost; it doesn't shift it.
> And then there is no point in having such a fleet if you there isn't enough funding for scientists to process the results.
Plenty of funding would become available if the funding for junk science stops, such as:
"In 2021, the National Institutes of Health (NIH) awarded $549,000 to a Russian lab performing experiments on cats, including removing part of their brains and seeing if they could still walk on treadmills, according to the Washington Times." https://nypost.com/2024/11/13/us-news/where-elon-musk-can-st...
Is there some existing plan? A paper that supports this idea? It's not clear to me that the probe we send to one place for one purpose is the same we'd send to another.
> Building two identical probes would cost only marginally more than 1, perhaps 10% more.
I've read from experts - somewhere here on HN - that isn't how the costs work.
> Plenty of funding would become available if the funding for junk science stops, such as
> "In 2021, the National Institutes of Health (NIH) awarded $549,000 to a Russian lab performing experiments on cats, including removing part of their brains and seeing if they could still walk on treadmills, according to the Washington Times." https://nypost.com/2024/11/13/us-news/where-elon-musk-can-st...
$500k is not enough to matter for a space probe, and I don't see why that brain research is junk science. If you truly think the research was about half-brained cats, then you really should appreciate that research. :D
> It's not clear to me that the probe we send to one place for one purpose is the same we'd send to another.
We've launched twins before. The two Voyagers, for example. Two Vikings, for another.
When we build different airplane models, quite a bit comes from other airplanes. Even if the parts aren't identical, they are usually just resized or tweaked or modernized. Look at all the 737 variants, and the 747 variants, for example. If every model was a ground-up redesign, nobody could afford to fly.
When the 757 and 767 were developed, there was a big push to share identical parts between them. This was a big success, and saved huge amounts of money.
> I've read from experts - somewhere here on HN - that isn't how the costs work.
I debated those experts here, and a lot of their arguments didn't hold up. For example:
1. only need to design once for N craft
2. only need to devise a test plan once
3. only need to build test equipment once
4. only need to develop the expertise once
5. only need to write the software once
and so on and so forth.
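A toy cost model of what that list implies - the dollar figures are invented purely to show the shape of the argument:

    # If non-recurring engineering (NRE) dominates, unit 2 is nearly free
    # in relative terms. All numbers here are made up for illustration.
    nre = 900.0          # design, test plan, test equipment, software ($M)
    unit_cost = 100.0    # parts, assembly, unit-level test per craft ($M)

    def program_cost(n_craft: int) -> float:
        return nre + n_craft * unit_cost

    one, two = program_cost(1), program_cost(2)
    print(two / one)     # 1.10 -> the second probe adds ~10% here

Whether real probe programs actually have that cost structure is exactly the point under debate in this thread.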
> I don't see why that brain research is junk science
What's more important? Solar system exploration or half-brain cats?
> We've launched twins before. The two Voyagers, for example. Two Vikings, for another.
Good point. Do we know the marginal cost of the second one? Why don't we send twins now?
> When we build different airplane models, quite a bit comes from other airplanes.
I expect we're already doing that with space probes, to the degree it's possible. I feel like some of your argument is the old 'these people are idiots and I know obvious ways to do it better'.
A good question is, why doesn't NASA do it that way? It would be interesting to hear the response of someone there.
I can think of reasons it might not be effective - e.g., the probe for a few months of Europa terrain observation might be very different than the 10-year solar radiation observer, which might be much different than the Mars whatever, etc.
> I debated those experts here, and a lot of their arguments didn't hold up. For example: / 1. only need to [____] once
Those are the benefits of standardization and mass production and can be quite valuable, but not everything can be done that way. That's why we have different planes, cars, etc. I might want everyone in our company to use the same laptop, but different people have different needs.
I can come up with other possible issues, but I have no idea without someone with actual experience:
There can be a significant cost to engineering for standardization. Parts, assembly, a supply chain, a production line - maybe not worth the cost at the quantities needed. And there are things like the F-35, intended to save money by meeting the mission requirements of militaries in a ~dozen countries and services, including all of the US Air Force, Navy, and Marines - all simultaneously! Anyone who has designed even small systems would, I think, feel alarm at reading that. It took a little time and money to find a way to do all that effectively, please all those bosses, with an all-in-one tool. (Off the top of my head, just think of power supplies that can suit every possible demand on space probes!)
Weight and size are especially an issue for space probes, which would seem to make an all-in-one tool more challenging. The proposed solution often is modularization - a standard framework with interchangeable mission modules. In my limited experience trying it, it's a mess: when things break or need lots of extra attention, IME as a general rule it's most often at the interfaces between systems and between subsystems; the interchangeable-module approach is asking for trouble. I know the US Navy tried that with littoral combat ships, and it failed. (Which doesn't mean it never succeeds.)
> What's more important? Solar system exploration or half-brain cats?
I worked for 3 years on the 757 stabilizer trim system and gearbox. My opinions on this are not those of a layman.
I also worked for a time as an electronics assembly guy (i.e. gnome). One task was to assemble 10 RS-232 electronic cards, all the same. The first one took me 2 hours. The last one 20 minutes.
The first time I took the intake manifold off of my V8 Mustang, it took me 2 hours. The 4th time, 20 minutes.
The same acceleration happens when I assemble IKEA furniture. Or when I helped a friend change 4 brake rotors on his car.
This is not because I invented a more efficient process. It's simply that I knew what to do.
I flat out do not believe there's something about space hardware that makes this not possible.
P.S. The service manager at a car dealer once told me that changing the alternator would cost 2 hours of a mechanic's time. I told him if it took more than 20 minutes the mechanic was incompetent, and proceeded to explain to the manager step-by-step what the exact procedure was for that model car. The result was I got a much better deal :-)
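The pattern in those anecdotes is the classic experience curve. As an illustration only, fitting Wright's law (T_n = T_1 * n^b) to the RS-232 numbers above:

    import math

    # Unit 1 took 120 minutes, unit 10 took 20 minutes (figures from the
    # anecdote above; the model choice is mine).
    t1, t10 = 120.0, 20.0
    b = math.log(t10 / t1) / math.log(10)   # ~ -0.78
    learning_rate = 2 ** b                  # ~0.58: each doubling of units
    print(b, learning_rate)                 # cuts time per unit by ~42%

That is a much steeper curve than the ~80-90% learning rates usually cited for manufacturing at scale, which fits the claim: the early units are dominated by "figuring out what to do", not by process innovation.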
> I flat out do not believe there's something about space hardware that makes this not possible.
I agree. I'm sure everyone at NASA agrees as well. I won't go on and talk us into a circle.
Another issue may be that NASA often pushes the bleeding edge of space exploration - that is part of their job. Commercial companies can handle the already established tech, like orbital launch.
There's an excellent National Academies report on NASA from this past spring, which includes a great section on current leading missions and the technologies needed, including those that must still be developed. NASA seems to develop new tech for every mission; they take on missions before the R&D is done on many components - some seem to be just theories. That might be hard to mass-produce.
For the HN ultra-nerds: is there a book that details the Voyager 1 construction, including sensors, PCBs, material compositions, manufacturing processes, and all that? I'm looking for something so dense I'll need a therapist to find my way back to society afterwards. Truly curious, thank you.
"NASA reconnected with Voyager 1 after a brief pause" (30.10.2024)
https://news.ycombinator.com/item?id=41992394