> [The S-band] uses a different frequency than the X-band transmitter's, and its signal is significantly fainter. The flight team was not certain the S-band could be detected at Earth due to the spacecraft’s distance, but [it turned out to work]
This is the most fascinating part to me. Isn't it well-established how sensitive a signal we can hear? Did they implement something like a new signal analysis method that enabled it?
And it says this wasn't used or even tried since the 80s; I guess it grew too faint. Looking up the frequencies, X is 8–12 GHz and S is 2–4. Doesn't that mean X gets more data across at the same redundancy level? Why have this slower transmitter at all for only the first years? Power conservation, despite the fresh RTG?
> Did they implement something like a new signal analysis method that enabled it?
Well, we do keep building bigger and bigger antennas and antenna arrays. So while Voyager can't change, we do. And we can build more and more sensitive (i.e. noise rejecting) equipment.
> X is 8–12 GHz and S is 2–4
I don't know much about the Voyager design but the beam width is related to the frequency, and so the S-band transmitter will have a larger beam width and thus can be pointed less accurately than the X-band when trying to talk to Earth. Conversely, X-band is higher frequency than S-band and it's likely they would be able to use more bandwidth. So, interesting trade-offs.
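The frequency/beamwidth trade-off can be sketched with the common θ ≈ 70·λ/D approximation for a parabolic dish. The dish size and downlink frequencies below (Voyager's 3.7 m high-gain antenna, S-band near 2.3 GHz, X-band near 8.4 GHz) are assumed values for illustration, not taken from this thread.

```python
# Half-power beamwidth of a parabolic dish, common approximation:
#   theta (degrees) ~= 70 * wavelength / diameter
C = 3e8   # speed of light, m/s
D = 3.7   # assumed dish diameter, m (Voyager's high-gain antenna)

def beamwidth_deg(freq_hz, diameter_m=D):
    wavelength = C / freq_hz
    return 70.0 * wavelength / diameter_m

s_band = beamwidth_deg(2.3e9)  # assumed S-band downlink frequency
x_band = beamwidth_deg(8.4e9)  # assumed X-band downlink frequency
print(f"S-band: {s_band:.2f} deg, X-band: {x_band:.2f} deg")
```

The S-band beam comes out several times wider, which is why pointing can be correspondingly sloppier on that band.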
> Did they implement something like a new signal analysis method that enabled it?
They arrayed three antennas together.
> Doesn't that mean X gets more data across at the same redundancy level?
There's nothing special about the frequency itself. The advantage for X-band is that the antennas at both ends have more gain. 12 dB for the spacecraft and 11 dB for the ground station for a total of 23 dB.
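For intuition, decibel gains simply add, and the combined 23 dB figure converts to a linear power ratio like this:

```python
# Combined X-band advantage quoted above: 12 dB (spacecraft) + 11 dB (ground).
total_db = 12 + 11
ratio = 10 ** (total_db / 10)  # dB -> linear power ratio
print(f"{total_db} dB is about a {ratio:.0f}x power advantage")
```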
No, they didn’t. The three antennas are in California, Spain, and Australia; they can’t all point at the same point in the sky at once, and even if two could do so, they’re not designed to work as an interferometric array.
If you catch a site communicating with Voyager you will sometimes see it using two dishes... though most often it's just the one big one at the site. When they do, it's not getting signal on two, but having one of them track the carrier wave (I think).
... and to "even if two could do so, they’re not designed to work as an interferometric array." They can.
> The DSN anticipates and responds to user needs. The DSN maintains and upgrades its facilities to accommodate all of its users. This includes not only the implementation of enhancements to improve the scientific return from current experiments and observations, but also long-range research and development to meet the needs of future scientific endeavours.
> Interferometry
> The accurate measurement of radio source positions; includes astrometry, very long baseline interferometry, connected element interferometry, interferometry arrays and orbiting interferometry. Measurement of station locations and Earth orientation for studies of the Earth.
> Very Long Baseline Interferometry
> The purpose of the Very Long Baseline Interferometry (VLBI) System is to provide the means of directly measuring plane-of-the-sky angular positions of radio sources (natural or spacecraft), DSN station locations, interstation time and frequency offsets, and Earth orientation parameters.
I'm wondering also. Based on the wording here, I suspect we have more and better telescopes than we did when it launched.
> The flight team was not certain the S-band could be detected at Earth due to the spacecraft’s distance, but engineers with the Deep Space Network were able to find it.
Certainly, but then isn't it expected now with the new antennas? They installed some upgrade but don't know to what sensitivity it goes? Surely they do, so the article must be handwavy about it, if I'm understanding things correctly.
Someone else commented about a wider spread in this other band, though. Perhaps the operators were not sure what that does for absorption and reflection by intervening dust or so?
Haven't you ever written code that you thought should work, but didn't due to a reason you couldn't foresee? I don't think the uncertainty here comes from question marks in the link budget equations but from the possibility of unforeseen problems making detection not work when it should.
You can increase throughput by adding repeaters around the solar system, but latency has a hard limit due to the speed of light.
Also, throughput could be increased by changing or updating the modulation and coding schemes. AFAIK the Pioneer and Voyager probes were still using PSK and FSK.
It depends on whether they're running at a lower data rate due to the low signal power (which leads to low SNR, reducing channel capacity) or if it's just the speed of light delay.
In the former case, a relay could help quite significantly.
In the latter case, it would just add even more delay.
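A rough sketch of why a relay helps in the low-SNR case, using the Shannon capacity formula C = B·log₂(1 + SNR): a midpoint relay halves the distance per hop, quadrupling received power (inverse-square), and at very low SNR capacity is nearly linear in SNR. Bandwidth and SNR values below are illustrative only.

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    # Shannon channel capacity
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 1000.0   # Hz, illustrative
snr = 0.01   # well below 0 dB, typical of very weak deep-space links
direct = capacity_bps(B, snr)
relayed = capacity_bps(B, snr * 4)  # midpoint relay: ~4x power per hop
print(f"relay gains ~{relayed / direct:.1f}x capacity per hop")
```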
There isn't enough throughput to achieve a negative delay which offsets the actual light travel time to reach the object. The ratio of throughput delay to actual light delay is so infinitesimal it might as well be zero.
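For scale, the one-way light time at Voyager 1's distance (using the ~162 AU figure mentioned elsewhere in the thread):

```python
AU_M = 1.495978707e11  # meters per astronomical unit
C = 299_792_458.0      # speed of light, m/s
distance_au = 162      # approximate Voyager 1 distance
one_way_hours = distance_au * AU_M / C / 3600
print(f"one-way light time: {one_way_hours:.1f} hours")
```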
The creators of the bot are responsible for the consequences of their actions. There are mentally unstable people (if that's true) and neglectful parents (again, if true - it's easy to duck responsibility by smearing the powerless victims), and if the bot is a threat to those people then the developers are doing wrong.
How is that the same and how different than the prior issue? I feel like your comment sets me up to work that out - it's your comment, what do you think? :)
Started watching after your comment... Fantastic documentary! And it's great to see some shots of the screens they use to manage the probes and stuff. And yes, some ninjas cutting onions...
... I found it a bit poignant / emotional as well! ... those engineers in a dusty office beside McDonalds... quietly keeping the mission going... in touch with something that's beyond our solar system now... True engineers! No glitz... quiet dedication...
For a very engaging documentary about the remaining team of Voyager 1 scientists, watch "It's Quieter in the Twilight". The ending kind of tails off, mostly due to the Covid pandemic, but it's a great look into how a group of aging scientists and engineers (the last to really understand how Voyager 1 works) are keeping it alive.
This is interesting. When we talk about interstellar travel in the far future, this may be another issue to consider. The handover of knowledge will have to happen over at least a few generations.
It will end up being a software application whose development and upkeep sprints/projects will be measured in multiple years instead of weeks.
> ...but it's a great look into how a group of aging scientists and engineers (the last to really understand how Voyager 1 works) are keeping it alive.
That kind of brings up some interesting questions of team composition. When you're building something complicated that will last 50 years, you probably want a few people on the team who are: 1) talented, 2) very young (like fresh out of college), and 3) have the mindset of a lifer. You deliberately get them involved in all the stuff and put them in the room when all the decisions are made, then they serve to preserve the institutional knowledge of the project for the latter half of its lifetime.
Hey smart people, if we were to launch a similar probe today using the most advanced technology available, how long would it take a probe to reach the same distance as Voyager 1?
The lineup barely matters for gravity assist. Jupiter is the vast majority of the total you can get. Saturn has 30% of the mass and 2/3 the orbital velocity, so adding Saturn gets you only 20% more above using Jupiter alone (and the Voyagers didn't really try), and the ice giants are smaller and slower yet.
We could easily overtake Voyager via only Jupiter if we wanted to (and New Horizons eventually will), and Jupiter-to-any-target launch windows come at least every 12 years.
"The beauty of the gravity assist is that you use the gravity field of a large body to change course. A common misconception is that the gravity assist increases speed, but it actually leaves speed unchanged. It's more accurate to say that the gravity assist changes direction, since velocity is both a magnitude (speed) AND a direction."
That's only thinking about it from the perspective of the planet. Relative to the planet, the magnitude of the incoming velocity vector is equal to the magnitude of the outgoing velocity vector. But in the reference frame of the sun, the planet loses an infinitesimal bit of momentum, and the spacecraft gains that momentum.
And that's assuming you don't do an extra burn at periapsis, which is far more efficient at changing speed than doing the burn in interplanetary space.
Where is the quote from? Gravity exerts a force, therefore by Newton's second law there is acceleration. Nothing says the acceleration is purely angular. The closer the pass to the planet, the greater the acceleration.
Wiki explains it well. From the frame of reference of ship and planet, the relative speed is the same. But relative to the sun, the ship can be moving faster.
"A gravity assist around a planet changes a spacecraft's velocity (relative to the Sun) by entering and leaving the gravitational sphere of influence of a planet. The sum of the kinetic energies of both bodies remains constant (see elastic collision). A slingshot maneuver can therefore be used to change the spaceship's trajectory and speed relative to the Sun"
There’s an upvoted sub-comment to that which has a good correction. That statement as a whole is just not true, and meaningless besides. When you pass an orbiting body you take some of that body’s orbital momentum and add it to your own. There’s really no semantic argument in my mind that could justify the above statement. You change direction because you have more momentum in total. There’s no bending of the direction here, just addition of new vectors, but that’s exactly how all acceleration works anyway, so it’s a bit meaningless to make that statement and I’m really not sure what they are trying to get at. Maybe from the POV of the orbiting body there’s no change, but that’s not the thing anyone cares about when doing a flyby.
I think the issue mentally is if you treat the gravity of a planet like a valley, where you accelerate into it but on the way back out you lose all that gained velocity. The difference here is that the valley moves in the opposite direction ever so slightly as you pass it, and you gain the momentum from that valley's movement. At least, I think that's how it works.
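That picture can be made concrete with a 1-D idealization: in the planet's frame the encounter is elastic (|v_in| = |v_out|), but transforming back to the Sun's frame adds up to twice the planet's orbital speed for a head-on pass. The numbers below are illustrative, not Voyager's actual trajectory.

```python
U = 13.1     # km/s, roughly Jupiter's orbital speed around the Sun
v_in = 10.0  # km/s, spacecraft speed in the Sun's frame, approaching head-on

v_rel_in = v_in + U    # speed relative to the planet
v_rel_out = v_rel_in   # unchanged in the planet's frame (elastic encounter)
v_out = v_rel_out + U  # back in the Sun's frame: v_in + 2*U
print(f"{v_in} km/s in, {v_out:.1f} km/s out")
```

Real flybys are 3-D and partial (you rarely get the full 2U reversal), but the sign of the effect is the same: the "valley" is moving, and you keep some of its motion.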
Not claiming to be smart people, but it depends on what you mean by "available." The fastest probe that we could launch soon is a sundiver solar sail. It requires no new magical technological leaps. It has been explored mostly in service of the solar gravitational lens concept. [0]
The sundiver probe could go 547 AU in 17 years. Voyager 1 is at 162 AU. So around 5 years.
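A quick check of that arithmetic, taking the comment's figures at face value:

```python
avg_au_per_year = 547 / 17               # sundiver's implied average speed
years_to_voyager = 162 / avg_au_per_year  # time to cover Voyager 1's distance
print(f"~{years_to_voyager:.1f} years to reach 162 AU")
```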
"Launch today" is a very undesirable constraint for this goal. All of the probes that have escaped made extensive use of gravitational slingshot effects, which are only available during favorable launch windows.
I'd imagine trying to build a radio transmitter powerful enough to transmit that deep into space would raise a few eyebrows before you even get the chance to send anything to the satellite.
That and having to guess what commands to send to the probe
There are stories floating around of hackers connecting to old satellites (for fun), so it's possible.
NASA could have gotten away with sending a password in plaintext (can anyone intercept a focused radio signal?), or DES, but it's possible they didn't use anything.
Where it is now, you'd need some serious equipment to transmit to Voyager, and network security is all about cost-benefit analysis; it's not worth the while for any adversary.
You'd be spending precious compute and power to use any kind of encryption. It doesn't seem worth it for something that isn't of significant strategic value.
The source code isn't hiding in a repo somewhere for security reasons — it's spread around on various pieces of paper and computers over the last 50 years. There isn't a single source of truth. Adds a whole other level of wizardry to keeping the thing running.
It costs money that would be better spent on other projects, and NASA needs to be as careful as possible with spending their very limited budget.
Having a ton of people run around the office for a couple months to collate a bunch of documents so you can better pass info on to a new generation of workers when the satellite might not even be usable anymore isn't very efficient. Might as well just pay an extra 50% or whatever to the 5 dudes who know what's going on until the thing is inop. Even if it died today, the mission still would've been a massive success.
Given how simple the computer is, I very much doubt it. If anything, it might have a very simple XOR encryption or just a passphrase. If anyone were sufficiently motivated, it probably would be trivial to snoop on the DSN transmissions and crack any authentication. I'm sure it'd be susceptible to a simple replay attack at any rate.
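To illustrate why a simple XOR scheme offers no real protection (a hypothetical scheme for illustration, not anything documented about Voyager): XOR with a short repeating key is its own inverse, so a single captured plaintext/ciphertext pair leaks the keystream.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte against a repeating key; applying it twice decrypts
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

cmd = b"ENABLE S-BAND TX"  # hypothetical command
key = b"\x5a\xa5"          # hypothetical 2-byte key
ct = xor_cipher(cmd, key)

# An eavesdropper who knows (or guesses) one command recovers the keystream:
recovered = bytes(c ^ p for c, p in zip(ct, cmd))[:len(key)]
assert xor_cipher(ct, recovered) == cmd  # full decryption with recovered key
```

And even with a secret key, recording a valid transmission and re-sending it later (a replay attack) works unless each command carries something fresh like a counter or nonce.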
The problem is simply that you need a huge transmitter with (AIUI) some special and unique modulation hardware. Also there's nothing to be gained from interfering with the Voyagers. Really the only practical thing you could do is shut them down a couple of years before they die anyway. There's just no point.
Even if you magically had your own DSN, would anyone but NASA even know exactly where they are with enough precision to communicate with them? In a way, that's now your layer 1 authentication key. The coordinates of where to "point" your DSN.
I'd be very surprised if its exact position in the sky is not public information.
Even if not, it should be fairly straightforward to compute its position from the initial flight plan. Once it escaped Jupiter its trajectory is just a straight line.
I was about to say that Voyager likely also suffers from the Pioneer anomaly, which is suspected to be caused by the RTG's thermal radiation pushing the craft off course [1]. But according to wikipedia we can't really tell because the effect of the maneuvering thrusters that keep Voyager aligned is much bigger than the Pioneer anomaly (Voyager 1 has fuel until about 2040).
It only really needs to be on upper management's desk:
"The $3.57 you save on capacitors per unit will cost you $50 in lost good will."
On the other hand there is a balance between longevity through simplified maintenance and replacing aged appliances with newer and significantly more efficient models.
Alternative phrasing: "the $3.57 you save (per unit) today will give you a $100,000 end-of-year bonus, but cost the company millions in future lost sales"
> but cost the company millions in future lost sales
Isn’t the company going to make more sales in the future (and hence more profit)? And isn’t replacing stuff with new versions going to lead to improvements in people’s lives through more efficient, quieter, and more effective technology?
It's been my experience that newer technology, though being more efficient, etc, breaks much faster than the old powerhouse tech from the 50's, 60's, and the 70's. I don't see my 1950's-something oven dying anytime soon. It will outlive me if I don't replace it for that one shiny new feature I convince myself I just have to have, or because it doesn't match my curtains.
So, yes, replacing stuff with new versions will bring more and more sales as opposed to building something that will last. Hence "planned obsolescence" and the war on making things repairable that we've seen lately. Great for business, bad for the customer.
To be fair, it's a bit more subtle than that. There's a level of survivor bias involved - all the unreliable appliances from the 50s-70s have long since been hauled off to scrap metal recycling, so what's left are the long-lived ones.
Modern electronics certainly can be made with much higher reliability than their mid-century ancestors, but the driving factor that prevents this is aggressive cost cutting that happily shaves pennies off COGS to shift the statistical distribution to the left. Unless consumers are willing to pay more for long-lived devices, this is doomed to continue.
Because the mechanism by which most contemporary appliances turn into junk is the "control board" "breaking" (which seems like it must be flash endurance, meaning directly planned obsolescence) and is unrelated to survivorship bias - old machines used mechanical timers, and even when they started using solid state it was simple. I'd say it's actually fallacious to point to survivorship bias, because old machines were built with parts that could either be repaired or replaced.
I've got a relative's dryer right now that's acting up. Do I really want to do the work of calling around to a bunch of repair guys to find out which one won't charge me a fee to come out and say "it's going to cost more to fix than to buy a new one, and we can have the new one installed and delivered tomorrow" ? I've heard this trash spiel so many times at this point I don't even care to try engaging, despite not even having to pay for it myself.
No, I'll spend the 20 minutes taking a few screws off and looking at the thing. Then order parts. Then next week, an hour or two to replace the part. Then it will likely be sorted for the next decade, but if it does break again it will continue to be repairable rather than effectively a consumable.
> Isn’t the company going to make more sales in the future
Only if you don't overdo it. When your products break too quickly many customers will stay one-time customers and switch to products from the competition. There's also the reputation damage to consider.
And of course it works best if you have a fairly high market share. If you have a low market share most products on the market are from your competitors, so you are better off boosting your reputation with longer-lasting products (compared to other products at the same price point).
Come to think of it, the "break it faster to sell more" strategy works mostly in monopolies, duopolies or with market collusion (like the Phoebus cartel that lowered the lifespan of light bulbs)
Or in other words, the $3.57 in savings will allow the product to compete in a lower price segment and increase sales significantly.
It is the behavior of the buyers that drives costs down. People are extremely cost sensitive in the mid to low segments, shifting their purchase decisions from one product to another just because of less than $1 price difference. Some companies cannot survive at all without saving those $3.57.
Buyers do not exist in a vacuum, and consumer behavior is commonly manufactured. Consumer behavior has never been a substantial justification for optimizing for wasteful and environmentally harmful business practices in pursuit of quarterly growth.
> replacing aged appliances with newer and significantly more efficient models
Are we still expecting to make significant efficiency improvements for appliances in the next 30 years? Will it be enough to justify the production of a new appliance?
Legal warranty for appliances like washers, dryers, refrigerators among others should probably be raised to at least 5 years.
The thing that gets me about modern warranties on appliances is how weasely they'll market their warranties. I've got GE clothes washer and dryer proudly proclaiming their 10 year warranty. It's a 10 year warranty on the motor and the drum (IIRC). Not on the motor inverter unit, which had a one-year warranty. Guess which part is likely to fail? Guess what that GE service tech is going to recommend you do after he prices out several hundred dollars of parts he thinks he might need because he's too lazy to actually diagnose the issue?
An LG dishwasher with a similar 10 year warranty on the pumps and what not in the dishwasher. Awesome, great. The display panel has failing LEDs. Is that under that warranty? Nope. Who cares about the pump not technically failing if one can't know what mode the dishwasher is in?
If they're going to stick a sticker on the face advertising their warranty on an appliance it should cover the whole appliance. Not just a small handful of parts that should practically never fail under regular use while all the surrounding stuff has a nearly useless warranty.
I'm so salty about warranties and support these days I usually try and do every possible thing I can do to fix the problem myself before obviously voiding a warranty before I ever bother calling their support. So worthless most of the time.
Sadly, there's no system for long-term reviews. In my dreams, Amazon/etc would engage customers about their durable products and ask how often you still use the product and if/when you're thinking of replacing it...
Given the economics, I wonder if Best Buy could pay customers $10 for a survey of their old products, knowing that it'll inspire upgrades etc.
I would not spend 10 seconds of my time writing a product review or answering a survey like that. If Amazon is interested in selling good products they can hire product testers who will do teardowns, destructive testing, and running them through a 5-year simulated use durability test.
Amazon is now, largely, just Aliexpress with faster shipping and easier returns (and higher prices).
Even in areas where they have brand name products, it's often impossible to surface them through their search. (I've, many times, failed to find something there and then went and searched Google/etc and the top result has been... an Amazon link to exactly what I'm looking for.) And if you purchase through Amazon, there's no reason to believe it's not gray market or something else where you may end up having issues with support/warranty if you ever need.
And combined with the inventory commingling, even if you find brand name products there you can't be sure you'll actually receive it and not a knock-off. So it really only makes sense to order things that are already the cheap/knock-off quality anyway.
So... yeah, there was a nice period of time there where Amazon was just "shopping made more convenient". These days it's "Aliexpress made more convenient". Unless I'm setting out to buy cheaply made Chinese imports with no warranty, I'm not even going to start looking on Amazon. There's little reason to.
> "The $3.57 you save on capacitors per unit will cost you $50 in lost good will."
Or it might cost you $0 in lost good will, and will gain you $5 in sales because many price-sensitive people will buy the thing that's cheapest, without doing an omniscient analysis of its quality.
Not to mention that gold-plating your capacitors won't do you much good if some other part is expected to fail first.
> if you cut corners, you may be rich, but then I'll think you're a bad boy!
Uh oh, a moral judgement from a peasant? Say it ain't so. Anything but that. I'm literally shaking right now.
Anyway, here are some actual incentives:
- If you do some shady corner-cutting, you'll be legally compelled to trade in your Bugatti and drive a used Kia the rest of your life
- If this chemical causes bodily harm to me, we shall inflict bodily harm on thee
- A portion of the profits will be placed in a trust and will be passed down to your children, if and only if your product lasts long enough to be passed down to our children
- If you (banker) lose our money, you will lose your head
These rules discourage new businesses from starting and you end up in a situation like France where their largest company is some fashion company formed eons ago (probably before all the regulations).
France is an extremely wealthy country with possibly the best quality of life in the world - many would say better than in the English-speaking countries home to most of HN.
Consumer products like cars, washing machines, etc... are mass produced and made to last for a specified amount of time, at the lowest possible price. Voyagers were unique pieces made with a quasi-unlimited budget.
The thing with consumer products is not evil; they are made to consumer expectations. Would you buy a $10k washing machine? Probably not, even if it is designed to outlast you. Of course you also won't buy a washing machine that lasts a week, no matter how cheap it is. Manufacturers did studies and noticed that there is a sweet spot at around 10 years or so of lifetime for typical household usage. More than that and it becomes too expensive, and people might want to change anyway to benefit from new technologies or some other reasons.
Once the "ideal" lifetime has been established, you are going to have parts that last the expected 10 years, others that last less, and others that last more, possibly forever. The parts that break too early need to be addressed as they may cause expensive warranty returns and a loss in reputation. But the parts that last forever also need to be addressed. Let's say the water pump is over-specified; sure, it is great, it will never fail, etc... but all that doesn't matter because the machine will be trashed because of some other failure. So why would the manufacturer, and ultimately the customer, pay extra for a pump they have no need for? Instead, a cheaper one should be used. The ideal consumer product is one where every part breaks down at the end of the expected life of the product, but not before; any more than that is a waste of money.
It's thanks to that value engineering that we can all afford cars, washing machines, fridges and televisions, with cash to spare. These are not luxuries for the elite anymore. They are not made like they used to be, but we don't pay for them like we used to either.
I know people get frustrated with broken appliances and electronics, but I feel like once you look at the forces involved it seems fairly obvious that it'll happen:
* Environmental regulations require additional or more complicated systems, which adds points of failure (i.e. variable-speed motor controllers, exhaust gas recirculation, etc.). They also require other changes, like using lightweight plastics where metals would have been used previously. Plastic fan blades crack; metal ones don't.
* Consumers, on the whole, only care about price and features. Those are tangible. Some people care about things like maintainability and longevity - but they make up a tiny, insignificant fraction of consumers. Thus, companies optimize for price - even if it comes at the cost of longevity. Any company that doesn't quickly loses market share to global competition from dozens of others that are more than willing to make that sacrifice and offer a better deal.
Those two factors explain everything about why modern appliances fail the way they do.
Voyager is something that I keep thinking about every time I see people talking about what great things Musk/SpaceX are doing with their fail-often/fail-fast approach.
I do get the value of iterating quickly towards a solution but that does not invalidate the conservative engineering approach other people have to take to build something much more stable/reliable where they have to be very cautious and can't just break things until you have a solution.
Voyagers were built by JPL and not contracted out so align to a degree with the spacex model. In part this was because budget got cut badly.
“In order to reduce costs and overheads, NASA decided to leave design and construction of the Mariner Jupiter-Saturn spacecraft to JPL, rather than to Boeing, General Electric, Hughes, Martin Marietta, and North American Rockwell, all of which had some level of preparation for a Grand Tour proposal. The largest aerospace firms lobbied NASA Headquarters and Congress for the contracts. In order for expensive projects to pass congressional scrutiny as part of the NASA budget, they often had to include an intention to contract out much of the work” voyager - Andrew J. Butrica
I think they also built a test spacecraft?
Anyways the approach (smaller focused team) contrasts with the big SLS type projects that are contracted out for political reasons.
On the other hand, its environment isn't particularly hostile (no moisture, dust, or vibration, though it gets more radiation than on Earth), and I think it doesn't have much in the way of moving parts (?)
About a decade ago I bought a USB fan to plug into my computer at work. It has been turned on and running for almost 10 years straight. I thought to myself "I should actually leave a good Amazon review for this fan, it's held up to the test of time for little consumer electronics like this."
I went to look up my purchase but not only is that listing gone, the brand no longer exists.
Exactly. You want a W/D combo that'll last forever? No gimmicks? No nonsense? Get a speed queen. Ugly as hell, basic as hell, and built like a brick shithouse and made right here in the good ol' US of A.
Problem is... they're expensive. Real pricey. This kinda industrial-tier consumer stuff does exist, but very few people bother buying it.
Speed Queens were exclusively installed in each of the dozens of coin-operated laundromats I’ve visited in my life. This is because a machine awaiting repairs doesn’t make the owner a penny.
As always, ask or imitate an expert with aligned incentives. A seasoned local auto shop owner guided me towards Acura after two of my fancy German cars needed multiple expensive repairs. He doesn’t service Honda group vehicles, but drives an Acura himself and I couldn’t be happier after listening to his advice.
Dog no they don’t lol. My family has been in both ecosystems over the years and by far the iPhones outlast the Android phones they buy both in quality and in updates.
True, but what percentage of phone owners will actually do this. Less than a fraction of a percent? The remaining 99 percent chuck their phone in a landfill when it gets slow.
Plenty of people buy secondhand equipment. Manufacturers want to stop that; spare parts are hard to get or needlessly expensive, and manuals are woefully incomplete or nonexistent.
I've contracted for someone whose customers thought so. (My job involved making stuff work without an important part from a defunct vendor.)
Their costs really did increase all the time. Not needlessly, despite what some people thought: their cost per spare part really did grow quite a lot, as the number needed per year decreased and the fixed overhead slowly increased.
Conversely, the system which has been described as the longest running computer system after Voyager is the UK's Police National Computer - turning 50 and still in use today. Most UK citizens will be on it in some way, shape or form.
Keeping it alive is a remarkable feat, but costing taxpayers eye watering sums of money, due to the ever increasing shortage of skills, while a modern cloud alternative is developed. Be careful what you wish for.
I constantly marvel at the reliability of an extremely complex machine, with hundreds of moving parts, that was built more than 50 years ago, has been in operation for 47 years, and was built to operate for only 5 years.
Wow: Triggering the protection system a second time turned off the regular radio:
"While the S-band uses less power, Voyager 1 had not used it to communicate with Earth since 1981. It uses a different frequency than the X-band transmitter's, and its signal is significantly fainter. The flight team was not certain the S-band could be detected at Earth due to the spacecraft’s distance, but engineers with the Deep Space Network were able to find it."
Heroes, both those who made the S-band radio and those who managed to retrieve the signal.
Meanwhile a javascript app of mine from 2022 does not build now because I can’t get it to install the dependencies for some reason.
BTW, that transmitter apparently cannot transmit back to Earth at this point because it is too far away. But it looks like it can receive (we can transmit with much higher power here?) and they managed to send a command to restart the primary transmitter. Now debugging.
> Meanwhile a javascript app of mine from 2022 does not build now because I can’t get it to install the dependencies for some reason.
Tired take... I promise that if your JS app ran in 2022 and you completely isolated the computer from the internet, your JS app would still run in 2067. This is completely unrelated. Also, JS is probably one of the easiest languages to reproduce builds for: if you have a lockfile, you're good to go.
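To make the lockfile point concrete: `npm ci` (unlike `npm install`) refuses to re-resolve version ranges and installs exactly the tree recorded in `package-lock.json`. A minimal sketch (the cache directory name is just an example):

```shell
# Reproducible install: fails if package-lock.json is missing or out of
# sync with package.json, then installs the exact pinned dependency tree.
npm ci

# For long-term reproducibility, point the cache at a directory you keep
# under version control or on backup media, so the registry going away
# later doesn't break the install:
npm ci --cache ./npm-cache
```

This still depends on the registry (or your cache) serving the pinned tarballs, which is exactly the kind of external dependency the parent comments are arguing about.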
The transmitter software built back then was so close to bare metal that it practically could be called hardware.
Stuff today has been so hilariously abstracted on such thick layers of complexity pressed into neat "simple" blocks that it's almost miraculous anything works day-to-day.
Once you fully internalize that your buggy program is running on a buggy framework, in a buggy language, in a buggy sandbox, on a buggy virtualization, on a buggy file system, scraping along on buggy silicon running buggy microcode, managed by other buggy silicon running buggy firmware, using peripherals with their own buggy silicon and firmware, with everything happening billions of times per second (a car engine typically doesn’t reach a billion cycles over a 20-year lifespan), it seems mind-bogglingly improbable that anything works at all lol.
The fact that it does work, and can even routinely reach 5 nines, is a testament to the generosity of the universe… and a good argument for making sure that to whatever extent possible, you should write your software to be resistant to random events and erroneous operations. Fail safe/ fail soft, fail and retry, fail and restart. The more resilience we build into our work, the less angst we create in the world.
Yeah, but if it wasn’t there wouldn’t be enough days in a human life to learn to use it.
Everything we create is to save on our most precious resource: human effort and time.
AI is so ridiculously wasteful if you think about it, from a computing point of view. But, it doesn’t require us to write a dedicated piece of code for each new problem, thus it’s saving humans effort and time.
Of course, the question remains, what does come out of all this productivity?
Actually, the amazing thing is the Voyager team has documentation to know how all that stuff built in 1977 works, so they can still maintain it.
So much software nowadays is written like it's a disposable item (and then the company decides to keep using what amounts to a 2006-era paper cup from McDonalds every day in a business-critical process for the next 20 years).
Actually.... most likely no, unless you fudge the system clock. Root CAs will expire, chains of trust will break and your app most likely won't run unless you very carefully design it to not have any such dependency, not even indirectly through any of the imported libraries you use. Or your browser will throw an error and refuse to start for the same reason.
But hey, maybe you won't even be able to login if you do so through a Google/Microsoft/Apple ID.
...and you use an SSD? Be amazed at how fragile they are. The chances it will still be running in 45 years are... not good.
This suggests a question: how does the Voyager handle authenticating commands? Whatever crypto they had in 1977 is surely completely broken now. Security by obscurity? Or can anyone with a large enough antenna send commands to it?
The Voyagers’ protocol uses no encryption and the specification appears to be public. I’ve linked a PDF of what appears to contain the protocol description below. So anyone with a *large enough* antenna can talk to it. It just happens that the “anyone” on Earth really is just NASA and its Deep Space Network.
Meh. I tried to npm install and run it. Npm failed with a cryptic error message. I have a built docker image of this so in any case, I can eventually fix it. Was not that important though.
But then again, why the fuck does that fail now?
I had a friend who was convinced that JPEGs would slowly degrade sitting on a disk. This was around 1990s so he would store important things in BMP so they would not degrade.
Your friend was sort of not entirely wrong. A JPEG decoder can make many decisions about how to decode, while a BMP is just pixels. (So then it's "only" a matter of color spaces.)
I haven't experienced this with JPEGs but wouldn't be surprised, as I have experienced it with MP3s. Newer decoders can't play some of my MP3s made with older encoders.
Well, a compliant JPEG will still open in any reader, but can look slightly different. In a low-resolution image, that could change some fine detail you really cared about.
Ages ago I did some bit flip tests on different image encodings for a school project. It was interesting to see how a single flipped bit would propagate across the image in encodings that relied on similarity to nearby pixels for compression. Really the effect was limited to the frame, but the effects could be striking.
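A bit-flip test like the one described can be sketched in a few lines; the function below is a generic illustration (not the original school project's code) that corrupts one bit of a byte buffer, which you could then feed to any image decoder to compare formats:

```python
def flip_bit(data: bytes, bit_index: int) -> bytes:
    """Return a copy of `data` with a single bit inverted."""
    buf = bytearray(data)
    buf[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(buf)

# In an uncompressed format like BMP, one flipped bit in the pixel data
# changes exactly one pixel. In a format that predicts from neighbors
# (JPEG's DC prediction, PNG's row filters), the error can smear across
# the rest of the coding unit, which matches the striking effects the
# comment describes.
original = bytes(range(16))
corrupted = flip_bit(original, 35)  # flips bit 3 of byte 4
```

Running the corrupted bytes of a real image file through a decoder, and diffing the decoded pixels against the original, reproduces the experiment.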
I'm not even sure how to categorize such an amazing, well-known but widely dumped-on truth. It's incredible, but this tower of protocols has been quite enduring and allows many, many epochs of JS to come and go as time marches on. And it will keep being here.
There's so much fear, anger, and mistrust of JS. But this "tired take" was with us, still is with us, and will keep being with us for a long, long time. Nothing else comes as close to being a media form as HTML+JS+CSS.
Caveat, the output from your build (JS/HTML/whatever) will likely still work like it did today in 20 years time.
JS isn't the problem, it's the overcomplicated dependency and build systems that were layered on top. It's why Go is a breath of fresh air. A 10 year old codebase will still build and run like it was made yesterday.
because it's not something that ever surfaces on the first page of Google when searching "how to use npm", and it's never discussed in any YouTube videos or bootcamp programs. it's only ever something 'found' when dredging through Stack Overflow searching for error messages.
Will we ever achieve near-light speed travel and catch up to these guys? Or do we first reach the point where we can upload our consciousnesses into machines (bio or silicon) such that speed no longer matters as much?
Off topic a little, but I feel that Heinlein-esque generation ships or frozen-embryo payloads carrying biology, and all that bio requires, are not the future of human expansion. I think it’s long-lived mechs carrying encodings of human neural pathways, with maybe just enough biology to generate randomness. (Could we make hardy fungi function as synapses?)
Reminds me of the sci-fi trope where the first humans are sent off into deep space to reach a new habitable planet. And when they arrive it's already full of cities and life because their original ship was much slower than the high tech ships that came later, beating them by decades to the new planet.
This is part of the story of the new Peter F. Hamilton book "Exodus The Archimedes Engine" [1], which is also the basis for an upcoming game by ex-BioWare developers [2].
For some reason I find a number of comments in this thread quite weird, like they are written by AIs. I'm guessing there definitely are people who are creating bot accounts with AI at this point...
It would be really great if your comment itself was created by AI to throw off people suspicious of your account and it being a bot and its comments being AI.
(thinking aloud/voice in my head)
If 'those things' are recharged by sunlight, I wonder how they will get the energy to continue transmitting (assuming they have some fuel source AND solar panels) once the fuel runs out and they can't catch any sunlight.
my dialogue with ChatGPT:
Voyager 1 and Voyager 2 are powered by *Radioisotope Thermoelectric Generators (RTGs)*. These RTGs use the natural decay of *plutonium-238* to generate heat, which is then converted into electricity using thermoelectric materials.
Here’s how the process works:
1. *Radioactive Decay*: Plutonium-238, a radioactive isotope, decays over time, releasing a consistent amount of heat.
2. *Thermoelectric Conversion*: Thermocouples in the RTGs convert this heat directly into electricity.
3. *Power Supply*: This generated electricity powers the instruments, computers, and communication systems on each spacecraft.
Each year, the power output of the RTGs decreases slightly as the plutonium decays, so the Voyager missions have had to shut down non-essential systems over time to conserve power. Despite this, both Voyager 1 and Voyager 2 have enough energy to continue transmitting until at least the late 2020s, when their power levels will likely drop below the minimum required for communication.
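The decay part of that decline is simple exponential decay, which a short sketch makes concrete. The launch power (~470 W electrical across the three RTGs) and the Pu-238 half-life (~87.7 years) are approximate public figures; note this models the isotope decay only, not the additional thermocouple degradation, so real output falls faster:

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238 (approx.)
P0_WATTS = 470.0              # rough electrical output at 1977 launch

def rtg_power(years_since_launch: float, p0: float = P0_WATTS) -> float:
    """Electrical power if decline came from radioactive decay alone.
    Thermocouple degradation, significant in practice, is ignored."""
    return p0 * 0.5 ** (years_since_launch / PU238_HALF_LIFE_YEARS)

# After 47 years, decay alone would still leave roughly 320 W;
# the real figure is lower because the thermocouples degrade too.
```

That gap between the decay-only curve and actual output is why the mission keeps shutting instruments off rather than just waiting out the half-life.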
That's the problem, they won't have enough energy to transmit anymore.
The Voyager probes are powered entirely by their RTGs; they have no solar panels and no batteries[1]. So if the power output from the RTGs drops below the level needed to run the transmitter, Voyager can't talk to us anymore. There are no batteries to recharge and no solar panels it ever drew power from.
The exciting part of this discovery is the potential to keep talking to Voyager longer than we thought. If the S-band transmitter uses less power and we're still able to detect that signal, we may be able to communicate with the probes at lower RTG outputs than we initially thought. Though at that point, if the radio is the only thing still running and there are no instruments operating, the usefulness is probably pretty low.
Fundamentally there isn't any change in the underlying physics. You're limited by the half life of the isotope, so you either choose a longer-lived isotope (which actually may be even harder post Cold War as much of the ideal isotopes were byproducts of nuclear weapons manufacturing) or launch a bigger RTG than you need so you have enough power later on in the mission.
I think the more interesting thing is that it's suddenly become so much cheaper to put things in orbit; we could simply put a larger satellite with a larger nuclear battery out there.
Maybe this has been laid out elsewhere, but exactly what "science" is Voyager 1 doing at this point? It's taking "readings" of things that may simply help us to get some picture of where we live... but what value does it truly have?
This is a soft value, but I’m glad for every bit of Voyager news that comes out. I find it personally inspiring, think that’s probably true for others, and I’ll take as a small win anything that tends to inspire people about science (or STEM more generally).
Other than taking a picture of where we live, it is studying the boundary of our solar system (the heliopause).
We can't know what value the readings will have for the future of the science being studied. The studies will end in 2025 when it won't be able to power any instruments.
We can still track the position and speed. Any deviations from theoretical models would be interesting. Non deviations are also useful, although less interesting.
In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the Voyager 1 and Voyager 2 space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose".[89][90]
In May 2021, NASA reported on the continuous measurement, for the first time, of the density of material in interstellar space and, as well, the detection of interstellar sounds for the first time.[91]
This is the most fascinating part to me. Isn't it well-established how sensitive a signal we can hear? Did they implement something like a new signal analysis method that enabled it?
And it says this hasn't been used or even tried since the '80s; I guess it grew too faint. Looking up the frequencies, X is 8–12 GHz and S is 2–4. Doesn't that mean X gets more data across at the same redundancy level? Why have this slower transmitter at all for only the first years — power conservation, despite the fresh RTG?