Semi-related tangent: During the early 1990's I developed a star chart and ephemeris for the (momentarily) fashionable Windows Mobile platform called "Pocket Stars" that calculated one's geographical position from three or more sextant observations, mainly as backup for offshore sailors should their GPS system fail. For reasons I still can't fathom, a contractor for the Israeli military purchased many copies, apparently so their tanks and troops could remain oriented after all other electronic systems failed. My 15 minutes of Dr. Strangelove entanglement.
For reasons I still can't fathom, a contractor for the Israeli military purchased many copies, apparently so their tanks and troops could remain oriented after all other electronic systems failed.
EMPs are a fantasy. Absolutely and unreservedly a complete fantasy. ZERO have ever been tested, and there’s literally a better use for a nuclear warhead. Not only is it a a completely hypothetical attack, it would look identical to a normal nuclear ICBM attack, and absolutely would result in a traditional retaliatory attack. Why would anyone trade an EMP for all their cities? Literally doesn’t make sense. No one in the nuclear arms control world takes it seriously. It’s a boogeyman for defense contractors and grifters, but I repeat myself.
All that said, GPS denial is a very real thing, and has been repeatedly demonstrated in an operational context. It’s relatively trivial to do. You just need a regular ground based antenna, and not a particular strong transmitter because satellite signals are weak. Just broadcast a different timing signal and voilà! You now tricked someone into thinking they’re 50 miles away from where they really are.
Wait what? There were specific nuclear EMP tests. And sure, in any realistic nuclear war scenario EMP is just an unpredictable byproduct like your link says, but that doesn't make it fantasy I believe?
No. That’s not what I mean. I’m talking about big boogeyman of an EMP attack that destroys the either the North American electrical grid and assorted electronics or merely all electronics in a city.
There is no nonnuclear EMP weapon. Every proposed EMP attack is literally a hydrogen bomb delivered via an ICBM and detonated at extremely high altitude. That is the only way to deliver one to the proper place and the only way to deliver the required energy. That’s just the physics. Even then, the actual effect at ground level, is unpredictable because of simple shielding and atmospheric turbulence. But the undetermined effects aren’t what makes an EMP fantastical. It’s fact that it’s delivered by an ICBM!
The premise of an EMP attack is that somehow the attacker could surprise their enemy and land a catastrophic knockout blow near instantaneously (where “instantaneous” is defined as “between 10 and 30 minutes” (i.e. the flight time of either an SLBM or ICBM)) with little to no retaliation. A Launch on Warning policy (i.e. launch a retaliatory strike when an incoming missile is detected in the air) along with ground and space based surveillance systems makes attempting to execute an EMP attack suicidal.
Missile launch detection systems have been operationally deployed and maintained since the 1960s. They work. Launch on Warning has been the policy of the United States since the 1960s. Implicit in a LOW policy is that the retaliatory strike order is given in minutes from a detected launch. This means the retaliatory nuclear weapons are already sent on their irreversible course before any incoming detonation occurs. This is the defense posture a hypothetical EMP attacker is lobbing an ICBM into. This isn't fantasy. This isn't just math. It’s the explicit nuclear posture of the United States for the past 60 years. It’s what makes Mutually Assured Destruction (MAD) work, and arguably has maintained peace between nuclear states for 70 years.
If an attacker has decided to launch a nuclear laden ICBM they just started a full on nuclear exchange, because that’s the response. No one is waiting to see where it goes. It’s “The missiles are flying. Hallelujah, Hallelujah.” Now. Given that you’re in an inevitable nuclear exchange, is it reasonable to waste a nuke on a roll of the dice on whether it actually do anything when you could actually blow something up? Nay! Knowing that you are being blown up?
On a related note, this is the exact situation why the proposed Prompt Global Strike weapon is suicidal. An ICBM armed with high explosives, looks exactly like an ICBM armed with nuke. Similarly, a kinetic energy hypersonic glide vehicle is not suicidal specifically because it doesn’t travel on a ballistic arc.
>EMP ran to thousands of amps, damaged at least 570 km of telephone lines, 1000 km of buried power lines, and caused the destruction of the Karaganda power plant.
A 300kt blast at 290km was able to induce 1.5 to 4.5kA in the unshielded power line buried underground at 2m, over the area hundreds of kilometers across. Imagine what it could do to the areas that are a bit more populated than Kazakh steppes in 60's.
So yeah, while the nuclear EMP is probably not going to be used as a primary weapon in a global nuclear strike as you said, it's not a huge stretch for one of the warheads to be dedicated for the EMP over some large but less important area, not necessarily as a decapitating strike, so preparing for it makes sense. Besides, there would be at least local EMPs in the area after the "ordinary" nuclear strike, which can be really disabling if unprepared for.
Yeah. So you are probably correct that it is unlikely that someone only sends a single warhead to do an EMP attack. What I don't understand is why you would think that it is unlikely that they send their whole volley, and some warheads are programmed to hit targets while some others are programmed to detonate to maximise EMP damage.
Sure there is MAD, but if you are a military leader would you like to hang your hat entirely on that? After armageddon when you climb out of your bunker would you like to be the one who still has tanks to command, or the one who can't command anyone because all your radios have fried?
Regarding your other comment: why wouldn't a high-altitude detonation, even outside the atmosphere cause an EMP? I feel like the gamma photons emitted in space will eventually hit the atmosphere and with that cause electrons to spiral along field lines. Isn't the question just one of intensity?
Or is it largely dependent on multi-photon interactions to impart enough impulse on the electrons?
> Why would anyone trade an EMP for all their cities?
ICMBs were a theoretical threat. Cold War doctrine had a more realistic (and less apocalyptic) WWIII that would be fought by tactical nukes and tanks through Europe.
We will have countries against each other fighting for their borders but world war is v unlikely in our lifetime.
We love our cheap goods, cheap-ish energy and ability to change presidents and prime ministers.
Yes China, North Korea and Russia banding against the rest of the world is a real threat but China has shown it is selfish too and cares more about its economy.
WW1 was a pretty inefficient war, WW2 got better weapons and communication infra, but modern war fare is seriously destructive.
Intercontinental missiles fired from stealth submarines with nuclear payloads that break apart in air into 20 little payloads that then destroy an entire city 1000s of kms.
Modern warfare doesn’t need armies of millions of men. Whoever can best see their enemy via satellites and oceanic array, direct the most destructive energy using missiles and drones decapitates their enemy.
One nuclear warhead possesses multiple times the energy spent on both world wars combined.
> Yes China, North Korea and Russia banding against the rest of the world is a real threat but China has shown it is selfish too and cares more about its economy.
I really want to believe that. But the same logic did not stop the first world war.
In that case, why both sides of the Cold War spent fortunes on conventional forces along the Iron Curtain?
Like yes it is playing with (nuclear) fire, and maybe they were wrong, but there plenty of professionals who, maybe biased by their positions, who felt the need to get ready for limited nuclear war. And I'm not really convinced either side would want to risk destroying human civilization over Frankfurt.
> Also, there is no difference between a tactical and strategic nuke from an escalatory perspective. Once the genie is out, it’s out.
I mean, why? Why shouldn't tactical and strategic be separate steps on the ladder?
There are people who argue that there are ways to keep limited nuclear warfare limited. [0] I think the RAND institute also published some study on it not outright rejecting the idea but I can't find it atm.
"unique radio-electronic weapons based on new physical principles" sounds an awful lot like the breathless tales of secret Russian Tesla death rays that have been circulating since the 80s, if not earlier.
If Russia could "neutralize entire armies with just one short electromagnetic impulse", or even just "disable missile warheads and onboard aircraft electronics miles away", don't you think they'd be using that capability against Ukraine?
The author of that paper was obsessed with the idea of EMP weapons,[1] so I'd take it with an entire shaker full of salt unless you can find solid supporting evidence elsewhere.
> don't you think they'd be using that capability against Ukraine?
No, and that's entirely the point.
Their non-use of their advanced weapons systems is as equally interesting a subject as their forward-deployment of expendable, less advanced systems.
The Russian military certainly have their inefficiencies - as all modern militaries do, including "our own" - but they also have capabilities that are going to be more important to use against NATO/USA, than Ukraine.
The Russian mindset appears to support the idea that World War Three is well and truly under way and has been since the West illegally invaded Iraq, in 2003. Ukraine is merely the latest in this conflagration that has been rendering 'lesser nations' asunder, for two decades already. The whole world has been watching not only American/NATO, but also Russian doctrine in play for decades.
So I guess the doctrine is, don't play your best hand first .. and reserve your big muscle for when you fight big muscle.
NATO is adding its muscle to Ukraine, no question. But I wouldn't expect to see Russian - or NATO - advanced weapons systems in use until there is actually direct, open conflict between Russia and NATO.
Only then would non-nuclear EMP's, and indeed tactical ("micro") nukes, and other such more 'advanced' weaponry end up on the battlefield, if there is even one after the first few hours of 'real war'.
Russia have thrown everything they have at Ukraine over the last year-and-a-half. Cruise missiles, hypersonic missiles, their best troops and naval assets.
Now they're buying equipment from Iran (!?) and North Korea.
Russia, in their deep USA envy thought they'd have their own Shock and Awe! 3 day SMO.
Instead they overestimated themselves, and massively underestimated Ukraine.
The cupboard is empty. The only thing they have left is nuclear weapons, which they're quite rightly terrified of using. All that remains now is a long slow and grinding defeat as Putin expends every available Russian male in his desperate attempt to remain in power and not back down.
>Russia have thrown everything they have at Ukraine over the last year-and-a-half.
I don't believe that's the case. Russian military doctrine has always been to reserve the best systems for the end-game, and front 'fodder' in the beginning stages of things.
I see Russia's war theatre manifest also in Syria, where the very same tactics are utilized to suppress the field.
"Shock and Awe" is a US doctrine. Russian is more "Shake and Hold".
"Massively overestimating themselves/underestimating themselves", is very difficult to contextualize, if you don't actually speak Russian.
>The cupboard is empty.
I'm sorry, I really don't agree. You might want to take a deeper look:
>> Russia have thrown everything they have at Ukraine over the last year-and-a-half.
> I don't believe that's the case. Russian military doctrine has always been to reserve the best systems for the end-game, and front 'fodder' in the beginning stages of things.
This still doesn't explain the BMP-T's, the T90's and S300's that are deployed to Ukraine and destroyed in Ukraine.
If you keep that device off and shielded it should survive the attack. Then you switch it on. Power should come from the tank, plus the internal battery for how long it lasts. If the tank can't power on after that attack it's useless anyway.
There was an article from several years ago where the US Coast Guard was teaching US Navy personnel how to do celestial navigation, since this was lost institutional knowledge in the USN but retained in the USCG. The purpose was for navigation in GPS denied areas.
I wasn’t around for the 90s edition of windows phone (at least not enough to understand any of the tech) — i guess for lack of a camera and compass +gyro, would the observations be done out of band and just put into a form?
I'm going to replay this from memory without googling it, just to keep the late 20th century air of some people swapping unverifiable facts in a pub: There was an old friend of my mother's, now deceased, who was a retired engineer. He claimed to be the guy who invented the 2-part tuna can, and was known for handing out end chunks of titanium castings left over from some business he ran. At any rate, he had a couple of probably 8x4 inch (wide by high) cylinders of ground quartz crystal, so flat that you could put a drop of alcohol between them and they would be inseparable if you tried to pull them apart vertically. He claimed these were from the navigation system of Polaris nuclear missiles and their navigation worked by what you'd expect: Looking at the stars and comparing them to a super accurate timestamp, plus some core memory logarithmic tables or whatever. I apologize of I've gotten any of this wrong, but he seemed legit, and again I have switched off Google for this one.
My father worked on the guidance system for the Trident nuclear missile at Singer-Kearfott for many years and this is mostly accurate. There was a lot of geometry involved.
They used to test launch the missiles back during the cold war from submarines off of California. The missile would cross the continental US and land in the water off Florida. The Soviets would invariably send "fishing" boats to measure the tests but the coast guard would never shoo them away -- they wanted the Soviets to know just how accurate these missiles were. And they were accurate.
It blows my mind what was possible with analog tech.
As far as I’m aware test shots of both us Air Force and navy icbms off the west coast went to Kwajalein atoll.
Some other interesting details…
Naval icbms have star tracking for alignment and correction but land based ones do not.
The reasoning is that they are all dead reckoning…which requires knowing the initial location. The silo doesn’t move, but the subs do. Their location is tracked with another inertial guidance system but that adds error, so they use the star tracking.
Overall, the inertial navigation systems in modern icbms are not the limited in accuracy.
My favorite…there was a lot of pushback inside the us military on investing in increasing the accuracy of naval ICBMs. Doing so was seen as communicating/implying a shift in strategic goal from targeting cities (lower required accuracy) to targeting hardened facilities (higher required accuracy). That was seen as confrontational because sun based nuclear weapons had been generally treated as a second strike capability, but the primary reason to have the ability to target hardened facilities was for a first strike
From what I recall land missiles (minuteman) does its own alignment while in the silo to give it what it needs to hit that window in space where the RV detaches. There some documentation and YouTube videos depicting the alignment process.
Yup. But that’s based on knowing where the silo is, they are less mobile than subs (or trucks or trains). The alignment is all based on dead reckoning from where it started to where it’s going to know where it is, where it wants to be, and how to steer there.
So the confrontational is more because of the perceived shift to offensive ("unprovoked") first strike over "defensive" second strike, rather than any perceived loss of a deterrent second strike capability?
As I understood it yes. The ability to disarm the other side at will, and prevent any retaliation against cities, was seen as an more offensive capacity
Just want to note that the front page top 10 - 15 the last 24 hours or so has been disproportionately full of US government and military related posts. Something must be coming, some big news. BOLO
I'm recounting second hand so can't source anything but here is an SLBM test done in 2021 by a sub off of Florida, and it landed in the South Atlantic. So they must have several different ranges, and possibly it's changed over time.
Generally the logic is don’t launch on trajectories over land unless you have to. I believe all of the land based is ICBMs are still tested from vandenberg on the (most beautiful section of) California coast near Lompoc for the same reason. Vandenberg doesn’t have any deployed ICBMs, they all fly to kwaj for testing. Same reason nasa/spacex/seemingly everyone but China make it a goal to not have launches go over populated areas.
It's not just analog tech: it's actual engineering. The hardware we have is amazing, most of our software royally sucks. But software done right is also most impressive.
To get great software, you need a small team and give them extremely clear requirements of what it should do with an assload of extremely clear testcases.
This is a fairly generalizable rule in any engineering, unless you really are running the Apollo program (then when divide-and-conquering should probably remember this rule).
The best software comes from a small team of top level devs who are also top level subject matter experts on the software’s domain. This doesn’t happen very often.
> Use of pointers required a written, approved, variance from the SW standards board. All the code was formally peer reviewed by at least 6 people. More complicated code would be reviewed by 20+ people - in the same room.
Do you think the "software engineering" people are giving engineering a bad name?
"Christian scientists" are Christians who attach themselves to the word science because science has such a powerful reputation. In their case I'm not worried about their trickery because everyone recognizes this. Everyone knows that Christian scientists are not scientists.
With software engineering it's not as clear. It's possible to create software applying engineering principles, but that's not what 99%+ of what is called "software engineering" is. And because most people don't recognize this, I think eventually this might run off "engineering".
In case you are wondering about all those “reading rooms” you may see in various cities:
Not to be confused with Christianity and science, Christians in Science, Christians in science and technology, or Scientology.
Christian Science is a set of beliefs and practices which are associated with members of the Church of Christ, Scientist. Adherents are commonly known as Christian Scientists or students of Christian Science, and the church is sometimes informally known as the Christian Science church. It was founded in 19th-century New England by Mary Baker Eddy, . . .
Yeah, I've worked in structural engineering with real chartered professional engineers. Most of that work was relying on rules of thumb and bodging in hacks to fit around too small customer budgets, changing requirements, contractors trying to exploit loopholes, or bad designs from architects. No job ever had the budget to do anything "properly".
Software development usually seems far more controlled than that was.
I have my interpretation of "engineering" based on my degree, wiki's definition and discussions.
But people have various interpretations of that, especially those who try to act as if for some weird reasons software was not engineering and that's what I'm asking for.
The guidance systems had to account for local variations in gravity as the missile flew to its target. One of the reasons ballistic missile accuracy improved so much was that measurements and mapping of these local anomalies improved so much.
Most are due to magma reservoirs or large mineral deposits. Some are well-known in their effect, but their specific cause is still unknown:
the "quartz crystals" were likely optical flats and they were connected by wringing-on (like a set of gauge blocks).
https://en.wikipedia.org/wiki/Optical_flat#Wringing
I'm not sure why you'd use an optical flat in a missle nav system, now I'm curious.
Given the CEP they were aiming for, and amount of time they had to fiddle with them, I'd guess they started very carefully with the physical precision of components.
The radius of a circle within which 50% of rounds will be found.
For the V-2 rocket this was 4.5 km.
For the Minuteman III missile, 240m.
Blast effects decrease with the cube of the radius, so a more accurate weapon means a smaller effective warhead. The "Flying Ginsu" AGM-114 Hellfire missile has a CEP of 5 meters or less, and uses blades rather than explosives to minimise any unintended casualties.
But then withdrawn from service due to treaty, because with that accuracy, first striking your target's silos and hardened facilities starts to be a viable option.
Sometimes you can be too accurate.
Also, related, the W76-1/Mk4A height-adjusting fuze solved this this in a different way, by recognizing that there's a 3D kill-volume (above and around the target) rather than a 2D kill circle.
Consequently, if you aim beyond the target and have your warhead measure actual height vs expected once on a ballistic trajectory, you can adjust the fuzing height to detonate within the volume, even if you overshot the target. See figures 2 and 3. https://thebulletin.org/2017/03/how-us-nuclear-force-moderni...
Un/fortunately, this puts us back into viable first-strike territory.
Based on pictures the CEP should be very small, centimeters maybe.
This missile doesn't explode, instead it send 6 blades at the target. Pictures show direct hits.
I'm well aware, and that was my take, and I was initially going to write "within a metre", but couldn't find substantiation of that. The Hellfire itself offers the 5m value.
Any window that a telescope is looking through will need to be optically flat, otherwise it will totally mess up the ability of the telescope to get a clear image.
I interned at the Charles Stark Draper Laboratory 20+ years ago. One of the guys was telling me at lunch about his work that day. He had to run some simulations to tell the Navy if their Trident guidance system needed to be sent back for re-manufacturing after a forklift operator dropped it. My understanding is that the handling crate had some kind of integral rough handling detector. (I presume calibrated weights on thin wires, in 3 axes, so you can bracket the peak accelerations by looking at which wires broke and which didn't.)
I was told that once above most of the atmosphere, the MIRV bus guidance unit pops the lens off and takes a snapshot of the stars. The MIRV bus is rotating at that point, so the telescope scans a good section of the sky, but it takes a snapshot at a very specific time and is only interested in a rather small patch of sky. It compares what it sees with a stored copy of what it should see, and uses that to re-calibrate the inertial measurement unit.
The way it was described to me was that one rotation checked the star field, the IMU was re-calibrated, the very next rotation was used to double-check the IMU calibration, and right after that, the MIRV bus started popping off the individual warheads. It seems strange to me that they would use such a short observation window to re-calibrate the IMU, rather than having a long period of continuous re-calibration. It's also highly possible that the guy explaining it to me was giving intentional misinformation to avoid accidentally disclosing TS/SCI to a non-cleared intern.
The nitty-gritty of guidance is fascinating. It's unfortunate that the main use cases for ultra-accurate guidance are in weapons.
There was a pendulous integrating gyroscopic accelerometer from the Apollo IMU sitting in a glass case in the lab... a shame given how few people were allowed into the lab.
Rough handling indicators are either glass vial based or contain a weight that slides into a window if sufficient force is exerted. There are plastic springs designed to break and other springs that force the weight to stay in the indication window.
An enthusiastic guy on youtube was trying to drop an egg from the edge of space and got shutdown for developing a terminal guidance system. So it's not so much about the only use being weapons, it's that weapons have state monopoly on precision guidance.
There is much of a Chevaline, an upgrade to Polaris, in the Museum of Berkshire Aviation [1] which is near Reading, about 50 miles west of London. They also have some of the navigation systems, such as gyroscopes you can get a good look at.
I find the level of engineering fascinating, it's not often you get to see such detail close up.
I took a few photos[2] when visited about 10 years ago. Being a small museum it's quitehiggledy-piggledy but they also have a range of unusual exhibits such as a model of the Miles M.52 and a real Fairey Jet Gyrodyne.
>cylinders of ground quartz crystal, so flat ... their navigation worked by what you'd expect: Looking at the stars and comparing them to a super accurate timestamp, plus some core memory logarithmic tables or whatever
I didn't expect that, can someone explain? Sounds like some kind of dead reckoning system, I don't get why you need flat quartz discs to read the stars though?
Star trackers have been around for quite a while, so I'm not surprised! Even the Voyager probes had the ability to orient themselves relative to the Sun, Earth, and the star Canopus.
[it] was so powerful that it could see the stars even in daylight
That's a very odd thing to say: when you're at 85,000 feet (as noted, the SR-71's cruising altitude), the concept of a blue "daylight" that blocks the view of space only exists below you, not above you [1]. There is not enough Rayleigh scattering at that height to get in the way of a camera looking for stars to map to a star chart. It's pretty much why they used star navigation: they're always visible when you're halfway to space.
There are actually some interesting patents related to the star sensor in the devices, particularly related sky background gradients. Northrop’s patents are more interesting than Lockheed’s, IMHO. In all cases the patents mention using an IR-pass filter to improve contrast.
The sensor is essentially an analog lock-in/synchronous detector with a rotating shutter and a wedge prism that nutates the star field about the boresight. the patented component in most cases is the shutter (there are different patterns, and Northrop came up with some clever designs).
You end up with a frequency modulated signal from the photomultiplier tube (carrier is from the shutter, modulation frequency the difference between the prism and shutter).
By measuring the phase and magnitude of the modulated signal, you can steer the telescope onto the star, and the “coding gain” from the lock-in is substantial.
I have recently been bitten by the modular synthesis bug, and it is wild to me how these terms seem to have correspondence even though I am learning it from the audio domain. I guess a signal is a signal and that is the power of analog.
For context: learning about carriers, modulation frequency in relation to audio synthesis [acknowledging there is more than one type of 'FM' in relation to audio but nonetheless similar]).
> when you're at 85,000 feet (as noted, the SR-71's cruising altitude), the concept of a blue "daylight" that blocks the view of space only exists below you, not above you
The Astro-Nav System didn't work only at cruising altitude, it tracked stars even when the SR-71 was on the ground. [1]
I don't necessarily doubt you, but linking to a 76-page PDF without any indication of where the reader is supposed to find support for your claims is not nearly as convincing as you think it is.
> I don't necessarily doubt you, but linking to a 76-page PDF without any indication of where the reader is supposed to find support for your claims is not nearly as convincing as you think it is.
It's funny you say that, because based on almost all the assertions you read here on HN which are taken for granted without any links to support them, you'd think linking to an authoritative PDF would be more convincing than if I hadn't linked to it. But based on your comment it seems like it works the opposite way (unless I point to the exact place supporting my assertion, presumably).
But sure, fine, I'll take your bait.
I only skimmed the document, so I can't point you to the best place where to look for it. But I can point you to page 31, paragraph 10A-69 which says the following: "When alignment is performed in a hanger, there is a possibility that the system will track false stars (ceiling lights, etc). To prevent this from happening, the INERTIAL ONLY mode is selected and enabled after completion of an alignment. After the airplane taxies into the open, ASTRO INERTIAL mode is selected and enabled". I'm pretty sure this supports my assertion.
> It's funny you say that, because based on almost all the assertions you read here on HN which are taken for granted without any links to support them, you'd think linking to an authoritative PDF would be more convincing than if I hadn't linked to it. But based on your comment it seems like it works the opposite way (unless I point to the exact place supporting my assertion, presumably).
Not at all, your comment was strictly better for including that link, thanks for including it. It was neat to briefly skim the PDF, even if I don't have time to read the 76 pages of technical documentation.
> I only skimmed the document, so I can't point you to the best place where to look for it. But I can point you to page 31, paragraph 10A-69 which says the following: "When alignment is performed in a hanger, there is a possibility that the system will track false stars (ceiling lights, etc). To prevent this from happening, the INERTIAL ONLY mode is selected and enabled after completion of an alignment. After the airplane taxies into the open, ASTRO INERTIAL mode is selected and enabled". I'm pretty sure this supports my assertion.
Maybe. But that whole paragraph is hard to understand. How were they doing alignment in a hangar to start with, were these roofless hangars?
> How were they doing alignment in a hangar to start with, were these roofless hangars?
I don't think so.
You can read more about alignment in page 7, paragraph 10A-23 and the following pages. It'd usually involve using test equipment and manually inputting the current coordinates (e.g. heading, position, altitude, date/time, ...) into the system, from what I understand.
Regardless, the paragraph I mentioned in my previous comment clearly instructs to enable ASTRO mode after the alignment is completed and the airplane is taxiing in the open. It also mentions that in ASTRO mode, the system can confuse the ceiling lights of the hangar for stars (hence why it needs to be disabled inside the hangar), which means that ASTRO mode does track stars on the ground.
Now that's a money quote. The following sentence, which says that "If star tracking has not commenced before takeoff, it should start at an altitude where cloud cover and sky brightness conditions have improved.", leads one to believe that it was possible for the sky to be too bright to track stars from the ground sometimes (since it basically says exactly that), but it now seems clear ground tracking during the day was possible. Maybe it depended on the latitude, time of day, and position of the Sun relative to the stars (low latitude in the middle of the day without sufficiently bright stars far enough away from the Sun = sky too bright, higher latitude early or late in the day with sufficiently bright stars far enough away from the Sun = track from the ground, any other combination = results may vary). Actually, I just saw that the bottom of page 42 describes how to handle situations where the sky was too bright for specific stars.
The whole document supports the claim, it details ground astroinertial startup which would not occur if it could not see stars from the ground. The fact they made this document available is very useful to those who are actually invested in the subject.
I think that in order to calculate your latitude and longitude position using what amount to high-tech sextant readings you have to know your local down vector to a high degree of accuracy. That is, the system judges the position of the Blackbird by comparing its local down vector with the angles to the star sightings. If the down vector measurement is off by an arcsecond, the measured latitude and longitude position on the earth will also be off by an arcsecond (at sea level, about 600 feet).
It seems to me that there's an interesting problem that the Blackbird goes so fast that its down vector changes dramatically over the course of a typical flight, so I'm wondering how it accurately updates its down vector reference. Jets can go through arbitrary and wobbly 3-d trajectories, accelerating in any direction, so it can't be a simple measurement of gravity. It would have to be a accelerometer combined with inertial and gyro and elevation readings all summed up.
If you can measure the front and back horizon accurately you can measure the left and right horizons too, and that will give you an accurate down vector.
But I don't think it's possible to measure a horizon that accurately. There are temperature and pressure dependent atmospheric effects at low angles that effect how light refracts and curves through the atmosphere, and these effects will have to be compensated for to measure the angle accurately. Additionally, there are mountains and valleys to compensate for.
A very precise estimation of which direction gravity is pulling down towards. Which is almost, but not exactly, the vector towards the center of the earth. And it also needs to be disentangled from the suborbital centrifugal/centripetal forces experienced by the movement of the Blackbird.
if you use a SWIR camera stars are pretty visible in the daytime even at ground level. Scattering is related to a fourth power of frequency so by the time you're down to IR the amount of scattered light is pretty low.
(which is also why the sky is blue-- or really UV, if we could see it)
The software for plate solving is pretty good. I've had some fun taking movie screenshots and figuring out if the stars are real, and if they're the right hemisphere. (can't be used for location without the time and orientation of the camera)
You can see one of these up close - both the device, and the airplane! - at the Evergreen Aerospace Museum in McMinnville, OR.
They also have on display another Blackbird payload labeled DEF-H. It’s a nondescript white box which you are allowed to look at, but not allowed to know what it does. XD
I got to see an SR-71 on display in New York several years ago. What really struck me was how BIG it was, seeing pictures of it doesn't really give you sense of scale, it was a massive plane.
Even more impressive seeing it sweat fuel onto the apron in the afternoon sun for a few hours then engine ignition with Chevy V8 start cart, takeoff followed by low level transonic flyby.
What’s more, at its highest speeds, the engine transitioned from conventional turbojet to mostly ramjet compression by extending/retracting a nose cone within each engine intake. Doing so moved the bow shockwave of the cone so that it reflected inside the engine intake, which had the effect of slowing the Mach 3 air down to subsonic speeds so it could be used for engine combustion. But even still, energies were so high that almost all the thrust came from afterburning - the engine combustion stages were basically just spinning as air went through them.
The whole plane is an engineering tour-de-force, especially considering it was designed with slide rules
The fuel doesn't ignite at "low" temperatures, one reason is because the whole airplane leaks fuel until it's at an high enough speed. At that point, maybe because of the surface temperature, or the forces it tackles at Mach 3+, the whole thing becomes airtight.
Yup, at Mach 3+ the fuselage expands so significantly that gaps need to exist when it’s stationary, otherwise it’d buckle. So the gaps mean it leaks fuel when stationary.
Not sure you are were trying to be serious or not, but the DEF-H was one of the defensive systems used to jam surface to air threats. The system had two "modules" with one in the left and one in the right chine.
That was a great read, thank you. TIL about TEB, and how at one point they were investigating coal slurry as a fuel? (Although tbh that sounds like someone having fun on Wikipedia).
I've visited that one. As a someone who grew up during the Cold War, the idea that we've got these once super-secret military aircraft just sitting where any old tourist can walk up and take picture from a few feet away always generates some cognitive dissonance.
In Palmdale California, at the corner of 25th St E & E Palmdale Ave [1], there's what is essentially a parking lot with a chain link fence around it. Sitting there are a SR-71, a A-12, a D-21 (supersonic recon drone originally launched from A-12 variant), and a U-2. You can just sit there and take all the pictures you want.
When I was in Boy Scouts we had a camping trip to SAC. A snow storm forced us to relocate inside and the Air Force came up with the museum as our temporary sleeping area. I got to sleep about 20 feet away from that SR-71. I will never forget how cool that was.
I was in NYC for the first time a few months ago in Hell's Kitchen. Looked out of the window from the top floor of a loft we were in and could see the USS Intrepid out the window. I wish I would have realized this sooner.
For anyone thinking about going (highly recommended, it's a great museum) the Concorde is currently elsewhere for maintenance. It's supposed to be back early next year.
There's an A-12 Blackbird at the San Diego Air & Space Museum in Balboa Park. Not an expert, but my understanding is the A-12 was basically the beta version that eventually lead to the SR-71.
The A-12 was the original single seat version developed for the CIA. But eventually the Air-Force was given all responsibility over the planes and they re-designated the plane and added a second seat to fit their pilot philosophy.
There were supposed to be more variants, even tactical bombers but military politics and budget constraints and yadda yadda yadda.
The Air Force museum in Dayton has a YF-12, which was the prototype of the interceptor variant. It's readily distinguishable from the A-12 and SR-71 because the chines don't extend to the front of the nose. This so that the nose cone could be made of material that is transparent to radar.
"The SR-71 needed to be able to fix its position within 1,885 feet (575 m) and within 300 ft (91 m) of the center of its flight path while traveling at high speeds for up to ten hours in the air."
They refueled in-flight, and the high speeds they are referring to probably weren't sustained for 10 hours... I think it is more like "while traveling at high speeds and for up to ten hours"...
The Museum of Flight has an M-21 - in the Blackbird family but a few feet shorter than the SR-71. It's a truly wonderful plane and I love that you can walk underneath it and get right up close. As I recall they also have one of the engines pulled out to look at as well.
The Museum of Flight also has an entire SR-71 cockpit (salvaged from a crash) that visitors can sit in. It's the only place I know of in the world where regular people are encouraged to touch one.
Off tangent here but I thought I'd mention that yesterday a replica of a traditional Polynesian ocean-going canoe, the Hokulea, sailed into San Francisco, navigated with non-instrument methods including star observations.
https://www.sfchronicle.com/bayarea/article/hokulea-polynesi...
The Hokulea's maiden voyage was in 1975 and since then it has made global voyages to demonstrate and preserve ancient Polynesian wayfinding methods.
Here's a video where a local news channel covered an earlier voyage of the Hokulea (2014) and summarized how the ancient sailors would have used the stars as reference points:
https://youtu.be/dla3RoQo37M
> Similarly there are people who can navigate the desert, even though the dunes are mostly ever shifting.
To be clear, most deserts don't feature dune seas at all, and even the Sahara is mostly rocky plateau, dune seas comprising about 10% of its surface area.
That's incredible. I was just speaking with colleagues about the best conditions for surfing waves. The best waves here in California originate in storms on the opposite side of the Pacific. The waves travel thousands of miles over the course of days without appreciably dissipating.
The waves do dissipate, but the dissipation and attenuation is what makes the best waves. In the storm the waves are huge and messy with all kinds of frequencies interfering with each other. Attenuation means the high frequency / small amplitude waves dissipate the quickest and the lowest frequency / high amplitude waves dissipate the slowest.
By the time they've moved a long way, all that's left are the lowest frequencies which makes for a clean smooth wave. The amplitude will have reduced compared to the original storm, but that original long period is still there creating nice long gaps between the waves. This is why surfers care about the period of an incoming swell - it's a proxy for how the swell has cleaned up on it's travels.
The best waves come from the biggest storms the furthest away.
For anyone interested in the SR-71 (and other Cold War era spy planes), Skunkworks by Joe Rich is a fascinating read with lots of fun details about their development.
Also, fun fact - in 2025 we will cross over into a period of history where the SR-71 first flew closer to the Wright Flyer than to the present day.
I made an example "digital sextant"/navigation computer in JavaScript. Sadly most browser's support for controlling the camera limits its usefulness, but I am typically able to get within 10 miles of my actual location. I made it mostly as an example of how the algorithms work for a talk I gave for the Louisville Astronomical Society.
The article seems to skip the matching part. I see it in the source code though. But it looks like it's just using two stars, how does that work? I know ASTAP and Astromentry.Net use three or four stars and compute the angles and distances between them.
Each star will give a circle on which you could be located. 2 will be the intersection of 2 circles, which is 2 points. If you have some additional assumptions (e.g. Northern hemisphere), it is possible to eliminate one of the points. Otherwise you need a 3rd point to get a full triangulation.
What you're describing is a lat/lon position fix, but the link nickponline gave is for a plate solver (finding the RA/Dec of stars in an image, and determining the camera orientation). All other plate solvers I've looked at use three or four stars and measure the angles and distances relative to each other. Since nickponline's code appears to only use two, I'm intrigued as to what the angle is being measured relative to. It kinda looks like it requires the image to be oriented with North as up, which seems like an interesting twist. It seems a little like cheating, but honestly most people will know the orientation of their camera, and it probably speeds up the search considerably, and would reduce/eliminate the need for building index files.
Interesting that it’s not confirmed if the SR-71 flew in the Southern Hemisphere. Would be a very bold design if it didn’t and the system wasn’t designed for it.
The F22 had an unintentional problem of being unable to cross the international date line, which was discovered when attempting to fly from Hawaii to Japan.
Given that it may very well be better to assume that anything untested doesn't work at all and to live within those restrictions.
There have supposedly also been avionics issues with flying at negative altitude. Israel has an airbase in the Dead Sea which is located below sea level.
We had a great example of this in a recent Geomob London talk: ask Google Maps to route from Wairiki, Fiji (-16.8048, 179.9893) to Welagi, Fiji (-16.7349, -179.9439), two villages on the coastal road of the same island and about 10km apart, but as you can see from the coordinates on opposite sides of the anti-meridian.
You get "Sorry, we could not calculate driving directions from "Wairiki, Fiji" to "Welagi, Fiji"
It astounds me that this is still a problem in 2023.
I company I coded for surveyed the entire Fiji region in 1997, drafting aircraft at 80m ground clearance in long lines across both sides of the 180 longitude line.
Every single bit of commercial US GIS software available at the time was garbage and failed to correctly compute line lengths that straddled the 180 longitude line, areas of rectangles that did the same, correctly polygon mask etc.
Fortuneately we had in house software (that had the same flaws, considering it originated from Sunnydale, California) and source code so this became Yet Another Patch I had to write in on the fly in the midst of everything else to move forward.
I wrote issue tickets on this to ArcView | ESRI (of the day), the open GIS groups of the day, and raised it with anybody on relevant IRC boards | early reddit who was talking about their aircraft control software, etc.
I haven't thought about that for 14 odd years now .. but I really thought the base libraries in Google Maps would correctly handle geodesic geometry by now.
I'll bet if SV wasn't such a hotspot that such things would be handled better. You can't dogfood this if you're in Mountain View and chances are that the intersection of 'people working on Google Maps' and 'people that use Google Maps in Fiji' is empty.
That would make sense if Google Maps wasn't 18yrs old, but I don't really have opinions on that sort of thing as I've never worked for a large tech company on a mature product. Maybe there's not much motivation to try that hard after a certain time.
Yes, I think I should have made my point better: if you can't dogfood it and the population affected is small people just don't care, especially if there is zero economic incentive. It sucks that it is like that.
Interestingly they're not far (relative to myself in Australia) from Scripps in San Diego .. the ocean research company famous for dolphin research and not ocean tempreture mapping to acoustically track submarines in the cold war.
Scripps authored some pretty decent open source CLI unix mapping software that you could pipe together to do all manner of things data presentation wise that dates back to the 1980s at least.
That could handle line lengths across the 180 longitude line .. they thought globally and ran sensors from California to Hawaii to the South China Sea (and elsewhere in the world).
Even the original Sydney-based digital mapping start-up Where 2 Technologies that spawned Google Maps (not the 3D Google Earth) could handle those calculations (as I recall, they even pulled some libraries from people I worked with) - I'd have to check but I believe Where 2 shared some DNA with ERMapper (Perth, W.Australia based global resource mapping company).
At some point post aquisition they must have rewritten | simplified assumptions and never availed themselves of US open software paid for by their own tax dollars.
Fair point - the original core version from the mid 80s(?) was written by TerraSense Inc. of Sunnyvale, California.
In the above I was recollecting a code header from ~ 26 years ago, I'm Australian and got a single letter wrong in a district name of country I don't live in.
You okay with that or are you just here to nitpick?
Interesting, is the ring road around Tavuki Island THE only road that falls on the anti-meridian, at all? A quick look along the line and it looks like it.
It's wild that on google maps, even dropping a pin on the Tavuki road and trying to drag it across the line, it just stops, hitting this invisible wall.
Something about this experience makes me want to go there....
Wow, even zooming in and out on that island makes Google Maps skip into the middle of the ocean.
Edit: and the nearby Rabi Island has a discontinuity at the meridian in map view. Zooming in on it in satellite view sent the map into a crazy refreshing state with flickering grid lines.
These real-life effects by our decisions in technology sometimes half way around the world are very interesting to me. Similar to how some poeple are unlucky enough to live in their communities geographic center that becomes the default location for GPS lookups for that location.
There was US mapping software (not the international scientific survey semi open source stuff, commercial mapping for US public) in the late 1980s and 1990s that actually had positive longitude displays over the US mainland so that their users didn't have to think in negative numbers.
As I noted elsewhere in this thread, large swathes of US software couldn't handle mapping computations that straddled the date line and it's breathtaking that this is still an issue today.
I was working on a program that sales managers would use to divide sales territories up between salespeople and one feature was the ability to select all territories a certain distance from a point and using the most simplistic algorithm the code would crash if the circle crosses the 180 line so we had to use a somewhat more complicated algorithm than we first thought for this, even though all of the U.S. is one one side of it
We didn't have Guam in our list of territories as I don't think our customers had sales offices there or if they did they already knew who was going to serve it.
The distance along the zero latitude equator from -179.99 W to +179.99 E should be relatively easy to compute and yet a large proportion of software in the mapping domain screws that up - even software that controls drones and| used by commercial pilots in the US, etc.
Ok so to clarify it’s not the international date line but the latitude? Which makes more sense to me because the international date line is not actually a straight line.
Maybe they should use sin and cos of the angles instead of longitude. A sin, cos pair specifies longitude precisely without any discontinuities. Quaternions would be even better if you want to include latitudes as well.
The space shuttle was untested in flying over the new year between December and January. It needed a clock reset and they weren't about to test that on orbit.
I assume they would have gotten around to testing it if the space shuttle program ever got anywhere near its planned operational capacity.
But with only 4 operational shuttles, ~20 days maximum ndurance and only 5-10 flights per year, it was simply enough to just avoid scheduling any mission in the latter half of December.
(Though, I notice STS-103 departed for an 8 day mission on December 19th 1999. That was cutting it close)
> Given that it may very well be better to assume that anything untested doesn't work at all and to live within those restrictions.
There are an infinite number of untested cases though. Figuring out the important ones is the hard part, otherwise they likely would have been identified and fixed in development.
Would it invert North and South after crossing the line? No, that’s the story of the Soviet bomber that took off the wrong way from the airstrip in a secret night mission, and ended up 180° in Iran. Fortunately the two iranian jetfighters supposed to down it, ended up chasing each other. Real world is hilarious. https://youtu.be/i-bdJF6TUFs
Is it true that military branches (navy, airforce) from around equator use a grid-square reference system for radars and have issues when going further north, while triangle-based reference systems are a really good model in polar countries that doesn’t scale well when going South?
It’s a problem so predictable that I can’t believe I have been told the truth, and it’s impossible to find the right keywords to search that on Google ;)
The military does use a grid square reference system, Military Grid Reference System (MGRS), and to resolve the issues when sufficiently far north, the grid squares stop being aligned with the latitudes and longitudes, but rather just sit lined up with the 0/90/270/180 longitudes, as a sort of circular cap cut out of grid paper. (Properly speaking, it's Universal Transverse Mercator (which is a series of 60 Mercator projections) between 80S and 84N, and Universal Polar Stereographic near the poles).
It says they did have capability to navigate in Southern hemisphere though it might be implied one had to use two stars only visible in southern hemisphere.
When I first heard of astro-nav (for missiles) I thought it was so future sounding, like star trek.
But now that I do astrophotography regularly it's just another tool that I use. It's super simple to do, I can take a photo of the night sky, and if I know the focal length and pixel size of my camera I can figure out exactly where my telescope is pointed in seconds, with an accuracy 2.5 arcseconds.
You can even do it blind (without knowing any details about the telescope/camera) though that takes a couple of minutes.
One of the clearest signs we live in a pinnacle of technology is that mathematicians and astronomers have been refining the equations that produce ephemerides for thousands of years, and the inverse problem (given star locations and time, predict location and pointing vector) to the point where an average human with average hardware can instantly estimate things that would have taken the world's most powerful observatories a great deal of effort to do.
In my experience, if there was a well-defined problem in the 70s or 80s, some clever person came up with a workable solution using a tiny microprocessor with limited RAM.
The way I think about it is... when I was a kid I hated waiting at lights and imagined you could build a computer vision/ML system (this was in the '80s) that coudl recognize traffic and change the light to green when it was safe, rather than timing out. I mentioned this to some traffic engineer and he pointed out the simpler solution was to put magnetic detectors under the road and then just look for blips which are cars driving over.
And then someone rolls up with a bicycle, and because it doesn't have as much metal in it as the detector expects, they end up sitting at a red light forever. Been there, done that.
My local town has a roundabout with a cut-through lane for buses. The cut-through has a traffic light to help the bus emerge. Every so often, a (presumably non-local) car driver accidentally enters the bus lane, and then has to wait for the next bus to trigger the traffic light.
Compared to the rest of the engineering, I don’t think the computation is all that hard. It doesn’t need to be that fast since you only need to do a correction maybe every 10-20 minutes or so. The math was innovative in that they invented the Kalman filter for these types of computations. You pick your star such that it’s by far the brightest one in its neighborhood. So the image processing is just finding the brightest pixel.
From what I’ve read (mostly the excellent book “Inventing Accuracy” the main challenge is very precisely aligning the telescope with the “stable member” in the gyroscope.
Not exactly. The SR-71 system had an expected position of the star, and then it would mechanically scan the telescope in a rectangular spiral scan pattern with a single pixel photodetector until it found a star with the expected brightness. This could take up to 22 minutes from a semi-cold start (roughly, you know your heading within 3 degrees, and your position within 1 degree lat/lon). [1] Once the system had shot a couple of stars and the IMU was dialed in, it could do an update in about 30 sec.
The Trident I used a Vidicon tube (what they used to use for TV cameras before CCDs were invented). [2] The Trident approach was to shoot a single star once during the flight.
The Trident II uses a CCD. The first version of the guidance system for that missile (the Draper Mark 6) used a 90x90 CCD [3]. That has been replaced with the Mark 6 Mod 1, which they say has increased resolution, but they don't give a figure. [4]
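For illustration only (not the actual ANS algorithm), an expanding rectangular spiral of scan offsets around the predicted star position can be generated with a few lines:

    def square_spiral(steps):
        # Yield (x, y) offsets walking an expanding square spiral out from (0, 0),
        # i.e. out from where the star is expected to be.
        x = y = 0
        dx, dy = 1, 0
        run_length, run_done, turns = 1, 0, 0
        for _ in range(steps):
            yield x, y
            x, y = x + dx, y + dy
            run_done += 1
            if run_done == run_length:
                run_done = 0
                dx, dy = -dy, dx          # turn 90 degrees
                turns += 1
                if turns % 2 == 0:        # the run length grows every second turn
                    run_length += 1

    print(list(square_spiral(9)))
    # [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]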
I don't know the technical specs of the Z-80, but even on a mini PC's ancient and super-cheap CPU (I think it's a 2-point-something GHz dual core) it can do a solve in seconds.
If you know roughly where you're looking, the lookup is really quite quick.
The hardest part would be doing the image recognition, but that wouldn't be that bad since the position (and therefore expected picture) is already known, a priori, to a good accuracy.
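A sketch of why the a-priori knowledge makes the lookup cheap: restrict the catalog to a small cone around the expected boresight and match the brightest detection against what's left. The three-star catalog here is obviously a stand-in for a real one:

    import math

    # (name, right ascension deg, declination deg, visual magnitude)
    CATALOG = [
        ("Vega",   279.23, 38.78, 0.03),
        ("Deneb",  310.36, 45.28, 1.25),
        ("Altair", 297.70,  8.87, 0.77),
    ]

    def angular_sep_deg(ra1, dec1, ra2, dec2):
        # Great-circle separation between two sky positions, in degrees.
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
        c = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
        return math.degrees(math.acos(max(-1.0, min(1.0, c))))

    def candidates(boresight_ra, boresight_dec, radius_deg):
        # Stars that could plausibly be in the field, brightest first.
        near = [s for s in CATALOG
                if angular_sep_deg(s[1], s[2], boresight_ra, boresight_dec) < radius_deg]
        return sorted(near, key=lambda s: s[3])

    # Pointing known to within a few degrees -> only one plausible match.
    print(candidates(280.0, 39.0, 5.0))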
My electronics instructor in the 90s had worked in industry in the 70s/80s at Ball Aerospace. One of the jobs he worked on there was building a test fixture for the star navigation systems from, IIRC, the first Space Shuttle.
He said it involved going through thousands of LEDs (expensive at the time) and binning them by brightness so they could be arranged to match the intensity of the stars.
The most badass story related to 'using basic tools and a steady hand in a pinch' is Buzz Aldrin pulling out a sextant and sliderule to do some calculations in Gemini XII. The description in the article reads "Aldrin pulled out a sextant and his slide rule and put his MIT doctoral research to work."
A college friend of mine who was at the Air Force Academy in the 1980s told me she was trained in celestial navigation in case they needed it in a pinch. Both of us were pretty interested in the math.
Nitpick: The North Star (Polaris) wasn't used over 4000 years ago:
So now you can see why Polaris will not always be aligned with the north spin axis of the Earth - because that axis is slowly changing the direction in which it points (precession)! Right now, the Earth's rotation axis happens to be pointing almost exactly at Polaris. But in the year 3000 B.C., the North Star was a star called Thuban (also known as Alpha Draconis), and in about 13,000 years from now the precession of the rotation axis will mean that the bright star Vega will be the North Star.
Well, good point, but every moment in history had its own fixed north and south points in the sky that you could find easily if you were familiar with it, which everyone was back then. As were some animals, which also use the stars for navigation.
There’s no good reason to assume that, in the absence of an extremely bright star that happens to be north-aligned, a culture navigated by the stars with a north-anchored reference ;)
The reason would be that you can see the sky rotating around a certain point in the north or south, depending on where you are. All the stars are changing, but this point is not. So even when there is no bright star at that spot, there will be constellations around it pointing towards it: a fixed point or area of reference (and with no artificial lights, there would definitely be some star to see).
But of course if you really know the dark sky well, there are a lot more points of reference: you remember which constellations will be where at what time of night in which season, where the moon is, and so on.
Anyone know if modern military aircraft have something similar? I imagine it would be even cheaper and easier to do these days given the processors and cameras we now have. Seems like a good backup to have.
Former military aerospace engineer here. In my opinion it would only be worth it on long range strategic bombers such as the B-52.
Fighters don't have the legs to fly far enough that celestial navigation becomes worth the added complexity.
For other air mobility platforms like the C-130 or C-17, in my experience these features aren't included, as GPS, INS, and regular old "ask ATC for a vector" are usually good enough.
There are ongoing experiments with magnetic and other forms of navigation, some of which are classified, but I'm a civilian now so I don't know any specifics.
Your friend probably looked out the skylight and actually oriented himself by the stars' positions.
I'm saying something different, which is that unlike the SR-71, the C-130 didn't and doesn't have an instrument that scans the sky and automatically determines where it is based on the constellations it can see.
Astronomically corrected INS[1] is definitely still a thing in aerospace in general. I have no knowledge of whether or not they are in modern military aircraft though.
I think we're seeing similar, or equivalent, technology being used in Ukraine right now. The Russians routinely jam GPS frequencies and have been doing so for years/decades (one of the few things the Russian military is considered good at).
Yet we see Ukraine doing precision strikes not only with long-range missiles, but also with all kinds of drones.
Sometimes it's just strange, and it doesn't seem like a coincidence. It's far too often that I see, read or experience something and then it pops up on HN a few moments later.
In this case, I went to Duxford the day before this was posted on HN and visited the #962 SR-71 in person. On the day this was posted, I also discussed the SR-71 with a neighbour and how it was stationed at RAF Mildenhall here in the UK.
With that said, the SR-71 is such an awesome machine; it contains so many breakthroughs and innovations. It was so advanced that the tooling, methods, paint, and even the fuel had to be invented specifically for this aircraft.
How complicated/expensive would it be to build an open source star tracker nowadays? How accurate a fix could you get just by pointing a phone at the sky? Or a DSLR on a commercial gimbal?
I am sure the article is fascinating. However, I could not get past the ads. The text of the article loaded within a second, but then the ads started to appear, pushing the text around. Then a cookie banner took up a solid quarter of the screen. Then some of the ads reloaded, changing size.
A horrendous experience that killed any desire to read the article itself.
I float through the atmosphere gazing up upon the lights mapping my way, I see a bear crossing with its cub, oh what glory doth Orion project, and soon onwards towards my destination does my heavy soul carry forth atomic bluster and putrid death, but in these last few moments I ponder my place among the stars.
Modern, and even old, sensors absolutely can see stars during the day. It's really a question of magnification. When you magnify the image, the pixels a star falls on don't change much, but the background gets darker because the background is spread over more pixels.
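A rough back-of-envelope of that contrast argument, with made-up numbers: the star's counts land on roughly the same few pixels whatever the image scale, while the sky background each pixel collects shrinks with the square of the plate scale.

    def contrast(star_counts, sky_counts_per_sq_arcsec, plate_scale_arcsec_per_px):
        # Sky background collected per pixel grows with the pixel's solid angle.
        sky_per_px = sky_counts_per_sq_arcsec * plate_scale_arcsec_per_px ** 2
        return star_counts / sky_per_px

    star = 1.0e5   # hypothetical counts from the star
    sky = 2.0e3    # hypothetical daytime sky counts per square arcsecond

    for scale in (10.0, 2.0, 0.5):   # wide-angle lens vs. telescope
        print(scale, contrast(star, sky, scale))
    # 0.5, 12.5, 200.0 -- contrast improves with the square of the magnification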
I was talking to a grad student who was having an issue getting decent flat frames (images of a uniform field, used to account for dust and optical-train effects). I asked why she didn't just take a picture of the sky in the daytime like most amateurs do, and she said she always ends up with stars in the image doing it that way. They tried finding the least populated part of the sky, but still always picked up stars.
That was the only time I got to talk to her, so not sure how she ended up solving it. She was searching for transiting exoplanets, so looking for a star's dip in brightness of less than 1%. Even a bug temporarily flying in front of the star could throw it off.
Some people do use tissue paper, or a t-shirt over the lens, and that works for making pretty pictures. But she said none of them come close to 1% accuracy.
But now you have me wondering how the big telescopes do it. Things like the JWST or HST, or even large ground based scopes don't really have any of those options.
The simplest implementation of a star tracker I've seen is the Canopus sensor[1] used in old spin-stabilized satellites: the "tracker" is just a light intensity sensor and a computer on a spinning spacecraft. I don't know whether that has anything to do with this implementation or not, but a star tracker in general can be that simple.
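A toy version of that idea (not the actual Mariner electronics): sample the intensity pulses over one spin, pick the one closest to Canopus's expected brightness, and read the roll phase from its timing. Every number below is made up.

    # (time within one spin period in seconds, relative intensity)
    pulses = [(1.2, 0.15), (3.7, 0.95), (5.1, 0.40)]
    SPIN_PERIOD_S = 8.0
    CANOPUS_INTENSITY = 1.0   # expected relative brightness (assumed value)

    best = min(pulses, key=lambda p: abs(p[1] - CANOPUS_INTENSITY))
    roll_phase_deg = 360.0 * best[0] / SPIN_PERIOD_S
    print(best, roll_phase_deg)   # the 0.95 pulse, at 166.5 degrees of roll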
It was a huge waste of money; the US was always a paranoid state that was in fact extraordinarily secure. This is precisely what Eisenhower warned about: the rise of the MIC, which has its own self-sustaining logic.
I partly feel that the real reason it was a waste of money is that, after it was finished, the country was too nervous, due to Francis Gary Powers' U-2 being shot down, to actually use the SR-71 for its original purpose of flying over the Soviet Union, meaning they were using a very expensive plane for missions other than the one it was built for.
Ironically enough, the much cheaper but less impressive U-2 is still in use today.
> the flight plan was recorded on a punched tape that told the aircraft where to go, when to turn, and when to turn the sensors on and off.
This is equally fascinating to me. Before each flight, someone had to generate a flight plan, punch it onto a physical tape, and then load it into the plane.
No mid-flight "re-calculating route" if stuff goes wrong.
One of the most interesting and under-appreciated aspects of early aviation, routinely forgotten by history, is that navigation was extremely difficult in a world before GPS. For example, in WWII, pilots getting lost was a huge problem, especially for Navy pilots. Imagine taking off from an aircraft carrier, flying hundreds of miles away from it, completing a mission, and then trying to retrace your steps and find the 100-yard-long dot in the wide open ocean again while flying at 10,000+ feet. Pilots navigated with simple directions like "heading 310 for 50 miles, heading 120 for 80 miles," following a handwritten piece of scrap paper out and trying to follow it back. There were no landmarks to follow. To make things even more complicated, we didn't know distances very well either. Over the ocean, with no landmarks, you rely purely on compass, speed, and time for your position. But with a tailwind instead of a headwind you might cover 80 nautical miles in 30 minutes instead of 45, which could put you wildly off course when you go to turn. It was incredibly scary to get into a cockpit and sail off into the ocean with no landmarks and hope to find a target and return to the carrier (which itself is also moving). This is also why bombers and even long-range fighters had dedicated "Navigators" on board whose sole job was navigation during flight.
As an example, the Battle of Midway was almost a complete failure for the USA because the main bomber squadron got lost trying to find the ships they were meant to bomb. IIRC they had meandered around the destination area for almost an hour and were bingo on fuel. The squadron commander had actually given the order to turn around, and as the planes were turning, one of the pilots spotted the Japanese carriers in the distance. The Battle of Midway up to this point had been a complete failure and had overexposed the Navy. The first dive bomb attack had lost all but 3 dive bombers (one of which only survived because he got separated and lost). The Japanese were getting ready to counter-strike the remaining US Navy ships, which were congregated (against the will of most admirals and generals) just a hundred miles away. The fighter wing had shown up before the second primary bomber wing (which had gotten lost) and had alerted the Japanese to the surprise attack, in addition to suffering devastating losses (about 2/3 of the fighters). If those lost bombers had turned back without completing their bombing raid, it is safe to say that history would have been rewritten. Japan would have destroyed the US Navy and eventually consumed the USA. This would have distracted the US from aiding Europe, and Europe would likely have been lost to Germany. All because of how difficult it was to navigate before GPS.
I just find early aviation so fascinating, how they accomplished so much with so little. We often look back and forget simple things about aviation like navigation because we take them for granted today. Hell, my watch can pinpoint my location anywhere on Earth to within a few feet. Yet a position in the 40s and 50s was usually determined by drawing a 50-mile-wide circle on a map and saying "we think we are somewhere in here".
A prime example is the original story. In a world before GPS, where you need precise long-range navigation, what are you going to do? You have to work with what you have. We didn't have satellites yet (well, at least not the kind we needed), you are too high for anything visual on the ground, too fast to reliably navigate manually, so what's left? The stars. I'm sure someone in that brainstorming meeting said it was impossible, but there were no other choices, so they figured it out. And it's incredible!
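To put a number on the headwind/tailwind problem, here's a toy dead-reckoning sketch with made-up speeds and winds: an unnoticed 25-knot crosswind puts you about 25 nautical miles off your plotted position after a single hour.

    import math

    def dead_reckon(heading_deg, airspeed_kt, hours, wind_from_deg=0.0, wind_kt=0.0):
        # Position (east_nm, north_nm) after flying a constant heading,
        # optionally drifted by a steady wind (direction given as "blowing from").
        h = math.radians(heading_deg)
        east = airspeed_kt * hours * math.sin(h)
        north = airspeed_kt * hours * math.cos(h)
        w = math.radians(wind_from_deg)
        east -= wind_kt * hours * math.sin(w)
        north -= wind_kt * hours * math.cos(w)
        return east, north

    believed = dead_reckon(310, 150, 1.0)                              # what the pilot plots
    actual = dead_reckon(310, 150, 1.0, wind_from_deg=270, wind_kt=25)
    print(math.dist(believed, actual))                                 # ~25 nm of error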
This vastly, vastly oversells the stakes of Midway. Japan lost before the war ever began[1]. By the end of the war the US produced 17 fleet carriers, 9 light carriers, and 76 escort carriers. US strength at Midway was 3 fleet carriers. By the end of 1945 the US would still have built several new Navies as well as the atomic bomb. Japan was struggling to finish China on their side of the Pacific, their armed forces would be totally incapable of securing the sea lanes to the US let alone land such a massive invasion force as to sweep across the whole continent.
The Japanese war plan was strategically inept from the outset - hit the US really hard, then sue for peace from a position of strength. But even if they got the carriers at Pearl Harbor, it was only a matter of time before US industrial output buried them and they failed to account for an enemy committed to total war. The turning point happened at Midway but the specific time and place don't really matter - it was inevitable. For Japan to win the war they would have had to walk an improbable golden path from victory to decisive victory while suffering no irreplaceable losses.
By the way, Germany displayed similar strategic ineptitude. Even if Germany won the Battle of Britain, they would not have been able to supply any invasion force they managed to land on Britain. They had no surface fleet, and they were staring down the largest navy in the world! All the Royal Navy needs to do is contest the Channel for a critical week or two and you've just lost your entire invasion force. As it happens they lost the BoB. Unable to finish the war against a globe-spanning empire, they proceeded to send their entire army into the USSR...
...and while US lend-lease certainly helped, the Soviet counteroffensive at the end of 1941 already inflicted such damage on the Wehrmacht that they started suffering shortages of men and equipment. The unsuccessful German offensive in 1942, Case Blue, was more limited in scope than Barbarossa because of these shortages - attacking in the southern USSR rather than the entire front as in 1941. Case Blue started the same month as the Battle of Midway - the outcome of the two could hardly be linked. With or without lend-lease, by end of '42 they are knee-deep in Russia with no prospect of further advance, while Britain blockades them on the other side.
> This is equally fascinating to me. Before each flight, someone had to generate a flight plan, punch it onto a physical tape, and then load it into the plane.
> No mid-flight "re-calculating route" if stuff goes wrong.
From what I understand of the manual[0], the system doesn't get completely lost if the crew decides to abort and fly away or something. The preprogramming is mainly so the sensors and camera automatically engage at the predefined points in the planned flight.
Pilots in WWII had things like the predecessor to TACAN to navigate with -- my grandfather was a pilot in the Army Air Corps and described navigation with radio beacons regularly.
Aircraft carriers don't move far in the short time CV-based aircraft are up -- they just need to navigate back to where they took off from. That's not as hard as it sounds, even with 1940s technology. You just need to navigate to within radio range, which is a couple hundred miles if you're in the air.
Royal Navy carriers had homing beacons to help their aircraft find the carrier again. The beacon rotated once a minute, the pilots synchronized their watches to the beacon before taking off, when they heard the signal through their headphones the second hand on their watch would indicate the bearing back to the carrier.
RN strike aircraft also had their own radar which made it easier to find things.
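If I understand the scheme, the conversion was trivial by design: one revolution per minute means every second on the watch face is 6 degrees of bearing. An idealized sketch (the real procedure surely had more to it):

    def bearing_from_beacon(seconds_past_minute):
        # 60 seconds per revolution, so each second of the sweep is 6 degrees.
        return (seconds_past_minute % 60.0) * 6.0

    # Tone heard when the second hand reads 42 -> the beacon was pointing 252 degrees,
    # which tells you the radial you're on and therefore the heading back to the ship.
    print(bearing_from_beacon(42))   # 252.0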
That's a "hot take" but entirely wrong. All three of the major allies (US, UK, USSR) realized they needed each other, hence lend-lease [1], the "Germany first" policy (defeat Germany before Japan) [2] and the Malta conference [3].
Without American equipment, the USSR would not have had the supplies to push the Germans west. Without a Russian front, the UK and the USA might not have been able to dislodge Germany from Europe, and the UK may have fallen [4]. Without Britain in allied hands, it would have been vastly more difficult for the USA to stage D-Day, and they certainly would not have been able to provide the fuel for their logistical needs via the Pluto pipeline [5].
If the U.S. Navy had been tied up in the Pacific, I think it’s tough to say what would have happened on the eastern front. The Soviets were pretty dependent on U.S. aid for food, trucks, planes, etc etc. If the naval balance were shifted a lot, that aid might not have made it.
Hopefully, one of the good things that will come out of the Russia-Ukraine War is the permanent shattering of the smug "Ha ha Americans, everyone knows it's the Soviets that beat the Nazis" notion. Setting aside the Soviets' total complicity in helping the Germans dismember Poland in 1939 before being attacked themselves in 1941, without massive American and British aid Moscow would have fallen in 1941 and the Soviets would have sued for peace in 1942.
There was a story from the book Skunk Works that the system was sensitive enough that it occasionally locked onto tiny holes in the roofs of hangars.
The crazy thing about this system too is that, like everything on the SR-71, it was done on a shoestring budget (relatively) and crazy fast. So a navigation system like this was done as a necessary workaround to traditional navigation of the time.
> to collect information about hostile and potentially hostile nations using cameras and sensors.
I will sound very picky, but, well, I'm pretty sure a rather big portion of the civilians of any nation are not hostile. So I disagree with the wording. A nation is not hostile; a fraction of it is.
That was Tuesday's afternoon rant. Thanks for clapping.
But the SR-71 really was impressive when I was a kid, and mysterious too.
The early North American SM-64 Navaho supersonic cruise missile was cancelled, but it developed some technologies that were later used in other projects.
(Same for the Lockheed Suntan.) If you fly high and fast enough, there's considerable overlap with spacecraft navigation technologies. You can navigate in space with a sextant too. No trouble with clouds!
Modern ICBMs also correct their trajectories this way: they don't need extreme accuracy (because of the large blast radius of nuclear warheads), and they are hypersonic during re-entry, which limits the possibilities, so they use inertial guidance; but to improve accuracy they correct the trajectory near the peak of the ballistic arc using the stars.
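The core of that stellar correction is just comparing where the inertial system says a star should be against where the tracker actually sees it; the angular difference is the accumulated error to take out. A simplified sketch with made-up numbers:

    import math

    def unit_from_ra_dec(ra_deg, dec_deg):
        # Unit vector toward a star given right ascension and declination.
        ra, dec = math.radians(ra_deg), math.radians(dec_deg)
        return (math.cos(dec) * math.cos(ra),
                math.cos(dec) * math.sin(ra),
                math.sin(dec))

    def angle_between_deg(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

    predicted = unit_from_ra_dec(279.23, 38.78)   # where the INS thinks the star is
    measured = unit_from_ra_dec(279.30, 38.81)    # where the tracker actually sees it
    print(angle_between_deg(predicted, measured) * 3600.0)   # drift, in arcseconds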
I think GP finds it impressive that the technology enabling a device to _automatically_ resolve its position (simply by looking up at the sky) goes back this far. It impresses me, at least.
Sextants don't track stars. They just measure altitude and leave it up to a human to line everything up, measure it correctly, time it correctly, then look up the right data in a log book.
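The "look up the right data" step is the classic sight reduction. A sketch of the arithmetic (the assumed position, star coordinates, and observed altitude below are all made up): compute the altitude the body should have from an assumed position, and the difference from the measured altitude tells you how far to shift your position line.

    import math

    def computed_altitude_deg(lat_deg, lon_deg, dec_deg, gha_deg):
        # Altitude a body should have from an assumed position, given its
        # declination and Greenwich hour angle (east-positive longitude).
        lat, dec = math.radians(lat_deg), math.radians(dec_deg)
        lha = math.radians((gha_deg + lon_deg) % 360.0)
        sin_hc = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(lha))
        return math.degrees(math.asin(sin_hc))

    hc = computed_altitude_deg(50.0, -5.0, 20.0, 40.0)   # assumed position 50N 005W
    observed = 49.3                                      # corrected sextant altitude
    intercept_nm = (observed - hc) * 60.0                # 1 arcminute = 1 nautical mile
    print(hc, intercept_nm)   # ~49.2 deg computed, a few miles toward the star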
At planet.com we had a star scanner, and we also calibrated our camera with the moon. When you traverse the horizon in about 15 minutes (low Earth orbit is roughly 17,000 mph, if I remember right), knowing exactly where you are is tough. Knowing where you're pointing the camera is even harder. And no, GPS didn't really work well up there.
That’s basically the law in aviation — rivet everything. My college roommate was studying to be a pilot, and I swear half his first year he did nothing but rivet.