This is a classic XY issue, suggesting a fix ("Check continuity...") instead of describing the problem ("O2 tanks exploded during Apollo 13 mission").
Further, I believe that the suggested fix is incorrect, or at least insufficient. The Apollo 13 investigation indicated that a list of factors led to the fan wires in the O2 tanks having damaged insulation. However, unless the wires were already short-circuited before stirring, checking continuity first would not have detected the short. Indeed, the tank was stirred twice earlier in the mission without incident. The investigation suggested that operating the fans itself may have eventually moved the wires into contact with each other, which combined with the damaged insulation, finally allowed an electrical arc and the resulting explosion to occur.
The correct fix is to upgrade the thermostatic switches which protect the tank heaters from overheating to accommodate 65 V DC, so that the fan wiring isn't damaged in the first place. In addition, the tank acceptance procedure should be amended to require switch cycling under load.
Another tidbit. Toward the end you hear mission control say "30 seconds." That's how much fuel is left[1]. Those guys had steel spines.
If you can't get enough of this stuff, I highly recommend "A Man on the Moon" by Andrew Chaikin[2], as well as the HBO mini-series produced by Tom Hanks and based largely on that book, "From the Earth to the Moon"[3]. "Failure Is Not an Option" by Gene Kranz[4] (flight director on Apollo 11 and 13, among other things) is also a good read.
Wow! Just saw the whole landing again. I have seen the video before, but I never understood who was saying what. This is the best way all those conversations could have been visualized!
Not that I believe in any particular moon conspiracy hypothesis, but the reflectors being on the moon do not prove anything other than that someone put them there. Whether humans went to the moon to put them there, or some other delivery system was used, is (strictly) logically up for debate.
Conspiracy theorists typically suffer from enormous confirmation bias, so it's far more likely that this will lead to new claims about how the code couldn't possibly have worked on a real mission. I wouldn't even be surprised if someone found a transcription error, claimed it was an error in the original code, and then accused the maintainers of a cover-up for fixing it.
## At the get-together of the AGC developers celebrating the 40th anniversary
## of the first moonwalk, Don Eyles (one of the authors of this routine along
## with Peter Adler) has related to us a little interesting history behind the
## naming of the routine.
##
## It traces back to 1965 and the Los Angeles riots, and was inspired
## by disc jockey extraordinaire and radio station owner Magnificent Montague.
## Magnificent Montague used the phrase "Burn, baby! BURN!" when spinning the
## hottest new records. Magnificent Montague was the charismatic voice of
## soul music in Chicago, New York, and Los Angeles from the mid-1950s to
## the mid-1960s.
Random note: when I was in the 5th grade, our math teacher taught us the meaning of nota bene and used an NB sign to mark important notes when deriving formulas. I still use it to this day when making notes.
> At MET 102:39:31 the best possible confidence builder occurred — throttle down, right on time. "Ah! Throttle down... better than the simulator" commented Aldrin, "Throttle down on time!" exclaimed Armstrong, their excitement palpable. In the official transcript of communications between spacecraft and ground during the powered descent, these are the only exclamation points[11].
Also of note, in zero-gravity the ullage space must be forced to the opposite end of the tank before ignition so the fuel is at the intakes. Commonly this task is accomplished by "ullage motors" which fire to settle the fuel before primary ignition[1].
Considering that the binary was woven by hand using copper wire into the computer's core rope memory, I imagine the flight revision had to be finished months before the actual flight.
There's a simulator, if you want to run it.[1] But it's just a simulator for the computer; there's no spacecraft attached.
There's a mod for Kerbal Space Program which gives it real solar system planets and dimensions. (KSP's world is way undersized so things happen faster.)[2]
There's another mod for Kerbal Space Program to give it real dynamics.[3] (KSP doesn't really do dynamics right; the spacecraft is under the gravitational influence of only one body at a time. This is why there's that sudden trajectory change upon Mun capture.)
Someone should hook all that together and do a moon landing in simulation.
There's an Orbiter plugin[1] that simulates the rest of the spacecraft. IIRC it can interface with this simulator to get an accurate simulation of the computer.
Whenever I put in temporary code like that, I always leave my full name, the date, and a snarky comment about "suuuuuure this is temporary". Seriously.
(A friend of mine, in search of a new go-to programming language to replace Python, was more interested in Swift vs. Rust because of this attitude.)
> Remind your friend these comments are only left by a small vocal minority and is not representative of the project or its maintainers.
Thanks for mentioning this; I certainly will the next time. The last time this came up, it was not an attitude I had previously noticed, although I did see a bunch of such comments on Hacker News the following week.
> This is akin to not liking something because you don't like the people who already like it, despite how much you'd like it otherwise.
I would totally agree with this, but at the same time, while it's not a technical reason, if one works primarily in a single programming language I can imagine the nature of the community being a legitimate factor to consider. In this case, though, as you point out, it would be an inaccurate opinion of the nature of the Rust community.
To your last point, I agree somewhat as well, but my rebuttal would be that each person chooses how much and at what level to participate in a community they are in, and which sub-communities they identify more closely with.
I can imagine a person being proficient and working in any language without needing to be involved with the community at all, or, if they do need to interact, doing so in a read-only manner.
The two aspects of my friend's perception of the Rust community are a) rewriting existing projects in Rust for the sake of having a version written in Rust[1][2], and b) what seems to be a frequent trend on Hacker News of "why didn't you write it in Rust" or "how about porting it to Rust" comments on posts about projects, and "should have written it in Rust" comments on posts about security bugs. As the parent of your comment points out, this is probably a small group in the community who comment often.
As I could see from the way my score on the post that started this discussion fluctuated, this is clearly a topic some are sensitive about and others find humor in (or both!).
EDIT: I didn't mean to derail the discussion of a fascinating code posting; but I assume the comment I responded to that spawned this tangent, "Have they considered rewriting it in rust?"[3], was a joke about the other comments on HN asking to rewrite things in Rust.
[1]: From a pedagogical standpoint, this is obviously a potentially good way of learning a language, so if it's done for learning and not pure calorie burn, I personally don't see it as wasteful.
[2]: <<possible exaggeration warning>> I understand there is some motivation within (perhaps a small part of) the Rust community to replace the world's C systems code with more secure Rust code.
FWIW, I did performance analysis of the guidance computer and the 1202 and 1201 alarms at the start of my ACM Applicative 2016 keynote: https://youtu.be/eO94l0aGLCA?t=4m38s
Hey there Brendan, that's a pretty awesome presentation (I'm only 20 minutes into the talk as of now). You mention that the Apollo engineers expected the CPU load to be about 85% during descent, and that the guidance computer's kernel ran "dummy jobs" when no real jobs were running.
What are these dummy jobs? And why did they have to do this instead of just leaving the CPU idle?
Wow, wow! Looks like I don't understand the first thing about CPU design. Do CPUs have to be designed to idle? Can you throw some more light on this?
A basic model of a CPU is running an infinite loop like this:
1. If interrupts not masked, check for interrupt
2. Load instruction
3. Advance instruction pointer
4. Execute instruction
It doesn't ever stop - as soon as the current instruction is finished executing it moves on to the next one. So, if you don't have anything better for the CPU to do, you need to have it spin in a loop of instructions anyway.
More modern CPU designs typically include an instruction that means "halt until next interrupt" which actually stops the CPU from fetching and executing instructions.
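To make that concrete, here's a toy sketch in C (the opcode names and the program are made up for illustration, nothing to do with the AGC): an emulator-style fetch/advance/execute loop that never stops on its own, with a jump-back-to-zero program standing in for a "dummy job" and a HALT opcode standing in for the modern "halt until interrupt" idea.

    #include <stdio.h>

    /* Toy instruction set, made up for illustration (not the AGC's). */
    enum { OP_NOP, OP_JUMP0, OP_HALT };

    int main(void) {
        /* A "dummy job": do nothing, then jump back to address 0 forever. */
        int program[] = { OP_NOP, OP_JUMP0 };
        int pc = 0;                    /* instruction pointer */
        long cycles = 0;

        while (cycles < 10) {          /* a real CPU has no such limit */
            int op = program[pc];      /* 2. load instruction            */
            pc++;                      /* 3. advance instruction pointer */
            switch (op) {              /* 4. execute instruction         */
            case OP_NOP:   break;
            case OP_JUMP0: pc = 0; break;
            case OP_HALT:              /* newer designs: stop fetching   */
                printf("halted after %ld cycles\n", cycles);
                return 0;
            }
            cycles++;
        }
        printf("still spinning after %ld cycles, pc=%d\n", cycles, pc);
        return 0;
    }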
Why do CPUs and GPUs run hotter when doing more intensive tasks?
In your last statement I could see it making sense where the CPU actually halts, but did prior CPUs always run at about the same temperature? Or do these idle processes throw fewer instructions at one time so it's not as overwhelmed?
Modern CPUs, GPUs and SOCs have power management states that disable entire submodules when they're not in use, by actually gating off the clock to them. If you run without power management enabled, you'll find that they run hot all the time.
> did prior CPUs always run at about the same temperature?
Basically, yes. But then they typically produced so little heat they had passive heatsinks, up to and including the Pentium II (~20 W TDP) and ATI Rage 128 that I used back in '99.
Wow, thanks for the explanation there. When we switched to modern CPUs that could actually halt, were there actual hardware/physical changes to the CPU? Or was it just a software change, i.e., adding a new instruction to the existing instruction set?
Is that what made it difficult to design processors capable of "idling", i.e., did it require a completely new hardware design?
It was a hardware change. In those older CPU designs, the external clock signal was directly driving a state machine, so for as long as the clock was applied, the state machine would go.
It's important to realise that there was no good reason to have the ability to stop the CPU in those days - power consumption by the CPU itself was truly trivial compared to the memory and peripherals it was attached to, and those CPUs weren't really damaged or worn out by running continuously. Having the CPU spin in software when there was nothing else to do was perfectly fine.
Normally the CPU clock runs continuously, and every cycle the program counter increments (or gets changed by a branch instruction of some kind). If you want to stop the CPU, you have to gate the clock somehow, maybe with a timer that you could configure and enable via software. But that's extra complexity, and if you use dynamic logic (which is smaller and faster than static logic), you lose state when you halt. Spinning in a tight loop, on the other hand, doesn't require any hardware support.
It's interesting to me that the AGC contains an implementation of a virtual machine that is used to perform the higher-level mathematical functions (called 'The Interpreter'). Some details are available in this PDF starting on page 74: http://www.ibiblio.org/apollo/NARA-SW/E-2052.pdf
It would be fun to do some research into the embedding of higher-level virtual machines in earlier computers. I'm thinking of 'The Interpreter' in the AGC as being an ancestor to 'SWEET16' in the Apple II (https://en.wikipedia.org/wiki/SWEET16), or the 'Graphic Programming Language' (http://www.unige.ch/medecine/nouspikel/ti99/gpl.htm) in the TI-99/4A.
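For a rough flavor of the idea (a hedged sketch of my own, not the AGC Interpreter's actual instruction set, which operated on multi-precision scalars and vectors): the host machine's native code fetches compact pseudo-instructions and dispatches them to handlers, trading raw speed for code density and higher-level operations.

    #include <stdio.h>

    /* Made-up stack-machine pseudo-instructions, for illustration only. */
    enum { PUSH, ADD, MUL, PRINT, DONE };

    static void run(const int *code) {
        double stack[16];
        int sp = 0;
        for (int ip = 0; ; ) {
            switch (code[ip++]) {
            case PUSH:  stack[sp++] = code[ip++];          break;
            case ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
            case PRINT: printf("%g\n", stack[sp - 1]);     break;
            case DONE:  return;
            }
        }
    }

    int main(void) {
        /* (2 + 3) * 4, expressed as compact pseudo-code */
        int prog[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, DONE };
        run(prog);
        return 0;
    }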
The Apollo Guidance Computer (AGC) was a digital computer produced for the Apollo program that was installed on board each Apollo Command Module (CM) and Lunar Module (LM). The AGC provided computation and electronic interfaces for guidance, navigation, and control of the spacecraft. The AGC had a 16-bit word length, with 15 data bits and one parity bit. Most of the software on the AGC was stored in a special read only memory known as core rope memory, fashioned by weaving wires through magnetic cores, though a small amount of read-write core memory was provided.
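As a small illustration of that word format (my own sketch, not AGC code, and the bit layout here is only illustrative): with 15 data bits plus one parity bit, a stored word can be checked by counting its one-bits; the AGC used odd parity, so every valid 16-bit word has an odd number of ones.

    #include <stdint.h>
    #include <stdio.h>

    static int popcount16(uint16_t w) {
        int n = 0;
        while (w) { n += w & 1; w >>= 1; }
        return n;
    }

    /* Add a parity bit so the full 16-bit word has an odd number of ones. */
    static uint16_t encode(uint16_t data15) {
        data15 &= 0x7FFF;                               /* keep 15 data bits */
        uint16_t parity = (popcount16(data15) & 1) ? 0 : 1;
        return (uint16_t)(data15 | (parity << 15));
    }

    static int word_ok(uint16_t word16) {
        return (popcount16(word16) & 1) == 1;           /* odd parity expected */
    }

    int main(void) {
        uint16_t w = encode(0x1234);
        printf("stored %04X, check %s\n", w, word_ok(w) ? "passes" : "fails");
        printf("after a bit flip: %s\n", word_ok(w ^ 0x0040) ? "passes" : "fails");
        return 0;
    }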
When I worked on my startup, we built our complete hardware first and then the software for it.
I had to write the drivers and display library for the 128 x 64 LCD display, with a simple scheduler, FSM, and all (hard to mention all the work!). The bulk of the work I did was using paper, eraser, and pens.
A lot of work in uncharted territory requires paper work. I realized that the more I worked on paper, the more correct the code was, and overall it was faster to write (given fewer bugs).
She was also among the first to invent a scheme that automated coding and testing from specs for high reliability. Her case is the only one I follow, as I lack data on the rest.
I recommend reading Digital Apollo[0] about the development of the computer, and actually the entire man-machine interface of early spaceflight. The machines were made in a milieu where computer-mediated control was highly controversial (e.g. "A machine might work when everything is fine, but will never work in an emergency."). Essentially there was a huge argument between pilots and engineers about how much automation should be done. It was so bad that pilots even tried to insist on flying the rocket into orbit themselves. (If I recall correctly, in centrifuge simulations only Armstrong was able to successfully not crash the Saturn V in a manually controlled ascent.)
The other recurring theme in the book is the disturbingly short MTTF of flight computers during the mid-1960s. Statistically, NASA had to plan for a computer failure en route to the moon, and so repair-vs-replace became a serious issue. (Yeah, they seriously considered soldering in zero-g.)
For anyone interested, XPrize winner Karsten Becker talks to popular YouTube blogger David Jones about radiation and extreme heat and cold in space, and specifically about bit flips and how electronic parts are sourced for such endeavors.
Interesting to me was the "paper work" cited in the interview for space-hardened components. In other words, people are concerned with stuff falling back to Earth (wouldn't it burn up?) or being used for not-so-friendly purposes (war).
> For instance, the type of memory was called core rope memory
Rope and core memory were the standard memory technologies of the day and very likely were not chosen for their radiation hardness. The fact is, solid-state memory became reliable and available in quantity only in the second half of the 1970s.
"Shame on him who thinks ill of it." It's almost as if the authors anticipated the need to administer percussive therapy, Buzz Aldrin style, to trolls of the far-distant future.
"A set of interrupt-driven user interface routines called Pinball provided keyboard and display services for the jobs and tasks running on the AGC. A rich set of user-accessible routines were provided to let the operator (astronaut) display the contents of various memory locations in octal or decimal in groups of 1, 2, or 3 registers at a time. Monitor routines were provided so the operator could initiate a task to periodically redisplay the contents of certain memory locations. Jobs could be initiated. The Pinball routines performed the (very rough) equivalent of the UNIX shell."
- https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Softw...
The DSKY and PINBALL (something flashy with buttons) was a demo.
And that demo got us to the moon.
"Apparently, nobody had yet arrived at any kind of software requirements for the AGC's user interface when the desire arose within the Instrumentation Laboratory to set up a demo guidance-computer unit with which to impress visitors to the lab. Of course, this demo would have to do something, if it was going to be at all impressive, and to do something it would need some software. In short order, some of the coders threw together a demo program, inventing and using the verb/noun user-interface concept, but without any idea that the verb/noun concept would somehow survive into the flight software. As time passed, and more and more people became familiar with the demo, nobody got around to inventing an improvement for the user interface, so the coders simply built it into the flight software without any specific requirements to do so."
People could've really used a higher-level language compiling to optimized AGC (Apollo computer) assembly. Is there any reason why they didn't develop one? It seems it would've helped tremendously with productivity and verification (and a lot of the explanations and equations would be readable as code, not as non-executed comments).
It was pretty much taken as gospel everywhere at the time that NO compiler could match the speed and size of a well-crafted assembly language routine. Back then there were some noble attempts at building optimizing compilers, and probably the most notable one was IBM's ambitious Fortran H. But that's 50-year-old tech now, kids.
Remember also that memory was at a terrific premium. I don't have any specific knowledge about the AGC, but there's an interesting story I read once about a memory shortage in another project - Intel's 8086.
(If you'll permit me an OT digression...)
As the story goes, program space was so tight in the original microcode for the Intel 8086 microprocessor that there wasn't room to spare for a one-byte constant in the code! The architects decided that the AAM and AAD instructions should have a required operand - 0x0A, or 10 - so that the instruction could refer to itself and know that you were operating in base 10!
A side effect of this is that Intel processors can actually execute AAM and AAD in number bases besides 10. Intel never formally documented this behavior, so on the NEC V20 and V30 chips - which were supposed to be Intel-compatible - you couldn't change the AAM or AAD operand; it was always treated as 0x0A.
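For reference, the documented effect of AAM on the 8086 and later is roughly: divide AL by the immediate operand, putting the quotient in AH and the remainder in AL. A hedged C sketch of that behavior (not the microcode itself) shows why a different operand gives a different base:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of AAM's effect with its (normally 0x0A) immediate operand. */
    static void aam(uint8_t *ah, uint8_t *al, uint8_t base) {
        *ah = *al / base;
        *al = *al % base;
    }

    int main(void) {
        uint8_t ah, al = 123;
        aam(&ah, &al, 10);   /* AAM 0x0A: AH=12, AL=3 (decimal digits)           */
        printf("base 10: AH=%u AL=%u\n", ah, al);

        al = 123;
        aam(&ah, &al, 16);   /* undocumented operand: base-16 split, AH=7, AL=11 */
        printf("base 16: AH=%u AL=%u\n", ah, al);
        return 0;
    }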
> Is there any reason why they didn't develop one?
“Now you have two problems.”
Edit: OK, maybe that deserves explanation. First, an optimizing compiler is a much bigger project than the AGC code. Second, current optimization techniques didn't exist in 1969, even assuming NASA had enough budget for machine(s) to run them. Third, you need to verify the binary anyway, and small source changes (in either the guidance code or the compiler) can lead to large binary changes.
I still think it's easier to verify two isolated projects - the compiler (which can be reused) and the Apollo code - than to mash all of the equation code into assembly. I doubt the people (especially the scientists) were so used to assembly that they didn't require an order of magnitude more time to mentally parse/unparse logic and equations into machine code. (Also, the compiler doesn't have to be very high level; even a glorified macro-assembler with some syntactic sugar for math doesn't seem hard.)
Long story short, circa 1969 higher-level languages were mostly the purview of academics. No one had a computer powerful enough to run a language compiler/runtime, and programmers were Real Men(tm) who wrote assembly by hand because that was all they knew how to do. That's not to say there weren't any advantages to using a higher-level language, but in the case of something like the Apollo computer, they couldn't risk compiler bugs or slow code gumming up the system and potentially killing the astronauts.
Even today, certain ridiculously-high-performance or super-low-latency tasks (e.g. embedded devices, high-frequency trading) drop down to the assembly level because the small bit of overhead the compiler adds (for such modern coddling conveniences as function calls and type safety) is just too much. It's not crazy; it's just what's needed for that particular job.
There were programming languages in widespread commercial use (e.g. Fortran and COBOL) and many others for niche applications or associated with particular manufacturers.
I could be wrong, but I think part of why they did it this way was so they could edit things on the fly if an emergency dictated so. Having to ship a compiler (it would likely have been an entire separate computer) would not have been feasible.
With things done this way and documented this way, they could (and I'm fairly sure did) have the pages printed out on paper and such. And if, say, there was some bug or a new routine that needed to be added mid-flight, they could go to a keyboard input (no screen) and, with instructions from the ground, reprogram a section, add a new jump point, or change a value mid-flight.
Remember, computers were not what they are today. They did not have super powerful laptops and such with fancy tools.
"The bulk of the software was on read-only rope memory and thus couldn't be changed in operation, but some key parts of the software were stored in standard read-write magnetic-core memory and could be overwritten by the astronauts using the DSKY interface, as was done on Apollo 14."
Seems you were mostly right about it not being changeable. I was only somewhat right, because I assumed it would all be editable; I overestimated the technology they had back then. I would gather from that quote that there was far more fixed code than editable code.
> "The bulk of the software was on read-only rope memory and thus couldn't be changed in operation, but some key parts of the software were stored in standard read-write magnetic-core memory and could be overwritten by the astronauts using the DSKY interface, as was done on Apollo 14."
Ah, yes, right, there was something. I've been fascinated by the AGC for some time¹, yet I completely forgot about that. Time to hunt down the mission protocols to understand what this patch did.
------
1: … and the LVDC, the computer that controlled the Saturn V. Because it was based on the computers used in ICBMs, a lot of information about it is still classified. A lot of people conjectured that it was a reshaped IBM System/360, but when actual LVDC boards got torn down by electronics nerds over the past couple of years, some significant differences from the 360 were discovered.
Hey, uh, Apollo <n for n > 10 and n < 18>? This is Houston. Could you pull out the ROM banks and flip a couple of bits for us? We made a mistake or two. All you need are some wire cutters and a very steady hand.
While "noli me tangere" is the Biblical phrase this alludes to, "noli se tangere" would mean in context "don't touch this." It's not that the programmer misremembered "noli me tangere" but that he played on the reference.
I've often wondered many things about the cleanliness, maintainability and style of such code (this particular system, in fact). It's fun to be able to actually poke through it.
It's an incredibly well done and at times hilarious narration of the moon mission. (Spoiler: contains a part where Armstrong overrides the automatic control and lands manually)
I was 14 and listened to this live. We all were turning blue when those computer alarms were called during the first landing. Turns out it was a mistake in protocol. Computer was overloaded.
"This source code has been transcribed or otherwise adapted from digitized images of a hardcopy from the MIT Museum. The digitization was performed by Paul Fjeld, and arranged for by Deborah Douglas of the Museum. Many thanks to both. The images (with suitable reduction in storage size and consequent reduction in image quality as well) are available online at www.ibiblio.org/apollo. "
I mean, I realise that this is the least of the amazing achievements we're talking about here, but yea.. respect :)
https://github.com/chrislgarry/Apollo-11/issues/3