I just find the concept of programming at such a low level fascinating. Stripping away all the complexities, boilerplate, and niceties of modern programming and ending up with a system like this, where you go from directly modifying the contents of memory to playing a fully fledged game written in BASIC, is amazing.
A good reminder that at the end of the day all this CS stuff is just ones and zeros in a chip.
This is the computing I fell in love with back in the 80s/90s, when I could realistically understand most, if not all, of a given computer (that applies mostly to 80s computers).
Nothing wrong with the stuff we do today, but I think some of the "magic" of fully mastering a computer and seeing all those raw commands executing at lightning speed (relatively speaking!) was lost.
It's not lost, it's now microcontroller/Arduino programming. It's a lot of fun to write code for something so direct to the hardware. There is no OS, no security, no multitasking. You just write a bit to a certain memory address linked to a pin and motor go brrr.
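A minimal sketch of what that looks like in AVR C (assuming an ATmega-class part with the motor driver wired to PB0; the pin choice and clock are made up):

    #define F_CPU 1000000UL           /* assumed clock, needed by the delay macros */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= (1 << PB0);               /* configure PB0 as an output */
        for (;;) {
            PORTB |= (1 << PB0);          /* write the bit: pin goes high, motor go brrr */
            _delay_ms(500);
            PORTB &= ~(1 << PB0);         /* clear the bit: pin goes low */
            _delay_ms(500);
        }
    }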
Sometimes I think the higher-end embedded chips are even more fun, because you have an FPGA with shared memory to the processor and tons of IO pins. You can do some truly ridiculous things, including emulating other processors in the FPGA fabric, hehe.
I haven't tried FPGAs yet, just doing basic AVR programming with GCC-AVR and avrdude. Do FPGAs have an open source/hobby community around them like AVR does? I have heard bad things about FPGA companies being old fashioned/secretive.
Nothing? There's quite a lot wrong, if you ask me. As much as things are more complicated out of necessity, they are also more complicated for completely unnecessary reasons.
Sends me back to programming a PDP-11/40 around 1972. Managed to get a lunar lander program to work, although the console was a Teletype (not a cool video screen).
You can get an 8-bit MCU if you want to explore that aspect. Modern embedded programming is often at a similarly low level, and even on 32-bit MCUs you're still poking registers and directly accessing memory with no protection. If you want to display something on a screen, you can just get yourself a TFT display, write the driver yourself, and send a good old-fashioned byte array via SPI or a parallel bus. Meanwhile, just sending pixels to the screen on a modern PC is not at all straightforward, unless you use a library that abstracts everything away from you.
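For a taste, here's roughly what pushing that byte array over hardware SPI looks like on an ATmega328P-class AVR (a sketch only; register names vary by part, and the TFT's own command protocol is a separate problem):

    #include <avr/io.h>
    #include <stdint.h>

    void spi_init(void) {
        DDRB |= (1 << PB2) | (1 << PB3) | (1 << PB5);  /* SS, MOSI, SCK as outputs */
        SPCR = (1 << SPE) | (1 << MSTR);               /* enable SPI, master mode */
    }

    void spi_send(const uint8_t *buf, uint16_t len) {
        while (len--) {
            SPDR = *buf++;                  /* load a byte; hardware starts clocking it out */
            while (!(SPSR & (1 << SPIF)))   /* spin until the transfer-complete flag sets */
                ;
        }
    }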
I recommend a 32-bit MCU like the Atmel AVR32 instead of an 8-bit one. The 32-bit parts aren't conceptually any more complicated. In fact, 8 bits adds a lot of complexity: arithmetic wider than a byte takes multiple instructions, and updating wide registers takes multiple single-byte writes.
Typical AVR32s give you a 32-bit, 33 MHz RISC core for $12. That would have been a decent workstation in 1990.
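To make the multi-byte point concrete, here's the classic 8-bit gotcha, sketched for an AVR: a 16-bit variable shared with an interrupt handler is read one byte at a time, so you have to guard the access or risk a torn value. On a 32-bit core the plain read is already a single access.

    #include <util/atomic.h>
    #include <stdint.h>

    volatile uint16_t tick_count;           /* incremented in a timer ISR */

    uint16_t read_ticks(void) {
        uint16_t t;
        ATOMIC_BLOCK(ATOMIC_RESTORESTATE) { /* two byte loads on an 8-bit core; */
            t = tick_count;                 /* an interrupt between them would tear the value */
        }
        return t;
    }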
8-bit microcontrollers have the advantage that most of them are 5 V devices, versus 3.3 V for the 32-bit ones, and they can also sink and source more current on their pins. That gives you more flexibility to hook them up directly to USB power, LEDs, or motor drivers without conversion, and in my experience they are less sensitive to being bricked or behaving inconsistently when non-ideal voltages are applied to their pins.
I had the experience you describe playing around with a Z80 and asm.
You can freeze the CPU with one pin, single-step the whole CPU, etc.
Or things like swapping memory banks, doing something, then swapping it back and returning to the original codepath.
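In C the pattern looks something like this (the latch address and bank numbers are invented; real boards put the bank-select register wherever they liked, and it was often write-only, hence the shadow copy). The routine itself has to live outside the window being swapped, of course:

    #define BANK_LATCH (*(volatile unsigned char *)0xFF00)  /* hypothetical latch */
    static unsigned char current_bank;   /* shadow copy: the latch can't be read back */

    static void set_bank(unsigned char b) {
        current_bank = b;
        BANK_LATCH = b;
    }

    unsigned char peek_banked(unsigned char bank, unsigned int addr) {
        unsigned char old = current_bank;
        unsigned char v;
        set_bank(bank);                          /* swap the other bank in */
        v = *(volatile unsigned char *)addr;     /* do something with it */
        set_bank(old);                           /* swap back... */
        return v;                                /* ...and return to the original codepath */
    }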
It's neat that it all fits in your head. You can, at any point in time, walk through all the pin states, registers, memory locations, etc., and fully understand why they are all in the state they're in.
Agreed. As much as we talk about how programming is manipulation of ideas...at the end of the day it is a very physical process too. You are manipulating actual physical things in this world. This is a fascinating concept to think about. Programming is physics.
What is neat about these demonstrations is how they illustrate that a computer is just a bunch of switches. Computers got better because manufacturers were able to jam more switches into a smaller area.
Their earlier mainframe cousins often had features to ease "Initial Program Loading" (IPL). The IBM 1401 had a load button that was hardwired to read a single punch card (80 six-bit characters) into memory locations 001 to 080 and jump to it.
The CDC 6000 had a panel with 144 toggle switches, acting essentially as a 144-bit boot ROM (twelve 12-bit words).
These were expensive mainframe computers, so they could usually afford some budget towards IPL functionality.
But I guess these cheaper minicomputers were expected to stay on long enough that there was little point in spending money on IPL functionality to make cold booting easier.
What you saw was the cold start procedure for a completely uninitialized machine -- manually enter a short bootstrap loader of a dozen instructions or so, just enough to load a longer one from whatever I/O device was handy. A lot of machines had this kind of start procedure -- if there's a large row of switches on the front panel, that's probably what they were there for.
(Core memory would retain its contents without power, so if you were absolutely sure nothing could possibly have disturbed that initial bootstrap routine since the last time you toggled it in, it might still be there. But a lab minicomputer of that era probably didn't have any memory protection at all, so that's a pretty big "if".)
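The logic of such a toggled-in loader, sketched in C rather than front-panel octal (the device addresses and word count are placeholders in the PDP-11 style; the real thing was a dozen machine instructions):

    #define DEV_READY (*(volatile unsigned short *)0177560)  /* placeholder status register */
    #define DEV_DATA  (*(volatile unsigned short *)0177562)  /* placeholder data register */

    void bootstrap(void) {
        volatile unsigned short *dst = (unsigned short *)01000; /* load address */
        int i;
        for (i = 0; i < 64; i++) {
            while (!(DEV_READY & 0200))   /* spin until the device has a word ready */
                ;
            *dst++ = DEV_DATA;            /* store it and advance */
        }
        ((void (*)(void))01000)();        /* jump into the longer loader just read in */
    }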
I enjoy these retro computing videos. I have fond memories of programming on IBM 360 series systems, DEC, DG, TI and PR1ME systems. I was fortunate that my high school in the 70s had a PDP-8 and an ASR-33 TTY that students could sign up to use.
So much of life is a matter of luck, and I was definitely lucky to be a part of the computer revolution over the course of my career.
If you want to learn more about what he's doing on the front panel there, check out this fantastic introduction to how the first PCs were operated with front panel switches. This series is about the Altair 8800 but the same ideas apply to most computers of the era. It starts out writing programs with front panel switches, then shows how bootloader programs worked to load larger programs off of tape or some other IO mechanism. https://www.youtube.com/watch?v=suyiMfzmZKs&list=PLB3mwSROoJ...
This brought back an old memory. My first exposure to a lunar lander game looked very similar to the game he plays in the video. It was at the Pacific Science Center in Seattle. This was in the early 80s, so it was probably running on a microcomputer, not a behemoth like this. I only vaguely understood what was happening, and I couldn't land successfully. But I loved it, nonetheless.
I really wish I could have gotten my 7-year-old son to think that this was what games looked like. He would have stuck to his Diary of a Wimpy Kid books and never looked back.
It's a balance between compactness and ease of converting to binary. The switches on the front panel are in binary, and each octal digit maps to exactly three bits, so there are only eight patterns to memorize instead of sixteen. For example:
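    365 octal → 011 110 101 binary   (3 → 011, 6 → 110, 5 → 101)

Each octal digit expands to its own three bits independently of its neighbors, so you can set the switches in groups of three straight from the printed listing.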
Octal arithmetic is pretty easy compared to hex, and some instruction sets have 3- or 6-bit opcodes and/or operands. So, confusing or not, it's pretty much for the convenience of the programmer or operator.