I went on to write my first Mandelbrot renderer in Python. I was delighted, in a way, that it worked, but badly disappointed at the performance.
I rewrote the thing in C, and then rewrote it again to make use of multiple CPU cores. I am certain there are Mandelbrot renderers out there that put mine to shame. But still, it was fun, and I got some pretty desktop wallpapers out of it. And I learned a thing or two along the way. Also, I rewrote it again using PVM (Parallel Virtual Machine) to spread the work out across several computers. So I actually learned two or three things along the way. ;-)
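For anyone curious, here's a minimal sketch of that kind of renderer (my own toy version, not the code described above): escape-time iteration in pure Python, spread across cores with multiprocessing, writing a PPM file so there are no dependencies. The resolution, view window, and grayscale palette are all my own arbitrary choices.

    from multiprocessing import Pool

    WIDTH, HEIGHT, MAX_ITER = 800, 600, 256

    def render_row(py):
        ci = -1.2 + 2.4 * py / HEIGHT          # imaginary axis: -1.2 .. 1.2
        row = bytearray()
        for px in range(WIDTH):
            cr = -2.0 + 3.2 * px / WIDTH       # real axis: -2.0 .. 1.2
            zr = zi = 0.0
            n = 0
            while zr * zr + zi * zi <= 4.0 and n < MAX_ITER:
                zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
                n += 1
            g = 255 - n * 255 // MAX_ITER      # simple grayscale shading
            row += bytes((g, g, g))
        return bytes(row)

    if __name__ == "__main__":
        with Pool() as pool:                   # one worker per CPU core
            rows = pool.map(render_row, range(HEIGHT))
        with open("mandelbrot.ppm", "wb") as f:
            f.write(b"P6\n%d %d\n255\n" % (WIDTH, HEIGHT))
            f.writelines(rows)

Rows parallelize nicely because every pixel is independent, which is also why the problem spreads so easily across machines with PVM.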
In a previous job, I had a supervisor who had learned programming in the mid-80s, and he told me about rendering the Mandelbrot set and then bit-twiddling the VGA memory to change the color palette on the display. I wasn't sure whether I should be envious of having missed those times or happy about it.
I tried it in BASIC on an Apple IIe. It was the first time I was confronted with the question of whether it was safe to run the computer overnight, or whether it would overheat.
The next morning, the computer was fine, but it still hadn't reached an interesting part of the set.
So once I got a PC I ended up using Fractint. In those days, Intel CPUs had slow floating point, so they ported the algorithm to use integer math. That was much more effective than just speeding up the machine clock (my first lesson about performance).
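The integer trick is simple enough to sketch. Here's a toy fixed-point version in Python (mine, not Fractint's actual code): coordinates are stored as integers scaled by 2^16, and the whole escape loop runs on integer multiplies and shifts.

    FRAC = 16                     # 16 fractional bits, i.e. values scaled by 65536
    ONE = 1 << FRAC
    FOUR = 4 << FRAC

    def mandel_fixed(cr, ci, max_iter=256):
        """cr, ci are fixed-point ints: real value times 2**FRAC."""
        zr = zi = 0
        for n in range(max_iter):
            zr2 = (zr * zr) >> FRAC           # fixed-point multiply: shift back down
            zi2 = (zi * zi) >> FRAC
            if zr2 + zi2 > FOUR:              # |z|^2 > 4: the point escaped
                return n
            zr, zi = zr2 - zi2 + cr, ((zr * zi) >> (FRAC - 1)) + ci
        return max_iter

    # c = -0.5 + 0.5i lies inside the set, so this prints 256:
    print(mandel_fixed(int(-0.5 * ONE), int(0.5 * ONE)))

On a real 16- or 32-bit integer unit you'd also have to watch for overflow; Python's big ints hide that, so this only shows the shape of the trick, not the speedup.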
Palette cycling was "cool", but by the time I got to college, where I'd expected to spend all my time generating fractals, I found the real world more interesting...
One example would be the advice given in A. K. Dewdney's "Computer Recreations" column in Scientific American. Here's a 2010 reprint of a 1985 article: https://www.scientificamerican.com/article/mandelbrot-set/
And "Fractint" (a fast viewer for many fractal types) was my introduction to open source, although they called it "stone soup". https://en.wikipedia.org/wiki/Fractint
So the Alto must pre-date the Mandelbrot set, which makes this demo non-canonical, like playing Tetris on it: not possible at the time, or even imagined.
I also remember how short things like Iterated Function System plotters could be - eg a few dozen lines of BASIC could get you a complicated fern piccy in a few seconds on a 80286.
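For reference, the classic Barnsley fern really is only a couple of dozen lines. Here's a Python version of the kind of BASIC listing meant above, using the standard published coefficients and an ASCII plot to stay dependency-free:

    import random

    # Affine maps (a, b, c, d, e, f) with x' = a*x + b*y + e, y' = c*x + d*y + f,
    # plus the probability of picking each one at every step.
    MAPS = [
        (0.00,  0.00,  0.00, 0.16, 0.0, 0.00),   # stem
        (0.85,  0.04, -0.04, 0.85, 0.0, 1.60),   # successively smaller leaflets
        (0.20, -0.26,  0.23, 0.22, 0.0, 1.60),   # largest left leaflet
        (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44),   # largest right leaflet
    ]
    WEIGHTS = [0.01, 0.85, 0.07, 0.07]

    x = y = 0.0
    grid = [[" "] * 60 for _ in range(30)]
    for _ in range(50000):
        a, b, c, d, e, f = random.choices(MAPS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        # x stays in roughly [-2.2, 2.7] and y in [0, 10]; map onto the grid
        grid[29 - int(y * 2.9)][int((x + 2.7) * 11.0)] = "*"
    print("\n".join("".join(row) for row in grid))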
I suspect they had access to some decent Unix workstations with expensive displays. The quality of the plates was breathtaking: you can barely see the gradations in many of the images, and they are totally smooth in most of them.
I've just had a massive hit of nostalgia - thanks!
The Science of Fractal Images lists, as credits for the colour plates, these people and institutions: http://imgur.com/a/KSnDZ
One improvement for accuracy: if you find that all four corners of a block have the same color (whether it's an 8x8 or a 4x4 block), pick another pixel inside the block and iterate that one too; if its color doesn't match, skip the optimization. The extra sample costs up to 25% more work, but it avoids mistakes in the chaotic areas near the set (there's a sketch of this below).
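A sketch of that guard-sample check in Python (the names are mine, and I'm working in complex-plane coordinates rather than pixel coordinates):

    MAX_ITER = 256

    def escape_count(cr, ci):
        # Plain escape-time iteration for one point in the complex plane.
        zr = zi = 0.0
        for n in range(MAX_ITER):
            if zr * zr + zi * zi > 4.0:
                return n
            zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        return MAX_ITER

    def solid_block_color(x0, y0, x1, y1):
        """Return the block's color if it can safely be flat-filled, else None.

        (x0, y0)-(x1, y1) are opposite corners of the block.
        """
        corners = {
            escape_count(x0, y0), escape_count(x1, y0),
            escape_count(x0, y1), escape_count(x1, y1),
        }
        if len(corners) != 1:
            return None       # corners disagree: render this block per pixel
        # Guard sample: one extra point inside the block. If it disagrees
        # with the corners, the block straddles a chaotic region, so skip
        # the optimization rather than flat-fill a wrong color.
        inner = escape_count((x0 + x1) / 2.0, (y0 + y1) / 2.0)
        return inner if inner in corners else None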
And of course, the large cardioid in the center has an analytic expression, and you can skip pixels inside that area, or a conservative approximation of the area.
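The test itself is short and well known (not my invention): c = x + iy is inside the main cardioid iff q*(q + (x - 1/4)) <= y^2/4, where q = (x - 1/4)^2 + y^2. The period-2 bulb, the disc of radius 1/4 around -1, has an equally cheap test, so it's usually checked at the same time:

    def in_main_cardioid_or_bulb(x, y):
        q = (x - 0.25) ** 2 + y * y
        if q * (q + (x - 0.25)) <= 0.25 * y * y:
            return True                      # main cardioid
        if (x + 1.0) ** 2 + y * y <= 0.0625: # disc of radius 1/4 around -1
            return True                      # period-2 bulb
        return False

    print(in_main_cardioid_or_bulb(-0.5, 0.5))   # True: skip iterating this pixel
    print(in_main_cardioid_or_bulb(0.3, 0.5))    # False: iterate normally

Points passing either test are known to be in the set without burning the full iteration budget on them.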
Now you can do this real-time on the GPU: https://www.shadertoy.com/view/4df3Rn
Real-time, 60 fps, incredibly beautiful fractals on iPhone and iPad. (I'm not affiliated with them, just a fan.)
I just got a new 12.9" iPad Pro this weekend, and Frax absolutely pushes the boundaries in terms of what the iPad can do. Just the maths involved to render the light, colour, texture and movement - in real-time, at 2732x2048 resolution - blows me away.
EDIT: I just made this, and a video exploration of the fractal too.
Similar to packing bits into characters and shooting them to some sort of dot-matrix printer attached to a VAX to print out such images, a couple of years earlier. At least that generated the images more quickly.
The Kaypro 2X was long in the tooth by that point.
I wonder whether it will still fire up?
If this hasn't already been featured on HN, I'd be surprised...
Now get off my lawn or I shall be forced to shake my cane at you quite vigorously.
Honestly, those were fun, fun days.
It was mostly because of the clock speeds, wasn't it? I seem to remember once hearing that below 20 MHz you could play "TTL Lego", but above that you ran into problems with things like propagation delay.
So, 20 MHz seems pretty credible for a region where you would see problems due to propagation delay: at that speed the clock period is only 50 ns, and at roughly 10 ns per gate you can only chain a handful of gates between clock edges. With the original 7400 series the limit could be somewhere below that, even.
That's also a reason, one I hadn't thought about much, why putting the whole CPU on a single die would look more and more practical over time: propagation delay in the wires!
If you had the money. Those chips weren't cheap back then (even the 74xx branch), and you still needed quite a few to do anything useful when it came to building a CPU.
Even though its BASIC interpreter used only 40-bit floats, I was downsampling the resolution to get 64-ish dithered colors, and the program compiled down to mostly ROM calls, it still took a very long time to render regions, especially those closer to the set itself.
Ok. As clock-efficient as the 6502 was, it still ran at 1 MHz.
Coding Challenge #21: Mandelbrot Set with p5.js - https://www.youtube.com/watch?v=6z7GQewK-Ks
VEX IN HOUDINI: MANDELBROT AND MANDELBULB - https://www.sidefx.com/tutorials/vex-in-houdini-mandelbrot-a...
The second link is Houdini specific but you should be able to follow along with the free version on win/linux/osx.
can generate the Mandelbrot set in a fraction of a second, while the Alto took an hour.
The Alto was built about 40 years ago, right? That would be about a 30% performance improvement per year: 1.3^40 ≈ 36,000x, which is roughly an hour down to a tenth of a second. Is that about right for single-CPU performance growth over the last 40 years?
 Is using a GPU cheating? Are we comparing computer performance or CPU performance? The Alto didn't even have a CPU as we know it today...
But BCPL is actually an interesting language. I knew that it influenced C (as did Algol and other languages), but until using BCPL I didn't realize how much of C was really the same as BCPL.
That was a nice stepping stone to C on the Atari ST.