One-Hour Mandelbrot: Creating a Fractal on the Vintage Xerox Alto (righto.com)
112 points by darwhy 10 months ago | 53 comments



I still remember the day I read the article on the Mandelbrot set and went, "Whoa, I understand the math!"

I went on to write my first Mandelbrot renderer in Python. I was delighted, in a way, that it worked, but badly disappointed at the performance.

I rewrote the thing in C, and then rewrote it again to make use of multiple CPU cores. I am certain there are Mandelbrot renderers out there that put mine to shame. But still, it was fun, and I got some pretty desktop wallpapers out of it. And I learned a thing or two along the way. Also, I rewrote it again using PVM (Parallel Virtual Machine) to spread the work out across several computers. So I actually learned two or three things along the way. ;-)
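The core loop, for anyone curious, is tiny. A minimal sketch of the standard escape-time iteration (not my original code, just the usual z = z² + c recurrence):

    def mandelbrot(width, height, max_iter=100,
                   re_min=-2.5, re_max=1.0, im_min=-1.25, im_max=1.25):
        """Return a 2D list of escape counts; max_iter marks points in the set."""
        image = []
        for row in range(height):
            ci = im_min + (im_max - im_min) * row / (height - 1)
            line = []
            for col in range(width):
                cr = re_min + (re_max - re_min) * col / (width - 1)
                zr = zi = 0.0
                n = 0
                while n < max_iter and zr * zr + zi * zi <= 4.0:
                    zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
                    n += 1
                line.append(n)
            image.append(line)
        return image

    # quick ASCII check: points that never escape print as '#'
    for line in mandelbrot(78, 24, max_iter=50):
        print("".join("#" if n == 50 else " " for n in line))

Everything past that (colour mapping, zooming, parallelism) is garnish on top of that loop.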

In a previous job, I had a supervisor who learned programming in the mid-80s and who told me about rendering the Mandelbrot set, then bit-twiddling the VGA memory to change the display's color palette. I wasn't sure whether to be envious of having missed those times or happy about it.


Yeah, once z = z² + c was explained, most of the Mandelbrot set was easy for me too.

I tried it in BASIC on an Apple IIe. It was the first time I was confronted with the question of whether it was safe to run the computer overnight, or whether it would overheat.

The next morning, the computer was fine, but it still hadn't reached an interesting part of the set.

So once I got a PC I ended up using Fractint. In those days Intel chips had slow floating point, so the authors ported the algorithm to integer arithmetic. That was much more effective than just speeding up the machine clock (my first lesson in performance).
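The integer trick is easy to illustrate. A sketch of the same inner loop in fixed-point arithmetic (Python for readability; Fractint's actual bit layout differed):

    FRAC = 16                 # fractional bits: value * 2**FRAC is the integer form
    ONE = 1 << FRAC

    def escape_count_fixed(cr, ci, max_iter=100):
        # cr, ci are fixed-point integers, e.g. int(-0.75 * ONE)
        zr = zi = 0
        for n in range(max_iter):
            zr2 = (zr * zr) >> FRAC              # zr^2, rescaled back to FRAC bits
            zi2 = (zi * zi) >> FRAC
            if zr2 + zi2 > 4 * ONE:              # |z|^2 > 4 means the point escaped
                return n
            zi = ((zr * zi) >> (FRAC - 1)) + ci  # 2*zr*zi + ci
            zr = zr2 - zi2 + cr
        return max_iter

    print(escape_count_fixed(int(-0.75 * ONE), int(0.1 * ONE)))

Every multiply and shift maps to fast integer instructions, which is where the win came from on FPU-less machines.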

Palette cycling was "cool", but by the time I got to college, although I'd thought I'd spend all my time generating fractals, I found the real world more interesting...


The first Mandelbrot generator I played with took more than 3 hours on a BBC Micro to render a 160x256 image. That was 30 years ago and I still try out variations on this from time to time (larger exponents, fractional exponents, complex exponents, animations, etc).

http://alquerubim.blogspot.com.br/search/label/Fractais


This used to be a rite of passage for programmers of the time.

One example would be the advice given in A.K. Dewdney's "Computer Recreations" column in Scientific American. Here's a 2010 reprint of the 1985 article: https://www.scientificamerican.com/article/mandelbrot-set/

And "Fractint" (a fast viewer for many fractal types) was my introduction to open source, although they called it "stone soup". https://en.wikipedia.org/wiki/Fractint


Fractint was, if not a killer app, really popular for EGA on PCs. CPUs were getting fast enough at about the same time that the computations weren't too excruciating, and EGA cards were the first mainstream graphics hardware on which the Mandelbrot set and friends looked decent.


I was there too. Plus, at the time, the Mandelbrot set was new, as new as DeepDream images are today.

So the Alto must pre-date the Mandelbrot set, and therefore this demo is non-canonical, like playing Tetris on it: not possible at the time, or even imagined.


This reminded me of an interview w/ Carl Sagan, Stephen Hawking, and Arthur C Clarke from 1988. Clarke demos the Mandelbrot set and is clearly very excited. If only they knew...

https://youtu.be/1srEEbB_vrI?t=18m31s


Hawking knows.



The Alto was introduced in 1973 but was still available in the early 80s, so there is every possibility that someone wrote a Mandelbrot generator on it, since the set was discovered in 1980. However, it is more likely that this happened on the Xerox Star, which was around from 1981, and especially after 1985, when the first popular Scientific American article about the set appeared...


… while in the same era Peitgen and Richter were publishing books with detailed Mandelbrot sets in full-colour plates. What computers were they using?


I worked in a book warehouse before college, and two of their books had a little accident and ended up in the rejects bin along with some of my course texts. I lent the P&R books to someone and forgot who, so I actually bought them again later on. They are amazing.

I also remember how short things like Iterated Function System plotters could be: a few dozen lines of BASIC could get you a complicated fern piccy in a few seconds on an 80286.
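For anyone who hasn't seen one: the fern really is just four affine maps chosen at random. A sketch using Barnsley's published coefficients (PIL is used here just to save the image; any plotting method works):

    import random
    from PIL import Image

    # Barnsley's four affine maps: x' = a*x + b*y + e ; y' = c*x + d*y + f
    # Each tuple is (a, b, c, d, e, f, probability).
    MAPS = [
        ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # ever-smaller leaflets
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
    ]
    weights = [m[6] for m in MAPS]

    img = Image.new("1", (400, 400))
    x, y = 0.0, 0.0
    for _ in range(100_000):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        # the attractor lives in roughly x in [-2.2, 2.7], y in [0, 10]
        img.putpixel((int((x + 2.2) / 4.9 * 399), int(399 - y / 10 * 399)), 1)
    img.save("fern.png")

A few dozen lines in BASIC back then; barely twenty now.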

I suspect they had access to some decent Unix workstations with expensive displays. The quality of the plates was breathtaking: you can barely see the gradations in many of the images, and they are totally smooth in most of them.

I've just had a massive hit of nostalgia - thanks!


I recall seeing full-colour real-time Mandelbrot set zooming and fractal landscape rendering on a Meiko Computing Surface (containing 32 or 64 Transputers, IIRC) at university, circa 1990. I studied Occam at the time but we weren't allowed to actually program the Transputers ourselves.

https://en.wikipedia.org/wiki/Meiko_Scientific#Computing_Sur...


They reference IBM labs. The Beauty of Fractals talks about an IBM 4361-5 with extended precision, and an IBM 4250 printer for one of the monochrome plates near the end of the book.

The Science of Fractal Images lists, as credits for the colour plates, these people and institutions: http://imgur.com/a/KSnDZ


A cousin, the 4341, was my first mainframe. That was the first time I was exposed to the concept behind pg's "Beating the Averages" article: at a time when most of my colleagues were writing dBase or COBOL code, I interned with people using Cincom's Mantis. It was the Rails of the 3270.


I remember their demos. Their secret sauce was in the algorithm, which somehow would spend little time on the featureless parts.


One simple way is to start with a largish square, say 8x8 pixels, and evaluate the Mandelbrot set at the corners. If you get the same color at all four, paint the square in a single color. Otherwise split it into 4 smaller blocks, evaluating the set on a 3x3 grid (not strictly correct, since adjacent sub-squares don't actually share corner pixels, but it's about twice as fast). Again, if any small square has the same color at all its corners, fill that block with one color.

One improvement for accuracy: if all four corners have the same color (no matter whether it's an 8x8 or 4x4 block), pick another pixel inside and iterate that one; if its color doesn't match, skip the optimization. That is more expensive, up to 25%, but it avoids mistakes in the chaotic areas of the set.

And of course, the large cardioid in the center has an analytic expression, so you can skip pixels inside that area, or inside a conservative approximation of it.
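Putting the pieces together, the scheme looks roughly like this (a sketch, assuming a numpy buffer initialised to -1 and power-of-two blocks; not exactly what any particular renderer did):

    import numpy as np

    def escape_count(cr, ci, max_iter=256):
        # analytic skip: the main cardioid and the period-2 bulb are inside the set
        q = (cr - 0.25) ** 2 + ci * ci
        if q * (q + (cr - 0.25)) < 0.25 * ci * ci or (cr + 1) ** 2 + ci * ci < 0.0625:
            return max_iter
        zr = zi = 0.0
        for n in range(max_iter):
            zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
            if zr * zr + zi * zi > 4.0:
                return n
        return max_iter

    def pixel(img, x, y, to_c, max_iter):
        if img[y, x] < 0:                     # -1 means "not computed yet"
            img[y, x] = escape_count(*to_c(x, y), max_iter)
        return img[y, x]

    def fill_block(img, x0, y0, size, to_c, max_iter=256):
        corners = [pixel(img, x, y, to_c, max_iter)
                   for x in (x0, x0 + size - 1) for y in (y0, y0 + size - 1)]
        if len(set(corners)) == 1:
            # cheap insurance: iterate one interior pixel before trusting the corners
            if pixel(img, x0 + size // 2, y0 + size // 2, to_c, max_iter) == corners[0]:
                img[y0:y0 + size, x0:x0 + size] = corners[0]
                return
        if size <= 2:
            for y in range(y0, y0 + size):
                for x in range(x0, x0 + size):
                    pixel(img, x, y, to_c, max_iter)
            return
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            fill_block(img, x0 + dx, y0 + dy, h, to_c, max_iter)

    # usage: a 512x512 view of the full set, swept in 8x8 blocks
    W = H = 512
    img = np.full((H, W), -1, dtype=np.int32)
    to_c = lambda x, y: (-2.5 + 3.5 * x / (W - 1), -1.75 + 3.5 * y / (H - 1))
    for by in range(0, H, 8):
        for bx in range(0, W, 8):
            fill_block(img, bx, by, 8, to_c)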


Yup, did that, and it was a lot faster on an Archimedes A310; I recollect minutes rather than hours, and that was without the FPA unit.


It's crazy how far computers have improved; this is now the kind of thing you can do as a hobby project.

https://m.youtube.com/watch?v=0jGaio87u3A


I remember trying to do this on a 286, and it would take the better part of a day to produce an EGA-quality image. A friend at the time had a 386 and it seemed blazing fast: it would take only a few hours to do a 640x480 VGA image.

Now you can do this real-time on the GPU: https://www.shadertoy.com/view/4df3Rn


For anyone on iOS, I recommend you check out the app Frax. Created in part by the legendary Kai Krause!

http://fract.al/

Realtime, 60 fps, incredibly beautiful fractals on iPhone and iPad. (am not affiliated with them, just a fan)


Frax is a masterpiece. It's one of those extremely rare apps that manages to effortlessly combine extreme technical competency with a creative, intuitive and inspiring user experience.

I just got a new 12.9" iPad Pro this weekend, and Frax absolutely pushes the boundaries in terms of what the iPad can do. Just the maths involved to render the light, colour, texture and movement - in real-time, at 2732x2048 resolution - blows me away.

------

EDIT: I just made this[1], and a video exploration of the fractal too[2].

[1] http://i.imgur.com/s7sNYJ0.jpg

[2] https://youtu.be/35f6YbV39e8


If you're not feeling so artistically inclined - or maybe just burned out on the Mandelbrot set - you can get a "flame" fractal image or video every month from a guy I know who does them professionally. https://www.patreon.com/QuinnPalmer


Super late edit - it's several wallpapers per month and one video. Also you can get physical objects with the designs on them through his kinda-defunct old site http://fractal.world/


Reminds me of writing programs to map the Mandelbrot and Julia sets onto the display of my Kaypro 2X. It didn't even have real bitmapped graphics -- just a way of packing... I think it was 4-pixel squares... into a character representation and then writing that.

Similar to packing bits into characters shot to some sort of dot-matrix printer attached to a VAX, to print out such images, a couple of years earlier. At least that generated the images more quickly.

The Kaypro 2X was long in the tooth, by that point.

I wonder whether it will still fire up?


There's a "mode" if you will, on CGA cards, that involves something similar - best explained here:

http://8088mph.blogspot.com/2015/04/cga-in-1024-colors-new-m...

If this hasn't already been featured on HN, I'd be surprised...


"You can also see the CPU's registers on this board." That made me LOL a bit. Very interesting architecture.


I've come to understand that many CPUs were not single-chip well into the integrated-circuit era. Today we might assume that a microprocessor comes in a single package, but that wasn't a reasonable assumption in other decades, when the computer designer might even be creating the CPU as well, without access to a semiconductor fab.


Are you trying to make me feel old, simply because I once had the 74Fxxx TTL catalog nearly memorized? Or that I can do an instruction decoder and conflict interlock unit in 100K ECL?

Now get off my lawn or I shall be forced to shake my cane at you quite vigorously.

Honestly, those were fun, fun days.


It's surprising to me how in the 1970s, if someone needed a processor for something (even a video game or CNC controller), it was normal for them to invent a new instruction set and put together a processor from TTL chips. The barrier to entry was very low.


>...put together a processor from TTL chips. The barrier to entry was very low.

It was mostly because of the clock speeds, wasn't it? I seem to remember hearing that below 20 MHz you could play "TTL Lego", but above that you ran into problems with things like propagation delay.


The stated propagation delay for TTL chips was usually about 10 or 20 nanoseconds (sometimes larger for more complex logical operations, and frequently smaller in subsequent logic generations, like 5 nanoseconds or so). I just checked this in a late 1970s databook.

So, 20 MHz seems pretty credible for a region where you would see problems due to propagation delay—maybe even somewhere below that with the original 7400 series.

That's also a reason, one I hadn't thought about much, why putting the whole CPU on a single die would look more and more practical over time: propagation delay in wires!
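Back-of-the-envelope, with illustrative numbers rather than any specific design:

    gate_delay_ns = 15    # typical 74xx propagation delay (the 10-20 ns above)
    logic_levels = 3      # gate levels between registers in a simple datapath (assumption)
    ff_overhead_ns = 5    # flip-flop setup plus clock-to-Q margin (assumption)

    min_period_ns = logic_levels * gate_delay_ns + ff_overhead_ns   # 50 ns
    print(f"max clock ~ {1000 / min_period_ns:.0f} MHz")            # ~20 MHz

So a handful of gate levels per clock puts you right at that 20 MHz wall, before you even count wire delay.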


> The barrier to entry was very low.

If you had the money. Those chips weren't cheap back then (even the 74xx branch), and you still needed quite a few to do anything useful when it came to building a CPU.


Well, there's always the 74181, a 4-bit ALU[0], as used in the Alto[1] among others like the NOVA and VAX 11/780.

0. https://mil.ufl.edu/4712/docs/74LS181.pdf

1. https://en.wikipedia.org/wiki/74181#Computers


And now my crummy laptop can render a smooth-shaded version full-screen at 12 FPS. https://www.shadertoy.com/view/lsX3W4


And it does that with a brute-force algorithm. Try XaoS; it can render much faster using only the CPU.


I wonder if more efficient microcode could be written for the Alto, optimised just for rendering the Mandelbrot set.


The Alto has up to 3K RAM to hold custom microcode, and some programs used their own microcode for performance. So, yes it would be quite possible to speed up the Mandelbrot set with microcode. But writing Alto microcode is very difficult, so I'm not planning to do that...


It wasn't much faster on the Apple II...

Even though its BASIC interpreter used 40-bit floats, I was downsampling the resolution to get 64-ish dithered colors, and everything compiled down to mostly ROM calls, it still took a very long time to render regions, especially those closer to the set itself.

Ok. As clock-efficient as the 6502 was, it still ran at 1 MHz.


An amazing demonstration of the math behind some fractals, which runs on WebGL, is Steven Wittens' "How to Fold a Julia Fractal".

https://acko.net/blog/how-to-fold-a-julia-fractal/

Highly recommended


I recommend these too:

Coding Challenge #21: Mandelbrot Set with p5.js - https://www.youtube.com/watch?v=6z7GQewK-Ks

VEX IN HOUDINI: MANDELBROT AND MANDELBULB - https://www.sidefx.com/tutorials/vex-in-houdini-mandelbrot-a...

The second link is Houdini specific but you should be able to follow along with the free version on win/linux/osx.


I remember writing Mandelbrot renderers on the C64 and Amigas. Depending on the depth you could wait quite some time, so it often ran overnight, with a pleasant surprise the next morning (or some ugly-looking picture).


    On a modern computer, a Javascript program
    can generate the Mandelbrot set in a fraction
    of a second, while the Alto took an hour.
Hm... a "fraction" could mean anything. If it's exactly a second, then the speedup factor would be about 3600x.

The Alto was built about 40 years ago, right? That works out to roughly 23% performance improvement per year. Is that about right for single-CPU performance over the last 40 years?
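A quick sanity check, assuming exactly one second:

    speedup = 3600                    # one Alto-hour vs. one modern second
    years = 40
    rate = speedup ** (1 / years) - 1
    print(f"{rate:.1%} per year")     # 22.7% per year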


Well, with this[0] implementation[1] my laptop can render 3840x2400 pixels at 60 fps; ignoring the larger resolution, how about 1/60th of a second?

[0] https://news.ycombinator.com/item?id=14588789

[1] Is using a GPU cheating? Are we comparing computer performance or CPU performance? The Alto didn't even have a CPU as we know it today...


[1] Well, a GPU is somewhat geared towards this kind of workload. Both comparisons (Alto to modern CPU and Alto to modern GPU) have some meaning.


That rate of net performance improvement might actually be about right. Raw improvements were bigger, but software got slower.


With the limited computing power, they should have at least used the bilateral symmetry (the set mirrors across the real axis) to halve the time.
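If the viewport is centred on the real axis, it really is just a mirror (render_row here is a hypothetical per-scanline renderer; assumes an odd row count):

    rows, im_max = 255, 1.75
    top = [render_row(im_max * (1 - 2 * r / (rows - 1)))   # ci from +im_max down to 0
           for r in range((rows + 1) // 2)]
    image = top + top[-2::-1]   # mirror the top half, sharing the ci == 0 row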


Nice, just a bit disappointed it was in BCPL and not Mesa.


I'm working on getting Mesa and Smalltalk running on the Alto, so stay tuned...

But BCPL is actually an interesting language. I knew that it influenced C (as did Algol and other languages), but until using BCPL I didn't realize how much of C was really the same as BCPL.


BCPL is almost a predecessor to C. The 8-bit micro that I used had a BCPL compiler for it.

http://www.computinghistory.org.uk/userdata/images/large/PRO...

That was a nice stepping stone to C on the Atari ST.


It wasn't "almost" a predecessor to C, it was a direct predecessor.


There was B in between, which was Ken's attempt at a BCPL compiler; even the languages' manuals look quite similar.


Thanks for the information; curious how it will turn out.



