A bitplane is a region of memory rendered as pixels. The Amiga hardware allowed you to (basically) define a width and height for the region, as well as an x,y offset. So by, e.g., increasing the x-offset you could scroll horizontally through a bitplane wider than the screen.
The ball is in the middle of a large empty bitplane. By simply changing the offset values it bounces around the screen. The graphics hardware does the heavy lifting. No pixels need to be erased, no polygons rendered. You're just looking at the same ball, but the window through which you look has moved.
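In hardware terms, "moving the window" just means recomputing where bitplane DMA starts fetching each frame. A minimal sketch in C (not the demo's actual code; the 1024-pixel bitplane width is my assumption, while BPL1PTH/BPL1PTL are the standard OCS registers for bitplane 1's pointer):

    /* Sketch: scroll a screen-sized view around a larger bitplane by
     * re-pointing bitplane 1's DMA fetch address. The pointers must be
     * rewritten every frame (normally by the copper), because the
     * chipset advances them as it fetches each line. */
    #include <stdint.h>

    #define BIG_W_BYTES (1024 / 8)   /* assumed 1024-pixel-wide bitplane */

    static volatile uint16_t *const BPL1PTH = (uint16_t *)0xDFF0E0;
    static volatile uint16_t *const BPL1PTL = (uint16_t *)0xDFF0E2;

    void set_view(const uint8_t *plane, int x, int y)  /* x in multiples of 16 */
    {
        uint32_t a = (uint32_t)plane + (uint32_t)y * BIG_W_BYTES + x / 8;
        *BPL1PTH = (uint16_t)(a >> 16);
        *BPL1PTL = (uint16_t)a;
    }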
Colour cycling can be used with palettes. Let's say you have 32 colours in your palette. Draw a row of 32 pixels, each in a different colour. Now change the palette so all colours are white, except for the first, which is red. Your pixels still refer to their respective entries in the palette. A frame later, make colour 1 white and colour 2 red, and so on. This way you get animation without redrawing pixels (expensive), simply by resetting the palette (cheap).
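As a rough sketch in C (COLOR00-COLOR31 really are the OCS palette registers at 0xDFF180; the rotation scheme here is just illustrative):

    /* Sketch: animate by rotating palette entries instead of redrawing.
     * Entry 0 (the background) is left alone. */
    #include <stdint.h>

    static volatile uint16_t *const COLOR = (uint16_t *)0xDFF180;

    void cycle_palette(uint16_t pal[32])
    {
        uint16_t last = pal[31];        /* rotate entries 1..31 by one */
        for (int i = 31; i > 1; i--)
            pal[i] = pal[i - 1];
        pal[1] = last;
        for (int i = 1; i < 32; i++)    /* write the new palette */
            COLOR[i] = pal[i];
    }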
It's worth mentioning the reveal, which the page omits entirely and which was the astonishing bit of the CES demo at the time.
In the original demo they were showing the normal desktop screen: window dragging, resizing, text editors, clocks, a shell and whatnot, running 640x480 in 4 colours. The whole time you could hear the bang-bang of the Boing ball bouncing, but see nothing.
At some point a viewer asks "what the heck's that noise?" and the demoer pulls down the desktop screen to smoothly, and without glitching, reveal a 320x240 32-colour screen running the Boing demo behind it. In its day, that was almost unbelievable. PCs had just got 16-colour EGA, and switching resolution resulted in a couple of seconds of monitor blanking or losing sync.
I can't remember if they also had a 4096 colour HAM image (Hold and Modify - a way of cheating with palettes and getting a LOT more colours) behind that.
The Amiga had virtual screens that operated a little differently to a modern approach - they could be massive with a monitor sized viewport onto it. They could have multiple resolutions and bit depths. They could be brought forward and sent back, and dragged (only up and down) to reveal multiple screens at a time. Visualise having a full size browser window, and dragging the whole window down to reveal the desktop, but switching resolution and colours as you go.
People were left speechless.
The copper (display co-processor) allowed switching of palette and resolution, with zero CPU use, on a particular scan line every frame.
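For the curious, a copper list is just pairs of 16-bit words sitting in Chip RAM. A sketch, with encodings as per the Hardware Reference Manual: wait for a scanline, change the background colour, end the list:

    /* Sketch of a copper list (must live in Chip RAM for the copper to
     * fetch it): WAIT for the beam to reach line 0x64, MOVE a new value
     * into COLOR00, then WAIT for an impossible beam position, which is
     * the conventional end-of-list marker. */
    #include <stdint.h>

    static uint16_t coplist[] = {
        0x6407, 0xFFFE,   /* WAIT: vertical beam position 0x64 */
        0x0180, 0x0F00,   /* MOVE: COLOR00 (register offset 0x180) = red */
        0xFFFF, 0xFFFE,   /* end of list */
    };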
It seriously took 10 years or more before a Windows desktop could move stuff around the screen as smoothly, mainly because the Amiga didn't actually move a lot of the stuff a PC did. PCs were upping colour depth as they aged rather than chasing more speed, though. Virtual screens on the PC were a shadow of their Amiga implementation.
While I own an A500 (still sitting in the basement), I was too young to really grasp what I was dealing with.
Recently I learned that the Amiga was effectively two computers in one: you had the 68k doing its thing, you had the custom chips doing their thing, and in the middle you had a shared RAM pool.
I guess to a degree the current CPU+GPU setup comes close, except that on the Amiga the GPU had priority.
That said, the tight coupling of CPU, RAM and the custom chips was perhaps the Amiga's Achilles heel. While on the PC you could switch out the various parts, the Amiga, at least on the A500, A600 and A1200, could not.
It's hard trying to answer this briefly. Hope it doesn't come out meaningless!
It's comparable with the current CPU+GPU setup but on a vastly simpler scale. The copper only had three instructions! The blitter, like most things, was DMA-controlled, but the whole chipset had a ton of special registers that both the CPU and the copper could get at. You could set what got priority. Coders had to learn to wait on blits like anything else; under the OS, as you moved a window, another app or screen could have grabbed the blitter first. Hence why many games just hit the hardware: keep it simple, and single-threaded, and to hell with the OS. :)
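Under the OS you'd call graphics.library's WaitBlit(); code that "hit the hardware" typically just polled the blitter-busy bit itself, roughly like this sketch:

    /* Sketch: spin until the blitter is idle by polling DMACONR bit 14
     * (BBUSY). Real code reads the register twice; early chip revisions
     * could return a stale value on the first read. */
    #include <stdint.h>

    static volatile uint16_t *const DMACONR = (uint16_t *)0xDFF002;

    static void wait_blit(void)
    {
        (void)*DMACONR;                 /* dummy read for the hardware quirk */
        while (*DMACONR & (1u << 14))   /* BBUSY set: blitter still working */
            ;
    }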
About chip memory: Amigas had two types of memory, Chip RAM and Fast RAM. Chip RAM could be accessed by the CPU and any of the custom chips; Fast RAM was just for the processor's programs and data. A modern CPU+GPU has two separate pools of RAM linked via PCIe. If there were a similar architecture to the Amiga's, but with modern complex silicon and design, I'm sure games coders would have dreamed up a few extra effects and tricks. Or maybe it wouldn't matter any more at the vast speeds we now reach, I don't know.
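In AmigaOS terms the split was explicit at allocation time. A sketch using the real exec.library calls (the sizes are arbitrary):

    /* Sketch: anything the custom chips must fetch (bitplanes, samples,
     * copper lists) has to come from Chip RAM; CPU-only data can live
     * anywhere, preferably Fast RAM. */
    #include <exec/memory.h>
    #include <proto/exec.h>

    void example(void)
    {
        UBYTE *plane = AllocMem(320 / 8 * 256, MEMF_CHIP | MEMF_CLEAR);
        UBYTE *table = AllocMem(4096, MEMF_ANY);  /* Fast RAM if available */

        /* ... use them ... */

        if (plane) FreeMem(plane, 320 / 8 * 256);
        if (table) FreeMem(table, 4096);
    }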
Incidentally, that tight linking and display coprocessing, along with the DMA channels and readily available genlock, is why it ended up a niche success in broadcast. Nothing else apart from £10,000+ dedicated kit like a Quantel Paintbox (which, bizarrely, still exists) could touch it, especially after the Video Toaster came along.
The chipset needn't have been the Achilles heel if Commodore had not been run by an asset-stripping moron at the time. Designs were being done for follow-on chipsets, but they never made it, and we got AGA (a fraction of AAA) years late. I think they started on AAA in '87 and cancelled it, or put it on hold, probably a dozen times. The 3000+ was an astonishing prototype machine that only existed on Dave Haynie's desk for ages. Commodore gave us the lousy, cost-cutting 4000, with IDE instead of SCSI, as the follow-up.
Having had my dad buy an IBM PC with a CGA monitor, I can't tell you how right you are. The Amiga graphics architecture was so far above everything else at the time... A few years after that I went into the demoscene, and it was horribly difficult to achieve on the PC what the copper could do: you had to time scanline reads with devious accuracy, change palette registers in non-obvious ways, etc. And don't even talk about the music: the Amiga sound chip was equally powerful, and the SoundBlaster card, although it could play better sound, was absolutely nowhere when it came to actually mixing sound. Oh yes, I miss the Amiga :-) Fortunately, the PC took over, so what I've learnt chasing the Amiga is actually useful :-)
Yeah me too. I had great fun and learnt so much. When I ended up doing a little Windows programming at work some years later - I couldn't believe how horrible it was compared to Intuition. No surprise my career majored on *nix. :)
Since no one else has confirmed: Yes, the Amiga had HAM mode, and there were various pictures done (generated or scanned) that showed it off. (Something like [0] was just not going to happen on PCs of the day...). I think there were even some games that worked with HAM mode, awkward as it was.
> In its day, that was almost unbelievable. PCs had just got 16-colour EGA, and switching resolution resulted in a couple of seconds of monitor blanking or losing sync.
Windows let each application (or even each window; it's been a long time, I don't remember the specifics) define its own palette in 256-color (8-bit) mode. You had reserved colors to make sure the GUI looked the same, but you'd get all kinds of strange artifacts when you switched to an application with a custom palette while other applications' windows were still visible.
For years you had to trade off color depth, and the related display quirkiness, against speed and performance.
The fun days of switching your new £300 graphics card to 16 or 24 bit mode, moving a window, and watching Windows redraw every line and icon. Step. By. Painful. Step.
It might be obvious from the name, but there's also just one bit per pixel in a bitplane, and then you stack five bitplanes to get 32 colors.
You can have different offsets into each bitplane, which is why the grid could stay in the same place while the ball bounced.
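A sketch of how the five planes combine into a palette index, and why separate plane pointers make the grid/ball trick possible:

    /* Sketch: the colour index of pixel (x,y) is built from one bit of
     * each of the five bitplanes. Because every plane has its own
     * pointer, the ball's planes could be offset while the grid's plane
     * stayed put. */
    #include <stdint.h>

    unsigned pixel_index(const uint8_t *plane[5], int bytes_per_row, int x, int y)
    {
        unsigned idx = 0;
        for (int p = 0; p < 5; p++) {
            uint8_t b = plane[p][y * bytes_per_row + x / 8];
            idx |= ((b >> (7 - (x & 7))) & 1u) << p;  /* MSB = leftmost pixel */
        }
        return idx;  /* 0..31, an index into the palette */
    }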
The Amiga only supported bitplanes, not "chunky" modes like VGA's where you have one byte per pixel. This made graphics with 256 colors (supported by the later AGA chipset) really slow, as you had to do eight writes per pixel.
For a while there was a lot of competition between coders to write the best chunky-to-planar routine. This let you write graphics effects into a chunky buffer and then convert every frame on the fly to planar. One of the fastest was written by Kalms.
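Nothing like Kalms' hand-optimised assembly, but the naive idea looks like this sketch in C, which also shows why the routine was worth optimising:

    /* Naive chunky-to-planar sketch: split each chunky byte into eight
     * bits, one per bitplane. The fast routines did this with word-wide
     * bit merges instead of a per-pixel loop. */
    #include <stdint.h>

    void c2p_naive(const uint8_t *chunky, uint8_t *plane[8],
                   int width, int height, int bytes_per_row)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                uint8_t c = chunky[y * width + x];
                int     o = y * bytes_per_row + x / 8;
                uint8_t m = (uint8_t)(0x80 >> (x & 7));
                for (int p = 0; p < 8; p++) {
                    if (c & (1u << p)) plane[p][o] |= m;
                    else               plane[p][o] &= (uint8_t)~m;
                }
            }
    }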
Yes, this is why recreating Doom on the Amiga was so difficult. I remember wanting to play a Doom-like first person shooter back in my youth and being completely frustrated because the performance just wasn't there. It took several years before anything even close became available and by that time the world had moved on. I really think this limitation was one of the reasons that the Amiga faltered; it was a terrific platform in almost every single way, but it couldn't do first person shooters worth a damn. :(
This also let you make "memory peekers": just a simple assembly program that would offset the pointer to a bitplane based on vertical mouse movement. You could "look" at the RAM, and it was one way to rip images, since you would see the bitmaps from a game still sitting in RAM after a reset.
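A sketch of the idea (JOY0DAT really is the OCS mouse-counter register; the row width is my assumption):

    /* Sketch of a "memory peeker": each tick of vertical mouse movement
     * moves the displayed bitplane pointer by one row of bytes. */
    #include <stdint.h>

    #define ROW_BYTES (320 / 8)   /* assumed 320-pixel-wide display */

    static volatile uint16_t *const JOY0DAT = (uint16_t *)0xDFF00A;

    static uint32_t peek = 0;     /* address currently shown as bitplane 1 */

    void update_peek(void)        /* call once per vertical blank */
    {
        static uint8_t last;
        uint8_t v = (uint8_t)(*JOY0DAT >> 8);   /* wrapping vertical counter */
        int8_t  d = (int8_t)(v - last);
        last = v;
        peek += (int32_t)d * ROW_BYTES;
        /* then write peek into BPL1PTH/BPL1PTL, as in the scrolling sketch */
    }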
Yeah, that was a great way to "visualize" chip RAM contents.
Curiously, on some later OCS A500 models you could also see into the "slow RAM" expansion module's range that way! You just needed to point the bitplane pointers above 0x80000.
It appeared at 0xc00000 for the CPU and 0x80000 for chipset.
If I remember correctly, the offsets were measured in words (16 bits on the M68k), and then you had scroll registers where you could offset by a further 0-15 pixels.
This was complicated a bit since you only had two scroll registers, one for even and one for odd bitplanes.
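Roughly like this sketch (BPLCON1 is the real register; the exact delay inversion is from memory, so treat it as illustrative):

    /* Sketch: horizontal scrolling = coarse 16-pixel word steps via the
     * bitplane pointers, plus a 0-15 pixel fine delay in BPLCON1, which
     * holds one 4-bit field for odd planes (bits 3-0) and one for even
     * planes (bits 7-4). */
    #include <stdint.h>

    static volatile uint16_t *const BPLCON1 = (uint16_t *)0xDFF102;

    void fine_scroll(int x)
    {
        unsigned delay = 15 - (x & 15);   /* larger delay shifts pixels right */
        *BPLCON1 = (uint16_t)((delay << 4) | delay);
        /* the word-aligned part, x / 16, goes into the bitplane pointers */
    }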
We stayed in a mountain bungalow during summer vacation; there was a waterfall close enough to hear (and be loud) but far enough away that its noise sounded like a pleasant version of TV static.
I had my best night's sleep in years. My theory was that it drowned out ambient bumps and crashes that usually wake me up at night.
I recall camping at some relatives that had a waterfall near enough that you heard it any time you were outside. After about the second day or so you hardly even noticed.
For readers following along at home, R.J. Mical [1] was a co-inventor of the Atari Lynx [2]:
"Under the auspices of a game company called Epyx I was co-inventor of the first color hand-held game system, the Lynx, which finally was acquired by Atari.
I was co-designer of the Lynx hardware system, and I implemented an entire software development suite including a run-time library of hardware interface routines and a celebrated set of debugging, art and audio tools. We received many patents for the Lynx.
In addition, we developed 6 games to be available at the launch of the system. I produced these 6 titles, was co-designer of several of them, and managed the programmers, artists and audio/music designers." [3]
The bouncing sound: they took a plastic bat (used for working out 'disagreements') and hit an aluminum garage door while digitizing the interior sound of the garage with an Apple II, then massaged the digital data to play on the Amiga.
In this version, the ball actually bounces around; the OP's version doesn't seem to do any actual bouncing, so the "Boing" name didn't make much sense. I have to say, the bouncing version is a lot more impressive.
I remember what the effective limits were at the time, and to me it still looks impressive.
As an example, the observer would never get the idea that it's done by changing "the beginning of the screen", as the grid background stays in the same place the whole time and just the ball (with its shadow) moves.
Also note the change of the rotation direction as the ball hits the wall.
I feel obligated to point out the Amiga documentary http://www.frombedroomstobillions.com/ , in case there are any veterans here who have been living in a cave etc. It is an absolute must-see, great on so many levels, and it also details the last-minute efforts to get the launch right.
Wait, that's how it worked? That's it? It was a trick?
That demo was my introduction to the Amiga. Whenever I met an Amiga fan later in life I would picture that demo in my head.
Adult, programmer me knows all too well that demos are marketing material, aka "lies you can't be sued for", but really faking 3D rendering is pretty low even by marketing standards. I am extremely disappointed.
This kind of screws with the memories of some interactions I've had with old fans over the years.
At that time, computer graphics was often a choice between tricks or nothing.
You could say it's still true: a lot of graphics in AAA console/PC games could be called fake in the same way. Precomputed physics and such. That subsurface scattering? Fake. Soft shadows? Fake. Fluid dynamics? Fake.
Although the Amiga could have rendered that in real time too: just two bitplanes and a smallish area.
My memory of the era is that no one would have thought they were actually 3d rendering in real time: that was Just Not Possible.
I'll point out that once they finally, FINALLY made a memory card for the Commodore 64, that plugged into the megabyte-per-second DMA port, there was a similar bouncing ball demo done for the C-64. Purely by loading whole frames into the memory card and blasting them onscreen as needed, like a page flip.
Ok, let me help. The Amiga has the ability to have bigger screens than the monitor can show at once. It can also move those 'screens' in any direction, and therefore show different bits of them on the monitor. Think of it like moving a slide around on a microscope: you can't see the whole slide at once through the eyepiece, so you move it around.
The Amiga can also overlay these screens on top of one another, letting bits of the lower screens show through the transparent bits of the upper screens. For a real-world example, think of two clear plastic overhead projector sheets, one on top of the other. The top one is just the ball, and the bottom sheet is the grid. By moving the top sheet around, you create the effect that the ball is bouncing around the grid.
To emulate the ball rotating, the Amiga used one of its other graphical tricks: palette cycling. The ball is actually made up of many thin strips of colour. Think of these strips as Colour1 to Colour30; all of the colours are painted white except for a few evenly spaced red ones. To make the ball 'rotate', the colours assigned to Colour1 through Colour30 are shifted one place to the left (or right) a few times a second. This creates the illusion that the ball is rotating, when in fact it's not.
All of this is done via hardware routines in the custom graphics chip, and uses almost no CPU time at all. It may not have been done how people think it was done, but it was still an impressive demo of the hardware regardless.
Basically, it's all smoke and mirrors, and please ignore the man behind the curtain.
The demo looks nice, but it doesn't extend to other shapes or more complicated scenes very well.
To be honest, it makes me a little sad whenever people choose to use "smoke and mirrors" to create an effect, rather than do the technically correct thing (or admit the effect is impossible to achieve in general).
The program was written mainly in 'C' (some 830 lines of main program, with about 300 lines for the sound code). A small assembly language snippet provided the sine/cosine calculation code (about 100 lines, most of which contain the sine/cosine lookup table).
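The lookup-table approach it describes would look something like this sketch; the table size and fixed-point scale are my assumptions, not the demo's:

    /* Sketch: integer sine/cosine via a precomputed table, the classic
     * substitute for floating point on a 68000. */
    #include <stdint.h>

    #define STEPS 256   /* angle steps per full circle (assumed) */

    static int16_t sine_tab[STEPS];   /* filled from a precomputed table;
                                       * 1.0 represented as, say, 16384 */

    int16_t isin(unsigned a) { return sine_tab[a & (STEPS - 1)]; }
    int16_t icos(unsigned a) { return sine_tab[(a + STEPS / 4) & (STEPS - 1)]; }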
The setup for the animation and the background is quite clever and compact. The background is rendered first by drawing lines. Then the ball image (it's referred to as the "globe" in the code) is rendered, segment by segment, and for each segment, each facet of the ball is rendered as 8 strips. These are used for colour-cycling, giving the appearance that the ball is rotating.
The demo automatically adapts to PAL or NTSC, changing the aspect ratio for the background pattern and the ball. The sound effect of the ball hitting the wall correctly pans right and left as the ball moves around.
All setup and rendering operations are performed using operating system functions only.
Only the file system portion (dos.library) was written in BCPL (ported from TRIPOS in two or three weeks). The kernel (exec.library) was in assembly, and the GUI (intuition.library) was written in C.
Anything else?