Somebody definitely had the full source code back in 1995 when I tried my best to figure out who to contact.
While I succeeded in making contact with the right person, it quickly transpired that the full source code was subject to proof of having a valid Novell license. Needless to say, no such license was available at ESCOM at the time and an opportunity was lost, perhaps permanently.
That said, I would be mighty surprised if the full AT&T SVR4 source code which was the foundation for Amiga Unix has never been accidentally/intentionally leaked. Could be a fun summer project to rebuild Amiga Unix from scratch ;-)
The "Amiga ROM Kernel Reference" manuals were a set of books published by Addison-Wesley which detailed the Amiga operating system APIs, their data structures, purpose and architecture. Three editions were issued between 1985 and 1992. The complete set of the final edition comprised the "Amiga ROM Kernel Reference Manual: Libraries", "Amiga ROM Kernel Reference Manual: Devices" and "Amiga ROM Kernel Reference Manual: Includes & Autodocs". The "Amiga Hardware Reference Manual, 3rd edition" and the "Amiga user interface style guide" were published at the same time but did not cover the same ground as the "Amiga ROM Kernel Reference Manual" set.
What was always absent from the "Amiga ROM Kernel Reference Manual" set was good coverage of the AmigaDOS component of the Amiga operating system. While documentation was available as part of "The AmigaDOS Manual" (published by Bantam Books), it barely scratched the surface. The "Amiga ROM Kernel Reference: AmigaDOS" volume you quoted from was not written by the Amiga and Commodore developers, but quite recently by Amiga operating system software developer Thomas Richter. It is based upon earlier research conducted by Amiga software developer Ralph Babel and others, filling in the many blank spaces which "The AmigaDOS Manual" did not even address.
Note well that this volume is not to be mistaken for the "Amiga Hardware Reference Manual, 3rd edition".
The Amiga default file system was not quite that sophisticated. As used on floppy disks, it did well by offering more than one option to reconstruct the layout of the file system data structures. For example, the disk data blocks were chained, so that you could follow this chain and find every block belonging to the same file. But you could also follow the chain of the associated file list blocks and obtain the same information.
Which disk blocks were still available for allocation was tracked by the bitmap (never documented in the original AmigaDOS Manuals), a series of blocks in which each bit stood for an individual block which was either allocated or not. There is just one bitmap for the entire volume and the Amiga default file system did not care for redundancy. The bitmap does not even use block checksums because you could reconstruct it at any time by following the directory structures (unless the bitmap was corrupted and spread its troubles, which you would never suspect or know). This is what, as a by-product, the Disk-Validator accomplished in the Kickstart 1.x days.
The floppy disk version of the Amiga default file system made use of checksums for each of its data structures, with the exception of the bitmap. This made it slow going, but then you quickly learned of defects which the file system reported.
Fun fact: the Amiga default file system in the Kickstart 1.x days was particularly and likely needlessly slow during directory scanning. The metadata produced by the scanning API would include the number of data blocks which a file would consist of. The file system could have easily calculated that figure from the file size, but it did something else instead: It visited every single data block, counting their number one at a time. If you ever wondered why it took Workbench so incredibly long to read the contents of a volume or drawer until at last one single icon appeared, this is why. On the other hand, scanning a directory automatically verified that all of the file data in this directory was sound.
To the best of my knowledge, the Amiga default file system never paid any attention to the disk head position. The file system was designed for larger mass storage devices (up to 54 MBytes in the Kickstart 1.x version, with 512 bytes per block), not so much for floppy disks. The Amiga disk drive, however, could end up recalibrating the disk head position as needed, which could be mistaken for a return to a parking position, if you will.
The Amiga default file system would use write caching, with the lit floppy disk LED indicating that the cache had not yet been pushed to disk. The disk would keep on spinning for some three more seconds after the last write command had been executed. But if you used a third party floppy disk drive which had the LED indicator tied to the read head actuator instead of the spindle motor, you were likely to remove the disk when the buffer had not been flushed yet.
Thanks for the fascinating detail and trip down memory lane!
It's been a few decades since I used Amiga OS as a daily driver, so I must have been misremembering, or more likely, conflating my recollections with aspects of other disk operating systems which struck me as new or interesting around the same time.
Yes, Manx Software Systems lost out to the competition, with the last release being Aztec 'C' 5.0 (with patches) for the Amiga (they also sold the source code to their ANSI 'C' compliant runtime library). Version 5 is one of the compilers available for download from the web site you mentioned.
What's absent are the precursors to this release, i.e. Aztec 'C' 3.4a onwards. I for one would like to restore my own copy of Aztec 'C' 3.6a, which I bought back in 1988. I backed up the disks to image files in the 1990s, but the original disks already had read errors by then.
Aztec 'C' was a "classic" compiler which translated what you wrote into assembly language source code (with a peephole optimizer involved); this was then assembled into object code and linked. One of the rare features which no other vendor offered was a custom overlay manager which allowed Aztec 'C' compiled programs to load and unload sections of the program on demand. The default overlay model required you to plan in advance which parts of the program had to be in memory at any given time, and lacked the flexibility which Aztec 'C' offered.
Are you Olaf Barthel? Much respect! Your Amiga software is fantastic. Thanks for all the things you've written over the years.
The overlay system was indeed very unusual, Thomas Richter did a good job describing it in https://aminet.net/package/docs/misc/Overlay (and of course it's also covered in Ralph Babel's Amiga Guru Book). That's really interesting that Aztec supported it natively, I never knew that.
But it also seemed that, if you wanted to, newer linkers (e.g. blink, phxlnk/vlink) combined with appropriate pragmas in the C code, plus an example overlay manager from one of these sources, would let you build your own overlaid executable?
I can't say I ever did it myself - the nearest I got to seeing an overlayed executable was seeing Titanics Cruncher or Imploder use the overlay feature to decompress while loading, skipping the whole question of loading/unloading nodes.
The Aztec 'C' overlay manager code is something of a clever hack which also needs support from the linker to pull it off.
Unlike the original overlay manager (ovs.asm) provided by MetaComCo, Aztec 'C' provided two helper functions (segload() and freeseg()) which let you load and unload overlay nodes on demand, given the address of a function within the node. For example, if a node contained a function called palette(), you invoked segload(palette) and, provided sufficient memory was available, the node containing that function would be loaded into memory.
Both segload() and freeseg() cleverly rewrote the jump tables which led to the respective functions through the miracle of self-modifying code (fine in the age of the 68000-68030 but a nightmare on the 68040). The linker did its part by producing overlay information which "hacked" the dos.library/LoadSeg function behaviour, preventing it from ever unloading any of the overlay nodes because that was now the job of the freeseg() function.
Half of the magic of the overlay support happens within dos.library/LoadSeg, one quarter happens in the linker, 20% happens within the overlay manager code and 5% happen in the AmigaDOS manual's developer documentation on how to tell the linker to use overlays.
The original Electronic Arts creativity software (Deluxe Paint, Deluxe Video, Deluxe Music and, um, Deluxe Print) made use of overlays extensively because the original Amiga then featured only 256 KBytes of RAM. With the next year's release of Deluxe Paint 2, etc. the developers had already switched from Lattice 'C' to Aztec 'C' because it gave much better control over which parts of the program would have to be in memory at a time.
You can write an overlay manager in 'C' but it is bound to be far less elegant than the assembly language version. I am familiar with Ralph Babel's 'C' version which (of course) perfectly matches the behaviour of the original ovs.asm code. But I would be really scared to try it in production code.
Thankfully, the use of overlays fell out of favour as newer Amiga models either shipped with more RAM installed, or could be expanded more easily.
It is a complex, non-trivial operating system which has its merits in terms of design and prudent use of resources. All of this is well-documented and understandable. You can learn how everything works, top to bottom and back again, which is a feature modern operating systems tend to have lost over the years. I would also argue that it is a humane design, meant to be understood.
This is very hard to fix because the Workbench has to find every single icon file in a drawer, match it up against any drawers or files, then load the icon file and display it. There is no icon cache or, for that matter, no "these are the icons we last processed when we looked into this drawer" file.
Whenever you ask Workbench to show the contents of a drawer you are starting a full scan of that drawer's contents with Workbench picking up, one directory entry at a time, what it finds. Put another way, finding and displaying the icons is strongly I/O bound and limited by what the file system can deliver.
It has been like that since 1985/1986 and the fundamental architecture of Workbench has not changed since then. There is a large degree of freedom afforded by storing icons in individual files which impose no limitations on the size of the icons. You pay for that freedom by waiting for the icons to appear and the scanning finally finishing.
Fun fact, the original Workbench design in the Kickstart 1.x ROMs acknowledged that reading the directory entries of a drawer one at a time was too slow, especially on floppy disk. The solution to the problem was to create ".info" cache files which listed every entry which had an icon file associated with it. Workbench would read that file unless the parent drawer's "last changed" time stamp was more recent than the ".info" file's and thereby skip a lot of unnecessary directory scanning: it just had to read those icon files and be done with it. One small problem about that cache: the floppy disk file system scattered file data and metadata widely across the medium which as a by-product increased the effort of retrieving it.
Similar outcome with Ram Disk :). It's one thing not to display custom icons; it's another to delay everything because you query for custom icons sequentially instead of displaying temporary ones.
Falling back onto default icons is, arguably, just as tricky, if not more so.
Workbench cares about "real icons" because they are more likely to contain information relevant to the applications which created them (tool types, default tool, etc. are likely set to specific values). A default icon may be a convenient placeholder so that you may delete or rename the associated directory entry, but that may not be what matters to the user.
There is a side-effect in using default icons: they have to be placed in the drawer window so that they are visible and do not overlap with other icons. Workbench tries its best, but its internal data structures lean more towards memory usage efficiency than towards making non-overlapping icon placement efficient in terms of complexity and speed.
I would love to know about an algorithm which would render non-overlapping icon placement less complex and more efficient. The current implementation contributes its share to making directory processing slow. It scales poorly with the number of icons it has to check. This is why the "Show all files" view option is not particularly fast.
The "Disk Doctor", as shipped with AmigaOS versions 1.2-2.04 was both a crude and cruel tool.
Its primary purpose was to "restore" a damaged volume to a state which enabled read access, allowing you to copy the data to a different volume. At the time most users (myself included) viewed it as a repair tool which should have been able to restore a damaged volume to operational state again. You needed a second floppy disk drive to be able to copy the data to a different disk, which was a rare thing to have in the old days. So we settled for what should have been a "fix in place" repair operation, but in reality "Disk Doctor" never left a volume in a better state than it had been in before, even if the file system structures had been sound to begin with.
The reasons why the "Disk Doctor" was so bad at repairing anything are legion.
For example, if a track could not be read because of a flipped bit or physical damage, "Disk Doctor" would attempt to restore the entire track to a sane state by reformatting it (physical low-level initialization). The "Disk Doctor" uses the same buffer for reading and writing data, which means that when it reformats a track, it will (as a side-effect) write the data that was last read back to disk. That data was written to a different track, and if it happened to contain file system data structures, then "Disk Doctor" would later pick them up and try to make sense of them: it was basically "fuzzing" itself.
Part of the diagnostic operations performed prior to "repair" was to detect damaged or deleted file system structures, such as files or directories which no longer had a valid parent directory ("orphans"). The "Disk Doctor" would gather these and add them to the root directory. It did not check if there already were files or directories present in the root directory which shared the same names, which could have the effect of corrupting the root directory. You might be able to list the root directory and find several files with the same names stored in it, but accessing these would only go as far as the first entry in the directory entry list.
When trying to recover the "orphaned" directory entries the "Disk Doctor" made no attempt to verify that the files and directories it added to the root directory were structurally sound. You could end up with fragments of files which had been deleted ages ago and whose contents had been partly overwritten since. These broken file data structures could reference data and metadata blocks which in turn would break the disk validation process.
If you want to try (and I would recommend trying, since even this early version of the program is an amazing design, with much to learn from), please be aware of the following:
1) The source code is incomplete; you will need the original EA IFF 1986 (March) files to fill in the gaps. You will also need to find a replacement for the LFMULT() function.
2) The paths to the header files will need editing.
3) The original program was built using the Lattice 'C' compiler, and it uses a couple of library functions which are specific to that compiler, such as stscmp(), stcu_d(), setmem() and movmem() for which you will need to provide replacements.
4) You may want to replace the BootIT() function with a stub, which originally seems to have been part of the disk copy protection scheme.
This should get you started. Once you get the program running, watch out how you use it:
As was common in those days, string buffers are rather short (e.g. 20 characters for a file or drawer name, 30 characters for font names) and the code is oblivious to this limitation, easily leading to buffer overflows and crashes.
Also, the hardware acceleration which Deluxe Paint uses is not properly secured by calling OwnBlit()/WaitBlit()/DisownBlit() in the proper sequence at all times (some are correct, some are not, and the code shows that the developers struggled with mitigating the ugly side-effects of not getting it right). This will cause the image processing functions to glitch and corrupt memory. Apparently, this happened only rarely in 1986 because the Amiga system was "slow enough" so that writing to the Blitter registers while it was still running didn't always lead to things going awry.
ROM space is very tight for this release on account of the much larger mass storage drivers (SCSI, IDE) and the integrated OCS/ECS/AGA graphics.library.
Commodore was already pushing the limits for the 1994 Kickstart 3.1 in the Amiga 1200 ROM, and it only got worse. The Amiga 4000T ROM would no longer contain workbench.library either (in 1994) because the graphics.library and the SCSI driver took up so much room.
There is no room for workbench.library and icon.library in the 512 KB 3.1.4 ROM. Going back to a previous set of workbench.library/icon.library versions is not an option because these do not fit either.
1 MB ROMs were not a viable solution at this point. Only a select few models support them, and the point was to make the 3.1.4 update available to all desktop systems.
Besides, a 1 MB ROM would require retooling of the build process for the operating system, which we did not attempt. The last 1 MB ROMs were built in 1994 (for the CD32), and very little documentation survives on how this was done and how the ROM images need to be prepared.
We were already quite busy with the desktop systems and chose not to spend the time on research and QA work necessary for the 1 MB ROM build.
>There is no room for workbench.library and icon.library in the 512 KB 3.1.4 ROM.
Unfortunately. Understandable.
>Only a select few models support these, and the point was to make the 3.1.4 update available to all desktop systems.
Nobody is telling you to ditch the 512k rom. It's just that it sucks not to have workbench.library and icon.library when using a system that does maprom with 1mb support.
>The Amiga 4000T ROM would no longer contain workbench.library either (in 1994) because the graphics.library and the SCSI driver took up so much room.
And roms with workbench inside were proudly released recently. Meaning both there's interest, and that you understand this interest.
>We were already quite busy with the desktop systems and chose not to spend the time on research and QA work necessary for the 1 MB ROM build.
It's good if there was at least consideration.
Maybe try and release romable versions of workbench and icon library? The community will take over from there, with tools.
> Nobody is telling you to ditch the 512k rom. It's just it sucks not to have workbench.library and icon.library, when using a system that does maprom with 1mb support.
The next best thing is to load both the workbench.library and icon.library using the LoadModule command (which is part of the AmigaOS 3.1.4 update, if I remember correctly) and reboot your machine. Both libraries will then remain in memory as if they had been in the ROM. They will survive subsequent warm reboots, but they will consume RAM.
Loading the libraries in this manner can be handled by the Startup-Sequence, for example.
> Maybe try and release romable versions of workbench and icon library?
Both libraries are technically "fit" to go into ROM, if there were enough space available for them to fit ;-) They are "romable".
> As long as it's fast ram, it's pretty good for a next best thing. I wasn't aware.
The ROM changes which shipped with the AmigaOS 3.5/3.9 updates all used the same mechanism: operating system modules were loaded into RAM, survived a warm reset, and superseded the corresponding contents of the ROM image.
If I remember correctly, the same tool (LoadModule) had been used then.
Documentation for these technical aspects has always been somewhat lacking I'm afraid... Not everyone wants to trawl the Amiga forums for hints on what is possible and how.
This time, however, there is an official FAQ on the web site of the company which sells the product and further pointers to more information on Amiga forums which cover the AmigaOS 3.1.4 update.
> That's great news. I'm not sure where I heard they weren't. I'm hopeful tools to do that will pop up on Aminet :)