Crash course to Amiga assembly programming (reaktor.com)
182 points by nallerooth on May 11, 2017 | 52 comments



Awesome article OP, thanks for linking it here; I've been looking for something along these lines for the Amiga for a while, actually.

Tangentially related: if you haven't already, I'd suggest dabbling in development for these "retro" systems. Not only will they teach you about memory management, &c., but you'll also familiarize yourself with the architectures of decades past. Much like a Shakespearean scholar doesn't just read William's texts, but also attempts to understand the sociopolitical climate in which they were written; so, too, ought a proficient programmer see where she has come from, to integrate not just "best practices" but an optimism which guides her progress.

Or one could just read Stack Overflow. shrugs


BTW, there's a Stack Overflow for retro computing:

https://retrocomputing.stackexchange.com/


Check out this series of video tutorials by Photon of Scoopex :)

https://www.youtube.com/watch?v=p83QUZ1-P10&list=PLc3ltHgmii...


I'm a huge Amiga fan, but what is there to learn from it that couldn't be obtained by working at the lower levels of a modern system?


Devices are memory-mapped without going through many nested protocols (USB, PCIe, etc.). What drivers do is completely transparent. A library call is a simple subroutine jump into the real details, with no privilege barrier.

Much of the architecture's accessibility comes from it being a single address space, with no MMU interference to speak of (later models had MMUs, but they were only used for a few enhancements here & there at best).

Besides, 680x0 code is much more pleasant to deal with than x86. But that's the case for pretty much any non-x86 ISA. ;-)
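To make the first point concrete, here's a minimal sketch (mine, not from the article; COLOR00 is the background-colour register of the custom chip set): changing the display's background colour is literally one memory write, with no driver, ioctl or privilege transition in between.

    COLOR00 equ $DFF180             ; background colour register, memory-mapped

            move.w  #$0F00,COLOR00  ; set the background to bright red ($0RGB nibbles)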


Well, unlike modern machines, the system was made to be simple enough to understand while being as powerful as possible.

Modern computers, on the other hand, have to either up the complexity by quite a bit to keep the power (trying to rewrite an old and simplistic DOS-based Mandelbrot set generator for a modern system got me mostly battling my misunderstanding of SDL: putting bytes in A000 was way simpler), or sacrifice power by going back a generation or two...


Thanks! I'm hoping to find some time for this and learn enough to be able to create something Amiga-ish for our next hackday at work.


Nice to see articles on how to code from a modern host. The last time I coded Amiga assembly was on an actual Amiga 2000. Blitter, Copper, non-maskable interrupts... ahh, the memories.

As an aside, the VR community right now in 2017 is in the very early Amiga stage. There's Atari ST vs Amiga (Oculus vs Vive). Also, individual coders can have a widely played and adopted game/experience. The games are fun and not overly complex, and there are some great early-adopter communities (/r/Vive).


What did you use NMI for?

I've coded a lot of asm, blitter and copper on the Amiga but I don't recall ever using NMIs.


1) I think the debugger used it, 2) I had an Action Replay that used it, 3) it existed, 4) I had some off-computer hardware I was talking to.

Memory's a bit hazy!


Using the emulator's exogenous debugger is "cheating" ;)


It seems like every month or so on HN there's a new Amiga post, and this has resulted in me becoming slightly obsessed with the machine (even though I never had one as a kid since they weren't terribly popular in the US).

Learning the assembly sounds like it could be a fun weekend project, so thank you!


The beauty of old school 68k assembly is astounding compared to the unstructured, messy piece of garbage that is X86 assembly.

Intel should be ashamed of themselves for 20 years later still not having cleaned up their act.


The opcode map tells a different story, however. x86 looks very organised in octal:

http://i.imgur.com/xfeWv.png

The 68k's and a lot of the other Motorola CPUs' opcode layouts look far less organised to me; there's no common pattern among all the ALU ops, for example, and there seem to be plenty of undefined/unused gaps:

http://goldencrystal.free.fr/M68kOpcodes-v2.3.pdf


Let's assume the x86 octal layout is the pinnacle of enlightenment and the 68k instructions were assigned by rolling a 65536-sided die. So what? I don't write my assembly using DIP switches, I write my assembly using mnemonics.


> I don't write my assembly using DIP switches, I write my assembly using mnemonics.

In other words: it's not the x86 instruction set that's bad, but the common assembly languages (there are Intel and AT&T syntaxes) used to describe it.


I wrote an emulator for M68K nearly 25 years ago. I wrote a generator to produce the opcode dispatch table. It used regular expressions on bits (textual strings of 1's and 0's) to condense the generator's specification; I remember it being quite condensable that way.


I don't see how this is relevant to anything at all.

>there's no common pattern among all the ALU ops, for example, and there seems to be plenty of undefined/unused gaps:

Presumably there is a pattern and that is why the gaps exist.


The x86 kludginess is a result of decades of new features being layered in while maintaining backwards compatibility.


True, but the original 8086 was already ultra kludgy (segment registers etc.), even though it was NOT backwards compatible with its predecessor, the 8080; and every new x86 generation has seen more kludges added.

The 68000, which was essentially a contemporary of the 8086 (something like a 6-month difference in market availability, IIRC), was clean and orthogonal, and they kept it kludge-free at least through the 68040 (which was the last one I used, and which was contemporary with the super-ultra-kludgy 486).

I don't disagree with your statement, but I think it gives the impression that Intel did a reasonable job at any given time subject to compatibility constraints, and in my opinion that is not so.


While the 8086 could not directly execute 8080 code, it was designed to make translation of 8080 to 8086 code straightforward. This turned out to be a sound business decision for Intel.

I was less impressed than others with the orthogonality of the 68000 when it came out. I was used to the PDP-11 instruction set, which was genius in its orthogonality and simplicity.


While the 680x0 might have been nicer to program, it turns out that because of its very CISCy semantics, making a performant OoO implementation was impossible with the transistor budget of the '90s. Not so much with the much uglier but simpler x86.

Worse is better.


> While the 680x0 might have been nicer to program, because of its very CISCy semantics

It indeed was. I don't understand why Intel didn't at least try to take inspiration from the good bits here.

> turns out that making a performing OoO implementation was impossible with the transistor budget of the 90s. Not so much with the much uglier but simpler x86.

Yup. This killed the Amiga, and this was what forced Apple to move to PowerPC. Motorola just couldn't deliver fast enough chips at a reasonable cost.

No reason not to praise them for what they did do right though.


> Yup. This killed the Amiga

Business reasons (i.e. Commodore's colossal incompetence) aside, the fact that the screen image was represented as bitplanes (instead of a byte or bytes per pixel) was also a major problem. It made 3D games very computationally expensive, hence no Wolf3D, Doom, etc. on the Amiga.
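A rough sketch of the cost (my own illustration, not from the comment above; the row width assumes a 320-pixel-wide, 5-bitplane screen, and PLANE_SIZE is the assumed spacing between planes): plotting a single pixel means a read-modify-write in every plane, where a chunky display needs one byte write.

            ; plot one pixel at (d0,d1) with colour d2; a0 -> first bitplane
            mulu    #40,d1              ; 320 pixels / 8 = 40 bytes per row
            move.w  d0,d3
            lsr.w   #3,d3               ; byte containing the pixel
            add.w   d3,d1
            adda.l  d1,a0               ; a0 -> that byte in plane 0
            move.w  d0,d3
            not.w   d3
            and.w   #7,d3               ; bit number in the byte (bit 7 = leftmost pixel)
            moveq   #4,d4               ; 5 planes to touch
    .plane: lsr.w   #1,d2               ; next colour bit into the carry flag
            bcs.s   .set
            bclr    d3,(a0)             ; bit is 0: clear it in this plane
            bra.s   .next
    .set:   bset    d3,(a0)             ; bit is 1: set it in this plane
    .next:  adda.l  #PLANE_SIZE,a0      ; same byte, next plane
            dbf     d4,.plane
            ; on a chunky (byte-per-pixel) display the same pixel is one write:
            ;       move.b  d2,(a0)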


I'll add too that most of the software bypassed the APIs and ran at a low level. The Amiga would have had a difficult time updating its architecture without invalidating all that software.


> While the 680x0 might have been nicer to program, because of its very CISCy semantics

Convince me I'm wrong, but to my knowledge the 68k was more comfortable to program since:

* it simply had more general-purpose registers available (this "register shortage" was only reduced in x86-64)

* at that time, x86 under DOS was in particular used in real mode (which is not very comfortable to program for), though protected mode was theoretically available from the 286 on

* there are some common instructions that had/have to be used with specific registers, which made the x86 instruction set feel "rather unorthogonal" (though in my opinion it was better than its reputation). For example, the "MUL" instruction (unsigned multiplication) only accepts specific register(s) as destination:

http://x86.renejeschke.de/html/file_module_x86_id_210.html

This was IMHO much worse in real mode, since for example

- in/out were common instructions for writing DOS applications

- you could not address relative to the stack pointer sp in 16-bit x86 code (cf. https://reverseengineering.stackexchange.com/q/11442/12822), so you had to write function prologues and epilogues like

  push bp
  mov bp, sp


  mov sp, bp
  pop bp
(this is not a concern in 32-bit and 64-bit x86 code, since there you can address relative to esp/rsp).
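For contrast, two illustrative 68k lines for the points above — multiplication doesn't care which data registers you pick, and addressing relative to the stack pointer is ordinary:

    muls.w  d3,d5           ; d5 = d5 * d3, no fixed register pair required
    move.l  8(sp),d0        ; load a longword relative to the stack pointer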


I can't see them being anything but proud of that legacy! In fairness they tried with IA-64, but the world didn't want it. And perhaps both HP & Intel were half-hearted about it.


IA-64 was a terrible fit for anything other than carefully targeted code for very specific algorithms. Anything data-dependent ended up with poor code density. It was left to AMD to deliver the 64-bit instruction set that people actually wanted.


IA-64 was not a clean break; it was an enormous bet on compilers getting sufficiently smart by the time the chip came out. They didn't. In fact, today's compilers are still not smart enough to make IA-64 a reasonable architecture.


Modern X86 machine code is optimised as an intermediate language between compilers and the microcode the processor runs internally, subject to the constraint of backwards compatibility to all previous X86 machine code versions. Clean design is impossible due to the backwards compatibility constraint.


Indeed.

    mov esp, 0
"Move 0 into esp".

Seriously, X86? Who came up with that?


The same engineer who came up with

    dest := source;
and

    memcpy(dest, src, size);
as well as

    obj.set(0);
Numerous assembly languages have destinations on the left.

Some have load/store instructions where the memory is always on one side:

    load [r1], r3   ; --> direction
    store [r1], r3  ; <-- direction
It's a matter of assembler syntax; the "AT&T" syntax used by the GNU assembler and gcc for x86 has the destination on the right.


Read it as "esp = 0". "add eax, 3" is like "eax += 3" in C.

This is the Intel syntax and it is the most popular for doing x86 assembly.

The other syntax, the AT&T syntax, works the other way ("mov $0, %esp" puts 0 into esp).


I'm not sure AT&T syntax is much better.

  movl $1, %esp


What is wrong with that? Besides the fact that the canonical way to zero a register on x86 if flags are of no concern is

  xor reg,reg
(recommended way) or

  sub reg,reg


It's wrong if you try to read it as "move 0 into esp" because the source and destination are reversed in code from how it's expressed in English. The way to fix that is to use the other syntax, or change how you think of the English. If read like Lisp, then it's "set esp to 0" and 'mov' is just a poor mnemonic.


I read "mov a,b" as "Move a into b".


It is canonical in Intel syntax that the parameter order is always "destination, source" (in AT&T syntax it is "source,destination").


If you think that article is awesome, you should fasten your seatbelt and check out these video tutorials by Photon of Scoopex (demo group):

https://www.youtube.com/watch?v=p83QUZ1-P10&list=PLc3ltHgmii...


That looks awesome - thanks!


Arrrgh! Great article (and there goes my weekend) - but this article reaches my eyes 25 years too late. I would have killed for this information as a kid!


I actually got started coding assembly on the C64. I found the C64 much easier to code for. I keep telling my kids, "If I had YouTube tutorials when I was a kid, I would be ruling the world."


I saw this at the end of the article, was hit by a huge wave of nostalgia, and had to comment:

> Update 9.10.2015: exec.library base address is stored in memory address $4. The library base address itself is not $4.

I learnt 68k assembler for the Amiga in high school and made _exactly_ this mistake. After many "guru meditations" (the flashing red and black box) and much head-scratching, I finally realized the problem.

It was a big breakthrough in my early days as a programmer and taught me a solid lesson about indirection and pointers.
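For anyone following along, a minimal sketch of the difference (the OpenLibrary offset and names are from memory, so treat them as illustrative):

            move.l  $4.w,a6             ; correct: address 4 *contains* the ExecBase pointer
            lea     dosname(pc),a1      ; name of the library to open
            moveq   #0,d0               ; any version will do
            jsr     -552(a6)            ; exec.library/OpenLibrary(), base returned in d0

            ; the mistake amounts to   lea $4.w,a6   instead of   move.l $4.w,a6:
            ; a6 then points AT address 4 rather than at ExecBase, and every
            ; jsr through it lands in garbage -> Guru Meditation

    dosname:
            dc.b    "dos.library",0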


This was an interesting read, though I've never really delved into 68k assembler, even when I was playing with the Amiga.

In the early 1990s I purchased an Amiga 2000, then a couple of years later an Amiga 1200. A few years after that, Commodore "died" and I moved to a 486 and Windows (though I had used a DOS-based laptop prior to that). A few years later I moved to Linux.

I still have my Amigas, and my 1080S - but I have no idea if any of it still works; they haven't been booted in over 20 years (though I have kept them stored well). I've thought about getting them running again; maybe someday (I've got more actual retro hardware than I have time and space for, honestly).


Great Article!

The Amiga community is 100% alive and well, with new software and hardware being developed all the time. If you were a fan of the Amiga then now is a great time to jump back in and see the fun stuff that the community is doing.


I'm tempted to jump back in. Back in the day programming involved books and a lot of guess work. Now I can google the hell out of any problem.


A bit off topic:

I'm looking to get an Amiga mainly to run Octamed and get that Paula chip sound. Can someone suggest a model to buy?


Join the CommodoreAmiga group on Facebook and ask there. We're a great bunch of guys (and gals).

Edit: Realised that Facebook search is rubbish, this is the actual group URL: https://www.facebook.com/groups/CommodoreAmiga/



Paula is exactly the same across the whole Amiga range AFAIK, so no difference when it comes to sound.

Not sure how many resources Octamed requires, though.

An A1200 with a CF hard disk + ACA board seems like a safe bet.


There are differences in the audio low-pass filter between different Amiga models. It can't ever be entirely disabled. I think the Amiga 1200's filter affects the sound the least.

IIRC, AGA chipset (Amiga 1200, 4000) is also capable of up to ~56 kHz sample rate.

OCS/ECS is just up to 28 kHz.


IIRC (that was a long time ago) I could use only 8 tracks on my A600 because I only had a meg of memory. A friend had extended his A600 memory to 2MB and I think he could get 6 tracks.



