Easy 6502 - Learn the 6502 Assembly Language (skilldrick.github.com)
179 points by n8agrin on July 8, 2012 | 78 comments



It's fun to see people being encouraged to relive my youth. And 6502 assembly is indeed a nice, compact thing to learn.

Though I think the most valuable thing I learned from studying assembly was actually about higher-level languages: How function calls work, and the mechanism of stack frames. This discussion doesn't reach quite that far, and actually I'm not sure how one would work such things into a discussion of 6502 assembly – I learned about them in the context of 680x0 assembly. Back in my 6502 days, I never even encountered a compiler, so I never had the chance to disassemble a C or Pascal program. (To this day I can't name a 6502-based C compiler, though I'm sure they existed someplace. I knew about Pascal compilers, but I never got my hands on one; back in the 1980s compilers actually cost money.)


Still there:

http://www.cc65.org/

What I learned from the 6502 was the use of pointers through indirect addressing; later, when I learned C, pointers came very easily.
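
For anyone who hasn't seen it, here's a minimal sketch of the idea (PTR is a hypothetical pair of free zero-page bytes; syntax varies between assemblers): indirect indexed addressing dereferences a 16-bit pointer held in zero page, much like *(ptr + y) in C.

        LDA #$00        ; set up a 16-bit pointer to $0400 in zero page
        STA PTR         ; low byte of the pointer
        LDA #$04
        STA PTR+1       ; high byte of the pointer
        LDY #$05
        LDA (PTR),Y     ; load the byte at $0400 + Y, i.e. *(ptr + 5) in C terms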


I learned 6502 on my Atari 800. The closest thing to C that I used was a language called Action! (https://en.wikipedia.org/wiki/Action_programming_language)


> This discussion doesn't reach quite that far, and actually I'm not sure how one would work such things into a discussion of 6502 assembly

The 6502 supports function calls easily: JSR/PHA/PLA/RTS

(Better even than some modern microcontrollers…)
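
For example, a minimal sketch of a call with the argument and result passed in the accumulator (DOUBLE and RESULT are hypothetical labels; PHA/PLA would let the subroutine preserve registers on the same stack):

        LDA #$15        ; argument passed in the accumulator
        JSR DOUBLE      ; JSR pushes the return address and jumps
        STA RESULT      ; the result comes back in A ($2A here)
        BRK

DOUBLE  ASL A           ; double the value
        RTS             ; RTS pops the return address and returns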


... but limited to 256 bytes.

The 6502 stack is pretty much a toy for a language like C or Pascal, and you definitely need to use a non-hardware stack for actual variables and stuff, probably even return addresses.

(I never ran out of 6502 stack programming in assembly, and I wrote a LOT of it, back in the day).
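
A minimal sketch of such a software stack, with X as the data stack pointer indexing a zero-page area (SSTACK is a hypothetical label for some free zero-page bytes; the hardware stack at $0100-$01FF is left for return addresses):

  PUSH  STA SSTACK,X    ; push A onto the software stack
        INX             ; X is assumed to start at 0
        RTS

  POP   DEX             ; pop the top value back into A
        LDA SSTACK,X
        RTS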


> back in the 1980s compilers actually cost money

They still do, only in Linux/BSD land they do not.

After all, people have to bring bread back home.


Check out http://msdn.microsoft.com/en-us/dd299405.aspx. Gobs of stuff: compilers, linkers, headers, free for the download. So the statement "They still do, only in Linux/BSD land they do not" might not be true.

The full version of Visual Studio is not free, but you can easily build (compile, link, run) everything for zero dollars. (Not even a picodollar.)


Try to get the free MFC framework, with the free 64 bit C++ compiler and the free static code analysis tools.


According to the release notes there, you get compilers that produce 32 and 64 bit executables.


You can find free compilers for (almost) everything on every OS. What do you mean by Linux/BSD land here?

And bringing bread home by making compilers; can you point out a commercial compiler that 'brings bread home', besides the Visual Studio compilers (which, actually, don't bring in that bread, since you can download the compilers for free)? I'm curious, as I thought paid compilers are currently a niche and don't make the companies selling them enough to pay for more than maybe one person working on them full time, if that. Maybe Intel makes enough from compilers alone to actually call it a business?


I mean that getting compilers with 100% full feature support for free is something that only happens in open source land.

All commercial UNIX vendors sell their compilers. Even in Solaris's case, the EULA does not allow you to sell software made with their "free" compiler.

Intel compilers are not free for Mac OS X and Windows.

All the SDKs for game development from the console owners are not free.

The express editions of the Visual Studio compilers are limited in the optimizations they offer, the libraries included, and the supported processor targets.

The compilers from the Portland Group for HPC.

The compilers from CodePlay for the game industry.

Even the compilers from Apple could be said not to be free; after all, you're paying indirectly for them when you buy a Mac developer's license.


The xcode download that I use to build lots of stuff cost $0.


And how much did you pay for your Mac when compared with a PC with the same technical specifications?

Plus, last time I checked you need to pay to be able to have proper access to all developer information, https://developer.apple.com/programs/mac/.

Please read what I wrote properly: "paying indirectly for them". Do you know what _indirectly_ means?


> Plus, last time I checked you need to pay to be able to have proper access to all developer information

You checked wrong. Developer docs for production versions of Mac OS and iOS are all available for free with Xcode. You only need to pay for membership to the developer program, which gets you access to prerelease OSes (and related documentation), and lets you distribute your applications through the App Stores.


The Mac cost perhaps a grand five years ago and is still running fine, so that's about $200 a year. Meanwhile, cheaper laptops I have used die a quicker death, as they are not built as well, apparently. So my guess is that they are roughly equivalent.


The Portland Group's (PGI) main business seems to be HPC compilers.

CodePlay make compilers for game developers.

There are companies that have made a business out of selling fully-integrated build & debug toolchains for embedded devices. There is usually some amount of customisation required for each type of device, and there's a lot of value in not having to do this yourself.

So yes, there are still companies making compilers as their main, paid product, but they tend to target specialised markets.


Thanks! I did see some AAA game compilers and tools before, indeed. But I remember that being very niche: tight-knit, small teams. That brings home the bacon, but not compared to making higher-level game tools (Unity, for instance); unless you are an AAA game dev with very specific needs, you wouldn't even bother looking for those tools because compilers are such a commodity, while high-level tools like Unity and Unreal (with all their add-ons) are not commodities and thus make millions of dollars.

But yes, you are right; embedded, HPC and gaming are good examples. Thanks for pointing that out.


In 2011, 3% of Intel's revenue came from "software and services", up from 1% in 2010 [1]. They also acquired McAfee in late 2010. So it's probably safe to assume that compilers generate somewhat less than 1% of Intel's revenue, perhaps significantly less. But even if only 1% of that pre-McAfee 1% figure is compilers, that's still over $5 million (over $50 billion * .01 * .01), so I suspect Intel's compiler business profitably employs a number of full-time developers.

Microsoft's combined "servers and tools" operating segment represented nearly a quarter of its revenue and over 20% of its operating income last quarter [2], and while it's not immediately clear how much of this comes from the "tools" portion, I don't imagine it's anywhere near zero.

[1] http://www.intc.com/annuals.cfm [2] http://www.microsoft.com/investor/EarningsAndFinancials/Fina...


I recommend also the 6809 as an example of another very well-designed processor from the same era. Most notably, it has post-increment and pre-decrement addressing, and basic 16-bit arithmetic support.

http://www.textfiles.com/programming/CARDS/6809

http://en.wikipedia.org/wiki/Motorola_6809

The modern Atmel AVR is in many ways a spiritual successor to the 6502 and 6809:

http://en.wikipedia.org/wiki/Atmel_AVR_instruction_set

(That said, I cut my teeth on the 6502 and it will always have a spot in my heart!)


I, like you, started on the 6502 and then learned the 6809 (well, the book was about the 6809, but the processor was actually the Hitachi 6309). They were fun processors to program on, and I guess if push came to shove I would choose the 6809, but I had the most fun with the 6502 (actual screen vs. serial port).

I later learned IBM 370 assembler (why not have the CompSci assembler class on a mainframe?) and 8086/88. I hated both. They felt wrong compared to the 6502/6809. Although 8086 assembler did help my grade in the graphics class.

Assembler is great to learn because it teaches you what the final form of your program is.


About two years ago I picked up 6809 assembly while writing an emulator for it, to assist in reversing a device built around a 6809. Amazingly simple processor/architecture.


Kind of reminds one of the PDP-11 architecture.


We were taught on the 8085. We had these cute little kits that were hinged wooden boxes that you opened up. They had all the circuitry exposed, which was awesome, a 4- or 6-digit 8-segment LED display, and a numeric keypad. Sometimes the kits would go on the fritz, and the most common solution was to press all the chips into their sockets. Did I mention it was awesome?

My proudest/most exciting moment with them was at the end of the semester, when I implemented a long division program. It would take two numbers and output the decimal fraction to an arbitrary number of digits using long division.

Aaah!


> So, why would you want to learn 6502? It’s a dead language isn’t it? Well, yeah, but so’s Latin

6502 isn't dead, it has even been relaunched:

http://www.h-online.com/open/news/item/Relaunched-the-6502-m...

The 6502 was my first processor, and it was real fun to learn assembler. I also used figForth and learned from the ground up how it was implemented. Amazing days!

Today Assembler isn't fun anymore.. I don't even know the exact name of my current quad core :)

Could it be that FPGA programming will be the next golden age of "Assembler"? The content industry works hard to lock down computer hardware with DRM. So if we want to keep our freedom, then we'll have no other choice than to build our future computers ourselves - again.


I don't know if "relaunched" is the right term, since WDC has been producing variations on the 6502 design ever since Bill Mensch (one of the original designers) left MOS (he had joint rights to the design).

From WDC's homepage:

"Annual volumes in the hundreds (100’s) of millions of units keep adding in a significant way to the estimated shipped volumes of five (5) to ten (10) billion units. "


I question the idea that homebrew CPUs on FPGAs are the future of desktop computing. Core IP availability aside, FPGAs are very expensive.

Also, FPGAs are typically configured with an HDL, which can be likened more to C than assembler.


> FPGAs are very expensive.

Yes, currently, but price depends on demand. On the other hand, if FPGAs remain expensive, then assembler could become even more important, to fit as much code as possible into an FPGA.

> can be likened more to C than assembler.

That's why I put the word "Assembler" in quotes. BTW, VHDL is much more like Ada than C, while Verilog is different from both.


FPGAs also consume more power for a given clock rate, and their maximum frequency is lower than that of a dedicated chip. It would be unusual to dedicate an FPGA to implementing a CPU.

It might change in the future, though? I remember some years ago the possibility of nanotube-based chips was being touted, offering FPGA-like programmability and speeds greater than a dedicated chip's. I haven't heard of it since.


I'm not sure about that; you can get small FPGA kits for less than $100 now, and the software is free and multiplatform (but not open). The DE0-nano dev kit is an example; it's about $80. I imagine the availability of things like the Raspberry Pi will bring prices down even further.

Leaving aside which programming language it's most similar to, HDL is pretty interesting to learn in its own right. (Although I disagree that it's like C. It's more like a declarative language for circuits, though it's true that you can stick imperative-like code in there. But treating it like C is a recipe for problems.)


Sure, but you can get an MSP430 dev kit (stripped down IDE for C; USB dev board with a programmer and a few buttons and LEDs; a couple devices in DIP packages) from TI for $4.30 plus shipping.

Also, most microcontroller companies provide all the specs you need to roll your own end-to-end software for the device. Aside from specifying the machine language (so you can write your own compiler), they also have app notes for programming the onboard flash via JTAG or another interface. With programmable logic, it seems like the only parts that don't require the vendor's own programmer and synthesis software are legacy SPLDs like 22v10s.


> you can get small FPGA kits for less than $100 now

I know, but a DE0-nano has only 20k LE's, and runs at 50MHz. You'd probably be limited to simulations of an Intel 8008 or thereabouts.


It is also very hard to program in Assembly nowadays, because there is no longer a 1:1 mapping between the instructions and what the processor does.


This is an excellent introduction -- thanks for posting it :) I teach high school students programming as part of a summer camp and this is a perfect resource for students who are already further along and want another challenge. The introduction to assembly will help them in their later careers and 6502 assembly is likely less of a shock than any recent ASM.

With the 32x32 display they could even do some simple graphics to stretch themselves :)


Besides simple graphics, the original Prince of Persia was developed in 6502 assembly - the source has even been released recently: http://jordanmechner.com/blog/2012/03/prince-of-persia-sourc...


Very elegant and well done! I recently went through the process of learning 6502 C64 ASM (an unfulfilled childhood dream), by writing a simple Tetris clone:

https://github.com/cjauvin/tetris-c64

The basic concepts came very easily (with a quick skim through Jim Butterfield's ML book), but I struggled a bit more with the C64 hardware intricacies: zero page addressing, IRQs, double-buffering to avoid flickers, etc. This was a very interesting and enlightening experiment (in terms of the perspective it sheds on modern-day tools and languages), and I plan to write about it soon.


It's pretty unusual to use double-buffering on the C64 - it wastes memory. You'll find a lot of code that times screen updates to the raster interrupt, either doing updates during the vertical blank or starting updates after the portion of the screen in question has been drawn. Of course that assumes you can do the updates in one frame, so you will find exceptions.
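
For context, a raster interrupt is the usual mechanism, but the simplest sketch of the timing idea is just polling the VIC-II raster register (the target line and the UPDATE routine are hypothetical, and this ignores the 9th raster bit in $D011):

        LDA #$F8        ; wait for a raster line below the visible area
WAIT    CMP $D012       ; current raster line (low 8 bits)
        BNE WAIT
        JSR UPDATE      ; redraw while the beam is out of the way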


In fact, I don't know if what I did really corresponds to "real" double-buffering: at any point, the next move is created in an alternate buffer, which, when ready, is then switched as the visible portion of the video RAM (at vblank, using the raster interrupt). It works ok, but it's also quite possible that this mechanism might be a bit overkill.


I'm attempting to finish writing my unpublished BBC micro game 'Zen' written in 1988...

http://www.youtube.com/watch?v=Gg8Dtod83qA

It's so much fun developing in 6502 assembler again!!!


The cool thing is that, as a BBC game, it will sell only slightly fewer copies today than it would have in 1988. You've lost nothing by waiting :)

Game looks great, btw!


So much fun, thank you. Some of my earliest programming was on a C64 with the Merlin Assembler -- this brought me back _and_ brought a big grin to my face.


This brought back my college days! I had a job coding Z80 ASM to control electronics and RS232 data links... such fun :)

People would give you such funny looks when they saw you staring into a terminal of hex and pointing out, "ahh, that 3E CF needs to be 3E D3" :)

The best thing is that I ended up knowing all the opcodes and most of the instruction timings off by heart (which made interfacing to those 16 x 2 line LCD modules easier).

NMI anyone?


Both the Apple II and the Commodore PET used the 6502.

I'm amazed that these were not mentioned in the original article.

I wrote a bunch of 6502 code for Micro Technology Unlimited's music and graphics boards for the PET, and even wrote my own assembler in PET Basic. Fun Times!


I remember the 6502 from the days the Apple II hit the market. It was huge back then. Strangely, the article lists a number of 6502 systems down to Futurama's Bender but doesn't mention Apple II.


If anyone would like to use this to write emulation instructions for Prince-of-Persia II [0], I would be happy to merge them into the github repo [0].

[0] https://github.com/jmechner/Prince-of-Persia-Apple-II


This brings back memories of painful Friday morning lectures in yr1 of uni! XD


As a humorous aside, the robot Bender from Futurama allegedly runs on a 6502.


Educational hacks are the best hacks. Bonus points for historical value!


This is cool stuff. A few noob questions:

I notice that the 0x00-ff memory space has no 0xff byte and that 0xfe seems to change a lot, but it's not explained. What's going on here? What's in 0xfe and why is 0xff not shown?

I noticed that there's no multiplication operator. Does that mean that multiplication was performed using looping addition? What about division, floating-point, et al?


* Not sure I follow this question. The zero page (address 0x00-FF) can store any values 0x00-FF.

* This is an old processor, so it doesn't support fancy stuff like multiplication and division and floating-point. Generally you just try to avoid multiplication whenever possible, or keep it to powers of 2, since that can be done with shift operations. You can do multiplication using looped addition, but if you were to implement it you'd want to use a constant-time binary multiplication algorithm instead.
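
To make the second point concrete, here's a rough sketch of the classic 8-bit shift-and-add multiply (NUM1, NUM2, RESLO, RESHI are hypothetical zero-page labels; NUM2 is destroyed, and the product of two 8-bit values always fits in 16 bits):

        LDA #$00        ; clear the 16-bit result
        STA RESLO
        STA RESHI
        LDX #$08        ; 8 multiplier bits to examine
LOOP    LSR NUM2        ; shift the next multiplier bit into carry
        BCC SKIP        ; bit clear: no add this round
        LDA RESHI       ; bit set: add the multiplicand to the high byte
        CLC
        ADC NUM1
        STA RESHI
SKIP    ROR RESHI       ; shift the result right (any ADC carry rotates into bit 7)
        ROR RESLO
        DEX
        BNE LOOP
        RTS             ; product is in RESHI:RESLO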


If you open the Monitor in the first example and step through it in the debugger, the byte at address 0xFE changes at every step. This doesn't seem to be documented and it's not clear to me why it's doing that.

Moreover the content of 0xFE keeps changing even after execution stops (if you keep hitting the "step" button).

0xFF isn't even listed, but a load from that address seems to work (with value 0x00). [ETA: actually, that seems to be a quirk of the Monitor's initial config. If you set "length" to 100 instead of ff, you can see byte 0xFF in the monitor. Nothing wrong or weird there.]


Examining it a bit more, the value in 0xFE can be ignored. It is just a random value. I don't know why.

There is a function called 'setRandomByte' that is called on every debugger step, but it has no comments so it may just be leftover debugging logic.


Ahh, I see what you mean.

0xFF doesn't show up because the Monitor's length is set to 0xFF, which is 255. If you set the length to 0x100 it properly shows the first 256 entries.

As for the value at 0xFE, I don't know.


Hey, thanks. Yeah, I haven't yet fully explained all of the simulator.

0xfe is a random number - a new random byte is generated on every instruction.

0xff contains the ASCII code of the last key pressed.

My simulator is adapted from the one at http://6502asm.com/beta/index.html - that has a help screen with some more info. I mostly cleaned up the (atrocious) JavaScript and added a memory monitor and disassembler (and implemented a few instructions they'd left out).


LSR (logical shift right) is an unsigned division by 2. ASL (arithmetic shift left) is an unsigned multiplication by 2.
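
Combining them gets you small constant multipliers too; e.g. a common sketch for multiplying the accumulator by ten, valid while the result fits in one byte (TMP is a hypothetical zero-page temporary):

        ASL A           ; A = x * 2
        STA TMP         ; remember x * 2
        ASL A           ; A = x * 4
        ASL A           ; A = x * 8
        CLC
        ADC TMP         ; A = x * 8 + x * 2 = x * 10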


For an assembly language designed for humans, they sure did like their three-letter TLAs.

How hard would it have been to have STORE_ADDRESS instead of STA?

To really learn an assembly language you really need to write code, and to write code you really need a project/purpose.

Now, the 6502 is one of the oldest assembly languages still in active use, as these chips still do very well in the microcontroller sector. That said, ARM is also in that area, and ARM-compatible kit is a lot cheaper to obtain. ARM was born out of frustrations/limitations with the original 6502 CPU, and in that sense may be a better, more practical use of your educational time.

That all said - every programmer of any language should at least learn/play with one assembly language sometime in their life, maybe one or two. I remember after my ZX81 I opted for the Oric-1 over the Spectrum just because it had a different CPU (6502), and after that I opted for the Atari ST (68000) and an Amstrad PC (x86).

Also, inventing your own CPU/assembler is not as hard and intimidating as a lot of people think. All are very rewarding and a good use of your time on a rainy day.


Out of interest, how old are you?

I remember writing programs that wouldn't have fitted in memory if we'd used things like "STORE_ADDRESS" instead of "STA". The assembler would have had to have been more complex in order to process instructions that were of variable length, instead of the opcodes being a predictable 3 letters.

I've written assembler by hand - sheets and sheets of it - because there wasn't a decent editor on the machine I was writing for. These were the days when you were writing code for the machine, and not for the people who would maintain it afterwards. The structure of the code had to be clear, and the comments were as much for yourself as anyone else, but the opcode names were a complete non-consideration. If you didn't know them, you couldn't program anyway.


45, and the whole point I was making is:

1) Learning something new, you might as well have something easier to learn.

2) Just because you historically had to use abbreviations is not a handicap you have to impose upon yourself these days, especially if you're learning it from scratch and for educational purposes.

3) Sure, you can use short TLAs instead of a longer version, but for ease of reading and learning, something a lot clearer for the human, without the over-compromise you once had to make for computer memory, beats an artificial limitation.

4) It's really not hard to run a substitution script to convert long to short and vice versa - sed anyone!

We have all done assembly by hand and hand-converted it; an assembler was a luxury for some back then on their small-memory machines, and even then you were not limited to the official short versions of the TLAs.

The thing with hand-converting is that you write something unmaintainable on many levels, but as you said, you bent towards those limitations as you didn't have a lot of choice.

So if you wanted, on say a Z80, to write RETURN, or the official mnemonic RET, or go real hardcore and just write C9 (using this example as my personal memory seems to have kept that one alive), then it was your choice. When you went to code it, it was C9h, so converting RETURN or RET was something you did.

At least the only opcode that was standard across CPUs was NOP, or "NO OPERATION", aka do nothing, or 00h or 0 or 00000000; that was kinda portable and used by many for funky double-entry code padding etc. Though that was due to memory limitations, and scary stuff to maintain, yet fun and rewarding to code. Apple's early OS used that approach a lot due to memory limitations.

Heck, if memory was such a limitation back then, explain COBOL, because I can't. Sadly I still remember that as well :|.


Well, there's no point in getting out of shape about it, and I've got lots of other more important things to do than trying to convince you of this, but it seems to me that learning 6502 is already pretty useless as compared with learning something like ARM7 or StrongARM. Even there the assemblers still use TLAs for the operations. I honestly feel that it just seems right to maintain the contextual relevance and, in some sense, the culture of assembler.

If you want readability then by all means use Python or Go or Ruby or something like that. I don't know anyone who writes in assembly who doesn't use the TLAs (or similarly concise designations) for the operations, no matter what processor they're using. It feels to me like there is something natural about it.

But even besides that, I personally find that abbreviations make it easier to think in whatever subject I'm working on. When I write in assembler I think "MOV" - I don't think "move". Jargon in any field is there to make communication faster and more effective, and linguistics says that common expressions get shorter over time.

So I think you're trying to improve the wrong thing, and while to some it may seem obvious that spelling out operations more verbosely and making them more obvious will help people learn, I'm not convinced. Sometimes concise, precise and semi-opaque terms can actually help learners.


I thought the original post was wrong to choose 6502. "So, it was designed to be written by humans. More modern assembly languages are meant to written by compilers, so let’s leave it to them. Plus, 6502 is fun. Nobody ever called x86 fun." I assume he's never written ARM. It was designed to be written by hand, is delightful to write, much more orthogonal than 6502, and still relevant today.


STA doesn't mean "store address" anyway; what would your long mnemonics for store X or store Y be?

Having learned 6510 asm when I was younger, mov always seemed backwards and magical to me.


I'm not convinced. There's so little to your average assembly language that making the mnemonics longer won't help. With 6502, it would be totally pointless. You're going to be spending, what, 1 week learning this stuff, and then the next N years using it. You'll get used to it quickly enough. It makes more sense to optimise for experts, than it does for people who don't yet know what they're doing.

(And anyway, where do you stop? If you can't remember that STA means store accumulator and LDA means load accumulator, how will you remember what (&70),Y means, or what flags they use, or how many cycles they take? You'll end up with something like SUBTRACT FROM ACCUMULATOR MEMORY IN ADDRESS STORED IN &70 WITH Y REGISTER AND INVERTED CARRY FLAG WITH RESULT AFFECTING N AND Z AND C CLEARED IF BORROW AND V SET IF OVERFLOW TAKING 6 CYCLES PLUS PAGE BOUNDARY CROSSING PENALTY ;) - and even that probably isn't clear enough, because how will the poor reader know what the page boundary crossing penalty is if they don't know already?)

If you have something like x86's PUNPCKHBW, or POWER's rlinmw, and try to describe what they do clearly, you'll end up in even more of a mess. A one-volume instruction reference manual, sorted by opcode, with diagrams and pseudocode, would be far more useful.


As if to do almost the exact opposite of backing up my point, the PPC opcode I was thinking of is in fact `rlwimi' - Rotate Left Word Immediate then Mask Insert. I was thinking more like, Rotate Left and Insert Mask Word. Oh well.

So maybe longer opcodes would help, but I'd have got it wrong in either event - and I'd still need to have double checked the docs, to remind myself, again, just what the hell it does exactly.


IIRC, on 6502, 0x00 is BRK, not NOP. http://www.masswerk.at/6502/6502_instruction_set.html agrees with that.


I forgot for every rule there is always an exception - cheers.


Many exceptions: http://en.wikipedia.org/wiki/NOP :-)


STA is `store accumulator', not address. Would you have liked to type STORE_ACCUMULATOR, with the high frequency that the instruction occurs, in an editor that had no auto-complete? Even reading it is slower than STA. And having them be all three letters meant assemblers could pack the text into memory in fixed-length records; every byte mattered.

Here's the table of ARM mnemonics in the source to Acorn's BBC BASIC for the ARM: https://www.riscosopen.org/viewer/view/castle/RiscOS/Sources... For the 6502, space was tight enough that it was packed to less than three bytes per mnemonic.

ARM was born from Acorn's frustration with 16-bit CPUs that they considered as successors to the 6502, e.g. 68000, not the 6502.


You're right: https://sites.google.com/site/6502asembly/6502-instruction-s...

"ARM was born from Acorn's frustration with 16-bit CPUs that they considered as successors to the 6502, e.g. 68000, not the 6502" - yes and no; we're both right. Acorn looked at a 16-bit replacement for the 6502 and found that options like the 68000 didn't have the performance they wanted. They went to America and checked out the work on the replacement for the 6502, and concluded that they could just make their own CPU, and so they did.

Had the replacement for the 6502 not been a one-man team, then history would be different now.


>  And having them be all three letters meant assemblers could pack the text into memory in fixed-length records; every byte mattered.

As much as I agree with using the mnemonics, this is a bogus argument. Even C64 BASIC tokenized stuff before storing it, because there's no reason to store the name at all. In fact, if you prefer, the 6502 instruction set is small enough to represent each mnemonic in the assembler's editor as a single-byte index into an array. Or you could just use the opcode itself.


BBC BASIC tokenised BASIC keywords before the line was stored in memory, e.g. PRINT was represented by a single byte. Some tokens needed more than one byte, especially in later versions.

But I'm talking about assembler here. BBC BASIC has a built-in assembler, e.g. 6502, Z80, or ARM, depending on the CPU it's running on. The assembler source in the BASIC program is not tokenised on input but stored as plain text. Instead, when those lines of BASIC, since that's what these embedded lines of assembler, wrapped in [...], are, get run the machine code is assembled at the address in BASIC's P% integer register variable and P% is moved on. At that point of execution BASIC must hunt for the mnemonic, stored in the "tokenised" BASIC line as plain text, in its table; the table I reference in the case of ARM BASIC. That table can be laid out as it is because each mnemonic is three characters long, e.g. mov, ldr, stm, and bic.

You're mixing up tokenising BASIC, which BBC BASIC did, with the embedded ARM assembler, which it didn't tokenise, and then adding in an "assembler's editor", and there wasn't one of those. Just lines of BASIC program, 10, 20, ..., some of which switched to assembler with a [.


I'm not talking about the BBC specifically at all - the specific system is irrelevant - and so I'm not "mixing" anything. Many 6502-based systems did have assembler editors; many more had "monitors" that would assemble line by line on the fly, if not built in then as common extensions.

(In fact I did most of my 6502 assembly programming in a monitor, with a notepad to keep track of where various functions started; it was only a couple of years after I started doing assembly that I got a proper macro assembler for my C64, and even then, exactly because "every byte mattered", it was not at all uncommon to still stick to a monitor on a cartridge rather than have a macro assembler "waste" precious memory on the assembler and source text.)

What I'm talking about is the general idea that longer keywords would somehow prevent an assembler from using fixed-length records to represent lines, though reading it in the context of what you wrote above, I see your reference to fixed-length records referred to the table used for assembling, not to the source lines, in which case it makes slightly more sense to me.

Though not fully, as it'd be both faster and take less code to use custom search code to match the input against the available opcodes than to insist on a fixed-length record - I did a quick check, and it should be doable to save at least a dozen or two bytes and reduce the average search time significantly by range-checking and using a lookup table for the first character. It might've been convenient to write the code with fixed records, but it's far from optimal in terms of either performance or code size, so it doesn't seem like code size bothered them that much in this case.

The "every byte mattered" applies to source too on these systems, and I actually find it really curious that they went to the trouble of supporting inline assembly but then didn't apply that optimisation to the source, given the limited memory and performance of these systems. Especially since the opcode itself makes a very obvious token candidate, potentially reducing the "assembly" step itself to mostly copying data and applying address fixups.


STORE_ACCUMULATOR may be easier to understand than STA, if you're seeing it for the first time, but it's not easier to use once you know what it means.

Incidentally, I wrote a simple 6502 assembler back in the days, and I took the opportunity to invent my own notation, just for fun. I became quite adept at reading and writing it, and standard notation felt very verbose after a while.

Here is, in standard notation, a program that copies 256 bytes from ORIGIN to DEST.

        LDY #$00
  LOOP  LDA ORIGIN,Y
        STA DEST,Y
        INY
        BNE LOOP
        RTS
Here is the same program in my notation (yes, all on one line. I used a space character to delimit instructions).

  Y<0 LOOP: A<(ORIGIN,Y) A>(DEST,Y) Y+ #LOOP ]


STORE_ADDRESS takes more than 4 times the memory of STA. RAM was not cheap! Also having fixed width and shorter opcodes made assemblers faster and easier to write.


Erm, you don't run assembly languages; you still compile them to machine code!

Fixed width is easier to process, but not necessarily easier to write. Remember, it is about learning an assembler here, not pandering to the limited computer memory and processing approaches of the time. THAT is a separate issue, and on that note, thanks for the mod down point ;|.


As pjmlp points out, not everyone compiled to machine code by hand - I talked about assemblers being faster, not assembly. Those who did work by hand would also appreciate fixed width and short opcodes, and squared paper...

If this is about learning 6502, then rewriting the official assembly into something new would be counterproductive. But don't blame me for touching your mod points.


If it's about learning 6502, then it's not hard to run a sed script and convert your version into the official version. Work with what you find easiest and use the computer to do the hard work.

I've done hand coding, and those who have will agree: it's an education in futility, painful unneeded processing for sadists. Coding sheets are fun, but when you can type faster than you can write they become very annoying.

Now, back in the early home micro days you had no real choice but to hand-code your assembly into machine code, and in that case fixed coding sheets really made no difference at all; if anything, I found they got in the way, apart from screen design.

The point is, in this day and age, imposing TLAs and being forced to learn them when you could have something meaningful is really not needed, but that's another story.

Is this about learning 6502 or learning assembler? They are separate areas. The 6502 has a nice history and makes nice reading; it was done back in a time when one chap could invent a CPU, one person could write an application, etc. Nowadays it's not as easy due to size/complexity, etc.

If you want to teach somebody something, then is imposing the artificial limitations of those days really needed as an extra level of distraction? We can agree to disagree on that.


You still needed to write the program before assembling it, so your suggestion would have taken too much memory in the assembler editor.


Memory is not an issue these days for assembly language. To impose such limits today for the sake of history, you may as well cut up bits of cereal boxes, get a hole punch, and make your own punched cards!


The 6502 is also not for "these days". That said, it's a tutorial about _learning_ it, not redesigning it to be modern and easier.



