Though I think the most valuable thing I learned from studying assembly was actually about higher-level languages: How function calls work, and the mechanism of stack frames.
This discussion doesn't reach quite that far, and actually I'm not sure how one would work such things into a discussion of 6502 assembly – I learned about them in the context of 680x0 assembly. Back in my 6502 days, I never even encountered a compiler, so I never had the chance to disassemble a C or Pascal program. (To this day I can't name a 6502-based C compiler, though I'm sure they existed someplace. I knew about Pascal compilers, but I never got my hands on one; back in the 1980s compilers actually cost money.)
What I learned from the 6502 was the use of pointers through indirect addressing; when I later learned C, pointers came very easily.
The 6502 supports function calls easily: JSR/PHA/PLA/RTS
(Better even than some modern microcontrollers…)
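For anyone rusty on the details, here's a minimal sketch (labels and values are just illustrative): JSR pushes the return address on the hardware stack and jumps, RTS pops it and resumes, and PHA/PLA let a routine save and restore the accumulator.

        LDA #$05
        JSR double     ; push return address, jump to the routine
        BRK            ; resumes here with A = $0A

    double:
        ASL A          ; shift left = multiply by two
        RTS            ; pop return address, back to the caller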
The 6502 stack is pretty much a toy for a language like C or Pascal, and you definitely need to use a non-hardware stack for actual variables and stuff, probably even return addresses.
(I never ran out of 6502 stack when programming in assembly, and I wrote a LOT of it back in the day.)
They still do; it's only in Linux/BSD land that they don't.
After all, people have to bring bread back home.
The full version of Visual Studio is not free, but you can easily build (compile, link, run) everything for zero dollars. (Not even a picodollar.)
And about bringing bread home by making compilers: can you point to a commercial compiler that 'brings bread home', besides the Visual Studio compilers (which, actually, don't bring that bread - you can download the compilers for free)? I'm curious, as I thought paid compilers are currently a niche and don't make the companies selling them enough to pay more than maybe one person working on them full time, if that. Maybe Intel makes enough from pure compilers to actually call it a business?
All commercial UNIX vendors sell their compilers. Even in Solaris's case, the EULA does not allow you to sell software built with their "free" compiler.
Intel compilers are not free for Mac OS X and Windows.
All the SDKs for game development from the console owners are not free.
The express editions of the Visual Studio compilers are limited in the optimizations they offer, the libraries they include, and the processor targets they support.
The compilers from Portland group for HPC.
The compilers from CodePlay for the game industry.
Even the compilers from Apple can be said to be not free; after all, you're paying indirectly for them when you buy a Mac Developer license.
Plus, last time I checked, you needed to pay to get proper access to all the developer information: https://developer.apple.com/programs/mac/.
Please read what I wrote properly: "paying indirectly for them". Do you know what _indirectly_ means?
You checked wrong. Developer docs for production versions of Mac OS and iOS are all available for free with Xcode. You only need to pay for membership to the developer program, which gets you access to prerelease OSes (and related documentation), and lets you distribute your applications through the App Stores.
CodePlay make compilers for game developers.
There are companies that have made a business out of selling fully-integrated build & debug toolchains for embedded devices. There is usually some amount of customisation required for each type of device, and there's a lot of value in not having to do this yourself.
So yes, there are still companies making compilers as their main, paid product, but they tend to target specialised markets.
But yes, you are right; embedded, HPC and gaming are good examples. Thanks for pointing that out.
Microsoft's combined "servers and tools" operating segment represented nearly a quarter of its revenue and over 20% of its operating income last quarter, and while it's not immediately clear how much of this comes from the "tools" portion, I don't imagine it's anywhere near zero.
The modern Atmel AVR is in many ways a spiritual successor to the 6502 and 6809.
(That said, I cut my teeth on the 6502 and it will always have a spot in my heart!)
I later learned IBM 370 assembler (why not have the CompSci department's assembler class on a mainframe?) and 8086/88. I hated both; they felt wrong compared to the 6502/6509. Although 8086 assembler did help my grade in the graphics class.
Assembler is great to learn because it teaches you what the final form of your program is.
My proudest/most exciting moment with them was at the end of the semester, when I implemented a long-division program: it would take two numbers and output the decimal fraction to an arbitrary number of digits using long division.
6502 isn't dead, it has even been relaunched:
6502 was my first processor, and it was real fun to learn Assembler. I also used figForth and learned from the ground up how it was implemented. Amazing days!
Today Assembler isn't fun anymore... I don't even know the exact name of my current quad-core :)
Could it be that FPGA programming will be the next golden age of "Assembler"? The content industry works hard to lock down computer hardware with DRM. So if we want to keep our freedom, we'll have no other choice than to build our future computers ourselves - again.
From WDC's homepage:
"Annual volumes in the hundreds (100’s) of millions of units keep adding in a significant way to the estimated shipped volumes of five (5) to ten (10) billion units."
Also, FPGAs are typically configured with an HDL, which can be likened more to C than assembler.
Yes, currently, but price depends on demand. On the other hand, if FPGAs remain expensive then Assembler could become even more important, to put as much code as possible into an FPGA.
> can be likened more to C than assembler.
That's why I put the word Assembler in quotes. Btw, VHDL is much more like Ada than C, while Verilog is different from both.
It might change in the future, though. I remember, some years ago, the possibility of nanotube-based chips was being touted, offering FPGA-like programmability and speeds greater than a dedicated chip's. I haven't heard of it since.
Aside from being similar to some programming languages, an HDL is pretty interesting to learn in its own right. (Although I disagree that it's like C. It's more like a declarative language for circuits, though it's true that you can stick imperative-looking code in there. But treating it like C is a recipe for problems.)
Also, most microcontroller companies provide all the specs you need to roll your own end-to-end software for the device. Aside from specifying the machine language (so you can write your own compiler), they also have app notes for programming the onboard flash via JTAG or another interface. With programmable logic, it seems like the only parts that don't require the vendor's own programmer and synthesis software are legacy SPLDs like 22v10s.
I know, but a DE0-Nano has only 20k LEs and runs at 50MHz. You'd probably be limited to simulations of an Intel 8008 or thereabouts.
With the 32x32 display they could even do some simple graphics to stretch themselves :)
The basic concepts came very easily (with a quick skim through Jim Butterfield's ML book), but I struggled a bit more with the C64 hardware intricacies: zero page addressing, IRQs, double-buffering to avoid flickers, etc. This was a very interesting and enlightening experiment (in terms of the perspective it sheds on modern-day tools and languages), and I plan to write about it soon.
It's so much fun developing in 6502 assembler again!!!
Game looks great, btw!
People would give you such funny looks when they saw you staring into a terminal of hex and pointing out... ahh, that 3E CF needs to be 3E D3 :)
Best thing is that I ended up knowing all the opcodes and most of the instruction timings off by heart (which made interfacing to those 16 x 2 line LCD modules easier).
I'm amazed that these were not mentioned in the original article.
I wrote a bunch of 6502 code for Micro Technology Unlimited's music and graphics boards for the PET, and even wrote my own assembler in PET Basic. Fun Times!
I notice that the 0x00-ff memory space has no 0xff byte and that 0xfe seems to change a lot, but it's not explained. What's going on here? What's in 0xfe and why is 0xff not shown?
I noticed that there's no multiplication operator. Does that mean that multiplication was performed using looping addition? What about division, floating-point, et al?
* This is an old processor, so it doesn't support fancy stuff like multiplication and division and floating-point. Generally you just try to avoid multiplication whenever possible, or keep it to powers of 2, since that can be done with shift operations. You can do multiplication using looped addition, but if you were to implement it you'd want to use a constant-time binary multiplication algorithm instead.
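To make that concrete, here's the classic shift-and-add approach sketched in 6502 assembly - eight fixed iterations, examining one multiplier bit per pass (num1 and num2 are hypothetical zero-page locations; num1 is consumed and ends up holding the low byte of the product):

        LDA #0         ; A accumulates the high byte of the product
        LDX #8         ; eight multiplier bits to examine
        LSR num1       ; multiplier bit 0 -> carry
    loop:
        BCC skip       ; bit clear: no add this round
        CLC
        ADC num2       ; bit set: add the multiplicand
    skip:
        ROR A          ; shift the 16-bit running product right;
        ROR num1       ; the low byte grows where the multiplier was
        DEX
        BNE loop       ; done: product high byte in A, low byte in num1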
Moreover the content of 0xFE keeps changing even after execution stops (if you keep hitting the "step" button).
0xFF isn't even listed, but a load from that address seems to work (with value 0x00). [ETA: actually, that seems to be a quirk of the Monitor's initial config. If you set "length" to 100 instead of ff, you can see byte 0xFF in the monitor. Nothing wrong or weird there.]
There is a function called 'setRandomByte' that is called on every debugger step, but it has no comments so it may just be leftover debugging logic.
0xFF doesn't show up because the Monitor's length is set to 0xFF, which is 255. If you set the length to 0x100 it properly shows the first 256 entries.
As for the value at 0xFE, I don't know.
0xfe is a random number - a new random byte is generated on every instruction.
0xff contains the ASCII code of the last key pressed.
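So, assuming easy6502's memory map (the 32x32 display at $0200, pixel colour taken from the byte's low nibble), a tiny sketch using both locations might look like:

    wait:
        LDA $FF        ; last key pressed (zero until something is typed)
        BEQ wait       ; spin until the user presses a key
        LDA $FE        ; grab this instruction's fresh random byte
        STA $0200      ; paint the top-left pixel a random colour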
How hard would it have been to have STORE_ADDRESS instead of STA?
To really learn an assembly language you really need to write code, and to write code you really have to have a project/purpose.
Now 6502 is one of the oldest assembly languages still in active use, as these chips still do very well in the microcontroller sector. That said, ARM is also in that area, and ARM-compatible kit is a lot cheaper to obtain. ARM was born out of frustrations/limitations with the original 6502 CPU, and in that sense may be a better, more practical use of your educational time.
That all said - every programmer of any language should at least learn/play with one assembly language sometime in their life, maybe one or two. I remember after my ZX81 I opted for the Oric-1 over the Spectrum just because it had a different CPU (6502), and after that I opted for the Atari ST (68000) and an Amstrad PC (x86).
Also, inventing your own CPU/assembler is not as hard and intimidating as a lot of people think. All are very rewarding and a good use of your time on a rainy day.
I remember writing programs that wouldn't have fitted in memory if we'd used things like "STORE_ADDRESS" instead of "STA". The assembler would have had to be more complex in order to process instructions of variable length, instead of the opcodes being a predictable 3 letters.
I've written assembler by hand - sheets and sheets of it - because there wasn't a decent editor on the machine I was writing for. These were the days when you were writing code for the machine, and not for the people who would maintain it afterwards. The structure of the code had to be clear, and the comments were as much for yourself as anyone else, but the opcode names were a complete non-consideration. If you didn't know them, you couldn't program anyway.
1) When learning something new, you might as well have something easier to learn.
2) Just because historically you had to use abbreviations is not a handicap you have to impose upon yourself these days - especially if you're learning it from scratch and for educational purposes.
3) Sure, you can use short TLAs instead of a longer version, but for ease of reading and learning, something a lot clearer for the human is preferable; the compromise forced by the memory limits of old computers is an artificial limitation now.
4) It's really not hard to run a substitution script to convert long to short and vice versa - sed, anyone!
We have all done assembly by hand and hand-converted it; a compiler was a luxury for some back then on those small-memory machines, and even then you were not limited to the official short versions of the TLAs.
The thing with hand-converting is that you write something unmaintainable on many levels, but as you said, you bent to those limitations because you didn't have a lot of choice.
So if you wanted, on say a Z80, to write RETURN, or the official mnemonic RET, or go real hardcore and just write C9 (using this example as my personal memory space seems to have kept that one alive), it was your choice. When you went to code it, it was C9h, so converting RETURN or RET was something you did anyway.
At least the one opcode that was standard across CPUs was NOP, or "NO OPERATION", aka do nothing, or 00h or 0 or 00000000. That was kinda portable and used by many for funky double-entry code padding etc. Though that was due to memory limitations, and scary stuff to maintain, yet fun and rewarding to code. Apple's early OS used that approach a lot due to memory limitations.
Heck, if memory was such a limitation back then, explain COBOL, because I can't; sadly I still remember that as well :|.
If you want readability then by all means use Python or Go or Ruby or something like that. I don't know anyone who writes in assembly who doesn't use the TLAs (or similarly concise designations) for the operations, no matter what processor they're using. It feels to me like there is something natural about it.
But even beside that, I personally find that abbreviations make it easier to think in whatever subject I'm working on. When I write in assembler I think "MOV" - I don't think "move". Jargon in any field is there to make communication faster and more effective, and linguistics tells us that common expressions get shorter over time.
So I think you're trying to improve the wrong thing, and while to some it may seem obvious that spelling out operations more verbosely and making them more obvious will help people learn, I'm not convinced. Sometimes concise, precise and semi-opaque terms can actually help learners.
Having learned 6510 asm when I was younger, mov always seemed backwards and magical to me.
(And anyway, where do you stop? If you can't remember that STA means store accumulator and LDA means load accumulator, how will you remember what (&70),Y means, or what flags they use, or how many cycles they take? You'll end up with something like SUBTRACT FROM ACCUMULATOR MEMORY IN ADDRESS STORED IN &70 WITH Y REGISTER AND INVERTED CARRY FLAG WITH RESULT AFFECTING N AND Z AND C CLEARED IF BORROW AND V SET IF OVERFLOW TAKING 6 CYCLES PLUS PAGE BOUNDARY CROSSING PENALTY ;) - and even that probably isn't clear enough, because how will the poor reader know what the page boundary crossing penalty is if they don't know already?)
If you have something like x86's PUNPCKHBW, or POWER's rlwinm, and try to describe what they do clearly, you'll end up in even more of a mess. A one-volume instruction reference manual, sorted by opcode, with diagrams and pseudocode, would be far more useful.
So maybe longer opcodes would help, but I'd have got it wrong in either event - and I'd still need to double-check the docs to remind myself, again, just what the hell it does exactly.
Here's the table of ARM mnemonics in the source to Acorn's BBC BASIC for the ARM: https://www.riscosopen.org/viewer/view/castle/RiscOS/Sources... For the 6502, space was tight enough that the table was packed to less than three bytes per mnemonic.
ARM was born from Acorn's frustration with 16-bit CPUs that they considered as successors to the 6502, e.g. 68000, not the 6502.
"ARM was born from Acorn's frustration with 16-bit CPUs that they considered as successors to the 6502, e.g. 68000, not the 6502" - yes and no; we're both right. Acorn looked at 16-bit replacements for the 6502 and found that the options, like the 68000, didn't have the performance they wanted. They went to America, checked out the work on the replacement for the 6502, and concluded that they could just make their own CPU - and so they did.
Had the replacement for the 6502 not been a one-man team, history would be different now.
As much as I agree with using the mnemonics, this is a bogus argument. Even C64 BASIC tokenized stuff before storing it, because there's no reason to store the name at all. In fact, if you prefer, the 6502 instruction set is small enough to represent each instruction in the assembler's editor as a single byte index into an array. Or you could just use the opcode itself.
But I'm talking about assembler here. BBC BASIC has a built-in assembler - 6502, Z80, or ARM, depending on the CPU it's running on. The assembler source in the BASIC program is not tokenised on input but stored as plain text. Instead, when those lines of BASIC get run (since that's what these embedded lines of assembler, wrapped in [...], are), the machine code is assembled at the address in BASIC's P% integer register variable and P% is moved on. At that point of execution BASIC must hunt for the mnemonic - stored in the "tokenised" BASIC line as plain text - in its table, the table I referenced in the case of ARM BASIC. That table can be laid out as it is because each mnemonic is three characters long, e.g. mov, ldr, stm, and bic.
You're mixing up tokenising BASIC, which BBC BASIC did, with the embedded ARM assembler, which it didn't tokenise, and then adding in an "assembler's editor", when there wasn't one. Just lines of BASIC program - 10, 20, ... - some of which switched to assembler with a [.
(In fact I did most of my 6502 assembly programming in a monitor, with a notepad to keep track of where various functions started; it was only a couple of years after I started doing assembly that I got a proper macro assembler for my C64, and even then, exactly because "every byte mattered", it was not at all uncommon to still stick to a monitor on a cartridge rather than have a macro assembler "waste" precious memory on the assembler and source text.)
What I'm talking about is the general idea that longer keywords would somehow prevent an assembler from using fixed-length records to represent lines - though reading it in the context of what you wrote above, I see your reference to fixed-length records referred to the table used for assembling, not to the source lines, in which case it makes slightly more sense to me.
Though not fully: it'd be both faster and take less code to use custom search code to match the input against the available opcodes than to insist on fixed-length records. I did a quick check, and it should be doable to save at least a dozen or two bytes and reduce the average search time significantly by range-checking and using a lookup table on the first character. It might've been convenient to write the code with fixed records, but it's far from optimal in terms of either performance or code size, so it doesn't seem like code size bothered them that much in this case.
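A hypothetical sketch of that kind of dispatch - range-check the first character, then index a per-letter pointer table (mnem, table, ptr, and bad are all made-up labels):

        LDA mnem       ; first character of the mnemonic
        SEC
        SBC #'A'       ; fold 'A'..'Z' down to 0..25
        CMP #26
        BCS bad        ; not a letter: reject immediately
        ASL A          ; two bytes per table entry
        TAX
        LDA table,X    ; fetch pointer to this letter's candidates
        STA ptr
        LDA table+1,X
        STA ptr+1      ; then match the rest via (ptr),Y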
"Every byte mattered" applies to source too on these systems, and I actually find it really curious that they went to the trouble of supporting inline assembly but then didn't apply that optimisation to the source, given the limited memory and performance of these systems - especially since the opcode itself makes a very obvious token candidate, potentially reducing the "assembly" step itself to mostly copying data and applying address fixups.
Incidentally, I wrote a simple 6502 assembler back in the days, and I took the opportunity to invent my own notation, just for fun. I became quite adept at reading and writing it, and standard notation felt very verbose after a while.
Here is, in standard notation, a program that copies 256 bytes from ORIGIN to DEST.
         LDY #0
    LOOP LDA ORIGIN,Y
         STA DEST,Y
         INY
         BNE LOOP
And the same program in my own notation:

    Y<0 LOOP: A<(ORIGIN,Y) A>(DEST,Y) Y+ #LOOP ]
Fixed width is easier to process, not necessarily to write. Remember, this is about learning an assembler - not pandering to the limited memory and processing approaches of the time. THAT is a separate issue, and on that note, thanks for the mod-down point ;|.
If this is about learning the 6502, then rewriting the official assembly into something new would be counterproductive. But don't blame me for touching your mod points.
I've done hand-coding, and those who have will agree: it's an education in futility, painful unneeded processing for sadists. Coding sheets are fun, but when you can type faster than you can write they are very annoying.
Now back in the early home-micro days you had no real choice but to hand-code your assembly into machine code, and for that, fixed coding sheets really made no difference at all - if anything I found they got in the way, apart from screen design.
The point is, in this day and age, imposing TLAs and forcing people to learn them when you can have something meaningful is really not needed - but that's another story.
Is this about learning the 6502 or learning assembler? They are separate areas. The 6502 has a nice history and dates from a time when one chap could invent a CPU, one man could write an application, etc. Nowadays it's not as easy, due to size/complexity.
If you want to teach somebody something, then imposing the artificial limitations of those days - is that really needed as an extra level of distraction? We can agree to disagree on that.