6502 source code for BBC Micro game Crazee Rider from 1987 (github.com)
165 points by gmiller123456 4 months ago | 45 comments



~5000 lines of code for that?

Don’t know if that sounds like a lot or a little, but the answer is yes. If you watched the YouTube video: yes, it was hard to do that, and it took a long time.

5000 lines, but if I recall correctly there was no way to do something as fancy as this using only one line of code:

x = 1000 + 1000

I don’t know whether programming courses bother teaching, or even mentioning, a line of code like this, because what’s there to say?

I’m not even sure how to put in perspective, for a person under 200 years old, that I had to spend a pretty good amount of time learning enough just to do that (add together two numbers of more than 8 bits each).

And that even the variable x is an oversimplification, because there were no variable names. You had a CPU, a few supporting chips, and some bytes of memory to directly throw instructions at or move stuff back and forth between.
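To give a flavor of it, a 16-bit add like that one-liner comes out to something like the following on a 6502 (label names are mine, purely illustrative; the CPU adds only 8 bits at a time, with the carry flag chaining the bytes together):

```asm
        CLC             ; clear carry before the low-byte add
        LDA num1_lo     ; low byte of the first 16-bit operand
        ADC num2_lo     ; add low byte of the second operand
        STA result_lo   ; store low byte of the sum
        LDA num1_hi     ; high byte of the first operand
        ADC num2_hi     ; add high bytes plus the carry from below
        STA result_hi   ; store high byte of the sum
```

Seven instructions for one `+`, and you had to pick the memory locations yourself.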

Why would anyone have been interested in doing that?

Not sure; somehow it seemed exciting at the time. Should the question of interest have a different answer for today’s software, when we look back at it from the future?

Or are we going to be scratching our heads again? “Why did it take them so long to generalize AI? They might as well have been farming the fields.”

Can’t recall how much easier it became once you had a “Macro Assembler” (which I guess most professionals used from day one, but not always the people who got these systems as a Christmas present). Just that finally having one felt like getting some kind of new luxury car.

Yes I know.

And it was uphill both ways to school, and my lawn, and so forth. My apologies.


Old Apple ][ asm coder here.

Don't despair. It's not the merits of your "product" being judged. It's just the wrong audience (mostly). Colleges now don't even bother to teach two's-complement. Of course today's "coders" won't get it. This is a generation that has never had to count cycles, get timing just right, or worry about running out of memory. The idea of paying $1000 for a C compiler or $500 for an assembler (in 1985 dollars!), is almost unimaginable now. I remember the excitement of a 1200 baud modem -- my initial response was "wow! The text is coming faster than I can read it!"

The systems were primitive, sure (though at the time they were amazing!), but the challenge was met with creativity. What the games lacked in photorealism they made up for in “fun.”

Programming was learned by doing. And most of the folks I met were amazingly generous about sharing techniques. It was like jazz musicians sharing a cool lick with others. There were no books, really.

Coding in assembly is like juggling. It takes a little effort to get all the balls in the air and get the rhythm going, then it goes until you hit a roadblock and it all falls down. You plan, think, and start again.

>> somehow seemed exciting at the time

For you, me, and thousands more, it was absolutely magical.

Thank you for sharing this. I'm reading it now and found myself smiling.


Sorry if I implied I wrote this game; I did not. I just know, like you would, what it took to do stuff like this.

Glad to see your reply though, as it somehow makes the nostalgia seem less like it was all just a dream.

I think you may have had a rougher go of it being on an Apple, I mostly did Commodore stuff. Didn’t the Apple have the nightmare framebuffer layout? Something like 7-bit increments of pixels across a scanline instead of nicely aligned bytes, or some similar bit of graphics insanity?


The Apple ][ was actually a joy, but there were challenges: interleaved raster lines and the quirks of hi-res color byte alignment (certain colors could not border certain other colors). As I understand it, the Commodore had better support for sprites and overlays. And Commodore sound may have been easier as well, with a dedicated sound chip (am I remembering this correctly?). I'm sure it was a wash, in the end, for which of us had the better experience.


>> Colleges now don't even bother to teach two's-complement.

I learned two’s-complement in college. And at the least: one’s-complement, sign-magnitude, BCD, IEEE floating point. And arithmetic on same.

Which colleges are you referring to?


The ones with programming courses.

This is just a guess, but my guess is you didn’t learn any of that in CS101. It sounds more EE-related. Of course, there’s often no clear dividing line between the two.

Regardless, that’s assuming a full CS/EE undergrad program in either case. There are tons of people who take individual courses, have careers, and go through all those boot-camp-style programs.

Any way people can learn and develop skills is good, so I don’t suggest any of the above as lesser ways to learn.

I’m just unaware of the concepts you mention being taught in typical 100-200 level courses that have emphasis on software programming or computer science.


I've been taught two's complement in multiple undergrad classes over the last ~8 years, including last semester. I've also been exposed to one's, sign-magnitude, and BCD.


I had an MSX in 1985. As a commercial assembler was way too expensive for me, I had to write my own in MSX Basic. And I couldn't afford a floppy drive, either (it was nearly twice as expensive as the computer itself). And you know why I wrote my own assembler? Because I wanted to code games and Basic just didn't cut it. So yes, I know your pain :)


I taught myself MSX Basic and Z80 asm by playing around with source code I typed in from magazines and by reading tutorials in those same magazines. It took me years, during which I made games and other software, before I found out you could write machine language in a different way than 250 DATA 3E, AA etc. I remember all the opcodes to this day, as I spent years writing software with them, translating by hand. It was fun though: everything was right in front of you; no libraries to install and learn, just trying things out and learning tricks to optimize and get better results.


Exactly the same scenario for me.

The Basic language was great and easy; I loved it. However, like you, I wanted to make games. It wasn’t as if you learned 6502 for a mere 50% performance increase.

Doing anything with pixels was a couple of orders of magnitude slower in Basic, so it wasn’t really optimizing; it was just required.


Here's a YouTube video of the game, for those unfamiliar (like me). https://youtu.be/GFMp891Q6tY


Makes me think of all the code lost to poor storage. Like all my C64 related code from the 80s and 90s. I had hundreds of programs typed in regular notebooks.

The code for this game is really readable and easy to pick up after all these years of not doing any 6502 coding. Any 6502 wasm targets?


There are quite a few 8-bit CPU emulators that can be compiled to WASM. Static translation of 6502 (or Z80) machine code to WASM is tricky because a lot of 8-bit code used self-modifying code, so you'd need at least some sort of JIT.
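A typical self-modification trick, sketched from memory (labels and addresses are illustrative), patches an instruction's own operand bytes so a fill loop can walk through memory without spending an index register:

```asm
loop:   LDA #$00        ; value to write (2 bytes: opcode + operand)
        STA $4000       ; target address; its operand bytes sit at loop+3/loop+4
        INC loop+3      ; bump the low byte of the STA operand (self-modifying!)
        BNE loop        ; keep going until the low byte wraps to zero
```

A static translator can't know what `STA`'s target is ahead of time, because the program rewrites it at runtime -- hence the need for JIT-style retranslation.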

Shameless plug: a little while ago I started writing easy-to-integrate 8-bit chip emulators as standalone C headers:

https://github.com/floooh/chips

For instance used in these WASM emulators:

http://floooh.github.com/tiny8bit


How about a C64 disk image that you run through mame online, like https://archive.org/details/C64Gamevideoarchive

(and see https://news.ycombinator.com/item?id=18379387 to automate the whole process a bit)


I copied my software sources from the 80s onto floppies until the mid 90s after which I put it on a tape. Years later I tried to get the data off the tape, but the tape was unreadable. Such a shame (for me, no one else is missing anything). Which is why I think everyone should just put whatever they have online; maybe someone likes it down the line and if not it cannot hurt.


Not WASM but also an equally fun target: https://scratch.mit.edu/discuss/topic/282269/


Amusing how GitHub thinks the project is 94.9% Makefile and 5.1% Batchfile. It obviously doesn't know asm very well.


GitHub uses Linguist [0] to analyze the languages in a repo.

[0] https://github.com/github/linguist


GitHub supports asm to a certain degree, but definitely not the .6502 extension (is that the standard choice?).


I only ever did a clear-screen or similarly simple programs in asm, but I believe 6502 was a popular one. I think it was supported on both my TRS-80 and my first IBM PC.


Off-topic, but does anyone know a way to have GitHub recalculate these stats for idle projects without needing to re-commit stuff? I have one repo which says 99% Mathematica, which is far from the truth (there is no Mathematica code whatsoever).


If it reports wrong, a recalculation likely won't help; you need to work around or fix the underlying reason why it reports it wrong.

For example, it doesn't count document-type files by default; you need to modify `.gitattributes` files to override that.
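For the repo under discussion, an override along these lines should nudge Linguist in the right direction (the exact attribute values are my assumption of what fits this repo; Linguist's documentation lists what's supported):

```gitattributes
# Tell Linguist to classify the .6502 sources as assembly
*.6502 linguist-language=Assembly

# Or keep the build files out of the language stats instead
Makefile linguist-vendored
```

Linguist recalculates on the next push, so a trivial commit touching `.gitattributes` is enough to see the effect.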



Only 5171 lines of asm in this game, as measured by "wc -l src/*.6502". See the YouTube link posted here to get an impression of what it does.


Related: the original source code for Fort Apocalypse, at https://github.com/heyigor/FortApocalypse


It's unfortunate that neither this BBC Micro game nor the Fort Apocalypse game are free software -- software one is free to run, inspect, share, and modify (even commercially).

The Crazee Rider software seems to lack any license at all. Thus no permission is granted to engage in the freedoms of free software.

Fort Apocalypse is also nonfree; it is distributed under a nonfree license -- http://creativecommons.org/licenses/by-nc-nd/2.5/ -- that even the license's author recommends against using for software per https://creativecommons.org/faq/#can-i-apply-a-creative-comm... .


To me, this is pure art


Certainly it is. I had a computer teacher tell me that the 'cls' command in DOS would take hundreds of lines of asm to write yourself. A couple of years later I decided to try it. It turns out it's only a few lines if you take advantage of OS interrupts. I assume the teacher meant it would take all those lines if you didn't "cheat" and use an interrupt.
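The "few lines" version, roughly as I remember real-mode DOS (BIOS int 10h, function 06h scrolls a window up; AL=0 blanks it -- a sketch, not tested against a live machine):

```asm
        mov ax, 0600h   ; AH=06h scroll window up, AL=0 means blank it entirely
        mov bh, 07h     ; attribute for the cleared area: light grey on black
        xor cx, cx      ; CH,CL = row 0, column 0 (upper-left corner)
        mov dx, 184Fh   ; DH,DL = row 24, column 79 (lower-right of 80x25 text)
        int 10h         ; call the BIOS video service
        mov ah, 02h     ; function 02h: set cursor position
        xor bx, bx      ; BH = display page 0
        xor dx, dx      ; home the cursor to row 0, column 0
        int 10h
```

Hundreds of lines only enters the picture if you write directly to video memory at B800h yourself, attribute bytes and all.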


Now I just have to dig my BBC Micro out from its cardboard box and find a display device that it can still connect to. :D


And hope the electrolytic capacitors still work.


The RIFA X2 capacitors in the PSU tend to explode. I opened mine up and was lucky to find that my BBC is one of the few later revisions that don't use these. Well worth checking though.


The failing-electrolytic-capacitor syndrome is interesting to me, because I worked as a hardware designer in that period and I don't remember it being common knowledge that those components had a limited life. I do recall that we mostly used solid tantalum capacitors through the transition to surface-mount technology, but I believe that was more to do with survival in the production process than with long field lifetime.


Yeah... It did boot up a couple of years ago when I last plugged it in, though, so there's still hope! :)


Could just recap it. I did an Apple a while back. Was not hard.


It is really easy to connect a Beeb to SCART.


Be careful with the TTL RGB levels, though. You should add some resistors.


A capture device could be quick and easy.


This game reminds me of Super Cycle for the Commodore 64, which also had a 6502.

https://www.youtube.com/watch?v=FiebnDbq0w0

Are there any collections of source code for commercial C64 games?


I would love to see Crazy Climber. Excellent game.


Yeah, this game definitely has a "Crazy Climber" aesthetic .. I too would love to see a Crazy Climber remake, albeit for my preferred 8-bit machines, the Oric-1/Atmos systems ..


Assembly is so beautiful.


Especially 6502 assembly, because all the opcodes have three letters, so they form a sleek column of text that slides smoothly by without friction from jagged bumpy edges. All other assembly languages look rough and ragged to me!


CPU, registers, interrupts, RAM, I/O... It’s just you and the machine baby.

I started programming with assembly, and even though today I rarely use it, any excuse I can find, I’ll dig into it.

No question, modern languages are more efficient to work in, but knowing exactly which instructions are being run, and how they interact with the hardware (a physical object), is so much more aesthetically pleasing.


In the embedded world there is still enough of it. Today we had to free up 1200 bytes because our software no longer ran after adding 2 new features; not enough free memory. With a higher-level language we would have had no chance even with the previous feature set on this hardware, but with asm we managed to optimize enough to free up the needed memory. It's amazing: you always seem to be able to push it further. All of that was a must on the home computers of the 70s and 80s, as they were generally not expandable (at least not easily, without soldering), so you had to make it work with what was there. I got into my current work because I liked that kind of optimizing and puzzling when I did it in the 80s as a kid.


I used to work in the embedded world about 10 years ago. There was definitely plenty of byte-level optimization then, but reading the tea leaves pointed in a very different direction (e.g., embedded Java, embedded SQL).

Makes me happy to hear folks are still hacking bits in 2018.



