The tools were pretty good. Deluxe Animator II and Deluxe Paint - both for MS DOS - are what I used for all my console games: Game Gear, Game Boy, SNES, etc.
I don't think I did any art for it – as I recall I calculated sprite x,y offsets to hand off to the programmer for the motion.
Pretty happy to be doing it too - this was my first job out of high school.
I encourage Gameboy enthusiasts to check out the pret project on Github:
Aseprite is currently my favorite pixel editor / animator:
Google doesn't bring up anything about this--do you mean DeluxePaint Animation or Autodesk Animator?
MS DOS Deluxe Paint II and Deluxe Paint Animation are correct.
I did use Autodesk Animator some - but for games the Deluxe Paint tools were better.
I mean, whether or not pixel-paint software exists, graph paper, even today, is still great for sketching and ideating pixel art [and/or level designs, but I'll just say "pixel art" from here on]. If you try to come up with a design for an 8x8 or 16x16 sprite or icon by freehand sketching it, you'll probably come up with something that won't look legible at that size. Better if you have the constraints of the grid even in your sketchbook; and so, better if your sketchbook is graph paper. (In fact, I believe they've made special-purpose graph-paper sketchbooks forever now, specifically for ideating cross-stitch and latch-hook art.)
We're really only just barely out of the era of ideating pixel art on physical media, thanks to the advent in the last ten years of cheap tablets that can run pixel-painting software and are light and convenient enough that an artist would prefer to lug one around in place of a sketchbook.
Of course, after the ideation process—once you've got a draft you like—you'd likely be using some kind of software (visual or not) to input it. Scanner tech, and downsampling algorithms, were both far too crap back then to rely on faithful digitization of "discrete" information like pixel-art pixels. (Unless you were using a special scanner, special software, and special constraints on your inking process. Remember Scantron sheets? A 1980s pixel-art scanner would require basically the same fraught workflow.)
But also, on the other side of the urban-legend scale (which the article thankfully doesn't repeat), no artist ever had to do bit math to get packed words to type into a resource file. Programmers may have done that for first tests with crappy programmer art, but even back then comparative advantage was a thing, and any artists on a project weren't hired to be good at calculating long strings of numbers.
Instead, from the very beginning of computers being hooked up to displays and having character/tile display modes with configurable character/tile memory, there were already "textual graphics" formats like NetPBM, where you could use a visual editor like DOS edit or Unix vi as your "pixel art editor." These files could then be compiled, resulting in a C or ASM source file defining some constant as a uint8-array literal. (They weren't necessarily standardized; but it was very easy to reinvent these formats and their tooling, and many companies did so in the process of producing early game software.)
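To make the idea concrete, here is a minimal sketch of such a home-grown "textual graphics" compiler. The format (an 8x8 tile drawn as '0'/'1' characters) and all names are illustrative, not any real standard:

```python
# Sketch of a home-grown "textual graphics" compiler: read an 8x8 tile
# drawn as '0'/'1' characters and emit a C uint8 array literal, packing
# each row of 8 pixels into one byte (most significant bit = leftmost).

def compile_tile(text):
    rows = [line.strip() for line in text.strip().splitlines()]
    assert len(rows) == 8 and all(len(r) == 8 for r in rows)
    data = []
    for row in rows:
        byte = 0
        for ch in row:
            byte = (byte << 1) | (1 if ch == '1' else 0)
        data.append(byte)
    return data

def emit_c(name, data):
    body = ", ".join("0x%02X" % b for b in data)
    return "const unsigned char %s[%d] = { %s };" % (name, len(data), body)

smiley = """
00111100
01000010
10100101
10000001
10100101
10011001
01000010
00111100
"""

print(emit_c("tile_smiley", compile_tile(smiley)))
```

The emitted line can be pasted straight into a C or ASM source file, which is exactly the role these ad-hoc formats played.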
There was pixel-paint software too, even in the early 80s... but it wasn't on every platform. And, since you were often stuck on the platform you were targeting, that was a problem. (You were fine when targeting closed platforms like the Game Boy, because the SDK would usually be built for a workstation-class system; but when targeting your average 8-bit micro, you were often developing on the micro itself, and 8-bit micros had no niceties like "standardized floppy disk formats" or "networking standards" to transfer in data from a nicer graphics workstation. If you want to write e.g. ZX Spectrum software, you write it on a ZX Spectrum, or one of its younger brothers once those came out.)
But, no matter the platform, if it was display-oriented, you could pretty reliably get a visual text editor. (Even on the ZX Spectrum!) So writing textual "1"s and "0"s into text files was indeed a thing—for a while, at least.
Mark Ferrari: 8 Bit & '8 Bitish' Graphics-Outside the Box
It's really exciting to see renewed interest in these techniques and "8-bitish" games in general since it probably means we've chosen the right time to make our handheld.
My favorite line from the post.
The 4MHz value is the clock speed, but all instructions take a multiple of 4 clock cycles to accomplish, so you often talk about it as a 1MHz "instruction cycle" machine.
The fastest memcpy implementation I've seen for it (not including in-hardware DMA features, which are only available for copying to OAM (sprite) RAM on the original gameboy) takes 4.5 cycles per byte (one 16-bit word every 9 cycles).
That means that if you were trying to update every pixel on the screen (which doesn't actually work - the GB's screen pixels aren't memory mapped directly but generally specified via a combination of up to 384 unique 8x8 tiles, which are then arranged on a 20x18 tile grid), which would be 160 * 144 * 2 bits/pixel = 5760 bytes, it would take 25920 instruction cycles ~= 24.72ms, which comes out to approx 40.45 FPS.
Of course, as I mentioned, the actual GB doesn't work like that; it has a Pixel Processing Unit that handles writing to the screen every frame. While the PPU is writing to the screen, the CPU cannot access VRAM, which means the CPU effectively only gets what's called the v-blank period: about 1140 instruction cycles, only enough time to copy 254 bytes or so. (I'm skipping over some detail here, like how you could squeeze a few extra bytes out by loading the first 4 or so bytes into registers before v-blank begins, or how you can also, with very careful timing, write to VRAM during a ~60 cycle window on each line known as the h-blank and OAM search periods.)
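The arithmetic above is easy to check in a few lines; the 4.194304 MHz master clock and the divide-by-4 instruction-cycle rate are the standard Game Boy figures, and the rest are the numbers from the comment:

```python
# Reproduce the back-of-the-envelope Game Boy numbers above.
CLOCK_HZ = 4_194_304            # master clock (~4 MHz)
CYCLES_PER_SEC = CLOCK_HZ / 4   # ~1 MHz "instruction cycle" rate

bytes_full_screen = 160 * 144 * 2 // 8      # 2 bits/pixel -> 5760 bytes
cycles_full_copy = bytes_full_screen * 4.5  # 4.5 cycles/byte memcpy
seconds = cycles_full_copy / CYCLES_PER_SEC

print(bytes_full_screen)           # 5760
print(cycles_full_copy)            # 25920.0
print(round(seconds * 1000, 2))    # 24.72 ms
print(round(1 / seconds, 2))       # 40.45 FPS

# V-blank budget: ~1140 instruction cycles per frame
print(int(1140 / 4.5))             # ~253 bytes per v-blank
```

(The "254 bytes or so" above and the ~253 here differ by rounding and the register-preloading tricks mentioned in the parenthetical.)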
In practice, games work despite these limitations because they generally only have to modify a small amount of the screen in any one frame. Scrolling is implemented in hardware, and you can write new data off screen then scroll into it so you can take multiple frames to write it if needed.
This plus other tricks (for example, animating 50 copies of a common tile by editing the tile's pixel data, which affects every copy at once) let these games accomplish so much with so little.
It's impossible to compare to a modern system with a polygon-based GPU; they're entirely different things.
Defense, agriculture, home automation, industrial automation, finance, automotive, oil & gas, space research, robotics. The list goes on!
I think the proprietary toolchain stuff is also changing, but slowly.
Although there's no way past proprietary chips, at least the open-source toolchains for them are coming along.
Yeah, see, that's a hold-up for me. If I'm going to go through the effort of building something completely non-commercially viable, then I want it to at least be completely open. And before you say anything, no, I'm not at all interested in making anything commercially viable.
Cortex-M0 is very comparable to a 68k with 32-bit registers. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....
And an ARM Cortex-M0 is about 12k gates; it's hard to get much simpler than that.
Yeah, that's precisely the problem. Though you're right, the M0 at least does fit what I'm talking about pretty well, in-order, no-MMU, etc.
Honestly, you never know when something you are familiar with will become important later.
Something that was important for getting one task done may not matter as much in later ones. (My operating systems class at university was greatly useful in my first job, not as much after.) My dabbling in JS got me a job programming timecard apps for BlackBerrys and changed my work trajectory.
This field changes so fast that you can't learn everything, so learn things you enjoy and that help with what you are working on now.
That, and data structures and databases; you can never know enough about either.
I stole a collection of games from someone else's project to use for testing.
My goal was to get in the door so I picked a language I knew well and set some constraints:
1. Doesn't have to be fast. Just has to run.
2. Write opcodes as pure functions, passing in an array of memory and returning a new array of memory. This made it easy to test and debug. But also made it super slow.
3. DO NOT look at someone else's code. This forced me down a way slower path where I learned a whole lot more than I would have if I just reimplemented others' work.
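The pure-function style in point 2 can be sketched like this; the tiny register-machine state and the add-immediate opcode here are made up for illustration, not taken from the parent's emulator:

```python
# "Opcodes as pure functions": each opcode takes the machine state and
# returns a brand-new state, mutating nothing. Slow, but trivially
# testable, since you can assert on whole before/after states.

def op_add_immediate(state, reg, value):
    """7XNN-style add: V[reg] += value (wrapping at 8 bits),
    returning a new state dict and advancing the program counter."""
    new_regs = list(state["v"])                     # copy, never mutate
    new_regs[reg] = (new_regs[reg] + value) & 0xFF  # 8-bit wraparound
    return {**state, "v": new_regs, "pc": state["pc"] + 2}

state0 = {"v": [0] * 16, "pc": 0x200, "mem": bytes(4096)}
state1 = op_add_immediate(state0, 3, 0x2A)

print(state1["v"][3])   # 42
print(state0["v"][3])   # 0  (original state untouched)
```

Copying the whole state on every opcode is exactly why this approach is "super slow," but it makes each opcode a pure, isolated unit test target.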
I heartily recommend everyone try it for a couple months. No TV, no mobile data plan, no home internet, no radio. It's amazingly refreshing.
Edit: another reason is that almost all social media is geared towards making you addicted to it and encourages hopelessness. When you get rid of it (not just deleting accounts, but no longer consuming it regularly), your attitude changes for the better on a daily basis. Waking up will feel refreshing rather than burdensome: today is a new opportunity, not another day of drudgery. Well, at least it will play a big part in that shift.
What I am interested in, though, are the public library board meetings that they hold monthly, open to the public. I can't imagine what kind of things they discuss, and I'm thinking of attending one just to find out.
The hard part is dealing with live APIs, but fortunately these can be recorded while online and replayed when offline, which seems like a good practice anyway during development.
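A minimal record/replay wrapper might look like the sketch below; the `Recorder` class, the fetch callback, and the cache file name are all illustrative, not any particular library's API:

```python
# Record API responses while online, replay them from a JSON file when
# offline. The real fetch function would do a network call (e.g. via
# urllib); here a stand-in is used so the demo runs without a network.
import json
import os

class Recorder:
    def __init__(self, fetch, path="api_cache.json"):
        self.fetch = fetch   # callable doing the real network request
        self.path = path
        self.cache = {}
        if os.path.exists(path):
            with open(path) as f:
                self.cache = json.load(f)

    def get(self, url):
        if url not in self.cache:            # online: record
            self.cache[url] = self.fetch(url)
            with open(self.path, "w") as f:
                json.dump(self.cache, f)
        return self.cache[url]               # offline: replay

# Demo with a stand-in fetch function that counts real "network" calls.
if os.path.exists("demo_cache.json"):
    os.remove("demo_cache.json")

calls = []
def fake_fetch(url):
    calls.append(url)
    return {"url": url, "status": "ok"}

rec = Recorder(fake_fetch, path="demo_cache.json")
rec.get("https://example.com/a")
rec.get("https://example.com/a")   # served from the cache
print(len(calls))                  # 1
```

The same cache file then doubles as a deterministic fixture for tests, which is part of why this is a good practice even when you're always online.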
Cool. How do you do this?
I'm going the route of others and making a fantasy console, and will be content to run it on mobile platforms in an emulator, which is what happens with the PICO-8.
That said, I do have a half-baked idea in my head to make something from an ATmega and an ESP32: if the higher clock speed of the ESP32 is enough to let it pretend to be a valid XRAM interface for the ATmega, it could then output S-Video out the DACs.
I am considering teaching a university course where the students implement a NES or GB emulator though. Should be a blast!
I was considering doing the same for my Machine Language class, but it seems that many games rely on the precise timing of the emulation to work. And, because the output was an analog TV, there are a lot of tricks that need to be figured out. I thought that was a no-go for a course.
One thing I didn't see mentioned in the article are the Pan Docs. Those document basically everything about the Game Boy at a low level.
Another great GB development resource is https://github.com/gbdev/awesome-gbdev
The "Nintendo" logo you see when booting a Game Boy comes from the cartridge and is checked against one stored in the console.
If they don't match the whole thing stops, which means that you have to add (and distribute) a Nintendo logo to your game in order for it to work on real hardware.
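The boot ROM's check compares the 48 bytes at header offsets 0x104-0x133 against its internal copy of the logo bitmap. A sketch of that comparison, done in software against a known-good ROM dump rather than hardcoding the logo bytes (the file layout of the synthetic "ROMs" below is made up for the demo):

```python
# Verify a ROM's header logo region against a known-good dump.
# On real hardware, the boot ROM compares the 48 bytes at 0x104-0x133
# with its own copy of the Nintendo logo bitmap and halts on mismatch.
LOGO_OFFSET, LOGO_SIZE = 0x104, 48

def logo_bytes(rom):
    return rom[LOGO_OFFSET:LOGO_OFFSET + LOGO_SIZE]

def logo_matches(rom, known_good_rom):
    return logo_bytes(rom) == logo_bytes(known_good_rom)

# Demo with synthetic ROM images sharing the same fake logo region.
fake_logo = bytes(range(48))
rom_a = bytes(LOGO_OFFSET) + fake_logo + bytes(0x100)
rom_b = bytes(LOGO_OFFSET) + fake_logo + bytes(0x200)
print(logo_matches(rom_a, rom_b))   # True
```

This is the mechanism behind the legal angle: shipping a working cartridge means shipping Nintendo's logo in your header.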
I had an Atari Lynx but had to sell it because I needed the money. It was like the Game Boy but Atari and it had color graphics and slim cartridges.
I think Free Pascal or some other language can be used to make console games by cross-compiling.
Right now I have a Raspberry Pi with RetroPie to play Game Boy games.
Stay hungry and hack.