
Designing a 68K Single Board Computer - omnibrain
http://www.bigmessowires.com/2014/10/27/designing-a-68k-single-board-computer/
======
jacquesm
How Motorola managed to squander its lead is still a mystery to me. The 68K
ran rings around anything available at the time and the 68030 had an on-chip
MMU removing one of the last barriers between micro and mini computing.

Easy to code for, very nice instruction set, a linear memory model able to
address lots of RAM, memory-mapped IO: what else was there to wish for? And
yet, IBM picked x86. Beats me. The 6809 would have been a better choice for a
micro than the 8086 was.

~~~
kabdib
There was a project at IBM that used the 68K. If memory serves, IBM thought it
would be "too powerful" so they chose the 16-bit 8086.

Writing boot ROM code for the 68K [at Atari, not IBM...] was a lot of fun. For
instance, the first thing we needed to do, before we could use any RAM, was to
size the available RAM; this code ran a little memory test using just the 16
registers (and no stack, of course). Didn't take long to write, but it was a
neat little puzzle.

Writing embedded software is neat. I still get a rush when a new design wakes
up and prints out "Hello world!" on a serial port, and on the ST it was _way_
cool when the floppy drive did its first seek.

"Hey guys, watch this!"

 _Grknnnnkknknkg_

Little things. :-)

Don't know if you could do CPLD initialization with the same constraints, but
it doesn't seem _that_ hard... :-)

Motorola seemed to run out of 68K gas in the late 1980s. Maybe they were
distracted? They developed the 88000 RISC, and at Apple we actually had some
NuBus boards that we were urged to write software for, but that effort wasn't
terribly well organized and ultimately fizzled (one fine day we were told to
immediately rip out all our 88K dev boards and return them. Something
contractual probably fell apart, and we guessed that some lawyers had gone
non-linear...)

~~~
marktangotango
Attempting to control the consumer market by choosing a lesser chip does sound
like something IBM would do. They were all about protecting their turf (AS/400
and mainframes).

~~~
jacquesm
I ran into an AS/400 a week ago on a job, a box the size of your average PC;
second-hand (refurbished) cost: $125,000...

That's a good markup, and it doesn't seem to do anything that a regular rack
mounted intel box couldn't do. It probably has some reserve in terms of IO and
reliability but for the spot where it's sitting it is total overkill (ancient
ERP system only running on that hardware).

~~~
rogerbinns
The AS/400 is completely different hardware and software. I don't mean that it
is just a reimplementation of what you are used to, but rather that it is
alien to what you are used to. It is designed to be a business system, and is
an evolution of the System/32, /36, and /38.

I recommend Frank Soltis' books about it, although they are very expensive.
For example an earlier edition is [http://www.amazon.com/Inside-As-400-Frank-
Soltis/dp/18824191...](http://www.amazon.com/Inside-As-400-Frank-
Soltis/dp/1882419138)

As an example there is extreme backwards compatibility, pervasive database,
single address space (a different meaning of virtual memory than you are used
to), sharing, tagged and protected pointers, clustering etc.

~~~
jacquesm
Thank you! I've worked with some pretty weird hardware in the past so I think
I'll be just fine but pointers are very much appreciated.

------
unwind
Cool! I love the 68k, I learned assembly on it (on the Amiga because Europe)
and it ruled.

It sounds like a nice project, although I personally almost can't consider a
68K-based hobby computer without graphics. :) Of course graphics isn't
trivial, so I totally understand the design.

This part:

> A CPLD is strongly preferred over an FPGA, because the CPLD’s configuration
> is held in its internal non-volatile memory. An FPGA’s configuration is RAM-
> based, so it would require something else to configure it every time the
> board boots up.

Actually, that isn't true anymore: there's at least one FPGA family with
built-in flash, Lattice's MachXO2
([http://www.latticesemi.com/Products/FPGAandCPLD/MachXO2.aspx](http://www.latticesemi.com/Products/FPGAandCPLD/MachXO2.aspx)).

------
adestefan
Anyone interested in this might want to look into the N8VEM Project [0]. It
started out as a single board Z80 system, but people have since designed
expansion cards, and others have added SBCs with 6x0x, 68k, and 808x CPUs.

[0] [http://n8vem-sbc.pbworks.com/w/page/4200908/FrontPage](http://n8vem-
sbc.pbworks.com/w/page/4200908/FrontPage)

------
zackmorris
I have fond memories of writing assembly for the 68000. It had many RISC-like
properties: a large number of registers, few specialized instructions (unlike
the looping and string handling of the x86), no memory segments to worry
about, and so on. I wrote some interesting B&W scrolling blitters after seeing
the full screen labyrinth scrolling in Beyond Dark Castle. I can't remember if
they used bit shifting or rolling, but I don't think I managed 30 fps because
the Mac Plus had such a slow bus that only vertical scrolling at 30 or 60 fps
was possible by reading from scan line offsets of the source image. I used
Apple’s BlockMove(), which was like an even more optimized version of
memcpy(). The horizontal scrolling jittered when it hit 8 bit multiples
because I wrote an optimization that skipped the shifting. I actually
commented that out just to get consistent speed (I later synced to the
vertical refresh instead).

I often feel that Apple's jump to PowerPC was not worth it. Losing hand-
written assembly was a major blow and took Apple out of gaming for years
because no Mac port came even close to the optimized innermost loops of games
like Descent. I remember running games on a friend’s 486 at full speed that
only got 10 fps on my 60 MHz PowerPC. Those were bleak years. Luckily video
cards eventually leveled the playing field (but brought their own issues,
which we'll keep struggling with until DSP instructions are brought on-chip
the way the FPU was).

------
davelnewton
I loved the 68K and built several boards with it, although my board skills
were maxed out after the 68010. I had a PT68K2 ([http://bitsavers.trailing-
edge.com/pdf/peripheralTechnology/...](http://bitsavers.trailing-
edge.com/pdf/peripheralTechnology/PT68K2/)) and several other commercial 68k
machines including an Atari 1040ST and wanted a Falcon.

I still have all my original reference material, and this book,
[http://www.amazon.com/MC68000-Assembly-Language-Systems-
Prog...](http://www.amazon.com/MC68000-Assembly-Language-Systems-
Programming/dp/0669160857/) was one of my favorite computer books ever--I
devoured that thing.

I still actually reference that book, and a couple others of that era,
regarding microcontroller interfacing, even though I've moved on to Arduinos,
RPis, and the bazillion other incredible SoCs I embed into stuff all the time
now.

------
rpledge
I'm afraid the lack of an MMU will make running Linux impossible, unless
things in the kernel have changed dramatically since I last looked into this.

~~~
chrisdew
"Luis Alves’ machine used a regular 68000 at 20 MHz, and got decent
performance running ucLinux."

ucLinux doesn't need an MMU - that's its reason for existing.

~~~
rpledge
Missed that - very cool, I hadn't heard of that project.

------
cp51
Check out the ex-Natami project and the Apollo core project
([http://www.apollo-core.com/](http://www.apollo-core.com/)); also check out
[http://majsta.com](http://majsta.com).

~~~
unwind
The Apollo project looks awesome, but the site is very shallow and doesn't
provide much detail about the actual deliverables of the project or its
licensing.

It sounds as if it maybe aims to be commercial, which sounds... I don't know,
like a hard sell these days.

It would be so awesome to pair it with some graphics processor in a suitable
FPGA and ... uh ... write demos, I guess. :)

------
tzs
A single board 68K system is fun to make. My senior year at Caltech, in 1982,
I took a microprocessor lab and for my project built a general purpose 68K
system (as a joint project with a friend--we shared the design, and then each
built a system).

Here are photos of my system:
[http://imgur.com/a/jS42c](http://imgur.com/a/jS42c)

I don't recall the specs exactly, but it was 6 MHz CPU clock speed, something
like 4K of EEPROM and 2K of static RAM, and two serial ports.

We almost weren't allowed to do this, because in prior years there had always
been a few general purpose systems, and they wanted people to try something
that hadn't been done to death. However, those were all 8-bit systems. This
was the first year they had obtained some 16-bit processors, and decided those
were sufficiently different that they would let us get away with general
purpose systems.

Some of the part choices might seem odd. That was due to cost. They gave us
the 68Ks, and I think we got the EEPROM and static RAM from them, too.
Everything else was on our dime, which meant there was a lot of scrounging
around at surplus stores to find things that fit into a broke student budget.

One design feature I really liked was the way we did the serial port
connectors. Note that the connection to the board uses a standard DIP socket.
That has two advantages:

1. It was cheap!

2. It is NOT a keyed connector. That allowed plugging it in two ways. We
wired them up so that both worked, but one was equivalent to inserting a null
modem.

My friend's system had a funny problem. His and mine should have worked
identically, since they were the same design and same layout. However, for
some reason his serial ports would not run faster than 2400 bps, whereas mine
went up to at least 19200 (the speed of the fastest terminal we had access
to). If he tried to go faster, some characters would work, and some would not.

It turns out that when he went to the EE stockroom to buy bypass capacitors
for the serial lines, the stockroom gave him the wrong capacitors. They were
much, much larger than they were supposed to be, and so were filtering out, as
if it were line noise, frequencies low enough to affect the data. If a
character had only a couple of 0/1 or 1/0 transitions, not too close together,
it would work. If it had more, or they were too close together, it got messed
up.

There was also a hilarious incident with the development system. The EEPROM
burner was hooked up to some HP system. The 68K cross assembler ran on
Caltech's IBM 360. So, we'd write our assembly code on the VAX, then there was
a process to submit it to be cross assembled on the 360, then you could go to
the HP and download it from the 360 and burn it into EEPROM.

There were strict orders to delete your files from the HP as soon as you
burned your EEPROM, because the disk was near full. The thing had been there a
long time, and was full of junk from old projects, and no one knew what was
really junk (such as past student files) that could be deleted, and what might
be someone's research files. The space crunch on the machine was starting to
get very annoying.

Also, no one really knew much about the OS on the HP. Everyone just learned a
set of commands by rote to transfer files, burn EEPROMs, and things like that.

One fine day, I happened to notice that there was a place in all the commands
that manipulated my files that a "1" or an "A" (I forget which) appeared near
the file name. Curious as to what it might mean, I tried copying a file but
changing that to a "2" (or "B") for the destination, and it worked. I then
asked for a listing, again using "2" (or "B"), and there was my file, alone on
an empty disk.

I had discovered that there were actually TWO drives in the computer--and the
second was empty (except for my file)! So...a bunch of (allegedly) bright
people at Caltech had spent months, or even years, struggling with low disk
space in the lab, and all that time there was an empty second disk in the
computer...wow

------
icantthinkofone
As the article said, I loved the 68K. Far more than anything Intel had at the
time and I wasn't alone. For work, I designed a complete 68K system from the
component level. For me it was easy. Everything made sense. It was a great
time.

