
Minimal single-board computer based on Motorola 68000 - homarp
https://github.com/74hc595/68k-nano
======
noone_youknow
Designer of the rosco-m68k (which was mentioned in another comment - thanks!)
here, always good to see another 68k SBC on the block. Looking forward to
seeing how this evolves, would be especially nice to make it so $0 isn’t
permanently a ROM address (to allow interrupt vectors to be changed). Even on
68010 I had to do some magic in the address decoder to make ROM be temporarily
at $0 at reset.

~~~
Teknoman117
Most of the boards I've seen that solve this have a shift register that counts
the first 4 bus cycles. The data-in pin is tied to 5 V and the reset is tied to
the reset pin on the 68k. IIRC, the clock pin is connected to the AS signal.
If the 5th bit is zero, the ROM is selected. The addresses written into the
first 8 bytes of the ROM are the initial (supervisor) stack pointer (typically
the end of your RAM for embedded systems) and the ROM entry point. At the start
of the 5th bus cycle, the 5th bit goes high, the ROM is no longer mapped to
the lower addresses, and the CPU has jumped to the true ROM entry point.

You could also of course have a register to flip the ROM being active in the
lower addresses (default selected) that you later disable.
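As a sanity check, the counting behaviour described above can be modelled in a few lines of Python. The class and method names here are purely illustrative; on real boards this is a single shift-register IC, not software:

```python
class BootOverlay:
    """Model of a 5-stage shift register counting bus cycles after reset.

    Data-in is tied high, the clock is the CPU's /AS strobe, and the
    reset input follows the CPU's reset line. While the 5th output bit
    is still low, low addresses are steered to ROM.
    """

    def __init__(self):
        self.bits = [0] * 5

    def reset(self):
        self.bits = [0] * 5

    def clock_as(self):
        # Each /AS assertion shifts a '1' in from the data pin.
        self.bits = [1] + self.bits[:-1]

    def rom_overlaid(self):
        # ROM stays mapped at $0 until the 5th bit goes high.
        return self.bits[4] == 0


overlay = BootOverlay()
overlay.reset()
states = []
for _ in range(6):          # six bus cycles after reset
    overlay.clock_as()      # /AS asserts at the start of each cycle
    states.append(overlay.rom_overlaid())

# The four vector-fetch cycles (SSP high/low, PC high/low) see ROM;
# from the 5th cycle on, the overlay is gone.
print(states)  # → [True, True, True, True, False, False]
```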

You can also make a nifty DTACK generator by using another shift register. The
clock is attached to the system clock, the data-in pin to 5 V, and the
(synchronous, active when AS is deasserted) reset pin to AS. You then just
AND the output of your chip select generators with an output bit of this
shift register. (N)OR all the AND gate outputs together and wire that to DTACK
(active low for the DIP parts).

The first rising edge of the system clock after AS is asserted is the exact
clock phase that DTACK should be asserted for 0 wait states. Anything AND'd
with the first bit of the shift register is a 0 WS peripheral, the second bit
is 1 WS, the third bit is 2 WS, etc. The DTACK de-assert at the end of the bus
cycle also works correctly. AS deasserts in S7 (which is a falling edge of the
system clock). The next rising edge of the system clock is in S0, which is
system clock). The next rising edge of the system clock is in S0, which is
when DTACK is supposed to be deasserted. (since the reset is synchronous, the
clock edge will cause the shift register to reset and all the outputs will go
low)
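Modelled the same way in Python (function and signal names are again just illustrative), the wait-state count falls out of which shift-register tap you AND with the chip select:

```python
def dtack_timeline(wait_states, clocks):
    """Per rising clock edge while /AS is asserted, is /DTACK asserted?

    The shift register clocks on the system clock and is held reset
    while /AS is deasserted, so Q0 goes high on the first rising edge
    after /AS asserts. AND-ing a chip select with tap Q[n] yields an
    n-wait-state /DTACK.
    """
    q = [0] * 8  # outputs Q0..Q7, held at 0 until /AS asserts
    timeline = []
    for _ in range(clocks):
        q = [1] + q[:-1]  # data-in tied high, shift on each clock
        timeline.append(bool(q[wait_states]))
    return timeline


# A 0 WS device sees /DTACK on the first rising edge after /AS;
# a 2 WS device sees it two clocks later.
print(dtack_timeline(0, 4))  # → [True, True, True, True]
print(dtack_timeline(2, 4))  # → [False, False, True, True]
```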

My current side project is interfacing a 68010 (12.5 MHz) to an FPGA. Most of
the issues are the bus voltage-level translation since all the FPGAs are only
3.3v tolerant these days. I have machinations to build a paged MMU for it that
can address more than 16 MiB. I'd like to use a bigger m68k (68030 / 68040) in
the future, but I'm starting with what I'm more comfortable with. The "Texas
Cockroach" chips have fewer signals to worry about. Also, learning KiCad and
how to use PCB-Way assembly since I have trouble with small surface mount
parts (high-speed/density bus translators, currently designing with the
74LVC16T245).

It's a bit hard to stay motivated when someone with a bit of patience and
motivation (who has taken a decent computer architecture course) can build a
100+ MHz 32-bit RISC-V system in Verilog on a ~$100 FPGA devkit.

~~~
noone_youknow
You're right, that's how most boards handle having ROM low for the first four
cycles, and that's how mine does it. I use a 74LS174 hooked up just like you
say.

Early in the project I did a DTACK generator similar to what you describe, but
now it's handled by a GAL, which allows it to be zero-wait-state in certain
address spaces while supporting external DTACK for IO devices. This also
allows me to easily tri-state the signal so expansion boards can generate
their own DTACK.

Your side-project sounds interesting, I'd love to take a look :) is it online
anywhere?

~~~
Teknoman117
I've got to actually start writing things down about my personal projects :)

At this point I've still just been learning about the bus timing. I've got an
m68k hooked up to a 16 MHz Arduino Mega which is running the m68k at 4 MHz so
I can get 4 samples per clock of the m68k
([https://imgur.com/CxSlIHL](https://imgur.com/CxSlIHL)). I can actually drive
the CPU with the arduino since I can manage DTACK just quick enough with some
inline AVR assembly.

Hit a small snag though. Turns out the 68010's I ordered are fakes.

Ended up with the same parts you did (saw your post at
[https://hackaday.io/project/164305-roscom68k/log/175626-fake...](https://hackaday.io/project/164305-roscom68k/log/175626-fake-mc68010s)) (what I received:
[https://imgur.com/WCwRpHJ](https://imgur.com/WCwRpHJ)).

I was fairly naive to the fakes when I started ordering parts. I've basically
ended up with all fakes. Got some Harris 80C286-25s which are fakes (internet
seems to suggest they are rebadged 20 MHz parts, so at least they are the same
base part (static core 286)), but I don't have the capacity to test them quite
yet. I've also ended up with a 50 MHz 68030 that seems to actually be a 33 MHz
68030. Again, I can't test it yet so I'm _really_ hoping it's not an EC part
rebadged. I wanted the MMU.

~~~
noone_youknow
Awesome :) I've played around a bit interfacing Arduino to mine too, and found
I had to use AVR assembly to get the timing right.

Yep, those are exactly the same as some of the fakes I have here. Adeleparts
definitely rings a bell as the source of some of mine too.

Sadly it seems to be quite common to remark the "lesser" 030/040/060s to
suggest they are the fully-capable ones :( Fingers crossed yours isn't like
that!

~~~
Teknoman117
If you don't mind me asking, do you know what the fan-out capability of the
"big DIP" m68k's is? I've not noticed a hobby board built around one that has
bus transceivers or line drivers, but all of the boards built around things
like the 8088 and 8086 have them. I know those required latches to demultiplex
the bus, but the datasheets for the Intel parts recommend line drivers and bus
transceivers except for simple "minimum-mode" systems.

------
snvzz
The memory map hurts.

68000 doesn't have a vector base register like 010+. Instead, the vector base
is always 0x0, which here is in ROM, which is too much of a restriction.
Installing a 010 instead should allow for getting around this.
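For reference, the 68000 fetches its initial supervisor stack pointer and program counter as two big-endian longwords from addresses 0-7, with the remaining exception vectors following immediately after, which is why a ROM fixed at $0 also fixes every vector. A small Python sketch with made-up values:

```python
import struct

# Hypothetical first 8 bytes of a ROM image mapped at address 0:
# initial SSP (often the end of RAM), then the reset PC.
rom = struct.pack(">II", 0x00100000, 0x00080008)

# The CPU reads these two big-endian longwords at reset.
ssp, pc = struct.unpack(">II", rom[:8])
print(hex(ssp))  # → 0x100000
print(hex(pc))   # → 0x80008
```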

Also blatantly missing is a NMI switch.

Still, it always makes me happy when I see open SBC designs based on the 68k
family. Retrobrew[0] has a bunch of them, and they are less restrictive, or
use 030 instead of 000/010.

[0]:
[https://www.retrobrewcomputers.org/](https://www.retrobrewcomputers.org/)

~~~
msarnoff
Project author here, I agree. This was a toy project that I allocated a fixed
amount of time to, hence the cut corners. I'm working on another design that
uses programmable logic to handle multiple interrupt sources (yes, including
an NMI button) and allow either ROM or RAM to be banked into address 0.

~~~
snvzz
I'm looking forward to that :)

------
vhodges
Also worth checking out: Bill Shen (plasmo) has his Tiny68K (in a number of
variations including '020) system and one for the RC2014 eco system. The
original:

[https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:ti...](https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:tiny68k)

More recently:
[https://www.retrobrewcomputers.org/doku.php?id=builderpages:...](https://www.retrobrewcomputers.org/doku.php?id=builderpages:plasmo:cb030)

------
jjoonathan
> Due to the minimal address decoding circuitry, accessing certain memory
> regions will cause multiple devices to be selected. This should be avoided.

Two bus drivers enter, one bus driver leaves!

------
yjftsjthsd-h
Clarification on the "Forbidden (multiple devices selected)" bit - am I
reading correctly that the memory addressing is, essentially, a little buggy
as a side effect of optimizing for simplicity, and that results in mapping
multiple things to certain addresses?

Also, I somehow didn't realize that you could buy what appears to be a new
68000, and for $8.95
([https://www.jameco.com/shop/ProductDisplay?catalogId=10001&l...](https://www.jameco.com/shop/ProductDisplay?catalogId=10001&langId=-1&storeId=10001&productId=2288039)).
In my defense, last time I searched it wasn't obvious, mostly because nobody
labels it as a "68000"; it's a 68HC000P-12, which only in the details is
listed as a "6800" (sic) family - I assume that's just a typo. And to be fair,
I'm sure most people looking for a 68000-series part know how to find it; it's
like expecting people to know that an 80486 is an x86 usually called a 486.
Just a bit of friction for a newbie.

~~~
jacquesm
This was quite common. You only decode as much as you need, and if that
results in phantom appearances of devices or ROM, that's perfectly OK as long
as it doesn't interfere with the operation of the device. Memory map
aesthetics are important, but sometimes circuit simplicity is more important.

~~~
Taniwha
I think it's still 'buggy' if the result is a bus fight (that might actually
damage chips)

~~~
jacquesm
It shouldn't, normally, unless you designed the map wrong. That would be a real
fault. In some systems it was possible to use dynamic map changes to position
ROM over RAM and then to do weird things like getting the CPU to write to ROM,
but I've never heard of that actually damaging anything, though speculation
that in a tight loop it should be possible was rampant. We sure tried ;)

But when decoding banks of addresses you'd typically decode just one chip at a
time, but possibly in multiple locations, otherwise vacant.

~~~
TomVDB
They are probably using A[19] as chip select for one device, A[18] for a
different one, etc.

So if you read from 0x000c0000, you get a conflict.

I wouldn't call it a bug if it's done deliberately to save some gates. :-)
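That decoding scheme can be sketched in Python to show exactly where the conflict comes from (the A19/ROM and A18/RAM assignment is hypothetical, and selects are modelled active-high for simplicity):

```python
def selected_devices(addr):
    """One address line per chip select: A19 picks the ROM, A18 the RAM."""
    devices = []
    if addr & (1 << 19):
        devices.append("ROM")
    if addr & (1 << 18):
        devices.append("RAM")
    return devices


print(selected_devices(0x00080000))  # → ['ROM']
print(selected_devices(0x00040000))  # → ['RAM']
# With both A19 and A18 set, two chips drive the bus at once:
print(selected_devices(0x000C0000))  # → ['ROM', 'RAM']
```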

~~~
jacquesm
Heh, clever. But you'd be even smarter to hook A19 to ~CS and A18 to ~EN to
get the same effect without the conflict.

~~~
compiler-guy
That would require two additional NOT gates, which, in turn, would require an
additional chip, which given the layout of the board, would require additional
board space.

But what would the benefit be, other than a somewhat cleaner address space?
You don't get to actually use the space that isn't mapped in this scenario--
RAM, or whatever, doesn't magically appear there. You also end up having to
worry about additional gate delay.

So, from the software's perspective, you go from, "Don't use these addresses,
unpredictable things will happen." to "Don't use these addresses, nothing will
happen."

Either way, code that uses these addresses is buggy. So you are paying extra
hardware for very minimal benefit.

~~~
kiwidrew
_> So, from the software's perspective, you go from, "Don't use these
addresses, unpredictable things will happen." to "Don't use these addresses,
nothing will happen."_

If the address decoder allows certain address ranges to select _more than one
device_ (which appears to be the case), the problem is far more serious: don't
use those addresses [even for reads!] because it will literally fry the output
drivers of the conflicting chips.

~~~
marcan_42
Bus conflict damage is largely a myth, at least for modern ICs. Output drivers
are a lot more robust than you think. More likely the extra current will cause
a reset or glitch if the power supply circuitry/decoupling can't keep up with
the current spike.

~~~
kiwidrew
Really? So a CMOS driver that's driving a high logic level which is connected
to a CMOS driver that's driving a low logic level won't get destroyed by the
resultant short circuit?

As far as I understand (and I admit that I might be wrong here) a typical CMOS
driver outputs a high logic level by connecting the output pin to Vcc (via a
low-resistance FET) and it outputs a low logic level by connecting the output
pin to GND (also via a low-resistance FET). And the circuit traces on the bus
are also fairly low resistance. Wouldn't this short circuit result in
dangerously high currents flowing?

Certainly high enough (>100mA) to violate the device's "absolute maximum
ratings", the ones that you aren't supposed to exceed even momentarily?

I'd be very interested to hear more about the robustness of output drivers and
the amount of abuse they can tolerate.

~~~
marcan_42
You're correct about how the outputs work, but the practical reality is that
currents end up limited enough by the FET's on resistance that nothing gets
damaged. Yes, it's outside the spec, but exceeding the AMRs doesn't mean your
chip dies. I had a 5V microcontroller survive being put across 12V once. And I
had a Threadripper motherboard die by shorting out its CPU Vcore FET, which
would send the PSU's 12V rail into the CPU, and the CPU survived (PSU shut
down before any damage was done).

If shorting a pin to the opposite rail destroyed your IC people would be
destroying Arduinos left and right with trivial mistakes while experimenting,
and they wouldn't be able to get away with having no I/O protection :-). You
usually don't get >100mA out of a single IO line short - maybe 50mA. Having a
bunch of paralleled bus contention can cause more damage (not to the drivers,
but to power routing and other shared resources), but at that point you should
be hitting PSU current protection limits (which are more important for overall
design robustness).

Console modchips of olde (PS1/2/GameCube/etc) worked by overdriving bus lines
with a stronger driver (often multiple lines ganged together). No consoles
were hurt by this.

I did part of the design and board layout for Glasgow revC, an FPGA-based USB
interface board, and I stress tested our IO level shifter chips for short
circuit robustness. Leaving the outputs shorted hard to the opposite rail
overnight did no apparent harm (bypassing the protection resistors), other
than getting the chip nice and toasty for the duration of the test.

Edit: just remembered a personal anecdote. I only recall ever killing an
output driver with a short, on a typical microcontroller, once in my life
(could've been a fluke). And I've done _many_ stupid experiments. Shorting
outputs briefly for experimentation or unorthodox workarounds is solidly in
the "no big deal" category in my mind, and it's practically never been a
problem. E.g. "I don't know which side is TX and RX in this UART, so I'll just
try both" "This device is bricked so let me short out the flash to force it to
fall back to bootloader mode", etc.

~~~
kiwidrew
_> I stress tested our IO level shifter chips for short circuit robustness.
Leaving the outputs shorted hard to the opposite rail overnight did no
apparent harm (bypassing the protection resistors), other than getting the
chip nice and toasty for the duration of the test._

That's impressive!

Thanks for the reply, sounds like I can be a bit less cautious without fear of
blowing things up.

------
ChuckMcM
Just needs a frame buffer and you almost have a Sun-1.

~~~
msla
I was just about to say that. More to the point, though, you'd need an FPGA
for the custom MMU, which might be a sticking point these days, although there
is a Sun-2 emulator:

[https://news.ycombinator.com/item?id=22350986](https://news.ycombinator.com/item?id=22350986)

[https://github.com/lisper/emulator-sun-2](https://github.com/lisper/emulator-sun-2)

------
rvense
Another new 68k board that came out recently is Rosco:

[https://rosco-m68k.com/](https://rosco-m68k.com/)

It's for sale on Tindie and the person who made it seems to be focusing a lot
on making a toolchain available etc. If you want to actually programme the 68k
it looks like a good bet.

------
mytailorisrich
Great. At 12 MHz and 1 MB of RAM, that's still above an Atari ST or Amiga 500.

There used to be plenty of such computers based on 8-bit to 16-bit discrete
CPUs (Z80, 680x, 8255, etc.), with schematics in electronics magazines.

This was great because everything was simple enough that you could fully
understand and use 100% of the hardware yourself and code 100% of the software
yourself. And all components were standard discrete ones (like this project),
not SMCs, so they were also very easy to handle.

You did not even need a PCB. I once built such a simple computer based on a
Z80 using good old wire wrap [1] on a prototyping board, which was quite a
common thing to do, but quite a torture to be honest, and I would not want to
try it with a 68000...

[1]
[https://en.wikipedia.org/wiki/Wire_wrap](https://en.wikipedia.org/wiki/Wire_wrap)

~~~
jacquesm
Having worked on a fairly large 68K board that was wire wrapped I would not
recommend it.

~~~
dboreham
I built one with graphics, floppy and SCSI. Wasn't too bad; mind you, I was 19
so had infinite spare time, and I worked the summer for a defense contractor
so had unlimited wire wrap wire and gold-plated sockets.

~~~
jacquesm
I worked on a prototype board that was getting a little older (I did not wrap
the original). Apparently the tension of the wrapping tool was off, or
something else did not go according to plan during the original build, because
quite a few of the connections were flaky and had to be redone. This can be
pretty tricky when there are sometimes 6 wires running to one pin and the
whole thing is a rat's nest of identically colored wires.

I did get it all sorted out and working again but that wasn't my idea of
having fun. Fortunately I got into software from hardware or I would have
likely given up. You know you have a nasty problem when tapping the board can
make it crash. Shades of Gollum and Coke.

------
genpfault
So what are the options for proper digital video (DisplayPort/HDMI) generation
on retro systems like this?

Other than the typical (and ugly) "duct-tape a Raspberry Pi to it" and/or
"software composite video to a converter dongle" approaches.

~~~
snvzz
The OSSC[0] is my go-to. OSHW, it takes component input (RGB, VGA, YCrCb) and
audio, and outputs HDMI. Its line-by-line processing means virtually no
latency is added.

[0]: [https://videogameperfection.com/products/open-source-scan-co...](https://videogameperfection.com/products/open-source-scan-converter/)

------
filereaper
We were taught on the newer Coldfire processors which use a modified m68k
instruction set.

I never understood the reason for the split address and data registers, it
also seemed like the address registers had a few bits intended for
segmentation?

Anyone know the rationale behind the split address vs. data register design
and architecture? Most other instruction sets use the same registers for both
data and addressing, so what advantage do specialized address registers give?

~~~
dboreham
Two ALUs?

~~~
jejones3141
Yup. The Signetics 68070 that was used in CD-i was like a 68000, but slower
because it didn't have that second ALU.

------
mark-r
I wonder why they used a 68000? The 68020 was a significant upgrade, and other
than not being a DIP package it made a great SBC.

~~~
tyingq
He did mention through-hole being desirable. A DIP 16 MHz 68010 could be an
easy upgrade. And you would get virtual memory addressing.

~~~
icedchai
68010's still need an MMU for virtual memory. So do 68020's, now that I think
of it.

~~~
tyingq
I see some evidence that an MMU is optional:
[https://news.ycombinator.com/item?id=7684824](https://news.ycombinator.com/item?id=7684824)

~~~
noone_youknow
MMU is optional, unless you want virtual memory. The 68010 fixes a bug in the
68000 whereby it didn’t stack enough information to recover from an address or
bus error, but you still need an MMU if you want to actually do virtual memory
in any performant fashion.

------
mrlonglong
I'm just wondering if it wouldn't be easier to do without the ROM and map the
16MB to RAM, and then carve out the last 64KB at the 16th megabyte for IO
space for devices, using an AVR as a bootloader to download code into the RAM?
That would make it easier to manage the memory and interrupt vectors, yes?

------
jacquesm
If you want the same instruction set and an even smaller board you could use
the 68008:
[https://en.wikipedia.org/wiki/Motorola_68008](https://en.wikipedia.org/wiki/Motorola_68008)
They're going to be hard to find though.

------
barochoc
It’s been too long since I last played around like this. This put a bee in
my bonnet and has me wanting to try something similar. It’ll be a disaster no
doubt but I’ll learn from my mistakes. Thanks for posting.

------
tzs
68K single board computers are fun. Here's the one I did in college in 1982
[1][2]. 6 MHz 68K (I think). 4K 16-bit words of EEPROM. 1K 16-bit words of
static RAM. Two RS-232 ports.

The way the class worked was that they would supply the processor, and I think
they may have also supplied the RAM, but everything else you were on your own
for. Anything you see in those photos that makes you wonder "Why the heck did
he use that!?" probably has the answer "It was cheap".

That's why it is on an S-100 bus prototyping card--I found that at a surplus
store. That's why the reset button says "CLR"--the Caltech EE stockroom had
for some inexplicable reason a box full of cheap buttons from some old
calculator. That's why the power connector is weird--found that and its mate
at the same surplus store where I found a power supply.

I put a nice feature in the RS-232 connections. Note that the cable between
the RS-232 connectors and the board plugs into an ordinary DIP socket, which
is _not_ a keyed connector. The way it is wired up is that it works both ways,
but one of the ways is like using a null modem.

There was one very amusing incident when I was writing software for it. The
68K cross assembler ran on Caltech's IBM 370. There was some HP workstation in
the lab that you could enter your code on and it could submit it to the 370
for assembly and retrieve the output.

The HP workstation was a few years old, and no one really knew much about it.
It ran some weird OS and nobody had bothered to learn much about it--they just
all knew enough to edit, submit stuff to the 370, and do simple file
manipulation.

The thing was full of several years accumulated projects from students,
research code from professors, and who knows what else, so space was tight and
no one really knew what was safe to delete.

One day I'm using it to submit my code to the assembler, and I notice that in
a lot of the file commands there was some letter or digit (I forget which)
that you had to include but didn't seem to have an obvious purpose.

So I did the obvious thing--I tried one of the commands but with that letter
or digit incremented.

It turned out that was the drive specifier, and I was now using the second
drive in the workstation--a drive that nobody else knew it had and was
completely empty. They had been struggling with lack of free space on this
thing for years, and all that time there was a second drive in it just sitting
there empty!

[1] [https://i.imgur.com/Ts9wcfW.png](https://i.imgur.com/Ts9wcfW.png)

[2] [https://i.imgur.com/3D4rvdC.jpg](https://i.imgur.com/3D4rvdC.jpg)

------
codezero
It's funny how much this looks like the internals to my original Palm Pilot
which I disassembled not too long ago :)

------
rasz
No PDF diagram is poor form :(

------
rcar
As someone who has done a fair bit of tinkering with stuff like Arduinos,
NodeMCUs, ESP32s, etc. along with Pis and similar, what sort of itch does this
scratch for people that those wouldn't? From looking at the Rosco version, it
doesn't seem like it's a cost savings or anything, and the hardware is
certainly much weaker than modern options.

~~~
rcar
To be clear, I'm genuinely curious - this isn't intended as an insult to any
of the folks who have worked on these projects.

~~~
kiwidrew
Most of the "modern" embedded stuff comes as a complete system-on-chip these
days, i.e. all the peripherals and memory are integrated into the CPU on a
take-it-or-leave-it basis. It's also rare for chips to offer an external
parallel bus interface (the classic A0-A15 and D0-D7 pins), which means that
any extra peripheral devices need to hang off an I2C/SPI port where the CPU
core can't access them directly.

And the available chips (AVR, ESP32, LPC2xxx, etc.) are all proprietary
designs with a single manufacturer. Even the ARM chips, which generally use
some variant of a Cortex-M core, have wildly different peripherals. So
migrating between families is difficult/impossible.

In contrast, "classic" chips like 8051, 6502, x86 and 680xx all have external
parallel bus interfaces and are (or were) produced by multiple manufacturers
(often as part of "second source" agreements).

So when building a system using these chips, the designer has a large degree
of flexibility and freedom to design the system architecture. Whereas building
something using modern embedded chips is mostly an exercise in parametric
search trying to find an existing chip which offers exactly the right set of
peripherals for the intended design. It reduces the system designer to a mere
consumer of off-the-shelf SoCs instead of being a true builder/architect.

~~~
rcar
That's helpful; thanks. I guess coming from much more of a software
background, having the ability to write C code (which feels like an acceptable
veneer over the hardware to me and works well across chips) and having
programmatic access to the pins feels pretty empowering and all-encompassing.
However, it does make sense to me that someone coming from a hardware-first
view of the world would feel those barriers to direct hardware access much
more acutely and recognize the limitations that I don't.

I appreciate the thoughtful and detailed response.

