
Linux Kernel Is Still Seeing Driver Work for the Macintosh II
https://www.phoronix.com/scan.php?page=news_item&px=2020-Linux-Macintosh-II-ADB
======
cbmuser
The m68k port is well maintained and is one of the oldest Linux ports of all.

There are multiple active kernel maintainers and the port regularly sees
improvements and new drivers, such as for the Amiga's X-Surf 100 Ethernet
card or the ICY board (an I2C board for the Amiga).

There is also ongoing work to support LLVM [1] and consequently Rust [2] on
Motorola 68000.

Disclaimer: I'm Debian's primary maintainer of the m68k port and I'm
supporting a lot of these efforts directly or indirectly.

> [1] [https://github.com/M680x0/M680x0-mono-repo](https://github.com/M680x0/M680x0-mono-repo)

> [2] [https://github.com/glaubitz/rust/tree/m68k-linux](https://github.com/glaubitz/rust/tree/m68k-linux)

~~~
DCKing
I have a hardware question. Seeing that the heyday of the Motorola 68000
series was in the early nineties, aren't modern Debian's storage and memory
requirements getting to be very prohibitive, even for a minimal headless
install?

I'm just curious what kind of hardware setups are used for running and
developing Debian on these systems. What are you running on/developing with?
Is a desktop computer with qemu the fastest practical 68k computer you can
have?

~~~
ido
wasn't the heyday of the Motorola 68k the 80s? by the time the 486/040 came
about, it was already starting to show that Motorola was not able to match
Intel's MHz push.

~~~
mschaef
> wasn't the heyday of the Motorola 68k the 80s?

It depends on the market. For PC/workstation class machines, the 80's into the
early 90's were the 68k's moment in the sun. But I distinctly remember doing
new embedded development work on 68K in the late 90's, and the Palm Pilot
(1997) had a 68K core as well. So even though they had to retreat from one
market, they stayed active in the embedded space. (Of course, so did Intel...
that same project with the 68K also ran on an 80188EB, 386SX, and an AMD Elan
SC400.)

> by the time the 486/040 came about it was already starting to show that
> motorola was not able to match intel's mhz push.

Intel had a few minor growing pains of its own, too. The 486/25 and /33 chips
were easily accepted into the market, but the /50 was not. There were problems
at the time getting a 50MHz motherboard to work correctly without causing too
much radio interference, so the 50MHz part wound up being limited to the high
end of the market. The step beyond 33 for the 'mainstream' PC market was the
DX2/66, which left the bus/motherboard at 33MHz and used the on-chip cache to
run the CPU itself at 66. So a faster CPU than a DX/50, but less bus
bandwidth, which turned out to be a reasonable tradeoff. (Particularly given
that I/O was often very slow anyway, being forced through a 16-bit 8MHz AT
bus.)

The Pentium, of course, brought the >=60MHz motherboard back into the PC
mainstream, and there was also a 486/DX4 part around the Pentium timeframe.
Contrary to its name, the DX4 was a clock-tripled CPU that would run 33/100
(with an option for a doubled 50/100 as well). Clock multipliers came to the
Pentium with the second revision... the original P5/60 and /66 ran 1:1, but
the subsequent P5/90 and /100 ran 1.5:1.

~~~
tyingq
VME bus 680x0 servers were in pretty wide use in the appliance market through
the 90's as well. Things like firewalls.

~~~
mschaef
> wide use in the appliance market through the 90's as well. Things like
> firewalls.

Thank you for clarifying that bit... I was (very briefly) trying to
imagine how a 90's era dishwasher or something might use a 68K CPU in a VME
cage.

~~~
tyingq
Funny enough, the ColdFire series of 680x0 derivatives was marketed to "smart
home appliance" manufacturers.

~~~
mschaef
Seems like a natural fit, if maybe even a bit overpowered.

That said, the embedded 68K project I was on was for an industrial process
control device. (We were making hardware that would fit in a valve or sensor
in the field and connect it to a network.) Because of various safety
requirements, total power dissipation was very, very low. (draw on the order
of mA, IIRC, and fairly low voltage.)

------
Someone
_“Apple Desktop Bus is the proprietary interface used by Apple in the late
80's and 90's for connecting devices like keyboards and mice to the system
rather than a PS/2 interface.”_

The implication (between the lines) that Apple went with a proprietary
solution while an open one was available bends history a bit. Judging by
release date, ADB predates PS/2 by half a year or so.

Also, was the PS/2 keyboard/mouse interface any less proprietary than Apple
Desktop Bus at time of initial release?

~~~
duskwuff
> Also, was the PS/2 keyboard/mouse interface any less proprietary than Apple
> Desktop Bus at time of initial release?

Not really. They were both proprietary. ADB was Apple's solution, and PS/2 was
IBM's solution.

ADB was rather nifty from a technical perspective -- it was a multidrop bus
which could support multiple devices, as opposed to PS/2 which could only
support a single device per port. It was common for mice to be chained off of
keyboards, for example, and there were a number of third-party peripherals
which could be connected to ADB as well, like graphics tablets.

~~~
wolfgang42
ADB was basically USB 0.1: 5-volt multi-device serial bus for desktop
peripherals, polled by the host, with dynamic addressing and standard device
classes. The only thing it was missing was hot-plugging. (The protocol could
handle it, but there were mechanical/electrical issues that made it hazardous
to try.) They even had a connector that made it hard to tell if you were
trying to plug it in upside down!
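
To give a feel for how simple the bus was, here's a very rough C sketch of a
host-side poll cycle. The command-byte layout and default addresses are from
memory of the protocol, and the bus helpers (adb_send_command, adb_receive)
are hypothetical placeholders rather than any real driver API:

    /*
     * Illustrative only: command byte is AAAA CCRR
     * (address, command, register). "Talk register 0" is the basic
     * "give me your data" poll; keyboards default to address 2,
     * mice to address 3.
     */
    #include <stdint.h>

    #define ADB_TALK 0x3    /* command code: device -> host */
    #define ADB_REG0 0x0    /* register 0 = primary data    */

    static uint8_t adb_cmd(uint8_t addr, uint8_t cmd, uint8_t reg)
    {
        return (uint8_t)((addr << 4) | (cmd << 2) | reg);
    }

    /* hypothetical bus primitives provided by the host hardware */
    int adb_send_command(uint8_t cmd);
    int adb_receive(uint8_t *buf, int maxlen);   /* returns 0 if no data */

    /* Poll every address once. Real hosts mostly keep re-polling the
     * last device that answered, since usually only one device has
     * data at a time. */
    static void adb_poll_all(void (*deliver)(uint8_t, uint8_t *, int))
    {
        uint8_t buf[8];

        for (uint8_t addr = 1; addr < 16; addr++) {
            adb_send_command(adb_cmd(addr, ADB_TALK, ADB_REG0));
            int n = adb_receive(buf, sizeof(buf));
            if (n > 0)
                deliver(addr, buf, n);   /* keyboard=2, mouse=3, ... */
        }
    }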

~~~
adrianmonk
With the connector itself, it was hard to tell, but the plastic part you held
in your hand (molded plastic plus strain relief) wasn't symmetrical.

You could thus develop a habit of putting, say, the flat part next to your
thumb, and then you'd know the alignment of the cable. So if you knew the
alignment of the port, you could do it reliably on the first attempt.

USB makes a slightly worse mistake, which is that the plastic part you hold in
your hand feels the same way if you turn it around 180 degrees. So you have to
point it at your face and look at it if you want to know which way you're
holding it.

~~~
twic
I think all of the USB-A jacks I have are made out of a sheet of metal folded
over on itself, with the join on the bottom. You can tell which way up a jack
is by looking for the join. See this picture of some random bitcoin gizmo:

[https://www.ccn.com/wp-content/uploads/2014/11/usb-top-bottom-together.png](https://www.ccn.com/wp-content/uploads/2014/11/usb-top-bottom-together.png)

It's certainly not as easy as with ADB, but it only takes a quick glance to
orient a USB jack. I can't be the only person to have noticed this.

~~~
moftz
The part that screws me up is whenever the USB port is upside down, meaning
the plug goes in with the retention holes facing down.

------
stx
Back in the early 2000s I did not have much money but was able to get my
hands on all sorts of slightly exotic and out-of-date computers for free.
People just gave them away. I really enjoyed running Linux on old SGI or Apple
computers. Over time, however, the novelty wore off and I realized that even
old x86 hardware was cheaper and faster. Any people running Linux on some rare
old hardware care to share why?

It can be a great learning experience though.

~~~
cbmuser
> Any people running Linux on some rare old hardware care to share why?

It's called retro-computing and it's simply a hobby. You could also ask why
people are fixing, maintaining and driving around in cars that are decades
old.

It's also a very good way to learn about kernel development and maintenance.
On x86, there are enough people looking at and working on the code, so you
will have a hard time finding things to improve.

The m68k port, on the other hand, has many places where you can help improve
the code and therefore get your feet wet with kernel development.

~~~
Haemm0r
Maybe some companies are interested in this as well. As I know from work, the
PLC manufacturer Pilz [1], for example, uses 68k-type CPUs in some of its
products.

[1] [https://www.pilz.com/](https://www.pilz.com/)

------
Polylactic_acid
Does the kernel ever remove support for devices or does this stuff just bloat
up the codebase for eternity?

~~~
ploxiln
Yes, Linux removes support for stuff over time. But most drivers for old
simple peripherals are considered not a problem - as long as they are clean
optional components of their respective subsystems, they don't bother anyone.

One example of removed support is the old Intel 386 processors, apparently due
to complications with multi-core support:
[http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html](http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html)
(that left the Intel 486 and later still supported)

A bit more recently, a bunch of obscure architectures were removed, mostly
because the last compilers to support them are too old or buggy:
[https://lwn.net/Articles/748074/](https://lwn.net/Articles/748074/) (Linux is
in general really great about supporting a range of gcc/binutils versions
going back about 7 years ... compare to some other popular projects these days
which require a Go or Rust toolchain from just a few months ago.)

And occasionally a driver is moved to "staging" to see if anyone complains,
before being removed, because it is being a bother and no one seems to be
maintaining it:
[https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ea2e813e8cc3492c951b9895724fd47187e04a6f)

This is related to why "upstreaming" a driver can be very different from just
releasing the source. Making a driver acceptable for merging into the mainline
kernel means making it clean enough that maintenance costs will be extremely
low for the next 5 to 10 years, even as the Linux kernel sustains a
surprisingly large amount of change in every single 3-month release cycle.

~~~
stinos
_they don't bother anyone_

Well, it's been like 2 decades since I last compiled a kernel myself, but back
then I was definitely bothered by the pages of obsolete hardware that could be
selected when running _make menuconfig_. Not sure if/how people configure
kernels these days, but I assume it didn't get better?

~~~
cbmuser
What's wrong with the kernel supporting a wide variety of hardware?

As long as the stuff is actively maintained, I don't see a problem as this
means it's actually being used.

Linux has always had very wide hardware support, and if that bothers you, you
can either use the make target "localmodconfig" or just not build your kernels
yourself.

~~~
stinos
_What 's wrong with the kernel supporting a wide variety of hardware?_

I think you misunderstood my reply. I'm not saying the kernel should drop
support, I'm merely replying to the 'doesn't bother anyone' with a practical
example of why it might bother someone on a particular level.

~~~
spoopyskelly
It bothers you that the kernel supports hardware you don't have?

~~~
stinos
Again, no, that is not what bothers me. I'm not sure if I was really that
unclear - twice - or if it's just because it's Monday :)

So here it goes, in full (similar to sibling poster cesarb's story): once you
could, spending some time, go over pretty much every possible option and
select what you needed. Which mattered back then (at least to me), because
compilation already took half a day on my machine, so the fewer things to
compile the better. Over the years came more and more support for different
hardware. That fact by itself obviously did not bother me, also because it
meant I could finally hook up my _insert not too common device here_. What
started to bother me was just that it took more time to go over all the
options. Wishful thinking, because it would be hard to implement, but I would
have liked a flag like "ok, you can just skip everything for hardware which
only existed before 1995 because I don't have that". So, reflecting on that
period, I'm simply wondering what it must be like today to go through all the
options. I.e. I wonder if others might be bothered by the sheer number of
options out there.

------
rjsw
I had been meaning to see if anyone was interested in some fixes to the IOP
ADB driver for the IIfx and some Quadra machines.

Apple had provided some internal documentation for the IOP controller, but it
is incorrect. The Linux driver for it just polls the ADB devices for data
directly, instead of letting the IOP process the bus and only interrupt the
CPU when there is a complete ADB message. I have a working IOP driver for
NetBSD/mac68k that the Linux people might like to copy.
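
For illustration, here's a rough C sketch of the two driver shapes being
contrasted. Every function name in it (adb_talk, handle_adb_packet,
iop_read_adb_message) is a made-up placeholder, not the real
drivers/macintosh API; it's only meant to show the structural difference:

    #include <linux/interrupt.h>   /* irqreturn_t, IRQ_HANDLED */

    /* Hypothetical helpers, declared only so the sketch is
     * self-contained; none of these are real kernel functions. */
    int adb_talk(int addr, int reg, unsigned char *buf, int maxlen);
    void handle_adb_packet(int addr, unsigned char *buf, int len);
    int iop_read_adb_message(int *addr, unsigned char *buf, int maxlen);

    /* Style 1: the CPU polls each ADB device itself; the IOP is used
     * as little more than a dumb pipe to the bus. */
    static void adb_direct_poll_once(void)
    {
        unsigned char buf[8];
        int len;

        len = adb_talk(2 /* keyboard */, 0 /* register 0 */,
                       buf, sizeof(buf));
        if (len > 0)
            handle_adb_packet(2, buf, len);
        /* ...repeat for the mouse and other active addresses... */
    }

    /* Style 2: the IOP autopolls the bus itself and only raises an
     * interrupt once it has assembled a complete ADB message. */
    static irqreturn_t iop_adb_interrupt(int irq, void *dev_id)
    {
        unsigned char buf[8];
        int addr, len;

        len = iop_read_adb_message(&addr, buf, sizeof(buf));
        if (len > 0)
            handle_adb_packet(addr, buf, len);
        return IRQ_HANDLED;
    }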

------
Hydraulix989
Why is Macintosh II still being worked on but 386 support got killed? I
genuinely want to know. I suppose it might be because 386 requires more
gymnastics around systems programming for memory management, etc.

~~~
skissane
My speculations:

(1) there are people who are so keen on keeping 68K/Macintosh/etc support
alive that they keep on working on it, while maybe no one displayed the same
keenness when it came to 386

(2) 386 complicates code paths for 32-bit x86, which people still care about
(even if only a little) for real world production use. By contrast, 68K stuff
sits in its own directory tree and has little impact on the rest of the kernel

~~~
zozbot234
> (2) 386 complicates code paths for 32-bit x86, which people still care about
> (even if only a little) for real world production use.

I don't know how much of a concern this is IRL, given that one could hide
essentially all 386-specific workarounds behind #ifdef's.

~~~
innocenat
386-specific workaround code still needs to be updated whenever the relevant
part of the code is updated.

If you look at the patch that removed 386 support
([http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html](http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html)),
you can see that it was already under #ifdef. There are quite a number of
atomic instructions not available on the 386, all of which require
workarounds.
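
As a concrete, purely illustrative example of such a workaround: CMPXCHG only
exists on the 486 and later, so anything built on it needed a 386 fallback
roughly along these lines. This is a sketch of the idea, not the kernel's
actual code; it assumes kernel context (local_irq_save/restore) and the old
CONFIG_M386 build option:

    #include <linux/irqflags.h>   /* local_irq_save/restore */

    static inline unsigned long
    cmpxchg_u32_sketch(volatile unsigned long *ptr,
                       unsigned long old, unsigned long new)
    {
        unsigned long prev;
    #ifdef CONFIG_M386
        /* 386: no CMPXCHG instruction -- emulate it with interrupts
         * off, which is only safe because 386 systems were UP-only. */
        unsigned long flags;

        local_irq_save(flags);
        prev = *ptr;
        if (prev == old)
            *ptr = new;
        local_irq_restore(flags);
    #else
        /* 486 and later: the real instruction */
        asm volatile("lock; cmpxchgl %2, %1"
                     : "=a" (prev), "+m" (*ptr)
                     : "r" (new), "0" (old)
                     : "memory");
    #endif
        return prev;
    }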

~~~
zozbot234
Looks like that code could have been refactored to be more self-contained.
Though I suppose removing it altogether is a fine way to go about it; it can
always be reintroduced in the future if folks commit to maintaining the
support going forward.

------
whoopdedo
I was amusing myself by using obsolete tech with a modern computer this
weekend when I needed a video card for a server. Not wanting to waste power on
an unnecessary GPU, I looked around for the least capable card I had and found
one from 1997 [1]. It still works fine with a processor made in 2017, and
Linux had no complaints using the generic VESA X.org driver. There is still a
native Tseng driver, though the particular live USB I had would segfault with
it. Actually, it looks like the last two bugs filed against it were marked
WONTFIX in 2011, so I'd say the driver is unmaintained and Xorg should
consider removing it.

(I wonder if the new ADB driver would be useful with the USB-to-ADB adapter I
have. Although the only thing I could plug into it is a trackball.)

[1]
[http://www.vgamuseum.info/index.php/component/k2/item/465-tseng-et6000](http://www.vgamuseum.info/index.php/component/k2/item/465-tseng-et6000)

~~~
rvense
USB->ADB adapters are more like ADB->USB translators than ADB on a USB bus,
though. They'll only support some peripherals, and they'll show up as normal
HID devices. You won't be able to connect your PageMaker dongle,
unfortunately.

------
duxup
So is someone running the kernel on their Macintosh II... to do a thing
(obviously as the article notes it was tested on a Centris 650) or is this
just an awesome thing to do because it could be done?

------
eyesee
Here's a fun story about the creation of ADB by Woz:
[https://eggfreckles.net/2013/12/27/adb-the-epitome-of-early-apple/](https://eggfreckles.net/2013/12/27/adb-the-epitome-of-early-apple/)

------
cletus
I have a question: doesn't all this extra (basically useless) driver code just
increase the security risk for Linux?

Like, aren't you just increasing the attack surface, since all these drivers
exist in kernel space?

So there are three levels, as I understand it, at which driver code like this
could be included (or not) in the kernel. This is not my area of expertise, so
please excuse me if I'm mistaken here, but you have:

1. Not included in the kernel and not included as a module. This is obviously
excluded at source and is the safest;

2. Available as a module but not loaded by default; and

3. Included in the kernel. I'm not 100% sure if there even is a distinction
between this and (2) anymore. I remember at one point you could include code
as part of the configure process during compilation and this was distinct from
modules (at least for a time).

Either way, there doesn't seem to be a difference philosophically between (2)
and (3), right? A vulnerable module can be loaded from userspace, generally
speaking.

Also, I'm actually curious how often drivers are the source of security
vulnerabilities? Is this a common or rare vector?

~~~
duskwuff
The driver in question can't even be compiled for x86 systems, either as a
module or otherwise. Having it in the source tree is completely harmless.

~~~
cletus
That misses the point entirely. It’s not about this driver specifically but
concerns the thousands of other “dead” drivers and similar that could be
loaded.

~~~
wtallis
Most people use their distro's default kernel binaries, which tend to include
only modules that are at least somewhat likely to be used by that distro's
target audience. Nobody ships kernels with _all_ modules compiled.

