There are multiple active kernel maintainers, and the port regularly sees improvements and new drivers, such as those for the X-Surf 100 Ethernet card and the ICY board (an I2C board), both for the Amiga.
There is also ongoing work to support LLVM, and consequently Rust, on the Motorola 68000.
Disclaimer: I'm Debian's primary maintainer of the m68k port and I'm supporting a lot of these efforts directly or indirectly.
>  https://github.com/M680x0/M680x0-mono-repo
>  https://github.com/glaubitz/rust/tree/m68k-linux
Edit: OP explains here: https://news.ycombinator.com/item?id=23675025
I personally became the m68k maintainer in Debian because I was asked whether I would like to work on the port shortly after I became a Debian Developer and I agreed.
I learned so much about kernel development, and about software development in general, that I stayed with it. It also helped me land a job with one of the big Linux companies.
I'm just curious what kind of hardware setups are used for running and developing Debian on these systems. What are you running on/developing with? Is a desktop computer with qemu the fastest practical 68k computer you can have?
There are new hardware accelerators being developed, such as the Vampire, which will provide a faster basis for running Linux on Amiga/m68k:
Currently, I'm using mainly QEMU for development. QEMU's m68k emulation has received tons of improvements thanks to the heavy use of the Debian/m68k project.
In fact, all the package builders for m68k are currently QEMU-based, and building the whole Debian archive on QEMU has proven to be excellent quality testing for QEMU ;-).
That's very impressive... I'd known people had been doing Amiga accelerators for a while, but not that sophisticated.
Also very cool that they use a custom Amiga style mouse pointer on the website. (Viewed the site on an OSX machine and I'll admit to the double take I did...)
(I understand wanting to run the original Amiga OS on faster gear.)
It depends on the market. For PC/workstation class machines, the 80's into the early 90's were 68k's moment in the sun. But I distinctly remember doing new embedded development work on 68K in the late 90's, and the Palm Pilot (1997) had a 68K core also. So even though they had to retreat from one market, they stayed active in the embedded space. (Of course, so did Intel... that same project with the 68K also ran on an 80188EB, 386SX, and an AMD Elan SC400.)
> by the time the 486/040 came about it was already starting to show that motorola was not able to match intel's mhz push.
Intel had a few more minor growing pains too. The 486/25 and /33 chips were easily accepted into the market, but the /50 was not. There were problems at the time getting a 50 MHz motherboard to work correctly without causing too much radio interference. So the 50 MHz part wound up being limited to the high end of the market. The step beyond 33 for the 'mainstream' PC market was the DX2/66, which left the bus/motherboard at 33 MHz and used the on-chip cache to run the CPU itself at 66. So faster CPU than a DX/50, but less bus bandwidth, which turned out to be a reasonable tradeoff. (Particularly given that I/O was often very slow anyway, being forced through a 16-bit 8 MHz AT bus.)
The Pentium, of course, brought the >=60 MHz motherboard back into the PC mainstream, and there was also a 486/DX4 part around the Pentium timeframe. Contrary to its name, the DX4 was a clock-tripled CPU that would run 33/100 (with an option for a doubled 50/100 as well). Clock multipliers came to the Pentium with the second revision... the original P5/60 and /66 ran 1:1, but the subsequent P5/90 and /100 ran 1.5:1.
Thank you for clarifying with that bit... I was (very briefly) trying to imagine how a 90's era dishwasher or something might use a 68K CPU in a VME cage.
That said, the embedded 68K project I was on was for an industrial process control device. (We were making hardware that would fit in a valve or sensor in the field and connect it to a network.) Because of various safety requirements, total power dissipation was very, very low. (draw on the order of mA, IIRC, and fairly low voltage.)
Super Socket 7!
I was curious about FPGAs since they currently have plenty of headroom to emulate early-nineties consoles (including 68k-based ones like the Genesis) - it's awesome to see that FPGAs can now be used for the continued life of old CPU architectures.
I'm not too fond of emulators and like to work on actual hardware although the FPGA 68k cores are interesting.
The implication (between the lines) that Apple went with a proprietary solution while an open one was available bends history a bit. Judging by release date, ADB predates PS/2 by half a year or so.
Also, was the PS/2 keyboard/mouse interface any less proprietary than Apple Desktop Bus at time of initial release?
Not really. They were both proprietary. ADB was Apple's solution, and PS/2 was IBM's solution.
ADB was rather nifty from a technical perspective -- it was a multidrop bus which could support multiple devices, as opposed to PS/2 which could only support a single device per port. It was common for mice to be chained off of keyboards, for example, and there were a number of third-party peripherals which could be connected to ADB as well, like graphics tablets.
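To make the multidrop addressing concrete: as I recall, each ADB command byte packed a 4-bit device address, a 2-bit command, and a 2-bit register number, which is what let several devices share one bus. A rough C sketch (the helper name is mine, not Apple's):

    #include <stdint.h>

    /* ADB command byte layout: AAAA CCRR
       (device address, command, register number).
       Listen = host-to-device, Talk = device-to-host. */
    #define ADB_CMD_LISTEN 0x2
    #define ADB_CMD_TALK   0x3

    static inline uint8_t adb_command(uint8_t addr, uint8_t cmd, uint8_t reg)
    {
        return (uint8_t)(((addr & 0xF) << 4) | ((cmd & 0x3) << 2) | (reg & 0x3));
    }

    /* Example: poll register 0 of a keyboard at its default address 2:
       adb_command(2, ADB_CMD_TALK, 0) == 0x2C */

The host polls each address in turn, which is how a mouse chained off a keyboard still gets serviced.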
You could thus develop a habit of putting, say, the flat part next to your thumb, and then you'd know the alignment of the cable. So if you knew the alignment of the port, you could plug in reliably on the first attempt.
USB makes a slightly worse mistake, which is that the plastic part you hold in your hand feels the same way if you turn it around 180 degrees. So you have to point it at your face and look at it if you want to know which way you're holding it.
It's certainly not as easy as with ADB, but it only takes a quick glance to orient a USB jack. I can't be the only person to have noticed this.
The progenitor of USB was the SIO interface on the Atari 8-bit series. (https://en.wikipedia.org/wiki/Atari_SIO)
The inventor of SIO (Joe Decuir) was also part of the USB team, and credits SIO as having a big influence on USB.
SIO was actually pretty neat for those early days; it was effectively an implementation of a serial-based virtual-filesystem-like layer, where a device I/O block held pointers to routines for read-char/write-char/open/close/.../xio (where xio was the catch-all, like ioctl())
You could create your own SIO drivers, or load in 3rd-party drivers (the floppy-disk driver was a loaded SIO driver), and the machine came with some already installed by the OS (keyboard, screen, cassette I/O, printer). Any external device would plug into the SIO port in the back of the machine, which was molded so it only went in one way.
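In loose C terms, that device I/O block amounted to a vtable per device, something like this sketch (the field names are mine, not Atari's):

    /* Rough rendering of an Atari SIO/CIO device handler entry.
       The real OS stored a table of routine addresses per device. */
    struct device_handler {
        int (*open)(int unit, int mode);
        int (*close)(int unit);
        int (*read_char)(int unit);               /* get one byte */
        int (*write_char)(int unit, int c);       /* put one byte */
        int (*get_status)(int unit);
        int (*xio)(int unit, int cmd, void *arg); /* catch-all, like ioctl() */
    };

    /* The OS mapped a device letter to its handler, so loading a
       third-party driver just registered another entry, e.g.
       { 'D', &floppy_handler } for the disk driver. */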
SIO as a software API wasn’t limited to the external bus, either. Atari introduced an 80-column box (the XEP-80) which used the SIO API internally to drive 8-bit parallel data across the bidirectional joystick ports (yes, the 80-column card connected via the joystick ports, because if you were running with 80-column text, apparently you never played games...). The joystick ports gave higher bandwidth than the SIO port, but there was a parallel port on the XL/XE as well...
Of course, the requirements for 8-bit computers were a lot lower, so the bus ran at the blazing speed of 19200 baud normally (you could boost it higher, up to 72k), and the entire thing was run off the clock-domain of POKEY, the sound chip.
There was a lot of neat engineering in the Atari 8-bit line, most of which was ignored because it was a “games computer”...
For multiple people chatting on virtual worlds at the same time, it worked fine!
And still was even after Macs switched to USB. I've got a couple of those keyboards in my collection.
The PS/2 was supposed to succeed the "PC", and was intentionally proprietary, to prevent clones.
Despite the line's eventual failure, many technologies first introduced in the PS/2 range eventually became standards, including the 16550 UART (serial port), the 1440 KB 3.5-inch floppy disk format, the Model M keyboard layout, 72-pin SIMMs, the PS/2 keyboard and mouse ports, and the VGA video standard.
Most standards in the industry are bogged down by committees or designed to serve a myriad of use cases. Since Apple is vertically integrated and ships in volume, they generally don't have issues shipping proprietary solutions with very narrow use cases.
Most of the negative messaging you hear around Apple's proprietary solutions is usually resentment from people who want them but can't have them.
Speak for yourself. I would much prefer my iPhone to have a type-C port than the Lightning port it has; one less cable to keep track of.
> They needed a solution and what was available was either woefully insufficient or non-existent so they developed a proprietary solution that met their needs.
Lightning is 8 years old and shipped in products 2 years before the USB-C specification was even finalized, 4 years before it was standardized. And it wasn't until 2018 that USB-C began being widely adopted.
USB-C is still kind of a shit show in terms of compatibility; you can get a charger/cable combination that will charge one device and destroy another. Until that gets ironed out, Apple probably won't be using it in phones.
 ...and, apparently, the first desktop systems that relied exclusively on USB (https://en.wikipedia.org/wiki/Legacy-free_PC).
1. Windows 95 USB support was terrible so nobody used it. But Windows 98 had just come out with good USB support.
2. Some PCs came with USB ports but most didn't. Legacy ports were always present. If you wanted USB you often had to add a card. So nobody used USB in PC-land.
3. Apple's market share was tiny but the "Bondi Blue" iMac (which only had USB ports, not legacy ports) was the first product they had made in a decade that was actually desirable. It was so popular that it made Apple a player in the market again.
4. Even though the Bondi Blue iMac's USB "hockey puck" mouse was ergonomically terrible, it didn't dissuade people from buying the computer, because they could just buy a different USB mouse due to factor 5.
5. Steve Jobs lobbied vendors to make USB peripherals by convincing them they would work on both the Mac and the PC. Which was true.
5 was probably the most important factor. Vendors had hundreds of peripherals ready to go on Day 1 of the Bondi Blue iMac's introduction, and most of them were blue. People plugged them in to Windows 98 machines and they just worked. The surge in USB device availability (plus Windows 98) caused PC manufacturers to begin including USB ports universally. But that surge was due to Steve Jobs' lobbying.
The issue with USB on the PC side is that USB on Windows 95/OSR2 just never worked properly. People may remember the infamous Bill Gates demo of Windows 98 at Comdex where plugging in a USB scanner to demo Plug and Play crashed the computer.
When Windows 98 SE came out everything worked fine.
PC hardware, on the other hand, clung to shipping with serial and parallel ports for an amazingly long time, even on laptops, where hardware designers must have struggled with fitting those huge connectors in.
In my experience, it didn’t help that Windows’ USB support was lousy for years. On the Mac, you plugged in a mouse. A second later, it worked. Windows found it necessary to tell you it had detected a device and had to “search for a driver” for seconds, and it even asked you to locate a driver about every other time you plugged in a device it had seen a zillion times before (exaggerating, but plugging a device into a different USB port than before could trigger that).
I think USB would have won on the combination of merit and being from Intel, anyways, but who knows? Maybe, Intel would have come up with something else. If USB1 had flopped, USB2 wouldn’t have needed backwards compatibility, for example.
There's a similar argument about the iPhone - is it solely responsible for how all our phones look today, or was it inevitable (see the LG Prada)?
From 1998 to 2000, USB-only devices were marketed primarily to Mac users, because it was seen that they were the only ones who needed them. PC users stuck with parallel or serial because those were cheaper, but where possible manufacturers would add a USB port. Then came the higher-speed USB 2.0 and, with it, flash drives, and that changed everything. Rather than the iMac, I'd say it was the iPod that really made USB desirable.
Although I think the most-sold devices in those days of USB were external USB floppy drives, for all the iMac users who needed to read floppies.
I think perhaps the iMac gave USB the customer base (of people who had no other option) to get the prices of chipsets and devices down to the point where it could also be accepted by the PC market.
But new PCs had USB ports before the software was ready, for about a year or so.
PCs did not normally use RS-232 for keyboards, but up through the late 90s they often used it for mice.
(Serial mice typically got away with drawing power from one of the signal pins. This was a pretty gross hack, but it worked well enough.)
I'm pretty sure every hardware platform had its own standard, and there were more desktop/workstation platforms then (Amiga, NeXT, Sun, SGI, etc)
You seem focused on defending Apple's reputation, when pointing out that the interface is proprietary is as much about highlighting the fact that a driver exists for it despite being proprietary as it is about disparaging uncooperative hardware vendors.
The entire tone of the article is basically "The Linux kernel is so amazing that not only is it still maintaining support for ancient hardware, but for proprietary hardware (and, reading between the lines, likely hardware reverse-engineered by volunteers) as well!"
Probably some of this had to be done (specs never quite match implementation), but the ADB protocol and hardware interface was quite well documented by Apple in Inside Macintosh and Guide to Macintosh Hardware which I’m sure the Linux devs availed themselves of.
There was also an Apple KB article (written mostly for hardware and driver developers) with the charming title of “Help! Space Aliens Ate My Mouse!” which detailed all of the things that could go wrong with ADB that Apple hadn’t thought of when they wrote the original documentation.
It can be a great learning experience though.
It's called retro-computing and it's simply a hobby. You could also ask why people are fixing, maintaining and driving around with cars that are decades old.
It's also a very good method for learning everything about kernel development and maintenance. On x86, there are enough people looking at and working on the code, so you will have a hard time finding things to improve.
The m68k port, on the other hand, has many places where you can help with improving the code and therefore get your feet wet with kernel development.
Then again, my father in law still refuses to buy new computers and will literally run them til they blow out.
So there is definitely someone out there keeping the thing running.
One example of support being removed is the old Intel 386 processors, due to multi-core support complications apparently: http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html (that left the Intel 486 and later still supported)
A bit more recently, a bunch of obscure architectures were removed, mostly because the last compilers to support them are too old or buggy: https://lwn.net/Articles/748074/ (Linux is in general really great about supporting a range of gcc/binutils versions going back about 7 years ... compare to some other popular projects these days which require a Go or Rust toolchain from just a few months ago.)
And occasionally a driver is moved to "staging" to see if anyone complains, before being removed, because it is being a bother and no one seems to be maintaining it: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
This is related to why "upstreaming" a driver can be very different from just releasing the source. Making a driver acceptable for merging into the mainline kernel means making it clean enough that maintenance costs will be extremely low for the next 5 to 10 years, even as the Linux kernel sustains a surprisingly large amount of change in every single 3-month release cycle.
Well, it's been like 2 decades since I last compiled a kernel myself, but back then I was definitely bothered with the pages of obsolete hardware which could be selected from when running make menuconfig. Not sure if/how people configure kernels these days but I assume it didn't get better?
As long as the stuff is actively maintained, I don't see a problem as this means it's actually being used.
Linux has always had very wide hardware support, and if that bothers you, you can either use the make target "localmodconfig" or just not build your kernels yourself.
I think you misunderstood my reply. I'm not saying the kernel should drop support, I'm merely replying to the 'doesn't bother anyone' with a practical example of why it might bother someone on a particular level.
So here it goes, in full (similar to sibling poster cesarb's story): once you could, spending some time, just go over pretty much every possible option and select what you needed. Which mattered back then (at least to me), because compilation already took half a day on my machine, so the less there was to compile, the better. Then over the years came more and more support for different hardware. That fact by itself obviously did not bother me, also because it meant I could finally hook up my insert not too common device here. What started to bother me was just that it took more time to go over all the options. Wishful thinking, because it would be hard to implement, but I would have liked a flag like "OK, you can just skip everything for hardware which only existed before 1995, because I don't have that". So, reflecting on that period, I'm simply wondering what it must be like today to go through all the options. I.e., I wonder whether others might be bothered by the sheer number of options out there.
A 7-year-old gcc was released back in 2013, or 26 years after gcc's initial release.
So while the codebase may become larger with the additional devices supported, this doesn't have to impact your running kernel at all. Modules that represent hardware not installed on your system aren't typically loaded and therefore don't occupy RAM.
You can also blacklist modules, in the event something's loading where it shouldn't and you don't want it in RAM.
That being said, you can do an `lsmod` and look at the modules actually loaded. If some are there that represent things you'll never use (like QNX partition support), blacklisting them may save you some KB of RAM and lower your kernel's attack surface. QEMU's floppy disk hardware support was hit by a security vulnerability not too long ago, and I've had the `floppy` module blacklisted for a long time on my VMs.
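For anyone who hasn't done it: blacklisting is just a config drop-in. A minimal sketch of the usual modprobe.d approach (the filename is arbitrary):

    # /etc/modprobe.d/blacklist-floppy.conf
    # "blacklist" only stops alias-based autoloading;
    # overriding "install" blocks explicit loads as well.
    blacklist floppy
    install floppy /bin/false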
If you are running Linux on a purely static system that won't change hardware over its life (including things like USB devices), you can then compile a kernel with all modules "built-in" and disable module loading entirely.
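If I remember the sysctl right, there's also a one-way runtime switch for that, in addition to building with CONFIG_MODULES=n:

    # One-way: once set, no modules can be loaded or
    # unloaded until the next reboot.
    sysctl kernel.modules_disabled=1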
As long as there is an active maintainer for the code, the code can stay forever and it's not really bloating up the kernel due to the modular nature of it.
bool "Apple Desktop Bus (ADB) support"
depends on MAC || (PPC_PMAC && PPC32)
Apple Desktop Bus (ADB) support is for support of devices which
are connected to an ADB port. ADB devices tend to have 4 pins.
If you have an Apple Macintosh prior to the iMac, an iBook or
PowerBook, or a "Blue and White G3", you probably want to say Y
here. Otherwise say N.
As a reference point, I know NetBSD explicitly decouples these things; if you plug a brand-new PCI device into any system with a PCI slot, NetBSD doesn't care that you're plugging, e.g., a Sun keyboard into a "PC" USB card in a PowerPC Mac - it has a keyboard driver, a USB driver, a PCI driver, and all of them use the same internal interfaces regardless of where they originated. I assume Linux does it the same way, but I don't know explicitly.
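In pseudo-C, that decoupling means the keyboard driver only ever sees a generic transport interface; a hypothetical sketch (these names are mine, not NetBSD's or Linux's):

    /* The keyboard driver consumes this interface without knowing
       whether the transport underneath is USB-on-PCI, ADB, or
       anything else; the bus driver fills it in at attach time. */
    struct kbd_transport {
        int  (*read_report)(void *bus_ctx, unsigned char *buf, int len);
        void *bus_ctx; /* opaque handle owned by the bus driver */
    };

    int kbd_attach(struct kbd_transport *t); /* same call on every platform */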
Apple had provided some internal documentation for the IOP controller, but it is incorrect. The Linux driver for it just polls every ADB device for data directly, instead of letting the IOP process it and only interrupt the CPU when there is a complete ADB message. I have a working IOP driver for NetBSD/mac68k that the Linux people might like to copy.
The presence of these workarounds was an obstacle to maintenance of other x86 code. Meanwhile, the presence of a driver for an old Macintosh system is hardly an obstacle to anyone.
Because developers care about Motorola 68000, but no one cares about i386.
No one wanted to step up to work on support for the original i386, so it got removed.
(1) there are people who are so keen on keeping 68K/Macintosh/etc support alive that they keep on working on it, while maybe no one displayed the same keenness when it came to 386
(2) 386 complicates code paths for 32-bit x86, which people still care about (even if only a little) for real world production use. By contrast, 68K stuff sits in its own directory tree and has little impact on the rest of the kernel
I don't know how much of a concern this is IRL, given that one could hide essentially all 386-specific workarounds behind #ifdef's.
If you look at the patch that removed 386 support (http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html), you can see that it was already under #ifdef. There are quite a number of atomic instructions not available on the 386, all of which required workarounds.
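For a concrete flavor: cmpxchg only appeared with the 486, so on a 386 the kernel had to emulate it. A simplified sketch of that kind of fallback (not the actual kernel code; local_irq_save/restore are the kernel's own macros):

    /* With SMP impossible on a 386, disabling interrupts is enough
       to make the compare-and-exchange appear atomic on that CPU. */
    static inline unsigned long cmpxchg_386_emu(volatile unsigned long *ptr,
                                                unsigned long old_val,
                                                unsigned long new_val)
    {
        unsigned long flags, prev;

        local_irq_save(flags); /* UP-only: IRQ-off suffices */
        prev = *ptr;
        if (prev == old_val)
            *ptr = new_val;
        local_irq_restore(flags);
        return prev;
    }

Every caller of cmpxchg then needs an #ifdef (or a runtime check) to pick between this path and the real instruction, which is exactly the kind of clutter the removal got rid of.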
You'd think so, but: https://wiki.apollo-accelerators.com/doku.php/apollo_core:st...
(I wonder if the new ADB driver would be useful with the USB-to-ADB adapter I have. Although the only thing I could plug into it is a trackball.)
Like, aren't you just increasing the attack surface, since all these drivers exist in kernelspace?
So there are three levels, as I understand it, at which driver code like this could be included (or not) in the kernel. This is not my area of expertise, so please excuse me if I'm mistaken here, but you have:
1. Not included in the kernel and not included as a module. This is obviously excluded at source and is the safest;
2. Available as a module but not loaded by default; and
3. Included in the kernel. I'm not 100% sure if there even is a distinction between this and (2) anymore. I remember at one point you could include code as part of the configure process during compilation and this was distinct from modules (at least for a time).
Either way, there doesn't seem to be a difference philosophically between (2) and (3), right? A vulnerable module can be loaded from userspace, generally speaking.
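There is still a mechanical distinction between (2) and (3), as far as I know: Kconfig's tristate, where y builds the code into the kernel image, m builds it as a loadable module, and n leaves it out. Taking the floppy driver as an example of a real tristate option:

    config BLK_DEV_FD
        tristate "Normal floppy disk support"

But yes, philosophically they're close: as long as module loading is enabled, the vulnerable code can end up in the kernel either way.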
Also, I'm actually curious how often drivers are the source of security vulnerabilities? Is this a common or rare vector?
Of course, the driver probably won’t do anything if it never gets attached to a device (other than responding to probe requests on the bus it uses).