Linux Kernel Is Still Seeing Driver Work for the Macintosh II (phoronix.com)
187 points by caution 14 days ago | 119 comments



The m68k port is well maintained and is one of the oldest Linux ports of all.

There are multiple active kernel maintainers, and the port regularly sees improvements and new drivers, such as for the Amiga's X-Surf 100 Ethernet card or the ICY board (an I2C board).

There is also ongoing work to support LLVM [1] and consequently Rust [2] on Motorola 68000.

Disclaimer: I'm Debian's primary maintainer of the m68k port and I'm supporting a lot of these efforts directly or indirectly.

> [1] https://github.com/M680x0/M680x0-mono-repo

> [2] https://github.com/glaubitz/rust/tree/m68k-linux


Thank you!! By the way, just curious, what draws you to work on it?

Edit: OP explains here: https://news.ycombinator.com/item?id=23675025


I have been a long-time Amiga user (since the 90s) and never let go of the machine - like many others in the community (we even have regular conferences and meetings, etc., with new hardware being developed and released).

I personally became the m68k maintainer in Debian because I was asked whether I would like to work on the port shortly after I became a Debian Developer and I agreed.

I learned so much about kernel and software development in general that I stayed with it. It also helped me land a job with one of the big Linux companies.


I have a hardware question. Seeing that the heyday of the Motorola 68000 series was in the early nineties, aren't modern Debian's storage and memory requirements getting to be very prohibitive, even for a minimal headless install?

I'm just curious what kind of hardware setups are used for running and developing Debian on these systems. What are you running on/developing with? Is a desktop computer with qemu the fastest practical 68k computer you can have?


In fact, I have an Amiga 4000/060 running Debian unstable right now.

There are new hardware accelerators being developed such as the Vampire which will provide a faster basis for running Linux on Amiga/m68k:

> https://www.apollo-accelerators.com/

Currently, I'm using mainly QEMU for development. QEMU's m68k emulation has received tons of improvements thanks to the heavy use of the Debian/m68k project.

In fact, all the package builders for m68k are currently QEMU-based, and building the whole Debian archive on QEMU has proven to be the best quality testing of QEMU ;-).


> There are new hardware accelerators being developed such as the Vampire which will provide a faster basis for running Linux on Amiga/m68k:

That's very impressive... I'd known people had been doing Amiga accelerators for a while, but not that sophisticated.

Also very cool that they use a custom Amiga style mouse pointer on the website. (Viewed the site on an OSX machine and I'll admit to the double take I did...)


Tone: Honest inquiry. What do you get out of running Linux on an m68k accelerator? (I don't know what the answer is, but I'm happy to hear it and have no intention of arguing about it.)

(I understand wanting to run the original Amiga OS on faster gear.)


Wasn't the heyday of the Motorola 68k the 80s? By the time the 486/040 came about, it was already starting to show that Motorola was not able to match Intel's MHz push.

> wasn't the heyday of the Motorola 68k the 80s?

It depends on the market. For PC/workstation class machines, the 80's into the early 90's were 68k's moment in the sun. But I distinctly remember doing new embedded development work on 68K in the late 90's, and the Palm Pilot (1997) had a 68K core as well. So even though they had to retreat from one market, they stayed active in the embedded space. (Of course, so did Intel... that same project with the 68K also ran on an 80188EB, 386SX, and an AMD Elan SC400).

> by the time the 486/040 came about it was already starting to show that motorola was not able to match intel's mhz push.

Intel had a few more minor growing pains too. The 486/25 and /33 chips were easily accepted into the market, but the /50 was not. There were problems at the time getting a 50MHz motherboard to work correctly and not cause too many problems with radio interference. So the 50MHz part wound up being limited to the high end of the market. The step beyond 33 for the 'mainstream' PC market was the DX2/66, which left the bus/motherboard at 33MHz and used the on-chip cache to run the CPU itself at 66. So faster CPU than a DX/50, but less bus bandwidth, which turned out to be a reasonable tradeoff. (Particularly given that I/O was often very slow, anyway, being forced through a 16-bit 8MHz AT bus.)

The Pentium, of course, brought the >=60MHz motherboard back into the PC mainstream, and there was also a 486/DX4 part around the Pentium timeframe. Contrary to its name, the DX4 was a clock-tripled CPU that would run 33/100 (with an option for a doubled 50/100 as well). Clock multipliers came to the Pentium with the second revision... the original P5/60 and /66 ran 1:1, but the subsequent P5/90 and /100 ran 1.5:1.


VME bus 680x0 servers were in pretty wide use in the appliance market through the 90's as well. Things like firewalls.

> wide use in the appliance market through the 90's as well. Things like firewalls.

Thank you for clarifying with that bit... I was (very briefly) trying to imagine how a 90's era dishwasher or something might use a 68K CPU in a VME cage.


Funny enough, the Coldfire series of 680x0 derivatives were marketed at "smart home appliance" manufacturers.

Seems like a natural fit, if maybe even a bit overpowered.

That said, the embedded 68K project I was on was for an industrial process control device. (We were making hardware that would fit in a valve or sensor in the field and connect it to a network.) Because of various safety requirements, total power dissipation was very, very low. (draw on the order of mA, IIRC, and fairly low voltage.)


Some DX4 chips from AMD made it to 160MHz overclocked.

Super Socket 7!


SS7 was for Pentium/Pentium II class CPUs (K6 et al.); the Am5x86 was for Socket 3 486 motherboards.

Yes - I think "heyday" is the wrong word choice here. What I meant to say is that my impression is that the most powerful hardware with 68k CPUs came out in the early 90s, and outside of microcontrollers there hasn't been much development in 68k since. At least, that is my impression from googling around - the 68060 in Atari clones and late Amiga revisions around 1995 seems to be the end of the line for 68k.

As mentioned in another comment, you can get very fast m68k hardware with the Vampire accelerators these days, which actually implement additional instructions and are therefore called 68080 by their developers.

Ah very cool. It seems that Apollo 68080 is a 'virtual' 68k CPU running on an FPGA.

I was curious about FPGAs since they currently have plenty of headroom to emulate early nineties consoles (including 68k based ones like the Genesis) - it's awesome to see that FPGAs can now be used for continued life of old CPU architectures.


It is a little tricky for the older CPUs that want 5V I/O. There's no universal bi-directional level shifter that works well in all common situations.

The ColdFire SoCs (systems on a chip) are still available and of newer "stock", versus chasing down original 680x0 chips.

I'm not too fond of emulators and like to work on actual hardware although the FPGA 68k cores are interesting.


The assembly class I took ~1999-2000 had us writing 68K assembly using the Teesside 68000 simulator.

“Apple Desktop Bus is the proprietary interface used by Apple in the late 80's and 90's for connecting devices like keyboards and mice to the system rather than a PS/2 interface.”

The implication (between the lines) that Apple went with a proprietary solution while an open one was available bends history a bit. Judging by release date, ADB predates PS/2 by half a year or so.

Also, was the PS/2 keyboard/mouse interface any less proprietary than Apple Desktop Bus at time of initial release?


> Also, was the PS/2 keyboard/mouse interface any less proprietary than Apple Desktop Bus at time of initial release?

Not really. They were both proprietary. ADB was Apple's solution, and PS/2 was IBM's solution.

ADB was rather nifty from a technical perspective -- it was a multidrop bus which could support multiple devices, as opposed to PS/2 which could only support a single device per port. It was common for mice to be chained off of keyboards, for example, and there were a number of third-party peripherals which could be connected to ADB as well, like graphics tablets.


ADB was basically USB 0.1: 5-volt multi-device serial bus for desktop peripherals, polled by the host, with dynamic addressing and standard device classes. The only thing it was missing was hot-plugging. (The protocol could handle it, but there were mechanical/electrical issues that made it hazardous to try.) They even had a connector that made it hard to tell if you were trying to plug it in upside down!

With the connector itself, it was hard to tell, but the plastic part you held in your hand (molded plastic plus strain relief) wasn't symmetrical.

You could thus develop a habit of putting, say, the flat part next to your thumb, and then you'd know the alignment of the cable. So if you knew the alignment of the port, you could plug it in reliably on the first attempt.

USB makes a slightly worse mistake, which is that the plastic part you hold in your hand feels the same way if you turn it around 180 degrees. So you have to point it at your face and look at it if you want to know which way you're holding it.


I think all of the USB-A jacks I have are made out of a sheet of metal folded over on itself, with the join on the bottom. You can tell which way up a jack is by looking for the join. See this picture of some random bitcoin gizmo:

https://www.ccn.com/wp-content/uploads/2014/11/usb-top-botto...

It's certainly not as easy as with ADB, but it only takes a quick glance to orient a USB jack. I can't be the only person to have noticed this.


The part that screws me up is whenever the USB port is upside down, meaning the plug goes in with the retention holes facing down.

ADB was maybe 0.5..

The progenitor of USB was the SIO interface on the Atari 8-bit series. (https://en.wikipedia.org/wiki/Atari_SIO)

The inventor of SIO (Joe Decuir) was also part of the USB team, and credits SIO as having a big influence on USB.

SIO was actually pretty neat for those early days: it was effectively an implementation of a serial-based, virtual-filesystem-like layer, where a device I/O block held pointers to routines for read-char/write-char/open/close/.../xio (where xio was the catch-all, like ioctl()).

You could create your own SIO drivers, or load in 3rd-party drivers (the floppy-disk driver was a loaded SIO driver), and the machine came with some already installed by the OS (keyboard, screen, cassette I/O, printer). Any external device would plug into the SIO port in the back of the machine, which was molded so it only went in one way.

SIO as a software API wasn't limited to the external bus, either. Atari introduced an 80-column box (the XEP-80) which used the SIO API internally to drive 8-bit parallel data across the bidirectional joystick ports (yes, the 80-column card connected via the joystick ports, because if you were running with 80-column text, apparently you never played games...). The joystick ports gave higher bandwidth than the SIO port, but there was a parallel port on the XL/XE as well...

Of course, the requirements for 8-bit computers were a lot lower, so the bus ran at the blazing speed of 19200 baud normally (you could boost it higher, up to 72k), and the entire thing was run off the clock-domain of POKEY, the sound chip.

There was a lot of neat engineering in the Atari 8-bit line, most of which was ignored because it was a “games computer”...


It also presaged using the bus as a handy source of 5V power - I have a SCSI Ethernet adapter for PowerBook notebooks that gets its power from a separate pass-through cable you need to plug into the ADB port.

I have one of those SCSI Ethernet adapters as well. We used to use a desktop computer with a modem running Vicom Internet Gateway, and then share that single modem connection with other computers over Ethernet.

For multiple people chatting on virtual worlds at the same time, it worked fine!


> It was common for mice to be chained off of keyboards, for example

And still was even after Macs switched to USB. I've got a couple of those keyboards in my collection.


I read there was an ADB modem in the early days, but ADB was too slow to keep up with advances.

The PS/2 interface was developed by IBM for their PS/2 range of computers, hence the name. "Personal System 2".

The PS/2 was supposed to succeed the "PC", and was intentionally proprietary, to prevent clones.

Despite their eventual failure, many technologies first introduced in the PS/2 range eventually became standards, including the 16550 UART (serial port), 1440 KB 3.5-inch floppy disk format, Model M keyboard layout, 72-pin SIMMs, the PS/2 keyboard and mouse ports, and the VGA video standard.


The inverted T model M, IIRC, was introduced in the 3151 terminal. Some time later, the RT/PC also had it. Even though they had the layout that has since dominated the industry, none of these had a PS/2 connector, since the PS/2 was only introduced in 1987.

I never knew there was another layout https://en.wikipedia.org/wiki/Model_F_keyboard

What happened is pretty much par for the course when it comes to Apple, not to imply that there's anything wrong with what they do. They needed a solution and what was available was either woefully insufficient or non-existent so they developed a proprietary solution that met their needs. When an industry standard eventually provided a better option they moved (e.g. ADB -> USB).

Most standards in the industry are bogged down by committees or designed to serve a myriad of use cases. Since Apple is vertically integrated and ships in volume, they generally don't have issues shipping proprietary solutions with very narrow use cases.

Most of the negative messaging you hear around Apple's proprietary solutions are usually resentment from people who want but can't have it.


> Most of the negative messaging you hear around Apple's proprietary solutions are usually resentment from people who want but can't have it.

Speak for yourself. I would much prefer my iPhone to have a type-C port than the Lightning port it has; one less cable to keep track of.


I guess you missed where I said:

> They needed a solution and what was available was either woefully insufficient or non-existent so they developed a proprietary solution that met their needs.

Lightning is 8 years old and shipped in products 2 years before the USB-C specification was even finalized, 4 years before it was standardized. And it wasn't until 2018 that USB-C began being widely adopted.

USB-C is still kind of a shit show in terms of compatibility, you can get a charger/cable combination that will charge one device and destroy another. Until that gets ironed out, Apple probably won't be using it in Phones.


Also ironic: While Apple didn't invent USB, they were the first PC manufacturer to adopt it in a big way which led to it becoming commonplace everywhere.

That makes it sound like their adoption was the main cause for it becoming widespread. Do you have a source to back that up? It's just not how I remember it, but that might be my memory failing and/or being warped.

This is just an anecdote, but in 1998-1999, the iMacs we bought where I worked were the first desktop systems I worked with that used USB[1]. My PC at the time used PS/2 for the keyboard and mouse, SCSI for external storage, a PC joystick port for game controllers, RS-232 for the first 8-port MIDI interface I ever owned, and the parallel port for my printer. I didn't migrate to mostly-USB until probably five years later, because I had plenty of non-USB hardware that still worked perfectly, and USB 1.x was slow. Mac users didn't really have that option, so I can certainly believe claims that they were one of (if not the) main drivers of USB.

[1] ...and, apparently, the first desktop systems that relied exclusively on USB (https://en.wikipedia.org/wiki/Legacy-free_PC).


Ironically it took Firewire in addition to USB for the Mac to free itself from legacy ports. SCSI, for instance, wasn't able to run over USB.

Indeed. And for people who assume USB's success was due to the iPod, remember that the first iPods were Firewire-only. By the time iPods adopted USB, USB was already popular.

I think it was a combination of several factors:

1. Windows 95 USB support was terrible so nobody used it. But Windows 98 had just come out with good USB support.

2. Some PCs came with USB ports but most didn't. Legacy ports were always present. If you wanted USB you often had to add a card. So nobody used USB in PC-land.

3. Apple's market share was tiny but the "Bondi Blue" iMac (which only had USB ports, not legacy ports) was the first product they had made in a decade that was actually desirable. It was so popular that it made Apple a player in the market again.

4. Even though the Bondi Blue iMac's USB "hockey puck" mouse was ergonomically terrible, it didn't dissuade people from buying the computer, because they could just buy a different USB mouse due to factor 5.

5. Steve Jobs lobbied vendors to make USB peripherals by convincing them they would work on both the Mac and the PC. Which was true.

5 was probably the most important factor. Vendors had hundreds of peripherals ready to go on Day 1 of the Bondi Blue iMac's introduction, and most of them were blue. People plugged them in to Windows 98 machines and they just worked. The surge in USB device availability (plus Windows 98) caused PC manufacturers to begin including USB ports universally. But that surge was due to Steve Jobs' lobbying.


And no, I haven't found a source yet. I tried. These are my memories of that time from following Apple very closely because I bought a bunch of Apple stock when Jobs came back and I went to Macworld and WWDC a lot in those days.

Mostly matches my memory, but I think Bill Gates was lobbying hard for USB everywhere. "Designed for Windows 98" stickers required USB ports, as I recall, so PCs pretty much all had USB starting about that time.

>Microsoft Windows 95, OSR 2.1 provided OEM support for the devices in August 1997. The first widely used version of USB was 1.1, which was released in September 1998. Apple Inc.'s iMac was the first mainstream product with USB and the iMac's success popularized USB itself. Following Apple's design decision to remove all legacy ports from the iMac, many PC manufacturers began building legacy-free PCs, which led to the broader PC market using USB as a standard.

https://en.wikipedia.org/wiki/USB#History

The issue with USB on the PC side is that USB on Windows 95/OSR2 just never worked properly. People may remember the infamous Bill Gates demo of Windows 98 at Comdex where plugging in a USB scanner to demo Plug and Play crashed the computer.

https://www.youtube.com/watch?v=73wMnU7xbwE

When Windows 98 SE came out everything worked fine.


I think it’s hard to make that judgment, but the iMac certainly created a clear market for USB devices at a time when PC users would have to buy an extension card to use USB hardware, but simply could plug in serial or parallel devices.

PC hardware, on the other hand, clung to shipping with serial and parallel ports for an amazingly long time, even on laptops, where hardware designers must have struggled with fitting those huge connectors in.

In my experience, it didn't help that Windows' USB support was lousy for years. On the Mac, you plugged in a mouse. A second later, it worked. Windows found it necessary to tell you it detected a device and had to "search for a driver" for seconds, and even asked you to locate a driver about every other time you plugged in a device it had seen a zillion times before (exaggerating, but plugging a device into a different USB port than before could trigger that).

I think USB would have won on the combination of merit and being from Intel, anyways, but who knows? Maybe, Intel would have come up with something else. If USB1 had flopped, USB2 wouldn’t have needed backwards compatibility, for example.


Heck, Windows 10 still pops up an "installing drivers" dialog whenever I plug in a USB mouse. The mouse does start working almost immediately, but the dialog persists for a dozen seconds longer. One of my earliest fond memories of trying out Linux back in the day was plugging in random early USB devices that were a pain to get working in Windows but worked milliseconds after plugging in on Linux (assuming they had drivers, to be fair).

I think it's impossible to actually know for real, but I certainly had that impression at the time. Mostly from how it felt like the USB version of peripherals tended to get clear/blue plastic cases to match the iMac whereas the parallel/serial versions had more traditional designs.

There's a similar argument about the iPhone - is it solely responsible for how all our phones look today, or was it inevitable (see the LG Prada)?


That's not how I remember it. Every PC had two USB ports. They didn't get used for much because USB 1.1 wasn't great. But they were there since 1997. It was a common question to ask if your computer had "Windows 95b with USB".

From 1998 to 2000 USB only devices were marketed primarily to Mac users because it was seen that they were the only ones who needed it. PC users stuck with parallel or serial because they were cheaper, but where possible manufacturers would add a USB port. Then the higher speed USB 2.0 and with it flash drives and that changed everything. Rather than the iMac I'd say it was the iPod that really made USB desirable.


USB 1.1 was already more than good enough for flash drives, since flash drives competed with floppy disks (way slower and poor capacity and reliability), Zip drives (not much faster even in the ideal case, and slower if they were parallel port), CD-Rs (need burning software so not available on public/school/office computers, slow workflow, slow burning, expensive single-use discs).

Although I think the most-sold devices in those days of USB were external USB floppy drives for all the iMac users who needed to read floppies.

I think perhaps the iMac gave USB the customer base (of people who had no other option) to get the prices of chipsets and devices down to the point where it could also be accepted by the PC market.


They didn't get used much because software support was not there. There were issues with Windows 95 and NT4. But also many PCs would stop at POST without a PS/2 keyboard plugged in.

But new PCs had USB ports before the software was ready, for about a year or so.


The open solution was RS-232, which Macs of the era didn't support without extension hardware.

PCs did not normally use RS-232 for keyboards, but up through the late 90s they often used them for mice.


Macs did support RS-232, via the MiniDIN-8 "printer" and "modem" ports. This port didn't supply power, though, making it a poor choice for a mouse or keyboard.

(Serial mice typically got away with drawing power from one of the signal pins. This was a pretty gross hack, but it worked well enough.)


The Mac technically had RS-422 (https://en.wikipedia.org/wiki/RS-422#Applications), which allowed for much higher data speeds and larger cable lengths, but could be used as RS-232, too.

The Mac serial ports were much better than PC serial ports. They ran faster, could be used over longer distances, and had hardware support for networking with those cheap LocalTalk boxes - as I understand it, you're basically physically daisy-chaining the computers, and the hardware is capable of looking at a packet and deciding whether it's for this computer; if it isn't, it can pass it on without software intervention.

Yeah, ADB came first and NeXT implemented a version of it on their computers.


More fundamentally, no one focused on cross-platform input device 'standards' at that time, so the suggestion itself is a bit of an anachronism.

I'm pretty sure every hardware platform had its own standard, and there were more desktop/workstation platforms then (Amiga, NeXT, Sun, SGI, etc)


Many SGI systems came with PS/2 keyboard and mouse ports. Other SGI systems (like the Indigo) came with PS/2-shaped DIN ports that just didn't work with PS/2 peripherals.

Finding specifically that between the lines says more about your biases than the hidden implications.

You seem focused on defending Apple's reputation, when pointing out that the interface is proprietary is as much about highlighting the fact that a driver exists for it despite being proprietary as it is about disparaging uncooperative hardware vendors.

The entire tone of the article is basically "The Linux kernel is so amazing not only is it still maintaining support for ancient hardware, it's proprietary hardware (and what I read between the lines is likely reverse engineered by volunteers) as well!"


> likely reverse engineered

Probably some of this had to be done (specs never quite match implementation), but the ADB protocol and hardware interface was quite well documented by Apple in Inside Macintosh and Guide to Macintosh Hardware which I’m sure the Linux devs availed themselves of.

There was also an Apple KB article (written mostly for hardware and driver developers) with the charming title of “Help! Space Aliens Ate My Mouse!” which detailed all of the things that could go wrong with ADB that Apple hadn’t thought of when they wrote the original documentation.


Back in the early 2000s I did not have much money, but was able to get my hands on all sorts of slightly exotic and out-of-date computers for free. People just gave them away. I really enjoyed running Linux on old SGI or Apple computers. Over time, however, the novelty wore off and I realized that even old x86 hardware was cheaper and faster. Any people running Linux on some rare old hardware care to share why?

It can be a great learning experience though.


Same here. In the late 90s, when I started in tech, I worked at a now-defunct consulting co. doing life-cycle support at Motorola's corporate headquarters. They just threw away old hardware, so I scored some old Macs and a NeXT pizza box. My first experience with Linux was running it on some m68k Macs I saved from the trash compactor. Later I salvaged enough parts to get a couple of PowerPC Macs running and installed Debian on them. I turned one into a firewall/router and ran a web server on the other. Repurposing old hardware was kind of our generation's equivalent to buying a Raspberry Pi. :)

> Any people running Linux on some rare old hardware care to share why?

It's called retro-computing and it's simply a hobby. You could also ask why people are fixing, maintaining and driving around with cars that are decades old.

It's also a very good method to learn everything about kernel development and maintenance. On x86, there are enough people looking at and working on the code, so you will have a hard time finding things to improve.

The m68k port, on the other hand, has many places where you can help with improving the code and therefore get your feet wet with kernel development.


Maybe some companies are interested in this as well. As I know from work, the PLC manufacturer Pilz [1], for example, uses 68k-type CPUs in some of its products.

[1] https://www.pilz.com/


I think you answered your own question at the end there.

Then again, my father in law still refuses to buy new computers and will literally run them til they blow out.

So there is definitely someone out there keeping the thing running.


You don't need old hardware to verify the speed differences. If you just take a browser in i3wm on a bare Arch Linux install, it will be faster than in Windows 10 on the same machine, or at least snappier. Overheads are just significantly lower on Linux, because you can tailor it to your needs and remove all the bloat. The Windows GUI is also more taxing on the system than Linux's, so Linux is going to be better at rendering something like a simple browser. Linux is slightly faster in some benchmarks (like Geekbench), but overall the differences are minor when comparing apples to apples. It's GUI-related things where Linux has the edge in performance, and some recent games have for some reason performed better on Linux than Windows according to an LTT video, which is due to some real interest and development in Linux gaming.


Does the kernel ever remove support for devices or does this stuff just bloat up the codebase for eternity?


Yes, Linux removes support for stuff over time. But most drivers for old, simple peripherals are considered not a problem - as long as they are clean, optional components of their respective subsystems, they don't bother anyone.

One example of removed support is the old Intel 386 processors, dropped due to multi-core support complications apparently: http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html (that left Intel 486 and later still supported)

A bit more recently, a bunch of obscure architectures were removed, mostly because the last compilers to support them are too old or buggy: https://lwn.net/Articles/748074/ (Linux is in general really great about supporting a range of gcc/binutils versions going back about 7 years ... compare to some other popular projects these days which require a Go or Rust toolchain from just a few months ago.)

And occasionally a driver is moved to "staging" to see if anyone complains, before being removed, because it is being a bother and no one seems to be maintaining it: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

This is related to why "upstreaming" a driver can be very different from just releasing the source. Making a driver acceptable for merging into the mainline kernel, means making it clean enough that maintenance costs will be extremely low, for the next 5 to 10 years, even as the linux kernel sustains a surprisingly large amount of change in every single 3 month release cycle.


> they don't bother anyone

Well, it's been like 2 decades since I last compiled a kernel myself, but back then I was definitely bothered with the pages of obsolete hardware which could be selected from when running make menuconfig. Not sure if/how people configure kernels these days but I assume it didn't get better?


No, there are probably more kernel config options than ever ... but they are decently well organized in a tree. I always start with my linux distro's stock kernel config, and then go through and find stuff to add and remove which is familiar enough to me, leaving the majority as-is. There is search, there are other ways to get a good starter config, I'm no expert ...

What's wrong with the kernel supporting a wide variety of hardware?

As long as the stuff is actively maintained, I don't see a problem as this means it's actually being used.

Linux has always had very wide hardware support, and if that bothers you, you can either use the make target "localmodconfig" or just not build your kernels yourself.


> What's wrong with the kernel supporting a wide variety of hardware?

I think you misunderstood my reply. I'm not saying the kernel should drop support, I'm merely replying to the 'doesn't bother anyone' with a practical example of why it might bother someone on a particular level.


It bothers you that the kernel supports hardware you don't have?

Again, no, that is not what bothers me. I'm not sure if I was really that unclear - twice - or if it's just because it's Monday :)

So here it goes, in full (similar to sibling poster cesarb's story): once you could, spending some time, just go over pretty much every possible option and select what you needed. Which mattered back then (at least to me), because compilation already took half a day on my machine, so the fewer things to compile the better. Then over the years came more and more support for different hardware. That fact by itself obviously did not bother me, also because it meant I could finally hook up my insert not too common device here. What started to bother me was just that it took more time to go over all the options. Wishful thinking, because it would be hard to implement, but I would have liked a flag like "ok, you can just skip everything for hardware which only existed before 1995, because I don't have that". So, reflecting on that period, I'm simply wondering what it must be like today to go through all the options. I.e., I wonder if others might be bothered by the sheer amount of options out there.


There was a time back in the 90s where one could go through every option in the Linux kernel config, and through every package in the dselect list (does anyone here still remember dselect?), and know what each one was for, and whether it should be selected or not. Nowadays, there are too many options for that (and for dselect, too many packages).

> Linux is in general really great about supporting a range of gcc/binutils versions going back about 7 years ... compare to some other popular projects these days which require a Go or Rust toolchain from just a few months ago

A 7-year-old gcc was released back in 2013, or 26 years after gcc's initial release


Yes, the kernel does remove unmaintained/unused code according to GKH at 30:30 in this interview [1].

[1] https://youtube.com/watch?t=1830&v=t9MjGziRw-c


There was a time when device support in a Unix meant drivers being literally built into the kernel. Then modules became a thing. Linux has thousands of modules representing device drivers, iptables targets, crypto processing, and others.

So while the codebase may become larger with the additional devices supported, this doesn't have to impact your running kernel at all. Modules that represent hardware not installed on your system aren't typically loaded and therefore don't occupy RAM.

You can also blacklist modules, in the event something's loading where it shouldn't and you don't want it in RAM.

That being said, you can do an `lsmod` and look at the modules actually loaded. If some are there that represent things you'll never use (like QNX partition support), blacklisting them may save you some KBs of RAM and lower your kernel's attack surface. QEMU's floppy disk hardware support was impacted by a security vulnerability not too long ago, and I've had the `floppy` module blacklisted for a long time on my VMs.

If you are running Linux on a purely static system that won't change hardware over its life (including things like USB devices), you can then compile a kernel with all modules "built-in" and disable module loading entirely.


> Does the kernel ever remove support for devices or does this stuff just bloat up the codebase for eternity?

As long as there is an active maintainer for the code, the code can stay forever and it's not really bloating up the kernel due to the modular nature of it.


FireWire support was not present in the kernel last time I tried it on CentOS

Hasn't gone anywhere. Your kernel might not have support for it, but that doesn't mean it doesn't exist.

https://github.com/torvalds/linux/tree/master/drivers/firewi...


It did go somewhere. It went out. Out of the kernel.

I mean, it's not like the previous comment proved you wrong by posting a link to the current kernel git tree.

But ok.


There's a big difference between a driver being dropped from the upstream kernel source tree, and a driver module not being compiled and shipped by default by RedHat.

You realise the kernel can be configured to support different hardware and many drivers are built as modules nowadays?

I was thinking the same thing. Is this actually “good” for Linux? This strikes me more as a technical bar trick than an important part of the kernel, and as another surface which can be attacked.


I'm not a kernel expert but this seems like something that would be ISA-specific. If you're running a Linux kernel on anything other than a 68k processor, this code won't even be there for the most part. And if you are running it on a 68k, odds are you're running it on an old Mac anyway.


That's more or less correct. The relevant Kconfig is:

  if MACINTOSH_DRIVERS

  config ADB
        bool "Apple Desktop Bus (ADB) support"
        depends on MAC || (PPC_PMAC && PPC32)
        help
          Apple Desktop Bus (ADB) support is for support of devices which
          are connected to an ADB port.  ADB devices tend to have 4 pins.
          If you have an Apple Macintosh prior to the iMac, an iBook or
          PowerBook, or a "Blue and White G3", you probably want to say Y
          here.  Otherwise say N.


In other words, the code is only built into the kernel on architectures where it is useful.

I doubt that it's per-ISA; it's probably by device or maybe bus/transport. That is, with the right adapter you should be able to plug an ADB keyboard into a normal x86 box and have it work, but Linux will only actually load a driver if 1. it's statically compiled in (rare unless you built your own kernel), 2. you explicitly load it (`modprobe foo`), or 3. you plug in a device that uses that driver.

As a reference point, I know NetBSD explicitly decouples these things; if you plug a brand-new PCI device into any system with a PCI slot, NetBSD doesn't care that you're plugging, e.g., a Sun keyboard into a "PC" USB card in a PowerPC Mac - it has a keyboard driver, a USB driver, a PCI driver, and all of them use the same internal interfaces regardless of where they originated. I assume Linux does it the same way, but I don't know explicitly.


The driver code seems to be under “macintosh/via-macii”, for whatever that’s worth.

I had been meaning to see if anyone was interested in some fixes to the IOP ADB driver for the IIfx and some Quadra machines.

Apple had provided some internal documentation for the IOP controller but it is incorrect. The Linux driver for it just polls any ADB device for data directly instead of letting the IOP process it and just interrupt the CPU when there is a complete ADB message. I have got a working IOP driver for NetBSD/mac68k that the Linux people might like to copy.


Why is Macintosh II still being worked on but 386 support got killed? I genuinely want to know. I suppose it might be because 386 requires more gymnastics around systems programming for memory management, etc.

The 386 lacked support for some basic atomic primitives like CMPXCHG. It was possible to work around this, but it took a significant amount of extra code for a configuration that was essentially extinct in the wild.

The presence of these workarounds was an obstacle to maintenance of other x86 code. Meanwhile, the presence of a driver for an old Macintosh system is hardly an obstacle to anyone.


> Why is Macintosh II still being worked on but 386 support got killed?

Because developers care about Motorola 68000, but no one cares about i386.

No one wanted to step up to work on support for the original i386, so it got removed.


My speculations:

(1) there are people who are so keen on keeping 68K/Macintosh/etc support alive that they keep on working on it, while maybe no one displayed the same keenness when it came to 386

(2) 386 complicates code paths for 32-bit x86, which people still care about (even if only a little) for real world production use. By contrast, 68K stuff sits in its own directory tree and has little impact on the rest of the kernel


> (2) 386 complicates code paths for 32-bit x86, which people still care about (even if only a little) for real world production use.

I don't know how much of a concern this is IRL, given that one could hide essentially all 386-specific workarounds behind #ifdef's.


386-specific workaround code still needed to be updated whenever the relevant part of the code was updated.

If you look at the patch that removed 386 support (http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html), you can see that it's already under #ifdef. There are quite a number of atomic instructions not available on the 386, all of which require workarounds.


Looks like that code could have been refactored to be better self-contained. Though I suppose removing it altogether is a fine way to go about it; it can always be reintroduced in the future if folks commit to maintaining the support going forward.

ADB is the main input bus for any Apple desktop computer made from 1987 until 1999, and any PowerBook/iBook (internal kb/trackpad only) on into the first few years of the 2000s. It's not really relevant for Phoronix to call it "work for the Macintosh Ⅱ" for any reason but to anchor the 1987 release date to that model.

They keep things around as long as someone is using it and someone is willing to maintain it. Apparently, there are still nerds out there willing to maintain the Mac II support, but none willing to maintain the 386 support.

There are no new m68k CPUs, so you don’t have to make compromises between supporting new hardware and legacy hardware - there is only legacy hardware, and the platform will not change anymore.

> There are no new m68k CPUs

You'd think so, but: https://wiki.apollo-accelerators.com/doku.php/apollo_core:st...


I was amusing myself with using obsolete tech with a modern computer this weekend when I needed a video card for a server. Not wanting to waste power on an unnecessary GPU, I looked around for the least capable card I had and found one from 1997[1]. It still works fine with a processor made in 2017. And Linux had no complaints using the generic VESA X.org driver. Though there is still a native Tseng driver, it would segfault on the particular live USB I had. Actually, it looks like the last two bugs filed against it were marked WONTFIX in 2011, so I'd say the driver is unmaintained and Xorg should consider removing it.

(I wonder if the new ADB driver would be useful with the USB-to-ADB adapter I have. Although the only thing I could plug into it is a trackball.)

[1] http://www.vgamuseum.info/index.php/component/k2/item/465-ts...


USB->ADB adapters are more like ADB->USB translators than ADB on a USB bus, though. They'll only support some peripherals, and they'll show up as normal HID devices. You won't be able to connect your PageMaker dongle, unfortunately.

Nice. A Rage128 or a Matrox Millenium are also good options for server graphics.

So is someone running the kernel on their Macintosh II... to do a thing (obviously as the article notes it was tested on a Centris 650) or is this just an awesome thing to do because it could be done?


Here's a fun story about the creation of ADB by Woz: https://eggfreckles.net/2013/12/27/adb-the-epitome-of-early-...

I have a question: doesn't all this extra (basically useless) driver code just increase the security risk for Linux?

Like aren't you just increasing the attack surface? Since all these drivers exist in kernelspace.

So there are three levels, as I understand it, at which driver code like this could be included (or not) in the kernel. This is not my area of expertise, so please excuse me if I'm mistaken here, but you have:

1. Not included in the kernel and not included as a module. This is obviously excluded at source and is the safest;

2. Available as a module but not loaded by default; and

3. Included in the kernel. I'm not 100% sure if there even is a distinction between this and (2) anymore. I remember at one point you could include code as part of the configure process during compilation and this was distinct from modules (at least for a time).

Either way, there doesn't seem to be a difference philosophically between (2) and (3), right? A vulnerable module can be loaded from userspace, generally speaking.

Also, I'm actually curious how often drivers are the source of security vulnerabilities? Is this a common or rare vector?


The driver in question can't even be compiled for x86 systems, either as a module or otherwise. Having it in the source tree is completely harmless.

Exactly. It's arch-specific code. It won't even show up in the kernel configuration unless the architecture is m68k.

That misses the point entirely. It’s not about this driver specifically but concerns the thousands of other “dead” drivers and similar that could be loaded.

Most people use their distro's default kernel binaries, which tend to include only modules that are at least somewhat likely to be used by that distro's target audience. Nobody ships kernels with all modules compiled.

A driver should only be a security issue if it's actually running (loaded), and even if you have the module available on disk (which again, is unlikely for something obscure enough) it'll only get loaded if the kernel detects hardware that uses it, or someone does a modprobe (or equivalent), which already requires root. So having the driver available shouldn't add any risk unless you actually are using it anyways.

I was under the impression the driver has to be loaded before the kernel can detect if it can use the hardware (that’s how probing works) so it’s the user space that picks the modules to load.

Of course the driver probably won’t do anything if it never gets attached to a device (other than responding to probe requests on the bus it uses.)


Even if it could compile for x86 why would you load a driver for a Mac II?

In an ideal world, if I wanted to use the keyboard I used with my mac (G3, System 8), I would plug it into an adb-pci board installed in my new x86 machine and it would just work. In real life, I'd probably use an adb-usb converter and Linux probably would see a USB HID device, but that really is less elegant.

People do all sorts of things they shouldn't, especially in complex technological contexts. Of various possible responses, this strikes me as especially unsuited to actual behaviours at scale.


