
> The author keeps building up to this massively different, otherworldly system, and then just finishes without ever answering.

Yes, it's a weak article.

So, what do mainframes have that microcomputers don't?

- Much more internal checking.

Everything has parity or ECC. CPUs have internal checking. There are interrupts for "CPU made a mistake", and OSs which can take corrective action.
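To make that concrete, here's a minimal C sketch of the parity idea (plain even parity, which only detects a single flipped bit; real mainframe memory uses ECC codes that can also correct it, and similar checks run inside the CPU):

    #include <stdint.h>
    #include <stdbool.h>

    /* Even parity over a 64-bit word: one extra stored bit makes any
       single flipped bit detectable.  __builtin_parityll is a GCC/Clang
       builtin that returns 1 when an odd number of bits are set. */
    static bool even_parity(uint64_t word) {
        return __builtin_parityll(word) == 0;
    }

    /* On write, store the parity bit alongside the word; on read,
       recompute and compare.  A mismatch is the kind of event that
       raises a machine-check interrupt on real hardware. */
    static bool word_ok(uint64_t word, bool stored_parity) {
        return even_parity(word) == stored_parity;
    }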

- Channels.

Mainframes have channel controllers. These connect to devices on one end, and the main CPUs and memory on the other. They work in a standardized way, independent of the device. The channel controllers control what data goes to and from the devices. Sometimes they even control what a program can say to a device, so that an application can be given direct device access with access restrictions. This would, for example, let a database talk to a disk partition without going through the OS, but limit it to that partition. The channel controllers determine where peripherals put data in memory. Mainframes have specific I/O instructions for talking to the channel controllers. Drivers in user space have been around since the 1960s.
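For a flavor of what the CPU hands to a channel, here's a rough C sketch of a format-0 channel command word (CCW), simplified from the S/360 Principles of Operation; the command code shown is only illustrative, since real codes are device-specific:

    #include <stdint.h>

    /* Format-0 CCW, simplified.  The channel walks a chain of these,
       moving data between the device and memory on the CPU's behalf. */
    struct ccw {
        uint8_t  cmd;      /* command code (read/write/control/sense) */
        uint8_t  addr[3];  /* 24-bit data address in main memory      */
        uint8_t  flags;    /* e.g. 0x40 = command chaining: more CCWs follow */
        uint8_t  reserved;
        uint16_t count;    /* number of bytes to transfer             */
    };

    /* A one-CCW "program": read 4096 bytes into the buffer at buf_addr.
       The OS points the channel at this chain and issues a start-I/O
       instruction (SIO, later SSCH); the CPU then does other work until
       the channel interrupts with completion status. */
    static struct ccw make_read(uint32_t buf_addr) {
        struct ccw c = { .cmd = 0x02, .flags = 0, .reserved = 0, .count = 4096 };
        c.addr[0] = (buf_addr >> 16) & 0xFF;
        c.addr[1] = (buf_addr >> 8) & 0xFF;
        c.addr[2] = buf_addr & 0xFF;
        return c;
    }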

Minicomputers and microcomputers, on the other hand, once had peripherals directly connected to the memory bus. Programs talked to peripherals by storing and loading values into "device registers". There were no specific I/O instructions built into the CPU. Some devices accessed memory themselves, called "direct memory access", or DMA. They could write anywhere in memory, a security and reliability problem.
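In C terms, talking to one of those memory-mapped peripherals looked something like this (the device, its address, and its bit layout are invented purely for illustration):

    #include <stdint.h>

    /* Hypothetical serial port mapped into the address space.  The
       volatile stores/loads are ordinary bus cycles; the hardware
       decodes the address and routes them to the device, not to RAM. */
    #define UART_BASE    0xFFFF0000u
    #define UART_STATUS  (*(volatile uint8_t *)(UART_BASE + 0))
    #define UART_DATA    (*(volatile uint8_t *)(UART_BASE + 1))
    #define TX_READY     0x01u

    static void uart_putc(char c) {
        while (!(UART_STATUS & TX_READY))  /* poll the status register */
            ;
        UART_DATA = (uint8_t)c;            /* a plain store starts the transfer */
    }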

Microcomputer CPUs haven't worked that way for decades. Not since the era of ISA peripherals. But they still pretend to. Programs use store instructions to store into what appears to the CPU to be a block of memory. But that store really goes to what's called the "southbridge", which is logic that sends commands to devices. Those devices offer an interface which appears like memory, but is really piping a command to logic in the device. On the memory access side, the program stores into "device registers" which tell the "northbridge" to set up data transfers between devices and memory. Sometimes today there's a memory management unit (an IOMMU) between peripherals and main memory, to control where they can store.
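Here's a toy model of what that IOMMU buys you compared with the old "DMA anywhere" situation; the structures and names are made up, but the idea, translating and permission-checking every device access, is the real one:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* One entry in a (much simplified) IOMMU table: a window of
       device-visible addresses mapped onto physical memory. */
    struct io_map {
        uint64_t dev_addr;   /* address the device uses */
        uint64_t phys_addr;  /* where it actually lands */
        size_t   len;
        bool     writable;
    };

    /* Translate a device access; return 0 if it falls outside every
       window or violates permissions, i.e. the DMA is simply blocked. */
    static uint64_t iommu_translate(const struct io_map *tbl, size_t n,
                                    uint64_t dev_addr, bool is_write) {
        for (size_t i = 0; i < n; i++) {
            uint64_t off = dev_addr - tbl[i].dev_addr;
            if (dev_addr >= tbl[i].dev_addr && off < tbl[i].len &&
                (!is_write || tbl[i].writable))
                return tbl[i].phys_addr + off;
        }
        return 0;
    }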

The end result is something more complicated than a mainframe channel, but without the architectural advantages of isolating the devices from the CPU. Attempts have been made to fix this. Intel has tried various types of I/O controllers. But the architecture of Unix/Linux isn't channel-oriented, so it just adds a layer of indirection that makes drivers more difficult.

(Then came GPUs, which started as peripheral devices and gradually took over, but that's a whole other subject.)

- Virtual machine architecture.

The first computer that could virtualize itself was the IBM System/360 Model 67, in the 1960s. This worked well enough that all the System/370 machines had virtual machine capability. Unlike the mess in the x86 world, the virtual machine and the real machine look very similar. Similar enough that you can load an OS intended for raw hardware into a virtual machine. This even stacks; you can load another copy of the OS inside a VM of the first OS. I've heard of someone layering this 10 deep. The way x86 machines do virtualization required adding a special hardware layer (Intel's VT-x, AMD's AMD-V) underneath the existing privilege levels, although, over time, it's become more general in x86 land. Arm AArch64 considered virtualization from the start, and may be saner.
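For what it's worth, an x86 guest can usually tell it's in a VM just by asking: most hypervisors set the CPUID "hypervisor present" bit for their guests. A minimal check, assuming GCC or Clang on x86-64:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        /* CPUID leaf 1: ECX bit 31 is reserved for hypervisor use. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("hypervisor present: %s\n",
                   (ecx & (1u << 31)) ? "yes" : "no");
        return 0;
    }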




> So, what do mainframes have that microcomputers don't?

Some high-end UNIX servers have historically had many of the features you mention – yet they are generally not considered "mainframes". Conversely, low-end IBM mainframes (such as the 43xx series) often lacked some of those reliability features – and I'm not sure all the contemporary non-IBM mainframe platforms have them (especially the ones which are nowadays emulated).

I think the real definition of "mainframe" is historical rather than being in terms of any particular technological features. Primarily it refers to high-end commercial-focused (i.e. not supercomputer) computing platforms from before 1980, and their later compatible descendants (which weren't necessarily all that high-end).

(That said, "mainframe" and "supercomputer" are historically overlapping categories, since some 1980s supercomputers such as the Hitachi S-3800 were S/370-based and hence arguably belong to both categories; similar remarks could be made about the IBM 3838, or NEC SX-1 and SX-2.)

> But the architecture of Unix/Linux isn't channel-oriented

Unix and Linux run on IBM mainframes and use IBM mainframe channels for I/O. z/Linux uses IBM mainframe channels. So did AIX/370, AIX/ESA, VM/IX, IX/370, and Amdahl UTS. And z/OS is officially a Unix too (it is certified as such by The Open Group).

Channels are an implementation detail which usually only OS device drivers need to concern themselves with. (Historically some applications interacted with channels directly – e.g. EXCP on MVS – but IBM strongly discourages that approach nowadays; they'll recommend you use VSAM or DB2 instead, in which all those low-level details are hidden from the application.)


> Channels are an implementation detail which usually only OS device drivers need to concern themselves with.

Yes. The standardization of the interface was helpful, though. It simplifies drivers and means less trust has to be placed in the device.



