So are adders, but nobody is trying to replace them... When your overhead is high enough you can obviously poll, but why bother when you can set up an interrupt? They're more of a thing on embedded systems because you can get some seriously low latency numbers with them when used properly.
They've been replaced a few times under the hood. Message-signalled interrupts exist, and even regular interrupts on PCIe and HyperTransport look more like network packets or RDMA reads/writes than a true electrical signal.
DPDK turns off interrupts and manually polls the card in a loop AFAIK.
But for most cases, the underlying semantics are useful enough that the abstraction isn't going anywhere anytime soon.
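For anyone who hasn't seen poll-mode drivers, the spirit is just a tight loop asking the device "got anything?" instead of waiting to be interrupted. Here's a minimal C sketch against a made-up device (DEV_STATUS, DEV_DATA and their addresses are invented for illustration, not DPDK's actual API):

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers -- addresses and layout are
     * made up for this sketch, not a real NIC or DPDK's API. */
    #define DEV_STATUS (*(volatile uint32_t *)0x40000000u)  /* bit 0: data ready  */
    #define DEV_DATA   (*(volatile uint32_t *)0x40000004u)  /* next received word */

    static void handle_word(uint32_t w) { (void)w; /* process the payload */ }

    /* Poll-mode receive: dedicate a core to spinning on the status register
     * instead of taking an interrupt.  Latency is excellent; CPU efficiency
     * is not, which is the trade poll-mode designs deliberately make. */
    void rx_poll_loop(void)
    {
        for (;;) {
            while ((DEV_STATUS & 1u) == 0) {
                /* busy-wait: nothing will wake us, we just keep asking */
            }
            handle_word(DEV_DATA);  /* reading DATA clears the ready bit on
                                       this imaginary device */
        }
    }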
I'm not a hardware guy, so I don't know if this is feasible or has ever been implemented, but:
I could imagine "polite" interrupts—where instead of the processor immediately jumping into the ISR's code, it simply places the address of the ISR that "wants to" run into an in-memory ring-buffer via a system register, and then the OS can handle things from there (by e.g. dedicating a core to interrupt-handling by reading the ring-buffer, or just having all cores poll the ring-buffer and atomically update its pointer, etc.)
The major difference with this approach is that pushing the interrupt onto the ring-buffer wouldn't steal cycles from any of the cores; it would be handled by its own dedicated DMA-like logic that either has its own L1 cache lines, or is associated to a particular core's L1 cache (making that core into a conventional interrupt-handling core.) Therefore, you could run hard-real-time code on any cores you like, without needing to disable/mask interrupts; delivering interrupts would become the job of the OS, which could do so any way it liked (e.g. as a POSIX signal, a Mach message, a UDP datagram over an OS-provided domain socket, etc.) Most such mechanisms would come down to "shared memory that the process's runtime is expected to read from eventually."
There would still be one "impolite" hardware interrupt, of course: a pre-emption interrupt, so that the OS can de-schedule a process, or cause a process to jump to something like a POSIX signal handler. However, these "interrupts" would be entirely internal to the CPU—it'd always be one core [running in kernel code] interrupting another [running in userland code.] So this mechanism could be completely divorced from the PIC, which would only deliver "polite" interrupts. (And even this single "impolite" interrupt you could get away from, if the OS's userland processes aren't running on the metal, but rather running off an abstract machine with a reduction-based scheduler, like that of Erlang.)
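If it helps make the idea concrete, here's a userspace model of the "polite" delivery path in C11: the producer stands in for the proposed DMA-like logic posting pending-ISR entries, and whichever core the OS picks drains the ring instead of ever being asynchronously preempted. The names (post_irq, drain_irqs) and the ring layout are invented for the sketch.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SIZE 256  /* power of two */

    /* One pending "polite" interrupt: which handler wants to run, plus payload. */
    struct irq_entry {
        void (*isr)(uint32_t);
        uint32_t payload;
    };

    static struct irq_entry ring[RING_SIZE];
    static _Atomic uint32_t head;  /* advanced by the producer (the "DMA-like logic") */
    static _Atomic uint32_t tail;  /* advanced by the consuming core                  */

    /* Producer side: post an interrupt without stealing cycles from any core.
     * Returns false if the ring is full (real hardware would need a policy here). */
    bool post_irq(void (*isr)(uint32_t), uint32_t payload)
    {
        uint32_t h = atomic_load_explicit(&head, memory_order_relaxed);
        uint32_t t = atomic_load_explicit(&tail, memory_order_acquire);
        if (h - t == RING_SIZE)
            return false;
        ring[h % RING_SIZE] = (struct irq_entry){ isr, payload };
        atomic_store_explicit(&head, h + 1, memory_order_release);
        return true;
    }

    /* Consumer side: the dedicated interrupt-handling core (or any core the OS
     * chooses) runs this at its convenience instead of being preempted. */
    void drain_irqs(void)
    {
        uint32_t t = atomic_load_explicit(&tail, memory_order_relaxed);
        while (t != atomic_load_explicit(&head, memory_order_acquire)) {
            struct irq_entry e = ring[t % RING_SIZE];
            e.isr(e.payload);
            t++;
            atomic_store_explicit(&tail, t, memory_order_release);
        }
    }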
Schemes like that are in fact implemented by some devices on top of the existing PCIe interrupt mechanism. For example, GPUs have many different interrupt sources, so a common technique is to have an interrupt ring buffer that the GPU writes to, which contains all the information about the interrupt source and additional payload data.
An actual PCIe interrupt is sent to the CPU only when that interrupt ring buffer goes from empty to non-empty, and the driver's interrupt handler simply reads the whole ring buffer contents.
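In driver terms the pattern looks roughly like the sketch below (the ring layout and the gpu_ack_irq doorbell are invented stand-ins, not any vendor's actual interface): the hard interrupt only fires on the empty-to-non-empty transition, and the handler drains everything that has piled up since.

    #include <stdint.h>

    #define IRQ_RING_SLOTS 1024

    /* Invented layout for a device-written interrupt ring in system memory. */
    struct irq_record { uint32_t source; uint32_t payload[3]; };

    struct irq_ring {
        volatile uint32_t wptr;                   /* advanced by the device */
        uint32_t rptr;                            /* advanced by the driver */
        struct irq_record slots[IRQ_RING_SLOTS];
    };

    extern void dispatch(const struct irq_record *r); /* per-source handling   */
    extern void gpu_ack_irq(uint32_t new_rptr);       /* hypothetical doorbell */

    /* The actual PCIe interrupt handler: one hard interrupt may stand for many
     * queued events, so the handler simply drains whatever is in the ring. */
    void gpu_irq_handler(struct irq_ring *ring)
    {
        while (ring->rptr != ring->wptr) {
            dispatch(&ring->slots[ring->rptr % IRQ_RING_SLOTS]);
            ring->rptr++;
        }
        gpu_ack_irq(ring->rptr);  /* tell the device how far we have read */
    }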
It seems like your scheme would require dedicating an entire core to kernel interrupt handling, all the time (because if you let every core run userspace, and then a network packet arrived, it wouldn't be handled until some core went back into the kernel for another reason).
That seems strictly worse than the current design.
Right now, nothing. I used to work with HP-UX with "Real Time Extensions". One of the things we could do was shut off interrupts for certain processes.
There was a subset of system calls we could use while in this realtime mode (a lot of Unix system calls really rely on interrupts, and the whole OS is built on them...).
I think interrupts started from hardware signals, but were expanded to include software.
Consider the problem you are trying to solve. I/O devices, from the CPU's standpoint, are things that are very slow and require infrequent service, but when they do require service, have extremely strict hard-real-time latency requirements. Interrupts are one approach to being able to get useful work done while waiting for I/O, without burning a lot of CPU resources on polling.
Another approach is to have a processing hierarchy, like old mainframes did. Off-load the CPU with some kind of I/O processor or channel controller that can do the real-time data transfers, and "coalesce" low level interrupts into a single larger interrupt that captures more work -- think a single DMA-COMPLETE interrupt instead of a bunch of GET-SINGLE-BYTE interrupts.
You can of course push the processing into hardware but that is much harder to change than an I/O driver, so the interrupt-driven-driver design pattern wins on software maintainability.
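As a toy contrast between the two styles, here's a hypothetical UART receive path in C (all register names invented): the GET-SINGLE-BYTE version takes one interrupt per character, while the DMA-COMPLETE version takes one interrupt per buffer.

    #include <stddef.h>
    #include <stdint.h>

    #define BUF_LEN 4096
    static uint8_t rx_buf[BUF_LEN];
    static size_t  rx_count;

    /* Invented registers for a pretend UART and its DMA engine. */
    extern volatile uint8_t  UART_DATA;
    extern volatile uint32_t DMA_ADDR, DMA_LEN, DMA_START;

    extern void deliver(const uint8_t *buf, size_t len);  /* hand data upward */

    /* Style 1: GET-SINGLE-BYTE -- the CPU is interrupted once per character. */
    void uart_rx_isr(void)
    {
        rx_buf[rx_count++] = UART_DATA;
        if (rx_count == BUF_LEN) { deliver(rx_buf, rx_count); rx_count = 0; }
    }

    /* Style 2: DMA-COMPLETE -- an I/O engine moves the bytes on its own, and
     * the CPU sees a single interrupt when the whole buffer has landed. */
    void dma_rx_start(void)
    {
        DMA_ADDR  = (uint32_t)(uintptr_t)rx_buf;
        DMA_LEN   = BUF_LEN;
        DMA_START = 1;
    }

    void dma_complete_isr(void)
    {
        deliver(rx_buf, BUF_LEN);
        dma_rx_start();  /* re-arm for the next buffer */
    }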
Well, more like hyperthreads than cores, if I understand the Propeller correctly.
The earliest implementation of hyperthreaded hardware for doing I/O that I am aware of is the CDC 6000 series, announced in 1963, if I recall correctly. The CDC 6X00 Peripheral Processor Units (PPU) were actually a single processor logic cluster, with 10 copies of the PPU state (which old-timers called "the PPU barrel"), yielding effectively 10 I/O processors that ran at 1/10 the master clock frequency of the CPU. I/O drivers were written as PPU code that actually polled the I/O device. The PPU could scribble anywhere it wanted in main memory, so the PPU did all the work of moving data from the peripheral into main memory, or out from memory to the device. Interrupts were very simple -- the PPU computed an interrupt vector address and more-or-less just jammed it into the CPU program counter. But the net effect was that on a Cyber 6000 (later Cyber 170-series) machine, much of the I/O was delegated to the PPUs, and thus a single interrupt represented the completion of a large amount of work.
It is a really great device. The two most common objections are price and language support.
In many designs, the chip can replace several.
Early on, yes. It was SPIN and PASM (assembly language, but nowhere near as hard as one would imagine). Today, C and other languages are well supported.
It is a true multi-processor. The developer can choose freely between concurrency and parallelism as needed. Combining code objects is crazy easy too.
Concurrency can be something like a video display on one core (or COG, as they are known), with keyboard, SD card, and mouse on another. Once done, those two cores would appear much like hardware to a program running on another one.
Parallelism could be several cores all computing something at the same time. Doing a Mandelbrot set is an example of both.
The main program directs the set computation, one core outputs video, and the remaining ones work a little like shaders do, all computing pixels.
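Roughly that structure in C, as a sketch only -- the cog_start primitive and the shared-row-counter mailbox here are made up for illustration; the real Propeller toolchains have their own cog-launch APIs. One cog streams video, the rest pull rows from shared memory and compute pixels, shader-style.

    #include <stdatomic.h>
    #include <stdint.h>

    #define WIDTH  320
    #define HEIGHT 240

    /* Shared "hub memory": the framebuffer plus a row counter workers pull from. */
    static uint16_t framebuffer[WIDTH * HEIGHT];
    static _Atomic int next_row;

    /* Stand-in for the toolchain's real cog-launch primitive. */
    extern void cog_start(void (*entry)(void *), void *arg, void *stack, int stack_bytes);

    extern void video_cog(void *fb);  /* streams the framebuffer out, like hardware */

    static uint16_t mandel_pixel(int x, int y)
    {
        (void)x; (void)y;
        return 0;  /* stub: the real version iterates z = z*z + c and colors by escape time */
    }

    /* Worker cog: behaves like a little shader, pulling rows until none are left. */
    static void worker_cog(void *unused)
    {
        (void)unused;
        for (;;) {
            int y = atomic_fetch_add(&next_row, 1);
            if (y >= HEIGHT) break;
            for (int x = 0; x < WIDTH; x++)
                framebuffer[y * WIDTH + x] = mandel_pixel(x, y);
        }
    }

    static uint8_t stacks[7][256];

    void main_cog(void)
    {
        cog_start(video_cog, framebuffer, stacks[0], sizeof stacks[0]);  /* video "hardware" */
        for (int i = 1; i < 7; i++)                                      /* parallel workers  */
            cog_start(worker_cog, 0, stacks[i], sizeof stacks[i]);
        /* the main cog directs the computation and could join the workers here */
    }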
Interrupts could have made a few niche things a tiny bit better. Mostly they really are not needed.
I had a ton of fun programming and doing some automation with this chip.
Its second gen will tape out, and early-revision chips already have: real chips, back from the fab for a final round of polish and testing.
On those, every single pin has a DAC and ADC, a smart-pin processor, and a variety of modes, pull-ups, and pull-downs, all configurable in software. It is a little like having a mini scope with good triggers on each pin.
Interrupts are present, but there are no global ones, and none can interfere with other cores.
This will keep the Lego-like feature of grabbing drivers and other code and having it act like built-in hardware, while at the same time making for event-driven code that is easily shared and/or combined with other code.
Interrupts are called events; there are 16 of them and three priority levels, and that is per core, with 8 cores total.
People can build crazy complex things able to input signals or data, process with high speed accurate CORDIC, and stream data or signals out.
Freaking playground. I have been running an FPGA dev system for a while now. That is 80 MHz.
Real chips will clock at 250 MHz and up through about 350 MHz.
You could get rid of interrupts if you took a completely different approach to CPU architecture. The high-level view of how a CPU works is that you load in a program, which involves setting an instruction pointer at the head of a list of instructions. This is the very first thing you do when you write a kernel; you set the instruction pointer to the entry function of your kernel and the CPU goes from there. Without interrupts, it would do that forever. You need the interrupt in order to make the CPU look at any instruction not prescribed by the program you loaded in the first place, which is how you get any sort of I/O going.
An alternative architecture that would not need interrupts would be something that is driven by data. Instead of loading in an initial program, you would load in some initial data, and the CPU's execution would be driven entirely by that. On every cycle, the CPU would look at any new data that has arrived and process it accordingly. In this view, key-presses or timer ticks would just be like any other data flowing through the system.
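A software caricature of that model (the token queue and types are made up for illustration): nothing runs until a piece of data arrives, and the data itself selects what executes.

    #include <stdbool.h>
    #include <stdint.h>

    /* Every external happening -- key press, timer tick, arriving byte -- is
     * just a datum flowing into the machine. */
    enum token_kind { TOK_KEY, TOK_TIMER, TOK_NET_BYTE };

    struct token { enum token_kind kind; uint32_t value; };

    /* Stand-in for the hardware front end that deposits arriving data. */
    extern bool next_token(struct token *out);

    static void on_key(uint32_t code)   { (void)code; }
    static void on_tick(uint32_t count) { (void)count; }
    static void on_byte(uint32_t b)     { (void)b; }

    /* The "CPU": each cycle it consumes whatever data arrived and lets that
     * data decide what executes.  There is no program counter to hijack,
     * so there is nothing to interrupt. */
    void dataflow_machine(void)
    {
        struct token t;
        for (;;) {
            if (!next_token(&t))
                continue;               /* or sleep until data shows up */
            switch (t.kind) {
            case TOK_KEY:      on_key(t.value);  break;
            case TOK_TIMER:    on_tick(t.value); break;
            case TOK_NET_BYTE: on_byte(t.value); break;
            }
        }
    }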
It is fundamentally how processors work. That is how context switching happens, I/O happens, etc. I mean, if you are at work and your kid needs to go to the doctor, the school will call to interrupt you. If no one did, what would happen to your kid?
I/O can happen via a message queue that the OS looks at from time to time, context switching can be initiated by plenty of algorithms that don't need to invalidate your pipeline state, and OS calls can be entirely synchronous.
Interrupts are mostly a bad legacy from the time when our computers were slow, kept around for compatibility reasons.
A core that is otherwise sleeping, an instruction countdown, a synchronous microinstruction that does a "finish this and jump if the list is non-empty" check triggered by a timer...
There is a lot of machinery implied by an interrupt. Computers need some of it for some functions, but never the entire lot.
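One of those alternatives, the instruction countdown, can at least be sketched in software the way reduction-based schedulers do it (names here are invented): work is charged against a budget, and pending events are only checked when the budget runs out, at a point where the task's own state is consistent.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define BUDGET 2000                      /* "reductions" between checks */

    static _Atomic bool event_pending;       /* set by whatever delivers I/O  */

    extern bool run_one_step(void);          /* one unit of the task's work;
                                                returns false when finished   */
    extern void handle_pending_events(void); /* drain queues, reschedule, ... */

    /* Cooperative alternative to asynchronous interruption: the running code
     * is never yanked off the CPU mid-flight; it simply checks a flag every
     * BUDGET steps, at a point where its own state is consistent. */
    void run_task(void)
    {
        int countdown = BUDGET;
        while (run_one_step()) {
            if (--countdown == 0) {
                countdown = BUDGET;
                if (atomic_exchange(&event_pending, false))
                    handle_pending_events();
            }
        }
    }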
To me, an interrupt is a signal. It can be as complex or as simple as one desires. I'm unsure that concept can ever be replaced. Interrupt controllers, on the other hand, are very complex, which may warrant different abstractions someday.
Interrupts at their base are signals for crossing from an asynchronous, parallel world into a synchronous, sequential world. Some sort of signal needs to exist for that purpose.
There is likely a way to cross domains that can be formally reasoned about more easily. Although, like functional programming, implementing the abstraction directly on silicon probably wouldn't make much sense. Process calculus is the place to start if one is interested in this line.
Interrupts are still the main method, but modern OSes use the LAPIC/APIC, which is a more modern form of interrupt controller. It has a few more capabilities, and I think there is one local APIC per CPU core. Basically, interrupts are good, whereas polling is the legacy method (very legacy :D) of querying hardware until it's ready. Via interrupts, hardware can let the OS know, so the OS isn't wasting cycles waiting on hardware.