* CLIs: https://rust-lang-nursery.github.io/cli-wg/
* WASM: https://rustwasm.github.io/book/
* Network services: https://aturon.github.io/apr/
* Embedded: https://rust-embedded.github.io/book/
They're all at various stages of completion at the moment.
Originally, our first push was to get Cortex M bare metal microcontrollers as a "Tier 1" stable target for the 2018 edition of Rust.
Over the last couple of months, we've been expanding, and now have subteams for a bunch of different topics, including chip support, drivers, documentation, tooling, etc. We're mostly focused on helping people who are getting started with embedded Rust, as well as giving feedback to the compiler teams, etc.
It would be interesting to hear more about your porting attempt, we also have a blog - https://rust-embedded.github.io/blog/ - if you'd like to share it as a blog post.
As Steve mentioned, https://github.com/rust-embedded/wg is our main coordination repo, and has links to most of the stuff we're actively working on.
Um... the Local APIC interface needed to catch an interrupt and wire up the timer is, if anything, simpler than what is presented here.
> masking all interrupts and remapping the IRQs. Masking all interrupts disables them in the PIC. Remapping is what you probably already did when you used the PIC: you want interrupt requests to start at 32 instead of 0 to avoid conflicts with the exceptions. You should then avoid using these interrupt vectors for other purposes. This is necessary because even though you masked all interrupts on the PIC, it could still give out spurious interrupts, which your kernel would then misinterpret as exceptions.
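The remap arithmetic itself is tiny. A minimal sketch (constants and function names are my own, not from the tutorial; 32 and 40 are the conventional offsets for the primary/secondary PIC):

```rust
// After remapping, hardware IRQ n is delivered on vector 32 + n, keeping
// vectors 0-31 free for CPU exceptions - the conflict described above.
const PIC_1_OFFSET: u8 = 32; // first vector after the 32 CPU exceptions
const PIC_2_OFFSET: u8 = PIC_1_OFFSET + 8; // secondary (cascaded) PIC

/// Map a hardware IRQ line (0-15) to the interrupt vector it arrives on.
fn irq_to_vector(irq: u8) -> u8 {
    assert!(irq < 16);
    if irq < 8 {
        PIC_1_OFFSET + irq
    } else {
        PIC_2_OFFSET + (irq - 8)
    }
}

/// With the remap in place, anything below 32 is a CPU exception,
/// never a (possibly spurious) hardware interrupt.
fn is_cpu_exception(vector: u8) -> bool {
    vector < PIC_1_OFFSET
}

fn main() {
    assert_eq!(irq_to_vector(0), 32); // timer
    assert_eq!(irq_to_vector(1), 33); // keyboard
    assert_eq!(irq_to_vector(8), 40); // first line on the secondary PIC
    assert!(is_cpu_exception(13)); // vector 13 = general protection fault
    assert!(!is_cpu_exception(irq_to_vector(7))); // IRQ 7, the spurious candidate
}
```

Without the offset, a spurious IRQ 7 would land on vector 7 and look like a device-not-available exception.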
This path is what I've seen out of 99% of hobby OS tutorials I've read.
Also, use of real mode (still required for BIOS calls like e820) requires the ability to get back out of real mode, so you need the bootstrapping stuff in a real OS even if it's not in the tutorial.
But the register poking in the PIC/PIT is just silly. Turn that stuff off and use the correct hardware, even in a tutorial. Unless it's a tutorial on PC architecture history, I guess.
I'm not particularly well versed in UEFI/BIOS features, but shouldn't BIOS calls like e820 be avoided in favor of equivalent UEFI functions?
> the hardware gets in your way there. Long mode properly requires paging to be enabled, which means that you have the choice between a complicated hardware bootstrapping procedure to enable it, or a complicated bootloader environment which has already grabbed and used chunks of memory for page tables you need to not step on.
Just figuring out the UEFI's page-table structure seems much less burdensome to me. You'd have to set the tables up yourself regardless. Is the documentation/environment really so poor as to make just doing it yourself easier?
In theory you should just be able to include a header or use a crate (a cousin comment linked one) and not have to write any assembly.
Should, but can't, because they don't work in general. Windows and Linux get their memory map from e820, so that's all the system vendors test.
UEFI supplies you with a memory map that allows you to see what memory is untouchable and large chunks of it can be remapped away into memory regions you don't touch.
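Walking that map is mostly a filter over descriptors. A simplified sketch (these are not the real UEFI types, just stand-ins shaped like `EFI_MEMORY_DESCRIPTOR` to show the idea):

```rust
// Simplified stand-ins for UEFI memory map descriptors: the kernel keeps
// only the regions the firmware marked as free, conventional RAM.
#[derive(Clone, Copy, PartialEq)]
enum MemoryType {
    Conventional, // free RAM the kernel may claim
    LoaderData,   // used by the bootloader (e.g. its page tables)
    Reserved,     // firmware/MMIO - never touch
}

struct Descriptor {
    phys_start: u64,
    pages: u64, // 4 KiB pages
    ty: MemoryType,
}

/// Return (start, length-in-bytes) for every region a kernel may freely use.
fn usable_regions(map: &[Descriptor]) -> Vec<(u64, u64)> {
    map.iter()
        .filter(|d| d.ty == MemoryType::Conventional)
        .map(|d| (d.phys_start, d.pages * 4096))
        .collect()
}

fn main() {
    let map = [
        Descriptor { phys_start: 0x0, pages: 160, ty: MemoryType::Reserved },
        Descriptor { phys_start: 0x100000, pages: 256, ty: MemoryType::Conventional },
        Descriptor { phys_start: 0x200000, pages: 16, ty: MemoryType::LoaderData },
    ];
    // Only the conventional region survives the filter.
    assert_eq!(usable_regions(&map), vec![(0x100000, 256 * 4096)]);
}
```

The real map also flags whether a region stays usable after `ExitBootServices`, but the shape of the loop is the same.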
I'm using it for my kernel development tinkering and it works fine except for being a bit incomplete as far as implementing all the services.
UEFI just isn’t as well documented in a way that hobbyists can find, it seems. I know I’ve stuck to multiboot because there are so many examples and resources, and it’s one of the least interesting parts of the process, so I go with what’s simple. (Which is no longer multiboot and is now Phil’s bootloader used in this tutorial.)
(cool series! keep it up!!)
> As already mentioned, the 8259 PIC has been superseded by the APIC, a controller with more capabilities and multicore support. In the next post we will explore this controller and learn how to use its integrated timer and how to set interrupt priorities.
DPDK turns off interrupts and manually polls the card in a loop AFAIK.
But for most cases, the underlying semantics are useful enough that the abstraction isn't going anywhere anytime soon.
I could imagine "polite" interrupts—where instead of the processor immediately jumping into the ISR's code, it simply places the address of the ISR that "wants to" run into an in-memory ring-buffer via a system register, and then the OS can handle things from there (by e.g. dedicating a core to interrupt-handling by reading the ring-buffer, or just having all cores poll the ring-buffer and atomically update its pointer, etc.)
The major difference with this approach is that pushing the interrupt onto the ring-buffer wouldn't steal cycles from any of the cores; it would be handled by its own dedicated DMA-like logic that either has its own L1 cache lines, or is associated with a particular core's L1 cache (making that core into a conventional interrupt-handling core). Therefore, you could run hard-real-time code on any cores you like, without needing to disable/mask interrupts; delivering interrupts would become the job of the OS, which could do so any way it liked (e.g. as a POSIX signal, a Mach message, a UDP datagram over an OS-provided domain socket, etc.) Most such mechanisms would come down to "shared memory that the process's runtime is expected to read from eventually."
There would still be one "impolite" hardware interrupt, of course: a pre-emption interrupt, so that the OS can de-schedule a process, or cause a process to jump to something like a POSIX signal handler. However, these "interrupts" would be entirely internal to the CPU—it'd always be one core [running in kernel code] interrupting another [running in userland code.] So this mechanism could be completely divorced from the PIC, which would only deliver "polite" interrupts. (And even this single "impolite" interrupt you could get away from, if the OS's userland processes aren't running on the metal, but rather running off an abstract machine with a reduction-based scheduler, like that of Erlang.)
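To make the "polite" mechanism concrete, here's a toy single-threaded model of it (every name here is invented for illustration; real hardware would do the `post` side with dedicated logic, and the ring would live in memory the OS maps):

```rust
// Toy model of "polite" interrupts: the controller only appends
// pending-handler entries to a ring buffer; a polling (or dedicated
// interrupt-handling) core drains it whenever the OS decides to.
struct InterruptRing {
    slots: Vec<Option<usize>>, // pending "ISR addresses" (modeled as IDs)
    head: usize,               // next slot the poller reads
    tail: usize,               // next slot the controller writes
}

impl InterruptRing {
    fn new(capacity: usize) -> Self {
        InterruptRing { slots: vec![None; capacity], head: 0, tail: 0 }
    }

    /// What the DMA-like logic would do on an interrupt: no core is
    /// preempted, the event is just recorded.
    fn post(&mut self, isr: usize) -> bool {
        if self.slots[self.tail].is_some() {
            return false; // ring full - real hardware would need a policy here
        }
        self.slots[self.tail] = Some(isr);
        self.tail = (self.tail + 1) % self.slots.len();
        true
    }

    /// What the polling core does; the OS then delivers these however
    /// it likes (signal, message, datagram, ...).
    fn drain(&mut self) -> Vec<usize> {
        let mut ready = Vec::new();
        while let Some(isr) = self.slots[self.head].take() {
            ready.push(isr);
            self.head = (self.head + 1) % self.slots.len();
        }
        ready
    }
}

fn main() {
    let mut ring = InterruptRing::new(4);
    assert!(ring.post(0xA0)); // e.g. timer handler
    assert!(ring.post(0xB0)); // e.g. NIC handler
    assert_eq!(ring.drain(), vec![0xA0, 0xB0]);
    assert_eq!(ring.drain(), Vec::<usize>::new()); // nothing pending
}
```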
An actual PCIe interrupt is sent to the CPU only when that interrupt ring buffer goes from empty to non-empty, and the driver's interrupt handler simply reads the whole ring buffer contents.
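That empty-to-non-empty edge is what makes the coalescing work: N queued completions cost one interrupt, not N. A rough sketch of the bookkeeping (names invented; real devices do this in hardware with doorbell registers):

```rust
// Sketch of empty->non-empty interrupt coalescing: the device raises a
// real interrupt only when the queue transitions from empty, and the
// driver's handler reads the whole queue in one pass.
struct CompletionQueue {
    pending: Vec<u32>,
    interrupts_raised: u32,
}

impl CompletionQueue {
    fn new() -> Self {
        CompletionQueue { pending: Vec::new(), interrupts_raised: 0 }
    }

    /// Device side: record a completion, interrupting only on the edge.
    fn complete(&mut self, id: u32) {
        if self.pending.is_empty() {
            self.interrupts_raised += 1; // the one "real" PCIe interrupt
        }
        self.pending.push(id);
    }

    /// Driver side: the handler drains everything that accumulated.
    fn handle_interrupt(&mut self) -> Vec<u32> {
        std::mem::take(&mut self.pending)
    }
}

fn main() {
    let mut q = CompletionQueue::new();
    q.complete(1);
    q.complete(2);
    q.complete(3); // still only one interrupt for all three
    assert_eq!(q.interrupts_raised, 1);
    assert_eq!(q.handle_interrupt(), vec![1, 2, 3]);
    q.complete(4); // queue went empty -> non-empty again
    assert_eq!(q.interrupts_raised, 2);
}
```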
That seems strictly worse than the current design.
There was a subset of system calls we could use while in this realtime mode (a lot of Unix system calls really rely on interrupts, and the whole OS is built on them).
I think interrupts started from hardware signals, but were expanded to include software.
Another approach is to have a processing hierarchy, like old mainframes did. Off-load the CPU with some kind of I/O processor or channel controller that can do the real-time data transfers, and "coalesce" low level interrupts into a single larger interrupt that captures more work -- think a single DMA-COMPLETE interrupt instead of a bunch of GET-SINGLE-BYTE interrupts.
You can of course push the processing into hardware but that is much harder to change than an I/O driver, so the interrupt-driven-driver design pattern wins on software maintainability.
The earliest implementation of hyperthreaded hardware for doing I/O that I am aware of is the CDC 6000 series, announced in 1959, if I recall correctly. The CDC 6X00 Peripheral Processor Units (PPU) were actually a single processor logic cluster, with 10 copies of the PPU state (which old-timers called "the PPU barrel"), yielding effectively 10 I/O processors that ran at 1/10 the master clock frequency of the CPU. I/O drivers were written as PPU code that actually polled the I/O device. The PPU could scribble anywhere it wanted in main memory, so the PPU did all the work of moving data from the peripheral into main, or out from mem to device. Interrupts were very simple -- the PPU computed an interrupt vector address and more-or-less just jammed it into the CPU program counter. But the net effect was that on a Cyber 6000 (later Cyber 170-series) machine, much of the I/O was delegated to the PPU's, and thus a single interrupt represented the completion of a large amount of work.
They also get used for industrial automation and data collection.
There is some discussion there.
It is a really great device. The two most common objections are price and language support.
In many designs, the chip can replace several.
Early on, yes. It was SPIN and PASM (an assembly language, but nowhere near as hard as one would imagine). Today, C and other languages are well supported.
It is a true multi-processor. The developer can choose freely between concurrency and parallelism as needed. Combining code objects is crazy easy too.
Concurrency can be something like a video display on one core (or COG, as they are known), with keyboard, SD card, and mouse on another. Once done, those two cores would appear much like hardware to a program running on another one.
Parallelism could be several cores all computing something at the same time. Doing a Mandelbrot set is an example of both.
The main program directs the set computation, one core outputs video, and the remaining ones work a little like shaders do, all computing pixels.
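A rough analogue of that division of labor, using OS threads in place of COGs (this is just to show the work split, not Propeller code): a coordinating function hands each worker a band of rows, and the workers compute pixels independently.

```rust
// Split a Mandelbrot render into row bands, one band per worker thread,
// the way the COGs above each take a share of the pixels.
use std::thread;

fn escape_time(cx: f64, cy: f64, limit: u32) -> u32 {
    let (mut x, mut y) = (0.0_f64, 0.0_f64);
    for i in 0..limit {
        if x * x + y * y > 4.0 {
            return i;
        }
        let nx = x * x - y * y + cx;
        y = 2.0 * x * y + cy;
        x = nx;
    }
    limit
}

fn mandelbrot_rows(width: usize, height: usize, workers: usize) -> Vec<u32> {
    let rows_per = (height + workers - 1) / workers;
    let mut handles = Vec::new();
    for w in 0..workers {
        let (y0, y1) = (w * rows_per, ((w + 1) * rows_per).min(height));
        handles.push(thread::spawn(move || {
            let mut band = Vec::new();
            for py in y0..y1 {
                for px in 0..width {
                    // Map the pixel into the region [-2, 1] x [-1, 1].
                    let cx = px as f64 / width as f64 * 3.0 - 2.0;
                    let cy = py as f64 / height as f64 * 2.0 - 1.0;
                    band.push(escape_time(cx, cy, 255));
                }
            }
            band
        }));
    }
    // Joining in spawn order stitches the bands back into one image.
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    // Same image whether one "core" or four compute it.
    assert_eq!(mandelbrot_rows(64, 48, 1), mandelbrot_rows(64, 48, 4));
}
```

On the real chip the "join" step disappears: each COG writes its pixels straight into the shared hub memory the video COG is scanning out.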
Interrupts could have made a few niche things a tiny bit better. Mostly they really are not needed.
I had a ton of fun programming and doing some automation with this chip.
Its second gen will tape out, and early revision chips already have. Real chips, back from the fab for a final round of polish and testing.
On those, every single pin has a DAC, an ADC, a smart pin processor, and a variety of modes, pullups, and pulldowns, all configurable in software. It is a little like having a mini scope with good triggers on each pin.
Interrupts are present, but no global ones, nor with any ability to interfere with other cores.
This will keep the Lego-like feature of grabbing drivers and other code and having it act like built-in hardware, while at the same time making for event-driven code that is easily shared and/or combined with other code.
Interrupts are called events; there are 16 of them, with three priority levels, and that is per core, 8 cores total.
People can build crazy complex things able to input signals or data, process with high speed accurate CORDIC, and stream data or signals out.
Freaking playground. I have been running an FPGA dev system for a while now. That runs at 80 MHz.
Real chips will clock at 250 MHz and up through about 350.
An alternative architecture that would not need interrupts would be something that is driven by data. Instead of loading in an initial program, you would load in some initial data, and the CPU's execution would be driven entirely by that. On every cycle, the CPU would look at any new data that has arrived and process it accordingly. In this view, key-presses or timer ticks would just be like any other data flowing through the system.
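A toy sketch of that idea (invented types, just to show the shape): nothing runs until data arrives, and a key press or timer tick is just another message in the queue.

```rust
// Data-driven execution model: each "cycle", the machine looks at whatever
// data has arrived and processes it; there is no interrupt to field.
use std::collections::VecDeque;

enum Data {
    KeyPress(char),
    TimerTick(u64),
}

fn run(mut inbox: VecDeque<Data>) -> Vec<String> {
    let mut log = Vec::new();
    while let Some(msg) = inbox.pop_front() {
        match msg {
            Data::KeyPress(c) => log.push(format!("key {c}")),
            Data::TimerTick(t) => log.push(format!("tick {t}")),
        }
    }
    log
}

fn main() {
    let inbox = VecDeque::from(vec![Data::KeyPress('a'), Data::TimerTick(1)]);
    assert_eq!(run(inbox), vec!["key a".to_string(), "tick 1".to_string()]);
}
```

The open question, as the reply below notes, is what wakes the loop when the inbox is empty: on current hardware that's either a timer interrupt or busy polling.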
Interrupts are mostly a bad legacy from the time when our computers were slow, kept around for compatibility reasons.
How is that to be implemented if not by timer interrupt?
There is a lot of stuff implied by an interrupt. Computers need some of them for some functions, but never the entire lot.
There is likely a way to cross domains that can be formally reasoned about more easily. Although, like functional programming, implementing the abstraction directly on silicon probably wouldn't make much sense. Process calculus is the place to start if one is interested in this line.