Yes, it really was embedded. It fit in 9U (15.75 inches) and was rugged enough to fly in a military aircraft. The OS would let users turn off interrupts in order to squeeze out every bit of performance.
We could also run the OS on the SHARC DSP, which is a word-addressed Harvard-architecture chip without an MMU. Running on that allowed 3x the CPU density. One system got up to over 1000 cores.
IO would typically come in via DMA, over a 32-bit link running at 40 MHz. There could be many of these.
The end result was generally something like a radar. The end-user sees a radar, buys a radar, and uses a radar. They don't see a computer. The computer is just a component embedded in the radar.
Even back then, FPGAs and custom ASICs would be part of the compute fabric. I left that out. For example, there was an FFT accelerator chip. These days the companies add in a mix of FPGAs and GPUs, but the CPUs are still there. It's all the same stuff today, but faster.
To the end user, it was never a computer. It was a radar, a video processor, an MRI scanner, a sonar, an ultrasound, a chip wafer inspection device, a laser with real-time mirror warping, or some other tool.
They probably should be considered the norm for embedded work without unusually high performance requirements.
1. Example hardware in this book costs around 98RMB and has a huge form factor. AVR/ATmega328P-based platforms like the Arduino Nano are currently ~10RMB, and can be beefed up to full Ethernet connectivity with prebuilt hardware modules for another 20RMB. So in short it is possible to procure over three(!!!) embedded, Ethernet-capable AVR platforms for the same price as this hardware, no soldering required. Why not use something cheaper and more broadly available? It would help more people get on board (no pun intended). Oh wait, there's no AVR support. I see...
2. The nominal target hardware in the book seems overpowered/overpriced/out of date. For example, the target used early in the book is the LM3S6965, which is NRND (Not Recommended for New Designs). Probably as a result of the NRND status (I guess people buying up stock to keep old designs in manufacturing), it costs over 125RMB for the chip alone. The manufacturer's recommended replacement is the TM4C12x series, the cheapest development board for which is the MINI-M4FORTIVA @ 22RMB (which features JTAG). ICs in this series begin at around 22RMB, still many times the AVR chip price, and with IMHO far too much hardware for novices (64KB flash, 12KB RAM, 12x 12-bit ADC channels, 49+ GPIO channels...). A direct quote from later in the book: "In this example, let's assume we have a Texas Instruments TM4C123 - a middling 80MHz Cortex-M4 with 256 KiB of Flash." That's not middling, that's ridiculously overpowered for almost anything embedded.
3. See example code at https://rust-embedded.github.io/book/start/registers.html and compare C and Rust Volatile Access examples at https://rust-embedded.github.io/book/c-tips/index.html and see which syntax you prefer...
This approach allows you to have the advantages of the higher-level language while still being able to utilize existing libraries.
While not for an embedded project, I used this approach to write a Rust application that interfaces with the Helix Perforce C++ API.
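The wrapping approach looks roughly like this. A minimal sketch, using libc's `abs` as a stand-in for a real third-party C/C++ API; your own C code would be compiled into the build with a `build.rs` and the `cc` crate, then declared the same way:

```rust
use std::os::raw::c_int;

// Declare a function from an existing C library (libc's abs here).
// For your own C/C++ sources, the cc crate compiles them in and the
// declaration block stays identical.
extern "C" {
    fn abs(input: c_int) -> c_int;
}

// Safe wrapper: the rest of the program never touches `unsafe`,
// which is where the "advantages of the higher-level language" live.
fn c_abs(x: i32) -> i32 {
    unsafe { abs(x) }
}

fn main() {
    println!("abs(-7) = {}", c_abs(-7));
}
```

The same shape scales to bigger APIs: one thin `unsafe` FFI layer, then an idiomatic safe Rust surface over it.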
Also, regarding your point #3, I prefer the Rust version by far. Code is written once and read many times. Without needing to jump between places or rely on good IDE contextual information, the Rust version says:
- SIGNALLED is intended to be modified.
- ISR must conform to the rules of an interrupt handler -- be careful what you do in this method.
- The write to SIGNALLED is unsafe and volatile.
- The read of SIGNALLED is unsafe and volatile. You can be sure the while condition check won't be optimized away because you see the use of the "read_volatile" -- if you're tracking down a bug, no need to go look to see if SIGNALLED is marked volatile.
Plus, in the comments of the Rust sample, the author recommends using a higher-level primitive, to avoid the direct calls to read/write_volatile.
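For reference, the pattern under discussion looks roughly like this. A host-runnable sketch, with a plain function standing in for a real `#[interrupt]` handler:

```rust
use core::ptr::{addr_of, addr_of_mut, read_volatile, write_volatile};

// Flag shared between an interrupt handler and the main loop.
// Every access is visibly unsafe and volatile at the call site.
static mut SIGNALLED: bool = false;

// Stand-in for an interrupt handler; on real hardware this would be
// registered via something like the #[interrupt] attribute.
fn isr() {
    unsafe { write_volatile(addr_of_mut!(SIGNALLED), true) };
}

fn main() {
    isr(); // simulate the interrupt firing
    // The condition check cannot be optimized away: the volatile read
    // is explicit, so there's no need to go check whether SIGNALLED
    // was declared volatile somewhere else.
    while unsafe { !read_volatile(addr_of!(SIGNALLED)) } {}
    unsafe { write_volatile(addr_of_mut!(SIGNALLED), false) };
    println!("signalled and cleared");
}
```

This is exactly the point about reading code: every property listed above is visible right where the access happens.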
A good IDE would show type hints: "volatile bool" for C and "Atomic<bool>" for Rust, or something similar.
 https://crates.io/crates/cc .
So, to set a value into a (volatile) boolean I have to call a function, passing a pointer, inside an unsafe block? Why on earth is that clearer than or preferable to the C code? I know, you'll list several reasons, but please, it's not clearer, it's voodoo magic. In fact, it is the last thing I would (intuitively) think of doing, since it's known that operating through a pointer to a volatile variable may be UB in C++.
>> the author recommends using a higher-level primitive
Again, to read/write a boolean? Are we talking embedded here?
>> Rust does not currently have a rigorously and formally defined memory model, so the precise semantics of what "volatile" means here is subject to change over time.
I cannot bring Rust into any serious development yet...
C did not have a rigorously and formally defined memory model until 2011. We did C for almost 40 years before we got that.
Go read what Linus Torvalds has to say about "volatile", and then go rip it out of all your code. You should not be using that keyword, except possibly for a software-incremented global clock tick counter residing in normal RAM.
You need to call a function that will properly perform any required memory fences. If you don't do this, the CPU or bridges may reorder the memory accesses.
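In Rust, the usual "function that performs the fences" is an atomic operation: the Acquire/Release orderings include the ordering guarantees a bare volatile flag lacks. A minimal host-runnable sketch of the same signal flag done this way:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// Shared flag, no `unsafe` and no volatile needed: atomics give both
// un-optimizable accesses and the required memory-ordering fences.
static SIGNALLED: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        // Release: everything written before this store becomes
        // visible to whoever observes the flag with Acquire.
        SIGNALLED.store(true, Ordering::Release);
    });

    // Acquire pairs with the Release store above, so the CPU and
    // compiler cannot reorder dependent reads ahead of the check.
    while !SIGNALLED.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    producer.join().unwrap();
    println!("observed");
}
```

On a single-core MCU the orderings compile down to almost nothing, so this costs little over the volatile version while being correct on multi-core parts too.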
Rust might have problems, but the issues listed are not problems. My immediate concern is the availability of inline assembly. I would hope that this is possible and can be inlined into code that isn't marked unsafe.
Well, you obviously have a different idea of embedded. Sure, there are different requirements for different systems, but in IoT, my previous company shipped tens of thousands of devices (fleet monitoring, smart sensor networks, etc.) and everything was based on the STM32 L & F series: Cortex-M0, M3 & M4.
Now, in automotive we have MPC57xx.
> Re-implementing this stuff isn't just a minor drag, it's commercial insanity
Absolutely agree. If there is adoption, it will be incredibly slow. Logical, but extremely frustrating.
Around 2009ish, I had a coworker who swore anything over 16 bits wasn't embedded. We were working on an MMU-less uClinux ARM7 system with 16 MB of RAM and no networking, and that didn't meet his criteria for "true embedded".
The next job I had was a piece of customer premises equipment that ran a fairly high powered ARM board with dual gigabit ethernet links and a lot of user space written in PHP. No one on that team questioned that it was an embedded system.
It always seemed like a strange thing to gatekeep on. Personally, I base my definition more on the use case than the power of the machine.
When I were a lad, if you had more than a dozen bytes of RAM it wasn't REAL embedded. /shakes-cane
I generally use your definition, though. Any computer that the end user doesn't think of as a computer (and isn't expected to maintain or directly use) is embedded, whether it's a camera with touchscreen GUI or an ABS processor in a car.
The only semi-valid reason to use this chip IMO is because people want to keep using existing firmware as-is.
If you read the stepper code for Marlin at https://github.com/MarlinFirmware/Marlin/blob/1.1.x/Marlin/s... you will see it's nontrivial... a bunch of PhDs implementing nontrivial maths in a highly constrained environment. That's cute, but not demonstrative of real-world concerns. In such cases, you would absolutely throw more hardware at the problem, e.g. for motion systems maybe dedicating one 'dumb' MCU per axis and outsourcing planning to a more rational/powerful environment.
That automatically means you have to be shipping at least 120K units just to break even with those costs. But you are also getting to market 3 months later which itself has huge business costs which means you probably need to be shipping 200K or more units. That's a pretty big number for a lot of embedded device designers.
If you double your chip constraint from $0.13 to $0.26, then that same team must now ship a million devices to break even.
It's a buck and change for those, same as the ARM Cortex-M series.
For anything commercial? Almost certainly not.
For one-off makers/non-engineers? Maybe. The Arduino ecosystem is really hard to beat for non-programmers.
Same for many of these devices. Mature libraries, low cost, ease of use all play a part.
Alone, it's the sort of thing you might use for a kid's toy that you hope to make into the next big must-have Christmas hit. You'd need to make millions upon millions of them, and maybe $49.95 is your target price.
Here is one example I ran on it:
Now, regarding language, let me illustrate an area in which C/C++ sucks balls for embedded. Using a cheap MCU is generally considered good practice if you have many target systems to produce and they can execute on that cheap MCU, because using a pricier one quickly adds up and retargeting a team costs a lot. The problem is that on a cheap MCU you sometimes run out of basic resources, such as physical interfaces (pins). In the embedded world there are multiple solutions to most problems, just as there are in the conventional software world (i.e. the classic Perl motto, TMTOWTDI), and one of these is connecting IO expansion chips via alternate (particularly shareable) interfaces such as I2C/SPI. Such chips provide various types of interfaces by proxy, for example additional GPIO pins. The issue is that many libraries (those written to control motors, MOSFETs/relays, or serial devices) will not work by default on these proxied interfaces. Therefore, you need to either hack or rewrite the respective libraries in order to use them.
Coming from a software background myself, this is a basic interface abstraction, and the current solution is ridiculous. Reflection would go a long way toward solving the issue. Lua frankly appeals more than Rust in this regard when dealing with third-party, existing libraries which were not constructed explicitly with the requisite abstraction. Alternatively, I believe some software people here on HN recently announced they are attempting a generative approach to embedded circuits and software, sort of a 'Ruby on Rails' shake-n-bake approach to the whole problem space. The issue with such approaches tends to be that you wind up with a whole lot of deployable product but nobody who understands the intricacies; a vast cognitive overhead for your higher-level generation specification languages; frequent incompatibilities in devices based on timing, clock frequency, or other hard-to-model aspects of a proposed system; and iterations in hardware that cost a lot more in time and money than those in software. Both approaches are used in embedded development, but it is a vast space whose tooling usually differs by target hardware, as if you needed a new IDE, compiler, command line, development workflow, and physical interface type for every language you wanted to write on a desktop, and interpreted languages didn't exist. Some of the tooling is prohibitively expensive.
I hope you can now see the problem space is not that similar to desktop software, although the conventional software world does have a lot it could further contribute to electronics. As usual for those outside of an area, the perceived simplicity of established solutions belies the complexity, nuance and outright effectiveness of their current, if ugly, state.
The embedded_hal crate defines traits for controlling things like I2C, SPI, and individual GPIO pins. Then drivers for particular devices are written against those traits.
Given the basic worldview of "we're going to rewrite everything in Rust because that's what Rust people do", I think there's a good chance of ending up in a situation where the embedded Rust ecosystem just doesn't have that problem.
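The idea can be sketched in a few lines. This uses a simplified stand-in for `embedded_hal::digital::OutputPin` rather than the real crate, and the expander pin is hypothetical, but the shape is the same: the driver is generic over a pin trait, so a pin behind an I2C expander works exactly like an on-chip GPIO:

```rust
// Simplified stand-in for embedded_hal's OutputPin trait (the real
// trait returns Result and lives in embedded_hal::digital).
trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

// A driver written once, against the trait, not against any MCU's
// GPIO registers.
struct RelayDriver<P: OutputPin> {
    pin: P,
}

impl<P: OutputPin> RelayDriver<P> {
    fn new(pin: P) -> Self {
        RelayDriver { pin }
    }
    fn energize(&mut self) {
        self.pin.set_high();
    }
    fn release(&mut self) {
        self.pin.set_low();
    }
}

// Hypothetical pin on an I2C GPIO expander: implementing the trait
// is all it takes for every trait-based driver to accept it.
struct ExpanderPin {
    state: bool, // a real impl would write an expander register over I2C
}

impl OutputPin for ExpanderPin {
    fn set_high(&mut self) {
        self.state = true;
    }
    fn set_low(&mut self) {
        self.state = false;
    }
}

fn main() {
    let mut relay = RelayDriver::new(ExpanderPin { state: false });
    relay.energize();
    relay.release();
    println!("driver runs over an expander-backed pin");
}
```

No hacking or rewriting of the driver is needed; the proxied interface just implements the same trait as the native one.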
Outright effectiveness? The industry is an outright disaster and it's a wonder engineers can get anything done outside of well-funded businesses. A small but representative slice of some of the bullshit I've had to deal with since I started designing electronics (high speed digital and RF):
Meta build systems with pre-pre-processors, with feature gates for bug fixes tied to CRMs so that clients don't receive fixes unless they first encounter the bug (and waste weeks trying to fix it while their account manager responds). NDAs on all of the interesting parts, with 2+ week turnaround times on paperwork and 2+ months on samples (but if you're in Shenzhen you can just grab it at the corner store...). Mismatched peripheral IP jammed together with half-assed drivers that wouldn't pass at a client tech demo (can you run both DMA channels at the same time on the STM32Fx, or does that still crash?). Buggy reference implementations of basic interconnects that silently drop data and don't support run-of-the-mill "advanced" features used in every ARM MCU (thank you, Xilinx). Version control brought to you by WinRAR, 7-Zip, and IMAP. Data exchange brought to you by The Interns™, because who has time for that shit. Different versions of firmware for the same chip, in the same market, written by two completely different Qualcomm teams firewalled away from each other, deployed to clients depending on whether they had an existing relationship with Broadcom or not.
The worst part is that none of these problems have much to do with the actual hardware! If it weren't for the culture surrounding the industry, electrical engineering itself would be an absolute pleasure. It has never been easier to slap a schematic and PCB layout together in Altium if you already know what you need, especially with sites like SnapEDA and Octopart simplifying the boring parts (notably, both cofounded by teams with a lot more mainstream software ecosystem experience). It's just getting to that point, and writing the firmware afterwards, that is absolutely awful. I had a similar situation to your MCU example not too long ago, and I ended up going with the ATmega because downloading a bunch of Arduino projects and looking at firmware in practice makes characterizing a platform and making the right early decisions a lot easier.
To be clear: I couldn't care less about the compile times or higher level abstractions or any of the tech fads that roll through HN (although I am very much hyped up for Rust and now work with React at my day job). I just want a basic culture of information sharing and cooperative common sense that we take for granted. Coming from a software engineering background, the only thing I think this industry has done right is reference designs and PCB layout notes.
I get that the industry isn't a homogeneous blob and most of these problems are a result of the vendors' mismatching economic incentives but that just means that the industry is a natural disaster. All of these problems suck everyone into a feedback loop preventing any real progress, especially when all the details of hardware and software are locked away.
The AVR backend for LLVM was merged recently, so Rust support for AVR might happen if someone pitches in and does the work. There are lots of Arduinos lying around, so it would make embedded Rust more accessible.
The problem is HAL support. Some peripherals are supported, others are not. ADC and DMA for UARTs were merged just recently. Stuff like SPI/I2C slave modes is missing; I think the HAL interface is not fleshed out yet. Stuff like a USB stack would probably take a long time to complete.
I think for a long time we will have to accept that Rust and C are going to co-exist on these devices. So that story should be workable.
My embedded code tends to have a strong split between application logic (in platform-independent, data-driven functional code, with automated tests) and underlying hardware-dependent "app host" (as little custom code as possible), with a data-based interface.
I would be quite happy doing C for the hardware layer and Rust for the application logic layer.
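A minimal sketch of that split (all names invented for illustration): the application logic is a pure function over plain data, unit-testable on a host machine, while the "app host" layer, in C or Rust, only shovels data in and out:

```rust
// Plain-data interface between the hardware layer and the logic.
struct Inputs {
    temperature_c: i32,
}

struct Outputs {
    fan_on: bool,
}

// Platform-independent application logic: no registers, no I/O,
// trivially testable without any target hardware.
fn step(inputs: &Inputs) -> Outputs {
    Outputs {
        fan_on: inputs.temperature_c > 40,
    }
}

fn main() {
    // On target, the app host would fill Inputs from sensors and
    // apply Outputs to pins; here we just exercise the logic.
    let out = step(&Inputs { temperature_c: 55 });
    println!("fan_on = {}", out.fan_on);
}
```

Because the interface is just data, the hardware side could stay in C and call `step` across FFI without the logic caring.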
Calling C would also have to touch the same structures, which would, if nothing else, change the state of some peripherals to something that will break the Rust guarantees.
Rust for embedded stuff seems like an absolute dream. I'd love to make use of all the excellent work that's gone into it. Most of my work just needs USB.
Is there a comprehensive list of which other microcontrollers are supported by Rust?
I really like the direction that Rust is heading in regard to embedded, but I wish that when folks mentioned embedded, they would also indicate which microcontrollers are relevant. There are so many varieties around.
Tier 2 platforms can be thought of as “guaranteed to build”. Automated tests are not run so it’s not guaranteed to produce a working build.
(Also two other unmerged PRs to the list.)
And in fact I'm not aware of any such systems. Existing 32+ bit embedded architectures like MIPS, RISC-V, Xtensa, and ARC all have robust instruction sets with large register files and a fully-defined SysV-style C ABI.
No, the reason is as stated elsewhere. Rust doesn't run on these systems because no one bothered to tool up LLVM for them.
Part of the problem is that LLVM developers themselves are apparently unwilling to release support for architectures that they see as liable to go unmaintained and bitrot in the future, even if someone shows up and does the work. There is a notion of "experimental arch's" but it doesn't seem to be actively used, or to suffice in addressing the issue.
Experimental archs were recently used for wasm and RISC-V on their way to maturity.
Are you referring to a specific discussion on the llvm-dev list? Last one that had a discussion in this area that I recall was Nios2.
I'm interested in soft processors on FPGAs and their tools, and that sounds like it might make for good reading.
As best I can make out, the only relevant portions of the original SysV ABI document are chapters 4 and 5, still available and maintained on the SCO website:
There are also separate documents defining the details of the SysV ABI for each processor family. This StackOverflow answer links to some:
... and the OSDev Wiki links to many more:
I'm serious. I will go buy any board in order to check that tutorial out.
I'll go further. I would buy the Rust folks a couple boards and Seggers to get that.
Check out https://github.com/ferrous-systems/embedded-trainings/blob/m...
BTW It is really cool that your company does this. It has a cracking name too!
Similarly, as long as Rust is already shaking up C/C++ a bit, they could nudge some hobby/education and professional development towards RISC-V. (Especially on embedded, right now, as I've heard the open ISA is attractive to some projects.)
Any ideas from the HN crowd? I'm currently looking at a Pi-based weather station where I would use Rust as the processing server for all the data each day. Nothing fancy, but perhaps a fun way to learn.
I was incorrect, citation:
d) it is a Tensilica / RISC-V is in the pipeline.
So not "big deal" RISC-V yet, probably later.
What do they mean?