The tooling is nearly always C, so it's interesting to see Rust moving into this space. Memory management is not so much of an issue, but multitasking correctness is; perhaps there will be some new micro-RTOS framework with provably correct interrupt handling. (We have seL4, but that's not quite the same thing.)
As you get more low level, the tutorials get sparser. You move into FPGAs, where you have a choice of two languages with 1970s design principles, or third-party tools which work despite the manufacturer's total closedness. As you go deeper into actual IC design, nobody will attempt it without supervision from someone experienced to tell you all the little tricks. And then there's analog IC design, which is basically black magic.
With the ESP8266 and low-end ARM controllers, there has been an explosion of languages for embedded applications: MicroPython, Lua, ES, BASIC, even Lisp. Particularly for beginners, this is a good thing. Having said that, the dev environments are somewhat lacking at this stage, and I'm wondering if we'll soon be at a point where there'll be enough resources to just put Linux or another OS straight onto the microcontroller.
I wonder how easily one can link C libraries with MicroPython, that'd be the best of both worlds.
I'm assuming you're referring to VHDL and Verilog. There are plenty of other hardware description languages out there: Chisel, CλaSH, MyHDL, etc... You can see a more complete list here:
As for sources of information, for me it used to be the Elektor magazine, available in a few languages.
Up until a few years ago, it was still common to occasionally see Pascal listings in it.
But you are right, the embedded culture is mostly C, and using alternatives, even C++, tends to end in culture clash, as Dan Saks referred to it in his CppCon 2016 talk.
Basically whether it's going to deadlock or miss interrupts. Deadlock is an immediate disaster, but at least the JTAG will help you... if the device hasn't self-destructed. Missing interrupts is worse because it's extremely hard to debug.
Forcing DMA to behave would also be great, although this isn't strictly a microcontroller issue. I've seen a few war stories where people are trying to debug memory corruption where the program is completely correct - but some other device has simply DMA'd over it. I think this was involved in https://googleprojectzero.blogspot.co.uk/2017/04/over-air-ex... too.
Checking interrupt priorities sounds like an interesting problem. What is the state of the art in deadlock prevention? It would also be cool if you could tell the compiler "I need this interrupt handler to return in less than n clock cycles." I wonder if someone could write a rust compiler plugin to check that.
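Short of a compiler plugin doing the check statically, a runtime guard is at least sketchable. Everything below (the `checked_isr` name, the fake counter) is invented for illustration; on a real Cortex-M the counter could be the DWT cycle counter, and a proper solution would move the check to compile time via WCET analysis:

```rust
/// Run `handler`, reading `cycles()` before and after, and panic if the
/// handler exceeded `budget` cycles. This is a hypothetical runtime sketch,
/// not a real static guarantee.
fn checked_isr<F: FnMut() -> u64, H: FnOnce()>(mut cycles: F, budget: u64, handler: H) {
    let start = cycles();
    handler();
    let used = cycles().saturating_sub(start);
    assert!(used <= budget, "ISR blew its budget: {used} > {budget}");
}

fn main() {
    // Simulate a cycle counter that advances 10 "cycles" per read.
    let mut t = 0u64;
    let fake = move || { t += 10; t };
    checked_isr(fake, 100, || { /* pretend interrupt work here */ });
    println!("within budget");
}
```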
It would be great if all of those were easy issues, or solved ones.
Give Céu a look. It was specifically designed for embedded computing, and compiles to C, making it very easy to interface with existing libraries. It requires no manual memory management, and has very nice structured synchronous reactive concurrency primitives.
(EDIT: I realise that multitasking and concurrency aren't quite the same thing; it also has experimental interrupt support though)
Here's a made-up example for Arduino (for which it has support out of the box). It fades two LEDs concurrently at different frequencies, resetting both at the push of a button:
input int PIN_02; // button input
output int PWM_05; // LED outputs, remember that pins 5 and 6
output int PWM_06; // have a higher PWM frequency on the UNO
// a code block that concurrently fades an LED in and out
// `pin` the output pin of the LED
// `min` min value of the fade
// `max` max value of the fade
// `delay` number of milliseconds to wait between increasing/decreasing `analogWrite`
code/await Fade_forever (var u8 pin, var u8 min, var u8 max, var uint delay) -> void do
    loop do
        var int i;
        loop i in [min -> max] do       // fade in loop
            if pin == 5 then
                emit PWM_05(i);
            else/if pin == 6 then
                emit PWM_06(i);
            end
            await delay ms;
        end
        loop i in [min <- max] do       // fade out loop
            if pin == 5 then
                emit PWM_05(i);
            else/if pin == 6 then
                emit PWM_06(i);
            end
            await delay ms;
        end
    end
end

loop do // endless loop
    // if *any* of these three code blocks (trails) ends, all of the remaining trails
    // in a `par/or` are aborted and code resumes (in this case, the loop restarts).
    // By comparison, a `par/and` construct would require *all* of the trails to
    // terminate before continuing.
    par/or do
        await PIN_02;   // ...meaning that if the button is pressed, we reset the loop
    with
        // fade the LED at pin 5 quickly between 64 and 192
        await Fade_forever(5, 64, 192, 5);
    with
        // fade the LED at pin 6 slowly between 0 and 255
        // note that because it fades over almost twice the range (256 steps vs 128),
        // but not exactly twice, it's around eight times slower, not four, and it
        // will slowly go out of sync.
        // We can push the button to reset, however!
        await Fade_forever(6, 0, 255, 20);
    end
end
This is where Rust's safety helps. Debugging embedded code on small machines is a huge pain. The more problems caught at compile time, the better off you are. A compile time error beats JTAG debugging every time.
The article gets kind of vague once they get beyond LED-blinking and busy-waiting. They implement a brute-force CPU dispatcher and call it "async" programming. They never get to interrupts at all.
Rust on little machines makes sense, but it needs more support underneath to deal with timers, interrupts, and concurrency. There are projects working on this.
NB. the OP is part of that project, from the same author.
It's better to have one good CPU dispatcher than to make users roll their own crappy one for each application.
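To sketch the shape of the idea (not any real RTOS scheduler — all names below are invented, and a real dispatcher would handle priorities, sleeping, and interrupts): the smallest possible cooperative round-robin dispatcher is just a list of step functions run in turn until each one says it's done.

```rust
use std::cell::Cell;
use std::rc::Rc;

// A "task" runs one step and returns true while it still wants to run.
type Task = Box<dyn FnMut() -> bool>;

struct Dispatcher {
    tasks: Vec<Task>,
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher { tasks: Vec::new() }
    }
    fn spawn(&mut self, t: Task) {
        self.tasks.push(t);
    }
    /// Round-robin: step every task in turn, dropping finished ones.
    fn run(&mut self) {
        while !self.tasks.is_empty() {
            self.tasks.retain_mut(|t| t());
        }
    }
}

fn main() {
    let counter = Rc::new(Cell::new(0));
    let mut d = Dispatcher::new();
    for step in [1, 10] {
        let c = Rc::clone(&counter);
        let mut left = 3; // each task runs three steps, interleaved
        d.spawn(Box::new(move || {
            c.set(c.get() + step);
            left -= 1;
            left > 0
        }));
    }
    d.run();
    assert_eq!(counter.get(), 33); // 3*1 + 3*10
    println!("total = {}", counter.get());
}
```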
If you mean every Rust program requires a dispatcher to run, then no, Rust does not need a CPU dispatcher. Threads and locks are not primitives: they're implemented in the standard library (the std part, specifically). The thread-safety guarantees don't rely on that functionality; instead, the arrows go the other way: the spawning and locking constructs build on the guarantees (driven by the Send and Sync traits, which are purely compile-time constructs) to provide an expressive yet safe API. There are numerous operating systems built in Rust; intermezzOS, for instance, seems to be pure Rust except for the single file https://github.com/intermezzOS/kernel/blob/master/src/asm/bo... .
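A minimal sketch of that compile-time side: `thread::spawn` demands `Send`, and the check costs nothing at runtime — no dispatcher involved.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<T> is Send when T is, so it can cross the spawn boundary:
    let shared = Arc::new(42);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || *shared + 1)
    };
    assert_eq!(handle.join().unwrap(), 43);

    // By contrast, std::rc::Rc<i32> is !Send; uncommenting the lines below
    // fails at compile time, purely via the trait bound on spawn:
    // let rc = std::rc::Rc::new(42);
    // thread::spawn(move || *rc); // error: `Rc<i32>` cannot be sent between threads
}
```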
If you mean that it would be really nice if there was a general purpose dispatcher library available, then sure, that seems like something that would be great on crates.io.
In any case, I don't see how your comment relates to mine.
Oh for the day when RV32IMAF can be considered low end.
But not single cycle ;)
This is important!
Still a lot faster than a bunch of shift+add!
It uses new, cheap multi-die assembly techniques, which Microchip also used in their ~$1 Bluetooth chip. So I think that with time, and with an attractive software ecosystem, we'll see interesting new MCUs and price points.
...and a long list of errata, some of them quite serious: http://ww1.microchip.com/downloads/en/DeviceDoc/80000736A.pd...
(note that if you look for it on Microchip website, the official link to the errata is also wrong at the moment!)
Some people also reported a few heat issues, since it's got to dissipate heat from the memory and heat from the MCU, which starts being a bit large and a bit fast (for an MCU). Nothing extreme, but it has to be considered in some setups.
IMO, it's better to wait for other revisions of this chip, other versions in this family, or something else.
YMMV though. People in my class who lacked the experience I had with C and C++ from before had to struggle quite a lot.
It was a lot of fun to work with and looks to have a growing set of common drivers, etc.
The OP guide looks to be lower level and wonderful for learning. Reading through it very quickly, it looks very thorough, well done!
I assume it's non-trivial or someone would have done it already, but I'm curious if it might happen eventually. Been poking at C which is already a bit out of my wheelhouse, but it'd be a good excuse to check out Rust too if I could.
The same would work for the ESP32.
I'm making a commercial product with the ESP8266 with the FreeRTOS SDK, and it's been relatively painless so far.
I would much prefer to write the code in Rust, but even if it were possible and I got it all working, eventually some other chip will be the new hotness, and using C ensures I need to do the least amount of work to get it working on that chip.
MS-DOS had lots of programming language options.
I started with the Basic STAMP, played around with things like the OOPic, tried my hand at PIC assembly on the 16F84 and 16F877, enjoyed the AVRs (ATmega328 and ATmega644) for a while, dabbled a bit with the ultra-low-power MSP430 (via the Launchpad), backed the MicroPython project and got a controller out of it, and have some ESP8266 boards that are just waiting to be played with.
All of that was self-taught via online tutorials and forums. While some of this stuff is obscure, there's a ton of information out there for anyone to learn it.
The biggest barrier to entry is money - developing for these platforms requires buying hardware, not just the controllers themselves but also the programming tools (e.g., I use an AVRISP mkII to program ATmega chips). Many controller manufacturers make development boards that include a programmer, controller, and various peripherals to play with - this particular post focuses on the STM32FDISCOVERY board, which seems to include the programming tool, a button, and some LEDs, plus it breaks out the controller's pins so that you can hook up whatever you need.
The biggest barrier to entry is time, because unlike the web or other higher-level software stacks, blogs and GitHub resources are nowhere near the level where you can figure out how to make something work outside the Arduino and RPi platforms. Want to learn how to code the BeagleBone's PRU with rproc and not the deprecated uio? Good luck. Want to learn how to configure a TMC2130? Good luck. In order to drive the microcontroller space forward, the IC manufacturers really need to produce thorough tutorials that newbies can understand and get up to speed with, and keep them up to date. Just take a look at Deep Learning, which became popular no sooner than the microcontroller world did. I can easily list 5 top tutorials and MOOCs that a newbie like myself can follow and get some meaningful understanding or output from. Deep Learning is arguably a much tougher and vaster topic as well.
There are plenty of Arduino courses too.
Also microcontroller manufacturers don't really make their money from hobbyists.
To make them attractive I've also started getting into 3D-printing, paying other people to make cases/boxes/shells for me.
I've used an Arduino as a glorified breakout board thanks to the ICSP header (connected to an AVRISP mkII), and I've also used it with the Arduino IDE.
It's a surprisingly versatile platform.
I have a friend who does a lot of consumer electronics development on contract. One of his favorite parts is an 8-bit micro that cost 6 cents in bare die, and that was a few years ago. The part is probably 4 cents now. A 4 dollar ARM looks ridiculous next to a 4 cent part in certain application spaces.
But in my world - low volume mechatronics - the ARM with 1 MB flash is the no-brainer choice.
Which is the interesting thing for me. At the current process nodes (22nm and even 45nm) the cost of the packaging and bin testing dominates. It costs, to a fairly close approximation, exactly the same to package an 8-bit chip as it does a 32-bit one.
I had a great discussion with a product manager from ST Micro about this at their recent Developers Conference in the Bay Area. They had some collateral on the STM8 series, and when I asked about it he shared the above, but said that it was either legacy designs, essentially CPLD-type designs, or HW engineers without a software person to support them that seemed to still use 8-bit machines. From ST's perspective the cost to produce was the same.
At Digikey the lowest cost 32 bit processor is 58 cents, and the lowest cost 8 bit processor is 40 cents.
Are you referring to Rust here?
On our product we have up to 4 processors: one 32-bit, two 16-bit, one 8-bit. The smallest only has 20 bytes of RAM.
Ideally I'd like to program them all with a super typesafe language
Atmel still produce a significant number of 8bit AVRs.
Apropos "M0's being so much better", this is not really an attitude that produces effective results in the embedded world.
True fact: in embedded, many times you want the least powerful solution. 8-bit MCU's are freakin' everywhere, and they are not going anywhere.
Cheapest 8-bit AVR on Digikey: https://www.digikey.com/product-detail/en/microchip-technolo...
2.5x the price is not cheaper.
If you're selling hundreds of thousands or millions of something, suddenly that is real money that justifies the incremental increase of engineering effort to use a cheaper part.
That's true, but you can usually run the ARMs at higher clock speeds.
And of course they still make tons of them, it's embedded, you need to supply these chips for another 10 years for old designs.
But you shouldn't use an 8-bit uC in a new design.
While the core of a 32-bit controller doesn't take up much die area at smaller process nodes, core size has never really been the limiting factor with MCUs. Look at any MCU die - the flash and RAM both dominate in size over the logic. Going to a 32 bit controller generally means that your instructions are going to be wider, so your program space is going to take up more flash. You will also tend to use more RAM unless you are judicious about word size, and a context switch on an RTOS will cost more memory since the context contains much wider registers.
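As a rough illustration of the context-switch point (illustrative numbers only, ignoring status registers, stack pointers, and FPU state): an AVR-style register file is 32 8-bit registers, a Cortex-M-style one is 16 32-bit registers, so each saved task context is about twice the RAM.

```rust
fn main() {
    // Bytes of general-purpose register state an RTOS must save per task.
    let avr_context = 32 * 1;      // 32 registers x 1 byte (8-bit MCU)
    let cortex_m_context = 16 * 4; // 16 registers x 4 bytes (32-bit MCU)
    assert_eq!(avr_context, 32);
    assert_eq!(cortex_m_context, 64);
    println!("AVR: {avr_context} B, Cortex-M: {cortex_m_context} B per context");
}
```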
Another pain point with the 32-bit MCUs is that ARM currently dominates the market, and their licensing fees add a few cents to the cost of each part - which can be a killer at volume. There have been attempts to break ARM's monopoly (most notably Microchip's MIPS-based PIC32 series) but it seems like ARM will continue to dominate this space in the years to come.
As a final note, SiLabs recently introduced an ultra-low power microcontroller line based on the venerable 8-bit 8051 core, so clearly they think that 8-bit MCUs have a role to play in the years to come.
What makes you say that?
Microchip released new XMEGAs in May.
Isn't this not really true? I remember it was a long point of discussion in an embedded class I took years ago but can't remember exactly what was said.
I think technically "everything" at least starts with an access to a file, but maybe not every interaction is done with a file.
Man do I miss this stuff. Articles like these make me want to get back into it.
If you want a taste, go build some stuff in pure x86. You'll pretty quickly get tired of it and will want to build higher order abstractions, and then you're back in software-land.
Doing stuff with hardware expands your creative horizon to do things in the real world, freeing yourself from the confines of the virtual realm. 3D print what you need, make it come alive with MCUs.
Also much of modern software development feels like just plugging lego bricks, but in embedded you actually have to know how stuff works, down to the register level.
But then those were purely hobby projects, maybe it feels very different when doing it for a living.
Now I live in the Bay Area and am working on a startup. Iteration cycles are much faster, so I can learn quicker and ship quicker. Also, distribution is a lot easier.