Rust on AVR: Safer microcontrollers almost here (dylanmckay.io)
178 points by mmastrac on Feb 11, 2017 | 122 comments



Having more programming languages for embedded platforms is always great. If all you know is Rust or Python or JS, etc., it is great to have them as an option. But I have to wonder if the specific claim of type safety really translates here. In my (albeit limited) experience, certain classes of errors just don't occur on embedded systems because there is not enough room for them. For example, there is no double free() because heap memory is not a thing. I suppose you can still have off-by-one errors, and problems with referencing/dereferencing, but once again, in my experience you usually find this stuff pretty fast because your code just won't run. Am I wrong here?


I used to work as an embedded systems developer (for systems where a single failure could cost a quarter million dollars), and I've experimented with kernel-space Rust just for fun.

Rust is a surprisingly nice language on bare metal.

If you use '#![no_std]' in Rust, it disables the standard library and substitutes the 'core' library. This has no 'malloc' (unless you supply one, which is easy) and no OS-level features. But you still get iterators, slices, traits, "scoped" closures and generic types. You get a lot of abstraction with basically zero overhead. And don't forget that embedded systems have a lot of state machines, which are nice to implement using Rust's "tagged union" support and exhaustive 'match' statements.
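
For a taste, here's a minimal sketch of that last point (all names hypothetical): a 'no_std' frame decoder built from an enum and an exhaustive 'match', where the compiler proves every state/input combination is handled:

    #![no_std]

    // Hypothetical frame decoder: a tagged union plus an exhaustive
    // `match`. No heap, no OS; just a library crate on bare metal.
    enum State {
        Idle,
        Header { bytes_left: u8 },
        Payload { len: u8, read: u8 },
    }

    fn step(state: State, byte: u8) -> State {
        match state {
            State::Idle if byte == 0x7E => State::Header { bytes_left: 2 },
            State::Idle => State::Idle,
            State::Header { bytes_left: 1 } => State::Payload { len: byte, read: 0 },
            State::Header { bytes_left } => State::Header { bytes_left: bytes_left - 1 },
            State::Payload { len: 0, .. } => State::Idle,
            State::Payload { len, read } if read + 1 >= len => State::Idle,
            State::Payload { len, read } => State::Payload { len, read: read + 1 },
        }
    }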

Here's an example of a Rust I/O port wrapper (https://github.com/emk/toyos-rs/tree/master/crates/cpuio) and an interrupt controller interface (https://github.com/emk/toyos-rs/tree/master/crates/pic8259_s...). These APIs are reasonably pleasant for something so close to the metal.

Rust would be a very agreeable programming language on an AVR system, IMO. And not necessarily because of pointer safety, either.


As a hobbyist and C junior, if I can get anything higher-level than C, I'll be ecstatic. Of course, it's going to be the same as with MicroPython, which is fantastic: the entire ecosystem is in C.

I tear my hair out every time I have to wrestle with C for the simplest things, though, whereas Rust is a joy to write.


On "full size" systems we solve that with FFI, but I don't know whether that would work here or not.


Having recently worked on a few projects written in C++11 for embedded systems (again, no standard library), I must concur with your comment. C++11 brings a lot of the same features (minus the functional coding components) to embedded systems, making program flow a breeze.


I'd be interested in seeing the challenges, and the outcomes of assumptions made, when porting between 64-bit von Neumann and 8-bit Harvard architecture systems.

Most of the AVR community tends to code in a mix of C and assembly. I think Rust is possibly not the best language to port, and type safety is not really an issue that's going to sell it.

To put it into perspective, Rust on a more powerful IC like an ESP32 or STM32 makes sense to me, as you have a lot more to play with, but I've spent the best part of a year and a half working on a design constrained within 512 bytes of dynamic RAM, 512 bytes of SRAM and 512 bytes of EEPROM. I'm not sure what type safety buys me when I'm already trying to squeeze out every last bit.

All of this is not to suggest we shouldn't try new things; of course we should. Hacking things previously thought impossible is a longstanding and welcome tradition. However, I'm struggling to see how this would increase adoption of Rust, or make things significantly easier for AVR users vs., say, Arduino sketches.


Yes; take the Therac-25 as an example.

The code will run and corrupt data without crashing, due to an off-by-one error.

In the process a few people got cooked.

Although the code was in Assembly, the situation wouldn't have been different if it was coded in C, due to the lack of bounds checking.


A PDP-11 is the classic example of a microcontroller. ;)

Here's a picture of the model used for the Therac-25, the PDP-11/23: http://www.physics.purdue.edu/~jones105/pdp-11/images/IMG_28...


As I mentioned in another thread, an ESP-32 would be good enough to run MS-DOS.

Your example makes it even more obvious that current microcontrollers are actually quite powerful when we look at the hardware resources of the '60s and '70s.


On the smallest (I've used ones with 16 bytes of RAM) you simply don't have room for doing much at all, so you pretty much only use registers (none of this fancy pointer stuff), and you can easily fit everything it's doing into your head, as its program space is only big enough for 64-256 assembly instructions (a paper page's worth).

It can be hard to pin down when something switches from a fancy logic circuit to a microcontroller on the low end of the spectrum.

In my opinion, it stops being a microcontroller when the RAM is external to the CPU or it needs to be provided multiple voltage rails.


I have zero knowledge about microcontrollers, so forgive my dumb question... But how do you program with 16 bytes of RAM!? I believe even ENIAC had more memory than that? (Though I'm not sure if ENIAC had a concept like main memory.)

What useful things did you accomplish with it?


You have lots (where "lots" means maybe 512 bytes, or possibly 8k bytes) of flash memory. Flash is cheap both in cost/silicon area and in the power budget.

You store constants and your actual program code in Flash. The limitation of Flash memory is that you can only erase a whole page at a time. On many of the smaller microcontrollers, one page may be larger than your SRAM!

So you treat Flash as read-only, writing it with a separate hardware programmer. You store state in the limited SRAM. You would also read or modify external state by loading or writing to special function registers outside of the Flash and RAM address space, where the "memory" mapped to that bit or word would do something in hardware. It would tell you whether a digital pin was on or off, or turn it on or off, or tell you the analog voltage on an input pin, or configure the pulse width of a pulse-width modulation timer, or read/write to a communication port, or set up a hardware interrupt timer.
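
To make the special-function-register part concrete, here is roughly how it looks from Rust. The addresses below are the ATmega328P's port B registers; treat them as illustrative and check the datasheet for your part:

    use core::ptr::{read_volatile, write_volatile};

    // Data-space addresses of port B's registers on an ATmega328P.
    const DDRB: *mut u8 = 0x24 as *mut u8;  // data direction register
    const PORTB: *mut u8 = 0x25 as *mut u8; // output register

    fn drive_pin_high(pin: u8) {
        unsafe {
            // Volatile accesses keep the compiler from caching, merging
            // or reordering the hardware reads and writes.
            write_volatile(DDRB, read_volatile(DDRB) | (1 << pin));   // pin -> output
            write_volatile(PORTB, read_volatile(PORTB) | (1 << pin)); // drive high
        }
    }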

Here's probably the most common example of this class: http://ww1.microchip.com/downloads/en/DeviceDoc/40001239F.pd...

They're often used to 'glue' two separate circuits together. For example, you could use it to measure an input, and in one mode output a high voltage if that input was on, or in another mode output a low voltage. In more physical examples, you could read the position of an analog input knob and control the pulse width of a TRIAC to control the speed of a motor. You could monitor the capacitance of a touch sensor (with an ATTiny) and turn a device on or to standby, or do a hard reset if the button is held down for a long time. You could modulate an RGB LED through a rainbow of colors. You could run a flashlight in a high power, low power, or strobe mode depending on how a button was pressed. And so on.


Not everything needs RAM; you can perfectly well write a program with only a few variables. Mostly what I've seen is replacing part of a circuit with a microcontroller, which sometimes saves a big chunk of components; you can use an MCU as a simple state machine for a specific task in the circuit, like a power controller or a sensor processing unit... For example, I've seen 3-mode LED flashlights selling on eBay for $3 that have a PIC12 (probably a Chinese clone).


A historical example of what one can do with limited memory is the Sinclair Scientific calculator (http://files.righto.com/calculator/sinclair_scientific_simul...). It doesn't use RAM at all, doing everything in registers.

If it had had RAM, one could have used that to implement a memory, or a 'normal' infix UI.

And no, it isn't cheating that it uses a CPU designed for calculators. At best, that keeps the ROM space needed down, but I doubt that, given its somewhat weird instruction set.


Consider it yourself: what kind of math can you do in 16 bytes of RAM? What kind of control? I suppose you can fit some running average, statistics, logic equations, maybe even PID would fit. A single for loop takes just one byte. These 16-byte microcontrollers are usually very small, and have very few pins (usually eight); they are the electronic equivalent of a very short shell script that glues some stuff together.
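
For instance, an exponential running average needs exactly one byte of state; a minimal sketch:

    // Exponential moving average in a single byte of state:
    // avg <- avg + (sample - avg) / 8, in integer arithmetic.
    fn update_avg(avg: &mut u8, sample: u8) {
        // Widen to i16 so the subtraction can't wrap around.
        let delta = sample as i16 - *avg as i16;
        *avg = (*avg as i16 + (delta >> 3)) as u8;
    }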


Microcontrollers are a continuum. On the tiniest you might be right that there is no room for error (I doubt that though), but there is a large class of chips that are powerful enough to be connected to a network. Those are dangerous.


> For example, there is no double free() because heap memory is not a thing.

This was my experience back when I first did embedded stuff. These days, unless you're programming something that costs less than a dollar to make, you've probably got at least double-digit KBs of RAM and a C compiler. Not saying it's good practice to be using malloc() in this environment but you'll get away with it.


> Not saying it's good practice to be using malloc() in this environment but you'll get away with it.

Actually, for the sake of keeping code modular and portable, it's fine to malloc() once on boot and never free(). It's the "dynamic" part of memory management that's dangerous on MCUs, especially on those without an MMU (as fragmentation gets in the way very often and you can't really reason about memory usage anymore).


If you're using malloc on boot only, you can almost always replace that with a static data declaration, which usually gets put in the same space the heap would otherwise use. It's better because the compiler can drop malloc, saving you some flash space, and static analysis tools can see how much RAM you're using (and if something is clearly going to blow up).
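
In Rust terms, a hedged sketch of the same idea (buffer size hypothetical):

    // Instead of a single boot-time malloc(), declare the storage
    // statically: it lands in .bss, the allocator can be dropped from
    // the binary, and static analysis sees the real RAM footprint.
    const RX_BUF_LEN: usize = 64; // hypothetical size
    static mut RX_BUF: [u8; RX_BUF_LEN] = [0; RX_BUF_LEN];

(Touching a 'static mut' still takes 'unsafe' or a safe wrapper; the point here is only where the memory lives and that its size is fixed at link time.)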


There are people who think ARM SoCs with a few hundred MB of RAM are embedded systems. You can fit a lot of errors on that scale of embedded platform.


Those are considered embedded because the intention is to just flash them with software like Android once during the manufacturing process and then never update it.


Love the Android example regarding the update.


That used to be the case, but as more functionality makes its way in (USB support, networking, etc) the attack surface grows substantially.

Right now not a lot of that goes on, but that's changing -- and fast.


A good example would be the ESP-32: powerful enough to run MS-DOS (520 KB, hurray!), dual core, with extra modern peripherals.

Which means every programming language we used to enjoy back in the day could target such an environment, assuming someone would bother to port it, that is.


I wonder whether it would be practical to write an emulator and actually run DOS et al. You'd lose some memory and speed, of course, but the chip might be powerful enough that it doesn't matter.


A simple example for 8-bit microcontrollers would be concurrency issues in an interrupt handler.

Heap memory is still a thing for larger devices running on an RTOS, in addition to threads, mutexes, etc.
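
A hedged sketch of that first hazard and the usual cure: state shared with an ISR goes behind an atomic. The ISR attribute/ABI is omitted because it's target-specific, and on AVR a read-modify-write atomic typically has to be implemented by briefly disabling interrupts:

    use core::sync::atomic::{AtomicU8, Ordering};

    // Shared between the main loop and an interrupt handler. A plain
    // `static mut u8` here would be an unsynchronised data race.
    static TICKS: AtomicU8 = AtomicU8::new(0);

    // Body of a hypothetical timer interrupt handler.
    fn on_timer_overflow() {
        TICKS.fetch_add(1, Ordering::Relaxed);
    }

    fn main_loop() -> ! {
        loop {
            let ticks = TICKS.load(Ordering::Relaxed);
            let _ = ticks; // ... act on the tick count ...
        }
    }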


Bugs that occur due to dynamic memory allocation and freeing are definitely not a concern for 99.99% of well-written embedded software. We avoid dynamic memory tricks not because of memory size constraints, but because it makes verification difficult and introduces additional uncertainty in the program's behaviour and timing. Even in systems where there's plenty of memory to support it, dynamic memory allocation is done sparingly -- sparingly enough that it can be properly managed via code reviews and the like.

Real-life, but somewhat extreme metric: in my last project, there were two instances of dynamic memory allocation, in about 100k lines of code, and both were in third-party code (we could have replaced them but you know, not broken, nothing to fix). At the other end of the spectrum, I think we had about one dynamic memory allocation every 600-700 lines of code, but almost all of them were confined to a single scope, and they were largely due to code smell. (Edit: I think at one time we got really pissed when we saw how out of hand it had gotten and we went back and replaced most of these occurrences, but I don't remember the metrics.)

In any case, lifetime management at the lexical level should not be a problem in most embedded applications, because there should not be much lifetime to manage. By "lexical level", I mean "things that are exposed to the language and manipulated through its primary means" -- variables, functions and so on. (Edit: there are legitimate exceptions to "should not be much lifetime to manage", but they are very few; the non-legitimate ones are in code of such poor quality that Rust won't help: what those codebases need isn't a new language but different managers, different programmers, or both.)

Unfortunately, this retains a whole class of errors that, to my knowledge, Rust (or any other language, frankly) can't manage. Off-by-one errors, for instance, can be caught if they occur at the end of the buffer (because you'd be spilling out of the container), but there's no way to catch off-by-one errors inside a buffer (you meant to say "read from the beginning of the buffer up to the last character that's been read from the serial line" but you actually ended up including one more character because you wrote off + 1 instead of off).
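
To illustrate with a minimal sketch (names hypothetical):

    // `n` bytes were read from the serial line into `buf`. The bug:
    // `..=n` (inclusive) instead of `..n` hands the caller one stale
    // byte. It stays in bounds, so no bounds check will ever fire.
    fn received(buf: &[u8; 64], n: usize) -> &[u8] {
        &buf[..=n] // meant `..n`
    }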

It's still up to the programmer to manage these things, and sloppy programmers will still write sloppy code that does this sloppily.

However, and this is a very big and very important however, we're talking about a level where any kind of additional safety helps. Returning to the previous example, off-by-one access errors inside a statically-allocated buffer are generally easy to spot and get caught very quickly during QA (since "ooh, let's look at the buffer" is the first thing you say when you get a bug about repeatedly garbled data coming off the serial interface). Off-by-one access errors outside the buffer (accidentally zeroing all the TX buffer, oh, and the first byte of the structure next to it that's used by an entirely different module) are the icky ones that sometimes fly under the QA radar for months.

I don't think the safety gains are on the level that the Rust fandom expects, but no one in their right mind is in a position to not welcome them. No, Rust won't solve everything, and bad programmers will continue to productively shoot themselves in the foot in Rust, too, just as they can productively shoot themselves in the foot in pseudocode. But for people who realize that there's a pretty big chance their next bug is going to get somebody killed, anything that reduces the bug population is a welcome addition.

tl;dr In most embedded scenarios, this will help less than the Rust community wants to think, but enough that sane people welcome it!


I'm not entirely familiar with embedded programming, but I don't think I understand your buffer example. A serial line adds to the buffer, incrementing an internal "current end position". You grab a drain iterator or a slice, incrementing the internal "current start position". At no point is there any manual arithmetic for any off by 1 errors.


The opportunity for off-by-one errors is at the other end of the pipe, when you grab data from that same buffer and munch it in the application. Depending on how unlucky you are, that ends up involving crap like CRs and LFs and totally sane protocols that include the header length in the byte count for some packets, but not for other packets.

It's probably not the best of examples, but the mistakes pop up the same way as for dynamically-allocated buffers -- you pick the wrong offset or count up to the wrong limit, and it doesn't make much of a difference if the buffer was statically allocated or not.


While I also agree that Rust does no better in unsafe code, I think it allows for some sweet careless use of pointers (references) in _safe_ code.


You're definitely right.


Why do the maintainers of a programming language that is an LLVM front end (Rust) need to consider/patch/modify their front end to properly support an LLVM backend, like AVR?

(My mental model was that once you add a backend to LLVM, every programming language that uses LLVM should magically work with it.) Does LLVM's design fail to capture the full problem space? Are there out-of-band concerns that don't lend themselves to being generally expressed?


If you want an example of an LLVM-based programming language porting to a new architecture, see [1]. The issue has a checklist of things that were done, and a readable diff.

Some of the things that needed to be implemented in the frontend were the ABI, unwind support, and C bindings for the new architecture. Some of these don't apply as much to Rust, but you can see that LLVM doesn't do a perfect job. The diff is still quite small, however.

[1] https://github.com/crystal-lang/crystal/pull/3491


LLVM isn't completely platform independent (e.g. all integers are fixed-width, so the frontend needs to know the pointer size to emit pointer-sized integers) and there are some configuration options to be passed down to LLVM/the linker.

Additionally, it is useful for the front end to understand things like the layout of types, and the standard library usually also needs changes to paper over any platform differences (although I imagine AVR doesn't need many, mostly in the compiler-support library compiler-rt, given that not much of the standard library will work).


Quoting the article:

> Rust's version of LLVM has a few patches on top of stock LLVM, and it also tended to have extra commits cherry-picked on top.

Apart from that, Rust needs very little per-architecture glue code. And someone needs to run the Rust testsuite on the resulting version of Rust and make sure it passes.


Going to express this in C terms since that language has a longer history in the space. For little processors like AVR, and bigger but specialized ones (e.g. DSPs), standard ANSI C is neither necessary nor sufficient. For instance, nobody is ever going to do IEEE754 floating point arithmetic on an AVR, or on a DSP56000. But these processors have other properties that a programmer is likely to want to use that are not expressible in standard C (sometimes, not even with intrinsics); for the most common cases, C has the TR18037 extensions¹.

In the AVR case, on the biggest chips a RAM address takes 2 bytes while a flash address is 3 bytes, and on the smallest a RAM address is 1 byte while a flash address is 2, and of course the values overlap (flash-address-0xFE points to different memory than RAM-address-0xFE), and the indirection code is different. So how big is a plain 'char *', and how do you dereference it? On a small AVR you don't want to be wasting precious memory storing and copying 2 bytes when 1 will do, and you sure don't want a run-time test on every dereference; if you had the cost/time/battery for that you wouldn't have picked² a small AVR in the first place. So you need to be able to tell the compiler what kind of memory a pointer points to.
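
One could imagine encoding the address space in the pointer type. A purely speculative sketch; nothing like this exists in Rust's AVR port as far as I know:

    use core::marker::PhantomData;

    // Speculative: one pointer type per address space, so each is only
    // as wide as it needs to be and mixing them up is a type error.
    struct RamPtr<T>(*const T);              // 1-2 bytes on AVR
    struct FlashPtr<T>(u32, PhantomData<T>); // up to a 3-byte address

    impl<T: Copy> FlashPtr<T> {
        fn read(&self) -> T {
            // Would have to lower to AVR's LPM/ELPM instructions;
            // left unimplemented in this sketch.
            unimplemented!()
        }
    }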

The big free toolchains (gcc/llvm) don't pay much attention to non-mainstream processors, which is why a handful of commercial compilers still exist.

I suppose (not being a Rust expert) that a Rust compiler could figure out that a non-mut 'static can go in flash, and track address spaces, and auto-genericize functions that take a reference… but I'll bet it doesn't, yet.

¹ http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1169.pdf

² no pun intended originally, but I'm leaving it and gratuitously calling attention to it.



From the blog post, it sounds like the problem is that Rust doesn't use standard LLVM; they maintain a separate branch with a bunch of patches that are specific to Rust. So, adding AVR support to Rust means adding AVR support to Rust's branch of LLVM, which is a hassle. Rust is about to move to a newer version of LLVM, though, so when that happens they'll be pulling in the AVR support that's already in mainline LLVM.


I think that one thing that makes it difficult to write safe microcontroller code is the interrupts that can interrupt the normal flow at any time. Does Rust on AVR help with this at all, or is it an orthogonal issue?


Yes, really interested in how interrupts can be modelled with Rust. Preferably a safe abstraction without going full RTOS?


My toy OS project has a partial implementation of interrupts for x86_64: https://github.com/intermezzOS/kernel/blob/master/interrupts...

Just enough to get keyboards working, still a lot of stuff to do, but the foundation is there.

Used like this: https://github.com/intermezzOS/kernel/blob/master/src/main.r...


Does this mean that some day I might be able to buy a cell phone that has a radio chip that is not full of buffer overflows and other types of flaws?


I don't know Rust from jack, so I am interested to know why it will compile a safer program than a well-written C one, and what makes it better than other safe embedded languages like ADA?

Note: it won't affect me, as I mainly use PIC micros.


The best thing Rust has over Ada (no one writes ADA any more :) ) is momentum.

Although Ada eventually got an open source compiler, and there are still around 5 vendors available, due to the way it was sold its key use has been in the area of High Integrity Computing.

So unless that is the domain of the work, where no errors are allowed because they could possibly lead to loss of human lives, very few care about using Ada.

For the younger generations, Ada is the kind of thing they hear once existed but have never seen live, even though it has enjoyed a regular presence at European FOSS conferences like FOSDEM.

So by being modern, Rust can appeal more to the younger generations. Also, some of Rust's type-system safety rules for parallel code can only be expressed in Ada via SPARK.


> The best thing Rust has over Ada (no one writes ADA any more :) ) is momentum.

Having only recently looked at Ada I don't really understand this. It does look like Ada offers a lot of really nice features that other languages haven't yet caught up to.

For example, SPARK allows formal definition of correct code behaviour within the function definition. This appears to me to mean tests are written into the function at the time you write the function. That's huge from a maintenance standpoint, and it likely provides extra information a compiler could take advantage of.

> where no errors are allowed because they could possibly lead to loss of human lives, very few care about using Ada

Loss of life is one area, but surely anything financial would benefit from this - as well as anything dealing with personal data (e.g. identity management)?


Sadly, languages with Algol-like syntax are not fashionable any more; now being cool means having an ML-like syntax, hence the momentum.


You don't get advantages over perfectly written C, but that doesn't exist. Over C written by mortals, you gain safety against data races, use-after-free, and a couple of other classes of errors. And all without needing manual memory management, and with high-level ergonomics.


[flagged]


Not really. We can simply expand it to "All code has bugs", and then we're simply at Rice's theorem. In C, the classes of bugs that cannot be proven absent are also quite dangerous - remote code execution, as an exemplar consequence. In Rust, you're limited to a program that can crash, or that has other semantic faults.

So it's clearly not wrong, and is actually a well-established idea well outside the context of Rust or C.


An 8-bit AVR executes instructions from on-board flash memory that requires twiddling hardware lines to erase and rewrite.

Remote code execution isn't a concern.


The AVR ATmega is self-programmable, i.e. a bootloader can re-flash it if it's configured to allow that, but you would need some code already resident in the flash to do so; it couldn't be done without this code.


No, but memory corruption related to pointer misuse or unsigned arithmetic is.


(I think you mean signed arithmetic here and in your almost-sibling comment.)


Probably, I always mix them up.


You can very easily have all the sorts of memory unsafety you have in C, in Rust.


Only in sections marked as unsafe.

In C, every single line of code that manipulates pointers, does string operations, or uses arrays or unsigned arithmetic is a possible cause of memory corruption.


Most code is less-than-perfect, independently of language. But the consequences of this depend on the language.


IMHO there's a big gap between perfectly written and poorly written.


Not poorly written; it's just very easy to forget things, and even very well-written, readable, documented C code usually has memory-related bugs, as long as it's written by fallible humans. Nearly every networked C program in existence has had a buffer overflow CVE at some point - that's simply not possible in Rust.


Corrections welcome, but I think there is one buffer that you can overflow in every Rust program: the stack.

If you have an MMU, the runtime will/should protect against that by mapping a page as 'trap on write' and panicking/rebooting/signaling if you do, but if you don't have one, I don't think Rust has any functionality to prevent it.

Or can you declare that your stack is x bytes, and have the compiler verify that no call chain will need more? (That typically would prevent the use of recursion, but that's not too bad)


The answer is basically "stack probes", yes.


I find very few relevant hits for that on Google (stack probes apparently are also hardware for measuring various quantities, such as moisture, in stacks of materials), and none that explain to me what they are (you can enable them in gcc and Microsoft's compilers, and Rust should use them instead of stack overflow checks, but that's about it). From what I can guesstimate, they are runtime instrumentation, not anything a Rust compiler could do to guarantee that a compiled and linked program never overflows the stack.

Can you tell me more about them?


They're runtime checks, but they're inserted by the compiler at compile time. So yeah, if you link in code that wasn't compiled with them, then that code isn't protected.

You're right that a summary seems hard to find; http://www.delorie.com/gnu/docs/gcc/gccint_124.html has some decent stuff in it, but given that it's random GCC docs, the details may not be 100% accurate today, I dunno, I'm on a flight layover right now, so that's all I've got at the moment :)


They're dynamic checks.

Static checks for stack overflows are infeasible.


They are not infeasible, if you are willing to limit what your language can do.

Doing so doesn't remove all useful programs, as the first Fortran compilers showed: they used fixed addresses for all variables, be they local or global (that they did not support recursion is not that surprising, given that many CPUs of the time didn't have a return stack).

That's why I asked whether Rust supported such a thing. It would be very helpful, for example, if your build system could tell you that that difficult-to-test error handler would, if called, overflow the stack, before you spent time and, possibly, lots of money on trying to run it.


> That typically would prevent the use of recursion, but that's not too bad

It also prevents the use of indirect calls/virtual functions, because they can hide recursion. That's Bad.


It's fairly trivial to overflow a buffer in Rust. I don't consider having to write "unsafe { /* overflow buffer code goes here */}" non-trivial.


The difference is that you are forced to declare where your program does not follow the safe semantics. While it's still trivial to write an overflow, it's also trivial (or at least not nearly as complex as some alternatives) to find and fix, compared to a language where the entire code base is a possible source of those problems.

For comparison, I would say C and C++ (though to a slightly lesser degree) are also trivial to write buffer overflows in; it is definitely non-trivial in many cases to find and fix those problems, as the surface area is so large.

To illustrate this, it's trivial to write a buffer overflow in almost all languages that provide any sort of raw memory access or hardware instructions (ASM), even if through a module. It's trivial for me to create a buffer overflow in Python or Perl by including a module that allows direct assembly code access. The difference when using those languages is that when I run into a buffer overflow problem, I can be reasonably sure it's in a place where I bypassed the assurances provided by those languages, or in some included module that did the same (including just using a shared library). When tracking down a problem like that, I'll focus my attention on those spots, which makes identifying and hopefully fixing those problems easier.

Rust's "unsafe" is really an in-language way to provide a similar set of assurance levels about how likely the code is to have problems of specific types, with the assurances provided by the language and compiler. This is a step above what C and C++[1] provide, and thus welcome.

1: C++ provides some similar assurances based on types, but by the nature of how "safe" code becomes "unsafe" when used in conjunction with unsafe portions, it is much harder to assess at a block level, since you can't even be sure a line is safe until you've verified all the types used within it.


For embedded code, I'd recommend that all application code be compiled with the `--deny unsafe-code` option. Then the compiler will stop you. A separate module providing the I/O code is the only place where one should use `unsafe`. Extra care must be taken there, in code review and in testing.
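
For reference, the attribute form of that lint ('forbid' is a 'deny' that can't be overridden further down the crate):

    // Crate-level lint: any `unsafe` block in this crate (though not
    // in its dependencies) becomes a hard compile error.
    #![forbid(unsafe_code)]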


Do you have any idea how impossible it is to do anything useful without unsafe in Rust on a microcontroller?


About as hard as on desktop? I mean, many of the core data structures commonly used there are implemented using unsafe. This does not mean that the application code must or should be riddled with unsafe code blocks... For a very simple example see https://github.com/hackndev/zinc/blob/master/examples/blink_...


If Ada, Pascal and Basic can, so can Rust.

Every case where this is not true is a bug.


You can also get a buffer overflow in Python and Java by abusing their equivalents of "unsafe".


"unsafe" in Rust isn't abuse. It's explicit syntax with compile time effects specifically for the purpose of facilitating unsafe constructs, like direct memory buffer manipulation, required for low level programming.

I've never tried the Python or Java equivalents and frankly don't know enough about them to even know whether your comment is accurate or not.


Java has `sun.misc.Unsafe`, and Python has the FFI which is remarkably easy to misuse. The point is that you don't use these unless you need to, and you spend special effort when you do - same as `unsafe` in Rust. The rest of your code runs perfectly safely, and the vast majority of code has no need to use `unsafe`. Code which uses `unsafe` and isn't obviously either directly an FFI wrapper or a straightforward implementation of a generic data structure is generally considered bad form.


True, unsafe isn't considered too wrong in Rust, but auditing the small amount of unsafe code is much easier than auditing all the code.


Is getting that past the person who is reviewing your code also trivial?


Possibly. Depends on the quality of the reviewer and the complexity of the code (as a whole). On average Rust programmers aren't necessarily better than programmers using any other language. If history is a guide then if Rust takes off there will be plenty of terrible, buggy, unsafe and very thoroughly reviewed Rust code out there used in popular, widely used applications or services.

I find the notion that "explicit unsafe makes bugs less likely" to be suspect. It seems reasonable, almost tautological at first glance. But theory and practice are different animals.


It certainly is possible in Rust. Rust is not memory safe.


As it is in any language that has FFI to Assembly.

But there is a big difference in the amount of unsafety per line of code.


Rust provides compile-time safety guarantees. Safe code is possible in well-written C, but it can't be proven safe. I think that's a small difference with larger implications: the C code requires more thorough and competent human review for safety. I don't think the "incompetent C programmer writing code that is unsafe and always and uniformly bad" is as prevalent as HN seems to think, so I consider even the "human review" implication not that significant.

As for better, that is in the eye of the beholder. I've never used embedded Ada, but I've slung my fair share of Ada code in higher-level applications. For me, for reasons I couldn't describe, the syntaxes and grammars of C-like languages (C++, Rust) just feel more comfortable. Ada is a fine language, though, and a real pleasure to work with, too (it reminded me a lot of Pascal, which is what I learned in school).

Here's something else to consider, however: Rust (core rust, even without the stdlib) provides a fantastic set of built-in abstractions that would require lots of up-front development effort if one used C.


Maybe read up on Rust. Its type system prevents memory errors. No use after free, no use of invalid iterators, no buffer overflows. No data races in multithreaded code. It's pretty cool.
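
For a taste, the canonical dangling-reference example, which the borrow checker rejects at compile time:

    fn main() {
        let r;
        {
            let x = 42;
            r = &x; // error[E0597]: `x` does not live long enough
        }
        // `r` would dangle here; this program never compiles.
        println!("{}", r);
    }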


Its type system certainly does not prevent memory errors.


It prevents certain classes of memory errors.


You gain Rust's memory safety, but you also gain all of its metaprogramming facilities, which means all those #define macros become a thing of the past.


Yes, it's the expressive type system with zero-cost abstractions that makes me really excited about this!


No difference between AVR and x86 here. Rust has some memory protection guarantees that distinguish it from C. You simply cannot write some kinds of unsafe programs in Rust that you could in C.

If your C programs are correct, there is no safety problem.

I can't speak to Ada.


You absolutely can write unsafe programs in Rust. In fact, you absolutely HAVE to do so to do anything useful on a microcontroller.


You have to do so to do anything useful at all. I/O is written using unsafe code.

That doesn't mean that Rust has no safety properties. It just means you have a TCB of nonzero size.


No, it does mean Rust has no useful safety properties. To do anything, you have to trust a lot of code. The reality is that one of the core selling points of the language was discovered to be unsafe due to some particularly complex combination of library features right before the 1.0 release. Was that patched? Sure. But the idea that it was the only unsound bit of code is absurd. There are almost certainly more in there, many more.


> No, it does mean Rust has no useful safety properties. To do anything, you have to trust a lot of code.

That's also true with any memory-safe language ever: you have to trust the compiler and VM. So if we accept your claim, then we also have to accept the claim that no memory-safe language has any useful safety properties. Needless to say, this is contrary to all the evidence.

> The reality is that one of the core selling points of the language was discovered to be unsafe due to some particularly complex combination of library features right before the 1.0 release.

Consider the chain of events that would have to happen for this unsoundness to lead to real-world problems (say, RCE), and compare that to the chain of events that routinely happens to cause a use-after-free in C++ to lead to the same problems. One is vastly more probable than the other.


It won't. To do anything useful on a microcontroller you would have to drop into 'unsafe' so often that any idea of safety is out of the window.


Unsafe should only be needed at the edges of the program, in dealing with I/O. These parts should always (not just with Rust) be put in some platform layer used by the application code. This allows the majority of the application code to be verified easily, with unit tests and end-to-end tests, as well as by writing simulations. Done right, the verification can be run both on the host and on-device. Another key benefit is that this lets you develop the majority of the software before the hardware is ready. When possible, use a platform abstraction which is already tested, as verification of this code is harder, typically requiring hardware and external interactions.

I hope that Rust's unsafe/safe concept makes the platform/app-logic boundary more visible and explicit, so people follow these practices more often.
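
A hedged sketch of that layering (register address and names hypothetical):

    // platform.rs: the only module that needs `unsafe`.
    mod platform {
        const STATUS_REG: *const u8 = 0x2B as *const u8; // hypothetical

        pub fn status() -> u8 {
            // Safety: STATUS_REG is a valid, always-readable register.
            unsafe { core::ptr::read_volatile(STATUS_REG) }
        }
    }

    // Application logic: 100% safe code, unit-testable on the host
    // against a mocked-out platform module.
    fn device_ready() -> bool {
        platform::status() & 0x01 != 0
    }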


> It won't. To do anything useful on a microcontroller you would have to drop into 'unsafe' so often that any idea of safety is out of the window.

I don't think that's true. I'm writing an OpenGL library right now, diving into "unsafe" all the time to issue the FFI calls, and there's a measurable difference in the number of memory safety problems I've seen (zero thus far) compared to what I see in C++.

I think the friction of writing "unsafe" encourages small isolated abstractions.


In other words you're just writing Rust the way anyone competent writes C in the first place: being careful. You think that the language forces you to be careful, I think what's more likely is that Rust is so difficult and aggravating to use that it discourages the sorts of morons that introduce those errors from using it in the first place.


Somewhere out there must exist these ideal C developers who write perfectly safe code.

Sadly, in more than 30 years of career I have never met one. Not even the BSD and Linux kernel devs, given the number of entries in the CVE database.

Even Dennis complained about this, regarding the genesis of lint.

So I wonder where they exist.


The idea that people who create security flaws in C are "morons" is contrary to all the evidence. In fact, the biggest security problems are usually caused by the best programmers. That's because they write the code that people actually use a lot.

To be concrete: libpng, libjpeg, FreeType, etc. are not written by "morons". Can you write a JPEG decoder faster than libjpeg-turbo?


It's been seven years since I've programmed on embedded systems. I remember C ruling the day. Has that changed at all?


C++ has gained a bit of ground recently, but C still rules the roost, especially on smaller micros.


To add to it, this was the subject of a few CppCon talks, namely how to get C devs to write safer code.


There are Ada, Basic, Pascal, Oberon, C++ and Java options available, but C still takes the biggest market share.


Nope. Not in the least.


C is like a "portable assembly": its goal is to "abstract computers", and that is all. The languages that have attempted to replace it are trying to "abstract problems", which comes at a cost that people are not willing to pay.

If you want to replace C you either need to come with a better "portable assembly" or have a significant change in the hardware architecture that requires the change.

BTW, this constant shoving of Rust down the embedded world's throat doesn't seem to be working.


I'm kind of surprised and impressed that LLVM (and GCC) can target AVR well. It's a tiny machine with some quirks. Registers and IO ports are mapped into low memory. The big ones have a weird 24-bit segmented addressing mode. Some crazy ones have DES intrinsics.


It's still a decent machine for C, especially when compared with, say, the 8051. I'm currently developing an entire product line based on a '51 derivative, and just seeing what kind of assembly the Keil C51 compiler produces is enough to make my eyes bleed. The amount of kludges required to make C work on that architecture is horrifying. Want an example? There's no stack when running under C51.


There's certainly a stack when running under C51, for function call returns. C51 just assigns local variables into the 128-byte data segment. But, say, PUSH ACC works just fine when you drop into assembler.

Then again, the stack on an 8051 can be pretty small: from 10 to 128+ bytes.


Right. Still, the fact that local variables aren't stack-based but overlay-based is enough to blow my mind.


Yeah. The small device I am most familiar with is the Z80, which is really not a good C machine either.


AVR was basically designed from the ground up to be a good platform for C. It's fairly RISC-like, registers are for the most part interchangeable, it has enough pointer registers to support a standard C-style stack and still have two left over for actual pointers, etc.


In fact, does anyone know whether LLVM/GCC can target AVR well? I recently had the displeasure of staring at what sdcc produces from pretty reasonable-looking C... (it doesn't help that some 8-bit microcontrollers are really not C processors.)


I wrote a bunch of stuff for the AVR in C, but it's been a few years since I've done much more than a few fixes. Some of the weird issues with targeting a Harvard architecture are, I think, fixed.

I'd say: 'works okay'

One thing that annoyed me was that calling a variadic function like printf was expensive: ~100-120 bytes per call. Eight to ten printfs --> 1k of flash gone. As far as I could tell, most of it was pushing and popping stuff on the stack one byte at a time.

I ported a bunch of AVR code to an ARM Cortex[1]. Code size didn't increase much.

[1] ARM processors are approximately half the cost of an AVR. Seriously: not important if you're building 1000s of something; important if you're building 100,000s.


GCC at least is just fine. I was involved in developing an embedded system which was controlled by an AVR. There were some cases of unexpectedly big program sizes, but most were due to unnecessarily using large integers (wider than 8 bits), plus one case of overly aggressive inlining. I don't remember offhand how we solved the last one, but it just took a day or two.


Are you thinking of PIC microcontrollers, maybe? AFAIK all AVRs are 8-bit, and the architecture was designed to be easily targeted by compilers.


I'm so happy for this, it's gonna be great to use Rust for Arduino stuff


Nice work, although there are already a few options for safer microcontrollers when one bothers to look beyond C.

More options is certainly even better.


Sounds exciting to have options apart from C and asm. How about something for us PIC plebs though?


In regard to safety, you could use the Basic and Pascal compilers until Rust arrives.

https://www.mikroe.com


Latest commit in the git repo was 8 months ago? Is this project active?


Which repo? The last commit to the LLVM AVR git repo just marks it as being upstream in LLVM SVN.


> The backend started as a GitHub fork. It now lives inside LLVM’s svn repository.


People still use AVR?


Yup. You'd be surprised how much functionality people can wring out of an ATTiny85 or an ATMega328p.


Figured people would be moving on to ARM M0+ chips for new development.



