The Embedded Rust Book (rust-embedded.github.io)
387 points by formalsystem on April 27, 2019 | 82 comments



20 years ago, I did embedded computing with systems that would scale to about 360 processors (PowerPC "G4" MPC7410 @ 500 MHz) and about 70 gigabytes of RAM.

Yes, it really was embedded. It fit in 9U (15.75 inches) and was rugged enough to fly in a military aircraft. The OS would let users turn off interrupts in order to squeeze out every bit of performance.

We could also run the OS on the SHARC DSP, which is a word-addressed Harvard-architecture chip without an MMU. Running on that would allow 3x the CPU density. One system got up over 1000 cores.

IO would typically come in via DMA, over a 32-bit link running at 40 MHz. There could be many of these.

The end result was generally something like a radar. The end-user sees a radar, buys a radar, and uses a radar. They don't see a computer. The computer is just a component embedded in the radar.


That's an interesting story but you have to admit high bandwidth real time signal processing is a pretty specialist embedded workload. I guess these days there'd be FPGAs and/or existing IC packages for the job.


There were several companies competing in this space. The main three were CSPI, Mercury Computing (now Mercury Systems), and SKY Computing. Matrox briefly tried to join the party.

Even back then, FPGAs and custom ASICs would be part of the compute fabric. I left that out. For example, there was an FFT accelerator chip. These days the companies add in a mix of FPGAs and GPUs, but the CPUs are still there. It's all the same stuff today, but faster.

To the end user, it was never a computer. It was a radar, a video processor, an MRI scanner, a sonar, an ultrasound, a chip wafer inspection device, a laser with real-time mirror warping, or some other tool.


You have a good point that people who set out today on greenfield projects implementing high-end military, medical, or industrial metrology hardware may see value in using Rust and won't have a problem with more expensive hardware. However, I do think that's a vanishingly small percentage of embedded development.


I agree. Given how low unit prices are compared to the military space, the volume of the current embedded market suggests a massive number of 8/16/32-bit MCUs are moving:

http://www.icinsights.com/news/bulletins/MCUs-Sales-To-Reach...

They should probably be considered the norm for embedded work without unusually high performance requirements.


I would absolutely love to throw in my lot with something other than C/C++ for embedded development, but after investigating Lua and Rust I could only conclude that they're currently inappropriate for resource-constrained commercial development... particularly if you use a broad range of sensors, have higher-level protocol communication requirements, etc. Re-implementing this stuff isn't just a minor drag, it's commercial insanity.

1. Example hardware in this book costs around 98RMB and has a huge form factor. AVR / ATmega328P based platforms like the Arduino Nano are currently ~10RMB, and can be beefed up to full Ethernet connectivity with prebuilt hardware modules for another 20RMB. So in short it is possible to procure over three(!!!) embedded, Ethernet-capable AVR platforms for the same price as this hardware, no soldering required. Why not use something cheaper and more broadly available? It would help more people get on board (no pun intended). Oh wait, there's no AVR support. I see...

2. The nominal target hardware in the book seems overpowered/overpriced/out of date. For example, the target used early in the book is the LM3S6965, which is NRND (Not Recommended for New Designs). Probably as a result of NRND status (I guess people buying up stock to keep old designs in manufacturing) it costs over 125RMB for the chip alone. The manufacturer's recommended replacement is the TM4C12x, the cheapest development board for which is the MINI-M4 for TIVA @ 22RMB (which features JTAG). ICs in this series begin at 22RMB or so, still many times the AVR chip price, and with IMHO far too much hardware for novices (64KB flash, 12KB RAM, 12x 12-bit ADC channels, 49+ GPIO channels...). A direct quote from later in the book: "In this example, let's assume we have an Texas Instruments TM4C123 - a middling 80MHz Cortex-M4 with 256 KiB of Flash." That's not middling, that's ridiculously overpowered for almost anything embedded.

3. See the example code at https://rust-embedded.github.io/book/start/registers.html, compare the C and Rust volatile access examples at https://rust-embedded.github.io/book/c-tips/index.html, and see which syntax you prefer...


It's easy to mix C/C++ with Lua or Rust. Lua requires you to write a C library or use LuaJIT's FFI library. There are a number of Rust crates that let you include arbitrary C/C++ code, such as the cc crate [0].

This approach allows you to have the advantages of the higher-level language while still being able to utilize existing libraries.

While not for an embedded project, I used this approach to write a Rust application that interfaces with the Helix Perforce C++ API.
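
To sketch roughly what that looks like (the file and function names here are hypothetical, but cc::Build is the crate's real entry point, and cc goes under [build-dependencies] in Cargo.toml):

    // build.rs -- compile a C file into the crate's build
    fn main() {
        cc::Build::new()
            .file("src/vendor_driver.c")   // hypothetical vendor C source
            .compile("vendor_driver");
        println!("cargo:rerun-if-changed=src/vendor_driver.c");
    }

    // src/main.rs -- declare the C function and call it through FFI
    extern "C" {
        fn vendor_driver_init(baud_rate: u32) -> i32;
    }

    fn main() {
        // FFI calls are unsafe; the C code is trusted to uphold its contract.
        let status = unsafe { vendor_driver_init(115_200) };
        assert_eq!(status, 0);
    }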

Also, regarding your point #3, I prefer the Rust version by far. Code is written once and read many times. Without needing to jump between places or rely on good IDE contextual information, the Rust version says:

- SIGNALLED is intended to be modified.

- ISR must conform to the rules of an interrupt handler -- be careful what you do in this method.

- The write to SIGNALLED is unsafe and volatile.

- The read of SIGNALLED is unsafe and volatile. You can be sure the while condition check won't be optimized away because you see the use of the "read_volatile" -- if you're tracking down a bug, no need to go look to see if SIGNALLED is marked volatile.

Plus, in the comments of the Rust sample, the author recommends using a higher-level primitive, to avoid the direct calls to read/write_volatile.

A good IDE would show type hints: "volatile bool" for C and "Atomic<bool>" for Rust, or something similar.
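
For reference, the pattern under discussion looks roughly like this (a condensed paraphrase of the book's example, not a verbatim copy):

    static mut SIGNALLED: bool = false;

    // Interrupt handler: the only writer of the flag.
    fn isr() {
        // Mark that the interrupt has fired.
        unsafe { core::ptr::write_volatile(&mut SIGNALLED, true) };
    }

    fn wait_for_interrupt() {
        // The volatile read makes it explicit, right at the call site,
        // that this check cannot be optimized away.
        while unsafe { !core::ptr::read_volatile(&SIGNALLED) } {}
    }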

[0] https://crates.io/crates/cc


As a C firmware developer always trying to learn something new, I looked into Rust for embedded, but cases like this hold me back from delving deeper.

So, to set a value into a (volatile) boolean I have to call a function passing a pointer within an unsafe block? Why on earth is it clearer than, or preferred over, the C code? I know, you'll list several reasons, but please, it's not clearer and it's voodoo magic. In fact, it is the last thing I would (intuitively) think of doing, since it's known that operating on a volatile variable through a plain pointer may be UB in C++.

>> the author recommends using a higher-level primitive

Again, to read/write a boolean? Are we talking embedded here?

From https://doc.rust-lang.org/core/ptr/fn.write_volatile.html

>> Rust does not currently have a rigorously and formally defined memory model, so the precise semantics of what "volatile" means here is subject to change over time.

WHAT!?

I cannot bring Rust to any serious development yet...


I've done C firmware, and I disagree that these are problems.

C did not have a rigorously and formally defined memory model until 2011. We did C for almost 40 years before we got that.

Go read what Linus Torvalds has to say about "volatile", and then go rip it out of all your code. You should not be using that keyword, except possibly for a software-incremented global clock tick counter residing in normal RAM.

You need to call a function that will properly perform any required memory fences. If you don't do this, the CPU or bridges may reorder the memory accesses.
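
A minimal sketch of that alternative in Rust, assuming the "higher-level primitive" mentioned upthread is an atomic flag (core-only, so it works in no_std code):

    use core::sync::atomic::{AtomicBool, Ordering};

    static SIGNALLED: AtomicBool = AtomicBool::new(false);

    // Hypothetical interrupt handler: set the flag.
    fn isr() {
        SIGNALLED.store(true, Ordering::SeqCst);
    }

    fn wait_for_signal() {
        // The SeqCst ordering tells both the compiler and the CPU not to
        // reorder accesses around the flag -- the job `volatile` is often
        // (mis)used for in C.
        while !SIGNALLED.load(Ordering::SeqCst) {}
    }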

Rust might have problems, but the issues listed are not problems. My immediate concern is the availability of inline assembly. I would hope that this is possible and can be inlined into code that isn't marked unsafe.


> That's not middling, that's ridiculously overpowered for almost anything embedded.

Well, you obviously have a different idea of embedded. Sure, there are different requirements for different systems, but in IoT, my previous company shipped tens of thousands of devices (fleet monitoring, smart sensor networks, etc.) and everything was based on the STM32 L & F series: Cortex-M0, M3 & M4. Now, in automotive, we have the MPC57xx.

> Re-implementing this stuff isn't just a minor drag, it's commercial insanity

Absolutely agree. If there is adoption, it will be incredibly slow. Logical, but extremely frustrating.


I used to do a lot of embedded work and found it really interesting where people draw the line on what is "embedded".

Around 2009ish, I had a coworker who swore anything over 16 bits wasn't embedded. We were working on an MMU-less uClinux ARM7 system with 16MB of RAM and no networking, and that didn't meet his criteria for "true embedded".

The next job I had was a piece of customer premises equipment that ran a fairly high powered ARM board with dual gigabit ethernet links and a lot of user space written in PHP. No one on that team questioned that it was an embedded system.

It always seemed like a strange thing to gatekeep on. Personally, I base my definition more on the use case than the power of the machine.


> Around 2009ish, I had a coworker who swore anything over 16 bits wasn't embedded.

When I were a lad, if you had more than a dozen bytes of RAM it wasn't REAL embedded. /shakes-cane

I generally use your definition, though. Any computer that the end user doesn't think of as a computer (and isn't expected to maintain or directly use) is embedded, whether it's a camera with touchscreen GUI or an ABS processor in a car.


Programmers working on highly resource-constrained, real-time, bare-metal (or microkernel) systems tend to think of what they do as "embedded programming". There's a certain element of truth to it if you make the distinction between "embedded system" and "embedded programming": "embedded programming" loosely groups a bunch of techniques for dealing with real-time, bare-metal, resource-constrained devices. But there is no real formal distinction between those things.


Is there seriously still a commercial reason to use the ATmega328 (an 8-bit microcontroller with 2000 bytes of RAM) for new projects? I've been working on 3D printing firmware, and it's silly how easy it is to reach the limits of this CPU. I've been shaving bytes of RAM by reducing queue sizes and running into performance limits even with a few 32-bit additions in the stepper loop. That was 6 years ago.

The only semi-valid reason to use this chip IMO is because people want to keep using existing firmware as-is.


In fact, many embedded requirements are not as complicated as a 3D printer and essentially consist of basic logic and state, sensors and effectors, potentially with some external communication. It may surprise you, but 8-bit micros are absolutely fine for this and can serve HTTP at well over a hundred requests per second without breaking a sweat.

If you read the stepper code for Marlin @ https://github.com/MarlinFirmware/Marlin/blob/1.1.x/Marlin/s... you will see it's nontrivial... a bunch of PhDs implementing nontrivial maths in a highly constrained environment. That's cute, but not demonstrative of real-world concerns. In such cases, you would absolutely throw more hardware at the problem, e.g. for motion systems maybe dedicating one "dumb" MCU per axis and outsourcing planning to a more rational/powerful environment.


Cost and temperature. Albeit not the ATmega328, MCUs in the even more resource-constrained ATtiny series go for under $0.30 in quantity and run in 125°C environments: https://octopart.com/search?q=avr%208%20bit&sort=median_pric... They're going into new products.


Only $0.13 for 2000+ ATTINY25-20SU according to http://www.hqchip.com/search/ATTINY.html


A rather small team of 10 people (with only 4-5 devs) will cost 10K per week at some generic corporation. Let's say your project takes 12 months to make on a beefy processor ($2 per unit) and 15 months on a cheap processor ($1 per unit). That extra dev time to deal with those issues directly costs 120K.

That automatically means you have to be shipping at least 120K units just to break even with those costs. But you are also getting to market 3 months later, which carries its own big business costs, so you probably need to be shipping 200K or more units. That's a pretty big number for a lot of embedded device designers.

If you instead double your chip cost constraint from $0.13 to $0.26, then that same team must now ship nearly a million devices to break even.
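
Spelling the arithmetic out:

    break-even units = extra dev cost / per-unit saving
                     = (12 weeks x $10K/week) / ($2.00 - $1.00) =  120K units
                     = (12 weeks x $10K/week) / ($0.26 - $0.13) ~= 920K units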


Um, look that up on findchips/octopart/etc.

It's a buck and change on those--same as ARM Cortex M series.


> Is there seriously still a commercial reason to use the ATmega328 (an 8-bit microcontroller with 2000 bytes of RAM) for new projects?

For anything commercial? Almost certainly not.

For one-off makers/non-engineers? Maybe. The Arduino ecosystem is really hard to beat for non-programmers.


Toys. The 6502 is still being used in them, as well as in many basic UX tasks.

Same for many of these devices. Mature libraries, low cost, ease of use all play a part.


It sounds like something you might use for the power-on sequencing of a laptop computer. It could manage the battery and wake up the main CPU.

Alone, it's the sort of thing you might use for a kid's toy that you hope to make into the next big must-have Christmas hit. You'd need to make millions upon millions of them, and maybe $49.95 is your target price.


Those microcontrollers for laptop power sequencing have already moved on to Cortex M4s


We are slowly working on AVR support both in LLVM and Rust.

https://github.com/avr-rust/rust


I did find that, but it's described as a fork and is 11 commits ahead, 5210 commits behind rust-lang:master. The only activity in the last month is https://github.com/avr-rust/rust/pull/137 ... which seems to be you talking to the void. Honestly, and I don't mean this to sound rude, I assumed it was a dead project. Well done for persisting!


Rust can run on the Arduino Nano.

Here is one example I ran on it:

https://github.com/nh2/quadcopter-simulation/blob/master/ard...


AFAIK the LM3S6965 is used in the book as a target for QEMU; no one endorses using the chip in actual hardware.


ATmegas are a commercial failure and, frankly, outmoded toys. They are not covered because they have no relevance outside of Arduinos.


I agree that in this price bracket the STM32 is better value. My point was not about particular chips, but about a platform with better price and focus for most basic circuits than the hardware in the article, which in terms of tooling and clock speed is more like a general-purpose processing platform: higher cost, narrower distribution, less relevance. What Rust or Lua needs to take off in embedded, IMHO, is an Arduino-like IDE, a bunch of libraries (Arduino's library manager is crap), and lots of people with hardware. That's not going to happen if the entry-level hardware costs 5x as much and the current stock of hardware in broadest distribution (AVR-based) remains unsupported.


I'm wondering if that's going to cut it, though, since the target audience for embedded Rust isn't someone who wants to play with his Arduino, but people who write real-world applications and are sick and tired of C/C++.


Given the feedback the C++ community gets when trying to win over embedded developers, they surely aren't sick and tired of C89.


There seems to be a common misconception here amongst traditional desktop and server software people that embedded programming requires vast software complexity. In fact, most real world electronics are very simple software wise. There's just a lot of them, they're a pain to get to, and they have alternative and nuanced challenges and context (cost, speed, debugging, stability, environmental considerations, etc.).

Now, regarding language, let me illustrate an area in which C/C++ sucks balls for embedded. Using a cheap MCU is generally considered good practice if you have many target systems to produce and the work can be executed on that cheap MCU, because using a pricier one will quickly add up and retargeting a team costs a lot. The problem is that on a cheap MCU you sometimes run out of basic resources, such as physical interfaces (pins). In the embedded world there are multiple solutions to most problems, just as there are in the conventional software world (i.e. the classic Perl motto, TMTOWTDI), and one of these is connecting IO expansion chips via alternate (particularly shareable) interfaces such as I2C/SPI. Such chips provide various types of interfaces by proxy, for example additional GPIO pins. The issue with these is that many libraries, such as those written to control motors, MOSFETs/relays, or serial devices, will not work by default on these proxied interfaces. Therefore, you need to either hack or rewrite the respective libraries in order to use them.

Coming from a software background myself, this is a basic interface abstraction, and the current solution is ridiculous. Reflection would go a long way toward solving the issue. Lua frankly appeals more than Rust in this regard when dealing with existing third-party libraries which were not constructed explicitly with the requisite abstraction. Alternatively, I believe some software people here on HN recently announced they are attempting a generative approach for embedded circuits and software, sort of a "Ruby on Rails" shake-n-bake approach to the whole problem space. The issue with such approaches tends to be that you wind up with a whole lot of deployable product but nobody who understands the intricacies; there is a vast cognitive overhead for the higher-level generation/specification languages; there are frequently device incompatibilities based on timing, clock frequency, or other hard-to-model aspects of a proposed system; and iterations in hardware cost a lot more in time and money than iterations in software. Both approaches are used in embedded development, but it is a vast space whose tooling usually differs by target hardware, as if you needed a new IDE, compiler, command line, development workflow and physical interface type for every language you wanted to write on a desktop, and interpreted languages didn't exist. Some of the tooling is prohibitively expensive.

I hope you can now see the problem space is not that similar to desktop software, although the conventional software world does have a lot it could further contribute to electronics. As usual for those outside of an area, the perceived simplicity of established solutions belies the complexity, nuance and outright effectiveness of their current, if ugly, state.


The embedded-rust people have a straightforward approach to the interface abstraction situation you describe, and it seems to be going fairly well.

The embedded_hal crate defines traits for controlling things like I2C, SPI, and individual GPIO pins. Then drivers for particular devices are written against those traits.
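
A rough sketch of that pattern, assuming embedded-hal's OutputPin trait (exact trait paths and error handling differ between embedded-hal versions; the driver here is hypothetical):

    use embedded_hal::digital::v2::OutputPin;

    // A driver for a relay, written against the trait rather than a concrete
    // GPIO type, so the same code drives a native pin or a pin behind an
    // I2C/SPI expander whose crate also implements OutputPin.
    pub struct Relay<P: OutputPin> {
        pin: P,
    }

    impl<P: OutputPin> Relay<P> {
        pub fn new(pin: P) -> Self {
            Relay { pin }
        }

        pub fn energize(&mut self) -> Result<(), P::Error> {
            self.pin.set_high()
        }

        pub fn release(&mut self) -> Result<(), P::Error> {
            self.pin.set_low()
        }
    }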

Given the basic worldview of "we're going to rewrite everything in Rust because that's what Rust people do", I think there's a good chance of ending up in a situation where the embedded Rust ecosystem just doesn't have that problem.


> I hope you can now see the problem space is not that similar to desktop software, although the conventional software world does have a lot it could further contribute to electronics. As usual for those outside of an area, the perceived simplicity of established solutions belies the complexity, nuance and outright effectiveness of their current, if ugly, state.

Outright effectiveness? The industry is an outright disaster and it's a wonder engineers can get anything done outside of well-funded businesses. A small but representative slice of some of the bullshit I've had to deal with since I started designing electronics (high speed digital and RF):

Meta build systems with pre-pre-processors with feature gates for bug fixes tied to CRMs so that clients don't receive fixes unless they first encounter the bug (and waste weeks trying to fix it while their account manager responds). NDAs on all of the interesting parts with 2+ weeks turn around time on paperwork and 2+ months on samples (but if you're in Shenzhen you can just grab it at the corner store...). Mismatched peripheral IP jammed together with half-assed drivers that wouldn't pass at a client tech demo (can you run both DMA channels at the same time on STM32Fx or does that still crash?). Buggy reference implementations of basic interconnects that silently drop data and don't support run of the mill "advanced" features used in every ARM mcu (thank you, Xilinx). Version control brought to you by WinRar, 7Zip, and IMAP. Data exchange brought to you by The Interns™ because who has time for that shit. Different versions of firmware written by two completely different Qualcomm teams firewalled away from each other for the same chip, in the same market deployed to clients depending on whether they had an existing relationship with Broadcom or not.

The worst part is that none of these problems have much to do with the actual hardware! If it weren't for the culture surrounding the industry, electrical engineering itself would be an absolute pleasure. It has never been easier to slap a schematic and PCB layout together in Altium if you already know what you need, especially with sites like SnapEDA and Octopart simplifying the boring parts (notably, both cofounded by teams with a lot more mainstream software ecosystem experience). It's just getting to that point, and writing the firmware afterwards, that is absolutely awful. I had a similar situation to your MCU example not too long ago, and I ended up going with the ATmega because downloading a bunch of Arduino projects and looking at firmware in practice makes characterizing a platform and making the right early decisions a lot easier.

To be clear: I couldn't care less about the compile times or higher level abstractions or any of the tech fads that roll through HN (although I am very much hyped up for Rust and now work with React at my day job). I just want a basic culture of information sharing and cooperative common sense that we take for granted. Coming from a software engineering background, the only thing I think this industry has done right is reference designs and PCB layout notes.

I get that the industry isn't a homogeneous blob and most of these problems are a result of the vendors' mismatched economic incentives, but that just means the industry is a natural disaster. All of these problems suck everyone into a feedback loop preventing any real progress, especially when all the details of hardware and software are locked away.


Another part is winning hearts over the religious movement against anything but C and assembly in embedded, even among tool vendors. Even C++ faces this, in spite of its copy-paste compatibility with C, let alone languages that require other approaches.


You can get one of the STM32F103C8T6 "Blue Pill" dev boards and use that instead for Rust. Under 2 USD, almost at the level of Arduino Nano knockoffs.

The AVR backend for LLVM was merged recently. So Rust support for AVR might happen if someone pitches in and does the work. Lots of Arduinos lying around, so it would make it more accessible.


Absolutely this. Same price, and the Blue Pill has a much more powerful CPU.

The problem is HAL support. Some peripherals are supported, others are not. ADC and DMA for the UARTs were merged just recently. Stuff like SPI/I2C slave modes is missing; I think the HAL interface isn't fleshed out there yet. Something like a USB stack would probably take a long time to complete.


How does it work to call C for the missing peripherals? For instance, using the existing vendor libraries. The book in question does not seem to cover this, unfortunately.

I think for a long time we will have to accept that Rust and C are going to co-exist on these devices. So that story should be workable.

My embedded code tends to have a strong split between application logic (in platform-independent, data-driven functional code, with automated tests) and underlying hardware-dependent "app host" (as little custom code as possible), with a data-based interface. I would be quite happy doing C for the hardware layer and Rust for the application logic layer.
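
As a sketch of that split (all names hypothetical): the application logic is a pure data-in/data-out function with a C-compatible interface, so a C "app host" can call it and the logic can be unit-tested on a desktop.

    #[repr(C)]
    pub struct Inputs {
        pub temperature_c10: i16, // tenths of a degree Celsius
        pub door_open: bool,
    }

    #[repr(C)]
    pub struct Outputs {
        pub heater_on: bool,
        pub alarm_on: bool,
    }

    // Platform-independent application logic: no registers, no HAL, no unsafe.
    #[no_mangle]
    pub extern "C" fn app_step(inputs: &Inputs) -> Outputs {
        Outputs {
            heater_on: inputs.temperature_c10 < 180,
            alarm_on: inputs.door_open && inputs.temperature_c10 > 300,
        }
    }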


I'm not sure how well it would work; the Rust embedded HAL crates like to be in control of all peripherals in order to give nice guarantees that things are properly initialised.

Calling C would also have to touch the same structures, which would, if nothing else, change the state of some peripherals in ways that break the Rust guarantees.


I haven't tried it, but it should be doable. A lot of Linux-based Rust libs wrap C libs and link to them, or compile their own bundled copy during the build. The same should apply here, though there could be some problems regarding which C compiler is used and where it expects to find its libs.


We are slowly working on AVR support both in LLVM and Rust.

https://github.com/avr-rust/rust


Is there a good STM32 USB library for Rust? I've seen some preliminary work in the past but I'm not sure where it's at. If it's not particularly developed, has anyone had success calling into the ST-provided libraries from Rust to get USB up and running?

Rust for embedded stuff seems like an absolute dream. I'd love to make use of all the excellent work that's gone into it. Most of my work just needs USB.


There is this one, https://github.com/mvirkkunen/usb-device/blob/master/README.... which apparently works well enough to power a keyboard: https://github.com/TeXitoi/keyberon


This book focuses on ARM Cortex-M architecture microcontrollers.

Is there a comprehensive list of which other microcontrollers are supported by Rust?

I really like the direction that Rust is heading in with regard to embedded, but I wish that when folks mention embedded, they would also indicate which microcontrollers are relevant. There are so many varieties around.



Wow: tier 1 contains no ARM or any other CPU arch. Tier 1 is only 32-bit i686 and 64-bit x86_64.

Tier 2 platforms can be thought of as “guaranteed to build”. Automated tests are not run so it’s not guaranteed to produce a working build.


We’ve been thinking about re-doing the tier system, as it misses some important points. For example, Firefox has ARM as a tier 1 platform, so if we find bugs, they tend to get fixed up pretty quick. We’re not sure the current way of defining stuff really maps to the reality.


It's pretty par for the course to have tier 1 be the most common consumer systems, since they're the ones with the most bang for the buck and the most easily available CI infrastructure.


This is, AFAICT, a Rust developer consideration more than anything else? I think/hope that releases are gated on one or more tier 2 targets passing their tests?


AFAIK this is mostly waiting on someone to step up and provide the relevant CI infrastructure.


Notably for the topic at hand, RISC-V target list is unmerged: https://github.com/rust-lang/rust-forge/pull/202/files

(Also two other unmerged PRs to the list.)


Thanks for the heads up; I’ve merged that one. I’ll try to check out the others.


Last I checked, it is only ARM chips now because that's what LLVM supports.


There's no point in having a list. Anything LLVM supports can work. The rest is just a surmountable library problem, but which libraries are good enough is a matter of some opinion.


Not quite. Not all ISAs are equivalent and not all llvm backends are equivalent. Rust depends on a few features that are typically not found in embedded systems, like multiple return registers. I wouldn’t even call ARM an embedded ISA anymore.


Uh... multiple registers dedicated to return values is a feature of an ABI, not an architecture. The hardware doesn't care what you put in those registers, obviously. Rust defines its own ABI for internally-generated code, it doesn't need to care.

And in fact I'm not aware of any such systems. Existing 32+ bit embedded architectures like MIPS, RISC-V, Xtensa, and ARC all have robust instruction sets with large register files and a fully-defined SysV-style C ABI.

No, the reason is as stated elsewhere. Rust doesn't run on these systems because no one bothered to tool up LLVM for them.


> No, the reason is as stated elsewhere. Rust doesn't run on these systems because no one bothered to tool up LLVM for them.

Part of the problem is that LLVM developers themselves are apparently unwilling to release support for architectures that they see as liable to go unmaintained and bitrot in the future, even if someone shows up and does the work. There is a notion of "experimental arch's" but it doesn't seem to be actively used, or to suffice in addressing the issue.


Backend support is pruned for architectures that go dark. But if you showed up with support for a new one I'd be really surprised if it weren't included.

Experimental archs were just used recently for wasm and riscv to find maturity.

Are you referring to a specific discussion on the llvm-dev list? Last one that had a discussion in this area that I recall was Nios2.


100%! My fault for conflating ABI and ISA. The C ABI that is first supported by a new LLVM backend typically doesn't have those features implemented, which makes porting a higher-level language like Rust non-trivial.


Where is the SysV-style C ABI defined, if we wanted to go read more about it? Or do you have a favorite reference explaining its design choices?

I'm interested in soft processors on FPGAs and their tools, and that sounds like it might make for good reading.


Nobody actually uses System V anymore, but because it's the thing other Unixy systems are based on, people keep extending the SysV ABI standard to processors ridiculously more powerful than System V itself could ever hope to run on.

As best I can make out, the only relevant portions of the original SysV ABI document are chapters 4 and 5, still available and maintained on the SCO website:

http://www.sco.com/developers/gabi/

There are also separate documents defining the details of the SysV ABI for each processor family. This StackOverflow answer links to some:

https://stackoverflow.com/a/40348010

... and the OSDev Wiki links to many more:

https://wiki.osdev.org/System_V_ABI


Compiler exists in theory != supported


I'm sorry but that sounds like a distinction only to freeloaders. Know your dependencies, use something like Nix+Nixpkgs to put yourself in charge of the upgrade schedule, and there will be no nasty surprises.


Many thanks to japaric (Jorge Aparicio, IIRC); he's contributed heavily to projects which support embedded use cases.


Okay, can we please get a primer that an intern can follow that starts from an empty Windows 10 install, installs everything including VSCode, flashes the boards with a Segger J-Link, blinks the LED, and single steps the debugger?

I'm serious. I will go buy any board in order to check that tutorial out.

I'll go further. I would buy the Rust folks a couple boards and Seggers to get that.


Hey, my company has this. We do consulting and training for embedded Rust, and our material is open source.

Check out https://github.com/ferrous-systems/embedded-trainings/blob/m...


That's ace!

BTW, it is really cool that your company does this. It has a cracking name too!


This may not be exactly what you're asking for, but: https://github.com/atsamd-rs/atsamd


GoFundMe/Indiegogo that idea; if it's popular, it should get some traction.


This is great, but unfortunately embedded doesn't always = ARM. Does anyone know if it would be possible to target mips32, since LLVM supports it? Is there any effort in Rust for other arches? The MPLAB X + XC32 + Harmony toolchain has gotten pretty darn awful, so that would be amazing for the MIPS community.


Maybe using an ARM dev board boosts acceptance of this book among its target audience, at the moment. But, for open source hardware goals, a RISC-V board, such as the HiFive1, would be good cross-promotion/support.


Imagine Rust being considered the way to develop for embedded RISC-V, from almost the start.

Similarly, as long as Rust is already shaking up C/C++ a bit, they could nudge some hobby/education and professional development towards RISC-V. (Especially on embedded, right now, as I've heard the open ISA is attractive to some projects.)



So, Nordic Semiconductor, when do we get Rust support in your SDKs? Please?


At least the soon-to-be-released ESP32 successor seems to get this, by virtue of adopting a RISC-V core. Kind of a big deal, really.


What's it called?


[citation needed]


My info is mainly from: https://www.esp32.com/viewtopic.php?f=2&t=9768

I was incorrect; the citation: "d) it is a Tensilica / RISC V are in the pipelines".

So not "big deal" RISC-V yet, probably later.


Would love to learn Rust and am currently debating a few home applications.

Any ideas from the HN crowd? I'm currently looking at a Pi-based weather station where I would use Rust as the processing server for all the data each day. Nothing fancy, but perhaps a fun way to learn.


I know some embedded devs who say Rust is still too slow compared to C.

What do they mean?


That they are happy with C and are sick of others trying to convince them to use something else. Only halfway joking... But if you really want to know what those people mean, then you should ask _them_.


Hard to say. It’s possible they made a mistake, it’s possible they hit a compiler bug, it’s not really possible to know without hearing more about what they mean.



