Practical Guide to Bare Metal C++ (gitbooks.io)
291 points by adamnemecek on July 21, 2016 | 83 comments



Nice to see some concrete examples of where you don't want exceptions, RTTI, or dynamic memory.

Quite a few developers believe that if you don't have these features you aren't writing "real C++" but it's common practice in quite a few industries where size/performance is critical.


Maybe my experience is atypical but RTTI and exceptions are features of C++ that I never see used in real-world C++ almost regardless of the application. When I have seen them used, it is by people new to C++ that haven't fully figured out the common development idioms of the language yet.


I've never seen RTTI used in production code. Even with modern C++, RTTI almost always adds too much runtime overhead and hurts performance. Most importantly, it's usually a sign of bad design.

My experience with exceptions is different, though, and I see them used quite a bit. It depends a lot on the age of the code base and the group working on it. Old code is less likely to use them, and developers with a heavy C background, or a lot of pre-C++98 experience, are less likely to use them. Except for maybe some tiny embedded systems (like microcontrollers), the overhead of exceptions isn't that bad any more.


The strange thing about exceptions in C++ is that they make ubiquitous use of RAII a necessity, but at the same time they must not be thrown in destructors.
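That tension can be seen in a small sketch: a destructor is implicitly noexcept, so a cleanup failure can only be swallowed or reported out-of-band, never thrown (the FileGuard class and its logging here are illustrative, not from any real codebase):

```cpp
#include <cstdio>

// Hypothetical RAII wrapper. Destructors are implicitly noexcept in
// C++11, so a failure during cleanup has to be swallowed or reported
// out-of-band rather than thrown.
class FileGuard {
public:
    explicit FileGuard(const char* path) : f_(std::fopen(path, "w")) {}
    ~FileGuard() {
        if (f_ && std::fclose(f_) != 0) {
            // Throwing here would call std::terminate if we're already
            // unwinding from another exception; log and move on instead.
            std::fprintf(stderr, "close failed\n");
        }
    }
    bool ok() const { return f_ != nullptr; }
private:
    std::FILE* f_;
};
```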

And then C++ exceptions don't come with a stack trace, which is what makes quick-and-dirty exception-based code so convenient in Java or C#.

I was never a principled opponent of exceptions, but I find myself removing them from more and more code the more thoroughly I think about all the possible error states of code that should really be robust.


I don't find it weird that exceptions shouldn't be thrown from destructors. I remember being surprised when I learned that rule, but the reasoning seems sound to me: what can you do when the cleanup failed? Clean up harder?


The short answer is that action B may depend on whether or not action A was able to finish successfully, including any cleanup.


The official reason destructors are discouraged from throwing exceptions is that one reason destructors get called is that something already threw an exception, and there is currently no way to handle two exceptions in flight at once, so in that case the process gets killed.

It's also true that with more experience, we've figured out that cleanup code is special. When I throw an exception during normal processing, I want to abandon what I'm doing. But if I discover that one step of a destructor cannot be completed, I do not want to abandon the rest of the cleanup. If I throw an exception in a destructor, I can't come back later to finish any remaining steps.

And, as I asked earlier, if I throw an exception because I can't close a file, what do I expect the exception handler to do? I guess it could try closing the file again, but I'm not going to hold my breath on that working. Perhaps the file is on a network share and the network's gone down. Perhaps closing the file will involve writing some cached data to disk, but the disk is full. In my experience, in this case your only real options are "ignore it, either because there's nothing you can do or because you're already dealing with an exception and you've simply discovered another symptom of the original problem" or "kill the program because there is no hope." You can log the fact that a step failed, but remember that it's possible for logging to fail, if the disk is full.


>It's also true that with more experience, we've figured out that cleanup code is special.

In my experience the exact opposite is true. Error handling or "cleanup" code is in no way special. Errors are not exceptional and there is nothing we can say in general about what error handling code will or will not need to do.

Error handling code needs to be able to use the entire language and reuse all the regular code that is used elsewhere. A language should not have two modes, an error mode and a normal mode with completely different non-local semantics.

>And, as I asked earlier, if I throw an exception because I can't close a file, what do I expect the exception handler to do?

That depends entirely on the context of the program you're writing. It might want to do things like writing a record to a database that marks the file as invalid/corrupt. It may need to notify some other system about the failed action. It might want to roll back a transaction or initiate a compensating transaction. It may want to abort some session and schedule it to restart at a later time.

There is simply nothing we can know in general about what a caller of an action that failed to complete cleanly may want to do as a result of that failure.

Using exceptions for error states that may require a complex response is just not a good idea and in C++ it's an even worse idea.

[Edit] But to be clear, my initial thought wasn't about exception handling but rather about ubiquitous use of RAII for all sorts of stuff that needs to get done at the end of a scope. Much of that has nothing to do with errors at all. Maybe if you think of these situations it will become clearer why I think that arbitrary limitations on code that runs in a destructor are problematic. C++ has no finally block. Destructors is all we have.


I don't agree with much in this post, but in particular "Destructors is all we have": destructors + lambdas + templates allow you to write ScopeGuard, which is a pure superset of finally blocks. Modern C++ has zero need for finally.
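A minimal ScopeGuard sketch, roughly what the comment above describes (this is one common formulation, not a reference implementation):

```cpp
#include <utility>

// The lambda runs when the guard is destroyed, which covers both normal
// scope exit and stack unwinding -- a superset of a finally block.
template <typename F>
class ScopeGuard {
public:
    explicit ScopeGuard(F f) : f_(std::move(f)), active_(true) {}
    ScopeGuard(ScopeGuard&& other)
        : f_(std::move(other.f_)), active_(other.active_) { other.active_ = false; }
    ~ScopeGuard() { if (active_) f_(); }
    void dismiss() { active_ = false; }  // cancel the cleanup, e.g. on commit
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    F f_;
    bool active_;
};

template <typename F>
ScopeGuard<F> makeGuard(F f) { return ScopeGuard<F>(std::move(f)); }
```

The `dismiss()` hook is what finally blocks can't express: cleanup that only runs on the error path.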


ScopeGuard relies on destructors, so it inherits the no exceptions in destructors issue.


Throwing from a destructor in ScopeGuard is equivalent to calling a function that can throw in a catch or finally block, which you can do in most (any?) languages with exceptions. This no-exceptions-in-destructors "issue" is not a C++ issue. It's a fundamental issue in error propagation. What do you do when correctly propagating an error causes a new error? You can't propagate the first error, because you can't do it correctly. You can't propagate the second error, because that would mean dropping the first.

Classic example is logging an error on failure. This means calling a logging function in the catch block, and then letting the exception propagate. But what if the call to the logging function fails? In Java, coded naively you'd simply drop the original exception. Usually that's not what you want.

You can examine the issue with error codes, it's not any better.
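One way to avoid the naive-Java failure mode in C++ is to guard the logging call itself; this sketch uses a made-up `logError` function and a flag to simulate a broken logger:

```cpp
#include <stdexcept>
#include <string>

bool g_logBroken = false;

// Stand-in for a logger that can itself fail (e.g. disk full).
void logError(const std::string&) {
    if (g_logBroken) throw std::runtime_error("log failed");
}

// Stand-in for the operation that fails in the first place.
void doWork() { throw std::runtime_error("original failure"); }

void processWithLogging() {
    try {
        doWork();
    } catch (const std::exception& e) {
        // Guard the logging call so a failure in the logger can't
        // replace the original exception on its way up.
        try { logError(e.what()); } catch (...) { /* swallow logging failure */ }
        throw;  // re-throw the original error
    }
}
```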


> But to be clear, my initial thought wasn't about exception handling but rather about ubiquitous use of RAII for all sorts of stuff that needs to get done at the end of a scope.

I agree with you on that. It turns out that, say, having a lock guard automatically release a lock when it falls out of scope can cause a ton of trouble when it falls out of scope in a place the programmer overlooked.
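A sketch of that pitfall: the guard's lifetime is the inner brace, so the two reads below are not one atomic section even if the code looks "locked all the way through" at a glance:

```cpp
#include <mutex>

std::mutex m;
int shared_value = 0;

int readTwice() {
    int first;
    {
        std::lock_guard<std::mutex> lk(m);
        first = shared_value;
    }  // <- lock silently released here, perhaps earlier than intended
    // Another thread may modify shared_value at this point.
    std::lock_guard<std::mutex> lk(m);
    return shared_value - first;  // may be nonzero under contention
}
```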


Getting a stack trace on Linux or Windows is pretty straightforward. It's too bad it isn't standardized, but the large projects I work on all have stack traces.


My experience has been no RTTI and no exceptions. Quite a few compilers generate RTTI if you use exceptions.

Generally I haven't found any error handling solution where exceptions provided superior semantics. Their lack of compile-time checking means you can catch more bugs with enum error codes + switch and -Wall.
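The enum + switch + -Wall combination looks roughly like this (illustrative names):

```cpp
enum class Error { Ok, Timeout, BadInput };

// With -Wall (specifically -Wswitch), a switch over an enum with no
// `default:` warns at compile time if a new enumerator is added but not
// handled here -- a static check exceptions can't give you.
const char* describe(Error e) {
    switch (e) {
        case Error::Ok:       return "ok";
        case Error::Timeout:  return "timeout";
        case Error::BadInput: return "bad input";
    }
    return "unknown";  // unreachable while all cases are covered
}
```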


I've seen very little RTTI, but I've seen it.

There are a large number of programmers who use C++ "because it's fast," and they've heard RTTI and exceptions (and virtual function calls, ...) are slow, so they avoid those features.

But the common implementations for RTTI and exceptions have improved. They're much faster than they used to be. The performance problem isn't that they're slow, but that their performance is unpredictable. Not terribly unpredictable, but unpredictable enough to rule them out in real time code.

But for most everyone else, they're plenty fast. Or, put another way, if RTTI and exceptions make your project noticeably slower, you're doing too many dynamic_casts and/or throwing too many exceptions.

At this point the question should be "does this lead to a good design?" and not "does my gut think it's fast enough?"


> I've never seen RTTI use in production code.

I have, but it's almost never the extremely limited built-in RTTI.

> Except for maybe some tiny embedded systems (like microcontrollers), the overhead of exceptions isn't that bad any more.

Big caveat here: Unthrown / rare exceptions aren't that bad.

I ran into a bug with the windows 8 touch APIs, where the pointer ID for a given finger would become invalid just before we received the "pointer up" notification if you pawed at the screen in just the right way. This manifested as an exception being thrown when querying the pointer position - which, of course, wasn't caught, meaning pawing at the screen just right crashed the program.

Catching the exception stopped it from crashing, at least. But the framerate would tank if you pawed at the screen, from exception handling overhead alone.

I've come to frown on exceptions in C++, for production gamedev. Everything else is using error codes anyways, why add a second error handling style into the mix? Your callbacks may be invoked from third party C, which you can't safely throw across, or C++ built with exceptions disabled, which you also can't safely throw across. Heck, just building with exceptions enabled can be difficult - I could not link against some of the C++ PS4 libraries without matching many of their build settings - including having exception handling disabled.

...I resorted to alternative options for handling failed unit tests.


Game engine middleware like Unreal, Qt, MFC, VCL.

The difference being that they created their own version.


Large difference there is that in those systems you opt in and only pay for it in the classes you want (which is more in line with C++).

In the case of Unreal we used it to extend the RTTI system to more things than just isA(): member properties (editor hints in windows), serialization (network & disk), and stuff you'd use C# attributes for.


I guess C++ would have been better served by having static RTTI, but if it comes it will most likely only be in C++20, although type traits and template meta-programming already get us halfway there.

However, when I look at such examples, I am quite happy to spend most of my time in JVM/CLR lands.

It is not the meta-programming as such, rather the decltype, trailing return types and SFINAE tricks that put me off.


> exceptions are features of C++ that I never see used in real-world C++

Well, I use exceptions in exactly one situation: my request handlers on the server are all one-shot, transactional, and throwing anywhere unwinds to the top of the stack, aborts the transaction, and results in an error message being returned to the caller.

For that specific use case, I find exceptions cleaner than any other approach I've tried. Otherwise, I avoid them as well.
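A rough sketch of that one-shot, transactional pattern (the Transaction type and handler names are illustrative, not from the commenter's actual server):

```cpp
#include <stdexcept>
#include <string>

// RAII does the abort: the transaction rolls back unless committed.
struct Transaction {
    bool committed = false;
    void commit() { committed = true; }
    ~Transaction() { if (!committed) { /* roll back here */ } }
};

std::string handleRequest(bool fail) {
    Transaction tx;
    try {
        if (fail) throw std::runtime_error("validation failed");
        tx.commit();
        return "200 OK";
    } catch (const std::exception& e) {
        // Unwinding to the top of the handler aborted the transaction;
        // turn the exception into an error message for the caller.
        return std::string("500 ") + e.what();
    }
}
```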


Do you work for a Foreign Exchange company? We used it exactly for the same scenario at a previous employer.


I do not, but thanks for the info I'm not the only one!


> RTTI and exceptions are features of C++ that I never see used in real-world C++

Regarding exceptions, in my experience it depends a lot on the cultural background of the developers: Unix developers tend to use them, Windows developers don't.


In my experience not using exceptions in C++ is more a generational thing than anything. Older folks don't use them because they don't use RAII systematically either.


Hang on, from all these comments we are saying that nobody ever does dynamic_cast ???


dynamic_cast doesn't work across .so files loaded with dlopen:

https://stackoverflow.com/questions/23383102/dynamic-cast-tr...

I've seen it first hand crap itself on Mac OS X. The gist of it is that if you have a plugin architecture, you can't pass an object allocated in one plugin to another and have the latter do a dynamic_cast. For the same reason exceptions thrown in a plugin can't be caught in another unless the type is defined in the host binary. I.e. suppose the main application has:

    class HostException {
    public:
        virtual ~HostException() {}
    };

    class FooInterface {
    public:
        virtual ~FooInterface() {}
        virtual void doBar() = 0;
    };
Then inside plugin A you do

    try {
        foo->doBar();
    } catch (HostException& ex) {
        // stuff
    }
where foo was allocated in plugin B. Then,

    // in plugin B
    class FooImpl : public FooInterface {
    public:
        virtual void doBar() override;
    };
    class MyException : public HostException {
    public:
        virtual ~MyException() {}
    };
    void FooImpl::doBar() {
        throw HostException(); // should be just fine
        throw MyException(); // here be dragons
    }
For that reason in places where we call foreign code that we know could throw we just do a blanket catch(...) and bail out on any exception. Similarly we don't rely on dynamic casts at all anywhere, we use our own type system.

The OOP features of C++ are fine for small toy programs but ironically they break when you try to use them for actual large ones that would benefit the most from them. Stick to virtual functions, static and reinterpret casts and nothing else.


It has worked on Windows since the 16-bit days, actually.

You just need to __declspec(dllexport) the classes, with the caveat that, thanks to the lack of a standard ABI, you need the same compiler on both sides.


Yes, it does work on Windows. Not anywhere else.


Which makes it a compiler issue, not a language one.

Then again, most other platforms never were too C++ friendly and rather lean on C.

BeOS, Symbian, Genode and Windows are probably the only OSes that give C++ some love.


Microsoft's COM probably exists because of this.


The Google C++ style guide [1] bans exceptions entirely and strongly advises against RTTI.

[1] https://google.github.io/styleguide/cppguide.html


This link is constantly misused. Their reason is having to support legacy code that can't handle exceptions. That's it. From that link:

"However, for existing code, the introduction of exceptions has implications on all dependent code. If exceptions can be propagated beyond a new project, it also becomes problematic to integrate the new project into existing exception-free code. Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions."

Would they use exceptions if they could do it again?


LLVM [1] has very similar restrictions for different reasons.

[1] http://llvm.org/docs/CodingStandards.html#do-not-use-rtti-or...


No. The reason they don't use exceptions is that they lead to unreliable code that is difficult to maintain. There are other ways to indicate error situations, such as returning an error result that the caller must handle.


From the Google style guide (section on exceptions):

"On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code, the introduction of exceptions has implications on all dependent code. If exceptions can be propagated beyond a new project, it also becomes problematic to integrate the new project into existing exception-free code. Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.

"Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project. The conversion process would be slow and error-prone. We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden.

"Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. ... Things would probably be different if we had to do it all over again from scratch."


In my mind "real C++" doesn't use RTTI much but relies more on the compiler to figure out things. This and exceptions are things I generally try to avoid as much as possible. I am big fan of templates though.


Yes, there is a Zen state of C++ programming wherein you, the programmer, surrender to the wisdom of the compiler, and in doing so are liberated from caring about the minutiae of the type system, to be carried forth, skyward, on the twin winds of inference and overload resolution. I do enjoy that. RTTI is nothing like this – it’s a red herring, neither elegant nor powerful… at best it’s a misnomer (it’s more like RTT total lack of I, amirite) and at worst a distraction.

I dislike exceptions too, but in a difference-of-opinion way where I can be like “OK, that otherwise legible and reasonable code uses exceptions, unlike mine” and it doesn’t make me grimace like I involuntarily do for RTTI… I’d love to be mistaken tho, anyone with an inspiring counterexample, do share.


I think that attitude deserves some mention in the section discussing why C++ doesn't get used more often. If you listen to the faction who denigrates anything that looks like "C with classes" then C++ is fundamentally incompatible with bare-metal programming.


Mostly because the "C with classes" crowd tend to be C developers dragged into using a C++ compiler.

Personally I like to turn everything on, but I also understand there are use-cases where it isn't possible.

My first C++ compiler was Turbo C++ 1.0 for MS-DOS, so I also used this kind of bare metal programming back in the day.

What I don't like is "C with classes" which throws away all the safety mechanisms that C++ offers over C, to the point the code could be compiled with a C compiler.

However, the advice presented in the "Practical Guide to Bare Metal C++" looks quite good.


There’s something to be said for only using those features of C++ that you really need, if not because the platform demands it, then at least because it goes easier on the language-lawyering part of the brain.

I for one am perfectly happy with “C with destructors and templates”. The usual criticism of “C with classes” is levelled at a specific kind of code that’s neither particularly good C++ nor particularly good C. But not all “C with classes” is necessarily that way.


"If you listen to..."

No-one should need to listen to a bunch of fanatics. Different problems have different constraints. Some problems are easier to solve using a different formalism than others. People try to sound like Herb Sutter or Stroustrup when peddling their "best practice" advice after looking at a few lines of code and deciding it's not to their liking. Sutter and Stroustrup usually have a grasp of the higher-level context; random best-practice commissars on the web seldom do.

Since C++ contains so many footguns the charitable explanation is that they grasp to the only way of coding they can hold in their head.

Damn the establishment! C++ offers a refreshing multiplicity of implementation styles: ML style for data transforms, lispish dynamic lists for parsers, vanilla object style for some problem domains, relational style with structs of indices for others. Note: solutions in these styles can be wrapped in a nice cplusplushy interface, but starting to solve the problem from the point of view of designing the class hierarchy, rather than looking at the data and transforms required, often leads to a technically inferior design (by which I mean: my designs became much better after I understood to focus on the latter and not the former). One needs to be strict, yes, but the strictness needs to be at the closer-to-core-CS-concepts level and not at the class-design level.

Just wrote a parser in lisp style C++ and loved it (and yes, there were enough constraints to limit C++ as the only language).


You seem to be talking about usual C++ programming, not bare metal case. Bare metal means no OS, no runtime, no nothing.


Runtime is always there, even if it is a thin library to talk to IO pins, or whatever board you happen to plug in.


> thin library to talk to IO pins

That might be a bit of a bad example, because IO pin control usually compiles to just one or at most a few machine instructions. IO pin control is often a macro or intrinsic, not really a library.

Sometimes there's no runtime, other than what you implement. That's true bare metal.


Or memory mapped IO. Again no library just direct memory manipulation.


Good design means that such direct memory manipulation should be encapsulated in a library and not scattered around.

Besides not all processors offer MMIO.
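A minimal sketch of what such encapsulation can look like: a small type wrapping a memory-mapped register instead of raw pointer casts scattered through the code. On real hardware the address would come from the chip's datasheet; here it's a constructor parameter so the code stays testable:

```cpp
#include <cstdint>

class Reg32 {
public:
    explicit Reg32(uintptr_t addr)
        : reg_(reinterpret_cast<volatile uint32_t*>(addr)) {}
    void write(uint32_t v)        { *reg_ = v; }
    uint32_t read() const         { return *reg_; }
    void setBits(uint32_t mask)   { *reg_ |= mask; }
    void clearBits(uint32_t mask) { *reg_ &= ~mask; }
private:
    volatile uint32_t* reg_;  // volatile: the hardware can change it
};
```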


>> Good design ...

Yes well, um, no. That only happens in a select few embedded bare-metal projects, usually the ones with fairly beefy processors or large code bases. Once you contemplate putting in a library layer for your project, you might as well put down a full RTOS or Linux, taking us out of the focus of this discussion, which is bare-metal C++ programming.

>> Besides not all processors offer MMIO...

I don't understand what point you're trying to make. The vast majority of embedded processors these days offer memory-mapped IO, at least in the ARM family. And if not all processors offer MMIO, so what? What are you trying to say?


> Yes well um no, that only happens in a select few embedded bare metal projects.....

I see, they always write everything from scratch.

> And if not all processors offer MMIO, so what

Then you need to wrap the assembly instructions out* / in* into C functions or sprinkle the code with inline assembly, since libraries aren't used.


>> I see, they always write everything from scratch.

Embedded developers? Yup, pretty much... :| Most projects just copy and paste the OEM's sample code or a previous project's driver code and work from there.

>> Then you need to wrap the Assembly instructions out* / in* into C functions or sprinkle all the code with inline assembly since libraries aren't used.

I have never seen that being done in the last 10 years except for the startup sections on processors setting up the ISR / clocking / co-processors etc.


A lot of embedded code uses macros to deal with the register/configuration handling, rather than functions. I've seen loads of inline assembly in #defines.


Fair enough.


Very true, I'd wager that a significant portion (20-30%) of all of the existing C/C++ codebase fits this description.


Honest question: At some point, why even bother with C++? The streams library is a mess - don't use it. Exceptions are too slow - don't use 'em. Can't have exceptions, can't have RAII, MI is bad in 99% of cases - avoid. At some point, you're left with "C with classes", maybe with some templates sprinkled here and there, and nice map/list/string implementations. Ok fine. So is this still C++?

And if it is, and if it's exactly what you need, and C is still not good enough, maybe there's a use case for either a new language, call it --C++, or some kind of "C++ without the bad stuff" accepted standard to drive the entire ecosystem forward with well-defined features, boundaries, libraries, tools, all that good stuff?


I don't feel like I have to use every single feature of a language. I prefer C++ over C for: namespaces, overloads, encapsulation and user defined types. E.g. I'd rather write simple expressions for vector math than a bunch of function calls. I'd rather have the compiler enforce the type invariants than do runtime checks everywhere. I'd rather have my ids without prefixes to evade name conflicts etc. etc.
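For instance, a vector math expression like `a + b * 2.0f` reads like the math it encodes, instead of nested function calls (a tiny illustrative sketch):

```cpp
struct Vec3 {
    float x, y, z;
};
// Overloads let vector math read as expressions rather than call chains.
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
```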

As for your proposal - why? I can already disable exceptions and RTTI with compiler switches. And is it really a sane idea to spin off a new language if you don't like some library?


The more I mess around with Rust, the more it seems to fit the bill as the new --C++.


He's talking about real time code here and completely missing one of the main reasons why you have to use great care using C++, and even parts of the C libraries ... all those hidden mutexes in new/malloc and other parts of the standard libraries (stdio too)

I'm sure we all know what happens when your ISR quietly does a new while some other part of your code is holding one of the locks deep in malloc.

But far more important is avoiding priority inversions when a low priority thread is holding a lock in a library somewhere, they result in high priority threads missing real-time deadlines - the sort of heisenbugs that are pretty impossible to find .... and are best to avoid by design.


A part of me died when I read "your ISR quietly does a new". Don't even think about doing a new, or anything remotely like a new, in an ISR. Don't make me come over there and stage an intervention.


> I'm sure we all know what happens when your ISR quietly does a new while some other part of your code is holding one of the locks deep in malloc.

Yeah, what happens is that such code does not pass code review :)

In 99.99% of cases an ISR should only set some flag, that is then checked by code, running in user mode (or whatever it is called for your platform). Then the real work is done in user mode. The remaining 0.01% of cases does not involve using dynamic memory either.
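The flag-setting pattern can be sketched like this. On a real microcontroller the flag is often a plain `volatile` bool and the ISR name comes from the vector table; `std::atomic` expresses the same intent portably for a hosted build:

```cpp
#include <atomic>

std::atomic<bool> g_dataReady{false};

// Would be registered in the interrupt vector table on real hardware.
void uartRxIsr() {
    g_dataReady.store(true, std::memory_order_release);  // just set the flag
}

// Called from the main loop, where allocation/logging/etc. is safe.
bool pollAndHandle() {
    if (g_dataReady.exchange(false, std::memory_order_acquire)) {
        // ... do the real work here ...
        return true;
    }
    return false;
}
```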


Well, if you're programming bare metal you shouldn't be using any library's allocation routines but your own. Because, remember the cardinal rule: no mutices in ISRs!


No memory allocation in ISRs unless it's wait free, really fast and impossible to fail.

Definitely no exceptions in ISRs. What would an exception really even mean in an ISR context? The "call" is done by the hardware, so there's no stack to unwind. Uncaught exception would practically mean system crash.

Stack usage should be low, because ISR itself might get interrupted by a higher priority IRQ.

ISR (interrupt service routine) needs to return as fast as possible, usually in a few microseconds.

On x86, no FPU or SIMD in ISR either, unless you save and restore the registers. You need to cover all SIMD extensions, current and the future ones. If you use SSE but don't save AVX/AVX-512 registers, expect very weird bug reports. Better to simply avoid.


I was kind of pointing out that people know that mutexes in ISRs are a bad idea, but don't know as much about the other dragons that live in the same swamp ... they may also simply not be aware of what's inside their libraries (which ones use mutexes? 'man printf' doesn't warn me)

my point is that there are hidden mutexes in libraries that people often don't know about - his example of replacing new() with malloc() doesn't solve the problem, instead you need to be able to code your C++ without new, or at least the real-time bits of it (and that includes using stuff like strings and their libraries that do new() behind your back, or even printf which can run foul of the stdio hidden mutexes)

It's not just ISRs, priority inversion is a real problem that results in some of the hardest bugs to find (I once had to chase one that happened only once a month, resulting in satellite systems losing their provisioning)


>I was kind of pointing out that people know that mutexes in ISRs are a bad idea, but don't know as much about the other dragons that live in the same swamp

Honestly, these "people" seem like mythical creatures to me; I've certainly never met one. If you're qualified to write an ISR in C++, you're qualified to understand which parts of C++ are valid to use in an ISR and which aren't.

This is a non-problem that does not appear in practice, even though it could in theory.


Somehow I can imagine that not being the case with offshored projects.


Even avoiding mutices and using lock-free algorithms doesn't guarantee that your thread won't starve. As another comment mentioned, fast wait-free algorithms are what you need in this context.


> The usage of single throw statement in the source code will result in more than 120KB of extra binary code in the final binary image. Just try it yourself with your compiler and see the difference in size of the produced binary images.

What?? 120KB? I have to try that out right away.

Edit: That turned out to be completely wrong. Simple program without throw: 71'218 bytes. With throw (one additional line that throws a plain integer): 71'814 bytes. GCC 4.8.1 on Windows. Would have been surprised if that was true.

Edit2: Okay, my bad, I should have linked statically. The statement sounded so absolute, but of course it is conditioned on the bare metal environment which is the subject of this article. Thanks for pointing out where I went wrong.


Using mingw-w64 (GCC 6.1.0):

  g++ -Os -static -s on `int main() { }' 17.408 bytes
  g++ -Os -static -s on `int main() { throw 42; }' 139.776 bytes


Which libc? You don't necessarily get as much bloat with embedded compilers and libraries.


Now what if you use two exceptions? If there's no significant increase, then the whole point is moot.


Not really moot. I do C++ on embedded processors with as little as 32kB of program space. Using a single exception means your code will not fit. My general rule of thumb to use full-fledged C++ is a processor with at least 256kB of program space.

Here's a comment of mine from a previous discussion: https://news.ycombinator.com/item?id=11706840


Not in the bare-metal environment. You may not have the memory to use one. In such a case, the increase to use two is irrelevant.


That's interesting. Have you linked dynamically? Maybe the author is talking about a statically linked binary, since he uses terms like "bare metal" and "final binary". Even if that takes an extra 120k, maybe additional throw statements won't be so significant.


The exception handling code in the GCC library is a lot more than 600 bytes. But if you were linking with the shared library, it won't be reflected in the size of the executable. Linking with the static library may show a quite different story.


Even when linked statically, something which prints "Hello World" and then throws an integer is only 10,332 bytes larger than my plain "Hello World" program.

Building the no exceptions version with -fno-exceptions doesn't change the size of the exception-free version either.

(g++ 4.8.5 with -Os)


All functions get compiled with exception handling unwind data, so you're going to get that pulled in whether the executable throws or not.

In order to not get that, you'll likely have to rebuild the entire runtime library with EH turned off.


If you go down to micro-controller levels, you often times are stuck with C++ 2003 and a vendor specific compiler, which means you will lack many of the niceties in the article.

I agree with him that removing the entire standard library is needed. Of course you then need to copy an implementation of printf from somewhere, and likely set it up to only work in your debug builds. Then you quickly figure out that the standard library has a lot of things you didn't even realize you depended on. Math functions typically pop up next, you'll likely end up using a vendor library (by which I mean ARM's), but if you are doing a bunch of math heavy work, and even more so if your chip has some limited FP capabilities (assuming it has an FPU at all), you may also find the need to re-implement some functions based on your performance needs.
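The debug-build-only printf mentioned above is often gated with a macro along these lines (a sketch; the macro name is made up):

```cpp
#include <cstdio>

// In release builds (NDEBUG defined) the call compiles away entirely,
// so the printf implementation isn't even linked in.
#ifndef NDEBUG
  #define DEBUG_LOG(...) std::printf(__VA_ARGS__)
#else
  #define DEBUG_LOG(...) ((void)0)
#endif
```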

Embedded is fun. :)

All that said, I would kill for access to proper lambdas.


> If you go down to micro-controller levels, you often times are stuck with C++ 2003

This makes me feel so old. Back when I started, C on a micro-controller was a rare treat and most things were done in some obscure chip-specific assembler.

I had a play with an Arduino knockoff a few weeks ago, kids these days don't know how good they've got it!


More like a partially supported C++98 and a compiler with a bunch of bugs that force you to use only half of that.


Risking being downvoted: I doubt that C++ is the right language for bare-metal development at all, with constraints like exceptions and RTTI. Rust seems to be the modern and safer version of C that is more appropriate for bare-metal development.


For that Rust still needs to win the hearts of all those manufacturers selling hardware development kits.

Most of us just get to use the toys already in the package and aren't allowed to bring new ones to the party.


Architecture support is lacking so far compared to gcc/g++. ARM counts for a lot, but it's still far from 100% of the embedded world.


Rustwinners, they are coming!

More seriously, the problems (exceptions and RTTI) that you are mentioning are actually addressed in this tutorial. One can disable them and still enjoy a great part of C++. Rust might be better on some other features though.


The Arduino development environment is really GCC C++ with a platform-specific library. All the compile-time C++ features work. Some features that need run-time support may not link.


Template oriented programming is a tremendously useful technique for working with overhead-less abstractions for embedded software, but C++ makes it so hard. Most large users of template oriented development (automotive) use external code generators for what should ideally be supported by the language itself.



