
Practical Guide to Bare Metal C++ - Tomte
https://www.gitbook.com/book/arobenko/bare_metal_cpp/details
======
dang
[https://news.ycombinator.com/item?id=12138374](https://news.ycombinator.com/item?id=12138374)

------
monocasa
I traditionally haven't been the biggest fan of this book. Most of what they
show reads like using C++ for its own sake, with no real benefit over a C
codebase.

For instance they don't appear to use RAII at all, even in places where it's
an obvious win (InterruptLock).
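For readers unfamiliar with the pattern: a minimal sketch of what an RAII InterruptLock looks like. The `disable_irqs`/`enable_irqs` names are invented stand-ins for the platform intrinsics (e.g. `__disable_irq()`/`__enable_irq()` on Cortex-M), stubbed with a flag here so the sketch is self-contained:

```cpp
#include <cassert>

// Stand-ins for platform interrupt intrinsics (names invented for the sketch).
static bool g_irq_enabled = true;
static void disable_irqs() { g_irq_enabled = false; }
static void enable_irqs()  { g_irq_enabled = true; }

class InterruptLock {
public:
    InterruptLock()  { disable_irqs(); }  // acquire in the ctor
    ~InterruptLock() { enable_irqs(); }   // release in the dtor, even on early return
    InterruptLock(const InterruptLock&) = delete;
    InterruptLock& operator=(const InterruptLock&) = delete;
};

int shared_counter = 0;

void bump_counter() {
    InterruptLock lock;   // interrupts off for the rest of this scope
    ++shared_counter;     // critical section
}                         // interrupts back on here, however we leave
```

The win over the C equivalent is that you cannot forget to re-enable interrupts on any of the function's exit paths.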

~~~
vowelless
I tried searching the book for RAII and there are no hits...

Do they really not cover it, or is the search busted?

~~~
mbel
I don't think RAII is a concept complex enough to write a book about it...
Probably a blog post or Wikipedia page is enough to explain it fully.

~~~
monocasa
It's worth a chapter in this book IMO.

As the maintainer of a C++14 proprietary RTOS, it's probably the most
convincing argument to the C guys as to why C++ can benefit embedded
codebases. Like not being able to forget to re-drop thread priority when
you've temporarily increased it.

Or one of my favorite patterns is returning the object representing the
temporary higher priority. If the caller doesn't do anything with the return
value, the destructor is called and thread priority reverts to what it was.
However, it gives you a chance for the caller to hold on to that high priority
object, and get more work done essentially atomically.
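That pattern could be sketched roughly like this; the RTOS calls (`get_priority`/`set_priority`) and the `PriorityBoost` name are invented for illustration, with the priority API stubbed so the sketch is self-contained:

```cpp
#include <cassert>

// Invented stand-ins for the RTOS priority calls.
static int g_priority = 1;
static int get_priority() { return g_priority; }
static void set_priority(int p) { g_priority = p; }

class [[nodiscard]] PriorityBoost {
    int saved_;
    bool active_ = true;
public:
    explicit PriorityBoost(int boosted) : saved_(get_priority()) {
        set_priority(boosted);
    }
    PriorityBoost(PriorityBoost&& o) noexcept : saved_(o.saved_) {
        o.active_ = false;  // ownership of the revert moves with the object
    }
    PriorityBoost(const PriorityBoost&) = delete;
    ~PriorityBoost() { if (active_) set_priority(saved_); }  // auto re-drop
};

PriorityBoost do_urgent_work() {
    PriorityBoost boost(10);
    // ... urgent work at priority 10 ...
    return boost;  // caller may keep the boost alive
}

void caller() {
    {
        auto boost = do_urgent_work();  // still boosted: extend the atomic region
        assert(get_priority() == 10);
    }                                   // boost destroyed, priority reverts
    assert(get_priority() == 1);
}
```

If the caller ignores the return value, the temporary is destroyed at the end of the full expression and priority reverts immediately, which is exactly the default behavior described above.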

~~~
xyzzy_plugh
But the compiler doesn't optimize out the constructor, right? It can't.

In an RTOS of all places, that seems awfully expensive. One generally avoids
wasted resources in embedded -- the trade-off is it requires more care during
development.

I've never seen RAII work well in an embedded system. Not being able to handle
exceptions or errors during destruction ruins everything.

~~~
fdsfsafasfdas
Disabling exceptions solves this. There is no need for exceptions in the
kernel, and their cost is high.

~~~
concede_pluto
Isn't table-driven stack unwinding cheaper than having a bunch of functions
each check an error code and return to their caller?

~~~
monocasa
The practicalities of bare metal programming mean that a lot of the time an
individual work item has to jump contexts (i.e. stacks and register sets) in a
way that doesn't map cleanly to C++-style exceptions.

------
ensiferum
Without exceptions most of the STL containers become unusable, unless aborting
is an acceptable option wherever bad_alloc would otherwise get thrown.

Also, without exceptions you really need a two-phase initialization pattern
for your class types, i.e. a ctor that cannot (and will not) throw, and then an
"error_code Init(...)" method that can return an error code.

~~~
ryandrake
Yea, I've always wondered, for shops that try to make that "C++ but without
exceptions" policy work, how do you handle things like vector::push_back() or
map::operator[] failing? Just throw up your hands and crash?

~~~
astrobe_
I didn't read the book, but if you're "bare metal" you want to avoid dynamic
allocations in the first place. Secondly, if you really run out of memory then
you've screwed up something really badly, and yes, in these situations your
best option is to let the watchdog eat your poor carcass.

~~~
Gibbon1
What you really want to do in these cases is log debugging info to a noinit
section of memory or nonvolatile memory and then reset the processor. Then the
firmware can check for and report any logged errors on startup. If you do this
religiously you might make it to fifty with most of your hair.
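A rough sketch of the idea, assuming a GCC-style toolchain: on a real MCU the linker script places `.noinit` in RAM that the startup code skips when zeroing `.bss`, so the record survives a reset. The names, magic value, and record layout here are invented for illustration:

```cpp
#include <cstdint>

// On a hosted build the section attribute is harmless; on target it keeps
// the record out of the zero-initialized .bss.
#if defined(__GNUC__)
#define NOINIT __attribute__((section(".noinit")))
#else
#define NOINIT
#endif

struct CrashLog {
    uint32_t magic;       // valid-record marker, survives reset
    uint32_t error_code;  // whatever the fault handler wants to report
    uint32_t pc;          // e.g. faulting program counter
};

NOINIT CrashLog g_crash_log;

constexpr uint32_t kMagic = 0xDEADC0DE;

// Called from a fault handler just before forcing a reset.
void record_crash(uint32_t code, uint32_t pc) {
    g_crash_log = CrashLog{kMagic, code, pc};
    // ... then kick the watchdog / NVIC_SystemReset() on real hardware ...
}

// Called early at startup: report and clear any logged crash.
bool consume_crash(CrashLog* out) {
    if (g_crash_log.magic != kMagic) return false;
    *out = g_crash_log;
    g_crash_log.magic = 0;   // mark consumed so it's reported once
    return true;
}
```

The magic value guards against reporting garbage after a cold power-on, when the noinit region holds random bits.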

~~~
monocasa
Also, for those programmers who have some input on the EE's designs, a
stuffing option for MRAM is amazing. It's essentially equivalent to battery-
backed SRAM, but in one chip. We stick logs and hardware exception stack
traces/register dumps in there on dev units.

------
victor106
RUST

------
microcolonel
This is a fine "long answer" to the question _how do I write C++ on bare
metal?_. The short answer is _don't_; the medium answer is _if you don't know
why every major operating system at every size is written in C, then you have
a lot of learning to do_.

~~~
dragontamer
Windows is well known to have portions of the kernel written in C++, and
Mac OS X has huge portions written in Objective-C. The greater part of Android
is Java-based.

Only Linux is really purely C-based, since Linus really hates C++ with a
passion. But there's a reason why even the GCC project has moved over to C++.

[http://lordjeb.com/2013/09/19/modern-c-in-the-windows-
kernel...](http://lordjeb.com/2013/09/19/modern-c-in-the-windows-kernel/)

~~~
microcolonel
"Only linux" And every BSD, Magenta (a brand new kernel), ThreadX, FreeRTOS,
QNX, Integrity, HP-UX, SeL4, Illumos, TI-RTOS, RTEMS...

XNU does not appear to have any Objective-C in the kernel, and it has no C++
in the scheduler, syscall handler, or any other critical section. I doubt
Windows has C++ in any such place either.

Inside the XNU tarball (the actual kernel inside OS X) there are 1,078 C
source code files containing 652,124 source (not comment, not whitespace)
lines. There are exactly 10 Objective-C files containing a total of 1787 code
lines, and it's not certain they are even built as part of the kernel binary.

To be fair, there is some amount of C++ in XNU these days. XNU contains 97 C++
files totalling 68,673 code lines, it seems mostly in iokit[50kSLOC] and
libkern[17kSLOC] (infrastructure to support the iokit C++ code, maybe).

~~~
dragontamer
> I doubt Windows has C++ in any such place either.

[https://msdn.microsoft.com/en-
us/library/jj620896.aspx](https://msdn.microsoft.com/en-
us/library/jj620896.aspx)

Hint: This exists for a reason.

Another note: Windows' kernel exposes an API for custom user-mode task-
scheduling (called UMS). Its API is accessible from C++:
[https://msdn.microsoft.com/en-
us/library/windows/desktop/dd6...](https://msdn.microsoft.com/en-
us/library/windows/desktop/dd627164\(v=vs.85\).aspx) . When you get to the
ConcRT stuff (which is built on top of UMS), it's C++ code for sure:
[https://msdn.microsoft.com/en-
us/library/ee207192.aspx](https://msdn.microsoft.com/en-
us/library/ee207192.aspx)

Nearly direct-access to the GPU is also done through a little C++ API called
DirectX.

[https://msdn.microsoft.com/en-
us/library/windows/desktop/dn7...](https://msdn.microsoft.com/en-
us/library/windows/desktop/dn788656\(v=vs.85\).aspx)

DirectX is very C++, using namespaces and such. It's a bit wonky because
DirectX itself is older than C++98, but it's clearly a C++-based API. It's
very efficient and performant, too.

~~~
microcolonel
> Nearly direct-access to the GPU

Not sure what you consider "near", but I certainly wouldn't consider even
Direct3D 12 to be anywhere "near" direct access to the GPU. It doesn't expose
the shader ISA, it doesn't expose the command stream format, it doesn't expose
the register sets.

The user mode scheduler is not the system scheduler, thus my statement holds.

~~~
dragontamer
> Not sure what you consider "near", but I certainly wouldn't consider even
> Direct3D 12 to be anywhere "near" direct access to the GPU. It doesn't
> expose the shader ISA, it doesn't expose the command stream format, it
> doesn't expose the register sets.

Of course, none of those are important.

The important bits are exposed, i.e. the memory model of the GPU: CPU memory
space vs GPU memory space, such as Residency:
[https://msdn.microsoft.com/en-
us/library/windows/desktop/mt1...](https://msdn.microsoft.com/en-
us/library/windows/desktop/mt186622\(v=vs.85\).aspx).

Indeed, the ability for C++ to create abstractions that can track memory
spaces in DirectX is proof positive that C++ has useful features applicable to
the OS developer.

[https://msdn.microsoft.com/en-
us/library/windows/desktop/mt6...](https://msdn.microsoft.com/en-
us/library/windows/desktop/mt613239\(v=vs.85\).aspx)

-------

Note that both CUDA and OpenCL support C++ features as well. The RAII feature
of C++ alone is a huge benefit to anybody tracking what bits are where in OS
settings. Have you tried to use C++ in the last 10 years?

