On programming without malloc (jvns.ca)
183 points by jvns on Dec 4, 2013 | 104 comments

I write C code for embedded devices and haven't been able to use malloc in 6-ish years. Everything is either statically or stack allocated. It changes how you perceive memory usage and allocation.

For example, it used to bug me that I had to statically allocate memory to manage an event that was used for maybe 0.1% of the product's life. It seemed so inefficient. Yes, you could potentially co-opt the event and use it elsewhere, but then you had to deal with a bunch of other considerations: could they ever run at the same time, would the code be maintainable, and so on.

Or the other day I accidentally set a function-scoped buffer's size too large and it caused a stack overflow. That was a pain to debug, because the exception happened "before" the function started running (in the function call preamble). From the debugger, it looked like the return of the previous function caused the issue.

I don't understand a large part of the brouhaha. In a sense, static memory allocation feels much more natural than a bunch of mallocs in random places around the code - to me.

Maybe it's because quite many years ago I used to do some very narrow scope algorithms in C for some code that needed high reliability.

How could such code even have mallocs? You would then need to know that at no point would the memory requested exceed the memory available. And if you already know that, why not allocate the memory beforehand?

Take your example, a feature used for 0.1% of the product's life. Surely you couldn't tolerate the software hanging on a memory error 0.1% of the time? Hence if the product supports the feature, then it must always work, and thus memory must always be reserved for it. I don't understand how it could really work otherwise.

The other end of the spectrum, the "modern way" even in high performance tasks, with reckless java object creation and jvm memory shuffling almost makes me sick. Then everybody's tuning the garbage collectors to avoid constant full gc. It's heuristics. Oracle makes drastic changes to the GC defaults between minor releases.

While there perhaps aren't many direct memory bugs in such code, the code does become very unpredictable.

Then the fix is to go back to preallocated static pools and primitives and hand-rolled cache cleaning and whatnot, and you've lost quite a lot of your Java style.

The avr-libc at least comes with a malloc (rather, heap) implementation. It's mostly a trap for new embedded programmers who haven't yet internalized that you shouldn't touch dynamic memory allocation with a stick on this kind of processor.

(Embedded programming is really quite nice to teach clean C, you learn to appreciate what the cost of library functionality is, that you can't just start using floats, that on some architectures updating a 32 bit integer is a non-atomic operation spanning 5 or more clock cycles..)

AVR-GCC is really quite a hack. It abuses types right and left (float? double? int? That's not how I knew you!). And the compiler thinks it works on a 16 bit machine, which often yields weird code.

Unlike Java for example, C is very lax about what the different types mean. In particular, there is no guarantee about the size of int/double except that each is at least as large as short/float respectively. The standard does require DBL_DIG in float.h to be at least 10, which avr-gcc's 32-bit double fails to satisfy, but compilers for small targets routinely bend that rule.

I was delighted to start programming in java way back when just for this reason. Writing truly portable C code really requires #defines for all the basic types, which is really ugly.

> I write C code for embedded devices and haven't been able to use malloc in 6-ish years.

Where do you work?

Anyone doing embedded code that is pretty close to the metal will have to avoid using malloc. I also write C code for embedded devices and also haven't used malloc for the past 15 years.

I work for a home/building automation company. Primarily I write code for Zigbee wireless devices.

A bit off-topic, but Zigbee came up in a discussion with a friend recently when talking about adding wifi functionality to PLCs. He mentioned that Zigbee devices are pretty much unusable in existing Wifi deployments because they run over the same bands/frequencies. Can you comment on this at all? Thanks!

It's not unusual. I did some work in civil avionics, and we were not permitted to use dynamic memory allocation due to concerns over heap fragmentation.

The simplest allocator ever is this:

    void *malloc(int size) {
        static long nextAddress = BASE_ADDRESS;
        void *ptr = (void *)nextAddress;
        nextAddress += size;
        return ptr;
    }

    void free(void *ptr) {
        /* intentionally a no-op */
    }
Using that will last you a long time in writing a toy operating system kernel. Of course, it will eventually run out of memory pages, but that shouldn't happen until your system has been running for a long time.

djb's qmail does almost exactly this! It works surprisingly well for short-lived forked processes.

  typedef union { char irrelevant[ALIGNMENT]; double d; } aligned;
  static aligned realspace[SPACE / ALIGNMENT];
  #define space ((char *) realspace)
  static unsigned int avail = SPACE; /* multiple of ALIGNMENT; 0<=avail<=SPACE */

  char *alloc(n)
  unsigned int n;
  {
    n = ALIGNMENT + n - (n & (ALIGNMENT - 1)); /* XXX: could overflow */
    if (n <= avail) {
      avail -= n;
      return space + avail;
    }
    return malloc(n);
  }

  void alloc_free(x)
  char *x;
  {
    if (x >= space && x < space + SPACE)
      return; /* XXX: assuming that pointers are flat */
    free(x);
  }
Cool, didn't know that! It's basically a degenerate pooled memory allocator. That one is a bit more clever in that it falls through to a proper malloc when it runs out of space. That's assuming, of course, that you have a proper malloc to fall through to.

Mine will eventually wrap around and clobber whatever's at 0x0. It assumes you have a very liberal page allocator in the background to handle page faults for whatever addresses you decide to point to and automatically wire them up to RAM.

You probably want to align the memory you return as well instead of having size be an arbitrary value.

You're right, it should word-align the addresses, but I didn't want to complicate things. This was just supposed to be a 'minimum viable allocator' to shove into a kernel temporarily then move onto other more interesting things.

What advantage does aligning it have, given that you are already satisfied with using an allocator that cannot free memory?

None on Intel chips, but on virtually any other chip you'd get SIGBUS by referencing e.g. an N-bit datum not aligned on an N/8-byte boundary.


There's a reason he said toy.

Isn't that effectively sbrk()?

sbrk() does not allocate memory of the size you want. Instead it allocates memory in chunks of the size of the system page. Thus, if you want to allocate a few bytes, you will get something like 4 KB instead, which is wasteful. Also, sbrk() is a syscall, so it's not available in OP's use case: he has not implemented it.

The author of the post is actually a "she," not a "he." I hate to be pedantic, but I think it's important to not assume authors of technical posts are men.

First, that is my bad. I read the contents of the post, not the title of the blog. Also, I do use "he" as "he or she", because of the influence of both the American culture and the Russian language in which the idea that the male gender is the default is very strong.

Second, no I should not assume. Women and tech have a complex enough relationship as is, and I fully support making our industry more inclusive

Third, you should not assume either. A name is not enough to determine someone's sex or gender. It is then safest to use "they" or even ze or zhe [1], though obviously very few people do. I also hate to be pedantic :).

[1] http://en.m.wikipedia.org/wiki/Gender-specific_and_gender-ne...

I actually did not assume based on her name. (There's nothing worse than a correction that is itself wrong.) I tried to find a biography of her in the third person, but could not find one. From her About page, I found out she was a member of Pyladies, which describes itself as an all-woman group. At that point, I felt I did my due diligence.

It really is an important point, not pedantry, to not assume based on names because many cultures have names that sound like the other gender to American ears (and vice versa, no doubt).

Very nice work. I will have to bookmark pyladies as a resource for friends, etc. Thanks for pointing all this out/bringing it up.

Or we could just assume that 'he' means 'he or she'.

It sometimes does, but that's not really relevant when you're talking about a specific person, whose name is on the top of the page in question.

You're better off using "they".

I've been seeing a lot of "singular they" lately, even though it goes against what I was taught many years ago in school. One question though: would you use "they is", or "they are", if you are referring to a singular person?

Languages develop. I was told never to use "due to" instead of "because". Told never to use "quote" instead of "quotation". And, I was told that "horrific" was not a word.

All of those things are no longer true, based on my casual reading of the New York Times.

(Note - I was also told to use "He" as the generic case for singular person. Outside of english 11/12, I've never used anything other than "they" - and have never had anyone comment on it. "They" as the singular person form is fine, and avoids the gender confusion.)

Indeed, singular "they" has been used in English for over four hundred years.

In german we have something like this too.

If you know someone rather well, you say

"Wie geht es dir" - "How are you?"

but if you don't know him, you say

"Wie geht es Ihnen?", which literally means "How are they?"

Also, back in the old days, you would talk to the nobility in plural.

"Wie geht es euch?" There is no equivalent in English, I guess, since "you" is both plural and singular: "How are you?"

"They are". Not a native speaker, but we were taught "they" as the proper way to refer to a single person of unknown gender when I studied for my CPE certificate.

Yes, it is. It's called the "singular 'they'", but some native English speakers were taught that it's incorrect. The controversy over it seems to be an American thing, that at some point in the 19th century someone wrote in a style guide that it's improper grammar, and the meme stuck.

But it's perfectly correct: English writers have been using the singular 'they' continuously for hundreds of years (since the 14th century?). It also has equivalents in other west European languages (e.g. the German neuter gender and the French ils for a group of people of indeterminate gender), so it's not an English invention either.


And it's totally fine: "they are" has been used in this way for hundreds of years.

Ah, I just found it on Wikipedia, in the article on singular they...

"Though semantically singular or ambiguous, singular they remains morphologically and syntactically plural (e.g. it still takes plural forms of verbs)."


> Instead it allocates memory in chunks of the size of the system page.

It can do that, it's not required to.

> Also, sbrk() is a syscall, so it's not available in OP's use case: he has not implemented it.

Yes, they have, they just called it malloc() :-). Which was really my (admittedly fairly terse) point.

Very nice post! It takes special character to be working on your own OS kernel project, even a very simple one. Using Rust makes this even more interesting, because you have to know the language quite well to be able to avoid the parts that require runtime support.

Programming without malloc is a good exercise and can also be valuable outside of kernel space. Computer programs written in a naive manner may end up spending most of their time doing memory allocations. This seldom happens in C programs because usually you're aware of every malloc you do. But I've had a few Python programs end up uselessly slow because of the time wasted in allocating small objects. I was doing some physics simulation prototyping with Python and the result was too slow to do work in (soft) realtime. (although you can avoid this, you'd be writing non-idiomatic Python code and that practically destroys any benefit of using it for prototyping).

@jvns: do you have the source code of your mini-kernel project shared anywhere? Any plans for the future?

https://github.com/rikusalminen/danjeros this is my hobby kernel project from a few years ago.

Right now it's a fork of rustboot [1], a proof-of-concept repo for implementing a kernel in Rust. I'm working on extending rustboot [2] to support handling keyboard interrupts and to have a real print function. It's not remotely in a working state yet, but I'll definitely update my blog when I get keyboard interrupts working.

[1]: https://github.com/charliesome/rustboot [2]: https://github.com/jvns/rustboot

Linus wanted to experiment with the task switching opcodes in the 386 processor. What became Linux started out as two tasks - one printing A and one printing B. If you got alternating runs then task switching (and the timer interrupt) worked.

From there it is a simple matter to add memory, protection, devices, filesystems, networking, graphics etc :-)

> you have to know the language quite well to be able to avoid the parts that require runtime support.

This isn't really true, it turns out :) I've only spent about a week using Rust. I spend a ton of time in #rust asking questions, and everyone's been really helpful. The maintainer of rust-core (the library that lets you not require runtime support) hangs out in #rust all the time and has been wonderful about answering all my questions about it.

> > you have to know the language quite well to be able to avoid the parts that require runtime support.

> This isn't really true, it turns out :) I've only spent about a week using Rust. I spend a ton of time in #rust asking questions, and everyone's been really helpful.

Rust is perhaps easier in this respect than pretty much any other high-level language out there. The fact that Rust can operate without a fully featured runtime system makes it one of the most interesting new languages in my opinion. The different "pointer types" in Rust make this possible; they're a bit confusing to start with, but they enable nice things.

It is nice to see someone exploring alternatives to memory management other than malloc, reference counting and full GC. It's also nice that Rust hasn't really decided on which way to go and has been doing some outstanding exploratory work in this field.

Oh yeah, and a good IRC channel with helpful people is a very valuable resource.

Another cool thing about writing Rust is that the language is really alpha, so you'll run into language bugs or rough spots if you use it seriously for any amount of time (like a week).

That's the spirit! Just last week I saw someone express their negative opinion about new languages because they aren't bug-free, battle-tested, and shipped with all sorts of tooling. As if programming languages are a fully solved problem and any work in that field is useless. Needless to say, I didn't agree.

It's really nice to see buzz around new languages, that's the only way they will ever turn into mature production ready tools.

I've found Rust to be great at avoiding allocations.

From a function, you can return a pre-allocated buffer shared by all callers, in a safe way that prevents interference between callers. The callers only get to keep it for a limited time (region, actually), but usually that's no issue.

In exchange, the callers are guaranteed that the returned buffer contents can't be modified by any other part of the program while it is borrowed.

Here's some code for multiplying some polynomial vectors which uses this style, without allocating (except for a single initial allocation of the buffer):


Line 275: The structure holding the buffer and on which the operations are implemented.

Line 287: the buffer field

Line 297: The operation returning a structure (by value) which has an immutable "borrow" of the shared buffer. Its lifetime is tagged with lifetime 'a, tied to the lifetime of the implementing object.

Line 319: The buffer being included in the return value.

I really like the pattern. You get the efficiency of not allocating, kind of like old school libraries like LAPACK, without doing things like writing over your inputs :)

One buffer per thread, I hope?

Right, that's a natural outcome of Rust's task model. Any borrowed pointer, or structure containing one, is never "sendable", and only sendable things are allowed to cross task boundaries; this is enforced by the compiler.

A neat trick (tm) for allocating a linked list: recursively call a function which allocates a node on the stack, hooking them up in the function... call something else and pass the list to it.

At the exit condition (whatever that is), exit the recursion and call something else with your new linked list.

(I used something similar to keep track of scope while recursively evaluating in this toy language I made once.)

This is kind of bad to do in a kernel. It is up to you to map the pages of your own stack. Once you have higher level concepts like a process, it's fair for the kernel to look at that state from a privileged position and say "OK, I will allocate more stack pages as it faults, and kill the process if I fail or it goes too far". But for kernel mode code, that last part could lead you to a scary place. IMO it's better to just use a few pages for stack and not go crazy with recursive functions.

Plus if you have your kernel stacks allocated in the same virtual address space, you might not be able to just "grow the stack" since that memory may be in use.

But yes recursion in a kernel is a "bad thing" unless you know the recursive bound.

A friend of mine wrote something up on this technique a while ago: http://spin.atomicobject.com/2012/11/01/hey-c-is-a-functiona...

Isn't this just a simplified version of Cheney on the MTA? (http://home.pipeline.com/~hbaker1/CheneyMTA.html)

The book Modern Compiler Implementation in C has a very functional style, written in 1997

There are editions of the book using ML and Java. Since Andrew Appel is involved with SML/NJ, the ML edition is most likely the original. (The ML one is also quite good.)

OT but I've never seen anyone write semicolons that way in C.


Personally, I'm never a fan of punctuation first formatting styles - though I give some exception for dot (.) first method calls in fluent interfaces for C-family languages.


Yeah, commas.

I imagine you basically mean that the called function stack-allocates one list node, sets up a "return trampoline" so when a return is executed, the returned value is returned one more level up, then jumps back into the code that called it. The issue I see is that returning gets to be a somewhat more expensive operation due to having to return several levels up.

Am I completely wrong here?

Considering how much memory modern machines have, and considering she is running no other programs, I would be so tempted to write malloc like this just for grins:

    int malloc() {
      return rand() % TOTAL_MEMORY;
    }

You jest, but you'd actually get surprisingly far just doing:

    void* malloc(size_t size) {
      static void* everything = 0;
      void* result = everything;
      everything += size;
      return result;
    }
When people are hacking together VMs for garbage collected languages, the first allocator often looks like this until they get around to actually collecting the garbage.

You can get `free` pretty simply too: https://news.ycombinator.com/item?id=6849471

Depending on how your memory is mapped, everything should probably be an address higher than the kernel (you can get your linker script to generate the right address). Sometimes the kernel is at the top of the address space though so you could start your heap at the bottom.

Oh man, that is the kind of bad idea that I love. I'm definitely going to try implementing malloc that way and report on the results =D

If that's legal, then I'm confused over the problem. Why not just start with a single address and increase by the amount requested each time? It pushes any difficulty off to the implementation of free, right?

You can do that. Of course you have to revamp it when you need to add 'free', but you can do it.

It's just much less pernicious to do it that way, than my method, which means it's less fun :)

OK, thanks for the info. I thought it was hard because he was in some mode that required him to set up page tables and service faults and all that.

So it's just hard because writing a decent malloc/free is involved, algorithmically.

Indeed, your method is a kind of malloc_roulette, depending on how much memory you allocate to the VM you run your kernel on.

Minor qualm: is rand defined?

Well, rand can be defined as a Linear Congruential Generator (https://en.wikipedia.org/wiki/Linear_congruential_generator), or some other well known algorithm. LCG's don't require anything that the machine doesn't already do (mod, addition and multiplication)

You could use rdrand if you're on (a recent enough) x86. You just lose some portability. Probably not a major issue at this point.

Ah, good ol' malloc.

You are wise to implement malloc once...you are a fool (or a masochist) to implement it twice.

I've implemented it thrice. Easy to get working, surprisingly tricky to balance all the space/performance requirements. You do get better at it!

A bit OT, but this is the first time I've heard about Hacker School. Is it as awesome as it sounds? If I wanted to join, how much money would I need to stay in New York for 3 months?

> Is it as awesome as it sounds?


> How much money would I need to stay in New York for 3 months?

About $1000/month for rent, give or take, and $112 for a metrocard. Plus whatever you spend on food (depends how much you cook) and entertainment.

Depending on the level of comfort you are used to, you can go a lot cheaper. I shared a room in Bed-Stuy for $350 / month and biked to Hacker School.

I did then blow the money I'd saved by eating out every day, but that's a different matter.

Oh, after your comment I read their about page. It really sounds awesome and is something I would be very interested in.

I don't know why, but I always assumed it was yet another "we teach non-programmers some JavaScript" programme. But it's really the opposite.

Memory allocation in the kernel needs special care. Usually a plain malloc() is not enough once interrupts and memory paging are in place. Since your piece of kernel code can run at different interrupt levels, you really need both a pageable memory allocator and a non-pageable allocator. At a high interrupt level (like a hardware interrupt), the memory paging service won't get a chance to run, and accessing pageable memory might crash the system. You want to allocate non-pageable, pinned-down memory for code that runs at high interrupt levels. For low interrupt level code, it's fine to use pageable memory.

That's why kernel mode development is a lot more difficult and fun.

I've been working on LLJS and the current version (which targets asm.js) doesn't implement dynamic allocation yet. I've been making all kinds of demos without malloc and it has been surprisingly fun!

When I was working on a hobby kernel, I'll confess I just took dlmalloc and called it a day. There are loads of interesting problems in kernel development (mapping pages, doing crazy stuff in page fault handlers, synchronization, filesystems) but somehow I didn't find malloc to be one of them. I guess anyone reasonably skilled at C can write a bad malloc pretty quickly, and once you get to that point you don't really learn that much by putting more effort into it.

There's a huge depth to a good malloc() implementation. While you're wise to avoid it when your interests are focused on kernel development, I'd recommend that every programmer have a go at writing malloc/free one day. Avoiding terrible performance problems is pretty tricky. At the very least, you'll gain a lot of respect for memory allocation and will discover that there's a lot of work going on behind the scenes when your code tries to allocate only a few bytes of memory.

Yup, malloc is usually a homework assignment in the upper-level CS classes in my university. I wrote it then (and again for my friends' hobby kernel) but I couldn't do it now.

If you don't want to use malloc in a kernel, that is fine. However, you should at least consider having something similar. This can be as simple as finding the available memory in your system and writing a frame or slab allocator. It will make your life much easier. It would be interesting to see how easy it is to integrate your memory manager directly with the rust language.

Interesting. The Go _runtime_ requires malloc, so even if used carefully I doubt you could use Go in this way.

But malloc doesn't have to be significantly advanced. In a situation where you have a large chunk of memory unallocated, the simplest thing to do is just to "bump" allocate it. Freeing it is a bit of a pain of course, but you could build a freelist out of the chunks that are free.

So, the suggestion looks like this:

    struct freemem {
      int len;
      char *addr;
      struct freemem *next;
    };

    // globals
    int total_mem_size;
    char *start_of_memory;
    char *bump_pointer;
    struct freemem *freelist;
Allocate is simply:

    if bump_pointer + MIN(size_to_allocate, sizeof(struct freemem)) < total_mem_size:
      int *b = (int *) bump_pointer;
      *b = MIN(size_to_allocate, sizeof(struct freemem));
      return b + sizeof(int)
    else:
      iterate over freelist, checking for size_to_allocate < freelist->len
      return freelist->addr (the *b business is already taken care of)
Deallocate simply turns the discarded memory into a struct freemem and prepends it to the freelist, setting addr = the discarded address, and len = *(addr - sizeof(int)) (it's for this reason that we allocate a minimum size of sizeof(struct freemem))

That's a basic malloc / free.

> Freeing it is a bit of a pain of course, but you could build a freelist out of the chunks that are free.

If you want a much longer explanation of this technique, I wrote a chapter about object pools[1] that discusses exactly this[2].

    [1] http://gameprogrammingpatterns.com/object-pool.html
    [2] http://gameprogrammingpatterns.com/object-pool.html#faster-particle-creation

Urr.. That should have been MAX, not MIN.

That's one of the main reasons we say Rust and Go aren't in the same space (and that is not a knock against Go, their choices made a lot of sense for their domain).

I'm really looking forward to Rust 1.0. It seems like an awesome language in many respects, not least of which because it's possible to write operating systems and other low-level software in, but the lack of stability will be kind of annoying for a while.

Does Rust not have dynamic heap allocation (malloc)? An explanation to accompany the article would be nice. Or at least a comprehensive response to my comment ;-)

Rust, and virtually every other programming language out there (C++, D, Python, C#, etc.), has a runtime library built to aid the programmer. When programming an operating system, you must implement the runtime that the language expects if you wish to use certain features of that language. In the case of Rust that would be dynamic heap allocation. The C++ "new" and "delete" operators (analogous to C's malloc() and free()) are also an example of runtime support that must be implemented before those operators can be used. Python and C# have gigantic runtimes (the interpreter/JIT compiler).

Rust does have dynamic heap allocation. I'm writing freestanding Rust (using #[no_std]). This means that I can compile Rust code without using the normal standard library, and makes it possible to do things like write a kernel in Rust (which is my use case right now).

There's a support library for writing freestanding / embedded Rust called rust-core [1]. This library requires that you define a malloc function: it declares

    extern { pub fn malloc(size: uint) -> *mut u8; }

However, it doesn't require that the malloc implementation be more than a stub, and as long as I don't allocate memory at any point that works fine.


Rust uses malloc provided by the system. It's mentioned in the article:

"I actually do have a malloc function because my Rust standard library needs to link against it"

He hasn't implemented malloc in his kernel yet (except for a stub).

She actually (Julia Evans)!

I just want to point that out since women don't get enough cred in our field.

Thanks, I'm not sure how I got that wrong. I certainly registered that it was a woman when I read the article.

> Rust uses malloc provided by the system.

It doesn't lock you into the system allocator. You can LD_PRELOAD your own, and in the past Rust has even shipped with and used jemalloc[1] (though I believe it's not using it at the moment). Furthermore, as alluded to at the end of the article, you can override the compiler's malloc implementation via what Rust calls "lang items" (which are sadly ill-documented).

> He hasn't implemented malloc in his kernel yet (except for a stub).

Rather, Julia hasn't implemented malloc in her kernel yet.

[1] http://www.canonware.com/jemalloc/

There was a really terrifying bug and the malloc got removed; wish I could remember the ticket number. Pretty sure it's not back...

The first time that jemalloc got removed was because our red zones were simply too small for it to work. The second time that it got removed was because somehow Rust was using jemalloc to allocate some memory and, once in a while, an entirely different allocator to free that memory (this is likely the terrifying bug that you remember).

There's no relationship between "Jason Evans" and "Julia Evans" =) That's a coincidence.

> Copyright © 2013 Jason Evans

I think you could forgive people for not knowing that Jason and Julia are the same person? (that's what you are implying, right?)

You appear to be falling victim to the vagaries of footnote notation. Blame HN for not giving us Markdown with which to embed links!

As the sibling implied, the "[1]" link is a footnote, referred to from the paragraph mentioning jemalloc. The gender issue was a separate point.

Hm, is there a reason why you've not declared something like "const VGABUF: int = 0xb8000;" (the way I assume VGA_WIDTH is defined)?

    pub unsafe fn putchar(x: u16, y: u16, c: u8) {
        let idx : uint =  (y * VGA_WIDTH * 2 + x * 2) as uint;
        // 0xb8000 is the VGA buffer
        *((0xb8000 + idx) as *mut u16) = make_vgaentry(c, Black, Yellow);
    }

As others have stated, embedded programming generally avoids malloc.

Having all one's memory usage laid out at compile time really does provide some benefits. For one thing, it makes keeping track of who is wasting memory really easy! It is far too common to over-allocate a buffer when initially developing code and never tune it later. Indeed, having to justify every allocated byte means developers will sit and do math and think about their code before writing it.

Odd as it may sound, this takes a fraction of the time that tracking down a memory leak does! Which is where the other huge savings comes in: no more memory leaks!

That of course leads right up to the other reason why heap-free programming is so common in embedded: Stability.

Memory leaks are a very common class of bug, and for deeply embedded devices it is not reasonable to have the user power cycle the device. Avoiding memory leaks removes one huge class of bugs. Add to that the lack of threads, and you knock out a second large class of bugs. In the end, one's test surface is greatly reduced!

That said, it is an interesting experience. I went from C# to embedded C# (yes, it exists! .NET Micro Framework), which generally avoids allocations if possible, to straight embedded C and C++.

Some wise guy may now remark about smart pointers, to which I'll reply "my stack is 4k, go away!" Allocating on the stack can still fail with an OOM! Static allocation fails with an OOM at compile time! (Of course it is still possible to blow one's stack, but it is a lot harder when you aren't allocating structs or classes on it!)

Oh, and finally, a great interview question is to ask someone where in memory statics are stored. Heck just ask what are the types of memory allocation in C or C++. The percent of candidates that can pass this is quite low! Bonus points if the candidate knows that static consts may be stored in a separate read only region! Super extra bonus points if they list uninit'd and initialized data! That is very much OS and linker options dependent however, assuming one even has an OS!

80%+ of candidates will look at code such as

    static int16_t value = 5;
and, when asked where it is stored, will answer either heap or stack. (Not quite sure why people say stack.)

Really, what you want is to look at a diagram like the one at http://www.geeksforgeeks.org/memory-layout-of-c-program/

And again, extra credit if they can talk about the various segments on their OS of choice. The above linked most certainly varies by platform, but anyone who has a true understanding of how one platform is laid out will be able to easily comprehend how other platforms go about things.

On a more practical note, all this came in handy when using C++ on a platform that didn't call constructors on statically declared classes! I opened up the debugger, looked at my pointer, saw a bunch of 0's, then my class data, and realized I had no v-table! (I was crashing whenever I called any virtual functions.) Suffice it to say, understanding what a v-table is, how it works, and how compilers create them helped a ton. I then went off to ask the gentleman who wrote our runtime why I had no v-table; well, it turns out he hadn't written the code to go about and call constructors!

It is good to know that all of the "magic" that happens behind the scenes is not magic at all, but perfectly comprehensible code written by a developer just like you or I!

Implementing malloc() was one of the first assignments in my OS class. A linked list with some header information about each allocation will get you a long way before you need a better algorithm.
