Actually, using his memzero solution would work, but not for the reasons he gives. Putting memzero into another compilation unit (.c file) requires compiling it separately. memzero itself cannot be compiled to a no-op, since the compiler does not know how it will be used, and a call to memzero cannot be optimized away, since the compiler does not know what it does.
Nevertheless, link-time optimization could in theory still optimize across compilation units. The only solution that comes to my mind is to use 'volatile' for the memory access, but that will never be fast.
Since you are insisting that the memory actually be accessed when you demand that it be wiped for cryptographic purposes, you will not be burned by the use of volatile. (To be clear, you would of course not access the memory through a volatile pointer in normal use: you would add that qualifier only when you go to wipe it.)
(edit:) In fact, that instruction, and a small handful of others (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD), do seem to cause re-orderings. On x86, at least, any other form of optimization should still be allowed (involving cache lines, etc.), but you are definitely right: using this instruction would not be. :(
void* memzero(volatile void*, size_t);
(volatile void *) memset(a, 0, 3)
But there's a safer and clearer approach I'd probably actually use (although also untested):
volatile void *prevent_optimization = memset(a, 0, 3);
I'm also not sure about erydo's approach. Given "volatile int x", is "(x+1)" a volatile access? I can't settle that from a reading of ANSI C99.
Why leave something like that to compiler semantics? If the block of memory you think is safe turns out in fact to be a back door, well then: that's your fault, not the compiler's, the operating system's, etc.
Well, there's also the rule "Never use malloc() in-process", too, which means: before main(), all your vars and memory are already initialized and allocated as you need them from the start. Oh, and other silly rules too, which can prevent a death or two along the way ..
Good compilers. Compilers are, most commonly, judged on their ability to optimize code.
> since the compiler does not know how it is used
That's not what the programming language specification says. See the "as if" rule in C.
The following one works in at least gcc 4.5.3 and clang 3.1, but is actually not guaranteed to work by C language semantics:
static inline void memzero(void *volatile block, size_t size)
{
    memset(block, 0, size);
}
static inline void memzero(void *block, size_t size)
{
    static void *(*volatile const memset_)(void *, int, size_t) = memset;
    memset_(block, 0, size);
}
If this is a concern, there's probably no alternative to an explicit loop:
static inline void memzero(void *block, size_t size)
{
    volatile unsigned char *bp = block;
    while (bp < (unsigned char *)block + size)
        *bp++ = 0;
}
$ echo '#include <string.h>' | gcc -E - | grep memset
extern void *memset (void *__s, int __c, size_t __n) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
/* will be optimized away for unclear reasons */
extern void *memset (void *__s, int __c, size_t __n);
/* may be optimized away */
extern void * memxxx(void *__s, int __c, size_t __n) __attribute__ ((pure));
/* should not be optimized away */
extern void * memyyy(void *__s, int __c, size_t __n);
I am pretty certain that memset is not conceptually being handled especially differently: it is simply being inlined. If you memset a variable to 0, gcc's compiled output might be just a single mov instruction; for a small array, it might be a few mov instructions; if a lot of memory is being cleared, it might involve a call.
However, as it is inlined, the semantics are pretty clear for how it should work: it is equivalent to assigning a variable a value, and if that variable's value will never be used, the assignment will not happen. (Attributes like pure will be deduced, as they would for any function in your own code; you don't need to mark it as such. Edit: to be clear, though, I really don't think that memset is "pure"... it is pretty much the exact opposite of "pure".)
Instead, if the developer really insists that the assignment happen even though, from the perspective of the C language standard, the operation has no legitimate side effect, the correct thing to do is to temporarily qualify the pointer with volatile and then do whatever it is you wanted (such as wiping it).
Sadly, this article does not mention "volatile" even once, so I'm not certain the author understood how this works. Instead, he states that a "solution" is to put a zero in the first element and then copy it to subsequent elements... something that could easily be detected by abstract interpretation, and which a sufficiently smart compiler would be correct in optimizing away as well.
a.c:14:2: warning: passing argument 1 of ‘memset’ discards ‘volatile’ qualifier from pointer target type [enabled by default]
Curiously enough, making a custom function with volatile pointer argument and `pure' attribute still causes GCC to optimize it away. I guess such combination makes no sense anyway.
extern void * memaaa(volatile void *__s, int __c, size_t __n) __attribute__ ((__pure__));
However, the author's attempts to redefine memset without using volatile rely on gcc limitations: in the first, "trivial" case, that it does not use abstract interpretation while looking for constant values (here it should be easy to fold that loop down to "oh, that rvalue is always a constant 0"); and in the second, that it does not do extensive optimization across compilation units.
As for the other thing you noticed: while I would imagine that the gcc-specific "I am the developer and want you to do something very special and important" __attribute__ would override any notion the compiler might have regarding "volatile", they are really orthogonal concepts. With just that prototype, the compiler wouldn't even know whether the function does anything at all with the memory you passed... the function might just do something with the pointer value and never dereference it; therefore, one would pretty much demand that "pure" have the semantics gcc gives it here.
If you compile with "-fno-builtin-memset", gcc uses whatever prototype is in the preprocessed source code and generates calls to memset() as appropriate. However, if you provide your own memset() implementation in the same compilation unit (or in a different unit w/ LTO) that doesn't play volatile or asm("") games, then the compiler still might be able to deduce that the memset() call can be optimized away.
I appreciate how you've distinguished your opinion from objective facts. E.g. according to spec, it's not broken, but it's a perfectly valid opinion to hold that the spec is broken too. :-)
Regarding the article, I don't at all understand why the three arguments are necessary. Why would the following patch not work?
- memset(x, 0, n);
+ memzero(x, n);
The first statement seems false. I was not previously aware of any association between the number of address lines and the size of the data bus on computing systems. I know I've had 32-bit processors with at least 64-bit memory buses, and the SheevaPlug has a 32-bit processor with a 16-bit memory bus.
Also, the code above this paragraph will only use wide accesses on 64-bit architectures ("#ifdef __LP64__"), even though there are benefits available on 32-bit systems.
By specifying volatile and zeroing out the memory in a loop, you're guaranteed that the compiler won't optimize away the zeroing. But if you want something faster, you need to get cleverer, because there's no universal "optimize this way but not that way" command.
#pragma GCC push_options
#pragma GCC optimize ("O0")
/* the optimize pragma applies to whole function definitions, not
   individual statements, so wrap the call (wipe3 is just an example name) */
static void wipe3(char *a)
{
    memset(a, 0, 3);
}
#pragma GCC pop_options
Edit: buried at the bottom of that post is another method:
It shows that a) volatile does not have to do _anything_ (if that is documented by the compiler) and b) you cannot trust your compiler to be bug-free. The latter is one reason to follow cert.org's advice to read the disassembly output of your compiler.
If the memory of the process is available to other processes after it finishes (or while it's running), isn't this already a lost game? I.e., how can you be sure that this particular chunk of memory wasn't paged out to disk at some point? How can you be sure that someone didn't access it before your memset() call?
Imagine a web server that didn't wipe plaintext passwords or encryption keys from its memory after finishing with them. If the web server was remotely exploitable then it could be possible to obtain the contents of the memory of that process remotely, thus possibly leaking passwords or other sensitive information of other people that have connected to that web server at some point in the past.
> how can you be sure that someone didn't access it before your memset() call?
True, but there's a difference between having a very small window of opportunity where the data could be obtained via a remote exploit, and leaving the window wide open for a possibly endless period of time.
Obviously you shouldn't have any remote exploits in the code in the first place, but it's good practice for secure programming to keep the sensitive information in memory for as short a period as possible just in case there is something that you aren't aware of.
Bear in mind that other normal user processes can only access the memory once it has been returned to the OS. Therefore, it would take a malicious superuser to access your keys whilst the program is running.
Of course the memory will be available to other processes, because we don't have an infinite supply of memory. Stuff like mlock(2) can prevent paging to disk, and there are probably other countermeasures too which I'm not aware of.
Using something like mlock(), supposedly.
Also, under a standards-compliant compiler, variables with static storage duration are automatically initialized to zero unless stated otherwise.
There apparently used to be a cfree, though it appears to be equivalent to free.
I upvoted rwg's comment regarding CERT's Secure Coding wiki. memset_s is the correct solution, given that it is part of the C11 standard (albeit in the optional Annex K).