
I think the author's point is opportunistic optimization. He didn't ask us to rely on this behavior.



If you don't know whether or not you are really getting an optimization, then how much do you really care?

If you really care, then you actually profile your system and see what takes how much time, under which circumstances. The results of such a profile are almost always surprising.

I guess this is a basic cultural difference -- almost nobody in the HN crowd really cares whether their software runs quickly; there is just a bunch of lip service and wanting-to-feel-warm-fuzzies, with very little actual work.

In video games (for example) we need to hit the frame deadline or else there is a very clear and drastic loss in quality. This makes this kind of issue a lot more real to us. If you look at the kinds of things we do to make sure we run quickly ... they are of a wholly different character than "guess that calloc is going to do copy-on-write maybe."
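To make the deadline concrete, here is roughly what a frame-budget check looks like (a POSIX sketch; the 60 Hz budget and the simulate_and_render stub are just placeholders, not anyone's actual engine code):

    #include <stdio.h>
    #include <time.h>

    /* Placeholder for whatever the game actually does each frame. */
    static void simulate_and_render(void) { /* ... */ }

    int main(void) {
        const double budget_ms = 1000.0 / 60.0;   /* ~16.7 ms per frame at 60 Hz */
        struct timespec t0, t1;

        for (int frame = 0; frame < 600; frame++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            simulate_and_render();
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            if (elapsed_ms > budget_ms)
                fprintf(stderr, "frame %d blew the budget: %.2f ms\n",
                        frame, elapsed_ms);
        }
        return 0;
    }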


At the same time, why would you ever opt for malloc & memset instead of calloc? calloc might have clever optimizations, whereas malloc + memset won't. Intentionally choosing something that is slower, buggier, and more work on your part is moronic.
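A rough sketch of the difference (not any particular libc, just where calloc has room to be clever):

    #include <stdlib.h>
    #include <string.h>

    /* Both functions hand back n zeroed ints, but calloc() also checks that
       n * sizeof(int) doesn't overflow, and a clever allocator can return
       pages it already knows are zero instead of touching every byte. */
    int *zeroed_via_calloc(size_t n) {
        return calloc(n, sizeof(int));
    }

    int *zeroed_via_malloc(size_t n) {
        int *p = malloc(n * sizeof(int));   /* overflows silently if n is huge */
        if (p)
            memset(p, 0, n * sizeof(int));  /* always writes every page */
        return p;
    }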


Predictability sometimes trumps optimizations. For a striking illustration of this, see timing attacks.
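The classic example is constant-time comparison. A sketch (illustrative only, not lifted from any crypto library):

    #include <stddef.h>

    /* memcmp() typically returns as soon as it hits a mismatch, so how long it
       takes can leak how many leading bytes matched. This version always walks
       the whole buffer: slower on average, but the timing is predictable. */
    int ct_equal(const unsigned char *a, const unsigned char *b, size_t len) {
        unsigned char diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }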


Exactly. I would avoid using calloc simply because I don't know what it actually does.


The implementation details of malloc aren't specified as part of its interface either...


Which is why high-end games do not use generic system malloc; in general we link custom allocators whose source code we control and that are going to behave similarly on all target platforms.

(In fact we go out of our way to not do malloc-like things in quantity unless we really have to, because the general idea of heap allocation is slow to begin with.)
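The simplest flavor of what I mean is a bump/arena allocator. A toy sketch, not real engine code (it assumes power-of-two alignment):

    #include <stddef.h>
    #include <stdint.h>

    /* Toy bump allocator: carve allocations out of one big preallocated block
       and release them all at once (say, at the end of a frame). Assumes
       'align' is a power of two. */
    typedef struct {
        uint8_t *base;
        size_t   capacity;
        size_t   used;
    } Arena;

    static void *arena_alloc(Arena *a, size_t size, size_t align) {
        size_t offset = (a->used + (align - 1)) & ~(align - 1);
        if (offset + size > a->capacity)
            return NULL;                 /* arena exhausted */
        a->used = offset + size;
        return a->base + offset;
    }

    static void arena_reset(Arena *a) {
        a->used = 0;                     /* "frees" everything in O(1) */
    }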


You know what it does:

    The calloc() function contiguously allocates enough
    space for count objects that are size bytes of memory
    each and returns a pointer to the allocated memory.
    The allocated memory is filled with bytes of value zero.
You should not care how it does it.


Spoken like someone who does not ship fast software!


There's a saying that is often misused, but applies here: "premature optimization is the root of all evil."

You first write your code using standard system functions, using the right calls for what you're doing. If, after that, the code's performance is bad because of calloc(), only then do you roll your own implementation (most likely in assembly) and accept that in the future your code might not work as well, because something in your OS has changed since you wrote it.


It's not always the best way to write software. If you're writing a program that's supposed to work in real(ish) time, then it's good to take performance into consideration early on; otherwise you'll end up rearchitecting your program later. It's not necessarily about the number of cycles each operation takes, but rather about the memory layout of your data. I guess it's a matter of experience: if you expect something to be a bottleneck (because it was a bottleneck in a similar application you've written in the past), then maybe you should just write it properly the first time round?


That's what I meant when I said that the saying is abused. Some people think that choosing the right algorithm is premature optimization. It is not.

Choosing whether to use malloc vs calloc is not an architectural change, though; in fact it is very easy to replace one with the other. But if you use the right call for the right use case, you will benefit from optimizations that the OS provides, which you often couldn't even achieve from user space.


This is the approach OpenSSL took (it had its own memory management routines), and it has already caused security vulnerabilities, not to mention performance issues.

Rule of thumb: if you need to allocate a memory region that will be overwritten anyway (for example, when reading a file), use malloc(). If you need zeroed memory, use calloc().

As long as you rely on the guarantees provided by the calls and use the right call for the right use case, you get predictability and, very often, optimization as well.
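To illustrate the rule of thumb above (the file name and sizes are just examples):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Buffer that gets overwritten right away: plain malloc() is enough. */
        size_t len = 4096;
        char *buf = malloc(len);
        if (buf) {
            FILE *f = fopen("input.bin", "rb");
            if (f) {
                size_t got = fread(buf, 1, len, f);
                printf("read %zu bytes\n", got);
                fclose(f);
            }
            free(buf);
        }

        /* Table that must start out as all zeros: let calloc() handle it. */
        size_t n = 1u << 20;
        unsigned *counts = calloc(n, sizeof *counts);
        if (counts) {
            counts[42]++;                /* every other slot is already 0 */
            free(counts);
        }
        return 0;
    }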


Yes, but malloc() isn't predictable either. If you care about predictability, you aren't using malloc or calloc.



