
This is like saying "there will always be some code you can't optimize, so this optimization doesn't matter". I know you know that isn't true.

Real library code either owns object lifetimes (and can use pools internally, because the library's own _release() function is the only thing that can free its state) or keeps its hands completely off allocation. The few counterexamples - where, for instance, a library malloc()'s something and expects you to free it - are notoriously error-prone, evil interfaces.
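Concretely, the shapes look like this (widget and these function names are made up for illustration, not from any real library):

    #include <stddef.h>

    typedef struct widget widget;

    /* Pattern 1: the library owns the lifetime. widget_release() is
       the only thing that can free the state, so internally the
       library can pool at will. */
    widget *widget_create(void);
    void    widget_release(widget *w);

    /* Pattern 2: the library never allocates; the caller supplies
       the memory. */
    int widget_format(const widget *w, char *buf, size_t buflen);

    /* The evil counterexample: the library malloc()s a result and
       expects the caller to free() it with the matching allocator. */
    char *widget_describe(const widget *w);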

Meanwhile, just because everything isn't amenable to pool allocation (or, even better, arena allocation, where there is zero memory management overhead at all ever) doesn't mean you don't win huge on the places that are amenable.
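An arena is a few lines of C. A minimal sketch (fixed capacity, no growth, not production code):

    #include <stddef.h>
    #include <stdlib.h>

    /* Bump-pointer arena: alloc is a pointer increment, and freeing
       everything at once is a single reset or free(). */
    typedef struct {
        char  *base;
        size_t used;
        size_t cap;
    } arena;

    int arena_init(arena *a, size_t cap) {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    void *arena_alloc(arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;        /* keep 16-byte alignment */
        if (n > a->cap - a->used) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    void arena_reset(arena *a)   { a->used = 0; }  /* "free" it all, O(1) */
    void arena_destroy(arena *a) { free(a->base); }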

You are raising the boogeyman of a hypothetical library that is going to take a pointer that I pool allocated and call free() on it, blowing up the program. I assert that any real example of such a library is going to be easy to shoot down as "an incredibly crappy library".




That's a complete straw man, and you know it. I'm saying that the specific optimization you mentioned is hard if you don't control all the pieces.

The primary advantage of a pooled allocator isn't the allocation itself - though that's nice - it's that you don't pay the cost of iterating through each object to free it. But if you use external libraries, they'll abstract their allocations behind handles (say), and now you have the problem of running what amount to destructors.
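Concretely (lib_conn and friends are hypothetical): even if every request struct lives in a pool you can drop in one shot, the handles stored in them still force a per-object teardown pass:

    #include <stddef.h>

    /* Hypothetical library: an opaque handle whose close function is
       effectively a destructor you're obliged to run. */
    typedef struct lib_conn lib_conn;
    lib_conn *lib_conn_open(const char *target);
    void      lib_conn_close(lib_conn *c);

    typedef struct request {
        lib_conn *conn;   /* owned by the library, not by your pool */
        /* ... your own pool-allocated fields ... */
    } request;

    void teardown(request *reqs, size_t n) {
        /* the per-object loop the pool was supposed to eliminate */
        for (size_t i = 0; i < n; i++)
            lib_conn_close(reqs[i].conn);
        /* only now can the pool holding reqs be dropped in O(1) */
    }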


Nope, I don't think it is. I'm saying that when libraries track their own state and manage their own lifecycles, you're right, you can't make them use pool allocators (unless it's worth it to you to do surgery)... but that doesn't keep you from using pools on your own state.

I think people also overestimate the extent to which C programs depend on third-party libraries for their own statekeeping.

And again... what are we talking about here? I'm not advocating writing things in C. I'm saying, it's bogus to say that since code tends to be I/O bound, C isn't a performance win for most programs. That is simply a bogus argument. That the level of performance you can get out of C is usually not worth the investment is neither here nor there. Again: Ruby programmer here.


Again, I'm not saying that you're advocating writing things in C.

Where I think the performance advantages of writing things in C come from is rarely being able to take shortcuts by relying on provided libraries and primitives, which tend to be not quite tuned for the problem at hand. That is, C forces you to do so much yourself - largely because it has such poor abstraction tools - that you end up with a more specialized codebase. That specialization can include cache-oriented optimization (like the layout change sketched below), but I don't think that's the most important aspect, or so unique to C that you can't get 95% of it - to the point where it's no longer a meaningful advantage - in a GC'd language.
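For example, the classic array-of-structs vs. struct-of-arrays layout change - the kind of cache-oriented specialization that falls out naturally when you lay memory out by hand (the particle example is illustrative):

    #include <stddef.h>

    /* Array-of-structs: a pass that only touches x still drags y, z,
       and mass through the cache. */
    typedef struct { float x, y, z, mass; } particle;

    /* Struct-of-arrays: the layout specialized to the access pattern;
       a pass over xs touches only the bytes it needs. */
    typedef struct {
        float *xs, *ys, *zs, *masses;
        size_t n;
    } particles;

    void shift_x(particles *p, float dx) {
        for (size_t i = 0; i < p->n; i++)
            p->xs[i] += dx;    /* dense, cache-friendly traversal */
    }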



