Hacker News | rc4algorithm's comments

It's not like de Raadt or Torvalds got arbitrarily dropped into their positions. People effectively chose them by investing usership and development effort into their projects.


All of that useless zeroing is very painful from a performance perspective. It just isn't a reasonable option in performance-relevant code like kernels and core infrastructure. The Moore's Law argument doesn't apply because memory bus latencies and cache pressure are lasting problems.


On high-performance code for an embedded PPC system I used to work on, we made all our control blocks a multiple of the L1 cache line size. Our allocation routines then all had inline assembler to run the dcbz instruction (data cache block zero) on all the cache lines for the control block as it was allocated. This meant the control block was always zeroed, and the memory bus wasn't touched in order to do so. Yes, things were evicted from the cache, but since we were about to start writing into the control block anyway, the lack of a fetch was a net gain.
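The pattern described above might look something like the sketch below. This is not the commenter's actual code; the 32-byte line size, the struct layout, and the `zero_lines` helper are all assumptions for illustration. On PPC the loop body can be a real `dcbz`, which zeroes the line in-cache without a memory-bus fetch; elsewhere it falls back to `memset` so the sketch stays portable.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE 32  /* assumed L1 line size for the PPC in question */

/* Hypothetical control block, padded to a whole number of cache lines. */
struct cb {
    int  id;
    char payload[40];
} __attribute__((aligned(CACHE_LINE)));  /* sizeof rounds up to 64 */

/* Zero a cache-line-aligned block one line at a time. */
static void zero_lines(void *p, size_t bytes)
{
    for (size_t off = 0; off < bytes; off += CACHE_LINE) {
#if defined(__powerpc__)
        /* Zeroes the line in the data cache; no fetch from memory. */
        __asm__ volatile ("dcbz 0,%0"
                          : : "r" ((char *)p + off) : "memory");
#else
        /* Portable fallback so the sketch compiles anywhere. */
        memset((char *)p + off, 0, CACHE_LINE);
#endif
    }
}
```

An allocator would call `zero_lines(block, sizeof(struct cb))` right after carving out the block, relying on the size being a line multiple.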


Unless the zeroing is defined by the language, assuming it is incorrect and shouldn't be done.


Calloc is defined to return memory that's all-bits-zero, which in turn defines the value for many datatypes. The overwhelming majority of C implementations additionally define the values of all-bits-zero floats and pointers. IMO it's reasonable to write programs targeting only such implementations, provided such a dependency is clearly documented (just as e.g. it's reasonable to write C programs targeting only implementations on which floating-point arithmetic is IEEE 754, despite this not being required by the C standard).
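To make the distinction concrete, here is a small sketch (the `struct record` and `new_record` names are made up for illustration). The integer field is guaranteed zero by ISO C; the float and pointer fields being zero after `calloc` is the implementation-defined behavior the comment describes, which holds on common implementations like glibc on x86-64 but is not promised by the standard.

```c
#include <assert.h>
#include <stdlib.h>

struct record {
    int     count;   /* all-bits-zero is 0 for integer types: portable */
    double  weight;  /* 0.0 only where the implementation says so      */
    char   *name;    /* NULL only if all-bits-zero is the null pointer */
};

/* calloc returns memory that is all-bits-zero. */
struct record *new_record(void)
{
    return calloc(1, sizeof(struct record));
}
```

A program relying on `weight == 0.0` and `name == NULL` after this call is targeting that majority of implementations, and per the comment, that dependency should be documented.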


> The overwhelming majority of C implementations additionally define the values of all-bits-zero floats and pointers.

So... Clang and GCC?

Regardless, you're ignoring the obvious difference: that isn't POSIX- or ISO-defined at all. If you choose to use it, you're using a weird dialect. Might as well just switch to Cyclone.


> that isn't POSIX- or ISO-defined at all. If you choose to use it, you're using a weird dialect.

People used C (even on multiple compilers) long before POSIX or ISO. You're putting the standardization cart before the compatibility horse.


Yeah, let's revert to 1988! Great solution! Muchly of advance!


OpenRISC has been stagnant for many years and will probably never make it to silicon.


This was written by Miod Vallat a couple of days ago. He was the main nonstandard-arch developer for OpenBSD and resigned last week. So, the context is that the author was one of the few leaders of the Luddites and considers it a lost cause.


Kind of figured, and for a lot of stuff, I don't disagree with the sentiment, but I still think it has value.

My biggest frustration with the alternate chip vendors is the total lack of a simple ITX / ATX motherboard with a chip, expandable memory (preferably ECC), and a standard set of ports. Even ARM is problematic on this front.

I am mystified at how hard it is for a chip vendor to produce a motherboard.


The vast majority of MIPS systems are bargain-basement embedded systems, though. Also, the ISA is pretty messy (especially at the kernel layer) and doesn't support many modern features such as per-page W^X or crypto acceleration.

I dislike monopolies more than most, but MIPS isn't currently a very strong contender for ARM's market share.


Right now MIPS rules the television and set-top-box world, because there are, I think, two Chinese companies both of which own the IP for a MIPS core and all the digital TV decoder logic; which means that they can produce a complete end-to-end TV chipset and not have to pay anyone anything. So they're vastly cheaper than anything else.

Also, don't dismiss them. My last company was a startup producing a portable native gaming platform. I did a couple of ports to some MIPS-based smart TVs, and it totally rocked; I think we ended up with Lego Batman running on it in HD with an XBox 360 controller plugged into the USB diagnostics port on the back.

Of course, at the userland level all these platforms are horrible piles of fail. The ones I've seen all run badly-ported Linux kernels, and on at least one they hadn't bothered to write device drivers, and as a result the TV UI stack did audio by talking via pipes to a standalone executable, running as root, which fiddled with the hardware registers. Unsurprisingly we had latency problems...


I believe MIPS is a popular choice with internet routers too (at least the ones designed for domestic use).

Out of interest, what did you use for your base OS when you were doing MIPS development? Did you reuse an existing distro? Does Linaro support MIPS?

EDIT: Just found out there's a Linaro-like organisation for MIPS called Prpl. Is that what you used?

http://linuxgizmos.com/mips-supporters-form-a-linaro-like-co...


We didn't have control over the base OS --- we just had to port to whatever foul junk the manufacturers had put on it, and usually had to link to their UI libraries, as our platform was intended to run alongside the vendor UI.

I vaguely remember Linaro, but that might have been ARM (which we also supported). I don't recall Prpl.


This article suggests that there's no point in having a valid SMTP cert. However, consider end-users' clients, which store the SMTP domain (i.e. don't do MX lookups) and connect to it directly. For mail to users on the same email network, this is the only non-local SMTP hop. Securing this connection also prevents anyone on the end-user's local network from MitMing.


> I would happily admit there is extensive room for improvement in systemd, but we UNIX folks have been working with worse-is-better and rough-consensus-and-running-code for decades.

You're understanding "worse is better" backwards. It means "starkly simple to the point of being initially off-putting is better". In this context, it means (in part) "be a big boy and learn how to write a shell script". (Not to be too condescending, but I think that captures the Unix perspective here.)


Yes. I like how you use the word "initially".


There's so much wrong with this comment...

ETFs are just a way of buying equity, not a market that can be overcrowded.

The 2008 meltdown happened because of subprime mortgages, the approximate opposite of conservative index fund investments.

ETFs are (in this context) just a convenient way of buying modest amounts of an index fund.

Index fund companies like Vanguard charge a very low, extremely reasonable commission for both direct purchases and ETFs.


Was about to say all of the same things... Guess the MBA mention was just an appeal to authority.


Do you feel better now?

Best of luck!


No no no no no. When you do this, you are:

1) Reinventing malloc for no good reason

2) Opening yourself up to a whole world of security holes

See Bob Beck's talk on the first 30 days of LibreSSL. Heartbleed wouldn't have been nearly as catastrophic if OpenSSL just used the system memory allocator like normal people.


There are a few good reasons to reimplement malloc(), but of course it needs to be done very very carefully. For example, with a naive buddy list allocator it is (or was, in 2007) possible to more than triple allocation throughput vs. malloc(), but one loses many of malloc()'s security features.


I don't understand how this could lead to a security hole any more easily than using malloc could. Can you give an example? The system memory allocator is using mmap internally anyways (on Linux and anywhere else using the Doug Lea allocator at least) and a simple region based allocator is only a couple dozen lines of code.

The semantics are simple as well:

    region create_region(size_t size);
    void * allocate(region *r, size_t size);
    void destroy_region(region *r);
Edit: also, you're not reinventing malloc for no good reason. A region based allocator is much faster, possibly at the expense of using more space than the system allocator (because you might reserve more space than you actually need).


The point is that when you use malloc, an attacker exploiting a buffer overrun can't easily guess at the offset they need to find useful data. When you allocate everything in a big region, your process can be expected to read past the object bounds, so an attacker might be able to probe the address space (also the memory layout might be more deterministic). If they try to probe like that in a program that allocates with malloc, they'll just segfault the program.


Yes, this is a reasonable objection.


If you want to count examples, you're reasoning about this the wrong way. The question is which approach has a higher probability of a mistake, not whether discrete examples exist. Considering that Valgrind works out of the box one way but doesn't with yours, I think the answer to that question is quite clear. Even if there weren't tools like that, it is easier to read code and understand pointers' lifetimes when they're handled individually, instead of having their lifetimes be part of some far-off region.


The whole point of using regions is not to be worrying about pointer lifetimes. And you don't need Valgrind when you have regions since you aren't going to leak memory by forgetting to free something. Valgrind solves a problem that doesn't exist when you use regions.

If you're really worried that you'll leak an entire region worth of data just allocate the region with malloc instead of mmap and then use Valgrind to tell you what regions you aren't destroying.


Valgrind isn't just for memory leaks; its main purpose is catching out-of-bounds and uninitialized accesses.


The risk is that you'll screw up and write a buggy allocator with a security hole.

Yes, if your program just needs to allocate lots of memory, do some computations, then exit, this approach works. But the programs where security is most critical do not usually follow that pattern.


The code for this is so simple it would be hard to screw up. Plus you can just use obstacks[1] which pretty much provide the interface I was describing.

Look, I'm totally willing to accept that there might be flaws with this approach, but with the exception of adrusi, no one's objections have been all that reasonable. If there are legitimate objections (not just handwavy, oh, you'll probably implement the allocator wrong) I'd love to hear them. I use this pattern in my own code and I'll stop if there are legitimate flaws.

Also, Akamai released a patch for the OpenSSL allocator bug mentioned earlier. Guess what the patch used: mmaped regions.[2]

[1]: http://www.gnu.org/software/libc/manual/html_node/Obstacks.h...

[2]: http://thread.gmane.org/gmane.comp.encryption.openssl.user/5...
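For reference, the glibc obstack interface mentioned in [1] really is close to the region API sketched upthread: objects grow in a stack-like region, and a single `obstack_free` on an early object releases it and everything allocated after it. The sketch below assumes a glibc system; `ob_strdup` is a made-up helper name.

```c
#include <assert.h>
#include <obstack.h>
#include <stdlib.h>
#include <string.h>

/* obstack requires the user to supply its underlying chunk allocator. */
#define obstack_chunk_alloc malloc
#define obstack_chunk_free  free

/* Copy a string into the obstack; all such copies are released
   together by one obstack_free() on the earliest object. */
char *ob_strdup(struct obstack *ob, const char *s)
{
    return obstack_copy(ob, s, strlen(s) + 1);
}
```

Typical usage is `obstack_init`, a series of allocations, then one `obstack_free(&ob, first_object)` instead of freeing each pointer individually.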


Very good tip. Thank you. It looks like you are being opposed without a reason. Not everyone is writing encryption software in C.


Would you happen to have some code online I can look at which uses this technique? I first saw Casey Muratori from https://handmadehero.org/ use an approach like this, and I used the same approach in a small project of mine. It worked well and was simple enough to implement.


Because you're going to have bugs. You don't need the extra performance, and you do need the extra safety. There is no good reason for you to do this in 2015, unless your system's malloc is broken. The exceptions to this rule know who they are.


Yes, safety was the initial motivation. The advantage is that you free once and the semantics are more stack like. The performance is an additional benefit.

It is entirely possible to have no bugs in the <100 lines needed for this code, so I don't think the bug argument is valid. adrusi did raise a legitimate concern however.

