Hacker News

Don't you see that his answer has nothing to do with a hacker mindset? It's an assertion that making your development and production environments as close as possible will save you from unexpected grief, coupled with an observation that this has driven server architectures historically. Especially with subtle problems like performance issues. I find it a very sensible conclusion.

Of course it didn't hurt that x86 quickly became the price/performance leader for servers, but he makes a good case that this will continue for at least the near future.

The NetBSD people vehemently disagree. By ensuring your software works on various architectures, you expose subtle bugs in the ones you actually care about. Lots of 32-bit x86 code was improved during the migration to 64-bit, not because the move created new bugs, but because existing ones (i.e. code that relied on undefined behavior) couldn't get away with it anymore.

Well nobody uses NetBSD so...

Sure, portability increases code quality, but at what cost to time to market, which seems to be the primary concern for most developers these days?

NetBSD (and NetBSD code) is used pretty much everywhere. The internet pretty much runs on it.

That might be a bit oversold. I love the BSDs, but I'd think that by now Linux in all its forms would surely heavily outweigh NetBSD.

I would love to read a recent survey, if someone knows of one.

I couldn't name a major corporation that uses NetBSD on their servers or routers. (Yahoo used to use FreeBSD servers, but even they migrated to Linux.)

Is there a major router vendor or something else that uses NetBSD in a big way?

I can name several. But you won’t think of them as tech companies.

Hotpoint, Pioneer, Bose, Samsung (some TVs and audio equipment), Whirlpool, and many, many more.

They all use NetBSD in their firmware.

That says FreeBSD

I wouldn't call them bugs. If the binaries worked correctly on x86 due to compiler-specific guarantees, then the code wasn't buggy. It just wasn't written for a generic C or C++ compiler.

Undefined behavior is not a compiler-specific guarantee. UB can change based on almost random factors, especially between newer releases of the same compiler. They are bugs; they were just masked.

This honestly depends on what undefined behavior we are talking about. Sometimes it will be guaranteed to behave a certain way on a compiler. A few will also be the same across compilers if you're compiling for the same architecture.

However, I do agree that cross compiling is good for finding bugs like this. And really, if we are letting the compiler or architecture define undefined behavior, I find it better to break out the inline assembly. It's explicit that the code is platform-dependent, and it avoids breakage from subtle future changes.

Although it's usually possible to express what you're attempting in C without issue, and I only find myself doing such a thing when there is a good reason to use a platform-specific feature. Generally, relying on how a compiler handles uninitialized memory and the like is not what I'd call a compelling platform-specific feature. Cross compiling is good in this regard because it forces everyone working on a project to avoid those things.

> This honestly depends on what undefined behavior we are talking about. Sometimes it will be guaranteed to behave a certain way on a compiler.

That is implementation defined, not undefined, behavior.

That's at least unnecessarily splitting hairs and possibly missing the point, considering that some compilers allow you to turn undefined behaviour into implementation-defined behaviour using an option. -fwrapv comes to mind.

Undefined as per the spec. That does not mean it does not have a certain behavior on a given implementation.

The spec also does mention implementation-defined behavior. However, undefined things still need to be handled.

Not really. Undefined means that no purposeful, explicit handling has to occur even within a specific implementation, which means things can blow up just from changing some compiler settings or minor things in the environment (or even randomly at runtime). E.g. running out of bounds of an array in C is a perfect example of undefined behavior: no guarantee on what occurs from run to run. Yes, obviously time doesn't stop dead and something happens, but I think that stretches any meaningful definition of "handled".

True, undefined behavior can be implementation defined but that is not a requirement, and it usually is not.

Undefined as per the spec.

If the compiler defines a behavior for some UB, then it's no longer UB. It's been defined for your implementation. It might still be undefined for another implementation but that doesn't mean your code is buggy on the first one.

No, it does not. It's still UB. UB is defined by the standard, not by your compiler's implementation. Certain behaviors may be implementation-defined by the standard; those can be defined by your compiler.

But if the standard says it's UB, it's UB. End of story.

Where/how do you obtain such confidence in something so wrong? The standard not only doesn't prohibit the implementation from defining something it leaves undefined (surely you don't think even possible behavior becomes invalid as soon as it is documented??), it in fact explicitly permits this possibility to occur -- I suppose to emphasize just how nuts the notion of some kind of 'enforced unpredictability' is:

> Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner...

In my experience with porting stuff, sometimes the bugs exposed by ports are not along the lines of "always works on x86, always fails on ARM". In a lot of cases it fails on both, with different frequencies, but maybe the assumptions are broken sooner or more often on another platform.

There's a world of difference between working correctly on x86 and appearing to work correctly on x86. Sometimes the difference has serious security implications.

If a program manages to avoid the entire maze of deathtraps, the C standard calls it strictly conforming. I doubt anything commonly used today could qualify.

Even on NetBSD, my old love, you cannot take your program from the x86 machine, pack it up, and then run it on ARM. You will have to cross-compile and hope it works.

Many projects don’t care about subtle bugs. They need to deliver features on time. Bugs are acceptable.

Debugging on different platforms is great. But when it comes to deployment, you probably want to choose the one you know the best, and that's probably your dev platform.

Question is: when will development not occur locally at all? Is it possible that in the near future you develop directly in the cloud, on your own development instance? When this happens, the CPU architecture of your laptop is irrelevant. It will just be a window to the cloud.

Well, unless you're hacking on kernel code, making your production environment exactly like your development one is trivial. Just develop remotely. This isn't a part of Linus's calculus because, for him, developing remotely is unthinkable.
