It's extremely hard to agree with Linus on that. One problem with his argument is that he assumes everybody has a kernel hacker mindset: most of today's developers don't care about environment reproducibility at the architecture level. The second problem is that he assumes every kind of development is as platform sensitive as kernel hacking, and he even uses the example of Perl scripts. The reality is that one year ago I started the effort to support ARM as a primary architecture for Redis, and all I had to do was fix the unaligned accesses, which are anyway handled almost entirely by ARM64 itself, and almost handled on ARM >= v7 as well if I remember correctly, except for a subset of instructions (double-word loads/stores). Other than that, Redis, which happens to be a low level piece of code, just worked on ARM, with all the tests passing and no stability problems at all. I can't relate to what Linus says there. If a low level piece of code written in C, developed for many years without caring about ARM, just worked almost out of the box, I can't see a Ruby or Node application failing once uploaded to an ARM server.
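To illustrate (this is a generic sketch, not the actual Redis patch): the usual fix for an unaligned access is to replace a cast-based load with memcpy, which the compiler lowers to whatever load sequence is safe on the target:

    /* Generic sketch of the typical unaligned-access fix (not the Redis code).
     * Reading a 64-bit value from an arbitrary offset in a byte buffer: */
    #include <stdint.h>
    #include <string.h>

    /* May trap or silently misbehave on ARM when p isn't 8-byte aligned
     * (and is undefined behavior per the C standard anyway): */
    uint64_t load_u64_cast(const unsigned char *p) {
        return *(const uint64_t *)p;
    }

    /* Portable: memcpy is valid for any alignment, and compilers turn this
     * into a single load on x86 and into a safe sequence on ARM: */
    uint64_t load_u64_portable(const unsigned char *p) {
        uint64_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }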
Node and Ruby applications do fail on ARM though, when it comes to native libraries and extensions. And now your whole distro is different than your development machine, which adds complexity.
Do I really want to be debugging why node-gyp fails to compile scrypt on the ARM distro on the new Amazon A1 ARM instance (which it did in my case)? And if I solve that, what about the other 2451 dependencies? Let's pessimistically say there's a 1% failure rate, I'll be stuck doing that forever! Nah, I'll just go back to my comfy x86 instance, life's short and there's much code to write :)
I think I'll side with Linus on this one. I saw first-hand how non-existent the x86 Android market was, despite Intel pouring megabucks into the project. If the developers don't run the same platform, it's not going to happen, no matter how great the cross platform story is in theory. Even if it's as simple as checking a box during upload that "yes, this game can run on X86", a huge chunk of the developers will simply never do that.
I'm suffering through this now -- I have a custom C++ Node extension that needs to run on both x64 and ARM. The ARM CPU is onboard a mobile robot, where I care about power draw.
The good news is that Clang can cross-compile fairly easily. Much better than gcc.
The bad news is that there are a surprising number of missing libraries on Ubuntu/ARM64. For example, Eigen3. And although the code is fairly compatible, there's some extra cognitive load in learning to debug on ARM. For example, calling a virtual function on a deleted object crashes differently.
I'm willing to put up with it for ARM's advantages in battery-powered applications, but I wouldn't just to save a few bucks on cloud servers.
Unfortunately C++ code is only portable if it is free from undefined behavior. Fortunately there are many tools now to debug these kinds of errors: https://blog.regehr.org/archives/1520
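For example (hypothetical snippet), both gcc and clang ship UBSan, which reports this kind of latent bug at runtime instead of letting it "work by accident" on your development architecture:

    /* ub.c -- build with: cc -O2 -fsanitize=undefined ub.c && ./a.out
     * UBSan prints a runtime error on the overflowing addition below. */
    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        int x = INT_MAX - argc + 1;  /* equals INT_MAX when run normally, but
                                        keeps the optimizer from folding it */
        x = x + 1;                   /* signed overflow: undefined behavior */
        printf("%d\n", x);
        return 0;
    }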
How about using CMake to download and cross-compile the 3rd party dependencies? I've worked on a couple of robotics applications (C++). I wouldn't depend on Ubuntu for packages. You'll lose the flexibility to choose package versions, apply patches etc.
I remember getting annoyed at needing a Windows driver that was part of an open source package. It needed autotools to build. I got annoyed trying to make it compile under MSYS/MinGW. So I built a gcc cross compiler under Linux, compiled the damn driver, and it just worked(tm).
By default it supports a variety of targets; you don't need to set it up.
Setting up the system libraries is relatively easy: make a copy of the target you're building for (assuming it has a setup for compilation) and use --sysroot=/path/to/target.
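Roughly like this (the path and target triple below are just for illustration):

    /* hello.c -- nothing target-specific in the source itself.
     *
     * Cross-compile for 64-bit ARM Linux with clang by pointing --sysroot at
     * a copy of the target's root filesystem (made-up path):
     *
     *   clang --target=aarch64-linux-gnu --sysroot=/path/to/target-rootfs \
     *         -fuse-ld=lld -o hello hello.c
     */
    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__)
        const char *arch = "arm64";
    #else
        const char *arch = "something else (probably x86-64)";
    #endif
        printf("hello from %s\n", arch);
        return 0;
    }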
On a related note, if Apple does switch to ARM chips for their laptops, that will make mainstream ARM server-side development more viable than any cross-platform story ever can. Or kill the Mac desktop. One or the other :)
It will most likely kill apple laptops as a developer platform.
Unless you are deeply disconnected from the hardware, CPU architecture does matter. Most developers using MacBooks that I know have VMs for either Windows or Linux work.
It might be conceivable to use the ARM port of <Insert your Linux distribution here>, or the ARM version of Windows 10. But it would also require a good desktop virtualization solution for ARM. If Apple releases its own solution or creates a partnership with, say, VMware to release something at the same time an ARM MacBook comes out, it might work, but barely, and I'm not sure developers are at the top of Apple's priority list. In the end, with the Linux subsystem in Windows, choosing Windows as a development platform might make more sense.
As a side note, if Apple switches to ARM, I foresee a lot of cringing. The switch from PPC to Intel was annoying for a lot of Apple users at the time, but there were a lot of goodies (Boot Camp, better battery life, better performance, VMs) that kind of made up for it; basically, nothing was lost in terms of possibilities, a few were actually gained, and only the transition was a bit painful. With a switch to ARM, you may gain a little in terms of battery life, but with the MacBook Pro already at 8+ hours (~a work day), I'm not sure it's a game changer; at best you will stay the same in terms of performance, and you will lose compatibility with other OSes.
I think kill is too strong. Certainly some developers will need to be on an Intel chip, but not all. How many developers use their laptops as a dumb SSH terminal? While some C extensions to scripting languages will need some love, the majority of major interpreted or VM driven languages work already.
My feeling is it will be net zero as far as ARM servers are concerned until the hardware is made and is viable. Perhaps Apple ARM laptops will help with marketing ARM as a viable option, but we already develop on OS X in order to deploy to Linux without any great calling for OS X servers.
Cloud server “hardware” has also drifted from what you see in real hardware. There are numerous times in my career I’ve had to explain to developers of all experience levels that their SSD is several orders of magnitude faster than the tiny EBS volume they’re reading from and writing to.
In short, I think architecture mismatch just isn’t that important to most Web App oriented companies. My girlfriend works at a travel industry giant and they’re at the opposite end, putting IBM mainframes to good use. They don’t have much use for Macs and most of their developers seem to be on Windows instead of anything resembling what they’re deploying on. For the segment of our industry that does care, they’ll have options and will choose them regardless of what Apple, Google, and Microsoft do with ARM, Intel, Power, MIPS and other architectures.
While the Intel switch helped (at least I thought it was great), the big deal was that OS X was a tremendously usable Unix on amazing laptop hardware.
I'm not sure the architecture mattered as much as that did.
Agreed. I switched when there were still G4 PPC laptops just because OS X was a usable Unix with good hardware. The switch to Intel was good, but it wasn’t because I struggled with the architecture. It was for the more powerful CPUs and battery life.
Sorry, but on Windows, Mac and Linux, x86 Docker is a huge part of my workflow... I'm having enough trouble guiding people away from Windows-only as it stands; ARM is too big a pill to swallow in the near future. There are still enough bad parts in scripting environments at the edges (I'm a heavy Node user; scss and sqlite being two big touch points).
I can imagine a neat divide here between “MacBook” (based on ARM, 15 hour battery life) and MacBook Pro (based on Intel i7, 6 hour battery life, optimised for VMs).
I consider myself a pretty average Mac user, and I've already been turned off by the last couple rounds of Macs that Apple has shipped. Messing up the one remaining upside, x86 compatibility, would be the straw that broke the camel's back. They still only have single digit market share in desktop computing; this could be the death blow for their platform.
Sounds about right. They actually already have this setup, the T2 chip in recent macs contains an arm processor which handles some tasks like disk encryption. It could be possible that future OSX versions will leverage that processor for more general purposes.
I wouldn't imagine you'd get a lot of performance boost from the change. You'll see battery life gains, but that assumes they aren't looking to run a crazy number of cores to make it compete with x86. And the only way that massive core counts help is if the software is designed to utilize them correctly.
It's not that all users are devs. It's that all devs might not be able to make their software work well under that environment.
The current crop of Apple A chips runs circles around almost all the Intel chips they put in their laptops, at a fraction of the TDP.
I think you will see a lot of performance boost after switching to ARM. If they start on the "low end" then a macbook will be practically on par with a mbp. This might not be useful at first for native development, but I am quite sure that macOS, iOS and web development will be very much possible on these machines - the three domains that Apple cares most about.
Knowing Apple, they would just go for the 'even lighter' approach, and insert a battery half the size of the current ones...
A battery lifetime of 8 or 12 hours is plenty, and going beyond that isn't that much of a marketable strategy, unless it has to become 24h+ or something. A lower weight approach however would also mean a lower BOM for Apple, and more profit, while being able to shout "1/3rd lighter!" - and that's an easy sell :)
> And the only way that massive core counts help is if the software is designed to utilize them correctly.
That's for servers and scientific software (and perhaps 3D and such).
For regular devs the massive core count helps even with non-optimized apps, because unlike the above use cases, we run lots of apps at the same time (and each can have its own core).
When that happens we’ll also see a big push to add ARM support to all the native nodejs modules that are out there. (And I assume Ruby, Python, Go, etc packages).
Linus’s prediction is based on the premise that everyone will continue to use x86 for development. But that’s probably not going to be the case for long. Multiple sources have leaked the rumour that Apple will release an ARM MacBook next year. And I wouldn’t be surprised if Microsoft has an ARM Surface Book in the wings too.
Developers don't develop on Surface Books, and MacBook share is in the low percentages.
The majority of people in the world writing code are using x86 PCs and Microsoft and Apple aren't about to change that with any *Book.
Linus' premise that everyone will continue to use x86 for development holds because they will.
There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.
I can foresee _some_ interest in ARM cloud, and I don't think it'll be the issue Linus claims at higher-than-systems-level, but I absolutely would bet on x86 going nowhere in the human-software interface space in the foreseeable future.
For some reason Macbooks seem disproportionately represented amongst web developers. All the agencies I know in Sydney and Melbourne are full of macbooks.
> There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.
Uh, why are you assuming ARM laptops will have lower performance and a higher cost compared to x86 equivalents? The ARM CPU in the iPad pro already outperforms the high end intel chips in macbook pros in some tests. And how long do you think Apple will continue to sell intel-based macbooks once they have ARM laptops on the market? Maybe they'll keep selling intel laptops for a year or two, but I doubt they'll keep refreshing them when new intel processors come out. When Apple moved from powerpc to intel they didn't keep releasing new powerpc based laptops.
Once web development shops start buying laptops with ARM chips, it will be a huge hassle if random nodejs modules don't build & work on ARM. At this point I expect most compatibility issues will be fixed, and that will in turn make deploying nodejs apps on arm a more reasonable choice.
Obviously we'll see, and this is all speculation for all of us. But I think it's a reasonable path for ARM chips to invade the desktop.
I'm not assuming per se, I'm guesstimating, basing it on my understanding of x86 and ARM. I graduated in Electronic Engineering from a university department whose alumni include Sir Robin Saxby; they pushed ARM hard, and I have a fairly good understanding of where it's at architecturally compared to x86.
Apple have 100% control over every part of their hardware and software from the get go, so it's inevitable they perform excellently on that hardware; they can optimise their code to death, and increment the hardware where it can be improved upon.
Web developers make up a fairly small proportion of the developers I've ever worked with; I have worked for software houses where web just isn't a thing for us other than for our own marketing. None of these people run Macs, they all run PCs, and these PCs don't benefit from the same control over the hardware/software process that brings about the kind of "excellent" result you see from an iPad. They'll be relying on Microsoft to get Windows optimised, but Microsoft will be working with dozens, even hundreds of partners; Apple works with one, itself.
I also suspect that they'll be more expensive because of all the new development the manufacturers have to put into releasing these new ARM laptops. Microsoft will have to put extra work into Windows, which will cost money, and finally those of us that run Linux will end up with something that hasn't had the benefit of the decades of desktop development that x86 has had, and thus worse performance, at least in the beginning.
I could imagine a laptop equivalent of big.LITTLE, where you have x86 cores for the real grunt work and ARM cores for power saving, but I don't see pure ARM in the workstation space.
It'll be an interesting time, but based on my own experience, I'm betting on Linus with this one and I don't see myself or my colleagues or my workplace moving to ARM anywhere outside of non-laptop-portables any time soon.
Yeah, well, I live in one of the ex-USSR countries. And guess what - there are no Macs here whatsoever. I'd suspect that x86 is the prevalent platform in China and India, the dominant players in the outsourcing markets. So, no, most of development is done on Intel machines.
This is true, but a huge part of that is VMs with linux or windows, and for me x86 docker workflows that go to x86 servers. It'll be years for any real transition imho.
It took 4 years of concerted effort to get most node things working right in windows.
If that happens, a lot of low level programming will be done ARM-first. A lot of Swift and Objective-C code will be built, tested and run primarily on ARM.
Apple's latest iPad processors are competitive with low-end laptop x86 processors. And they have a much stricter power budget than a laptop. If Apple wants to go this route, then they probably have the capability to build the chips to support it.
The part you're discounting is just how resource intensive desktop apps are and how much optimization goes into iOS apps.
To really see the benefit of changing they would need to add a lot of cores, and then cross their fingers that 3rd party app developers know how to do true multi-core development.
> I think I'll side with Linus on this one. I saw first-hand how non-existent the x86 Android market was, despite Intel pouring megabucks into the project.
Doesn't that refute Linus' argument, not strengthen it? Almost all Android developers develop on x86. Intel thought, as Linus apparently does, that this would drive adoption of x86 on phones. It didn't.
Intel even got competitive in power efficiency and it wasn't enough to save them. In fact, I remember folks on HN predicting the imminent death of ARM all the way up to Intel throwing in the towel.
I think Linus is wrong here. His argument made sense in the '90s, but it's 2019. The ISA just doesn't matter that much anymore.
Phones are different than servers though. The primary customer of a phone is Joe Somebody who doesn't know or care about architectures, only battery life and cost. Well ARM wins there.
The primary customer of servers is developers who care less about cost and more about time to market.
Indeed you might not, but the person that wrote your JVM does, and the person that wrote the system it runs on does, and the person that wrote GAE does...
That single instance you're running on already took half a dozen or so other systems developers and more before it got to you, so in your example you're the minority.
It's because of the work they've done, that you can not care about the architecture you're running on, not in spite of them.
Sure, but every time you move down the stack a level, you shrink the network effect by several orders of magnitude.
Linus' argument is that x86 stays on top because everyone is developing with x86 at home. It's much less convincing to argue that x86 will stay on top because the people writing JVMs use x86 at home. There just aren't that many of them, and if they get paid to write ARM, they write ARM.
Indeed, but by 'at home' he means in the office too (he says as much), and I don't see offices doing this unless they have a real incentive to throw out the hardware they've invested in. Perhaps in the not-so-near future, when they inevitably have to replace it all due to failure, the ARM stuff will have a chance to take some share.
Well, if ARM servers are cheaper for Amazon to run, they're going to want to incentivize customers to switch to ARM. Either by passing on some of the cost savings (even if it's only 5%), or by making the x86 option more expensive.
In the second case, Amazon is still "passing on the cost savings" in a sense, it's just that now they take a higher profit regardless.
AWS is still 10x-100x more expensive than renting bare-metal unmetered servers and running everything yourself, so I don't think the actual hardware factors too much into their pricing.
This is so utterly untrue and directly related to what Linus was talking about.
Those bare-metal servers are basically 1:1 what you are developing on.
I can install an instance of my application on them in minutes.
It's AWS that takes significantly more time to set up and learn.
Most people using AWS are spending big bucks on an 'automatically scaling' architecture (that never just works) that will cost them many thousands of dollars a month, which they could have comfortably fit on a 30 bucks dedicated server.
You can pay a dedicated system administrator to run your server (let's not kid ourselves, you probably just need one server) and still save money compared to AWS.
With AWS you're not only paying Amazon, you're probably also paying someone who will spend most of his time just making sure your application fits into that thing.
Take my use-case for example: I can run my entire site on about 8 dedicated servers + other stuff that costs me ~600-700 euros a month.
Those just work month after month (rarely have to do anything).
Just my >400TB of traffic would cost me 16,000 bucks / month on AWS. I could scale to the whole US population for that money if I spent it on dedicated servers instead and just ran them myself.
My situation is similar to what chmod775 describes.
We serve 200+ TB/month, and no we didn't just forget to use a CDN ◔_◔ Those cost money, too.
For us, cloud is about double - $10k/month more - than dedicated boxes in a data center. I've run the same system in both types of environments for years at a time.
For us, cloud is basically burning money for a little bit of additional safety net and easy access to additional services that don't offer enough advantage over the basics to be worth taking on the extra service dependency. It's also occasional failures behind black boxes that you can't even begin to diagnose without a standing support contract that costs hundreds or more a month. Super fun.
High bandwidth and steady compute needs is not generally a good fit for cloud services.
I don't have production experience with ARM unfortunately, but Raspberry Pi is huge... Linux on the desktop sucks if you have a random laptop or something like that, but on specific hardware like the RPi, everything works for my needs. I have node.js, .net core, python and loads of software that just works for me on ARM. Not to mention I have a Synology with an ARM processor. Making servers is a lot easier than making consumer grade laptops or desktops. I agree with Antirez: there is so much space to try out stuff on cheap ARM with RPi and other SBCs that it's just going to roll over x86, because ARM is going to be ubiquitous with phones and SBCs. That is why x86 won against SPARC and PowerPC: it was just in more places.
I ran Pis at home for a bunch of services and I agree they did a great job. But when you put actual load on them, the devices crater because they are so underpowered. This is where THE issue is going to be. To get the speeds you expect out of server hardware, it's not just about making a 64-core ARM chip. Single-core ARM vs single-core x86 has an obvious winner. So you need to make node and .net core and python and everyone else really push their limits on using multiple cores without developers knowing about it.
But that is just the first step. You then need developers who write applications on top of those languages to be multi-core aware and design their applications to fully use the huge number of cores. At that point you'll lose a lot of your power efficiency because you'll need a lot more hardware running to do the same tasks. You'll also need developers who know how to think in an extremely multi-core way to get the extra performance boost.
Well, servers typically care about throughput and not latency. So if your ARM server will process 10000 requests per second with each request taking 100 ms and your x64 server will process 8000 requests per second with each request taking 80 ms for the same price, ARM will be preferred. There are exceptions, of course, but generally server workload is an ideal case for multi-threaded applications, because each request is isolated. That's why server CPU's usually have a lot of cores but their frequency is pretty low.
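To put rough numbers on that (Little's law, using the made-up figures above): the ARM box has to keep about 10000 req/s x 0.1 s = 1000 requests in flight, the x64 box about 8000 x 0.08 = 640. Neither figure cares whether the work is spread over 16 fast cores or 64 slower ones, which is why throughput-oriented server workloads tolerate lots of weaker cores so well.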
I think it will be interesting to see how this plays out in different ecosystems. I’d hazard a guess that ecosystems like Go, JVM and .NET will fare much better in an arm world, compared to languages that more commonly pull in native binaries.
Of course I can, but the question is why I would go out of my way to do any of that?
I was interested in trying the state of server-side ARM for my mostly-interpreted language, and I pretty much immediately found that it doesn't Just Work. I had a vision of spending many hours searching, creating and +1'ing GitHub issues and tracking discussions around "why package X doesn't work on ARM", with the developers saying at best "happy to accept patches" (which btw is the mantra for "why package X doesn't work on Windows", and why you don't want to develop with Node on Windows to this day despite all of Microsoft's ecosystem work). Nope, not worth it.
I'm not interested in supporting ARM just for its own sake. A 30% discount on cloud instances is also not nearly enough for me or my team of developers to be spending any significant amount of time on this, solving problems unrelated to our core business.
Let's see again in a few years. Of course, if ARM development machines become mainstream by way of Apple, then the calculation changes completely.
Aside from the fact that emulation is slow (and thus more annoying to test), now you have to contend with emulator bugs. Is your software crashing because your code is bugged or because the emulator is bugged? Or worse: your software may only be working because of an emulator bug.
It's very easy to disagree with him, because the server market doesn't work the way he thinks it does.
Google, Amazon, Microsoft and Facebook collectively purchased 31% of all the servers sold in 2018. The market for server hardware is dominated by a handful of hyperscale operators. The "long tail" is made up of a few dozen companies like SAP, Oracle, Alibaba and Tencent, with the rest of the market practically representing a rounding error.
These customers are extraordinarily sensitive to performance-per-watt; they can readily afford to employ thousands of engineers at SV wages to eke out small efficiency improvements in their core services. They aren't buying Xeon chips on the open market - they're telling Intel what kind of chips they need, Intel are building those chips and everyone else gets whatever is left over. If someone else has a better architecture and can deliver meaningful efficiency savings, they'll throw a couple of billion dollars in their direction without blinking.
This is not theoretical - Google are on the third generation of the TPU, Amazon are making their own ARM-based Graviton chips and Facebook now have a silicon design team led by the guy who designed the Pixel Visual Core. It's looking increasingly certain that Apple are moving to ARM on the desktop, which further undermines the "develop at home" argument.
ARM won't win the server space, because nobody will win the server space. With Moore's Law grinding to a halt, the future of computing clearly involves an increasing number of specialised architectures and instruction sets. When you're spending billions of dollars a year on server hardware and using as much electricity as a small country, using a range of application-specific processors becomes a no-brainer.
All of this will be useless if there are no customers interested in the new platform. There is a big difference between making an existing, widely used platform more efficient and offering a new, efficient platform that only a few customers are interested in.
"Run your code on our boxes" is a very, very small subset of cloud services. Does anyone other than Facebook care what instruction set they're using to ingest images? Does anyone other than Amazon care what instruction set they're using to serve S3 requests? Does anyone other than Google care what architecture they're using to crawl the web or serve ads or do something creepy with neural nets?
I don't get it. Having your software able to run on two different platforms means that it needs to be tested twice and maintained twice. Your architecture decisions might be optimal for one platform but not for the other, so you have to change your development process, have test environments for both platforms, etc. You can't just cross-compile and hope it works.
All of this costs money, in terms of either having more developers/testers or having longer development time. So, in order to justify this investment, the second platform must be way cheaper in order to cover the costs of the extra developers/development time. And if there is such a huge difference and the second platform works great, then why still support the first platform anyway? Ditch it, and you will save yourself some money.
You could be an ISV, but again, your software will be more expensive if you need to support two different platforms. Which means that your customers must be willing to pay for it. Which brings us to same conclusion, unless there is a big saving by running software on alternative platform, nobody will care.
Google's data centers collectively use more electricity than the state of Rhode Island, or about the same as the entire country of Costa Rica. Their electricity consumption has doubled in the last four years. At average wholesale prices, their annual electricity bill would be about a billion dollars. ARM isn't dramatically more efficient than x86 in most applications, but specialised ASICs can be orders of magnitude more efficient.
I don't know about the standard offerings on cloud platforms, but people who do some sort of scientific computing care a great deal about the architecture and performance. As a C++ programmer, I'm constantly profiling my code to find out bottlenecks and optimize the code. Sometimes I even care if the CPU supports say AVX or some specific instructions.
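For instance (a made-up sketch, names hypothetical), the kind of runtime dispatch you end up writing is itself tied to the ISA - there is no AVX to probe for on an ARM box:

    #include <stddef.h>

    static void sum_scalar(const float *a, size_t n, float *out) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += a[i];
        *out = s;
    }

    /* Stand-in for a hand-tuned AVX2 kernel (the real intrinsics are omitted). */
    static void sum_avx2(const float *a, size_t n, float *out) {
        sum_scalar(a, n, out);
    }

    void sum(const float *a, size_t n, float *out) {
    #if defined(__x86_64__)
        if (__builtin_cpu_supports("avx2")) {  /* gcc/clang builtin, x86 only */
            sum_avx2(a, n, out);
            return;
        }
    #endif
        sum_scalar(a, n, out);
    }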
From my comment: "the future of computing clearly involves an increasing number of specialised architectures and instruction sets".
I'm not saying that nobody cares about the choice of architecture, I'm saying that major tech companies with vast quantities of servers are beginning to develop their own silicon with custom architectures and custom instruction sets, precisely because that's vastly more efficient than using a general-purpose architecture that happens to be popular in the wider software ecosystem. The fact that nobody else uses that special-purpose architecture is unimportant, because it is economically viable for them to invest in tooling and training to write and port software for these weird chips.
The largest users of cloud VMs are internal customers - cloud services and the business that the cloud providers spun out of. If ARM is cheaper to run, the savings for Amazon/AWS could be astronomical. That will generate more than enough internal customer demand to make offering ARM servers worthwhile.
>The "long tail" is made up of a few dozen companies like SAP, Oracle, Alibaba and Tencent, with the rest of the market practically representing a rounding error.
Are Alibaba and Tencent really in the long tail? I believe Tencent could be, since it is about a third of the size of Alibaba, but if I remember correctly Alibaba will overtake Google by 2019 (they had ~90% growth in 2018), and in 2018 they were already close to matching Google's cloud revenue.
I wonder if OVH is also big in the list. And Apple? Surely the server back end to service 900M iPhone users can't be small. How do they compare to, say, Google in server purchase terms?
I once had a bug while developing an IP phone. Connecting to one server worked, but once on site with the customer, connecting to their server didn't work, even though the servers were identical.
It turned out that the power supply on their server was malfunctioning, sometimes delivering too little power. Especially when taking the code paths my IP phone triggered.
The server software was built in php. It's not often you start looking for a bug in PHP but end up switching a capacitor in the PSU.
My point is that even if you write in PHP, PHP is running C libraries, which are running ASM, which is running on hardware, and every part of this chain is important. There's no such thing as "works everywhere", it's just "has a very high chance of working everywhere".
(off-topic) thanks for the sds library. I'm a heavy user of it.
Yes, but the platform is just one of the unknowns at the lower level. If the tooling is fine, the C compiler is very unlikely to emit things that will run in a different way; it is much more common to see software breaking because of higher level parts that don't have anything to do with the platform: libc version, kernel upgrade, ...
I’d be really interested to hear more of this story! How you isolated the problem down to the level of the PSU is going deeper into the machine than I’ve personally been, so this story could be a great teacher.
It sounds way cooler than it was. For some reason I opened up the server box, and during a reboot I saw the LED on the motherboard flickering in a way I didn't expect. So we tried changing the PSU and then everything worked.
Don't you see that his answer has nothing to do with a hacker mindset? It's an assertion that making your development and production environments as close as possible will save you from unexpected grief, coupled with an observation that this has driven server architectures historically. Especially with subtle problems like performance issues. I find it a very sensible conclusion.
Of course it didn't hurt that x86 quickly became the price/performance leader for servers, but he makes a good case that this will continue for at least the near future.
The NetBSD people vehemently disagree. By ensuring your software works on various architectures, you expose subtle bugs in the ones you actually care about. Lots of 32-bit x86 code was improved during the migration to 64-bit, not because the move created new bugs, but because existing ones (i.e. code that relied on undefined behavior) couldn't get away with it anymore.
I couldn't name a major corporation that uses NetBSD on their servers or routers. (Yahoo used to use FreeBSD servers, but even they migrated to Linux.)
Is there a major router vendor or something else that uses NetBSD in a big way?
I wouldn't call them bugs. If the binaries worked correctly on x86 due to compiler specific guarantees, then the code wasn't buggy. It just wasn't written for a generic C or C++ compiler.
Undefined behavior is not a compiler specific guarantee. UB can change based on almost random factors, especially between newer releases of the same compiler. They are bugs, they were just masked.
This honestly depends on what undefined behavior we are talking about. Sometimes it will be guaranteed to behave a certain way on a given compiler. A few cases will also be the same across compilers if you're compiling for the same architecture.
However, I do agree that cross compiling is good for finding bugs like this. And really, if we are letting the compiler or architecture define undefined behavior, I find it better to break out the inline assembly. It's explicit that this code is platform dependent, and it avoids the risk that a subtle change in the future causes it to break.
Although it's usually possible to define what you're attempting in C without issue, and I only ever find I am doing such a thing if there is a good reason to use a platform specific feature. Generally, relying on how a compiler handles uninitialized memory and similar is not what I call a compelling platform specific feature. Cross compiling is good in this regard because it forces everyone working on a project to avoid those things.
That's at least unnecessarily splitting hairs and possibly missing the point, considering that some compilers allow you to turn undefined behaviour into implementation-defined behaviour using an option. -fwrapv comes to mind.
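Concretely (a contrived example), -fwrapv is the difference between the compiler being allowed to delete this overflow check and having to keep it:

    /* Under the standard, signed overflow is undefined, so at -O2 the compiler
     * may assume x + 1 > x always holds and fold this function to "return 0".
     * With gcc/clang's -fwrapv, signed overflow is defined to wrap, so the
     * check has to survive and returns 1 for x == INT_MAX. */
    int will_overflow_on_increment(int x) {
        return x + 1 < x;
    }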
Not really. Undefined means that no purposeful, explicit behavior for handling has to occur even within a specific implementation, which means things can blow up randomly just from changing some compiler settings or minor things in the environment (or even randomly at runtime). E.g. running out of bounds of an array in C is a perfect example of undefined behavior: no guarantee on what occurs from run to run. Yes, obviously time doesn't stop dead and something happens, but I think that stretches any meaningful definition of "handled".
True, undefined behavior can be implementation defined but that is not a requirement, and it usually is not.
If the compiler defines a behavior for some UB, then it's no longer UB. It's been defined for your implementation. It might still be undefined for another implementation but that doesn't mean your code is buggy on the first one.
No, it does not. It's still UB. UB is defined by the standard, not by your compiler's implementation. Certain behaviors may be implementation defined by the standard, those can be defined by your compiler.
But if the standard says it's UB, it's UB. End of story.
Where/how do you obtain such confidence in something so wrong? The standard not only doesn't prohibit the implementation from defining something it leaves undefined (surely you don't think even possible behavior becomes invalid as soon as it is documented??), it in fact explicitly permits this possibility to occur -- I suppose to emphasize just how nuts the notion of some kind of 'enforced unpredictability' is:
> Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner...
In my experience with porting stuff, sometimes the bugs exposed by ports are not along the lines of "always works on x86, always fails on ARM". In a lot of cases it fails on both, with different frequencies, but maybe the assumptions are broken sooner or more often on another platform.
There's a world of difference between working correctly on x86 and appearing to work correctly on x86. Sometimes the difference has serious security implications.
If a program manages to avoid the entire maze of deathtraps, the C standard calls it strictly conforming. I doubt anything commonly used today could qualify.
Even on NetBSD, my old love, you cannot take your program from the x86 machine, pack it up and then run it on ARM. You will have to cross-compile and hope it works.
Debugging on different platforms is great. But when it comes to deployment, you probably want to choose the one you know the best, and that's probably your dev platform.
Question is: when will development not occur locally at all? Is it possible that in a near future you actually develop directly in the cloud, on your own development instance directly? When this happens the cpu architecture of your laptop is irrelevant. It will just be a window to the cloud.
Well, unless you're hacking on kernel code, making your production environment exactly as your development one is trivial. Just develop remotely. This isn't a part of Linus's calculus because, for him, developing remotely is unthinkable.
A low level piece of code written in C seems likely to have fewer portability issues than something sitting on top of many layers of abstraction. The thorny problems that show up when deploying to an environment that's not identical to the development environment are often the result of unexpected interactions in the stack of dependencies. This is why containerization is a thing.
Platform portability issues have got easier with better adherence to standards and where you have largely the same code running across different ISAs (and no endianness issues between x86 and Arm) but the popularity of things like Docker suggest many devs do care about reproducible production environments.
It seems likely that, simply, times have changed. There was a time when being on the same platform as the deployment environment was super important, but nowadays the tooling has gotten so much better that it matters a lot less. The proportion of people still writing C code on a day to day basis has dropped... well to pretty much a rounding error.
The bigger issue is really that ARM servers aren't that much cheaper than x86 servers today, and it's very likely a lot of that difference in cost is just Intel's synthetic market advantage that would disappear if ARM actually started becoming a threat (which has already started happening due to AMD becoming a threat). Phoronix did a synthetic benchmark of AWS's A1 instances versus the C5 Intel and C5A AMD instances [1]; they're nothing special at all, even with price taken into account.
Maybe that'll change in the future, but now that AMD is in a competitive state, that's pushing Intel into high gear, and it's hard to say that ARM will have any effect on the server market in the short term.
> There was a time when being on the same platform as the deployment environment was super important
Which is also interesting because there was a time before that when being on the same platform as the deployment environment was sometimes considered nigh impossible, such as the early days of the "microcomputer" revolution, when a lot of software was written on big iron mainframes to run on much more constrained devices (C64, Apple II, etc). It's interesting to compare the IDEs and architectures of that era and how much cross-compilation has always happened. There doesn't seem to be a lot of computing history where the machine used to build the software was the same machine intended to run the software; it's the modern PC era that seems the unique inflection point where so much of our software is built and run on the same architectures.
(A lot of the modern tools such as VMs like the JVM and CLR are because of the dreams and imaginations of those developers that directly experienced those earlier eras.)
It's interesting how that tide shifts from time to time, and we so easily forget what that was like, forget to notice the high water marks of previous generations. (Even as we take advantage of it in other ways, we cross-compile to mobile and IoT devices today we'd have no way to run IDEs on, and would rather not try to run compilers directly on them.)
I know some software was written on minis to run on 8-bit computers, but I have a hard time imagining that was the norm. My Apple II dev rig was two computers, one running development tools and one to test, and they were two because running my software on the development machine wasn't possible without rebooting, and loading all the tools took 30 seconds - a painful eternity in Apple II terms.
As confirmed by multiple interviews in RetroGaming Magazine, almost every indie that managed to get enough pounds to carry on with their dream invested in such a setup when they started going big.
For consoles, it's natural - they don't have any self-hosted development tools and the machine you write your code with is largely irrelevant. Early adopters also benefit from the maturity of the tools in other platforms for the time before native tools are developed.
This may be more common in game studios, but was not mainstream in other segments.
My understanding is that there's a pretty good proprietary JVM for ARM (optimising JIT and all), but that the FOSS stuff (including OpenJDK) is well behind, and as you say, can be expected to perform nowhere near as well as the AMD64 counterpart.
> Cross platform in theory, not so much in practice.
Optimistic that the OpenJDK folks would rise to the challenge if there was anything to play for. Writing a serious optimising JIT for modern ARM CPUs would doubtless be no small task, but wouldn't be breaking the mould. I believe it's a similar situation for RISC-V, currently.
(Googles) But wait, there's more: 'Graal'! Shiny new JIT engines are on the way, and ARM support is in there. Hopefully they'll perform well. [0] [1]
Android uses Dalvik, not JVM. Language is Java, standard library is mostly Java-compatible, but runtime is different. And I'm talking about server loads, I don't think that Dalvik is very good for those tasks (but I might be wrong, it's an interesting question).
Falls squarely into the 'cute but pointless' category.
Java is intended to be used by optimising JVMs. Java bytecode is rarely optimised -- that's left to the JIT. Using the Jazelle approach, where is the compiler optimisation step supposed to occur? Nowhere, of course! You'd be far better off with a decent conventional ARM JVM.
If you're on a system so lightweight that this isn't an option, well, you probably shouldn't have gone with Java. (Remember Java ME?)
[Not that I've ever actually worked with this stuff, mind.]
It's still the case that environments should be as close as possible. It's easier to achieve now because the number of environments have shrunk significantly.
Nowadays you will be running on a CentOS/Debian server or a Windows desktop, on an AMD64 compatible CPU. Not so long ago, there were tens of Unix and Linux variants with significant differences. It was impossible to support half of them.
> but nowadays the tooling has gotten so much better that it matters a lot less
I think that that's the point. Portability to platforms with a strong tooling and usage base even in a different sector is ok and safe. The problem is when you try to do something like x86 -> Itanium or alike, that could take some time to stabilize.
I don't think you are really disagreeing with Linus - he's not saying ARM is not viable - he is saying it will not win. With your current setup (cross-compiling), are your ARM executables more performant than x86? Or do they have any other advantage at all over x86? Without an advantage, ARM can't possibly win.
Having a cheap, viable ARM-native development platform drastically increases the chances of ARM-only killer apps to exist, this would be an advantage over the currently dominant x86 (just as there were Windows-only and Linux-only killer apps that cemented their ascent). However, if everyone is cross-compiling due to the cost, it means ARM will always be a secondary platform (at most)- it can't win by being the Windows Phone of platforms.
He's not saying it won't win, either. He's just saying that for it to win, it needs a viable dev platform. Which, if you reverse cause and effect, is blatantly obvious.
If ARM comes anywhere close to viable enough to be "winning", there will be a good market for dev platforms, and somebody will step in and fill the need. Heck, some are even arguing here that the Pine64 already meets that need.
I'm definitely on the side of ARM(and RISC-V and other new architectures for that matter) getting "wins", because the modern environment is displaying the signs of a low-layer shakeup:
* New systems languages with promising levels of adoption
* Stablization and commodification of the existing platforms, weakening lock-in effects
* Emphasis on virtualization in contemporary server architectures
* "The browser is the OS" reaching its natural conclusion in the form of WASM, driving web devs towards systems languages
All of that produces an environment where development could become much more portable in a relatively short timeframe. It's the high friction of the full-service, C-based, multitasking development ecosystem that keeps systems development centralized within a few megaprojects like Linux. But what is actually needed for development is leaner than that, and the project of making these lower layers modernized, portable, and available to virtualization will have the inevitable effect of uprooting the tooling from its existing hardware dependencies, even when no one of the resulting environments does "as much" as a Linux box. The classic disruption story.
We have to define what "winning" means here. Google uses POWER9 and specialized GPU-like chips for some workloads. All cloud providers can gain from being able to offer products that perform better or have lower prices than it would be possible with x86.
Right now ARM probably outnumbers x86 in the number of machines running Linux by a very large margin. In my backpack there is one x86 machine and two ARM ones, and that doesn't count the one that's in my hand.
It all depends on what chips become available at what price. All cloud providers do lots of hardware design for their own metal. If they tell you that their next data center will be primarily ARM, they create a market for a million-unit run of whatever CPU they choose.
I have to agree with LT, but not on technical issues. This is a question of business issues. You can look at many facets of different cases, but they all distill into "Path Dependent Behavior". Sometimes it is called "Baby Duckling Syndrome", but the point is that the leader in a market segment is much better equipped to respond to, and outpace, competitors.
Redis is a popular program; it makes sense to spend the time to port it. But what you're missing is the long tail: thousands of little programs scattered around - or edge cases in bigger ones.
Working on a non-x86 platform makes you a second class citizen, you will experience issues that others have already ironed out on x86. Software has a long tail of niche code not actively maintained but still heavily used. It doesn't make sense to switch to ARM there.
I agree with you. For many users of the cloud -- especially serverless application developers -- the actual hardware on which their software runs is a black box. In fact, if the serverless application consists solely of scripts or precompiled bytecode and doesn't contain architecture-specific binary code, it could likely run on arbitrary hardware with any supported ISA and users wouldn't know the difference.
I think it ultimately depends on how much ARM/RISC-V's price/performance ends up being better than x86.
If it's not much better then people will not switch due to these small annoyances, and there doesn't seem to be any fundamental reason for it being much better (Intel and AMD are perfectly capable of producing top-performing x86 CPUs, and the architecture should not matter much).
I guess the situation is similar to containers and static linking, which got popular for server deployments quite recently. They allow you to develop, test and deploy your application with its dependencies in the exact same version - the exact same binary. Although usually this works just fine even with minor differences in the dependencies, a lot of developers deem it worthwhile to preclude such issues. If you now use ARM in production, you can't use the same binary file anymore.
There might be different implementations depending on the architecture in some library you use. Also even with higher-level languages like Java it is possible to observe ISA differences: e.g. memory ordering.
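A contrived sketch of that last point (not from any real codebase): the classic publish/consume pattern that tends to "work" on x86's strong memory model and fail intermittently on ARM's weaker one, plus the portable fix with C11 atomics:

    /* Two threads, no synchronization: a data race and technically UB, shown
     * only to illustrate the architectural difference. On x86 (TSO) the two
     * stores in publish() are not reordered with each other, so the bad case
     * is rarely observed. On ARM the hardware may reorder them, so a consumer
     * can see ready == 1 while data is still 0. */
    #include <stdatomic.h>

    int data;
    int ready;

    void publish(void) {
        data = 42;
        ready = 1;           /* may become visible before data on ARM */
    }

    int consume(void) {
        if (ready)           /* racy read */
            return data;     /* can legitimately be 0 on ARM */
        return -1;
    }

    /* Portable version: release/acquire atomics work on both architectures. */
    atomic_int a_data;
    atomic_int a_ready;

    void publish_safe(void) {
        atomic_store_explicit(&a_data, 42, memory_order_relaxed);
        atomic_store_explicit(&a_ready, 1, memory_order_release);
    }

    int consume_safe(void) {
        if (atomic_load_explicit(&a_ready, memory_order_acquire))
            return atomic_load_explicit(&a_data, memory_order_relaxed);
        return -1;
    }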
I had the pleasure (?) of working on a C/C++ codebase that compiled on Windows and ten different flavors of Unix. It was all "portable", but all over the place there was stuff like
    #if defined AIX || defined OSF1
        short var;
    #else
        int var;
    #endif
And to get it right, you had to compile it on all the platforms and fix all the errors (and preferably all the warnings).
Yeah, cross platform is never as simple as same platform.
How many years ago was that? POSIX compliance has come a long way and most of the proprietary vendors (the ones with all the corner cases) are gone. These days not only do platforms like AIX and Solaris have much fewer corner cases, they're even adopting Linux and GNU extensions wholesale. Anyhow, most people can ignore these altogether. Portability between Linux and the BSDs is much easier. macOS is the biggest outlier in terms of corner cases yet in many ways the best supported thanks to the popularity of Homebrew.
C++ is a different matter, but C++ portability is a headache even if you stay on Linux. Likewise, trying to maintain OS-level portability of monolithic codebases between Windows and Unix is a fool's errand, which is why Windows Subsystem for Linux (WSL) is likely to only get better.
How long ago? 15 years - and it was at least a decade-old code base. But I had to take some new code that was something like Windows-and-Sun-only, and port it to run on all of the other architectures.
Yes, cross-platform development is more work, but that example you gave is the wrong way to do it. It's better to abstract the differences using typedefs, functions, macros, etc., and keep the platform switches in a few isolated places in the code.
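For example, the grandparent's snippet with the switch pulled into a single header (names here are made up):

    /* platform_types.h -- keep the #ifdef soup in exactly one place. */
    #ifndef PLATFORM_TYPES_H
    #define PLATFORM_TYPES_H

    #if defined AIX || defined OSF1
    typedef short legacy_counter_t;   /* hypothetical name for "var"'s type */
    #else
    typedef int legacy_counter_t;
    #endif

    #endif /* PLATFORM_TYPES_H */

    /* Everywhere else in the codebase there's no per-platform #if at all:
     *
     *   #include "platform_types.h"
     *   legacy_counter_t var;
     */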
It's likely that ARM will find a place in the cloud. But, at least at first, it will be in some specific parts of a cloud offering.
I don't see ARM displacing x86 on VM offerings like EC2 any time soon; an ARM offering will exist (and it already exists in fact), but it will remain a small portion.
However, some parts of a cloud offering are completely abstracted from the hardware: DNS, object stores, load balancers, queues, CDNs... For these, from the point of view of a developer, CPU architecture doesn't matter at all, and if the cloud provider finds it more interesting to use ARM (maybe with some custom extensions), it will probably switch to them.
From there, it can gradually go to services where architecture kind of matters, but not necessarily, like serverless, or Postgres/MySQL as a service.
And while it grows, ARM CPUs will improve for other use cases, and maybe overtake X86 VMs.
The other possibility is a massive cost reduction like 3 to 4 times cheaper for equal performances, but it's not really the case right now.
Also, given all the wasted money I've seen on AWS ("app is leaking memory? just use a 64GB instance"), I'm not sure it's a good enough incentive. However, we are specialists at being penny wise and pound foolish.
I think more people need to have a kernel hacker mindset. We are getting to the point where everyone is working so high level that they just assume you can swap out the distro/kernel/arch and it will not only "just work" but will work exactly the same way as your home system. With simple applications (not attempting to optimize or really push the limits of your system) you might not run into issues initially, but when you try to push things you'll inevitably run into problems.
The fact that we're talking about ARM makes this even more important. You're having to compete against x86, which requires increased core counts, and a lot more optimization and potentially even redesigns of your software for your higher level environment to be perceived as equal to x86. Businesses will need this: your boss will ask if ARM is as fast as x86, and they won't care to quibble about technological differences if you can't just get the same output speeds as their old, trusted hardware. There is only so much your language can do to cover your butt. At some point you'll have to be aware of your environment to compete.
I'm a web developer, and for years now I've been developing on either Mac or Windows, but deploying to Linux servers. Or possibly totally different kinds of servers. I don't care much about the architecture of the server as long as it runs my code. Give me Apache, JVM, node, and the necessary build and deploy tools, and there's nothing I can't run on it.
Linus said "the cross-development model is so relatively painful". What you described is a relatively painful effort of chasing the word alignments. Wouldn't it be better not to worry about alignments at all and spend your time on something more productive? I think this is his point - if non-productive effort can be avoided, people will avoid it.
It might be easy to do that in a well written single C/C++ codebase like Redis.
It's not simple nor something devs will ever want or care to do in a big web app with several binary dependencies.
Just consider that a single Node app's binary deps could trivially include the entirety of Chrome itself, not just in the form of Node's v8 engine, but e.g. as the PDF rendering "headless chrome" wrapper Puppeteer.
And that's just the tip of the iceberg, add DBs, extensions, Python backend scripts, etc etc, and few will bother.
When developing for iOS you typically interact with the iOS simulator on your desktop, which natively compiles your app against x86 versions of the mobile frameworks. True native iOS development is pretty rare, and more painful. Overall, iOS development is a delightful experience because there's a singular hardware target and Apple pretty much nails the execution.
For Android development, on the other hand, you don't have a good simulator, and the out-of-the-box dev experience relies on an x86 emulator of the ARM environment. In practice this means that in your day-to-day Android development, you're running the compile-run-test cycle by looking at your actual ARM device all the time, because the emulator is dogshit. I wouldn't really call it cross-platform development in any traditional sense, it's more like remote development, and a bad experience.
> For Android development, on the other hand, you don't have a good simulator, and the out-of-the-box dev experience relies on an x86 emulator of the ARM environment. In practice this means that in your day-to-day Android development, you're running the compile-run-test cycle by looking at your actual ARM device all the time, because the emulator is dogshit. I wouldn't really call it cross-platform development in any traditional sense, it's more like remote development, and a bad experience.
This hasn't been true for years. The emulator shipping with Android Studio uses an x86-based image, and it's very, very fast as a result.
Android's emulator even has quite a few more features than iOS's simulator, such as mock camera scenes so you can even develop apps that rely on the camera on the emulator.
If anything these days the Android emulator soundly trumps the iOS simulator on all interesting metrics except maybe RAM usage. But, critically to Linus' argument, they both use the same architecture as the development machine.
> When developing for iOS you typically interact with the iOS simulator on your desktop, which natively compiles your app against x86 versions of the mobile frameworks.
And the fact that the "develop on x86, test on ARM" workflow works so smoothly on iOS is strong evidence that Linus is wrong.
Who's going to make the "develop on x86, deploy on server-side ARM" experience smooth? It certainly isn't today. Who has that kind of control of the entire stack top to bottom? Amazon is the only one that comes to mind... but I wouldn't bet on it.
I think it's a smooth enough experience, yours notwithstanding. It's just that there aren't many server-side ARM options available, so we don't have much experience.
> I’ve certainly heard of bugs that the simulator doesn’t reproduce because it’s not ARM.
And that isn't enough to get people to demand an ARM emulator. In fact, Android developers hate the ARM emulator and prefer the x86 simulator—more evidence against Linus' assertion.
Hi, I've done Android development professionally for many years, over multiple apps. I don't know anyone who uses the x86 simulator for anything except out of curiosity to check it out every couple of years if it's still completely worthless. Android developers develop with an ARM phone attached by USB, and it's still an abysmal experience compared to iOS.
Depends on general confidence level and how mission critical your deployed software is. For instance, I will always prefer to run PHP, Ruby or Python on Linux servers. But on the client side, I have faith in Electron Apps to run cross-platform without issues.
I wouldn't be comfortable with an underlying architecture change to ARM for at least years to come and the usage decision would be based on general consensus on reliability that follows.
I would more comfortably run my code developed on x86/Linux on ARM/Linux than on x86/FreeBSD for instance... Platform is just one unknown and not the worst if the tooling is good IMHO. Consider that complex software under Linux/ARM now has a ten years history at least, with numbers (mobile) that are not approached by any other thing on earth.
I think that they don't care until they do. I think it's awesome that you ported Redis to ARM, but that is your software. If my node/rails app has modules with native libs unsupported on ARM, do I fix all those modules that "almost" work, replace them with other ones that already work, or just deploy to an environment I already know works? And once I've hit that issue once, will I even try it again?
Good work on Redis. Ruby, however, strikes me as pretty fragile. They can't get it running on Windows, officially, and the third-party Windows installer is hit or miss. Ruby also seems to depend on a specific compiler, gcc. I wouldn't be surprised if it has trouble on ARM.
On the other hand, it's easier to not have to think about it, even when it doesn't matter 99% of the time, and run the exact same container as you tested locally on the server.
I feel like this shouldn't matter really, but people are amazingly lazy/developer time valued highly.
Linus' point is that there are hardly any developer-class ARM machines available, just RaspberryPi-class SBCs that use mobile SoCs with the performance of a 2012-vintage smartphone, nothing with the grunt of a Qualcomm Centriq or Cavium ThunderX.
"I often find black-and-white people a bit stupid, truth be told" - Linus Torvalds, 2005
I think maybe Linus T. is getting old, out of touch, and closed-minded, and I think we should be open to change and care less about every random thought he blows off.