Do I really want to be debugging why node-gyp fails to compile scrypt on the ARM distro on the new Amazon A1 ARM instance (which it did in my case)? And if I solve that, what about the other 2451 dependencies? Let's pessimistically say there's a 1% failure rate: that's still a couple dozen more of these to chase down, so I'll be stuck doing that forever! Nah, I'll just go back to my comfy x86 instance, life's short and there's much code to write :)
I think I'll side with Linus on this one. I saw first-hand how non-existent the x86 Android market was, despite Intel pouring megabucks into the project. If the developers don't run the same platform, it's not going to happen, no matter how great the cross-platform story is in theory. Even if it's as simple as checking a box during upload that "yes, this game can run on x86", a huge chunk of the developers will simply never do that.
The good news is that Clang can cross-compile fairly easily. Much better than gcc.
The bad news is that there are a surprising number of missing libraries on Ubuntu/ARM64. For example, Eigen3. And although the code is fairly compatible, there's some extra cognitive load in learning to debug on ARM. For example, calling a virtual function on a deleted object crashes differently.
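To make that concrete, here's a toy sketch (not my actual case, just the shape of it): a virtual call through a dangling pointer is undefined behavior, so the failure mode is whatever that platform's memory layout and vtable dispatch happen to produce.

    #include <cstdio>

    struct Base {
        virtual void ping() { std::puts("pong"); }
        virtual ~Base() = default;
    };

    int main() {
        Base* b = new Base;
        delete b;
        // UB: virtual dispatch through a dangling pointer. Whether this
        // segfaults immediately, jumps somewhere bogus, or "works" differs
        // between x86 and ARM builds.
        b->ping();
    }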
I'm willing to put up with it for ARM's advantages in battery-powered applications, but I wouldn't just to save a few bucks on cloud servers.
Unfortunately C++ code is only portable if it is free from undefined behavior. Fortunately there are many tools now to debug these kinds of errors: https://blog.regehr.org/archives/1520
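For example, a signed overflow like the one below compiles silently but is UB; building with -fsanitize=undefined (clang or gcc) makes it report at runtime. A minimal sketch:

    // build: clang++ -fsanitize=undefined -g overflow.cpp
    #include <climits>
    #include <cstdio>

    int main() {
        int x = INT_MAX;
        // Signed overflow is UB: the optimizer may assume it never happens,
        // so untested targets can behave differently. UBSan flags it here.
        std::printf("%d\n", x + 1);
    }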
How so? The complications in cross compiling come from setting up all the system libraries, not the compiler and linker.
Setting up the system libraries is relatively easy: make a copy of the target you're building for (assuming it has a setup for compilation) and use --sysroot=/path/to/target.
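A minimal sketch of that workflow with clang; the target triple and sysroot path are placeholders for whatever your target actually is:

    // hello.cpp -- cross-build smoke test. Assuming /srv/arm64-rootfs is a
    // copy of the target's filesystem (its /usr/include, /usr/lib, ...):
    //
    //   clang++ --target=aarch64-linux-gnu --sysroot=/srv/arm64-rootfs \
    //           -fuse-ld=lld -o hello hello.cpp
    //
    #include <cstdio>
    int main() { std::puts("hello from the target"); }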
Unless you are deeply disconnected from the hardware, CPU architecture does matter. Most developers I know using MacBooks have VMs for either Windows or Linux work.
It might be conceivable to use the ARM port of <insert your Linux distribution here>, or the ARM version of Windows 10. But it would also require a good desktop virtualization solution for ARM. If Apple releases its own solution, or partners with, say, VMware to release something at the same time an ARM MacBook comes out, it might work, but barely, and I'm not sure developers are at the top of Apple's priority list. In the end, with the Linux subsystem in Windows, choosing Windows as a development platform might make more sense.
As a side note, if Apple switches to ARM, I foresee a lot of cringing. The switch from PPC to Intel was annoying for a lot of Apple users at the time, but there were a lot of goodies (Boot Camp, better battery life, better performance, VMs) that kind of made up for it. Basically, nothing was lost in terms of possibilities, a few were actually gained, and only the transition was a bit painful. With a switch to ARM, you may gain a little in terms of battery life, but with the MacBook Pro already at 8+ hours (~a work day), I'm not sure it's a game changer; at best you will stay the same in terms of performance, and you will lose compatibility with other OSes.
My feeling is it will be net zero as far as ARM servers are concerned until the hardware is made and is viable. Perhaps Apple ARM laptops will help with marketing ARM as a viable option, but we already develop on OS X in order to deploy to Linux without any great calling for OS X servers.
Cloud server “hardware” has also drifted from what you see in real hardware. There are numerous times in my career I’ve had to explain to developers of all experience levels that their SSD is several orders of magnitude faster than the tiny EBS volume they’re reading from and writing to.
In short, I think architecture mismatch just isn’t that important to most Web App oriented companies. My girlfriend works at a travel industry giant and they’re at the opposite end, putting IBM mainframes to good use. They don’t have much use for Macs and most of their developers seem to be on Windows instead of anything resembling what they’re deploying on. For the segment of our industry that does care, they’ll have options and will choose them regardless of what Apple, Google, and Microsoft do with ARM, Intel, Power, MIPS and other architectures.
Was the PowerBook as heavily used as a developer laptop as the MacBook is today?
(or Power Mac vs desktop PC as desktops were more common at the time).
I was not in the industry at the time (2000 - 2006) so I don't know the answer.
I'm not sure the architecture mattered as much as that did.
Not all Mac users are devs.
It's not that all users are devs. It's that all devs might not be able to make their software work well under that environment.
I think you will see a big performance boost after switching to ARM. If they start on the "low end", then a MacBook will be practically on par with a MacBook Pro. This might not be useful at first for native development, but I am quite sure that macOS, iOS and web development will be very much possible on these machines - the three domains that Apple cares most about.
A battery lifetime of 8 or 12 hours is plenty, and going beyond that isn't much of a marketing strategy unless it gets to 24h+ or something. A lower-weight approach, however, would also mean a lower BOM for Apple, and more profit, while being able to shout "1/3rd lighter!" - and that's an easy sell :)
That's for servers and scientific software (and perhaps 3D and such).
For regular devs the massive core count helps even with non optimized apps, because unlike the above use cases, we run lots of apps at the same time (and each can have its core).
Linus’s prediction is based on the premise that everyone will continue to use x86 for development. But that’s probably not going to be the case for long. Multiple sources have leaked the rumour that Apple will release an ARM MacBook next year. And I wouldn’t be surprised if Microsoft has an ARM Surface Book in the wings too.
The majority of people in the world writing code are using x86 PCs and Microsoft and Apple aren't about to change that with any *Book.
Linus' premise that everyone will continue to use x86 for development is because they will.
There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.
I can foresee _some_ interest in ARM cloud, and I don't think it'll be the issue Linus claims at higher-than-systems-level, but I absolutely would bet on x86 going nowhere in the human-software interface space in the foreseeable future.
For some reason Macbooks seem disproportionately represented amongst web developers. All the agencies I know in Sydney and Melbourne are full of macbooks.
> There's no incentive for companies or individuals to go switch out all of that x86 hardware sitting on desks and in racks with ARM alternatives which will offer them lower performance than their already slightly aged hardware at initially higher costs.
Uh, why are you assuming ARM laptops will have lower performance and a higher cost compared to x86 equivalents? The ARM CPU in the iPad pro already outperforms the high end intel chips in macbook pros in some tests. And how long do you think Apple will continue to sell intel-based macbooks once they have ARM laptops on the market? Maybe they'll keep selling intel laptops for a year or two, but I doubt they'll keep refreshing them when new intel processors come out. When Apple moved from powerpc to intel they didn't keep releasing new powerpc based laptops.
Once web development shops start buying laptops with ARM chips, it will be a huge hassle if random nodejs modules don't build & work on ARM. At that point I expect most compatibility issues will get fixed, and that will in turn make deploying nodejs apps on ARM a more reasonable choice.
Obviously we'll see, and this is all speculation for all of us. But I think it's a reasonable path for ARM chips to invade the desktop.
Apple have 100% control over every part of their hardware and software from the get go, so it's inevitable they perform excellently on that hardware; they can optimise their code to death, and increment the hardware where it can be improved upon.
Web developers make up a fairly small proportion of the developers I've ever worked with; I have worked for software houses where web just isn't a thing for us other than for our own marketing. None of these people run Macs, they all run PCs, and these PCs don't have the same control in their hardware/software process that will bring about the kind of "excellent" result you see from an iPad. They'll be relying on Microsoft to get Windows optimised, but Microsoft will be working with dozens, even hundreds of partners; Apple works with one, itself.
I suspect, also, that they'll be more expensive because of all the new development the manufacturers have to put into releasing these new ARM laptops. Microsoft will have to put extra work into Windows, which will cost money, and finally those of us who run Linux will end up with something that hasn't had the benefit of the decades of desktop development that x86 has had; thus, worse performance, at least in the beginning.
I could imagine a laptop equivalent of big.LITTLE, where you have x86 cores for the real grunt work and ARM cores for power saving, but I don't see pure ARM in the workstation space.
It'll be an interesting time, but based on my own experience, I'm betting on Linus with this one and I don't see myself or my colleagues or my workplace moving to ARM anywhere outside of non-laptop-portables any time soon.
It took 4 years of concerted effort to get most node things working right on Windows.
To really see the benefit of changing they would need to add a lot of cores, and then cross their fingers that 3rd party app developers know how to do true multi-core development.
If that team were to design a bigger core aimed at laptops, then I wouldn't be surprised if they could make it competitive.
Doesn't that refute Linus' argument, not strengthen it? Almost all Android developers develop on x86. Intel thought, as Linus apparently does, that this would drive adoption of x86 on phones. It didn't.
Intel even got competitive in power efficiency and it wasn't enough to save them. In fact, I remember folks on HN predicting the imminent death of ARM all the way up to Intel throwing in the towel.
I think Linus is wrong here. His argument made sense in the '90s, but it's 2019. The ISA just doesn't matter that much anymore.
The primary customer of servers is developers who care less about cost and more about time to market.
That single instance you're running on already took half a dozen or so systems developers (and more) before it got to you, so in your example you're in the minority.
It's because of the work they've done, that you can not care about the architecture you're running on, not in spite of them.
Linus' argument is that x86 stays on top because everyone is developing with x86 at home. It's much less convincing to argue that x86 will stay on top because the people writing JVMs use x86 at home. There just aren't that many of them, and if they get paid to write ARM, they write ARM.
If AWS switches to running JVMs on ARM, and passes the cost savings onto me, I'd be in no position to argue.
In the second case, Amazon is still "passing on the cost savings" in a sense, it's just that now they take a higher profit regardless.
To break through any floor requires a disruptive change in architecture (CPU or otherwise).
Only if you value your time at zero.
Those bare-metal servers are basically 1:1 what you are developing on. I can install an instance of my application on them in minutes. It's AWS that takes significantly more time to set up and learn.
Most people using AWS are spending big bucks on an 'automatically scaling' architecture (that never just works) that will cost them many thousands of dollars a month, when they could have comfortably fit on a 30-bucks dedicated server. You can pay a dedicated system administrator to run your server (let's not kid ourselves, you probably just need one server) and still save money compared to AWS. With AWS you're not only paying Amazon, you're probably also paying someone who will spend most of their time just making sure your application fits into that thing.
Take my use-case for example: I can run my entire site on about 8 dedicated servers + other stuff that costs me ~600-700 euros a month. Those just work month after month (I rarely have to do anything). Just my >400TB of traffic would cost me 16,000 bucks/month on AWS. I could scale to the whole US population for that money if I spent it on dedicated servers instead and just ran them myself.
If bandwidth is your highest cost, that's a completely separate problem that likely requires a CDN. Neither x86 nor ARM is going to reduce that cost.
We serve 200+ TB/month, and no, we didn't just forget to use a CDN ◔_◔ Those cost money, too.
For us, cloud is about double - $10k/month more - than dedicated boxes in a data center. I've run the same system in both types of environments for years at a time.
For us, cloud is basically burning money for a little bit of additional safety net and easy access to additional services that don't offer enough advantage over the basics to be worth taking on the extra service dependency. It's also occasional failures behind black boxes that you can't even began to diagnose without a standing support contract that costs hundreds or more a month. Super fun.
High bandwidth and steady compute needs is not generally a good fit for cloud services.
And no Cloudflare's cheap plans are not an option, they'll kick you out.
For servers, developers are often the customers of their own software.
But that is just the first step. You then need developers who write applications on top of those languages to be multi-core aware and design their applications to fully use the huge number of cores. At that point you'll lose a lot of your power efficiency, because you'll need a lot more hardware running to do the same tasks. You'll also need developers who know how to think in an extremely multi-core way to get the extra performance boost.
RPis are built to a price and don't have the best CPUs that ARM can offer. A better comparison would be Apple's A chips.
Not sure if this is sarcasm or not, but if your project has that many dependencies, no wonder it is hard to port anywhere.
Can you not develop on an ARM emulator? Or just buy an ARM machine for dev work?
I was interested in trying the state of server-side ARM for my mostly-interpreted language, and I pretty much immediately found that it doesn't Just Work. I had a vision of spending many hours searching, creating and +1:ing GitHub issues and tracking discussions around "why package X doesn't work on ARM", with the developers saying at best "happy to accept patches" (which, btw, is the standard answer to "why doesn't package X work on Windows", and why you don't want to develop with Node on Windows to this day despite all of Microsoft's ecosystem work). Nope, not worth it.
I'm not interested in supporting ARM just for its own sake. A 30% discount on the cloud instances is also not nearly enough for me or my team of developers to be spending any significant amount of time on this, solving problems unrelated to our core business.
Let's see again in a few years. Of course, if ARM development machines become mainstream by way of Apple, then the calculation changes completely.
That's the biggest chance for ARM: having the notebooks/desktops that are good or even better for most potential users.
It's very easy to disagree with him, because the server market doesn't work the way he thinks it does.
Google, Amazon, Microsoft and Facebook collectively purchased 31% of all the servers sold in 2018. The market for server hardware is dominated by a handful of hyperscale operators. The "long tail" is made up of a few dozen companies like SAP, Oracle, Alibaba and Tencent, with the rest of the market practically representing a rounding error.
These customers are extraordinarily sensitive to performance-per-watt; for their core services, they can readily afford to employ thousands of engineers at SV wages to eke out small efficiency improvements in their core services. They aren't buying Xeon chips on the open market - they're telling Intel what kind of chips they need, Intel are building those chips and everyone else gets whatever is left over. If someone else has a better architecture and can deliver meaningful efficiency savings, they'll throw a couple of billion dollars in their direction without blinking.
This is not theoretical - Google are on the third generation of the TPU, Amazon are making their own ARM-based Graviton chips and Facebook now have a silicon design team led by the guy who designed the Pixel Visual Core. It's looking increasingly certain that Apple are moving to ARM on the desktop, which further undermines the "develop at home" argument.
ARM won't win the server space, because nobody will win the server space. With Moore's Law grinding to a halt, the future of computing clearly involves an increasing number of specialised architectures and instruction sets. When you're spending billions of dollars a year on server hardware and using as much electricity as a small country, using a range of application-specific processors becomes a no-brainer.
All of this costs money, in terms of either having more developers/testers or having longer development time. So, to justify this investment, the second platform must be way cheaper, enough to cover the costs of the extra developers/development time. And if there is such a huge difference and the second platform works great, then why still support the first platform at all? Ditch it, and you will save yourself some money.
You could be an ISV, but again, your software will be more expensive if you need to support two different platforms. Which means that your customers must be willing to pay for it. Which brings us to the same conclusion: unless there is a big saving from running software on the alternative platform, nobody will care.
Google's data centers collectively use more electricity than the state of Rhode Island, or about the same as the entire country of Costa Rica. Their electricity consumption has doubled in the last four years. At average wholesale prices, their annual electricity bill would be about a billion dollars. ARM isn't dramatically more efficient than x86 in most applications, but specialised ASICs can be orders of magnitude more efficient.
I'm not saying that nobody cares about the choice of architecture, I'm saying that major tech companies with vast quantities of servers are beginning to develop their own silicon with custom architectures and custom instruction sets, precisely because that's vastly more efficient than using a general-purpose architecture that happens to be popular in the wider software ecosystem. The fact that nobody else uses that special-purpose architecture is unimportant, because it is economically viable for them to invest in tooling and training to write and port software for these weird chips.
Are Alibaba and Tencent really in the long tail? I believe Tencent could be, since it is about a third of the size of Alibaba, but if I remember correctly Alibaba was set to overtake Google by 2019 (they had ~90% growth in 2018), and in 2018 they were already close to matching Google's cloud revenue.
I wonder if OVH is also big in the list. And Apple? Surely the server back end to service 900M iPhone users can't be small. How do they compare to, say, Google in server purchase terms?
It turned out that the power supply on their server was malfunctioning, sometimes delivering too little power, especially when taking the code paths my IP phone triggered.
The server software was built in PHP. It's not often you start looking for a bug in PHP but end up replacing a capacitor in the PSU.
My point is that even if you write in PHP, PHP is running C libraries, which run as machine code, which runs on hardware, and every part of this chain is important. There's no such thing as "works everywhere"; it's just "has a very high chance of working everywhere".
(off-topic) thanks for the sds library. I'm a heavy user of it.
About SDS: glad it is useful for you!
Of course it didn't hurt that x86 quickly became the price/performance leader for servers, but he makes a good case that this will continue for at least the near future.
Sure portability increases code quality, but at what cost to time to market which seems to be the primary concern for most developers these days?
Is there a major router vendor or something else that uses NetBSD in a big way?
Hotpoint, Pioneer, Bose, Samsung (some TVs and audio equipment), Whirlpool and many, many more.
They all use NetBSD in their firmware.
However, I do agree that cross compiling is good for finding bugs like this. And really, if we are letting the compiler or architecture define undefined behavior, I find it better to break out the inline assembly. It's explicit that this code is platform dependent, and it avoids the risk that some subtle change in the future breaks it.
Although it's usually possible to express what you're attempting in C without issue, and I only ever find myself doing such a thing if there is a good reason to use a platform-specific feature. Generally, relying on how the compiler handles uninitialized memory and similar is not what I call a compelling platform-specific feature. Cross compiling is good in that regard because it forces everyone working on a project to avoid those things.
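Roughly this pattern (a toy count-leading-zeros, just to show the shape): the platform-dependent path is explicitly fenced off, with a portable fallback next to it.

    #include <cstdint>

    static inline int clz32(std::uint32_t v) {
    #if defined(__x86_64__)
        // Explicitly platform-dependent: BSR's result is undefined for a
        // zero input, so guard that case first.
        if (v == 0) return 32;
        std::uint64_t idx;
        asm("bsrq %1, %0" : "=r"(idx) : "r"(std::uint64_t{v}));
        return 31 - static_cast<int>(idx);
    #else
        // Portable fallback for everyone else.
        int n = 0;
        for (std::uint32_t m = 0x80000000u; m && !(v & m); m >>= 1) ++n;
        return n;
    #endif
    }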
That is implementation defined, not undefined, behavior.
The spec also does mention implementation-defined behavior. However, undefined things still need to be handled.
True, undefined behavior can be implementation defined but that is not a requirement, and it usually is not.
But if the standard says it's UB, it's UB. End of story.
> Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner...
Platform portability issues have got easier with better adherence to standards, and where you have largely the same code running across different ISAs (and no endianness issues between x86 and Arm), but the popularity of things like Docker suggests many devs do care about reproducible production environments.
The bigger issue is really that ARM servers aren't that much cheaper than x86 servers today, and it's very likely a lot of that difference in cost is just Intel's synthetic market advantage, which would disappear if ARM actually started becoming a threat (this has already started happening due to AMD becoming a threat). Phoronix did a synthetic benchmark of AWS's A1 instances versus the C5 Intel and C5a AMD instances; they're nothing special at all, even with price taken into account.
Maybe that'll change in the future, but now that AMD is in a competitive state, that's pushing Intel into high gear, and it's hard to say that ARM will have any effect on the server market in the short term.
Which is also interesting, because there was a time before that when being on the same platform as the deployment environment was sometimes considered nigh impossible, such as the early days of the "microcomputer" revolution, where a lot of software was written on big iron mainframes to run on much more constrained devices (C64, Apple II, etc). It's interesting to compare the IDEs and architectures of that era and how much cross-compilation has always happened. There doesn't seem to be a lot of computing history where the machine used to build the software was the same machine intended to run the software; it's the modern PC era that seems the unique inflection point where so much of our software is built and run on the same architectures.
(A lot of the modern tools such as VMs like the JVM and CLR are because of the dreams and imaginations of those developers that directly experienced those earlier eras.)
It's interesting how that tide shifts from time to time, and we so easily forget what that was like, forget to notice the high water marks of previous generations. (Even as we take advantage of it in other ways, we cross-compile to mobile and IoT devices today we'd have no way to run IDEs on, and would rather not try to run compilers directly on them.)
As confirmed by multiple interviews in RetroGaming Magazine, almost every indie that managed to get enough pounds to carry on with their dream invested in such a setup when they started going big.
This may be more common in game studios, but was not mainstream in other segments.
Games were developed on bigger systems, and uploaded into them via the expansion ports.
> Cross platform in theory, not so much in practice.
I'm optimistic that the OpenJDK folks would rise to the challenge if there was anything to play for. Writing a serious optimising JIT for modern ARM CPUs would doubtless be no small task, but wouldn't be breaking the mould. I believe it's a similar situation for RISC-V, currently.
(Googles it) But wait, there's more: 'Graal'! Shiny new JIT engines are on the way, and ARM support is in there. Hopefully they'll perform well.
Java is intended to be run by optimising JVMs. Java bytecode is rarely optimised -- that's left to the JIT. Using the Jazelle approach, where is the compiler optimisation step supposed to occur? Nowhere, of course! You'd be far better off with a decent conventional ARM JVM.
If you're on a system so lightweight that this isn't an option, well, you probably shouldn't have gone with Java. (Remember Java ME?)
[Not that I've ever actually worked with this stuff, mind.]
Nowadays you will be running on a CentOS/Debian server or a Windows desktop, on an AMD64-compatible CPU. Not so long ago, there were dozens of Unix and Linux variants with significant differences. It was impossible to support half of them.
I think that that's the point. Portability to platforms with a strong tooling and usage base, even in a different sector, is OK and safe. The problem is when you try to do something like x86 -> Itanium or the like; that could take some time to stabilize.
Having a cheap, viable ARM-native development platform drastically increases the chances of ARM-only killer apps existing, which would be an advantage over the currently dominant x86 (just as there were Windows-only and Linux-only killer apps that cemented their ascent). However, if everyone is cross-compiling due to the cost, it means ARM will always be a secondary platform (at most) - it can't win by being the Windows Phone of platforms.
[edited for clarity]
If ARM comes anywhere close to viable enough to be "winning", there will be a good market for dev platforms, and somebody will step in and fill the need. Heck, some are even arguing here that the Pine64 already meets that need.
* New systems languages with promising levels of adoption
* Stablization and commodification of the existing platforms, weakening lock-in effects
* Emphasis on virtualization in contemporary server architectures
* "The browser is the OS" reaching its natural conclusion in the form of WASM, driving web devs towards systems languages
All of that produces an environment where development could become much more portable in a relatively short timeframe. It's the high friction of the full-service, C-based, multitasking development ecosystem that keeps systems development centralized within a few megaprojects like Linux. But what is actually needed for development is leaner than that, and the project of making these lower layers modernized, portable, and available to virtualization will have the inevitable effect of uprooting the tooling from its existing hardware dependencies, even when no one of the resulting environments does "as much" as a Linux box. The classic disruption story.
Why is this discussion so fixated on having a local development environment? It's 2019.
Right now ARM probably outnumbers x86 in number of machines running Linux by a very large margin. In my backpack there is one x86 machine and two ARM ones, and that doesn't count the one that's in my hand.
It all depends on what chips become available at what price. All cloud providers do lots of hardware design for their own metal. If they tell you that their next data center will be primarily ARM, they create a market for a million-unit run of whatever CPU they choose.
As do the current top two supercomputers, Summit and Sierra, among others.
Working on a non-x86 platform makes you a second class citizen, you will experience issues that others have already ironed out on x86. Software has a long tail of niche code not actively maintained but still heavily used. It doesn't make sense to switch to ARM there.
If it's not much better then people will not switch due to these small annoyances, and there doesn't seem to be any fundamental reason for it being much better (Intel and AMD are perfectly capable of producing top-performing x86 CPUs, and the architecture should not matter much).
There might be different implementations depending on the architecture in some library you use. Also, even with higher-level languages like Java, it is possible to observe ISA differences: e.g. memory ordering.
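The classic message-passing litmus test makes the memory ordering point visible; here's a sketch in C++ (Java code that's missing volatile/synchronization hits the same class of difference):

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int> data{0}, ready{0};

    void writer() {
        data.store(42, std::memory_order_relaxed);
        ready.store(1, std::memory_order_relaxed);  // no release ordering!
    }

    void reader() {
        while (ready.load(std::memory_order_relaxed) == 0) { /* spin */ }
        // x86's strong (TSO) model keeps the two stores in order at the
        // hardware level, so this only fails there if the compiler reorders;
        // on ARM the CPU itself may reorder them, so it can fail for real.
        assert(data.load(std::memory_order_relaxed) == 42);
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }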
I don't see ARM displacing x86 on VM offerings like EC2 any time soon; an ARM offering will exist (it already does, in fact), but it will remain a small portion.
However, some parts of a cloud offering are completely abstracted from the hardware: DNS, object stores, load balancers, queues, CDNs... for these, from the point of view of a developer, CPU architecture doesn't matter at all, and if the cloud provider finds it more interesting to use ARM (maybe with some custom extensions), it will probably switch to them.
From there, it can gradually go to services where architecture kind of matters, but not necessarily, like serverless, or Postgres/MySQL as a service.
And while it grows, ARM CPUs will improve for other use cases, and maybe overtake X86 VMs.
The other possibility is a massive cost reduction, like 3 to 4 times cheaper for equal performance, but that's not really the case right now.
Also, given all the wasted money I've seen on AWS ("app is leaking memory? just use a 64GB instance"), I'm not sure it's a good enough incentive. However, we are specialists at being penny-wise and pound-foolish.
I had the pleasure (?) of working on a C/C++ codebase that compiled on Windows and ten different flavors of Unix. It was all "portable", but all over the place there was stuff like
#if defined AIX || defined OSF1
Yeah, cross platform is never as simple as same platform.
C++ is a different matter, but C++ portability is a headache even if you stay on Linux. Likewise, trying to maintain OS-level portability of monolithic codebases between Windows and Unix is a fool's errand, which is why Windows Subsystem for Linux (WSL) is likely to only get better.
The fact that we're talking about ARM makes this even more important. You're having to compete against x86, which requires increased core counts and a lot more optimization, potentially even redesigns of your software, for your higher-level environment to be perceived as equal to x86. Businesses will need this; your boss will ask if ARM is as fast as x86, and they won't care to quibble about technological differences if you can't just get the same output speeds as their old, trusted hardware. There is only so much your language can do to cover your butt. At some point you'll have to be aware of your environment to compete.
It's not simple nor something devs will ever want or care to do in a big web app with several binary dependencies.
Just consider that a single Node app's binary deps could trivially include the entirety of Chrome itself, not just in the form of Node's v8 engine, but e.g. as the PDF rendering "headless chrome" wrapper Puppeteer.
And that's just the tip of the iceberg, add DBs, extensions, Python backend scripts, etc etc, and few will bother.
iOS seems like a huge counterexample (as you note.)
For Android development, on the other hand, you don't have a good simulator, and the out-of-the-box dev experience relies on an x86 emulator of the ARM environment. In practice this means that in your day-to-day Android development, you're running the compile-run-test cycle by looking at your actual ARM device all the time, because the emulator is dogshit. I wouldn't really call it cross-platform development in any traditional sense; it's more like remote development, and a bad experience.
This hasn't been true for years. The emulator shipping with Android Studio uses an x86-based image, and it's very, very fast as a result.
Android's emulator even has quite a few more features than iOS's simulator, such as mock camera scenes so you can even develop apps that rely on the camera on the emulator.
If anything these days the Android emulator soundly trumps the iOS simulator on all interesting metrics except maybe RAM usage. But, critically to Linus' argument, they both use the same architecture as the development machine.
And the fact that the "develop on x86, test on ARM" workflow works so smoothly on iOS is strong evidence that Linus is wrong.
I’ve certainly heard of bugs that the simulator doesn’t reproduce because it’s not ARM.
And that isn't enough to get people to demand an ARM emulator. In fact, Android developers hate the ARM emulator and prefer the x86 simulator—more evidence against Linus' assertion.
I wouldn't be comfortable with an underlying architecture change to ARM for at least years to come and the usage decision would be based on general consensus on reliability that follows.
I feel like this shouldn't matter really, but people are amazingly lazy / developer time is valued highly.
Source for this? Seems like pure speculation.
I'm looking forward to embedding Redis in my Android app :)
I think maybe Linus T. is getting old, out of touch, and closed-minded, and I think we should be open to change and care less about every random thought he blows off.
Quote source: https://www.linux.com/news/linus-compares-linux-and-bsds
>"This isn't rocket science. This isn't some made up story. This is literally what happened"
right? I mean, I can see arguing that going up into the cloud is different in some ways than going down to smartphones (although the high-end ones are now going to outperform plenty of old dev machines in burst power). There are certainly differences in scaling and such. But the maturity of the tech for cross development of high-level software isn't the same as it was in that era either. And if we're talking about bottom-to-top revolutions, embedded and smartphones seem to be at a lower level and much higher volume than PCs.
Finally, there is clearly a disruptive fusion event coming due to wearable displays. When "mobile" and "PC" get merged, it certainly looks like ARM is in a strongly competitive position for some big players, and having more powerful stuff up the stack will matter to them as well.
None of which is to say he won't be right at least in the short term, but it still is kind of odd to not even see it addressed at all, not even a handwave.
It goes beyond the different instruction set of course and most of the time this is indeed mostly irrelevant (unless you've arrived at processor-specific optimizations), but the "develop on the same platform you are running on" still has the least painful workflow IMHO.
I wouldn't mind an ARM-based Mac though ;)
Interestingly there's also https://stackoverflow.com/questions/50966676/why-do-arm-chip... . See also on HN front page https://www.axios.com/apple-macbook-arm-chips-ea93c38a-d40a-... "Apple's move to ARM-based Macs creates uncertainty"
(BTW the link is now slashdotted, I am using https://web.archive.org/web/20190222120214/https://www.realw... )
I won’t move to an ARM Mac, personally. I will move to Windows or Linux on x86 for all the reasons Linus gives and also for games. Sorry, but an ARM Mac may finally push me where crappy keyboards and useless anti-typist touch bars have not quite done.
NDK-level programming is explicitly only allowed for scenarios where ART JIT/AOT still isn't up to the job (Vulkan, real-time audio, machine learning), or to integrate C and C++ from other platforms.
In fact, with each Android release, the NDK gets further clamped down.
I would like a better NDK experience, in view of iOS and UWP capabilities, on the other hand I do understand the security point of view.
As long as Android allows running native code via JNI, the security concerns are void anyway. If they are really concerned about security, they would fix their development tools (just like Apple did by integrating clang ASAN and UBSAN right into the Xcode UI).
Also, Google is working with ARM to adopt the new memory tagging architecture from ARMv8.3+ in Android.
Since this work only started in Android 7, it is clamping down the free rein that existed before.
Nothing was meaningfully "clamped down" there. You can't directly invoke some obsolete syscalls anymore, and you can't invoke arbitrary syscall numbers, but nearly any actual real syscall is still accessible, and nothing indicates that it won't be.
As long as libc can do it, so can you, since you and libc are in the same security domain. And anything else an NDK library can do in your process, you can do too: you can go poke at that syscall yourself.
It'd almost always be stupid to do that instead of going through the wrappers, but you technically can.
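e.g. something like this works from NDK code (a sketch; as said, going through the wrapper is almost always the better choice):

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Bypass the libc wrapper and hit the kernel directly. You're in the
        // same security domain as libc, so nothing stops you.
        long tid = syscall(__NR_gettid);
        std::printf("tid=%ld\n", tid);
    }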
You might be confused and thinking of glibc, which is a particular libc implementation.
This is mostly setgid/setuid, mount point and system clock related stuff. Except for syslog and chroot, I see no syscalls that you should be using in a user process anyway.
So technically, this is clamping down Android, but it seems like a pretty reasonable restriction and far from a heavy handed approach.
The problem is basically everything else:
- The ever-changing build systems. And every new "improvement" is actually worse than before (I think currently it is some weird mix of CMake and Gradle, unless they changed that yet again).
- Creating a complete APK from the native DLL outside Gradle and Android Studio is arcane magic. But both Android Studio and Gradle are extremely frustrating tools to use.
- The Java / C interop requires way too much boilerplate (see the sketch after this list).
- Debugging native code is still hit and miss (it's improved with using Android Studio as a standalone debugger, but still too much work to setup).
- The Android SDK only works with an outdated JDK/JRE version; if the system has the latest Java version, it spews very obscure error messages during the install process, and nothing works afterward (if it needs a specific JDK version, why doesn't it embed the right version?).
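On the boilerplate point, even a trivial function exported to Java needs this much ceremony (the names here are made up for illustration):

    #include <jni.h>

    // Java side: package com.example.mylib;
    //            class NativeLib { static native int add(int a, int b); }
    // The exported symbol must encode package, class and method name:
    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_mylib_NativeLib_add(JNIEnv* env, jclass clazz,
                                         jint a, jint b) {
        (void)env; (void)clazz;  // unused for a pure-primitive call
        return a + b;
    }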
The Android NDK team should have a look at the emscripten SDK, which solves a much more exotic problem than combining C and Java. Emscripten has a compiler wrapper (emcc) which is called like the command-line C compiler, but creates a complete HTML+WASM+JS "program". A lot of problems with the NDK and build system could be solved if it provided a compiler wrapper like emcc which produces a complete APK (and not just a .so file) instead of relying on some obscure magic to do that (and all the command-line tools which can do this outside Gradle are "technically" deprecated).
...hrmpf, and now that I recalled all the problems with Android development I'm grumpy again, thanks ;)
> End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.
The vast majority of code is delivered as either source (python, ruby, etc) or bytecode (JVM, Scala, etc).
And the Xeon-class machines folks deploy to in data center envs are a world apart from their MacBooks.
These truths are true for Linus, but not for the majority of devs.
Even for those creating native binaries, this is done through CI/CD pipelines. I have worked in multi-arch envs: Windows NT 4 on MIPS/Alpha/x86, iOS, Linux on ARM. The issues are overblown.
Disclaimer: I'm an HPC system administrator in a relatively big academic supercomputer center. I also develop scientific applications to run on these clusters.
> Linus is mostly wrong except for HPC. Very few dev pipelines for folks result in native executables. The vast majority of code is delivered as either source (python, ruby, etc) or bytecode (JVM, Scala, etc).
Scientific applications targeted at HPC environments contain the most hardcore CPU optimizations. They are compiled according to CPU architecture, and in some cases the code inside is duplicated and optimized for different processor families. Python is run under PyPy with optimized C bindings; the JVM is generally used in UI or some very old applications; Scala is generally used in industrial applications.
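For anyone unfamiliar with how that per-family duplication is done, one common mechanism is GCC's function multi-versioning (recent clang supports it on x86 too): the compiler emits one clone per listed target and picks the best one at load time. A sketch:

    #include <cstddef>

    // Emits an AVX2 clone, an SSE4.2 clone and a baseline version of this
    // loop; an ifunc resolver dispatches once, at load time.
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double* a, const double* b, std::size_t n) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }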
> And the Xeon-class machines folks deploy to in data center envs are a world apart from their MacBooks.
No, they aren't. Xeon servers generally have more memory bandwidth and more resiliency checks (ECC, platform checks, etc.). Considering the MacBook Pro has a same-generation CPU as your Xeon server, with a relatively close frequency, per-core performance will be very similar. There won't be special instructions, frequency-enhancing gimmicks, or different instruction latencies. If you optimize well, you can get the same server performance from your laptop. Your server will scale better, and will be much more resilient in the end, but the differences end there.
> Even for those creating native binaries, this is done through CI/CD pipelines.
Cross compilation is a nice black box which can add behavioral differences to your code which you cannot test in-house, especially if you're doing leading/cutting-edge optimizations at the source code level.
the Xeons though always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance).
I always attributed that to turbo boost...
No. In the HPC world, profiling is not always done via "timing". Instead, tools like perf are used to see CPU saturation and instruction hit/retire/miss ratios. Same for cache hits and misses. For more detailed analysis, tools like Intel Parallel Studio or its open source equivalents are used. Timings are also used, but for scaling and "feasibility" tests, to check whether the runtime is acceptable for that kind of job.
OTOH, in a healthy server room environment, the server's cooling system and the room temperature should keep the server's temperature stable. This means your timings shouldn't deviate too much. If lots of cores are idle, you can expect a lot of turbo boost. For higher core utilization, you should expect no turbo boost, but no throttling either. If timings start to deviate too much, Intel's powertop can help.
> my experience with video generation/encoding run of about 30 sec was that my macbook outperformed the server xeons...
If the CPUs are from the same family, and speeds are comparable, your servers may have turbo boost disabled.
> otherwise a testrun of 30 seconds would suddenly jump up to over a minute.
This seems like thermal throttling due to overheating.
> the Xeons though always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance)
Servers have many options for fine-tuning CPU frequency response and limits. The servers may have turbo boost disabled, or, if you saturate all the cores, turbo boost is also disabled due to the in-package thermal budget.
If you have any more questions, I'd do my best to answer.
Optimizations that need to happen don’t happen locally; they get tuned on a node in the cluster. Look at all the work Goto has done on GotoBLAS.
What I wanted to say is: unless the code you are writing consists of interdependent threads and the minimum thread count is higher than your laptop's core count, you can do 99% of the optimization on your laptop. On the other hand, if the job is single-threaded or the threads are independent, the per-core performance you obtain on your laptop is very similar to the performance you get on the server.
For BLAS stuff I use Eigen, hence I don't have experience with xBLAS and libFLAME, sorry.
From a hardware perspective, a laptop and a server are not that different, just some different controllers and resiliency features.
This wasn't showstopping by any means, but it did take a couple of hours to tweak it until it ran properly, and this was just a small webapp not really doing anything exceptional.
Our main line-of-business app (on Java) runs on SPARC/Solaris in production, so we have on-premises test servers so we can test this... and yes, there have been quite a few instances where we identified significant performance anomalies between developer machines running x86/Windows and our SPARC/Solaris test environment, and had to go rewrite some troublesome functions.
So Linus' position is a bit of a straw man.
Oh, you meant that just because there is one other thing that you might slip up and forget to control for, we shouldn't bother trying to control anything? No, wait, that's actually A Very Bad Opinion.
Even if your code is Java bytecode, that's still running on a different build of the JVM, on a different build of the OS (possibly a different OS). There is opportunity for different errors to crop up. They might be rare, but they'll be surprising and costly when they happen exactly because of that.
Someone else points out that Java (in the right context at least) is so successful in isolating the developer from the underlying platform that it isn't a problem if the developer isn't even permitted to know what OS/hardware their code will run on.
Could they accidentally write code that depends on some quirk of the underlying platform? I think it's not that likely. Nowhere near as likely as in C/C++, where portability is a considerable uphill battle that takes skill and attention on the part of the developer.
> They might be rare, but they'll be surprising and costly when they happen exactly because of that.
Ok, but you can say the same for routine software updates. It's a question of degree.
We had those problems when developing, in a scripting language on Windows, code that would run on Linux: at some point we needed something that called into native code, and it gave us problems with different behavior. After some of that experience, we tried to get everybody the same environment, one close to what will run in production.
So while Linus' opinion is to be respected, mainframes, and the increasing use by smartphones, smartwatches and GPS devices of bytecode distribution formats with compilation to native code at deployment time, show another trend.
I think fundamentally, the error he's making is comparing the current market to the late 90s/early 2000s market. Back then a RISC Unix machine cost thousands of dollars. It was cost prohibitive to give one to each dev/admin. Nowadays a RISC Linux PC is $5.
The starving college kid in a Helsinki dorm working on his EE degree can't afford 600-1000 dollars for another Laptop/Desktop to experiment with. A 35 dollar ARM SBC and a monitor that doubles as his TV is right in his price range...
That doesn't invalidate his point. He's just saying that is basically what needs to happen for ARM servers to start taking off. The next step is for companies to start deploying ARM workstations. That part still seems to be a good way off, MS abandoning their Windows ARM port didn't help the cause.
35 dollars will buy you an oldish x86 beige box that will absolutely flat out murder a Raspberry Pi performance-wise. Cheap, fast hardware is not a problem anymore.
Doesn't look like this is the case now
This is patently false. Mobile developers do test their apps on smartphones, even though Google and Apple offer VMs. You'd be hard pressed to find a mobile app software house that doesn't have a dozen or so smartphones available to its developers to test and deploy on the real thing.
CI/CD is already too far ahead in the pipeline to be useful. CI is only a stage where you ensure that whatever you've developed passes the tests you already devised; by that point you have already tested and are convinced that nothing breaks.
The type of testing that Linus Torvalds referred to is much further back in the pipeline. He is referring to the ability to fire up a debugger and check if/when something is not working as expected. Developers do not want to deal with bugs that are only triggered somewhere in a CI/CD pipeline and can't be reproduced on their own machines.
I'm not sure I agree with this. My coding environment is on x86, and I build on x86, but my run/debug cycle is on ARM. No one is really encouraged to test on the simulator even though it's available; you are almost entirely expected to test on your actual ARM device, run it, and see the results of your work.
Linus is making the argument that people want their release builds to run in the same environment as their daily test builds, and I don't see smartphone development as an exception to that rule.
I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.
I like to fall back on this Steve Jobs quote, employing a car/truck metaphor for computers:
When we were an agrarian nation, all cars were trucks, because that's what you needed on the farm. But as vehicles started to be used in the urban centers, cars got more popular … PCs are going to be like trucks. They're still going to be around, they're still going to have a lot of value, but they're going to be used by one out of X people.
There’s already a whole generation or two who will likely have little to no experience with PCs.
With 2-in-1s and tablet docking stations, the desktop case will be fully covered.
Surface, Samsung DeX, ...
Communication is also work, especially as you go up the management value chain. I think maybe people should refer to the thing that PCs do and mobiles don't as "typing".
Maybe an iPad Pro with its stylus could perform a lot of those mouse-driven tasks, but using the stylus for long periods of time is going to be exhausting and injury-prone. By using a mouse your arm can rest comfortably and allow you to work for long periods of time with minimal effort and no strain.
Desktop and mobile OSes should remain separate. You don't go around hauling fully loaded semi trailers with a car.
We've known about Fitts's Law since the dawn of the GUI and have decades of study on it. It's not any more efficient to need to "headshot" everything you need in an application 100% of the time; in fact, it is often rather the opposite, in that it gets in the way of actual efficiency.
Mousing through most "mobile" applications is great, whether "first class" or not.
Desktop and mobile OSes don't need to remain separate, and it's really past time that a lot of super-cramped "desktop apps" got the death they deserved for their decades old RSI problems, accessibility issues, and garbage UX.
It's friendly, but it's not space efficient. A touch UI can't handle applications with a huge number of features. Touch screens don't have right click, so you can't get context menus.
It's more than that, though. A touch screen UI for the iPhone makes zero sense on a 32" display. I'd much rather have a true multiwindow, multitasking operating system than that. Really, I wouldn't use a 32" iOS device at all. That's probably why Apple doesn't make them.
User studies from the dawn of the GUI continue to harp that user efficiency is inversely correlated to space efficiency. It doesn't matter if an application can show a million details to the individual pixel level if the user can't process a million details or even recognize individual pixels.
> Touch screens don't have right click, so you can't get context menus.
You don't need "right click" for context menus.
Touch applications have supported long-press for years as context menu. Not to mention that macOS has always been that way traditionally because Apple never liked two+ button mice.
Then there are touch applications that have explored more interesting variations of context menus, such as slide gestures and something of a return to relevance of pie menus (it's dumb that those never took dominance in the mouse world, and probably proof again that mice are too accurate for their own good when it comes to real efficiency over easy inefficiency).
> I'd much rather have a true multiwindow, multitasking operating system
Those have never been mutually exclusive from touch friendly. It's not touch friendliness that keeps touch/mobile OSes from being "true multiwindow/multitasking", it's other factors in play such as hardware limitations and the fact that tiling window managers and "one thing at a time" are better user experiences more often than not, and iOS if anything in particular wants to be an "easy user experience" more than an OS.
(I use touch all the time on Windows in true multiwindow/multitasking scenarios. It absolutely isn't mutually exclusive.)
In general, smartphone software is built to discourage creative work and focus on either reading or communicating.
If we're trying to predict the future, I think one effective approach to avoid being trapped in the present paradigm is to extrapolate from foundations of physics and biology that we can count on remaining constant over the considered period. Trying to really get down to the most fundamental question of end-user computing, I think it's arguable that the core is "how do we do IO between the human brain and a CPU?" With improving technology, effectively everything else ultimately falls out of the solution to creating a two-way bridge between those two systems. The primary natural information channel to the human brain is our visual system, with audio secondary and minimal use of touch; the primary general-purpose output we've found is our hands and sometimes feet, with voice now an ever more solid secondary and gestures/eye movements very niche. Short of transhumanism (direct bioelectric links, say), those inputs/outputs define the limits of our information and control channels to computers, and the most defining of all is the visual input.
Up until now, the screen has defined much of the rest, and a lot of computer design can be thought of as "a screen, and then supporting stuff depending on the size of the screen." A really big screen is just not portable at all, so the "supporting stuff" can also be non-portable, which means expansive space, power, and thermal limits, as well as having the screen itself able to be modularized (but even desktop AIOs can pack fairly heavy-duty hardware). Human input devices can also be modularized. Get into the largest portable screen size and now the supporting gear must be attached, though it can still have its own space separate from the screen. But already the screen is defining how big that space is, and we're losing modularity. That's notebooks. Going more portable than that, we immediately move to "screen with stuff on the back, as thin and light as feasible" for all subsequent designs, be it tablets, smartphones, or watches. The screen directly dictates how much physical space is available, and in turn how much power and how much room to dissipate heat. And that covers nearly the entire modern direct user computing market.
Wearable displays, capping out at direct retinal projection, represent a "screen" that can hit the limits of human visual acuity while also being mobile, omnipresent, and modularized. I'm actually kind of surprised more people don't seem to think this represents a pretty seismic change. If we literally have the exact same maximized (no further improvements possible) visual interface device everywhere, and the supporting compute/memory/storage/networking hardware need not be integrated, how will that not result in dramatic changes? It's hard to see how "mobile" and "PC" won't blur in that case. Yeah, entering your local LAN or sitting at your desk may seamlessly result in new access and additional power becoming available, as a standalone box (or boxes) with hundreds of watts or kilowatts becomes directly available versus the TDP that can be handled by your belt or watch or whatever form mobile support hardware takes when it no longer is constrained to the back of a slab, but the interfaces don't need to change. Interfaces seem like they'll depend more on human output options than input, but that seems likely to see major changes with WDs too, because it will also no longer be stuck in an integrated form factor.
WDs definitely look like they're finally getting into the initial steeper part of the S-curve. Retinal projection has been demoed, along with improvements in other wearables. We're not talking next year, I don't think, or even necessarily the year after, but it certainly feels like we're getting into territory where it wouldn't be a total shock either. And initial efforts, like always, will no doubt be expensive and have compromises, but refinement will be driven pretty hard, like always, too. I don't think the disruptive potential can possibly be ignored; nobody should have forgotten what happened at the last few such inflection points.
>I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.
This line of reasoning, though, is fantastically unconvincing. Heck, even ignoring the real work mobiles are absolutely being used for, and given the context of this article: I heard what you said repeated almost word for word in the 90s, except back then it was "SGI and Sun systems are tools for getting real work done; PCs are mostly communication and entertainment devices".
Why couldn't ARM-based servers do the same thing? I understand why a generic ARM CPU might not win against a generic x86 CPU at running cross-compiled code in Linux. But what if the server has a custom ARM-based chip that is one component of a toolchain optimized for that code, all the way down to the processor?
Imagine a cloud service where instead of selecting a Linux distro for your application servers, you select cloud server images based on what type of code you're running--which, behind the scenes, are handing off (all or part of) the workload to optimized silicon.
I don't have the technical chops to detail how this would work. But I think my understanding of Apple's chip success is correct: that they customize their silicon for the specific hardware and software they plan to sell. They can do that because they own the entire stack.
I think if any company is going to do that in the server space, it would have to be the big cloud owners. No one else has the scale to afford the investment and realize the gains, or the control of the full stack from hardware to software to networking. And sure enough, that is exactly who is embarking on custom chip projects.
So, maybe the result won't be simply "ARM beats x86," but rather "a forest of custom-purpose silicon designs collectively beat x86, and ARM helped grow the forest."
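Hand-waving in Python just to make the dispatch idea concrete (every name here is hypothetical; no real cloud API works this way today):

    # Hypothetical sketch of a provider-side dispatcher that routes a
    # workload description to purpose-built silicon, falling back to
    # general-purpose cores. None of these names are a real API.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        kind: str      # e.g. "video-transcode", "ml-inference", "generic"
        payload: bytes

    def pick_target(w: Workload) -> str:
        # Match the workload type to purpose-built hardware if any exists.
        targets = {
            "video-transcode": "fixed-function-encoder",
            "ml-inference": "npu",
            "crypto": "crypto-accelerator",
        }
        return targets.get(w.kind, "general-purpose-arm-core")

    print(pick_target(Workload("ml-inference", b"")))  # -> npu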
Intel x86: 1.7%
Tablet list: https://www.intel.com/content/www/us/en/products/devices-sys...
(and there were a lot of small brands that used to make them, that I don't believe get represented in that list)
Having said that, for it to reach >1%, it was more likely a combination of Intel Android tablets (which were fairly common for a while) and Chromebooks.
There may be people somewhere doing Android/ChromeOS/Fuchsia development on ARM Chromebooks, following the Google model of using a mostly cloud-based toolchain together with a local IDE. None of this is happening inside Google itself yet, though, but that's just because Google issues devs Pixelbooks, and they're x86 (for now).
But, since Pixelbooks (and ChromeOS devices in general) just run web and Android software (plus a few system-level virtualization programs like Crouton) there’s nothing stopping them from spontaneously switching any given Chromebook to ARM in a model revision. So, as soon as there’s an ARM chip worth putting in a laptop, expect the Pixelbook to have it, and therefore expect instant adoption of “native development on ARM” by a decent chunk of Googlers. It could happen Real Soon Now (hint hint.)
> End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.
Between that and the much-rumored ARM Macs, this could turn pretty quickly...
I have an Acer R13 w/ MediaTek ARM SoC. It's alright, better than the comparables with Intel N-series CPUs, but it ain't no i5.
Exactly. Linus' point is that Arm has no real advantage in the server space to compensate for the problems with cross-development. That's completely different for smartphones, which is why Arm won that space.
(See my argument elsewhere in this thread)
I hear of far more who make do with just Homebrew.
> unless you’re in the mood for slow emulation
I run an embedded OS (made for a quad-core ARM Cortex-A53 board) on both Real Hardware and on my ThinkPad (via systemd-nspawn & qemu-arm). I found (and confirmed via benchmarks) the latter to be much faster than the former — across all three of compute, memory, and disk access.
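For reference, a minimal sketch of the kind of micro-benchmark that can be run identically in both places (illustrative only, not the actual suite I used; the sizes and iteration counts are arbitrary):

    # Illustrative micro-benchmark (Python 3): run the same script on the
    # board and inside the qemu-arm/systemd-nspawn container, compare timings.
    import os
    import tempfile
    import time

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    # Compute: a tight integer loop.
    timed("compute", lambda: sum(i * i for i in range(10_000_000)))

    # Memory: allocate ~200 MB and touch every 4 KB page.
    def memory_test():
        buf = bytearray(200 * 1024 * 1024)
        for i in range(0, len(buf), 4096):
            buf[i] = 1
    timed("memory", memory_test)

    # Disk: write and fsync ~100 MB in the current directory
    # (not /tmp, which may be tmpfs and never touch the disk).
    def disk_test():
        with tempfile.NamedTemporaryFile(dir=".") as f:
            f.write(b"\0" * (100 * 1024 * 1024))
            f.flush()
            os.fsync(f.fileno())
    timed("disk", disk_test)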
Turns out power usage was never an ARM vs. x86 thing, it was purely a "how fast do you want to go" thing. ARM started at the "very slow" end of the spectrum which made it a good fit for mobile initially since x86 didn't have anything on the "very slow" end of things. By being very slow it was very low power. But then the push to make ARM fast happened, and now ARM is every bit as power hungry as x86 at comparable performance levels.
The power cost is for performance. The actual instruction set is a rounding error.
Not anymore, but there was a time when x86 was (barely) able to compete in that area, and there were some x86-based smartphones and tablets. But it was too little, too late: x86 was already a niche. Developers absolutely had to support ARM, while x86 was optional, so many apps were never made available for x86, and that was pretty much it for those devices.
The Macbook uses a 14nm 4.5W m3 with 1.5 billion transistors.
The iPad uses a 7nm 12W A12X with 10 billion transistors.
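Running the arithmetic on those two figures:

    # Quick arithmetic on the numbers quoted above.
    m3_transistors, m3_watts = 1.5e9, 4.5
    a12x_transistors, a12x_watts = 10e9, 12.0
    print(f"{a12x_transistors / m3_transistors:.1f}x the transistors")  # ~6.7x
    print(f"{a12x_watts / m3_watts:.1f}x the power budget")             # ~2.7x

So the A12X packs roughly 6.7x the transistors into roughly 2.7x the power budget.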
You don't have much choice now, sure, but it's not as if there weren't efforts at x86 smartphones (like the ZenFone). Nor is it as if there wasn't a long run-up of phones leading to the modern smartphone. And even then, how is this not directly relevant to the case of x86?
I mean, we're directly doing a comparison to the RISC/MIPS/etc. era, yeah? Couldn't someone back then have said "well, with the PC you don't have a choice, so it's different"? x86 got heavy traction on the back of WinTel, then moved up to bigger iron, which didn't really fight hard in the lower-end, lower-margin space. Is there really no deja vu in that versus ARM gaining heavy traction in iOS/Android/embedded and then moving up to PCs and servers, where Intel/AMD don't really play in the lower-end, lower-margin space? There was a period with plenty of choice in servers, but then x86 won.
And again, it's not as if someone can't come up with compelling arguments; x86 has some real moats even beyond pure performance. There is enormously more legacy software for x86, for example, and the ISA will be under legal protection for a long time to come, which complicates running that software on ARM. But it's hard to say how much that matters in much of the cloud space, particularly if we're imagining 5-10 years further down the line. The x86 takeover didn't happen overnight either, and the first efforts were certainly haphazard. But momentum and sheer volume matter. It just seems like something that needs to be addressed, at any rate, more deeply than you have and certainly more than Linus did.
Try running a Windows 95 era application on Windows 10. You can even have problems with Windows XP era stuff.
And the server space in general doesn't do legacy except by keeping everything intact. The only real issue is the lack of ARM developer PCs.
>And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".
Sure: as soon as these merge and you have a development platform as productive as a desktop computer that lets you natively build for ARM, then absolutely, it could displace x86. And maybe when (if) the two platforms really do merge, that becomes a real possibility.
And speaking of x64...
> It's why x86 won. Do you really think the world has changed radically?
No, x86 is losing to x64. And at some point another instruction set will supplant x64.
Intel tried "another instruction set" (Itanium) and nearly lost the market to AMD (AMD64)
His thesis is that if you want a platform to take off, you start shipping developer boxes for the platform. So mobile and PC will merge when, and only when, you can do all your development on a mobile platform.
I don't think ARM can rule with Java (which already supports it) and Swift/C (limited hardware).