Microsoft Pledges to Use ARM Server Chips, Threatening Intel's Dominance (bloomberg.com)
379 points by rayuela on Mar 8, 2017 | 230 comments



This could well be a negotiating strategy to rejigger the relationship with Intel. Dell did exactly the same thing a number of times for similar reasons.

So all you do is: put out some press releases, ramp up some hires, put out some glossy product roadmaps. Intel starts to get separation anxiety and flinches: well, maybe they can afford to come closer to AMD prices for another year. They kiss up and you back out of your false posturing. Rinse and repeat every few years.


This is pretty par for the course in a lot of HW companies (esp. SoC selection). Where it gets tricky is when you've pulled that move a couple of times and then it turns out AMD/ARM/etc actually have the good chips.

At that point you've burned the other company so many times by changing at the last minute that it becomes hard to get real support and engagement where you actually need it.


> actually have the good chips.

Has that ever been the case yet, for ARM servers?

There's no "magic" in ARM. The reduced power comes from the reduced complexity. Heck, branch prediction is fairly new in ARM. I have trouble seeing a time when an Intel chip will ever fall behind even a bucket full of ARM chips, unless workloads ever fall near zero.


Reduced power comes from reduced complexity, and that has a performance cost, yes -- but it takes far more than a 10% increase in power to get a 10% increase in performance.
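
(Rough back-of-the-envelope for why, assuming standard CMOS scaling: dynamic power goes roughly as

    P ≈ C · V² · f

and pushing the clock f higher also needs a higher supply voltage V, so power grows far faster than linearly with frequency, roughly cubically near the top of the range. Spending the same watts on more, slower cores is where the perf/watt win comes from, provided the workload parallelizes.)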

Intel's designs, being tuned for high sequential performance, are significantly better for that purpose; but if you look at VPS services which currently only get a fraction of a CPU each, I'm sure they could run more power-efficiently with a larger fraction of a punier CPU.


What's the difference between 100 ARM CPUs @ 1 watt versus one Intel CPU @ 100 watt, except that you need a bigger data center for the ARMs?


There are a lot of nice things in ARM, e.g. slightly less register renaming.

For example, Jim Keller estimates a fully developed arm chip will be 15-30% more efficient than an equivalent x86. Now, this development hasn't happened yet, but with giants like Qualcomm, it will happen eventually.


If this was the case (and there weren't other issues, e.g. instruction fusion etc.) then wouldn't AMD be pushing K12 far harder? It's the same central core as Zen but they don't seem to have as much confidence in it.


Because migration is super expensive and a big part of what's preventing the movement to ARM servers. Also, the aforementioned case hasn't necessarily happened; Keller has spent forever making Zen better. They are still working on K12 IIRC, but Ryzen, for good reason, is the focus. Besides, CPUs are more and more becoming simply host systems.

Besides, never doubt the word of jim keller ;)


The magic in ARM is in RISC. In an Azure (or AWS) cloud where 90% of operations are webserver events, those are less than complex operations. Relational databases with sums, grouping, etc. aren't using every bit of x86. And once you start talking about geospatial, advanced searching, and cryptography, you might just offload to a graphics chip.


What is this, the 90s? No one cares about RISC vs CISC or ARM vs. x86-64 (on a high-level ISA level) anymore. It's all in the implementation.


Not quite. They've been working together for a few years to do this. Qualcomm even based its motherboard design off of Microsoft's Project Olympus.

> QDT has been working with Microsoft for several years on ARM-based server enablement and has onsite engineering at Microsoft to collaboratively optimize a version of Windows Server, for Microsoft’s internal use in its data centers, on Qualcomm Centriq 2400-based systems. QDT’s OCP submission is the result of a multi-faceted, multi-level engagement between the two companies to facilitate ARM adoption into the data center. It encompasses multiple areas of hardware and software including board development, firmware, operating system, compilers and tools, and CoreCLR.

https://www.qualcomm.com/news/releases/2017/03/08/qualcomm-c...

Also, I doubt Microsoft went through the trouble of supporting full x86 programs on Qualcomm's ARM chips just to improve the relationship with Intel. If anything, finally bringing ARM to not just servers, but Windows notebooks is what will make Intel even angrier with Microsoft.

So far, the main reason why ARM couldn't enter the PC market was the Windows lock-in - if x86 programs wouldn't work on ARM, then they had no chance (save for UWP apps gaining huge popularity, which hasn't happened yet, and I don't expect to happen for at least a few more years, if ever).


Why doesn't apple move macbooks to ARM processors?


They introduced ARM to manage their fingerprint sensor and the TouchBar. Rumor has it that Apple is working on connecting the ARM chip to more subsystems so it can do background work like background refresh (or whatever that feature is called when the laptop is asleep – PowerNap?). I see this as Apple slowly creating a hybrid (low-energy ARM for easy tasks and faster x86 for speed). Maybe the goal is a complete transition to ARM, and Apple starts designing their own MacBook (Pro) CPUs to gain more control (be less dependent on Intel's varying release cycles).


I've been waiting for something like this ever since I first got a Raspberry Pi.

Keep the x86 CPU around for active, high powered use, and then use a low power ARM CPU to run downloads and software updates while the PC is in an off or idle state.


What about number crunching on the GPU?


They'll probably take a hybrid approach before going all-in. Use the ARM CPU for low-power, low-CPU requirements to avoid draining the battery. The ARM chips in the iPhone and iPad are very good, but they're not quad-core i7 good.


I could see that being a possibility in the future as the discussion moves away from firepower and more toward power consumption, which seems to be a focus of Apple products.


They're far too wimpy today.


Which chips are you referring to, specifically? Because otherwise it's a blanket statement that is simply untrue.


> Which chips are you referring to, specifically?

Literally any ARM chip. What ARM chip do you think makes a fitting replacement for a high performance Intel core in a Macbook?


We're talking about servers here, which has nothing to do with the OS. If ARM chips were appropriate, we would see them in Linux, but time and time again, the power to performance just doesn't make sense for ARM, when compared to an Intel chip, for reasonable workloads.


Time and time? We do see them in Linux.

The ARM server ecosystem has been in the works for a few years. At TechCon '16 we saw the announcement that the software is fully there, and this is not hype: all major distributions support all the major players out of the box. Download ISO, install, and use it.

HW-wise we are now in the 3rd gen of chips (QC Centriq, XGene3, ThunderX2), all of which compare with E5, not Atoms. First/second gen were mostly proving grounds for the specs and software work, except for Cavium machines, which have some serious network I/O where the relative wimpiness of any one of the 96 cores simply doesn't matter for certain workloads. In a matter of two years we've seen private datacenters soak up XGene2 and Cavium chips. We've seen ODMs and OEMs get their shit together and start selling real kit (b2b.gigabyte.com).

It's funny, people expect these things to suddenly appear overnight everywhere and be competitive with E7s and Power9. It takes decades, but the server guys managed to fit in nicely in about 5 years. I am excited.


So if my code is parallel enough to take advantage of all the cores, like a web server serving 96 requests simultaneously, one per core, would I get better performance than Intel? Better performance per watt?


You would run 96 processes, one for each core. Performance depends more on I/O than CPU in reality.
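
(A minimal sketch of that model in Go, for illustration: one process with goroutines spread across every core rather than 96 separate processes, but the same idea of keeping all the wimpy cores busy on independent requests. Everything here is standard library; the handler is made up.)

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "runtime"
    )

    func main() {
        // Each request gets its own goroutine, and the scheduler spreads
        // them across all cores (GOMAXPROCS defaults to NumCPU), so 96
        // modest cores all stay busy on independent, mostly I/O-bound work.
        fmt.Println("serving on", runtime.NumCPU(), "cores")
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from", runtime.GOARCH)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }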

What ARM will bring is a competitive market and no Intel tax. The average 1U ARM box will be massively cheaper. For servers, OpenPOWER is more interesting[1]

[1]https://www.theregister.co.uk/2016/04/07/open_power_summit_p...


> HW-wise we are now in the 3rd gen of chips

I stand corrected. It's been a while since I've looked at the performance. I didn't realize they caught up so quickly.


Ah, just like governments threaten to use Linux to rejigger MS.


I've heard similar theories about SteamOS. That it was a threat to Microsoft since they had just released their own store on Windows 8 that competed with Steam. I'm inclined to believe it since Valve seems to not care about it much anymore.


If I remember correctly, Valve from the start considered SteamOS a "backup", a way to escape Windows if MS pulls some shenanigans they don't like.

Since MS is mostly "behaving" themselves, Valve is keeping SteamOS around, but not focusing on it.

At least for now... (it wouldn't surprise me if MS became more aggressive in their W10+DX12+"Xbox everywhere"+"MS Store" push...)


a ton of major issues have been recently fixed. It doesn't seem to be on the back burner at all, it's just that building/updating an entire OS is a bit different from building/updating just steam. Also, a lot of fixes are edge cases that most people don't view as fixes, because it was never an issue for them.

Most of the issues revolve around GPU drivers, which in theory Valve is not responsible for, but they have hired devs to work on the open source AMD drivers. Even then, most devs don't optimize for OpenGL. That's again on the dev, and not Valve, unless Valve is going to provide devs to every game developer out there to provide Linux support. Vulkan (hopefully) would resolve that, as optimizing Vulkan builds makes games run better everywhere.

Of course they aren't putting ALL their eggs in this basket. ~90% of their revenue comes from windows, so they should be putting most of their effort into windows support. You can't lose your current customers while trying to make the future better. The fact that they are in fact fixing things and hiring driver devs is a great sign. And I believe they provided the rocket league dev too?

Either way, progress is coming, slow and steady. linux as a primary gaming platform works just fine for me, thanks to valve.


> linux as a primary gaming platform works just fine for me, thanks to valve.

Confirmed by a happy occasional CS:GO player here.


Yeah, I have another comment from a different thread saying the same thing I'm about to, but the number of Linux-compatible Steam games has really exploded since the original SteamOS announcements. I think it's almost at half the top 100 most played games on Steam, but last I checked more than half of my Steam library was playable on Ubuntu at least.


I wish it worked on Debian, it used to but the last time I tried I lost a day and still didn't get it to run...


I've been running it on Debian testing here for nearly two years now, no major problems. Since jessie it's just an apt-get away.


It works fine in Ubuntu.


> Even then, most devs don't optimize for OpenGL. That's again on the dev, and not valve,

Sony, Nintendo, Microsoft, especially Microsoft, go the extra mile to make sure games perform well on their platforms, even special-casing APIs for specific games.


They also push those platforms as their only game platform, and try to discourage devs from using other platforms.

Valve isn't trying to discourage devs from using Windows. Windows is still their main income source. Linux-only devs aren't going to make Valve money during the transition away from Windows. The goal is to build the game library on Linux so it is a valid competitor.

Also, they can't really make a ton of special OS-level APIs, since they are trying to keep things generic. They don't want to fork Linux, they want to maintain compatibility. I mean, open is the entire premise of leaving Windows. And if you mean higher-level APIs, there are things like: https://valvesoftware.github.io/steam-audio/ . One of the top things in porting a game is that DirectX covers more than just graphics. OpenGL + SDL2 covers most of it. Steam Audio makes it so you can write code once and have it work everywhere.

So yeah, they are working on making APIs that make it easier for devs to release linux/SteamOS games without needing the player base already there, because linux "just works" with the middleware.

The short of it is: they are actively doing a lot of stuff to advance Linux gaming still, just a lot of it is behind the scenes now and aimed at devs, or not directly Steam related (you don't see the fact that they hire Mesa devs through Steam updates).


It is not only a matter of special OS level APIs.

When one attends GDC, they have booths where they give support directly to customers; they also fly in to relevant studios to sort out any kind of performance or bug related problems they might be facing.


And they've done that with some devs, Rocket League obviously being the big one, where Valve went to a Linux dev and said "hey, can you help port this".

And the point still stands that this is a future investment for them. They aren't pushing to make Linux exclusive, they are pushing to make Linux viable. Valve has ~360 employees. Linux has a ~1% gaming market share. If Valve has even 4 employees working on Linux, they are essentially scaling their effort based on current income from the platform, whereas Sony/Microsoft get 100% of their income from a singular platform, so they spend 100% of their expenses on building that platform. If Valve switched over and said "ok, all employees full bore on SteamOS, ignore Windows", they would lose a LOT of money. Besides, the goal here isn't to make SteamOS "their" platform, the goal is to make gaming work on an open platform. Microsoft has to entice people with support in order to keep them locked in to their garden; Valve is trying to unlock the garden (to an extent).

If they want growth, they should spend more than just 1% on the platform, but people are acting like they are spending 0% on the platform. It's still improving and actively being worked on, but this is a big undertaking for a team of 4: OS updates, dev support, Steam client, etc etc... And if the team is bigger than 4, then they are in fact spending more than 1% on a 1% market segment, which still proves the point of "they haven't abandoned it, it's just not growing at the 'right' pace in publicly obvious ways".


Right, but Sony, Microsoft, and Nintendo also make their entire systems (both hardware and software), not to mention that they don't provide games for other platforms, so it's obviously going to be a little different for SteamOS, which is essentially a second-tier platform from Valve and is running on arbitrary hardware.


SteamOS was supposed to be Valve's distribution.

Windows also runs on arbitrary hardware


I think it's a little disingenuous to compare Windows to SteamOS like this. Microsoft has quite a bit of pull with hardware manufacturers; there's a strong incentive for PC companies not to put out hardware that doesn't work with Windows (and I'd imagine extensive tests are done before releasing any such hardware to ensure that everything works with Windows, presumably with some degree of communication and cooperation with Microsoft). On the other hand, there's virtually no incentive for a PC maker to make sure that their hardware doesn't break things for SteamOS.


Another interesting thing is that Windows 10 / Xbox App recognizes and tracks all games on the system regardless of their origin. I opened the Xbox App for the first time the other day to turn off the annoying screen capture prompts it injects and found my "recent activity" which was populated with Steam Games.


The other issue for consumers is that SteamOS will give you a ~10% performance hit.


> SteamOS will give you a ~10% performance hit

Any sources to back up this claim? 10% performance hit compared to what? Windows? A different Linux distro?

I've found that for native games, Linux can actually offer better performance in many cases than the Windows version.

Someone also did a test recently (full video on YouTube) with the Windows version of the new Doom running on Linux through WINE emulation (yes, I know it's not technically an emulator) via Vulkan, and the performance exceeded the Windows version.


My personal anecdotal evidence is that many of the games available for Linux on steam are really terrible ports. Gang beasts for example couldn't even find the binary. Once I fixed that, I was lucky to get 5 fps with lowest settings on my 6 core i7 with a Titan x. Yes I had the latest Nvidia drivers. Chivalry medieval warfare was another unplayable performance offender.

This isn't necessarily valve's fault though. It's because the devs will shit out a Linux build without even trying to run it or they just don't care.

Some games work great. Some don't work at all and shouldn't even be listed with Linux support.

I sincerely hope Linux takes off as a viable gaming platform. All the tools and components are there. The only thing missing at the moment is developer mindshare.


That's completely different to my own experience. In my experience, there tend to be minor issues with graphically intensive games but they mostly work fine with good performance. For example, I'm playing Alien Isolation on a Lenovo Ideapad laptop, with most settings at max, and I'm impressed by how well it runs.

There are some problems and glitches unrelated to the games, in my experience (like how newer nvidia GPUs have screen tearing on Ubuntu -- unrelated to vsync, by the way -- and there's pretty much nothing you can do about it for the time being)


Wine tests are a bad way to compare performance because you can not be sure that all features that the game uses are actually implemented and not stubbed.



IIRC that benchmark is garbage, because the compiler they use was so old that Debian Stale had a newer version in their repos, when that article was first published.

Ah, here it is. From https://www.reddit.com/r/linux_gaming/comments/3snvpc/steamo...

"took a quick peek at the latest Geekbench executable for Linux(3.3.2,) it was compiled with GCC 4.4.3 on Ubuntu 10.04 - a 5 and a half year old compiler. Windows version was compiled with MSVC 2012 update 4 released 2 years ago."

Take note: That ars article was done ~15 months ago (november 2015). In it, they used Ubuntu 10.04, which was released in April of 2010 (thus '10.04')! Steam wasn't even released on Linux, until nearly 3 years later, in 2013!

That article is so BS that you should ignore it outright. It's not even fit to be toilet paper.


You mean Debian Stable, though perhaps that was intentional? If so, well done.


This blanket statement is not generally applicable. It can depend on rendering tech used, the game, and hardware. And this is going to be an issue in bleeding edge games mostly.

Borderlands 2, for example, plays fine for me on a i7 930, with a GTX 570. An older rig.

Dota 2 with OpenGL rendering hits or exceeds the Windows FPS when run on Linux.

Switch to Vulkan though and it craters.

Source engine games with OpenGL rendering often outperform Windows.


> Source engine games with OpenGL rendering often outperform Windows.

Nah, not really; only Dota 2, because it has no wrapper, at least among Valve's Source games.


And beyond the 1337 gamer kids, who will notice?


> And beyond the 1337 gamer kids, who will notice?

Isn't that the core market?



Probably depends on where the Linux is being used. When I did DoD contracting there was very much a constant push towards using Linux and OSS where possible but this was mostly in product development, not client operating systems.


Dell did this for ages with AMD chips. Same time every year you'd see an article go across all the magazines about Dell planning to introduce AMD chips to their servers, and just as quickly vanish once Dell had finished negotiating with Intel.


You can do this with a moderate sized enterprise. Engage HP/Dell or a strong VAR. Get them on the hook and focus on Intel E7 (and SQL Server enterprise if you are an MS shop).

Then get meetings scheduled with IBM and Oracle. Talk to the VAR or OEM guy in your office with your POWER mug poorly hidden. Start asking questions about TPC benchmarks and error correction. Ask if you can pay for cores only when you need peak capacity.

Intel will provide them with marketing support, potentially very significant if they care about you. Microsoft will also bend over backwards for an Oracle compete opportunity.


The resulting Intel payola sometimes carried Dell's quarters, preventing Dell from missing profitability.

https://www.velocityreviews.com/threads/intel-and-dell-scum-...


They did more than plan, I have a rack full of older Dell servers with AMD CPUs.


When I say "did this for ages", I'm reaching back to the turn of the millennium. It started happening about as soon as AMD brought their first competitive chip to the market.

Dell did eventually start shipping servers with AMD processors in them, but it was long after they first started saying "We may start selling AMD based servers."


That might change with AMD's new chips.


Let's hope so


It could be, but vendors tend to have epiphanies when they eat their dog food.

Microsoft and Exchange was a great example. Doing things like running big mailboxes, having HA services, and doing discovery were ridiculous hacks until Exchange Online came around and subjected them to the pain.

Similarly, I'm sure Microsoft has plenty of idle servers sitting around doing very little that could easily be running more effectively on ARM.

Intel has improved power management, but their low-end chips suck.


Perhaps spurred by Google's recent announcement that they have Skylake chips in their cloud: https://cloudplatform.googleblog.com/2017/02/Google-Cloud-Pl...


Sure. And also, maybe the Google-Intel Skylake deal comes from last year, when Google was testing POWER9 chips... [1]

[1] https://www.theregister.co.uk/2016/04/07/open_power_summit_p...


There's more recent news on that front: https://blog.rackspace.com/the-latest-zaius-barreleye-g2-ope...


No, this is the real deal, apparently. Obviously the work has gone on for years but between OCP and Xeon competitive chips (ThunderX2 and Centriq) it makes sense to put public weight behind it.

The real question is if WSL (Ubuntu for Windows) will be available for AArch64 Windows. Because the only real 64-bit ARM software ecosystem today is Linux (courtesy of Linaro, Redhat, Canonical, Suse, and ARM).

Between WSL and BTing legacy 32-bit x86 apps this could be a decent offering for folks who already have Microsoft infrastructure. Even cooler if Hyper-V is ported.


At some point, though, this sends the competition (AMD) out of business; then there's no more negotiation of this sort, and everyone is worse off.


Actually, the only signal so far is to AMD to continue investing in developing ARM chips after the A1100.


Reminds me of how media conglomerates fuck with the satellite/cable companies.


I love the fact that this can even be a threat. I'm biased, as an old Sun guy who thought the x86 architecture was a bit too ad hoc[1] and way too proprietary for really clever innovation. I had pretty much given up on anyone challenging them in the server space, assuming that 100 years from now kids would marvel that their brain implant could be switched into 'real mode' to run something called 'dos' :-).

This is something for which I give full credit to Linus and the other developers who have made Linux into a credible server operating system. Without that software base, ARM would never have been able to get where it has.

[1] I get that most people never see the 'insides' of their systems, but it's always been something I cared about.


Has it been long enough for people to forget NT on Alpha, MIPS and x86 (and i860, though not released)? And to forget both PReP (https://en.wikipedia.org/wiki/PowerPC_Reference_Platform) and CHRP (https://en.wikipedia.org/wiki/Common_Hardware_Reference_Plat...) which were going to break the Intel stranglehold once and for all?


I remember running SQL Server on a Digital AlphaServer 1000A back in the 90's. That machine was a physical beast, though I'm not sure about the actual processing speed. The most interesting thing about the whole affair was the split case with drives on one side and a huge ducted fan on the other. That was the single coolest thing in the room. :)


Imagine having a time machine, going back to Microsoft in the 90's and telling them to forget SGI, IBM, and HP, port NT to ARM...


They'd laugh at you since at that time ARM was something that ran in either small-format low power low-clock devices (like a Newton) or in fringe machines like the Acorn Archimedes.


While you're there, tell Intel to ignore HP and forget Itanium ;-)


I suddenly feel too young


Fear not, that's a problem that will solve itself eventually.


Why did CHRP fail?


Because it wasn't Intel, it was PowerPC, and Intel always manages to pull a rabbit out of the hat in terms of performance and price.

Also if I recall I don't think Windows NT for PowerPC ever truly saw the light of day. And this was while Windows 95/98 was still dominant and before Windows 2000/XP, so there wasn't really consumer software for it. So no real operating systems.

PowerPC ended up being a dead end which only Apple pursued. Though I guess there were variants that ended up in gaming consoles for a while.

At a job I had (IBM subsidiary) in 1997 they had a pile of early CHRP boxes hanging around. For kicks I got Linux running on them, just out of curiosity. They were basically just PCs (PCI bus, etc.) that ran with a PowerPC CPU. Which is effectively what Macs were for years, too.


> We wouldn't even bring something to a conference if we didn't think this was a committed project and something that's part of our road map."

As anyone who rode the rollercoaster of abandonment in the ActiveX years recalls, their previous MO was all about things that became uncommitted after being in their road map :(

I really hope there's some kind of future for non-Intel players.


ActiveX deserved abandonment, but anyone who remembers those years probably also remembers that Microsoft is capable of supporting NT on many different architectures.


Potentially capable; their support for non-x86 has always fallen short.

If you look at e.g. Linux or BSD distributions, the entire world is rebuilt for every architecture. Running Linux on powerpc, arm, amd64, I get the exact same experience across the board as x86 bar platform-specific bits like openfirmware/efi tools. Microsoft has never done this. The vast majority of their stuff remains x86 only, making arm and even x64 second class citizens, with x64 only being viable as a result of the x86 compatibility. Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.

If a bunch of volunteers can manage to provide over 20000 software packages for over 10 architectures, totalling over 500000 binary packages, it's entirely possible for Microsoft to support three. When I used to maintain the Debian build tools, it took 18 hours to rebuild all of the roughly 18000 packages: compilers, kernel, tools, applications, everything. It would be much faster on a current system. It's all possible from a technical point of view.


> Running Linux on powerpc, arm, amd64, I get the exact same experience across the board as x86 bar platform-specific bits like openfirmware/efi tools.

This is demonstrably not true, there are plenty of ports that end up being Intel-only, and plenty of architectures that involve some sacrifice in terms of software choice if you want to run them. (as a SPARC and Raspberry Pi user I could elaborate, but hopefully you get the idea) Not that there's anything wrong with that.

> Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.

What you take seriously is your business, but the standard you're holding Microsoft to is one Linux doesn't meet, let alone BSD, and it's completely arbitrary regardless. If their ARM platform does what it needs to do, it doesn't especially matter if it offers support for (for example) legacy Windows cruft.


> There are plenty of ports that end up being Intel-only .. as a SPARC and Raspberry Pi user

Sure, but you can compile failed builds of software yourself and get that tool on ARM or SPARC. ARM works pretty well on Debian, so long as you're not on Raspbian & using their ancient repos with broken software. Part of why I've avoided Raspberry Pis entirely.


A good point, but their repos are so eccentric because Debian refused to support the RPi's older ARM architecture and the RPi community had to roll their own. That speaks to the original point of this subthread, I believe. Not only does Linux not offer the "exact same experience across the board," it cannot even do it for the ARM architecture. For decent enough reasons, really, when you think about it.


Different versions of the ARM architecture with different ABIs. From the Debian POV that's a separate architecture to support. It could have been done by Debian, but was done by a third party, just as other minor platforms are supported.

I don't agree that it has a "different experience" because the tools and infrastructure are there to build the entire distribution from scratch. And this was done. I've done it myself several times. Once done, this architecture variant had the complete package set available for all the official supported architectures, bar any architecture-specific packages being added/omitted. Third parties can and do bootstrap and maintain entire architectures. I can't speak for the raspbian people and their port, but it's not hard to manually bootstrap the kernel and toolchain and then set an autobuilder loose on the entire source archive.

And that's the point I was trying to make about Windows; that's exactly what you can't obtain. Be it the old NT ports or the present day ones, outside the base "Windows" release and some key products, the rest of Microsoft's product portfolio is largely missing.


Yeah, it sounds like we're talking more about the difference between the limitations of open source and closed source than any particular failure on Microsoft's part to meet the expectations of its customers.

I don't think they're going to really square the circle in the way linux can, having everything available because everything is open source. On the other hand, it's not inconceivable they could end up with a server platform that does offer an awful lot of Microsoft stuff as open source, which also makes available all the linux userland stuff you'd want to have. Their handicap might be that not enough of their infrastructure software has made it to .net yet, but they've shown lately that they're willing to do some porting when it's appropriate. (Sql server!)


Only a select few boards were based on ARMv6, the Raspi 2 and above use a modern ARMv7 core. Running Raspbian vs Debian on a Raspi 2 or 3 shows off the massive performance gap between the two, IMO they should have used a single ARMv7 core from the get go.

But the Raspberry Pi is a fundamentally flawed platform, with poor I/O, binary blobs required to make the hardware function, and a community that is toxic towards free software, with its own vaguely supported distro.

An OrangePi Zero ($7) or OrangePi PC Plus ($22) will blow a Raspi out of the water any day, due to each USB port and the ethernet port being directly wired into the SOC, allowing 40MB/s per port. Plus, I can run kernel 4.10 and mainline Debian on it without any blobs, and the only things I'll miss out on are GPU support & WiFi. The VPU has been reverse engineered though, so H.264 & H.265 video works well.


> community that is toxic towards free software

I think that's a hostile mischaracterisation of the community!


Isn't that more a case of no support promised, none given, rather than support promised but falling short?


It's close to two decades now but I don't recall Microsoft promising that all the NT architectures would offer the exact same experience. It would have been silly then, and it would be doubly silly now that plenty of the stuff in Windows is legacy cruft.


Pretty much, Debian moved forward to recompiling for ARMv7 as it netted significant performance improvements, and if you wanted ARMv6 binaries you could compile them yourself. Raspbian did this, but not very well.


> Potentially capable; their support for non-x86 has always fallen short.

"Microsoft will continue to provide mainstream support for Itanium-based systems until July 2013, and extended support until July 2018." -- http://www.computerworld.com/article/2516742/computer-hardwa...

Itanium came out in 2001 and the above article is from 2010, so that would be about 9 years of product releases and 17 years of support overall.

And, frankly, there's no money to be made off of supporting all those miscellaneous architectures so Microsoft has no reason to bother.


I'm not referring to that type of support; this is merely support for the base platform. I'm referring to the entire ecosystem of Microsoft products, of which you'll find most are x86 only.

When you say "there's no money to be made...so Microsoft has no reason to bother", this attitude is a major reason why ia64 failed, and why their previous arm attempt failed, and why their current arm attempt is also likely to fail. If the software isn't there, it's a poor proposition for most customers.

When I run Linux on ia64 or arm, I have an entire distribution's worth of software at my fingertips, and for the most part I'm not losing out compared with more popular architectures. With Windows, no matter how technically good the base platform may be, the ecosystem is a wasteland and will remain so until Microsoft put the effort in to support them properly.

Supporting multiple platforms is not expensive; it's simply a matter of having the build infrastructure in place. In Debian we had every package automatically built on 11 platforms. Microsoft could do the same for their applications. For example, see https://buildd.debian.org/status/package.php?p=okular&suite=... -- one package built for 22 platforms. Building for three or four is not a lot to ask...


IA64 failed because it was a bad answer to a question no one asked. AMD got it right, that's why AMD64 won.


Yes, but none of this thread is about this specific platform and its merits, it's about the different strategies for supporting multiple platforms, and where Microsoft through the choices they made failed to realise their own full potential on non-x86 platforms while other organisations managed to fully support them.


AMD managed to produce x64 thanks to the licenses they had from Intel, otherwise this laptop would be powered by an IA64 processor.


No, it wouldn't.

Sometimes I wonder if IA-64 was just an exercise in killing off Alpha and HP-PA...

Anyway, x64 succeeded because instead of producing something no one asked for, and poorly (IA-64), AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.


Sure it would.

If Intel had transitioned their processor line to IA64, without AMD to defy their roadmap, do you really believe consumer desktops would magically start using other vendor processors?

> AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.

Because they still had the cross-license deal with Intel that allowed them to legally build x86 clones.

No x86 licenses, no x64.


We're talking complete hypotheticals. When x64 was introduced, not only weren't there any consumer Itanium chips, but they weren't even hypothetical.


Of course it is hypotheticals, that is what talking about alternatives is all about.


> I'm not referring to that type of support; this is merely support for the base platform. I'm referring to the entire ecosystem of Microsoft products, of which you'll find most are x86 only.

Windows Server comes with IIS and other services that would normally have been provided in a Linux environment by the Linux ecosystem. Also, as the article notes, SQL Server and Visual Studio were supported on Itanium as well.

As Itanium only succeeded as a server product, there's no business reason for Microsoft to have ported their desktop applications to it.


Those are just a select few products out of thousands of tools, applications and services I might need to run. The server vs desktop distinction isn't very important. What matters is the utility and hence viability of the platform as a whole. By not having the platform be generally usable, it greatly reduced its desirability and reach.

Any considerations for such a server/desktop split certainly should not apply to arm, which can be used for either. Also, contrast with the experience of ia64 on Linux, where I had the full set of tools, services, applications available. That's the sort of experience Microsoft should have provided, but didn't. And should also be doing for arm, but aren't there either.


As long as the non-x86 architecture is both 64 bit and little endian, I think this is true. However, in the absence of those two properties not at all. There are many mainstream linux code bases which are either inconsistently endian-clean (meaning they are for some operations but not all) or are straight up broken. This becomes really visible debugging bizarre, impossible bugs on BE architectures.


Not true. When multiarch mattered (NT 3 and 4), Microsoft was the only vendor that delivered the exact same OS, device support and development environment across x86, Alpha, MIPS and PowerPC, and pushed the industry towards standardization. Microsoft has always taken arch and platform independence seriously, and this was evident even in 2010 when I worked on NT. You could rebuild the whole system for any supported arch (x86, x64, ia64 when that mattered), now arm and arm64

It's funny, but Microsoft was spot on correct to continue supporting 32-bit x86 on par with x64. Now they can just support BTing 32-bit x86 on ARM64 instead of being forced to support 64-on-64, which would simply involve more overhead.


Great points. It is an odd fact that Microsoft generally avoided writing operating systems on or for Intel x86 processors [1], and started the development of both NT and CE on Risc processors.

Meanwhile, Linus had a PC with an Intel 386 processor, so that's what he started Linux on and for...

[1] MS DOS was based on code that Microsoft bought in, not having time to develop it from scratch. However, Microsoft did have some success on x86 with its PC version of Unix, which was called Xenix.


Not that odd. The NT group specifically wanted a portable design, and they made the right call to initially target i860, MIPS and only then i386 machines. Contrast that to OS/2 development - the (never shipped) PowerPC OS/2 port was based on top of an IBM fork of Mach, because the kernel was too x86 specific (derived from the pre-virtual-memory 16-bit protected mode code).


BTing?


Binary Translating.

Basically a JIT that recompiles (for example) 32bit x86 instructions to ARM64, so that you can run old apps on new CPUs.

DEC had FX!32 to run x86 Windows NT programs on Alpha Windows NT.

Apple licensed Transitive's QuickTransit to run PPC OSX apps on x86 Macs, though in typical Apple style they renamed it "Rosetta".
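
(A toy sketch of the idea in Go, nothing like how FX!32 or Rosetta actually work, just the shape of it: decode guest instructions, emit host instructions, and cache the translated block so the cost is only paid once. The instruction mappings are invented for illustration.)

    package main

    import (
        "fmt"
        "strings"
    )

    // A made-up mapping from "x86-ish" guest instructions to "ARM64-ish"
    // host instructions. A real translator works on registers, flags and
    // whole basic blocks, and falls back to interpretation when stuck.
    var rules = map[string][]string{
        "INC EAX":  {"ADD W0, W0, #1"},
        "PUSH EBX": {"STR W1, [SP, #-16]!"},
        "RET":      {"LDR X30, [SP], #16", "RET"},
    }

    // Cache of already-translated blocks: the part that makes it a JIT
    // rather than a plain instruction-by-instruction interpreter.
    var cache = map[string][]string{}

    func translate(block []string) []string {
        key := strings.Join(block, ";")
        if host, ok := cache[key]; ok {
            return host // hot path: reuse the earlier translation
        }
        var host []string
        for _, ins := range block {
            host = append(host, rules[ins]...)
        }
        cache[key] = host
        return host
    }

    func main() {
        fmt.Println(translate([]string{"PUSH EBX", "INC EAX", "RET"}))
    }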


Is this the same as code morphing mentioned here?

https://en.wikipedia.org/wiki/Project_Denver

Article seems to say NVidia couldn't get a license from Intel.


I worked with those guys on a project - also PPC to x86. It was pretty amazing and they were solid. Sad that so much of it ended up at IBM.

Apple's marketing around it was amazing; you'd think Apple built the technology in house.


Binary translation / JIT. Like Apple did with Rosetta or like DEC did on Alpha NT with FX!32.


It would be nice for everybody if we could shed the Intel "tax".


Well, everybody but the Intel employees here.


This is awesome: with a renewed AMD and now ARM gaining another big proponent (Apple being the first imo) there might finally be some real competition to Intel's place at the top.


Samsung and Qualcomm aren't big proponents of ARM?


They are, but ARM probably wouldn't exist in its current form without Apple.

ARM was developed in the 1980s by Acorn -- a small, mostly struggling British company -- as the Acorn Risc Machine. They wanted a simple 6502 replacement without going to the Motorola 68000, as Apple, Atari, Amiga (Commodore), Sinclair and others did.

Apple wanted to use the ARM in the Newton handheld, but wasn't going to bet on a chip owned by a small British company.

So, Apple persuaded Acorn to spin off the ARM operation into a separate company, Advanced Risc Machines. This was jointly owned by Acorn, Apple and VTI (40/40/20). The rest is history.

VTI got in because Acorn relied on its software tools to design the ARM chip and get it manufactured. Acorn had never designed a chip before and, in the beginning, didn't really know what it was doing.


Sammy and Qualcomm are big players, yes; I was just speaking regarding North American mindshare, which is a misgiving on my part. Though there is something to be said about the absolutely stellar single-threaded perf of the Apple designs. Not to mention their GPUs.


They're quite far behind Apple when it comes to perf. Aren't Apple's ARM chips in their iPad pro competitive with lower end modern x86_64 chips?


Yep, their best iPhone chip can out benchmark the 2015 Macbook or roughly equal the 2013 Macbook Pro.

https://www.google.com.au/amp/s/www.macrumors.com/2016/09/15...


The ARM server chips are not related to their mobile counterparts. Pretty much everyone is doing a custom microarchitecture, instead of growing mobile designs or building on ARM's own IP.

The 2017 chips (3rd generation) are Xeon E5 competitive.


Is that not also true for the Snapdragon chips? (835 for example)


No, not really. Apple's advantage is quite staggering.


And yet the Snapdragon has nothing to do with any shipped or to be shipped ARM server parts...


Competition is great, but this also seems like a rather big setback to the goal of having multi-cloud applications, in instances where the app can only run on x86-64.


Writing architecture-specific code is what would be a setback to that goal. Either write portable code, or be prepared to translate non-portable parts to new CPUs from time to time.
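
(One common way to contain the non-portable part, sketched in Go; the package, file names and function are invented for illustration. Two files shown together: only the tagged one needs attention when a new architecture shows up, everything else just recompiles.)

    // File dot_fast.go: built only on x86-64; this is where hand-tuned
    // assembly or intrinsics would normally live.

    // +build amd64

    package dotprod

    func dot(a, b []float64) (s float64) {
        for i := range a { // stand-in for the SSE/AVX version
            s += a[i] * b[i]
        }
        return
    }

    // File dot_portable.go: fallback used on every other GOARCH, ARM64 included.

    // +build !amd64

    package dotprod

    func dot(a, b []float64) (s float64) {
        for i := range a {
            s += a[i] * b[i]
        }
        return
    }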


There are fewer binary apps these days though. Java/Scala, C#/F#, JS, Python, Erlang should all be fine.


Don't most docker containers contain x86 binaries?


Of course, which is sort of the point: ARM is way behind with Linux packaging and distribution. Every vendor has their little tarball of junk that runs on their SoC or device or whatever. There's no architecture-unified distro you can install on everything. And this is where it gets hurt, because while Raspberry Pi can get away with handing out tarballs, no one is going to bet a datacenter install on anything other than RHEL or Ubuntu or whatever.


Debian and Ubuntu have had ARM ports for years. Prospective server vendors will need to address the driver situation but once you've booted the userland is highly portable.


Fedora has too, I believe. But that's sort of missing the point. Userspace ports got done years ago because they're the easy part. The hard part is the system integration that you sweep away as a "driver situation" to be dealt with by "vendors".

And so far there's no significant entry here for the ARM world. So you can't roll up a server install or Docker container that isn't, fundamentally, a hacked up tarball from some random vendor. And the market doesn't trust that.


The post I was replying to was explicitly talking about distributions, packaging, etc. I wasn't saying that there are no problems but disagreeing with the assertion that the problem is higher-level rather than lower level.

As for sweeping anything away, it's true that there's work involved but it's not like we're starting from scratch in 1985. There's a lot of industry experience supporting new hardware and any company serious enough to be bidding on a Microsoft order for a boatload of Azure servers isn't going to walk away because they can't figure out how to package up some drivers.


If you were talking about the mobile 32-bit world, you would be correct.

The 64-bit world is very different. It is very homogeneous, it looks like an x64 server basically. Userspace actually wasn't ported that long ago. Fortunately now all the major Linux players fully support ARM64 servers (you can download an ISO and install on any ARM64 server without any voodoo, just like x64).


Not ARM64 ports. Important difference. But yes, you can go download an ISO now and boot on any ARM64 server. Cool stuff.


ARM and ARM64 Windows uses UEFI and ACPI, with the same OS images for everyone.


Right, I was addressing the Docker problem in the grandparent, sorry for the confusion.


If you need a binary, Go can be cross-compiled for ARM as well.
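
(For example, assuming a Go 1.5+ toolchain; the file name is made up:)

    // hello.go: any pure-Go program cross-compiles for ARM straight from an
    // x86 dev box, no separate ARM toolchain needed:
    //
    //     GOOS=linux GOARCH=arm64 go build hello.go            (64-bit ARM)
    //     GOOS=linux GOARCH=arm GOARM=7 go build hello.go      (32-bit ARMv7)
    //
    // cgo is the usual blocker; CGO_ENABLED=0 keeps the build pure Go.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        fmt.Println("built for", runtime.GOOS+"/"+runtime.GOARCH)
    }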


Pretty much any noteworthy language can be, though.


As long as you only use pure Go code and no cgo.


Microsoft has UWP Apps as part of Windows 10. Any UWP application will automatically run in the .NET Virtual Machine and work on both x86 and ARM.

That, in combination with emulation for the x86-based legacy stuff, should be a powerful reason to use the Microsoft platform.

Even if ARM servers don't become a thing, the support of ARM servers only benefits Microsoft and their current "Universal Windows Platform" strategy.


Yes, Microsoft has UWP.

No, UWP and .NET are orthogonal. When I need to, I’m using C++ to develop my UWP apps just fine.

Edit: and even when using .NET, by default, UWP apps don’t run in the virtual machine. Instead, they’re compiled to native code with .NET Native.


This is correct. However, it's important to point out that the .NET native compilation can also target ARM architectures. For native development in C++, one would need to build and distribute ARM-targeted binaries also in order to actually be "universal".


It is no different for .NET Native.


This is why using some kind of bytecode as portable binary format has been a long tradition on mainframes, almost since the early days, when it was executed via microcode.

Modern computing has just been catching up with them.


Isn't it Microsoft who are making x86 emulation for ARM?


It's Microsoft indeed.


I would think any cost per performance advantage would be lost.


For the emulated apps, sure. But apps compiled directly to the new architecture will gain a lot and benefit from the new architecture. Microsoft is big on backwards compatibility, so adopting a new architecture would be a complete non-starter if it weren't for an emulator.


I think the point is that Microsoft can now ship Surface devices with ARM chips that can occasionally run x86 applications when needed. And it also allows Microsoft to ditch Intel if they really wanted to.


Hopefully they get it right this time. The last ARM Surfaces (Surface RTs) were hot garbage, barely functional enough to do email and a little web browsing on.


Actually they were better than that. Users loved them (check out the star ratings on user reviews) and some are still in use.

The design was fundamentally flawed for other reasons, but they performed reasonably well compared to rival tablets, while also offering multiple log-ons, multi-tasking and full Microsoft Office, which those other tablets lacked. They also supported Active Directory, ditto.


Maybe I got a bad one then, but I was not real impressed with mine.

1.) Keyboard constantly flakes out and stops working - or the keyboard and touchpad works, but the touch screen stops working.

2.) Extremely limited software choices. You're stuck with whatever small subset of the garbage in the Windows Store was cross-compiled for Windows RT. Stuck with IE, no options to get Chrome or Firefox or something that works a little better.

3.) Extremely anemic performance. Mine chokes and dies trying to read email (in the godawful Metro Mail app) and browse the ticketing webapp that my company uses.

4.) Suffers the brunt of the awful Windows 8 Metro UI design changes.

5.) The operating system is effectively dead in the water, and won't be getting updates.

6.) Microsoft took a $900 million write-down on the product[1].

[1] https://arstechnica.com/business/2013/07/microsoft-takes-900...


I haven't seen the keyboard problem. The main one I saw was a disappearing mouse pointer, which was fixed by removing and re-attaching the keyboard. Weird.

You could try a software reset for the performance problems, bearing in mind that the updates will drive you mad. Unfortunately you can't fix the slow CPU or the too-small RAM. However, the current performance should still compare reasonably with another 2012 tablet.

> The operating system is effectively dead in the water, and won't be getting updates.

Look out for Windows 10 on the Snapdragon 835 later this year ;-)

> Microsoft took a $900 million write-down on the product

Yes, I bet that hurt, even when you have the odd $100 billion in spare cash.


I agree with almost all you say, but you couldn't AD join a Surface. If that were to have been enabled by Microsoft, my conjecture is that it would have become the tablet for business as the market was still open at that point.


> but you couldn't AD join a Surface

Sorry, my mistake. Many thanks for the correction!


I don't think you'd run an entire OS like this. But some processes might not have been ported to ARM and those need an emulation layer. Think Apple's Rosetta for running PowerPC binaries on Intel processors to make the switch less painful.


ARM and ARM64 Windows are 100% native code out of the box. On ARM64, the translation engine might be disabled if you really want to.


AMD designs are used as a part of the design, and AMD has shown x86 and ARM chips which fit in the same sockets


Project Skybridge was cancelled as far as I know. [0] Not sure if we'll see something like that in the future, considering how quiet AMD is about their ARM plans. Although an ARM chip with really good power efficiency would be nice below Ryzen.

  Finally, we’ll note that this roadmap is empty of any mentions of project Skybridge. CEO Lisa Su has commented that AMD has decided to change their focus away from Skybridge based on customer feedback. Customers were telling AMD that they didn't necessarily need socket compatible solutions, so AMD is going to work on differentiated solutions. That said, given that Skybridge was announced last year and planned for 20nm, I suspect that it has also become a victim of the short lifespan (and underperforming nature) of 20nm, leading to it being downplayed in favor of 14nm products.
[0] http://www.anandtech.com/show/9231/amds-20162017-x86-roadmap...


On the same motherboards? That's rather impressive, if the same motherboard can support two different chip architectures. Having an ARM chip that just happens to use an AM3/AM4 socket is less impressive.


I'm purely a layman in this space, but I don't see why it would be impossible. Pretty much all modern CPUs are actually SoCs. Modern peripheral connections like PCIe and USB 3 have direct access to the CPU rather than going through custom bridge chips, simplifying board design and making it possible to use the same board for two architectures. Audio and network chipsets don't care what architecture the CPU uses.

Again, I'm far from an expert but it seems like a simple enough solution.


As someone with a lot of experience in board bring-up, trust me, the hard part isn't grabbing AMD's reference board, sticking on an audio chip and a wifi chip and sending it over to a board manufacturing shop.

The hard part is wrangling the software: BIOS, drivers, testing and more testing, patches, microcode updates. Finding out that your wifi chip vendor has moved the chip to end-of-life just before you were ready to ship would be a non-issue if the software part were faster.


And it means that presumably you could put a RISC-V or a Power8 in there too. ;- )


AM4 chips are SoCs which do not need a chipset to work technically


I am far from an expert on this, but: for most software, can ARM realistically replace Intel? If we ignore the obvious advantages of the intrinsic instructions found in most Intel chips...

Intel has a really solid and reliable clock speed with a relatively small number of steady states (low power, turbo, default). In my limited experience, ARM chips seem to have variable clock speeds depending on both the type (floating point/integer) and magnitude (number of cycles/sec) of the load. Is this only apparent in the IoT space? Does it disappear in ARM chips designed for servers? Am I delusional about this?

It is really difficult to understand just how sensitive any given application is to the monotonic clock. Can most applications really just be ported?


To cut a long story short: Yes, most applications can just be ported.

The number of applications that rely on the actual clock rate being steady is extremely close to zero. Bear in mind nearly everything runs fine on laptop machines, which generally have a much wider range of clock states, and have no problems.

Similarly, each core in modern machines is likely to be running at a different speed, and again, no problems.

Finally, the clock rates used are a matter of OS policy; if you tell your OS to keep the CPU at a particular speed, it'll do it, regardless of whether it's an x86 or an ARM CPU.


I'm struck by how hard it would be to track down a bug that depends on the consistency of the clockrate...


Bugs like the ones you describe come down to timing.

But it's not like "the old days" where, for a specific target, there was only 1 chip/clockspeed, so devs simply relied on it for timing since an RTC was either not available or too slow to access.

These days, it would be very hard to write code that relies on a specific clock rate and have it work reliably. It's a lot easier, and more reliable, to use the clock for time-sensitive stuff.
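
(A small Go example of the "use the clock, not the clock rate" point; everything here is standard library:)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()

        // Wait on a timer instead of spinning a loop calibrated to some
        // assumed number of instructions per second.
        time.Sleep(250 * time.Millisecond)

        // Elapsed time is measured by the OS clock, so it comes out right
        // whether the CPU ran at 1 GHz or 3 GHz, throttled halfway through,
        // or moved the work between cores.
        fmt.Println("elapsed:", time.Since(start))
    }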


I'm very curious how this will work for AMD with their K12 chip. They will have high performance x86_64 and AARCH64 chips.


Last I read K12 was cancelled.


Maybe you mean Skybridge? The last I read K12 was only delayed until 2017/2018 (but Skybridge was cancelled).


I don't believe it was canceled, it was certainly delayed to finish Zen though.


Other than some of the weird licensing I've heard about (but know very little about), I love so much about ARM / RISC based systems. The one thing I find myself needing is higher single-core clock speeds, not just on ARM but on x86 as well. Considering how popular and accessible interpreted languages such as Python are these days, and yet how hard it is to write multithreaded code easily, I often find processes bottlenecked by a single core. I'd love a high clock speed (3.5GHz+) dual (or more) CPU ARM setup with at least 24 physical cores and a sensible amount of L1/L2 cache, but as far as I'm aware it's not really something that's readily available. Something like that would hum with modern Python / Ruby apps, or even an ARM build of PostgreSQL and a modern (4.10+) kernel with NVMe storage (if that exists / makes sense for ARM), running either CentOS 7 plus some trusted third-party mirrors such as elrepo/epel, or Fedora Server, which I have not actually tried in a production environment so I can't comment on its update stability, but have found to be the most reliable desktop distro (excluding macOS).



Question - is this a sign of maturity of ARM architecture? Can we really expect desktop OS to move to ARM? Could Apple start transitioning Apple AX into their laptops?


Servers and desktops have very different requirements, just ask the linux guys. I'm not saying it's impossible, but I don't see that happening for another 5 years at least. Apple in particular is already getting a lot of flak for underpowering laptops, switching to Arm now would basically kill the entirety of their remaining high-end offering on desktop.


On the other hand, their A10 chip is roughly on par with the performance of the Intel chip they include in the 12" MacBook. They're not going to replace an i7 with an A10, but maybe replace the M3 or M5 with an A10, replace the i5 with an A11, and shoot for the i7 with an A12 or A13. Why else would they dump so much money into making the fastest ARM chip on the market? The iPhone doesn't need that much power, but they keep pumping it up anyway.

We all know OSX will run on ARM, they just use a different WM and call it iOS.


I've been speculating that they'll do an A11+x86 combo, possibly pairing a very underpowered, bargain-basement x86 with an ARM chip for native binary compatibility during the transition.


Apple does have a massive team working on optimizing their ARM chips: they started out by redesigning the uncore, hired quite a bit of talent from other companies, and have since moved on to optimizing the CPU itself.

They even forced Global Foundries and Samsung to sync processes at 14nm to build their chips:

http://semiaccurate.com/2014/04/17/semiaccurate-right-apples...


Microsoft is releasing a version of Windows client (not to be confused with server, which was today's announcement) for ARM with an x86 compatibility layer.


For servers - yes. And not just the 64-bit ISA, but the server specs/standards, the hardware ecosystem, OEM/ODM engagement...and software, of course! And conjecture - in the cloud it won't matter what arch you are on.


Dumb question. Are there commodity ARM SOCs with built-in PCIe controller logic? Or is the idea that you have a separate controller chip? Is PCIe used in ARM systems or do they use a different bus - axi??? Sorry for the dumb questions. Hardware hobbyist here :)


Yes, many ARM-based SoCs include PCIe IP: SoCs from Freescale, TI, Nvidia, etc., typically supporting PCIe RC (root complex) and EP (endpoint).

Lots of boards with ARM-based SoCs are using PCIe WiFi/BT combo chips, which traditionally used SDIO. PCIe is also being used for storage.


They're not replacing Intel chips from the looks of it though, right?


No, they're going to use it in Azure where it makes cost sense, and future versions of Windows and Windows Server will run on ARM with emulation for x86 if you need it, but x86 support is definitely not going away.


> x86 support is definitely not going away.

To say the least. :)


Payback time?

https://semiaccurate.com/2016/11/17/intel-preferentially-off...

Okay, obviously this partnership started years back, but it's nice to see that not everyone is willing to encourage Intel's monopoly, as Google often does (in Chromebooks, too, even though Intel's chips are virtually unnecessary there).

Although, to be fair, the "Wintel" name didn't come out of nowhere. Microsoft obviously played its part in growing Intel's monopoly for a long time, too.


Google's doing quite a bit of work to make ARM a viable Chromebook processor. http://www.theverge.com/2017/2/22/14691396/google-chromebook...


Microsoft tried as much as it could to avoid being locked into Intel. They built Windows NT to be multi-platform; at one time or another it ran on MIPS, PPC, and Alpha, and a later version ran on Itanium.

Intel has been much more dependent on Microsoft's monopoly than the contrary.


> Microsoft tried as much as it could to avoid being locked into Intel.

... while Intel kept looking at Linux and anything else that might help it avoid being locked into Microsoft ;-)


The amount of relevant Windows software that can run on ARM is a rounding error, and x86 emulation on ARM sounds completely unusable.

Microsoft is very dependent on x86.


Microsoft is still very dependent on x86. But the two main platforms with potential going forward, Server and Office, are now both running on ARM.


The opposite is still very much the case. 64-bit desktop processors only started selling when a version capable of running 32-bit Windows became available (and not from Intel, who had put all its eggs in the Itanium basket), and they only took off after a 64-bit version of Windows for them was released.

Inertia is a powerful thing.


So were they translating asm to C?


No: NT was written in a portable high-level language from the beginning. They had a few platform-specific ASM blocks, but one of the big selling points in the 90s was true API compatibility across x86, PowerPC, Alpha, MIPS, etc. In the pre-Linux era that was huge – porting across vendors' different conceptions of Unix was a major time-sink.

DEC built an impressive x86 to Alpha dynamic binary translator but that was their work, not part of NT.

See https://web.archive.org/web/20020503172231/http://www.win200... for more history from that era


In the end, the lack of software (on Windows, the vendor would need to recompile the code for every ISA) was what doomed Windows on PPC for me. Even though it was ridiculously fast, and a nicer experience than MacOS 8, neither Visual Studio nor Outlook was there.


Google have been doing something with POWER CPUs for a few years now, at least on an experimental basis.

2016, Power 9:

http://www.computerworld.com/article/3052951/cloud-computing...

2014, Power 8:

http://www.pcworld.com/article/2149080/google-shows-homegrow...


Each time Google looks into POWER, it seems to be when it's time to renegotiate with Intel. This is a tried and true tactic for better Intel pricing at this point. I was even talking about it (and got somewhat of a corroboration) a few days ago[1].

1: https://news.ycombinator.com/item?id=13786170


That's rather sad. POWER has seemed to be lingering in this half-life for many years now, where it's valuable enough for development to continue, but it's not competitive against Intel for broad use.


Strange that so much of the discussion is about games. The article is more about cloud computing, including AI, the impact on HP, etc.


I'm more interested in the new AMD processors in the server space. That's going to be interesting.


AMD really needed the break. This is fantastic news. Don't all consoles also use AMD chips?


ARM, not AMD. But yep, AMD is in PS4 and Xbox One. Nintendo Switch looks like an Nvidia Tegra though (ARM).


ARM isn't used for the main CPU for the PS4 or Xbox One, both run customized APUs using x86-64, just like AMD's consumer APU offerings.

https://en.wikipedia.org/wiki/PlayStation_4_technical_specif...


Why though? ARM server chips haven't beat Intel just yet.


Think of how rapidly they've caught up given Intel's lead. I think AMD and ARM are great examples that Intel's lead is mostly inertia.


Not sure, but you can pick up a 1U server running 96 ARM cores today. ARM cores are optimized for certain types of work and have instructions to accelerate all sorts of things like AES, SHA1/2, video codecs, etc.

So depending on your workload, a 96 core 1U is going to save you a lot of U's and power. No idea how Windows Server and Azure services fall into this. Maybe they want to dump specialized tasks onto ARM instead of churning through their x86 infrastructure which takes longer and uses more power.

Data centers are power bound. Power is expensive and instantly turns into heat, which requires yet more power to exhaust and cool. Anything that can bring significant power savings will be taken seriously in data centers. If ARM servers can deliver these power savings, then it's a no-brainer to buy them. The cost of porting Exchange, SQL Server, SharePoint, IIS, and Windows Server to ARM is going to be a fraction of the power bill those DCs run up. Now that MS is actually paying the server bills, they're realizing that pegging your products to x86 alone isn't the wisest move.
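
(If you want to know whether a given box actually exposes those accelerated instructions, a quick check is the CPU flags the kernel reports. A rough Linux-only sketch; on ARMv8 the relevant features show up as aes/sha1/sha2, on x86 as aes/sha_ni:)

    def cpu_crypto_flags(path="/proc/cpuinfo"):
        # Collect the crypto-related feature tokens the kernel reports.
        wanted = {"aes", "sha1", "sha2", "sha_ni"}
        found = set()
        with open(path) as f:
            for line in f:
                if line.lower().startswith(("flags", "features")):
                    found |= wanted & set(line.split(":", 1)[1].split())
        return found

    print(cpu_crypto_flags())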


> Not sure, but you can pick up a 1U server running 96 ARM cores today. ARM cores are optimized for certain types of work and have instructions to accelerate all sorts of things like AES, SHA1/2, video codecs, etc.

True, but that 96 core 1U server is going to bear a high price, and Intel CPUs also have instructions to accelerate AES, SHA1/2, video codecs, etc.


You can pick up this 96 core server right now for $4,000. $4,000 is nothing in the world of DC servers.

https://www.avantek.co.uk/store/arm-servers.html

The 'big dog' Avantek sells right now seems to be this 384-core 2U solution at around $15,000.

https://www.avantek.co.uk/store/avantek-384-core-cavium-thun...

These are very cheap pieces of hardware. Price up an HP ProLiant with multiple 8-core Xeons and see how crazy that pricing gets, not to mention the power usage. Pricing is good right now for ARM. The question is: does ARM work for what you're computing? According to this article, ARM is probably best for high-throughput, high-RAM workloads, not necessarily number crunching:

https://www.servethehome.com/exclusive-first-cavium-thunderx...

Mind you, the dual Xeons it's competing against go for anywhere between $3,000 and $7,000 each. That's just the CPU street price, before the HP/Dell markup and, of course, the rest of the server. Ask a VAR what a dual or quad E5-2699 v3 (or current v4) box goes for. You're probably looking at $25,000 to maybe even $50,000 here.
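
(Back-of-the-envelope $/core from the prices quoted in this thread; street prices vary wildly, so treat this as illustrative, not a real TCO comparison:)

    # Rough $/core using the figures mentioned above.
    boxes = {
        "96-core ThunderX 1U":        (4000, 96),
        "384-core ThunderX 2U":       (15000, 384),
        "quad E5-2699 v4 box (est.)": (50000, 4 * 22),  # ~22 cores per E5-2699 v4
    }
    for name, (price, cores) in boxes.items():
        print("%-28s ~$%d/core" % (name, price // cores))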


> 96 core 1U server is going to bear a high price

The server market cares mainly about price, power, and size. 1U is very small for such a parallel system, you don't need to write your code for Xeon Phis or anything like that, and the power draw is low compared to Intel.

Win, win, and win.


Does the market care much about size? I thought there were lots of datacenters that reached their limits on power and cooling despite having floor space available for more racks.


It varies from place to place, but sometimes space is limited. I know some places need "off-site" backups that are within walking distance (not really all that off-site). For companies in NYC or other densely populated areas, space is a factor.

Also, if a machine used half the power and was twice as powerful, but you could only fit one in your entire datacenter, I don't think many would go for it.

Having many systems is a fault-tolerance assurance. Being able to increase your power efficiency, processing ability, AND capacity, all by switching to a system that's priced competitively with (or cheaper than) the market standard? That's a winning combo.


> Does the market care much about size?

If it didn't, we wouldn't have blade servers, for which the main selling point is density.


I cannot wait for ThumbEE to make it into the "mainstream" ARM server market. Think about the huge speed boosts we could see.

I don't know why Oracle/Python/Microsoft aren't pushing this harder. Having your JIT-ed code running directly on the CPU? What could be better than that?


Not an expert (your statement just triggered a quick Google search), but:

https://en.wikipedia.org/wiki/ARM_architecture#Thumb_Executi...

Says:

"On 23 November 2011, ARM Holdings deprecated any use of the ThumbEE instruction set,[77] and ARMv8 removes support for ThumbEE."


DARN! That's too bad; that would have been really cool. I know there are a few in-hardware JVM systems used in cellphones that could have been put to use.


Well, except that it would have been slower... that's why it got removed.


> Having your JIT-ed code running directly on the CPU? What could be better than that?

Although it looks good on paper, Lisp machines, Ada machines, and mainframes have repeatedly shown that a JIT on a general-purpose CPU allows for more optimizations than putting it in the hardware.

That's why systems like IBM i have their JIT in the kernel, not in the CPU.


For what?

Microsoft is not really a dominant force in the server space.


Azure is one of the big three in cloud computing - Google, Amazon, and Microsoft


And they also run some pretty large things that could be run on ARM with no visible impact to customers. Bing, hosted Outlook, etc.


Sure, but it is a single cloud.

Will you ever be able to buy these products? And would you ever want to? Windows Server 20XX on ARMv8 will forever remain a niche platform at best.


> Sure, but it is a single cloud.

Not sure what that means. Arguably, the cloud computing business is what's driving high performance computing at this point. Go look at the benchmarks for Ryzen, for example. It's pretty clear who AMD is targeting with that architecture (hint: games aren't in desperate need of better multi-core performance).

> Will you ever be able to buy these products? And would you ever want to? Windows Server 20XX on ARMv8 will forever remain a niche platform at best.

Did you read the article? This is about MS supporting ARM in their cloud offerings (and porting windows to do so). Whether or not consumers will purchase these solutions seems irrelevant in this context.


Their new datacenter is a mile long. It would be profitable to sell chips just for the one datacenter.


Most of the world you take for granted runs on Azure. I don't mean by the numbers, but by importance. Big enterprise loves it. AWS is technically bigger, but it's mostly for web technologies. Many companies run on some level of Azure or O365. It's insane how big MS is in this space, and how quietly, because they don't get the big headlines; the Windows server space is 'boring enterprise stuff.'


MS is one of the major players in Open Compute and in the top three for cloud services.


According to the article:

"its Azure service is No. 2 in cloud infrastructure behind Amazon, and it's one of the biggest server buyers"

You don't have to be "dominant" to have influence.


They are quite large in the server space: http://map.buildazure.com



