So all you do is: put out some press releases, ramp up some hires, put out some glossy product roadmaps. Intel starts to get separation anxiety and flinches: well, maybe they can afford to come closer to AMD prices for another year. They kiss up and you back out of your false posturing. Rinse and repeat every few years.
At that point you've burned the other company so many times by changing at the last minute that it becomes hard to get real support and engagement where you actually need it.
Has that ever been the case yet, for ARM servers?
There's no "magic" in ARM. The reduced power comes from the reduced complexity. Heck, branch prediction is fairly new in ARM. I have trouble seeing a time when an Intel chip will ever fall behind even a bucket full of ARM chips, unless workloads ever fall near zero.
Intel's designs, being tuned for high sequential performance, are significantly better for that purpose; but if you look at VPS services which currently only get a fraction of a CPU each, I'm sure they could run more power-efficiently with a larger fraction of a punier CPU.
For example, Jim Keller estimates a fully developed ARM chip will be 15-30% more efficient than an equivalent x86. Now, this development hasn't happened yet, but with giants like Qualcomm involved, it will happen eventually.
Besides, never doubt the word of Jim Keller ;)
> QDT has been working with Microsoft for several years on ARM-based server enablement and has onsite engineering at Microsoft to collaboratively optimize a version of Windows Server, for Microsoft’s internal use in its data centers, on Qualcomm Centriq 2400-based systems. QDT’s OCP submission is the result of a multi-faceted, multi-level engagement between the two companies to facilitate ARM adoption into the data center. It encompasses multiple areas of hardware and software including board development, firmware, operating system, compilers and tools, and CoreCLR.
Also, I doubt Microsoft went through the trouble of supporting full x86 programs on Qualcomm's ARM chips just to improve the relationship with Intel. If anything, finally bringing ARM to not just servers, but Windows notebooks is what will make Intel even angrier with Microsoft.
So far, the main reason why ARM couldn't enter the PC market was the Windows lock-in: if x86 programs didn't work on ARM, then they had no chance (save for UWP apps gaining huge popularity, which hasn't happened yet, and which I don't expect to happen for at least a few more years, if ever).
Keep the x86 CPU around for active, high powered use, and then use a low power ARM CPU to run downloads and software updates while the PC is in an off or idle state.
Literally any ARM chip. What ARM chip do you think makes a fitting replacement for a high performance Intel core in a Macbook?
The ARM server ecosystem has been in the works for a few years. At TechCon '16 we saw the announcement that the software is fully there, and this is not hype: all major distributions support all the major players out of the box. Download an ISO, install, and use it.
HW-wise we are now in the 3rd gen of chips (QC Centriq, XGene3, ThunderX2), all of which compare with E5s, not Atoms. The first and second gens were mostly proving grounds for the specs and software work, except for the Cavium machines, which have some serious network I/O where the relative wimpiness of any one of the 96 cores simply doesn't matter for certain workloads. In a matter of two years we've seen private datacenters soak up XGene2 and Cavium chips. We've seen ODMs and OEMs get their shit together and start selling real kit (b2b.gigabyte.com).
It's funny, people expect these things to suddenly appear overnight, everywhere, and competitive with E7s and POWER9. That kind of ecosystem usually takes decades, but the ARM server guys managed to fit in nicely in about 5 years. I am excited.
What ARM will bring is a competitive market and no Intel tax.
The average ARM 1U will be massively cheaper. For servers, OpenPOWER is more interesting.
I stand corrected. It's been a while since I've looked at the performance numbers. I didn't realize they'd caught up so quickly.
Since MS is mostly "behaving" themselves, Valve is keeping SteamOS around, but not focusing on it.
At least for now... (it wouldn't surprise me if MS became more aggressive in their W10+DX12+"Xbox everywhere"+"MS Store" push...)
Most of the issue revolves around GPU drivers, which in theory Valve is not responsible for, but they have hired devs to work on the open source AMD drivers. Even then, most devs don't optimize for OpenGL. That's again on the dev and not Valve, unless Valve is going to provide devs to every game developer out there to provide Linux support. Vulkan (hopefully) would resolve that, as optimizing Vulkan builds makes games run better everywhere.
Of course they aren't putting ALL their eggs in this basket. ~90% of their revenue comes from Windows, so they should be putting most of their effort into Windows support. You can't lose your current customers while trying to make the future better. The fact that they are in fact fixing things and hiring driver devs is a great sign. And I believe they provided devs to the Rocket League developer too?
Either way, progress is coming, slow and steady. Linux as a primary gaming platform works just fine for me, thanks to Valve.
Confirmed by a happy occasional CS:GO player here.
Sony, Nintendo, and Microsoft (especially Microsoft) go the extra mile to make sure games perform well on their platforms, even special-casing APIs for specific games.
Valve isn't trying to discourage devs from using Windows. Windows is still their main income source. Linux-only devs aren't going to make Valve money during the transition away from Windows. The goal is to build up the game library on Linux so it becomes a valid competitor.
Also, they can't really make a ton of special OS-level APIs, since they are trying to keep things generic. They don't want to fork Linux, they want to maintain compatibility. I mean, open is the entire premise of leaving Windows. And if you mean higher-level APIs, there are things like https://valvesoftware.github.io/steam-audio/ . One of the top issues in porting a game is that DirectX covers more than just graphics; OpenGL + SDL2 covers most of it, and Steam Audio makes it so you can write code once and have it work everywhere.
So yeah, they are working on making APIs that make it easier for devs to release Linux/SteamOS games without needing the player base already there, because Linux "just works" with the middleware.
The short of it is: they are still actively doing a lot of stuff to advance Linux gaming, it's just that a lot of it is behind the scenes now and aimed at devs, or not directly Steam-related (you don't see the fact that they hire Mesa devs through Steam updates).
When one attends GDC, they have booths where they give support directly to customers; they also fly in to relevant studios to sort out any kind of performance or bug-related problems they might be facing.
And the point still stands that this is a future investment for them. They aren't pushing to make Linux exclusive, they are pushing to make Linux viable. Valve has ~360 employees. Linux has a ~1% gaming market share. If Valve has even 4 employees working on Linux, they are essentially scaling their effort based on current income from the platform, whereas Sony/Microsoft get 100% of their income from a single platform, so they spend 100% of their expenses on building that platform. If Valve switched over and said "ok, all employees full bore on SteamOS, ignore Windows", they would lose a LOT of money. Besides, the goal here isn't to make SteamOS "their" platform, the goal is to make gaming work on an open platform. Microsoft has to entice people with support in order to keep them locked into their garden; Valve is trying to unlock the garden (to an extent).
If they want growth, they should spend more than just 1% on the platform, but people are acting like they are spending 0% on the platform. It's still improving and actively being worked on, but this is a big undertaking for a team of 4: OS updates, dev support, the Steam client, etc. And if the team is bigger than 4, then they are in fact spending more than 1% on a 1% market segment, which still proves the point of "they haven't abandoned it, it's just not growing at the 'right' pace in publicly obvious ways".
Windows also runs on arbitrary hardware
Any sources to back up this claim? 10% performance hit compared to what? Windows? A different Linux distro?
I've found that for native games, Linux can actually offer better performance in many cases than the Windows version.
Someone also did a test recently (full video on YouTube) with the Windows version of the new Doom running on Linux through WINE (yes, I know it's not technically an emulator) via Vulkan, and the performance exceeded the Windows version.
This isn't necessarily valve's fault though. It's because the devs will shit out a Linux build without even trying to run it or they just don't care.
Some games work great. Some don't work at all and shouldn't even be listed with Linux support.
I sincerely hope Linux takes off as a viable gaming platform. All the tools and components are there. The only thing missing at the moment is developer mindshare.
There are some problems and glitches unrelated to the games, in my experience (like how newer nvidia GPUs have screen tearing on Ubuntu -- unrelated to vsync, by the way -- and there's pretty much nothing you can do about it for the time being)
Ah, here it is. From https://www.reddit.com/r/linux_gaming/comments/3snvpc/steamo...
"took a quick peek at the latest Geekbench executable for Linux(3.3.2,) it was compiled with GCC 4.4.3 on Ubuntu 10.04 - a 5 and a half year old compiler. Windows version was compiled with MSVC 2012 update 4 released 2 years ago."
Take note: that Ars article was done ~15 months ago (November 2015). In it, they used Ubuntu 10.04, which was released in April of 2010 (thus '10.04')! Steam wasn't even released on Linux until nearly 3 years later, in 2013!
That article is so BS that you should ignore it outright. It's not even fit to be toilet paper.
Borderlands 2, for example, plays fine for me on an i7 930 with a GTX 570. An older rig.
Dota 2 with OpenGL rendering hits or exceeds the Windows FPS when run on Linux.
Switch to Vulkan though and it craters.
Source engine games with OpenGL rendering often outperform Windows.
Nah, not really. Only Dota 2, because it has no wrapper, at least among Valve's Source games.
Isn't that the core market?
Then get a meeting scheduled with IBM and Oracle. Talk to the VAR or OEM guy in your office with your POWER mug poorly hidden. Start asking questions about TPC benchmarks and error correction. Ask if you can pay for cores only when you need peak capacity.
Intel will provide them with marketing support, potentially very significant if they care about you. Microsoft will also bend over backwards for an Oracle compete opportunity.
Dell did eventually start shipping servers with AMD processors in them, but it was long after they first started saying "We may start selling AMD based servers."
Microsoft and Exchange was a great example. Doing things like running big mailboxes, having HA services, and doing discovery were ridiculous hacks until Exchange Online came around and subjected them to the pain.
Similarly, I'm sure Microsoft has plenty of idle servers sitting around doing very little that could easily be running more effectively on ARM.
Intel has improved power management, but their low-end chips suck.
The real question is whether WSL (Ubuntu for Windows) will be available for AArch64 Windows, because the only real 64-bit ARM software ecosystem today is Linux (courtesy of Linaro, Red Hat, Canonical, SUSE, and ARM).
Between WSL and binary-translating legacy 32-bit x86 apps, this could be a decent offering for folks who already have Microsoft infrastructure. Even cooler if Hyper-V is ported.
This is something for which I give full credit to Linus and the other developers who have made Linux into a credible server operating system. Without that software base, ARM would never have been able to get where it has.
I get that most people never see the 'insides' of their systems, but it's always been something I cared about.
Also, if I recall, Windows NT for PowerPC never truly took off. And this was while Windows 95/98 was still dominant and before Windows 2000/XP, so there wasn't really consumer software for it. So, no real consumer operating systems.
PowerPC ended up being a dead end which only Apple pursued. Though I guess there were variants that ended up in gaming consoles for a while.
At a job I had (an IBM subsidiary) in 1997, they had a pile of early CHRP boxes hanging around. For kicks I got Linux running on them, just out of curiosity. They were basically just PCs (PCI bus, etc.) that ran with a PowerPC CPU. Which is effectively what Macs were for years, too.
As anyone who rode the rollercoaster of abandonment in the ActiveX years recalls, their previous MO was all about putting things on their roadmap and then quietly backing away from them :(
I really hope there's some kind of future for non-Intel players.
If you look at e.g. Linux or BSD distributions, the entire world is rebuilt for every architecture. Running Linux on powerpc, arm, amd64, I get the exact same experience across the board as x86 bar platform-specific bits like openfirmware/efi tools. Microsoft has never done this. The vast majority of their stuff remains x86 only, making arm and even x64 second class citizens, with x64 only being viable as a result of the x86 compatibility. Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.
If a bunch of volunteers can manage to provide over 20,000 software packages for over 10 architectures, totalling over 500,000 binary packages, it's entirely possible for Microsoft to support three. When I used to maintain the Debian build tools, it took around 18 hours to rebuild all 18,000 packages: compilers, kernel, tools, applications, everything. It would be much faster on a current system. It's all possible from a technical point of view.
This is demonstrably not true, there are plenty of ports that end up being Intel-only, and plenty of architectures that involve some sacrifice in terms of software choice if you want to run them. (as a SPARC and Raspberry Pi user I could elaborate, but hopefully you get the idea) Not that there's anything wrong with that.
> Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.
What you take seriously is your business, but the standard you're holding Microsoft to is one Linux doesn't meet, let alone BSD, and it's completely arbitrary regardless. If their ARM platform does what it needs to do, it doesn't especially matter if it offers support for (for example) legacy Windows cruft.
Sure, but you can compile failed builds of software yourself and get that tool on ARM or SPARC. ARM works pretty well on Debian, so long as you're not on Raspbian and using their ancient repos with broken software. Part of why I've avoided Raspberry Pis entirely.
I don't agree that it has a "different experience" because the tools and infrastructure are there to build the entire distribution from scratch. And this was done. I've done it myself several times. Once done, this architecture variant had the complete package set available for all the official supported architectures, bar any architecture-specific packages being added/omitted. Third parties can and do bootstrap and maintain entire architectures. I can't speak for the raspbian people and their port, but it's not hard to manually bootstrap the kernel and toolchain and then set an autobuilder loose on the entire source archive.
And that's the point I was trying to make about Windows; that's exactly what you can't obtain. Be it the old NT ports or the present day ones, outside the base "Windows" release and some key products, the rest of Microsoft's product portfolio is largely missing.
I don't think they're going to really square the circle the way Linux can, having everything available because everything is open source. On the other hand, it's not inconceivable they could end up with a server platform that does offer an awful lot of Microsoft stuff as open source, which also makes available all the Linux userland stuff you'd want to have. Their handicap might be that not enough of their infrastructure software has made it to .NET yet, but they've shown lately that they're willing to do some porting when it's appropriate. (SQL Server!)
But the Raspberry Pi is a fundamentally flawed platform, with poor I/O, binary blobs required to make the hardware function, and a community that is toxic towards free software, with its own vaguely supported distro.
An OrangePi Zero ($7) or OrangePi PC Plus ($22) will blow a Raspi out of the water any day, due to each USB port and the ethernet port being directly wired into the SOC, allowing 40MB/s per port. Plus, I can run kernel 4.10 and mainline Debian on it without any blobs, and the only things I'll miss out on are GPU support & WiFi. The VPU has been reverse engineered though, so H.264 & H.265 video works well.
I think that's a hostile mischaracterisation of the community!
"Microsoft will continue to provide mainstream support for Itanium-based systems until July 2013, and extended support until July 2018." -- http://www.computerworld.com/article/2516742/computer-hardwa...
Itanium came out in 2001 and the above article is from 2010, so that would be about 9 years of product releases and 17 years of support overall.
And, frankly, there's no money to be made off of supporting all those miscellaneous architectures so Microsoft has no reason to bother.
When you say "there's no money to be made... so Microsoft has no reason to bother", that attitude is a major reason why ia64 failed, why their previous ARM attempt failed, and why their current ARM attempt is also likely to fail. If the software isn't there, it's a poor proposition for most customers.
When I run Linux on ia64 or arm, I have an entire distribution's worth of software at my fingertips, and for the most part I'm not losing out compared with more popular architectures. With Windows, no matter how technically good the base platform may be, the ecosystem is a wasteland and will remain so until Microsoft put the effort in to support them properly.
Supporting multiple platforms is not expensive; it's simply a matter of having the build infrastructure in place. In Debian we had every package automatically built on 11 platforms. Microsoft could do the same for their applications. For example, see https://buildd.debian.org/status/package.php?p=okular&suite=... -- one package built for 22 platforms. Building for three or four is not a lot to ask...
Sometimes I wonder if IA-64 was just an exercise in killing off Alpha and HP-PA...
Anyway, x64 succeeded because instead of producing something no one asked for, and poorly (IA-64), AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.
If Intel had transitioned their processor line to IA64, without AMD to defy their roadmap, do you really believe consumer desktops would magically start using other vendor processors?
> AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.
Because they still had the cross-license deal with Intel that allowed them to legally build x86 clones.
No x86 licenses, no x64.
Windows Server comes with IIS and other services that would normally have been provided in a Linux environment by the Linux ecosystem. Also, as the article notes, SQL Server and Visual Studio were supported on Itanium as well.
As Itanium only succeeded as a server product, there's no business reason for Microsoft to have ported their desktop applications to it.
Any considerations for such a server/desktop split certainly should not apply to arm, which can be used for either. Also, contrast with the experience of ia64 on Linux, where I had the full set of tools, services, applications available. That's the sort of experience Microsoft should have provided, but didn't. And should also be doing for arm, but aren't there either.
It's funny, but Microsoft was spot-on correct to continue supporting 32-bit x86 on par with x64. Now they can just support binary-translating 32-bit x86 on ARM64 instead of being forced to support 64-on-64, which would simply involve more overhead.
Meanwhile, Linus had a PC with an Intel 386 processor, so that's what he started Linux on and for...
MS-DOS was based on code that Microsoft bought in, not having time to develop it from scratch. However, Microsoft did have some success on x86 with its PC version of Unix, which was called Xenix.
Basically a JIT that recompiles (for example) 32-bit x86 instructions to ARM64, so that you can run old apps on new CPUs.
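To make that concrete, here's a toy sketch in Python (text mnemonics and a made-up mapping table, nothing like any real translator's internals): translate each block once, cache it, and reuse the cached translation on later visits, which is where dynamic translation earns back its cost.

    # Toy dynamic binary translator: maps fake x86-ish text instructions to
    # ARM64-ish ones and caches translated blocks so hot code is only
    # translated once. Real translators work on machine code, not strings.
    TRANSLATION_RULES = {
        "mov": "mov",    # register move maps directly
        "add": "add",
        "cmp": "cmp",
        "jne": "b.ne",   # x86 conditional jump -> ARM64 conditional branch
    }

    block_cache = {}     # "address" -> previously translated block

    def translate_block(addr, x86_block):
        if addr in block_cache:       # hot path: reuse the prior translation
            return block_cache[addr]
        arm64_block = []
        for insn in x86_block:
            op, _, operands = insn.partition(" ")
            arm64_block.append(TRANSLATION_RULES[op] + " " + operands)
        block_cache[addr] = arm64_block
        return arm64_block

    print(translate_block(0x1000, ["mov eax, 1", "add eax, 2", "cmp eax, 3", "jne loop"]))
    print(0x1000 in block_cache)      # True: a second visit skips translation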
DEC had FX!32 to run x86 Windows NT programs on Alpha Windows NT.
Apple licensed Transitive's QuickTransit to run PPC OSX apps on x86 Macs, though in typical Apple style they renamed it "Rosetta".
Article seems to say NVidia couldn't get a license from Intel.
Apple's marketing around it was amazing; you'd think Apple built the technology in house.
ARM was developed in the 1980s by Acorn -- a small, mostly struggling British company -- as the Acorn RISC Machine. They wanted a simple 6502 replacement without going to the Motorola 68000, as Apple, Atari, Amiga (Commodore), Sinclair and others did.
Apple wanted to use the ARM in the Newton handheld, but wasn't going to bet on a chip owned by a small British company.
So, Apple persuaded Acorn to spin off the ARM operation into a separate company, Advanced RISC Machines. This was jointly owned by Acorn, Apple and VTI (40/40/20). The rest is history.
VTI got in because Acorn relied on its software tools to design the ARM chip and get it manufactured. Acorn had never designed a chip before and, in the beginning, didn't really know what it was doing.
The 2017 chips (3rd generation) are Xeon E5 competitive.
And so far there's no significant entry here for the ARM world. So you can't roll up a server install or Docker container that isn't, fundamentally, a hacked up tarball from some random vendor. And the market doesn't trust that.
As for sweeping anything away, it's true that there's work involved but it's not like we're starting from scratch in 1985. There's a lot of industry experience supporting new hardware and any company serious enough to be bidding on a Microsoft order for a boatload of Azure servers isn't going to walk away because they can't figure out how to package up some drivers.
The 64-bit world is very different. It is very homogeneous, it looks like an x64 server basically. Userspace actually wasn't ported that long ago. Fortunately now all the major Linux players fully support ARM64 servers (you can download an ISO and install on any ARM64 server without any voodoo, just like x64).
That, in combination with emulation for the x86-based legacy stuff, should be a powerful reason to use the Microsoft platform.
Even if ARM servers don't become a thing, the support of ARM servers only benefits Microsoft and their current "Universal Windows Platform" strategy.
No, UWP and .NET are orthogonal. When I need to, I’m using C++ to develop my UWP apps just fine.
Edit: and even when using .NET, by default, UWP apps don’t run in the virtual machine. Instead, they’re compiled to native code with .NET Native.
Modern computing has just been catching up with them.
The design was fundamentally flawed for other reasons, but they performed reasonably well compared to rival tablets, while also offering multiple log-ons, multi-tasking and full Microsoft Office, which those other tablets lacked. They also supported Active Directory, ditto.
1.) Keyboard constantly flakes out and stops working - or the keyboard and touchpad works, but the touch screen stops working.
2.) Extremely limited software choices. You're stuck with whatever small subset of the garbage in the Windows Store was cross-compiled for Windows RT. Stuck with IE, no options to get Chrome or Firefox or something that works a little better.
3.) Extremely anemic performance. Mine chokes and dies trying to read email (in the godawful Metro Mail app) and browse the ticketing webapp that my company uses.
4.) Suffers the brunt of the awful Windows 8 Metro UI design changes.
5.) The operating system is effectively dead in the water, and won't be getting updates.
6.) Microsoft took a $900 million write-down on the product.
You could try a software reset for the performance problems, bearing in mind that the updates will drive you mad. Unfortunately you can't fix the slow CPU or the too-small RAM. However, the current performance should still compare reasonably with another 2012 tablet.
> The operating system is effectively dead in the water, and won't be getting updates.
Look out for Windows 10 on the Snapdragon 835 later this year ;-)
> Microsoft took a $900 million write-down on the product
Yes, I bet that hurt, even when you have the odd $100 billion in spare cash.
Sorry, my mistake. Many thanks for the correction!
Finally, we'll note that this roadmap is empty of any mention of project Skybridge. CEO Lisa Su has commented that AMD has decided to shift their focus away from Skybridge based on customer feedback. Customers were telling AMD that they didn't necessarily need socket-compatible solutions, so AMD is going to work on differentiated solutions instead. That said, given that Skybridge was announced last year and planned for 20nm, I suspect that it has also become a victim of the short lifespan (and underperforming nature) of 20nm, leading to it being downplayed in favor of 14nm products.
Again, I'm far from an expert but it seems like a simple enough solution.
The hard part is wrangling the software: BIOS, drivers, testing and more testing, patches, microcode updates. Finding out that your wifi chip vendor has moved the chip to end-of-life just before you were ready to ship would be a non-issue if the software part were faster.
Intel has a really solid and reliable clock speed with a relatively small number of steady states (low power, default, turbo). In my limited experience, ARM chips seem to have variable clock speeds depending on both the type (floating point/integer) and the magnitude of the load. Is this only apparent in the IoT space? Does it disappear in ARM chips designed for servers? Am I delusional about this?
It is really difficult to understand just how sensitive any given application is to the monotonic clock. Can most applications really just be ported?
The number of applications that rely on the actual clock rate being steady is extremely close to zero. Bear in mind that nearly everything runs fine on laptops, which generally have a much wider range of clock states, without problems.
Similarly, each core in modern machines is likely to be running at a different speed, and again, no problems.
Finally, the clock rates used are a matter of OS policy; if you tell your OS to keep the CPU at a particular speed, it'll do it, regardless of whether it's an x86 or an ARM CPU.
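On Linux that policy is literally a sysfs knob. A minimal sketch in Python, assuming the standard cpufreq interface is present (writing the governor needs root); it works the same whether the silicon underneath is x86 or ARM:

    # Read and set the Linux cpufreq policy for cpu0. These sysfs paths are
    # the standard cpufreq interface; writing scaling_governor requires root.
    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    def current_governor():
        return (CPUFREQ / "scaling_governor").read_text().strip()

    def current_freq_khz():
        return int((CPUFREQ / "scaling_cur_freq").read_text())

    def set_governor(name):
        # e.g. "performance" keeps the core pinned near its maximum frequency
        (CPUFREQ / "scaling_governor").write_text(name)

    print(current_governor(), current_freq_khz())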
But it's not like "the old days" where, for a specific target, there was only one chip/clock speed, so devs simply relied on it for timing because an RTC was either not available or too slow to access.
These days, it would be very hard to write code that relies on a specific clock rate and works reliably. It's a lot easier and more reliable to use the clock for time-sensitive stuff.
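To illustrate the difference (the 3 GHz figure and the "calibration constant" below are deliberately made up; they're exactly the kind of assumption that breaks as soon as the CPU scales its frequency):

    import time

    ASSUMED_HZ = 3_000_000_000   # fragile: the real rate varies per core, per moment
    CYCLES_PER_ITER = 1000       # made-up calibration constant, equally fragile

    def busy_work(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    start = time.monotonic()     # robust: unaffected by frequency scaling or NTP steps
    busy_work(1_000_000)
    elapsed = time.monotonic() - start

    print("measured: %.4fs" % elapsed)
    print("guessed from assumed clock rate: %.4fs"
          % (1_000_000 * CYCLES_PER_ITER / ASSUMED_HZ))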
We all know OS X will run on ARM; they just use a different WM and call it iOS.
They even forced GlobalFoundries and Samsung to sync their processes at 14nm to build their chips:
Lots of boards with ARM-based SoCs are using PCIe WiFi/BT combo chips which traditionally used SDIO. PCIe is also being used for storage.
To say the least. :)
Okay, obviously this partnership started years back, but it's nice to see that not everyone is willing to encourage Intel's monopoly, as Google often does (in Chromebooks, too, even though Intel's chips are virtually unnecessary there).
Although, to be fair, the "Wintel" name didn't come out of nowhere. Microsoft obviously played its part in growing Intel's monopoly for a long time, too.
Intel has been much more dependent on Microsoft's monopoly than the contrary.
... while Intel kept looking at Linux and anything else that might help it avoid being locked into Microsoft ;-)
Microsoft is very dependent on x86.
Inertia is a powerful thing.
DEC built an impressive x86 to Alpha dynamic binary translator but that was their work, not part of NT.
See https://web.archive.org/web/20020503172231/http://www.win200... for more history from that era
2016, Power 9:
2014, Power 8:
So depending on your workload, a 96 core 1U is going to save you a lot of U's and power. No idea how Windows Server and Azure services fall into this. Maybe they want to dump specialized tasks onto ARM instead of churning through their x86 infrastructure which takes longer and uses more power.
Data centers are power bound. Power is expensive and instantly turns into heat, which requires yet more power to exhaust and cool. Anything that can bring significant power savings will be taken seriously in data centers. If ARM servers can deliver these power savings, then it's a no-brainer to buy them. The cost of porting Exchange, SQL Server, SharePoint, IIS, and Windows Server to ARM is going to be a fraction of the power bill those DCs run. Now that MS is actually paying the server bills, they're realizing that pegging your product to just x86 isn't the wisest move.
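Some back-of-the-envelope numbers make the point (every figure below is an assumption, not a measurement): the bill scales with fleet size and gets multiplied by cooling overhead (PUE), so even a modest per-server saving is serious money.

    # Hypothetical fleet: what a 30% per-server power saving is worth per year.
    SERVERS = 100_000
    WATTS_PER_SERVER = 300      # assumed average draw under load
    PUE = 1.5                   # total facility power / IT power (cooling overhead)
    USD_PER_KWH = 0.07          # assumed industrial electricity rate
    SAVING = 0.30               # hypothetical ARM efficiency gain

    kwh_per_year = SERVERS * WATTS_PER_SERVER * PUE * 24 * 365 / 1000
    bill = kwh_per_year * USD_PER_KWH
    print("annual power bill: $%s" % format(round(bill), ","))
    print("saved at %d%%: $%s per year" % (SAVING * 100, format(round(bill * SAVING), ",")))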
True, but that 96 core 1U server is going to bear a high price, and Intel CPUs also have instructions to accelerate AES, SHA1/2, video codecs, etc.
The 'big dog' Avantek sells right now seems to be this 384-core 2U solution at around $15,000.
These are very cheap pieces of hardware. Price up an HP ProLiant with multiple 8-core Xeons and see how crazy that pricing gets, not to mention the power usage. Pricing is good right now for ARM. The question is: does ARM work for what you're computing? According to this article, ARM is probably best for high-throughput and high-RAM solutions, not necessarily number crunching:
Mind you, those dual Xeons it's competing against go for anywhere between $3,000 and $7,000 each. That's just the CPU street price, before the HP/Dell markup and, of course, the rest of the server. Ask a VAR what a dual or quad E5-2699 v3 or current v4 goes for. You're probably looking at a $25,000 to maybe even $50,000 box here.
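If you just run the per-core arithmetic on the numbers in this thread (street prices, CPUs only on the Xeon side, and ignoring that a Xeon core is much faster per core, so this is density math, not a benchmark):

    # $/core using the figures quoted above; the $5,000 Xeon price is the
    # midpoint of the $3,000-$7,000 range, and an E5-2699 v4 has 22 cores.
    arm_cores, arm_price = 384, 15_000     # the 2U Avantek box
    xeon_cores, xeon_price = 22, 5_000     # per CPU

    print("ARM box: $%.0f/core" % (arm_price / arm_cores))
    print("Xeon CPUs alone: $%.0f/core" % (xeon_price / xeon_cores))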
The server market only cares about price/watt and size. 1U is very small for such a parallel system, no need to write your code for Phis or anything like that, and low power in comparison to Intel.
Win, win, and win.
Also, if a machine used half the power and was twice as powerful, but you could only fit one in your entire datacenter, I don't think many would go for it.
Having many systems is a failure-tolerance assurance. Being able to increase your power efficiency, processing ability, AND capacity, all by switching to a system that's priced competitively with (or cheaper than) the market standard? That's a winning combo.
If it didn't, we wouldn't have blade servers, for which the main selling point is density.
I don't know why Oracle/Python/Microsoft aren't pushing this harder. Having your JIT-ed code running directly on the CPU? What could be better than that?
"On 23 November 2011, ARM Holdings deprecated any use of the ThumbEE instruction set, and ARMv8 removes support for ThumbEE."
Although it looks good, Lisp machines, Ada machines, and mainframes have repeatedly proven that a JIT on a general-purpose CPU allows for more optimizations than having it in hardware.
Hence why systems like IBM i have their JIT in the kernel, not in the CPU.
Microsoft is not really a dominant force in the server space.
Will you ever be able to buy these products? And would you ever want to? Windows Server 20XX on ARMv8 will forever remain a niche platform at best.
Not sure what that means. Arguably, the cloud computing business is what's driving high performance computing at this point. Go look at the benchmarks for Ryzen, for example. It's pretty clear who AMD is targeting with that architecture (hint: games aren't in desperate need of better multi-core performance).
> Will you ever be able to buy these products? And would you ever want to? Windows Server 20XX on ARMv8 will forever remain a niche platform at best.
Did you read the article? This is about MS supporting ARM in their cloud offerings (and porting windows to do so). Whether or not consumers will purchase these solutions seems irrelevant in this context.
"its Azure service is No. 2 in cloud infrastructure behind Amazon, and it's one of the biggest server buyers"
You don't have to be "dominant" to have influence.