It's great to see IBM move slowly towards an open POWER architecture. At Red Hat we have access to lots of POWER7 and POWER8 hardware and it really is the fastest hardware available for anything, many times faster than x86-64 (of course with a massive cost and power budget to go with that). But when am I going to be able to buy POWER8 hardware for myself?? Is the expensive and experimental Tyan mobo the only thing?
It's worth noting that IBM /does/ have a virtual loaner program that provides free access to POWER7/POWER8 servers for software development: System i (OS/400) and System p (AIX/Linux/FreeBSD [eventually]).
Also, if you don't want to sign up (it's not a lot of hassle, but you have to use a VPN and instances only stay up for a week), OVH has a POWER8 cloud with small and large instance sizes.
Sure it could, hypothetically. It wouldn't be exactly fashionable or portable - think gargantuan Dell "portable workstations" rather than MacBook Airs - but it's doable.
And quite honestly, I'd be willing to sell my home and live in my car for the next decade or so in order to afford such a beauty should someone try and make one. Especially if it had some high-end GPUs wired directly into it via CAPI. Its battery life would be 4 seconds long, it would weigh more than a Ford truck, and it would run hot enough to cook my breakfast in the morning, but god damn would it be worth it.
Interesting development; let's hope IBM will pick this up and run with it as an officially supported OS at some point. It would give you a lot more headroom for when you're about to max out a single-server x86 setup.
It won't be long before someone runs a top-of-the-line x86-to-POWER8 benchmark using FreeBSD.
Which benchmarks are Linux-to-Linux? The article says the SAP SD test is AIX vs. Windows and the SPECjEnterprise test is AIX vs. Oracle Linux; and in the linked Oracle benchmarks, it seems to always be IBM/AIX vs. SPARC/Linux.
Namely, most applications that benefit from parallelism will benefit from POWER or SPARC, since both platforms pack a high number of cores and a high number of concurrent threads per core.
This doesn't sound like much for consumer devices (PowerPC was the gold standard for gaming consoles last generation, but now Nintendo's the only one still using POWER in any significant application), but with Erlang/Elixir and Scala (among other languages renowned for facilitating "easy" parallelism) being the hot shit right now, these advantages are going to be much more significant.
Given that Erlang can scale near-linearly with more cores, and that it's not uncommon to have 100+ cores in a single POWER8 node, it makes me wonder how much investigation has gone into POWER8 as an architecture for companies with highly network-centric products like WhatsApp.
Edit: WhatsApp is a highly network-centric product, and they also run on FreeBSD / Erlang.
I think that scalability aspect is precisely why Google (IIRC) is on the OpenPOWER board; there have been hints that they're interested in developing their own server boards in-house based on POWER8 (or perhaps a future generation).
- he's figured out the KVM issues: its lack of support for some mandated hypervisor APIs, plus other bugs
- then found that the existing powerpc pmap (physical memory management) code wasn't very SMP-friendly
- also found the PS3 hypervisor layer isn't thread-safe
These are some of the reasons I care about NetBSD (which supports some 56 hardware platforms from a single codebase) so much -- the real-world work of supporting real-world architectures pays off everywhere. It's not just academic work or nostalgia/eccentricity.
Being highly portable also helps keep your code clean and correct; this is one of the main reasons OpenBSD, for example, emphasizes running on everything from x86 and POWER systems to Zaurus PDAs and VAXen, even though portability isn't an explicit goal of OpenBSD the way it is for NetBSD.
I love NetBSD, don't get me wrong, but when managing multiple architectures you will inevitably end up with messy subsystems, because you won't be able to avoid situations like this:
> Every effort is made to keep everything cleanly split into 'Machine Dependent' (MD) and 'Machine Independent' (MI) areas. For example, an Ethernet chipset would have a single MI core hardware driver, which would be matched with appropriate MD bus attachment code for a given platform. Not all drivers are as clean as we would like for historical reasons, but any new driver will be, and old drivers are in the process of being converted across.
Machine-dependent parts are built against a hardware abstraction layer (HAL)[1] interface, so that spec gives a single abstraction the hardware-dependent code is written against and through which it interoperates with the rest of the OS. As I understand it, this is facilitated by the _build system_ (not a mess of if/thens in the code), like this[2]: