> Our goal was to develop a single operating platform across multiple 64-bit ARMv8-A server-class SoCs from various suppliers while using the same sources to build user functionality and consistent feature set that enables customers to deploy across a range of server implementations while maintaining application compatibility.
I wonder how successful this was. Previously, all x86 CPUs (including x86_64) would bootstrap into the same mode as 1970s CPUs and preserve all the functionality of the original ISA (we still talk to the RTC via inb/outb, e.g.). I suppose this changed a little bit after EFI/UEFI was offered?
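For the curious, that legacy path still looks roughly like this on x86 Linux. A minimal sketch, assuming root (for ioperm) and the classic CMOS index/data ports 0x70/0x71; the kernel's RTC driver does essentially this plus the locking and update-in-progress checks omitted here:

    /* Sketch: read the legacy CMOS RTC via port I/O on x86 Linux. */
    #include <stdio.h>
    #include <sys/io.h>

    static unsigned char cmos_read(unsigned char reg)
    {
        outb(reg, 0x70);   /* select a CMOS register via the index port */
        return inb(0x71);  /* read its value from the data port */
    }

    int main(void)
    {
        if (ioperm(0x70, 2, 1)) { perror("ioperm"); return 1; }
        /* registers 0x00/0x02/0x04 are seconds/minutes/hours, usually BCD */
        printf("RTC %02x:%02x:%02x\n",
               cmos_read(0x04), cmos_read(0x02), cmos_read(0x00));
        return 0;
    }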
ARM CPUs were not bound to this backwards compatibility, so AFAIK every vendor could implement their own bootstrapping functionality, and therefore having a single bootloader was challenging/impossible? uboot is a popular base to build on, but IIRC everyone provides their own tweaks to suit their SoC. Does TrustZone normalize the bootstrapping process for ARM devices such that we can write a single bootloader binary and expect it to work the same way across ARM server SoCs?
The approach here for 64-bit ARM on servers has been to say "it must run UEFI". Then the vendor-specifics get dealt with by UEFI and ACPI, and the kernel can simply assume those facilities exist (and the distro can install new kernels in the usual UEFI way without having to know how to flash them into 500 different bootloaders). You can argue the merits and demerits of UEFI (and people do!) but there's a lot of benefit in pushing for "all server hardware must work like this".
I wish Google would mandate that with newer Android versions instead of this /vendor crap in Oreo. Microsoft mandated UEFI for their phones, but unfortunately their bootloaders are still locked. If they weren't though, it'd be a great standardized platform for alt OSes.
Coreboot has some pretty solid ARM support too. Modern Chromebooks (both x86 and ARM) use Coreboot with a "depthcharge" payload. You can even use Libreboot at this point to bring up an RK3288, and boot into ChromeOS or a traditional linux. Even cooler is that if you do your own build of Coreboot, you can provide your own OS verification keys, and use vbutil to sign your own OS and have custom verified boot.
Isn't UEFI way more complex? But I'm not sure if that's a problem because of Windows and all the drivers it has to support, or if it's a problem with the specification. I believe it was the latter? I think Google mentioned that in its recent talk on removing Intel ME.
Because UEFI gives more than boot support: it also acts as a firmware interface even after the kernel has booted up. The kernel can continue to use UEFI runtime services for hardware-specific things that are abstracted behind the UEFI interface.
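As a concrete (if minimal) illustration: on Linux the UEFI GetVariable runtime service is still reachable long after boot, exposed through efivarfs. A sketch, assuming efivarfs is mounted at its usual /sys/firmware/efi/efivars path and the system was booted via UEFI:

    /* Sketch: read the UEFI BootCurrent variable through efivarfs.
     * The first 4 bytes of each efivarfs file are the variable's
     * attributes, the rest is data (BootCurrent is a 16-bit entry number). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const char *path = "/sys/firmware/efi/efivars/"
                           "BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c";
        FILE *f = fopen(path, "rb");
        if (!f) { perror("fopen"); return 1; }

        uint32_t attrs;
        uint16_t entry;
        if (fread(&attrs, 4, 1, f) != 1 || fread(&entry, 2, 1, f) != 1) {
            perror("fread"); fclose(f); return 1;
        }
        fclose(f);
        printf("BootCurrent = Boot%04X (attributes 0x%x)\n", entry, attrs);
        return 0;
    }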
We're now in the 3rd generation of Arm servers, all built by tens of different OEM/ODMs, with SoCs from multiple vendors, to the same set of specifications: SBSA (Server Base System Architecture) and SBBR (Server Base Boot Requirements).
The goal of these specs was to make these servers as "boring" as possible, i.e. as similar to an x86 server as they can be, so that neither OEM/ODMs nor IT guys have to distinguish between supporting an Arm server and an x86 one.
This means that you will be able to boot the same binary OS distribution on every machine. There are no more BSPs like in the old 32-bit Arm world. This means that the OS does not need to know anything about clock domains, pin muxes, GPIOs or DVFS beyond the standard facilities exposed via ACPI. Like x86, machine error handling is firmware-first. PCIe works the same way as on x86. AP core boot-up and power-off are abstracted through PSCI (the Power State Coordination Interface). TrustZone is not available for OS use and is purely used to implement resident portions of firmware (RAS error handling, PSCI, SDEI).
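To make the PSCI point concrete: secondary cores come up by asking the firmware rather than by poking SoC-specific registers. A rough sketch of what a kernel's PSCI driver does underneath, per the SMC calling convention (privileged code only; on some systems the conduit is HVC rather than SMC):

    /* Sketch of a PSCI CPU_ON call from privileged AArch64 code.
     * 0xC4000003 is the 64-bit CPU_ON function ID from the PSCI spec;
     * arguments go in x1-x3, the result comes back in x0. */
    #include <stdint.h>

    #define PSCI_CPU_ON_64  0xC4000003u

    static inline int64_t psci_cpu_on(uint64_t target_mpidr,
                                      uint64_t entry_point,
                                      uint64_t context_id)
    {
        register uint64_t x0 asm("x0") = PSCI_CPU_ON_64;
        register uint64_t x1 asm("x1") = target_mpidr;
        register uint64_t x2 asm("x2") = entry_point;
        register uint64_t x3 asm("x3") = context_id;

        asm volatile("smc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");
        return (int64_t)x0;  /* 0 on success, negative PSCI error otherwise */
    }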
For people asking why UEFI and why ACPI, the answer is very simple: because that's what 99% of all deployed servers use. Using something else is just a friction point for the OEMs, ODMs, IHVs and firmware vendors. It would also be a friction for anyone consuming these systems. Sometimes, you have to be the adult in the room and say that you don't need an "ideal" solution, but the existing solution will work. Plus, the UEFI+ACPI world only really works when you are able to make all systems look more or less the same, hiding all the nitty-gritty shitty bus accesses (I2C for power buttons, SPI for flash, GPIO etc) completely in the firmware, not for the OS to care about. The OpenPower ecosystem didn't see this at all, and they have an elegant firmware solution (hostboot + skiboot + petitboot), but... why bother? It's yet another way to boot, and it just makes their systems foreign to 99% of everyone making and using servers. The OpenPower booting was basically built for Google, but the reality of Google is very different from the realities of the world.
How is uboot fundamentally different than BIOS or UEFI/EFI? It's not like all the PC clones through the ages had the same bootup sequence for RAM and peripheral init.
uboot is more stripped down than a traditional INT-call-based BIOS because it doesn't provide a boot API or any "runtime" services. Basically it does little more than early hardware init and a jump to the kernel image. For example, it doesn't have an execution environment for option ROMs. It's great if you're shipping a device with a fixed hardware configuration and kernel, but what ends up happening is that it fails to provide "platform" abstractions to the kernel. The result is constant churn at the platform level, because even simple operations like "I want to boot this kernel/configuration on the next boot" cannot be communicated from the OS/kernel to the firmware in a standard way (UEFI's answer to exactly this is sketched below).
UEFI, OTOH, is a bad combination of BIOS and Open Firmware, but it has standardized an execution environment that allows device vendors to build standalone "driver" packages that enable booting off plug-in network boards/RAID controllers/graphical displays/etc. Those drivers can then either be installed in the firmware or provided on option ROMs. There is a higher-level API so that grub/whatever can, say, read a config file written by the OS without having to know the underlying technology.
Basically, uboot is great for devices that could do without firmware and just boot a kernel; UEFI is useful if you want a standard environment usable by a generic kernel/OS across a wide range of devices, because combined with ACPI, AML, etc. it abstracts away much of the underlying platform management.
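To make the "boot this kernel next" gap concrete: on a UEFI system the OS just writes the BootNext variable and reboots. A sketch via Linux's efivarfs; the attribute prefix and single-write requirement are efivarfs conventions, the Boot0003 entry number is a placeholder, and in practice "efibootmgr -n 0003" does this for you (and deals with the immutable flag newer kernels set on these files):

    /* Sketch: ask UEFI firmware to boot entry Boot0003 on the next boot
     * only, by writing the BootNext variable through efivarfs. The 4-byte
     * prefix is the attribute word (non-volatile | boot-service | runtime
     * access = 0x7); efivarfs wants the whole thing in one write(). */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/firmware/efi/efivars/"
                           "BootNext-8be4df61-93ca-11d2-aa0d-00e098032b8c";
        uint32_t attrs = 0x7;     /* NV | BS | RT */
        uint16_t entry = 0x0003;  /* placeholder boot entry number */
        uint8_t buf[6];

        memcpy(buf, &attrs, 4);
        memcpy(buf + 4, &entry, 2);

        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }   /* may need chattr -i */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
            perror("write");
        close(fd);
        return 0;
    }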
It's different because uboot typically resides on what I would refer to as "end-user" storage. If it booted from an on-chip or on-board flash/ROM device and then loaded content from the end-user's device, you're right -- in that case it would be remarkably similar to BIOS/UEFI.
I hope prices on high-performance ARM hardware can come down a bit. Currently there's nothing between Chromebooks and $3000+ servers. On the other hand, if I were in the market for a high-end server, it looks pretty competitive vs. Xeon or Epyc. Any good benchmarks out there?
Hmmm, there's a lot of interesting stuff over there. If I didn't already have a dedicated router, the ClearFog boards look like a very nice router platform.
It seems like the ThunderX chips from Cavium are the most prevalent 64-bit Arm parts marketed as server platforms. Very high core count, high memory capacity. I've been hoping that these things take off because I love the idea: http://www.cavium.com/ThunderX_ARM_Processors.html
https://www.phoenicselectronics.com will sell you a Gigabyte MT30-GS2 (Cavium ThunderX 32-core) 1U system for around $2k. If you want to provide your own ATX case - much less.
ThunderX2 and Qualcomm Centriq (3rd gen arm server) systems have been recently announced (as in GA), but those will set you back quite a bit because they're not toys. But if you look at the 1st and 2nd gen systems, those are quite approachable.
It should be pointed out that the SolidRun machine, while in theory a decent machine, isn't going to run Red Hat, or for that matter much beyond the image it's shipping with. That _may_ change, but right now it's not quite done cooking.
That said, outside of the 10G Ethernet, it's pretty much bested by just about every m-itx x86 board in that price range. Plus, if you happen to need the 10G, it's still probably less expensive to pick up a G4400 + motherboard + 10GbaseT board (a new dual-port one on eBay goes for about $100) and best that machine in most cases.
“Besting” is very relative. Workloads in such footprints are usually power constrained, and I am not aware of any x86 solutions involving a discrete NIC that can do 20Gbps at 35W.
Yep, ACPI support is evolving, but it's a matter of time. Folks have finally figured out that if a platform isn't compliant with SBBR and SBSA, there will be plenty of competitors that are, absolving you of the headache of dead-end BSPs.
The base processor is rated at 10W for 4 cores; add ~8W for the 10GbaseT board (plus a bit) and it's a competitive solution, particularly if your workload needs more RAM/etc. Feeling like you want a little more beef? The Denvertons are hitting the market, and many of them have the 10G integrated.
That one also has a BMC, which, given my past experience, tends to add a few watts too. Without testing them side by side it's hard to know which draws more power in a given workload, particularly since the Intel machines have become very dynamic over the last couple of generations; it's actually pretty hard to hit their TDP numbers in most cases (particularly without heavy FP workloads).
Now it's become more about what the motherboard manufacturer has integrated. It's all fine that the core/etc. draws 8 watts; the problem happens when the motherboard manufacturer decides to glue an Aspeed BMC and an old Marvell SATA controller onto the board. Between them, at idle, they draw 2x more power than the SoC does running at peak. It's pretty easy to move the numbers one way or the other with unrelated changes.
EDIT: Discovered after posting that the PCIe slot on the ASRock is only x1, which keeps it from taking one of the X540-T boards, but they have a couple of variations, including one for less money in m-atx that has a potentially better slot layout and full-size DIMMs.
Does anyone know how the current ARM cost (equipment cost plus power cost at ~0.9 utilization), amortized over, say, three or five years, compares to the latest generation of Xeon and Epyc?
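In case it helps frame the comparison, the arithmetic being asked for is just capex plus energy, amortized. A sketch where every number is a placeholder to be filled in, not a measurement:

    /* Sketch of the amortized-cost comparison implied by the question.
     * All inputs are placeholders; swap in real quotes and measured wall
     * power for the Arm/Xeon/Epyc systems being compared. */
    #include <stdio.h>

    static double yearly_cost(double capex, double watts, double util,
                              double usd_per_kwh, double years)
    {
        double energy_kwh = watts * util * 24.0 * 365.0 * years / 1000.0;
        return (capex + energy_kwh * usd_per_kwh) / years;
    }

    int main(void)
    {
        /* placeholders: $2000 system, 150 W wall power, 90% utilization,
         * $0.10/kWh, 3-year amortization */
        printf("$%.0f per year\n", yearly_cost(2000.0, 150.0, 0.9, 0.10, 3.0));
        return 0;
    }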
I had the impression that CentOS 7.3 has been available on bare-metal ARM64 at Scaleway for a long time. Is it really new? Maybe I don't understand what this announcement is about.
My understanding was that CentOS is simply a rebuild of all RHEL packages with the Red Hat branding removed. If there's different arch support tho, then there's probably more to it than I thought.
Well, basically that's right, and if they're rebuilding RHEL packages anyway, they can just rebuild them for more architectures. Of course, that implies that the important packages actually support those architectures. RHEL offers more than just a binary distribution; they have to support their product, so they decided to go with fewer architectures than theoretically possible. That's how I see it.
RHEL 7 only supported x86_64, so CentOS 7 only "officially" supports x86_64. While AArch64, i386, and other arch builds are available from the AltArch SIG, they aren't maintained by the core.
With RHEL 7 supporting AArch64 after this announcement, I'd assume CentOS will follow suit.