Arm Announces Client CPU Roadmap For Laptops (arm.com)
389 points by redial 8 months ago | 278 comments

Oddly, what I'd really like to see is ARM enter the NUC space. Maybe I'm the only one, but I'd like to be able to pay $200-400 for a small, low-power, decently performant machine. The 8th generation Intel NUCs are good, but at 28W TDP, and it'd be nice to get that much, much lower. I know these are a small fraction of the overall market, but personally I think it'd be cool.

I find it odd that there's so little choice between the Raspberry Pi and the Intel NUC. With all these high-powered mobile phones there should be plenty of chips that would be fast enough. I can buy a phone with CPU, GPU, battery, modem, screen, etc. for $100, but there are very few cheap, low-powered PCs.

Have you tried looking much? There's actually a fair bit of choice, and has been for the past few years, with many quad-core ARM based Pi competitors in the sub $100 market these days, all using various mobile phone style CPUs:

> https://all3dp.com/1/single-board-computer-raspberry-pi-alte...

The reason you don't see them all that often is that none of them have close to the popularity of the Pi, and therefore don't have nearly as good community resources. The Odroid boards seem to have fairly active Reddit communities from what I've seen, but I haven't tried using one personally.

> https://www.hardkernel.com/main/main.php

> The reason you don't see them all that often is that none of them have close to the usability of the Pi

I think what the OP wanted is a usable NUC alternative on ARM (popularity is the result of that, and really is proportional to it). So little ARM hardware is even tolerably compatible with Linux (be it the GPU driver, network driver, bootloader, or whatever other cr*p ARM hardware makers keep to themselves), unlike NUCs, which are actually really nicely compatible with Linux.

> I think what OP wanted is usable NUC alternative on ARM (popularity is the result of that and is really proportional)

There's a little truth to that, but it also overlooks the huge effort the Pi Foundation has made to foster a community, which I think is the one thing the Pi team do that other board manufacturers simply haven't come close to matching.

I'd personally argue it's the work of the Pi Foundation that has led to the boards' success, more so than the boards themselves, which to be honest are increasingly pretty underpowered compared to almost all rivals and have very long intervals between upgrades. It has probably also helped that the Pi Foundation is a charity promoting CS education in schools, rather than a commercial enterprise.

Mainline drivers or bust. Today we no longer absolutely need to make compromises and rely on corporate support for out-of-tree kernels.

> but it also overlooks the huge effort the Pi Foundation has made to foster a community

I don't think I overlooked it. Linux already has a huge community; they had to roll their own (community) because they weren't compatible enough. That's better than none, but not as good as it could be.

I don't think so. I feel like the Pi is dragging down the rest of the industry by using broadcom chips based on a GPU that has been obsolete for a long time.

How can the Pi be dragging down the industry that it created and is still dominating by a hefty margin? People already have plenty of alternatives but somehow their price/performance ratio renders them to obscurity. Perhaps that must mean something.

A small player in an industry doesn't have the volume to drag down the industry, only the dominant player does.

The dominant player in this instance is insisting on using chipsets that are effectively obsolete when compared to the competition.

As a result, the community that they command continues to work on supporting this obsolete hardware, rather than newer hardware.

Network effects exist here just like in many other industries.

Exactly this. ARM is not a platform. It's some SoC where each manufacturer solders some random pins to random chips. Few manufacturers publish device trees, and only the dead Windows phones have UEFI+ARM (with their locked bootloaders. Fuck you MS).

If ARM manufacturers want to get serious about devices for enthusiasts, hobbyists and Linux users, they need more UEFI+ARM solutions that can boot plain old Linux or Windows 10 ARM.

> locked bootloaders

Easily unlockable with WPInternals :)

> only the dead Windows phones

Marvell/SolidRun MACCHIATObin, SoftIron Overdrive, even the old Gigabyte boards with APM X-Gene have both U-Boot and EDK2 (TianoCore) firmwares available. I'm not even talking about big servers (ThunderX) :)

Heck, U-Boot itself can run EFI binaries, with network access for netbooting even — and it's good enough for FreeBSD/aarch64 to not even consider non-EFI booting methods.
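For the curious, here's roughly what that looks like at the U-Boot prompt — a sketch only, with the storage device, partition, and file paths all assumed for illustration:

```
# U-Boot console session (sketch; device numbers and paths are assumptions)
=> load mmc 0:1 ${kernel_addr_r} efi/boot/bootaa64.efi   # load the EFI binary
=> load mmc 0:1 ${fdt_addr_r} board.dtb                  # hypothetical device tree blob
=> bootefi ${kernel_addr_r} ${fdt_addr_r}                # hand control to the EFI app
```

`kernel_addr_r` and `fdt_addr_r` are conventional U-Boot environment variables; distro bootloaders usually automate this via a boot script.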

I wouldn't put SolidRun in that list. Their hardware is garbage. I wasted money on this thing:


Their support is terrible and they only have a 14 day refund window.

I have a SolidRun CuBox that has been running 24h/7d for about 6 years, and the thing is a beauty. I agree that the company could be better though - they stopped providing updates very quickly, and abandoned their apt repository. I looked at newer models but their prices were way too high in a market where buying a Pi means fewer headaches as well as saving money. I don't think their focus is on NUCs/plugs anymore.

just out of curiosity...

if it's an eval board, do they provide schematics, as is typical of other eval board producers?

Having the schematic might aid in getting things working assuming the MAC <-> PHY circuitry supports what you want to do.

I've always been surprised that nobody's tried to weld ARM SoCs and the ACPI standard together yet. (How would that even work? I have no idea. Maybe the SoC would have a standardized PMC embedded in it?)

> Fuck you MS

To be fair I've yet to see a good ARM manufacturer and they'd all deserve a "fuck you".

Rockchip, Marvell, AMD (barely an ARM manufacturer, but SoftIron sells them), Cavium (mostly on super-expensive servers though)

Marvell? really?

Pretty sure Rockchip has violated Linux's GPL. The rest of those you listed I've never seen/used in real life.

I think they did, but I'll take GPL violations and (possibly leaked?) publicly available documentation over GPL-compliant source code dumps (which are next-to-useless for actually understanding the hardware anyway.)

> possibly leaked

Not leaked, entirely official: http://opensource.rock-chips.com ("Hardware Support" section)

And instead of dumps, they have proper git commits! https://github.com/rockchip-linux/kernel/commits/release-4.4

And lots of upstream linux commits with @rock-chips.com addresses.

Interesting. The docs I found before were marked "Rockchip Confidential". But the ones on their site are not complete either --- the fact that only "Part1" of the TRM is there suggests that there is a 2nd part (and possibly more.)

Yeah, I've seen the "confidential" mark on the same documents (or older versions) — these were published on board vendors' sites (e.g. Pine64's).

Not sure what would be in a "secret 2nd part"… the only thing I can think of that's missing from "Part1" is the display system. But there is a driver in mainline Linux for that https://github.com/torvalds/linux/blob/master/drivers/gpu/dr...

Maybe a very long time ago. Or you're confusing them with Allwinner.

Rockchip has the source of all the things on their GitHub, and a ton of commits in mainline Linux.

You've never heard of AMD?

To be fair, the OP wanted a <28W TDP NUC, and honestly, I'm also a bit mystified by the question as the market is lousy with them.

If you do a search for Apollo Lake NUC or SBC (6-12W) you'll get dozens of results. The followup Gemini Lake boards are also starting to appear and it looks like there's even a 4.5W Core-M Kaby Lake SBC coming out (LattePanda Alpha - ridiculously overpriced for general use IMO considering you can get 15W 8250U NUCs for less).

Gigabyte, Zotac, Asus and tons of Chinese OEMs all sell 15W U processor NUCs (as well as Apollo Lake options).

There are also Intel-based boards. Apologies for sounding like an ad, but I am a big fan of the upboard, which starts at $90 and goes up if you add more RAM and storage. They also have a 40-pin connector that is the same as the Raspberry Pi's. It uses a 64-bit Atom chip, and performs surprisingly well.

(There is a but coming.) It just works with standard AMD64 Linux distros and uses UEFI. You can grab any of the Ubuntu / Redhat / Fedora / Arch etc. installers and they just work. You can use the mainline kernel and it works. The GPU is a standard Intel one and just works.

BUT if you want to use some of the 40-pin connector then you will need some kernel patches. Under the hood the Atom runs at 1.8V but the RPi pins are at 3.3V. Consequently there is a piece of level-shifter hardware that does the voltage translation, but it needs to know which direction each pin is being used in (like on the Raspberry Pi, you can change this at runtime). The kernel patches make that work. There are more if you care - e.g. you can make the enumeration order of i2c & spi etc. match the Raspberry Pi.
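As a rough illustration of the direction issue, here is what flipping a pin's direction looks like with the libgpiod command-line tools once such patches are in place — the chip name and line offset are made up for the example and will differ per board:

```
# Sketch using the libgpiod CLI tools; gpiochip name and line 17 are hypothetical
gpioinfo gpiochip0        # list lines and whether each is currently input or output
gpioset gpiochip0 17=1    # drive line 17 high (requests the line as an output)
gpioget gpiochip0 17      # read line 17 (requests the line as an input)
```

On a board with a direction-aware level shifter, it's the kernel driver's job to reconfigure the shifter whenever the requested direction changes.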

Storage is embedded eMMC (i.e. no need for SD cards), and I don't know what they did, but it is really fast. CPU performance seems to be about double that of the Odroid XU4 per core in my workloads. Having a standard (for AMD64) boot process is great - the various ARM devices in this space are all over the map.

I actually have one of these "Up!" boards as well, it is pretty awesome to get decentish x86 performance in an identical form factor to the Pi, along with built in storage, 4GB of RAM and 'real' gigabit networking. It cost at the time approximately the same as 4x Pi 3s, but was well worth it for the massive increase in performance. They deserve to be far more popular than they are.

> http://www.up-board.org/

> and I don't know what they did, but it is really fast. CPU performance seems to be about double that of the Odroid XU4 for each core in my workloads.

That's what x86 was always winning at. Power consumption is higher, however.

The Intel Atom trades lower performance for lower power consumption - this is not i7 levels of performance. The power consumption differences between the Intel boards and ARM boards are irrelevant for the CPU, and most of the power budget goes to the USB ports (500mA per USB 2 port, 900mA per USB 3 port). For example I use upboards with a 5V 4A adaptor, and over 3A of that is for USB, ethernet etc!
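To put rough numbers on that, using the USB spec's per-port maximums (the port counts here are illustrative, not specific to any particular board):

```python
# Back-of-envelope power budget for a 5V 4A supply, using USB spec maximums.
# Port counts are illustrative assumptions, not the upboard's actual layout.
SUPPLY_A = 4.0                 # 5V 4A adaptor
USB2_PORTS, USB2_A = 3, 0.5    # 500mA max per USB 2 port
USB3_PORTS, USB3_A = 1, 0.9    # 900mA max per USB 3 port

usb_budget_a = USB2_PORTS * USB2_A + USB3_PORTS * USB3_A
remaining_a = SUPPLY_A - usb_budget_a

print(f"USB worst case: {usb_budget_a:.1f}A; left for SoC, ethernet etc.: {remaining_a:.1f}A")
```

The point being that once you reserve the spec maximum for every port, only a fraction of the supply is left for the SoC itself.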

Odroid boards are pretty great; they took end-user needs seriously when designing their later products.

At the time, they sold one of the only consumer ARM boards that had a separate controller/bus for their gigabit ethernet and USB 3 controllers[1].

Hardkernel has also been patching new kernels for their boards, which is a breath of fresh air. Usually, consumer ARM boards, other than the Raspberry Pi Foundation's offerings, are tied to a single kernel release and, unless you wanted to commit a lot of time to building your own patchsets, they'd never see a kernel update.

[1] http://www.hardkernel.com/main/_Files/prdt/2016/201602/20150...

Rockchip has quite a lot of mainline Linux support http://opensource.rock-chips.com/wiki_Status_Matrix#Mainline... and good documentation.

> that had a separate controller/bus for their gigabit ethernet and USB 3 controllers

Are you sure? As far as I could tell, they had a single USB 3 controller with a 2-port hub. One port was connected to gigabit ethernet, while the other was connected to another two-port hub, which provides the two ports on the device. I.e. all connected USB 3 devices and the gigabit ethernet share the controller and its bandwidth. The background is that the Exynos chip used was intended for the cell phone market, where there is no need for multiple USB 3 controllers in addition to the existing USB 2 controllers.

I don't get it. Your list doesn't even include the RK3399 which is the fastest SoC you can nowadays get on an SBC for $100 and even that only has a geekbench score between 3k to 4k (RPi 3 is 1.2k and my 5 year old i5 has 10k). You'd expect to get at least a 5k score from an SBC for it to be viable.

Also this whole announcement is really petty... Qualcomm and Apple already have SoCs with scores higher than 9k on geekbench so why does ARM have so much trouble making good cpu cores? They are comparing existing 14nm CPUs with theoretical 7nm CPUs that they haven't even released yet. Of course it's going to perform better. It would be embarrassing if it didn't...

Geekbench is a bit of a bogus benchmark. I really doubt that the Rockchip board would have even quarter the actual performance on "real" programs (e.g. x264, imagemagick, povray etc.)

> "real" programs (e.g. x264, imagemagick, povray etc.)

...none of which are CPU-bound, instead relying on either hardware accelerators or the presence of specific SIMD instructions.

Geekbench is pretty good at measuring how well a chip would perform / scale as a web-app server. (Ignoring IO bandwidth, that is. A benchmark that incorporated both would be brilliant.)

x264 is an h264/AVC encoder that uses the CPU to extract maximum quality for a given bitrate. It does not support any fixed-function hardware on GPUs and the like since that goes contrary to its goals (gpu-accelerated encodes tend to have worse quality for a given bitrate and the fixed-function hardware doesn't support profiles like Hi10p that can improve quality).

Pov-ray is a raytracer. It only runs on the CPU (it has no codepaths to use the GPU).

imagemagick is a command-line image manipulation suite that by default uses the CPU to perform operations on images. There is OpenCL support, but it requires either a build option or a configuration option to be enabled.

My own experience, comparing LAME and flac encoder speeds on an Android device (binaries invoked via a terminal emulator) to a desktop CPU and comparing the actual performance to the geekbench 4 scores showed that gb overrated the Android device by almost 125%.
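A comparison like that is easy to reproduce with a small wall-clock timing harness. A minimal sketch — the workload here is a stand-in pure-Python loop; a real comparison would invoke the actual lame/flac binaries via subprocess:

```python
import time

def bench(fn, *args, repeats=3):
    """Return the best wall-clock time (seconds) over `repeats` runs of fn."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in CPU-bound workload; a real test would shell out to lame or flac
def workload(n):
    return sum(i * i for i in range(n))

elapsed = bench(workload, 200_000)
print(f"best of 3 runs: {elapsed:.4f}s")
```

Taking the best of several runs reduces noise from background tasks; running the same harness on both machines gives the kind of real-workload ratio being described.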

I know there are plenty of different boards out there, like BananaPi, Odroid, etc, but a lot of them have questionable driver support and are still bare PCBs. The ODROID-HC1 does look interesting though.

I guess what I'd really want is an x86 device, to get a more PC-like experience. I can buy a tablet with battery, touch screen, and an Intel Atom X5-Z8300 CPU for less than $90 [0], but there aren't any good cheap x86 alternatives for simple home servers.

[0] https://www.gearbest.com/tablet-pcs/pp_298397.html

Trust me, driver support is just as bad as with ARMs on those Z8300 devices.

My issue with these is they still use MicroSD cards. I basically want a Raspberry Pi with a SATA connector so I can use an actual hard drive or SSD.

You've been able to boot a Pi from USB for quite some time now, avoiding the issues with using a MicroSD card. While it isn't SATA, booting from USB still allows use of an external drive.
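For reference, on a Pi 3 this is a documented one-liner in config.txt — note that it programs a one-time OTP bit and cannot be undone:

```shell
# Enable USB mass-storage boot on a Raspberry Pi 3 (one-time: burns an OTP bit)
echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
sudo reboot
# After rebooting, verify the OTP bit is set; expect a line like 17:3020000a
vcgencmd otp_dump | grep 17:
```

Once the bit is set, the line can be removed from config.txt and the Pi will try USB mass-storage boot when no SD card is present.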

https://96boards.org has spawned its own "form factor", if you will. And given that the form factor is reasonably similar, you can build a NUC-like system (I've got several as various servers). There is also a Coursera course starting up on the Dragonboard 410c in their IoT specialization that can give you a solid basis for a good example board of this class.

I'm in the same boat as you, but I can understand why, based on the market:

You can buy a cheap powerful phone because there's a market for it. The margins are in the scale of the consumer market. However, there isn't as much of a market for a generic "beige-box" cheap CPU. It's one of the difficult realizations of being a techie: the kind of undifferentiated, cheap, flexible box we want just doesn't carry itself as a product.

I'm with you here. I think the new AMD SoC that's coming out in the Udoo Bolt looks like a nice middle ground. I've been using Skull Canyon as a Linux desktop for about a year and a half now, and the new model doesn't look like a compelling upgrade for the price. I'm hoping the Bolt architecture is repeated in more products going forward. https://www.kickstarter.com/projects/udoo/udoo-bolt-raising-...

An entry-level NUC is just barely powerful enough to give a tolerable experience in Windows 10. There are a gazillion single-board computers and Android TV boxes that will run Linux with a bit of hacking.


I am typing this on a 3GB Amlogic S912 streaming box, with ports for USB Type-A, ethernet and 4K HDMI. Performance for around $40 is what you would expect from a smartphone, albeit with the quirks of Android with a mouse and/or TV remote.

The modding scene might not have the same level of polish as the RPi - there are ROMs for Android/Armbian/CoreELEC, but you may be at the mercy of building them yourself, e.g. when the wifi module isn't precompiled for your particular no-name OEM device.

I was trying to find a little server to host a friend's wiki. I wanted to host it on an x86 server so my docker containers would be portable to AWS, but there's nothing that can compete with the RPi unless you're willing to spend over $200. I'd like to see the Y-series processors on an SBC, but Intel ARK lists those at over $250 for the processor alone.

I've stuck Intel Compute Sticks onto a few lab machines with good success. Like NUCs they're a bit too expensive, but they are great for running a tool off a VESA-mounted panel, especially as many of these things need Windows.

Asus Tinkerboard is about twice the price and (mostly) compatible with the Raspberry Pi. Its biggest upgrade, for me, is gigabit LAN. Worth looking into.

The Pine64 set of devices may be worth a look:


They have terrible software support too.

I have a Rock64 and a Rockpro64 and the software from https://github.com/ayufan-rock64/linux-kernel is rock solid. Zero complaints. Both units require heat sinks if you are doing compilation work. The 6 cores of the Rockpro64 is a nice bump up and it is much faster on large compilation jobs like Envoy proxy or CockroachDB.

Made a k8s cluster with 10 Rock64s; no complaints.

Yeah, the only device that fits that profile is the Asus Chromebit, but it runs ChromeOS and can't yet run Android or Linux apps, so it's a much more limited device than the Pi even with double the RAM.

Install Linux on an Nvidia Shield TV?

Intel has the Minnowboard.

I'm not sure what qualifies as a NUC, but there are a few "high-end" ARM SoC boards with PCIe now. Here are a few in the $100-$300 price range:

Rockchip RK3399 boards:

NanoPC-T4, $109: 2x A72@2GHz + 4x A53@1.5GHz, 4GB LPDDR3-1866 RAM, 4-lane PCIe M.2 80mm https://www.friendlyarm.com/index.php?route=product/product&... http://wiki.friendlyarm.com/wiki/index.php/NanoPC-T4

RockPro64, $80: 2x A72@1.8GHz + 4x A53@1.4GHz, 4GB RAM, 4-lane PCIe full-size slot http://wiki.pine64.org/index.php/ROCKPro64_Main_Page https://www.pine64.org/?product=rockpro64-4gb-single-board-c...

Rock960, ~$99 https://docs.96rocks.com/rock960/start/unbox/frontside/

HiSilicon Kirin boards:

HiKey960, $239: Kirin 960, 4x A73@2.4GHz + 4x A53@1.8GHz, 3GB LPDDR4, TSMC 16nm http://hihope.org/product/HiKey960 https://www.96boards.org/documentation/consumer/hikey/hikey9...

HiKey970, $299: Kirin 970, 4x A73@2.36GHz + 4x A53@1.8GHz, 6GB LPDDR4X-1866, PCIe M.2 2260?, TSMC 10nm http://hihope.org/product/HiKey970 https://www.96boards.org/product/hikey970/ https://www.youtube.com/watch?v=8YiJ4PQoNTM

There's also the Socionext 24-core developer box. It's $1200 for a complete system: https://www.96boards.org/product/developerbox/

Having dabbled with a few of these boards (RK3399, Hikey960), I can say that the user experience is still quite far off from what you would expect on an x86 NUC running a GNU/Linux distro.

Even finding usable images to flash was tough on the RK3399 and near-impossible on the HiKey960. Even then, the HiKey960 image I found was pretty unstable (randomly killed processes due to a mysterious OOM) and was contributed by a _forum user_, not the official vendor (they seem to mainly support Android).

What should be kept in mind is that these are essentially phones-on-a-board that may or may not have been shoehorned properly to run a desktop Linux. That comes with all of the relevant baggage. For example, the HiKey960 board will aggressively thermal throttle given that all that comes with the board is a piddly copper "heatsink" that you _glue_ on the SoC.

Unfortunately the Raspberry Pi is the only board that I've come across that offers a sane GNU/Linux experience in terms of software support. The performance is acceptable if you consider that even these 4xA73 boards would still get smoked by a reasonable desktop CPU.

For random ARM SBCs, I like the reviews that MickMake does: https://www.mickmake.com/category/reviews Unlike most "reviews" online he actually installs Linux, points out all the things that don't work properly, and runs them through Phoronix, checking performance, power usage, and temps.

At the high end, I agree that unless you have something really specific (if you need Jetson for CUDA or high-end imaging), you'd usually be better off w/ x86 - it's nice to have full mainline support and stuff that just works. In the SBC space, for the same price as a HiKey960 you could get an UDOO x86 Advance or an UP Squared, both of which will beat the pants off the HiKey in most aspects.

Here is a simple benchmark comparing a Linux VM in my laptop with 2 cores, 4 cores and the RockPro64 board. On the SBC section I've put the power consumption during my tests:


Fantastic for a $80 board that consumes less than 10W.

Just replying to say thanks for the useful info! :D

Arm isn't like Intel: they don't make their own hardware; they design and license their chip designs. A hardware maker would have to license one of these upcoming chip designs, build their own chip based on it, and then sell that. Which is why it takes the resources of an Apple or an Asus to produce these devices. The license from Arm alone is a huge up-front cost.

Right, but you can still buy these chips after they have made it through this process. There are many ARM chips with decent performance that you can get as a SOM. Some of them even have decent documentation.

To be fair, ARM does build some of its own chips; they just don't sell them to end users, and they cost a very pretty penny. Their RealView/Integrator/Versatile boards have custom ARM chips.

Why not just use an Atom-based NUC? At 10W TDP (and mostly at idle for many uses), other power draws become more important anyway. Modern Atoms are surprisingly competent.

Intel hasn't released an Atom-based NUC in 4.5 years (they do have a few more modern "Celeron" or "Pentium" branded units -- not sure if these are intended to be efficient or just cheap). But yes, of course my original comment wasn't implying there is anything magical about ARM that Intel couldn't do -- just that historically they have really dominated the performance per watt low TDP space and Intel's current "efficient" offerings still have pretty huge power draw.

There are at least 2 recent Atom NUCs. They do use the "Celeron" branding, which is a little confusing because there are both Core-based Celerons and Atom-based Celerons. But these two are Atom:

Gemini Lake Atom NUC: https://www.intel.com/content/www/us/en/products/boards-kits...

Apollo Lake Atom NUC: https://www.intel.com/content/www/us/en/products/boards-kits...

I don't track the Intel NUC SKUs super closely, but the industrial/embedded board sector is lousy with tiny low-power x86 chips in USFF/NUC, PicoITX, 3.5" form factors (Aaeon, Axiomtek, Jetway, etc or just browse through linuxgizmos.com or cnx-software.com for a constant stream of releases).

On the consumer side there might be less, but the Gigabyte Brix, ECS Liva, and the Zotac mini-PC lines seem to be running strong and have a full line of TDPs - Zotac's latest P-series units look pretty great for their size: https://www.zotac.com/us/product/mini_pcs/zbox-p-series/all

Two weeks ago I bought a small Zotac box with Intel Celeron N3060. The whole system with 4GB RAM and SSD drive consumes 8W. It is good for backups. It runs Ubuntu 18.04 w/o any problems.

I think this is 5th generation (?) and it's 6W TDP. I've had one since Dec 2015 running Fedora Server and it's been rock solid. I've even installed Windows 10 in a libvirt/qemu/kvm VM (testing not production), but it's great for running docker containers. Bit slow for compiling things, but nice to offload that work onto the NUC, walk away and do something else and not have to worry about interrupting it if I need to reboot or sleep the laptop going to lunch. https://ark.intel.com/products/87740/Intel-NUC-Kit-NUC5PPYH

Helios4 is not exactly a NUC but at least it has native SATA instead of a USB bridge: https://kobol.io/helios4/

Just underclock and undervolt them.

Apple's Mac Mini hasn't been updated since 2014. I suspect the performance / power consumption ratio is even worse on these as a result.

Starts at $499

> Maximum continuous power: 85W

It's a 28W TDP part, so the same power draw as the NUC, but a 4th-gen instead of an 8th-gen CPU.

The OP was looking for an ARM machine, not a five-year-old Intel machine.

I think it's amusing to remember that ARM started off in fancy 32-bit high-performance desktop machines: https://en.wikipedia.org/wiki/Acorn_Archimedes

The Archimedes was so far ahead of its time that no-one really knew what to do with it. They ended up in schools, especially in Commonwealth countries, mainly because Acorn's previous big success, the BBC Micro, had been specifically targeted at schools.

But in 1987 the ARM was so astoundingly powerful that it should have become the king of the workstation market. A failure of marketing, perhaps.

The failure of the Archimedes was its not being a PC clone, which all the "big boys" were betting on; secondary to that was the Taiwanese not buying into it.

And remember, back then the whole notion of a _personal_ computer was very new, and thus companies with first-mover advantage got huge leverage by instilling in common people the idea that the AT/XT "common phenotype clone" is the computer.

Lots of people then got the idea that a computer must be a kind of AT/XT derivative and nothing else:

My parents brought a second-hand CZ310 from Japan in late 1994. A super expensive machine even for them (they were possibly within the top 2000 in Russia at that time.) I remember mom affectionately saying about those times that she asked for "the best computer money can buy." And when it gave up the ghost in 1996 or 1997, the repairman simply did not believe that it was a computer and not some kind of gaming console.

Whither the software and peripherals?

The failure of non-PC-clones in the 80s/90s is mostly attributable to those two deficiencies, I think.

Well the early models were available with a BSD-derived UNIX, so software was available, at least.


Similar to Commodore's failure with the Amiga line.

It didn't help that they didn't ship it with Unix, which is what workstations ran back then. Porting System V to it and selling it with that may have helped.

Then again, both Atari and Amiga tried that with their 030 machines and had no success. They were not taken seriously as workstation vendors (in the then lucrative workstation market) in many cases because of the brand on the box. I distinctly remember a UnixWorld article about the Atari TT030 with the headline "Up from toyland" (above an otherwise positive review)

They did ship it with Unix. RISC iX was actually available before RISC OS was.


You could get a version with unix, or you could get an OS that was fast and had a good GUI

IIRC, when BYTE magazine benchmarked it, the Archimedes doing software FP beat a Compaq Deskpro 386 with an 80287 coprocessor.

"What I can say with certainty is that the Archimedes running C programs without a math coprocessor rivals the Compaq Deskpro 386, a 16-MHz 80386 machine with an 8-MHz 80287, and comfortably outpaces a Macintosh SE with a HyperCharger, a 15.67-MHz 68020 with a 7.83-MHz 68881, on all but the floating-point-intensive Savage benchmark (the Compaq also beats the Archimedes on the Sort). Even more remarkably, the Savage benchmark in interpreted BASIC V in RAM on the Archimedes takes only half again as much time as it takes in compiled C on the Deskpro 386 with a math coprocessor.

Benchmarks are not everything, but the experience of using the Archimedes tells me that on many untested tasks, like writing to the screen, it is far faster than anything else I've seen. If I had to take a stand on benchmark figures alone, I would look at the Dhrystone, which is the most general-purpose test (even though it doesn't test floating point). The Archimedes runs 31 percent more Dhrystones."


(see other formats at the top too).

Fascinating read.

Thank you for posting it here. I was having trouble finding it.

I had one from 1988. It was totally awesome. My first experience with a Windows PC 5 years later was like being slapped with a wet fish. Also, the 8MHz ARM outperformed a 100MHz 486 by a long way.

Will we be able to run our own Linux kernels on these laptops? "In January 2012, Microsoft confirmed it would require hardware manufacturers to enable secure boot on Windows 8 devices, and that x86/64 devices must provide the option to turn it off while ARM-based devices must not provide the option to turn it off.[15]" (https://en.wikipedia.org/wiki/Secure_boot)

This is no longer the case in the current hardware certification requirements.

It's no longer required that you are able to turn it off on x86-64, that you can't turn it off on ARM, or both?

From what I've seen on the internet, you can turn secure boot off and select USB boot, but you get a blank screen after that with linux/aarch64 images.

We really need to get these laptops (NovaGo etc.) into the hands of experienced developers to figure out what the deal is.

It is no longer required that it be force-enabled on ARM.

My impression is that every time ARM release a new big core (e.g. A73, A75) they project massive improvements. And then what's actually sold shows half the expected speedup, due to the design not clocking anywhere near as high as ARM projected. Given that history, why is everyone so willing to take these latest numbers at face value?

Does anyone actually take "next gen is XX% faster" numbers at face value? Apple, Intel, all say things like this, and real world performance is always worse than the marketing since the tests run in ideal conditions (say, on AC power) but the real world conditions are always worse (say, throttling down the CPU to save battery).

In my mind, there are two big questions: (1) is it fast enough for users, and (2) will users be able to run business-critical x86 / Windows software on it?

(1) was pretty bad with the first ARM-based Chromebooks, which could handle at best 3-4 tabs, but has started getting better (see the Samsung Chromebook Plus).

(2) boils down to whether the relevant parties can get x86-on-ARM emulation working. I feel like Microsoft had a pretty good tech demo of this at a past Build conference, but I wouldn't be surprised if non-technical reasons prevent it from shipping.

The x86-on-ARM emulation has already been in production for a while, has it not?

The various Snapdragon 835 powered Windows laptops use it AFAIK.

Yep, it's already shipping on the Qualcomm HP Envy x2. Performance of x86-on-ARM seems to be mixed/disappointing so far, but the bigger catch seems to be that it doesn't support x64 yet:


Any idea if it's going to come to phones by any chance? I initially thought they meant phones, but after a while they seemed to say they meant laptops.

Also, regarding x64, I believe it was due to patent issues, not because they somehow thought x86 is enough for anybody.

Limbo emulator has existed for a while and you can use it to run x86 (x64?) Windows, for example, on a normal android phone. (It is slow, of course.) https://limboemulator.weebly.com/

Do ARM chips have ME or PSP equivalents? It would be great to be able to buy a new machine and use something like coreboot without having to use hacks to disable ME.

Arm sells barebones CPU cores which can be used to create more complex processors like complete SoCs. By barebones, I mean the traditional core with branch predictors, instruction fetchers, writebacks, etc.

Intel and AMD sell an entire system-on-a-chip disguised as a CPU. Their product is much more than a CPU core: there's an entire system in there.

If you want to make a comparison, it is more correct to compare the Snapdragon and the Exynos chips to the off-the-shelf CPUs that Intel and AMD sell.

Arm only sells technologies that enable other companies to create a final product; it doesn't impose those kinds of "management systems" and binary blobs.

It would be fair to mention that TrustZone, the equivalent technology, is built into the cores/ISA. They also distribute software related to TrustZone, albeit not a full TEE solution.

TrustZone is more like x86 System Management Mode than the ME or PSP.

TrustZone is not an equivalent technology to ME/PSP. TrustZone is a technology for providing hardware isolation, while ME/PSP are co-processors that manage the entire CPU socket (the Ryzens and the i9s).

Yes, AMD's PSP is actually licensed from ARM; it's why there is a small ARM Cortex CPU on AMD's CPUs and APUs. The first few times they tried adding it on did not go well, FYI, so they disabled this small chunk of silicon at the factory.

In 2013 AMD successfully fabricated a CPU with said ARM Cortex core embedded, so that was the first year they actually offered their PSP. AMD had similar problems with their APUs for a number of years IIRC, whereby making a single chip with both CPU and GPU on it had poor yields with a high percentage of dead chips.

No, it is not licensed from Arm. AMD's PSP is a system built by AMD itself; they just happened to buy Arm cores to build their system.

They could have used MIPS, PowerPC or any other CPU cores for PSP, they just decided to go with Arm.

IIRC it also heavily depends on ARM's TrustZone architecture, it's not like they just "happened" to use ARM.

I mean, they did license it from ARM; it's a Cortex A5 last time I checked.

You already can buy an extremely high performance workstation without ME or PSP or equivalent; they are just a bit pricy. https://www.raptorcs.com/content/TL2WK2/intro.html

The Talos has the BMC (Baseboard Management Controller) that runs a full Linux distro off the side and is connected to the network.

The only reason why it might not be considered a ME or PSP replacement is that the user can control the signing keys.


If the next Mac Pro doesn’t utterly blow me away I’ll take a dual-CPU POWER9 system into serious consideration. My only concern is drivers for suitably powerful GPUs.

Unfortunately it's not as easy to answer. Intel and AMD manufacture their own chips which means they can put their backdoors into all their products. However with ARM they license their IP and other companies make their chips.

This means some companies have hidden proprietary code in their bootloaders. For example, Samsung's Exynos range of ARM chips must be booted with Samsung's bootloader, which may contain spyware, backdoors or surveillance systems. You cannot see the source code for this bootloader and have no way of auditing what it actually does.

Rockchip is another company that makes ARM chips, and can be considered mostly free [1]. As with all hardware it's very hard to know what's going on inside, but all the code to boot into Linux (minus the optional GPU) on a Rockchip product is open source and can be audited/compiled by anyone.

ARM also have TrustZone [2], which allows you to run applications in a "secure" (or separate) space. It doesn't run on a separate chip, but runs on the ARM chip itself, separating memory and instructions from the operating system. (Don't quote me but...) I believe you don't actually have to use TrustZone. The instructions/documentation for it don't appear to be available to the public; however, if you don't upload a blob for TrustZone, Rockchip simply won't use it and will run everything at the same level. (Note this is true for Rockchip, but depending on who is manufacturing the ARM chip, they may force you to use TrustZone.)

With Intel ME and AMD PSP, on the other hand, you have no choice: if you remove the blob your system won't boot (or will restart after 30 minutes on some older models).

This means if ARM TrustZone is compromised you can remove it and continue on as normal. But if ME and PSP are compromised you are at the will of Intel and any agency it may have colluded with.

While we're on the subject of free and open source code, note that with (most) ARM chips, the GPU is closed source just like the Intel ME. Again, the difference is if you don't want to use the GPU, you can just not upload the blob, and use the CPU without the GPU. There are some movements being made to open the GPU [3], but it's still a long way off.

1. https://libreboot.org/docs/hardware/c201.html

2. http://www.openvirtualization.org/open-source-arm-trustzone....

3. https://gitlab.freedesktop.org/lima

TrustZone is essentially an ISA extension, similar to Intel's TXT and SGX to provide a trusted execution environment. You can trivially avoid it by never running any of the related instructions.

It's more of an extra address bit and processor mode. It doesn't have related instructions like TXT and SGX, but instead is structured more like a hypervisor.

I am not sure if this is correct. There are related instructions like 'smc' that help switch to the secure world.

OK, so there's the one instruction to do a system call that hits secure mode. It's equivalent to svc or hvc, but hits EL3 (secure mode) rather than EL2 (hypervisor mode) or EL1 (supervisor mode).

It's very different from the dozen or so instructions needed to set up TXT or SGX, which sit off to the side of the main OS rather than running like a super-hypervisor. If you're going to compare it to something, it's much more like SMM on x86.

Source: I've ported a kernel to EL3 (secure mode).

You generally do want to run the reference (BSD licensed) Trusted Firmware though. It implements PSCI at least.

The closed boot loader is a red herring. Unless you have the underlying RTL source the hardware could do anything - no secret boot loader required. Open source without open hardware is a false sense of security.

Every moderately complex SoC will have something like ME or PSP. The most recent big boy ARM SoC that I can think of without something like that was the iMX6. Even SiFive's newer U54-MC RISC-V SoC has a little "monitor core".

SoC power management, system bringup, and maintenance tasks are complicated enough these days to warrant a full small core tacked onto the side. These cores are necessary, and aren't going away. Complaining about them being there is just pissing into the wind. Complain about what they're used for and the closed source nature of their code.

> SoC power management, system bringup, and maintenance tasks are complicated enough these days to warrant a full small core tacked onto the side. These cores are necessary, and aren't going away. Complaining about them being there is just pissing into the wind.

There's a vast difference between such a core being used solely for bringup/power management/housekeeping and it having a network connection to the outside and being used for "remote management" (and running with godawfully insecure parsing code, at that).

Which is why the next sentence (that you cut off) says

> Complain about what they're used for and the closed source nature of their code.

All of these cores will have the ability to have network connections because they'll bringup the whole SoC including the network MAC.

They aren't 'necessary'. You can completely disable the ME and have a usable system. Maybe you are confusing it with the PCH?

Check out what the ME disable stuff does. It just removes some of the binaries for non-boot services from the thing's uKernel. The ME is required for system bringup.

> However, while Intel ME can't be turned off completely, it is still possible to modify its firmware up to a point where Intel ME is active only during the boot process, effectively disabling it during the normal operation, which is what me_cleaner tries to accomplish.


EDIT: Also, the ME is inside the PCH, so I'm not really sure why you're making a distinction there.

> The ME is required for system bringup

Because Intel makes it required, or is it technically required?

My understanding is that it's what controls power sequencing, QPI bringup, and RAM bringup, so the main cores can't come up without it or something like it.

Just like the similar cores you see in pretty much every ARM SoC.

Can't completely disable ME. There are still essential modules that we have no idea what they're doing.

They can have it, as ARM TrustZone. This is an independent ARM chip and is in fact used in the AMD PSP. I don't know if some or all of these laptop SoCs would have one, though. Some (most?) Android phones have one (see: https://googleprojectzero.blogspot.com/2017/07/trust-issues-...)

ARM TrustZone isn’t a chip at all, and it’s not a thing that an SoC could have. It’s just another operating mode of an ARM processor. It’s more analogous to x86’s SMM than to PSP or ME. TrustZone is also fully documented AFAIK.

So the real question is: will the laptops let end users replace the TrustZone kernel?

There's a lot of SoC specific stuff moving over into the Arm Trusted Firmware that sits below the TZ kernel. The upstream ATF is BSD licensed, so while some chips have open source implementations, others might only exist as blobs.

It's possible to build out SoCs that require a closed-source blob that runs on one of the ARM cores, doing basically all the same jobs a PSP or ME does.

libreboot specifically calls out the ME and PSP, and doesn't mention any such thing for ARM.

A complication is that ARM only designs the ISA, implementors can very much add their own management system to the SoC.

I am holding on to x86 until ARM figures out standardization like in the PC space. If I need a custom kernel, a custom bootloader, and GPU drivers from different vendors for each computer I am going to buy, then no thanks.

This, exactly this. The PC (technically the AT) grew so much because it was originally an open platform and almost completely standardised and documented. There have been many recent efforts to lobotomise it but it still remains mostly standard and well-documented.

ARM platforms are extremely diverse largely due to the fact that its cores are integrated into SoCs for a wide range of uses, and I'd say the majority of ARM SoCs in use don't even have any public documentation. This is particularly true of those used in smartphones and tablets.

Or to put it more bluntly, "legacy-free means compatibility-free."

Is this the best evidence yet that Arm Macs are, in fact, coming in 2020?

Apple doesn't use Arm-designed cores; they design CPUs that use the Arm ISA in-house, so this tells us nothing about Macs.


The linked article refers to Arm's roadmap of core designs. Qualcomm and others will use those cores in their SoCs. Apple, on the other hand, no longer uses those cores; they design their own cores that just happen to use the same Arm instruction set. Their cores are already more performant than those provided by Arm, so they don't need those designs to put an A-series CPU into MacBooks as the main processor; they would probably design their own. So, this article is not the best evidence that Apple will use Arm in Macs (even though I believe this will eventually happen).

Qualcomm also design their own implementations of ARM (e.g. the current Kyro), though they do definitely ship some things with ARM designed cores.

While Kryo used to be fully custom, lately it's been based off of the Cortex designs. I'm not just talking about the low end SoCs, but even their high end stuff. Anandtech usually has details on this.

Right, it's now a derivative design (as was Scorpion, AFAIK), though how much that matters is questionable (Apple got good perf gains over the Cortex-A8 in the Apple A4 despite it being a derivative).

Samsung too (Exynos M1-M3). I think Qualcomm designs are heavily modified Arm designs, though.

It's not "just happen to use the same Arm instruction set," it's "they keep up to date with advances in the Arm architecture, such as v8.x extensions, which are heavily influenced by what Arm plans in their roadmaps."

If instructions/extensions are being made that improve laptop-class performance in some ways, then that likely does increase the chance of Apple making laptop-class Arm cores for their Macs, even if by a minuscule amount.

Apple cores outperform ARM cores. They don't care what ARM does in this space because they're years ahead of them in perf already.

Wouldn't Apple design the whole thing internally without needing help from Arm? They have a world-class silicon design team in-house.

They would still have to pay for an ARM license, at least outside of RISC-V.

Apple has an architecture license. I don't know what the terms of the license are but it allows Apple to develop architecture compliant chips. They were one of the original developers of the original ARM and held a large stock position in ARM Ltd.


Do we know what Apple is paying? Remember that ARM started as a joint project from Acorn, Apple, and VLSI. Apple certainly held a full architecture license from founding, despite not designing one until comparatively recently, which to me implies there's some specific license.

I certainly assume they weren't paying large amounts for a license when relatively poor in the 90s despite holding one.

These chips are 5W TDP and around that area. The most powerful Intel CPUs in the MacBooks are 45W. So unless you're suggesting that ARM is making CPUs that are literally 10x more power efficient, this actually seems to rule that out. This roadmap is great for maybe an entry-level Mac, but it wouldn't scale to the full line-up, which I think makes it less likely.

No. Apple hasn't used ARM core designs in years.

Apple T2 chip, currently being used and produced.

But that’s not an ARM design, right? It’s an Apple design either based on ARM or just using their instruction set.

I don’t think Apple uses straight ARM chips for anything of consequence anymore (I imagine little low-power embedded ones still exist).

I don’t believe that’s an Arm design.

No, Apple ARM laptop (most likely Air replacement) will ship with Apple custom ARM chip.

Apple's software engineering team doesn't have the bandwidth to transition the Mac to a new architecture, and neither does their hardware team.

And nobody wants a stripped-down, locked-down version of MacOS-Lite.

So, no.

Apple's software and hardware teams are absolute monsters compared to when they made the transition from PowerPC to x86. The toolset and maturity for cross-platform compilation is dramatically more advanced and mature. They could absolutely do this, and it seems eminently likely that they will in the near future.

I would also bet they already have full macOS running on ARM internally. I think we are seeing proof of this in the new libraries being released to support the iOS-like apps (News and Stocks) in Mojave.

Why do you think they rewrote from scratch all the applications they acquired from third parties, like Final Cut and Logic?

You don't need to rewrite them from scratch. They've had those companies for years and it's a good bet that they've done work to make it more portable. It was already available back when Macs were using PPC, so the code is obviously somewhat portable.

Perhaps they didn't need to, but they did anyway

When they transitioned to Intel they even said they had internal Macs running on Intel for years waiting for this day.

AFAIK, they essentially "just" maintained the x86 port of NeXTSTEP throughout the entire migration from NeXTSTEP to Mac OS X.

I wouldn't be amazed if they supported big-endian PPC still for the sake of maintaining portability, though I feel that's less likely than little-endian ARM.

Did somebody here get one of these back in the days? https://forums.macrumors.com/threads/apple-development-trans...

AppKit is now very old, and maintaining backward compatibility through annual release cycles is extremely complicated and challenging. And that complexity doubles when you add a new platform in the mix.

Porting AppKit to ARM is easy. It's another thing entirely to commit to supporting parallel versions of AppKit for the next 8+ years.

It would be a colossal waste of Apple's software engineering resources, which many people believe are already stretched too thin. And for what, a little better battery life? It's just not realistic.

> And for what, a little better battery life? It's just not realistic.

Having entire vertical control, with the flexibility and profitability that entails. Apple is paying hundreds to thousands per processor to Intel, and worse (for Apple) they are beholden to Intel for their product map. This is the antithesis of what Apple is about.

Perhaps, but these are the same folks (for political reasons) who can no longer build a workstation tower or laptop for "professionals." If it wasn't for the iPhone rocketship business they'd be dead.

Imagine if Dell didn't have a competitive workstation/pro laptop in the lineup for five years.

You've got it the wrong way around. Apple is neglecting their Mac line because it's paltry compared to their phone business. It's just as much, if not more, work for a fraction of the profits. They would be negligent to do otherwise.

Wouldn't be very smart reasoning on their part, but let's assume true. Supports the up-thread assertion that they are not able to do it for political reasons however.

So given two jobs, you'll focus on the one that pays you the least per hour? Apple isn't just being smart, it has a fiduciary duty to maximize shareholder returns.

I'm not sure what you mean by "political reasons" but it does still make billions from Macs, and the Macs contribute in indirect ways to the phones and the overall brand. Maybe that's what you mean.

Does Apple have three employees? Of course not, so please stop with the false dichotomy.

Either support a product fully or end it, one of Jobs’ most important lessons.

> And nobody wants a stripped-down, locked-down version of MacOS-Lite.

That's what everyone was saying before the iPad launch and look how that turned out. Personally, I would hate an iPadBook, but I can see it selling really well. Especially among students.

I think it would too.

I've had both a Chromebook and, most recently, an Android tablet with a magnetically attached keyboard, and it's great as a "I don't quite feel like bringing my proper laptop" device crossing over to a tablet, and it fits in my coat pockets.

It's a bit on the heavy side, but it has pretty much supplanted my Android tablet. The Chromebook had a similar position, but then I still ended up bringing my (smaller) tablet with me.

For me at least it will never supplant a "proper" laptop because I want a full size keyboard and a much bigger screen, but it's a great complement for travels and meetings etc. or just to bring along in case of emergency.

Like a ChromeOS MacBook.

Apple has a track record of successfully transitioning to a new architecture, they have done it several times. Not only that but they already produce ARM based devices.

> And nobody wants a stripped-down, locked-down version of MacOS-Lite

Yet in iOS that is exactly what Apple sells the most.

> And nobody wants a stripped-down, locked-down version of MacOS-Lite.

There's nothing inherent in an ARM transition that would lead to such a thing, and there's nothing in the current x86 architecture that's preventing it.

Mmmm, with two previous examples of similar transitions, as well as the fact that lots of the tech that makes up MacOS already runs on aarch, I’m not sure there’s much to back up that view.

Why? I moved from a Mac Pro to a MacBook because I travel too much and I did not want the extra weight. If there is something CPU-intensive I need to do, I can just ssh to my work's racks and racks of build servers and let them do the work. Laptops today, for most people, are nothing more than email, web, and presentation and documentation machines. Given the ability to punt the hard stuff to the cloud, or remote access to your corporate systems if you are a dev, battery life, screen resolution and weight seem to be the largest driving factors in design. Also, to be fair, some of the benchmarks for the latest Apple ARM-based CPU have it beating the Intels in a few tasks. It is important to realize not everyone has your particular set of needs. Apple is, after all, a mass-market company and will focus on the largest set of users.

> And nobody wants a stripped-down, locked-down version of MacOS-Lite.

Plenty of people use iPads instead of laptops, no?

Apple is focused on making iOS better, rather than making MacOS worse.

Apple has been running MacOS internally on ARM hardware for a long, long time now.

I guess you missed the fact that Apple has endless billions to spend and can buy or hire to achieve anything they want.

What are you basing that on?

Now we really need (non-Qualcomm) SoC makers to get on board with actually making laptop/desktop-grade chips. Rockchip RK3399 and Marvell Armada 8k are big steps in the right direction, but we need more power (and PCIe lanes).

And we need SoCs with 64-bit-wide DDR data buses. Most of the existing non-server SoCs have only 32-bit DDR interfaces, so you can't use off-the-shelf DIMMs, which limits so many of these existing cheap ARM "maker" boards to soldered-down RAM and less-than-amazing memory performance.

Marvell Armada 8k (MACCHIATObin) takes a full size DIMM (apparently with ECC support, even). Not that cheap though… but cheaper than, say, the Overdrive 1000.

Will the new CPUs implement DRTM (dynamic root of trust, similar to Intel TXT) for Windows 10 SystemGuard? This appears to be planned for Qualcomm Arm CPUs that support Windows 10 and x86 emulation.


> "Windows Defender System Guard runtime attestation, which is built into the core Windows operating system, will soon be delivered in all editions of Windows. Windows Defender System Guard runtime attestation, like Credential Guard, takes advantage of the same hardware-rooted security technologies in virtualization-based security (VBS) to mitigate attacks in software."

What's the benefit of hardware-based security tech? Is it actually doing anything special, or just doing what security software is doing, but in hardware?

Take for example the secure enclave on an iPhone. If Face ID / Touch ID were implemented in software, you could read the data from memory after compromising the A11 chip. Instead you now have to compromise both the secure enclave and the A11, since it is isolated from the A11.

Why would that be any more than 2x harder?

General-purpose processors have to be secure while executing untrusted code, providing a large number of features, and providing good performance.

The secure enclave isn't subject to these constraints, allowing for more conservative design decisions.

You've found a privilege-escalation attack that can let sandboxed apps escape their sandbox? Still secure if the chip can't run apps in the first place. You've found a bug in the USB disk mode emulation code? Still secure if the chip doesn't have any USB code on it. You've found a bug in branch prediction? Still secure if your chip didn't use it. You've found a way to abuse the third party developers' debugging interface? Still secure if your chip provides no such interface...

Yes, it absolutely is more than 2x harder. The attack surface of the secure enclave is considerably smaller than the attack surface of the AXX chip as a whole, and you need a significant jailbreak/compromise before you could even target the SE.

Because you need two compromises, and hacking the secure enclave is a much harder proposition than finding an exploit that allows you to read the memory of an iPhone. Public information about how the secure enclave really works has been hard to come by.

The secure enclave would have a much smaller attack surface, due to only handling a much smaller set of features, I guess.

It can provide a signing chain from the bootloader upwards to verify that you're running the software you think you are.

The OS could do that as well, yet we see a lot of userspace-only exploits and hacks.

No, the OS cannot do that, because an attacker will replace your OS with a compromised one which lies to you that everything is peachy.
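To make that concrete, here's a toy sketch (my own illustration, not any vendor's actual implementation) of the measured-boot hash chain behind this kind of hardware-rooted attestation: each stage measures the next before handing over control, so a later, compromised stage can't retroactively forge the accumulated value.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    # TPM PCR-style "extend": fold the component's hash into the running
    # measurement. Order matters, and the operation can't be undone.
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure_boot(stages):
    m = b"\x00" * 32  # reset value, only reachable at power-on
    for stage in stages:
        m = extend(m, stage)
    return m

good = measure_boot([b"boot ROM", b"bootloader v1", b"kernel image"])
evil = measure_boot([b"boot ROM", b"bootloader v1", b"tampered kernel"])

# A verifier that knows the expected value detects the swap; the
# compromised OS cannot recompute `good` without the real kernel image.
assert good != evil
```

In real hardware the final measurement is signed by a key that never leaves the secure element, which is what lets a remote party trust the attestation even when the OS itself is not trustworthy.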


Software based is very vulnerable to the evil maid attack.

BTW, Google uses hardware-based security on all of their servers:


Ok, so you move from the "evil maid" reinstalling the OS to having to replace the CPU/whole computer. Yeah that definitely looks like an advantage.

Yes but you can detect that. The enclave can sign a statement that you could verify with a public key for that enclave. So if you replace the hardware there is no way to do that anymore as you cannot extract the private key from the original enclave.

Is this the beginning of the end for x86?

Maybe. Traditionally it's hard for companies to compete with much cheaper but slightly less performant rivals. That's the whole idea behind "disruptive innovation." When x86 came out it was the cheap knockoff that everybody could afford, and it eventually ate its way into workstations and servers as its scale let it pour in more engineering resources. ARM may be achieving that sort of scale advantage over x86, but its fragmented nature makes the story more complex.

Probably not. Arm is power efficient, but x86 is still ahead on performance. At some point you need a balance between battery life and performance.

If everything goes right for ARM, next year's CPU line should be single-core competitive against Intel's laptop line; the year after, ARM could be ahead.

Meanwhile, AMD's CPUs are winning on price/performance and pure performance for multi-core in the server market, and it looks like they're going to be competitive with Intel on basically every desktop area.

Intel has a marketing advantage. That's pretty much it.

Just built a 64-core workstation based on AMD Epyc CPUs; seeing how fast my simulation workloads run on it brings a smile to my face.

I can appreciate the fun to be had with such a many-core CPU, but did you actually measure the performance? According to Passmark, both single-thread and multithread performance of EPYC is very poor [1][2]. The Passmark database has been built up over years and is very informative, but I think for EPYC that is an erroneous result. Could you run the Passmark benchmark on your rig to get another public data point?

[1] https://www.cpubenchmark.net/cpu.php?cpu=AMD+EPYC+7501&id=31...

[2] Compare that with performance and price of something like E5-2670 from 2012: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2670+...

Will give that a try tonight. Have to warn you that one of my DIMMs turned out to be bad, so the system is currently running with only 15 DIMMs (which is an unsupported config), so the results might be suboptimal until I receive a new DIMM from the seller.

That's great, looking forward to the results. But, there's no rush. If you'd like to measure today anyway, please just make sure the result doesn't get reported to Passmark database so as not to spoil the small dataset with a biased result. Good luck with the replacement, hope it will stay solid from now on.

Just realised that Passmark does not have a Linux benchmark program, so I won't be able to run it.

Ah, that's a shame. I'd love to know how much faster the Epyc is than the E5-2670. Could you try to run sysbench to get some numbers? Here are mine:

# sysbench --test=cpu run --max-requests=20000

Test execution summary:
    total time: 25.9014s
    total number of events: 20000
    total time taken by event execution: 25.8983
    per-request statistics:
        min: 1.27ms
        avg: 1.29ms
        max: 3.19ms
        approx. 95 percentile: 1.29ms

# sysbench --test=cpu run --max-requests=20000 --num-threads=16

Test execution summary:
    total time: 1.6859s
    total number of events: 20000
    total time taken by event execution: 26.9264
    per-request statistics:
        min: 1.26ms
        avg: 1.35ms
        max: 3.59ms
        approx. 95 percentile: 1.51ms

[dman@epyc ~]$ sysbench --test=cpu run --max-requests=20000
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --max-requests is deprecated, use --events instead
sysbench 1.0.14 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second: 1461.82

General statistics:
    total time: 10.0004s
    total number of events: 14621

Latency (ms):
    min: 0.67
    avg: 0.68
    max: 1.76
    95th percentile: 0.69
    sum: 9997.72

Threads fairness:
    events (avg/stddev): 14621.0000/0.00
    execution time (avg/stddev): 9.9977/0.00

[dman@epyc ~]$ sysbench --test=cpu run --max-requests=20000 --num-threads=128
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --num-threads is deprecated, use --threads instead
WARNING: --max-requests is deprecated, use --events instead
sysbench 1.0.14 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 128
Initializing random number generator from current time

Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second: 47980.46

General statistics:
    total time: 0.4152s
    total number of events: 20000

Latency (ms):
    min: 0.68
    avg: 2.06
    max: 111.24
    95th percentile: 3.36
    sum: 41275.73

Threads fairness:
    events (avg/stddev): 156.2500/118.37
    execution time (avg/stddev): 0.3225/0.06

Let me know if you want me to run any other benchmarks.

Thanks. I just ran v1.0.14 the same way and got 793 events/s single-thread and 10264 events/s multithread (16) on the E5-2670. So you've got single-thread almost 2x faster and multithread almost 5x faster. In single-thread, that's much better than what I expected of low-frequency Epycs. You've got a nice snappy machine there :) Interestingly, the per-thread performance in multithread is 641 on the E5-2670, while only 374 on the Epyc. Probably there's some massive thermal throttling going on on the Epyc. With cores that fast, one should get at least 1461x64=93504 without throttling.
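For anyone checking the arithmetic, the ratios above come straight from the events/sec figures posted in this thread; a quick sketch:

```python
# sysbench CPU events/sec figures as posted in this thread
epyc_1t = 1461.82      # Epyc, 1 thread
epyc_128t = 47980.46   # Epyc, 128 threads
e5_1t = 793.0          # E5-2670, 1 thread
e5_16t = 10264.0       # E5-2670, 16 threads

single_speedup = epyc_1t / e5_1t     # ~1.84x: "almost 2x faster"
multi_speedup = epyc_128t / e5_16t   # ~4.67x: "almost 5x faster"
per_thread_epyc = epyc_128t / 128    # ~375 events/s per thread
per_thread_e5 = e5_16t / 16          # ~641 events/s per thread
ideal_epyc = int(epyc_1t) * 64       # 93504: the no-throttling ceiling
```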

I think the problem size is too small. Dialling up the max requests and setting the thread count to 64 brings the per-thread multithread number up to 1104. (I am guessing that the scheduler takes a while to bring all 64 threads up, so on small problems one might not see the full benefit of the available parallelism, but this is just an armchair hypothesis.)

[dman@epyc ~]$ sysbench --test=cpu run --max-requests=2000000 --num-threads=64
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --num-threads is deprecated, use --threads instead
WARNING: --max-requests is deprecated, use --events instead
sysbench 1.0.14 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 64
Initializing random number generator from current time

Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second: 70704.31

General statistics:
    total time: 10.0014s
    total number of events: 707263

Latency (ms):
    min: 0.68
    avg: 0.90
    max: 25.23
    95th percentile: 1.50
    sum: 638330.33

Threads fairness:
    events (avg/stddev): 11050.9844/1635.68
    execution time (avg/stddev): 9.9739/0.02

That's interesting; the same increase in requests makes little difference on my machine, I get 10432 for multithread (16). I noticed above you used 64 threads on a 64-core system - does that give you a better result than using 128 threads? It shouldn't; SMT should give you some increase in speed. At least on my 8-core system, using 16 threads gives 10432, while using 8 threads gives only 7311.

Wow, that's impressive. Thanks for posting this. I have a mere 8C/16T 2700X.

Intel is still the single-thread performance king and will be for the foreseeable future. It's the most important performance metric when considering a desktop processor for most workloads.

While ARM is catching up, there is no guarantee that it will actually reach parity one day, let alone beat Intel. Intel is still a beast and spends more on R&D than what AMD took in last year. It would be foolhardy to write Intel off.

You seem to have missed the part of the article* where the Cortex-A76 matches a Core i5-7300 in performance, with much lower TDP. That is planned to come... oh wait, it's already available.

* it's the second paragraph

My info may be out of date, but I think Intel also has a production advantage. AMD doesn't have the capacity to, for example, own 45% of the market and leave Intel with 55%, as I understand it.

Desktop/laptop is a shrinking market, and HEDT/Workstations have always been extremely tiny -- it's nowhere close to where Intel makes most of their money. Consumer Ryzen 7 uptake numbers or ARM laptops are basically meaningless as far as predictions about "The death of Intel" go. It's nothing. Datacenters are what matter.

This is a good move for ARM perhaps, since their licensees have (repeatedly, at least _for now_) failed to move into the server market where x86 reigns supreme, so if they can take a shrinking market off Intel's hands and make some inroads there, hey, whatever works. People actually don't care about processors, because for consumers, price is king. So if they can deliver cheaper Chromebooks or whatever, people are happy. And ARM already dominates the lower end market. But large players don't work that way.

EPYC has better per-core pricing (I say this as a very happy 1950X owner), but you're kidding yourself if you think large-scale vendors who buy thousands of SKUs per year/quarter do anything but buy in bulk, on multi-year contracts, with extensive sales negotiations. Intel is far ahead as far as vendor validation/stability goes (my 1950X motherboard still has BIOS/IOMMU glitches that I'm waiting on updates for; this stuff just takes time). For the biggest customers, Intel customizes their SKUs directly to their requirements. That's a significant amount of integration with their partners that AMD is not going to match overnight. And even if they take away some of Intel's total-monopoly status in the DC, say 25%, which is a metric shitload, Intel has still got a hell of a lot of technology (in their foundries) to back themselves up, as well as a massive warchest. I wouldn't be surprised if Xeon margins were above 50%. You really think they can't drop some of that off and immediately tilt that ratio back around, while having tens of billions on hand for R&D anyway?

You live in a castle of sand if you think they're actually going anywhere anytime in like, the next 5-7 years. And I have many bridges to sell you, if you think "marketing" is their only advantage in this fight -- as opposed to their foundries, deep integration, near-total monopoly status in the only market that matters, their massive warchest, and huge R&D setup.

I honestly wonder if Intel actually wants people to think that ARM Chromebooks are a threat to them or whatever. It means they can keep deluding themselves while Xeon sales and margins continue to skyrocket for cloud providers while everyone else chases pennies (except AMD, who are actually trying, and are absolutely not guaranteed to dominate by that alone)...

(I do hope they start feeling the pressure, of course. I'd love cheaper Xeons, personally. :)

Literally exactly what Arm is talking about in this roadmap.

You can't really look at power and performance in isolation, given that you can trade one for the other by modifying clock speeds. It would be better to say that from the milliwatt range up to roughly 20W of socket draw ARM is more performant, while current x86 designs are more performant above 20W.
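The clock/power trade-off above falls out of the first-order CMOS dynamic power model P = C·V²·f: since voltage has to rise roughly with frequency in the DVFS range, power grows superlinearly with clock. A rough sketch (the constants here are illustrative, not from any real part):

```python
# First-order CMOS dynamic power model: P = C * V^2 * f.
# Voltage scales roughly linearly with frequency in the usable DVFS range,
# so power grows roughly with the cube of frequency.
# All constants below are illustrative assumptions, not measured data.

def dynamic_power(freq_ghz, c=1.0, v_base=0.6, v_per_ghz=0.3):
    v = v_base + v_per_ghz * freq_ghz  # required supply voltage
    return c * v**2 * freq_ghz

low = dynamic_power(1.0)   # phone-class clock
high = dynamic_power(3.0)  # desktop-class clock

# Tripling the clock costs far more than 3x the power under this model:
print(round(high / low, 2))  # -> 8.33
```

This is why a design tuned for the milliwatt range and the same design pushed to desktop clocks can land on opposite sides of a perf/watt comparison.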

"No wireless. Less space than a nomad. Lame."

iPods could gain market share from marketing and consumer focused industrial design. ARM cores don't have the same way out.

Woah, fatal_error.

Does "Intel Inside" sound familiar? That's because it worked amazingly well, and people chose Intel over AMD back in the day just because they "felt" better about it. Some even chose it because they didn't want to get AMD-shamed ("Aw, poor person; they could only afford an AMD.").

Not saying Intel's CPU lineup wasn't better than AMD's product (especially at the height of the marketing program, when laptops used to have "Intel Inside" stickers), but a lot of people didn't even bother comparing. They just chose Intel-based computers.

And that's the story of how so many people I know ended up buying a PC with a crappy Celeron* (I know, I know, the new ones are better).

Sure, CPUs are harder than iPods to market to consumers. Most haven't a clue what an ARM Cortex-A7 is. But that could be the issue. They need to step up their brand identity. They also should probably tell people that ARM doesn't actually make hardware for consumers; they just design the architecture and license that design to silicon manufacturers.

Most people knew they were buying an Intel. Most people don't know they bought an ARM. They could do better.

*: I assume some bought them because the salesman at Best Buy said "dude, Intels are what you want. AMDs are slow."

It is rather amazing that just reading "Intel Inside" I heard the little chimes from the TV commercials and saw it in my head.

Intel has ~60x the revenue of ARM and is already a known brand with the public. If it gets into a marketing slugging match, Intel wins hands down.

I'm not sure that type of marketing will happen again though. Since then, computers have gone further down the path of being an appliance. What's inside matters less than what it can do. If Apple releases a MacBook with 1-5 days of battery life, no one will care what's inside.

I agree, I don't think it will. It just doesn't make sense.

ARM is owned by SoftBank, which has a net worth of ~2x Intel's.

And many many more commitments. They can't dedicate the same resources that Intel can.

Right, but Intel CPUs are used in more than laptops.

I don't know if Intel's days are numbered or not, but I don't think this alone would be enough.

I doubt it. But competition is sorely needed.

I used an ARM Chromebook with Linux installed for about 2 years. YMMV, but VLC seemed to not work anywhere near as well as it does on the x86 (in terms of playback and codec availability).

Good battery life though.

Doubtful. Performance generally corresponds to active area and, hence, power.

So, for ARM to become as performant as x86 means they wind up burning the same area and power.

That having been said, breaking a monoculture would be welcome, especially given how cavalier Intel is about security.

(You will note I didn't say "In light of" with respect to Intel and security. IBM, DEC, etc. have been preaching about the fact that x86 has lousy security for 30+ years. It's just that nobody cared until x86 became a mainframe ... err ... cloud.)

> Performance generally corresponds to active area and, hence, power

Then how come ARM won on mobile?

Because an enterprising engineer managed to hand-code an assembly-language implementation of the cellular baseband on ARM, thus entrenching ARM as the low-cost implementation on a cell phone. After that, ARM expanded outward to run the GUI.

In addition, the batteries of the time demanded a specific power envelope. Not many chips fit this envelope ... pretty much only ARM, MIPS, and a handful of also-rans that you've never heard of.

Once ARM got going, network effects took over. There is no reason you couldn't implement a cell phone on a MIPS core, for example, at this point except for network effects.

Another thing is that, at the time, the only other ISA designer that really cared about this market was Hitachi with its SuperH (Hitachi later co-founded Renesas with the intent of becoming a meaningful ARM competitor, which obviously didn't happen).

You could get usable low-power microcontrollers and almost-SoCs with ARM cores in the late 90s. While there were SoCs with other 32-bit RISC cores, they were typically intended for mains-powered high-performance applications, ranging from DVD players to network equipment. See how a large part of Freescale's PowerPC SoC lineup is, without much exaggeration, a "Cisco 2500 on a chip" (obviously with a PPC core instead of m68k, and with a wonderfully complex DMA-engine/protocol-decoder/whatever thing).

Wasn't ARM also the de facto standard for PDAs? That was deliberate market positioning after Apple came into the fray and wanted a CPU for its Newton (and helped spin off the ARM we know today from Acorn).

Dramatically lower performance expectations.

Low power Intel is not as good as low power ARM. "High power" ARM competitive with desktop doesn't quite exist yet, although the iPhones are very close (and better than a desktop of a few years ago)

I would imagine it comes down to margins. In a competitive market, customers aggressively try to drive down prices. Most silicon companies are too smart to play that game. You fight to establish dominance up front. If you don't win the market, you're unlikely to ever become profitable, so you cut your losses and exit.

Atom came too late and it wasn't enough to beat ARM. All the other low power players stagnated.

x86 has overhead in small low-power systems; it didn't help that the initial Intel low-power products were low-effort garbage.

> So, for ARM to become as performant as x86 means they wind up burning the same area and power.

Yup. The thing is that ARM can indeed match Intel on scaling up as you noted. Intel is struggling with x86 in scaling down, though. Easy to see which one is in a better position overall.

Intel's x86 architecture reset and housecleaning is due to land in 4 or 5 years and it should be really interesting then.


I wonder if they (ok, maybe not them, they're IP only) will deliver the 7nm / 5nm processes

In practice, ARM works very closely with the foundries despite not selling chips. A process-agnostic dump-and-run won't let you hit the perf that you'd expect.

They have physical IP with verified, fab-specific implementations for most of their cores. The press release states that the coming cores are also covered by that program.

IMHO, ARM performance is "good enough" for consumption and light editing/creation.

The real test for this is software.

Anyone remember this? https://en.wikipedia.org/wiki/FX!32


We've come a long way... Software vendors compiling for multiple targets doesn't seem so crazy anymore. We've got toolchains that are good at that, and we don't distribute software by floppy or CD anymore. Emulation seems like the wrong way to go.
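As an illustration of how cheap "compile for multiple targets" has become, here is a sketch of a fan-out build script in the style of Go's cross-compilation (the binary name, target list, and package path are all hypothetical; GOOS/GOARCH are the real Go environment variables):

```python
# Sketch of a fan-out build script for multiple CPU targets.
# The binary name ("myapp") and the target list are hypothetical examples;
# GOOS/GOARCH are the environment variables Go's toolchain actually uses.

TARGETS = [
    ("linux", "amd64"),
    ("linux", "arm64"),
    ("darwin", "arm64"),
]

def build_commands(binary="myapp"):
    """Return one `go build` command line per (OS, arch) target."""
    cmds = []
    for goos, goarch in TARGETS:
        cmds.append(
            f"GOOS={goos} GOARCH={goarch} "
            f"go build -o {binary}-{goos}-{goarch} ./cmd/{binary}"
        )
    return cmds

for cmd in build_commands():
    print(cmd)
```

No emulation layer, no per-target machines: one host cross-compiles every artifact, which is exactly why an FX!32-style translator looks like the wrong tool today.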

When you factor in NAND, RAM, and wireless networking, at what point does the lower CPU price justify leaving the x86 desktop ecosystem? The Ryzen 3 2200G costs only $99. So, assuming everything else is the same, would you pay $99 more for the NUC or board?

It is the same reason x86 doesn't work out moving into the mobile space, and the same reason ARM doesn't work out moving up into the notebook/desktop space.

Two more years down the road, I wouldn't be surprised to see a dual Zen core APU with Vega graphics selling for $59 or less.

I am of two minds about this.

While I want to see the x86 desktop get a serious contender once again, it has risen on the back of a very open and modular platform.

But ARM-based products are virtual black boxes by comparison.

Thus I worry that ARM rising to the challenge on the desktop will accelerate the trend of "devicification" of the desktop.
