
How enthusiasts designed a powerful desktop PC with an ARM processor - rbanffy
http://www.pcworld.com/article/3184230/computers/the-anatomy-of-a-powerful-desktop-with-an-arm-chip.html
======
brianolson
Article is all about vaporware. Probably we were all hoping for more when we
clicked. Maybe title should be something like "Enthusiasts Dream of a Powerful
ARM Desktop PC".

ARM isn't a super awesome ISA though. (I've programmed directly in 6811,
68000, PPC, MIPS, ARM.) I'd be a little tempted to hope for skipping ARM and
getting to RISC-V desktop class hardware. Latest RISC-V chips are more like
mid range embedded ARM, but the dream of a clean new ISA that's open source
and could become the Linux of CPUs is enticing.

~~~
xmichael99
Agreed, I can't believe that article existed with that title. I was rather
annoyed when I got to the end.

Title should have been, "Summary of an audio recording about the type of ARM
computer a bunch of nerds wish existed."

------
m-j-fox
When I saw the picture at the top I thought someone besides me was using
ThunderX as a PC. The single-thread performance is nothing to write home
about, but if you can find something to engage all 48 cores (like pbzip2) then
it's bananas.
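For the curious, the trick pbzip2 uses — compressing independent blocks in parallel and concatenating the resulting streams — can be sketched in a few lines of Python (the block size and worker count here are arbitrary; `bz2.compress` releases the GIL, so threads do scale across cores):

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bzip2(data: bytes, workers: int = 4, block_size: int = 1 << 20) -> bytes:
    """Compress fixed-size blocks in parallel and concatenate the streams.

    bzip2 streams can be concatenated and still decompress as one stream,
    which is the property pbzip2 exploits to keep all cores busy.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(bz2.compress, blocks))
```

A standard `bunzip2` (or Python's `bz2.decompress`) reads the concatenated output back as a single file.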

I don't know why they say Cavium is "not interested" in making a PC. It's not
their main market, but they'll sell you chips and eval boards. You need a few
daughter cards for SATA and PCIe so you can plug in a graphics card. What's
really needed is someone to integrate that design, putting the normal IO on
board.

It runs 98% of Ubuntu packages, thanks in part to Linaro's hard work. So they
should know: it's a perfectly viable project.

~~~
subway
Able to point to a sales channel where one might be picked up? The last time I
reached out to Cavium directly, they made it seem they weren't interested in a
sale that wouldn't lead to volume later.

~~~
m-j-fox
I'd talk to Gigabyte. They build systems to spec. If you want to go to Cavium
directly then you're right that you have to convince them you have a solid
business plan to move some thousands of units a month, in which case they will
bend over backwards to help you do so.

------
ekianjo
Actually, they did not design anything, or that's not described in the
article.

Besides, there's no mention of the complete lack of graphics drivers for ARM
under Linux, which is one of the most problematic issues for anyone who wants
to build a desktop/laptop with it. No ARM SoC manufacturer
has any open source effort for drivers, and reverse-engineering efforts have
stalled for the most part.

~~~
cbetz
Reverse engineering efforts have only stalled for ARM SoCs using PowerVR GPUs
from Imagination Technologies. However for Qualcomm Adreno and Vivante GPUs,
the freedreno and etnaviv projects are steadily and quietly moving forward;
both have been integrated into mainline Mesa at this time.

For instance, I learned from HN last week that you can run Android on the
i.MX6 with open source GPU drivers. This is a big deal. See:
[https://www.collabora.com/news-and-
blog/blog/2017/04/27/andr...](https://www.collabora.com/news-and-
blog/blog/2017/04/27/android-getting-up-and-running-on-the-imx6/)

~~~
metalliqaz
What about Nvidia's powerful K1/X1/P1 Tegra chips? Any hope there?

~~~
my123
The Google Pixel C runs using the Nouveau open-source drivers.

------
jdblair
Long time embedded Linux developer here.

This article exaggerates the undesirability of cross-compiling from x86 to
ARM. It requires a bit of a conceptual shift, but once your toolchain is set
up, the workflow is practically the same. Debugging is a bit trickier, but
most issues can be sorted out on x86 before cross-compiling. It's definitely
not like developing for Windows on a Mac.
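One cheap sanity check in that workflow is confirming which architecture a binary was actually built for; the ELF header records this in its `e_machine` field. A minimal Python sketch (assumes little-endian ELF files only; the function name and the subset of machine codes shown are illustrative):

```python
import struct

# A few e_machine values from the ELF specification
ELF_MACHINES = {40: "arm", 62: "x86-64", 183: "aarch64", 243: "riscv"}

def elf_target(header: bytes) -> str:
    """Return the target architecture recorded in an ELF header.

    Reads only the 2-byte e_machine field at offset 18; assumes a
    little-endian file rather than checking the EI_DATA byte.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(machine, f"unknown ({machine})")
```

Running this on the first 20 bytes of a binary built with an aarch64 toolchain should report "aarch64" rather than the host's "x86-64".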

There may be a compelling case for a PC that consumes less energy by using an
ARM CPU, though peripherals like DDR RAM, SSD and video card will use the same
energy regardless of CPU architecture.

~~~
joezydeco
Embedded Linux developer here as well.

As much as I grind my teeth with Yocto, it _does_ make the process of
generating a usable toolchain much easier than it used to be. Which makes the
rest of it easier.

Up until a couple of years ago it was pretty much "grab what you can from
CodeSourcery and cross your fingers".

~~~
woodrowbarlow
i've found that:

* if all you need is a toolchain, go with crosstool-ng.

* if you want a toolchain and a bootable kernel with busybox and dropbear for a popular chipset, go with buildroot.

* if you want buildroot plus flexibility in every direction (at the cost of configuration pain), go with yocto.

* if you've outgrown even yocto, you're back to crosstool-ng and rolling your own build/bundle system.

------
abainbridge
It seems a shame that Nvidia don't make a NUC like product from the Jetson
TX2. See: [http://www.nvidia.com/object/embedded-systems-dev-kits-
modul...](http://www.nvidia.com/object/embedded-systems-dev-kits-
modules.html).

They'd need to sell it for about $400 though, with case and PSU.

~~~
walterbell
What size of case will fit that board?

~~~
zokier
Looks like the devkit should fit mITX cases

------
walterbell
There's a MIPS + PowerVR PC that can run Debian Linux, with support for
hardware virtualization of the GPU. Does not seem to be available outside of
Russia.

[http://www.pcworld.com/article/3040528/computers/this-
russia...](http://www.pcworld.com/article/3040528/computers/this-russian-all-
in-one-desktop-is-just-weird-enough-to-be-interesting.html)

[https://www.imgtec.com/blog/t-platforms-tavolga-terminal-
des...](https://www.imgtec.com/blog/t-platforms-tavolga-terminal-desktop-pc-
mips-linux/)

------
mmjaa
Another great ARM-based, enthusiast-built system: the Open Pandora, and its
follow-up, the Pyra:

[https://pyra-handheld.com/boards/pages/pyra/](https://pyra-
handheld.com/boards/pages/pyra/)

Maybe not high-performance, but ... decent. And such a great, fully
integrated, community-led project.

~~~
AlbertoGP
Yes, I'm one of the pre-orderers and it's moving closer to delivery. Nobody
knows yet when it'll happen though.

Current status: [https://pyra-handheld.com/boards/threads/i-wish-you-could-
fe...](https://pyra-handheld.com/boards/threads/i-wish-you-could-feel-things-
through-the-internet.80200/)

Here is the cost break-out: [https://pyra-handheld.com/boards/threads/money-
makes-the-wor...](https://pyra-handheld.com/boards/threads/money-makes-the-
world-go-round.80207/)

The current setback is that a first batch of cases was either damaged on
delivery or not sent: [https://pyra-handheld.com/boards/threads/the-following-
is-ba...](https://pyra-handheld.com/boards/threads/the-following-is-based-on-
a-true-story.80321/)

------
Animats
What about repurposing ARM-based Chromebooks?[1] Same result, less work.

[1] [https://www.chromium.org/chromium-os/developer-
information-f...](https://www.chromium.org/chromium-os/developer-information-
for-chrome-os-devices/samsung-arm-chromebook)

~~~
shams93
With Chromebooks that support Android, you can run Linux via Termux for a
console Linux environment.

~~~
jacksmith21006
This is how our school teaches AP CS 1 and CS 2. They used to use Crouton, but
it required developer mode. Now that Google has added containers to ChromeOS,
you can use Termux inside a container without developer mode.

ChromeOS is the only commercial OS that has the exact same environment used in
the cloud on a laptop.

In some ways ChromeOS is now an OS of OSes. My 2nd oldest son, studying CS at
university, uses ChromeOS, Android and desktop Linux all on the same machine at
the same time. With containers this is done with basically zero additional
overhead.

Google has something really special here, and I would expect it to become a
strong developer platform over the next couple of years.

It is just ideal in that you just pull down whatever containers you need and
they will run unchanged on ChromeOS as it has the same Linux kernel as the
cloud.

------
sleepingeights
All of the more recent ARM SBCs are capable of running as desktops for the
most common tasks of browsing the web, watching movies, editing documents,
etc... on less than 5W of electricity.

ARM's selling point is efficiency and cost, not raw power irrespective of
cost.

edit: The real issue right now is not whether ARM can run as a desktop,
because it can. It lies in keeping the large number of devices and processors
upstream in the Linux kernel. Monitoring kernelci.org shows that this is
slowly becoming less of a problem as more enthusiasts become fans of ARM
processors and the available chips and SBCs.

------
pasbesoin
I'm pretty sure this title is going to apply to Apple, in the not too distant
future.

Unless they've changed their direction, again.

It's also one more way to explain the foot-dragging on the Mac Pro. It's hard
to launch a new high-end platform when you're focused on switching to a
platform squarely aimed at mobile (phones and laptops).

Now that they say a new Pro is coming, I wonder a bit more about whether
they've switched directions and are going to stay with Intel in the near
future.

P.S. Or have they continued development and are they going to be one of the
first to introduce true high-end ARM power through up-design and an effective
multi-processor/multi-package integration? (Aside from ARM server clusters,
which can be powerful but are a different kind of thing.)

~~~
mdip
While it'd be neat to see, I think the hurdles that exist in moving from Intel
to ARM are pretty high, and far higher than they were when they moved from the
PowerPC range to Intel.

Just a few off the top of my head:

\- The Mac platform, today, is _much_ more widely used than it was in the
PowerPC days. This is a double-edged sword. Apple can use their substantial
weight to force ISVs to recompile[0]. These ISVs are unlikely to provide the
next version for free. Some ISVs won't exist any longer, forcing one to run
the application in whatever Rosetta-like compatibility layer is produced. I
can't speak to how well a translation application would work going from
x86->ARM, but the only emulators I've seen that perform _well_ are ones where
the target platform is dramatically more powerful than the emulated platform
(Nintendo emulators, etc). The impact on users of upgrading is _very_ high and
there are _many_ more of them now, who will get noisy. Their competition has
also gotten better at producing more desirable products (Surface Book, Windows
10[1]).

\- ARM's aims are performance-per-watt, not performance at all costs. I don't
care if my desktop drinks electricity. I care if it does things quickly. I
don't believe Apple will make this move until they can be assured that a
processor can be developed for about the cost of an Intel equivalent and will
perform as well[2].

\- I'm foggy on the details, but my understanding is that App Store
submissions are required to be done in a way that provides LLVM or other IL/VM
language code instead of machine code. This could land them in a spot where
re-compiling to a new platform can be done by _Apple_ (assuming ToS have
granted them that right), which is great ... for apps in the App Store. The
mac platform app store isn't the only source of software.

\- Apple would almost certainly have to design the processor as they have with
their phone. "They've done it before, they can do it here" is a somewhat fair
and unfair argument. Desktops have different design goals than phones/iPads.
Notebook designs would probably fall somewhere in-between the two. To do it
_right_ , they probably have to support two new processor designs: one for
"Pro Desktop" and one for a notebook that has more performance than their
highest end mobile device, but sips power like their highest-end mobile
device. The cost, all around, will be _high_ : low-power/high-
performance/cheap, pick two. Intel gives them high-performance/cheap and
moderate power.

There are other, less important reasons, but in weighing pros and cons, I
can't come up with a lot of benefits to doing this. The funny thing is that
even as I was writing this, I could come up with plenty of counter arguments
and the reality is that "Apple could actually pull something like this off". I
am having a difficult time figuring out, though, what the upside is for them.
Apple owning the processor doesn't buy them a whole lot. And while having a
single set of CPU instructions[3] sounds like it might benefit ISVs, it really
only helps mac-only ISVs. Most of those ISVs are going to have to target Intel
platforms if they're supporting Windows. There would have to be a few other
_huge_ reasons to do this to offset the costs.

Personally speaking, I'd _love_ to see something like this ... I'd probably
find myself buying an Apple product[4].

Part of me (the more cynical side) wonders if these rumors don't originate
out of Apple as a way to keep Intel on its toes. Apple is a _big_ customer and
hints that Apple may jump ship to ARM keep Intel focused on improving the
performance/watt ratio and likely helps them on price.

[0] And we all know it's not just a matter of changing the target platform for
any moderately complex application

[1] Yeah, maybe not the best examples, but privacy elements aside, Windows 10
works, is pleasant to use and rarely crashes despite my running insider builds
in the fast channel.

[2] I haven't looked too deeply into the server processors, but my sense is
that they're desirable mainly because of the core numbers, almost as though
the ARM processors are making up for single-threaded performance by throwing
more cores at the problem. And that's probably a good deal for _many/most_
server use cases. Many server tasks are simple as far as an individual thread
is concerned, but multiplied by the concurrent use.

[3] Well, no, not exactly.

[4] For all of the same reasons these "insiders have [not actually] designed
an ARM desktop", but also because my inner geek would like to play with an ARM
desktop.

~~~
DerekL
> I'm foggy on the details, but my understanding is that App Store submissions
> are required to be done in a way that provides LLVM or other IL/VM language
> code instead of machine code. This could land them in a spot where re-
> compiling to a new platform can be done by Apple (assuming ToS have granted
> them that right), which is great ... for apps in the App Store. The mac
> platform app store isn't the only source of software.

Apple calls this "Bitcode". It's not useful for portability between
architectures, it's only good for adapting to smaller changes in instruction
sets. (For example, maybe the current chip doesn't do integer division, but a
future one does.)

Here's a discussion:
[https://news.ycombinator.com/item?id=9727599](https://news.ycombinator.com/item?id=9727599)

------
10165
The article begins by looking back many years. In those days there were
technical limitations and clever workarounds. One MB was enormous.

Today, what are the technical limitations?

PCWorld's summary of the comments in the talk lacks much insight. The
article itself is misleading because it seems to suggest the goal is a
"consumer PC". As I understand it the goal is a build machine, a computer
whose purpose is to run a compiler. This is not a computer primarily for
_consumption_. Graphics are optional.

Here is how I understood the comments:

1\. Need _expandable_ memory. Not everyone will require the same amount of
memory. For example, the kernels I compile only need about 200MB of RAM, max.
But this might not be true for other users. Back in the PC days, users could
add their own RAM. This concept has been lost on mobile phone manufacturers.
Not everyone needs the same amount of RAM.

2\. Need better secondary storage. I would guess the reason is not to
supplement RAM i.e., swap space, but because source trees have become so
large. This is a pet peeve of mine. I keep writing half-baked tools to reduce
the size of source trees to only what I will need. If the trees were smaller,
perhaps more customized, then I could fit them in RAM (mfs or tmpfs). On my
RPi I never use SD cards as writeable storage. They are just where I store a
read-only copy of the kernel and userland which I can load into memory. I pull
out the card after booting. I/O via secondary storage is just too slow.

Looking back at times past, there was some incredible creativity in finding
ways to make do with limited memory. Today we have enormous amounts of RAM
(and computing power), but I see little effort or creativity in getting
systems to fit into these "constraints" (sheesh).

The systems of the past fit easily in today's "minimum" amounts of RAM. The
systems of today are still used by a majority of users for many of the _same
boring tasks_ as in times past but cannot fit into _GB_ of RAM?

Software is like a gas. It expands to fill space.

That forces us to keep purchasing new hardware. I guess that is the point.

~~~
m-j-fox
I think this will make a fine build machine for aarch64:

[http://b2b.gigabyte.com/Density-
Optimized/H270-T70-rev-100#o...](http://b2b.gigabyte.com/Density-
Optimized/H270-T70-rev-100#ov)

With 372 cores and 32 dimm slots you can put your whole Ubuntu source tree in
tmpfs so you will not have to wait too long to recompile your distro.

------
ShinyCyril
Does anyone know any ARM SoCs which can run without binary blobs? I'm looking
for something with a fast serial video interface, H.264 decoder and a SATA or
PCIe controller. So far I'm looking at the Allwinner A20 and the i.MX6 -
however the latter doesn't have enough bandwidth on the video interfaces to
drive a high-resolution display.

The RK3399 mentioned in the article looks great, however it seems like at the
very least, the Mali GPU requires a binary blob (not sure about some of the
other peripherals).

------
vfclists
They are not going to use Linux as the OS are they?

This is the OS in which opening browser tabs causes music to go choppy. After
more than 20 years of Linux this problem still exists. WTF?

Did that ever happen on the Archimedes?

------
nickpsecurity
What's weird is that Russia simultaneously has teams building CPUs so
threatening that Intel would buy one in self-defense, yet the one making
desktops is worse than what some academics built on RISC-V and OpenSPARC.
Russia needs to pay their top people for a competitive design that can be
cheaply licensed thanks to the subsidy. Then sponsor some SoCs. Then a
desktop. Then get the process rolling.

~~~
monocasa
Which Russian chip company did Intel buy?

~~~
trsohmers
Pretty sure he is thinking of Elbrus, but AFAIK nothing actually happened from
that being announced back in 2004... Elbrus still exists and is making chips.
They were basically the Russian Transmeta.

~~~
nickpsecurity
I saw the "is buying" article, but never whether they concluded the deal. It
was a _huge_ amount of money. I also remember the tech description read like a
better version of Itanium. I thought they were blocking an Itanium rival
before it hit market.

I didn't follow it from there, as I was doing a quick survey of Russian chips
and fabs.

~~~
Jenya_
Intel hired people from Elbrus to work in Intel's Moscow office.

[https://www.extremetech.com/extreme/56406-intel-hires-
elbrus...](https://www.extremetech.com/extreme/56406-intel-hires-elbrus-
microprocessor-design-team)

~~~
nickpsecurity
That's exactly the one I read! Thanks Jenya! So not only was I right that they
could make Intel-grade chips: they've been doing it at Intel since 2004. Haha.
There should be some more in Russia, though, that could support a larger team
of hardware people. They could also try to lure the best ones back or get some
from IBM's POWER team.

