
Pre-Prod HoneyComb LX2K 16-Core Mini ITX Arm Workstation Pre-Order for $550 - walterbell
https://www.cnx-software.com/2019/06/05/buy-honeycomb-lx2k-16-core-mini-itx-arm-workstation/
======
walterbell
Arm desktop is useful for native development builds. Quoting Linus,
https://www.extremetech.com/computing/286311-linus-torvalds-claims-arm-wont-win-in-the-server-space

 _> ... as long as everybody does cross-development, the platform won’t be all
that stable. Or successful ... If you develop on x86, then you’re going to
want to deploy on x86 ... It was literally this “develop at home” issue.
Thousands of small companies ended up having random small internal workloads
where it was easy to just get a random whitebox PC and run some silly small
thing on it yourself. Then as the workload expanded, it became a “real
server”. And then once that thing expanded, suddenly it made a whole lot of
sense to let somebody else manage the hardware and hosting, and the cloud took
over._

------
neilv
Sounds like the pre-production board is for people who can't wait until a
production board with mainline Linux support is available:

> _The pre-production developer board is fitted with NXP LX2160A pre-
> production silicon which explains some of the limitations. [...] software
> features will be limited with the lack of SBSA compliance, UEFI and mainline
> Linux support. It will support Linux 4.14.x only._

BTW, would be nice if the production board had Coreboot support, whether or
not it has UEFI by default.

~~~
linux4kix
Coreboot isn't immediately on our radar. We have both u-boot and EDK2 running,
although EDK2 needs more work.

Support for the SoC is already making it into mainline, and we can boot a
mainline kernel with limited functionality. Getting anything into mainline is
a process, but we hope to have everything supported by the end of the year.

------
dpifke
Does anyone know if this has a closed coprocessor equivalent to the Intel
Management Engine? Or how open the firmware is?

~~~
snazz
I'm also wondering how the BIOS/UEFI equivalent on these machines works--I
really hate how the Raspberry Pi does it (boot partition on SD card contains
all data).

~~~
drewg123
At work we just got an aarch64 box from a vendor that has a very x86-like UEFI
setup, right down to the "BIOS" setup screens. It seemed very familiar, but
I'd rather have something like coreboot.

~~~
p_l
Coreboot only handles low-level initialization; it will still need a
higher-level payload unless you're going to hardcode your kernel to the board
for every update.

Now, coreboot+tianocore uefi for somewhat open firmware...

------
m0zg
NVIDIA Jetson Xavier is quite usable as a "workstation" as well. I actually
develop ARM SIMD code on it. 8 cores + GPU. $700 on Amazon right now.
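
For a flavor of what that SIMD work looks like, here's a minimal NEON sketch
(the function and values are made up, just an illustration of the kind of
kernel you'd want to build and profile natively rather than under emulation):

    // AArch64 NEON: y[i] += a * x[i], four floats per iteration
    #include <arm_neon.h>
    #include <stddef.h>

    void saxpy_neon(float a, const float *x, float *y, size_t n)
    {
        float32x4_t va = vdupq_n_f32(a);        // broadcast the scalar
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t vx = vld1q_f32(x + i);  // load 4 floats from x
            float32x4_t vy = vld1q_f32(y + i);  // load 4 floats from y
            vy = vmlaq_f32(vy, va, vx);         // vy += va * vx
            vst1q_f32(y + i, vy);               // store back
        }
        for (; i < n; i++)                      // scalar tail
            y[i] += a * x[i];
    }

On the box itself this builds with plain gcc -O2; no cross toolchain or
emulator in the loop.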

~~~
aseipp
Holy hell, they dropped the price on that thing pretty quick! I think you
could originally only get the $700 price tag for Xavier via the "developer
discount", which only applied to one unit.

As a word of note, though, the chips in the Tegra machines are, IME, extremely
powerful -- but you're stuck with Linux 4 Tegra and Nvidia's Ubuntu-based
distro. At least for the TX1 there was some success getting other distros
working (without the GPU), but I remember several people running into trouble
with the TX2. That may or may not be a dealbreaker for some people (it's a
sort of franken-Ubuntu, and your only recourse when you hit problems is the
Nvidia forums).

The Solid Run LX2K machine, however, should (eventually) run any standard
AArch64 distro. These boards also have 16 cores and expandable SATA, as well
as an open x4 PCIe slot, so you can install whatever GPU you like. Not to
mention you can plug in a bunch of RAM. You can even buy a slightly different
variant of this board with a 100GigE cage on it, still under $1000 USD, which
puts it in a unique position versus any other board I've seen.

I think the LX2K boards are (at this point in time!) a much better sell
if you want an actual "arm workstation" as a developer. But the Xavier will be
vastly superior if you want a lower power profile (mobile) or if you're doing
something like DL inference.

~~~
m0zg
You can install whatever GPU you like, but good luck getting aarch64 drivers
for it. IMO franken-Ubuntu on Xavier is pretty close to stock, save for CUDA
and a few Jetson-specific tools. Certainly close enough to be practical. And
you get Pascal-grade CUDA and tensor cores on the Xavier, too. And PyTorch
works as well.

~~~
aseipp
Why would that be an issue? Mesa and the newer AMDGPU stack already work fine
on non-x86 architectures (people have demoed physical POWER, RISC-V and
AArch64 machines, all with various Radeon GPUs), and the support gets better
with every release.

Also, Franken-Ubuntu I suppose isn't really my complaint, though I still don't
like it. It goes beyond that, though -- the Tegra boards have a very custom
design that means some things are a huge pain in the ass if you want to even
remotely treat them like "normal" computers (for example, want to test that
fancy driver you wrote on your AArch64 Tegra machine for that M.2 peripheral?
Well, first you gotta reconfigure the BSP firmware to turn off the power rails
to various subcomponents so you can reroute them to things like the M.2 E-key
slots in the TX2. Gotta disable that USB controller via BSP flash if you want
a chip in there, etc etc). It also means I am basically stuck on older kernel
versions, which may mean features I _want_ to use or develop with are
basically off limits (eBPF enhancements, io_uring, etc.). I don't just want to
test SIMD; I also need to test actual memory/disk/peripherals, etc. These
constraints make perfect sense for Tegra devices -- they are still annoying,
for me.
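
As a made-up example of the kind of thing that's off limits there, a quick
probe for io_uring (added in kernel 5.1, so it simply isn't present on a 4.x
vendor kernel; needs reasonably new headers to compile):

    /* Check whether the running kernel supports io_uring. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/io_uring.h>

    int main(void)
    {
        struct io_uring_params p;
        memset(&p, 0, sizeof(p));
        long fd = syscall(__NR_io_uring_setup, 8, &p);  /* 8-entry ring */
        if (fd < 0) {
            printf("no io_uring here: %s\n", strerror(errno));
            return 1;
        }
        printf("io_uring is available\n");
        close((int)fd);
        return 0;
    }

On a 4.x vendor kernel the syscall simply returns ENOSYS.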

I'm not saying the Tegra machines are not nice, or they are not high quality.
They absolutely are. If you are just writing code to give the CPUs some work
to do, they are great! Flash the default BSP, plug in HDMI, run 'gcc'. They're
the highest performing ARM machines for their power profile and size, IME. But
the moment you treat them like an "ordinary" computer, all bets are off. These
LX2K machines act substantially more like "ordinary" desktop computers, and
IMO that's a significant benefit.

Now, if you want to do DL inference or GPU programming, I think CUDA is vastly
better (for a number of technical reasons). And the Tegra machines would be a
good choice! But if you're just writing ARM code and need a desktop, these
machines seem perfectly sufficient and can host open source drivers just fine,
and will be _far_ more usable in a number of ways vs a Tegra machine.

(Also: Nvidia released some press notes last month saying they would be
bringing the full CUDA stack to ARM for data-center computing, presumably
including the host driver. So it's possible you may be able to stick a Quadro
into these things soon enough and use CUDA to your heart's content!)

~~~
m0zg
>> these machines seem perfectly sufficient

That I do agree with. I wanted to buy an ARM machine to do CI for one of my
clients, and currently the only real option for that seems to be Cavium
ThunderX. And pricing is way out there in the stratosphere for what you get.
Instead I could see them deploying a few of these for a fraction of the cost.

------
UomoNero
Any advice on a mini-ITX (or other form factor) board, low/mid budget, with
more than 1 SATA port? I want to build a NAS to replace my old ReadyNAS...

~~~
alacombe
Pretty much any mini-ITX board (i.e. <$100) comes with 4 SATA ports as standard.

~~~
UomoNero
? I can’t find one. Really. I lost my “Google kung fu”. Can you give me some
direction? I will love you!

------
6thaccount2
Dumb question, but what is this for?

~~~
klez
Native development on ARM instead of cross-compiling, for example. When you
reach a certain point down the stack, the machine you develop for/on matters a
lot more.
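
As a toy (hypothetical) illustration of the difference, with the usual
Debian-style cross toolchain name assumed:

    /* arch.c -- trivial architecture-sensitive program.
       Native build on the Arm box:   gcc -O2 arch.c -o arch
       Cross build from an x86 host:  aarch64-linux-gnu-gcc -O2 arch.c -o arch
       The cross-built binary still needs Arm hardware (or qemu-user) to run
       and test, which is exactly the friction native development avoids. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__aarch64__)
        puts("built for AArch64");
    #elif defined(__x86_64__)
        puts("built for x86-64");
    #else
        puts("built for some other architecture");
    #endif
        return 0;
    }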

~~~
6thaccount2
Gotcha

------
1-6
Let the RISC vs CISC wars begin!

~~~
messe
Modern ARM isn't really RISC any more, at least in the original sense of
"reduced instruction set".

~~~
unixhero
That and x86 nowadays has RISC elements

~~~
monocasa
Eh, that's overstated. Microcode always looked RISC-esque.

------
techntoke
Probably better to get a new Ryzen processor and motherboard for around the
same price.

~~~
jchw
I love Ryzen, but having more mid-range machines on CPU architectures that
aren’t AMD64 is a huge net gain for everyone.

~~~
whatshisface
Is that true? It increases the load on developers.

~~~
jchw
Not really, at least until end users are buying ARM machines. But honestly, I
think this bit is overplayed. The vast majority of modern developers writing
software that runs on consumer computers are not writing assembly language,
and most of them aren’t even writing code in languages that compile AOT to
machine code. I’d argue most of the things people use nowadays run in the
browser.

Desktop isn’t even the hot new thing anymore, mobile is (or, at least, _was_.)
And as fate would have it, mobile platforms are dominated by ARM already. Many
modern developers already deal with ARM as it is.

Going further down the rabbit hole, I’d argue that most applications aren’t
anywhere near CPU-bound and would run just fine under emulation, and nobody
would bat an eye. This isn’t really theoretical, considering x86 emulation
already exists in ARM Windows 10.

And finally, most open source software already runs on many architectures,
just look at how many architectures Debian supports.

It does increase the load on developers, but not to a degree that outweighs
the benefits to competition. I think a good number of developers are capable
enough to figure out how to cross-compile.

Architectural splits are not new. Apple did it twice already with Mac, and
while there’s no evidence of a third one on the horizon yet, it’s certainly
not beyond them.

And hell, maybe in the future, developers largely won’t ship architecture-
dependent binaries. It is yet to be seen what might happen with WebAssembly
and similar technology.

~~~
geezerjay
> Not really, at least until end users are buying ARM machines

How do you address the problem caused by the combinatorial explosion of adding
a whole new ISA to the OS × distribution combination of platforms that need to
be targeted, tested for, and debugged?

~~~
jchw
It’s been largely solved, for Linux distributions. Many of them have supported
a wide array of architectures for a long time.

This is a pretty theoretical problem and reeks of the slippery-slope fallacy.
How many CPU architectures have even come close to desktop usability in the
past 10 years? Even prospect-wise, I basically only see ARM and RISC-V on the
horizon. A combinatorial explosion is unlikely, very unlikely.

I already kind of went over this, but I’ll reiterate:

WebAssembly is a good example of a technology that has the potential to kill
two birds with one stone and unify both architectures and OSes. Web browsers
are not the only environment WebAssembly runs in.

Folks writing software in .NET, Python, Java, JS already don’t have to worry
about this. They already aren’t dealing with architectures, mostly.

A lot can be done with emulation when performance is not a primary concern.
Windows 10 already has x86 emulation for ARM. On Linux, QEMU user mode is
pretty mature (and is already used for things like setting up ARM chroots on
x86 boxes).

