
I love these ARM based systems that are available today - so much fun in such a small package.

I only wish they'd come with more RAM, or at least some way to upgrade the RAM .. I'm pretty spoiled with the 32gigs of RAM I've become accustomed to in my workstations, alas.

Anyone know of a multi-core ARM-based system with decent performance and the option of 8 - 16 GB of RAM? I'd love to have such a machine to hack on ..




Until ARM64 becomes mainstream, unlikely --- ARMs are 32 bit devices and are limited to 4GB total. I don't know what the NetBSD userspace layout is but the largest process is probably going to be 2GB or 3GB.

(FWIW, I was looking at the writeup and thinking: finally, an ARM dev board with a decent amount of memory!)


NVIDIA does have a 64-bit ARM chip in products today; the Tegra X1. Unfortunately they don't seem to have a dev kit available for it. It's a really interesting design too, a completely custom core that can use hardware or software translation of ARM instructions to its own internal instruction set, like Transmeta. https://en.wikipedia.org/wiki/Project_Denver

It's speculated that NVIDIA wanted it to support both ARM and x86, but was unable to license x86 from Intel.


X1 is actually the 64-bit ARM CPU configuration (Cortex A53 + Cortex A57), not Denver. K1, the predecessor of X1, comes in two flavors: 32-bit 4xCortex A15 and 64-bit 2xDenver. TK1 is 32-bit, Nexus 9 is 64-bit (and is the only device I know of with Denver).


I stand corrected. Thanks for the info!


Not only a decent amount of memory, but a very powerful GPU for its category (192 Kepler cores supporting CUDA compute) and a lot of features and expandability (SATA, USB 3.0, 1GigE, mPCI-e, etc). The primary target market is a dev board for automotive applications like dashboard rendering, machine vision for things like backup warnings, etc. But it's an extremely powerful machine for its size/power and has many potential applications for embedded processing.

It's been out for a couple years now, and what I'm really waiting for is an updated version with the new Tegra X1 SoC! Where's my Jetson TX1 NVIDIA?


The ARM architecture supports LPAE with up to 40 bits of physical address, so the memory cap is 1024 GB. But I've never come across a device that actually uses LPAE.

Since there are also ways to get around the 4GB-per-process cap on 32-bit systems, I never understood the craze for 64-bit systems. Most of the time 64-bit makes things slower (e.g. that is why the x32 Linux ABI exists) and leads to more power consumption.
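Not from the original comment, but a quick sketch to make the x32 point concrete: the same trivial C program built with gcc's -m64, -m32 and -mx32 flags shows what happens to pointer and long widths. The -mx32 build keeps 4-byte pointers while still using the amd64 register set (assuming a kernel built with CONFIG_X86_X32).

    /* Illustration only: print the pointer and long widths for the chosen ABI. */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
               sizeof(void *), sizeof(long));
        /* gcc -m64: 8/8, gcc -m32: 4/4, gcc -mx32: 4/4 but with 64-bit registers */
        return 0;
    }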


> Most of the time 64 bit makes things slower

I am aware that this is strictly anecdotal, but when I got my first AMD64 system, I first ran Gentoo/x86 on it, and eventually I reinstalled Gentoo/amd64. I did not notice any performance difference at all. Now, amd64 (x64, x86_64, whatever you may call it) has more registers than plain x86, so maybe the differences cancel each other out in this specific case.


PAE/LPAE are a really awful hack, and you are still restricted to 4GB per process, unless you do app-specific PAE/LPAE custom allocators (also a terrible hack).

A real 64-bit processor with a real 64-bit address space is a much better solution all around.
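For what it's worth, here is a rough sketch (mine, not anything from the thread) of what such a windowing "allocator" hack tends to look like in practice: keep the data in a file larger than the process address space and mmap one window at a time. The file name and sizes are made up; on a 32-bit build you'd also need -D_FILE_OFFSET_BITS=64 so off_t can describe the 8 GB file.

    /* Sketch of windowed access to a data set bigger than a 32-bit address space. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WINDOW (256UL * 1024 * 1024)               /* map 256 MB at a time */

    int main(void)
    {
        off_t total = (off_t)8 * 1024 * 1024 * 1024;   /* 8 GB backing file */
        int fd = open("bigdata.bin", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, total) < 0) { perror("setup"); return 1; }

        for (off_t off = 0; off < total; off += WINDOW) {
            unsigned char *win = mmap(NULL, WINDOW, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, off);
            if (win == MAP_FAILED) { perror("mmap"); return 1; }
            win[0] = 0xAB;               /* touch the current window */
            munmap(win, WINDOW);         /* drop it before mapping the next one */
        }
        close(fd);
        return 0;
    }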


How is PAE a "really awful hack"? As soon as your processor has an MMU there is neither any "hack" involved nor do you pay any performance penalty.

And if your app really needs more than 4GB of memory it might be sensible to fork() anyways since unix processes are dirt cheap. It also plays well with modern multicore architectures since it forces you to parallelize your problem.
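(A minimal sketch of the fork() idea, assuming the data really does split into independent shards; the shard-processing function here is hypothetical.)

    /* Each child gets its own address space, so the total working set can exceed 4 GB. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    static void process_shard(int shard)
    {
        /* placeholder for real per-shard work; each child could use a few GB on its own */
        printf("worker %d (pid %d)\n", shard, (int)getpid());
    }

    int main(void)
    {
        for (int i = 0; i < NWORKERS; i++) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return 1; }
            if (pid == 0) { process_shard(i); _exit(0); }
        }
        while (wait(NULL) > 0)
            ;                            /* reap all workers */
        return 0;
    }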

The single most important performance issue we have today is really memory bandwidth and access time. If you carry 64-bit pointers around all the time you need roughly 30% more memory, which also eats into precious cache. So you have to either live with even more cache misses or you have to increase the cache size. And since cache is one of the major power and chip-area consumers in today's processors, increasing the cache is expensive in many ways.
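(Again just an illustration of the pointer-bloat point, not a claim about any particular workload: a node that is mostly pointers goes from 12 bytes on a typical 32-bit ABI to 24 bytes on LP64 once alignment padding is counted.)

    /* Pointer-heavy structs grow substantially on 64-bit. */
    #include <stdio.h>

    struct node {
        struct node *next;
        struct node *prev;
        int key;
    };

    int main(void)
    {
        /* typically 12 bytes with 4-byte pointers, 24 bytes with 8-byte pointers */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }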


A performance penalty and kernel memory-management complexity come from the inability of peripherals and of the kernel to directly address all memory, and the resulting need for bounce buffers. Like it was with the ISA DMA 16MB limitation and Linux in the bad old days.


You don't need PAE/LPAE to get to the situation where the kernel can't directly address all physical memory. That point is reached at ~900 MB, after which you need to compile the kernel with HIGHMEM support.


> Until ARM64 becomes mainstream, unlikely

Correct me if I am wrong, but I thought the current generations of the iPhone/iPad run on ARM64... It does not get much more mainstream than that.


The LPAE extension increases the maximum possible amount of physical memory to 1TB (40-bit addressing).


IIRC, Facebook uses (or used) Tilera machines for memcache nodes; they seem to be focused on network applications now.



