

NetBSD on the NVIDIA Jetson TK1 - fcambus
http://blog.netbsd.org/tnf/entry/netbsd_on_the_nvidia_jetson

======
fit2rule
I love these ARM-based systems that are available today - so much fun in such
a small package.

I only wish they'd come with more RAM, or at least some way to upgrade it.
I'm pretty spoiled by the 32 gigs of RAM I've become accustomed to in my
workstations, alas.

Anyone know of a multi-core ARM-based system with decent performance and the
possibility of 8-16 gigs of RAM? I'd love to have such a machine to hack on.

~~~
david-given
Until ARM64 becomes mainstream, unlikely --- ARMs are 32-bit devices and are
limited to 4GB total. I don't know what the NetBSD userspace layout is, but
the largest process is probably going to be 2GB or 3GB.

(FWIW, I was looking at the writeup and thinking: finally, an ARM dev board
with a decent amount of memory!)

~~~
sprash
The ARM architecture supports LPAE, which extends physical addressing to 40
bits, so the memory cap is 1024GB. But I've never come across a device
utilizing LPAE.

Since there are also ways to get around the 4GB-per-process cap on 32-bit
systems, I never understood the craze for 64-bit systems. Most of the time
64-bit makes things slower (that's why the x32 Linux ABI exists, for example)
and leads to more power consumption.
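
For what it's worth, the pointer-width difference is easy to see; a minimal C
illustration using gcc's -m64 and -mx32 flags (assuming an x32-capable
toolchain, kernel and libc):

    #include <stdio.h>

    /* Compare:
     *   gcc -m64  demo.c   -> 8-byte pointers and longs (LP64)
     *   gcc -mx32 demo.c   -> 4-byte pointers, full 64-bit registers
     */
    int main(void) {
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }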

~~~
throwaway2048
PAE/LPAE are a really awful hack, and you are still restricted to 4GB per
process, unless you use app-specific PAE/LPAE custom allocators (also a
terrible hack).

A real 64-bit processor with a real 64-bit address space is a much better
solution all around.

~~~
sprash
How is PAE a "really awful hack"? As soon as your processor has an MMU, there
is no "hack" involved, nor do you pay any performance penalty.

And if your app really needs more than 4GB of memory, it might be sensible to
fork() anyway, since Unix processes are dirt cheap. It also plays well with
modern multicore architectures, since it forces you to parallelize your
problem.
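
A minimal sketch of that pattern, assuming plain POSIX fork()/wait() (the
per-worker processing is left as a stub):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    /* Each child gets its own address space, so together the workers
     * can use far more memory than a single 32-bit process could. */
    int main(void) {
        for (int i = 0; i < NWORKERS; i++) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); exit(1); }
            if (pid == 0) {
                /* child: allocate and crunch slice i here */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)  /* reap all workers */
            ;
        return 0;
    }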

The single most important performance issue we have today is really memory
bandwidth and access time. If you carry 64-bit pointers around all the time,
you need roughly 30% more memory, which wastes precious cache. So you have to
either live with even more cache misses or increase the cache size. And since
cache is one of the major power and chip-area consumers in today's
processors, increasing it is expensive in many ways.
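
To make the "fatter pointers" point concrete, a back-of-envelope example;
sizes assume typical ILP32 vs LP64 ABIs with natural alignment:

    #include <stdio.h>

    /* A pointer-heavy node: two pointers plus a small payload.
     * ILP32: 4 + 4 + 4 = 12 bytes.
     * LP64:  8 + 8 + 4 = 20, padded to 24 bytes, so roughly half
     * as many nodes fit in the same cache line budget. */
    struct node {
        struct node *next;
        struct node *prev;
        int          key;
    };

    int main(void) {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }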

~~~
zurn
A performance penalty and extra kernel memory-management complexity come from
peripherals' and the kernel's inability to directly address all memory, and
the resulting need for bounce buffers. Like it was with the ISA DMA 16MB
limitation and Linux in the bad old days.
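
For anyone who hasn't hit this: a bounce buffer is just an extra copy through
memory the device can actually reach. A conceptual sketch, not real kernel
code; dma_start() is a made-up stand-in for programming the device:

    #include <string.h>

    #define DMA_LIMIT (16UL << 20)  /* e.g. ISA DMA's 16MB ceiling */

    /* Hypothetical stand-in for kicking off a device transfer. */
    static void dma_start(void *buf, size_t len) { (void)buf; (void)len; }

    /* If the data sits above what the device can address, copy it into
     * a pre-allocated low buffer first; that copy is the "bounce", and
     * it costs memory bandwidth on every transfer. */
    static void dma_write(void *data, size_t len, unsigned long data_phys,
                          void *bounce /* allocated below DMA_LIMIT */) {
        if (data_phys + len > DMA_LIMIT) {
            memcpy(bounce, data, len);
            dma_start(bounce, len);
        } else {
            dma_start(data, len);
        }
    }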

~~~
dezgeg
You don't need PAE/LPAE to get into the situation where the kernel can't
directly address all physical memory. On 32-bit Linux that point is reached
at ~900MB (the lowmem limit under the default 3GB/1GB user/kernel split),
after which you need to compile the kernel with HIGHMEM support.
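
Concretely, pages above the lowmem boundary have no permanent kernel mapping,
so the kernel maps them on demand; a sketch using the classic kmap() API from
32-bit Linux:

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Highmem pages lack a permanent kernel virtual address, so a
     * temporary mapping is created just long enough to touch them. */
    static void zero_highmem_page(struct page *page) {
        void *vaddr = kmap(page);    /* map the page somewhere */
        memset(vaddr, 0, PAGE_SIZE);
        kunmap(page);                /* and unmap it again */
    }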

