

Ask HN: 32 OR 64 BIT for the JVM? - antoaravinth

While reading this book over here: https://www.safaribooksonline.com/library/view/java-performance-the/9781449363512/ch04.html

I can see the following section, which says:

32 OR 64 BIT?
If you have a 32-bit operating system, then you must use a 32-bit version of the JVM. If you have a 64-bit operating system, then you can choose to use either the 32- or 64-bit version of Java. Don't assume that just because you have a 64-bit operating system, you must also use a 64-bit version of Java.

If the size of your heap will be less than about 3 GB, the 32-bit version of Java will be faster and have a smaller footprint. This is because the memory references within the JVM will be only 32 bits, and manipulating those memory references is less expensive than manipulating 64-bit references (even if you have a 64-bit CPU). The 32-bit references also use less memory.

Chapter 8 discusses compressed oops, which is a way that the JVM can use 32-bit addresses even within the 64-bit JVM. However, even with that optimization, the 64-bit JVM will have a larger footprint because the native code it uses will still have 64-bit addresses.

The downside to the 32-bit JVM is that the total process size must be less than 4 GB (3 GB on some versions of Windows, and 3.5 GB on some old versions of Linux). That includes the heap, permgen, and the native code and native memory the JVM uses. Programs that make extensive use of long or double variables will be slower on a 32-bit JVM because they cannot use the CPU's 64-bit registers, though that is a very exceptional case.

Programs that fit within a 32-bit address space will run anywhere between 5% and 20% faster in a 32-bit JVM than a similarly configured 64-bit JVM. The stock batching program discussed earlier in this chapter, for example, is 20% faster when run on a 32-bit JVM on my desktop.

Has anyone tried this? Running the JVM in 32-bit mode on a 64-bit machine, and had a success story?
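For anyone wanting to check which kind of JVM they are actually running: on HotSpot the `sun.arch.data.model` system property reports the data model (HotSpot-specific; other JVMs may not set it, so `os.arch` is a fallback). A minimal sketch:

```java
// Minimal check of the running JVM's data model.
// sun.arch.data.model is HotSpot-specific; os.arch is a rough fallback.
public class JvmBits {
    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model", "unknown");
        String arch = System.getProperty("os.arch");
        System.out.println("JVM data model: " + model + "-bit (os.arch: " + arch + ")");
    }
}
```

`java -version` also reports whether the VM is 32- or 64-bit, and on platforms that ship both VMs the `-d32`/`-d64` launcher flags select one. Compressed oops (`-XX:+UseCompressedOops`, the Chapter 8 optimization quoted above) is enabled by default on modern 64-bit HotSpot for heaps below roughly 32 GB.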
======
MichaelCrawford
I personally am skeptical of the value of 64-bit computing.

Recently I heard - I'm not dead certain this is really the case - that Apple
will soon require iOS Apps to be built for 64-bit ARM.

There are some good reasons for 64-bit, such as memory-mapping an entire
database file. 64-bit x86 has a few more general-purpose registers than does
32-bit x86, but I don't regard that as evidence of the superiority of 64-bit,
rather I regard that as a failing of the x86 architecture. There are other
instruction set architectures, such as PowerPC, where 32-bit has lots of
registers. ARM has more than x86, fewer than PowerPC.

32-bit pointers require less storage than 64-bit ones. I know that's obvious
but in the case of Java, there is a great deal of allocation so you have to
consider the memory space used to store all those pointers.

It's not so much that all those pointers would use up all your gigabytes, but
that they'd use up more of your L1 cache.
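A back-of-envelope sketch of the footprint point (my own illustration, not from the book or the thread): consider just the reference slots for ten million objects held in an array, at 4 bytes per 32-bit reference versus 8 bytes per uncompressed 64-bit reference.

```java
// Back-of-envelope: bytes spent on the reference slots alone for an
// array of ten million objects, 32-bit vs. uncompressed 64-bit refs.
public class RefFootprint {
    public static void main(String[] args) {
        long n = 10_000_000L;
        long narrow = n * 4;                         // 4 bytes per 32-bit reference
        long wide = n * 8;                           // 8 bytes per 64-bit reference
        long extraMiB = (wide - narrow) / (1024 * 1024);
        System.out.println(extraMiB + " MiB extra just for the pointers");
        // → 38 MiB extra just for the pointers
    }
}
```

38 MiB is small next to a multi-gigabyte heap, which is the point being made: the real cost is that fatter pointers mean fewer of them fit in each cache line.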

If you don't need such a large range of integer values you don't need so many
bits in your integers. If you have a for loop that counts from 0 to 9, you
don't really need 32 bits let alone 64.

~~~
antoaravinth
> but I don't regard that as evidence of the superiority of 64-bit, rather I
> regard that as a failing of the x86 architecture.

Can you tell me why that is the case?

~~~
MichaelCrawford
Among the advantages of x86_64 is that it has more general-purpose registers
than 32-bit x86, which was originally designed to do everything on the stack.

However, one can make a 32-bit CPU that has lots of registers, it just won't
be an x86. Such processors typically have calling conventions in which the
first few arguments to a function are passed in registers. Because there are
lots of registers, sometimes there are enough that you don't need to store
any local variables on the stack.

It's not just that you can operate on registers faster than memory, but not
having to access memory at all means you don't thrash the cache as much.
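A rough Java-flavoured illustration of the same memory-traffic point (not a rigorous benchmark; the JIT and GC make naive timings unreliable): summing a primitive `int[]` walks one contiguous block, while summing an `Integer[]` adds a pointer dereference per element, so the boxed version puts more pressure on the cache.

```java
// Same answer from both loops, but very different memory traffic:
// int[] is one contiguous block; Integer[] chases a pointer per element.
public class CachePressure {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] primitives = new int[n];
        Integer[] boxed = new Integer[n];
        for (int i = 0; i < n; i++) { primitives[i] = i; boxed[i] = i; }

        long sumPrim = 0, sumBoxed = 0;
        for (int v : primitives) sumPrim += v;   // sequential reads: cache-friendly
        for (Integer v : boxed)  sumBoxed += v;  // extra dereference per element
        System.out.println(sumPrim == sumBoxed); // → true
    }
}
```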

It's not just modern processors: the ARM architecture dates back to Acorn's
personal computers, introduced only a few years after the original IBM PC.

The 68000 CPU has eight general-purpose data registers and eight address
registers (one of which is the stack pointer). It was made by Motorola at
roughly the same time as Intel introduced the 8086. While the 68k only had 16
external data lines and 24 external address lines, its registers were 32 bits
wide and its instruction set architecture was 32-bit.

The reason everyone uses x86 or x86_64 these days has a lot to do with the
fact that the 16-bit 8088 was quite a lot cheaper than the 32-bit 68k.

There were factors other than the instruction set and the size of the
registers: little-endian processors are easier to design, use less real
estate, and are cheaper to manufacture; I expect they have a better yield at
the fab, and perhaps they use less power.

However, all of those factors are quite insignificant today. The extra
circuitry required for a big-endian CPU doesn't cost much compared to
everything else today's CPUs can do.

