> The garbage collector is also more precise, which costs a small amount of CPU time but can reduce the size of the heap significantly, especially on 32-bit architectures.
Previously, the 32-bit version was almost unusable in some cases because the garbage collector kept allocations alive that it falsely believed to be in use even when they were not. I'd like to know if this means that the problem is finally solved.
> Previous Go implementations made int and uint 32 bits on all systems. Both the gc and gccgo implementations now make int and uint 64 bits on 64-bit platforms such as AMD64/x86-64.
This is a C-like can of worms, forcing you to test on more platforms just to be sure that you have portable code. I believe there's no reason to do such things with primitive data types in the new languages of the 21st century (compare with http://docs.oracle.com/javase/tutorial/java/nutsandbolts/dat...). I'd like to see any discussion about this, in my opinion, quite strange decision.
The size of these types has always been specified to be either 32 or 64 bits. The only change here is that 64-bit platforms will actually use that flexibility.
Probably the most important consequence is that you can now have very large slices. The length of a slice is defined to fit in an int, so slices were restricted to about 2 billion items (the signed 32-bit maximum); now they can be much larger.
I don't think there's any new gotcha here with having to test on multiple platforms. This is a change intended to give much more flexibility, but that flexibility naturally has to come with possible platform variation. If you are highly concerned with portability, avoid the int/uint types, and use the sized types (int32, int64, etc.) as much as you can. But in real code, as it turns out, it is rarely a big deal.
> This is a C-like can of worms, forcing you to test on more platforms just to be sure that you have portable code.
Hardly. If you need fixed integer widths, use the fixed integer width types. If you need types that are 'optimal' for the target architecture, use the non-fixed-width types.
The only time you run into trouble is when you make silly assumptions about the width of non-fixed width types, which is a bad idea in almost any language that doesn't vend automatic "bignum" scaling integers. It's not as if you can ignore the width of the types in Java.
Apple does the same thing themselves with ObjC: NSInteger/NSUInteger are 32 bits on 32-bit systems and 64 bits on 64-bit systems. All in-memory lengths, counts, sizes, etc. (but not offsets) are defined as NS(U)Integer values.
I've done the same in my own code that's portable from 8-bit to 64-bit CPUs. Works out just fine.
Java is itself an odd duck; using fixed-width signed types without also offering unsigned variants has been a plague on every single person who has ever had to parse binary data in Java, ever.
Making int 32-bit or 64-bit depending on the bitness of the current arch is one of the few Go language decisions I disagree with, primarily because it causes confusion with CGo: int in C/C++, while technically compiler-specific, is in practice almost always 32 bits even on 64-bit platforms.
People writing CGo code can avoid this easily enough by understanding the issue and using C.int where they want C-style ints, but I've run into more than a handful of examples of actual code in the wild using CGo that has issues because it expected 'int' in Go to be the same size as 'int' in C, which it will be if the arch is 32-bit, but won't be if the arch is 64-bit.
IMO the right call would have been to just go with what C does and treat both int and int32 as 32-bit ints (int64 exists if you need it); having the size of 'int' differ between C/C++ and Go on the same arch violates the principle of least surprise.
If most Go projects had a non-trivial amount of CGo in them, then you might have a valid argument. But I would say that most Go projects probably don't have any CGo in them at all, much less a non-trivial amount. Hobbling the language for the sake of slightly better C interoperability just doesn't make sense.
The new GC is still not a precise GC; it's still conservative and non-tracing, just "more precise" (probably meaning they refined their heuristics), so you should probably not call it a precise GC.