Do Not Use Go for 32bit Development (abtinforouzandeh.com)
117 points by abtinf on April 8, 2012 | hide | past | favorite | 130 comments


The strategy the allocator uses is to reserve a large contiguous block of address space (not memory) on startup and then allocate actual memory from that block. On 32-bit this is around 512MB; on 64-bit it's something like 16TB. This makes allocation and garbage collection far simpler, and it really shouldn't be a big ask for an operating system to give a process a contiguous block of address space and not interfere with it. But just like the pinning issue, the smaller address space on 32-bit means that things that expect to be at a certain address have a higher likelihood of getting in the way of the contiguous block.
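A minimal sketch of that reserve-then-commit strategy, assuming a Linux system and raw mmap flags (the constants and layout here are illustrative, not the actual Go runtime code):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const reserve = 512 << 20 // 512MB of address space, not memory

	// Reserve contiguous address space with no access rights and no
	// backing memory committed -- analogous to what the allocator does
	// at startup. PROT_NONE + MAP_NORESERVE means the OS hands out
	// addresses only; no physical frames are needed yet.
	arena, err := syscall.Mmap(-1, 0, reserve,
		syscall.PROT_NONE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE|syscall.MAP_NORESERVE)
	if err != nil {
		panic(err)
	}

	// Later, commit one page out of the reservation by changing its
	// protection; only now does the OS have to supply actual memory.
	page := arena[:4096]
	if err := syscall.Mprotect(page, syscall.PROT_READ|syscall.PROT_WRITE); err != nil {
		panic(err)
	}
	page[0] = 42
	fmt.Println("committed one page, first byte =", page[0])
}
```

The key point is that the 512MB figure is an address-space cost, not a RAM cost, which is why it only hurts on 32-bit where addresses are scarce.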

The fix for both this issue and the pinning issue requires big changes to the garbage collector. Since the language and standard library have stabilised with Go 1, changes to the runtime are going to be the focus of development, so this issue will probably get a fix.


This is highly dependent on the video card and other PCI devices you are using, as they will often reserve huge chunks of VA space to map to device memory (and furthermore, require it to be in low addresses i.e. user space), making a contiguous reservation difficult.

If you want to see what's happening, SysInternals VMMap (http://technet.microsoft.com/en-us/sysinternals/dd535533?ppu...) will show you the exact VA space layout.

Note that Linux has exactly the same problems with VA space, except they default to 3/1, instead of NT's default (but configurable) of 2/2.

It seems that a quick fix to this issue would be to lower that number from 512MB to something like 128MB, though I know nothing about Go's internals.


Getting 512MB of contiguous virtual address space does not require 512MB of contiguous free physical memory; contiguous virtual addresses can map to discontiguous physical memory.


That's what I thought. The address your user-space program uses and the actual address in memory are two different things, aren't they?

Isn't this what enables the operating system to scramble the allocations it gives you to make it harder to implement a buffer-overflow attack?


> That's what I thought. The address your user-space program uses and the actual address in memory are two different things, aren't they?

Right. Userspace uses virtual addresses, which get mapped through page tables to become physical addresses. Some kernel addresses get identity-mapped (meaning that the virtual address matches the physical address), while some kernel addresses go through translation as well. (Specifically, on Linux, kmalloc allocates identity-mapped addresses, while vmalloc allocates virtual addresses; vmalloc allows you to have a large virtually contiguous buffer without requiring a large physically contiguous region of free memory.)

> Isn't this what enables the operating system to scramble the allocations it gives you to make it harder to implement a buffer-overflow attack?

You can do Address Space Layout Randomization (ASLR) for either virtual or physical address spaces; you want to do it for whatever type of address an attacker could otherwise make use of. Userspace processes can use ASLR for their virtual address space, so that attackers can't make use of fixed addresses. The Linux kernel doesn't normally map virtual pages to any particular well-known location in physical address space, either. The kernel can also use ASLR for some (though not all) of its own kernel-space addresses. Beyond that, recent versions of Linux also try to avoid exposing kernel-space addresses to non-root users.

Also note that ASLR doesn't generally introduce enough randomness in a 32-bit address space.


I/O devices are never mapped in user space in Windows, and I think that's true for other operating systems as well.


Ah, you're right, they have to be in low physical frames, I'm getting confused.


> On 32bit Windows machines, processes only have access to 2GB user space memory.

You can change this in boot options to 3 or 4 (optimized), or enable PAE. See:

http://msdn.microsoft.com/en-us/library/aa366778.aspx

i'd try that before rewriting your code (also kill services you don't need, which is usually a lot of them. run -> msconfig).

Edit: you also don't mention probably the most important piece of info: which version of windows you are running.

Edit 2: make sure go is being treated as a background process

Edit 3: it seems a lot of these tips should be in the Go docs for win32 - i'll add it when I have a moment


"Just set these boot options" is fine if you're just deploying this code to a server you control.

I get the sense from various aspects of this guy's story that he's shipping executables to end users, which makes this "solution" a non-starter.

You can't expect the user to go in and mess with these sorts of options just to run your program, and I wouldn't feel comfortable changing something like this automatically during install since it is such a fundamental OS setting (not to mention testing the installer against all versions of Windows would then become a nightmare you'd have to redo every release).


Enabling /3GB and LARGEADDRESSAWARE might work, but I have found that, because of certain video drivers, XP would not work correctly with D3D or OpenGL apps (it looks like some assumptions were made there that the space would be below 2GB).

For one, we could not get our game and some of our tools to work correctly on XP with /3GB (that was a few years ago).

Nowadays, since we moved to Windows Vista/7 64-bit, we no longer have these problems (with LARGEADDRESSAWARE), and our 32-bit binaries sometimes go up to 3.5GB of usage.


Importantly, you will need to recompile with /LARGEADDRESSAWARE.


Yeah, that kernel flag is useless without stamping the binaries, and the Go linker doesn't know about this.


It's just a flag in the PE headers. It can be modified after the fact with a tool; the Go linker doesn't need to know.


But the reason why there is an explicit flag in the first place is that there's a lot of code out there that doesn't work with pointers >= 0x80000000, or when pointers could differ by more than that. Things like pointer "tagging", and signed integer overflow bugs, are examples of the sort of code that can be affected. Without knowing details about the Go code generation and runtime/garbage collection, it would be a bit risky to just set the flag and hope for the best.


Oh, I know; I've written bounds-checking logic for RTLs that use casts to integers and tests for negativity to check for overrun, implicitly assuming 31-bit address space. My only point is that a limitation of the Go linker does not preclude setting the flag.


No such problems in Go. I just checked and the flag is set by default in all Windows binaries.


> the Go linker doesn't know about this.

I was wrong, it does, it's on by default :).


One thing to note is that his problem is lack of contiguous VA space, not physical frames (i.e. "memory"). Killing services will do nada to help that; the only thing that will help is mapping fewer DLLs.


DLLs are loaded into a process address space only because they are referenced in the PE header or because the process calls LoadLibrary(); I'm afraid killing processes won't cause fewer DLLs to be mapped either.


It has nothing to do with PAE.


It seems to me it does: with PAE, Windows can use more memory. This extra memory could be used to allocate the 512MB chunk.

Can you clarify?


It's a VM (mapping) issue not a RAM (free memory) issue.


Also, with PAE, each process gets the same 32-bit address space as before.


At runtime init, Go programs attempt to reserve 512mb of RAM. If they can’t do this, they crash. Go needs the space because it is garbage collected - presumably, all GC languages, or any program that takes a similar approach to memory management, will be susceptible to the same problem.

While it is true that most garbage collected environments will reserve some memory at startup, a hard minimum of 512 MB is not the norm. Do note that embedded Java has run on cell phones with minimal amounts of RAM for many years.


Fwiw, Go doesn't need a hard minimum of 512 MB physical memory, but it does seem to assume a contiguous 512 MB of virtual address space will be available. Since every process has 2 GB of virtual address space and grabbing the 512-MB chunk is the first thing Go does, this should almost always be safe, but on Windows it appears that can fail to be the case because in some configurations, DLLs are loaded during initialization and fragment the virtual address space. I can run Go fine on a 32-bit (Linux) VPS with 256 MB of RAM, though.


They only get scattered throughout the address space when using cgo, with the DLLs loaded by the Windows application loader rather than by the Go runtime. It appears to be very specific to how Windows loads DLLs at runtime.


Yes, though in this particular case the author told me cgo wasn't involved, which makes it very peculiar. Usually when I see system DLLs (not 3rd party) rebasing, it's either malware in other processes that lazily loaded ntdll.dll and had to be rebased, or some legit program or malware that does user mode hooking by injecting DLLs.

It is NOT a Go problem, but Go could implement a workaround by making the address space reservation in the PE header.


Isn't Go normally statically linked if cgo isn't being used? Would that mean that the DLLs in the virtual address space definitely shouldn't be there?


Statically linked on Linux; on Windows the Go runtime and packages are statically linked into the binary, but the binary itself is dynamically linked to kernel32.dll because issuing syscalls directly is not supported.

Why kernel32.dll was rebased when Go doesn't force the rebase is a very good question indeed. I suspect a 3rd party user mode hook; it doesn't even have to be malware, as there are legit "security" and monitoring applications that use this technique.


ASLR I would guess.


ASLR support needs to be stamped in the PE header. You can enable it globally, but the randomization space is too small to matter for this.

It's definitely something to investigate though.


You are correct - it was an error in my post. Go wants 512mb of virtual space.


Looking through the linked bug and the testing they did, the issue doesn't appear to be using Go for 32 bit systems. It's using 32 bit compiled Go programs that use cgo on 64 bit Windows systems.

Understanding the why of the issue is extremely important in this case. Based on the discussion in the bug report, I would expect that using the 32 bit compiler for 32 bit architectures and the 64 bit compiler for 64 bit architectures would not result in the issue occurring. Before abandoning Go, I would very much want to verify that this is indeed the case. Given Go's young age, I would use the architecture-specific compilers for the specific targets, which is different from normal expectations but not at all unreasonable.


Unfortunately, this is not the case. 32-bit compiled code runs great on 64-bit machines, where it gets the full 4GB of space and memory fragmentation at init is very unlikely to be a problem.

Edit: similarly, 64-bit is also immune because of how the main platforms allocate virtual address space to clients.


Hrm, that actually makes the issue tricky to solve. I don't have any Windows machines to work with, so I can't test, and I'm not very familiar with using Go on Windows in general, but is it possible to specify delayed load of the DLLs? That should allow the main Go system to get fully initialized (and allocate the chunk of memory it needs) before loading the DLLs needed for the cgo portions.


Nobody is using cgo, it's the system DLLs which get loaded in the middle of the address space. This is very very unusual, Windows tries hard to avoid such things.

It could be solved by making the initial virtual address space reservation in the PE header, not requesting it in the initialization path, that would always work, even with cgo.


I don't believe this is true, since the issue is that the GC is conservative: on a 32-bit system, integers look a lot like pointers, so the GC won't free them.


Did you read the post?


Nope, my bad. This is the second or third post on Go not being any good in 32-bit environments in as many days. The other post I read is what my post was in reference to.


I'm a big fan of Go, but it would be nice to see a caveat about 32 bit systems on the Go website or blog. Or at least some official warning or notice that isn't just a mailing list discussion.


If 512MB is needed as contiguous space, then one hacky solution is to define a BSS section of 512MB, or in C terms, a global array:

char mem[512 * 1024 * 1024];

The executable loader would "allocate" this memory before loading any DLLs, and DLLs that were supposed to land in that range would be rebased elsewhere.

Oh, and somehow Go would need to use this memory rather than allocate it.


This would, however, allocate memory rather than just address space.

(Yes, it may be lazily committed, but it's still not as cheap as address space.)


Once this is done, you can VirtualFree it (starting/ending at page boundaries) and give it back to the OS. Yes, for some short amount of time, while the executable loads and control reaches your code, the memory is reserved.


I wouldn't call this "hacky" - it's exactly the right way to do it. For one thing, it makes the address of your arena constant. Unfortunately my experience is that a lot of C environments handle this very poorly.


I called this hacky, as it would only work for executables; if this was in a dll/dylib/so shared library, then it might even make things worse, as even stricter contiguous space would be needed.

For example, if the language runtime was loaded as a plugin or something. So one has to be aware that it is not always a good solution for all purposes, hence hacky :)


> I’m sitting here rewriting a ton of Go code in C.

I don't believe a single word. He claims to have written "a ton of Go code" without discovering the "real show stopper"? I'm not a Go fan (quite the contrary, Go isn't the step forward from C++ and Java I expected from Google). But this anti-Go campaign a few days after 1.0 smells.


I'm not the OP, but I've written quite a bit of Go code and I didn't realize this was an issue until it started blowing up on golang-nuts/HN/proggit the past couple of days.

Until you actually have to deploy the code on end-user 32-bit systems, it is very easy to be blissfully unaware of this issue. And since Go1 "supports" Windows-x86, why would you assume anything other than that your code will "Just Work" on such a target?

I don't think it is unreasonable to expect the Go team to give this issue more light. In my case it doesn't really matter, but if I were using Go to write end-user apps where 32-bit Windows installs were on the table, I could understand this guy's frustration pretty easily.


> Go isn't the step forward from C++ and Java I expected from Google

Go is a lovely language, imo. Quirky in parts, but definitely a charmer, very fun to code in, and once you get the Go zen, simple and elegant code effortlessly follows. I haven't had this much fun since Java came out in mid 90s.

Of course, I do agree with the other comments that would like to see Golang.org be more clear as to the platform coverage and the current state of Scheduler and MM -- but they are certainly upfront in the source tree (c.f. proc.c header comment), forums, and the issue tracker.

So hacker fine print is there and available, and frankly (IMHO) that approach to disclosure may turn out to be a good filter for the growth of Go community.

The issue at hand is nothing that time and resources will not solve -- they just ran out of time (c.f. RSC's comment before Abtin's in the issue.)


>> Go isn't the step forward from C++ and Java I expected from Google

>

>Go is a lovely language, imo. Quirky in parts, but definitely a charmer, very fun to code in, and once you get the Go zen, simple and elegant code effortlessly follows. I haven't had this much fun since Java came out in mid 90s.

Go is a step backwards in language abstractions, throwing away many of the abstractions that have become mainstream in the last decades.


Sometimes to move forward you have to take a step back.

Just because some abstractions have become mainstream doesn't mean they are good.

I think precisely what many people love about Go is that it throws away lots of unnecessary abstractions and complexity that people have come to expect from languages these days.

Go's approach is a breath of fresh air.


So hacker fine print is there and available, and frankly (IMHO) that approach to disclosure may turn out to be a good filter for the growth of Go community.

Creating bad PR by releasing "stable" code containing undocumented landmines sure is likely to affect the growth of the community.


Of course you are correct, but that is not exactly what Golang.org has done. I believe errors have been made on both ends of this story.

Please consider this:

A version 1.0 of an open source project X has been released by a team that actively engages its user community. The release manager indicated before release (in the issue tracker) that a specific problem with platform P was not making the cut. It is true he didn't tweet it /g but it was a public announcement.

Developer D has decided to use X 1.0 to create a software product for paying customer C on said platform P on a tight production go-live deadline. Apparently there were issues in his deployment testing regimen, as his initial tests were apparently satisfactory, and he typed a ton of code, and now he is busy retyping the same in C as something went boom with X 1.0 on platform P.

IMO, it is not thoughtful to spec 1.0 software for your paying clients unless you know what you are doing and have land mine detectors and have reviewed field maps. (An example of folks fitting the requirement are Heroku -- they use Go in production.)

IMO, stories like this will (a) stem the rush of those who compulsively (but superficially) seek 'the new' and then blog about their own misunderstandings; and likely keep thoughtful hackers who like to dig deep (and will avail themselves of the public resources and available channels) and make their own determination. And these hackers will build great software on Go and that will attract more developers.

There is no real reason for Go to seek a footprint via "PR". PR for languages is for those who need users for their language's survival. Sun (RIP) didn't really need Java, did it? They had to push Java to gain users. Go has Google for its deployment footprint (as a client of sorts), and it is already an option in the cloud.

Go has a solid team of engineers [1] behind it. It is definitely not a flavor of the month language and would not benefit from a flavor of the month community.

All opinions, of course!

[1]: for a sample of both: http://www.youtube.com/watch?v=HxaD_trXwRE


Believe what you want. The problem is intermittent and didn't appear on my test machines/VMs.


What's "a ton"?


This same Go implementation design earlier ran into some issues on Linux, too - in this case, with distributions setting conservative RLIMIT_AS limits.

The technical details are interesting (https://groups.google.com/forum/#!msg/golang-dev/EpUlHQXWykg...) and Linus' response (http://lkml.indiana.edu/hypermail/linux/kernel/1102.1/00233....) is also instructive.


I had a similar issue with the default configuration of SBCL on a 64-bit vhost; the 64-bit version asks for 8GB of VAS and the vhost had a fairly small limit on VSIZE. Fortunately the heap allocation is configurable in SBCL; perhaps doing something like that in Go would be the right thing?


In my opinion, the problem is that 32-bit Windows is intentionally broken by Microsoft.

Unlike other 32-bit x86 operating systems, the consumer version of Windows refuses to address more than the first 4GB of physical memory, and on most machines a quarter to half of that space is consumed by PCI address space.

Yes there are hacks around this. But the right way to deal with this is to just fold and accept what Microsoft is imposing: use a 64-bit version of Windows whenever possible and enjoy your memory.

Once on a 64-bit OS, you can either write a 64-bit program, or you can write a 32-bit program and use a compiler option to get the full 32-bit address space. (Microsoft's compilers assume by default that you are an idiot who uses signed values for addresses and your program will break if they give you more than 2GB of virtual address space)

The other lesson is don't use experimental languages you are uncertain of for actual product.


This has nothing to do with physical memory limits. Virtual address space != physical memory.


The same also applies to other 32-bit OSes; this is not Windows-specific.


Umm no, other OSes have different defaults for mapping and have different heuristics for how and where they map devices and libraries. Go on 32bit Linux rarely has the problems that the Windows versions do.


The PCI mappings are the same, either in Linux or Windows. I do agree that this particular report is a Windows problem, 512MB contiguous address space should be available at program initialization in every case.

I do agree that Go on 32 bit Linux rarely has any problems, but this is true for Windows as well. What happened in the last few days is blown out of proportion.


The _size_ of the mappings is probably the same, but there are very few PCI devices that _require_ being mapped to a particular address, as opposed to just a certain area of memory, i.e. under 4G or 1M.


Nothing is mapped under 1M in protected mode with paging enabled; that's user space. They still have to be mapped under 4G, because 4G is all you got on 32-bit.


No. A 32-bit OS is capable, on the x86, of addressing much more than 4GB of physical memory. The 32-bit server versions of Windows, for example.


The address space of each process is still 4G. The reported problem and the one before it are related only to virtual memory, they have nothing to do with physical memory. PAE does not influence the address space of a process at all.


Somewhere in the source of Go will be the number 536870912; change this number to 134217728 and the problem should largely be solved.

Note: I'm not a Go dev, nor have I even programmed in it, but logically it should be a quick fix that you can try to save you rewriting your codebase.

Given the importance of the number it's probably a #define or constant so should be pretty easy to find in one of the header files.


It's 512<<20, and a proper way to fix it, on Windows, at least, would be to reserve the address space in the PE header, not ask the operating system in the process initialization path.
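For reference, the arithmetic checks out: `512<<20` is exactly the 536870912 quoted upthread, and the suggested 128MB would be `128<<20`:

```go
package main

import "fmt"

func main() {
	// Shifting left by 20 multiplies by 2^20 (one MB), so N<<20 is N megabytes in bytes.
	fmt.Println(512 << 20) // 536870912 (512MB)
	fmt.Println(128 << 20) // 134217728 (128MB)
}
```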


This is my understanding of the problem: when a Go program is initialized on 32-bit Windows or Linux systems, it tries to reserve 512MB of contiguous virtual address space. When the virtual address space is too fragmented, the initialization will fail and the Go program won't start. Is this correct?

We're considering using Go for an end-user desktop application for Mac, Windows, and Linux. Of course, we don't have control over whether our users have a 32-bit or 64-bit system. And, of course, our app has to work 100% of the time. What can we do to make Go work 100% of the time on these 32-bit systems?


Don't know what it means for the virtual address space to be fragmented. Each process begins with a pristine virtual address space.


Mostly, but not really: apart from the binary image itself, its dependent shared libraries are also loaded, and the various images loaded might have virtual address space reservations marked in their headers. In normal scenarios there should be enough contiguous address space; Go programs only link to ntdll.dll and winmm.dll. While it's true that on recent Windows versions these two shared objects pull in around 30 other shared objects, all the DLLs shipped with Windows have a base address chosen so that they only take a very small chunk of address space and most of the address space stays contiguous. 512MB of contiguous reservation shouldn't be a problem to get.

The fact that this is not happening means that 3rd party DLLs are loaded; this happens if you use a program, knowingly or not, that installs user mode hooks.


x86 32-bit is actually capable of addressing any amount of memory, using the right compilation model. Flat 32-bit has a 4GB limit, sure. But segmented, it can access any amount of virtual memory, with the caveat of only 4GB per allocation.

Probably no OS supports this any more. But the instructions set and memory management units support it, or used to in early Pentium days.


Take a look at PAE; nearly every modern kernel supports it.


But do any languages? When was the last time you compiled to anything but flat?


Segment:Offset addresses are converted to 32 bit linear addresses; the segment base address in the segment descriptor is a 32 bit value.


> Segment:Offset addresses are converted to 32 bit linear addresses

That's true, but segment descriptors have a not-present bit, which allows you to implement segment-level swapping.


On a 64-bit architecture it's 64-bit. 32-bit programs run there.

Also, as noted elsewhere, on 32-bit architectures there's segment swapping too. In fact I wrote an OS that segment swapped before the 386 came out, when the 286 was king. Probably the only one out there; a pretty crazy notion and the 386 came out a year later with paging.


Even in 64-bit mode the base address for code and data segments remains 32 bits - it is only expanded to 64 bits for call gate descriptors, IDT gate descriptors, LDT descriptors and TSS descriptors. The base address for the FS and GS selectors can be set to a 64 bit value, but the upper 32 bits are ignored in compatibility mode (ie when a 32 bit task is running).

The point about being able to implement segment-swapping is well-taken however.


I think Go, Mono, Ruby, Python et al. should team up to create a reusable garbage collector library that can compete with the OpenJDK one. For me, it's the single biggest piece falling behind in non-JVM languages. The performance and "tunability" are unmatched right now. Bonus points for a real-time garbage collector, a la IBM Metronome.


We need to decide if Go is a suitable language to build our app with. Our app will be deployed on end-user 32-bit systems, and needs to work 100% of the time.

Is there someone here who can say either: (1) Go will fail sometimes on 32-bit systems, don't use it until this problem has been fixed, or (2) there are things you can do to always avoid the 32-bit problems?


(and if there's no one here with an informed answer, where should i ask this question? Thanks.)


How many dlls are loaded into the process? Have you looked at all the loaded dlls at the moment allocation fails to look at the potential for remapping?

Edit: Okay I see a memory map on the bug report. So why not poke at a copy of kernelbase.dll?


This only goes to show that Go is still not production ready.


YouTube is using Golang: http://code.google.com/p/vitess/. So 10% of Internet traffic now depends on it. I wouldn't call that not production ready.

From the project goal page:

"Go is miles ahead of C++ and Java in terms of expressibility and close in terms of performance. It is also relatively simple and has a straightforward interaction with Linux system calls.

The main drawback is also its strength - the garbage collector. vtocc has made spot optimizations to minimize most of the adverse effects of Go’s stop-the-world gc. At this point, we are trading some amount of performance for greater creativity and efficiency at lower layers. unless you’re trying to max out on qps for your servers, you should see acceptable performance from vtocc. Also, go’s garbage collector is being improved. So, this should only get better over time. Go’s existing mark-and-sweep garbage collector is sub-optimal for systems that use large amounts of static memory (like caches). In the case of vtocc, this would be the row cache. To alleviate this, we intend to use memcache for the time being. If the gc ends up addressing this, it should be fairly trivial to switch to an in-memory row cache. Note that the row cache functionality is not fully ready yet."


"Go is miles ahead of C++ and Java in terms of expressibility and close in terms of performance. It is also relatively simple and has a straightforward interaction with Linux system calls."

How can this be when Go lacks:

- enumerations

- exceptions

- generics

- dynamic loading

The short compilation times are possible in any language with modules.

Channels are available as part of concurrency libraries in Java, .NET, C++ and Erlang.

Goroutines are also possible in other languages, in form of continuations or task pools.


None of those things are really roadblocks to Getting Shit Done though, except maybe dynamic loading.

Go has plenty of support for global constant values that take the place of enumerations, even to the point of having an "iota" syntax to make initializing constants easier.
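A sketch of the iota idiom being referred to (the Color type and its names are made up for illustration):

```go
package main

import "fmt"

// An enum-like type built from typed constants; iota auto-increments,
// so adding a value to the list doesn't require renumbering anything.
type Color int

const (
	Red   Color = iota // 0
	Green              // 1
	Blue               // 2
)

// String gives the constants printable names, recovering most of what
// a built-in enum type would provide.
func (c Color) String() string {
	return [...]string{"Red", "Green", "Blue"}[c]
}

func main() {
	fmt.Println(Red, Green, Blue) // Red Green Blue
}
```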

Exceptions are generally an anti-pattern in my opinion: a hack to get around the inability to return multiple values from a function, so that error conditions can be handled in-band. The other technical challenge exceptions sometimes address -- guaranteed cleanup after code that might fail to execute -- can be handled by Go's deferred function execution.
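A sketch of the multiple-return-plus-defer pattern being described (parsePositive is a hypothetical function, not anything from the standard library):

```go
package main

import (
	"errors"
	"fmt"
)

// parsePositive returns an in-band error instead of throwing; the
// deferred call runs on every exit path, giving the guaranteed-cleanup
// property usually associated with try/finally.
func parsePositive(n int) (int, error) {
	defer fmt.Println("cleanup runs either way")
	if n < 0 {
		return 0, errors.New("negative input")
	}
	return n * 2, nil
}

func main() {
	if v, err := parsePositive(21); err == nil {
		fmt.Println("ok:", v) // ok: 42
	}
	if _, err := parsePositive(-1); err != nil {
		fmt.Println("error:", err) // error: negative input
	}
}
```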

Generics are a powerful abstraction, but Go has very good support for (un)boxed values and interfaces, so the only thing you would get from generics is a bit more type safety. The additional complexity required to support generics isn't worth it.
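A sketch of what the boxed-value approach looks like in Go 1: containers store interface{} values, and callers buy their type safety back with a type assertion at the point of use (the Stack type here is illustrative):

```go
package main

import "fmt"

// A "generic" stack without generics: it holds interface{} values, so
// any type can be pushed, at the cost of a runtime assertion on Pop.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(1)
	s.Push(2)
	// The .(int) assertion recovers the static type; it panics if the
	// value is not actually an int -- the type-safety trade-off.
	fmt.Println(s.Pop().(int) + s.Pop().(int)) // 3
}
```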

Dynamic loading is arguably a blocker for certain modes of development and distribution, but by not supporting dynamic loading, Go can make tremendous simplifications in its module system. Making the assumption that every go library is distributed in source form greatly reduces compatibility friction and runtime bugs. "DLL Hell" may be a "solved" problem on most systems nowadays, but there are a lot of moving parts to enable that.

The simplifications that Go makes are 100% worth it in my opinion. The one place where Go sacrifices something that can't be replaced is that a stop-the-world garbage collector may be unacceptable for some applications. Other than that, codebases aren't actually served by having too much power in their languages, even if an individual programmer might be.


iota is a poor man's solution to enumerations. Why should I do the work for the compiler?

Even C has enumerations, a language developed in 1972! This is how modern Go is.

The lack of generics makes everyone who writes generic data structure code feel like it's the 90's again: copy-pasting code or writing template processors to generate code. Talk about evolution.

Dynamic loading is an important way to write modular applications that can be composed at run-time. Something like Eclipse would be impossible to write in Go, due to the performance constraints of interprocess communication.

If Go did not have Google behind it, I doubt it would be noticed; it would fail as just another language.

Look how successful Limbo and Alef were. And Go is nothing more than a reinvention of them.


People who write generic data structures in Go just give up type safety when doing so. They don't copy-paste code or write template processors, afaik. A left-leaning red-black tree, for example: http://gopkgdoc.appspot.com/pkg/github.com/petar/GoLLRB/llrb

Holding up Eclipse as an example citing performance is perhaps not the best idea. And Chromium seems to do just fine (performance-wise) using IPC between renderer processes and the main process.


Go fanboys always cite Chromium as the gold example of process IPC and of why dynamic loading is not required for plugins.

What people fail to see is that Chromium is just one use case.

An IDE, for example, would explode memory-wise if every single plugin were a separate process. And all plugins that require real-time interaction with source code manipulation would suffer from heavy context switching.


Evidence of this memory explosion? Keep in mind that Go processes tend to take significantly less memory than Java processes, especially when dealing with IO related code, i.e. sockets (http://shootout.alioth.debian.org/u64q/benchmark.php?test=al...).

Sure, if you do it stupidly and have a highlighter process per open document or something, it could add up over time. But if you share highlighter processes between documents, you have one process that you keep open for the lifetime of the application. A simple HTTP server in Go takes < 5 MB of RAM in its steady state; a server using domain sockets would probably be comparable. Even better, a bug in the highlighter doesn't mean your main application crashes: if the highlighter dies, the parent merely restarts the process. It's also more secure, in the sense that you can run your highlighter with absolutely no OS privileges. And this allows for plugins written in any language that supports sockets.

All I'm saying is that there's no evidence that IPC isn't performant enough. And IPC comes with security, compatibility, and isolation advantages, to boot.

Note that I'm definitely not saying that dynamic loading wouldn't be nice. Do I miss it, when working with Go? Not for my use cases. "Plugins" for servers don't really make sense. But dismissing Go because it doesn't target every use-case at the moment is somewhat shortsighted.


Well, if I look at my plugins folder I have around 200 jar files.

Many of those do provide more than one plugin, which puts it way above 200 plugins.

The one-process-per-plugin approach won't scale in such cases.

The lack of IPC performance for heavy communication, as happens with plugins, is the main reason why microkernel-based OSes have yet to become mainstream.


Are all of those plugins active all the time? I doubt it.

This discussion is getting a little off track.

I'm simply saying that dismissing Go for not supporting plugins is rather illogical, as there are a lot of use cases that have no need of plugins. Servers being a good example of one of these use cases.


iota is much more general than just enumerations. http://golang.org/ref/spec#Iota

A neat example is using it to define constants for a bit field:

  const (
    A = 1 << iota // 1 << 0 == 1
    B             // 1 << 1 == 2
    C             // 1 << 2 == 4
  )


It pollutes the global namespace, and being able to give initial values to the enumeration elements is a common enumeration feature anyway, so I fail to see iota's benefit.


There is no global namespace in Go. The fact that you don't know this trivial thing means you know nothing about Go and are only spreading FUD.


Don't put words in my mouth please.

Each package has a global namespace, so each const has to be unique at the package level to avoid namespace collisions.

With proper enumerations, only the enumeration needs to be unique, while the enumeration elements would be scoped to the enumeration level, as most (not all) languages do.


You keep using that word, "global". I do not think it means what you think it means.


- It has a more general analogue to enumerations, one that does not add a lot of verbiage to the language spec. (http://play.golang.org/p/HSh4Ke3pCJ)

- It has an "exception" mechanism that is rarely used, in favour of error-valued returns. (http://blog.golang.org/2010/08/defer-panic-and-recover.html)

- Apparently, vitess didn't need generics. You'll notice a bit of casting in their implementation of an LRU cache, but writing containers isn't exactly the main purpose of the library. I do admit I want generics, if only to stop people from parroting that criticism without actually using the language. (http://code.google.com/p/vitess/source/browse/go/cache/lru_c...)

- Not sure what you mean here. If you mean loading a Go library from other code, there's an issue for that, but one unlikely to be fixed in the short term because of Go's unusual calling convention. If you mean Go loading a library at runtime, I don't know why you'd want that. Either way, static linking seems cleaner and more self-contained to me.

"Short compilation times" being available in any language is total bull. Try building Chromium or Firefox in a reasonable amount of time without a build cluster. Don't you think that if there were a way to speed that up, the developers would make it priority 1?

Channels and goroutines are of course expressible in any language; Turing completeness implies they must be. But in practice, does all code in those languages use the same concurrency primitives? Does it read as nicely as Go does?


> Try building chromium or firefox in a reasonable amount of time

He wrote "in any language with modules". C++ doesn't have them.


Thanks.

I am not sure what is up with these kids today, who seem to know only C and C++ as languages for native development.

There are many others out there that always had modules and fast compilation times.


Apologies. I misread your post: because you were comparing Go to Java and C++, I erroneously assumed the entire post was talking about those languages.


This shows that Go does not work in one specific scenario (a 32-bit architecture with a fragmented address space at startup). It says nothing about production readiness in other scenarios.


The above fact(s) being missing from the documentation would seem to indicate otherwise.


The inadequacy of the documentation has no impact on the production readiness in the scenarios where Go does work. As a comment above notes, Google is using Go successfully in production. We are also using it in production.

Production ready != done.


Good enough is not the same as done. One data point does not define a trend.


Appengine + YouTube + <everyone-using-Go-in-production> isn't exactly one data point. Each one independently is probably a ton of data points, as services tend not to run on a single machine.

While it is a bummer, I'd argue that the OP is the one that does not define a trend.


It seems like the actual problem is that you chose to use Windows...


It does seem rather surprising to me that Windows is so broken that a program can fail to reserve 512MB of virtual address space as the very first thing it does at launch.

Of course, the 2GB limit on 32bit Windows programs also seems terribly broken to me.


You mean the identical limit that 32-bit Linux applications on x86 also have, without reconfiguring the kernel for a different split?

It seems Windows bashing without any background knowledge is becoming popular again -_-


Running a PAE kernel on 32-bit is pretty standard at this point, I believe. Please correct me if I missed something.


It has nothing to do with PAE; the 3G/1G split is, I think, the default on Linux.


That does not address the 512MB of virtual address problem, which is not an issue with PAE.


Yes it is, because PAE doesn't change the address space of processes. PAE lets you use more physical memory; the address space of each process is still the same, because 2^32 is still 2^32 with PAE.


What the hell? 2^32 = 4,096MB. By default it is split 50/50. There are boot switches in Windows to make it 3/1 or 4/0 (or PAE).

I'd love to hear more about this awesome 32-bit operating system you seem to know about that defies the laws of math.


While this is a correct observation, an address space in which 512MB of contiguous room is already fragmented away that early in a process's life is not really the most clever construction of an operating system I have seen.

The article argues that while you can get 2GB (or 3GB, ...) in total, you cannot get a contiguous block larger than X megabytes.

That said, it is a rather odd limitation of the garbage collector as well. Most GCs work around the problem by being able to allocate memory in several discontiguous chunks. Still, the single-large-chunk solution is by far the simplest and fastest approach to the problem.


I haven't run Windows in a long time, but I do remember from when I ran some servers that there were a lot of configuration options for how the kernel treats memory allocation for a process. There is also a big difference between Windows versions; e.g. XP is optimized by default for many different applications being open, so it will swap and fragment a foreground process even if it hasn't hit any limits, because it anticipates other applications being opened.

I think that a combination of allocating more address space to userland, killing the default services to trim the server down, and telling the memory manager to treat Go as a background process would solve this.


Nothing to do with the problem.


Just compile with the gccgo compiler and you are set. Running 32-bit on servers is just like running IE6 on desktops: it's time to upgrade. Also, every language that uses a conservative GC behaves this way; it's not just a Go issue.


No, gccgo doesn't resolve anything. No, this is not intended to be server code. No, it's not time to upgrade. No, the dominant GC'd languages (Java, C#, Python, Ruby, and so on) do not suffer from this problem (at least, not to the same extent).


> No, gccgo doesn't resolve anything.

Care to explain? I was under the impression that the recent 32-bit memory usage issues are specific to the 8g implementation only.


The memory issue at hand isn't the same as the one that plagues the conservative GC. This is a Windows-specific problem around the reservation of contiguous address space, not a failure of the GC to collect memory that isn't in use.


What other popular languages use a conservative GC, though?


I believe Mono uses Boehm GC, which is conservative:

http://www.mono-project.com/Mono:Runtime#Mono.27s_use_of_Boe...


They've introduced SGen, which is a good, precise, generational collector. I don't know if it's the default, but it is considered production ready IIRC.


It'll be the default in Mono 3.0 (or 2.12, whatever we decide to call it). Currently, in Mono 2.10, Boehm GC is the default, although we've been using SGen for our commercial products (which is why we feel that SGen is now ready to be the default).


And yet Mono works fine in environments with < 512MB of RAM, even when using Boehm GC :-)


That's because the conservative GC issue is separate from the low-RAM issue.


Last I checked, Ruby uses a conservative GC, but it uses one bit of every VM word as a flag to unambiguously distinguish integers from pointers, so it gets a free pass on the integer-aliasing problem Go experiences. (Of course, the problem described in the OP is different and has to do with how Go allocates memory.)


Maybe I'm confused about what's going on, but if Ruby unambiguously identifies pointers at runtime, and the GC takes advantage of that, wouldn't that not be a conservative GC?


The point is that there isn't any integer that looks like a pointer (odd words are immediate values, even words are addresses), so Ruby's GC is spared from having to figure it out. It avoids the problem as a side effect of a performance optimization in the language implementation, so the GC doesn't have to be smarter. The compromise is losing one bit of integer precision. But now that I think twice about it, I suppose I might be splitting hairs.


Well, I'm not sure what you'd call it, but if you can't even have an integer that looks like a pointer, then you effectively don't have a conservative collector. The important distinction is whether an object can be kept alive through a false reference, and it sounds like it definitively can't in this case.



