Hacker News

Wow. That third message, with suggestions for avoiding the bug, reads like a twisted joke. Highlights:

"avoid struct types which contain both integer and pointer fields"

"avoid data structures which form densely interconnected graphs at run-time"

"avoid integer values which may alias at run-time to an address; make sure most integer values are fairly low (such as: below 10000)"

I understand that this isn't a completely brain-dead garbage collector, but warnings like that really scream "I'm just a toy language". It doesn't seem wise to call such a fragile programming tool production-ready or 1.0; the 32-bit implementation should be tagged as experimental, if only to lessen the damage to Go's reputation.

Unfortunately you are right. For me this is a very unpleasant surprise, coming after I put aside D and rewrote my little framework in Go. Everybody said that the GC wasn't final, that there were some performance issues and that they were working on them, but I never imagined that such catastrophic bugs would still be unsolved by now.

I also looked at D/Go -- I predict the reign of the JVM is over (quote me).

I wouldn't worry about the 32-bit hiccups. Go hits the sweet spot pretty well. You have chosen well. Hang in there.

D has nice allocation management (I loved the manual+GC approach) and in my benchmarks optimized D was slightly faster than Go at almost everything and consumed up to 70% less memory (I assumed that was because the Go GC kicked in later and was a bit lazy), but D was weak at threading/synchronization and the documentation of the standard lib was quite messy and lacking. So I decided that in the long run Go will be better (cheap goroutines, channels, a good stdlib/documentation, support from Google and the prospect of a better GC all indicated a clear winner). I really hope they fix this because I can't throw away my Atom box, and ARM is becoming more and more important.

It's strange that, with all those threading examples, they didn't notice the need for a WaitGroup primitive. I know that you could implement it yourself. You could simulate the goroutines, implement channels and SCGI/FCGI and so on, but why bother when there is Go?

Probably because there is no need for it? If WaitGroup waits for the end of all tasks before continuing, there is map/reduce, which should do the trick.

Did you mean something like core.sync.barrier?

A barrier is not a suitable primitive for a WaitGroup. The idea is simple: you accept sockets in a loop and handle the connections in parallel threads. At some point you want to stop this loop, and the main thread must wait for all the active threads to finish before exiting, otherwise some clients may receive "connection reset by peer". With a WaitGroup, every starting thread increments a counter and every finishing thread decrements it; when the counter reaches zero, all the threads have finished. The main thread calls WaitGroup.Wait and remains blocked until all the worker threads finish their jobs. I guess you could simulate it with core.sync.condition.

At the program level, this is built in (D has daemon and non-daemon threads like Java). Or you could use ThreadGroup in core.thread. It would be pretty trivial to do this at the user level with messaging as well.

std.concurrency "is a low-level messaging API." The language provides low-level concurrency primitives. A higher-level library providing "goroutines" might be possible, though there is no such effort or plan at the moment.

I wrote a couple of D programs long ago and it left me with the taste of an unfinished language. If I recall correctly, array manipulation is pretty weird and the language feels a bit like PHP does: a bunch of different things pieced together with no coherence. Compared to Python or Go, it's a whole different world.

The Phobos standard library is indeed quite unfinished and poorly documented in some areas, but the D2 language is rather complete. It has sh*tloads of features, but this also makes it a bit harder to master. Go is lighter (very easy to pick up) and has an impressive library. Long story short: with D2 I needed about a month to get a good grasp of the language and the standard library (the library was the hardest part), while with Go I needed around a week.

Agreed, D is a much bigger and more complex language than Go.

Comparing the standard libraries, Phobos doesn't look far behind Go in scope. There are big holes, though, like crypto, which is entirely missing, and a complete SQL driver (one was in development, but we haven't heard from it in a while), although there is a binding for SQLite3 and several drivers for major RDBMSs (not in the standard lib, though). Logging will be included soon. Most of the rest is covered (networking uses libcurl), and Phobos quality is continuously improving, with some parts being excellent in terms of both functionality and performance, like the new regex library. In some other areas, like containers, Phobos seems much more advanced than Go. OTOH, there seem to be more third-party libraries for Go than for D, but we can't comment on their quality. And of course, both languages let you bind to C libraries.

> The Phobos standard library is quite unfinished indeed

How so? I think it's got pretty much what belongs in the standard library.

What did you find weird about array manipulation in D?

If Ceylon/Rust are not ready today, it means they will be immature for at least two years from their release date. Without a large community behind them, they will slowly fade away like D did. In order to get a community, they have to have a good/clean/pragmatic design and a library as comprehensive as possible. If they don't have a good library, they had better have excellent interop with other platforms or they will be doomed. IMO Go is a solid step in the right direction, and if the authors resist the temptation to complicate it with (too many) new features and paradigms, I think it will fare well.

> Without a large community behind them, they will slowly fade away like D did.

This is a rather strange assessment. The history of "successful" languages has been a mixture of "cool, jump on board" and "who can stay alive the longest to build a community." D is in the latter camp.

It seems unreasonable to me to expect a language to come out of the gate with guns blazing. People expect nukes now!

The market will decide. As someone said on HN: the best language in the world would be a mix of Go and D and would be called GoD. When there were few options, yes, it was enough to survive long enough to build a community. But when you have plenty of similar options, some of them backed by major players (at least in the early stages), I think no one will sacrifice their productivity (other than for hobby projects) for the sake of one language. I really hope D will develop into something, but looking at various sources on the net (abandoned D1 projects on dsource.org) and Google results, it seems to me that D peaked somewhere around 2007-2008, with a slight revival in 2010 when D2 was released.

The drop after 2007/2008 probably corresponds to the Phobos/Tango debacle, and the fact that Walter decided to fork with D2.

One of the main problems was that he was almost the sole compiler developer and could hardly keep up with maintaining two parallel branches while developing new ideas at the same time. People complained that they couldn't get involved as much as they wanted. It's understandable that many people thought D didn't have a solid future with such uncertainties.

Nowadays, these problems are mostly overcome thanks to a much better organization: there are several committers for the compiler and several for the standard library. Phobos is the standard library and it's maturing, D2 has shown its strengths over D1, and the community is united again, because not only is it deeply involved with the design of the language and standard lib (through the mailing list), it is also involved with the implementation of essential parts of it. 2011 has been a very good year for D, and I think that more than ever, the whole project feels like it's going in the right direction.

Edit: I guess another reason D isn't gaining as much traction as it could is that it has been removed from the Alioth computer language shootout. For a language aimed at raw speed (and which was brilliant at that while it was still on the shootout), this is a severe blow.

I strongly prefer ceylon/rust... But they're not ready yet.

Go 1 defines the language and the backwards-compatibility guarantees one can expect as it matures. It's more a statement about the language specification than about the implementations.

I know from experience that the gc 64-bit version is production ready. It's unfortunate that the limitations of the 32-bit version are not called out clearly on the website.

Clearly, there was a need for "Hey, things won't change as much from now on". Now, I'm still afraid little work will appear on the GC, which is IMO Go's biggest weakness to date.

> Now, I'm still afraid little work will appear on the GC

No need to guess, look at golang-dev https://groups.google.com/forum/?fromgroups#!forum/golang-de...

Oh yeah thanks. I guess I'm doing it wrong :) Glad to see some work on that front.

> I know from experience that the gc 64-bit version is production ready.

Is it really production ready though, or is it the same half-arsed implementation as the 32-bit one, just taking advantage of the larger availability of virtual memory on a 64 bit system?

On 32-bit systems, the problem is that there's a very high chance of non-pointer data looking like pointers to existing data, which, in turn might have its own pointer-like data. This means that you can end up keeping an excessive amount of unreferenced data.

However, a 64-bit address space is so much larger that you can't really suffer the same issue. Unlike on 32-bit systems, neither high entropy data nor text will look like valid pointers.

Therefore, the technique can be validly described as production-ready for 64-bit systems.

Hmmmmm... I'm smelling memory-based DoS if I, say, upload carefully crafted files to a webserver. What accidentally happens in 32-bit may be deliberately triggered in 64-bit.

This is impossible. I don't like to make absolute statements, but I'll stand by this one.

In order to pull off an attack, you'd need to know what address range the program in question has been allocated, then figure out the smaller range the runtime is actively using, then feed it data with integers in that range. This is impractical.

If you think you can pull it off, play.golang.org lets you upload text to a Go program on App Engine, which then compiles that program and runs it. This gives you two programs to attack: the playground binary and the one compiled from your source. If you can do it, you'll have a way to kill machines inside Google.

Good luck.

Is it that you can't suffer the same issue, or that you can't 'really' suffer the same issue?

Even if there's a smaller chance of it happening, any language that has ANY chance of killing your system when running as expected is a language I'll never bother to learn.

Is there any sort of analysis tool they might be able to put into the compiler to tell you if a data structure you created has a high chance of looking like a pointer?

Can someone help me out here with a TL;DR? This is my first exposure to any technical aspect of Go, and this discussion makes it sound like it doesn't use any tag bits in its pointers, but instead has its garbage collector run heuristics on the data it examines, to see whether it "looks like text", to decide whether to collect it? I know that can't possibly be correct.

It's much simpler than that: it's called a conservative collector. It looks at every bit of data and just pretends it is a pointer; if anything points to a valid allocated address, that address is retained. Otherwise, just as in any collection when there are no references, the object is collected.

If Google is, and has been for quite some time, using it for large-scale systems in production, I would call it production ready.

Is there any reason to believe that Google uses Go for large-scale production systems?

Yes, it's used by YouTube, in a very critical path. Large-scale enough :)?



I am sorry if the post puts Go in a bad light. That wasn't my intention. You shouldn't read the post as "32-bit Go is unusable" - read it as "If you encounter a garbage collection issue in your 32-bit program then you can use the rules mentioned in the post to fix the issue".

We know. But that does put Go in a bad light. If your garbage collector falls down if you use "big numbers", it looks bad.

Keep in mind that the comments on that thread are open to everyone, so that doesn't constitute an 'official' response to the problem, though it does unfortunately seem to be sound (if obviously impractical, given the restrictions it puts you under) advice for avoiding the issue.
