Rust and Go (medium.com/adamhjk)
352 points by mperham on Nov 7, 2014 | 302 comments



The article is a lightweight analysis by someone who writes small programs. He does get that, for Rust, "If the compiler accepted my input, it ran — fast and correctly. Period." That was a common experience with the very tight languages, such as Ada and the various Modulas. It's been a while since a language that tight was mainstream. We need one now, badly.

Go isn't bad for writing routine server-side web stuff that has to scale and run fast, which is why Google created it. Go is a modern language with a dated feel. No user-defined objects, just structs. No generics or templates. It was designed by old C programmers, and it looks it. Go has generic objects - maps and channels - and syntax for creating object instances - "make". Only the built-in generics are available, though; you can't write new ones.
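For instance (a toy sketch of my own, not from the article):

  package main

  import "fmt"

  func main() {
      // The built-in containers are generic: make works for any element types.
      scores := make(map[string]int)
      ch := make(chan float64, 1)

      scores["answer"] = 42
      ch <- 3.14
      fmt.Println(scores["answer"], <-ch)

      // But you can't define your own generic container; a user-written one
      // must pick concrete types or fall back to interface{}.
  }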

Go's "reflection" package thus tends to be overused to work around the lack of generic. This means doing work for each data item at run time for things that could have been done once at compile time. "interface{}" (Go's answer to type Any from Visual Basic) tends to be over-used.

Go (especially "Effective Go") has a lot of hand-waving about parallelism. Go's mantra is "share by communicating, not by sharing", but all the examples have data shared between threads. The channels are just used as a locking mechanism. Race conditions are possible in Go, and there's an exploit which uses this. (That's why Google AppEngine limits Go programs to single threads.) Go doesn't use immutability much, which is a lack in a shared-data parallel language with garbage collection. If you can make data immutable, you can safely share it, which is a way to avoid copying without introducing race conditions.

Rust, like Erlang, takes a much harder line on enforcing separation and locking. I haven't used Rust myself yet, so I can't say more on what it's like to use it. My hope is that Rust will provide a solution to buffer overflows in production code. After 35 years of C and its discontents, it's time to move on. I really hope the Rust crowd doesn't fuck up.


Like you†, I've had the pleasure of working with some fairly large concurrent codebases and the character-building experience of tracking down deadlocks, random memory corruption bugs that turn out to be race conditions, and (my most favorite of all) unexpected serializations that randomly bring programs to a halt. Most of that experience has been in C++, with a little C and a little Java mixed in there.

Over & over I see language aficionados ding Golang for not taking advantage of immutability and for allowing shared data --- or, in your case, going a step further and reducing all communication among processes in Golang to instances of synchronized sharing.

What I'd like to know is: why don't all those hundreds of thousands of lines of concurrent Golang code out there, including all the library code I can just "go get" and whose authors have been encouraged by Rob Pike to use, basically, threads with near total abandon (watch his video about designing a lexer!) --- why don't all those libraries and programs randomly deadlock and corrupt themselves all the time?

Because my experience is that Golang code is quite a bit more reliable than, for instance, Python code.

What am I missing? The "share by communicating" model in Golang seems to work pretty darn well, especially given the extent to which Golang begs programmers to make programs concurrent.

OK, probably you more than me, but still.


> why don't all those libraries and programs randomly deadlock and corrupt themselves all the time?

The simplest answer would be "they do." In aphyr's recent presentation on Jepsen, where he tested etcd (a Go database implemented on top of Raft), he noted that when he started using it he encountered a ton of easily reproducible races and deadlocks (which he sarcastically noted was surprising because he thought goroutines were supposed to make concurrency issues a thing of the past).

I am not saying that Go channels don't help the situation at all--and the inclusion of a race detector doesn't hurt either--but you still have plenty of ways to shoot yourself in the foot. The thing that probably helps most is that GOMAXPROCS is 1 by default, since data races are largely a multicore phenomenon in Go.


Distributed systems programming is its own special concurrency problem, and distributed systems also exhibit deadlock, races, and serialization, no matter what language they're implemented in. I'm not sure what finding a race condition in a distributed commit implementation says about a language; at the very least, it's nothing you couldn't say about Rust as well, which is also not a language that solves distributed systems concurrency problems.

Maybe I'm wrong and etcd was riddled with concurrency problems between the goroutines of a single etcd process?

In any case: as anyone who has worked on a large-scale threaded C++ codebase can tell you: Golang programs simply do not exhibit the concurrency failures that conventional threaded programming environments do. It would be one thing if Golang code only used concurrency for, say, network calls. But goroutine calls are littered throughout the standard library, and throughout everyone's library code.


It is probably not a black-and-white situation. Golang is better because it has built-in channels and encourages users to take advantage of them. It also has garbage collection. So those two things right off the bat help.

But there are better things out there -- isolated heaps (Erlang), borrow checkers (Rust), stronger type systems and immutability (Haskell) etc. There are no magic unicorns so those things often come at a price -- sequential code slowdown.

Getting back to Go. One can of course say, "Oh, send only messages. We are all adults here. Let's just agree to be nice. Stop sharing mutable memory between goroutines!" But all it takes is "that guy" or "that library" doing it "that one time", and then there are crashes during a customer demo or during some critical mission. It crashes, and then good luck trying to reproduce it. Setting watchpoints in gdb (or the equivalent Go tool), asking customers "Can you tell me exactly what you did that day. Think harder!" and so on.

Also, as others have pointed out, Golang is often run with just one OS thread backing all the concurrency. So many potential races could just be hidden.

There is also some confirmation bias involved. When something is broken, authors often don't write blog posts about it or advertise it. They fix it and move on. So maybe a lot of programs are full of concurrency bugs and nobody is blogging about it. After investing time and energy into learning a new ecosystem, it is hard to turn around and blog about its flaws.

Another observation is that when you spend a lot of time debugging and handling segfaults, pointer errors, use-after-free errors, and concurrency issues, that becomes the default and expected view of how programming works. It becomes hard to imagine how it could work another way. It becomes obvious that weeks will be spent tracking one concurrency bug, or that cron jobs have to be added to watch for crashed programs and restart them, because the system is so complex and non-deterministic that replicating the bug is too hard.


How does Rust's borrow checker cause a slowdown of sequential code? It's a purely compile-time construct and allows for the elimination of a GC, so it's actually a net win in code execution speed.


> How does Rust's borrow checker cause a slowdown of sequential code?

It doesn't, it was just grouped with the other two. But immutability-by-default could lead to slowdown.


Not at all. In languages that are thoroughly immutable, it's copying, not immutability, that has a runtime cost. Rust has mechanisms for avoiding these costs (moves and mutable references).

Furthermore, since the compiler's knowledge of mutability is directly related to its knowledge of ownership, one could argue that immutability actually makes code faster by dint of providing greater aliasing information (e.g. `restrict` in C) to the optimizer (though the Rust compiler has yet to actually leverage this optimization).


Just out of curiosity - is there a problem with sharing data across goroutines when access to/mutation of said data is controlled by a mutex?

    func (*Mutex) Lock

    Lock locks m. If the lock is already in use,
    the calling goroutine blocks until the mutex is
    available.
http://golang.org/pkg/sync/

It seems to me this is a valid alternative to [rigidly] sticking to pure message passing.
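Something like this, say (a toy sketch of mine; note the mutex/data pairing is still only a convention):

  package main

  import (
      "fmt"
      "sync"
  )

  // counter keeps the mutex next to the data it guards. The pairing is only
  // a convention, though; nothing stops other code from touching n without
  // taking the lock.
  type counter struct {
      mu sync.Mutex
      n  int
  }

  func (c *counter) incr() {
      c.mu.Lock()
      defer c.mu.Unlock()
      c.n++
  }

  func main() {
      var c counter
      var wg sync.WaitGroup
      for i := 0; i < 100; i++ {
          wg.Add(1)
          go func() {
              defer wg.Done()
              c.incr()
          }()
      }
      wg.Wait()
      fmt.Println(c.n) // always 100
  }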


In short: We (as a field) tried them for many many (many!) years now and they have been found lacking -- in practice they're just too hard to get right for large-scale systems.

EDIT: The mutexes themselves are easy enough to get right, it's the systems using mutexes that are too hard to get right.


In most languages, the language says nothing about what data is protected by the mutex. Modula and Ada did, and Java has "synchronized" objects, but C/C++/Go lack any syntax for talking about that. This typically becomes a problem as a program is modified over time, and the relationship between mutex and data is forgotten.


> In most languages, the language says nothing about what data is protected by the mutex.

Or the other way around, what mutex protects a piece of data (or even that a piece of data should be protected at all), so it's easy to forget it and just manipulate a bit of data without correctly locking it.

I was pleasantly surprised to discover that Rust's sync::Mutex owns the data it protects, so you can only access the data through the mutex (and the relation thus becomes obvious).


> is there a problem with sharing data across goroutines when access to/mutation of said data is controlled by a mutex?

Well, aside from deadlock or priority inversion of sorts, that should work. Just like it would work in C/C++/Java etc.

The real problem is when the shared data is not controlled by a mutex, but should be.


I feel this is a symptom of people coming from dynamic, higher-level languages, who may or may not have ever really learned the concepts of CS, "switching" to Go for performance reasons. I don't intend to insult anyone, certainly not the authors of the above. Go is type safe but doesn't keep you from shooting yourself in the foot. You NEED to read the spec to understand when sharing memory is generally OK. I'm glad not to be penalized in performance or boilerplate to accomplish this. The downside, of course, is stories like the above. I don't blame the language here, though. They give you the tools to be safe. Getting away from automobile analogies, let's try woodwork. I can give you a drill, a drill bit, a screwdriver, and a screw. Sometimes you should know when to make a pilot hole, and when it's OK to forego this. But folks looking for 'performance' or 'ease of use', or folks who come from languages that just give you a nailgun, will inevitably split the wood a few times.


Were there any code examples provided that show how to easily trigger races & deadlocks? I mean, the Go team needs to be aware of these problems and provide a fix or something.


The issues weren't with Go--which definitely allows for both data races and deadlocks and doesn't claim to eliminate either--but with etcd. And according to aphyr, the team was very responsive and quickly fixed the ones he found.

My point wasn't that Go is _worse_ than contemporary languages like C++ and Java when it comes to data races, only that it doesn't eliminate them. Which, again, it doesn't claim to. Rust does, and it is an important difference between the two languages. Because data race freedom with cheap mutable state requires a garbage-collection free subset of your language [1], I think it's unlikely that Go will ever guarantee this.

[1] as noted by Niko Matsakis at http://smallcultfollowing.com/babysteps/blog/2013/06/11/on-t...


> show how to easily trigger races & deadlocks? I mean, the Go team needs to be aware of these problems and provide a fix or something.

You mean file a bug like "issue #1935 -- stop sharing memory between goroutines"? (I just made that up to be silly; there is no such bug report.)

In other words, they have explicitly designed in the ability to share memory between goroutines. You can certainly file an issue or bug report about it; somehow I doubt that will lead to much besides being kicked out and laughed at.

One can also just have 2 goroutines wait on each other for results, and that's a deadlock. Maybe there is a tool to detect that; that would be nice. Do you know of one?


If all goroutines deadlock, then the runtime will panic and you'll get stack traces for everything. But yes, you can obviously deadlock one or more goroutines trivially. A simple select{} will just block one goroutine forever, for example. And no, there's no tools currently that will detect deadlocks, AFAIK.
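A whole-program example of that runtime check (toy sketch):

  package main

  func main() {
      ch := make(chan int)
      ch <- 1 // nothing can ever receive; the runtime aborts with
              // "fatal error: all goroutines are asleep - deadlock!"
  }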


It seems that Go's race detector (https://blog.golang.org/race-detector) can do that. See an example: http://pastie.org/9705392
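In case the pastie disappears, here's a stand-in sketch of the kind of program `go run -race` flags (my example, not the pastie's contents):

  package main

  func main() {
      m := make(map[string]int)
      done := make(chan bool)
      go func() {
          m["a"] = 1 // written from this goroutine...
          done <- true
      }()
      m["b"] = 2 // ...and concurrently from main: -race reports a DATA RACE
      <-done
  }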


I haven't looked into that (or googled), but how does one detect that everything has deadlocked - is it some kind of profiling/sampling being done, or is it something more system specific? Any pointers? Thanks!


http://golang.org/pkg/runtime/pprof/#Profile provides a "blocking profile" that tells you what blocked and for how long. You can use it to find places where you've deadlocked as well as places where adding buffered channels might help performance.
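A minimal sketch of turning it on (in a long-running server you'd more likely expose it via net/http/pprof instead):

  package main

  import (
      "os"
      "runtime"
      "runtime/pprof"
      "time"
  )

  func main() {
      runtime.SetBlockProfileRate(1) // sample every blocking event

      ch := make(chan int)
      go func() {
          time.Sleep(100 * time.Millisecond)
          ch <- 1
      }()
      <-ch // this receive blocks and shows up in the profile

      pprof.Lookup("block").WriteTo(os.Stderr, 1)
  }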



How can you compare the ridiculous amount of Python libraries/code out there (with varying degrees of quality, as you would expect from a friendlier language) with Go? Of course you would find Golang code to be more reliable. The libraries you're relying on are, by comparison, quite primitive and missing many years of technical debt.


As a separate comment I want to address your final paragraph. Rust definitely provides a solution to buffer overflows. Specifically, all data structures that represent buffers of any sort always do bounds checking. This includes using the indexing operator (`foo[idx]`) on a fixed-length compile-time array. It's possible to skip the bounds check but you have to do so very intentionally, using an `unsafe {}` block (which defines a scope wherein certain features can be used that bypass some of Rust's safety). This should only ever be done in response to performance profiling when it turns out that bounds checking is a problem and when you can prove to yourself that it's safe to skip, and often there's another way to do the same thing that doesn't use `unsafe {}` (e.g. if you're iterating an array, use an iterator instead of indexing into the array each time; the iterator approach generally optimizes all the bounds checks away).

The basic rule is if you see a segfault or a data race, search your code for any `unsafe {}` blocks, and the cause will always be found inside one of those. And as a corollary, never use `unsafe {}` if you can possibly avoid it.


Note that Go also does bounds checking, so you never have buffer overflows in Go either (unless, again, you use the unsafe package).
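For illustration, a toy Go sketch (mine, not the parent's): an out-of-range index is a recoverable panic, not silent memory corruption.

  package main

  import "fmt"

  func main() {
      buf := make([]byte, 4)
      i := 10
      defer func() {
          // The overflow surfaces as an ordinary panic we can recover from.
          fmt.Println("recovered:", recover())
      }()
      buf[i] = 1 // panic: runtime error: index out of range
  }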


I could not agree more with your first paragraph. The only other language I've used that I've had that experience with was Haskell, and while there are good arguments to be made for using Haskell in production, it should be obvious that's not a language that will ever become mainstream.

I'm hoping that as Swift evolves over time, it will slowly become that sort of language. Right now it's pretty hard to write any real-world code in Swift that doesn't work with the Cocoa frameworks, and the Cocoa frameworks are typed in an objective-c-compatible way (even if new frameworks are written in Swift they'll need to maintain obj-c compatibility), which means you don't get the strong typing that's necessary for this property. Pure Swift code has the potential to behave like this, although you probably need to avoid the ImplicitlyUnwrappedOptional feature (the ! suffix on types), which of course primarily exists for ease of obj-c integration anyway.

I'm bringing up Swift because, with Apple's backing, it's very quickly becoming a "mainstream" language. I put that in quotes because it is only usable with iOS and OS X programming (for now at least), but iOS is large enough that obj-c should be considered a mainstream language despite the fact that almost nobody outside of iOS/OS X uses it, and therefore as Swift supplants obj-c it becomes appropriate to call it mainstream.

Regarding parallelism, I've been in love with Rust for a long time now, and one of the biggest reasons is because Rust makes parallelism safe. As an iOS/OS X programmer by trade, I think thread safety is far and away the biggest elephant in the room. Despite the fact that we've known that multithreading is the future for years, and despite the wonderful Grand Central Dispatch library on iOS/OS X, most programmers still think in a single-threaded mindset and don't even consider how their code should operate if invoked on a separate thread. This was one of my bugaboos with Go back when I was using that language (which was from the day it was announced right up until I discovered Rust, though admittedly my usage was in hobby projects and nothing serious).

I applaud the fact that Go has a data race detector now, which I used to finally uncover a lurking data race that plagued one of my programs for months (and which was ultimately caused by the Go library using two goroutines where I expected one, and therefore data which I expected to be on a single goroutine was actually mutated from two goroutines simultaneously). But I think Rust is absolute proof that a modern language can be designed such that data races are prohibited at compile-time without sacrificing any language flexibility.


> I could not agree more with your first paragraph. The only other language I've used that I've had that experience with was Haskell, and while there are good arguments to be made for using Haskell in production, it should be obvious that's not a language that will ever become mainstream.

I don't think it is at all obvious that Haskell won't become mainstream. It's already exerted a tremendous influence over many other mainstream languages and there's only so long that can happen before people just start going directly to the source of the innovations (or one of its direct descendants).


I agree Haskell has had an important influence, but I don't see why people would necessarily "go directly" to it because of that. In fact, I would argue just the reverse. People chose the derivative languages because they provide things the original does not.

To wit, Lisp never became mainstream despite exerting a huge influence. Likewise Smalltalk.


> I agree Haskell has had an important influence, but I don't see why people would necessarily "go directly" to it because of that.

I also said "or one of its direct descendants" (like Agda or Idris in all likeliness).

> To wit, Lisp never became mainstream

Clojure doesn't count? And the good parts of Perl, Ruby & Javascript are essentially Lisp without the homoiconicity.


> Clojure doesn't count?

Perfect example. Clojure != Lisp. But it is a pretty obvious descendant.


"Lisp" is a family of languages and Clojure is one member of this family.


Technically true, but I don't think it makes sense to consider Clojure to be the same thing as Lisp in the context of ternaryoperator's statement (though I obviously can't speak for him). Notably, Clojure's tight integration with Java is the primary reason for its relative popularity, and what most sets it apart from the rest of the Lisp family. It is disingenuous to claim Clojure means Lisp has gone mainstream when the popularity of Clojure is not due to its inclusion in the Lisp family.

Beyond that, I'm not quite sure Clojure counts as "mainstream" yet. According to the TIOBE Index, it doesn't even rank in the top 50 languages. Heck, the top 20 includes R and Dart, neither of which I would call "mainstream" (I'm actually really surprised at how high R is ranking). I don't know how significant that is, though the TIOBE Index is measuring "number of skilled engineers world-wide, courses, and third party vendors" and that seems like a reasonable approximation for "mainstream" to me.


There are several languages targeting the JVM these days. And, what obviously sets Clojure apart from other JVM languages is its Lispiness (i.e., the JVM is constant across JVM languages).

Clojure is not married to the JVM either-- in fact, it has been hinted that it would jump ship if something better comes along or the current situation becomes less viable. Furthermore we already have a dialect of Clojure called ClojureScript which targets JavaScript/node/V8.

And, I look at the JVM as really merely a library/API/runtime. C++ has STL and stdio and such and they are not part of the language proper but rather merely libraries for interacting with the underlying operating system (in a platform independent way). The same is true for the JVM with respect to Clojure and Scala et al.


> Likewise Smalltalk

Java happened. Businesses were already in the process of adopting Smalltalk.

Hotspot is a Smalltalk JIT compiler reborn.

Eclipse is Visual Age for Smalltalk reborn. It still keeps the old Smalltalk code browser.


>Hotspot is a Smalltalk JIT compiler reborn.

Yeah, but nothing in it is Smalltalk-specific. It's not like Smalltalk survives in the mainstream because of Hotspot (in the way that, say, Algol survives).



My point was that Smalltalk did not become mainstream because a few heavyweight vendors decided to switch fields to support the new kid on the block.


Well, a counterpoint then can be: why did those vendors not insist on Smalltalk? Why wasn't Smalltalk more heavily pushed by some big vendor itself?

It's not like SUN was the only player in town. IBM pushed Smalltalk IIRC.

I think this (from Stack Overflow) tells a more comprehensive story:

• when Smalltalk was introduced, it was too far ahead of its time in terms of what kind of hardware it really needed

• In 1995, when Java was released to great fanfare, one of the primary Smalltalk vendors (ParcPlace) was busy merging with another (Digitalk), and that merger ended up being more of a knife fight

• By 2000, when Cincom acquired VisualWorks (ObjectStudio was already a Cincom product), Smalltalk had faded from the "hip language" scene

http://stackoverflow.com/questions/711140/why-isnt-smalltalk...


I used VisualWorks at the university in 1995, just before Java appeared; there were presentations with broken Java code [1].

Eclipse 1.0 was Visual Age for Smalltalk redone in Java.

If Java hadn't appeared on the scene, maybe even with those cat fights, the language would have become mainstream anyway.

This is just speculation on my part.

[1] The famous "private protected" that was accepted in the very first release.


Technically, Eclipse was Visual Age for Java [only implemented in Smalltalk] redone in Java :-)


So his point stands. People preferred Java to Smalltalk and it thus didn't become mainstream.


s/People/Vendors/g

Which is quite different.


Even worse than the problem of uncommon concepts such as monads is that Haskell's memory footprint is extremely hard to reason about. A few years ago it was impossible with the http libraries to download a file without the program consuming several times as much memory as the downloaded file.


> Even worse than the problem of uncommon concepts such as monads

Just go ahead and learn the typeclass hierarchy and such-- it really is quite a useful higher level of abstraction in whatever language you choose. And it definitely will enter the mainstream (even more than it already has [Swift, Scala & C# all have monadic constructs]).

> Haskell's memory footprint is extremely hard to reason about.

And you'd probably want to throw runtime in there as well.

I think this is relative-- it's not "extremely hard" for everyone. Also, many structured programmers found object orientation "extremely hard" but somehow the industry managed to progress through that era.


A common recipe people quote for good software is "a) first make it work, b) then make it fast".

Haskell is very good at a), and not bad at all at b). With the help of the profiler it shouldn't be that hard to determine a program's bottlenecks/leaks and fix them, as with any other language.

BTW, since you mention http, I have this reading on my back burner [1], but from skimming it I found that for certain payloads a Haskell http server may perform better than nginx (1.4, circa 2013?), which is an impressive feat.

1: http://aosabook.org/en/posa/warp.html


I have definitely heard from multiple people that for them Rust was a good stepping stone to Haskell. Hopefully it will be the same way with Swift.


That may be true but I find it curious. Without knowing too much about Rust, it looks like a lower level language than Haskell.

I'd have thought that someone would pick the higher level language first, and go to the lower level when in need of more control.


Rust is not your typical lower-level language though. It supports a lot of the features that functional programmers expect. It is an eagerly evaluated language that lets you drop to 'unsafe' code where necessary but in its natural form, it is surprisingly high level.


I think the same is true for F# and Scala.


> it should be obvious [Haskell]'s not a language that will ever become mainstream

If it is obvious then you should be able to explain why in a few sentences.


Haskell is an academic language with a focus on language research rather than pragmatism.

One sentence. Is that sufficient?


Sure! I don't actually agree with your conclusion but I think your point of view is fair enough.


> Race conditions are possible in Go

Just to add a clarification on the implied comparison: Rust only protects against data races at compile time. Other forms of race conditions are still possible.


> No user-defined objects, just structs.

The way structs are supported, it feels much more like objects than C structs. I prefer it much more than say, OOP classes. It's something you need to immerse yourself in to appreciate, IMO.

What about "user created objects" would you miss?


It's absolutely lightweight. First impressions for sure, writing what were very small programs. I tried to be clear about that. :)


It's not clear how much experience the author really acquired with each language, and whether that experience was sufficient to justify his statement:

> Go felt that way to me — it was good at everything, but nothing grabbed me and made me feel excited in a way I wasn’t already about something else in my ecosystem.

He's apparently using each language to write relatively small command-line utilities. If Go is "amazing" at anything, it's usually cited as a language of choice for (1) networked systems, and (2) large yet maintainable systems. I'm not sure his initial foray into the language would have provided enough experience to accurately assess those merits one way or the other.

Rob Pike once expressed surprise that people migrating to Go weren't C++ programmers, but Ruby/Python/etc. programmers who needed more performance. That leads you to wonder: (EDIT: removed pejoratives) if a programmer desired to switch from C/C++ to another language but hasn't by now, why not?

1. They require the performance benefits of C/C++ (and as humanrebar pointed out, manual memory management).

2. They're tied to legacy code, with too little incentive to switch.

3. They have an organizational mandate.

Any programmer who wasn't subject to the above constraints and wanted to switch could have done so before Go showed up on the scene. And if a programmer uses C or C++ solely because of the above constraints, Go isn't likely to change that.

Rust may have a better chance of converting C++ programmers, if it offers the performance and control demanded by programmers who are using C++ by necessity. It will be interesting to see if people migrating from Python/Ruby to a higher performance language will choose Go or Rust in the future. Kind of like the OP, I like Go but I'm excited about Rust.


> Rob Pike once expressed surprise that people migrating to Go weren't C++ programmers, but Ruby/Python/etc. programmers who needed more performance. That leads you to wonder: who is still using C/C++ in 2014, and why? Here are some ideas:

I (and many others) "still" use C++ (which is a very different language from C) in 2014 because it's an extremely powerful language that's still evolving and has produced many innovations that haven't been replicated by many other languages, such as destructors/RAII, strong compile-time type checking, generic programming, and ownership/move semantics. To me, Go looks like an update of C that has ignored all the good innovations of C++. It has no destructors, uses GC instead of ownership/move semantics, and requires runtime type checking instead of providing generics. I see it as a big step backward from C++, so unlike Rob Pike I'm not surprised it hasn't attracted C++ programmers.

Rust, on the other hand, has taken many of the good parts from C++ and developed them even further, without the baggage of C++. Thus, while I haven't coded any Rust yet, it looks like a step forward from C++ and has me quite excited.


I apologize, "still" came off as pejorative. My post became a little muddled there, but had my thoughts been clearer, I would have focused on just one group: programmers who currently use C++ even though they would prefer to switch to another language. My interest pertains to them -- what requirements have prevented them from switching, and does Go satisfy those requirements?

>> [C++ is] still evolving and has produced many innovations

You're absolutely right, those features extend beyond performance/control.

>> Go looks like an update of C that has ignored all the good innovations of C++.

Yes, Go feels much more like an updated C to me too. And it makes it all the more surprising they expected C++ programmers to jump on board. I double-checked the quote, and actually C wasn't mentioned at all.


For me - it's because Go exists in the same uncanny valley as Java. It is a language that manages to get a good number of trade-offs right, balances performance & productivity, and provides a good all-around package.

However, it's not the best at anything. If you need super performance and the ability to control the machine directly, you still need C++. If you want super productivity and the ability to quickly try out ideas and see if they work, you still need Python or Ruby. If you want to write for web browsers, you still need Javascript. If you want to write for iPhones, you still need Objective C. If you want to write for Android, you still need Java. If you want to be absolutely certain your program will work as intended when it compiles, you still need Haskell.

I guess Go is the best mainstream language available in one area, networking concurrency. And that's where we see its successes so far - Cloudflare and Doozer (Heroku's Paxos implementation) and dl.google.com.

But virtually all the money in the tech industry is made on the margins - by being the best in the world at one specific task. If you want to build the fastest, most memory-efficient database possible - you still need C++. If you want to crawl, parse, and index the most pages on the web - you still need C++. If you want to have the highest-quality graphics at the best frame rate possible - you still need C++.

I could see Go getting pretty widespread adoption within the enterprise (as in, software departments of companies whose primary product is not software), much like Java has. There, you don't need to be the best in the world at something, you only need to make the best use of the computing resources you can with the staff you have available, and a language that gives you a good trade-off works out well. But these people moved away from C++ a decade ago; the enterprise is all Java land now. The people who are still on C++ generally work for product companies where performance is critical; because of the economics of the software industry, it pays for these companies to hire a few more expensive developers and ensure that their product remains better than the competition, rather than to switch to a more productive language and take the risk that their technology stack won't let them achieve the goals that make them competitive.


Do you really think Go is better at concurrency and networking than Erlang?


I said mainstream. Erlang is better at concurrency & networking, but it has some very pragmatic problems (notably, string handling is dog-slow and takes lots of memory) that rule it out for many use-cases. The tooling for Go is also better than that for Erlang, the libraries for things other than telecom & messaging are more extensive, and you're more likely to find enthusiastic developers for it.


I definitely share this perspective. I think C++14 is a great language, and nothing I've seen about Go has given me any interest in considering switching to it. Rust, on the other hand, looks really promising and has a bunch of neat innovations that make it seem pretty exciting to me, and I'm definitely interested in learning more even though I'm by no means dissatisfied with modern C++.

The only other language I currently do any meaningful amount of coding in is F# and for me that replaces any need for a scripting language like Python or Ruby. I've used and liked Python but I really missed static typing.


  who is still using C/C++ in 2014, and why?
Systems programmers. Embedded programmers. Kernel developers. Compiler developers. Language runtime developers. Safety critical software developers. I guess you can group them under "performance benefits" but it's as much about the ability to drop down, write arbitrary bits to arbitrary addresses, and (re)implement low-level constructs as it is about performance.

The most obvious example of the need for a powerful language is manual memory management. For C and C++ developers, manual memory management isn't a drawback of their language but a feature.

If you want that sort of freedom to rebuild from the ground up, the only mainstream choices are C and C++. We'll see if Rust gets it right too.

All that being said, I think there's more room in the systems programming space. Both C and C++ tend to pick fast (or reverse-compatible) over correct when deciding which way to go on a feature. There are many, many applications that could really benefit from a "safety first" attitude towards systems programming. Browsers are the driving example but encryption libraries, medical systems, safety-critical software, and many other domains would benefit from a new perspective on things.


Nim performance compares favorably to C++. It features a Pythonic syntax, optional soft real-time GC, optional manual memory management. Porting an elliptic curve implementation over to Nim from Python was a cinch:

elliptic.nim: https://github.com/def-/bigints/blob/master/examples/ellipti...

elliptic.py: https://github.com/wobine/blackboard101/blob/master/Elliptic...

Others have reported success converting Python codebases directly to Nim. To put it in perspective [1]:

> The problem asked students to do a hundred runs and to try sample sizes of 10 and 100. For kicks, I did 1000 runs with a sample size of 1000. This took nearly 4 minutes in Python (macbook pro retina with i7 or i5? I forget...) and under 5 secs in Nim.

[1]: http://forum.nimrod-lang.org/t/589

Rust and Go vs Nim: http://goran.krampe.se/2014/10/20/i-missed-nim/


The Rust vs Nim comparison is tainted by very out of date documentation hosted on some MIT servers (the condition system is entirely gone). It's unfortunate and we have contacted the web master but there's not much they can do to remove or change how it appears in search engine rankings.

For reference, doc.rust-lang.org is the only place to look for documentation for the standard libraries; any other hosting of those docs is likely to be out-of-date.


Hmm, was the parent comment edited? I can't see a reference to documentation...


The "I missed Nim" article to which it links to includes

> Nim has Exceptions with tracking (!) and Rust has… something incomprehensible or perhaps its this, I am actually not sure ;). Again, for me, a clear win for Nim.

where "something incomprehensible" links to very old documentation of the conditions system, which was removed quite a while ago.


If we swapped the language, he might as well also have said:

> The other language is named Nimrod... or perhaps it is Nim... I'm actually not sure ;).


Regarding the parts where I link to out of date docs:

1. I will fix it later today/tomorrow, busy right now

2. I did actually search quite a bit, but that was what I as an outsider found. Consider this an "issue" to fix :)

I think the issue was that the words I was searching for are not the words used to describe these mechanisms.

Anyway, if you can give me a proper link to the canonical place where error/exception handling in Rust is documented - I will link it. Going to http://doc.rust-lang.org/0.12.0/reference.html I still can't seem to find it.


Re 2. as my comment states, we have tried to fix it, but we cannot: the administrator of that server is powerless.

Error handling is mainly via types, mainly `Result`: http://doc.rust-lang.org/nightly/std/result/ . That part of your article will need changing because it is completely different to the condition system (that is, just swapping the link would be useless).

Also, it's a little unfortunate that you entirely ignore the major difference between Rust and Nim in your little list of differences: Rust does not require a garbage collector (there is no GC used or implemented in the standard library). Lastly, unless you've participated in Rust development (I encourage everyone to do so, it's fun) it may be difficult to judge what the influence of the 'corporate' backing is... you're actually talking to a volunteer (no association with Mozilla) member of the Rust core team right now.


Regarding "ignoring the major difference", I explicitly tried to not compare them too much - because I know too little of both. Now I know more about Nim, but not Rust.

Regarding GC, it's not fully "required" in Nim either, but yes, the GC is a big differentiator and probably one of the things making someone prefer one over the other.


You did compare them too much: comparing error handling idioms requires more specific language knowledge than noting that Rust is designed to be used without a GC (all functionality works and is safe with no garbage collection), but Nim is not.


What I meant with "issue to fix" - add a section in the documentation that has some kind of words that outsiders connect with "error handling": Error, Exception etc

Perhaps I didn't search hard enough - but the next person will probably not search harder either. "Result" is not typically a word one would search for.


> "Result" is not typically a word one would search for.

It was renamed from the more generic (type-wise) "Either", because the two tags (Left and Right) didn't feel very descriptive when it was used as an error-or-value type. But "Either" at least hints at the concept of "either this or that": "Result" is even more of a generic name. I don't know why they settled on a name like that. (The two tags, Ok and Err, are more descriptive for errors, though.)


The mnemonic for how Either fits into the exception monad in Haskell is the result is either Right (correct) or Left (sinister). It is tainted by the old prejudice against left-handed people.


I know, but in Rust's case it was more straightforward to call it Ok/Err since that was the majority use-case. Personally I prefer Either, since then you have a more generic sum type, just like the tuple type is a more generic product type (if I'm using the terminology correctly).


Games developers are also overwhelmingly C++ developers (at least AAA console game developers, things are a bit different in the mobile and indie space where other languages have made inroads). Performance is a major reason for us too but large legacy code bases and the platform and library ecosystems we work with are also major factors.

Plus some of us actually really like C++ and/or C :)


Never underestimate the legacy. I've resolved not to start another C++ project, but I'm learning MFC at the moment for working on a codebase that's 35 years old and was rewritten into C++ at some point in the 90s.


I'm sorry you have to do that.


I'm finding it not too bad; the style is somewhere between 'C with classes' and EnterpriseJavaReallyLongNames, and aggressive testing has kept it working over the years. MFC itself is pretty intuitive and I like the window message system.


It seems relatively clear that he's quite new to both languages, with maybe a few weeks worth of familiarity if not less.

From his rookie mistake of using the path package instead of path/filepath to join system file paths [1], to non-idiomatic naming with underscores, and what he describes in general.

It's interesting as a fresh perspective on both languages, but the comments have less value once you've spent significant time with either language and used them to build large projects.

[1] https://gist.github.com/adamhjk/5a475b8dd45971a4e814#comment...


I am absolutely quite (completely) new to both languages.


Another aspect that makes me hopeful about Rust as a way forward for C++ programmers is that it is designed to play well with the C (if not C++) ecosystem.

Go is in this weird place where they insist that you have to be 100% Go or not at all, at least with the official Go toolchain (gccgo is different, but this divergence in the ecosystem doesn't exactly inspire confidence). This is convenient when you have small isolated processes, but completely prevents Go adoption in other environments.

I'm working on an electronic design automation software system which is basically a large number of C/C++ shared libraries/plugins loaded into a Tcl interpreter which you do not want to rewrite (lots of tricky and finely tuned optimization and design analysis algorithms).

Go is not a good fit in this environment because of its all-or-nothing attitude. Rust, on the other hand, is fairly easy to imagine as an addition to the toolset from a purely technological point of view (ignoring the social and management issues which add a huge amount of inertia). There is an unfortunate impedance mismatch between idiomatic C++ and idiomatic Rust, but communication is possible via a thin C interface.



>(2) large yet maintainable systems

I have never seen anyone suggest Go for large systems or seen any open source code that even comes close to enterprise system size. I would argue that Go is inadequate for large systems compared to the JVM languages. The absence of operational tooling, exceptions, declarative annotations, runtime management, etc. all makes it much harder to support and scale to large numbers of developers.

Go seems perfect for micro services, command line utilities and single purpose applications. Which is where it seems to have gained a lot of traction in companies to date.


>>(2) large yet maintainable systems


>I have never seen anyone suggest Go for large systems or seen any open source code that even comes close to enterprise system size.

I might be moving the goalposts a bit here, but I think Go's C heritage, and focus on message passing -- possibly coupled with something like protobuf or similar -- encourages breaking large systems into small services. So if you view a system as a "sum of functionality", I think one might still use Go for "large, yet maintainable systems".

Now, it is of course possible to write micro services in both C++ and Java -- but historically, at least in the Java world, it appears you end up throwing everything into a massive JBoss container, exposing yourself to thousands upon thousands of lines of code.

With go, you deploy (relatively) small binaries, and a service can live as 20 binaries on one box, or as 20 binaries (along with some load balancers like haproxy or what not) across 300 machines. Or something in between.

I'd argue that some of the more sane Java frameworks and projects also revolve around simplicity and separation of concerns -- typically leading to micro services. But a lot of people seem to end up working with large, poorly architected beasts. That's probably more of a culture thing than a language thing -- so I think people will make huge swaths of unmaintainable Go code as well...


> encourages breaking large systems into small services

That's as much of a curse as it is a blessing. To some extent, small services are handy for dev/ops type folks, as they can quickly see which specific part of an application is misbehaving with memory or cpu or diskspace, so they like it.

But small services mean that you lock down the interface between parts of the system by using another language to specify the communications protocol (e.g. protobuf, json, ...), and 2 different codebases have to understand it. And even if you manage to get the code to change, now you have the problem of migrating the running program.

In other words, the interface is now set in stone. Nobody will ever touch it again. This is exactly what you do not want to happen. Small services are the enemy of large, flexible programs.

Contrast this to Java/C# (and, somewhat less perfectly, C++) and their refactoring tools. What a difference. Changing an interface is something that is mainly done by computer code, not by a programmer, and all parts are modified and all problems identified.

There are points where this is not a problem, like a file system interface, or a socket interface, that sort of thing (and even there you may change your mind ...). Places where flexibility is not needed or wanted (I would argue, looking at linux file systems, that the POSIX API is not, in fact, a good API for quite a few file systems, but looking at the kernel I can see why this is not going to change. Of course, half the distributed file systems are user space libraries, partly for this reason). This is exactly the sort of thing C programmers deal with.


One of Go's strengths is how easy it is to refactor. Implicit interfaces mean that you can change a function to take an interface, and the caller who is passing in a concrete type doesn't have to get updated at all. (See the sketch after this comment.)

also, there's a relatively recent tool called gorename that does 100% type-safe renaming.

Plus there's been gofmt and gofix for forever which you can use to automatically rewrite your code.

Finally, because almost all go code is formatted with gofmt, you can often do simple find and replace changes because all the code is completely regular.
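A toy sketch of the implicit-interface point above (hypothetical names):

  package main

  import "fmt"

  type named interface {
      Name() string
  }

  // describe used to take a concrete widget; changing it to take an
  // interface needs no edits at the call sites, because widget satisfies
  // named implicitly -- there is no "implements" declaration to add.
  func describe(n named) {
      fmt.Println("name:", n.Name())
  }

  type widget struct{}

  func (widget) Name() string { return "widget" }

  func main() {
      describe(widget{})
  }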


> One of Go's strengths is how easy it is to refactor

Give me a minute to collect my jaw off the floor here. Nope ... need some more time. Unless you mean in the same way as C and Pascal are easy to refactor, I disagree in the strongest possible way. I may concede to a very limited extent. While Go and its tools don't allow refactoring, due to the static nature of Go it's actually possible, through careful design and constantly putting extra effort in, to make sure that it's reasonably easy to refactor. As long as you stay away from using interfaces, use long and unique enough names for your variables, make sure no variable names are substrings of other variable names, have a convention for polymorphic method names (i.e. Matrix.MakeWithFloatArray(), Matrix.MakeWithIntArray(), Matrix.MakeWithZeroes(), ...), ...

> Implicit interfaces mean that you can change a function to take an interface, and the caller who is passing in a concrete type doesn't have to get updated at all.

Yes because that's what refactoring is ... what you're showing here is called "polymorphism", and Go "doesn't support it" (except when it does, like as you point out here, in interfaces, oh and in range, make, new, append, close, copy, delete, imag, len, print, println, real, go, defer, most of which are also generic and polymorphic in really, really bad ways (some have completely unrelated and surprising behavior when passing different types to them), and I doubt I've got all of them).

> also, there's a relatively recent tool called gorename that does 100% type-safe renaming.

> Plus there's been gofmt and gofix for forever which you can use to automatically rewrite your code.

> Finally, because almost all go code is formatted with gofmt, you can often do simple find and replace changes because all the code is completely regular.

I have tried that tool. It only does a single file. Again, that makes it not refactoring. Just so we're clear, here's the definition of refactoring:

  Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
Which is not what those tools do. Change the name of a method ... boom 5 objects don't satisfy the interface they did 5 seconds ago anymore. Change the name of an interface ... doesn't change in all other parts of the code. Change an exported variable ... everything fails to compile.

Next major point of criticism of the Go tools. When do you want to do refactoring? Well, during development. Of course, in order to refactor during development, when 2-3 of your program's files don't compile, you obviously cannot use a normal compiler to refactor, since it won't understand the program. While this is not technically part of the definition, it frustrated me to no end the first, and last, time I used gofix to attempt to refactor something. Me and vim are faster at refactoring a 10000-line Go codebase than gofix + cleaning up after it is. Gofix knows a cute trick with symbol tables that is 1% complete (because making it functional will require a full rework of the Go compiler), which is not refactoring (since it doesn't look at the full source tree), and it will require a rework of Go itself (I'm not yet positive, but I think that because Go works with implicit interfaces, it is not actually possible to refactor anything related to object methods or interfaces correctly).


> make sure no variable names are substrings of other variable names

Did you miss the point about gorename's type safe renaming? It understands that pkgfoo.Bar.Baz() is different than pkgbat.Bar.Baz(). So you can safely tell it to rename one, and it won't touch the other. And yes, it'll rename everywhere that was referencing the old name and fix that, too. Now I'll grant you that older refactoring tools were mostly text matching, but the fact that the std lib ships with a go parser and AST library means that anyone can write their own code parsing and refactoring tools.. and people are.

> what you're showing here is called "polymorphism"

I don't really want to argue about what counts as refactoring... what counts as "external behavior" depends a lot on how you look at a problem. To your customers, the CLI may be your external behavior, for partners, it may be your API, for the developer in the next cube, it may be the exported variables on your package/class/whatever.

I'm not sure what you would count as qualities that make code easy to refactor. I like Go's implicit interfaces, nice tooling, and static typing to help make sure I'm not shooting myself in the foot. I honestly would like to hear what language you think is easy to refactor and why.


According to the announcement [1], gorename is able to rename just about any identifier (function, method, exported variable, local variable) throughout your entire GOPATH, not just a file. I tried it out and it seemed to work fine. Additionally, it seems to detect at least some cases where the rename would cause the resulting code to not work, although you can -force it to apply the changes anyway.

I happen to agree that Go could use more refactoring tools, but I think it's in good shape considering how young it is.

[1] https://groups.google.com/forum/#!topic/golang-nuts/96hGPXYf...


A big problem in large-scale systems is dependency management. One of the significant decisions that Go makes in its design is its approach to package management. Its approach isn't perfect, but is simple, making it a breath of fresh air compared to C++ or JVM-based languages... e.g. how many hours have been spent tweaking Maven or debugging discrepancies between classpaths referencing different copies of commons-logging?

Speaking of Maven - `go build` normalizes build systems so there is less time twiddling builds. It's not a silver bullet, but it's specifically intended for large systems.

Go's strong decisions around formatting (`go fmt`) are another example of planning for enterprise scale. There is no need for a style guide for Go programs - it's built into the language, and with an approach that feels less pedantic than Python.

Go isn't perfect, but it's certainly designed for large-scale systems.

I agree with you that the JVM languages have better large-scale system support currently (20 years does give you something) -- but Go's goal is not dissimilar to the one Java eventually settled on after the applet craze.

Contrary to what you've seen, Go seems (to me) to be a go-to language for new companies principally concerned with distributed systems / cloud infrastructure (particularly those that once might have hired a hybrid of C and scripting language devs). One of the trends in those communities is toward microservices / SOA / etc, but the total codebases managed are certainly enterprise-scale.

All that said I have absolutely no significant opinions on Go vs Rust :) Just had to take issue with the "go is not enterprise scale" idea.


> Go's strong decisions around formatting (`go fmt`) are another example of planning for enterprise scale. There is no need for a style guide for Go programs - it's built into the language, and with an approach that feels less pedantic than Python.

How is that different from the 40-year-old Unix utility indent? Pretty much every single non-trivial programming language has its own code formatter. Auto code formatting is neither novel nor unique to Go.


I didn't mean to imply Go invented automatic formatting :) It's unique among the popular languages that I know in that the formatter is built into the standard toolkit. It also goes well beyond indentation. I've never seen a discussion of the format of Go code outside the discussion of how `go fmt` should work - this I consider a benefit.


If you consider C++'s standard toolkit to be Clang, Clang comes with clang-format.


indent goes well beyond indentations too, despite the name :)


You're certainly correct, but if you compare this:

  http://linux.die.net/man/1/indent
with this:

  https://golang.org/cmd/gofmt/
even just by line count, you'll see the difference in philosophy.


Unfortunately, having a code formatter does not make up for basic deficiencies in the language itself when it comes to datatypes. Basic example:

http://stackoverflow.com/questions/19946992/sorting-a-map-of...

(the shortest way to sort an array of a custom datatype in Go is around 50 lines of code, and requires you to write a custom sorting class)


First example on that page shows a straightforward solution in under 10 lines of code.

Convert the map values to a slice and then use the stdlib package sort to organize the data.


You left out defining a sorting class, like in Turbo Pascal. So it's 17 lines of code (the wrong version provided on Stack Overflow) or 21 lines for the correct version:

  type dataSlice []*data

  // Len is part of sort.Interface.
  func (d dataSlice) Len() int {
      return len(d)
  }

  // Swap is part of sort.Interface.
  func (d dataSlice) Swap(i, j int) {
      d[i], d[j] = d[j], d[i]
  }

  // Less is part of sort.Interface. We use count as the value to sort by
  func (d dataSlice) Less(i, j int) bool {
      // WRONG: will crash if there's a nil in the list ...
      // return d[i].count < d[j].count

      // Correct version:
      if d[i] == nil {
          return true // note: true is a predeclared name, with no capital ... yet another inconsistency
      }
      if d[j] == nil {
          return false
      }
      return d[i].count < d[j].count
  }

  func main() {
      // create s of type []*data

      sort.Sort(s)
  }
Python version:

  # create list of data in variable s
  s.sort(key=lambda x: x.count)
C++ version:

  // Create std::vector<Data> in s
  sort(s.begin(), s.end(), [](const Data& d1, const Data& d2) { return d1.count < d2.count; });
(bonus for the C++ version: its behavior is defined and correct for nulls, just saying this because none of the other examples are, including the Go one)
Java version:

  // Create List<Data> in s
  Collections.sort(s, (Data d1, Data d2) -> Integer.compare(d1.count, d2.count));
Let's put it this way. If Java is 16 TIMES more concise than your language, you have a problem. A big problem.


Juju is ~300k lines of code, and that doesn't count 3rd-party libraries (that count is also about 6 months out of date, so it's probably significantly more than that now). There are many large Go projects out there - maybe you've heard of Docker or Kubernetes?

Saying it's not for large systems is just showing ignorance of what it's already being used for.


That leads you to wonder: if a programmer wanted to switch from C/C++ to another language but hasn't done so by now, why not?

Another reason to use C for your project is the ubiquity of C compilers and C bindings to other languages. Software like Lua and SQLite wouldn't be as widely-used if they were written in another language.


To me it's pretty clear that the language C++ programmers were waiting for is called C++14.


As a C++ programmer, I'm waiting for C++ZZ where ZZ is the version that has a sane module system. Something like Cargo isn't even on the radar for C++ developers for a lot of reasons, but many of them can be traced back to textual inclusion.


I absolutely agree! And I hope that a module system will help C++ in improving its compilation times too: templates slow gcc a lot.

Just an example: in the previous months I had to re-write a medium-sized project (~5000 lines of C++), and I chose to use Free Pascal instead of C++. The old C++ code took ~1 min to compile, while Free Pascal only requires ~1 sec! It is so fast that my makefile uses the "rebuild all" switch every time it calls "fpc". In this way, at each recompilation all the hints/warnings are printed again, and there is no risk that I miss any of them.

Another thing that C++14 lacks is a decent set of features in the standard library. How can it be possible that the STL includes esoteric features (see "partition", "is_permutation"...), but not a printf-like formatting facility à la "boost::format"? I cannot stand the usage of ios::setw et similia!


This is going to sound weird, and possibly ignorant, but PHP uses textual inclusion and we still have Composer/Packagist and PSR-4? I'm guessing the difference lies in 'spl_autoload_register'?


Isn't it C++17? Modules and concepts?


If you are considering Go, or just want a good laugh, read the mailing-list discussions of higher-order functions. Or, for that matter, generics.

Here is a gem: https://groups.google.com/forum/#!topic/golang-nuts/RKymTuSC...

There's a chance you'll laugh at the people dismissing higher order functions as nonsense, in which case Go might not be for you. This is a good test of whether you want to try it out or not.


Let me save the reader of this comment a long and unproductive session of reading tea leaves out of mailing list posts:

* Golang doesn't have generics. This comes up so often on the mailing list that it's in the FAQ:

http://golang.org/doc/faq#generics

* Golang has a similar attitude to idiomatic functional programming tools as Python; it doesn't have map() and it's not easy to write a general-purpose map.

(I'll take Golang's lack of generics and map over Python's busted closures, and so call it a draw).

If you want to write programs in functional style, you don't want to use Golang. Pretty simple.


Umm... Am I just interpreting you wrong? Python has map in the builtin namespace:

  >>> map(lambda x: x*5, range(5))
  [0, 5, 10, 15, 20]


Python obviously has those primitives (that's what I meant by the tradeoff between it and Golang) but Guido infamously discourages their use.


Hmm, sorry I didn't see this earlier, but I personally feel that Python supports a healthy mix of imperative, functional and object-oriented styles (especially with the (over?) use of the "operator" library I tend to see around nowadays...).

Certainly when I picked up a primarily functional language (Racket) after mainly having only Python and Java experience before, I didn't feel too disoriented or out of touch.


The parent is referring to the fact that Guido van Rossum (the Benevolent Dictator For Life of Python) is pretty down on functional programming in Python. You can read some of the history (from the horse's mouth) here: http://python-history.blogspot.com/2009/04/origins-of-python...

TL;DR - "I have never considered Python to be heavily influenced by functional languages, no matter what people say or think."

Python, IMO, has flutters of functional programming in it. But its broken closures, lack of uncrippled anonymous functions and lack of tail-call optimization are pretty damning strikes against calling Python's support for functional programming similar to its support for imperative or OO paradigms. It just isn't. Yeah, we get `map` and `filter`, big whoop. :-)


> But its broken closures

They were "unbroken" in Python 3, though because of the way scoping works in the language it requires marking variables (with `nonlocal`)


I'm aware. The presence of nonlocal and global still makes them broken to me, even if it's a result of how scoping works. Lua and Javascript both manage to have unbroken closures.


> Lua and Javascript both manage to have unbroken closures.

Because they use explicit local declaration (and implicitly declared variables are global in both)…


Yes, and I would say that requiring explicit local declarations is the right thing to do. Having a "default" variable scope is very error prone (typos are treated as new variables and closures don't work right), but at least with "global by default" you can use a linter to enforce that all your globals are explicitly declared. In Python it's impossible to do something similar.


> Yes, and I would say that requiring explicit local declarations is the right thing to do.

And I could hardly agree more with that, I simply disagree that Python's closures remain broken, they're simply fixed within the constraints set by previous design decisions.

> at least with "global by default" you can use a linter to enforce that all your globals are explicitly declared

Still, there's really no excuse for global as default; there's no convenience justification, and it could just as easily be an error (more easily, really)


> I simply disagree that Python's closures remain broken, they're simply fixed within the constraints set by previous design decisions.

The fact that I have to distinguish between `global`, `nonlocal` and `default` scope makes them broken. `nonlocal` is a hack.

Saying that it's a result of previous design decisions is a sound technical reason for why `nonlocal` is necessary to make closures work. But as an abstraction, at least, they are broken.

With that, I will concede that they are broken in two different ways between Python 2 and Python 3. This is IMO.


Guido discourages map because it should usually be expressed as a comprehension, no?


Let me clarify even more. If a language does not treat functions (procedures) as "first-class citizens" (like any other value, it could be another function's argument or return value, or a member of an aggregate), then you shouldn't use it for writing programs in functional style.

As long as your language has "everything is a pointer" semantics, you should be able to write your own map and filter in 2 or 3 lines of code.

   map(F, [H|T]) -> [F(H)|map(F, T)];
   map(_F, []) -> [].


Question.. do you do anything else in your life other than commenting on HN 24/7?


Yes, I also scrapbook.


Great response, btw.


What an infuriating thread. I guess I'm not the kind of person Go is made for, but the fact that everyone in the thread was so close-minded about what might be helpful about map, filter, and reduce was frankly absurd.

I especially enjoyed the demonstration of how map would look in Go code, complete with a useless anonymous function:

> bar = map(foo, func(e T) { return f(e) })

And this guy who doesn't seem to understand the concept of function pointers:

> You seem to equate the function f with a single character, which it's not. The user must write it's implementation, which you have omitted for your own benefit.


What you're forgetting is that adding one useful feature is a slippery slope towards adding many other useful features, and thereby no longer having a mediocre language.


First class functions is a language feature. I definitely understand omitting it from a language. But go has that already. The issue is that without good higher order functions in the core libs, it's quite a waste!


Wow that makes me cringe. This specifically:

> The power of the map/filter abstraction starts becoming more apparent when you do things like

> Filter(someFunc, Map(funcWith3args, vecA, vecB, vecC))

> And you realize you should swap map and filter for your particular app for better performance.

met with this reply:

> vs [10 lines of code for two for loops doing the same thing]

> Using range makes it obvious that there is a performance hit in the first place.


In cases of needing more performance, you only swap map and filter for a loop in languages that don't optimize it.


There are a lot of bad things in Go, sure. And some are really annoying, like the "goroutine all the things" mantra. But no matter how much I dislike it, there is just no other choice today. It all comes down to support, bug fixing, ease of learning and use, a good standard library, built-in (and pretty fast) cross-compilation, static binaries, good-enough dependency management, reasonable performance and memory usage (especially in comparison to Python/Perl/Ruby), integrated unit testing, a code formatting tool, etc. All these things together matter more than the language itself.


Except for "static binaries" (which isn't clear what benefits you desire out of them), java has all those things.

The other choices are certainly there.


Except for "reasonable memory usage".


And even Java is getting that.


Too late. Also Java is not as easy to learn and to use as Golang.


Too bad you can't unlearn functional programming. Or generic programming. Or algebraic datatypes. Or having actual useful datatypes. Or ...

Just the basic thing of sorting a list containing a struct. And the suggested solution by the Go team is a 50 line program. WTF.

They actually champion `err = function(); if err != nil {}`. This is the reason I originally left C, and those criticisms apply to Go as well. Firstly, nobody sanely checks those errors, often outright ignoring them. Secondly, when they do check, they either just pass the error up (in other words: manually implementing exceptions), or worse, they panic on the error (meaning you cannot trust external libraries not to panic on you, negating the single advantage that leaving out exceptions had). Even when you do check those errors, they interrupt your train of thought. Imagine someone telling you how to open a door in Go: first, put your hand on the handle. If you don't have a hand, go to the hospital. If there is no hospital near you, go to the car. If there is no car, check for a bike. If the door is locked ... wait, what was I doing?

Thirdly, for functions that might return a value or might not ... the official Go team's advice is to use a pointer. Sigh. The number of times I've seen people commit Go code into my program that doesn't check for nil-ness ... Grrrr.

If you don't mind living in the dark ages, then yes, Go is great.


F# has that. With Mono you can produce static binaries. And the F# compiler can also make statically linked assemblies, including the bits it needs from all its dependencies.

Although I'm not sure if there is an F# code formatter tool. Which is made up for by F# being a vastly more capable language.


Yes, some code formatting tools exist (within Visual Studio as part of F# Power Tools, and also I believe standalone in some binary form). However, as a community we're currently a little less dogmatic than Go (for example) about what "correct" formatting is.

Tabs vs. Spaces is decided at the language level (it's spaces or it doesn't work) but other things are often a matter of taste. It doesn't tend to matter too much though because (well written) F# code is expressive almost regardless of minor formatting issues (poorly written code in any language is a comprehensibility mess, regardless of "neatness" or newline consistency!)


There are a lot of comments in that thread, is there anything specific you think stands out? Some of Ian's initial comments were saying that they are being very very careful about what gets added to the language.

Also, at this point the language syntax itself is basically locked. And from everything I've seen, they are focusing more on the runtime and tooling before they make any serious changes to the code language.


I'm thinking of the crowd that insists "map" is never more useful or readable than a straight for-loop. So not only do they not want it in Go (which could be understandable in some situations), they genuinely seem to think it has no place in an imperative language. That blows my mind.


I've written in both paradigms and I mostly agree with the Go team. That it blows your mind blows mine, and I suppose it's good that we have these choices so we don't have to agree.

Could you show me a code snippet of maps and filters used that you believe is more readable than for loops? Maybe I'll get a laugh out of that. :)


A for loop is a general tool. It can be used for anything. Because of that, it conveys little or no information. As a reader I have to read the loop in detail to see that what it does is perform a mapping, and not something slightly different (the last item could be skipped, etc etc).

The expressiveness of higher order functions comes not from terseness (only) but by expressing intent more clearly.

E.g. this expresses (in pseudocode) a conversion followed by a filter.

> odd_ints = strings.map(parseInt).select(odd)

This was expressed in the forum, but there was no agreement that this was readable, so I fully expect you to also think the for loop to be more readable. I suspect there is a divide between those who prefer reading how something is done rather than what is intended, in order to understand it.


Ok, that's a good example. Your one-liner is pretty concise and the intent is clear.

I think on a higher level with complex code, you definitely want better abstractions to convey intent, rather than say just having one large function that does everything.

I think in many cases map/filter does in fact help with readability of intent, especially if you're not declaring the function body inline, and the functions are commonly available ones like parseInt. But generally this isn't the case, which diminishes the readability such that it's comparable to a for-loop anyways.

I spend most of my time trying to figure out exactly that -- the higher-level abstractions that aren't easily conveyed by "map" or "filter" -- so having generic map/filter functions isn't high on my priority list. I just want a simple language in which I can spew out thoughts as not-yet-compilable code, tweak it often as I see fit, fix the type errors later, and once I'm done with that it will probably run fine with few bugs. The nice thing about for-loops is that they expose more control points -- e.g. breaking out early, using the index, inserting log lines -- and that flexibility helps me mutate the code quickly.

So maybe it's more about coding style, rather than readability. Do you like to spec out your code completely before writing things down, or do you prefer to define the spec as you write and edit the code because it helps you get things done faster?


Another data point for you (I'm not the grandparent poster):

I prefer to write exploratory code and spec out the design as I go along, and I also prefer map/filter (or list comprehensions) to for-loops.

I suspect it does have to do with coding style, but not in the way you suspect. I like to write very small functions - often one-liners, rarely more than a page - and have each function do one thing and one thing only. I also tend to code mostly bottom-up, figuring out what abstractions I need, writing them, and then writing the functions that use them. So I almost never use an inline lambda for a map, it's usually a function I've already defined.

All of the exploratory scenarios you list are handled by built-in functions in my language of choice (Python). Breaking out early = itertools.takewhile(). Using the index = enumerate(). Inserting the log lines, I'd just insert them in the mapper (although there's also trace).


Is there a reason you need some built-in map thing and not just user-defined functions? If a loop is too hard to read inline, you slap it in a nicely named function and now it's more clear and more concise. Either way, you have to write the code to do the conversion. select(odd) doesn't work unless you've already written the code behind whatever "odd" is, for example.

I can write go code that makes this line legal:

    odd_ints := parseInts(myStrings).select(odd)
but I'd probably write code so it looked like this:

    odd_ints := odds(parseInts(myStrings))
Is either of these harder to read? Does it matter that parseInts is a "map" and odds is a "filter"? Their function is obvious by their names. If anything, the words "map" and "filter" are extraneous.


They are so pervasive/important that they should be included in the core library. Otherwise I'd be including my own collection_utils every single time.

That said, having a library function of course requires it to work in a type-safe way for all collections/functions, which might make this argument really be one about generics and not about two collection functions. If map/filter are omitted because generics are, I think it's just another argument for why omitting generics is making the language simple to the point of being stupid.


Instead of map/filter:

    // using some fake filter/map/lambda syntax
    names := machines.filter(|m| strings.HasPrefix(m.tag, "ec2")).map(|m| m.name)
why not just write

    names := []string{}
    for _, m := range machines {
        if strings.HasPrefix(m.tag, "ec2") {
            names = append(names, m.name)
        }
    }
What happens when you read that first implementation? Don't you read it as "for each machine, if its tag has the prefix "ec2", then append its name to the list that is returned? Isn't that the exact same thing you'd read the second one as? Except that the second one requires no special knowledge other than loops and if statements.


    let descending_squares = range(0u, 5u).rev().map(|x| x * x).collect();
versus:

    let mut descending_squares = Vec::new();
    let mut i = 4u;
    loop {
        descending_squares.push(i * i);
        if i == 0 {
            break
        }
        i -= 1
    }
(Note that if you try to use a for loop here you will infinite loop due to unsigned underflow.)


Well this is how I'd write it in Go:

  descending_squares := []uint{}
  for x := uint(4); 0 <= x; x-- {
    descending_squares = append(descending_squares, x*x)
  }


Use unsigned integers for your index and values. That's what makes it hard to use a for loop. (Sure, you could cast a signed integer loop index to unsigned inside the loop to avoid the underflow problem in this specific case, but I'd argue that the functional style is so much clearer than code that has to work around unsigned underflow gotchas.)


> functional style is so much clearer

There is something that makes me uncomfortable in there:

> let descending_squares = range(0u, 5u).rev().map(|x| x * x).collect();

The overhead. I wonder about it. I am unable to get a sense of what it is. With a simple for-loop, it's rather easy to see it, but with the version above I have no idea. So "much clearer" is not what I see with the piece of code above. I see what it does, but what is not so clear is what code will be generated, something which matters when trying to write efficient code.


There is not much overhead with Rust iterators at all. They are essentially concrete versions of the deforestation that Haskell can do to avoid constructing intermediate lists, and compile to code that is close to the equivalent C (the functionality is all statically dispatched, so the compiler can inline and optimise the calls). This is a case of having experience with and trusting one's tools.

I wrote a blog post a while ago that used iterators heavily http://huonw.github.io/blog/2014/06/comparing-knn-in-rust/ , as you can see the performance is good.


I am definitely going to have to look more into Rust.


Well then, good news! Such usages of iterators in Rust almost always result in the same code that would be generated by old-fashioned, hand-written loops. The 'trick' is that the methods (and the closure!) are inlined, and generics are specialized at compile time.


All I did was take your for-loop code and translate it into idiomatic Go code. My point is the for-loop in Go isn't as terrible as the loop example you wrote.


Your version infinite loops: http://play.golang.org/p/uglbTETE6d

See why for loops are tricky? :)


Doh! I fell straight in :)

Overflows in general are tricky. How does Rust deal with them?


We have special checked types you can use if you want checked arithmetic. The default is to not check, because CPUs currently make it expensive to check (although I would love it if that could change--we need hardware support though).


But CPUs have the overflow flag, which is actually set if the operation overflows?

You can't write that check explicitly in C, but you can in assembler. It's available now, across different hardware.


As you say, detecting the overflow is easy, but efficiently handling it is not. It adds a branch to every single arithmetic operation, and it makes it much harder for the compiler to optimise things e.g. it is hard to vectorise a loop summing an array, if every + has a conditional branch on the overflow flag.

(Also, I believe it introduces a lot of data dependencies, getting in the way of the out-of-order execution of modern CPUs.)


To handle the overflow in the places where you want to handle it, a modern processor doesn't have any problem with an additional jump instruction. Also, modern compilers could optimize the checks away where they aren't used. In effect, implementing overflow checks definitely won't turn your C-speed (1s) code into Python-speed (40s) code. I estimate it wouldn't even be two times slower in most of the use cases. It's certainly not a problem with the CPUs.

It can be a problem for certain compilers if they don't have the infrastructure to reason about overflow flags, though. But it's not a hardware problem.


> I estimate it wouldn't be even two times slower in most of the use cases.

In his "We Need Hardware Traps for Integer Overflow"[0], Regher quotes 5% to 100% overhead for languages such as JS or Racket, and that a "highly tuned" checker would likely be in the 5% range. Playing with arithmetics-heavy programs and Rust's checked_* (which are backed by LLVM's overflow intrinsics[1]) I got anywhere from 5 to 40% performance loss IIRC.

That's not a lot, but at the same time when you're competing with languages specifically not paying those 5%, a 5% hit on all computations is not going to get you much love.

Which is why Rust currently lets you do that (via num::Checked* and num::Saturating) but uses overflowing default semantics.

[0] http://blog.regehr.org/archives/1154

[1] http://llvm.org/docs/LangRef.html#arithmetic-with-overflow-i...


OTOH I must note that Swift, on the other hand, has opted to error on overflow and have a second set of overflowing operators: https://developer.apple.com/library/mac/documentation/swift/...


And that's the best approach I can imagine: the programmer should be able to control whether he wants the 5% penalty or the speed. It's a win-win.


> To handle the overflow on the places where you want to handle them, modern processor doesn't have any problem with an additional jump instruction

This isn't just a jump, it is a branch. Especially when the body of the loop is 6 instructions, adding an extra branch is going to be noticeable.

> Also, modern compilers could optimize the checks away if they aren't used

This is equivalent to the halting problem, and most code will not be able to have the checks optimised away. Suggesting otherwise is invoking the "sufficiently smart compiler", which is invalid.

In any case, you haven't addressed the problem of missed optimisations (especially vectorisation) caused by having to maintain semantics.

> It's certainly not the problem of the CPU's.

Yes, it partly is: the data dependencies and linearisation caused by checking the CPU flags is bad.


Nice. :)


Just use int and it works fine: http://play.golang.org/p/nZcd3M5aL_

To me golang is readable, while even small Rust examples don't feel quite right.


> Just use int and it

is a completely different piece of code.


I love when arguments are decided by/proven by runnable code examples.


Or if Go and Rust had a FOR loop guaranteed to stop at 0, as with a Wirthian language, no?


Aside from your specific point (which I liked), isn't there a kind of tension here in a systems language?

For loops are almost always going to explicitly show their allocations whereas the functional version has allocations that are not nearly as obvious.


None of the iterators in the Rust core library allocate. (In fact, they can't, since the Rust core library has no allocation—it's the building block for liballoc.)


You need a collect on that first example: you end up with a Vec<uint> in the second, but an Iterator<something> in the first.


Oops, you're right. Fixed.


And that's why I like go.

Sorry, that's not really fair. I'm sure the compiler would have caught that when you tried to feed it into something that wanted a vector (or would have not cared if you were just iterating over it).

To be honest, I think Rust has a lot going for it. I just think Go has a lot going for it too... they're just different things. Which is fine, because if everything were the same, the world would be a really boring place. :)


Are you suggesting the Rust compiler didn't catch that?


No? I don't actually know rust, but I don't think the first thing he typed was invalid code. It just returned an iterator instead of a vector. I was suggesting that when he tried to pass the variable into a function that expected a vector, the compiler would complain. No snark, I just meant what I said.


To be fair, the line "And that's why I like go" seems to imply that Go is able to keep you from making errors in untested context-free code snippets. Perhaps you meant "and that's why I like that Go doesn't support operations like `map`"?


Well, yes, actually, Go does help prevent errors in untested context-free code snippets... by being really really simple, and not trying to mash a ton of logic into a single line for no reason.

The amount of language trickery is at a minimum with Go, so writing something out in plaintext often just works. It's hard to screw up for loops and if statements for people who have been programming for any significant period of time.

It's not that I like that Go doesn't have map, per se. I just don't miss it. At all. And the fact that it's not in the language means I don't have to read someone else's use of it and try to make sure they're not screwing it up somehow. A wise man once said "It's not that I don't want generics, I just don't want you to have generics."


  > Go does help prevent errors in untested context-free code 
  > snippets... by being really really simple, and not trying 
  > to mash a ton of logic into a single line for no reason.
But there is an example in this very thread of a manually-implemented map-via-a-for-loop from a well-meaning Go user that accidentally underflows into an infinite loop.

This isn't an attack on Go (which I respect as a language for having the temerity to be opinionated, a facet that more languages need to emulate), merely bafflement at your claim that implementing everything anew via bespoke loops is somehow effective at reducing errors.


Yes, and that was a carefully constructed example explicitly to show problems with loops... But underflows in loops are just not a common occurrence, which is why the example works in the first place.


> Yes and that was a carefully constructed example explicitly to show problems with loops

No, it was an example from my experience and a bug I have had to fix multiple times, as I mentioned. I didn't make it up.


Sorry, yes you mentioned that. Anyway, this is a stupid thing to argue about and I apologize for my part in it.


Likewise, forgetting a `.collect()` call in Rust code doesn't happen, because the compiler will tell you without fail when it's given an iterator but expects an array.


> Well, yes, actually, Go does help prevent errors in untested context-free code snippets... by being really really simple, and not trying to mash a ton of logic into a single line for no reason.

Sorry, I don't see the benefit of "preventing errors in code that's never actually executed" as a design goal.

> The amount of language trickery is at a minimum with Go, so writing something out in plaintext often just works. It's hard to screw up for loops and if statements for people who have been programming for any significant period of time.

I just demonstrated a counterexample in this thread.


That's the only place in C where I use the postfix decrement operator.

    for (u = 5; u-- > 0; ) {


Go is not the language for you if you care about mashing as much logic into a single line as you can.

Also, I can count the number of times in my 15 years of professional development experience that I've wanted to count down to zero using an unsigned int as the index... never. Which is not to say that the declarative isn't nice, just saying that the example of the infinite loop is not exactly compelling.


> Go is not the language for you if you care about mashing as much logic into a single line as you can.

The point here is that iterators can reduce bugs, independent of aesthetic concerns.

> Also, I can count the number of times in my 15 years of professional development experience that I've wanted to count down to zero using an unsigned int as the index... never.

I like to give that example because it was something I actually hit and a bug that I actually had to fix.

In fact I just hit that again yesterday when iterating over a list in reverse order (painting order for CSS box-shadow).


  > The point here is that iterators can reduce 
  > bugs, independent of aesthetic concerns.
Iterators can reduce bugs, really? This is a very flawed point. Bugs are not caused by lack of language features, they are caused by people. And people make mistakes independent of language features, and sometimes because of language features causing cognitive overload or requiring them to make assumptions.


> And people make mistakes independent of language features

Unless language features prevent the existence of these bugs. Javascript will blindly let you concatenate a number and a string, Go will not. Therefore you can't make the mistake of unknowingly concatenating a number and a string in Go. Thanks to a language feature.

Iterators prevent indexing mistakes (off-by-one errors, index overflow or underflow, wrong-variable use), therefore iterators can indeed reduce bugs.


  > Therefore you can't make the mistake of unknowingly
  > concatenating a number and a string in Go.
Look at this another way: if you need to do that, you now have to think about it and explicitly convert a number into a string. But while you are thinking about it you can make the mistake of concatenating a number with the wrong string or make some other screw-up, because your thinking power is now reduced.


> If you need to do that you now have to think about it and explicitly convert a number into a string.

You have to think about it either way, the feature precludes forgetting about it.

> you can make mistake of concatenating a number with a wrong string or make some other screw up

Which you can make in both cases.

> because your thinking power is now reduced.

Your thinking power is not reduced, it's increased: you don't have to wonder whether you should convert something to a string or it already is one, the compiler will tell you, so you can focus better on the actual work at hand.


No, if conversion is implicit you don't have to think about which function to call, where to find it and which library to include. Quite significant cognitive overhead, I'd say.

Having separate concatenation operator with implicit conversion could reduce that overhead:

  a := "foo" ~ "bar" ~ 123
Instead of:

  import "strconv"
  
  a := "foo" + "bar" + strconv.Itoa(123)
But this is not how Golang guys make decisions. And I'm fine with that nowadays as long as they don't claim to be right in that regard.


The interesting code for this example is:

  c := a + b
There's no indication from that line of code what a and b are, a type system that doesn't do implicit conversions will tell you "error: adding string and int" and so the programmer can address the problem (e.g. maybe they meant to parse an integer from the string `a`, maybe they meant to format the integer `b` into a string).

Using literals is not a useful comparison because there is no confusion about types in that case.


My bad, I thought it was obvious from having a separate operator for concatenation that the '+' operator also has only one intent. So in your case it will parse an integer from the string operand.


The way I see it is that you can reduce bugs by increasing the level of abstraction, which in this case means to use things that are more straightforward and less flexible: the ultimate looping-construct is arguably the while-loop, by using a counter to index into the array (if you're working with an array, anyway). But with that flexibility comes more room for error: maybe your increment is wrong, maybe your while-condition is wrong, maybe you inadvertently change the index inside the loop without meaning to, etc. A step over that is to explicitly just say that you want to iterate over each element in the array. You don't get to choose how, but you probably wanted to iterate over it from start to finish anyway. The highest level is a function that encapsulates exactly what you want without having to bother with explicitly iterating over the collection yourself. Now you can't even mess up how you use each member of the array, because that is already handled by the function.

A higher level of abstraction means less flexibility, which means fewer potential things that you can screw up. I don't see how that is a flawed point.


Well, the way I see it has nothing to do with levels of abstraction or flexibility, but everything to do with reducing the amount of brain power required to understand the program. Sometimes abstractions do help; other times they add unnecessary complexity, reduce people's capacity to understand the rest of the code, and therefore cause mistakes in it. This is all about psychology.

Sorry, if I was rude, I'm just tired of pseudo-scientific language designs.


It's a question of which you prefer: declarative or imperative. Having that in mind, and knowing that "readability" is a tricky concept, here is an example in LiveScript:

    # utility, normally wouldn't be here
    map2 = (f, xs, ys) ->
        map (apply f, _), (zip xs, ys)

    startsWith = (prefix, str) ->
        map2 (==), prefix, str |> and-list

    endsWith = (suffix, str) ->
        revStr = unchars << reverse << chars
        startsWith (revStr suffix), (revStr str)
"<<" is function composition, "|>" is a "pipe" or "reverse application order operator", "_" is partial application. "map", "zip" and "apply" functions are standard and have expected semantics, "chars" transforms a string into an array of chars and "unchars" does the opposite (and they all come from prelude-ls library).

This code is both shorter and easier to read (to me, at least) than equivalent for loops and it also composes better. The details of iteration are irrelevant here and so are abstracted, which means they can be trivially changed.

Of course, for "map" to be this useful it needs to be supported by many functionally oriented language constructs. It's also not the best example for anything, it's just a piece of code I wrote in a style I like, in a language I use.


Semantically, loops impose order, maps do not.

A map can be auto-parallelised.

A loop cannot.

It seems to me that for a language aimed at exploiting concurrent features, easy wins for parallelisation would be a feature.


There's really nothing about a typical map operation that makes it any more parallelize-able than a for-loop, in the presence of closures.


The article is about Rust. In Rust, the type of the closure indicates whether it can mutate externally visible data (and therefore race on it).


Having just started reading the Rust guides, I'm curious: can the closure type actually ensure it doesn't mutate any externally visible data, or does that only apply to memory? Can I declare that a closure doesn't (or shouldn't be allowed to) write to disk or hit a database?


Gotcha! So will Rust then auto-parallelize these "pure" closures?


We'd like to have generic APIs in the future that will allow idiomatic automatic data parallelization. Niko Matsakis has been thinking about this for quite a while. Stay tuned :)

(Note that Servo has been using this type system feature for a while now to prevent data races in our massively parallel CSS layout code.)


> Niko Matsakis has been thinking about this for quite a while. Stay tuned :)

Arrg, you have me excited now!


I wouldn't put my hopes very high on this. They tried doing this kind of ubiquitous automatic parallelization in Haskell, but they ran into parallelization overhead issues because it's very hard to have the computer figure out the correct parallelization granularity all by itself.


Amdahl's law says auto-parallelising a map operation is usually a big waste of time.


Who is Amdahl, why should I care, and does he expand on this argument or is it your interpretation of a more generalized law?

Going to search for it, but I'd still like to hear your response.


The application in this case is that the mapped operation generally has a fairly low cost, and the (sequential) cost of dispatching and resynchronising back into a result is going to dwarf any gain you'd get, unless the collection is huge (and the parallelization is coarsely chunked) and/or the mapped operation is extremely expensive.

Same reason why even though mergesort is fairly trivially parallelizable there's basically no stdlib running parallel mergesorts by default: you need huge collections before you recoup the synchronization overhead.


Your collections don't have to be stupefyingly huge for a parallel mergesort to be faster. It's bad for a standard library to auto-parallelize because that's an unwelcome side effect. If you're writing some program where you actually care about the performance speedup, in most cases having a sort function spawn or use threads behind your back, in a way that your system can't control, is completely unwelcome.


Have you rung up the GPU manufacturers to tell them that they're wasting everyone's time?


The synchronisation/set-up overheads are still fairly large (e.g. communicating with the GPU). I find it rather unlikely that e.g. a map over 20 elements will be faster in parallel.


It might not be. But I am happy to ignore the question and leave it to the compiler or runtime to take advantage of easy parallelism. Maps allow that, loops (or any other sequential treatment of data) doesn't.

My personal brand of bigotry is relational databases, so I am accustomed to thinking of sorted / ordered behaviour as the special case. Thinking in sets is very powerful.

It doesn't have to be either/or. Sometimes you need a loop. But it would be nice to have mapping too.


Let's take the example interview question: given a string, how do you determine if it is an anagram of a palindrome? The answer is that at most one character may appear in the string an odd number of times. Here is an implementation in Scala, with higher-order functions:

    def isAnagramOfPalindrome(s: String): Boolean =
        s.groupBy { c => c }
            .map { case (_, value) => value.size }
            .filter { n => n % 2 != 0 }
            .size <= 1
And here's a more traditional implementation in JavaScript.

    function isAnagramOfPalindrome(str) {
      var chars = {};
      for (var i = 0; i < str.length; i++) {
        var char = str.charAt(i);
        chars[char] = (chars[char] || 0) + 1;
      }
      var numOdd = 0;
      for (var key in chars) {
        if (chars[key] % 2 != 0) numOdd++;
      }
      return numOdd <= 1;
    }
I find the Scala version much more readable, but I'm assuming you'll prefer the JS version?


In Go:

  func isAnagramOfPalindrome(str string) bool {

    charCounts := map[rune]int{}
    for _, c := range str {
      charCounts[c]++
    }

    numOdd := 0
    for _, count := range charCounts {
      if count%2 == 1 {
        numOdd++
      }
    }
    return numOdd <= 1
  }
In this case I think I do prefer the latter. I like how I can name the intermediate objects. :)

Also I don't like the practice of groupBy().map(=>_.size()), which hides the performance penalty of creating arrays. I'm sure a better compiler can do better, but I'd have to know the compiler to assume that.


I'll bite. In Python:

    def odd_count(letters):
        return lambda ch: letters.count(ch) % 2

    def anagram_of_palindrome(letters):
        return sum(map(odd_count(letters), set(letters))) <= 1
How's that for some higher-order function action?

But really, unless you are deriving programs by doing algebra you are missing the point of things like map() and reduce(). (As far as I know no one is actually doing Functional Programming the way Backus described. Am I wrong? I'd love to be wrong.)

Go read Backus' Turing Award paper: http://web.stanford.edu/class/cs242/readings/backus.pdf


The Haskell programmer in me can't resist---eta reduce your functions! :P

    from functools import partial
    
    def odd_count(letters, ch):
        return letters.count(ch) % 2
    
    def anagram_of_palindrome(letters):
        return sum(map(partial(odd_count, letters), set(letters))) <= 1


A more modern take would use collections.Counter(letters) to extract the letter counts, that avoids traversing the string for each letter count:

    def anagram_of_palindrome(letters):
        return sum(c % 2 for c in Counter(letters).itervalues()) <= 1


The discussion here has been good, and the thread is already old, but I can't help myself!

Maps, filters, folds provide a vocabulary for manipulating entire collections. They allow you to write code for transforming the individual items of a collection without mixing it with code that deals with the shape of your collection. Maps, folds, and filters exist for trees and all kinds of other structures with interesting shapes that are more challenging to traverse and reason about than a simple array or list. Languages with first class support for these operations provide a common vocabulary for processing collections of any shape. This alone is a good enough reason for me to prefer such languages, even without bringing readability into the mix.

I also think that it is usually possible to write code with maps and filters that is at least as readable as its for loop equivalent, especially with the comprehension sugar commonly found in many functional languages.


I also agree with the Go team. Although I am a big fan of functional language constructs like map & filter, adding them to Go does not feel right. One of the great things about Go is that it's simple but does not hide much from you. You still know exactly what is going on (concurrency & channels being an exception). That's why I find it easy to read Go code.


The go community reminds me of the java community. Blind faith in the design decisions of the language. Any feature it doesn't have is passionately defended as a good decision because the clumsy old way is subjectively clearer, up until the day it gets added.


I found that attitude much more prevalent in the .Net community: for just about anything added since C# 2.0 (I didn't follow discussions about generics after the 1.0 release though I expect it also happened then), the idea was essentially dismissed as pointless ivory-tower wankery useless to developers in the Real World right until MS announced it for the next version, at which point it became an Obviously Great idea and a good way to trash-talk java.


That sort of behaviour is in no way unique to Go or Java, and I think it's unfair to judge a language or community by some posts from an online newsgroup - most people are too busy to post or lurk on lists like golang-nuts.


Mapping and filtering are just functions and/or loops. You can write those in Go. Making some generic thing to save you 3 lines of (really simple) code is not an incredibly useful use of the Go authors' time.


I don't get the sense that they think it has no place in an imperative language. Like the OP calls out, Go is a very small language with few constructs. For a lot of tasks there is simply only one way to do things. Some would consider this a strength, others a weakness (kind of like Python vs. Perl).


I think Ian (who works on Go) summed it up nicely. To paraphrase: yes, map/reduce is useful, but it's another built-in generic, so it adds a lot of complexity and it's not worth the trade-off.

This rounds back into Go's generics debate. If they give in to implementing map/reduce, they might as well start implementing some form of generics.

I think Go is for people who are in line with the golang crew's views on language design. For the rest, use any other strongly typed language that has some form of type abstraction/generics (which pretty much all of them do).


A thousand times this. If somebody were to go through the process of actually trying to implement a generic map function in current Go, they would get a good sense of just what the issues are. Unfortunately, you need a good grasp of the type system, and this just isn't something most of the commenters on this subject have; the Go community is much more qualified. I've done this: generics are the first roadblock, but after resigning yourself to working around that, you run into a lot of Go implementation details around interface{} and type assertions, so that what you end up with is actually quite verbose in practice and no more desirable than the alternative. I decided that, for me, for loops aren't so bad.

So it does just come back around to generics, but people are solving this with code generation -- like gen, which looks very interesting -- and I imagine that is quite acceptable for the time being, for those who really want it.


It's useful to know Ian's background too; he's the author of the gold linker (http://en.wikipedia.org/wiki/Gold_(linker)), so he would be intimate with the sort of problems that generics and map and reduce would introduce to the compilation pipeline.

To a language user the benefits might be clear but the problems not.

To a language implementor the problems will be much clearer.


I liked Rob Pike's driveby comment - programmers these days...


... have come to expect some advances in programming languages from the last 50 years of research?


Programming Language Research is Irrelevant, or at least it is in Pike's world.


Not a bad write-up. The Rust code snippets can be slimmed down very slightly though. Here's main():

    fn main() {
        let args = os::args();
        let washed_args = args.iter().map(|arg| arg.as_slice()).collect::<Vec<_>>();
        match washed_args.as_slice() {
            [_, "review", opts..] => review(opts),
            _ => usage()
        }
    }
although I might actually suggest the alternative approach:

    fn main() {
        let mut args = os::args().into_iter();
        args.next(); // skip program name
        match args.next().map(|s| s.as_slice()) {
            Some("review") => review(args.collect::<Vec<_>>()),
            _ => usage()
        }
    }
(this approach requires `review()` to take a `Vec<String>` instead of a `&[&str]`, but that's not a difficult change, and we could fix it using a second line of code if we wanted but at the cost of introducing a new allocation, like the original code does)

For review() I'd suggest changing the original code:

    let cwd = os::getcwd();
    let have_dot_git = have_dot_git(cwd.clone());

    let dot_git_dir: &Path = match have_dot_git.as_ref() {
        Some(path) => path,
        None => { panic!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()) },
    };
to the following:

    let cwd = os::getcwd();
    let dot_git_dir = have_dot_git(&cwd).expect(format!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()).as_slice());
This actually leaves `dot_git_dir` as a `Path` instead of a `&Path`, but I think that's better anyway. It also requires `have_dot_git()` to take a `&Path` instead of (what I assume it takes now) a `Path`, which is an appropriate change as there's no need to clone the path.


It's also worth pointing out that if you enable the `slicing_syntax` feature gate then all the `.as_slice()` calls can turn into the suffix operator `[]`.


Your expect version changes behaviour: it does the allocation and string formatting unconditionally even if `have_dot_git(&cwd)` is `Some`. It is probably not a problem for this since the other operations are significantly more expensive, but it can be a gotcha if used inside a loop.


You're right, that's what I get for doing this fast. In that case I'd say

  let dot_git_dir = have_dot_git(&cwd).unwrap_or_else(|| panic!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()));


I really like the idea of testing a language by writing a small command-line utility with it, even one that -- as the author mentioned -- already exists.

Way back when, when I first was learning C, I didn't comprehend it very well. I was OK with it. A friend gave me some CDs with FreeBSD on it, one with the OS, and one with program sources. It was that source code which really opened my eyes, and you could digest small programs (like chmod) and see, you know, this is working, production code, and it's not hard, and you can do this.


I was worried when I saw "I decided to write a little Rust and, because everyone in my world seems swoony over it, Go."

That's a pretty bad reason for using a language and usually leads to some pretty ridiculous criticism.

This post was not that, I think; they nailed a lot of the good and bad things about Go. In fact, they could have been a lot more harsh. There is a depth lacking in just checking out a language in this way, though, as there's no evaluation of some of the larger reasons Go exists, like concurrency and fast compile times. Then again, most people probably don't even need/care about these.


Hmm. Learning a language/framework that is exploding in popularity is probably one of the best things a dev can do to stay relevant (read: employed).

Hell, very few of us would be using Javascript if it weren't for its ubiquity/community/popularity. I sure as hell am not using it because it's a well-designed language.


Quite the contrary, learning a language because it is exploding in popularity is a nice way to ensure that you'll end up being abused, poorly paid and irrelevant. You can't possibly find a worse reason for learning a new language.

On Javascript you're missing the causality. Some people are learning Javascript because it is popular, of course, because they've read on some forum that learning popular things gets them hired, but those people are totally uninteresting next to the pool of developers who learned Javascript because they had shit to do and things to build, and Javascript was the answer.

There's a big difference between learning X because you want to make your job easier, because you want to build things in a new way, because you want exposure to new ways of thinking, because there isn't an alternative to what you're trying to do, etc., and learning X because it's fashionable. In the former case, it's pretty much a gamble, but you might actually get something out of it. In the latter case, all you're earning is the ability to slap another keyword on a resume that only mediocre HR people care about -- actually a waste of time, as it keeps you from learning things that might actually be of interest for what you're trying to accomplish (other than getting a shitty job, of course).

Just as people that learned PHP or Java back in the day found out, learning for the sake of getting a job is a sure way to land a shitty and boring job in which you are a replaceable cog. Managers love popular languages and frameworks, of course they do. And the interesting jobs that are out there, that make people happy and that pay well - well, let me tell you, those companies aren't looking for keywords on your resume, but for things that you've actually built.

And some years from now, when you're over 40, you'll find yourself either a manager that does LinkedIn searches for keywords, or an obsolete developer that can't find a job because everything you've learned is obsolete and you wasted your time learning syntax, instead of learning new abstractions, algorithms and mathematics - you know, stuff that never gets old.


> Quite the contrary, learning a language because it is exploding in popularity is a nice way to ensure that you'll end up being abused, poorly paid and irrelevant.

Because I had the good fortune of starting with Rust early, I now have several foundational libraries under my belt in the Rust ecosystem. This has certainly been of great interest in interviews, and might just lead to me landing a pretty 'hip' internship over the Australian summer. Of course it won't be in Rust, but it's better than being at a hum-drum corporate placement. Being a pioneer definitely sets you apart from other candidates, and also culls the pack of potential employers to those that value curiosity and initiative, and actually care about exploring new ideas.


By learning new programming languages or frameworks you stand out from the crowd.

Guess who's going to get a job:

A) The fool who just learned JavaScript and Angular.js

B) The university student who only knows Java

The answer will be (A) every single time because companies don't have the time to train people for months.


Depending on context, it is going to be (C) the developer who has something to show on GitHub or (D) the developer that knows about algorithms, networking, computer architecture and that can solve concurrency issues.

Thanks for the example BTW - you've highlighted the problem precisely - the question is do you want your competition to be college students that only know Java? Because that says something about the company in question. And yes, it's a choice.


This hasn't been my experience in practice but I guess it depends on what kinds of companies you've worked with.

Everywhere I've worked, they hire engineers based on a lot of factors, and it has been more often the case that they'd go for someone who knows a different stack really well, than someone who knows all the newest stacks.


Personally, I like to learn any language, regardless of how popular it is. It's just fun.


Yeah, I was doing this for fun. I'm pretty secure in my day job. :)


I agree that it's great to learn a language, I was thinking of this as more of an exercise to generate a good post, but it's probably more reasonable to assume he really wanted to learn Go :P


I'm under the impression that Rust isn't near done and ready for production use. Go has been ready for a couple years. Rust could be 10 times better but I'm not going to touch it until they ship 1.0.


The plan is to release 1.0 around the end of the year. In that sense, Rust isn't done, but it is near done.


Unlike other languages that are "done" at 1.0, Rust will still be improving and iterating. 1.0 defines a backwards-compatible release that you can depend on but the language won't be "done".

"It’s important to be clear about what we mean by stable. We don’t mean that Rust will stop evolving. We will release new versions of Rust on a regular, frequent basis, and we hope that people will upgrade just as regularly. But for that to happen, those upgrades need to be painless." [0]

[0]: http://blog.rust-lang.org/2014/10/30/Stability.html


Go updates twice a year, which is actually a lot. I don't think a release every 6 weeks for Rust is a good idea. That'll never fly for enterprise software. Heck, for some companies the Java upgrade cycle is too fast.


At JavaOne, MongoDB developers mentioned they have clients still running Java 5, so they cannot fully embrace Java 8 for the time being.


The stability guarantee given by Rust is what scares me the most about starting up with it. "With a minimum of hassle" is really squishy language that is not at all comforting, compared to the Go 1.0 promise: your code will not break, period.


> A good (trivial) example was a great command parsing library just doesn’t exist yet.

There is a Docopt implementation in Rust[1], which is used by Cargo. It tracks master and is regularly updated.

Interestingly, I've found people either love or hate Docopt, so maybe you knew about it but don't like it. :P

/shameless plug

[1] - https://github.com/docopt/docopt.rs
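
For anyone curious what using it looks like, here's a from-memory sketch against the docopt.rs API (method names like `get_str`/`get_bool` are as I remember them from the README; double-check before copying):

    extern crate docopt;

    use docopt::Docopt;

    const USAGE: &'static str = "
    Usage: greet <name> [--loud]

    Options:
        --loud  Shout the greeting.
    ";

    fn main() {
        // Parse argv against the usage string above; on error, print
        // the usage text and exit.
        let args = Docopt::new(USAGE)
            .and_then(|d| d.parse())
            .unwrap_or_else(|e| e.exit());
        let greeting = format!("Hello, {}!", args.get_str("<name>"));
        if args.get_bool("--loud") {
            println!("{}", greeting.to_uppercase());
        } else {
            println!("{}", greeting);
        }
    }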


I've had a really good experience using that library in Rust. It just works without much code or hassle.


Yeah, that whole part rubbed me the wrong way. Docopt is, as far as I know, the best. Also, Rust comes with a simpler getopt.

It is broken right now, but with the collections reform landing, what isn't?


It was only broken because I merged a PR that updated it to Rust master before the nightly was updated on Travis. :-)

It's passing now that Travis updated to the new nightly.

(But yup, it was collection reform.)


Having spent a little time with Go, I ended up feeling like it was both better and worse than Ruby. In many ways it has many of the things that I want from a language - static typing, a fast compiler, pretty sensible defaults and so on. A lot of things I wish Ruby did, Go does great.

I think Go does really well in tooling, but its syntax and language features don't feel as great to me. The two big ones I wish Go had are named parameters on functions and immutable data structures.

It's probably a blub paradox thing, but my ideal language has immutable data structures and named function parameters. Kotlin, Scala, and Swift all kind of nailed these features, but Go has not.

For me, named parameters and a little verbosity go a long way toward letting me communicate intent in my code. Immutability gives me a built-in sense of safety: once the data is set, it stays that way.
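
Rust, for what it's worth in this thread, makes that second property the default: bindings are immutable unless you opt in. A tiny sketch:

    fn main() {
        let xs = vec![1, 2, 3];  // immutable binding: `xs.push(4)` would not compile
        let mut ys = xs.clone(); // mutation is opt-in and visible at the binding site
        ys.push(4);
        println!("{:?} {:?}", xs, ys);
    }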

In my experience, a whole class of problems goes away when you have those two features (alongside the many other features we'd expect from a language like Go).

What I felt from Go when I was playing with it a year ago was that they didn't care much about either of those two features, which is fine, but it keeps Go from being my ideal language.

I wish Swift were a bit more general-purpose, Kotlin and Scala don't compile as fast as Go, and Ruby doesn't have a compiler to do some of the checking I wish it would.

Go's fast compile times with something like Ruby's syntax and a feature set similar to Kotlin/Swift would be darn near perfect.

For me, there is no perfect language.


Maybe OCaml is what you want?

(I'm a Scala guy myself and I understand OCaml has no higher-kinded types and worse performance (in terms of throughput of the code), and its syntax is weird. But it's a general purpose programming language with an emphasis on immutability, at least some kind of keyword arguments, strong types and allegedly fast compilation (though probably not at the same level as go))


There is no perfect language, but from what you write it seems like you may enjoy Crystal [1]. It is probably not ready for prime time yet, but it is making good progress!

1: http://crystal-lang.org/


The thing that has me excited about learning Go some day is that it's supposed to be really good at cross-compiling code into small, standalone binaries for the three major platforms (Linux, Windows, OS X). Haxe is another language on my radar for much the same write-once-deploy-all-over-the-place reason.


Rust has that too. You can cross-compile binaries that will run on any platform that LLVM supports.


nice :) already excited about rust for other reasons, but that's going to be a great plus


Comparing Go and Rust doesn't feel right. They are obviously designed for solving different kinds of problems. Go is a simple language, maybe even too simple for my taste, but simplicity is its greatest strength, and I can understand people who would prefer Go as their go-to language for specific kinds of problems. Go is a boring language, but it gets you where you want to be in a short time and without many surprises. Rust, on the other hand, is designed for systems programming. It's got some nice features, but it's also a much more complex language than Go. I don't want to fight the compiler all the time; sometimes I don't need that kind of safety.


I agree that it doesn't feel quite right, but this post seems to be more of a personal exploration: going from some ignorance of both languages to a better understanding of where each language sits. His concluding section is nicely written.


I think to properly write a language comparison, you need to have extensively used both languages and with multiple use cases.

For example: I recently attempted writing a small service in Go, and it only took a few hours to figure out how weak a language can be without some sort of type abstraction or generics: I had to implement a FindValueInArray() twice for two different types. That would be considered a big issue in any respectable language. (I mostly like Go otherwise, though!)
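
For reference, the generic version I was reaching for is a few lines in a language with type abstraction; sketched here in Rust (names are made up for illustration):

    // Return the index of the first element equal to `needle`,
    // for any element type that supports equality.
    fn find_value<T: PartialEq>(haystack: &[T], needle: &T) -> Option<usize> {
        haystack.iter().position(|x| x == needle)
    }

    fn main() {
        assert_eq!(find_value(&[1, 2, 3], &2), Some(1));
        assert_eq!(find_value(&["a", "b"], &"z"), None);
    }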


True, but you can assess how approachable a language is by simply approaching it. This is useful information. You could even argue that experience would disqualify you from reviewing using that angle!


hm.. but why would anyone want to read how approachable Go is for beginners on HN? Do all negative experiences disqualify anyone from publishing a comparison? which negative experiences qualify?


What is there to "implement" in FindValueInArray?

    // Return the first element matching the predicate.
    func FindValueInArray(foo []string, someCriteria func(string) bool) (string, bool) {
        for _, x := range foo {
            if someCriteria(x) {
                return x, true
            }
        }
        return "", false
    }


I'm confused by your comment. Does it take extensive experience and multiple use cases to assess a language? Or a couple hours writing a small service?


The former. I'm not claiming that I have extensive experience or multiple use cases (I had neither), but I am saying these very serious issues started to pop up just a couple of hours in. I'm surprised the article says nothing about these seemingly frequent issues, which makes me feel like the author hasn't used the language enough (again, not that I have).


I'm surprised at the paragraph about Rust having Erlang-style actor-based parallelism... From what I've read, parallel programming is still heavily a work in progress in Rust (I had a very recent discussion on HN about using Rust for HTTP server-side coding, with people confirming this to me).

Goroutines let me build a standalone binary with an embedded HTTPS server and websocket support. Would Rust be able to do that, even at the 1.0 release, without relying on low-level C library wrappers?


It sounds like you're actually talking about non-blocking and asynchronous IO, rather than parallel programming. It is definitely true that the former is helpful for the latter, but not everyone is using parallelism/concurrency for IO tasks.

In any case, Rust 1.0 is not aiming to be feature complete in language or library, it is just the point of guaranteed backwards compatibility and additions and improvements will continue. See:

- http://blog.rust-lang.org/2014/09/15/Rust-1.0.html

- http://blog.rust-lang.org/2014/10/30/Stability.html

The Rust standard library is not aiming to have async IO at 1.0, but external libraries have exactly the same low-level power as the standard library, so this functionality can be written externally, e.g. mio[0]. The Cargo package manager makes it super easy to use such libraries in your own applications (with reproducible builds, so no risk of upstream changes accidentally breaking the build).

Lastly, are you saying "(low-level C library) wrappers" or "low-level (C library wrappers)"? The former is exactly what Rust will have: it has a highly efficient FFI (a Rust -> C function call costs the same as a C -> C function call), so it can bind to the high-performance libraries others have written without any overhead (minimal sketch after the links below). And there's no reason people can't build a nice, high-level API above the direct bindings; this is exactly what happens, e.g. the game-development community has quite a few nice-to-use libraries[1] that are thin layers above the low-level C functionality.

[0]: https://github.com/carllerche/mio

[1]: https://github.com/rust-lang/rust/wiki/Computer-Graphics-and...
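
To make the FFI point concrete, a minimal sketch, using `abs` from libc (which is already linked on typical platforms):

    extern "C" {
        fn abs(input: i32) -> i32; // C's abs from libc; no wrapper layer involved
    }

    fn main() {
        // The call compiles down to an ordinary C calling-convention call.
        println!("abs(-3) according to C: {}", unsafe { abs(-3) });
    }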


Yeah, "Erlang-style actor" isn't exactly accurate anymore. A while ago this was true, but it's not exactly true today.

That said, Rust does encourage message passing by default, but also gives you the ability to safely do shared-memory concurrency if you need.


OK, I'm not a language designer, so correct me if I'm wrong, but doing anything remotely resembling actors implies being able to do M:N threading in some way, which Rust decided not to do recently. Correct?


It's not clear to me that "actors" directly implies M:N threading. Rust's spawn() makes a 1:1 thread, but you still talk between threads with channels, and you can't get shared memory without going through an Arc<Mutex<T>> or something similar.

> which rust decided not to do recently.

It's more subtle than this. Rust's I/O libraries are being re-done to remove the M:N stuff, yes, but Rust is low enough that I/O is just a library; anyone can implement alternate I/O stuff, it's not privileged other than coming with Rust. See mio as an example of an in-progress alternative.
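
To make the channels-and-Arc<Mutex> point above concrete, here's a minimal sketch using today's std names (std::thread, std::sync::mpsc; the pre-1.0 APIs discussed in this thread were spelled differently):

    use std::sync::{Arc, Mutex};
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Message passing: each spawned thread gets a clone of the Sender.
        let (tx, rx) = mpsc::channel();
        for i in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || { tx.send(i * i).unwrap(); });
        }
        drop(tx); // close our Sender so the receive loop can finish
        for msg in rx {
            println!("got {}", msg);
        }

        // Shared memory: mutation has to go through Arc<Mutex<T>>.
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4).map(|_| {
            let counter = counter.clone();
            thread::spawn(move || { *counter.lock().unwrap() += 1; })
        }).collect();
        for h in handles { h.join().unwrap(); }
        println!("count = {}", *counter.lock().unwrap());
    }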


Not really. You can do the actor model with N:1 or 1:1 threading. With N:1 (many actors on one native thread) you get actor-model concurrency without native-thread overhead per actor, but no parallelism; with 1:1 you get parallelism, but you pay native-thread overhead for each actor. With M:N, you can limit your native-thread overhead to whatever is useful given the available hardware, while scaling out to as many actors as you need.

OTOH, M:N and N:1 both require the runtime to do more to manage actors, which potentially adds overhead to everything, so M:N may not always be a win compared to 1:1.


From my (admittedly super ignorant position) it felt very similar - message passing, spawn, etc. The primitives felt very similar. I'm sure they differ hugely in details ;)


When I think "Erlang style" I think process trees and no shared-memory and sending code over the network to other nodes. Maybe that's my bias. :)

(We also used to say "Erlang-style" in our marketing, but eventually removed it)


Does 'the language prevents errors at compile time' really mean 'better function signatures in the standard library'? Because that's all I'm getting out of the regex example given. So far as I know the fail! macro still exists in Rust.

Edit: Evidently I should have put a /s at the end of that first sentence. I know what the idea behind compile-time checking is, but I don't see that the given example actually illustrates it.


> Does 'the language prevents errors at compile time' really mean 'better function signatures in the standard library'?

No, it means there's memory safety and a strong type system. Unless you specifically and explicitly break that, the language prevents you from doing things like accessing undefined memory or returning nulls where you haven't explicitly said you can return nulls, and the strong type system allows libraries to prevent you from accessing closed resources.

People say Haskell prevents lots of classes of errors at compile time, and yet even that has escape hatches. panic!() shouldn't be a part of idiomatic Rust code; it's primarily for use during initialisation, in small scripts, and in the odd place where the application design would be painful if it had to handle a one-in-a-million event that it can't continue past anyway.


I think it means having a proper type system (though perhaps not as complete as the one in Haskell). In the example given, the compiler was able to signal the possibility of an error because the type system could represent None as a possible return.

Go's type system is, by comparison, pretty minimalistic. In the example you mentioned, Go would rely on you learning to write idiomatic Go code to check the value of the returned error. If you skip that step, Go will happily crash and burn as you try to do something with the nil value returned alongside the error.
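
A tiny sketch of the difference (the `port_for` function here is hypothetical):

    // Rust: the possibility of absence is in the type, so the compiler
    // forces every caller to decide what to do about None.
    fn port_for(service: &str) -> Option<u16> {
        match service {
            "http" => Some(80),
            "https" => Some(443),
            _ => None,
        }
    }

    fn main() {
        match port_for("gopher") {
            Some(p) => println!("port {}", p),
            None => println!("unknown service"), // deleting this arm is a compile error
        }
    }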


> Does 'the language prevents errors at compile time' really mean 'better function signatures in the standard library'? Because that's all I'm getting out of the regex example given.

Of course any function can be forced to crash by inserting crash-inducing code into it. You can always write a function that assumes a value is of a certain form and crashes the program if it is not (like unwrap, which extracts a value from an Option and crashes if there is no value). In order to have "code that works if it compiles" (comparatively speaking; often used as an exaggeration), you have to have the discipline to use the facilities that the language provides you.

I guess a language would need some kind of totality checker, like Idris has, in order to make sure that you couldn't make a function diverge in some way.

Rust also has the `!` type, aka bottom, for marking functions that diverge.
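
A small sketch of what that looks like (function names are made up):

    // The `!` return type marks a function that never returns normally.
    fn fail_fast(msg: &str) -> ! {
        panic!("unrecoverable: {}", msg)
    }

    fn parse_or_die(s: &str) -> u32 {
        // The diverging branch unifies with any type, so the happy path
        // stays typed as u32.
        s.parse().unwrap_or_else(|_| fail_fast("bad number"))
    }

    fn main() {
        println!("{}", parse_or_die("42"));
    }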

> So far as I know the fail! macro still exists in Rust.

What should that do? Crash the program? If so it should have been renamed to something like `panic!` now, since "fail" now is associated with Error types, while "panic" is associated with crashing the program/exceptions.


fail! has been renamed to panic! for much that reason.


> What should that do? Crash the program? If so it should have been renamed to something like `panic!` now

It crashes the current task (thread/process), not the whole program, and has in fact been renamed panic! recently: https://github.com/rust-lang/rust/pull/17894


How mature is Rust and its compiler actually at this moment? Is it in a state ready to replace C++?

Edit: Also, I couldn't find a good overview of the features present in one language and lacking in the other. In that respect, I find the Wikipedia page [1] deeply broken, but that aside.

[1] http://en.wikipedia.org/wiki/Comparison_of_programming_langu...


Depends on what you mean by mature. It uses LLVM for all the codegen, and performance is generally in line with C++. The core ideas around traits/borrowing/inference/mutability have been stable for a while.

What's not stable is everything around the core ideas. Many things are in flight at the moment in anticipation of 1.0 stability. The core collection libraries had a major refactor land last night. How error handling works got changed earlier this week.

If you're interested in writing libraries for a new ecosystem, now is the time to get into the language. I've tried to get into Rust a couple of times since the 0.4 timeframe and couldn't keep up with the pace of change. My current attempt started at 0.11, and while there are steady changes, I've found them to be manageable.

If you want the idioms and ecosystem relatively settled, hold off until the 1.2 or 1.3 timeframe (6-week cycles). I say this because a lot of the current RFCs are split between things that need to happen before 1.0 and things that need to happen after. This generally means the base functionality will be in place for 1.0, with the sugar/convenience stuff to come later. I expect it'll take a few cycles after 1.0 before Rust settles into what most people are thinking when they think stable.


"What's not stable is everything around the core ideas. Many things are in flight at the moment in anticipation of 1.0 stability. The core collection libraries had a major refactor land last night. How error handling works got changed earlier this week."

Huh? I'm seeing comments in this thread about people using Rust right now in production. How... why... what?!?


There are two big deployments and some smaller ones. The two big ones are OpenDNS and Skylight.io.

We don't recommend it currently, but some people are just eager. :)


I would venture to guess that those organizations feel the pains of working with a prototype are worth the payoff of shaping the final product (not to mention the immediate advantages of Rust's already-stable safety features).


Yup. I know much more about Skylight's case, but this is exactly true.


Some people like to live dangerously.


> Is it in a state ready to replace C++?

Nope. That will take many years. But it definitely is in a state where you can start building real things with it, if you don't mind some of the usual costs of being on the bleeding edge, like documentation that's hard to grok, frequent trips to irc, and frequent breaking changes. But breaking changes will be going away very soon, the documentation gets better all the time, and the irc room is far friendlier than others.


How well does the optimizer perform? And do you often encounter nasty, hard to trace, bugs in the compiler output?


Rust uses LLVM for all code generation and optimization. So, in that respect, it is very mature and I personally have never hit an issue.


I've yet to see a bug that I traced back to the compiler rather than my own code (which doesn't mean they haven't happened, just that if they have, they have been obscure enough that I haven't noticed them).


From my experience, not even close, if only because the ecosystem around it has to continuously update its own resources/libraries with every new patch or risk being left behind, since the language hasn't even reached a stable 1.0 yet.

Trying to figure out new things tends to be an exercise in reading obsolete code examples and then asking someone in #rust on irc.mozilla.org to translate. Stuff that gets deprecated isn't well documented, in my experience, and one of the best resources I came across (Rust by Example) also contains a lot of out-of-date or deprecated features.

Obviously, this is a side-effect of a rapidly evolving language that is still in pre-1.0 phase, but anyone going into it should be aware of the potential breakage that tends to happen at a fairly high clip at the moment.


I wonder what actor library the author is using with Rust. Given the status of github issues like https://github.com/rust-lang/rust/issues/3573 I thought there weren't any real options.


It's more that Rust _used_ to have that style of concurrency, but it has changed over the last year.

That said, Rust does encourage message passing by default, but also gives you the ability to safely do shared-memory concurrency if you need.


I have seen in many cases how Rust's strictness (no null pointers, among other things) has a positive effect on safety, but I am curious: does this also improve reliability or stability?

I would like to know the thoughts of someone experienced with Rust.


tl;dr: Guy who knows Rust better than Go prefers Rust over Go.


Since it's clear to anyone who read the post that this is an inaccurate kneejerk summary, I am dying to know why. Do you feel an emotional attachment to Go and have a deep-seated need to preemptively defend it even when no one is attacking it?


"Where line 9 there just blindly assumes the regex found a match, and causes quite the run-time error message."

You blindly assume the regex found a match, because you ignored the part of the docs where they tell you how to check for a match: http://golang.org/pkg/regexp/#Regexp.FindStringSubmatch


The article didn't say there was no way to check, just that the compiler does not enforce checking. Rust's type system requires that all possible paths are accounted for when trying to extract the value from the option.
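
For example, with the regex crate (the current API; the pre-1.0 code in the article is spelled slightly differently), `captures` returns an `Option`, so you can't touch the capture groups without handling the no-match case:

    extern crate regex;

    use regex::Regex;

    fn main() {
        let re = Regex::new(r"(\d{4})-(\d{2})").unwrap();
        // captures() returns Option<Captures>; the match below must be
        // exhaustive or the program does not compile.
        match re.captures("released 2014-11") {
            Some(caps) => println!("year: {}", &caps[1]),
            None => println!("no match"),
        }
    }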


The point is that Rust gives you a compiler error while Go assumes your sloppy code is intentional or unimportant.



