Another Scala user here. I've been using Scala for the last six months, mostly developing web applications on Scalatra and Play. I love Scala - it still has its good points and it's an excellent alternative to Java.
If you look at all of the Scala books (I've read the one by Martin and the one by David Pollak as well), they all seem to push you towards the functional programming model instead of the much more common imperative model.
But you know, there's a problem with that. Not with the language itself, but with the web development side of Scala. There are only so many ways you can write a function that accepts a request and returns a response. So, most of the time, I would find myself wasting time thinking, "How can I make this more functional?"
Functional programming is really great when you can actually enforce it - when writing algorithms and so forth - but otherwise you are forced to fall back to the imperative model, which sucks, because you have to constantly switch your mindset between the two models.
Another downside of Scala is its HUGE syntax. Processing strings? Here are ten million ways to do it (exaggerating a bit). Oh, and if you choose this way, then you should use this syntax. No, not that one, THIS one. Oh no, not that one, this variation of that one that looks like this.
I've advocated Scala to many HN members here in the past; you can check my comments, for instance. But from my experience, I think Scala is an academic language. It's superior in its own ways - I just LOVE the concept of avoiding nulls and passing an Option. It's beautiful. But the downside is its huge, academically complex syntax. I want to be able to write code that won't reduce my hair count even if I look at it after 6 long months. I don't want to refer to an 800-page book each time I'm confused about something.
That's why I think Go is beautiful. The syntax is just small enough to keep it inside your head forever. I fear that as the language matures it might also evolve into something as complex as Scala, but let's hope not.
Go isn't a magic bullet, though. It has its own downsides, but nothing with respect to performance or the like. For the most part, it's awesome.
I've historically been pretty pro-Scala 'round these parts, so my response to this should be considered in that light. So, with that disclosure out of the way... =)
I have to admit, I don't agree with the assertion that the cognitive dissonance associated with programming in an imperative way versus a functional way is a major problem for Scala. I realize personal preference is a very big part of this, but there are many problems which are just easier to solve in a functional way versus an imperative way. Of course, the opposite holds true as well. This is the beauty of Scala; one can write performance-critical imperative code while exposing a functional interface; or, one can consume a fluent functional interface to produce imperative code.
Frankly, I guess I've become something of a functional zealot, so my problem with Go is that it's so stubbornly imperative. This is why I can't get behind Go, as much as I want to. I feel like it doesn't consider many of the lessons that have been learned about FP making life easier. Set operations (map, filter, reduce) are insanely common, yet doing them imperatively sucks to the point of discouraging them. Excessive mutability is difficult to reason about. Nobody expects the nil/null/none inquisition. We tend to forget about side effects. Without an extensible language, which requires truly first-class functions, you're at the language designer's mercy.
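To make the set-operations point concrete, here's a minimal sketch (the function name and data are my own invention) of what a simple filter-then-sum looks like imperatively in Go, versus the one-liner a functional language gives you:

```go
package main

import "fmt"

// sumEvens filters the even numbers out of nums and then sums them -
// two explicit loops for what a functional language writes in one line
// (in Scala: nums.filter(_ % 2 == 0).sum).
func sumEvens(nums []int) int {
	evens := make([]int, 0, len(nums))
	for _, n := range nums {
		if n%2 == 0 {
			evens = append(evens, n)
		}
	}
	sum := 0
	for _, n := range evens {
		sum += n
	}
	return sum
}

func main() {
	fmt.Println(sumEvens([]int{1, 2, 3, 4, 5, 6})) // 12
}
```

Neither loop is hard, but multiplied across a codebase this boilerplate is exactly what discourages people from thinking in terms of map/filter/reduce.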
Hell, Scala probably isn't the be-all, end-all answer to all of this. I just don't think that a doggedly-imperative, non-composable, statement-oriented language is the future of programming "in the large", not when the past has shown us that we tend to produce buggy, unmaintainable software this way. I'm pragmatic enough to realize that pure FP isn't a solution, but I feel strongly that sticking to traditional imperative because it's familiar is a costly mistake.
I can't argue with your take on the syntax, though, since that's personal preference. =) If you have any thoughts on why you feel productive in Go, I'd love to hear them; as I've said, I've been really struggling with motivating myself to learn and enjoy it.
Maybe you should try the Nimrod language. It's similar to Go but a bit more functional; it has generics, and concurrency is managed using actors (it also supports channels). Rust is also a good language for functional programmers trying to find a good systems language...
Go is so different from Scala or Haskell because it follows the principle that a simple language is better than a complex one (the principle of least power). The type system in Go is a bit weak - probably one of the simplest type systems ever. Nimrod has basic OOP combined with algebraic data types and something similar to Clojure's multimethods; Rust, I believe, has type classes... So yes, if you're happy writing functional code and using powerful languages, maybe you won't be totally happy with Go.
I agree with everything you say. I think functional vs. imperative is a matter of taste, though. Even though I tend to lean towards functional programming (FP) in most scenarios, in some cases imperative languages are 'good enough'.
And like I said, I still love Scala, for the reasons you've cited. But then again, there are a couple of reasons which I have in mind for going with Go.
The main problem with Scala for me:
1) Difficult to find good Scala engineers. If you find one, you still need to figure out whether he's comfortable with the functional or the imperative model.
This is an example from one of my previous commenters (dxbydt, thank you) on my thread:
scala> def test1 = println("hello world")             // no parameter list at all
scala> def test2(f: => Unit) = println("hello world") // by-name Unit parameter
scala> def test3(f: Unit) = println("hello world")    // by-value Unit parameter
scala> val t = test2 _                                // eta-expand test2 into a function value
scala> t apply Unit                                   // call it, passing the Unit companion object
It is mostly a personal preference, but I still feel too many ways to do one thing is a recipe for disaster.
With all that said, I know of people writing a Scala program and not touching it for a year or two unless they wanted to update their OS on their servers.
Nice example. It combines aliases for the Unit type with the sort of hybrid function/value nature of Unit to produce a truly baffling array of options. =)
I'll agree that this set of cases does not reflect well on Scala; at the very least, it supports your point that the syntax is too large. Still, I don't mind it too much. Although the "multiple ways to skin a cat" nature of Scala means you can get very WTF-y code like above, it also means that you can construct a list by writing `1 :: 2 :: 3 :: Nil`, or send an actor a message with the Erlang-inspired `actor ! Message("hello")`.
I don't want to be an apologist for it, though. You're correct that there are some nasty corner cases. Using Scala for serious work is difficult if people don't agree on a consistent style.
Go doesn't lend itself to functional programming, in part because of its lack of type parametrization. In a language like Scala or Haskell, using functions like map, fold, and filter is clean and idiomatic. In Go, it is so ugly and convoluted that you immediately reject this approach and use for loops instead.
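For a flavor of the convolution (a sketch, reflecting Go as it stood without type parameters): a reusable filter has to box everything into interface{}, and the caller has to cast values back out by hand:

```go
package main

import "fmt"

// Filter is what "generic" code looks like without type parameters:
// values are boxed into interface{} and the caller casts them back out.
func Filter(xs []interface{}, pred func(interface{}) bool) []interface{} {
	var out []interface{}
	for _, x := range xs {
		if pred(x) {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	nums := []interface{}{1, 2, 3, 4} // boxing an int slice by hand
	evens := Filter(nums, func(x interface{}) bool { return x.(int)%2 == 0 })
	fmt.Println(len(evens)) // 2
}
```

All type safety is lost at the interface{} boundary, which is why most Go programmers just write the for loop instead.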
Standing firmly in one paradigm is a good thing for a language IMHO. I've worked in multi-paradigm languages and they suffer from the fact that their communities can't find a common style. "Should I use a fold or a loop or a tail recursion?", or some library designers want to be as typeful as possible, others don't, etc.
Sometimes "academic" seems like a catch-all for stuff people don't like. Scheme has a strong academic history in its use and implementation, yet it seems to be described as "academic" only when someone is unhappy with how minimalistic it is, which is the opposite issue described here.
Has anyone here used Haskell in a production environment?
I want a language that is small, clean, and can provide a lot of static guarantees. I know many people find static guarantees an unacceptable curtailment of their "programming freedom", but frankly I think they're the answer to many of the problems we face in software today. Small is another thing. E.g. Go and Scheme are small. C++ and Scala are large. You know what I mean.
Now, static typing is just one kind of static assertion I'd like to have. Side effect isolation (i.e. language-enforced purity) would be another feature I'd like to see become common. For example, the D programming language has a "pure" keyword that lets you mark a function as side-effect free. (In Haskell, all functions are pure by default, and you escape the confines of purity via monads.)
I'd like to do research into (inventing) other forms of static assertions. One thing that's been on my mind lately is static "complexity assertion" for programs. I don't know if this is even possible, but it would be nice to be able to ascertain certain time & space complexity bounds on a program before running it. This would perhaps require us to drop the programming language itself to some level "slightly below" Turing machines -- but this in itself could be achieved via a keyword like D's "pure" or something more elegant. (Note: my point is not that we're going to make the entire language non-Turing-complete -- but rather that we'll have subsets of the language that are not, and this reduction of power would/could bring with it a greater possibility for statically asserting stuff.)
FYI, I made this term up. Let me know if there's something better that describes it.
Total functional programming (and the associated analysis of codata) has already been touched on, so I'll just address your interest in static guarantees for space usage. The Virgil programming language has been designed with exactly this in mind. It is aimed at embedded systems where memory is extremely constrained and running out of memory could kill people. Heap allocation is not possible in the language and all data structures are initialized during compilation (like C++ templates, but more sophisticated). The compiler can use the initialization information for advanced optimization and analysis as well as serving as a bound on the memory usage. The language also has some interesting static typing ideas, but they are not as distinct from other languages.
The halting problem assumes a program from a Turing-complete model of computation. Not all computational models are Turing-complete. The parent is pointing out that if you limit yourself to a language or a sub-set of a language that is not Turing-complete, then you can make static assertions about things such as halting.
A trivial example would be a programming language that did not allow any loops or explicit backward branches. You would be able to provide upper bounds on how many operations such programs can perform.
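To sketch that idea in Go (function and names are my own, purely illustrative): a straight-line function with no loops, recursion, or backward branches performs a number of operations bounded by its instruction count, so a hypothetical analyzer could certify an upper bound on its cost statically:

```go
package main

import "fmt"

// clampScale contains no loops, recursion, or backward branches, so the
// number of operations it executes is bounded by a constant regardless
// of input - the kind of property a static "complexity assertion" could
// check mechanically.
func clampScale(x, lo, hi, factor float64) float64 {
	if x < lo {
		x = lo
	}
	if x > hi {
		x = hi
	}
	return x * factor
}

func main() {
	fmt.Println(clampScale(12, 0, 10, 2)) // 20
}
```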
> if you limit yourself to a language or a sub-set of a language that is not Turing-complete, then you can make static assertions about things such as halting
There's already something that is non-Turing-complete but comes with a halting guarantee -- it's called total functional programming. This paper (linked below) is actually what got me thinking about this whole thing. The author argues that functions in functional programming languages aren't really like mathematical functions, because they can execute infinitely (never halt) and thus fail to have a proper return value. To mitigate that, he creates "total functional programming".
Total functions are guaranteed to have a return value (i.e. terminate eventually), but they could take very long to run. If we could actually have complexity bounds, or be able to determine a given function's complexity, that would be a great boon to software engineering (i.e. assuming they adopt it... which is a whole different matter).
The author also makes a very good point that most parts of your program are meant to terminate (i.e. the vast set of functions and algorithms that comprise your program). Only the top level needs to be Turing-complete; the rest can at least be total.
I actually want to see if it's possible to push this concept further, to create a ring-like hierarchy of languages -- the lowest one being the weakest and the one we can extract the most static assertions out of, and so on. There's a language called Hume that sounds like it accomplishes at least part of this goal, but I haven't really looked into it.
Your point on "slightly below" Turing machines caught my attention - exactly the same terminology I have used. I want as many proofs (assertions) as possible about code, and Rice's Theorem is a problem, so slightly below Turing is on the radar. If you are interested, we can discuss this. Shoot me an email at svembu at zoho ...
I liked Scala, but it's just the sheer weight of it that puts me off using it in my personal projects, and it still comes with JVM baggage.
What it needs is some leadership. A few weeks ago Rob Pike and Andrew Gerrand (developers of Go) appeared on The Changelog podcast, which I recommend listening to, and one thing that Rob Pike said really intrigued me. He said that a policy they have when designing Go is to ensure that all members of the core team agree on any design decision before it gets put into the language. If one of them disagrees it gets tossed.
That's probably one of the main reasons why Go is such an "opinionated" language and very small in nature. This probably infuriates some people as their favourite feature from other languages is missing in Go, but it keeps the language where it is and steers it on a path that the designers are maintaining complete control of.
> But from my experience, I think Scala is an Academic language.
Your complaints about Scala's size are more on point, and they cut against this; Scala is such a big language because it is designed to be an industrial language (academic languages tend to be small and focused), specifically targeting the enterprise uses of Java.
Academic functional languages that aren't designed to provide an easy glidepath for users of a particular large imperative OO language and provide access to that industrial ecosystem tend to be smaller than Scala.
I find your comment quite odd, as a web service is just about the most functional thing you can get -- it's just a function request => response as you note. I don't see where you'd need to make it "more functional."
For an academic language (which I assume it factually is, as it was developed in a university), it is one of the most developer-friendly out there, IMHO. I'd rather write Scala than Java in terms of complexity, and if I needed to rewrite a Ruby codebase, I would give Scala a shot before trying Go (it feels more natural to me; the two have way more similarities than I thought).
I have the parts I need from Scala easily fitting in my head, which is perhaps just a subset, but enough for me to be very expressive in it.
I think one of Scala's biggest issues is people being turned off by what sometimes feels like functional zealotry. But people are perhaps not aware that it is also a great imperative language: you get no compiler warnings if you use a var or a mutable collection.
I find it very close to Ruby, actually, and I think Ruby developers will feel at home with it.
The other thing that holds it back IMHO is the compile time.
(But this is improving from version to version)
> The other thing that holds it back IMHO is the compile time.
Now that you mention compile time -- one of the biggest things the Go designers cared about was compile time. Go touts it as one of its major pros, and I agree from personal experience that compile time matters.
You can often cut down compile time by making "run-time compromises". For instance, with C++ code, dynamically linking rather than statically linking everything can speed things up significantly. This is because with massive C++ codebases, when you change a couple of lines, rebuilding the relevant objects only takes a few seconds - but the static linking stage can take forever.
On a memory-constrained system (4 GB of RAM), I've seen a particular codebase I've worked with take up to 28 minutes just to link. The same code on a machine with 8 GB of RAM (just double) took less than 4 minutes to link. Due to the sheer number of objects that need to be linked, your system ends up thrashing (swapping pages out to disk).
That being said, I read somewhere that Go doesn't support incremental compilation. I don't know if this is still true, but that's a major problem that needs to be fixed right away.
With interpreted languages, practically everything is done at run-time and you have no compilation stage -- but at a massive performance penalty. Tracing JITs do help though.
>Another downside of Scala is its HUGE syntax. Processing strings? Here's a ten million ways to do it.. (a bit over-exaggerating). Oh and if you chose this way, then you should use this syntax. No, not that one, THIS one. Oh no, not that one, this variation of that one that looks like this.
On the other hand, Scala lets you deal with collections and Strings using the same enormous set of methods.
I was aware of this, but was genuinely confused for a few moments, since I would have hoped that a reference to Seymour Cray was more likely on this forum than such, as you put it, “vernacular” forms of expression.
"Two years in, Go has never been our bottleneck, it has always been the database."
I would expect this to be true with any language, if you code well. Regular web applications do not have state so they are very easily scaled horizontally anyway. Databases on the other hand are trickier to scale the same way and will end up being the bottleneck almost all the time.
Or, you might be surprised when you find something like "I'm making 5000 DB queries?" instead of your language being slow.
But certainly if "enough" people take my advice here who have not profiled one of their web pages before, there's a non-empty set of them that are going to go "Oh crap, I didn't realize that's what was so slow, I just assumed it was the database!" Not every web app is a glorified select statement.
And there'll also be quite a few people who discover that their page isn't "slow" or anything, but who will discover that the CPU vs. IO is closer to 50/50 than they realized or something.
In my experience, with slower platforms/languages, while it may be conventional wisdom that the database is the bottleneck, that's not actually the case in many circumstances.
Certainly in circumstances where you're doing a complex query involving fields that are not indexed or several joins, you're going to be waiting on the database.
But if you're just fetching rows by ID or indexed fields, slower platforms and languages end up being a bigger bottleneck than modern databases. Sometimes this is masked somewhat by the fact that the database drivers and/or ORM are slow, so from the application's perspective, the "database" is the bottleneck. But one should not confuse the drivers and ORM for the database.
If you are talking about a breakdown of time consumed during a single request's processing, then yes on slower platforms/languages/frameworks, the database access portion may not be the most significant percentage of time used. But this is not that relevant as even the slower platforms usually can handle a single request reasonably fast.
What I was talking about was more about in the scaling of a system, i.e. what happens when your architecture needs to handle lots of requests. In this case, it is very rare for the application server part of your architecture to be a bottleneck in scaling because it is generally stateless (for normal web apps at least) and hence very easily horizontally scaled out. Of course a faster platform will allow you to use fewer servers but 15 servers on Go vs. 20 servers on Python is not that big of an issue.
In my opinion, many popular platforms and frameworks are not reasonably fast at providing a response in real-world applications. As a result, many web sites are frustratingly slow in my opinion (for example, a popular site used for hosting source code repositories). If those sites were to capture and share their profiling data (including time spent in drivers and the ORM), I would guess the database proper would not be as great a bottleneck as conventional wisdom says.
Perceived slowness is latency, and horizontal scaling doesn't necessarily address latency. Horizontal scaling may help alleviate an over-taxed CPU dealing with too many concurrent requests, but if a single request in isolation runs in 300ms, it will not run quicker than 300ms. It may run worse when contending for CPU capacity versus concurrent requests, but not better, unless a faster CPU is dropped in.
Performance matters, even in the world of horizontal scalability. Performance brings reduced latency (user experience) and reduced cost (size of cluster). If we can get that paired with an efficient, enjoyable developer experience, then yay for us.
Finally, "15 Go servers == 20 Python servers" seems a little unfair to Go.
True enough. This thread of conversation was kicked off by the premise that most applications are bottlenecked on their database server. When I said, "not necessarily," I meant as far as latency is concerned--for any given request on a slower platform, it is likely that the database's contribution is actually a minority.
That is the conventional wisdom I find in need of disruption.
But you are correct. Since a conventional database server can be more difficult to scale horizontally in its own right, even that small latency contribution, when multiplied by the number of queries being run by a wide array of application instances, may ultimately mean the database server is the first observed bottleneck. By which I mean, the first device to reach 100% CPU without the simple recourse of throwing more money at Amazon and spinning up another instance to solve the problem.
So I buy that.
But when I see slow web applications--when I criticize a site for being slow--I am always talking about latency. When we look under the hood, assuming the application is not being silly with its queries or doing a fundamentally challenging work-load, user experience slowness (latency again) usually originates from the application's own code or slow platform.
For example, I've observed applications that require 200ms of server-side time to render a login page. Behind the scenes, I may observe that they are badly designed and include one or two trivial (but utterly unnecessary) queries. Still, those two queries can be fulfilled by modern hardware in ~5 to 10ms. The remaining ~190ms of server processing is on the application. To my mind, that is unacceptable. A login page should be delivered in ~3ms of server time (under load!) on modern hardware.
And back to the OP, Go is a platform that brings JVM-class speed (the capability to return a login page in ~3ms) to those who can't stomach Java. Bravo to Go!
I don't entirely disagree with you but I do think you're not doing anybody any favors by "disrupting this conventional wisdom."
When I was an engineer at Formspring I profiled our social graph service which was basically PHP client hitting a Python service that queries against Cassandra. Thrift was used to communicate between PHP and the Graph Service and, being Cassandra, Thrift was used between the Python Cassandra client and the Cassandra server. So, two-way serialization, twice.
In the end, this isn't a bad design. I didn't write it then, but if I were rewriting it now I'd probably use protobufs; aside from that, it was a clean separation of concerns and fit in nicely with our larger SOA.
Point being, though, that serialization is CPU expensive. Reading from Cassandra was blazingly fast compared to the work Thrift was doing.
All that said, I think noob engineers should be taught that network operations (db queries included) are at least an order of magnitude slower than, say, opening and writing to a local socket. Experienced engineers can see the balanced view of things and agree with the point you're making, so advocating it, IMO, will only serve to make you look smart and confuse noobs.
I appreciate your point of view. It's not my intent to confuse noobs.
You're right, with that in mind, the conventional wisdom is worth retaining so to instill the proper fear of treating database queries as trivial. Thinking of queries as cheap--or not thinking about queries at all--is what gets applications into a state where a single request runs dozens if not hundreds of queries to deliver what is effectively static content! :)
That said, I've met more than a few senior folks who continue to fiercely stick with the premise that database queries trump all. As someone else in this thread said, profiling can be illuminating, even for senior developers. But it seems on that point, we're all in agreement.
I think one of my favourite things about Go is that it makes it easy and obvious to code well. Generally the first thing that comes to my mind is going to be performant and extendable.
The reason this is the case, I think, is that Go strives to make algorithmic complexity clear: you know when you're allocating memory, but you don't need to jump through hoops to do it. You know the rough performance costs of what you're doing because the core team works hard to make them obvious. For example, some regular expression features (lookahead, if I remember correctly) aren't implemented in the Go standard library's regular expression package, because they're impossible to achieve in O(n) time. https://groups.google.com/forum/#!topic/golang-nuts/7qgSDWPI...
This level of care in making simple, clearly-defined tools with known properties makes it easy to code well. Ruby, Python, PHP, NodeJS... you can shoot yourself in the foot and not even notice.
Ah yes the old "engineers are more expensive than machines" adage.
Thing is, that's true for some small number of machines. But when you build scalable systems and actually scale them you get to a point where the balance tips. And it just so happens that building those systems is exactly what the Golang team has in mind.
Throwing servers (money) at the problem is often incredibly wasteful, and exacerbates the other problem you mentioned (the database). Having 500 servers all connecting to the database is not so awesome. Having 10 is a lot nicer.
I can't be the only person who thinks Go is really interesting but can't get over the 'package' hump... It's just too wonky for the 'real world', from my experience. For example, how do you create clear lines of separation with internal modules? If you want to use 'namespaces', then each namespace has to be its own package, which then requires its own repo that you have to 'go get'. That's unsustainable for a project of any useful size.
Packaging is both the best and worst of Go. I love the decentralisation, so there's no central directory of Go code. We don't have pypi, we don't have rubygems or npm, we have Github.
As others have pointed out, you can use directories as namespaces, which works nicely. But I've found, personally, that splitting my projects into separate repos for each namespace is actually beneficial. Helps keep my code clean and separated.
That being said, versioning is a real pain right now. The best thing I've found is to fork a repo when I decide to use it, then pull in updates as I adjust my code to work with them. Definitely not ideal, and if you have a lot of projects using the same dependency, it becomes a major headache.
There are some workarounds, and we've discussed the topic at great length on the mailing list. I think it's something we'll see a solution to in the next few years. But one of the things about Go that I really enjoy is that the core team is hesitant to push half-baked ideas onto the community. When we see a solution, it tends to be an elegant, clean solution that fits perfectly into the problem it solves.
In other words: yes, there are some problems. Yes, I do think they'll go away. Yes, I do think we'll need to be patient. Yes, I do think it will be worth it.
Each package does not need to be in its own repo. You can put packages inside subdirectories. The standard library is full of these, such as the net library. There is the net library itself, and there are subdirectories, including http, smtp, and more.
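A sketch of that layout (repo and package names here are hypothetical): one repo holding several packages as subdirectories, each imported by its full path:

```
github.com/you/mainrepo/
    graph/          package graph  -> import "github.com/you/mainrepo/graph"
    graph/store/    package store  -> import "github.com/you/mainrepo/graph/store"
```

A single `go get github.com/you/mainrepo/...` fetches the whole tree; the subdirectory packages need no repos of their own.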
I would imagine this has something to do with having an existing package dependency management system company-wide and it causing friction trying to get the two to coexist.
I have similar issues where I work with trying to integrate an internal build system and rubygems. Our answer is to essentially mirror the gem version internally into our own repo. It's not the best answer I could hope for.
They can all be subdirectories of the same main repo, yet import each other with paths like github.com/mainrepo/packagename.
The real thing that I haven't figured out a consistent solution to is how to manage versions. You could approximate version numbers with branches or tags, but nobody's put together a convention for it that works well with the tooling. That'll be important as go matures and has more libraries.
It's not cliche, it's just personal. For instance, most of the code I write is numerical code. To that end, any language without operator overloading is a non-starter, since it's far harder to find a bug in nested function calls (say, add(mul(a, x), b)) than in the equivalent a * x + b.
On the other hand, if I wrote server code, operator overloading would be far less useful. I'd probably curse any programmer who used it and thank the gods that it was left out of Go.
Conversely, since I write a lot of numerical code, I don't care about generics or typing, which are crucial to many other programmers. Generics don't matter since everything I work with is either a Double or a collection of Doubles. Similarly, static typing doesn't help, since most functions just have the type (Double -> Double) and the type checker can't tell a sine from a logarithm. Of course, the reverse is also true. Since everything is either a double or a collection of doubles, the fancy tricks that dynamic languages offer don't give me a damn thing, so I'm extremely ambivalent about the typing debate.
Of course, on other projects, I've written code that benefited from static typing and I've written valid code that would choke any type checker. I've written code that heavily needed generics. When I did that, I used the languages that had the features I needed.
Go just won't work for the programs I write and it sounds like it doesn't work for yours, either. That's why we won't use Go. I've heard it works wonderfully for a certain class of server software and I'm glad those guys have a useful language for their domain. If I ever have to write another server, I might even teach myself Go. But don't feel guilty that you're using an old hammer instead of the shiny new saw.
though if you're using a ton of matrices it could be.
I for one have found Go great for computing. The really quick compile times with static checking plus the composability are great. It definitely depends though; while the native concurrency is great there aren't a lot of easy solutions for non-shared memory computations. (I saw an MPI package at one point, but I haven't tried to use it)
It's more than just matrices, though. For instance, I have a little library that propagates uncertainty for me. Without operator overloading, I'm back to descending into the rpn hole from my example.
Another poster pointed out unit analysis. I've done this before with custom types that keep track of the units on measurements.
Since you mentioned parallelization, that's another fun toy I've played with. By overloading the operators for an object that defines a snippet of OpenCL code, it's possible to push these snippets through pre-existing functions and have it return a final OpenCL function. You then call that returned function on your arrays of data to run everything through your GPU with just three lines of code changes from the sequential.
Operator overloading is more than just adding matrices. It's a powerful technique that comes in handy almost any time that you're working heavily with numerical data. Of course, it's also dangerous as hell in the wrong hands. The code for the OpenCL example was actually some pretty terrible code that did extremely non-intuitive things during value comparisons.
Operator overloading can certainly do good things; not going to argue with you there. Though, in the case of units, you can actually go a long way without it (http://play.golang.org/p/iCqB2euJj8). It's not perfect: you have to make some function calls to multiply units of differing types correctly, but you do gain all the proper compile-time safety, and you don't need to do all sorts of method calls.
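Setting the linked playground snippet aside, here is a minimal sketch of the technique being described, with illustrative type names of my own choosing: distinct defined types catch unit mix-ups at compile time, at the cost of an explicit function call whenever units combine.

```go
package main

import "fmt"

// Distinct defined types give compile-time unit safety.
type Meters float64
type Seconds float64
type MetersPerSecond float64

// Speed is the explicit function call needed to combine differing
// units, since Go has no operator overloading for mixed types.
func Speed(d Meters, t Seconds) MetersPerSecond {
	return MetersPerSecond(float64(d) / float64(t))
}

func main() {
	d := Meters(100)
	t := Seconds(9.58)
	fmt.Println(Speed(d, t))
	// Speed(t, d) would be a compile error: mismatched unit types.
}
```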
What are you doing with uncertainty that you can do operator overloading with? You usually want to do Bayes rule with probabilities, but that gets intractable fast.
For the record, there are lots of other places where you do numerical work. Any kind of genomics work, for one, will involve tons of data processing and numerics, and it's not strictly an academic pursuit.
At my previous job, most of my day-to-day work was on algorithms for speech recognition and topic modeling, which is pretty much doubles flying left and right. That wasn't academia either.
You hit the nail on the head with academia. Physics, to be precise. About 80% of my code is data analysis stuff where I'm just reducing events from data files. The other 20% is Monte Carlo simulations.
As a developer on the Python stack, I would love to know when would be a good time to start using Go in serious production work. It seems to me that it solves a lot of the backend services infrastructure problems associated with interpreted languages (one of the reasons I was considering diving into Scala or other JVM languages), is relatively reliable, and has a fairly strong core library. It still seems bleeding edge, but the language seems to have developed far faster than Python did over the last decade or so.
The analogy is a little flimsy, but I'll run with it anyway: I consider Go today to be similar in some ways to Java at around the time of Java 1.1 or 1.2.
Obviously, Go is modern and is in many ways better than today's Java 1.7. But I am trying to illustrate its maturity level and the trajectory that I believe it's on. If you recall the days of Java 1.1, it was already seeing a great deal of early traction. The early traction of Go seems roughly the same to me. Also Java in its 1.1/1.2 years was on a clear trajectory to become a dominant language. I think Go will only grow in popularity for years to come in the same fashion. Even as a primarily Java developer, I look forward to Go being a clear and viable alternative.
I could be wrong about the trajectory, of course.
But I believe a short answer to your question is: if you're considering it, take some time to actually do something with Go. At first something experimental, then something for production use.
As a long-time JVM user, I've been trying to explain to other developers for a while now that assuming you use a modern approach to Java development, the performance of the JVM allows you to be (in my opinion) even more efficient than a dynamic language because you can code your application fairly recklessly. You can defer optimization in all of its forms for a long time, perhaps infinitely. The resulting mindset is a dramatically reduced concern about performance. When I work with most dynamic languages, I can never fully set aside the inner voice saying, "this is going to perform like crap." Trouble is, the voice is often right.
Go brings the same ballpark of performance as the JVM and a style that I believe is more appealing to Python developers than a modern Java stack (although I don't think modern Java stacks are given much of a fair shake because of Java's legacy, but that's a separate rant entirely).
To be clear: I don't write Go professionally and I am not one of the "Generics or bust!" advocates. I'm agnostic.
I simply used it as an example of something that many would point to as evidence of Go's maturity level. If the language maintainers don't ever add generics to Go, I think I'd be comfortable with that. And if that's the way it plays out, eventually the design decision will be seen as firm and not a sign of immaturity.
If you understood how to write Go, you would write an imperative ad-hoc loop instead of composing generic functional combinators. But you have to be mature enough to jump over the shadow of your functional pride and write clean imperative code.
Of course you can write a 'for' loop. You can also write a goto in C. I was replying to my parent who said 'you don't need generics, since Go has interfaces'. I wanted to point out that interfaces are not a general substitute for generics.
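To make the distinction concrete, here is a hedged sketch in the Go of this thread's era, which had no type parameters. An interface-based "generic" map compiles for anything but pushes type errors to run time, while the plain imperative loop stays fully checked:

```go
package main

import "fmt"

// MapIface is the closest an interface gets to a generic map: it
// accepts any elements, but callers lose static types and must use
// run-time type assertions to get them back.
func MapIface(xs []interface{}, f func(interface{}) interface{}) []interface{} {
	out := make([]interface{}, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	// The idiomatic alternative: a plain, fully type-checked loop.
	xs := []int{1, 2, 3}
	ys := make([]int, len(xs))
	for i, x := range xs {
		ys[i] = x * x
	}
	fmt.Println(ys) // [1 4 9]

	// The interface version also works, but a wrong type inside f
	// would only blow up at run time, via the assertion.
	zs := MapIface([]interface{}{1, 2, 3}, func(v interface{}) interface{} {
		return v.(int) * v.(int)
	})
	fmt.Println(zs) // [1 4 9]
}
```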
> But you have to be mature enough to jump over the shadow of your functional pride and write clean imperative code.
By that reasoning we can go back to assembly ;): you just have to be mature enough to jump over the shadow of your portability pride and write clean assembly code.
Abstractions exist to help us and in that respect Go feels like a throwback to the past. It's pretty much a Java 1.0 that compiles to machine code.
Agree, and a map also provides an "at a glance" assurance that I'm getting a transformed array of the same size. For loops take longer to discern what they're doing just because they could be doing almost anything, including early exit.
I'm using Go in production, and migrating existing Python services to it. I've found nothing wrong with using it for "serious production work". The ecosystem obviously isn't as mature, but it's getting there. There don't seem to be any gaping holes.
I wish we could know what happens "after 12 years in production" (just 10 more).
Sorry if anyone felt offended (hence the negative voting); I said it with the best and most constructive intentions.
In my opinion, the problem is not in the content of my comment. Well, maybe it is, if someone reads it as "oh, I can't argue against that, it's attacking the language" instead of "let your imagination run ahead to 2023".
NDK on Android is native (C++/C) and it's there to boost performance and avoid GC in game loops.
I've seen too many devs discussing how to mitigate the effects of GC in games to consider Go a worthy upgrade at this point in its development; the NDK would probably still be needed...
I didn't say get everyone to build apps. It would simply be another option. App development is getting more competitive, of course, so people who want to stand out might need to move to Go to gain an edge. Plus, I imagine that a Go environment might provide a better interactive development cycle.
Go seems to be hitting some kind of tipping point where it's going from a niche thing with a small user base to something with broader appeal. I don't think that's because Go is changing so much as because of the kinds of problems people are encountering writing web services that need to scale (or at least have that option).
I've been comparing Go to Scala lately, and what has really got me into Go is the development speed. The fast compiles and fast web page auto-reload make it feel every bit as fast to develop in as Ruby or Python or PHP, but with type safety, fewer tests, and very clean code.
Scala is a fantastic language, but even with jrebel autoreload you still have 2-3 second page reloads and a 5 second test suite reload. That seems like a small thing, but the faster I see code change results, the more hooked I am to the process. A 5 second wait is probably enough to get me out of the zone.
With Go, even a default simple test of a small thing takes less than a second to compile and run. In Revel, pages reload/update as fast as, or faster than, they do in Rails/Sinatra.
Oh, and with Go, each web request runs in a separate goroutine, so you get excellent nonblocking IO performance without dealing with the callback soup of Node.js.
It might just be irrational exuberance because I haven't built anything big and messy yet, but so far Go is seriously fantastic and solves a lot of real-world problems elegantly.
Ah, a painful point. I still try to ignore Scala's compile times and hope they will improve in the future (they do, gradually, with every release). I just close my eyes, run sbt ~test-quick, and hope for the best. Even so, Scala feels a bit more natural to me than Go: it feels closer to Ruby, which I like, and has the Java interop aspect, which I unfortunately need. Scala feels to me more like a statically typed Ruby, whereas in Go I need to shift some paradigms and do some mental twists to accept how great it is.
I think we need more than 1 account of Go in production before we start high-fiving each other about Go in the mainstream.
And to be fair to the person you are responding too, there has been an inordinate amount of Go articles on HN over the last few months compared to anywhere else on the internet tech/dev wise, so much so that a number of my friends have independently made a joke of it.
>I think we need more than 1 account of Go in production before we start high-fiving each other about Go in the mainstream.
I've been programming in Go full time since Go 1.0. Lots of companies use Go: http://golang.cat-v.org/organizations-using-go - and that list is out of date and not maintained. For example, companies like Mozilla and Walmart are not on it.
Agreed. We've been using Go in production since before 1.0, and I know of many other high-profile companies using it that aren't publicizing their usage for whatever reason. There is no question of whether Go works in production under load. That has been asked and answered, your honor.
> there has been an inordinate amount of Go articles on HN over the last few months
They are being submitted and voted up by people. If you want other content, then create / upvote it, just don't pretend 'inordinate' means anything in this kind of context.
The other thing that never improves threads is everyone only posting what they think might get upvoted instead of saying what they actually think, especially when there might be a grain of truth in it. Tedious comment back-patting is the death of good communities.
Yes, it's all a conspiracy! Because that's how open source development at Google works: create shitty technology, spam HN, Reddit et al. with it, and then let the massive Google workforce and the fanboys upvote the postings.
Please, let's not have an argument over which dev team is smarter. The fact is that Go and Rust are very different languages with different goals and feature sets, with different levels of ambitiousness and which started development in earnest at different times.
I, as a user interested in both, did find value in the comment regardless of his/her intentions. This comes off as a 'shut up' when really, it could be a good opportunity to look at what Go does right for implementation in Rust, no?
As a follower of the Rust project, the constant comparisons to Go are all the more tiring for how unwarranted they are. They are wildly different languages, with wildly different strengths and wildly different goals, designed for wildly different audiences. And yet people still insist on playing up the dramatic Google vs. Mozilla angle. I'm as tired of it as pcwalton is.
Because it's plainly a troll. ("Go will make you cry for using Rust" made that clear, especially considering the user's comment history, in which I've already explained why at least visibility on case doesn't make sense.)
[Disclaimer/Warning I work there and this is about Iron.]
- Native cloud service over HTTP transport
- Fast, clean, easy API
- Scales to unlimited queues/clients
- One-time delivery
- Push queues / pubsub / fanout as a first-class feature
- Push queues can have URL endpoints as subscribers
- Highly available (our #1 priority is to keep it running at all times)
- Nice UI to manage queues, stats, rates, etc.
- IronWorker integration (workers as a service)
The best way is to just try it out. It's already one of the leading cloud MQs out there, and we have a lot of big plans for IronMQ to make it the safest "bet your business" cloud message queue available.
I've used IronMQ in production for about 6 months. It's easy to use, but the uptime is terrible (http://status.iron.io/history), so if the messages are time-sensitive I wouldn't recommend it. You should also queue the messages locally to ensure they end up in IronMQ at all.