I've been comparing Go to Scala lately, and what has really got me into Go is the development speed. The fast compiles and fast web-page auto-reload make it feel every bit as quick to develop in as Ruby, Python, or PHP, but with type safety, fewer tests, and very clean code.
Scala is a fantastic language, but even with JRebel auto-reload you still get 2-3 second page reloads and a 5 second test-suite reload. That seems like a small thing, but the faster I see the results of a code change, the more hooked I am on the process. A 5 second wait is probably enough to knock me out of the zone.
With Go, even a trivial test of a small package compiles and runs in under a second. In Revel, pages reload and update as fast as or faster than they do in Rails/Sinatra.
Oh, and with Go, each web request runs in a separate goroutine, so you get excellent non-blocking IO performance without dealing with the callback soup of Node.js.
It might just be irrational exuberance because I haven't built anything big and messy yet, but so far Go is seriously fantastic and solves a lot of real-world problems elegantly.
And to be fair to the person you are responding to, there has been an inordinate number of Go articles on HN over the last few months compared to anywhere else in the tech/dev corners of the internet, so much so that a number of my friends have independently made a joke of it.
I've been programming in Go full time since Go 1.0. Lots of companies use Go: http://golang.cat-v.org/organizations-using-go And that list is out of date and not maintained; for example, companies like Mozilla and Walmart are missing from it.
I would expect this to be true of any language, if you code well. Regular web applications hold no state, so they are very easily scaled horizontally anyway. Databases, on the other hand, are trickier to scale the same way and will end up being the bottleneck almost all the time.
You might not.
Or, you might be surprised when you find something like "I'm making 5000 DB queries?" instead of your language being slow.
But certainly if "enough" people take my advice here who have not profiled one of their web pages before, there's a non-empty set of them that are going to go "Oh crap, I didn't realize that's what was so slow, I just assumed it was the database!" Not every web app is a glorified select statement.
And there'll also be quite a few people who discover that their page isn't "slow" or anything, but who will discover that the CPU vs. IO is closer to 50/50 than they realized or something.
Certainly in circumstances where you're doing a complex query involving fields that are not indexed or several joins, you're going to be waiting on the database.
But if you're just fetching rows by ID or indexed fields, slower platforms and languages end up being a bigger bottleneck than modern databases. Sometimes this is masked somewhat by the fact that the database drivers and/or ORM are slow, so from the application's perspective, the "database" is the bottleneck. But one should not confuse the drivers and ORM for the database.
What I was talking about was more the scaling of a system, i.e. what happens when your architecture needs to handle lots of requests. In that case, it is very rare for the application-server part of your architecture to be a scaling bottleneck, because it is generally stateless (for normal web apps at least) and hence very easily scaled out horizontally. Of course a faster platform will let you use fewer servers, but 15 servers on Go vs. 20 servers on Python is not that big of an issue.
In my opinion, many popular platforms and frameworks are not reasonably fast at providing a response in real-world applications. As a result, many web sites are frustratingly slow in my opinion (for example, a popular site used for hosting source code repositories). If those sites were to capture and share their profiling data (including time spent in drivers and the ORM), I would guess the database proper would not be as great a bottleneck as conventional wisdom says.
Perceived slowness is latency, and horizontal scaling doesn't necessarily address latency. Horizontal scaling may help alleviate an over-taxed CPU dealing with too many concurrent requests, but if a single request in isolation runs in 300ms, it will not run quicker than 300ms. It may run worse when contending for CPU capacity versus concurrent requests, but not better, unless a faster CPU is dropped in.
Performance matters, even in the world of horizontal scalability. Performance brings reduced latency (user experience) and reduced cost (size of cluster). If we can get that paired with an efficient, enjoyable developer experience, then yay for us.
Finally, "15 Go servers == 20 Python servers" seems a little unfair to Go.
Latency and throughput can be inversely related depending on the precise architecture of a system (queueing is the classic mechanism that trades one off against the other).
But in terms of "horizontal scaling", the goal really is to improve total throughput, often at the cost of a latency tax imposed by coordination.
That is the conventional wisdom I find in need of disruption.
But you are correct. Since a conventional database server can be difficult to scale horizontally in its own right, even that small latency contribution, multiplied by the number of queries run by a wide array of application instances, may ultimately make the database server the first observed bottleneck. By which I mean, the first device to reach 100% CPU without the simple recourse of throwing more money at Amazon and spinning up another instance to solve the problem.
So I buy that.
But when I see slow web applications--when I criticize a site for being slow--I am always talking about latency. When we look under the hood, assuming the application is not being silly with its queries or doing a fundamentally challenging work-load, user experience slowness (latency again) usually originates from the application's own code or slow platform.
For example, I've observed applications that require 200ms of server-side time to render a login page. Behind the scenes, I may observe that they are badly designed and include one or two trivial (but utterly unnecessary) queries. Still, those two queries can be fulfilled by modern hardware in ~5 to 10ms. The remaining ~190ms of server processing is on the application. To my mind, that is unacceptable. A login page should be delivered in ~3ms of server time (under load!) on modern hardware.
And back to the OP, Go is a platform that brings JVM-class speed (the capability to return a login page in ~3ms) to those who can't stomach Java. Bravo to Go!
When I was an engineer at Formspring I profiled our social graph service, which was basically a PHP client hitting a Python service that queried Cassandra. Thrift was used to communicate between PHP and the graph service and, this being Cassandra, Thrift was also used between the Python Cassandra client and the Cassandra server. So: two-way serialization, twice.
In the end, this isn't a bad design. I didn't write it, but if I were rewriting it now I'd probably use protobufs; aside from that, it was a clean separation of concerns and fit nicely into our larger SOA.
Point being, though, that serialization is CPU expensive. Reading from Cassandra was blazingly fast compared to the work Thrift was doing.
All that said, I think noob engineers should be taught that network operations (db queries included) are at least an order of magnitude slower than, say, opening and writing to a local socket. Experienced engineers can see the balanced view of things and agree with the point you're making, so advocating it, IMO, will only serve to make you look smart and confuse noobs.
You're right; with that in mind, the conventional wisdom is worth retaining so as to instill the proper fear of treating database queries as trivial. Thinking of queries as cheap, or not thinking about queries at all, is what gets applications into a state where a single request runs dozens if not hundreds of queries to deliver what is effectively static content! :)
That said, I've met more than a few senior folks who continue to fiercely stick with the premise that database queries trump all. As someone else in this thread said, profiling can be illuminating, even for senior developers. But it seems on that point, we're all in agreement.
The reason this is the case, I think, is that Go strives to make algorithmic complexity clear: you know when you're allocating memory, but you don't need to jump through hoops to do it. You know the rough performance costs of what you're doing because the core team works hard to make it obvious. For example, some regular expressions features (lookahead, if I remember correctly) aren't implemented in the Go standard library's regular expression package, because they're impossible to achieve in O(n) time. https://groups.google.com/forum/#!topic/golang-nuts/7qgSDWPI...
This level of care in making simple, clearly-defined tools with known properties makes it easy to code well. Ruby, Python, PHP, NodeJS... you can shoot yourself in the foot and not even notice.
Thing is, that's true for some small number of machines. But when you build scalable systems and actually scale them you get to a point where the balance tips. And it just so happens that building those systems is exactly what the Golang team has in mind.
If you look at the Scala books (I've read the one by Martin Odersky and the one by David Pollak as well), they all seem to push you towards the functional programming model instead of the much more common imperative model.
But you know, there's a problem with that. Not with the language itself, but with the web-development side of Scala. There are only so many ways you can structure a function that accepts a request and returns a response. So most of the time I would find myself wasting time thinking, "How can I make this more functional?"
Functional programming is really great when you can actually enforce it, when writing algorithms and so forth, but otherwise you are forced to fall back to the imperative model, which sucks, because you constantly have to switch your mindset between the two.
Another downside of Scala is its HUGE syntax. Processing strings? Here are ten million ways to do it (exaggerating a bit). Oh, and if you chose this way, then you should use this syntax. No, not that one, THIS one. Oh no, not that one, this variation of that one that looks like this.
I've advocated Scala to many HN members here in the past, you can even check out my comments, for instance. But from my experience, I think Scala is an Academic language. But it's superior in its own ways - I just LOVE the concept of avoiding nulls and passing an Option. It's beautiful. But the downside is its huge academic complex syntax. I want to be able to write code which shouldn't reduce my hair count even if I look at it after 6 long months. I don't want to refer to an 800 page book each time I'm confused about something.
That's why I think Go is beautiful. The syntax is just enough to keep it inside your head forever. I fear that as the language matures, this might also evolve into something as complex as Scala, but let's hope not so.
Go isn't a magic bullet though. It has its own downsides, but nothing w.r.t performance or similar. For the most part, it's awesome.
Once you go Go, you never go back.
P.S. I still love Scala as well ;)
I have to admit, I don't agree with the assertion that the cognitive dissonance associated with programming in an imperative way versus a functional way is a major problem to Scala. I realize personal preference is a very big part of this, but there are many problems which are just easier to solve in a functional way versus an imperative way. Of course, the opposite holds true as well. This is the beauty of Scala; one can write performance-critical imperative code while exposing a functional interface; or, one can consume a fluent functional interface to produce imperative code.
Frankly, I guess I've become something of a functional zealot, so my problem with Go is that it's so stubbornly imperative. This is why I can't get behind Go, as much as I want to. I feel like it doesn't consider many of the lessons that have been learned about FP making life easier. Set operations (map, filter, reduce) are insanely common, yet doing them imperatively sucks to the point of discouraging them. Excessive mutability is difficult to reason about. Nobody expects the nil/null/none inquisition. We tend to forget about side effects. Without an extensible language, which requires truly first-class functions, you're at the language designer's mercy.
Hell, Scala probably isn't the be-all, end-all answer to all of this. I just don't think that a doggedly-imperative, non-composable, statement-oriented language is the future of programming "in the large", not when the past has shown us that we tend to produce buggy, unmaintainable software this way. I'm pragmatic enough to realize that pure FP isn't a solution, but I feel strongly that sticking to traditional imperative because it's familiar is a costly mistake.
I can't argue with your take on the syntax, though, since that's personal preference. =) If you have any thoughts on why you feel productive in Go, I'd love to hear them; as I've said, I've been really struggling with motivating myself to learn and enjoy it.
Go is quite different from Scala or Haskell because it follows the principle that a simple language is better than a complex one (the principle of least power). The type system in Go is a bit weak, probably one of the simplest ever; Nimrod has basic OOP joined with algebraic data types and something similar to Clojure's multimethods, and Rust, I believe, has type classes. So yes, if you're happy writing functional code in powerful languages, maybe you will not be totally happy with Go.
And like I said, I still love Scala, for the reasons you've cited. But then again, there are a couple of reasons which I have in mind for going with Go.
The main problem with Scala for me:
1) It is difficult to find good Scala engineers. If you find one, you still need to figure out whether he's comfortable with the functional or the imperative model.
This is an example from one of my previous commenters (dxbydt, thank you) on my thread:
scala> def test1 = println ("hello world")
scala> def test2(f: =>Unit) = println ("hello world")
scala> def test3(f:Unit) = println ("hello world")
scala> val t = test2 _
scala> t apply Unit
With all that said, I know of people writing a Scala program and not touching it for a year or two unless they wanted to update their OS on their servers.
I'll agree that this set of cases does not reflect well on Scala; at the very least, it supports your point that the syntax is too large. Still, I don't mind it too much. Although the "multiple ways to skin a cat" nature of Scala means you can get very WTF-y code like above, it also means that you can construct a list by writing `1 :: 2 :: 3 :: Nil`, or send an actor a message with the Erlang-inspired `actor ! Message("hello")`.
I don't want to be an apologist for it, though. You're correct that there are some nasty corner cases. Using Scala for serious work is difficult if people don't agree on a consistent style.
Edit: Functional Programming in Go
As the other answer to your question states, adopting a functional style in Go is technically possible, but practically it is so inconvenient as to be unusable.
EDIT: Take a look at this article, which explores developing a library that adds runtime-checked type parametric functions. http://blog.burntsushi.net/type-parametric-functions-golang
"There’s no such thing as a free lunch. The price one must pay to write type parametric functions in Go is rather large:
Type parametric functions are SLOW.
Absolutely zero compile time type safety.
Writing type parametric functions is annoying."
Neither, use a combinator, like map or traverse. (And that applies to all languages that support this style.)
Sometimes "academic" seems like a catch-all for stuff people don't like. Scheme has a strong academic history in its use and implementation, yet it seems to be described as "academic" only when someone is unhappy with how minimalistic it is, which is the opposite issue described here.
I want a language that is small, clean, and can provide a lot of static guarantees. I know many people find static guarantees an unacceptable curtailment of their "programming freedom", but frankly I think they're the answer to many of the problems we face in software today. Small is another thing: Go and Scheme are small; C++ and Scala are large. You know what I mean.
Now, static typing is just one kind of static assertion I'd like to have. Side-effect isolation (i.e. language-enforced purity) would be another feature I'd like to see become common. For example, the D programming language has a "pure" keyword that lets you mark a function as side-effect free. (In Haskell, all functions are pure by default, and you escape the confines of purity via monads.)
I'd like to do research into (inventing) other forms of static assertions. One thing that's been on my mind lately is static "complexity assertion" for programs. I don't know if this is even possible, but it would be nice to be able to ascertain, before running a program, certain time & space complexity bounds on it. This would perhaps require us to drop the language to some level "slightly below" Turing machines -- but this could itself be achieved via a keyword like D's "pure" or something more elegant. (Note: my point is not that we'd make the entire language non-Turing-complete -- rather that we'd have subsets of the language that are not, and this reduction in power would/could bring with it a greater possibility of statically asserting things.)
 FYI I made this term up. Let me know if there's something better that describes this.
Further discussion on LtU: http://lambda-the-ultimate.org/node/2131
I don't know about knowing what "space" a program will consume ahead of time, but I believe the halting problem means there'd be no way of computing a time requirement.
A trivial example would be a programming language that did not allow any loops or explicit backward branches. You would be able to provide upper bounds on how many operations such programs can perform.
> if you limit yourself to a language or a sub-set of a language that is not Turing-complete, then you can make static assertions about things such as halting
There's already something that is non-Turing complete, but comes with a halting guarantee -- it's called total functional programming. This paper (linked below) is actually what got me thinking about this whole thing. The author argues in it how functions in functional programming languages aren't really like mathematical functions, because they have the possibility to execute infinitely (never halt) and thus not have a proper return value. To mitigate that, he creates "total functional programming".
Total functions are guaranteed to have a return value (i.e. terminate eventually), but they could take very long to run. If we could actually have complexity bounds, or be able to determine a given function's complexity, that would be a great boon to software engineering (i.e. assuming they adopt it... which is a whole different matter).
The author also makes a very good point that most parts of your program are meant to terminate (i.e. the vast set of functions and algorithms that comprise you program). Only the top-level needs to be Turing-complete, the rest can at least be total.
I actually want to see if it's possible to push this concept further, to create a ring-like hierarchy of languages -- the lowest one being the weakest and the one we can extract the most static assertions out of, and so on. There's a language called Hume that sounds like it accomplishes at least part of this goal, but I haven't really looked into it.
This is not true. Nothing about monads, in themselves, allows you to write impure functions.
ATS is my go-to language for static guarantees with C's efficiency. Not sure if it counts as small, but the implementation is not big.
Maybe it's not exactly what you meant, but it can do basic analysis of loops.
Not necessarily a bad thing; for web development, most of its depth is overkill.
What it needs is some leadership. A few weeks ago Rob Pike and Andrew Gerrand (developers of Go) appeared on The Changelog podcast, which I recommend listening to, and one thing that Rob Pike said really intrigued me. He said that a policy they have when designing Go is to ensure that all members of the core team agree on any design decision before it gets put into the language. If one of them disagrees it gets tossed.
That's probably one of the main reasons why Go is such an "opinionated" language and very small in nature. This probably infuriates some people as their favourite feature from other languages is missing in Go, but it keeps the language where it is and steers it on a path that the designers are maintaining complete control of.
Your complaints about Scala's size are more on point and oppose this; Scala is such a big language because it is designed to be an industrial language (academic languages tend to be small and focussed), specifically targeting the enterprise uses of Java.
Academic functional languages that aren't designed to provide an easy glidepath for users of a particular large imperative OO language and provide access to that industrial ecosystem tend to be smaller than Scala.
I think one of Scala's biggest issues is people being turned off by what sometimes feels like functional zealotry. But people are perhaps not aware that it is also a great imperative language: you get no compiler warnings if you use a var or a mutable collection.
I find it very close to Ruby, actually, and I think Ruby developers will feel at home with it.
The other thing that holds it back IMHO is the compile time.
(But this is improving from version to version)
Now that you mention compile time: one of the biggest things the Go designers cared about was compile time. Go touts it as one of its major pros, and I agree from personal experience that compile time matters.
You can often cut down compile time by making run-time compromises. For instance, with C++ code, dynamically linking rather than statically linking everything can speed things up significantly. With massive C++ codebases, when you change a couple of lines, rebuilding the relevant objects takes only a few seconds, but the static linking stage can take forever.
On a memory-constrained system (4 GB of RAM), I've seen a particular codebase I've worked with take up to 28 minutes just to link. The same code on a machine with 8 GB (just double) took less than 4 minutes to link. Due to the sheer number of objects that need to be linked, the system ends up thrashing (swapping pages out to disk).
That being said, I read somewhere that Go doesn't support incremental compilation. I don't know if this is still true, but that's a major problem that needs to be fixed right away.
With interpreted languages, practically everything is done at run-time and you have no compilation stage -- but at a massive performance penalty. Tracing JITs do help though.
On the other hand, Scala lets you deal with collections and Strings using the same enormous set of methods.
List('"', 'f', 'o', 'o', '"').drop(1).dropRight(1)
Was it so hard to type one letter in support of readability? Or is "cray" its own word now?
Was it so hard to type four letters in support of readability? Or is "bro" its own word now?
You don't need to fallback to imperative style. The fact that people write web apps in pure functional languages should make that obvious.
>Once you go Go, you never go back.
That depends where you came from. I really tried to like go, but went running back to haskell. Go is just way too primitive.
I must be missing something.
As others have pointed out, you can use directories as namespaces, which works nicely. But I've found, personally, that splitting my projects into separate repos for each namespace is actually beneficial. Helps keep my code clean and separated.
That being said, versioning is a real pain right now. The best thing I've found is to fork a repo when I decide to use it, then pull in updates as I adjust my code to work with them. Definitely not ideal, and if you have a lot of projects using the same dependency, it becomes a major headache.
There are some workarounds, and we've discussed the topic at great length on the mailing list. I think it's something we'll see a solution to in the next few years. But one of the things about Go that I really enjoy is that the core team is hesitant to push half-baked ideas onto the community. When we see a solution, it tends to be an elegant, clean solution that fits perfectly into the problem it solves.
In other words: yes, there are some problems. Yes, I do think they'll go away. Yes, I do think we'll need to be patient. Yes, I do think it will be worth it.
The page at http://golang.org/doc/code.html explains how to organize code into packages.
I have similar issues where I work with trying to integrate an internal build system and rubygems. Our answer is to essentially mirror the gem version internally into our own repo. It's not the best answer I could hope for.
The real thing that I haven't figured out a consistent solution to is how to manage versions. You could approximate version numbers with branches or tags, but nobody's put together a convention for it that works well with the tooling. That'll be important as go matures and has more libraries.
On the other hand, if I wrote server code, operator overloading would be far less useful. I'd probably curse any programmer who used it and thank the gods that it was left out of Go.
Conversely, since I write a lot of numerical code, I don't care about generics or typing, which are crucial to many other programmers. Generics don't matter since everything I work with is either a Double or a collection of Doubles. Similarly, static typing doesn't help, since most functions just have the type (Double -> Double) and the type checker can't tell a sine from a logarithm. Of course, the reverse is also true: since everything is a double or a collection of doubles, the fancy tricks that dynamic languages offer don't gain me a damn thing, so I'm extremely ambivalent about the typing debate.
Of course, on other projects, I've written code that benefited from static typing and I've written valid code that would choke any type checker. I've written code that heavily needed generics. When I did that, I used the languages that had the features I needed.
Go just won't work for the programs I write and it sounds like it doesn't work for yours, either. That's why we won't use Go. I've heard it works wonderfully for a certain class of server software and I'm glad those guys have a useful language for their domain. If I ever have to write another server, I might even teach myself Go. But don't feel guilty that you're using an old hammer instead of the shiny new saw.
Introduction to the manual: http://docs.julialang.org/en/release-0.1-0/manual/introducti...
Oddly enough, in Go it can.
type sine_val float64
type log_val float64
(Not that I am advocating this sort of approach, but it is possible to let the typechecker work this stuff out if you are dedicated)
Typing can become useful in numerical code when you move past operating on scalars. Column and row vectors needn't be confused, a two-vector and a complex number have different types, etc.
Also, physical quantities can have different types and a type system can be useful there.
I totally agree that for numerical code, operator overloading is of great utility.
(-b + math.Sqrt(b*b - 4*a*c)) / (2*a)
though if you're using a ton of matrices it could be.
I for one have found Go great for computing. The really quick compile times with static checking plus the composability are great. It definitely depends though; while the native concurrency is great there aren't a lot of easy solutions for non-shared memory computations. (I saw an MPI package at one point, but I haven't tried to use it)
Another poster pointed out unit analysis. I've done this before with custom types that keep track of the units on measurements.
Since you mentioned parallelization, that's another fun toy I've played with. By overloading the operators for an object that defines a snippet of OpenCL code, it's possible to push these snippets through pre-existing functions and have it return a final OpenCL function. You then call that returned function on your arrays of data to run everything through your GPU with just three lines of code changes from the sequential.
Operator overloading is more than just adding matrices. It's a powerful technique that comes in handy almost any time that you're working heavily with numerical data. Of course, it's also dangerous as hell in the wrong hands. The code for the OpenCL example was actually some pretty terrible code that did extremely non-intuitive things during value comparisons.
What are you doing with uncertainty that you can do operator overloading with? You usually want to do Bayes rule with probabilities, but that gets intractable fast.
(-b + math.Sqrt(b*b - 4*a*c)) / (2*a)
At my previous job most of my day-to-day work was on algorithms for speech recognition and topic modeling, which is pretty well doubles flying left and right. That wasn't academia either.
I've yet to actually meet someone in industry who does numerical work that isn't some form of data analysis. Maybe someday...
Obviously, Go is modern and is in many ways better than today's Java 1.7. But I am trying to illustrate its maturity level and the trajectory that I believe it's on. If you recall the days of Java 1.1, it was already seeing a great deal of early traction. The early traction of Go seems roughly the same to me. Also Java in its 1.1/1.2 years was on a clear trajectory to become a dominant language. I think Go will only grow in popularity for years to come in the same fashion. Even as a primarily Java developer, I look forward to Go being a clear and viable alternative.
I could be wrong about the trajectory, of course.
But I believe a short answer to your question is: if you're considering it, take some time to actually do something with Go. At first something experimental, then something for production use.
As a long-time JVM user, I've been trying to explain to other developers for a while now that, assuming you use a modern approach to Java development, the performance of the JVM allows you to be (in my opinion) even more efficient than a dynamic language, because you can code your application fairly recklessly. You can defer optimization in all of its forms for a long time, perhaps indefinitely. The resulting mindset is a dramatically reduced concern about performance. When I work with most dynamic languages, I can never fully set aside the inner voice saying, "this is going to perform like crap." Trouble is, the voice is often right.
Go brings the same ballpark of performance as the JVM and a style that I believe is more appealing to Python developers than a modern Java stack (although I don't think modern Java stacks are given much of a fair shake because of Java's legacy, but that's a separate rant entirely).
Will they ever add generics? Not sure. Will Java ever have proper first-class functions? Not sure.
I simply used it as an example of something that many would point to as evidence of Go's maturity level. If the language maintainers don't ever add generics to Go, I think I'd be comfortable with that. And if that's the way it plays out, eventually the design decision will be seen as firm and not a sign of immaturity.
But you have to be mature enough to jump over the shadow of your functional pride and write clean imperative code.
By that reasoning we can go back to assembly ;): you just have to be mature enough to jump over the shadow of your portability pride and write clean assembly code.
Abstractions exist to help us and in that respect Go feels like a throwback to the past. It's pretty much a Java 1.0 that compiles to machine code.
Maps are in principle trivial to parallelise. That would be a nice feature.
I love using Go and I'm a Python guy through & through.
Now would be a good time. No language, runtime, compiler, library, or framework is ever going to be perfect, but now is a great time to dive in.
> It still seems bleeding edge
This is probably a good thing in many respects because Go doesn't have the baggage from yore, and it was created by some pretty smart and capable people.
> but the language seems to have developed far faster than Python did over the last decade or so
Language designers are getting better at marketing. No language succeeds without fantastic marketing.
You'd think that more reasons are required than "it is new, doesn't have baggage, and was created by smart people".
Sorry if anyone felt offended (judging by the downvotes); I said it with my best and most constructive intentions.
In my opinion, the problem is not the content of my comment. Well, maybe, if someone reads it as "oh, I can't argue against that, it's attacking the language" instead of "let your imagination run to 2023".
Have a nice day/night.
Go compiles to native code, so something like "compile on install" would probably be needed.
Go seems like an exercise in frustration to me at the moment for anything GUI or low-level OS-related.
Many GUI libraries use an object model that's difficult to map onto Go's heavily restricted interfaces, especially with a lack of generics.
In addition, interacting with popular libraries (such as libsdl or even OpenGL) that use thread-local storage (TLS) means using ugly workarounds like this one:
So I think it really comes back to the "right tool for the right job." For most command-line utilities, and for anything networking-related, Go would be my first choice.
But for anything that needs a modern GUI toolkit and uses OpenGL, it would be difficult for me to justify.
Again, I love the model Go provides for programs and packages purely written in Go; it's only when interfacing with system-level components that I get cranky.
Go might be a much better choice from a pure dev standpoint, but from a getting-everyone-to-build-apps standpoint, it's a failure in the short-to-medium term.
That said, not having run into a problem is worthwhile feedback.
Go's standard library, case-based visibility, and gofmt, among other things, will make you cry for having to use Rust.
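For readers unfamiliar with "visibility by case": in Go, an identifier is exported from its package if and only if it begins with an upper-case letter; there are no public/private keywords. A minimal illustration (the `counter` type is hypothetical):

```go
package main

import "fmt"

// counter is unexported (lower-case): usable only within this package.
type counter struct {
	n int // unexported field: hidden from other packages
}

// Increment is exported (upper-case); importing packages could call it.
func (c *counter) Increment() { c.n++ }

// Value exposes the private field read-only.
func (c *counter) Value() int { return c.n }

func main() {
	c := &counter{}
	c.Increment()
	c.Increment()
	fmt.Println(c.Value()) // prints 2
}
```

Since the convention is enforced by the compiler, a reader can tell at the call site whether any identifier is part of a package's public API.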
I really hope Rust gets better with time and really focuses on being developer-friendly, not just a bag of nice features.
Then why post?
That particular quote relates to "cleanliness of the language"; obviously it's not an apples-to-apples comparison.
My apologies to all of you Rust devs who might be offended by my comment; that wasn't my intention. You are doing an awesome job. I'm looking forward to 1.0; meanwhile I'll keep toying around with the language.
My apologies again to Rust devs.
- Native cloud service over HTTP transport
- Clean easy API
- Scales to unlimited queues/clients
- Push queues can have URL endpoints as subscribers
- Highly available (our #1 priority is to keep it running at all times)
- Nice UI to manage queues, stats, rate, etc.
- IronWorker integration (workers as a service)
- Fast, clean API
- One-time delivery
- Push queues / pubsub / fanout as first class feature
- IronWorker integration (workers as a service)
The best way is to just try it out. It's already one of the leading cloud MQs out there, and we have a lot of big plans for IronMQ to make it the safest "bet your business" cloud message queue available.
We now also offer isolated clustering for production-level, highly available applications that need "four nines" availability.