Hacker News
Things from Python I'd miss in Go (yosefk.com)
128 points by smackay on June 10, 2014 | 131 comments



Towards the end of the article there seems to be some confusion between servers and web servers. Yes, Go is nice for writing servers; no, web servers aren't the only things out there doing 'serving' in systems-land.

Three examples from CloudFlare all written in Go:

1. Our Internet compression/optimization technology called Railgun

2. Our DNS server

3. Our CA infrastructure

All are networked, all are highly concurrent.

Also, our entire logging and analysis infrastructure is being migrated to Go.

PS Forgot that we also recently rewrote the code that does image compression (using C tools) in Go as well. Here the Go code is working as a job server.


It looks like the author found the worst use case for Go in comparison to a library that Python absolutely soars in and uses that as a basis for this argument. Go does not have something as nice as numpy. For numeric computing, by all means use Python.

He then makes some crazy statements about operator overloading as if that is the essence of a good language. I disagree. I don't want operator overloading. I almost never used operator overloading in C# or Java. Maybe that's just me, but it's certainly not a reason to avoid a language.

The crux of his argument is that C++ users who switched to Java years ago are the types of programmers that Go wished to convert.

He neglects to mention deployment. He neglects to mention stability in runtime. He neglects to mention any low level meddling that you have to do in compression. The examples you posted are exactly the kind of things that Go soars in. It's very convenient to leave them out while beating your chest about how your language is a superior language because it is superior in a very specific niche.


Erm... so:

* I didn't "find" my use cases - they found me. All are real things I do with Python.

* I'd switch to something faster than Python, and finding more errors statically, if it wasn't for the bunch of things I'd miss.

* I like C better than C++ though the former lacks operator overloading; it's not "the essence of a good language", just one thing I use in Python.

* Further evidence that "I'm not here to bitch about Go for the sake of it": I didn't mention generics or similar, for instance - because they don't matter in the cases where I use Python, at least I don't recall that they do. I'm not looking for things to complain about.

* Deployment, etc. - it probably won't make someone switch to Go though it might be nice if you start in Go, and also - it's an argument in itself. I like how in Python you can change a file and all code using it "sees" the change, or how in languages with dynamic libraries/runtimes/whatever you change the library and all code "sees" it. Basically there are deployment trade-offs and in my environment - a controlled LAN - Go is simply worse. On a DreamHost box with ancient everything it'd be a godsend.

Overall - I don't think "Python is better than Go", but the ones who "beat their chest about how your language is a superior language because it is superior in a very specific niche" are people arguing about Go the way you do :-) That niche in Go's case being servers.

Speaking of servers... I know the difference between servers and web servers, I just said/implied that if Go is great for web apps, then it might take off as a really big thing, if it's just for high-performance infrastructure it's a language for Google, CloudFlare and a small number of others...


You've been saying that it's just about your own use cases, but your statements in that post are much more general. For example:

> What does Go have that Python lacks? Performance, static typing.

That's a very simplistic view. Being a long-term Python developer who switched to Go as my preferred language years ago, here are some of the things I appreciate on the other side of that fence:

- Performance, indeed.

- Static typing, yes.

- A simpler language specification (a big one for me, game changer for large code bases in large teams)

- Interfaces based on structural typing ("static duck typing", as some say)

- Much better control of memory layout (a struct with 2 int64s takes 128 bits in memory, and a list of those takes N*128 bits)

- Built-in micro-threads (goroutines)

- A runtime that schedules (meaning you can block, despite concurrency)

- Buffered and unbuffered channels managed by the runtime (with select, etc)

- Builds native static executables, fast (maybe you don't care, many people do)

- A much better organized standard library

- A very good http server _and_ client package in the standard library

- Good crypto packages and APIs in the standard library

- Error handling without exceptions (that's a pro for me and many, feel free to disagree)

- A very elegant system to define access control (private/public) on vars/funcs/types/fields/consts/etc.

- All names inside files that are part of a package ("directory") are part of the same namespace (allows better organizing code than with Python's file-is-a-module style)

- Code files that can see declarations out of order (improves code organization)

- gofmt! godoc! Love those.

- None of the Python 2 => 3 migration mess (lately, people are being told to program in a pseudo-language that is neither Python 2 nor Python 3)

And so on... That's not to judge your use of Python, though. By all means stick with Python if it suits your needs.

As a last point, if you're interested in Qt bindings, I've been working on this:

https://github.com/go-qml/qml

Some applications coming out of it:

http://blog.labix.org/2014/04/25/qml-contest-results


> - Performance, indeed.

I have been really disappointed and underwhelmed by Go's performance (I have not tried gccgo yet). For such a simple, statically typed language, I was expecting an order of magnitude better performance. So, in light of Cython and PyPy, the performance story is quite murky; the other points do stand.


Performance is always a hard topic to talk about. For almost any language you can optimize hotspots in the code to make it go super-fast. You can make any code run slow in any language if you are not familiar with how the language behaves.

Go has pprof which allows for profiling of your code to help find the hotspots. Without a specific case it's hard to say that Go is fast or slow compared to other languages.


That's not about how much the code might be optimized. I could always step out of Python and do a C module if I needed speed.

Instead, what's valuable in terms of the perceived performance delta is the speed of standard code as one would write intuitively and conventionally. With a reasonable understanding of how CPython compiles and interprets code, and a reasonable understanding of how the standard Go compiler suite generates binary code, one can make ballpark-style statements of performance for such code without lying too much.

Sure, there's PyPy, and there's gccgo, but that's not what most of the respective communities are using today, and it's not what we use in the projects I'm part of either.


I'm curious - why did you go with Python over Java in the first place? Many of your bulletpoints are also things that Java gives you, except for the duck-typed interfaces, goroutines, and control over memory layout.


> It looks like the author found the worst use case for Go in comparison to a library that Python absolutely soars in and uses that as a basis for this argument.

But you also seem to be using a bit of a straw man here as well, aren't you?

You focus how he picked numpy and operator overloading.

Ok how about REPL, does go have one? No.

Does it support dynamic code loading?

Does it have exceptions?

He wants exceptions. That is his post. Remember, the title "Things I'd miss". Not things "jamra" should miss.

Let's continue.

Does Go have a lot of GUI bindings? No. Ok talk about that maybe as well.

> He neglects to mention deployment. He neglects to mention stability in runtime. He neglects to mention any low level meddling that you have to do in compression.

So where is your blog post then?


Ok. You have a point, though I don't think my argument is a straw man. Numpy is one of those rare and great libraries.

Go's error handling is the way it is for a reason. The authors chose to leave out exceptions. Stating that exceptions are better because you're more used to them or because they more easily hide errors is not a good critique. Even if you choose to squelch your errors in Go, you can always go back into your code and handle your errors appropriately. That makes it far easier to find bugs and makes your code very robust. I personally like how it compares to C in that regard.

Go has some QML bindings that are pretty great, although I don't think that third party libraries are a good measure for a language. Especially seeing as how Go is quite young. I think that facets of the language itself are better choices of criticism.

To help the OP, I think that Python and other functional languages make dealing with abstract concepts more appealing. I walked the tightrope between writing Flask web sites and Go web sites. In the end, I know how to do either, and I don't think one language needs to be better than another. In the case of my example, SQLAlchemy makes it much easier to build web apps. This does not detract from my issue with these kinds of articles in general. This seems to me more like a programmer trying to convince himself not to like a language due to his comfort zone.


> how about REPL, does go have one? No.

What? https://github.com/rocky/go-fish "Yet another Go REPL"


Made by a third party, it doesn't come with Go. Granted neither is numpy.


> I almost never used operator overloading in C# or Java.

Dude, you can't overload operators in Java. You can overload methods of a class, but not operators. Operator overloading means that you can overload '+' for example for your particular class, which would enable you to write code like (in Java):

    Matrix a = zeroMatrix(4,4);
    Matrix b = identityMatrix(4,4);
    Matrix c = a + b;

You can't do that, so you would have to write:

    Matrix c = a.add(b);

Which makes your code verbose and makes Java (and Go) painful for doing numerical analysis.


It's important to note that numerical calculation is a domain particularly suited to operator overloading simply because the underlying domain (math) already uses operator overloading heavily (e.g. multiplication means something different when done on scalars and vectors, but in both cases it is well defined and in common usage).


You should leave out the 'overloading':

- Math (and its very close relative, logic) are the only domains that use operators, period.

- other domains use overloading; homonyms are very common.


Which leads us to that ugly a.equals(b) instead of the sane a == b. When not abused, operator overloading is a blessing.


Except that a == b and a.equals(b) are two different things.


As it stands you can't in Java but you can in C#, although it's usually a really bad idea.


My comment only applies for Java - I don't know C#. As you say, it's usually a bad idea but it's quite reasonable to do for numerical analysis, as your sibling comment says.


You are right. Can you tell which language I spent more time with?


"Also, our entire logging and analysis infrastructure is being migrated to Go." This intrigues me. I've been moving a lot of ETL into Go (as it's pretty well suited to this), but I still end up doing a lot of the analytics in Spark (for bigger work) or Pandas or R (for smaller bits of data). What are you guys planning on doing for the analytics? Or is it more of straight up time series work (so you can use influx + whatever). Just curious and keep up the good work :)


Since I'm not actually involved in that project I'm going to defer answering until we write a detailed blog post.

Also, if you are interested in a job...


"Also, our entire logging and analysis infrastructure is being migrated to Go." This intrigues me....

Go is great for this kind of stuff. Spark is great when you have a lot of custom queries, but if you have a few fixed queries, writing a simple solution in Go can have huge gains.

I implemented a web service + simple map-reduce (with network transparency) + a time-series DB (storage engine too, NOT using LevelDB like Influx) all in Go, and the result was many times faster than a Hadoop cluster an order of magnitude larger, BigQuery, etc., for the limited operations it supports.


Did you use a framework to run mapreduce with go or did you roll your own?


We wrote our own, Go 1.0 had just come out.


It seems that you have the canonical type of use case for Go.

Server systems for thousands of customers and a huge number of requests. Python became a jack of all trades, where concurrency and performance are not the main focus.


Is the Lua stuff going away? What are your opinions of Go vs Lua?


Nope. No need to throw away the Lua. We're moving more and more to a combination of C-based (nginx) for core serving, Lua-based for logic and Go for services.


Sometimes we get so invested in a language that we forget that it's just a tool, and a good engineer/hacker should try to choose the best one for the problem at hand.

Go is just another tool in our toolbox: as the author says it sacrifices some of Python's friendliness and ease of use (but not as much as other compiled/statically typed languages) and features for performance and a solid concurrency model.

It's up to us to decide what the best fit for our applications is.

Even if Go is definitely not "The Language to rule them all", it's nice to have more options.


First of all - yes they're "just tools" and that's pretty much what I said - Go is for servers and it won't "rule them all", and I wouldn't write the obvious if it weren't for the massive Go hype.

Another angle though - a language comes at a massive cost of vocabulary (builtins and libraries) which dwarfs the cost of learning a new grammar (easy compared to natural languages, though a cost in itself). A language suited to a narrow niche will still replicate a lot of libraries, tools, etc. Whoever pays the costs, be it a person, a company or society as a whole, too many languages in one's toolbox is not necessarily a net gain.

So if Java could have gotten goroutines without having to make a whole new language... would Go be a good idea? (I don't know the answer and maybe it's practically irrelevant but it's an interesting question I think.)


> So if Java could have gotten goroutines without having to make a whole new language... would Go be a good idea?

http://blog.paralleluniverse.co/2013/05/02/quasar-pulsar/

"We’ll start at the end: Quasar and Pulsar are two new open-source libraries in Java and Clojure respectively that add Erlang-like actor-model (and Go-like coroutine/channel) programs to the JVM."


I try to advocate polyglot programming - using the best tool for the job - everywhere I go, but unfortunately people are largely allergic to the idea. Most programmers don't want to learn new tools, much less new paradigms or methodologies. Even worse, the management agrees - the upside of building better systems isn't convincing enough to justify a temporary drop in performance while learning.

More to the point, I don't understand why the OP wrote this article in the first place. It's obvious that Go is not designed to support his use-cases and that's really all there is to it. Listing reasons why Go is bad at doing things it wasn't meant to do seems a bit pointless to me. It's like bashing a screwdriver for how bad a hammer it makes...


The drop in performance isn't just while learning. I worked somewhere where I had to regularly use TCL, Java, PHP, C++ and Python - each was the most suitable tool for the job (at the time the tool was written anyway) but the constant context switching caused a real and permanent performance hit.


> but the constant context switching caused a real and permanent performance hit

Maybe the context switches were too frequent? Or the opposite, too rare, which could lead to repeating the learning overhead each time? Personally I didn't notice any slowdown due to switching between languages and technologies - other than at the beginning, when I was learning them.


For me, it's the infrastructure & best practices that's the biggest cost.

There are degrees of language learning. I went through a "everyone should be a polyglot programmer" phase when I had about 3-4 years of experience, because I was then proficient - not expert - at about a half dozen languages. At that point, I knew the syntax and semantics of all of them, the common standard library calls that I needed for everyday programming, and most importantly, I could mentally map between constructs in my head. So I'd be like "This is a 'for x in collection' loop in Python, a 'for (String x : collection)' in Java 5, a 'for (var x, i = 0; x = collection[i++];)' in Javascript', a 'map (\x -> ...) collection' in Haskell".

I'm coming up on 10 years of experience now, and I try to limit the number of programming languages I work with pretty dramatically now. What's changed is that I now think of a language as an ecosystem and a culture, rather than a set of things I type into a computer. The typing is automatic; instead I'm thinking of the level of "Well, if I use Mockito and JUnit for my unit testing, here's how I have to set up my Dagger modules, and I can use Providers there to give me a dependency graph of ListenableFutures that will let me kick off a whole cascade of RPCs when a request comes in, all without having to manually manage the sequence of events." And all of those libraries have gotchas and best practices that I've internalized, which I think need to page out if I start working with another ecosystem. (This was perhaps a bad example because I'm actually much more fluent with Python and Javascript than Java now - but then, maybe that makes it a good example, because it shows how much tacit knowledge is important even for a simple forum comment, let alone a working system.)

So I can't really judge your level of expertise over the Internet. But I'll caution you that views on this can flip-flop as you gain more experience. It's important to really learn one language well before judging that everybody should be able to use multiple languages with equal proficiency. "Learning them" is a continuous process, and there're tips and tricks that are very specific to each language that you continue learning even decades in.



Though I agree with you, the primary argument that I've heard has always been that fragmentation/heterogeneous systems in a corporate infrastructure make the human and capital management more of a nightmare.


It's corporate infrastructure that is the nightmare, not the fragmentation.


It's ironic that the author dismisses C++ to a small niche:

> Those who still stick to C++, after all these years, either really can't live with the "overheads" (real time apps - those are waiting for Rust to mature), or think they can't (and nothing will convince them except a competitor eventually forcing their employer out of business).

Yet later he gives a good example for an important area where C++ is still unbeaten: scientific computing libraries.

If you want/need maximum performance, then right now there's no other language that has no overhead and zero-cost abstractions (to the degree that C++, especially C++11, has them). Hopefully Rust will get there eventually, but as of today you have three choices: Fortran, C, and C++ - and neither C nor Fortran offers any abstractions worth mentioning.


Zero-cost abstractions come at a pretty hefty price if you pick C++. The cost is in programmer productivity and the quality of the resulting software. I'm sure there are some fields of academia where this is a reasonable tradeoff (if you're programming DSP algorithms running on an embedded, battery-powered device strapped to a dolphin, say...) but your generic number-crunching job rarely calls for it.

(As an aside, zero-overhead abstractions being unique to C++ doesn't sound right to me. Then again I haven't heard the term used outside C++ circles and its meaning is nebulous, so maybe it's just a name for C++'s tradeoffs?)


> Zero-cost abstractions come at a pretty hefty price if you pick C++. The cost is in programmer productivity & quality of resulting software.

I agree with you. But as with most prices, sometimes they are worth it. There are a lot of practical cases where you really want the performance (even outside scientific computing, which in itself is a pretty widely applied field).

> I'm sure there are some fields of academia where this is a reasonable tradeoff

I'm not sure where you got "academia" from; scientific computing is in no way restricted to research. Weather predictions, genome processing, and oil reserves exploration are just three of many examples for the commercial application of scientific computing.

> (if you're programming DSP algorithms running on a embedded battery powered device strapped to a dolphin, say...)

DSP applications are ubiquitous today; you'll find them, e.g., in your phone, or in your car.

> but your generic number crunching job rarely calls for it.

You're very wrong. Number crunching is the example for which you want lots of performance.

> As an aside, zero-overhead abstractions being unique to C++ doesn't sound right to me.

I didn't say they are. I said that no other language offers them to the degree that C++ does.

> so maybe it's just a name for C++'s tradeoffs?

No. It means abstractions which do not cause runtime overheads because the compiler can optimize them away.


For embedded DSP, the preferred language is C (no ++). The extra complexity the ++ adds is of little or no value in small embedded systems.


Hopefully Rust's approach to zero-overhead abstractions will alleviate a lot of the problems you run into in C++.


C++ has been pushed into performance niches. Those are big niches though. Games, real-time programming, high performance networking.

But it is not used _as_ much as in the past for core business logic. There are legacy systems that will run for many years. A lot of web backends are no longer in C++ when in the past they were. It was the language to rule them all, but now in some areas others have taken over.


I agree 100% with what you said. I would never advocate C++ for business logic. In fact, I wouldn't use it for anything which doesn't really need the most performance available (and even there I would try to restrict its usage to the actual performance bottleneck).


If you want/need maximum performance, why are you programming in a "high-level language"?

Many of the highest-performance scientific computing libraries are written in assembly or use inline assembly instructions. The vast majority of scientific programmers don't need to write at this level. I challenge you to write a native C++ matrix-matrix-multiply algorithm that generally outperforms the matrix-matrix-multiply available to MATLAB, Python/Cython, and Julia through their interface to the assembly-optimized OpenBLAS library: http://julialang.org/

Note that C, Fortran, and Julia are all using OpenBLAS in the rand_mat_mul benchmark, and OpenBLAS is written in C (with assembly).


> If you want/need maximum performance, why are you programming in a "high-level language"?

Because going to assembly offers so little benefit (if at all), that it doesn't justify the huge increase in effort and susceptibility to bugs in most cases (and even then, inline assembly is usually sufficient).

> Many of the highest-performance scientific computing libraries are written in assembly or use inline assembly instructions.

I don't think this is true, at least not for non-inline assembly. Sure, OpenBLAS contains a lot of assembly, but many scientific computing algorithms are more complex than the BLAS routines.

> I challenge you to write a native C++ matrix-matrix-multiply algorithm that generally outperforms the matrix-matrix-multiply available to [...]

Where in my post did I give you the idea that I don't like code reuse and instead reinvent the wheel every time?


edit: Jack Poulson's Elemental and Andreas Waechter's IPOPT are also written in C++, and are both very important libraries within scientific computing and optimization, to add a few more examples of the usefulness of C++.

I'm trying to make the point that C++ occupies a weird place in the abstraction-vs-performance tradeoff.

For high abstraction and friendliness, C++ doesn't do nearly as nice of a job as MATLAB or Python in exposing high-level scientific computing operations.

For performance, C++ does have some really nice high-performance libraries (Eigen comes to mind), but for the majority of use cases, you either need to write inline assembly to get the best possible speed, or you're already reusing a fast numerical kernel from another library, and you may be better off in a higher-level language.

This is a very subjective discussion, and there are many interesting, fast, C++ libraries that do important work for science. But your claim that C++ is the best language for such work would be highly contested by many high performance computing specialists. As it is, I believe there's a pretty rough split between C, C++, and Fortran supporters, with Python continuing to gain traction.

Here's another counter-argument. MPI is used in 99% of all scientific high performance computing codes. The latest version of MPI, MPI-3, drops explicit support for C++ bindings. If C++ was so dominant, why would explicit bindings be dropped?


>For high abstraction and friendliness, C++ doesn't do nearly as nice of a job as MATLAB or Python in exposing high-level scientific computing operations.

I think you responded to it yourself in the next paragraph.

While not quite the same as using MATLAB or Numpy; with Eigen, Blitz++ and Armadillo you can come really close syntactically. You do have to suffer through the compilation process but it comes with computational benefits. Numpy and its ilk are adversely affected by the limitation of their vectorization paradigm, some more, some less. This style creates needless copies, unnecessary and overly pessimistic loops. These cost performance. Eigen/Blitz++/Armadillo style libraries do not suffer from this problem. A common Pythonic way to recover from these deficiencies of Numpy is to use Cython (and numexpr although it has a very narrow scope). However, to see speedups in these tight numerical loops with Cython one really has to do manual indexing, write at a low level, not so for Eigen/Armadillo/Blitz++.

This, although very limited in scope, is a concrete example that shows that you can write at a higher level in C++ without incurring performance hits.


I think we're approaching this from two different directions: you from the library user's, and I from the library writer's perspective.

Sure, if there's a good library available that handles all the bottleneck computation, then there's no need to write anything in C++ – just use Python or whatever. But if you actually have to implement a kernel, you can't do it in Python or Matlab.

Regarding assembly: I still believe that assembly is unsuitable except for the simplest of algorithms. Sure, you can optimize the crap out of DGEMM or DAXPY, with SIMD, cache-optimization, and prefetching – but in the end, it's still a very simple algorithm, with a very simple data layout and predictable access patterns. But as soon as it gets a little more complex (e.g., graph algorithms, or a SAT solver), you can forget about assembly. Heck, I doubt you could even get a measurable performance benefit.

> But your claim that C++ is the best language for such work would be highly contested by many high performance computing specialists.

My personal, subjective, and completely unscientific opinion on this is that these people are domain experts, which know how to design fast algorithms and data structures, but not necessarily how to make the best use of a complex language like C++. Or they're extending legacy code. In short: for non-technical reasons (analogy: Haskell and Scala are better languages than Java, and yet a lot more Java code is written).

> The latest version of MPI, MPI-3, drops explicit support for C++ bindings. If C++ was so dominant, why would explicit bindings be dropped?

Maybe because they didn't offer much over the C bindings, and even some C++ projects (e.g. Boost) used them?


I am sorry to say this, but I sure will be happier if this oft-repeated vapid trope stops getting repeated. There is more to performance than memory layout and caching.

There is a big difference between optimization in the small and optimization in the large. It is precisely because you need maximum performance and correctness that you need high-level languages. Quite ironically, your comment itself illustrates the point well. One could have written high-perf apps entirely in the style of OpenBLAS. Sensible people don't, and for a reason.

I am going to copy a previous comment of mine on this topic.

--

I will not be surprised if it beats C by generating C code itself, but C code that no human would write by hand. It has been done before in programming history (not just C; Fortran has been beaten as well: http://dl.acm.org/citation.cfm?id=200989 http://link.springer.com/chapter/10.1007/978-3-642-19014-8_1.... http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.150.... although the prior art focuses on Lisp, I think Haskell and ML-like languages are better equipped for this). Imperative code is a lot harder to reason about, and low-level code is a lot harder to write if one cares about correctness and performance simultaneously. Specialized functional languages have a history of beating C implementations. It could be faster still if one removes the constraint that one has to generate C code; for example, coroutines and proper tail calls become easier to implement. The bottom line is that correctness is easier to achieve with functional, and performance with imperative. So we need both, and they meet in a compiler.

Take the example of Stalin and StalinGrad, they are Lisps. Their compile times are epic, but once done they have a history of generating faster code than C for specific applications (a specific example where it will excel is multidimensional numeric integration where the function to be integrated is passed as an input). The main reason why they would be able to generate faster code is that they are smarter about inlining. The reason they are smarter about inlining is that programmers intent is better preserved when encoded in a higher level language.

Can a programmer not write the same correct code in C by hand? Perhaps they can, but it will take longer and would be more error-prone and likely costlier to produce, unless one takes the help of these higher-level abstraction primitives - but then you aren't really writing in C anymore. Can the C compiler not be equally smart? Possibly, but it will be a lot harder to make a C compiler smart than a functional-language compiler smart. Functional languages leave a lot of room for the compiler to do its thing; C, while not as bad as Java, still over-specifies how exactly something needs to be computed (my way or the highway). Add the fact that when code is written in a higher-level but optimization-friendly language, one can enjoy the benefits of compiler techniques yet to come. It future-proofs the code to an extent and amortizes the effort that went into writing the compiler.


You forgot to mention CUDA, which is trouncing Fortran, C, and C++ in performance and being adopted at a fast rate. Now, one could argue that CUDA is a dialect of C++, but it is much more than that.


CUDA is a special purpose language, the same way that, e.g., VHDL or Verilog are special purpose languages.


Those doing real HPC work really don't have a choice these days (they will be out competed otherwise), unless of course the problem isn't GPU friendly.


I don't understand your complaint, isn't "scientific computing libraries" a niche?


> isn't "scientific computing libraries" a niche

In a way, yes. The point I'm trying to make is that there are many niches that require the performance of C++ [1], while the author implied that there's just a single one.

Sorry, I should have been clearer on that.

[1] If you're not convinced, I can give you a lot more examples.


I think the author misses the point. The number one reason Go was created was to build maintainable softwares, the kind Google uses at large. The easiest way to build these are:

- Automatic memory management -> GC

- Bug catching before the software is run -> Static typing

- Overall simplicity -> few features, added only if it is extremely needed

The thing is, when you start using Go, you already know its features. There is nothing particularly new, and it all fits in your head. It is a bit strict though, so there is some boilerplate (error checking, sort.Sort, ...) but that's going to save you when you edit your software in 5 years.

Here, performance (both compilation and running) is a byproduct of simplicity.

Now, I'm not saying the OP's use cases are invalid, far from it; they're just not what was intended in the process of creating Go. Like OP, I tend to think that Go is the new Java: "boring" (ie no revolutionary features) but it just works for server-side softwares.


Why do I miss the point if, like me, you think Go is the new Java? :-)

I think it's exactly that, it works and that's fine, and why the JVM doesn't do cheap concurrency I don't know. If it did Java might have been the new Java :-)


I was referring to these:

> What does Go have that C++ lacks? Mandatory garbage collection, memory safety, reflection, faster build times, modest runtime overhead.

> If on the other hand most code in most websites matters a lot for performance, then maybe you want Go

Specifically, seeing only the features (performance) and not the intent in Go. I should have said "misses the point in why Go can be more interesting than other languages".

Now, as you said, if your requirements can't be fulfilled by Go (or any blub), then it's not worth switching.

> If it did Java might have been the new Java :-)

I don't think people avoid Java because of the lack of light concurrency (there are multiple production-ready libraries to do that). I think people avoid it because of Java-the-platform... so Java would have stayed the bloated Java :-)


Concurrency is not the only interesting thing about go. The other choices are also interesting, partly in what they leave out (no inheritance, no headers, explicit errors, implicit interfaces, static binaries with no dependencies, fast compilation, strict style enforced by gofmt). You might not like those choices of course, and you might prefer to use other languages like python or C++ ;), but comparing Go to Java + concurrency is pretty absurd, as the culture, tools and standard library are very different. Maybe superficially some of the syntax looks similar, because of the C heritage.

Thanks for the article with your first impressions of Go anyway - I read another post on your blog while visiting (about leaving C++ for a simpler OO C), and it actually echoes a lot of the motivations of Rob Pike and others at Google in creating Go - frustration at C++ compile times and baroque grammar was a primary factor in the creation of Go, so it feels to me like they went back to C as a basis and built something new...


> I think the author misses the point. The number one reason Go was created was to build maintainable softwares, the kind Google uses at large. The easiest way to build these are:

Please show how he misses the point. His point, it seems to me, is "Things _he_ would miss in Go that Python has". Maybe he doesn't want to write maintainable "softwares" (sic). Maybe his bugs look like open_file();close_file(); read_file(). Static typing can't catch that.


The error handling one is an area I've never understood - in a well-designed Go app errors will be returned from functions/methods and you'll either deal with the returned err or explicitly discard it with _.

I love try/catch/finally but I fail to see how either using a _ or writing a simple log function to do something with returned errors necessarily represents "more code" than handling exceptions.


That creates some mostly unnecessary boilerplate. I.e., if an error happens down the stack - in a function that reads a config, which calls a function to parse a file, which calls a function to read a file - you'll have to either propagate (or otherwise handle) that `err` manually at every level, or lose the precious information.

On the contrary, the Java-like pattern of functions like `Config readConfig() throws IOError, ParseError` simplifies code quite a bit. No need to deal with IOError in parseConfig - if an exception happens there it'll be propagated automatically. And the `throws` clause allows for static checking and a warning whenever you forget to handle something. Sane code-analysis tools would also warn you whenever you write an overly-broad `catch (Exception e)`, too.

There's a panic/recover in Go, but they aren't serious. At least, to my limited knowledge of Go, there are no guidelines on using them properly, so everyone panics with whatever they fancy, and this lack of conventions is a bit problematic, like `raise "Failed to open file"` in Python.


OTOH, if you follow Go's idioms and handle every error where it happens, your code will look like it has a lot of boilerplate at first, but you end up with better error handling. It's the same as putting a try-except catchall around every single call in Python, because in Python you can never be sure (without reading the source) what kind of exceptions something will throw.

For request-based services, and anything where some big-ass operation will either fail in some way (and I don't care where exactly) or succeed, exceptions are fine. If something goes wrong somewhere, log it and reply with HTTP 500. But for servers with data I care about, I most likely want to recover right where the error happened, and actively decide whether I want to abort and push the error to the next layer. Not having exceptions makes this more intuitive.


> if you ... handle every error where it happens

This is what the debate is about: "if".

The problem with Go's error handling is that people don't handle every error. And due to poor documentation structure and returning one error type, often you have to even go digging through source code to just find out what the specific error values can be and under what conditions they occur, which makes it easy to write code that you think handles all error conditions but in fact does not.

The code on the golang web site didn't even check the error code from println and you never see this done in code. If you point this out you're met with "why would you want to check that error?", which is a tacit admission that Go programs will always have missing error handling and that "if" in "if you handle every error" is never met in reality.

In contrast to this, there are no Java programs that don't handle the IOException from a failed println, and yes, even println can matter. Redirecting output with ">&-" is different from ">/dev/null".


You shouldn't be putting a try-except catchall around every single call in Python. Exceptions mean you don't have to do that.


For me the most important thing exceptions mean is that they might pop up at any time, with unpredictable types. So as long as I don't have a catch all exception handler wrapped around all code, it might crash at some point.


Assuming you're talking about unchecked exceptions, correct.


Python doesn't have checked exceptions.


This one really deserved downvoting, great job whoever did that :/


I don't consider error handling to be unnecessary, but it is boilerplate, no matter how or where it's implemented. That's because every non-trivial program has the potential to enter an error state. That state can either be ignored (deliberately or not, the latter case being a major source of most of the angst and misconceptions around C and C++) if the language allows it or handled in some way, and that way is by boilerplate "if/else" or "try/catch".


> There's a panic/recover in Go, but they aren't serious.

They are very serious.

> At least, to my limited knowledge of Go, there are no guidelines on using them properly, so everyone panics with whatever they fancy, and this lack of conventions is a bit problematic, like `raise "Failed to open file"` in Python.

The convention is that you don't let panics escape across the public API of your function -- which also makes most other cross-project conventions for panics unnecessary. Its probably useful to have internal conventions within a project for internal consistency, but since none of your panics should ever escape across a public API boundary, a broader convention would have limited benefit.


The standard is usually

    if err != nil {
        return nil, err // or the zero value of the other return type
    }
The error will propagate up, since every caller of a function that returns an err should check it. In a not-so-ideal world, just return err up. In a relatively more sane world, you can send extra information "up" along with the error (by creating a new error that wraps the extra info) to know the error path. The first caller will/should then handle this error.


The readConfig example is not really an issue in go. Just return the io error and type switch after the readConfig.

    if config, err := readConfig(ioReader); err != nil {
      switch err.(type) {
      case *ParseError:
        // do one thing
      case ...
      }
      // note: a sentinel like io.ErrClosedPipe is a value, not a type, so it
      // can't appear in a type switch; check it with err == io.ErrClosedPipe
    }


> Just return the io error

Manually, on every function call that may fail - that was my only point about why exceptions may result in clearer code.

    section, err := parseSection(ioReader)
    if err != nil { return nil, err }
Has some boilerplate, as compared to a typical exception-based

    section = parseSection(reader)  // throws IOError


Yeah I get what you mean.

If you forget that "throws" comment then the Java code is opaque to the exception. I think Go developer see a value in having errors exposed explicitly, it forces them to think about how to handle them instead of having a catch-all at the top of the program. But now we're down to philosophy and personal taste :)


People complain about the error handling of Go a lot, but I've recently switched from writing Python to Go, and so far, I am really enjoying the change to more explicit error handling.

The thing is, if you're playing fast and loose, exceptions are great; there's no extra work, and it blows up if something goes wrong, so you find out -- great!

But if you're trying to do the best possible thing in every scenario, it becomes much harder.

(E.g. maybe for some reason a field in a record you're loading isn't valid UTF-8. Let's say it's in the context of a larger request -- you still want the request as a whole to succeed; the invalid UTF-8 isn't a critical issue, just something you should log and fix later. In Python land, it's quite possible you wouldn't even realize that an exception could be thrown there. You leave your code by a totally unexpected exit. Users are seeing errors for no particularly great reason. So: you fix things to handle this case properly, and you end up with ugly try/except indentation that's worse than just checking error return codes in the first place...)


"In Python land, it's quite possible you wouldn't even realize that an exception could be thrown there."

In the context of server code, that's really the fatal problem. And in general, "scream and die" isn't necessarily the optimal case either; not everything is a consumer web app where showing an error page is perfectly fine (I mean, not good, but generally fine). Knowing what can throw an error where is really useful.

I don't consider the error question settled yet. I think the failure of checked exceptions is instructive, because they clearly demonstrated that there's this "unioning" effect where a bit of code's error-throwing capabilities is the union of those of all code it may call (in the absence of a programmer carefully restricting it), and seemingly minor bits of code may in fact result in "union"ing in a whole whackload of other errors.

I'm not convinced anything has fully managed to address this yet. Unchecked exceptions tend to just cover over the problem, but don't help you deal with it much. A closed sum type for errors is great when you can use it, but it composes poorly if you start trying to move it around the system, and in practice Haskell still has exceptions.

Using arbitrary or nearly-arbitrary terms for errors, as Go (error is an interface) and Erlang do, means you can pass errors back up the stack easily without the type system complaining, and it's not a "hidden goto", but it means there's no way to be really sure that you've actually handled all possible errors in the best way. There's still no way to point at a block of code and assert that you know all the errors it can generate.

I think part of my willingness to accept how Go does errors is that I lack the belief that it's a solved problem and "duh, just use $SOLUTION". There's only some solutions that do a good enough job of sweeping the issue under the rug that they've convinced you it's solved, but the issues are still waiting to pop out as soon as someone stomps their boot down in the right place. It still seems like there's more work to be done here.


Generic methods are one place where non-explicit error handling is essential. What if your map/select call fails? You have to make sure to thread error handling through everything that would ever take another function as an argument (since the error-handling characteristics of the unknown function are open), which, in this day and age, is ridiculous. C# couldn't do LINQ at all in this case.

Sometimes a bit of dynamic scoping can go a long way.


> Generic methods is one place where non-explicit error handling is essential.

And, really, the error-handling part fits fairly well with Go (not only what the language supports -- which is, after all, full-featured exceptions with a different [IMO, improved] syntax -- but also with the conventions on their use across public API, though that requires some thought about the motivation for that rule and how it applies to that kind of function.)

OTOH, Go's type system, as opposed to its error-handling approach, is a real problem for generic functions. Though if you just mean non-generic higher-level functions, this is less of an issue.

> You have to make sure to thread error handling through everything that would ever take another function as an argument (since the error handling characteristics of the unknown function are open)

No, you just have to state as an assumption of the "generic" that any function it works on is wrapped to panic in the event of error, and the caller needs to handle the panics. To fit with the Go convention on panics and public APIs, you should wrap any function that results from applying such a higher-level function to a function that can fail into another function that recovers from panics and provides error returns, before passing the resulting function across a public API boundary.


> OTOH, Go's type system, as opposed to its error-handling approach, is a real problem for generic functions. Though if you just mean non-generic higher-level functions, this is less of an issue.

Without generic types, I don't think generic programming is very common in Go and exceptions won't be missed much yet. However, if they ever get around to adding them, I expect dominoes to fall :)

> To fit with the Go convention on panics and public APIs, you should wrap any function that results from applying such a higher-level function to a function that can fail into another function that recovers from panics and provides error returns before passing the resulting function across a public API boundary.

So panic is basically an exception but in callback form rather than block form? Holy Hollywood principle! I'm not sure what you can hope to accomplish in a callback that would make the immediate calling context sane again.


> exceptions won't be missed much yet.

Exceptions won't be missed at all, with or without generic programming, because, and I get tired of saying this, Go has exceptions. panic/recover/defer provide all the functionality of raise/catch/finally -- the main difference in structure being that (1) the context is always a function, not some other block within a function, and (2) "recover" is done within a deferred function (loosely parallel to a "finally" block) as opposed to "catch", which is done in a sibling block to "finally". [1]

> So panic is basically an exception

Yes.

> but in callback form rather than block form

No.

[1] see: http://blog.golang.org/defer-panic-and-recover


Defer is basically finally without block structure, triggered instead by the popping of the surrounding procedural call context. So instead of a block, you have to create a new function instead to unwind...how is the extra complexity of avoiding simple block structure worth it?

And what is the equivalent of catch if we have an equivalent of finally? It seems to be recover, which allows a deferred execution to query what exception has occurred, correct? Is there a good discussion of why this is better than try/catch/finally?


> Defer is basically finally without block structure, triggered instead by the popping of the surrounding procedural call context. So instead of a block, you have to create a new function instead to unwind...how is the extra complexity of avoiding simple block structure worth it?

How is function structure any more complex than block structure? (I mean, visually, sure, there are potentially a few extra sigils, but structurally how is a block different from a nullary function?)

> And what is the equivalent of catch if we have an equivalent of finally?

Recover within a deferred function provides the ability to do the functionality of catch -- I wouldn't call it a one-for-one equivalent (the combination of features is equivalent, but there isn't a one-for-one correspondence between the individual features except between panic and raise/throw.)


Thank you. I'd still like to know why this is better and not just different. Or is there some legacy design I'm missing, like in one of Rob Pike's previous languages?


> I'd still like to know why this is better and not just different.

It's potentially more flexible, in that it doesn't restrict the "generic" cleanup code to run strictly after the exception-handling code; this also probably makes things a bit clearer in the instances where you want to apply logic in the guaranteed closeout that might affect return values but needs to consider errors in the event they occurred.


I've been waiting for someone to weigh in on this from the Python side for a while now. I'm a C++ and Python programmer (amongst other things) and I can't ever see myself switching to Go. I'd sooner switch to Rust or Swift, but D has been my favourite language by far since I picked it up last year.

I actually don't understand, in the absence of niche use-cases, how and why a Python programmer would write Go code and _like_ it. Are these Python coders who've never heard of itertools??


From what I understand, Go is an opinionated language, with a nice concurrency framework. It's not groundbreaking, it's not particularly expressive or adapted to any specific use case. It so happens that if your style fits Go's ideology (opinionated view), you become a fan.

For me, the lack of exceptions kills the language. C-like error testing smells like a twenty-year devolution. I also dislike the flat object model, but could force myself to live with it for the sake of trying composition over inheritance. Error returning, however, is taking the ideology too far.


You are assuming the use of exceptions is an evolution in the first place, but that's far from being a consensus. To some of us, exceptions are very convenient, but more easily lead to brittle software.

I wrote a little bit about that before. Probably won't help you much, except perhaps in acknowledging that there's a different angle to that which some people may care about.

http://blog.labix.org/2013/04/23/exceptional-crashes


Exceptions are, when you boil it down to the core, a default behavior for error conditions. It states that, on error, execution jumps to the first point in the code path expecting the error. If no such point exists, execution halts.

This is a stark contrast to the default behavior of error returning, which is to carry on as if nothing happened. How "keep calm, carry on" can be construed as better is beyond me.

Now, I know the arguments against exceptions are supported by all kinds of horror code using exceptions out there. Let me preempt that by stating that assessing a tool by its wrong uses is not a good evaluation of the tool. A hammer is not a good screwdriver; that's not news.


It's not about horror stories, but about choosing a tool that makes it easier to get the error-handling path right, which is just as real and important as the success case. In my experience with the medium-sized projects I've been closely responsible for (in the hundreds-of-thousands-of-lines ballpark), we were interested in doing a really good job in both the exception and the error-result cases, because the misbehavior scenario is a real issue, and we followed all the good practices in the modern developer's backpack. In the end, exception-rich code just turned out to be obviously hard to get right, no matter how much we _wanted_ to get it right, because the program is allowed to jump out of arbitrary stack frames that the developer was conveniently taught not to handle errors on, because.. hey! the "default behavior"!

After getting involved in these experiments, I started paying more attention to how people can possibly be happy with exception-rich logic that behaves like that. My empiric observation is that the lack of proper error handling turns out to have a relatively low impact for lots of projects. Crashed or misbehaved? Whatever.. file a bug.


Just read your post. It's the poor coder theme, a variant of the poor code theme. Sorry, I don't code poorly and I'm not about to shackle myself for fear of a nonexistent boogeyman.


Ah, I see, you're still in the upwards trend of thinking you have a superior mind. Been there.. eventually I learned that I too suck, and need all those tricks that encourage me and the teams I work with to up the stakes in terms of project quality.


To satisfy your curiosity, I have a long and public track record on Python, and switched to Go as the preferred language around mid-2010, for many reasons. These were both related to the properties of Go and to frustrations with Python itself. Initially that was a personal move, but several months later we also started to choose Go for projects where Python would have been the choice within Canonical as well.


There are some Python programmers who sometimes regret Python doesn't have some static typing.

Go has that and also provides an easy way to be quite dynamic when that's wished for (or even when not - without generics one'll have to resort to `interface{}` even if that's not desired, huh).


Less than 100% of the reported switches from Python to Go actually started with serious knowledge of Python.


So I do a lot of scientific programming in Go [1]. I am absolutely still using Python + numpy/scipy/pyamg for linear-algebra-heavy stuff, but this has more to do with the available and well-documented packages than with operator overloading.

I actually find that the interfaces used for math [2] encourage a more memory-efficient coding style. Without operator overloading I'm less tempted to jam everything into a pretty one-liner and more likely to set things up to work without unintended reallocations.

Lack of efficient generics is a much bigger annoyance.

[1] See for example my random forest package https://github.com/ryanbressler/CloudForest

[2] See package big for example http://golang.org/pkg/math/big/


When I read the Go spec or a Go vs Python comparison, nearly every diff is a specific pain point of our big Python code base.

No keyword arguments? To hell with them. Strict formatting? That should be in the Python interpreter. No unused imports? Would have saved our lives. No cyclic dependencies? No inheritance? No exceptions? Great, all of them are maintenance nightmares.


What's your issue with keyword arguments, apart from magic *args/**kwargs? They're quite convenient. As for exceptions, the problem with Python is more that the exceptions are unchecked than exceptions in general.


Can you please elaborate. I don't understand what could be wrong with keyword arguments. Also I can't see how an unused import can waste a life, unless there is some lethal side-effect to importing that package (I don't like import side-effects, but they are inconvenient whether the import is used or unused). Exceptions seem pretty useful as well.


Python's keyword arguments:

    def f(a, b=1, c=2):
        pass
Now, the same call can be written `f(0, 1, 2)` or `f(0, c=2, b=1)`, which makes it much harder to refactor. For instance, suppose you want to add a non-keyword argument: you'll have to carefully grep for all calls.

I think functions should either take keyword arguments only, with naming them in calls being mandatory, or take no keyword arguments at all. The mixing of both is "convenient" for prototyping, but has a high long-term cost.

Unused imports:

In a module bar, you have `import foo`. How do you know if it is used or not? You'll have to scan all modules importing bar to check whether they use it as bar.foo. Granted, the problem is not that `import foo` is unused, but that foo ends up in bar's namespace.

Exceptions make it hard to follow code paths and easy to hide important events. The hardest debugging sessions I've endured were because of exceptions. An innocent-looking `except AttributeError` can hide a deep issue in some remote library playing badly with getattr magic.

That said, I love Python, and still think it is the most elegant language. I will most certainly teach Python to my kid when he's old enough. I also believe it is best suited for a whole range of programming tasks. But, when a code base grows in size and complexity, some of its "tolerance" becomes a burden.


looks like your makefile doesn't have a pep8/pylint/pyflakes step...


There is no one best language.

Numpy is not something Go is going to do (correct me if I am wrong). Julia would be the faster language to switch to, but personally I switched from Python to R because I like languages that are made for what I am doing. Python always feels second best at whatever I am doing, so I have branched away from it. R for statistics is great for me.


Besides operator overloading, why can't Go do Numpy?


> No exceptions [...] If I want higher-level error context I need to propagate the error upwards, with an if at every function call along the way.

This is flat out false. Go has fairly standard exceptions (panics), though instead of using try/catch/finally blocks it uses deferred functions that perform a hybrid of the "catch" (if they use "recover") and "finally" functionality.

It is a Go convention that library code doesn't let panics escape to the calling code, so if you are going to use panics in your own code, to follow the convention you should be recovering from them somewhere in the call chain that you control rather than letting them escape across the public API of your code. But that's it: they exist, and can be used. So, while there may be some ground to complain that when calling the standard library you have to handle error returns rather than catching exceptions (equivalently, "recovering from panics") -- though, IMO, there is a good reason for this design choice -- there is absolutely no reason, aside from sheer ignorance and not reading even the most basic Go documentation, to say that you don't have exceptions that you can use within your own code and that you have to do error-return handling at every level in your own code to deal with error propagation. If you make that complaint, it is clear you don't know what you are talking about.


> This is flat out false.

So Go has panics and they are _just like exceptions_ in Python? Because to accuse the writer of being "flat out false" these have to be exactly the same and he has to be ignorant not to see the obvious.

Let's say you have some step in your function that might raise an exception. In Python you'd surround it with try...except:

  try:
    dosomething()
  except SomeException:
    handle_exception()
Can you do that in Go? Please show how. And don't create a new function; that is not what Python does.

> there is absolutely no reason, aside from sheer ignorance and not reading even the most basic Go documentation

Ignorance? Have you read the basic Python documentation? Sounds like you haven't.


> So Go has panics and they are _just like exceptions_ in Python? Because to accuse the writer of being "flat out false" these have to be exactly the same

No, they don't, because the claim that I called flat-out false was this:

>> No exceptions [...] If I want higher-level error context I need to propagate the error upwards, with an if at every function call along the way.

So, no, Go's exception mechanism doesn't have to be "just like Python's" for this to be flat-out false. What it does need to be is anything that allows propagating errors up a call chain without an if at every function call along the way. Which panic/defer/recover is.

> And don't crate a new function that that is not what Python does.

Even if I did accept your "must work like Python" standard (which is itself wrong, because it's not necessary for the statement at issue to be flat-out false), what's the substantive difference between an immediately-called anonymous function and a block?

Sure, you could legitimately complain that the example would be more verbose in Go, but that's not the complaint OP made. (The most direct translation of your example would look something like this in Go -- though in practice you wouldn't really do this this way):

  func() {
    defer func() {
      if r := recover(); r != nil {
        if _, ok := r.(SomeException); ok {
          handle_exception()
        } else {
          panic(r) // re-raise anything we don't handle
        }
      }
    }()
    dosomething()
  }()


[deleted]


I'm not objecting to you choosing not to learn something in depth because you don't like how it looks on the surface (that's a rational approach to avoiding wasting time, which is a valuable resource); I'm objecting to you, in the linked article, commenting in depth (and inaccurately) on something it's clear you decided not to learn in depth because you didn't like how it looks on the surface.

The statement that Go has "no exceptions", or that for deep error handling in your own code you have to handle errors at each level, is false. And, fine, you didn't learn anything about it, because maybe you saw sample code showing how you need to deal with the public API of common libraries due to the Go convention that panics aren't exposed across public APIs. If all you had said was that, it'd be fine -- but you went beyond that and said things that weren't true, and people who haven't learned the language might not realize those were statements from ignorance rather than experience, and might be turned off of the language where they wouldn't be turned off if they knew the truth, because if those claims were true, they'd be real and deep problems with the language design.

Choosing to remain ignorant about Go because it doesn't seem like it's worth your time to learn is fine. Making up stories about why you don't like Go that aren't grounded in anything real is not.

TL;DR: if you choose to be ignorant, you don't get to pretend to be an authority.


[deleted]


> Turned off the language because of me? Again hardly - if you like Go's standard library you like it, and if you don't panic/defer will probably not change your mind.

I'm not worried about people who have learned Go getting turned off by inaccurate comments about it not having exceptions, I'm concerned about people who might read your piece before learning much on their own. If people believe your claim that Go has no exceptions and that multi-level error handling in your own code has to be done the way you described, that could turn people off of Go whether they like the standard library or not (and before they even decide whether to try it at all.)


As of at least a year ago, there are bindings for the BLAS library of your choice (github.com/gonum/blas), which are used in many parts of the matrix package (github.com/gonum/matrix/mat64). As of a week ago, there are bindings for the LAPACK implementation of your choice (github.com/gonum/lapack). These have not yet been worked into matrix, but will be eventually (PRs welcome!).


Not to mention there's two-thirds of a pure Go BLAS implementation (benchmarks: https://groups.google.com/forum/#!msg/gonum-dev/Cqa41tbUUCw/...). The level 3 routines are much harder to make efficient (people are still researching efficient matrix multiplication).


In the beginning I missed a REPL, but I've realised that using a REPL in the first place was a mistake. Now I rely on docs (godoc is awesome), and when I need to test something I use the Go playground.


Could you elaborate on why using the repl was a mistake? I don't really miss Python's repl, but coming from the Lisp repl, the playground doesn't seem to cut it. Now, if anyone has written an emacs mode that lets you interact with the playground by loading a buffer, I think I'd be much happier.


The repl/insta-repl in LightTable was amazing to me. Particularly what you could do with clojure-script when it was hooked up to the browser.


Instead of going through the docs and code, I used the dir function inside the REPL to figure out what to do, how something works, and so on.


The REPL is part of my TDD loop... Which is actually, write a test, come up with a plan to make it pass, google for the programming constructs I need in the language I'm using, test a snippet (or a fully functional line) in the REPL, copy over to my project, make the test pass, ..(refactor). Without the REPL I would have an extra cognitive overhead of wondering if I had introduced a bug through misunderstanding a sketchy part of the docs.


I missed a REPL at first too, which made me play a bit with REPL-for-Go ideas, but that didn't lead to anything really convenient. What solved the problem was rethinking why the REPL was convenient in the first place, leading to the creation of hsandbox:

http://labix.org/hsandbox

Despite it being created for Go, I got quite fond of the approach. Nowadays I also use it for Python, C, etc.


Great tool. In Go, I usually write a throwaway_test.go in whatever package I'm hacking on, and use "go test" as my REPL. (With the intent that these experiments and crude checks should become unit tests later, anyway.) For exploring packages that I'm not developing, this is a nice variation from the Go playground.


As for the GUI problem mentioned in the article, we now have https://github.com/andlabs/ui

I haven't tried this lib yet. It may not be as good as something like Qt, but I think it is a good beginning.


I played around with it; it is nowhere near Qt, not even 5% of the way there. Qt isn't just a GUI library, it's everything you need to build an application in one nice large bundle. But it's a nice start...

However I feel that Go is not really intended to be used for desktop applications, I would much rather see a Rust GUI library.


Yeah, he says "Go's Qt bindings are 'not recommended for any real use'" but he is only linking to a third-party library that's [apparently] not ready for real use -- I guess he could say "there are not yet good Go bindings for Qt", or "the GUI toolkits are less developed than Python's", or "Go has no built-in GUI but Python does (tkinter)" [does anybody use tkinter anyway?].

Another thing that confuses me here is that he's saying "here are things I think I'd miss if I moved from Python to Go" without ever having done it to see if he actually misses them...[once more familiar with Go, he may end up not missing them as much as he anticipated].

Curiously absent also is a mention of significant whitespace, perhaps he wouldn't miss it? :)


first thing it mentions in the readme: this package is very much incomplete


The argument "they would have switched to X before" seems not to consider the possibility of new adopters.

If I didn't need the things that Go provides before, that does not mean I do not need them today or tomorrow.


Did he say that people looking for faster build times than C++ went to java? They may not have found what they were looking for, in that case...

Go does lack a quality [IMO] IDE [re: REPL use is for development], but that can still come with time.

Quality GUI bindings can also come with time [how often do you actually use Python for GUI stuff, though? Still nice to have, just not its major use, I suspect].

Another thing you'll miss from Python is accidentally typo'ing variable names and having to discover that only at runtime :)

Just my rebuttal :)


Java is faster to build than C++ almost all of the time. Header files (especially with templates) tend to turn C++ builds into O(n^2) affairs, often running into hours.


OK, the only thing I can compare it against is my current Java build times, and I can tell you that using Maven is somewhat slow. So I guess you could say that if people went to Java from C++ for the improved build times, they might still be interested in Go for the same improvement over Java :)


> Another thing you'll miss from Python is accidentally typo'ing variable names and having to discover that only at runtime :)

    python -m compileall . 
:)


I was referring to typo'ing for instance assignment (perhaps you were as well?)

    a = 3
    if True:
        a2 = 4 # typo'd "a" -- lame I know, but beginners like me do it

Python lets you discover your failure at runtime [as a good dynamically typed language does -- Go doesn't, FWIW]. It was just from an experience I had learning Python recently. Now it makes me wonder what his blog post would look like listing things he'd miss going from Go to Python, or "what I wouldn't miss from Python..."


Well, your example is just not an error in Python. It might be a bug, it might not be what you meant to do, but it is not an error in a dynamically typed language like Python. It is valid, legal code.

My point was just that although Python is dynamically typed, it is still a compiled-to-bytecode language (much as Java is) and you don't have to discover your errors at runtime, that is optional. You have the ability to perform just the compile step without executing the bytecode. The 'compileall' command does that.

There are also great tools like pylint for static source code checking. And of course unit testing is always key.

But of course, if you are looking to catch typing and variable declaration bugs at compile time, you need to be using a statically typed language.


He forgot the part about Go also being a great replacement for Java, for those who don't like Java.


Go could take a bite out of Python in the web development area, but scientific computing... no chance.


Most GUIs are done on a main thread, which makes some of Go's advantages moot. Don't build GUIs in Go, ever.

Error handling is a way of thinking: TDD is one, Go's approach is another.

A REPL is not a must; an environment without a REPL means you have to think more about what you want. It's a process. You should be able to read source code and figure out how things work.

Like many mentioned here, Go is great at doing systems stuff, like background queues and processing. Stay with Python until you need to make things faster, then move them to Go. That's how you should think of it.



