The initial quote is a good summary of the thing I find most annoying about Go. They constantly imply that programming research is useless in the real world, or at least impractical, and use that to ignore the last several decades of progress. Then they try to turn ignoring research into a virtue!
It essentially presents a false dichotomy between programming research and programming practice. This is particularly unfortunate because the most practical and productive languages I know are the ones that embrace theory, like OCaml and Haskell. After all, most programming research is about being more productive in some way, and it tends to approach the problem in a very organized and disciplined fashion, usually building on past results.
I'm not sure who "they" are, but it's certainly not the core development team, and Samuel Tesla does not represent them. See for example Rob Pike saying how mind-blowing and inspiring Tony Hoare's CSP paper is, how everyone should read it and how it inspired Go[1]. Another example is how Go's regexp implementation has a solid mathematical foundation[2]. Now, those are two examples of quite old programming research being used to good effect in Go, so there is an opening for criticism there. But in general I'm pretty sure the core Go devs don't have anything against programming research, nor do they try to create this false dichotomy.
It is rather surreal to see myself quoted. Especially something I wrote on my blog that I didn't think anybody actually reads.
Allow me to provide some context [1] for the quote. Shortly after the language was released, somebody was ranting [2] about how Go was not innovative at all in comparison to other modern languages. I was not suggesting that programming research is useless and that we should ignore several decades of progress. I was, rather, saying that Go was applying those several decades of progress to systems programming to produce a more modern version of C. Here is the whole paragraph that quote was lifted from:
"The point the author fails to see is that Go is not meant to innovate programming theory. It’s meant to innovate programming practice. This is an upgrade to C. It’s a language that applies the innovations of the last thirty years to the systems world, where the state of the art is still portable assembler. So no, the language isn’t introducing brilliant new ideas. It’s taking tested old ideas, and introducing them into a new arena."
I must be one of the idiot complainers he talks about, because his first "good thing" annoys me right off the bat. If you want to change a private method to public, you have to go through all your code and capitalize every use of it?
Yeah, I guess it could be handled with a refactoring IDE... if Go has one.
This feels like a throwback to Hungarian notation. Names should just be names; quit trying to cram code syntax into them. Besides, if you do have a competent IDE, it can colorize private vs public names differently.
Rob Pike's Go FAQ says that this is one of their favorite features about Go, and I agree; it's how I've been writing C code since 1995. So, I mean, if conformance to my C programming style guide matters, YOU LOSE THE ARGUMENT! :)
I'd also contend that no language makes changing the visibility of a variable completely painless. You're fixating on the one case where you can change the token "private" to "public" and write new client code for a class. But you're ignoring public->private, which is a breaking change no matter what syntax you use; you're also ignoring the fact that moving a variable out of or into a class (for instance with class statics) is breaking no matter what you do.
As this article points out, the major win for simply using case to signal this is that the language gets to dispense with fussiness about where variables are; you can just put them in packages when they make sense.
It's all tradeoffs, of course. It's certainly your prerogative to be annoyed by design decisions in Go! I'm extremely annoyed by the import use requirement and refuse to not be annoyed by it no matter how many times Rob Pike and Russ Cox say it prevents bugs.
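For readers who haven't hit it, this is the rule I mean, as a minimal sketch (the print call is just a placeholder):

package main

import "fmt"

func main() {
	fmt.Println("hi") // delete this line and the build fails: imported and not used: "fmt"
}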
If I'm fixated on changing private -> public, that's because it's a common case. I doubt I'm alone with a programming approach of "start with everything private and expose public behavior only as necessary".
More philosophically, I have to wonder what the benefit of this capitalization scheme is. Just to avoid typing 'public'? Hardly seems worthwhile. To make it obvious when reading code? This is the same argument for Hungarian notation, which even Redmond has abandoned. IDEs for static languages like Go are perfectly capable of presenting identifiers appropriately - if indeed public/private is really what you want to call attention to.
I admit that I've not done more than dabble in Go, but I'm somewhat skeptical of this last point. Of all the IDEs I have used for various static languages, I have yet to see any that visually distinguish public vs private identifiers. If calling attention to this distinction was so important that it should be built into the language, I would expect it to be common practice in IDE formatting already.
1. It creates a coherent syntax for making names visible that works both for member variables and package variables; you don't ever have to wonder how to expose something, because you always do it just by making the name uppercase.
2. It allows readers to see at a glance without thinking whether a value is public --- it's public if it's a proper name.
In practice, Go struct definitions are extremely succinct, almost (at first) to a fault, because of decisions like this that eliminate syntax from the language. Compare a Go package to an equivalently functional Java package to see what I mean.
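To make that concrete, here's a minimal sketch (the package and names are invented) showing how the same rule covers both struct fields and package-level functions:

package geometry

// Point is usable from other packages because its name starts with a capital letter.
type Point struct {
	X, Y float64 // exported fields
	id   int     // unexported: visible only inside package geometry
}

// Scale is exported for the same reason; a lowercase scale would be package-private.
func Scale(p Point, f float64) Point {
	return Point{X: p.X * f, Y: p.Y * f, id: p.id}
}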
#1 does not seem to be a real issue. Most similar languages have the common, popularly understood, and easily-remembered keyword "public". Whereas capitalization is an exotic special rule that must be learned. If "you don't ever have to wonder how to expose something" is an important measure, I think Go is at a disadvantage here - especially if you want your code to be read by neophytes.
#2 is the argument for Hungarian notation. Why is it important to visually distinguish public/private as opposed to, say, local vs member variables? I point this out because IDEs have a long history of visually distinguishing local vs member variables, and some programmers even use naming prefixes like '_' or 'm_'. Yet I've never seen an IDE highlight members based on visibility. Given this backdrop, it's certainly not obvious that visibility requires such an important visual reference. And why shouldn't the IDE be responsible for this decision? Counterarguments that programming languages should be designed for 80-col amber VT100s sound pretty wacky in 2013.
Regarding terseness of the language, I'm not convinced. Without exceptions, Go adds a ton of "if error then return error" boilerplate to practically every function call. This is a lot more typing than adding 'public' to struct members.
I think you have to look at this in the context of the whole language. Go doesn't just not have a "public" keyword. It also doesn't have classes. The way Go structures code is optimized to maximize code reuse while minimizing declarative syntax. So Go has an integrated package system (unlike C++) because that helps you build modules of code that can easily be used across projects, but doesn't have inheritance (unlike C++) because that is a feature that is less about reusing code and more about modeling systems.
I'm sure you've had the experience of working with libraries in other languages where the designer couldn't or didn't make up their mind about whether to expose capabilities as free functions or classes that needed instantiation, or, worse, exposed factories and singletons to get around their discomfort with simple functions.
Go sidesteps this problem. I found it helpful to think of it not as removing the "public" keyword from struct defs, but as creating a mechanism to "promote" functions and variables in packages --- something that would feel very shady in C++, but doesn't in Go. As a result, a lot of packages do The Simplest Thing That Could Possibly Work and just expose simple functions. The result is usually refreshingly clear.
I agree with you about error handling; Go's error handling makes its source code (though not its library interfaces) fussy and hard to read.
Some thoughts in defense of Go's use of capitalization. First of all, public and private are keywords from languages with classes. They would be misleading in Go, since visibility is at the package level.
Second, if Go didn't use capitalization and wanted to keep package-level visibility (which I will strongly argue is the Right Thing), there would be a choice between having both private and public as keywords and requiring one of the pair before every constant, package-level variable, type declaration, struct member, function, and method -- creating an immense amount of noise -- or having just one of the two and making the other the default. Probably it would make more sense to have everything private by default and public only when prefixed with public… But it would be really ugly and inconsistent.
Now it might be possible to get around this with some other convention. Maybe it could be like Python, where everything is public by default but private if it starts with _. Or it could use a different, shorter keyword like 'p' so there's less noise overall…
But I think that once you make the decision to have package level visibility, case is the most efficient way of expressing it.
1. Most such languages allow multiple visibility sections, which can make for humorously confusing experiences where you think something is private but, in reality, it's in the second public block.
In denser structures, particularly dense all-lowercase structures, this is easy to miss.
Python - everything is public, so a convention is required to let developers distinguish between public and intended-private-but-still-public (i.e. do not touch).
C# - I don't really know what's common practice in C#, but without a really good reason, placing an underscore in front of private properties would be really, really stupid.
On Python again - I've used Python every day for the last 3 years and I hate the whitespace convention. It's not that I mind significant whitespace itself; it's that this syntax is the number one reason why the language never got anonymous code blocks. And I love the lightweight syntax, but looking at other lightweight languages, working with significant whitespace is just not worth it if it disallows such a useful and common feature.
If I'm not mistaken, Guido has been thinking about anonymous code blocks as a possible purpose of the with statement, but he was persuaded not to do such a thing because it wouldn't fit Python's core concept of TOOWTDI (there is only one way to do it). Significant whitespace by no means implies a lack of anonymous blocks; they would just look a bit different. Look at CoffeeScript if you want to see an example.
Actually, I find public->private is not a breaking change, because before I do a 'public->private' change, I first remove all the public calls. Therefore, all I have left to do is to check the in-class calls.
That's great for you, but I suppose you've never published a module on CPAN, a ruby gem, a Python... egg (man, that's a weird convention). As soon as someone else depends on your module - and Go is very modular, this behavior is encouraged - suddenly it's an exercise in putting fluids back into containers, or whatever they say.
The only references are in your package. They can be found trivially with ack or grep.
...private mutable variable to public:
You want to inspect all usages anyway to ensure that you're not violating the public invariant.
...private function to public:
You want to inspect all usages anyway in case you'd prefer to call an internal version of the function without the same preamble, such as argument defaulting or validation logic. If you change 'foo' to 'Foo', you can leave 'foo' around and just have 'Foo' call it.
...public to private:
You need to find all the external references anyway to eliminate them. Since the references are capitalized, it's easy to find them with a grep or ack. You won't get lots of false positives from locals that are named without a capital.
Personally, I prefer languages that let you bend the rules when you need to, but I also work with small and skilled teams on largely exploratory projects. If I were working on a larger, ever-evolving corporate team, I'd be all over Go. They have made some excellent design trade-offs for that audience, i.e. Google's internal usage.
If you want to change private bar to public Bar from package foo, the standard trick is to use gofmt's rewrite option and run "gofmt -r 'foo.bar -> foo.Bar' -w *.go".
FWIW, this makes me seriously question the sanity of the Go designers. I spent my college career writing code on VT100s, I'm never going back. I am much more productive with an IDE, and the patterns of colors and fonts are a big part of that.
I wonder how much of that is older programmers being used to not having highlighting, and how much is younger programmers having never tried programming without it. I think the default assumption is that it is all old people resisting change (hence the 80 columns amber terminals bit), but I'm a younger anti-highlighting example. I am not old, I learned to code with coloured syntax highlighting. I didn't realize how much of a hindrance it is until after I had been programming for years and tried turning it off.
Sometimes paradigm shifts are just that and indicate generational pivots. Ask an electrician today to do some wiring without color coded wires and they'll think you are batsh*t crazy, but it probably wouldn't have been so crazy a hundred or so years ago when color wasn't viable. The same thing applies to code; these people grew up programming before color monitors were much of an option.
I find I tend to disagree with most of Rob Pike's design decisions, from minor ones (like that one) to more important ones like the bogus emphasis on compilation speed and the half-baked excuses for not having generics.
If someone else had been in charge of Go, we might have had a vastly better language.
But it sounds like you don't like Go. As you say, it's heavily influenced by Rob's personal vision of how things should be done. Plan 9, unicode, refusal to complicate the parser etc. They're all decisions that have given Go its (IMO) unique feel. Without Rob we'd have a vastly different language, certainly.
Well, I do like large parts of it, though: from major stuff like the implicit interfaces, the syntax for having methods on structs, and the simple C-like syntax, to minor stuff like the capital letter for exports and gofmt for one and only one formatting. And the promise (which is not yet a reality) of a fast, memory-efficient, statically compiled language.
It doesn't, and I wasn't arguing that... I was suggesting that if one were to prefer reading and writing code without highlighting, one might be more inclined to design a language such that capitalization carries some semantic weight.
I'm just speculating here. I don't really have an opinion one way or the other, although, unlike the OP, I don't think anyone's an idiot for having an opinion about it.
I agree that syntax highlighting is useful, but people value different things. Whether a language creator likes syntax highlighting or not says nothing about the quality of the language they created.
Not all code is read in an IDE. Also, most IDEs don't color names differently at call sites. For example:
a = foo.Bar()
You know Bar is a public function of foo. It's not Hungarian notation because you're not adding otherwise-nonsense characters to the beginning of an identifier, which would make them longer and harder to read.
I'm really not crazy about the "import github.com/foobar/foo" thing. It seems that systems like CPAN have an advantage in that a) you can roll your own mirror to avoid using an untrusted network, and b) should upstream change their VCS etc., the Go system requires changes to all files that import the affected module. The Go system also seems to ignore the problem of versioning libraries, though I imagine there's probably some support for that in go get?
The 'go get' sub-command is the only part of Go that interprets the import path as the location of a version control system repository. The use of the 'go get' sub-command is optional.
You can use version control commands directly to get code from your own repository mirror or to get specific versions from a repository. As long as you clone the repository to the directory in the workspace corresponding to the import path, all of the other go tools will work with the code.
And what's nice about this is that if someone else uses your project and doesn't have their own mirrors for the repositories, they can use 'go get' to download the imports. And that's without making any changes to the codebase or build scripts.
I actually think the go solution to this problem is quite elegant (because it also encapsulates info on where the dependencies come from), and preferable to other package systems which have one point of failure or central package repository.
If you want to roll your own mirror, simply change github.com/foo to mymirror.com/foo in your code. At least then you document the expected dependency with the code itself and can remove it if required. It also lets you very easily import a local package instead if you prefer. If the maintainer changes the location of their package and you have not taken a copy, you might have to change your code, but this happens very rarely in my experience and isn't a big issue.
Go doesn't have versioning for packages yet, and they will need to add this, but that wouldn't be hard to add as an option to the current import scheme.
"If you want to roll your own mirror simply change github.com/foo to my mirror.com/foo in your code"
This is very messy. If a mirror changes, you have to go back through all of your source code and change all of the relevant imports.
I personally find the node.js/npm solution to the problem (separate the dependency mapping for external resources, allow changes to those without having to modify the source) to be more useful. It lets you swap local and remote versions effortlessly and, using a private npm server, maintain packages for a large internal network.
This is very messy. If a mirror changes, you have to go back through all of your source code and change all of the relevant imports.
Have you run into real-life problems with this or is it more a hypothetical issue?
You can use local package import paths instead if you prefer to manage your dependencies separately (and have several different goroots too, with different sets of packages), or use DNS/hosts to manage where your private package server domain points. There are lots of options which don't require specifying a package server in one place.
It forces the package names to be globally unique, by piggybacking off of DNS. If you own "lclarkmichalek.com" you can put up a server and then have "import lclarkmichalek.com/foobar/foo" if you really want to.
That's a point I hadn't thought of. Though on the negative side, it does make it significantly harder to create a true "drop in replacement" that other languages have (i.e. where a library provides a replacement of another library without requiring any modification to the source code of an application using that library).
You're still going to need to recompile, because Go programs are all statically linked.
Replacing the imports is pretty easy, though - because Go is so draconian about certain elements of formatting and style (including restrictions on imports), it's relatively straightforward to migrate code like this.
(The 'go fix' tool is an even better example of the power that such an opinionated compiler provides as its tradeoff)
How so? I can think of a couple of examples where libraries have needed to be replaced and it's only because their replacements were transparently compatible that the replacement was possible without major pain all around.
Let’s say I want to declare a pointer to a variable length array (what Python calls a list and Go calls a slice) of pointers to FooType objects:
var foo *[]*FooType
It reads very simply ...
What???? Not that it's better in other languages but this makes the article feel like satire. It reads like "The Ugly" and "The Bad".
Aside: What's with the completely defective comment syntax on HN? Can anyone give some hints on how to quote and how to escape the star character?
Go's declaration syntax is one of the best things about the language. I thought it would take me forever to get used to, but within a day it was the opposite: C feels backwards. It's especially excellent for references to functions.
How so? There is a well-thought-out reason for this ordering [1], and it has taken hold in many other languages, such as Rust and Scala. Compare with the complexity of C or C++ type signatures.
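To illustrate (the variable names here are invented), declarations read left to right, which pays off most for function values:

package main

import "fmt"

type FooType struct{ n int }

// foo reads left to right: a pointer to a slice of pointers to FooType
// (the example from the article).
var foo *[]*FooType

// Function types read the same way: handler holds a func taking a string
// and returning (int, error).
var handler func(name string) (int, error)

func main() {
	handler = func(name string) (int, error) { return len(name), nil }
	n, _ := handler("gopher")
	fmt.Println(n, foo == nil)
}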
The first is or could be Haskell. The length of a list is not part of its type. The second looks like Java, ditto there (although built-in arrays have a different syntax).
Haskell does not have mutability at all and thus no "pointers". In Java the insides of containers are mutable so you'd have to add an outer array or list. In either case though you don't need pointers to simply pass things around as you do in C (I don't know about Go).
They don't, it's inferred in my examples. Like I said, if you need them, you can use whatever sigils you want. They can go in front of the types as usual.
How would you express this using your first syntax?
I wouldn't. That syntax is decidedly inappropriate for vectors that incorporate length into the type.
"Go doesn’t suppose the use of a #! on the starting line, which means you can’t just use uncompiled files as utility scripts."
So the author wants to type "./foo.go" rather than "go run"? Why not just build your executable, "go build", then you can run/move the resulting executable ./foo.
This really doesn't seem like a big deal / why do I want to compile a utility program every time I run it?
go run compiles about as often as a #!python script gets interpreted. The runner can cache whatever it wants.
Someone could write a shebang-go wrapper to implement this functionality, but a modification to Unix would be needed, since the shebang is a magic hack on the '#' character and isn't configurable, IIRC.
Plus side: I had no idea about "go run foo.go" and it has changed my life.
Downside: couldn't read past "And what’s up with all the languages that claim all you need are linked lists? I’m sorry, this is not 1958, and you are not John McCarthy".
Did I miss anything amazing in the rest of the article? Thanks in advance.
>Did I miss anything amazing in the rest of the article? Thanks in advance.
Yes, you missed the fact that quitting articles or conversations because of some remark (which could even be tongue-in-cheek) doesn't make you any smarter and that "I stopped reading where he said" is not something to brag about.
"quitting articles or conversations because of some remark"
This is the problem with the internet and with political discourse in the 21st century: as soon as someone says something that seems off-kilter, even if it is technically correct, people stop listening/reading.
It's not correct. He's making an allusion to Lisp and imputing to it the notion that everything is a linked list, which is not true. The article is a comparative discussion of programming languages. Earlier, it calls critics of the focal language in the article "idiots". That was OK with me while the author had credibility. It simply lost credibility for me at the point where it asserted that Lisp programmers don't use hash tables.
Apparently, I missed that Go could have, but didn't, support a syntax with shell shebangs at the tops of files so it could be used for utility scripts. I feel like I probably made the right call stopping where I did. ;)
>Earlier, it calls critics of the focal language in the article "idiots".
No, he only calls critics of SPECIFIC aspects of the language (aspects inconsequential and prone to bike-shedding) idiots.
>It simply lost credibility for me at the point where it asserted that Lisp programmers don't use hash tables.
You weren't supposed to read his article as coming from a Lisp expert, seeing that Lisp wasn't the main focus at all.
Heck, you weren't even expected to read the article as coming from a Go expert. Just as an article from a guy that tried Go and shares what he thinks of it.
>It simply lost credibility for me at the point where it asserted that Lisp programmers don't use hash tables.
Would an article by Einstein on relativity "lose credibility" for you if he makes some ignorant remarks about some aspects of mathematics in it?
I thought about it some more and now I know why I was irritated by this remark. I just felt this remark was exactly the same thing as some jerks bullying a kid with glasses (or the fat one) in school.
There is that social situation where you're new to the group and don't really know what's expected of you, what you should do. But you'd like to fit in. It's after initial introductions and you know the group's members names, but they're not that happy about you hanging around them. Some of them seem hostile, others just indifferent. So naturally you want to avert their attention and, if possible, show them that you have much in common.
And that's when this unpopular, fat or whatever, kid happens to fall just in front of you. Of course you will be laughing, at the very least, and it's quite possible that you'll be calling the kid an idiot with others. You know, maybe you're not fully accepted into the group yet, but at least you're not him. And you can laugh with your buddies, this matters too.
I find this remark exactly the same kind of thing. I read it as: "ok, so you may use PHP, or maybe Java, that's all cool because hey, at least we're not lispers! We're all one big, happy programming family (except for the lispers, of course, who don't even know what a hash is)!"
I don't like this. I wrote "PHP" because it's a popular target of the same bullying tactic. And Java, too, just in other circles. I don't like it in these cases either, despite the fact that I don't use either PHP or Java and frankly I don't think they're great. They may not be, but bullying is a bit below what I want to see in a technical article - or in real life.
Well, lots of important contributions in some fields have erroneous and ignorant remarks on other fields.
If that (instead of careful consideration on their own merits and in their main focus) makes them "lose credibility", then well, I guess you should have more patience and tolerance for errors.
But it was an article about the merits of one programming language, in which the author made erroneous and ignorant remarks about another programming language, and other ignorant remarks about programming language features (mild ones like associating static typing with verbosity). It's not as if they made some flawed remarks about mathematics or electrical engineering, which truly are different fields. For me, it detracted from the author's credibility, because they apparently don't have a breadth of experience with programming languages, so why should I take their opinions about this one particular language very seriously?
(Author here.) I find it odd that everyone thinks the remark is a criticism of Lisp. It's a criticism of Haskell. Lisp had a good excuse for focusing on linked lists back in the day. Then Clojure improved on it by promoting hashes to first class constructs. But Haskell is stuck in the past, at least as far as first class syntax goes.
The only thing that elevates lists above other data structures in Haskell syntax-wise is the "" and [] sugar for constructing them.
Haskell's type system allows you very easily to construct your own data types. For example, an unbalanced binary tree would be defined as
data Tree a = Leaf a | Branch a (Tree a) (Tree a) deriving (Eq, Show)
I admit that the easiest way to construct a hashmap, using fromList [(key,value),(otherkey,othervalue)], isn't as pretty as Clojure's syntax.
(Author here.) Actually, the remark was a complaint about "Learn You a Haskell," which I have been working through, on the grounds that it's NOT Lisp, so they shouldn't pretend like head/tail are the holy grail of data structure design.
>Never give up hope! One of these days, you'll leave a comment about people's refusal to read blog posts all the way through and by doing so change the world.
I agree. People having patience with other people and going through their argument thoroughly is much more important than someone complaining about another's remarks on his language of choice.
I've written around 1000-1500 lines of Go, merely to check the language out (a small server app and a logging utility). That said, Go itself was never the focus of this subthread.
To be frank, no. The article pretty much sums up my conclusions too.
I generally like Go, but dislike some of the design decisions (like the lack of Generics, or the special casing of a few select structures). Not much sold on the error handling either.
But those would be minor gripes if it had really good performance, which it does not, considering it's a statically typed, compiled language. It's been around 2x faster than Python in most of my uses (which don't involve parallelism), which doesn't really tempt me to use Go over Python. If it could get closer to C++/Java performance it would be nice.
I wish Google had devoted some hardcore compiler guys to it, like it did with Dart and V8, because the current old-school team, while very good at systems programming and with a long history, is not really up to it for a 2013 language.
In the meantime, I wait to see what happens with Rust.
1. Languages that are fast enough that when I'm writing in them I can tell myself "no matter what I write, it can't possibly be slower than what Ruby would have done for me". C obviously falls into this class, as does Java, and Go.
2. Languages that are so slow, like Ruby and Python, that it doesn't matter what I do in them, short of picking the right problems to work on in them.
If I wrote a lot of kernel code, or games code, I'd probably have a third class of language that would admit C but not Go. Similarly, if I broke languages up by runtime footprint, I could make C work for embedded stuff for which neither Java or Go would work.
But those would be minor gripes if it had really good performance, which it does not, considering it's a statically typed, compiled language. It's been around 2x faster than Python in most of my uses (which don't involve parallelism), which doesn't really tempt me to use Go over Python. If it could get closer to C++/Java performance it would be nice.
I haven't seen anything in the language design that would make further speed improvements especially difficult, so I'm hopeful that the speed of compiled code will improve as time goes by. I agree with you that if they get some more talented developers, that would really help.
As an aside, after reading about the LuaJIT trace compiler implementation, I am quite intimidated by the amount of knowledge and skill that is needed to write really fast code on modern processors (even embedded ones).
>It's been around 2x faster than Python in most of my uses
You must have some very unusual uses for that to be true. The computer language shootout isn't perfect, but it includes a variety of uses, and python is at best 5 times slower than go on benchmarks actually written in python instead of C, and 20-30 times slower for most of them.
Go is in the same performance field as java, haskell, scala, lisp, C#, etc. The 2-5x slower than C group. Python is in the same field as php, perl, ruby, etc. They are 30-50x slower than C. That's a huge difference.
That benchmark is for algorithms implemented in pure Python and in pure Go. But for real projects, a Python programmer might use several libraries which are written in C.
>> That benchmark is for algorithms implemented in pure Python and in pure Go <<
Note that papsosouid needed to add the restriction "on benchmarks actually written in python instead of C" to avoid mentioning the 2 tasks where Python is shown as faster than Go ;-)
That would be unusual, but 80% of a program's performance is generally tied up in 20% of its code. If you can replace the 20% with a couple widely-used C libraries, then you get vastly improved performance without actually having to write much C.
In most cases in Python, the part that one would consider the application proper is written entirely in Python, and it uses Python libraries that wrap C libraries to do the intensive stuff.
Your comment was a non sequitur from what dnu said. Please refrain from attacking arguments with straw men - they add nothing to the discussion
Edited for tone:
s/Quit with the straw man arguments/Please refrain from attacking arguments with straw men/
>80% of a program's performance is generally tied up in 20% of its code
I've never seen any evidence to support this truthy sounding statement. In fact, my experience has been that it only holds true occasionally, and only for the very first initial profiling, seeing that some huge performance no-no was made. Once that is fixed, you have a pretty flat profile, with "the language is just slow" as your explanation, and no recourse.
>Your comment was a non sequitur from what dnu said.
No, it wasn't. He responded to what I said. How could reiterating what I said be a non sequitur? Read the thread: I said python being only twice as slow as go "for my uses" would indicate some very unusual uses.
>In most cases in Python, the part that one would consider the application proper is entirely written in Python, which use Python libraries that wrap around C libraries to do the intensive stuff.
No, in the cases you want to put forth as representative because they support your belief that slow languages are good enough. Most applications don't have a convenient "intensive stuff" to do in C. Most applications are like django, just generally slow all over because they are written in a slow language.
>Quit with the straw man arguments - they add nothing to the discussion.
Neither do unwarranted accusations of straw man arguments.
It was a non sequitur, and a straw man - you said that python was 5 times slower than C at best and 30-50 times slower than C at worst on some benchmarks.
dnu pointed out that the benchmarks did not represent real-world Python code, since real-world code uses C libraries for performance-intensive stuff.
You responded by saying that programs that are 90% C and 10% Python are unrealistic.
Nobody ever suggested using programs that were 90% C and 10% Python, or said that that kind of usage was common. That didn't follow what dnu said at all. You weren't "reiterating" anything - you never said anything about "your uses", coldtea did, much farther up the thread. (If you're coldtea, then I understand your point, but if that's the case, you should have made that clear, as it vastly changes the context of what you were saying). You just brought up some benchmarks, which are not representative of how Python is used in the real world.
I see how you could have intended to imply that this was "your uses" by saying that coldtea must have unusual uses if he got go/2 performance out of Python, but this was hardly clear, if it is the case. Regardless, you totally misrepresented dnu's point.
As for your arguments against my points, the 80/20 rule, while obviously impossible to definitively prove, has far more authoritative supporters than it does detractors. If you haven't experienced that being the case, then that's good and well for you, but the experience of many others suggests that it usually is (in fact, it's usually stated as being closer to 90/10 than 80/20).
Your Django argument falls flat if you look at it more closely. Sure, Django might introduce some slowness, but the vast majority of web performance is in the networking and database layers, not the application layer. Your choice of language has little to no bearing on the networking aspect of things, but what do you think the database drivers are written in? C, or an equivalent. 10% of the code that has 90% of the performance impact - sound familiar? And you interact with database drivers most of the time exactly how I said - by using a Python library that wraps around a C library to run the database.
>>Quit with the straw man arguments - they add nothing to the discussion.
>Neither do unwarranted accusations of straw man arguments.
I apologize if I come across as insulting - it is not my intention. My phrasing in the grandparent comment was overly antagonistic, and I'm about to edit it to fix that. But my accusations of straw man arguments were correct.
It was neither, and repetition is not an argument. Let me paraphrase the conversation in the desperate hope you will at least read this since you refuse to read the thread you are discussing:
Me: if you are seeing 2x slower, then that seems like unusual uses of python.
Him: I think some real world uses of python are 90% C libraries and 10% python
Me: I feel the cases you are referring to are the unusual ones I spoke of.
>You weren't "reiterating" anything - you never said anything about "your uses"
>You must have some very unusual uses for that to be true
Wow, it's like I am really saying exactly what I told you I was saying!
>Your Django argument falls flat if you look at it more closely. Sure, Django might introduce some slowness, but the vast majority of web performance is in the networking and database layers, not the application layer
That is completely, and entirely false. Scripting language web frameworks like django, rails, zend, etc are dozens to hundreds of times slower than similar frameworks in compiled languages.
>But my accusations of straw man arguments were correct.
No, they were not. And your continued insistence that your arrogant and pointless accusation was correct is absurd. If you don't wish to read something, don't reply to it.
> Let me paraphrase the conversation in the desperate hope you will at least read this since you refuse to read the thread you are discussing
> ... your continued insistence that your arrogant and pointless accusation was correct is absurd. If you don't wish to read something, don't reply to it.
I'm attempting to keep the conversation civil (I have even gone back and changed some of my more inflammatory phrasings), and I hope you can do the same, without resorting to ad hominems such as these. I have read the entire thread several times over, and have quoted from it at length in my responses.
Prelude aside, my response.
---
> It was neither, and repetition is not an argument.
That's why I followed that statement with an entire page of argument. I didn't expect to convince you with the first sentence, only to state the subject of the argument that followed.
---
> Me: if you are seeing 2x slower, then that seems like unusual uses of python. Him: I think some real world uses of python are 90% C libraries and 10% python Me: I feel the cases you are referring to are the unusual ones I spoke of.
>> You weren't "reiterating" anything - you never said anything about "your uses"
>> You must have some very unusual uses for that to be true
> Wow, its like I am really saying exactly what I told you I was saying!
I actually mentioned what you said here in my previous argument:
> I see how you could have intended to imply that this was "your uses" by saying that coldtea must have unusual uses if he got go/2 performance out of Python, but this was hardly clear, if it is the case.
Your latest response makes it clear that this is, indeed, what you meant, but that wasn't clear from the initial conversation, even after reading it several times over.
---
> Me: if you are seeing 2x slower, then that seems like unusual uses of python. Him: I think some real world uses of python are 90% C libraries and 10% python Me: I feel the cases you are referring to are the unusual ones I spoke of.
Except dnu never said that "some" real world uses of Python are "90%" C libraries. He said, and I quote:
> for real projects, a Python programmer might use several libraries which are written in C.
That is, most real world uses of Python use some C libraries. He never suggested a ridiculous number like 90%. Your comment came across as a mockery of his argument, saying that Python could never achieve reasonable speed unless it was 90% C anyway. If this isn't what you intended, I apologize for insinuating that it was.
It came across that way because almost no real-world Python application uses 90% C, unless you want to count the entire kernel and OS it runs on.
---
>> Your Django argument falls flat if you look at it more closely. Sure, Django might introduce some slowness, but the vast majority of web performance is in the networking and database layers, not the application layer
> That is completely, and entirely false. Scripting language web frameworks like django, rails, zend, etc are dozens to hundreds of times slower than similar frameworks in compiled languages.
I think you misunderstand me. My point was that the parts of a web application that have the biggest impact on performance are the networking and database layers, not the application layer, and that language choice only affects the database and application layers. I was not arguing that Django, Rails, Zend, or other dynamic language based frameworks are as fast as compiled language based frameworks.
It may not seem like the database is that influential to performance at first - but imagine a database driver written in Python. It would be by far the slowest performing part of the application, because the database requires such enormous amounts of computation and is used so much. Thus, database drivers are written in C, so they perform quickly. But that 20% or less of the code, written in C for performance reasons, cuts down an 80% or greater slowdown which would occur if the database drivers were written in Python.
Also, as an aside, Django is hardly representative of Python as a whole. I haven't used it - I've done all my Python web development in Flask, so far, so my opinions on it aren't authoritative, and aren't intended to be so, but it seems to be an overly bloated framework whose performance is much slower than Python in general.
---
Again, I don't intend to be insulting and demeaning, and it is clear that the disagreement is due at least in part to some misunderstandings earlier in the conversation. I am happy to engage in technical discussion, even if it becomes somewhat heated, but I don't want to degrade the forum or make enemies by engaging in personal diatribe. I hope you share my desires on this subject.
Perhaps in the future, rather than accuse people of non-sequiturs and logical fallacies, you might consider alternative explanations like "I misunderstood what was said". I asked several people to read the post you misunderstood, and it was completely clear to them. You are still insisting that your misunderstanding is my fault, and that does not at all indicate that you are interested in engaging in discussion.
Never give up hope! One of these days, you'll leave a comment about people's refusal to read blog posts all the way through and by doing so change the world.
Apparently "operator overloading, function/method overloading, [and] keyword arguments" are "bad for performance" so it's OK that Go doesn't have them.
For performance of compilation. Go compiles crazy fast -- think two seconds for complete compiler and runtime on a notebook. This provides for rapid hack-compile-debug iterations.
D is a language with operator overloading and function overloading (no keyword arguments, however), and it compiles even faster than Go.
/tmp/dmd2/src/phobos/std$ wc -l *.d | grep total
162688 total
/tmp/dmd2/src/phobos/std$ time dmd -c *.d > /dev/null
std.cpuid has been deprecated. It will be removed in January 2013. Please use core.cpuid instead.
std.md5 is scheduled for deprecation. Please use std.digest.md instead
std.perf has been deprecated. It will be removed in January 2013. Please use std.datetime instead.
Notice: As of Phobos 2.055, std.regexp has been deprecated. Please use std.regex instead.
real 0m1.210s
user 0m1.016s
sys 0m0.188s
I stopped reading at the exact same place and I'm still considering going back to the article to read the rest of it. I don't know, maybe later - which means that I probably won't read it.
Calling people who disagree with a specific coding convention idiots was irritating, but at least it had a bit of merit - namely that 'readability counts' and that code is more readable if it's written in a consistent style.
The remark about linked lists in Lisps is both irritating and untrue. If even Schemes routinely deal with vectors and hashes (I know - I just wrote a cute little hobby project in Racket and I used a list twice, by accident), then I can only imagine how many different built-in data structures CL (or Clojure) programmers have at their disposal.
It probably was meant as a joke. Ok - it just isn't the sense of humor I share, that's all.
My personal grudge with Go is its sorting interface. You have to typedef various kinds of slices and define methods on them. I mean, all the Len() and Swap() methods look exactly the same (at least in the sort package); it's such a horrible duplication of code.
Also, the Less() method should not be defined on the slice types; it should be defined on the types in the slices. Every sane language lets you define comparison methods on the types themselves, rather than binding them to the container types, to honor some software principles.
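To show what I mean, here's roughly the boilerplate the sort package requires, sketched for a made-up slice type:

package main

import (
	"fmt"
	"sort"
)

type person struct {
	name string
	age  int
}

// byAge exists only so that sort.Sort can be called on a []person.
type byAge []person

func (s byAge) Len() int           { return len(s) }
func (s byAge) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s byAge) Less(i, j int) bool { return s[i].age < s[j].age }

func main() {
	people := byAge{{"bob", 41}, {"ann", 29}}
	sort.Sort(people) // Len and Swap are identical for every slice type; only Less differs
	fmt.Println(people)
}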
Also, "err" is a surprisingly common name for variables even in the same function. When it's previously defined, and the variable after it is not, the ":=" operator doesn't work (I seem to remember). Appending numbers to "err" is ugly IMO. I don't mind no exceptions if not for this. Speaking of it, making "err" a keyword or special global variable may be a good idea, like the "errno" in C.
You should be checking errors immediately after they are returned. Once you've checked it, re-use the variable:
x, err := f(a)
if err != nil {
	return err // probably time to return from the func
}
y, err := g(x) // we have at least 1 new var, so := still works
if err != nil {
	return err // handle
}
x, err = h(y) // since we're re-using both x and err, just use =
I don't think the code the author listed for Paul Graham's Accumulator Challenge is correct. The challenge specifically states that it's for a number, not an integer, and it's any incrementation, not just addition. I'm not very familiar with Go, but it seems like the Go solution is not as general as the solutions in Lisp and other languages, where any operator and type can be used with the accumulator function.
I'm sure there is probably some way to implement the same thing in Go using duck typing. Just wanted to point out the difference though.
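For comparison, here is a minimal sketch of a closure-based accumulator in Go; without generics it has to be pinned to one numeric type (float64 here), which is exactly the generality gap being pointed out:

package main

import "fmt"

// accumulator returns a closure that adds each argument to a running total.
func accumulator(start float64) func(float64) float64 {
	total := start
	return func(n float64) float64 {
		total += n
		return total
	}
}

func main() {
	acc := accumulator(100)
	fmt.Println(acc(10))  // 110
	fmt.Println(acc(1.5)) // 111.5
}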
I also don't believe in "one size fits all" counter-arguments.
Sometimes an ad hominem is the proper thing to say. And in particular cases any of the so-called "logical fallacies" can be very much true and useful.
For example, we use the "argument by authority" every time we trust our doctor instead of our friend's intuition, and I'm perfectly fine with that.
An "argument by authority" might not be a perfect tool in 100% of the cases, but it's a perfectly good guideline when we don't have time to go through the details of the matter ourselves.
The old wives' tales about "logical fallacies" point at their less-than-100% accuracy and come to the wrong conclusion that they are bad and thus useless.
I agree. I'm going to remind myself of this phrase next time I start to get sucked into yet another endless and pointless discussion over personal preference. One I've been using is Sweet Brown's "Ain't Nobody Got Time For That" and it's been very effective. Now I have another phrase to deploy.
He only used it in the title... probably better as "Google's Go" or somesuch. I can see why he did it though. Go ironically has to be the least Googleable new language name since C++ (ok, until they changed the search syntax). It's the name of a popular board game, a less popular programming language (Go!), and an incredibly common verb.