> The most notable new feature in this release is official support for Android. Using the support in the core and the libraries in the golang.org/x/mobile repository, it is now possible to write simple Android apps using only Go code.
This is fantastic! Are there any example Android apps released by the Go team to help get started ?
As far as I can tell these samples are immediately bailing out of the Java framework and calling Go using JNI. Which means you don't get the niceties of the UI Framework. You could make OpenGL calls from the Go code like for a game or something. Maybe someone will come up with a Go game engine for Android.
I would be extremely surprised to see a language with no classes, no inheritance, and no good IDE become successful in modern app development. Android devs are used to such a different style of programming; I don't expect them to accept banging their heads against so many walls for such little gain.
P.S.: My stance is the opposite for server-side dev.
Maybe this will help people (like me, for instance) who are disgusted by the overengineered OOP-heavy approach of most Java frameworks but enjoy the concise and elegant simplicity of languages like Go to finally move over and try out Android development.
Sure, the bloated Java ecosystem is a real issue. Yet, imagine building UI components without classes or inheritance. You can't even say something like "my custom button is a special kind of Android button, with just those two methods overridden".
GUI is to me the field where inheritance actually makes a lot of things easier and more natural. Now you may end up with something similar using struct inheritance and manually overriding functions for the given type, but I doubt most devs will go past the first unsuccessful attempts.
>imagine yourself building UI component without classes or inheritance
I see no problem at all. UI development in my opinion becomes much more elegant, concise and easy to follow/reason about in a functional language that does not implement classes or inheritance. As you said, Go has struct inheritance and interfaces, that's how you do things in Go and every Go developer is familiar with the concept, it's clean and elegant. I honestly see no problem whatsoever.
Go does have powerful constructs, but to call it "functional" is a misnomer.
I would also argue that it's not always easy to reason about and follow. The reason we build abstractions, which Java allows perfectly, is so that we can ignore the hidden complexity and reason about code more easily.
Go explicitly makes it difficult to hide underlying complexity. If I want to build, for example, a bimap (which Java has many clean implementations of), I get to pick between two poisons in Go: either I force everyone who uses it to type-assert on every get (implementing the bimap in terms of interface{}), or I surface the implementation details (two maps) and have users manipulate those directly, since the builtin uni-directional map is a type-safe generic data structure that I can't replicate in the language itself. The latter might look appealing at first, but since I also need to do some locking, an interface that lets people manipulate those maps directly rapidly becomes troublesome.
Neither of those choices sounds appealing to me.
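A rough sketch of the first "poison", for the record (a hypothetical `Bimap` type, not from any real library): every lookup comes back as `interface{}`, so callers must type-assert on each get.

```go
package main

import "fmt"

// Bimap is a hypothetical bidirectional map. Without generics,
// keys and values are interface{}, so type safety is lost at the API.
type Bimap struct {
	fwd map[interface{}]interface{}
	rev map[interface{}]interface{}
}

func NewBimap() *Bimap {
	return &Bimap{
		fwd: make(map[interface{}]interface{}),
		rev: make(map[interface{}]interface{}),
	}
}

func (b *Bimap) Put(k, v interface{}) {
	b.fwd[k] = v
	b.rev[v] = k
}

func (b *Bimap) GetByKey(k interface{}) (interface{}, bool) {
	v, ok := b.fwd[k]
	return v, ok
}

func (b *Bimap) GetByValue(v interface{}) (interface{}, bool) {
	k, ok := b.rev[v]
	return k, ok
}

func main() {
	b := NewBimap()
	b.Put("one", 1)

	v, _ := b.GetByKey("one")
	n := v.(int) // the cast the parent comment complains about
	fmt.Println(n + 1)

	k, _ := b.GetByValue(1)
	fmt.Println(k.(string)) // and again in the other direction
}
```

Every call site repeats those assertions, and nothing stops a caller from putting a string where an int was expected until it panics at runtime.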
Honestly, if you want to avoid java now you have a plethora of options. You can develop for android with Scala and get basically first-class support.
You can develop for android via webview+html+js and use something like http://ocsigen.org/js_of_ocaml/ to write OCaml and have a wonderful actually functional language.
You can wait for Rust to have some android bindings implemented and use a wonderful language that allows powerful features like macros that are excellent for interface-building.
I don't mean to bash on Go since it does have a place somewhere, I just feel that it is not the correct language for something UI heavy nor anywhere that suitably complex abstractions and complexity-hiding are a necessity.
I never claimed Go was a functional language. Sorry, maybe I shouldn't have placed those two statements on the same line but I was talking about two different things. After all, Go does implement inheritance and OOP constructs using structs (in a way), I thought it was obvious I was talking about a different thing when I said functional languages without inheritance or classes can work well with UI.
It's not about imagining what the best UI component would look like removed from any particular framework, but what the practical constraints are working within the already-existing android UI framework. Android APIs were simply designed for inheritance-oo style to work with them, and trying to avoid this will cause pain.
A similar situation has occurred in iOS development, where the functional aspects of Swift do not complement the existing design of the Cocoa framework.
Well, if what you are doing is inheritance-heavy, then OOP features will make your code cleaner. It's what OOP is made for. You can do it in Go, but it's not as elegant.
Inheritance as taught in OO courses (deep class hierarchies) is bad most of the time.
We can save 100 lines of code there by coupling two slightly related things forever (and arbitrarily choosing that criterion for division as the most important).
I made a game before I knew OO programming. I didn't know it at the time, but I was using the Interpreter pattern: there was a data model with a list of records, and each record had some enums deciding how the game logic should handle it. A main loop updated the model, using those enums to choose which logic to run on each record.
It wasn't pretty, but it worked, and I could modify it however I wanted: choosing the logic to run on a record based on its enums, its other attributes, the attributes of parent records or of colliding records, whatever I wanted, basically.
Then I rewrote the game to use OO design ("I was enthusiastic: this is exactly what I need", I thought). It turned out I had to choose arbitrary divisions. I can divide game objects into Solid and Non-solid, Static and Movable, Visible and Invisible, and also Smart vs. Dumb. Which division should come first in the class hierarchy? All the rest would need to be duplicated anyway: SmartMovableVisible, SmartMovableInvisible, SmartStaticVisible, etc. Ugly.
Inheritance solves part of the problem and leaves you with an inconsistent solution.
The real solution is strategy pattern, and composition over inheritance. It can be done just as well in languages without inheritance (but with lambdas).
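A minimal sketch of that composition-over-inheritance approach in Go (all names invented for illustration): behaviors are plain function fields rather than a class hierarchy, so they can be composed per entity and even swapped at runtime.

```go
package main

import "fmt"

// Entity composes optional behaviors instead of inheriting them.
// A nil field simply means "this entity doesn't do that" -- no
// SmartMovableVisible combinatorial explosion.
type Entity struct {
	Name   string
	Update func(e *Entity) // "smart" logic, if any
	Draw   func(e *Entity) // visible entities set this
	Solid  bool
}

func main() {
	crate := &Entity{
		Name:  "crate",
		Solid: true,
		Draw:  func(e *Entity) { fmt.Println("drawing", e.Name) },
	}
	ghost := &Entity{
		Name:   "ghost",
		Update: func(e *Entity) { fmt.Println(e.Name, "thinks") },
	}

	for _, e := range []*Entity{crate, ghost} {
		if e.Update != nil {
			e.Update(e)
		}
		if e.Draw != nil {
			e.Draw(e)
		}
	}
}
```

Because behavior lives in data, an entity can change strategies mid-game by assigning a new function to `Update`, with no reallocation and no loss of identity.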
@vegedor you are hellbanned, I think. I can't respond to you and your post is grey.
I wrote the OO game in C++, but I avoided multiple inheritance, because there were warnings everywhere: "don't use multiple inheritance". And I don't think it would be much help in the end.
Another thing inheritance doesn't solve - when you want to choose which logic should run on your "object" depending on dynamic conditions.
For example, I had a ParticleEffect abstract class, with Smoke and Fire derived from it. Fire reacted differently to collisions than smoke did. Fire particles got smaller with each frame to simulate flames; smoke got bigger and more transparent with each frame to simulate, well, smoke.
Then I wanted to make fire that turns into smoke after a couple of frames.
Solution 1 - merge both classes, add an enum field inside, and switch it after a couple of frames [not OO design - behaviour depends on fields, not on classes].
Solution 2 - destroy the fire object after a couple of frames and create a new smoke object in its place [unnecessary allocation and deallocation is bad in games, and it also complicates identity - for fire and smoke it doesn't matter, but for the player and bullets it's important - I need to ensure the player isn't hit by bullets he shot, which is achieved by keeping a "parent" pointer in each object; when the player is recreated because he got new behaviour, all those pointers are wrong and need to be updated].
I can't change the class of an object dynamically, so all the OO divisions are static in time; I have to work around the class hierarchy to model my problem correctly.
Another thing - in collision detection, the logic to run should depend on both colliding objects' types, and on some of their fields. With OO I have to work around this again (because most OO languages are single-dispatch).
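Go is single-dispatch too, so the usual workaround is a manual type switch over both operands. A rough sketch, with hypothetical game types invented for illustration:

```go
package main

import "fmt"

// Hypothetical game types; names are made up for the example.
type Player struct{}
type Bullet struct{ Owner string }

// Collide needs the concrete types of BOTH operands, so in a
// single-dispatch language we end up type-switching by hand.
func Collide(a, b interface{}) string {
	switch a.(type) {
	case Player:
		if bl, ok := b.(Bullet); ok {
			if bl.Owner == "player" {
				return "ignore own bullet" // don't hit the shooter
			}
			return "player hit"
		}
	case Bullet:
		if _, ok := b.(Player); ok {
			return Collide(b, a) // normalize the order and retry
		}
	}
	return "no effect"
}

func main() {
	fmt.Println(Collide(Player{}, Bullet{Owner: "player"}))
	fmt.Println(Collide(Bullet{Owner: "enemy"}, Player{}))
}
```

This works, but every new pair of types means another branch -- which is exactly the double-dispatch pain the parent comment describes.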
Another thing - OO data is much harder to serialize. Structural data allows trivial serialization. Some languages (like Python or Java) solve this by providing serialization in the core language, but in C++ or JavaScript, serializing an arbitrary object graph correctly is nontrivial. Serializing a list of records was trivial even in Turbo Pascal ;)
More than just "in practice", it might be worse than copy/paste. Richard Gabriel nailed it in the 90s when he identified inheritance as a form of compression, not reuse, forcing the programmer to have to understand every class under inheritance to understand any part of it:
> Compression is a little dangerous because it requires the programmer to understand a fair amount about the context from which compressed code will take its meaning. Not only does this require available source code or excellent documentation, but the nature of inherited language also forces the programmer to understand the source or documentation. If a programmer needs a lot of context to understand a program he needs to extend, he may make mistakes because of misunderstandings.
(from Patterns in Software, which is a really deep if long-winded book)
Sure, you can clone Foo into Bar and change one thing in Bar. But what happens when you need to change functionality common between Foo and Bar? Oh right, you have to change the same thing in two places now. Doesn't seem so smart anymore...
This is why Go has standalone functions and interfaces. You can easily share logic between types without having the types be the same thing.
The biggest problem with inheritance is that it's a lot like monkey patching (except in production)... the rest of the base class's code expects method X to do ABC, and you've just swapped out the implementation to do LMNO ... and if that never bites you in the ass, you're luckier than most OO programmers I know.
This is the big insight. Instead of trying to inherit code, figure out what that code does that is interesting, and allow it to do those interesting things without being inside the class. For example, imagine your class has a "WriteToFile" method. Instead, it should just have a method to return its representation, delegating the responsibility of writing to a file to something else. (Of course, the fact that a file can be written to should also be one of those "interesting things", and the thing that writes shouldn't care that it's backed by a file.)
Instead of
foo.WriteToFile("/tmp/foo")
You might write:
file.Write(foo.Representation())
I promise that most inheritance in the real world is attempting to reuse something like "WriteToFile". "I wouldn't want to copy-paste the file-writing code, so I'll inherit from something that can write itself to the file." No. Don't do that.
I know you can; I explicitly talked about struct inheritance in my comment, as well as manually defining functions for the new type. Yet this is a weird construct if all you want is class inheritance. You have to deal with questions like "do I want to define the func on my type, or on a pointer to that type?", "how do I call 'super'?", "how do I extend the constructor for the new type?", etc.
All those questions do have answers in golang. But you have to learn new patterns in order to do an extremely basic and common thing in OOP, once again, without much immediate apparent benefit.
That's not the same thing as overriding a virtual method in an OO language. If B() calls A(), it gets the general A: http://play.golang.org/p/nu7kY168E9
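A self-contained illustration of that point (type names invented; this may or may not match the linked playground snippet): embedding is delegation, not virtual dispatch, so a method on the embedded type never sees the outer type's "override".

```go
package main

import "fmt"

type Base struct{}

func (Base) A() string   { return "base A" }
func (b Base) B() string { return "B sees: " + b.A() }

type Derived struct{ Base }

// Derived "overrides" A, but Base.B still calls Base.A:
// the receiver of the promoted B method is the embedded Base.
func (Derived) A() string { return "derived A" }

func main() {
	d := Derived{}
	fmt.Println(d.A()) // derived A
	fmt.Println(d.B()) // B sees: base A  (not "derived A")
}
```

In a virtual-dispatch OO language, d.B() would pick up the overridden A; in Go it cannot, which is exactly why porting inheritance-heavy designs is painful.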
> GUI is to me the field where inheritance actually makes a lot of things easier and natural.
I would invert that and say GUI frameworks are why bad OO/excessive inheritance is so ubiquitous. I have no idea why all the major frameworks have these horrible inheritance hierarchies, there's really no excuse for it.
In my UI days, I would avoid this by having a Window or Component class extend nothing, housing the necessary component as a field. I'd extend it only if I had to (rarely, and even then usually just one or two methods), and expose it with a getter. Cleaned things up a LOT.
There are plenty of non-Android devs, like me, who just don't want to go with Java and aren't fussed about IDEs. I know full well there are plenty like me in the Go community.
I'd also argue that by being designed to be machine refactored and parsed (unlike some dynamically typed languages), go is easier to wrap with an IDE for refactoring stuff.
The main use of Go for Android is to write native apps (games) in Go, using a simple draw loop that calls into OpenGL. That way you can have one codebase for your game on Android and iOS (Go will support iOS too in the future).
I don't think it was ever meant for Android UI development, because that requires calling into Java a lot, since most of Android IS Java.
It may not become popular among existing, heavily invested mobile devs, but they aren't the only devs who might target mobile. And the fashion of class-based OO and languages that require heavy IDEs seems to be increasingly less accepted as the one true way.
Android Studio was just released a few weeks ago and became the official IDE for Android.
Xcode advocates building GUIs with a WYSIWYG tool. Swift has all the OOP features, and it also has an embedded "playground" mode inside the IDE itself.
I don't think the trend is toward slim environments. There will always be people to say that doing things manually is better than having the computer do it, but it always surprises me when this thought comes from programmers themselves.
Note: I would actually love to have a great IDE for Go, with all kinds of refactoring, code analysis, and autocompletion features (LiteIDE isn't one of them yet). I don't think big IDEs are incompatible with elegant languages, as Swift+Xcode or Visual Studio+C# show. To me the problem with Java is that there isn't a single framework you could actually use if your only editor is emacs.
> There will always be people to say that doing things manually is better than having the computer do it
Languages that don't require a heavy IDE (particularly one including lots of tools for writing and rewriting code -- static analysis tools are a different issue) aren't "doing things manually instead of having the computer do it" -- interacting with the language itself is no more "manual" than interacting directly with the IDE. It's just aligning the write-language with the read-language. If I need a separate visual language for writing code, distinct from the language I'm nominally working in, that both increases the overall mental complexity and indicates that there are weaknesses in the abstractions available in the nominal working language.
I understand your point, although I don't think OOP has anything to do with it (as C# and Swift demonstrate).
I disagree on your "read language" vs "write language" analogy. Take for example SQL and Database model designer. Both are useful, and the fact that the second makes things easier doesn't diminish the merits of the first.
It's the one I'm currently using, but calling it enjoyable is really excessive. You can't compile or get any kind of completion while your program is running, and you can't define a script to be called when hitting "run" (it always tries to launch your current file). It doesn't have any kind of macro or refactoring features, etc.
Coming from Xcode and PyCharm, it still feels a decade behind.
They stated that Java is the language of the platform, laughed at the idea that someone would try to use Scala instead, and mentioned you are pretty much on your own if you feel like using the NDK.
Search for Google IO 2014 Android Fireside, video is available.
I don't really see it - a language made for systems programming (read: non-application development) being used for developing Android apps? Or is there some other kind of Android development in mind, here?
I think it was Rob Pike who later said he regretted using the term "systems programming" to describe Go. They never meant the phrase to mean purely operating-system tools and programs (i.e. not web applications or interactive end-user applications), which would be rather limiting. Instead, he said they meant it as a language for composing systems, as a typical SOA web site might be, or even for application development, as you've alluded to.
According to GitHub, 8.4% of glibc is written in assembly. You would actually expect that to be an overestimate, since assembly is guaranteed to be machine-dependent.
I suspect this is a significant underestimate: glibc heavily relies on generated assembly (they have a set of scripts to generate boilerplate asm functions for system calls) and inline assembly (which gets defined once, probably counted as C by github, and used many times.)
With that in mind, I tried to use the phrase in a more wide sense than some other language communities would. But you're right that that phrase might as well be disassociated with the language at this point, to avoid confusion. Better to find a new word for it.
Pretty much any language can be used to program pretty much anything these days. After all, a decade or so ago people would have sniffed at the idea of Javascript running on the server side, Python being used in physics labs and anything other than C++ or Perl for UNIX systems programming (a touch of an exaggeration, but you get my point :))
These days the limitation of a programming language is the programmer him or herself.
An app isn't only about the ui -- maybe you'll need some background operations, and you'll need to coordinate between multiple inputs. Go sounds more suited than Java for that.
There has been some work by the community. A user created an app with a button and a webview using go 1.4beta[0]. I modified it to only be a webview with a Go web server[1]. I also wrote an app for myself in the same manner here[2].
I found the whole android developer ecosystem a bit of a pain to setup, but it works now. The amount of java I have to worry about is quite low, so that's a win for me.
I've favored hg over git for a long time but I gave up that fight a few months back: I mostly use git for all my new code now. I think the DVCS "war" is nearing an end and git pretty clearly won it, for better or worse.
Still, it's great we've had all this DVCS craze in the past years, we ended up with great open source tools that improved a lot over the CVS/SVN of yore.
Hello, I found your C web development comment very interesting, regarding developing a website in C instead of PHP. As I was looking for a C/C++ freelancer, would you be interested in some work?
My email is tatzianarose@hotmail.com
Thanks
The one feature I miss from Mercurial is the "hg serve" ability. When combined with the zeroconf extension, it made it really easy to push/pull changes to/from other developers. Since switching to git, I don't think we've had one (vcs) push that included commits from multiple developers, but we used to do that all the time. I know git can be coerced into doing something similar, but hg made it so damn easy that developers actually used it.
Addressing your actual point, most shops won't use the full value of a dvcs, but they'll all get some benefit of it being a dvcs because each and every developer's machine has a backup of the central repo which, in the event the main git server dies, can be used to restore everything to its previously happy state. When you use a vcs like svn, you need to be a lot more paranoid about backing up the central server or you risk losing revision history.
There isn't much of a difference. The underlying principles of how they work are essentially the same. However hg has IMHO a more pleasant UI, and it's easier to customise. I believe Facebook chose hg for the latter reason: https://code.facebook.com/posts/218678814984400/scaling-merc...
I like the simplicity and the better Windows support. Many things in git rely on you having a powerful shell, which is not the case on Windows (who wants to use cygwin all the time?).
In Mercurial I never lost a commit, even as a beginner. In git I managed to accidentally make commits unreachable (yes, I know how to recover, but still, it took a bit of googling and trying).
I very rarely need git's "remotes" feature. There's rarely a need for me to know where someone else's master is at the moment. I understand why git is doing this and I get that it's pretty flexible, but most of the time Mercurial's simple branching model is enough for me (I don't like bookmarks, they confuse the hell out of me).
Also, my repositories are usually small enough that I don't notice the Python overhead compared to Git. My ultra slow old disk is more of a problem in that case.
Mercurial's help is MUCH better than git's. Git's manpages are so famous for being mostly useless that people even write joke tools that generate gibberish manpages. Mercurial has a very clean documentation, with `hg help ...` just showing you what you need to know instead of (on Windows) using your browser to render an HTML manpage.
And lastly: Will git 2.0 ever come out for Windows? Seems like nobody is working on that, so we Windows users still get some preview version 1.9.4 at the moment. That doesn't look to me as if Windows was a first-class citizen in git.
Fun fact: Converting from git to mercurial is the easiest thing in the world (`hg convert my-git-repo new-hg-repo`), but the other way around is weird and complicated to setup (talking about hg-git).
Once Go is installed, I can make a simple web app from scratch that displays parameters from the URL or request body in just over 95 seconds with Go, using only the standard library. I can cross-compile it for another OS and architecture, then deploy a single binary file without worrying about dependencies, dynamic linking, or segfaults.
Try doing that in C/C++.
Maybe a better comparison is to more systems-level applications like you might code up in C. The same benefits carry over to those, without a significant performance hit (usually).
(And to be fair, sometimes C/C++ is the better tool for the job. But give it a shot; Go is pretty handy when you don't need to drop down to lots of unsafe memory and CPU operations.)
By the way, your existing C skills will be very useful in Go. For example, struct fields will still be padded unless you pack them carefully. Your experience with buffers, streams, memory allocation, and data structures will definitely not be wasted in Go.
Concurrency with goroutines and channels in Go is pretty awesome, as is the tooling: race detection, code formatting, PPROF performance monitoring, vetting (finding incorrect printf strings), and a great test framework with parallel execution support. The standard library is enough for most of what you'll need, and it's all open source, so you can learn to write idiomatic Go from it.
And if you want to use external code, you can download and integrate it into your main package in about 90 seconds (go get + go install). To learn how to use it, godoc is there.
Sort of. On Linux, libstdc++ depends on the NSS system which dynamically loads libraries to implement things like getpwent(). Last time I had to actually make a portable static executable (admittedly some years ago) this was a real problem, and it was easier to statically link everything except libstdc++, then ship a copy which was loaded using RPATH in the linker settings.
I've been working with Go daily for a few months now at my job, and I'd like to point out that Go stack traces have a really low signal-to-noise ratio. There is a lot of cruft that mostly nobody cares about, and sometimes it can be hard to pinpoint exactly where something crashed (at which line) with a panic.
Just saying, they should probably improve it to be less verbose and more concise.
Yes, you are 100% correct, I dun goof'd. I would edit the post but apparently I can't, so I'm sorry for that. Same for the other person commenting on it.
The stack trace contains the stacks of all the goroutines. The running goroutine which caused the panic is printed first, so it should be pretty easy to find the source of the problem. At least I haven't had any issues in debugging complicated apps with thousands of goroutines...
But you can get that in C++ if you really wanted to as well.
I just wanted to point out that the claim about not having to worry about segfaults is clearly wrong.
Go doesn't really take any steps toward preventing pointers from being nil, or forcing you to deal with possibly-nil pointers, or with access to uninitialized maps or slices.
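The nil-map case is the classic example: the compiler is perfectly happy, and the failure only shows up at runtime.

```go
package main

import "fmt"

func main() {
	var m map[string]int  // nil map: compiles without complaint
	fmt.Println(m["key"]) // reading a nil map is fine: zero value (0)

	defer func() {
		fmt.Println("recovered:", recover())
	}()
	m["key"] = 1 // writing panics: "assignment to entry in nil map"
}
```

It's a runtime panic rather than a segfault, so the failure mode is cleaner than C's, but it's still an error the type system did nothing to prevent.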
> Once Go is installed, I can make a simple web app from scratch that displays parameters from the URL or request body in just over 95 seconds with Go, using only the standard library. I can cross-compile it for another OS and architecture, then deploy a single binary file without worrying about dependencies, dynamic linking, or segfaults.
Try doing that in C/C++.
You can also add numbers on an abacus. It _works_.
Put in a less snarky way, C is often way more low-level and tedious than you need.
The great thing about Go is that it lets you blend high-level and low-level programming in the same program, only getting low when you need to. It feels like a great mix of C and Python/Ruby/Perl/PHP/JavaScript.
>You can also add numbers on an abacus. It _works_.
Well, I wouldn't go there (C is antiquated/under-equipped like an abacus) if I were advocating Go.
After all, Go, just like an abacus doesn't have generics, or, besides channels and goroutines, most other facilities modern languages offer for that matter (GC and some basic data structures built-in is so 1980).
> Go, just like an abacus doesn't have generics, or, besides channels and goroutines, most other facilities modern languages offer for that matter
If you need mainly a large number of features in order to program, Go probably isn't for you. But from the perspective of a C/C++ programmer, this isn't likely to be an important point. Languages with more features than C have been available for 40 years (depends on what you'd count as a feature), so there were plenty of "better" choices in that regard available for said people.
For me, the lack of some "features" is a great asset for Go, I probably wouldn't have bothered to learn a new language if it had boasted the complexity of Rust or Haskell.
> The great thing about Go is that it lets you blend high-level and low-level programming in the same program
Where are map, reduce, select, filter, fold, zip ...?
I don't consider hand-rolling them in for/range as equivalent. Can I wind up with the same result? Yes. But I could do it with gotos too.
And I've seen "loops are the same as map/reduce etc" advanced as a serious argument by people who weren't, so far as I could tell, trying to tug on my leg.
Most of the questions about "how do I do X in Go?" are answered with "just write a loop". Yes, that means map/reduce, filter, fold, zip, etc. If writing simple loops gets your hackles up, Go is not the language for you. For me, I worry about the hard stuff, not easy loops.
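For the concrete flavor of "just write a loop": what a functional language would express as a filter-then-map chain is, in idiomatic Go, a single range loop (function name invented for the example).

```go
package main

import "fmt"

// doubledEvens is filter(even) + map(double) spelled as one loop.
func doubledEvens(xs []int) []int {
	var out []int
	for _, x := range xs {
		if x%2 == 0 { // the "filter" step
			out = append(out, x*2) // the "map" step
		}
	}
	return out
}

func main() {
	fmt.Println(doubledEvens([]int{1, 2, 3, 4})) // [4 8]
}
```

Whether that loop is "the same" as map/filter is exactly what this subthread disputes: the result is identical, but the loop is a lower-level construct the reader has to decode rather than a named operation.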
For most people's uses, it's the same. Certainly, there can be performance benefits if your language's implementation uses lazy evaluation and you have a very large list and you only need to evaluate a small portion of the list... but in my experience, that is very rarely the case in real programs.
Saying "supports higher level and lower level" programming when you don't actually support most higher level programming primitives is also, from my point of view, untrue.
Go's design choices are what they are, which is fine. But I just see a lot of people who argue, in all seriousness, that a loop has the same level of abstraction as map, filter etc. It simply doesn't.
That languages are turing equivalent doesn't mean that they operate at the same level of abstraction; trying to pass one off as being "the same" as another is just silly.
For Go, I don't think what you need is a compelling reason from someone else. One compelling reason to one is not necessarily compelling to others.
Instead, since Go is a very simple language with very easy learning curve, I encourage you to spend a weekend to check it out. Pick a side project and do it in Go, then you will know if it's the language for you.
I would even argue that the joy of writing Go would be magnified by working as a team and the joy of reading Go would be demonstrated by exploring any Go project from any team. But I would not go that far for a Go novice who just want to check it out
You should be able to get up and running with Go extremely quickly if your default language is C.
If there's a reason you must be using C rather than another language (require manual memory management, require code to be as fast as possible, etc.), then Go might not be applicable. Otherwise, Go feels like an updated C and I don't see a compelling reason not to invest a little time in checking it out for yourself.
Yes but you really almost never need to do that. The speed difference is usually only a factor of 2ish... It's not like the factor of 10-30x for interpreted languages.
It's Go's simplicity that really struck a nerve with me. I've been coding for over 30 years now (Basic, ASM, C/C++, Java, Python, PHP, JS, Clojure) and with Go, I just love how everything is laid out for you: idiomatic Go, formatting, commenting, unit testing, concurrence, standard libraries etc. I don't have to research the basic tools. It keeps me focused on the task at hand. I'm very productive with Go. We have it in production and I'm super happy with it. (Btw, I never missed generics. If you pick a language based on its number of features, Go may not be for you.)
Since you know Python, do you find Go as expressive and clear as Python?
Is there anything like list comprehensions and generators in Go? And anything simple like the Flask web framework?
How does it compare to deploying a python app using uwsgi+nginx in terms of performance / reliability?
There are no list comprehensions, but they're just trivial loops anyway. Generators of a sort can be created, but they are not commonly used. There are several web frameworks; I don't know Flask, so I can't say whether any are like it, but even with just the standard library you can trivially put up a web server that has routes and responds appropriately. JSON support and HTML & text templates are in the stdlib too.
Deployment is where Go shines. You don't need nginx, a specific runtime, or uwsgi. You just deploy the binary and run it; it runs its own web server. That web server is on par with nginx for performance, and it is as stable as your code is (which is generally very stable, since Go's error handling is very explicit). And of course Go code is generally around 10x as fast as Python.
Preach it! Been using Go for a while now, and I haven't been able to keep this much of the problem domain in my head since my days coding in M2. IMO, that's the point of Go.
If you're using C/C++ and need the performance, look at Rust. Fairly expressive, good type system (and they figured out generics), and C++ like or better performance. All the power of manual memory control, with none of the downsides.
Rust has great performance in general - and I also agree that it is an appealing language - but I haven't heard anyone claim better than C++ performance before. I'd be very dubious of anyone making that claim...
Rust has several specific areas where it is faster by default than C and C++, thanks to knowledge of ownership in the type system. The most dramatic example of this is that almost every pointer in Rust can be automatically and safely proven not to alias any other pointer, which is the equivalent of sticking `restrict` on every pointer in a C program. Sadly, Rust isn't actually passing this information onto LLVM just yet, so it's not generating code nearly as well as it could. :P
On the other hand, Rust also has specific areas where it is slower by default than C or C++. The most dramatic example here is that array accesses are bounds-checked by default, with an unsafe method call for doing unchecked access. It strives to bypass this restriction by leaning heavily on iterators, an abstraction that allows you to safely omit the bounds check.
In the end, all that matters is that both languages offer zero-cost abstractions, so the ultimate arbiter of speed will be the optimizer on the backend. Saying that Rust is "faster than C++" is definitely premature at this point; I'll reserve that judgment for when it at last starts making use of all its free aliasing info.
Because it's arbitrary, most of the benchmarks aren't useful or interesting, and it encourages a mentality that I find to be a net negative.
Many of the tests boil down to "Does your language have GMP bindings" or "Can you write non-idiomatic code to theoretically make this as fast as something else?"
In other words, I don't find it to be useful, except in the absolute broadest sense.
> What about benchmarks game is arbitrary in your opinion?
For example, look at the history of D being in the game vs. not. Generally speaking, all of http://benchmarksgame.alioth.debian.org/play.html#languagex is incredibly arbitrary. I don't think that the maintainers of the game have some sort of moral obligation to support things they don't want to, but it is an arbitrary line which influences what kind of benchmark it is.
As another, the domain of the examples, which are largely related to numeric computing and/or memory usage. Which is fine, but isn't going to give me information about other use cases.
> None of the tasks boil down to "Can you write non-idiomatic code to theoretically make this as fast as something else?" -- if you don't want them to!
Yes, they do, because no man is an island: people use the raw numbers of the Benchmarks Game to claim "X is faster than Y", even if the implementations of X and Y are totally different from what their own implementations would be.
> that's been the choice of the Rust community.
I'm actually not thinking of Rust here. I'm thinking more of languages like Haskell, whose entries look significantly more like C than like Haskell.
But regardless of whether the implementations are idiomatic or not, it means that the Benchmarks Game is useless for evaluating whether a program I write in one language is going to generally be faster than if I wrote it in another. Because this isn't the code that I'd write: it's code that exploits every last corner to get as small a number as possible.
>>… being in the game vs. not … but isn't going to give me information about other use cases.<<
1) That might be a credible criticism if the benchmarks game claimed to be some kind of exhaustive comparison.
In fact (if it wasn't already completely obvious that the benchmarks game has nothing to say about things it does not show) over & over again the web pages state -- "These are not the only compilers and interpreters. These are not the only programs that could be written. These are not the only tasks that could be solved. These are just 10 tiny examples."
2) You seem to have assumed that because you personally don't know the reasons, there were no reasons.
>>…totally different than what their implementation would be.<<
What does the benchmarks game have to say about those "totally different" programs that are not shown? Nothing.
Do you think there's an idiomatic style for programs written as though performance matters or does idiomatic just mean programs written as though performance doesn't matter?
>>…useless for evaluating if a program I write in a language is going to generally be faster or not than if I did it in another.<<
Does the benchmarks game website claim to be useful for that purpose? It sounds like magical thinking to me --
"… but the question is still asked - Will my program be faster if I write it in language X? - and there's still a wish for a simpler answer than - It depends how you write it!"
You can state those things all you like, in as large a font as you like, but it doesn't stop people from comparing languages based on the benchmarks game.
It's magical thinking to say otherwise: just because something doesn't claim to be X (or claims to not be X) doesn't stop popular opinion from thinking X. Maybe popular opinion is wrong, but it exists and it's what we have to contend with.
A few comments up you are saying that Steve's problems with the benchmarks game are invalid and the game isn't considered for the sort of comparison Steve (and many others) dislike because it says all over the website that
> These are not the only compilers and interpreters. These are not the only programs that could be written. These are not the only tasks that could be solved. These are just 10 tiny examples.
etc.
But now you are saying that obviously people will do naive comparisons using the benchmarks game. This validates the dislike of widely publicised one-dimensional benchmarks like the benchmarks game (NB this applies to a lot of benchmarks on the internet, the benchmarks game is just a particularly famous example, please don't get too defensive again).
Sure a discussion is taking place, but essentially any discussion about the benchmark game degenerates into either an argument about why the benchmark game doesn't cause one-dimensional comparisons (with the pro-Benchmarks-Game side consistently being overly defensive), or an argument about why using every unsafe corner of language X to basically mimic the fastest C program is perfectly reasonable, idiomatic and common in the real world.
Rust has fewer edge cases than C++ by an order of magnitude or so. Rust is effectively all the tools and best practices of C++11, except without the burden of twenty years of backwards compatibility (not to mention the albatross of being an almost-but-not-quite superset of C), and as a result it can design the entire language around these best practices and optimize them for ergonomics (Rust's boxed pointer is much easier to use than std::unique_ptr, especially since Rust defaults to move semantics), for safety (Rust's borrowed references are amazing when all you're used to is the use-after-free-for-all that is C++ references), and for speed (Rust's Rc smart pointer is faster than C++'s std::shared_ptr, because knowledge of ownership allows Rust to safely use non-atomic operations to bump the refcount).
I was at a lecture about security in C/C++ code a couple of weeks ago. The speaker's conclusion was basically:
> There are no silver bullets with regards to safety in C/++ code; in order to achieve security, the programmer has to pay the price of being forever vigilant.
A lot of the complexity of Rust is for eliminating security pitfalls which are inherent in C/++.
It is common to use Arc and unsafe shared memory access in Rust. This defeats the purpose of all that complexity. You can't isolate unsafe memory access: you either write only memory-safe code or end up with a fully unsafe codebase where some nasty things (buffer overruns or segfaults) are possible. If somebody needs memory safety, managed languages with GC are the only real option.
Ah, I thought we were talking about isolation for code auditing purposes. If you're concerned about address space isolation, then this other statement is false:
> If somebody needs memory safety - managed languages with
> GC is the only real option
If I'm in Java, I can call C code via JNI that does whatever garbage I want with the memory of the Java program. There's no isolation there; we are thwarted by the need for FFI. Likewise, `unsafe` in Rust is just a reified FFI: it allows you to do things that Rust doesn't allow you to do, but, crucially, `unsafe` blocks in Rust are still much safer than the C code that you'd otherwise be writing. Thus `unsafe` is a mechanism for making Rust programs safer than they otherwise would be, by avoiding the need to call into C.
What is unsafe about using atomic reference counting?
> and unsafe shared memory access in Rust.
That should be provided with a safe interface, or an unsafe interface if calling that code is not safe.
As for it being common: I think they are working on minimizing the need for unsafe code.
> This defeats the purpose of this complexity.
Like having a VM implemented in C defeats the purpose of the VM for that language being safe. No, not really - that C code has to be really vetted, just like unsafe code in Rust has to be really vetted.
I guess we might - eventually - be able to formally verify a language implementation, thus really proving that a language is safe (that goes for those managed languages, too). Maybe that will be feasible in a few decades, if ever. Alternatively, you can use the ATS language, where you can prove that unsafe usage of pointers etc. really is being used in a safe way.
> You can't isolate unsafe memory access.
Sure you can - owned and borrowed pointers in Rust are represented as raw pointers at runtime. It's a safe abstraction. And if there turns out to be a bug in that interface, and they aren't really safe, then that will have to be fixed promptly - unlike in C/++, where one would be forced to say "Well, that's your fault for not being careful".
> What is unsafe about using atomic reference counting?
Nothing at all. But sharing objects between threads is unsafe.
> Like having a VM implemented in C defeats the purpose of the VM for that language being safe.
You can prove that VM code is memory safe? Good for you.
> Sure you can - owned and borrowed pointers in Rust are represented as raw pointers at runtime. It's a safe abstraction. And if there turns out to be a bug in that interface, and they aren't really safe, then that will have to be fixed promptly - unlike in C/++, where one would be forced to say "Well, that's your fault for not being careful".
And in C++ we have value and move semantics. Nobody uses pointer arithmetic to implement arrays and strings anymore. std::unique_ptr is a standard way to implement the same semantics as borrowed pointers in Rust. Array access can be range-checked if you want. So being careful in C++ is easy today; can I say that C++ is safe? :)
> Nothing at all. But sharing objects between threads is
> unsafe.
This is mistaken, as sharing immutable data between threads is trivially safe, and Rust's type system gives you the tools you need to prove that data is actually immutable (good luck sticking anything in an Arc if it contains an Rc (or any other non-Send type) anywhere within it). And sharing mutable data between threads can be safe if you get the locking right: Rust gets the locking right for you, so that you don't have to.
> std::unique_ptr is a standard way to implement the same
> semantics as borrowed pointers in rust
No, std::unique_ptr is analogous to the Box smart pointer in Rust, except more onerous to use because move semantics are not the default in C++. C++ has no equivalent to Rust's borrowed references.
> So being careful in C++ is easy today, can I say that
> C++ is safe?
Sure, if you're willing to throw out all C++ code written before C++11, and if you're willing to lower your standards of "safety" to "trivially, silently, and often surprisingly unsafe". :) I actually have a higher opinion of C++ than most developers you'll find (PHP too, but that's a different story...), but let's not pretend that safety is at all C++'s forte.
C++ is unsafe by default. You have to opt-in to obtain all of the safety (by adhering to a particular pattern of use, or using a particular kind of class, etc.).
Rust is safe by default. You have to opt-out to head into dangerous territory.
That difference may be trivial to some, but to me, it's enormous.
> Nothing at all. But sharing objects between threads is unsafe.
If sharing stuff between threads in Rust is unsafe, then that is a bug which you should report.
> You can prove that VM code is memory safe? Good for you.
Uh, that's my point... VM implementations are usually not proved to be safe, any more than unsafe code in Rust is proved to be safe.
> And in C++ we have value and move semantics.
Which aren't bulletproof - if you're not careful, they can be used in an unsafe way. Well, this is second-hand information, so take that for what you will. You could ask pcwalton about it if you want a truly informed opinion.
> So being careful in C++ is easy today, can I say that C++ is safe? :)
Sure you can. You can say, "My code is safe, because I only use feature X, Y, Z / because I avoid this and that...". While someone using Rust should be able to say "My library is safe, since I make no use of unsafe blocks".
A really crisp way to model concurrency. Most real world performance problems stem from the inability to model concurrency correctly.
I also found it's faster to read the library code than the docs. A lot of thought has gone into making the language and library clean and easy to understand.
If you're doing networking, it's worth looking into. It has a good standard library with good networking features. It has pretty solid concurrency primitives.
Beyond that, I would not recommend it. I think a lot of people get caught up in the fact that A) it's hip and new and B) it's managed by Google. Without knowing what you want to do with it, I would recommend any number of languages (Rust, Haskell, Julia, etc.) before learning Go.
Except Java and C# are significantly better at all of these points except for boilerplate, which I would even dispute whether that is a bad thing overall.
Really, things like concurrency primitives as a point in Go's favor? Running code on several processors at once is where it's good to have a library, because there are tons of knobs, like priority and scheduling policy for instance; it's been trivial to deadlock or arbitrarily delay an entire Go program just by having GOMAXPROCS busy loops, and there was no scheduling control at all last time I checked. Meanwhile, shared-data locking isn't even in the language, despite being at a very fine level where using libraries can be cumbersome.
And Go comments as good documentation is another mind-boggling claim. It's mostly free-form text with few machine-readable parts. Most functions don't even explain what types of errors can be returned, just "err Error". It's so bad it isn't even comparable to JavaDoc or C#. How can you have "safe" programming when you have to dig through sources just to find out what the error conditions are?
There is a good reason to use Go and that's to feel like a pioneer. But few technical reasons.
>Except Java and C# are significantly better at all of these points except for boilerplate, which I would even dispute whether that is a bad thing overall.
C# on Mono certainly isn't significantly better performing than Go from my experience, which is substantiated by Go winning all but one of the 'Benchmark Game' benches, typically by quite a wide margin:
On many of these benchmarks, which do no memory allocation, Go (compiled with zero dynamic loading, not even modules) should be only slightly slower than C due to range checking and the lack of fine-tuning controls, but instead it is half as fast. Benchmarks that use memory are 1/6th as fast.
Go should be faster than JVM Java at microbenchmarks due to no dynamic loading and precompilation, instead it's only faster on some really tiny algorithms possibly even due to just fitting in the cache better because the GC runtime is so basic. So if you want to write some simple command line tool Go maybe it'll be slightly faster than Java. On anything that uses much memory JVM destroys Go on performance.
So performance is not a reason to write something in Go -- in fact it's just the opposite. Since you can't control the coroutine scheduling you can't do anything about latency spikes or starvation or anything else that can cause performance problems in multithreaded programs.
If garbage collection is acceptable, you'd learn more from trying OCaml - or even Haskell, if you want to really expand your comfort zone. I'm not saying don't learn Go eventually, but for the "second language" you should try something more different from C/C++ so you get more of an idea of the range of stuff out there.
OCaml has been around for almost two decades. Something must be holding back its adoption. Microsoft's version, F#, seems to be doing better, but developers aren't adopting it en masse.
It's become a lot more popular in the last few months/years, maybe there's a good reason for that. The English documentation is much better than it used to be, as is the library ecosystem.
I work mostly in Scala, but there's a lot of similarity, and I think the main reason we're seeing it take off now (10 years after originally released) is that the kind of problems where it's really useful are becoming a lot more common. If everything's in the cloud, you need a language that's good at distributed problems. If you need to handle huge volumes of data, you need something more flexible and explicit than traditional languages. If you're using too many layers of technology to understand what they all do, you need a language that can help you keep track of them.
True concurrency as in concurrent computation yeah, it's a bit awkward. But if you just want async I/O (which covers a lot of cases - if you're rendering a web page based on a database query, being able to handle other requests while you're waiting for the database results is much more important than being able to do the actual HTML rendering in parallel) it's pretty good at that AIUI.
there are lots of old HN threads, a blog, and a couple of books to give you the flavor (and yeah, some people are less than enthusiastic); also, O'Reilly and Manning are coming out soon with "In Action" books; you can read a draft with 8 of 14 chapters:
Concurrency is the most compelling reason. You will miss templates though. Try experimenting with the concurrency and network communication stack in the go standard lib.
It's a really simple language to learn, but it takes a bit more to master, because you have to solve some common problems in a different way (especially if you have a strong OOP background). To learn the basics, check out the tour[0]. It's really simple to go through and will explain the basics of the language.
It would be easier if you explain what type of programs you develop today in C or C++. In general terms Go is probably an easier and more productive language but you might lose some performance.
I would say one compelling reason is the speed of development/deployment. It is really a very well designed language. You have to try it to know it :-)
Most of them are tests, but I see a couple genuine use cases:
1. Draining a channel until it closes.
2. Since range of a string gives utf8 runes, the RuneCountInString function in the utf8 package simply has an integer count, and does a range through the string increasing the count in every loop.
If they added support for `range 10` to iterate 0-9 this would be useful for doing things a certain number of times. Now it's probably mostly useful for emptying iterators.
One use case would be to exhaust a channel (that is, read everything off it until it is closed). Another might be to loop over something of a fixed size to correlate with a corresponding number of other things.
Nice. Now all I need is for someone to turn https://github.com/jcla1/gisp into a slightly more complete LISP and I'll have a nice environment for all kinds of ARM systems.
Gitiles is a simple git repository browser built on JGit. Emphasis on simple: the goal is to make it easy to see your files and changes, leaving complex tasks to other tools.
Gitiles is the source browser used by the Android Open Source Project. To see it in action:
The port is Go 1.4+, and it's based on the upstream dev.cc branch. I intend to propose it for inclusion in Go 1.5 after finding a way to test it (either build each test as an App, or use the open source xnu/arm port). Brad mentioned that inclusion of iOS support might happen in Go 1.5 in his talk "The State of the Gopher" (http://talks.golang.org/2014/state-of-the-gopher.slide#40); now I believe it really will happen.
The iOS port is my first Go OS port, but it's the only one that took almost 3 years to complete. :)
Go can now build iOS apps. Let's see when we'll see a Go-based app in the store.
Ken and I ported Plan 9 to the SPARC in 6 days over one fun Christmas break.
I wrote the disassembler (for the debugger) and Ken wrote the assembler, so we could cross-check each other's work. The hardest problem other than fighting register windows occurred when we both misread the definition of an instruction the same incorrect way.
An app can be written entirely in Go. This results in a significantly simpler programming environment (and eventually, portability to iOS), however only a very restricted set of Android APIs are available.
The provided interfaces are focused on games. It is expected that the app will draw to the entire screen (via OpenGL, see the go.mobile/gl package), and that none of the platform's screen management infrastructure is exposed. On Android, this means a native app is equivalent to a single Activity (in particular a NativeActivity) and on iOS, a single UIWindow. Touch events will be accessible via this package. When Android support is out of preview, all APIs supported by the Android NDK will be exposed via a Go package.
Alternatively, you could write your UI code in Java and call into a Go shared library via JNI methods, if you aren't worried about graphics and just need code portability. Again, this is similar to working in the NDK.
Thanks for the link. I just read about it. So probably it's not a substitute for Java (yet) right? I guess it's kind of a replacement of C++ for multi-platform games development.
Also, can someone please give a brief description of the whole process of writing a simple Android app in Go? What is the bare-bones setup (Go, JDK, Android SDK?, etc.) so I can develop and compile? Is the final result an apk file? Thanks.
I am also not an Android developer, so I don't know how to get started using Go for Android. Do I need to set up a Java skeleton first so it can call my Go code? Use Android Studio or another IDE? Are libraries needed? You know, the basic setup and steps for producing a 'Hello World' app.
If the instruction is already out there somewhere, I will be very glad to check it out. Golang for Android app, this is exciting.
The original motivation for putting it into the standard library is so the 'go' command can use it, to watch the filesystem and know what needs to be rebuilt before you even run "go install" or "go build".
Fsnotify will probably make it into the standard library or other core golang repository at some point, given that it could help speed up the compiler. However there's nothing particularly mystical about it that prevents a 3rd party library from being just as good.
I think they ought to add a one-argument version of the built-in copy() function to bitwise-duplicate and fix up the pointers of the built-in slices and maps. This would support the copy-change-replace semantic their example for Value follows.
Good god. Seriously, another language. I'm just starting out learning how to program, and I find it irritating that there are so many languages and it's not that easy figuring out which ones you should learn and which ones you shouldn't.
> An explicit goal for Go from the beginning was to be able to build Go code using only the information found in the source itself, not needing to write a makefile or one of the many modern replacements for makefiles. If Go needed a configuration file to explain how to build your program, then Go would have failed.
And the hope is that go generate would be a hook into the dependency graph of go build. Much like the architecture switches, the naming conventions, etc, it seemed like it really could have solved my project's protobuf problem.
But it's not, for reasons that Rob Pike's doc and the mailing list did not make clear. It looks like a misstep to me, and I don't think it's due to my misunderstanding of what `go build` and 6g/gccgo are.
Is it possible to do away with the typical Go project directory structure? I thought all that was enforced by the tools, but it's possible that I didn't dig deep enough to uncover greater flexibility.
The Go team has decided that the Go language doesn't depend on any build system, but pushes a single build system, one that requires source code compatibility and provides no benefit over many existing and easy-to-use build systems (makefiles, tup, etc.).
This relatively recent trend of company-specific languages annoys and disturbs me.
I don’t ever want to be tied to a language and library ecosystem under the thumb of a single (large) corporation. Not Visual Basic, not .NET, not C#, not Objective C, not Go (It’s even named after the company, for crying out loud! Yes it is. Don’t try to claim otherwise).
I’ve used Basic, I’ve used Pascal, I’ve used C, I’ve used Python, I’ve used Lisp, and so on and so on. Those were open platforms, with different companies in the “lead” position in any one time (Microsoft Basic, Turbo Pascal, Borland C, GNU C Compiler, CPython, PyPy, etc.), but the language was not “owned” by a single company which would loom in everyone’s mind, always the unspoken fear being “what if <company> doesn’t like it?”.
I will not use tools which make me afraid. I will not live a life in fear.
I'm generally in agreement, but it's hard to say what languages will be considered "company-specific" in the future.
Go back in time 35 years and C could have been called that "AT&T-specific language". Or the opposite: Objective-C looked like a hopeful contender to be a widely used language, but it never caught much traction outside of NeXT/Apple (and NeXT ended up buying the developer), and today it's almost completely associated with a single company's ecosystem.
Maybe in 4 years we'll be talking about the new features coming in "ISO Go19" and complaining that VisualGo still doesn't support all of Go18 yet. Who knows?
JavaScript became ECMAScript, where ECMA is there to define standards, but still there are a couple of big players from certain companies and still they do Dart, which is mostly a Google thing and is a direct competitor to ECMAScript.. well, ECMAScript 6 at least. Even though it's not a bad language one can see a lot of company interest there.
And when you look at Go you see that it is controlled by people who work for Google, but don't exactly depend on them. I think it's more a language of Ken Thompson or Rob Pike than Google.
What do you think about Rust? It's affiliated with Mozilla, but Mozilla is (also) a Foundation and Rust is not a Mozilla Product like Firefox, where it often is really hard to get your changes in. There are a lot of people who got interested, even outside of Mozilla and at least to me it appears that they have influence.
And the last thing is what matters. Microsoft has huge interest on controlling C#, Visual Basic .Net, etc., because their income is related to data. They create the major platform for it, get income via selling the OS, the IDEs, etc.
If you look at Go then that doesn't seem to be the case. Google's main benefit (tell me if I am wrong!) appears to be being able to replace C++ and Python where neither seems to fit for one reason or another. If you look at the state of both the Android version now and Google App Engine then Google doesn't show huge interest into using it for money.
Of course that is only now and maybe this changes, but it doesn't look like it for now.
My personal opinion is that Dart was more to worry, also because of the approach being similar to Microsoft, when they create a standard (be it for documents or a language) and then create their incompatible extension or are always ahead of the others, because of their strong involvement. I really don't see that with Go.
Another thing that is slightly related to go is NaCl (Native Client, not the crypto library), being Google's version of ActiveX. But that's more because it of course is a target platform for Go, not a thing that's wrong with the language. Else you'd also have to worry about C.
Since you ask: Rust, being a Mozilla thing and Mozilla being a foundation, is certainly a less risky and “fearful” (so to speak) platform in this regard. Go, however light a hand Google might have on it at the moment, still has Google’s hand on it, which makes me wary. Sure, it’s better than Dart (or C#), but everything can point to something which is worse – that is not a good enough reason to accept a thing’s drawbacks.
If the non-company-specific languages cannot keep up with the evolution that company-specific languages show, they will have a problem getting or keeping users. I know it takes a lot of investment to build great tools, do marketing, evolve a language, etc., and currently some large companies seem to think they get more of what they want by funding their own languages. If someone funded the free languages the same way, they would probably evolve faster and get more users. There is also the problem of getting large groups of people and organizations to agree on something, which also slows down progress in some open/free projects. Sometimes it is easier to just decide for yourself.
There are certainly advantages for companies to develop languages in-house, but there are also, as I wrote, disadvantages for outside developers to use that company-specific language. This lesson was learned long ago about assembly languages and machine code: If you could program the Burroughs Whatzit 220, you could not program the Data General WhizJig 3000, and all the software ecosystem you (and others) had written over the years became obsolete when technology advanced. This was a large part of why programming languages were invented, and why they were always meant to be platform and vendor independent – so that this would not happen again¹.
It’s possible that we are due for another generation of developers to make the same journey of discovery of why vendor-specific technology solutions have larger drawbacks than what is initially apparent.
¹ This is also, incidentally, why Unix proliferated as an operating system – it was and is, for all intents and purposes, a vendor-independent platform for development at the operating system level.
>but the language was not “owned” by a single company which would loom in everyone’s mind, always the unspoken fear being “what if <company> doesn’t like it?”.
>I will not use tools which make me afraid. I will not live a life in fear.
This is a very emotional response. It uses the language of emotion: fear, afraid, even the odd emphasis of the words owned and loom. Unfortunately for you, Hacker News tends to be logical and analytical.
Personally I don't fear any tools, except for table saws, which combine extreme sharpness with a dangerous axis of rotation that can lop off fingers or propel a 2x4 through your abdomen with ease.
Oh, I also fear C++, which is the table saw of the mind.
While technically true, this has almost nothing to do with what I wrote. Huge code bases thrown over the wall, while technically “open source” does not a community make. A language named after a company with the overwhelming majority of development sponsored by that same company is not something you realistically envision someone forking, so the fear I spoke of is still there.
Now, if the language was, say, maintained by 80% or more by submissions and monetary contributions from outside the company (and if the name was changed), then it would approach being a neutral platform to the benefit of all.
As it is, we’re all being invited into Google’s yard to play, but we don’t own the house.
Have you actually observed the Go community and how Go has been developed over the last five years? What you are saying are fair concerns in the abstract, but I cannot reconcile them with the reality of how the Go project operates.
You also seem unusually hung up on its name. It's not like it's called Google Programming Language All Access.
I see from another comment that the release happened first, and only afterward was it pushed to its “official” Git repository. This is not the way real open projects do releases; instead it indicates that the real development is done in-house and the code thrown over the wall.
A name is important, as it is a symbol. As long as the language is called “Go”, Google will always have power over it, no matter who actually does most of the work, and thus the fear will still be there. (Yes, “Go” is symbolically the same as “Google Programming Language All Access”. It’s as if Microsoft released something called “MSCode”.)
> I see from another comment that this release first happened, and then, afterward, the release was pushed to its “official” Git repository.
You misunderstand. The release was made from the official open source Mercurial repository, and later pushed to the official Git repository, because this release coincides with the project's migration from Mercurial to Git.
Every single change of this release was written in public, reviewed on public mailing lists, and committed to a public version control system. You are misinformed and spreading FUD. Please stop.
>This is not the way real open projects do releases, and instead indicates that the real development is done in-house and the code thrown over the wall
There are several projects that are free and open source, widely available on every platform, and hacked upon by hundreds of people, that still cycle releases and development behind the backs of most developers and only release full .tar.gz source archives after a milestone is reached.
I might be wrong but iirc bash is one of those projects (or at least was), I seem to recall people complaining about it during the shellshock issue. The GNU libc might be another but I'm not sure.
Look at how the communities around languages like Basic, Pascal, C, Lisp and Python have developed.
They were, from the beginning, open with development, releasing early and often, and were explicit about readily accepting large contributions from outsiders (and really did so). The creators were always open to the possibility that they themselves might not be the eternal keepers of the language, which allowed competition when others developed the language further.
Contrast this with the development of Go, which has done the opposite of these things.
I freely admit that I do not know the intimate details of Go development, but my point is that they are almost irrelevant. Go is still perceived to be controlled by (and is therefore effectively controlled by) Google. How true that actually is matters little until the perception changes. And with a name like “Go”, Google likely has no intention or wish for that perception to change.
I mean, can anyone claim that an internal developer at, say, Microsoft or Apple could develop programs in Go and have them become used for large parts of the internal company infrastructure without it becoming politically sensitive, just as if they had chosen, say, C? Until that happens, Go is not an obviously-neutral platform, and I therefore have no desire to use it.
> I freely admit that I do not know the intimate details of Go development ... How much that is actually true is almost irrelevant until the perception changes.
Since you freely admit your ignorance, can you please stop making uninformed statements that spread FUD about Go? Those of us in the Go community that invest our lives in this project don't appreciate your senseless negativity.
This is true of the very initial versions of C. My understanding is that it wasn’t really popular until the C compiler was freely given out to universities and later the world. Also, the book (The C Programming Language), effectively an easy-to-read language specification, contributed heavily to independent implementations, as the language filled a hitherto unfilled niche.
I think you should do what you ask to be done: fork Go under a new name, build a community around it with the same objectives as yours, be competitive, develop the language further, and then reap the satisfaction. If there is a significant need for your ideas, I’m sure developers will join you (otherwise you will have the best proof that your idea is simply not good enough for many others). Personally, I’m very glad and grateful for the product of a tremendous amount of man-hours that I receive for free, even if I too have ideas (that don’t overlap perfectly with Go’s) about how a better language should look.
You misunderstand me; please re-read what I wrote. Nowhere did I call for the forking of Go. On the contrary, I specifically wrote that it was unrealistic to even envision it, since it would not work as long as Google sponsored more of its development than I could with a fork, and as long as Google was perceived as a more stable future sponsor of said development.
What I wrote was that I, personally, would not use Go as long as its development was perceived to be controlled and paid for by Google. Forking Go would not ameliorate this in any way unless the fork was successful, which would be extremely unlikely.
Normally I don't reply in these kinds of situations, but I'll make an exception: (1) I think you misunderstand Go. (2) You complain about something you admit you don't know well. (3) You want the development to be more the way you want it without making any effort yourself. My feeling is: your internal feelings about Go development are not relevant to many.