C to Go: Could Go replace C? (stanford.edu)
144 points by hanszeir on May 11, 2011 | 154 comments



To replace C, one must create a language that, first and foremost, can target shared libraries, because that's how most systems software operates on virtually all present-day operating systems.

This is by far the #1 requirement: extensions to all VMs and scripting languages are shared libraries. Plugins for various servers are shared libraries; heck, pretty much everything that is not a web app is a shared library, whether on Windows or Linux.

In my life as a C++ developer (about 7 years) I have never, even once, worked on an executable. All my code, at all companies where I worked, always ran inside either a DLL or a .so file loaded by some hosting process.

In fact, I believe that so-files must have been the default compilation target for both Go and D. Then millions of Ruby/Perl/Python/Java/<anything goes> programmers could have used those languages for performance-critical or OS-dependent code in their programs; after all, that's what "systems languages" are for.
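
For a sense of what that would mean in practice: a shared-library target is just the ability to export plain C symbols from Go code. A sketch of what that could look like, assuming a hypothetical build mode along the lines of "go build -buildmode=c-shared" (which Go's toolchain did not offer at the time):

    // add.go - a sketch of compiling Go to a C shared library.
    package main

    import "C"

    //export Add
    func Add(a, b C.int) C.int {
        return a + b // callable from any host as: int Add(int, int);
    }

    // Even when built as a library, the main package needs a main func.
    func main() {}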


It has become increasingly clear to me that C's influence on the systems landscape is even stronger than it would initially appear. Our ability to move beyond what we have now is not merely hampered by the fact that we have to create an entire systems infrastructure; it is hampered by the fact that we always get sucked into having to support C shared libraries on top of that, and in general that requires certain things to be true of your language, which pulls you into the C local optimum. And now you've failed to write a systems language that has moved fundamentally beyond C.

In short, to be able to run a C library means that your language must hand over total control to that C library, which may then dance all over memory if it chooses. You can't build a system that enforces any further constraints. And you don't have to be an "academic weirdo" language anymore to want to enforce constraints; even something like Go has a significant runtime that you need to work with properly to get the benefits of Go. Meanwhile, a C library simply expects to own its thread for as long as it wants, to allocate memory however it wants, to use whatever resources it wants, and not to be scheduled by anything other than the OS -- an immense laundry list of requirements that all looked OK 30 years ago, but it's increasingly clear that to get to the next step of systems design, some of those are going to have to be modified. And C is not the language that will allow this.
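
To make the thread-ownership point concrete, here is a minimal cgo sketch (cgo being just one example of a C FFI). Once the C call below starts, nothing in the Go runtime can preempt it; the scheduler can only spin up more OS threads around it:

    package main

    // #include <unistd.h>
    import "C"

    func main() {
        // For the full duration of this call, the C code owns an OS
        // thread: no preemption, no rescheduling, no GC cooperation.
        C.sleep(10)
    }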

The only language I know that manages to have radically different semantics than C at the deepest levels, yet can still talk to C libraries without (much) compromise of those semantics, is Haskell.

Somehow we've got to escape from C-type linking dictating requirements to the deepest levels of the semantics of the new language, or we're not going to escape from the local optimum we are in right now.


Why are so many people intent on taking C away from us?

The fact that you can run everything from Erlang to Python to Lisp to a JVM on the same machine and under the same OS is thanks to C, not in spite of it!

C exposes the machine at a low level, which means that you can innovate in the language/VM space without having to beg the runtime gatekeepers to please implement your new experimental semantics. Anyone can generate machine code that will run directly on the hardware -- I recently wrote a JIT that parses Protocol Buffers at 2x the speed of any existing parser. Don't take away the ability to do this!

Higher level VMs like the JVM can impose further constraints -- this is already possible today. But those VMs can compete for popularity in user-space without imposing limitations on the kinds of VMs you can write directly on the hardware.


"The fact that you can run everything from Erlang to Python to Lisp to a JVM on the same machine and under the same OS is thanks to"

Turing completeness, not C.

"you can innovate..."

We have. We've innovated for 40 years. It turns out we've learned some stuff since then, and it's time to move some of those innovations down to the lower parts of the system. Sorry, that means bumping C. But I assure you, I will celebrate C and put it on a pedestal and remember it fondly, even as I'm shoving it out the door and glad that it is no longer the constraint for all future systems.

It's not that it sucks, it's that it's basically used up. An immense number of problems with modern computing basically boil down to having C at the lowest level of the system, and we aren't going to fix them until we get something else at the lowest level. The hardware model that it embraces is simply not appropriate for building the networked future on.


> Turing completeness, not C.

Be serious. Brainfuck is Turing-complete: are you suggesting that other programming languages could reasonably be implemented on top of it?

Almost every single VM or programming language implementation is written in C. What are people going to write programming languages in when you take C away from them?

> We have. We've innovated for 40 years. It turns out we've learned some stuff since then, and it's time to move some of those innovations down to the lower parts of the system.

So your opinion is basically that it's time to stop innovating, because now we know the best answer. What do you have to say to the fact that I wrote a JIT two weeks ago that effectively improved the state of the art in network protocol parsing? How is that not part of "the networked future?"

Your arguments that C is holding us back are vague and unconvincing. People write VMs all the time. Code that runs on these VMs can be as insulated as you want from the underlying system. The entire web is built on JavaScript which doesn't know a lick about C -- how would that whole ecosystem be any better if C wasn't underneath it? I argue it would be much worse, because the radical performance improvements of the last 4 years (which came from writing JITs that target the bare metal) would not have been possible.

You can pry address spaces and machine code from my cold, dead hands.


"Be serious. Brainfuck is Turing-complete: are you suggesting that other programming languages could reasonably be implemented on top of it?"

Be serious yourself. With that sort of hostile characterization, I have little interest in following up in a dead conversation. You don't appear to have spent even a second trying to understand what I'm saying before leaping to a nonsensical strawman. (And I am well aware that understand != agree. But you didn't take the time to even know what you disagree with.)

"So your opinion is basically that it's time to stop innovating,"

No, it's time to resume. C is not where the innovation is.


I spent a lot of time trying to understand what you were saying, and because it was unclear to me I asked very specific questions, none of which you answered:

    - what will people write language implementations in, if not C?
    - why don't you consider my JIT an innovation worth supporting?
    - if C is a barrier to innovation, why is the JavaScript landscape so thriving?

My point about Brainfuck (that's a language by the way, not a slur, in case that wasn't clear) is that Turing-completeness says almost nothing about the systems-level capabilities of the language. Turing completeness means that a language can calculate Pi: it doesn't mean that it can open a file or send data over a network, and it says nothing about efficiency. So when you argue that Turing-completeness is what makes it possible to run lots of different kinds of VMs on the same machine, it's hard to take such an argument seriously.


"So when you argue that Turing-completeness is what makes it possible to run lots of different kinds of VMs on the same machine, it's hard to take such an argument seriously."

Turing completeness is why you can run a JVM on top of a C environment. It means that any computation that C could do, the JVM could do. It means that I can run a JVM on top of any Turing Complete environment. The JVM could be ported to a modern day Lisp machine. The JVM could run on a Russian ternary machine. C does not get "credit" for enabling the existence of the JVM, because the JVM could run on any Turing complete substrate. You have the causality backwards.

"- what will people write language implementations in, if not C?"

The new systems languages will be written in themselves! What do you think C is written in? "Bootstrapping."

"- if C is a barrier to innovation, why is the JavaScript landscape so thriving?"

Wrong question for what I'm talking about. JavaScript has no innovation of interest to systems-level programming; it's just a thin layer over C. I'm talking more about something like Go, which is hampered by the fact that it can't really use C libraries properly because Go can't grab and schedule C routines properly. Or how E [1] is really hampered by the fact that it basically can't work properly in a C environment; its whole schtick really only works if it goes all the way down to the OS level. The entire point is that I want to replace the foundation, so it can do things that C basically can't. Like write a secure operating system, which C is incapable of, despite immense amounts of effort.

Basically, why is UNIX still the preeminent OS? Yes, it was good, I know that, there's a reason why it's the only survivor of its time period, but where's the capabilities-based system? Where's the thing we can't even think of because we're too stuck on C? Why is one of the preeminent kernels of the day, the Linux kernel, still adding an average of two security vulnerabilities a week, the same ones, over and over? Why are buffer overflows still coming out in 2011? Why do I still get segmentation faults in 2011? Why are we building the very foundations of our system on a language that all but mandates such failures, when we know how to not do that?

C.

As for any response you might have as to why C is still worth it, my reply in advance is that every one of those things is room for "innovation", much of which has already been done.

[1]: http://www.erights.org/


> Turing completeness is why you can run a JVM on top of a C environment. It means that any computation that C could do, the JVM could do.

I'm genuinely confused: I thought this was the argument you called a strawman when I argued against it before.

I've already rebutted this argument twice: Brainfuck (http://en.wikipedia.org/wiki/Brainfuck) is a Turing-complete programming language, and yet you categorically could not implement the Java class libraries on top of it. For example, the java.io.FileReader class could not be implemented on top of it, because Brainfuck does not have any API for opening a file.

Even in cases where you can implement one language on top of another, it may not be efficient to do so. For example, Adobe Alchemy runs C/C++ code on top of Flash/ActionScript at a 2-10x slowdown compared with running the C/C++ directly on the hardware. Implementing a JVM on top of Java appears to be 4-9x slower than implementing it in C/C++ [0] (and that's only comparing two interpreters: the difference is far greater when the C/C++ implementation generates machine code directly).

C gets credit for efficiently supporting a whole host of different VMs with divergent GC schemes and synchronization primitives. That is what makes C special, and why it continues to dominate in system space.

[0] http://portal.acm.org/citation.cfm?id=1294345


> Turing completeness, not C.

C's mix of portability, simplicity, flexibility, and expressiveness made it the ideal choice for developing operating systems that had to run on a wide variety of architectures.

Perhaps a simple example of where C gets in the way would help clarify your case.


This is, indeed, why Rust originally compiled only to shared libraries (that also worked as executables). We ended up distinguishing shared libraries from executables for performance and OS integration reasons, but shared libraries are still key to the design.


Is there something fundamental in Go's semantics that prevents dynamic linking? Or is this just a tooling issue?

Personally, I think I might avoid a language that defined how the end product could be packaged; that just seems outside the range of what a good language should focus on.

Go definitely bucks the trend in modern web-age language development; for starters, it generates raw binaries. That alone puts it in a different category than just about everything else of significance that has been developed over the last decade. Rather than coming out with a full auto-completing IDE from the start or a plugin for NetBeans or Eclipse, they've got a lightning-fast object code compiler. The natural evolution of these things is to get raw binaries working, get the language pretty much stabilized, then add some default libraries and tools to make shared libraries and get dynamic linking and PIC-type stuff working. The language shouldn't have to actually change for that to happen; you just change the tools and create different ways to compile stuff. I'd expect shared libs to come along.


> I believe that so-files must have been

should, no? Considering you cannot — as far as I know — compile to shared libraries in Go.


You can compile your own packages to statically-linkable libraries. Code sharing is what libraries are all about. Now, if you want dynamically-loadable libraries (written in Go), Go's tools do not support that. (You can, however, link to dynamic libs written in C and that linking happens at runtime.) Static linking is precisely what prevents dependency hell.
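
For example, here's a minimal cgo sketch of Go calling into libm, which on most systems is a dynamic library that the OS loader binds when the process starts:

    package main

    // #cgo LDFLAGS: -lm
    // #include <math.h>
    import "C"

    import "fmt"

    func main() {
        // sqrt lives in libm; the dynamic linker resolves it at load
        // time, even though the Go parts are statically linked.
        fmt.Println(C.sqrt(C.double(2)))
    }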

I'm somewhat hijacking the discussion with this, but with storage getting cheaper, why do we need dynamic linking anymore?


Dynamic linking was never about cheap storage, it was about RAM. Except that dynamically linking libraries into your program almost always uses more memory than statically linking your program. So there is no point. Most systems (including Unix, which did not have dynamic linking until late 80s/early 90s) worked just fine without it.

Here is a good list of reasons why you should never use dynamic linking:

http://aiju.de/rant/dynamic-linking


FUD. Dynamic linking is about saving on RAM. Nobody ever said it's faster or "secure", so the entire post is mostly made-up arguments.

And dynamic linking does save a lot of memory. That's one of the big wins for Google's Dalvik: the standard JVM is unable to share code - that's why all JVM processes are memory hogs.

Looking at my process map right now: gnome-panel has 20MB in resident memory (not all of it is code, mind you) - and 13 of them are shared with other processes. Unity-window-decorator shares over half of its code with others. Just for kicks, open your process monitor with the "shared memory" column visible and sum up all those numbers - those are megabytes you would have lost without shared code.

It is about CPU as well. Moving a sort() implementation in and out of L2 cache is expensive; better to have a single instance of those opcodes do the sorting for multiple processes.

Sorry, but our operating systems and the programs we run on them are mostly composed of shared libraries. The debate over whether they're good or not is largely pointless unless we migrate to different OS designs.


"Sorry, but our operating systems and programs we run on them are mostly composed of shared libraries. The debate of either they're good or not is largely pointless unless we migrate to different OS designs."

What do you mean when you say that the operating system is dynamically linked? I don't even think Linux kernel modules count. I run OpenBSD, and all the base packages are statically linked. Many applications (esp. graphical ones) are dynamically linked, but there's not many reasons they have to be. The biggest one is bloatware libraries full of buggy crap you don't want or need. But dynamic linking isn't a solution for that problem.


"What do you mean when you say that the operating system is dynamically linked?"

By that I mean this: grab a debugger and attach to a random process you have running. Pause and look at the current stack: you'll notice that the code you're looking at resides in a .so. And everything on the current call stack is shared code, loads of it, sandwiched between the kernel at the bottom and a thin layer of hosting executable on top.

Here's another way to look at it: http://paste.ofcode.org/diZdtuH8uPs2UBWHEvTYWh (try to ram all that code into gnome-panel itself and ship it with the next Ubuntu - see what users will tell you)

One more time: shared code is absolutely essential. Only the "one-process-per-machine" datacenter approach can do without it. That's why there is no Java on the desktop.


Except that dynamically linking libraries into your program almost always uses more memory than statically linking your program.

If your metric is the memory usage of one arbitrary program. If your metric is the total memory usage of the system, it's likely that dynamic linking is a win because many applications can reuse the same pages in memory.

Most systems (including Unix, which did not have dynamic linking until late 80s/early 90s) worked just fine without it.

That's neither-here-nor-there; most systems worked just fine without operating systems until those were invented, too.


Plugins that don't require developer tools to relink your executable.


And when you have a security bug in your static library (let's call it "library Z", just for kicks), what do you do? Is it enough to just let the software know? What about big vendors, whose products might have 42 different copies of Z kicking around in the source base.

What if they miss some? Wouldn't an architecture that allows the distribution to define and replace Z on its own be valuable?


> "Wouldn't an architecture that allows the distribution to define and replace Z on its own be valuable?"

In theory, sure. But how many more years of practice do we need before we accept that it doesn't work that way in reality? That any change in Z is as likely to break apps that use it, as save them some hassle?


Clearly I was too subtle. It, uh, wasn't actually a hypothetical: zlib had known exploits that bit Microsoft and others for years because they had cut and pasted the library into a zillion places. Linux distros just updated.

There's no theory at work here. Static linkage of common components is a security vulnerability.


And I wasn't saying it never works. I was simply saying that it doesn't work that way regularly, or even often.


So that you can update a dependency without relinking every binary that uses it.


I know that shared libs are prominent and a better support for them would probably help Go's popularity. (Though the idea behind shared libs is somewhat flawed.)

But it rocks to deploy a Go app by just sending the binary to a dozen servers and never have to worry about any dependencies on shared libraries.


Savvy Win32 devs used static linking whenever they could to minimize deployment hassles way back when. :)


I remember a time when people asked, "What will replace the floppy drive?" Everyone expected the replacement to look and feel like a floppy drive, but with more speed and capacity. In the end, the "floppy killer" never materialized. Several unrelated technologies ended up taking over the floppy's uses one by one: CD-ROMs took over software distribution, USB thumb drives took over file transfer between computers, and the network took over everything else.

The same thing will probably happen to C. Modern languages like Python have arguably "replaced C" in several use cases already. Go will probably claim a few more. On the other hand, Go is completely unsuitable for applications that cannot tolerate garbage collection, such as embedded firmware. Some other language will have to come along and claim that niche from C as well.


I'm not so sure there are applications which simply cannot tolerate garbage collection. Unless, that is, you have no reasonable way of preventing allocations.

Console games, for instance, are often largely garbage collected, despite being hugely performance critical applications. The way this is generally achieved is by allocating 100% of the console's memory upfront as object pools. The game then utilizes domain specific collection mechanisms on those pools.

Shawn Hargreaves has a good article on dealing with C#'s garbage collection in XNA games: http://blogs.msdn.com/b/shawnhar/archive/2007/07/02/twin-pat...

The problem in .NET land is that some base class library methods allocate internally, so you need to completely avoid them if you use "Path 1" (avoid collections). You wind up having to do funky things like re-implementing core methods and pre-allocating pools of strings to avoid concatenation. Presumably, a language designed to be a systems language could avoid these library problems.

I don't know much about Go, but if allocations are clearly demarcated and easily avoidable when necessary, you could allocate 100% of memory up front. You could treat virtual memory as an object pool of memory pages and perform domain specific allocation on those.
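
Something like this sketch, say (Particle is a made-up type; the pool's backing array is allocated once up front, so the collector sees no new allocations in the hot loop):

    package main

    // Particle is a hypothetical game object; every instance lives in
    // the pool's backing array, allocated once at startup.
    type Particle struct {
        X, Y, VX, VY float64
    }

    // Pool hands out pointers into a fixed backing array, so no garbage
    // is created after initialization.
    type Pool struct {
        items [4096]Particle
        free  []*Particle
    }

    func NewPool() *Pool {
        p := &Pool{}
        p.free = make([]*Particle, 0, len(p.items))
        for i := range p.items {
            p.free = append(p.free, &p.items[i])
        }
        return p
    }

    func (p *Pool) Get() *Particle {
        if len(p.free) == 0 {
            return nil // pool exhausted; the caller decides what to drop
        }
        pt := p.free[len(p.free)-1]
        p.free = p.free[:len(p.free)-1]
        return pt
    }

    func (p *Pool) Put(pt *Particle) {
        *pt = Particle{} // reset for reuse
        p.free = append(p.free, pt)
    }

    func main() {
        pool := NewPool()
        pt := pool.Get()
        pt.X, pt.VX = 10, -1
        pool.Put(pt)
    }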

Alternatively, a collector with a richer interface (generation control, multiple heaps, etc.) would allow for "Path 2" (avoid latency) by performing very small, simple collections.


>I'm not so sure there are applications which simply cannot tolerate garbage collection.

Things that need to be really seriously realtime, such as aeroplane pilot assistance AI, and medical equipment firmware, I think these cannot use stuff like GC because there is no room for the slightest deviation in performance.


> Things that need to be really seriously realtime

Real-time garbage collection (hard and soft) is not a work of fiction. Hell, the first papers on real-time GCs were in the 70s.

> I think these cannot use stuff like GC because there is no room for the slightest deviation in performance.

IBM's Metronome (Bacon et al.) is a hard real-time GC.


> Things that need to be really seriously realtime, such as aeroplane pilot assistance AI, and medical equipment firmware, I think these cannot use stuff like GC because there is no room for the slightest deviation in performance.

Nope - real-time systems have room for deviation. They "just" have different bounds. Specifically, hard real time "merely" requires guarantees that operations complete before their deadline.

For example, real-time systems can have caches even though they introduce variability in execution time.

Of course, the closer one is to the edge, the less room for error or mis-allocation, aka "time fragmentation".


As snprbob86 already mentioned, you pre-allocate all the memory you will need, so GC never runs.


There are microcontrollers with only 128 bytes of ram and 1KB of flash. They don't have enough memory to support malloc, so there is nothing to garbage collect in the first place. There isn't enough flash to even store the garbage collector's code, even if it sits unused.


For what it's worth, Tim Sweeney appears to endorse GC for game programming in his POPL talk:

http://lambda-the-ultimate.org/node/1277


I've never done this kind of programming around GC, is manual memory management in a garbage collected language any better than manual memory management in a language without GC?

>I'm not so sure there are applications which simply cannot tolerate garbage collection.

From your post, it sounds like "tolerating garbage collection" is the same as making sure it never ever happens. The application clearly doesn't tolerate GC, the programmer tolerates the language enough to bend over backwards to avoid one of its central features.


The difference between floppies and programming languages is that as storage technologies improved, the use case for a floppy drive disappeared. (Although I think that CDs actually replaced floppies, but that's beside the point - they're on their way out, too.)

But as computer technology in general improves, the use-cases for C do not disappear. C is still needed where it's good. So unlike a floppy drive, a replacement programming language will have to compete with C's strengths.


The threat there is simply that embedded firmware is now much bigger and more capable (and more likely to have room for GC) than ever before.

I think you're spot on.


ah yes, the Zip-drive. I remember those... shudder

http://en.wikipedia.org/wiki/Zip_drive


C is probably the most firmly entrenched programming language that ever existed, and it's not going away in the near future. One big factor here is the fact that all major operating systems are written in (and have core libraries to support) C. I think in order to "replace C", you'd have to develop an OS in the other language, and that OS would have to gain widespread adoption.


While I disagree with the premise that an OS must be written with a non-C language and then become popular for that language to replace C, I absolutely think writing an OS is a litmus test for any language.

It's entirely possible for another language, any language, to become more popular than C even if the OS hosting that language's runtime is itself written in C. Hell, according to some sources [1], Java's already there.

However, for a language to replace C, to truly uproot C, I absolutely agree it must be fast and flexible enough for one to write an OS that is comparable or superior to peer OSes written in C. As I've heard from some sources, C# and the .NET runtime may be excellent examples of an environment that fell short – Longhorn reputedly was attempted in C# and MS had to fall back to C for performance and reliability reasons. [Edit: My info on Longhorn's bad, per neilc, below.]

[1] http://www.tiobe.com/index.php/content/paperinfo/tpci/index....


Longhorn reputedly was attempted in C# and MS had to fall back to C for performance and reliability reasons.

That is not the case, as far as I know (see also http://en.wikipedia.org/wiki/Development_of_Windows_Vista).

MS have written at least one OS on top of the CLR -- Singularity. Very cool stuff, albeit a research system.


Ah, good citation. I hedged my statement for a reason. :-)

Singularity is indeed fascinating, but it does remain a research OS. I hope something like it finally comes to light in a future release.


I do too, but considering the herculean task that this would be, I doubt the Windows org would ever consider it.

WP7 might have been a good breeding ground for such technology, but even that never managed to shake off its own legacy OS underpinnings.


Not exactly an OS, but to replace C the language will have to become the lingua franca for performance-critical (read: speed-critical) applications. C has wide adoption not just in operating systems but in other equally challenging areas of CS, namely databases and compilers.

Modern-day applications like webservers, middleware layers, and caching applications are all written in C (or at most C++). Not to mention, C syntax is almost everywhere, so there are many trained programmers already out there.

If Go is to see serious adoption, Google will have to do the following things: develop killer apps in Go; give the world a few things written in Go that it just can't live without; write really cool tutorials, documentation, recipes, and manuals for Go; take it to the enterprise; have people talk at every other conference on the globe; convince universities to teach it; and make sure day-to-day programmers have all the right tooling, support, and batteries to use Go every day.

Sun did a lot of this stuff to promote Java. I think there is a lot to learn from that.


Not exactly an OS, but to replace C the language will have to become the lingua franca for performance-critical (read: speed-critical) applications. C has wide adoption not just in operating systems but in other equally challenging areas of CS, namely databases and compilers.

I do not think that performance is the main reason to choose C for writing a compiler -- I'd rather say it is because of the ubiquity of C compilers on all the platforms.


Exactly. Compilers manipulate symbols, so functional languages with pattern matching are a better match than C.


Google is likely capable of creating its own operating system (if you don't already consider Chrome OS a legit operating system).


> Google is likely capable of creating its own operating system

I don't think they are capable of such a mistake ;-)

There are plenty of free, more-than-good-enough OSes to build upon. Unless Google needs something more revolutionary than Plan 9, they don't need to create another OS.

The only reason I would find sane for developing an entirely new OS would be to use all these shiny new toys we've got in the specialized processors inside our computers. It would be awesome to have a machine with lots (hundreds?) of cores of varying capabilities that could be powered up or down according to load, with processes transparently migrated between binary-compatible cores or between similar virtual machines hosted on diverse cores.

It's been a while since I have seen a cool new research OS.


In a sense, I agree, but I guess I'm applying Apple business practices to Google in my mind. If they create their own operating system, in their own language, they have essentially gained a large amount of control. Given that Google is the most popular search engine and many people want to use Google products, it wouldn't be too hard to gain the majority foothold. Google is even making a Chrome notebook!


I think it was more luck than anything else.

Apple was failing. It had an ancient and crufty OS and needed to find a new OS outside the company because it failed to develop one internally at least twice. They could go with BeOS, NeXT or MkLinux (I run MkLinux on one Mac in my collection). NeXT came with Steve Jobs bundled for free.

At that time, Apple hedged its bets by making Java development a first-class citizen. It was not clear whether Objective-C would gain any foothold, because it had failed to get traction during its NeXT years.


No doubt there are many organizations capable of building an operating system, and a few with enough influence to cause widespread adoption. Indeed, Google is a great example, as they are responsible for more than one OS: Android and Chrome OS. Both are Linux-based, and therefore C-based, OSes.


I don't think it would make sense to rewrite a Unix-like operating system in Go, or in any other language!

Google can release something else, like high-performance webservers, databases, or other web infrastructure. They are probably in as much demand as an operating system, and they make good candidates for a killer app.


I don't know that much about working on operating systems, but here's Linus Torvalds on maybe using Go in the kernel:

"Hey, I think Go picked a few good and important things to look at, but I think they called it "experimental" for a reason. I think it looks like they made a lot of reasonable choices.

But introducing a new language? It's hard. Give it a couple of decades, and see where it is then."

http://www.realworldtech.com/forums/index.cfm?action=detail&...


From Linus, that's practically a shining endorsement.


Yeah. I should have mentioned that the quote was in marked contrast to the rants about the evils of C++ he had been delivering earlier in that thread.


Is this person intentionally trying to make the C harder to read? What's with all the missing newlines? I didn't get past the first few examples, because the C was so unnecessarily difficult to read, I assumed the Go was in similarly bad style. (I don't know Go.)

Maybe there's a point in there somewhere, but bad code samples distract from it.


His formatting style is just the best way to get lost!

if(...) ...;ELSE {

}

That's the first time I've seen this, and I hope it's the last.


"We deviate from standard style for compactness."


To what end? Is vertical space at some kind of premium? I have a scroll bar.


Compactness is not a very good reason to deviate from the standard style, particularly since in Go, the standard style can in some cases have an impact on the syntax of the code.


It seemed clear and fairly idiomatic to me, though it was clearly trying to reduce the line count. For maintainable code, I'd expect more newlines and a little more separation of assignments from comparisons, but not much more.

Intuitions like these are rough though - it depends on how low in your perception stack you have the C operators etc. embedded.


Agreed. Besides the crappy formatting, he also makes his programs do confusing things. Why does the first example run in an infinite loop?


Because that's what the "yes" utility does:

"By itself, the yes command outputs 'y' or whatever is specified as an argument, followed by a newline repeatedly until stopped by the user or otherwise killed; when piped into a command, it will continue until the pipe breaks (i.e., the program completes its execution)."

http://en.wikipedia.org/wiki/Yes_(Unix)
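
The whole utility is only a few lines; a rough Go version:

    // A minimal "yes": print the argument (default "y") forever, until
    // killed or until the pipe breaks (a write to a closed pipe raises
    // SIGPIPE, which terminates the process).
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        s := "y"
        if len(os.Args) > 1 {
            s = strings.Join(os.Args[1:], " ")
        }
        for {
            fmt.Println(s)
        }
    }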


From the first line of the man page for yes:

yes - output a string repeatedly until killed


The first example is a basic implementation of the 'yes' Unix shell utility.


This is just a facet of a bigger issue: how come when you want a dynamically typed scripting language you have plenty of choice, but when you want an alternative to C/C++ (i.e. general purpose, compiled to actual machine code -- no VM, no GC), there is hardly anything? (Fortran, anyone?) If you add 'object-oriented,' you're pretty much left with C++ and, um, OCaml. Even when you want a functional programming language (still semi-esoteric in my book), you have more to choose from.


A couple of other alternatives to C and C++ are D, D2, OOC and Cyclone. In a more functional vein, BitC and Fortress. There are also a few interesting languages that compile to C, like Vala (C#-like), Cython (Python-like), Lush (Lisp-like), Chicken (Scheme-like).


I'm biased, of course, because I work on Rust full-time, but don't forget Rust: https://github.com/graydon/rust

We recently reached self-hosting status, in which we compiled Rust with itself, and over the past few weeks we've been working on making the compiler much faster (over 7x in fact).


I did actually think of Rust, but after looking at the website I got the impression it was still very alpha. I'm all for competition, it's cool to see it's in active development.


Lisp never became mainstream and if it hasn't succeeded up to now, it most likely never will. D & co. didn't make it either and things aren't going to improve for them. Great point about Cython though, and thanks for the Vala pointer, I wasn't aware of it.


Vala is really nice. I wish I did more non-web programming, I would probably use it all the time.


Lisp was pretty successful in the time before many HNers were born. It wasn't a smashing success, but it was in much better shape than it is now.


It's a little difficult to beat "ideas done right the first time". For that very same reason it will be difficult to replace Lisp (grouping all its flavors here), Perl (CPAN, power, flexibility, freedom, rapid prototyping, extensibility, help, documentation, support, regexes/text processing), C, Unix, etc.

Despite their inherent flaws they haven't gone away, no matter what attractive alternatives exist. Because they solve a certain set of problems so well, you just can't do without them.

Even tools like sed and awk have their own niches and uses, and replacing them with something else just isn't going to happen.


Pascal and related languages are there, including various flavours of Object Pascal like Delphi, and other languages like Modula-2 and Ada. There were probably a bunch of them in the late 1980s.

The thing is, for this kind of language, C/C++ won. None of the others offers any substantial advantage over C/C++, so that's what stuck.

C for systems programming and C++ for applications programming could well be the default 20 years into the future just as they were 20 years in the past.


I bet there are many statically typed, compiled languages. But they never get any traction because they are not "sexy" like the newest hyped dynamic language. There are many devs who don't care for static typing - they just want to hack their backend together in a weekend and be done. Those devs seem to be the vocal ones, with blogs and social-media power users behind them.

Look at Python. Python is over 20 years old. It has been here since before Windows 95, but it only became prominent (with the broad public) in the last few years, because some opinion leaders of the hip and young crowd found out you could make cool web apps with it and hyped it into oblivion.

I bet if Go didn't have the names behind it, it wouldn't get any more traction than any of the gazillion other programming languages that die silently. I guess not many home-brew languages get talking time at a Google conference.

Let's just be happy that Go seems to be getting a little popular - maybe it will be enough for a breakthrough. I'd love that.


Python is over 20 years old. It has been here since before Windows 95, but it only became prominent (with the broad public) in the last few years, because some opinion leaders of the hip and young crowd found out you could make cool web apps with it and hyped it into oblivion.

Hmm, I thought it became popular for scientific applications and as an anti-Perl for scripting before it became popular for web apps? I first got into Python because I needed a scripting language. People said, "If you don't like the design philosophy of Perl, then you might like Python, which made the opposite choices." The centerpiece of Python advocacy back in the day was this article by Eric Raymond: http://www.linuxjournal.com/article/3882

I know that's the article that convinced me to try Python. Everything he praised about Python screamed "opposite of Perl!" and after experiencing my mind rejecting Perl like a body rejecting a transplanted organ, I was desperate for an alternative that would make me as productive at scripting as the Perl gurus. (Yes, I'm still jealous of the one-liners.)

So there is certainly room for a language to grow and become popular without having a first-class web framework. (These days any popular language will sprout a few web frameworks, but they don't have to become popular for the language to succeed.)


Also, IMO ActiveState did a lot to make Perl & Python popular. Only in the last 10 years or so did ActivePython become really useful. In the late 90's it seemed (in my mind, at least) to play second fiddle to ActivePerl.


Python is too good a language not to have made it. Google endorsement helped a lot, but they chose it because it's great.


Not true. I remember a few years back when we were a major Perl shop. The advice from a senior guy in the hierarchy to use Python was "If Google is using it, it must be good." And I know a whole bunch of people around me in many projects used it just because of Google.

Every tool has its own merits; Python is nothing special in this regard. Python's low barrier to entry has invited many newbies to pick it up in a day or two and get going with solving problems. But it has its own flaws and isn't as great as people often project it to be.

My own experience with Python, after being a heavy Perl and Ruby user, isn't that good. Especially if you are coming from a language like Perl, Python follows an interesting trend. At first you feel very happy that things are so simple with it. Then slowly, once you learn the language and start getting deeper into it, you try to use it for day-to-day work. Suddenly you realize that text processing sucks badly, which is a very big problem in Unix-based environments. Writing a single regex takes tens of lines of exception handling. You try to use it for quick, dirty scripting, and again the language runs out of steam very soon. It's just too verbose for the scripting class.

Now coming to more important reasons. Sacrificing power for readability might be great for newbies, but it isn't good for experienced programmers. Lack of multiline lambdas, no good support for TCO, the GIL, a barely-there object system (compared to Moose), lack of extensibility, lack of syntax plugins, forcing one way of doing things - all of this is a great hindrance if one wants to climb the upper rungs of the programming ladder. And above all, the lack of anything like CPAN is more than sufficient reason by itself.

I left Perl for Python, but had to go back again. Python just wasn't serving the need for day-to-day quick scripting, which I often need to do twice or thrice a day. Text is an unavoidable part of large IT projects. Tools like Perl, sed, and awk help me do my job quickly every day.

Slowly I realized that if I'm using Perl every 50 minutes, I might as well use it for long-term projects. On the other hand, best practices matter for every project. You can write unreadable and unmaintainable code in any language.

We are back to being a Perl shop again. And it's the best decision we have made.


Without knowing anything about what you do aside from what you've written here, it seems like you've fallen into the same trap that most folks who move from Perl to Python fall into. They are used to having the Swiss Army knife of string tools with regex, and don't know what to do when handed the complete toolchest of specialized string tools that Python provides.

I do a fair amount of text parsing in Python for my job, and only very rarely import the regular expression module - because most of the time I don't need it. My code is faster and more easily read for it, as well.


Could you please explain how I don't know anything about what I do aside from what I've written? (I don't know how you can make such a comment without knowing anything about me.)

And what is this specialized toolchest of string tools that Python offers? I have clearly explained (in comments above and below) Perl's "toolchest"; now can you please explain how Python's "toolchest" beats it?

It seems languages like Python and Java are best when all your application needs is glue between one part of the architecture (webserver, UI, etc.) and another (database, XML, JSON). Or at most some extra stuff like sockets.

The problem with Python-type languages starts when your inputs and outputs are no longer standard and structured. All the examples that Python folks give of Python's superiority over Perl are in areas where the data input is structured enough: things like scientific computing and web programming (interaction with databases, XML, JSON).

There are a lot of languages which can do this. The actual challenge is in areas dealing with unstructured data, and there are very few languages which offer convenient tools for those sorts of problems.

Unfortunately, Python isn't one of them.


Did you really misunderstand your parent? Strictly speaking, "without knowing..." should refer to the subject of the main clause, but it is clear that he (your parent) intended himself to be understood as the potentially ignorant person.


What non-re modules should I look at for text processing in Python?


Incidentally, I'm working on a language comparison which includes Perl and Python and I would be interested to see how Perl string handling is superior. Did you mean speed or syntax or both? I always thought that the Perl way is quite friendly if you have Unix background, but since Python does everything by function calls it's also fine.

As for exception handling, I guess it'll always be shorter in Perl since it doesn't really support exceptions...


Perl has exception handling; it's only that it's not available via the usual try-catch syntax. If you want more sugary exception-handling syntax there is always Try::Tiny ( http://search.cpan.org/~doy/Try-Tiny-0.09/lib/Try/Tiny.pm).

Another advantage Perl has over Python in the case of parsing is that regular expressions are first-class elements in Perl, so they can be passed around like objects. Combine them with more powerful things like given-when (http://perldoc.perl.org/perlsyn.html#Switch-statements) and parsing becomes a lot easier compared to Python. Smart matching is a great feature. And all this obviously leads to better performance in terms of speed, because less code to process means (to an extent) faster code, and it is better syntactically as well.

Imagine doing such a thing in Python. There is no smart-match operator in the first place, there is no switch statement, and on top of that you are dealing with innumerable regular-expression functions (compile, match, search) wrapped in try-catch statements inside if-else chains. Such code in Python won't be easy to read, and the solution isn't more elegant than Perl's either.

Perl turns out to be better in all ways, the code is readable, concise, faster and powerful.


Yeah, it's through eval() which is a questionable design decision at best. Regular expressions can be passed as objects in Python, too. Three functions is hardly 'innumerable'.

"Perl turns out to be better in all ways, the code is readable, concise, faster and powerful."

To each his own, I guess.


Perl's block eval is not string eval. Quoting from 'perldoc -f eval':

  In the second form, the code within the BLOCK is parsed only
  once--at the same time the code surrounding the "eval" itself
  was parsed--and executed within the context of the current Perl
  program.  This form is typically used to trap exceptions more
  efficiently than the first (see below), while also providing
  the benefit of checking the code within BLOCK at compile time.
The only (significant) questionable decision there is the name; its functionality is the same as 'try' in other languages. There are some issues, however, with properly dealing with failures in an eval block. They're fairly obscure, and won't often come up, but they're explained in the "Background" section of the Try::Tiny docs: http://search.cpan.org/~doy/Try-Tiny-0.09/lib/Try/Tiny.pm#BA...

Try::Tiny is a minimal module with no dependencies that does the least amount necessary to get 'try' and 'catch' keywords into the language, connecting handler blocks to guarded blocks. There are some problems with this, like trying to use loop control or return statements inside of a catch block. The normal way to get good error handling that deals with all of the potential issues without introducing others is TryCatch, which also adds extra features, like Moose type constraints on multiple catch blocks: http://search.cpan.org/~ash/TryCatch-1.003000/lib/TryCatch.p...


I don't think you got my point about regular expressions in Python. If it's only one regular expression, then it's 3 functions. But if, say, you have 20 regular expressions, now you have 60 function calls. Say you have to add 6 statements of exception handling to each of these: you now have 360 statements. Imagine this inside a giant if-else structure.

Without a doubt, the Perl solution is a lot cleaner than the Python solution.


If you want to match a regexp in Python, you can just say:

m = re.match("some_regexp", my_str)

You don't need to compile regexps before use but you get better performance if you do when matching the same thing multiple times (no idea if there's a Perl equivalent). Also match() and search() have different semantics and you don't use them at the same time. So it's one statement in Python per match.

Also how do you propose to get an equivalent of (non-mandatory) exception handling in Perl code without extra statements?


Perl's regexes are part of the syntax of the language. They're compiled when the script is. That's why the Perl syntax is cleaner and the implementation more robust. A big table of regexes to parse fields out of a record or whatnot looks natural (i.e. declarative) in Perl; in Python it's kind of a big, imperative mess.

Perl exception handling is quirky, but pretty clean. You raise an exception with die() and "try" a block with a simple eval. It's basically the same amount of syntax. The one bit perl lacks as a builtin is the type-based exception matching; you have to do that logic yourself.


[Perl 5 exception handling works] through eval() which is a questionable design decision at best.

eval() and eval {} are very different constructs in Perl 5. The questionable design decision there is reusing the keyword.


When I think "string handling" in a Perl-vs-Python sense, I think "regex syntax". There are 3 distinct tiers: Perl and Ruby's built-in syntax, PHP's preg_* functions, and finally the Python (and perhaps Java?) Extreme OO style. (In descending order by my preference.)

Python's approach also eliminates the pattern of `if (preg_match(...)) { use captured groups; }` since the match object comes back as a return value instead of either a reference or magic variables, and assignment of that value is illegal inside the if's condition. Very Pythonic, but adds an extra line of code to assign the match separately from testing the result.

(Edited a bunch because I fail at non-markdown.)


When working with regular expressions in Perl, the RegularExpressions::ProhibitCaptureWithoutTest perlcritic policy (default severity 3) helps avoid some common mistakes.

http://search.cpan.org/~elliotjs/Perl-Critic-1.115/lib/Perl/...


I would really suggest you actually learn and use Perl (all of it) before trying to do such a hard task. Too often people just compare a language to their favorite without bothering enough with the other language.


I know it's a peril, you tend to know the thing you like best. I'm trying to address it by asking a guy who teaches the Perl course at my uni and other fanboys ;)


We used Go to re-write our cluster management system. In 2,000 lines, we have master and slave daemons that launch programs, exchange files, and forward output from running programs; "launcher" functionality in the same codebase allows users to start jobs and check the status of the system. Switching to Go made everything significantly easier.


I'm wondering why you didn't use a scripting language like Python?


1. We wanted to try out Go

2. We wanted something that would give us a single, statically linked binary that we could easily push out to the nodes. Does Python allow that?

3. Go's goroutines and channels fit beautifully with what we were trying to do.
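
To give a flavor of point 3, here is a stripped-down sketch of the pattern (not our actual code): one goroutine per launched program, forwarding its output lines over a channel to whoever is collecting them:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    // forward launches a program and sends each line of its stdout on out.
    func forward(name string, args []string, out chan<- string) {
        defer close(out)
        cmd := exec.Command(name, args...)
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            out <- "error: " + err.Error()
            return
        }
        if err := cmd.Start(); err != nil {
            out <- "error: " + err.Error()
            return
        }
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            out <- scanner.Text()
        }
        cmd.Wait()
    }

    func main() {
        out := make(chan string)
        go forward("uname", []string{"-a"}, out)
        for line := range out {
            fmt.Println(line) // a master daemon would relay this instead
        }
    }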


I see too many other languages beating Go at its strengths. The only one that survives is its fast compilation, but the others are not so slow as to make that a deciding factor.

If you happen to like/need its particular combination of strengths it may be a good choice, but I think much better can be done. It's a good experiment, but I think they're taking it too seriously in some regards and not seriously enough in others. I just don't see it lasting.


I want a simple language that compiles to native code, has easy concurrency support, is garbage collected, allows me to distinguish between values and references, and has a compiler/standard library that is usable for real tasks. What other language better fills that?


That depends. I don't have a good answer because I don't personally think Go qualifies. Rhetorically: is that really all you want out of it? Does Go really do it well enough in your opinion?

The thing about compiling to native code is interesting. Why is that important? And are you excluding VM languages which nevertheless produce system binaries in that?


This seems like a case of premature optimization. C rocks at what it does. The billion other languages are pointing out and improving upon the stuff C does badly, but they don't replace C (a language I've nearly forgotten, but respect). I RTFA (and its subpages) and didn't see anything mindblowing.

The payoff of C->Go comes if you have 1M LOC of C. 900k LOC of Go is much better. For most of us, just-getting-it-to-run is awesome and that's why Ruby, Python, Lisp and Clojure dominate.

The best part of the post is that (until later) I didn't know that the "ooh-wow" bits on http://www-cs-students.stanford.edu/~blynn/c2go/ch04.html#_f... were Go and were longer than their C equivalents. Sure, the C bits look scary to a noob; they look perfectly sane to a journeyman.


How can a language with garbage collection replace C?


Like you, I obviously don't grok Go. Is the goal to build a modern general purpose language? Or is it to build a better C?

I'm not sure how it can succeed at either when it borrows so much from C (a language which is at least a couple of generations dated when it comes to general-purpose languages) while adding garbage collection.


The purpose of Go isn't to replace C at what C does best, like being wicked fast and having very direct access to the hardware. It's meant to replace programs that Google used to write in C++, like web servers, data aggregators, etc. Go was designed by Googlers for Googlers, and that's very obvious from both the design and standard libraries.


The point of the article is that the author doesn't like programming languages like C++ and Java and has therefore stuck with C. The question the author poses is whether Go can replace C for him.

In general, Go isn't meant to replace C but to replace C++ and Java.

Regarding garbage collection, Go has the ability to use unmanaged memory, although I haven't had occasion to try that, so I don't know how good it actually is.
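
For what it's worth, the bluntest way to get memory the collector ignores is to mmap it yourself - a Unix-only sketch, not an official language feature:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Memory obtained straight from the OS: the Go collector neither
        // scans nor frees it, so it is effectively unmanaged. (Caveat:
        // don't store Go pointers in here; the GC can't see them.)
        mem, err := syscall.Mmap(-1, 0, 1<<20,
            syscall.PROT_READ|syscall.PROT_WRITE,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        if err != nil {
            panic(err)
        }
        defer syscall.Munmap(mem)

        mem[0] = 42
        fmt.Println(mem[0])
    }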


Go is meant to replace C++ and Java for systems programming. They have no designs on implementing Go to do UI programming, for example, which is where C++ and Java are used more than anywhere else.


True, but I don't know about the "more than anywhere else" part.


There is a lot to be gained by designing a language that is easier to use than C++ while being at least as fast, and that's what Google is probably betting on. GC saves a lot of programmer time, and they don't care about tiny embedded systems not being able to handle it. The examples of D and other alternatives only show that you need a company behind a language to make anyone use it.

On the other hand, C can't be improved much without compromising the 'as much speed as possible but handle with care' philosophy (like introducing GC while sacrificing a bit of speed).


If the title was "Could Go be used instead of C for a great many applications?" then the author would stand a chance at making a point. However, asking whether Go can "replace" C shows a deep misunderstanding of why people still use C: to do the type of stuff that a safe, typed and garbage collected language won't let you. Need to poke hardware? Need to use efficient memory layouts for your data structures that confound non-dependent type systems? Need exact control over generated code to achieve competitive performance? (And, apparently for Go's case) need to handle OOM conditions? (The "some systems don't even have malloc() return NULL on OOM" argument was particularly weak.) True, these things can be handled by C/C++ and called by Go through a FFI, but then C hasn't really been "replaced".

Really, the title should be "Can simple C programs be written in Go in about the same number of lines?" because that's all that was demonstrated.


-1 to Automatic semicolon insertion (seriously).


Yup. I stopped following Go when that happened. Not intentionally, but adding yet more things (invisible ones, no less) that I need to be wary of is not what I consider convenience in a syntax. Other, more awkward languages became less inconvenient to use, and I just stopped using Go.

I probably would have kept at it a bit longer if the dev team were more accepting of feedback presented in good faith. I'm not really surprised -- it's often hard to tell the difference between fruitless bikeshedding and "oh guys, FYI, you might consider this a problem; I do."

I am biased towards semis though, when I first saw OCaml's ';;' I was pleased.


I mostly stopped too. I still read the mailing list occasionally, but haven't bothered to download any snapshots.


Try as I might, I can't convince myself that semicolon insertion is something that I care about. Maybe I already have the One True Format trained into me, but I don't think I've ever had to think about or know the rules for semi-colon insertion. All I know is that I don't need to end things with ";" unless I'm trying to cram multiple things on a line, which I never am.

That said, I remember seeing something about a way that semi-colon insertion leads to the possibility of a misleading "if", but I can't find it anymore.


It's a lot more systematic than in JavaScript, rather similar to Python's semicoloning rules.


Agreed. It achieves a marginal increase in readability at the cost of all kinds of screwy implicit behavior. I like languages with dumb syntax that editors and IDEs can understand.


Go's syntax is one of the 'dumbest' around; it is extremely easy to parse compared to pretty much any other mainstream language (except Scheme, I guess).

All the bitching about ';' comes from people who clearly have not used Go or tried to parse it.


    As a rule, never start a new line with an opening brace; 
    it belongs with the previous line.
I always use the "one true brace style", so that wouldn't be a problem for me personally; however, I feel the pain of people who use BSD style. Using a consistent indentation and brace style across all the languages you're working with, be it C, Java, JavaScript, Perl, or PHP, is a must, and that Go idiosyncrasy seems really wrong to me; this kind of detail may hamper Go's acceptance quite a lot.
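
For instance, the BSD placement doesn't just look different in Go, it doesn't compile (a sketch of the rule in action):

    package main

    import "fmt"

    func main() {
        x := 1

        // OK: the brace sits on the same line as the condition.
        if x > 0 {
            fmt.Println("positive")
        }

        // Won't compile: the lexer inserts a semicolon after "x > 0",
        // so the parser sees "if x > 0;" with no body.
        //
        // if x > 0
        // {
        //     fmt.Println("positive")
        // }
    }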


I'm pretty sure you'd get used to it very fast. I think it's a good idea to enforce one style from the start: it nips all the stupid arguments in the bud and lets people focus on the important stuff.


Actually, BSD style causes problems in JavaScript due to semicolon insertion.

http://robertnyman.com/2008/10/16/beware-of-javascript-semic...


This whole topic is another area where Go is a godsend: gofmt has ended all the silly arguments over formatting style.


Really? Systems programming with a garbage collected language? I'm sure the kernel developers will love that.


Why not have a systems language with garbage collection?

From http://research.microsoft.com/pubs/52716/tr-2005-135.pdf:

  3.4  Garbage Collection
  Garbage collection is an essential component of most safe languages,
  as it prevents memory deallocation errors that can subvert safety
  guarantees. In Singularity, the kernel and processes' object spaces
  are garbage collected.
I appreciate that Microsoft Research is saying "why not?" rather than "why?". Singularity is a fascinating OS and I hope it, or some derivative or something like it from someone else, eventually reaches the mainstream market.


BitC, a project started by Jonathan Shapiro (of Coyotos project fame) and briefly associated with Microsoft's Midori project, is a systems programming language that supports garbage collection but also allows writing critical sections of code in which garbage collection is not permitted to take place.

http://www.bitc-lang.org/docs/bitc/bitc-origins.html


Surely there have been kernels written with garbage collection -- the Smalltalk Xerox systems or the Lisp machines come to mind.


In a talk/interview on Go (on InfoQ maybe?), Rob Pike explained that by "systems programming" they don't mean OS development (at least not for now). GC technology will improve, so sooner or later it won't be an issue.


Garbage collection needs to go into the kernel. Providing fast, fine-grained control over virtual memory mappings gets you a read barrier on any system with virtual memory, which lets you build real-time GCs. Azul's patches to Linux show how this can be done:

http://lambda-the-ultimate.org/node/4165

They're using VT-x for virtualized page tables and still run the GC in userspace, but if you move the GC into the kernel you won't even need virtualized page tables.
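
The basic trick, sketched with plain mmap/mprotect on Linux (a real collector would also install a fault handler to fix up stale references):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // A page the collector is about to move objects out of.
        page, err := syscall.Mmap(-1, 0, 4096,
            syscall.PROT_READ|syscall.PROT_WRITE,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        if err != nil {
            panic(err)
        }
        page[0] = 42

        // Revoke all access: any mutator load from this page now
        // faults, which is the read barrier. The fault handler would
        // update the stale reference and resume; doing the protection
        // changes in the kernel makes that cycle much cheaper.
        if err := syscall.Mprotect(page, syscall.PROT_NONE); err != nil {
            panic(err)
        }
        fmt.Println("page protected; loads now trap")
    }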


I don't see a lot of buzz for Go, or at least not nearly enough to make me even ponder "Could Go replace C?".


Related: simple Unix tools written in Haskell. Most of them are just one-liners. Everything is linked into a single executable.

http://www.haskell.org/haskellwiki/Simple_unix_tools


I'm surprised nobody has pointed out another major reason Go will not replace C: embedded programming. GC eats valuable kilobytes on hardware that often doesn't have any to spare.


It's all about trade-offs. GC does eat more memory, but with the proliferation of reasonably sized chunks of RAM on ARM embedded systems, for example, it's of little concern. Not only that, you reduce the time to market and the likelihood of your code falling over due to pointer and memory problems. That is probably preferred (in my experience) when your product is in situ and has no form of upgrade possible other than shipping an engineer out to do it.


The cheapest ARM chips are still total overkill for most retail embedded electronic devices and many times more expensive per unit.

The cheapest ATtiny AVR chip is $0.60/ea in bulk. The cheapest ARM chip, like an AT91, is about $4.00/ea in bulk. Ship a few million units and suddenly the cost difference between developing in C and Go is a pittance (at a $3.40/unit delta, two million units means $6.8M in extra silicon).


I'm no CS guru, but just reading the Go code makes me long for the C examples. Thankfully they are not far up the page. Aaahhh...


Serious question. Does Go really have anything unique going for it, apart from the fact that it comes from Google?

I have yet to see a convincing argument for why I should choose Google's little (and relatively immature) language over anything else, and I'm very close to concluding this thing is pure hype with nothing actually newsworthy. Of course, as always, I'd love to be proved wrong.


It occupies a sweet spot on at least two axes:

1. Procedural -- OO (with its interface subtyping resembling Haskell type classes with default instances for every type)

2. Languages without safe automatic memory management -- languages with an explicit VM

Occupying sweet spots is usually at odds with having radically unique features.


How many unique features does Python have?

Unique features in programming languages are rare and often unwelcome (cf. Perl's references).


To be fair, the poster didn't ask what features it had that made it unique, he asked if it had "anything" going for it that made it unique.


A serious question would be "Does Go have anything unique going for it, and why should I choose it?" -- without the insults.


No semicolons at the end of lines. For me, that's a killer feature ;-)

Now, seriously, it has some nice constructs like goroutines and channels. Go check http://golang.org/doc/effective_go.html
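
The flavor of it in a few lines:

    package main

    import "fmt"

    func main() {
        ch := make(chan string)

        // Goroutines are cheap; the runtime multiplexes them onto
        // OS threads.
        go func() {
            ch <- "hello from a goroutine"
        }()

        // The receive blocks until the send above happens, so no
        // locks or condition variables are needed here.
        fmt.Println(<-ch)
    }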


+1 on the goroutines and channels. Go is best used for creating scalable, highly concurrent, performant network servers. That's its sweet spot & the reason why Google can justify the R&D investment in it. It's a better language for building key pieces of their infrastructure.

The Heroku guys have seen this & used Go to build their own clone of Google's Chubby service called Doozer.


Yep, I now often forget to put semicolons when writing JavaScript or C :-) I was rephrasing the parent question.


There are a couple of things unique to Go (interfaces, for example, while similar to things in other languages, are still quite unique).
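
A quick sketch of what feels different about them: satisfaction is implicit and structural, with no "implements" declaration anywhere:

    package main

    import "fmt"

    type Describer interface {
        Describe() string
    }

    type Point struct{ X, Y int }

    // Point satisfies Describer simply by having the method;
    // the compiler checks this at the assignment below.
    func (p Point) Describe() string {
        return fmt.Sprintf("(%d, %d)", p.X, p.Y)
    }

    func main() {
        var d Describer = Point{1, 2}
        fmt.Println(d.Describe())
    }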

But that is not (and should not be) the point of the language; it is not about how many features it has, but about a very careful selection of features and how they interact with each other.

Or as others have said ( http://go-lang.cat-v.org/quotes ) "Go is not meant to innovate programming theory. It’s meant to innovate programming practice."


It's as easy and as expressive as Python, nearly as fast as C, and less complicated than both. That's good enough in my book.


nearly as fast as C

Not quite; using comparisons from The Computer Language Benchmarks Game [http://shootout.alioth.debian.org/u32/which-programming-lang...], Go is currently (in general) slower than C++ (GNU g++), C (GNU gcc), Java 6 -server, Haskell (GHC), and C# (Mono), just to name a few. Sure, it's not a definitive view, but it's better than code that was never run and compared.


For you, is 2.12 seconds nearly as fast as 1.77 seconds?

For 5 of those 10 tasks, the measurements on x64 show a C program less than twice as fast as a Go program.

http://shootout.alioth.debian.org/u64q/compare.php?lang=go

Could it be that those C programs were more highly optimised by the C programmers?

http://shootout.alioth.debian.org/u64q/program.php?test=spec...

http://shootout.alioth.debian.org/u64q/program.php?test=spec...


Go's compilers are still very young and do almost no optimizations.

Also, the 32-bit compilers are especially bad, as they are not really used by most of the Go developers; the 64-bit compilers do much better, as you can see here:

http://shootout.alioth.debian.org/u64/which-programming-lang...


There is also gccgo, which uses (and profits from) the gcc backend.


Is "closer in performance to C than to Python" more acceptable?


It's not "as easy" as Python and "expressive" is hard to measure.


It's all down to opinion. This is all IMHO.

I've found Python to be quite obtuse at times, particularly with the notion that it's sometimes functional and sometimes not. (Consider generators being promoted and map being demoted by Guido for not being 'pythonic'.)

Expressiveness is hard to measure I agree, but I find enough utility in Go to perform the same amount of work in a roughly equivalent number of keypresses. That's my metric.


Really? Replace C? Really? http://golang.org/src/


Eww, single-character variable and function names make me ill.


Go is designed more as a language for the web. It has a special niche in server programming, with easy memory management and concurrency.



