Hacker News new | comments | show | ask | jobs | submit login
Another go at the Next Big Language (cheney.net)
117 points by iand 1256 days ago | past | web | 121 comments



Here's the core problem with next-generation languages. Languages that come out of academia focus too much on syntax and computer-science-level functionality, and it's extremely rare for a language of that sort to make it in the real world. The languages we use today either come from big companies, with the resources to promote a language that is at least "good" long enough for it to gain traction, or they come from the "streets": extremely small teams who create deeply flawed languages that are eminently practical and go on to rule the world. Perl, Ruby, JavaScript, PHP.

It's "worse is better" again in spades. Ivory tower language designers try to come up with perfection when what we really need is to improve on the basics.

The next big language is probably not going to be something like Haskell (as nice as all that functional purity is); it'll be something that builds profiling, unit testing, and better source-control support right into the language, compiler, and tools.

Edit: if you look at where the average developer spends most of their time, and especially where the majority of the pain is, it's typically in things like testing, debugging, performance profiling and optimization, and deployment. And if you look out in the field you'll see lots and lots of awesome tools and systems helping people tackle those problems. But it's exceedingly rare to see a new language that approaches those problems or tries to codify those tools into first-class language features.

-----


Go was created in part by Ken Thompson, and is backed by Google. In other words, the complete opposite of what you are talking about.

Further, anyone who uses the phrase "ivory tower" immediately identifies themselves as incredibly biased and political. There are politics in science, to be sure, but overall it's frowned upon.

Understand first principles. Understand your needs and the needs of others. Build something based on that. If you continually follow what is popular you will continue to be below average. By definition, popular is average.

-----


Languages that come out of academia don't really focus on syntax much (unless you count research into macros for Racket/Template Haskell/etc). If anything, the lack of attention paid to syntax tends to hurt research languages' adoption.

-----


Probability that pcwalton turns up in a Go thread and the conversation turns to Rust? 1

-----


I like to reply to Rust comments, for obvious reasons. Rust usually gets brought up around here in Go threads, unfortunately. I try not to do language advocacy -- like I said, Go is a great language, and it's in a different space from Rust and the languages are not competing -- but if I've crossed the line somewhere I apologize.

To avoid derailing, I'll try to refrain from commenting on anything not immediately related to Rust in other languages' threads in the future.

-----


Please don't (refrain).

It's not as though ANY and ALL HN conversations don't get derailed in some sub-thread or another.

Plus, it is insightful to hear comments from a language designer/implementor in language-related threads.

-----


Here's the core problem with next generation languages. Languages that come out of academia focus too much on syntax and computer science level functionality

Is that really a problem, or do they simply have a different goal to the languages you want to use?

Plenty of language concepts that enter industrial programming were born in academia and went through the mill in so-called academic languages long before they found their way into mainstream tools. Just look at all the ideas from the functional programming world that have become almost universal in recent years.

That doesn’t necessarily mean that the academic languages where these ideas matured are themselves good tools for industrial applications. To be successful in industry, a language needs a lot more going for it: a good set of developer tools and a critical mass of users, for a start. This often creates a chicken-and-egg situation that can sink a new language regardless of its potential or technical merit, particularly if the approach to programming is very different to what most practitioners are used to at the time.

Anyway, I think you’re being rather unfair with the “ivory tower” characterisation. If you look up Simon Peyton-Jones’s comments on “programming language nirvana”, for example, it’s pretty obvious that he understands these issues and the roles of different kinds of language just fine.

-----


The main problem with academic languages is that they improve one or two aspects and neglect the rest. Real-world languages must improve one or two aspects without hurting the rest too badly.

"The rest" means, for example: debugging, IDE support, multi-platform support, the standard library, performance, and deployment.

-----


My language research focuses on debugging, IDEs, and the standard library; I don't bother with multi-platform support, performance, and deployment, but I know others who do. Really, we are just individuals; we are not out to create the NBL. We are out to push things forward and create well-thought-out ideas that could be included in the next NBL, and we realize that most of our ideas will fail to make it big. But perhaps some of them will survive and have an impact (such is the depressing life of an academic programming-language design researcher).

-----


What would be really cool is a language where I just write the tests and the compiler writes the actual code. That would be awesome.

Something like what Critticall attempted ten years ago: http://www.critticall.com/ The idea there was that an evolutionary algorithm wrote the core code, you just had to give it an environment and some way of knowing the results were still okay.

-----


As someone who has tried Genetic Programming I can tell you that we're quite far from this becoming a reality. However, maybe there is something to that concept: I can imagine that it's possible to generate a good part of the usual boilerplate (interfaces, function parameters, data initialization, library initializations) knowing the tests. The programmer would still have to choose the names of inner functions and variables and then do the actual implementation. You do not want to leave the implementation itself to an AI, at least not any AI we know right now.

Although, maybe a Watson-like AI fed with knowledge from stack overflow and official code examples might change that in the future ;).

-----


> it'll be something that builds profiling and unit testing and better source control support right into the language, compiler, and tools.

I think Go does an excellent job here. The go tool comes with a good profiler, is version-control aware, and works with the `testing` package to make unit testing and benchmarking pretty easy.

-----


There is still something to be said for languages that try to reduce the amount of time you spend testing, debugging, profiling, and optimizing: languages whose goal is to help you get as much of the concept intact into the computer with as little effort as possible. The end goal, of course, is AI.

-----


D has profiling and unit testing built in (also coverage analysis and documentation generation), but I'm curious how you'd see source control built in.

-----


My kingdom for someone who can figure out how to solve error handling. My code consists of some reasonably straightforward sequence of actions with a random smattering of error handling significantly distracting from that.

That error handling code is tedious to write, very time-consuming to test (and often virtually impossible to test), and usually not run very often. Exceptions at least let you put the handling code somewhere other than the normal sequence (although you may still have some finallys), continuations are quite nice, and the Go/C model pollutes the code but puts the error handling right next to the error detection.

None of these really solve the problem though. How can I have the least amount of error handling code possible, how can I test it, and how can I be sure it is correct, and all while spending my mental efforts on the code that actually does useful things?

-----


You and I sound like we may be cut from similar cloth. I find Go's approach to error handling -- which is not to do much about the problem at all -- better than pervasive use of exceptions but worse than some other possible mechanisms (and on this we may disagree).

I'm not saying the language Rust has got it right, but perhaps some of the thoughts on its mailing list may be useful. Somewhat scarily, my mailing-list post -- which only asks how to handle errors -- still ranks on top for "rust error handling" on Google:

https://mail.mozilla.org/pipermail/rust-dev/2012-March/00145...

Here I mention the Common Lisp condition system, which has been my favorite so far, but one of the principal authors of Rust mentions a somewhat similar but lighter-weight alternative.

-----


I actually prefer exceptions to Go's approach, mainly because most of my code is structured with the best place to handle errors being higher up the call chain. Exceptions let me do that for free, whereas Go requires boilerplate code all over the call chain.

BTW your posting left out that close() can also fail (eg deferred writes run out of disk space).

And note that none of these really help with testing. In your 'cat' example, I'd want to test open failing, read failing, write failing and close failing. At a certain level the combination of language and libraries knows these can fail. Ideally I would like to write zero lines of code to exercise those cases. In practice I usually find that my test code is larger than the code being tested, and I usually have to add all sorts of instrumentation to force various failure modes, which again makes my original code even longer and more obfuscated.

-----


Go's panic/recover pattern lets you handle errors in the place you feel is best, much like exceptions. Though it is true that you would still need to check the returned error, and panic if it is not nil.

However, there are a few libraries that offer two versions of the same function: one that returns the value-and-error combo, and one that panics on error and returns only the value (regexp's Compile and MustCompile, for example). When it makes sense, it's nice to have this option.

Anyway, error handling is hard. I'm not a fan of exceptions, but alternatives such as Node's error-as-first-argument-of-a-callback and Go's second return value pollute the normal code path. I do like Go's "defer" as a "finally", though.

-----


We're interested in adding support for condition handlers. Arbitrary stack unwinding and/or first-class continuations aren't likely due to runtime/toolchain limitations, but the "register a function to be called on error, which might direct the call to be restarted" style is intended to be supported.

-----


Erlang has a strong philosophy on this -- let your process (a lightweight process inside the Erlang VM, not the entire VM, obviously) crash rather than pollute your code with needless, endless crap... dozens of try/catches just to log that an error happened... that is insane.

Death to defensive programming, which is a massive, endless black hole to throw developer resources down!

Due to the supervisor/worker model of Erlang -- if you don't know how to explicitly handle an error -- YOU DON'T!

You simply let the process crash! Be tight with your pattern matching (you can think of matches as assertions) and let Erlang do its thing.

    {expected_thing, 55, SomeVarToCapture} = function_call(..)

If the function doesn't return something matching {expected_thing, 55, ...}, it blows up -- it crashes... and this is fine, because in most cases you don't know how to fix that problem anyway!

But it doesn't have to crash: Erlang has try/catch when you want it -- for those cases when you CAN handle an error and do know what to do to fix it... which is the point -- when you can HANDLE the error, you do... else let it go SPLAT.

-----


Exception handling exists in the problem domain, not just the solution domain.

-----


My kingdom for someone who can figure out how to solve error handling.

You don’t really seem to want someone who can solve error handling. You seem to want someone who can make it disappear, or at least get as close as possible to that ideal. I feel your pain, but I fear you are asking the impossible. :-(

Some errors are recoverable, and maybe it would be helpful to have default policies like “if the file is read only, have the OS prompt the user to make it writeable and retry” to take care of a lot of the repetitive coding work in those cases. At least then we wouldn’t have to do things like writing a top-level file-handling operation in a loop that tries (in the exception handling sense) to do something, and if any recoverable error occurs, attempts a suitable recovery and goes back to try again. On the other hand, to have some sort of default policy, you’d need something in the semantics of your language to indicate which policies should apply and where. By that point, you might not be doing much better than a try-catch in a loop.

In the end, though, not all errors are recoverable, and that is the fundamental problem. Some errors are fatal, and you just can't carry on as normal or rely on some automatically generated default behaviour, because the semantics of your program are now broken. What would you want to see that you don’t have today in terms of minimising this kind of error handling code or making it easier to test?

I don’t know what kind of testing you have in mind, but one thing I’d really like to see going mainstream is a serious effect system built into the type system of the language. That would be useful for a lot of reasons, one of which might be checking that whatever error handling constructs were available didn’t do things like leaving effects half-applied, transactions open, or resources locked, in the event of a change of plan because of an error condition. I expect some relatively near-future languages will start to pay more attention to formal effect systems as fields like distributed systems, concurrency and security become ever more important in the programming mainstream. But this is a topic far bigger than just error handling...

-----


I agree that zero lines of error handling is the perfect solution. Even getting error handling and testing to be a trivial percent of the actual useful code would be a major improvement. At the moment my testing code is around the same size as the useful code plus error handling, and error handling is around 10% to 50% of the code.

I think part of the problem is that error reporting is still somewhat shaky. To a certain extent you only actually need to handle errors that occur in the real world. Some languages like Java and Python can give you a reasonable amount of information when an exception occurs, but you still have to find mechanisms to report it back, and chances are you won't have detailed program state or the operations leading up to the error. But if you did, you could in theory release programs with no error handling as alpha releases (in the original sense, not the current marketing usage) and see what shakes out.

> I don’t know what kind of testing you have in mind

Using fdr's example of a cat program, it consists of an open, a sequence of read followed by write and a close. I would like to point the language/environment at that and say "test it" with no further effort. Currently I have to write far more code to test, and then add in instrumentation to force various failures (eg make close return an error). The test code in this example would take considerably longer to develop and could itself have bugs!

-----


Using fdr's example of a cat program, it consists of an open, a sequence of read followed by write and a close. I would like to point the language/environment at that and say "test it" with no further effort.

OK, so I suppose the next question becomes what “test it” means.

Perhaps you want your test system to make some kinds of automatic deductions about the intended behaviour of your code and then to verify that the actual behaviour matches? If that is the case, what kinds of deductions would you want to be handled automatically for you?

Or maybe you’re thinking of some sort of automatic simulation of possible failure cases when you get to the open/read/write/close operations, serving a similar role to things like mocks and stubs in unit testing?

Some of these are the kinds of problems I’m hoping a good effect system will help with. If you know that the operations of opening, reading, writing and closing files must occur only in certain orders, you can automatically detect violations of those rules. And if you know all possible types of cause and effect that can be relevant within each part of your code — that is, you can identify all stateful/impure/externally observable results — then you can identify any of them that aren’t handled according to some set of rules. As long as you can figure out what you want those rules to be, that is...

-----


> OK, so I suppose the next question becomes what “test it” means.

It would be something similar to all possible code paths being executed with the program in all possible states. Heck right now I'd settle for even a small subset of this.

> Or maybe you’re thinking of some sort of automatic simulation of possible failure cases when you get to the open/read/write/close operations, serving a similar roles to things like mocks and stubs in unit testing?

That is closest to it. It is already known that open/read/write/close can fail. They are quite difficult to force to fail, so currently you have to write a lot more code to make that happen. And it gets really tedious once you look at all the combinations: open has to fail; open has to succeed and then read fails; open succeeds and the 3rd read fails; and on and on. This is for a single ~5-line function! And then if my cat() is made available in a library, how do its callers test for it failing?

I'm quite happy using supervisory programs to detect rule violations. For example valgrind does an excellent job for memory allocation and usage, and I've never used helgrind but assume it works well too.

-----


It is already known that open/read/write/close can fail. They are quite difficult to force to fail, so currently you have to write a lot more code to make that happen. And it gets really tedious once you look at all the combinations. eg open has to fail, open has to succeed and then read fails, open succeeds and the 3rd read fails and on and on.

OK, I’m following you so far.

Assuming that

(a) our language lets us specify the possible failure modes for each function, and

(b) we have a test tool that can systematically simulate various possible combinations of success and failure,

what would you want to do in each test case?

Put another way, when we run our test tool and our magic replacement I/O functions simulate, say, success on the open and first two reads but then failing on the third read, what happens next? What’s the result we’re looking for to determine whether the test passes or fails?

-----


At the moment the error handling code for cat would be looking at failures for all 4 functions. Unless they fail that error code doesn't even get run.

My test code currently has to put the program into a state where cat can be called, and then has to mock/augment each failure point, and then check the results. Just having the middle piece (mock/augment) done automagically would be a massive help. And then of course cat itself can fail so each of its callers needs a way of testing too. Some hand wavy combination of convention over configuration, documentation and annotations would likely help.

Even the state issue should be automatable in many circumstances. For example you could make a successful run of the program, and then the magic records state at the entrance to code blocks. It can cause each failure circumstance, rewind state to known good and cause the next failure circumstance etc.

I'd even be happy running my code (with no error handling) under some tool that induces the errors - when they aren't handled it asks me what I want to do which typically involves writing code to handle the issue, which it then directly integrates and keeps running until the next error.

Even test pass/fail can be somewhat automatable. The tool records what happens, and then in the future alerts you when there is a difference in behaviour. The response is either that the issue needs to be fixed, or that the new behaviour is correct.

-----


The Erlang process model seems like the closest thing to a panacea here. It forces you to write robust software automatically. I haven't used that approach enough to know if it gets you the whole way to shipped, though.

(Note: Not specifically advocating Erlang The Language, just its approach to isolation and error recovery)

-----


Rust uses the Erlang model. Generally failing the task on error has worked well in practice, although there are times when we have to use result types and pattern matching (basically error codes, but more reliable since the compiler forces you to use them correctly). The vast majority of errors are handled either locally or by failing the task.

-----


> Note: Not specifically advocating Erlang The Language,

Why not? If you have a panacea for robust error handling (they call it fault tolerance), coupled with automagic cross-CPU and (with some work) cross-machine scaling, why not advocate the language?

Anyone else will just be re-implementing that in an incomplete way. So you can have, say, an OS process play the role of an Erlang process. That's one way. But now you need to implement a supervision hierarchy. OK, so now we're talking about SIGCHLD, pids, and which signal to use to kill the process (SIGTERM or SIGKILL). Then of course these processes have to talk to each other, so you must pick a distributed messaging system. Then you need to pick a restart strategy (the process failed, but we restarted it, and it keeps failing and we keep restarting, so perhaps that in and of itself is a failure and a process higher in the chain should be restarted...). Stuff like that. By the time that's done, looking back at the code, you'd wonder: why didn't I just use Erlang, with all this built in?

-----


You don't want a language to force error handling on you. Error handling is different in every case. Go only tries to force you to deal with errors instead of allowing you to ignore them until an exception is thrown. There is never going to be an all-purpose error-handling answer. It's just a matter of deciding where your error-handling code is placed.

-----


This is the most sensible comment I've ever seen on error handling and one I agree with entirely.

When designing the architecture for an application, error handling is actually the first thing I look at. It's different depending on where it is, who needs to handle it and the context it applies in (technical or business domain). There is no "one size fits all" solution.

-----


Lisp had a condition system that was very clever. You still had to write the code, but you were able to separate the problem, the handling, and the restart.

This is the problem that comes up in many languages and which needs to be dealt with:

"Because each function is a black box, function boundaries are an excellent place to deal with errors. Each function--low, for example--has a job to do. Its direct caller--medium in this case--is counting on it to do its job. However, an error that prevents it from doing its job puts all its callers at risk: medium called low because it needs the work done that low does; if that work doesn't get done, medium is in trouble. But this means that medium's caller, high, is also in trouble--and so on up the call stack to the very top of the program. On the other hand, because each function is a black box, if any of the functions in the call stack can somehow do their job despite underlying errors, then none of the functions above it needs to know there was a problem--all those functions care about is that the function they called somehow did the work expected of it.

In most languages, errors are handled by returning from a failing function and giving the caller the choice of either recovering or failing itself. Some languages use the normal function return mechanism, while languages with exceptions return control by throwing or raising an exception. Exceptions are a vast improvement over using normal function returns, but both schemes suffer from a common flaw: while searching for a function that can recover, the stack unwinds, which means code that might recover has to do so without the context of what the lower-level code was trying to do when the error actually occurred.

Consider the hypothetical call chain of high, medium, low. If low fails and medium can't recover, the ball is in high's court. For high to handle the error, it must either do its job without any help from medium or somehow change things so calling medium will work and call it again. The first option is theoretically clean but implies a lot of extra code--a whole extra implementation of whatever it was medium was supposed to do. And the further the stack unwinds, the more work that needs to be redone. The second option--patching things up and retrying--is tricky; for high to be able to change the state of the world so a second call into medium won't end up causing an error in low, it'd need an unseemly knowledge of the inner workings of both medium and low, contrary to the notion that each function is a black box.

Common Lisp's error handling system gives you a way out of this conundrum by letting you separate the code that actually recovers from an error from the code that decides how to recover. Thus, you can put recovery code in low-level functions without committing to actually using any particular recovery strategy, leaving that decision to code in high-level functions.

To get a sense of how this works, let's suppose you're writing an application that reads some sort of textual log file, such as a Web server's log. Somewhere in your application you'll have a function to parse the individual log entries. Let's assume you'll write a function, parse-log-entry, that will be passed a string containing the text of a single log entry and that is supposed to return a log-entry object representing the entry. This function will be called from a function, parse-log-file, that reads a complete log file and returns a list of objects representing all the entries in the file.

To keep things simple, the parse-log-entry function will not be required to parse incorrectly formatted entries. It will, however, be able to detect when its input is malformed. But what should it do when it detects bad input? In C you'd return a special value to indicate there was a problem. In Java or Python you'd throw or raise an exception. In Common Lisp, you signal a condition.

A condition is an object whose class indicates the general nature of the condition and whose instance data carries information about the details of the particular circumstances that lead to the condition being signaled. In this hypothetical log analysis program, you might define a condition class, malformed-log-entry-error, that parse-log-entry will signal if it's given data it can't parse."

http://www.gigamonkeys.com/book/beyond-exception-handling-co...

Read the whole thing. It was a very clever system, and I wish something like it were available in other languages. (The lack of it in Clojure was one of the few things I agreed with Loper about in his anti-Clojure rant: http://www.loper-os.org/?p=42)

-----


A less flexible and less elegant way to avoid stack unwinding in other languages would be to pass a parameter that tells the invoked function what to do if an error occurs:

  def parse_log_entry(text, err_fn):
    entry = parse(text)
    if entry is None:
      entry = err_fn(text)
    return entry

Instead of passing a function, err_fn could also be some kind of resolver that finds a function for a condition name, or there could be a global recovery function:

  def parse_log_entry(text):
    entry = parse(text)
    if entry is None:
      entry = recover('invalid_log_entry', text)
    return entry

-----


Consider the hypothetical call chain of high, medium, low. If low fails and medium can't recover, the ball is in high's court. For high to handle the error, it must either do its job without any help from medium or somehow change things so calling medium will work and call it again. [...] The second option--patching things up and retrying--is tricky; for high to be able to change the state of the world so a second call into medium won't end up causing an error in low, it'd need an unseemly knowledge of the inner workings of both medium and low, contrary to the notion that each function is a black box.

<Devil’s advocate>

But if there were some way for medium to recover by itself, it had the option to handle the exception on the way back up the call stack and then restart the computation that called low, hopefully under improved conditions.

Control will therefore return to high only if medium doesn’t itself know how to handle the error. At that point, the entire computation for which medium was responsible has failed.

Assuming the system as a whole uses reasonable functional decomposition and modular design, since exceptions have limited value under other circumstances anyway, shouldn’t any context that was known only within medium be irrelevant to any recovery action taken at high’s level?

</Devil’s advocate>

-----


I read this article last week, lost it, and was looking for it yesterday. You just saved me a long hour of guessing at terminology.

This seems like a large step in the right direction for exception handling, but I think it still has the problem that the programmer writing the function that can throw needs to enumerate a number of cases to make it effective, and the programmer calling that function needs documentation describing all the possible restarts.

-----


Google's web history, with its toolbar, allows you to search the pages you've visited before (not just their titles, as in browser history). That is, you can search the subset of web that you've seen http://support.google.com/accounts/bin/answer.py?hl=en&a...

NB: Google will then have all your base, and people on HN have recommended turning off Google Web History altogether (let alone the toolbar!). I mention it because it is also a killer solution to the common problem you mention.

-----


It is much better than using browser tabs for an archive - I'd rather not let Google have all my base, though. I'm thinking I'll do a bookmarking solution one of these weekends that syncs to a webserver I own.

-----


Don't most browser history systems support this on their own?

-----


I think this allows one to do a full-text search, essentially. I.e. search for content on the page, rather than just URL/title.

-----


I remember reading a paper proposing that error handling be done only at function boundaries, not block boundaries. A minimal change from normal exceptions would be something like this:

    function foo(bar, quux, zut) try {
        var a = frobincate(bar, quux);
        return bubinate(a, zut);
    } catch (e) {
        // Stuff        
    } finally {
        // Stuff
    }

Whether such a change would make code easier to write, read, and maintain is hard to say. Writing a language that compiles to JavaScript and uses that syntax shouldn't be that hard. I'm partly tempted, but I almost never program in JS.

-----


I don't know which paper you read, but C++ supports exactly that syntax. It's rarely seen in the wild as many older compilers didn't implement it and honestly, who wants to remember yet another oddball C++ syntax variation?

-----


I don't think it would be that nice. For good error reports back to the user, you'd have to have a fairly large amount of logic in the catch block. If you have a function that copies data from one file to the other and catch some sort of IOException, you'd end up trying to analyse what it is to report it properly. After all, it could be either one of the files, while opening, closing, reading, writing, ...

-----


_For good error reports back to the user, you'd have to have a fairly large amount of logic in the catch block._

Serious question. How is that a reason to have that _amount of logic_ in the function body?

Having IDEs/editors which can fold blocks of code, with catch blocks collapsed by default, would make reading and understanding the code easier, IMO.

Actually, I may try something like making the try-catch the body of the function on a small to medium size project just to see the tradeoffs.

-----


I think that putting it in the body actually reduces the amount of code and makes it more readable. Compare something like this (ignoring naming, formatting, made up language, etc. I want to show the structure only):

    fin=open(fin_name)
    fout=open(fout_name)
    ...
    catch IOError e
      if e.call == 'open'
        if e.filename == fin_name
          log_error "cannot open input ..."
        elif e.filename == fout_name
          log_error "cannot open output ..."
        else
          log_error "cannot open file, but filename unknown"
      elif e.call == .....
        .....
to something like this:

    try
      fin = open(fin_name)
    catch IOError
      log_error "can't open input ...."
      return

    try
      fout = open(fout_name)
    catch IOError
      fin.close()
      log_error "can't open output ...."
      return
Both are ugly, but I really wouldn't like to be the person trying to match up the error handling from the first example to the exact places in the function body. I'm not even sure what this would look like if the exception was coming from somewhere deeper in the function. open() is easy because we know how it fails. More complex call chains would be a challenge to even match up to the place they happened.
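For comparison, a rough Go sketch of the second style, where every failure is handled at the call site (filenames and messages are hypothetical, and io.Copy stands in for the copy loop):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// copyFile wraps each failure where it happens, so the caller gets a
// precise message without having to dissect a generic IOException.
// A sketch only; names and messages are made up.
func copyFile(inName, outName string) error {
	fin, err := os.Open(inName)
	if err != nil {
		return fmt.Errorf("cannot open input: %v", err)
	}
	defer fin.Close()

	fout, err := os.Create(outName)
	if err != nil {
		return fmt.Errorf("cannot open output: %v", err)
	}
	defer fout.Close()

	if _, err := io.Copy(fout, fin); err != nil {
		return fmt.Errorf("copy failed: %v", err)
	}
	return nil
}

func main() {
	if err := copyFile("no-such-input.txt", "out.txt"); err != nil {
		fmt.Println(err)
	}
}
```

Each check sits next to the call that can fail, which is exactly the property argued for above.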

-----


Ruby already does that implicitly. Each method has an implicit begin, allowing you to write code like:

    def hello
      return do_something
    rescue FooError
      return :fail
    end

-----


I've always liked the Haskell approach of using a monad like Maybe or Either. Maybe is a special case where rather than having an error you have a null--perfect for functions like indexOf--but since it is isomorphic to Either (), all my points apply to both.

The most important point is that in Haskell these errors are reified as first-class types. So the type `Either err val` is just like Go's method of returning either an error code or the correct result. However, rather than being a special case, this is just a normal algebraic data type. This is a more elegant way to support this sort of error-handling, as opposed to just baking it into the language explicitly.

Being first-class citizens is nice, and we'll get back to it, but first let's look at another way they're better than Go's approach--they form a monad. There is some rich theory behind this, but it also has an immediate practical benefit: you get the default action of propagating the error for free. That is, code like this:

    if isError riskyValue 
      then error
      else if isError (riskyFunction riskyValue) 
        then error
        else ...
transforms into:

    do value <- riskyValue
       result <- riskyFunction value
       ...
so you get the simplicity of Go's approach with the convenient propagation semantics of normal exceptions. Now you only have to check for an error when you want to handle it; it gets sent through transparently if you don't. This is also cool because you can use the Either type to model early termination rather than an error, which is why it's called Either rather than Error.

So, being a monad, the Either type is nicer to use than normal returned error codes. But earlier I mentioned that there are some advantages to being a first-class citizen. What are these?

Well, the main advantage is that there is a fair number of generic functions that can be used to make your error-handling code neater. For example, you can use the alternation operator in a pattern I really like:

    canError1 a b <|> canError2 a b <|> return 42
what this does is try the various options one by one until it either finds one that isn't an error or gets to the end of the expression. The last element could be a default result, as here, or a default error.

Another fun combinator is optional:

    do someImportantAction 1 2
       val <- someValueOf 10
       optional (optimize something)
       return something
Another really cool thing is how types like this can interact with other types. In particular, I am thinking of monad transformers. In simple terms, monad transformers allow you to combine different "effects"; for example, you could combine error-handling as here with backtracking. But here you have two options: an error can either make the entire backtracking computation fail or it can just make a single branch fail. Which one should you choose?

The cool answer is that it is, indeed, the programmer's choice. In particular, the order of transformers controls these semantics; something like EitherT err (Logic a) would be the first and LogicT (Either err a) would be the second. Not only do the types reflect the semantics, they actually control them! Very much self-documenting code.

The Haskell approach gives you high-level, declarative and very extensive control over exactly how you want to handle errors while hiding enough of the normal boilerplate to make them convenient to use. All this in a way that is not baked into the language but just an instance of a more general pattern (in this case a monad and monad transformer).

-----


Nitpick:

   EitherT err Logic a
   LogicT (Either err) a
(The monad transformer takes the monad and the value type, not the monad applied to the value type)

-----


Good writeup. Haskell actually does have an Error type class and ErrorT monad transformer for more fine-grained error handling.

Maybe gives you binary error handling, it either fails (Nothing) or succeeds (Just).

Either gives you "stringly-typed" errors which is sometimes a good choice.

http://hackage.haskell.org/packages/archive/mtl/1.1.0.2/doc/...

-----


ErrorT is similar to EitherT, so I just used the latter to make the names less confusing.

Either doesn't have to be "stringly typed" per se thanks to sum types. You can write a type like:

    data ErrorType = SomeError
                   | OtherError String
                   | ...
The compiler can now check that you handle all possible errors and will give you a warning if you don't, so it's strictly better than using strings.

Coincidentally, this is one place where some sort of sub-typing would be nice, I think. This sort of type would be perfect using OCaml's polymorphic variants because then you could share different error cases around while specifying a very specific type for each computation.

-----


Great idea, I completely agree. Error-handling code is pretty much impossible to eliminate, since often the compiler can't possibly know what you want to do when there's a certain error. I think the solution is to use a RoR kind of philosophy, "convention over configuration". Not exactly that, but basically sane defaults. Maybe this would get better if/when compilers get extremely intelligent.

-----


I've found that the least amount of mental overhead is still achieved by doing all the necessary tests right where the 'useful' code is taking place, raising a standard Exception but with a good error description and doing a Pokemon-catch somewhere in main(). Not good enough for system programming of course, but for everything you want to fail early it's fine.

If we do talk about system programming: error handling IMO is its main difference from 'other' programming. The necessity of deciding on (and, in the case of security-relevant systems, testing for) all eventualities is why only veteran programmers should do that job.

-----


I was really excited about Go. Designed by some gurus, seemed to get everything right, google app engine supported it.

Then I tried to build something.

Java-like verbosity. Meh, I can deal with it.

[]byte and string aren't the same. Whatever, a few extra lines and thought cycles here and there, no big deal.

Overly complex library functions. Let me explain this one. In Lua, markdown (discount) is a single function. In Go, there was a bunch of extra stuff that just seemed like noise. Likewise for crypto, Base64, stringwriters, and a bunch of other stuff.

A bunch of little annoying stuff like that adds up. Eventually I just said fuck it. Maybe I'm not hardcore enough or something, but now I'm back with LuaJIT.

Goroutines are cool, tho. Lua's synchronous threads aren't quite the same. Also google's app engine datastore is sweet. Nice and simple, no screwing around with SQL. If I had to do systems stuff, I'd reach for Go.
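A minimal sketch of the goroutine pattern being praised here (a hypothetical example; Lua's coroutines are cooperative and single-threaded, while goroutines can be multiplexed across OS threads):

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll fans work out to one goroutine per input and collects
// the results over a channel -- goroutines are cheap enough that
// one-per-item is idiomatic.
func squareAll(nums []int) int {
	results := make(chan int, len(nums))
	var wg sync.WaitGroup
	for _, n := range nums {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3, 4})) // 1+4+9+16 = 30
}
```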

LuaJIT is faster anyways.

-----


> []byte and string aren't the same.

Nor should they be. I've got plenty of issues with Go, but that most definitely isn't one. It's one of the few things Java got completely right (even though they fucked up the String type itself): strings are not bytes.

-----


> LuaJIT is faster anyways.

This man speaks the truth. Well, actually, I know nothing about Lua. But I do know that Go currently isn't very fast. I mean, compared to highly dynamic languages the performance is fine. But compared to C++ or even modern JVM it's meh.

I feel like there's going to be improvement here, though, and I wouldn't be surprised if the versions of golang used internally at Google are faster.

My take has been to use it the way I imagine Google does. If I'm not building something that uses protobuffs and goroutines I do it in Python.

-----


When you say that Go currently isn't very fast, maybe you should try coding something in it. I found it to be surprisingly fast. I think if you are doing a lot of memory allocation, then the garbage collection may slow you down, but I wouldn't be surprised if it's still faster than a jitted language like LuaJIT.

-----


My largest Golang deployment is a blender service. That's where I used the Goroutines and protocol buffers I mentioned earlier.

Its speed is acceptable. My take on it is that this is a new language. Of course it doesn't have fully mature, fully optimized internals. And certainly part of the problem is that I haven't developed a full mastery of the language (mastery is not a word I use lightly), so that will improve, too.

But if your experience is that you've built multi-threaded apps in Go that perform comparably to a C++ implementation, then I'd love to know if you have any specific tips, and also I suppose that means that we're not going to see any broad improvements in runtime performance?

-----


I recommend trying gccgo, the Go compiler for the GCC suite. (GCC 4.7.1 and later have Go built in.)

In my experience it is (at -O2) much faster than Go's own compiler. GCC, after all, has a bunch of advanced optimizers with a long history of development behind them, and Go's own compiler does not.

My recollection is that Go's own compiler was intended as a "suboptimal but correct" reference compiler, but this may have changed.

-----


Appreciate the tip -- sincerely.

-----


>When you say that Go currently isn't very fast, maybe you should try coding something in it. I found it to be surprisingly fast.

I've tried it in 2-3 things which I re-implemented afterwards in C and had before in Python.

In all cases it has about 2x the speed of Python, which I consider meh.

(The tasks were mostly parsing some huge files and doing some filtering and computations, and C made it IO-bound while Python/Go both had it CPU-bound.)

-----


This is the first time I've heard "Java-like verbosity" as a criticism of Go, and I'm baffled by it. Go is generally much more succinct than C++, Java and other similar statically typed languages. Can you expand on your comment?

-----


Everything is package-prefixed; names are long.

Totally minor, but reminded me of my days with java

-----


I guess you could call that "verbosity", but it's very different from what people normally think of when you refer to "Java" and "verbose".

-----


* []byte and string aren't the same.

...and how could they be?

-----


Despite Unicode being something like 20 years old, people still expect strings to be byte arrays, along with other assorted lunacy, like expecting reading from a file not to require stuff like encodings. "How hard can it be" to just figure out which encoding the file has, after all...

-----


BTW, in Go a string can be cast to []byte like this:

    []byte(aString)
it returns the UTF-8 bytes; casting to []rune gives Unicode code points.
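A runnable sketch of the difference, using a string with one non-ASCII character:

```go
package main

import "fmt"

func main() {
	s := "héllo" // 'é' encodes as two bytes in UTF-8
	fmt.Println(len([]byte(s))) // 6 -- UTF-8 bytes
	fmt.Println(len([]rune(s))) // 5 -- Unicode code points
}
```

(Strictly speaking, Go calls these conversions rather than casts.)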

-----


In Lua, markdown (discount) is a single function

https://github.com/buu700/upskirt.go

-----


I still think that the ability to generate self-contained, static binaries is potentially a huge advantage for Go vs the scripting languages. Instead of installing an interpreter and a bunch of libraries and fiddling with search paths you just copy a single file and go (as it were).

Think about what it takes to get something like a PHP photo gallery going vs what it would take with Go, for instance.

-----


If you are looking for a new language you are not looking for the right thing. We already have the language of mathematics and the homoiconic programming language Lisp. What we need isn't a new language, it's a new platform which uses Lisp all the way down. Unfortunately, I don't see that happening anytime soon.

> Rule #1: C-like syntax

Just what we need! Another programming language with C syntax! It's not like we don't already have thousands of those, none of them better than the others. I think this new language should be renamed from the next big language to just another C-based language.

> Personally I had hopes for Clojure, but I realise that the same people who think that knowing what a Monad is makes them mathematicians also think they’re being hip and edgy by pointing out that Lisp has a lot of parentheses.

A more accurate statement would be that Lisp code has a lot of links (pointers between data structures). Lisp code is a linked data structure; it doesn't have any parentheses. However, Lisp code is sometimes presented as S-expressions, which do have parentheses.

-----


> just another C-based language.

That would be "Jacbl", or even if we changed it to "Just another C-like language" we could call it "Jacll" (pronounced like "Jackal"). Now I want this language :)

-----


You should really read Steve Yegge's original blog post[0] before reading this submission.

[0]: http://steve-yegge.blogspot.com.au/2007/02/next-big-language...

-----


I started with Clojure a few days ago. I mostly like it but:

1. I often need to do something like a = foo; b = bar; c = combine(foo, bar). Using "let" for this in Clojure results in nested and relatively less readable code.

2. Are macros such a nice feature that they justify S-expressions? How useful are macros in your experience? ML-like syntax would be in some ways better (e.g. infix arithmetic operators, no parens).

3. When working on non-trivial projects, I find static typing and OOP really useful. Are functional programming languages suitable for bigger real world projects (e.g. a desktop application)?

-----


> Using "let" for this in Clojure results in nested and relatively less readable code.

Why would you have to have nested code? Why can't you just do this:

  (let [a foo, b bar, c (combine foo bar)])
> Are macros such a nice feature that they justify S-expressions?

I'd say S-expressions are the nicest looking syntax, which justifies them regardless of their considerable practical advantages. The only reason you might think that they need to be "justified" is that you are still too used to C syntax. Try using S-expressions for a few years first to get used to them.

> When working on non-trivial projects, I find static typing and OOP really useful.

Again, I think you are just used to mainstream static OOP languages like Java and C++. If you try Clojure for long enough you can get used to its way of programming, which is arguably nicer than the mainstream static OOP way.

-----


What I meant was this:

   (let [x (math/sqrt (+ 256 (* a a)))
         y (math/log (- (+ a b) (/ 1 (+ a (* b b b)))))]
     (* 2 (+ x y)))
In other languages I would do something like this:

   x = math/sqrt(256 + a * a)
   y = math/log((a + b) - (1 / (a + (b * b * b))))
   2 * (x + y)
As for static typing - in combination with IDE support, it's really useful in larger projects.

-----


I made a reader macro (not technically supported by Clojure, but I don't care) for these situations that provides something I call the "scoping operator".

    #>a 6
    #>(let [b 7])
    #>(while true)
    ...
Using this operator, the above code is then equivalent to the below code, which I find much easier to deal with for code that is strictly linear in execution semantics.

    (let [a 6]
        (let [b 7]
            (while true
                ...)))
I don't use it for most functions, however, only those where I find myself otherwise ending up in nested expression hell. When I do, however, it works perfectly with other language features (example: doing a syntax quote on a #> works exactly as one would hope; it is conceptually just a fancy macro).

(...and where breaking up the logic into smaller functions would just obfuscate the process, as the steps fundamentally could only be called from a single place anyway, turning the nesting into an even-worse problem of broken apart logic.)

-----


In the C-based languages that you are apparently familiar with, there are several control statements that aren't expressions, like for, if, while, and do. For example, here is how you can use the if statement in C:

  if (a >= 0) {
    printf("Positive");
  } else {
    printf("Negative");
  }
  
The if control statement in C can only be used to induce side effects, such as printing to stdout; it cannot be used for functional programming. On the other hand, Lisp is an expression-oriented language, so expressions such as if can be used as values for functional programming:

  (if (<= 0 a)
    "Positive"
    "Negative")
    
The same principle applies to the let statement. It is an expression, so you can use it without ever resorting to creating mutable state or inducing side effects. If you allow Emacs to automatically handle the nesting involved with your statements, then it shouldn't really be an inconvenience. However, when you need global mutable state, use def:

  (def x (Math/sqrt (+ 256 (* a a))))
  (def y (Math/log (- (+ a b) (/ 1 (+ a (* b b b))))))
  (prn (* 2 (+ x y)))
Clojure encourages good practices like avoiding local mutable state. That said, if you want to you can always create your own local environment to declare local mutable state:

  (defmacro defun
    [name args & code]
  
    `(defn
       ~name
       ~args
       (with-local-vars [~(symbol 'e) {}]
         (let [~(symbol 'def*!)
               (fn [p# v#]
                 (var-set ~(symbol 'e) (assoc (deref ~(symbol 'e)) p# v#)))]
             ~@code))))
           
Here is an example of code like yours using this macro and a local environment called e:

  (defun func
    []
    
    (def*! 'x (Math/sqrt (+ 256 (* a a))))
    (def*! 'y (Math/log (- (+ a b) (/ (+ a (* b b b))))))
    (* 2 (+ (@e 'x) (@e 'y))))
	
You could probably make that look nicer using macrolet and by defining other new macros and operations.

-----


>If you are looking for a new language you are not looking for the right thing. We already have the language of mathematics and the homoiconic programming language Lisp. What we need isn't a new language, it's a new platform which uses Lisp all the way down. Unfortunately, I don't see that happening anytime soon.

Because you just (presumably) discovered the hammer of Lisp, it doesn't mean anything has to be Lisp like.

Even more so given that the supposed superiority of Lisp is mostly anecdotal -- no actual studies on developer productivity, product robustness, etc: all anecdotes.

Where is your PROOF for what you say, computer SCIENTIST?

If anything, empirical data favor languages with C-like syntax. More programs we CAN'T DO without have been written in those (from OSs, to office applications, to embedded systems that power almost everything, to servers of all kinds) than in Lisps. In fact, the ratio is incredibly small for Lisp-made world-changing programs (Emacs -- which is partly C -- and then what?).

-----


Going by popularity, C syntax is indeed king. In the 16th century, going by popularity, it was best not to shower or otherwise clean yourself.

Popularity is not a useful metric. It would be nice to see studies of languages and productivity, I agree. However, it's made incredibly difficult by those who adhere to popularity. The result: languages like php, frameworks like node.js, and misunderstood movements like 'nosql.'

-----


>Going by popularity, C syntax is indeed king. In the 16th century, going by popularity, it was best not to shower or otherwise clean yourself.

Only I didn't go by popularity, I went by results: most of the must-have programs the world uses every day are written in C/C++, and almost none of them are written in Lisps (in the order of statistical noise).

Saying that it is just due to popularity ("C is more popular so of course more programs are written in it") is not really valid. Lisp had a head start on C; it had its chance (AI, the Lisp Machine era), etc.

At the very least, you would expect the Lisp advocates to have a few killer apps to show for all that superiority, the kind of stuff lone-wolf team-of-one C/C++ programmers churn out (and a lot of them are mighty popular).

Can you point to any, preferably not Emacs?

-----


The popularity of the language and the applications built in it are irrelevant to the quality of the language itself.

-----


>The popularity of the language and the applications built in it are irrelevant to the quality of the language itself.

This borders on the mystical. There is no intrinsic quality that a language has if that quality can not be measured in some way.

A "better quality" language that has no more useful programs written in it to show for its quality is like a "good man" who never did any good deeds. Even if in some mystical, esoteric way it were true, it would be of no consequence. Only what has actual consequence in the outside world matters in engineering (and programming).

(I didn't write "in more better programs being written in it" because that would be a vicious circle.)

-----


Programming languages are standalone products that can be evaluated regardless of their environment. For example, anyone reasonable can see that brainfuck is a deeply flawed language regardless of the applications written in it. If you cannot evaluate a language on its own merits, there is really no point in talking to you about this.

-----


>Programming languages are standalone products that can be evaluated regardless of their environment.

Only in some abstract way, that is of no consequence to the practice of programming.

Because in the real world the environment very much matters, and the best proof that "it works and gives results" is to see it, er, working and giving results.

>For example, anyone reasonable can see that brainfuck is a deeply flawed language regardless of the applications written in it.

Only the C/Lisp issue is not that trivial. Brainfuck was designed to be "deeply flawed", whereas C/Lisp were designed to have different, specific, constraints and strengths.

Or, conversely, it is because there are NO applications written in brainfuck that we can safely deduce that it has some problems (empirical observation). If hundreds of killer apps were written in it, we would have to reexamine our assumptions about it (but of course, it would have to be a different language for that). So cause and effect are linked in a feedback circle in these evaluations.

>If you cannot evaluate a language on its own merits, there is really no point in talking to you about this.

Languages are not works of art. They are tools. Tools are not to be evaluated "on their own merits", they are to be evaluated by their results.

-----


Paradoxically, Lisp depletes the ego. Too many choices.

http://www.winestockwebdesign.com/Essays/Lisp_Curse.html

-----


I would love to see a scientific method to measure developer productivity that isn't ridiculously flawed in completely obvious ways. So far I haven't seen one.

What I care about is primarily what makes me productive. I don't care if it makes anyone else productive but there is a good chance that it might.

Call it proof by induction based on an admittedly shaky prior ;-)

-----


Although I agree, I also think some numbers are somewhat better than no numbers. Even in the worst case, at least there is something to argue with other than opinions.

I won't be shocked if data shows that people do better with things they know well and feel happy about, however.

-----


>I would love to see a scientific method to measure developer productivity that isn't ridiculously flawed in completely obvious ways. So far I haven't seen one.

Well, just use the good old empirical method then. Of all the programs out there that people and businesses need to have, in what languages was the majority written? Do proponents of older, supposedly superior languages have an equal body of work to show for it?

>What I care about is primarily what makes me productive. I don't care if it makes anyone else productive but there is a good chance that it might.

I'm fine with that, what I tried to counter-argue was the statement "What we need isn't a new language, its a new platform which uses Lisp all the way down.".

-----


Of all the programs out there that people and businesses need to have, in what languages was the majority written? Do proponents of older, supposedly superior languages have an equal body of work to show for it?

The obvious flaw of this approach is that the majority of people might have written their software in a less than optimal language for reasons unrelated to productivity.

Proponents of niche languages, almost by definition, never have a body of work to show that is equal to that of the mainstream languages. If they did, they would _be_ the mainstream.

Or to put it more succinctly: The majority can be wrong.

-----


>The obvious flaw of this approach is that the majority of people might have written their software in a less than optimal language for reasons unrelated to productivity. Proponents of niche languages, almost by definition, never have a body of work to show that is equal to that of the mainstream languages.

I'm not expecting equal bodies of work. Just show something.

Forget the enterprise, big companies and such. How about lone wolf programmers? Where are the Lisp gurus "beating the averages" and producing some killer stuff? 3-4 apps would suffice. For Lisp I can see very few things, statistical noise almost. Heck, even Erlang has Riak.

One way to see it is: "of course Lisp doesn't have a large body of A-list programs written in it, since it has less programmers". This is your reading of the situation.

Another way, though, is:

"there is a reason Lisp doesn't have as much A-list programs written in it, and it's not adoption. The reason goes deeper and it also explains adoption".

One explanation: Lisp was too high-level for the machines of the past era to run sufficiently fast. That explains why it didn't catch on in the past. It means that despite being conceptually better, it was a bad language for the problems most people were trying to solve (squeeze the last trace of CPU and memory juice from very constrained hardware).

And now? Now other languages have the most essential of the high-level features it used to have, so other factors weigh more in using them over Lisp (e.g. available programmers, libraries, etc). Which means again that despite being conceptually better, it is a bad language for the things people do now (front end web stuff needs JS, enterprise needs Java/.NET and Oracle/MS support, embedded needs C, web apps need Node/RoR/Django/PHP, etc).

Not a single niche where Lisp is the best option.

Consider, e.g., that: Productivity = LanguageProductivity + EnvironmentProductivity.

And let's take the scientific computing field. Even if Lisp, the language, has 70 productivity points over 50 for Python, the environment for Python has 80 points (NumPy, SciPy, Sage, etc) over 20 for Lisp.

So, Lisp = 70 + 20, Python = 50 + 80, hence Python wins.

(The numbers are out of my ass, but you can make a similar thought experiment, and people that make it come to similar conclusions when they pick their tools. Even PG, if he had to build something today, would have picked RoR, not CL.)

Lisp guys tend to argue that Lisp has "language productivity" of 100, but I don't think so. And even if it has, it's not 2-10 times the productivity of something like Python the language. Maybe 20-30% better.

In the grand scheme of things, macros don't matter that much.

-----


There certainly are other factors than a language's productivity. I don't dispute that at all, or I wouldn't be writing so much code in C++.

But I think language popularity has very little to do with productivity or any other rational factor. Languages mostly piggyback on platforms that emerge rapidly at some point in history.

C came with Unix. JavaScript and Java came with Web browsers. SQL came with relational databases. Objective-C became widely used (if not popular) with the iPhone. PHP came preinstalled with shared web hosting. C# comes with Windows. VBA comes with Office, etc.

Developers mostly just choose platforms not languages. Whoever makes the platform decides on the language and it will be "popular" regardless of how atrocious it may be.

And by the way, PG is building something today and he's building it in a Lisp dialect (http://paulgraham.com/arc.html)

-----


> Because you just (presumably) discovered the hammer of Lisp, it doesn't mean anything has to be Lisp like.

I didn't "just discover the hammer of Lisp", I discovered it quite a while ago and I haven't looked back since. For example, I have been posting Lisp code in this blog for at least a couple of years: http://lisp-ai.blogspot.com/.

> Where is your PROOF for what you say, computer SCIENTIST?

The burden of proof is not on the Lisp community to prove the worth of its fifty-year-old technology. Lisp predates C-based languages by at least a decade, so it really should be up to C and the other new languages to prove their worth.

So the burden of proof isn't on me; it's up to YOU to prove to me that there is a new language that has advantages over Lisp. And if you're designing the "next big language", please prove to me that there is any reason Lispers should be interested in it.

-----


Thanks for the link!

-----


>The burden of proof is not on the Lisp community to prove the worth of its fifty-year-old technology. Lisp predates C-based languages by at least a decade, so it really should be up to C and the other new languages to prove their worth.

The burden of proof doesn't change sides based on ...age. Algol68 is older than Haskell, but still most experts will agree that Haskell is better.

As for it being "up to C and the other new languages to prove their worth." that's what I just said. By ratio of useful, must have programs (including code used in space missions), C wins, and Lisp doesn't have much (if anything) to show for.

-----


I discovered a long time ago that the benefits of Lisp are more than outweighed by the frothing zealotry of many members of the Lisp community, who just can't see that almost every language design decision is about tradeoffs, not about right or wrong.

Lisp seems to attract people that are a little mentally imbalanced.

-----


> By ratio of useful, must-have programs (including code used in space missions), C wins, and Lisp doesn't have much (if anything) to show for itself.

Those programs may as well have been written in Brainfuck for all I care. You can write as many programs as you want in a language; that doesn't make the language better.

At the very earliest stage of their compilation, C programs go through a preprocessor so horribly stupid it was completely removed from D. That's just the start of it...

http://wiki.theory.org/YourLanguageSucks#C_sucks_because:

-----


It's a pity that we are still talking about things like syntax in the context of a next big language. Programs are still full of bugs, especially concurrent programs, and lots of time and effort is still spent on testing. In this day and age, the NBL should be a language that helps or guides programmers to write correct concurrent programs with good performance. It should prevent programmers from making mistakes as much as possible. I think the NBL will be in the same school as Erlang and Scala.

-----


Syntax is very important for a programming language. It's in your face all the time. A serious downside to many programming languages is their awful or inconsistent syntax, and that results in code that is hard to read and comprehend even before trying to understand what the code is doing.

-----


It sure is important, but who is to say C-style syntax is better than Lisp-style syntax, or vice versa? Obviously the NBL should have a sensible and consistent syntax, but it has to have a lot more than that to be the NBL. C-style syntax is definitely NOT a must-have.

-----


"Two years later however, D was still stuck in 2007. This was in part due to the infighting between the standard library camps who had failed to learn from the mistakes of the Java class library and were busily adding bloat and verbosity to Phobos and Mango."

This was never the case for Phobos. D still lacks a lot, but Phobos is a pretty lean library. It's built around the concepts introduced by Alexander Stepanov and the STL, and refines them to a great extent. Go, on the contrary, has no support for generic programming, so I can only imagine how full of ad-hoc algorithms typical Go code is. So yes, for me it's definitely not the NBL but just a niche language. D still has a lot of issues, but as a language it's better than Go.

-----


The NBL requires a Next Generation Editor. We can't escape this local maximum of code expressiveness, readability, amenability to change, richness of types, etc. without a truly better programming environment.

Working on it...

-----


I believe Go is actually BSD-Licensed, not MIT. Though from what I understand these are very similar (permissive) licenses.

-----


They're almost interchangeable. If I wasn't reading carefully I probably wouldn't notice the difference.

http://opensource.org/licenses/MIT http://opensource.org/licenses/BSD-2-Clause

-----


From http://producingoss.com/en/license-choosing.html :

There is perhaps one reason to prefer the revised BSD license to the MIT/X license, which is that the BSD includes this clause:

Neither the name of the <ORGANIZATION> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

It's not clear that without such a clause, a recipient of the software would have had the right to use the licensor's name anyway, but the clause removes any possible doubt. For organizations worried about trademark control, therefore, the revised BSD license may be slightly preferable to MIT/X. In general, however, a liberal copyright license does not imply that recipients have any right to use or dilute your trademarks — copyright law and trademark law are two different beasts.

-----


I have to agree with the commenters on that article that JavaScript is the next big language. With HTML5 it's pretty amazing what you can do with JS. It's reached the point of being nearly as powerful as any thick client technology yet with ubiquitous browser and OS support.

It performs fairly well too: http://shootout.alioth.debian.org/u32/javascript.php . I'm not sure why one test is 100x slower, but the rest are < 10x slower than Java.

-----


I don't know, I'd say JavaScript is pretty close to reaching its peak at this point, if it hasn't already.

-----


It's got tons of room to grow outside the browser where it's never been used all that much.

-----


>>I'm not sure why one test is 100x slower<<

See http://stackoverflow.com/questions/7025286/why-does-this-v8-...

-----


The lack of generators in JavaScript really kills it for me.

-----


This comment is for all those people who have written against Lisp in this thread.

Lisp was designed to represent, design, play with, and experiment with advanced and complex algorithms. Now, when was the last time you wrote code that was genuinely algorithmic, or about some cool new algorithm? Try to remember. Let me tell you the answer: most probably never in your life (and there is no hope for the future either). What you people do is stitch APIs together, or more precisely, the iOS API, the Android API, some web framework API, the DOM API, API API API... look at your code and see that it is just a bunch of API calls thrown in a mix. This is absolutely fine, because what you guys build is an "application" (not "technology") which grabs some data from here and there, stores it in some shiny DB, and when the user wants it, shows it to him in a shiny new UI. That's all you do, and for this purpose go ahead and use whatever language those APIs are in, no problem at all, but please don't rant about something that you don't understand.

-----


Dart almost exactly matches what Yegge wrote about a few years back.

-----


> I have no hesitation in recommending Go to displace any programming task which would previously have been targeted towards Java.

Java is faster and has more libraries.

-----


What I want is somewhere in the middle of Java, Scala and JavaScript. I have no idea what that looks like exactly though. Maybe Kotlin? Dart?

-----


I haven't used much Scala, but Haxe (haxe.org) sits somewhere between Javascript and Java. I enjoy it, ymmv.

-----


Yeah, Go is in the competition, but Rust is in the competition too.

-----


[while checking Yegge's list the NBL should have:]

>The final tally is 11 affirmative, 7 negative

Yes, but among the negative items are things of far more importance than among the affirmative ones.

-----


I disagree. The negative items are things that people are used to but not necessarily the best solution to the problem:

Perl regex: Go covers most of the Perl RE features that matter. The RE engine is the only real weak point I see in Go at the moment, but that will change in time, and it's "good enough" now.

Strings and streams as collections: a string is a collection of runes, and you can get at the runes easily enough. The latter is just wrong: a stream is not byte-addressable by nature.
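One caveat worth noting: indexing a Go string yields bytes, not runes; it's ranging over the string (or converting it to []rune) that gives rune-level access. A small sketch:

```go
package main

import "fmt"

func main() {
	s := "héllo"

	// Ranging over a string yields runes together with their byte offsets.
	for i, r := range s {
		fmt.Printf("byte offset %d: %c\n", i, r)
	}

	// Converting to []rune gives true rune-at-index access;
	// s[1] alone would be a byte of the UTF-8 encoding of 'é'.
	runes := []rune(s)
	fmt.Println(string(runes[1])) // é
}
```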

Iterators: is "for" not good enough for you? You can build an iterator interface over any type if you desire, but it's really not needed. If it makes you comfortable...
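To make the "build an iterator interface if you want one" point concrete, here is a minimal sketch; Go's standard library has no iterator type, so IntIterator and sliceIter below are made-up names:

```go
package main

import "fmt"

// IntIterator is a hypothetical iterator interface:
// Next returns the next value and whether one was available.
type IntIterator interface {
	Next() (int, bool)
}

// sliceIter adapts a slice to the IntIterator interface.
type sliceIter struct {
	data []int
	pos  int
}

func (it *sliceIter) Next() (int, bool) {
	if it.pos >= len(it.data) {
		return 0, false
	}
	v := it.data[it.pos]
	it.pos++
	return v, true
}

func main() {
	var it IntIterator = &sliceIter{data: []int{1, 2, 3}}
	// Plain for-loop driving the iterator.
	for v, ok := it.Next(); ok; v, ok = it.Next() {
		fmt.Println(v)
	}
}
```

In practice most Go code just ranges over slices, maps, or channels directly, which is why the comment calls this pattern unnecessary.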

Generics: these are really not needed; that's a common misconception. I've come from a heavy OO C# background (DDD, 1000-class models, etc.) into Go, and I haven't missed them for a second. In fact, I think I've been freed from a million bad design decisions and many refactoring sessions.

Standard OOP: I'm not even going down that route. The hacks you have to do to get proper composite models or any level of dynamism using "standard OO" are horrible. I wish it would go away. CLOS is the nearest thing to something usable, not what C++, Java, C#, and PHP force upon everyone.
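Go's own answer to "standard OO" composite models is struct embedding, which gives composition without inheritance. A minimal sketch with made-up Logger/Server types:

```go
package main

import "fmt"

// Logger provides logging behavior to be composed into other types.
type Logger struct {
	prefix string
}

func (l Logger) Log(msg string) {
	fmt.Println(l.prefix + msg)
}

// Server embeds Logger: its methods are promoted onto Server
// without any subclassing or inheritance hierarchy.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger{"[srv] "}, ":8080"}
	s.Log("started") // prints "[srv] started"
}
```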

And there is a cross-platform GUI for Go: it's called a web browser.

The only nicety I'd like to see is dynamic loading, as it'd allow composite, modular applications to be built without recompilation, but that has its own pitfalls.

-----


> And there is a cross-platform GUI for Go: it's called a web browser.

A web browser is NOT a replacement for a proper GUI library. Writing web GUIs is rather hard in comparison, because that was never their original purpose, and they're rather slow in comparison for the same reason. The list goes on. Everything we have done to HTML/CSS/JS over the last decade to "support" such features was just a pile of workarounds covering up the fact that we're still dealing with a markup language, one meant to add semantic meaning to text or other information in order to display it.

GUI libraries on the other hand were designed to do GUIs. They're designed not just to display information but to handle user input, handle widgets and components of user interfaces. Most importantly though they're designed to be consistent. Consistency is one of the most important aspects of user interface design and ultimately user experience.

For example, look at Apple: The whole OSX experience and all the apps it comes with are consistent in look and feel. They've got the same polish on top and the same usability features that come with it. That is what takes most of the burden from users. You learn how to use it once and then you're able to apply that knowledge across many other applications. That is how things start to feel "intuitive". That is what all the UX designers are ultimately aiming for.

HTML/CSS/JS enable us to build wildly different GUIs that do their job well but are, for the most part, not consistent. They have enabled designers to go on a rampage, each trying to improve on certain aspects. That's good, that's progress, but it also comes at a great cost, because things stop feeling intuitive and the paradigms of interaction you once learned are rendered invalid. Every site these days is trying to be different to stand out from the crowd, and eventually it's all a bloody mess as far as overall UX is concerned.

The only real use case for web GUIs is GUIs on the web, not native apps. Introducing the mess we've got on the web today to native desktop apps is not the right way to do it. A proper GUI library is the way to go, both for the developer and for the user.

-----


Perhaps I simplified a bit too much. I agree with you entirely.

The browser provides an API and a canvas on which you can build a user interface, much as X does on Linux, GDI/WPF on Windows, and Quartz on OS X. It's a medium, not the solution.

I also agree about a lot of sites on the internet being a bloody UX mess.

-----


Have you looked at http://pyjs.org ? I can't understand why it has got so little traction. Is GWT not considered a "proper" GUI library?

-----


>I disagree. The negative items are things that people are used to but not necessarily the best solution to the problem

That's what you say, and it might be true or not, but you don't get to use Yegge's list and count positives/negatives when you don't agree with Yegge's priorities.

-----



