What I Expect From a Programming Language (eiffelroom.org)
28 points by jpro on Jan 7, 2013 | 47 comments


Interestingly, these qualities all apply to Go:

- Easier to read than write: it has often been said that Go (especially its error handling) is somewhat verbose, but very readable.

- Not tricking the programmer: this one is a bit subjective. Go does use the equals sign as its assignment operator, but the required braces for if/else, the gofmt tool, and the strict compiler all help keep it from tricking you.

- One way to do things: Go (or rather its community) very actively promotes the idiomatic way of doing things. Also consider: there are no while or do loops, only for.

- As much static typing as possible: maybe not as much as Rust, but Go's typing is very static (array length is part of the type, for example).

- No warnings: exactly that! Unused variables and imports are errors (see the sketch after this list).

- Coding conventions part of the language: the gofmt tool does that. Don't publish code that isn't gofmt'ed!
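
To make the "no warnings" point concrete, here's a minimal sketch -- the compiler rejects this program outright (the exact message wording varies by Go version), with no flag to downgrade the errors to warnings:

    package main

    import (
        "fmt"
        "os" // error: imported and not used: "os"
    )

    func main() {
        x := 42 // error: x declared and not used
        fmt.Println("hello")
    }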


I read the article and then came to the comments looking to make this exact point.

Gofmt is my favorite example of why coding conventions can, and should, be part of the language.


I think "one way to do it" Java and Python philosophy is overrated. Sure, when it's presented that way, it sounds great! Why wouldn't we want one way to do things?

I like to think about it differently: instead of trying to coerce my problem to fit the language, I want to mold my language to fit the problem. It's easier to think about the problem in its own terms--the terms of its domain--than in those terms filtered through a "one-size-fits-all" language design.

I want code that is declarative and closely reflects its underlying logic. And this means there will be more than one way to do things--while many problems are similar mechanically, their actual meaning is vastly different.

So there really is a benefit to having more than one way to do things: you can choose a way that's appropriate for the problem you're solving. Of course, this puts a bit of trust into programmers to have good taste: while it allows you to write far better code than a one-size-fits-all language, it also allows you to write far worse code. I personally think this is a worthy compromise.

Another interesting thing I've found is that having more than one way to do things naturally emerges when you have a distinct set of powerful primitives. This is most evident in math: for any given problem, there are often a ton of different ways to arrive at the same solution. Each of these ways emerges naturally from the fundamental building blocks of math.

Coincidentally, these different ways of looking at the same thing are not only natural but actively useful: they give you different perspectives on the same idea. An example I recently encountered was with lattices. There are two different ways to define a lattice--one in terms of partial orders and one in terms of algebras. The former gives you an intuition on the structure of a lattice; in a sense, it tells you what a lattice "looks like". The latter makes it easy to see relationships between lattices and other algebraic structures: for example, Boolean algebras are just a special kind of lattice. You can then even combine the two and start thinking about Boolean algebras in terms of orders.
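
For the curious, here are the two definitions in brief (standard textbook material). Order-theoretically, meet and join are greatest lower and least upper bounds; algebraically, they are commutative, associative operations satisfying the absorption laws; and the two views coincide via the last identity below:

    a \wedge b = \inf\{a, b\}, \qquad a \vee b = \sup\{a, b\}
    a \wedge (a \vee b) = a, \qquad a \vee (a \wedge b) = a
    a \le b \iff a \wedge b = a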

So to me, having multiple different but equivalent ways to do things is both useful and natural, especially if these different ways emerge naturally. I much prefer this approach to something more prescriptive and monolithic, where the language designer has tried to guess exactly what and how I will be doing and explicitly provided a way to do it.


Sure, there are corner cases such as lattices where having more than one way of expressing a problem is beneficial, but I think what the OP is getting at is things like:

if/else vs. the ternary operator in C. Blocks/Procs/Lambdas in Ruby. etc.

The concepts are so close that the language might as well pick one way of doing it and just do that. I don't need 18 kinds of for/for-in/while/do/do-while loops, each broken in its own way; I need one that works.

For myself, in the above cases I'd make if/else an expression and get rid of blocks and procs.

Most of what is in today's languages shouldn't be spec'd as part of the language, but rather provided by the standard library. What is the point of async/await in C# but not monads? It does not empower programmers, it subjugates them to the will of Anders: these tasks are blessed, those are not.


Even if you made if/else an expression, the ternary is still useful for brevity.

Compare:

  foo(a ? b : c)

Versus:

  foo(if a then b else c end)

Which do you think reads more clearly?


The slightly less terse:

    if (a)
        foo(b)
    else
        foo(c)


My only complaint with that version is that foo() is repeated now. It's not clear at a glance that you're definitely calling foo, and only deciding which argument to pass. (Well, in this simple case it's quite obvious, but...)


If that's really a concern:

    if (a):
        arg = b
    else:
        arg = c
    foo(arg)

In my experience, trying to optimize for brevity at the expense of clarity is almost always a bad idea.


They are trying to optimize for clarity. Too much verbosity reduces clarity, and it is not easy to find the right balance between concise and cryptic. Many factors also have to be taken into account: should the program be easy for beginners to read, or can we assume an experienced reader?


I like this - idiomatic, brief and clear:

    arg = a and b or c
    foo(arg)


Personally I don't really see what's unclear about foo(a ? b : c)

In fact, I'd say your "trick" of using boolean operators on non-booleans that way is less readable (and it's subtly broken: a and b or c yields c whenever b is falsy, even if a is true).


I don't think there is anything at all unclear about (a ? b : c). I use it all the time when programming in C and its derivatives.

The (a and b or c) idiom is used in Lua, Ruby and Python (in descending order of idiomaticness). Since the person I was responding to seemed to be using syntax reminiscent of these languages, I thought I'd offer up this expression form as an alternative to resorting to statements.


While I agree with you that there should be more than one way to solve a problem (although I wouldn't say Java's philosophy is against that), when it comes to actually writing code, I want one clear and understandable way in which something should be written. That is to say, I prefer it when a language has strong opinions about what the code should look like.

This is in stark contrast with, for example, Scala's philosophy. Scala is a language that tries to be everything for everybody, and its syntax reflects that: there are a lot of situations where you can write the same thing in two or more different ways. Can that reduce verbosity? Sure, but it also adds complexity. In the end you get code that can be a real hodgepodge of different styles, depending on how many developers work on it, and can look really messy. Conventions can help with that, but aren't conventions really only there to make up for a lack of proper syntax rules? This is why I prefer a language with minimal syntax, clear rules, and, preferably, only one way to write something.


Python's philosophy is often said to be:

"There should be one - and preferably only one - obvious way to do it."

Which is not exactly the same as "one way to do it". There can be many ways to do it, each obvious from a different perspective; just don't add too many, since that will cloud the scene and the programmer won't know which to use. It's all about having different developers arrive at the same simple solution when confronted with the same type of problem.

But I agree with everything you said. Sometimes having only one way becomes a burden, often precisely because it's not the most obvious way in the specific situation.


That's Perl's philosophy, isn't it -- there is always more than one way to do it. Of course, that's one of the reasons why Perl has notoriously poor readability.


A distant second or third behind the sigils and other non-alphanumeric characters.


Making unused variables and code formatting issues into errors was considered and rejected for Rust. The reason is that much of debugging consists of commenting out pieces of code and rebuilding. Often this results in variables becoming unused and formatting becoming messed up. I've worked with systems that threw errors here (FxCop), and it was terribly inconvenient in practice.

I think the optimum is just a loud set of warnings, a community expectation that all code be warning-free, and a pretty printer included with the language to get the ecosystem to standardize on a style.


tl;dr: OP likes Eiffel.

A perfectly reasonable post, but not a very interesting one. I could probably write something similar:

  - homoiconicity
  - tail-call elimination
  - s-expressions

Can you tell what my first language was?


Scheme?


Chicago's version of 6.001, straight from SICP.


Nope. Must be XML.


XML used to be a very easy way for my coworkers to get a rise out of me. Thankfully, I haven't had to think about it since I left Apple.


Clojure.


I'm a little older than that.

Also, the JVM precludes automatic TCE (hence Clojure's explicit recur).



Which is useful, sure. But when you're conditioned by the language environment to write in a tail-recursive style, it still pulls you out of the flow. That said, I've only written Clojure for amusement, never on anything large, so it could simply be a matter of getting used to a new convention.

I've also come around to strong, static type systems since my Scheme days, but that's a different kettle of fish entirely.


it's interesting how his entire class of expectations is different from mine. my focus is mostly on what facilities for abstraction and safety the language gives me; i hold that if you have powerful enough abstraction facilities to factor out boilerplate, your code can be made way more readable than in the "only one way to do it" class of languages, and if you have safety features built into the language (static typing, abstract datatypes, contracts, linear types), it will be forced to be more reliable.


He wrote: "As much static checking as possible: Static checking is good. Did you ever write a larger piece of code in one go, compiled it and it produced tons of errors? All these error would still be there, if you would not have static checking. You err much more than you think."

I think he errs much more than he really thinks. If errors are what he's truly trying to avoid, he'd be more focused on unit testing and TDD than on merely passing syntax checks.

Descriptions like his make me wonder how many static-typing proponents have actually spent any legitimate amount of time coding in a dynamic language for a production application. It seems scary to write applications without a big IDE, but I promise it's possible and a lot of people make it work.


Java developer: "How can you code without an IDE?"

non-Java developers: "How can you code in fucking Java?"


I write Java in Vim on occasion.

I think I'm broken.


I'm a bad person: this made me spit-take.


FTFY

Java developer: "How can you code without a fucking IDE?"

Non-Java developers: "How can you fucking code in Java?"


Type systems are a little more than just syntax. Static analysis can identify unreachable sections of code, for instance, which will often indicate a bug.
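
For instance, a minimal Go sketch (describe is a made-up function, and in Go it's the go vet tool rather than the compiler proper that flags this):

    package analysis

    import "fmt"

    // Both branches return, so the statements after the if/else can
    // never execute; go vet reports them as unreachable code.
    func describe(n int) string {
        if n > 0 {
            return "positive"
        } else {
            return "non-positive"
        }
        fmt.Println("never reached") // go vet: unreachable code
        return ""
    }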

Unit tests cannot prove correctness, only provide a finite set of ways in which the program /isn't/ wrong. (having said that, they're definitely useful)

And above all, he's talking about his ideal programming language. Test coverage is the domain of the programmer. There's very little you can put in a language spec to improve test quality.

For the record, I'm a static typing proponent, and I do most of my coding in vim; I only use an IDE when I /have/ to (I use C# at work). I think a lot of people assume static typing necessarily means very verbose languages like Java, which isn't at all true. My ideal solution is some combination of type inference and optional typing that I haven't quite arrived at yet. I like being able to whip up a quick program without worrying about manual type annotations, but it's nice to be explicit about expected types in an API for instance.
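
As a rough illustration of that mix (Go isn't the optional-typing hybrid I'm after, but it shows part of it; Discount is just a made-up example): locals are inferred, while the exported API signature stays explicit.

    package main

    import "fmt"

    // Discount spells out its parameter and result types, so callers can
    // see the contract without reading the body. (Made-up example.)
    func Discount(price, percent float64) float64 {
        return price * (1 - percent/100)
    }

    func main() {
        // Locals need no annotations; the compiler infers float64 and string.
        total := Discount(19.99, 15)
        label := "sale price"
        fmt.Println(label, total)
    }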


"There's very little you can put in a language spec to improve test quality"

You could require the presence of unit tests with 100% code coverage (compile both in memory, run the unit tests, and only write the object file if coverage hits 100%).

I'm pretty sure it would drive people crazy, though, and would have them write almost meaningless tests. 100% coverage isn't enough, either.


As I said, very little that can improve test quality. :)

Good to brainstorm, though. But in my experience, anything that breaks flow (like those languages that refuse to compile if there's unused code, even when you've just commented something out for debugging) is a bad idea. Often it's nicest just to write the code, then write the unit tests once you've got the core idea down, rather than having to alternate between them at high frequency.


OK, try #2:

Loosen the unit-test restriction to just require that the edge cases of function preconditions are tested. For example, a precondition 'x > 3' would require one test calling the function with x=4 and one with x=MAXINT (if x is an int), or one with x=nextFloat(3) and one with x=MAXFLOAT (if x is a float).
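
A sketch of what such a required test might look like, written in Go (process and its precondition are hypothetical):

    package edge

    import (
        "math"
        "testing"
    )

    // process is a hypothetical function with the precondition x > 3.
    func process(x int64) int64 {
        if x <= 3 {
            panic("precondition violated: x > 3")
        }
        return x - 3
    }

    // Exercise both edges of the legal range: the smallest value
    // satisfying x > 3 and the largest representable one.
    func TestProcessPreconditionEdges(t *testing.T) {
        for _, x := range []int64{4, math.MaxInt64} {
            if got := process(x); got != x-3 {
                t.Errorf("process(%d) = %d, want %d", x, got, x-3)
            }
        }
    }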

Ideally, the language would require tests for all edge conditions, but doing that is impossible in general. Instead, specify something similar to what Java does for 'variables must be provably initialized before use' to find easily detectable (in some sense) edge cases.

Also, allow the IDE to run and debug unit test code before it complies with the rules.

Keep the rule 'never write an object file that can be used outside of the IDE until the code passes tests'.

I think I could find such restrictions useful, if I were writing pacemaker software or something similarly critical.

If you find that too restricting, also allow debugging non-unit-test code from the debugger.


I like the idea of automatically generating a set of test cases, but requiring you to fill out the tests. Even better if I can run

    my-lang --gen-test-cases input.lang

and it outputs stub tests for the edge cases.


I think that static and dynamic type systems are classic incommensurable goods [1]. There's no way to pick between them based on some sort of value calculus; you just have to toss your hat in one ring or the other and live with the weaknesses as well as the strengths of your chosen approach.

[1] http://plato.stanford.edu/entries/value-incommensurable


I think people pick dynamic more often because of the reward schedule. Dynamic languages don't dish out as many errors up front; most of the time it'll "work" as written. Immediate positive feedback.

Meanwhile the static typing compiler is spitting grumpy errors about some nitpicktastic piece of fluff it spotted. Immediate negative feedback.

So given the weekend-new-language thing, which example leaves a better impression?

Basically -- generalising enormously -- dynamic languages reward in the short term and punish in the long term; static languages are the opposite.

But human cognition is dreadful at long term prediction or comparison. So static languages will always be underrepresented unless, I dunno, Haskell compilers start doling out XP for fixing errors in your code.


But the overall calculus, in terms of, say, "productivity" (which is the hand-waviest of hand-waveries) is going to zero out. Or at least, that's my contention, based on these facts (∅) and twenty years of opinion and anecdote.


The problem is that the hand-waving could conceal a lot of useful variables -- and different people will care about different variables.

So one variable might be "time to customer value", and dynamic languages have been steadily dominating this space. But another variable might be "lifetime cost of defects" and static languages might come out in front by simply precluding many classes of defect.

And just like that we're back to trading off the long term and short term.


I think this is off base with regard to warnings. Warnings are the sign of a language that has been widely used: there is something that is now known to be bad (i.e., error-prone) but previously was not known to be bad.

If you make it an error you break lots and lots of code. If you totally ignore it, you allow more preventable bugs to be introduced. Thus the warning is born.


I don't understand how he can want static checking, but no warnings. "x is declared but never used" has saved me so many times. Maybe he's used to languages where warnings cannot be locally disabled? In Common Lisp I can (declare (ignore x)) to "shut up the compiler", and it works really well.


He wants "x is declared but never used" to be an error that halts compilation. Go's philosophy on this[1] is "if it's worth complaining about, it's worth fixing in the code."

1: http://golang.org/doc/go_faq.html#unused_variables_and_impor...
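
For what it's worth, Go does have a local "shut up the compiler" of its own: assignment to the blank identifier. A tiny sketch (compute is a stand-in for whatever use of x you just commented out):

    package main

    func compute() int { return 42 }

    func main() {
        x := compute()
        _ = x // deliberately marks x as used while debugging
    }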


Programs should be easy to write and easy to read

Formatting should delineate structure wherever useful

Syntactical noise should be avoided

Should provide powerful tools for expressing ideas succinctly

Static checking should be available but not required

Should provide a rich set of built-in tools

Should support runtime program manipulation


I agree with most of the post, but this:

> Programs have to be easier to read than write

I think is impossible.


It was one of Ada's main mottos. Ada is very easy to read (to be honest, this is less true since Ada95), but when I don't use it for a long time, I forget how to write in Ada.



