
Absolutely this.

For whoever tells me verbosity isn't a limitation of a language: find me the single incorrect statement in a 100-line function vs. a 10-line function.

And no cop outs with "I use {bolt-on sub-language that makes parent language more concise}" (that's not a mainstream language then) or "Well, you can just ignore all the boilerplate" (bugs crop up anywhere programmers are involved).

Or, ad absurdum, give me an impossibly concise counterexample in APL. :P

Ultimately, language verbosity maps directly to abstraction choice: the language is trying to populate the full set of information it needs, and it can either make sane assumptions or ask you at every turn.




The fact that even the Pythonistas are now adopting types suggests that verbosity is much less of a concern than a bunch of spaghetti code that cannot be tested, understood, or refactored. You have to squint really, really hard to think that the people who chose type-less languages over Java ten years ago actually made the right choice.

Personally, when diving into a codebase, its "verbosity" has never been an actual issue. Nor has a lack of "expressive power." Of much greater concern is how well modularized it is, how well the modules are encapsulated, and how well the intentions of the original authors were captured. Here, verbosity, and types in particular, have been absolutely invaluable. I suspect in the end this is why serious development at scale (involving many programmers over decades) will never occur in "highly expressive" languages like Lisp and, to a lesser extent, Ruby etc. It is simply not feasible.


As I dive deeper and deeper into this thread, it looks like people are confusing "verbosity" with "it-has-a-type-system".

Java (5, 6) wasn't verbose just because of types. Java was verbose because the language, and everything surrounding it, was verbose. It was difficult to read Java at times because the language had been gunked up with AbstractFactorySingletonBeans. FizzBuzz Enterprise Edition is a joke that is only funny, and simultaneously dreadful, in the Java world. However, despite being relatively more complex, Rust is far less verbose than Java, even though Rust is more powerful with regard to types. "Hello World" in Rust is 3 lines with just 3 keywords. The Java version has 12 keywords.

Engineers ten years ago weren't choosing Ruby/Python over Java because of static typing. They didn't choose Java because it was a relative nightmare to read and write.


Lambdas saved the language. Java 6 was the kingdom of nouns. You couldn't pass statements or expressions, so instead you had to create classes that happen to contain what you really wanted. Async code was nearly unreadable because the code that matters is spread out and buried.


This was said in other threads under the article, but we've definitely made huge strides in more efficient typing.

The general narrative of "early Java typing hurt development productivity" to "high throughput developers (e.g. web) jumped ship to untyped languages" to "ballooning untyped legacy codebases necessitated typing" to "we're trying to do typing more intelligently this go around" seems to track with my experience.

Generics, lambdas, duck typing, and border typing through contracts / APIs / interfaces being available in all popular languages drastically changes the typing experience.
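For a concrete (hypothetical) illustration of border typing via an interface contract, something like typing.Protocol in Python does the job: the implementation is plain duck typing, but the boundary is still checkable. Names like Sink and publish are made up for the example.

    from typing import Protocol

    class Sink(Protocol):
        # The contract at the border: anything with write(str) qualifies.
        def write(self, data: str) -> None: ...

    class ConsoleSink:
        # No inheritance needed; ordinary duck typing satisfies the contract.
        def write(self, data: str) -> None:
            print(data, end="")

    def publish(report: str, sink: Sink) -> None:
        # Types appear only at the module boundary; the internals stay terse.
        sink.write(report + "\n")

    publish("build ok", ConsoleSink())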

As came up in another comment, I believe the greatest pusher of untyped languages was the crusty ex-government project engineer teaching a UML-modeling course at University.

To which students, rightly, asked "Why?" And were told, wrongly, "Because this is the Way Things Are Done." (And to which the brightest replied, "Oh? Challenge accepted.")


I really think what saved Java is the excellent tooling. These nice modern IDEs almost program for you.


10 years ago I was writing Java and still am today, alongside other languages.

I will never choose Python/Ruby for anything other than portable shell scripts.


15 years ago I wrote Python code for a living. Then came about 9 years of Java. The last four years have been exclusively Python. I'm never going back to Java; it has nothing I want.


What's your job while using Python?


Each to his own I guess.


(There’s only one keyword in Rust’s hello world, “fn”.)


And I think 3 in Java's? public, class and static.


Does "void" count?


That seems to be a keyword, yes. So 4.


Python types are optional and have adequate inference. Anywhere you think it's too verbose to use types, you don't have to use them. In Java, you must use types even if you believe they're just boilerplate. That's an essential difference.
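A tiny sketch of what that optionality looks like in practice (the function and values are made up):

    from typing import Optional

    def parse_port(raw: str) -> Optional[int]:
        # Annotated at the boundary, where the hint actually earns its keep.
        return int(raw) if raw.isdigit() else None

    # Untyped call site: a checker still infers that port is Optional[int].
    port = parse_port("8080")
    print(port)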


I keep a mental list of the qualities of good code; "short" and "readable" are on the list. I've sometimes wondered whether "short" or "readable" should be placed higher on the list, and I eventually decided that short code is better than readable code because the shortness of code is objectively measurable. We can argue all day over whether `[x+1 for x in xs]` is more or less readable than a for-loop variant, but it is objectively shorter.
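For concreteness, the two variants under discussion look roughly like this (purely illustrative):

    xs = [1, 2, 3]

    # Comprehension: one line, objectively shorter.
    ys = [x + 1 for x in xs]

    # For-loop variant: arguably easier to step through, but longer.
    ys = []
    for x in xs:
        ys.append(x + 1)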

Of course, it's like food and water: you want both all the time, but in a hard situation you prioritize water. Likewise, in hard times, when I'm not quite sure what is most readable, I will choose what is shortest.


> I eventually decided that short code is better than readable code because the shortness of code is objectively measurable

I can debug sophisticated algorithmic code that is readable and explicit far more easily than code that is short and concise. Anyone who tells you otherwise has never had to debug the legacy optimization algorithms of yesteryear (nor have they seen the ample nested parens that come from a bullshit philosophy of keeping the code as short as possible).


All arguments about computer languages will always end up in disagreement, since every person in that argument does programming in an entirely different context.

Short is good when the average half-life of your code is less than a month.

When you're writing something for 10 years and beyond - it makes sense to have something incredibly sophisticated and explicit.

Otherwise it doesn't, since the amount of time it takes me to comprehend all of the assumptions you made in all of those nested for loops is probably longer than the lifetime of the code in production.

List comprehensions have a nice, locally-defined property in Python: they will always terminate.


Only if you iterate over a fixed-length iterable.

    import itertools

    [x for x in itertools.count()]
will never terminate.


It will terminate as soon as it runs out of memory.


By this definition, Python has a nice locally-defined property that it will always terminate ;)


No; this will never-ever terminate:

    (x for x in itertools.count())


Obligatory Dijkstra: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."


That's actually Brian Kernighan. Dijkstra would have never advocated debugging to begin with.


So you must be a fan of obfuscated C contests?

The main reason list comps in Python were given so much praise is that they are (were?) more efficient than loops. I personally find a series of generator expressions followed by a list comp more readable than a three-level list comprehension, even though the latter is shorter.
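Roughly what I mean, with made-up data just to show the shape of it:

    matrix = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

    # Three-level list comprehension: shorter, but dense.
    flat = [x for plane in matrix for row in plane for x in row]

    # Generator expression feeding a final list comp: one idea per line.
    rows = (row for plane in matrix for row in plane)
    flat = [x for row in rows for x in row]

    print(flat)  # [1, 2, 3, 4, 5, 6, 7, 8]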


If you reliably generate the boilerplate and throw it away, you can ignore it (and you've changed which language you're really using). If it's at all possible for a human to permanently edit the boilerplate, well now it can be wrong, so you have to start reviewing it.


A valid point. I didn't mention it above to stay concise, but the question then becomes:

If you can reliably generate boilerplate, AND it's non-editable, then why is it required by the language in the first place?

If it is editable, then it collapses back down into review burden.

I think this is where "sane, invisible, overridable defaults" shines. Boilerplate should be invisible in its default behavior. BUT the language should afford methods to change its default behavior when necessary.
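A trivial Python sketch of that principle (connect() and its parameters are invented for the example): the defaults are invisible at the call site, but every one of them can be overridden.

    def connect(host, port=5432, timeout=30.0, retries=3):
        # Sane defaults keep the common case short.
        return f"{host}:{port} (timeout={timeout}s, retries={retries})"

    # Default behavior: no boilerplate visible at the call site.
    print(connect("db.internal"))

    # Overridable when the defaults stop being sane for your case.
    print(connect("db.internal", port=6432, timeout=5.0))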


> no cop outs with "I use {bolt-on sub-language that makes parent language more concise}" (that's not a mainstream language then)

Why is "I use {bolt-on} that makes {parent language} more concise" a cop out? The bolt-on could be a macro language or an IDE that does collapsing of common patterns. If it makes it easier to find a bug in a 100-line function in the parent language, or to not generate those bugs in the first place, then the {bolt-on} isn't a cop out.


Because I believe language stability is proportional to the number of users.

Would I use a new transpiler for a toy personal project? Absolutely! Would I use it for an enterprise codebase that's going to live 10-15 years? No way!

If you accept that every mapping is less than perfect (e.g. source -> assembly, vm -> underlying hardware, transpiler source -> target), then it follows that each additional mapping increases impedance.

And impedance bugs are always of the "tear apart and understand the entire plumbing, then fix the bug / add the necessary feature" variety.

When I'm on a deadline, I'm not going near that sort of risk.


I see "transpilers" as being on a continuum ranging from IDE collapse comments and collapse blocks at one end, to full code generation syntax macros at the other end. There's a sweet spot in the middle where the productivity gains from terser code outweigh the impedance risk.



