
Java's verbosity made us all hate type systems in the early 2000s, so many of us migrated to dynamic languages such as Python and Ruby in the mid 2000s that allowed us to work fast and loose and get things done.

After about 10 years of coding in a fit of passion, we ended up with huge monolithic projects written in dynamic languages that were extremely brittle.

Fortunately, languages with type inference (Rust, Golang, OCaml, Scala, etc.) started becoming the answer to our problems. (We also collectively decided that microservices were another solution to our woes, though that cargo cult is now being questioned.)

So we have a decade of code and packages written in Python and JavaScript that work well for our use cases (Data Science, MVP web apps/services, Database integration, etc) that is hard to give up. Often because alternatives aren't available yet in the statically typed languages (Hurry up Rust!).

There is often a lot of friction to get new languages introduced. I love Rust, but I don't think I can introduce it into our Golang/NodeJS/Javascript environment anytime soon.

> Java's verbosity made us all hate type systems in the early 2000s, so many of us migrated to dynamic languages such as Python and Ruby in the mid 2000s that allowed us to work fast and loose and get things done.

You may be overgeneralizing, depending on whom you mean by the term "we".

More often than not, the code I've written has to be very well trusted when deployed. For me, "getting things done" means getting to effective and trustworthy code ASAP. Static type systems have been invaluable for that work.

I interpreted the comment as "Java's verbosity made us think all static type systems were also verbose". Which I know is what a lot of people still think ("but the lack of REPL!", "but the boilerplate!", "but the elegant code", disregarding that other statically typed languages have all these features and more).

Even Java has some of those now, to some point.

Especially so in Scala and Clojure.

I don't really want to get into a flamewar about static vs dynamic. I'm a polyglot, I use several languages with multiple flavors of type system, I think that most options have at least a couple things to recommend them.

However, the grandparent has a point: Java's type system makes static typing much more painful than it needs to be. I didn't start working in Java until fairly recently, and it was only then that I started to understand how so many fans of dynamic languages could say that static typing mostly just gets in the way. But if the only static language you've spent much time with is Java... now I get it. Java's type system mostly just gets in the way.

How does Java's type system get in the way?

It's statically typed, but with basically no type inference (the diamond operator is something, I guess, but not much). So you end up putting a lot of time into manually managing types. That creates friction when writing code, since you need to remember the name of the return type of every method you call in order to keep the compiler happy. Worse, it creates friction when refactoring code, since any change that involves splitting or merging two types, or tweaking a method's return type, ends up forcing a multitude of edits to other files in order to propitiate the compiler. I've seen 100-file pull requests where only one of those file changes was actually interesting.
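A minimal sketch of the friction described above (the class and variable names here are hypothetical, chosen just for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Verbosity {
    public static void main(String[] args) {
        // Before Java 7, even the generic parameters had to be repeated:
        Map<String, List<Integer>> scoresOld = new HashMap<String, List<Integer>>();
        // The Java 7 "diamond operator" only removes the right-hand repetition:
        Map<String, List<Integer>> scores = new HashMap<>();
        scores.put("alice", new ArrayList<>(List.of(1, 2, 3)));
        // The declared type must still be spelled out in full on the left,
        // so renaming or re-parameterizing a type touches every declaration site.
        System.out.println(scores.get("alice").size());
    }
}
```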

Then, to add insult to injury, its static type system is very weak. A combination of type erasure, a failure to unify arrays with generic types, and poor type checking in the reflection system means that Java's static typing doesn't give you particularly much help with writing well-factored code compared to most other modern high-level languages.

Speaking of generics, the compiler's handling of generics often leaves me feeling like I'm talking to the police station receptionist from Twin Peaks. Every time I have to explicitly pass "Foo.class" as an explicit argument when the compiler should already have all the information it needs to know what type of result I expect, I cry a little inside.

Long story short, if I could name a type system to be the poster child for useless bureaucratic work for the sake of work, it would be Java's.
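The `Foo.class` complaint can be sketched concretely (a minimal, hypothetical example): because of erasure, a generic method cannot discover its type argument at runtime, so callers must pass a `Class<T>` token the compiler arguably already knows.

```java
import java.util.List;

public class TypeToken {
    // Erasure means T is gone at runtime, so the caller has to
    // smuggle the type in as an explicit Class<T> argument.
    static <T> T firstOfType(List<?> items, Class<T> type) {
        for (Object item : items) {
            if (type.isInstance(item)) {
                return type.cast(item);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Object> mixed = List.of("a", 42, "b");
        // The explicit Integer.class argument is the annoyance described above.
        Integer n = firstOfType(mixed, Integer.class);
        System.out.println(n);
    }
}
```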

Some fair points... some comments and one question:

1. Java 10 has type inference, so that should improve your first point going forward to some degree. That said, I would also say type system syntax != type system.

2. Compared to what other modern high-level languages? Also, slightly moving the goalposts, but which other modern high-level languages have some market adoption?

3. Agree with passing `Foo.class` or type tokens around. Very annoying.
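On the first point, Java 10's local-variable type inference looks like this (a minimal sketch; the variable names are made up):

```java
import java.util.List;
import java.util.Map;

public class VarDemo {
    public static void main(String[] args) {
        // Java 10: the compiler infers Map<String, List<Integer>>
        // from the initializer, so the left side stays short.
        var scores = Map.of("alice", List.of(1, 2, 3));
        // var is limited to local variables with initializers; fields
        // and method signatures still need explicit types.
        var total = 0;
        for (var n : scores.get("alice")) {
            total += n;
        }
        System.out.println(total);
    }
}
```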

It's not powerful enough at the same time as being overly verbose.

After using python for a bit I came back to Java and tried to do all sorts of things that just were not easy without a lot of faff.

What would you consider to be a good static typing system?

C#'s, if you're looking for a Java-style language done right.

Objective-C's is also interesting, for one that makes some different design tradeoffs. Being optionally static with duck typing, for one.

Any ML variant, if you want to see how expressive a language can get when static typing is treated as a way for the compiler to work for you, not you working for the compiler.

ReasonML has it just right imho

I think that "Java bad" isn't all of the story of the move away from static type checking. Type checking in programming probably originated, and certainly featured prominently, in selection of representation of values. I think it's a valid insight of the dynamic camp that for most purposes we don't care how things are represented so long as we know how to work with them. What's often missed is that types can be a powerful tool for talking about other things, too.

Statically typed languages are harder to test. So if you do cover 100%, dynamic is not so bad. However, well-built static languages reduce the things that need to be tested in the first place, like non-nullability in Kotlin and Swift.

I have to ask: what makes you think that statically typed languages are harder to test? My experience is precisely the opposite. Large testing codebases can benefit hugely from the increased refactorability. In addition, the types help to explicitly define the contracts you need to test.

I think what dep_b refers to is that, in dynamic languages, you usually have an easier time injecting mocks and doubles. In a statically typed language, it's usually much harder to inject mocks for IO, network, clock, etc., unless the original code has already been written to afford that (e.g. that whole Dependency Injection mess in Java).

That's exactly what I meant, thanks

> Java's verbosity made us all hate type systems in the early 2000s, so many of us migrated to dynamic languages such as Python and Ruby in the mid 2000s that allowed us to work fast and loose and get things done.

This was actually a replay of what happened with Smalltalk versus C++ in the 80's and 90's, which was a part of the history of how Java came about. And even that was a replay of what happened with COBOL and very early "fourth generation" languages (like MANTIS) from a decade before that!

I don't know this history. Would you mind expanding on it, or giving a link where I could learn more?

C++ is an utterly complex language. Java appeared as an option to simplify programming compared with all the bureaucracy of C++: no need to manage every bit of memory (GC), no multiple inheritance, no templates, no multi-platform hell, a big library included, etc.

Smalltalk was not available to most programmers back then; it needed an expensive machine with a lot of memory, and the implementations were very expensive. Apps were also much smaller, so the disadvantages of C were less pressing.

> Smalltalk was not available to most programmers back then, it needed an expensive machine with a lot of memory

I was programming back then. It ran just fine on fairly standard commodity hardware from the time the 486 stopped being "high end." Also, at one point the entire American Airlines reservations system was written in Smalltalk and running on 3 early-90's Power Mac workstations.

> the implementations were very expensive.

More or less true. At one point there were $5k and $10k per-seat licenses.

> Apps were also much smaller, so the disadvantages of C were less pressing.

There was a major DoD project that let defense analysts do down-to-the-soldier simulations of entire strategic theaters. (So imagine this as an ultra-detailed, ultra-realistic RTS.) They did this as a competition with 3 teams: one working in C++, one in another language I can't recall, and one in Smalltalk. The Smalltalk group was so far ahead of the other two, there was simply no question. That was a complex app. There were countless complex financial and logistics apps.

So, small apps? Not so much.

It was already clear by the mid 90's.

Bare bones C vs something like Turbo Pascal 6.0 with Turbo Vision framework on MS-DOS 5.0.

Verbosity is usually the worst argument against a language. Your coding efficiency is not limited by how fast you can type. I've been using Kotlin recently which is basically just Terse Java and it's very nice but hasn't turned my world upside down.

Verbosity absolutely hurts comprehension of code. It's easy to hide hugs in code that seems to be just boilerplate. It also means that given fixed screen real estate, you can see less of the actual logic of the code at a time.

Absolutely this.

For whoever tells me verbosity isn't a limitation of a language: find me the single incorrect statement in a 100-line function vs a 10-line function.

And no cop outs with "I use {bolt-on sub-language that makes parent language more concise}" (that's not a mainstream language then) or "Well, you can just ignore all the boilerplate" (bugs crop up anywhere programmers are involved).

Or give me an ad absurdum concise counterexample with APL. :P

Ultimately, language verbosity maps directly to abstraction choice: the language is attempting to populate the full set of information it needs, and it can either make sane assumptions or ask you at every turn.

The fact that even the pythonistas are now adopting types suggests that verbosity is much less of a concern than a bunch of spaghetti code that cannot be tested, understood, or refactored. You have to squint really, really hard to think that the people who chose type-less languages over Java ten years ago actually made the right choice. Personally, when diving into a codebase, its "verbosity" has never been an actual issue. Nor has lack of "expressive power." Of much greater concern is how well modularized it is, how well the modules are encapsulated, and how well the intentions of the original authors were captured. Here verbosity, and types in particular, have been absolutely invaluable. I suspect in the end this is why serious development at scale (involving many programmers over decades) will never occur in "highly expressive" languages like Lisp and, to a lesser extent, Ruby etc. It is simply not feasible.

As I dive deeper and deeper into this thread, it looks like people are confusing "verbosity" with "it-has-a-type-system".

Java (5, 6) wasn't verbose just because of types. Java was verbose because the language, and everything surrounding it, was verbose. It was difficult to read Java at times because the language had been gunked up with AbstractFactorySingletonBeans. FizzBuzz Enterprise Edition is a joke that is only funny, and simultaneously dreadful, in the Java world. However, despite being relatively more complex, Rust is far less verbose than Java, even though Rust is more powerful with regards to types. "Hello World" in Rust is 3 lines with just 3 keywords. The Java version has 12 keywords.

Engineers ten years ago weren't choosing Ruby/Python over Java because of static typing. They didn't choose Java because it was a relative nightmare to read and write.

Lambdas saved the language. Java 6 was the kingdom of nouns. You couldn't pass statements or expressions, so instead you had to create classes that happen to contain what you really wanted. Async code was nearly unreadable because the code that matters is spread out and buried.
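The "kingdom of nouns" point can be sketched side by side (a minimal example; `Runnable` is just the simplest case):

```java
public class Nouns {
    public static void main(String[] args) {
        // Java 6 style: to pass a block of code, you had to wrap it
        // in a class that happens to contain what you really wanted.
        Runnable before = new Runnable() {
            @Override
            public void run() {
                System.out.println("kingdom of nouns");
            }
        };
        // Java 8 style: the statement itself is the argument.
        Runnable after = () -> System.out.println("kingdom of nouns");
        before.run();
        after.run();
    }
}
```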

This was said in other threads under the article, but we've definitely made huge strides in more efficient typing.

The general narrative of "early Java typing hurt development productivity" to "high throughput developers (e.g. web) jumped ship to untyped languages" to "ballooning untyped legacy codebases necessitated typing" to "we're trying to do typing more intelligently this go around" seems to track with my experience.

Generics, lambdas, duck typing, and border typing through contracts / APIs / interfaces being available in all popular languages drastically changes the typing experience.

As came up in another comment, I believe the greatest pusher of untyped languages was the crusty ex-government project engineer teaching a UML-modeling course at University.

To which students, rightly, asked "Why?" And were told, wrongly, "Because this is the Way Things Are Done." (And to which the brightest replied, "Oh? Challenge accepted.")

I really think what saved Java is the really good tooling. These nice modern IDEs almost program for you.

10 years ago I was writing Java and still am today, alongside other languages.

I will never choose Python/Ruby for anything other than portable shell scripts.

15 years ago I wrote Python code for a living. Then about 9 years of Java. The last four years was exclusively Python. I'm never going back to Java, it has nothing I want.

What's your job while using Python?

Each to his own I guess.

(There’s only one keyword in Rust’s hello world, “fn”.)

And I think 3 in Java's? public, class and static.

Does "void" count?

That seems to be a keyword, yes. So 4.

Python types are optional, and have adequate inference. Anywhere you think it's too verbose to use types, you don't have to. In Java, you must use types even if you believe it is just boilerplate. That's an essential difference.

I keep a mental list of the qualities of good code, "short" and "readable" are on the list. I've sometimes wondered whether "short" or "readable" should be placed higher on the list, and I eventually decided that short code is better than readable code because the shortness of code is objectively measurable. We can argue all day over whether `[x+1 for x in xs]` is more or less readable than a for-loop variant, but it is objectively shorter.

Of course, it's like food and water, you want both all the time, but in a hard situation you prioritize water. Likewise, in hard times, where I'm not quite sure what is most readable, I will choose what is shortest.

> I eventually decided that short code is better than readable code because the shortness of code is objectively measurable

I can debug sophisticated algorithms code that is readable and explicit far more easily than short and concise. Anyone that tells you otherwise has never had to debug the legacy optimization algorithms of yesteryear (nor have they seen the ample amount of nested parens that comes from a bullshit philosophy of keeping the code as short as possible).

All arguments about computer languages will always end up in disagreement, since every person in that argument does programming in an entirely different context.

Short is good when the average half-life of your code is less than a month.

When you're writing something for 10 years and beyond - it makes sense to have something incredibly sophisticated and explicit.

Otherwise it doesn't, since the amount of time it takes me to comprehend all of the assumptions you made in all of those nested for loops is probably longer than the lifetime of the code in production.

List comprehension has a nice, locally-defined property in python: it will always terminate.

Only if you iterate over a fixed-length iterable.

    [x for x in itertools.count()]
will never terminate.

It will terminate as soon as it runs out of memory.

By this definition, python has a nice locally defined property that it will always terminate ;)

No; this will never-ever terminate:

    (x for x in itertools.count())

Obligatory Dijkstra: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

So you must be a fan of obfuscated C contests?

The main reason list comprehensions in Python were given so much praise is that they are (were?) more efficient than loops. I personally find a series of generator expressions followed by a list comprehension more readable than a three-level list comprehension, even though the latter is more concise.

If you reliably generate the boilerplate and throw it away, you can ignore it (and you've changed which language you're really using). If it's at all possible for a human to permanently edit the boilerplate, well now it can be wrong, so you have to start reviewing it.

A valid point. I didn't mention it above to stay concise, but the question then becomes:

If you can reliably generate boilerplate, AND it's non-editable, then why is it required by the language in the first place?

If it is editable, then it collapses back down into review burden.

I think this is where "sane, invisible, overridable defaults" shines. Boilerplate should be invisible in its default behavior. BUT the language should afford methods to change its default behavior when necessary.

> no cop outs with "I use {bolt-on sub-language that makes parent language more concise}" (that's not a mainstream language then)

Why is "I use {bolt-on} that makes {parent language} more concise" a cop out? The bolt-on could be a macro language or an IDE that does collapsing of common patterns. If it makes it easier to find a bug in a 100-line function in the parent language, or to not generate those bugs in the first place, then the {bolt-on} isn't a cop out.

Because I believe language stability is proportional to number of users.

Would I use a new transpiler for a toy personal project? Absolutely! Would I use it for an enterprise codebase that's going to live 10-15 years? No way!

If you accept that every mapping is less than perfect (e.g. source -> assembly, vm -> underlying hardware, transpiler source -> target), then it follows that each additional mapping increases impedance.

And impedance bugs are always of the "tear apart and understand the entire plumbing, then fix bug / add necessary feature."

When I'm on a deadline, I'm not going near that sort of risk.

I see "transpilers" as being on a continuum ranging from IDE collapse comments and collapse blocks at one end, to full code generation syntax macros at the other end. There's a sweet spot in the middle where the productivity gains from terser code outweigh the impedance risk.

With a proper type system you can often trade away the verbosity through type inference. Still, I'd argue that even if you couldn't, the extra 'verbosity' you take on from writing types in a language with a strong type system (Haskell, Rust, Scala, Ocaml, etc) is actually paid back in spades for readability. Particularly because you can describe your problem space through the types, and enforce invariants.

It's really just the 'static' type systems that only prevent the most pedantic of type errors where the argument holds any merit.

It's all just trade-offs. "Verbosity" is too abstract to argue by itself because it comes in so many different flavors and spectrums.

For example, the Elm lang eschews abstractions for boilerplate, deciding that the boilerplate is worth the simplicity. And I've come to appreciate its trade-offs. For example, coming back to Elm after a long break, I've not had to spend much time recredentializing in it like I had to an old Purescript project.

On the other end of the spectrum, there's something like Rails that avoids boilerplate at the high price of having to keep more complexity in your head.

I personally love introducing lots of hugs in my code.

Write hugs, not bugs!

Not if the verbosity is providing extra information. Just taking away types isn't concision, it's hiding complexity. Tracing Java in IntelliJ is always trivially easy. Tracing JavaScript is Kafkaesque. Python is in between.

The problem of language verbosity is not about writing code, it's about reading code.

Writing code is rarely problematic. Usually when you sit down to write code, you have a clear idea of what you want to do. Or if you don't, you have the advantage of being intimately aware of what you're writing.

Once your project becomes large enough that you can't hold it all in your head at once, reading code becomes supremely important.

Any time you write code that interacts with other parts of your project in any way, you will need to understand those other parts in order to ensure you do not introduce errors. That very frequently means being able to read code and understand it correctly.

There's a saying that issues in complex systems always happen at the interfaces.

I see language verbosity as an advantage when reading foreign code for maintenance.

Much better than deciphering some hieroglyphs.

That's a strawman. Concise code != hieroglyphs.

In fact, if you need hieroglyphs to keep your code concise that's a deficiency of the language. This is why we want expressive languages in the first place.

The size of your code is the number one predictor of bugs. The more code you have, the more bugs you probably have. Smaller code bases have fewer bugs. Verbosity means more code.

This is why very terse dynamic languages like Clojure often have relatively low bug counts despite a lack of static checks.

Some interesting reading on this topic:


From the very article you referenced: "One should take care not to overestimate the impact of language on defects. While the observed relationships are statistically significant, the effects are quite small. Analysis of deviance reveals that language accounts for less than 1% of the total explained deviance." That's a tiny effect size.

Moreover, Clojure is fairly compact, but isn't really a "terse" language. Consider APL and J as examples of truly terse languages. Programs written in them are generally horrible to read and maintain (at least for an average programmer who didn't write the original code!). So there might be some relationship between verbosity and quality, but the relationship is far more complex than "verbosity -> more code -> more bugs." Otherwise we'd all be building our mission-critical software in APL.

Plus, there are numerous well-known cases of bugs caused because a language provides a terse syntax, where redundant syntax would have prevented the problem. E.g., unintentional assignment in conditional expressions ("=" vs. "=="), and the "if (condition) ; ..." error in C-like languages. I've personally made similar errors in Lisp languages which are about as terse as Clojure, e.g., writing "(if some-cond (do-this) (and-that))" instead of "(if some-cond (progn (do-this) (and-that)))".

Redundancy in a programming language is often a safety rail: it shouldn't be casually discarded in the name of brevity for its own sake.

Personal experience tells me that size is probably a proxy for "this code is mathematically written". If you have even a vague idea of the math of the code you're writing, the code tends to be both shorter and have fewer bugs. But, I'd be wary of turning that around to a blanket endorsement of terseness. Terse code often needs to be restructured when requirements change. Restructuring takes more time and also risks adding new bugs. Then there are problems with readability during debugging and understanding interfaces.

> Terse code often needs to be restructured when requirements change.

It depends if the code is terse because of the programmer’s approach, or just naturally terse because of the language.

Well, there are bugs and there are bugs. With some, it is easy to find the offending bit of source code and spot the bug right away. Others may take days to localize and fix.

Bug count is not the whole story.

> Your coding efficiency is not limited by how fast you can type

Verbosity hurts reading, not typing. Think of reading an essay that takes hundreds of pages to make an argument that could have been written in a single paragraph.

That's simply not true, unless you're talking assembly-level of detail.

High-level language constructs can hide details in ways that make them harder to read, not easier to read. Ask anyone that has had to read a very complicated SQL statement about how long they had to look at various bits of the statement in order to understand exactly what was going on, and you'll get an idea of what I'm talking about (obviously, that person could be you, also).

In contrast, anyone can very easily understand for or while loops in any language without thinking about them at all. You can read them like you read a book.

It's simply a matter of fact that, unless the hidden details are very simplistic abstract concepts with no edge cases, terseness hinders readability.

As for things like identifiers, all I can say is that developers that use super-short, non-descriptive identifiers because they think it helps readability are doing themselves, and anyone that reads their code, a grave disservice. They either a) don't understand how compilers work, or b) are beholden to some crazy technical limitation in the language that they're using, ala JS with shorter identifiers in an effort to keep the code size down before minifiers came on the scene.

>It's simply a matter of fact that, unless the hidden details are very simplistic, abstract concepts with no edge cases, terseness hinders readability.

No. Using the correct abstractions helps readability.

I'll agree with you that a complicated SQL statement may not be a good thing to use, but it also probably isn't the right abstraction.

Compare, on the other hand, using numpy/matlab/Julia vs. something like ND4J.

It's the difference between `y = a * x + b` in the former, and `y = a.mmul(x).add(b)`. Granted, the ND4J version isn't terrible, but I used an older similar library in Java that only provided static methods, so it was `y = sum(mmul(a, x), b)`, which is fine when you're only working with two operations, but gets really ugly really fast when you want to do something remotely more complicated.

And I'll even note that all three of these are already highly abstracted. If you want to drop all semblance of abstraction, keep in mind that that `y = a * x + b` works equally well if `a` is matrix and x a vector, or `a` a vector and `x` a scalar, and separately it doesn't matter if `b` is a vector or a scalar. They'll get broadcast for you.

Overly terse code does indeed hinder readability. But so does overly verbose code. It's much more difficult to understand what is happening in

    outputValue = sumVectors(
            mmul(inputValue, weightsMatrix),
            biasValue, size(inputValue))
than it is in `out = weights * in + bias`, even though the second is significantly more terse.

I don't know, your example seems perfectly understandable and easy to read to me. The naming is pretty good and descriptive, so anyone can understand what's going on pretty quickly.

That doesn't mean that a DSL or higher level language feature might not be better (the operations are pretty clear and not prone to edge cases, as I said before), but as far as "big problems" go, I find that example to be a pretty minor one.

My example was small but illustrative. If instead of implementing a single linear transform, you're implementing a whole neural network, or a complex statistical model or something, it will be much easier to grok the 10 line implementation than the 150 line one.

That means less surface area for typos, bugs, etc. This compounds if you ever want to go back and modify existing code.

> That's simply not true, unless you're talking assembly-level of detail.

Modern language design (of the past few decades) seems to disagree with you, with a couple of exceptions. This debate involves a degree of subjectivity, of course, but the consensus has been that reducing verbosity and boilerplate improves readability rather than hindering it. Even Java -- late in the game -- is adopting features to reduce its verbosity and improve its expressivity.

High-level language constructs and idioms only "hide" unnecessary detail, i.e. the detail where the only degree of control you have is the possibility of introducing bugs. You learn these idioms just like you learned what an if-else or a for loop was.

> Your coding efficiency is not limited by how fast you can type.

It absolutely is, at least for me. Granted, the majority of time spent during programming is on thinking rather than typing, but any time spent typing detracts from time that could have been spent thinking instead. Whenever I type a line too long, I tend to lose focus on what I am thinking and get bogged down by language details. Besides, typing the same useless thing again and again (like repeating a long type name) frustrates people, and frustrated people have a harder time concentrating.

I have a hard time believing 80% of time is spent writing code vs thinking of the right solution.

That ratio alone is enough to not worry about efficiency typing imo.

You either did not read my comment or did not understand it, because I said the exact opposite.

As a matter of preference, the more verbose a language is, the less likely I'm motivated to learn it. Why should I have to type extra stuff to do the same things I can do in other languages? If the compiler can handle it, I shouldn't have to type it.

Java never stuck with me because of that, same with trying to learn Objective-C. But languages like Swift, Go, Ruby, and Python hit the sweet spot.

I wouldn't agree with the notion that Kotlin is just terse Java - it has a lot of things that don't exist in the Java world.

If you're just using it as a more terse version of Java, then I can understand why you're not seeing much of a change in your experience.

I can see how you might say that but... what about...

    std::vector<std::pair<std::size_t, std::complex<float>>> data = SomeMethod();

    auto data = SomeMethod();

Adding types to local variables is quite useless and should be considered redundant, non-value-adding verbosity. The main driver for typing is at interface boundaries, so you know what types of input another function expects.

No, it's not useless. Often I want to know the type of an intermediate binding without looking up all the functions/exprs that transformed an input into it. Often an editor plugin helps with this, but otherwise it's a real pain to understand some parts (this is a problem with Rust, where the LSP server doesn't help with this, but for OCaml, Merlin works beautifully).

I don’t know if I agree with this. If you want to communicate an interface for maintainability, you’ll declare your type so you know what you’re dealing with in the future.

I'd call that first example nothing but noise. It's probably a bit better with a type alias, but knowing that SomeMethod returns an array of uint/complex pairs doesn't really tell me anything.

If anything, this just shows me that bare type information alone isn't useful without accompanying documentation. For example, if `SomeMethod` were renamed `CalculateAllEnemyLocations`, it might make sense, but then I get most of the relevant information from the method name.

In other words, you have a bad api, and type information sort of, but not really, makes up for that. But that just means that you're ignoring the real problem.

It doesn't matter if the more verbose version is better; we'll still use the short version because we are lazy. When you're programming and have figured out what to do, it's basically just typing in all the instructions. So typing (and reading) friendliness does matter! If it didn't matter, we would still program in machine code. Also there's abstraction to consider, where 3 lines of JavaScript would need 300 lines of machine code or Java :P

The problem with verbosity is not how fast you can write but how easily you can read the language. Verbosity with no information gain is distracting.

> I love Rust, but I don't think I can introduce it into our Golang/NodeJS/Javascript environment anytime soon.

Rings especially true for my shop as well. I had to introduce Rust or Go, and went with Go. Seeing the mild pushback I get from Go's type system makes me especially glad I didn't choose Rust.

... though, in some cases, Rust+Generics would be easier than Go.

> Rings especially true for my shop as well. I had to introduce Rust or Go, and went with Go. Seeing the mild pushback I get from Go's type system makes me especially glad I didn't choose Rust.

Another possibility is that Go's static types feel like a lot of ceremony for too little benefit. That was one of my biggest issues with Java, and though Go is lighter it also provides fewer benefits… By contrast, Rust is in the camp of requiring more ceremony but providing a lot of benefit from it (sometimes quipped as "if it compiles, it works").

Why do you think Go's type system has "less benefit" than Java's type system?

That's my personal, anecdotal feeling as well. Go feels a bit more like I'm yelling at the compiler "you know what I mean, why can't you just do it?!" whereas with Rust it's more "ah, I see, you have a point".

Java has generics for one.

Failed generics with type erasure, sure, but definitely better than Go, which has nada.

Type erasure on its own isn't necessarily a bad thing; Haskell also implements type erasure.

I guess this was your point, but the problem is how Java does type erasure. With Haskell, type erasure is an implementation detail, but with Java it leaks into the compile-time type checker. For instance, you can't have both these overloads:

    public void foo(List<String> list)
    public void foo(List<Integer> list)

This just wouldn't compile.
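A small runnable sketch of the same point: after erasure, both parameterizations share a single runtime class, which is exactly why the two overloads above would clash (they'd both erase to `foo(List)`):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both erase to the same raw ArrayList class at runtime, so the
        // JVM cannot distinguish foo(List<String>) from foo(List<Integer>).
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```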

The problem there is that the equivalent Haskell would dispatch statically, whereas this is dynamic dispatch in Java.

Not today, but in the future it might, depending on how the work on value types will turn out.

The difference is that Haskell has very little RTTI, so you can't really see the missing types at runtime. For Java it's much more noticeable, because runtime type manipulations are way more common.

Well, Go's type system is a butt of many jokes.

>> (We also collectively decided that Microservices were another solution to our woes though that cargo cult is being questioned)

I wouldn't really say that. I think it's more that we all discovered huge monoliths don't work in an "agile cloud" environment where you have 10 teams deploying on their own cadences with no coordination (which you do have with on-premise binary delivery, or waterfall, or when operations implicitly coordinates because they have to build out the physical infra).

Further, I think modularity has become much bigger in the past 15-20 years as more and more people contribute to open source, more problems become "solved", and languages/domain spaces mature. Whether microservices are the best solution to those observations is still up for debate, but cargo cult or not, I doubt many engineers these days would wave a magic wand to go back to monoliths, even if they live in microservice hell right now.

For me it was less about the verbosity and more about the overuse of patterns and general overengineering present in many (most?) Java APIs. Java doesn't strike me as being much more verbose than Go, but the differences in the API designs make a huge difference in how it feels to work with the language.

Go can be terse if everything is an interface{} and you ignore errors. But production-quality Go is huge because there are so many things the compiler won't help with. I want a language that would generate the boilerplate that Go programmers spend actual time writing and reading.

Actually I went the other way around.

The performance problems dealing with Tcl at a startup in 2000 made me never again want to use a programming language without JIT/AOT support for production code.

To this day, Java, .NET and C++ stacks are my daily tools.

I don't think the lack of type inference is the primary reason Java is unattractive compared to dynamic languages. The main reason is the poor expressiveness of Java's types themselves.

You can't even have tuples, nor tuple-like named types or named records. You have to write a class every time, and OOP discipline tells you to hide the data in it and make the "behavior" public (this approach is definitely not for everything, which is why "beans" with getters and setters became hugely popular). The ubiquity of hashmaps in dynamic languages is a huge relief after that.

Scala has a reputation as "rubified Java" rather than "FP for Java" because of its hugely improved type expressiveness (the presence of proper data types).

It's just as possible to have huge, monolithic, and highly brittle projects written in languages with stronger typing support.

The only difference is that you get to eliminate a class of trivial annoyances.

The big difference is that there are tools that allow automatic and guaranteed safe refactors for such languages. For instance, I can't guarantee that something as simple as renaming a method won't cause runtime errors in a dynamic language.

Agreed, and I would not claim it makes no difference, just that it eliminates only one category of brittleness from a project.

Others — poor testing, over-coupling, external dependencies, and the broad category of anti-patterns — are still available to us.

In type safe languages your tools can do refactorings like extract class/extract interface (reduce coupling), create mocks automatically based on type information (helps testing), etc.

Why not let the compiler eliminate a whole class of problems and let the automated tools help you with guaranteed safe refactors?

I get both of these in Python: extract class is provided by a good IDE, and `mock.create_autospec(OBJECT_TO_MOCK)` creates a mock object which raises an exception if called incorrectly and can do a lot of other powerful things.

Until you try to mock anything related to the boto3 library provided by AWS....

All of the functionality is something like....

    s3client = boto3.resource("s3")

The IDE has no idea what s3client is. Since I've just started using Python and mostly to do things with AWS, is this considered "Pythonic"?

Btw, after using PHP, Perl, and JavaScript, and forever hating dynamic languages, Python made me change my mind. I love Python for simple scripts that interact with AWS and event-based AWS Lambdas.

My Python IDE handles all the "rename a function" burden for me.

yeah, trivial annoyances like

    AttributeError: 'NoneType' object has no attribute 'method'

when you just don't expect it.

Right - this trivial annoyance, the Python equivalent of a NullPointerException, is not actually prevented by the static type system in Java and some other popular static languages. (Kotlin does prevent it, though!)

Just to clarify, I'm using trivial here in the technical sense.

YMMV on what counts as trivial to you, of course, after three days straight tearing your hair out!

If your type system only prevents trivial errors then it's not sufficiently 'strong'.

"Just as possible" is a pretty strong claim, given you're effectively saying that types have no effect on brittleness or contrarily, robustness.

> We also collectively decided that Microservices were another solution to our woes though that cargo cult is being questioned

I'm genuinely curious (and probably absurdly naive), but can you explain why you believe that microservice architecture is being questioned, what alternatives there are and why they are better?

Even Java has type inference now.
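For reference, since Java 10 local variables can be declared with `var`; a minimal sketch (the variable names are just illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

public class VarDemo {
    public static void main(String[] args) {
        // Java 10+ infers List<String> for `names` and String for `joined`;
        // no explicit type annotations needed on either local.
        var names = List.of("Ada", "Grace", "Barbara");
        var joined = names.stream()
                          .map(String::toUpperCase)
                          .collect(Collectors.joining(", "));
        System.out.println(joined); // ADA, GRACE, BARBARA
    }
}
```

Note that `var` is limited to local variables; fields, method parameters, and return types still require explicit annotations, keeping inference away from interface boundaries.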
