Rust is a hard way to make a web API (macwright.com)
331 points by tmcw 51 days ago | 422 comments

Yes. That's what Go is for. Go is for doing the things that Google does on servers, which mostly means web apps of one kind or another. It has all the parts for that, and they're well-exercised because they're doing billions of operations per second in Google's own workloads. It's compiled to native code, so you don't have the extra overhead of the interpreted languages. Also, Go has a better async story than most web-oriented systems. Goroutines, or "green threads", do both the job of threads and "async". In async land, if anything blocks or uses much CPU time, you're stalled. Not so in Go. A goroutine can block without stalling other goroutines.

Right now, I'm writing a client for a virtual world in Rust. There's a big need for concurrency, but, unlike web stuff, it's all tightly interrelated. There's a GPU to keep busy. There's network traffic of several different kinds. There are compute-bound tasks which need to be kept out of the frame refresh loop.

Rust is good for this. The existing C++ client is too much of a mess to make concurrent; people looked at it and gave up. In Rust, it's coming along nicely.

Use the right tool for the job.

I have seen, though, a fair amount of Go applications with HTTP(S) servers that are (unintentionally, I think) missing Content-Length in the response, or not correctly handling HEAD/Range requests, ETags, Expires, and so on. It seems like people aren't fully aware of what the bundled libraries do and don't do... and how that might affect end-to-end performance and load.

I've never seen that in 5+ years of Go.

1) You don't have to touch Content-Length, since it's computed for you. https://golang.org/pkg/net/http/#ResponseWriter
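That behavior is easy to check with httptest. In this sketch the handler never sets Content-Length, yet the client still sees one; this relies on net/http buffering small response bodies and filling in the header itself.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// probe starts a throwaway server whose handler never sets
// Content-Length, then reports what the client actually received.
func probe() (string, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello") // no explicit Content-Length anywhere
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		return "", err
	}
	resp.Body.Close()
	return resp.Header.Get("Content-Length"), nil
}

func main() {
	cl, err := probe()
	if err != nil {
		panic(err)
	}
	fmt.Println("Content-Length:", cl) // filled in by net/http itself
}
```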

2) ETags, Expires, etc. are not managed by the std lib, as they should not be.

On 1), I looked, and yes, you're right, and it seems to have been there from the beginning. I may have been thinking of Node.js, where setting Content-Length has to be a deliberate action.

Good to know, but this is a pretty small deal, easily fixed. Also, there are probably quite a few Go backend frameworks, and I doubt all of them have this issue.

On Node.js that is left to userland. It's up to the user to provide Content-Type and Content-Encoding; provide ETags, Cache-Control, and 304 responses; and handle cross-origin requests.

Which looks like the right choice. ExpressJS has some helpers for it, but it's a soup of unpredictability and shitty performance. You'd want something as simple as uWebSockets.js.

Go is amazing, probably my favourite language I've ever learned. I find the rate of development insane: you can just bang out straightforward, easy-to-read code, and everything you'd ever need to write a web API is there in the standard library.

Where I work we've shipped Go APIs using just the standard library, running on about 1/10 of the hardware we were using in the .NET (not Core) world and handling tens of thousands of requests per second. Perhaps that was just because we had really old, shitty .NET code, though.

I've seen my co-workers say similar things about Go. They especially like the verbosity. I personally like features that make code smaller and more concise while staying readable. I'm referring to the syntactic-sugary things like `for` comprehensions in Scala and `do` notation in Haskell, which are pretty much built to do the same thing. I wish Go had a little more of those; it would have made things a little more exciting for me.

Yeah, my experience with Go is that there is a lot of boilerplate, and the verbosity actually makes it harder to read, IMO, because the core logic is cluttered with less important details.

I've had the opposite experience. Go is one of my least favorite languages out of the dozen or so I know pretty well.

I find some of its syntax choices pretty hostile, and the lack of generics is a problem, unless that has been solved already; I'm not sure.

Classic .NET is slow as hell.

One of our Java backends handles a few thousand requests/second with no real optimization.

The ORM story in Go is terrible compared to Java and C#, probably due to the lack of generics. I won't touch it personally till this changes.

For a point of reference, which other languages have you learned?

I share the opinion of the person you are asking, and I can share the list of languages I've delivered production systems in: Scala, Erlang, Java, Ruby, Python. In a past life, C# (.NET on Windows).

Now four years of Golang. It beats all of them at development pace and ease of integration, especially since modules started working as they should. The killer features of Golang: modules, the http stuff, channels, SSH in the stdlib, and fantastic, easy-to-use crypto.

>you can just bang out straight forward easy to read code and everything you'd ever need to write a web API is there in the standard library.

The prevalence of swagger disagrees.

Go still has a few annoyances: error handling and casting things to/from interface{} are the most painful of them.

Certainly, it's still way better than C. But the resulting boilerplate and copy-paste programming are bound to be a source of errors.

(I'm happy to see the generics proposal coming closer and closer to being a shipped feature.)

Error handling is not that great in either Go or Rust. It wasn't that great in Python, either, until Python standardized its exception hierarchy. That seems to be the key - get everything to use the same exception format, especially the libraries. How the error values are passed around is less important.

I always need to spend 30 minutes educating everyone on the "nil interface" in Go. Go is fine for small network services that do just one job, but it doesn't scale up as an app programming language for large teams. It has far too many edge cases and gotchas. Also, loads and loads of boilerplate, making Java look elegant.
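The "nil interface" gotcha being referred to is the typed-nil-in-an-interface case. A minimal reproduction (the type names here are invented for the sketch): a nil pointer wrapped in an interface compares as non-nil, because the interface still carries type information.

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// fail returns a typed nil pointer.
func fail() *myErr { return nil }

// call converts *myErr(nil) into the error interface. The result is a
// NON-nil interface value wrapping a nil pointer.
func call() error {
	return fail()
}

func main() {
	err := call()
	fmt.Println("err == nil?", err == nil) // false, surprisingly
}
```

The usual fix is to return the `error` interface type directly (and literal `nil`) rather than a concrete pointer type.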

Error handling is such a train wreck. Manually returning an error in 2021? What were they smoking?

Explicit is better than implicit, and you can't get much more explicit than that.

Nobody wants to write or read

  SUB ESP, 12
a thousand times a day; the best you can do is avoid screwing it up. So when you write a function in any language since 1960, it just happens. Stopping a function halfway and bubbling an error up the stack should also just happen.
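For reference, the explicit style being debated looks like this in Go (the function below is a made-up example): every fallible call is followed by its own `if err != nil` block, and errors are values that get wrapped and returned by hand.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAndDouble shows the explicit propagation the thread is arguing
// about: the error path is spelled out at every call site.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	if v, err := parseAndDouble("21"); err == nil {
		fmt.Println(v) // 42
	}
	if _, err := parseAndDouble("oops"); err != nil {
		fmt.Println("error:", err)
	}
}
```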

I agree, although there are, IMO, better and equally explicit ways to handle errors; Rust and Haskell come to mind. When dealing with a Result/Either/Maybe or some other functor, you have to acknowledge the context of the value, i.e. that the value could possibly be an error. This way the error handling is baked into the type system in a way that's hard to replicate in languages that just return error codes or throw exceptions.

> Explicit is better than implicit

Which just demonstrates why slogans are not a good substitute for thought.

They're good for programming a community though.

If you believe that I hope you only write in assembler

It's actually really nice to have the good quality error messages that Go encourages when something goes wrong.

If you remember to write them and propagate them...

After 8 years of writing Golang microservices, one observation concerns compile times when optimizing for production.

A typical Golang CI/CD pipeline should:

- compile without CGO enabled and with go build -tags netgo -a -v, so that the binary can run in a minimal Docker image like Alpine or scratch
- compile and test using the race detector
- lint using golangci-lint

All of these flags and linters considerably increase the build time for deployment to production.

The Rust compiler effectively gives me the same outcome through its type system and compiler advancements, even when targeting a musl release build.

I have seen many Golang projects where the race detector was skipped because the code base was too large or the tests were flaky.

On the other hand, I do miss the code coverage tools. Code coverage for Rust seems to only work on Linux.

Most of Google's Web front end servers are written in Java, a lot of the backends too.

C++ still dominates in most of Google's 'big' systems, most of which are implemented as HTTP servers.

Google uses Stubby (essentially gRPC), not HTTP, for internal APIs.

Stubby still uses HTTP as the transport layer. You're probably thinking of REST.

I see a lot of people asking for resources to learn Go. I strongly recommend Thorsten Ball's Writing A Compiler In Go and Writing An Interpreter In Go. It's definitely overkill, but it does more than the average language intro book.

Lastly, does anyone have any resources for actually writing concurrent Go code? I have been spoiled by so many of the underlying libraries “already” doing this for me.

I wonder if the author of the article would agree with you on that. He made a distinction between specialized services and general web applications, which reminded me of this article about why Go isn't necessarily a good fit for the latter: https://chadaustin.me/2016/04/two-kinds-of-servers/

> Go is for doing the things that Google does on servers,

What does Google do on servers that has not already been served by Java and C++? As far as I know, the vast majority of code both existing and newly written at Google is Java and C++, not golang.

Make it easy to ship code? C++ compilation is a nightmare, and it's an extremely expensive language to teach. JVM tuning is a nightmare and takes a while to set up (at which point it does run well relatively unattended, barring security fixes). With Go, you practically have to work to avoid shipping a binary that you can copy and run. You can learn the entire language in about two days, and it can easily compile as you type with decent feedback latency (<1 second, in my experience). Its FFI is about as usable as JNI, maybe a little better once you get past the awkwardness of using a separate stack. Its GC is good enough, with reasonable bounds.

Keep in mind, I use Rust and C++ primarily. I just look over the fence at Go and can appreciate what it does well.

You don’t really need to tune the JVM for most applications other than setting a maximum heap size. More than likely you will make things worse than the defaults.

And I don't see how Go could be better in that respect, other than not allowing tuning. But that's really no different than not tuning the settings on the JVM.

I can't speak for the present day, but while I was still working there a few years ago, they were pushing Golang pretty hard. They weren't being hard-line about it, so I'm sure a lot is still done in other languages, but I wouldn't be surprised if Go rivals at least C++ in terms of service count at this point.

They realized C++ sucks for simple services and due to the Oracle litigation they were trying to get away from Java where possible.

Having written backends in C++, Java, Python, Rust, and Golang myself, Golang is far and away the best tool for the job in my opinion.

> due to the Oracle litigation they were trying to get away from Java where possible.

Not a lawyer here, but isn't the litigation related to the Android Java clone, not server-side Java usage?

Exactly. It seems many people conflate the issue.

I could definitely be wrong. That's how it was explained to me in passing by a co-worker.

Well, it doesn't make sense... why move away even from server-side Java? I mean, you could use Kotlin, which combines all the goodies (functional + ergonomic + type-safer than Java/Go) and has good Java interop... and Kotlin's even the official language for Android projects...

I guess paying for Go development resources is cheaper than having bought Sun when given the opportunity.

However, Kotlin and Android depend heavily on the JVM ecosystem, and Google is certainly not going to rewrite everything in Kotlin/Native or start using ART outside Android.

Also, OpenJDK is completely open source (even though it is primarily developed by Oracle employees); it's not like there is any risk in using it. Google was sued because it created a Java copy instead of buying a license from Sun, which explicitly required one for mobile devices.

And since they doubled down on their Android Java flavour, it is now a major headache to produce modern Java libraries that work on Android.

Even if you castrate yourself to Java 7, whatever works depends pretty much on the Android version.

I've been around Go, Java, and Rust for a bit now (I even worked full-time on Go projects for a couple of years). Go really lacks a few features that are required for numeric computing and data processing. If you are building a web service that needs these kinds of things, it'll be a painful experience.

There are a bunch of things Rust has which make it attractive over Go for these types of apps, and from a practical perspective it tends to be better for a team to use a minimal number of languages across their stack.

The lack of generics means that each numeric type has its own signature; if you deal with a lot of numbers with different precisions, you'll have a bunch of identical methods lying around just to support alternate precisions.
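A sketch of the duplication being described; the two sum functions below are hypothetical but representative of pre-generics (pre-Go 1.18) code, where the same body is copied per precision.

```go
package main

import "fmt"

// Without generics, each precision needs its own copy of the same
// function body: identical logic, different signatures.
func sumFloat32(xs []float32) float32 {
	var s float32
	for _, x := range xs {
		s += x
	}
	return s
}

func sumFloat64(xs []float64) float64 {
	var s float64
	for _, x := range xs {
		s += x
	}
	return s
}

func main() {
	fmt.Println(sumFloat32([]float32{1, 2}), sumFloat64([]float64{1.5, 2.5}))
}
```

Multiply this by int8/int16/int32/int64/uint variants and a handful of operations, and the boilerplate grows quickly.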

The lack of function/closure serialization in Go means that "moving compute to data" ala Spark or Hadoop becomes exceptionally difficult.

As native code, rust interfaces cleanly with C and can run in a few more places than Go - there are some active efforts to even support GPU kernels.

Go can't really do any of the three things above today, and the community is averse to including new features in the language, for both good and bad reasons. Java can do the first two but struggles in the last category.

Rust has an interesting opportunity to be a modern C which does the things java does and the things C/C++ do. But the library support and learning curves would need to improve dramatically for it to get there.

One of the things I really hate about golang is the idiomatic usage of single-letter variables. Other than that, and a couple of other quirks, it’s good.

Single-letter variable names annoy me so much. Every programmer uses an editor or IDE with autocomplete. Just use a damn name that describes the thing; if it's a little long, no one cares.

CS papers are the worst in my experience. I know lines have to be kept short, but I think some people believe a tiny code block makes them look like a better programmer.

I like how Swift does it[0] — it reads nicer than any other language I've used.

[0]: https://swift.org/documentation/api-design-guidelines/

> Every programmer uses an editor or IDE with autocomplete.

This is news to me. I use JOE and, occasionally, vi (sic). But when I use short variable names it's often because they're idiomatic and expressive. In C for example, i, j, n, src, buf, cp, etc.

Many moons ago I used IDEs. Eventually I tired of installing, configuring, and maintaining bespoke environments.

I'm confident I'm not alone.

It's cool you have found a workflow that works for you, but I don't think there are many people using JOE or (plain) vi to write software.


I can't imagine working that way myself. I need syntax highlighting, code completion, and go to definition at the minimum. Another newer feature I like is spelling and grammar checking of code comments.

Edit: Have you given Sublime Text 3 a shot?

The breakdown of "All Respondents" (i.e. all programmers) into Web, Mobile, and SRE/DevOps is telling.

Let's just say that it's a big world. People programming Excel spreadsheets aren't using vi, either. I won't dispute that.

I’d love to understand the rationale for preferring extremely short variable names.

I like short variable names - for variables used in only a short block of code. I like long variable names - for e.g. a field on a struct used all over a file, and certainly exported ones.

I think "Go uses single letter variable names" is not so true in practice. There's a balance, in every language, between when to use short and long names. Go's balance is tilted (a good bit) more towards short, but it's not a hard rule.

There's significant mental overhead in reading `numberOfSubprotosOnProto`, as well as time spent hearing it in my head. I'd rather see `n` declared, used a few times in the next <10 lines, and then disappear. If there are two similar concepts in the same function, use slightly longer names, sure. Certainly not m and n.

I think that's the same reason math tends to use single letter variables. "Taylor series are easy! Just memorize: 'f_series_degree(evaluation_point, expansion_point) = sum(derivative_order=0..series_degree, (evaluation_point - expansion_point)^derivative_order * f^{derivative_order}(expansion_point) / derivative_order!)'

No thanks, I'll take f_n(x, a) = sum(i=0..n, f^i(a)(x-a)^i/i!).

I don’t know if it’s ironic or not, but as someone not into mathematics, the first example is far more readable, and it’s not close.

By readable, I think you mean you get to read some whole words and thereby get the impression of better readability, but in actuality it doesn't necessarily aid comprehension if you don't know anything about Taylor series, or maths in general.

I'm not saying more descriptive variable names don't help, but saying that you are not into mathematics makes your point moot.

But I know what an evaluation is, a point is, a degree is, etc. These words contain semantic data.

Single letter variable names are throw away variables which you don't actually care about except for maybe a line or so.

I tend to use them when I'm trying to "show my working out"; it takes me back to algebra class days, so I don't mind.

Gotta save those bytes

Some people prefer lattes, some cappuccinos, but it's all milk baby.

> That's what Go is for. Go is for doing the things that Google does on servers

Definitely not, unless you are google.

Go is great for hardcore backend and systems programming, as a replacement for c/c++ or Java.

For most companies out there (which are not Facebook, Amazon, Google, etc.), what you need is just f**ng Python/Django, Ruby/Rails, PHP/Symfony, etc.

P.S. You probably don't need kubernetes and monorepos either.

Do you have a recommended starting point for learning how to build web apps in Go? As a language, Go seems pretty easy (although the upper case method names really bug me), but I don’t have a ton of time to pick up new stuff on my weekends, and have not yet stumbled onto a great “start here” guide.

It’s not free, but I really recommend ‘Let’s Go!’ by Alex Edwards. He goes through building a web app and all the code is included, so for someone like me who learns better by tinkering with things, it was super helpful. He also blogs about some things which I still find myself referring back to.


I can’t recommend Go with Tests enough if you do TDD. It’s fantastic.


Seconded. "A Tour of Go" didn't really stick until I actually started writing code by following Learn Go With Tests, and then it clicked. I had to learn it because I got put on a Go project at work and needed to get comfortable with the language over a weekend -- and after doing Learn Go With Tests I was ready to hack on production code come Monday (with some supervision, hah).

And it’s been kept up to date through go versions and new content has been added to it over the past year.

This is the simplest "start here" guide out there: https://learnxinyminutes.com/docs/go/.

For more depth, check out FreeCodeCamp's course: https://youtube.com/watch?v=YS4e4q9oBaU.

Thank you! The tutorial is great.


You are new to HN. If you click on the timestamp of a comment, you go to the permalink for that specific comment. There, there is a "favorite" button you can use to save it. You can access all the comments you have upvoted or favorited from your profile page.

You da man, thanks.

Exactly. I'd like to add that Go gets out of the way like I've never experienced before (having worked with Python, .NET, Java (Spring), C++, Rust, Javascript). Very good tooling and a standard lib that is great (e.g. networking) to mostly good enough (e.g. xml).

Developer productivity is unmatched with Go. Focus is almost always on the problem to solve, not on language features (I'm happy for generics to come but really don't need them), memory management / borrow checker, or build tools / config (Gradle et al.).

Rust is a Formula 1 car. Use it when you need it, and be aware that it has substantial development and maintenance costs (immature ecosystem).

I think the closest thing to "Rust plus green threads" isn't Go, but Haskell. If you try to switch from Rust to Go, you're going to find yourself missing the type system, ADTs, traits, etc.

I wrote this back in 2014 comparing (early) Rust, Go, and Haskell, but I think it's still more or less accurate. https://yager.io/programming/go.html

This misses the entire point of the article. The author's pain about Rust wasn't about wanting green threads - it was about building web APIs. Haskell doesn't succeed at that either. Haskell's Morpheus also suffers from n+1. Haskell isn't nearly as supported for the author's use cases as Python or Go, and Haskell is closer to Rust in that regard.

I'm addressing the comment I'm responding to. If I was responding to the contents of the article, and in particular the specific thing you're focusing on (GraphQL support) I would have written a different comment and posted it elsewhere in the thread.

Hm, I agree with the points you made on Golang; its design seems to be a regression. (To add another: look at Go's "default value instead of null" instead of adding actual null safety in the type system, like Kotlin/Swift/Scala/TypeScript/etc.)

Anyway, this blog post seems to be a call for better ecosystem work on Rust's part. Being a new language without a rich parent (like Go's Google or C#/TypeScript's MSFT), the situation is somewhat understandable.

Yup! That's why Haskell is still the best language for business logic. Rust is for programmers at large companies trying to avoid the business (logic). I'm quite sympathetic, I would be doing that too at one of those. But when I need to program something a non-programmer actually cares about, Haskell is the way to go.

"The existing C++ client is too much of a mess to make concurrent; people looked at it and gave up"

99.9% of the kind of games you describe are built in C++; it works fine. Modern engines do that without much trouble.

Every C++ project is different. Maybe this one was poorly architected.

The original implementation is 20 years old. Some very good people designed it, but it's from the era when computers came with one CPU and GPUs didn't do much.

C++ programs tend to be heavily inheritance-oriented, with huge lists of includes and deep inheritance hierarchies. Decoupling things to parallelize them is very difficult in that environment. Rust is mostly a single-assignment language with move semantics, which encourages creating objects (yeah, the Rust crowd calls them structs) and passing them off to something else in a somewhat functional style. So you get programs that are less tightly coupled. That's useful if you find you want to pass an object to another thread, put it on a queue, etc. for performance reasons.

Any dangling pointers or cross-thread references will be caught at compile time. That alone makes development far easier.

Although not perfect, and assuming all source code is available, the Clang and Visual C++ checkers can do similar checks.

So it is always a matter of weighing the value of a full rewrite against how well the checkers are able to provide valuable feedback.

I concede Rust will be much better at this; however, there is also the whole loss of ecosystem to consider.

> clang and visual c++ checkers can do similar checks.

> Which I concede Rust will be much better,

Aren't these two statements contradicting each other?

A lot of games slam the CPU even when idle because of busy waiting and looping built into the game, which could likely be improved by better async and parallelization primitives or frameworks.

This is common for many real-time applications, not just games. It can be a result of designing for WCET (worst-case execution time). By always running at worst case, even when not needed, your testing becomes much easier, because there is less need to go into every corner case and see if your frame rate drops. For gaming it's less of a safety issue, but it gives a more consistent experience.

Games also really don't care, since they are single-use applications. You are not going to run something else CPU-intensive in the background while playing.

That seems like quite a bold assumption about what I'll be running. While I do expect a game to be able to take what it needs, and certainly that could be a lot, I've definitely run multiple games, or run intensive software at the same time. It's not always so simple.

> Use the right tool for the job.

Do you agree with the article's recommendation of Rust for command-line tools, or do you think Go is the right tool for that job as well?

Or perhaps I should stick with C++? For one thing, it's nice to be able to re-use code between command-line tools and other C++ programs, which AFAICT would be a bit harder between Rust & C++ and much harder between Go & C++.

I’m not the one you asked, but I think go is also a fine language for command-line tools, thanks to is truly breezy cross-compilation story.

The ability to trivially cross-compile in Go, and get a single-file executable artifact, is often mentioned but still not appreciated enough. This is still a multi-hour project, minimum, for most languages, especially if you are introducing a second platform to a well-established project. In Go, it's a compiler flag.

I think between http, encoding/json, and the io libraries, it’s also very simple and straightforward to do most of the things a typical command-line program does, in a cleaner way than shell or Python (both have their place) and with less fuss than Rust or C++.

There's a relatively large difference in executable size, which might matter in some situations, and in run time. For executables that get put to the task of doing a lot of heavy lifting and hammering on memory resources, like grep and awk and others, it might make a real difference.

Not only Go; any language with automatic memory management and an AOT/JIT-capable toolchain.


People are still trying to fix that: https://news.ycombinator.com/item?id=25750582

The last error-handling fix didn't really go anywhere, but just being able to return an Either[error, T] will help quite a bit.


More accurately, imagine how much better the world would be right now if the Java community had been interested in dealing with the many usability and performance problems encouraging developers to move elsewhere.

Imagine if all of those years of saying that it was just laziness causing people not to like Maven, massive object hierarchies, or repeating claims about JIT performance which statistically represented no application anyone actually used had instead translated into “There are a ton of basic gaps which give every new user a bad first impression. We should do something about that!”. Repeat for “Maybe it was a really bad idea re-inventing PHP-style typing with introspection so you have all of the inconvenience of static types but still commonly find type-confusion problems which editors and the compiler won't tell you about”, “Maybe it isn't reasonable to tell people that you normally need to edit multiple XML and properties files to run anything more than hello world”, or “Maybe we should burn with fire any codebase which thinks the right way to handle an error is to permanently stop working but still keep running so anything which doesn't do an end-to-end health-check won't know all of your users see the service as down”.

It may sound a bit bitter but I'm saying that as someone who started using Java circa 1995, left to more productive shores, and recently came back to a new development team seeing that nothing seems to have changed in terms of the amount of time developers spend fighting the tools rather than doing their jobs. Yes, Java code can have competitive performance (Solr/ElasticSearch are world-class) but the code I see is more commonly slower than Python while using more memory and lines of code because it turns out that even heroic attempts at JIT optimization can be easily defeated by the average enterprise Java architecture.

What sort of code do you see that is slower than Python??

It is either the "we have rewritten it in X language, and due to the nature of the rewrite we actually realized what a better architecture for our software is / half of the features are not even needed" case, or you are comparing totally unrelated things.

Also, I think people commonly underestimate complexity. Just look at your typical npm package description, it is not XML, yet you would never be able to write it by hand. For sufficient complexity, XML is quite good and human parseable.

Also, it simply doesn't matter. No one writes 344 hello-world projects. What may be missing (but is there with Spring Boot, for example) is an auto-generator for projects.

> What sort of code do you see that is slower than Python??
>
> It is either the "we have rewritten it in X language and due to the nature the rewrite we actually realized what is a better architecture for our software/half of the features are not even needed", or you compare totally unrelated things.

Code which is over-architected with many levels of indirection and attempts to bolt on dynamic behaviors at various levels in the frameworks used. I’ve seen teams unable to optimize things because there were so many levels of indirection that they were unable to reason about the system.

This is not Java the language but enterprise Java the culture as the problem. I have at multiple points seen a web app replaced by Perl (in the 90s), PHP, or Python to get severalfold performance and stability improvements (no more mystery OOMs where it helpfully stays running serving errors rather than exiting so the OS could restart it) when the previous team literally had no idea how to make it faster or more stable within the libraries they thought Real Developers™ had to use.

That's entirely culture, not the core language, but a very high percentage of the developers I've interacted with thought that good Java developers were supposed to write code with that much extra complexity. I've been wondering whether something like Kotlin will spur a bit of a renaissance as people realize that they don't have to do things the old enterprise Java way.

That will happen regardless of the language.

The same people were doing that with C, SUN RPC/DCE, C++, CORBA, COM/DCOM, and will do it with Go gRPC or whatever else comes down their path.

This is true in general, but I've seen it most commonly in Java, done by people who are certain that you aren't doing Java right if you don't. That's why I said "Java community": there are definitely people pushing for less gratuitous complexity, but they don't appear to be a majority. I've seen people be told that they might not be smart enough to be programmers if they didn't immediately grasp the wisdom of a small project needing a hugely complex stack, and once you get the filtering mechanism of that attitude going, it takes active effort to recover.

The Java JEE community was created by former C++ CORBA and DCOM developers.

Everything that people complain about Java culture has its roots in 90's C++ and Smalltalk enterprise development platforms.

Even J2EE initial version, hated by so many Java bashers, was originally written in Objective-C as part of the collaboration effort between NeXT and Sun, in the context of OpenSTEP support for Solaris.

Which ended up with Sun coming up with J2EE after they decided Java was the way forward and not Objective-C, while Apple, post NeXT acquisition, when Java still had parity with Objective-C, used the same ideas to port WebObjects to Java as well.

Yes – if I was writing a longer comment C++ would have been my first choice for a contender in the competition of which language has been worse served by complexity fetishists.

My point to the original poster was that it was too simplistic to criticize people for not using Java or the JVM without acknowledging the many legitimate reasons which motivated people to adopt alternatives, and many of the recent features the language has acquired were added because they’d been popular elsewhere. I consider that healthy since competition benefits everyone and especially in programming languages many partisans are loathe to admit that different people have different sweet spots because not everyone does the same work.

Everybody on this thread took literally my wish of "one language" but I never would've said C++ would be replaced by Java, for example. So I really meant "one language" in each category. Obviously there has to be a language to generate machine code, and then another higher-level language on top of that, and so forth.

The world is better off with fewer rather than more languages for obvious reasons. I never really considered the industry to be a fragmented mess until "Go" and "Rust" came along. That was the straw that broke the camel's back for me, where I just said: "Ok please stop, you aren't helping."

I have only used one language professionally, but do you really think having just one language can solve the problem? That said, I agree that if a language has all the features you need and is mainstream, you are better off using it than jumping to another language for the heck of it; reinventing the wheel helps nobody.

I am repeating myself, but different people have different working styles, and different communities have different tastes and ways of doing things.

Language is not the only barrier to communication; it is just one parameter. I honestly think different languages allow different ways of doing things (not better ways), with different tradeoffs. My preference for a language is not just about the languages I know, but also about what I want to do and how I want to solve the problem at hand.

Even if a language supports all possible styles of programming, will that result in a coherent language? Either the language gets very big, with different people working in different styles, producing either a monstrosity of a code base or fragmentation of the language community; or the language itself cannot represent problems as succinctly as its users want. Features are not the only issue here.

If a language is just needless and redundant, its absence will make no difference. Yet I have seen people with very strong opinions on how to organize code, OOP or not OOP, and of course tabs vs. spaces. Fragmentation is inevitable and will exist as long as strong opinions exist.

The best analogy I can think of to make the point of how important the 'single language syntax' goal should be is "mathematics".

Imagine if everyone "doing" math did it their own way, using their own sets of symbols and syntax. Even Einstein got the transform for his Special Theory of Relativity directly from Lorentz. Most innovations are built on top of other innovations, and the ability to share depends on a common language.

Not to refute your point here, but I consider improvements in the way a language lets you express your thoughts to be very important too. But I get what you are saying.

Consider an extreme straw-man example: the Roman numeral system winning over everything just because of inertia. The decimal system is a great innovation because it gave us a succinct way of expressing the idea of numbers.

Not all languages are like that, but certain languages allow and encourage a certain style of programming, like OOP for Java, even though it is a multi-paradigm language. I think inertia plays a big role here too. Java is always going to be an OOPy language for the most part, because that is what the community is built on.

Most problems in understanding and communicating are going to be about concepts, not the language. The syntax is some work, but maybe half a day of work. I often do not find it hard to read code in another language; I get it once I see the code. But even after seeing some code, I find it hard to come up with code in that language, because that is the hard part. jq is one such language, even though it is niche: even after reading the docs, I find it so hard to combine all its small parts into a sufficiently large program. That takes time, however similar the syntax is.

That is the reason some particular style of programming ends up dominating a language community: we have to settle on something. The language is geared towards solving problems and introducing features for a certain style, and that helps. It gives a coherent outlook to the language and the community.

That is what I mean when I say fragmentation is inevitable. Not everybody wants to understand all the different ways of doing things. Even if the concepts seem understandable, thinking in terms of them is much harder for people. Multiple languages for different dominant styles seems a saner alternative than a single language with all the different styles.

One thing you may be missing is that a lot of the usability work over the past decade was not on Java itself, but on other JVM languages.

As far as performance there is no doubt that the modern JVM is quite fast. Vert.x benchmarks are consistently on par with Actix and other low level frameworks.

> As far as performance there is no doubt that the modern JVM is quite fast

As a non Java developer, the reason I don't touch anything Java related is because of popular applications like:

1. Eclipse and JDeveloper, which give you time to get a coffee before either of them boots up

2. solr/elastic search: 8 core and 16GB of RAM as a minimum requirement on a DEV machine, that's insane when sqlite can do basic full text search on the cheapest raspberry pi

3. Jenkins, which can't run a few jobs without using many GB of memory. Other CI/CD tools I've used that were made in Java were also dog slow and painful in so many ways: IBM UrbanCode, and another one made by Oracle, which was, well, Oracle

4. the Arduino IDE, which often freezes my laptop, forcing me to Ctrl+Alt+Del

5. Oracle DB: internally referred to as "the beast" by the people who deal with it. I wasn't able to run a local copy on my machine, as my work laptop doesn't match the 256GB of RAM of the server we have for the dev environment.

6. boot time, which makes it almost unusable for CLI-type applications

7. Oracle

Maybe you can make fast and efficient Java applications, but the highly visible ones aren't that good if you don't need all those complicated features. By contrast, applications made in Go/Rust tend to run fine on my machine or the cheapest server.

> Eclipse and Jdeveloper which you have time to get a coffee before any of them boot up

I just take the time to wait for one of those Rust builds to finish

> solr/elastic search: 8 core and 16GB of RAM as a minimum requirement on a DEV machine, that's insane when sqlite can do basic full text search on the cheapest raspberry pi

Definitely not the hardware requirements of the 2011 dev workstation on which I first used Solr, and as such also not the requirements for using it in 2021.

> Jenkins, which can't run a few jobs without using many GB of memory. Other CI/CD tools I've used that were made in Java were also dog slow and painful in so many ways: IBM UrbanCode, and another one made by Oracle, which was, well, Oracle

So which alternative are you using with similar capabilities and less hardware resources?

> the Arduino IDE, which often freezes my laptop, forcing me to Ctrl+Alt+Del

Xcode does it all the time, and it's pure native code, a mix of C and Objective-C.

> boot time, which makes it almost unusable for CLI-type applications

Learn to use AOT compilers for Java.

> Oracle

Partner in crime alongside Sun and IBM in making Java widespread; has an RDBMS that allows stored procedures in Java; responsible for sponsoring the research work that proved it is possible to write high-quality JIT compilers in Java. Oracle did more for Java than Google would ever have done, even if Google had actually bothered to buy Sun.

> 2. solr/elastic search: 8 core and 16GB of RAM as a minimum requirement on a DEV machine, that's insane when sqlite can do basic full text search on the cheapest raspberry pi

You would need to be using the production server settings and traffic volume for this to be true. I run both - sometimes simultaneously - on a 2012 MacBook Air with 8GB and rarely even notice them running. Unless you're hammering the system with expensive requests, these use less memory than Gmail in Chrome.

That's the requirement for dev; prod is double that: https://www.google.com/search?q=elastic+search+hardware+requ...

Those figures Google highlights are for a Palo Alto product which uses ElasticSearch. Speaking from experience, that is much higher than necessary - usually the 1GB default is more than enough for development use and even production for lighter usage. The general figure to care about is index size - if you’re indexing hundreds of GB in development you might have problems fitting it on a laptop but otherwise this is not excessive or unusual given the massive feature set.

You're conflating the monstrosity of J2EE (which admittedly was a mess, and completely unrelated to Java itself) with the "Java Language" by which I mean the syntax of the language itself.

Yes the younger/newer generation took a look at all the XML files in J2EE and said "Isn't there something simpler", and so now today we ended up with JavaScript running on the server side (NodeJS). TypeScript came to the rescue over the last 5 years, but now we have a world with TS and Java. The solution should have been to make Java run in the Browser, and not JavaScript, but I understand the history, and how, when, why that occurred. I've been coding 30ish years myself, like you.

> You're conflating the monstrosity of J2EE (which admittedly was a mess, and completely unrelated to Java itself) with the "Java Language" by which I mean the syntax of the language itself.

I'm not sure how well you can separate that: most people are going to hit those bumps when they get started, and they're going to see an ongoing frictional cost against their productivity. Yes, there are some neat things in the language — especially newer versions — but there are a lot of people who would benefit a lot from, say, a “Java: the good parts” distribution which makes it easy to start a new project and evolve it, and then SEO it as much as possible so people get that advice rather than the 10+ year old stuff many web searches will have highly ranked.

I do think there's an interesting alternate history where WebAssembly happened many years earlier in the form of Java byte-code. That would have lost a lot of the ability to read other people's code but it would have had better performance earlier and a good story for language competition which has brought many good features to Java from other languages. Unfortunately, after around 1997 or so that really came down to Microsoft so it would have been VB or C# in any case.

XML is interestingly similar where the core language community charged ahead building ever more standards on top of it but assumed that its inevitability meant someone else would fix their … not great … tools, specs, and examples for them — and then were surprised when almost everyone abandoned them for easier tools. I always liked the idea of XPath but it's effectively never moved past 1999 unless your world exclusively avoids libxml2, which wasted most of the hard work the standards group did.

One thing I think may be common to both of those is a big shift in how easy it is to build and ship development tools and programs: a 90s programmer had to live with whatever came in a distribution to a much greater extent, because you couldn't count on everyone having an internet connection (let alone a fast one), package management hadn't happened much outside the Linux/BSD world, and learning new things meant waiting for someone to write a book you could order, finding a local user group, asking on Usenet, etc. Warts in your language still mattered, but less so, since for many people it was easier to keep using something than to switch. After the 2000s, though, I think a lot of people realized that the cost of switching was so low that they had to care about developer ergonomics a lot more, unless they had something like the massive platform pull Apple had with Objective-C and Microsoft had with C#.

On the .NET side our JEE mess is called SharePoint and DCOM.

Also regarding XML, Biztalk anyone?

I remember DCOM! I was a WIN32 MFC C++ Windows app developer from 1990 to 2000. I forget which person at MSFT finally ended up admitting "The Registry" was a mistake, but I think it was Gates?

It was right around the time when WAR/EAR/JAR files in Java were showing the power of having everything packaged and self contained. Meanwhile MSFT was stuck in DLL Hell for decades. ha.

As a Java/.NET consultant I have dealt with both stacks for as long as they have existed, so it is always kind of funny to see one side throwing stones at the other without realizing their own glass house.

Regarding Java packaging, isn't it funny how everyone is jumping on the Docker hype just to get what Java containers have given us for the last 15 years?

Yeah technology stacks have gotten completely out of hand over the decades. My current concoction is here:



I think I'm "doing it right", but like religions no one's is provably correct.

The younger kids are saying: "Meh, we'll just run NodeJS on the server, so we can ignore the J2EE mess the previous generation created since they're older and therefore dumber than us."

> I'm not sure how well you can separate that: most people are going to hit those bumps when they get started.

Your XML example is spot on and I agree. However I'd never say XML itself was flawed specifically because some of the tools, APIs, and specs built on top of it were flawed.

That would be like saying electricity itself is flawed because you don't like electric tooth brushes.

Likewise, Java was not flawed just because J2EE was a mess. The fact that J2EE drove people to NodeJS, has nothing whatsoever to do with Java itself.

> However I'd never say XML itself was flawed

I’ll say XML was flawed. It was a great initiative behind the wrong data model. The idea was to take what made sense for documents (strings with markup) and use that to represent everything else. But most data structures are really awkward expressed that way, and you run into weird decisions like: Should my map be a list of items or a set of attribute pairs in a single node? Is the key a node or an attribute? When does order matter? Etc.
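To make that ambiguity concrete, here are three equally plausible XML encodings of the same two-entry map (element and attribute names invented purely for illustration):

```
<!-- 1. Keys as attributes on a single node -->
<user name="ada" role="admin"/>

<!-- 2. Keys as child element names -->
<user>
  <name>ada</name>
  <role>admin</role>
</user>

<!-- 3. A list of key/value entry nodes -->
<user>
  <entry key="name" value="ada"/>
  <entry key="role" value="admin"/>
</user>
```

Every schema designer picks one of these ad hoc, which is exactly the kind of decision a JSON object literal never asks you to make.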

If the XML standards group started with JSON or YAML or .. anything else, I think they would have made something really useful. But we didn’t think of that at the time, so instead we have XML. An impressive monolith built on a dead mountain.

You took the first half of my sentence and pretended that was the complete thought. It wasn't. The whole sentence is the thought.

What I meant was "You can't say technology X is flawed because tools using X are flawed".

Ha, not even. I see more shops doing TS/TS (backend TS / frontend TS).

Even me, a 20-year Java vet. I've basically been doing full-time TS for about 5 years now.

Q: Why does TypeScript exist? A: To fix JavaScript.

Q: Why was JavaScript invented? A: To run code in the Browser.

Q: Why didn't they run Java in the Browser? A: They did (applets). They tried. They failed. So they had to start over. They WANTED "Java" in the browser, but it wasn't practical for memory and CPU reasons, so they cobbled JavaScript together over a weekend.


TS is what everyone is coding nowadays, and yes it definitely "fixed" the problems with JS.

However, the world would be a better place if there were just Java, and neither of its two offspring (JS and TS) had ever been born.

Yet, it compiles to JavaScript and it allows many of the things that make JavaScript unsafe.

JavaScript is highly mutable and under the right circumstances you can trick JavaScript applications to modify themselves. i.e.: Prototype pollution.
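A minimal sketch of what prototype pollution looks like (my own illustration; `naiveMerge` is a made-up vulnerable helper, not code from any real library):

```typescript
// A naive recursive merge of untrusted JSON into a config object.
// Because "__proto__" survives JSON.parse as an ordinary own key,
// the recursion walks up into Object.prototype and mutates it.
function naiveMerge(target: any, source: any): any {
    for (const key of Object.keys(source)) {
        if (typeof source[key] === "object" && source[key] !== null) {
            target[key] = naiveMerge(target[key] ?? {}, source[key]);
        } else {
            target[key] = source[key];
        }
    }
    return target;
}

const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);

// A completely unrelated, freshly created object now "has" the property,
// because the shared Object.prototype was modified.
const victim: any = {};
console.log(victim.isAdmin); // true
```

Real merge/extend utilities guard against the `__proto__`, `constructor`, and `prototype` keys for exactly this reason.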

You can write 100% perfectly type-safe code in TypeScript. You'd have to accidentally do a 'var' instead of a 'let' variable, or forget to add a type to a parameter, but if you try you can get pretty much 100% type-checking on every letter of every variable or object.

Run this, please.

    // 100% type-safe function :-)
    function f(a: Date): number {
        return a.getDate();
    }

    // Input to 100% type-safe function
    const p = new Proxy(new Date(), {
        get: function(target, prop, receiver) {
            if (prop !== 'getDate') {
                return Reflect.get(target, prop, receiver);
            }
            return function() {
                console.log('Not typesafe :(');
            };
        }
    });

    f(p);
Output: "Not typesafe :("

What happened? all type-safety was circumvented. The f function that should return a number did not return a number. In fact, it didn't return anything.

The TypeScript compiler didn't return any errors either. This is perfectly valid TypeScript.

Why? because TypeScript is based on JavaScript. It is not 100% typesafe and it will never be.

Is TypeScript an improvement? yes. Does it help a lot of people? yes. Is it 100% typesafe? no.

Did TypeScript "fix" JavaScript? Mostly, but no. I hope the mods unflag my original comment.

I had already given two examples of how you can create non-type-safe code in TypeScript. I mentioned 1) the 'var/let' way and 2) The "Not including types on parameters" way.

You then gave an actual example of #2. lol.

That is not what is happening here. TypeScript correctly infers that the type of the variable is a Proxy for a Date object. If you do not supply a Date object as the first argument, the code won't compile.

The part that breaks things is that when f invokes a.getDate(), the Proxy get trap returns another function instead of Date#getDate at runtime, and that isn't validated at compile time.

Proxy is part of JS. When did I ever say ANYTHING in JS is typesafe? Nothing in JS is typesafe.

However this is also simultaneously true: You can code in TypeScript and achieve 100% type safety by putting types on everything. If I claim all cubes can be painted red, you can't disprove that by proving green cubes exist.

And what does TypeScript compile to? JavaScript.

Does TypeScript forbid using JavaScript features? no.

Can TypeScript realistically verify that your program is type-safe? no.

Is TypeScript 100% type-safe? no. They wanted fast adoption, therefore they allowed JavaScript and untyped variables.
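One concrete illustration of that tradeoff (my own example, not the parent's): anything typed `any` at an I/O boundary lets a wrong annotation through silently:

```typescript
// JSON.parse is declared as returning `any`, so assigning its result to a
// `number` typechecks even though the runtime value is a string.
const port: number = JSON.parse('"8080"');

console.log(typeof port); // "string" at runtime, despite the annotation
```

Flags like `--strict` tighten many things, but `any`-returning APIs remain a deliberate escape hatch for adoption.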

You can go into my 250,000 loc project and randomly pick any variable name, method name, parameter name, etc., and put a typo in it or wrong type, and the compiler will catch it.

I have 100% type safety, and it's all done by TypeScript and my programming language IS TypeScript (not JS)

The final output format generated by a compiler has absolutely nothing to do with whether the language being compiled is typesafe or not. That would be like saying that since bytecode is not typesafe, Java is not a type-safe language.

TypeScript is not the only way to achieve that.

Google Closure Compiler and JetBrains IDEs can use JSDoc to validate types. I have used those, to a reasonable level of accuracy, to document entire JavaScript projects that I cannot migrate to TypeScript.

Excellent. Rock on man. I never thought I'd be a Microsoft fan again, but after they invented TypeScript/VSCode, I love them so much I might move to Redmond.

Just Java? That seems bizarre. There are so many more languages nowadays than just Java. I would love for Swift to take off. Don't get me wrong, there is a lot in the Java ecosystem, but its age is showing. And it's not lightweight. If I'm trying to spawn processes quickly, it's not Java. If I want to run 100 instances of something, the RAM is going to cost me. JS and other fast-to-boot languages are also much more adaptable to serverless.

Now I'm not saying there's no room for Java any more, but "just Java" meeeeeeh.

This. Time flies by and before long it happened.

I can confirm that the seeming majority of application-layer code written at Google is Java (with Go being a relatively small contributor by way of comparison).

That said, I shudder at the thought of Java being any larger than it is, much less ubiquitous. If I had to choose between this world and one where Java was the "one true language" (or even a more extreme version of the current situation where we have other languages all built atop the JVM) I would pick this world with fragmented languages every single time.

Whatever people find unacceptable about Java could've been fixed. No need to burn it all down and start over.

Remember, I'm talking about "source level" syntax of the language not any specific VM implementation. Every compiler and VM can be replaced eventually.

It's the code syntax that needs to be "universal" not anything else.

> Whatever people find unacceptable about Java could've been fixed.

It's been DECADES. Plenty of time to fix things but nope. I don't know what Java's developers were interested in but improving things for other developers was not one of them.

C# is literally a better Java already. And Kotlin.

I agree. The problem with Java is its culture, which cannot be fixed by same thinking which created it.

And by culture I mean: Java attracted people who think that lines of code are cheap and see nothing wrong with a 200-line class that has no actual behaviour beyond boilerplate. People who measure their output by the number of lines of code added per day, not the number of features added to a project per year.

Any "fixed" version of java needs to either bring those people along - and in the process import a lot of java's stale thinking. (Some corners of the C# world struggle with this). Or leave those people behind and build a new community around your new thing (eg Scala, Clojure). You can see this in the java community already - there are plenty of tools in the modern JVM for doing terse functional programming. But as far as I know that stuff remains unpopular. Why? My theory is that most of the people who really care about writing terse, efficient, maintainable code jumped to better languages years ago.

> 200 line class which has no actual behaviour beyond boilerplate

A skilled Java developer might make a huge POJO have every variable private and then implement getters and setters to control or monitor access to every member.

But then some noob straight outta college will look at that code, not understand why it's being done so 'stupidly', and just delete all the accessors and make every variable public, so you don't even need any getters/setters. Said noob will consider himself a superior programmer until he looks back on it 10 years later, with enough wisdom to see clearly.

Yikes! On behalf of everyone who doesn’t write Java for a living: 200 lines for a plain data object, full of trivial getters and setters, is crazy. Code like that decimates your velocity because of how much effort it takes to refactor or add features to your project.

The noob’s instincts are right. Well written code in just about any other language doesn’t need all that crap. And for good reason - you can really feel the difference if you actually measure your performance programming, in the short term or the long term. And in either features/hour or features/line changed.

I worked at a place once which had this C# project which wrapped incoming http calls into calls to our internal backend (also http). Then it took the JSON from the backend and converted it to JSON for the browser. The code was 3500 lines full of that sort of POJO getter and setter crap. Every time we added a feature, this code needed to be updated. We had 2 people assigned full time to do those changes. After way too many meetings we managed to replace the whole thing with a few hundred lines of nodejs code. Those guys were moved into more useful roles - saving the company 2 full time salaries.

I probably have 1000s of plain getters and setters in my code, and I never typed ANY of them. lol. VSCode has a "Generate Getters and Setters" refactor function, and I can make them plain publics any time I want. So you can't blame the Java Language itself for their existence.

What a waste. C# has properties. If Java wasn't so against breaking forwards compatibility, Java could actually be a good language. It was good in 1995 but the changes since then have been too slow and too late.

Around 1998, when Microsoft tried to invent their own "flavor" of Java, I lost all interest in the company, and that's right when I switched from being a C++ developer to Java. C# was just another "copy" of the VM concept Java had, as you well know. Even if C# was superior under the hood (which it isn't), I still claim it just fragmented the industry and was an example of a company creating its own proprietary stuff for its own self-serving reasons - and before you think you're correcting me, yes, I know the CLR is supposed to be machine-independent. When VBScript was finally yanked out of browsers, the world celebrated.

I will openly admit however when VSCode came along and TypeScript I totally regained a new love for Microsoft, and they have redeemed themselves, also by accepting Linux into their dying OS.

The fact that you didn’t have to type them doesn’t make them free. Software spends most of its lifetime in maintenance. Those getters and setters have to be documented, adjusted, unit tested, code reviewed, deleted and renamed, and scrolled past.

Typing is easy. It’s all that other stuff that I avoid by using better languages / better conventions.

That's why experienced Java devs generally try not to put any logic code in POJOs.

So you generally never even open those files unless you're spending the 5 seconds it takes to add a new prop. But as I said, I could make each property use one line of code if I chose to... something like "String x;" just like in other languages.

Setters and getters are only popular in Java EE/Spring-based environments (and there they can easily be useful).

There is nothing in Java that requires them per se; it's just a design pattern/"standard" that lets libraries dynamically inspect objects. As for the amount of time it takes, I'm fairly sure it takes less time to press Alt+Insert and select "getters and setters" than it does to write anything by hand, but to each their own. It's not like people read source code from top to bottom.

Also, "this decimates your velocity" is just straight-up coming from an alternative world. How do you write code - are you copying something out by typing?

I’m sorry to assume it, but I think you only know about Java development from third-hand info, and it has nothing to do with reality. Fact is, Java is so huge that for every unreadable mess of a project, you get someone working on robot programming or something totally different, with proper architectural decisions and the like. It is quite a bad take to assume it for the whole language.

> Setters and getters are only popular in Java EE/Spring based environments (and it can be easily useful).

I'd love to see stats on how common this stuff is amongst Java programmers. I think it's more popular than you think.

I agree that modern Java has lots of tools for writing reasonable code - like closures and functional primitives. But it's very normal amongst Java programmers to never use that stuff.

I worked as a professional interviewer for a year or so recently and interviewed 400+ programming candidates. One of the tasks was a 30 minute coding challenge - using the candidate's own computer and preferred language. A huge percentage of the java programmers, even under explicit time pressure, wasted time adding needless junk (like getters and setters or extraneous, pointless classes) to their code. Out of maybe 50 java programmers I think I only saw 1 or 2 java candidates use any of java's functional programming primitives (like map) to keep their code terse and clean. Most typed everything manually, and didn't know or didn't use their IDE's codegen features.
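For reference, a small sketch (my own, not one of the actual interview tasks) of the terse style being described, using the stream primitives that candidates rarely reached for:

```java
import java.util.List;
import java.util.stream.Collectors;

public class Terse {
    public static void main(String[] args) {
        List<String> names = List.of("ada", "grace", "linus");

        // filter/map/collect instead of a hand-rolled loop with an accumulator
        List<String> shouted = names.stream()
                .filter(n -> n.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(shouted); // [GRACE, LINUS]
    }
}
```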

Is that the fault of java, the language? I don't know. As I said in another comment I think the problem is cultural. I don't really have a problem with java-the-language. But a large part of java-the-community seems blissfully content with mediocrity. I took java off my resume years ago because I don't want that kind of coworker.

> How do you write code, are you copying something by typing?

In the languages I use, copy+paste is never needed. That's what the compiler is for, and things like Rust's derive(Eq) macros.

> I’m sorry to assume it, but I think you only know about java development from third-hand infos and it has nothing to do with reality.

Nope. Eg:


I think every sufficiently large community will have outliers (in both directions), and Java is huge. I have a similarly bad opinion of the average JavaScript developer (or basically every “average” developer), but that is the product of the business decisions behind projects (cheap, bad developers, huge churn rate) more than of fundamental technical ones.

Nonetheless, I’m sorry for my earlier overly aggressive comments; they have no place here.

> I have a similarly bad opinion on the average JavaScript developer

I write a lot of javascript for a living, and you're not wrong[1]. I think the struggle javascript has is that an extremely high percentage of javascript programmers are pretty new to programming in general. So there's an awful lot of javascript code written by unsteady hands.

If JavaScript is a magnet for novices, Java feels like a magnet for middle-aged-with-kids, "programming is just a job for me" indifference. I feel contempt for that mindset - but to respond in kind to your apology, I think the contempt I feel probably reflects fear/disgust at the idea of settling. To me programming still feels like casting magic spells. I think I'm terrified of that spark some day being extinguished.

[1] I check in on this issue every year or so and it never fails to delight and horrify in equal measure: https://github.com/ChainSafe/web3.js/issues/1178

I personally don't mind heated technical debates. It's unfortunate that there's so many SJW moderators hiding in their 'Safe Spaces' who think it's their job to terminate any arguments that arise. I guess they're always "on the hunt" and trigger happy.

We don’t write it, and we don’t read it, so it never needed to be there. Boilerplate is our arch-enemy, and I’m glad we have Groovy and Scala and Kotlin to attack it.

If it really bothers you, there is Lombok. But I will tell you something: the complexity of most programs will require many lines of code that you simply can’t decrease any further. Whether it is longer by a tiny factor doesn’t matter compared to the performance, observability, and maintainability of a language - though, being on the JVM, the mentioned languages can tick all of those boxes (I’ve only written Groovy with Gradle, but due to the lack of typing I thought of it as more of a scripting language).

I have used Java since the first beta. These getters and setters have always looked awkward to me, unless they are part of a library API (where they are useful for providing backward compatibility), or they are lazy objects, or they are defined in an interface. In my own code, I avoid using them in POJOs.

For example, I prefer

  public class LineOptions {
    public Color color = Color.BLACK;
    public int thickness = 1;
  }

  void drawLine(int sx, int sy, int tx, int ty, LineOptions options) { ... }

  ctx.drawLine(0, 0, 100, 100, new LineOptions() {{ color = Color.RED; thickness = 3; }});

IMHO, it is much simpler and easier than the classic POJO + Builder pattern.

  public class LineOptions {
    private Color color = Color.BLACK;
    private int thickness = 1;
    public LineOptions() { }
    public LineOptions(Color color, int thickness, ...) {
      this.color = color;
      this.thickness = thickness;
    }
    public Color getColor() { return this.color; }
    public LineOptions setColor(Color color) { this.color = color; return this; }
    public int getThickness() { return this.thickness; }
    public LineOptions setThickness(int thickness) { this.thickness = thickness; return this; }
  }

  ctx.drawLine(0, 0, 100, 100, new LineOptions().setColor(Color.RED).setThickness(3));
or even more classic

  LineOptions opts = new LineOptions();
  opts.setColor(Color.RED);
  opts.setThickness(3);
  ctx.drawLine(0, 0, 100, 100, opts);

The Java language gives you the choice of whether to use getters/setters or not. You added a good example. The main reason I love getters/setters is that they are a 'hook' where you can add any code (like logging) if you want to detect when a variable is being read or written.
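As a minimal sketch of that 'hook' idea (the `Thermostat` class and its names are made up purely for illustration), a setter gives you a single interception point for every write to the field:

```java
class Thermostat {
    private int target = 20;

    int getTarget() { return target; }

    // The "hook": one place to attach logging, validation, or change
    // notification without touching any caller.
    void setTarget(int target) {
        System.out.println("target: " + this.target + " -> " + target);
        this.target = target;
    }
}
```

With a bare public field there is nowhere to put that line later without breaking every call site.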

Isn't that exactly what is happening in Java land now?

That's what Scala and Kotlin are, fixes for the original Java flavor. Then again, more alternatives are good for competition; and incubating and sharing new designs that aren't possible in a very old language.

Barring competition, languages seem to stall out over time. Java fell out of favor because every Java app turned into a pile of XML and factories that everyone hated and was slow to develop. The faster evolution of peer languages allowed for recognition of what people didn't like in Java (the ceremony of getting set up and invoking "app" logic).

The better parts of Scala, Kotlin, and the Guava library are steadily making their way into the language. But if there weren't competition, the language would absolutely stagnate again as risk aversion sets in.

Right, I think the languages that compile to JVM bytecode were heading in a much better direction than the ones that said "let's have fun and just start from scratch, and further fragment the industry".

Couldn’t you also say the same thing about Java not going with one of the earlier bytecoded VMs? Or, from the other direction, would we have seen more languages embrace the JVM if it’d been more open and things like invokedynamic had been added earlier?

I think there are some interesting points to be made where you’re coming from, but this fragmented approach isn’t effectively making them. Something like a blog post might be better.

Here's a non-fragmented translation for ya: "The fewer languages there are the easier it is for developers to share code."

You’re just painting caricatures of language designers. Flamebait without substance.

This sounds far-fetched to me, partly because of the nontrivial technical differences between Java and Go and partly because of the politics surrounding Oracle’s ownership of Java and the way Oracle quashes community efforts.

Value types, interior pointers, multiple levels of pointers, slices, and radically different indirect dispatch semantics are just a few things that Go has that Java doesn’t. These aren’t things that you can staple to Java. Java is more than syntax, it’s the semantics of the JVM and its object system, which should only be extended in a backwards-compatible way.

So I’m gonna say that the reason Go exists is exactly because the creators didn’t want to burn down Java and start over. It turns out that you can create a new language without burning anything down. There is also C# and the various nontrivial differences between C# and Java, like value types and type erasure in generics.

There are plenty of languages which exist side-by-side with Java in the JVM: Clojure, Kotlin, and Scala to name three. These all have their differences but they’re all designed specifically with the JVM in mind. Go isn’t.

Go instead decided to go with a minimal runtime and a GC that is one to two orders of magnitude worse than any you can find in the JVM, which will result in inferior throughput in real-world benchmarks.

While generating less garbage decreases the load on the GC, so Go can sometimes get away with this, complex server apps will not fall into that category. So actually, I really don’t see much value in Go, other than "you can throw as many developers at it as you like, because they can’t really step on each other’s feet," which is actually great for what it was created for.

> ...which will result in inferior throughput in actual world benchmarks...

You're telling me that Go's garbage collector, which is optimized for low latency, has worse throughput than the JVM's default collector, which is optimized for high throughput?

Most garbage collectors are either designed for low latency or high throughput. You cannot optimize for both. The choice for high throughput makes sense for batch programs, like compilers, command-line tools, generating analytics reports, etc. Choosing low-latency makes sense for web services and the backend services behind them.

As fanout increases, the GC latency in backend services becomes much more important.

I think the world was almost tolerable before Go and Rust came along. We already had too many languages even before those two, but at least there was a clear leader: Java. Nowadays the market is completely fragmented.

What's your issue? How exactly do more languages harm you? Or harm anyone? Is choice bad for some reason?

I think all languages suffer from their initial design, and spend five-year iterations circumventing it. Java will have proper asynchronous management in 5 years, Go will have generics in 5 years, Rust will have a garbage-collected variant in 5 years, Python and JavaScript will have full multithreading in 5 years.

In the end, no language captures the whole problem space and solves it all at once. So there is programming-language fatigue.

PS: when I say that things happen within a timeframe of 5 years, that includes the time to deliver the feature in the language, for people to massively adopt it, and for newly developed code to use it en masse.

Every new language is an attempt to fix some problems in the existing languages but at the same time be simple to learn and use. But the problem-space of programming is very complex. So while a new language does something better than some existing languages the language designers do not foresee all problems that may arise in actual use.

Maybe a way to proceed would be to have a common set of code which every new language would try to rewrite, and thus see how much better it is, or is not, than the existing ones. Comparative analysis.

Exactly right. Each new language thinks they've simplified and solved problems, but in the end they end up having to solve the same old problems that have already been solved decades earlier lots of times.

The ability to share code is critical. For people to share and cooperate, it's much better to have a few languages than hundreds.

Companies are spending billions each year because of having to translate from one language to another.

Rebuilding similar functionality maybe but who's actually porting between languages?

Plus code can already be shared with various bindings. Lately there's been a big push with "microservices" or generally networked architectures so different components built in entirely different stacks can still interoperate just fine.

> Whatever people find unacceptable about Java could've been fixed.

yeah by making it common lisp instead of java

That's a great point. Lisp is so simple in every way, you could make a good case that it was superior to all other languages, and that perhaps inventing Java was the mistake and Lisp should've been the "final" language.

There was never, and shall never be, a particular reason to anoint Java - or any other development environment - as some kind of universal panacea “one true language”, and demanding everyone’s attention for your own pet preference is beyond hubris and into the realms of absurdity.

> All those kiddies

You chose the wrong venue for this remark.


Perhaps you have this forum confused with Reddit. By my estimation, everyone here is over forty, either in body or in soul, and by way of particularly cringeworthy example the correspondent you most unfortunately chose to rag on was already a computer scientist of international note twenty-five years ago. Many, if not most, of the designers of the new programming languages and paradigms under discussion are of similar breadth and depth of experience.

There is no monopoly on the grumpy old fart act; there is only a difference in their willingness to continue pushing back the boundaries of ignorance.

All of which is notwithstanding the necessary and essential vigour of youthful perspectives, c.f. Seymour Cray’s notable preference for working with recently graduated engineers both for their energy and their fresh ideas.


I am sufficiently ancient to not require any external validation, and yet here you are.


I disagree with your assessment, but at least we can see where you fall with respect to that attribute.

To put it plainly, Java is a fundamentally broken language and its mistakes, starting from not having sum/option/result types from day 1, make it impossible to fix.

edit to respond: calling them "bells and whistles" is a deep misunderstanding of what sum types are. They profoundly change how every single piece of code in the language is written, starting from how nulls are handled, and cannot be retrofitted into an existing ecosystem. (You can add them later, but baking in sum types from the start is very different. In particular Java would not have its terrible exception system if it had sum types.)

Sum types are coming to Java in the form of sealed classes:


To be fair, Java has always supported a particular flavor of sum types founded on subtyping: use an abstract class with a private constructor as a base, add nested static final subclasses for the variants, and decide whether to use instanceof or the Visitor pattern to dispatch on each variant. (The Visitor approach corresponds to a certain isomorphism -- the sum type `A + B` is equivalent to `forall T. (A -> T, B -> T) -> T`, where the pair of handlers `(A -> T, B -> T)` is the titular visitor.)

That JEP elaborates on the same ideas (notably using `instanceof` for dispatch), but the motivation for sealed classes extends to further harmony with subtyping. The major item seems, to me, to be allowing an interface to be the supertype. A minor but distinctive item is that the sum type's variants can be defined at (some) distance from the base type, rather than all being defined in a single breath.

The problem is, even though you could always express sum types in Java, it's verbose, unpleasant, and above all "clever" in a way unnecessary in other languages. I think this JEP helps somewhat -- the private constructor trick in particular goes away -- but it doesn't really help with clients of your sum type. Pattern matching is still yet to come, and if you want a Visitor, you still need to write a Visitor.

To the grandparent's point, this JEP isn't enough to provide a coherent alternative to checked exceptions or other kinds of structured control flow.
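To make the encoding the parent comment describes concrete, here is a minimal hand-rolled sketch (all names are illustrative): the private constructor plus nested final subclasses, with `match` playing the Visitor role.

```java
import java.util.function.Function;

// A + B encoded as an abstract base with a private constructor; the only
// possible subclasses are the two nested final variants below.
abstract class Either<A, B> {
    private Either() {}

    // The "visitor": forall T. (A -> T, B -> T) -> T
    abstract <T> T match(Function<A, T> onLeft, Function<B, T> onRight);

    static final class Left<A, B> extends Either<A, B> {
        final A value;
        Left(A value) { this.value = value; }
        <T> T match(Function<A, T> onLeft, Function<B, T> onRight) {
            return onLeft.apply(value);
        }
    }

    static final class Right<A, B> extends Either<A, B> {
        final B value;
        Right(B value) { this.value = value; }
        <T> T match(Function<A, T> onLeft, Function<B, T> onRight) {
            return onRight.apply(value);
        }
    }
}
```

Usage is `either.match(err -> ..., ok -> ...)`, which is exhaustive by construction: there is no way to build a third variant, and no way to handle only one case.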

Every language is fundamentally broken from day 1, whether as a known tradeoff or as something that remains hidden for several years. Also, even if you had the perfect language with a presumably non-mainstream feature, you would have to make that feature known in the developer ecosystem (just look at Rust's borrow checker: it causes issues for newcomers, as opposed to features that already have an analog in other languages). While FP ideas were known before Java (ML predates it by quite a few years), FP has only recently become more acceptable, so it makes sense to only now incorporate it into the language. (And yet again, do note that FP is not a silver bullet, so Java has to let developers write imperative constructs just as well as before.)

Unlike first-class monads, sum types are not some sort of esoteric FP construct that arises from challenging technical problems around dealing with external state. They are the very obvious dual of product types, which every language has some version of. They are easy to understand, require very little tutorial-ing, and their power is immediately obvious when you start using them.

Product types model "x and y", sum types model "x or y". One of them is universal. The other has been totally missing from mainstream languages until recently. No wonder our software sucks so much.

I agree with you: I really like sum types and really miss them, since most languages don't have them in a feasible way. But fortunately sealed classes are coming in Java, at least.

I’m curious what your alternative is, given that Go famously also lacks sum types.

Yes, Go is a fundamentally broken language too. As is C++, as is C. Sum types are the absolute bare minimum a language must have for me to not consider it broken beyond hope.

Go in particular is really unfortunate for a language so new. The language's weaknesses have dropped my estimation of its creators by several notches.

"I'm broken and can not use a language without Sum Types"

There, I fixed it for you.

Yes, that's possible. I would rather tear my eyes out than use a language without sum types. The benefits of sum types in being able to model arbitrarily complex domains are extraordinarily massive.

Every language has product types. Every programmer understands how useful product types are. Why in the world would the same not apply to sum types?

You sound like you have a hammer, and so every problem looks like a nail.

Sum types can be useful. Other aspects of languages can be useful, too. Why fixate on that one? And if you're going to say that it's a minimum bar, well, some of the other features are missing in most languages that have sum types.

Pick the language that has the total set of features that makes it easiest to write whatever program you're trying to write. Don't get locked in to focusing on only one feature.

I work with Java in my day job; it's by no means my favorite language, but I can be quite productive in it.

I haven't kept a tally of my frustrations with Java, but I can assure you that the most frequent is its lack of first-class sum types. Every time I need to encode (encode!) such a basic concept as "it could be this or that" using subclasses and some Design Pattern to Manage the Variants (usually Visitor, sometimes State), I metaphorically weep for my soul.

There are plenty of other minor gripes I have with the language -- there have been multiple instances where generic generics (HKTs, `class Foo<F<_>>`) would have made things much clearer -- but sum types are certainly the most prominent.

Yes that's why I write Rust.

Our field has a terrible reputation for quality. In my estimation roughly 70-80% of that is directly or indirectly because of the lack of sum types.

Directly: people don't have the right tools to make invalid states in their business logic unrepresentable.

Indirectly: the lack of sum types results in broken null and error handling.

Late reply; I hope you see it. My reply is late because I had to think for a day first. So, my sincere congratulations - a post that makes me think for a day is much rarer than a post I agree with.

To me, sum types for error handling are isomorphic to checked exceptions. They both let you do dual-track programming - separating the normal path from the error path. Both have compiler support for enforcement. But checked exceptions are no longer considered to be the answer. What went wrong?

The problem turned out to be the programmers. They did at least two things that subverted checked exceptions.

First, they silently ate exceptions (that is, they had an empty catch block just to make the exception go away). This is the equivalent of having a sum type that is either an integer or Nothing, and a function that, rather than return the sum type, returns a plain integer; in the Nothing case, the function just returns 0. That's about the same as writing the empty catch block to avoid declaring the exception in the function's signature.

The second way checked exceptions went wrong was the opposite. When a function could throw Exception1, Exception2, Exception3, and Exception4, it was tempting to just declare it as throwing Exception (the base class). In the same way, a function that gets SumType1 back from one function, SumType2 back from another function, and SumType3 from a third function may return the sum of the sum types. It becomes an Everything type.

In both cases, the problem was that programmers were lazy. But here we are 20 years later, and programmers are still lazy. Until the programmers change, sum types won't fix things any more than checked exceptions did.
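The first subversion can be sketched in a few lines (names invented for illustration): the catch block maps the failure case onto a default value, so callers can no longer tell a real 0 from a swallowed error.

```java
import java.io.IOException;

class Swallow {
    static int mightFail(boolean fail) throws IOException {
        if (fail) throw new IOException("boom");
        return 7;
    }

    // The lazy "fix": catch and ignore, returning a default.
    // This is exactly collapsing (int | Nothing) down to int.
    static int swallowed(boolean fail) {
        try {
            return mightFail(fail);
        } catch (IOException ignored) {
            return 0;
        }
    }
}
```

The compiler is satisfied, and the error information is gone.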

Thanks for the thoughtful response. I spent a bit of time thinking about it, and thank you for that as well.

So, I think you're right about a result type being isomorphic to checked exceptions, for the reasons you laid out. If you compare it to Rust, what ends up happening is similar -- many libraries return an error type that's a union of all their dependencies' error types, and binaries end up using an "everything" anyhow::Error type in the end. However, I think where sum types in general end up working and checked exceptions don't are:

1. Sum types don't encourage a mix of checked and unchecked exceptions the way Java does. Translating from Rust, all exceptions that are meant to be handled by regular users are checked. The only exceptions that are unchecked are broken invariants (panics), which usually end up being handled either through aborting the program or through some sort of top-level restart logic. You could write Java in that style but it's not the ecosystem's convention.

2. Sum types are more general than checked exceptions: they can be used to express nulls and business logic as well. I suspect checked exceptions would have worked better if Java also had nullable and non-nullable types, because nullability is such a common source of errors.

3. You're right that you can drop an error on the floor with sum types as well, just like an empty catch block with exceptions. But that just doesn't happen nearly as often in practice, because result types form a closed set. With checked exceptions, in practice you often end up with a method throwing both checked and unchecked exceptions, and the same syntax is used to handle both. I think checked and unchecked exceptions are fundamentally very, very different and mixing the two is a mistake.
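On point 2, Java's closest stdlib analogue is `java.util.Optional`, which moves absence into the signature; a small sketch (the lookup table is invented for illustration):

```java
import java.util.Map;
import java.util.Optional;

class Lookup {
    static final Map<String, Integer> PORTS = Map.of("http", 80, "https", 443);

    // The return type itself says "this can be absent"; callers are forced
    // to decide what absence means instead of dereferencing a surprise null.
    static Optional<Integer> portFor(String scheme) {
        return Optional.ofNullable(PORTS.get(scheme));
    }
}
```

A caller writes `Lookup.portFor("ftp").orElse(-1)` and the missing case is handled at the type level rather than by a null check someone forgot.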

Once again, thank you for your response.

Can anybody explain why the parent is being downvoted?

We can argue on the percentage. But the point on the direct and indirect damages makes a lot of sense.

At this stage I'm now convinced you're trolling us all. Well done, you had us all fooled :)

Not trolling. Entirely serious.

Also - "I refuse to use most languages that have been used to solve a shit ton of problems".

Or, "I will not use any language which made software eat the world"

Does the Either type from the vavr library for Java cover this? https://www.baeldung.com/vavr-either

I wouldn't say no, because it's built in the right spirit. But at the same time, it's different from languages that have native support for sum types. The advantage of a sum type is that it can be matched exhaustively like an enum, while at the same time being flexible enough to contain any data, like a data class.

Having a type available in an optional library is not the same thing as having them baked in from the beginning. The absence of sum types warps a language in grotesquely terrible ways, such as Go's err != nil patterns or Java's exception system and visitor patterns.

C has unions

What more do you need?

Are unions used for error handling and null checking? No? Then they're not very useful.

Improving a dominant (but broken) language is called "innovation" and "improvement". There's no reason all your magic bells and whistles couldn't have been added into Java. Creating new languages just made life worse for everyone.

>There's no reason all your magic bells and whistles couldn't have been added into Java.

Because it's designed in a way that doesn't always allow these bells and whistles.

If I were to tell you that we should have kept using horse-drawn carriages, and that any of the fancy bells and whistles that cars have could be added to horse-drawn carriages, you would probably look at me with confusion.

> Improving a dominant (but broken) language is called "innovation" and "improvement"

Sure? But modifying an existing (and for most of its history proprietary) language can be a difficult, potentially-political move. If I have an idea on how to do, I don't know, linear-types checked at compile-time, I wouldn't have any idea where to begin adding that to the javac compiler. It might be easier to build my own little language to build it, and then hope that maybe the Oracle devs decide to pick it up.

The horse-drawn-carriage analogy is not applicable, because I'm using Java in my current project, like most other large corporations; I don't find it antiquated, I find it "state of the art".

There are some improvements that could be made, however (any language can be improved on), some requiring minimal breaking changes (like fixing type erasure), and I'd even be fine with minimal breaking changes that break backwards compatibility every 5 years or so.

The problem is when people say "Let's burn it all down and start over, so zero existing code is salvageable."

Sure, great, but you must understand that adding language features to basically anything that isn’t Lisp moves glacially at best.

I mean, let’s go with your example of type erasure; people have been complaining about it for more than a decade, and Oracle still hasn’t fixed it. The Java language designers aren’t stupid, and I’m sure they’ve read the complaints, but it is still almost universally agreed to be a broken feature of the language.

It’s almost never clear if “language feature X” is a good idea until it’s been implemented and battle-tested. If I have a new idea on how to build something in a compiler and I’m not sure if it’s a good idea, are you saying that the best path forward would be for me to make a PR to javac instead of building a proof of concept language?

Replying to my own post because it's too late to edit and I'd like to clarify a bit.

I actually do agree that as software engineers, we're often a bit too eager to reinvent wheels. While I am not a huge fan of Java, if I owned a company, I would probably be more likely to use a JVM language (probably Clojure) than I would to use something like Haskell or Idris, precisely for the reasons you've discussed.

My overall point, though, is that often times languages themselves lag behind the state-of-the-art; there's been a lot of progress made in language design, and Java (and a lot of other languages) can feel crufty in the process. Sometimes the wheel really does need to be reinvented...if we could somehow convince the entire industry to use Lisp, this would (arguably) be a non-issue, since they allow you to abuse macros relatively easily and add language features (see CLOS or core.async for examples), but Lisp hasn't really taken over the world like I wish it would.

Thanks for your two posts, I agree with pretty much all of what you're saying.

When you look at the millions of man-hours that have gone into the "burn it all down and start from scratch" languages (Go, Rust, etc.), consider: what if those people had written new compilers for Java syntax instead? How great that would be. They could still have accomplished many Go and Rust objectives without inventing their own non-Java-like syntax. Even something "Java-like" would have been better than burning it all down every time something new is needed.

I don't think I agree with your last point; I actually think that the Java syntax isn't particularly great, even for the time. I don't think that Java's idea of object-oriented design is ideal (I'm more of a fan of the Objective-C/Smalltalk model), and that version of OOP is all Java really contributes from a syntax perspective; otherwise it's largely just C/C++'s syntax.

However, in a sister thread I think you mentioned that you are basically alright with languages targeting the same VM, which I think is probably a better path forward for a majority of use-cases. Clojure and Kotlin and Scala all benefit from being more-or-less fully interoperable with each other; as a result, one can feel free to experiment with language design to their heart's content without too much fragmentation.

That said, I don't know that it's entirely fair to completely criticize Rust on this; Rust exists specifically to address issues with C and C++, languages without garbage collection, and whose design doesn't quite allow the same level of compiler safety and goodness that Rust does, though to be fair Rust does have C FFI so it's not necessarily always reinventing the wheel either. I mean, I agree with the blog post we're chatting on top of; Rust might be super awesome for systems-ey stuff, but for anything TCP-or-higher, I think a managed language is kind of better.

Languages can't be changed substantially once they become popular. Python broke compatibility significantly once and people are still complaining.

My dislike for Java aside, you fail to realise that there were hundreds of programming languages created before Java too.

The same would be true if all the car companies in the world just cooperated on a single car model to rule them all, right?

Oh, you think I'm just pushing for Java. I'm not. I'm saying having one language is better than 100. People can't share code very well if there's so many languages.

Languages themselves don't need to be completely reinvented like things in the real world (cars, etc). For example C/C++ has been around for decades and no one ever said "Burn it all down and start over."

I am a noob and I am happy to be corrected.

Every language is designed with specific design decisions and tradeoffs, intended for a hopefully wide audience. I think C and Unix culture reflect this in some way. It is a home-grown environment, built by a handful of engineers working initially for their own sake. You cannot objectively say "everything is a file" is a good concept, but if you are just a handful of people tackling a myriad of problems, it must have been liberating to have too simple an abstraction. It might be a pile of hacks on top of a pile of hacks, but it worked for the time.

Whether it is good or bad depends on what you want to achieve. I think many communities could benefit from focusing more on what they are and how they want to do things, rather than trying to be a "be all, do all" thing. I might not want to solve problems the way somebody else would, and I would use the tool that best fits my way of thinking.

Even if there were only one language, fragmentation would be inevitable, because not all people work or think the same way. Even if code is written in the same language, one might not understand somebody else's idioms.

It is easier to read a language with community-adopted idioms than a language that is made for all people. I have been using Java, but the Erlang concurrency primitives look great. It would be hell, though, if a single code base had all possible primitives and styles.

Asking people to work in one language is no different from asking people to think in one single way.

Edit: added couple more rants.

I think this is one of the important, good reasons. Others range from NIH to runtime considerations to strategic ones.

> Languages themselves don't need to be completely reinvented like things in the real world

Why not? And why do things "in the real world" need to be reinvented?

The fact that languages should evolve (and that they do) is not in opposition to a need or desire for new languages.

Dude, we’d be writing in COBOL if anyone followed your logic.

And it would still be better than java. :-)

Twenty years ago I predicted to various peers that Java would become the new COBOL, the lazy enterprise’s default choice of our time, and I’m very sorry to have been proved right.

lol. A generation of script kiddies too lazy to learn Java invented NodeJS, so they could run the toy version of Java on the server side. Then they have the audacity to claim the shoulders of giants upon which they stand should've been strong enough to comprehend JS rather than Java...even though those giants invented BOTH JS and Java before you were even born.

Here's a newsflash for you: if the Netscape inventors of JavaScript had been capable of making Java run in the browser then all you kids would be running Java instead of JS today.

The only reason JS exists at all is because early browsers had limited memory and CPU back in 1990s, so they cobbled something together over a weekend and named it JavaScript. Now that JS is running server side, these kids claim that's the way it should've been from the beginning. No, sorry JS is a train wreck compared to Java, which is why MSFT had to come to the rescue with TypeScript.

Oh, it’s you again.

Yep. "Java is the new COBOL" was simply irresistible. If you want to troll a Java developer, that was absolute perfection.

JavaScript and even modern TypeScript have syntax that is extremely similar to Java's, so you can't really say that when I claim Java was already "mostly" everything we needed (WAY before Go and Rust were invented), it somehow means I would have said the same about COBOL.

There is such a thing as "finally getting it right". We don't need to replace English with something "new" every couple of years, and writing instructions for machines is the same. The syntax of Java was "almost perfect" and is STILL extremely similar in TS/JS, so there was no need to "burn it all down" like Go and Rust did.

Why do you think JavaScript has the word "Java" right in it. lol. Must have been something in there that was good to this very day.

> Why do you think JavaScript has the word "Java" right in it. lol. Must have been something in there that was good to this very day.

Eh, no. From the mouth of Brendan Eich himself [0]:

> InfoWorld: As I understand it, JavaScript started out as Mocha, then became LiveScript and then became JavaScript when Netscape and Sun got together. But it actually has nothing to do with Java or not much to do with it, correct?

> Eich: That’s right. It was all within six months from May till December (1995) that it was Mocha and then LiveScript. And then in early December, Netscape and Sun did a license agreement and it became JavaScript. And the idea was to make it a complementary scripting language to go with Java, with the compiled language.

It was completely down to marketing. I've heard it described as riding the coattails of Java, which Sun was dumping huge amounts of money into marketing to begin with. Allusions to "something in there" notwithstanding.

[0] https://www.infoworld.com/article/2653798/javascript-creator...

As http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m... put it, “Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.”

Eich started work at Netscape Communications Corporation in April 1995. Eich originally joined intending to put Scheme "in the browser", but his Netscape superiors insisted that the language's syntax resemble that of Java. As a result, Eich devised a language that had much of the functionality of Scheme, the object-orientation of Self, and the syntax of Java.

From day one, Eich was told to make it "Java-like". Netscape first tried to get Java itself running in the browser, even before developing JavaScript. JavaScript looks like Java not by accident but on purpose.

If that argument were valid wouldn't it apply to any language? Would Forth be 1000x better if nobody used anything else?

I don't mean Java was perfect from day one and forevermore. I mean it was the language that was perfectly dominant and acceptable at the time, and could've been extended and kept backwards-compatible, rather than restarting new languages from scratch.

JavaScript is particularly hideous, and was cobbled together over a weekend, and because of that misstep the industry got stuck with it and had to invent TypeScript to "fix it" and create something tolerable in the modern world.

> I don't mean Java was perfect from day one and forevermore.

Neither do I. My point was that your comment didn't use any features specific to Java, so if you replaced "Java" with literally any other language you'd have the exact same argument. The details you added here like that it was "perfectly dominant and acceptable" help in that regard (not that I agree with where you're going), but your original comment would have been equally valid with literally any programming language.

> I mean it was the language that was perfectly dominant and acceptable at the time

No. Java never was what you're claiming.

If you don't think Java was ever dominant (and frankly I think it still is), then probably we just disagree about the definition of the word "dominant".

It was a dominant language. But you said "perfectly dominant", which I took to mean "the one dominant language". Java was not that - not overwhelmingly more popular than C++, for instance.

Java was perfectly dominant in its realm only. Obviously Java is not competing in the realm of machine code generation at all, and assembly language should always exist too.

C/C++ is the perfect example of a language lasting multiple decades without each new generation trying to throw it all out and start from scratch, but simply evolving it.

Sounds like you think you know best, too.

No, Java would not be the "only" language in any set of circumstances, however fictional. No language will be, ever, because no language is better than all other languages at everything.

Go is to backend what React is to Frontend and what Kubernetes is to infra.

Everyone wants to have fun.

And when things get complicated enough just switch jobs.

> Use the right tool for the job.

I think you mean to use the right medium for the project. Languages are a communications medium. Your compiler, linter, editor, debugger, those are your tools.

They are tools only in the broadest use of the term, the one that applies to most everything we use to interact with the world, from philosophies like science to the mundane like a spoon. Why use such a general term when a more specific, much more meaningful term is available?

"Medium" is a much more general term than "tool," the edit you suggested adds nothing to anyone's understanding of gp's point, and to suggest they actually 'meant' something that means the same thing is... strange.

Also, they are tools in a much more narrow sense as well, but that is left as an exercise.

> Yes. That's what Go is for.

> Use the right tool for the job.

Rust will be the right tool for the job when the library ecosystem evolves beyond just Actix. Give it time.

FWIW, I've written half a dozen web services in Rust [1] and it's a breeze. The type system is incredibly expressive and lets you accomplish things with clarity. I already find it an appropriate tool for backend web and service development.

[1] eg. https://vo.codes is 100% Rust

It has absolutely moved beyond just Actix. There are many web frameworks and lighter routers available to choose from.

Like Java, Go is a fundamentally broken language and its mistakes, starting from not having sum/option/result types from day 1, make it impossible to fix.

edit to respond: as the commenter below pointed out, dynamic languages can express sum types just fine (though statically verifying e.g. null checks isn't possible without some sort of gradual typing effort). This is why they are superior to traditional statically typed languages like Java. But even better are languages which have static sum types.

Yes, person on a lisp-powered hackers forum, tell us more about how languages lacking native sum types are unusable.

With untyped languages, essentially everything is a (shitty) sum type because of runtime tagging. Tags <-> sum type.

You can do dynamic tagged values in Go, too. It's called interface{} and everyone hates it. So this does not respond to the parent's point.
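To make that concrete, here's a minimal sketch (names are mine, purely illustrative) of the interface{}-as-makeshift-sum-type pattern, where all variant checking happens at runtime:

```go
package main

import "fmt"

// describe treats interface{} as a makeshift sum type: the "variants"
// are whatever concrete types the switch happens to handle.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		// The compiler can't rule this branch out: any type can arrive here.
		return "unknown variant"
	}
}

func main() {
	fmt.Println(describe(42))   // handled variant
	fmt.Println(describe(3.14)) // compiles fine, caught only at runtime
}
```

Unlike a real sum type, nothing stops a caller from passing an unhandled case, and nothing forces the switch to be exhaustive.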

I'm not arguing untyped is good: it most certainly isn't. Everything being nullable is also bad, and it too introduces a limited sum type (Maybe).

What I am saying is that if you have static types (good) but don't do anything else, you lose sums (bad). You then need to add them back.
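One way static languages without sums "add them back" is the sealed-interface pattern; here's a hedged Go sketch (names are illustrative) that recovers a closed set of variants, though exhaustiveness still isn't compiler-checked:

```go
package main

import "fmt"

// Shape is "sealed": only types in this package can implement the
// unexported isShape method, so the variant set is closed by convention.
type Shape interface{ isShape() }

type Circle struct{ R float64 }
type Square struct{ S float64 }

func (Circle) isShape() {}
func (Square) isShape() {}

func area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.R * v.R
	case Square:
		return v.S * v.S
	}
	// A real sum type would make this unreachable at compile time;
	// here we must handle it ourselves.
	return 0
}

func main() {
	fmt.Println(area(Square{S: 3}))
}
```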

Please don't spread this flamewar-provoking way of speaking about type-systems.

Firstly, dynamic languages are typed. The word "untyped" has a very specific meaning in a narrow branch of computer science, as in "untyped lambda calculus", which refers to a situation in which the types of the syntax tree nodes of the program are not taken into consideration. (Which doesn't mean they don't exist, by the way!)

In software engineering, an untyped language is something utterly unsafe, like assembly language or BCPL, where every value is just a machine word, and the meaning/type of a machine word is just that of whatever operation is being applied to it at a given spot in the program.

Dynamic languages are typed, and even to the extent that some of them do assign type to the nodes of program (type inference).

Secondly, a "tag" is an implementation concept. A tag typically does not give all of the type information about an object, and there may be more than one kind of tag in use. For instance, a value might have a two-bit tag giving a crude classification of a value into four categories. For an unboxed fixnum integer, that indicates the exact type (fixnum), but most heap objects might be lumped into the same category according to that tag, distinguished by a more comprehensive type tag that might take on, say, one of twenty values. Even that tag is not complete information for all objects. For an OOP object, the tag might indicate "this is an OOP object", without regard for its class: all OOP objects might share the same tag. Yet, at the language level, their type is their class.

Another example is functions. All functions might have a tag indicating "this is a function". There might be separate tags for things like compiled function, interpreted function, or foreign function. But the type of a function takes into account not just that it's a function, but other properties like the number and types of its arguments and its return type. In many dynamic languages, you cannot call a two-argument function with three arguments; that check doesn't come from the tag, which doesn't carry that information.

The type of an object in a dynamic setting goes beyond the tag. For some objects, the word tag tells everything, and the heap tag does for others; but not for all object kinds.

When the dynamic language implements a type inference system, the truths which that inference system works with cannot simply be reasoning about tags. The representation of type has to include concepts like the class of an object, or the number of arguments and types of a function and its return type.

I think it's disingenuous to conflate the GP's claims of poor design choices with "unusable". To its credit, lisps also tend to have plenty of interesting, unique, and very good design choices. I don't think it's fair to compare Go to lisp in this context.

Can functional programming people speak without mentioning sum types or monads?

Those educated on Lisp languages.

I don't care nearly as much about first-class monads, though monadic patterns pop up everywhere in programming. Anyone who's written a foldMap or a flatMap has at least a sense of what a monad feels like.

Sum types are a zillion times more important.

Yep, every language is fundamentally broken. Fortunately, some are useful. Go has proven to be useful in building stuff that matters, at the end of the day. This unfortunately can't be said of all the fancy languages with perfect type systems.

I'm the maintainer of Juniper. Juniper is a library, not a framework. As such it is agnostic to how you want to solve N+1. You get a lookahead and can do what you want: dataloader (like Facebook), eager loading all data up front (like Rails), or generating efficient SQL on the fly (like Prisma).
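For readers unfamiliar with it, the dataloader idea can be sketched independently of Juniper (this is a toy, not Juniper's API): collect the keys a query needs, then resolve them in one batched fetch instead of N individual ones:

```go
package main

import "fmt"

// Loader batches key lookups: Load queues a key, Resolve performs a
// single batched fetch (think "SELECT ... WHERE id IN (...)").
type Loader struct {
	pending []int
	batchFn func(keys []int) map[int]string
}

func (l *Loader) Load(key int) { l.pending = append(l.pending, key) }

func (l *Loader) Resolve() map[int]string {
	return l.batchFn(l.pending) // one round trip for all queued keys
}

func main() {
	roundTrips := 0
	l := &Loader{batchFn: func(keys []int) map[int]string {
		roundTrips++ // count trips to the "database"
		out := make(map[int]string, len(keys))
		for _, k := range keys {
			out[k] = fmt.Sprintf("user-%d", k)
		}
		return out
	}}
	for _, id := range []int{1, 2, 3} {
		l.Load(id) // a naive resolver would query here, once per id
	}
	users := l.Resolve()
	fmt.Println(len(users), roundTrips) // three users, one query
}
```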

There are projects and examples to do all three but we don't want to assume one solution is right for all domains.

We take this to the extreme: Juniper doesn't even require a web server and isn't tied to a particular serialization format! Of course, we provide optional integrations with popular web frameworks using JSON, but the key is they are optional.
