“No, we’re telling everyone we are using Java” (twitter.com)
400 points by tosh 14 days ago | 368 comments



Back when I worked on building software for clients, I used to tell people "we run all software on the Java Virtual Machine" instead of "it's written in Clojure", because I was afraid they would get scared.

But then I noticed that nobody actually cares much about what tools you use, as long as they get the job done. The end result is what counts.


This is not always true. A friend once built an application for the government (I don't remember any more specifics) in Python. He delivered it to them and mentioned in a document somewhere that it was written in Python. This was unacceptable to them, because Python was not on the list of approved languages. So, after some negotiating, he caved, switched the project to Jython (with minimal changes), and delivered it as "running on the JVM". This was seen as acceptable, and the delivery was accepted.


I’ve seen software rejected like this at a university, as the IT teams (plural) didn’t like the language it was written in. However, the problem was solved by the software vendor withdrawing their quote, as we were too hard to deal with.


There is nothing wrong with rejecting software because of the technology it uses.

Think about how important maintenance is. If an organization doesn't have the in-house skill set to maintain and enhance the software solutions it buys outright, why would it want them?


I agree, though this isn’t what occurred. University IT is wondrous to experience, though I am speaking from an experience of just one. Hopefully your mileage varies.

Why didn't they specify their language choice beforehand? The negotiation about Jython you describe should have taken place before the work started.


It likely was, but buried in one of the hundreds of documents they reference.

That must have changed in the past 10 years, because I was working on a DoD project in 2007 and it had a large Python component.


In my experience in the aerospace industry working closely with the USAF, it depends entirely on the program (as in a military/civil program, not a computer program) as well as the systems being targeted for deployment. I worked on projects that allowed Python 2 but not 3, and some that only allow the Python that ships with Anaconda or Canopy. Many programs and classified environments are barely getting approval for C++11, and some unfortunate teams are stuck with uncompiled VBA macros behind Excel UIs because the security personnel for their target systems are unable to review compiled code in fewer than 6 months... it is a mess.


C++11 is becoming pretty common. The supercomputer centers (LCFs) use C++11 now.

I wonder why that is. I know there's a list of DoD-approved software licenses and guidance on using open source software, but I didn't know there was a list of allowed languages.


Probably more to do with the runtime environment.


This is exactly why.


Some governments (and individual institutions) will have restrictions on only supporting standards because it allows for competing implementations. Some will require multiple implementations directly. Still others may have restrictions based on whether there was a successful audit of the runtime environment.


Many times it is hard to pinpoint what/when the ‘end result’ is. So while the clients are happy now, they won’t be happy to hear they might have a hard time maintaining the project through the years. Clients will want to know they can hire people to work on the codebase.


Unfortunately most times if an external vendor is delivering custom software the clients have already failed - software has a lifecycle that extends far beyond 1.0 delivery. This is why we still have things like X-UA-Compatible: IE=5.


I wish I had followed this advice when I was setting up Lambdas on AWS. There was a general distaste for "Java" in the dev community. I'm doing a new project, and this time around I am using Java, and it works so beautifully. The second choice would have been Python, but I like the strict type checking.


Do you find that using the JVM in Lambda affects your cold start times? If so, do you have a pre-warm strategy and how does it work? Or maybe it doesn't matter for your application?


Try Micronaut, which is a micro framework specifically designed for superfast startup time and low memory footprint. It also works with GraalVM.
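For reference, the hello-world from Micronaut's getting-started material is roughly this (from memory, so treat the details as approximate):

    import io.micronaut.http.annotation.Controller;
    import io.micronaut.http.annotation.Get;

    // A minimal HTTP endpoint; Micronaut wires routing at compile time,
    // which is what keeps startup fast enough for Lambda.
    @Controller("/hello")
    public class HelloController {
        @Get
        public String index() {
            return "Hello from Micronaut";
        }
    }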


I did, when I used Java Lambdas behind API Gateway for a largish public API. I ended up just setting the HTTP verification tests to run every minute. That solved the issue and let me know how stable it was. Not a huge issue in the end.
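The handler side of that trick can be tiny; a sketch, where the "warmup" payload key and class names are just made up for illustration:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import java.util.Map;

    public class ApiHandler implements RequestHandler<Map<String, Object>, String> {
        @Override
        public String handleRequest(Map<String, Object> event, Context context) {
            // Scheduled keep-warm pings return early, keeping this JVM container resident.
            if (Boolean.TRUE.equals(event.get("warmup"))) {
                return "warm";
            }
            return handleRealRequest(event);
        }

        private String handleRealRequest(Map<String, Object> event) {
            // ... hypothetical business logic ...
            return "ok";
        }
    }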


Lambda does re-use the JVM instances where it can to alleviate that.


That only matters if your concurrency is one. If you get 10 requests, each arriving before the previous one completes, you will get cold starts every other call or so. It's manageable, it just takes effort.


That is hilariously ironic considering 90% of AWS is written in Java.


Have you got a source or more information on this? I'm not disputing the statement - but I've seen several different languages claimed to run "most of AWS" (including Haskell at one point) and it'd be good to have a firmer answer.


I used to work for AWS. The API framework that powers most, if not all, of the AWS APIs is called "Coral" and is Java-based.


It's like people think Java only supports concurrency via thread pools and isn't capable of event-loop / async I/O models.
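Even the plain JDK does async fine these days. A small Java 11 sketch (the URL is just a placeholder); Netty or Vert.x are there when you want a real event-loop server:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AsyncDemo {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();

            // sendAsync returns a CompletableFuture immediately; no thread blocks on I/O.
            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                  .thenApply(HttpResponse::body)
                  .thenAccept(body -> System.out.println("got " + body.length() + " bytes"))
                  .join(); // block only so this demo doesn't exit before the response
        }
    }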


> But then I noticed that nobody actually cares much about what tools you use, as long as they get the job done.

Unfortunately there are a lot of cases where this matters:

- Customer requires the sw to run on their standard environment (think some version of RHEL, or some Windows Server version). Hence you're limited to what's supported there (usually from the vendor). No bleeding edge versions of Python/Node/.NET etc. You might need to build a Go or Rust binary for that specific system and it might not always be obvious. Java might be a different story, but I wouldn't risk it.

- Embedded software (for similar reasons)

- Sw that needs to run on a customer's system

- Sw that needs to interface with an existing system (loading a DLL/.so? JVM?)


Yes. When a client hires a developer, he cares that he receives a proper delivery and that all of the decisions related to that delivery were made in his best interests. Technology choices are an important part of building a technical vision for a company and planning for software maintenance. People may not seem to care much, but that's only because they are trusting the people they hire to care and to know better than they do. When there are questions about technology, clients appreciate and value unbiased discussion of those choices in the context of their business. An important part of having clients, perhaps the most important, is establishing trust, and one does that with success, but also with a record of transparent, appropriate project decisions.


I still tell people we're using Java instead of Groovy, but that's more because I can't be bothered having the same five-minute discussion with them over and over about what it is. (Besides that, I actually consider Groovy to be essentially a template engine or set of macros for writing Java code. It's not like other JVM languages that try to introduce a different style.)


Groovy, the real JavaScript. I love Spock but never could get into using Groovy for non-test code.


Good idea. I refer to Apache Groovy as "Bash for the JVM". The static typing features in Groovy were bolted on for version 2, whereas Scala and Kotlin were designed from the ground up for building actual systems on the JVM, so they're a far better choice for that.


Yeah, you really need to adopt a completely different style if you're writing a whole application in Groovy as opposed to scripting or test code. It's almost like a different language (more like a nicer Java). It does work quite well when you do that, though.


I really like Groovy, with all its groovy idioms and its extended library. It's really fun: Google for how to write something in Groovy and you usually get something really nice. I just didn't like the lack of typing for non-test code, and the spotty IDE support, even in IntelliJ. It seems to be perfect for tests, because your code still compiles and you can fix the tests when you're ready. Also, Spock is beautiful...


You might enjoy Kotlin.


Yeah I really want to move to a language with pattern matching. Hopefully it will finally land in Java


Kotlin does not support pattern matching.


Yeah, Kotlin has some "enhanced switch" type stuff, but nothing that compares to Scala's matching with unapply, list extraction, etc.


Actually, they have some of the foundations: componentN functions serve a similar role to unapply, so theoretically it should be possible to build good-enough pattern matching. Unfortunately, language development seems to have slowed down a lot since release.

It has destructuring in value binding and lambda parameters, as well. Pretty handy when you're dealing with a lot of pairs, triples, and data classes, though not quite as powerful as true pattern-matching.


Oh! Thought it did. Forget Kotlin, then :)!!


Kotlin is still a pleasant language to use, and some of the ideas have some good carryover to Scala, with quite a lot less culture shock than going right from Java to Scala.


scala, eta



> Back when I worked on building software for clients, I used to tell people "we run all software on the Java Virtual Machine" instead of "it's written in Clojure", because I was afraid they would get scared.

That seems... dishonest? What happens when the client wants to hire someone else in the future, only to find out it's not Java, it's actually this language which is not mainstream at all?


I think it would be dishonest if the agreement was that the software should be built in Java - but to me, this reads more like an operational concern. The client felt comfortable with a software package they can deploy on the JVM... but that doesn't necessarily mean they cared whether it was written in Java. If I were hiring a dev for a Java project, I'd explicitly hire a Java developer, not just someone who promised a product which would run on the JVM.


I think the point is the client might not be sophisticated enough to realize there’s a distinction there. If my auto mechanic replaced a part in my car with something that worked basically the same but cost me a lot of time and money and hassle to fix when it breaks down the road, I’d be pretty upset, especially if he strongly implied that he replaced it with a normal part.


> I think the point is the client might not be sophisticated enough to realize there’s a distinction there

Yup, that's exactly my point. Your car analogy is a good one. I was going to make a similar one with house construction but you've hit on it well.


Actually, I think there are cases where the clients would care: for example, with on-premise software, using either OpenJDK or the Oracle JVM will incur different costs. The same goes for Oracle vs. SQL Server.

I once told a competitor that we're using Javascript extensively and our TTM for new features is super low thanks to simply adding NPM modules whenever necessary.

In hindsight I should also have mentioned NoSQL databases and a microservices architecture.


> In hindsight I should also have mentioned NoSQL databases and a microservices architecture.

That would just be cruel :(


Recently I was curious why so much big cloud software is written in Java; Cassandra, Kafka, and Neo4j are some examples. I don't have a lot of experience in the area myself (I'm an embedded software engineer), so I put it down to the JVM providing straightforward platform independence and a consistent API, and the (assumed) simplicity of deploying .jar files. This anecdote suggests it may be a less deliberate choice and that Java is just the de facto language. Does anyone have deeper insight into this trend that they might be able to share?

My apologies for this post being a little tangential.


Performance.

The JVM is an extremely fast platform given its feature set. It also has a coherent and powerful concurrency specification that is stable. While it is possible to build very low latency systems in Java, it requires a bit more work, and people tend to gravitate to C++ if the roughly 3x penalty relative to C is too high.

But aside from the latency, if you can deal with full GCs, Java is a data throughput monster. The default GC is actually very good in most cases, and the JVM GCs allow significant performance tuning, which is a discipline in its own right.

It will be interesting to see whether the low-latency Z Garbage Collector introduced in JDK 11 eventually removes one of the last complaints about Java performance.
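On JDK 11, ZGC is still experimental (and Linux/x64 only), so enabling it looks something like this; the jar name is a placeholder:

    # ZGC must be unlocked explicitly on JDK 11
    java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -jar app.jar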


Java has issues both with latency and memory usage due to the garbage collector and general age of the language (missing modern features like value types).

Performance is... complex. The lack of value types causes a lot more indirection; the HotSpot compiler patches some of it away, and the copying collector will compact connected objects together so that they fit "automagically" into CPU cache.

Unoptimized Java code may often run faster than unoptimized C/C++ code, but Java gives you only vague, global flags for attempting to tune memory behavior and code generation.

For monolithic Java applications, you can have a lot of different memory patterns internally (short term objects for the web stack, medium-term objects for transactional state, plus long-term objects representing accumulated state) which can really muck with the garbage collector's assumptions around generations. For server applications, this can cause things like heavy load causing request/response short-term state to be promoted to the mature generation. You start having to break up monolithic applications for performance reasons.

But that still amounts to being a victim of your own success, and you would be better off by default creating the first version of your application in a language that allows your developers to iterate quickly.


What's the alternative? I have experience with a lot of programming languages and recently decided to use Java on the server side of a side project.

Java is fast, has amazing IDE support, large library ecosystem and it's a solid language. And yes, I stand behind the last claim, I don't understand why Java gets so much hate. Swift is the best-designed general purpose language I've used and it's not hugely better than Java.

BTW, part of the decision to use Java was that I'm very familiar with it and wanted to get started fast, but now I'm having second thoughts. I guess Kotlin would be a good choice? And maybe Swift but not sure about library ecosystem on the server.


> I don't understand why Java gets so much hate.

I remember when java first came out, I thought it would take over the world if they would "finish it". To me, this meant letting it run like other scripting languages:

    #!/usr/bin/java
and let it do useful things everywhere in the os, but with wonderful OO.

But instead of becoming a systems language, it sort of became a "nice cobol" that was mostly adopted to do business logic. I suspect the limitations imposed on it were to ensure portability and security, but it sort of polarized the people who would or would not adopt it. It was approachable to people who didn't think pointers were necessary (or some who couldn't do pointers), and repulsive to people who didn't want these constraints.

I think over the 20 or so years that followed, the idea of what java is used for has stuck. I also think perl took up the slack on the systems side, with python.

I wonder: if Java had been more of a systems language from the start, would it have been unsuccessful? Or would it have displaced Perl/Python?


Yeah, I agree that it should be easy to run it like a scripting language... I just found out it's been possible since Java 11: you can now run "java MyProgram.java" directly.
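JEP 330 even honors a shebang line, so a single-file Java "script" can work like this (assuming java lives at /usr/bin/java):

    #!/usr/bin/java --source 11
    // Save as an executable file (no .java extension) and run it directly.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from single-file Java");
        }
    }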

Now that I think about it, I can understand why people would dislike Java: the standard library kind of sucks, it lacked lambda functions before Java 8... and in general there seems to be a lack of focus on usability and elegance, which also spreads to the ecosystem. Modern Java is not that bad, though.

But - Java offers a surprisingly unique package - it's fast, it has GC, static typing, good library and IDE availability, supports both OOP and functional programming (kind of). There's really only 1 competitor unless I'm missing something - C#. There's also Kotlin if that counts. Swift doesn't have GC and not sure if it's mature enough outside of the Apple ecosystem. Dart is slower.
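On the "functional programming (kind of)" point, Java 8+ streams and lambdas do cover the basics; a trivial, self-contained example:

    import java.util.List;

    public class Streams {
        public static void main(String[] args) {
            // Filter and transform with lambdas (List.of needs Java 9+)
            List<String> names = List.of("ada", "grace", "barbara");
            names.stream()
                 .filter(n -> n.length() > 3)
                 .map(String::toUpperCase)
                 .forEach(System.out::println);
        }
    }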


Java is sometimes considered inferior to other languages for several reasons:

Ecosystem-wise:

- The JVM still uses quite a bit of memory, sometimes 3-4x more memory than say Ruby or Python (which are also garbage-collected languages). There is a reason that Android phones ship with several times more system RAM than iOS devices.

- The JVM is gradually becoming only suitable for server development. Client GUI development has been mostly abandoned (with applets being deprecated, no Swing improvements for years, and JavaFX no longer being an official project), and the startup time is too slow for systems development a la shell scripts.

- IDE support is actually more middle-of-the-road in my experience. IDEs can suffer from needing plugins to deal with diverse third-party tools such as build systems and test systems, often themselves developed with no regard for how to integrate within a graphical environment.

- Many popular libraries exist in varying states of disrepair. Lack of clear stewardship by Sun and then Oracle caused several libraries to target quite ancient JVMs. Use of popular libraries can thus hinder your ability to use newer JVM features, and can even hinder your ability to upgrade to newer JVM releases.

- Exploratory and incremental development (REPL environments, exposing server app changes) have historically been pain points. I personally have seen environments where verifying a fixed typo in a JSP template file required a five-minute redeployment.

- There are some very poor programming patterns (such as getters/setters for internal state) which have been cemented in the Java ecosystem, such that I would never recommend it for beginning or junior developers.

- There is a long-standing reputation for Java developers (more than developers in other languages) to treat Java as a hammer and all problems as nails. In reality, Java is in the top ten for solving certain problems, and outside the top 50 for solving others. You simply can't be good at everything. But this means a lot of recommendations to use Java get discounted: "is this person only recommending Java because they have zero experience with anything else?"

The JVM and Java language themselves:

- The language has gone years with relatively few improvements.

- The language has a strong guideline of backward-compatibility maintenance which hinders efforts to improve it. As examples, lambda support was added in Java 8 without a clear model for dealing with errors, and Optional was added without compiler support to enforce usage or VM-level support to optimize memory layout.

- There is a lot of dead weight and known-bad classes in the standard library (many pre-1.2 classes like Hashtable, two date/time systems, CORBA, RMI). Some of these are finally being removed from Java by being relegated to external libraries.

- The language and JVM have an obfuscated generics system, with features (such as reified generics) likely never to be completed.

- The Java language is wordy and simplistic, requiring a lot of code to accomplish tasks compared to many other languages.

- Due in part to the simplicity of the Java language and the focus on supporting multiple implementations of library APIs (such as multiple logging frameworks, multiple XML libraries, multiple JSON libraries, and so on), the ecosystem tends to be design-pattern-heavy and complex to navigate.

- The exception model of Java (with checked exceptions) is generally considered to have been a poor choice now.

- The class loader creates frustrating-to-diagnose issues, eating up valuable time you could be spending on your application.

Kotlin solves some of the language issues, but not the JVM or ecosystem issues.

Swift has a nicely developing server ecosystem, but does act differently from Java in important ways. Java has a broken intra-process isolation model, but you do still have web "containers" that allow for deployment/redeployment of applications on a running server, and which will attempt to recover gracefully from programmer errors. Many Java applications dedicate a thread for handling a request, and will handle exceptions as a failure in that request - but otherwise continue on attempting to handle future requests.

Swift is _not_ gracious about programmer errors, and will abort() on misuse such as force-unwrapping an optional without a value, or indexing an array beyond bounds. This is not unusual (node and PHP are examples of servers which will abort on certain issues) and is generally a benefit in that you get to diagnose failures by primary effects rather than secondary effects, but it does mean that you need infrastructure to catch such issues and make sure a new instance of your server is started.

(For what it's worth, there have been proposals for the future (after a Swift language concurrency model lands) to add isolation around subprocess or actor boundaries, so that unexpected errors would only abort a "portion" of a server.)


> The JVM still uses quite a bit of memory, sometimes 3-4x more memory than say Ruby or Python (which are also garbage-collected languages).

The JVM will use as much memory as it is assigned. Different GCs are more liberal with memory usage, especially the ones tuned for throughput. Newer, latency-tuned GCs (like G1) will return unused memory to the OS.

Java's performance is nowhere near as bad as Python's or Ruby's. Furthermore, Java is used in memory-constrained systems like Blu-ray players and chip readers. So it depends on the JVM implementation and tuning.

> There is a reason that Android phones ship with several times more system RAM than iOS devices.

Android is not a JVM implementation, so you cannot make this claim.


I'll try to answer the second part to the best of my ability, not knowing more about what sort of server problems you are trying to solve (http content? javascript PWA/SPA app? API server? chat server?)

- Python is a pretty popular language for server development. It does have some performance issues (including a global interpreter lock), so you would have to weigh the trade-off of your time vs. infrastructure cost.

- Ruby on Rails and Sinatra are both pretty mature at this point; less trendy, but a known mix of positive/negative points. In particular, I have an easier time making high-quality REST APIs and a backing persistence store in Rails than in a lot of other languages (but I'll also admit to a stigma against Python based on a prior work project I had to co-own and maintain).

- JavaScript (via Node) isn't to be flippantly discounted. It isn't the best language and doesn't have the best tooling, but for a web application you will likely have front-end JavaScript anyway. You can also use TypeScript, which usually is a net positive for productivity and maintainability (but I've hit a lot of issues lately with third-party library type definitions).

- C# is becoming more compelling with the .NET Core work. In some ways it feels like a Java which was properly maintained by its vendor.

- I won't bash PHP, since its disadvantages are well known. But for quickly prototyping web applications (without excessive JavaScript), a dev with PHP experience can work faster than in just about anything else.

- For non-web applications, Erlang (or Elixir) as well as Go may be the best choices due to their inherent support for concurrency. I can't comment on their use for making web applications.

- C and C++ win in terms of theoretical maximum throughput and lowest latency/memory usage, but it requires time, knowledge, and skill to achieve that, and you likely care much more about business logic. I've seen experienced developers work months on a C-based server, only to have its performance beaten by something they wrote in a weekend in another language.

- Swift IMHO has yet to be proven on the server. If you love the language it may be worth trying for a project, but I wouldn't choose it for something on the critical path (yet).

Java still has huge benefits when integrating with third party technologies within a monolithic application. For example, when selling servers acting as on-premises middleware, Java enables you to use third party libraries and some glue code to make a plugin to theoretically support everything. You also have the benefit of being able to publish a programming API for others to extend your product themselves with Java code, something which may not be as easy to accomplish with compiled languages or if it requires the customer to learn something a bit more esoteric.

Java is somewhat unique in how easy it is to add third party code and dependencies to a project - scripting languages usually cannot match this as dependencies may also include native code. This is one area where Pure Java was a win, and is why a lot of distributed processing systems (such as Hadoop and Storm) often are written in and prefer Java.

Finally, I personally don't buy into JVM languages very much (Using JRuby/Jython to run portable code on the JVM is another matter). A different syntax doesn't solve the problems with the Java ecosystem or the JVM's resource usage or lack of inherent support for newer programming concepts. As soon as you branch out to use Java code, you lose a lot of the safety/immutability guarantees and features in the new compiler model. In return, you lose the ability for all of the Java developers to be able to understand and contribute to your code.


Java has a huge base of engineers familiar with it, great libraries, scales well, and has stable development tools. If it were not for some of the FUD around Oracle ownership I think at least some of the concerns about it would disappear.

It's interesting to me that people don't seem to throw shade on C++ as much even though it's even older and in many ways harder to use effectively than Java.


> people don't seem to throw shade on C++

I hear my friends make dismayed noises about C++ any time they are forced to work on it. When I look at Java or C# code snippets I mostly feel like I understand what's going on. C++ makes me go wat?! far too often.

Personal opinion: I think Java would have a better rep if they'd optimized the GC for latency instead of throughput. C#, on the other hand, would have a vastly better rep if it weren't a Microsoft product.


There are Java GCs that are optimized for latency. Check out project Shenandoah and ZGC for examples.

To be a bit fairer, I had friends that worked at Azul Systems. One of them said that earlier versions of x86 processors didn't support the atomic operations required to implement guaranteed low-latency GC.

Also, despite the marketing hype, the main application for Java was server side. I wouldn't be surprised if GC latency issues weren't much of a problem in that space.


Before Go and Rust came out, the mainstream statically-typed languages were C, C++, and Java. Java was also one of the few "modern" languages that can idiomatically use multiple cores. So yes, Java was pretty de facto.


Even now Java is pretty much de facto in many places.

Go does not scale well, and it is a dumb language; not that that is bad, but it just does not feel like an improvement to switch to Go.

Rust is still new, and I think it will not get mass adoption like Java because of its higher learning curve.


I’m slightly unconvinced by the learning-curve argument about Rust. C++ has a high learning curve too, but it is used very widely. I don’t really think Rust has a higher learning curve than C++. The main differences are the borrow checker (instead of having to learn, and unreliably hand-check, memory usage everywhere), the type system having more ML-style polymorphism instead of crazy templates, and vtables being independent things (i.e. traits) rather than implied by class inheritance; traits work largely like implementing multiple interfaces in C++, but without the inheritance and with instances implied by some rules.

The rest of the differences are just different standard libraries/toolchains. I don’t think that, for a novice knowing neither, Rust is somehow much harder than C++.


I love Rust, and all my personal side projects are in Rust nowadays. But Rust definitely doesn't come close to the gentle learning curve of golang or Java. In comparison with C++ I am not too sure.


C and C++ are also both taught in almost every university (at least in America but I assume in other places as well). You can't really say the same for rust.


C is nearly 50 years old. C++ is 35 years old. Rust is 8 years old. Rust 1.0 is 4 years old. I think it is reasonable to guess that this is the reason rust is not taught so much at universities. Especially at universities which choose things based on what they think industry wants or what the professors know as this will lag behind actual trends in industry too.

I think this argument only really applies to Rust’s learning curve if one assumes that each year universities assess the programming-language world for the best teaching languages and pick out those which can be used for the courses and are easiest to use. They obviously do not do this, and you can tell because they picked C++ (which is actually at least three languages to learn, with the preprocessor macros and the functional programming language hidden in the template system).


There have been university classes on Rust, but nowhere near C++ or Java, it’s true.


> Go does not scale well

What do you mean? I've seen huge services built in Go handling an enormous amount of traffic.

(Disclosure: I work at Google, primarily in C++ and JS)


I was not intending to say that there can't be huge code bases in Go, but Java and C#, with their tooling, ecosystem, lower verbosity, and much better static type systems, scale much better. There are still some areas where golang is doing great, like CLIs and infrastructure tools. I definitely understand its simple design, but I would be very unlikely to use golang where I can use Java.


Java/C# less verbose than Go? In which alternate reality?

My current employer uses golang for the majority of their code base. I absolutely agree that Java would have been more concise. There’s almost nothing in golang that lends itself to writing shorter code. Error checking is verbose and dumb. You need to define interfaces just for the sake of mocking, even if they have a single implementor. No map/filter/takeWhile/etc., meaning a one- or two-liner in Java ends up taking 5-10 lines or more, with helper functions littered throughout the code base, making code harder to follow and ending up with more code. Even a simple example like

    final var a = foo() ? bar() : baz();
is several lines in golang, which doesn’t even have an analogue of ‘final’:

    var a int
    if foo() {
        a = bar()
    } else {
        a = baz()
    }

I never thought there would be a modern programming language more verbose than Java, but there you go: golang proved me wrong. And Java is getting less verbose with each release. The major pieces of verbosity in Java are auto-generated by the IDE anyway.

I've read a lot of Java, C++, and Python code at work and some Go, and verbosity I've seen is Java > C++ > Go > Python.

I don't really buy the Rust learning-curve argument. The good IDE integration and package management make it all a breeze. VS Code with RLS is great. I went from zero to fully shipped software inside a week, with meager C++ knowledge beforehand. The only trouble was understanding lifetimes and getting the borrow mechanics solidified in my spine. The entire set of language constructs is actually very small; there are just a couple of new, unfamiliar concepts. The biggest downside compared to C++ is absolutely library support; I sorely miss many powerful C++ libraries.


> Go does not scale well

Are there any articles to back this up? I'm aware that the scheduler and networking integration are certainly opinionated. Nevertheless it seems to work very well for most applications that I have seen so far.


I mean the code base does not scale well, not performance. Golang's performance is top notch.


Have you e.g. checked the Kubernetes codebase? It's pretty huge. I would agree that at least past iterations of Go dependency management might not have been ideal (but neither was it for C/C++), but that apparently hasn't prevented people from building huge projects with it.

You can have large code bases in assembly and C and Python. Just because you can doesn’t mean you should. Golang does not have any useful features for defining and navigating large code bases. The way imports are handled is dumb and verbose. Seriously, you can’t import a specific struct or function?

Also, other things: you have to import “strings” to use features like split and substring, instead of them being defined on the string type itself, which makes code awkward. No string interpolation, which is quite ridiculous. The list goes on.


Wasn’t it Kubernetes that was notorious for having large amounts of auto-generated code checked in because, hey, no generics?

Java, by nature of its moving GCs, can often do better with long-running applications than languages like C and C++, which can much more easily suffer from internal memory fragmentation. You can work around it by using tricks like memory pools, using a sophisticated malloc implementation, etc., but that is all much higher effort than with Java.

Additionally, a lot of the systems you mentioned can often benefit from the sophisticated dynamic classloader the JVM provides. For something like Spark (mentioning this one because I am familiar with it), it is a requirement to load and run user-specified code dynamically. You can do this in C, but the JVM makes it much easier.
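The core of that pattern is only a few lines with a URLClassLoader; a rough sketch (Spark's real mechanism is more involved, and the path and class names here are hypothetical):

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginLoader {
        // Load a user-supplied jar at runtime and instantiate a class from it.
        public static Runnable loadUserTask(String jarPath, String className) throws Exception {
            URL[] urls = { new File(jarPath).toURI().toURL() };
            URLClassLoader loader = new URLClassLoader(urls, PluginLoader.class.getClassLoader());
            Class<?> cls = Class.forName(className, true, loader);
            return (Runnable) cls.getDeclaredConstructor().newInstance(); // assumes it implements Runnable
        }

        public static void main(String[] args) throws Exception {
            loadUserTask("/tmp/user-job.jar", "com.example.UserJob").run();
        }
    }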


Yes, it's really more about the GCs than the JIT. The yak-shaving involved in a performant C++ app is an order of magnitude higher. Plus, you get nice stack traces. Libraries like Netty make for extremely high-throughput apps.


Java is a very decent language, even in comparison with other modern languages. And the amount of tooling and libraries is not matched by any other. It is easier to hire good Java developers. C# is the only other language I can think of that is as capable as Java. I was hoping for golang, but IMO it is not great for anything other than small command-line apps.


Faster than Python/Ruby. Safer than C/C++. Sun was seen as nicer than MS. No remaining languages had as many replacement developers once you ruled the previous ones out.

Then add in two decades of entrenched software and network effects.


Engineers with Java experience and enterprise development are more plentiful (less expensive) than with many other languages and ecosystems.


I don’t understand talking about programming language without context.

There is no point to discuss or compare programming languages without having the context about project, environment, scale, resources, and architecture.


The context is "major financial services company". There are actually quite a few running Elixir.


This is spot on. The same programming language / framework can provide positive, zero or negative competitive advantage depending on the usecase. If you don't have that context, this information is really rather useless on its own.

With that being said, in this example you have to imagine their competitors certainly will have that context (they'll each be building a replica of the same thing, essentially), so the situation is different. Hearing only the language/framework they are using becomes more useful and valuable, then.


> If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

http://paulgraham.com/avg.html


Viaweb used Lisp.

Google used python, Java, and C++

Amazon used perl and C++

Facebook used PHP!


Indeed. And reddit used Lisp, only to their regret after some time [0].

Such decisions are in fact not simply about languages in themselves. They revolve around languages, availability of libraries needed for the task in hand, community, availability of proficient programmers vs your ability to train new ones etc.

[0] https://redditblog.com/2005/12/05/on-lisp/


Reddit's rewrite was "pretty much done in one weekend".

Things that can be rewritten in one weekend are trivial; they don't speak to anything.

A Lisp site that can be rewritten in Python in one weekend can probably be rewritten in shell + awk + CGI in two weekends.


If they faced issues in a one-weekend project, I ask myself what would have happened in a project big enough to speak to something.


The link exposes some underlying reasons which I think apply to bigger projects as well, like the lack of libraries.


Lack of libraries applies to tiny projects, because the "activation potential" has to be small. Developers are reluctant to write 1000 lines of library wrappage for the sake of 700 LOC of payload, but they might well do it for something that can be expected to be 25 kLOC, and much more likely if they're going to have 250 kLOC riding on it before the product reaches 1.0.


It definitely applies to large projects as well. It's not just a matter of what developers think about writing more code. It's a more complicated economic decision involving how a significant amount of resources is going to be spent writing and testing code, whether the team is actually capable of writing the needed libraries, security risks introduced by bugs, etc., just to begin with.


Also, looking at the reddit1.0 code, it’s not particularly well-designed. It’s very unclear whether the problem was lisp or not having the benefit of having written reddit already.


They expose some general reasons, like the lack of tested libraries, which are applicable to other projects in general.

And regardless of this, my point is not to rant against Lisp (I'm a huge fan of it, in fact). My point is simply that the language may be a great choice or a bad one; the outcome is not an intrinsic property of it.


I think Yahoo used a lot of C++ back in the day as well.


If the client asks and I tell them I write it in Delphi, I lose the contract, period. If they don't ask, I generally get the contract. It's been like that for the last 20 years and will be for the next 20.


If they don't ask then they don't plan on making changes to the code (even if they will, they're not _planning_ for it).

If they do ask then they are planning to make changes. If they're planning for that then it makes sense they would lean towards the most popular and easy to hire for languages.


I loved Delphi. I still use the OSS "Lazarus" IDE for hobby projects.

But I've got enough experience with Delphi and its "VCL" libraries that I'd require a fully licensed and running VirtualBox image with your exact build environment capable of compiling the project before I'd sign off on the deliverable.


Is this real? If so, please elaborate.


It's real, though I'm certainly not going into client details; that would be pretty unprofessional.

Currently my main income is derived from a suite of software I wrote for a small winery that subsequently grew to be a very big winery. They got big enough to hire an IT team, a manager, and a consultant, who are basically telling me that Delphi is ancient and my whole system is a risk to the business, despite it having had 100% up-time for the last 10 years (basically, if their server was up, the system was up). Nothing I am able to do is going to convince them otherwise, and it is on the cards to replace it.

Again I will gracefully back down and support the system until it is changed, so I should get another 2-4 years out of it based on previous experience.*

I was an IT Manager for a long time and I do understand people's worries, but I also understand that if something is working, is documented, has open source code and data structures, can be supported, and doesn't have a forced end of life, there is sometimes a bigger risk in making a change. Behind a lot of software is generally one guy or gal. I don't try to hide that fact and sometimes pay the price, but I know several other packages that have just one person as the linchpin, and there is no pressure on them because they hide it behind a company.

I also have C and SQL skills which people don't find so offensive :-)

* The expenses system I wrote in the UK had the same decision made on it in 2004, to replace it with the new mega SAP R/3 ERP they were moving to, and it was still being used up until 2016. A time management system I wrote in 2008 is still being used, despite the client telling me it would be phased out in 2013 as part of a new CRM system.


Nobody writes Delphi so it would be difficult for the client to maintain their software if their relationship with the original author ever ended.


Yup, I don't blame the client at all for rejecting a Delphi contractor.

Where I live it would be easier to just start the project from scratch than to try to find someone proficient in Delphi to maintain it


Just find any decent programmer and have them learn Delphi as needed?

I mean, a lot of people program in a language they aren't familiar with because it has the best ecosystem for a given job or because the open source project they are modifying is written in that language.


That's an option, but I can't imagine many folks would want to learn Delphi and continue to invest their time into it unless the money is worth it.


"Just find any decent programmer and have them learn Delphi as needed?"

That is big money.


I just read the marketing material for Delphi and I beg to differ. I thought Delphi was a curiosity last heard of in 1993. How wrong I was.

According to the tick boxes you can rapidly develop for any platform and deploy everything without waiting around. I have no idea why anyone would want to develop in anything else. If the client could not find a developer to take on the project they could be up and running pushing all the features by teatime.


On top of it, there's a Delphi-compatible, FOSS IDE and compiler to use instead:

https://www.lazarus-ide.org/

It has an active community. I don't know how big it is, though.


Using Delphi is an unfair business advantage.

Whatever competitive advantage there might be, such secrecy also puts you at a recruiting disadvantage.


Not just recruiting, but there are all kinds of problems if your engineers can't have conversations about the language in public forums/meetups/etc and can't make contributions or even feature requests for the language and tools.


Using languages and technologies that aren't relatively mainstream also puts the company at a recruiting disadvantage. It's trivial to have a full pipeline of candidates who are relatively inexpensive to hire and "good enough" within days of posting a bog-standard "Java services developer" job description.


I’ve been hiring spring boot developers, and on day one I just throw them the Phoenix guide and say “this is what we use here, learn or leave”


When/how can a company assess that using a particular language is a "competitive advantage"? It seems that the competitive advantage is perhaps in hiring the people who are actually proficient in the language -- and the advantage arises when that language is one that has X, Y, Z paradigms and is designed for A, B, C use cases, and then is used appropriately.

> https://twitter.com/devoncestes/status/1104000439987683328


I am not advocating for the company, but it might be something like this:

Handling the same number of users as their competitors with a software stack that requires 70% fewer employees to run will give them enough room to lower their product price compared to the others. If they can keep it that way for X number of years, then they will be able to eliminate their competition.


Minimizing costs like this is most important for commodity software with lots of similar competitors. I think the real answer will vary greatly from company to company and market to market. Generally, I would recommend companies be careful to not think they're all that special. Special companies/products/markets may require special tools, but 99% of companies should pick the most mainstream tools available: Java, .net, c/c++, etc.


Not just the cost, but often the complexity reduction in team communication overhead can mean improvements in the ability to add features and services quickly. Financial arenas deal with a lot of events that need to be processed quickly with complex data transformation requirements that change quarterly in many cases.


Developer maintenance time is not the marginal cost/price of software. Software has a low marginal cost, so the economics are all about scaling up sales, and your tech stack is mostly binary: the job is either possible or impossible. Developer cost within a factor of 10x is irrelevant.


Yes, I'd say if a language fits the product for some reason then it's an advantage, but "competitive advantage" sounds like a stretch. If it's a niche language, you want to attract programmers from a smaller pool, so you may as well advertise it.


One of the Perl guys (maybe Larry Wall?) said the same thing at a talk once about companies using Perl but not wanting anyone to know it, because it was such a time saver that they considered it a competitive advantage.


That's still true of Perl, but for different reasons now.

> While the rest of the world sees Perl as a legacy language and analysts insist that no one is talking about it, our Perl business is vibrant, alive, and growing. Leading companies such as Amazon, Boeing, and Cisco continue to demand Perl skills in their developers while Booking.com is investing in Perl as its core development language. How can this be explained?

> This is the Perl Paradox. No one is interested in talking about one of the most influential modern programming languages yet it continues to thrive under the radar.

https://www.activestate.com/resources/webinars/perl-paradox/


> continue to demand Perl skills in their developers while Booking.com is investing in Perl as its core development language. How can this be explained?

It's simple: because rewriting it doesn't make business sense. But from having talked to Booking.com devs, I don't get the impression it's something they'd recommend for new projects.

So while some businesses built on Perl might be "growing", is this because of Perl, despite Perl, or does it not matter? The success of a business says nothing about the ecosystem surrounding the language it is using (unless it gets so big it can shape it). A far better proxy for language health is how easy it is to hire a team of competent <language> programmers at all skill levels, by which I mean veterans with enough varied experience, and college graduates who'll consider programming in <language> without a bigger paycheck. (The ultimate metric would factor in turnover due to dissatisfaction/burnout from tech debt.)

For what it's worth, I still love `perl` for one-liners, because it's far more consistent than the various GNU and BSD versions of `sed`/`awk`/`grep`. But I'd rather be programming something else, and I'd rather be deploying something else, given that consistent deployments are possible now with language-agnostic tools like docker. So a big advantage Perl had is gone.


Yeah, I felt like there must have been a culture shift at Booking recently, because I spent the whole time at TPC::EU Glasgow last year without hearing a single pitch for hiring from Booking. Given that "Booking is hiring" been something of a running gag, something must be up.

It's not all that hard to hire competent and experienced Perl devs if you're willing to hire over the age of thirty, and remote (i.e. not moving to Amsterdam).

I would mostly recommend Perl 6 over Perl 5 for new projects, though. At least for those intended to last more than 5 years (which de facto means 20 years). I know the module ecosystem is not quite there yet, but the concurrency support is far more advanced than anything planned for Perl 5.


> Yeah, I felt like there must have been a culture shift at Booking recently, because I spent the whole time at TPC::EU Glasgow last year without hearing a single pitch for hiring from Booking. Given that "Booking is hiring" been something of a running gag, something must be up.

Booking have changed their policy w/r/t sponsoring conferences and/or the "we're hiring" thing in the last 12 months. I know this as I'm one of the Swiss Perl Workshop organisers and they've sponsored us for the last few years, last year we couldn't get anything out of them and when we enquired we found out the reasons. I'll approach them again this year to see if the policy has changed again.

I don't know the exact reasons for this, and I suspect it's not a Perl thing but rather a change in management policy and/or the realisation that throwing devs at their systems is not the solution. Maybe someone read Brooks? I interviewed there several years ago and it seemed utterly bonkers that their dev team was > 100 given the nature of the business.

Anyway, to tie in with the grandparent post - we know that many banks in Switzerland are using Perl but they will absolutely not talk about it, nor sponsor us, nor send employees to the workshops. We know this as we have had private attendees who work for those banks tell us these things.

Perl was everywhere at one point, and by extrapolation that means it's still in an awful lot of places.

> It's not all that hard to hire competent and experienced Perl devs if you're willing to hire over the age of thirty, and remote (i.e. not moving to Amsterdam).

We're looking at the junior route, we've taken on 4 in the last 18 months and intend to continue this. We're at a point where new graduates were born after the Perl peak, so they don't have any knowledge of its decline in usage and/or preconceptions about the language.


> How can this be explained?

I have a much simpler explanation. Publishers and conference organizers like O'Reilly eventually saturate the market for Perl books and conferences (books especially, because used books start to cannibalize their sales), so they move on to promoting a different, new language, so that even their existing customers will have to buy the new books and pay hundreds/thousands of dollars to attend conferences for the new thing.


Almost nobody buys books anymore.


> Almost nobody buys books anymore.

Can you use a search engine before posting nonsubstantive, dismissive comments?

https://www.publishersweekly.com/pw/by-topic/industry-news/b...

I am not sure the total count of how many books O'Reilly published last year, but they have not exactly slowed down in releasing new titles: http://shop.oreilly.com/


In SF, maybe. But most developers in the world live outside of SV (for example, there are on the order of 1M C# developers worldwide). They don't write blogs, or even read them.


Uh...

Developer outside of SF (outside of USA, in fact) here. I read blogs, have written them on occasion. Have worked with many developers in many non-SV locales. None of them read programming books, most of them read blogs.


I worked at Booking.com for a while, though never in Perl directly.

Take this with the grain of salt of any personal anecdote on the Internet, but I can definitely say that Perl is not loved there. Most people hated it deeply, some of them were like "it's not horrible...".

They are now allowing Java as well for newer projects, and they will have Java and Perl as 'blessed' languages moving forward. I don't really believe that they are doubling down on Perl in any sense of the word; they're just doing what any other company with a huge amount of code written in a dying language would.


I recently left a job at a company with a primarily Perl 5 codebase. I would never choose to use Perl to start a new project. In my opinion its relevance these days is primarily legacy systems and duct tape/scripting. I still use it for hacking on small tasks; it's pretty decent for technical interviews.

I programmed in Perl (5) in a large enterprise environment for five years. On a huge, monolithic codebase, with people who were innovating with the language in interesting ways. The business environment was restrictive (healthcare), but the amount of people leveraging Perl's flexibility to build powerful, flexible tools to enable faster development on the codebase at scale was startlingly high. A large proportion of developers engaged at every level, from XS to MOP (we used a modified version/fork of Moose) to distributed computing (we used a home-rolled queue worker solution, think Celery for Perl, but without the AnyEvent cancer), and novel web frameworks on the frontend, some of which were put back out on CPAN.

Basically anyone who wanted to be productive there demonstrated an impressively innovative, cross-cutting skillset that combined deep knowledge of UNIX technologies, the particulars of the application, and incredibly expert knowledge of the particulars of the language itself.

And honestly?

It sucked.

A lot.

I won't say this about any other platform--not modern security-vuln-in-the-package-manager-every-two-weeks JS, not "do you mean actually cross-platform or relying-on-compiler-UB cross-platform?" C, not the worst old-layered-on-new-layered-on-old-again PHP, but: Perl as a platform on which to build something non-tiny, or something that requires more than a small handful of developers, is unutterably awful. And I say that as someone that got pretty good at it, I think.

At the micro level, TIMTOWTDI confuses newcomers, makes code review inconsistent, and means that as soon as someone feels fluent and productive on the codebase, they have to engage with someone else's code, and they get stuck all over again. This means that mentorship is a complete bastard, and developer progression is incredibly hard to gauge, teams form fiefdoms (even in the face of huge linting tools; things that make an ultra locked-down JS/TS project look paltry by comparison--even for off-to-the-side greenfield projects) and can't transfer developers, you name it. Perl at any sort of scale is worse than the quoted "write-only" slogan: it's "write-once, re-learn from scratch on read". For a junior dev, rewriting would be a blessing.

At the medium (between micro and macro) level, the metaprogramming abilities of Perl just . . . fuck everyone up, no matter what they want to do. Want to get your work done in as straightforward and repetitive a style as possible? Welp, no matter how simple the task, and how straightforward-looking the utilities for it might be (on CPAN or in house), they won't interoperate for shit. Want to reduce boilerplate and ease the pain of common tasks by encapsulating (or, god forbid, applying metaprogramming) to speed up some process? No problem, first-class laws-of-the-universe-altering facilities are available to everyone--to get your change functional, you'll just have to interoperate with . . . well, everyone (some of whom wrote third party modules, and aren't people you can ask nicely for help). Object systems will fight with message queue clients for control over how calling nonexistent methods on arbitrary objects that neither one created should work (you thought Ruby's method_missing was a foot gun? Ha!). A tool for printing console logs will override the alarm(2) hooks used by your main HTTP client, meaning that if someone leaves a debug print in the wrong place, HTTP connections to a down endpoint will start blocking forever and kill you. Can this happen in other languages? Sure! Python, Ruby, and PHP (to some extent) all allow the same flexibility and low-level access. But only Perl makes this the default convention* to follow. I've heard people say "Perl programmers are just C programmers who couldn't hack it, but still want to write C". There's a grain of truth to that. Problem is, those aren't the kind of people I want to share a codebase with.

At the large (5MMSLOC+ codebase) level, Perl's an operational nightmare. Thanks to all the ways that libraries can customize the language, it has the metastasized version of most other interpreted scripting languages' problems when it comes to compilation phase and memory, namely: "what happens when I have to compile a huge dependency graph on startup? Can I cache those things in some sort of intermediately usable format or do I have to wait many minutes to start a test script? Can I fork? When I fork, what gets shared? Just filehandles opened by the application? Or random shit inside libraries too (and are compiled dependencies encouraged to make their IO resources' lifecycles manageable by the outer runtime)? Will my box crash due to GC-caused refcount/allocation cycles if all my forks exit at once?" These aren't unique to Perl, but I think they're worse in it than any other language. Oh, and that's without getting into the insane degree of mutability Perl permits. It's the freedom of C without the discipline ("set environment vars any time you want! Hell, they're a first class language data structure! Oh, and change what the STD* streams point to, that's fine too. And dynamic scoping and Scope::Upper mean you can't tell when something will change because some code totally unrelated to yours decided it should!"). When trying to handle requests or do anything "nested" (terminate SSL, alter things that would go on e.g. "Context" in now-unpopular Go idiom), Perl's conventional answer is just "dynamically overwrite globals!" In general, this means that it doesn't matter what context you tested your code in, it'll do something different at runtime because a) if it does anything interesting it depends on global-ish state, and b) other random code can rewrite SIBLING global state whenever it wants (think Python, but if the convention were for any coder that got stumped about how to pass data around to just modify globals/locals/vars willy-nilly). Again, a risk in most environments, only actively encouraged in Perl.

I promise that wall of text isn't just specific-employer PTSD. I've been through CPAN code, negotiated with package maintainers, gone to conferences, tried to get a sense of how people are contextualizing these problems. And the impression I came away with is that the vast majority of the entire Perl ecosystem--from the practices understood to be desirable by programmers to the behavior of existing/hardened/public code--is overwhelmingly harmfully inaccurate, poorly-thought-through, and defended by the worst strain of cleverness-above-practicality (or "don't touch it, it works when you hold it just right and don't breathe" for incredibly simple requests) when challenged.

Perhaps at some point in the past this ecosystem reflected the cutting edge, but I think that time is long past. If you're making a small commandline utility or personal one-off in Perl, go nuts. I don't think you're a bad person. Just make sure it doesn't get any bigger than one developer's worth of code.

EDIT (probably the first of many, because essay): typos and grammatical fixes. Promise I won't change the substance.

* Why the hell are either of those libraries calling alarm(2)? Answer: the logger didn't realize that write(2) wasn't interruptible, and the HTTP lib author didn't realize that connect(2) took a timeout. By the way, both were in incredibly popular modules on CPAN.


We worked at the same company, can confirm. Every problem mentioned above is true.

I once wrote a sub at this gig that returned a list (because hashes are lists) containing a string built by sprintf, which contained a sub dereference wrapping a sub that returned a string built by sprintf. Though it was necessary at the time, I'm still just really, genuinely sorry about that. I guess the takeaway is that Perl reverts even the most civilized devs to utter savagery.

>> Perl at any sort of scale is worse than the quoted "write-only" slogan: it's "write-once, re-learn from scratch on read". For a junior dev, rewriting would be a blessing.

Rewriting is really risky too; global state is problematic in any language that offers it, but the almost complete lack of guarantees provided by the language is exhausting and makes it difficult to reason about even the most trivial change. Imagine being dropped into a 5k-line function that hasn't been touched in 10 years. There are no tests, few comments, and the author quit 6 years ago. What types can this function return? Is it always called with all of its expected arguments? Where is it called from? None of these questions can be answered trivially. You'd think grep would handle the last, but you'd be wrong, because people can and do build identifiers piecemeal as strings and eval.

The casual use of evals and symbol table manipulation in probably any large perl codebase only became more terrifying as I became more comfortable with the language. I cried a little bit when I figured out how the import system is cobbled together, and not exactly for its elegance or simplicity.

On the bright side, building healthcare systems with perl made me a very disciplined and defensive coder. It also got me used to saying things about my code like "reasonably confident" instead of "it works". Software engineering is so much more exciting with assumptions and guesses, who doesn't like to roll the dice every now and then? Now if I could just remember what all the runtime flags do...


Cool story bro.

To me it just sounds like your colleagues were a little nuts. You don't need metaprogramming to write a large application, and reaching for Devel::Declare should be reserved for last resorts and "hold my beer" moments. You're right that the conferences have a lot of this kind of stuff, but most of us know better than to actually use it, and read Perl Best Practices.

Most of these issues are solved in Perl 6. Any language changes are scoped lexically by default (e.g. declaring sub infix:<==>). Emphasis on threads instead of forking for concurrency, with await and first-class Promises. If you want to write C, just write normal C and use NativeCall, instead of writing in the bizarre XS dialect of C. But it's kind of a different language... yeah.


I'm not going to say we weren't all nuts... but a lot of advanced language features, including metaprogramming, were necessary to build things that other communities take for granted: frameworks, test harnesses, mocking, static analysis, etc.

For major software development projects, Amazon deprecated Perl 12 years ago when they moved to Java for retail website development, and now uses it only for maintenance of too-big-to-fail legacy parts of the retail website.


I've read an equal number of horror stories and success stories for Erlang / Elixir. That isn't to say it's not good or useful, but it isn't some magic bullet that solves every problem for you.

Edit - I guess my point is that it's not good enough to keep a secret. :)


I've been working in both professionally for several years now and talked with many over that time, at conferences and such, and have yet to hear "horror stories", but such stories could definitely be useful as illustrations on how _not_ to build a system in Erlang or Elixir, so you should share examples!

I _have_ seen some projects with pretty terrible code, but that has nothing to do with Erlang/Elixir, and everything to do with the skill level of the team behind the project.


friendlysock brought up a legacy project with a lot of specific problems:

https://lobste.rs/s/pcebor/choosing_elixir_for_code_not_perf...


This is the first time I've heard of Lobste.rs. Looks like an HN clone more targeted towards programming. Thanks!


I sure haven't. I'd love to see a few if you have them, to help me be better informed when choosing a language for large projects.


I've used Java, Erlang, and Elixir professionally and I would never go back to Java. I like being able to write code without the crutch of an IDE. I like not having to deal with JVM deployments. I also like Erlang's functional paradigm and concurrency model a lot better. I'm at least twice as productive in Erlang and Elixir, so I view the two languages as having a competitive advantage over Java.


> I've read an equal number of horror stories to success for Erlang / Elixir

Care to share them here?


Please: provide some details so that those of us looking at the pool can decide if we want to walk over.


I remember this happening at a couple of big companies that used Smalltalk in the 90s. It was a BS move then and it is now. That's why you take language popularity numbers with a grain of salt.


I’ve heard the same “we don’t tell people we use Erlang because it gives us such an advantage” ~10 years ago.

Yup, those companies moved a lot of their business to Java, and C#, and Go, and even Ruby.


Even the ones writing telephone switches?


The relation of Erlang to telco switching is somewhat the opposite of what most people believe. Ericsson's stored program switches do not run on Erlang, but use an execution model enforced by hardware that undeniably was the inspiration for Erlang.

Other manufacturers used either some traditional mix of C(++) and assembly or got on the CCITT/ITU bandwagon of CHILL ("CCITT High Level Language"), which is a combination of a C/BLISS-style low-level execution model with Algol/Pascal syntax and COBOL-like verbosity, with some degree of native support for threading and coroutines.


There are very few companies developing telephone switches. Ericsson famously ditched Erlang in the late 90s, and as I hear there's very little Erlang remaining there.

Can't say about other companies.

Cisco uses Erlang in the control plane apparently [1]

[1] https://twitter.com/guieevc/status/1002494428748140544


Ericsson absolutely still uses Erlang [1] - the vast majority, if not all, of the core Erlang/OTP team are employed by Ericsson, and that team is still very much active and constantly improving the language and runtime.

[1] https://www.ericsson.com/en/news/2018/5/erlang-celebrates-20...


Yes, I know that. They have the team and continue developing and improving Erlang because they have paying customers who use Erlang.

Inside the rest of Ericsson, however, there's very little Erlang left. Once again, only hearsay, no hard proof. They still hire for Erlang here and there (used to be primarily non-Swedish offices) [1], and they offer theses for Erlang [2]

[1] https://jobs.ericsson.com/jobs?keywords=erlang&page=1

[2] https://www.linkedin.com/jobs/view/master-thesis-erlang-json...


Citation needed


Is it really a thing that companies keep such technical choices secret to seek advantage? How about even more trivial stuff, like using Scrum (or whatever secret approach to software development you follow)? Or "The secret to our success is giving free donuts to all our employees"?

Do you have other real examples?


Are we already on the next hype cycle?


I tell everyone I'm using Perl.


Heh... reminds me of a startup from the dotcom era that I joined right when they were burning through their last millions. It was a hosted service aimed at telcos and claimed to handle thousands of active sessions per box. Except it was written in Perl and in reality didn't scale beyond 10 or 20 sessions. But it didn't matter a bit, because they had no customers. So, yeah, Perl :)


It's mildly misleading to say Erlang was the special sauce; language specifications and runtimes are so often conflated. It's really the BEAM VM that's special. There aren't many other runtimes like BEAM about which you could write a meaningful post. Most languages are developer preference, and simply make some things easier than others. It didn't take me too long to get over "language churn" after finding there wasn't much language innovation happening. Today I'm a fan of spaces like Java and C#. The languages are fine, but it's everything else that makes them so great.


It might be time to reconsider using Java because of the Oracle v. Google lawsuit. Oracle has asserted that Java's APIs are copyrightable; that issue is the subject of a petition for certiorari currently before the Supreme Court. And some recent versions of Java are no longer free and open source. It is strange when language choices are driven by intellectual property considerations.

This is really sad, since Erlang is both a practical and a simple, elegant language.

I've always believed that if Sun and Ericsson had swapped their languages back in the 90s (say, Sun got Erlang and Ericsson got Java), the world would be totally different.

I'm pretty sure the Java guys struggled for quite a few years with distributed web development. Threads and locks were all most of them knew.


This whole thing reeks of the CTO selling a story to the non-technical CEO. I'm not saying it's not useful to the company, but companies build strange internal myths about obscure tech choices.


Are those internal myths equivalent to rationalizations of the decisions? Do you have any anecdotes or know where to find some?


I call them myths because it's not important whether it's literally true, just that if the executives believe it's true, then they make the right choice. For example, if you have a home-grown internal tool, like a database or a messaging system, it's important that the executives believe it is a competitive advantage for the company so they don't try to replace it. If they try to cut the maintenance of that internal tool and switch to something cheaper, it would probably be terrible for the company, because engineering would need to rework the whole system that depends on it and it wouldn't end up being cheaper.

Same with switching the language you use to Java. Sure, you can hire more Java developers, but rewriting the system from scratch or splitting the codebase is probably worse, so just make sure the CEO believes Elixir is key to their success, whether it actually is or not.

Listen to Steve Jobs describe object oriented programming. I'm not sure he even knew what it was, but it worked out for the company that he sold it so well. https://www.youtube.com/watch?v=2kjtQnPqq2U


Erlang and Elixir processes seem like a good model for serverless FaaS. Your code as written could run on one core or scale to multiple cores or servers.


Elixir is one of those languages that changes you. I realize it's a Ruby-flavored wrapper on top of Erlang but I don't care. Coming from an OOP/C-based language background, it took me a while to get it. Implicit returns? Immutable data? Pipes?? No inheritance??? No for loops????? And all that's just if you use the Phoenix framework, which maps pretty analogously back to a standard web server flow. Once you start digging into Supervisors and umbrella projects, the top of your head will get blown clean off. It's a hell of a drug.
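
To make the pipes point concrete, here's a toy sketch (not from any real project) of the style that replaces an imperative loop: data flows top to bottom through transformations, each returning a new immutable value.

    "elixir is a hell of a drug"
    |> String.split()
    |> Enum.map(&String.capitalize/1)
    |> Enum.join(" ")
    # => "Elixir Is A Hell Of A Drug"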

I am glad to hear there are more companies out there giving it a try. Any time I see a job posting for a company that uses it, I give them a hard look. It tells me that they are a) hiring people who are of a certain mindset (see the python paradox[0]) and b) they understand the competitive advantage that using a language like Elixir can bring to the table.

[0] http://paulgraham.com/pypar.html


I'm a Ruby guy and I'd love to feel what you're feeling. Where should I start / what should I check out? Bonus points for concept walkthroughs and materials designed to bring Ruby folks up to speed, specifically concepts that need to be temporarily forgotten or unlearned in order not to get sad.

I like being a polyglot, but my schedule is tight enough that I need to hit the ground running or else I get pulled back to reality pretty quickly. There's a lot of posts and videos out there on Elixir, so curators are incredibly awesome in my world.


Start here --> https://elixir-lang.org/getting-started/introduction.html

You will start feeling the magic around chapter 4 with pattern matching, and the rest just gets better :)

Try not to migrate your "how to .." thoughts from other languages into Elixir. It has some unique ways to solve everyday problems.

Have your interactive elixir console ready to play with as well, good luck!


Pattern matching will break you.

I get frustrated at least once a week trying to use it in a language that doesn't have it. My brain just thinks in pattern matching!


It's unfortunate when I encounter a language that has a crippled implementation of pattern matching (looking at you, Scala). After using Erlang, it's hard to accept anything less.


How are F#'s and OCaml's implementation of it, in your opinion?


I’ve not tried either, although in my limited exposure to SML it seemed fine.

The other related feature whose lack I regularly lament in most languages is multiple function heads, which apparently is rarely used in the ML family tree, bafflingly.


I haven't heard the term "multiple function heads" - is it similar to Haskell's way of defining functions? e.g.

    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)


Yeah, this is Erlang:

    fib(0) -> 0;
    fib(1) -> 1;
    fib(N) -> fib(N-1) + fib(N-2).


But these are all compiled into, essentially, a giant case statement. It's syntactic sugar. Your last line is missing a dot at the end.


Concise code is its own reward. Yes, it's syntactic sugar, but there's a reason syntactic sugar is valuable.

Even if performance is the same, you wouldn't prefer this Javaesque code:

    Int.from(8).multiply(5)
You'd use `8 x 5` were it available.

Several discrete function heads are easier to mentally parse than a long case statement because you know without a shadow of a doubt that there's no code in the function above or below the case statement, and you always have all relevant bindings directly adjacent to their usage.

Consider this Erlang code:

    f(X, Y) ->
       case X of
          {1, 2} ->
              low;
          {2, 4} ->
              high;
          {3, 6} ->
              Y
       end.
When you get to the 3rd case clause and `Y` appears, it's jarring because its binding is several lines removed.

Instead, this code makes it explicit for each function clause that we don't care about the 2nd argument, and when we do care, it's immediately obvious where it came from.

    f({1, 2}, _) ->
       low;
    f({2, 4}, _) ->
       high;
    f({3, 6}, Y) ->
       Y.


Yes, I agree with your point, I've written plenty of Erlang.

Exactly. I don’t know what the typical industry term for it might be.


Multiple Dispatch / Multimethods (I think?)

Interesting, it appears that multiple dispatch is accurate. Thanks.

Pattern matching!


No, it relies on that, but the creation of multiple function clauses via different matched heads itself isn't simply pattern matching.


What's wrong with Scala's pattern matching?


I shouldn't have said "crippled." It's just much more limited than when used in Erlang.

From digging a bit it appears to be somewhat the JVM's fault[0] and quite a lot of it is simply that it's a different type of language from Erlang.

In Erlang, nearly every line of code involves pattern matching, regardless of whether the developer takes advantage of it or not. Every assignment statement, every function return, every function parameter, is an exercise in pattern matching. It's at the core of the language, not an add-on.
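
A quick toy illustration of that pervasiveness (Elixir syntax here; the semantics are the same in Erlang): both "assignment" and function heads are matches.

    # "assignment" is really a match; this crashes unless the read succeeds
    {:ok, contents} = File.read("config.txt")

    # function parameters are matches too
    defmodule Area do
      def of({:circle, r}), do: 3.14159 * r * r
      def of({:rect, w, h}), do: w * h
    end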

As I mentioned in another comment, a closely-related feature that really shows it to its full advantage is multiple function heads/clauses.

[0]: https://www.scala-lang.org/old/node/11982


OK, I also don't like Scala's inability to deconstruct in the parameter list. I come from an ML background where that is possible. But I found it less bothersome in practise than I had originally anticipated.

Erlang has pattern matching on bits [1], which is convenient; that would be nice to have, especially when writing networking software.

[1] P. Gustafsson, K. Sagonas, Efficient manipulation of binary data using pattern matching.
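
For anyone who hasn't seen it, a rough sketch of what bit-level matching looks like (Elixir syntax, with a made-up wire format): fixed-width fields fall straight out of the pattern.

    defmodule Header do
      # hypothetical packet layout: 1-byte version, 1-byte flags,
      # 2-byte big-endian length, then `len` bytes of payload
      def parse(<<version::8, flags::8, len::16, payload::binary-size(len), rest::binary>>) do
        {version, flags, payload, rest}
      end
    end

    Header.parse(<<1, 0, 3::16, "abc">>)
    # => {1, 0, "abc", ""}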


Good point on the binary pattern matching. I’ve rarely used it, but it is quite powerful.

I was pleased to discover Python’s tuple pattern matching in function heads, only to be disappointed that it was removed in Python 3.


You are completely right. I work mostly in TypeScript and JavaScript day to day and I miss pattern matching so much. Object destructuring is nice and all, but once you've seen the possibilities in a language where it's a first-class citizen... everything else just seems inadequate.


It's getting there for us in JS land, slowly but surely! There's still a lot of discussions that need to happen (read as "mostly bikeshedding" IMO), but once they settle on the actual syntax, I'd feel comfortable starting to use it with compilers.

https://github.com/tc39/proposal-pattern-matching


Awesome! Thanks so much.


No problem at all! I am also a Ruby guy, so I can easily say that Elixir will not disappoint you.


I'm kind of a ruby guy, and I started with the elixir intro from their website. Very clear and well documented. Then I built a website using Phoenix and learnt through that.

Docs in elixir are first class citizen, so everything is very well documented. And just like in ruby/rails land, any question you may google will return a lot of high quality posts with answers!


Highly recommend this: https://codestool.coding-gnome.com/courses/elixir-for-progra...

I think it's probably the best way for an experienced programmer to get up to speed on idiomatic Elixir, why you do certain things, etc...


+1 on this.

The coding-gnome is the first tutorial that explained OTP, Supervisors and GenServers at a level that helped me understand how to actually use them.

Afterwards, the official Elixir documentation also became much more readable and understandable to me. Before that point, some things just seemed too arcane because my understanding of OTP was wrong.


The official docs for Elixir or the Phoenix framework go a long way. One thing that sticks out above the rest for me is Elixir koans[0]. They're extremely rudimentary, but I think the project showcases Elixir's hot reloading, if I'm not mistaken, and it's very fluid. Other follow-along courses may have taken this approach, but I was extremely impressed by how everything fits together.

[0] https://github.com/elixirkoans/elixir-koans


"The gateway drug" IMO is the Phoenix framework. It models cleanly to much of what you already know as a Ruby dev. The creator, Chris McCord wrote a book [0] for it that is one of the most well written programming books that I've come across. Chris does a fantastic job of making the information accessible and understandable. You learn by building an app of substance and apply all the concepts that make phoenix a wonderful web framework to use. Towards the end, Chris starts easing you into some of the more "hardcore" concepts and gives you the opportunity to use them on the project you are building throughout the book. That level of quality just seems to permeate the ecosystem as a whole. Others on here have made the point that the waters might start getting muddied as it gets more popular but I've had nothing but great experiences so far and would love to see it really take off.

[0] https://www.amazon.com/Programming-Phoenix-1-4-Productive-Re...


I found the Programming Phoenix book to be quite nice because it is similar to Hartl's Rails tutorial. Seeing where and how the two differed was very useful.

Elixir in Action from Manning, and all of the PragProg books are great


Funny how that pg post has aged. Now python is the language you learn if you want to get a job.

What’s the new python?


Rust, ClojureScript, Clojure, Racket, Elixir, Elm, Reason, OCaml, Haskell


I would add Nim to this list, just after Rust


Also, Java and C++, even if they are "only" used at a "few" companies that happen to have the hiring power of 1000 startups each.


OCaml? Someone other than Jane Street is using OCaml?


I think the Reason syntax is picking up a lot of steam these days, especially in areas where React is used (web frontends and ReactNative apps). You can see some of the companies using it here https://reasonml.github.io



Rust, maybe. I'm not particularly leading edge myself, so I've heard of it but never actually tried it.


Yep. The last time I picked up a language it was Python. It lived up to much of what I had read. I've ignored most languages and watched them come and go, with the exception of web dev, which is not my thing. I wanted a better, more Pythonic C++ and have started dabbling in Rust. It appears to be everything people say. Would recommend.


golang


JavaScript.


No loops? That can be life changing.


If you’re wondering: Erlang (and so Elixir) has a very “pragmatic” design behind its abstract machine/runtime; it’s not like the lack of loops is to be functionally pure or anything.

They remove loops in the Erlang abstract machine so that there are only ever O(1) instructions executed between each function call (where a tail-recursion is considered a function call.) With this constraint in place, the runtime can get away with a hybrid of cooperative and preemptive scheduling called “reduction scheduling”, where functions only ever yield when they make a function call (or return from one.)

By ensuring every function body has O(1) “reductions” before it hits a CALL or RET op, the runtime guarantees (unlike cooperative scheduling) that execution will always yield from a task in a bounded amount of time; and by ensuring yields only happen at call-sites, the runtime ensures that (unlike preemptive scheduling) there is no register state to preserve at time of yield—all the scheduler has to record when it pauses a task is the thunk (function pointer + parameter list) for the call it was about to make.

Together, these guarantees allow for extremely low-overhead context switching between tasks in a soft-realtime context.

Other systems (e.g. .NET’s Orleans) simulate this hybrid approach by splitting code into state machines with explicit yield points being the state transitions. But AFAIK, only ERTS takes the approach of making function calls into the state transitions. (Because, without the no-native-loops constraint, such an approach doesn’t work at all.)
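
A toy sketch of what the no-loops constraint looks like from the programmer's side (Elixir syntax): what would be a while loop elsewhere becomes a tail call, so every iteration is a call site where the scheduler may switch processes.

    defmodule Countdown do
      def run(0), do: :done

      def run(n) do
        # ...O(1) work per iteration goes here...
        run(n - 1)  # tail call: consumes a reduction, a potential yield point
      end
    end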


Recursion is the way to go in Erlang / Elixir: https://learnyousomeerlang.com/recursion When one writes a bit of Erlang for a while, one gets a completely different view on any software to be written in the future. Erlang is life changing. But there are things where other technologies are a better fit.


> Recursion is the way to go in Erlang / Elixir

I think it's important to realise that they're different ways of writing the same thing. For example, in Lisp, the 'do' operator is basically a C-style for loop: in Scheme, it's a macro over tail recursion, and in Common Lisp, it's a macro over gotos, but it provides an almost identical interface to programmers. Using recursion over loops is just a stylistic/syntactic choice, the really important part is understanding how recursive functions are equivalent to loops, and vice versa, so that you can apply your experience in one situation to the other.


Hence I said “in Erlang / Elixir”. Because, really, in those two, recursion IS the only way to go.


I don't know Elixir, but I do know Erlang (one of Elixir's inspirations). The lack of loops is replaced by recursion or higher level functions. Tail calls (all, not just recursive) are optimized so that you don't blow the stack. Since the two (iteration and recursion) are easily transformed into the other, it's a pretty clean mode of coding.


You can sort of loop in Elixir, e.g.

Enum.map(1..100, &IO.puts/1)

prints the numbers 1 to 100


Well, there is also a `for` construct:

for i <- 1..100, do: IO.puts(i)

I guess what people mean is that that’s an abstraction over function calls?
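
Roughly, yes. Conceptually the comprehension expands into higher-order function calls; a simplified (not literal) sketch of the `for` above in those terms:

    1..100
    |> Enum.reduce([], fn i, acc -> [IO.puts(i) | acc] end)
    |> Enum.reverse()
    # prints 1..100 and returns a list of :ok values, like the `for` does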


> No loops? That can be life changing.

I doubt it would be for someone from a Ruby background, where higher order functions already generally replace built-in loops.


As someone who's done a bit of Rust, I'm starting to see its influences just reading that list of features.


It looks to me like Elixir is just doing the same things most other functional languages are doing.


The language itself is a fairly typical FP language, but only if you disregard the VM it runs on and the OTP framework.

As I've frequently said about Erlang, the language is much more powerful than the sum of its parts. There are a great deal of complementary features in the stack.


It is really the BEAM that is most compelling, Elixir is a great language on top of it, but the VM provides tools that simply don't exist in most other languages, if any. Specifically things like runtime tracing with pattern matching, the ability to connect an interactive remote shell to a running system and poke at it using code, ETS/Mnesia, hot upgrades/downgrades, etc.


I really wish that Erlang/Elixir were strongly typed. I've blown my foot off with the "dynamic language gets us going fast and now let's scale" footgun too many times to not be afraid. Obviously big important systems are built in Erlang, but I don't think I've got the guts to do it.


Erlang's niche is distributed systems anyway, an area where static typing is at odds more often than not. I mean, you can guarantee the executable you are working on is free of typing errors, but it's just a small component of the overall architecture. What about data you are receiving? What if some remote client is using an older version of your software? What if the web service you are calling changed its API recently?

I use Elm, which is as statically-typed as possible for a client webapp, but I know I can never guarantee that JSON data I receive from remote servers is of the expected type.


I find static typing is extremely useful in the case where you're receiving JSON though - I write the server side and we use Scala. For us we write the type we expect, validate the input as soon as it hits the server (in the controller), fail immediately if it doesn't validate and if it does validate then we're dealing with that type in all the rest of the code. Obviously that's not going to handle receiving a shape of data that you don't expect, but you'll error immediately and I find it really robust in practice.


Dynamic languages do this with schema validation. It can be more expressive, checking value ranges or string formats, multi-field conditional constraints, and so on. And it allows better, customizable diagnostics.

Edit: also versioning, cross-language sharing, use in generative testing, API client code generation, and other metaprogramming uses. And documentation generation, e.g. Swagger.


See e.g. JSON Schema, which has wide language support and tooling (even some IDEs support it for validation while you're writing a compliant document).


and once you've validated a value your code immediately forgets what type it is.


Beware of confusing dynamic typing with weak typing :)

If you are using eg Clojure's spec, you can reference the same stuff downstream in the control flow too, for eg fn argument validation.


> Erlang's niche is distributed systems anyway, an area where static typing is at odds more often than not.

I've worked on a lot of distributed systems, and I don't see what this has to do with anything. Yes, static typing doesn't prevent you from doing any of the things you mentioned, like changing your API out from under someone, but it's not supposed to. It does, however, prevent certain classes of mistake in the individual service or executable, which is still a laudable goal in and of itself. It's not a silver bullet, but it's no less useful in distributed systems than it is anywhere else.

Also, are we talking about strong typing or static typing? The OP said one thing and this reply says another. Either way, both are nice in distributed systems too.


I've come to believe that modern type systems are not primarily about types. They are mostly useful static code analysers. Sometimes you can use them to optimize the machine code generated by the compiler, but they are much more about enforcing constraints at compile time on the abstractions we build in code. System boundaries in distributed systems are just places where you have to declare and dynamically validate what other systems send in. Validating how you use those inputs according to your declared constraints at compile time is still very useful.


Static typing with uncontrolled input, whether user input or distributed systems running multiple versions, is handled via "Don't trust, verify!". Structures and inputs are validated at the ingress so that the rest of the code of you application can rely on static type checks. If the validation fails the entire request can be immediately rejected.


This is ridiculous. If I'm dealing with something that completely terminates quickly, like the Python in Meson or the Nix in Nixpkgs, maybe I can get away with not having static types and still get work done.

If I'm dealing with some complex long-lived distributed system, I don't have time for the bullshit errors that the absence of static type checking allows. I'm already waist-deep in real problems!

Also, if you control both the client and the server, do not just blindly follow Postel's law and accept crappy data. Be rigid and, lo and behold, one side quickly excises all the bugs in the other.


Oh, come on, static typing does not prevent all type errors. Everything that arrives on the wire can blow your program up.

> Also if you control the client and the server, do not just blindly follow Postel's law and except crappy data.

But in the real world, we do not live in a bubble. Very often we do not have the comfort of controlling both the server and the client. We have to keep edge cases in mind. Unexpected errors / cases happen.


> static typing does not prevent all type errors. Everything that arrives on the wire can blow your program up.

How?


I'm guilty of HN generalisation. It depends on the language's implementation of its type system. In Java, any null can blow your program up; the type system is not going to help. Golang, with interfaces and nil pointers, can blow you out of the water.

So whenever something expected to arrive from the wire does not parse correctly, then, depending on the parsing method, once you have that null, all bets are off. The type system gave up.

It all depends.

Another example is Java interfaces. Merely checking for an interface type does not prevent errors down the line. It does limit the amount of errors.


Because it's just bytes, which may or may not deserialise to the type you expected.


That shouldn't blow up my program. If it doesn't deserialize to my type I handle that as a normal branch in my code.


Sure, but now you're enforcing types manually instead of with a type system. What have you gained over not having a type system? In this case, nothing.

It's still enforced by the type system. The type-safe deserialization libraries nowadays use 'result' types to indicate success or failure. The compiler forces you to handle both.

This seems like an overstatement since protobufs (for example) are a widely used way of describing a common data format for distributed systems and they do support static types. Though, common practice is to make all fields optional.

Types can also be very useful for security, to keep track of which runtime safety checks need to be done. For example, you can have a separate type for HTML strings that can be rendered without escaping.


It has had type specifications for years: https://learnyousomeerlang.com/dialyzer

They are opt-in, of course, but have the property that the more specifications you add and the more precise they are, the more type errors they'll discover. After it runs, it tells you if there is a type discrepancy. If it can't decide, it doesn't say anything.

The technical term for that is "success typing" http://www.it.uu.se/research/group/hipe/papers/succ_types.pd...
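
Elixir spells the same annotations with @spec, and Dialyzer reads them the same way. A minimal sketch, with a made-up module:

    defmodule Prices do
      # Dialyzer can flag callers that break this contract,
      # e.g. passing string amounts instead of integers
      @spec total([{String.t(), non_neg_integer()}]) :: non_neg_integer()
      def total(items) do
        Enum.reduce(items, 0, fn {_name, cents}, acc -> acc + cents end)
      end
    end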


is this exactly the same as 'soft types' (http://wiki.c2.com/?SoftTyping) or subtly different?


I write elixir professionally and that's how I'd describe the type specification system.


For what it's worth, while I don't really disagree, Erlang's pattern matching is very powerful and covers a lot of the same ground as static typing.

Erlang is not in my opinion a “dynamic language gets us going fast” language. It's a language specifically designed to handle large scale reliably for long periods of time.


I think Erlang and possibly Elixir get about as close as you can get to having a rock solid reliable service in a dynamic language. Largely due to the pattern matching and the emphasis on coding the happy path and letting the non-happy path crash with a supervisor restarting it and logging the crashes for later analysis.

You will get a lot of mileage out of the Erlang ecosystem.
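
The happy-path style looks roughly like this (toy GenServer sketch, hypothetical Cache module): match only the success case, and let anything else crash and be restarted.

    defmodule Cache do
      use GenServer

      def init(state), do: {:ok, state}

      # no defensive branch: a missing key fails the match, the process
      # crashes, the supervisor restarts it, and the crash is logged
      def handle_call({:fetch, key}, _from, state) do
        {:ok, value} = Map.fetch(state, key)
        {:reply, value, state}
      end
    end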


Philip Wadler tried to add static typing to Erlang in the 90s. He could only make it work for a subset of the language.

IIRC message passing greatly complicates things.


The problem all extant typing systems for message passing (in particular session types) have is that they cannot really deal well with data dependent patterns of message passing. Erlang's mailbox-based message passing is especially prone to data dependent patterns of message passing.

Yes, you can add enough expressive power to the typing system to make this work (e.g. dependent types), but then it stops being pragmatically viable, as far as we know in 2019.


Claiming that dependent types are not pragmatically viable is quite a bold claim, would you mind backing it up? My experience with them so far has been more than great.


Nothing bold about it, here is some evidence.

- No type inference, since that's not decidable for dependent types.

- In most industrial software engineering, getting the specifications is a big problem, whence agile methods. Without detailed specifications early on in a software project's lifecycle, what's the point of paying the price for type dependency when you don't even use it.

- Relatedly, dependent types don't solve the oracle problem in software engineering: i.e. where do you get the specs from and how do you know that your specs (in this case types) are correct rather than false?

- Rewriting & refactoring code becomes much more expensive, because you now also have to rewrite / refactor the specifications. (This is already a reason why post-Java, exception specifications have been abandoned)

- No widely used programming language offers full dependent types (yes, I'm aware of Scala's path dependent types and Haskell's forays into dependency), despite dependent types being at least half a century old [1]. So it's not like programming language designers don't know about them.

- Dependent types have really only been worked out beyond research prototypes for functional languages, not e.g. for message passing concurrency.

   My experience with them so far 
   has been more than great.
I'd be interested to learn what non-trivial code you've written in dependently typed programming languages. By non-trivial, I mean: industrial code, say > 20k LoCs, at least 3 programmers involved, specification changed over time. Specification was not fully available at the start of the project.

[1] N. de Bruijn, Automath, a language for mathematics.


> - No type inference, since that's not decidable for dependent types.

You can still have type inference, it just won't be able to infer the type of all valid terms, in which case you just have to add a manual annotation. This is not a strange concept; this is what Haskell does, for example, with higher-ranked types.

> getting the specifications is a big problem

You do not need to specify what the whole program will do (in fact you can use a language with dependent types without having to specify anything at all) - that being said, people usually specify the behavior of specific functions in the program.

> - Relatedly, dependent types don't solve the oracle problem in software engineering: i.e. where do you get the specs from and how do you know that your specs (in this case types) are correct rather than false?

I thought that the point of Erlang having dependent types would be to make data-dependent code easier, not to specify the behavior of functions. But yes, this is correct, and it holds true for every proof assistant that I am aware of, not only the ones that use dependent types.

> because you now also have to rewrite / refactor the specifications

This is a great thing actually - I would consider it a bug if my function after a refactoring did something that went against its specification, with dependent types I can have the compiler warn me about it.

But again, you are not forced to write a specification. Dependent types only add possibilities without removing anything.

> So it's not like programming language designers don't know about them.

I would argue that most programming language designers are not familiar with type theory. There are thousands of languages, one does not need to be a genius to make one. Heck, two of the most popular languages (C and Go) do not even have parametric polymorphism, not to mention that generics were added in Java only in 2004. In addition to that pretty much no popular language has proper support for higher order types either.

> industrial code, 20k LoCs, at least 3 programmers involved, specification changed over time. Specification was not fully available at the start of the project.

In that case, none. Though I would argue that your standards are too high, after all just the fact that I have not worked in the software industry excludes anything that I have made.

As for LoC, I would not use it to measure the scale of a project. With Java for example even the most trivial program can easily reach thousands of lines.


> > industrial code, 20k LoCs, at least 3 programmers involved, specification changed over time.

> Though I would argue that your standards are too high

Ahaha what? 20kLoCs and 3 programmers is a college-level one-off project. Industrial code (and industrial projects) routinely involve orders of magnitude more code and orders of magnitude more programmers.

So no, "anything that you have made" doesn't count until yes, it's not just you working on your own project that you know inside out.

So yes, when you write "My experience with them so far has been more than great" and you can't provide evidence that they work even for a measly 20k-LoC-project with three people, your experience is entirely irrelevant.


   most programming language designers 
   are not familiar with type theory.
SPJ, Odersky, Leroy, Hejlsberg, Syme et al are familiar with type theory, it's not rocket science.

   parametric polymorphism
There are well-rehearsed reasons for excluding PP. I don't agree with them, but those choices were not made from ignorance.

   I have not worked in the software industry
I invite you to change this, and once you have a few years of industrial programming experience, to reconsider.

Static types with message passing work great and are used in production at any company that uses Akka Typed.


Static types in a distributed system with hot swapping and live upgrades of parts of a running system aren't trivial.

Types are great at stating and preserving global invariants. Invariants sometimes do vary as systems evolve, though. You may need to handle polymorphism in your data flow mid-transition. And runtime polymorphism is just another phrase for dynamic typing.


Yes and no. It works great in Scala thanks to extractors but Java is a bit of a mess, example: https://github.com/patriknw/akka-typed-blog/blob/master/src/...


The guts or the discipline?

Dynamic typing doesn't really cause problems on its own. Only abuse of it does, which can be said about most things.

I guess there is an argument that it forces you to have good habits, but that comes at a pretty big cost if your team is half decent.


"Just write good code" or "Just hire perfect programmers" the go to retort for any language weakness. I've been in this industry long enough to just roll my eyes every time I hear that now, it simply doesn't scale beyond one individual.


Good thing I didn't say or imply either of those then.

I said don't abuse language features. That is a far cry from writing perfect code.

Non perfect code is expected early in a startup, I'm just saying don't confuse your non perfect code with a language failure.

I'd also like to say that if you've never seen "hire perfect developers" scale past one, you've been optimising for the wrong thing.

I've seen a team go from technical debt hell and dread to a powerhouse team where everyone commits daily.

It doesn't take much, and it doesn't take perfect developers, and it certainly doesn't have anything to do with the language.

It had everything to do with honest and healthy code review. It had everything to do with people feeling comfortable enough to say, your solution is a hack, why are you taking this shortcut.

That isn't write perfect code, or hire the right people. That is hire pretty much whoever will take the job. That is prioritising and optimising for quality.

We still had a legacy code base, it still made us money, and it gave us the time and opportunity to do things right, but you still need to take the time.


My team might be half decent. So might the dozens of other teams working on the code base over years and years. Everyone will certainly have the best of intentions. But mistakes will certainly be made. Misunderstandings will occur.

Note I said "now let's scale" -- it's pretty easy to rely on good practices and a steady hand when it's a dozen engineers in a room. When it's a thousand engineers in three timezones it's a fair bit harder.

Edit: typo


Well actually you said "let's move fast and then let's scale" as in you were implying there was some technical debt built up by moving fast.

You can move slow; choosing a static language only imposes that on you, it doesn't fix the underlying problem (developers being lazy).


Isn't Erlang the language that you can edit/patch while it is executing? I think hard typing would be extremely difficult in that kind of environment.


Someone should correct me if I'm wrong, but I don't think that this is a special feature of Erlang.

It's done by writing your handlers to accept a closure to use for future requests. It's not inherently different from doing this in Java:

  if (req.isUpdate()) {
    // swap in a new handler implementation by name (needs a cast from Object)
    handler = (Handler) Class.forName(req.getUpdateName()).newInstance();
  } else {
    handler.handle(req);
  }
It's way easier when you have message passing and every level is set up to restart on crashes, but I don't think anything necessarily stops you from doing the same in any other language.
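
For comparison, a sketch of how this is conventionally spelled on the BEAM (Elixir syntax, toy module): the VM keeps two versions of a module loaded, and a fully-qualified self-call always jumps to the newest one.

    defmodule Loop do
      def run(state) do
        receive do
          msg ->
            # fully-qualified call: after a hot code load, the next
            # iteration executes the newly loaded version of this module
            __MODULE__.run(handle(msg, state))
        end
      end

      defp handle(msg, state), do: [msg | state]  # placeholder "work"
    end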


A bigger problem is this:

- Given a distributed Erlang system with two nodes, where both nodes are running the same release;

- Given the same process (Erlang thread) on both nodes, running the same code (i.e. a GenServer backed by the same module)

- Given a new version of the release is being applied

You can't assume that the two processes will be upgraded at the same time. This means that messages sent between those processes may violate the types expected by one version or the other. Furthermore, in Erlang, any process can send a message to another process, so there is no way to enforce that the data a particular `receive` expression gets will even be a type defined in the code it is running. Furthermore, even on the same node, during an upgrade, some parts of the system are still running old code, while some parts are running new code, so it isn't even specifically about distribution.

There is a whole area of type system research around session types, which are designed for more or less this use case, but from what I've seen, none of them handle the case of arbitrary messages, or the case where system upgrades are being rolled out and old code may receive messages from new code.

I still think there is a place for a type system in Erlang/Elixir, but it would have to deliberately ignore process messaging at the very least, at least until a type theoretic solution is available.
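
In the meantime, the usual untyped defence is convention rather than proof: give long-lived processes a catch-all clause so a message from a node on a different release gets logged instead of crashing the process mid-upgrade. A rough sketch:

    defmodule Worker do
      use GenServer
      require Logger

      def init(state), do: {:ok, state}

      # ...normal handle_info clauses for known messages go above...

      # catch-all: tolerate and log anything unrecognized, e.g. a message
      # shape sent by a node still running an older or newer release
      def handle_info(msg, state) do
        Logger.warn("unexpected message: #{inspect(msg)}")
        {:noreply, state}
      end
    end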


You're still going to run into the same problem between old/new processes and message types mismatching with dynamic typing, it's just going to be implicit.

I think that this ideas generalizes pretty well. Just because a language doesn't have explicit types doesn't mean types don't emerge organically. Assumptions will still be made about certain fields existing, there's just no guarantee that the assumptions still hold!


It is, but you can still run static analysis tools on the written code before executing a code switch. The key thing is that you track the changes applied so you understand what's happening.

The nice thing about Erlang, though, is that the nature of the message passing interface and pattern matching means that you can get relatively understandable interfaces.


I suppose the 'Let it crash' characteristic of the BEAM kind of compensates for a lack of a strongly typed approach, if the concern is stability and being able to stay up despite triggering a bug.
