Hacker News
Why aren't there more programming-language startups? (akitasoftware.com)
106 points by mpweiher 13 days ago | 148 comments





First off, this article isn't actually answering the strawman question it looks like it's asking. I think most people who don't work in the space would assume the question means "why aren't there people selling new languages/compilers?" That's a strawman question with a trivial answer, though--free compilers have been the de facto standard for decades [1], so if all you have is a new compiler or a new programming language, you're going to make only a pittance on it directly.

The actual question is why there are so few startups for people who work on PL technologies--which isn't limited to compilers or programming languages, but includes things like IDEs, debuggers, and static analyzers [2]. For example, Coverity--a decently well-known paid static analyzer--is kind of the best model of what you'd expect to see more of here. So it is somewhat more interesting to ask why this tool space isn't particularly well-served as a market.

[1] The biggest exception to the rule is custom hardware (particularly embedded, but this also holds true for mainframe and HPC systems), where custom compilers come along for the ride. But that would be a hardware startup, not a PL startup, so it doesn't really count.

[2] I tend to lump decompilers into the formal methods/PL/compilers/software engineering/computer architecture spectrum, mainly being a low-level compiler developer myself, but I suspect most people would consider reverse engineering tools to be on the security side of things rather than the PL side of things. And decompilers are certainly another noticeable successful paid-software niche, with the very expensive IDA Pro still remaining more or less the industry standard.


This is a good analysis. The core question is, "Why aren't more startups working on programmer productivity?"

My early experience was in system design for board-level and chip-level systems (e.g. ASICs, FPGAs, SoCs), where there is considerable investment in tools ("electronic design automation"), because we had some clear metrics for developer productivity.

My simple answer is there are multiple reasons:

+ The industry has been very well served by open source tools.

+ Outsource development and test firms tend to bill by the hour, not the result; cutting hours lowers their revenue.

+ Coaches tend to focus on methodology and process results, not programmer productivity.

+ Managers of the programming / development function don't appear to devote cycles to determining where investment in commercial tools would pay off. There are exceptions.

+ First level managers tend to measure success by headcount, mirroring the outsource development and test firms, and not results.

---

I think there are significant opportunities for startups developing productivity tools, but they are in niche markets that complement existing open source, where development teams are paid or rewarded for results, not hours.


Here's my theory: tools for developers are one of the most obvious and satisfying kinds of projects you can work on, and they immediately benefit the person working on them, which means the free/open-source options are exceptionally competitive (compared with the free/open-source options in other categories).

Though there is probably also a "commoditize your complement" effect going on from big tech. Just look at VSCode: it's incredible - Microsoft could definitely charge for it - but they aren't even interested in doing that.


> it's incredible - Microsoft could definitely charge for it - but they aren't even interested in doing that.

Because the telemetry data is worth more to them.


Not really. I mean, the telemetry data is important to the business, but it ain't the reason why VSCode exists.

A much bigger value is that VSCode brought Microsoft into the consciousness of developers who would have otherwise never thought of using a Microsoft product for anything. It's been so successful that there are even classes of developers who think that "Visual Studio" refers to VSCode, not big-poppa-windows-application Visual Studio.

Source: I worked in the dev tools business at Microsoft for the past half-decade.


I'm not sure you are being completely honest, even though you say you worked at Microsoft. VSCode exists because Visual Studio is becoming harder and harder to maintain and they wanted something to target all the other languages not supported by VS2019. Also, they use Visual Studio Code to upsell their other paid services. It has first-class integration with Azure and GitHub, meaning the first thing people try alongside Microsoft's free editor is some other paid service. They have the same playbook with the Windows -> Office & OneDrive ecosystem.

I'm not saying this is bad or evil, but let's first be honest.


> VSCode exists because Visual Studio is becoming harder and harder to maintain and they wanted something to target all the other languages not supported by VS2019

That's definitely not true w.r.t strategy. I could speak quite a bit about the challenges of doing things with VS, since I worked directly on tooling for it. There are maintenance issues like any huge codebase with thousands of little things it supports, but you're probably not aware of how internal infrastructure for VS has made enormous improvements. Among other things, one of the products I worked on went from an unreliable 24-hour turnaround into a 4-hour guaranteed turnaround for my changes to show up in dogfood builds. Everyone who works on VS also thinks that can be improved, and works towards improving it too. It was actually one of the more heartening things to see: multiple teams who give a shit about engineering systems and make an effort to improve them over time. This narrative that VS is a mess and VSCode is somehow "the future plan" is complete nonsense.

VS and VSCode couldn't be any more different. Visual Studio does so many more things than VSCode, and that's by design. VS is a fully-fledged IDE, and the intended use is that practically everything you do is through the lens of VS. VSCode is fundamentally an editor that is a companion to your other toolsets. It's not meant to replace Visual Studio for VS's users, because doing so would mean building so many things into VSCode that more or less go against its entire design philosophy. Such a decision would be bad from the business's standpoint, especially since VS itself has no significant problem evolving over time (hello 64-bit new VS!!), and there are so many more high-leverage things to do for VSCode than re-creating things that already exist, But In Electron Now.

> Also, they use Visual Studio Code to upsell their other paid services.

Well, yeah, of course. It's an editor suggested in all kinds of contexts, there are product-specific extensions (proprietary marketplace!), and when you have millions of people who love a product, why not try to upsell some of them? That's what I'm alluding to. Nothing dishonest here.


That's very reasonable. Our company ended up looking at Azure a few years ago, even though it had never been on our radar before.

What telemetry data does VSCode collect? And how would that be useful?

Is the answer "because tools for themselves were one of the first things developers made"? That's a forty-year-old market that's already gone through consolidation and commoditization.

The author talks about how a lot of the tools currently being pitched are not that useful day to day - the static analyzer, say, that shows you the ten thousand things you could improve but never actually will. Lots of devs out there using print statements instead of proper debuggers, so even the existing tools often offer far more than they think they need to learn!

From there, my guess would be that the space of truly useful, groundbreaking tools that wouldn't just be competing with a good-enough free one or an established incumbent is pretty well explored. This isn't as true for new areas - Kubernetes management interfaces tend to majorly suck, say, especially if you aren't a keyboard fiend who's going to get the hang of k9s quickly - but new areas are also niches that could be risky to try to build a company in. And it probably also still suffers from the "don't know what I don't know" issue.

"Why aren't developers more interested in tools" is an interesting resulting question, to me, then. I think it probably has to do with the business-driven focus on building new systems and new features at high speed. We aren't building code to last for more than a few years, in many cases, so how deep is it justifiable to go on a particular block of it?


Selling static analyzers is hard. Your customer either has a bunch of code, none of which was written with the static analyzer in mind, and so they have to spend a bunch of time fixing spurious errors and warnings, or they have no code, in which case they probably also have no money to pay you with.

> so they have to spend a bunch of time fixing spurious errors and warnings

It's rare for a tool to be so shoddy that this is all you get. From what I've seen, you'll get three categories of report: valid bugs, valid bugs which the developers insist are false positives until forced to do in-depth analysis, and actual false-positives. The exact ratios vary depending on the tool in question but generally I've seen the first two categories be a good bit larger than the third.
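To make that second category concrete, here's a toy illustration (in Python, purely for example's sake, not from any particular analyzer): the kind of finding developers often insist is a false positive until they trace through it.

```python
# A classic "category two" finding: a real bug that looks like a false
# positive. The default list is created once, at function definition time,
# and then shared across every call that omits the argument.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

first = append_item("a")   # looks fine in isolation: ["a"]
second = append_item("b")  # surprise: ["a", "b"] -- state leaked between calls

print(first is second)  # True: both names refer to the same shared list
```

Each call in isolation behaves plausibly, which is exactly why the report gets dismissed until someone does the in-depth analysis.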


My last job was a fortune 500 that decided to start using a static analyzer on a 20 year old project. Thousands of false positives and less than ten true positives after hundreds of hours were spent going through it. Now the codebase was definitely a piece of work which is why there were so many false positives, but there were plenty of "the static analyzer is horrible" false positives too.

> Thousands of false positives and less than ten true positives after hundreds of hours were spent going through it.

Ten true positives could very well be worth thousands of programmer hours if they're severe enough.


Yes - I’m not saying that you won’t see false-positives, but I’ve rarely seen a project where it didn’t find real issues, including some that people resisted accepting (including exploitable security holes).

If you approach it determined to reject everything, or to blindly follow the tool, you’ll get poor outcomes. If you view it as a tool which can help you improve your code and triage results accordingly, you’ll almost always have better results.


I think this depends on two things:

1) Quality of existing language/compiler. Some languages are more prone to turning carelessness into exploitable bugs than others. Some compilers are better at flagging questionable practices than others (and, as an approximation, newer compilers for basically any compiled language are better than older ones).

2) How much you care about fixing the problems. If a team is understaffed and asked to overdeliver, they have no time for fixing technical debt of any sort, whether static analysis warnings or old cryptographic hashes or whatever.

Static analysis tools do best when 1 is low and 2 is high. But that's unusual. First, you never start a project in that situation. If you care about the sort of bugs a static analyzer will find, you almost certainly are going to pick a good ecosystem to implement in. So you're working with legacy code, almost by definition.

And if you have the bandwidth to address tech debt, there are many ways to use better toolchains and deal with legacy code. And this isn't a "Rust solves everything" post - frankly, upgrading your C/C++ compiler and listening to what it says will get you a lot of the way there. But that itself tends to be a sizable tech-debt-reduction project. (I'm one of the people on the GCC upgrade team at my own workplace, and we do justify the work to senior management by pointing out that new warnings are helpful for code quality, and we do spend time addressing those warnings.) We also have more tools and frameworks for moving parts of your computation to a different language or toolchain (more norms around microservices, better serialization libraries, etc.) if you want to go the rewrite route but want to do it incrementally. And there are plenty of language choices for a rewrite; frankly, most use cases in the '90s that needed C++ will do just fine today with unoptimized and easy-to-read Python.

So, if 2 is high, you have a lot of options other than static analysis.

(Aside: the post is about PL startups. I would bet the average PL grad would rather work with the ecosystem of a new, modern programming language that has benefited from recent research than write tools to work around an old and known-suboptimal one.)

I think the only real case where static analysis can work is where 1 is low but the choice to buy the static analysis tool causes 2 to become high. Basically, it acts like a consultant. The devs already know, vaguely, that the code is crap, and they wish someone would give them resources to fix it. Senior management buys a shiny static analysis tool, which is an objective outside voice saying how bad the code is. They tell devs to fix it, and devs are happy to knock out obvious fixes, your category 1 - and they'll still debate the non-obvious fixes, your category 2, because they're not being motivated by the static analysis tool, they're motivated by their own sense of what code is crappy, they just know that their new OKR is to silence all the static analysis warnings.

That's certainly a better outcome than not fixing anything, but that's still a worse outcome for the company, probably, than asking your devs what's wrong with the code and how they want to fix it.


> That's certainly a better outcome than not fixing anything, but that's still a worse outcome for the company, probably, than asking your devs what's wrong with the code and how they want to fix it.

I think this is homing in on the different starting points: if you experience static analysis as something imposed on the development team (e.g. by a bureaucratic security mandate) you'll definitely see more cost and less benefit than when the team itself selects that tool and selectively uses it as part of their ongoing maintenance work along with other tools (compiler warnings, fuzz testing, etc.).

For example, I've used tools like this where you add it as a CI stage where you say that new code shouldn't make the problem worse and then the team spends the next few months triaging reports on the area they're working on, tuning rules and prioritizing valid findings. Generally that'll have a surprising impact on the average code quality over 6-12 months, in part because cleaning up the chaff tends to make it harder for more significant bugs to be missed in a sea of compiler warnings and convoluted code.
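A minimal sketch of that "don't make it worse" gate (hypothetical names, assuming the analyzer emits one finding per line; real tools typically ship their own baseline mechanism):

```python
def count_findings(report_lines):
    """Count non-empty lines of an analyzer report (one finding per line)."""
    return sum(1 for line in report_lines if line.strip())

def ratchet(report_lines, baseline):
    """CI gate: fail if findings exceed the baseline, tighten it when they drop."""
    current = count_findings(report_lines)
    if current > baseline:
        return False, baseline   # new code made things worse; keep old baseline
    return True, current         # passed; the new, lower count becomes the baseline

# A change that leaves one finding against a baseline of five:
ok, new_baseline = ratchet(["src/a.c:10: possible null dereference"], baseline=5)
print(ok, new_baseline)  # True 1
```

The key property is that the allowed count only moves down, so cleanup compounds over those 6-12 months without requiring a big-bang fix-everything project.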


I've implemented static analyzers in several codebases, and every time it's paid off in finding multiple "must fix" bugs.

I'd say there's another big group you're missing: people with large codebases that compile with tons of warnings that just get ignored. If you aren't willing to spend the time fixing the existing warnings to prevent problems, why would you spend money on a tool that will give you more of them?

My theory: no sane programmer wants to be locked-in by the vendor of a programming language.

Hence, all successful programming languages are open and free.


When was the last time someone forked a programming language? (note: not a runtime, a language)

In practice the only kinds of languages you don't get locked in to are those with excellent interop with other languages. It's got little to do with whether the reference implementation is open source. Kotlin's lack of lock-in doesn't come from the fact that it's open source - that's nice but in practice most community contributions are small. It comes from the fact that you can recode individual classes in Java if you so wish. And in fact, even convert Java to Kotlin automatically, if you're going the other way.

Also, lots of developers have written code in proprietary languages with a single vendor. It's HN groupthink to believe that nobody 'sane' does this. Look at ABAP, Apex or MATLAB. All very widely used, even though you may not have heard of them. In the past Delphi was also very popular, and it's only recently that the C# compiler went open source.


> When was the last time someone forked a programming language? (note: not a runtime, a language)

Python 2 -> Python 3? Perl 6? I know, neither of those was intended to be a fork.

Before that, C# could be considered as a fork of Java, and C++ as a fork of C.


> Before that, C# could be considered as a fork of Java, and C++ as a fork of C.

C# and Java might be eerily similar, but by no definition of "forking" might C# be considered a fork of Java. Maybe you could consider it a fork of C, as its original name was "C-like Object Oriented Language" (COOL).


So eerily similar that the class names differ just in letter case? Yeah, I'm not buying how that's "not a fork".

How open? Neither Go nor Kotlin cost money to use, so I’ll give them free.

Coverity is what I was thinking of when I read the article, also Resharper. Since I work at megacorp I have access to licenses but I think the problem is that running these tools effectively adds a lot of overhead. We programmers are a jittery bunch so planning ahead and integrating these tools and then spending the time is hard.

Back when I started my career almost nothing was free. At the time I worked at a startup and we were always drooling over the stuff in "Programmer's Paradise". We did pay for a few crucial tools but couldn't afford many. Times have changed a lot since then, and the fact that high-quality compilers and IDEs are effectively free has raised the bar for paid products substantially.


> [1] The biggest exception to the rule is custom hardware (particularly embedded, but this also holds true for mainframe and HPC systems), where custom compilers come along for the ride.

This is exactly what RISC-V was created to address. People here get excited about running Linux on it, but really the goal is to have a flexible, extensible ISA that can be used from microcontrollers [1] to DSPs, GPUs, & TPUs. Full-on Application Processors, with their FPUs, MMU & Cache, are only a small part of the design space of RISC-V.

The shared ISA means the rest of the stack could share and leverage tools. Any day now ;)

[1] https://www.espressif.com/en/news/ESP32_C3


For this information, I recommend viewing Dave Patterson's "50 years of Computer Architecture" talk:

https://www.youtube.com/watch?v=HnniEPtNs-4

Dave Patterson is one of the creators of RISC-V, and his credentials include pioneering RISC, the original concept.

https://en.wikipedia.org/wiki/David_Patterson_(computer_scie...


We’re building a primarily PL startup at Prophecy.io - Low Code Data Engineering Platform. We use parser combinators heavily, code generation and all. Hard to find skilled engineers in compilers these days, though.

The Low-Code movement seems very focused on programming, no?


Would you consider proprietary solvers or optimization software in that category?

There aren’t more startups creating languages because it’s hard to go up against established incumbents who give away their product for free. Python, Go, JavaScript, Rust and most other mainstream languages thrive on this model. All of them bend over backwards to make it easier for developers to join the ecosystem. Free tools, free learning materials, easy installation, gentle learning curve (as gentle as possible). More developers leads to a flywheel effect of more libraries, more teams taking advantage of the rich ecosystem, more job opportunities cropping up and therefore more developers flocking to the language.

If you have a “pop out the credit card” moment at any point you slow adoption and end up with a smaller, poorer ecosystem that finds it hard to compete with the Pythons and Gos.

But if your language is targeting a niche where no established FOSS language exists, then a startup would work. Create a language, give it away for free (MIT or Apache ideally), then charge for creating/maintaining niche libraries or other consulting. This is the model chosen by Julia (https://news.ycombinator.com/item?id=9516298). One of their main competitors is a paid product by MathWorks.

Side note - this flywheel effect determining the long term success of programming languages is why discussions around them can get so contentious. A criticism of “your” language could influence an exodus from the developer community around it, making it die slow death. For a person who’s put years into the language, it could mean throwing away all that knowledge and starting afresh elsewhere. So they respond harshly to the criticism, subconsciously hoping that the language ecosystem stays healthy.


Julia targets a space that has plenty of free offerings as well (like Python and R). Julia is itself open source too, and while the founders got work in consulting and cloud offerings, they didn't have to lock up part of the ecosystem behind proprietary licenses.

I don't see any point in having a proprietary ecosystem when open source is so much better in every aspect.


> they didn't have to lock up part of the ecosystem behind proprietary licenses.

That's what JuliaPro (Pumas, etc) is.


Thank you for the correction. I'm disappointed :( is it just Pumas, Bloomberg and Miletus?

Your side note is a really good point. I think it's reinforced today by the reliance on libraries. These days, the "blub-iness" [1] of your language is determined not only by the language itself, but mostly by the libraries. Also, it's easier to wait for your language to integrate the good parts of others (Java/C# with immutable records and pattern matching, for example) than to change what you use. I sometimes fear that the more time passes, the harder it will be to start a new programming language.

[1]: In reference to why Lisp over other languages, in this article: http://www.paulgraham.com/avg.html


The nice thing about the JVM ecosystem (and to a lesser extent .NET) is that it makes new languages a lot easier to spin up and the discussions around them far less contentious because language interop works so much better.

Like, if someone says they prefer Java to Kotlin, that doesn't threaten me at all because I can use their stuff easily. Likewise in reverse, as long as the author of a library actually wants it to be usable from Java it will be. Even with stuff like Scala and Clojure, where the ecosystem guys really don't care about interop in the reverse direction, it's still possible and can be done when people want it for relatively low cost (just avoiding exotic language features in your API definition, more or less).

It does take a lot of the heat out of those discussions. You don't get the same influx of "I rewrote grep in Rust" type stories.


It’s funny you should mention ripgrep (text search in Rust) in particular because it’s a great example of a tool being reused in other environments. It could be used with C bindings or by calling the binary. The latter is how Visual Studio Code integrates it. It’s a win for everyone if the fastest grep tool (benchmarks - https://blog.burntsushi.net/ripgrep/) is used widely.

Yeah, grep in Rust was probably a bad example. I do actually use some replacement command line tools that are written in Rust :) I think you get what I mean though. Maybe an HTTP stack would have been a better example. Like, people finding fairly trivial excuses to get a language in front of people.

Don't think HTTP is a great example either. The most used HTTP stack in Rust was integrated into curl :)

https://daniel.haxx.se/blog/2020/10/09/rust-in-curl-with-hyp...


You're damn right grep was a bad example. And I don't know of anyone building good tools just to "get a language in front of people."

Ripgrep isn't really a rewrite of grep. It's a competing product that tries to do a better job of serving use cases that people commonly use grep for.

On one hand that's true, on the other hand people will often think "why bother?". I expect that Kotlin usage (at least on the server) will slowly fall as Java is getting better and better, and probably the same for Scala. Clojure seems to have a more unique value proposition but it's already the least used of the 3 hosted languages.

Maybe? But Kotlin was first launched in 2010 and didn't hit 1.0 stable until the start of 2016. Even the latest versions of Java are still very far behind in ergonomics, features and general usability, to say nothing of exclusive features like Kotlin/Native, Kotlin/JS and Jetpack Compose.

Let's say Java does eventually catch up, but it'll take at least five years. So then Kotlin has been making JVM user's lives easier for a decade by that point. That's pretty good! Definitely worth having.


It's a truism that programmers won't use non-free programming languages, but I don't think anyone in this thread has talked about why.

I think a key part is that investment in a language goes both directions between programmer and language author. When someone learns a language, they are spending a lot of time and effort to load that language into their head and build expertise in it. That effort amortizes out over the amount of useful software they are able to write in that language in return. When the language is paid and where users have no control over the cost, they can end up in a situation where they are unable to extract value from what they've already learned if the language gets too expensive.

It's sort of like asking users to turn a corner of their brain into farmland but then giving the compiler authors the key to the padlock on the fence. Users don't want to give financial control over a part of their own headspace to someone else.

Free programming languages ensure that the user can always have access to the value of their own expertise in the language.


> It's sort of like asking users to turn a corner of their brain into farmland but then giving the compiler authors the key to the padlock on the fence. Users don't want to give financial control over a part of their own headspace to someone else.

Well put. I think this is why the Google/Oracle court battle was so contentious to a lot of programmers. It wasn't just that people hate Oracle (sure, that too), but also that it was about Oracle trying to claim a monopoly on whatever piece of our brains is dedicated to our knowledge of Java.


Well, to be fair, Google is in a unique position to shape opinions and manufacture outrage over their right to ham-fistedly thumb their nose at Oracle's software licenses. Google has hundreds of patents, and they have some applications for patent which are total nonsense (like the heart hand gesture... seriously). I don't think any party involved has a credible moral objection to intellectual property rights. I think they just have different friends in D.C.

While I do have some sympathy for the argument that Google has too much power in the court of public opinion, Oracle was clearly on the wrong side of history here. The damage to the industry from allowing APIs to be copyrighted would have been immense.

>Oracle was clearly on the wrong side of history here.

I agree insofar as that is a euphemism to mean that Oracle is not tightly aligned with the interests of the left. Google is a big donor both in cash and in kind. Oracle is much less so.

> The damage to the industry from allowing APIs to be copyrighted would have been immense.

If I were to copy Google Sheets, offer an online service called "Google Sheets", and the only difference was that I modified a handful of implementation details so the service would run my spyware and not Google's, then I would lose to any copyright claim made by Google, hands down. You could make an identical argument that the web interface is an "API" and copying their code is a necessity of providing equivalent functionality. Google ripped off Sun and got away with it because the US legal process is a farce. Denying the right to copyright ANY code at all would be a morally defensible position, but that's not the ruling.


If you stole the source of Google Spreadsheets and built it into your product, of course you’d win. Oracle did not claim that Google did that.

If you built a spreadsheet app that was compatible with Google Spreadsheets’ formula language, Google (rightfully) wouldn’t have much of a case. In fact, Google pretty much did this by implementing much of Excel’s formula language.


That's a very interesting thought and it seems to answer the question in the title (which the linked article doesn't).

In theory though there are certain business opportunities in this space that haven't been explored yet, it seems.

For example, a faster and better compiler for a popular language that itself is slow, might get some traction. Personally I would pay for a Swift compiler that doesn't take seconds to compile a single long-ish expression with floating point arithmetic, or doesn't take tens of minutes to fully build a project that isn't even that big. Similarly I'd pay for a replacement for Xcode that doesn't have bugs in every single feature it implements. As an iOS developer I'm honestly very tired of and fed up with both Swift's slowness and Xcode's bugginess. There has to be some competition here too, already.

In other words, competition is what the PL industry is lacking, I think. But not among the languages themselves as much as among the compilers (and possibly the rest of the tooling and environments).

I don't know how we ended up in a world where there's only one compiler per each newly created language. I'm not sure this is normal or it's the way it should be.


> For example, a faster and better compiler for a popular language that itself is slow, might get some traction.

I used to think this, but now I am skeptical, because of zapcc. zapcc was almost exactly this: a faster compiler for C++, first proprietary, now open source because it failed. Still no traction.


> For example, a faster and better compiler for a popular language that itself is slow, might get some traction.

This is a viable business model in principle but seems to rarely work in practice.

A big part of the problem is that all languages in wide use already have mature compilers, and the language itself is evolving as quickly as its designers are able to make it.

This means that anyone entering the third-party compiler business has to not only catch up to the existing mature implementation (and be fully bug-for-bug compatible with it), they have to surpass that and track a quickly moving target. So far, few have been able to pull that off successfully.


> that doesn't take seconds to compile a single long-ish expression with floating point arithmetic

Or some complex Combine chains just hang forever and never compile at all, with no indication of what the problem is. Yet they keep adding language features that the compiler clearly can't deal with.


Check out AppCode by JetBrains. It's an Xcode competitor.

Does it though? How valuable is knowledge of Pascal, perl 5, Flex, or even uncool JS frameworks like Angular 1 these days? You're at the whims of the mob and fashion just as much as somebody building on a proprietary tool is at the whims of its vendor.

The most successful programming language startup (Microsoft) ultimately had to pivot to operating systems to go public and then pivot again to office suites and cloud computing to stay relevant.

More recently they wanted to monetize C#, but found it best to make the language itself free and open source while monetizing through complementary products.

When did they ever want to monetize C#? That is pure FUD. It's been an ECMA standard from the start, with multiple compiler implementations including Mono since the very early 2000s.

Yeah but Mono wasn't really close to the official implementation in terms of performance or completeness.

Where is the evidence to support this? Utter fabrication.

I'm sure they still make good money from MSDN subscriptions but the value of that is deeply tied in with Microsoft's own platform (Windows, SQL Server, Azure, etc).

>More recently they wanted to monetize C#,

what? src?


What are complementary products for c#?

Azure, Windows Server, SQL Server, Exchange, MS office, Dynamics, anything else in the enterprise suite where C# is a good integrator.

Someone gets it. There is no true computer science company. Google works on search and retrieval which is one of the fundamental problems in CS. But they sell ads. Microsoft Azure and Amazon AWS are in a way programming language startups in their own right but we don’t see them as such.

I have been thinking about the topic for a while, and the article seems a bit wrongheaded to me. (e.g. you just can't save time by deliberately making tools that aren't 100% correct, as much as everyone wishes they could.)

For instance, "parser generators" like yacc, bison and all the rest are absolutely awful. One feature they miss is that you should be able to make a small revision to a grammar with a small code patch. (Say you want to add an "unless(x)" statement to Java that is equivalent to "if(!x)" -- ought to be 50 lines of code, not counting the POM file.)
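To make the "50 lines" intuition concrete, here is a toy sketch (in Python, not Java/yacc) of how small such a desugaring is once you have real AST access. All class and function names here are hypothetical, invented for illustration, not any particular tool's API:

```python
# Toy sketch: with an AST in hand, desugaring an "unless" node into
# "if (!cond)" is only a few lines. Node types are made up for the demo.
from dataclasses import dataclass

@dataclass
class If:
    cond: object
    body: object

@dataclass
class Not:
    expr: object

@dataclass
class Unless:
    cond: object
    body: object

def desugar(node):
    """Rewrite Unless(c, b) -> If(Not(c), b); leave everything else alone."""
    if isinstance(node, Unless):
        return If(Not(node.cond), node.body)
    return node

print(desugar(Unless("x > 0", "doIt()")))
# If(cond=Not(expr='x > 0'), body='doIt()')
```

The pain with yacc/bison is that nothing hands you that AST or lets you patch the grammar incrementally; most of the 50 lines end up being boilerplate around the two that matter.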

I think a better parser generator could open up "compiler programming" and "meta programming" to a wider audience and get LISP fanatics into the 21st century, but it never happens. Anybody who understands the problem well enough to solve it will settle for a half-baked answer; the labor input to go from "the author can use it" to "revolutionary new parser generator" is a factor of 10x, and there is some risk that people still won't get it.

Programming the AVR-8 Arduino, it's hard to miss the conclusion that "C sucks", but anything really better (say a Supercharged Pascal or a Miracle Macro Assembler) would take a huge amount of fit-and-finish work to make it something that would really beat C. For pedagogical purposes you can argue that C rots the mind like BASIC, but you can't argue that learning C is a waste of time.

To make a profitable business out of any of the above seems almost impossible.


> Programming the AVR-8 Arduino it's hard to miss the conclusion that "C sucks" but anything really better (say Supercharged Pascal or a Miracle Macro Assembler)

I did see an Ada compiler for the AVR recently (I'm not an Ada programmer). It seemed unpolished, and not really an advancement on C.

I think one problem with embedded work is that you either stick to the low-level, in which case you have no advantage over C, or you go for high-level, in which case you have convenience but lose flexibility.

In terms of meta programming, Val Schorre had an interesting idea with META-II. Both Alan Kay and Joe Armstrong spoke highly of it. The compiler has its own assembly language. I had this goofy idea of writing a Basic-like language that replaced that assembly. So you'd have a language that was both high-level Basic and a compiler. It would embrace the GOTO, because META-II also generates GOTOs, and GOTOs are a natural fit for the CPS (Continuation Passing Style) used by many compilers.


Personally I find bit-twiddling in C to be painful compared to bit twiddling in assembly.

For my Arduino projects in particular I never make recursive function calls, and would be happy to statically allocate everything in RAM. The stack-oriented activation records in C, calling conventions, return values, are more a problem than a solution.


For call graphs without cycles it should be possible to run static analysis to determine the maximum amount of memory taken by your program at any point, assuming you don't use things like malloc or alloca or dynamically sized arrays.
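As a rough illustration of this idea (a sketch under stated assumptions, not a real analyzer): given per-function frame sizes and an acyclic call graph, the worst-case stack usage is just the heaviest path from the entry point. All names and numbers below are invented:

```python
# Minimal sketch: per-function frame sizes plus an acyclic call graph
# give worst-case stack usage as the heaviest root-to-leaf path.
# No recursion, malloc, alloca, or VLAs assumed, per the comment above.
frame_size = {"main": 64, "parse": 128, "emit": 96, "log": 32}
calls = {"main": ["parse", "emit"], "parse": ["log"], "emit": ["log"], "log": []}

def max_stack(fn):
    # Worst case = own frame + deepest callee chain.
    return frame_size[fn] + max((max_stack(c) for c in calls[fn]), default=0)

print(max_stack("main"))  # 64 + 128 + 32 = 224
```

Real embedded toolchains do a version of this (e.g. stack-usage reports), though function pointers and interrupts complicate the picture considerably.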

I had the idea a while back to do a parser-generator in the form of Rust macros, which would mean a) you don't have to source-control a giant pile of generated files, and b) you get real-time editor integration without a separate "generate" step. I got out of my depth trying to get the actual macros to process the BNF (it gets very meta, because Rust's declarative macros are themselves written in a very BNF-like syntax), but I still think this would be possible and pretty cool.

> but anything really better (say Supercharged Pascal or a Miracle Macro Assembler) would take a huge amount of fit and finish work to make it something that would really beat C.

Zig is trying to work in this space, and they seem to be getting rather close to the ideal.


Actually, do you mind expanding on why you find yacc and bison awful? I don't mean that as a passive "oh yeah?" line to "trap" you. Language front-ends (and their tooling) are something I am pretty passionate about, and having developed parser generators in the past (and working on something now), I just wanted to explore the specifics that make yacc and bison unwieldy for others.

Just to start off -- I still find their debugging, and the (lack of) ability to reason about them, a huge barrier to wannabe language developers. But I am still unable to put this in concrete terms. My last few months have been immersed in reading a ton of papers, and I still see the papers use bison as the staging ground for most new ideas (menhir seems to be gaining popularity these days), so I'm curious what is keeping yacc/bison on their throne?


For one thing the callback-based interface in yacc is unergonomic.

In a language like C# or Python you usually want to get an object tree for the AST and you shouldn't have to write any code to get that. It should come "for free" with the grammar.

Another missing capability that people don't know they are missing is the ability to "unparse" the AST back to text. This is valuable when you want to write a code transformer: for instance, in query languages like SQL there is a lot of value in doing a transformation on the query and then writing it back to SQL. You should be able to transform the AST and not have to write a line of code to turn it back into text.
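The shape of what's being asked for can be sketched in a few lines. This is a toy round-trip, with made-up node types and no real SQL dialect modeled, just to show that transform-then-unparse needs no hand-written serialization once the AST knows how to print itself:

```python
# Toy round-trip sketch: a tiny expression AST that can be unparsed
# back to text, so an AST transformation gets serialization "for free".
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def unparse(node):
    if isinstance(node, Num):
        return str(node.value)
    return f"({unparse(node.left)} + {unparse(node.right)})"

tree = Add(Num(1), Add(Num(2), Num(3)))
doubled = Add(tree, tree)   # transform the AST...
print(unparse(doubled))     # ...and get text back without extra code:
# ((1 + (2 + 3)) + (1 + (2 + 3)))
```

The parser-generator complaint is that most tools generate the parsing half only; the printing half (and comment/formatting round-tripping) is left entirely to the user.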

Another problem is composability of grammars. If you want to support the SQLite, Microsoft SQL Server, and Oracle SQL dialects, you should have a base SQL grammar and extended grammars that inherit from the base.

All of these capabilities are so beyond the pale of what parser generator "insiders" are used to that it's a very hard conversation to communicate that these capabilities are both possible and useful.


Nice. Starting from the easy(ish) one -- composability is slowly becoming a solved problem these days, or at least something that can be tacked on via templating (not an ideal or native solution to these tools, I agree).

The AST availability is definitely a big one that I cannot align on a decent contract for. At some point, not all AST transformations are reversible, yeah? Heck, even the conversion from parse tree to AST is often not bidirectional. A lot of AST-directed IDEs (even MPS from JetBrains) end up being very cumbersome when typing-at-scale. I wonder what kind of semantics can be foundational enough that AST generation is simplified for at least 90% of the cases? Again, my interest in this comes from building incremental editors for certain domains, so I am definitely looking to learn more.


Your thinking very closely reflects mine. I actually did build a C# parser generator producing a strongly typed AST from a base ANSI grammar and inherited Oracle, SQL Server, MySQL, and Postgres grammars. We are using it in our database migration products and will soon use it for ReSharper-style intelligence in a very lightweight and fast database manager. It never occurred to me to publish the parser generator, though. It's our secret sauce.

I've worked in a number of the spaces that you mentioned. Parser generators are awful, which is why my preferred parser technology is parser combinators. They occupy a nice space between parser generators and handwritten recursive descent parsers.

Meta programming is an excellent target, but it usually needs to be directly baked into the language. The closest I've seen to a nice language-agnostic meta programming system are JetBrains MPS and the Spoofax Language Workbench. Last time I checked, MPS was mostly dead and Spoofax is only used by programming language researchers.

For the Arduino programming stuff, I currently maintain a domain specific functional reactive programming language for Arduinos called Juniper: https://www.juniper-lang.org/

However the intersection between people who do Arduino programming and functional programmers is quite small. Also my interest has waned on and off over the years which means there have been long gaps between releases. I just work on it for fun mostly.


"you just can't save time by deliberately making tools that aren't 100% correct as much as everyone wishes they could."

The point was about application development - you can save time by making applications that aren't 100% correct. In most software, any bug is OK as long as it happens a small enough percentage of the time.


> For pedagogical purposes you can argue that C rots the mind like BASIC but you can't argue that learning C is a waste of time.

That's a strong statement.

C's biggest flaw is that it lies about the underlying memory, but it's a small language that's especially suited to be easy to port.


If you are exposed to too much C you can't see the C-thulhu.

Undefined behavior is normal. The whole relationship between .h and .c files is sick. How you use the "extern" keyword is arbitrary. strcpy is Turing complete. It took a while for people to realize templates are Turing complete. I'm not sure where the grammar fits in the Chomsky hierarchy, since you have to look at the symbol table to parse it...

Good C looks like poetry, but beginning programmers shouldn't be subjected to any of it. I certainly hate cleaning up the mess!

ALGOL, PL/I and similar languages put a lot of work into specifications but nobody really knew how to write a specification that could be implemented back then. C was an early attempt at specification + implementation that was "minimum viable" (almost?) and ready in time to be the bridge between microcomputers and minicomputers.

Probably the best idea in C was that you didn't have to agonize over I/O in the language specification but you could just leave it to the stdlib.


While I don't like to write more C than strictly necessary (which lately is close to none), I move that everyone should be exposed to C at an early age. If not, they will end up on Accidentally Quadratic when they don't realize what their JavaScript does behind the scenes.

Also, templates are C++ not C.


> strcpy is Turing complete.

My google-fu is not working; I can't find a source for this. And am too dull right now to figure it out.


The high barrier to entry for creating dev tools is truly a shame, and something I think about a lot.

The mental model distance between dev tools and user tools is simply too large. Dev tools are still too anchored in the 1950s model of source code >> compilation >> runtime code, when consumer products have a much healthier mixture of active content and tool palettes working on the content. Heck, even the guy in the article asks about "programming language startups" when he means "integrated development startups".

This difference is shrinking in the web, both client (DevTools run like an app inside the browser, with live changes to the code reflected instantly in the page) and the cloud (the entry point interface is a collection of collections, where you can inspect and operate on any component running in the system, and even create some kinds of workflows in the live environment).

But the lowest level for designing components relies on specifying a fully formed program in formal code, and having to run a build-deploy-test cycle without errors before seeing whether it works, instead of having code and data run side by side to see how the code behaves.

I put my hopes in online notebooks, but these are used mainly for data analysis, and "true developers" seem determined to ignore them.


The gap from an "online notebook" to "here's the script to run the monthly report" or "here's the model complete with training and inference that anybody can retrain" is wide.

Yet it's the difference between a data scientist making an occasional report and putting their expertise on wheels that makes it worth paying for.


Notebooks don't need to be online. You can run a local notebook same way as you run a local IDE.

The most significant difference is that the notebook always has runtime information about how the available information is being processed, while the IDE only has this information available in debug mode. The first model provides much more information about the system's behaviour, and makes it easier to access.


> I put my hopes in online notebooks, but these are used mainly for data analysis, and "true developers" seem determined to ignore them.

This is merely my own personal preference, not an assertion that my stance is correct or common.

For development, it's important to me that I can put the toolchain into version control along with the code. This ensures that I can check out a build that is years old and be able to build it without issue.

SaaS solutions break this for me, so I avoid them to the greatest extent that I can. If it's cloudy, then it can change without warning or recourse. That's a problem.

For that reason, I would never seriously entertain using something like a notebook for development.


You can use toolchains with notebook software, provided

a) the notebook has an internal text representation that can be version-controlled, and

b) the notebook follows a spreadsheet-like paradigm with incremental, non-mutable data structures, so that all state is represented consistently at all times (no dependence on what order you've run the cells in the current session).
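Point (b) can be sketched in a few lines. This is a made-up miniature, not any real notebook's API: cells declare their inputs, and the runtime always evaluates dependencies first, so the result can't depend on the order cells were run by hand:

```python
# Minimal sketch of a spreadsheet-style notebook: each cell is a pure
# function of its declared inputs, re-evaluated in dependency order.
cells = {
    "data":   (lambda env: [3, 1, 2], []),
    "sorted": (lambda env: sorted(env["data"]), ["data"]),
    "top":    (lambda env: env["sorted"][-1], ["sorted"]),
}

def evaluate(name, env=None):
    if env is None:
        env = {}
    fn, deps = cells[name]
    for d in deps:                     # recompute inputs first
        env[d] = evaluate(d, env)
    return fn(env)

print(evaluate("top"))  # 3, regardless of which cell you "ran" first
```

With cells as pure functions over immutable inputs, the whole session state is reproducible from the cell definitions, which is exactly what makes such a notebook version-controllable.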


But why, when I can just use a laptop that doesn't have such restrictions?

"The guy in the article" is not a guy.

I stand corrected.

> For instance, "parser generators" like yacc, bison and all the rest are absolutely awful. One feature they miss is that you should be able to make a small revision to a grammar with a small code patch. (Say you want to add an "unless(x)" statement to Java that is equivalent to "if(!x)" -- ought to be 50 lines of code, not counting the POM file.)

You could do that. Add "unless" to the Flex file and hack the Bison file to interpret 'unless->expression' as 'if->negation->expression' when pushing to the abstract syntax tree. You can probably copy-paste from the other rules of the hypothetical Flex/Bison Java compiler.


As a LISP fanatic in the 21st century it feels like we're still waiting for everybody else to leave the 20th century.

I don't think parsing is the hard part of designing a compiler any more

It's not the hard part of "designing a compiler" but who wants to "design a compiler?"

In ordinary software people spend most of their time modifying existing code. I wouldn't want to reinvent "javac" or "cpython" but adding a new feature to a compiler like that should be easy.

Alternately there are a number of "compiler-technology based tools" that we don't have. Instead we get problems like

https://www.liquibase.org/

which defines a whole new XML-based language to manage database changes. That's like jumping from the frying pan into the fire, since you're still going to have to understand your RDBMS, how to write SQL for it, what the non-standard features are in your database, etc.

A sane tool like that would use an extensible parser generator to easily support the variant syntaxes of different SQL dialects and would be able to parse and unparse SQL or a variant of SQL to define change sets.

(speaking of which... shouldn't a compiler generator automatically make something that can reverse the AST back to source code? how about round-tripping source code comments, formatting and all like some CASE tools did in the 1990s?)

A while back I had the problem that ArangoDB scrambled the order of columns in JSON result sets, so I wrote something that parsed the AQL, figured out what the order of the columns was, and turned the result sets into nice pandas frames.

If "writing compilers" was easier people would write "compilers" that do things other than compiling to machine code.


> but who wants to "design a compiler?"

Lots of people, actually! Just in recent years, Golang, TypeScript, and Rust have shown there's plenty of demand for new languages (compilers!) as developers and their requirements keep evolving. Given how expensive and time-consuming languages are to build and evolve, I'm honestly flabbergasted that we've seen 3+ languages come of age in the past decade and break into the mainstream.


Let's see... Go is someone at Google going up in the hierarchy by doing things that sound advanced.

TypeScript is an attempt to make Javascript less of a mess.

Dunno about Rust.


Not sure I understand your point. One of the major reasons why new languages emerge is to solve problems that the current incumbent language(s) cannot solve easily.

> have shown there's plenty of demand for new languages

Guess I'm disagreeing with "demand" in two of the cases. One because you don't know what office politics at Google led to Go being promoted. I strongly doubt it was to solve a problem, just someone with a lot of clout had a pet project.

... and second because it attempts to patch an existing trainwreck and isn't even a new language.

Rust does seem well intentioned though so maybe there was a "demand".


> I strongly doubt it was to solve a problem, just someone with a lot of clout had a pet project.

I don't know the office politics there, sure. But Go most definitely solved problems: it's optimized for fast(er) builds, and it's generally a language you can write code in without thinking too hard while still getting good runtime performance.

Many languages start out as pet projects, but can often solve a real problem such that they see internal adoption. That adoption can grow for various reasons, and in some cases, end up creating a sociotechnical system where it's now a thing even though it never started out with the intention of being a thing. It's at this time that demand is a key driver behind adoption, new features, etc.

Do you still feel that Go "doesn't count" with respect to demand?

> second because it attempts to patch an existing trainwreck and isn't even a new language.

Firstly, JavaScript is hardly a trainwreck of a language, considering how successful it has been with respect to adoption and having such an incredible ecosystem for developers to work in. I worked on a typed functional programming language for 5 years, so be aware that my statement isn't one of ignorance of other languages.

But nonetheless, since this is really a philosophical question, what exactly makes up a new language to you?

In TypeScript, the type system is Turing Complete. It's a novel type system; there is no other language with the same set of features and semantics associated with it. I can write very functional and type-heavy TypeScript today and it's really not reminiscent of JavaScript aside from a few syntactical elements.

Do you still feel that it "doesn't count"?


> Firstly, JavaScript is hardly a trainwreck of a language, considering how successful it has been with respect to adoption and having such an incredible ecosystem for developers to work in.

Quality != popularity. Look at Hollywood. JS is only popular because it got built into every browser, not because there is anything "quality" about it. Thus, attempts to fix it.

> In TypeScript, the type system is Turing Complete.

Ok, I'm afraid the only thing I can think of now is the metastasis that C++ is in. But that's again a philosophical point of view, and I'm drifting on another tangent too.


Because programming languages are a public good, and public goods are fundamentally incompatible with a profit motive. A “programming language startup” would be like an “anti-poverty startup” or “social justice startup” - either an edifice doomed to fail because of its inherent contradictions, or a front for lies and grifting.

This premise can be trivially disproven with one citation, so I'll choose Clojure and Cognitect.

Ok, not fundamentally incompatible, right?


Cognitect was a successful consulting services startup that didn't build a scalable software product/service, right?

I'm not sure why they would have sold if they had ever figured out how to make a software part of the Clojure ecosystem a sustainable & growable revenue source, e.g., hosting.

In contrast, Julia's recent fundraise is built on a successful science tool that happens to be written in Julia, and a hope that hosted Julia might one day pay the bills. But not proven yet. npm & Docker Hub showed repo hosting is tough to succeed on even with wide use, though Anaconda shows promise when mixed with consulting revenue.

Jean's article also tiptoes around developers not always being the buyer, thus requiring a model more like Twilio's, where they are pre-sales/marketing for some other customer. That kind of misalignment adds another level of pain.


> Cognitect was a successful consulting services startup that didn't build a scalable software product/service, right?

I'm not sure what exactly you mean by that. Cognitect is a relatively small company (if I'm not mistaken, there are fewer than 20 engineers working there), and there are plenty of products built on the Clojure ecosystem. It scales very well. Cisco, Apple, Walmart, etc. have all successfully built, and keep expanding, their [massive] Clojure code-bases.


Didn't Cognitect primarily make money through consulting services revenue for other people's projects, and not selling Clojure / Clojure tools?

I think there was a product attempt at becoming a database company -- Datomic -- which happened to be written in Clojure, but ultimately they still got bought out for consulting rather than product: https://www.cognitect.com/blog/2020/07/23/Cognitect-Joins-Nu...

So again, for all the good parts, just not the success story for a PL/tools startup that Jean is looking for..


That's clearly not true because such companies existed in the past and were not "doomed to fail" nor fronts for lies and grifting. The most obvious example was Borland which dominated the 90s with their superior commercial compilers and programming languages, at least for anyone writing Windows apps which back then most people were.

There are many reasons why there are not many PL startups. From my perspective--as a PL tool developer for 40 years--the biggest is that we still do not have reliable formal grammars that describe the syntax and static semantics of a given programming language. That is the crux, because it is difficult to write a tool for a language when we cannot even define what that language is.

When you ask where to find a grammar for a language, the default answer is to just use an "official" implementation. That often requires a large amount of time to find and understand an API--and then locks you into that implementation. Some of the popular languages publish a grammar derived by a human reading the implementation after the fact, which is prone to errors. For C#, the standards committee is still trying to describe version 6 of the language (https://github.com/dotnet/csharplang/tree/main/spec), and we are now on version 10. An Antlr4 grammar for C# is mechanically reverse engineered from the IR, but it does not work out of the box--it contains several mutual left recursions and does not correctly describe the operator precedence and associativity. Julia does not even have a formal grammar published for the language. You cannot easily refactor a formal grammar from something that does not exist; you have to write it from scratch.

Often, what is published is out of sync with the implementation, not versioned, and tied to a particular parser generator and target language with semantic predicates. The quality is suspect, because you are unsure whether it even follows a spec. No one bothers to enumerate the refactorings involved in modifying the grammar to fit a parser generator, or for optimization, e.g., for speed or memory. Yes, I can agree: "[t]here's definitely a need for SOMETHING, as developers have so much pain."

Taking the question at face value, I think these are the issues:

  - the switching costs for programming languages are very high, even if you're switching to an established language
  - it's hard to evaluate a new programming language; you don't know if it's good until you've written something non-trivial
  - learning a new language takes a lot of effort, particularly for languages that aim to be better than existing languages; better implies different
  - given all of the above, early adopters will probably not use the language for their day job, they'll use it for hobby projects; hobbyists are price sensitive
I don't think having to compete with free is actually that much of an issue compared to the above. Even free languages that are genuine improvements over more mainstream alternatives face multi-decade slogs to create a small community and get some adoption.

I like Grady Booch's explanation from the Twitter thread:

  Rational Software was this: founded in 1982, acquired by IBM in 2003 (they outbid Microsoft, who were also interested in acquiring us).

  There were a number of dev tool companies in that time, but no more: open source makes the economics of this a tough sell these days.

I agree with jcrammer in that the question being posed isn't really the one the article seems to be discussing. I read the article as discussing "why aren't there more new programming languages"? So I'll comment on that one instead.

As an aside, I've been a professional developer for several decades now, and that's a question that I've heard for my entire career.

I think the answer is a combination of things that are impediments to the adoption of new languages of any sort, no matter how much of an improvement they may technically be:

1) If there isn't an accessible supply of developers who know the language, using the language becomes risky as a business decision.

2) Related to #1, using a new language is placing a bet: once you've put a lot of time, money, and sweat into developing a codebase in a particular language, if that language falls out of fashion then you're stuck with a codebase in a language that will lack support and a ready supply of developers who know it and are willing to work in it.

3) Career-minded developers will tend to prefer to develop skills that are likely to be in demand with a wider array of potential employers. Much like the bet I talked about above (#2), working in a new language is also a bet being placed by the dev.

4) New languages tend to have immature toolchains. Using them can result in decreased productivity (and possibly decreased quality). This problem goes away should the language see widespread adoption, but that takes a while and may not happen.

There's a reason that the most-used (as opposed to most popular) languages tend to be ones that have been around for a while. They're safer (from a business point of view).

I've also noticed that the newer languages that become popular tend to grow out of hobbyist use. Devs learn and use them for fun in their off time. Some become proficient in them and advocate them to other devs. Once this hobbyist community reaches a critical mass, then companies will begin to entertain using the language, but not before.


Why aren't there more human-language startups?

... languages have value substantially due to network effect. By making a proprietary language you hobble the network effect and destroy the language's value.

And there aren't startups to create open languages, because monetization is a big "then a miracle occurs"--there is no obvious moat. And at our current level of science re: programmer productivity, it's a lemon market: lots of people say this or that will improve productivity, but there isn't much in the way of established ways to prove it. It's hard to convince people to pay for something when its benefits aren't unambiguous.


Proprietary languages work when they fill a niche. I work with the Q/K programming language. It's closed source and, from what I understand, has a significant licensing fee. But it is a language designed specifically to power trading systems (via the KDB+ platform/ecosystem). It fills a niche well, and is popular in the industry.

Let's put programming languages aside, but I also question the idea that "there is money to be made with developer tools".

a) It seems to be a bit of a winner-takes-all game here, maybe fragmented by language or ecosystem, but overall still true. IntelliJ is kind of an outlier, but they used to be Java-only for a few years, then PhpStorm/WebStorm were kinda halfway accepted, and some other languages as well.

b) Also, it takes so damn long until a new tool is even considered for widespread use. I remember ~11 years ago, when I was more involved in conferences and tooling in the PHP world (and I also wrote my diploma thesis on this)--the broad majority of people only used Jenkins and continuous integration, if at all. Every other kind of tooling was already rare, and don't even think for a moment they'd pay. The few cases that did had Bamboo and Sonar. From what I heard from friends, this was not unique to the PHP world. And even in the last 5 years it's been the same story.

c) I don't see a lot of change in the last 10 years in companies' willingness to buy tools for their developers when there's something free to be had and you can save a bit per year. This goes for big or small companies, startups or established ones.


A lot of my friends work at companies with legacy code, a lot of which depends on libraries they bought from companies that aren't even around anymore, locked to a specific system by license: they can't transfer it, can't fix it, can't update it.

I won't use a language that isn't open source, I don't trust companies enough, there is a chance they could die, revoke a license or do some other stupid thing that makes it unreliable in the future.


The title prompted me to look into the history of matlab a bit [1]. My main takeaways:

- It was given away for free for years. During that time people wrote _reliable_ numerical packages that other people glommed onto.

- matlab users aren’t programmers, they are [engineers/mathematicians/etc.] who need to write some code to do their job. One aspect of this fact is that they generally aren’t on the hunt for a new “framework” to do this stuff. They also aren’t thinking much about long-term maintainability: they ran it once and got the answers they needed.

- Academics (and students who can use university licenses) are price insensitive.

- Engineering disciplines where errors can't happen (civil, aerospace, etc.) came to rely on the aforementioned reliable numerical packages, which eventually required proprietary licenses.

[1] http://www.tomandmaria.com/Tom/Writing/MolerBio.pdf


Luna Lang / Enso

is a visual programming language startup based in Cracow, Poland, and they even had an office in SF, IIRC.

They have a shitton of fancy visualisations, and it looks really interesting, but I'm personally not the target for visual programming.

https://enso.org/language


Looks like Quartz Composer on Mac. Which is quite fun to play around with, but in the end I prefer a textual interface.

The article is a little mis-titled. It asks a question about programming languages, but it goes on to discuss development tools. That is the correct framing of the problem: the language itself is a very hard sell. It requires an environment to flourish: libraries, build tools, etc.

I'm very excited about the variety of languages being actively developed right now. There are two major open source projects helping quite a bit for new languages:

- Visual Studio Code (Eclipse used to be the choice in the past). Every language can get a high-quality IDE with a simple integration.

- clang & LLVM: a high-quality back-end and common code generation tools.

The developer tools market is not going to be extremely big. In most mature companies, R&D costs are up to 15% of revenue, and dev tools are a fraction of that.


My personal answer to the titular question is something like "there's only so much you can do with our predominant programming paradigm (i.e. plain-text)". Programming languages specifically can only make incremental improvements and/or tradeoffs in that medium.

Tools like IDEs, debuggers, visualizers, etc are more interesting, but I think their effectiveness is also limited (and creativity constrained) by having to work in a plain-text world.

Moving beyond plain text is mostly a social-dynamics/network-effects problem, and I wouldn't be surprised if the intersection between people who are good at figuring out that distribution and people who are deeply interested in how programming happens is actually quite small.


I think it's the opposite. If you give up on plain text, you forfeit a gigantic number of existing tools (git, grep, editors with their plugins, sed, etc.) and have to redo all of them from scratch.

The big issues with making a new language in 2021 are more related to semantic tools, like an LSP server (in which parsing is just the first building block). Just as writing a compiler is much more than just parsing.


This is why I am so hopeful for Pernosco, a better debugger with a better interface.

https://pernos.co/


Nobody pays for general use programming languages. The market has been conditioned to believe that dev tools should always be free.

Special purpose languages geared toward specific niches like Julia may be able to monetize.


One of the big reasons is that developers don't want to be locked in to anything. The tool itself needs to be significantly better than the free competition, while still being easy to pull out and replace with something else on short notice. Startups don't generally like those rules; they tend to thrive on lock-in and controlling the users for a chance at a big payday. I think developer tools generally fit more of a "lifestyle business" mold than a startup.

Because these same "startups" are much more viable as open-source projects. Sure, that doesn't make much money, but monetizing software is an uphill battle anyway.

Plus, you won't get a dime out of me for, say, an AI copilot plugin. Unless it's free, and I can guarantee that I'll have access to it in the future, why would I want to integrate something like this into my workflow? Adding another SaaS into my life is just putting my neck under another sword of Damocles.


Because programming languages are the most basic piece of infrastructure in software.

And every infrastructure project suffers from the same issue: it benefits from mass adoption, which basically requires the thing to be free and open source. With that, the value will be captured 0% by the project creators, 10% by consultants around the project, and 90% by cloud providers. The issue is even more pronounced for programming languages.

Why bother?


Hey, I want to create my own programming language startup but I don't see how to make a business out of it. Specifically, I want to create an alternative Python compiler with performance in mind. It can be done, and there's a lot of ground to cover, but it's just an awful lot of work. It is also a sensitive topic to many. DM me if you are interested, thanks!

A counter-example: https://daml.com/

Daml is essentially a smart-contract language with Haskell syntax and built-in primitives for privacy & authorization (e.g., so you don't have to make your whole ledger either public or private).

Worth checking out if you're into DLT & PLs! :)

Disclaimer: I work for the company developing this.


I think we need to look beyond the OS/system level, to the domain and industry level, to be able to deliver tools there as well. Most domains now have a computational element to them, including the core sciences, and I feel the tooling is often lacking for doing domain-specific stuff. To give an example from biotech: our scientists have written scripts for simple automation of their day-to-day activities, things like programming pumps based on instrumentation parameters on a fermentor. It's a scripting-type language that is part of the software package used to run the hardware. To a "real" programmer the language feels like a joke, and usually it's not something many non-technical people even attempt to use. This functionality would be improved 100x by PL and tooling experts working on it in conjunction with domain SMEs.

Jon has talked about this on his "Jai" streams. They need some sort of income to fund development of the compiler (hiring more programmers to work on it), but the number of people willing to pay for a compiler is probably not enough to support that.

It is extremely difficult to compete with good enough, and it is extremely difficult to compete with free.

There are a plethora of good enough free programming languages available, that also have good enough free tooling and ecosystems.

There isn't, therefore, really a business model.


Developers don’t pay for languages.

Most don’t pay for tooling.

Sometimes they do, and they get bitten by it. So never again.


Lots of reasons, but mostly because you have to control the platform for most new programming languages to make sense. You want a new language on Apple platforms that isn't Swift? Good luck getting any buy-in from users.

One possible path that might work is releasing your language with a framework that is great for some use case. E.g., if Qt were rewritten in some alternative language, maybe that could be successful. But creating anything that polished takes a lot of time and effort, so any company is going to burn through capital just getting to a place where they can determine whether they've made something users will actually switch to.


A successful programming language rarely stands on its own. It's usually the byproduct of a corporation or an organized committee, gaining relevance through a growing community or through standardization.

To answer the question briefly: programming languages aren't very profitable. There are so many of them in competition with each other, and so few that companies can adopt and base their recruitment around.

Tech startups usually thrive from apps, services, or valuable technologies. Not yet another programming language.


>Tech startups usually thrive from apps, services, or valuable technologies.

I think an unprecedented abundance of cheap leverage and an inability of investors to make reasonable valuations of highly abstract products should be in that list.


I’ve watched what Rich Hickey has gone through with Clojure and Cognitect. I think he did a great job there, but I wouldn’t say I envy the experience he had getting Clojure this far.

Would be fascinating to hear his take on it, seeing as he also tried to run a database startup. Nobody outside knows what Datomic's profitability actually looked like, but the fact that it was acquihired by a banking startup suggests to me Datomic's financial position was less than compelling.

"Acquired by a banking startup"?

just fyi: Nubank is the largest financial technology bank in Latin America. It's not "some startup".


Man, time flies. I remember seeing Nubank employees on the conference circuit back when they really were a startup. They were one of the huge early success stories of Clojure. No disrespect meant to Nubank at all, but the point still stands. I'm pretty sure that Datomic was acquired not for its financials, but to secure the future of the technology that such a now-massive company was built on. In a way, I feel like Nubank buying Datomic only validates skepticism of closed-source database companies: Nubank felt that having their database be a closed-source product from a company with an uncertain future was an unacceptable risk. Most startups are not going to have the level of success Nubank did and therefore will not have the same options to mitigate the risk of betting on a closed-source DB.

Some recent programming language startups that are still around or have come and gone:

- Dark: https://darklang.com (hibernating, laid off everyone to buy time to find product/market fit)

- Eve: http://witheve.com (defunct)

- Luna/Enso: https://enso.org (just rebranded)

- Unison: https://www.unisonweb.org (hanging in there)

- Urbit: https://urbit.org (hanging in there)

That's off the top of my head. I'm surprised an article asking this doesn't mention any of them or analyze how they came to be and where they went.

The main reason from my perspective is that there's no funding. It takes years to build a programming language from scratch, so you need a lot of runway. And it takes a specialized group of developers, who cost a lot of money to hire. The combination of the two means you need a lot of money to get a PL off the ground. Millions of dollars.

The most likely way to get that kind of money right now is going through VCs, and they aren't going to give you anything for a safer systems language or a JavaScript transpiler. They want to be pitched a world-changing language that will redefine the computing landscape as we know it, and bring billions of people into the programming world. You might notice that as a common denominator among some of the languages I listed above.

But this is a research problem with an uncertain time horizon and unknown feasibility. It could take a decade more to complete such a task, but it's very unlikely you're going to get a decade worth of funding from VCs. You'll probably get 3-5 years, and at the end of that you'll need to have something tangible that can get you follow-on funding to continue your work. But that means your original pitch of revolutionizing the computing landscape by democratizing programming gets downgraded to making an incremental change that increases some productivity for developers.

Anyway, there are ways to get around this, but it likely means your company won't be a "startup" in the HN VC-funded sense. See Unison, which is developed under a public benefit corp. Andrew Kelley has managed to bootstrap Zig on user support and now corporate sponsorship. Urbit had an ICO and sells digital real estate.


FYI, Dark isn't hibernating. It's actively being worked on. See the repo: https://GitHub.com/Darklang/dark

Here's my particular story, but I'm sure there are many other people who were burned in ways that are similar.

I used to pay for a programming language, Turbo Pascal, which then became Delphi. Then Borland went crazy and started pushing C++, raising prices as their focus was lost. Eventually they were sold, and the prices went up, and up.

Never again will I be locked into a vendor with a profit motive.


Because "avoid (success at all costs)" makes the best PLs, and startups are literally about success at all costs.

Startups are (ideally) based on ideas for solving existing problems. Programming languages are just a tool to solve a problem, not solutions in and of themselves.

because you have to build an entire ecosystem from scratch. that's like asking why there aren't more OS startups

that's the easy part. getting anyone to actually use it...

I think Ericsson sold Erlang initially

Does Urbit count?

Not HN-approved, sorry. Plus, I don't think Yarvin still has his mojo since his wife died. He'll be back, but Urbit is way too convoluted to just coast into success.

