The actual question is why there are so few startups for people who work on PL technologies--which isn't limited to compilers or programming languages, but includes things like IDEs, debuggers, and static analyzers. For example, Coverity--a decently well-known paid static analyzer--is about the best model of what you'd expect to see more of here. So it's somewhat more interesting to ask why this tool space isn't particularly well served as a market.
The biggest exception to the rule is custom hardware (particularly embedded, but this also holds true for mainframe and HPC systems), where custom compilers come along for the ride. But that would be a hardware startup, not a PL startup, so it doesn't really count.
I tend to lump decompilers into the formal methods/PL/compilers/software engineering/computer architecture spectrum, mainly being a low-level compiler developer myself, but I suspect most people would consider reverse engineering tools to be on the security side of things rather than the PL side. And decompilers are certainly another noticeable successful paid-software niche, with the very expensive IDA Pro still remaining more or less the industry standard.
My early experience was in system design for board-level and chip-level systems (e.g. ASICs, FPGAs, SOCs), where there is considerable investment in tools ("electronic design automation") because we had some clear metrics for developer productivity.
My simple answer is there are multiple reasons:
+ The industry has been very well served by open source tools.
+ Outsource development and test firms tend to bill by the hour not the result, cutting hours lowers their revenue.
+ Coaches tend to focus on methodology and process results, not programmer productivity.
+ Managers of the programming / development function don't appear to devote cycles to determining where investment in commercial tools would pay off. There are exceptions.
+ First level managers tend to measure success by headcount, mirroring the outsource development and test firms, and not results.
I think there are significant opportunities for startups developing productivity tools but they are in niche markets that complement existing open source where development teams are paid for or rewarded on results not hours.
Though there is probably also a "commoditize your complement" effect going on from big tech. Just look at VSCode: it's incredible - Microsoft could definitely charge for it - but they aren't even interested in doing that.
Because the telemetry data is worth more to them.
A much bigger value is that VSCode brought Microsoft into the consciousness of developers who would have otherwise never thought of using a Microsoft product for anything. It's been so successful that there are even classes of developers who think that "Visual Studio" refers to VSCode, not big-poppa-windows-application Visual Studio.
Source: I worked in the dev tools business at Microsoft for the past half-decade.
I'm not saying this is bad or evil but let's first be honest
That's definitely not true w.r.t strategy. I could speak quite a bit about the challenges of doing things with VS, since I worked directly on tooling for it. There are maintenance issues like any huge codebase with thousands of little things it supports, but you're probably not aware of how internal infrastructure for VS has made enormous improvements. Among other things, one of the products I worked on went from an unreliable 24-hour turnaround into a 4-hour guaranteed turnaround for my changes to show up in dogfood builds. Everyone who works on VS also thinks that can be improved, and works towards improving it too. It was actually one of the more heartening things to see: multiple teams who give a shit about engineering systems and make an effort to improve them over time. This narrative that VS is a mess and VSCode is somehow "the future plan" is complete nonsense.
VS and VSCode couldn't be any more different. Visual Studio does so many more things than VSCode, and that's by design. VS is a fully-fledged IDE, and the intended use is that practically everything you do is through the lens of VS. VSCode is fundamentally an editor that is a companion to your other toolsets. It's not meant to replace Visual Studio for VS's users, because doing so would mean building so many things into VSCode that more or less go against its entire design philosophy. Such a decision would be bad from the business's standpoint, especially since VS itself has no significant problem evolving over time (hello 64-bit new VS!!), and there are so many more high-leverage things to do for VSCode than re-creating things that already exist, But In Electron Now.
> Also, they use Visual Studio Code to upsell their other paid services.
Well, yeah, of course. It's an editor suggested in all kinds of contexts, there are product-specific extensions (proprietary marketplace!), and when you have millions of people who love a product, why not try to upsell some of them? That's what I'm alluding to. Nothing dishonest here.
The author talks about how a lot of the tools currently being pitched are not that useful day to day - the static analyzer, say, that shows you the ten thousand things you could improve but never actually will. Lots of devs out there using print statements instead of proper debuggers, so even the existing tools often offer far more than they think they need to learn!
From there, my guess would be that the space of truly useful groundbreaking tools that wouldn't be just competing with a good-enough free one or an established incumbent is pretty well explored. This isn't as true for new areas - Kubernetes management interfaces tend to majorly suck, say, especially if you aren't a keyboard fiend who's going to get the hang of k9s quickly - but new areas are also niches that could be risky to try to build a company in. And probably also still suffers from the "don't know what I don't know" issue.
"Why aren't developers more interested in tools" is an interesting resulting question, to me, then. I think it probably has to do with the business-driven focus on building new systems and new features at high speed. We aren't building code to last for more than a few years, in many cases, so how deep is it justifiable to go on a particular block of it?
It's rare for a tool to be so shoddy that this is all you get. From what I've seen, you'll get three categories of report: valid bugs, valid bugs which the developers insist are false positives until forced to do in-depth analysis, and actual false-positives. The exact ratios vary depending on the tool in question but generally I've seen the first two categories be a good bit larger than the third.
Ten true positives could very well be worth thousands of programmer hours if they're severe enough.
If you approach it determined to reject everything, or to blindly follow the tool, you’ll get poor outcomes. If you view it as a tool which can help you improve your code and triage results accordingly, you’ll almost always have better results.
1) Quality of existing language/compiler. Some languages are more prone to turning carelessness into exploitable bugs than others. Some compilers are better at flagging questionable practices than others (and, as an approximation, newer compilers for basically any compiled language are better than older ones).
2) How much you care about fixing the problems. If a team is understaffed and asked to overdeliver, they have no time for fixing technical debt of any sort, whether static analysis warnings or old cryptographic hashes or whatever.
Static analysis tools do best when 1 is low and 2 is high. But that's unusual. First, you never start a project in that situation. If you care about the sort of bugs a static analyzer will find, you almost certainly are going to pick a good ecosystem to implement in. So you're working with legacy code, almost by definition.
And if you have the bandwidth to address tech debt, there are many ways to use better toolchains and deal with legacy code. And this isn't a "Rust solves everything" post - frankly, upgrading your C/C++ compiler and listening to what it says will get you a lot of the way there. But that itself tends to be a sizable tech-debt-reduction project. (I'm one of the people on the GCC upgrade team at my own workplace, and we do justify the work to senior management by pointing out that new warnings are helpful for code quality, and we do spend time addressing those warnings.) We also have more tools and frameworks for moving parts of your computation to a different language or toolchain (more norms around microservices, better serialization libraries, etc.) if you want to go the rewrite route but want to do it incrementally. And there are plenty of language choices for a rewrite; frankly, most use cases in the '90s that needed C++ will do just fine today with unoptimized and easy-to-read Python.
So, if 2 is high, you have a lot of options other than static analysis.
(Aside: the post is about PL startups. I would bet the average PL grad would rather work with the ecosystem of a new, modern programming language that has benefited from recent research than write tools to work around an old and known-suboptimal one.)
I think the only real case where static analysis can work is where 1 is low but the choice to buy the static analysis tool causes 2 to become high. Basically, it acts like a consultant. The devs already know, vaguely, that the code is crap, and they wish someone would give them resources to fix it. Senior management buys a shiny static analysis tool, which is an objective outside voice saying how bad the code is. They tell devs to fix it, and devs are happy to knock out obvious fixes, your category 1 - and they'll still debate the non-obvious fixes, your category 2, because they're not being motivated by the static analysis tool, they're motivated by their own sense of what code is crappy, they just know that their new OKR is to silence all the static analysis warnings.
That's certainly a better outcome than not fixing anything, but that's still a worse outcome for the company, probably, than asking your devs what's wrong with the code and how they want to fix it.
I think this is homing in on the different starting points: if you experience static analysis as something imposed on the development team (e.g. by a bureaucratic security mandate) you'll definitely see more cost and less benefit than when the team itself selects that tool and selectively uses it as part of their ongoing maintenance work along with other tools (compiler warnings, fuzz testing, etc.).
For example, I've used tools like this where you add it as a CI stage where you say that new code shouldn't make the problem worse and then the team spends the next few months triaging reports on the area they're working on, tuning rules and prioritizing valid findings. Generally that'll have a surprising impact on the average code quality over 6-12 months, in part because cleaning up the chaff tends to make it harder for more significant bugs to be missed in a sea of compiler warnings and convoluted code.
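That "don't make it worse" gate can be sketched in a few lines. This is an illustrative toy, not any particular vendor's feature — real tools diff findings by fingerprint rather than by raw count — but it shows the ratchet shape:

```python
def check_ratchet(current: int, baseline: int) -> tuple[bool, int]:
    """Gate a CI stage on a warning-count ratchet.

    New code may not raise the finding count above the committed
    baseline; when a run comes in lower, tighten the baseline so the
    improvement is locked in.
    """
    if current > baseline:
        return False, baseline   # fail the build, keep the old baseline
    return True, current         # pass, and ratchet the baseline downward
```

Over months, the passing runs keep lowering the committed baseline, which is where the gradual quality improvement comes from.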
Hence, all successful programming languages are open and free.
In practice the only kinds of languages you don't get locked in to are those with excellent interop with other languages. It's got little to do with whether the reference implementation is open source. Kotlin's lack of lock-in doesn't come from the fact that it's open source - that's nice but in practice most community contributions are small. It comes from the fact that you can recode individual classes in Java if you so wish. And in fact, even convert Java to Kotlin automatically, if you're going the other way.
Also, lots of developers have written code in proprietary languages with a single vendor. It's HN groupthink to believe that nobody 'sane' does this. Look at ABAP, Apex or MATLAB. All very widely used, even though you may not have heard of them. In the past Delphi was also very popular, and it's only recently that the C# compiler went open source.
Python 2 -> Python 3? Perl 6? I know, neither of those was intended to be a fork.
Before that, C# could be considered as a fork of Java, and C++ as a fork of C.
C# and Java might be eerily similar, but by no definition of "forking" might C# be considered a fork of Java. Maybe you could consider it a fork of C, as its original name was "C-like Object Oriented Language" (COOL).
Back when I started my career almost nothing was free. At the time I worked at a startup and we were always drooling over the stuff in "Programmer's Paradise". We did pay for a few crucial tools but couldn't afford many. Times have changed a lot since then, and the fact that high-quality compilers and IDEs are effectively free has raised the bar for paid products substantially.
This is exactly what RISC-V was created to address. People here get excited about running Linux on it, but really the goal is to have a flexible, extensible ISA that can be used from microcontrollers to DSPs, GPUs, and TPUs. Full-on application processors, with their FPUs, MMUs, and caches, are only a small part of the design space of RISC-V.
The shared ISA means the rest of the stack could share and leverage tools. Any day now ;)
Dave Patterson is one of the creators of RISC-V, and his credentials include pioneering RISC, the original concept.
The Low-Code movement seems very focused on programming, no?
If you have a “pop out the credit card” moment at any point you slow adoption and end up with a smaller, poorer ecosystem that finds it hard to compete with the Pythons and Gos.
But if your language is targeting a niche where no established FOSS language exists, then a startup would work. Create a language, give it away for free (MIT or Apache ideally), then charge for creating/maintaining niche libraries or other consulting. This is the model chosen by Julia (https://news.ycombinator.com/item?id=9516298). One of their main competitors is a paid product by MathWorks.
Side note - this flywheel effect determining the long-term success of programming languages is why discussions around them can get so contentious. A criticism of "your" language could trigger an exodus from the developer community around it, making it die a slow death. For a person who's put years into the language, it could mean throwing away all that knowledge and starting afresh elsewhere. So they respond harshly to the criticism, subconsciously hoping that the language ecosystem stays healthy.
I don't see any point in having a proprietary ecosystem when open source is so much better in every aspect.
That's what JuliaPro (Pumas, etc) is.
In reference to why Lisp over other languages, see this article: http://www.paulgraham.com/avg.html
Like, if someone says they prefer Java to Kotlin, that doesn't threaten me at all because I can use their stuff easily. Likewise in reverse, as long as the author of a library actually wants it to be usable from Java it will be. Even stuff like Scala and Clojure where the ecosystem guys really don't care about interop in the reverse direction, it's still possible and can be done when people want it for relatively low cost (just avoiding exotic language features in your API definition, more or less).
It does take a lot of the heat out of those discussions. You don't get the same influx of "I rewrote grep in Rust" type stories.
Let's say Java does eventually catch up, but it'll take at least five years. So then Kotlin has been making JVM users' lives easier for a decade by that point. That's pretty good! Definitely worth having.
I think a key part is that investment in a language goes both directions between programmer and language author. When someone learns a language, they are spending a lot of time and effort to load that language into their head and build expertise in it. That effort amortizes out over the amount of useful software they are able to write in that language in return. When the language is paid and where users have no control over the cost, they can end up in a situation where they are unable to extract value from what they've already learned if the language gets too expensive.
It's sort of like asking users to turn a corner of their brain into farmland but then giving the compiler authors the key to the padlock on the fence. Users don't want to give financial control over a part of their own headspace to someone else.
Free programming languages ensure that the user can always have access to the value of their own expertise in the language.
Well put. I think this is why the Google/Oracle court battle was so contentious to a lot of programmers. It wasn't just that people hate Oracle (sure, that too), but also that it was about Oracle trying to claim a monopoly on whatever piece of our brains is dedicated to our knowledge of Java.
I agree insofar as that is a euphemism to mean that Oracle is not tightly aligned with the interests of the left. Google is a big donor both in cash and in kind. Oracle is much less so.
> The damage to the industry from allowing APIs to be copyrighted would have been immense.
If I were to copy Google Sheets, offer an online service called "Google Sheets" and the only difference is that I modified a handful of implementation details so the service would run my spyware, and not Google's, then I would lose any copyright claims made by Google, hands down. You could make an identical argument that the web interface is an "API" and copying their code is a necessity of providing equivalent functionality. Google ripped off Sun and got away with it because the US legal process is a farce. Denying the right to copyright ANY code at all would be a morally defensible position, but that's not the ruling.
If you built a spreadsheet app that was compatible with Google Spreadsheets’ formula language, Google (rightfully) wouldn’t have much of a case. In fact, Google pretty much did this by implementing much of Excel’s formula language.
In theory though there are certain business opportunities in this space that haven't been explored yet, it seems.
For example, a faster and better compiler for a popular language that itself is slow, might get some traction. Personally I would pay for a Swift compiler that doesn't take seconds to compile a single long-ish expression with floating point arithmetic, or doesn't take tens of minutes to fully build a project that isn't even that big. Similarly I'd pay for a replacement for Xcode that doesn't have bugs in every single feature it implements. As an iOS developer I'm honestly very tired of and fed up with both Swift's slowness and Xcode's bugginess. There has to be some competition here too, already.
In other words, competition is what the PL industry is lacking, I think. But not among the languages themselves as much as among the compilers (and possibly the rest of the tooling and environments).
I don't know how we ended up in a world where there's only one compiler per each newly created language. I'm not sure this is normal or it's the way it should be.
I used to think this, but now I am skeptical, because of zapcc. zapcc was almost exactly this: a faster compiler for C++, first proprietary, now open source because it failed. Still no traction.
This is a viable business model in principle but seems to rarely work in practice.
A big part of the problem is that all languages in wide use already have mature compilers, and the language itself is evolving as quickly as its designers are able to make it.
This means that anyone entering the third-party compiler business has to not only catch up to the existing mature implementation (and be fully bug-for-bug compatible with it), they have to surpass that and track a quickly moving target. So far, few have been able to pull that off successfully.
Or some complex Combine chains just hang forever and never compile at all, with no indication of what the problem is. Yet they keep adding language features that the compiler clearly can't deal with.
For instance, "parser generators" like yacc, bison and all the rest are absolutely awful. One feature they miss is that you should be able to make a small revision to a grammar with a small code patch. (Say you want to add an "unless(x)" statement to Java that is equivalent to "if(!x)" - that ought to be 50 lines of code, not counting the POM file.)
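To make the wish concrete, here is a toy text-level pre-processor in Python that does the `unless` desugaring. A real grammar patch would work on the parse tree rather than on raw text, but the size of the change is the point:

```python
import re

def desugar_unless(src: str) -> str:
    """Rewrite `unless (cond) ...` into `if (!(cond)) ...`.

    A toy, text-level stand-in for the small grammar patch the comment
    above wishes parser generators made easy. It scans for balanced
    parentheses so nested conditions like `unless (f(a, b))` survive.
    """
    out, i = [], 0
    while True:
        m = re.search(r"\bunless\s*\(", src[i:])
        if not m:
            out.append(src[i:])
            return "".join(out)
        start = i + m.start()
        j = i + m.end()          # position just past the '('
        depth = 1
        while depth:             # find the matching ')'
            if src[j] == "(":
                depth += 1
            elif src[j] == ")":
                depth -= 1
            j += 1
        cond = src[i + m.end():j - 1]
        out.append(src[i:start])
        out.append(f"if (!({cond}))")
        i = j
```

For example, `desugar_unless("unless (x > 0) { y(); }")` yields `"if (!(x > 0)) { y(); }"`.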
I think a better parser generator could open up "compiler programming" and "meta programming" to a wider audience and get LISP fanatics into the 21st century, but it never happens. Anybody who understands the problem well enough to solve it will settle for a half-baked answer; the labor input to go from "the author can use it" to "revolutionary new parser generator" is a factor of 10x, and there is some risk that people still won't get it.
Programming the AVR-8 Arduino, it's hard to miss the conclusion that "C sucks", but anything really better (say, a Supercharged Pascal or a Miracle Macro Assembler) would take a huge amount of fit-and-finish work to make it something that would really beat C. For pedagogical purposes you can argue that C rots the mind like BASIC, but you can't argue that learning C is a waste of time.
To make a profitable business out of any of the above seems almost impossible.
I did see an Ada compiler for the AVR recently (I'm not an Ada programmer). It seemed unpolished, and not really an advancement on C.
I think one problem with embedded work is that you either stick to the low-level, in which case you have no advantage over C, or you go for high-level, in which case you have convenience but lose flexibility.
In terms of meta programming, Val Schorre had an interesting idea with META-II. Both Alan Kay and Joe Armstrong spoke highly of it. The compiler has its own assembly language. I had this goofy idea of writing a Basic-like language that replaced the assembly. So you'd have a language that was both high-level Basic and a compiler. It would embrace the GOTO, because META-II also generates GOTOs, and GOTOs are a natural fit for CPS (Continuation Passing Style) used by many compilers.
For my Arduino projects in particular I never make recursive function calls, and would be happy to statically allocate everything in RAM. The stack-oriented activation records in C, calling conventions, return values, are more a problem than a solution.
Zig is trying to work in this space, and they seem to be getting rather close to the ideal.
Just to start off - I still find the debugging experience and the (lack of) ability to reason about grammars a huge barrier to wannabe language developers. But I am still unable to put this in concrete terms. My last few months have been immersed in reading a ton of papers, and I still see the papers use bison as the staging ground for most new ideas (menhir seems to be gaining popularity these days), so I'm curious what is keeping yacc/bison on the throne.
In a language like C# or Python you usually want to get an object tree for the AST and you shouldn't have to write any code to get that. It should come "for free" with the grammar.
Another missing capability that people don't know they are missing is the ability to "unparse" the AST back to text. This is valuable when you want to write a code transformer: for instance in query languages like SQL there is a lot of value in doing a transformation on the query and then writing it back to SQL. You should be able to transform the AST and not have to write a line of code to transform it.
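For what it's worth, Python's standard `ast` module already does this parse → transform → unparse round-trip in one mainstream toolchain (modulo comments and formatting, which it drops). A small transformer that rewrites `expr == None` into `expr is None`:

```python
import ast

class NoneCompare(ast.NodeTransformer):
    """Rewrite `expr == None` into the idiomatic `expr is None`."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        if (len(node.ops) == 1
                and isinstance(node.ops[0], ast.Eq)
                and isinstance(node.comparators[0], ast.Constant)
                and node.comparators[0].value is None):
            node.ops = [ast.Is()]
        return node

tree = ast.parse("if result == None:\n    handle()")
fixed = ast.unparse(ast.fix_missing_locations(NoneCompare().visit(tree)))
print(fixed)
```

Note that `ast.unparse` (Python 3.9+) regenerates source from the tree but loses comments and original formatting, which is exactly the round-tripping gap the comment is pointing at.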
Another problem is composability of grammars. If you want to support the SQLite, Microsoft SQL Server and Oracle SQL dialects, you should have a base SQL grammar and extended grammars that inherit from the base.
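A sketch of what dialect inheritance could look like, with the "grammar" reduced to plain methods for illustration (toy names, not any real tool's API):

```python
class BaseSQL:
    """Toy 'base grammar' as methods; dialects extend by subclassing."""
    def select(self, cols, table, limit=None):
        q = f"SELECT {', '.join(cols)} FROM {table}"
        if limit is not None:
            q += f" {self.limit_clause(limit)}"
        return q

    def limit_clause(self, n):
        return f"LIMIT {n}"

class TSQL(BaseSQL):
    """SQL Server dialect: no LIMIT clause; TOP goes after SELECT."""
    def select(self, cols, table, limit=None):
        top = f"TOP {limit} " if limit is not None else ""
        return f"SELECT {top}{', '.join(cols)} FROM {table}"
```

A real grammar-inheritance mechanism would override productions rather than string-building methods, but the shape is the same: a base definition plus per-dialect deltas.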
All of these capabilities are so beyond the pale of what parser generator "insiders" are used to that it's a very hard conversation to communicate that these capabilities are both possible and useful.
The AST availability is definitely a big one that I cannot align on a decent contract for. At some point, not all AST transformations are reversible, yeah? Heck, even the conversion from parse tree to AST is often not bidirectional. A lot of AST-directed IDEs (even MPS from JetBrains) end up being very cumbersome when typing at scale. I wonder what kind of semantics could be foundational enough that AST generation is simplified for at least 90% of the cases? Again, my interest in this comes from building incremental editors for certain domains, so I am definitely looking to learn more.
Meta programming is an excellent target, but it usually needs to be directly baked into the language. The closest systems I've seen to a nice language-agnostic metaprogramming system are JetBrains MPS and the Spoofax Language Workbench. Last time I checked, MPS was mostly dead and Spoofax is only used by programming language researchers.
For the Arduino programming stuff, I currently maintain a domain specific functional reactive programming language for Arduinos called Juniper: https://www.juniper-lang.org/
However the intersection between people who do Arduino programming and functional programmers is quite small. Also my interest has waned on and off over the years which means there have been long gaps between releases. I just work on it for fun mostly.
The point was about application development - you can save time by making applications that aren't 100% correct. In most software, any bug is OK as long as it happens a small enough percentage of the time.
That's a strong statement.
C's biggest flaw is that it lies about the underlying memory, but it's a small language that's especially suited to be easy to port.
Undefined behavior is normal. The whole relationship between .h and .c files is sick. How you use the "extern" keyword is arbitrary. strcpy is Turing complete. It took a while for people to realize templates are Turing complete. I'm not sure where the grammar fits in the Chomsky hierarchy, since you have to look at the symbol table to parse it...
Good C looks like poetry, but beginning programmers shouldn't be subjected to any of it. I certainly hate cleaning up the mess!
ALGOL, PL/I and similar languages put a lot of work into specifications but nobody really knew how to write a specification that could be implemented back then. C was an early attempt at specification + implementation that was "minimum viable" (almost?) and ready in time to be the bridge between microcomputers and minicomputers.
Probably the best idea in C was that you didn't have to agonize over I/O in the language specification but you could just leave it to the stdlib.
Also, templates are C++ not C.
My google-fu is not working, I can't find a source for this. And am too dull right now to figure it out.
This difference is shrinking in the web, both client (DevTools run like an app inside the browser, with live changes to the code reflected instantly in the page) and the cloud (the entry point interface is a collection of collections, where you can inspect and operate on any component running in the system, and even create some kinds of workflows in the live environment).
But the lowest level for designing components relies on specifying a fully formed program in formal code, and having to run a build-deploy-test cycle without errors before seeing whether it works, instead of having code and data run side by side to see how the code behaves.
I put my hopes in online notebooks, but these are used mainly for data analysis, and "true developers" seem determined to ignore them.
Yet it's the difference between a data scientist making an occasional report to putting their expertise on wheels that makes it worth paying for.
The most significant difference is that the notebook always has runtime information about how the available information is being processed, while the IDE only has this information while it's in debug mode. The first model provides much more information about the system's behaviour, and makes it easier to access.
This is merely my own personal preference, not an assertion that my stance is correct or common.
For development, it's important to me that I can put the toolchain into version control along with the code. This ensures that I can check out a build that is years old and be able to build it without issue.
SaaS solutions break this for me, so I avoid them to the greatest extent that I can. If it's cloudy, then it can change without warning or recourse. That's a problem.
For that reason, I would never seriously entertain using something like a notebook for development.
a) the notebook has an internal text representation that can be version-controlled, and
b) the notebook follows a spreadsheet-like paradigm with incremental non-mutable data structures, so that all state is represented consistently at all times (no dependence on what order you've run the cells in the current session).
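A minimal sketch of point (b): if each cell is a pure function of other cells, evaluation order in the session can't corrupt state, because dependencies are always recomputed first:

```python
def evaluate(cells):
    """Toy reactive 'notebook'.

    `cells` maps name -> (fn, [dependency names]); each fn is a pure
    function of the environment. Returns the fully evaluated environment,
    independent of the order cells were defined or run in.
    """
    env, done = {}, set()

    def run(name):
        if name in done:
            return
        fn, deps = cells[name]
        for d in deps:           # evaluate dependencies first
            run(d)
        env[name] = fn(env)
        done.add(name)

    for name in cells:
        run(name)
    return env

# "c" is listed first, but still sees up-to-date "a" and "b".
notebook = {
    "c": (lambda env: env["a"] + env["b"], ["a", "b"]),
    "a": (lambda env: 2, []),
    "b": (lambda env: env["a"] * 10, ["a"]),
}
```

(A toy: no cycle detection or caching across edits, which real reactive notebooks like Observable or Pluto.jl handle.)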
You could do that. Add "unless" to the Flex file and hack the Bison file to interpret 'unless->expression' as 'if->negation->expression' when pushing to the abstract syntax tree. You can probably copy-paste from the other rules for the hypothetical Flex/Bison Java compiler.
In ordinary software people spend most of their time modifying existing code. I wouldn't want to reinvent "javac" or "cpython" but adding a new feature to a compiler like that should be easy.
Alternately there are a number of "compiler-technology based tools" that we don't have. Instead we get problems like
which defines a whole new XML-based language to manage database changes. That's like jumping from the frying pan to the fire since you're still going to have to understand your RDBMS, how to write SQL for it, what the non-standard features are in your database, etc.
A sane tool like that would use an extensible parser generator to easily support the variant syntaxes of different SQL dialects and would be able to parse and unparse SQL or a variant of SQL to define change sets.
(speaking of which... shouldn't a compiler generator automatically make something that can reverse the AST back to source code? how about round-tripping source code comments, formatting and all like some CASE tools did in the 1990s?)
A while back I had the problem that ArangoDB scrambled the order of columns in JSON result sets, and wrote something that parsed the AQL, figured out what the order of the columns was, and turned the result sets into nice pandas frames.
If "writing compilers" was easier people would write "compilers" that do things other than compiling to machine code.
Lots of people, actually! Just in recent years, Golang, TypeScript, and Rust have shown there's plenty of demand for new languages (compilers!) as developers and their requirements keep evolving. Given how expensive and time-consuming languages are to build and evolve, I'm honestly flabbergasted that we've seen 3+ languages come of age in the past decade and break into the mainstream.
Dunno about Rust.
Guess I'm disagreeing with "demand" in two of the cases. One because you don't know what office politics at Google led to Go being promoted. I strongly doubt it was to solve a problem, just someone with a lot of clout had a pet project.
... and second because it attempts to patch an existing trainwreck and isn't even a new language.
Rust does seem well intentioned though so maybe there was a "demand".
I don't know the office politics there, sure. But Go most definitely solved problems: it's optimized for fast(er) builds, and it's generally a language you can write code in without thinking too hard while still getting good runtime performance.
Many languages start out as pet projects, but can often solve a real problem such that they see internal adoption. That adoption can grow for various reasons, and in some cases, end up creating a sociotechnical system where it's now a thing even though it never started out with the intention of being a thing. It's at this time that demand is a key driver behind adoption, new features, etc.
Do you still feel that Go "doesn't count" with respect to demand?
> second because it attempts to patch an existing trainwreck and isn't even a new language.
But nonetheless, since this is really a philosophical question, what exactly makes up a new language to you?
Do you still feel that it "doesn't count"?
Quality != popularity. Look at Hollywood. JS is only popular because it got built into every browser, not because there is anything "quality" about it. Thus, attempts to fix it.
> In TypeScript, the type system is Turing Complete.
Ok, I'm afraid the only thing I can think of now is the metastasis that C++ is in. But that's again a philosophical point of view, and I'm drifting on another tangent too.
Ok, not fundamentally incompatible, right?
I'm not sure why they would have sold if they had ever figured out how to make the software side of the Clojure ecosystem a sustainable, growable revenue source, e.g., hosting.
In contrast, Julia's recent fundraise is built on a successful science tool that happens to be written in Julia, and a hope that hosted Julia might one day pay the bills. But that's not proven yet. npm & Docker Hub showed repo hosting is tough to succeed on even with wide use, though Anaconda shows promise when mixed with consulting revenue.
Jean's article also tiptoes around developers not always being the buyer, which requires a model more like Twilio's, where developers act as presales/marketing aimed at some other customer. That kind of misalignment adds another level of pain.
I'm not sure what exactly you mean by that. Cognitect is a relatively small company (if I'm not mistaken, fewer than 20 engineers work there), and there are plenty of products built on the Clojure ecosystem. It scales very well: Cisco, Apple, Walmart, etc. have all successfully built, and keep expanding, their [massive] Clojure codebases.
I think there was a product attempt at becoming a database company -- Datomic -- that happened to be written in Clojure, but ultimately they still got bought for consulting rather than product: https://www.cognitect.com/blog/2020/07/23/Cognitect-Joins-Nu...
So again, for all the good parts, it's just not the success story for a PL/tools startup that Jean is looking for.
- the switching costs for programming languages are very high, even if you're switching to an established language
- it's hard to evaluate a new programming language; you don't know if it's good until you've written something non-trivial
- learning a new language takes a lot of effort, particularly for languages that aim to be better than existing languages; better implies different
- given all of the above, early adopters will probably not use the language for their day job, they'll use it for hobby projects; hobbyists are price sensitive
Rational Software was this: founded in 1982, acquired by IBM in 2003 (they outbid Microsoft, who were also interested in acquiring us).
There were a number of dev tool companies in that era, but no more: open source makes the economics of this a tough sell these days.
As an aside, I've been a professional developer for several decades now, and that's a question that I've heard for my entire career.
I think the answer is a combination of things that are impediments to the adoption of new languages of any sort, no matter how much of an improvement they may technically be:
1) If there isn't an accessible supply of developers who know the language, using the language becomes risky as a business decision.
2) Related to #1, using a new language is placing a bet: once you've put a lot of time, money, and sweat into developing a codebase in a particular language, if that language falls out of fashion then you're stuck with a codebase in a language that will lack support and a ready supply of developers who know it and are willing to work in it.
3) Career-minded developers will tend to prefer to develop skills that are likely to be in demand with a wider array of potential employers. Much like the bet I talked about above (#2), working in a new language is also a bet being placed by the dev.
4) New languages tend to have immature toolchains. Using them can decrease productivity (and possibly quality). This problem goes away should the language see widespread adoption, but that takes a while and may not happen.
There's a reason that the most-used (as opposed to most popular) languages tend to be ones that have been around for a while. They're safer (from a business point of view).
I've also noticed that the newer languages that become popular tend to grow out of hobbyist use. Devs learn and use them for fun in their off time. Some become proficient in them and advocate them to other devs. Once this hobbyist community reaches a critical mass, then companies will begin to entertain using the language, but not before.
... languages have value substantially due to network effect. By making a proprietary language you hobble the network effect and destroy the language's value.
And there aren't startups to create open languages because monetization is a big "then a miracle occurs": there is no obvious moat. And at our current level of science re: programmer productivity, it's a lemon market: lots of people say this or that will improve productivity, but there isn't much in the way of established methods to prove it. It's hard to convince people to pay for something when its benefits aren't unambiguous.
a) it seems to be a bit of a winner-takes-all game here, maybe fragmented by language or ecosystem, but overall still true. IntelliJ is kind of an outlier, but they were Java-only for a few years; then PhpStorm/WebStorm were halfway accepted, and some other languages as well.
n) Also, it takes so damn long until a new tool is even considered for widespread use. I remember ~11 years ago, when I was more involved in conferences and tooling in the PHP world (I also wrote my diploma thesis on this): the broad majority of people only used Jenkins and continuous integration, if that. Any other tooling was already rare, and don't even think for a moment they'd pay. The few shops that did pay had Bamboo and Sonar. From what I heard from friends, this was not unique to the PHP world. And even in the last 5 years it's been the same story.
c) I don't see much change in the last 10 years in companies' willingness to buy tools for their developers when there's something free to be had and you can save a bit per year. This goes for big or small companies, startups or established ones.
I won't use a language that isn't open source, I don't trust companies enough, there is a chance they could die, revoke a license or do some other stupid thing that makes it unreliable in the future.
- It was given away for free for years. During that time people wrote _reliable_ numerical packages that other people glommed onto.
- MATLAB users aren’t programmers, they are [engineers/mathematicians/etc.] who need to write some code to do their job. One aspect of this is that they generally aren’t on the hunt for a new “framework” to do this stuff. They also aren’t thinking much about long-term maintainability: they run it once and get the answers they need.
- Academics (and students who can use university licenses) are price insensitive.
- Engineering disciplines where errors can’t happen (civil, aerospace, etc.) come to rely on aforementioned reliable numerical packages, which eventually required proprietary licenses.
is a visual programming language startup based in Cracow, Poland, and they even had an office in SF, iirc.
They have a shitton of fancy visualisations and it looks really interesting, but I'm personally not the target audience for visual programming.
I'm very excited about the variety of languages being actively developed right now. There are 2 major open source projects which are helping new languages quite a bit:
- Visual Studio Code (Eclipse used to be the choice in the past). Every language can get a high-quality IDE with a simple integration.
- Clang & LLVM: a high-quality back end and common code-generation tools.
The developer tools market is never going to be extremely big. In most mature companies, R&D costs are at most 15% of revenue, and dev tools are a fraction of that.
Tools like IDEs, debuggers, visualizers, etc are more interesting, but I think their effectiveness is also limited (and creativity constrained) by having to work in a plain-text world.
Moving beyond plain-text is mostly a social dynamics/network effects problem, and I wouldn't be surprised if the intersection between people who are good at figuring out that distribution × people are who are deeply interested in how programming happens is actually quite small.
The big issues with making a new language in 2021 are more related to semantic tools, like an LSP server (in which parsing is just the first building block), just as writing a compiler is much more than parsing.
Special purpose languages geared toward specific niches like Julia may be able to monetize.
Plus, you won't get a dime out of me for, say, an AI copilot plugin. Unless it's free, and I can guarantee that I'll have access to it in the future, why would I want to integrate something like this into my workflow? Adding another SAAS into my life is just putting my neck under another sword of Damocles.
And every infrastructure project suffers from the same issue. It benefits from mass adoption, which basically requires the thing to be free and open source. With that, the value will be captured 0% by the project's creators, 10% by consultants around the project, and 90% by cloud providers. The issue is even more pronounced for programming languages.
Daml is essentially a smart-contract language with Haskell syntax and built-in primitives for privacy & authorization (e.g., so you don't have to make your whole ledger either entirely public or entirely private).
Worth checking out if you're into DLT & PLs! :)
Disclaimer: I work for the company developing this.
There are a plethora of good enough free programming languages available, that also have good enough free tooling and ecosystems.
There therefore isn't really a business model.
Most don’t pay for tooling.
Sometimes they do, and they get bitten by it. So never again.
One possible path that might work is if you release your language with a framework that is great for some use case. E.g. if Qt were rewritten in some alternative language maybe that could be successful. But to create anything that polished takes a lot of time and effort, so any company is going to burn through capital just getting to a place where they can determine if they've made something users will actually switch to.
To answer the question briefly: programming languages aren’t very profitable. There are so many of them competing with each other, and so few that companies can adopt and base their recruitment around.
Tech startups usually thrive from apps, services, or valuable technologies. Not yet another programming language.
I think an unprecedented abundance of cheap leverage and an inability of investors to make reasonable valuations of highly abstract products should be in that list.
just fyi: Nubank is the largest financial technology bank in Latin America. It's not "some startup".
- Dark: https://darklang.com (hibernating, laid off everyone to buy time to find product/market fit)
- Eve: http://witheve.com (defunct)
- Luna/Enso: https://enso.org (just rebranded)
- Unison: https://www.unisonweb.org (hanging in there)
- Urbit: https://urbit.org (hanging in there)
That's off the top of my head. I'm surprised an article asking this doesn't mention any of them or analyze how they came to be and where they went.
The main reason from my perspective is that there's no funding. It takes years to build a programming language from scratch, so you need a lot of runway. And it takes a specialized group of developers, so they cost a lot of money to hire. The combination of the two means you need a lot of money to get a PL off the ground. Millions of dollars.
But this is a research problem with an uncertain time horizon and unknown feasibility. It could take a decade more to complete such a task, but it's very unlikely you're going to get a decade worth of funding from VCs. You'll probably get 3-5 years, and at the end of that you'll need to have something tangible that can get you follow-on funding to continue your work. But that means your original pitch of revolutionizing the computing landscape by democratizing programming gets downgraded to making an incremental change that increases some productivity for developers.
Anyway, there are ways to get around this, but it likely means your company won't be a "startup" in the HN VC-funded sense. See Unison, which is developed under a public benefit corp. Andrew Kelley has managed to bootstrap Zig on user support and now corporate sponsorship. Urbit had an ICO and sells digital real estate.
I used to pay for a programming language, Turbo Pascal, which then became Delphi. Then Borland went crazy and started pushing C++, and raising the prices as their focus was lost. Eventually they were sold, and the prices went up, and up.
Never again will I be locked into a vendor with a profit motive.