The Alternative Implementation Problem (pointersgonewild.com)
272 points by mpweiher 14 days ago | 83 comments



I'll add one more point which the OP missed, but which is also very important. If you're developing an alternative implementation, you probably have a different architecture from the canonical version, and things that are easy to do in the main implementation might be very difficult to do in yours.

Let's say there's a piece of proprietary software for creating financial reports[1], which uses a weird binary format to store its documents. You want to make a free alternative that can load and save these documents. The format is not fun to deal with, so you have a single function that loads the whole document into memory and one that dumps your data structures back to disk, entirely overwriting the file, but you operate purely on in-memory data while the software is running. What you don't know is that the proprietary software doesn't do this: it was developed in the days when users had very little RAM, so it only loads and saves the section the user is currently operating on, and knows how to modify the document in place.

Then, the proprietary software introduces a way to add attachments to their documents. People keep adding stupidly large files: recordings of investor calls, scanned PDFs with hundreds of pages, and so on. The proprietary software loads documents one section at a time, so this works just fine. You, on the other hand, always deserialize the entire document at once, which suddenly becomes a problem when documents are bigger than the users' available RAM. You now basically have to re-architect the entirety of your software to support a change that took a week of a single developer's time for the main implementation (the difference is sketched below).

[1] a purely hypothetical example, I've never actually worked in this space.
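
To make the mismatch concrete, here is a rough sketch of the two architectures. It's purely illustrative: every helper (`parse_whole_document`, `section_index`, etc.) is hypothetical, standing in for format-specific parsing code.

    # Alternative implementation: whole-document (de)serialization.
    def load_document(path):
        with open(path, "rb") as f:
            return parse_whole_document(f.read())  # entire file in RAM

    def save_document(doc, path):
        with open(path, "wb") as f:
            f.write(serialize_whole_document(doc))  # rewrites the whole file

    # Original implementation: one section at a time, patched in place.
    def load_section(path, section_id):
        with open(path, "rb") as f:
            offset, length = section_index(f)[section_id]
            f.seek(offset)
            return parse_section(f.read(length))  # one section in RAM

    def save_section(path, section_id, section):
        with open(path, "r+b") as f:
            offset, _ = section_index(f)[section_id]
            f.seek(offset)
            f.write(serialize_section(section))  # in-place update

Both expose "open" and "save" to the rest of the program, so the difference is invisible until a document no longer fits in memory.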


I work in an industry that requires two separate implementations of all critical software components, so I have been involved in several reimplementations to help satisfy that. There are definitely times when I saw a feature and thought "the original implementation was obviously doing X, and this feature does not really make sense given our architecture". However, I have never needed to rearchitect my program to implement the feature. The only real difference is that a trivial feature for one program takes a bit of work on the other. However, I could also point to functionality that took more work to implement on the original program, but that the different implementation made trivial on the second program.

I have only had to do one major rearchitecting, and that was on the original implementation, which had some architectural assumptions that forced an exponential blowup. I wasn't working on the second implementation, but they started years after us and managed to avoid our mistake from the beginning.

To take a public example that is far more complicated than anything I have worked on, consider the problem of running Windows applications under Linux.

Linux is a completely different implementation of a kernel than NT, and doesn't even attempt to be compatible. However, running Windows applications does not require a rearchitecture, just a few relatively generic kernel features and a userspace compatibility layer. That's not to say that Wine doesn't take a lot of effort to write and maintain, but it's not nearly as much effort as implementing Windows itself. And it is running on top of a platform that never aspired to be Windows compatible. Wine does, however, suffer from the problem that the article points out, in that it is perpetually behind Windows. Wine also exemplifies a second problem, which is that bugs in the primary implementation end up being relied upon, so in order to be bug-for-bug compatible, you first need to discover all the bugs you need to implement.


There are huge swaths of the Windows API that Wine straight up doesn't even try to implement, the complex APIs that a screen reader or driver would require to function being good examples.

There's a good argument that implementing kernel-mode driver APIs would be a complete waste of time: after all, Wine is running on top of another operating system, which should be responsible for interfacing with hardware. However, what the Wine developers didn't foresee was that some applications (mainly games) would start requiring a driver to function, as an anti-cheat feature.


> I work in an industry that requires two separate implementations of all critical software components

Very interesting. What industry, if you're able to discuss it a bit more?


I've seen this in aerospace in cases where testing the software against ground truth isn't possible for some reason[1], so you need to generate an alternative version of an algorithm as a sanity check.

[1] This is typically because the expected outputs are not easily known or specified in advance (e.g., interpolating a 3D wind cube to a higher resolution from lower resolution forecast data) and there isn't much if any experimental data, and collecting such data is expensive because it requires flying expensive aircraft around for long periods of time.


My guess would be something safety-critical, probably aerospace. I haven't had to do the "2 separate implementations" thing yet but will likely be exploring quorum/voting-based redundancy pretty soon. Fun times!

On the other end of the Windows emulation space, another example is ReactOS, which has always lagged years behind its target.

> things that are easy to do in the main implementation might be very difficult to do in yours.

The author mentions this when they talk about the ease of implementing new features in an interpreted language vs. a compiled language.


Hm, this is true, but I think it's worth pointing out that this is more general. The article was focusing on how JIT is harder than interpreting, whereas neither model in parent's post is inherently more complicated.


> If you're developing an alternative implementation, you probably have a different architecture from the canonical version, and things that are easy to do in the main implementation might be very difficult to do in yours.

Yes. This is the classic problem with making Python faster. CPython started as a naive interpreter, one that just plods along doing what the code says without optimization. In CPython, everything is a dictionary and there's little real concurrency. This allows any code to patch anything else on the fly. Nobody does that much, but people scream if you try to take that out. So implementations which really compile Python have to be able to handle some thread suddenly modifying something out from under another thread.
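
To make that concrete, here is a minimal, runnable example of the kind of runtime patching a compiling implementation has to tolerate (the class and the patch are of course contrived):

    import threading

    class Greeter:
        def greet(self):
            return "hello"

    def patch():
        # Rebind the method on the class, potentially while another
        # thread is mid-call through the old definition.
        Greeter.greet = lambda self: "patched"

    g = Greeter()
    t = threading.Thread(target=patch)
    t.start()
    t.join()

    # A compiler that inlined or specialized g.greet() based on the
    # original definition would now produce the wrong answer.
    print(g.greet())  # -> "patched"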


Classic Smalltalk has the `a become: b` feature, after which everything is changed across the whole image; likewise a redo in a Lisp Machine-like environment.

Yet both ecosystems were able to come up with JIT implementations that can handle the "reboot the world at any moment" use case.


Do you agree with them that "implementations which really compile Python have to be able to handle some thread suddenly modifying something out from under another thread"?

Would you say that's also true of Smalltalk implementations?


As if there aren't Smalltalk and Lisp implementations with support for parallelism, which naturally have to handle such cases.

By "support for parallelism" do you mean invoke multiple VMs and use sockets or shared memory?

You know what I mean, no need to play lawyer, leading the witness.

No.

As far as I can see, those comments are unhelpful to the point of being evasive.



And when we look at that (proprietary Cincom Smalltalk) approach we find "simple mechanisms for spawning multiple running copies of the image and using those to perform various tasks in parallel" and "Objects are marshaled between the master and drone as copies, not as references."

That's not an example of comparable complexity to "… Python ha[s] to be able to handle some thread suddenly modifying something out from under another thread".


With the Innovator’s Dilemma, the reference implementation could be a competitor’s. I’ve seen this play out well and badly. Having better knowledge of the market now, you might pick a better architecture, and find yourself in a place where it’s cheaper for you to add new features quickly than for your competitor. That’s one of the ways a smaller company can raise your overhead. Or one that has taken on less tech debt.

In fact this is one of the few times you can clearly illustrate tech debt to management. It just takes us longer than Acme to implement these features.


Sometimes this difference in architecture is deliberate, too. For example, the GNU versions of a lot of basic Unix utilities make completely different tradeoffs from the original Unix (and BSD) versions specifically to resist allegations of copyright violations.

> positioning your project as an alternative implementation of something is a losing proposition

> don’t go trying to create a subset of Python

I agree - a project that's marketed as "Python but with more X" is always going to struggle to compete with the canonical implementation. (Especially if X is speed - ultimately, if you're using a dynamically-typed language you probably don't care about execution speed).

However, alternative implementations are not always doomed to failure. MicroPython seems to be somewhat successful, despite supporting little more than Python 3.4 (10 years old). It's designed to run on microcontrollers so it's not competing with CPython - it's competing with other microcontroller programming environments.

OTOH, I'm sure the MicroPython maintainers get a lot of feature requests for more recent Python features. I once considered building an alternative implementation of Python that's lightweight and focused on embedding in applications - again, it wouldn't be competing with CPython, but with Lua. The number one feature request seemed to be "will it support NumPy?"


Interestingly, MicroPython is now getting attention/requests for usage on the web, thanks to efforts from PyScript (which are fantastic). However, the expectations there are much higher (compared to a micro with 1MB of RAM total): that existing code will run, with full CPython experience/compatibility. Which is very challenging. At the same time, though, there seems to be ongoing work in CPython to make it work better in the frontend. The main pain point is the package size.

I learned something similar founding a startup. If I could do it again, I would have aggressively avoided pursuing the table stakes features in our space. Instead we should have done the minimal amount to be comfortable that our architecture could support the enterprisey things. We should have then concentrated everything on something that would set us apart, something we could demo and get a “wow… I see where this could go” instead of something like “oh this is just a clone of x”.


I sort of see your point, but you’re misusing the term “table stakes”, which by definition are what you have to put up to play at all. It sounds like you would implement the table stakes features, but not go farther with the “normal” extensions. The strategy being to get a callback with the exciting features, and have the table stakes features that let you avoid being vetoed for missing something essential.


Yeah, that’s pretty much what happened. What I was thinking was that we overestimated what the table stakes were. We were building a collaborative SQL editor, aka notebooks. We spent way too much time getting it working with different dbs instead of focusing on a couple and building the things that actually made us stand out. A single customer wouldn’t really care about all the dbs we could talk to beyond the one she was using.


I don't disagree, but I also hear "table stakes" used more often as a hand-wavy justification for a feature list than as a genuine, informed evaluation of customer demand. This seems to align with the sentiment of the parent comment. At some point, "common usage" becomes unassailable, no matter how much it irks me.

If I understand you correctly, you’re saying implement the bare minimum for enterprise customers—whatever their leadership requires to do business with you—but beyond that, forget about those features and focus on something very novel that sets you apart from the competition?


If you're just like the competition, you lose, because you're more of an unknown. If you want to win business, you have to have something that your established competitors do not. (And, of course, it has to be something that your customers want, not just some random thing.)


> If you're just like the competition, you lose, because you're more of an unknown.

That sword cuts both ways.

The reason the competition in a field has converged on all the same features, all looking the same and acting the same is because they were all shaped by the same market forces: that's what the businesses wanted!

If a potential client looks at your product and it doesn't look like a duck, doesn't talk like a duck and doesn't walk like a duck, they're going to assume that it's not a duck.

I've pitched a fairly simple product (still iterating on it so not sharing details) that had some extra features (not AI) to a business, and the business eventually went with a more expensive competing product, with my contact at the place explaining "The stakeholders feel that your product is for a slightly different use-case."

Yes, my product did have all the features of the competition. The added feature was low-code extensibility for the backend API. The business interpreted the API-extending-mechanism as "Something only big companies would use", not "Something we can ignore if we don't use".

Now, fair enough, this is absolutely an outlier - in most cases extensibility is regarded favourably. But, in this edge-case the audience came away with the primary impression of "Great Development Tool", not "Great User Product"[1].

Humans still have this notion that a thing has a primary purpose and multiple secondary purposes, and they'll absolutely go with the product that has, as its primary purpose, satisfying their need, even if some other product's secondary purpose also satisfies their need.

[1] And yes, this was a failure of the pitch. For future pitches, I'll tailor to the audience, highlighting their needs and ignoring anything that the product does which isn't in their list of needs.


>The reason the competition in a field has converged on all the same features, all looking the same and acting the same is because they were all shaped by the same market forces: that's what the businesses wanted!

By this logic, we all want AI. We're screaming out for Microsoft/Amazon/Google to make all their services AI-driven.

I'm sure some customers do want AI, but mostly it's their investors.


> By this logic, we all want AI.

With AI, specifically, it's way too early to say that products with AI in it were shaped by market forces.

It takes years for the forces of the market to have an effect on what the product looks like:

1. Purchasers have to make poor purchases, which isn't known until years later.

2. Sellers have to get feedback from rejections to determine what to refine, and how, which also takes years for most products.

3. Sellers have to run out of money when ignoring the signals, which also takes years.

So, sure, maybe extra AI in $PRODUCT isn't wanted by the majority of people, but we won't actually know what forces the market for $PRODUCT is exerting until much later than 2024.

EDIT: web3 was so obviously unwanted, and yet it took about 4 years for that to reflect. It is not so clear about AI, so we can expect that to take longer to establish.


That’s what I was trying to express. We implemented the features that were unique and interesting too late. We should have led with them and built some compelling demos.


Kind of like what the iPhone did when it launched without copy-paste and with a very underwhelming GSM radio.


Yeah, that original iPhone launch keynote is something I like to watch every so often. It really emphasized “we’re only doing this because we see what’s wrong with smartphones, and this is our vision”. They didn’t go “we made the best stylus you can have”.


I feel this way about all wrapper code

where someone is like 'okay we need our own internal version of this API'. the reasons vary but are like 'we don't trust people to use the official API correctly' (which, fine, but yours is less standard and worse-documented).

sometimes the answer is 'we need extra functionality', which, fine, but 1) don't wrap the entire API for God's sake, just add the 3 functions you need, and 2) your codebase over time will end up 99% polyfills
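
for instance, a hypothetical sketch, using the real `requests` library as a stand-in for 'the official API':

    # Instead of wrapping all of requests, add only the helper we need.
    # Everything else stays plain requests, documented upstream.
    import requests

    def get_json(url, **kwargs):
        # The one house convention we actually want: a sane timeout,
        # raise on HTTP errors, decode JSON.
        resp = requests.get(url, timeout=10, **kwargs)
        resp.raise_for_status()
        return resp.json()

callers keep importing requests directly for everything else, so the official docs still apply.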

point being if you are not using the defaults you are inflicting great misery on whoever inherits your codebase


With such wrappers, I think it helps to have an interface for interacting with the underlying library directly.

For example, there is the ziggy-pydust library for writing native modules for Python in Zig, and it certainly looks prettier than the usual Python.h import. Still, there is also .ffi to call functions directly that are not yet wrapped but are available in Python.h.

If such an option is not available, one often wants to replace such a library and prefer the original one.

However, even this is in a way a wrapper (a native module), and sometimes it is better to use ctypes directly, for example, for the sake of speed of development.


Great post. Definitely some great lessons in there.

It's missing a key ingredient though...

I see this as any other competing alternative to any kind of product. It's like saying Amazon failed because they didn't have a brick-and-mortar book store like people were used to... obviously that's not the case.

The reason all these jit-alts failed and were in constant catch-up is that, in practice, most developers of x-language don't care that much about JIT.

Or more importantly, they care more about language features and interoperability than they care about JIT. This is why a product that "joins them" rather than competing wins: as a competitor, you are not offering more stability and/or interoperability.


Having worked on languages and compilers for many years, I thoroughly enjoyed this post.

Another way to phrase the same idea: a language is _a lot more_ than just compile speed.

Compile speed is very important, in the top 10 dimensions, for sure. Especially because improving compile speed increases the developer feedback loop clock speed which helps the core team improve all the other dimensions faster.

But there are still >30 other dimensions that are very important in a programming language.


The post seems to be about execution speed, though. But even there it's definitely not the #1 factor, as witnessed by the popularity of CPython...


I think the takeaway of how to succeed with this kind of project is good, but a missing element for why so many kind of never take off, which I haven’t seen mentioned yet, is that the compatibility of these alternative implementations is in practice often lower than claimed, even for old language features. E.g. it’s very common for Ruby and Python apps to have some native C extensions somewhere in their dependencies, and AFAIK the major alternative implementations have never supported them. (They’ve tried sometimes, but it’s never really worked out, for obvious technical reasons, and the alternative of expecting libraries to provide multiple implementations has a rocky history as well.)

Combine that with the fact these languages are often used to make CRUD websites where I/O is likely to be a bigger performance factor than CPU and the faster alternatives look a lot less appealing.


This is a really excellent post. I love love love the sociology of technology.

Maintainers of a language implementation would like to have maximum flexibility in designing and shipping new features that benefit their users. They don't want to be hamstrung by having to get multiple implementations to agree before they can get a feature out the door. As a case study, look at how glacially slow JavaScript evolution was for many years. And look at how comparatively fast TypeScript evolves.

At the same time, alternate implementations can be a signal of the robustness of your language ecosystem, so there is an upside too. And if the alternate implementation is really good (even if just for some subset of users with niche requirements), it can be a real value add for your ecosystem.

So if you're a language designer/maintainer, you might not be actively hostile to alternate implementations, but there are downsides. And, for the most part, the feedback you'll get from users will be towards encouraging you to ship new features and evolve the language. You won't get too many people asking you to slow down so that PyPy/IronRuby/LuaJIT/etc. can keep up.

For a language consumer, choosing which implementation to build on is often a choice where the biggest priority is safety and stability. No one wants to find themselves sitting on a million-line codebase that happens to subtly rely on the behavioral quirks of some languishing alternate implementation created by a very bright PhD student who has since moved on to other projects. So there's a very strong positive feedback loop where users tend to go to the most-used implementation, which then encourages other users to go to that implementation, which encourages... The network effects are quite strong.

The end result is that unless there are quite strong forces pushing in the other direction, most languages tend towards a single canonical implementation. There's a good argument that that's a good thing: it means that almost all engineering effort put into implementing the language benefits all users instead of it being divided among a number of separate implementations. The downside, of course, is that the implementation can get stuck in a local maximum.


LuaJIT is mentioned, but I think it also demonstrates that this is not always the case. Lots of people and projects selected LuaJIT over Lua as a deliberate choice.

Came to the comments to post just this. LuaJIT is enthusiastically used in the TeX community, and the luajittex executable is readily available alongside luatex.

Another less popular take is that people (yes me too at times) should check their ego. It can definitely be easier to make "your own" parallel project vs contributing to an existing open source project, but ask yourself who you're serving. Is it the project maintainers? The project itself? The users? Your own ego? Hint: if that last suggestion makes you mad it's probably in play ;-)

Adding a JIT to an existing language is a major undertaking, so the canonical implementation will have a high bar for acceptance. But you should probably still aim for that. Forking or rolling your own can provide freedom to do big things too, but that should often be regarded as temporary.

If your goal is to show what you can do, it probably won't go far. If your goal is to make something better for its own good, you'll learn to work within others' constraints.


Surprisingly, I have the opposite opinion.

All the people that preach about a single project, contributing only to the canonical project, and having no alternative implementations need to check their egos, learn about the existence of multiple approaches, and learn why competition is better than monopoly.

As I see it, the article shows how Python, Lua, and Ruby failed many people by choosing this approach, causing thousands of devs and millions of users to have slower development and software. Not because it's not possible, but because administratively there's no incentive to do it.


At least in the case of Python, it's not obvious it's slower overall. PyPy has always had issues interfacing with C-extensions, and for lots of cases using something built on C-extensions is not only faster but easier to use (see the scientific ecosystem built on numpy). If core parts of your ecosystem don't run, or perform worse on your "faster" alternative, then don't be surprised it isn't widely adopted.

The reason for the “issues” is that the Python/C API, as opposed to the Python language itself or e.g. JNI, has always been considered an implementation detail of CPython specifically; but because CPython is so dominant, essentially everybody tied themselves to it. That is, it’s once again a monoculture problem. (Then of course e.g. Unladen Swallow predates the wide adoption of NumPy but failed regardless.)

I would suggest the C API is just as much a part of Python as the standard library, and numpy (and swig) predates Unladen Swallow (see e.g. https://stackoverflow.com/questions/2328267/unladen-swallow-... for people trying to get them to work together).

I don't think it's surprising that micropython/circuitpython have had more success, in many ways they are a port to an environment where the existing libraries wouldn't be used, so they can at some level build their own ecosystem.


> numpy (and swig) predates Unladen Swallow

Sure, but that doesn’t contradict what I said. Numpy is surprisingly old, but in 2010 (my impression was that) supporting it was in no way table stakes for an implementation of Python. Hell, Jython was still occasionally taken seriously then, and Microsoft still pretended they cared about IronPython.

> I would suggest the C API is just as much a part of Python as the standard library

As things stand, yes, no question.

The problem is, it was never designed that way. In particular, it cannot (or at least is not designed to) support any other memory management strategy but CPython’s, thus efforts like HPy[1]. For example, while the language docs admonish you not to depend on __del__ running at any particular moment, if we consider the Python/C API a part of the language, then the language has to behave as though it uses naïve reference counting and occasionally runs a cycle collector. And that’s the easy part—think about how deeply the GIL is rooted there.
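
A concrete illustration of the __del__ point (a deliberately naive example, not a pattern to copy):

    # Code like this is everywhere in the wild, and it implicitly relies
    # on CPython's reference counting: f's refcount hits zero when the
    # function returns, so the file is closed immediately.
    def read_config(path):
        f = open(path)
        return f.read()

    # On a tracing-GC implementation (PyPy, for example), f may stay
    # open until some later collection: observably different behavior
    # that real programs end up depending on.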

That is not good design for something that’s part of the language, as it effectively is. It was never intended to be and never received commensurate amounts of care. It just turned out that way—and ain’t that a sad thing to admit.

[1] http://hpyproject.org/


I think that even back in 2010 (which, ironically, is when I was getting into scientific Python to replace MatLab scripts for labs), numpy was table stakes for any implementation that wanted to be used in the scientific Python communities (note that these started in the 90s). Whether the scientific Python communities were big enough then to prevent Unladen Swallow succeeding I don't know, but it would be interesting to compare the Python C API, HPy, and the C APIs (if any) of Perl, Ruby, and Tcl (given that PDL existed but seemingly failed to be adopted, and Tcl was widely used in science software but also seems to have died out), and see if there was anything about Python's that caused it to be adopted so widely (my feeling is Java's lack of scripting and its baroqueness ruled it out no matter the state of JNI).

> All the people that preach about a single project [...] need to [...] learn [...] why competition is better than monopoly

Then tell me: why is competition better than monopoly for these [!] people?


Pff, sometimes the official things are just completely cursed and you need to create your own little garden. Even if it does turn into twenty acres and a crop-rotation schedule, sometimes that's better than a volcano.


This sounds similar to the situation with web browsers. In theory they're built on open protocols. In practice, whatever Chrome wants will eventually be where things go.


it's like this only because nobody except me seems to prefer Firefox


I don't use Firefox because it's Firefox, I use it because it's not Chrome.


I use FF because I want to browse the web without being signed in to everything.


Exactly: Firefox doesn't add anything I want (eg. better performance, useful features, more customizability), but it doesn't do some of the bad things Chrome does, so that's why I use it.


I had to stop using Firefox because it kept using all the memory on my machine. So I am using Edge now.


Oh, I use it as well, especially on Android where it has proper support for extensions.

Except that Chrome itself emerged as an "alternative implementation". We had Mosaic which lost to Netscape which lost to IE which lost to Firefox which lost (ish) to Chrome. And in the mobile space things are still kind of wild.

If anything, Chrome is an exception to the article's "Alternative Implementation Problem": what started as Konqueror, an alternative browser, eventually mutated into the most common desktop browser currently in use.


That may be, but I'll only find out what that is when Firefox implements it...


> If you want your Python software to be PyPy-compatible, you’re much more limited in terms of which Python features you use

More so, you are also limited in terms of which versions of popular libraries you can use, and it is much more cumbersome because of versioning hell itself.


Even where the language is defined by a standards committee, you end up with similar problems. In C and C++ the number of compilers has been rapidly dwindling; Intel’s suite is now LLVM based, as is Microsoft’s. Borland and others are a thing of the past.

C++ Builder just got a release last month, and even though it is now LLVM based, the whole set of extensions and Delphi interop from Borland days are pretty much part of the whole picture.

Also, another interesting tidbit: all those compiler vendors that leech from clang/LLVM hardly contribute upstream in regards to ISO C++ compliance, hence why nowadays clang is lagging behind relative to its initial velocity.


Huh? MSVC is now based on LLVM? Where can I read about this? Or am I misreading what you're saying

MSVC is not LLVM based, but C++ MSbuild projects can optionally use clang/LLVM as the C++ compiler instead of MSVC.

Microsoft is still actively maintaining MSVC


> positioning your project as an alternative implementation of something is a losing proposition.

Insightful, as I witnessed the problem caused by TiDB's compatibility with MySQL.


Is there any comment on Kotlin?

Kotlin succeeded precisely because it is its own language (though still interoperable with Java). If it were just a modified reimplementation of Java nobody would bother with it.

Kotlin succeeded because Google got in bed with Jetbrains for their IDE and programming language.

Without Google pushing Kotlin no matter what on Android, Kotlin would be yet another JVM guest language.


I think a similar thing happened with PHP. There was no JIT, then a JIT was merged into the mainline codebase, and now we all have JIT. Yay! Alternative implementations like HipHop VM from Facebook really never took off except for the rare engineering org that wanted to experiment with it.

> New versions of CPython come out regularly, always adding many new features, and PyPy struggles to keep up, is always several Python versions behind. If you want your Python software to be PyPy-compatible, you’re much more limited in terms of which Python features you use, and most Python programmers don’t want to have to think about that.

Nothing could be further from the truth. A small minority, in any language, not just Python, will use the most recent version. In any language community, the most used version lags several iterations behind the most recently released one. In a decade and a half of working with Python, changing plenty of companies and positions, I've never been in a situation where I was using the newest version right after it was released. Not for the "production" environment, anyway.

As for why PyPy is not popular... if I had to guess, it's that Python is a first language for many programmers (i.e., its audience is mostly not very knowledgeable in general, and will struggle to install Python at all; installing a special kind of Python would be too overwhelming for them). Python is also a popular language with people who aren't professional programmers, e.g. researchers and statisticians. In many places where Python is used, its speed makes no difference, even when it's used by professional programmers (e.g. various kinds of automation). So, there's no particular reason to choose PyPy.

To top this off: Python is a common language for OS scripting / automation, where interpreter startup time may be more important than the performance of the script.

PyPy mostly doesn't support native Python modules, but even when it does, it offers no benefits in so doing.

Finally, there's a low-hanging fruit for any Python programmer who makes Python packages: just run them through Cython. This alone gives a significant boost to speed (and some memory savings). Virtually nobody does this. Which is, imo, an indication that speed isn't important. (Also, fighting Python infrastructure and trying to understand the packaging process isn't for the faint of heart, which underscores my previous point of general low competency on the part of Python users).
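
For anyone curious, "run them through Cython" is roughly this much work. A minimal sketch, where `mypackage/core.py` is a placeholder for your own pure-Python module:

    # setup.py: compile an ordinary .py module to a C extension.
    # No type annotations needed; Cython accepts plain Python as-is.
    from setuptools import setup
    from Cython.Build import cythonize

    setup(
        name="mypackage",  # placeholder name
        ext_modules=cythonize("mypackage/core.py", language_level="3"),
    )

Build with `python setup.py build_ext --inplace`; the resulting extension module is imported exactly like the original .py file.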

----

Bottom line: PyPy doesn't offer anything of value to the overwhelming majority of Python users. It would've been a headache to work with even where there might be potential benefits. So... it's not used. Python's release schedule is laughably bad, and new features are laughably bad... well, worthless for the most part... but they aren't the reason PyPy isn't popular.


If I learned anything from my startup days with Tcl, writing extensions in C, it was to never again use a programming language for production code that doesn't have a JIT/AOT story and keeps pushing a dual-language approach instead.

I only use Python for portable OS scripting tasks because of that, if they don't need to be portable, it is either UNIX shell/Perl, or Powershell.


But what about Clang and gcc?

The moral here seems to be that people will not switch to "the same but better/cheaper". As a consequence, being competitive does not work.

To what extent is this an indictment of capitalism and/or free market ideology?

The linked Thiel speech video is interesting. It says that the aim of a capitalist is not to compete, but to create a monopoly. This is starkly against the interests of the consumer, and we see the results of that today with the enshittification of modern tech.

Are we at a stage where the capitalist elites themselves are freely admitting that the system works for them and against the general public?


It isn't an indictment of any philosophy or ideology, it's an indictment of the thinness of the analysis that says anything is the "same but better/cheaper". It's never the same. PyPy is not Python, for instance. There's always expenses on the ground that weren't visible from the 30,000 foot view.

One of the problems that arise from things that try to be the same is that by definition the value of switching can't be that great. So it doesn't really take much cost to derail the process. Things that are radically different may have significant advantages that make it worthwhile. If you are going to make something "the same" it needs to be very very the same. Such things do sometimes happen, e.g., the competitive for-pay JVM space.

This is true for things like programming languages and runtimes, which are always intrinsically huge. It doesn't apply to which brand of bottled water you may prefer because those are quite interchangeable for no cost.

This is one of the reasons business people are always interested in the features that differentiate their product. If you want someone to pay the cost to switch you need to bring them something more than just "it works as well", in general.


> There's always expenses on the ground that weren't visible from the 30,000 foot view.

I think this is almost what the article's saying, but not quite. The point is that even if you do make something exactly the same but better than a canonical product, you're then trapped into playing catch-up with all developments in the canonical product, and have little to no directional control.

> It doesn't apply to which brand of bottled water you may prefer because those are quite interchangeable for no cost.

You might be right, but I can see analogous arguments applying even to the bottled water space. Imagine I create something that is "Evian but cheaper". When Evian adds a feature, such as reducing the amount of plastic per bottle cap or improving its mineral content, I have to match it in order to keep growing.


I agree that I'm saying something the article isn't quite saying.

I've been grappling with similar questions for a while. I've been focusing less on the nominal replacements like the author did, and looking more at our general inability to create a programming language that is "like" another language, but just generally improved. C++ is almost literally the last example that really took off. Python 3 was something like what I'm talking about, and while Python survived Python 3, I'm not sure I can call it a success.

But we could really use "like X but modern" in quite a few places. I'd love to see a modern dynamic scripting language that is a lot like the current ones but, for instance, was designed from day one to work with threading somehow. In this case I'm not referencing async versus thread debates, and I'm not talking about how it gets exposed to the users; I'm talking about the difficulty that 10-20 year old scripting languages still have to this day using multiple processors in any reasonable way. And maybe we've learned we can get most of the benefits of dynamic scripting languages without being quite as dynamic as the current crop is, so maybe instead of a 10-40x slowdown we could be looking at another 2x or 3x speed increase over the current crop with only minimal feature loss. And a few other things.

Various such languages arguably exist, but they are starved of oxygen.

As for the bottled water point, if you look in sufficient detail you can eventually figure out why this Evian bottle is better for you than that one. No two things are identical. But if you can't see how the differences between two brands of bottled water and two entire language implementations are a sufficient difference in quantity to be a difference in quality... several times over, honestly... I don't really know what to say.


I believe the author is just being pessimistic. There are plenty of examples where an alternative implementation won. Sometimes to the point that the original isn't even remembered anymore.

It's hard to be so much better that old users will want to switch. But once you are that much better, you steal all the users of the competition. So it could be worth a try, if you believe you can be that much better. (Consider the original UNIX and a bunch of stuff related to it, such as compilers, editors, etc., many of which found their reimplementations in Linux.)

But not only that. Some projects live well in the shadow of the "canonical" twin. Take Cassandra and Scylla. The latter is alive and well, even though it may not be as popular as the former.

Speaking of Python. Anaconda Python is an alternative distribution of python.org Python. They aren't going to steal the spotlight from the "canonical" Python any time soon, but they won't go away either because of some compelling features their competition doesn't offer.

Honestly. I don't believe there's a pattern. The deeper problem here that I see is the lack of competition. The fact that every company is trying to build something that isn't an alternative, but a different thing altogether leads to all the different variants of what would've been essentially the same thing being garbage: because nobody has the capacity for substantial development and testing.

We have hundreds of thousands of custom-made e-commerce shops, all of which suck. Having e-commerce shop makers compete in the same category, rather than each in a category of their own, would have created a high standard for the user experience of shopping online.


That is not capitalism; that is state socialism, aka fascism, as a state is required to enforce a monopoly. Being rich does not make him smart or correct.

That might be correct when it comes to a strict monopoly, where a state provides exclusive rights to one company. Thiel is talking more loosely about tech firms with large market shares, such as Google or Microsoft. Such corporations typically maintain their market share through a mixture of mergers & acquisitions, product bundling, consumer manipulation, and government lobbying. Only the last of these requires a state. Similar arguments apply to other industries with a small number of large-market-share players, such as oil, airlines, music events, etc. A lot of these industries operate at a global level, without being dependent on regulations from any individual state. See for example: https://www.upi.com/Top_News/Opinion/2015/03/26/The-Kraft-He...

> a state is required to enforce a monopoly

There exist many kinds of monopolies; only for some of them is a state necessary to enforce them.


> as a state is required to enforce a monopoly.

Citation needed.



