
How often do superior alternatives fail to catch on? - deafcalculus
https://lemire.me/blog/2017/11/24/how-often-do-superior-alternatives-fail-to-catch-on/
======
jasode
Multiple dimensions.

The common reason people get confused about "superior" products not winning
boils down to ignoring the _multiple dimensions_ of quality. A product's
_overall_ "superiority" is a single score that compresses the scores of the
product's various attributes. If one is ignorant of (or discounts the value
of) the other dimensions, he will be perplexed why the supposedly "inferior"
solution won.

E.g. I never understood why bicycles didn't beat out cars. Bicycles are
obviously superior because:

\+ narrow profile can squeeze through tight alleys or even heavily treed
forests that cars can't reach

\+ requires no fossil fuel that emits pollution

\+ costs less than 1/100th the price of a car

\+ easily repaired by homeowners in the garage because no computers

\+ etc, etc

That fixation on those attributes causes the confused person to totally miss
the _other positive attributes_ of the car:

\+ travel faster than 25 mph with minimal physical exertion

\+ a typical car can carry 5 adults, which is ~1000 pounds of weight

\+ carry an entire week's worth of groceries (~10 bags)

\+ occupants don't get wet when it rains

If one doesn't understand _all the dimensions_ of quality and weigh them in
an objective manner that's detached from personal preferences, he'll
always be confused why "superior" USENET lost to "inferior" Reddit, or why
"superior" Lisp lost the popularity contest to "inferior"
C++/Java/Python/Go/etc, or why Mac OS 9 lost to Windows NT.

Likewise, there were multidimensional factors to Betamax vs VHS and "picture
quality" was only one of them.

~~~
baddox
I agree with everything you said, but you can also go too far in that
direction of analysis. If you let the empirical gain in traction sneak in as
a factor in your analysis, you effectively have a tautology, and you lose
all predictive power, similar to the misuse of the phrase “survival of the
fittest,” where you conclude that variant A was “clearly” more “fit” than
variant B because A survived and B didn’t.

It’s still important to recognize that a variant might “win” because of
factors completely unrelated to its “fitness,” like an organization using its
market power in an unrelated sector to promote its variant, or even just good
old fashioned dumb luck.

~~~
doiwin
So how do we decide if qwerty or dvorak is fitter?

~~~
bobajeff
I've often wondered why alphabetical order isn't the default on phones,
given that most of the population that uses phones didn't learn to type.

It really bothers me actually.

~~~
MereInterest
People know the alphabet as a one-dimensional object. Unless your keyboard has
26 keys all in order, the multiple rows don't correspond to an existing mental
model. At that point, you can either choose qwerty, which pleases anyone who
already knows it, or you can make a sort-of alphabetical layout, which pleases
nobody.

I'd be curious about your statement that most people who use phones didn't
learn to type. Is that for a particular age group? Or for the developing
world, where phones are more common than computers? I'm having difficulty
seeing the justification for the statement.

~~~
bobajeff
If you know the alphabet you can easily guess where the keys will be (in an
alphabetically ordered keyboard).

Do you believe that most people who use phones know how to type?

~~~
MereInterest
If the keys are laid out in a 1x26 keyboard, yes. If the keys are laid out in
a more usable way, then no. If I can see the key 'j', I cannot guess whether
'm' will be to the right or left of it without knowing the length of the row
and keeping it in mind at all times.
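
That point can be made concrete with a toy sketch: in any multi-row
alphabetical layout, a letter's position depends entirely on the (arbitrary)
row width, so knowing the alphabet alone doesn't tell you where a key sits.
The layout here is hypothetical, just letters filled left to right:

```python
import string

def position(letter, row_width):
    """(row, column) of a letter in a hypothetical alphabetical grid
    that fills rows left to right with `row_width` keys per row."""
    index = string.ascii_lowercase.index(letter)
    return divmod(index, row_width)

# Where 'm' sits relative to 'j' changes with the row width you picked:
for width in (7, 9, 10):
    print(width, position("j", width), position("m", width))
```

With a 7-key row, 'm' is three keys to the right of 'j'; with a 10-key row,
it isn't even on the same row.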

I would describe today's society as one in which typing is necessary for
basic tasks and is nearly universal, so I don't see why smartphone users
would be different from the norm in terms of typing ability. Are you using
"typing" to refer only to "touch typing"?

------
aaavl2821
There is a drug used to treat a form of eye disease that causes blindness.
The product, Eylea, has dominated the market despite (until recently) having
no evidence that it improves vision vs the competition, no significant
improvement on other relevant clinical endpoints, no major safety benefit,
and a price comparable to or higher than all competitors'

A development-stage product had shown better potential for vision
improvement without major safety concerns, but when I interviewed physicians
to ask whether they'd still prescribe Eylea if this "better" drug were
approved, they unanimously, without hesitation, said they'd prescribe Eylea

Why would doctors treat eye disease with a product that does a worse job of
treating eye disease? Why would patients accept a treatment that does not
optimally restore their vision? There was nothing in the clinical literature
to suggest this made sense. This must be a case of a large, established pharma
company playing dirty, maybe bribing doctors or brainwashing patients with tv
ads

The reality is that Eylea requires fewer doses than the competing products.
For this drug, dosing means an injection in the eye with a big needle.
Further, physicians don't get paid more for doing an additional dose, so for
them it's lost revenue. It turns out the incremental vision improvement does
not offset the benefits of less frequent dosing

Getting this insight is not a feat of data analysis or scientific
brilliance; it simply takes talking to customers. And getting this right, in
this case, means winning a $4-5 billion market

------
codingdave
As people have said already, you need to define what criteria you are using
when you say something is "superior". On HN, there seems to be a divide
between engineers who measure a product by its engineering quality vs.
business folk who define product quality by its market success. They are not
always the same thing. The reason "inferior" products succeed is that they
may have some killer features desired by the market, or better tech support,
or better sales and marketing -- in other words, the business quality beats
out technical problems. Ideally, you'd have high quality in both areas, but
that isn't reality in most organizations.

~~~
baddox
But you’re dangerously close there to having no predictive power. Business
folk may rightly judge a product by its market success, but some business folk
need to decide where to invest resources in building _new_ products. Hindsight
is 20/20, and of course products that are widely viewed as having poor
engineering quality can succeed and even dominate a market, for a variety of
reasons.

But that doesn’t provide much evidence to support the claim that, when
developing a new product, engineering quality doesn’t matter. Sure, we can
take the fatalistic view that new product development is an unguided process
like biological evolution, where we will only know which random mutations were
successful when we observe their prominence many generations down the line.
But clearly most people act as if they believe they can do better than uniform
random (i.e. investors generally don’t invest the exact same amount in every
new company or product they can find).

~~~
aaavl2821
You typically don't try to "predict"; you just learn about customers' wants
and needs and solve for those. Why try to guess what attributes are
important in a product when you can just ask / test?

~~~
baddox
As far as I can tell, product development is still guided, at the bare minimum
by the ability of humans to predict other humans’ preferences based on an
assumption of similarities between humans’ minds. I’m not aware of any
product development that is intentionally truly random, other than perhaps
some details like the color of a button.

~~~
codingdave
Are you saying that talking to your customers and solving for their needs is
"random"?

------
ilaksh
People usually do confuse popularity with merit. They are, in fact, two
completely different, and often fairly unrelated, characteristics. The
keyboard that happens to have the most market share is the most popular one.
It is not the superior one.

Start with the example of the top ten most popular pieces of music at any
given time. Do we generally conclude that these are the most meritorious
musical
compositions of the moment? Or are they popular because they happen to be
catchy, or just raunchy enough to play on our base desires but not enough to
be censored, or because the distributor has a deal to continuously play the
song on the radio?

QWERTY is still popular mainly just because of momentum. At the time it came
out many decades ago, it had some nice utility. But clearly the core idea of
slowing down typing became outdated long ago. And when it comes down to it,
learning a new keyboard is not easy.

Switching to a new way of doing things does not depend on having a better
way. I believe it depends on some type of social networking effect and
chance. For example, if several celebrities suddenly decided to create a
Twitter campaign about the evils of QWERTY and the incredible qualities of
Dvorak, and then it for some reason came standard in a hot new electronic
device that happened to be trendy among teenagers, perhaps we would see a
significant market switch.

This is one thing that I have noticed about some posts on HN. People will come
on and say that their startup failed, and then come up with a list of
rationalizations blaming various material qualities of their product or
service. In most cases I believe this is quite an incorrect interpretation of
events. They usually have a perfectly good or often superior product, which
simply did not catch on. So I think that things like marketing are quite key
to becoming popular, but again, being popular or not doesn't prove or
disprove the quality of the product.

~~~
donatj
The quantifiable improvement and the cost-benefit tradeoff are also
important. Dvorak shows on average a 1-2% benefit to typing speed, which is
basically not worth the time and effort lost to learning a new layout.
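
A back-of-the-envelope version of that cost/benefit claim (every number here
is an assumption, not data):

```python
# All inputs are rough assumptions for illustration only.
hours_to_relearn = 100        # assumed retraining time for Dvorak
typing_hours_per_year = 500   # assumed annual time spent typing
speedup = 0.015               # midpoint of the quoted 1-2% benefit

hours_saved_per_year = typing_hours_per_year * speedup   # 7.5 hours/year
years_to_break_even = hours_to_relearn / hours_saved_per_year
print(round(years_to_break_even, 1))  # → 13.3 years at these assumptions
```

Different assumptions move the break-even point, but at a 1-2% speedup it
stays on the order of a decade.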

~~~
baddox
I don’t even think that learning a new layout is the dealbreaker. I think the
dealbreaker is that, no matter how you work, you will occasionally need to use
a computer with a standard keyboard layout. It may not be very often at all
(the last time I used a random computer was an on-site programming interview
with the company’s PC, and that was a text editing disaster despite my
familiarity with QWERTY, though I still got an offer), but it’s probably often
enough that it’s not worth the small potential gains of another keyboard
layout.

Of course, now that I present that argument, I realize it could be applied
to text editor shortcuts and plugins, which I have customized extensively on
my main development machines. That said, I almost never do any significant
coding on random machines, so the cost/benefit analysis might work out in my
favor (I certainly behave as if it does). I also remap all my Caps Lock
physical keys to Control/Command, which is pretty common among programmers but
very rare among non-programmers. That’s another case where my gut says that
the small rare inconveniences of using random computers are outweighed by my
regular productivity increases.

However, I don’t have any actual data on this, so it turns out my argument in
the first paragraph of this comment is really just a guess. I do intuitively
feel like QWERTY to Dvorak is a much more significant alteration than Caps
Lock to Ctrl or some customized hotkeys in my text editor of choice.

~~~
zZorgz
People can also cope with switching between different setups if trained well
enough in both, which should not be too surprising. E.g.: shortcuts in vim
vs native editor shortcuts, scroll direction, a one-key swap... It’s even
possible for full-blown keyboard layouts. I type using both QWERTY and
Colemak on a regular basis. I haven’t tested how long I can go without a
layout and still retain it, but forgetting is harder than relearning in that
case.

------
CM30
It depends what you mean by superior.

If you mean superior in a technical sense (which unfortunately seems to be all
a lot of startups and engineers focus on), then obviously there are tons of
instances where 'superior' alternatives fail to catch on, simply because the
'superior' alternatives lack the non technical benefits of the older products
or services.

Like how a new social network might be more decentralised and censorship
resistant than Facebook, but lack the community/userbase that makes Facebook
valuable to begin with. Or how a CMS might be better coded than WordPress,
but have a UI that people find more awkward to use (or an install process
that's overly tedious/annoying for non-technical folk).

In that sense, a lot of superior alternatives fail to catch on simply because
for all the 'objective' improvements they make, they just don't do what people
need them to actually do. A community site or social network doesn't need
the best codebase possible, a long laundry list of features or a
censorship-resistant setup; it needs a big enough community/userbase to get
people invested in it.

Of course, even when all factors do line up... well, that doesn't exactly
guarantee the alternative will succeed either. People aren't robots, and
don't choose every action to be as rational as possible. The difference
between two
competitors can just as easily come down to pure luck, some perceived
emotional connection, timing or anything else, not necessarily the quality of
the product or organisation behind it.

------
Pica_soO
Often, because one factor never appears in the provider's calculation:
training time. If a superior tool comes along that makes it necessary to
retrain to do the same work, it's not superior; it's inferior until the gain
is big enough to offset the productivity loss visible to the customer.

That is why even with superior software it is necessary to add a "legacy"
user interface that allows for an easy switch from the old software, while
at the same time retraining people to use the new interface. This changes
the transition costs.

------
btilly
There are lots of examples. For example, a base-12 number system is superior
to a base-10 one. Esperanto is strictly easier to learn than English, and so
would make for a better lingua franca than English does.
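
The base-12 claim usually rests on divisibility: 12 has more small divisors
than 10, so more everyday fractions (thirds, quarters, sixths) terminate
cleanly. A quick check:

```python
def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))  # → [1, 2, 5, 10]
print(divisors(12))  # → [1, 2, 3, 4, 6, 12]
```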

What doesn't happen very often is that the market chooses a worse technology
when we each can individually benefit from picking the better one.

But the definition of "better" has to be right. Often things that are worse on
one axis are better on another. See
[https://www.jwz.org/doc/worse-is-better.html](https://www.jwz.org/doc/worse-is-better.html)
for a famous essay about the difference between what makes software good
versus readily adopted.

~~~
nayuki
Is Esperanto really easier to learn than English? I'm basing this on
[https://en.wikipedia.org/wiki/Esperanto#Grammar](https://en.wikipedia.org/wiki/Esperanto#Grammar)

English uses 26 un-accented letters, which is easily the lowest common
denominator for typewriters and computer text systems. Esperanto has 28
letters with one type of diacritic.

Esperanto has word inflections for case (e.g. subject vs. object) and more
verb tenses than English. Are inflections really easier to use than having
auxiliary words or relying on word order?

I think English became a lingua franca for a reason: it has far fewer
grammatical and orthographic features than other European languages like
French, German, etc. I think Esperanto has more of the complex features of
European languages than English does.

~~~
btilly
For years the lingua franca was French. (In fact lingua franca is an Italian
phrase meaning "French language".) And based on experience, I believe that
French is easier to learn.

The transition to English only happened after the British Empire became
economically and politically dominant. It retained its importance because as
England declined, the USA became one of the two world superpowers.

The big win of English is not that it is easy to learn. We often have parallel
words for the same thing (eg earth vs dirt). Words that would be related in
any other language aren't (eg cow vs beef). Attempt to read
[http://www.i18nguy.com/chaos.html](http://www.i18nguy.com/chaos.html) aloud
if you need further convincing.

The big win is that the people you most want to talk to already speak it.

------
VladimirGolovin
A related thought based on my recent experience.

One of my teams is currently using a mesh colorization system for our game,
which we developed in-house. E.g. we have a game level where each mesh is
assigned some color ID, such as "primary accent" or "background1" (but not an
actual RGB color); and there are color schemes which we can apply to the level
according to these color IDs to get different looks.
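
The indirection described above can be sketched in a few lines (all names
here are illustrative, not the team's actual code): meshes carry an abstract
color ID, and a scheme resolves IDs to actual RGB values.

```python
# Hypothetical data: meshes mapped to abstract color IDs, not RGB.
mesh_color_ids = {"tower": "primary_accent", "ground": "background1"}

# Color schemes resolve those IDs to concrete RGB values.
color_schemes = {
    "sunset": {"primary_accent": (255, 96, 32), "background1": (40, 20, 60)},
    "winter": {"primary_accent": (120, 180, 255), "background1": (230, 235, 240)},
}

def apply_scheme(scheme_name):
    """Resolve every mesh's color ID through the chosen scheme."""
    scheme = color_schemes[scheme_name]
    return {mesh: scheme[color_id] for mesh, color_id in mesh_color_ids.items()}
```

Swapping the scheme name restyles the whole level without touching any mesh.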

The first version of the color scheme system was "clearly superior": it
offered a lot more possibilities for assigning color schemes to meshes, but it
needed about 80 color values for each color scheme.

The second version of the same system was "clearly inferior" -- just 28
color values, and much less creative control over the resulting look of the
level.

Guess which system was adopted? Right, the second, simple one. Despite
offering fewer creative possibilities, the second system was smaller and thus
could fit more easily into artists' heads, which produced better-looking
results overall.

To sum up, "superior" is multidimensional. Dvorak can offer better typing
speed, but it offers much less "habit compatibility" with existing keyboards
in the real world, and thus is a worse time investment. Or, Haskell offers
much better maintainability, refactorability and reliability, but dynamic
languages offer faster time-to-market and easier-to-hire coders.

~~~
snuxoll
If you’re a touch typist, physical keyboard layout doesn’t matter too much.
Beyond ergonomics, at least.

On that note, I’m loving the increasing number of games that read the OS
keyboard layout and display appropriate key mappings instead of assuming every
gamer uses QWERTY (some games do even worse and don’t use the key code but the
mapped character forcing me to rebind keys or switch layouts).

------
zimbatm
For programmers, QWERTY is actually superior to Dvorak, in my opinion. The
US keyboard variant more specifically, as most language designs have been
influenced by how reachable each punctuation mark is in that specific layout.
And with auto-complete, the alphabet placement is not that important.

Actually, if we were to rethink the keyboard based on frequency of usage, it
might make sense to place all the punctuation a bit closer to the home row on
programmer keyboards.
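
One could test that intuition by measuring punctuation frequency in a real
codebase before moving any keys. A rough sketch (the file paths are whatever
source tree you point it at):

```python
from collections import Counter
from pathlib import Path

PUNCTUATION = set("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")

def punctuation_frequency(paths):
    """Count punctuation characters across the given source files."""
    counts = Counter()
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        counts.update(ch for ch in text if ch in PUNCTUATION)
    return counts.most_common()

# e.g. punctuation_frequency(Path("src").rglob("*.py"))
```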

------
hyperpallium
When there's a new, far superior way of doing things, it _always_ catches on.

But usually there are several different versions of this new way, with
slightly different properties, trade-offs and non-product features (like
marketing, support, compatibility, price etc) - which of these wins out is a
crap-shoot. That's why some companies, investors and customers back different
versions... and the pragmatic ones wait for a market leader to emerge.

------
tw1010
To add a bit of insight to the conversation, beyond "it's a matter of more
dimensions than one", which people have already commented on. This whole
situation could also be seen from an economics perspective. Demand for a
product, let's say a can of soda, is called "elastic" when customers will
switch from one vendor to another purely based on price. If one vendor has a
lower price than another, all customers, in this idealized model, will head
directly to that place to buy their soda, leaving the more expensive one in
the dust. In this situation, however, demand for the thing in question,
keyboard layouts, is "inelastic". That means the factor we'd think would be
the only property of keyboards to matter to customers is not in fact all that
matters. Another thing that affects whether they will switch is how long it
will take to learn the new layout.

TLDR; demand for quality in keyboard layouts is inelastic. There are other
variables in play. One of them is the fact that there is a lot of friction (in
terms of setup, having to learn a new layout, etc) for customers to switch
from one layout to another.

------
eggie
It's true that there isn't necessarily much difference in typing speed
between QWERTY and Dvorak layouts for trained typists (even the same typist).
However, we might say that we're throwing the baby out with the bathwater
when we claim that Dvorak is not technically superior because weak studies
showed little to no marginal improvement in a few particular metrics.

Modern work is a marathon of text entry, and in light of this I believe that
what is technically important about a keyboard in the long run is not speed or
even accuracy, but ergonomics. I won't make any general claims, because it
seems like it's never been possible to study, but as an anecdote I can offer
my own experience. I used to type on QWERTY until I had to stop due to
repetitive strain. In the fifteen years since I switched to Dvorak, I have
never had a single day when I finished work feeling any pain in my hands,
whereas this was an almost everyday occurrence with QWERTY. I can type for
hours without stopping, without pain. The design of the keyboard makes it
pleasurable to type in English. Words roll off the fingers of both hands in
relaxing patterns. My hands move very little. It's nice, and I would suggest
it as an option to anyone who types or programs a lot and is having trouble
with their hands, not because I have any financial or personal stake in it,
but because it really helped me and I care about the health of my fellow tech
workers. I have seen a lot of people taken down by their hands, and I have
also seen many people try some crazy gimmicks.

In the same space, not considering the placement of the letters on the
keyboard, there is an even more absurd technical anachronism embedded in
almost every single keyboard on the market, including virtual ones on our
phones. The keys are positioned not in clean vertical rows, but offset as if
they have mechanical arms behind them. This is pure path dependence, and there
is no conceivable reason why we are stuck with it thirty years after
mechanical typewriters fell out of use except that those who learned on a
mechanical typewriter couldn't even imagine designing or testing a different
positioning of the keys. It's not just the weird zig-zag pattern of the
position of the keys that is anachronistic. Why should the backspace and
delete keys (which are so essential when we are typing in the flexible medium
of digital text) be relegated to the far corner of the keyboard? (TypeMatrix
presents an example of a modern reconceptualization of the layout. I'm not
affiliated but I do enjoy their products.)

To summarize, I think that this article presents a rather limited (and even ad
hominem) attack on the keyboard issue, with acknowledgment but little
appreciation of the degree of path dependency in tech development. How can
Dvorak be better if the research was flawed? This is not a complete answer.

Of course we are going to end up in suboptimal equilibriums, and together we
should appreciate this if we ever want to get out of them.

~~~
mrob
Reason.com is presenting a political argument more than a scientific one.
Their goal is to promote free markets, not to accurately decide the best
keyboard. Of course they'll cherry-pick data to avoid the appearance of market
failure.

~~~
asymmetric
_> Why is the keyboard story receiving so much attention from such a variety
of sources? The answer is that it is the centerpiece of a theory that argues
that market winners will only by the sheerest of coincidences be the best of
the available alternatives_

 _> Because first on the scene is not necessarily the best, a logical
conclusion would seem to be that market choices aren't necessarily good ones_

By the way, the reason.com article also goes into an explanation of path
dependence, which the grandparent mentions extensively.

~~~
mannykannot
The reason.com article inflates the significance of the QWERTY myth, and then
claims that debunking it debunks path dependence, which, of course, it
doesn't.

------
badsectoracula
While I agree with the sentiment that the supposedly superior product isn't
actually that superior all the time, the given examples are a bit
cherry-picked. For example, the Betamax vs VHS comparison is about image
quality, not length (and FWIW Betamax could record 2 hours, but it is true
that VHS, thanks to its size, could record more than that). Similarly, the
mammals vs birds comparison feels like a bit of a stretch - like calling a
tomato salad a "fruit salad" because a tomato is technically a fruit.

------
stretchwithme
I remember OS/2 was allegedly superior to Windows 95. But it failed because
there were far fewer applications for it. IBM had to pay Netscape to finally
port their browser to it.

I used to say OS/2 was HALF an OS, as it was missing a very important half
of the equation: all those annoying applications.

Plus I could never get it to stay installed on an IBM PC. I did it once, but
never could do it a second time. So that third half of the equation was pretty
weak too.

------
afpx
I recommend that people take a look at “Diffusion of Innovations” (Rogers,
1962) and all of the subsequent work since. It doesn’t directly address
usability and design, which I believe are also important, but it addresses
them indirectly.

[1]
[https://en.m.wikipedia.org/wiki/Diffusion_of_innovations](https://en.m.wikipedia.org/wiki/Diffusion_of_innovations)

------
chiefalchemist
Why? Because context matters.

That is, for example, just because Betamax had a superior Feature X doesn't
mean that Feature X matters to enough people. That feature exists in a
broader "ecosystem" of expectations + wants + needs, whose momentum can be
very difficult to redirect once it gets rolling.

Yes, a unique / superior feature _might_ be what wins you the market but it's
not always that simple.

------
jstewartmobile
Whether it's a computer or a bar of soap, most things have so many aspects
that even the experts can't agree, so A) All the damn time.

------
jondubois
I do think that superior alternatives will catch up eventually if they stay in
the game long enough... The problem is that sometimes the window of
opportunity can be suppressed for very long periods of time; sometimes several
generations... Maybe even several centuries.

Great one-time success can create a protective buffer which allows inferior
traits to persist through long periods of time.

------
mathgeek
> It is often said that birds have far superior lungs than mammals. So mammals
> are failures compared to birds…

I've never heard this argument once in my life, and Google doesn't come up
with too much discussion on it beyond a few articles discussing the
differences. This point struck me as a bit of a stretch, and the blog post
itself is probably stronger without it.

------
launchtomorrow
Microsoft became Microsoft because they were better at marketing/business in
the technology industry, not because they were better at building and shipping
technology.

Amiga workstations and Macs in the late 80s were way ahead of DOS's UX, with
its 640k RAM limit and poor CGA graphics capabilities. But DOS PCs became
the standard, and caught up with the graphics and multimedia capabilities of
the other two platforms 20 years later, once vendors could invest in
removing their technical debt.

In contrast, the Amiga died a slow and painful death because of
mismanagement and owner squabbles, even though Amigas were used for much
more than home gaming (real-time TV station visuals) even into the early
noughties. They were just much better than the alternatives, and ahead of
their time as a technical platform and home PC.

~~~
mannykannot
The importance of the selection of Microsoft to provide the operating system
for the IBM PC cannot be overemphasized. It was not the only thing that made
Microsoft what it became (the decision to make Windows for PC-compatible
computers, and to make decent business applications for it, were critical),
but without the first step, it probably would not have been in a position to
do what followed.

There is also path-dependence in IBM's decision to make the PC architecture
open (as an attempt to grow it quickly despite being late to the party), which
is what made Windows possible.

~~~
ghaff
You touch on something that's a very important point: path dependence. There
are a ton of both de jure and de facto technology standards that we are locked
into at some level because they gained dominance at one point because of some
combination of tech, marketing, industry dynamics, and more. Once Ethernet,
USB, x86, or whatever gets established it's hard to kick it out for something
else that is locally better.

This has become increasingly the case in much of the tech sector because of
the increased importance of interoperability, ecosystems, and network
effects. Conversely, you do see fairly rapid switching away from online
properties that aren't sticky when something new and shiny comes along.

------
raverbashing
I agree

"Superior" has to be followed by: "superior to whom?" and "superior in what
aspect?"

If X is "better" than Y but X does not do what Y does, then it's not
superior, because it won't solve the problem.

------
anotheryou
UI has the big problem of power users vs casual/noob users, and everyone
starts as a noob. There are good compromises, but usually the power-user
tool is harder to get into and more efficient.

It's similar for switching away from QWERTZ.

------
joeblau
Someone once told me that the porn industry played a part in the VHS/Betamax
and Blu-Ray/HD DVD battles. While Betamax and HD DVD were technologically
superior, utility drove adoption.

~~~
jsjohnst
In what way was HD DVD _technologically_ superior? Unlike Betamax, there’s
literally not a tech spec where it has a clear, pronounced advantage[0]. HD
DVD couldn’t even really be seen as the VHS of the 2000s; while it had price
and some basic utility advantages (cheaper to produce), it never had the
adoption (by the movie industry or by hardware manufacturers).

[0]
[https://en.m.wikipedia.org/wiki/Comparison_of_high_definitio...](https://en.m.wikipedia.org/wiki/Comparison_of_high_definition_optical_disc_formats)

~~~
Dylan16807
Agreed, HD DVD was the cheaper and technologically inferior option.

------
hyolaris
Important topic, but the examples are strawmen in many ways.

In most of the important cases, you will have never heard of what could have
been.

Also, often the problem is that better ideas weren't developed as much as
they could have been. That is, we look back and compare the inferior but
dominant product, which was revised and revised because of its problems and
dominance, with the superior but abandoned product in its unimproved state,
which is a sort of survivorship bias.

The OP has a point but it's exaggerated.

------
ddmma
_The Tipping Point_ by Malcolm Gladwell might have an extensive answer on
this matter; it describes a similar effect to epidemics.

------
k__
Professionalism?

You can basically ship crap if you do it professionally.

------
dzuc
[https://en.wikipedia.org/wiki/Worse_is_better](https://en.wikipedia.org/wiki/Worse_is_better)

------
Gravityloss
Excellent post. It's short and doesn't waffle. It has a coherent idea and at
least I learned something new.

~~~
Rainymood
Personally, if I may be crude, I found the post too short and
unsubstantiated. I honestly cannot imagine this is a "serious" post by a
professor (!). It's so short-sighted I don't even know where to begin.

My personal opinion is that "superior" often means "cheap as shit and I'm
willing to compromise a lot of stuff".

------
timthelion
I believe that the 90-90 rule [1] proves that many superior alternatives
will fail to catch on. If something is not revolutionarily better, and has
the potential to compete with a product but has not been developed to the
point of actual competitiveness, it will lose out.

Combine this with network effects, and it seems that it would be impossible
for superior techs not to fail.

Take email headers as an example. They are objectively inefficient and hard to
parse. Everyone agrees on that. But we still use them despite the obvious
possibility of a better format. Why? The obvious network effect.
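
A small stdlib illustration of why the format is awkward: RFC 5322 headers
allow a value to be folded across physical lines, so even reading one header
takes a real parser rather than a split on ':' (the message below is made
up):

```python
from email.parser import Parser

raw = (
    "From: alice@example.com\r\n"
    "Subject: a folded header\r\n"
    " continues on the next line\r\n"
    "\r\n"
    "body\r\n"
)
msg = Parser().parsestr(raw)
print(msg["From"])      # the simple case works...
print(msg["Subject"])   # ...but this value was folded across two raw lines
```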

The 90-90 examples will be less obvious. You cannot easily tell which tech
is better until you have invested the same amount of effort in developing
all of them. Superior products will be more likely to eventually succeed in
markets
where such parallel development is justified. Currently, most speakers are
made by gluing a permanent magnet to a piece of plastic or paper and using
an electromagnet to pull on the permanent one. Through parallel development,
we now know that we can make much, much better speakers by making the
membrane itself magnetic, by gluing copper traces onto it, or by statically
charging it. We can make objectively better sound with these lighter
membranes, which don't have a heavy permanent magnet glued to them. Slowly,
expensive speakers and headphones are adopting the new tech, but most
speakers sold still use the inferior tech. The superior tech was developed
in parallel with the inferior one only because speakers are such a large
market that this parallel development could be justified financially.

Now take braille displays. A much smaller market. We think that the current
tech used in Braille displays is worse than a set of other techs. There isn't
enough money for the parallel development, so despite the fact that
alternative, probably superior, technologies exist, we haven't developed them
to the point of competitiveness.

[1] [https://en.wikipedia.org/wiki/Ninety-ninety_rule](https://en.wikipedia.org/wiki/Ninety-ninety_rule)

------
user5994461
Most of the time. Take a look at developer tools for instance:

jenkins,nagios,graphite,jmeter,mysql,postgresql,puppet,vagrant,virtualbox,mongodb,npm,docker,openvpn.

What do all these pieces of software have in common? They are the most
popular despite being poor products and often having notably superior
competitors.

~~~
sho
Your list makes no sense to me. PostgreSQL is a poor product with noticeably
superior competitors? This is - how should I put it? - a niche opinion.

And besides, popularity is its own quality sometimes. NPM might not be the
"best" but everything's on it so it's the most useful. A lot of devops people
already know docker. If you need to hire, there's a deep pool to draw from.
That counts for a lot.

~~~
jandrese
I'm guessing he's referring to Oracle as the competitor.

~~~
sho
Well that's unexpected! I'm aware that Oracle has a few features pg doesn't,
and replication is supposed to be easier, but at extraordinary cost and with
the added bonus of supporting one of the worst companies in tech.

I don't really think you can even compare them as they serve totally different
markets. If I was the CTO at a Fortune 500 and needed a DB for my new global
logistics system, fine, Oracle would be in the running. Since this is HN
though, I was thinking in terms of startups - and a startup choosing Oracle
would be pure insanity unless they had an _amazingly_ good reason.

