
Jepsen: MongoDB 4.2.6 - aphyr
http://jepsen.io/analyses/mongodb-4.2.6
======
dang
All: there was a big thread about this yesterday
([https://news.ycombinator.com/item?id=23285249](https://news.ycombinator.com/item?id=23285249))
but because it didn't focus on the technical content, and because there were
glitches with a previous submission of this report (described at
[https://news.ycombinator.com/item?id=23288120](https://news.ycombinator.com/item?id=23288120)
and
[https://news.ycombinator.com/item?id=23287763](https://news.ycombinator.com/item?id=23287763)
if anyone cares), we invited aphyr to repost this. Normally we downweight
follow-up posts that have such close overlap with a recent discussion
([https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...](https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=by%3Adang%20follow-
up&sort=byDate&type=comment)), so the exception is probably worth explaining.

------
inglor
I also want to point out that their Node.js transactions API is wrong, and it
looks like they have no idea how promises or async code work in JS.

In mongo, you have a `withTransaction(fn)` helper that passes a session
parameter. Mongo can call this function multiple times with the same session
object.

This means that if you have an async function with reference to a session and
a transaction gets retried - you very often get "part of one attempt + some
parts of another" committed.

We had to write a ton of logic around their poor implementation and I was
shocked to see the code underneath.

It was just such a stark contrast to products that I worked with before that
generally "just worked" like postgres, elasticsearch or redis. Even tools
people joke about a lot like mysql never gave me this sort of data corruption.

Edit: I was kind of angry when writing this so I didn't provide a source, and
I'm a bit surprised this got so many upvotes without a source (I guess this
community is more trusting than I assumed :] ). Anyway, for good measure and to
behave the way I'd like others to when making such accusations, here is where
they pass the same session object to the transaction
[https://github.com/mongodb/node-mongodb-native/blob/e5b762c6...](https://github.com/mongodb/node-mongodb-native/blob/e5b762c6d53afa967f24c26a1d1b6c921757c9c9/lib/sessions.js#L376)
(follow from withTransaction in that file) - I can add examples of code easily
introducing the above-mentioned bug if people are interested.
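
To illustrate, here's a self-contained simulation of the hazard - no real
MongoDB involved, and `withTransactionSim`/`demo` are hypothetical names I
made up for this sketch. The only behavior borrowed from the driver is the one
described above: on a transient error, the helper re-runs the same callback
with the same session object.

```javascript
// Hypothetical stand-in for the driver's withTransaction retry loop:
// on a "transient" error it re-runs the SAME callback with the SAME
// session object (mirroring the behavior described above).
async function withTransactionSim(session, fn) {
  while (true) {
    session.pending = []; // start a fresh transaction on the shared session
    try {
      await fn(session);
      session.committed.push(...session.pending); // commit this attempt
      return;
    } catch (err) {
      if (err.transient) continue; // retry, same session object
      throw err;
    }
  }
}

async function demo() {
  const session = { pending: [], committed: [] };
  let attempt = 0;
  await withTransactionSim(session, async (s) => {
    attempt++;
    s.pending.push(`write-A-attempt-${attempt}`);
    if (attempt === 1) {
      // A dangling async task from attempt 1 keeps a reference to the shared
      // session; it fires later, in the middle of attempt 2's transaction.
      setTimeout(() => s.pending.push('stray-write-from-attempt-1'), 10);
      const err = new Error('TransientTransactionError');
      err.transient = true;
      throw err;
    }
    // Give the stray write from attempt 1 time to land mid-transaction.
    await new Promise((resolve) => setTimeout(resolve, 25));
    s.pending.push(`write-B-attempt-${attempt}`);
  });
  return session.committed;
}

demo().then((committed) => console.log(committed));
// → [ 'write-A-attempt-2', 'stray-write-from-attempt-1', 'write-B-attempt-2' ]
```

The committed result mixes writes from both attempts - exactly the "part of
one attempt + some parts of another" failure mode.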

~~~
inglor
If you work for Mongo and are reading this. Please just fix it. I don't need
to win and I don't care about being "right".

I just don't want to be called to the office on a weekend anymore for this
sort of BS.

Production incidents with MongoDB last year: 15. Production incidents with
redis, elasticsearch and mysql combined last year: 2 (and with much less
severity).

Edit: just to add: I didn't pick Mongo, I was just the engineer called in to
clean up that mess. I've created enough of my own messes not to resent the
person who made that call. We are constantly on the verge of rewriting the
MongoDB stuff, since a database that small (~250GB) should really not have
this many issues (in previous workplaces I ran ~10TB PostgreSQL deployments
with much more complicated schemas and queries, with far fewer issues). It's
also expensive, and support at Mongo Atlas hasn't been great (we should
probably self-host, but I'm not used to small databases being so problematic).

~~~
brianwawok
This is why most of us don’t use mongo in production. It’s just not worth it.
Postgres is a tank and supports Json when you really need it.

~~~
hetspookjee
The Guardian posted quite a nice blog in 2018 about the switch to Postgres
from MongoDB. Especially interesting because they intended to use Postgres as
replacement document storage: Here's the link
[https://www.theguardian.com/info/2018/nov/30/bye-bye-
mongo-h...](https://www.theguardian.com/info/2018/nov/30/bye-bye-mongo-hello-
postgres)

~~~
guanzo
> Automatically generating database indexes on application startup is probably
> a bad idea.

aw crap. oh well it probably doesn't matter for my small-ish application.

------
aphyr
Hi folks! Author of the report here. If anyone has questions about detecting
transactional anomalies, what those anomalies are in the first place, snapshot
isolation, etc., I'm happy to answer as best I can.

~~~
devit
Have you considered presenting the data in a concise manner in addition to the
in-depth analyses?

That is, a table on the jepsen.io frontpage, or at least on each product's
review page, with database products and configuration on rows and consistency
properties on columns, and a nice "Yay!" or "Nope!" mark in the cell, plus
links on how to achieve the database configurations in the table (esp. how to
configure each database to have the most guarantees).

Also, ideally the analyses should be rerun automatically (or possibly after
being paid, but making it easy for the company to do so) every time a new
major release happens rather than being done once and then being stale.

Finally, there should be tests for the non-broken databases (PostgreSQL for
instance, both in single-server mode, deployed with Stolon on Kubernetes and
using the multimaster projects) as well to confirm they actually work.

~~~
aphyr
_That is, a table on the jepsen.io frontpage, or at least on each product's
review page, with database products and configuration on rows and consistency
properties on columns, and a nice "Yay!" or "Nope!" mark in the cell, plus
links on how to achieve the database configurations in the table (esp. how to
configure each database to have the most guarantees)._

This is a wonderful idea, and I've got no idea how to actually do it in a
standardized, rigorous way. Vendor claims are often contradictory, it's hard
to get a good idea of anomaly frequency, availability is... a rabbithole, and
it's hard to come up with a standard taxonomy of anomalies--most of the
analyses I do wind up finding something I've never really seen before, haha.
With that in mind, I've wound up letting the reports speak for themselves.

 _Also, ideally the analyses should be rerun automatically (or possibly after
being paid, but making it easy for the company to do so) every time a new
major release happens rather than being done once and then being stale._

I don't know a good way to do this either. Each report is typically the
product of months of experimental work; it's not like Jepsen is a pass-fail
test suite that gives immediately accurate results. There is, unfortunately, a
lot of subtle interpretive work that goes into figuring out if a test is doing
something meaningful, and a lot of that work needs to be repeated on each test
run. Think, like... staring at the logs and noticing that a certain class of
exception is being caught more often than you might have expected, and
realizing that a certain type of transaction now triggers a new conflict
detection mechanism which causes higher probabilities of aborts; those aborts
reduce the frequency with which you can observe database state, allowing a
race condition to go un-noticed. That kinda thing.

If I'm lucky and the API/setup process haven't changed, I can re-run an
analysis in about a week or so. If I'm unlucky, there's been drift in the OS,
setup process, APIs, client libraries, error handling, etc. It's not uncommon
for a repeat analysis to take months. :-(

~~~
X6S1x6Okd1st
It's probably more snarky than helpful, but it'd be great to have a section
where it's just marketing materials or docs that you've corrected with a red
pen

~~~
bcrosby95
It's probably better to keep it professional. Your average employee can afford
some snark. But when companies hire you for this sort of consulting, you could
turn off a lot of potential clients by including it in materials you produce,
even when they didn't pay for it. Because it is a representation of the
product they would be paying for.

It would be kinda like you including this sort of thing on your resume. Which
would also be a bad idea.

~~~
ashtonkem
For those who don’t know, Kyle makes a living offering these types of analysis
to database companies directly. While a lot of us love to dunk on Mongo
(myself included), it would be silly to expect Kyle to risk his livelihood.

------
lllr_finger
Mongo has been everything from a _perpetual irritation_ up to a _major
production issue_ at all three of my last companies.

For as easy as it is to use jsonb in Postgres, or Redis, or RocksDB/SQLite, or
whatever else depending on your use case - I can't find any reason to advocate
its use these days. In my anecdotal experience, the success stories never
happen, and nearly every developer I know has an unpleasant experience they can
share.

Big thanks to aphyr and the Jepsen suite (and unrelated blog posts like Hexing
the Interview) for inspiring me to do thorough engineering.

~~~
ep103
Is Postgres what most people would suggest as a MongoDB replacement?

Anyone have any suggestions for a true non-MongoDB jsonDocument based noSql
option?

~~~
jfkebwjsbx
The first question you must ask yourself is: do I really need a document
store?

Because the answer is "no" in the overwhelming majority of cases, especially
if your product is mature.

~~~
ep103
Oh trust me, I'm aware. But inevitably I will be in a design meeting where
they will want a non-SQL alternative, and it'd be nice to know what I can
suggest besides Mongo.

------
mtrycz2
aphyr, you are of great inspiration as an engineer and as a human.

Your attitude of "a tool I need doesn't exist, so I'll just go ahead and
create it" blew my mind and changed me for the better.

I'm dedicating my next test framework to you. Thank you for everything.

~~~
aphyr
Aw shucks, thank you! <3

------
chousuke
This article reinforces my stance that bad defaults are a bug. Defaults should
be set up with the least number of pitfalls and safety tradeoffs possible so
that the system is as robust as it can be for the majority of its users, since
the vast majority of them aren't going to change the defaults.

Sometimes you end up with bad defaults simply by accident but I feel like for
MongoDB the morally correct choice would be to own up to past mistakes and
change the defaults rather than maintain a dangerous status quo for "backwards
compatibility", even if you end up looking worse in benchmarks as a result.

~~~
aphyr
I think this is a good way to look at things, and there are vendors who do
this! VoltDB, for instance, changed their defaults to be strict serializable
even though it imposed a performance hit, following their Jepsen analysis.
[https://www.voltdb.com/blog/2016/07/voltdb-6-4-passes-
offici...](https://www.voltdb.com/blog/2016/07/voltdb-6-4-passes-official-
jepsen-testing/)

------
zzzeek
How many more years do we have to keep evaluating, studying, and reading about
MongoDB's ongoing failures? It would appear this product has been a great
burden on the community for many years.

~~~
aphyr
I like to keep in mind that MongoDB's existing feature set is maturing--
occasional regressions may happen, but by and large they're making progress.
The problems in this analysis were in a transaction system that's only been
around for a couple years, so it's had less time to have rough edges sanded
off.

~~~
zzzeek
there are _so_ _many_ _great_ _databases_ out there. There's no need for one
that has been mediocre for years and continues to make false claims. This is
an issue of years of super aggressive marketing of an inferior product making
it hard on engineers.

~~~
trashcan
I think if you compared it to other databases that are designed to scale
horizontally like Cassandra and DynamoDB, you might have a more favorable
opinion. IMHO, most products at this scale are terrible in different ways,
because it is a difficult problem to solve generally.

I have been responsible for <100 clustered Cassandra instances, and <500
clustered MongoDB instances, and I would choose the latter every time.

~~~
runT1ME
Can mongo really support the kind of large scale ETL or time series use cases
Cassandra can?

------
bithavoc
> Clients observed a monotonically growing list of elements until [1 2 3 5 4 6
> 7], at which point the list reset to [], and started afresh with [8]. This
> could be an example of MongoDB rollbacks, which is a fancy way of saying
> “data loss”.

I hope they learned the lesson, don't fuck with aphyr.

~~~
amenod
That's... not the lesson they need to learn. Databases are app foundations.
Make sure you do them right and don't overpromise.

~~~
baq
I agree, but maybe it's the only lesson they are able to understand at this
time. Their attitude was asking for somebody to call them out, which aphyr is
maybe the best positioned to do.

I’d love to read a roasting like that authored by Leslie Lamport for a
different perspective but aphyr’s works absolutely stand on their own.

Any ideas how to get Jepsen and TLA to work together? :)

------
junon
I wanted to incorporate MongoDB into a C++ server at one point.

Their C/C++ client is literally unusable. I went to look into writing my own
that actually worked and their network protocols are almost impossible to
understand. BSON is a wreck and basically the whole thing discouraged me from
ever trying to interact with that project again.

------
loeg
Aphyr is such a competent professional. What a relatively thorough and polite
response to Mongo's inaccurate claims. "We also wish to thank MongoDB’s Maxime
Beugnet for inspiration." is a nice touch.

------
egeozcan
The general mood I observed about MongoDB was that it used to be inconsistent
and unreliable but they fixed most, if not all of those problems and they now
have a stable product but bad word of mouth among developers. Personally, I've
treated it as "legacy" and migrated everything that I had to touch since 2013
[0], and luckily (just read the article so hindsight 20/20 -- transaction
running twice and seeing its own updates? holy...) never gave it another try.

[0]:
[https://news.ycombinator.com/item?id=6801970](https://news.ycombinator.com/item?id=6801970)
(BTW: no, my dream of simple migration never materialized, but exporting and
dumping data to Postgres JSONB columns and rewriting queries turned out to be
neither buggy nor hard).

~~~
cyphar
> MongoDB was that it used to be inconsistent and unreliable but they fixed
> most, if not all of those problems and they now have a stable product but
> bad word of mouth among developers.

This report is 9 days old, and tests the latest stable release of MongoDB. The
problems it discusses are present on modern MongoDB.

~~~
egeozcan
If it wasn't clear, I said "mood" (which you conveniently ignored), referring
to chit-chat I heard recently, and I was underlining how wrong that mood has
been. I totally understand what the report says and know what version it
tests.

~~~
cyphar
In my defense, it wasn't clear that's what you were saying in your original
comment. "Mood" has become a filler word at this point -- hence why I omitted
it from the quote -- and can mean anything from the traditional meaning of
"mood in the room" to "incredibly relatable/factual statement". How I
originally understood your comment was that you were saying that you felt that
most of the issues are in the past, but you still decided to migrate away from
it.

~~~
egeozcan
English is not my mother tongue, and given the down-votes, it's probably my
wording that's at fault here - sorry.

I'm glad now that it's been clarified :)

------
judofyr
This is not directly related to this report or Jepsen, but since you're here
I've got to ask: Aphyr, are there any recent papers/research in the realm of
distributed databases which you're excited about?

~~~
aphyr
Calvin and CRDTs aren't new, but I still think they're dramatically
underappreciated! Heidi Howard's recent work on generalizing Paxos quorums is
super intriguing, and from some discussion with her, I think there are open
possibilities in making _leaderless_ single-round-trip consensus systems for
log-oriented FSMs, which is what pretty much everyone WANTS.

I'm also excited about my own research with Elle, but we're still working on
getting that through peer review, haha. ;-)

~~~
thramp
> I think there are open possibilities in making leaderless single-round-trip
> consensus systems for log-oriented FSMs, which is what pretty much everyone
> WANTS.

Woah, that's wild. Are there any pre-prints/papers/talks that you can link to
on this subject? I'd _love_ to read this.

> I'm also excited about my own research with Elle, but we're still working on
> getting that through peer review, haha. ;-)

I read over bits of Elle; the documentation in it is absolutely top-notch. You
and Peter Alvaro knocked it out of the park!

~~~
aphyr
_I think there are open possibilities in making leaderless single-round-trip
consensus systems for log-oriented FSMs, which is what pretty much everyone
WANTS._

This is based on her presentation and some dinner conversation at HPTS 2019,
so I don't know if there's actually a paper I can point to. The gist of it is
that Paxos normally involves an arbitration phase where there are conflicting
proposals, which adds a second pair of message delays. But if you relax the
consensus problem to agreement on a _set_ of proposals, rather than a single
proposal, you don't need the arbitration phase. Instead of "who won", it
becomes "everyone wins". Then you can impose an order on that set via, say,
sorting, and iterate to get a replicated log.
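
That last step is easy to demo. The sketch below is my own toy illustration
(not Howard's protocol, and all names in it are made up): given that replicas
have already agreed on the same _set_ of proposals, a deterministic sort gives
every replica an identical log order with no extra arbitration round.

```javascript
// Toy illustration (my sketch, not Heidi Howard's actual protocol):
// once replicas agree on a SET of proposals, any deterministic total
// order - here, sort by (proposer, seq) - yields the same log everywhere.
function logOrder(agreedSet) {
  return [...agreedSet].sort((a, b) =>
    a.proposer === b.proposer
      ? a.seq - b.seq
      : a.proposer.localeCompare(b.proposer));
}

// Two replicas saw the same proposals arrive in different network orders...
const replica1 = [{ proposer: 'b', seq: 1 }, { proposer: 'a', seq: 2 }, { proposer: 'a', seq: 1 }];
const replica2 = [{ proposer: 'a', seq: 1 }, { proposer: 'b', seq: 1 }, { proposer: 'a', seq: 2 }];

// ...yet derive identical committed orders without a second round trip.
console.log(JSON.stringify(logOrder(replica1)) === JSON.stringify(logOrder(replica2))); // true
```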

 _I read over bits of Elle; the documentation in it is absolutely top-notch.
You and Peter Alvaro knocked it out of the park!_

Thank you! Could I... hang on, just let me grab reviewer #1 quickly, I'd like
them to hear this. ;-)

~~~
judofyr
> _This is based on her presentation and some dinner conversation at HPTS
> 2019, so I don 't know if there's actually a paper I can point to. The gist
> of is that Paxos normally involves an arbitration phase where there are
> conflicting proposals, which adds a second pair of message delays. But if
> you relax the consensus problem to agreement on a set of proposals, rather
> than a single proposal, you don't need the arbitration phase. Instead of
> "who won", it becomes "everyone wins". Then you can impose an order on that
> set via, say, sorting, and iterate to get a replicated log._

This sounds very similar to _atomic broadcast_
([https://en.wikipedia.org/wiki/Atomic_broadcast](https://en.wikipedia.org/wiki/Atomic_broadcast))
where each node sends a single message and the process ensures that all nodes
agree on the same set of messages. Not sure how it would fit with a log-
oriented FSM, but it certainly sounds interesting.

~~~
senderista
It’s really pretty trivial to implement RSM given an atomic broadcast
protocol. But you can implement many other things, like totally ordered
ephemeral messaging with arbitrary fanout, or a replicated durable log à la
Kafka. Here’s my current favorite atomic broadcast protocol (from 2007 or so),
which is leaderless, has write throughput saturating network bandwidth, and
read throughput scaling linearly with cluster size:

[https://os.zhdk.cloud.switch.ch/tind-tmp-
epfl/394a62dd-278f-...](https://os.zhdk.cloud.switch.ch/tind-tmp-
epfl/394a62dd-278f-47dd-862f-0c67a6aea084?response-content-
disposition=attachment%3B%20filename%2A%3DUTF-8%27%27paper.pdf&response-
content-
type=application%2Fpdf&AWSAccessKeyId=ded3589a13b4450889b2f728d54861a6&Expires=1590439387&Signature=GZgWFIfvE%2BB8dinv7CDQFx%2Brn3I%3D)

------
inglor
Without going into details due to NDAs: the experience in the OP matches that
of several Fortune 500 companies I've had gigs with.

------
nevi-me
Friendly question: did you update anything on the findings since
[https://news.ycombinator.com/item?id=23191439](https://news.ycombinator.com/item?id=23191439)
?

~~~
aphyr
Nope! Something weird happened to that post; it got a lot of upvotes and some
comments, but never made it to frontpage. After the InfoQ article took off
yesterday, an HN mod got in touch and asked if I'd like to resubmit it.

------
azernik
Ouch. This is what you get when you order up a third-party review and then
misrepresent it in advertising.

~~~
taywrobel
I’m still waiting for Jepsen to put Confluent’s “Kafka provides exactly once
delivery semantics” claim to the test.

Since they’re claiming something provably false, it’d be nice to have some
empirical evidence as such.

~~~
aphyr
I'm not convinced it _is_ false--IIRC their claim is specifically w.r.t other
Kafka side effects, and those they _can_ control.

------
sam1r
Extremely well written! I learned a lot.

I wonder if someone can type up a well-manicured post-mortem of the recent
Triplebyte incident?

------
depr
>Sometimes, Programs That Use Transactions… Are Worse

I understood that reference

------
rmdashrfstar
The main argument for using a document-oriented database:
[https://martinfowler.com/bliki/AggregateOrientedDatabase.htm...](https://martinfowler.com/bliki/AggregateOrientedDatabase.html)

------
sorokod
I suppose there are reasons why the defaults are the way they are. Can anyone
comment on the implications, performance or otherwise, of bumping up the
read/write concerns?
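
For concreteness, here is roughly what opting into the stronger settings looks
like with the Node.js driver. This is a sketch from memory, so treat the
option names as assumptions to check against your driver version's docs, and
it needs a live replica set, so it's a config fragment rather than a runnable
demo:

```javascript
const { MongoClient } = require('mongodb');

// Client-wide defaults: writes acknowledged by a majority of replicas
// (journaled), reads limited to majority-committed data.
const client = new MongoClient('mongodb://localhost:27017/?replicaSet=rs0', {
  writeConcern: { w: 'majority', j: true },
  readConcern: { level: 'majority' },
});

// Transactions can carry their own settings via the session:
const session = client.startSession({
  defaultTransactionOptions: {
    readConcern: { level: 'snapshot' },
    writeConcern: { w: 'majority' },
  },
});
```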

~~~
goatinaboat
In general, MongoDB’s defaults fall into two categories. The first could
possibly be justified as making it easy for inexperienced devs to get started,
but it means that people rely on those defaults and then try to promote to
production, and unless there is an experienced traditional DBA with the power
to veto it, it will go ahead. This is how they “backdoor” their way into
companies. The second category is whatever will look good on a benchmark,
regardless of any corners cut.

Compare and contrast with the highly ethical Postgres team, who encourage good
practices from the start and who get a feature right first before worrying
about performance. That may harm their adoption in the short term but over the
long term, that's why they're the gold standard. And with their JSONB datatype
they have a better MongoDB than MongoDB anyway! And have a million other
features besides!

~~~
threeseed
> Compare and contrast with the highly ethical Postgres team

You do know that PostgreSQL had issues with not fsyncing data as well? It's
technology. Bugs will be made. Design decisions will be wrong.

I think it's really disappointing and inappropriate to label MongoDB
engineers as unethical simply for having incorrect defaults, which they have
often changed after being made aware of them.

~~~
goatinaboat
_You do know that PostgreSQL had issues with not fsyncing data as well?_

See, you can name just one Postgres bug, and they held their hands up to it
straight away. Whereas the MongoDB "bugs" are countless and by sheer
coincidence, they mostly skew to improving performance in benchmarks and
demos. That's a pattern.

------
bbulkow
mongodb's business model, forever, has been to get developers to write code,
be damned the fact that you can't support it reliably on a cloudy day.

------
jtdev
Now do DynamoDB.

~~~
aphyr
I'd like to, but I don't have any way to do fault injection on a system
someone else owns. :(

~~~
petrikapu
They have downloadable version of it
[https://docs.aws.amazon.com/amazondynamodb/latest/developerg...](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html)

~~~
TheDong
The downloadable version of DynamoDB is only intended for testing; it is not a
distributed system by any definition, nor does its behavior match the
production system exactly.

There's no reason to apply Jepsen to a single-node in-memory kv store.

------
tester756
Why has this been here every day for the last 3 days?

------
fastball
At this point I think we might be going a bit overboard with title changes.

Now that it's just "MongoDB 4.2.6", the title makes me think that this is a
release announcement, not an analysis of the software.

The first title (that specifically referenced a finding of the analysis) was
best, imo. Mildly opinionated or whatever, but at least it quickly
communicated the gist of the post. On the other hand:

"Jepsen: MongoDB 4.2.6" – not super helpful if you're not already familiar
with the Jepsen body of work.

"MongoDB 4.2.6" – as stated above, sounds like a release announcement.

If you want a suggestion, maybe something like "Jepsen evaluation of MongoDB
4.2.6"? Not overly specific (/ negative) like the first title, but at least
provides some slight amount of context.

@dang

~~~
Ecco
It's the article's title...

~~~
fiddlerwoaroof
A generic “Mongo 4.2.6” title doesn’t help me decide whether to click on the
link (especially with how light the domain is). I thought it was a release
announcement and only clicked through to the comments because of yesterday's
discussion.

~~~
dang
An HN title needs to be read along with the site name to the right of it.

~~~
fiddlerwoaroof
The styling of the site name makes it hard to scan. If it’s so essential, the
font should be darker and bigger.

~~~
dang
That's a fair point, but people have a lot of contradictory preferences about
things like that. I think I'd rather address this by allowing more
customization of the site. Still thinking about
[https://news.ycombinator.com/item?id=23199264](https://news.ycombinator.com/item?id=23199264).

~~~
fiddlerwoaroof
As I said there, I’d like to see that added

~~~
dang
Ah so you did! I missed that.

------
pier25
> Normally we downweight follow-up posts

So you manually moderate the content?

~~~
VonGuard
I mean, this was kind of an exception case, where there is a big old technical
war of words back and forth. Almost a "He said She said" except here, He is an
absolute expert, and She is just some marketing dorks at Mongo.

I, for one, welcome this by-hand moderation because it keeps this issue alive,
and allows Kyle to keep the discussion going.

As I commented in a previous post, Kyle is the Chef Ramsay of database
testing, and here, he's in a position where some idiot has just served him an
undercooked hamburger. Bits will fly, marketing people will be flayed alive,
and Kyle will be the only one left standing at the end.

Without this by-hand moderation, we'd be missing out on the second act of this
intense thriller!

~~~
pier25
I'm totally ok with the moderation/curation/whatever!

------
lmilcin
I am tech lead for a project that revolves around multiple terabytes of
trading data for one of top ten largest banks in the world. My team has three,
3-node, 3TB per node MongoDB clusters where we keep huge amount of documents
(mostly immutable 1kB to 10kB in size).

Majority write/read concern is exactly so that you don't lose data and don't
observe stuff that is going to be rolled back. It is important to understand
this fact when you evaluate MongoDB for your solution. That it comes with
additional downsides is hardly a surprise, otherwise there would be no reason
to specify anything else than majority.

You just can't test lower levels of guarantees and then complain you did not
get what higher levels of guarantees were designed to provide.

It is also obvious, when you use majority concern, that some of the nodes may
accept the write but then have to roll back when the majority cannot
acknowledge the write. It is obvious this may cause some writes to fail that
would succeed if the write concern were configured not to require majority
acknowledgment.

The article simply misses the mark by trying to create sensation where there
is none to be found.

The MongoDB documentation explains the architecture and guarantees provided by
MongoDB enough so that you should be able to understand various read/write
concerns and that anything below majority does not guarantee much. This is a
tradeoff which you are allowed to make provided you understand the
consequences.

~~~
lllr_finger
> The article simply misses the mark by trying to create sensation where there
> is none to be found.

As someone who is a tech lead for a large database install, I'd urge you to
read the rest of the Jepsen reports. They aren't intended to be hit pieces on
technology - they're deep dives into the claims and guarantees of each
database. IIRC MDB has explicitly reached out to OP in the past (I doubt
they'll continue to do so after this).

Why that matters to the rest of us: once I learn all those dials and knobs I'm
left wondering why I would choose Mongo over another technology, and how much
the design of the default behavior and complexity of said dials/knobs are
influenced by their core business.

~~~
afarrell
I would also wonder about the surrounding ecosystem of tooling & libraries.

Imagine there was a programming language which had rather inconsistent naming,
poor automated testing support, and a history of guiding its users toward
security vulnerabilities. A culture would grow up around that language and the
most successful members would be those who could best tolerate those
properties. People generally self-select into language communities. So unless
some powerful influence pushed random programmers to use the language or made
it easier to add new tooling, the culture would continue to undervalue what
the language originally lacked.

I suspect the same social dynamic would apply to a database.

