> The few core developers they have do not work with modern tools like git and collaboration tools like Github, and don’t accept contributions, although they may or may not accept your suggestion for a new feature request.
The funny thing about this comment is that SQLite is as close to the gold standard of software quality as we have in the open source world. SQLite is the only program that I've ever used that reliably gets better with every release and never regresses. Could it be that this is precisely because they don't use "modern tools" and accept outside contributions?
I feel like a lot of fantastic software is made by a small number of people whose explicit culture is a mix of abnormally strong opinionatedness plus the dedication to execute on that by developing the tools and flow that feel just right.
As with a lot of other "eccentric" artists in other realms, that eccentricity is, at least in part, the bravery of knowing what one wants and making it a reality, usually with compromises that others might not be comfortable making (efficiency, time, social interaction from a larger group, etc).
SQLite's quality is due to the DO-178B compliance that has been achieved with "test harness 3" (TH3).
Dr. Hipp's efforts to perfect TH3 likely did lower his happiness, but all the Android users stopped reporting bugs.
"The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out... We crashed Oracle, including commercial versions of Oracle. We crashed DB2. Anything we could get our hands on, we tried it and we managed to crash it... I was just getting so tired of this because with this sort of thing, it’s the old joke of, you get 95% of the functionality with the first 95% of your budget, and the last 5% on the second 95% of your budget. It’s kind of the same thing. It’s pretty easy to get up to 90 or 95% test coverage. Getting that last 5% is really, really hard and it took about a year for me to get there, but once we got to that point, we stopped getting bug reports from Android."
> he managed to segfault every single database engine he tried, including SQLite, except for Postgres. Postgres always ran and gave the correct answer. We were never able to find a fault in that. The Postgres people tell me that we just weren’t trying hard enough.
I've always felt like Postgres is like one of those big old Detroit Diesel V12s that power generators and mining trucks and things. It's slow and loud and hopelessly thirsty compared to the modern stuff you get nowadays, and it'll continue to be just as slow and loud and hopelessly thirsty for another 40 or 50 years without stopping even once if you don't fiddle with it.
(I should say that it is not at all difficult to crash an Oracle dedicated server process. I've seen quite a few. This doesn't crash the database (usually).
I've never run an instance in MTS mode, so I've never seen a shared server crash, although I think it would be far from difficult.
I might be curious about the type of Db2 that crashed, UDB, mainframe, or OS/400, as they are very different.)
it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.
You can use all of those things to enable people to do things better and with less friction, but you also need to keep in mind that if a tool becomes more of a hindrance than a help, you should go looking for a new one.
> it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.
For me, the concept of best practices is pernicious because it is a delegation of authority to external consensus which inevitably will lead to people being treated as tools as they are forced to contort to said best practices. The moment something becomes best practice, it becomes dogma.
This comment perfectly encapsulates the point that I am making about best practices: the concept is used as a cudgel to silence debate and to confer a sense of superiority on the practitioner of "best practice." It is almost always an appeal to authority.
No one wants cowboy pilots ignoring ground control. Doctors though do not exactly have the best historical track record.
Knowledge communities should indeed work towards consensus and constantly be trying to improve themselves. Consensus though is not always desirable. Often consensus goes in very, very dark directions.
Even if there is some universal best practice for some particular problem, my belief is that codifying certain things as "best practice" and policing the use of alternative strategies is more likely to get in the way of actually getting closer to that platonic ideal.
Perhaps a better example might be "covering indexes," or what Oracle would call an "index full scan."
It is an idea so efficient that to disregard it is inefficiency.
"I had never heard of, for example, a covering index. I was invited to fly to a conference, it was a PHP conference in Germany somewhere, because PHP had integrated SQLite into the project. They wanted me to talk there, so I went over and I was at the conference, but David Axmark was at that conference, as well. He’s one of the original MySQL people.
"David was giving a talk and he explained about how MySQL did covering indexes. I thought, “Wow, that’s a really clever idea.” A covering index is when you have an index and it has multiple columns in the index and you’re doing a query on just the first couple of columns in the index and the answer you want is in the remaining columns in the index. When that happens, the database engine can use just the index. It never has to refer to the original table, and that makes things go faster if it only has to look up one thing.
"Adam: It becomes like a key value store, but just on the index.
"Richard: Right, right, so, on the fly home, on a Delta Airlines flight, it was not a crowded flight. I had the whole row. I spread out. I opened my laptop and I implemented covering indexes for SQLite mid-Atlantic."
This is also related to Oracle's "skip scan" of indexes.
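If it helps, here is a minimal sketch of the covering-index idea described in the quote above, using Python's built-in sqlite3 module. The table, columns, and index name are invented purely for illustration.

```python
# A toy illustration of a covering index; all names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, order_date TEXT, total REAL)")

# The leading index columns match the WHERE clause; the trailing column
# carries the value the query actually asks for.
conn.execute("CREATE INDEX idx_cust_date_total ON orders (customer, order_date, total)")

# Because the answer (total) lives entirely inside the index, SQLite can
# satisfy the query from the index alone and never touch the orders table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT total FROM orders WHERE customer = ? AND order_date = ?",
    ("alice", "2024-01-01"),
).fetchall()

for row in plan:
    print(row[-1])  # e.g. "SEARCH orders USING COVERING INDEX idx_cust_date_total ..."
```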
> And people should never be treated as merely tools.
Maybe on a tight-knit team people don't mind being treated like tools, because they understand what needs to get done next and see that it makes the most sense for them to do it; it's nothing personal.
At my freshman year "1st day" our university president gave us an inspirational speech in which he said "people say our program just trains machines... I want you to know we don't train machines. We educate them."
I'd say that if you have a tight-knit team, you are already doing the very opposite of treating people as tools. There's nothing wrong with having a shared understanding of a goal and then assuming a specific role in the effort to accomplish that goal; people are very good at that.
The problem is when you think of people the same way you think of a hammer when you use it to hit nails: The hammer doesn't matter, only that the nail goes in.
Best practices are subjective. What is best practice for C is not the same as for Python.
SQL DBs provide consistency guarantees around mutating linked lists. It’s not hard to do that in code and use any data storage format.
Imo software engineers have made software “too literal” and generated a bunch of “products” to pitch in meetings. This is all constraints on electron state given an application. A whole lot of books are sold about unit tests but I know from experience a whole lot of critical software systems have zero modern test coverage. A lot of myths about the necessity of this and that to software have been hyped to sell stock in software companies the last couple decades.
"Best practices" are just a summary of what someone (or a group of someones) thinks is something that is broadly applicable, allowing you to skip much of the research required to figure out what options there are even available.
Of course, dogmatic adherence to any principle is a problem (including this one).
Tools can be misused, but that doesn't really affect how useful they can be; though I think better tools are generally the kind that people will naturally use correctly, that's not a requirement.
I don't think you need "abnormally strong opinionatedness" or anything else special: all you need is a certain (long-term) dedication to the project and willingness to just put in the work.
Almost every project is an exercise in trade-offs; including every possible feature is almost always impossible, and never mind that it's the (usually small) group of core devs who need to actually maintain all those features.
I interpreted "opinionatedness" as meaning they have a clear definition of what sqlite is and isn't, including the vision of where it's headed. That would result in a team with very strong opinions about which changes and implementations are a good or bad fit for sqlite.
Can a project consistently make the right trade-offs without having strong opinions like that?
These devs provide a platform, and any change to a platform has a huge impact on its users. They have a plan they follow, and every project has layers. Constraints can be good, when and if applied correctly, as in this case.
Fossil is not less modern than Git, just less popular.
Under the hood it seems a lot like Git. The UI is more Hg-like. I disagree with D. Richard Hipp's dislike of rebasing, but he's entitled to that opinion, and a lot of people agree with him.
Calling Fossil "not modern" is a real turn-off. TFA seems to be arguing for buzzwords over substance.
Why? Fossil is a great name, since the past of a software project is... fossilized in the VCS, and looking through it is like doing archeology. No, Fossil's a great name. I just wish it adopted rebase as an optional workflow.
When the retelling of the history of virtualisation ignored everything before Xen, I questioned the value of the essay.
When it got to asserting that Fossil isn't modern, I discarded it. Fossil's a DVCS, but unlike git it chooses to integrate project tooling for things like bug management with the code repo. You can argue about whether you like the approach. But 'not modern' is an absurd statement.
Agree, after reading up about fossil. Except for one thing: I don't want the "closed team" culture that was intentionally baked into the tool.
When git replaced SVN, it was so empowering that I, as an individual, was able to use the full maintainer workflow without the blessing of a maintainer, privately by default.
Before git, we saved the output of "svn diff" into a .patch file for tracking personal changes. When submitting a patch, the maintainer had to write a commit message for you. With some luck, you even got credited. For sharing a sensible feature-branch, you had to become a maintainer with full access. This higher bar has advantages (it tends to create more full-access maintainers, for one). However, it sends this message: "Yes open source, but you are not one of us."
Yes, fossil has this great feature of showing "what was everyone up to after release X". I miss that in git. (Closest workaround: git fetch --all && gitk --all.) But if "everyone" means just the core devs, then I'm out.
> I don't want the "closed team" culture that was intentionally baked into the tool.
I've been using Fossil for years and TBH I don't see that "closed team" culture you speak of (though I also have almost all of my projects as "open source, not open contribution").
Fossil is a distributed version control system, and in fact it is "more" distributed than git if you consider that people tend to tie git to centralized services like GitHub to get more than just the VCS part. A Fossil repository contains not only the versioned files, but also a wiki, tickets/bugtracker, forum, chat room, blog/technotes - even the theme is part of it. And since it is a decentralized system, all of it is cloned when you clone the repository.
AFAIK the only limitation (I haven't really tried it myself since I only use it solo) is that for security reasons users aren't "fully" cloned, so you'd need to make new users in the cloned repository - you can use the same username though (but in commits, history, etc it'll appear as different users). It'd be useful if "user" and "identity" could be distinguished so that the tool could know that two usernames really refer to the same person.
Also Fossil works pretty much everywhere - a local server, as a FastCGI server, as a plain old CGI "script", you can even have it as a "CGI interpreter" for ".fossil" files (the repository files - the entire repository is stored in a single SQLite database file) which makes it usable with many shared web hosting services without needing root or even shell access. In a way that is the most decentralized you can go :-P.
> Fossil is a distributed version control and in fact it is "more" distributed than git if you consider that people tend to tie it with centralized services like GitHub to get more than just the VCS part.
The GitHub blurb makes no sense. Even if N developers standardize their workflow on using a couple of remote repos to exchange work, that does not make the underlying system less distributed/more centralized.
> A Fossil repository contains not only the versioned files, but also a wiki, tickets/bugtracker, forum, chat room, blog/technotes - even the theme is part of it.
That sounds like a major design faux pas. It makes zero sense to tie a chat room/blog/e-mail client/alarm clock to a source code repository.
> The GitHub blurb makes no sense. [..] It makes zero sense to tie a chat room/blog/e-mail client/alarm clock to a source code repository
I think you need to reconsider your senses if you can't see the tie between the two :-P. The "more distributed" part was exactly because of Fossil providing that additional functionality GitHub provides in a decentralized form.
Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.
A single person can develop and release extremely high quality software, and as long as it meets the needs of the users (it's not missing a lot of features that are taking a long time to deliver), a single person in absolute control and writing all the code is probably a benefit in keeping it high quality and with fewer bugs.
It may not follow that the same can be said a few years from now, or even a few months from now, since the bus factor of that project is one, and if "bus events" includes "I don't want to work on that anymore but nobody else knows it well at all" then for some users that's a problem (and for others not so much).
One situation isn't necessarily better or worse than the other (and it's probably something in-between anyway), it really just depends on the project and the audience it's intended for. That audience might be somewhat self-selected by the style of development though.
> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.
I think in this area SQLite has most other open source software beat. SQLite is used in the Airbus A350 and has support contracts for the life of the airframe.
Fair points, although the bus factor is more like 3 or 4 for SQLite as far as I know. The question though is that if the entire team vanished from the face of the earth, what would the impact be? My guess is that either SQLite would be good enough as is for 99% of use cases and it wouldn't need much development apart from maybe some minor platform specific use cases or, if new functionality truly is needed, then it would be better for a new team to rewrite a similar program from scratch using SQLite as more of a POC than as a blueprint.
SQLite is supported until 2050, and will likely outlast many other platforms if this goal is attained.
I hope the bus factor is high enough to reach the goal.
"Every machine-code branch instruction is tested in both directions. Multiple times. On multiple platforms and with multiple compilers. This helps make the code robust for future migrations. The intense testing also means that new developers can make experimental enhancements to SQLite and, assuming legacy tests all pass, be reasonably sure that the enhancement does not break legacy."
I was speaking less to the SQLite situation specifically and more to the general idea of "Could it be that this is precisely because they don't use "modern tools" and accept outside contributions?" and how I think teams that are very small and not very accepting of outside help/influence might affect that.
To that end I purposefully compared extremes, and tried to allude to the fact that most situations fall between those extremes in some way. SQLite is more towards one end than the other, but it's obviously not a single developer releasing binaries to the world, which is about as far to that extreme as you can go. The other end would probably be something like Debian.
That's not to say either of those situations have to be horrible at what the other excels at. That singular person could have set things in place such that all their code and the history of it gets released on their death, and Debian obviously has a working process for releasing a high quality distribution.
AFAIUI the company behind SQLite (Hipp & co.) have basically endless funding. Not unlimited, just a good enough budget and not likely to end soon. That's also a big factor.
> It may not follow that the same can be said a few years from now, or even a few months from now, since the bus factor of that project is one, and if "bus events" includes "I don't want to work on that anymore but nobody else knows it well at all" then for some users that's a problem (and for others not so much).
You may have been speaking generally, and you'd be right, but specifically the bus factor of the SQLite team and the SQLite Consortium is larger than 1, and they could hire more team members if need be.
If and when the SQLite team is no longer able to keep the project moving forwards, then I think we'd see one or more forks or rewrites or competitors take over SQLite's market share.
Yes, I was speaking generally. Specific development models have advantages and disadvantages, but those can often be countered by actions outside the development model taken to limit those disadvantages. For example, an extremely open development model is likely prone to more bugs and quality problems, as well as a harder to read and work in code base. There are steps to combat that, such as style guides and automatic style converters, numerous reviewers that can go through code to find bugs and make suggestions for better quality, etc.
It's not so much that one model over the other will have those problems I've mentioned for each, as much as I think those are common things those projects should be cognizant of and take steps to combat.
As you noted elsewhere, it sounds like SQLite has done a lot that mitigates what I see as the inherent disadvantages of their development model, which is laudable. At the same time I doubt the average SQLite developer is as easily and quickly replaced as the average Linux kernel contributor, even if there are specific kernel developers whose loss would be hard to deal with. Sometimes all you can do is mitigate the harm of a problem, not remove it entirely.
I dunno, I've seen new team members jump onto projects and thrive where most other members have a decade or two of experience. Once you've been around the block dealing with large and complex codebases, picking up a new one gets easier, so I'm not at all worried about the SQLite bus factor. I agree that much larger projects like Linux can have much larger communities to draw leadership from, but I think SQLite is fine.
Once upon a time I wanted to make a contribution to SQLite, and I tried to negotiate making it, but it was quite an uphill battle. On the other hand, I found the codebase itself quite approachable.
> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.
And so does the Linux kernel. There are numerous cases of successes and failures at both ends of that spectrum.
My point wasn't to imply that you can only pick one, but that in some cases choices to maximize one aspect can negatively affect the other if care is not taken, and depending on audience high quality released software is not the only thing under consideration in open source projects. Keeping the developer group small and being extremely selective of what outside code or ideas are allowed might bring benefits in quality, but if not carefully considered could yield long term problems that ultimately harm a project.
> > > Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.
> [...], but if not carefully considered could yield long term problems that ultimately harm a project.
SQLite has a successful consortium and a successful business model, namely: leveraging a proprietary test suite to keep a monopoly on SQLite development that then drives consortium membership and, therefore, consortium success, which then funds the SQLite team.
This has worked for a long time now. It will not work forever. It should work for at least the foreseeable future. If it fails it will be either because a fork somehow overcomes the lack of access to the SQLite proprietary test suite, or because a competitor written in a better programming language arises and quickly gains momentum and usage, and/or because the SQLite Consortium members abandon the project.
Very good points. The proprietary test suite is clearly the (open) secret to SQLite's success. It seems to me that it isn't even entirely accurate to describe SQLite as written in C when the vast majority of its code is probably written in TCL that none of us have seen. It's more like C is just how they represent the virtual machine which is described and specified by its tests. The virtual machine exists outside of any particular programming language but C is the most convenient implementation language to meet their cross platform distribution goals.
If someone did want to carve into SQLite's embedded db monopoly, it would take years to develop a comparable test suite. This seems possible, particularly if they develop a more expressive language for expressing the solutions to the types of problems that we use SQLite for. Who would fund this work though when SQLite works as well as it does?
Ultimately, the long term harm I was thinking of (for the most part) was lots of proprietary knowledge being lost as a developer is lost for one reason or another, and a resulting loss in quality and/or momentum in the project as that developer may represent a large percentage of project development capacity.
That a large chunk of this knowledge appears to have been offloaded into a test suite is good, and does a lot to combat this, but obviously nothing is quite as good as experience and skill and knowledge about the specifics of the code in question.
As a theoretical situation, how much more likely is a fork to eventually succeed if one or more of the core SQLite developers is no longer available to contribute to SQLite? There are a lot of variables that go into that, but I would feel comfortable saying "more likely than if those developers were still present". That idea encapsulates some of the harm I was thinking of.
Institutional knowledge, and leadership, is indeed critical. The knowledge of a codebase can be re-bootstrapped, and its future can be re-conceived, but actually providing leadership is another story. I think there's one person on the SQLite team besides D. R. Hipp who can provide that leadership, but I'm not sure about business leadership, though who am I to speculate, when I don't really know any of them. All I can say is that from outside looking in, SQLite looks pretty solid, and libSQL seems unfunded.
Contrary to the article's implication that they use outdated project tooling, they don't (*). Their VCS isn't outdated; in some ways it is more modern than git. It is just focused on dev flows like theirs, to the point where typical git dev flows will not work well. Similarly, not using GitHub is the right decision for how the project is managed; GitHub is too focused on open-contribution projects.
(*): You could argue they use "not modern" tools like C and similar to do modern things like fuzz testing. But the article's author clearly puts the focus on the project tooling, highlighting git/GitHub, so reading it as implying that their VCS is "not modern", i.e. outdated, i.e. bad, seems very reasonable IMHO.
> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.
Software that's truly "extremely well written and maintained and high quality as of now" has the option of a plan like:
> "At the time of my death, it is my intention that the then-current versions of TEX and METAFONT be forever left unchanged, except that the final version numbers to be reported in the “banner” lines of the programs should become [pi and e] respectively. From that moment on, all “bugs” will be permanent “features.” (http://www.ntg.nl/maps/05/34.pdf)
If your software needs perpetual maintenance, that's a good sign that it's probably not that high quality.
> If your software needs perpetual maintenance, that's a good sign that it's probably not that high quality
The problem in a lot of cases is not the software per se, but changing environments. Windows upholds backwards compatibility to a ridiculous degree (you can still run a lot of Win95 era games or business software on Win10), macOS tends to do major overhauls of subsystems every five-ish years that require sometimes substantial changes (e.g. they completely killed off 32-bit app support), but the Linux space is hell.
Anything that needs special kernel modules has no guarantees at all unless the driver is mainlined into the official kernel tree (which can be ridiculously hard to achieve). Userspace is bleh (if you're willing to stick to CLI and statically linking everything sans libc) to horrible (for anything involving GUI or heaven forbid games, or when linking to other libraries dynamically).
The worst of all offenders however is the entire NodeJS environment. The words "backwards compatibility" simply do not exist in that world, so if you want even a chance at keeping up with security updates you have an awful lot of churn work simply because stuff breaks left and right at each "npm update".
You say nothing seriously false, but perhaps depending on things like NodeJS just inherently means your software is going to be poor quality. If that is true, then both you and the PP are probably right. I tend to think software quality will be higher if you depend on a third-party collection (such as so-called Linux distributions) than a second-party aggregation (such as NPM).
Even the distributions have a hard time with the NodeJS environment and its relentless pace - and the more software gets written in JS the worse. When e.g. software A depends on library X@1.0 and software B on library X@1.1, and X has done breaking changes, what should a distribution do?
Hard forks in the package name (e.g. libnodejs-x-1.0 and libnodejs-x-1.1) are one option, but blow up the repository package count and introduce maintenance liability for the 1.0 version. Manually patching A to adapt to the changes in X works, but is a hell of a lot of work and not always possible (e.g. with too radical changes), not to mention if the work should be upstreamed, then licensing issues or code quality crop up easily which means yet more work. Dropping either A or B also works, but users will complain. And finally, vendoring in dependencies works also, but wastes a lot of disk space and risks security issues going unpatched.
And that's just for final software packages. Dependency trees of six or ten levels deep and final counts in the five digits are no rarity in an average NodeJS application.
Importing even the bare minimum introduces an awful lot of work and responsibility to distributions.
When NetBSD imported sqlite and sqlite3 into their base system that was a signal to me that SQLite is no-nonsense and reliable. That was many years ago, around 2011 I think. Not sure why SQLite is getting all the attention on HN lately. Usually more attention means more pressure to adopt so-called "modern" practices and other BS.
SQLite is interesting to me because like djb's software its author is not interested in copyrights.^1
The author disclaims copyright to this source code. In place of a legal notice, here is a blessing:

    May you do good and not evil.
    May you find forgiveness for yourself and forgive others.
    May you share freely, never taking more than you give.
Apparently this is not enough to convince some folks they can use the code (maybe they really are doing evil), and so there is also a strange set of "reassurances" on the website:
This part also rang some alarm bells for me. It makes me think the author is unable to see outside his bubble, and that feeling is only reinforced by the comments about Rust and the CoC in the Readme.
I'm all for minimizing friction for contributors, but when I read things like: "The few core developers they have do not work with modern tools like git and collaboration tools like Github", I wonder if the collaboration from someone who refuses to send a patch to a mailing list (because it is not what they are used to and they don't care to learn how) is really worth considering. I mean: someone who is not willing to move a few millimeters out of their comfort zone to make a contribution is, very likely, someone who has very little commitment or who will try to force their opinions and methods onto others.
The irony is that sqlite uses fossil which is more modern than git.
But really, I agree, the elephant in the room is that any time someone uses the term "old" or "not modern enough" or "legacy" it means they have a system they don't understand that they want to get rid of. Software does not "wear out".
It does but you have to go through the maintainers and they have to be in line with the core principles of SQLite and have the necessary code quality etc.
I.e. it's hard to the point where you can just say it's impossible for most people.
But what the author of the article fails to mention is that many of the things libsql wants to add to sqlite are in direct conflict with the core principles of sqlite.
E.g. SQLite: Max portability by depending on an _extremely_ small set of standard C functions. libSQL: let's add io_uring, a Linux-specific facility more complex than all the standard C functions SQLite depends on put together.
E.g. SQLite: Strongly focused on simplicity and avoidance of race conditions by having a serialized & snapshot isolation level without fork-the-world semantics (i.e. a globally exclusive write lock; see the small sketch after these examples). libSQL: let's make it distributed (which is fundamentally in conflict with that transaction model, if you want to make it work well).
E.g. SQLite: Small compact code base. libSQL: let's include a WASM runtime (which is also in conflict with max portability, and with simplicity/project focus).
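Here is that small sketch of the single-writer model, using Python's sqlite3 module. The file name and table are made up, and real SQLite locking is more nuanced (e.g. WAL mode still serves readers while a writer is active); this only shows that a second write transaction is refused while the first holds the lock.

```python
# Toy demo: only one connection can hold SQLite's write lock at a time.
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)   # autocommit; we issue BEGIN ourselves
writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN IMMEDIATE")                       # take the write lock
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(path, isolation_level=None, timeout=0)
try:
    other.execute("BEGIN IMMEDIATE")                    # second writer is refused...
except sqlite3.OperationalError as err:
    print("second writer blocked:", err)                # "database is locked"

writer.execute("COMMIT")                                # ...until the first writer commits
other.execute("BEGIN IMMEDIATE")
other.execute("INSERT INTO t VALUES (2)")
other.execute("COMMIT")
```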
> It does but you have to go through the maintainers and they have to be in line with the core principles of SQLite and have the necessary code quality etc.
Even so, I think they’ll prefer to rewrite the contribution. They need to be absolutely sure not to incorporate any copyright encumbered code by mistake.
> The funny thing about this comment is that SQLite is as close to the gold standard of software quality as we have in the open source world. SQLite is the only program that I've ever used that reliably gets better with every release and never regresses. Could it be that this is precisely because they don't use "modern tools" and accept outside contributions?
Reminds me of OpenBSD, who still primarily uses CVS for source control.
but SQLite IS using modern tools; you could say their VCS is more modern than git. It is just not compatible with a lot of git workflows, because it is focused on a development flow of its own.
It is also following modern best practice like:
- use the best tool for the job (i.e. not git or GitHub)
- consider upfront how the project can be maintained long term (i.e. realize that you don't have the resources to manage/moderate a public issue tracker/PRs and that you don't want to delegate this work to 3rd parties you barely know)
- keep things simple, i.e. a global exclusive write lock and a serialized isolation level (instead of subtle race conditions and/or fork-the-world handling etc.)
- test a lot, use fuzzing etc. (a small illustrative sketch follows below)
- limit features to you targeted use-cases to keep complexity in check (maintainability, bug avoidance)
- opinionated code style, formatting
- clear cut well defined dev/contribution flow (for the few which can contribute directly)
I.e. if we ignore superficial modern best practices like "use exactly this tool", I don't know which modern best practice it does not fulfill (*). Though some are not fulfilled in the way people are used to.
(*): Ok, maybe they don't keep to "prefer languages with more guard rails as far as possible". Though due to their targeted compatibility/portability, C is kinda the only option.
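A tiny, purely illustrative sketch of the fuzzing bullet above (nothing like the project's real fuzzers, which go much further): the idea is simply that garbage input must fail with a clean error rather than crash the process.

```python
# Toy fuzz loop: random "SQL" must raise a proper error, never crash the process.
import random, string, sqlite3

def random_statement(rng):
    alphabet = string.ascii_letters + string.digits + " ();,'\"*=<>-+"
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 80)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b)")

rng = random.Random(0)
for _ in range(10_000):
    try:
        conn.execute(random_statement(rng))
    except sqlite3.Error:
        pass   # a clean error is the acceptable outcome for garbage input

print("no crashes")
```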
This makes it sound like sqlite isn't using source control at all. What they are actually doing is using a more obscure source control program than git. Honestly who cares? Source control is source control.
You're conflating two different arguments: not using modern tools, and not accepting outside contributions. It's certainly possible that limiting contributions to a set of trusted contributors helps things move smoothly.
However it's not clear at all that using old tools has the same effect.
Also, Fossil is essentially implemented _in_ SQLite. Fossil is used to develop SQLite which is used to implement Fossil. It's a virtuous cycle. For the SQLite project, using Fossil is obviously superior to git. This doesn't mean that arbitrary projects should use Fossil over git.
The opposite can be said too, though: git arbitrarily won out over a bunch of other DVCS systems (e.g. hg), mainly because of bandwagon effects and marketing.
I have some of my great-grandpa's carpentry tools, and I use them often. I guess I should go out and replace perfectly good tools with new stuff from a big store like Home Depot or Lowe's?
People forget that Stanley #4 plane hasn't really changed in over 100 years. It's still one of the best tools out there.
We dumped CVS because it was a poor tool, for the time. Subversion was better. Then completely distributed systems became better, because connectivity and computational power came about.