With software I most often go from “heavy” to “light”. First I implement the features I need, then I optimize where possible: I start out using many libraries, and once everything works, I check whether I really need them all, sometimes replacing a whole library with a simpler custom implementation that still fits the needs.
A case of “stupid light” software is probably insisting on a static website when an optimized dynamic site is much more appropriate. You start to over-engineer a complex build workflow when all that’s really needed to speed up the site is better caching.
I have some experience with the database point in particular. When I developed Android apps, I tried to use flat text files too, but then I discovered how awesome SQLite is. In some cases flat files are great, but as soon as you need to parse them or retrieve specific information from them, SQLite is often the better fit. And in many cases SQLite can replace a complex PostgreSQL or MariaDB setup, especially when concurrent writing isn’t needed.
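For illustration, a minimal sketch of the "retrieve specific information" case, assuming the org.xerial sqlite-jdbc driver is on the classpath (the notes.db file, the notes table, and the query are all made up for the example):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NoteLookup {
    public static void main(String[] args) throws Exception {
        // Instead of scanning a flat file line by line and parsing each
        // record yourself, let SQLite do the filtering and sorting.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:notes.db");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT body FROM notes WHERE tag = ? ORDER BY created_at DESC LIMIT 10")) {
            ps.setString(1, "hiking");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("body"));
                }
            }
        }
    }
}
```

With an index on `tag`, this stays fast as the data grows, which is exactly where hand-parsed flat files start to hurt.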
It works the same in hiking. You go on your first hike with _definitely_ all the equipment you need, and afterwards you trim down for the next one, until you hit the sweet spot.
And if you go too far, you reach the stupid light point.
In some ways, the best programs are the ones I write for my own use only. I know the user better than anyone and I have full control over the design choices. Obviously, choosing not to use someone else's software is ineffective as a means of controlling someone else's design choices. Perhaps unsolicited "advice" found in blogs might have some influence, though.
The friend who brings all the serious hiking equipment, and, even worse, expects everyone else to bring the same stuff, is really annoying in that situation.
Now, what programmers do to their own couches or RAM I couldn't care less about, but generally they write software for other people, and for other people's RAM.
I currently work on a project based around SQLite, which suffers from that fact. Migrating it to PostgreSQL would be great, but would involve a lot of tedious work.
I thought their "SQLite for Analytics" tagline was confusing and I'm certainly questioning my recollection now.
People do seem to have a wide range of impressions of what DuckDB is for, so I think it's fair to conclude that their messaging has not been very effective!
I think the project is immature and the documentation is sparse. For an OLTP/HTAP engine I'd expect detailed documentation on default file locations and the recovery process, and I didn't see anything like that in the DuckDB docs. My bad, I should have checked my assumptions.
I would not adopt DuckDB as an OLTP replacement for either SQLite or PostgreSQL unless the workload had a strong OLAP component, and not without some serious performance/load/integrity testing, but it sounds like you are confident in its abilities. Thanks for sharing your experience.
As neither seems to be a Java program, what does this refer to? A jar with the DB bindings, or one with binary code for the relevant DB compiled into the jar?
Or do you mean using a Java lib that can read the relevant DB format without needing a DB service?
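If this is about SQLite, the usual answer is the latter: the org.xerial sqlite-jdbc jar bundles the compiled SQLite library and loads it at runtime, so there is no DB service involved at all. A rough sketch (the app.db file and kv table are invented for the example):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EmbeddedDbDemo {
    public static void main(String[] args) throws Exception {
        // sqlite-jdbc ships the native SQLite library inside its jar and
        // extracts/loads it at runtime; the JDBC URL just points at a plain
        // file on disk, with no separate database process anywhere.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
             Statement st = conn.createStatement()) {
            st.executeUpdate("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)");
            st.executeUpdate("INSERT OR REPLACE INTO kv VALUES ('greeting', 'hello')");
        }
    }
}
```

There are also pure-Java engines (H2, for instance) where even the storage engine itself is Java, so nothing native needs to be compiled in.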
And it's not as if this stopped mattering because we have terabytes of RAM now. A 2020 laptop I'm eyeing only comes in 8GB/16GB soldered-RAM configurations, and the 16GB version has been sold out since release. Which is not surprising, since the browser alone easily needs 8GB in a medium-heavy session.
Stupid light? Call me when common software isn't stupid heavy.
I recently tried using a small JS module to identify a file's MIME type. It pulled in 68MB of dependencies. And that's just one of many things an actual application will need. W.T.F.
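For the common cases this really doesn't need a dependency tree at all. A hedged sketch in Java: the JDK's built-in Files.probeContentType plus a small hand-rolled magic-byte fallback (the format list is deliberately minimal; extend as needed):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class MimeSniffer {
    public static String sniff(Path file) throws IOException {
        // First try the JDK's built-in detector. It is platform-dependent
        // and may return null, hence the fallback below.
        String fromJdk = Files.probeContentType(file);
        if (fromJdk != null) return fromJdk;

        // Fall back to a few well-known magic numbers at the start of the file.
        byte[] head = new byte[4];
        try (InputStream in = Files.newInputStream(file)) {
            int n = in.read(head);
            if (n >= 4 && head[0] == (byte) 0x89 && head[1] == 'P'
                    && head[2] == 'N' && head[3] == 'G') return "image/png";   // PNG signature
            if (n >= 3 && head[0] == (byte) 0xFF && head[1] == (byte) 0xD8
                    && head[2] == (byte) 0xFF) return "image/jpeg";            // JPEG SOI marker
            if (n >= 4 && head[0] == '%' && head[1] == 'P'
                    && head[2] == 'D' && head[3] == 'F') return "application/pdf"; // "%PDF"
        }
        return "application/octet-stream";
    }
}
```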
So I agree, this is not the problem we need to tackle right now.
I mean, it's clearly not the problem _you_ need to tackle right now, but it's a real problem that other people need to tackle.
There are still people hand-rolling broken CSV parsers, manually re-implementing basic STL containers and other stupid "lightness" even when it doesn't make sense. This is not an academic discussion.
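For anyone wondering what "broken" means here: the classic failure is splitting on commas without handling quoting. A tiny demo (the sample line is made up):

```java
public class NaiveCsvDemo {
    public static void main(String[] args) {
        // RFC 4180 allows commas inside quoted fields, which is exactly
        // where a split-on-comma "parser" breaks.
        String line = "1,\"Doe, John\",active"; // three fields

        String[] fields = line.split(",");
        System.out.println(fields.length); // prints 4, not 3: the quoted name got split
    }
}
```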
'cause a hand-rolled CSV parser vs. a third-party one is often not a lightness issue; you're using a library either way, and your own library is not necessarily lighter.
The article was not even talking about that. The "lightness" approach in your examples would be sticking to arrays instead of using containers, or using Java serialization back when it was introduced (language feature, no extra weight!) instead of writing to CSV, which requires a parser.
I feel like you are talking about a different problem here, and the problem TFA addresses is indeed academic.
(Edit: the author of the article shies away from giving examples. The only example -- in-house gettext alternatives -- was definitely not a lightness concern where I've seen it; consider that the i18n framework in Android isn't simply gettext, for reasons that have nothing to do with TFA.)
I think software such as Node/NPM has serious bloat/security issues that need to be criticised, and by extension stuff built on Atom. Any software build has to weigh dev time against software size, but I think the reason NPM is so popular for apps is not that it truly saves time, but that it reuses existing web-dev technology. Which ultimately means the technology is used in a context it wasn't designed for (and a lot of it was janky design-by-committee stuff even then), with various tricks/hacks/magic/shims used to make it work.
Hopefully there are moves towards more WebASM in the future? But that still requires sane frameworks, or, even better, libraries.