With software I most often go from “heavy” to “light”. First I implement the features I need, then I try to optimize where possible. I start out using many libraries, and after I've got everything working I check whether I really need all of them; often a whole library can be replaced with a simpler custom implementation that still fits the needs.
A case of “stupid light” software is probably trying to use a static website when an optimized dynamic site is much more appropriate: you start to over-engineer a complex build workflow when all that's really needed to speed up the site is better caching.
I have some experience here, especially with the database point. When I developed Android apps, I tried to use flat text files too, but recently I discovered how awesome SQLite is. In some cases flat files are great, but when you need to parse those files or retrieve specific information from them, SQLite is often the better choice. And in many cases SQLite can replace complex PostgreSQL or MariaDB setups, especially when concurrent writes aren't needed.
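The trade-off can be sketched with Python's built-in sqlite3 module (the table and data here are made up for illustration): retrieving one specific record from a flat file means reading and parsing the whole file yourself, while SQLite answers the same question with a query.

```python
import sqlite3

# In-memory database for illustration; a real app would pass a file path,
# giving a single-file database just like a flat text file would.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
con.executemany(
    "INSERT INTO notes (title, body) VALUES (?, ?)",
    [("groceries", "milk, eggs"), ("todo", "fix the bike")],
)
con.commit()

# With a flat file we'd have to scan and parse every line to find this
# record; here the query engine does the lookup for us.
row = con.execute("SELECT body FROM notes WHERE title = ?", ("todo",)).fetchone()
print(row[0])  # fix the bike
```

The same single file then also gives you indexes, transactions, and ad-hoc queries for free, which is exactly where flat files start to hurt.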
>With software I most often go from “heavy” to “light”.
It works the same in hiking. You go on your first hike with _definitely_ all the equipment you need, and afterwards you trim down for the next one, until you hit the sweet spot.
And if you go too far, you are reaching the stupid light point.
Is "stupid light hiking" really a proper comparison? In the case of hiking, the hiker is making the choices and the hiker is the one who will have to live (or die) with them. In the case of software, too often it is a commercially oriented application developer making the choices, and a different person, the user, who must live with them. The hiker has control over her choices; the user has no control over the application developer's choices.
In some ways, the best programs are the ones I write for my own use only. I know the user better than anyone and I have full control over the design choices. Obviously, choosing not to use someone else's software is ineffective as a means of controlling someone else's design choices. Perhaps unsolicited "advice" found in blogs might have some influence, though.
I feel there is an obvious missing comparison -- I'm often not going on a life-or-death hike, I'm just going on a pleasant afternoon walk with friends.
The friend who brings all the serious hiking equipment, and even worse expects everyone else to bring such stuff, is really annoying in that situation.
Stupid light does not have to mean life or death. I was shaving a few ounces from my pack and switched to a hex tarp for my hammock. Got caught in a rainstorm where the wind shifted direction, and ended up getting soaked. I picked up a tarp with 'doors'. Similar mindset with code: we typically have to support what we write. When the war room hits because the dumb was stronger than expected, shortcuts can come back to bite one in the ass.
If the developer is marketing directly to the user, then the user has a choice: they can decide whether they want to use the app. If the developer is marketing to a third party that forces the user to use the app (e.g. an employer), then the user is powerless. The only exception is if the app locks the user in in some way.
I once heard someone say (paraphrased), "To some programmers RAM is like that couch in your grandmother's living room, with a plastic cover, that nobody's ever actually allowed to sit on."
Replace "RAM" with "electricity" and it becomes obvious why that quip is wrongheaded, at least for those of us who pay our own electric bill. "Developers" like that who don't care about externalities because they're not footing the bill waste millions of dollars worth of RAM, millions of dollars worth of electricity, etc. globally.
Programmers get paid (in money or respect) by other people like you, who are free to use or not use their software. I don't know what percentage of modern stacks is free to us, or how much it would cost if "they" did it the "right" way. Blaming them for using your RAM to do the job is like blaming a potato for using your enzymes to dissolve into nutrients.
As an aside, "stupid light" hiking is a great term for it. I recall a case where a British hiker died a long and lonely death in our country after he broke a leg a few days into a four-week journey. Locals had encouraged him to take a locator beacon (due to the rough terrain and remoteness), but he'd refused because it weighed too much.
If I am ever in a situation where I am considering SQLite, I will try to use DuckDB instead. It's a similar weight (the all-inclusive JARs are 6.9 MB for SQLite and 8.7 MB for DuckDB), and it also stores a database in a single file. It's different in many ways, but for me the killer feature is that it uses the PostgreSQL SQL syntax (and parser). This should make it very easy to migrate from DuckDB to PostgreSQL if that becomes necessary.
I currently work on a project based around SQLite, which suffers from that fact. Migrating it to PostgreSQL would be great, but would involve a lot of tedious work.
It is my understanding that DuckDB is an embedded in-memory OLAP ColumnStore designed to be primarily used with R/Python data frames. I noticed that they added JDBC support. Does this mean they've added persistence and other OLTP functionality?
I thought their "SQLite for Analytics" tagline was confusing and I'm certainly questioning my recollection now.
As far as I know, DuckDB has always supported files, and the full SQL standard you would need for OLTP, and Pandas is just one use case.
People do seem to have a wide range of impressions of what DuckDB is for, so I think it's fair to conclude that their messaging has not been very effective!
The problem is my recall and laziness. I didn't look into the details of what parts of HyPer were borrowed and I missed the adoption of the DataBlocks [1] storage format. DuckDB is designed to be an embedded HTAP (Hybrid Transactional/Analytical Processing) engine.
I think the project is immature and the documentation is sparse. I expect to see detailed documentation on default file locations and the recovery process for an OLTP/HTAP engine and I didn't see anything like that in the DuckDB docs. My bad, I should have checked my assumptions.
I would not adopt DuckDB as an OLTP replacement for either SQLite or PostgreSQL without a strong OLAP component and some serious performance/load/integrity testing, but it sounds like you are confident in its abilities. Thanks for sharing your experience.
> the all-inclusive JARs are 6.9 MB for SQLite and 8.7 for DuckDB
As neither seems to be a Java program, what does this refer to? A JAR with the DB bindings, or with binary code for the relevant DB compiled into the JAR?
Or do you mean a Java library that can read the relevant DB format without needing a DB server?
Both SQLite and DuckDB are available as JARs containing native libraries for the database, for some range of platforms, and Java code which unpacks the library at runtime and uses it.
Something I've used in similar situations is LevelDB or RocksDB, if you just need a really small key-value store. It's a really tiny library that gets you really fast write performance and some light atomic operations.
This article, in the age where a chat app eats gigabytes of RAM, is almost insulting.
And it's not as if it doesn't matter because we have terabytes of RAM now. A 2020 laptop I'm eyeing only comes in 8 GB/16 GB soldered-RAM configurations, and the 16 GB version has been sold out since release. Which is not surprising, since the browser alone easily needs 8 GB in a medium-heavy session.
Stupid light? Call me when common software isn't stupid heavy.
I'm not insulted. The concern is valid. Its range of application is just so fringe as to make this a near-academic discussion.
I recently tried using a small JS module to identify a file's mime-type. It pulled in 68MB of dependencies. And that's just one of many things an actual application will need. W.T.F.
So I agree, this is not the problem we need to tackle right now.
>this is not the problem we need to tackle right now
I mean, it's clearly not the problem _you_ need to tackle right now, but it's a real problem that other people need to tackle.
There are still people hand-rolling broken CSV parsers, manually re-implementing basic STL containers and other stupid "lightness" even when it doesn't make sense. This is not an academic discussion.
Is the perceived "lightness" an issue there, or good old NIH syndrome?
'cause a hand-rolled CSV parser vs. a third-party one is often not a lightness issue; you're using a library either way, and your own library is not necessarily lighter.
The article was not even talking about that. The "lightness" approach in your examples would be sticking to arrays instead of using containers, or using Java serialization back when it was introduced (language feature, no extra weight!) instead of writing to CSV, which requires a parser.
I feel like you are taking about a different problem here, and the problem TFA addresses is indeed academic.
(Edit: the author of the article shies away from giving examples. The only example -- in-house gettext alternatives -- was definitely not a lightness concern where I've seen it; consider that the i18n framework in Android isn't simply gettext, for reasons that have nothing to do with TFA.)
Needless criticism of software environments/languages should be avoided, as it causes flame wars; but
I think software such as Node/NPM has serious bloat/security issues that need to be criticised, and by extension stuff built on Atom. Any software build has to weigh dev time against software size, but I think the reason NPM is so popular for apps is not that it truly saves time, but that it uses existing web-dev technology, which ultimately means the technology is used in a context it wasn't designed for (and a lot of it was janky design-by-committee stuff even then), with various tricks/hacks/magic/shims used to make it work.
Hopefully there are moves towards more WebASM in the future? But that still requires sane frameworks, or even better - libraries.