My side projects from 2012-2017 cannot be built or run because of dependencies. My jsbin repo with lots of experiments can't be run anymore either. But I still have the SQLite database.
I forgot to pin dependencies while I was working. It would take a lot of trial and error and effort to get back to where I was; otherwise I'd have to rewrite the experiments.
Take lots of screenshots and screencasts to preserve your software!
A screenshot of a multiplayer editor with transclusion that I can't build anymore.
It's a good habit in and of itself, but there are still objective heuristics for evaluating software quality.
Simplicity for example: if to solve the exact same problem, with no (impacting) performance penalty, one solution is considerably simpler (meaning, more straightforward) than another, then there's a clear winner.
The amount of automated tests is another objective way of gauging software quality.
I think I am contradictory when it comes to software: I don't enjoy maintaining something that breaks all the time: dependencies, system upgrades, deployment scripts, anything that isn't working 100% reliably every time.
So my ideal system is running a simple binary against a file or a SQLite database and having it work reliably every time. Not a complicated microservice architecture with lots of indirection and things that must be kept running over the network.
But I have to balance this with my hobby of designing multithreaded software.
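That "simple binary against a SQLite file" ideal can be sketched with nothing but the standard library (the file name is illustrative):

```python
import sqlite3

# One process, one file on disk, no network: the entire "deployment".
conn = sqlite3.connect("experiments.db")  # illustrative file name
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("works the same every run",))
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
conn.close()
print(count)
```

No daemon to supervise, no port to keep open: if the binary and the file are both present, the system works.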
> Simplicity for example: if to solve the exact same problem, with no (impacting) performance penalty, one solution is considerably simpler (meaning, more straightforward) than another, then there's a clear winner.
Not so clear: some people say `foreach` on arrays is simpler than single line map/reduce/comprehensions. Others say the opposite.
Some people say a monolith is simpler than microservices. Others say the opposite.
Some people say Python is simpler. Others say Java is simpler.
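For instance, in Python these two snippets solve the exact same problem, and which one counts as "simpler" is a matter of taste:

```python
orders = [10, 20, 30]

# Style A: explicit loop ("foreach")
totals = []
for order in orders:
    totals.append(order * 2)

# Style B: one-line comprehension ("map")
totals_b = [order * 2 for order in orders]

assert totals == totals_b  # identical result, different style
```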
I mean, a response to an earlier comment of mine on a different story was from someone who thinks ORMs are simpler than SQL. I think the opposite.
Until we can get everyone to agree on what 'simple' means, it's not that useful a metric.
I should have emphasized *considerably*, my bad: the goal was to rule out the foreach vs. map type of issue, where both are essentially equivalent and it's more a matter of style.
What I had in mind was things like removing 20/30 lines of literally useless code, or avoiding sophisticated OOP patterns when a few bare functions on `struct`s are sufficient. I've seen both cases in practice (eh, I'm guilty of it myself as well): they're often the result of overthinking, or of "trying to make code reusable."
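A hypothetical before/after of the second case (all names invented for illustration): a strategy-style class hierarchy collapsed into a plain struct plus a bare function:

```python
from dataclasses import dataclass

# Before (sketch): an interface plus a subclass just to compute one number.
class PriceStrategy:
    def price(self, qty):
        raise NotImplementedError

class BulkPriceStrategy(PriceStrategy):
    def __init__(self, unit):
        self.unit = unit

    def price(self, qty):
        return self.unit * qty * 0.9

# After: a plain struct-like dataclass and a plain function do the same job.
@dataclass
class Item:
    unit: float

def bulk_price(item: Item, qty: int) -> float:
    return item.unit * qty * 0.9

assert BulkPriceStrategy(10).price(5) == bulk_price(Item(10), 5)
```

Both solve the same problem with the same performance; the second just has fewer moving parts.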
For microservices vs. monolith, I don't think they are comparable as-is with respect to complexity. Once the actual requirements are known, and once we know how each pattern would fulfill those requirements, then we can compare. Before that, it's a "C++ vs. Rust" type of debate: what is the real situation?
Regarding ORMs vs. SQL, I tend to agree with you: we often can't pretend an ORM is a perfect ("non-leaky") abstraction. In some use cases it is, but as soon as you move past toy projects, you tend to have to understand how both SQL and the ORM work. That's much more work than just dealing with SQL.
The same goes for dependencies in general, though some dependencies are essentially mandatory (e.g. a TCP stack, a disk driver, an HTTP lib, etc.).
I would like to see some kind of query AST for this stuff in a query engine, with semantics such that its ops can be fused together for efficiency. For example, like a Clojure transducer.
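A rough sketch of the fusion idea in Python, in the spirit of Clojure's transducers: each op wraps the reducing step, so composed ops run in a single pass over the data with no intermediate collections. All names here are illustrative, not from any real query engine:

```python
def mapping(f):
    # Wrap a reducing step so it sees f(x) instead of x.
    def xform(step):
        return lambda acc, x: step(acc, f(x))
    return xform

def filtering(pred):
    # Wrap a reducing step so it skips items failing the predicate.
    def xform(step):
        return lambda acc, x: step(acc, x) if pred(x) else acc
    return xform

def compose(f, g):
    return lambda step: f(g(step))

def transduce(xform, step, init, xs):
    rf = xform(step)
    acc = init
    for x in xs:
        acc = rf(acc, x)
    return acc

# Keep the evens, double them, sum them: one pass, ops fused.
pipeline = compose(filtering(lambda x: x % 2 == 0), mapping(lambda x: x * 2))
total = transduce(pipeline, lambda acc, x: acc + x, 0, range(10))
print(total)  # 0 + 4 + 8 + 12 + 16 = 40
```

The filter and map never materialize a list between them; a query planner doing this over an AST could fuse ops the same way.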
I think my technical perspectives have moved in a similar direction: keep things extremely simple with minimum moving parts.
I maintained (and automated upgrades for) a RabbitMQ cluster, and while it is powerful software, it is operationally expensive. For a side project you can probably just batch process in a cron job.
If I were to take the approach in this blog post, I would want everyone on the team to be extremely familiar with the model of task running: stuck jobs, timeouts, duplicate jobs, client disconnects and retries, and stuck "poison" jobs all seem like issues you might face.
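A minimal sketch of the model everyone would need to internalize (names and constants invented, not from the post): each job carries an attempt count and a claim timestamp, so stuck, duplicate, and poison jobs all have a defined fate:

```python
from dataclasses import dataclass
from typing import Optional

MAX_ATTEMPTS = 3           # illustrative: after this, a failing job is "poison"
VISIBILITY_TIMEOUT = 30.0  # illustrative: seconds before a claim counts as stuck

@dataclass
class Job:
    id: str
    attempts: int = 0
    claimed_at: Optional[float] = None
    done: bool = False

def claim(job: Job, now: float) -> bool:
    """Decide whether a worker may take this job right now."""
    stuck = job.claimed_at is not None and now - job.claimed_at > VISIBILITY_TIMEOUT
    if job.done or (job.claimed_at is not None and not stuck):
        return False  # duplicate delivery: someone else holds a live claim
    if job.attempts >= MAX_ATTEMPTS:
        return False  # poison job: stop retrying, park it for a human
    job.claimed_at = now
    job.attempts += 1
    return True

j = Job("job-1")              # illustrative id
assert claim(j, now=0.0)      # first delivery wins
assert not claim(j, now=1.0)  # duplicate refused while the claim is live
assert claim(j, now=100.0)    # claim timed out (worker died?), so it is retried
```

The point is less the code than the invariants: every failure mode above maps to exactly one branch.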
I'm using a PaaS, so I didn't want to pay the extra money for a cron job. Maybe not a wise choice, but here we are.
Totally agree about the issues I might face. I'm glad that the Clojure REPL is a thing, so I can test out all of a job's functionality before sending it off to async land.
Just from thinking about the architecture of Rama, I feel it would scale very well.
Data parallelism, partitioning, and sharding are very effective scaling techniques.
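The core of partitioning is just a stable hash that routes each key to one shard, so work for the same key always lands in the same place (a sketch; the partition count is illustrative):

```python
import hashlib

NUM_PARTITIONS = 8  # illustrative shard count

def partition_for(key: str) -> int:
    # A stable hash (unlike Python's randomized built-in hash()) so the
    # same key maps to the same partition across processes and restarts.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

# Work for each key is always routed to the same shard, so per-key
# state never needs cross-shard coordination.
p = partition_for("user-42")
assert 0 <= p < NUM_PARTITIONS
assert partition_for("user-42") == p  # deterministic routing
```

Because each shard owns its keys outright, shards can process their streams in parallel without locking each other.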
Nathan, I would appreciate writing about the mental model of mapping software behaviours onto the streaming data approach, because it is a different paradigm.
I can't read the DSL yet and know what is going on!
The best documentation on the mental model of using Rama is the last page of the tutorial, linked below. However, I would recommend going through the whole tutorial rather than starting there.
I think our programming languages haven't caught up to an ideal notation of semantics of computer system behaviour at a macro level.
Design patterns (Gang of Four, etc.) and software architectures can be so complicated that they require extreme discipline (and cognitive load) to keep understandable.
Which pattern did you find complicated? Most of them just come down to "wrap it/them with a new object to encapsulate the new/shared state or provide the new behavior". The rest are probably made obsolete by modern programming languages.
https://github.com/samsquire/liveinterface?tab=readme-ov-fil...