I don't know what the future holds, but I do know that junior devs are cheaper than senior ones. I could envision many junior devs coding (with the help of AI) and a few senior devs doing mostly code reviews.
I don't use AI, but I know an IT manager who uses it with code snippets and prompts like: "explain what this code does". He says it works great.
That seems to play into the kind of tools that would help junior devs become senior devs. But again, I really don't know. AI may fade away like pet rocks...
> I don't know what the future holds, but I do know that junior devs are cheaper than senior ones. I could envision many junior devs coding (with the help of AI) and a few senior devs doing mostly code reviews.
Honestly, that seems like a recipe for dysfunction. You have junior devs relying on a crutch and failing to develop their own skills, and you have seniors getting their skills dulled/stagnating, because code reviews aren't how you develop those skills either (they're just a limited application of existing skills). Also, code reviews are a drag, and I can't imagine someone staying motivated if that's literally their whole job.
That sounds horrible. Personally, when I have my senior dev hat on, doing code reviews is very far down the list of rewarding work. I’d much rather have an LLM that could do that for me.
I think the article makes an excellent point. But it raises the question:
Are there compiler flags to disable the 'tests'? If not, it would sure be nice to have them to get the fastest 'make a change -> run test' loop. It would be the best of both worlds. (Unless the developer forgets to run the tests once in a while...)
There is the opposite: you can run just the correctness checks, without compiling, by doing `cargo check` (though your IDE is probably already doing that for you). But compiling without those checks isn't really possible, since the later stages of the compiler rely on the guarantees established by the earlier stages. You can't skip type checking and still know that you're generating valid code.
And as much as people like to talk about Rust compile times, it's now actually very fast at compiling small incremental changes in debug mode. The bigger pains are when you are building in release mode, or when you are building from scratch (e.g. first build, updated compiler, updated all your libraries, etc.).
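For anyone who hasn't tried it, the check-only loop mentioned above looks roughly like this. A minimal sketch, assuming a standard Cargo project; these are stock Cargo subcommands, nothing project-specific:

```
# Type and borrow checking only: no codegen, no linking. Fastest feedback loop.
cargo check

# Full debug build: the same checks, plus codegen and linking.
cargo build

# Release build: adds optimization, and is typically the slowest of the three.
cargo build --release
```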
> it's now actually very fast at compiling small incremental changes in debug mode.
This breaks down quickly at scale. Once you reach significant code complexity, even with dedicated build machines, remote Bazel builds, and distributed caching, you're going to wait a long time for single-line changes, even in a debug `cargo check`. There are some patterns that play well with incremental compilation, but they're easy to break in practice.
I predict that as more devs move from writing Rust as a hobby (or with a focus on smaller libraries) towards larger production systems, we'll see an uptick in frustration with build times. But I'm happy that the Rust leadership is aware of the issue (e.g. it was listed as one of the main points in the most recent survey, which made me immensely happy).
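If you want to see where the time actually goes in a large workspace, Cargo can emit a per-crate timing report. A quick sketch, assuming a reasonably recent toolchain:

```
# Writes an HTML report (under target/cargo-timings/) showing how long each
# crate took to compile and how much of the build ran in parallel.
cargo build --timings
```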
As for why: a power loss will likely only affect filesystems that are being written to, so there's less risk of ending up with an unbootable system if / and /usr are not being written to.
Even more so if they are mounted read-only.
I don't know about now, but I've seen people enable the softdep mount option only for filesystems that need some extra speed, and not on / and /usr. (The article does this for /home; look at the listed mount options.)
It's also nice to mount /tmp and /home with "nosuid" and "nodev", and some filesystems even with "noexec" (see the sketch after this comment).
Edit:
Note that I'm not advocating for it. I'm merely listing reasons one could have.
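For illustration, hypothetical fstab entries along those lines. This is an OpenBSD-style sketch using FFS and disklabel UIDs; the duid, partition letters, and exact option sets are placeholders, not a recommendation:

```
# /etc/fstab (sketch): restrictive options on /tmp and /home
52fdc20a1c3d4e5f.b none   swap sw
52fdc20a1c3d4e5f.a /      ffs  rw                        1 1
52fdc20a1c3d4e5f.d /tmp   ffs  rw,nodev,nosuid,noexec    1 2
52fdc20a1c3d4e5f.e /home  ffs  rw,nodev,nosuid,softdep   1 2
```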
Yeah. I don't quite remember now, but I think many of the packages don't work if you don't have a separate /usr/local fs, since the packages need wxallowed.
Unless you want to enable wxallowed on / or /usr.
Then of course there's the fact that they don't do journalling. It's not my area of expertise, but if literally everyone else is doing journalling instead of softdep, then maybe they're right.
I've used OpenBSD off and on since 2.1, and I've experienced much more data loss on it than on Linux. So yeah I'm also not a fan of OpenBSD's filesystems.
I feel like OpenBSD doesn't have enough staffing to do all the right things (e.g. Wayland, Bluetooth), so instead they do what they can, but do it right.
Which is fair enough, but it will become more and more like retro computing with every year that passes.
I honestly have a lot of doubts here. A proper reply would have to be lengthy, so I'll try to highlight at least a few points and keep it short.
* Finding FreeBSD people, who are probably the best match here, is somewhat of a puzzle on its own.
* Common approaches in the modern world would very likely fail: cloud-init, the systemd units you'd want to adopt, not to mention Docker/podman, and most likely the monitoring/metrics tooling too. I haven't checked, but I'm very unsure NewRelic or Datadog are compatible, so admins will not be able to use their previous skillset effectively.
* Convincing people to join a team supporting OpenBSD systems can be somewhat tricky. I'm basing this on my own feelings here: I'd raise the salary bar 2x, and even then I'd think twice about whether I should spend my time on that kind of experience.
* Leaving performance aside, I bet it would require extra hardware planning, instead of buying any Supermicro/HP/whatever server and being sure it's Red Hat compatible. There must be very serious reasons to put yourself through chores of this sort, and reasonable admins have to consider such risks and delays for rollouts of products in production.
I guess that's part of "adaptation", and for humans it's probably not a big deal.
But I feel that would be just the tip of the iceberg. The bottom part would be: how much of the other tooling, say Ansible, will fail? How many homegrown scripts will fail?
The amount of effort seems to be very high for, let's say politely, unknown and unclear benefits.
OpenBSD has certain flags it sets on various partitions to enable/disable performance and security features. Sort of like setting `noexec` on /tmp, or using XFS instead of BTRFS on your database volume, etc. Like most OpenBSD things, it stays way over on the "correctness/security" side of the security vs. convenience scale.
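Roughly what that looks like on a default install. A sketch only: the device names, partition layout, and exact option sets vary by machine and release:

```
$ mount
/dev/sd0a on /          type ffs (local)
/dev/sd0d on /tmp       type ffs (local, nodev, nosuid)
/dev/sd0e on /home      type ffs (local, nodev, nosuid)
/dev/sd0g on /usr       type ffs (local, nodev)
/dev/sd0h on /usr/local type ffs (local, nodev, wxallowed)
```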
65 and retired 2 years now. Worked as a developer for the last 35 years before that. I lived for the times I could get into the zone or flow state. Maybe 60 to 70 percent of the job was other things, and I did not enjoy those parts nearly as much. Still tinker at home now, hoping to move from Windows to Linux someday...