"Great nutritious technologies to use: Make, Emacs, Lisp, CLI"
I personally hate Make; it's burned me too many times. Now I use CMake, and I haven't been burned in years.
And is it candy or an olive that I like VSCode and not Emacs (not that I've ever tried Emacs, I just don't feel like investing in that ecosystem)?
I agree the front end web framework churn is out of control; hopefully it will stabilize over time. But come on, don't just rip on random scripting languages that have been around for 25 years.
But who is writing it and why they are writing it are inextricably linked with the language. Every language has a culture and a reason for existing; those are pressures that bear on the person writing the code and on the code produced, just as much as the person writing the code bears on the code and the language.
Whether or not, say, modern Java encourages good code is neither here nor there: not only is there resistance to learning new things, there is resistance to using the new things properly. Java has a history of bloated, inefficient code for a reason, and when you use Java, the abstractions you use will alter the code that you write. It's impossible to get away from 'bad code' without rewriting things, and then you're not coding your new project, you're rewriting someone else's. It's also 'good practice' to use x y z language idioms, but there are cases where such idioms make the code much harder to read and much less clean.
As an example from C: I like POSIX, but dirname and basename are two ridiculously shit functions (they're allowed to modify the buffer you pass in, and may return pointers to static storage). If I choose to use them, the rot starts infesting my code, and everything that deals with them has to account for that, which itself causes a bias towards more obtuse code.
Good languages kinda enforce some basic rules, and prevent developers from making silly mistakes - because we all make them, all the time. Those languages usually trade off performance for safety, though.
We should pick the safest language that prevents errors and mistakes, as long as we stay within the allowed performance bounds.
I generally prefer statically typed languages just because there is less ambiguity, and the function definitions help you greatly - you know what you put in, and what it returns.
bash and DevOps code have their own special place in hell. bash has only one advantage - it works on any Linux distro out of the box, and that's it. It is an unreadable, untyped mess.
It's the thing that attracts a lot of beginners who may or may not be interested in the more serious theory that makes programming what it is.
The core data types actually strike a really excellent balance between representing the major distinct paradigms (named collections, unnamed collections, strings, numbers, booleans) while giving you maximum flexibility within them. Its standard library - array functions in particular - has become very mature. FP and OOP concepts are both quite well-supported, and doing async stuff (client or server or otherwise) with promises just smashes every other language I've used in terms of ergonomics.
Much of the above could be said about Python too, but regardless.
TypeScript and Python (with typehints) are the current favorites. Golang seems to be getting up there for me too, but I don't have enough experience with it to actually be able to say that.
There's half a dozen different syntaxes for importing/exporting functions, creating functions, creating objects, etc., and it's all fair game. Then there's the asynchronous nature, so sometimes it's async/await, and sometimes it's promises, and sometimes it's callbacks, sometimes all happening in the same function.
And then on top of that there's all the front-end web frameworks which are like, totally un-opinionated, man, except for the parts where they're implicitly super opinionated.
I'm not saying that a language should only offer one approach to a problem, but there should be general guardrails and guidelines. A shared vocabulary. None of that exists in modern JS.
"There are only two kinds of languages: the ones people complain about and the ones nobody uses."
- Bjarne Stroustrup
The fact that we have things like the Wayback Machine is a testament to how this approach, while terrible for developers starting out or working in a legacy codebase, has enabled us to preserve what is now a significant portion of contemporary cultural history. AFAIK there is no similar archive of programs for, say, Java or C.
Semicolons could be considered idiomatic or stylistic, for example. Requiring strict equality operators feels more like an idiomatic way to avoid bugs than a simple whitespace preference. ESLint certainly has a number of rules that are more idiomatic than stylistic.
Of course it evolved. It’s easy to evolve if it was literally nothing in the first place.
It's like if someone took the BASIC my calculator offered back in the day and turned it into a general purpose programming language.
I wonder how many devs are struggling to debug and fix bad code written using Promise and async features. With care you can use these features safely and productively, but it is also very easy to make subtle mistakes that cause intermittent faults or strange corner case errors.
I would argue that anything for phones is similarly braindead.
While the languages are okay, the APIs, UI idioms, and general ecosystem seem to have a half-life of about 6 months. If you step away from programming phones for 18 months, it's like you're learning everything from scratch.
(Side note: All right, Groovy should be taken out and shot and the idiot who foisted it on the Android build system needs to be taken to the woodshed and beaten severely.)
I think your beef there should really be with Gradle, or at least, how it is using Groovy. The way Groovy is implemented as a DSL for Gradle is, in my view, toxic. Almost everything is implicit behavior, crucial aspects rely on totally incidental "magical" features that just make things happen when you invoke some cryptic incantation that has an unobvious connection to its context.
Groovy itself is pretty good and IMHO much better than Java for most purposes.
If you were any good 12 months ago, you will pick up any of the trends quite easily in a week or so (partially because you have also predicted the future).
Common make fail #1
The project you landed on has been using some souped-up supermake with a ton of features (which were used with wild abandon) and some subtly different semantics for (say) macro expansion and variable assignment (whose dependency scope is really difficult to determine). The build system has grown to a recursively gnarly Turing-complete mess.
So when you go to update the tooling, you find out that this supermake is (a) a commercial product, and (b) the company has gone out of business, or adopted a predatory per-seat licensing scheme, or no longer supports your platform, or has changed the product so significantly that you might as well port everything to a different version of make anyway.
Never happened to me.
Common make fail #2
I was regularly dealing with bugs in multi-million-line makefiles. Make, you see, has no debugger; you can't step through things to find out what's going on. To debug an issue you insert print (echo) statements and look at the output, and/or gronk through many, many lines of make's internal structures. Why didn't file A compile, and why did file B compile twice? It could take a long time to figure out.
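Concretely, the "print statements" are things like this (a rough sketch; the variable and file names are invented):

    # printed while the makefile is being parsed
    $(info OBJS is [$(OBJS)])

    # $? expands to the prerequisites that are newer than the target
    %.o: %.c
        $(warning rebuilding $@ because of: $?)
        $(CC) $(CFLAGS) -c $< -o $@

That, plus `make -p` to dump the whole internal database and grep through it.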
Common make fail #3
"Builds are too slow, we need to speed them up."
So you go parallel. Hey, this is easy. Except for those libraries there (they need to be done first). And those headers are generated, so those need to come first, but first first. And everything depends on these libraries getting built, so need to wait for the linker here, here and here. Pretty soon you have a dependency hairball that nobody understands and that people are afraid to change because things always break. Eventually you get to a point where "yeah, building twice usually seems to fix that issue..." and that's when you know the project has grown too damned big for its shell again.
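The hairball is usually a handful of missing edges that nobody wrote down. Spelled out, each one is tiny (a sketch; all names invented):

    # every object needs the generated header to exist first
    $(OBJS): gen/version.h
    gen/version.h: version.in
        ./gen-version $< > $@

    # and the final link has to wait for the library
    app: $(OBJS) libcore.a
        $(CC) -o $@ $(OBJS) libcore.a

Leave any one of those out and `make -j8` works right up until it doesn't.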
Common make fail #4
You die a little each time that you get screwed by tab-versus-space. And cross-platform . . . umm, what year is it? Then someone smugly mentions some XML-based tool to you, and you snap, because that tool is worse in every single measurable dimension and a few more besides, but the developer in question doesn't question things because the tool uses Holy XML and Blessed Java (well, some version of Java, anyway, you can probably still download it from somewhere), and who cares if it's just the same set of problems with a different color of paint slapped on top and the usual Reddit community dedicated to the elimination of all heretical views. Support? Who needs that, just read the source.
Seriously, I haven't seen any build tools in the past 30 years that have done a great job and that were a joy to use. CMake, Ant, Gradle, make-of-the-month-club, whizzy vendor-locked tools, bespoke in-house constructions, they have all been terrible. I see little hope, entropy has won this one.
All of these fails come from not starting by reading the manual, and then writing an idiomatic makefile.
Once you understand how to use auto-generated .d files, you'll never have a multi-million-line makefile.
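The whole trick is a few lines (a sketch for a plain C project; `app` and the file layout are just examples):

    # have the compiler emit a .d file beside every .o it builds
    CFLAGS += -MMD -MP
    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)

    app: $(OBJS)
        $(CC) -o $@ $(OBJS)

    # pull in the generated dependency fragments; '-' ignores missing ones
    -include $(OBJS:.o=.d)

Every compile refreshes its own header dependencies as a side effect, so nobody ever has to list headers by hand.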
Once you understand the built-in lex and yacc rules, you can generalize them to whatever code generator you are using.
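For instance, modeled on the built-in `%.c: %.y` rule but pointed at your own tool (`mygen` here is a stand-in for whatever generator you use):

    # one pattern rule; both outputs come from a single invocation
    %.c %.h: %.gen
        ./mygen $<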
The remaining thing (not spelled out in the manual) is to avoid recursive sub-make, and to use the include directive instead. "Recursive Make Considered Harmful" is a helpful paper, but it should have had the subtitle "a practical alternative", because people cite the title without reading the whole paper.
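The non-recursive layout is less code than people expect (a sketch with a hypothetical directory layout):

    # top-level Makefile: one make invocation, one complete graph
    OBJS :=
    include src/module.mk
    include lib/module.mk

    app: $(OBJS)
        $(CC) -o $@ $(OBJS)

    # each module.mk just appends its files, with paths from the top:
    #   OBJS += src/main.o src/parse.o

Every rule lives in a single dependency graph, which is exactly what recursive make throws away.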
I've personally only used make for a few projects, and might not be "in the know" about how to best use make, but isn't a multi-million (!?) line makefile a sign you should probably break it into pieces, or use a different solution?
Multi-million just seems insane to me
Personal feeling is you should have a way of regenerating makefiles.
I guess my point is that make doesn't scale, and unavoidably gets messy over time as each project solves the same problems over and over in different, buggy ways.
make solves problems you get early in a project ("let's get this handful of files automatically built") and doesn't even try to address issues of modern software development, like build parallelism and dependency management. Its facilities for debugging problems are laughable.
Go look up plan9's mk. It's a remake of the program written by the designers a few decades later with everything they learned from it. Not only are the variables idiomatic, and the spaces problem fixed, but you can ask for output at each stage and (IIRC) view the graph that it builds.
Of course you quickly get to million-line Makefiles in bigger projects. In C++, if you depend on stuff like Boost, a single file can easily depend on thousands of header files, and every single one must be tracked by Make.
But of course, nobody in his right mind would write these dependencies by hand. At my dayjob, the core Makefile is 700 lines. These 700 lines do cross-compilation with gcc/clang for various platforms, unit testing, valgrind, cppcheck, clang-tidy, coverage, and several other things. It is completely non-recursive, runs in parallel without problems, tracks all dependencies, and also works on Windows.
Of course this stuff is complicated. Make is very low-level compared to a build-generator like CMake. But if you want a bespoke build system for your project which lets you control every last detail of your build, it is still a good choice.
> grown to a recursively gnarly Turing-complete mess
> gronk through many, many lines of make
Thank you for my new word of the day. It describes exactly how I feel sometimes, dealing with technical debt accumulated over years, Frankenstein systems where parts are always churning - often for no good reason at all, new but worse in most aspects. It can be exhausting gronking through the layers of complexity.
Then Microsoft got rid of it because InstallShield gave them money. Now you need the super-extra-professional edition of Visual Studio to have anything but the most basic installer, and you need to hire a guy to write InstallShield scripts while he's taking a break from contemplating just how tough it would be to get by on pogey alone, and God forbid you need to change anything because the guy is on a different floor because he's not a real developer, etc. etc.
Have a look at redo.
Also, #3 is what make excels at. I've yet to see another tool that is more fit for that kind of thing.
At the end of the day, I don't like make. It's for reason #5: there's no project introspection; you can't group tasks into high-level values, update your graph on the fly, or add anything to your target except at the source level. I see this one didn't make it into your list. This is one of the reasons those multi-million-line makefiles exist.
I'd rather deal with a small shell script, or a larger Python one if it can't be small. Yet, autotools are great for system programming.
The most common mistake, by far, is trying to use make recursively (where make calls other makes down a directory hierarchy). That's typically where "builds are too slow" comes from, because you end up with a hairball of complex workarounds, since no one make invocation actually has the correct dependency data.
I just create a single makefile and treat it like any other program: Add carefully, ensure that the text is clear, etc.
I think that we're going to see more languages with built-in build systems from early on in the language's development, but that is a double-edged sword (works great for your version of WHIZBOL, doesn't work great if you need to interop with other things).
When you (the general "you", please) use a tool wrong, you can look kind of a fool for blaming the tool.
Is this a copypasta?
That's a common misconception; people often don't understand what make is. Contrary to intuition, make is _not_ a build system (like, say, cmake), although build systems can be (and often are) implemented on top of make; nor is it a task runner (like rake).
At its core, make is a declarative expert system (with a very sane design and a very quirky rule syntax). Its area of expertise is updating files. I urge every user of make who hasn't read "Recursive Make Considered Harmful" to do so.
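To make "updating files" concrete: a rule is a declared fact plus an action, and make chains the facts on its own (a sketch; `render-report` is an invented script):

    # fact: report.pdf is derived from report.tex
    report.pdf: report.tex
        pdflatex report.tex

    # fact: report.tex is derived from data.csv
    report.tex: data.csv
        ./render-report data.csv > report.tex

Touch data.csv and `make report.pdf` re-runs exactly the steps that went stale. Nothing here is specific to compiling programs.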
I'm curious about VSCode - why that, and not Visual Studio or another full-featured IDE? I've gone from IDE to vim and back, and I have to admit with great sadness that the various little gimmicks an IDE provides (without hand-configuring the damn thing!) make it worth it, particularly since they all allow for vim keybindings to one degree or another.
Where do you see that?
I'd be interested in hearing about the problems Make has caused you. Whenever I've needed to write or edit a makefile, it's been a pain looking up the relevant details in the manual, but nothing as severe as what you seem to be implying. Have I just been lucky, working with less complicated projects, or something else?
The main problem that kept burning me was that I'd have a "dirty" build when I thought I had a clean one or an incorrect one when I thought I had a correct one. It was super hard to get the dependency graph correct because developers were creating files left and right but would only update the Makefile just enough to get it building (and forget to add new header files to the list of headers, etc) so I had little confidence in incremental builds and was always building from the ground up.
CMake's generated build systems always have correct dependency graphs (i.e. it will only rebuild the bare minimum when you touch a given file) and I've always had complete confidence that my build is clean (since I just created a fresh directory) and that my incremental builds are correct.
This is why gcc/clang have the option to generate Makefiles for the header dependencies for you. Maintaining these yourself is a hopeless effort.
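And since the generated output is itself make syntax, the makefile can just include it (a sketch; SRCS is assumed to hold your .c files):

    # ask the compiler to write each dependency fragment, e.g.
    # "main.o: main.c util.h config.h", into a .d file
    %.d: %.c
        $(CC) -MM -MF $@ $<

    -include $(SRCS:.c=.d)

(In practice, passing -MMD during normal compilation is simpler still, since the fragments then fall out of the build as a side effect.)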
Well actually, CMake is a build system generator - and the default build system it generates is Unix Makefiles. So I assume CMake generates Makefiles with the appropriate automatic dependency detection baked in, I'm not sure.
The important part was that it eliminated human error on our team and we haven't had dependency graph problems since.
    AS_SRC = main.S util.s
    OBJ = $(AS_SRC: .S=.o)

    clean:
        rm -f $(OBJ)
Granted, knowing that there’s an error here helped greatly in spotting it; there’s a good chance I would have run the incorrect makefile at least once before investigating.
I've worked around that in the past by creating empty "target files" that are created when a given target is built and which dependencies rely on, but it's not a perfect solution.
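i.e. the stamp/sentinel pattern (a sketch; all names invented):

    # the real outputs are many files, so touch a stamp instead
    assets.stamp: $(wildcard assets/*)
        ./pack-assets assets/ out/
        touch $@

    deploy: app assets.stamp
        ./deploy.sh

The stamp's mtime stands in for "this step ran", which works until someone deletes out/ without deleting the stamp - hence not perfect.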
I've looked around for a make alternative, but although I like rake, I don't really use Ruby these days, and so dealing with RVM/rbenv and whatnot is sort of a pain and not worth the effort over make.
Could you elaborate on how you've been burned by make? Because my experience is the exact opposite of yours. I shudder to remember some of the cmake I've had to deal with.
The ability to have the entire build/build artifacts in a directory completely standalone from the source directories was a huge win.
Because for me, that meant I could have 3 separate build directories - one for my optimized ARM builds, one for my debug (-g -O0) ARM builds, and one for my x86 unit test/coverage builds.
If I modified one file, I didn't need to re-spin 3 clean builds - I could just hop into whichever build directories I was interested in and incrementally update those builds. Plus, out-of-source builds make cleaning as easy as `rm -rf build/` instead of hoping that `make clean` has all the right pattern matching and subdirectory listings to truly scrub the build artifacts.
If you think the churn is that then you're lucky. To me, the churn is working for the man. Everything I ever do is for the man.
If I could ever reach a point where I could say, fuck you, the man, well, that'd be the day, wouldn't it? That'd be The End Of The Churn that The Man invented.
Maybe you are looking for sovereignty, where you do things not because you "have" to but because you "choose" to.
The idea you're touching upon might be bigger than just your "day job": it's about how you spend your time and what decides that.
There are many groups that say they do this work, but very few that actually do. Then you have to move yourself internally until you are actually doing work that is the greater good.
What is the greater good to begin with? The flexibility of human understanding and the breadth of types of companies with "good intentions" available means that the search for such a mythical group is something that can take a lifetime and still not yield any fruit.
Maybe you can work for a non-profit if you're lucky, but how many are there out there that pay market rate?
No. That's a misconception. Corporate directors are required to act in the interests of the shareholders, but they have a lot of discretion in determining what those interests are and how they are to be served.
Here's a reference from a decent law school:
I agree that there's still a lot of leeway in that: giving an employee a bonus might hurt shareholders directly in the short term, but you can claim it increases productivity and so is a good decision. You still have to justify all decisions in terms of value to the shareholders, though. If you (as CEO) decide to just stop all work and spend every day at Disneyland until the coffers are empty, you can be sure you'll lose that suit. And when every single decision has to be viewed through that lens, you aren't able to directly do real good for the world, just indirectly.
That's just the excuse the CEO's mouthpieces trot out when the company is doing stuff that is shitty and sociopathic. You'll notice the CEO has no problem getting the company to do things that enrich himself at the expense of the shareholders.
You probably already could if you wanted to. Ten years ago I quit my software engineering job and spent 2 years driving from Alaska to Argentina. Recently I quit again and spent 3 years driving around Africa.
When you spend less money, you have a lot more time than you think, and you don't have to work for the man.
But I do think pessimism is the correct state of mind, given how far the tentacles of the man have reached and how little we do to resist his influence.
Who or what is the man? Only a Morpheus can tell you such a thing. I'm not him.
That technology was TeX, even though it's on the author's "good" list.
It somehow got to a point where every time two people got together to work on a document, we had three different incompatible header files; package X doesn't work with version Y of package Z; what order you import packages in matters on my computer but not on a colleague's (still not sure why); the order of macro expansion and character class redefinition in some packages causes hard-to-track-down bugs ...
80% of what I need to do, I can do in markdown and then run through one of many document-generating tools. Currently for some projects I'm using gitbook-cli and I'm both more productive and much happier. I even wonder whether I want to make a fork of gitbook-cli, it's something I'd trust myself to be able to contribute to.
For the other 20%, I use Word. Yes, it's a WYSIWYG interface, but for short documents (max 20 pages) where layout and design are important to get right, I've found it saves me time and frustration compared to TeX.
Editor-wise, I use vim or vscode depending on the task. I've tried emacs, we two are not really compatible.
The best thing you can do to insulate yourself from the pain described in the post (trying to remember how code worked, etc) is to brain dump important things you learn along the way. I've recently begun keeping a TiddlyWiki for every major project I undertake. In it, I keep unexpected things I learned, cheat sheet items, command-line snippets, and longer form entries about structure.
The best part is it was all written by me, so the communication barrier is as low as it can possibly be. Reading one of these TWs allows me to pick up a project again extremely quickly. It's also useful on large projects where different areas are like their own projects unto themselves.
Tools, not closing yourself off, help you overcome your limitations.
I know a web developer who refuses to learn or use any web tech besides PHP. He finds cloud-based hosting confusing as well. "I can do anything I need in PHP." Everything, except find a good-paying job that isn't maintaining gnarly legacy codebases. PHP is his bread and butter, and unless he decides to learn some of the newer (stable) webdev tech, he's going to find himself with no marketable skills in the future.
I think ruby has actually gotten a LOT better at minimizing the churn (both core/stdlib, and the ecosystem, specifically including Rails itself), as a result of people learning from the experience. The ecosystem still doesn't prioritize it as much as I'd like.
I also think this points out the benefit of sticking with a platform/ecosystem for a while. Many people didn't realize the danger of backwards-incompat churn until they saw it through experience over years.
You notice it when you work with the same code for a while -- if you are always abandoning a codebase and coming to a new one, you abandon before it gets painful, and either have a new one that isn't yet painful or a legacy one where you can blame the pain on your predecessors making bad decisions.
If you are always abandoning a platform for a new one, the dangers of 'churn' aren't apparent in a less mature platform: you never pay the price at version 1.0, only after it's been around a while. But they will be there if it lasts long enough and you stick with it long enough. If, when it starts hurting, you abandon it for some other new thing, thinking the other new thing will be better, not realizing it's better in that respect only because it's newer... you never learn.
At the same time, finding something that works and sticking with it perpetually sounds wonderful - perfect, even. But breaking things and doing hard things also leads to innovation and new ideas.
So ultimately I think there needs to be a balance and everyone should embrace a little churn while eschewing it in the broad form.
Totally agree. Sticking to things that work can hurt your job prospects seriously. You are quickly an “outdated dinosaur”.
Sure, a mainstream language like JS will have a lot more jobs than Clojure, but it also has a huge pool of developers, making it hard for any individual developer to stand out. Meanwhile, Clojure has a much smaller market, but a growing one, meaning that demand currently outpaces the supply of Clojure developers, making it much easier to actually get an interview. Another side effect of this is that companies tend to be a lot more open to remote work.
Projects which are updated often have a forcing function on them - they need to become updateable ... that can mean tests, that can mean reasonable build tools, and that can mean dropping individual dependencies that are too painful to update.
Obviously some ecosystems make all these things easier than others: typed languages with reasonably consistent build processes help more than anything; good codebases that isolate and wrap usage of most dependencies come next; and reasonable automated tests are probably next most useful.
This is not about your code not breaking while you change it. It's about things breaking when you change nothing. I've learned the lesson long ago by using PHP software, some ecosystems just break much more often than others.
OP does not give any reason as to why these are churn and why we should stop them. Basically, by OP's logic we should just never adopt new technology because it's a new shiny thing. This makes no sense at all.
At the end of the day, honestly, how many frameworks do you need to build webpages? And 95%+ certainly are not operating at a Google or Amazon scale.
On a side note, I recently saw a dev spend a few days wrestling with dependency issues - he was trying to wire Spring into a Java application and running into configuration issues with decorators. At the end of the day, this was a tiny, tiny application that periodically ingests messages from a topic and forwards their payload to an email service. These are 5-10 messages a day, and the emails that are generated go to internal business customers as a courtesy, not as mission-critical notifications. It's worth asking: what was the business value vs. the cost?
The takeaway isn't that you should never use new technology - rather, understand that new technology, or change in general, comes at a cost. Your job is to balance that cost considering several factors: code quality (cost of time spent), flexibility and extensibility, features (the stuff valuable to whatever function is responsible for your paycheck), and cost of ongoing support and maintenance (operational load). Disregarding those factors and over-indexing on the latest and greatest is usually the wrong approach.
I do not know enough about Ruby, nor about React-Preact-Vue-Angular, to judge, though based on what I've heard, the latter pile does sound made up of things that expect you to waste your time keeping your software working.
Another way to spot "The Churn": When you're done, what new value did your effort create or unlock? Are users better off? Is the system more resilient and reliable? Or did you just get it back to the way it was working before everything went sideways?
Find more ways to create and unlock value and you'll find you move forward much faster.
For what it's worth, olives are basically inedible from the tree and require a considerable amount of processing before they are actually edible.
But it's also hard to tell the difference between churn and maintenance, and I think one of our modern world's blind-spots is a de-prioritization of maintenance.
I maintain a cross platform desktop app for Windows, macOS and Linux.
Windows is the best: 32-bit versions going back 15 years still work, no issues. macOS is next: 32-bit versions no longer work, but 64-bit versions going back 5+ years still work. Ubuntu is by far the worst: some library I depend on changes its API pretty much every year, and the old version is removed, breaking my app.
The solution appears to be Flatpak, which bundles up the app with all its required libraries. However, I'm not sure how to make this work for plugins. Would each plugin need to be in its own Flatpak? It's insane.
UI has no such minority consensus for local maxima and is very much not solved, which is why React-Preact-Vue-Angular is churning while the humanity hivemind iterates towards a solution.
Here is the relevant Rich Hickey quote: http://www.dustingetz.com/:rich-hickey-web-frameworks/
How much effort would it be worth to avoid spending a week every year?
Sure, the weeks can add up, but so does the time spent on low-level abstractions and on refusing to adopt better tools when the whole environment is changing around you (e.g. there is no mention of native mobile environments. Would it be churn to use Swift instead of non-ARC Objective-C?)
Every Stack Overflow question about Swift has several answers, one for each API version. Any time you grab some Swift code from the web or an older project, it's not going to work.
Avoiding the churn isn't an option, since new Xcode versions drop support for old Swift versions. And only the two latest Xcodes will run on the latest macOS. They even drop support for the conversion tools. So if I go back to an old Swift project now, it won't compile in my Xcode, nor will my Xcode help convert the code to modern Swift. My only option is to run an older version of Xcode in a VM to convert the code.
If I'm writing a library I want other people to use or share between projects, I'll still do it in Objective-C. Apps I do in Swift but I find it annoying.
I still have 15-year-old non-ARC Objective-C libraries. Why spend the time updating them when they are debugged and work fine?
Every time I have to do a Swift version update I introduce bugs.
Cutting yourself from new frameworks and hardware features just to cut churn would be a horrible tradeoff in most cases.
There are some niches where churn can be mostly avoided, but I think churn is usually a fact of life we could just embrace at a healthy pace.
From the opposite angle, a field with extremely low churn would seem suspicious to me. For instance I would expect any language with no significant update in the last 10 years to have abysmal unicode support.
If it's important, it needs and deserves all those things; they are not "churn" but maintenance.
Churn kills important code, making it unfit for purpose.
But draw your own line in the sand as to which tech you are going to use in your production systems, and move to new tech only when both your skills are sufficient, and there is a matching business need driving the change.
(I know you didn’t mean that literally)
Having CI won't stop the developer of the dependency from changing the interface.
I would suggest that there are definitely some organizations where you can't trust the pipeline to work reliably, and the cost of figuring out what went wrong becomes the churn.
Well, one way would be for those libraries to have backwards-compatible APIs, so that your code keeps working while still using the same APIs. Some actually do try that (or at least they claim to), e.g. curl.
Is it possible to do modern mobile dev without churn (coming from a situation where my clients often go months without requesting features)? Here's what I currently do:
- Reduce the use of dependencies to the minimum (I'm bad at this)
- set version compatibility in my Cartfile / Podfile
- when new breaking versions of Swift come out, suggest to my client a 2/3 mission solely focused on upgrading their app and its dependencies (otherwise it will probably make their next last-minute super-urgent-right-now feature needlessly long and complex to develop)
But I'd also willingly take any advice on this
Edit: also, I semi-solved this for my JS work using automated dependency update services like Dependabot, coupled with unit/integration tests, but I still haven't found a similar service for Swift.
The Lisp tradition is to leverage the expressiveness of the language to quickly create functionality that would be included as libraries in other languages. If you need an algorithm implemented, just do it and if it turns out you need it hyper-optimized and able to handle the pathological cases you look for someone who has something like that.
In early 1996, John Ousterhout stopped by to pitch Tcl/Tk, but it was too late. VBScript was coming, but JS in 1995 Netscape betas got on first and saved us from that dystopia.
> Examples: UNIX, LISP, The Web, Emacs, TeX
1. Or just computer science.
2. Churn is not a problem. If you keep accepting the churn, you will learn each round faster and faster, until you can scan the docs and already know most of it - if it is churn. If it is not, it's something new (e.g. Coq), and you gain the ability to identify 'new but not churn', instead of just learning 'old and not churn' from a half-century ago (e.g. Lisp).
3. Among the churns, there would be 'meta churn' like Haskell, Lisp, Erlang, and Rust. There are countless languages stealing Monad from Haskell, and Kubernetes patterns are very similar to Erlang OTP patterns.
4. Editors and IDEs are really irrelevant. I used Emacs a lot and I'm very fluent with Vi, but for some languages I prefer IntelliJ and VSCode. Just use the tools you feel productive with.
I look at Julia (for HPC) and think: sure, if I were a grad student and had time to burn. But now, I need to go from an idea to figures, live-coded, in the space of minutes; NumPy and matplotlib are boring and just fine.
Research is one domain where churn is very much a daily thing. Careers are made on churn, in research.
These forms of churn are hardly related besides that they are often self-inflicted. The first 2 examples are really just technical debt, and perhaps they can be referred to as the "grind" rather than churn.
Rewriting your code base to use the framework of the now may happen because of the industry changing to a point where using old-reliable.js is making it difficult to hire new talent, whereas elon-musk.js is the hot new thing that tons of programmers are interested in. Companies can feel obligated to follow this trend because they think they'll become obsolete if they don't. The company I currently work for is going through this at the moment, actually.
The churn that this causes is more novel than the grind because, as an engineer, you are learning to do the same job in a different way. Avoiding the grind isn't likely to fundamentally change the nature of your job, as writing documentation and reducing the number of dependencies aren't very radical ideas. But taking someone from one language and having them learn another, or having them transition from one framework to another, can effectively demote an engineer to junior grade until they've had experience with their new tools. With the churn, your expertise can lose its meaning.
Even if you learn to solve the grind, solving the churn can be difficult even if your loyalties remain to a single tech stack. With the exception of purely personal projects, the industry will continue to shift its own loyalties to different tools, so you've got a few different games to play when looking for jobs:
- Be the one who knows the legacy tech and can make sense of other people's horrible legacy code. (Which the company will inevitably decide to have rewritten in something like React out of the belief that all their problems are being caused by the old technology.)
- Be the one who knows the hot new thing and can write code that, whether or not the code under the hood is atrocious, will make the bosses believe that they can be like the Googles and the Facebooks.
The vast majority probably pick the latter. But if one wants to avoid the churn as much as possible, they probably need to not only stick to tried-and-true tools but also find a good company and stay there indefinitely, rather than hopping companies every few years. Of course, that may come with a harsh penalty down the road.
IMO this is why churn is not going away. It's much easier to achieve flow-state from zero than working up to it in a codebase that you're unfamiliar with.
Maybe what's important is not making people believe that yesterday's technology is obsolete and should be buried or abandoned.
The shiny new thing is just another tool in the toolbox; it's not because I created a new shape of screw that all the other screws are worthless. There are still plenty of screws all over the world that need to be unscrewed; we need the tools to do that, people who know how to work with them, and some of these screws still need to be made. Maybe people don't talk about these screws as much? They don't make the news anymore because they've been there for so long?
Maybe the point here is that social media thrives on the shiny new thing; it's the core concept of it. An old newspaper is worthless; no one ever sold an old newspaper. So they need to grab some attention to live, and the shiny new thing is a way to accomplish that.
But it's not because there are lots of articles about all those new frameworks that the older ones are worthless. We need both. For some projects we will not choose an architecture based on 10-year-old concepts and frameworks, while other projects need stability and a whole tested and stable toolchain and documentation for the years to come.
What kind of work do you want to accomplish? Small projects where you work alone? Where you just serve a few people?
Or big projects with a huge infrastructure that needs to deliver to millions with great economic impact ?
I bet you don't use only a hammer when you want to build kitchen furniture. And the tools needed in a factory that builds kitchen furniture are not the same as the ones used at home. The factory needs a whole supply toolchain, plus testing and quality checking, whereas the craftsman in his own workshop is going to have fewer tools ...
We need all kinds of software and technology; maybe what's hard is being able to identify the right ones and define precisely what we want and where we think we're going. People are always going to want to try the shiny new thing. We all need novelty, and we all want to fix the shortcomings of our current tools.
You don't see this in other disciplines such as physics or chemistry, where people spend years learning about existing research before actually starting to contribute.
With programming, the barrier to starting to write code is much lower than the barrier to actually learning the background research. People don't bother looking at what's been done already, and just start "inventing" things.
More often than not this results in half-baked ideas, because the authors of projects don't really think about the full scope of the problem. Then once the project starts hitting limits in practice, people start kludging things on top of it, and eventually it becomes unwieldy to use. Then somebody comes by and does the same thing simplistically again, and the cycle repeats. Nothing new is learned in this process, and you get churn for the sake of churn.
But on the other side of the coin, do you really want to be locked into a decades-old solution to a "solved problem" forever? Or are there still improvements that can be made to reduce friction and human error?
S-expressions also have no standard for comments, and can't distinguish maps from lists.
An S-expression-based serialization standard would be a bit cleaner than JSON, but it's not enough of an improvement to be worth redoing everything.
People don't discuss solutions in a vacuum. Ideally they'd think about what the potential drawbacks of XML alternatives are before jumping into a new tech. Which they will, unless they're really, really new.
Easy to say with hindsight now that it is massively popular and has a huge ecosystem of parsers for every conceivable language. At the time it was invented though, it was yet another data exchange format and I'm sure there were a lot of grey beards pooh-poohing it.
The job market is smaller, but so is the pool of developers. Companies tend to be more flexible because of that and are often open to remote work. I'd much rather work in a sane niche market than deal with the mainstream churn.
We have a local Clojure meetup in town, and when I first started going there, pretty much everybody was using Clojure as a hobby. Today, we have a bunch of companies using it in production, and all of them are actively hiring. The last three co-op students I had all ended up getting Clojure jobs. I imagine this varies based on where you live, of course, but another option is to simply introduce Clojure at a place that's using something else. That's where Clojure jobs come from in the first place, at the end of the day.
Great programmers can find virtues in all tools. Hate the player, don't hate the game.