The Software Crisis (wryl.tech)
185 points by todsacerdoti 5 months ago | 195 comments



Hi! Author here. I think it's important to address certain aspects of this post that people tend to misunderstand, so I'll just list them here to save myself the effort.

* I do not argue against abstractions, but against the unrestricted application of them.

* I do not advocate for a reversion to more constrained platforms as a solution.

* I do not advocate for users becoming "more technical" in a "suck it up" fashion.

The key to understanding the software crisis is the curves of "mastery of a platform" and "growth/release cycles". We have, in the past 40+ years, seen these curves diverge in all but a few sectors. We did not address the crisis when these curves were close in proximity, but the second best time is now.

As for folks calling this clickbait, it is the first in my log, and reflects my thoughts on the situation we find ourselves in as developers. The sentiments are mirrored, in various forms, around multiple communities, some of them based in counterculture.

I do want to deliver some part of the solution to these problems, so I do intend on following up on "I'll show you how". I am a single entity, so give me time and grace.


I personally don’t understand what you are arguing for. While I agree that problems can be too abstracted or that bad abstractions abound, I don’t think that’s a controversial opinion at all. You’ll also literally never fix this problem because millions of people produce software, and you’re bound to disagree with some of them, and they’ll never all be as skilled as you. So my only interpretation of this post is that you think we should have even higher standards for what constitutes an acceptable abstraction than is the norm already.

That’s easy to say writ large but I think if you examine specific areas you consider “too abstracted” you might find it a humbling experience. Most likely, there are very good reasons for those “over abstractions”, and the engineers working in those areas also find the abstraction situation hairy (but necessary or unreasonable to fix).

For example, a lot of software is built by providing a good abstraction on top of a widely used or accepted middle abstraction (eg Kubernetes on Linux + container runtimes + the traditional control plane:config layer:dataplane architecture). All that logic could be implemented directly in a new OS, but that introduces compatibility issues with users (you can build it, but you may have no users) unless you reimplement the bad abstractions you were trying to avoid. And, that kind of solution is way, way, way harder to implement. I personally want to solve this problem because I hate Kubernetes’ design, but see it purely as a consequence of doing things the “right way” being so absurdly difficult and/or expensive that it’s just not worth it.


Indeed, it's tradeoffs all the way down.


It's important to remember "Clarke's three laws"[0]:

  The laws are:

    1. When a distinguished but elderly scientist states that
       something is possible, he is almost certainly right. When
       he states that something is impossible, he is very
       probably wrong.
    2. The only way of discovering the limits of the possible is
       to venture a little way past them into the impossible.
    3. Any sufficiently advanced technology is indistinguishable
       from magic.
Depending on one's background and when they entered this field, there exists a non-trivial probability that previous abstractions have become incorporated into accepted/expected practices. For example, at one point in computing history, it became expected that an operating system with a general-purpose file system would be in use.

The problem I think you are describing is the difficulty one experiences when entering the field now. The prerequisites in order to contribute are significant, if one assumes the need to intricately understand all abstractions in use.

While this situation can easily be overwhelming, I suggest an alternate path; embrace the abstractions until deeper understanding can be had.

0 - https://en.wikipedia.org/wiki/Clarke's_three_laws


A tangent, but Clarke was slightly wrong. Magic is not just indistinguishable from advanced technology; it actually IS advanced technology.

In fantasy worlds, wizards spend years studying arcane phenomena, refining them into somewhat easily and reliably usable spells and artifacts, which can be used by general public. (Don't believe depictions of magic in video games, they simplify all the annoying little details.)

The above paragraph is actually true about our world, we just happen to call these people scientists and engineers, not wizards.


Gandalf wasn’t a technologist.

(Edit) and his magic wasn’t remotely Vancian


He wasn't a technologist by the standards of his family, but it's somewhat implied in The Silmarillion that the magics of the Valar are intertwined with their understanding and knowledge of the world.


Yes, but in the Legendarium, the Valar were innately ‘magical’ creatures created by a supreme being, not people who studied spells


This is true, but practical LLM wrangling seems less technological than any tech I have seen until now, I think.


I worked in image processing for some time, it’s very similar


Whoooosh


Subspace whoosh. Or more seriously, I think OP was going for a corollary to how stories about dragons aren’t false, they are more than true because they teach us the dragon can be slain. The same could be said for magicians - the stories are more than true because they tell us the complexity and arcana can be tamed. (Also note how many of the magic stories warn of unintended, unforeseen consequences resulting from interactions between parts. (Sorcerer's Apprentice.) This should ring true for anyone in programming.)


I liked the way you showed the problem as ongoing in history. Indeed, the phrase "software crisis" is nice because it references the first point when the situation was articulated.

That said, I think the reason the situation is not going to change is clearly economic. That's not saying that bad software is cheaper. But there's a strong incentive toward cheap, bad practices because cutting corners allows one person/organization to save money now, with the (greater) costs being picked up later by the organization itself, the organization's customers, and society at large. Moreover, software isn't amenable to the standards of other sorts of engineering, and so there's no way to have contracts or regulations demanding software of a certain standard or quality.

Edit: The only imaginable solution would be a technological revolution that allowed cheaper and better software, with that same technology not also allowing even cheaper and worse software.


Yes to this.

Lately I feel like we have built a society with expansive software infrastructure, where that software is doomed to be crappy and inhumane because our society actually couldn't afford to build this quantity of software well.

So another hypothetical fantasy solution would be a lot less software, like being careful about where we used software, using it in fewer places, so that we could collectively afford to make that careful intentional software higher quality.

Certainly a society like that would look very different, both in terms of the outcome and in terms of a society that is capable of making and executing a decision to do that.


Our society / societies may well be able to afford to build this quantity of software well. We choose not to.


I don't know how/haven't seen an attempt to approach this question by a method other than "my hunch", but as a software engineer "my hunch" is it would cost at least 10-50x as much human labor (not just engineers but designers and UX researchers, as well as all the other support roles like project managers etc) to build the software "well" (including more customization for individual enterprises or uses), and that it would become an unsustainable portion of the GDP.

Just "my hunch", but one I reflect on a lot these days.


You could easily reduce the amount of software that exists today by 10-50x and have an adequate amount of software for virtually all purposes.

But this is incredibly hypothetical. A lot of software labor today revolves around manipulating the user rather than aiding them, so where we'd go with "better" versions of this is hard to say.


> (not just engineers but designer and UX researchers as well as all the other support roles like project managers etc)

Oh! I thought you were going to say "testing teams, design reviewers, trainers".

I'm not on-board with this "10-50x" claim for the amount of effort. I'd say maybe 3x the effort, given the developers have been well-trained, the methodology is sound, and the managers' focus is exclusively quality. That last item is one I've never experienced; the closest I came was on a banking project where the entire development team was subordinated to the Head of Testing.


Humanity could easily afford to provide proper healthcare and schooling for every person on the planet if we didn't spend so much money on our collective militaries too, but "we" don't.

Getting everybody (or even a minority of any sufficient size) to act in service to a single goal has been a problem for humanity ever since we first invented society.


What doesn't get spent on quality goes down as profit. This is why we have fake mobile games. Vaporware is the real software crisis.


It's not just about cost, it's about survival. If I fail to generate revenue in a reasonable timeframe, I can't put food on the table. If I have a healthy income, I'm happy to tinker on the technology ad infinitum.


This state of affairs is very much due to competition (instead of cooperation) and the feast-and-famine shape of what software invention and development looks like, especially in the US.

It takes time and effort to optimize software for performance, user-friendliness, developer-friendliness and everything that ethical and mindful developers hold up so highly, but that’s not the kind of thing that the big money or academic prestige is very interested in. The goal is to get the most users possible to participate in something that doesn’t repulse them and ideally draws them in. The concerns of ethics, dignity, and utility are just consequences to settle, occasionally dealt with before the world raises a stink about it, but are hardly ever part of the M.O.

Imagine if developers could make a dignified living working to listen to people and see what they really need or want, and could focus on that instead of having to come up with a better ‘feature set’ than their competitors. It’s essentially why we have hundreds of different Dr. Pepper ripoffs sold as ‘alternatives’ to Dr. Pepper (is this really a good use of resources?) instead of making the best Dr. Pepper or finding a way to tailor the base Dr. Pepper to the consumer's taste, assuming enough people claim they want a different Dr. Pepper.


I don't think there is a software crisis. There is more of a software glut.

There is so much software on so many platforms, such a fragmented landscape, that we cannot say anything general.

Some projects may be struggling and falling apart, others are doing well.

There is software that is hostile to users. But it is by design. There is cynicism and greed behind that. It's not caused by programmers not knowing what they are doing, but by doing what they are told.

The users themselves drive this. You can write great software for them, and they are uninterested. Users demand something other than great software. Every user who wants great software is the victim of fifty others who don't.

Mass market software is pop culture now.


Windows 3.1 and Word easily fit on a 40MB hard drive with plenty leftover. Word operated on systems with as little as 2MB RAM on a single core 16MHz 80386. Modern microcontrollers put this to shame! Word then didn’t lack for much compared to today’s version.

Windows and Office now require 50-100GB of disk just to function? 1000X for what gain?

This is sheer insanity, but we overlook it — our modern systems have 5000-10000X as much disk, RAM, and CPU. Our home internet is literally a million times faster than early modems.
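
For a rough sense of scale, here is the arithmetic behind those multipliers; the modern figures (a ~75GB Windows+Office footprint, 16GB of RAM) are illustrative assumptions, not measurements:

    # Back-of-envelope ratios for the claims above; 1992-era figures from the
    # parent comment, modern figures are illustrative assumptions.
    then_disk_mb, then_ram_mb = 40, 2          # 40MB hard drive, 2MB RAM
    now_disk_mb, now_ram_mb = 75_000, 16_000   # ~75GB install footprint, 16GB RAM

    print(f"disk: {now_disk_mb / then_disk_mb:.0f}x")  # ~1875x, the "1000X" ballpark
    print(f"RAM:  {now_ram_mb / then_ram_mb:.0f}x")    # ~8000x, within "5000-10000X"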


You mean the Word where I had to save the document every 20 seconds just to be sure I didn't lose my school assignment to crashes at the most inconvenient moment.

It left scars on my spine, to the point that I still have to actively hold myself back from the reflex of hitting Ctrl+S halfway through this comment.


Yes, that Word. Imagine if we'd been focused on improving reliability this whole time...


Is that all you got?


> 1000X for what gain?

A stepping stone to the 1000000X gain that would make it possible for Windows and Office to emulate the mind of a Microsoft sales executive so it can value extract from the user optimally and autonomously. Also with memory safe code because Microsoft fired all the testers to save money. And so they can hire people who can't write in C or C++ to save money. And ship faster to save money.


I could not agree with you more. I'm trying to formulate data and arguments and I think I actually have the data to prove these points.

I'm working on a series of articles at enlightenedprogramming.com to prove it out.

I'm also working on "show you how" solutions because, absent data (and sometimes even with it), I still get people who believe that the industry has never been better, that we're in some golden age and there is not a whole lot to improve on, and it just boggles my mind.


It's hard to get past the hubris wrapped up in the statement "I'll show you how"... as if the tens of thousands of bright engineers whose shoulders you stand on were incapable and you're the savior... maybe you are! (But just adding the word "try" in that sentence would reduce the perceived arrogance by orders of magnitude.)


I didn't read it that way. I read it as an introduction to a counter-cultural paradigm shift away from the "move fast and break things" and "abstract over it" mindsets. I'm very interested in the path they are going to outline.


Well, that certainly wasn't my intention, but in an environment of "I have a silver bullet and it'll cost you $X", I can understand the sentiment.

At the same time, I do want to show that I have confidence in my ideas. Hubris and confidence must be applied in equal parts.


Hubris and confidence are two sides of the same coin. Did you maybe mean to say that one should balance hubris with humility to avoid coming across as arrogant?


Yep, that's what I meant. Lots of comments, not a lot of time for proofreading!


Well, I don't think many people would disagree about the crisis.

Personally, I'm anxious to see your proposal on "how".


You claim to be the author of Modal [0]. While Modal might not be your "solution", it likely aligns with what you have in mind philosophically. Modal seems like some sort of pattern-matching Lisp. I don't think I'm alone in being skeptical that a "pattern matching lisp" is going to be the "solution" to the software complexity problem.

[0] https://wiki.xxiivv.com/site/modal


Assuming you're posting here at least in part to get feedback, I'll share mine. This piece would have been a lot more readable with a bit better organization. Being dominated by one and two-sentence paragraphs, it's more of a soup of claims and quotations than a coherent article or essay. A clear structure makes it faster and easier for readers to identify which parts are your main points and which are written to support them.


There is no software crisis. I have worked on software for 30+ years and it is as easy/hard to write and maintain software today as it was 30+ years ago.

And we are creating software today that would absolutely blow the minds of software developers working 30+ years ago. Mostly because we have amazing abstractions that makes it possible to stand on the shoulders of giants.

All that is happening is that as we get better at writing software, we and our users demand more of it. So we will always be at the bleeding edge of not being able to deliver. But we still do. Every day.


I think this might be premature if you're trying to apply this to how the world makes money with software. There's too much of the economy that depends on adding another abstraction layer to make money, so you'd need to rewire that as well unless we just want to break things. You'd eventually have to tell a business "no" because they're +1 abstraction layer above the limit.

I think most of us realize that working at corporate is still a necessary evil as long as we need to pay for stuff. Frankly that sector can burn because they've been taking advantage of developers. Most of the money goes to people above you, performance bonuses are not the norm, etc. We shouldn't be actively trying to give them improvements for free because of this behavior. Let them follow the hobby/passion projects and learn these practices and limits on their own.


I think society has made it convenient for people to lend their creativity to someone else's vision. I don't think it's a good idea for talented people to work for corporations that have no connection to the public good.

I don't think it's a necessary evil; I think most people either don't want to work to realize their own vision of what they want in the world, or want more than they need and are willing to sacrifice their soul for it.


Hi wryl. I'm interested in hearing your follow-up to this post, so I tried to add your log to my feed reader, but I couldn't find an rss/atom feed, which is the only way I'll remember to check back in. Do you have a feed I can follow?


Hi! Yes! I'm intending on adding an RSS feed at https://wryl.tech/log/feed.rss within the next few days.

I have a habit of hand-writing the markup for the site, so it's missing a few features people expect from things like statically generated sites or hosted blog platforms.

I'm watching this thread for accessibility and feedback, and am taking the suggestions to heart!


I've been a fan of handmade, and it seems like a new blog in the same spirit has been born. I look forward to reading more of your writings.

I have recently started writing myself and paying more attention to what I read. I really liked your style and enjoyed reading it.

So please keep writing!


Thank you very much! Very much in line with the handmade ethos, and I'm hoping to get involved with more handmade (and handmade-like) things in the future. :)


My problem with your article is that it seems to operate on the misconception that someone must completely understand from top to bottom the entire stack of abstractions they operate atop in all of its gory and granular detail in order to get anything done, and so a larger tower of abstractions means a higher mountain to climb up in order to do something.

But that's simply the opposite of the case: the purpose of abstractions — even bad abstractions — is detail elision. Making it so that you don't have to worry about the unnecessary details and accidental complexity of a lower layer that don't bear on the task you have at hand and instead can just focus on the details and the ideas that are relevant to the task you are trying to achieve. Operating on top of all of these abstractions actually makes programming significantly easier. It makes getting things done faster and more efficient. It makes people more productive.

If someone wants to make a simple graphical game, for instance, instead of having to memorize the memory map and instruction set of a computer, and how to poke at memory to operate a floppy drive with no file system to load things, they can use the tower of abstractions that have been created on top of hardware (OS+FS+Python+pygame) to much more easily create a graphical game without having to worry about all that.
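
For concreteness, here is a minimal sketch of the kind of program described above, assuming only a stock Python install with the pygame package available. Every line of it leans on layers (window manager, graphics driver, file system, interpreter) that the programmer never has to touch:

    # Open a window and animate a "sprite": no memory maps, no interrupt
    # handlers, no disk geometry, just the abstractions doing their job.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x = 0

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:      # window close button
                running = False

        screen.fill((0, 0, 0))                 # clear the frame
        pygame.draw.circle(screen, (200, 50, 50), (x % 640, 240), 20)
        pygame.display.flip()                  # present the frame

        x += 4
        clock.tick(60)                         # cap at 60 FPS

    pygame.quit()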

Yes, the machine and systems underneath the abstractions are far more complex, and so if you set out to try to completely fit all of them in your brain, it would be much more difficult than fitting the entirety of the Commodore 64 in your brain, but that greater complexity exists precisely to free the higher layers of abstraction from concerns about things like memory management and clock speeds and so on.

So it's all very well and good to want to understand completely the entire tower of abstractions that you operate atop, and that can make you a better programmer. But it's very silly to pretend that everyone must do this in order to be remotely productive, and that therefore this complexity is inherently a hindrance to productivity. It isn't. Have we chosen some bad abstractions in the past and been forced to create more abstractions to paper over those bad abstractions and make them more usable, because the original ones didn't elide the right details? Yes, absolutely we have. I think a world in which we abandoned C Machines and the paradigm where everything is opaque black box binaries that we control from a higher level shell but have no insight into, and instead iterated on what the Lisp Machines at the MIT AI Lab or D-Machines at Xerox PARC were doing, would be far better, and would allow us to achieve similar levels of ease and productivity with fewer levels of abstraction. But you're still misunderstanding how abstractions work IMO.

Also, while I really enjoy the handmade movement, I have a real bone to pick with permacomputing and other similar movements. Thanks to influence from the UNIX philosophy, they seem to forget that the purpose of computers should always be to serve humans, and not just serve them in the sense of "respecting users" and being controllable by technical people, but in the sense of providing rich feature sets for interrelated tasks, and robust handling of errors and edge cases with an easy to access and understand interface. Instead they worship at the feet of simplicity and smallness for their own sake, as if what's most important isn't serving human users, but exercising an obsessive-compulsive drive toward software anorexia. What people want when they use computers is a piece of software that will fulfill their needs, enable them to frictionlessly perform a general task like image editing or desktop publishing or something. That's what really brings joy to people and makes computers useful. I feel that those involved in permacomputing would prefer a world in which, instead of GIMP, we had a separate program for each GIMP tool (duplicating, of course, the selection tool in each as a result), and when you split software up into component pieces like that, you always, always, always necessarily introduce friction and bugs and fiddliness at the seams.

Maybe that offers more flexibility, but I don't really think it does. Our options are not "a thousand tiny inflexible black boxes we control from a higher layer" or "a single gigantic inflexible black box we control from a higher layer." And the Unix mindset fails to understand that if you make the individual components of a system simple and small, that just pushes all of the complexity into a meta layer, where things will never quite handle all the features right and never quite work reliably and never quite be able to achieve what it could have if you made the pieces you composed things out of more complex, because a meta-layer (like the shell) always operates at a disadvantage: the amount of information that the small tools it is assembled out of can communicate with it and each other is limited by being closed off separate things as well as by their very simplicity, and the adaptability and flexibility those small tools can present to the meta layer is also hamstrung by this drive toward simplicity, not to mention the inherent friction at the seams between everything. Yes, we need tools that are controllable programmatically and can communicate deeply with each other, to make them composable, but they don't need to be "simple."


> Operating on top of all of these abstractions actually makes programming significantly easier. It makes getting things done faster and more efficient. It makes people more productive.

Only if those abstractions are any good. In actual practice, many are fairly bad, and some are so terrible they not only fail to fulfill their stated goals, they are downright counterproductive.

And most importantly, this only works up to a point.

> Yes, the machine and systems underneath the abstractions are far more complex, and so if you set out to try to completely fit all of them in your brain, it would be much more difficult than fitting the entirety of the Commodore 64 in your brain, but that greater complexity exists precisely to free the higher layers of abstraction from concerns about things like memory management and clock speeds and so on.

A couple things here. First, the internals of a CPU (to name but this one) have become much more complex than before, but we are extremely well shielded from them through the ISA (instruction set architecture). Some micro-architectural details leak through (most notably the cache hierarchy), but overall, the complexity exposed to programmers is orders of magnitude lower than the actual complexity of the hardware. It's still much more complex than the programming manual of a Commodore 64, but it is not as unmanageable as one might think.

Second, the reason for that extra complexity is not to free our minds from mundane concerns, but to make software run faster. A good example is SIMD: one does not simply auto-vectorise, so to take advantage of those and make your software faster, there's no escaping assembly (or compiler intrinsics).

Third, a lot of the actual hardware complexity we do have to suffer is magnified by non-technical concerns such as the lack of a fucking manual. Instead we have drivers for the most popular OSes. Those drivers are a band-aid over the absence of standard hardware interfaces and proper manuals. Personally, I'm very tempted to regulate this as follows: hardware companies are forbidden to ship software. That way they'd be forced to make it usable, and properly document it. (That's the intent anyway, I'm not clear on the actual effects.)

> I think a world in which we abandoned C Machines and the paradigm where everything is opaque black box binaries that we control from a higher level shell but have no insight into, and instead iterated on what the Lisp Machines at the MIT AI Lab or D-Machines at Xerox PARC were doing, would be far better, and would allow us to achieve similar levels of ease and productivity with fewer levels of abstraction.

I used to believe in this idea that current machines are optimised for C compilers and such. Initially we could say they were. RISC came about explicitly with the idea of running compiled programs faster, though possibly at the expense of hand-written assembly (because one would need more instructions).

Over time it has become more complicated though. The prime example would again be SIMD. Clearly that's not optimised for C. And cryptographic instructions, and carry-less multiplications (to work in binary fields)… all kinds of specific instructions that make stuff much faster, if you're willing to not use C for a moment.

Then there are fundamental limitations such as the speed of light. That's the real reason behind cache hierarchies that favour arrays over trees and pointer chasing, not the desire to optimise specifically for low(ish)-level procedural languages.

Also, you will note that we have developed techniques to implement things like Lisp and Haskell on stock hardware fairly efficiently. It is not clear how much faster a reduction machine would be on specialised hardware, compared to a regular CPU. I suspect not close enough to the speed of a C-like language on current hardware to justify producing it.


I just want to say this was a wonderful log. There are not many in this field who have an intuition or impulse towards materialist/historical analysis, but whether you are aware of it or not, that is certainly what I read here. Just to say, it's not quite recognizing how we find ourselves thinking that is enlightening, but rather recognizing why we find ourselves thinking like we do that can bring clarity and purpose.

In a little more woo woo: abstractions are never pure, they will always carry a trace of the material conditions that produce them. To me, this is the story of computing.


Thank you for the kind words! "I appreciate them" would be an understatement.

And I agree entirely. Tracking the history of an abstraction will usually tell you its root, though it gets pretty muddy if that root is deep-seated!


Abstraction is the crabification of software: it gets so powerful and so good that it kills the carapace wearer, allowing fresh new script-kid generations to re-experience the business story and reinvent the wheel.


This article presupposes that this software crisis actually exists or is a significant problem. The crisis is all of these things:

    Projects running over-budget
    Projects running over-time
    Software was very inefficient
    Software was of low quality
    Software often did not meet requirements
    Projects were unmanageable and code difficult to maintain
    Software was never delivered
Now take the word "software" out, and how many human endeavours have one or all of these things? And then how much software is actually pretty great? We tend to only see the failures and the flaws, and success is just a baseline that we completely ignore even as it gets continuously better.

When we press the power button on our computer and it gets to a desktop, our computer has already run through hundreds of abstractions. Just at that desktop it is already the most complicated machine we have or will interact with all day. This happens billions of times a day, all across the world, and mostly flawlessly. And that's just one tiny example.


The article doesn't do a good job of explaining what they determine the actual crisis is. Complexity in and of itself isn't a problem. The problems you listed, however, are.

But I don't think that's the real motivation for articles like these: the real motivation is that things just seem out of hand.

But I think as an experienced programmer you have to balance that feeling of overwhelm with what needs to be done. It's taken me a while to arrive at this conclusion, but I believe it's important. Things will never be completely figured out and you have to be okay with it.


I think being a software developer is always being in conflict with reality. Most of our job is logic and rules. Preciseness and completeness. Reality, however, is full of people, chaos, and change, and we are uncomfortable with that. I think this is why programmers have such strong opinions on languages, tools, and techniques.

Over the years I've started to tune out articles that say everything is doomed or articles that propose the silver bullet to fix everything. Things are not terrible -- they are, in fact, always getting better. It's never been a better or easier time to be a software developer. But, because of that, the problems we have to solve are harder. The demands are greater. I think this appears to many like we're stuck in place because we always feel like we're behind the 8 ball. The appetite for software is insatiable.


The thing that is unique about software is the lack of physical constraints which serve as a natural forcing function or filter on quality. With a bridge, for example, at some level it must meet a minimum bar of structural integrity, quality of materials, etc. or it will fall over from its own weight. As a cook, there is a bare minimum I have to hit with the quality of my ingredients and skill of preparation in order for the food to be palatable and edible.

No such limits exist on software beyond maybe performance and memory constraints. But both of those are in abundance, so we can and do patch over crap endlessly. Every one of us has had that moment where we think or say, "how does this even work?" But until the user hits the wrong edge case they have no idea how rickety the underlying code is.

> This happens billions of times a day, all across the world, and mostly flawlessly.

No, I'd argue it's much more common for there to be flaws. They're just not obvious. They're random crap like my phone continuing to vibrate after I've answered the call until I get another call or text. Or options disappearing from a web app because something didn't load 100% correctly. Or the joys of flaky Bluetooth pairing. The list is endless.

"Have you tried turning it off and on again" is evidence of this. It's normal for software systems to get into such inscrutably subtle bad states that the only fix is to wipe everything and reload from scratch.


> The thing that is unique about software is the lack of physical constraints which serve as a natural forcing function or filter on quality.

Completely agree with this. The number of good+reasonable solutions is almost infinite, and the number of bad solutions is also almost infinite.

What makes it even worse is that we really don't have a good method of communicating the design+structure of our models to others (tech and non-tech). As the system gets more complex the issue gets worse.

We carry so much info in our heads about system models, and that info is painstakingly acquired through inefficient communication methods and trial and error and thoughtful analysis.

It would be amazing if we could create tools that allow us to describe the essence of the model and make it directly available to our brains so we could all collectively reason about it and manipulate it collaboratively.


The number of bad solutions is not just almost infinite. It is definitely infinite. Because, by induction, you can always make another bad solution from a bad solution by adding something unnecessary to it. Hence infinity. QED


"It would be amazing if we could create tools that allow us to describe the essence of the model and make it directly available to our brains so we could all collectively reason about it and manipulate it collaboratively."

Fuck that, I need a job.


> No, I'd argue it's much more common for there to be flaws. They're just not obvious. They're random crap like my phone continuing to vibrate after I've answered the call until I get another call or text.

This is kind of what I'm talking about. The absolutely massive complexity within your device lets you and billions of people seamlessly make calls from anywhere in the world to anywhere in the world, with devices made by all different companies, using infrastructure made by all different companies, and it all works so incredibly well that we mostly take it entirely for granted.

But yes, sometimes the phone doesn't stop vibrating.


One big difference between software and everything else is that software is built on top of other software, on top of other software, in an ever-increasing stack of abstractions and "good enough". This stack grows faster in software than in any other domain, because, as jdbernard said above, there's a lack of physical constraints preventing this from occurring. As this stack grows and grows, more points of failure are introduced, which can cause emergent, unexpected bugs and issues, which are hard to diagnose and pin down, because doing so involves traversing the stack to find the root cause(s).

It's one thing to appreciate the natural world around us and how it all seems to work together flawlessly to provide the reality we all experience together—but that's because it's natural, not artificial, like the world of software we have created. When things work less than perfectly in software, there are human causes behind it, which, once identified, could be resolved in order to improve everyone's lives. But instead, most people share your "good enough—it's a miracle it all works!" mentality, which causes acceptance of any and all software issues, which leads to further issues down the line as we say "good enough" and build yet another layer atop it.


What I've noticed is that most issues are at the leaves. Abstractions that have existed for a long time and that are used heavily tend towards being more solid over time. (And more documented, well known, etc)

It's the stuff at the edges that appears to be less reliable, but that's mostly because it's new. It doesn't really feel like that is getting worse though. We are constantly interacting with more and more software than ever before, but it's not like everything is broken. The fact that you can reliably make a phone call is far more significant than the fact that the vibration doesn't stop. Both are built on ever-increasing stacks of abstractions.

The difference in that example isn't some emergent complexity -- it's just that one is more important than the other. There are lots of analogies in the physical world where less-important things are crappier than more important things. We don't consider that some crisis of abstraction.


This to me is a cliché that laments how certain aspects and communities in software are very pop-cultural.

One thing good programmers do well is choosing abstractions that are tested and validated, not popular or hyped.

The stack has not changed in decades. E.g. it's still TCP, memory allocations, and perhaps SQL. Whether or not you want to learn those is up to you. Learn them and you will know your way around for decades to come.


> But yes, sometimes the phone doesn't stop vibrating.

I failed to communicate clearly. Yes, I agree, the scope of human achievement is amazing, software included. However, the issues with software go far deeper than just the trivial example I gave of the phone. It's pervasive and pernicious. I assume most software developers understand this as lived experience, but I'll elaborate more.

Almost every single person I know who is not an IT professional of some sort regularly comes to me, my son, and others who are IT professionals for help with things that are broken and non-functional to the point that they cannot use it without assistance. And that's just the show-stoppers. There is tons of crappy software that they've just found workarounds for, and they've gotten so used to having to find workarounds that they've just stopped complaining. Not because it's working correctly, but they would rather be happy and accept it than get constantly worked up about something over which they have no control. This is not entirely unique to software, but there isn't really any other mass technology that is as bad as software.

Software is simultaneously miraculous and horrible. Because of computing we have capabilities that would be seen as science fiction or magic not too long ago. But because of the things called out in this article, these magical tools and technologies are plagued by endless amounts of things that just don't work right, break randomly, and fail in unexpected ways.

With physical systems and hard sciences we identify the physical constraints, model the system, and then engineer solutions around those models, refining our solutions as we refine our models. With software we make up the models, rarely fully document them, and slap things together as we go. Some companies do better than others, and some domains take quality and reliability seriously, but in my experience even that correlates to distance from physical constraints. In general the portions of our industry that are closer to real-world constraints (chip manufacturing, control interfaces, etc.) have historically been better about delivering quality software.

If I come across as frustrated it's because I am. I love building software, and I used to love using software, but I am incredibly frustrated by the gap between what we've built and what is possible.

I could keep going, but I'll end this comment with one other observation. I'm not an Apple fanboy by any means, but Apple used to be the exception to this general rule of "software just kind of generally sucks." Apple used to be "it just works." And that was because Steve Jobs was the forcing function on quality. It's possible.

Again, this isn't unique to software. Excellence in any domain requires someone to be the standard bearer. At a restaurant it may be the head chef, refusing to let anything out of the kitchen that doesn't meet their high bar, for example. In every domain there exist various pressures that push quality down (time, cost, etc.), but software lacks the natural floor established by physical constraints, and is uniquely riddled with problems, IMHO.


I think I agree with you but disagree with the article.

> But because of the things called out in this article, these magical tools and technologies are plagued by endless amounts of things that just don't work right, break randomly, and fail in unexpected ways.

There is way more software out there than one can even imagine, responsible for literally every aspect of human society. There is an insatiable need for more software, and there simply aren't enough software developers in the world to do it all. What you get is triage -- things half-baked because effort is limited. There isn't some problem specific to software with regards to physical constraints. Cheap stuff made quickly breaks, whether it's software or power tools or children's toys. It's that simple.

> And that was because Steve Jobs was the forcing function on quality.

Exactly. What does that have to do with abstractions or the lack of physical constraints?


Yes, I think broadly we agree. Thanks for the discussion.

> What does that have to do with abstractions or the lack of physical constraints?

Someone like Steve Jobs becomes a forcing function to maintain a high bar of quality. He's the exception that proves the rule. In the general case you can't rely on someone like Steve Jobs, but in other domains you still have a higher minimal quality bar driven by physical constraints.

I concede that in every domain it is true that "cheap stuff made quickly breaks." My argument is that even then there is a minimal bar that is set by physical constraints. If the bridge doesn't stand it doesn't get built. If you cut too many corners or rush too much it just doesn't get built, or has to be rebuilt. This leads to overages and delay, but at the end of the process the bridge meets the physical specs or it falls over. I think the lack of any such physical constraints means that there is no such minimal bar for software and the floor of quality is therefore much lower, even in software by large, successful, and well-funded, "marquee" companies.


That seems like an arbitrary distinction. Plenty of software gets built that doesn't meet the minimal standards -- and you know what happens? It doesn't get deployed. How is that different from the bridge that doesn't get built or falls over?

All the same factors are at play: poor planning, insufficient resources, bad management, inexperienced workers, insufficient funding, lack of testing.

Plenty of construction projects have corners cut. Just like with software, it is someone's job to patch it later.


I just wanted to thank you two here. Your dialog is so insightful. I think the original article was bad. Its biggest merit is that it got you two together here in the comments!


I largely agree with you and share your sentiments. However, in software there’s plenty of standard bearers distracting us from the real issues by trying to push for things like having CI break when you spelled something ”colour” instead of color. In fact, the whole industry is full of nitpickers who want to gatekeep others’ entries based on arbitrary rules like where braces, tabs, spaces or semicolons go, because it lets them come up with the rules and enforce them instead of doing the hard work of solving the real problems.

So it’s not so much that we don’t have people pushing for quality, we do. We just can’t agree on what quality even looks like.


You make great points.

Most of what you enumerate, if not all, can be traced back to two fundamental concepts: communication and understanding.

Communication because this is how understanding can be propagated and the lack thereof remedied.

Understanding because this is fundamental to the success of any endeavour. Without it one or more of the symptoms above will manifest. With it, they may still, but at least there can exist a pathway to success.

In my experience, the majority of problems in the software engineering industry are people problems. Not technical, nor technology, nor process problems.

This is why communication and understanding are vital to success.


> how many human endeavours have one or all of these things?

You want a binary answer as in "it never happens with X"? Because for most kinds of project, we say it didn't go well when 1 or 2 of those happen, while we declare a victory in software if we manage to avoid 2 of them.

> And then how much software is actually pretty great?

And now I'm curious about your bar for "pretty great" software. This is obviously subjective, but most people wouldn't look at any practical software and call it great.


> This is obviously subjective, but most people wouldn't look at any practical software and call it great.

This is what I think is insane. We interact with probably thousands of pieces of software every day (and I'm probably off by an order of magnitude) and none of it is great? What is the bar here?

It's funny that something that was absolutely amazing yesterday is no longer even so much as great today. I can instantly connect to a remote server from my phone, type some text, and generate a pretty good poem about my cat and all of that is already "meh".


> generate a pretty good poem about my cat

I do not consider a text generator to be the pinnacle of "great software", regardless of how well the code was written. I don't think "meh" is the wrong word.


Exactly my point! A statistical text generator, trained on a vast majority of the available content on the Internet and able to understand natural language commands to generate content on any available subject, is bloody amazing.

Literally five minutes after this absolutely crazy-ass technology was released to the world, it's already the baseline.


> Just at that desktop it is already the most complicated machine we have or will interact with all day.

Except for your brain manipulating, and taking signals from, and reasoning about, the computer. ;)


If you look at the resumes of engineering or automotive company leadership, you'll see people going through stages of ever expanding responsibilities of part, component and product design, or management of production facilities of increasing size and importance. The CEO will still emphasize their technical knowledge, non-technical staff will at least try and fake it.

In agile software development, on the other hand, technical competence usually ends at the lowest tier. A scrum team has folks on it who make software, that's it. Then there are lots of scrum masters and business analysts who have probably never coded much; the first actual boss in the hierarchy has mostly secretarial and managerial work and will hardly look at code at all.

Point is, it's not just that software development is done in ticket-sized portions which does not invite philosophical considerations about the numbers of abstraction layers that one builds and maintains. It's that software developers don't even have a seat at the table(*); they get babysat by scrum masters, make compromises during code review, are discouraged from thinking beyond the ticket, and are then of course not usually promoted into leadership roles to which they would bring their technical competence.

It appears therefore that any movements to bring awareness to the "software crisis" will be relegated to hobbyists, as the article states at the end: to "Handmade, Permacomputing, and various retro-computing circles".

(*) I partly blame Hollywood and their incessant humiliation of software/IT people, while creating endless leading roles for doctors and lawyers, effortlessly weaving their complicated terminologies into fascinating storylines, which is apparently not possible to do for us? Maybe the scriptwriting AIs can come up with something here soon.


> I partly blame Hollywood and their incessant humiliation of software/IT people, while creating endless leading roles for doctors and lawyers, effortlessly weaving their complicated terminologies into fascinating storylines, which is apparently not possible to do for us?

Doctors and lawyers deal with people and everyday problems that are easy to turn into an interesting story. I don't see many contract lawyers or radiologists as protagonists - it's ER docs and criminal law.

Software development is about rigorously talking to a computer all day, either solving mundane solved tasks in a new application, or problems that you can't even grasp without a technical background. I'm a developer who started programming as a teen over 20 years ago for fun - and I'm bored out of my mind with most of the work - I don't even try to talk about it to non-developers because I know it's about as interesting as accounting, more people would get useful info out of their stories.


Accounting, especially tax optimization, is really interesting/fun, but maybe it's just my bubble in a high-tax country where most of the middle class needs tax optimization to live above the poverty line.


Bugs can make great stories. But only for a niche audience. Not Hollywood.


I could buy this argument. Had I been a junior during the agile era, I'm not sure I would have developed as fast or as far as I have.

The most agile pilled company I worked for just treated juniors & seniors as interchangeable cogs, except seniors should be able to clear more points per sprint. Active discouragement from thinking outside the scope of your ticket, keep your head down and mouth shut.


When I was starting out a bit more than a decade ago, I was on an agile/scrum software team in a hardware company. The team was fine but I found the process painful. I ejected as soon as a new director realized he could do algebra with the story points and began plotting stories quarters out. I've never been on a scrum team since and am happier for it. As a manager now, I would push back on scrum in almost every situation (and have).


The problem with "agile" is that everybody claims to do it. Companies with a healthy environment cram their procedures into agile-derived names, and keep the healthy environment, while unhealthy ones also cram their procedures into agile-derived names, and use their universality as an excuse to never improve.

I think scrum is irremediable, but even then, some places only practice it pro-forma: divide the work into tickets at the beginning of the sprint, rewrite the tickets into what they did at the end, and go home happy.


This doesn’t mirror my experience at all. Where I worked agile meant removing unnecessary processes as well as trusting people over tools. As a junior I was given a lot of flexibility and probably too little oversight. I was mostly given a high level task, no deadline, and had to together with others find ways to make the tasks smaller.


> juniors & seniors as interchangeable cogs

welcome to the car factory


In big tech lots of managers are highly technical all the way up.

But there's two problems: they can't get into the weeds, and they also are subject to the perverse incentive of being rewarded for generating complexity.

Some people fight it, sure, but those who fight it are less likely to be promoted. You don't get rewarded for having less staff under you or eliminating your own role.


The perceived misery you describe I feel is self-inflicted. Many devs "below" me have become entirely disconnected from customer needs, instead only focusing on "interesting" dev problems.

Why do developers only work on ticket-sized portions of the actual requirements? To put it succinctly: because they are simply too dumb. They cannot wrap their heads around it. They cannot grasp it.

Do I sound frustrated? I am. It is inscrutable.

Sorry.


I think categorizing it as "too dumb" is also doing a disservice to many developers who are stuck in feature factories. After a while you realize the business is happy with the status quo, so you do your Jira tickets, take your paycheck, and go home. My puny stock options are extremely unlikely to be impacted by my work output.

My boss doesn't care about tech debt. Get this ticket done, get it done quick, and move on. He figures he will be long gone before the tech debt racks up to the point he would get in trouble for it. Hell, I'm not sure his higher-ups even realize the tech debt problem, so even if he is here for 4 years, they wouldn't realize what the cause of the tech debt was.


Though my employer certainly could be categorized as a feature factory (individual software development), we kind of sell the opposite: Sustainable development producing software which can be changed easily. There's only direct monetary compensation. Hierarchies are flat. Like, too flat.

I understand many do not have the energy to fight the status quo and some may not have the… eloquence to do so. I have worked very hard for many years to end up where I am. If others don't, I expect them to at least accept where they remain. Because they don't do "don't care". They are effectively sabotaging projects.

They certainly aren't unintelligent. They still act pretty dumb. Again, I must apologize for my polemics.


Problem is, generally the only places not doing this are FAANG, and we all can't work there. I've interviewed at two FAANG companies in my field (Ops type); it went OK, but I didn't get hired. So apparently I'm not good enough for FAANG so far. Also, they are losing their luster as well.

Now, whatever. It's a job that pays well. I'm making rich people richer, but I'm not sure what I'm supposed to do differently. MBAs continue to strip-mine everything, and my country's upcoming elections are between two people we should be saying "Sure, grandpa" to while they tell us stories in their nursing home.

If that makes me dumb, I'll take the label, I guess. I try not to be, but when I'm having to explain to a dev for the 5th time to stop doing appsettings.json, I realize that my tilting at windmills isn't going to fix anything.

Also, this is a feature factory: https://www.productplan.com/glossary/feature-factory/


Problems surrounding computing education do compound these frustrations, and I sympathize.

Having worked for larger organizations (whatever FAANG calls itself these days, I can't keep track), as well as academia and independent education, I've seen both halves of the "production line" for newcomers to computing.

Something has to change in how we bring individuals into our field. I have some ideas based on my experiences, but you're not in the wrong for feeling frustrated about this. It is the state of things, and many companies are not equipped to handle it, because it's unexplored territory.


> Why do developers only work on ticket-sized portions of the actual requirements? To put it succinctly: because they are simply too dumb. They cannot wrap their heads around it. They cannot grasp it.

I think the reason is entirely different: ticket-sized portions of requirements are the only thing that one can hope to estimate in any useful fashion. Business side needs estimates, so they create pressure to split work into pieces they know how to handle.

Put another way, it's not that developers are "too dumb" to wrap their head around actual requirements. They're not allowed to. The business prefers devs to consistently crank out small improvements, keep "velocity", even though it leads to "organically" designed systems - i.e. complex, full of weird edge cases, and smelling of decay. The alternative would be to let the dev talk, design, experiment, take the project holistically - but that also means the work becomes impossible to estimate more precisely than "sometime next year, maybe", which modern businesses can't stand.


I very much dispute the claim that devs are not allowed to think big. It’s just that they take the easy way out. After all, others are taking care of the visual design, requirements engineering, reporting, controlling and whatnot, right?

It’s fine, really. But then please don’t try to overstep the role you assumed.

In my opinion, it is critical you do both: Know the big picture, the vision, build a technological vision based on that. And then you must work on this, in bite-sized pieces. From my experience, in all but the smallest projects, not working iteratively (“experiment”, as you call it) is pretty much a guarantee to build the wrong thing from a user/customer requirement standpoint. Not having the technological vision is also guaranteed to result in a steaming pile of tech debt.

I don’t see a problem with providing reasonably accurate long-term estimates either. Build your technological vision and you’ll know. Everybody knows and will understand that substantial requirement changes or newly discovered requirements will change the timeline.


Counterpoint: Boeing.


This paints abstraction as an evil, but it's an unavoidable tool for any human-made software to reach certain capabilities.

Rich Hickey said something along the lines of "A novice juggler may be able to juggle two or three balls, but the best juggler in the world can only juggle maybe nine balls. There isn't an order of magnitude difference in human ability, we hit a ceiling quickly." If we are to surpass those limits, we have no choice but to abstract

Of course there may be bad abstractions or too many abstractions in a given case, which is what I think the author is mad at. But that's an important distinction

This part is also plainly false:

> It is no longer easy to build software, and nothing comes with a manual.

Making software has never been easier or better-documented


In that talk he mainly proposes simplicity (decomposition) rather than abstraction.

We don’t necessarily need indirection to understand a whole, complex thing.

In order to deal with complexity, we need to disentangle it, so we can understand each part in isolation. Hiding complexity behind indirection puts distance between us and the thing we have to reason about.

Abstraction is good for users (devs or not), so they don’t have to care about the details, but it doesn’t make it easier for us to build it.


> It is no longer easy to build software

It is very easy if you know the right tools for the right job, but information about these is suppressed, so you never hear about them.

What the vast majority of people think the tech tooling landscape looks like and what it actually looks like are very different. The tools we know about are mostly horrible. They try to be silver bullets but they're really not good for anything... Yet they're the most popular tools. I think it's about capital influence, as hinted by the article.

For example, with the tool I'm using now, I made a video showing how to build a relatively complex marketplace app (5+ pages) in 3 hours with login, access control, schema validation, complex filtered views from scratch using only a browser and without downloading any software (serverless). The whole app is less than 700 lines of HTML markup and 12 lines of JavaScript code. The video got like 10 views.


> with the tool I'm using now

Correction: with the tool you created and are not-so-subtly trying to market right now. Complete with an $18/mo subscription fee, no users and (of course) a cryptocurrency that's fueled by promises and hype.

No, despite the conspiratorial twist, modern tooling is more flexible and easier to use than ever before. It has flaws, but those flaws are nothing compared to what developers had to go through a couple of decades ago. There's no big collusion to cover up your tool.


Web3 / popular crypto is not my bailiwick (although I like math, and so unpopular crypto). Your video is three hours long; I'm not going to watch the whole thing, and I'll leave it at that. That's not a slight, but three hours is a lot for something which I already feel isn't going to deliver something I can use today, next week, or this month. I'm a big believer in no/low code and run-anywhere... although those might also not mean what you think I mean. Props to you for doing it.

So, I started watching your video and the first question I had was: what are Codespaces? How is it secured? Where does the final app run? Can I run it on my hardware? Can I run it without cloud access? Which cloud? Will it be around next week, next month, next year... ten years from now? However don't pay too much attention to a sample of one. ;-)

I think the situation with no/low code can be summed up by analogy to desktop publishing. There is no CI/CD pipeline for DTP: you press print; it is a fully integrated environment... unlike literally all of the "continuous integration" environments which try so hard they can't stop shouting about it because it's eating them alive. Somewhere there might be a tool for generating color seps, setting gutters, importing fonts or creating postscript header libraries or LaTeX; but you'll never use it or see it. Somewhere, somebody is still doing multi-plate prints with pigments exhibiting greater variety than CMYK that are only vibrant in natural light; most people don't see the point because all they get is a crappy snapshot on their phone. It's not just a creator problem, consumers don't know what's possible and don't possess devices which fully reveal the full scope of what's possible. Consumer devices actively compromise that full scope for myriad walled-garden reasons, as well as run-of-the-mill negligence and ignorance.

I got a Cadillac as a rental a few years ago, and I'll never buy a modern Cadillac; I avoid driving one if possible. I couldn't control the windshield wipers, and the console display flashed the most attention-grabbing WARNING several times that I was not looking at the road, presumably because it couldn't suss that I was wearing glasses. I didn't actually look at the console display, since it happened at high speeds in heavy traffic on an unfamiliar road (I had a passenger, and I asked them "WTF is that flashing screen saying??!?"); good job, Cadillac!


Saasufy supports rollover deployment. It's serverless, meaning you don't need to write any server-side code. You just define the collections, schema, views/filters, access controls then deploy. It's radically different from anything that's out there.

It makes it impossible to introduce back end bugs or security flaws (assuming that access control rules are defined correctly; which is a lot easier to verify via a simple UI than reading thousands of lines of back end code).

Look at this app I built with it on the side: https://www.insnare.net/

It's only ~1300 lines of custom code in total. Most of it is HTML. Throughout the entire process from start to finish, I never encountered a single back end bug. Deployment essentially never fails; the UI doesn't let you do anything invalid and it helps avoid inefficient queries. Yet look at the complexity of what kinds of queries and filters I can run against the data and all the different views. It has almost 1 million records... Indexes are not even fully optimized. Look at the data that's behind access control/associated with specific accounts (e.g. the Longlists).

Go ahead and try to bypass access controls... The WebSocket API is documented here: https://github.com/Saasufy/saasufy-components/wiki/WebSocket...

It would normally have taken months and tens of thousands of lines of code (front end and back end) to build that.

The value is there in terms of security, development efficiency, maintainability etc... The hard part is getting people to see it.


Link it!



Although I'm not sure, I guess socketcluster might be referring to what is further elaborated on in these earlier posts I just quickly stalked:

https://news.ycombinator.com/item?id=40326185 https://news.ycombinator.com/item?id=40530828


Shallow and composable is something we all experience when using UNIX tooling.

GUIs are where this all falls apart as they are literal islands that don’t communicate with each other in a composable manner.

I’ve been experimenting with some GUI-meets-shell-pipeline ideas with a tool I’ve been working on called guish.

https://github.com/williamcotton/guish

I’m curious to know if anyone knows of any similar tools or approaches to composable GUIs!



> I’m curious to know if anyone knows of any similar tools or approaches to composable GUIs!

I believe the emacs architecture is the answer. emacs is not just a CLI, it's not just a GUI, it's a wonderful (but archaic) matchup of both, where they integrate seamlessly.

I am currently working on some ideas on this as well, my idea even looks superficially similar to yours, but I would say you should look into how this works in emacs as I feel like your approach is not really very "composable" (though I haven't looked very deep, sorry).

EDIT: perhaps this may also inspire you, in case you don't know it: https://gtoolkit.com/ (a Smalltalk environment where literally everything is programmable, just like in emacs, but kind of the other way around: the GUI is the language itself, not a result of its commands).


> I feel like your approach is not really very "composable"

It is as composable as a shell pipeline because all it does is make shell pipelines!


I was thinking of the GUI itself: how can you compose GUI components?


Yes, shallow, wide, and composable. That's how our abstractions should be.

But the larger problem is not GUIs. GUIs are a problem, but they are necessarily at the top of the abstraction stack, so the problem doesn't compose any further. (Which, interestingly, means they are so much of a problem that they aren't one anymore.)

The elephant in the room nowadays is distributed systems.


I would love to hear your thoughts on how distributed systems lead to deep, narrow and isolated abstractions. I can think of some obvious reasons but it seems like you’ve put more thought into this than I!


I haven't put anywhere near enough thought into this to settle on something simple.

There are two main dynamics that I noticed:

- Each distributed component has a high minimal cost to create and maintain, which leads people to consolidate them into fewer, more complex ones;

- You'll invariably need to interact with some component that doesn't share your values. So you need security, contracts (the legal kind), consumer protection, etc... and that creeps into your abstraction.

Outside of that, one thing that isn't inherent but does happen every single time is information erasure. Distributed interfaces are way less detailed than the ones on monolithic systems.


I'm a fan of KNIME (https://www.knime.com/) for those "Sunday Driver" pipelines that I use heavily for a few days or a week and then shelve for months at a time. It uses Python / Pandas as its code layer.


This looks pretty cool!

I agree -- a big reason I like Unix, and find it productive, is that it's "shallow and composable".

But GUIs have long been a weakness. I guess because there are many "global" concerns -- whether a UI is nice isn't a "modular" property.

Personally I would like more of a UI, but still retain automation

That is, the property of shell where I can just save what I typed to a file, and then run it later

And modify it and run it again

Or I can copy and paste a command, send it to my friend in an e-mail, etc.

---

As context, I've been working on a from-scratch shell for many years, and it has a "headless mode" meant for GUIs. There are real demos that other people have written, but nobody's working on it at the moment.

Screenshots:

https://www.oilshell.org/blog/2023/12/screencasts.html#headl...

https://www.oilshell.org/blog/tags.html?tag=headless#headles...

More links here - https://github.com/oilshell/oil/wiki/Interactive-Shell - some interesting (inactive) projects like Xiki

If you find the need for a compatible shell that's divorced from the terminal, or a NEW shell that is, feel free to let me know (by e-mail or https://oilshell.zulipchat.com )

Basically we need people to test out the headless protocol and tell us how it can be improved. I think we should make a shell GUI that HAS a terminal, but IS NOT a terminal -- which looks like it has some relation to what you're building

Right now we're working mostly on the new YSH language, but I'd like to revive the GUI work too ... I'm not an experienced UI programmer, so it would be nice to have some different viewpoints :)

---

Also, I'm a big fan of ggplot, so I'm glad you included it there.

Actually ggplot is exactly where I miss having graphics in shell!


I’m going to play around a little bit with this headless mode and see if I can come up with anything interesting!

Is there a place y’all hang out or a way you communicate? I foresee needing some guidance in places.


Oh yes feel free to join https://oilshell.zulipchat.com/ -- there is a whole #shell-gui channel . We are mainly developing OSH and YSH now, but it would be nice to work on the interactive shell too

The key idea is that OSH and YSH can run "headless" with the "FANOS" protocol (file descriptors and netstrings over a Unix domain socket), which has tiny implementations in C and Python

The reason we use FD passing with Unix sockets is so that anything that uses the terminal still works:

    ls --color
    pip install
    cargo build
All these programs will output ANSI color if `isatty(stdout)` is true (roughly speaking).

Most people didn't quite get that -- they thought we should use "nREPL" or something -- but my opinion is that you really need a specific protocol to communicate with a shell, because a shell spawns other programs you still want to work.

---

Here is a pure Python 3 implementation of the recv() side of FANOS in only ~100 lines

https://github.com/oilshell/oil/blob/master/client/py_fanos....

It uses the recvmsg() binding. I'm pretty sure node.js has that too? I.e., it has Unix domain socket support
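
For anyone curious what the FD-passing half of that looks like, here is a minimal sketch of recvmsg() with SCM_RIGHTS over a Unix domain socket (my own illustration, not the FANOS code linked above, which also handles the netstring framing):

    # Sketch only: receive a payload plus one passed file descriptor
    # over a connected Unix domain socket.
    import array
    import socket

    def recv_with_fd(sock, max_payload=4096):
        fds = array.array("i")  # buffer for the received descriptor
        msg, ancdata, flags, addr = sock.recvmsg(
            max_payload, socket.CMSG_SPACE(fds.itemsize))
        for level, ctype, data in ancdata:
            if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
                fds.frombytes(data[:fds.itemsize])
        return msg, list(fds)  # payload bytes plus any passed descriptor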

---

Anyway, thanks for taking a look - we're interested in feedback!


Oh also, my feeling of why this may be interesting is if you think "I want to write a GUI for shell, but I can't modify the bash source code to do what I want"

That is, the shell itself is a limitation

OSH is the most bash-compatible shell by a mile, AND we can easily add new hooks and so forth.

I think most stuff can already be done with FANOS, but I think completion is one area that needs some testing / feedback. For example, I imagine that when constructing a pipeline, you might want to complete exactly what's available -- like shell functions that the user has defined?


This is really, really cool! Definitely going to share this around, I love tools like this. They bring so much visibility into dark spaces.


Thanks! I started on it a week ago so it is just barely presentable at this point. I expect there are many glitches and bugs that I am completely unaware of!

The parser is off-the-shelf and seems robust. The AST-to-bash aspect is a bit messy. Wrapping arguments with spaces in them in either " or ' is somewhat of an unsolved problem. Like, it's hard to tell if someone wants to embed a bash variable and thus wants ", or if they are using something like awk that uses $var itself and thus needs '. We'll see how it goes!
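
To illustrate the quoting dilemma with an off-the-shelf helper (Python's shlex here, purely as an example; it is not what guish uses):

    import shlex

    arg = "print $1"
    print(shlex.quote(arg))  # 'print $1'  -> single quotes: awk sees $1 literally
    print('"%s"' % arg)      # "print $1"  -> double quotes: bash would expand $1 first

shlex.quote() always single-quotes, which is safe for spaces but blocks bash expansion; picking between the two automatically is exactly the unsolved part.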


It's still really impressive! Comparisons were thrown around to nushell, which is also a really impressive project. I can see the vision and I hope to follow along! :)


The ending: "Things can be better. I'll show you how."

It's just an intro for clickbait.


The prime criterion for being clickbait is whether it is sensationalized, deceptive, or intentionally misleading. This strikes me as none of those things. It's the last sentence of a blog post.


It's something where you have to read the introductory material, and then find out the important part is elsewhere. In this case, in a blog post yet to come.


It's the first (and only) blog post on that site: https://wryl.tech/log/index.html


If only they had unveiled the single unifying abstraction of everything :(


> If only they had unveiled the single unifying abstraction of everything :(

Here you are: https://en.wikipedia.org/wiki/Category_theory

;-)


Oh, that's a solved problem since 1969. It's called "unix". Everything is a file which can be processed as a byte stream. Composition is a breeze; you can't be any more general than that!

(I kid, mostly :)).


- UNIX: everything is a file.

- LISP: everything is a lambda.

- Tandem: everything is a database.

- QNX: everything is a message.

- IBM System/38: everything is a capability.


That's the worse-is-better grand unified abstraction.

The Right Thing, as any Lisp programmer can tell you, is lambda.


An actual Lisp programmer will tell you that the right thing depends on the situation. The right thing can be a lambda, a string, a symbol, a structure, a class object, a hash table, a vector, a list, a call to a foreign function, a syntactic abstraction, a bitmask, a load-time value, a pattern matcher, ...



Ad hominem? Or are you just stating that as a fact and not questioning the veracity of the argument?


> Very rarely do these models reflect reality.

> It's a nice coincidence when they do.

> It's catastrophic when they don't.

Well, generally, it’s not my experience. Most software out there is not critical. Many a bloated, crappy webapp might end up badly doing what the user is expecting while sucking an irrelevantly large amount of resources all day through, with erratic bugs showing up here and there; yes, all true.

But this is not as critical as the software that handles your pacemaker or your space rocket.

Most software can afford to be crap because most projects are related to human whims, for which a lack of quality is at worst a bit of frustration that will not cause a feeling of devastation or a death sentence. All the more, I guess that most software developers out there are neither working with Silicon-Valley-like money motivation, nor paying their bills with the kind of software project they love to build out of passion. So this means most software that hits the market is generated through exogenous, miserable remuneration. Who would expect anything other than crap to be the output of such a process?


While I sympathize that the stakes don't appear to match up, I am talking about all software, and also taking into account the fact that these frustrations are numerous. It'd be one thing if it was taken in small pieces, small cracks in the stairs, but half of them are missing on average.

I really don't like missing stairs.

We're very removed from the usage of our software, and experience it in short-form (hopefully) actionable signals that we use to inform our development process. We don't get to appreciate the real pain in this "death by a thousand cuts" unless we can somehow switch bodies with a new user.

I see programming as a trade, however, and we do have the power to govern the quality of our software. There are, however, incentives, financial or not, that can get us to look the other way.


I don't think most programmers are removed from the usage of software. I have to use Google Workspace daily, and I'm not sure how the Workspace team doesn't have the entire Googleplex hunting for their heads. Likewise, I'm shocked that the Microsoft Teams team isn't secluded for their own protection. I've got a laundry list for the Azure team, and most of the Microsoft employees I know have the same list, yet it remains, so some outside force must be driving this.

However, I think it's clear why it's like this. The business team wants cheaper developers, and the hope is that if you put enough abstractions out, they can turn development over to AI, or to monkeys with typewriters trying to create the great works of Shakespeare. I remember reading about the Log4j exploit and wondering which Log4j developer thought it was a great idea to allow the "feature" they added in the first place. Probably someone trying to prevent the monkey from destroying the typewriter.

However, it's an excellent article and I await the next installment.


I don't think we have a software crisis. Millions of programmers are able to create more or less useful programs all over the world; everything including your toaster is running software successfully enough; and the community was able to build programs that are accessible to everyone, from a 5-year-old kid all the way to your grandparents. Where is the crisis in that?

However, we have a project management crisis, which is not limited to software, where the people in charge of planning are distanced from the people in charge of delivery. And we don't seem to be able to bridge the gap. Agile, Scrum, whatever, are indicators of this gap, where "gurus" take all of us for fools and we are not able to produce anything better ourselves.

Commoditization of software development is also contributing to this mess, because people of all skill levels can find a way to partake in this activity with results of varying success. This is not good or bad, but just the nature of ease of entry into the field. It's not much different from the food business, where we have both the Michelin-star restaurants and the McDonald's of the world, both of which have customers. But we don't say we have a restaurant crisis.


> I don't think we have a software crisis. Millions of programmers are able to create more or less useful programs all over the world; everything including your toaster is running software successfully enough; and the community was able to build programs that are accessible to everyone, from a 5-year-old kid all the way to your grandparents. Where is the crisis in that?

About that toaster point: this is the actual, real, concrete example of the software crisis. Your toaster runs bad code. Bad code that is not designed with resiliency or security in mind, but is connected to the internet. It reduces your toaster's lifetime. In the past, you might have had a toaster which lasted 10 years. Now, because of the bad software and maybe even a mandated WiFi or Bluetooth connection, your toaster is garbage after 2 years because the vendor stopped pushing updates. Or maybe there were no updates at all. But because we might not always directly see it, or because of the current hyper-consumption and never-ending buying of new products, the crisis is not always that visible.

We might even be okay with the toaster stopping to work after 2 years, and not pay attention to, or know, why that happened. But maybe it was part of the Mirai botnet: https://www.cloudflare.com/learning/ddos/glossary/mirai-botn...

Likely the toaster was not part of that, as toasters use simpler chips, but who knows.


But I wouldn't call a toaster running code an example of a software crisis. Call it a business crisis or a crisis of capitalism/consumerism, but it is not a software crisis.

I agree wholeheartedly that an internet-connected toaster is a very stupid idea, but our ability to build such a system shows, if anything, a triumph of software (and hardware), not a crisis. What is a crisis here is a societal one: the fact that there is a demand, however artificially created, for such things. Software abstractions, or layers of abstractions, used to build such devices have nothing to do with this kind of crisis.


> I agree wholeheartedly that an internet connected toaster is a very stupid idea, but our ability to build such a system shows, if anything, a triumph of software (and hardware), not a crisis

We have managed to build systems with software that "triumph" for a very long time. Look no further than Voyager 1. A technical masterpiece which still works.

But fundamentally, about the toaster: we see these things in toasters because there are so many abstractions and possibilities. It is easy to embed software in the toaster without needing a legion of software developers from different fields.

And it adds up, and is the main issue behind "unintended" use cases for toasters. Many of these cases do not depend on the developer. Developers do not even know about them, or were not thinking about them, because they were abstracted away. The overall system of the toaster can be so complex these days that no single person can understand it completely alone.

But yes, the consumer part, and the acceptance that it is okay if the toaster breaks, is indeed a social problem. But the likelihood of toasters breaking, or doing something other than what they are supposed to, could originate with, or be increased by, the software crisis.


This is very close, I think. In a way it is a business crisis, in that business directs software engineering, not science or academia, nor some physical constraint.

A real estate developer would probably promise you a ten-story building made of straw if you seemed willing to pay, but a civil engineer will never go and build it, because they mostly listen to the physics and the ethical rules of the trade.

For some reason software engineers bend over and say yes sir when faced with a similar situation.

Perhaps looking at Boeing the same is becoming true of other engineering specialties as well.


> everything including your toaster is running software

Doesn't that just support the author's claim that there's too much software?

FTR, my Dualit toaster doesn't run software.


I don't see how, unless you start with the assumption that more software is inherently a bad thing.


I don't assume that; I simply observe that a toaster doesn't need software, that simple machines are more likely to be reliable than complex machines, and that the ability to run software makes a machine inherently more complex than (say) a clockwork timer.

I have no opinion on how much is too much software. But perhaps if we are employing people to write software for toasters, then we have too many programmers.


The existence of code in otherwise "dumb" toasters is due to it being cheaper than the analog alternative, simple as. I'm not sure why we're talking about all of these things as though they only exist to keep programmers employed.


Is it really easy to enter the field nowadays? I feel it's very hard to get into modern software development. So much knowledge is required.


Yes, I would say it is still fairly easy to enter the software development field.

It is worth remembering that software development is a very broad field ranging from changing size and color of buttons on a webpage (which is perfectly fine) to sending rockets to space.

I don't know how true it is these days but people are/were able to find software jobs after a couple months of code camps.


>"We developed methods of building nested layers of abstractions, hiding information at multiple levels. We took the problem of constructing software and morphed it into towering layers."

https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

https://en.wikipedia.org/wiki/Tower_of_Babel

https://en.wikipedia.org/wiki/Hierarchy

https://en.wikipedia.org/wiki/Abstraction

https://en.wikipedia.org/wiki/Abstraction_(computer_science)

(Compare and contrast!)


> It's catastrophic when they don't.

I have found this not to be the case.

Very often, an inaccurate mental model is the ideal user state, and it's my job to afford and reinforce it.

But that's just me, and my experience. I'm sure there's a ton of folks that have found better ways.


Personally I can't bear software that contorts itself to conform to an incorrect mental model. I find it alien and unusable. I don't want the inaccurate mental model and cannot force myself to accept it, but the contortions and don't-look-behind-the-curtain prevent me from forming an accurate one.

That doesn't mean I don't want abstractions. It means I think what constitutes a good abstraction is determined as much by how true it is as by how intuitively appealing it is. An abstraction that corresponds to a naive user's expectations but that doesn't accurately reflect (the essential/relevant aspects of) what is actually happening is not an abstraction, it's a lie.

Edit: And the tragedy of it is that users are, by and large, extremely good at figuring out the reality behind the lies that well meaning developers make their software tell. When our software breaks and misbehaves, its internal realities are surfaced and users have to navigate them, and more often than not they do, successfully. Internet forums are full of people reasoning, experimenting and fiddling their way to success with faulty lying software. Apple Community is perhaps the purest example of a huge population of users navigating the broken terrain behind the abstractions they weren't supposed to think about, with absolutely no help whatsoever from the company that built them. We should have more respect for users. If they were as dumb as developers assume they are almost none of the software we write would ever be successfully used.


> We should have more respect for users

Agreed. In my case, I write software for a lot of really non-technical users, and have found great utility in reinforcing inaccurate, but useful, user narratives.

So "respect" doesn't just mean assuming users are smart. It's also making house calls. Meet them where they live, and do a really, really good job of it, even if I think it's silly.


Just curious, could you give a specific example where you've found it's better to reinforce an inaccurate mental model of the software? And how do you go about it?


I did it a ton, in my day job (image processing/camera control). I won’t directly link to that (personal policy), but we were in a constant war with the hardware people, who wanted, basically, skeuomorphic representation. We found that users often found these to be overcomplex, and that software often allowed us to abstract a lot of the complexity.

Many camera controls are where they are, because, at one time, we needed to have a physical connection between the actuator and the device. Nowadays, the buttons no longer need to be there, but remain so, because that is where users expect them to be.

Almost every car control is like that. Touch interfaces are falling flat, these days. Many manufacturers are going back to analog.

I am currently developing an app to allow folks to find on-line gatherings. It’s not an area that I have much experience in, personally, so I’m doing a lot of “run it up the flagpole, and see who salutes” kind of stuff. I just had to do a “start over from scratch” rewrite, a week ago, because the UX I presented was too complex for users.

The app I just released in January had several pivots, due to the UX being incompatible with our users. We worked on that for about three years, and had to abandon several blind alleys. Not all the delay was for that, but one thing I did was have a delicious feast of corvid and bring in a professional designer, who informed me that all the TLC I had put into “my baby” was for naught.

The app is doing well, but we’re finding some things that users aren’t discovering, because our affordances aren’t working. You can tell, because they ask us to add features that are already there.

Often, I will be asked to add a specific feature. I usually have to ask the user what their goal is, and then work towards that goal, maybe in a different manner than what they suggested, but sometimes, their request opens my eyes to a mental model that I hadn't considered, and that gives me a good place to start.

I’m really, really hesitant to post examples here, as the very last thing that we need, are thousands of curious geeks, piling onto our apps.


Thanks for the info, so you're talking about adjusting or replacing the UX based on user feedback. It makes sense now, but I guess I was imagining something different when you mentioned reinforcing inaccurate mental models.


One of the things that I'll often do, is produce a variant of the UX that gives the designer/C-suiter exactly what they want, but I'll have a variant ready, that does what I think we'll need.

That way, they will hear it from the users.

I am often written off as "a whiner," but they have to pay attention to the end-users.


Apple Community is a perfect example of users assuming the underlying software is always correct, for every possible use-case, and that the user is at fault for going against the grain or, in many cases, *using the software as advertised*.


I reject this thesis totally. It has never been easier to get things done with software. The APIs provided on many different platforms allow useful applications to be developed by a larger number of people than ever before.

I started my career in the era of 8-bit microcomputers. Yes, it was great to know the entire stack from the processor up through the GUI. But I would never want to go back to those days. Development was too slow and too limited.

We are in a golden era of software development.


> It seems as if this state of comfort is due to a sense of defeat and acceptance, rather than of a true, genuine comfort.

A whole lot of, say, Bay Area software developers have genuine comfort from their salaries/compensation.

Defeat and acceptance doesn't come into this, for most organizations: they face little-to-no accountability for security problems or other system defects, so... comfort for the developers.


The “inscrutable layers unapproachable to beginners” of today are the bare metal low level computing of tomorrow.

Sure I learned with DOS and Turbo Pascal and it was wonderful, but if you ask my teachers who learned with machine code and microcontrollers, they worried that computers have become too abstract and kids these days have little chance to learn the true details.


If it's done right, by documenting a mature interface and its inputs/outputs, no one needs to know what atoms make up the slice of bread that gets buttered.


Isn't that exactly what the author is complaining about? There is too much complexity to understand all of the layers no matter how well documented.


The author mentioned handmade as a step in the right direction but the handmade creator more or less gave up on the project after a couple years and didn't accomplish his goal of delivering a final product.


The later videos are like cautionary tales… 5 years in, no game in sight, and you’re 30 videos deep into writing a sophisticated global illumination system…


Game design and programming are largely orthogonal skills, and procrastinating working on the former in service of the latter is perhaps the most common game development mistake of all time.


OT: anyone's got an idea why Firefox would not be able to display that page in reader mode (button in the URL bar not showing up)?

For some reason, my eyes cannot cope with white text on black backgrounds, so I usually just go to reader mode in cases like this article. But here, this option does not exist, for some reason?


Thanks for the feedback regarding this, I've been trying to find a middle-ground between my authoring style (writing raw HTML with low overhead, instead of using a static site generator) and "portable readability".

I have a feeling it's just me not annotating the markup correctly, which I am in the process of fixing and porting around!


Personally, I've found that adding 3 lines of javascript to make a style switcher leads to great results. (You know you can simply load a CSS file in javascript and apply it, in a single command?)


They didn't use proper HTML elements like <article>, <h2>, etc.; only <div>.

With this, it's not possible for the browser to know what the content of the article is.


It's possible, just more difficult. E.g., Safari reader mode works fine for this page.


No idea. And I won't be reading it either.

Someone wanting the world's attention because of a crisis really should not add extra friction.


The missing piece in this and similar arguments is the miracle that we have before us. It is unfathomably wonderful that we have the breadth of technology (including software) that we do given where we’ve come from. As with everything we’ve done and will ever do it is messy, flawed and tragic as well but that doesn’t diminish it.

If you’re going to advocate that we change, it might start with recognition of the value we have and the effort it took to realize it. The flaws can only be resolved insofar as the solutions don’t dilute the gift.


The large majority of the work code does can probably be boiled down to a series of transformations done by a hierarchy of pure functions with a fixed number of inputs/dependencies, all of which are easily testable in isolation.

It's unconstrained side effects and dependencies, resulting in an increase in complexity, that seem to cause the major issues and have to be managed.

The real problem, of course, is the human capacity to comprehend (or not) the entirety of the system or subsystem by modeling it correctly in the brain.
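
As a toy sketch of that shape (my own illustration, nothing more): pure transformations in the middle, the single side effect pushed to the edge, each piece testable on its own.

    from functools import reduce

    def parse(line: str) -> int:    # pure: text -> number
        return int(line.strip())

    def double(n: int) -> int:      # pure: number -> number
        return n * 2

    def total(numbers) -> int:      # pure: fold into a sum
        return reduce(lambda acc, n: acc + n, numbers, 0)

    def report(lines) -> None:      # impure shell around the pure core
        print(total(double(parse(line)) for line in lines))

    report(["1", "2", "3"])         # prints 12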


Abstractions themselves aren't a problem; they are in fact a necessity if you want to move anywhere beyond simple bit-twiddling. (Heck, even the idea of software itself is an abstraction over hard-wired instructions.)

The real problem with abstractions is when they are implemented poorly, or have side-effects, or just plain bugs. In other words, we will always be at the mercy of human-produced software.


>The real problem with abstractions is when they are implemented poorly, or have side-effects, or just plain bugs.

Which is always, and is why clean maintainable code uses minimal abstractions to accomplish the task. However it seems the default these days has become "pile it on".


There is also the issue that lower abstractions don't always give enough info to higher-level abstractions, and vice versa. Take OS filesystem code, for example. It would be useful for the low-level storage driver to have info that the higher-level FS possesses, so it can tell the hardware that certain blocks of storage can go back into the wear-leveling pool (i.e., when a file gets deleted). Or, the other way around, the filesystem driver can make better file allocation and access patterns if it knows that the minimum write size for a block on an SSD is 128KB, or that a write starting on one block and ending on another gives better performance if that first block is picked to be on a stripe boundary of a RAID drive.


The problem isn't minimal abstraction but imperfect abstraction. Imperfect abstractions convince people to dig underneath the surface. Now you've lost the benefit of abstraction because they're thinking of two layers at once.

So when people run EXPLAIN on their SQL query, IMO you've broken the declarative abstraction.


Asking the SQL engine to explain the query doesn't mean the abstraction is broken or imperfect. It simply means the developer wants to understand how their query is being understood and, hopefully, optimized. This is essential when debugging why queries don't behave optimally, e.g. failure to use an index.
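
For instance (SQLite here is purely my own example; the details of EXPLAIN output vary by engine):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    con.execute("CREATE INDEX idx_users_email ON users (email)")

    # EXPLAIN QUERY PLAN shows whether the index is used or the engine
    # falls back to a full table scan.
    for row in con.execute(
            "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
            ("a@example.com",)):
        print(row)  # detail reads like: SEARCH users USING INDEX idx_users_email (email=?)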

Even with the perfect abstraction, people will need to dig beneath the surface. This is because every layer of code is subject to side-effects due to its actual implementation. That's why we have, e.g., multiple sorting algorithms, and documentation for their individual behavior, speed, etc.: so developers can understand the limits and idiosyncrasies of each and pick the correct one for the job.

We may make incremental steps towards more bug-free code, but because everything is limited by the details of its actual implementation (whether real or an abstraction), there is no magic bullet that will make a huge leap forward in the realm of software development as a whole.


> It simply means the developer wants to understand how their query is being understood and, hopefully, optimized.

Or, in other words, the developer wants to break the abstraction.

If your definition of "perfect abstraction" doesn't include the developer never needing to look beneath the surface, I'd say it's a pretty bad one.


This one is interesting.

What SQL abstracts is how the work gets done, not what it produces. The missing piece is for the user to be able to control that optimization of how the work gets done, which gets right down into the details of what SQL was abstracting in the first place.

There could be a middle ground where the user provides input about target performance and the engine uses that input during choice of execution plans.

Maybe an OPTIMIZATION clause for difficult or long running queries: ALLOW_EXTENDED_PLAN_SEARCH=True or MINIMIZE_EXECUTION_TIME_FACTOR=10 (1=just do simple, 10=expend more time+space up front to reduce overall time)


The user does not want to control optimizations. The user wants their queries to run fast enough. Ideally, the database should monitor the queries and create/remove the necessary indices as needed. And some databases do that.


> User does not want to control optimizations.

It's true, it would be nice if the user didn't need to intervene, but unfortunately query optimization is a combinatorial problem where the engine has incomplete information, causing performance problems (or cost problems if you are on Snowflake), so the user is required to intervene some percentage of the time.


There are no perfect abstractions. Or they are extremely rare. There are abstractions suitable for a given task. And as requirements change, perfect abstractions become imperfect and new abstractions are needed (which would have been imperfect in the past).


Complaining about too many abstractions in software is like complaining about too many meetings in a Scrum shop. It's true. Everybody knows it. Or rather, it would be true if efficiency of the software/development process were the thing being optimized for. But it's ultimately a short-sighted perspective because you're not thinking of the people involved.

In the case of Scrum, Scrum is implemented because it gives managers and stakeholders some semblance of observability and control over the software development process. Granted, Scrum shops are merely cosplaying at implementing a rigorous, controlled methodology, but if you take that semblance away you will have angry, frustrated decision makers who resent the software teams for being opaque and unwilling to commit to budgets or schedules.

In the case of abstractions... maybe there are a bunch of junior devs who just can't learn the complexities of SQL and need an ORM layer in order to reckon with the database. (I worked at a software shop where the most senior dev was like this; part of the reason I was brought on board was because I knew how to munge the PL/SQL scripts that formed part of their ETL process.) Maybe one part needs to be completely decoupled from another part in order to allow, for example, easy swapping of storage backends, or the creation of mocks for testing. Maybe some architect or staff SE is just empire-building, but at any rate they're way above your pay grade so the thing will be built their way, with their favorite toolkit of abstractions and advocating for the use of fewer abstractions will get you nowhere.

If you're working on a team of John Carmacks, great! You will be able to design and build a refined jewel of a software system that contains just the right amount of abstraction to enable your team of Carmacks to maintain and extend it easily while still running smoothly on a 386. Unfortunately, most software teams are not like that, most customers are not like that, so the systems they build will develop the abstractions needed to adjust to the peculiarities of that team.


“Modularity based on abstraction is the way things get done” --Barbara Liskov

Something as big and complex as the internet, which covers technologies from email to fiber, is held together by layered abstractions.

Also, software has gotten incredibly better since the 70s. We've built so much better tooling (and I believe tooling can help close the gap on growing complexity). When I was learning how to program, I had to double check I had every semicolon in the right place before hitting compile. I simply don't have to do that anymore. I can even get an entire API auto-completed using something like copilot on VSCode.

Nonetheless, a very thought-provoking article. Thank you for sharing!


The problem is that the abstractions we are forced to create do not drastically change how we can think about the problem domain in question.

We're often just hiding some mechanical details when in truth we should be searching for and codifying fundamental ontologies about the given domain.

It's hard because we can't yet be users, since the thing does not yet exist, and yet we can't really know what we must build without a user to inform us. We can create some imaginary notions through various creative explorations, but images can often deceive.

I do believe the tools most used for software development are fundamentally terrible at modelling these ontologies, and are really little more than telling the computer to do A then do B, and so have never really abstracted much at all.


Just watch all the Rich Hickey videos on YouTube and you’ll understand where it’s all gone wrong.


With a splash of Joe Armstrong!


This is discussing some of the same subjects as Brooks' Mythical Man Month and related essays. Our solutions are made up of "pure thought stuff" and so can be organized in almost any possible way.

We've found better ways to organize things over the years, and reduce incidental complexity too. We continue to chip away at small problems year after year. But, there's "no silver bullet," to give us a 10x improvement.

Believe I agree with the piece that we have too many layers building up lately. Would like to see several of them squashed. :-D


Along the same lines is a general theory of mine that many people in this era generally underestimate the cost of coordination, and overestimate the efficiency gain of bulk processing.


I hate when people try to promote things by inventing a crisis


It’s absurd to complain about abstraction. Hardware will continue to leapfrog the pace of software innovation and things will keep getting better.

Sure, layers of abstraction are leaky and have their issues, but I don’t want to write a hipster language in a hipster editor. If you enjoy that, great.

Also, it’s easy to look at the past with rose-tinted glasses. Modern software is a bloated mess but still a million times more productive.


> Hardware will continue to leapfrog the pace of software innovation and things will keep getting better.

This is wishful thinking. Try running Windows 2000 on era-appropriate consumer hardware and tell me just how much better off we truly are now with bloated and unresponsive web apps.

> Also, it’s easy to look at the past with rose-tinted glasses. Modern software is a bloated mess but still a million times more productive.

What metrics are you using to quantify "a million times more productive"?


This reminds me of Jonathan Blow when he says that software has been free riding on hardware leaps and bounds for decades https://youtu.be/AikEFhCaZwo?si=9klvCUW5qHtOpePh


Example - "I made an app" is considered more of a signal than developing the underlying tools and libraries that the app needed to exist. Making everything "easy" backfires because those doing the hard work realize it's a rigged game.


Many abstractions do not hold over time. That is the basic problem I have with OOP.


@wryl:

I can't find a definition of the title term, "Software Crisis," anywhere in the post.

Is it "...[the] growing complexity of software..."?

It's difficult to reason about something with no definition.



Software should strive to be no more complex than the underlying problems it attempts to solve. I don't think it can be simpler than the reality it needs to address.


Is this going to be followed up with a “functional programming is the bestest” post?


This is one of those articles that you read and realize that it was a waste of time. It concludes that the prime villain of this supposed crisis is abstraction and provides a simple solution - "... solution ... a constraint on the number of layers of abstractions we are allowed to apply."

Uh-huh, all hail to the coming of the layer police.

Ends - "Things can be better. I'll show you how." - @author - Maybe it would have been better to under promise and over deliver instead?

----

But that got me asking whether there might indeed be a software crisis, and yes, I think there is a crisis of sorts on the personal level. Maybe for others too. It's not one that is structural, as the author proposes. It's that the software landscape is so vast and chaotic. There's so much going on that it's almost impossible to know where to focus. FOMO I suppose, too much to do and not enough time.

So many clever people, so much energy, doing all kinds of amazing things. For many different reasons, some good, some not. A lot of it looks, to coin a phrase, like fractal duplication, e.g. yet another JS framework, yet another game engine, yet another bullshit SaaS, just because. It seems inherent redundancy is built into the systems.

Good times, I suppose.


I agree with the statement that there's a software crisis, but completely disagree with the author about what the problem is caused by.

The software crisis, if there is one, is caused by complexity. Complexity is the single enemy of a software developer. I would say that reducing complexity is the whole purpose of the software engineering field. I have many small hobby projects where I am the sole developer, and I still struggle with complexity sometimes... I've tried many languages and programming paradigms and still haven't found one that actually "solves" complexity. I am convinced, for now, that the only solution is developer discipline and, guess... good abstractions.

Because complexity doesn't necessarily come from abstractions. In fact, it's the exact opposite: the only way we know to make things "look" simpler, so that we can make sense of it, is to abstract away the problem! Do you need to know how the network works to send a HTTP request?? No!!! You barely need to know HTTP, you just call something like "fetch url" or click a link on a browser and you're done. This is not something we do because we are stuck on some local maximum. Whatever you do to "hide" complexity that is not crucial to solving a certain problem, will be called an "abstraction" of the problem, or a "model" if you will. They always have downsides, of course, but those are vastly offset by the benefits. I can write "fetch url" and be done, but if something goes wrong, I may need to actually have a basic understanding of what that's doing: is the URL syntax wrong, the domain down, the network down, the internet down, lack of authorization?? You may need to dig a bit, but 99% of the time you don't: so you still get the benefit of doing in one line what is actually a really complex sequence of actions, all done behind the layers of abstractions the people who came before you created to make your life easier.
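
As a small sketch of that point (Python's standard library here, just one arbitrary example): the happy path is one line, and the except clauses are where the hidden layers resurface when something goes wrong.

    import urllib.error
    import urllib.request

    try:
        body = urllib.request.urlopen("https://example.com").read()  # "fetch url"
    except ValueError:
        print("malformed URL syntax")
    except urllib.error.HTTPError as e:
        print("server answered, but with an error status:", e.code)
    except urllib.error.URLError as e:
        print("DNS failure, refused connection, or no network:", e.reason)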

> Various efforts have been made to address pieces of the software crisis, but they all follow the same pattern of "abstract it away"

Of course they do. Abstracting away is the very definition of addressing complexity. I believe what the author is actually trying to say is that some of the abstractions we have come up with are not the best abstractions yet. I would agree with that because as hardware evolves, what the best abstraction is for dealing with it also should evolve, but it hardly does so. That's why we end up with a mismatch between our lower level abstractions (Assembly, C) and the modern hardware we write software for. Most of the time this doesn't matter because most of us are writing software on a much higher level, where differences between hardware are either irrelevant or so far below the layers we're operating on as to be completely out of sight (which is absolutely wonderful... having to write code specific for certain hardware is the last thing you want if all you're doing is writing a web app, as 90% of developers do), but sure, sometimes it does.

> We lament the easy access to fundamental features of a machine, like graphics and sound. It is no longer easy to build software, and nothing comes with a manual.

I have trouble taking this seriously. Is the author a developer? If so, how can you not know just how wonderful we have it these days?? We can access graphics and sound even from a web app running in a browser!! Any desktop toolkit has easy-to-use APIs for that... we even have things like game engines that will let you easily access the GPU to render extremely complex 3D visualisations... which, most of the time, work on most operating systems in use without you having to worry about it.

Just a couple of decades ago, you would indeed have to buy a Manual for the specific hardware you were targeting to talk to a sound board, but those days are long gone for most of us (people in the embedded software world are the only ones still doing that sort of thing).

If you think it's hard to build software today, I can only assume you have not built anything even 10 years ago. And the thing is: it's easy because the "hard problems" are mostly abstracted away and you don't even need to know they exist! Memory allocation?? No worries, use one of a million languages that come with a GC... even if you want the most performant code, just use Rust, and you still don't need to worry (but you can if you must!!! Go with Zig if you really want to know where your bytes go). Emit sound? Tell me which toolkit doesn't come with that out of the box?? Efficient hash table? Every language has one in its standard lib. Hot code reloading so you can quickly iterate?? Well, are you using Lisp? If so, you've had that since the early 80's; otherwise, perhaps try Smalltalk, or even Java (use jdb, which lets you "redefine" each class on the go, or the IntelliJ debugger, just rebuild it while it's on, especially if you also use DCEVM, which makes the JVM more capable in this regard) or Dart/Flutter, which has that out of the box.

Almost any other problem you may come across, either your language makes it easy for you already or you can use a library for it (which is as easy to install as typing "getme a lib"). Not to mention that if you don't know how to do something, ask AI: it will tell you exactly how to do it, with code samples and a full explanation, in most circumstances (if you haven't tried in the last year or so, try again; AI is getting scary good).

Now, back to what the problem actually is: how do we create abstractions that are only as complex as they need to be, and how many "layers" of abstractions are ideal? What programming model is ideal for a certain problem?? Sometimes OOP, sometimes FP, sometimes LP... but how to determine it? How to make software that any competent developer can come and start modifying with the minimum of fuss. These are the real problems we should be solving.

> Programming models, user interfaces, and foundational hardware can, and must, be shallow and composable. We must, as a profession, give agency to the users of the tools we produce.

This is the part where I almost agree with the author. I don't really see why there should be a limit on how "shallow" our layers of abstractions should be because that seems to me to limit how complex problems you can address... I believe the current limit is the human brain's ability to make sense of things, so perhaps there is a shallow limit, but perhaps in the future we may be able to break that barrier.

Finally, about user agency, well, welcome to the FOSS movement :D that's what it is all about!!



