Ask HN: Article on software only increasing complexity?
40 points by yonki on Nov 25, 2023 | 26 comments
A few years ago a link to an article was posted on HN. The idea in the article was that software can only ever increase the complexity of the things it manages, never reduce it. The idea was illustrated using the example of (possibly) the Australian tax system, which got way more complicated after they moved from books to a digital system. Does anyone have a link saved somewhere? I badly need this article.

Thanks!





Yes! This is the one. So good.

People who enjoy that may also enjoy "All Late Projects are the Same" https://web.archive.org/web/20130818024030/http://www.comput...


Related:

Why it is important that software projects fail (2008) - https://news.ycombinator.com/item?id=24390855 - Sept 2020 (72 comments)

Why It Is Important That Software Projects Fail (2008) - https://news.ycombinator.com/item?id=20109316 - June 2019 (18 comments)

Why it is Important that Software Projects Fail - https://news.ycombinator.com/item?id=932956 - Nov 2009 (26 comments)


Good find :)

I find it interesting how this ties in to the "productivity paradox."[0] The idea the author seems to be getting at--that software accelerates the creation of ever more elaborate solutions (often to problems created by prior iterations of software), and in the process leaves a wake of complexity that frustrates and baffles the society it was supposed to serve--is something I'd like to read more about.

[0]: https://en.wikipedia.org/wiki/Productivity_paradox


The article presents an interesting case study, but it's unclear how generalizable it is from a single example.

One idea the author suggests is that tax agencies will grow to take up roughly 1 percent of GDP, because 1 percent of the budget is negligible enough to escape scrutiny. The U.S. provides an interesting contrast. In 1955, the IRS budget was 0.08 percent of GDP [0] [2]. In 2008, it was 0.093 percent of GDP [0] [1]. So it's much lower, but it has also remained fairly steady between 1955 and 2008. Looking ahead, the new IRS budget is around $20 billion, which is 0.072 percent of GDP [0]. To pick another example, the Canada Revenue Agency was about 0.2 percent of GDP in 2018 [3] [4].
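As a rough sanity check on that last figure (the GDP number here is my own ballpark, not from the cited sources):

    # Agency budget as a share of GDP, in percent.
    def pct_of_gdp(budget, gdp):
        return 100 * budget / gdp

    # $20B is the IRS budget quoted above; ~$27.7T GDP is my assumption.
    print(f"{pct_of_gdp(20e9, 27.7e12):.3f}%")  # -> 0.072%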

One other assumption is that the Australian Tax Office does the same, at least when normalized to GDP. I'm not sure that's true, but investigating that would take more time.

[0] https://taxfoundation.org/blog/irs-budget-increase-technolog...

[1] https://fred.stlouisfed.org/series/FYGDP

[2] https://fraser.stlouisfed.org/title/budget-united-states-gov... (see page 1000)


Expense as a percentage of GDP is a nonsense number. Expense as a percentage of the actual budget makes far more sense. Why are we accepting silly statistics here?


There are pros and cons to every statistic. Percent of GDP accounts for the fact that the economy is growing, which means more to tax. I used it primarily because the article does. I think percent of receipts makes more sense as receipts are essentially the agency’s product.


thank you! Bookmarking now :)


No problem, I actually hadn't read it myself but was intrigued enough to go looking. Worth it!


I haven't seen that article.

The way I see it, there are a few different kinds of software.

An old kind is something like using a spreadsheet instead of an abacus or punch cards, using a word processor instead of a typewriter, email instead of snail mail / fax machine, etc. In this case, you're using computers/software to increase the efficiency of some external, real-world, non-computer thing, and it works pretty well, especially if you have a complicated logistical problem like running a big warehouse, airline, bank, etc.

Another kind is software that talks to other software: a trading algorithm, an exchange, a search engine, or a spam filter, where the inputs to your software are the outputs of some other person's software. In this kind of software there is never any permanent outcome/result; it's a never-ending arms race. You write some software that temporarily produces better results, then the other side figures out a more elaborate way to exploit it and get through the filter or get better SEO results or whatever, and then you obfuscate some more / change again, etc.

Unfortunately, more and more software is now in the latter camp :(


It's not only software. Any tool that "makes the task easier" will render the task harder in the long run. Modern agriculture got easier for farmers, until so much productivity became the expected norm that it became an extremely hard job. Communication technology made it easy to keep in contact with loved ones, but now jobs expect people to move very far away as if it were nothing. Solving a problem through a technical solution only ups the ante.

I'm also interested in the research you mention, but I think it is a special case of this general human behaviour.


Your argument doesn't really hold.

The task did get easier in the short term and in the long term. The only thing that changed was the task itself.

Local job markets were the only practical thing for most people, but if you found a job somewhere else, you wrote a letter. Farming took dozens (or more?) of people per farm; now it's done by a small team, if not a couple -- the task isn't to farm to provide a living or feed your family, it's to supply tons and tons of foodstuffs to a global economy.

Similarly, you can now call or video chat with your immediate family when they're at the store, while you're working from home -- or even if you're in an office 20 mins away. And if you wanted to run a small farm, you can now do that, providing for your family and maybe even generating a small income (see YouTube for homestead channels).


I think if the GP had framed it as "time-saving devices not only don't give us more time for relaxation, as promised, but end up giving us less" it would be closer to the truth.


I think there's a fair argument to be made that computing is the most complex synthetic thing humans have ever conceived all on our own. If we're talking about making systems in the broadest sense, then any category of problems that we create systems for could conceptually be managed with pencil and paper. If we say "number of things that could go wrong multiplied by the difficulty a layperson would have in fixing such a thing" is what we mean by complexity, then a computer would be more complex than a filing cabinet every single time.
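A toy rendering of that heuristic, with numbers invented purely for illustration:

    # complexity = (things that can go wrong) x (layperson fix difficulty)
    def complexity(failure_modes, fix_difficulty):
        return failure_modes * fix_difficulty

    # Made-up inputs, just to show the shape of the argument:
    print(complexity(5, 1))    # filing cabinet: 5
    print(complexity(500, 8))  # computer: 4000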

There are trade-offs for the complexity though, and well-managed complexity can disappear behind the interface of a computer. When this is done well it feels seamless, and when it's done poorly it's painful. So maybe I'm arguing that the meaning of complexity isn't 1:1 with the meaning of complicated. At the end of the day, "did moving this to a computer make it better?" is the question to answer, and a lot of the time the answer is no. QR menus at restaurants are my favorite punching bag for this, but any home appliance with Bluetooth or WiFi is an easy target.


Complexity hidden from a user is still there for someone to manage.

This reads like empty circumlocution in defense of programmer jobs.

My boring EE and math degrees are from another era, before all this cool software jargon captivated the world. I am not really sure what all the verbosity of the DSLs, config formats, and many programming languages really solves from an engineering perspective. Much of it feels like baggage from the era before graphical computing first; iPhones don’t boot to a CLI, right?

Humanity burns a lot of real resources preserving computing history, when the first computers were human mathematicians. What do a Commodore and Borland have to do with mathematics? It feels like nostalgia more than engineering.

I will keep iterating on personal computing experiments. Ye olde cranks of software lore and genius CEOs are just normal people hallucinating about their essentialness to society. Yawn. (I’m doing it too!)

The real energy vampires are not the sarcastic, but the toxic positivity crowd peddling Ponzi schemes passed on from dad and his 80s bitching Camaro crowd. Yes, yes, you did something within the constraints of physical reality. Ooh wow an expensive agency manipulating boondoggle; can I subscribe to your newsletter?

The next generation grew up on the internet. They’re aware of the hustle, whereas the aging-out elders were maybe a bit less discerning, given their lack of education and experience; how were they to know it’s just arithmetic and Boolean operations on memory addresses, plus semantic babble? Their special boys seemed convinced, and the elders might have been a bit more accepting of hallucinations given their religious upbringing.

I ended up in software expecting to have a career in industrial controls, but I entered that field at the tail end of offshoring and never got my foot in the door as networking became harder; the internet was not so socially organized back then. I couldn’t figure out where to be at the right time.

I’m fucking sick of “software” as we know it. It’s elementary DSL parsers and git pull github.com/everything.git. Given how software works, that’s great; but that this is how software works is ridiculous.


Not an article, but still highly recommend Why Can't We Make Simple Software? [0] by Peter van Hardenberg (head of Ink & Switch [1]).

Disclosure: it was given at my tech conference.

[0] https://vimeo.com/780013486

[1] https://inkandswitch.com


Possibly referring to Lehman's laws [1], e.g.

"Increasing Complexity" — as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it

[1] https://en.m.wikipedia.org/wiki/Lehman%27s_laws_of_software_...


> Berglas's corollary, namely that no amount of automation will have any significant effect on the size or efficiency of a bureaucracy

The article you're seeking demonstrates this one way. You can approach it another way: Amdahl's Law.

Any process that takes time T has two parts: a part which can improve and a part which cannot improve. Let p be the fraction of the process which may improve. Symbolically,

T = part that can improve + part that cannot improve

or

T = pT + (1-p)T

Suppose we can introduce an improvement of factor k. Then the improved process time T' is

T' = pT/k + (1-p)T

or

T' = T[p/k + (1-p)]

The overall speedup S, then, is the ratio of the original time to the improved time.

S = T/T'

or

S = 1/[p/k + (1-p)]

It's so simple to derive, I love it. Say you have a bureaucratic process and you're asked to "automate it". You can plug in the numbers and play with them to get a feel for how much overall improvement you can expect. For example, how would the overall process improve in the (unlikely) case that you provided infinite improvement :)
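A minimal sketch of that exercise in Python (the p = 0.2 figure is made up for illustration):

    def speedup(p, k):
        # Amdahl's Law, as derived above: S = 1 / (p/k + (1 - p))
        return 1 / (p / k + (1 - p))

    # Suppose only 20% of the process can be automated (p = 0.2).
    for k in (2, 10, 1_000_000):
        print(f"k={k:>9}: {speedup(0.2, k):.2f}x overall")
    # Even a near-infinite improvement caps out around 1/(1-p) = 1.25x.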

Bureaucracy is not necessarily, although it is often, synonymous with being "composed of many, many parts." That implies the "part which can improve" is small relative to the part which cannot improve. Amdahl's Law kicks in, and improving those tiny parts has minuscule effects overall. No amount of automation will have any significant effect on the size or efficiency of a bureaucracy.

However, this raises an important philosophical question: if you improve a part, do you replace it? How many parts can you replace in a process before it is no longer the same process?


It's not what OP is looking for, but there's a relevant video I know of by a well-known game dev, The Thirty Million Line Problem - https://www.youtube.com/watch?v=kZRE7HIO3vk


I’m giving up after spending 10 minutes on Algolia and Google. The closest I can find is this here: https://news.ycombinator.com/item?id=31279481


I concur that software typically becomes more complex as it incorporates additional features or is modified by numerous contributors. However, using a government agency's role in developing such software as an example doesn't necessarily prove the claim. It mainly suggests that government-developed software might not always be efficiently optimized.



Bit of a tangent, but I always come back to this read:

https://www.stilldrinking.org/programming-sucks


In 1985 you could run a multi-tasking OS in 256KB of memory, with a word processor and a paint program running in that. That included application memory, video memory, and OS memory. Now, running a word processor and a paint program on a PC requires 8GB of memory.


The UK government has improved things immeasurably with their digital systems.


Would love to see the source!



