What factors explain the nature of software? (tratt.net)
117 points by ingve 30 days ago | 91 comments



I was once the CTO of a financial services firm that had been founded by a non-technical person. When I was hired the codebase was in poor shape. With a bit of guidance we overhauled it in 3 months and swept clean about 15 years of technical debt. They were using a 13-year-old version of the compiler.

New clients were requiring audits and attestations we wouldn’t have been able to pass.

I was able to hire a few seasoned engineers to begin working on the next pass over the monolithic core design. We needed to prepare for a container and cloud based future, given where our clients were going.

The founder just complained and complained. He couldn't accept that we were working on things the client couldn't see. I spent a lot of time negotiating for a percentage of total velocity for core design changes; the rest of our time and effort would be focused on things the customers were demanding and which the founder thought we should focus on.

We had increased velocity about 2000% already. He wanted 4000%. We needed sprints for back-end redeployments. The founder was visibly angry when he overheard us talking about refactoring back-end things like deployment scripts and data architecture.

Senior engineer gave him a copy of The Cathedral and the Bazaar, just to share something that could lead to some kind of common vocabulary. Founder spent weekend marking up his copy and on Monday firmly announced that he rejected the text in whole.

Senior engineer quit on the spot and walked out.


I find the five pillars of Microsoft's Well-Architected Framework to be a useful way to explain this to non-technical people.

At the start of a project, fully focusing on your features makes sense. But as a project grows, non-functional concerns can sabotage the business model if not addressed.

- Reliability

- Security

- Cost optimization

- Performance Efficiency

- Operational Excellence

The first four have a direct impact on the customer. Buggy or insecure software, or excess costs, will all affect the customer directly.

The fifth point is about devops, and making sure that we can continuously (as a process) meet the customer's needs in all of these areas.

I've found that if I can fit refactoring into one of these pillars, it is much easier to explain to non-technical people. Of course, some people just can't be helped, and aren't open to alternate ways to think about software projects.


CatB is a fine book. That feels like such an odd choice for the situation, though, aside from just seeing whether the person was willing to engage with ideas within software.

I'm curious what texts people would suggest for a non-developer to get some insight.

The ones that come to my mind are the Mythical Man Month, Peopleware, Facts and Fallacies of Software Engineering by Glass, or maybe even the 1968 NATO report on software engineering.


Every now and then I have to ask my PM at work, "If I give you a second copy of The Mythical Man-Month, could you read it twice as fast?"

He doesn't like that question, but it does get the point across.


What you really need to do is schedule daily status meetings, a longer weekly status meeting, and -- this is the real velocity trick -- monthly OKR reviews as well as quarterly slide presentations.

It's important the slide deck be polished for all the people who want to come learn about the project, so it makes sense to spend at least a month preparing it.

And the neat thing is that all this work will get you lots of suggestions from executives about how you could address all the velocity issues that your project seems to have all the time! So you'd better budget 2-4 weeks after the presentation to follow up, have stakeholder meetings, and make sure everyone really feels heard.

With just these few simple techniques, you too can get your engineers moving at the blinding speed of an average FAANG engineering team.


I’m stealing that quote for my own purposes. I hope you don’t mind.


6 years ago I gave a C-level guy a fresh copy of Perform or Else[0] on my way out. I don't know if he even read it.

[0]: https://www.routledge.com/Perform-or-Else-From-Discipline-to...


I followed the link and -- $180 for the hardcover? wat.


Agree, I bought the paperback.


Whatever it is, it should be as short as possible.


> CatB is a fine book.

Holy shit, THAT'S why Eric Raymond's website is catb.org. How did I never realize it was the abbreviation of the book title?


I have a meta question here: I'm always curious when a seemingly competent engineer such as yourself puts up with this kind of management.

It is unbearable to work for someone who doesn't appreciate your work, let alone belittles truly hard-earned achievements. That demand for 4000% rather than the 2000% you actually achieved would have made me quit on the spot.

Further, from the sounds of it this was a fairly ordinary finance company, and with your skills you could easily have walked into any tech/finance firm, so I'm curious why you stayed and didn't quit sooner.


Not OP, but I worked for a similar founder. The only work he wanted software engineers to do was writing code, and not just that... only 1. code that added a customer visible feature or 2. code that fixed a major, customer-facing bug that a customer actually complained about. Fixes for bugs not discovered by customers? No. Performance improvements? No, not even visible ones. Refactorings? No. Technical debt cleanup? No. Build speedups? No. Update the code to work on a recent compiler? No. It gets better! Version control? No. Bug tracker? No. Unit tests? No. I had to implement the above three on my free time. Standing up dedicated build and test infrastructure? LOL get real. Eventually, I was able to argue for a few of these, but it was always a fight.

The reason I stayed was that the hiring market is not always that great, and it tends to be much harder to find a new job than HN would have you believe. This idea that most of us can simply "walk into any" company and get a job is fiction in all but the hottest job markets.


I don't understand these kinds of leadership/management. What they want clearly isn't a tech company. They don't want technical wisdom, insight, nor vision. Why bother hiring in-house SWE's at all?

They should just stick with consultants and contractors.


Keeping it slightly abstract because enough people can figure out company/time/place from what I wrote…

I had something to accomplish before I left. I made a mark in the niche we were operating in. Beyond that I’m reluctant to be more specific.


I guess as CTO you're already at the top, but if the owner thinks you're doing bad work and you have to fight for resources despite meeting reasonable goals, what's the actual payoff working there? You'd be paddling upstream forever, I can't imagine the owner would agree to bonuses or promotions, and I'd reasonably expect the owner would be looking for a replacement CTO at the end of that.


"put the donkey where the owner of the donkey wants you to put the donkey" - Egyptian proverb

I don't know how much money this company was making, or why it couldn't invest in both the backend and client-facing communications.

But, on the other hand, being client focused and continuously delivering client-facing features might be the thing that keeps the revenue streams coming.

In other words, I used to always take the dev side in stories like this, but over time I've learned to be more sympathetic to the clients. I've been in situations where the devs overestimated the value of cleaner architecture. It's a financial service, not a software company; I think the real solution would have been to completely outsource the software development.


> it's a financial service, not a software company

I have no experience in the financial industry, but I work in a field which is also highly regulated (medical devices). Failure to pass audits or be otherwise compliant to regulation can very much make the difference between having or not having any business at all.

Paying off decades' worth of debt in a few months, where clients see no shiny new features, was likely more and better service than that founder-CEO type deserved.


I'm also in medical devices and I had the same reaction. "If you're out of Compliance, you're out of business."


I think you are both right.

But I don't think GP was ignoring clients. If anything, he was doing what those clients wanted by moving to containers and a cloud architecture. And I say that as someone that doesn't like those things.


I've never seen outsourcing of a company's core product result in anything but litigation. If a team legitimately made a 2000% leap in throughput (and it wasn't just from manipulating metrics), and even that wasn't good enough, then nothing will ever be satisfactory in that owner's eyes; you rightfully should give up that battle and find greener pastures. Expecting someone to come in and wave a magic wand of 4000% productivity is absurd. Expecting that you can outsource that sort of demand is plain stupid.


There are a lot of managers who believe you should always put a squeeze on devs and keep the org in constant panic mode. Nothing will ever be good enough. If you achieve one goal, you get hit with another crazy deadline. It never ends.


> but, on the other hand, being client focused and continuously delivering client-facing features might be the thing that keeps the revenue streams coming

I firmly reject this. If you can't build quality then you don't deserve to be in business.

I'm so sick of the fact that everyone is ok accepting "just barely passable"

It has led to everything new being crap: new housing is crap, new furniture is crap, new software is crap.

I get that building nice things is expensive and takes time, but I want to live in a world of nice things instead of this constant race to the bottom, this produce-endless-quantities-of-crap world we're in right now.

Doesn't anyone else feel like we're being crushed under all this crap we're building?


I have learned over the years that there are different types of devs. Some are ok with cranking out features without regard for overall architecture and can handle the chaos of constant quick fixes over previous quick fixes. There are others (like me) who like well designed systems and get stressed when they are asked to just crank out features without regard for the overall system.

Personally I believe that my approach achieves higher velocity over time because the guys who just do quick bandaid fixes get bogged down over time with technical debt. But not everybody agrees.

I definitely think you need to know where you are on that spectrum and find an org that fits your style. Otherwise you are in for constant pain and unhappiness.


People love to be the hero that swoops in and saves the day. From the hot mess they created the past year.


Every situation is different, and sometimes it's prudent to write a kludge to fix something now, while other times it's more prudent to take your time developing a long term solution. There's even the middle ground of monkey patch now, long term solution later.

I would argue that a well rounded developer will have the capacity to write kludges when necessary, and long term solutions when necessary. Now, if your boss repeatedly asks you to write the code one way and you are always disagreeing with them, then that would be a good reason to find another job. But if you trust your boss, you'll write the code however they ask, even if it's not your natural preference.


Long term, companies that operate like this are better off paying for third-party services that handle the technical details they aren't interested in managing. These are the target customers for managed infra services like Heroku, Fly, and such (or more task-specific services like pack digital). IMO, if you get into heavy technical-requirements territory, the leadership has to reassess their priorities or risk drowning the org in tech debt.


I do get frustrated with the state of modern software, but oftentimes the comparison ends up being between a crappy version of something and nothing at all.

One of the major problems is that many of our core software assumptions were based on design decisions made decades ago, when the industry was young and figuring out a million details.

I've mostly done web dev, but recently I tried out IMGUI and was amazed by the tiny memory footprint and by how quickly it started. But even that system has limitations, requiring you to learn a bunch of details for targeting different platforms. Isn't it amazing that native cross-platform development is still so difficult in 2024? The platform owners could do a lot to ease this burden, but everyone wants their own little fiefdom.


> The platform owners could do a lot to ease this burden, but everyone wants their own little fiefdom.

The thing is that most people use one platform, so the owners try to present one consistent view of how to use that platform. Trying to provide a common paradigm across platforms is the goal of companies that love their brands too much. As a user, I just want everything to function similarly so I don't have to read a manual for each app. I've done Android dev and dabbled in iOS, but now I'd only recommend React Native if what you want could be a web app but you want the extra UI performance and some native features. If you want something more complex or consistent, go native.


I don't know where this idea came from that an application should look and feel exactly the same across Windows, Mac, Linux, iOS, Android, and so on. I have encountered this attitude in more than one company, but have not heard of any actual customer who wants this. As an actual computer user, the worst applications tend to be the ones that stick out like a sore thumb by ignoring OS conventions and using custom-drawn controls with nonstandard behavior.


Regulation. Regulation. Regulation.

The software world is coming under more and more of it because our magic incantations affect the real world and people's lives.

>New housing is crap, new furniture is crap, new software is crap

Old housing was crap, old furniture was crap, and old software was crap; you are letting survivorship bias bite you too much and losing the forest for the trees.


> I'm so sick of the fact that everyone is ok accepting "just barely passable"

Isn't everyone ok with that by the definition of "passable"?


"Just barely passable" is the endless customer support carousel, the nickel-and-diming of airline fees, the addition of more and more ad space and cloned-content spam to web pages, and automobiles that spy on you to sell the data to marketers.


But if we updated our standards such that we no longer accept those things, they would no longer be passable, and something else would be "just barely passable".

My point - inasmuch as there was a serious one - wasn't that we shouldn't raise our standards but that the framing isn't particularly helpful in determining whether we should (while rhetorically sounding like it is).


It's a classic problem - there are some people who know exactly what they are talking about and you should trust them, and there are some people who just sound like they know what they're talking about and you should absolutely NOT trust them.


See Honey the Bear's "Metagame" hypothesis. https://adriantchaikovsky.com/dogs-of-war-series.html#anchor...


This puts me in mind of a homeowner on one of those renovation shows where the workers discover a structural flaw midway through the job and explain that this means there isn't as much in the budget now for added features.

The equivalent of the boss in this story would demand to leave the structural issues unaddressed to keep the marble counter tops in the budget, and then the house would cave in a year later.


I think that was a good move by the senior engineer. In part because I've done something similar and want to believe it's the right thing to do.

Did it change anything? Did the founder manage to keep things going?


That reminds me of one case where an author of software saw any improvement or genuine feedback as a personal attack, directed at him and his "legacy". I can't go into specifics, but only features directly or indirectly praising the original software were allowed. I was a student/intern at the time and learned about that the hard way.

Still, it was one of the most important teaching moments, to this day.


Would it be ethical in a situation like this to hide all the technical improvements and just show the increased speed of client facing delivery?

Seems like everyone would be happier.


Using the word "hide" in any scenario should immediately get your spidey-sense tingling.


I'm always telling my developers to stop talking to the customer about technical details.

Do you think carpenters talk to their customers about types of nails and dimensional lumber? Or do they talk curves, size, and color?


It works in everyone’s favour. Founder got what his culture sets out to build. Customer got their features faster. Senior engineer clearly knows it is not the place for him and moved on. Everyone wins.


Probably feature development slowed to a crawl over a few years. Code debt isn't actually an abstract thing, it's just hard to explain to non-coders the thousand little slowdowns it injects into every project.


What I meant was the founder got what he asked for. Short term gain at the expense of sustainability. It is a feature rather than a bug. For a company looking for product market fit, "good code" might take a back seat. Andy Grove mentioned that an executive job is to, paraphrased, maintain the company culture. His context was about larger established companies. But I think the point still holds. It is the founder's job to sell his vision to his employees. It is the market's job to determine whether such vision thrives.


This is blatant conservatism, and it's a mentality which leads to dead players. The job of an executive is to make good decisions; it's in the name: execut-ive. To decide always in the direction of the existent culture is a very peculiar bias which is only successful in certain situations.


A friend calls it the Logjam Principle. There are always a few logs floating down the river and that's not really a problem. But a small logjam of just a few logs will grow over time until the entire river is obstructed. It's better not to let things get to that state.


Not sure about that, that senior engineer leaving likely led to feature development severely slowing down.


> Customer got their features faster.

Did they though?


Ahh yes, reading this as an SRE. This is the average ops team experience: having to find ways to prove that we're not a cost center (I cut our AWS bill down 40% last year migrating to ARM64) because our work isn't blasted on the front page of a product changelog.

And things "just work"... until they don't. And they worked because of the front-loaded work we did correctly to keep them working.


> Senior engineer gave him a copy of The Cathedral and the Bazaar, just to share something that could lead to some kind of common vocabulary. Founder spent weekend marking up his copy and on Monday

I'd quite like to read those markings. Do you have them?


Shouldn't the free market drive companies with these types of leaders down? They should be swamped by faster, more agile competitors.

But sadly, it seems the inertia keeps some pretty ignorant organizations functioning for decades.


The problem is that the free market only really works when all information is available to all parties and there's low switching cost. So it works decently for commodities, but no business ever wants to sell a commodity. In fact most business advice I've seen is about how to get away from selling commodified products into a free market and instead sell opaquely differentiated products to a locked-in market with high switching costs and barriers to entry.


Peter Thiel: "Competition is for losers."


Yes. This is why you should immediately start interviewing when you observe this kind of behavior coming from management.


Eventually, yes. Unless they sell to another company and the technical debt goes into the trash with the company's product.

On the other hand, "the market can remain irrational longer than you can remain solvent."


Perhaps, but there is nothing free market about financial services.


What is The Cathedral and the Bazaar about? I couldn't quite tell from a cursory glance.


Disclaimer: it's been a while since I read it.

It contrasts two styles of software development. Typical software development in a company is the cathedral: directions come from on high, and you execute your task.

Typical open-source development, such as the Linux kernel's, is the bazaar: everyone is doing their own thing, but somehow things get done.

It's old, and maybe a little outdated now that the corporate world has taken over Open Source, but I would still suggest reading it.



Yeah, but I guess I was looking for something deeper, because at the outset I'd be a little put off by the suggestion to read it, and by any pushback on my disagreement with it. And I fully understand the issues the parent described. The book comes off as a bit opinionated, with the blinders of Linux kernel development and an '80s-'90s attitude about open-source software. That's a pretty narrow slice of software development.

For example:

> Every good work of software starts by scratching a developer's personal itch.

I've written good software professionally, but none of it was kickstarted by my own personal itch.


How did you measure velocity?


Story points are unitless. Therefore story point velocity is measured in the inverse of time.


Are you saying that velocity hurts?


Only when you hit a hard deadline.


So, with... frequency?


count(work unit) / count(intervals)
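Taken at face value, that formula is trivially computable; a toy sketch with made-up sprint numbers:

```python
# Velocity per the formula above: count(work units) / count(intervals).
# The sprint data here is invented for illustration.
completed_points_per_sprint = [21, 34, 18, 27]

velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
print(velocity)  # 25.0 story points per sprint
```

Which of course says nothing about whether those story points meant anything.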


These are to some extent manifestations of another factor. In software engineering, we skip entirely an essential step in the engineering process which is central to mature engineering domains involving complex dynamic systems e.g. chemical engineering. If we skipped that step in those other engineering domains, we would see design disasters and gross inefficiencies analogous to what we see in complex software today.

In any high-level systems architecture, whether software or a chemical plant, you have a collection of notionally discrete and independent components that interface with each other. The properties of these components may be constrained in various ways e.g. the alloy type that can be used in a reactor vessel or the RAM available for a data structure. In a chemical system, that reactor vessel is modeled as a set of complex differential equations that govern its properties given many design inputs. In software we have much less sophisticated models of the architectural components; while there is a high-level concept of space- and time-complexity, those models are usually not coupled to a model of the hardware it is running on or temporal dynamics such that you can reliably predict absolute performance across multiple dimensions that are within a few percentage points of ground truth. With the current state of software engineering tools, there hasn’t been much immediate motivation to make the software component models much better than they currently are.

These models are the axioms of your system. In other engineering disciplines, there are sophisticated solvers that will take a large number of these components connected together arbitrarily with their many inputs and solve for the system with predictable and nearly optimal properties every time when you actually implement that design. You don’t have to guess as to the consequences and side-effects of changing requirements, scope, or implementation constraints, you can just re-run the solvers. The specification for what you want it to optimize for is pretty simple — cost, speed of building, throughput, etc. In software, we do almost none of this in a structured, disciplined way, not even manually. We know little about the properties of the design until after we’ve invested inordinate amounts of time building it first.

Ironically, it is much easier to build detailed and accurate component models in software than chemical engineering. We’ve somehow never gotten around to building systems design solvers in the same way they exist in other physical engineering disciplines. The behavior of the system is not the behavior of its isolated components when decoupled from the hardware environment, but in software we design most things as if this was the case.

Despite coming from a chemical engineering background, I didn’t fully recognize the existence of this gap until I started working in HPC. Now I see it everywhere in software. When I worked in HPC, I started hacking together crude models and solvers to address difficult optimization problems that took an excruciating amount of time to iterate on, a primitive version of what you might do as a chemical engineer, and it was a game changer. But it was an enormous amount of effort because there is no tooling to do this in software, even if it occurred to you that it might be a good idea for the same reason it is a good idea in every other engineering discipline.

I think we got away with it for a long time because early software systems were legitimately quite simple; we could live with it. Software is now vastly more complex. While our tooling has evolved to address problems of abstraction, it has not evolved to address the problem of systems behavior and modeling. The ability to see the effects of system design changes without actually implementing them is powerful. Other engineering disciplines recognized this need quite early; one of the earliest applications of computing was solvers for physical engineering systems problems.
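For the curious, the crude model-and-solve loop described above can be sketched in a few lines. Everything here is invented for illustration - the component names, the analytical cost models, and the design space - but it shows the shape of the idea: give each component a predictive model, then let a solver search the design space instead of building and measuring every configuration.

```python
from itertools import product

# Toy analytical models for two pipeline stages (entirely hypothetical):
# predicted records/sec as a function of one tunable parameter each.
def parse_throughput(batch_size):
    # Larger batches amortize per-call overhead, with diminishing returns.
    return 10_000 * batch_size / (batch_size + 50)

def index_throughput(shard_count):
    # More shards add parallelism but also coordination overhead.
    return 2_000 * shard_count / (1 + 0.1 * shard_count)

def system_throughput(batch_size, shard_count):
    # A serial pipeline is bottlenecked by its slowest stage.
    return min(parse_throughput(batch_size), index_throughput(shard_count))

# Brute-force "solver": evaluate the whole design space up front
# instead of building each configuration and measuring it.
best = max(
    product([8, 32, 128, 512], [1, 2, 4, 8, 16]),
    key=lambda cfg: system_throughput(*cfg),
)
print(best, round(system_throughput(*best)))  # (512, 16) 9110
```

A real solver would use far richer models (coupled to hardware characteristics, queueing effects, and so on) and a smarter search than brute force, but even a toy like this lets you see the effect of a design change before implementing it.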


I have to preface my comment by saying I am reasonably technical but not an engineer by any stretch. I find this comment fascinating and it seems to highlight some issues I face in my own organisation (tech/design professional services company). We build apps and platforms for enterprise clients and often struggle with predicting a lot of parts of our software.

I might have a poor understanding of your comment, but could you expand on the ideas behind solvers and what they are? What data do they ingest from a company building software, and how did the solvers you built work (in layman's terms)?

Thank-you!


There are tools such as TLA+ that help with modeling at a level higher than executable code. A recent talk, for example: "Fifteen Years of Formal Methods at AWS" by Marc Brooker, Amazon AWS: https://youtu.be/HxP4wi4DhA0


These don't model key properties of software for systems engineering purposes. Software can be technically correct in a formal sense and also broken in all real implementations if it doesn't satisfy reasonable engineering constraints, like efficiency, scalability, concurrency, latency variance, or strange resonances in the system (a significant performance issue in HPC).

It is an optimization problem, not a correctness problem.

If you applied the same systems solving methods to software as physical engineering, you should be able to predict system performance, concurrency, scaling, etc characteristics on a given hardware environment in concrete absolute terms without writing a line of code. The whole "build then measure" thing would be unnecessary. This is normal in physical engineering.

I know software performance engineers that can predict these kinds of characteristics with surprising accuracy on a localized basis because they carry detailed system models in their heads, but that is a rare skill and to scale that up to a larger, more complex software system you really need some kind of automated solver. We use automated solvers in other engineering domains for the same reason -- human brains struggle to do this bit well.


Software is too complex to rely on predictions alone. Predicting is useful, but eventually you must measure.

For example: "In conclusion, the issue isn't software-related. Python outperforms C/Rust due to an AMD CPU bug." https://xuanwo.io/2023/04-rust-std-fs-slower-than-python/#co...

Predict that.


Predictions are always going to be limited by the world model they assume. Still useful in 99% of the cases.


99% is hopelessly naive. If it were true, we would live in "generate code from UML" world (it is useful only for niche tasks)


In project management I found it important, in many cases, to explain why "normal" project management - the kind where every task gets a description and resources assigned to it, and where you can do critical path analysis and resource leveling - doesn't work for software.

The reason is that software tasks are often unique, and the most important tasks are usually the unique ones. That makes them hard to estimate, which renders your carefully crafted schedule useless very quickly after a project starts.

Agile, mostly in the form of Scrum, is supposed to be the answer, but in many cases Scrum people keep banging their heads against the task-estimation brick wall just like their Gantt/CPM chart predecessors.

That's why task uniqueness and the inevitable, intractable, why-even-try unreliability of estimates is an important part of the nature of software.


Another way I like saying it is "Non-unique software is generally called a library".

In civil engineering there are a lot of small creeks to cross that you'll pretty much use a drag-n-drop bridge solution for. When you make an estimate of how long it will take to build, it's most likely going to be correct.

It's when you come up to the wide river that you pull out the senior engineers and do a massive amount of homework before you even dig up the first shovel full of dirt. And even then your best laid plans are likely to go into cost and time overruns when one critical part gets delayed due to issues totally out of your hands.

I commonly find software far worse at pre-planning. People bring large amounts of data together, then realize they have a computer science problem and are left scrambling over what to do, or facing cost surprises about how much computation is needed to process data at that scale/speed/latency.


That's the analogy I use in training. In civil engineering you have mostly predictable task durations that have mostly been done many times. You can even predict the variations for different soil conditions, etc. If you are making the same software repeatedly, something is wrong.

Agile, much because of Scrum jargon, has become annoying to a lot of people. Taking them back to the fundamentals - "oh that's why we don't do MS Project network diagrams" - helps.


> The circular specification problem

> The only way to know exactly what software we want to build is to fully specify it: without doing so, there will be gaps between our vague ideas and harsh reality. However, a complete, abstract specification is, in general, at least as much work as creating the software itself — in many cases, it is substantially more work.

Well, there is a way to do this: it's called requirements. There's a whole subfield of Software Engineering called Requirements Engineering that deals with exactly this. The fact that we don't stop to write down even a one page description with bullet points of what software should do is damning as a field.

We'd instead rather just jump right in and then spend hours, days, weeks, months, even years refactoring the wrong thing we started with, or banging on it with a hammer every time a new (often foreseeable) requirement appears.

Sure, software development is a young field, but it will never mature if we constantly ignore and even forget the basics.


Software specification is circular because what we write down as a requirement is going to depend heavily on what is feasible to implement. But feasibility of implementation is often not clear until you actually try to implement the thing.

In fact, I would go so far as to say that, if in the design phase the precise structure of every single piece of what you are implementing, and the scope and feasibility of all alternatives, is known and crystal clear, then the project is trivial and probably does not need a design phase - you might as well just write the code.

The article makes this point as well: "a complete, abstract specification is, in general, at least as much work as creating the software itself"


To expand on what others have said: If the spec isn't the size of the software, then it leaves out details. But "everybody knows what we mean" - until they try to implement it. Then they find out part of what wasn't specified.

"Part of", because there's the other part, which wasn't specified, and they just assumed there was only one possible answer, and so they didn't even realize that there was a gap in the spec there. But there is, and if someone, somewhere, makes the opposite assumption, the gap matters.


Well no, the whole point is that the spec is not detailed enough to be code. Of course it leaves out details. I feel like this conversation will diverge to generalities without an example. So, take the WebAssembly spec for example. There's a reason it's written in "spec" language and not just some C code (or any other programming language, for that matter). But that's a very precise spec.

In reality, a loose spec is better than no spec. For example:

1. The loader should be able to parse an input file of up to 100MB in under 1 second.

2. The loader should support XYZ files version 3, 3.2, 3.3, and 4.0.

3. The loader should validate and reject erroneously formatted files.

4. The application should store internal state in robust storage that is independently inspectable.

5. The UI should be able to function if the backend database is down for less than 3 hours.

If you can't write ~1 page of such high-level requirements, why would you expect that you could start writing code? But we do. I mean, even I do!
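One nice property of such loose requirements is that many of them can be turned directly into cheap acceptance checks. A minimal sketch of requirement 1, using a hypothetical `load` function as a stand-in for whatever the real loader is:

```python
import time

# Hypothetical loader: stand-in for the real parser under test.
def load(data: bytes) -> list:
    # Pretend parse: split the input into newline-delimited records.
    return data.split(b"\n")

def test_parse_100mb_under_one_second():
    data = b"x" * (100 * 1024 * 1024)  # 100 MB synthetic input
    start = time.monotonic()
    load(data)
    elapsed = time.monotonic() - start
    assert elapsed < 1.0, f"took {elapsed:.2f}s, requirement is < 1s"

test_parse_100mb_under_one_second()
```

The requirement stays one line; the check makes it enforceable.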


I absolutely agree that you need requirements. If you don't know what you're trying to build, it makes it really hard to build it. (And, one page? That's hopelessly inadequate for anything real. But I agree, too often we just start writing...)

But I think where the disagreement comes is in the difference between requirements and specification. The example you gave is, in my view, not a spec at all, just requirements.

Take item 4: That could mean anything. It could mean a SQL database with a published schema, or it could mean a 1 GB file on disk that I can view in a hex editor. (The file probably can't be in /tmp though, because you said "robust".)
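To make that ambiguity concrete, here is just one of the many equally valid readings of item 4: a SQLite file with a published schema. The `state` table and its keys are invented for illustration; a flat binary file readable in a hex editor would satisfy the same sentence.

```python
import sqlite3

# One reading of "robust storage that is independently inspectable":
# a SQLite database whose schema is documented. Anyone with the
# sqlite3 CLI can open the file and inspect it without our code.
conn = sqlite3.connect(":memory:")  # use a real file path in practice
conn.execute("CREATE TABLE state (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO state VALUES (?, ?)", ("schema_version", "3.2"))
conn.commit()
row = conn.execute(
    "SELECT value FROM state WHERE key = ?", ("schema_version",)
).fetchone()
print(row[0])  # → 3.2
```

The requirement rules this in, but it rules in a dozen other designs too.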

So people try to make the requirements more specific, so that they "specify" exactly what the software does. And that's where my previous comment comes in - you can't make it specific enough to answer all the questions. Which was one of the points of the article.


There's only so much requirements can do to prepare one for a project. At some point there's diminishing returns on time spent spec'ing something out. And of course requirements can change, so some amount of flexibility in being able to pivot while work is in progress is ideal.

Building software is usually more of a process than a plan. This is in contrast to traditional engineering, where you absolutely need a rigid plan to build something like a bridge or a building. Of course, reality is messy and no plan will be perfect.


"...a new (often anticipatable) requirement seems to appear..."

And often not, particularly when you are building a new system. And if you are replacing an existing system, the requirements that you discover are usually, "it must work exactly the same way as the old system."


If you're replacing an old system, a great piece of documentation is the original requirements against which it was built, especially if that document is versioned. Goes a long way to understanding why something was built a certain way and whether the new thing can do the job of the old thing. It's so much better than something either overly vague or overly specific like "it must work (exactly) the same way as the old system".


> but often my advice doesn’t gel with those who sought it.

Absolutely, but the author's explanation of why is both incomplete and unnecessarily convoluted.

People pursue that which they can reason about, such as bike-shedding[1]; other answers are discarded without consideration. This is called bias. Perhaps the most common bias failure people exhibit around software is the unfounded assertion. Developers typically make unfounded assertions in the quest for least effort, without considering total cost of ownership[2]. Software managers frequently make the same errors, but to reduce accounting expenses rather than personal effort.

Senior executives and military leaders solve for this by surrounding themselves with advisors. Advisors occupy managerial positions of domain-specific knowledge and eagerly seek both to keep their boss informed and to disqualify their boss's bad decisions. This works to eliminate bias, so the boss's job is then limited to balancing competing guidance from advisors against their own managerial experience in pursuit of the stated goal.

[1] https://en.wikipedia.org/wiki/Law_of_triviality

[2] https://www.investopedia.com/terms/t/totalcostofownership.as...


Another factor explaining the nature of software is the outsized influence of the shape, politics, and dysfunctions of the org that produced it. When I've dug into proprietary legacy software, some design decisions can only be explained by "we had to do it this way because <something about the org> prevented us from doing it the right way."


This is my multidisciplinary improvisation after a cursory scan. Please forgive me. I'll revisit to make sure I'm not repeating the OP or the author.

Liminality is key here. I believe this arises because we frequently enjoy the luxury of simplicity in software: constructs like generic pointers let us pretend that a physics of software is non-existent, at least until computational complexity catches up and re-asserts physics' dominance over the control theory we inherited from the Stoics.

What's missing is extensibility. In particular, infinite extensibility. All hail Alan Kay.

The relationship between those two has puzzled me for a while, at least since I started writing on this topic post-Covid.

The last things I wrote were along the lines of "bringing Lacan to software" - getting at the idea that there was a Lacanian frame there which needed further exploration. It reminds me of reading the history of cybernetics, which IIRC shares its Greek root with Kubernetes.

Lev Manovich should get some credit for applying McLuhan's idea of hybridization to software more specifically than McLuhan's target: technology beyond media.

https://mastersofmedia.hum.uva.nl/blog/2014/09/10/the-softwa...


On the author’s circular specification problem, there’s an aspect of this that happens in physical disciplines also.

We build houses and conduct surgeries, and planning helps the doing, but doing also helps the planning. One difference is those things are not freely replicable and there are physical costs to building copies or building exponentially improved versions of physical things. So we tend to build roughly the same thing over and over. That iterative feedback loop has been running for decades or centuries, informing the process to the point we can now estimate pretty accurately what it will cost to build a house and estimate how long it will take.

But software is freely copyable. One person can build something and share it with everyone. If it’s widely useful, it gets made into an app, library, framework, or language. We are never spending decades in the plan/build loop honing the process of building the same app over and over. We are, in a sense, perpetually on the forefront building something that hasn’t been built in exactly this way before, and the only way to build the new thing is to plan a little, and build a little, and iterate on it. And it often goes about as well as the first time someone tried to do surgery.


> The act of seeing software in action changes what we think the software should be.

This is the ethos of Agile Development.

Get something in front of users as early and often as possible, because it will change their idea of what the software should be, and it's better to find that out early rather than late in the process, when the software is more difficult to change.


The more error-prone the human (or the organization), the greater that is amplified by their software.



