There is No Right Way to Develop Software (dlo.me)
66 points by theli0nheart on April 17, 2013 | hide | past | favorite | 61 comments



The problem here is that these arguments lump many completely different things together under the heading of 'software'. This is like trying to find a correct way to 'grow plants'. If you try to grow rice the same way you grow wheat, it's not going to work. This is obvious for plants and should be just as obvious for software.

For example, if you're developing a web-based payment processor, you should have unit tests everywhere and possibly additional runtime tests ensuring transactions are completing properly and are stored in the database, etc.
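As a sketch of the kind of unit test meant here (the `Payment` class and its methods are invented purely for illustration, not taken from any real payment system), using Ruby's stdlib Minitest:

```ruby
require "minitest/autorun"

# Toy payment model, invented for illustration only.
class Payment
  attr_reader :amount_cents, :status

  def initialize(amount_cents)
    raise ArgumentError, "amount must be positive" unless amount_cents > 0
    @amount_cents = amount_cents
    @status = :pending
  end

  # Marks the transaction as completed; a real processor would
  # also persist the result and talk to a payment gateway here.
  def process!
    @status = :completed
    self
  end
end

class PaymentTest < Minitest::Test
  def test_processing_completes_the_transaction
    payment = Payment.new(1_000).process!
    assert_equal :completed, payment.status
  end

  def test_rejects_non_positive_amounts
    assert_raises(ArgumentError) { Payment.new(0) }
  end
end
```

The runtime checks mentioned above would then be a second layer on top of tests like these, verifying real transactions against the database in production.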

If you are developing the software for a pacemaker, unit tests just aren't going to cut it - you will need to logically prove all parts of the software work. You will need to test each piece of hardware to ensure it meets requirements. You will need to do hundreds of hours of physical testing to ensure everything works in different environments and with different electromagnetic interference. Just doing a bit of TDD is laughable here.

Lean startup validating an idea? History shows that simply shipping and having something that users can try to use is more than enough. If it breaks and goes down for hours? No problem - customers will sit there hitting F5 for hours for the chance to use it again. If nobody wants to use it, your tests are wasted anyway. In this environment, having something that partially works is better than something that isn't done yet.

There are millions of other cases, each of them unique. Software development is not building bridges. Please never use that analogy - a bridge is a fixed, known, and obvious requirement and solution. Software is the very opposite of this - if you compare 'software' to the combined industries of fabrics, farming, engineering, carpentry, electrical work, industrial building, home building and many more, then your analogy is closer to the truth.


No, there is no problem with the argument. It applies to every special case of software.

For example, you may think that your ideas for developing good software for a payment processor are right, but they aren't necessarily right. You have no evidence that that particular way of developing that particular type of software leads to more reliability and security than every other way. The same goes for every other example you mentioned.

So, if you cannot prove that it's right or better, why do you say that it is? I think this is the point of the article.


You (and the author) make a very good point here, and I wasn't disputing that point directly. The issue I was disputing is a generalization made by the author and the people who are insisting on their own 'only way to create software' ideologies. Arguing about software as a whole is flawed, and discussions like this must always be put into context and evidence for or against must be very specific about the exact context.

This ties into a similar issue in management academia, in which, for decades, people tried to prove a 'best way to manage a company'. Management academics have since moved on, and management recommendations are now rooted in as specific a context as possible. Software best practices need to make the same transition.

Simply saying 'TDD is best' is the same as saying 'performance-linked compensation is best'. A while back, a number of idealists very strongly touted the performance-linked compensation angle in the same way that a number of software idealists are touting TDD. There is nothing wrong with either idea when used pragmatically and in the correct situations, but the important part for the future, and for software as a whole, is to discover the situations in which TDD shines and the situations in which it doesn't. Only by breaking it down as specifically as possible can we get any real value from TDD recommendations.

So, if you cannot prove that it's right or better, why do you say that it is? I think this is the point of the article.

What I'm trying to get at is that if we can drill down to the specifics we may be able to actually prove things in very specific industries/team structures. We must move away from debating 'software development' as a whole before we can arrive at anything useful though.


You are right that it was a generalization, however you are pretty much arguing the same point as the author.

I remember seeing a matrix/graph in a book, maybe 'code complete' or some other popular book. It basically had complexity on the y axis and risk on the x axis. The point being that high risk, and high complexity projects would justify the most discipline, testing, proofs etc, and the low risk low complexity projects justify minimal process. Something like an inventory management system for a t-shirt shop would be the latter, and sending a pacemaker to the moon would be the former.


I had an interesting experience with this recently. I have been doing Ruby for a few years now and have immersed myself in its culture and concepts. Wanting to level up my programming skills, I am trying to learn Go.

I went into a golang IRC channel and was labeled "ruby brainwashed" because of the way I approached my questions about Go syntax and concepts. I admit I had a hard time mapping the object-oriented concepts I had come to love in Ruby onto Go's different OO model. These guys genuinely thought I was doing things wrong! It has been a wake-up call to the many different approaches and opinions out there.


Which is why so many top developers recommend that you learn a new language regularly. It exposes you to new thinking and methods. As a result, you grow as a developer.

Kudos to you for doing this.


Eh, the standard "I don't know which way is best, so there must not be any objectively better or worse ways".

There clearly is a correct way to develop software. It's just not clear that we've got enough experience with it to a) have found it, b) recognize it, and c) correctly confirm it to a reasonable degree.

As I commented on the article, the whole thing sounds a lot more silly if you replace "software development" with "building bridges" or the equivalent civil engineering task. There are clearly better, safer ways to construct bridges, and I'm fairly certain the same is true of software.


When building the Golden Gate Bridge:

> Most speculated that a bridge would cost over $100 million.

> Joseph Strauss, who had designed nearly 400 bridges, claimed it could be built for $25 to $30 million.

> "Strauss was a strange, at times almost a self cancelling mixture of conflicting traits: promoter, mystic, tinkerer, dreamer, tenacious hustler, publicity seeker, and recluse. He was not a member of the American Society of Civil Engineers nor was he a graduate of a college of engineering."

http://www.structuremag.org/article.aspx?articleID=1493


What a load of baloney! There is no one true way to develop software; it depends on the person/team, the nature of the project, the schedule, and tons of other external factors. The same with building bridges actually; to follow the one true way means a lot of bridges falling down.

The best we can do is have enough experience and intelligence to choose methods appropriate to the context. That means hiring good developers...because process won't save us!


So you're saying choosing methods appropriate to the context and hiring good developers is the one true way to develop software?


Maybe not. Sometimes all you need is developers that are "good enough." I personally don't like hiring junior talent (mostly because I hate all of the hand holding it requires), but that doesn't mean it might not make business sense to go with the cheapest talent that can get the work done.

Ironically, if your company uses a lot of junior talent, the appropriate development methods might just require things like enforcing a JIRA ticket before code, always using TDD, etc.

Perhaps what I'm saying is the one true way to develop software is to choose your methodologies to maximize the effectiveness of your specific team to fulfill its specific end goal.


Ah, that's a good one. Yes, if you want to call that a way.


Yes, but why in the world would you compare "software development" to "building bridges"? I think this is a major fundamental problem with the entire field of computer science: approaching it as if it were similar to civil engineering. Bridges are physical; software is abstract.

Creating software is much more similar to creating a legal system. Lots of legacy, nothing ever seems to work quite right. Vested interests.

So what's the correct way to create a legal system that just works?


I think we're better served looking at incorrect ways to develop software, which are generally obvious and easier to sell each other on.


> There clearly is a correct way to develop software.

That is not at all "clear" to me. I've come across lots of wrong ways, to be sure, but that doesn't point to a singular correct path.


Your point would only be valid if it were possible to clone developers. Since all developers are different, there is no one correct way.


Be careful when you elevate "development" to "engineering". Any self-respecting man can put together a study desk for his son. That's development. But if you want to build and sell desks to the public, in a way that follows regulations and turns a profit, then that is engineering.


The truest statement I have read in these comments so far is that we don't yet know the best way or ways to do software development. There might be one or more best ways, but we don't know.

The article is valuable in pointing out that most purported "best ways" are dogmas, and, despite claims of being scientific, they tend to veer off into pseudoscience.

And yet, if you don't have a plan, if you don't apply the best available tools, and if you don't have expertise and experience, you're going to have a bad time.

That leaves software development as something less than engineering, and more like a craft. You can't make it happen, or manage it, without expertise, creativity, and intellect.

Attempts to stuff it into a contained, measurable space almost always result in perverse incentives and efficiency comparable to a Soviet 5 year plan.

But that doesn't mean that a best way doesn't exist, or that it will never be found.


Novices generally don't have enough experience to distinguish between situations that require a "best" practice and those that don't, hence advice given to novices is generally phrased in absolute terms: "Never...", "always...", etc. Experienced programmers have used the "best" practices enough that they know their limitations, and so they pick and choose accordingly.

pragdave gave a great talk on this very subject a few years ago: http://www.infoq.com/presentations/Developing-Expertise-Dave...


It all boils down to maintainability. Do I even want to attempt debugging or adding features to a 10,000+ line ASP.NET Web Forms page? What about an MVC project where someone took the time to observe separation of concerns and wrote unit tests for everything? Automated tests of any sort will save you when the code base grows, and it will grow. There are different code smells for different languages and platforms, but real developers will smell them out and see them for what they are: the wrong way to write software. Bad code is bad code, and it is always the wrong way.


I completely agree with the author. I think this can be summarized as "do whatever works best for you". Best practices are great - if they work for you. "Best" is highly subjective in my opinion.


For just a second I considered posting "lol" after reading the comments on this that missed the point, but I realized that's not really appropriate for HN. And besides, I didn't want to make it seem that I don't agree with the author, because I do, completely.

Then I realized that you did both for me. So, thanks, lawl.


No, look, there really are good and bad ways to develop software. If there weren't, no-one would have invented TDD, in fact we'd still be doing everything in FORTRAN. Yes there are disagreements, but we should respond to that by looking deeper and figuring out which parts of each approach are good and which are bad, not throwing up our hands with some relativist "oh, whatever works for you must be the right way, there are no absolute truths".


> If there weren't, no-one would have invented TDD, in fact we'd still be doing everything in FORTRAN.

TDD and FORTRAN are completely orthogonal concepts....

> not throwing up our hands with some relativist "oh, whatever works for you must be the right way, there are no absolute truths".

Yep, that's what works in practice. Ideologues don't ship.


>TDD and FORTRAN are completely orthogonal concepts....

I originally had some other examples in that sentence - we'd never have invented OO, we probably wouldn't have any notion of a programming methodology at all. My point was that if some ways of programming weren't better than others we would never have invented another programming language.

>Yep, that's what works in practice. Ideologues don't ship.

Citation needed. I've seen more projects doomed by "pragmatism" (oh, we don't need to make these projects consistent, we can ship quicker if we just leave those two similar-but-slightly-different functions the way they are) than any other factor.


> TDD and FORTRAN are completely orthogonal concepts....

False. The testing I've seen heavy Fortran math frameworks go through would probably make the average frontend coder's face melt. Just because it ain't easy doesn't mean it ain't done.


You could totally do TDD and FORTRAN together, hence they are orthogonal.


Don't mean to disagree with you substantively, but just a note on words:

The colloquial meaning of "orthogonal" seems to be "all combinations of these things exist or could exist in theory". That's all good and well, but there's a more faithful way to translate the mathematical concept of orthogonality: "uncorrelated". (Correlation is scaled covariation, which is basically dot product.) Adopting this usage might add some signal to online conversations, e.g. when one hears something like "TDD is orthogonal to Fortran", one could reply "hmm, that's not strictly orthogonal, the real-world correlation seems to be nonzero and negative, which means some real phenomenon must be causing it".
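To make the "correlation is basically dot product" point concrete, here is a small sketch in plain Ruby: the Pearson correlation of two vectors is the dot product of their mean-centered versions, scaled by the product of their lengths.

```ruby
# Pearson correlation computed as a normalized dot product
# of two mean-centered vectors.
def dot(a, b)
  a.zip(b).sum { |x, y| x * y }
end

def center(v)
  mean = v.sum.to_f / v.size
  v.map { |x| x - mean }
end

def correlation(a, b)
  ca, cb = center(a), center(b)
  dot(ca, cb) / (Math.sqrt(dot(ca, ca)) * Math.sqrt(dot(cb, cb)))
end

correlation([1, 2, 3], [2, 4, 6])  # => 1.0 (perfectly correlated)
correlation([1, 2, 3], [6, 4, 2])  # => -1.0 (perfectly anti-correlated)
```

In this reading, "orthogonal" would mean the centered vectors have a dot product of zero, i.e. a correlation of exactly 0, which is a much stronger claim than "you can combine the two things".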


I see where you are going with that, fair enough. I've seen orthogonal used often to denote mutual exclusion instead of independence. My bad. :)


I love "strong opinions weakly held"!! I couldn't agree more.



Wisdom is knowing when to apply a theory, knowing when to use a methodology. :)

When someone says hiring remote workers is necessary, they are talking about it in a certain context. They might fail to mention that context, or may even fail to notice the underlying context. Not noticing the context doesn't make them a fundamentalist; it means they are not very rigorous, or that they reached the particular conclusion by trial and error.

However, to believe in any such conclusion without trying to understand the context, or without even being aware that there might be a context, is a folly on your part, not on the part of whoever mentioned a particular strategy that worked for them.


Agree that there's more subjective than objective in best practices. That said, there's nothing wrong with developing and publishing best practices. The fault is in those who turn them into a rigid "bible", rather than food for thought.


These kinds of posts are a natural response to all the TDD hype.


This reminds me of the politically correct, relativist nonsense that one often sees in response to the existence of multiple, mutually exclusive religions.

There may not be a single right way to develop all software, but as with questions of morality, there are certainly better or worse ways, and in principle, we can find them through science. Hopefully we can at least come to some definitive conclusions about "worst practices", if not best practices.


The problem is that these better ways are mostly based on opinions of others and usually lack any real evidence.


There is no right way, because the term "right" is ambiguous and subjective. Even ridiculously bug-ridden software can be considered "done right" if the requirements were quick development time and correct results for a single specific case (think of single-use one-liners). However, for example, the only known way to provably have bug-free code is, unsurprisingly, to prove the code mathematically correct.


Proving the code is mathematically correct is insufficient.

A partial list of bugs I've found that (for at least some definitions of "prove") would not be detected by what you suggest. Note that rigorous testing can (and, in fact, did) find these.

* CPU Bugs (Including one that was unknown to the CPU manufacturer after nearly a decade of deployment)

* Memory Bugs

* Standard Library Bugs

* Compiler Bugs

Regehr even gives an example of a proven correct C compiler generating incorrect code (due to an incorrect system header), which was uncovered by testing.

[edit] Lest people misunderstand my position, I still think TDD (as it has been explained to me) is BS, but testing, in general, is a necessary part of making reliable software.


The standard library and compiler are also pieces of software, so assuming they could be mathematically proved correct, they wouldn't have bugs either. CPU bugs and memory bugs, yeah, you are right.

EDIT: Oh ok, so the system header was not proved mathematically correct :)


In fewer words:

There exist many context-dependent local maxima.


I think the key here is environment. If you are in a startup, writing a thousand lines per day, then sure, tests are not for you. But the guy who takes over the code after the startup is sold will have a different opinion.


nope. you're wrong. there _is_ a right way.

and furthermore, the best way to write an essay is to organize a meticulous outline in advance. and then _stick_ to it, executing it faithfully.

and the best way to take a trip is to _plan_it_, with maps and tourist books, and not waver from the schedule you made, even if (especially if!) you're lured to a bar by someone you wanna boink.

stop doing it wrong!

-bowerbird


My wife is a planner. I'm more of a "let's get in the car and see where we end up." A healthy mix of both planning and spontaneity adds a lot of variety and enjoyment to life.

Which is one reason I love the start-up scene. You start out with a plan, but the challenges come when you have to make important decisions in a moment.


i couldn't turn the sarcasm up any more, so i guess i'll just have to turn it off. ;+)

-bowerbird


ah, the sweet voice of sanity


My comment without even reading this article: Yes. There is.


I asked an experienced Bridge player about bidding conventions, and if there was one right way to bid a hand. He said, "there isn't one right way to bid, but there are plenty of wrong ways."

That's how I feel about software development (and business operations in general). Is there an inflexible, one-size-fits-all, "right" methodology? Hell no. Even open allocation (which isn't really a methodology) isn't right for every company out there. There are, however, a ton of wrong ways to do things. The sad thing is that, because most people are driven by a mix of ego and incompetence, wrong ways of doing things are the most common.

You're Doing It Wrong if:

* managers have more power than engineers.

* mutually positive contributions (good for engineer and for firm) are disallowed for political reasons.

* people can't get shit done because of too much pointless process (to start writing code, you need a Jira ticket).

* stand-up takes 45 minutes and people sit down.

* people are micromanaged and start doing things badly, or overmonitored (the hidden danger of too much Jira activity) and start panic-coding.

* designs are made by incompetents.

* ... and many, many more.


Interestingly, you did exactly what the author warned against, but replaced "right" with "wrong".

Some things are obvious (designs made by incompetent designers), but then giving the example of "No code without a Jira ticket" is almost certainly your personal pet peeve.


My point is that "no code without a Jira ticket" is a bad process. Slapping reporting overhead on small code changes is ridiculous. Maybe I didn't make that clear. I don't support that.

I have a lot of pet peeves, because I've seen so much done wrong. Competent software managers are extremely uncommon. Maybe 1 in 20.

In the '90s, people who studied day traders found that they were reliably making money on their core strategies, but those were generally only available for a small amount of time, and that most of them lost their shirts through "boredom trading" outside of their core competence that had zero-to-negative expectancy and only added noise. Boredom trading occurs because ego, combined with a need to feel active, results in a lot of activity that cuts away at the profits earned during one's good hour or two per day.

Managers are a case of trader boredom. They'd actually be quite effective if they scaled back to 1/10 the amount of process and interference, but their trader boredom just causes them to throw everything out of whack. If managers were socially permitted to work 5-10 hours per week instead of being expected to fill time by bothering people, they'd probably do a much better job.


Why is it a bad process? I know little to nothing about Jira (although I have worked in environments in the past where you don't start work on a defect without taking some form of ticket, and it didn't seem fundamentally bad).


You find a typo on the website. Instead of pull, switch the "i" to an "o", commit, push (which takes 30 seconds), you have:

1.) Open web browser

2.) Go to your JIRA url

3.) Enter your username/password

4.) Enter your password again because you mis-typed it

5.) Find the project page for that particular project and wait for it to load

6.) Find the place where you add new tickets and wait for that to load

7.) Type in a bunch of bullshit on a long form and submit that

8.) Hopefully you get to assign the bug to yourself. God forbid you have to have a PM do the actual assignment!

9.) Pull, make the change, push

10.) Go back into JIRA and open the ticket

11.) Write a description of how you fixed the "bug"

12.) Close the ticket

Ok, so that's the worst case, but the only real work you should /need/ to do is #9. The whole process of creating/assigning/closing the ticket probably only takes a minute or two, but it's probably unnecessary in most software systems and, over time, adds a ton of frustration where none is needed.


I like the way that the stuff you think you need is rolled up into a single "pull, make the change, push" yet the stuff you don't see any value in is spread out into having to open the application, go to the site, get things wrong, etc. If it's browser-based, could you not simply have a browser bookmark to "Add ticket" to replace most of the first 6 steps?

Also, "type in a bunch of bullshit on a long form and submit that" - like I say, I don't know Jira, but I doubt that it forces you to type a long bit of bullshit. I'm sure that "correct spelling" would do. And I'd also expect that you'd be doing something similar in your source control system so that someone looking through the code at some point in the future can see why (or when) the particular change was made.

And how often do you make this kind of change as opposed to actually working on bugs/improvements that are coming in through JIRA?

I'm sure that there are occasions when your tool gets in the way of getting the work done, but from your description I'm not sure it sounds like it's a major inconvenience most of the time, which needs to be balanced against the benefit you get from having a centralised place to track, prioritise and allocate the work.


Of course, it was a bit of hyperbole. It's meant as an illustration.

I agree that there should be a centralized place to track, prioritize, and allocate work. But the comment I was responding to was about having a ticket for every code change, which is absurd if you take it to the extreme. If I'm working on feature X, but come across a typo, do I have to have a ticket to fix the typo or can I just fix it as I go along?

Again, we are taking things to the extreme in the examples.

In a sane workflow, someone prioritizes a list of tasks, someone (hopefully the group) divides up the tasks, and then we work on them one at a time. All of that is in a central place and developers generally don't have a problem tracking like that.

But what about the case where you find a bug (especially a small one)? In larger teams you probably want to indicate that you fixed something so that the testing team can have something to verify against. In a small team you just fix it and go on without the overhead of making a ticket for every one of those you discover.

An example from today. I just came across some code that was

"unless @blah.blank?", but it should probably be turned around to "if @blah.present?", so I made the change. If I had to write a ticket for that I would quickly want to strangle someone. And that's what the original comment I responded to seemed to be asking about.


The problem is that I've seen far too many examples of people fixing a "clearly a bug" that either turns out to be entirely correct for a reason the developer wasn't aware of, or even when it is a bug it breaks something down the line that depended on the bug being there (e.g. for a spelling mistake, it turns out some other application was screen-scraping it and depended on that spelling mistake to identify a given page).

> In larger teams you probably want to indicate that you fixed something so that the testing team can have something to verify against. In a small team you just fix it and go on without the overhead of making a ticket for every one of those you discover.

I think you may have hit the nail on the head there. "to start writing code, you need a Jira ticket" isn't a bad process per se. It's a bad process if you're working on certain types of project e..g. in a small team where everyone intimately knows the codebase and all of its uses.

And I think that's what the article was about - blindly applying a process designed for one type of development to an entirely different type.


Yes, where I work, we really do have to type a long bit of bullshit. At least seven mostly-meaningless fields all need to be filled out or the form will not submit. Worse, several of the fields are drop-down boxes with dozens of entries. Half the time I just click random shit because I don't care and no-one else does.


That bit sounds pointless, but I'm guessing that it's not a core part of Jira?


I'm hoping it's not, but it's what we're presented with.


You forgot:

9a.) Make a Crucible review

9b.) Refresh a few times because Crucible isn't loading correctly

9c.) Figure out who to invite to the review

9d.) Wait 3 days until the PM notices the review and signs off on it

9e.) Close the review

9f.) Wait another day because the tools team decided they'd upgrade all the Atlassian tools today

Is this shit really common? I thought it was just where I worked.


Right. That's the kind of nonsense that managers are OK with because they think that it's more important to monitor performance on an hour-by-hour basis than let you get into flow and actually perform.


I think our disagreement comes from different understandings of what you meant.

When I think of "no code without a Jira ticket", I usually don't mean "create a Jira ticket for the code", I mean "find the Jira ticket that links closely to the code you're writing".

I do agree that creating Jira tickets all over the place creates unnecessary work for everyone - even those trying to read the reports.

It is helpful, however, to know the general area and background for the code you've written e.g. what the feature is and any comments on it. Stuff like "developer X was working on feature Y when he wrote that code".


You just described my workplace. No wonder I'm frustrated working there.



