"When I had my bathroom remodelled, at first I thought it was organized amazingly well, and I wondered why software couldn't be like that. There was a designer from Expo; a primary contractor and subcontractors for tiling, painting, etc.; a project workbook containing all the documents, including the design; and a logbook for every contractor to record their visit. But it turned out to be just like a software project.
As soon as they started ripping up the walls, the painstakingly drawn design for the tub/shower was hastily and arbitrarily adjusted because the casing for an outside fire extinguisher was in the way. Many of the components turned out to be incompatible with each other and many were not the ones originally ordered (the bathtub was not even the one labelled on its box!)
Communication was terrible – the city inspector would tell the primary contractor to change something, and then a subcontractor would show up and ask me what he was supposed to do. When there was a disagreement about whether the tiling was done properly, one of the contractors disfigured the tiling to force another contractor to redo it. They all had other projects, so for long stretches I had a pile of dirt (for the cement) in my patio and a non-functioning bathroom for months. And toward the end there were some quick patches, e.g. the walls were slightly curved so edges of the tiles looked bad and they just painted the patches, which worked but cost me extra. And a couple of years later when the tub sprung a leak, the plumber couldn’t figure out how to get in there, said they must have done a shoddy job on connecting the pipes, then when he finally got a good look at it, realized why they did it that way. It was exactly like a software project!"
Perhaps the biggest difference with software is that physical reality places greater constraints on the solution space. With software it's easy to change the constraints of the pieces themselves to a much greater extent, which can easily have unforeseen consequences. The difference is not so much that working around constraints is harder, but that with every project the constraints can be hugely different. That's a problem design patterns, I suppose, attempt to ameliorate. If only design patterns could be foisted on designers the way physical reality is.
I've always thought of the various kinds of testing as adding some physics to constrain a purely abstract idea within some bounds.
If the tub is too big, you do "git rebase -i" and squash a few commits, followed by "git clean -df".
On the other hand, learning to manage software projects gets you an almost unfair advantage at managing "simpler" and more mundane projects where everyone and his mother underestimates the complexities and all the ways things can go wrong.
I'd add that learning to write software also gives you an unfair advantage in managing the "complexities" of all other processes you'll encounter in day-to-day life, to the degree that it's (at least for me) incredibly frustrating going through life seeing inefficiencies and "incompetence" (not meant as an insult) everywhere around you.
When you go in to inspect a software system someone has constructed for you, though, you just don't know. It could have a gaping security hole; it may be dreadfully underpowered for your needs - or ridiculously overpowered. And you just can't tell.
And while, if you need repairs on your bathroom, you can get in any plumber and, even though he may criticise the original installer's technique, he can pretty reliably open things up and expect to find things he understands - pipes, for example, rather than, say a system of motorized buckets. But if you get in someone to look at software you've had built then who knows what the original developer will have constructed.
And as a customer of a bathroom installer, you can (hopefully) understand why you can't just have the showerhead suspended in midair with no pipes to feed it; that you can't have an electrical outlet installed actually in the bathtub; that the bathroom is only 8 by 12 so there isn't room to include a hottub. But no such immediate understanding is available to someone commissioning software.
The physicality of real things renders them much more readily comprehended by the user, which means they have a chance to grasp what goes into creating them. But software's complete lack of physical existence means that it is virtually impossible for anyone except its creators to fully comprehend.
You wonder why buildings take so long to construct? It's because nothing goes as you expect.
As they say: A plan is just a list of things that don't happen.
Contrast that with software: it is typically more than enough that you can click through the links and it basically looks like it "works". If you deliver all that on time, you are already better than probably 95% of all software projects out there, and in all but the most critical cases (money or human lives involved) absolutely nobody will check what exactly you programmed or how terrible the underlying software structures and architectures are. If anything, symptoms are checked, and there are requirements you have to meet when money or lives are involved. But that is it.
So, here anyway, it is also safety which triggers the regulations.
My dad is a contractor. He can literally build anything. I grew up with this, going on jobs with him, being in that whole space. I do software (but can build things too! :) and the one thing I've inherited from him is the 'do it right' conviction. Just like some of the tv shows where the hero goes in and says "omg, how could they do this?" or "this is all gonna have to be ripped out and re-done the right way because the wall is load bearing", that's my dad.
And for software more and more as the years go by, that's me. Yah, there is no software test we have to take - we just have conventions and patterns, none of which (I've seen) are governed at all, although I'm sure some are somewhere. Sometimes I wish this was the case. I've worked with bad coders and seen code that is unbelievable and part of me thinks this could all be avoided with regulation. But then, obviously, the other part (that wants to stay alive and put food on the table) knows that when this happens, the world changes and software will not be the 'easy' path it is now.
Like my dad, I have that dna in me to do the best job and the right work. But if there were regulation like in the building industry, a whole universe of different types would be out of a job. I don't see it happening in my lifetime even if AI starts building stuff.
My complaint, along with a few others, was compiled, leading to a hearing where his license was revoked. Additionally, the state had a fund set aside for people who lose money on shoddy work (up to $15K). We were one of the lucky ones, as he only cost us $3K (which we got back). Others lost $10-20K.
Bringing it around to the topic at hand, a code inspector for 'code' is an interesting idea. That said, I'm okay with devs/agencies being held accountable for their coding work -- as long as clients are likewise held accountable for paying on time, proper briefs, etc.
If they ignorantly (not maliciously) wire the bathroom in an unsafe manner, however, they will fail inspection and be made to fix it properly. This is useful to the individual homeowners. In a way it's similar to the adage that "locks keep the honest people from breaking in." With both regulations and locks, they are less likely to stop the criminals.
Because as much as the software industry loves to adopt the word "engineer", almost none of the sort of websites and software discussed here on HN ever gets anything much like "engineering" in the sense of, say, "civil engineering" done.
I think in my ~20-year career, I've had only 3 projects that were specified well enough up front that we just "built it according to the plans" and had a satisfied customer at the end. Overwhelmingly, some (or most) of the design gets "made up as we go along". Even on projects with several small forests' worth of up-front documentation, there are almost always large areas of vagueness or outright contradictory requirements, which need decision making on the developer's part halfway into the job.
There are _very_ few websites that couldn't be coded from the ground up in a few weeks with a few good programmers - _IF_ you had already thought through all of the things the site needs and all of the consequences and the contradictions in those consequences. Even the big ugly complex projects, like Google's search or Facebook or Wikipedia or Twitter - if you had a spec that answered all the details about how you wanted them, a small team of experienced web guys could get it up and running in a month or so, and let you know how much cloud/hardware/sysadmin/support you'd need to budget for as your customer acquisition kicked in.
It doesn't take very long to "build" the software or the website (obvious "large scale" projects like OSes excepted). What takes most of the time is identifying and solving the problem. (And, I suspect a lot of the reason software/websites have a reputation for taking "so long" is that they're _way_ too often rushed into the "building" phase way before all the things that need designing are even identified, then all the new solutions take longer 'cause we try to fit them around all the work that's already been done.)
A lot of assumptions have to be met for that to be true. It shouldn't take very long provided:
1. The programmers are very familiar with the tools.
2. The programmers are very familiar with all necessary 3rd party components.
3. The 3rd party components account for 99% of functionality.
4. The programmers have built that exact project before, preferably more than once.
I usually work in a best-case scenario for development: I'm building projects with limited scope, solo and with no other decision makers involved in the project. I'm also experienced enough to know how complex things will generally be from a high level standpoint. And I have never once met my own estimate for complexity and the amount of time something will take. Exactly 100% of the time silly, seemingly inconsequential things add up to an extra 30% or more.
It doesn't matter if you can write 1000 shippable lines of code in a day because tomorrow you'll spend most of your time hunting a bug in an external library. Suddenly your machine-like pace has been cut in half. This happens on every project I've worked on and I would assume all software projects.
I agree that poor planning adds to the amount of time needed. Of course it does. But there really is no "best case" scenario where you and a couple of buddies could hammer out Facebook in three weeks.
I'm there right now - why doesn't this Concrete5 ecommerce plugin's PayPal addon work this week, when nothing's changed since last week???
It's not just you…
I spent a day with a bug that I found in OpenCV (a computer vision/machine learning library) that spooks me to this day. How it works internally is that for images to be processed you create a header (with the usual image header info) which contains a pointer to the actual image data. There are two ways to do it: in one you create the header and then create the image data separately, and in the other you create the image all in one go. I ended up in a situation where the image (it was going through a couple of different filtering algorithms) would get processed and be displayed regardless of whether or not I had set the pointer to the image data. That image data was already in memory since it was getting read from disk in a separate step, but there was really no way for the library to know its location in memory. I spent a day trying to sort out what was going on -- and threw off my own schedule -- because I had to assume something was seriously wrong and would lead to other unintended consequences. I never did figure out what it was, but after a lot of testing I decided it wouldn't cause any other problems (it never did; the library I was writing is still the core of a product I'm selling today).
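The two construction paths described above can be sketched schematically. This is illustrative Python, not the real OpenCV C API (which uses cvCreateImage for the one-step path, and cvCreateImageHeader plus cvSetData for the two-step path); the names here just model the header-plus-data-pointer layout:

```python
# Schematic model of the two ways an OpenCV-style C API builds an image:
# (1) header and pixel buffer allocated together, or
# (2) header first, with pixel data attached in a separate step.

class ImageHeader:
    """Stand-in for an IplImage-style header: metadata plus a data pointer."""
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.data = None  # "pointer" to pixel data; unset until attached

def create_image(width, height):
    """One-step creation: header and pixel buffer allocated together."""
    img = ImageHeader(width, height)
    img.data = bytearray(width * height)
    return img

def create_image_header(width, height):
    """Two-step creation: header only; the caller must attach data later."""
    return ImageHeader(width, height)

def set_data(img, buffer):
    """Attach an existing pixel buffer to a header (like cvSetData)."""
    img.data = buffer

# The puzzling case the comment describes: processing appeared to work even
# though set_data was never called, i.e. the header looked like this:
hdr = create_image_header(4, 4)
print(hdr.data is None)  # True: no data attached, so processing should fail
```

The point of the sketch is just that the two-step path leaves a window where the header exists but the data pointer is unset, which is exactly the state that should have made the filtering fail.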
If you don't know it OpenCV is about as mature as a software project can get. It was most likely the best tool for the job I had to tackle. But even with the library's stability and widespread use there was just plain voodoo that set a project I was working on back. Other 3rd party tools can be much worse. I don't do any web stuff but I have had to integrate Facebook Connect into products. Interfacing with Facebook can be a nightmare. Their API changes fairly regularly and the documentation can be a mess. Even using a 4th party, regularly updated, helper library I've seen bugs that took ages to sort out.
And that's just talking about integrating 3rd party, hopefully mature, components. I don't care how good you are, writing new code means a lot of testing and debugging. I've never seen new code that didn't introduce some voodoo of its own into a project.
Bugs in really mature software (i.e. libtiff, libxml, sqlite) are very rare.
I seriously doubt the code for reading images into an OpenCV-native format is being actively developed though. Anything being done in pattern recognition would likely not affect the library's basic IO stuff. What I was actually using it for (their RDP implementation, image thresholding) was likely pretty stable as well.
Sometimes I waited for a filled-in form for a few months or even more (not paying much attention), and some of these projects were very successful. If I'd started building right away, I'd have spent those few months struggling and waiting for decisions to be made. What is even worse, I'd probably have estimated my time and salary based on wrong assumptions, so I would be mad and underpaid. The project would take way longer than I estimated, so the customer would be angry too.
Do yourself a favor and never start coding before final documentation arrives.
It's probably worth mentioning that this approach is correct only for small and medium projects, like things a WordPress ninja on the team could deploy.
This may be a good place to start: http://www.methodandclass.com/article/write-a-web-site-brief...
I got this process from another Ruby consultant and it works great:
- Get wireframes of the main page views (usually there are about 6 of these; you can help them create these)
- Get them to write out, or describe the requirements and take notes
- Break the wireframes and requirements into user stories and load it into pivotal tracker with estimates
- Tell them "we are going to build all these user stories PLUS 30 'freebie' points of stories. Any changes you want, you can spend your freebie points on, and after that you can substitute out currently scheduled points if you really want those changes."
It works great so far.
Civil projects are a constant dialogue between client and engineer - but we get further monkey wrenched by one additional stakeholder that software doesn't have: local and at times federal government review. If you think software dev is difficult now, wait til your code is regulated.
Another big difference between physical engineering and information engineering is that when an overpass is completed, the project is done. If you're lucky, someone will come by and look it over or maintain it over the years. There's no such thing as "done" software. When a program wraps up, everybody rushes back in and starts monkeying with it for the next iteration.
So the underlying development method, nowadays denounced as 'waterfall', wasn't sufficient? I am not an Agile zealot, but change and 'knowing better' need to be embraced during the development process, not ruled out.
On the other hand, when requirements are nailed down from the very start, the development process has been swift, well executed and largely bug free.
While developers love discussing the pros and cons of different languages, libraries and frameworks I often think that more work should be put into developing ways of better capturing requirements, as this has such a huge effect on the amount of time a project will take.
There is also a gap in discussion of software architecture. Everyone seems happy to use whatever their framework forces them into, or add more machinery to make up for it. Concepts like coupling and cohesion don't get much airtime, particularly with the prevalence of dynamic languages.
If the change is at the customer's request, they get charged a nice premium and the necessary changes are made. At the end of the day, everyone still gets paid regardless of how often the customer changes their mind.
This, then, is perhaps what the trades do better than us: they're very upfront with the customer about added costs. "Yes, we can make your bathtub a jacuzzi. It'll cost you $2500 more and the job will take a couple days more. Do you want to go ahead with your change?" Because there is a clear and complete plan that the customer signs off on before construction begins, it's very easy to show them what a change in requirements entails.
Everyone can envision what knocking a wall down means, and can see it will cost money and time. Many decide, upon reflection, that they really don't care enough to pay that price. Others go nuts and end up with expensive projects that take forever. Either way, the contractors get paid.
E.g. use Agile and smart guys with good knowledge of the application domain for the first version, then drop it and rewrite from scratch using Waterfall and outsourced professionals.
Why not? Professionals know their tools much better, so they will make better choices in every aspect of the project. If you place restrictions on their tools, they will work less efficiently, unless you are already a professional developer yourself.
> The source (with doc) of a software is a compiled version of business/technical knowledge of the team that developed it and not really that good as a guide to the next team.
Of course. They will ask questions, so somebody from first team should respond to them.
Jack Reeves is the man here though - writing source code is designing the product. Before code can be written you need to work through all the trade-offs, the decisions and the research (or just make wild assumptions).
Once you have source code, it has been designed.
Which is why, instead of sending out a 12-page form telling a client they need to think harder about something they don't understand, you put out version 0.1 and ask them what they want changed.
It's difficult to take technology or software from one place and transplant it into another. The best we can do is take what we learn on past projects and past companies and keep those lessons in mind when making new decisions.
If the design and spec job was done with full knowledge of current state-of-the-art "web frameworks, databases, rpc servers, log frameworks, etc etc." - and requirements adjusted to suit "off the shelf" tested and reliable code, only speccing "custom code" where absolutely necessary, I think my claim is still supportable. I don't want a spec that says "you need to write this in php - therefore you need to write a php compiler to scale it". I want those problems solved by the spec. The "coders" just want APIs and datastructures and wireframes and finalised graphic design and content/content-inventory. I reckon I know people who could do it.
All you need is wheels, brakes, a frame, a power source, a transmission, some fuel, and some crap to connect the power to the transmission to the wheels and the wheels to the frame. And a seat. Oh, and a steering wheel. Boom. Done.
You can hand-build a car by yourself over a couple days. I mean sure, the engine, transmission, frame, wheels, tires, seat, gas tank, carburetor, brakes, etc are all manufactured by hundreds of people and dozens of companies to get you to that point. But basically, you decide what components to use and how to put them together. It takes you a relatively minute amount of time to assemble them and it's much easier than trying to manufacture all the parts yourself.
Somehow, after building cars for over 100 years, both hobbyists and huge corporations find new ways to build them (and seem to enjoy themselves). They even come out with new ones every couple of years and keep finding people to buy them. You'd think the general public would wise up to the fact that it's all the same thing over and over and demand our Jetsons cars already.
These days I write in an interpreted language using complex libraries that handle a multitude of protocol choices for me. Even if you argue that each of those tools were at one point hand-crafted, even down to the compiler, there are now development tools that help me write code, from basic code completion and tooltips to the most complex static and runtime analysis. That is machines writing software, with a human in the loop.
This works just fine for construction of physical things because the cost sunk into the "non-construction" bit is tiny compared to the whole project - so no one spends too much time thinking about methodologies and automation for an architect coming up with the concept of a building.
(1) I note in the article that the person mentioned spent time as a consultant in a big consulting company - this view is still held by them, at least as of the last brush I had with organisations like that (and they are incredibly frustrated by it).
And of course, since I'm in Boston I'm required to mention the Big Dig (2), which was a tunnel and bridge project that cost over $14 billion. Oh, and a ceiling panel fell, killing a woman in her car. And the guardrails in the tunnels tend to kill motorcyclists who would otherwise suffer only minor injuries. Plus all 25,000 of the 120-pound (55 kg) light fixtures in the tunnel ceilings have to be replaced lest more of them fall, maybe killing more people.
But yeah, let's keep trying to make software engineering just like civil engineering.
I think the analogy is becoming a little stretched though.
I feel like the ISO/ANSI etc standards-making bodies are where the analogy breaks into time and space.
In one sense there _is_ this part of "constructing software", and _largely_ it can be done by the software equivalent of stereotypical "construction workers". (This is what a lot of people who've tried outsourcing to India are trying to do.)
The problem is, while you can collect a pickup full of Mexicans who can lay bricks / hang sheet rock / tar roofs on most street corners in the south of The Mission, and they'll do a great job of it if you give them good directions - you don't expect those guys to be making architectural or structural decisions, or zoning or permitting or code decisions.
"Code writers" have to make those sorts of decisions every day - a current high-profile example is Marius Milner and "his" decision about what data Netstumbler should collect from the Streetview cars. One of the biggest software companies ever, having ethical/legal/policy decisions made by the coder-on-the-spot (at least if you believe Google's representations on the topic). Or Apple with Lion debug-logging clear text passwords for FileVault, and having it escape "into the wild".
The "architect", the "civil engineer" and the "structural engineer" have important roles in the world of building physical things: they are the guys who sign off on bridges or tunnels or even just-repaired airliners, who put their careers on the line when they sign the paperwork, and who carry the qualifications, certifications and often indemnity insurance that satisfy society they understand the risks. For the vast majority of software/websites discussed here, the equivalent is reasonably likely to be a 22-year-old college dropout aiming to be "the next Zuck".

Even in small and medium enterprise-sized businesses, those roles are largely thrust upon whichever developer seems to be good (and doesn't duck their head quickly enough). And if the shit hits the fan, they say "sorry boss, it seemed like the right answer at the time" (and hopefully don't get hung out in the press like it seems Milner has been…).

(And as for the "big consulting companies" mentioned in the post I'm responding to: in my limited experience they often seem to want to make all the architectural/engineering/policy/ethical/legal decisions, then leave with their paycheck before the "codemonkeys" implement it all, and not be contactable when their "solutions" turn out to be incomplete/contradictory/impossible.)
I _hope_ government regulation of "software construction" isn't the answer (at least not for software that'll just cost investors money when it fails, as opposed to bridges or airliners that'll kill people), but I think lines of responsibility and authority need to be more explicitly identified in many software projects, with appropriate authority conferred on the people burdened with the responsibility. Holding developers to deadlines without giving them the authority to adjudicate on them or be involved in the determination of them, is a startlingly common way to have your developers cut corners - and worse, feel entirely justified in cutting corners and convincing themselves they're "doing the right thing".
You could probably regulate "software construction" and mandate specific methodologies, but we can see already what the result would be: just look at the SEI CMM Level 5 Certified software development teams that already exist: Wipro, R Systems, and so on — technically inept companies that only exist to rip off clients who don't know any better.
Any country that mandates that kind of development for all software will be rapidly left behind by the countries that don't as software becomes an increasingly important part of the 21st-century economy. They'll still have human beings laying their literal bricks and tarring their literal roofs, while the rest of us are living in robot-built houses full of fountains and sculptures, or dynamically-reconfigurable programmable houses.
Those days are gone too. In fact currently you just have to know how to press ctrl+space at the right time and wait for intellisense to do the magic. Nearly 99.99% of the Java world works like that.
In other words, these days we learn how to learn tools that write programs.
"Stuff that is made by hand is hard to make, and even more hard to make well, and tends to be less sturdy than things made by machines."
The last portion of this statement is inaccurate. Hand tools (for example) benefit in terms of usability, durability and quality when hand-made. This is why top end cutlery, wood carving chisels, etc. are typically forged by hand. This also typically applies to furniture.
"How often do you think two plumbers argue over the right way to plumb a bathroom? Almost never!"
Also inaccurate. Clearly the author has never worked in the trades.
"Finally, can you think of any job where people are making really complex things by hand, and which requires a ton of experience and training to be good at, yet everyone and their uncle has an opinion on how long a project should take to get done?"
Pick any form of construction known to man. Budget overruns, issues with building plans, and problems with materials crop up constantly, even with modern building materials and best practices.
This is the same way I feel when programmers complain about writing "just another CRUD app." CRUD apps are in fact very difficult to write because of usability concerns. If all of your "create" screens look exactly the same no matter what is being created, that means you are making no effort as a UI designer to anticipate common creation patterns. Even assuming a cookie-cutter UI, a CRUD programmer has to properly model the concepts in data, which is not trivial either.
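Even the "trivial" modeling half of a CRUD app involves real decisions: which fields are required, which must be unique, which updates are legal. A bare-bones in-memory sketch (the entity and field names here are made up for illustration; a real app would back this with a database):

```python
import itertools

class UserStore:
    """Minimal in-memory CRUD store; 'email' is treated as a unique key."""

    def __init__(self):
        self._rows = {}
        self._ids = itertools.count(1)

    def create(self, name, email):
        # Uniqueness of email is a modeling decision, not boilerplate.
        if any(r["email"] == email for r in self._rows.values()):
            raise ValueError("email already taken")
        uid = next(self._ids)
        self._rows[uid] = {"name": name, "email": email}
        return uid

    def read(self, uid):
        return self._rows[uid]

    def update(self, uid, **fields):
        # Deciding which fields may change is another modeling decision.
        unknown = set(fields) - {"name", "email"}
        if unknown:
            raise ValueError(f"no such fields: {unknown}")
        self._rows[uid].update(fields)

    def delete(self, uid):
        del self._rows[uid]

store = UserStore()
uid = store.create("Ada", "ada@example.com")
store.update(uid, name="Ada L.")
print(store.read(uid)["name"])  # Ada L.
```

Every `raise` above encodes a judgment call about the domain, which is exactly the non-trivial part the comment is pointing at.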
The world of software development needs to be divided in two -- the component creators, and the component assemblers.
To stick with the given analogy, component creators are the people inventing new kinds of plumbing: easier ways of connecting pipes, taps that don't ever drip. Component assemblers are the people fitting out bathrooms.
Creating new components is high-end engineering. It needs to happen far away from the day-to-day challenges of making a client happy.
To a large extent, this division is already present, but it doesn't go far enough. There has been amazing progress - nowadays we work on top of an incredible stack of technology that we don't have to re-invent, but so far we've not achieved the "last mile".
We'll know when we're there because component assembly (i.e. making something for a client) will start to look more like a trade.
Today, just getting a regular been-done-a-million-times-before database-driven website (or whatever) requires FAR too much low-level code. This, I think, is what is meant here by "hand made". We should be snapping things together, and often we are, but suddenly you get to an awkward bit and you're back to forging a new kind of pipe joint that never existed before.
Inevitably. Which means you'll have to resort to "real programming" (as opposed to component assembly) for /that part/ of your project. I don't see this as a fundamental reason why component-assembly can never become viable. As we get better at creating flexible components, these situations will get less common, but they will never go away.
> you'll run into a requirement that's simpler to implement directly than it is to write the glue code for all the components you could use to solve it
100% this. For me this pretty much sums up why component assembly isn't viable today. For all but the simplest components this turns out to be the case (e.g. a date-picker, a file-uploader, or maybe something a bit bigger with /very/ fixed requirements, like a Disqus comment trail). But jumping from this to "I don't think it's ever going to happen" is overly pessimistic. The glue code is too hard to write? We need a better way to write glue code. That could be a fundamentally different type of language, or a fundamentally different conception of what we mean by "component".
(Aside: In my foolishness I am working on such things).
It's not just that you'll run into one unique requirement; you might run into a unique combination of requirements, each of which already has proven solutions, but with no good way to glue it all together. That's the reason C programmers still sometimes write their own string handling or memory allocation code despite that stuff being literally in the standard library.
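The memory-allocation case is a good illustration of the "unique combination of requirements" point: a general-purpose malloc cannot know that a program allocates thousands of small objects and frees them all at the same moment, so C programmers write arena allocators. A toy sketch of the idea (in Python for brevity; real arenas are of course written in C, and this models only the carve-and-rewind behavior):

```python
class Arena:
    """Toy arena allocator: carve slices off one buffer, free all at once.

    It gives up general-purpose per-object free() in exchange for trivially
    fast allocation and O(1) bulk deallocation - a trade-off no standard
    library allocator can make for you."""

    def __init__(self, size):
        self._buf = bytearray(size)
        self._offset = 0

    def alloc(self, n):
        """Hand out the next n bytes, or fail if the arena is exhausted."""
        if self._offset + n > len(self._buf):
            raise MemoryError("arena exhausted")
        start = self._offset
        self._offset += n
        return memoryview(self._buf)[start:start + n]

    def reset(self):
        """'Free' every allocation at once by rewinding the offset."""
        self._offset = 0

arena = Arena(64)
a = arena.alloc(16)
b = arena.alloc(16)
arena.reset()  # both allocations gone in one step; no per-object free
```

The requirements (small objects, shared lifetime) each have standard-library answers individually, but their combination is what makes the custom allocator worth writing.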
Writing general purpose software components is hard. If you're creating a product, you know what kind of component you need, and you don't really care about anyone else. If you're making a general purpose component, you have almost literally no possible way of even comprehending, much less fulfilling, all the requirements of every product that could potentially use your component.
You're always going to notice a difference between something that's been cobbled together out of spare parts and something that's been designed to fit an integral product vision. There's a reason we've been hearing about reusing program components for literally decades. I'm sure some chunk of the problem will be broken off and solved, some kind of standard solution to the CRUD app or something, but there are still going to be products out there that need real engineering, not just component assembly.
I know hard-core CS types will say that making programming easy for the masses will mean it won't be fast, it won't allow for the optimal algorithms. However, computing speed and bandwidth are rising exponentially, and countless applications for software do not need to be as optimal as possible.
In the future people who are not programmers but who are one step removed (e.g. okay with spreadsheets, SQL, some basic scripting when needed, visual programming like Labview, and who understand software architecture) will need to be able to do more complex things with computers, things that today only programmers can do.
We need to see the separation between programmer and the technical creative masses disappear a bit. Or at least as tablatom says, there has to be room for two types: those who build the tools for easy programming and those who do easy programming.
I would say the main idea is to shift focus from control-flow to how data ("information packets") flow through a network of (black box) components and only use traditional control-flow style programming for the most simple, atomic components. The book presents some convincing examples from business programming but I think the idea should work very well for other areas than text processing, certainly for image manipulation or sound processing.
The split between component creators and component assemblers (application programmers) is highlighted in the book. I hate how often I tend to slip from one role to the other in regular programming; maybe such an explicit split would help a lot (well, or certainly a lot of experience will ...)
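The flow-based idea described above can be sketched with plain Python generators. This is only an illustrative toy, not the book's actual notation; the component names are invented. Each component is a black box consuming and emitting "information packets", and traditional control flow lives only inside the atomic components, while the assembler just wires them together:

```python
def source(lines):
    """Atomic component: emit one packet per input line."""
    for line in lines:
        yield line

def non_empty(packets):
    """Atomic component: filter out blank packets."""
    for p in packets:
        if p.strip():
            yield p

def uppercase(packets):
    """Atomic component: transform each packet independently."""
    for p in packets:
        yield p.upper()

def assemble(*components):
    """The assembler role: wire black boxes into a network
    without writing any control flow of its own."""
    def network(data):
        stream = data
        for component in components:
            stream = component(stream)
        return stream
    return network

pipeline = assemble(source, non_empty, uppercase)
print(list(pipeline(["hello", "", "world"])))  # ['HELLO', 'WORLD']
```

The assembler never branches or loops over packets itself; swapping `uppercase` for another component changes the network without touching any component's internals.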
The division you propose already exists.
The "component creators" are called "programmers".
The "component assemblers" are called "users".
That's why you can use, say, Google Docs Forms to slap together a signup form in ten minutes that would have taken you a couple of hours in PHP with MySQL in 2000.
The thing is, the rest of the world is also in a state of flux. Plumbers changed materials quite a few times in the past decade, which changed the fixtures a bit. So the concept is the same (i.e. design patterns) but the actual material and its characteristics changed (i.e. frameworks, programming languages), and there are different ways to put them together: welding, glue, etc. (i.e. APIs).
But changing some material, fixtures and binding material doesn't even compare to the kind of flexibility we have with software!
The solution for plumbers is that they always use what's current. Which makes their field as much of a fashion-driven field as ours (think about it).
But give them an existing house with multi-layer pipes and tell them to fix it and they will also comment that it's old tech and you should switch to copper or whatever.
In conclusion, I don't see how software is any different than other fields except that we are much more flexible and go with a much greater speed.
Software is at its infancy because science in general is at its infancy.
The fact I can push a software update to a million users in minutes makes him green with envy.
So yes software is hard... but cost free replication is an enormous upside.
Not that I'm complaining, just giving some perspective. Maybe this will make your dad less jealous :)
I don't think this guy has ever been to a construction site. The people who work for me argue about stuff like this all the goddamn time. About the most minute details like making a connection from a wall point to the drain going first straight then left, or first left then straight, and a million other things that are completely irrelevant in the grand scheme of things. So just like software. They also come up with the same justifications for doing it one way or another - 'oh but it'll be easier later on' (programming language nerd wars), 'this way saves work', 'this way is more robust'. Get 10 of them together and you get 11 opinions. And you know what - in the end, the best ones are the ones who don't come up with different solutions every single time, but who just get stuff done, good enough, in time and within spec. Just like software.
This sounds like someone who uses dynamic/scripting languages to get the job done, like Perl. Especially 'get stuff done', 'different solutions every single time' (TIMTOWTDI), and 'good enough' seem to perfectly match that definition.
I've always felt that comparing software with architecture, building a car etc. is what people naturally reach for since it's something they're familiar with in the physical world and much of the same terminology is used.
I don't think that software is at that point, though. Physical things "play nice" together because they are part of the physical world. Materials have characteristics that are inherent and don't need to be conceived of, whereas the way things behave and interact in software needs to be defined and constructed entirely by humans.
I guess one could argue that happens to a certain extent in the world of materials science but ... I don't really think it holds up.
I arrived at a point where I started to think of building software as writing a novel. Once you start to use that as an analogy, it doesn't seem so weird that it's hard and takes ages, because so does writing a novel.
In a novel, one must define the entire world, and one can make the choice of using/re-using story lines or doing something original. People that trot out formulaic drivel make better money than the tortured geniuses on average, but the few tortured geniuses that manage to hit, hit it big and serve as an example to all the others.
I kind of stopped thinking about it at this point and got back to work, but I think that using that analogy, things really start to make more sense ... what do you reckon?
"Once you've written a subroutine, you can call it as often as you want. This means that almost everything we do as software developers is something that has never been done before. This is very different than what construction workers do. Herman the Handyman, who just installed a tile floor for me, has probably installed hundreds of tile floors. He has to keep installing tile floors again and again as long as new tile floors are needed. We in the software industry would have long since written a Tile Floor Template Library (TFTL) and generating new tile floors would be trivial."
it's late where i am, so i'm gonna go to bed. hopefully the site will survive until morning.
After all, you're really just describing to a machine the idea(s), design, or algorithm(s) you need it to implement. And we have languages, tools, processes, practices (etc.) to make this task easier. Yes, it can get frustrating at times, but a lot of other professions can be just as frustrating. By the way, if you think building websites is hard, you should try writing code to run on some electro-mechanical system, like a robot.
1) Software is in its early days (in the grand scheme of things). Though things may seem "hand-made" now, it won't always be the case. In fact I'd say OO, design patterns, frameworks, cloud technologies and more are the early equivalents of regulations, well-accepted standards, etc.
2) What other profession is as egalitarian? What other profession has as much upside? Entire industries can be changed by a few smart people. That kind of opportunity makes the "hassle" worth it.
3) The hassle of having stakeholders/customers constantly wanting things faster and cheaper can easily be mitigated by being picky about what company you work for, its business model, and its culture. Not all software companies face that issue.
* While there are certain constraints a house has to satisfy, architects do have freedom to be creative. They MUST plan such that it will resist certain standard incidents, and they have to plan it such that the floors can carry ~10x the weight you would ever expect there. These regulations vary depending on the region you live in. E.g., in Austria there is the possibility that meters of snow lie on roofs, and architects (or their structural designers) have to keep that in mind when calculating the max weight. You won't find this constraint in Portugal.
* Keeping to these constraints does not guarantee a building won't break. Constraints (like requirements) are prone to grow outdated, or be incomplete. In Austria, roofs will sometimes break down when it snows for some days in a row (4m of snow are very heavy).
* As incidents in e.g. Turkey and China show, plumbers or other construction workers can do such a bad job ("botching") that buildings just break down after some years. Governmental constraints don't assure quality. Some places, like Dubai, won't let local workers unsupervised near any expensive building for this exact reason - most of the time, German and other Western workforces are hired to oversee the construction process.
I think one could compare governmental constraints on construction work with requirements. A building's floor must be able to carry a weight of 100 tons, and a software system should be able to serve 1000 requests/second - I think that's comparable, and software constraints might even be easier to test.
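The point that software constraints might be easier to test can be made concrete: a throughput requirement is just an automated assertion. A minimal sketch, where the handler, the `measured_throughput` helper, and the 1000 req/s figure are all invented for illustration:

```python
import time

def handle_request(payload):
    """Stand-in for a real request handler."""
    return payload[::-1]

def measured_throughput(handler, n=10000):
    """Run the handler n times and return requests per second."""
    start = time.perf_counter()
    for i in range(n):
        handler("request-%d" % i)
    elapsed = time.perf_counter() - start
    return n / elapsed

# The non-functional requirement, expressed as an executable check
# (a building code inspector for software, so to speak):
REQUIRED_RPS = 1000
assert measured_throughput(handle_request) >= REQUIRED_RPS
```

Unlike a floor's load rating, this check can run on every commit, which is one sense in which software constraints are cheaper to verify.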
What I think this really boils down to is that software requirements evolve much more quickly, and due to this it's not feasible to establish legal requirements - government would have to issue new requirements every week, adjusting them for technologies like nodejs. Also, how do you establish best practices for bleeding-edge technologies within weeks? Not at all. We are in a constant state of learning and experimenting.
Plus, in the history of the world, he said, is there one thing you can think of that has been hand-made, and on such a large scale as software, that was as complex? [I cannot
There are probably plenty of commercial composers who write music mostly because it’s a job they can do, and they derive a modicum of satisfaction from it. But the composers who have made history — just like the programmers who have made history — do it because they must. If they give it up, they cease to do what they were made to do.
Problem is, the two aren't really comparable.
A while ago someone wrote an article that we are Software Gardeners not engineers. And I think that comparison is much more apt.
Software is malleable; it can change, and the layers of complexity build up so high that it's impossible for any one person, or team of persons, to fully understand. Even the mathematical theory of software isn't fully understood yet, and we are piling those little gaps in understanding onto every layer of the architecture ... they add up and things become monstrous.
BUT! This is a good thing. Software isn't a physical object. I want to be able to change the specs half way through a project rather than having to wait another 50 years to design a better bridge. This makes progress quicker.
As always, when progress is quick (and easy), it is also messy.
For example: it took builders thousands of years to invent the arch and truly revolutionize the industry. It took software only a couple of decades to go from machine code to Python.
Oh, and never forget: the Turing machine was "invented" in 1936. Our lives revolve around an industry whose theoretical principles were first defined less than 80 years ago.
It is a testament to the innovation in any field that its tools become outdated so quickly. Easy commoditisation is directly related to the tools one has at one's disposal. Creative fields like photography or movie-making face the same issues.
Interestingly, if we are given a choice to constantly upgrade our tools, we will always prefer to. For example: I don't find most people using the corkscrew on a Swiss Army knife; I would rather have a lighter multi-tool. But redesigning, testing, and manufacturing a new multi-tool takes a long time.
Software has, relatively speaking, a much shorter lifecycle. Most clients recognise this and sometimes have higher expectations of the turnaround times. This leads to unrealistically short timelines or bug-ridden software.
As in any uncertain venture, it is often better to go with iterative and agile methodologies rather than a big bang approach.
The work of a financial analyst is also entirely manual by the way.
EDIT: I'm sure the db connection will come back up, but this error is the root of this discussion, in my opinion. No one wants to be yelled at, so to prevent this, we revert to what "just works" and your idea of that concept is different than mine, thus the more we can agree on standards the better off we will be for the general use case.
As an architect... easier said than done.
I started a rails project very meticulously. I had recently read through the 'Agile' book which closely tracks the development of the framework. I had a decent idea of what I was going to build and how.
It would be a tight ship, and code would be kept clean from start to finish. Before even starting the project, I invested a ridiculous amount of time and effort learning all the tools I would be using, evaluating every case where I had a choice between popular and well-supported tools. For each tool I chose, I thought about why using it was a good idea and how exactly it would fit into the workflow. I was determined that every commit would keep tests and documentation up to date with any code changes in that same commit.
All that is to say that I came into the project with carefully considered, but fairly rigid, opinions on most aspects of how to do the project. Perhaps not surprisingly, the other developers on the project did not share these opinions. They did things that ran against my idea of how to keep a project organized. Perhaps I also did things which ran against theirs.
Eventually, the state of disorganization reached a point that felt to me like letting go of a piece of furniture in a pitch-black room. I no longer had a sense of how the whole thing was put together. I no longer knew what all the 3rd party libraries we were using did, or why they were included in the project. I had planned to keep all libraries up to date but, for various reasons, we were falling behind in that regard and various roadblocks stood in the way of bringing things current. We had started using esoteric features of the database I didn't know very much about. I would do a deep dive of research to try and catch up my understanding, and then I'd fall farther behind in keeping my finger on the pulse of changes in the project. Soon, the codebase had dependencies on various servers in our organization that prevented me from simply running my own isolated instance of it.
I don't know a solution to this, yet. Right now I'm still trying to feel my way through that dark room.
My experience is that only a fraction of developers are interested in keeping things clean and meticulous. The rest are more interested in cranking out features. Both are needed. In my experience, a ratio of 1 "meticulous" to 6 "git-r-done" programmers works fine if you're using collective code ownership & pairing.
(The real trick is getting the rest of the team to agree to it.)
If I get a new feature working, but I do so at the cost of adding more technical debt to the project, I don't feel fulfilled at all; I feel I've done a net disservice to the world, and especially to whoever comes onto the project after me.
Being the type of developer interested in keeping things clean and meticulous, while being pressured to always sacrifice quality for speed, seems like a recipe for stress and burnout.
I follow the boy scout rule: the code isn't done until it's at least a tiny bit cleaner than we found it. I'll generally spend about one hour out of every four on cleanup (and not budget for it separately). I also focus my cleanup efforts on what I'm directly working on, and what's causing me the most grief today. This lets me keep my code clean and make the parts of the system that I use the most gradually improve.
1. When things don't line up like the plans in an analog project, you can just "line them up" and the original intent isn't seriously disturbed. That doesn't work in a hard-logic digital world.
2. That said, "real" projects still can come together more quickly because they are better about using interfaces and being loosely coupled than we (software developers) are. You can slightly move joists, light switches, etc, because the interfaces of how they interact with other components are better defined, and the glue code (ie wires, nails, cuts) can be trivially adjusted to make things fit and still get the desired end outputs.
#2 is partly why software "integration" projects can be hacky as hell and still work, they're just glue code for clearly defined interfaces. If we designed our internal applications with such simplicity and clearly-defined interfaces and wired them together rather than coupling them, it'd be more predictable.
Unfortunately that's really tough to do for software inventions, as there are so many new things where the interfaces might not even be able to be clearly defined.
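The point about clearly defined interfaces and trivially adjustable glue could be sketched like this. All the names here (`PaymentProvider`, `LegacyBilling`, `checkout`) are invented for illustration; the idea is just that the glue only touches the agreed-upon interface, like wires and nails touching standard fittings:

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The 'joist dimensions': a small, agreed-upon interface."""
    def charge(self, cents: int) -> str: ...

class LegacyBilling:
    """One vendor's component; its internals don't matter."""
    def charge(self, cents: int) -> str:
        return "legacy-receipt-%d" % cents

class NewBilling:
    """A replacement component exposing the same interface."""
    def charge(self, cents: int) -> str:
        return "new-receipt-%d" % cents

def checkout(provider: PaymentProvider, cents: int) -> str:
    """Glue code: it can be trivially adjusted or rewired
    because it only depends on the interface, not the internals."""
    return provider.charge(cents)

print(checkout(LegacyBilling(), 500))  # legacy-receipt-500
print(checkout(NewBilling(), 500))     # new-receipt-500
```

Swapping components is the software equivalent of moving a light switch a few inches: the glue absorbs the change, and nothing upstream notices.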
Construction would look a lot more like software if you ordered wood but when the stack of wood showed up it was metal rods instead, or if it came pre-molded to the shape of another house.
There are just too many variables in de novo software. I used to have this debate with my old boss all the time. You either live with the reality and create new, valuable stuff, or if you want to build bridges, go write glue code for legacy systems.
As in the construction of any artifact by human means, there will be bumps in the road. No amount of technology we can use today will solve that issue. It's a matter of communication and flexibility. One might speculate that great buildings such as the Eiffel Tower or the Empire State Building weren't so perfectly specified as to allow the construction workers to simply "put the pieces together." Problems certainly arise and must be dealt with accordingly.
Software is no different. We still write programs that guide fighter jets to land on a tiny boat in the middle of the ocean. We've written programs that drive robots across the Martian surface. And we also write programs that share our inane thoughts and activities with our friends, family, and random interested strangers. The degrees of failure are still the same and for the most part assessed accordingly: while Twitter will strive for 100% uptime, there is little money or life lost when it does go down, whereas when a Martian robot plummets into the side of the planet it's a big loss (and it has happened). Mistakes are made in engineering. Unanticipated side effects arise. And we do our best to deal with them as well as we can.
What distinguishes software from contemporary engineering? A couple hundred years of development at the very least. I'd say we've come a long way and have some ways to go. But if you stick with it you might find it.
The reality is that these things get complex and intricate because that is what the system demands. I can make a website in <1 minute. It's not hard. However, it won't stand out amongst the sea of sites out there, which kind of defeats the purpose. This is why tools that facilitate the process, while improving the quality of the final product, don't actually reduce the time it takes to build a site, because improving efficiency just raises the bar higher about what a quality site needs to be.
The app logic itself can be written quickly, but we spend so much time on micro-integration of libraries, or writing them ourselves, that we end up wasting a lot of time on things that aren't solving the problem, just helping to deliver it.
How much R&D is involved in your project?
It doesn't matter if it is a software project or a physical project. If the percentage of unknown terrain is high, the project will become complex.
When a very progressive and innovative bridge/house needs to be built with new materials and an unknown construction method, the project will be complex and you can't design it (completely) upfront.
For example, if your software project is just a CRUD application (without complex business logic), a qualified and experienced software developer can estimate it and deliver it on time. (There are a lot of frameworks and tools already available for this kind of software, so nobody needs to reinvent the wheel.)
If the wheel hasn't been invented yet for your problem, then yes, it can take a while, and your luck is in the hands of the gods.
The other issue is that there are a lot of developers who can't get anything working. But incompetent people are everywhere.
Yet. Robots will get a lot better this decade.
But with software development we are in charge of the computer. When there's a problem with the computer, there's no ability to shrug it off or make excuses - our entire purpose on a project is to make the computer do stuff. Oh, no doubt we can (and do!) make excuses - the compiler versions are wrong, the tool doesn't work with this other platform, etc. - but no one outside our field can even understand those problems, nor do they want to. It's just our fault, and we have to fix it.
That's also the reason it's so easy for people to justify pirating software: it's something insubstantial. "How can it be so expensive? How can it cost more than the computer I'm using it on? I can see the computer!" is what I hear the most...
And it gets even funnier when you consider that most coders and pro users can deal with half-baked apps and find workarounds for bugs and glitches, but the average end user - the kind who barely appreciates good-quality software or how much it costs to make - is the first to complain when something doesn't work perfectly, even if it actually does but they're too dumb to know how to use it.
It just takes time to make an entire industry work smoothly and software engineering is still a young discipline. When I look round at the innovation going on lately - the rise of alternate JVM languages, increased expressiveness in languages like Ruby, frameworks like Rails, Django, Bootstrap, Less & SASS, compile-to-JS projects like CoffeeScript & ClojureScript, renewed interest in visual programming ideas like LightTable and many more, it seems we've put our foot on the gas again. I'm pretty optimistic the programming tools of tomorrow will fix the day-to-day annoyances, leaving us more time to concentrate on the core problems.
A classic opinion piece on why building software will always be hard is Fred Brooks’ “No Silver Bullet – Essence and Accident in Software Engineering” — there’s a version of this available at:
And if you don’t agree, one of its rebuttals, “What if there’s a Silver Bullet … and the Competition Gets it First?”, by Brad Cox — with a version available at:
…even gets into the idea of an “industrial revolution” for software.
Good classic reading… 8-)
I also wonder whether teaching practitioners to diagnose and identify what type of project one is dealing with might avoid blind spots when estimating projects. For instance, an LDAP project should be identified as an integration project, where one might have to deal with unexpected schema.
The real problem with software projects at the moment is we lack a proper framework to think about the nature of each project and therefore don't understand where the risks are, and properly advise clients.
For a software app, if it's well written, the end product builds itself. The cost of this last step is virtually nothing. Since the design can be used repeatedly, the software development costs can be spread over a lot of end users. For example, take Excel. A "spreadsheet machine" is pretty useful, and it's complicated. You could compare a "spreadsheet machine" to a house (or maybe something like a tractor). Aside from tweaking, the software is done; the actual machine (a PC) is a commodity. It's a done deal: anyone who wants a "spreadsheet machine" can get one relatively cheaply. But now, since the machines are out there, along with a huge amount of cheaply distributed software (the OS, languages, compilers), everyone is tempted to make something new. How about a multimedia playback machine? Brand-new things that haven't been made before. Even though there are a bunch of general-purpose tools available, it is the "newness" of the app that makes it hard to predict how long it will take to code. New problems don't have existing solutions, so no one can predict how long it will take to find them.
Also, since the actual machine is super complex, which is necessary for such a general-purpose device, there is looseness in the designs, leading to many different ways of doing the same thing and inevitable bugs. There are bugs in the OS, so you are going to have bugs in your app. Think about how much more complex a PC playing back a DVD is compared to a DVD player.
As long as software is considered a craft, it will always be an expensive, laborious, and frustrating process for clients and customers. That may be great for software developers' egos and pocketbooks, but does little to improve the image of software development projects among the general public.
> meta name="generator" content="WordPress 3.0.1"
Was it hard to create this blog entry? Obviously he has a point but, in general, the abundance of predefined frameworks and libraries makes it easier to create "web sites and software". Fortunately for us (developers), at the same time user demands grow higher and higher. Web sites 'need' to become more dynamic, styled, responsive, ... and simple CRUD 'needs' to turn into a 3-tier distributed middleware architecture.
tl;dr software complexity is a constant. Progress in easing development is compensated by increasing demands.
In my professional experience (I started programming 20 years ago and worked on my first 'enterprise' Java project 12 or so years ago), software complexity has become much greater. If you look at just the different areas you need to be proficient in now to build 'enterprise' software compared to 20 years ago, it's insane.
What I think has happened is that experienced developers have constantly had to keep up with learning to account for this. I don't think complexity has stayed constant - we have expanded our skill set and knowledge to keep up.
I'm not sure if your memory serves you well. Anyway, the human mind is limited with respect to complexity; it cannot grow infinitely. Also, complexity shifted from languages (e.g. C++) to frameworks and tools (e.g. JEE).
A lot of the work involved pipe welding. Pipe welding pays extremely well (for a trade) and that's because it requires such consistently high performance. As a pipe welder, you have to stamp your name into every weld you do. If your weld fails, it could kill people.
They check pipe welds using an X-ray. If a guy's weld failed the X-ray more than once (and usually more than zero times), he was out of a job.
Also, the whole post is a big subscribe link for me when NoScript is enabled (pretty smart, actually, if it's not a bug).
Hand made items - essentially think back to before the industrial age. Most things were hand made. Art, tools, food etc
Now we have machines to make things easier. Or we have off shore labour.
If it is a precise trade, well, sorry, but you still need to struggle by hand. You should therefore be charging a premium to do it.
If you have an aversion to using Google cache and would like to see a clean version.
I think these meatheads are complaining because it's work...
1. A language that doesn't require me to connect to an external system and use a new brain-dead language to do such a basic thing as persisting and efficiently accessing data. How do you declare a persistent array of objects that you will want to access by some of their properties in your favourite language, without keeping the whole thing in RAM? I'd love to have my data stored in XML if I could query it without loading the whole file into memory, and if I could hint which XPaths I'll be accessing in the future so it can create some indexes to make those accesses faster.
3. A tool that could present me with a graphical view of what components my webapp consists of and how they are connected. Something like database diagrams, but for living, breathing components. It's really tiring to make sense of what your app actually does just by looking through a small hole the size of letters in the debugger and stack trace.
4. I'd like to see, while inspecting a method, all the places in my code where this method is definitely called from, and all the places it might be called from (because of inheritance, function pointers, dynamic calls, etc.). A function call is the connection between components. It's really nasty that when looking at the source point of the connection you can see the destination, but when looking at the destination you have no idea what the source points are.
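Item 1 in the wish list above is partly achievable today without a separate server, at least in Python: the standard-library sqlite3 module gives you persistent, property-queryable data where only matching rows are loaded into memory, and the "hint about which properties I'll access" becomes an index. The table and data here are invented for illustration:

```python
import sqlite3

# A "persistent array of objects" queryable by property without
# holding everything in RAM.
db = sqlite3.connect(":memory:")  # use a file path for real persistence
db.execute("CREATE TABLE items (name TEXT, price INTEGER)")

# The access-pattern hint: an index on the property we'll query by.
db.execute("CREATE INDEX idx_items_price ON items (price)")

db.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [("hammer", 12), ("saw", 25), ("nail", 1)],
)

# Query by property; sqlite walks the index instead of loading
# the whole collection into memory.
cheap = db.execute(
    "SELECT name FROM items WHERE price < ? ORDER BY name", (15,)
).fetchall()
print(cheap)  # [('hammer',), ('nail',)]
```

It's still a second language (SQL) rather than the native one the commenter wants, so the complaint stands; this just shows how close the standard library already gets.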
Heck, in my day job I even have to fight to learn my damn goal. Which is never "to code" something; that's always only the means to an end. But still, every coworker and manager states that as the ultimate goal of an engineer in a software department.