"Careless" employees (niniane.blogspot.com)
192 points by esigler on Nov 7, 2013 | 77 comments



Systems, systems, systems. Spot on.

Just as in manufacturing, you cannot produce quality by blaming the individual worker. Japanese manufacturers learned this from W. Edwards Deming (http://en.wikipedia.org/wiki/W._Edwards_Deming) and it continues to be true to this day, but for some reason the natural human instinct is to blame the individuals instead of the systems.

Improve your individuals and you can improve your quality maybe twofold, threefold at best. You might chance upon a "rockstar," but probably not.

Worse, blame your individuals and you lose productivity, lose trust, lose culture, instill fear, and break ties. Negative reinforcement brings unpredictable negative consequences.

Improve your systems, your culture, your process, your communication, and everything surrounding the production of your product, and you can improve your quality tenfold or more, and more importantly, be better prepared for a 100x or 1000x growth.

Blame your systems and they can only get better.

I can't think of a time when it's incorrect to think from a systems-first perspective.


Agreed.

We're feeling a lot of pressure at my workplace, especially on a team of new hires that formed less than 90 days ago. We've barely had a chance to get a handle on this legacy codebase or form habits, never mind start cranking out new features right away.

I don't expect a factory to produce maximum widgets halfway through its own construction process.


I'm a huge fan of Deming, and many of the problems we see in software can be fixed by paying closer attention to his 14 principles.

One question I've had in applying him to software, though, is how you reconcile his concept of systemic defects (most defects stem from common causes) with the wide variance in programmer performance. In a factory or call center, worker performance is uniform enough to treat everything as a common cause, with the goal of reducing variation. In software you want the opposite: you want to unleash your 10x performers.

I'm interested in your views on this. Again, I'm a huge fan of Deming and have read and applied his ideas throughout my career, but this is a sticking point.


This is like the Grand Unification Theory of Deming for me. I've been thinking about it for years.

This is a classic question. It doesn't come up only in the software realm—in every company there are always high performers, and there is always resistance to treating their output as part of the system rather than focusing on them individually, rewarding them and seeking them out.

My thought is that, fundamentally, the relative performance of your workers is irrelevant to Deming's prescription for quality. Your job is to improve the output of the system as a whole, and by no small coincidence, improving all your processes is also what best enables each individual worker's output. Continual improvement of process, as it applies to the product, management, and worker alike, will yield higher gains than a focus on individual performance. This is the difference in kind—the Profound Knowledge Deming talked about.

If you think of it like that, it's more about human systems than it is about unleashing individual performance. Traditional management is about motivating the individual; Deming management is about recognizing that most of the output results from the system, and thus, improving the systems surrounding the individual.

The really fun part is how you look at performance. Firstly, you have to use statistics (after all, this is Deming we're talking about). Recognize that the performance of a group of humans is a sample, and as in most natural distributions it will resemble a bell curve. You will always have outliers: mythical people who seem to outperform all others (most people focus all their attention on these). But most of the group will be somewhere within 2 standard deviations of the average—AKA, normal. You will always have a handful of low-performers, too. This appears to match reality in my experience at several companies, as I'm sure it does for you.
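
To make the statistics concrete, here is a minimal sketch (purely illustrative; the scores, spread, and team size are invented) of how a roughly normal sample of "performance" breaks down:

    import random
    import statistics

    # Hypothetical performance scores for 50 engineers; the numbers are
    # invented, only the shape of the distribution matters here.
    random.seed(1)
    scores = [random.gauss(100, 15) for _ in range(50)]

    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)

    within_2sd = sum(1 for s in scores if abs(s - mean) <= 2 * sd)
    outliers = sum(1 for s in scores if abs(s - mean) > 2 * sd)

    # For a roughly normal distribution, ~95% of the sample lands within
    # two standard deviations; only a handful of "10x-looking" outliers remain.
    print(f"{within_2sd}/50 within 2 standard deviations, {outliers} outliers")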

Now, keep in mind that Deming never talked about reducing variation in people. He talked about continual improvement, he talked about psychological effects of management on workers, he talked about enabling pride. His message was to understand variation in the workforce, not to abolish it. Reducing variation is the goal for the output (as it should be in software also!), but not for the employee. He never applied those numerical style goals to workers at all, and in fact decried them as inhuman, and more importantly, ineffective. See Principle #10: "Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity." In other words, recognize and understand statistical variation; don't demand a reduction in variation from individuals, improve the system to improve the output. Remember that: I'll come back to it.

To your second point, that software somehow lacks a "common cause" consistent enough to build upon: I want to correct this a bit. Deming's first principle, "Create constancy of purpose toward improvement of product and service, with the aim to become competitive, stay in business and to provide jobs," applies to absolutely every company imaginable. This constancy of purpose is not a constancy of performance or of variability, but a shared knowledge and goal that everyone strives for. It's an abstract and unmeasurable concept. This was Deming's only foray into "common cause," though it was very important.

Back to the individual versus the system. Improve the system to improve the output. If ever there was a sentence to sum up Deming, that's it.

Yet you think you want to "unleash your 10x performers." That this is somehow the goal in Software, or in any company for that matter. It's not. And here's why.

Your 10x performers are your outliers: the top end of the bell curve. The instinctual reaction when you get your hands on one or two is that you've gotta hire more people like this, and to reward and bonus and hold them up as pinnacles of achievement. After all, they perform better than everyone, they should be rewarded. And if we reward this kind of performance, we'll motivate other people to achieve more, right?

Wrong. This instinct is incorrect, and Deming based this (quite correct) conclusion on his extensive knowledge of psychology and sociology. Firstly, while you're focusing on one or two high performers, you lose the recognition that your output and your entire company is actually systematic. You might have 50 other programmers in the body of your bell curve, the lot of whom are still collectively producing 5x what your one 10x programmer supposedly outputs, and possibly more due to complex interrelated factors. Yet, by focusing on individual performance, you unintentionally demotivate the great majority of your employees, who are almost by statistical certainty "just average." The result of your efforts at improving and rewarding the performance of your 10x'er might be simply to keep them, or perhaps motivate them into being an 11x'er (big deal). You may motivate one or two people who happen to be motivated by money to improve their performance, but it probably won't last. The main outcome will be a slew of unintentional negative consequences throughout your company, including corporate politics, ladder-climbing, blame, mistrust, fear, and demotivation. Not good. (Side note: this is Corporate America in a nutshell, the result of a century of individually focused management. We've become complacent about it, sadly).

Deming would look at this in many ways. There would be laughter and enlightenment. There has to be a paradigm shift.

First, look at the performance as a statistical anomaly. It will sometimes happen, quite by accident, that two programmers show random variation in performance month-to-month or project-to-project. These are not unexpected occurrences. Sometimes the right state of mind aligns with the right motivation and personal circumstances to produce extraordinary performance. It has happened to all of us at times, surely. By looking at it this way, you get many realizations: that circumstances impact performance; that individual variation will occur naturally; that you might be rewarding an individual for performance which is an anomaly. This might sound like a small step, but failing to recognize it is at the root of many cultural problems and corporate politics, mistrust of management, power games, and infighting.

Second, how are you measuring performance? How do you determine 10x? Deming would scoff at the very idea. His 3rd "Deadly Disease" was "Evaluation by performance, merit rating, or annual review of performance." For similar reasons to the above, these imprecise methods cause more harm than good. They'll cause secrecy, fear, more infighting and game playing in an effort to reach the performance or review goal, instead of the true goal: quality work. Removing these false metrics reduces fear, improves culture, improves collaboration, and allows people to take pride in their work without ulterior motives. This unmeasurable benefit is more important than it sounds.

Third, and most importantly, Deming would surely say: ignore the outliers. Focus on improving the system. Focus on the majority of your people, and the whole of your organization to improve quality of the final product. Implement improvements system-wide, remove rewards and quotas and individual expectations and replace them with leadership, training, continual improvement, and knowledge. Deming would tell you that most of the quality of your output is not tied to individual worker performance, but rather the systems of management, technology, and collective improvement that the company has implemented.

The point of this is to improve your entire organization: your 50 average "1x" programmers might double their performance, adding 50x worth of output, far more than the 10x you would gain by hiring one holy grail 10x programmer, or unleashing your existing one. That is a simplistic way to think about it, however. The actual improvement is much greater. By focusing on the company as a whole, and focusing on the needs and performance of all employees, you create something far greater: a cohesive culture driven by a desire to do good work as a whole, and produce quality output toward a clear purpose. That should be the holy grail, not "unleashing your 10x performers."

Having 10x performers is surely not a bad thing; they will occur, and you should attempt to hire and keep the best employees you can, of course. But the end of this story gets even better: high performers like one thing above all else, in my experience, and that is working in an environment in which they flourish, are able to take pride in their work, and are able to work with other high performers. By enabling all your programmers to perform at their best, you simultaneously enable your best performers to perform better and flourish as well, which is exactly what they're looking for. Focusing on the whole system does unleash your best performers; and everyone else too! It's counter-intuitive, but that's the lesson here.

The result is positive on all fronts, and not only that, but exponentially better than an individual-reward system due to the intertwined self-reinforcing effects of systematic improvements. Instead of having a handful of 10x workers, you instead build a 10x company which regularly nurtures its employees into becoming them. A 10x culture of performance.

This was the true genius of Deming's perception of the workplace, and of quality. This was never a sticking point for him; it was right there in his perspective if you look for it.

(Apologies for the length; I've been meaning to start a blog, and will sum this and other concepts up and submit to HN at some point...)


I am going to approach this a bit from the other side. And I'll make it personal, rather than asking a series of indirect questions.

More than once, I've ended up in a position where I've put considerable effort into fixing what are often frankly the shortcomings of other co-workers. Co-workers who sometimes may be observed to be very busy discussing their weekends, or the latest movie, etc.

I've fixed conditions that came about while the employees responsible continued to be rewarded, promoted, etc. -- in short, considered "acceptable".

I tried to do what I felt and what I had been taught was "the right thing".

IN HINDSIGHT: When you find yourself persistently in such conditions, when the problem is not a one-off, GET THE FUCK OUT. Unless you can very demonstrably take control of the situation -- of the conditions -- and steer it in a better direction, you are caught in a system that will chew you up at the least and most likely, sooner or later, spit you out.

As a relatively unempowered employee, the single solution to bad management and counter-productive compensation is to GET THE FUCK OUT.

Anything that prevents your mobility -- employer-provided health insurance, a non-liquid mortgage (I won't, I refuse to, add "a family" to this list) -- becomes an anti-pattern.

One perspective on what is wrong with U.S. society these days: So many people locked into anti-patterns.


The answer to your pains, and how it relates to the article and its insights, is that culture is a system.

In fact, I'd go so far as to say culture is the root of the organizational tree. It's underground and most organizations sort of ignore it, or worse, treat it like it was a hypothetical illusory nuisance they have to lie about to attract rockstars. Ugh.

So culture is the root of the system. It defines how people work together, and how people and work are treated. Culture defines the unseen and unmeasurable motivations people rely upon, without which you get exactly the problems you describe: lack of common purpose, lack of knowledge of process, lack of improvement, infighting, game playing, reward seeking. These are all cultural problems.

I wholeheartedly agree that this is a sign of major problems in perspective in the US. The anti-pattern here is individualistic-dominated thinking, which doesn't accurately describe or solve the problems of an organization of more than one person. It's actually painful to watch corporate culture in this country if you have an understanding of systems design and process control, and especially if you apply it to the human systems of which we are all a part. Science has the answers, but no one cares. Painful.

Read up and spread the systems knowledge: http://en.wikipedia.org/wiki/W._Edwards_Deming


I think you're actually talking about a very different situation than the source.

The source is dealing with subordinates in each case while you're dealing with peers.

I've been in both positions.

In the case of co-workers whom you have little or no influence over, yeah "get the fuck out" is likely great advice.

In the case of subordinates, or any situation where you have the power/latitude to address things from the top down, it makes sense to address the processes in place.

That said, sometimes the process that needs addressing is the identification, swift firing and future avoidance of individual bullshitters and assholes.


That's a fair point. But I did say I was approaching this problem "from the other side". Admittedly, rather quickly and off-the-cuff, and personally.

I think more readers of this thread may be in the relatively "powerless" position, rather than the empowered position.

And, again from my perspective, I wish someone had made clear to me sooner how the world really works, today (and likely always). "Paying your dues". Earning respect. There are environments in which this works. But there are many in which it does not.

From the perspective of the OP, they've already made the point. But I might add a succinct note: GIGO -- garbage in, garbage out. The leadership that, on rereading, I assume they are addressing is stuck at GI.


>"And, again from my perspective, I wish someone had made clear to me sooner how the world really works, today (and likely always). "Paying your dues". Earning respect. There are environments in which this works. But there are many in which it does not."

Very true. This is why I think "get the fuck out" is often great advice. Keep moving, onward and upward. Sit still too long and you run the risk of getting run down and becoming what you hate without even realizing it.

It seems to me there are plenty of books which claim to teach you how to be a great leader, but not so many about how to manage up, lead from the rear and survive among hostile peers.

Personally, one of the better resources I've read for this was The 48 Laws of Power [1]. The book sometimes gets a bad rap from people who look at it as a manual for your own action. While it could certainly be applied that way, it's at least as useful for understanding the mechanism of others' actions and how to protect yourself against or benefit from them.

1: http://www.amazon.com/dp/0140280197/


This post hit home for me. I've personally implemented code review strategies which directly and immediately led to much improved code quality and generally better product. But management doesn't see code quality. They see deadlines. The insignificant time it takes for code review is the first thing that gets nixed by non-technical management even when the time required for bug fixes, last-minute changes due to their own indecisiveness, and slow development due to giving employees second-rate hardware far eclipse the marginal time it takes to make sure we're deploying halfway decent code to production. And then they bitch at the engineers that they're underpaying/overworking when shit stops working. But let's not hire more developers and improve salaries for the people we have. No, that's not what we need. What we need is more charismatic biz-dev bros with poly-sci degrees. Surely that will fix things! (/rant)


It seems like code reviews are partially for inspecting the product, and partially for teaching and inculcating cultural norms.

Are the managers who don't like this just non-technical? Or are they just not being presented the value in a clear enough way?


> partially for inspecting the product, and partially for teaching and inculcating cultural norms

Yes!

I would go further and say that - without discounting their value for catching bugs - the _largest_ benefits of doing code reviews are cultural rather than technical.


You pair with people you like and do code reviews with people you don't like (misquoted from someone way smarter than me).

With large teams, especially if distributed or partially outsourced, code reviews can ensure code quality. But they can also be a total bottleneck if over-bureaucratic, and some reviews will be of low quality due to lack of context, often combined with ivory-tower architects as well.

In smaller, agile, and especially collocated teams, code reviews will flag issues unnecessarily late in the process. Just pair from the start instead to ensure no shortcuts or dodgy code slip through, and to automatically spread the knowledge. If you do not trust two of your developers combined, then you do have a serious problem.

You can, though, additionally hold small, short swarming/tripling/quadrupling sessions in front of one computer to look at especially important issues.

If you do neither code reviews nor pairing then you are in trouble.


Can you expand on what you mean by this? What cultural benefits are you talking about?


One of the huge benefits I've found from code reviews is that my reviewer will say, "Actually, we've faced that problem before and there's a solid and proved solution in our utils that handles it already, along with a couple other cases. How about you change it to use that instead?"


Yes - building on this, I see 4 quick benefits:

1 - Suggestions on where problems have been solved before. (Your point)

2 - Having people say, "Here's the style we use to make this easier to support in the future"

3 - Mentoring on tougher problems, and turning quick hacks into elegant solutions.

4 - QA. (This is the stated benefit, but falls below the other 3)

The key is not to turn code reviews into a bottleneck. If you just view the purpose as any 1 of the 4, you're likely to under-prioritize or over-formalize it.


Additionally, just knowing that code reviews are going to happen often results in developers putting in more effort to submit quality code.


The word "non-technical" seems generous. How about newb or non-functioning or subhuman? Systems are great, but as you point out it's really the people that matter.


> The insignificant time it takes for code review is the first thing that gets nixed by non-technical management

You have to fight back against that shit. Testing, reviews, these things are part of the job. You're the expert, you tell them how long it takes to do things. Don't let them just hand you a deadline without saying something. Speak up!


I agree 100%, and am especially excited to see automated testing go from "impossible dream" (c. 1999) to "reasonable, broadly expected quality practice". It has been a long road.

However, there's one obvious problem that isn't mentioned: hiring mercenaries half-way around the world who have never met you, don't care about you, don't care about your product, and don't care about your audience.

I think it can be ok to do that sometimes, but it's idiocy to do that and expect it to work the same way as having a permanent employee who sits next to you and who will lose their job if the business fails.

Software developers, even the ones 8 time zones away, are actual human beings not coding robots with coin slots in their chests. If you are going to strip out all of the human connection and replace it with 3 milestone payments plus some spec documents, you can't expect them to care beyond what's necessary to cash the checks. (They might anyhow, out of a sense of professionalism, but you can't expect it.)

The only contracting or remote-team situations I've seen work even moderately well have done a lot to create real human connection.


Such a situation may enhance the issue the author addresses, but his point remains paramount: don't expect what you don't inspect. If anything, hiring "mercenaries half-way around the world" requires more of what he enumerates, which is the objectively practical form of, as you say, "do a lot to create real human connection".


Requiring unit tests is a great idea, and I am 100% behind using the techniques she describes, but it's not the real human connection I'm talking about.

One of the best distributed teams I know spends a week per month together despite the travel nightmare that entails. Another reasonably good remote project had the product manager spending 1-2 weeks every 6 weeks with the development team. Having developers participate in user tests is also great, as is finding some way for them or their friends to become actual users of the product.

If the developers don't give a shit about you or your users, you'll have to do a lot more inspecting than if they are personally fired up to make things work for people they care about.


Have you taken a look at this: http://37signals.com/remote/


Code review ranks just behind design review in value (cost/time savings). In fact code reviews are so beneficial that if I was working on a solo project I would either pay for them to be done or review the code myself after a suitable cooling off period, depending on what I was working on.

On the other hand, I have also witnessed sloppy, lazy code reviews that catch nothing except the occasional typo. This amounts to an unjustifiable waste of time. Fortunately, it is easy to tell a good code review from a bad one by tracking defect discovery and digging into review comments as needed.

One thing that code review catches that nothing else does is code that is poorly written but functional (i.e. passing tests).

The example I always trot out is

    for ( int i=0 ; i < this.MyControl.TabPages.Count ; i++ )
    {
       this.MyControl.TabPages.Remove ( this.MyControl.TabPages[i] );
       i--;
    }
This code works according to spec, passes all the tests, but is bordering on unmaintainable. At best it's a WTF.

(Written up here: http://cvmountain.com/2011/09/whats-wrong-with-this-code-rea...)


I find that one of the causes of wildly different levels of code review (and value derived from them) is a lack of training. There is a real lack of materials explaining how to do a code review, how to deal with the human aspect of giving feedback, and what is/is not valuable to talk about (arguing over tabs vs spaces should not happen in a code review). Most of my experiences have involved a trial-by-fire process - new engineers receive a few code reviews from more experienced people and that is your "training".


I agree - and your comment gives me an idea for a series of posts on this exact subject.


Great - would love to read :)

http://exercism.io/help/how-to-nitpick is a good resource as well.


That code snippet is a matryoshka doll of brainfuck.


The first thing I thought of is, even in a hypothetical world without a Clear() method or any means of adding one, why use a for loop instead of a while loop? And if you are set on a for loop, why not simply capture the initial count in a variable? It is bad on multiple levels.
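
For illustration, here is roughly what those alternatives look like, sketched in Python rather than the original C# (the collection and names are invented stand-ins for the TabPages example):

    pages = ["general", "network", "advanced"]

    # 1. A while loop that states the real intent: remove until nothing is left.
    while pages:
        pages.pop()

    # 2. If you insist on a counted loop, capture the initial count once
    #    instead of re-reading a shrinking collection and patching the index.
    pages = ["general", "network", "advanced"]
    for _ in range(len(pages)):
        pages.pop()

    # 3. Or, where the API offers it, say what you mean in a single call.
    pages = ["general", "network", "advanced"]
    pages.clear()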


    <list of dead items>
"I need to delete every item"

    for (i = 0; i < length; i++) {
        remove item[i];
    }
"Shit, that didn't work. Why isn't it deleting everything?

    for (i = 0; i < length; i++) {
        printf("%d", item[i]); 
        remove item[i];
    }
"Huh, it's removing every other item."

    for (i = 0; i < length; i++) {
        printf("%d, %d", item[i], i); 
        remove item[i];
    }
"Weird, it's removing an item, skipping an item, then removing the next one."

    for (i = 0; i < length; i++) {
        printf("%d, %d", item[i], i); 
        remove item[i];
        i--;
    }
"Well shit, that seemed to work."

    for (i = 0; i < length; i++) {
        remove item[i];
        i--;
    }
"Ship it."


Consulting has its challenges, but one of the most amazing perks is charging for value, not time. Once I demonstrate the value of the project to the client, and bill by the week, I no longer have to justify unit testing, continuous integration, code reviews, or any other productivity decision as a tech lead. I know these are the best ways to achieve consistent long-term productivity, and so that's what we do.

Learning how to sell results has not only made me more money, but a better technologist.


Yeah, code review is great...until you find out that some of your reviewers are rubber-stamping the commits from their favorites, and a large percentage of the rest are doing a sub-standard job of reviewing, and pretty much everyone is just barely finding the time to do the (decidedly un-fun) chore of reviewing code, instead of writing code. So you're back to the root cause of the problem: you have to hire good people.

Truly careless employees will (ironically) work hard to find ways around any system that you put in place to prevent carelessness. There are no magic bullets.


The solution is code meta-review! Just have reviewers review the quality of reviews until review quality is up.


Slashdot was the future.


"You get what you measure" - Tom Peters

Quality must be designed in, but people won't put in the effort if it's not valued.


Great read. This sentence sums it up best I think, "Why, why, why would people expect to get great results if they flaunt all the best-practices that have developed over the past 20 years?"


I don't think he's using the word "flaunt" correctly. It struck me as off -- sure enough, the definition agrees with me.

"Flaunt: to parade or display... conspicuously. The use of 'flaunt' to mean 'to ignore or treat with disdain' is strongly objected to by many usage guides.'"

http://dictionary.reference.com/browse/Flaunt


"Flout" is the word most likely intended.

flout (verb) 1. openly disregard (a rule, law or convention).


It also suggests the word that "flaunt" was likely confused for - "flout." So perhaps the teams are flaunting their ignorance by flouting industry best practices.


"She" not "He"


Quit blaming the individual and focus on the system dang it.


Ok, I will go out on a limb here and say that I don't unit test (at the moment), as I estimate it would take at least twice as long to write the test code and data as the actual code, since there are many related objects that need to be put together correctly for each test case. I am in the fortunate position of writing an in-house Django app, so basically it is in a constant beta state, and I have around 40 beta testers to tell me when things go wrong.

Now I see that unit testing would have caught a few of the bugs over the last couple of years (but not that many of them) but in our case, adding new features and adjusting the data model to the constantly changing requirements is more important. My code does get tested, just not automatically.

I am not saying that it is a bad idea to unit test, or that I never intend to use it, but for the time being the benefits don't outweigh the time costs.

Also, whenever I look at tutorials there is no advice on how to test the parts I want to test. Instead they demonstrate how to test 2 + 2 = 4. I don't see the point in that when my application mainly outputs the results of moderately complex SQL queries. I can generate a load of objects in the database, set up unit tests, and have them run each time I update a completely unrelated part of the application, or I can use the real data and check the results are as expected on my development machine. I know which way is more productive for me.
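
For what it's worth, the kind of test that goes beyond "2 + 2 = 4" usually looks something like the sketch below: build the related objects, run the query the page depends on, and pin down the result. The models and field names here are hypothetical, not from the poster's app:

    from django.test import TestCase

    from myapp.models import Run, Sample   # hypothetical models


    class SampleReportQueryTest(TestCase):
        def setUp(self):
            run = Run.objects.create(name="run-1")
            Sample.objects.create(run=run, status="passed")
            Sample.objects.create(run=run, status="failed")

        def test_report_counts_only_passed_samples(self):
            # Exercise whatever moderately complex queryset the report
            # relies on, against known fixture data.
            passed = Sample.objects.filter(run__name="run-1", status="passed")
            self.assertEqual(passed.count(), 1)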


I have projects where I can't manage to get any kind of automated tests in, because it's just too hard to figure out how to do it, way too time consuming. And indeed they don't have tests.

I have other projects where I manage to get good automated test coverage.

Having this experience, I know for sure which projects are more enjoyable to work on, of a higher quality, with fewer bugs, better architecture, and more developer productivity -- the well-tested ones, every time.

If you write tests from the start, it tends to affect the architecture of the project -- creating a testable architecture, but also generally a better, more maintainable one. So the projects where testing is 'too expensive' are often those that were started without tests. Also, certainly, some environments/frameworks/platforms support testing better than others. And I think it's true that some domains are better suited to testing than others -- sadly, in my experience, typical web apps are actually among the hardest things to test well.

I have sympathy for not having figured out how to test in an economical and maintainable way. Sometimes that's me. But at this point I am confident from my own experience that when I can figure out how to test in an economical and maintainable way, it leads to better software and less frustration.

(I suppose there could be a correlation fallacy here, where the 'less problematic' (in some ways) projects are the ones I manage to test on, and it's because they are 'less problematic' that they are higher quality, not because they are tested. All I can say is my experience leads me to believe in tests, even though I still don't use them in every project, because in some projects I can't figure out how to do so economically.)


"Unit" and "automatic" tests are not synonymous. We have a non-trivial system, and mocking the necessary components for unit tests seems like a productivity loss to me. But we absolutely have automated tests; we don't test components in isolation, but as they will behave in production.

However, carefully crafted system tests can exercise the parts of the system you want to exercise. You know the little examples you write up yourself to convince yourself that a new piece of functionality actually works? Turn those into tests, and keep them around. I have been saved by those when a seemingly unrelated change caused an error in something I had not anticipated.
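
A minimal sketch of that idea, assuming pytest and hypothetical names (report_csv, load_fixture): the same steps you would run by hand to convince yourself the feature works, kept around as a test that exercises the pieces together rather than in isolation.

    from myproject.reports import report_csv      # hypothetical function
    from myproject.testing import load_fixture    # hypothetical helper


    def test_monthly_report_round_trip(tmp_path):
        # Load known input, run the real pipeline end to end, and check the
        # actual output file -- exactly what you'd otherwise verify by hand once.
        data = load_fixture("october")
        out = tmp_path / "report.csv"
        report_csv(data, out)

        lines = out.read_text().splitlines()
        assert lines[0] == "date,total"   # header is intact
        assert len(lines) == 32           # one row per day of October, plus header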


I tend to have strict unit testing with near 100% coverage for libraries. For application code, I use primarily sloppy integration tests. This seems to be the right balance for me anyway. That being said, I try to extract as much app code into libraries as possible.


>I know which way is more productive for me.

How do you know this? It frankly doesn't sound like you have worked on an application of similar scope that has good test coverage. I went the first decade of my career without writing tests and said the exact same things you are saying.

I do agree that learning how to write effective tests is difficult and that you cannot do it by reading the web tutorials and the scant coverage given in most books. I learned how by working on an existing project with good coverage.

On my own personal projects what I began doing was writing tests instead of writing most of the exploratory code in the REPL, or instead of writing a stub of a template to test some new code in the browser. I took all that ad-hoc scaffolding that naturally pops up and just structured a little bit and called that my tests. My tests are more integration tests ... I still do not understand the compulsion for low-level unit tests that do nothing other than prove the underlying framework works correctly.


How do I know this?

Because I have on numerous occasions tried to get tests set up, and every time another feature request / data model change comes up before I get anywhere close to building all the related objects required for the tests I want to run. (I agree it is integration tests I need rather than low level stuff that proves the framework works as expected).

The majority of the bugs I see are ones I would not have written a test for anyway, as they are usually subtle interactions between objects, or variations that were not anticipated in the feature request.

The design is constantly changing, as I work for a DNA sequencing centre and the technology is constantly advancing. A few bugs would have been caught by automatic testing, but not so many considering the time it seems to take me to get them put together.

It is different working for a big corporate entity that has the resources to devote to these things, but in our case we don't.


> I am in the fortunate position of writing an in house Django app, so basically it is in a constant beta state, and I have around 40 beta testers to tell me when things go wrong.

And I'm sure you can see that if you were in a position where you had thousands of external paying customers relying on your service (who will possibly leave and stop paying if it breaks) that you might appreciate having unit tests and other automation to help you worry less when it comes time to push the big green launch button.


Yes, for sure I can see that. Like I say my app is in a constant beta state.


I think these are all fantastic things to implement but do you know how much pushback you get from engineers on this:

Me: "Do you have a standup every morning, so that you know about schedule delays after at most one day?"

In general folks HATE these, but I would love to hear about other cases where people have found them successful. We are small enough that the conversation is ongoing, so we haven't needed to implement it.

What I have done in other cases is the "walk-around" to speak to people individually rather than in a massive group meeting - and that seems to have been well received.


I love our daily stand-ups.

That said, we often have to take things "offline" because they spur plenty of larger cross-dev, cross-functional discussions. Also, I've made a point of documenting my daily work effort, so I have plenty to report.

I hypothesize that one's enjoyment of daily stand-ups is a function of (a) the team's general openness to communication, and (b) the degree to which one's daily report reflects positively on their effort.


I've done daily stand-ups under the Scrum methodology that the whole team liked and found successful. In my experience, it goes best if the emphasis is strongly focused on getting the team members to communicate to each other and to the team as a whole. If everyone is just standing around waiting for their turn to deliver status to the boss, the stand-up is a poor use of time since, as you suggest, the boss could just do the walk-around and collect that status one-on-one. When I've been "scrum master", I make sure the boss/customer/product owner stays quiet in the stand-ups and nudge the team culture towards using the time for the team to talk to itself, synchronize everyone's knowledge and expectations, and build coherence and camaraderie, ideally ignoring the extra people in the room.

It's definitely work to build and maintain that kind of culture, but I've had many people tell me it makes them want to come to work in the morning because they enjoy starting off this way. It also helps that I try very hard to make sure this is the one and only recurring "meeting" they have.


I hate it because some people on the team come in at 6, others come in at 9:30, and anything in between. Some leave by 3:30, and others leave from 5 to 7 pm. I was at work last night until 11:30, working on a particularly gnarly task. (different schedules, because different lifestyles, different obligations -- remember, diversity is good).

When we need to talk to people to find out what's going on, we just talk to them.


I think the important point is that a boss needs to touch base with each employee daily, and how you do it is up to you.


I prefer it when mine stay out of my road and let me get on with things. They have a pretty poor understanding of software engineering, but can code just enough to think their input is helpful, when it usually isn't.


So basically you want to implement Enterprise QA processes for a tiny team so as to make up for incompetence and bad hiring decisions. Sorry, I don't buy it.


Unit tests and basic code reviews aren't exactly exclusive to enterprise-level system architecture these days.

Regarding unit tests, their utility is actually mostly independent of the size of the team. The more relevant factor is the size of the codebase. A small team can end up producing a pretty huge codebase, and solid unit tests can end up saving a lot of frustration in the future. They also can be critical in helping new developers familiarize themselves with the codebase and its interdependencies.

Code reviews are an investment not just in the code and the product but also in the human capital producing it. One thing you'll learn with experience is that even very good developers will write bad code sometimes. If you've got millions of users, simply doing code reviews can be a lot less stressful than finding small mistakes later on when bugs pop up in production and a hotfix has to be pushed. It leads to less blame, fewer production bugs, and a more collaborative, academic environment. People can learn and grow a lot from code reviews (both receiving and giving). They'll improve the product and codebase not just in the short term, but doubly so in the long run.


Yep. I don't think doing an 'art review' is going to turn many amateur painters into a Picasso.

Code reviews also suck up time of your most senior people. Personally, I'd rather just have some fucking TESTERS. (manual or automatic script writers).


You'd be surprised. A crucial part of learning to be an artist is open critiques. It is the one moment when lovey-dovey artists suddenly turn into the same kind of nitpicky curmudgeons as us coders.

Where do you think great artists come from?


As a professional developer who started college in an art major, I can verify this from personal experience.


It's funny, the similarities between a critique and a team code-review didn't occur to me until penguindev's comment. But they really are a lot alike.


I definitely agree with all of the sentiments in the blog post. With the exception of daily scrum (we do twice weekly), we try to follow all of these habits. There always seem to be two things which we run into though; superficial code reviews and haphazard integration.

On the code review side, I find that most engineers look for trivial crap that could normally be picked up by running lint. It's nice to have similar lint-y style, but for me, the most important things to look for are whether the code is going to break with unexpected input (ie. is the logic sound and are the unit tests good enough), and did the engineer write in an idiomatic style which would be easy for other engineers to understand. I don't mind comments like "maybe use this other variable name", however using recognizable patterns which allow other engineers to easily follow the logic is much more important. Often it seems like people get lazy during reviews and write really superficial comments instead of taking the time to really get down and dirty in another person's code. And why would they? They've got their own code to write.

The other thing I feel like I'm always up against, particularly with younger engineers (sorry younger engineers!) is not thinking through all of the integration points when your code needs to work with other code which is being developed concurrently. One engineer will say something like "Oh, just call function X", which when you do, doesn't provide the functionality which the other engineer was claiming it had. That, or there was some additional step which one engineer wasn't being explicit about and there was an assumption that you were going to take care of it. There's nothing worse than finding this out on the last day of the sprint when you're trying to button everything up.


There are lots of software developers out there that shouldn't be developing software in the first place. They could be excellent farmers, musicians, or athletes, but for whatever reason they decided to be software developers. And it doesn't matter how many scrums or code reviews you throw at them; they just won't get it. They will keep producing miserable results, making everyone around them miserable.

On the other hand, there still are a few decent, old school devs who don't need hand-holding, the constant poking and distraction of standup meetings, or writing meaningless test cases that check if 2+2 is still 4. They just (1) understand the problem and (2) write code that solves it. As simple as that. Good old engineering, like these guys: http://www.youtube.com/watch?v=8kUQWuK1L4w. Or the original SAS system - its reference manual was better quality than any statistics textbook. Unfortunately those days are gone now and we live in the kingdom of Scrums and Frameworks.


This may not be obvious to some people (like my boss), but code reviews alone are insufficient; having a good technical design early on is more important.

I've sat through several "code reviews", and they're always conducted at the end of small-ish projects. When I look at the code, I very much want the author to rewrite it, but by then it's too late.


I feel like we're not getting the whole story. For example, what do you do when you follow all these best practices, but end up with a product that no one wants?


We've tried to institute technical design docs and reviews and failed.

Management keeps asking we do them but doesn't enforce it.

Developers don't want to do it and take it personally when you suggest a different approach during the review.

Management sets deadlines on projects without consulting leads or architects.

I (database architect) have suggested we add steps for technical approval and code reviews to our feature/bug tracking system but have been ignored.

I'm sure people can relate out there.


You need to read "Good Boss Dead Boss".



Yes sorry. (Can't believe I didn't remember the title. I'm reading it right now and it's sitting on my desk. FacePalm)


I know that these points are often touted as best practices. I agree with most.

I have _never_ seen useful (daily especially!) standup meetings.

That very well might be a cultural problem or an issue with the people I work with etc., but even after giving the idea a couple of chances: 'Daily standups' make me cringe inside.


Building automated tests is like getting a flywheel going. Sure, it might be difficult to start, but once it gets going, it will take you very far with little incremental effort.


Does anyone have a resource for a full checklist of practices? I'm a processes guy but I'm curious what all is out there these days.


I don't think you can beat the Joel Test for simplicity. It's a bit dated perhaps, but still surprisingly relevant.

http://www.joelonsoftware.com/articles/fog0000000043.html


Oh wow, I remember reading this years ago, but it's still surprisingly relevant (took the words from me :) ).


Perfect analogy with the architect.


Early impressions of the article:

some of that is getting what you pay for. If you go after cheaper or younger people, you're more likely (all other things being equal, general case, etc etc) to get lower quality work. Also, there's a self-created problem factor where if you force somebody else to give you an estimate (which is just a guess however much you pretty it up and repackage it), and then turn that around and treat it as a deadline, then you can expect those deadlines to go whooshing past. And it will happen even more often if you have cheaper/younger/less-experienced programmers.

Hire better people, which also means paying them more. And don't ask for estimates. Just see what happens, and iterate.


Long story short, I get put on performance review, threat of termination. Boss gives me exacting standards for project to complete by X date, reviews everything I do and I have to write a progress report every 2 days that's reviewed (usually). Boss asks, "why did your performance improve so much?"

I'd never even seen a project plan before a few months ago...



