I've seen coders who knew this by heart forget it less than 5 years after entering management, and become champions of forcing everybody into the office for 8:30 stand-ups and time tracking systems that enforce minute-by-minute "project accountability".
I don't know exactly how this happens; all I know is it's like a damn force of nature. The only thing I've ever seen kill morale and tank projects faster is random periodic layoffs.
I don't think any of this is unique to software engineering, either. You don't just decide one day to build a bridge and show up at the bank of the river with a cement mixer the next morning. To someone who just wants to drive over the river, it looks like no progress is being made on the project. Hence, lots of meetings (delaying the project further) to make sure everyone is working hard enough. (Because there is always "that one guy" who actually isn't doing anything, and to an outsider he looks the same as someone who is working hard.)
As for why we spend money hiring people who don't actually do anything except track the progress of other people's work... I think it's just loss aversion in action. If you spend 10% of the day filling out timesheets and pay an additional employee to check up on them... you know where your money's going. But if someone just shows up to work and does nothing, then they're stealing from you and must be punished! It's human nature.
I don't know what we did in the past that's different from today. Somehow we got to the moon. But now companies are so bloated that even doing nothing seems to require 30,000 employees. I don't have much hope for the future generation of megaprojects. And, you know, we're not really doing them anymore. I wonder what changed. Or maybe looking back at all the achievements of the entire human race at once makes it difficult to put the present day into perspective.
Outside of a few niche areas, calling what's going on in software "engineering" seems laughable to me, and I never refer to myself with that term.
If it were super easy, we'd very likely change things once they were in use. Perhaps we didn't account for pedestrians using the bridge and they're walking across it all the time, or tractor trailers are having trouble making a right off the bridge. Software is inherently "soft". It's malleable. When done well, it can create better fit-for-purpose solutions than almost any other discipline. I agree that most "software engineering" is more like plumbing, but I do feel like "real" engineering disciplines would likely follow a similar paradigm if they had the same luxury to rework things.
Look at the Bay Bridge: we built it using steel from China which, after it was already being used to construct the span, was later found to be inferior.
So in SW dev -- you might choose a tool for a massive project only to find it's the wrong tool, or not the best one you could have selected given the budget at the time the project started.
This can also be said of the team. Employee A was the right employee at the start of the project, but later proves not to be the best resource -- and you're already X% down the project timeline... what do you do? Especially at what cost?
One job I had (at a top-5 consulting engineer) was to reverse engineer the code from an onsite survey computer and build a GUI to analyse the data, allowing one of our senior engineers to look at what happened.
So while your bridge scenario is correct -- no one would do that midstream -- software solutions often (but not always) can be flexible enough for change.
It's the exact same trajectory that happened with Lean Six Sigma. It starts out as a good idea in a specific area. People start making money on managerial self help selling based on the idea. Market saturates. Product is pushed into other areas to keep growth. Engineering management idea becomes cult-like. New method comes onto scene and starts to displace the old.
But to assume no real engineering goes on discounts the fact that aside from stories there are "detailed" requirements, modeling, and designing that still occurs. If your team is skipping this, that's a problem.
If developer teams were treated like the black box they apparently want to be treated as, whoever is managing processes would fly blind, just hoping for the best - but the organization as a whole needs to understand what is going on right now.
Time tracking can be a surveillance tool, but it can also be a tool to improve things - someone could find out that a team is running over capacity, and consequently assign more resources to the task at hand or redefine the task.
And, well, for morning standups: when done right they are a valuable tool for efficient communication, and probably better than having people chasing each other for hours on end. IMHO an organisation should agree on the communication strategy that is most efficient and helpful for it. It is not a religious belief that a standup must be done first thing in the morning; I just think that tends to conflict with the team's personal schedules the least.
If we know anything in IT, it's Brooks' law: "Adding manpower to a late software project makes it later".
So if the manager asks for that on a late project, they are probably incompetent.
>If a good team of developers estimates the effort needed for any one task with 20% accuracy (which would be very good indeed), that can make a 9 month project 2 months late, which might or might not be acceptable.
For a small (a few days) project, 20% accuracy doesn't make any difference. For a large one, 20% accuracy is science fiction stuff compared to the actual accuracy rates of big projects, especially the kind with changing requirements, new issues discovered en route, etc (which is most of them).
>If developer teams were treated like the black box they apparently want to be treated as, whoever is managing processes would fly blind, just hoping for the best - but the organization as a whole needs to understand what is going on right now.
If the manager is not part of the team / actively involved (and thus inside the "black box"), then any numbers they get are bogus, time tracking or not. Software is a qualitative process, not a quantitative one. "5 hours spent on X" means nothing at all without knowing what was done for X.
There is a lot of wisdom to it. But there are good ways to add resources and bad ways. Perhaps a team isn't getting help from another group. Perhaps there are some tasks that others could offload in a way that doesn't distract the team on the critical path, etc.
It's a good adage but it doesn't mean that you can never take steps to accelerate project delivery.
Then the team should communicate that to the other team themselves.
> Perhaps there are some tasks that others could offload in a way that doesn't distract the team on the critical path
That should be the decision of the team. If they need to cut down in order to stay on the critical path, then they should do so.
In my experience, good managers can be quite valuable negotiating with other organizations in the company and with getting needed resources. In fact, I'd say that's a pretty significant part of their job.
Not necessarily. I've seen/heard engineers who take great glee in pointing this out, but they don't understand the real dynamic. The managers have probably read the mythical man month too.
If as a manager you report a problem to your boss, you're likely to get "help", even if you already have a plan. Sometimes the least costly "help" is to accept additional "resources" and fence them off someplace they don't do damage so that the rest of your team can execute its real plan. Turning down help can damage your credibility/relationships which is what you depend on to be "allowed to be successful". It's emotions, not logic.
Or, if you really really want to, you can tell your boss they're an idiot and need to learn about Brooks' law.
Some, sure, but not the ones violating all of its conclusions...
>Sometimes the least costly "help" is to accept additional "resources" and fence them off someplace they don't do damage so that the rest of your team can execute its real plan. Turning down help can damage your credibility/relationships which is what you depend on to be "allowed to be successful". It's emotions, not logic. Or, if you really really want to, you can tell your boss they're an idiot and need to learn about Brooks' law.
Well, this doesn't contradict what I wrote above, though. It just finds a more polite way to handle the situation and not let the extra "help" make a mess of things! But it's not like I suggested telling anyone directly that they're an idiot!
"For a large one, 20% accuracy is science fiction stuff compared to the actual accuracy rates of big projects.." (exactly, I made that point as well) "..especially the kind with changing requirements, new issues discovered en route, etc (which is most of them)." If you are working in an environment where shipping something at some random point in the future is fine, well, good for you. The rest of us don't work in that world, which means that if you start a large project, let's say over the scale of a year, you cannot avoid talking about progress during that year. This is the point that I want to make.
In that sense: "5 hours spent on X" means, dishonesty aside, that 5 hours have been spent. This has some meaning, namely:
- a) if the estimate was 1 hour, and this doesn't cancel out over time, there is a systemic problem which puts the entire estimate into question. So someone should do something about that, and whatever that something is (adding new or different resources, changing the goal, changing the timeline, whatever), it is usually not something that developers could do. (Not that I think 5 hours is a useful estimate size for any feature; 1 week, on the other hand, for a set of features would probably be useful.)
- and b) if you figure out that team members are frequently spending more than 8 hours per day on their features (adjust 8 hours to whatever the team agrees upon), then the team is overloaded, the workload is not sustainable, and, again, someone should do something about that...
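The two checks above can be sketched in a few lines. Everything here is hypothetical (the task records, the thresholds); it only shows the shape of the signal, not a real tool:

```python
# Toy sketch: detect (a) systemic estimate drift and (b) unsustainable
# workload from simple time-tracking records. All figures are made up.

# (estimated_hours, actual_hours) per task
tasks = [(1, 5), (2, 6), (4, 9), (3, 8)]

# (a) if actuals consistently exceed estimates, the whole plan is suspect
ratios = [actual / est for est, actual in tasks]
avg_ratio = sum(ratios) / len(ratios)
if avg_ratio > 1.5:  # arbitrary threshold; the point is "doesn't cancel out"
    print(f"(a) estimates off by {avg_ratio:.1f}x on average -> replan")

# (b) average logged hours per day above the agreed cap means overload
daily_hours = [9.5, 10.0, 8.5, 11.0]  # hypothetical timesheet entries
agreed_cap = 8.0
if sum(daily_hours) / len(daily_hours) > agreed_cap:
    print("(b) workload unsustainable -> rebalance")
```

Neither signal says *what* to do, only that someone above the team level probably has to act.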
Really, it shouldn't come as a surprise that - opposite to developers' lore - managers really do add value, especially when they are good managers.
That's an insane idea (it does lead to the conclusion that the optimal team is zero sized), and not an answer to the GP's claim that a good manager must know when more people will/won't help.
Let's take Civilization, for example, and say you need to build a spearman in 7 turns to not lose -- but it would take 9, and you already have production maximized without adding new buildings. Technically, building a forge could speed it up to 6 turns, but the forge itself takes 20 turns to build. Even if it would help eventually, adding it right now would only make an intolerable delay even worse.
To get away from the geeky metaphor: bottleneck and chokepoint management is a crucial part of making projects parallelizable across many developers, and even then there are expenses to making them interoperate.
Unless you are grossly behind, it is unlikely adding more people will help -- unless you are so vastly overambitious that you're trying to do with one person something that requires a team of thousands. In which case you have probably already failed too massively for it to matter.
The output of n programmers working on a single task is O(n), but each of those programmers must coordinate with O(n) colleagues, for O(n^2) coordination channels in total. As more people are added, the coordination costs begin to take over, and the project slips further and further behind. Thus, going from 2 to 4 programmers might be a great idea, while 20 to 40 or 200 to 400 may doom the project.
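The channel count behind that intuition is just n(n-1)/2. A quick sketch, with the team sizes above as illustration:

```python
# Pairwise communication channels in a fully connected team of n people.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 4, 20, 40, 200, 400):
    print(f"{n:>3} people -> {channels(n):>6} channels")
```

Doubling from 2 to 4 people adds only 5 channels; doubling from 200 to 400 adds 59,900 -- which is why the same "let's double the team" decision behaves so differently at different scales.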
It's potentially a bit better than O(n log n).
If a hierarchy settles into a strong tree structure, it approaches O(n) connections people have to handle - one bidirectional connection between each person and their immediate superior.
To put it a different way, a hierarchy has the potential to scale without limit -- or, to put it differently again, the larger a system of people, the more they will be forced by necessity into a strict tree structure for the majority of 1:1 communications.
It's much more advantageous to form cliques: small, highly connected groups that work on the same thing, possibly with some loose connections to other cliques. That's the model that most closely resembles towns, cities, and even tribes.
Talking to your team boss is good for high-level guidance, but won't do anything when what you really need is the details of the workings of X that a colleague knows.
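Rough numbers make the tree-vs-clique comparison concrete. The clique model below is my own simplification (equal-sized groups, one loose liaison link between neighboring groups), not something from the thread:

```python
def mesh(n: int) -> int:
    return n * (n - 1) // 2      # everyone talks to everyone

def tree(n: int) -> int:
    return n - 1                 # each person talks only to one superior

def cliques(n: int, k: int) -> int:
    groups = n // k              # assumes n divisible by k for simplicity
    internal = groups * (k * (k - 1) // 2)   # full mesh inside each clique
    liaisons = groups - 1        # one loose link between neighboring cliques
    return internal + liaisons

n = 120
print(mesh(n), tree(n), cliques(n, 6))  # -> 7140 119 319
```

The clique structure costs a few times more connections than a strict tree, but it keeps the dense links exactly where the shared context is, instead of routing every detail question through a superior.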
Where did you get this idea from what I wrote (in fact, from my direct quote of Brooks' law)?
>That's an insane idea (it does lead to the conclusion that the optimal team is zero sized)
That adding people to a late project makes it later doesn't "lead to the conclusion" that "optimal team size is zero".
That's what's actually insane (one common form of insanity is following a logic to extremes without caring for nuance and limits).
Just that more people means more overhead (e.g. managerial, communication, and agreement overhead), and more people added later means more time to bring them up to speed and hand-hold them until they're ready, plus the added overhead of the larger headcount even once they're ready.
Brooks' law doesn't say you should never put more people on a late project. It does say that you should not realistically expect the project to finish at (or sooner than) the initial estimate because you added more people.
In other words, take a late project with X persons working and M estimated months to completion. The real completion with X persons might be MR months, and with more persons X' it could get to MP. Adding extra programmers won't (per Brooks' law and based on typical observations) ever help it reach M.
Sometimes adding more persons will make things worse, where MP > MR, other times they can help finish faster than the "actual" (not the initially estimated) finish date, so that MP < MR.
So it might still be advisable to add extra people -- it just won't (per Brooks' law) get you to finish in M, and in some cases might even get you further from MR.
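A toy model shows how the arithmetic can land on either side of MR. Every constant here is invented (the per-pair overhead factor, the ramp-up cost); it's one hypothetical way to make the MP-vs-MR trade-off concrete, not an established formula:

```python
# Hypothetical model: `work` person-months remain, new hires add ramp-up
# work, and pairwise coordination taxes everyone's per-person output.
def months_to_finish(workers: int, work: float, ramp_cost: float = 0.0,
                     overhead_per_pair: float = 0.002) -> float:
    pairs = workers * (workers - 1) / 2
    per_person_output = max(0.1, 1.0 - overhead_per_pair * pairs)
    return (work + ramp_cost) / (workers * per_person_output)

W = 40.0  # person-months of work left on the late project
mr = months_to_finish(5, W)                       # current team (MR)
mp_good = months_to_finish(8, W, ramp_cost=6)     # modest growth: MP < MR
mp_bad = months_to_finish(15, W, ramp_cost=20,
                          overhead_per_pair=0.01)  # heavy growth: MP > MR
print(mr, mp_good, mp_bad)
```

Under these made-up numbers, going from 5 to 8 people still helps, while jumping to 15 makes things drastically worse -- consistent with the point that adding people can move you toward or away from MR, but never magically back to M.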
Like the idea of throwing good money after bad, if the problem isn't rooted in lack of resources (e.g., if the problem is actually rooted in bad communication, bad training, bad requirements, etc.) it's not going to help.
He didn't even get deeply into differences between people in that chapter. It's just very basic stuff that has been in Project Management 101 since forever. Yet he had to write it down, and most people still don't seem to understand it.
It's more about communication and coordination overhead as teams get larger (and new additions need to be brought up to speed etc).
If, by “resources”, you mean people, then Fred Brooks might have something else to say about that.
I think the biggest risk introduced by time tracking is well illustrated by a sibling comment:
> My boss knew how long particular task took and asked if I need some help afterwards. It was great support and mentoring. But I now experience exact the opposite. My managers come to me if it took me longer the second time than the first time, to complain about me being too slow.
It produces a misalignment of incentives: If you do a great job one week getting visible things out the door, then you're punished for the rest of your time in that job, rather than rewarded for doing great. So you know in advance, it's better to deliver slower all the time.
I think a lot of people have in the back of their mind an instinctive trepidation for detailed time reporting, because of fears it will invite that sort of paradoxical push, preventing them from doing the best work they know they could be doing. "To speed you up we need you to attend more meetings."
I've also witnessed a different kind of time-tracking problem. Someone logs into a reporting system that, say, they worked 200 hours on a project so far, since the last 3 month report.
Some people don't take the person's word for it. I've seen bosses, peers, coworkers say "I don't believe you, no way that took 200 hours! You've done maybe 10 hours at best. I could do it in a weekend, if only I had the free time. Anyone competent could! If only I had the time to organise someone to replace you. You're lucky to work here, your job should be given to someone else who doesn't lie about what they've been doing. (etc.)"
What's really happening is the person has put in 250 hours (350 if you count that unpaid overtime they did on their holiday and weekends), but reduced it to 200 in the report because they know what their boss/peer/coworker is likely to say.
If freelancing, they only bill for 200 hours, feeling like in a just world they would bill 350, but it's not a just world, and if their client is unhappy, maybe their work doesn't deserve full rate.
Often, boss/peer/coworker is quite incompetent, and wouldn't be able to implement the thing in 200 hours, let alone 10 - their lack of ability or familiarity with the job is what's making them estimate such a low figure. And no surprise to anyone, they can't find a replacement who will do the same thing in less time - although they sometimes find replacements who say they will, and then don't. The cycle continues, rinse and repeat.
So unlucky time-reporting worker is forced to do things to "prove" they are telling the truth about the time they've put in at least, while feeling a heavy dose of imposter syndrome.
Things like ass-on-seat, make sure screen is visible to others at work (to prove not using social media), timing of repo commits designed around reputation rather than problem solving, participate in social chats/IMs just the right amount (too much = slacker, too little = slacker), same with meetings.
And weirdly, it works. Because visibility matters a lot more than results.
If the above sounds a bit like I might be talking about myself... not really. I've had those sorts of accusations a few times, but it's outweighed by the rather pleasant discovery that I've worked for people who are surprised that the hours I bill for (I often freelance) are lower than they expected.
However, the few times it's happened, I took from it the importance of not doing that to other people, even if I'm unhappy with their work. Because it's so undermining at a rather fundamental level.
We are still doing megaprojects just fine. And fewer people die doing them.
California High-Speed Rail, 520 mi in length, began construction in 2015, cost 90x as much per mile, even after accounting for inflation, and.... was indefinitely postponed in 2019. For a rail line in a single state.
Admittedly, with much higher speed, and much more humane construction. But nonetheless, even with a much higher per-mile price they still couldn't deliver it? I can see how someone might get the impression we aren't able to successfully deliver large projects any more.
Or we could compare successful project of today (A380) vs failed project (Spruce Goose).
Or, I know it's not as romantic as a real living person on the Moon who then comes back home, but we have robots on Mars. And on comets.
>I can see how someone might get the impression we aren't able to successfully deliver large projects any more.
We're all clever people and can convince ourselves of anything if we really try. Why not be positive? :)
Seriously, though-- whether true or not, there's undoubtedly an impression of extensive graft throughout the U.S. construction industry. Perhaps there are just so many people involved in such work that the inevitable few bad actors stand out more (a dishonest one is more likely to have stories told about them than one who simply does proper work for a fair price), but there is almost a tradition about it. If you talk to almost anyone directly involved in the industry - at least where I've been - they seem to act like these 'shady' types are a noticeable and constant presence.
I should try to find some hard data, but it always feels like construction costs outpace inflation. I don't know if it's regulation, labour costs, or what.
I've seen many analyses of the situation pointing to all kinds of different "reasons". Many of those reasons have been rebutted by someone else. I cannot figure out who is right or what the causal factors are, but the bottom line is that in some places transit costs a lot more than in others, even after you factor in the cost of labor and land -- which seem like the only things that should differ.
Federal infrastructure projects are in Spanish.
But here is a report in English anyway:
I believe it's about 50/50 Spanish/Catalan in Barcelona.
Either way, though, they don't speak the same language as the people who want to know what they are doing differently.
In the US Spanish speakers tend to be poor and thus not in position to do anything about costs if they did look into it.
We might be able to learn from the Chinese or Russians. I'm not sure what their cost structures are like. Learning best practices from high-cost areas won't help you control costs, which rules out all English-speaking countries.
I forgot about this joke yesterday, but it is a better reply than my previous ones.
I'm confused by this sentence.
Labor: Basically slavery.
Regulation (safety and environment): Basically none.
I live in Minneapolis, and remember when the 35W bridge fell. Republicans tried to prevent adding lanes for future light rail to the new bridge (particularly then-governor Coleman, who was eyeing a presidential run), and our mayor basically held the bridge hostage - he told the state and federal governments that if there weren't provisions for future light rail, that the city wouldn't be issuing the necessary construction permits for the new bridge. (A call I fully supported, btw.)
This is why it's more expensive than the 1870s.
You can always get something cheap if someone else pays
Compare feeding and clothing the world now to 200 years ago. We get much more done at a much lower consumer price tag.
And of course CHSR is far more complicated -- travelling through dense, developed urban areas at much higher speeds.
Is it really any surprise that after the easier projects are all done, the remaining projects are harder?
Today no one wants a train in the backyard.
Plus, thousands of people died constructing those railroads. That's not acceptable today.
I think that's the correct tradeoff, but it would be nice if we didn't lie to ourselves (and our constituents) about the precision of our estimates. I can't recall a megaproject completed this century which was less than 3 years late.
Maybe that's part of the secret, communication was more expensive so you wasted less time with noise
But how do you put a switch on the panel of the 747, make the wire go all the way to the tail of the aircraft and make it do what you need it to do?
Very similar to modern corporate email, sending crap around forever and bringing in more "eyes" trying to avoid making a decision or—god forbid—actually personally doing real work that produces an actual good.
[EDIT] tail end of 1961, actually, got that wrong.
I think Yuval Harari was onto something when he said that what guides the vision of a society and thus science spending is ideology, religion and politics.
See medieval churches, the Manhattan Project, the moon landing. I could put a name on it, but better to leave politics out of HN.
You remember a country still suffering from WWII and the loss of its world-leader role (and colonies), in a much poorer global period. For starters, the '70s had a global recession and oil crisis, and many countries were affected, not just England. Have a look at pictures of New York from the same period -- it looks like a third world country.
In the 80s and 90s, countries that didn't join the EEC, as well as countries that joined EU much later, had the same economic uplift.
(And of course several countries that did join the EEC/EU had economic decline and crashes, e.g. the P.I.I.G.S.)
Depending on where you live I’d say access to resources you could not imagine back then for peanuts.
People complain that technology is expensive now, but prices in the 70s and 80s were out of reach for most.
> how do you know you're not living in 1973
I would glance at my local grocery store, shopping malls, cars, empty cemeteries, and inflight wifi.
First off, most manufactured things are incredibly cheaper now, due to real, complex behind-the-scenes advances.
We have a global supply chain that largely removes the constraints of geographic locality for foods and manufactured goods.
That's paired with an almost-always-online wireless mobile internet (that works up to 7 miles up in the sky!) that is absolutely amazing and darn inexpensive.
Our cars are much better at not killing us.
Many more diseases are treatable, and injuries curable.
Finally, why subtract screens? Modern screens are great.
Dunno about churches but the other two were prompted by active large-scale military conflicts. Bring back the Cold War, I guess?
I do realize that far too many people put their faith in the healing power of judgement.
But that is partly because a component of that cognitive circuit is that most people have some sort of innate desire to see things as "fair". Certainly not to be the victim! (That's bigger, actually, by far.)
People who work with people know who the slackers or non-producers are. I try to separate the two, because some high-output producers can appear to be slackers when they think for 8 days before typing that one magical line of code that saves the project. (It's hard to tell, when you are in the trenches day-to-day, which may be which.)
In any case! (I almost got lost in my parenthetical there.) Workers know which co-workers are getting paid to not produce, and know that this wage, applied to a producer, would lower their own workload 10% (or whatever), and besides: it's NOT fair! He gets paid $100K/year to not work! And I work my ass off for the same wage! It's enough to make me stop working myself. That will certainly show "them".
At the end of the day, if we do not purge the slacker, morale-rot sets in on the rest of the team, and that's just "bad". Even without needing to be punitive in our hearts, those people must be culled for the continued positive morale of the herd.
Or, so, I thought once. Now - I'm not so sure. But it seemed reasonable at the time!
The Apollo program had 400,000 people involved at its peak.
The programmers for the Apollo software (which is what we're discussing) numbered, at their peak, around 350. And most of those roles were secondary, only needed because of the primitive tooling and utmost reliability requirements that modern software doesn't suffer from...
 David G. Hoag, 1976 - http://klabs.org/history/history_docs/mit_docs/1711.pdf
- work with devs to make sure whatever gets into the work pipeline is well understood
- everybody agrees on the planned work scope
- prevent anyone from adding "just this tiny thing" into planned scope
- visualise all of this tough, tough dev work, especially the ad-hoc work (putting out fires, for example)
- keeping both management and devs accountable
- and more!
Also, I disagree with the bridge metaphor; while it's true you can't use the bridge when it's 50% done, you should ideally be able to use 5% of the planned features. If you can't, then maybe it was planned out badly and shouldn't have gone into planning in the first place?
If there's a person that's "working hard" in a team and a person that's "hardly working", then the team has a big issue and should do everything to fix that.
However, this takes a lot of guts on the part of (both!) the manager and the developers, and always takes an emotional toll.
* "command & control" model: the risk is borne by the client, paying cost+15% for everything while they oversee all the work. Low trust, lots of bureaucratic overhead, but if they are competent enough, they can get the project done.
* "free market" model: the risk is borne by the vendor -- payment upon successful delivery -- very high profit margin if the project succeeds, but $0 pay if the project fails or is even just second best.
One catch is, even in the "free market" model, you have nested command & control inside vendors, with the same problems the original client had. You can go "free market" all the way down, but you are still stuck with all the human risks -- people can cheat, be accidentally incompetent, or lose everything just for being second best at project A when they could have been the best at project B.
> Somehow we got to the moon.
The moon isn't hard to reach, it's just not worth the price. Allocate another $250B like last time, and it will get done -- either via command & control at publicly run NASA, or by paying SpaceX.
We spend collectively about $100B/yr on the Internet, and it's super complex and huge and works amazingly well (technically, that is -- how people you hate choose to use it is a human social problem).
That is why even the best soloists should try pair/mob programming from time to time. And it is essential to increase visibility in teams where people work solo or distributed. It's okay to shout out in a public chat, group email, etc. what you're doing, what approach you're going to try, things that are unclear to you, what's left, how you'd like to improve it, etc. Don't be afraid to look stupid. It's better than trying to come up with excuses later. Post diagrams drawn on a napkin. Ask your teammates what they would think if you added a library, or removed one. A feature delayed for unknown and speculative reasons never looks good; a delayed feature backed up with your publicly shared notes is always better. Better for you, better for the team.
The best teams operate that way, and if a manager cannot create a culture of complete transparency, they'll lose the ability to trust their peers.
For the first couple of years, you fight against this, passionately arguing for genuinely good engineering practices.
But every time there’s a problem, you compromise and it creeps a bit more in the direction of ‘command and control’. And all those little changes add up to the results you observe.
I'd imagine the smart developers quickly get replaced by the obedient clock-punchers.
I have never understood why clock-punching is such a big problem. It doesn't take much time to track your time with typical software, in my experience. And apparently being a clock-puncher now makes you dumb as well?
I have personally also tracked time on my personal projects. While I'm not sure it brings much value, it is not a huge investment time-wise.
So what the super-productive guy learns after a while is that they just need to "work overtime" and delay presenting results to get ahead. If this attitude starts pervading your team, productivity might start tanking too.
Situationally. With programming there are loads of different tasks. For some tasks I think time can be quite a good metric. Still, metrics matter and I think having lousy metrics is better than not having metrics at all. Of course you have to realize what purpose the metric has in certain situation.
I fundamentally disagree. Metrics don't live in their own world. They affect the world around them and incentivize certain actions. Bad metrics can produce bad incentives. There are certainly many cases where simply not knowing something is better than knowing something that's only partially right. However, that requires a level of humility and acceptance of one's limitations that can be hard to defend to others looking for straight answers to all questions.
But generally I think my main point stands. We need to be humble enough to sometimes accept a lack of control (i.e. not using certain metrics/processes) rather than deluding ourselves with metrics that really aren't very good.
Never mind that the task spec changed 5 times in those 3 days.
Because: 1) you know what you are lacking 2) you won't take action on wrong information.
But if you're a manager giving out raises to your team, taking no action (i.e. giving no raises) might seem a worse choice than using the unreliable data - and there's no expectation reliable data will arrive if you wait.
Happy employees, at a certain level of knowledge work, have a degree of autonomy.
Unhappy employees who could otherwise be happy employees are a waste of potential and money.
Having a clock on/clock off and a 9-5 butts-in-seats management style and incentive structure reduces perceived autonomy.
I don't agree with using it as a slur, but I do think it's an awful way to work in a creative field. Some of my most productive work happens as a direct result of my taking breaks - a walk here, a cup of tea there. All the while I'm thinking about the problem at hand. Time tracking discourages this kind of lateral approach in favor of brute-forcing your way to the answer.
I would like to understand why on earth time tracking is against "genuinely good engineering practices".
I have used time tracking for personal projects and also professionally, and I genuinely think it can add value to my workflow, in that I can analyze how much time I spend on certain tasks, etc.
I would guess painters and scriptwriters also track their time quite often. I think some level of time tracking is good for almost any kind of work. For example, if your goal is to produce paintings and sell them for a living, you probably want to know how long it takes you to produce something, so you can see whether you are approaching sustainability. Similarly, if you write movie or TV scripts, it is probably relevant whether a script takes 2 months or 2 years. Some work is just structurally different, so different approaches to time tracking make sense.
I think debating "whether you wrote this many lines within X hours" is more a management and workplace-culture issue than a software development one. It seems that people here are fed up with their managers and funnel that bad feeling toward time tracking. In my opinion, time tracking is a valuable tool and should in principle be used almost everywhere. Bad managers, of course, make any type of work a pain.
Of late I've thought it'd be better to look at the rate at which features are added and ignore 'hours' completely. As in: Bob seems to be able to complete one hard feature a month, or three medium features. You've got to ask, do you care how many hours Bob worked? Not really.
Bonus points if said manager does not understand that writing tests makes one go faster than plugging away at the code and trying to run scenarios manually...
Why do I know exactly what this code is going to be? Is it because I’m doing something repetitive? I’m a software developer. My job is to make the computer do the repetitive parts, not get paid for spending four hours doing something we’ve done ten times before.
Mastery isn’t about doing more, it’s about being more effective. You develop intuition. And the way your intuitive brain surfaces warnings is as discomfort. If you cultivate a sensitivity to this discomfort, it means you are averse to powering through. On some level you “have to” address the issue instead of tolerating it. You have to scratch.
Now you’re not doing rote work. How long will it take? Who knows. But it’s less time than doing this thing eight more times this year. And it’s not just time. It’s energy and face. A repeated process that can introduce human errors makes you look like a clown to your customers, and your boss’s boss.
I think the only known-length task is a completed task.
We'd add in some padding because you don't know if something will need a bit more research or if the lead is going to come down with the flu or whatever. You write deadlines that are under the client's control, rather than yours, into the contract.
So you don't know exactly. But if you have a decent handle on the scope of work and aren't trying to solve wide-open problems, you can usually get pretty close.
Certainly, a lot of people don't have this level of experience in a domain. When I do similar work internally at a company now, people are often surprised that I can come up with time estimates (and meet them) pretty easily.
And frankly if you get too smug about it in a work environment you’re going to antagonize someone into scripting your job out of existence. So I wouldn’t pull on this thread too hard if I were you.
I'm on the fence regarding this statement. If you are accumulating increasing amounts of fatigue by being there and writing bad code, then yes, go home, rest, and recover. But, optimal is the enemy of good. Write some shit code in your own branch, learn from the mistakes you made, so that the code you write tomorrow might be "the good code".
With scriptwriting (and often art in general) you first develop the "product" and then try to find a buyer. This is IMHO not a feasible model for individual software developers.
If we change the flattering comparisons, then why not? Instead of someone writing a script for a movie, imagine someone writing sales copy. Instead of a "painter" presumably working on some masterpiece, imagine someone in one of those Chinese factories churning out still life 3265.
Most programming tasks are well understood - most programmers in big corporations are working on CRUD apps. The correct comparison is not to some highly creative process but to some semi-creative process. Sure the details might be different but in the main it can be well understood how long it takes to add crud screen 26.
I guess pg himself popularised the flattering "programmers are just like painters" nonsense: https://idlewords.com/2005/04/dabblers_and_blowhards.htm
Taylorism was good for simple, repetitive, and immutable tasks, where the only planned improvement was to the worker. If the task was to be revised, it would come top-down from an 'analyst' who designed a new task and tested it on workers.
What changed this perception? Toyota-like factories, and the idea that tasks should progressively evolve and that workers can change their environment in non-top-down ways. The metric was no longer how much time a single task took, but how efficient the whole pipeline was.
It's just an example, but I think time tracking should be a relic of the past, as an idea that was too simplistic and too seductive to micro-managers to be really valuable.
It's easy for one person to increase in time-tracked productivity in ways that decrease the aggregate productivity of the group they are in.
Sometimes, the person is happy with their own work and unhappy with everyone else's, and they have a blind spot because it's so hard to understand how improving their own productivity can slow down other people so much to be net negative.
But I've seen it, I think it's real. I think it's sometimes a very strong effect, which can make or break a business.
Just like manufacturing pipelines, people affect each other's work in non-linear ways that clog up the system.
As with pipelines, performance improvement lies in working with those non-linear dynamics well enough to reconfigure the system, so each person can excel at their best.
Engineering practices vary wildly depending on what type of engineering you are doing.
That's one of the problems they had when developing BS5750 and ISO9000: "where do you hang the defect ticket" was one of the areas where they struggled. That's what the head of the BSI software quality project told me back when he was auditing a project I worked on back in the '80s.
I suspect the number one factor to a successful project is a happy, healthy team. I'm pretty sure there's a section in Peopleware that backs that up. I'd be really careful with any approaches that risk losing that dynamic (or preventing it from forming).
Also, for a programmer who hasn't coded for 5 years because they got into management, it is a huge source of anxiety and forces them to contemplate the obsolescence of their skills. Having to ask a junior dev how to compile the project you are supposed to oversee can feel humiliating.
When I had a team to manage, I tried to stay involved enough that I could run the whole build process. Most of my work was complaining about a broken master branch and regressions, praising the right person when a feature got implemented, shouting at higher management to stop adding irrelevant features every time they met a client, and holding a weekly discussion on how things were going.
We had one layoff during that period, on my recommendation. I was worried about team morale, but got told we should have fired that person earlier.
It's possible that a junior dev who showed you the button might then joke to other junior devs about you not knowing. But a smarter junior dev, wise beyond their years, will realize it's just bullpoop trivia, one of thousands of bullpoops one has to pick up and later discard in favor of a fresher bullpoop, and be happy to help.
In any case, you, as an experienced person, should know it's just a different flavor of bullpoop, and not the most valuable thing anyone knows.
Your decision to stay in the loop, with the bullpoop that your team is currently enjoying, sounds like it might come in handy. But I wouldn't want anyone to ever feel humiliated over bullpoop.
A build process is one thing; knowing C++17 or Rust is another. Knowing whether refactoring all these template structures was really worth 2 weeks is something that is hard to ask a junior dev.
Why not both? Time tracking with something like Toggl doesn't take much effort, and you can combine that with reviewing git activity.
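As a sketch of what "combining" could look like, here is a hypothetical Python snippet that lines up self-reported hours against commit counts per day. The data shapes are invented for illustration; in practice the hours might come from a tracker's CSV export and the commits from `git log`:

```python
# Hypothetical sketch: correlate self-reported hours with commit activity
# per day. The inputs are invented; a real setup might pull hours from a
# time tracker's export and commit dates from `git log --pretty`.
from collections import defaultdict

def daily_summary(time_entries, commits):
    """time_entries: list of (date, hours); commits: list of (date, sha)."""
    hours = defaultdict(float)
    for date, h in time_entries:
        hours[date] += h
    commit_counts = defaultdict(int)
    for date, _sha in commits:
        commit_counts[date] += 1
    # One row per day seen in either source: (date, tracked hours, commits)
    days = sorted(set(hours) | set(commit_counts))
    return [(d, hours[d], commit_counts[d]) for d in days]

summary = daily_summary(
    time_entries=[("2024-05-01", 6.5), ("2024-05-02", 7.0)],
    commits=[("2024-05-01", "a1b2c3"), ("2024-05-01", "d4e5f6")],
)
# summary == [('2024-05-01', 6.5, 2), ('2024-05-02', 7.0, 0)]
```

A day with many tracked hours and no commits isn't proof of slacking, of course; the point is only to put the two views side by side rather than trusting either alone.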
I fought the management culture hard for a few years, eventually realising that I needed to either become one of them, or quit.
It was kinda awkward initially, mostly for the people I had hired and been manager of who I was now colleagues with, but not for long. Hands down it was one of the best life decisions I've ever made. I feel as though the experience I gained helps me to perform my current role even better than I would otherwise be able to as I have an appreciation of the politics above me that I'd not have had I not been up there and back down again the way I have.
I've recently had to step down from a senior non-management role (tech lead) that my company forced management tasks into, especially things that no one else wanted to do. It didn't work and I ended up leaving the company. It was hurtful because I left believing I had a lot to contribute, and I was motivated too.
If you create a system to measure butt-time in a seat coding, you're going to get lots and lots of butts in seats coding. This may not necessarily result in a product.
Looking at Roy’s chart, it looked like he was achieving 99-100% Chicken Efficiency every day that month, which must be good. However, I still didn’t know what the measure entailed. Roy explained: “Chicken Efficiency is basically a scrap measure. You take the number of pieces of chicken sold and divide it by how much chicken you cook. If you sell all the chicken you get 100% chicken efficiency, which is the goal – no waste or scrap.”
When I asked Roy how he achieved near perfect Chicken Efficiency every day he answered: “I’m gonna let you in on a secret, but don’t put my name in the report saying you heard this from me. My secret is I stop cooking chicken around 7:30 and only cook to order after that. That way none of the chicken sits under the lights for too long, everyone gets hot and fresh chicken, and I don’t throw any of it away – 100% chicken efficiency.“
I asked, don’t you get people coming in here all evening until the restaurant closes? Roy replied: “Oh yeah we get tons of kids coming in here after baseball or soccer practices or games, families, and lots of people. I tell them I’m going to make their chicken to order and it will take 15-20 minutes.”
I asked, do most of them wait? Roy answered: “Heck no, they file right out the door and head off someplace else, but management doesn’t measure that. They do want to make sure I don’t throw any chicken away every day, however. My buddy Leon taught me this trick – he got promoted to run three restaurants by not cooking any chicken.” I went on to visit other restaurants and every one of them talked about the importance of Chicken Efficiency.
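The way this metric rewards the wrong behavior is easy to see if you write it down. A toy sketch in Python, with all names and numbers invented for illustration:

```python
# Toy model of the "Chicken Efficiency" metric from the story above.
# All numbers are made up; the point is how the metric can be gamed.

def chicken_efficiency(pieces_cooked: int, pieces_sold: int) -> float:
    """Fraction of cooked chicken that was sold (1.0 = no waste)."""
    if pieces_cooked == 0:
        return 1.0
    return pieces_sold / pieces_cooked

# An honest evening: cook ahead of demand, throw a little away.
honest = chicken_efficiency(pieces_cooked=100, pieces_sold=90)  # 0.9

# Roy's trick: cook only to order, so nothing is ever wasted --
# but the customers who walk out rather than wait 20 minutes
# never appear anywhere in the metric.
gamed = chicken_efficiency(pieces_cooked=60, pieces_sold=60)    # 1.0
```

The lost sales are invisible precisely because the metric only counts chicken that was cooked, which is the same blind spot as counting only hours logged.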
They start putting pressure on engineering managers to make sure their team is "working". Their team clearly needs more discipline and those 'slackers' need to be cut.
What they don't see is that the engineer tossing the ball figured out that the solution they were about to spend 2 weeks coding can be achieved by leveraging an existing library requiring only a couple days effort.
Maybe this is an unpopular opinion, but that wouldn't mean that the engineer shouldn't implement in a couple days and then work on something else. HN loves the trope of the super smart engineer who is smarter than _everyone_ and can look like they're doing nothing.
No matter how much your team delivers, if they make it look easy, folks think it's easy!
If they are running around, sweating, stomping out fires, looking seriously stressed, then my gosh they must be working really hard. :-(
Actually this is how RAD/DSDM stand-ups are supposed to be done: at the end of the day, so everyone knows what they are working on tomorrow.
Writing code and working on software is not a linear process, so writing "worked on feature X" for your entire morning may be accurate, but it's not fine grained enough for what the managers want to see and it doesn't really serve any purpose at that point when you already have daily stand-ups and sprint meetings every week.
This likely isn't the case at all companies and with all managers, but I suspect many have the same experience I've described.
The dynamic goes like this:
1. The manager is responsible for the team's output/metrics.
2. Metrics are driven by improvements to the product.
3. Software functionality is hard to estimate (not a widget factory as other comments mention).
4. The manager gets anxious, as they will be judged by an outcome not entirely within their control.
5. Knowing everyone is putting their best effort increases confidence that team will reach goals by which the manager will be measured.
The manager evaluates team members on longer intervals depending on past performance. This allows the manager to course-correct before the team is evaluated. The seniority/past-performance of a team member determines how much the manager trusts them to spend the team's resources (time; man-hours) to accomplish the team's goals.
The brute-force approach to making sure people aren't slacking is watching them work/check in/report on progress. You can quickly tell if someone isn't putting in effort after a couple of standups/reviews with less-than-expected output or 'yet another excuse'.