You've only added two lines – why did that take two days? (mrlacey.com)
964 points by gregdoesit on July 14, 2020 | 507 comments



A variant of this that has driven me to quit more than one job is having a non-technical manager look at a UI prototype and consider that 90% of the solution. "The UI guys had this page ready two months ago! Why doesn't this work yet?" It's even worse when you present a working prototype. They simply don't understand that the backend functionality is what's doing the bulk of the work, and just because you can see something, that doesn't mean it's secure, performant, scalable, or even functional beyond demoing with dummy data.


Early in my career, I learned a simple 'demo day' rule: never demo things that aren't end-to-end done.

When you do, it can easily confuse folks who aren't deeply involved in your project ("haven't I seen this already?") and can hurt team morale because they never get a "shipped it" moment that feels good.

More to the point: enforcing this rule incentivizes teams to build things in small, shippable components. Nobody wants to be left out of demo day multiple weeks in a row.


Also, never use the word "done" in any context in a meeting like that. Do not even say: "I'm not done". They won't hear the "not". Say, "Development is still in progress" or something similar. I got chewed out for something being released (where it was found to be broken) to a customer because I said something like: "I'm about 80% done with testing, but I haven't run into any issues yet." They released it even though it wasn't ready, and both the customer and I found that there were issues in that last 20% I hadn't reached.


> Early in my career, I learned a simple 'demo day' rule: never demo things that aren't end-to-end done.

I take a different but similar approach when I run into situations where I want to get something small in front of a business user before it’s completely ready: I make sure it’s visibly broken in a way that doesn’t detract from my goal for the meeting.

For example: I have a registration form that I want to talk through. The state drop down will only have 3 entries, one validation error will always display, and on final submit you get a “failed” alert instead of a dummy page.

It lets me walk through the page and get whatever feedback I need, but it feels completely broken so non-technical users expect it to take more time to conplete.


While I agree, the feedback I get from a session like that is useless:

"While you mentioned this during the demo, I noted: 1. Your state drop-down has only 3 entries. 2. There is a validation error. 3. You get an error at the end.

We need you to fix this ASAP !!!"


Yup, that'll happen, and if you omit it entirely you can get comments going "I'm not seeing this field!".

Would it be better to show no progress at all until it's completely done? I know agile methodologies tell you to demo regularly, but I'm more and more under the impression that they are to provide progress feedback / reports to management.


I remember someone suggesting using deliberately crude and hand-drawn looking UI elements in the demo. That communicates to non-technical users that it's just a prototype.

Something like https://wiredjs.com/ might work, if you are building a web-ui.


I did this, but by not fine tuning the CSS until the end. Just use black and white, leave off the border radius, and make it look "unfinished".


Fabulous idea. I will definitely use this for future early-stage demos.


Honestly, that feels like a huge waste of your own time to game a broken system. And I'm guessing it is, but that it's a defensive move because the darker timeline is miserable.

Sucks it has to be this way so often.


What's broken are humans. For some psychological reason, a thing that looks unfinished gets much higher quality feedback than something that looks polished. There's no way around that flaw in people, so making prototypes look unfinished is something we have to do anyway!


Perhaps because “unfinished” offers stakeholders the opportunity to make major contributions, whereas “finished” means it’s already too late to make changes now so anything they do say will just be ignored.

Yeah it’s all perception, but cultivating the right perception is vital to effective, productive communication.


It's an example of the Doorway Effect: brains are wired to work in modes related to context cues. Change the context, and you'll change the memories and trains-of-thought easily accessed.

Polished things have a context of "use", not "evaluate", because we use so many polished things every day, but very rarely have any need to evaluate them (mostly only when we're buying them, or when a repair person asks us to describe what's wrong with them.) Whereas unpolished things are mostly "for" evaluating; it's rare that people use unpolished things (outside of, say, disaster-relief infrastructure.)


> For some psychological reason, a thing that looks unfinished gets much higher quality feedback than something that looks polished.

Does this really imply “humans are broken”? We all have limited resources and I think this could just be a first order prioritization mechanism. Of course one has to be aware of the bias but that’s a different problem.


Here's another way of thinking about this:

it's not a way of gaming the system, but a way of prompting better feedback. For instance, in a Design Thinking process prototypes are more useful when they look rough/unfinished. A more polished piece of work might result in people being afraid to break things.

I use both approaches in my work:

1) demo small, tested bits during show-and-tells and any meetings where the point is to demonstrate progress.

2) demo large, unfinished and barely stable pieces of work when ideating, trying to figure out the next steps

2) is hard and works only if:

- we know what the unstable parts are (because we have tests, so we know the gaps)

- we know the audience and how much context they have


I've done this on quite a few occasions as well. It isn't necessarily time wasted, as you evoke a positive view from the client (progress towards the solution) and often get valuable feedback.

The alternative is that they think you are done, adding more pressure, and requiring more wasted time later explaining the mis-aligned expectations.


If he's talking about JavaScript, an alert dialog is a single line of code, so I don't think it's that bad. Also, as long as frontend and backend have agreed on what certain object structures will be, hardcoding shouldn't be an issue.


Spot on! I like the “conplete”. I hope it was intentional.


Or even better: a Freudian slip!


I don't do that. When I demo something I need feedback on, I demo the best version I've got. If it still requires work, I say so. This seems to work quite well for me.


Did you really get chewed out? That seems pretty much like a deal breaker for me. Managers are collaborators, not parents, and have no place talking to co-workers in such a diminutive manner.


Yes. He came to my desk a week or so after he okayed the release of the software and took me to a conference room. I don't know how long he chewed me out for because I was red with rage but too terrified of losing my job to say anything. I've experienced similar rage with only one other manager [1]. He was 3 levels above me in the management chain, I really liked the two above me (the test manager, as I was in test at the time, and the software manager) and the guy above him [0]. I wanted to stay with them and I had a good friend there as well. But I did apply for a new job a couple months later, and left by the end of the year.

It was my first professional development job and I learned a lot of things. One of the keys was this:

Your boss and employer are not your friends. When you promise something to your friends, you owe it without any expectation of reward or return. But your employer owes you something for everything you give them. If they fail to meet their end of the bargain, that's on them and you are absolutely 100% free to leave. That can be money, time off, good working conditions, respect, trust, or any number of things. For me money was not primary (though it is nice): I wanted a good work environment, interesting work, and respect. Fail any of those and I will be looking for an exit, even if it's just a transfer within the same company. But respect is first in that. A boss who shows little respect to his employees is not someone I want to be around. I had that boss who chewed me out, I had another that kept me doing busy work for a year and kept saying worthless platitudes like "you're important to our work, we can't do this without you" when everything pointed to that being false. He was just empire building and trying to grow his direct report count, but it hurt those of us under him because he didn't have the work (but he did have the money) to keep us there. (I was still relatively junior at that job and stayed longer than I should've.)

[0] The hierarchy was something like: VP of Engineering (several divisions below him) -> Chief of Engineer (Aviation) -> Chief of Software (Aviation) -> Chief of Testing (Aviation) -> Me; other production lines had similar chains below the VP.

[1] When she was my manager I quit, or more accurately transferred to another group. Years later, I quit my last job for several reasons, one was her. She was not in our division while I was in that position, but they'd just hired her on. When I heard the name I had to verify, when I confirmed it was the same person I was ready to exit.


Reading your comment reminds me of how sheltered and lucky I've been in my career. If my manager released something after I said I was 80% done with it I'd go have a talk with him about how "we" can avoid making that mistake again and what process needs to change to prevent similar errors.

If he tried to chew me out, I'd just say "That's not how I remember things. I said I was 80% done, and, frankly, it was a mistake for you to pull the trigger on the release without confirming with me first."


Yeah, I'd have the same conversation with my manager and she'd apologize. I don't think you have been lucky, such a reaction is just a normal adult civilized reaction.


This kind of thing tends to happen when young and inexperienced. After it happens once or twice we have the experience and clarity to speak up.


> that's on them and you are absolutely 100% free to leave.

Not if you are on an H1B visa ;-)


Well, that gets to my other lesson: Save money aggressively so I'm not beholden to any employer. It's harder when you're taking care of a family on one income (and not making FAANG money), but I saved aggressively when I was young and single. I never actually had to use that option, but I was in a position where I could have lived (as a single healthy guy with no debt) for 4-5 years without needing a paycheck. I wouldn't have lived well mind you, it would've been tight to stretch it that far, but it was possible. My main mistake was putting too much in my 401k. I'm set when I retire, but I still don't have enough assets to get me to retirement.

This works even when you're on a visa, but it can take longer to be truly comfortable. I had a classmate in college who'd secured a work visa. His plan was to work for perhaps a decade in the US, save aggressively, and then return to his home country. Salaries in the US were 10x higher at the time than in his home country. Saving even 20%/year meant saving 2 years of home-country income every year he worked. And he was frugal enough to save a lot more than that. Invested well, taking what he learned back home, he would be set for life at this point (I didn't stay in touch, so I only know the plan started well, not how it turned out).
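A quick sanity check of the arithmetic in that plan, as a sketch (the salary figures are illustrative assumptions, not from the comment):

```python
# Illustrative check of the "save 2 years of income every year" arithmetic.
# Assumed: US salary is 10x the home-country salary, savings rate is 20%.
home_salary = 30_000            # hypothetical home-country annual salary
us_salary = 10 * home_salary    # "salaries in the US were 10x higher"
savings_rate = 0.20             # "saving even 20%/year"

annual_savings = savings_rate * us_salary
years_of_home_income_per_year = annual_savings / home_salary
print(years_of_home_income_per_year)  # 2.0
```

The ratio is what matters: at a 10x salary gap, any savings rate r corresponds to 10r years of home-country income saved per year worked.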


Look up the 72(t) rule relating to a 401k. You can retire early and start withdrawals at any age without penalty, as long as you continue the withdrawals for sufficient time (the longer of five years or until age 59.5.) This fixes the "too much in the 401k" situation.
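For concreteness, the 72(t) fixed-amortization method computes a level annual withdrawal with the standard loan-amortization formula. A sketch with made-up inputs (the IRS publishes the actual single-life expectancy tables and caps the interest rate you may assume; this is illustrative, not tax advice):

```python
# Sketch of the 72(t) SEPP fixed-amortization arithmetic.
# All inputs below are hypothetical; consult the IRS tables for real numbers.
def sepp_annual_payment(balance: float, rate: float, years: float) -> float:
    """Level annual withdrawal that amortizes `balance` at `rate`
    over `years` (the account owner's remaining life expectancy)."""
    return balance * rate / (1 - (1 + rate) ** -years)

# e.g. a $500k balance, 5% assumed rate, 34.2-year life expectancy
payment = sepp_annual_payment(500_000, 0.05, 34.2)
# payment lands between pure interest (balance * rate) and a naive
# balance-divided-by-a-decade figure, as an amortized payment should
```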


The problem I see is that if you're 30 and use that rule, you are basically forced to drain your 401k over the next 29.5 years.


Yes, but you don't have to spend it all. You could take just what you need to live on and invest the rest elsewhere. You'd give up some of the tax-deferral advantages of the 401k, of course.

(Perhaps you could roll over part of the 401k into an IRA first, and then take the rest as 72(t)?)


> My main mistake was putting too much in my 401k

Can you expand on this? I recently moved to the US and people keep telling me to get a 401k, but I haven't yet.


A 401k is specifically a retirement account - you pay an extra 10% for withdrawing from it before age 59.5, which should add context to the rest of their paragraph.



Less than full liquidity. If you get a decent matching contribution from your employer, it's probably worthwhile. Otherwise, it may be kind of a toss-up.


At a minimum, you should save enough into your 401k to get the company match. Let's say the company matches 50% on the first 6% you save. So if you put 6% of your salary into your 401k, the company will add another 3% on top of what you did. That's a free 3% raise. Tell me another way you can get a guaranteed 50% return on your money.

In addition, there is the tax advantage: you do not get taxed on that money until you retire, so your money grows untaxed until you actually need it.

Are there bad 401k plans? Yes, but I think very few. Check the rules of your company. Most of them are very good deals, and you should take advantage of them if you can.
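The match arithmetic above, spelled out with a hypothetical salary:

```python
# 50% employer match on the first 6% of salary (hypothetical numbers).
salary = 100_000
employee_contribution = 0.06 * salary          # you save 6% -> $6,000
employer_match = 0.50 * employee_contribution  # company adds 50% -> $3,000

print(employer_match / employee_contribution)  # 0.5  (a guaranteed 50% return)
print(employer_match / salary)                 # 0.03 (the "free 3% raise")
```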


Another idea aside from living 4-5yrs on savings is to save towards F.I.R.E and get financial independence for a lifetime https://www.investopedia.com/terms/f/financial-independence-...


>> Not if you are on an H1B visa ;-)

But one must also 100% expect to thoroughly evaluate the T&Cs, pros, and cons before signing up for anything, including a visa.


You can find a new job though.


> He was 3 levels above me in the management chain, I really liked the two above me (the test manager, as I was in test at the time, and the software manager) and the guy above him [0]

I happen to get along really well with the person three levels above me, but I can't imagine dealing with getting direct negative feedback from him. Honestly, more of my conversations with him over the years have probably been about things unrelated to work than work-related; we don't just hop over my direct manager and his boss unless there's a really good reason for it.


Parents shouldn’t talk to kids in a diminutive manner either for that matter!


Parents are not supposed to talk to kids the way chewing-out managers talk to people. Also, business-oriented managers are not collaborators in my experience, and they take offence at the very idea.

And yep, I was actively avoiding that environment too.


One thing I learned very early in workplaces where there are "project management professionals" is to NEVER casually mention "dates" and "completion" anywhere near each other, even by implication.

Someone is bound to jot that down and interpret it as a hard commitment regardless of any other details or conditions.


I think I've spent most of my mentoring time in various careers teaching new folks that the worst thing you can say is "done, except for".


In a company that understands and embraces agile software practices, this works well. You demo small things that are done, and prototypes are understood as just mockups designed to drive future work. Alas not everyone in power gets it. In more egregious cases, I've been in adversarial environments where teams were pitted against each other to appear "more done." Obviously a recipe for failure. I'm fortunate enough to be able to be more selective in where I work now, and have the experience to drive the choice.


Ugh, agile, scrum, some-other-magic-words.

I've come to realise this: "true" agile is like being funny or smart (not that I am either). If you have to tell people you are smart or funny, you probably are not. Ever notice how smart people (the really clever ones) are just absurdly smart without walking around telling everyone "hey, I'm smart"? Usually the nicer they are, the more intelligent they are (yes, you get exceptions). Same with funny :)

I feel it goes double for "agile processes": if you have to walk around and tell everyone (management or interviewees) how agile your process is, it probably isn't.


It feels like "agile" is such an overloaded word these days. In most settings it just means a specific workflow centered around Scrum or to a lesser extent Kanban. The only true difference from old school waterfall and month long specs is shorter iteration cycles. A "sprint" or "iteration" is still treated as a rigid block of work.

On the other hand, if you have a look at the original Agile Manifesto[0], it is a different beast altogether. It specifically seems to go against using set processes at all, and basically boils down to nurturing organic communication, adapting, and focusing on getting shit done.

I suppose the "agile" in the former sense is a compromise to edge closer to the latter, while still maintaining a familiar corporate structure.

EDIT: [0] https://agilemanifesto.org/


That's...a great point, actually. The most humane (and incidentally, agile) workplaces I've met didn't mention agile much: "yeah, we do this and that, we just want to have a sane environment."

The places that went "we do all the agile incantations in the book, because that's the only way," well, those actually had a scrum-o-fall culture.


Yeah, but there's a big difference between working on a TV show that's meant to be a comedy where people are trying to be funny, and working on a TV show that's meant to be a drama that isn't a parody. Both might have the same "actual goal" of trying to get good ratings, but if you're trying to put jokes in the show that's meant to be dead serious because you mistakenly thought everyone in the writer's room was trying to be funny...

If you want to be working with a team trying to be funny, at some point you have to use the word "funny" and "comedy" to make sure you all are actually trying to do the same thing. And make sure the producers and show runners agree that it's a comedy you're making.

Same with agile.


Spot on. I believe there is a word for this.

If a country explicitly calls itself "democratic" in its name, it's likely not.


I've been with a few companies that waterfall in two-week cadences and call it Agile...

They usually expect a fully working demo


There's no greener grass on the other side. I've been in the big consulting business, and sometimes you can't get a client to schedule a meeting for months, so you're flying blind, with absolutely no feedback. And then we go into the typical cycle of "Lessons Learned" etc, because what you've developed is irrelevant to the client.


That's a funny comparison to demo days early in my career. The company I worked at had too many upper managers who would constantly hijack dev teams to make "cool" features. We only had to make something good enough to appear like it worked in a demo, though, because then that upper manager would get his bonus or look good to the CEO or CTO, and we could move back to doing real work.


I'm still conflicted about this. On the one hand I agree. On the other, I wish we could educate people enough for them to understand Proof of Concepts and Minimum Viable Products.


It depends on the audience. A customer demo is different than an internal demo, and I assume you are referring to an internal demo.

You must demo the product in a form that leaves the right impression of the current state. If you are painting a picture of a polished product, expect polished expectations. Instead, show the bugs and say "we are still working through this section" Show missing pages, show your work in progress. Show wrong colors. Show potential, wave your hands and tell them to imagine this part working. Don't fake it. If that feels wrong, then a demo isn't right.

If you show it in a form that looks complete and polished, even if you say it is not, how can you expect any other conclusion from the viewer other than "it's practically ready!"?


Great point with the visual appearance: show what you currently have, just include ugly.css. People are wired to understand Comic Sans and hot pink on green intuitively.


I can only agree with that. It is kind of just being honest. I don't know where this idea comes from, of showcasing our work as better than it actually is. Is it our own ego, with showing unfinished/buggy work making us question our own competency? Is it the fear of getting negative feedback because it is buggy and unfinished? Is it a deeper issue, society making us believe that appearance counts way more than it should?


If they could be educated, they wouldn’t be managers.


> incentivizes teams to build things in small, shippable components.

Isn't this a bit of a fallacy? Not everything can be broken down into chunks of work that fit into a single sprint.

There's a reason I stopped bothering with Scrum a while ago, and this is high on the list.


There is a reason that teams with mostly non-technical leadership deliver broken software: your choices are feature-driven architecture with absurd glue, or being asked "why isn't it done yet?"


This doesn't require that things fit into a single sprint. It just incentivizes teams to ship their work in smaller chunks.

If it helps motivate a team to divide some four-sprint piece of functionality into two shippable chunks that each take two sprints, I'd call that a win. Customers get something a bit faster even if it's not single sprint-sized.


Reiterating what GP said: not all dev is like that. The less you are doing IT and the more computer science, or any unfamiliar territory really, the harder it is to break work down ahead of time.

Many times I can't tell you what tasks need to be done, let alone how much time is needed, or how to break the work into smaller chunks during the planning phase.

For planning to work, you should be familiar with what you are building. With poor information on the bug/code/stack, agile planning is just useless overhead. It works great for yet another CRUD app, where you know the requirements to the dot and know exactly how to build or fix things, but not always, and most management fails to differentiate.

All the reasons in TFA are also why it is hard to estimate what a fix involves and how much time it will take.


When you’re doing exploratory work, the discipline of stopping, examining what you have learned already, deciding whether the goal still makes sense, and correcting course and reprioritizing seems even more important to me. You don’t know what you will be doing more than a few days ahead? Then your sprint length should be a few days and at the end you recombine and replan.

I mean obviously this only makes sense as a way of organizing a team who are trying to build something exploratory - that’s what scrum is meant for. If you are trying to pursue a solo research project within a team the rest of whom are doing scrum then... that’s not a problem agile can solve.


It is not only research that is exploratory; even the kind of bug fixes the article talks about can be hard to predict. Replicating an issue or understanding a new module can be uncertain; race conditions or data-specific issues are uncertain too.

Identifying and solving something similar to [1] with a team is simply not possible when you plan with agile. I am likely going to move on to the next item once I mitigate the effect, without bothering to dig deeper, just because someone is clocking me on a timeline I committed to.

It kills all the joy and fun; work becomes boring. This is by design: it is hard to run an organization unpredictably. If only management trusted you to deliver without constantly looking over your shoulder (when the situation warrants it)...

It is not only an engineer's gripe; it applies to management too. The board/market forces them to be very short-sighted; unless you are Musk/Jobs/Buffett, it is hard not to buckle to market pressure and to keep investing in longer-term opportunities.

The point is not that planning is bad; it can do a world of good in many situations, including unpredictable ones. The problem is blindly pushing a framework, especially agile, because it worked somewhere and everyone says so, or because the manager can't be bothered or won't risk doing something different as the situation warrants.

[1] https://cloud.google.com/blog/products/management-tools/sre-...


Ah - you’ve been subjected to management-by-scrum. I am sorry.

Scrum is a collaboration hack for creative problem solving teams, not a managerial accountability tool. The version of scrum where stand ups are for checking on the team’s progress and velocity is reported on up the org is using the tools of scrum to solve a very different problem than the one that it was designed for.

I’m sorry you don’t believe it’s possible but I can tell you from experience that it is possible to use the processes of scrum and the principles of agile to help a team collaborate on open ended creative problem solving tasks.


Maybe it's finally time to give up on Scrum. Whatever good intentions the original inventors had, and whatever idealized situations it may work in with perfect unicorn teams, the general experience of this benighted framework in the wild is as a tool in the hands of mediocre, inexperienced managers to micromanage, infantilize, and monitor developers. It's essentially warmed-up command-and-control Taylorism with some feel-good buzzwords and neologisms thrown in. I would even prefer the old horrible Waterfall approach; at least we didn't have all the endless, pointless groomings, standups, and retros to attend, along with the "we so agile" gaslighting.


And if you are that familiar, why are you building instead of buying?


Not to put it too bluntly, but can you not do even small bits of work in 2-3 weeks?

Sometimes your demo is nothing more than 'here it is in the log doing xyz' or 'I added this thing to this config file'. Not all demos are big flashy ordeals. The team I am on right now most of my demos look exactly like that. I can usually do them right after the standup. Our team allows it because we are mostly remote and talking to each other helps.

I personally use scrum as a weapon to make sure management does not overload our teams. Those made up story points are a good way to say 'you have tasked us with 4 months of work in 2 days'. You have to know your manager too. You have to talk to them. Know what they are looking for. Some take a very hands off approach. Some want the nitty gritty details. For both of those a 'oh that is going to take 3 months' may sometimes work. But it does not give them actionable items to help you. The task broken down into some sort of chunked out work does. Sometimes you do not know. It is OK to admit that. That is when you make a discovery story. Make sure they are onboard with that story is to help you find out what is needed. Even then you will still learn along the way.

I worked with one guy who wanted to task things down to 15 minute increments, 6 months from now. He kept failing. Because he was being too narrow. He refused to do story points. Because they were 'stupid' yet management kept piling more stuff on him to fail at. He was in every weekend and in until 9PM every night. Because he had no tools to push back. Give your management numbers and actionable items or they will assume everything is hunky dory.
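The "4 months of work in 2 days" pushback above can be sketched as simple capacity arithmetic, translating requested work into calendar time via historical velocity (all numbers here are hypothetical):

```python
# Story points as a capacity argument (hypothetical numbers).
def sprints_needed(requested_points: int, velocity: int) -> float:
    """Sprints required, given the team's historical velocity per sprint."""
    return requested_points / velocity

requested = 120   # points' worth of work management just asked for
velocity = 15     # points the team actually finishes per 2-week sprint

sprints = sprints_needed(requested, velocity)  # 8.0 sprints
weeks = sprints * 2                            # 16 weeks, i.e. roughly 4 months
print(f"{sprints:.0f} sprints ~= {weeks:.0f} weeks")
```

The point of the exercise isn't the precision of the points; it's that dividing by an observed velocity converts "just do it" into a number management can act on.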


Why would it be?

For all the things that can be atomic like that, it's good practice.

The bigger ones just take time, and aren't shown, until ready for use.


It's an anti-pattern to simply not demo any progress until done. If you're doing agile right (loaded statement), the solution is to make sure everyone understands what's being done. If the audience is expecting all demos to show complete products, find a different audience to demo done-but-incomplete work. The idea is to get feedback before you've sunk six months into something that may not meet expectations.


Yeah, I could've been more precise about what I meant by "demo day." In my comment, I was thinking about the broader, often companywide demo days that many companies hold.

I wasn't talking about intra-team demos to, say, product owners.


"can we sell it" demo day then. Makes perfect sense.


I totally agree. Having weekly demos where we showcase unfinished features has been, in my experience, one of the main avenues where we have learned of issues / conflicts / problems with those features at exactly the right time where we could still fix them.

Not having demos of incomplete features would just hide the issues until they are released to the final customers, creating a problem you didn't have before, and making it much more complex to solve.


That isn't what I got out of the comment.

Progress gets quantized. Some quanta are small, easily shown, etc... other quanta are a bit bigger, less easily shown.

There is a similar problem in manufacturing.

Atomic releases.

While making something, there are many subtle tweaks to the BOM. Changes, substitutions, removals, adds.

Upstream people can make a real mess out of all that, and one way to prevent it is to only deliver releases that are resolved and intended for manufacture.

"where is revision 4?"

Doesn't exist, won't get manufactured, etc... "Use Revision 5 plz."

For the case of ensuring expectations get aligned, a mock-up can be used, deliberately, to generate a spec.


"Shippable" is not equivalent to "done", in any way.


> Not everything can be broken down into chunks of work that fit into a single sprint.

In my experience everything can be broken down if you spend five minutes actually trying to break it down. And the benefits are very much worthwhile.


How much brownfield work have you done?

People can hide the fact that they have a big ball of mud for a very long time, and they only want to talk about improvement after hunts have gotten miserable.


> How much brownfield work have you done?

4-11 years depending on exactly what you'd define as "brownfield"

> People can hide the fact that they have a big ball of mud for a very long time, and they only want to talk about improvement after hunts have gotten miserable.

True but beside the point. The same point stands: you can always find a way to make a worthwhile improvement in two weeks - something that's useful on its own, even if it's also the first step of a much bigger improvement plan.


Consider, then, that your experience might be limited in ways that you're unable to see due to that experience-bias.


Unlikely. I used to go looking for tasks that couldn't be broken down; I'd get excited when someone would claim that their task couldn't be broken down. But they always could, and it was never even hard.


My task is to implement a model that takes advantage of unified field theory to simulate arbitrary bodies in spacetime, first the mathematical models behind it need to be created then implemented in software.


> My task is to implement a model that takes advantage of unified field theory to simulate arbitrary bodies in spacetime

Sure, sounds straightforward enough. Start with simple cases (e.g. universe is a unit circle), you can definitely implement useful pieces within two weeks.

> first the mathematical models behind it need to be created then implemented in software.

That's not a real (i.e. user-facing) requirement.


You should follow your own plan, a Nobel prize in physics awaits.


There's no money in doing useful incremental pieces of physics (and precious little even for the big milestones), unfortunately.


The problem with this is that to produce quality, you actually need iterative feedback from stakeholders and users/user representatives. If you don't show anyone anything until it's "done", you are doing work in a direction that would have been better informed by more feedback.

(Cause another lesson is that people have a lot of trouble giving worthwhile feedback on a verbal/written description of something, they gotta see a thing in front of them).


But I think that's still possible. For example when testing a new idea that just takes some months, it's pointless to show an alpha version where half of the buttons create error messages - even if it solves a much more valuable problem and these errors could be ignored. Instead one should make a presentation of a mockup or a screen recording how a single feature will work. That's much more digestible for everybody and gives far better results if the one watching has anyways only 3 minutes time.


> Early in my career, I learned a simple 'demo day' rule: never demo things that aren't end-to-end done.

I learned this adage years ago (I think it was from Spolsky), but nowadays I'm in a project (re)building a UI from scratch as a sole developer aaaand I made the same mistake.

I was doing some UI prototypes about activating a process: big green Activate button, opens up a confirmation dialog, spinners with some artificial / simulated delay because I hadn't done the back-end. It caused confusion with our tester because she was wondering if activation actually worked.

I've got three options: remove the button for now (I should do that), partially implement the activation (changing a status in the back-end), or fully implement the activation (which has a lot more prerequisites).


Or intentionally show them the rough edges. "Oh, oops it can't handle that input yet, let me try with something that does work."


> Early in my career, I learned a simple 'demo day' rule: never demo things that aren't end-to-end done.

I've learned the opposite. If I communicate clearly that what is being shown is little more than a mock-up, executives of all levels and technical skill, all the way down to managers just above myself, all understand that a very thin and scripted demo is not anything close to a finished product.

Describing the demo as a "house of cards" that will collapse with a single misstep gets the point across nicely, while also giving the demo audience an eyeful of what can be accomplished if everything is handled appropriately.

Demos are carefully scripted and rehearsed, values are hard-coded, and absolutely nothing exists that does not prop up the demo for the purposes of the script and the talking points.

It is hard to describe to someone who hasn't written an application demo like this just how little actually exists behind the UI.

Anyway, my point is that if you choose the correct words, anyone can understand that it's like a painting of an application, and not an actual application, just like a painting of your mother is not actually your mother.


On my team, we call this the "Jobs Rule" (i.e. Steve Jobs). As opposed to the Elon rule: promise the moon (or Mars).


For me, the purpose of the "demo" is to get agreement on specifications and to remove ambiguities. This can also be used to reassure that the requirements of the client have been well understood.


That's excellent advice. I can see how demoralizing it could be to work on something you've already demoed.


I currently do the opposite and I think it's the right way. Do the part that can provide quick validation first. Usually that's the front-end. You can find out if you're making the right thing pretty fast that way.

What are they going to do? Fire me? I can go anywhere. They can't find me anywhere.


One of the most important lessons. It must be presentable, however much extra effort and time it takes.


I was on a call and the client said they wanted something added to a UI.

So I very quickly used MS Paint to mock it up... just so I could clarify that that was what they wanted. I shared my screen and someone said "great, you've done it!". Even though I was clearly showing a screen of me editing a screenshot of the UI, in MS Paint...

MS Paint.

sigh, they don't tell you in University that the biggest skill you will need in this job is patience and learning how to channel your inner zen.

Developers don't need 3 monitors to get through the working day, they need regular sessions with a psychiatrist.


> Developers don't need 3 monitors to get through the working day, they need regular sessions with a psychiatrist.

Amen. I can see this as dev perk in job ads.


You could have said: yes, the button is done; but do you also need functionality when it gets pressed?


We had a hard and fast rule at my last job. ALL demos were either 100% real, or were mock-ups from Balsamiq. If it looked like someone doodled it on paper we didn’t have to worry it would be taken as working.

We came to this rule after far too many incidents where some sort of mock-up (Photoshop, HTML, whatever) was shown and taken as done. Then we got the questions (possibly unhappy ones) about where it was and when it would be done, because obviously "it must be" since we "showed it".

The rule served us incredibly well for years. It put a hard stop to all miscommunication. Everyone understood exactly where we were in the project. Either we knew what it was going to look like, or (if it was ready) it was done and awaiting their approval.

Shortly before I left we got a new graphic designer. He wasn’t embedded with the programmers. He didn’t know THE RULE. Sure enough, we were asked why a design we’d never seen before wasn’t ready yet. Because he made a mock up in Photoshop. He told them it was a mock up. Doesn’t matter.

We basically follow THE RULE at my current job. It still works wonders.


Ah yeah, I had a marketing/sales friend who did that the other way around: he made nice mock-ups, he even built powerpoint presentations that seemed to show the UIs actually working; he made credible ads and sales documentation with it, and he sold countless non-existing snake oil products this way ("available soon!").

As he simply was the marketing/sales guy, in case he found himself cornered he could always pretend that the failure was on the dev/technical side... "see with the support".

He could also successfully sell himself this way: once a billionaire offered to make him the sales director of some company. He set the meeting at 7 AM at the Ritz bar, and got the job (not for long: he had negotiated to keep the perks when leaving anyway).


For mockup images, you can use an unmistakably prominent watermark.

Superimpose the word "MOCK-UP" in a gigantic font that takes up nearly the entire image.

Make it translucent and red or make it black outline. And maybe tilt it diagonally to catch attention and to make it easier to visually separate it from the rest of the image.

To streamline the process, you can just keep an image like this around to import as a topmost layer into other images.


Honestly? I’m not sure that would work. As sad as that is.


I’d like to pick your brain on how you got this started. My current company seems to run into this problem occasionally, and we use Sketch for a lot of our UI designs.


I was there at the time but to be honest I don’t remember exactly. I know that we used to use Photoshop for mock ups. I think one of the other developers found Balsamiq when they were looking for something that would be less work and we found the side benefit that the doodle looking mode prevented confusion.

Whether we realized that immediately or only after we stopped having misunderstandings, I'm not sure.

But once it was realized I don’t think it took very long at all for it to become a rule. It made life so much easier for us developers.


Our non-technical stakeholders are almost guaranteed to have some minor design thing they jump on, so our equivalent was to show them only things that were functionally done, no matter how un-polished. That way they were the ones blocking any release.



Never heard that name, nice to know.

We sometimes have to do that in technical reports. The proofreaders always feel like they have to give at least one comment, so you give them a clear mistake to point out to avoid useless debate over minor points.


If you can do it that sounds like a very good strategy. We needed to be able to show mock ups of what things WOULD look like before we even start at the project so that everyone was on the same page. So we couldn’t develop functional prototypes or full working implementations to show.


Yes, very much like the time I replaced a getfakedata() method with a getrealdata() method and then management complained that it was much slower now.


After this (and a few emergent race bugs), I started burying the equivalent of setTimeout(() => getFakeData(), 1500) in my similar code.

Best part is, I'm almost certain to beat 1500, so I've gotten compliments that it "feels snappier".
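The pattern is easy to sketch. This is only illustrative (the function names and the 1500 ms budget are assumptions echoing the comment above, not anyone's actual code):

```javascript
// A sketch of the trick described above: the mock data source is wrapped
// in an artificial delay, so when the real backend replaces it, the app
// is likely to feel faster, not slower.
function getFakeData() {
  return [{ id: 1, name: "placeholder" }];
}

function fetchData({ delayMs = 1500 } = {}) {
  // Resolve with the mock payload only after the simulated latency,
  // mimicking the round trip the real API call will eventually cost.
  return new Promise((resolve) => {
    setTimeout(() => resolve(getFakeData()), delayMs);
  });
}

// The UI awaits fetchData() exactly as it would the real API, so swapping
// in the real call later changes nothing upstream of this function.
fetchData({ delayMs: 50 }).then((data) => console.log(data.length));
```

The key design point is that the caller only sees a promise, so the artificial delay and the real network latency are interchangeable from the UI's perspective.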


This reminds me of the infamous Speed-Up Loop. https://thedailywtf.com/articles/The-Speedup-Loop


And this reminds me about another story, I can't find the link but it was something like this:

A game developer was making a game for PlayStation and they were over their memory limit. They were approaching a deadline but couldn't remove anything else from the game to fit it in memory (or on disk, I can't remember). So a senior dev came by, changed the code in a minute, and everything fit into memory.

The thing was that at the start of each project, he had declared a 2 MB variable that did nothing, so when every optimisation had been done and it still didn't fit, he could just remove that variable and free up some more space.

It was also his insurance policy.


There’s an episode of “Star Trek: the Next Generation” where Scotty, the engineer from the original Shatner Star Trek tells the next-century engineer LaForge that this is how it’s done. You never tell the captain all that you have so you keep something to squeeze at the fatal moment.



I'm surprised I haven't seen this before, this is great.


This is sound advice from both technical and nontechnical perspectives. I like to use this middleware in node to inject semi-realistic delays in my mock data: https://github.com/boo1ean/express-delay


A lot of people are building big systems these days, and in many cases we have a guess at what is a reasonable amount of time for all of the steps in the process to take if we want an answer in 600 ms.

While your trick makes you look good, setting the times to match the budget might be more honest. And when the app slows down you can blame the people who take 250ms to do their part when we agreed to 100ms.


This is a UI trick, I couldn't imagine doing this for backend services but then again, I've never been asked to demo those!


100ms?! That’s insane, if we set the bar at 1000ms then maybe 25% of our requests will qualify.


Really? Our 95th percentile only goes above 1s when we are having problems, and nobody with any power in the company thinks that's good enough. Think about how much hardware capacity you need for a site getting even 100s of requests per second. If you can halve the p95 you can decommission or re-allocate close to half of your servers.

As several other people on HN have pointed out more eloquently, it's the variability that kills you faster than the average throughput.

The 100ms was not about end-user response times, it's referring to internal response times between servers. To make a page in 1 second you can't have 3 different services taking 700ms to respond, even if you can make all three calls in parallel. And if you have to call a bunch sequentially, you need the 75th or even the 95th percentile for those services to be pretty good otherwise your 95th percentile for the entire interaction will be very spiky.
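To make the arithmetic concrete, here's a minimal sketch (the sample values and budgets are illustrative, echoing the numbers in this thread, not measurements from any real system) of a nearest-rank p95 and of why sequential 700 ms calls blow a 1 s page budget:

```javascript
// Nearest-rank percentile: the smallest sample such that at least
// p percent of all samples are <= it.
function percentile(samplesMs, p) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: 100 response times of 10..1000 ms; p95 picks the 95th value.
const samples = Array.from({ length: 100 }, (_, i) => (i + 1) * 10);
console.log(percentile(samples, 95)); // 950

// Three sequential 700 ms service calls can never fit a 1 s page budget:
const totalMs = [700, 700, 700].reduce((a, b) => a + b, 0);
console.log(totalMs); // 2100
```

This is also why per-service percentiles have to be much tighter than the end-to-end target: a chain of calls compounds the tail latency of every link.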


I am feeling suddenly inspired by this idea, so thanks!


Lesson learned: make demo functions slower on purpose so the real one matches or exceeds it. It's not even deceptive: you're setting realistic expectations instead of giving a false impression.


Maybe print out the data to show them the difference?


> look at a UI prototype and consider that 90% of the solution

Management tip: Make the UI reflect the actual state of the project.

The UI should be UGGGLY and should get prettier as the backend work gets finished. Even if the artists prettify it, make the animations and interactions janky and sluggish.

Never make the UI better than the actual implementation.

Bonus tip: Always have something slightly off in the UI that management can point out to fix. Useful managers will simply quickly point it out and move on to more important problems; useless managers will focus on it.



This is the best option. Creating low-fidelity UIs that prove the business functionality are faster to develop/iterate and provide an immediate signal to even the least technical individuals that the feature is still incomplete.


"Sketch" themes or "xkcd" themes (named after the pencil look of the comic) can be quite handy for this.

For instance, Bootsketch: http://yago.github.io/Bootsketch/

I wish some of the bigger CSS frameworks would adopt sketch theming support as a "progressive" option: rather than the entire page being all-or-nothing Bootstrap or Bootsketch, you'd have the ability to add a "sketch" class to any element on any page. (Or maybe better yet, force sketch styles by default and require something like a "final" or "final final" class everywhere, like the documents folder of someone who has never understood source control.)


Joel Spolsky has a great write up on this phenomenon [0]. The opening paragraph is great:

> “I don’t know what’s wrong with my development team,” the CEO thinks to himself. “Things were going so well when we started this project. For the first couple of weeks, the team cranked like crazy and got a great prototype working. But since then, things seem to have slowed to a crawl. They’re just not working hard any more.” He chooses a Callaway Titanium Driver and sends the caddy to fetch an ice-cold lemonade. “Maybe if I fire a couple of laggards that’ll light a fire under them!”

[0] https://www.joelonsoftware.com/2002/02/13/the-iceberg-secret...


Today I gave advice to my wife's friend, who wants to build something "like eBay or Amazon" (but cannot program and wants to contract out the work).

Well, after a while they understood there is a small difference between a website and a virtual marketplace.

Seriously, it is easy to forget that for most people all these technical things are just dark magic in a black box, which sometimes works and sometimes won't. I find this hard to deal with sometimes, because society gets more and more technologized. At least a very basic understanding would be helpful.


I find it useful to tell people around how many engineers that company has. They won't understand why, but they may understand that they don't understand.

I also point out when a requested feature is actually a whole company in itself.


I've started doing this too. The director of the company I work at is non-technical and will semi-regularly come to us expecting that we can build a competitor to Product XYZ in 6 months.

Pointing out that Product XYZ has 3000 employees and has been carving out a niche since 1995, while there are six of us with no knowledge of that market, is usually the only thing that gets him to accept that just because he understands what something does, it doesn't mean he understands what it takes to build it.


On the other hand, this is my profession and I still have no idea why Twitter has so many engineers.


Ah, the eternal temptation: "but that's Easy! I can tackle this single-handed! produces a minimal, text-only prototype which scales enough for a couple thousand users"


Scaling for infinite users as a solo developer is easy, just go Serverless!

Just make sure you have infinite money first.


What do you mean 1 part time IT guy can't run our entire SaaS product? It is just a few clicks on AWS to get it up and running.


Classic buy vs build! Are you sure that they were not looking for a simple eCommerce site made using Shopify/BigCommerce?

Maybe they are just looking for a side hustle.


Yep. buy vs build was a big change for our company, VP made the decision to stop building but the culture is ingrained and difficult to change.


There doesn't have to be a difference, you can throw up a static list of items with price descriptions and a number you can text to purchase things.

This is how we used to sell drugs in college. It was a simple URL you could go to with pricing and you just sent texts to a burner phone to arrange a transaction.


Yeah, this would be a website. Maybe OK for one primitive shop. And for webshops there are tons of frameworks, so it's also possible with a reasonable amount of work.

But a virtual marketplace, where different actors make transactions, is a different story. Consider what happens if you have a bug and people lose money because of you. It really needs to be solid.


FB Marketplace - or, you know, actually ebay or amazon - are all viable if said friend just wants to sell some shit online.


No no, it was not about selling things, it was about creating a marketplace.


Has happened to me too more than once. I usually reply that the UI prototype is like a Hollywood movie set: what looks like a real building is fake, just a single wall. The real building still needs to be built. That explanation sometimes works, often doesn't :(


A friend of mine who works in UX always uses literal paper prototypes.

Not screens that look like doodles, but actual paper and cardboard and maybe some Blu Tack or paint.

Ordinary users apparently behave very differently, because it's obvious that a piece of paper can be changed and they know how to do it so it only takes a little nudging to find out what the customer actually thinks the system should look like.

She still has all the CS background to estimate when a change that seems simple to the user just isn't viable, but using the paper prototypes encourages users to leave that to her and not second-guess themselves into accepting a bad design because they're mistakenly assuming it would be hard to make a change, when actually this is the perfect time to make such a change.


I seem to remember Joel Spolsky recommending that UI demos be done on paper... with pencil. That way even a manager can see what it will look like without thinking that it's all done. (Can't find a reference quickly.)



"They simply don't understand that the backend functionality is what's doing the bulk of the work"

The truth is that in a lot of cases nobody has ever bothered to explain it to them. What I tend to find in a lot of situations is that managers often prefer younger and/or less experienced developers because they can be bullied - but the reality is that this also means that people are then unlikely to tell their managers what they need to know. Ultimately it is the managers creating the problem, but in most cases (mind you not all) the managers don't understand that they are creating a problem.


I don't think there's much overlap between "managers who select developers who can be controlled" and "managers who are receptive to new ideas".


Not even just the backend! Frontend interactivity itself can be very complicated, and it generally is not fully specified by design (do your design specs come with state machines?) A few things that are often missing from a Figma/similar are things like error states, loading states, handling longer text content than the mockup, and responsive design.


And you can add paging, empty states, accessibility, security, browser compatibility...


There was an old look & feel for Java UIs that made them look as if they were drawn on the back of a napkin, specifically as a reminder that the code was a work in progress: http://napkinlaf.sourceforge.net/


Thanks. If only this was available for C++ toolkits or C# WinForms, I'd use it in a heartbeat.


Yeah there's CSS themes that look like either sketches or with gratuitous comic sans as well.


I have the opposite experience working on the front end team. I get delivered APIs that are "done". Then when I start calling them I get errors. Turns out "done" for them means they made a basic REST interface, but the API just returns mock data.

Now my boss is happy that I’ve demoed API integration, but every next meeting he asks me why nothing has changed. It’s because I’m still waiting for something that is done.

For some reason the fact that it’s not done is always the fault of the front end team. I’d love to get an actually finished API for once...


I sometimes look at small mass-produced plastic items and marvel at how hard it would be to make one from scratch. If you showed me, say, a fluted plastic bottle of wite-out with a screw-on cap, I would have to assume you had a very expensive operation up and running that was already churning these out.

In software we can put a few shapes on a screen in just the right context and people will believe there's an operation as sophisticated as the wite-out bottle factory behind it.


You could spend a few hours making a 3D print of that bottle to produce a very believable mock-up. And then use it with someone gullible to make a deal on an order to manufacture a million of them. You get your commission on the sale, toss the 3D print to your manufacturing engineer, and let them know the customer needs delivery next week.


To be fair, we Devs have a tendency to often overcomplicate, over-design, get caught in the weeds.

It's not fair for managers to assume, but it's definitely fair for them to ask. And it's our responsibility to show and explain.

There's the famous story of the first iteration of Gmail being done really, really quickly. The demo was the product. And then just iterations from there. Definitely a good model if possible.


> They simply don't understand that the backend functionality is what's doing the bulk of the work

Even people who ought to know better don't understand this.


> They simply don't understand that the backend functionality is what's doing the bulk of the work.

You say this in present tense which makes it seem like a generalization. This is also a mistake that non-technical managers (and others) often make. The frontend frequently rivals and sometimes exceeds the backend in complexity. It is very application specific.


Fair enough, although in the case of a complex browser-based application, the same problem exists. The "backend" is complex code that happens to run in the browser.


I literally just had a discussion with a client last week about a feature taking two months to develop. In my time this week I built the UI, with no backend, and she asked why I quoted two months when it looks like it's already completed.


Yes, it "looks like it's already completed" because you did the "looks" part. I get that you want to show them that part to get approval of the design approach, but it opens the door to misunderstandings like this.

Maybe as part of the demo, you should demo that it *conspicuously fails to actually do anything*, to help reset unrealistic expectations.


Although for some teams these days, the backend might be finished while the frontend team is still setting up their build pipeline!


Best policy I've figured out as a developer dealing with product teams is demo everything in a command line until the backend actually works.

No chance for suckass product guys to push if they can't understand what's going on.


Even worse if non-technical marketing folks get their hands on prototype tools and use it to embellish the product with non-existing features in sales pitches without devs even knowing about it until after the sale. I've been there, and it is terrible. Also a vicious cycle as frustrated sales reps (by 'slow' dev progress in their perception) started inventing more and more of this stuff. A marketing + sales completely out of touch, chasing for bonuses. Brrr.


This kinda swings both ways though.

We've all worked with the perfect-is-the-enemy-of-good guy who deeply considers all aspects, takes 10 times as long to deliver, and then eventually, after much blood, sweat and tears, delivers software equally as bad as the rest of us.

Personally, I've gotten over myself and try to just ship it.

That said, this is a bit different from the gnarly bug type scenario of the OP; though I'd probably ask how they wrote tests in 2 lines of code? :)


I mean... are they dumb, or just playing dumb to bully you?


The person who promoted them doesn’t care which, as long as they are consistent.

I think we spend too much time focused on the trigger man. Whatever person in your org is making your life difficult, there’s a person above them who knows and hasn’t done a goddamn thing about it. Who is the real problem?


I'll gladly blame the entire chain from the immediate supervisor/lead all the way to the top.


Ugh but where does it end?

I dig out my "The Dilbert Principle" book and will start reading it again to promote sanity.


I made this mistake the first time I ventured out onto my own and did some contract work. The thinking was that it'd be a nice way to give them something concrete to play with as we vetted the ideas / flows.

But, as you describe, from a perception standpoint, it was the worst thing I could have possibly done. It went from a very happy client to a very unhappy and confused client when progress "stopped." I actually started recording development work as a way to make them understand all the invisible stuff that goes on behind the scenes.


It's all correct: the first 90% takes 5 days and the remaining 10% will take 3 months.


Except the UI guys chewed through 2 months in elapsed time getting the design approved and across the line and you now have to compress the 3 months you told them you would need into one to meet the delivery date management had in mind.


A smart idea for the individual employee but terribly unfortunate for the company. Building cheap prototypes is a really good way to minimize wasted effort and gathering valuable feedback.


Don't blame your stakeholders or customers. These are all just communication issues and common ones at that.

Unfortunately communication for software projects isn't often discussed or considered valuable in this community but you can learn it like anything else in tech.

The hardest language to learn is the one that communicates with people not computers.


Yes the incompetence in IT is astonishing. For a lot of folks IT is pictures that respond to clicks.


If you have a non-technical manager, quit your job immediately.


To a user, the user interface is the software.


> just because you can see something, that doesn't mean it's secure

I have yet to see secure software... does that even exist?


Sure - it's the one that's firewalls all the way down.


How does this go ? do you explain frontally or do you just quit without explanation ?


Yes! I stopped building working prototypes because of this misconception.


Ahh, yes. This is so common in my experience as well!


To me it just sounds like your frontend has a better architecture than your backend.


Oh this reminds me of what went from one of the most infuriating questions I would get asked by investors to one where I almost wanted to bait them into asking it...

"Why/how is this worth X dollars/time? I know someone who says they can do it in a week." To which, I eventually learned to reply: "Wow, well... In that case, let me shoot you an article on how to build a Twitter clone in 15 minutes. [awkward pause while I smile at them] There's a lot more than just literal lines of code that goes into building a successful software product."


It reminds me of a PyCon talk [0] which, while I don't agree with all of it, has the message "we ship features, not code". That also reminds me of "when a measure becomes a target, it ceases to be a good measure".

LOC is a decent measure, but features are our targets

[0] https://m.youtube.com/watch?v=o9pEzgHorH0&t=1235s


> LOC is a decent measure, but features are our targets

Only if your measure of success is tied to fewer LOC.


That just sounds like it will have different unintended consequences, like making every line as terse, complicated and unreadable as possible just to get down to fewer LOC.


I'm not sure I got across what I mean.

Ones target should be shipping features. If you use 10k LOC to get a feature out, or 500 lines of more concise, optimised code, what matters is the feature.

If you have LOC targets to meet you are incentivised to produce the former rather than the latter.

My point is that a high number of lines isn't as important as good features. Though the two can get conflated


Gaslighting occurs in the workplace too.


Nice comeback


it's a good point but pretty cringe tbh


I prefer to just tell the client to go with the guy who says they can do it in a week. Often, the client returns to me in a few months' time, having wasted a lot of time and money on the other guy and equipped with a better understanding of why they require my services.


Had a similar experience selling my car.

It was an old Audi S3, like, 2011 model. Had a guy tell me the car was in better condition and the asking price was less than what I had listed. My car was listed at $10K AUD and that car was listed at $8K.

Them: "Why is your car listed at your price, and not matching this car?"

Me: "Well for starters, that car's in Adelaide; we're in Brisbane. If you want to go to Adelaide to check that car out and find out it's been in a fender bender and had most of its body fixed, which the listing avoids mentioning, be my guest."

Them: "I doubt that, I think you're asking too much. Will you match that price?"

Me: "No."

Them: "Why are you wasting my time, I should buy that one just to annoy you."

Me: "You fucking do that then, laters"

I sold my car the week after this for the price I wanted. Straight up https://www.reddit.com/r/choosingbeggars material.


This is true. It is less pain for you in the long run to be free of people who believe magic occurs in nanoseconds overnight.


An alternative take to this article would be that this person wasted two days because he was reluctant to ask more questions from the person who filed the bug report.

How often do you actually receive quality bug reports at work? My experience is that external or internal users almost never provide sufficient information and you as a coder are always expected to drill down on what they reported with a barrage of questions.

i.e. if you are not doing https://en.wikipedia.org/wiki/Five_whys then you might be doing it wrong and wasting time because of it.

I'm referring to this:

> Some developers would have immediately gone back to the person reporting the problem and required more information before investigating. I try and do as much as I can with the information provided.

Which seems like being stubborn and making a mistake because of it.

Couple other parts also seem a bit overdoing it:

> Because I investigated if there were other ways of getting to the same problem, not just the reported reproduction steps.

> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.

These seem like taking a gamble. Maybe something comes up, but is it more probable that this work should be minimised until there is more proof of "other ways of getting to the same problem"? Developer time is expensive, is this really the best way of using it? Would it make sense to just fix the issue at hand and only put in more time if more bug reports come in after the fix or if there is some other indication that this part of the code might be more broken?


> How often do you actually receive quality bug reports at work?

Very, very often. I work as a QA engineer whose main responsibility is to go through the bugfixing queue and add needed info where necessary. And I have to spend a lot of time every day doing this. Sometimes it gets so bad I have to assign the ticket back to the reporter to add more info, because even I don't know where to look without it.

Interestingly enough, it's always the more senior people at our company who are guilty of writing crap bug reports.


I went from a huge shop to a medium shop and I really miss that layer of QA engineers before the bugs got to us; it filtered so much nonsense that I had tremendous respect for those guys.

Half of my current tickets barely have 2 sentences in so-so English.


I think you mean not very often here? Otherwise the rest of your comment makes no sense.


Almost all the developers I work with never start working on a bug until I give them the exact steps to replicate it - even if the bug was reported by a user. A very stupid example I can think of is this: a developer built a login form for web and mobile with the password field as expected, but forgot to suppress auto-capitalization of the first letter of the password entry. So if your password is abcdef, on a mobile keyboard (unless you are careful) it would be entered as 'Abcdef' and would not work. The issue was reported thrice; he said there was no error and did not fix it. Only later did it strike me (I tried it on Safari while he tried the mobile-responsive version of Chrome) that this was the issue. I'm not saying the developer should not start working right away, but the expectation is that if they had actually paid attention to what was reported, it would not have taken this much time to figure out what the issue was. There needs to be a middle ground, which varies from org to org depending on their workload.

> is it more probable that this work should be minimised

I guess this is where automated tests can come in. You fix something and see if it passes the unit tests. But then everyone has their own approaches. For him, fixing a similar bug twice is worse than finding all possible mistakes at once.


I like to use two principles:

* smoke means fire

* a contained smoky fire is sufficient to hide the start of a wildfire

This means:

* keep your errors at 0. If it "can't be kept at 0" you're either too far gone or thinking about the issue incorrectly.

* user complaints are errors. Just because they aren't clear doesn't make them any less so.

There is a perception that users go out of their way to make unfounded complaints. In my experience, getting any complaints is the issue.

There is also a perception that some errors aren't important. If you have a channel to receive an error, it's because it has business value. If a dev I was managing ignored a p/w entry bug as a non-dupe / assumed user error without significant digging and user interaction, I'd be livid. Most businesses will lose significant value if users perceive the act of logging in as difficult.


> If it "can't be kept at 0" you're either too far gone or thinking about the issue incorrectly.

Okay, how do you handle a network failure, a full disk or a faulty RAM stick?

I see your overall argument, but at some point you've got to accept that you can't handle everything.


Keeping errors at zero does not mean errors don't happen; it just means you resolve them all, you don't ignore any. Perhaps it should be 'keep errors at 0 or 1'.

So if you're small, a RAM issue is something you deal with manually and rarely. As you get bigger you'll transition to automated failovers that still get looked at individually. Then you'll scale up to the point where these aren't freak occurrences. Now it's important for you to have a strategy to identify the issue and its follow-on issues, and resolve them. It's also past time you thought deeply enough about your setup to be able to contain them so you can stop surfacing them as errors - they are now part of a normal business process. You want to make sure that "too many" is itself surfaced as an error (and "too few" as well), and any effects you can't currently recover from automatically are also errors.

It's perfectly possible to "keep errors at 0" without ignoring any output.


I agree with you on most parts, but not about keeping your errors at zero. In a fast-moving environment, there will be mistakes or things that are missed. Ideally, bugs should not have any meaning attached - yes, there was an issue and now it is fixed. Good thing we found it today rather than a few days later. That is it.

Getting complaints from users is a good thing; missing them, or not fixing them when you come across them, is the real problem. As a user, I would be overjoyed if I reported a small issue and the company fixed it quickly.

Being livid is natural, but I was pretty sure the developer himself knew he screwed up this time, so there was no point expressing it. Plus, as I explained in the earlier comment, this escaped for so long because we did not have many cross-device users, so it did not affect that many users. From my own experience: I check this every single time I test a website before it goes live (and also check with keyboards other than just Gboard).


I think that too, but then our feedback reports include things like "How do I change my Outlook password?".

I don't work at Microsoft.


Sounds like you aren't being clear about your role to the user.


After your suggested fix, how does a user enter the first letter of their password in upper case if that's what they want?

Maybe you were expecting the dev to go through all the passwords and fix them so the first letter of all passwords is lower case? Oh but if they were following best practices they don't know the password, they only know its salted hash.

Since the app was already shipping without the first letter being auto-lowercased that would suggest there were plenty of passwords with the first letter already upper cased, also something you can't test for easily if all you have is salted hashes.


Sorry, it requires more context. The reason this issue wasn't highlighted before was: (1) many of our users were on laptops and did not use the mobile site much; (2) those who used the mobile site rarely switched devices (they rarely used desktop, else we would have caught it sooner). The fix was a longer one. We obviously had to have the same convention for a password on desktop and mobile web. For mobile users, after we made the fix, if they had trouble logging in we asked them to capitalize the first letter of the password and try again - when they logged in, we made them change the password, and if they could not, they reset it. At the point we found it, we were pushing the mobile site to users as an internal growth activity, and we were able to navigate it. It wasn't the best UX to be fair, but we potentially averted a bigger disaster at the time we did it.

PS. Yes, we were hashing the passwords.
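A rough sketch of that login fallback (function names here are made up for illustration, and a real system should use a slow password hash like bcrypt or argon2, not a single salted SHA-256). Note that because only salted hashes are stored, affected users can't be found by scanning the database; the flipped-case retry has to happen at login time:

```python
import hashlib
import hmac

def hash_pw(password: str, salt: bytes) -> bytes:
    # Illustrative only: use bcrypt/argon2 in production.
    return hashlib.sha256(salt + password.encode()).digest()

def verify(candidate: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_pw(candidate, salt), stored)

def login_with_fallback(entered: str, salt: bytes, stored: bytes):
    """Returns (ok, needs_reset). Try the password as typed; if that
    fails, retry with the first letter's case flipped, since a mobile
    keyboard may have auto-capitalized it at signup or at login."""
    if verify(entered, salt, stored):
        return True, False
    if entered:
        flipped = entered[0].swapcase() + entered[1:]
        if verify(flipped, salt, stored):
            # Logged in via the fallback: force a password change next.
            return True, True
    return False, False
```

So a user whose stored password was accidentally saved as 'Abcdef' can still log in by typing 'abcdef', after which they are pushed to reset it.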


Actually the correct fix is correctly flagging the field as a password field, which fixes the capitalisation problem on iOS and Android.


We also had to take care of the few folks who had signed up with the wrong password (in the sense that they never intended the password to start with a capital letter). Changing the field attributes was part of it - that handles all new users. The complication was the people who had already gone through the flow and would have trouble logging in now.


"I try and do as much as I can with the information provided."

I know I'm guilty of this one, and I've stayed away from high paced jobs and appreciate jobs where people are ok with my reluctance to bother a lot of people even if that means it takes me longer to figure things out on my own.

This also means I build a much deeper understanding of the systems I work with, or at least I like to think so, and some people have confirmed that about me, indirectly, with praise for my insights.


Their praise is obviously great for knowing how you're doing, but it doesn't confirm the reason for your good insights. Plenty of people manage to be good at what they do without doing things 100% the best way possible, so maybe you'd be even better if you changed your approach!

Of course I know nothing about you so I'm not trying to give advice, just replying about the "confirmed" being a potential cognitive bias.


I also think I could improve a lot in this regard, asking sooner is my problem and I'm working on solving it, but it is a slow and long process.


Awesome, none of us are perfect and improving isn't always simple :)


Do people actually have fights like this with management at their companies? Not trying to knock the author, but I'm just surprised anyone would actually hear this kind of comment in 2020. I'd think by now any and all metrics tying lines of code to productivity would be long dead.


Yes. The industry moves at a snail's pace, and is very different from what you'd read on HN. A huge % of dev jobs are still using old software/processes, with managers that haven't written software in 20 years, if at all.

In 2014 I worked at a company that switched to Git and then started measuring LoC to assess performance/involvement. Engineers took to committing/removing things like node_modules directories to make the data meaningless.

It still happens, even today, quite a bit.


> Yes. The industry moves at a snail's pace, and is very different from what you'd read on HN. A huge % of dev jobs are still using old software/processes, with managers that haven't written software in 20 years, if at all.

Spot on. Talented engineers can usually be picky and change job if they don't like the environment, but not everyone has this option. There's plenty of developers who are stuck in shitty companies (lack of skills/experience, or struggle with interviews, or just live in places with limited opportunities). And the longer you stay in a bad place, the harder it gets to escape it. 2 years ago I was the hiring manager for a few open positions and I was honestly shocked by some candidates. So many "senior devs" that despite having 5-10 years in the industry wouldn't have passed the interview even if they applied for a junior role.

It's very easy to have a distorted view of the industry if you are privileged enough to have only worked in great tech companies. I'm guilty of this myself, making friends with other devs in my city was definitely eye-opening for me.


15 years of experience isn’t good when you’ve done the first year 15 times


This reminds me of a story from my first job.

The business I worked at was a typical office, like the one you saw in Office Space. Departments had their own TV screens on the wall that showed performance of individuals in a department; the sales department had a screen that showed who was making the most sales that day.

After we'd pretty much finished working on the web apps that supported these TV screens, the CEO met with me and a colleague to tell us how good a job we did. Then he said:

"You know, you're the only department now that doesn't have a performance monitor. Maybe we out to get one for you. We could base it on lines of code."

My coworker and I were speechless at first, but we started laughing because we thought it was obvious that he was joking.

"What's funny? Why are you laughing?"

We quickly stopped laughing when we simultaneously realized that our boss was not kidding! I said we'd get right on it in the next sprint, and he told us that sounded good and left. I'll never forget that look on my coworker's face.


Did you write something with fake data that trended upwards every week so all of you got raises and bonuses regularly?


I didn't have the chance to. ;) Getting another job doubled my salary anyway.


Why not simply explain that LOC is not a good performance measure?


My company has just started to use Git in the last year. No timeline yet on when existing projects (like mine) will be migrated.

I agree with you 100%.


You can start using Git now right? Just wait for them to catch up.


> Engineers took to committing/removing things like node_modules directories to make the data meaningless.

Poisoning the well, I like it.


Sounds about right to me. I've had product managers argue that we fix bugs fast enough that we should not actually implement the real solution (python2 to python3 upgrade). This lady is like 'every py2->py3 bug takes less than 2 hours to fix. Why would we do the solution that takes multiple sprints?'

Jaw dropped. Since then I'm committed to being the biggest Office Space corporate schmo possible. Let me cog it up, keep payin' me 6 figs, lettin' me work at home full time.


Honestly, product managers should be told to fuck off when it comes to the engineering part of the job - they have their job, you've got yours. You stand for your product, they have to work with the reins they've been given (in terms of productivity of their team) without trying to micromanage it.


Sheesh, python 2 to 3 is definitely something worth taking the time to do.


I know someone who worked at some skeezy company in Menlo Park and got passed over for a raise, after spending months navigating the bureaucracy to save the company millions in operating costs, because they didn't write enough code. This was in the last four years.

Edit: And they quit right afterwards.


That skeezy company [1] at least has a fairly clear review process; it's clear what they will reward. It's not clear how what they reward relates to things that are useful to ~users~ people, advertisers, or the business, or even to not breaking everything by rushing to push at the end of the review period. If you care about those things, you either have to not care about your reviews, or do enough review-positive stuff that you have a little time left for real work.

[1] Unless there are multiple skeezy companies in Menlo Park.


Menlo Park has Facebook and Robinhood connected by Willow Road.


For readers who, like me, don't keep track of SF Bay area municipal borders: I looked it up on Wikipedia and that skeezy company is likely Facebook.


It’s better to find ways to make more money than to save it. There’s a floor but no ceiling in much of what we do.


I wouldn't say it is better, just different. Both have value. If you find a way to save millions, that might be quite worthwhile, because that might be easier than building something new that makes millions of dollars. It's all about ROI.


People shouldn’t downvote him. He’s right. I’ve done lots of small things to save time and money in the past. Management really doesn’t value it as much as making money or working on more visible projects.

Telling someone that it only took me an hour to eliminate an hour of someone’s work every week doesn’t go far.


Someone was building an empire by having employees do manual data entry instead of parsing JSON files. Automate that, and all the way up the mgmt tree there are fewer direct reports - and nobody ever gets a job title promotion by having fewer direct reports.

Even worse, someone else now has to explain "why they did it wrong to begin with" regardless if the replacement technology existed or not at the time of original process creation.


That's just bad management, not an axiom.


A lot of this is just visibility and internal salesmanship. Having an engineer stand up at an all-hands and be lauded for saving $X million, right after the sales gal is lauded for closing the deal with $BIG_CO, is something I've seen done, and it does get recognized.


Making a million or saving a million are equally important in the end assuming one of them doesn't require particularly unreasonable ways. Sometimes it happens that the floor is really close to the ceiling and lowering the floor is the easier thing to do. Go for the low hanging fruit first. If you have too much useless clutter it makes sense to remove it rather than break down walls to expand.


This heavily depends on the industry you're in. In software, yes this may be true, but in low margin industries it's absolutely not.

The maths is simple. If my margin is 1%, and you make me an extra $100, I keep a buck. If you save me $100, I keep that whole $100. If I'm smart, I can use that to drive my prices down, and take more market share without reducing my margin.

Obviously it becomes less clear if your margin is >20% or so.
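Spelling out the arithmetic above (a toy sketch, not a financial model - the function names are made up for illustration):

```python
# At a 1% margin, $100 of extra revenue adds only $1 of profit,
# while $100 of cost savings falls straight through to the bottom line.
def profit_from_extra_revenue(revenue: float, margin: float) -> float:
    # New revenue is diluted by the cost of delivering it.
    return revenue * margin

def profit_from_savings(savings: float) -> float:
    # Savings are kept in full (assuming no knock-on costs).
    return savings

print(profit_from_extra_revenue(100, 0.01))  # 1.0 -> "I keep a buck"
print(profit_from_savings(100))              # 100 -> keep the whole $100
```

At a 20%+ margin the gap narrows: the same $100 of revenue is now worth $20+ of profit, so chasing growth starts to compete with chasing savings.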


If it's a high margin market, you should absolutely focus on capturing more revenue, it gives you more available capital to work with. Once that slows down or you get competition, you can focus on improving your efficiency.


Certainly many companies prefer this approach because so many goals are tied to increasing revenue, and certainly personal bonuses are too. Very few people get rewarded for decreasing costs. In my last (big!) company it was never even talked about. Increase revenue was the only goal.

Also making more money is better, because it means growing the company with more employees and more job safety for everyone.


If your revenue can grow exponentially, that's probably the right priority.

Most companies aren't like that though. Know your company!

Also consider if you can run out of money before growth kicks in...


Companies that only ever learn to spend money to make money tend to flame out when their market matures. They never learned how to sustain.


Also know your reporting metrics. Increasing revenue is growth, which capitalism jerks off over. Profits, not so much.


The ceiling is the size of the addressable market.


Yessir, Mr. Belson!


Thanks. This comment blew my mind. This perspective is so far on the other side of things that I never considered it.


It feels like saving money might also be more environmentally friendly than growth.


The defense I've seen is that it works reasonably well as a measure of productivity _if_ you're working in a similar team and in a stable codebase.

From what I understand a lot of product teams at FB have nearly frictionless development tooling for their use cases so the pressure is to produce volume.


I call bullshit. Even if you have high-powered tools you still have to spend time thinking about how to use them, and you still continually improve them to reduce line count.


Perhaps, but it’s designed to be a consistent process. At least that’s the defense I’ve heard.


Was the team that caused the outage of half the apps on iOS one of them?


Didn’t the same team do it twice?


Not everyone has good managers.

$JOB-2, admittedly about 4 years ago now: the good manager with a background in software left for a better opportunity and was replaced by someone whose background was management. With no insight into the subject matter, they fell back on whatever they thought they could quantify.

We got numerous things like that. Though my team lead and our project manager did a great job of shielding the team from that crap, we still occasionally hear it come up in group meetings and the like.

I even got a task handed directly to me, bypassing everyone above me, to "estimate how much it would cost to migrate all those linux apps your team has to windows. They'll run better there". Just the Windows licenses alone would have cost about half the existing server costs, since we were using AWS instances. I also included a line item for recruitment costs for a new developer, and verbally informed him that it would likely mean hiring a new team, as the existing team was hired specifically as Linux developers.


Many people assume their manager knows what the employee is doing. Often they don't, meaning they go by what they can see, which is lines of code.

The smart thing to do is to regularly keep your manager updated on what you're doing, especially if they don't come by regularly and ask you.

Especially if you are WFH.


Rather than an integer number of lines of code, how about a boolean: "it works" or "it does not work"? Or "it meets the spec" or "it does not meet the spec"? Even a mediocre manager should be able to test that one out.

Code is often improved by removing code.


Task completion should be the metric, not line counts, otherwise the incentive is creating the most bloated piece of software and the most comprehensive unit-test possible. Any system is going to be (ab)used by the employee to their benefit.


Relevant post: https://news.ycombinator.com/item?id=23762526

Task completion would also be abused, because it's too vague. It simply shifts the burden to the one formulating the task (e.g. preventing holes in the specification like missing performance or hardware requirements).

Exaggerated example: if the "task" is to automatically deliver a report containing certain data and formatted in a specified way, the easy way might be an implementation that stalls the DB server for hours with deeply nested FULL OUTER JOINs on non-index fields.

The task would be completed quickly and arguably correctly, since neither runtime nor memory requirements were explicitly specified...

But you said it yourself, any system is going to be abused...


It's even harder when you toss support tickets into the mix, and when the team has other non-project work.


Task completion as a metric incentivizes hacky approaches that cause issues down the line. You can counteract some with strict code review standards, but a bad job requires a longer review, and then the reviewers start falling behind in their task completion metric.


Indeed and it's why organizations that execute well treat this process as a collaboration rather than a dictation and report-back.


Organizations are always imperfect, which is why I strongly recommend taking the lead and being pro-active in keeping management informed of what you're accomplishing.

Out of sight, out of mind <== don't let that happen to you

I don't believe it is any coincidence that the highest compensated engineers I know also are highly visible through their own efforts.


That’s true. And I guess depending on the job/manager you could replace lines of code with “user-visible changes to app”, etc.


That'll be exploited as well. The number of pointless UI and feature changes would skyrocket without the app actually getting better or gaining meaningful functionality.

Meta-metrics might be much more helpful, though harder to come up with, quantify and monitor. Things like defect rates, user reported incidents, user satisfaction, stuff like that.

Things that cannot easily be gamed from within the development process and that are still directly linked to the success and economic viability of the product and its development methodology (though on a higher level).


I wasn’t suggesting that as a good metric. More like “if isn’t LoC, it’ll be some other terrible metric”


I've seen that abused all the way to management, where meaningless new features are prioritized ahead of bugs


So far I've worked in one place which didn't use version control in 2007, one place which had reluctantly started using version control just before I started in 2010, several places where automated tests were considered pure waste, where everybody had full access to production, where backups were untested, where one or two people held crucial knowledge which was not shared with anybody else, etc. The real world moves a hell of a lot slower than best practice.


Devil's advocate: or best practices are rarely incentivized...because there's little to connect them with real, tangible value in an organization. Having fully linted, automatically tested code doesn't matter if the product isn't valuable and useful. And if the product is truly valuable and useful, it likely would be as well in the absence of these practices.


Fair point. I did take a week's tally of "how much developer time did we burn on hunting for backups and rogue edits", and suddenly SVN was orders of magnitude more attractive (ages ago).


I'm a maintainer of an OSS project with other contributors. I have fights like this all the time because others are constantly just wanting to fix the symptoms. This is for a project that is a framework/API, where the saying "the best programmers remove more code than they add" should be even more true.


You would hope so, but I've seen an Engineering Director at an otherwise well run software company use number of Github commits as the central reason to put someone on a performance improvement plan as recently as 2019. Granted, this was someone who was a professional manager and hadn't been an engineer for the last 90% of their career, and many people were shocked by it, but it happened.


Were they right about the person being a low performer?

I'm curious if this is "everybody knows this person isn't doing any work, here's a blindingly obvious metric to use to defend this move to HR" because HR orgs often hate things that don't have metrics attached, or if it's "I don't understand what this person is doing, and I don't see any commits, so it must be nothing"?


Completely valid question, and I think you make a great point here. A lot of baffling managerial behavior is because the manager is working in a baffling bureaucracy. Unfortunately, I think this situation was largely of the latter case, and maybe some motivations I'm not aware of.

They were a senior engineer who spent a lot of time coaching the rest of the team, and less time on their own personal work. The commits data point, in my opinion, should have been the beginning of a conversation that ended with the manager asking this person to adjust their priorities. Instead it was presented as an accusation that the person wasn't getting their work done, which ultimately led to them feeling insulted and quitting.


I agree with you on what a manager who doesn't know what they're looking at should do there. You gotta dig in and find out![0]

I've been fortunate recently to not work in orgs where mentoring and collaborating like that is looked down upon - instead, it's encouraged - but my ongoing struggle is to figure out how to quantify it.

I've got two similar but distinct motivations for wanting to quantify it:

* If I have a manager who looks less favorably on it, I want to be able to demonstrate my worth; but also,

* If I'm spending most of my time trying to mentor the rest of the team, I want to see how well I'm doing - and I want to be able to change things and see if it results in positive or negative changes

Sadly I've completely failed at coming up with how to quantify this so far. It all comes down to qualitative peer feedback/manager feedback...

[0] As a technical person, I try to practice the reverse of this: if someone asks for a feature or claims there's a bug, I try to dig until I fully understand why they're asking for what they're asking. So I think asking for similar curiosity and depth from a non-technical person is fair.


Well, just spitballing, but you should see it in team velocity. The effort you're spending coaching should show up in the output of your team members.

Assuming there's some kind of constant ongoing engagement with one person or group, I'd expect an immediate dip in velocity as your productivity goes elsewhere, then growth, maybe evening out at (approximately) before-engagement numbers as they reap the benefits of learning from a senior engineer. Then, as you're able to return to normal duties and they're able to apply the lessons, you should see velocity greater than before the engagement. That delta should be somewhat quantifiable and, ignoring other variables, should represent the benefit of the coaching.

There's also huge value in the increased job satisfaction for both mentor who enjoys mentoring, and a mentee who is learning. That should show up in any kind of employee satisfaction survey, or retention numbers.


Yeah, I’ve been in this situation before (without the PIP), where I had to explain to my manager that I can either help everyone get their work done, or do my own work, but given the number of people asking for help it was not going to be both.


Which leads me to want to fill the rest of the story in: the manager of the senior engineer had a friend he wanted to promote, and this was the expedient way of getting the obstacle out of the way.

That could be completely wrong, of course. But would it surprise you?


I was wincing a bit when I made my initial comment fearing I might actually be wrong. This hurts to read.


I think it's fair to use this as supporting evidence. If they are assigned tasks that aren't too difficult and they aren't completing them really fast, and there is a PR process in which changes are landed into the repo, then I think it's fair to use the number of commits to the repo.

If anybody can push commits to the repo, then it's a useful metric. Finally, this sort of action should be taken only after the manager has worked with their report, by having somebody else help them, or by putting them on a different project.


I regularly hear a friend complain about BS like this and variants. "Why did adding a button take a week, it's just a button?!" is also very popular. Not sure what's worse, that or the recurring "that element should be 1px to the left, drop everything you're working on and fix it asap"...


A funny side-effect of this that I ran into: customers who went to tortuous lengths to avoid requesting a new page or button, because they’d had that past experience of ‘just a button’ taking 2 weeks. They still wanted all the new functionality, but glued onto the existing pages/buttons because they thought it would save time! An interesting misunderstanding :)


See that all the time. It’s why you need proper requirements gathering, to get past that kind of thing.


Try changing the colour and getting told it isn't the same shade of red as the drawings presented by the designer on his PDF presentation.


"It looks different on my iPad, we need it to look exactly the same everywhere"

No, No you don't.


I mean, that's totally reasonable if that's the spec (not to mention hopefully trivial to fix).

Now if there's 5 drawings with different colors, or asking "what color do you want it?" leads to a 5-week email chain...


Oh, that's actually also pretty reasonable (sorta): https://bugs.chromium.org/p/chromium/issues/detail?id=44872

I discovered that bug report after our designer noticed at a glance walking behind me (I use Firefox) that the colors on our site were far darker than she intended, somewhere around 2017-2018 (that bug was opened in 2010).


Yeesh my blood started to boil just reading that.

I've left companies because of devs like that. People who just stand in the way of getting the software to do the correct thing. I do not understand what makes these people tick.


> that's totally reasonable if that's the spec

I don't think you grok what's happening. Some middle manager sees a shade of red on his screen in a PDF, and the dev is expected to reproduce the content in that shade of red. There are simply too many variables.

Even if you have access to the PDF, the red will often be rendered differently by the browser than it is by the PDF engine.

I've had middle managers tell me to "fix" a web site because the colors looked different on his office CRT than it did on the laptop screen of a person in another building.

> trivial to fix

Everything is trivial when someone else has to fix it.



They don't always make the comment out loud, but you can tell they're thinking it.

They absolutely use lines of code metric at my company. I don't miss any chance to tell my manager it's complete bullshit. His answer: "Engineers are supposed to write code, just like construction workers are supposed to build houses."


What an absurd response from your manager.

If you give a construction worker a design to build a wall, and the worker is given 2,000 bricks that must be used to build the wall, then, yes, of course the worker must lay down all 2,000 bricks to build that wall. However, if I am asked to build a computer-simulated model of said wall, and if there is a way to build the model with 200 lines of code that looks and performs identically to a model built with 2,000 lines of code, then, yes, of course I am going to build the wall with 200 lines of code.

I hope you find better pastures.


Hell, if you can find a construction worker that somehow, magically can build the wall with 200 bricks instead of 2000, everyone would want to hire them because they’re saving them 90% of the cost.


"Okay. And then 5 years down the road you discover it's 10x as expensive to replace the floor joists. Why? Because you based the builder's performance on how many materials went into the house, and as a result the piping in the basement is an overdone rat's nest that anyone maintaining the home has to work around."


What if the construction workers just stack all the windows in one corner and pour all the concrete in the other? It must be just as good as building the house properly since they used the same amount of materials. If they just double the amount, the resulting pile of garbage will be twice as good as a functioning house.


Oof, that's a false equivalence. Construction workers have blueprints for how things are supposed to be built. You're (generally) making the blueprint as you go along, based on vague specifications in many cases. This can depend a lot on how your organization designs software, of course.


And... guess who made the drawing (blueprints have not been used in a while)? An engineer. Probably using a vague spec by an architect, too.

And I can assure you, they were not judged on the number of views or pages on their drawing, or on the number of variables in their structural calculations.


"If you wanted someone who writes as many lines as possible, you should have hired a secretary."


Can you please help your brothers out and hint at what company this is so we can all avoid?


It has other good things that keep me there. Everything's a tradeoff.


Set prettier on VS Code to make each line one character long on commit. Rebase your code on your branch after you're done and run it.

It took 23 minutes for that rule to disappear after my lines of code metric jumped 250,000k
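For the record, the knob being abused here is Prettier's real `printWidth` option: it can't literally produce one-character lines, but setting the wrap target to 1 makes Prettier break wherever it legally can. A minimal `.prettierrc` sketch:

```json
{
  "printWidth": 1
}
```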


At most of the big companies I worked at (over 500 employees) there is a steering committee doing prioritization work, continuous integration test suites, and an elaborate change control committee and process, such that fixing a spelling error will take much more than two days.

What's nice about the proceduralism is you can document that the steering committee only meets once a week on Tuesday afternoons and change control meets on Thursday. And everyone knows the automated test suite on DEV takes about half a working day. So if a change can't be worked into the schedule in less than a day, it'll never pass CI testing before the change control meeting, so it'll take more than a week.

What's bad is that mgmt would like you to complete multiple changes, perhaps at the same time, which always complicates the change control process, especially if change #7 failed last week, so company policy is to roll everything back and now we have 13 changes, two weeks' worth, to complete next weekend. Also what's bad is knowing it's a corporate nightmare to make any change: why did I make the mistake to begin with of having the buttons swapped or a misspelling or whatever?

I find the big metric nowadays is backlog. Let's see the number of request tickets decrease this week instead of increase. That leads to intense pressure to roll multiple problems into one ticket.


> Do people actually have fights like this with management at their companies?

There was a time when I thought this video was funny:

https://www.youtube.com/watch?v=BKorP55Aqvg


I used to think that was hilarious.

These days I can only look at it and think "the expert is terrible at calm confrontation and good communication. There would be no problem if he had developed those skills."


I have to agree. Of course the requests in this sketch were obviously silly and not doable when taken literally. But my experience is that the requester of a new feature often has big problems communicating that request. The requester may describe something that sounds almost as silly and contradictory as those seven lines, while the actual request is something different that actually makes sense but is just communicated badly.

It is important to rule out miscommunication. Contradictions often are not obvious to the requester. It also helps to understand which of the contradicting requirements can be dropped to resolve the problem. Sometimes the problem is just a small feature that made it into the requirements because no one considered it a problem.

That doesn't mean I haven't gotten requests as silly as in the sketch and had discussions along the lines shown :) And of course, I never hesitate to speak up about actual issues.


That has to come from two sides though. The people in this sketch are clearly not interested in listening to what the expert says either.

Getting sensible requirements is not only on the expert.


Sure. Both sides need to have good faith.

But the expert is the one who can know if the requirements are sensible.

In requirements gathering, the whole job is to hear people's attempts to describe their problem and figure out what problem they actually have.

By definition they don't have your expertise, or they wouldn't need to talk to you.

So, of course they will say contradictory things and use terms completely incorrectly - they cannot do anything else. That's why they hired you.

The expert in this scenario gets hung up on their incorrect language and gets flustered and stymied, telling them "What you want is impossible!"

What they _said_ is impossible. It's our job to persistently, patiently, calmly help them understand their needs, without judging them for needing our help.

I'm not particularly good at it, but I understand the mission.


Still think it's funny. But yes, it's no fun while you are the expert.


One would wish. Personally, I don't think I've ever worked under a manager who understood software development. It tends to be all about what they can see (GUI) or about nearly meaningless metrics on a dashboard (LOC, tickets closed, etc.). Again, just in my personal, limited experience.


The problem is not "manager/client doesn't understand software development". That's fine.

The problem is lack of trust in you as the expert and possibly a lack of self-awareness (they think they understand).

A manager/client should make mostly strategic decisions, like: "We should solve this problem; here are the resources." And almost never tactical ones.

They also shouldn't even care or look at LoC. They shouldn't be worried about metrics of 'effort' at all.


Ah, agreed. I guess that, in my experience, not understanding SW development tends to go hand-in-hand with not trusting the developer.

I found your phrase "lack of trust in you as the expert" a little jarring because (again, in my personal experience) considering the developer to be a domain expert is somewhat of a foreign concept. I suspect/hope the situation is better elsewhere in the industry. :-)


I meant 'expert' as in the person responsible for the technical side. Not in the sense of 'they know everything they need to know all the time' or anything like that.

> I suspect/hope the situation is better elsewhere in the industry. :-)

I do freelance, client work, alone and in (very?) small teams. Typically my/our clients have only a superficial understanding of anything technical (if that). Trust often needs to be earned.

One way to gain trust is being creative/optimistic and explaining feasible possibilities and strategies. Another one is being pragmatic and not selling them something they don't need, or might not need.

And then the most important one is to have conversations about their problems and wishes. Showing that you understand them by asking questions and writing a specification. And explaining your (iterative) workflow: "Let's figure this part out after we've done this other part." I guess this is the "domain" part of the process.

My experience is that if trust is in danger then the work is less valuable, less fun and less sustainable. Indications of this are things like the ones we discussed before:

- Trying to measure effort instead of rewarding value.

- Nitpicking, bikeshedding and other distractions.

- Overstepping their expertise (typical for UI design, a bit less for programming)

Now most of my interactions are good, but sometimes I get the above. We're actually discussing doing more upfront communication work in the offers and initial discussions to prevent these things (even by filtering out clients/collaborators) and to set a tone. Because again, this is unsustainable on multiple levels and it never ends well...


Thanks for sharing your experiences! This is good advice. I also agree that, at times, some "filtering" must occur with regard to who we work with/for.


Bingo.

My leader isn't a technical person by a long shot. Instead they focus on getting people that they can TRUST on their team. Yes, sometimes it means we have to go back and 'fish' for metrics to throw the business. But we do notice that the less we chase metrics (and, yes, the arbitrary goals set out by the company as a whole) the more productive we really are.


A lot of software development still happens at businesses that are not 'software companies'. The experience is very different, with a much different culture around software.


I have never had anyone indicate to me that this is a problem. However, every time I spend days on a problem that ends up being a trivial number of LOC, I get a feeling of anxiety that I am going to be seen as incompetent. That's been the case my whole career even though I know it's unfounded.


I probably can't tell this story correctly because it is 2nd hand but ...

I was on a browser team. A fellow co-worker decided to add the Fullscreen API to it, which meant not just adding the API but first discussing it in the relevant standards committees.

I'm pretty sure he thought, and so did management, this would be a 2-3 month project at most. IIRC though it was like 18 months, maybe longer.

Some problems that aren't obvious at first:

* What is fullscreen mode? Is it a mode for the page, a mode for an individual element? What?

They eventually decided it was for an individual element

* What happens to the CSS above that element when none of its parents are being rendered?

I'm sure that took a while to argue over. Like if it was position: relative or absolute and suddenly its parent is no longer displayed. What if the parent has CSS transforms? Okay, you say we ignore those and consider it the root element. Okay, so does that mean none of the other styles from the parents apply, like color or font-family? If some do and some don't, we now have to go through every CSS property and decide whether it does or does not continue to inherit. I don't actually know the answer to this.

* You have a DOM A->B->C->D->E. C asked to go fullscreen. While there, E asks to go fullscreen. The user presses ESC or whatever to exit fullscreen. Should it pop back to C or A? Does it matter if E is a video element and they clicked fullscreen? What if C is an iframe, does it matter?

* Testing across all devices that support it requires all-new testing infrastructure, because going fullscreen is effectively something that happens outside the page, not inside it, so testing that it actually happened, and that a user can actually exit it correctly, requires entirely new test systems that were not there for previous APIs. Then multiply by 5 at least (Windows, macOS, Linux, Android, ChromeOS, ...)

And so even though I'm sure everyone ended up understanding that, it turned out to be way more work than anyone expected. Yet, in the back of their minds it was arguably always "this is taking way longer than it should, goals not met" or at least that's how it seemed to be perceived.
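The nested-fullscreen question above (C goes fullscreen, then E, then the user hits ESC) is exactly the kind of thing the committee had to pin down. A toy model of stack-style semantics, where exiting pops back to the previous fullscreen element rather than all the way out - purely an illustration of one possible answer, not the real browser API:

```javascript
// Toy model of nested fullscreen as a stack: requesting pushes an
// element, exiting pops back to whatever was fullscreen before.
// Illustrative only; real browsers implement this via the spec's
// per-document fullscreen machinery, not a class like this.
class FullscreenStack {
  constructor() {
    this.stack = [];
  }
  request(element) {
    this.stack.push(element);
  }
  exit() {
    this.stack.pop();
  }
  // Mirrors the idea of document.fullscreenElement: null when nothing
  // is fullscreen, otherwise the most recent requester.
  get current() {
    return this.stack.length > 0 ? this.stack[this.stack.length - 1] : null;
  }
}

const fs = new FullscreenStack();
fs.request("C"); // C goes fullscreen
fs.request("E"); // E goes fullscreen while C still is
fs.exit();       // user presses ESC once
console.log(fs.current); // "C" - pops back to C, not all the way out
```

Under these semantics a second ESC would land back at A (nothing fullscreen); whether that matches user expectations for, say, a video inside an iframe is exactly the sort of debate that ate those 18 months.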


Oh, very much so. I don't think it's a matter of how "old" the industry is - there will always be people who have never worked with software folks or who aren't familiar enough with the ecosystem to avoid the assumptions stated in the article.

This made me realize that working in a purely engineering team can sometimes be a perk. Not because technical people are "better", but because it leads to fewer frustrations like this one.


This scenario seems like something that's more common in freelance work where the client has access to the code for whatever reason.


Last time I met this, it was 2008 IIRC.


Yes, it happens, quite a lot.


You'd be surprised...


"My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."

~ Edsger Dijkstra


I'm sure Dijkstra could get away with saying that, but how would you phrase something equivalent to someone above you in the food chain without getting nuked from orbit in response? I can just see how the conversation would go:

PHB: You've only added two lines - why did that take two days!

Dev: Because it took that long to understand the requirements, understand the code, find the root cause, write a test to verify the root cause, and then 2 minutes to write the code to fix it. The resulting code verifiably fixes the root problem for at least reasonable inputs, with minimal code and thus minimal chance of a bug.

PHB: Why not add 50 lines of code and be done in an hour?

Dev: Because that would be enormously more likely to be buggy.

PHB: But it would fix the symptom, right? Don't let perfect be the enemy of the good!

Dev: 50 lines would not be "good", it would be crap. And we would have to revisit that code in a week or a month.

PHB: But in the meantime we will have delivered, dammit! We're on a deadline, with three managers and five customer representatives breathing down my neck, and you're telling me you want to spend two days on something you admit could be fixed in an hour?

Dev: I never said I could, only that 50 lines would be a crap solution.

PHB: Are you calling me a liar? Or just an idiot?

[Continue until the developer is fired or gives in, loses hope in humanity, and starts planning their exit.]

Thankfully I have never worked with someone like that, but if the starting point is as bad as the first sentence above then it's only going to go downhill from there.


Yea, but then you see two idiots spend countless weeks “perfecting” things that don’t matter. Shipping matters. Customers don’t care about the fires behind the scenes, they care about bouncy pixels. The PHB meme has to die.


> Don't let perfect be the enemy of the good!

Exact quote from my manager, several times. They just don’t get it. And then a trivial feature takes a month to more or less implement because it’s all built on a toppling tower of crap.


"In our case, we're letting good enough be the enemy of barely tolerable."


PHB: Are you calling me a liar? Or just an idiot?

If the situation is that far degraded, the answer is "IDK, does the distinction matter?" And it doesn't: either the PHB is putting words in your mouth on purpose, or they couldn't hold 30 seconds of context.


> Thankfully I have never worked with someone like that

Please tell me how to find a place like this?


Not just spent: every line of code written adds a small amount of maintenance cost to the TCO. I find non-technical people like it when engineers use their vocabulary.


> Don't let perfect be the enemy of the good!

We're talking 'fairly functional' versus 'broken'.


I did work in a place like that, and it turns out that hastily writing lots of code to do something yesterday whilst under stress just resulted in constant turnover of staff, with nobody knowing how any of the millions (literal millions) of lines of code worked, plus duplicate objects and structures and functionality littered throughout the codebase. The main "technical manager" (a coder himself) was just far too busy fire-fighting and frantically hammering into his keyboard to ever inform anyone else in the known Universe how anything he'd ever written actually worked, and he berated other developers for bugs resulting from them not knowing how it worked. That led to the introduction of thousands of policies regarding coding and check-in, build and style, all of which were out of date and irrelevant, maintained on an unnavigable website that you were expected to somehow just "know", with "discipline" when you fell foul of these many policies. No documented design, specs or anything. Just tonnes of code.

I think the company is going through a slow-motion death spiral as they try to combat this ("let's do design! DOCUMENT! That's the key!") but it just results in deadlock in the stages prior to coding since the "designers" don't know how any of it works, so push garbage onto the coders to "implement", which just gets pushed back as it's nonsense, and nothing gets written. And permanent complaints over how long everything is taking to write and document. So just bullying of the developers with shouting and swearing - that'll get things written, right??

You were also expected to estimate how long things would take to write (eg expand an existing structure) without an understanding of how anything worked, and then held to it. I was literally asked by a "manager" once: "Why do you need to know how it works? Can't you just write it if you've been told what to do?". That sealed the "let's get the hell out of here" at that point.

Meanwhile "management" couldn't work out what was "wrong with the developers". "And why are they all leaving? Why do they not know what these millions of lines of code do without any explanation or time to look at it? They must be RUBBISH developers. We need better developers. Employ more GOOD DEVELOPERS. What? Isn't any of this documented????! It needs documenting right away! Do documentation! Who knows how to document this? Ahhh the developers. Developers - DOCUMENT THINGS.", then "Why is there no code being written? What? Why are you documenting??? It's taking too long! Write code! And document! At the same time even though it's impossible! WE NEED MORE DEVELOPERS! Where are the GOOD DEVELOPERS???! We need to write a new policy that the developers should adhere to."

Always the developers' fault apparently. Absolute stupidity.


This is the Achilles' heel of non-tech managers. You can link as many articles like that as you want; it will still be a problem. They may be more aware of it, but it will still be a daily struggle. A non-tech manager should be paired with a tech lead in a high-trust relationship for this not to be a problem.

This example is just one case/symptom of a much larger problem.

Non-tech managers will only see fast progress from a combination of:

- a poor developer

- making fast progress

- with shitty code

- on a good-quality codebase

They will see the tech lead/good coder as an asshole in general, with poor performance in general, who for some unknown reason is respected for their code and sometimes magically ticks off hard problems quickly, which "must be a coincidence, overestimated work to begin with, or something like that". That coder is the combination of:

- a person who actually cares about the project

- who repairs shitty code/tech debt

- who thinks more deeply about the problem, and as a bigger-picture issue, not just ticking off the ticket with the lowest resistance possible

- who writes good-quality code

- who, if the problem breaks current abstractions, refactors the abstraction itself

- who cares about readers of the code


People who don't know how to write software have to use "boss" heuristics when determining who is a superstar. Qualities that take prominence over quickly writing high-quality software:

1. "Can do" attitude

2. Never backs down from a challenge

3. Being the glue that holds the company together

4. ...

As you can see these are all the kind of things you can't put your finger on but you "know it when you see it". When they see it, it nearly always looks like their reflection.


This rings very true to me. Unfortunately the subtleties of development like code quality aren't well represented if you only look at cards moving on a board.

In a lot of ways what you are describing is how it should work, that the tech lead works on the hard problems and big-picture problems, like abstractions and architectural issues, as well as mentoring junior devs.

In my opinion any manager (especially a non-technical one) should only measure the team as a unit. This is particularly important when evaluating the performance of a tech lead.

One thing I like to look for is natural lines of conflict in a situation that can arise when different individuals are working towards different goals, and question the underlying reasons why a manager may be acting the way they are. In a lot of cases you can get to a win-win situation if everyone is willing to play ball. Of course if the conflict arises from a fundamental organisational flaw, like poor management methods, or poor company culture, then it is time to move on.


The one that drives me up the wall is the schedule/cost anchoring question, "this should be easy to do, right?", every time they ask for a new feature. It's meant to manipulate you into lowering the schedule or the cost. If you say it's not easy, they question your competency. If you say it's easy, they say, well, then you can get it done this week.

It always gives me pause, and then I double the schedule.


On the other hand, programmers are very good at wasting weeks creating incredible software architectures to try the latest library they read about or a design pattern, or in general over-engineering something so that it’s “better designed”, more generic, more “flexible”, and so on.

In other words, if a project manager sets the expectation that a task is a “task that takes two weeks”, the developer will somehow manage to use about two weeks to make it, that is, find the “best way” to do it given that timeframe, where “best” is probably evaluated under some kind of metric which has nothing to do with product or user value.

Sometimes the question “explain to me why it should take more than two days, when I know it takes two days to do just that in the context of a minimal MVP” goes a long way toward making programmers focus on delivering the maximum value for the product.

So I think there is actually some value in having a technical person continuously challenge developers to shorten their path to implementing a feature.


Programmers actually hate working on the same thing for longer than necessary. They like new problems and new challenges, as you're identifying in your misguided take on always wanting new libs/frameworks. If you find that you actually believe this, you've somehow cultivated a very toxic technology environment where your developers are hiding necessary tasks from you and attempting to make up the difference wherever they can. You have probably browbeaten them and cowed them from speaking about their ideas in the past.


That's a very broad generalization.

I've been getting paid to write code since 1998. The last few years in management. So I know something about actually being a programmer.

I hired someone who was awesome from a technical knowledge perspective. Friendly, personable, smart, driven, etc. Loved talking tech with him and he had really great ideas.

Anyway, the problem was, I'm running a startup and every single project he was on he tried to model it as "the perfect open source project". So instead of doing something simple in a couple of days he would build this really well abstracted, over engineered (but pretty damn good code!), "beautiful" thing that would take 3 or 4 weeks to deliver.

In the end, it didn't work out for him.

Anyway, my point is that it is not true that programmers hate working on the same thing for longer than necessary. I do. You do. Many others do. But some just love being architecture/purity astronauts and refining that 3 line method into a 7 class inheritance hierarchy.


This is a pretty common management strategy to stroke your ego and put you in an elevated position where you awkwardly feel the need to agree.

I do the opposite, I say well it's not entirely clear how much work this is going to take, I'll need to do some initial assessment to get a more accurate idea.

If they continue to pressure I ask them what changes they imagine need to be done since they're so assured of the scope and timeline of their request. I have not a care in the world of letting someone else be the "expert." If they are, I'm more than happy to go with their idea.

I've yet to get a response back outside of a collaborative conversation with another developer working on the same codebase on similar efforts. This is usually where people realize just how much ignorance they have around their request, and that they should leave it to the people more familiar with the work to make accurate assessments.


I'd always prefer the honest path. You could try to explain the problem and why it might seem easier than it is in simplified terms.

Also, fast and slow, hard and easy - those terms don't always correlate. Easy tasks can require a lot of time while hard problems might result in a quick solution after careful consideration. Furthermore getting hard tasks done quickly might be exactly why you're billing that much. Maybe you've dealt with a similar problem before - in that case your experience and competence in that particular subject shouldn't devalue your work.

What I'm trying to say is: Don't base your price on time spent alone.


*slow clap* You've pointed out something very true and very problematic that I've never even really noticed.


> "this should be easy to do, right?"

Yeah, couple of months - me, every time.


The assumption that asking for more information to recreate the bug is a lazy tactic to get out of fixing it is itself a terrible assumption, and it discredits the article's opinion IMO.

Oftentimes asking for more information can speed things up and lead to a quicker resolution.


I have worked on all sides of this in the enterprise world (as a support agent in the middle of a dev team, as an end user, and as a developer on the receiving end) and it is never simple. A bug report takes time to make, and many times it might take time to reproduce or narrow down to a simple test case. A developer should of course get the information needed to fix an issue, but very often they can find the issue with very little detective work if they know the product. The problem I've seen is that you have a fundamentally broken system where:

1. The devs don't really know the product, in the sense that they do not use it. They have very little experience with how it is actually used or with the workflows, and might not have a good overview of the code base.

2. The support agents, if they exist, have little clue about either development or the product, and act just as a filter to remove the absolute majority of known issues while at the same time messing up the communication between end user and devs.

3. The end users have neither the time nor the experience to test or report issues, so you get very bad reports that vary wildly.

When working as a support agent, I spent a lot of my time acting as a filter, stopping the devs' requests from driving the end users mad and preventing the devs from being flooded with crap reports (of course, some always slip through). That means a lot of time spent on reproducing and testing (and sometimes pointing out where the bug is in the code, even suggesting patches). But very few organisations pay for that knowledge, and very few allow agents to muck around for a day with a tricky issue to make sure the devs can fix it without a major hassle.

It is very easy to ask for more information as a dev, and very often it is needed, but a lot of the time it just pushes the cost onto the end user when a minimal amount of detective work (or even a single quick test) would do the same and save everyone a lot of hassle. I would argue that a lot of the time when we, as devs, ask for more information, we do it to push the issue into the future and to save ourselves a minuscule amount of work now, not to get a quicker resolution.


Assuming the reporter is internal, you don't need them to go ahead and type out everything. A quick call with a screen share is all that's needed.

Not even reaching out (again, assuming they are internal...if they are external then there are many other considerations) is just not smart IMO.


In my experience, sometimes this is true and sometimes it is not. In some cases, asking for more information takes much longer. Some of the reasons:

After asking the question, I get a meeting scheduled where design by committee happens. The story or bug you're working on gets completely redesigned, sometimes in an illogical way, requiring more questions.

When working remote, like most of us right now, I will end up waiting a long time for a response. I then must start some other story/task/bug, and when you try to context-switch between multiple complex tasks, at some point your productivity slows down.

I have also received criticism for asking when I could figure it out myself, even when figuring it out may have taken hours more than just asking a yes/no question.


I know this can be nuanced, as pointed out in other replies. But as a product manager, I admit to having used this as a form of triage. People have told me they don't have time to write a proper bug report. I tell them: if it's not important enough for you to spend five minutes reporting, it's not important enough for a developer to spend five hours fixing.

I of course don't respond to everything as black and white as this, and I'm not expecting them to do the kind of troubleshooting that I or a developer could do much more quickly.


Unfortunately, working in a big enterprise world, I've seen both sides of this. After a while it becomes somewhat obvious if someone is legitimately asking for more info, or if they are just punting the ticket down the queue. (The former is when you can tell they are becoming focused whereas the latter seems to be more general, swirling questions.)


Agreed, although we (mostly) solved this a number of years ago. Our validation team has screen recording software running all the time. If they see an issue, they attach the video to the bug report. These videos dramatically improve everyone's ability to accurately report bugs and quickly resolve them.


I too have seen the case when a developer will ask for more information in order to game the process. While this is arguably (and I would agree) a failure of the process since it incentivizes undesirable behavior, you can quickly wind up in a situation where there is a large cadre of developers who will reflexively ask for detail. In this situation you often have good "measured" performance across the engineering org, but there tends to also be a large swath of tickets which find themselves in limbo. Some of these tickets can take months to resolve when a day or two of concentrated effort could have resolved it.


True, having seen this as well, the motivation driving the dev does eventually become pretty transparent.


Also, asking for more information lets you work on something else with more actionable feedback while they gather it.


I agree with this. If you are generally swamped with work, you need to do triage and possibly delegate work.

This isn't perfect behavior. But in some situations the perfect solution isn't feasible.


Yeah, I thought that was really bad advice, especially for incomplete bug reports. There's literally no harm in asking for more information to fix a bug, except to your ego...

A 5 minute phone call or direct message can save hours of hunting and frustration trying to replicate the bug...


I suspect most developers have been on both sides of it. I've most often seen it as a delaying tactic where access/system bureaucracy prevents easy reproduction - so, generally, dysfunctional enterprises.


Sure, it depends on the context. But I've seen many examples where the reportee is asking for information that's already in the bug report, or can be easily inferred from the report.


“I’m sorry for the long letter, but I didn’t have the time to write a shorter one.”


Because I had to explore all the things that wouldn't work before finding the two lines that did.


This summarizes the argument pretty well and should suffice as a tl;dr.


My most challenging bug was fixing a memory overwrite of a COM reference counter (of all things!) that would only happen under very rare conditions: when a 3rd-party C library that was compiled with a different calling convention was called with a certain number of parameters. It took me a month to chase down; frankly, it would have been impossible for me to figure out if Visual Studio did not have data breakpoints implemented for C++. A month... and the fix was also a two-liner. Still proud of that fix 17 years later!


Nice! I had something similar. It took over a week and the fix was to _remove_ one line of code.



I smirked reading your linked text. Common sense required, though.


It's like the old story of the engineer who charged 10k to fix a loose screw[0]. It's not just the obvious effort, it's the effort behind the experience and know-how to recognize and find the most appropriate fix for the problem.

[0]https://www.snopes.com/fact-check/know-where-man/


One solution to the OP's problem is to continuously document the activity leading up to those two lines of code. That way you can point to the notes and say "here's why" .. and I've found that quite often justifiable and shows you've not been goofing off. Furthermore, it also helps someone who'll have to look at those two lines later on to grab some context and understand why they're there.

This could be as simple as notes logged against an issue about experiments done, etc.


Also, adding this log to the commit message itself, to preserve context for future developers reaching that code. Duplication might be necessary for discovery depending on the tools used.

https://dhwthompson.com/2019/my-favourite-git-commit


Some fun folklore involving Bill Atkinson at Apple - logging "-2000" lines of code written.

https://www.folklore.org/StoryView.py?story=Negative_2000_Li...


There's no paradise anywhere. I've been self-employed for 13 years in a rather niche area, as an involuntarily solipsistic army of one for lack of ability to grow my business. Most of my customers are highly technical, but don't understand what I do quite enough to have a "lines of code" resolution on it.

I wish they _would_ ask me why those two lines took two days (in my case, it might be simple burnout; been coding for too long, and longer than a more conventional career track would have prescribed). Instead, nobody much cares whether I write 2 lines of code or 2000; same difference to them, and boils down to the all-important "delivering".

There's some intellectual and creative freedom in that which I suppose folks don't have with a code-involved boss who scrutinises their commits. But the opposite--nobody scrutinising your commits--isn't all it's cracked up to be, either. I almost never have to explain why I did something a certain way to anyone, not because I'm so important and command so much distinction or recognition of my expertise, but because nobody gives a crap. :-)


Oh, that reminded me of my "two lines" moment. Years ago I was writing software for some university project, and one feature took me a few days to figure out and implement, and when I checked the diff I realized that "all" it took was removing some 20 lines of code. I literally added a feature by removing some constraints that I had previously introduced.


Deleting code is giving your future self a gift of not having to maintain that code anymore.


Yes, I like to say it as "A line of code deleted is a line of code debugged."


If someone is measuring productivity based on the number of lines of code written, then they have never written code. Anyone with even a tiny understanding of how programming works would totally get why something so small could take so long.


Seriously, Albert? You spent 7 years and "E = mc²" is all you produced? We need to talk.


Michelin Star: 1kg food

2 Stars: 2kg food

3 Stars: 3kg food

“Well KFC can do it...”


I know the reasons for not producing code consistently. They're logical and I can even verify them. BUT I am still frustrated with myself when I don't produce significant code each day.


> I don't like having to fix bugs.

???

I rather enjoy fixing bugs, particularly really hard ones. They can be fun logic puzzles that take some sleuthing to figure out, and they offer multiple payoffs: first time reproducing it. Figuring out the problem. Figuring out the best fix. Test case fail -> test case pass.


It's kind of a personal style thing. Debugging can be really rewarding to the extent it actually does feel like a puzzle rather than digging through a sewer, but I myself like green-field development lots more.


This article made a number of great points.

> I know some developers don't like having to fix bugs, and so do whatever they can to get out of it. Claiming there isn't enough is a great way to look like you're trying to help but not have to do anything.

God, this behavior has annoyed me so much at times. I've worked with a few developers that were not bad overall, but would use the slightest excuse to punt on fixing an issue they were tasked with but didn't want to track down. Regularly weaseling out of tasks like this wastes the time of multiple people and either ends up back with the original dev or gets dumped on a more responsible worker.

> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.

Not looking for other places in the code that are very likely to be affected by the same issue is bafflingly common, in my experience. Although I would say that managers are much more often to blame for this behavior than the devs. Any workplace that puts less weight on fixing an issue well than on artificial metrics like number of tickets closed is incentivizing exactly this type of behavior. Why bother getting criticized for spending all day fixing a simple bug the right way when you can fix 5 different iterations of that same bug and close 5 tickets in the same amount of time?


> Regularly weaseling out of tasks like this wastes the time of multiple people and either ends up back with the original dev or gets dumped on a more responsible worker.

A lot of this depends on the environment and circumstances. If you're in the middle of working on feature X it's very annoying to have to drop it to look at a bug, sometimes it's necessary but usually it can wait. This is where having great user support comes in too, capturing what the user was doing, getting relevant logs and knowing how to reproduce are important and if you don't have a good support team that falls to the developers.

The other big factor is external pressures, if you have management asking for frequent updates and putting pressure on to get through tickets quickly (especially common at consultancy type shops) then bug fixing is miserable high pressure work that I will avoid at all costs. An environment without those and bug fixing can be fun, give me the biggest most spaghetti like enterprise system and no time pressure and it feels like getting paid to solve a giant Sudoku puzzles all day.

While we're all swapping war stories, I'll share my most epic two-line fix, at a place where I was afforded the time. I was working on this huge mess of over-abstracted, multi-threaded, spaghetti enterprise OO, trying to track down a bug that happened maybe once a fortnight. I tried narrowing it down to reproduce it, but nothing was working; the stack trace was about 15 levels of indirection away from the trigger, so the most I could narrow it down to was hundreds of thousands of lines of code.

After a couple of weeks of trying things and getting nowhere I told the boss we'd probably never track this down, but they insisted I keep trying. Eventually I wrote a script to copy all the logs locally where I could search and do some analysis on them. After grepping 18 months of logs, I noticed that on 3 or 4 occasions the same error was happening within 5 seconds of each other. From there it was a matter of finding "Sleep(5000)" in the code to know exactly where the error was.

Turns out that 15 levels of indirection was quite slow and was fetching a stale piece of data we already had anyway, so the time wasted turned into a nice little improvement.

The scripts for the logs become invaluable too. So many times we got "you incompetent idiots broke x with your last update" we could reply a minute later with "x has been happening since <time before any of us worked there>, you only just noticed".
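The clustering trick in that story generalizes nicely. A rough sketch of the idea (the timestamp format and the way errors are extracted are assumptions, not the original scripts):

```python
from datetime import datetime, timedelta

def find_clusters(log_lines, window=timedelta(seconds=5)):
    """Find occurrences of the same error message within `window` of each
    other -- the pattern that pointed at the Sleep(5000) in the story.
    Assumes lines look like: '2020-07-14 10:00:01 ERROR something broke'."""
    last_seen = {}
    clusters = []
    for line in log_lines:
        stamp, _, message = line.partition(" ERROR ")
        if not message:
            continue  # not an error line
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        prev = last_seen.get(message)
        if prev is not None and when - prev <= window:
            clusters.append((message, prev, when))
        last_seen[message] = when
    return clusters

logs = [
    "2020-07-14 10:00:00 ERROR stale data",
    "2020-07-14 10:00:04 ERROR stale data",   # 4s later: suspicious
    "2020-07-14 11:30:00 ERROR unrelated",
]
print([c[0] for c in find_clusters(logs)])  # ['stale data']
```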


Oddly, I've worked at places where they PREFERRED minimized changes to bugs. i.e. "You made a whole new method to deal with this problem, is there a smaller fix?"

Thankfully most places I've been at -prefer- a smaller fix even if it takes a little longer to figure out.


You've only rendered one two-word verdict -- why did that take five days of deliberation? You had twelve people working on it!


Those two lines took two minutes to write.

Knowing which lines to add, and where, took 25 years of experience.

That's what you are paying for.


There's an anecdotal quote from Picasso to that effect: tl;dr he doodled on a coaster and asked 60K for it. Not because it took him 5 minutes, but because it took him 50 years.

Personal anecdote, the 'old' UI I'm working on had an issue where a dialog window's action buttons would be outside the visible area and people had to scroll all the way down. One guy they hired part-time spent several days trying to figure it out. I came in and added a few lines of CSS and it was fixed. The bug had been open for three years. The CSS in question:

    .dijitDialogPaneActionBar {
      position: sticky;
      bottom: 0px;
    }


Another Picasso anecdote: someone saw his doodles and said "pfft, I could do that!" He replied "So why haven't you?"


> I know that reporting errors can be hard, and I'm grateful for anyone who does. I want to show appreciation for error reports by trying to do as much as possible with the information provided before asking for more details.

This might be coming from a noble place but sounds a little like shooting yourself in the foot. Bugs that can be reliably reproduced are the easiest to fix, and I've found the quickest way to get to a set of reliable reproduction steps is just to ask exactly what the user was doing when the problem happened. They don't always remember, but often do. Sometimes they even remember the time, which can be really useful for digging through logs, which otherwise are too voluminous to be relevant.

Maybe it's a cultural difference. But maybe we could "show our appreciation" for the bug report by just saying so ("Thank you so much for taking the time to report this issue. Users like you play a big role in helping us improve our software"), instead of soldiering on in the dark for 2 days.


It’s not just management that feels this way. A lot of junior engineers feel the same way. It’s what leads to bloat because they assume “it can’t be right” if it’s just a small change.

It takes a while for a lot of junior engineers to realize small elegant solutions are better, and requires good mentorship and code review to get there.


Because I procrastinated due to the overwhelming complexity overloading my mind with the myriad scenarios resulting from those two necessary lines and my lack of experience with this scenario depriving me of the intuition necessary to prune the aforementioned tree of mental complexity in an efficient manner.


Because I have spent time thinking, not merely typing. (The error "programmers are just overpaid typists" is widespread)


My first boss and my boss two positions ago both thought that way. It was incredibly frustrating. Along with: If only you programmers would stop putting in bugs we'd ship perfect software every time. (When most of the "bugs" were specification errors and not program logic errors.)


Where does program specification end, and where does program logic start?


Program logic error:

  // return the maximum value from an array
  return array.min();
Specification error:

  Only users with role X can access this content.
  [where X should've been Y]
In the former case, a test should catch it (because the code isn't doing what we believe, per spec, it should be doing). In the latter case, only validation (confirming with customers) can catch it. Any test that is run against the code will be based on the spec which tells us to do the wrong thing.
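A minimal sketch of the former case, where a test written from the spec catches the logic error (names are illustrative):

```python
def max_value(array):
    # Program logic error: the spec says "maximum", the code returns the minimum.
    return min(array)

def test_max_value():
    # The test is written from the spec, not from the code,
    # so it catches the logic error.
    return max_value([3, 1, 2]) == 3

print(test_max_value())  # False: the spec-derived test exposes the bug
```

No equivalent test can catch the latter case, because the test author would also write "role X" per the spec.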

Though we usually didn't run into the latter case, ours were mostly embedded systems. So the problem was more like: You implemented this against Industry Standard 1234-A, but we're implementing against Industry Standard 1234-B which says that message Foo word 15 now has a range of 1-20, not 1-19, and 20 means "cease operations", and 18 has been changed from "cease operations" to "halt and catch fire".

So one system was developed per one spec, and another per a slightly different spec. The other common scenario was that the specs weren't created from thin air, but rather based on a preexisting system. And the authors of the spec misinterpreted what the preexisting system did and gave an incorrect specification for a behavior. When testing against the old system (or with the old system as these were mostly communication systems) you'd see a difference in behavior or failure to communicate. But since tests were never truly comprehensive, many of these errors could make it out into the world.


What happens in the case where the specification is:

[User inputs X1, system displays Y]

and on a system crash the user input was X2?

Is that a specification error or a program logic error?


I generally consider system stability an assumed part of the specification. Your system should handle most input errors from users more gracefully than a crash. Specifications are never as detailed as the program. So a description of what it should accept implies what it shouldn't. The questions for the programmer when faced with invalid input are:

1. Should it crash? (almost always no)

2. Should it process the garbage input as though it were valid? (pushing the input validation problem further down and potentially causing issues in random locations of the program)

3. Should it reject the input and request a different input? (probably)

Once you get to 3 you've got a number of ways to re-prompt the user or indicate that the input is invalid in a way that won't crash the program. You may need to go back to the customer to figure out their preferred resolution. But crashing is almost certainly not what they want and a sign of a program error (not specification error).

I would classify it as a specification error if they told us X1 would work, and then supplied X2 in a way that was close enough to pass most validation, but not close enough to work correctly.

Like, "The data format is a series of messages. Each message consists of up to 512 16-bit words. Word 0 specifies the length of the message, including itself." and then it turns out that word 0 specifies the length excluding itself causing us to not grab enough data in the first message, and then random amounts of data after that.
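That framing mistake can be sketched like this. The parser and stream layout are hypothetical, but they show how one wrong assumption about word 0 desynchronizes everything after the first message:

```python
import struct

def split_messages(data: bytes, length_includes_word0: bool):
    """Split a stream of 16-bit-word messages where word 0 is the length."""
    messages, offset = [], 0
    while offset + 2 <= len(data):
        (length_words,) = struct.unpack_from(">H", data, offset)
        # The contested assumption: does word 0 count itself?
        total_words = length_words if length_includes_word0 else length_words + 1
        end = offset + total_words * 2
        messages.append(data[offset:end])
        offset = end
    return messages

# Two 3-word messages; word 0 counts itself (length = 3).
stream = struct.pack(">3H", 3, 0xAAAA, 0xBBBB) + struct.pack(">3H", 3, 0xCCCC, 0xDDDD)

print(len(split_messages(stream, True)[0]))   # 6 bytes: framing correct
print(len(split_messages(stream, False)[0]))  # 8 bytes: ate into the next message
```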


If I understand correctly, specification describes "what/when" and program logic describes only "how".


Where management says it does.


I recall a story about someone at a government contractor who did a major refactoring and removed thousands of lines of code from a project, increasing its performance, only to be told by management that they'd signed a contract that said the company got paid by lines of code delivered, and his improvement would cost them tens of thousands of dollars, so revert the whole thing.


As long as we are counting LOC before compiling, that can be solved easily:

    if (false) {
      /* original code stays here, as we are paid by LOC */
    } else {
      /* write your new code here */
    }


Realistically, it would take some compile time conditionals in a bunch of places to get rid of the dependencies. The best part, you'll get paid for all those #ifdefs, whens and #[cfg()]s! You can even split longer functions to be able to wrap each one of them in those conditionals! Where do I sign up?


As long as this is contained to the metrics-metagame branch of the git repo, I am fine with this.

Get 80k commits in, before lunch.


That's a pretty bad breakdown in communication: the manager should have communicated how the contractor was paid, and the programmer should have spent their time working in line with that. Whichever end that breakdown happened on, it sounds like it led to an awful lot of misery and wasted time. Ouch.


> That's a pretty bad breakdown in communication- the manager should have communicated how the contractor was paid and the programmer should have spent their time working in line with that

Seems

more

like

a

breakdown

in

negotiating

the

contract

than

a

breakdown

in

communication.

No

customer

is

actually

interested

in

having

the

programmer

do

their

work

in

line

with

a

compensation

scheme

that

pays

more

for

more

lines

of

code.


What if a single line change, given it took 2 weeks, fixed 40% of your crash rates? That fix alone is worth millions, almost 100x - 1000x the engineer's hourly salary in down time.


A manager would certainly argue the other way around: The person who created the bug costs the company a lot of money. Fixing that single line was just necessary because someone didn't do his work in the first place. So fixing that line is something you should do off-the-clock.

This is just one of the reasons why I appreciate it when managers have at least some coding experience.


To that manager: that bug wasn’t the creation of a single developer. It went through code review. It had unit tests. A QA verified its initial implementation and a Product Owner signed off on the feature as done.

If it wasn’t caught before prod, it’s either such an edge case as to be almost impossible to catch OR (more likely) it’s representative of a systemic failure within the organisation.


The only mistake in the procedure is not asking for help sooner when you don't have enough info. This is very common, and I have done it so many times that I've come to understand it's a waste of time. The only exception, IMO, is when you really need to understand certain parts of the codebase at a very low level, where spending time solving things by yourself is well worth it as an exercise and helps a lot. If it's a part of the codebase you're not likely to be working on any time soon, just don't do it. Talk to whoever reported the bug, and also talk to the last person that worked on that piece of code (git blame is your friend here).


One of my favorite bug fixes took me two weeks to find, and the fix was to swap two assembly language instructions (this was a bug in the Apple Newton context switch code, and swapping the instructions let timer interrupts happen reliably, which is kind of important for thread scheduling). We'd been having intermittent problems for months, with no smoking gun. I got mad at it, and found it.

No one was upset at the fix -- if anything, the checkin's brevity communicated its correctness -- and I got a couple of pats on the back for it.


I would add: "Because we keep punting on our tech debt, and our infrastructure is so bad that after I spent 2 minutes writing the code, it took 2 days to get it tested, committed and deployed and deal with the fallout"


If the reporter hasn't provided enough information to recreate the issue (it's obviously not a major, deal-breaking issue; otherwise it would be obvious and easy to recreate) and they are internal to the company, tell them to provide more information before moving forward.

The author's approach is good for external bug reports, but they don't clarify that's indeed the case here.

I have to strongly appreciate the author for finding the root cause and tackling that instead of the symptom.

So often, especially in front end coding, you will see an exception being thrown because of a null value being passed in, and the "fix" checked in by the developer basically returns the default value if null is passed in, when they should be investigating and fixing why a null was passed into the function in the first place.

If your function has a contract that forbids nulls, papering over the violation resolves the immediate bug, but it almost certainly leads to multiple bugs being created in the future (or worse, something that is quietly wrong, because 1 row in a 100-row table is missing and no one noticed) until the root issue is resolved.
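A small sketch of the two approaches (hypothetical names); the first quietly drops data, the second surfaces the caller that violated the contract:

```python
def render_name(user):
    # Symptom patch: swallow the null and return a default. The page
    # stops crashing, but a row quietly disappears and the upstream bug
    # that produced the null survives.
    if user is None:
        return "(unknown)"
    return user["name"]

def render_name_strict(user):
    # Contract-enforcing version: reject the null loudly so the caller
    # that violated the contract gets found and fixed.
    if user is None:
        raise ValueError("user must not be None; fix the caller")
    return user["name"]

rows = [{"name": "Ada"}, None, {"name": "Grace"}]
print([render_name(u) for u in rows])  # ['Ada', '(unknown)', 'Grace'] -- silent data loss
try:
    [render_name_strict(u) for u in rows]
except ValueError as e:
    print(e)  # surfaces the bad caller immediately
```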


Reminds me of that time a PHB decided to measure efficiency by kLOC. All the pull requests in the following week had a net negative line count.


The PHB should put extra bonuses on kLOC, like $1k per kLOC, and watch the codebase grow exponentially. And since documentation is better than code, double the amount for lines of comments.


That's a literal Dilbert strip, 1998-ish, IIRC.



Ah, that's the one. I have misremembered it slightly...


Conclusions:

1. Picking a good manager is very important!

2. Communicating with your manager is very important!


1. Being in a position where you have the privilege of being able to pick a good manager is important :)


I remember hearing somewhere that the No. 1 reason people leave their job is their manager. I don't know if that's true, but the idea's there. So it makes sense that your manager should be a major decision criterion when you pick a job (being able to actually pick a job is also a privilege, but it's fairly common in SWE).


Many companies have a team-selection process that starts after (and is separate from) the actual hiring, e.g. you might not know who the available managers are or if there even are any 'good' ones.



My personal best was -112,000 on a legacy project.

The code base was full of commented out code (the best case scenario as you know immediately it can be deleted), methods which had been deprecated and replaced with ‘myMethod2()’ and eventually ‘myMethod3()’ with all of them still in the class (and for extra fun, it wasn’t always the case that all references had been updated to the newest method), and thousand line blocks of code which static analysis helped me pick up were actually not possible to ever actually execute, etc.

Basically a static analyzer with an analysis mode for finding dead code just flagged pretty much the entire codebase.

And in the process I split a totally unrelated project which had been grown inside that codebase like a tumour into its own codebase.

Needless to say two weeks of my time invested really pepped up development velocity for that team.


One time I spent 3-4 months working on a project that amounted to adding 11 lines of code to a config file on kubernetes. Tbh, a lot of it was bad communication, but it was also because I wrote and rewrote so much of the same code.


I’d rather write 2000 lines in two days since if I’ve only written 2 lines, figuring out those 2 lines must have been miserable.


I know this behavior.


Let's not forget meetings. Advising on sales. Talking with customers about new features. Helping out tech support and services. Traveling to customer sites to review specialized requirements.

Or the real reason it takes 2 days is that the code base is a big ball of unbuttered spaghetti with no tests.

I was lucky enough to be able to rewrite a couple of products from scratch with the benefit of hindsight. When the code is loosely coupled and well organized, it's rare for any of the reasons listed in the article to stall development. When the code base evolved unpredictably over a couple decades, the article is spot on.


Two lines ? Bah, a one line fix took me 5 weeks once!

(issue deep in the storage stack with exotic hardware only available at a customer with all debugging going ping-pong over an issue tracker between me and a customer engineer)


This is a great write-up. This is real life. And then you get a corporate dude berating everyone and asking: what are you doing to make sure that this never happens again? And I always want to respond: "I can guarantee that the same thing won't happen again, but something very similar will happen if you don't assign sufficient resources to fix the real problem instead of putting on a band-aid. And you won't even know the difference."

I think the OP covers this case very well.


E = m c² is just three letters and a bit of gutter, how long could that possibly take to figure out?


Thanks, gonna use that argument in the future


I agree with the general sentiment of the article, but the following is a big mistake IMHO:

> Some developers would have immediately gone back to the person reporting the problem and required more information before investigating. I try and do as much as I can with the information provided.

Some developers have a hard time with interpersonal communication but you can't isolate yourself if you're working in an org. That mindset will inevitably make you less effective (I learned that the hard way).


> You've only added two lines – why did that take two days?

If this is hard for them to understand, the confused look when you reveal a change was a net removal of lines/statements must be amusing!

> Because the issue was reported with a vague description of how to recreate it

This is something my current management fully understands, but I wish we could get through to our clients. Short of being actively rude about bad requests I've run out of ideas over the years. Luckily these days we are big enough that I usually have a bit of a shield (provided first line support people and BAs who I trust) between me and direct client contact.

I'm quick to put tickets on hold as "needs more information", and been around long enough that I've developed the confidence to respond to "this is urgent, there is an SLA" with "and that service level agreement covers a minimum level of reporting from you before even an urgent matter can be progressed" or more facetiously "then it is urgent that you furnish me with the information requested", but while they accept it each time they never learn to give better details up-front next time - we still get reports of "an error" or something "not working" or, even worse, the open-ended question "is there a problem with...?".

Obviously this can't be applied to truly urgent issues, but they are usually massive problems that we are already aware of and working on before the client contacts us because, for instance, we've had an alert that something is down (sometimes we tell the client about an issue and resolution ETA before they notice, which they seem pleasantly surprised by and thankful of).


Decent logging of all changes and errors happening on the platform I work on usually leads me to pretty much the following dialog:

"- I have a problem, the platform is broken."

"- Sure, do you have any rough idea when that happened?" (I discovered that, weirdly, people are very good at reporting a bug a day or even more after it actually happened.)

"- Around [some hours range]." (Get the logs for this time and this user.)

"- OK, I see every piece of information I need in the logs; it will be fixed soon."

And that's pretty much it. Oftentimes, I don't even need the time bit, just finding the user history in the logs is enough.

I'm always puzzled at engineers rambling about how "users never learn how to do a correct bug report", when it seems like they themselves never actively learned that lesson and integrated it into their everyday life.

Since users indeed never learn, stop expecting them to!


> Since users indeed never learn, stop expecting them to!

I don't expect them to. That doesn't mean I can't be irritated that they don't!

We have various logs and audits too, that can usually be used to work out what has happened. Sometimes alerts based on those logs mean we are already fixing the issue before any user reports it. Even with full audits and other logs, it is usually still quicker to locate and diagnose a problem if given what useful information we know the user has access to (the standard set: approximate time (at least the date if not today), what were you asking it to do, what did it do instead, any error messages they were displayed).

If the user is given a message with a code that explicitly says "report this number when you report the issue" and they don't bother (they just say "I got an error"), and that information would save me time, you bet your arse I'm pushing the issue back onto the queue and getting on with some interesting dev/infrastructure work until a better report arrives, or looking at another issue where there is decent information.

Want to be at the head of my TODO list? Then make at least a minute amount of effort to help me help you.

It particularly bothers me when clients who negotiate a discount because they won't need first-line support as they are "big enough to have a department that triages that sort of thing", still send through terrible reports because their idea of triage is just hitting the forward button whenever an email comes in. I've got better things to be getting on with than providing free outsourced work for a bad IT department!

I'm not very customer facing these days, since we have grown to the point where I have a bit of a shield provided by our support/PS/BA teams, which is good for both my irritation levels and the clients!


Too bad about the low-contrast gray on white text. It is a source of eyestrain.


Agreed, came here to say the same:

https://contrastrebellion.com


Came here to say this. Gray on white for anything other than disabled options is a sin. This website is particularly egregious.


https://speakerdeck.com/jallspaw/findings-from-the-field-dev...

this reminded me quite a bit of this deck -- in particular, a focus on a shallow metric (in this case LoC) as a proxy for measuring complexity.


One of my favorite contributions involved two commits: one that added 5 lines, another a bit later that removed a different 5 lines. This was after more than a week of building tiny models to understand how the system might behave. Thankfully I was on a team that appreciated it, although the reduced load on on-call quickly drove the point home :)


Nobody should manage programmers who is not themselves a very experienced programmer. Otherwise, you are in Dilbert Country.


The tricky part of this is that not all programmers make good managers, or want to be promoted into management. They are different skill sets. That said, having a basic understanding of the difficulties and things that take time (learning new technology, investigating/debugging an issue, etc.) would be a good thing to have as a manager. Also, being able to assist where possible -- asking if the developer needs someone to help out, finding people with the relevant skills to mentor the developer (or finding suitable courses/training/books), etc.


I mean, is it really this hard? This is not a good/bad manager problem to me, this is an organizational culture kind of problem. As a manager I'd expect my three top priorities to be ensuring enough devs are on the team, communicating priorities effectively, and unblocking the devs on the team as needed.


"Because I had to spend a lot of time to compile this long list of things that I also did while fixing the problem which are essential for a good solution but aren't immediately visible from the two lines I actually added so I would have something in hand to rebut this stupid question of yours!"


Because product managers struggle to comprehensively understand value add. Instead of stating business goals and value add, they substitute bullying, micromanaging tactics like counting lines of code, conflating such arbitrary code metrics with a 1:1 ratio of accomplishing the goal. They don't respect troubleshooting, architecture design (unless you take another two days to turn it into a diagram, presuming it needs to be consumed by some other party), or finding an elegant way to implement the goal as "work", because they can't see it. And they can't understand it, because they're too busy collecting visible proof that they're properly micromanaging you to take the time to learn the challenges inherent in the architecture and the task at hand.

Does that help?


That sort of question is usually asked by someone who either has no clue how programming works or is just from the sales team. I do admit I left one software company because of a manager like that. Fixing bugs is painful (to say the least) if you don't know the code properly. Even worse when you have to "hit the ground running" and take over a project because the main guy for it left "due to personal circumstances" or a "difference in opinion". Ever since then:

- "If you think it can be done faster, please go ahead."

- "It will take as long as it takes, not a minute more."

- "If you really want a time estimate: it will take me 4x <the time I think it will>."


At what point did we, as an industry, fail to mandate a level of technical education for managers? Almost always, the non-technical people who can't think beyond quarterly profits and their resumes are the biggest problem in this industry.


What industry requires a relevant 'practitioner' education for being a manager?


Aircraft carrier commanding officer must be a pilot...


Well, I guess the whole concept of "officers" in militaries kind of fits?


A good commit message can also help to explain why the two lines of code took so long.
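For instance, a message along these lines (hypothetical bug, hypothetical details) records the invisible work right next to the two-line diff:

```text
Fix double refresh of session tokens (2-line change)

Symptom: intermittent 401s under load.
Root cause: two threads could pass the expiry check before either
wrote the new token; the second write clobbered the first.
Fix: take the refresh lock before the expiry check.
Verified: reproduced with a stress script, green after the change;
ruled out the client-side retry logic as a cause.
```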


We had a non-tech lead at some point (although his title was something like development manager) and he would praise my coworker for how many check-ins he did. Except that coworker would do things like:

1- Copy-paste an entire class into a new class and change a single constant in it, because he was too lazy to do inheritance.

2- "Solve" multiple bugs a day that he had introduced himself the day before.

3- Loudly complain about other people's frameworks/codes.

He was the super confident type even though he was wrong more often than not. But paired with a non-tech lead with his own impostor syndrome, it was a recipe for disaster.
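Item 1 is easy to sketch (hypothetical class names, not the actual codebase): the copy-pasted class doubles the maintenance surface, while the subclass changes exactly the one constant that differs.

```python
class CsvReport:
    DELIMITER = ","

    def render(self, rows):
        # Join each row with the delimiter, one row per line.
        return "\n".join(self.DELIMITER.join(map(str, r)) for r in rows)


# Antipattern: the entire class copy-pasted to change a single constant.
# Any future fix to render() must now be made twice.
class TsvReportCopyPasted:
    DELIMITER = "\t"

    def render(self, rows):
        return "\n".join(self.DELIMITER.join(map(str, r)) for r in rows)


# What inheritance buys: one overridden constant, and fixes to
# CsvReport.render() apply everywhere.
class TsvReport(CsvReport):
    DELIMITER = "\t"
```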


That word (imposter syndrome) does not mean what you think it does :)


Hindsight is 20/20.

It takes more work to produce succinct code, and it can take a surprising amount of effort and care to land at a simple solution to a complex problem.

My solution is to try to involve other people in my "process". This helps me transfer some knowledge of the decision process, helps debug ideas early on and hopefully is useful mentoring for the team.

I can do this because I am senior engineer / tech lead at my org. For other engineers I highly recommend pair programming and constantly rotate developer pairs so that everybody can get some appreciation of everybody else.


With unit tests included it would probably be more than two lines.


This.... so much this.

I remember working on a bug and writing tonnes of unit tests, a bunch of scripts, and eventually implementing an entire e2e suite when it wasn't reproducible anywhere else.

Change to actual shipped code: 5 chars.


Only two days? I've spent two weeks on a single line bug fix before (although I wrote quite a few more lines of unit tests to make sure it continued to work).


So much good in this short piece. He sounds like someone I want on my team.

Users who work with IT often tend to give better descriptions and test cases. Quite often I need more information though. He's right that you try not to bother the reporting user. Sometimes there's no other way.

Reproducing a bug is often the most time-consuming part of a bug fix. It's doubly difficult when you have a shared test environment and the bug leads you into a shared data set. For instance, we have a scheduling table that's used by many applications. I can't change the data, even on test, because it can easily mess up other teams. So I have to make a copy of it to my schema, alter data, and point the code to the altered copy.

"If some code is throwing an error, you could just wrap it in a try..catch statement and suppress the error."

Yes, these developers exist. I worked with several over my career. They are frustrating because they leave damage for other people to fix, often at the worst time.
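A minimal sketch of the difference (hypothetical function, assumed "blank means free" business rule): the swallowed version silently returns None and lets the caller proceed with bad data, while a real fix handles the known bad input and still lets unexpected input fail loudly.

```python
def load_price_swallowed(raw):
    # The "no error, no problem" non-fix: any parse failure is hidden
    # and the caller silently receives None.
    try:
        return float(raw)
    except ValueError:
        pass


def load_price_fixed(raw):
    # An actual fix: handle the one known bad input explicitly
    # (hypothetical rule: blank means free)...
    if raw.strip() == "":
        return 0.0
    # ...and let genuinely unexpected values still raise loudly.
    return float(raw)
```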

"Finding the exact cause of a problem, and looking at all the ways to get there can provide valuable insights."

Yes a thousand times. Bug fixes are opportunities to learn more about a system and, often, the user area for whom we wrote the software.

"I want a fix that isn't likely to cause confusion or other problems in the future."

A good fix takes into account the overarching software design and fits it if possible.

"I don't want a bug to be found in the future and for me to have to come back to this code when I've mentally moved on. Context switching is expensive and frustrating."

Writing software is building an abstract machine in your mind. These machines get complicated. Even when you're fixing a system written by someone else, you need time to "load" the machine into your mind.

The only time I don't like fixing bugs is when I'm against a deadline on writing/modifying another system. The context switching is a deadline killer. But, it happens. Nowadays I let the project manager know that I'm switching to a bug fix, give an estimate of how long I'll be away from his/her project, and my best guess on whether or not it'll cause a deadline slip.

Great post.


One of my favorite personal commits was removing about 5000 lines of dead code tightly woven into many other parts of the overall codebase.


This reminded me of this commit from my GSoC internship: https://github.com/lihaoyi/Ammonite/pull/93/commits/a5e30eff...

This single change took more than a week of debugging.


I remember removing a single line taking me about a whole day. I was working on an embedded C++ project (a TV), and for some reason it was randomly getting stuck in an infinite loop. Debugging tools were minimal, and it took me a while to figure out it was stuck on some kind of mutex; all I did was remove one mutex lock because it wasn't necessary.
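The original was embedded C++, but the class of bug translates: a non-reentrant lock acquired twice on the same thread blocks forever. This hypothetical Python sketch uses a timeout so the self-deadlock is observable instead of an infinite hang.

```python
import threading

lock = threading.Lock()  # non-reentrant, like a plain mutex


def update():
    with lock:           # the unnecessary outer lock
        return refresh()


def refresh():
    # Bug: tries to take the same non-reentrant lock again.
    # In the real device this was an infinite wait; the timeout
    # here just makes the deadlock visible.
    if not lock.acquire(timeout=0.1):
        return "deadlock"
    lock.release()
    return "ok"
```

Calling `update()` hits the deadlock path; calling `refresh()` on its own succeeds, which is exactly why the outer lock was the line to delete.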


This reminds me of the story about an older engineer called in to fix a computer problem. Previous efforts by the local IT staff had failed. In two clicks, the problem was fixed. He charged $100 for it.

Outraged at having to pay $100 for two clicks the customer demanded an itemised bill. The engineer wrote:

- 2 clicks: $0.05 / click

- knowing where to click: $99.90


I wrote a whole article about a single line of code; I hope the mentality starts to change.

https://www.theguardian.com/info/2019/dec/02/faster-postgres...


There is an annoying tendency that business people get the credit for coming up with the ideas that bring in the value, and the IT people get the blame for the defects and the problems that costs the business money. It is not like that everywhere, but in companies where it is, it is unhealthy to work as the IT staff.


For a previous job long ago, I spent several months, off and on, working on an obscure and difficult-to-reproduce bug, without success. A year after I was laid off, a customer encountered a clear reproducing case, and I came back for a day to work on it. The fix was one character: < vs <=
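That class of one-character fix is easy to sketch (hypothetical function, not the parent's actual code): an off-by-one in a boundary check, where the only diff between buggy and fixed is < vs <=.

```python
def in_window_buggy(value, lo, hi):
    # Bug: 'hi' itself is supposed to be inside the window,
    # but '<' excludes the exact boundary value.
    return lo <= value < hi


def in_window_fixed(value, lo, hi):
    # The one-character fix: < becomes <=.
    return lo <= value <= hi
```

Months of debugging can hide behind a diff like this because the bug only fires for inputs that land exactly on the boundary.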


Actually this is one of the upsides of web development: Since every line contributes to the file size of your software and therefore to the loading time there is a motivation for keeping the code base small.

Sadly, there are many projects out there which obviously failed to reach this target at some point.


This can also go surprisingly wrong when management decides code size is the performance metric needed to improve the site and hires a "performance engineer" who abolishes all structure and abstraction in the code, making it near-impossible to debug and maintain. But at least it runs fast.


True, the quest to reach a small file size can also lead to various down-sides. Cryptic architectures and weird code are just two popular symptoms.


This is not an IT specific problem.

For most jobs they don't do themselves, people tend to misjudge the time it takes.


Had a PM ask why we were taking so long to build a future-proof platform when this other team over here built an emergency project in 3 weeks.

Had to explain that that project was held together with duct tape, that not one bit of it was reusable, and that even the engineers who built it were saying it was shit.


I would've loved to be a fly on the wall in that meeting.


Bill:

  Adding two lines:               $     1.

  Knowing which two lines to add: $10,000.
https://www.snopes.com/fact-check/know-where-man/


The real reasons: depressed due to the current world situation, browsing reddit, HN, youtube, doing housework, checking stock portfolio, chatting with friends. In between all these distractions, one does manage to get some work done, at least 2 or 3 days a week.


If you work somewhere you're defending yourself from this question repeatedly, just move on at your earliest convenience. They don't value quality developers and you'll eventually turn into the low-quality developer they've always wanted.


Trying to fix a very minor bug in a solitaire game in C that I heard about from this site. Somewhere it must be missing a range check, causing a core dump (or whatever Linux/macOS call the MVS thing). The problem is finding where. Two days now.


I can’t like this post enough. We are not accountable by lines of code. Delivering healthy, maintainable software requires thought, experience, and thorough testing. This is why companies have taken so long to accept the idea of unit testing.


>Because I tested the change thoroughly and verified that it addressed the problem for all the different code paths that were affected.

Doesn't this imply that there'd be more than two lines of code, unless they're not counting the tests?


Not necessarily, not all testing means automated tests written in code. Manual testing can be more effective in the short run sometimes


I love the bug hunt. I hate greenfields. I procrastinate when I have to write a project someone else has to maintain; I feel the pressure to perform. Squishing bugs is fun. That was someone else who did the foul-up.


I spent a month figuring out a single line of configuration. That really hurt.


I personally do value velocity not in number of lines, but in terms of unblocking users and other team members.

So:

1) User wants X done. Focus on that. Help the user get X done. In the first iteration you have something crappy; it doesn't look great, it's too simple. That's okay. Did it get X done? Cool. If you shipped some pretty UI but users don't actually use it since it doesn't solve the problem, that's not productivity.

2) Now that there are users using the tool, watch them use it. Ask questions, look at the dashboard. Optimize their flow to do X. Make the UI delightful. We know with good certainty that this is a decent solution. Make it fast. Add tests. Harden it up so someone else doesn't break it accidentally.

So if someone added two lines using some existing node module that solves the user’s problem, those are two very productive lines.


The fix is a refactor of a refactor of a refactor of the actual fix.


I once reduced the amount of code for a client by ~75% and added usability and fault-tolerance. I wasn't even half done and they thought it was good enough. LoC is no measure.


Because coding is like the art and science of distillation. Cheers.


It's really sad that people (managers) still think of code this way. Luckily this has never happened to me, but if it did, I think it would be a great sign to change jobs.


Most of the article is good, but this is weak:

> If some code is throwing an error, you could just wrap it in a try..catch statement and suppress the error. No error, no problem. Right? Sorry, for me, making the problem invisible isn't the same as fixing it. "Swallowing" an error can easily lead to other unexpected side-effects. I don't want to have to deal with them at a point in the future.

I actually did work with someone who did this sort of thing; however, it is certainly not the norm in my experience, not for anyone who takes any pride in their work.

Including this one really devalues the "it took so long because I'm so professional" message IMO.


Nobody I ever worked with cared for LoC.

They had a shitty bug. It took me a month to find and fix in the code. Wrote about 5 LoC. Nobody asked how I fixed it, they just were happy that I did.


Symptom v Root cause. Bad spec/no spec. Known issues in manufacturing since pre-75. Software gotta get with the program...


I sympathize with this, but taking two days to add two lines of code is unproductive. Sorry, that's just not a lot of output. Two-day bugs happen, but they should be rare. The real question is: is this rare, or is it typical for this developer?

IMO the real problem is trying to evaluate an employee's productivity on the scale of two days. There isn't enough context to understand the situation.


You CANNOT make any statements about whether or not a code change is productive / unproductive if you do not know the context. It doesn't work like that. Take any random commit on https://github.com/torvalds/linux/commits/master; most change just a handful of lines but I can guarantee that each of them represents a significant time investment in analyzing, documenting, discussing and testing the code change in question.

I mean I get where you're coming from, if your job is basic data wrangling (webapp -> rest API -> back-end -> database and vice-versa) then you don't need to put too much thought into it and just need the output. But that's only one part of software development.


I don't think we disagree. One line per day can be a reasonable rate of change. But usually it isn't. Usually that's too slow, and it's indicative of an unproductive developer.

Everyone in the thread is defending the OP because the issue is relatable. "He's productive, his manager doesn't understand!" Ok, maybe. But also maybe his productivity is way below what he's being paid for. We don't know, we don't have enough context to judge.


The hardest and most time consuming work is the architecture/design. The code should be the easy part.


Dude, you could have just said “because I take my job seriously and executed with complete diligence”.


Cuz trying to understand someone’s code, testing, debugging, finally applying the fix, and testing again.


Programming is like playing chess: the move itself only takes seconds, but deciding what to move takes time.


The response to a manager who says this: "You don’t do anything at all; why are you even here?"


MAKING CHALK MARK ON GENERATOR $1. KNOWING WHERE TO MAKE MARK $9,999. -Charles Proteus Steinmetz


You've only written a few letters. How long does it take to write: "e = mc^2"?


"Because the text was so light it was difficult to read"

Snark aside, it's quite a good list.


Because I wrote failing unit/integration tests before implementing the solution.
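A tiny sketch of that workflow (hypothetical bug: a median function that mishandled even-length input). The test below is written to fail against the buggy version, pinning the bug down before the fix goes in.

```python
import unittest


def median(xs):
    """Median after the fix; the buggy version returned xs[mid]
    even for even-length input, dropping the averaging step."""
    xs = sorted(xs)
    mid = len(xs) // 2
    if len(xs) % 2:
        return xs[mid]
    return (xs[mid - 1] + xs[mid]) / 2  # the fix


class MedianBugTest(unittest.TestCase):
    def test_even_length_regression(self):
        # This assertion failed before the fix above was applied.
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

    def test_odd_length_still_works(self):
        self.assertEqual(median([3, 1, 2]), 2)
```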


I once spent a week just to add a ";" in the right place to fix a bug.


I just spent 3 days figuring out that I needed to remove a single line.


It's beside the point, but grey-on-white fonts give me a headache.


Because looking at this code feels like walking on broken glass.


Mmh, there should be more than two added lines if he wrote tests, no? I know some bugs are difficult to track, reproduce, and fix, but whenever I can I write as many automated tests as possible to check that the bug has really been removed.


Hey, sometimes it takes two days to REMOVE 2 lines of code!


Because the code is architected like a Jenga tower.


2 lines of code? Where's the test cases?


anxiety === lack of documentation + lack of frequent communication + general desire to be _done_ as soon as possible


this happened to me.. except it was one line: free(ptr); /* here it is ---> */ ptr = 0x00;

a classic.


you think adding code takes a long time? try removing code!


It wouldn't go over well to ask a doctor that.


There should be more than two lines of code added anyway: whenever I can, I add as many automated tests as needed to check that the bug has really been removed.



