When you do, it can easily confuse folks who aren't deeply involved in your project ("haven't I seen this already?") and can hurt team morale because they never get a "shipped it" moment that feels good.
More to the point: enforcing this rule incentivizes teams to build things in small, shippable components. Nobody wants to be left out of demo day multiple weeks in a row.
I take a different but similar approach when I run into situations where I want to get something small in front of a business user before it’s completely ready: I make sure it’s visibly broken in a way that doesn’t detract from my goal for the meeting.
For example: I have a registration form that I want to talk through. The state drop down will only have 3 entries, one validation error will always display, and on final submit you get a “failed” alert instead of a dummy page.
It lets me walk through the page and get whatever feedback I need, but it feels completely broken, so non-technical users expect it to take more time to complete.
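For a web form, that "visibly broken on purpose" pattern might look something like this sketch (every name, message, and state list here is hypothetical, purely to illustrate the idea):

```python
# Demo build of a registration form: each rough edge is deliberate
# and obvious, so reviewers focus on the flow, not the polish.

# Only 3 states on purpose -- the full list ships with the real thing.
DEMO_STATES = ["CA", "NY", "TX"]

def validate(form: dict) -> list:
    """Real validation plus one always-on dummy error, so the form
    never looks finished."""
    errors = []
    if not form.get("email"):
        errors.append("Email is required.")
    errors.append("Phone number looks invalid. (demo placeholder)")
    return errors

def submit(form: dict) -> str:
    # Final submit is stubbed: it always fails rather than pretending
    # there's a working backend behind it.
    return "failed: submission not implemented yet"
```

Walking through this in a meeting still exercises the whole flow, but nobody mistakes it for shippable code.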
"While you mentioned this during the demo, I noted:
1. Your state-drop down has only 3 entries.
2. There is a validation error
3. You get an error at the end.
We need you to fix this ASAP !!!"
Would it be better to show no progress at all until it's completely done? I know agile methodologies tell you to demo regularly, but I'm more and more under the impression that they exist mainly to provide progress feedback/reports to management.
Something like https://wiredjs.com/ might work, if you are building a web UI.
Sucks it has to be this way so often.
Yeah it’s all perception, but cultivating the right perception is vital to effective, productive communication.
Polished things have a context of "use", not "evaluate", because we use so many polished things every day, but very rarely have any need to evaluate them (mostly only when we're buying them, or when a repair person asks us to describe what's wrong with them.) Whereas unpolished things are mostly "for" evaluating; it's rare that people use unpolished things (outside of, say, disaster-relief infrastructure.)
Does this really imply “humans are broken”? We all have limited resources and I think this could just be a first order prioritization mechanism. Of course one has to be aware of the bias but that’s a different problem.
It's not a way of gaming the system, but a way of prompting better feedback. For instance, in a Design Thinking process, prototypes are more useful when they look rough/unfinished. A more polished piece of work might leave people afraid to break things.
I use both approaches in my work:
1) demo small, tested bits during show and tells and any meetings where the point is to demonstrate progress.
2) demo large, unfinished and barely stable pieces of work when ideating, trying to figure out the next steps
2) is hard and works only if:
- we know what the unstable parts are (because we have tests, so we know the gaps)
- we know the audience and how much context they have
The alternative is that they think you are done, adding more pressure and requiring more wasted time later explaining the misaligned expectations.
It was my first professional development job and I learned a lot of things. One of the keys was this:
Your boss and employer are not your friends. When you promise something to your friends, you owe it without any expectation of reward or return. But your employer owes you something for everything you give them. If they fail to meet their end of the bargain, that's on them and you are absolutely 100% free to leave.

What they owe you can be money, time off, good working conditions, respect, trust, or any number of things. For me money was not primary (though it is nice): I wanted a good work environment, interesting work, and respect. Fail any of those and I will be looking for an exit, even if it's just a transfer within the same company. But respect comes first. A boss who shows little respect to his employees is not someone I want to be around.

I had that boss who chewed me out. I had another who kept me doing busy work for a year while repeating worthless platitudes like "you're important to our work, we can't do this without you" when everything pointed to that being false. He was just empire building, trying to grow his direct report count, but it hurt those of us under him because he didn't have the work (though he did have the money) to keep us there. (I was still relatively junior at that job and stayed longer than I should've.)
The hierarchy was something like: VP of Engineering (several divisions below him) -> Chief Engineer (Aviation) -> Chief of Software (Aviation) -> Chief of Testing (Aviation) -> Me; other production lines had similar chains below the VP.
When she was my manager I quit, or more accurately transferred to another group. Years later I quit my last job for several reasons; one was her. She was not in our division while I was in that position, but they'd just hired her on. When I heard the name I had to verify, and when I confirmed it was the same person I was ready to exit.
If he tried to chew me out, I'd just say "That's not how I remember things. I said I was 80% done, and, frankly, it was a mistake for you to pull the trigger on the release without confirming with me first."
Not if you are on an H1B visa ;-)
This works even when you're on a visa, but it can take longer to be truly comfortable. I had a classmate in college who'd secured a work visa. His plan was to work for perhaps a decade in the US, save aggressively, and then return to his home country. Salaries in the US were 10x higher at the time than in his home country, so saving even 20% a year meant banking two years of home-country income every year he worked. And he was frugal enough to save a lot more than that. Invested well, taking what he learned back home, he would be set for life at this point (I didn't stay in touch, so I only know the plan started well, not how it turned out).
(Perhaps you could roll over part of the 401k into an IRA first, and then take the rest as 72(t)?)
Can you expand on this? I recently moved to the US and people keep telling me to get a 401k, but I haven't yet.
At a minimum, you should contribute enough to your 401k to get the full company match.
Let's say the company matches 50% on the first 6% you save.
So if you put 6% of your salary into the 401k, the company adds another 3% on top of what you contributed. That's a free 3% raise.
Tell me another way you can get a guaranteed 50% return on your money.
In addition, there is the tax advantage: you don't get taxed on that money until you retire, so it grows tax-deferred until you actually need it.
Are there bad 401k plans? Yes, but I think very few. Check the rules of your company. Most of them are very good deals and you should take advantage of them if you can.
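The match arithmetic, as a quick sketch (the 50%-of-the-first-6% figures are just this example's assumptions; real plans vary):

```python
def employer_match(salary: float, contribution_pct: float,
                   match_rate: float = 0.50, match_cap_pct: float = 0.06) -> float:
    """Employer adds `match_rate` of what you contribute, but only on
    contributions up to `match_cap_pct` of salary."""
    matched = min(contribution_pct, match_cap_pct) * salary
    return match_rate * matched

# Putting in 6% of a $100,000 salary: you contribute $6,000 and the
# employer adds another ~$3,000 on top -- a free 3% raise, i.e. a
# guaranteed 50% return on those dollars.
print(employer_match(100_000, 0.06))
```

Note that contributing more than the cap (say 10%) still only earns the capped match, so the "free raise" tops out at 3% of salary in this example.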
But one should also fully expect to evaluate the terms and conditions, pros and cons, before signing up for anything, including a visa.
I happen to get along really well with the person three levels above me, but I can't imagine dealing with getting direct negative feedback from him. Honestly, more of my conversations with him over the years have probably been about things unrelated to work than work-related; we don't just hop over my direct manager and his boss unless there's a really good reason for it.
And yep, I was actively avoiding that environment too.
Someone is bound to jot that down and interpret it as a hard commitment regardless of any other details or conditions.
I've come to realise that "true" agile is like being funny or smart (not that I am either). If you have to tell people you are smart or funny, you probably are not. Ever notice how smart people (the really clever ones) are just absurdly smart without walking around telling everyone "hey, I'm smart"? Usually the nicer they are, the more intelligent they are (yes, there are exceptions), and same with funny :)
I feel it goes double for "agile" processes: if you have to walk around telling everyone (management or interviewees) how agile your process is, it probably isn't.
On the other hand, if you look at the original Agile Manifesto, it is a different beast altogether. It specifically seems to go against using set processes at all, and basically boils down to nurturing organic communication, adapting, and focusing on getting shit done.
I suppose the "agile" in the former sense is a compromise to edge closer to the latter, while still maintaining a familiar corporate structure.
EDIT:  https://agilemanifesto.org/
The places that went "we do all the agile incantations in the book, because that's the only way," well, those actually had a scrum-o-fall culture.
If you want to be working with a team trying to be funny, at some point you have to use the word "funny" and "comedy" to make sure you all are actually trying to do the same thing. And make sure the producers and show runners agree that it's a comedy you're making.
Same with agile.
If a country explicitly calls itself "democratic" in its name, it's likely not.
They usually expect a fully working demo
You must demo the product in a form that leaves the right impression of its current state. If you paint a picture of a polished product, expect polished expectations. Instead, show the bugs and say "we are still working through this section." Show missing pages, show your work in progress, show wrong colors. Show potential; wave your hands and tell them to imagine this part working. Don't fake it. If that feels wrong, then a demo isn't right.
If you show it in a form that looks complete and polished, even if you say it is not, how can you expect any other conclusion from the viewer other than "it's practically ready!"?
Isn't this a bit of a fallacy? Not everything can be broken down into chunks of work that fit into a single sprint.
There's a reason I stopped bothering with Scrum a while ago, and this is high on the list.
If it helps motivate a team to divide some four-sprint piece of functionality into two shippable chunks that each take two sprints, I'd call that a win. Customers get something a bit faster even if it's not single sprint-sized.
Many times I can't tell you what tasks need to be done, let alone how much time is needed or how to break the work into smaller chunks during the planning phase.
For planning to work, you need to be familiar with what you are building; with poor information on the bug, the code, or the stack, agile planning is just useless overhead. It works great for yet another CRUD app where you know the requirements to the dot and know exactly how to build or fix things, but not always otherwise, and most management fails to differentiate.
All the reasons in TFA are also reasons why it is hard to estimate what a task involves and how much time it will take.
I mean obviously this only makes sense as a way of organizing a team who are trying to build something exploratory - that’s what scrum is meant for. If you are trying to pursue a solo research project within a team the rest of whom are doing scrum then... that’s not a problem agile can solve.
Identifying and solving something similar to this with a team is simply not possible when you plan with agile. I am likely to move on to the next item once I mitigate the effect, without bothering to dig deeper, just because someone is clocking me on a timeline I committed to.
It kills all the joy and fun; work becomes boring. This is by design: it is hard to run an organization unpredictably. If only management trusted you to deliver without constantly looking over your shoulder (when the situation warrants it)...
It is not only an engineer's gripe; it applies to management too. The board and the market force them to be very short-sighted, and unless you are Musk/Jobs/Buffett it is hard not to buckle to market pressure and instead invest in longer-term opportunities.
The point is not that planning is bad; it can do a world of good in many situations, including unpredictable ones. The problem is blindly pushing a framework (especially agile) because it worked somewhere and everyone says so, or because the manager can't be bothered, or won't risk doing something different as the situation warrants.
Scrum is a collaboration hack for creative problem solving teams, not a managerial accountability tool. The version of scrum where stand ups are for checking on the team’s progress and velocity is reported on up the org is using the tools of scrum to solve a very different problem than the one that it was designed for.
I’m sorry you don’t believe it’s possible but I can tell you from experience that it is possible to use the processes of scrum and the principles of agile to help a team collaborate on open ended creative problem solving tasks.
Sometimes your demo is nothing more than 'here it is in the log doing xyz' or 'I added this thing to this config file'. Not all demos are big flashy ordeals. The team I am on right now most of my demos look exactly like that. I can usually do them right after the standup. Our team allows it because we are mostly remote and talking to each other helps.
I personally use scrum as a weapon to make sure management does not overload our teams. Those made up story points are a good way to say 'you have tasked us with 4 months of work in 2 days'. You have to know your manager too. You have to talk to them. Know what they are looking for. Some take a very hands off approach. Some want the nitty gritty details. For both of those a 'oh that is going to take 3 months' may sometimes work. But it does not give them actionable items to help you. The task broken down into some sort of chunked out work does. Sometimes you do not know. It is OK to admit that. That is when you make a discovery story. Make sure they are onboard with that story is to help you find out what is needed. Even then you will still learn along the way.
I worked with one guy who wanted to task things down to 15 minute increments, 6 months from now. He kept failing. Because he was being too narrow. He refused to do story points. Because they were 'stupid' yet management kept piling more stuff on him to fail at. He was in every weekend and in until 9PM every night. Because he had no tools to push back. Give your management numbers and actionable items or they will assume everything is hunky dory.
For all the things that can be atomic like that, it's good practice.
The bigger ones just take time, and aren't shown, until ready for use.
I wasn't talking about intra-team demos to, say, product owners.
Not having demos of incomplete features would just hide the issues until they are released to the final customers, creating a problem you didn't have before, and making it much more complex to solve.
Progress gets quantized. Some quanta are small, easily shown, etc... other quanta are a bit bigger, less easily shown.
There is a similar problem in manufacturing.
While making something, there are many subtle tweaks to the BOM. Changes, substitutions, removals, adds.
Upstream people can make a real mess out of all that, and one way to prevent it is to only deliver releases that are resolved and intended for manufacture.
"where is revision 4?"
Doesn't exist, won't get manufactured, etc... "Use Revision 5 plz."
For the case of ensuring expectations get aligned, a mock-up can be used. Deliberately used to generate a spec.
In my experience everything can be broken down if you spend five minutes actually trying to break it down. And the benefits are very much worthwhile.
People can hide the fact that they have a big ball of mud for a very long time, and they only want to talk about improvement after things have gotten miserable.
4-11 years depending on exactly what you'd define as "brownfield"
> People can hide the fact that they have a big ball of mud for a very long time, and they only want to talk about improvement after things have gotten miserable.
True but beside the point. The same point stands: you can always find a way to make a worthwhile improvement in two weeks - something that's useful on its own, even if it's also the first step of a much bigger improvement plan.
Sure, sounds straightforward enough. Start with simple cases (e.g. universe is a unit circle), you can definitely implement useful pieces within two weeks.
> first the mathematical models behind it need to be created then implemented in software.
That's not a real (i.e. user-facing) requirement.
(Cause another lesson is that people have a lot of trouble giving worthwhile feedback on a verbal/written description of something, they gotta see a thing in front of them).
I learned this adage years ago (I think it was from Spolsky), but nowadays I'm on a project (re)building a UI from scratch as the sole developer aaaand I made the same mistake.
I was doing some UI prototypes for activating a process: big green Activate button, a confirmation dialog, spinners with some artificial/simulated delay because I hadn't done the back-end. It caused confusion with our tester, because she was wondering whether activation actually worked.
I've got three options: remove the button for now (I should do that), partially implement the activation (changing a status in the back-end), or fully implement it (which has a lot more prerequisites).
I've learned the opposite. If I communicate clearly that what is being shown is little more than a mock-up, executives of all levels and technical skill, all the way down to managers just above myself, all understand that a very thin and scripted demo is not anything close to a finished product.
Describing the demo as a "house of cards" that will collapse with a single misstep gets the point across nicely, while also giving the demo audience an eyeful of what can be accomplished if everything is handled appropriately.
Demos are carefully scripted and rehearsed, values are hard-coded, and absolutely nothing exists that does not prop up the demo for the purposes of the script and the talking points.
It is hard to describe to someone who hasn't written an application demo like this just how little actually exists behind the UI.
Anyway, my point is that if you choose the correct words, anyone can understand that it's like a painting of an application, and not an actual application, just like a painting of your mother is not actually your mother.
What are they going to do? Fire me? I can go anywhere. They can't find me anywhere.
So I very quickly used MS Paint to mock it up... just so I could clarify that was what they wanted. I shared my screen and someone said "great, you've done it!". Even though I was clearly showing a screen of me editing a screenshot of the UI, in MS Paint...
sigh, they don't tell you in University that the biggest skill you will need in this job is patience and learning how to channel your inner zen.
Developers don't need 3 monitors to get through the working day, they need regular sessions with a psychiatrist.
Amen. I can see this as dev perk in job ads.
We came to this rule after far too many incidents where some sort of mock-up (Photoshop, HTML, whatever) was shown and taken as done. Then we got the questions (possibly unhappy ones) about where it was and when it would be done, because obviously "it must be" since we "showed it".
The rule served us incredibly well for years. It put a hard stop to all miscommunication. Everyone understood exactly where we were in the project. Either we knew what it was going to look like, or (if it was ready) it was done and awaiting their approval.
Shortly before I left we got a new graphic designer. He wasn’t embedded with the programmers. He didn’t know THE RULE. Sure enough, we were asked why a design we’d never seen before wasn’t ready yet. Because he made a mock up in Photoshop. He told them it was a mock up. Doesn’t matter.
We basically follow THE RULE at my current job. It still works wonders.
Since he was just the marketing/sales guy, if he found himself cornered he could always pretend the failure was on the dev/technical side... "see with the support".
He could also successfully sell himself this way: a billionaire once offered to make him the sales director of some company. He set the meeting for 7 AM at the Ritz bar, and got the job (not for long: he had negotiated to keep the perks when leaving anyway).
Superimpose the word "MOCK-UP" in a gigantic font that takes up nearly the entire image.
Make it translucent red, or give it a black outline. And maybe tilt it diagonally to catch attention and to make it easier to visually separate from the rest of the image.
To streamline the process, you can just keep an image like this around to import as a topmost layer into other images.
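If you'd rather generate the stamp than keep an image file around, here's a rough sketch using Pillow (assuming Pillow is installed; the layer size, tilt angle, and opacity are arbitrary choices, and the function name is made up):

```python
from PIL import Image, ImageDraw, ImageFont

def stamp_mockup(base: Image.Image, text: str = "MOCK-UP") -> Image.Image:
    """Composite a large, translucent, tilted red label over an image."""
    base = base.convert("RGBA")
    # Draw the text small, then scale the whole layer up so the label
    # spans most of the image regardless of which fonts are installed.
    layer = Image.new("RGBA", (200, 60), (0, 0, 0, 0))
    ImageDraw.Draw(layer).text((10, 20), text,
                               fill=(255, 0, 0, 96),  # translucent red
                               font=ImageFont.load_default())
    layer = layer.resize(base.size).rotate(30)  # tilt to catch the eye
    return Image.alpha_composite(base, layer)
```

Rendering the layer once and compositing it onto every screenshot streamlines things the same way as keeping a stock image around.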
Whether we realized that immediately or only after we stopped having misunderstandings, I'm not sure.
But once it was realized I don’t think it took very long at all for it to become a rule. It made life so much easier for us developers.
We sometimes have to do that in technical reports. The proofreaders always feel like they have to give at least one comment, so you give them a clear mistake to point out to avoid useless debate over minor points.
Best part is, I'm almost certain to beat 1500, so I've gotten compliments that it "feels snappier".
A game developer was making a game for PlayStation and they were over their memory limit. They were approaching a deadline but couldn't remove anything else from the game to fit it in memory (or on disk, I can't remember). So a senior dev came by, changed the code in one minute, and everything fit into memory.
The thing was that at the start of each project he had declared a variable of 2 MB that did nothing, so when every optimisation had been done and it still didn't fit, he could just remove that variable and free up some more space.
It was also his form of insurance.
While your trick makes you look good, setting the times to match the budget might be more honest. And when the app slows down you can blame the people who take 250ms to do their part when we agreed to 100ms.
As several other people on HN have pointed out more eloquently, it's the variability that kills you faster than the average throughput.
The 100ms was not about end-user response times, it's referring to internal response times between servers. To make a page in 1 second you can't have 3 different services taking 700ms to respond, even if you can make all three calls in parallel. And if you have to call a bunch sequentially, you need the 75th or even the 95th percentile for those services to be pretty good otherwise your 95th percentile for the entire interaction will be very spiky.
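A quick made-up simulation of that last point: three services whose individual tail looks fine can still blow the budget when called in sequence, because the chance of hitting at least one slow call compounds.

```python
import random

random.seed(0)  # reproducible "measurements"

def service_latency_ms() -> float:
    # Invented distribution: mostly fast, ~4% of calls take ~700 ms.
    return 700.0 if random.random() < 0.04 else 50.0

def p95(samples) -> float:
    ordered = sorted(samples)
    return ordered[int(0.95 * len(ordered))]

# One service on its own: its p95 still looks healthy.
single = [service_latency_ms() for _ in range(10_000)]

# Three such services called sequentially: the chance of at least one
# slow call is 1 - 0.96**3, about 12%, so the slow tail now sits well
# inside the 95th percentile of the whole interaction.
chained = [sum(service_latency_ms() for _ in range(3)) for _ in range(10_000)]

print(p95(single), p95(chained))
```

This is why the per-service percentile targets have to be much stricter than the end-to-end budget whenever calls can't all run in parallel.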
Management tip: Make the UI reflect the actual state of the project.
The UI should be UGGGLY and should get prettier as the backend work gets finished. If the artists prettify it early, make the animations and interactions janky and sluggish.
Never make the UI better than the actual implementation.
Bonus tip: Always have something slightly off in the UI that management can point out to fix. Useful managers will simply quickly point it out and move on to more important problems; useless managers will focus on it.
For instance, Bootsketch: http://yago.github.io/Bootsketch/
I wish some of the bigger CSS frameworks would adopt sketch theming as a "progressive" option: rather than the entire page being all-or-nothing Bootstrap or Bootsketch, have the ability to add a "sketch" class to any element on any page. (Or maybe better yet, force sketch styles by default and require something like a "final" or "final final" class everywhere, like the documents folder of someone who has never understood source control.)
> “I don’t know what’s wrong with my development team,” the CEO thinks to himself. “Things were going so well when we started this project. For the first couple of weeks, the team cranked like crazy and got a great prototype working. But since then, things seem to have slowed to a crawl. They’re just not working hard any more.” He chooses a Callaway Titanium Driver and sends the caddy to fetch an ice-cold lemonade. “Maybe if I fire a couple of laggards that’ll light a fire under them!”
Well, after a while they understood there is a small difference between a website and a virtual marketplace.
Seriously, it is easy to forget that for most people all these technical things are just dark magic in a black box, which sometimes works and sometimes doesn't. I find this sometimes hard to deal with, because society gets more and more technologized. At least a very basic understanding would be helpful.
I also point out when a requested feature is actually an entire company's whole product.
Pointing out that Product XYZ has 3000 employees and has been carving out a niche since 1995, while there are six of us with no knowledge of that market, is usually the only thing that gets him to accept that just because he understands what something does, it doesn't mean he understands what it takes to build it.
Just make sure you have infinite money first.
Maybe they are just looking for a side hustle.
This is how we used to sell drugs in college. It was a simple URL you could go to with pricing and you just sent texts to a burner phone to arrange a transaction.
But a virtual marketplace, where different actors make transactions, is a different story. Consider: you have a bug and people lose money because of you. It really needs to be solid.
Not screens that look like doodles, but actual paper and cardboard and maybe some Blue Tac or paint.
Ordinary users apparently behave very differently, because it's obvious that a piece of paper can be changed and they know how to do it so it only takes a little nudging to find out what the customer actually thinks the system should look like.
She still has all the CS background to judge when a change that seems simple to the user just isn't viable, but using the paper prototypes encourages users to leave that to her and not second-guess themselves into accepting a bad design because they're mistakenly assuming it would be hard to make a change, when actually this is the perfect time to make such a change.
The truth is that in a lot of cases nobody has ever bothered to explain it to them. What I tend to find in a lot of situations is that managers often prefer younger and/or less experienced developers because they can be bullied - but the reality is that this also means that people are then unlikely to tell their managers what they need to know. Ultimately it is the managers creating the problem, but in most cases (mind you not all) the managers don't understand that they are creating a problem.
Now my boss is happy that I've demoed API integration, but every next meeting he asks me why nothing has changed. It's because I'm still waiting on something that is supposedly done.
For some reason the fact that it’s not done is always the fault of the front end team. I’d love to get an actually finished API for once...
In software we can put a few shapes on a screen in just the right context and people will believe there's an operation as sophisticated as the wite-out bottle factory behind it.
It's not fair for managers to assume, but it's definitely fair for them to ask. And it's our responsibility to show and explain.
There's the famous story of the first iteration of Gmail being done really, really quickly. The demo was the product. And then just iterations from there. Definitely a good model if possible.
Even people who ought to know better don't understand this.
You say this in present tense which makes it seem like a generalization. This is also a mistake that non-technical managers (and others) often make. The frontend frequently rivals and sometimes exceeds the backend in complexity. It is very application specific.
Maybe as part of the demo, you should demo that it *conspicuously fails to actually do anything*, to help reset unrealistic expectations.
No chance for suckass product guys to push if they can't understand what's going on.
We've all worked with the perfect-is-the-enemy-of-good guy who deeply considers all aspects, takes 10 times as long to deliver, and then eventually, after much blood, sweat, and tears, delivers software equally as bad as the rest of us.
Personally, I've gotten over myself and try to just ship it.
That said, this is a bit different from the gnarly bug type scenario of the OP; though I'd probably ask how they wrote tests in 2 lines of code? :)
I think we spend too much time focused on the trigger man. Whatever person in your org is making your life difficult, there’s a person above them who knows and hasn’t done a goddamn thing about it. Who is the real problem?
I dig out my "The Dilbert Principle" book and will start reading it again to promote sanity.
But, as you describe, from a perceptions stand point, it was the worst thing I could have possibly done. It went from a very happy client, to a very unhappy and confused client when progress "stopped." I actually started recording development work as a way to make them understand all the invisible stuff which goes on behind the scenes.
Unfortunately communication for software projects isn't often discussed or considered valuable in this community but you can learn it like anything else in tech.
The hardest language to learn is the one that communicates with people not computers.
I have yet to see secure software... does that even exist?
"Why/how is this worth X dollars/time? I know someone who says they can do it in a week." To which, I eventually learned to reply: "Wow, well... In that case, let me shoot you an article on how to build a Twitter clone in 15 minutes. [awkward pause while I smile at them] There's a lot more than just literal lines of code that goes into building a successful software product."
LOC is a decent measure, but features are our targets
Only if your measure of success is tied to fewer LOC.
One's target should be shipping features. Whether you use 10k LOC to get a feature out or 500 lines of more concise, optimised code, what matters is the feature.
If you have LOC targets to meet you are incentivised to produce the former rather than the latter.
My point is that a high number of lines isn't as important as good features. Though the two can get conflated
It was an old Audi S3, like, 2011 model. Had a guy tell me the car was in better condition and the asking price was less than what I had listed. My car was listed at $10K AUD and that car was listed at $8K.
Them: "Why is your car listed at your price, and not matching this car?"
Me: "Well for starters, that car's in Adelaide; we're in Brisbane. If you want to go to Adelaide to check that car out and find out it's been in a fender-bender and had most of its body fixed, which the listing avoids mentioning, be my guest."
Them: "I doubt that, I think you're asking too much. Will you match that price?"
Them: "Why are you wasting my time, I should buy that one just to annoy you."
Me: "You fucking do that then, laters"
I sold my car the week after this for the price I wanted. Straight up https://www.reddit.com/r/choosingbeggars material.
How often do you actually receive quality bug reports at work? My experience is that external or internal users almost never provide sufficient information and you as a coder are always expected to drill down on what they reported with a barrage of questions.
i.e. if you are not doing https://en.wikipedia.org/wiki/Five_whys then you might be doing it wrong and wasting time because of it.
I'm referring to this:
> Some developers would have immediately gone back to the person reporting the problem and required more information before investigating. I try and do as much as I can with the information provided.
Which seems like being stubborn and making a mistake because of it.
Couple other parts also seem a bit overdoing it:
> Because I investigated if there were other ways of getting to the same problem, not just the reported reproduction steps.
> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.
These seem like taking a gamble. Maybe something comes up, but is it more probable that this work should be minimised until there is more proof of "other ways of getting to the same problem"? Developer time is expensive, is this really the best way of using it? Would it make sense to just fix the issue at hand and only put in more time if more bug reports come in after the fix or if there is some other indication that this part of the code might be more broken?
Very, very often. I work as a QA engineer whose main responsibility is to go through the bugfixing queue and add needed info where necessary. And I have to spend a lot of time every day doing this. Sometimes it gets so bad I have to assign the ticket back to the reporter to add more info, because even I don't know where to look without it.
Interestingly enough, it's always the more senior people at our company who are guilty of writing crap bug reports.
Half of my current tickets barely have 2 sentences in so-so English.
> is it more probable that this work should be minimised
I guess this is where automated tests can come in. You fix something and see if it passes the unit tests. But then everyone has their own approach. For him, fixing a similar bug twice is worse than finding all possible mistakes at once.
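As a minimal sketch of that idea in Python (the function name and the auto-capitalization bug here are hypothetical, invented for illustration): once a bug is fixed, pin it with a regression test so the same bug can't quietly return in a later change.

```python
# Hypothetical sketch: after fixing a bug, add a regression test so
# "fixing a similar bug twice" can't happen silently.

def normalize_login_input(raw: str) -> str:
    """Trim surrounding whitespace but leave the case alone.

    (The hypothetical bug: an input widget was auto-capitalizing the
    first character, so 'password1' arrived as 'Password1'.)
    """
    return raw.strip()

def test_case_is_preserved():
    # These assertions fail if anyone reintroduces the lowercasing/
    # capitalizing behavior.
    assert normalize_login_input("  password1 ") == "password1"
    assert normalize_login_input("Password1") == "Password1"

test_case_is_preserved()
```

Run under any test runner (pytest would collect `test_case_is_preserved` automatically), this makes the earlier fix a permanent part of the suite.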
* smoke means fire
* a contained smokey fire is sufficient to hide the start of a wildfire
* keep your errors at 0. If it "can't be kept at 0" you're either too far gone or thinking about the issue incorrectly.
* user complaints are errors. Just because they aren't clear doesn't make them any less so.
There is a perception that users go out of their way to make unfounded complaints. In my experience, getting any complaints is the issue.
There is also a perception that some errors aren't important. If you have a channel to receive an error, it's because it has business value. If a dev I was managing ignored a p/w entry bug due to non-dupe / assumed user error without significant digging and user interaction, I'd be livid. Most businesses will lose significant value if users perceive the act of logging in as difficult.
Okay, how do you handle a network failure, a full disk or a faulty RAM stick?
I see your overall argument, but at some point you've got to accept that you can't handle everything.
So if you're small, a RAM issue is something you deal with manually and rarely. As you get bigger you'll transition to automated failovers that still get looked at individually. Then you'll scale up to the point where these aren't freak occurrences. Now it's important for you to have a strategy to identify the issue and its follow-on issues, and resolve them. It's also past time you think deeply enough about your setup to be able to contain them so you can stop surfacing them as errors; they are now part of a normal business process. You want "too many of them" to be surfaced as an error (and "too few" as well), and any effects you can't currently recover from automatically are also errors.
It's perfectly possible to "keep errors at 0" without ignoring any output.
Getting complaints from users is a good thing, but yeah, missing them or not fixing them when you come across them is the problem. As a user, I would be overjoyed if I reported a small issue and the company fixed it quickly.
Being livid is natural, but I was pretty sure the developer himself knew he screwed up this time, so there was no point expressing it. Plus, as I explained in the earlier comment, this escaped for so long because we did not have many cross-device users, so it did not affect that many users. From my own experience, I check this every single time I test a website before it goes live (and also check with keyboards other than Gboard).
I don't work at Microsoft.
Maybe you were expecting the dev to go through all the passwords and fix them so the first letter of every password is lower case? But if they were following best practices they don't know the password; they only know its salted hash.
Since the app was already shipping without the first letter being auto-lowercased, that suggests there were plenty of passwords with the first letter already upper-cased, which is also something you can't test for easily if all you have is salted hashes.
PS. Yes, we were hashing the passwords.
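A quick sketch of why that migration is impossible with salted hashes (this is standard PBKDF2 hashing, not the app's actual scheme): the stored digest for "Password1" shares nothing recoverable with the digest for "password1", so you can't tell which case the user typed, let alone rewrite every hash to a lowercased form.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Standard salted password hashing: PBKDF2-HMAC-SHA256.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("Password1", salt)

# A one-character case difference produces a completely unrelated digest,
# so the database reveals nothing about the original casing.
assert hash_password("password1", salt) != stored
assert hash_password("Password1", salt) == stored
```

The only way to "fix" existing passwords would be to wait for each user's next successful login, when the plaintext is briefly available to re-hash.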
I know I'm guilty of this one, and I've stayed away from high paced jobs and appreciate jobs where people are ok with my reluctance to bother a lot of people even if that means it takes me longer to figure things out on my own.
This also means I build a much deeper understanding of the systems I work with, or at least I like to think so; some people have confirmed that about me, indirectly, through praise of my insights.
Of course I know nothing about you so I'm not trying to give advice, just replying about the "confirmed" being a potential cognitive bias.
In 2014 I worked at a company that switched to Git and then started measuring LoC to assess performance/involvement. Engineers took to committing/removing things like node_modules directories to make the data meaningless.
It still happens, even today, quite a bit.
Spot on. Talented engineers can usually be picky and change job if they don't like the environment, but not everyone has this option. There's plenty of developers who are stuck in shitty companies (lack of skills/experience, or struggle with interviews, or just live in places with limited opportunities). And the longer you stay in a bad place, the harder it gets to escape it. 2 years ago I was the hiring manager for a few open positions and I was honestly shocked by some candidates. So many "senior devs" that despite having 5-10 years in the industry wouldn't have passed the interview even if they applied for a junior role.
It's very easy to have a distorted view of the industry if you are privileged enough to have only worked in great tech companies. I'm guilty of this myself, making friends with other devs in my city was definitely eye-opening for me.
The business I worked at was a typical office, like the one you saw in Office Space. Departments had their own TV screens on the wall that showed performance of individuals in a department; the sales department had a screen that showed who was making the most sales that day.
After we'd pretty much finished working on the web apps that supported these TV screens, the CEO met with me and a colleague to tell us how good a job we did. Then he said:
"You know, you're the only department now that doesn't have a performance monitor. Maybe we ought to get one for you. We could base it on lines of code."
My coworker and I were speechless at first, but we started laughing because we thought it was obvious that he was joking.
"What's funny? Why are you laughing?"
We quickly stopped laughing when we simultaneously realized that our boss was not kidding! I said we'd get right on it in the next sprint, and he told us that sounded good and left. I'll never forget that look on my coworker's face.
I agree with you 100%.
Poisoning the well, I like it.
Jaw dropped. Since then I'm committed to being the biggest Office Space corporate schmo possible. Let me cog it up, keep payin' me 6 figs, lettin' me work at home full time.
Edit: And they quit right afterwards.
 Unless there are multiple skeezy companies in Menlo Park.
Telling someone that it only took me an hour to eliminate an hour of someone’s work every week doesn’t go far.
Even worse, someone else now has to explain "why they did it wrong to begin with," regardless of whether the replacement technology existed at the time the original process was created.
The maths is simple. If my margin is 1%, and you make me an extra $100, I keep a buck. If you save me $100, I keep that whole $100. If I'm smart, I can use that to drive my prices down, and take more market share without reducing my margin.
Obviously it becomes less clear if your margin is >20% or so.
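The arithmetic above can be sketched in a few lines (the numbers are the toy figures from the comment, not real financials): extra revenue only keeps its margin slice, while cost savings drop straight to the bottom line.

```python
def profit_delta(extra_revenue: float, cost_savings: float, margin: float) -> float:
    """Change in profit: new revenue keeps only its margin slice,
    while cost savings are kept in full."""
    return extra_revenue * margin + cost_savings

# At a 1% margin, $100 of new revenue nets about $1...
assert abs(profit_delta(100, 0, margin=0.01) - 1.0) < 1e-9
# ...while $100 of savings nets the full $100.
assert profit_delta(0, 100, margin=0.01) == 100.0
# At a 20% margin the gap narrows, as noted above.
assert abs(profit_delta(100, 0, margin=0.20) - 20.0) < 1e-9
```

So at thin margins, a dollar saved is worth many dollars earned; at fat margins the two converge.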
Also, making more money is better because it means growing the company, with more employees and more job security for everyone.
Most companies aren't like that though. Know your company!
Also consider if you can run out of money before growth kicks in...
From what I understand a lot of product teams at FB have nearly frictionless development tooling for their use cases so the pressure is to produce volume.
$JOB-2, admittedly about 4 years ago now: the good manager with a background in software left for a better opportunity and was replaced by someone whose background was management. With no insight into the subject matter, they fell back on whatever they thought they could quantify.
Got numerous things like that, though my team lead and our project manager did a great job of shielding the team from that crap, we still occasionally hear it come up in group meetings and the like.
I even got a task handed directly to me, bypassing everyone above me, to "estimate how much it would cost to migrate all those Linux apps your team has to Windows. They'll run better there". Just the Windows licenses alone would have cost us about half the existing server costs, since we were using AWS instances. I also included a line item for recruitment costs for a new developer, and verbally informed him that it would likely involve hiring a whole new team, as the existing team was hired specifically as Linux developers.
The smart thing to do is to regularly keep your manager updated on what you're doing, especially if they don't come by regularly and ask you.
Especially if you are WFH.
Code is often improved by removing code.
Task completion would also be abused, because it's too vague. It simply shifts the burden to the one formulating the task (e.g. preventing holes in the specification like missing performance or hardware requirements).
Exaggerated example: if the "task" is to automatically deliver a report containing certain data and formatted in a specified way, the easy way might be an implementation that stalls the DB server for hours with deeply nested FULL OUTER JOINs on non-indexed fields.
The task would be completed quickly and arguably correctly, since neither runtime nor memory requirements were explicitly specified...
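A toy SQLite sketch of the point (schema and data invented for illustration): both statements below "complete the task" of producing a per-customer revenue report and return identical results, but the first runs a correlated subquery for every row, a full scan per row on a real table, while the second is a plain GROUP BY the query planner can satisfy cheaply. A task-completion metric can't tell them apart.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'acme', 10.0), (2, 'acme', 5.0), (3, 'beta', 7.5);
""")

# Pathological but "correct": one correlated subquery per row.
slow = con.execute("""
    SELECT DISTINCT customer,
           (SELECT SUM(total) FROM orders o2 WHERE o2.customer = o1.customer)
    FROM orders o1 ORDER BY customer
""").fetchall()

# The straightforward version the task presumably intended.
fast = con.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()

# Identical output, wildly different cost at scale.
assert slow == fast == [('acme', 15.0), ('beta', 7.5)]
```

On three rows nobody notices; on millions, the first query is the "stalls the DB for hours" implementation, yet it satisfies the specification exactly.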
But you said it yourself, any system is going to be abused...
Out of sight, out of mind <== don't let that happen to you
I don't believe it is any coincidence that the highest compensated engineers I know also are highly visible through their own efforts.
Meta-metrics might be much more helpful, though harder to come up with, quantify and monitor. Things like defect rates, user reported incidents, user satisfaction, stuff like that.
Things that cannot easily be gamed from within the development process and that are still directly linked to the success and economic viability of the product and its development methodology (though on a higher level).
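As a hedged sketch of what such a meta-metric might look like in practice (the `Release` structure, field names, and numbers are all invented): defects per shipped feature is simple to compute from ticket data and hard to game from inside the development process.

```python
from dataclasses import dataclass

@dataclass
class Release:
    name: str
    features_shipped: int
    user_reported_defects: int

def defect_rate(release: Release) -> float:
    """User-reported defects per shipped feature: a meta-metric tied to
    what users experience rather than to how the code was produced."""
    return release.user_reported_defects / max(release.features_shipped, 1)

releases = [Release("1.4", 12, 3), Release("1.5", 8, 1)]
rates = {r.name: defect_rate(r) for r in releases}

assert rates["1.4"] == 0.25
assert rates["1.5"] == 0.125  # trending down: quality improving
```

A team could pad a LoC metric trivially, but lowering this number requires actually shipping features users don't file bugs against.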
I'm curious if this is "everybody knows this person isn't doing any work, here's a blindingly obvious metric to use to defend this move to HR" because HR orgs often hate things that don't have metrics attached, or if it's "I don't understand what this person is doing, and I don't see any commits, so it must be nothing"?
They were a senior engineer who spent a lot of time coaching the rest of the team, and less time on their own personal work. The commits data point, in my opinion, should have been the beginning of a conversation that ended with the manager asking this person to adjust their priorities. Instead it was presented as an accusation that the person wasn't getting their work done, which ultimately led to them feeling insulted and quitting.
I've been fortunate recently to not work in orgs where mentoring and collaborating like that is looked down upon - instead, it's encouraged - but my ongoing struggle is to figure out how to quantify it.
I've got two similar but distinct motivations for wanting to quantify it:
* If I have a manager who looks less favorably on it, I want to be able to demonstrate my worth; but also,
* If I'm spending most of my time trying to mentor the rest of the team, I want to see how well I'm doing - and I want to be able to change things and see if it results in positive or negative changes
Sadly I've completely failed at coming up with how to quantify this so far. It all comes down to qualitative peer feedback/manager feedback...
 As a technical person, I try to practice the reverse of this: if someone asks for a feature or claims there's a bug, I try to dig until I fully understand why they're asking for what they're asking. So I think asking for similar curiosity and depth from a non-technical person is fair.
Assuming there's some kind of constant ongoing engagement with one person or group, I'd expect an immediate dip in velocity as your productivity goes elsewhere, then it growing, and maybe evening out to before-engagement numbers (approximately), as they reap the benefits of learning from a senior engineer. Then, as you're able to return to normal duties and they're able to apply the lessons, you should see velocity greater than before the engagement. That delta should be somewhat quantifiable and, ignoring other variables, should represent the benefits of coaching.
There's also huge value in the increased job satisfaction for both mentor who enjoys mentoring, and a mentee who is learning. That should show up in any kind of employee satisfaction survey, or retention numbers.
That could be completely wrong, of course. But would it surprise you?
If anybody can push commits to the repo, then it's a useful metric. Finally, this sort of action should be taken after the manager has worked with their report, by having somebody else help them, or by putting them on a different project.
No, no you don't.
Now if there's 5 drawings with different colors, or asking "what color do you want it?" leads to a 5-week email chain...
I discovered that bug report after our designer noticed at a glance walking behind me (I use Firefox) that the colors on our site were far darker than she intended, somewhere around 2017-2018 (that bug was opened in 2010).
I've left companies because of devs like that. People who just stand in the way of getting the software to do the correct thing. I do not understand what makes these people tick.
I don't think you grok what's happening. Some middle manager sees a shade of red on his screen in a PDF, and the dev is expected to reproduce the content in that shade of red. There are simply too many variables.
Even if you have access to the PDF, the red will often be rendered differently by the browser than it is by the PDF engine.
I've had middle managers tell me to "fix" a web site because the colors looked different on his office CRT than it did on the laptop screen of a person in another building.
trivial to fix
Everything is trivial when someone else has to fix it.
They absolutely use lines of code metric at my company. I don't miss any chance to tell my manager it's complete bullshit. His answer: "Engineers are supposed to write code, just like construction workers are supposed to build houses."
If you give a construction worker a design for a wall, and the worker is given 2,000 bricks that must be used to build it, then, yes, of course the worker must lay down all 2,000 bricks. However, if I am asked to build a computer-simulated model of said wall, and there is a way to build the model in 200 lines of code that looks and performs identically to a model built in 2,000 lines, then, yes, of course I am going to build the wall in 200 lines of code.
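To make the analogy concrete (a toy model; the `Brick` class and dimensions are invented): the 2,000-line version would repeat one `wall.append(Brick(...))` statement per brick, while a handful of lines builds an identical wall.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Brick:
    row: int
    col: int

# 20 courses of 100 bricks = 2,000 bricks, in two lines instead of 2,000.
ROWS, COLS = 20, 100
wall = [Brick(row, col) for row in range(ROWS) for col in range(COLS)]

assert len(wall) == 2_000     # same wall, roughly a tenth of the code
assert Brick(0, 0) in wall
assert Brick(19, 99) in wall  # last brick of the last course is there too
```

By a lines-of-code metric, the loop version is 99% "less productive" than pasting 2,000 append statements, which is exactly the absurdity being pointed out.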
I hope you find better pastures.
And I can assure you, they were not judged on the number of views or pages on their drawing, or on the number of variables in their structural calculations.
It took 23 minutes for that rule to disappear after my lines-of-code metric jumped 250,000k.
What's nice about the proceduralism is that you can document that the steering committee only meets once a week, on Tuesday afternoons, and change control meets on Thursday. And everyone knows the automated test suite on DEV takes about half a working day. So if a change can't be worked into the schedule in less than a day, it'll never pass CI testing before the change-control meeting, so it'll take more than a week.
What's bad is that management would like you to complete multiple changes, perhaps at the same time, which always complicates the change-control process, especially if change #7 failed last week, so company policy is to roll everything back, and now we have 13 changes, two weeks' worth, to complete next weekend. Also what's bad is that, knowing it's a corporate nightmare to make any change, you're left asking why you made the mistake to begin with: swapped buttons, a misssspelling, or whatever.
I find the big metric nowadays is backlog. Let's see the number of request tickets decrease this week instead of increase. That leads to intense pressure to roll multiple problems into one ticket.
There was a time when I thought this video was funny:
These days I can only look at it and think "the expert is terrible at calm confrontation and good communication. There would be no problem if he had developed those skills."
It is important to rule out miscommunications. Contradictions often are not obvious to the requestor. It also helps to understand which of the contradicting requirements can be dropped to resolve the problem. Sometimes the problem is just a small feature that made it into the requirements because no one considered it a problem.
That doesn't mean I haven't gotten requests that were as silly as in the sketch, or had discussions along the lines shown :) And of course, I never hesitate to speak up about actual issues.
Getting sensible requirements is not only on the expert.
But the expert is the one who can know if the requirements are sensible.
In requirements gathering, the whole job is to hear people's attempts to describe their problem and figure out what problem they actually have.
By definition they don't have your expertise, or they wouldn't need to talk to you.
So, of course they will say contradictory things and use terms completely incorrectly - they cannot do anything else. That's why they hired you.
The expert in this scenario gets hung up on their incorrect language and gets flustered and stymied, telling them "What you want is impossible!"
What they _said_ is impossible. It's our job to persistently, patiently, calmly help them understand their needs, without judging them for needing our help.
I'm not particularly good at it, but I understand the mission.
The problem is lack of trust in you as the expert and possibly a lack of self-awareness (they think they understand).
A manager/client should make mostly strategic decisions, like: we should solve this problem, here are the resources. And almost never tactical ones.
They also shouldn't even care or look at LoC. They shouldn't be worried about metrics of 'effort' at all.
I found your phrase "lack of trust in you as the expert" a little jarring because (again, in my personal experience) considering the developer to be a domain expert is somewhat of a foreign concept. I suspect/hope the situation is better elsewhere in the industry. :-)
> I suspect/hope the situation is better elsewhere in the industry. :-)
I do freelance, client work. Alone and in (very?) small teams. Typically my/our clients only have a superficial understanding of everything technical (if at all). Trust often needs to be earned.
One way to gain trust is being creative/optimistic and explaining feasible possibilities and strategies. Another one is being pragmatic and not selling them something they don't need, or might not need.
And then the most important one is to have conversations about their problems and wishes. Showing that you understand them by asking questions and writing a specification. And explaining your (iterative) workflow: "Let's figure this part out after we've done this other part." I guess this is the "domain" part of the process.
My experience is that if trust is in danger then the work is less valuable, less fun and less sustainable. Indications of this are things like we discussed before and similar:
- Trying to measure effort instead of rewarding value.
- Nitpicking, bikeshedding and other distractions.
- Overstepping their expertise (typical for UI design, a bit less for programming)
Now most of my interactions are good, but sometimes I get the above. We're actually discussing doing more upfront communication work in the offers and initial discussions to prevent these things (even by filtering out clients/collaborators) and to set a tone. Because again, this is unsustainable on multiple levels and it never ends well...
My leader isn't a technical person by a long shot. Instead they focus on getting people that they can TRUST on their team. Yes, sometimes it means we have to go back and 'fish' for metrics to throw the business. But we do notice that the less we chase metrics (and, yes, the arbitrary goals set out by the company as a whole) the more productive we really are.
I was on a browser team. A fellow co-worker decided to add the Fullscreen API to it which means not just add the API but first discuss it in the relevant standards committees.
I'm pretty sure he thought, and so did management, this would be a 2-3 month project at most. IIRC though it was like 18 months, maybe longer.
Some problems that aren't obvious at first
* What is fullscreen mode? Is it a mode for the page, a mode for an individual element? What?
They eventually decided it was for an individual element
* What happens to the CSS above that element when none of its parents are being rendered?
I'm sure that took a while to argue over. Like if it was position: relative or absolute and suddenly its parent is no longer displayed. What if the parent has CSS transforms? Okay, you say we ignore those and consider it the root element. Okay, so does that mean none of the other styles from the parents apply, like color or font-family? If some do and some don't, we now have to go through every CSS property and decide whether it does or does not continue to inherit. I don't actually know the answer to this.
* You have a DOM A->B->C->D->E. C asked to go fullscreen. While there E asks to go fullscreen. User presses ESC or whatever to exit fullscreen. Should it pop back to C or A? Does it matter if E is a video element and they clicked fullscreen? What if C is an iframe does it matter?
* Testing across all devices that support it requires all new testing infrastructure because going fullscreen is effectively something that happens outside the page not inside so testing that it actually happened, that a user can actually exit it correctly, requires entirely new test systems that were not there for previous APIs. Then multiply by 5 at least (Windows, MacOS, Linux, Android, ChromeOS, ...)
And so even though I'm sure everyone ended up understanding that, it turned out to be way more work than anyone expected. Yet, in the back of their minds it was arguably always "this is taking way longer than it should, goals not met" or at least that's how it seemed to be perceived.
This made me realize that working in a purely engineering team can sometimes be a perk. Not because technical people are "better", but because it leads to fewer frustrations like this one.