Be Kind (briangilham.com)
2475 points by bgilham on Oct 14, 2016 | 434 comments

When he returned to the air field, Bob Hoover walked over to the man who had nearly caused his death and, according to the California Fullerton News-Tribune, said: “There isn’t a man alive who hasn’t made a mistake. But I’m positive you’ll never make this mistake again. That’s why I want to make sure that you’re the only one to refuel my plane tomorrow. I won’t let anyone else on the field touch it.”

There are two schools of thought, especially in stock trading.

1) Reversion to the mean. This is what the person above believes: the mistake was an outlier, so this likely won't happen again.

2) Indication of a trend. The fueler is actually incompetent, and this will happen more frequently with him than with an average fueler.

I know you're probably not implying regression to the mean is causal, but that was my initial reading, so I want to clarify for those who may not be familiar with the concept.

Regression to the mean is simply that any given data point is most likely to be the mean, or close to it. This means that any exceptional data point, up or down, can be expected to be followed by one closer to the mean.

The example of this being misinterpreted that I am familiar with is a flight instructor's beliefs about training. When a pilot performed well, the instructors wouldn't comment. When a pilot performed poorly, they would punish them. They believed this worked better because when they praised a pilot, the pilot would usually do worse on the following run, and when they punished one, the pilot would do better. That observation isn't technically wrong; they were just ascribing causation where there was none. I think I saw this example in Signal vs. Noise, but I'm not sure.

Basically, regression to the mean isn't a reason to pick someone who did poorly; it's a reason to expect that person to do no better or worse than they normally do.

Yeah, I always use a coin-flip example to show how this works. Let's say heads is success and tails is failure.

Flip 100 coins. Take the ones that 'failed' (landed tails) and scold them. Flip them again. Half improved! Praise the ones that got heads the first time. Flip them again. Half got worse :(

Clearly, scolding is more effective than praising.
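The coin-flip demonstration above is easy to run for yourself. A minimal Python sketch (the "scold"/"praise" framing is just labeling; every coin is identical and fair, which is the whole point):

```python
import random

random.seed(42)  # fixed seed for reproducibility

def flip(n):
    """Flip n fair coins; True = heads ('success')."""
    return [random.random() < 0.5 for _ in range(n)]

first = flip(100)
scolded = [i for i, heads in enumerate(first) if not heads]  # tails: "scold" them
praised = [i for i, heads in enumerate(first) if heads]      # heads: "praise" them

second = flip(100)
improved = sum(1 for i in scolded if second[i]) / len(scolded)
declined = sum(1 for i in praised if not second[i]) / len(praised)

print(f"Scolded coins that 'improved':  {improved:.0%}")
print(f"Praised coins that 'got worse': {declined:.0%}")
# Both hover around 50% -- the "intervention" did nothing.
```

Both fractions come out near 50% on every run: the scolding "worked" and the praise "backfired" purely by chance, which is regression to the mean in its starkest form.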

That's a brilliant example. I won't forget that now.

Except, coins are not humans with emotions, and neither can they dupe probability to improve their outcome. The 50 heads (successes) that you left alone are not going to do any better than the 50 tails you scolded. You are just changing the sample space in a biased fashion to prove your point.

I think you are totally missing my point. The whole point I am trying to make centers on the fact that the coins are all actually equal. The observed difference in performance is entirely due to chance.

While, of course, human performance is not uniform in the way our coins are, performance (or whatever it is you are measuring) will still regress towards the mean. The coin example just gives an extremely obvious demonstration of what is happening when things regress towards the mean.


hilarious! thank you!

The pilot performance example is in Thinking Fast and Slow, I think.

I've read that story in "How to win friends and influence people"


Ah, yeah. I think that's actually where I saw it. Thanks.

Now if only Intel would add that to their instruction set.

Isn't regression to the mean a phenomenon of random variables?

Humans aren't perfect improvement machines, but they surely beat a random variable?!

Correct. I think reversion to the mean is a non-sequitur here.

Neither the GP's quote nor the OP is making the statement "I understand that this mistake was an outlier; you'll probably never make this mistake again if you remain unchanged".

The claim being made is that acknowledging the mistake and learning from it can dramatically reduce the base probability of such mistakes happening again.

Regression to the mean applies any time a process has high variance. If I have an unusually productive week, odds are the next week will be worse, simply because the previous one was an outlier. But people often make the mistake of noticing week X+1 was much worse than week X, and attributing it to some variable that changed between X and X+1.

In my model, humans have an average performance that increases over time (unless something terrible happens) and a random component that makes performance fluctuate around that average.

The question is: is the day-to-day random variation much bigger than the day-to-day average increase? If so, regression towards the mean makes total sense.

I disagree with your interpretation. I think this is a case of understanding that the man is now motivated to save face, and will be extra careful.

This is psychological insight, not statistical insight.

Wow, you totally missed the point of the story:

3) Out of remorse, pride, being shocked out of complacency or some combination of these, the man would do his absolute best the next day, especially for the very same pilot.

There are also differences in management and industry style.

If labour is so abundant that there's little problem in replacing any given worker, people can and will be fired for trivial offences. If there's a shortage, or there's a considerable on-boarding process, less so.

The threat of a firing-at-any-moment also makes for a more tractable workforce, at least from a management perspective.

3) Experience is a series of non-fatal mistakes.

Not everything is a financial metaphor.

3) You might argue for versions of Bayesian thought as well (full Bayesian, MAP, ML).

More appropriate in this context, though, is learning. And here I would suggest multi-armed bandits (with every arm representing a pilot).


There is tradeoff between exploration and exploitation that has to be made.

The goal in this case corresponds to a particular type of bandit: postpone death for as long as possible by pulling the right arm. I actually haven't found this type yet (a mortal multi-armed bandit has a birth-death process on the arms themselves).

Edit: This is only about the learning process on the non-pilot side, as one of the other commentators already articulated.
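For readers unfamiliar with bandits, the exploration/exploitation tradeoff mentioned above is usually illustrated with epsilon-greedy, one of the simplest bandit strategies. A sketch in Python; the arm payoff probabilities here are made up for illustration, and this is the generic algorithm, not the mortal-bandit variant the comment asks about:

```python
import random

def epsilon_greedy(arm_probs, pulls=10_000, epsilon=0.1, seed=0):
    """Play a Bernoulli bandit with an epsilon-greedy policy.

    With probability epsilon, explore a random arm; otherwise exploit
    the arm with the best estimated value so far. Returns the running
    value estimates and pull counts per arm.
    """
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)    # pulls per arm
    values = [0.0] * len(arm_probs)  # running mean reward per arm
    for _ in range(pulls):
        if rng.random() < epsilon:                       # explore
            arm = rng.randrange(len(arm_probs))
        else:                                            # exploit current best
            arm = max(range(len(arm_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

values, counts = epsilon_greedy([0.3, 0.5, 0.7])
```

After enough pulls, the best arm (payoff 0.7) gets the overwhelming majority of the plays, while the 10% exploration keeps the estimates for the other arms from going stale.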

Totally ruined that warm feeling I got from the anecdote with your damn logic.

There is a perhaps apocryphal story about the time when Mizuho Securities lost ~$225 million by attempting to sell 610k shares at 1 yen instead of 1 share at 610k yen. When asked whether he intended to fire the person who physically keyed in the order, the relevant department head stated that he did not intend to waste the firm's substantial investment in teaching them to act with due deliberation.

I wish that story ended with the developer of the trading software learning a lesson about user interfaces.

This is a good point--I remember even the online game "Runescape" had a warning that would pop up if you tried to sell an item worth X for a very low price.

It would seem to me (somebody who knows squat about stock trading) that a warning screen would be much more beneficial for stock trading software than it is for an online video game.

You'd think that, but then what happens is users get used to the warnings coming up and consciously or unconsciously learn to click past them without reading.

I'd imagine in the case of the stock trader, where quick reaction is valued, he'd have quickly entered whatever key combo was necessary to dismiss the warning and made the same mistake.

Only if the warnings come up too often. I imagine it should be quite possible to define criteria that discriminate selling shares at a price of 0.0000016 yen from normal prices, by comparing with the market price, or the range of prices ever seen by the software, for example.

Alarm fatigue is an important design consideration, but not a reason to not put warnings at all (given that this isn't amenable to an "Undo" button, which is preferable when possible).

Alarm fatigue is indeed an issue -- but if the trader is repeatedly trying to sell for way below market price that often, then perhaps there is a much bigger problem...with the trader.

It ended up with order sanity checks being implemented for all manual trading systems at Mizuho. Things like blocking an order if it's more than 10% away from the market mid, etc.

You shouldn't fix an issue like this with UI validation alone - it needs to go between the component that creates orders and the stock exchange, so that it also protects against software bugs. For instance, a possible bug is a developer multiplying the order size by the lot size in the backend, when it has already been multiplied in the frontend, causing huge orders to be sent. Sanity checks can catch this.

Automated trading systems have traditionally been under a lot of scrutiny, and nobody in their right mind would run one without sanity checks and a kill switch. That incident taught Mizuho that manual trading can, in fact, also be quite dangerous :)
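A pre-trade sanity check of the kind described above might look something like this. To be clear, the 10% band, the notional cap, and the function names are illustrative assumptions, not Mizuho's actual system:

```python
class OrderRejected(Exception):
    """Raised when an order fails pre-trade sanity checks."""

def check_order(price, quantity, market_mid,
                max_deviation=0.10, max_notional=1_000_000_000):
    """Sanity-check an order before it reaches the exchange.

    This sits between the order-creating component and the exchange,
    so it also catches software bugs (e.g. a lot size multiplied twice),
    not just fat-fingered manual entries.
    """
    if price <= 0 or quantity <= 0:
        raise OrderRejected("non-positive price or quantity")
    deviation = abs(price - market_mid) / market_mid
    if deviation > max_deviation:
        raise OrderRejected(
            f"price {price} is {deviation:.0%} away from mid {market_mid}")
    if price * quantity > max_notional:
        raise OrderRejected(f"notional {price * quantity:,.0f} exceeds limit")
    return True

# The Mizuho fat-finger: 610,000 shares at 1 yen instead of 1 share at 610,000 yen
try:
    check_order(price=1, quantity=610_000, market_mid=610_000)
except OrderRejected as e:
    print("blocked:", e)
```

The deviation check alone would have stopped the Mizuho order, since 1 yen against a 610,000-yen mid is off by essentially 100%; the notional cap is a second, independent backstop.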

I've seen several versions of that story, often with the quote "Why would we fire you? We just spent $X million educating you." Probably some combination of truth and exaggeration, but it makes for a memorable story regardless.

I was not familiar with this story; that prompted me to research it. I found it interesting that this aviation incident had a positive outcome, in the form of safety innovations: the Hoover Nozzle and the Hoover Ring. Wikipedia states:

"A perhaps-undesired recognition is the Hoover Nozzle used on jet fuel pumps. The Hoover Nozzle is designed with a flattened bell shape. The Hoover Nozzle cannot be inserted in the filler neck of a plane with the Hoover Ring installed, thus preventing the tank from accidentally being filled with jet fuel.

This system was given this name following an accident in which Hoover was seriously injured, when both engines on his Shrike Commander failed during takeoff. Investigators found that the plane had just been fueled by line personnel who mistook the piston-engine Shrike for a similar turboprop model, filling the tanks with jet fuel instead of avgas (aviation gasoline). There was enough avgas in the fuel system to taxi to the runway and take off, but then the jet fuel was drawn into the engines, causing them to stop.

Once Hoover recovered, he widely promoted the use of the new type of nozzle with the support and funding of the National Air Transportation Association, General Aviation Manufacturers Association and various other aviation groups (the nozzle is now required by Federal regulation on jet fuel pumps)." [1]

[1] https://en.wikipedia.org/wiki/Bob_Hoover#Hoover_Nozzle_and_H...

In general, this design principle is known as poka-yoke:


In terms of physical connectors, I've always referred to this as polarisation. The connector is polarised so that it can only be inserted in one orientation, generally by use of a slot and key.

And the problem it solves is Murphy's Law: https://en.wikipedia.org/wiki/Murphy%27s_law

This is a great design concept to know the name of! Thanks!

this is a great anecdote, but the reason i am replying to your comment is because of your user name

i am a huge fan of the turn of the century dancer isadora duncan and her lover who she first had a child with, the theatre set design theorist, edward gordon craig who she affectionately called endymion

a complete aside, but if you have, or anyone reading this has, yet to read duncan's autobiography 'my life' i highly recommend it to anyone and everyone

she is a brilliant writer, lived an eccentric life, and she was and her writing is imbued with a mad passion for expression and both life's hardships and joys

Seems like something my wife would like to read. Just bought her a copy. Thanks for the tip.

Good idea. I've just done the same. Thanks both

I can't resist chiming in with another Endymion reference: the Hyperion Cantos by Dan Simmons. Fantastic series.

That is some really quality nerd stuff right there. Thanks!

I'm now reading My Life because of this comment. I'm greatly enjoying it so far -- thanks!

In my experience, while someone might be extra careful not to repeat a mistake that has burned them in the past, making a serious mistake is often a sign that someone is careless, and more likely to make other, different mistakes in the future.

This doesn't jibe with my experience. As long as the person doesn't evaluate their mistakes in a vacuum they become more careful in general because they learn that things can bite you in the ass in totally unexpected ways.

So I'd say it depends on your environment combined with the individual. Someone who is apathetic and/or lacks critical thinking skills will probably learn very little beyond avoiding that specific mistake again.

*jibe (jive is a dance, jibe is agree)


Shows the other meanings of the homophones - English huh!

Right you are. Thanks, I was able to edit it in time.

One mistake, even one very serious mistake, means nothing. People are human, even the best of them are going to make mistakes occasionally. Be as careful as you want, it doesn't matter, literally nobody is perfect.

A pattern of making mistakes can tell you somebody is careless, or sloppy, or in over their head. But a single mistake? That's just inevitable.

> A pattern of making mistakes can tell you somebody is careless, or sloppy, or in over their head.

Or that you need to fix the process.

We all make serious mistakes from time to time. Yes, you too. If you haven't yet, you will - don't worry. Nobody gets an exemption from that.

When that happens, I hope the people around you are more forgiving than you are.

> making a serious mistake is often a sign that someone is careless, and more likely to make other, different mistakes in the future.

Or under external pressures they have no control over.

Right. I've been in a situation where I attained a huge amount of responsibility and authority in a short amount of time. I was learning a dozen new tasks simultaneously and getting something like 4 hours of sleep a night (military). I made some pretty big mistakes, and I think it's fair to assume that anyone would under those circumstances.

Agree. Some people simply have unrealistic expectations.

Well, that's not the experience I've had at all. I bet you if you ask any senior engineer, they'd tell you the numerous different ways they have really messed up over their careers.

I think it's especially true in tech jobs, where a lot is learned on the job. I would be deeply skeptical of any tech company that takes a fire-first attitude. That just tells me 1) they treat devs as disposable and 2) they get rid of everyone who has had a chance to learn from their mistakes.

It's probably not unrelated that American air travel is very safe because there's a no-blame, learn-from-the-issue attitude to problems.

More generally, it is very difficult to build highly reliable systems of any kind without a very open culture where you focus on exposing problems and fixing them rather than shooting the messenger.

I'm reminded of the time Tony Blair visited Silicon Valley to figure out how to replicate it back home. He was at a round table with tech royalty (Gates, The Sun guy, Schmidt etc etc). Steve Jobs was there too.

Everyone was chipping in with their theories about why the U.S and Silicon Valley were so good at what they do when Jobs lost his patience and butted in in true Jobsian fashion.

"Listen! Take a look around this table. Everyone here has a massive failure in their past. Big, epic failures. In the US, we think that's a good thing. In the UK, you think it's a bad thing. That's it."

Or something like that, you get the idea.

Anyway, I've always liked that outlook regardless of whether it's true or not.

Link to the Bob Hoover story: http://www.squawkpoint.com/2014/01/criticism/


This was so nice to read. I think back to my very first coding internship after my freshman year of college when I messed up big time. I made a bad mistake that ended up forcing my supervisor to put down everything he was working on for a full afternoon and do an emergency fix. I so vividly remember sitting down in his office, trying to just calm down and keep it together. He never got upset or annoyed (at least he never showed it), instead he listened to me explain what I had done, figured out what was wrong, showed me what my mistake was, fixed everything up, and calmly walked me through the entire process, turning it into a learning experience. At the end he asked me some questions to make sure I understood everything that happened and told me "everyone makes mistakes, don't sweat it, just make sure you learn from it" and that was that. I remember leaving his office that day thinking "if I ever make it to the point where I have my own interns, I'm going to do everything I can to support them and treat them as well as he treated me." A little kindness really does go a long way!

To follow up, have you had interns yet and how did you treat them?

I had an intern for the first time last summer! I tried my best to focus on being as encouraging as I could and helping him whenever he ran into trouble. He clearly was working very hard and doing his best, so I focused on being as supportive as possible and praising the work he did (which was very good!). If I'm being honest though, he was a really terrific intern and we never ran into a situation like I did, so I wouldn't say it was much of a test :)

Mistakes happen. I can deal with that.

People lying about their progress however, that is hard to deal with.

Having been in this situation, and having been supervisor to people in this situation: it's possible to not be lying but also to not have completely understood the task, or to have had it mis-spec'd.

Having someone rip into you for lying when the spec was terrible to begin with is massively discouraging, and the OP's lesson applies just the same there.

Oh no, absolutely. There is no doubt this is what happens in the majority of cases, so when two things don't add up when you look at them, you don't accuse people of lying. Of course, this does land you in an incredibly tricky situation when you do find out there is a habit of systematic lying.

Thankfully that situation is resolved now, though it lasted for too long, and miraculously didn't fall off the cliff it teetered on.

When a supervisor gives a freshman work that can mess up everything, it is the supervisor's fault and not the fault of the freshman.

Why do you need to assign blame?

My opinion is that trusting people and giving them responsibility is one of the best ways to let them learn and grow. The bigger the mistake, the bigger the lesson. As long as it is possible to recover, it's a calculated risk which may well be worth it.

Exactly! A sane supervisor should have put a rule that disallows deploying on Fridays. End of story.

Maybe he made the same promise you did back in the day?

As software engineers:

As beginners, we're over-confident in our ability, even if we actually suck and make lots of mistakes: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

The opposite seems to become true - experienced engineers (who have learned from their mistakes) seem to be extra paranoid. I've also seen older engineers who are still confident, who talk a big game but never really learned. It seems paranoia is a great indicator of experience, and over-confidence/arrogance an excellent indicator of a lack of learning. Not to say those are hard rules, as there are certainly arrogant engineers who are excellent, but I'd personally rather work with the softer ones.

I know as I have grown I have become softer, not harder, as I realize my humanness.

Overconfidence is a huge benefit in playing the political game - getting people to take you seriously, managing up, interfacing with the outside world. Like it or not, humans are primed to respond to confidence; for most people, listing the dozen ways a system may fail signals that you shouldn't use the system, while for engineers, it signals that these are a dozen things to be fixed.

The trick - when getting to higher levels of technical management - is learning how to context-switch between being overconfident for the benefit of non-technical stakeholders and being paranoid with the technology so that the things that could go wrong don't actually go wrong.

Right, this is really a tricky game. I help run a nonprofit wiki farm, so a lot of our business is out in the open. We lost one of our clients because they were hanging out in the IRC channel while we were discussing the cost of an upgrade and the technical issues facing it, and their exit question revealed that they thought we were running out of money.

Now of course donations in the past few months have been roughly double our expenses -- information which is also public. But it's very hard to project confidence during a problem solving session, because it gets in the way of actual problem solving.

I wish I knew where the balance point for secretiveness and openness was for public organizations. But given that all of the candidates for U.S. President seem to be at a bad place, I'm starting to wonder if a perfect balance point even exists.

This is why I don't really want to get back into management, even though sometimes I think at some point it will be the only thing I'll have the patience and mental capacity to do. I saw the best and worst of myself when I was a manager - the worst is what scared me. But moreover, I learned after I stopped being a manager that I didn't deserve the accolades I got. If you were to take the best manager in the world, give him/her a shitty team, a hell of a lot of shitty code, a shittily designed application and infrastructure, a shitty relationship with the customers/users, a shitty support team, and shitty project/product/upper management, and you don't let that manager work to fix these things, they'll quit or fail, guaranteed. If you give them the best of all of these things, they will succeed, guaranteed. Being a good manager - knowing how to manage well and doing it - can be critical. However, it is nothing on its own.

One big difference is that a good manager can take an average team and build a good team over time. If you can't do that, you aren't a good manager.

A manager that can't tell who on their team is good will (in the long run) take an average team and turn it into a bad team.

>Like it or not, humans are primed to respond to confidence

Exactly. Many of us learned this the hard way - I certainly did, from experience. In hindsight it seems obvious that our emotional impulses and intuitions, while valuable, are not fine-tuned for success in technical careers.

I think intuition and emotional impulses are immensely important for communicating with non-technical stakeholders. They haven't learned the art of emulating a computer, so we've got to be human beings. IMHO.

Yea, I agree with that actually.

Fun fact: Dunning & Kruger measured perceived vs actual performance on very simple, and somewhat subjective tasks. Ability to recognize humor was one of them.

It's strangely difficult to find this important piece of the methodology. Wikipedia's article doesn't even mention it, and this context is fairly critical to understanding the effect.

I think you're right that some amount of inexperience and youth makes the kids overconfident, but what has been demonstrated is that the task difficulty is a large determinant in how good people are at estimating their own abilities, and Dunning Kruger only applies to easy tasks [1]. For very difficult tasks, the Dunning-Kruger effect actually reverses and becomes Imposter Syndrome [1][2]. Software engineering may fall into the latter category -- something that is difficult enough that, statistically, on average, beginners are actually pretty good at knowing they can't do it.

[1] http://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-k... [2] https://en.m.wikipedia.org/wiki/Impostor_syndrome

What do you mean "reversed"? DK says that both low and high skilled people estimate themselves as close to average, due to the obvious bias in one's available data

I mean that for difficult tasks, people who are unskilled rate themselves as unskilled, and people that are skilled rate themselves as skilled.

"Our studies replicate, eliminate, or reverse the association between task performance and judgment accuracy reported by Kruger and Dunning (1999) as a function of task difficulty. On easy tasks, where there is a positive bias, the best performers are also the most accurate in estimating their standing, but on difficult tasks, where there is a negative bias, the worst performers are the most accurate. This pattern is consistent with a combination of noisy estimates and overall bias, with no need to invoke differences in metacognitive abilities. In this regard, our findings support Krueger and Mueller’s (2002) reinterpretation of Kruger and Dunning’s (1999) findings. An association between task-related skills and metacognitive insight may indeed exist, and later we offer some suggestions for ways to test for it. However, our analyses indicate that the primary drivers of errors in judging relative standing are general inaccuracy and overall biases tied to task difficulty. Thus, it is important to know more about those sources of error in order to better understand and ameliorate them."

Burson KA, Larrick RP, & Klayman J (2006). Skilled or unskilled, but still unaware of it: how perceptions of difficulty drive miscalibration in relative comparisons. Journal of personality and social psychology, 90 (1), 60-77 PMID: 16448310

To clarify, I think what he is saying is that the effect exists on both ends. On the low skill side it's over confidence and on the high skill side it's imposter syndrome. It's the same effect. You are both saying the same thing.

Yes, and that makes "being kind" a lot harder.

Many people reading the article think of a nice person that humbly tries their best but makes a bad mistake. But what if they're a hugely over-confident asshole?

I mean, not everyone likes everyone, so when someone you don't like to begin with is arrogant, you might easily label them as an asshole.

I think the deeper lesson of the article is not to consider other people assholes, and to be kind to them even if you don't like them at all.

>But what if they're a hugely over-confident asshole?

Then they have absolutely no business working on a team effort.

They never should have been hired in the first place.

I don't agree. I've seen some very junior people change their attitudes rapidly (for the better) when working on a team; if we wrote them off immediately we'd lose out on a lot of promising people.

I was also one of those developers and have since changed my ways :)

Our definitions of "over confident assholes" may differ, perhaps? :)

Absolutely; however, I do agree with you. If a young developer doesn't integrate into a team role immediately, that's no reason not to hire him.

A lot of great developers are so great because they've spent years alone with their computers just cranking out code. So the best ones won't have the greatest social skills.

But if the kid's still having trouble integrating into their team role after their 6 month probationary period, for example, I personally wouldn't vouch for them any longer.

If there are five equally inexperienced applicants in the wings that will get along with everyone without having an attitude-smackdown first, then that's absolutely a reason not to hire them.

The worst problems I've ever seen on teams spring from people that are arrogant and standoffish, followed closely by people that are too dogmatic about technical minutiae and best practices in the face of situations that require flexibility (my classic example is the prototyping team that spent two weeks setting up an automated testing and deployment infrastructure, even though they only had six weeks to do their game prototype).

I'd take a pleasant junior that's really green but is happy to learn how to code better over a more technically proficient asshole that needs to have manners jammed down their throat any day - tech skills can be taught to a newbie, personality cannot. People have had a couple decades to cement personality habits as junior programmers, so those are mostly fixed, whereas tech skills are super fresh and malleable.

Yes, we might always change someone: a friend, a spouse, an employee.

But that's a risk; depending on that is ill-advised.

Well, part of my point is that assholeness is subjective. Moreover, a certain group of people seem to think there are a lot of assholes, while the majority gets along with most people.

Of course, there are genuinely poisonous people, and I agree on your points with regard to them.

And yet here they are.

With respect, this doesn't tell us what to do if we run into a hugely over-confident asshole.

>With respect, this doesn't tell us what to do if we run into a hugely over-confident asshole.

Fire them, if you can. Before they poison your entire talent pool with negativity.

When I was 19 I started working as a video conference support person for Europe at a multinational. This meant interfacing with the highest people in the company, where a failure could make an insecure CEO or similar complain to the IT manager, screaming about how this was shit, etc. This was using multiple ISDN lines making international calls, so it was prone to failure once in a while. After the first incident, the manager told me not to worry, but to just let him know when something failed so he had answers when the angry user came complaining. This was my first learning experience in this.

A few years later, in the same company, I was managing the middleware servers that handled all communications for the sales people in Europe. Here, when there was a failure, the sales people would have to fax in their copies of the data, so there was even more pressure. This is where I really learned to appreciate the criticality of my mistakes, as one could make several hundred people redo their work. However, the most important thing I learned was to slow down and pause for a few minutes so that I could explain what was going on to the non-technical managers and team leads. This was sometimes more important than actually solving the problem.

Both of these instances, and a lot of others over 11 years at that company, taught me to really double-, triple- and quadruple-check any change. It created a healthy paranoia, but also taught me that mistakes are OK as long as you learn from them, and to learn to communicate with higher-ups who could not follow the technical jargon. I think the communication part is something that a lot of very confident people have problems with. However, it's very hard to teach this to younger techs without letting them fail. So as a senior tech, I have seen problems coming from new juniors, but I have let them actually make the mistake and feel the stress and pressure of having messed up. I have found this to be the quickest method of teaching responsibility, while standing by them and their mistakes.

Just as a note about detecting mistakes and not correcting them: you choose mistakes that do not hurt the company, and of course you try to show them the mistakes before they cause harm. However, sometimes you catch the mistakes late, and that is what I was referring to.

I think that I have become softer and harder. Harder in the sense that I have no tolerance for bullshit.

Softer in the sense that I want to foster an environment of respect. As a lot of the senior leadership that I've worked with has retired, I've noticed that the 30-40 year old crowd now is more brutal.

The self-confidence of amateurs is the envy of professionals

>It seems paranoia is a great indicator of experience, and over-confidence/arrogance is an excellent indicator of a lack of learning.

This is certainly true, and it isn't limited to the field of engineering.

This was literally the biggest issue when I started at my first job. We had a team member who was terribly condescending and talked down to everyone, but especially to me. You could tell that he hated the fact that he was on a team with a junior developer, and he took every chance he had to make sure I knew I wasn't as good as him. It makes for a terrifying environment to ask questions in, because who knows what kind of response you're going to get.

Maybe I have just had bad experiences, but a lot of developers do this thing where they answer a question with a snarky question. Sometimes it's annoying to be asked basic questions, but behaving in a condescending way leads to people not asking questions at all and not communicating issues. It doesn't matter if the person is Jr. or Sr. I've seen good, capable Sr. developers miss stories because they would rather stay silent and struggle to research and figure something out instead of asking questions, due to the snarky, competitive nature of the team. At my last company I became the default mentor of a non-Jr. team member because his real mentor, who was also my mentor, was so condescending that the guy was terrified to approach him. Having been in his shoes, I had no issue repeatedly helping him get his build working or helping him understand the undocumented proprietary framework, etc. A little empathy goes a long way.

I think it comes down to trust. In a lot of teams where trust is poor, you can't ask a question without your competency coming into question in a subtle way: many developers will latch onto it and try to exhibit their business-specific or technical knowledge in a form of one-upmanship. They have turned it into a competition instead of a conversation.

A good way to detect these environments is when you observe those who have been there longer not asking obvious questions. That usually means the trust issue is there. When you start asking those obvious questions, at first you get that feedback loop of slight condescension. But then others start asking questions and you often get a fruitful conversation.

Having worked outside of the startup world and in the startup world, I think this is a little more prevalent in the startup world because there is another axis besides experience involved in these conversations: how long the person has been with the startup. It's common to have an official or unofficial hierarchy based on experience but in the startup world, there is another hierarchy based on how long you have been at the startup. That additional axis means it comes up a little more in the startup world (in my experience so far).

> Maybe I have just had bad experiences but a lot of developers do this thing where they answer a question with a snarky question.

This extends far beyond the workplace. Spend a few minutes in some of the more popular IRC channels on Freenode and you'll get the same experience. It's extremely frustrating, especially seeing it happen as an outsider to communities you like.

I feel like, in the same way that power generally corrupts people, "feeling smart" can corrupt the same way, coming down to the same attitude of feeling like you have an advantage over other people. Regardless of whether you actually are smart or not.

On one hand, I get it. The popular channels on Freenode, especially, or places like the Arch Linux forums, can get swamped with people who couldn't care less about RTFM'ing. Ungrateful, entitled, obnoxious folk, robbing resources and time from people who are putting in work for free, at least in a lot of cases. Sucks. I usually get splendid answers from these sources because I don't even bother unless I have a specific question or error, maybe with logs to back it up. Yesterday somebody on #Openvpn helped me with an issue in three lines. Nice.

But if the snark is heavy without being warranted at all, it more than pisses me off. It reflects poorly on whatever you're representing, and it's just bad communication. If all you have to offer is a lmgtfy.com link, don't bother: those immune to that hint won't follow it, and everyone else has already googled it.

And that means the developers will all try to fix their own mistakes without bringing issues to the wider team, leading to a worse solution to problems in general. Premadonna "10X" developers might write great code themselves, but if they're pushing the quality of everyone else's code down then the overall effect of having them on the team is a negative.

Anyone who treats someone else on the team in that way, regardless of seniority, is demonstrating that they're not really interested in the project as a whole. It takes a group of people to build something complicated, and someone who is antagonistic towards other people holds back the effort of the group. You're better off without them.

Maybe he meant pre-1958 developers, in the same way we used to use "BC" as "Before Christ"

"Madonna" is the Italian name of Christ's mother. The singer chose her name for that reason

I had a similar experience at my first job. I was given a lot of access to important, core functionality like server configurations (e.g. using staging vs. production API servers and DB config), but mistakes weren't well tolerated. I've always been rather frustrated that I was given this unsupervised leeway yet judged when something inevitably went wrong - what did you expect would happen with a junior developer given master keys to all of your stuff? I still don't understand the thinking behind it.

I've always felt that giving trust to the new hire is a good thing. It reminds me of something I read not so long ago, where a Perl core contributor does the same thing with collaborators.

What I don't find OK is blaming a person for an error going to production. Especially when the error is caused by a lack of supervision of the new hire's code (code reviews) and, probably, QA.

As a junior, I've had a similar experience. Programming environments seem to have a really unprofessional attitude when it comes to this.

On his behalf, I apologize. I literally started laughing at a junior's code once, but in my defense, he had spent 3 years at Amazon.

I'm sorry, but you have no defense. Laughing at a junior developer's code is completely inexcusable.

Simple fact is, if you worked for me, I would have fired you for that. Junior devs are supposed to do bad things - that's why they aren't senior devs.

"that's why they aren't senior devs"

Amen. I don't know about firing, but I'd definitely be having a conversation about proper mentorship. We hire juniors because they show promise, but need to learn. Being snarky and shutting them down doesn't just hurt the individual dev – it hurts the entire company.

Firing for laughing? Did you forget the point of the article? Be kind. Don't hand out harsh punishments for mistakes. Teach instead. (Only fire if the person is immune to teaching.)

Let me relay a story about people who laugh at and mock other people's work.

My wife used to work as a copy editor at a major newspaper, with one of those "laugh at and make rude comments about other people's work" kinds of people. Copy edits had to be checked by another copy editor before they were allowed to be forwarded to the layout team for placement in the newspaper, and the way the story check-in and check-out system worked, your submitted edits were anonymous to other team members. So there was this one guy on the copy editing team who openly mocked and called other people's work "stupid" and "idiotic". He could never say the name of the person he was talking about, but since every copy editor was in the same room, you knew he might be talking about your work. Everyone hated the guy and everyone complained about him. My wife would say how much everyone just hated working with him; he was one of those people who made the job unpleasant.

Then layoffs came around (newspaper in the Internet age). When this guy was laid off, champagne bottles were opened and people celebrated. You know someone is bad when there is a "sorry to see you go" bar get-together and they are not invited. A strange thing happened, though: productivity went up, because people felt better about submitting their work for final approval. Fewer copy editing mistakes, less stress, more learning, more openness, and more engagement with other team members. Barriers between teams fell, and even with the layoffs, people felt better.

It turns out that a person who mocks other people's work makes for a crappy work environment. You can be tolerant of failures on a technical level, but failure on a personal level should not be tolerated. Be pleasant to work with, or you should be fired.

hmm, seems a bit harsh. For instance if your guy had done that once and been taken to the side and told "stop, shut up, never do that again, next time you're fired" then (a) the work environment would have improved much faster and (b) he might have never done it again. Firing instantly for one harsh comment/action seems like foolishness, just like keeping a known problem person around forever (as in your story) seems like foolishness.

A fireable offense? Software development is a profession, not coding school. Junior devs are expected to have an education, to have written code before, and to know the basics. If they write something so crazy that it induces a chuckle, it's probably pretty bad. By all means tell them how to fix it, but don't mask the fact that the job they are getting paid to do expects them to know how to do this.

Maybe an analogy would do better. In the Army, Soldiers go through basic training and are taught just that - the basics. Once they show up at a unit, they will have lots of questions and lots to learn. But if they show up with their name patch velcroed upside-down on their uniform, I guarantee their Sergeant is going to chuckle before they tell the Soldier to turn it right-side up.

Yeah, there's an obvious lack of details in the story that should preclude judgement of the poster.

I've seen people write things like: if (true == true)

I won't be mean to you about it, but I can't promise I won't chuckle a little before explaining why that's unnecessary. If you've been working in a professional setting for 3 years and are writing code like that, then yeah, I might struggle to be empathetic.

> I've seen people write things like: if (true == true)

Which, if you make that an === in JavaScript, can actually make sense in some situations :)

Can you give an example? I can't think of any situation to literally write if (true === true) rather than just if (true). (If anything.)
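For what it's worth, while a literal `true === true` never makes sense, the more general pattern `x === true` does differ from a plain `if (x)` whenever `x` can be truthy without being the boolean `true`. A quick sketch (the function names are just for illustration):

```javascript
// Strict comparison against `true` only passes for the boolean value itself,
// while a plain truthiness test passes for any truthy value.
function isExactlyTrue(x) {
  return x === true;
}

function isTruthy(x) {
  return Boolean(x);
}

console.log(isExactlyTrue(true));  // true
console.log(isExactlyTrue(1));     // false, but isTruthy(1) is true
console.log(isExactlyTrue("yes")); // false, but isTruthy("yes") is true
```

So `foo === true` can be a deliberate guard against truthy-but-not-boolean values like `1` or a non-empty string; a bare `true === true` literal is still pointless.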

Sorry I not only mis-read but mistyped and I can't edit or modify my original post. Please ignore it :(

I've never waded into JavaScript, so I'll take your word for it. I've heard some...surprising...things about JavaScript, so I'm not really taken aback that this might exist.

I'm referencing a time when I was in school, and most everything was taught in Java, where that doesn't make sense. This was written by someone who was almost finished their degree and had already started at their professional job. Definitely junior, but I still feel like that's stretching the acceptable level of noobishness. That said, I have an EE degree, so maybe had to solve / reduce a lot more boolean algebra expressions than a straight CS major?

Your comparison is apples and oranges if you think a person's education in software is comparable to the Army's. In the Army, the training is truly hands-on and rigid, while learning how to code is never fully hands-on; it's always partly figuring things out on your own, so each person ends up with a much more varied understanding of, and competency in, how to do the job.

I think you're quite possibly judging the guy too much based on a simple internet comment.

His statement could mean anything from cruelly laughing in the face of the junior dev during a face to face code review to having a quick heh under his breath from the privacy of his office at the developer's use of "if (foo === true)". Or anything in between.

Perhaps, but the "but in my defense, he had spent 3 years at Amazon" quip seems to confirm that this guy has a poor attitude.

> I think you're quite possibly judging the guy too much based on a simple internet comment.

There are such judgements in every thread.

It's even better when it is paired with the famous "I would not hire you" (or its variation, "I would fire you"), even though the conversation does not relate to hiring at all, the OP has shown no intent to be hired at any company, let alone the answerer's company, and, icing on the cake, the answerer is not in a position to hire anyone.

If it was a completely private laugh, seems weird that they would apologize for the previously mentioned jerk.

Due to open office layouts there is no such thing as a private laugh.

Again, the guy spent 3 years at Amazon and was talking himself up big. I looked at the code and started laughing. He wasn't a junior in title, in fact, he was on his way to being senior and had his eyes on an architect position.

That is my defense and the only time I've laughed at someone's code. I guess the combination of him talking himself up big about all the amazing things he did at Amazon and the code vomit that he actually produced was too much for me.

I regret it, but he shouldn't really have been that junior after 3 years?

Though I guess I still write pretty shitty code now too on occasion.

Alright, I'll take the karma L in this case.

You are an asshole if all it takes to fire someone is a laugh. Everyone should be able to laugh at everyone's code. It's just a code.

> It's just a code.

Well, no. It is the product of someone's work. Someone who is trying to learn and should be supported and taught how to make it better and why it is not as good as it should be.

Laughing at someone's code is like laughing at someone's painting or their novel or really any other creative endeavor. It's not ok period.

My team playfully makes fun of each other's code all the time. Every once in a while, someone does something stupid in code and we playfully make fun of that person, everyone on the team takes it in stride and our code is better for it.

Hell, every now and then I stumble across some idiocy in the code base and think "LOL, whoever did it this way is an idiot". git blame. "OH, hahaha, it was me, what an idiot past me was, amirite guys?" Everyone on the team takes levity about mistakes very seriously ;) It's healthy.

Mocking condescension on the other hand is a different thing entirely.

Self-deprecation is classy. Other-deprecation is pathetic

If someone's work is poor enough that a reasonable person might laugh at it, then there's nothing wrong with that. Hence the phrase "laugh at our mistakes". It doesn't mean you are a bad coder/painter, it just means you made mistakes that were a little funny and can learn from them like anyone else.

There's a difference between laughing at your own mistakes and laughing at someone else's. People who are ridiculed for making mistakes are incentivized to hide their mistakes rather than learning from them.

We "laugh at our mistakes" not "laugh at other people while they are making the mistake".

Is it? Or is it more like laughing at someone who built a table with 3 legs and wonder why it fell over? Or someone who fixed a car but forgot to put the engine back and wonders why they can't go anywhere?

Senior devs make mistakes too. I think your tone is rather inappropriate given that the poster seems aware of the inappropriateness of his response and is apologizing for making that sort of mistake.

While I admire the sentiment, I've NEVER seen an asshole fired for belittling juniors/co-workers.

Harsh, withering treatment is quite common not just with developers but in all technical fields. I wish things were different, but that's the reality as I've seen it.

At more than one place where I've worked with at least semi-functional management, people with attitudes like that tend to get moved to special "one person" teams and then are given a long stream of crap work (or at least work they feel is beneath them) until they get bored and quit.

I've seen it once, but the guy cursed and laughed at the team architect when we had a meeting instructing us to start writing tests and adhering to a certain level of code coverage. Seeing the guy literally begging for his job after witnessing him demean and belittle people for months was kind of eye opening to me.

Hopefully after you make the mistake of firing someone for laughing, your boss is kind to you (as in the OP) and lets you keep your job

Trading one form of radicalism for another. That's the spirit...

Maybe I don't get the joke, but why is this a defense?

(I had a collegue at my last company who worked at Amazon before. His code was just fine.)

He said "In my defense", justifying laughing at the junior dev because he (the junior dev), had 3 years of experience at Amazon.

Right, which suggests that coders from Amazon generally have rubbish code, excusing the laughter.

No, it means he expected that particular person's code quality to be better than that, because he lasted at a 'Big Four' company for three years and thus should have decent coding chops, but apparently didn't, thus the incongruence made him laugh. It doesn't mean all Amazon coders are shit.

The original wording is ambiguous. I too read it as if OP had some issues with people coming from there.

I like figuring out how to look at a sentence how others do (like those images that can be viewed two ways). But I'm having trouble with this one. Amazon devs being generally bad would not be suitable for use as a defense ("in my defense") for his behavior. I think it would require the assumption that he thinks that Amazon devs are generally bad as well as the assumption that he thinks that laughing at them is something that's generally permissible.

I too originally read it as saying that it was expected to be bad coming from Amazon. And the "in my defense" part is that, if you know the coder is from Amazon, then you understand why the code might actually be truly bad enough to elicit a chuckle (as opposed to just chuckling at normal bad code). In other words, it's like saying "in my defense, it was super bad code".

That said, I think the interpretation that "he had 3 years experience at a big company, he should have been better" is probably the correct one.

> I think it would require the assumption that he thinks that Amazon devs are generally bad as well as the assumption that he thinks that laughing at them is something that's generally permissible.

They are so bad that laughing is permissible.

Maybe you just don't spend enough time around arrogant people to interpret this sentence this way :)

I had expected better code to result. It was borderline comical.

That is completely unacceptable behavior. If you were in my company I would petition to have you fired. I hold a strict no asshole rule, life is too short to work with people who would do something like that.

Sounds like I wouldn't want to work with you anyway. Life isn't rainbows and unicorns.

Then you're as bad as he is. Especially given that he's already acknowledged he shouldn't have done it. Indignation doesn't make you right, it just makes you stubborn.

Wheaton's Law violator here. Don't be a dick.

Laughing at a junior's work when you are their mentor is a firable offense in my eyes.

People ready to fire someone for laughing should not be allowed to work with other human beings at all, in my opinion. What's going on with this ready to be offended for wind blowing the wrong way culture and the fragile egos?

It's not just about laughing. It's about laughing at someone's work in a derogatory manner. I hope you can tell the difference and understand the extremely negative impact the latter can have on culture.

I hope you can understand the extremely negative impact firing someone over a simple mistake can have on culture.

Laughing in this situation is merely inappropriate. Firing someone for being inappropriate once or twice is incredibly toxic behaviour.

I disagree. Firing assholes is never bad for culture. On the contrary, it increases morale and makes the workplace better.

Keep in mind that I don't categorize laughing at someone's work as a "simple mistake." Bugs can be simple mistakes. Offensive jokes can be simple mistakes. Laughing at someone's work however is deeply troubling behavior that actively undermines trust and discourages cooperation in the workplace. That's why you have to kill it with fire.

> Firing assholes is never bad for culture.

Firing somebody based on your own subjective opinions is toxic for culture.

I've worked on amazing teams with plenty of good-natured ribbing, and I've worked on great, high-performing teams where you could say "this code is rubbish, you can do better". I've also worked on teams where saying that would really hurt people's feelings and impact morale.

Put aside your own pre-conceptions and look at how your team responds to an event/situation. That's the only way to build a high performing team.

Sometimes that will mean laughing at someone's code is toxic and needs to be addressed, other times it will be a non-issue and addressing it creates an issue, and other times it can even be a good bonding exercise.

I already specified that this isn't just about laughing, but about laughing at someone's work in a derogatory manner. You seem to be saying "but a laugh can mean other things!" and it's kind of besides the point.

No. I'm not. I'm saying people, cultures, and teams operate differently and assuming that laughing at someone's work is toxic rather than looking at the actual impact to the team is unprofessional and a sign of poor leadership.

I worked in a team where if you broke the build you had to put on a clown nose for the rest of the day. It worked well and was a bit of fun. We still catch up a couple of times a year even though I left that team more than 5 years ago.

I've worked in other teams where pressuring a team member to wear a clown nose would be harassment and deeply unsettling.

Everyone seems to be taking this too far--parent just mentioned a laugh. It could be anything from a well meaning jab to a mean put-down, and I'm reading it as just something funny for the junior to learn.

Wasn't his mentor. Just a guy reviewing code.

Nice little read. There is that little answer in the back of my head of:

"It is a good question why I caused this problem. It is weird to think there is a competent company that has been around for so many years, yet has no procedures in place to stop this from happening. You would think that any change that could cause downtime on a client's website would go through an automatic test suite, and only after passing all tests would it be tested by human QA and finally made live. In this particular case I am the one who has made a mistake, and being a sensible person who is eager to constantly better myself, I will try to learn from this error. However, I work alongside dozens of other people on my team who, like me, have every chance of making a mistake at some point. It seems like a bad policy for us as a company to say that we should expect every person on the team to break a client's site and then rely on overtime from other people to fix it. So of course I will try to do better, but this is not fixing the root of the problem; the root of the problem is something that needs to be fixed at a much higher level. Have you, as my boss, not considered this problem already? What has the company learned from this? I would be happy to be part of the team that solves this problem by creating unit tests and creating policy to avoid this."

Of course I purposely make this slightly stand-off-ish, over the top, and written from a very specific perspective, to illustrate a point. But the reason I do this is I absolutely agree a single developer shouldn't feel bad about making a mistake (they should try not to, but these things happen), but the company should put measures in place to minimise the impact of mistakes. When the company fails to do this, the company has screwed up a lot more than the developer.

Think of what you're saying on a meta-level here. You would basically be telling this guy, "I fucked up, but I'm blaming you for treating me like an adult instead of a child. You screwed up by giving me enough autonomy that I could make a mistake." That might not exactly be a smart career move..

On a more personal level, if someone shows you kindness in the face of a mistake, the last thing you want to do is throw it back in their face and go on a tirade against them. That's a very quick way to make sure nobody ever wants to work with you again.

I disagree. If you have something that is of significant value to the company, you need to hedge your risk. Automation is a particular hedge. People fail, and while processes do too, they often fail less.

Would you be comfortable moving something fragile by hand that is worth a lot of money? Maybe. Would you prefer if the fragile item being moved was done with an automated process that was shown to best protect fragile items? I would.

"I'm really sorry I caused that car accident." "That's okay, next time take Driver's Ed and don't drive on sidewalks." "Thanks, I'll make sure to do that... by the way, why can I even drive on the sidewalk in the first place? Why wasn't it required to take Driver's Ed?"

I'm very obviously using hyperbole here, but questioning a process, to me, often shows a level of maturity in engineering. It's not blame-shifting - you clearly still made the mistake - but it's fixing a root cause.

I do think it has to be brought up carefully and with the right tone at the right time though.

I think you've hit the nail on the head at the end with "right tone at the right time" - the right time is almost certainly not when one has screwed up.

Or if you think it can and should be fixed at that institutional level tell them what you've learned to do better on a personal level and then volunteer to help lead the effort to create the automation/systems to prevent anyone else from making the same mistake in the future.

Well, "process" and bureaucracy tend to march hand in hand. I think it's easy to get carried away with creating "processes" (i.e., additional bureaucracy) when sometimes people just need to show good judgement. Pushing code right before you leave for the weekend is just bad judgement, and the guy learned his lesson. Heavily bureaucratic institutions laden with processes are not exactly known for their efficiency or pleasant work force.

OTOH, at the most sophisticated organizations, being afraid to push on Friday demonstrates an embarrassing lack of QA infrastructure.

Reminds me of an old VP of mine who came thundering out of his office, absolutely livid, shouting, "What idiot gave me permissions to delete the source backups???"

From a security perspective, there is the concept of least privilege.

Why does the VP have that kind of access by default? I understand having a separate account if the need is there, or having some sort of privilege escalation.

In my new job as a network engineer, all the RH boxes have SELinux off. I don't get it, nor can anyone give a cogent answer. And four floors up, we have an SELinux kernel dev.

I'd greatly like to even just limit root's potential damage to users' data (in this case, DBs).

> Why does the VP have that kind of access by default?

Two common reasons. First, because the VP or someone in a similar position resented not having such access; it's sometimes hard for people to accept that people who work "for" them hierarchically have permissions they don't. And second, because either they or the person who previously held their position had a legitimate reason for such access in the past, and didn't drop it when they no longer had such a reason.

But yes, even if they have a legitimate reason for such access, they shouldn't have that permission all the time, only when they intentionally don that hat.
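That "don that hat" idea can be sketched as a toy capability pattern (hypothetical names, not any real ACL library): the ordinary handle simply lacks the destructive operations, and escalation is an explicit, logged step.

```javascript
// Toy least-privilege sketch: the default handle exposes read-only access;
// destructive operations exist only on a capability object that must be
// explicitly (and auditably) requested.
function makeBackupStore() {
  const backups = new Map([["source-backup", "snapshot-data"]]);
  return {
    read: (name) => backups.get(name),
    // "Donning the hat": escalation is an explicit, logged action.
    withAdmin(reason, fn) {
      console.log(`AUDIT: admin capability granted: ${reason}`);
      fn({ remove: (name) => backups.delete(name) });
    },
  };
}

const store = makeBackupStore();
console.log(store.read("source-backup")); // "snapshot-data"
// The plain handle has no `remove`; deletion requires explicit escalation.
store.withAdmin("pruning an expired snapshot", (admin) => {
  admin.remove("source-backup");
});
console.log(store.read("source-backup")); // undefined
```

Nothing here enforces who may call `withAdmin`, of course; the point is only that destructive access becomes a deliberate act that leaves a trace, rather than ambient permission someone carries around by default.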

I read what he was saying differently. When errors like this occur, it's very much worth evaluating what could be done to avoid them in the future besides me just learning my lesson. There are many general team practices (e.g. automated functional testing, automated load testing, static analysis, code reviews, non-production environments, etc.) and things specific to the problem at hand (what test is missing that would have caught this?) that must be considered after a major issue. If these aren't evaluated, then it is likely the errors will happen again. I don't believe this is treating people like children. Instead, this is the team deciding to be adults by avoiding emergencies and being more confident when releasing.

Why not both? Of course you can accept responsibility for this mistake but then suggest creating a testing suite to run commits through before making them live?

Yip, that's exactly what I did today. I made a mistake which borked layout on one page. I thought I'd looked at it, didn't, and it got deployed. I'm now suggesting the company use continuous integration and some automated screenshot type tests for catching these issues.

I hate making mistakes, but if more robust procedures are the result, well, it's an overall win!

Good point. It's important to take responsibility even if you think there should have been guardrails.

I worked for a boss once who essentially engineered this kind of problem to exist: he knew not having automated testing etc. was a problem, and instead of asking his (too junior) programmers to build up a test suite, he decided to let them fail and discover for themselves why writing up test suites is a good idea.

His reasoning (later revealed) was that you can't give people fish and expect them to become expert fishermen. They have to experience hunger and ask to be taught to fish. If people don't have the soul-crushing experience that teaches them why something is really important, they never really internalize why it's important. That the reason why we appreciate that this stuff is so important is BECAUSE we suffered through something that taught us that it was important, and that if we don't give junior folks that same kind of experience, then good practice just becomes something on the list of priorities and not a moral imperative.

There's something to that, but at the same time, it's basically a defense of institutional hazing on the investor's dime. So take it for what it is.

> His reasoning (later revealed) was that you can't give people fish and expect them to become expert fishermen.

I think you can, it's just not nearly as efficient. College and trade schools are essentially about mixing a very small amount of actual necessity (deadlines, tests, etc) with copious amounts of being told what's best and how to accomplish something. This works well because it's both more palatable to a wider audience of people, and it accounts for people's different interpretation of events. There may be multiple things to learn from any particular failure, and without some guidance you may come away having learned few or none of them the first time. Being told what to expect before hand, or what you could have done to mitigate problems afterward, goes a long way towards making sure you consider all the useful aspects of the problem.

Will you learn any one single lesson as thoroughly or as fast as you would in the real world when it affects you so much? Likely not, but then again, that assumes you actually saw a solution to the problem, and that still might just be one aspect.

There is a cost in not having certain procedures in place. On the other hand the red tape has a cost too.

A single mistake shouldn't lead unconditionally to a mandatory multi-step process that involves multiple people and may take several months to finish. Examples of "cover your ass" decision-making overpowering common-sense engineering are abundant.

That's true, you can definitely over-correct. I guess if I was the supervisor in this case, I'd be doing some analysis with the developer after the fact. Were procedures correctly followed? If not, use this event as an example of why the procedures are in place. If they were followed, was it a brain-dead simple error? If so, then obviously there is something wrong with our test plans. If it was one of those errors that turned out to be a perfect storm of environment differences, unexpected inputs, code side-effects or what-have-you, then there probably is no quick solution and the imposition of some overbearing system to try to limit something that will probably not happen again would be overkill.

It's always best to do an after action analysis to see where things broke down, I think. Changing procedures or adding protections might be warranted, as long as it's not the "get my permission before ever putting a Y in that field again" kind of enhancement.

It depends on the project. Is it a web application for keeping track of daily tasks? You probably don't need that much rigor. Are you building something that if it fails will harm/kill/cost people a lot of money? More rigor is needed.

I feel where you're coming from, but in reality it's not like any test suite can guarantee to prevent downtime caused by bad deploys. All it takes is a little networking related bug, something that differs ever so slightly between environments, whatever, and your site could be down.

I agree with the other replies that while you may be the 'technically right dev' kind of person, it still comes off as being a dick and you will likely not get a positive response from something like this, especially if you caused it.

What makes it more reasonable to say it's the company's responsibility to protect you from fucking up the site instead of you just being a good developer, and not fucking up the site?

It's not about being a good cowboy developer, it's about being a professional engineer who builds systems that work.

I don't expect the folks who built my house to detach and reattach the door perfectly every day, I expect them to build a house that works.

Exactly. If you're one mistake away from a disaster, you already lost.

Ideally you would have either test gates, or a staged rollout with automated rollback.
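To make the "staged rollout with automated rollback" idea concrete, here is a minimal sketch. The helper names (`set_traffic_percent`, `error_rate`) are hypothetical stand-ins for whatever your load balancer and metrics system actually expose, not a real API:

```python
import time

# Hypothetical helpers -- illustrative only, not from any real library.
def set_traffic_percent(percent):
    """Route `percent` of traffic to the new release."""
    print(f"routing {percent}% of traffic to new release")

def error_rate():
    """Return the observed error rate for the new release."""
    return 0.001  # stand-in value; would come from monitoring

def staged_rollout(stages=(5, 25, 50, 100), threshold=0.01, soak_seconds=0):
    """Ramp traffic up stage by stage; roll back if errors spike."""
    for percent in stages:
        set_traffic_percent(percent)
        time.sleep(soak_seconds)  # let metrics accumulate at this stage
        if error_rate() > threshold:
            set_traffic_percent(0)  # automated rollback
            return False
    return True
```

The point is that the rollback decision is mechanical, so it doesn't depend on a human watching dashboards on a Friday evening.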

One way to avoid Friday deployment issues is to go to the pub. Obviously you need to spend all afternoon there and not be tempted to go back and deploy, otherwise issues may be compounded! It seems to be a common mitigation technique in some shops I've worked at ;)

I can confirm it is an approach taken by different companies out there.

I used to work for this company where every single Friday we would go out for lunch and get tipsy (wine with the food, some digestive liquor shots just at the end, then maybe a mixed drink or a beer at another bar) because we were the only team not allowed to go home early on Friday... for no reason. Mind you, we were some kind of internal IT team and there would be no one to request anything from us, so we never had urgent stuff to do.

Back in the office we would enable the "fire extinguisher mode" which meant "only move if there's a fire" and watch silly videos on YouTube, have some coffee with Baileys because why not...

I like the way you drink ... wooops I mean 'think', I like the way you think.

I hope Kevin learned not to let a junior dev deploy to production on Friday.

Well, he'd have to sooner or later. Better start learning as soon as possible :)

Nobody should be deploying to production on Friday... Nobody.

My company just had to do it today in response to a critical security issue we identified in production.

Saying that nobody should ever do it isn't realistic, sometimes things go wrong on a Friday, sometimes you can't afford to wait 3 days to watch them become even worse.

It's a good rule to live by, and like all rules it can be overridden when the circumstances require.

It's good to instill in your team that Friday deployments should be done only in special circumstances, or they become a habit (and people inevitably end up working Saturday). But yes, sometimes it's necessary.

IMO, if you deploy on Friday, you are promising to be available for fixing it on Saturday... which makes it not really a Friday in the sense in which it's meant.

Don't deploy when you won't be around to support it is a better description; but it doesn't roll off the tongue as easily.

So I should just leave a security vulnerability rather than fix it on a Friday, because I won't be available Saturday?

If it has the potential to take down your site? Unless you have someone who can fill in for you, absolutely.

Really, it will come down to a cost/benefit analysis. Are your chances of the site going down due to being hacked over the weekend higher than the chances of a last minute update taking down the site?

The answer is almost always no (much to a sysadmin's chagrin). If the answer is yes (i.e. another heartbleed), then you are probably going to be working through the weekend anyways.

Read to comprehend, not to argue.

Great, now what am I going to do with my life?

This should be at the top of every page here.

See if you like this version better:

"Only deploy on Friday if you like working weekends"

Yay pedantry!

I don't think I was being pedantic. The parent poster was very clear, there was nothing there for me to overstate.

I'd rather have a robust infrastructure and enough confidence that deploys won't break anything that we simply stop paying attention to the day or time!

And you don't get to that ideal state by breaking production on Fridays.

Friday is the best day to deploy to production. If something screws up, you still have the weekend to solve the issue before the big bosses are there.

On weekdays you have the added pressure of other work.

(Of course it depends on whether your business is loaded mostly or evenly on weekends versus weekdays. In most businesses I've known, the weekend is usually the lower load/customer-visit period.)

No, the best time to deploy is when everyone you need to fix the issue is in the (metaphorical) building already.

You should not be pulling people in from their time off to fix shit.

And if they've done their job correctly, you wouldn't have to -- even if you deploy on Friday.

+1. Some systems aren't used heavily on weekends so it acts like a beta test.

In my experience this strategy has not worked out very well. When we tried to deploy risky changes during low traffic times, we would end up creating time bombs, scale issues being masked until a high traffic time rolled around.

This also creates a kind of "brain latency" for developers, I think. I'm coming at this from the operations side of things, so maybe my observations are a little biased here. I have observed that if people deploy changes and it breaks something immediately, it's a very clear correlation and they can generally fix the bug pretty quickly. If they deploy and then it breaks 72 hours later, any number of things could be the culprit, especially in a fast moving environment (times 1 billion percent if it's a microservices architecture without a strong devops culture, which most of 'em are). Debugging then takes much much longer. This is made worse if the person who deployed the change is not quickly available when their thing breaks, and it makes being on call for someone else's unproven feature very stressful.

So instead I think it's better to make sure deployment and build systems are rock solid, and deploys are as accessible and as idempotent as possible. Chatops type systems are good here. Then you can roll out big changes during peak traffic and be confident that you can quickly revert if it goes bad, and that the changes were reliable under load if it goes good. I also think it's critically important that big changes are behind rollout flags, such that you can dial up or dial down traffic at will. This is also useful when introducing new cache systems or something like CDN if you need to warm up slowly.

This is a better approach I think than trying to use the time of day to modulate user traffic. I would rather developers can control traffic to their feature themselves and have the person deploying the change with their hands on the wheel until they are confident they can take them off. That way people can do stuff independently, and everyone can trust everyone to deploy and yet still feel safe.
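The rollout-flag idea above can be sketched with deterministic user bucketing, so the same user always sees the same answer while you dial the percentage up or down. This is a generic illustration, not any particular feature-flag product:

```python
import hashlib

def flag_enabled(flag_name, user_id, percent):
    """Deterministically bucket users into 0-99 by hashing the
    flag name and user id; a user is in the rollout if their
    bucket falls below `percent`."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Dial a hypothetical feature from 0% -> 10% -> 100% at deploy time
# or during peak traffic, without shipping new code:
# flag_enabled("new-cache", user_id, 10)
```

Because the hash is stable, raising the percentage only adds users; nobody flips back and forth between the old and new code paths.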

[Client, Friday 4PM]"Hello, client here. I know it's Friday 4pm but we messed up and did X. Could you deploy Y fix, thanks!"

[Client, Friday 4PM] "We are having a big sale this weekend we told nobody about. Could you quickly deploy a fix where all the product's prices are red and bold? That shouldn't take you long, right?"

[Project manager, Friday 4:45PM] "Hey team, X just released an important security fix for Y platform. I need you to deploy it right now or the client could get hacked."

The appropriate response to Client is "Great, but it will cost $Z in Friday fees"

In my opinion, only the last one may be justified.

And my own darn fault for using platform Y from team X that would release a security update on a Friday. Yes, fix it. But if this isn't a one-off, get off of platform Y.

Are you suggesting delaying security updates because it's Friday? I mean, you can update only on Monday if you so wish, you're unprotected either way.

If platform Y regularly releases security updates on Friday, that's poor management by X and I would reconsider using platform Y. 0-day issues of high severity should be the exception.

Timezones can make it so. It's not necessarily their fault.

Time zones do make things interesting, but weekends are more or less synchronized. (Holidays are not, notably.)

Reminds me when my boss got pissed at me because we were discussing moving all deployments to production after 5pm and I replied something along the line "I'm not going to be on the hook after hours to fix problems other people caused".

At one place I worked I put in a 4pm deadline on deploys, because I was tired of the devs seeing the end of the day as their deadline, tossing it over the fence to me, and then sodding off at 5. Invariably, it led to me trying to hunt someone down at a time when everyone was commuting home or similar. 4pm was enough time to do the deploy, determine if something went wrong, and get the relevant dev started on the fix.
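A deploy-window rule like that is easy to enforce mechanically rather than by nagging. A minimal sketch (the cutoff hour and blocked days are assumptions you'd tune; real setups usually put this check in CI or the deploy tool):

```python
from datetime import datetime

def deploy_allowed(now=None, cutoff_hour=16, blocked_weekdays=(4, 5, 6)):
    """Refuse deploys at or after the cutoff hour, or on blocked days.
    weekday(): Monday=0 ... Sunday=6, so (4, 5, 6) blocks Fri-Sun."""
    now = now or datetime.now()
    if now.weekday() in blocked_weekdays:
        return False
    return now.hour < cutoff_hour
```

An emergency security fix would bypass the check explicitly, which at least makes the exception visible instead of habitual.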

Easier said than done. It's best avoided, of course. But when on a startup you generally cannot afford to be an idealist.

Because missing three days of velocity will definitely cause your growth hacking to fall off the hockey stick and reduce the tempo of your disruption of unique hackathons for people with a left little toe deficiency?

Nothing of that. The reason has nothing to do with "startup culture", head in the clouds buzzword crap, quite the opposite.

The changes are pushed and working in staging, the tests are passing, QA is done. Why hold off? So that you get your Good Practices™ badge? I'd say it's better that you don't have to deal with last week's work on a Monday if at all possible.

I think being down to earth, and keeping your good judgement is key. I don't recommend making world-shattering changes on a Friday, but even then, well, it really depends on the circumstances.

If it's a startup the more likely scenario is no QA employees and minimal test coverage.

Surely you can envision some scenario where it makes sense?

* It's the holiday weekend and your new website with its curated comparison feature needs to go live

* It's the end of the quarter and having this is the only way you can sign a deal now (enough of your partners will be bound to quarters that this is possible)

* You're in the business of live sentiment analysis from TV video and a critical bug needs to be fixed before this weekend's Presidential Debate or your news channel partner will be pissed

Reduce the tempo? Some of these can kill.

Well, what do you do if you have a submission of your project on Saturday morning, and someone finds a bug on Friday afternoon? You fix it and hope it doesn't break anything else, that's what :P

"Never deploy on a Friday" assumes that you don't work weekends.

I can see not PLANNING to deploy to prod on a Friday, but rules were meant to be broken. When a change is needed, a change is needed.

Trial by fire. Unavoidable in battle, untenable in surgery.

He did :-) It's a lesson I also learned that day.

Definitely a lesson that was re-learned that day. Some lessons only come from pain.

Kevin might have learned more than Brian that day. It goes both ways :)

The reverse takeaway is perhaps even more valuable. Most are not going to be as angry as you expect them to be. When you mess up don't hesitate to tell people, it's going to be okay.

This is pretty much rule #1 for my team. If you make a mistake, I'm not going to yell at you, but

1) If you try to hide the mistake I'll be mad. As soon as you know there's a problem, we can mitigate the damage by addressing it immediately.

2) I want to see that you're learning from it. A pattern of repeated mistakes may take some explaining.

This should be upvoted to the top! Definitely more than some story about turn of the century dancers :)

Most of my career I've been the guy people come to when they get stuck and need a second pair of eyes or just advice. I enjoy it, we all have to start somewhere. For the most part this has been considered a positive by my bosses, because they see the value this creates even if it's not technically my core job.

I did however once work at a company that was very metric-focused. Turns out that the metric they used for our team was closed tickets, and they felt that given my salary I didn't close enough of them. No understanding whatsoever that others on the team were able to close more tickets because of my help (let alone that the tickets that made their way to me tended to be the complicated ones that others couldn't solve). After the first talking-to, I stopped helping out my team members (I explained why) - and jumped on the first opportunity to get out of there.

Mentoring is hard. It makes me question everything that I know, and worry about what this guy's code will look like in a year if I criticise this, or praise that. I wish there was some way we could all just work together, for real, in real time.

I miss construction. Back then, I could just tell someone "Hey! You! Don't fuck that up, I'm pouring concrete around it tomorrow!" And we would still be cool at lunch break.

Ha, it's true. I grew up around lots of construction and farming. Those people just aren't as sensitive as white collar workers. Probably why I'm not working at a place like that.

If you have the culture you can make similar jokes with people who write code.

I run a number of side project sites and I do much of my work late at night.

I have one simple rule for myself - which is to never deploy anything at night before I go to bed. I've made several critical mistakes which I deployed to production and then went to bed only to wake up and find out that users couldn't use my products.

These days, I do all deployment in the morning - that way, even if there is a critical bug, I am awake to catch it and fix it quickly.

Generally a good idea if you don't have a setup where everything is validated end-to-end before you go to production - which is most software. And even with end-to-end automated verification, there are still risks outside that scope (a server may run out of hard disk space from the new code, to name something random).

Definitely. I have a similar policy about deploying late in the day on a Friday. Rushing to get something deployed before an arbitrary deadline is generally a bad idea. By all means, push hard to finish the feature and qa it thoroughly, etc., by whatever cutoff, but wait to hit that launch button until the next morning. Saves so much stress.

This reminds me of a similar story about being nice. It has nothing to do with software development but I want to share this anyway.

I was in Seattle and I took a public bus. The fee was $1. It was my second day in the US and I didn't have a dollar bill in my wallet (nor a transit card). I was assuming that I could pay with a larger bill, just like in Japan. Big mistake. The driver yelled at me but nonetheless she let me ride for one time. Then the guy sitting next to me suddenly offered a dollar bill and said "take this, so that you can pay for it by yourself." I probably looked at his face with amazement. "Thank you very much, sir." The guy got off the bus a few stops later and I never had a chance to say this, but I know it's my turn now. I want to thank the nameless person who slightly changed me. Kindness is contagious.

I had a similar one. I was working in Rio de Janeiro, here in Brazil and money was really really tight back then. I would fly from Rio back to my city where my girlfriend is every other week.

So sure enough I got into a taxi cab and asked him to drive me to the airport, but make a stop at an ATM. When I went to withdraw the money for the cab, I noticed my payment didn't go through, it was still pending. I was desperate, didn't know what to do.

I kid you not, the guy withdrawing money from the ATM next to me somehow managed to notice my despair and got a chunk of money bills out of his pocket and said: "how much do you need?". I think I just stared back at him for a minute or so. "What?", I replied.

He told me that one day I'd have the opportunity to do the same for another person. I always remember this and always help strangers any time I can. And I realized that we are the ones that gain the most when we help.

EDIT: somehow somehow

You are allowed to overpay. The drivers don't like it because they are Seattle nice and would prefer you just not pay at all.

I wish my first development job was like this place. Instead there was a blame culture and I'd get told off when I made mistakes, eventually getting fired for not progressing quick enough.

It's now 4 years since I was fired from that job and I'm still in development, despite that incident nearly causing me to decide it wasn't for me.

I was extremely happy to read this post, it gives me hope. I try and take a similar approach when working with newer developers, or those that are inexperienced in areas.

This is underrated for managers and leaders trying to develop young talent. The takeaways:

  - With a system outage, don't lose your cool. Rationality is what's needed at the time.
  - Elevating "perfection" at the expense of potential breakage will ensure nothing ever moves forward and your team doesn't grow.
Some might believe that a serious ass-chewing is needed in this scenario; that would be a decision that's simple, quick and stupid. An emotionally-charged ass-chewing is almost always a sign of weakness in the ass-chewer. The only thing anyone ever learned about a beating is how to avoid it in the future.

Instead, this manager leaves his engineer with thoughts about how to avoid that problem in the future. Always remember -- an outage is eventually fixed, but a damaged relationship continues forward.

I have found that difficult people are a function of both themselves and the environment they’re in, and you can’t necessarily say “be kind” without fixing the 2nd part.

The level of stress matters a lot. If a team is being run in a standard “everyday crisis” mode, they quickly reach the point where there is very little they can take. Every tiny mistake wears people down, reminding them of how much more there is to do, and turning them sour. Managers seem to panic and cut into their team’s time even more by starting to have long, daily meetings to “fix” things.

If you want “nice” people, you have to set them up for success. Reward completing the whole chain, not just hacking away (e.g. for software, not just coding but also testing, documentation, and seeking peer review). Keep meetings to a minimum. No overtime. When short-cuts were taken to meet hard deadlines, open up your schedule and scribble in the exact window after the deadline where you will stop everything and clean up the mess that the short-cut created. Give your people the best equipment that money can buy. And so on.

Seems like this story is being received pretty well here.

Though, in my opinion, it's kind of insulting to be asked something like "What did you learn?". The question isn't really necessary. You know that you screwed up.

That kind of dynamic between an engineer and a team lead is off-putting to me.

I think the proper way for a team lead to handle it is to instead work with the engineer to help find ways to eliminate the human error by implementing tooling or processes. The conversation should go something like this:

Lead: "I wonder how we can make sure none of us break XYZ widget again"

Engineer: "We can build out ABC and run that, also generally just test better before pushing to prod".

Lead: "Cool, do you want to go ahead and take care of that?"

This statement wraps it up:

“Great. It sounds like you get it. I know that you can do better.”

Giving folks a chance to communicate what they learned, and then encouraging them to "do better," is the best way to lead.

As long as it's not overdone. If every review ends in "you can do better" it just becomes the new "must apply themselves more in class".

So long as you're not having to "you can do better" for the same issue every time, I think it's fine. There's a lot of minutia to learn in software development; a lot of opportunity to screw up.

I don't want to ever work at a job where you can't get better.
