I like to do minor refactoring as part of any given new feature (even if unrelated). It helps take bites out of the larger pieces over time.
"Tolerating a bad employee versus firing and rehiring"
Conversely, this also gets into something I often see: companies sit and wait for the best possible candidate (everyone claims to hire the best...), so rather than hiring people and trying them out, they take a long time to hire someone, and, hiring being an inexact science... it doesn't work out. Maybe companies need to try folks out, even the person who doesn't have every bit of alphabet soup on their resume?
This hits the mark.
I don't think developers should even necessarily explain that this is what they are doing, because then they have to ask permission to do it.
I'm constantly thanking "myself from the past" when I go to implement some new feature and find that the surrounding code is clean and well organised and ready for that new feature to be easily implemented.
In my experience, it pays to be open with product managers and QA about what you are working on, how you are implementing it, what the risks and benefits are, etc. For example, it helps QA to focus their testing efforts on the areas that are most likely to be broken by a refactor. And having some understanding of how an app is built and where the problem points in the code lie helps product managers to develop a sense of how much effort a certain change might take, and reduces the chance that they will be surprised by some behavior in the app as the result of a change.
And if someone on the team wants to question me about the cost/benefits of a certain refactor and discuss alternatives, I think that is totally fair game. Same way that if I want to ask a question about the cost/benefits of a new feature and discuss alternatives, I feel empowered to do so. It's not about asking permission so much as it is about developing a shared understanding of the costs and benefits of the decisions that we make, and working together to optimize them.
On the other hand, I remember on some projects there was real (HUGE) pushback from the client about doing anything that wasn't implementing features exactly as they defined them and "are paying for".
As soon as you declare that you are doing something "optional", it gets political: people start talking about it, it escalates to the project manager, their boss's boss, etc., and the shitfight starts. Ugh.
The reason they're paying so much for it is that you didn't do the refactoring last time. Every time they refuse a refactor, all their future costs genuinely go up.
If they ask why the refactor is needed, why you didn't do it properly first time round: “You're constantly refining your business processes to satisfy ever-changing business needs; that's what I need to do to the code.”
The more experience you have with the reverse of that situation ("Dammit past koolba, what the hell were you thinking?!"), the more you value dedicating time to both doing things right and going back to progressively clean them up.
The real skill to learn is to properly isolate quick and dirty solutions so that they can be easily replaced down the road. The sausage factory does not need to be spotless on day one, just the entrance and exit doors.
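A minimal sketch of what "keeping the entrance and exit doors clean" can look like in practice: the quick-and-dirty logic hides behind a small interface, so only the interface needs to be spotless on day one. All names here (`PriceCalculator`, `QuickAndDirtyCalculator`, `checkout_total`) are hypothetical illustrations, not from the original comment.

```python
from typing import Protocol


class PriceCalculator(Protocol):
    """The clean 'door': callers only ever see this interface."""

    def price(self, sku: str, quantity: int) -> float: ...


class QuickAndDirtyCalculator:
    """Day-one hack: hard-coded prices, no discounts, no currency handling.
    Because it sits behind PriceCalculator, it can be swapped out later
    without touching any caller."""

    _PRICES = {"widget": 2.50, "gadget": 9.99}

    def price(self, sku: str, quantity: int) -> float:
        return self._PRICES.get(sku, 0.0) * quantity


def checkout_total(items: list[tuple[str, int]], calc: PriceCalculator) -> float:
    # Depends only on the interface; replacing the implementation later
    # changes exactly one construction site, not every call site.
    return sum(calc.price(sku, qty) for sku, qty in items)


total = checkout_total([("widget", 4), ("gadget", 1)], QuickAndDirtyCalculator())
# 4 * 2.50 + 1 * 9.99 = 19.99
```

The point is not the pricing logic itself but the seam: the sausage-making can be as messy as deadlines require, as long as the rest of the codebase only ever talks to the door.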
It's rare to fire new employees in their first few months, but it does happen. Similarly I have seen people join a company, and leave after their first week because a better job offer came in. It sucks when it happens... but overall I'd say the system works pretty well.
Yeah, in an at-will state you might as well consider the "trial" period to be in place no matter what. If things don't work out well they'll just toss your ass to the curb. Maybe if you're lucky like me your boss will have a soul and give you some amount of severance at least.
Two people were hired. Me and another dude.
The other dude was invited to look elsewhere (they let him stay and work while he looked for a job so that was very nice), and I got a series of raises after they clearly were comfortable with me. They then started looking for a replacement for the other dude.
Effectively I assumed (correctly) I was on a sort of "trial" anyway.
3) Tolerating a bad employee versus firing and rehiring. There is always a short-term reason not to fire someone who’s underperforming
Soooo you encouraged the board to fire you and hire someone who understands sales already, yes? And/or you understand that underperforming can lead to learning which can improve performing which you tolerate in yourself, so you tolerate in your employees, yes?
But in a startup situation where every new hire is going to make a huge impact on the team, and "money is running out" (as the article is titled), you may well need to be less forgiving.
Bad employees fail often, never try anything outside their comfort zone, and don't learn from their mistakes or take responsibility for anything. He didn't give an in-depth version of what he considered a bad employee to be, but it's fairly obviously implied by "not pulling their weight" to the degree that they eventually demotivate the rest of the team.
The study you link to confirms the correlation between the ability to delay gratification in the marshmallow test and later success in life. However, it also says that when taking other factors into account, in particular the wealth of the parents, the marshmallow test no longer adds a significant amount of information.
So all we know now is that we have a number of variables that correlate:
- succeeding at the marshmallow test
- having wealthy parents
- being successful in life
Which one of these causes what is open to interpretation. For example, growing up in a stable and reliable environment (which more affluent parents can more easily provide) might help develop long-term thinking, which helps both at the marshmallow test and generally in life.
In any case, the study confirmed that if you don't know anything else about a person, the marshmallow test provides significant information about that person's probable success later in life.
>Ultimately, the new study finds limited support for the idea that being able to delay gratification leads to better outcomes. Instead, it suggests that the capacity to hold out for a second marshmallow is shaped in large part by a child’s social and economic background—and, in turn, that that background, not the ability to delay gratification, is what’s behind kids’ long-term success.