This is a popular story in dev/product culture but after a little more Googling, apparently it's apocryphal: https://www.psychologytoday.com/blog/games-primates-play/201...
You can see people looking at it, wondering whether it's worth the humiliation of finding it's broken and having to go to the back of the queue, or the joy of finding it is working and getting one over everyone else.
Often nobody cracks.
My reasoning is a little different (and a little more charitable, I suppose, in its generalization about human nature): I don't want to step up to the empty register because I don't want to look like a sociopathic jerk. I also assume that's why most people haven't done so.
At the same time, I know as soon as someone does step up and do it, people will grumble but the single ad hoc queue will redistribute itself into 3 more balanced queues. I'm usually waiting for someone else to be that sociopathic jerk.
What I wish is that the manager of the store had given a little more thought to this issue in the first place.
It only seems like it would be a problem at businesses with bad staff.
I've also noticed the acquired habit in more experienced retail staff, when confronted with a queue of indeterminate order, of saying, "I can help the next customer in line" and leaving the implementation of the next method as an exercise for the customers themselves.
FWIW, I find the best arrangement is a Fry's style mega-queue with boundaries clearly defined by racks of candy, magazines, or other impulse-purchase crap that feeds into multiple registers.
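For the curious, the advantage of the single mega-queue is easy to see in a toy simulation. This is a rough sketch, not a real queueing model: the `simulate` helper, the arrival/service rates, and the round-robin stand-in for separate lines are all my own assumptions for illustration.

```python
import random

def simulate(arrivals, services, n_servers, pooled=True):
    """Average wait from arrival until service starts.

    pooled=True: one shared FIFO line feeding every register.
    pooled=False: customers are dealt round-robin into per-register lines.
    """
    free_at = [0.0] * n_servers  # when each register next becomes free
    waits = []
    for j, (t, s) in enumerate(zip(arrivals, services)):
        if pooled:
            i = min(range(n_servers), key=free_at.__getitem__)
        else:
            i = j % n_servers
        start = max(t, free_at[i])
        waits.append(start - t)
        free_at[i] = start + s
    return sum(waits) / len(waits)

random.seed(1)
t, arrivals = 0.0, []
for _ in range(2000):
    t += random.expovariate(2.5)          # ~2.5 arrivals per minute
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in arrivals]  # ~1 minute each

single = simulate(arrivals, services, 3, pooled=True)
split = simulate(arrivals, services, 3, pooled=False)
# at this load, the shared line's average wait comes out well below split's
```

Real customers join the shortest line rather than being dealt round-robin, which narrows the gap but doesn't close it: the shared line still protects you from getting stuck behind one slow transaction.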
Teaching customers how to use them, on the other hand.....
And for some situations, like a movie theater box office, a single long line actually drives people away (on non-blockbuster opening nights) because it looks like a bigger crowd than two small lines, even if it means they'd get through it faster. At least, that was before the age when everyone just buys their ticket on Fandango.
The less obvious reason is that after a certain number of back-and-forths, you can interrupt the other person saying "aft-", cut them in line sideways so you both sort of swoop through the doorway, which then leads to a warp zone.
However, I noticed that the line at this register was moving rather slowly, and not wanting to be late for my film, I decided to take a risk and walked up to the second register. Lo and behold, it was actually open, and I had my popcorn in about a minute. If I hadn't been about to be late for my movie, I don't think I would have gone to the second register.
These places have X cash registers, yet only one of them is in use even though there is a massive queue.
And if you wander up to one of the unused ones you get yelled at...
It sounds like a fable. I know that primates are smarter than other animals but I doubt that without some strong negative association they would be able to keep up that convention when there's an obvious prize right there, especially if they got hungry.
BTW if you haven't seen "King of Kong" you owe it to yourself to watch, regardless of whether or not you're a video game person. It's hilarious.
This shouldn't affect its value as a story, though.
My life got much, much easier once I learned to stop straining so hard to fix things that are bigger than me. If you don't like your managers, or the culture, or the business, find another place to work, it's that simple. If you can't do that, you're simply going to have to learn how to compromise.
I've been in both types of cultures and, by far, the healthier culture was the one in which "making the organization better" was treated as everyone's responsibility. I've seen very junior engineers drive quite significant culture changes, simply by leading by example. They did it, not by whining or complaining, but by taking baby steps: trying out small experiments with their immediate team and then broadcasting their success to incrementally greater circles within the company until it became the new normal.
If you seriously believe that all (or most) companies disallow anyone not in leadership to improve things, I'd consider getting out of your current situation and seeing things again with clear eyes.
I've been around a bunch of organizations, including the US Air Force, and I can confidently tell you that heavily hierarchical organizational culture is the norm; individual contributors who want to have an effect beyond their job scope do so at the risk of offending other stakeholders. They may tolerate you and even give some limited help, but they're not going to shout your name from the rooftops just because you want to be a boy scout.
Not saying you can't buck the system and get away with it; one of my favorite books is about one of my heroes, John Boyd. But if you want to be a hero, you need to go into it with a clear-eyed assessment of what you're up against so you can tailor your objectives appropriately.
The military's size, scope, and mission are unique. You need top-down control, because otherwise most folks won't decide to run up a hill into a machine gun on their own. That mission focus (do stuff with force) cascades into the supporting services.
And the Air Force is not the DoD. While, technically, the individual branches of the US military fall under the DoD, they all have very different organizational cultures, and the DoD has staff and culture of its own.
I won't tell you what falls within the scope of this discussion or not, but I can tell you why they don't add a lot of value to it either.
It's already quite hard to discuss somewhat objectively whether a business is dysfunctional, in part or not at all, and what the reasons for that are.
It's almost entirely useless to attempt to have that discussion about government. Because politics. Because people root for their home team. Because people are inclined to dismiss the other team's ideas even when they're really good. If I told you the US Government is dysfunctional, someone would counter that I'm not American and should look at my own government. If we moved past that and got into the details of why things are not working properly, invariably Americans would start quoting bits of the constitution, the bill of rights, and the founding fathers at each other (and from there on it just becomes religion to me).
Discussing whether a military organisation is functioning properly is even more difficult. In addition to the above problems, there's also hazing and indoctrination, which are incredibly strong psychological forces (without those, as I said, people won't run into their deaths without question). And it pretty much divides the crowd into two camps: those "outside" who have no idea how it really works, and those who have been "inside" and are unable to separate the indoctrination from an objective judgement about how well the organisation functions and how that came to be.
Certainly, discussing businesses has similar problems, but this just amps up the personal-emotion factor to eleven. That's also one of the reasons why the businesses were kept anonymous in the featured article.
There may be awesome companies out there where anyone is empowered to just go fix some practice that's not productive, no matter where they are in the org chart, but those companies are few and far between. I'd love to see one.
One thing I would consider doing, if I ever start my own company or rise high enough up in someone else's company to implement this, is to give every single employee some sort of discretionary budget to spend on things that make the workplace better. The more experienced and trusted you are, the bigger of a budget you get. I think this would go a long way in fighting the learned helplessness dynamic.
20% is not specifically for "improving the organisation" of course. It's most famous on the outside for the new products it led to, like Gmail. But actually most 20% projects were small internal things intended to smooth the rough edges off a particular tool or process, or to improve the company in some way. If the particular bee in your bonnet was a type of bug that cropped up frequently in other people's software, making a linter to spot it and driving adoption through the organisation would be a good 20% project, for instance.
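As a concrete illustration of that kind of 20% linter, here is a minimal sketch in Python. The bug class (mutable default arguments) and the `find_mutable_defaults` helper are my own example, not anything Google-specific:

```python
import ast

def find_mutable_defaults(source):
    """Report (line, name) for every function that uses a mutable
    default argument -- a classic Python bug class a small internal
    linter might target."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defaults = list(node.args.defaults)
            defaults += [d for d in node.args.kw_defaults if d is not None]
            for default in defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append((node.lineno, node.name))
    return findings

# This function quietly shares one list across all calls:
snippet = "def add(item, bucket=[]):\n    bucket.append(item)\n    return bucket\n"
```

Here `find_mutable_defaults(snippet)` reports line 1, function `add`. Wiring a check like this into CI and then evangelizing it team by team is the "driving adoption through the organisation" part, which is usually the harder half of the project.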
However, this is not to say that in regular administration there isn't still a lot of de facto trust in the hierarchy. I think unbounded hierarchy is generally one of those memes that people grow up taking for granted as something that just works with any layer count. In some ways, it's a self-reinforcing thing - if you climbed ranks for 10 years, you don't want to see the system change and your "sunk cost" go away.
[edit: clarified sentence]
That book made it into four different reading lists from various branches of the military (USMC, USAF, Army, and a chair of the House Armed Svcs Committee). That's encouraging in that at least they're aware that his ideas/approaches were productive.
With that in mind, "lean" as a principle is organized around the idea that it is better to have people request functionality (pull) than for you to imagine what people want and then try to sell it (push). Many "lean" organizations happen to be very good at continuous improvement, but you can be "lean" without any capability for improvement.
"Kaizen" is the Japanese word for continuous improvement. It was introduced to western eyes primarily from the "Toyota Way", which is a lean manufacturing system that concentrates on continuous improvement. It is important to understand, though, that while workers have a responsibility for identifying problems, it is the managers who are responsible for working with the workers to implement improvements in the Toyota way. So there are clear demarcations of responsibility, unlike many of the IT processes that borrow the word "kaizen".
Even "continuous improvement" can be a loaded term with respect to responsibilities. The first time I ran across the term was dealing with CMM (the Capability Maturity Model). Once you are up to level 4 or 5 you are using metrics to improve your process in order to obtain continuous improvement. In every organization I've seen that attempted to achieve CMM level 4 or above, the primary driver was management (and often workers were not even consulted about improvements or why they were necessary -- they were only told to do things a certain way). I'm not saying it was successful, but it was very common ;-)
I guess my point is that if you are lucky enough to be working on a team that does continuous improvement well, be aware that what you call "Lean" or "Kaizen" is unlikely to be what other people call it. You cannot communicate what it is you want to communicate using those words alone. If you have been reading literature using these words and believe that if you simply follow a "Lean" system and embrace "Kaizen" you will transform your organization into one that does continuous improvement well, I'm sorry to burst your bubble. If only it were that easy. Unfortunately, all the difficult problems are people problems, and those problems almost always require unique solutions. It can be quite tricky no matter what you are doing.
The system of an organization comprises the relationships between the systems within it, how those systems change and vary, the methods of knowledge learning and transfer, and the psychology and behavior of the people within it. And importantly, it's how all those pieces interact with each other that makes the most difference.
It's a system. There are some common elements. Above that base level, there is much truth in what you say, but the most important thing you can learn is a mental model for organizations in the most general way possible.
1) Heat up milk
2) Find something with the consistency of yogurt to put into the milk
Having a living thing there that knows what the final product should be and can stand as an example is largely how we transfer culture.
Regarding people problems, my one note is that we are all more alike than we seem, and you eventually start to see the same classes of problems everywhere. This is why an understanding of psychology and behavior is so critical.
So instead of asking "are you Kaizen?" you would ask about practices performed that imply Kaizen, including the spirit of the law and not the letter.
I've had some success, but they're always very deep interviews, which can be either drastically good or quite startling to the other party depending on which direction it goes. So your note to do it 'tactfully' is very prudent.
* have your antennae out for anything that seems weird (weak signals, indeed) in your conversations about the work
* check references (as an interviewee). That's a fancy way of saying, look to see if anyone you know knows someone who works there, who can give you the straight scoop.
Let's say Leadership decides to incur technical debt to allow a faster entry into the market, but the Development Team Lead decides this is unacceptable and does not take on the technical debt, instead doing things 'The Right Way'. This leads to competitors entering the market first, snagging customers and mindshare, which eventually leads to the company going under.
Was the Development Team Lead 'right'? Even if we say he was, is it his decision to make? Should he intentionally sabotage or ignore direction from his superiors?
Most true WTF processes tend to come from decisions made that are out of the control of the parties that they directly impact.
That's a bit of a loaded question.
You assume there is a right and a wrong that can be determined before knowing the outcome (a.k.a. deontology).
It's easy to see this is not at all clear-cut if you consider the other possible outcomes. What if it turns out that the dev team's decision was what saved the company (and maybe their competitors are now buckling under tech debt)?
Would that have made them right? Maybe yes. But in others' eyes maybe not, because they still disobeyed their "superiors". But maybe the company would have fallen if they had listened.
Can you categorically say, beforehand, that one decision is right and the other is wrong?
You could, but others might disagree (and rightfully so, IMHO).
Say you claim it's always right to do what management tells you, and wrong to decide to go against that. And you can argue this because a large business needs that kind of structural dependability, otherwise it would fall apart in chaos; and specialisation is a good thing, so management specialises in the long-term vision that a dev team leader is not as knowledgeable about. Sounds like a pretty tight justification, no?
Well, except when you're the dev team and your livelihood (maybe your family) depends on this job, and following management's plan will crush the company, you decide to disobey management, and it turns out that indeed the tech debt would have crushed the company instead of the entry-to-market delay. Then the dev team gets to claim they were right. Even if management says "you saved the company this time, but in general you ought to always listen to us even if you think it's a bad idea", protecting their livelihood (obviously) ranks a lot higher on the dev team's ladder of oughts than following orders to facilitate a smooth, tightly-run company.
Typically people resolve the issues that directly cause them pain pretty quickly if it is in their control. Further "control" requires understanding how the levers a party influences impact the obstacles that cause the pain.
WTF-level issues typically occur when the above conditions aren't met. Usually that means the 'pain' doesn't impact the party that has the control to end it. Next most common is that the party with the control doesn't understand how its levers affect its obstacles.
Just fixing what they can is not going to resolve WTF-level issues...
If management is planning to do things quickly and badly, the dev lead should know that and plan accordingly, trying to do it in the best possible way.
If he's kept in the dark, of course he might blunder. But that's not his fault, assuming a company that encourages initiative and fix-it-yourself attitudes.
Best leadership leads by empowering people, not by enslaving them.
To be honest, how often does this actually happen? I'd doubt it happens often enough to worry about it. Most startups aren't doing anything that special, and so this whole, "We have to move at twice the speed of light or we're all dead!" attitude doesn't belong anywhere.
Burn rate/runway is a critical survival factor for startups.
The second thing is people ignoring opportunities to make engineering changes that would be beneficial for both engineering and business reasons. They do this because "it's not how it's done" or "it's impossible" or "we don't have time for that". And this is a problem that affects both engineers and leadership. Some changes can't be done because leadership doesn't allocate the resources, some can't be done because people on the ground don't give a shit.
Since there are problems that can't be solved without leadership buy-in, you're definitely right that you can't fix these problems without having a role where you have a real ability to change the culture, and that you should be realistic about that.
"What can I do, I'm just a dev, the team lead should figure it out" turns into:
"what can I do, I'm just team lead, the department head should figure it out" turns into:
"what can I do, I'm just the department head, the CTO/CEO should figure it out" which finally ends with:
"I'm just the CEO, these decisions should be made by the department heads and team leads because they have a better understanding of the problem."
Responsibility gets passed from the bottom to the top back down to the bottom, where the process repeats itself. Taken to its logical conclusion, everyone has a perfect rationalization for why all decisions are either too general or too specific for them to make. The net result is that decisions still get made, but no one questions them or defends them or even knows where they came from, because no one perceives them as a choice they had any influence over.
Ultimately what "they're paying you" for is not so specific. Generally, leadership doesn't have the skills to know there is a problem. Even in the cases where the leadership is technically exceptional, there is no way they can consider the myriad technical decisions and how they affect the future of the business.
Maybe leadership still doesn't care. If technical issues stunt the growth of the company or cause it to go under, the response could be "Meh, we had a good run." In that case, I, as an individual contributor, want to know that's the attitude up front. This lets me know to move on before everything hits the fan instead of be one of thousands laid off at the same time.
I think you'd find in most organizations, roles are well-defined, but not articulated to those assuming those roles, because it helps morale to let employees define themselves, and also keeps them from getting too complacent. A manager's job is not to lead but to manage, i.e. get the greatest possible output from the employee.
If you want to know what your defined role is, a simple way to do so is to simply stop doing your job and seeing what people complain about first.
You're right in that leadership can not know the intricacies of your domain, but you're wrong in assuming there's something special about that state of affairs as it concerns technology. Leadership does not know the intricacies of, say, human resources, that's why they hire a specialist. Even in areas where the leadership does know how to do the grunt worker's job, their attention isn't focused on that area, so they're not going to know about problems until someone, typically a manager, brings it up.
A company's leadership is typically engaged outward, towards the broader market. Our CEO handles big deals with other service providers and retail outlets. If you think about medieval Europe, the petri dish that modern organizational methods evolved in, it makes sense. Someone needs to keep tabs on what the neighboring states are doing, and that someone needs all the resources of the nation at his command so he can deal with regional situations.
Individual contributors, when they try to "rock the boat", are perceived as unnecessarily taking time away from the much more important job of keeping tabs on the broader world / market. It's not directly making the company more competitive, so it's just noise, is the attitude.
The attitude / culture I described above is the norm, everything else is an exception. You can take this paradigm, and use it carve out a little fiefdom in any traditional hierarchical organization. Essentially, you figure out your defined role, do only the minimum required to not get fired, and devote the rest of your time to company politics. As you come to understand the organization and its needs, you'll be able to position yourself as someone who can meet those needs. If a company needs it, then that means it can't get it with the current resources it has, otherwise it would have it already. So you will need a budget and staff.
Whenever I've done something like that in the past, the complaint is always some variant of "I noticed you goofing off".
Historically this was a source of incredible frustration because it basically acknowledged I was undertasked; the complaint would never be some variant of "Why isn't X done yet?"
I suppose that means I had no defined role in the company?
OK, there's a political status-quo you have to learn how to internalize. If you work at the front desk, you can fuck around on the computer, but you can't take a book out and read. One looks like work, the other looks like fucking around. You can't make your organization look bad. The perceptions are more important than the substance. If you can't get the perceptions right then you were never going to make it in corporate America, and should probably stick to contracting.
So the way to actually do this is to look like you're working, but actually be producing nothing. When they complain, you can say you were busy on X, where X is some obviously unimportant, meaningless detail. That forces your boss to clarify what he expects you to be working on.
That your boss never asks you about specific tasks means that your defined role is to be a repository of knowledge and not a cog in a machine. This is a good thing, knowledge workers can bullshit their way to perks that the rank and file could only dream of. My current job is just such a thing.
The key to this is understanding that nobody actually knows what you do. Management can only guilt you into being productive, they have no way of actually knowing if you are being productive or not. Perception is reality and you control the face you put out to the organization.
For one, you don't become a leader by being paid to be a leader. That's how you end up as an incompetent manager. You become a leader by taking responsibility and doing what you can to ensure those responsibilities are taken care of. One of those responsibilities is to make your organization better.
And you shouldn't think you're getting paid to do a job. I'm going to do a job whether I get paid by my employer or not. What I'm getting paid for is to take my employers priorities and goals into consideration, and the strength of that consideration is proportional to the pay. If you don't pay me very much, my priorities will be considered first, and one of my priorities from a workplace is to be comfortable and happy within a good organization -- I will work to help and support my friends and coworkers. So that's what I'll focus on. If you want me to 'just do a job' or 'maximize company profits', you need to pay me extra, because I won't care about those things for cheap.
You must be miserable at work if you think you're getting paid to just do labor. That's a waste of an education, assuming you have one.
I did not mean these things the way you seem to think I meant them.
> What I'm getting paid for is to take my employers priorities and goals into consideration, and the strength of that consideration is proportional to the pay. If you don't pay me very much, my priorities will be considered first, and one of my priorities from a workplace is to be comfortable and happy within a good organization -- I will work to help and support my friends and coworkers.
I use a similar rubric to decide how to prioritize my time. My defined role comes first over everything else. Because I am accomplishing my role extremely efficiently my company sees me as a very good employee.
What I do with the rest of my time I consider to be my sole discretion. If I have an idea for something I'd like to build for the company, I'll go over it with my manager to gauge interest. Sometimes he's interested, sometimes he's not. I do not sweat lack of interest, I am an idea machine, I can come up with new ones.
I look at this surplus time as the primary benefit I receive from getting better over time at my job. I spend maybe a couple hours per week on defined roles, the surplus time I mostly re-invest back into my own capabilities. This helps both the company and me, but mostly me. The company simply isn't set up to be able to utilize my talents effectively.
However, I do want to note that in general these things do have real, concrete negative impacts on the bottom line of the organization.
Oh absolutely. But your duty to that organization is to raise these issues to the person best-equipped to see the bigger picture, and to do your best to convince him it's a real problem. Once you've done that, your work is done, you have just done more to help your organization than 100 blog posts would have accomplished, and more than 90% of the other employees would have ever done.
You can comfortably use this approach once a month and be vastly more effective than everybody else at your company at bringing about change. Simply raise an issue, have a conversation about it, then drop it if you get no support.
To expand a little, I don't think the point of having a positive effect on the WTF parts of an organization has anything to do with my effect relative to other employees.
If I see a WTF security or ops practice, I don't sleep better at night by telling myself, "Welp, I noticed it and said something to someone, and that's a lot more than most people do."
Maybe I think of my job too broadly, but my job as a developer is, at least in part, to protect the business, to find the WTFs and get them sorted out.
The idea that IT is just there to solve the "business problems" and they should shut up about anything that isn't directly related to making money is absurd to begin with. But even if I'm being completely naive about that, it still implies a really shallow understanding of "business problem."
Security is a business problem. Stupid ops habits that result in downtime are a business problem. Groupthink that results in acceptance of worst practices is a business problem. All the whatthefuckery the article talks about boils down to business problems. Many of these problems are such that the people primarily concerned with running the business are not in a position to recognize as problems.
It absolutely is my job and every developer's job to find these things and take care of them.
If I ever found my self in a situation where I saw some WTFs going on and I went to my boss or a coworker and got no explanation other than 'This is how we've always done it.' I would be concerned. I would go back to my desk and finish what I was supposed to be working on. Then I would go home that evening and write out the clearest and most concise explanation of why this is a serious business problem with citations and take it back to the coworker or boss.
In my experience, the WTFs don't usually come from "this is how we've always done it." They come from some point in the past when someone really needed it to be that way, usually temporarily.
Here's an example from several jobs ago, one of my first in the industry, actually:
Why does this set of boxes still have password auth enabled? Why isn't it locked down to ssh key only? Why is port 22 accessible outside of the VPN?
Oh, it's because our CEO likes to get his hands dirty with code every once in a while, but he didn't want to mess with pub/private keys, so we just left it open because these boxes weren't that important. They were 'just' dev machines pointed at test DBs with no real data in them.
But it got written into a setup script or wiki somewhere, and when those dev boxes got repurposed for production, people followed the scripts or rules that were specific to those machines. So now you have prod machines running password auth with SSH exposed to the public. Ones that are pointed at live DB servers, with creds for them.
That's a serious WTF.
I'm not even close to a security expert of any kind. I wouldn't claim to be in a million years. I work with databases and Python, almost exclusively. Though I dabble in DevOps when I need/want to. Even I know that the above is a serious WTF.
What did it take to get that fixed after my coworkers said, "Yeah, that's how it is."?
Going straight to my boss, who also said, "Yeah, that's how it is."
Then spending maybe an hour of my time writing up how much of a business problem this is and going back to my boss, who said, "Yeah, I know it's a problem, but I don't even know why it's like that. It just is."
So then I ask him who he can think of who might know why that is. Oh, quelle surprise, his boss might know.
And indeed, his boss did know. But he had long since stopped trying to get the point across to the CEO who liked to dabble, so he just went with it. Here comes my very brief paper explaining why this is--you guessed it--a business problem. The CEO responded within a couple of hours requesting that someone come set up SSH keys on his computer, shut down password auth, and close port 22 on all prod machines immediately.
That's a lot more work than mentioning it to someone. But I think that it's my job to do that, even though I've never worked in DevOps or Security teams.
My work was absolutely not done when I asked a coworker about it once, and then asked my boss.
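As an aside, this class of config drift is also cheap to check for mechanically. Here's a minimal sketch in Python; the `audit_sshd` helper is hypothetical, checks only the two problems from this story, and assumes the usual `sshd_config` keyword-value format:

```python
def audit_sshd(config_text):
    """Flag the two WTFs from the story in sshd_config-style text:
    password auth still enabled, and sshd on the default port 22."""
    settings = {}
    for raw in config_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    findings = []
    if settings.get("passwordauthentication", "yes") == "yes":
        findings.append("PasswordAuthentication enabled; switch to keys only")
    if settings.get("port", "22") == "22":
        findings.append("sshd on default port 22; restrict to the VPN")
    return findings
```

Something like this run nightly against every prod box's config would have caught the repurposed dev machines immediately, instead of waiting for someone to ask "why is it like that?"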
Edited to add:
Astute readers will note that there was a lot more WTF-ery going on than just auth and port exposure on prod. Like, for example, that the non-technical CEO who liked to dabble was doing so on a production box because he was not made aware that the boxes he was logging into to play with had been repurposed.
There was a lot of cascade in tracking down that one particular WTF, and many other WTFs were solved because of it.
At the time for that company, my job description was C# middleware for a web app. I maintain that it was absolutely part of my job to pursue that WTF to its endpoint, and that doing anything less would have been completely irresponsible.
Not true. Sometimes, things that have concrete negative impacts can BENEFIT organizations.
U.S. prisons are just one of many such examples of this. A concrete positive impact would be lower incarceration and re-offending rates. American prisons have the exact opposite effect. This leads to lots of "customers" in the form of prisoners and victims seeking a more abstract "sense of justice".
The American prison system is probably not the only industry with that kind of business model. There are, no doubt, many contexts where a "concrete negative impact" can be very good for the bottom line. It's also an important lesson about making money in general: it's not about creating value; it's about making others pay for your actions regardless of whether or not said actions are "positive".
That's a very good point too. If you wander, unaware, into a quest to change something with negative externalities on workers or the public at large but with a positive effect on the business, you'll find yourself suddenly in a minefield of opposition with a target painted on your back and no idea who you've just made enemies of. Tread very carefully.
Is there no sense of ethics in our profession?
Though when I got out of that sort of thing, I recall there being (relatively) new rules requiring a different entity to do the maintenance from the ones that created the software... but my understanding of FAR is basically non-existent, so I don't know.
There are two basic outcomes here. Either you become a leader, and gradually, often quite begrudgingly at first, people start following when they see how well it works. (Unit tests can be a real eye-opener sometimes, especially when you start having reasonable cause to show that the problem is unlikely to be in your code, being either in the other code or in the specifications.) Or you get quashed from above. Contrary to the cynical answer, the latter is not inevitable, but it is certainly a possible outcome. At that point, yeah, it's just time to say you got some valuable experience and move on.
What you MUST NOT do is simply whine... from the point of view of those above, anyhow. Even if it's perfectly sensible complaints from your point of view that are all but objectively correct, it's unlikely to be heard as anything but whining. You need to lead by example. You also very much need to do so with some idea of cost/benefits analysis; you can't go from no discipline at all to a perfectly disciplined project in one step, so consider your steps carefully. Keep them small; stay away from "big rewrites". (Probably the biggest failure case I've seen is someone who thinks that code X is in the wrong paradigm and sets out to entirely rewrite it to make it "better". YMMV but in my personal experience this is usually someone who thinks the code needs to be OO, or a different kind of OO. This is guaranteed failure. Even at surprisingly small scales! You destroy everybody else's knowledge of the code.)
And I'll say it again to underline it... the cynical answer that this is impossible is wrong. It certainly won't be easy, but I can guarantee great growth as a developer if you follow this path, both technically and in dealing with people. Even if you have to change jobs.
As for the bottom line point, arguably you have a duty as a professional developer to be doing this stuff I describe, precisely because it does mean resources are being continuously and avoidably drained on issues that shouldn't exist. If you find yourself unable to discharge it, you should find somewhere you can.
: I like to say that it lets me develop with monotonic forward progress. I even use unit tests during prototype work quite often, after I got sick of the way during prototyping I couldn't count on anything to work, ever, due to changes, and I realized that itself was actually inhibiting my prototyping ability. Sure, sometimes I dump entire subsystems but even then it was usually because the unit tests showed me a fundamental flaw far earlier than the rest of my prototyping would have, and I do so with far more information about the local landscape than I would otherwise have had.
I wanted to tack on that while the "change jobs for de facto promotion" technique is solid and time tested, doing the sort of stuff I mentioned can help you climb faster. If you're planning on that career path, that's great skill development. You may still advance if you just clock time in before moving on, but you'll find you don't advance as far and that you seem to be getting the same job over and over. (At least, statistically.)
Best of luck with the visa issues. If nothing else, keep an eye on the long term. Development today may still pay off later.
(I mean, don't go crazy. I'm not big into unpaid work. But not all "eight hours" are created equal.)
I admit that by straining to fix these hard things, I'm often going outside of the formal role I hold at the company. I'm fine with that. Others may not be at times. For me, the straining is required. We have a cultural goal to "think like an owner" and that is how I typically justify my behavior.
If this dichotomy exists -- if doing your job does not by definition make the organization better -- then that's a dysfunctional organization.
Managers are also acclimated to the business's practices and may find others absurd. The essay is suggesting that they would be better managers were they to listen to the WTFs from newcomers.
You won't get fired, but what might happen is that you get a reputation for being "that guy". The "guy who's always complaining about unit tests". Or, "the guy who emphasizes process over 'agility'". What then happens is that the moment you speak up, management tunes out. They know what you're going to say. They know what you're asking for. And they've already told you no. At that point, you're basically screwed, organizationally. You have a reputation as a troublemaker, which will make transfers difficult. You're not going to get the desirable assignments on the product that you're currently working on. So even if you don't get fired, your life is often made so miserable that you leave.
If you're upset that there's lousy test coverage, and you respond by complaining, that's not useful. If you respond by making the test suite run faster; creating better mocks; writing docs; making the test results more visible; teaching others how to write (better) tests... that's a different story.
The trouble comes when you think that pointing to a problem is, in and of itself, valuable. We're all surrounded by problems; pointing out one that we likely know about probably isn't useful.
> If you respond by making the test suite run faster; creating better mocks; writing docs; making the test results more visible; teaching others how to write (better) tests... that's a different story.
Yes, in theory, you get to say, "I told you so," when your peers' features are found to be bug-ridden pieces of crap that have to be reworked two or three times before they can be deployed. In practice, a boss whom you can say, "I told you so," to is a boss who'd have listened to you in the first place, making the whole exercise moot.
Two 9s of availability? Half the customers I've had would be ecstatic to have even ONE 9 of availability. And those guys hardly ever ship any code, given how encumbered developers typically are in those places, releasing maybe once every 6 months to a year. In fact, this is basically my typical experience with most enterprise customers I've worked with as a consultant: they're unable to execute on almost anything materially important, and customers put up with them because nobody else is in that niche enterprise market, with lack of choice and market consolidation keeping these companies in business (healthcare.gov is just a visible example - plenty of projects are even worse, with perhaps even larger budgets and zero media attention).
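For calibration, here's roughly what "N nines" buys you in downtime per year (a quick back-of-the-envelope sketch, nothing company-specific):

```python
# Rough annual downtime budget for a given number of "nines" of availability.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours(nines: int) -> float:
    """Hours of allowed downtime per year at N nines of availability."""
    availability = 1 - 10 ** -nines  # e.g. 2 nines -> 0.99
    return HOURS_PER_YEAR * (1 - availability)

for n in range(1, 5):
    print(f"{n} nine(s): {downtime_hours(n):8.2f} hours/year")
# 2 nines is ~87.6 hours (about 3.6 days) of downtime a year;
# 1 nine is ~876 hours, i.e. more than a month.
```

So "one 9" really does mean the service can be dark for over a month a year and still hit its number.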
I've worked in places where some teams ship with this three or six month frequency. They consider it completely normal and find ideas like continuous delivery or even weekly deployments not just abnormal but risky and irresponsible! This is the very point the OP is trying to make: the _normalization of deviance_.
If a company is hell-bent on favoring development of new features over stability / security, that's something that can be fixed by leadership - I've worked with plenty of companies that turned themselves around, with wise leaders who know when it's time to spend the resources on spring cleaning while keeping the employees who are excited by feature development happy.
"There’s the company with a reputation for having great engineering practices that had 2 9s of reliability last time I checked, for reasons that are entirely predictable from their engineering practices. This is the second thing in a row that’s basically anonymous because multiple companies find it to be normal. Multiple companies find practices that lead to 2 9s of reliability to be completely and totally normal."
There is a company Dan Luu knows about.
This company has a reputation for great engineering practices.
This company had 2 9s of reliability when Dan last checked.
The reason it has 2 9s of reliability is a predictable result of its engineering practices.
Although this example is about a specific company, you can't identify the company from the description.
You can't identify the company from the description because it is a description that applies to many companies.
You also can't identify the example from the previous paragraph [of Dan Luu's post] because that paragraph's description also applies to many companies.
Multiple companies have engineering practices that cause such reliability problems and find these engineering practices to be completely and totally normal.
And I could easily double it (as in, halve the unavailable time) if any OK ISP became available at my place.
I didn't say that this is a normative viewpoint in engineering or whether I personally agree with it. As you can see from other commenters, many do hold this view. A sometimes opposing philosophy, however, is “release early, release often” which many open source projects adhere to.
1) Maybe yes. If the practices are really bad, it could be surprising that they're widespread and that people don't see a problem.
2) In general, when someone is pointing out something bad, saying "you're surprised?!" is counter-productive. You don't have to be surprised to call something out (I'm not surprised anymore how much our government spies on us, but it seems bad).
"As far as I can tell, what happens at these companies is that they started by concentrating almost totally on product growth. That’s completely and totally reasonable, because companies are worth approximately zero when they’re founded; they don’t bother with things that protect them from losses, like good ops practices or actually having security, because there’s nothing to lose.
The result is a culture where people are hyper-focused on growth and ignore risk. That culture tends to stick even after company has grown to be worth well over a billion dollars, and the companies have something to lose. Anyone who comes into one of these companies from Google, Amazon, or another place with solid ops practices is shocked. Often, they try to fix things, and then leave when they can’t make a dent."
It ended up being three weeks before we agreed that I'd successfully handed off everything.
No bad feelings either way -- I'd joined the company because they had said that they wanted to 'grow up,' but that was a feeling percolating up from below. The lower levels of the company wanted to grow up and stop firefighting all the time. The top levels of the company would fight you every last way.
The longer I live, the more I realize that everything is a market, and incentives control it all. The reason you follow company policy most of the time? You're incentivized to follow the rules so you get the raise, or at least don't get fired. When there are competing incentives for different responses to the same subject, that's when you need to take extra care to realign the incentives. Trying to institute new behavior? You have to fight momentum, familiarity, and sometimes easiness. That often requires more than a few dictates.
This is why it's important to know how to think in systems, and about psychology, statistics, variation, knowledge and everything else that influences systems—if you want to work in one.
There's more to most systems than just incentives. They are a small part of what goes on.
This was the only book worth reading when I was researching metrics for our team at work.
TL;DR: Don't use performance metrics for human beings. You almost certainly won't get what you want, and you'll probably get nasty side effects instead.
Donella Meadows is one of the most articulate writers and thinkers on systems. She is the easiest to learn the basics from: http://www.amazon.com/gp/product/1603580557
Same person, shorter, free, and condensed format: http://www.donellameadows.org/systems-thinking-resources/
Same author again, the final chapter from the book above: http://www.donellameadows.org/archives/dancing-with-systems/
And if you only get one book on how to apply it to business, it's this one: http://www.amazon.com/Leaders-Handbook-Making-Things-Getting...
A more recent applied-systems-thinking manual for organizations that I've found hits home with more traditional managers (very useful): http://www.amazon.com/The-High-Velocity-Edge-Operational-Com...
Another good one that's a bit long-winded, but another set of applied examples: http://www.amazon.com/The-Fifth-Discipline-Practice-Organiza...
Senge, in the foreword of that book, gives almost all credit to W. Edwards Deming, who is the originator of many of the ideas of organizational systems thinking and how it integrates with management. So, if you want to go deeper, Deming's book "Out of the Crisis" is a good tome.
All the things that he writes about are normal - they happen. People (myself included) with an engineering background are surprised when things don't "make sense" or people don't do things the "right way." The trick is to get to the point where these things are not surprising, where you see them as part of the systems you are trying to understand and consequences of forces that aren't mysterious, they are just part of human social dynamics. From that vantage point you can get a better sense of what you can change to influence outcomes and whether you can or can't in a particular context.
It's a word in American English, just not super common.
Well, I'll say this- the "@flaky" thing is pretty mind-blowing. In my own company I have noticed many engineers have a disturbing level of comfort with deciding something is a "mystery". There are no mysteries in what we do. The test fails because something is fucked up. Flappy tests are annoying, but the right thing to do is to address the situation.
On the other hand I've seen something like @flaky used in the Ruby world to run Capybara tests with a lot of Ajax, where you get failures because of timing issues deep down inside Selenium. It's not a problem with production or your app, but your testing tools.
In the second case I still am not totally comfortable, since it makes it easier to overlook flaky tests that really do need attention, but I can understand it there.
But, and this is crucial-- notice how you understand why the tests were flappy. As opposed to just grumbling like "oh that dumb old thing again. I wish it would shut up, the thing totally works."
There are plenty of times I've seen people think of things as "well, it just does X" because of a lack of depth of knowledge, but you also can't necessarily go down every rabbit hole to the bottom, unfortunately.
Not ever having engineering time to investigate why it's unreliable is a (mostly) unrelated problem.
Can someone describe a real life production scenario in which this flaky behavior is desirable? That is, preferable to these other generally accepted practices:
- flag the test and mark the bug as an issue and at some point, attempt to fix it
- delete the test, if it happens to relate to dead code or was poorly conceived in the first place
- use fixtures and other libraries to mock dependencies, e.g. Webmock and/or vcr to intercept http request and respond with a pre-recorded fixture.
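To illustrate that last bullet in Python (the language of the thread's @flaky plugin), here's a minimal sketch of mocking out an HTTP dependency with the standard library's unittest.mock, so the test can't flake on the network. The function and URL are hypothetical, purely for illustration:

```python
from unittest import mock

# Hypothetical code under test: fetches a user's name over HTTP.
# The HTTP client is passed in, so a test can substitute a fake.
def fetch_username(user_id, http_get):
    response = http_get(f"https://api.example.com/users/{user_id}")
    return response["name"]

def test_returns_name_from_response():
    # Stand-in for the real HTTP client: deterministic, no network, no flake.
    fake_get = mock.Mock(return_value={"name": "alice"})
    assert fetch_username(42, fake_get) == "alice"
    fake_get.assert_called_once_with("https://api.example.com/users/42")

test_returns_name_from_response()
```

Webmock/vcr do the same thing at the library level in Ruby: intercept the request and replay a recorded response, so the test exercises your logic rather than the network.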
I understand that there are scenarios in which production must go on even when a test fails. But to throw on another testing layer that tells you, "hey, it kind of works, for some unknown reason", instead of just marking the test as a failure to be investigated... what possible value or insight could outweigh the additional noise generated? I guess one possibility is that it lets you know that something is truly fucked up... but that is not at all the tone of the Box blog announcement:
> When testing Sync 4, Box's desktop sync application, we also ran into this issue, but we also didn't want to simply remove our flaky tests. When we noticed that most flaky tests would pass when rerun, we realized we could make doing so automatic. Flaky is a nose plugin that can rerun flaky tests without interrupting your test run. Using it is as easy as decorating your test methods with @flaky
So, it was expected that the tests would fail once in a while. Maybe there could be a better way to handle this, but what would happen is if that test failed we would just check the data manually, waiting for it to come up.
I think so? My company develops a processor that is meant to be compatible with processors from other vendors. To try to ensure compatibility, we have a test suite that compares our "golden model" of intended behavior against the observed behavior of competitors' chips.
We sometimes have run into cases where a competitor's product "randomly" gets wrong results, but when we run the instruction again, it gets the right answer. This happens frequently enough that we've arranged the test suite to automatically try again to see if a failure is reproducible before bothering a human with it.
But what do you do with the test in the meantime? If you disable it you lose coverage.
The best temporary state might be to re-run the test until it passes. If it fails after allowed re-runs, you have a regression.
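That "re-run up to N times, only fail if every run fails" policy is essentially what decorators like @flaky do. A minimal hand-rolled sketch of the idea (my own illustration, not the plugin's actual implementation):

```python
import functools

def retry_flaky(max_runs=3):
    """Re-run a test up to max_runs times; only report failure if every run fails."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(max_runs):
                try:
                    return test_fn(*args, **kwargs)  # first pass wins
                except AssertionError as e:
                    last_error = e  # remember the failure, try again
            raise last_error  # failed every run: treat as a real regression
        return wrapper
    return decorator
```

The trade-off the parent comments describe is built right in: the retry keeps coverage while you investigate, but it also quietly swallows the intermittent failure, which is exactly how "oh, that test is just flaky" gets normalized.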
One's answer to this question is a look into one's soul, at least in terms of engineering. The question is exactly what do you do. In my opinion, good engineers find out why the first one failed and why the second one passed. Either could be a false result. Sometimes the tests failed because the build box ran out of hard drive space, that's a false result. Sometimes a test passes because its condition for passing is incorrect, that's a false result too.
Tests passing is not necessarily a good thing in itself. That is why we have the notion of "testing the test"- a green icon means nothing if it's a lie.
I run it a third time, to start.
If it fails again, perhaps you've got a timing issue, or something that's switching back and forth.
Of course you're not happy to ignore the failure.
That said, it's a trade-off, and if you're a startup, it's usually far better to do it dirty today than perfect next year.
So it posits that there are messed-up practices considered normal, then it talks about companies that are clearly abnormal? Even in one of the very first paragraphs: "the company whose culture is so odd that ...". And since when is marking flaky tests a "completely messed up practice"?
Ok so a lot of this is about coding practices and such but... like the guy said, those problems sort themselves out. Bad security gets broken into, the companies eventually get hit and either die out or fix themselves. Etc...
There are a lot of completely messed-up practices in tech. Oh lord, especially as a European looking into the SV world. Some of those messed-up practices I can't even mention on HN because people think they are so normal; I get mass-downvoted and have to engage 5 people telling me how normal this is (in fact I might even have to do that just for mentioning this).
1. The issues highlighted tend to be problems specific to people. Programmers that don't know how to do some aspect of security properly, that's a problem specific to those guys. You put me next to them, I'll do that bit properly but will be clueless about a different bit. It's all fixable.
2. The actual programming & design is the least problematic, mostly because it's the one that's most easily changed. Trying to change culture gets you fired. Fixing a pipeline with noticeable improvements gets you promoted. The one bit I did agree with was that it is hard to show improvement when you prevent a fire. I also think that's fixable and I also think that's a people problem, except at the manager level.
You want messed-up practices? Look at the game dev industry and its mandated crunch and burnout. "Everybody's crunching for the next 6 months because we really want to see the game released on time."
But it has a lot to do with culture. Adopted traits that are only there to serve themselves. And since culture is by definition subjective, I can't really say anything bad about it now, can I?
It's possible for process to serve a purpose, but still not be worth it. (Not going to argue with your specific examples though)
I'm 99% certain that I know whom the author refers to here, having worked at an office with somebody of the same name where a drama matching this description took place. It was one profoundly weird situation that should never have been allowed to fester.
That way, when someone eventually actually breaks that function, they'll notice.
ICL also thought that even if the standard said such-and-such an index MUST start at 1, they would start it at 0 - cue some nice divide-by-zero errors. Not surprised we don't have a UK mainframe company any more.
More often than not I would have failing tests in official Python release version tags, which became par for the course.
One, that is depressing and implies the company seems to have a corrupting influence on society at large.
Two, the girl is right, and it is called insider information. Another corrupting aspect of our society.
In other words, corruption is a new viable normal, and workers have no problem with it because most workers are desperate and happy to have a passport to the middle or upper middle class.
we don't have an effective data-driven reputation system. we use gameable heuristics to track social capital.
when metrics for evaluation are flawed, people behave in ways that exploit the flaws even if they increase the likelihood of failure.
"we are not rewarded for necessary grunt work as much as shiny advances", for example. That's a failure of the reputation system to account for the value of that work.
My solution to this problem is a mathematical reputation system based on the same concept as page rank. The system is available here:
I'd love your feedback.
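The PageRank analogy, as I read it: respect from a highly respected person should count for more than respect from an unknown. A toy power-iteration sketch of that idea (my own reading, not the linked system's actual code):

```python
def reputation(endorsements, iterations=50, damping=0.85):
    """endorsements maps each person to the list of people they respect.
    Returns a PageRank-style score: an endorsement is weighted by the
    score already held by whoever gives it."""
    people = set(endorsements)
    for receivers in endorsements.values():
        people.update(receivers)
    n = len(people)
    score = {p: 1.0 / n for p in people}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in people}
        for giver in people:
            receivers = endorsements.get(giver, [])
            if not receivers:
                continue  # toy model: a non-endorser's weight just decays
            share = damping * score[giver] / len(receivers)
            for r in receivers:
                new[r] += share  # endorsement weighted by the giver's own score
        score = new
    return score

# alice and bob respect carol; carol respects dan; dan respects carol.
# dan inherits much of carol's weight despite having only one endorser.
scores = reputation({"alice": ["carol"], "bob": ["carol"],
                     "carol": ["dan"], "dan": ["carol"]})
```

Note how dan ends up far above alice and bob even though three people endorse carol and only one endorses dan: who endorses you matters, not just how many.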
I suspect we might need a "taxonomy of trust", so to speak, that allows the trust data to be anonymized and aggregated into commonly accepted meanings of trust contexts, trust roles, trust relationships, etc. That might let these companies release the trust data in such a format, perhaps through a blockchain, and also participate in consuming the aggregated data. I'd need someone well-versed in game theory to figure out whether an advantage is conferred on "leeches": companies in such a scheme that only consume the aggregated data but never send into the blockchain what they accumulate on their own customers. I think that's a real danger with such a scheme, but I'm not sure how to strongly dissuade that behavior.
What are your thoughts modeling respect not just as a unitary quality (still highly useful for quick, ad hoc, high-level evaluation), but also along crowd-created and crowd-defined axes? Then people can refine their description of respect and for example, say they agree with one crowd-group's definition of "good manager" for a specific person, but at the same time that person is not respected as another crowd-group's definition of a "good leader".
Gaming reputation systems over extended time periods and via aliased entities is a perennial problem. What are your thoughts on random latencies before respect scoring is evaluated on new respect data for an entity, securely tying a hash based upon fully-sequenced DNA to real-person accounts, interaction of entities in a specific context (someone might be respected as a great athlete, considered toxic in one of the companies they own, but respected in a different company), and tracking corporate aliasing (through mergers, acquisitions, spin-offs, name changes, etc.)?
> The Chinese government is building an omnipotent "social credit" system that is meant to rate each citizen's trustworthiness.
I had to add max-width, margin, and font-size styles before I could even attempt to read that page. For all that markup, there sure wasn't any attention paid to readability.
And, by the way, it's not like there's no css at all in the source. It's just UX-ignorant, so to say.
The author seems to have concerns for UX, because his page isn't just HTML: he put some CSS on the nav header, used some HTML5 semantic elements and some ARIA roles, and added a viewport meta element for mobile scaling. Sadly his concerns were fleeting, as 1/3 of his CSS is rendered moot, and his desire for a semantic page was abandoned quickly after implementing his navigation and footer.
His page's UX is improved dramatically by adding 2 simple CSS rules (margin, max-width). If those rules add too many bytes to the page, he could convert his navigation divs to a proper unordered list, get some bytes back, and make his page more semantic to boot.
His page is just a drive by attempt at standards and it's lazy.
There are others like it if you don't like this formatting.
That's a really good insight, and it's something to keep in mind with all the recent controversy about development cultures.
Every single time I've tried to implement a newish, reasonably complicated algorithm from a paper and contacted the authors when I've run into trouble, this is the reply I've gotten. How is it not normal? It's research after all, and if you've worked in research you should have a good idea how the paper mill works.
It may seem normal to you and people with a "good idea how the paper mill works", but it is absolutely insane from the point of view of a lot (hopefully most...) people outside of that bubble, who would likely mostly expect the results in a paper to at least be possible to replicate with the information in the paper.
Because how the hell is anyone supposed to validate their findings?
I'd like to know more about these practices that lead to 2 9s of reliability. Can you give specific examples of such practices, albeit not the companies themselves?
There are a lot of completely inconsistent definitions of what a service is sold as versus what is actually delivered, and half the time lawyers only care about regulatory requirements rather than functional ones. Someone will use the ITIL definition of a service and say "it's available!", meaning that it exists in a CMDB or something, while another person defines availability as "I can ping it" and doesn't care if there's an HTTP 500 error being thrown repeatedly. But gosh, if something used an insecure MongoDB server, that is reason to immediately cancel a contract (not hyperbole - I saw something very similar happen).
"These hidden problems are the true gold standard of entrepreneurism and it’s amazing how little discussion there actually is about how to find them. It’s hardly a surprise though since they; as we can see, can be hard to find and I think there are a couple of reasons why.
Hidden problems aren’t obvious even to those who experience them every single day.
Most people have enough human problems. They are often hired to do a specific job and don’t necessarily think about these problems as something that could be solved. Many just see them as part of the actual process. So to even understand they are problems, require a certain kind of attention, most people simply don’t have. (I have later learned that this is called functional fixedness and is a cognitive bias. Which explain why people sometimes say “Why didn’t anyone think of this before?” — most people simply don’t think like that.)
Hidden problems often only reveal themselves over time.
Not all problems are even instantly recognizable. Instead they only reveal themselves over time or through years of experience. This also means that many of these problems require a certain age and experience to even notice let alone understand. Perhaps this is one of the reasons why the average age of a founder is 38 and with 16 years of working experience behind him."
Listening to weak signals.
I'm about to do a shameless promotion for a book I have nothing to do with, but a book that has been a guiding light for me: Michael Lopp's Managing Humans
When I was reading the article I couldn't help but think of Lopp's advice about regular one-on-one meetings with each of the people on your team.
I think this is one of the points that Lopp intends for managers to be listening to during those meetings.
They aren't so much for feedback from the manager (as they are often treated), but more as opportunities for the manager to listen.
If I understand the book correctly, those one-on-one meetings are exactly the place where the managers are supposed to be listening for the "weak signals."
I am not an expert in every area of development, and yet I have somehow been inserted into a management role.
As Lopp explains very clearly, this happens often, and the single biggest thing you can do when that happens is care about being a manager. It's a different skill set than being an IC.
Recognize that, but don't get totally caught up in that. I don't think Lopp would disagree with anything in this article. I think, in fact, that following Lopp's ideas would lead to far fewer cases of WTF than what we see in the wild.
This is why it's important to learn about psychology if you intend to work with humans—or even just with yourself.
It is not constructive to blame employees for failure to heed poor alerts and protocols.
Which companies? That's pretty scary.
Another company I worked for used rot13 for their back end risk management system's password storage. Found it completely by accident when trying to add the platform I was supporting at the time. I had a setting to the effect of 'resolve data from defined functions' enabled, so every password stored would be resolved to plaintext instead of showing their 'hashes'. It was batshit scary - scariest being the production r/w credentials for the credit card and mortgage databases.
When I reported that one to the devs, they responded with, "We know. We needed to push the code out as quickly as possible, so we got lazy". Fuck. That.
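For anyone who hasn't seen it: rot13 is its own inverse, so "storing" passwords that way is one standard-library call away from plaintext (a quick demo, obviously not that company's code; the password is a placeholder):

```python
import codecs

# What would land in the database under a rot13 "hashing" scheme:
stored = codecs.encode("hunter2", "rot13")
print(stored)                          # uhagre2 - letters shifted 13, digits untouched
# And what any reader of the database can do to get it back:
print(codecs.decode(stored, "rot13"))  # hunter2
```

There's no key and no one-way function anywhere; applying the same transform twice is the whole "decryption".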
This is so, so, so easy to do and generally has no negative repercussions.
Sometimes older workers, or those in an exit interview, just don't GAF and call "group think" what it is.
You want to fix these problems? Don't hose your monkeys; hire some old deep thinkers and empower them.
What may seem dysfunctional and WTF from your vantage point may be perfectly logical from someone else's.
If you are going through your work life with the idea that social relationships and business practices will always follow the same strict rules as your software or hardware does, well, good luck with that.
Normal is as normal does.
Perhaps I was being a bit too abstract, but I stand by my assertion.
Start using encryption, people. There's no reason not to.
Otherwise, the reason becomes $$$ .