Something that I find interesting is that career advice from professionals with many years of experience focuses almost exclusively on the people aspects rather than the technology: communication, trust, teamwork, documentation, clarity. The advice is clear, precise and honest.
This is the opposite of what you get from new hires/juniors: they tend to focus on which stacks matter, what to learn, how to develop, deploy and maintain. Not much real advice on the behavioral side, to the point that people often take trainings for behavioral interviews and memorize “leadership principles” and other nonsense.
What you're saying is important for everyone to internalize. I'll spin it like this:
Your job satisfaction/pay are a function of your impact. Your impact is a function of your leverage.
If you're a "pure coder" who doesn't have any of the other skills you mentioned, your output is incredibly limited. At best, you produce a day's worth of code in a day, but you also require someone to manage you closely to make sure you code the right stuff.
The more of these other skills you have, the more you can (a) work independently (b) make others more productive and (c) make sure your team/business is doing the right things and (d) drive overall efficiency.
The more of these things you do, the closer you get to being orders of magnitude more impactful than a pure coder.
One of the most important skills is the one he describes like this:
>> The more specialized your work, the greater the risk that you will communicate in ways that are incomprehensible to the uninitiated.
In my experience (35 years), this isn't just about knowing the right way to describe things, it's also understanding what things to concentrate on when communicating, and what to ignore. If you are a tech person communicating with a decision maker, they are typically looking to understand options and their implications and risks, not the details of how the options work. They can then decide, based on their (presumably) better knowledge of the wider context, which options to select. If the decision makers are genuinely intelligent and motivated, but their eyes glaze over when you describe something, chances are you have chosen the wrong things to communicate to them.
Thank you! (Actually the raptor conservancy [0] is interesting from this perspective as well. I'm a newbie / junior in that context, but I can see standard employee patterns that I recognise from the tech industry, e.g. in terms of people who know how to report and/or delegate well, or to make effective decisions vs. be indecisive. Perhaps I'll have to see if I can document my observations on what I've learned in my career).
Very well said. It's stuff like this that makes me think all CS majors need some sort of business communications requirement, something to at least give a foundation on how to speak to decision makers.
Good business communication is something you only learn by /doing/ it. Expecting a course to facilitate that is flawed. I'd be all for increasing the role of internships in CS education in order to accomplish that -- and I have noticed that's begun to happen as well. CS majors these days have internships lined up for summer and winter if not more (especially due to COVID). I am sure they'll learn a lot more than I did during that era of my life just by sheer osmosis.
If it is a good course on business communication they _should_ be doing it - in class. And getting feedback and suggestions on improving with every attempt.
Internships are a wonderful thing but many people don't realize how they are coming across so don't look to improve. There is some learning by osmosis, but mostly people keep doing what they have always done because it works (as far as they can tell).
They won't be doing it, they will be doing a simulation of it. How are they going to learn business communication without doing it in the context of a real business with a real P/L line and real consequences for good and bad execution?
This is the party line. It misses that “a day’s worth of code” is highly unequal across individuals. The people whose “day’s worth of code” is most valuable quickly get pulled away from spending their days that way.
It appears that the key skill is coordinating a lot of developers, because the ones left developing are the ones you need a lot of (and who need close supervision) to get anything done. Managers will never rock this boat, because their career KPI is the number of people they manage.
That means you have bad middle/upper management. Large tech companies have heard of the Peter Principle and don't promote this way; they have separate technical tracks.
Which is not to say they're doing everything right.
> That means you have bad middle/upper management. Large tech companies have heard of the Peter Principle and don't promote this way; they have separate technical tracks.
That doesn't really jibe with my personal experience, which is that contributors really don't want the only path to career progress to be a transition to management. I saw this happen firsthand at a consulting company: the technical side was pushing for the split, not the current level of management.
Having no track at all is even better. Some people like what they are doing, and the field is constantly changing. You want your best surgeons doing operations, not managing other doctors.
You need some kind of title so people can feel responsible for career growth and so you can calibrate against other companies' pay at least; it doesn't need to control what kind of work you're allowed to do.
Even if they don't transition to managing people, someone in an individual contributor role presumably needs to be growing in some way if they want to get raises. While someone doing something valuable even without growing will probably continue to get small raises anyway, it's not the ideal path.
And, as you say, to the degree that position titles mean something cross-company, it gives a point of comparison.
People high on our technical track don't spend their days on code. The job of a Staff, Sr. Staff, or Principal engineer is about coordination, negotiation, and consensus-building across bureaucratic distances. Most only look at architecture diagrams. Some review code. Vanishingly few contribute. When they do, the contribution is about keeping their skills fresh and their understanding of the project grounded in reality. It's not what they're really getting paid for.
> If you're a "pure coder" who doesn't have any of the other skills you mentioned, your output is incredibly limited. At best, you produce a day's worth of code in a day, but you also require someone to manage you closely to make sure you code the right stuff.
This seems to miss the point of teams. Everyone should have a role: if you hire someone as a software engineer and they spend all their time doing devops because they like it more, you've made a bad hire. If all your engineers are having product meetings with various departments, then you've again made bad hires. Everyone should have their role and fulfill that role to work as a successful team. People thinking they're above fulfilling the role they were hired for is one of the fastest ways to end up with a poorly performing team. To put this in sports terms: you don't want your defender constantly hanging around the other team's goal mouth.
And yet, there's an incentive for somebody (say, in soccer) to try and score a goal even though it's not their role, because scoring a goal is rewarded to the individual.
Individual coders'/contributors' compensation doesn't scale until they are shown to "score a goal". And yet, to do so, they may have to stop contributing in their usual, assigned role and go "above and beyond", such as by redesigning the system, or by making their mark on the product and being recognized for it.
This, I find, is probably where the fundamental friction with teams lies.
I can appreciate how this kind of specialization helps a team to scale. At the same time, what I like least about my current job is how structured my role is. ‘Draw within the lines’ is constraining and can push creative folks away.
> The more of these things you do, the closer you get to being orders of magnitude more impactful than a pure coder.
Your job satisfaction/pay are also a function of how unique your skillset is.
With this in mind, increasing your impact (through soft skills) is just one of two ways to increase pay - the other way is to do something that not many others can do. For many people, doing something unique is far more satisfying than dealing with people problems.
I will point out that a lot of engineers are bad communicators, and those that are not often change roles. So communication is a unique skillset. As is understanding the business.
> For many people, doing something unique is far more satisfying than dealing with people problems.
These are not mutually exclusive. If you want to accomplish big unique things, you can't do it yourself. You are going to have to convince others that your big new idea is worth doing so that they will help you achieve it.
"Your job satisfaction/pay are a function of your impact. Your impact is a function of your leverage."
I'm skeptical. I mean, sure, for most people, the ability to convince a large number of other people to do things, to do your things, without worrying how they do it, is going to have more impact than any other contribution they could make.
On the other hand, Van Jacobson's algorithm is like four lines of code (I used to know where it is in TCP/IP Illustrated, Vol. II) and is responsible for the Internet as we know it. That's an awful lot of impact.
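To give a flavor of how small it is, here's a rough Python sketch of the slow-start / additive-increase, multiplicative-decrease idea at the heart of it (my own paraphrase for illustration, not Jacobson's actual BSD code):

    def on_ack(cwnd, ssthresh, mss):
        # below the threshold, grow exponentially (slow start);
        # above it, add roughly one segment per round trip
        return cwnd + mss if cwnd < ssthresh else cwnd + mss * mss / cwnd

    def on_loss(cwnd, mss):
        # back off hard when the network signals congestion
        return max(cwnd / 2, mss)

A handful of lines like these, and every TCP connection on the planet backs off instead of melting the network down.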
I don't think we disagree. I hadn't heard of Van Jacobson, but a quick glance at his Wikipedia page doesn't make it look like he was just a "pure coder" in the cartoonish sense of someone who doesn't know what's important and can't collaborate with others.
I'm not sure how "(a) work independently" fits. In fact it seems the opposite. The whole point is about working with and influencing others. That's not "working independently"
"Working independently" is not the same as "working in isolation". Not sure that this is also what the GP meant, but "work independently" usually means that you understand what the business needs and can initiate new tasks on your own, without waiting for someone to tell you what to do.
Found the manager. This is just more developer-hate drivel. Yes, if you think managers are more valuable, then obviously you are going to say engineering work is "incredibly limited" and manager activities are "orders of magnitude more impactful". Listen, no amount of ass kissing and brown nosing is going to solve actual tech problems.
It sounds like you're making assumptions in bad faith. I'm a SWE (i.e., not a manager) and I didn't read the comment at all like you did. On the contrary, I quite agree with it.
Well, as someone who has saved my company's ass multiple times to the tune of millions of dollars I think technical solutions are orders of magnitude more impactful and foofoo talk bullshit is incredibly limited. So I guess we will have to agree to disagree. Maybe noob coders "produce a day's worth of code" and that's all they can do, that's your perspective.
I've sat through dumb two hour meetings about deciding which words in a document should be capitalized. Is that what is meant by "make sure your team/business is doing the right things?"
I think what you're missing here is that the original comment isn't suggesting code doesn't solve problems when it counts. The thing is, developers often code things they never needed to. Or they code things off spec. Or they code things outside of the convention of what's appropriate for their immediate team or long-term needs of the product. The list goes on. Output could seem good for a long time before it becomes problematic, then the pure coder simply codes more to solve those problems. This is very circular and makes up a lot of work done by software developers in my experience.
I agree with what you're saying in part. Pure coding skills are essential, especially in critical situations like that. Soft skills won't fix broken things, for example. Salespeople can't deliver the features they promise without someone to develop them.
However, soft skills can help someone with excellent coding skills to know what to apply their skills to and when, and how to integrate their skills within a broad team of different disciplines.
This is arguably true in any field; I think it's often missed in software development because people have such a difficult time distinguishing boundaries of things. The problems you're solving, when you're passively or actively solving problems, when output is applicable to a specific problem, etc. Even software engineers themselves struggle with this.
Your ability to save your company's ass is an excellent skill to have, but it isn't directly related or exclusive to what the original comment was saying.
You have a very noob view of software engineering. You make a lot of generalizations about coders to support the theory that coding is low impact because coders fuck up a lot. Noob coders fuck up a lot.
When I saved my company's ass those times, no non-technical people were present and it was wholly technical knowledge that solved it. I could have and probably should have ignored the problems and let the talkers try to fix it and take the blame for millions in losses. So it is very apropos to the original comment.
a) I never said coding is low impact, although I do believe software engineers make a lot of mistakes
b) my generalizations are very common in software
c) saving companies’ asses with code is not common
d) I will never not be a noob
The tech lead at my place has just burned himself out and quit after making a load of poor decisions. He reinvents the wheel over and over again rather than use some prebuilt solution. I am going to have to maintain his undocumented, untested code. Experienced coders can be just as bad for over-engineering as noobs.
> Well, as someone who has saved my company's ass multiple times to the tune of millions of dollars
Even though it sounds good, this is actually a bad measure because all coding consists of constantly making new zillion-dollar mistakes and then fixing them as you go. You can always do a worse job, and if we're supposed to be impressed by you visibly fixing something, then mess up and fix you will.
I entered the industry with no degree, having taught myself to code. After approx 15 years as a developer, I decided to get a degree, because it was becoming a problem (Australia is very sensitive to qualifications). I decided to get an MBA because all the hard problems I'd met were people and/or business problems.
The tech is generally easy in commercial coding. There is usually a definitive answer, and if not then the trade-offs are generally well-known. It's rare to run into a problem that requires complex technical knowledge, and in those cases it's fine to hire a consultant to help.
But the people problems are hard. Getting clued-up on these is important.
A mature-age MBA done after 10-15 years of experience is very different from one done by someone who went straight from a BSc to an MBA (often to game immigration hurdles).
I think everyone agrees that bad management is the biggest problem facing a lot of software companies. The part that's disputed is whether managers with MBAs are actually any better than managers without them.
I think an MBA will help you be a better manager if you want to be, because at least there's some clue about what a "good" manager should look like.
I've met sooo many bad managers. In most cases they were bad because they didn't really know what they were doing and felt they couldn't look "weak" or not be in charge. There's always the urge toward authoritarian leadership because it's the default (for some reason).
If the MBA course is any good, it will at least have exposed such a person to other leadership styles and some management theory. They might reject it, of course, but at least they'll know it.
So, yeah, I don't think having an MBA makes you a better manager or leader automatically. But I think a manager who wants to get better could be helped by doing an MBA.
Of course, there are lots of managers who are convinced they don't need to get better, and so won't/can't be helped. And there are lots of people who get MBA's because it's a ticket to promotion and don't really care about the learning.
> I think an MBA will help you be a better manager if you want to be, because at least there's some clue about what a "good" manager should look like.
I think even that part is disputed; we really don't know what a good manager looks like. Some of what's taught in MBA classes now is the opposite of what was previously taught, and there seem to be as many failure stories as success stories. I do take your point that maybe just thinking about anything other than the authoritarian mode is enough to raise a manager above the average, which is distressingly plausible.
> The tech is generally easy in commercial coding. There is usually a definitive answer, and if not then the trade-offs are generally well-known. It's rare to run into a problem that requires complex technical knowledge, and in those cases it's fine to hire a consultant to help.
I'm totally on board with highlighting the importance of communication, teamwork, and people skills (and I've seen how awful it is to work with people who lack these skills despite their technical expertise), but this statement I simply cannot agree with.
Everyone's experience is obviously different. Some people may work on problems where there are few legitimate technical challenges, or where they aren't solving any problems that haven't been solved before. That hasn't been my experience in most of the jobs I've had so far. There were significant architectural and sometimes even algorithmic challenges to be solved, and not solving them properly would mean either bugs or unmaintainable software.
ITT we talk about the importance of communication, but good coding patterns and architecture (as well as the right level of documentation) are part of communicating to other developers.
> There were significant architectural and sometimes even algorithmic challenges to be solved, and not solving them properly would mean either bugs or unmaintainable software.
I get that, and I don't mean to suggest that some bits of this aren't tricky.
But with an architectural problem (for example), it's usually a choice between 2 or 3 options. We know what the options are, we know what the trade-offs are, we can make guesses on what we think the impacts will be. It's a "known known" problem, with usually enough time to research it fully. Implementing it properly can be difficult, but that's the kind of thing that can be iterated if necessary - we don't have to get that right first time.
There are lots of management and people problems where none of this is true and it's very much a "known unknown" that has to be got right on the first try, without knowing all the possible tradeoffs or consequences, and under time pressure.
> But with an architectural problem (for example), it's usually a choice between 2 or 3 options. We know what the options are, we know what the trade-offs are, we can make guesses on what we think the impacts will be. It's a "known known" problem, with usually enough time to research it fully. Implementing it properly can be difficult, but that's the kind of thing that can be iterated if necessary - we don't have to get that right first time.
No, I disagree. Some problems are truly novel (or maybe there are just a handful of competitors and you obviously can't see their source code) and you just don't even know what kinds of solutions might exist. There might be bits and pieces in the research literature, but good luck even finding them, let alone understanding whether they are applicable. The space of possible technical and architectural solutions is infinite-dimensional, so it's not always a "known known" issue. There might be crazy solutions out there that you simply didn't think of.
Now, if it's about "create a CRUD interface to your e-commerce store", then I agree that the situation is more similar to what you described, but that's not what everyone is working on.
I had to work on a scheduling problem once that involved going to a university to talk to Comp Sci PhDs who were working on similar problems. That was fun. In 25+ years of commercial coding, that's happened once ;)
Now you talk about an algorithmic problem and there I agree with you - there isn't much algorithmic complexity in normal business coding.
But in your previous post, you talked about architectural problems and there I disagree with you strongly.
Taken in isolation, individual technology pieces are relatively simple and easy to evaluate. But when you plug one such piece into the middle of a larger system (let's say a typical corporate IT landscape), then it becomes way more complex on all fronts. (Enterprise) architecture also interfaces heavily with business decisions and company structure. You need to think a lot about how your ivory-tower architecture will actually be used by the teams.
But if you get it wrong, what happens? You have to refactor (if you didn't get too far in), or start again if you did. No big deal - code is perfectly amenable to this.
But the political and social problems of "we got it wrong, we're going to have to start again from scratch" are the real problems. Dealing with a marketing manager who has a product launch scheduled for the 2nd quarter and you're about to tell them that you need to reschedule because you got the architecture wrong on the first try - that is a business problem not an architectural problem.
The tech problems get a lot easier if you have some kind of framework for dealing with the business problems.
> But if you get it wrong, what happens? You have to refactor (if you didn't get too far in), or start again if you did. No big deal - code is perfectly amenable to this.
This works nicely on a small scale, but not with the architectural problems.
> that is a business problem not an architectural problem
It's an architectural problem because this business constraint forces me to get the architecture right the first time - I won't get another chance.
There's no business reality where it's OK to say "just give me another 2 years to re-do the system in a (probably) better way".
But there is a business reality where you can say "I don't know which architecture option is better, give me 3 months to do some prototyping and I can make a definite decision".
But knowing how to frame that, and manage the expectations of the other people involved in that negotiation, and recognise their objectives and priorities, is a business skill.
And, y'know, if you spend 2 years building a system on the wrong architecture because you didn't know how to ask for more time to make a better decision in the first place... well, you need some management training ;)
> But there is a business reality where you can say "I don't know which architecture option is better, give me 3 months to do some prototyping and I can make a definite decision".
And now back to your original claim. You need 3 months of prototyping only to reach a decision, but you still insist that it's an "easy" problem?
Then there's the problem that prototypes often don't uncover unknown unknowns. Investing effort into a prototype improves your chances of arriving at the correct solution, but by no means guarantees it.
No it's not an "easy" problem - like I said, I get that these can be tricky. But in my 25+ years of commercial software experience, I've not bumped into one like this. I have bumped into complex technical problems, but the answers were/are always out there and findable. As I said, these kinds of problems are "known known" - you know what the problem is, there's a decent definition of what "good enough" is, and there's usually a lot of Comp Sci literature around to help.
Remember, this is in contrast to the people/business problems. For these, because they involve the specific personalities in question, there is no definition of the problem (people will react to a situation according to their nature, and you can't check their source code). Often there is no good definition of what a good solution even looks like (except a broad "get everyone happy and working again" maybe). There is no literature describing the solution to the problem, or usually even addressing similar problems. And the problem always has a time limit - taking no action is seen as an action in its own right - and it's usually days at most. You literally have to make up some solution as you go along, not knowing if it's going to work or not. That's why I called these "unknown unknown" problems. In 25+ years of commercial software experience, I've bumped into at least a dozen of these (that's not including the "normal" run-of-the-mill management problems).
Your experience may vary - mine led me to conclude that the tech problems were not as difficult as the people problems, and that therefore I should get some training for the people problems.
For example: the network admin comes out as gay, starts having a relationship with someone on the night shift. The number of "emergency network outages" during the night shift suddenly spikes. A quiet word didn't seem to have any effect. The night shift supervisor is getting fed up with the disruption. Sacking the admin isn't an option. Sacking the nightshift worker isn't an option. Going down any kind of formal disciplinary process is the "nuclear" option as the company has to make absolutely sure it's not in breach of discrimination legislation. Ideally everyone would go back to work and be happy and the network would stop having problems at night (one of those where there is a well-defined "good result"). I didn't manage to solve this problem - the network manager ended up leaving in a huff (though thankfully not suing us). I still to this day have no idea if I could have found a better solution to that problem. There is no technical problem that I've ever faced where I wonder 20 years later if I could have found a better solution (though plenty where a better solution has become available later).
If you want to get into management and deal with those kinds of people problems, then there are lots of management training courses around. An MBA is at the upper end of that range.
If you're having problems fitting into teams and getting along with people (actually quite common in dev teams), then maybe look at therapy or personal coaching. I spent a few years in therapy and it really helped.
If it's office politics and the like - some workplaces are toxic. I can't deal with those even with the training and experience. Life's too short to deal with that bullshit ;)
There were lots of things I was curious about, like company financial records/statements, some of the legal stuff around employment. The accounting stuff was really useful.
But the real wins were the leadership and management units, to my mind. Learning the "formal" knowledge around this has really helped in subsequent management roles.
The entrepreneurship unit was vaguely hilarious, as I was running a blog for the local startup scene at the same time and neck-deep in Lean Startup, which no-one on the MBA course had heard of. Writing a really tight business plan seemed to be the hardest part of starting a business ;)
I think this is on point, but I've worked with a lot of MBAs with garbage people skills. I'm not sold on the value of an MBA except as a piece of paper on a resume. I'm not claiming that it is entirely useless, but I don't believe it necessarily means someone is qualified either.
Did you find the MBA trained you to solve those people problems better? Did you get any practical, hands-on experience dealing with people problems during the course, or did it provide an interpretational framework, with the true learning following afterwards, like with a programming degree?
I think the latter. We studied formal models of leadership and management. Knowing those really helped me to put some kind of structure on the things that I was dealing with, but ultimately dealing with them came down to interpersonal skills and "walking the walk".
One of the really useful-but-unexpected things was having some kind of head-canon for "what a manager is and isn't" and (in one case) "what a CEO should actually be doing". It was really helpful having a kind of connect-the-dots picture of what should be happening and therefore what dots needed to be connected to make that picture happen.
Generally speaking, your technical qualifications' value over a replacement hits its peak sometime between 3 and 10 years into your career for most folks. Meaning that while you may be particularly skilled, learning one more discipline, stack, technique, or language probably won't let you get a particular job done any better or faster than the next engineer.
Many engineers move to management at this point, which requires that you can mentor and grow a team of engineers, set goals, drive projects to completion, and manage your team through performance reviews.
Folks who stick to the technical path need to become an indispensable piece of technical glue across N teams, keeping them all building in the right direction and not crushing each other. This job requires deep technical knowledge but often doesn't involve a ton of coding e.g. Linus Torvalds.
If a junior engineer showed up with the behavioral qualifications of Torvalds and the technical skill of a junior engineer - they would still be a junior engineer and unable to build trust, guide a team effort, or other activities.
Hum... maybe it's because they have so much technical expertise that they are able to have insights into other things?
Technical knowledge is still very important, and it's the basic foundation of being a software engineer, and maaaaannyyyyyy people out there don't even know the basics.
> This is the opposite of what you get from new hires/juniors: they tend to focus on which stacks matter, what to learn, how to develop, deploy and maintain. Not much real advice on the behavioral side, to the point that people often take trainings for behavioral interviews and memorize “leadership principles” and other nonsense.
This was exactly the experience I had not so long ago.
Management brought in a new "team leader" to "shake things up" with "new ideas."
She spent most of her time reading management books, taking online management courses, and going to management seminars. She almost never spoke to the "team," and when she did, treated us like underlings.
They gave her two years to make a difference, and then kicked her out in the first round of COVID cuts.
Wow. While I do not have experience with management courses, and seminars, one book I read about management (https://www.amazon.com/First-Break-All-Rules-Differently/dp/...) talks about exactly this scenario. I am surprised to hear about it in the wild.
This is not just a junior problem. A lot of "people problems" seem to come from fighting over which stack to use (and most people fight for whichever stack they are best at, to maximize their own personal contribution). I always spend a lot of time asking people what technology they would use to solve a particular problem, what technology they really don't like, etc. It saves a lot of trouble to get a team that is willing to go with a particular technical approach, and that matters more than most "cultural fit" issues.
In my experience, this mostly happens with people who only know one technology. Many juniors are like this. On the other hand, when this happens with a senior, they fight extra hard.
I suppose it also depends on company culture. What happens if a technology you have never seen before is selected for your project? Are you given some time to learn, and is it expected that your initial contributions will be smaller, or are you supposed to be just as fast as people who have used it for the last ten years and get negative reviews otherwise?
>Not much real advice on the behavioral side, to the point that people often take trainings for behavioral interviews and memorize “leadership principles” and other nonsense.
This is what FAANG interviews generally look for, so why is it a surprise that potential hires focus on it? Amazon literally says that you need to highlight all the leadership principles in the stories you tell when answering behavioral questions. Granted, FAANG barely asks behavioral questions of senior engineers (and then it's like 90% project and bureaucracy management), much less junior engineers.
Do you know why Amazon is focusing so much on the Leadership Principles questions, yet there seem to be so many horror stories from people who work there? I am genuinely curious why they don’t manage to filter out the jerks.
As I see it, the Amazon leadership principles are designed to select for jerks. Specifically jerks who make money. Just look at them. One or two out of fourteen indicate to not be a jerk (Earn Trust, maybe Hire and Develop the Best) while multiple other ones rewards you for being a jerk if you succeed (Disagree and Commit, Bias for Action, Insist on the Highest Standards, Are Right A Lot, Ownership, Deliver Results, etc.).
edit: Amazon's goal isn't to be a nice place to work, it's to make a lot of money and everything else is secondary to that. They are so far succeeding splendidly at that goal across multiple verticals so arguably their approach works. I wouldn't want to work there myself but you can't argue with results.
I think that you may be onto something there. I have noted that when I meet (many) ex-Amazon employees, a lot of them (especially on the business side) are not nice people to work with (backstabbing, destructive political games etc).
It's come to a point where I kinda assume that they are likely to screw me/other people if I don't know otherwise.
(Obviously this is a generalisation, I'm sure there are loads of really nice people at Amazon).
You should read the descriptions[1] of those leadership principles, not just the titles. They're not what you think.
Have Backbone; Disagree and Commit is specifically about NOT being a jerk who digs their heels in and sabotages projects or decisions they disagree with. Rather, it's about being a team player who embraces the decisions of the group.
Are Right A Lot is about constantly questioning your own understanding and being CAPABLE of changing your mind. We literally interview for it by asking about a time your mind was changed on something important.
Ownership is about not saying "That's not my job". If there is some work to be done, and nobody's doing it, don't have a high and mighty attitude about it being beneath you. Just get it done. (e.g. Developers doing QA, Ops, Documentation, etc)
Bias For Action is about taking calculated risks.
Insist on the Highest Standards is the only one that I know jerks abuse.
I recently looked at some of their hiring material, and one of the before/after leadership-story examples showed a lot of bullshit in the "after" version. The content was nearly the same, but it felt embellished in a way that left me feeling hollow. It was some on-call event, of which I’ve experienced dozens, and the second version just felt oddly misguided for the sake of saying the magic words the interviewer wanted to hear. The leadership principles seem to be reinforcing bullshit. We’ll tell you what we want to hear, and you’ll tell us what we want to hear. Do our principles line up? ‘Who cares? You told us what we wanted to hear.’ It’s effectively a jerk pass filter at that point.
Mind you, there’s some of this in all interviewing, but from some people I’ve talked to, Amazon seems to really love these leadership stories.
Amazon asks senior engineers MORE leadership/behavioural questions.
But no, you don't need to memorize the leadership principles themselves, and your individual stories can't highlight ALL the leadership principles. Some of them are deliberately in conflict with one another.
Most juniors have a much narrower focus in their day to day activities. Their responsibilities tend to be far more focused on churning code and dealing with cool or bad tech choices, whereas the more senior you get, the wider or high level your job can become.
Professional interview sign seen at the "interview desk" for work placement at an expensive school for digital creatives: "sit up straight; look the interviewer in the eye; wear presentable clothing; answer the questions asked of you" .. I believe the sign was there because that was not occurring in many cases!
My experience from being on both sides of the desk has been that devteams are relatively tolerant of candidates exhibiting personality quirks during the interview process - it's the candidate's skills that are in demand, not their winning personality.
Maybe it's different when you interview at a FAANG company though? I imagine FAANG recruitment is a merciless sausage machine.
> My experience from being on both sides of the desk has been that devteams are relatively tolerant of candidates exhibiting personality quirks during the interview process - it's the candidate's skills that are in demand, not their winning personality.
I'm growing further and further away from this mentality in hiring because the sad truth is most of those folks who come in with "quirks" lack maturity. To me, quirks are "yellow flags" in the interview stage because it means that you're not able to put your own weirdness aside to fit socially when you arguably need to the most. Invariably these same "quirks" come up down the road in the employment with output problems, copping attitude with superiors, weirdness in internal/external meetings, and just general antisocial behavior.
I get that we're all in hella demand right now, but I'm starting to go back on using "I'm just hiring for the engineering skill/talent" as an excuse to overlook yellow flags.
The other thing is contextually, what are the quirks? If the "quirk" is someone being super shy/timid it doesn't even register on my radar as a problem unless it's extreme... I'm talking about the stereotypical "oh so cute dev quirks" like: not dressing appropriately for an interview, interrupting me/talking over me, talking down to me, comparing their last junior position to being Steve Jobs, being sing-songy, inappropriate jokes in any way/shape/form, getting in arguments with themselves... etc etc.
I'm at a point that if you can't act 95% professional in your first interview you're done as you're not capable of being "on".
> interrupting me/talking over me, talking down to me, comparing their last junior position to being Steve Jobs, being sing-songy, inappropriate jokes in any way/shape/form, getting in arguments with themselves... etc etc.
Besides being "sing-songy", none of these things are quirks, they're ineffective communication or untruthfulness of their past positions.
As far as dress and "sing-songy" go, I would agree with the above poster: software development tends to be much more inclusive of these sorts of characteristics. Jeans and a T-shirt is perfectly acceptable attire for an interview. Do your software devs wear suits 5 days a week? Why expect a candidate to do so, either? A candidate I interviewed going, "uh-oh spaghetti-oh!" when they hit a segfault while running their interview solution was kind of ridiculous, but ultimately has no impact on their contributions as a developer. Factoring these kinds of things into hiring decisions is ultimately about including people from a culture similar to your own. Different cultures have different definitions of "weirdness", so selecting based on the ability of the candidate to identify what is "weirdness" is ultimately a cultural litmus test. And expecting male candidates to be shaved, as per your other comment, is just blatant cultural discrimination - some demographics like Sikhs could even sue you for illegal discrimination over this. This cuts both ways, both the case where a candidate in a T-shirt and jeans gets rejected for not wearing a suit and when a suited-up candidate gets rejected for being too formal. Both ultimately hurt the company by excluding effective workers.
What is "dressing appropriately". I wore a suit to my very first interview for a dev position and very nearly didn't get the job because they were worried I wouldn't fit in (luckily I was able to explain that I don't usually wear a suit). The other ones in your list, sure. But I don't think those are what people typically means when they talk about quirks.
> I wore a suit to my very first interview for a dev position and very nearly didn't get the job because they were worried I wouldn't fit in
Yeah - I hate this. I had people at my last position give candidates negative points because they wore a suit and I went $%&#ing ape. In positions that I've held in the past it's been the appropriate action to wear a suit due to heavily corporate environments, and I see it as 110% normal that someone puts on a nice suit for an interview. The action of wearing a suit (or tie for that matter) to an interview should never be seen as a "cultural fit" problem - I'm looking at you SV...
Sorry that almost happened to you. That's some bullshit.
---
Overall, a nice well-fitting button-down + a pair of well-fitting dress slacks is what I would recommend. Personally, I wear a slim-fit brand-new white button-down (ironed), nice dark blue chinos (ironed, not pleated), and not-scuffed brown wing-tips. Often I also wear a tie, but am considering not doing so, thanks to a derelict SF "big-wig" grilling me in an interview as to "why are you wearing a tie?" All I wanted to say was "does it matter?" (offender: if you're reading this, don't do that again.)
Also get a solid hair cut, ensure you're shaved if male, and yeah - sit up and be confident while you're in the chair. Body language does matter, and your interviewer will subconsciously notice regardless if they're giving you a "pass" or not.
> The action of wearing a suit (or tie for that matter) to an interview should never be seen as a "cultural fit" problem
I would agree and would go as far to say that dress in general should never be a hiring criteria for a non-customer facing position. Unless there are hygiene issues or they are wearing something actively offensive.
I'm slightly more conservative - I've had people show up in t-shirt and jeans and have turned them away because I feel like they're breaking the social contract of what an interview is. Like - totally clean/non-offensive t-shirt and jeans.
In a lot of my engineering roles I have had to be internal/external facing and do things like budget presentations, work with partners, etc. Before that, I was a general web/app dev and we would get pulled along to client meetings all the time to collect spec, ensure that it was a good sales fit, etc. I'm not saying I had to be fully decked-out but being able to throw on a button down and tuck it in means that you "fit-in" to these situations.
For me, if someone is showing up in a t-shirt and jeans it's a red flag that you can't be bothered to dress semi-professionally which has been an occasional (but hard) requirement of me since day-1 of my career. 100% this all is anecdotal, and specific to my own needs/experiences.
I think this is fair to both sides.
I dress casually for interviews not only because that is what I prefer to wear but also in hopes of getting rejected by anybody who would consider that a red flag.
This is in fact the advantage of having a widely-understood dress code for things such as interviews: that way no one has to play guessing games, you just show up as expected and it's one less thing to think about.
Unfortunately, our industry has no such thing, so I always ask before an interview. So far the answer is always "uh, we just wear normal clothes", "casual is fine" or once "just wear clothes, please, this isn't quite Burning Man" (ended up working at the last place), but: this was in the Bay Area, so YMMV.
Dress code doesn't really factor for me, but I've worked for the same place for a while now and we've always been 95% remote. Half of us just wear whatever we slept in the night before. Most people don't get dressed until lunch time. Even in the office it doesn't get much fancier than a t-shirt and jeans. I guess our tech lead wears a button down shirt, but his jeans usually have giant holes in them, so I would call that a wash.
Expecting an interview candidate to wear something nice would feel a bit hypocritical.
Ha, you "went ape" when someone saw as negative wearing a suit and you're doing the same in reverse.
Broadly speaking, if you go to an interview for a lawyer position you wear a suit; if you go to an interview for a developer position you don't wear a suit.
> you can't be bothered to dress semi-professionally
It seems to me that you're making an assumption about what is considered professional dress. I feel like it's fine to have such expectations, but you should express them clearly in the invitation to interview. It's not a case of "not bothering" if you have never expressed that preference in the first place.
I think nowadays it would be fair to tell applicants what is expected for an interview in regards to dress. I've been judged for underdressing and for overdressing. Just can't win and it's unfair to expect someone to read your mind.
Dressing appropriately does not mean "dressing up". It means dressing slightly above what is generally accepted as the normal dress code for the role you're interviewing for, and the culture of the company. This is what makes it so hard; if it were just about putting a suit on, everyone could do it.
If you're completely unsure, just call the hiring manager and ask. "What's the dress code at company X?", "How do people in role X normally dress at your company?". No one will be mad at you for making an effort and trying fit into the culture.
:-) I went to an interview (London, UK) for a big-name agency suited and booted and was interviewed by someone in a scruffy t-shirt that looked like his dog had been sick on it.
I sort of twigged that I was overdressed when, on the way to the interview, someone stopped me in the street and asked me the way to the "Ivy".
I was treated poorly and mocked at an interview for wearing a sports coat over my jeans and t-shirt. Just can't win sometimes. In the end, not the kind of place I wanted to work.
Believe it or not, checking if the candidate is able to dress professionally and appropriately for the occasion is one of the major things being tested on most white collar job interviews.
> Invariably these same "quirks" come up down the road in the employment with output problems, copping attitude with superiors, weirdness in internal/external meetings, and just general antisocial behavior.
Yep, I’ve been that guy! Had huge problems for a while. Mercifully my team/department/business put up with me for the most part, but I did have to suffer for a while. Learnt some invaluable lessons though and I’m quite happy to be a fitter happier and more productive individual! (Yes Radiohead reference but really not as bad as it seems:)
> To me, quirks are "yellow flags" in the interview stage because it means that you're not able to put your own weirdness aside to fit socially when you arguably need to the most
My response from the other side of the table: as an interviewee, I view these initial interviews as an exercise in expectation management. I don't consider myself weird, so I act like I normally would; otherwise I'm setting myself up for failure later on.
Also, I don't wear to an interview what I wouldn't be comfortable wearing on a normal working day. I dress casual on purpose (but presentably casual, not weird casual -- but that's my personal opinion of course).
Personally, I don't really care how anyone dresses. (Oh, up to a point. One of the jobs I had evolved to, "No tube tops, bicycle shorts, or French maid outfits". The last two involved one senior hardware support guy.)
But the other things? Oh, yeah. I've worked with (or attempted to) people like that, and I won't do it again. The first time a technical difference of opinion turns into a suicide threat, I'll just clean out my desk on my way out.
I would agree, although sometimes this goes a little too far. I have seen instances where they solely focus on the skills and end up hiring a jerk. Ultimately, that causes way more trouble than a weak skillset.
An interesting experience I had at a previous company was that they really put culture almost above all else. It sounds great, but in all honesty it was actually depressing and horrible after a while. It almost always came down to, "Well we don't want to hire X person, because it's not a good culture fit." The real issue was that they wanted to hire people they could hang out with / mold into their way of thinking. No new ideas, and everything pretty much stagnated. If that's what you were looking for, then it was great, but people who think like that aren't typically the ones that can hit the homerun when you need to.
I think it is because the older you get, the more you realize that all those projects are anyway just notches on some stick. For example, everybody with some experience will choose a boring project with awesome colleagues over a cutting-edge project with a bunch of self-obsessed nerds. Life is short.
I really wonder whether this is a symptom of an industry that's bad at evaluating technical ability rather something that should be elevated. IME the best programmers by far are those who solve difficult technical problems quickly, whether or not they're abrasive. I don't like when they're assholes, but it doesn't matter. They still have more impact.
I used to buy into the whole "communication is king" idea but the more I program the more I just don't think it works that way. Most good coders I know communicate clearly as a side effect of being good, even if they're otherwise completely socially crippled, and they're way more productive.
You ever wonder if there's some kind of bias? Like successful people people (not a typo) will be more outgoing and likely to give advice. A successful tech focused person might not be blogging and giving unsolicited advice.
Definitely. In my experience, the nitty gritty "labor" part of software development, the actual process of learning and writing code, gives great satisfaction and enjoyment. But after a few years, a few projects and / or jobs, and especially once your hard work is effectively thrown away in favor of something newer and shinier written and advocated by younger and louder people, you get more cynical.
But once you get over that, you realize that code itself is just an implementation detail, and it's the people higher up that have more influence. As developer you get paid an X amount a year, that's about it; as a higher up, you get to play with millions, both as money and as 'resources'.
As a developer you'll learn your limitations, that you cannot solve everything and that you alone are not good or fast enough to tackle nontrivial projects (currently in the middle of that, single developer, at the current rate it'll take years for my software to become viable. At least I'm on the payroll). If you have bigger ambitions, climbing the ladder is the way to go. Personally I'm hoping to be able to get a small team together in the coming year.
The author doesn't state much about himself, so it's hard to tell from the article who he is and what he's been doing for the last 10 years. Obviously if he's mostly in management and not coding, he won't focus on programming advice.
If you are a junior, you would be making a mistake to ignore the technical aspects of the work (stacks, what to learn, how to develop, deploy and maintain) in order to focus on communication, trust, teamwork, documentation, and clarity. Communication is somewhat important for juniors, but usually the technical aspects are what's missing even more. It is when you have the technical aspects down that the other aspects start to matter more.
Also, juniors are not in political situations where the other aspects matter all that much. Generally, they find themselves in simple political situations. As long as those are not downright toxic, it is ok.
I think there is the effect of weeding out people with strong technical skill but low focus on “people” skills (not talking about jerks, just people who don’t play politics).
Honestly I don’t think it’s a good trend or that it’s unavoidable.
We shouldn’t look at the world today and take it straight as a lesson in what should be done. It's a bit like saying being gorgeous, extroverted and a good negotiator is the key to success; it sure can be, but it shouldn’t be the goal of everyone.
Both, to some degree, matter. If your team makes poor technical decisions, that could spill over to eroding team trust. Your success is generally tied to how your team performs in aggregate. If your peers mess up, that could impact you professionally as your team's overall standing within your organization declines.
Technical skills were greatly undervalued any time more than a couple of decades ago, so it makes sense: at the time they were building their careers, there were no highly paid people who weren't manager types.
And over the last decade or so this has "trickled up" due to the interview process: experienced and even senior engineers are essentially funneled into the areas of focus you mention.
Goldberg is talking about a longer term view and I think in general, after working for more time your job becomes more about interactions than the "knowledge" you have.
1) All of those people aspects are very, very important.
2) Stacks don't matter. Languages don't matter. Editors/IDEs/whatever-the-hell-else doesn't matter. What I used to call a "firm theoretical grounding" does matter.
What does that mean? At the base, the ability to write clear code that other people can read, and the ability to read code that other people have written. (Don't laugh, it's not uncommon to get called in when someone has bugs (or features) they can't get fixed after they've painted themselves into a corner.) (For me, the key to this is formal logic and what is variously known as axiomatic semantics (http://homepage.divms.uiowa.edu/~slonnegr/plf/Book/Chapter11...) or Hoare logic, or predicate transformer semantics. Theoretical, right? But the ability to think about a piece of code as a block of text, without "simulating the computer" is darn useful.)
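As a tiny toy example of what I mean (my own example, not from the linked chapter): the loop below can be checked against its invariant purely by reading the text, with no mental simulation of the machine at all.

    def sum_below(n: int) -> int:
        # precondition: n >= 0
        i, total = 0, 0
        while i < n:
            # invariant: total == 0 + 1 + ... + (i - 1), and 0 <= i <= n
            total += i
            i += 1
        # postcondition: total == n * (n - 1) // 2
        return total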
Further, algorithms and data structures. No, you don't have to memorize a bunch of algorithms. But it's a good idea to understand what kinds of things are out there and what they can do, as well as having experience writing them yourself. (I get downvoted a lot, but I do have to point out that every time anyone puts code in an editor, they're building a data structure or writing an algorithm.)
Then, at least some knowledge of computer architecture and all the stupid little electricy bits. :-)
Then there is a stack of things that build on that, some of which are only relevant to some tasks: databases, network protocols, and so on.
None of this has changed fundamentally in 30 years. The only major change I've seen is an increase in the importance of continuous math, which I noticed because I've always had a hate-hate relationship with trig and calculus and so on. But all of machine learning and statistical techniques are based on that nastiness, so you can't ignore it any more.
Why does nobody mention technical things? For one thing, they're hard. You can practice communication at the grocery store; not so much with technical matters. Further, the people aspects are important everywhere, whereas technical things just aren't. And at some point, it's easier to convince someone else to do the work while you have the ideas. (Personal motto: Ideas are cheap. Implementation matters.)
Finally, technical mastery is not really encouraged. Outside of academia, there are no real incentives for it. (Inside academia, there are almost no incentives for it.) After 20, 30, or 40 years, people will migrate on to something else, even if they don't like it or are really horrible at it.
I only half agree with this. I agree that there is no one perfect language or stack and that there is a reason for there being so many alternatives. I also have worked with a fair share of languages and stacks by now and am not afraid of picking up any new technology when necessary.
But languages and stacks do have trade-offs. Sometimes the trade-offs can even be almost prohibitive (e.g. there are stacks I've worked on that were so immature they were a legitimate liability to the product and business, even if of course we would find some ways to mitigate them (but you can't mitigate something you're not even aware of)). In other situations, it depends a lot on the context: what languages/stacks does the team/company already know? What sorts of libraries are you going to use (there is no point in trying to use Ruby for an NLP project, for example)? Do you have special requirements in terms of performance, parallelism, etc. (that might exclude some languages)? And so on.
Agreed with the rest of your comment, and I want to add that "the ability to reason about a program in your head" is also why I tend to prefer functional programming with immutable values (even though, of course, that has its own set of trade-offs).
True, it's not that the stacks and whatever don't matter at all, just that they'll change, you will have to learn new ones, and so any kind of religious attachment to one is a sign of an amateur programmer (in the Gerald Weinberg sense).
I, too, really like functional programming for that very reason. :-) It just isn't the approach I grew up with, and the functional programming way of dealing with imperative code is still a bit hard to get my head around.
I couldn't agree more. That said, I believe the difficulty for new hires in software engineering typically comes from the relentless focus on coding interviews.
3. Simplicity
Fighting complexity is a never-ending cause. Solutions should be as simple as possible. Assume the next person to maintain your code won’t be as smart as you. When you can use fewer technologies, do so.
This is the one I appreciate more and more as I age. Because frequently, I'm the next person looking at my own code. And there's no better gift to your future self than well-written, easy-to-understand code that is straightforward to maintain.
I somewhat disagree with the way he connects simplicity and cleverness.
I once had to explain the behavior of a particular subsystem. It was governed by a bunch of simple rules, but their combination created a complex behavior. This was similar to social insect colonies, where components (eg. ants) are driven by relatively simple rules, but the whole system exhibits remarkable "emergent behavior".
What takes cleverness is to anticipate, predict, understand, and explain the global complex behavior from the simple rules. If someone can predict what comes out of cellular automata like the Game of Life, I am amazed. Simple is not easy (no, sorry Rich, following your simple advice is not that easy either). Simple is not always easy to understand, even less so easy to do. That's why complexity tends to accumulate.
At the risk of being wrong, I would say that being comfortable with complexity is a form of laziness. I am the messy type of person. I believe I am that way because I can rely on a good long-term memory: because of that, I can yield to laziness and not put away things where they belong when I should. Being able to handle complexity is a great quality, but not being discomforted by complexity is like not being ashamed of your messy home.
When you make the effort to really simplify, under favorable circumstances (like in hobby projects, where you can afford to move the goalposts a bit), you can make complexity collapse - not under its own weight, for once. I understand that the author emphasizes teamwork and communication, so he gives this kind of John Woods "Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live" justification for fighting complexity, but preventing complexity from collapsing under its own weight and crushing a product in the process is, to me, an equally important reason.
Something Alan Kay has talked about multiple times is how sometimes raising the complexity of your building blocks slightly can reduce the overall complexity of the system. He uses the example of modeling orbits with circles vs ellipses. Ellipses are more complex building blocks, but yield an overall much simpler model for how orbits work (not to mention the correct answer).
I think an important continuation of this same thought, however, is that complex systems can emerge out of simple rules, and reasoning about the behavior of entire systems (emergent or otherwise) is an essential element of being a programmer. So it is true that over-simplification and generalization of the elements can actually result in a much more complex and difficult-to-reason-about system, just as you point out about the Game of Life.
I guess my point is that sometimes the systems model essential complexity, and by trying to over-simplify and generalize elements of that system, you may make the system as a whole more difficult to understand and reason about.
At a previous job, I was the only person to document architecture structure and design choices in a document in each repo I worked on. They turned out to be very helpful for the new employees we took on over my time there. While that's a great plus, the real reason I wrote those was for myself because there is no way in hell I'll remember how to deploy this app to a new EB instance.
That describes my motivation for writing documentation perfectly.
In my last job, I wrote documentation so I could quickly get back into the context of the different problems I was working on. Context switching was a real challenge in that environment and I referred to my own documentation frequently. And sometimes, writing documentation helped me better understand the problem I was working on, similar to the idea that teaching a topic is a great way to learn it.
The idea of simplicity in software engineering is one of those hand-wavy notions that conventional wisdom is full of.
The simplicity (or the complexity) depends on what you want to do (the requirements). If the requirements themselves are complex, then expecting the code to be simple just doesn't make sense. Also, a piece of code will look simple to a person who has been working on it for a long time and has the background knowledge, versus a person who is new to the system.
Another thing to remember is that software systems are built incrementally, i.e. for each new requirement the developer has to figure out "how to implement this new requirement in this given code base with minimal changes", and this eventually leads to complexity (unless you want to do a big refactor for each new requirement).
Yes, simplicity is so important. I often adopt the stance of designing and writing software that I could forget. The intent is that when I come back to the code I've written, it can be easily understood. That means keeping things simple, keeping things clear, keeping things consistent, and keeping things well documented. Simplicity helps a lot.
I came here to say the exact same thing, but I want to extend it a bit more.
I've seen some developers, including myself, looking at some code and complaining "who the hell wrote this mess!", only to find out in repository history it was themselves. lol.
"When I was at GM, you were a failure if your next move was not up—managing more people or taking on bigger, more complex projects. For many, this made for a miserable career path"
As I've said before, I think this is a corrosive aspect of the perf/promo process at many FAANGs. The "level" system encourages/pushes people to "upgrade" in this manner, and I think contributes to a number of problems. For one, organizational incompetence, when people who were valuable contributors where they were get elevated into roles where they can no longer apply those skills as effectively (i.e. technical team lead to management or architect), leaving a vacuum below. A form of the Peter Principle I guess, except the individual may have competence in their new role but not be happy, or may make the team itself less successful.
And most importantly, as he touches on: being asked to 'level up' and told that this is your mission can lead to an unpleasurable career. Either when you do get that promo and then find that you don't enjoy the new responsibilities (but become trapped by the position / upgraded compensation etc.) or when you don't get the promo (or don't try) and find that your value in the eyes of yourself and others seems less.
And finally, I think this type of thing can really take hold in places with a highly academic background / focus / origin (like Google, etc.) as it mimics in many ways the grade / peer review achievement structure of academia. And that reward / grading structure may not at all correspond to either the monetary or cultural success of a corporation.
Smaller companies looking to grow/formalize should exercise caution when looking at the rating/performance/promo process @ FAANGs / MS as a model for their own.
(I'm at 20-25ish years in the industry, but really feel junior in so many ways when I read the words of veterans like this.)
First real job I had there was this guy who had one specialization. He was in charge of some software that drove tape drives.
Every new outside manager would come in and, in one form or another, look down on this guy due to his age and his generally not doing "a lot" of tasks and new products.
It took the local VP to come down and regularly high-five him after his product got rave reviews from customers (regularly) and make the point every time some manager didn't get it.
That guy's code, documentation, everything was rock solid, and the number of support cases for everything he did was so low that the dude was without a doubt the most productive person as far as income goes. You could sell the product that he worked on and just rake in money with almost no costs after that. Almost everything else had a lot of support costs and etc.
Meanwhile the guys who were doing all the new stuff sucked at trying to juggle 12 things because it looked good on a resume.
It's similar to the systems administrator dilemma.
Do your job well, it looks like you are not doing very much, why are we paying you?
Everything is on fire, it looks like you are not doing your job properly, why are we paying you?
I've worked with a few engineers over my career like the person you described and frankly they are worth their weight in gold. Being able to bank on their output working reliably and consistently over time is hugely valuable, doubly so when what they do is the bedrock of many other things.
Of course the unhurried, thoughtful person looks like they are slacking and the hurried, frantically working person looks like they are a hard worker.
Fundamentally it's because impact is harder to measure than perception.
I once worked at a place where the development process for the Windows version of their product was GLACIAL because there was no way to run the CI scripts locally or even set up a dev environment that could build the product. People actually edited code in a text editor and then submitted it to github and waited an hour for the CI job to build a virtual Windows instance, run updates, install Visual Studio, Oracle, and all supporting software, run the compile and bail with a syntax error :((((((
So I made a Powershell script to install everything from scratch on a fresh Windows 10 or Windows Server system using Chocolatey. The script took about an hour to run, but once it was done you had a Windows box (or VM) that could build the project in 2 minutes at most, or just run the CI script directly before committing. Suddenly it didn't take weeks to fix bugs in the Windows client, and corrupted installs or DLL hell was one "wipe", "run script", "walk away while it churns for an hour" cycle to a perfect dev environment again.
I was fired 6 months later for "not stepping up enough".
I solved a bug at my company that was costing $10k per month. The whole thing was not acknowledged because the leaders were embarrassed and probably afraid of repercussions for not realizing they were pouring that much money down the drain. (this was at a small company). Then I was sidelined by the CEO (a marketing guy) because I wasn't "senior enough" to do the job, whatever that means. I wasn't good enough, but the system I built for them is apparently still running several years after I left and looks the same.
Management should clearly see impact and that your work multiplier is higher. Their loss for sure.
Typically, higher-leveled engineers have higher workforce multipliers: their output is measured not only by their own work but by how they move the whole team, division, company, or industry forward.
If your test loop depends on waiting for what sounds like an end to end test, the core problem is that you’re missing lower level functional/unit tests. Nobody should have to regularly wait for a server to install visual studio and oracle crap just to get feedback on a code change.
Now this isn’t the sys admins fault at all. The sys admin definitely helped here but the devs still have a broken workflow.
This solution was designed to solve the whole package. The script is designed to run on BOTH a CI system and a standalone dev box (real or virtual), giving the same deterministic, stable environment so that everyone is always on the same page (Honestly I'm surprised this isn't done more often).
As a developer, you run the script once on your fresh Windows dev box, and then you can develop, run unit tests, or even run the CI script locally to have confidence that it will actually pass CI when you check your code in (and not overload the server with bad builds). In fact, I did all of my dev work in a VM because it's super easy to tear down and rebuild if I break the OS somehow or introduce an unexpected dependency ("it works on my machine" syndrome).
As an admin, you only have to include this one script onto a fresh Windows image, then run it and save the resulting VM image for the CI server to run (instead of installing everything on every run as part of the build script like they were doing).
The original issue was that everyone was using the CI server AS a dev environment because they couldn't get the project to build on a standalone box (it was tricky to set up, and undocumented, and the build scripts assumed a complicated CI environment). And management wasn't even raising a stink about this, despite the HUGE cost (in a large corporation I can understand this sort of thing falling through the cracks sometimes, but not in a small startup!). I'm actually not a sysadmin (I'm a developer), but I do hate inefficiencies like this enough to do something about it.
> Fundamentally it's because impact is harder to measure than perception.
Absolutely. Another nontrivial variable is the tendency for managers/team leads to get the "credit" for successful work, even if they aren't trying to.
I've had managers before that did nothing but impede a high-performing team. Luckily we delivered despite this. But it never failed: the manager would get accolades (often very public) for successful releases, customer feedback, etc., and would end up promoted. Meanwhile the team kept doing our thing, getting barely-matches-inflation annual increases and more and more micromanagement from the scrum diehards. Didn't take too long for the team to move on to better things. I have a dream of someday reuniting the super team for a sweet startup idea. Not likely to happen but I can dream :-)
> Do your job well, it looks like you are not doing very much, why are we paying you?
> Everything is on fire, it looks like you are not doing your job properly, why are we paying you?
Option 3: Everything is on fire, be really quick to respond to your manager and then poke and prod at things in production until it kinda works, repeat daily. This guy's a hard worker! I'll have to keep him in mind for a promotion.
On the other hand, lots of places have people who they _think_ are this guy, and who are actually just a bad dev who is good at building an impenetrable moat for job security...
I ask my junior sysadmins / new hires to write a daily log for their first year or so.
What did you do today?
What do you need help with?
What went well?
What was frustrating?
What should we change or fix?
When we get together to talk about their day or their week, these logs are extremely useful. Sometimes they generate tickets, sometimes book or course recommendations, people to talk to, sometimes just conversations.
As a sysadmin, I actually keep a log, but I'm under no illusions that anyone will ever see it but me.
I used to track my tasks in Jira with the thought that maybe my boss would look at it and see what I was doing. Of course he wasn't looking at it so I stopped doing it.
My log is mostly for me to provide a list of achievements to my boss whenever there's a review, or if I'm questioned about what I spend my time on.
It would, but these logs can also feel like oppressive and meaningless bureaucracy. A lot of sysadmins are not even good at typing...
It’s one of those things that would probably be naturally suited to voice-activated systems (“Hey Siri, today I updated system X to avoid Y, and I wrote a script to improve process Z on input A” - 5 seconds to say, log saved in the right place, keywords like “updated” and “system X” recognised and formalized into some record, etc), if only such systems actually worked on any word beyond the most trivial (good luck with context-less mentions of pythons, rubies, native-american tribes who somehow serve pages of webs, androids in phones, etc etc).
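Purely as a toy illustration of that idea (my own sketch, nothing real-world): a handful of verbs and a "system X" pattern are enough to turn a dictated sentence into a structured record, which is the easy half of the problem; the speech recognition is the hard half the parent is joking about.

    # Toy sketch: turn a dictated log sentence into a structured record.
    # The verb list and the "system X" pattern are assumptions for the example.
    import datetime
    import re

    VERBS = ("updated", "fixed", "wrote", "deployed", "removed")

    def parse_entry(text: str) -> dict:
        # Pick the first known verb, or fall back to "other".
        verb = next((v for v in VERBS if v in text.lower()), "other")
        # Pull out anything phrased as "system <name>".
        systems = re.findall(r"system\s+(\w+)", text, re.IGNORECASE)
        return {
            "date": datetime.date.today().isoformat(),
            "action": verb,
            "systems": systems,
            "raw": text,
        }

    print(parse_entry("Today I updated system X to avoid Y and wrote a script for process Z"))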
> Meanwhile the guys who were doing all the new stuff sucked at trying to juggle 12 things because it looked good on a resume.
This isn't helped by the current trend of trying to hire mostly fullstack developers, also known as hiring one person to do the job of three people. It does almost nothing but incentivize packing resumes to hopefully make the cut and burn people out from switching modes.
Fullstack does not mean that you will burn out. Fullstack can mean 40 hours a week, never overtime. Also, for smaller apps, after you have had at least a few months of experience with all the technologies, it is not difficult to switch modes.
People who specialize will be better at their specialization, but you will be good enough for the majority of apps.
At the end of the day, some people like JavaScript and CSS. And "full stack" only means that they will work with those, SQL, and some backend language (which may be JavaScript too). It doesn't imply anything about the workload; all it implies is that the place is hiring a generalist instead of specialists.
The last job offer that made me smile was for an AI / fullstack developer.
Aka the platypus of software dev.
The problem is pretty simple.
Let's say you are cutting-edge in all the technologies that make you very adaptable at time T.
You get hired.
You decide a stack A/B/C/... for your next project.
You get stuck in that project for ... let's say 3 years.
(a reasonable time for a project to reach production and have a few real-life iterations).
If you are not an assh... who leaves after 2 years (leaving the production/maintenance to others, i.e never having lived the "hell" of maintaining things for real), then you are 3 years-behind in all the technologies that are not in the A/B/C/... stack.
Now, at time T+3, you are ... the obsolete man [for TTZ fans only :)]
Frontend has very little to do with HTML and CSS nowadays.
It all has to do with framework choice, the build toolchain, unit-testing the front end, server-side rendering vs. client-side, the architectural layers (or bricks) to use on the client side, SEO, performance, CI/CD, browser compatibility, and component lifecycles (yum, library version upgrades!).
That guy could perfectly well be a tech lead, establishing his rock-solid approaches on a team of 4-10 junior engineers and delivering rock-solid new products.
Another story may be that he is not promotable to such a position because of the mentioned "prickly" behavior, which may or may not be just engineering honesty and a straightforward approach, in contrast to the bs (sugarcoating, ass kissing, exaggerating, sweeping dirt under the carpet, etc.) culture established in many companies.
Expecting a talented software engineer to "upgrade" to become a manager of a development team makes exactly the same amount of sense as expecting a talented accountant to gain a bit more skill and suddenly become a lawyer.
There are other paths at Google than upgrading to mgmt, but all of them involve "cross-team collaboration" and other quasi-political (with a small-p) aspects.
Actually, just writing code at Google is, from my experience, a small part of the job and not rewarded. The faster you get out of writing code and into designing and delegating it, the better off you are. Sucks if you don't like it.
Coming up with a way to reward and improve nose to the grindstone technical contribution seems to me key to having an effective organization.
EDIT: also consider, doing management or team lead at a company like Google is so different from, say, a small company, that the trajectory may make no sense for somebody. Before I came to Google I felt myself on a career track that was team-lead/architect focused, mgmt interest, etc. Once I got there, that evaporated. I just couldn't imagine myself doing that kind of work in this large of a company.
Over the course of a number of years and a few jobs at IBM starting in the early 90s, I knew 2 people on their "technical track". Their jobs involved huge numbers of airline miles, continual meetings, and essentially no technical work.
That was one of the primary reasons I stayed a contractor most of my career.
Interesting that your contractor jobs haven't also been filled with non-technical, biz-type work. Lack of desire to deal with accounting, client meetings, and all the organizational aspects of running a business has kept me away from contracting or consulting.
Might be time for me to revisit that, this being the year when I really think I need to start something new. I have no idea how to make that switch though.
Are you contracting through your own business, or someone else's?
Typically, no. I almost always worked as an employee of a technical contractor---I was an IRS W-2 employee; they withheld taxes and provided (sometimes decent) health benefits (although I decided it was better to buy my own rather than changing insurance when I moved). I had no more than the usual accounting and organizational challenges.
For me, getting into it was easy. I packed off a resume to several of the local contracting companies. They pass the resumes to the ultimate company, who will do an interview (usually very low stress because it's easy to get rid of you) and then the contracting company and the employer work out all the details. (It may have changed, but at the time of my last such deal about 2005, the contracting company absorbed about 15% of what the end company was paying for you.)
Do realize that you need to keep at least 6-9 months of your income in a readily accessible account (savings, money market, or (whoo, I'm old) CD). I never went more than a week before I could find another job, but I have known people who had more trouble.
I have since gone into federal government contracting, which is another whole bag of stinky fish heads.
I've moved into contracting after being a software engineer/developer/team lead.
Essentially since you are contracting, you are outside of the "bubble" where decisions regarding business get made. So while you may have important work on the project, you are not there when they are deciding on business aspects of it, you are usually not invited to any type of sales meetings or meetings with the clients. This is reserved for company people.
Another thing you can do is outright reject that type of work, or slowly move toward a client that doesn't require it. When you are working in a company, you usually do not have the power to do that. Your only option in a company is usually to quit and then hope the next company doesn't pull you into that meatgrinder.
The down side is that you don't have any direct influence on those business decisions. (At one point, I wrote about three different login/single-sign-on clients for a web app infrastructure (including Kerberos/Active Directory) before they finally decided that SAML2 was politically/technically the right choice. Frustrating.) You do have indirect influence if you have a good relationship with your team lead, who will likely be a "real person", a real employee.
If it's anything like my experience hiring contractors, it's definitely that they get the most "fun" part of the work. Most companies use contractors (in engineering) precisely for well-defined technical work. They don't have to manage people or sit in meetings that are not relevant. We pay them by the hour and we try to get them to work on the stuff where they can contribute the most - dealing with politics is expensive.
In the 2000's I did contractor work by myself (and occasionally hired a friend or two to get a project done). It was a disaster. I spent >50% of my time on client, non-technical work. I then joined a ~20 person firm in San Francisco and worked there for a few years purely coding. It was a joy. They had great business people, designers, copy writers, and then about a dozen great engineers, and I could just focus on the code and get paid for that. My hours went down, pay went up, and got to do what I love.
If you are thinking of contracting/consulting, I'd highly recommend joining a diversified team in the 10-100 range so you can do the parts of the work you love and rely on your team for the rest.
Yeah, I have no interest in running my own consulting business.
In my case, it was usually a contracting company with 10-100 contractors working for one or more client companies, but the contracting company had zero day-to-day influence on the work I did. (I did try to pick up my paycheck in person and chat with the office staff.)
Personally I "think" I want to just code forever, but I migrated from coder to more of an architecture role. However, for me, the growth is just not there. It is in the cross-collaboration, mentoring and industry influence is how you develop. In some companies time in grade is a factor in layoffs. Due to ageism companies might not be looking for someone at a certain age and experience to just grind out code full-time, but to work on a product holistically.
I've seen people from small companies who played the tech lead or manager roles move into Google as "Senior Software Engineer" or whatever the equivalent is - basically writing code again. And the reason is in your edit. In fact, a lot of the "hot shots" at small startups - people who played nice with the boss and could wrangle Jira but were actually excellent coders - can't do the politics at the big companies.
Part of that is that the big companies typically hire suits to do management - which were basically the jocks in high school that always got the girl... that type - they run everything in the world now. Loud mouth tech bro asshole types.
I think a lot of technical leadership is a side-effect of ability. People with a lot of technical experience are able to view projects at a high level (and judge other programmers accurately) as a side effect of being good. You want them cross-collaborating because they have better eyes than someone without experience. Whether they know to iron their shirts doesn't matter.
In baseball, coaches and managers are all ex-players, but not necessarily good ex-players. (Well, anyone who plays in MLB is “good”, but relative to the rest of the league I mean.) Just like in other industries, the skills required for a good coach or manager don’t always correlate with being the best player.
> I can't think of any sport where players are expected to become coaches.
In amateur sports it's common. E.g. in rowing, cycling, fencing, cross country skiing, etc., people are generally expected to coach at some point. Albeit usually while they're doing the sport, and not necessarily as their primary careers.
To some extent, coaching something you practice is good for giving you another perspective on what it is you're practicing. In an ideal world, this will elevate your skill level.
This does, however, not necessarily mean that everyone is a good coach, nor does it mean that everyone should stop practicing what they're doing and focus solely on coaching.
Football/soccer? Of course there are more players than coaches so they can't all become coaches, but I would say star players are expected to become coaches, and lots of them do.
Football team coaching staffs are almost like another team. Position coaches are usually ex-players of that position, but as you move up the chain it gets more into strategic management, and thus by the HC level it's more of a manager role, with architects underneath for offense and defense - less position-specific technical knowledge, more game knowledge.
Well, only 1/3 of them played Pro before going into coaching. The rest were college players (and two only played in High School) who switched to coaching, mostly because they didn't get drafted.
In my mind it's not that far-fetched, depending on the person. The transition from experienced dev to mentor to leadership doesn't seem all that unnatural.
The only difference between a manager and the other on-the-ground engineers is that the manager works more closely with the stakeholders to understand their requirements in more detail than the other engineers have time to. With that additional knowledge in hand, the manager helps guide the other engineers to make the right tradeoffs. It is still engineering, just at a different level. A level not everyone enjoys working at, granted.
I guess if there is an accounting analogy, it would be selecting one accountant to work closely with the government to ensure that the tax law is fully understood and to watch the other accountants to ensure that they are following that tax law, reducing the inefficiencies of all the accountants in a firm needing to spend their days acquiring a perfect understanding of the tax law and not spending their days doing the practical accounting work.
That’s a really weird analogy... Writing software (in any reasonably challenging context) is not administrative grunt work with clear and easy definitions of right and wrong.
It’s an art and a science. So I much prefer the comparison with pro athletes who become coaches or actors who become directors.
Having been employed at Google for nearly two years, your take doesn't seem accurate at all.
> And that reward / grading structure may not at all correspond to either the monetary or cultural success of a corporation.
The key feedback/suggestions I see for my own performance review is to define my impact on both the monetary and cultural success of my org. Exactly the opposite of what you are saying.
> being asked to 'level up' and told that this is your mission can lead to an unpleasurable career
I don't see this happening either. I commonly hear others say the opposite and make it known they are no longer trying to level up and that they are happy where they are.
> technical team lead to management or architect
This does not jive at all with the various career ladders I see. There is no ceiling that requires me to move to management in my tech ladder.
> find that you don't enjoy the new responsibilities
To some extent, the promo process levels up employees already working at that n+1. Sure, some may not want to maintain that, but that is ultimately up to the individual.
Disclaimer: I work at Google, opinions are my own.
> "to define my impact on both the monetary and cultural success of my org"
What you get out of that is people frantically chasing impact and visibility instead of focusing on the humdrum drudgery of keeping the lights on. The result is penny wise and pound foolish behavior that definitely does not correspond to the monetary or cultural success of the corporation. People who are good at gaming the system in this can create their "impact", collect the rewards, and make an internal transfer before the costs or superficiality of their "impact" catch up with them.
> As I've said before, I think this is a corrosive aspect of the perf/promo process at many FAANGs.
So I have to ask: do you or have you worked at a FAANG with these systems? I may be wrong but I suspect you haven't, particularly if you're equating them with more traditional hierarchies.
The whole point of a system like this (and I have direct experience at Google and Facebook) is so you can go pretty far as purely as an IC. You're not forced to become a manager.
So at Google for example, new hires start as a T3 (T4 for PhDs). There is an expectation for growth up to T5, meaning technically you are meant to progress over time. When I was there this wasn't strictly enforced (eg I knew people who had been T4 for 5+ years) but it may vary from PA to PA or manager to manager and it may well have become stricter.
IIRC the general guidance was 2-3 years from T3 to T4 and another 2-3 years from T4 to T5.
At that point you can sit at T5 forever if that's what you want to do. People's desire to get promoted leads them to become managers because it is demonstrably easier to get promoted from M1 (T5 equivalent) to M2 as an EM than it is from T5 to T6 as an IC.
These higher levels are really an indication of your organizational and technical impact and for this you really have to influence others. This is not being a people manager however.
But the point is that there's no "up or out" (beyond T5) like you may find at IBM or KPMG.
Now there are definite issues with Google's approach here and that's really a whole other topic. I just don't think your observations here adequately describe the FAANG career paths (IMHO).
> So I have to ask: do you or have you worked at a FAANG with these systems?
Yes, 9 years at Google. But yes, not experience beyond the L4/L5 tier.
It is true that the IBM type structure isn't the same, and yes, Google is fine with you staying around L5 forever (well, L4 now). But the matrix of things to get beyond L4 really is, in the grand scheme of things, about moving beyond development and into delegation, or at least ownership. It's not management, but it is about leadership/cross-team collaboration and "demonstrating impact" to others. So, like I said elsewhere, small-p political.
At least that's my experience from seeing the L4 to L5 transition. But it may also be a product of my smaller office, where the number of projects is smaller, team size is smaller, and larger technical contributions of impact are harder to find.
EDIT: Also I've seen a lot of change in the 9 years, in terms of how the organization as a whole behaves, and it is becoming more and more like a traditional BigCorp. I just checked percent/ and within engineering 86.43473% of full-time employees are newer than me. And a lot more if you count non-engineering. It is a way larger company than the one I started in.
EDIT2: I should underline, that the perf/promo process obviously works for a large, perhaps the majority, number of people. But it doesn't for all. It requires adapting to an organizational model that not everybody accords with. And I think that's in the spirit of the original topic: organizational structures / procedures that become your career goals may not make you happy, so find a company whose process matches what you want to get out of life. Or try.
The senior IC ladder saves you from being responsible for people, their job satisfaction, their career progression. But it is still a kind of management.
Facilitating meetings, reviewing documents, tracking schedules, reporting progress, convincing teams to prioritize the work, negotiating with those challenging the technical decisions, securing credit, deflecting blame, getting resources, etc. The model of an effective L6 or L7 IC is part secretary, part Frank Underwood.
You're right that you don't have to pursue L6/L7, but L5 is attainable in < 4 years. No 46 year old wants to be doing the same thing for the same pay as a 26 year old.
> No 46 year old wants to be doing the same thing for the same pay as a 26 year old.
Why not? Being a T5 SWE at Google is (or at least it can be) pretty chill. It's a sweet spot for low stress and relatively high compensation. Why exactly do you need to "advance" your career, particularly if you don't want to be actually or effectively managing other people?
The alternative is the "up or out" approach that drives engineers in other industries into being (usually bad) managers.
L5 / "Senior Software Engineer" expectations are such that you have to have "influence beyond yourself", own some area of work, set technical direction for some other engineers. You're either a team lead or an "exceptionally strong individual contributor".
It's not really that chill, and if you don't continue to do those things (lead or be exceptionally strong) that will show on your perf and therefore your compensation. Going from L4 to L5 means committing yourself to doing that on an ongoing basis. Remember at Google that "consistently meets expectations" is only a 2 out of 5 rating, just above "Needs improvement."
Coasting at L4 ("SWE III") could be fine. Large independent technical contributions, manage your own priorities, participate in design, etc. Solid individual contributor. Really equivalent to "senior developer" at most other jobs.
But now let's say you want to go transfer to a new project. The manager on the other team sees you've been at Google many years, but still at L4. Hm. Results may vary.
Not everyone makes a good team lead. Especially in a place like Google surrounded by PhDs and super achievers. But the expectation at Google up until very recently was basically that you should become that, or get out. Now in the last few years, it's been stated it's perfectly fine to plateau at L4. But I'm not convinced that that's the reality of the culture or expectations of managers.
So are you basically describing stack ranking here?
It's just impossible for everyone in a company to be a team lead, so not everyone can take that route to becoming an L5.
Which leaves the other way to become an L5: being an "exceptionally strong individual contributor", and being that "on an ongoing basis". Not everyone in a company can be an "exceptionally" strong person, otherwise it wouldn't be exceptional any longer.
You're saying the expectation was to either become L5 or get out. So now we have all these PhDs and super achievers working at Google, but only some of them can be exceptional vs. the others.
You were exceptional vs. regular people and made it to L5. Now comes in a super achieving PhD that's a tad better than you and now he's exceptional. You are no longer exceptional. You're just an average super achiever. So get the eff off Google's lawn. Stack ranking completed.
Two years is a long time to stay in one job. Two years is an eternity for a team's headcount not to grow. It's virtually certain that you'll be a mentor to several new hires in that time. You'll also have a long head start on understanding the codebase and tools relative to many others who will be working with them. If you're not some kind of leader at that point, something's wrong.
Unless you found one of those rare teams that stays together for the long haul, but the experience with such a special phenomenon should make it more than worth your while.
If L4s are not getting opportunity to lead, they are on a sinking ship anyway and the smart ones are LeetCoding.
(Obviously this all reflects the crazy economic moment of the last 10 years in Silicon Valley, but so does the level system you're critiquing).
> Two years is an eternity for a team's headcount not to grow.
Teams do not expand to infinity. In fact, it is often better for teams not to expand.
If you are expected to be a leader after two years, because the teams are expanding regardless of whether more people are needed and because everyone else is fresh and new, then there is an issue with management.
Where do you think the crazy stock growth for the entire tech industry over the last 10 years is coming from? Having a massive backlog of productive opportunities, scaling up to execute on more and more of them in parallel.
I don’t expect that to last forever, but it’s the context that this system is built for.
Yes. The challenge is how to measure individual contributors. It’s easy in Sales, which is why salespeople can do very well without being managers. Harder with engineers, whose work is very interconnected.
For sure. And it's why frankly in my opinion it's very hard to scale software development effectively beyond 25-50 engineers. After that politics and faux-meritocracies take over as people lose personal touch with each other.
Teammates usually know who is a good engineer and who's not - much more accurately than management does.
When I was at Google, the perf process was based on peer feedback, and it worked well.
I'm happy to lead as long as I am also allowed to work on the code itself, from design to actually writing code. I have been doing this full time for 35 years and I would be miserable had I switched to a pure management role at any point, including now. Meetings kill motivation for me.
> Smaller companies looking to grow/formalize should exercise caution when looking at the rating/performance/promo process @ FAANGs / MS as a model for their own.
Robert Townsend had a comment that stuck with me: you don't get to be GM by behaving like GM. Which is to say, beware of cargo-culting successful organizations as they exist now. I'll also suggest not cargo-culting hyper-funded startups if you aren't one, either.
I spent 33 years in the industry as a developer. Sometimes I considered going into management, and was asked about it, but fairly early on I realized I would hate it and it would hate me.
> I think this is a corrosive aspect of the perf/promo process at many FAANGs
Disagree on personal experience with two of the FAANG Cos I have worked for.
A person content in their position, performing/delivering as expected can (and many do) coast. There was little to no pressure from these firms to "move up or move out".
There is a corollary to this: having a career of sideways moves and simply not progressing, even in ways that are positive and to your liking, because each move effectively puts you back to zero - and companies taking advantage of the 'flat structure' myth to avoid having to provide career progression.
I think of it as vertical vs horizontal career path. Vertical is becoming a manager of more and more people and/or projects. Horizontal is becoming an expert at more and more technologies/languages/etc.
At FAANG there's always an IC track and a management track. Additionally, promotions are almost always lagging, meaning the individual needs to perform at that level for some time before being promoted.
> “When you know something it is almost impossible to imagine what it is like not to know that thing. This is the curse of knowledge, and it is the root of countless misunderstandings and inefficiencies.”
A few months ago I came across an interesting actively developed project in embedded rust which was very technical. Full of acronyms and concepts that I was not familiar with. It took quite some time to get up to speed with how it all worked and so I took the time to add / expand README.md files for each example, explaining what the acronyms meant and the general gist of each example. It was a worthwhile exercise for myself and I figured that it was worth sharing the perspective of a beginner too. My benign pull request remains ignored and unmerged and it makes me feel like a bit of an imposter. I thought nothing of it at the time but now I wonder if this is classic case of “the curse of knowledge”. Perhaps the author just can’t see the value of it.
When new colleagues are joining our team or people start working on a subsystem unfamiliar to them, I usually tell them that this is a great opportunity for us as an organization and that they should be very vocal about everything that appears strange. Their lack of knowledge is a valuable asset that must be exploited while it's fresh. Unfortunately, it often fades within a few days once their mental model adapts to our quirks.
I think they are doing themselves a disservice by ignoring something that elevates overall understanding, ease of onboarding/troubleshooting and generally, the polish, of the project. I love documentation and usually manage to set aside time to add details in the README/wiki etc. It is a great way to share details with others and myself, six months down the line, because I don't have a super sharp memory.
It depends. A README for a custom TCP implementation does not need a section explaining what TCP is, that’s just clutter. Some types of projects are targeting a specific user group that has to have some level of knowledge.
Example: The readme for ripgrep does not explain what a shell is or how to pipe output.
This is true but the “curse of knowledge” from the article explains why this can happen. It is easy for outsiders like you and me to see the value of such things but as an insider you can become blinded by your knowledge.
That is possible. Though I equally like documenting things where I can be considered an "insider" that I know only other "insiders" will see. I think that it is about being able to see the usefulness, and beauty, of a well made documentation irrespective of the level of familiarity/proximity. Either you see value in it or you don't.
"Perhaps the author just can’t see the value of it. "
Maybe so. But maybe he sees more of the work involved.
If the project is actively developed, that means it is probably not stable.
Documentation is only of value if it is updated with the code.
Worse than no documentation is wrong documentation.
So your explanation of the acronyms should remain true, but everything else maybe not. And it is work to figure out which is which and to verify it, even if a beginner comes and helps with that.
So while it would be nice if they had written better documentation in the first place, or merged your PR, they probably just think the cost is too high.
You did nothing wrong. My team produces roughly equal amounts of documentation and finished product code or other artifacts for bank consulting engagements. Knowledge within the customer's organization is usually scattered all over the place and very few people know how it all fits together due to employee turnover and attrition. The actual customer (VP) is grateful when we are able to piece together what is really happening and the high level state of the system while we streamline their systems.
This is actually the much less known flip side of the Dunning–Kruger effect [1].
> Moreover, competent students tended to underestimate their own competence, because they erroneously presumed that tasks easy for them to perform were also easy for other people to perform.
What I'd be really interested in is how tech churn is perceived by people older than me. I'm only 30 years in, so I tend to defend my generation's choices such as POSIX, SQL, XML, SOA, Java, and C/C++ before that (plus special-purpose pet peeves of mine such as markup/SGML and logic programming which came before) though I'm also claiming to be proficient in and generally open towards new tech. I consider most of supposedly new tech as rehashes of things past rather than real progress, and to be just lock-in schemes and sucking in another way compared to the tech that it is supposed to replace. But I'm really uncertain if I'm just falling victim to generational effects, like, say, the proverbial COBOL programmer being ridiculed by younger devs to instinctively grow their own tech career. Still, for me, there's a moment around 2006-2012 when the industry went all-in on "the Cloud" and consumerisation of IT, when before I saw canon, progress, and consensus through standardization.
I guess this is something that can't be objectively measured; but I can say that initiatives for standardization have almost dropped to zero compared to the 1990s and 2000s.
> initiatives for standardization have almost dropped to zero compared to the 1990s and 2000s
I think OSS adoption explains this. Standards were all about advocating for common interfaces, even if the implementations were proprietary. Proprietary software was more a thing in the 90s and 00s.
Nowadays, common practice is to use open implementations, not just open interfaces. Nowadays, we only see widespread initiatives for standardization when people are trying to cement the long-term survival of their implementation's interface.
See, for example, the Open Container Initiative, put forward by Docker (a company that engineers were concerned about betting the barn on, in 2015).
Contrast that with S3, which does not have a corresponding commitment to an open interface standard. Their interface has been copied nonetheless by several vendors (DO, Linode), making it a de facto standard, albeit a fragile and autocratic one.
Basically, I'm not sure if standardization has evaporated, I think it's diffused, and OSS is (ironically?) a contributing factor.
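To make the S3 point above concrete, here's a minimal sketch (my own, not from the parent comment) of why the S3 API acts as a de facto standard: the same client code talks to AWS or to an S3-compatible vendor simply by swapping the endpoint URL. The endpoint and credentials below are placeholders.

    # Sketch: the S3 wire protocol as a de facto interface standard.
    # Point boto3 at an S3-compatible endpoint (e.g. DigitalOcean Spaces)
    # and the same calls work as they would against AWS itself.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",  # placeholder S3-compatible endpoint
        aws_access_key_id="EXAMPLE_KEY",                      # placeholder credentials
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # Identical code against any implementation of the interface.
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])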
Before the triumph of OSS, standardization required multiple parties to come to agreement and then each produce their own implementation, so implementers had an incentive to fight for their own objectives so that they didn't get backed into a corner of having to do unprofitable work. Lots of bickering about what was and wasn't worth including. Navigating this environment requires committees and formal standards documents.
But you know what's a great consolation prize for not getting exactly what you want? Not having to do any work! So when Docker comes along with Docker Containers or Google comes along with Kubernetes, the consolation prize of not having to do any work is a lot larger than the downside of not getting to have any input. Working code is working code, especially if someone reputable has also promised to maintain it for free.
Adding on to your point, the phrase “designed by committee” carries strong negative connotations for a reason.
People may have some gripes with the implementation of some subsystems, but they are free to fork and submit patches for review.
This opportunity for grievance remediation did not exist in the old world we’re describing, and contributed to the outcome that standard interfaces were the unholy logical union of each member’s opinions and beliefs.
I agree that the benefit of not doing work far outweighs the downsides, for most time horizons (<10 years).
If you’re thinking longer term than that, interface standards might be better for you. But few people are.
I'm glad you've added that last paragraph, because I had the impression you'd welcome our new F/OSS overlords, even though I factually agree with your point about open implementations having replaced open standards.
What I mean is that, while people love to argue about F/OSS free vs open licenses nuances ad infinitum, the reality is that power has shifted to a very small number of players (FAANG, RedHat/IBM, MS et al) who've captured F/OSS, without choice, discussion, evolution (of competing ideas). What you call "design by committee" can alternatively be seen as a defense against unilateralism with the power dynamics we've headed into.
"Open implementations" means nobody can make a buck and sustain development and innovation. The only gain to be made is through integration and attention economy, creating perverse incentives.
Take Linux: they're trying to implement an operating system for like 30 years now in a monolithic fashion. The power of Unix is not that it's the most advanced operating system, even by 1970s standards, but that it is a minimal portable system that can be created from scratch in one or two years.
Linux being GPL hasn't prevented it from being used for a giant spynet (Android) nor a lock-in scheme ("the Cloud", k8s) taking Unix principles of site autonomy and small parts working together ad absurdum (the absurd part being that to shield against minor F/OSS version conflicts we need opaque almighty container orchestration and an accompanying zoo of tools, all the while we're doing largely the same we did 20 years ago on much less capable hardware).
Take so-called web standards: the idea of standardization is originally motivated by digital humanism, eg. that we do the best we can to come up with idioms and languages for digital communication and its preservation, accepting inclusion over perfection. The reality is that this idea has been usurped by an ad company taking all communication into an analytics-heavy medium more idiosyncratic than ever, leaving a single browser capable to render web content in its entirety, where "standards" (HTML, http) are created by the dev team of said browser. We didn't need that; we had CompuServe, AOL, and desktop operating systems for this purpose.
Oh yeah, I totally agree that the long term picture here is quite murky at best and frightening at worst.
> “Open implementations" means nobody can make a buck and sustain development and innovation. The only gain to be made is through integration and attention economy, creating perverse incentives.
I don’t know if I wholly agree with this point.
Especially in the most recent decade, we’ve seen a lot of the F/OSS developed at attention economy companies applied to completely disjoint industry sectors, with positive effect.
I work at an insurance company, for instance. The fact that we’re using gRPC/protobufs allows us to have a small engineering team that hits way beyond our headcount. Happy to elaborate more on the point, but I’ll leave it here in good faith.
I think that our dependence on this technology is a smart choice in the short to medium term. In the long term (10+ year), the worst case scenario is that we have to maintain a fork, or migrate to something more secure. We would have had to do this anyway, if we built our own RPC framework.
I agree that OS’s and web standards are becoming less and less user serving, and more corporate/attention economy serving.
I wouldn’t be shocked to see the web bifurcate eventually, between HTTP/HTML/JS and something simpler served over a simpler protocol.
That being said, I do think there’s some value in having an opinionated author compelling people to conform to their standard. When that leadership is absent, you end up with something like the Bluetooth or USB standard, and everybody suffers.
But at least those standards will exist forever, until they’re superseded by something that’s a superset of them. The same can’t be reliably said about the F/OSS we’ve been talking about.
It’s all a series of trade-offs. I’m not too pessimistic. I think we’ll eventually end up in a better place, but we will likely stub our toes and bump our heads many times on the way there.
I'm only 15 years in but have thought about this a lot. I agree with you that there is nothing new under the sun; that we just cycle back through old ideas; fashions come and go and come back.
However, the environment changes so often old ideas go from being "possible" to "they just work" or "possible but really slow" to "instant".
These environmental changes can make orders of magnitude differences, and cause old ideas to suddenly feel very different in practice.
When I think of programming languages, for example: of the languages you mention, a lot of their design (like matched brackets in XML) came from decisions that made a lot of sense when we had monochrome editors, but now, with extremely fast and advanced IDEs, we can take old PL ideas from the '50s and design languages with IDEs in mind that are a lot less cryptic and concise.
Good points. Though if you've picked up computing from almost the ground up (soldering, hardware hacking, low-level programming, of which I did only a little however), there's that experience when sitting in front of an overwhelming, notebook-melting IDE where you say to yourself "I don't need all those arbitrary abstractions; I've got a pretty good understanding of what I want to achieve, thank you very much" and realize the cognitive overhead can become a net-negative compared to the perceived problems that modern IDEs are attempting to solve. Matter of taste, of course.
As to XML, matched end-element tags (if that's what you mean) actually were a simplification compared to SGML, from which XML was derived/subset. In SGML, you can omit/infer end-element tags or can type "</>" to make SGML auto-close the most recent element. I agree the verboseness of XML looks especially redundant, since it's always the most recent element you have to close anyway, whereas SGML (in principle, at least) has overlapping ("concurrent") markup. And I can assure you that around 1998 we had 32-bit color monitors and IDEs didn't look all that different from today ;)
But back to the topic, I believe a lot of material has simply vanished from the 'net, or isn't accessible through search engines anymore.
IDEs and languages that an IDE can exploit are awesome. Especially if you don't really have such a good idea of a project (yet). It helps architects or contractors tremendously (as examples of people that might jump from one code base to another quite frequently).
What I mean is that I personally really like statically typed languages like Java and compile time safety. I even like some of 'verboseness' that people always complain about. I can take a modern IDE and a Java project that hasn't replaced everything with runtime magic yet (Spring comes to mind) and I can simply click my way through things to get the info I need and/or to build a mental model. However, I don't need a finished mental model already just to be effective at every task. Also refactorings that are really braindead simple and that you don't even have to think about are possible, precisely because they're simple and guaranteed to be correct.
Contrast that with other languages, such as JavaScript, Python, Perl (yeah, mentioning that because I loved Perl back when I was doing almost exclusively Perl - with a bit of shell scripting and lots of SQL - at my first 'real' job), where you have to already have a mental model and you have to know certain 'magic' to even be able to search for the right things. Something that stuck in my head in that regard was AngularJS. I forget the specifics, but some type of identifier used underscores in one place but dashes in another. How the eff am I supposed to find things easily? I have to know that magic conversion and grep for it specifically. If I come to a FE project written in Angular and have never done Angular, I will not find anything whatsoever and have no choice but to learn Angular just to do basic things. And coming back to refactorings, even a simple rename can be a pain in the rear to do (so you won't do it and will be stuck with really bad naming), because you can't be certain you found all the right places to change until runtime, i.e. your 2 a.m. batch run fails and you have users screaming at you. That teaches you to just stick with bad naming real fast.
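To make that concrete, here's roughly the kind of mismatch I mean - hedging a bit, since I may be misremembering the underscores; I believe the usual AngularJS culprit was camelCase names in the JS versus dash-case in the templates:

    // Sketch of AngularJS-style name normalization (illustrative only):
    // a directive registered as "userBadge" in JS shows up as <user-badge>
    // in the HTML templates, so a plain grep for either spelling only ever
    // finds half the usages.
    const toDashCase = (name: string): string =>
      name.replace(/[A-Z]/g, (c) => "-" + c.toLowerCase());

    console.log(toDashCase("userBadge")); // "user-badge"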
Good languages and frameworks, if you ask me, allow someone who is senior in their role but a total noob with the specific language or framework to look at existing code and easily make certain modifications.
I once wrote a simple T-SQL (Sybase, not MS-SQL) parser that output Graphviz format to get a call graph of a backend job we had, which consisted of a gazillion individual stored procedures strewn across a gazillion files, then printed it and hung it up on the wall, just to be able to navigate that thing. Cue the IDE where you just Ctrl-click for the same end result.
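The modern equivalent would be a few dozen lines of script. A rough sketch of the same idea (hypothetical file layout, and a much cruder regex than even my little parser had):

    // Scan *.sql files for stored procedure definitions and the procedures
    // they EXEC, then emit a Graphviz dot file of the call graph.
    // Assumes one "create procedure" per file; real T-SQL needs more care.
    import { readdirSync, readFileSync, writeFileSync } from "fs";
    import { join } from "path";

    const dir = process.argv[2] ?? ".";
    const edges: string[] = [];

    for (const file of readdirSync(dir).filter((f) => f.endsWith(".sql"))) {
      const sql = readFileSync(join(dir, file), "utf8");
      const def = sql.match(/create\s+proc(?:edure)?\s+(\w+)/i);
      if (!def) continue;
      const caller = def[1];
      for (const call of sql.matchAll(/\bexec(?:ute)?\s+(\w+)/gi)) {
        edges.push(`  "${caller}" -> "${call[1]}";`);
      }
    }

    writeFileSync("callgraph.dot", `digraph calls {\n${edges.join("\n")}\n}\n`);
    // Render with: dot -Tpng callgraph.dot -o callgraph.png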
Very interesting indeed, and I got a glimpse of that through my dad, who worked for only one company all his life. Especially as I got older and became interested in computers, he started talking about work stuff sometimes. His first programming language at work was 360 assembler. For the uninitiated/too young to remember: https://en.wikipedia.org/wiki/IBM_System/360. When I started working at the same company for my first job and had to log into 'the host' - meaning the IBM mainframe they had (by then, I believe, a Z series: https://en.wikipedia.org/wiki/IBM_Z) - I noticed that the custom login screen for their entire system had actually been written by my dad.
So all this just as a little context. But when I talked to him about SQL, like 15+ years ago, he would always tell me that sure, that's nice and all, but all those joins? They're painfully slow. Why would you do that? They had a table that had exactly all the information they needed, and they accessed it via a key. Fast and easy. Need a different view of the data? Make a new table that's organized the way you need it. Done, and fast. Ring a bell? (NoSQL :))
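A toy illustration of the difference he was getting at (TypeScript purely for illustration; obviously nothing like what actually ran on the mainframe):

    // The relational way: normalized data, stitched together at read time.
    const customers = new Map([[42, { name: "ACME" }]]);
    const orders = [{ id: 1, customerId: 42, total: 99 }];
    const report = orders.map((o) => ({
      ...o,
      customer: customers.get(o.customerId)?.name,
    }));

    // His way (and the NoSQL way): one record per key that already contains
    // everything the job needs. No join, just a keyed read.
    const orderView = new Map([
      [1, { id: 1, customer: "ACME", total: 99 }],
    ]);
    const row = orderView.get(1);

The trade-off, of course, is that you now maintain that pre-built view every time the underlying data changes.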
I also started playing with Linux, system administration and obviously virtualization (VMware at the time, qemu stuff, etc.). So I'd go talk to my dad about it and how I loved the concept and how it does X and Y and Z cool thing. Yeah well, for him that was really old stuff, because they'd had that on their mainframe since, like, forever (now called z/VM, then called VM/370 - in 1972).
We had to learn and use Java at university, which my dad always made fun of - especially with articles like "The state of Java application middleware" and such that I was reading. All that newfangled stuff, pah! He had been working with IBM CICS as the middleware forever. Initial release of CICS: July 8, 1969; latest release: June 12, 2020. (https://en.wikipedia.org/wiki/CICS)
I get it. I say the same thing(s) about some of the newfangled stuff I have to work with nowadays. With the kids. Some of it is great. With other things I just ask myself why the wheel had to be re-invented just to come back to square one. Like NoSQL databases that add features you'd expect from relational databases.
As a young blood, 5 years in, hired by learning React in the great JS hiring wave of the mid-2010s, I look back at those standards bodies as something that would be very beneficial now. All we have is Chrome creating standards by monopoly.
Maybe I have an older mindset as well. My training is in Civil Engineering and I have my expectations for standards bodies set pretty high because of it.
The code is so high-level now. Going through algorithms and data structures, I see the benefit of that fundamental knowledge, but I also see that you can be a very valuable engineer to a company without it (thanks to the rich developer ecosystem).
If we think about the maturity of software like a biological ecosystem, maybe the zen garden built by past engineers has overgrown into a dense and varied forest. I'm not sure if it's bad or good; maybe there are more niches to move into.
I think one source of tech churn people don't talk about is just growth. If you're 5x size you were last year, an entire system rewrite is relatively cheap, and if you expect to grow in the future, you can make riskier bets knowing they'll be cheaper to replace down the line. It's not ipso facto irrational. I'd be interested in seeing a breakdown of the correlations.
I'm 30 years into this industry also. We all have a tendency to defend and use what we're comfortable with, which often is what we learned ages ago. I try to challenge myself regularly by looking for my biases and assumptions. (Easier said than done, of course.)
However, I'd say the rate of "tech churn" has become impossible to keep up with in the most popular stacks (Java[Spring], .NET, front-end, Node). If you are in other areas, things tend to move slower.
I think the initiatives for standardization move at about the same rate as they used to, but the increasing amount of industry churn makes it seem like standardization has slowed down.
Why wouldn't it? I've always been a fan of polyglot programming and use of languages oriented towards specific use cases, as opposed to language-centric ecosystems. Now Java doesn't exactly fit that criterion ;( but makes up for it by being portable, relatively open, and cross-platform, and being instrumental in having prevented Windows dominance on the server-side in the 1990s. And it pays the bills. Haven't used it for new personal projects since 2008 or so, though.
> Clean, understandable, and navigable code and design
It's interesting to see "navigable" in here - I've struggled in the past to articulate this problem, and I think this nails it. We often see simplicity come at the cost of "navigability". All the convention-over-configuration frameworks have this problem particularly heavily. I'm often at a complete loss to trace the mechanics of what is happening in things like vue-cli, Rails / Grails, gradle, and many other dynamic frameworks.
They achieve remarkable simplicity, yet I often end up hating them and swearing at them because I can't exercise my understanding of fundamentals to reason about them and rationalise what they are doing. "The database connection must be getting established somewhere - it should be traceable back from the point where the connection is used in some way" - well, no you may have to understand most of the entire underpinnings of the framework before you will achieve that understanding.
I think this "navigability" idea really fills that gap well. Things should be simple, but they should always stay navigable based on fundamental knowledge.
Webpack is one of my absolute favorite tools. It's easy to reason about and a pleasure to use. It absolutely checks the "navigable" box in my book.
However, it got constant flak (esp. here on HN) for being too complicated. Apparently asking a dev to spend a day learning the tool before getting started on a new project was too much to ask. Now the best practice is to use a meta-tool like vue-cli or react-scripts to abstract away Webpack, trading navigability for simplicity. Simplifying complex tasks requires magic, and good luck debugging when your magic incantations don't work as expected.
I totally agree that navigability is super important and I wish I knew how to better emphasize it in my projects over the "batteries included" approach that has the best marketing.
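For what it's worth, this is roughly what that day of learning buys you - a build config you can read top to bottom (a minimal sketch from memory; a webpack.config.ts like this assumes ts-node is installed so Webpack can load it):

    // webpack.config.ts - one entry, one output, one rule:
    // "run .ts files through ts-loader".
    import * as path from "path";

    export default {
      entry: "./src/index.ts",
      output: {
        path: path.resolve(__dirname, "dist"),
        filename: "bundle.js",
      },
      module: {
        rules: [{ test: /\.ts$/, use: "ts-loader", exclude: /node_modules/ }],
      },
      resolve: { extensions: [".ts", ".js"] },
    };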
I agree, and I like the concept of "navigability". But in my experience I'm not sure I agree that many of the listed frameworks, like Rails, are simple. There is a lot of "magic" complexity; they just go through efforts to hide that complexity from you. It appears simple on the surface, but on a deeper dive the complexity is still there.
What does navigable mean? I expect the IDE/tools to allow you to navigate the code fast and easily. If your IDE doesn't let you click through to definitions, usages, and references, it's a poor setup imho.
One of the tenets though, "6. Be Honest and Acknowledge When You Don’t Fit the Role", is only fair to the person if the team/company also follows that advice.
If a team or company does not value, or cannot tell, or does not act if someone is incompetent and unfit for a role, then this advice penalizes those who are honest, and rewards those who are good at faking competence.
You need to have a structure of integrity within which to operate, in order for honest behavior to be rewarded.
I don't see how your statement disagrees with what he said. He's talking about self-acknowledgement, which has nothing directly to do with the organization. For example, if the organization values fake competence and you don't, then you need to acknowledge that to yourself and either start faking it, change the organization, or find a new job.
I suspect that "or the role can evolve" implies that some roles come with unrealistic expectations.
I see this when I see job postings with a long laundry list of qualifications and responsibilities. I suspect that part of succeeding in such roles is figuring out how to evolve the role to be more reasonable.
I think it's fair. For me, the penalties and rewards you talk about are outweighed by some countervailing factors outside the realm of career development. Even if your current team isn't honest, the ultimate structure of integrity is your very own.
It's clear he doesn't say "acknowledge it loudly to everyone around you so you get kicked out of the role". He says there is more than one solution (e.g. grow to fit the role), and that it's about having the self-knowledge not to stay in a bad place.
If an organization pervasively rewards competence fakers, where will that whole organization be in ten years anyway?
I mean, the most important wisdom is never "mountaintop guru" insight, it's always simple things that just need to be reinforced and actually observed in practice.
It's one thing to say "keep it simple" and a totally other thing to actually be keeping it simple for four decades :)
Agreed. I'm ~15 years in, and so even though I didn't come across any new ideas here, it's very helpful to hear what a 40 year vet thinks are the most important signals and try and harden those paths in my mind.
I have gotten a lot of advice but sage wisdom is rare. The closest I can think of is:
“Expertise isn’t coming up with the perfect solution, expertise is knowing you can get a good enough solution on time.”
Frequently you have a fast but risky option and a slow but guaranteed one. It’s fine to work on the fast option; just remember to abandon it early enough to still finish in time.
I can't remember where I read it, but "engineering is the science of good enough" has always stuck with me.
I'm reading Algorithms to Live By and the section on explore/exploit feels relevant here. At what point do you stop looking for new things and rely on the things you already know? lol, I feel like I lack the IQ to connect all the dots, but there's something in that section applicable to this discussion. Maybe I should re-read it...
> Computer Assisted Software Engineering (CASE) tools, COTS, Enterprise Resource Planning products like Peoplesoft and SAP and, yes, even Ruby. They claim amazing reductions in cost and time if you buy into their holistic development philosophy. What is not always as obvious is the significant up-front costs or the constraints you may be committing yourself to. Lock-in used to primarily happen with vendors, but now it can happen with frameworks too.
It is, no? It's not easy to rewrite into something else if you need to. With Rails the biggest reason you might need to do that is the abysmal performance. You need twenty servers to do the work you could have done with one.
Is it not? Even if it's open source, I would view monolith frameworks as a form of ecosystem lock-in. We use Spring at work and I would certainly describe us as suffering from "lock-in" to the Spring ecosystem.
This is somewhat true, but that is why separation of concerns is a best practice. Maybe you do have one piece of the stack "locked in" to Rails, but the rest of the stack can stay in place if you want to move from Rails to a different framework.
In Rails, this is unfortunately somewhat difficult (though not impossible) to achieve due e.g. to the autoloading magic that Rails adds and that can make isolating library code a bit more challenging if you're not aware of some intricate details of Ruby's and Rails's loading behaviours.
Not quite the same. I moved away from Django some years ago when I ran into a bug that I traced back to the Django ORM. At the time it was neither obvious to fix nor easy to replace the Django ORM with something like SQL Alchemy.
It’s not necessarily about avoiding rewriting entirely, but rather minimizing the amount of rewrite required if you have to improve, change, or fix something. If your roadblock is in the framework, the bigger the framework, the more you have to rewrite.
It's hard not to become jaded when every framework turns out to be just another team's personal preferences. I would say the statement is generally true about frameworks, but not true about Rails. Perhaps the author did not spend enough time with Rails.
Having spent time with Rails, I'd say you're biased.
I went from a dotnet shop to a rails shop, and despite the attitude that Rails is "marvelous", I can't help but feel like Rails is the bastard child of ASP.net.
It feels incredibly similar to working within the constraints of ASP - the framework knows best. Don't question their choices. Don't do it any other way. Lock yourself into their good choices.
Their choices turn out not to be great for your use case? Fuck off.
Compounded by the fact that Rails is currently in the same death spiral dotnet was in before scrapping everything and releasing dotnet core - Rails is great in v1. Rails is much less great in v6, where documentation is shoddy and splintered across versions, there are 5 ways to do anything, but god help you if you don't know the most recent incantation. Memorize all of our conventions, but hey - best practices have changed like 6 times in the last 10 years, so our conventions from yesteryear don't apply, and no, we won't update our documentation, and Stack Overflow answers are bad/outdated.
Basically - Coming in from all sorts of other languages and frameworks I've used in prod (Golang, Dotnet, Dotnet Core, Node, Ts-Node, Rails, PHP, etc) Rails is currently sitting in my shitlist.
That's too bad. I worked at a dotnet shop when I learned Rails (on my own) and it was night and day for me. ASP.NET is a poor framework that tries to take the Windows desktop app experience and port it to the web. Turns out that is a bad idea...
If I'm building an API for clients - I've really been enjoying Typescript and plain old Express. Setup takes an extra 30 minutes or so compared to Rails - probably longer if you aren't familiar with Typescript and Node already - but it works nicely, has a great minimal default, and mostly gets out of the way. Big plus is that type information can be shared across the client and the server, so you avoid a lot of duplicate effort redefining types, and you don't accidentally change a type on the client and forget the server, or vice versa.
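A minimal sketch of that setup (file layout and names are just illustrative):

    // shared/types.ts - one definition used by both sides.
    export interface User {
      id: number;
      name: string;
    }

    // server.ts - plain Express returning the shared type.
    import express from "express";
    import { User } from "./shared/types";

    const app = express();

    app.get("/users/:id", (req, res) => {
      const user: User = { id: Number(req.params.id), name: "Ada" };
      res.json(user);
    });

    app.listen(3000);

    // client.ts - the same User interface; rename a field and the compiler
    // complains on both sides instead of one of them failing at runtime.
    // const user: User = await fetch("/users/1").then((r) => r.json());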
If I'm doing something experimental or hacky (last time I was creating a MITM proxy) definitely GoLang. The language is slim and powerful, they let you peel back most of the abstractions as needed, and the code is fast. The downside is it will absolutely take you longer to get running. The upside is a lack of hurdles once you're actually moving.
If I'm just spinning up a simple static/mostly-static site - I actually think DotNet Core is a decent framework. Honestly, so is Rails in this case. I'd pick C# over Ruby in a heartbeat, though, explicitly because the older I get, the more I want a type system.
Honestly, I think Rails (and really Ruby) is moving in a direction I support. Particularly adding types. But it's not a language/framework I would pick at the moment, unless I was just doing one-time contract work for a product I know won't be updated.
Too many are... Some are built upon novel ideas, though. Can we build web apps like desktop apps? Can we use one language on the frontend and the backend? Can we write everything in Java?
Rails was built upon the novel idea of, "Can we build a CRUD app framework developers enjoy using?". As there is no silver bullet, this comes with tradeoffs many do not want. Performance is not a first class citizen. Code is prescriptive. A lot of rails apps are just gluing together gems.
I will say though, if you are building a CRUD app, there is no other option that will get you to done faster than Rails.
I read '5. Beware of Lock-In' section as "be generalist".
It looks to me like generic career advice to not stay fixated on a single all-in-one tech choice and role - SAP dev, Ruby dev, etc. Ruby is just an example here ('even', 'frameworks too'); I guess the author witnessed the "rockstars" and evangelists hype wave of RoR circa 2005-2010. It could just as well be e.g. ReactJS or Kubernetes today.
Yes, but lock-in can be worth it, and Rails is increasingly worth it as the ecosystem and core grow. I wish we had an alternative term to "lock-in" with a more neutral instead of negative connotation. It's got tradeoffs, like anything, but that doesn't mean we've got to be scared of it.
But all too often, the very best possible advice is, "Get out. Get out now!"
Get to a place where the Good Advice quoted above works. It can be very hard to tell that, up front. You have to try and see, and plan to skip along if you guessed wrong.
Most of my regrets, over 45 years in engineering, are over sticking around when what I had to offer was not what was welcome. Sometimes, what they needed, I didn't have. Other times, they didn't know what they needed, didn't want to know, and didn't recognize it under their noses. Either way, you would much better be elsewhere sooner than later.
For the young, it is tempting to try to prove your bosses wrong. That never, ever works. First, sometimes they aren't. When they are, they will be committed to not seeing it. It is always overwhelmingly better to deliver solutions to people eager to get them. So, find those people.
Often, you will have to prove yourself and the value of your ideas. That is not a reason to scram. But make sure before you start that success has a definition. "Ten times faster", "ten times cheaper", "ten times fewer server instances", "ten times less latency", "ten times less downtime" are hard to mistake, are remarkably often achievable, and are sometimes welcome.
In some cases, "two times" stands out; I recently got Quicksort to go twice as fast, and that drew some notice.
1. Politics is everything - how you are perceived is politics, whether you get assigned the projects you want is politics, career progression whether technical or managerial is entirely politics. You will be told various nonsenses about 'flat structures' and 'meritocracy' throughout your career but it's bollocks. Play the game badly and the flat structure is your career.
2. Most programmers don't care - as much as you might care about software engineering principles or even minimal levels of quality in your work, you will be shocked at the degree to which most of your colleagues do not. They might give lip service to it, but when it comes to it, most justify doing a crappy job by hand-waving about being practical. If you talk even mildly about code quality, expect to have it patronisingly explained to you 100 times over that you are a perfectionist, that customers don't care, etc. etc. (Of course these arguments are easily rebutted straw men, but good luck getting that across).
3. Raising what's right usually hurts you - people do not like to hear uncomfortable or irritating truths. See point 1. Pointing out that something is a risk or severely broken is more likely to get you hassle and lower people's view of you, and God help you if you are proven right - there is never an 'oh, you were right!'. Nobody likes to be made to look bad. Again, see 1.
4. Never underestimate how shit the code is in your next job - the interview very rarely gives you the slightest insight into the quality of a new employer's codebase and never be surprised at just how terrible it can be.
5. We'll fix it later means we will never fix it - technical debt is a lot like national debt - ever growing and rarely paid down. If you are fobbed off with a 'we will refactor that later' comment take that to mean 'this is shit and I am fine with that'.
6. Sideways shifts reset your career to zero - there is no such thing as treating development experience as fungible. You are only as good as your years in this particular slice of the industry, and more often than not you are only as good as your years in this particular company.
7. Fuck you pay me - at any time if you are told by an employer that you are part of a family or that your pay rise couldn't happen because you are already paid highly for your title or other such nonsense, start looking for another job, you are being used.
> 6. Sideways shifts reset your career to zero - there is no such thing as treating development experience as fungible. You are only as good as your years in this particular slice of the industry, and more often than not you are only as good as your years in this particular company.
This one has burned me a bunch of times in my career already and I'm not quite a decade in. I really want to find a corner of the industry I can focus on and gain momentum in my career but it's been challenging to do that.
This is really good advice. Personally, I have been programming for decades as well and struggle with the curse of knowledge.
Curious what other people do and what struggles they have had.
Having spent time studying Category Theory I see a lot of value to using it in my code but it is not common knowledge and often confuses others when you start throwing around terms like functor, monoid, monad, etc.
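To make the jargon concrete, here's the sort of thing I mean by "monoid" - a tiny sketch, not tied to any particular library:

    // A monoid is just a type with an associative combine operation
    // and an identity element. The payoff: one generic fold works for
    // every instance instead of a pile of ad-hoc reducers.
    interface Monoid<T> {
      empty: T;
      concat: (a: T, b: T) => T;
    }

    const sum: Monoid<number> = { empty: 0, concat: (a, b) => a + b };
    const all: Monoid<boolean> = { empty: true, concat: (a, b) => a && b };

    function fold<T>(m: Monoid<T>, xs: T[]): T {
      return xs.reduce(m.concat, m.empty);
    }

    fold(sum, [1, 2, 3]);      // 6
    fold(all, [true, false]);  // false

Whether you call that interface Monoid or CombinerWithDefault changes how approachable it is to the rest of the team, which is exactly the dilemma.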
At the same time, I don't want to write code that isn't as reusable or as maintainable.
Are we as an industry limiting our own potential by deliberately not using knowledge when it is beneficial for the sake of greater communication?
My personal philosophy when working with teams is to exclude some of that stuff but slowly introduce it and train junior developers on some of the concepts in the hope that I can push that knowledge forward.
Curious what others think of this dilemma and how they address it.
I also struggle with this issue, but I've never heard the term "the curse of knowledge." I'm happy I have a name for it now. When you try to explain to people that it's difficult to understand that other people don't know something that you know, you sound a little crazy.
I'm only 7 years into my professional software engineering career and this really stuck with me. Over the last two years I had the honor to work with an awesome team on an interesting project, and yes, trust is the basis of exceptional teamwork.
I'd like to add that knowledge can lead to "knowledge paralysis": the more knowledge and experience you have on a certain topic, the more problems and caveats you'll spot right away. Awareness of those may keep you from actually engaging in productive activity; you end up tiptoeing around the problem at hand instead of just implementing a "straightforward" solution (one that ignores the 0.1% of corner cases you know about but won't actually have to care about).
This is something I'm struggling with more and more these days.
> At EDS, the culture wasn’t like this. People moved in and out of management roles. There was no stigma associated with moving from roles with greater scope, like strategic planner, to roles with more narrow scope, like PM or project-level developer.
This sounds amazing. I'm curious...how does salary change? Or does it change? How is all of this handled...
Stuff I'd add that I think is crucial, and all related to one topic: good writing!
1. Learn to write specs
Call it an RFC, call it a PRD, call it whatever. Writing out a plan for any project taking around a week or more is always worth it. Use it to establish scope and priorities. Keep a "Questions" section that you whittle away at as you seek out feedback. Make the body a hierarchy of design tasks and implementation ideas. Track revision history and keep it up-to-date.
You now have a great way to get sign-off from peers + management, an execution plan, a task breakdown for ticket management and historical documentation on what happened and why.
2. Be terse
Is that email looking a little long? Write a "TL;DR:" section at the top and then see what you can delete. If it's nothing, leave the "TL;DR:". Otherwise you may find you've included a lot of intermediate thinking and people only need to know the conclusions.
3. When learning something new, keep a dedicated list of "this looks like magic!" items
Use your "down time" to research items on this list instead of refreshing your favorite news aggregator. Ask your peers and mentors about them. Notice when tangential issues come up and spend a few extra minutes seeing how they're related and how that might yield some easier answers.
4. When you want to ask for help... defer clicking "send"
Whether it's an IM or email, write it up on the side first. Start with your question and follow up with what you've already done to walk through finding the answer, to save the person the effort of starting from step one or pointing out the obvious thing you overlooked. Often this will lead you to answering your own question. If not, still wait 15-30 minutes (if you can) before sending it and work on something else. In that time you're likely to think of something you overlooked and avoid interrupting anyone.
> When learning something new, keep a dedicated list of "this looks like magic!" items. Use your "down time" to research items on this list instead of refreshing your favorite news aggregator.
Brilliant advice. It's also applicable to other areas of life, not just programming.
I'm afraid not. The best I can do right now is offer an example outline of what I do. That said, if I'm working with a good project manager my format will differ: I'll focus a lot more on the technical details and leave all organizational aspects out (e.g. timelines, dependencies and impact on other teams, etc).
The key principle is that anyone (sales, marketing, support, product, etc) should be able to benefit from reading it and find it intuitive to see where you're getting into the technical details and skip ahead to the next major point.
1. Overview
A paragraph or two explaining the what and why of the project. Maybe some links to other related key documentation. Don't put anything technical here.
2. Change Log
Bullet-point list of dates and major changes. Think of it like a git commit history.
3. Scope
What systems are affected? Are there phases to the project that should be called out? If it's principally one system/repo/etc. in question, what major components are changing?
4. Requirements/Tasks (the main body)
Use a numbered list and aggressively refactor as you go so high-level items rise to the "left" and all related details are broken out as sub-lists. You can loosely think of it as something like "[section] > [epic] > [task], [task], [task]". Prioritize the list based on dependencies so you're always referring back to already-mentioned requirements instead of yet-to-be defined things.
5. Supporting Docs
Put long-form examples, scenarios, diagrams, etc in their own section. Some people will only care about these, especially if they provide integration examples.
6. Concerns and Questions
Ideally by the time a first draft is done this section no longer exists, but usually you'll find that you need to do research or get feedback from others to finalize the whole thing.
> Do you happen to know of a properly written (publicly available) spec? I'd love to see a good example.
As somebody that has written hundreds of specs and PRDs over the years, I think one of the counter-intuitive things about doing it well is that there is no such thing as a "properly written spec". Or more accurately, "properly written" isn't a single path. You can't really templatize this in a generic way.
Instead, I'd offer the following basic advice which will help to build good specs:
1. Create an outline first, enumerate the list of stakeholders and the sections/topics that those stakeholders are most interested in addressing. Use this outline as the basis for filling in the details.
2. Don't write more than you have to. Rather than being overly verbose, establish the context of what/why/when at the top of the document in a summary, and then focus only on the most relevant conclusory details in each section after. The more shared domain knowledge the stakeholders have, the less you have to write.
3. A good spec results in a finished product/feature/widget. Be clear about what you know, what you don't know, and what you believe. Leave room for things to be discovered during implementation. Be concise.
Basically, don't write more than is necessary to actually achieve what the output of the spec is trying to achieve given the team/organizational context that the spec is going to be used in. Be as concise as possible, eschewing verbosity when possible because shared knowledge already exists. For this reason, "properly written specs" are very team/organization/project/product/person dependent. There's no universal format that works.
Not sure this is exactly what the commenter had in mind, but I think of Clojure's "design rationale" documents as good examples of thinking through a problem before executing. This is one for the language itself, but there are loads of others out there for other sub-projects.
* ITU Recommendation on H.323 Protocol for Packet Based Multimedia Systems.
* The NIST documentation on FIPS and the various crypto algorithms.
The first two are an "umbrella" of protocols and thus the specifications go from overview to extremely detailed in a very nice step-by-step manner. The third one shows you how to specify detailed and complicated algorithms.
Writing good Specifications is extremely demanding! But this is the only way to really understand the Domain.
"What even is a good specification, or the Huaqiangbei test"
Those three points are pure gold!
I'd also add that you don't need to be working on a major project to benefit from standing back and writing it out first. Much of what I write is just a few pages, but it's still useful. It saves time overall, helps others get up to speed fast, and is fun!
Even though I'm only taking my first steps in the industry (I'm a simple trainee at a tech startup and an MSc student), I've already experienced the first point many times. I think trying to keep your feet (and mind) on the ground, even when you have huge expertise, is fundamental to communication, teamwork and chasing goals.
For example, a friend of mine, who graduated from one of the top programs in Europe at Rotterdam Business School, has huge expertise in the R language; he can handle data quickly at work, delivering in 2 hours what his colleagues do in Excel over a whole day. However, he has big problems with communication here in Italy: he is not able to understand precisely what people with different backgrounds/expertise are asking of him, and this is becoming a huge issue for his career development.
>> When you know something it is almost impossible to imagine what it is like not to know that thing. This is the curse of knowledge, and it is the root of countless misunderstandings and inefficiencies. Smart people who are comfortable with complexity can be especially prone to it!
The more I learn about smart people, the more I realise how dumb they are.
Regarding simplicity, I’ve observed the following about design. Usually, designs become more complex with time. If the RATE of complexity increase exceeds a certain level, the design is crap and that approach needs to be scrapped. Occasionally, a design gets simpler as one goes forward. THOSE are magic!
Most of this is perfectly archetypical "hindsight says it's all about the little things", exactly what we expect our elders to say.
But consider: no 100-year-old dying in 1950 would bother to impart:
> At the end of the day, sweeping the barn really is how all my children made it to age 5
> A rust-free butter churn keeps those fingers attached; don't rest though there's a wall of glass
Good timeless advice is a function of stagnation.
> Fighting complexity is a never-ending cause.
> Beware of Lock-In
These ones, though, I like. We should strive to understand problems in full generality, which may increase simplicity. But our current cloud-and-Docker-snakeoil trend is either epicycles, not ellipses, or lock-in, never neither.
It was good to read this... But today most of the big/famous companies (except Tier 1s) aren't giving importance to these. In fact, in one of the JDs of a big company for an Engineering Manager role, I saw "Ability to negotiate" listed as a required skill. I would blame the fail-fast culture that has come up recently.
I was pleasantly surprised by the list of "fundamentals". Here I was, expecting something rather specific about the discipline. And I'm so happy whatever I was expecting wasn't what was there. The qualities described do seem like true "fundamentals". It's a human effort after all.
45 years ago when there were, oh, 5 computer languages: YES.
Today, with a dozen languages and hundreds of frameworks and tens of thousands of packages in those frameworks, maybe not so much. We could do with a little "lock in" for maybe a few years. IMHO.
Instead of fighting and arguing all day, and getting fired anyway, you can often just walk away from the situation. In fact, I would say that's the only real conflict resolution that ever works (in and outside of work).
Family not treating you right? Don't complain about it on Reddit. A friend of mine ended up staying in a homeless shelter for a short while at 19 because he didn't like how his family was treating him. And he ended up with a great career after that.
Ruby the language is not really a lock-in problem.
It’s specifically DSLs, and the concept is non-Ruby specific.
People that got bit by Chef (which is still great, but it can get unwieldy) may blame Ruby, but the problem there was things getting much too far away from the actual CLI commands, so it was a DSL lock-in problem.
The warning against DSL lock-in may be well-founded.
But, when it’s kept simple, DSL lock-in may not be there as much. If you can quickly translate it to something else, you’re good.