What's a senior engineer's job? (jvns.ca)
227 points by akshaykumar90 on Oct 7, 2018 | 85 comments



One thing that I consider to be a really important part of a senior engineer's job that's not enumerated here is: helping more junior engineers come up with estimates for difficulty and time on projects they're about to undertake. To a senior engineer, there are a large number of projects that are fairly easily scope-able, e.g. "add a new API endpoint"; "refactor this medium-large model" that a junior engineer may not have the necessary degree of confidence to estimate. While this can sometimes be the responsibility of an engineering manager, not all EMs are sufficiently technical to know how long it'll take a _junior_ engineer to take on an otherwise easily-scoped task.


Speaking as a senior engineer myself, I disagree because time estimation in software is not a useful activity. The only thing it helps is political manipulation in a layer of management above you. Engineering work takes as long as it takes and often there are unknown blockers or surprise aspects of a problem that make it far more difficult than expected, so frequently that the initial estimate has no value to anyone, not even in a rough sense like, "will it take 2 hours or 30 hours." In my experience, a task estimated at 2 hours is equally as likely to take 30 hours as to actually take 2 hours, and there is no systematic way to know which case you're in.

Regular check-ins to catch blocking issues early are much more important, so teams should not waste time making estimates or tracking velocity. It's pure junk. Instead, start working and meet often to detect blocking issues as you go.


> I disagree because time estimation in software is not a useful activity.

Anyone who can say this has the privilege of being fully insulated from revenue generation. But at some level of the company, resourcing decisions are made, and they depend on understanding the costs and benefits of various tasks. If you are deciding between having your engineering team build an Android app and adding an API integration, you need to understand how much revenue it will bring you and how much it will cost, and that’s where estimation comes into play.

Man-months are very non-mythical when it comes time to write paychecks.


What you've said is true in the same way that if you could just estimate the winning lottery numbers for me, we could increase revenue drastically.

Even though it would be incredibly useful to have, it is effectively impossible to give an accurate value other than for trivial or massively constrained problems.


Even insulated from revenue generation, things have to get done and you need to let people know when that will happen. Estimation is difficult, yes, but it's vital for working in teams of more than one person.


> “Anyone who can say this has the privilege of being fully insulated from revenue generation.”

You are incorrect. I work in a directly client-facing capacity and often have phone calls to help our actual clients and their product managers, and I have many internal stakeholders for my team’s work that are sales-facing and client-facing. Most of the quarterly planning meetings I am required to give input into are directly focused on revenue generation.

Because software velocity estimates do not correlate with the actual delivery timeline, yet they will be used for political bikeshedding by people who don’t know the technical details, it is exactly in revenue-critical situations that you want to drop the pretense of estimation and admit the truth: you have to simply measure by doing, and report blockers frequently.


If you can’t at least say which of two projects will take longer, the architecture must be fucked to hell.


Usually it is sociological, and has not much to do with good or bad architectures. The more surprising blockers tend to happen when someone on another team can act as a gatekeeper to a resource you need, like permission to make a change, and uses this blocking for some political purpose.

It can cause dead-simple engineering tasks to take weeks, during which you never know how much longer you’ll need to wait. Depending on the political capital of the entity blocking you, you may not even be allowed to publicly explain that they are blocking you, and you are forced to absorb the negative externalities of their choice.


Can't believe nobody pointed this out yet:

Estimation is NOT commitment. Committing to estimates paves your road with good intentions (and leads to hell). Don't commit to estimates!!

Estimation is EXTREMELY valuable to help the business get a sense of engineering capacity. You can't tell the business, "we can't launch a Facebook competitor in a week of effort." No shit, Sherlock. So what can engineering work on next? What can the business ask engineering to prioritize, that's chopped up into a small enough piece that it can work its way through development and into production in a more or less predictable manner, while still being large enough to have demonstrable business value?

Orgs that can't produce reliable software estimates suffer from one of the following: unreliable infrastructure (can't deliver new builds if your build system is down), insurmountable technical debt (can't reliably and quickly roll out requested changes without automated integration testing), a bus factor of 1 (look who decided this would be a great week to be in a car accident! /sarcasm), or a lack of senior technical leadership involved in planning and chopping up tasks.

I hate to break it to people, but those are all pretty much fixable. You can have highly-available tooling. You can have competent technical leadership that balances technical debt and creates clear task work for engineers. You can have team leadership that prioritizes getting information out of team member heads and into source code or wiki (when appropriate), and cross-training. The fact that most organizations fail at best practice does not mean that best practice is inaccessible.


Estimates rarely correlate with engineering capacity. Even so, they _are_ treated as commitments any time it’s politically convenient for someone to treat them that way, regardless of how publicly you might have qualified your estimate as not a commitment.

Also, with these types of estimates, they are basically misleading without some form of error bars, yet nobody ever incorporates that into it. Capacity is not some number, but a whole distribution of possible numbers, for which the mean might not be a relevant value (for example if it has several sharp modes that depend on discrete external events).
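To make the "whole distribution" point concrete, here is a toy simulation (all numbers invented) of the kind of multi-modal outcome distribution described above, echoing the "2 hours or 30 hours" example upthread:

```python
import random

random.seed(0)

# Invented distribution: a task usually takes ~2 hours, but ~30% of the
# time an external blocker turns it into ~30 hours.
def task_duration():
    if random.random() < 0.3:
        return random.gauss(30, 5)   # blocked path
    return random.gauss(2, 0.5)      # happy path

samples = [task_duration() for _ in range(10_000)]
mean = sum(samples) / len(samples)

# Almost no individual run lands anywhere near the mean: outcomes cluster
# near 2h or near 30h, so reporting the mean alone misleads planning.
near_mean = sum(1 for s in samples if abs(s - mean) < 2) / len(samples)
print(f"mean = {mean:.1f}h, runs within 2h of the mean: {near_mean:.1%}")
```

The mean of such a distribution is a number nobody will ever experience, which is exactly why a point estimate without error bars misleads.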


I agree that estimating time of individual tasks is unhelpful, which is why story points as abstract units of complexity are, in my experience, more useful measures.

That said, I've always been able to provide fairly good estimates for how long work will take, something that I got a reputation for doing well.

In my experience, that's not really true; I find value in time estimation to come from actually scoping the work again and probing for traps or pieces of complexity that weren't obvious in the first estimation passes. It's not true that software takes as long as it takes. Software projects expand to fill the amount of time allotted. Holding yourself to deadlines creates room for compromises and creativity.


> Holding yourself to deadlines creates room for compromises and creativity.

In other words, when it turns out the estimate was too optimistic one can either:

- allocate more resources to the problem,

- make compromises about quality of work,

- miss deadline or

- adjust scope.

The first solution is rarely possible (or doesn't help because new hires take time to be productive), the second has unwanted side effects, so the last two are the best options... Pick your poison I guess.


- (1) More resources don't always make things go faster.

- (2) Compromising does not always compromise quality of work. You can always cut scope, as you mention in (4), and ship fewer features at a higher level of quality. Although what I was getting at was more that, when given time, we have a tendency to refactor and rearchitect things that don't necessarily need it, or to introduce unnecessary/premature abstractions and optimizations. Deadlines tend to curb the instinct to do that.

- (3) You can miss deadlines, but that's not a clean win either. It hurts morale, allows for feature-creep (oh you're not shipping next week, well boy do I have some extra things you could do with your newfound time) and hurts relationships with teams depending on what you're setting out to deliver all across the company.

- (4) Adjusting scope can make sense, so long as you're very good at figuring out what doesn't need to go out. Not every team is.

tl;dr: Shipping the right 70% of the feature set at 100% quality without premature / unnecessary optimizations and abstractions can be pushed along by aggressive timelines. This must always be balanced with sustainability.


The root problem seems to be that committing to getting something done is somehow tied into honesty and dependability. I feel like in Software Engineering at least, we should be careful to not make that association right away simply because the nature of the work is kinda unpredictable, especially when working with new/unfamiliar systems.

It looks like a few people have reached somewhat similar conclusions and created frameworks around this idea (Agile, Extreme Programming, TDD, etc.) to formalize the process. But it perhaps makes sense to realize that they are just that: processes and heuristics trying to make a hard problem (delivering software predictably, on schedule) more manageable.


This is false.

Software devs can become very good at estimation if you practice it.

Where 2 hours really is 2 hours—and even a week really is a week.

I know this from experience but it’s pretty basic to see that it’s true. A senior engineer (5+ years, say) is rarely encountering fundamentally novel problems. Most everything we do we’ve done before in some form.

To get good at estimation, simply track how long things take. Over a (short) time you will see that work is very predictable and you can be very precise with your estimates.
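The tracking loop described here can be as simple as keeping an estimate-vs-actual log and deriving a personal correction factor; a sketch with invented data:

```python
# Hypothetical log of (estimated_hours, actual_hours) for past tasks.
history = [(4, 6), (2, 2.5), (8, 14), (3, 3), (5, 9)]

# Personal fudge factor: on average, how much longer things actually take
# than first estimated.
ratios = [actual / estimate for estimate, actual in history]
fudge = sum(ratios) / len(ratios)

def calibrated(raw_estimate_hours):
    """Scale a gut estimate by the historical fudge factor."""
    return raw_estimate_hours * fudge

print(f"fudge factor = {fudge:.2f}; a raw 10h estimate becomes {calibrated(10):.1f}h")
```

Even this crude version forces the habit the comment is advocating: writing the estimate down and comparing it to reality afterwards.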

If you buy this claim that ‘engineering work takes as long as it takes’ of course you won’t take the effort to get good at estimation and of course your lack of the skill will seem to reinforce your belief.

Don’t do that. Estimation is a highly valuable and acquirable skill. Acquire it!


This goes against most of the rigorous studies in the industry, which have consistently found, going on 50 years now, that no estimation technique has a high degree of accuracy.

The only systems that do have high degrees of accuracy tend to throw out a big chunk of the work to get there (for instance some systems don’t estimate how long a proof of concept will take & only estimate post that).

I’d love to hear what your mechanism is for estimation that is repeatedly accurate & how to implement it at scale because otherwise it’s an open problem in the industry.


SEI at Carnegie-Mellon publishes copious resources on software estimation success and practice.

What references are you referring to?

I more or less gave the pattern — track your hours. I’ve been around the block...from large FANG companies to small startups across many varied tech stacks. Estimating an API design and impl in Java vs Python vs... Estimating impl a UI library in React vs some other server-side MVC framework... We are not inventing new bleeding edge academic paradigms — our work is estimatable.

If someone wants to pay me to teach the skill, sure...

But I can guarantee you that you can estimate software efforts with accuracy.


The thing I've seen with estimation practice last I researched it is, it works best if you can calibrate. That's something an established shop can do by extrapolating from their previous work, but it is also often as simple as "This other team took 7 months to do a similar thing. Therefore we will also take 7 months." An estimate like that is usually only wrong by days-to-weeks, since it encompasses all phases, eliminating the fudge-factor, unknown-unknowns and wishful-thinking aspects.

When it takes much longer, it's almost always due to design issues or political issues that create design issues. When the design is well understood (and prototyping is hugely important to finishing the design ASAP), the implementation goes smoothly. When stakeholders take turns stirring the pot to "make their mark", it goes haywire very quickly.


Expert committee estimation (the Delphi method is most commonly cited) is one area that has shown improved estimation accuracy:

https://www.computing.dcu.ie/~renaat/ca421/LWu1.html

The SEI, recognizing the weaknesses of expert-judgement estimates, has layered on QUELCE, which applies Monte Carlo simulations to the estimates:

https://www.sei.cmu.edu/research-capabilities/all-work/displ...

Note that QUELCE is an ongoing research methodology without a lot of data available about its effectiveness. But it says something that this is still a very active research area in 2018: if estimation were a solved problem, I wouldn't expect Monte Carlo simulation to add value.
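For context, the general idea of layering Monte Carlo simulation on top of expert estimates (the family of techniques QUELCE belongs to, not QUELCE itself) can be sketched roughly like this, using made-up three-point task estimates:

```python
import random

random.seed(1)

# Invented three-point estimates per task, in hours: (best, likely, worst).
tasks = [(2, 4, 16), (8, 12, 40), (1, 2, 6), (5, 8, 30)]

def simulate_project(tasks, runs=20_000):
    """Sample each task from a triangular distribution and sum the project."""
    return sorted(
        sum(random.triangular(best, worst, likely) for best, likely, worst in tasks)
        for _ in range(runs)
    )

totals = simulate_project(tasks)

# Report percentiles instead of a single number: the spread IS the estimate.
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"50% chance of finishing within {p50:.0f}h, 90% within {p90:.0f}h")
```

The output is a confidence range rather than a date, which is the whole point of the simulation layer: it makes the uncertainty in the expert inputs visible instead of collapsing it.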


Very rarely is estimation about simply the work required.

Estimation is normally off because something unexpected happens: the staging environment is down, the build server is broken, an external dependency takes longer than expected, or you're on a new, unfamiliar legacy code base where your previous work estimates no longer apply. Those unexpected problems vary widely in how long they take to resolve.

Yeah, if you estimate on a familiar code base, when everything goes right it's easy. But very rarely is that the case.


Exactly! Development tasks where the within-task technical software issues are the determinant of time estimation are so fleetingly rare that estimation is not useful as a general practice.

In fact I might even _define_ “senior engineer” as someone who has been around long enough to know this and has come up with some way to pacify managers with meaningless estimates in a way that protects the team so it can actually still do work.


After 15 years I still struggle to estimate how many times the client will change the requirements within an ~8 week project, let alone how long it will actually take.


Also, in an org that resorts to estimating and enforcing timelines, a task estimated at 30 hours will take 30 hours, even if it requires 2.


At one of my previous companies, during a death march like project to get a 1.0 out, it was widely known that the offshore team was outright lying about completing their work in the estimated hours (scrum). They routinely worked overtime and sometimes weekends and yet the management pretended everything was hunky dory and on schedule. The non-offshore team looked bad unless we too worked long hours. A mess on both sides. We eventually shipped the release, 9 months late.


Estimates themselves are relatively useless. But the process of estimating is extremely useful, in my experience.

If a junior engineer gives some hand-wavy estimate of a project, I always ask them to sit down for 30 minutes and break the project down into bite-sized concrete tasks. "Bite-sized" means they need to estimate the tasks at some level. This usually uncovers some unexpected ambiguity or open questions which we can work to resolve. At that point we can also search for long poles, parallelizable work, unnecessary work, conceptual misunderstandings, etc., and it makes it easier for other engineers to swarm if the project starts slipping too much.


I find the opposite. Take the hand-wavy estimate at face value and just move on, updating it as you do the task and discover the reasons the estimate was bad.

The other failure mode, where the team spends time discussing every ticket, wasting N people’s time when only 1 or 2 people have the expertise to debate the estimate, is way worse. It wastes more time, makes people act petty about who is doing more fictitious Fibonacci units of work, leads to bikeshedding over meaningless questions like whether a ticket is a 3 or a 5, and can make work scoping way more antagonistic than it needs to be.


Agreed. The other thing that gets me is that estimates are often expected to be on-the-spot decisions. If you have done a similar task before, that's fair enough, but no one has ever told me to go off for a day or two and investigate the difficult/unknown parts to see what the options are.


Really? We have spike tickets all the time to go and explore a problem and see what edge cases we can find, explore possible solutions, and stuff like that. We usually scope spikes to a day or so. But we've had a couple where we were given almost a full week. The minimum I've spent was an afternoon and that was mostly because I actually found a lot of info a lot faster than expected.


Spike tickets are one of the funniest ways I’ve seen this handled. All it can really tell you is whether, in some short initial investigation, there is a known blocker, usually on the technical implementation side.

But the problems that make estimation useless are problems that only surface after detailed digging, requiring time and cross-team communication that are not realistically possible in a time-boxed spike: stumbling onto things that were not known, and could not be known, within a short spike timeframe.

Spike tickets are just an Agile bureaucracy thing to paper over the fact that estimation is intrinsically problematic to some fundamental aspects of one-size-fits-all methodologies like Agile.

Essentially for a spike ticket to be helpful in the common case, you need a spike ticket that just says, “actually go and complete the whole task you’re trying to estimate and then come back and tell us how long it really took.”


We go off to investigate for scoping quite often. If you're not doing that, you're just guessing, which seems counterproductive.


Yet on the spot guessing is what most product managers and executives require.


That hasn't been my experience at all (major bay area tech co). PMs and execs who I work with are generally totally okay with "I'll get back to you by [date] with scoping and ETA for that request." We even have a name for it: we call it an "ETA for an ETA". It's much better to give an ETA-for-an-ETA and then come back with a real ETA once you know the scope, rather than just guessing an ETA that turns out to be totally wrong. Sorry if that hasn't been your experience; being asked to pull ETAs out of your ass without doing due diligence sounds like it would be demoralizing.


I’ve also worked for major Bay area tech cos, and your description hasn’t matched my experience in any of them.


What kind of task are we talking about here? A bug report opened by the QA team or the development of a new feature?

I agree that hunting bugs is very difficult to time-estimate correctly: it could be a simple overflow bug that takes a dozen lines to fix, or it could be an architectural problem not spotted before that needs serious evaluation before taking action.

But a development task should be predictable to estimate to some extent. An architectural analysis should reveal the parts of the system that need to be modified, and if it takes more than 2 weeks, maybe the problem should be partitioned into smaller problems that are easier to estimate.


That’s a bit of a cop-out, though. First, you are throwing a big chunk of the work that can take high-variance time (the analysis) out of your estimation accuracy. Second, no rigorous studies have shown that breaking tasks into smaller chunks for estimation purposes changes the accuracy rate on the broader actual task.

Anecdotally, what happens when you do break things down into smaller tasks is that you are either just putting off giving the broader estimate (in systems that only estimate the currently workable tasks) or you are likely missing small tasks from your broader goal that will impact your estimate later, leading to standard estimate overruns.


I’m talking about development tasks. They are never predictable in a useful way, and the reasons why estimation is not useful have nothing to do with the technical details (usually). It is about sociological blockers in resources, IT blockers, unknown legacy code issues.

It’s so rare to have tasks without these blockers that estimation generally is unhelpful. Whereas measurement (beginning work and alerting people to blockers) is much more useful.


>> Speaking as a senior engineer myself, I disagree because time estimation in software is not a useful activity. The only thing it helps is political manipulation in a layer of management above you.

This is truly an amazing statement to make, especially from a senior engineer. Without a clear estimate, communicated to those who rely on your output, how can others plan their activities? And how can you plan yours without knowing how long the others' work will take?


I don’t see why it’s an amazing statement since it’s been a very common perspective since at least The Mythical Man-Month decades ago.

If other people make plans based off of junk (read: any) estimates, it just amplifies the problems.

If you’re at least honest that the estimates are meaningless, everyone can acknowledge it and come up with different solutions, especially regarding speeding up the process to get started and make checking in about blockers more meaningful and consistent.


I am curious and not trying to be a dick. If you don't have any time estimates for your tasks, how do you tell your stakeholders what your delivery date is going to be? Or do you leave it open-ended and deliver when it is finished?


> time estimation in software is not a useful activity

Only so long as you're developing your software in complete isolation from anyone else.

In terms of possibility, if you're not doing moonshot R&D then you should be able to give a reasonably bounded estimate (which occasionally will be wrong, but c'est la vie) of how long it will take to fix an issue or implement a feature. If you can't do that then IMO you aren't a senior engineer.


I agree with this also. We have to make estimates for how long things will take, but if it's not a simple READ endpoint or a piece of code that I've previously worked on (bug fix, minor feature add), I don't feel I can give an accurate estimate at all. I'm mid-to-senior (3 years exp) and fairly new (2-3 months) at my job, so the institutional knowledge of the code base and specific systems just isn't there yet.


I tend to go with small (a couple of days or less), medium (a week or two), and complex (needs to be broken down into smaller chunks)...and the ever popular, no clue. :)


That and a couple of other items on the list would fall (for me) under "mentoring/growing." I feel like a senior engineer should be working only on the really difficult stuff and also enabling other engineers to eventually be able to also work on the really difficult stuff.

Enabling people to work to high standards, helping them when they're stuck, and helping with time estimates are all great parts of mentoring.

It's a little shocking to me that "make sure folks are working well together" is tossed aside as "the manager's job." Part of being on a team, to me, is learning to communicate and work together effectively -- that's not a "manager's job," it's everyone's job. A senior engineer, by virtue of being senior, should also know and be able to teach effective communication strategies.

It's just as important for an engineer to know how to communicate properly, whether it's within the team, with superiors, or with customers. This can ease many kinds of friction that ultimately cause needless work.


It might be shocking to you, but it is exactly the manager's main job. If people would magically communicate well and work together fluidly, we wouldn't need managers at all. It would just be team leads talking to product owners.

You can try to hire around this, and sometimes this works, especially in small teams or early stage startups. But at some point this breaks down, humans simply can't be rational and social all of the time.


Yes, it’s the manager’s ultimate responsibility. But it’s not only the manager’s job. Every team member, and especially those with more experience, has a responsibility to ensure the team communicates and operates cooperatively and effectively. It’s the sort of thing that can’t be imposed from above by a manager if individuals aren’t taking responsibility for it in the first place.


> but it is exactly the manager's main job

I would disagree; a manager is not necessarily an engineer and thus cannot mentor a junior engineer as well as a senior engineer could. I'm not saying that a manager cannot or should not mentor, but that a senior engineer should ALSO be mentoring based on shared experience. There's no need for a mentor monopoly. :)

This is a bit off topic, but to me, a good manager is basically an umbrella and a funnel for the team. The manager covers the team and protects them from crap to keep them productive, then funnels their communications and output to the correct places. Basically, the API for sales or execs or whatever to communicate with the team. That's where a manager differs from a senior dev for me -- the responsibilities are completely different.

You can't have a good team driven from the top; everyone has to be working and pulling their weight.

Now, I'm not saying that we should expect people to be rational and social all of the time, just that a large part of mentoring should include "soft" skills like how to deal with other people and yourself when you're not rational or social. This is definitely something that can be learned and eases workplace friction so, so much.

Edit: If this sounds familiar, I think it might be because I tend to harp on this on HN whenever it comes up. I feel like the "soft" skills are under- or completely devalued here sometimes, but they can really make or break a team just as much as technical skills.

I saw this comment on another thread that's sort of speaking to the same effect but from a practical perspective: https://news.ycombinator.com/item?id=18158042


Don't know why you are being downvoted; this is exactly how I see a manager as well. In addition to protecting the team from external shitstorms, a manager should also be an advocate for team members so they don't have to constantly worry about raises and promotions.


I’m really disappointed at the responses in this thread. I’m finding as I go into industry more and more that there’s this philistinism amongst programmers.

“Quality code can’t be achieved because look...”

“Accurately estimating software is impossible because look...”

It’s so easy to claim these things. It’s way harder to earn the associated skill sets.

We’re taking “programmers should be lazy” to a whole new level where laziness means not practicing, not growing, but instead using “hard to learn” as an excuse and equating “hard to learn” with “generally impossible”.


> not all EMs are sufficiently technical to know how long it'll take a _junior_ engineer to take on an otherwise easily-scoped task.

That is a problem if you estimate time as opposed to (intrinsic) complexity of the task.

If you choose to estimate complexity of tasks, then you estimate it without anticipating who would actually execute it and let the routine sprint capacity adjustments calibrate the rest for you.


You still need time-based planning. You can't go to a client and say it will take 500 points to finish the project. You also can't bill story points. Yes, after some time velocity will enable you to easily convert points to time, but in the real world, when the customer is de facto product owner and team size is in constant flux depending on each customer's wishes, you need an experienced engineer to give you a ballpark value for how long something will take, long before the coding even starts.
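The points-to-time conversion mentioned above is mechanical once a velocity history exists; a minimal sketch with invented numbers:

```python
# Hypothetical velocity history: story points completed per two-week sprint.
velocity_history = [21, 18, 25, 17, 22]
backlog_points = 500
sprint_length_weeks = 2

avg_velocity = sum(velocity_history) / len(velocity_history)

# Give a range, not a single date: the team's own variance sets the bounds.
optimistic_sprints = backlog_points / max(velocity_history)
expected_sprints = backlog_points / avg_velocity
pessimistic_sprints = backlog_points / min(velocity_history)

print(f"expected ~{expected_sprints * sprint_length_weeks:.0f} weeks "
      f"(range {optimistic_sprints * sprint_length_weeks:.0f}-"
      f"{pessimistic_sprints * sprint_length_weeks:.0f})")
```

Which is exactly the parent's point: the arithmetic is trivial, but it only works after several sprints of history with a stable team, and a client usually wants the ballpark before any of that exists.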


Software development as an external agency is such a broken concept that I never apply for those jobs.

The interests of both parties have serious misalignment.


Story-point-style planning makes estimation a team problem rather than specific engineers making estimates.


> I put “write code” first because I find it surprisingly easy to accidentally let that take a back seat

As a first-year-out-of-school developer, I shadowed an interview once from a "Principle Engineer" at another company. By the end of the interview, we understood two things: 1: He did not code in his job and hadn't in a while. 2: He could not code anymore.

This role was for a senior engineer position, where he would need to mentor people on writing software. It was a pretty hard 'no' by the time we did our group follow-up with the other interviewers.


> As a first-year-out-of-school developer, I shadowed an interview once from a "Principle Engineer" at another company.

A lot of people don't realize how easy it is to get impressive titles at very small or low-quality outfits.

If it's just you and the CEO running the place, you would be a "CTO" even if you couldn't pass a phone screen for any decent entry-level engineer position.

We constantly get resumes from "senior engineers" working for no-name tiny companies and startups. They are often our worst candidates, especially when it comes to hard skills like coding.

Reality is that these skills are rare, and no-name outfits will be competing for the rare candidates who can code against all the better employers out there. The results are what you'd expect.


>We constantly get resumes from "senior engineers" working for no-name tiny companies and startups. They are often our worst candidates, especially when it comes to hard skills like coding.

That's funny because I remember not too long ago about complaints regarding Senior Engineers at BigCorp that got too comfy and also couldn't code. It seems the walls are closing in.

It's a good anecdote that titles are bullshit in our particular field. It's also why I've opted to remove 'Senior' from my titles for external resumes to avoid people making judgments too soon.


> It's also why I've opted to remove 'Senior' from my titles for external resumes to avoid people making judgments too soon.

I wouldn't go quite so far. The fact that titles don't mean much in sub-par outfits doesn't contradict the fact they can mean a lot in solid companies.


Even at large companies this can happen. Architects and high-level engineers can drift out of touch with current engineering practices simply because it's not their job anymore.


You know, I heard this claim about "architects" and "high level engineers" before, but I've never seen much evidence for it in real life.

How many of these detached astronaut "architects" actually exist?

The only real-life example I ever became personally familiar with: academics who were parachuted to some quasi engineering leadership positions thanks to their impressive publication credentials.

I wouldn't call these folks "engineers" of any level. They are scientists who head teams that employ engineers.


I've definitely been at places where the architects define the boundary of a microservice, or the API contracts, rather than actually implement them.


But do you know for a fact these "architects" forgot how to code, and would bomb an engineering interview?


I once walked into a really sharp architect’s office and asked him how it was going. He shrugged and showed me what he was working on: task scheduling in Microsoft Project. He hadn’t coded a line in months.

It is not necessarily that the architects don’t know how to code but more lack of opportunities to do so, as they increasingly become a human API between the engineering team and management and execs. Meetings all day, some of them of low value.


> He shrugged and showed me what he was working on: task scheduling in Microsoft Project.

Right, that's not an "architect" then. That's a manager of some sort.


He probably spent all of his time on developing principles...


Totally disagree with the section "What’s not part of the job"

Part of any job is stepping up and deputising for your manager. Sure, if you don't want career progression, or want to stay on the purely technical side, you could argue someone else should do that instead. But if you want to be a senior, you have to be a rounded individual who can mentor new starters and help with sprint planning when your boss is not around. That's part of being a senior; otherwise you are just a good Mid.


I also did a double-take on the title of that section. I resolved my conflict by mentally renaming it "What's more another role's responsibility." So yes, a senior engineer does these things, but it's not primarily on them.

With that renaming, the two lists in the post more accurately describe my recent positions as senior/staff engineer than anything else I've read, so I have to say I very much agree.


I think helping with sprint planning is something every engineer should do. Sprints are owned by the team, not the product owner.


People should do what lies within their realm of capabilities. I know enough people who are fine programmers but who lack the overview, the sense of connectedness between certain tasks, or the ability to think about the business value of tasks in order to make decisions when things start to take too long or become too complex. Those people should not take over the sprint when the person who normally handles it is away.

So no, not every engineer should take care of the team's planning.


Well in my team sprint planning is a team job. Everybody gets together and plans the next stories. Usually different people have knowledge/interests on different stories, so nearly everyone gets some input.

Heck in the scrum guide this is the way it's meant to be.

If your engineers can't factor in business value or work dependencies, it's usually because people are hoarding knowledge, so they don't have the information to factor that in.


I didn't say the team as a whole wouldn't be able to contribute. I was saying that not all people are able to take over day to day planning individually.

"its usually because people are hoarding knowledge so they dont have the ability to factor that in."

That is like saying that everybody who isn't smart must be good with their hands. Some people are just not good at certain things, no matter how much they are enabled to do those things. Which is completely fine.


Good read, I like JE's posts. But I definitely feel like this falls into the trap that a lot of similar posts fall into - attempting to "bullet-point" a role description inevitably will result in people criticising the inclusion or exclusion of individual points. Which is a shame because it detracts from the overall spirit of what's being said.

Struggling to find the link now, but I read another post on the subject of seniority, and it basically summarised the crux of it for me. This unravelled everything. Seniority is measured contextually by asking: "to what degree can I leave stuff with this person and expect them to get it done with high quality? Further, to what degree can I NOT leave stuff with this person and _still_ expect it to get done?"

From this, one can derive their own context-appropriate "check list". In an environment such as finance, where an engineer is absolutely not the expert, can I trust that this person can work with the experts to get to an appropriate solution? In an environment where you have many intermingled teams, will s/he be able to propose solutions and get buy-in across the org? In a consulting/client scenario, can this person represent our company? What might they need help in? In a bootstrapping startup scenario, can I entrust them with the entire build of (some major component)?

In different contexts, the above set of questions would unroll a different set of measurements for seniority, and may require a different mix of soft/technical skills. But that's fine, there's no one-size-fits-all seniority ladder.


I’d love to read that link.

I usually summarize the role as doing what’s necessary to get the job done reasonably, but your point about knowing what to entrust others with is spot on.


From the section where she explicitly lists what is not part of her job, here are two points I disagree with.

* Make sure work is allocated in a fair way

* Make sure folks are working well together

While she is not directly responsible for these two items, she and everyone else on the team should be responsible for alerting their manager/team leader when either of these items is not working. With more eyes and ears monitoring the team, it helps to reduce the risk of bad behaviour disrupting the team's cohesion.


I think what the author described is mostly any engineer's job. You don't need a title for this.


I think there are some less concrete responsibilities. This post assumes all your colleagues give a shit, while some work might get delegated to a colleague who then waits for you to step them through it. How do you motivate people to take responsibility for their work (when firing or changing teams isn’t possible)?

I think a senior engineer is also going to be one writing fundamental proofs of concept when time is tight (or at least I have been doing this)


> "...review design docs"

In my opinion, one of the most important and most difficult parts of the job. Architecture and design shouldn't be limited to senior engineers--it won't be in practice, anyway. Restricting it that way is a surefire way to stunt the growth of your team.

But, reviewing designs is hard. It requires recapturing much of the context that the engineer gathered in a very short period of time. I also find it sometimes difficult to separate, "this is a fatal design flaw" from "this isn't how I would do it." I really like the suggestion of providing feedback via additional information.

Mistakes are a very important part of learning. I try to make sure everyone has the opportunity to make their own instead of making mine.


What I've found works is to /always/ pair a junior with a senior engineer to write any design doc. Second, always assign a specific senior reviewer. Only after that review, release the hounds. Others who have interest or particular insight can reflect on deficiencies (or, rarely, strengths) without the dread feeling of having to deeply understand the context or underlying dependencies.

same reason you don’t just throw a code review out to “everyone”. everyone = no one


I think that setting explicit job boundaries is a good thing to do. Otherwise you end up with role creep and potential burnout. This happened to me. There is considerable pressure to pick up as many activities at the job as you can, since hiring more developers is hard or impossible in some locations now that demand for developers has skyrocketed. This demand buries you even deeper under tasks once your colleagues leave for better offers elsewhere.


It's interesting the author mentioned estimating but said they are not very good at it yet. I have read somewhere else that what distinguishes a senior engineer from a junior engineer is exactly the skill in estimating work.


As I said in another reply, it really depends. If it's a task I have done before, on new code, then I can give a fairly good estimate. If it's something new, I have no idea. If it's on a monstrous old codebase, then there are a lot of unknowns to add on top of that.

And most people expect you to do such estimates on the spot.


More like senior engineers have learned from experience and just triple any estimate.


No one can estimate worth a damn. With experience you realize how terrible all estimates are — unless you are estimating something you have exactly done before, which is more likely with more experience. Everything I have done in my career was mostly unrelated to anything done previously, so estimating would be as effective as using dice.


Jvns does it again. We need more women like her in the industry. Such an awesome engineer and educator.


What's a senior *software engineer's job?


I have my own sort of loose rules about this. I long ago came up with it, and it works for me, but maybe there are a few holes worth poking.

Basically it goes like this: There are no 'senior'/'middle'/'junior' level developers. There are _A, _B and _C.

_A guys sit at the top. _B supports _A. _C support _A and _B, and .. other _C's. _A supports all _B's, all _C's, and of course.. all other _A's.

The position is self-determined, i.e. up to the individual. Occasionally, when enough _A's, _B's and _C's serve together, they self-organise. Sometimes, you need to tweak a few things.

For example, there is a kind of _A who doesn't want to work on things without a few multiples of _C around to clean up after him. This guy needs a _B.

Then there are _B's who ignore _C's and just wanna work with _A's. This guy needs a better _A. And, also, a few _C's.

And there are _C's who want to be _A's, while ignoring their duty to _B. This guy needs a few more _A's, and either becomes a _B, or an _A. (Or a _D, which is 'goes and does marketing stuff instead'.)

Either way, there is another 'type' of developer, and this guy is an _X. He gets all the _A's and _B's and _C's happily playing together, executing on the plan. He can be an _A or a _B or a _C: he doesn't care, as long as things are executing.


Why would you try to argue this point with As, Bs and Cs? Even if you called them monkeys, horses and lions it would have been more understandable. There's a reason we don't use these kinds of variable names...

