> If people are trying to assign blame to a bug or outage, it's time to move on.
This is one of my favorite excerpts. I once worked in a lab where we would have frequent catastrophic failures because there was never any disaster planning or contingency management plan. I personally triaged 3 such incidents alone or with people who happened to be there when the problem arose and attempted to disseminate some suggestions for how to prevent similar problems in the future. No one was interested. People were primarily interested in tearing my head off because I hadn't handled the problem the way they would have done it (of course, they were out drinking beers or sleeping while I was dealing with the issue at 12 AM or on a weekend).
After the third time I said fuck it, the next time there is an issue I am going to ensure my own projects are safe and then I'm going home and turning my phone off. Let someone else deal with it. That is not the culture you want to be promoting.
I was on call as a new developer on a system. I was not given any procedures or troubleshooting documents. I got a call at 1 am, missed it, and waited one minute to see if there was a message. I did see a voicemail, so I started listening and logging on. Before I could even get halfway through, the person called again (why not leave the voicemail on the final attempt?). So I'm looking for the issue/fix for 5 minutes and they tell me they know who the SME is for the functionality, so they will call them. Why even call me if you're just going to call the SME without giving me time to look at it? I got negative feedback from my manager about the way I handled it. So, I asked how I should have handled it without any training or documentation. They said I should have called the SME. Well, I didn't know who the SME was, there's no documentation or list of who is the SME for which part of the system, and I wasn't instructed to immediately call the SME. Again, why not just call the SME first if they knew who it was? And the SME never created documentation because they are "too busy".
The hiring process for the company wasn't special. Of course half the stuff they claimed in the interview changed later (I was hired as a Java dev but was assigned to FileNet, and they said they don't outsource or lay people off but have started doing both).
This was an internal transfer. There were definitely warning signs in that interview. I was desperate because they were outsourcing my job in an obscure tech (FileNet) and we were expecting a kid.
The hiring manager said something to the effect of, "I was surprised anyone internal even applied to this job".
'Warning flag' doesn't do this justice. I have no idea what to call it, but desperation required I ignore it.
What do you mean exactly? There are tons of problems with the company. Stay long enough at any large company and I'm sure there are plenty. The issues can change dramatically from department to department.
The lack of documentation/procedure, and the process issues with others contacting you needlessly instead of the SME. They just seem like structural issues that would not be specific to one team.
Basically. Except there were 3 other tech leads in that area. They didn't know that specific piece of functionality, but they could be given the new work to take stuff off that team's plate and make time for documentation. The leadership in that area didn't really care about anything other than delivering fast. Testing? Eh... Security issues? They're not that big of a deal - do them on an above-and-beyond basis (contrary to enterprise policy). On-call documentation? Not even going to try to create it. I mean really, all you have to do is create a knowledge document out of the SNOW incident ticket. Then the next time it happens there will be a link to the steps taken. But no.
Eh, that's a nice thing to say, but it only makes sense at certain scales, and no matter what, there's always a person that can break it.
If any random person can break it, it's already broken.
If any employee can break it, it's probably broken (there are very small scales where even this doesn't apply. Ever worked for a company with less than ten people? There's probably something any employee can break).
If any employee that's an engineer, sysadmin or developer can break it, well now you're at least reducing the problem to a more specific set of people.
If only the people on a specific team responsible for a system can affect the system, now you've reached fairly good point, where there's separation of concerns and you're mitigating the potential problems.
If only a single person can break the system, you've gone too far. That effectively means only a single person can fix or work on the system too, and congratulations, you've engineered yourself into a bus-factor of one. Turn right around and go back to making sure a team can work on this.
Finally, realize that sometimes the thing only one team can break is an underlying dependency for many other things, and they may inadvertently affect those. You can't really engineer yourself out of that problem without making every team and service a silo going from top to bottom. Got shared VM infrastructure, whether in house or in the cloud? The people that administer that can cause you problems. Don't ever believe they can't. Your office IT personnel? Yep, they can cause you problems too.
Some problems you fix by making it so they can't happen. Other problems you fix by making it hard to happen and putting provisions in place that mitigate the problems if they do.
There are lots of places where we require that no single person can break the system at least in a certain way.
For example code review and LGTM ensures that a single individual can't just break the system by pushing bad code.
Often there are other control planes that don't have the same requirement, but I think the idea that there must always be one person who can break the system isn't clearly true.
I'm making an (admittedly subtle) distinction here between complex mistakes, where something was missed, and simple mistakes/bad actors where someone used a privilege in a manner they shouldn't have.
LGTM ensures that, for example, a single individual can't push a code change that drops the database. On the other hand, that same individual might be able to turn off the database in the AWS console.
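The review-gate property being discussed (no single individual can merge a change alone) boils down to one rule: a change needs approval from someone other than its author. A toy sketch of that rule, with all names invented for illustration:

```python
def can_merge(author: str, approvals: set[str], required: int = 1) -> bool:
    """A change merges only with enough approvals from people other than its author."""
    # Self-approval doesn't count as review, so filter the author out.
    outside_approvals = {a for a in approvals if a != author}
    return len(outside_approvals) >= required

# A change reviewed by someone else can merge...
assert can_merge("alice", {"bob"})
# ...but the author approving their own change is not review,
assert not can_merge("alice", {"alice"})
# ...and no review at all certainly isn't.
assert not can_merge("alice", set())
```

This is the same rule that branch-protection features in code hosting platforms enforce; the point is that the invariant lives in the merge gate, not in anyone's good intentions.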
> LGTM ensures that, for example, a single individual can't push a code change that drops the database.
Personally, I've seen LGTM let slip complex bugs in accounting code (admittedly, not great code) that went on to irreversibly corrupt hundreds of millions of accounting records.
Yes, it will catch "DROP DATABASE", but when it's still letting through major bugs that similarly require a full restore from backup... It seems functionally equivalent?
Given:
> There are lots of places where we require that no single person can break the system at least in a certain way.
I don't think code reviews are a solution. I mean, they're one of the better solutions I can think of, but they're not actually a solution.
> For example code review and LGTM ensures that a single individual can't just break the system by pushing bad code.
There's always someone with rights to push code manually, or to tell the system to push an older version of code which won't work anymore. Someone needs to install and administer the system that pushes the code, and even if they don't have direct access to push the code to where it eventually goes, someone's credentials (or someone who controls a system holding those credentials) have access somewhere along the way.
But who keeps the code system up and available to even allow check-ins? Can one person break that? What about whoever controls the power state of the systems the code gets checked in on? Is that also ensured not to be a single person? What about office access? What about the power main at your building? Is it really impossible for one person to cause problems there?
It might sound like I'm moving the goalposts, but that's sort of my point: these are all dependencies on each other. It's impossible to actually make it so one person can't cause any problems, because you can't eliminate all dependencies, and you can't even accurately know what they all are. What you can do is focus on the likely ones, put whatever sane safeguards in place you can, and then take all the crazy effort you would have spent chasing the diminishing returns of trying to make failure impossible and spend that time and effort on making recovery quick and easy.
Unfortunately, some work that goes into making sure any one person can't cause a problem might actually make that harder. Requiring someone to sign off on a commit to go live is great at 2 PM Tuesday, but not so great when it's required to fix something at 2 AM Sunday. This is the tightrope that needs to be walked, and it's also why, even if you don't necessarily know about it, there probably is someone who has access to break something all by themselves: they're the one who gets called in to make sure it can be fixed when the shit hits the fan and all those roadblocks to prevent problems need to be bypassed so the current problem can actually be fixed.
Any system that doesn't have some people like that at various levels persists in that state only until it has a problem and, in the incident assessment, someone needs to answer why a 5 minute fix took hours and the answer includes a lot of "we needed umpteen different people and only a fraction were available immediately".
Even at Google (which I see you work at from your profile), my guess is that people in the SRE stack can cause a very bad day for most apps. My guess is that even if the party line is that no one person can screw anything up, you probably don't have to ask too many SREs there before someone notes that it's more of an aspiration than a reality.
Sorry if that's a bit rambly. I know you weren't specifically countering what I was saying. I've just had a lot of years of sysadmin experience where it's pretty easy to see the gaps on a lot of these solutions where the face presented looks pretty secure.
What systems are you working on? Many are held together by ritual, and deviating from the ritual causes outages. They’re very fragile in some form (deployment, change, infrastructure, dependencies, etc.). They won’t break if you follow the happy path, but to say they’re so robust that an active attempt at breaking won’t bring them down is ... naive? Not sure if that’s the word I’m looking for.
I say this as someone who’s worked at large tech companies that are “internet scale”.
Maybe. I've seen the opposite, where no one takes responsibility for anything, and it's also bad. In fact, the situation you describe could also be a lack of anyone else taking responsibility for disaster planning and etc.
I think what is needed is a culture of -ownership-. That's basically people saying "I'm responsible". Not one where everyone tries to avoid responsibility, and not one where people point fingers.
Why does someone need to take responsibility when you can have a culture of blameless postmortems, where everyone focuses on making sure whatever happened never happens again instead? In a blameless postmortem culture, everyone is responsible by default.
"Everyone focuses" = nothing gets done. I've been at places like that, where a post-mortem happens, a course of action is decided on...and then no one owns actually carrying out that course of action.
You could argue that "It should be assigned" - yeah, it should. But assigning it implies either "here is the team that is responsible for it", i.e., this is the team responsible and they need to be told to fix their shit (which very much sounds like blame), OR it implies "here is the team that I am entrusting to fix it DESPITE their obviously not being responsible for it", which is just as bad, since it implies that the team that 'is' responsible for it is incompetent.
The only healthy option is that the 'responsible' team stands up to say "hey, that's ours; we'll fix it", and the only way they'll do that is if you have a culture of safety and ownership.
Also, one thing to make clear - ownership = responsibility = blame. They're all words for the same thing, just with different implications. You can't have someone 'own' something without making them responsible, and apt to be blamed unless you ensure the culture is one that does not attach blame. That's really what I was getting at; of course you shouldn't blame. But you also can't avoid ownership. And ownership implies you know WHO to blame, so blame comes very easily. It's also very easy to mistake pointing out responsibility/ownership for blame; I have had multiple managers tell me "it's not us vs. them" when I've raised the fact that I'm unable to deliver to deadlines because I have been unable to get anything from product.
The people most capable of taking the action items are assigned it. This could be expertise, resourcing, proximity, etc..
In an open discussion of the root cause, many times the issue is across multiple services / organizations within a company. You’d assign tasks appropriately across teams as needed. The key is to find and create actionables to address the root cause, not to punish / blame individuals.
"The people most capable of taking the action items are assigned it. This could be expertise, resourcing, proximity, etc."
Expertise and proximity are facets of responsibility (well, technically they are facets of knowledge, but ideally knowledge, empowerment, and responsibility are aligned, else things ALSO won't get done). Resourcing is a red herring; I've seen things get assigned to teams based on "they have the capacity", without it being an area whose domain they're familiar with (i.e., they don't work in that area, and ergo are not responsible for the outcome) - those things rarely get done, and never get done well.
The blameless postmortem is a "legal fiction": it doesn't really mean that blame cannot be assigned, just that blame cannot result in punishment or loss of face/standing.
At the end of the day you are going to have someone stand up and say: yep, we should have planned for this, and we will correct it in x, y, z ways.
What does it mean to be responsible? Just to say it? Responsibility should be accompanied by fines corresponding to the damage, or something like it. Otherwise those are just words. I'm responsible, but I'm not getting any fines if something goes wrong, so whatever, but I'm responsible. Fire me if you want, I'll find new work in a few days, but I was responsible.
It's the business owner who's responsible, because ultimately he's the one bearing all the expenses when a critical event happens, a client leaves, a client sues the company, and so on. Other people are not really responsible; they just pretend to be.
So I've actually written about this in the past, but responsibility is -the outcome actually affecting the entity-.
That is, "you're responsible for this" - if they do it, and it succeeds, what happens? If they don't do it, and it fails, what happens? If the answer is "nothing" in either of those cases, they're not actually responsible. If the result is too detached, they're also not actually responsible (i.e., if I decide not to do one of the ten tasks assigned to me, and I don't hear about it until review time, if at all, then I was never responsible).
Responsibility is innately tied with knowledge and empowerment, but without going on at length, and to just give an example - if I'm the one woken up by the pagerduty alarm when something breaks, I am responsible for that something, because its success or failure directly affects me. If, however, there is a separate ops team that has to deal with it, and I can slumber peacefully, responsibility has been diluted; you won't get as good a result.
Honestly, I don't know; I unknowingly followed the author's advice. About half a year after the last incident, a friend who I went to school with called me up and offered me a job at his fledgling biotech. I accepted and never looked back.
This, and the current “sober” posts on r/ExperiencedDevs, makes me think of Herodotus describing the way the Persians made important decisions - once sober, once drunk, and if the drunk and sober decisions were the same they knew it was a good one.
Alcohol really does not help me with code either (except maybe a relaxing beer), but marijuana works (occasionally). But you really, really need to do the sober clean-up part. Otherwise it becomes a mess.
I cannot work while stoned - velocity drops to a crawl, and any complexity becomes overwhelming. And dealing with colleagues becomes much more difficult.
I can code with a few drinks in me; in fact the activity of coding seems to reduce the amount I drink (I'm a functioning alcoholic).
It's decades since I drank during working hours. In my early career in the City, it was the custom to drink at lunchtime. Those days are past.
Oh the Ballmer peak definitely exists for POCs, school work and side projects. A couple beers in and then you lose the fear of doing something stupid and start cranking out code.
Not sure about production code though, the values are different.
Sort of a guilty secret, but I used to save POC work for right after a company talk or party and a few beers. I could spew out a few hundred lines of code that was a bit messy but got the job done. I'd go over it and clean it all up the next morning. Almost always an incredibly productive exercise for me, but ymmv.
I think Tacitus had a similar anecdote about the Germans.
(Even without getting drunk, I've always found it useful to consider a hard decision once analytically and once intuitively, and if I don't agree with myself, think about it some more.)
This is right for the drunken dev thoughts, but not for the Herodotus example. "In vino veritas" - in wine there is truth - means that people expose their true thoughts when they're drunk rather than the filtered version they might present when sober, but the story of the Persians is more about the fact that there's value in considering both drunk and sober reactions, particularly when they tally.
> The most underrated skill to learn as an engineer is how to document. Fuck, someone please teach me how to write good documentation. Seriously, if there's any recommendations, I'd seriously pay for a course (like probably a lot of money, maybe 1k for a course if it guaranteed that I could write good docs.)
I agree but think it is more than just _documentation_: effectively communicating ideas through text was one of the most underrated skills in software engineering. I say "was" as I think there is much more focus on it now with remote work and video call fatigue becoming the norm for many.
I would suggest you avoid thinking in too general of terms like this. There are dozens of kinds of docs. It's better to think about the document's specific purpose, audience, what you need to convey, what the audience wants to know, and how they want to absorb it. Then write, then read it as that intended audience, see if that person can make sense of it, and if it provides enough information. If you can't put yourself in their shoes, have the audience proofread it.
Two important lessons I learned:
1. Formatting and direct communication are very useful. They can make the difference between someone stopping and noticing critical information, or skipping it because they're lazy readers.
2. You probably don't know the correct way to convey information, and the audience probably doesn't know how to tell you how to convey it either. You need to listen for when the docs fail: when somebody says they read the docs but still don't know something or don't do something right. That means your doc has a "bug" in how it gets through to the reader. Experiment, change things around, add/remove information, etc until the bug is gone.
Agree with the sibling comment, but a starting point is also just to write docs for your future self, which is usually going to be type 3 or 4. Most of us who have been programming for a few years have had the experience of being mystified by something we ourselves wrote in the past, so it eventually becomes fairly straightforward to predict what kinds of things future-me will need a hand in piecing back together.
And it turns out those kinds of docs are pretty useful to my colleagues in piecing it together also.
In contrast to the current sibling replies, I think this is a very fitting categorization of documentation. Off the top of my head, I can think of several examples where one type of documentation is excellent but others are very lacking, for example:
* Rust Library Documentation: Most libraries have complete and up-to-date reference documentation, but are lacking even basic introductions (tutorials/guides) on how to use the library. This is totally just my personal experience so maybe I've been looking at the wrong crates, but with most of the crates I spend several minutes looking through all the modules in order to find that all the juicy functions are hidden in the Connection struct, or something similar.
* Linux Kernel Documentation: The Linux kernel has excellent in-depth explanations on several high-level concepts, but on the other hand a little more systematic reference documentation on the supporting library code would help a lot.
* While I can't think of a good example right now, a lot of projects have a few basic getting-started tutorials but don't explain advanced concepts or high-level design at all, leaving you to wade through the sources yourself in order to understand how to actually use them.
I end up reading the source half the time, anyway; documentation is often incomplete, dated, and possibly incorrect. For code, I'd prefer the time go into designing a cleaner interface and making what calls do obvious.
That said, I find high-level documentation for larger systems to be very valuable. I also find Python's docs to be lacking compared to Java's; I'm often left wondering about the definition of what type is returned, exactly what parameters should be, and which exceptions are raised. Java's docs are very explicit about all these things.
Thoroughly commented code tends to end up a couple changes out of date. A separate file in the same directory ends up a couple major refactors out of date. A separate file in a separate system ends up a couple company-wide reorgs out of date.
I agree. I'd like my documents to contain overview, intent, and exceptions. The actual implementation I can look up in the code. Also generated stuff is appreciated, like Swagger.
You should be reading the source all of the time. The point of documentation is to tell you what the code doesn't. If it tells you the same, you should delete the documentation.
That's useful, but only covers documenting the code, and API usage.
Depending on the project, various other documents may be required, e.g. installation guide, user guide, operations manual, architecture diagrams, networking diagrams, module/component diagrams, information flow diagrams, high-level design, low-level design, docs at various "views" (such as "business view", "information view", "technology view"), design decision tracker, ontology...
I'm actually in the middle of using Postman to generate a more "modern" inline docset for a new engineer that is coming into a project that uses that server.
One can hope, but sadly I don't think it's going to be an easy shift.
People managing work, from what I've seen, still prefer to babble over their scribbled 5 basic points rather than taking the time to do their job and create relevant textual information. Then you listen, you take notes, and then you go and produce whatever documentation of the objective is required to at least understand if it's going to work. Of course you'll still have gaps in your understanding, so then more calls, and repeat. In the end: unnecessary/missing features, a whole bunch of time wasted on crap, deadlines missed, burnt time, all of which could have been avoided if someone had just taken the time to do their supposed job. This is not to say it wouldn't have to be discussed, or that there's no need for back and forth and calls, etc. It's just that instead of starting halfway, you start from -50% or something.
Even in outsourcing platforms there's been a shift contrary to that. Two years ago and before, video calls weren't really usual unless you were in some months-long collaboration - now even there everyone expects video calls in the interview... It doesn't matter if it's a $100 one-time job or whatever.
Also "if the code is clear, it documents itself". Which in my opinion completely misses the point. Good documentation doesn't tell you what the code is doing, it tells you why it's not doing something else.
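That "why it's not doing something else" style of comment can be shown with a tiny example (function name and scenario invented for illustration): the code is obvious from reading it, but the comment records the rejected alternative.

```python
def user_ids_in_order(events: list[dict]) -> list[str]:
    # Why not just set(e["user"] for e in events)? A set would deduplicate,
    # but it throws away first-seen ordering, and downstream reports depend
    # on that ordering. So we deduplicate by hand while preserving order.
    seen = set()
    ordered = []
    for event in events:
        user = event["user"]
        if user not in seen:
            seen.add(user)
            ordered.append(user)
    return ordered

print(user_ids_in_order([{"user": "a"}, {"user": "b"}, {"user": "a"}]))  # ['a', 'b']
```

Reading the loop tells you *what* happens; only the comment tells you why the one-liner with `set()` was deliberately avoided.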
We had a guy that always opened PRs with no description. Guy always said “read the code”. Manager wouldn’t do shit and he kept doing it until we all just stopped reviewing his code and then he couldn’t merge.
Like dude, tell us why we should read the code in the first place.
Exactly. That's not a documentation problem, that's a writing problem. A lot of tech people don't enjoy writing, but they're also sometimes not very good at predicting or empathizing with the future reader of their writing. Sometimes that's also manifested in speaking, and in failing to frame concepts, arguments and ideas with context before jumping into excruciating detail. Senior managers notice this, and it limits your career.
So I argue that the issue is writing skills, of which technical documentation is a subset specialty. I will add that, similar to math problems or programming, writing demands that you do it over and over so you can get better.
> A lot of tech people don't enjoy writing, but also they're sometimes not very good at predicting or empathizing with the future reader of their writing.
Agreed, wholeheartedly. Will hitchhike on your comment to recommend two things: Bret Victor's pdf stash [1] and, specifically, the Walter Ong essay "The Writer's Audience is Always a Fiction" [2].
Long story short, we form our audiences by subjecting them to our writing. In writing software documentation, we are implicitly informing the next generation's thought by the simple power dynamic that underlies all technical documentation: "you must understand this in order to do your job properly".
It is no wonder that "form", "inform", and "information" are such closely related words.
We dictate the level of rigor and intelligibility we expect out of our technical documentation when we write technical documents. It almost sounds like a tautology when put this way, but "bad docs" are exclusively the result of a professional culture that puts up with the existence of bad docs. I've been there; too tired and overworked to care about writing something properly, or wanting to avoid writing a doc badly enough that I set up some autodoc thing and called it a day. We literally don't get paid for writing documentation.
But good documentation is what made us into good developers (if we are good developers). We should get paid for doing that...
> effectively communicating ideas through text was one of the most underrated skills in software engineering
Absolutely. So many hour long meetings could be shortened by better communication skills (just more targeted), or even a small email chain.
Communicating in short form confidently is a skill. Many people, including myself (it's something I've been working on), struggle to say a complete idea in a meeting and then stop, because they feel like they need to say more. Short and sweet is the way to go pretty much whenever you can - for technical work, that is.
And many many people don't do it well. On both ends. Reading is a skill too.
I find 'small email chains' don't help at all. Too many people just don't read past the first sentence or paragraph. And email is slow.
I usually 'escalate' quickly. Support didn't understand a ticket comment I made on how this isn't a bug or why there really is a workaround for it?
I send them an IM trying to coax it out of them/get them to understand for a few minutes. Doesn't work? Quick call and do screen sharing. Problem usually solved after a few minutes. Problem is they might stay longer than that took just to 'catch up' ;)
This is really not that different from when we were all at the office just that the last part would be walking over to their office and looking at the computer together. In some situations that makes it even easier nowadays because I don't need to take an elevator down 20 stories and back up again after.
Having asynchronous means of communication is great. But as soon as the back and forth is more than maybe twice on each side, there's probably a miscommunication happening somehow that will be much easier to get solved with a really short feedback loop. Some people you have to call right away coz they just type soooooo slowly ;)
Not only in writing but also in plain English.
I see situations like those described below over and over.
Example 1 - too much detail.
Morning stand up.
Manager: what's the status of that new feature.
Senior Engineer: well I tried to call that service but it was timing out, so I spoke to Bob and he said to check with the DBA on why that stored procedure is so slow, and it turns out an index is missing, so we tried to add it but mysql and varchar something fckn something...
Dude, couldn't you just say it's delayed due to the DB and then expand if needed?
Example 2 - insufficient details
I return from a meeting and discover an avalanche of emails, chat messages and an urgent meeting invite, all with the same topic - "Blah service fails, we are blocked" - but no details apart from that. On the call I get a description of the problem: blah service fails, everyone is blocked, it's infinitely critical, and what's the ETA for resolution?
What endpoint? How does it fail - timeout, connection aborted, 503 response, 200 response but with error message?
I like documentation that starts with a minimal but functional example, followed by the most common additions to improve the solution and finally a complete documentation of all functions.
Having links to actual code, like GoDocs have it, is something I appreciate too.
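That same layering (minimal working example, then common additions, then the full reference) can even be mirrored at the level of a single docstring. A sketch, with `retry` and the `fetch` caller invented for illustration:

```python
import time

def retry(fn, attempts=3, delay=0.0):
    """Call fn until it succeeds, up to `attempts` times.

    Minimal example:
        retry(fetch)                  # defaults: 3 attempts, no delay

    Common additions:
        retry(fetch, attempts=5)      # try harder
        retry(fetch, delay=0.5)       # wait between failures

    Full behavior: calls fn up to `attempts` times, sleeping `delay`
    seconds after each failure; re-raises the last exception if all
    attempts fail.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

A reader in a hurry stops after the first line or two; a reader tuning behavior reads the additions; only someone debugging an edge case needs the full paragraph at the end.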
Effective communication is a crucial part of every job. I think in software engineering, a lot of us are introverts who want to deprioritize this soft skill, but the truth is still that people matter more than the code.
And communicating gets even more important the higher up you move in your seniority.
Documentation can also be anything. Is it your initial RFC or ADR, or is it a spec? Is it a set of product requirements? Is it inline with your code so it produces a manual when you build it? Is it a set of articles or tutorials written after the fact? Is it a README?
The value of effectively communicating through text has been amplified by WFH. If you can avoid syncing up on video chat and resolve something with a couple of back-and-forth messages, that's a win.
I find documentation to be relatively straightforward to write. The issue for me is sitting down and slogging through the activity, which seems almost antithetical to writing code. It's like doubling the work in a far less rewarding way. And then you have to go back and update it any time the code changes. It's a sort of necessary evil I suppose...I think most of the problem is that devs just can't be fucked to take the time, and I'm often guilty.
>Don't meet your heroes. I paid 5k to take a course by one of my heroes. He's a brilliant man, but at the end of it I realized that he's making it up as he goes along like the rest of us.
I thought they were going to go the direction of "he's an asshole" and was ready to accept that, but this particular criticism is actually disturbing. People with strong visions can often appear to be "making it up as they go along," when really they are just subpar communicators.
Short story, I am helping my current company switch from a monolith to service-oriented architecture, and in the process have built a framework for spinning up new fully-deployable services from scratch that gets engineers 90% of the way there (minus the business logic). I have a strong vision for how it works and had a dozen-page RFC accepted by the engineering team for it. Yet there are engineers who think I am making it up as I go (I have been asked this indirectly), without any vision guiding the pieces into place. I have chalked up this feedback to me needing to improve my communication of the vision.
So the post's response of "I realized that he's making it up as he goes along like the rest of us" is disturbing because it makes me realize just how difficult communicating a vision is... if this hero that the poster paid $5k to go see can't even convince one of their fans, what chance do normal people like you and I have of convincing people that we're not making it up as we go?
>EDIT: I realize that I am posting under the assumption that the person's hero does in fact know what he's doing. If he truly is making it up as he goes, the above doesn't apply.
What is wrong with making it up as you go? I mean, everything I've ever built, I had a notion of what I was doing, but most of the real work was in the details. Anyone could say I was making it up as I go. I'd be like: yeah, if I knew completely how to do it, I'd already be finished.
Then there are the times when you think you know exactly what you're doing, and after going down a road you realize it's the wrong way. Failing to learn, see the signs, and make the embarrassing declaration that you got lost and need to turn around is never good. But some people just keep driving. It's hard, though, when there's a big line of cars behind you that think you actually know where you're headed.
Nothing wrong with making it up as you go, and I didn't mean to sound like I was knocking it, if I did. Sometimes everyone fumbles around trying to find solutions that work...it's a totally valid way to approach some problems.
Sometimes it's a hybrid of knowing what you are doing but not knowing the implementation specifics. You know you need to connect high-level pieces A, B, and C with specific constraints, but it won't be until you get into the low-level implementation that you'll know if it is indeed possible. I think that's an example of both having a vision and also improvising as you go.
I am concerned about how to effectively communicate visions to people, because it gets everyone rowing in the same direction. If nobody thinks that you have a vision, when you do, there is no reason they should choose your direction vs just do their own thing.
Ok this is helpful. If people are doing their own thing and not following the established, agreed on (or even dictated) way or vision, then you need to figure out why if you are the lead.
Maybe it is communication related. Does everyone know that this is the way they should do something? But they still don't? Have you created docs and edicts around these areas? Have you been assertive in code reviews? Have you been proactive and requested design sessions before a lot of work was done?
Has a decision been formally communicated? I see this step not happening enough; people are hesitant to be authoritative after a discussion of architectural concerns. If you are the lead, that needs to happen.
Most of the time it's simply a case of them not knowing how to do it. People are afraid to show their lack of skill and knowledge and ask for help. They get deadline pressure and deliver their default way.
Have you provided a feature or a cut through the system that shows this vision for people to follow? Maybe example code, or resources on the web that go deep into the ideas and tactics? Have you paired with them to help them get started or get over obstacles? Perhaps pair your most senior person with juniors for a while to get them on the same page and capable with this vision.
Using a phrase like "fumbles around" still sounds like you're placing it on a lower rung, whereas I would say that basically everything I've ever done has been an iterative, collaborative process, including in situations where I'm highly confident of both the problem domain and technology choices. There are always going to be new discoveries made during implementation, and you can't have a written-in-stone design doc that prevents anyone on the team suggesting a refinement.
For myself as an opinionated person in a devops role, the vision that I try to communicate to my colleagues is mostly broad principles like configuration as code, helping people help themselves, consolidation of systems, and then some more pointed specifics like don't touch prod, don't make changes without a consensus, start by understanding why it's the way it is before changing it, etc.
>I am concerned about how to effectively communicate visions to people, because it gets everyone rowing in the same direction. If nobody thinks that you have a vision, when you do, there is no reason they should choose your direction vs just do their own thing.
I used to have visions. Now I have collaborative design discussions driven by some starting designs. I found that if people don't contribute to the overall design then they have little impetus to actually understand it. This tends to lead to a better design and a more engaged team so a win on all counts.
Nothing if you're good at it. But if you're hoping to learn something from someone, it is pretty disappointing. How to make it up as you go along is far less teachable.
I take this really just to mean "everyone has faults".
People often idealize heroes and think of them as beyond human. If you do that and met your hero then your illusion will often be shattered. But the problem is just that you were putting them on an unreasonable pedestal.
Of course some people are frauds and some people have no idea what they are doing but manage to make people think they do. But I didn't read this as being one of those situations. Just someone they saw as beyond human being only human.
I like the phrase “kill your heroes”. Not literally, of course. But in your mind. They are just flawed people like everyone else that happen to have been mythologized. Learning more about your heroes often leads to disappointment.
I've recently found a podcast called "your favorite band sucks" that's along these lines. They have real criticisms of popular bands, but it's also a bit tounge-in-cheek. It's a nice contrast to the typical worship of rock bands. I think it's healthy to be able to enjoy something, or be inspired by someone, without buying into the mythology.
I "know" a guy who's a conference speaker, so he knows other conference speakers, drinks vodka with them, and so on.
He says there's a significant amount of bullshit, i.e. things that look nice on slides or work well in theory, but in practice they aren't as great.
A common meme I've seen recently is "no one knows what they're doing".
I think people like to believe this because it helps them cope with impostor syndrome, or maybe they think it puts them on even ground with people who do in fact know what they're doing.
My model for people who "know what they're doing" is that they tend to have a well-organized hierarchy of rules. At the base are principles; at the top are opinions.
The foundation tends to be pretty simple, deeply held, and unchanging, while the higher levels are increasingly fluid and specialized. The higher you get on this stack, the more "making it up as you go along" it becomes, but every improvised part is perched on something more stable.
The key to "knowing what you're doing" is organizing this hierarchy well, having the right supports in place to successfully guide improvisation and course-correction while steadily fortifying the foundation.
It's clearly not true in all circumstances. You can bet that an airline pilot has a very clear idea of what they are doing, and so will your dentist. Closer to home there are plenty of sub-fields in software where I'd be completely lost but when (say) we have to add a new endpoint to the webservice I work with, I absolutely don't have to make it up as I go along.
Your assessment of why people like to believe this seems spot on.
"For some the productivity drops to zero without stack overflow."
And is it bad to be a newb? And even for experienced devs who go to Stack Overflow regularly... isn't it productive to not always reinvent the wheel?
I can solve almost everything on my own. But if I have a new problem, I assume someone else has already had it - I would be stupid to figure it out on my own when I could get a working solution with 5 minutes of googling.
But I actually program without an internet connection most of the time, as I like being outside, away from noise (and wifi).
The issue does arise when you aren’t able to understand the problem space enough to realize that what you copied from stack overflow has a mistake or doesn’t fit the requirement you need (e.g. perhaps it doesn’t match your error handling architecture or so on).
That said, stack overflow can be a great source and I’ve written plenty of code with a comment pointing to a SO link to further explain a pattern or snippet for a future reader.
"The issue does arise when you aren’t able to understand the problem space enough to realize that what you copied from stack overflow has a mistake or doesn’t fit the requirement you need "
Yeah sure. A stupid programmer will remain a stupid programmer, even if he reaches a certain productivity by living off of stackoverflow ...
There is nothing wrong with copy pasting code from stack overflow.
I do see two kind of people doing that. One group learns from the code in order to become better, and can use it over and over to be more efficient. The other group doesn't care how it works and just wants to have a snippet that works.
The second group usually lacks curiosity, the effects of which show up in many more places than just copy-pasting from Stack Overflow. They also tend to have a flatter learning curve. I don't want to generalize, but in this group you will encounter people who don't care about the difference between a list and a set, or think that code works because it compiles. In both cases the juniors know very little, but one grows and the other one doesn't (or grows less).
There is space for both in the world, but I prefer the first group in my team.
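For what it's worth, the list-vs-set difference mentioned above is a two-line experiment in Python (a minimal sketch; the sizes and numbers are arbitrary): membership tests on a list scan element by element, while a set does a hash lookup, and sets also deduplicate.

```python
import timeit

data = list(range(100_000))
as_set = set(data)

# Semantics: a set deduplicates and ignores order.
assert set([3, 1, 1, 2]) == {1, 2, 3}

# Performance: `in` on a list is O(n); on a set it's O(1) on average.
needle = 99_999  # worst case for the list: it's at the very end
list_time = timeit.timeit(lambda: needle in data, number=100)
set_time = timeit.timeit(lambda: needle in as_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")  # the set lookup is far faster
```

Whether a snippet like this matters is exactly the kind of thing the curious group figures out and the other group never does.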
I grew up a long time before Stack Overflow; I actually used man pages and read books. But there is just _no way_ to program in Rust or Go without access to a search engine and the package libraries.
I've done various rust and clojure projects by downloading a lot of git repos ahead of time for reference while on a long-haul flight. This works pretty well, but you need to do a bit of research ahead of time on which libraries you might want access to. This is probably slower, as you have to read source code and think more about the type signatures (rather than looking at some misc example), but if you have 15 hours, what else are you going to do?
FYI, you can spin up godoc locally and browse it offline. It won't help you pull in a new package, but you can always drill down into the stdlib implementation right from your editor if you have jump-to-declaration. I wrote a custom consensus protocol implementation in Go on a flight for work, sans wifi.
The criticism may depend on different values of "making it up as you go along", i.e., it may not mean so much "just wing it in ignorance" vs something like "even if you have many answers you don't yet have all of them, and new answers generate new questions exponentially...". So, perhaps less "everyone's ignorant" vs "we're all living in a land of many unknowns". But, yeah, he did find it disillusioning, and maybe is over-generalizing from a one-off experience (in contrast, I've done similar and was nothing but impressed, finding it is extremely valuable to learn from the best in a field).
How do you distinguish between what you do and "making it up as you go along"?
IME most people will generally appear to be making things up as they go – even if they have significant relevant experience. Every situation is unique, and experience tends to look more like having a list of techniques with varying degrees of expertise, rather than having a playbook for every situation. You have to look for the expertise rather than raw confidence.
In sports terms it would be something like a baseball pitcher being able to throw a great curveball, a great fastball, and an all right slider, and knowing roughly what situations to use them in. There will still be a high degree of randomness and mistakes will be made.
Agreed. What experience and talent gives you are instincts that improve the chances of whatever it is "you're making up as you go" working well.
I would much rather work with people who have a good track record of making it up as they go, as opposed to people coming in with a fixed idea of how something should happen, who are more likely to misapply whatever lessons led to those views (probably someone else's anyway).
> built a framework for spinning up new fully-deployable services from scratch that gets engineers 90% of the way there (minus the business logic)
I'm guessing that this was based on first-hand experience building such services and witnessing engineers struggle getting new services up. And not so much that you've had specific training or past experience in developing bootstrapping frameworks. This would be my definition of making it up as you go and is great way to do it. Another way is learning how to make bootstrapping frameworks and applying it wherever you can which doesn't go as well.
Related thing I'd add: be very careful about taking a dream job; I've seen this happen a few times--it's likely to disappoint. Also dating a minor celebrity crush.
In your particular instance, I would have collaborated with a single team to work on converting a single service over to the new framework. Once some success was made, it would be much easier to make traction with other projects and teams. Also, a vision or a plan doesn't mean you're not winging things as you go.
Building a new framework should be way at the bottom of your list of things to consider. If you do, please make it a blackbox.
It's tiring how many details one often needs to get into before being able to do something they could have summarized in a sentence the whole time. But this is a general issue!
> Hacker News and r/Programming is only good to get general ideas and keep up-to-date, the comments are almost worthless
That's a weird one. I don't know anything about that subreddit, but HN comments are frequently great. I submit stuff because I want there to be HN comments on it for me to read. I typically read the comments first and only bother opening the link if they were interesting.
>HN comments are terrible. On any topic I’m informed about, the vast majority of comments are pretty clearly wrong. ...
>And yet, I haven’t found a public internet forum with better technical commentary. On topics I'm familiar with, while it's rare that a thread will have even a single comment that's well-informed, when those comments appear, they usually float to the top. On other forums, well-informed comments are either non-existent or get buried by reasonable sounding but totally wrong comments when they appear, and they appear even more rarely than on HN.
Pretty much this. HN is still a gazillion times better than everything else on the internet. And that's excluding the absolute gold comments from members who were part of those battle stories.
It's also a reason why I don't want to mention or see HN links in mainstream media. Although I think most reporters sort of know this as well and tend not to mention or link to HN as a source.
My experience of Hacker News comments is not positive. They tend to be very convincing but actually kinda bullshit (or just vacuous), which is arguably a lot worse than communities that are transparent.
I was responding to the idea that it's better than everywhere else on the internet. That hasn't really been my experience. Discord is better, StackOverflow is better, even Reddit is often better if you're looking at specialist communities.
But to be honest I rarely find pearls here. Even many of the "pearls" in the article are just well-written articulations of stuff that is... kind of obvious. Some are inversions of ordinary wisdom for the sake of inversion. Only some carry new information.
I find the most valuable stuff here tends to be arguments where someone knows their shit but is going against the grain, and that person will usually be flagged into oblivion. But there aren't many places you see someone like that responding in context to the mainstream dogma.
---
Interestingly though, reading through it I explicitly remember a lot of the comments he quoted, e.g. the FedEx Airport one (which was really interesting). It's kind of crazy to think the site is small enough that we're all reading the same good stuff.
That’s not obvious to me at all. I comment on both YouTube and HN, and I don’t feel I’m part of any “community”. I just think the quality of the average, highly upvoted HN comment is about 100 times greater than the equivalent YouTube comment.
>I don't know why full stack webdevs are paid so poorly. No really, they should be paid like half a mil a year just base salary. Fuck they have to understand both front end AND back end AND how different browsers work AND networking AND databases AND caching AND differences between web and mobile AND omg what the fuck there's another framework out there that companies want to use? Seriously, why are webdevs paid so little.
Full stack compresses two jobs into one. It's purely for cost-savings. They're paid so little because companies revert to "well, you can still only do 8 hours, so you do half as much of each", but really that's just them trying to weasel out of paying for knowledge. They also blur the lines by putting full stack along-side other devs, even though the other devs may not have invested the same time to gain as much knowledge as full stack.
When you take a full stack job, you undervalue your knowledge (and the time invested) and are selling it for roughly half of what it's worth.
I've been a "full-stack" developer at large tech companies, and my experience is there at least it means "frontend developer who can put together a basic API server". My fellow full-stack developers and I would spend most of our time building out frontends, which was generally regarded by others as challenging and specialized work, and maybe 20% of the time adding API endpoints to fetch or update some data, which was considered straightforward.[0]
Not having to wait for some other engineer to make the backends made us a lot more efficient. It definitely was rewarding to be able to complete products end-to-end.
Hiring standards and pay were the same as for any engineer, at least in FAANG.
[0] Yeah, occasionally we had to optimize some SQL queries or whatever but we're competent engineers, we can figure it out even if it's not what we do every day.
> my experience [full-stack] means "frontend developer who can put together a basic API server"
This is 100% accurate in my experience too, and it's also true the other way around: "full-stack" means a backend developer who can put together a basic SPA using React/Vue.
From the frontend perspective: they call themselves full-stack for knowing how to spin up a NodeJS HTTP server powered by Express, with MongoDB inserting JSON into the database. But they are missing:
- AWS/cloud computing: not necessarily creating the infrastructure (although that's a must on more senior levels), but how to orchestrate the different components together.
- databases: why SQL/NoSQL, beyond basic SELECTs, knowing how to properly add indexes, debug why queries are slow, modeling, understanding locks implications and transaction levels, and so on.
- tooling: how to set up a bundler, linter, formatter, testing, CI/CD. This overlaps a bit with the responsibilities of a DevOps engineer, but a full-stack should know all of those things at least at an intermediate level. I can't say how many times I've seen "senior full-stacks" who had no clue how webpack worked at all.
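As a concrete sketch of the "beyond basic SELECTs" point (using Python's built-in sqlite3 purely for illustration; the table and index names are made up), EXPLAIN QUERY PLAN is how you check whether a query scans the whole table or uses an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

query = "SELECT id FROM users WHERE email = ?"

# Before indexing: the plan's detail column reports a full table SCAN.
before = conn.execute("EXPLAIN QUERY PLAN " + query,
                      ("user500@example.com",)).fetchone()[3]
print(before)

# Add an index on the filtered column and the same query becomes an
# index SEARCH instead of a scan.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = conn.execute("EXPLAIN QUERY PLAN " + query,
                     ("user500@example.com",)).fetchone()[3]
print(after)
```

The same habit - checking the plan before and after adding an index - carries over to EXPLAIN/EXPLAIN ANALYZE in Postgres or MySQL, which is the level of database literacy the comment above is asking for.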
From the backend perspective: they call themselves full-stacks for knowing how to spin up a React/Vue app that does basic CRUD operations using forms, powered by a UI framework like Material UI. But they are missing:
- CSS: most will find CSS hard/annoying and won't bother understanding at all how it works even on a fundamental level, will defer to hacks most of the time to make things work, especially when it comes to adjusting for edge cases like responsive design or cross-browser support.
- the DOM: normally they don't understand it at all, or to a very limited extent.
- Web vitals: how to measure and make things fast and performant. I'm not talking about over-optimizing, just making sure your app runs at or close to 60fps most of the time. Usually when things get slow, either on the network side or in the app itself, those engineers will blame the framework/library, not their misuse of it.
--
Those lists are definitely non-exhaustive, as I didn't even mention more advanced stuff like protocols (how does HTTP work? most can't answer), caching, etc., but you get the point I'm trying to make:
The problem with the term full-stack is that only very few engineers are really sufficiently great on both sides of the stack and could say they've mastered both, simply because there's just too much to learn! Frontend has become so much more complex with SPAs compared to when it was about rendering static HTML with some CSS and basic behavior via jQuery. Same for backend with the advent of cloud computing and several types of databases.
I've been coding professionally for a decade and I've met only a single engineer I'd consider full-stack (he checked all those boxes I mentioned and more). I think I would also include myself, because I spent 50% of my career as a front-end engineer, became senior, then transitioned to back-end engineering because it pays the same or more and is less stressful (at most of the regular companies most of us work at). My current title is "principal full stack engineer", but in practice I only do backend/devops; I don't actively code front-end, but I keep up with the industry by following new trends and testing things here and there in personal projects.
Ultimately, I believe that to be a full-stack engineer you first have to be a front-end (or back-end) engineer, then learn the other. What we have today is most people doing both from the very beginning of their careers, and they either go deep on a single one or on neither.
Full stack is not really a skill level qualifier - that's what junior, senior, principal, staff, etc. are for. It means you work in different areas and can't say "oh I don't do that work here" when someone gives you work in those areas. People call themselves front or back end engineers long before they've mastered it, and you don't have to wait for full stack either.
The front or backend bias you and your parent's comment talk about are team specific things so you don't give a hard front-end task to someone who is biased towards the backend. That still means they can take the less difficult tickets.
You could replace "full stack" in your example with "extremely good/talented/gifted engineer", and I'm almost sure it has nothing to do with the full-stack label, because it's the only example you've found in a decade.
But also the jobs that they get feed back into what they call themselves. So even if that's your example, it isn't how the business defines it, which IMO is ultimately why it's just a cost savings label to get people to work harder for similar amounts of pay as front or backend people.
There's also the issue of how many full-stack web developers are actually capable of doing all the things he lists.
My experience is that at some scale it works out okay, but beyond a certain point it just falls flat for most. We deal with insanely talented developers who will trash a database because they don't understand how it works. Talented JavaScript developers who don't really understand how HTTP works... or load balancers, or caching... or webservers. Sometimes you get these fantastic software machines as deliverables: complex, hard to monitor, with little to configure, and then it turns out they just implement a basic feature of HAProxy or Apache, badly.
My point is not that they should be paid poorly because they fail to be excellent at every part of their job, but rather that, yes, this should in most cases not even be a job title. If you find someone who can do all of this well, you almost can't overpay them, but are you really sure you want to tie everything up in one person anyway?
A full-stack developer knows some frontend, some backend, some SQL. They are paid good money because they are convenient, not for their knowledge. A dev who does only one thing knows way more about it than a team of full-stack devs.
Full-stack devs earn a lot and will probably earn only more in the future.
This is mostly a result of tech advancements like cloud infrastructure and tooling that take the hardest things off your plate: managing a DB, implementing security, managing infrastructure, deployments, etc.
You don't need deep experts because of it, generalists are perfect for quickly shipping new features.
I would do "full-stack" over a decade ago, when there was less of a notion of a "front-end engineer" (which still sounds a bit ridiculous to me) - the front-end was mostly HTML and CSS. It was a good experience to go from requirements gathering to the database schema and back to presentation - it helped me see the whole picture.
"Sure, $120k + bennies + pension sound great, but you'll be selling your soul to work on esoteric proprietary technology."
It sounds great because it is great. I don't make that much and I work on boring systems.
I guess those numbers also explain why the author can recommend maxing out the 401k. People supporting a family on less than $100k don't have $19.5k per year to put into it.
You have a great point, but let's balance this out with a few of the author's other comments.
- OP uses the phrasing senior engineer.
- Never worked for FAANG. This is relevant because $120k + bonus/benefits is basically a FAANG new grad. Fairly normal for SV tech companies.
- Those numbers likely provide a solid standard of living, but as a senior engineer you are likely underpaid.
- Esoteric and proprietary knowledge means if you choose to go elsewhere, you will be at a competitive disadvantage compared to using industry standard tools. There are of course tradeoffs and lots of general learning that comes with experience, but all other things equal it's a disadvantage.
> I guess those numbers also explain why the author can recommend maxing out the 401k
Yes, but again fairly typical for the target audience I think. There are even startups trying to target this market to optimize the flow of money from salary -> 401k -> post tax contributions/megabackdoor -> other investments, brokerage accounts, etc., e.g. https://www.helloplaybook.com/
Just as a sanity check, BLS suggests the median SWE wages work out to ~$110k.
I'm an intermediate dev, masters degree, 9 years experience, non-FAANG, higher cost of living area (not SV, NYC or NVA), and work with obscure tech and proprietary tools.
You might suck, but it seems more likely that you're getting ripped off by an employer who hopes you don't know what your skills are worth.
Are you on LinkedIn? Do you ever speak with recruiters about other opportunities? That's a great way to get a feel for the 'market rate' for your skillset in your area. When's the last time you changed jobs?
Yeah, this is my situation. I am AWS certified and started working on a team that uses it, sort of. So maybe I can transition off of there in a year or two because the subject matter sucks.
Programming is programming no matter the language, don't let anyone tell you otherwise. They are all just tools. If you have a masters degree you should be able to pick up anything proficiently in a matter of a few months, just grok the existing codebase as much as you can.
The programming part is easy to pick up. The tools and ops parts are more difficult, mostly because there are so many. And we are a 'microservices' shop (read: distributed monolith). I don't get to work in a single system or language. One sprint/day I might be in ECS Java, the next might be Python Lambda, then no-code stuff like Splunk and Tableau. There are a bunch of minor and bureaucratic tasks too.
The real problem is I deal with this sort of stuff. I started doing analysis about modifying a system to provide a new field to another system for the purpose of reporting. After spending a day looking at it, they pulled the story because they didn't actually need that field. And this isn't a one time thing - pulling back work. Then they give me BS stuff. They wanted me to increase the code coverage on an app that we were going to transfer to another team. The target percentage - 100%. It was already at 97% line 98% branch. Why am I wasting my time on this miserable task?
Dude, run from your current employer. I actually transitioned into software dev after working as a Mech Eng for 2 years, and I started at $120k. With your experience I feel like you could do way better.
So while some employers require X years of (specific tech), many, MANY don't. They expect X years of development. Broadly. Can program and are AWS certified? Start looking. And if there's nothing in your area, look remote. You can hit that salary and solid benefits (no pension) in most metro areas (I hit it with 5 years dev experience, and only a bachelor's, back in 2015 in Atlanta, for a non-tech company).
You are almost assuredly more desirable in this market than you think. Consider making finding a new job your new hobby.
Glassdoor's market rate/comp tool says I'm actually making market rate for the area. One major downside to switching is that it involves more time to come up to speed, like putting in extra hours. I can't really commit to that because I have to watch my kid as soon as I log off of work (after 8 hours).
Glassdoor's tool is not very useful, I've found. It only even somewhat works for salary, since it doesn't capture bonus or equity incentives (salary alone is sufficient in some markets, not others), leading to deflation of total comp. It also doesn't track things like overall years of experience, or how long a person has been in a position. All of those matter: internal raises have tended not to match the market's increases, so the lower end of the market is filled with people who have been in their position a long time, while the upper end is people who have job-hopped recently, that style of thing.
The past couple of jobs I've had I came in at the upper end of Glassdoor's reported salary, for the specific company even, even when I had relatively few years in the role, and without negotiation on my part. And Glassdoor didn't at all represent bonus and equity properly. Levels.fyi did a much better job of it (but has fewer data points for non-tech companies).
I have never worked extra hours to come up to speed (and in general haven't put in extra hours, though I've sometimes had to work weird schedules due to working with people across timezones), and have pretty consistently been a high performer.
I'd still recommend just doing some searching. Worst case, you validate that your current comp is the best you can get. Mediocre case, you find you could get paid better, but not doing anything you feel comfortable taking. Best case, you find something that is interesting and exciting and will pay you better.
"I have never worked extra hours to come up to speed (and in general haven't put in extra hours"
What kind of job do you have? I thought extra hours were normal in tech?
I have looked around. This area (Philly region) seems to be pretty terrible for tech jobs. There are some higher paying ones, but they tend to be niche.
Every place I've worked I've seen the same thing. Work/life balance is stressed as being important, BUT you will totally end up working extra hours if you cave to implicit pressures others set on you (oftentimes the business, product, etc). Someone will try and schedule you for a meeting at 5 PM, or say "we need this by next week", or whatever. And every place, I've said "No". Sometimes it's "I can't make that, I have to be home", sometimes it's "That isn't what we committed to this sprint" or even "That's what we committed but the sprint has been broken because (other thing)". Occasionally it's even been "Hey, we ran into something unexpected; even though we committed to that it isn't going to happen by that date".
On call pages happen; I always take time off the next day. As a manager, I -tell- my team to take time off the next day if they get paged.
Even where I am now, where I will have meetings scheduled at 7 AM, and 6 PM, routinely, I just close my laptop up in the middle of the day. Sometimes I'll even book time pre-emptively just to prevent people from trying to schedule me straight through (and thus leading to a > 8 hour day).
Etc. The perverse thing is that by doing this people actually get a -better- impression of me. There's an element of confidence; couple that with the fact I do deliver, and they don't question it or push back on it. And ultimately even if they wheedle enough I say "okay, fine, yeah, I have to talk to the guys in China; 6 PM meeting it is", I just book time 9-5 for my own stuff, or (when in the office), leave at like 3 so I can get home, unwind a bit, have dinner, and take that call, and still only work 8 hours.
I can't speak to Philly; you might be right. If you're tied to the area, consider remote. If you're not tied to the area, consider looking for things outside of it.
I've found that if I only work 8 hours they will say that I'm not getting things done fast enough. It makes sense if they are comparing you to people who work more. I know one department where the tech leads all work 10 hours consistently.
I was once in a discussion about how to get to a senior dev position (after filling the role of senior dev for a year and tech lead for another year). I was told I had to work an extra hour per day. That's a 13% increase for a 7% raise and a role with higher expectations...
First, a great deal of research shows that working past 8 hours a day for long stretches actually -reduces- overall output compared with working 8. Now, to be fair, the bulk of this research was done on physical labor, not mental, but interestingly, what research -has- been done on mental work found the same thing, except the actual number of hours was more like...6.
Second, you're working at a place that actively -encourages- you to work unpaid overtime. That sees it happening and rather than saying "what can we do to prevent this and not risk burnout" instead says "good. Keep at it". If you're salaried, you're effectively getting paid a lower hourly rate for the work you're doing. So...not to beat this horse again...but...look for another job. It doesn't sound like the pay is great, it doesn't sound like the environment is great. The only thing keeping you there is a belief you can't get anything better; maybe that's true, maybe it isn't, but it certainly will be true if you don't at least look.
Get yourself on Linkedin. Find a tech focused resume writing service to help you with verbiage, both for your LinkedIn profile and your resume. Let recruiters know you're looking on LinkedIn (https://www.linkedin.com/help/linkedin/answer/67405/let-recr...). Start looking for roles in your area, and remote (and in any area you'd be willing to relocate to). That sounds like a better use of an hour a day than giving it, for the same price (free), to your current company, in the hopes of a 7% raise and higher expectations in the future.
Obscure tech and proprietary tools do translate… if you can translate them.
Programming languages are all the same, so learn 3 or 4 new ones and discover that you can probably write in any language for an interview (then do some in relatively unfamiliar languages for kicks and giggles to practice).
Tech is all the same. Take data in, poop data out. That’s the whole job. The formats and protocols change, but once you start thinking about your stacks as data-in, data-out, they all start looking the same.
Make video games or hardware drivers from scratch, those are the hardest things to make. Video games from complexity overload, and hardware drivers from interface complexity.
I already know Python, Java, Java for Android, Neoxam script, scripts (bash, bat), and Angular to some degree. I've also used JS, AngularJS, C++, C#, powershell, assembly (Intel), and COBOL in the past. So yeah, stuff translates and it isn't that hard to learn a new language (neoxam is probably the hardest since there is limited documentation and examples).
I don't have any interests in games or drivers. Those aren't applicable in my company either. I am currently working on an Angular site. I will host it on S3 with a Lambda and maybe SQS for a marketing email list. This is tech that we use at my job, and many other places.
I really didn't mean for the comment to come across that way. I fully agreed with "it sounds great because it is great". I just wanted to provide another perspective too.
No, I completely get that. I'm just adding some of my background to show my thoughts. I know there are other areas and other people that command higher prices.
Learn the skills for the job you want, claim you do that stuff on your current resume (within reason), and jump ship. Most prospective employers won't push too hard with needing references from your current place of employment. Find your best mate at your last job, edit that portion of your resume, and fill them in on the details.
I've been at the same company for 9 years straight out of college. The company doesn't allow employees to give any references for people who are leaving.
If you're in the US, there is only an -infinitesimal- chance that they'll actually ask for references for a software job.
They'll likely run a background check and make sure your resume isn't obviously lying (they'll confirm start/end dates and title, basically), and make sure you're not a criminal, MAYBE do a credit and/or drug test.
$200k for salary for a new grad strikes me as being on the (unrealistically?) high side. You'd need to be quite talented and have negotiating leverage for this. FAANG generally won't offer this out the gate. That kind of salary would buy senior devs no problem, even in big tech cities.
$200k total comp, absolutely. A standard new grad offer with no negotiation might be something like $120k salary, $25k cash, $120k options vesting over 4 years with 6 month periods. So in the first year, no negotiation, you're taking home $177k pre-tax. I know people who have gotten up to $60k cash bonuses (split over two years) out of college, simply by having another offer.
I think they are being serious. If you're in silicon valley, I could see $200k being a new grad salary. I wouldn't move there for less than that. The cost of living is extremely high in that area. The vast majority of areas would have much lower salaries. My starting salary was less than $60k. After 9 years and a masters it's still under $100k in a medium-high cost of living area. Median salary for a developer is about $110k in the US.
Meant to reply to you but replied to parent by accident. $200k total comp seems plausible. $200k salary seems like a bit of a stretch to me. Companies would much rather give you a fat cash bonus or stock than raise the salary so high.
This rings home for me as well. I remember my (now ex) wife saying she was going to quit working and be a stay at home mom the day I hit $100k salary. Effectively cutting it in half economically. I feel like folks who say these kind of recommendations also need to preface the fact that not everyone can afford to. I couldn’t afford to put anything in a 401k for years.
> It's not important to do what I like. It's more important to do what I don't hate.
This one has started to dawn on me. I'm never going to love my job as much as I love my personal projects, so a job that I don't get very excited about but doesn't drain my energy is better than one that I get somewhat excited about but does drain my energy (not that it's impossible to have both, but it's rare)
This is a very pessimistic long-term view, in my opinion.
Life is extremely, infinitesimally short. If it's at all possible for you, you should try to spend as much of it as you can doing things you love. I know many people can't, but it's bleak to just give up and permanently settle, I think. (Especially if you don't currently have any dependents who rely on you; it changes the equation if you do.)
I don't look at it that way at all. I'm choosing to conserve my finite energy for the things that matter most. It's all about priorities: of course I would like to have a job that I'm both excited about and not too stressed about, but I can be happy and fulfilled with one that simply doesn't dominate my life (in terms of time or energy, because those are separate things), because my job is not my life. And being uncompromising about having a job that really excites you can force you to make all sorts of other sacrifices (time, stress, risk, maybe even things like money and lifestyle), which is where the prioritization comes in.
Maybe. I'm definitely not necessarily saying it in terms of work or career or anything like that. Just in general, I get a little depressed at the idea of just slogging through most of one's hours for most of one's days in existence.
It seems to me that many if not most people I meet fall into one of two camps.
The folks that just don't care at all and do the minimal amount of work they can get away with without getting fired.
The other side is folks that care so much that they spend so much energy on it as to be ultimately unhealthy for themselves, while also making it worse for the rest of us (not taking vacation, unpaid overtime, etc.).
It is very rare to find people that care 100% while at work. But after ~40 hours each week that's it. Sure I'll stay later for that one meeting that's sorta important and needs to start at 4:30p.m. But you better not try that every day from now on. Want me to be on call for a fixed rate that has nothing to do with my salary? I can't log an hour for a 5 minute call at 1am and I can't take time off in lieu? Good luck getting me on that rotation!
Can confirm this, and it was a surprise for me when I discovered it. I assumed they would try to maximize the $$ in my offer in order to maximize their profit. Yet most of them were trying to get the offer accepted as quickly as possible. It seems like more offers with less money in each is preferred over fewer offers with more money in each.
The same dynamic occurs with real estate agents. A successful agent optimises for quick sales, while giving the appearance of trying to get the best price. Very important to know when dealing with agents if either selling or buying a home.
To be more explicit: as a buyer, especially in a tight market, the seller’s agent will often do things or reveal information to the buyer's benefit (against the interests of the seller they are supposed to represent).
There are “rules” to how the game is played, and how the information is revealed, and a lot of it seems hidden (in fact I have seen agents that don’t understand the perverse incentives!). A seller’s agent will usually need to make sure they have deniability, and they will prefer options where the vendor remains happy (so the vendor will use them again, and recommend the agent to other sellers).
I have only a very little experience, and none in the USA, but that was what I noticed in my own country.
It surprises me how little effort people put into understanding the game, given that playing it well can easily make the same difference as many years of income.
Some real estate agents will, if asked, agree to a more top-heavy commission -- e.g. instead of x% of the total sale price, y% of the amount over $z. (I don't know the exact details, but my dad did this when selling his flat.)
I think someone like Derek Sivers (maybe not actually him) talked about doing this. From what I remember, it was quite painful finding one who would agree to it, and was probably only possible because it was a stand out property.
This would likely be more difficult for an average person to pull off, but can be a great mutual win if you pull it off.
A quick tip for homebuyers/sellers to save on realtor fees. Contact the title company and they'll gladly tell you what you need to do to close the loan. Maybe not ASAP, but they are more vested in you doing the busy work and doing it right than they are the worthless realtors. Half the time realtors in my state wouldn't even sign the offer to purchase and include their info even though it's illegal and they can lose their license.
Interesting. It must be a strategy around women selecting only the top 10%, so most men aren't getting any matches. Getting matched before reading profiles saves tons of time, which kind of makes sense. Then you can review and filter afterwards.
Why is that sad? Have you ever heard of speed dating? Similar concept.
Well I've witnessed people not even looking much at the photos. Men do get matches, and I don't really agree with how factual or defined your initial statement is. I guess it's sad because of how arbitrary the selection is.
The selection is arbitrary because they're not the ones doing the selection, women are.
There's a well known study from OkCupid about this, women are way more selective so men only have very few matches a week (and some of those don't even respond) unless they try a lot.
> Good code is code that can be understood by a junior engineer. Great code can be understood by a first year CS freshman. The best code is no code at all.
Yeah, it's striking how this is not the majority opinion even amongst seniors. I'm mid-career and I still have to ask very experienced people why they always jump on the most long-winded, complicated way to solve a problem that is often simple enough to require very little.
There is still to this day the idea that we're supposed to plan for the future like oracles, or pad our resumes on the company's dime.
Which is why leetcode makes no sense: it focuses on premature optimization and writing the most advanced code possible, which no freshman will be able to understand, at least not without dissecting it for a few hours first.
> Algorithms and data structures are important--to a point
For me, 90% of day-to-day algorithms use is not writing O(nm) or O(n^2) code, and knowing which data structures to use to avoid it (usually a hashtable, occasionally a heap or balanced tree).
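A minimal sketch of that in Python (function names are mine, just for illustration): finding the items two lists share is O(n*m) with a nested scan, but O(n + m) with a set, which is usually the whole trick.

```python
def common_items_slow(xs, ys):
    # O(n * m): re-scans ys once for every element of xs
    return [x for x in xs if x in ys]

def common_items_fast(xs, ys):
    # O(n + m): one pass to build the hash set, one pass to probe it
    seen = set(ys)
    return [x for x in xs if x in seen]
```

Both return the same answer; only the second stays fast when the lists get big.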
It all depends on your use case. If your input is bound to a small N, I’ll take the simplest to understand implementation regardless of algorithmic complexity. Future you and other people will thank you later.
It’s safer when it’s hard coded data and really bad versus dynamic data and sorta bad. I’d rather have O(n!) with a list of length six because it’s going to break hard before it gets to 20. O(n root(n)) on your customer list will slowly creep up and it would be easy to not notice.
If you're really cool, you make it throw an error if n > 50 along with a comment explaining why, or at the very least leave a comment saying "this is not the optimal algorithm and will break if N gets much bigger than 50".
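A sketch of that guard in Python (the function and the cutoff of 8 are hypothetical, picked for illustration): fail loudly with an explanation instead of silently hanging when n grows past what the brute-force approach can handle.

```python
from itertools import permutations

def assignments(people):
    # Brute force over all orderings: O(n!). Fine for a handful of
    # entries, catastrophic beyond that -- so raise instead of hanging.
    if len(people) > 8:
        raise ValueError(
            f"assignments() enumerates n! orderings; n={len(people)} is too "
            "large. Rewrite with a real algorithm before raising this limit."
        )
    return list(permutations(people))
```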
As the guy who has to deal with your code years later when the company is bigger...please think about the complexity, especially if it is a simple problem. Or not - that’s why I have a job
I don't know why you have been downvoted, you do have a point. Algorithms are tailored towards specific purposes. If you for example have something that runs with an n of about 5, and it's not in a hot path, the clear concise solution may well be better. Besides, with small n there is more to the cost of an algorithm than just its asymptotic growth.
However, to the point of the person you've replied to: It's good to know that stuff to be able to assess the situation in the first place.
I'm fond of saying that one of the big differences between theory and practice in CS is that in practice, we don't immediately drop the coefficients. Instead we study the problem to learn what the coefficients are and use that knowledge to guide our approach. For instance, there are times a linear search will just flat out beat a binary search due to cache effects.
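Python won't expose cache effects directly, but the shape of the idea (a measured cutoff below which the "worse" algorithm wins) can be sketched like this; the function name and the cutoff value are illustrative, and the real cutoff comes from benchmarking on your hardware.

```python
import bisect

LINEAR_CUTOFF = 32  # illustrative; measure on real hardware to pick this

def contains(sorted_xs, target):
    # For tiny arrays, a linear scan often wins in practice: fewer
    # mispredicted branches, and the whole array sits in cache.
    if len(sorted_xs) <= LINEAR_CUTOFF:
        return target in sorted_xs
    # Past the cutoff, the O(log n) asymptotics dominate.
    i = bisect.bisect_left(sorted_xs, target)
    return i < len(sorted_xs) and sorted_xs[i] == target
```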
But if you care about the coefficients the first time you write the code, you are prematurely optimizing. The theory here is wiser than it intended to be :).
First draft: don't be stupid with asymptotics as PP says.
Then, only if there's an actual problem in prod or whatever: bust out the concrete measurements.
It may also be faster for small N, as big-O notation swallows the constant factor. It’s not accidental that standard libraries' sort algorithms fall back to simple insertion sort at the end of recursion when n is small. (Though please correct me if I’m wrong, I’ve only once dug into the sort algorithm’s code in the C++ and Java standard libraries.)
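The fallback pattern can be sketched in a few lines of Python (this is a toy quicksort for illustration, not what any real standard library ships): below a small cutoff, O(n^2) insertion sort beats the recursive machinery because its constant factor is tiny.

```python
def hybrid_sort(xs, cutoff=16):
    # Below the cutoff, insertion sort wins despite being O(n^2):
    # no recursion overhead, and n is small enough not to matter.
    if len(xs) <= cutoff:
        for i in range(1, len(xs)):
            x = xs[i]
            j = i - 1
            while j >= 0 and xs[j] > x:
                xs[j + 1] = xs[j]
                j -= 1
            xs[j + 1] = x
        return xs
    # Otherwise partition quicksort-style; the small partitions
    # eventually bottom out in the insertion-sort branch above.
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    more = [x for x in xs if x > pivot]
    return hybrid_sort(less, cutoff) + equal + hybrid_sort(more, cutoff)
```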
In my 25 years of software development, 90% of the time you don't need to optimize the code, and the worst solution in time complexity is totally fine. Then there is another 8% of situations where trading space for time complexity is the solution, i.e. using lookup tables. In the remaining 1-2% of cases, you might need a combination of some data structures or containers from the STL or Boost (in the case of C++). I've never had to roll a special version of a data structure, container, or algorithm myself from scratch, and I work on some really complicated real-time software.
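The lookup-table trade is easy to show with a classic example (my own sketch, not from the comment): precompute the bit count of every possible byte once, paying 256 entries of memory, so each later query is a few table indexes instead of a loop over 32 bits.

```python
# Space bought once: one small table covering every byte value.
POPCOUNT = [bin(b).count("1") for b in range(256)]

def popcount32(x):
    # Time saved on every call: four lookups instead of a 32-iteration loop.
    return (POPCOUNT[x & 0xFF]
            + POPCOUNT[(x >> 8) & 0xFF]
            + POPCOUNT[(x >> 16) & 0xFF]
            + POPCOUNT[(x >> 24) & 0xFF])
```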
> The older I get, the more I appreciate dynamic languages. Fuck, I said it. Fight me.
The other ones are quite obvious but this is the one that really resonated with me. People, with experience, tend to get more pragmatic and less "academic".
Pragmatic for me is knowing what’s wrong at compile time, and not being told much much later at runtime that I nested this array in a dictionary in an array wrong, or that my string is an integer.
I got older, stopped using python for exactly the reasons above (I just really felt that it was wasting my time for trivial reasons), and found mypy which made it bearable again.
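To sketch what that buys you (the function here is hypothetical): once the nesting is spelled out in annotations, mypy flags a caller that passes the wrong shape, say a dict of lists instead of a list of dicts, at check time rather than at runtime.

```python
from typing import Dict, List

def totals_by_user(events: List[Dict[str, int]]) -> Dict[str, int]:
    # mypy now rejects a mis-nested argument before the code ever runs;
    # without the annotation you'd find out from a 2 AM traceback.
    result: Dict[str, int] = {}
    for event in events:
        for user, amount in event.items():
            result[user] = result.get(user, 0) + amount
    return result
```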
Yeah, I'd be on board with that if 99% of the people talking about typed languages at the moment weren't using Typescript for use cases where you literally get instant feedback from hot reloading as you code.
Every time I see an example of a bug that TS would solve, it's something that I routinely find in 2 seconds by looking 10 degrees to the left at my second monitor and noticing the screen is white and there's some red text in the dev tools. "Compile time" doesn't mean anything if it consistently happens 0.5s before "run time".
As your application gets larger it's not that easy. Even if your code hot reloads all the time, getting to the piece of code that breaks might take 10 clicks or you are working on some feature that has quite a few combinations for use and/or settings that influence it. Maybe some feature flags thrown in. With this setting off and that feature flag on, suddenly your integer is a string. Boom runtime error but only because your QA thought of it when testing it 3 days after you already moved on to another ticket.
Yeah, the funny thing for me is that I run my business on Haskell exactly for the sake of pragmatism.
The last time I had a job was on a Clojure team, and that team has since abandoned Clojure and some of my former colleagues have come to me and essentially said that they got tired of fighting with runtime errors.
I keep getting downvoted on Reddit for stating that I personally hate TypeScript to the max. It ruined the beauty of JavaScript (once you know it, anyway, I understand there are difficulties for JS-novices) and I honest to goodness 100% do not EVER find myself thinking: "Gosh, thanks TypeScript!" - on the contrary, it's always: "For fuck's sake you stupid POS TypeScript, you're wasting my time for no benefit at all."
Strongly typed languages (especially in the damn browser) make no sense to me. It's not like we need to manage memory or anything.
20+ Years of experience here, and I hate TS with the passion of a thousand suns.
I disagree on strong typing. I think this is actually a good improvement that prevents many silly mistakes. But static typing and constant (re-)compilation is the part that kills the joy when creating software.
Count yourself lucky then. I've tried to work with TDD cult members and it was one of the worst experiences of my professional life. A good sign that people have moved into the cult is when they start saying that people who are not using TDD are "unprofessional" or produce "shit" code.
Clojure goes against so many of lisp's timeless philosophies and principles that it can hardly be described as a lisp.
When someone says "lisp is the greatest programming language" it's these principles that they refer to, most of which Clojure discards so it can play nice with Java and promote very specialized ways of solving problems in order to best fit a particular niche.
The best way to discover the essence of lisp is to read SICP and learn Scheme.
There it is again. Care to name any of those timeless philosophies without resorting to the minutiae of cons cells?
Clojure has very tangible and definite downsides and tradeoffs (to name some: a weaker REPL, though not as weak as some Schemes; the JVM is required; heavy interop reliance), but it has served well as the flagship functional lisp.
I think there is Clojure and cloJure. The language is very different if you can mostly get by writing pure Clojure code, and another language altogether if you need to interop with Java constantly.
If you can get by mostly writing Clojure code (either by wrapping the Java libraries that you will use in helpers, or by using third-party libraries), it is a great language, even if in the end it is very different from any other Lisp (but I'd argue that the changes are for the better, for example first instead of car, thread macros, protocols, immutable data structures). But yeah, for sure Clojure is much more opinionated than any other Lisp.
Now, if you need to interop with Java code constantly, yeah, Clojure can be a pain. A good chunk of the code you write goes to appeasing the alien structure that is the concept of Class in an FP language.
> The most underrated skill to learn as an engineer is how to document. Fuck, someone please teach me how to write good documentation. Seriously, if there's any recommendations, I'd seriously pay for a course (like probably a lot of money, maybe 1k for a course if it guaranteed that I could write good docs.)
Highly recommend a journalism class at local community college.
Or just learn how to take notes. Notes should be made with the intent that someone new could understand what's going on. With that approach, if you can't make good notes, you're thinking too much about what you know and less about what others don't.
I studied journalism at college as part of a more general media studies course.
It does help but I am not sure I would describe _my_ education as a panacea.
The main reason it was good was because it helps frame how you think of writing.
The most important things go first. Statement of fact, then you go into what the implications are, then you start introducing less and less relevant elements.
When you’re writing, you keep the five “W’s” in mind and make sure you answer them (who, what, where, when, why); for instructive documentation you add: how.
Obviously an education bakes this into you in a better way than I can convey here.
What I learned is probably only decent for writing overviews.
Personally I find structure to be the biggest bottleneck/difficulty when making documentation.
What do others normally struggle with that don’t have this education?
It's good he's not drunk enough to tell the main dark truth: there are no senior engineers, only some people who finally managed to cope with their imposter syndrome. Oh wait...
If you keep learning and improving, imposter syndrome never goes away. After you settle in, you can get used to what you are capable of. I've never lost mine; it's learn one thing, discover two more you didn't realize were important that you don't (yet) know, recursively.
> If I'm awaken at 2am from being on-call for more than once per quarter, then something is seriously wrong and I will either fix it or quit.
Sometimes fixing the problem will require special access to Production which you don't have, or even a specific role with that extra bit of initiative.
If you're frequently being paged for stuff you literally can't fix, then the process/monitoring/alerting has broken down somewhere and needs to be fixed. If it can't/won't be fixed, then the company needs to hire ops people whose specific job is to react to and triage system failures -- devs should not be treated as escalation machines. If the process can't be fixed and the company won't hire people to handle the process, then you quit.
Very true. I do ops, I'm on call. Calling a developer at 3:00 AM is not something I do lightly; it would have to be insanely critical.
Operations and on-call staff fix broken systems just enough that they will hold until 8:00 AM, when the developer is back at work.
Just this week I talked to a developer, and he asked if I could switch the phone numbers, so issues would get routed to him first. My question: Why? You can't really do much without me being awake as well, so maybe I get the first call, and I call you... IF I need to?
This is true unless you own a meaningful portion of the company (>1%). Dumping your entire investment because of extra hours is not a way to get that asymmetric upside.
Maybe... none of this applies to people who own meaningful portions of the companies they work for.
People are probably honest when they're drunk. There's nothing to argue with in any of those comments.
But this one stood out for me!
> Titles mostly don't matter. Principal Distinguished Staff Lead Engineer from Whatever Company, whatever. What did you do and what did you accomplish. That's all people care about.
That's true unless you get acquired. At that point, your title is what your new overlords will use to determine your role, salary, and even whether to lay you off.
I've seen that happen to an excellent domain expert at a company during radical downsizing. Though TBF, that round saw fat, muscle, sinew, and bone cut.
> Good code is code that can be understood by a junior engineer. Great code can be understood by a first year CS freshman. The best code is no code at all.
This a thousand times. Having empathy for future devs, maintenance, and bug fixes is so important.
Amen. Something bizarre I have noticed though in junior-almost-senior engineers is that they pride themselves in obfuscating and writing "highly complex" logic, with no documentation. It's almost like they are demonstrating their new abilities in the worst way possible. I have been dealing with one of these engineers recently, and they have expressed to me that they love writing <highly problematic, confusing code> because it's so terse. It's been a point of friction, actually, because I have been trying to get other engineers to help on the software they have been contributing to, but it is nearly indecipherable without the original author's help.
> Something bizarre I have noticed though in junior-almost-senior engineers is that they pride themselves in obfuscating and writing "highly complex" logic
I think it happens because most measures of code quality are quite fuzzy, but brevity (which is valuable, other things being equal) is relatively objective. "Have I made the code shorter?" is a much easier question to answer than "Have I made the code easier to understand, modify and maintain?"
> "Have I made the code shorter?" is a much easier question to answer than "Have I made the code easier to understand, modify and maintain?"
This is something that comes with experience though, and I think a lot of people don't truly grok this until they are trying to maintain their own terse/clever code written months/years earlier. Nothing is quite as humbling as doing `git blame` on some crappy code only to see your own name there.
I think another place people can end up here is if they don't know what the compiler is doing under the hood, it's easy to assume the shortest code will perform the fastest or something like that. "Presumably this cool trick avoids these extra steps" type of things.
Before I had written much assembly, I used to think using ifs to avoid assignments was smart. Turns out avoiding branching is better for both testability and performance.
A similar thing I've noticed a trend of recently in frontend react codebases is overuse of memoization. It seems as though people don't realize how it works and that it is often _less_ performant than just doing some low cost computation on each render (like a comparison or basic math).
Amen to that too. That's probably the most complicated part of being a manager or tech lead. You have those amazing junior-almost-senior engineers that could be way more productive and yet deliver better code, purely by "doing less", but the over-engineering gets in the way. You know they could be top-contributors, so you don't want them to leave. But at the same time it's very tiring!
I noticed that they put a lot of their self-worth in the sophistication of their code, so it's difficult to criticise without making them feel bad. You need alternative methods of getting them to "see the light" and write code that's more understandable and maintainable by others.
What alternative methods have you found to get them to "see the light"? I've found myself wishing they'd do therapy, but that doesn't help and can't be expressed.
Mostly public feedback (for only the good things, of course). Put their "good" code on code samples, documentation, code guidelines, tell the team "look everyone please do it like person X did here". It will surprise them in a positive way.
Also on PRs try to point to their own work as sample of how to do things better. This doesn't hurt the ego much, because the role model is themselves.
Also, I feel like most of the time this is an impostor-syndrome/perfectionism issue that also happens with other workers too, so HR can give tips on how to deal with those issues in a more sensible way and tell you what you can or can't say.
That hurt me to read! Have you communicated this to your manager? Might be worthwhile to have a decision "from the top" that is essentially: all logging is good, so long as it doesn't hurt performance or contain PII/PHI.
Three things I find to be true of every web app I work on:
1. Good logging is the most important part of the app. Whatever the app is meant to do is secondary: the app should be a logging app first, and a backend service to sell widgets second.
2. Assume performance requirements for request latency and transactions per second will be at least 3x whatever the product owner tells you at the start of the project, and plan accordingly. Never trust any suggestion that you can 'ignore performance for now'.
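A minimal sketch of the "logging app first" idea with Python's stdlib logging (the handler function and field names are made up for illustration): log intent, inputs, and outcome so the whole request can be reconstructed from the logs alone.

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("widgets")

def handle_order(order_id, quantity):
    # Entry log carries every input needed to replay the request.
    log.info("handle_order start order_id=%s quantity=%s", order_id, quantity)
    if quantity <= 0:
        # Rejections get logged with the reason, not silently dropped.
        log.warning("handle_order rejected order_id=%s: bad quantity", order_id)
        return False
    log.info("handle_order done order_id=%s", order_id)
    return True
```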
Boundaries between anything. I recently was dealing with an issue in a Jenkins pipeline, where I didn't realise that state was being serialized to string form between job stages until I explicitly logged it out. The thing that was a list in the previous stage was suddenly a string, but then Groovy would happily accept the join method on a string because it's still an iterable. Auuugh.
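Python has the same trap: a string is iterable, so code written for a list "works" on a stringified value and just silently does the wrong thing. A small sketch of the footgun (the pipeline framing is simulated, not Jenkins itself):

```python
items = ["alpha", "beta", "gamma"]

# Simulate a pipeline boundary that serializes state to a string,
# the way Jenkins does between job stages.
serialized = str(items)

# join() accepts both, because both are iterables of strings...
from_list = ", ".join(items)             # joins the elements
from_string = ", ".join(serialized[:3])  # joins individual characters

# ...so nothing raises; the bug only shows up in the output.
```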
I wish more managers and business stakeholders investigated this more carefully. Team members of this type add a shadow overhead that impacts velocity dramatically. It’s always visible to average competent devs on the team, but can be invisible to managers who don’t investigate as to why only one person is particularly productive on the team. Most people won’t go to their bosses and say ‘so and so writes overly complicated code that’s making my life a living hell’.
In fact, I think a third-party auditor would be a valuable service for dev teams to utilize at least once a year. Totally neutral party that can come in and say ‘We’re pretty sure the codebase is too complex and we noticed the commits came from so and so’. The business value here is you can root-cause velocity issues that can come from decent people who need to be reined in (not necessarily fired).
I’m literally prepared to pay to have these people objectively assessed.
I would do this auditing job, no joke. A kind of "code-smell" service that can identify problematic areas, along with a report of engineers who could use additional guidance/training/reining-in, would be super valuable from a manager's perspective. And because it's a neutral party, they can feel good that there's no politics.
One challenging bit about this service would definitely be quantifying improvements. Since the problem is somewhat hidden by nature, you would almost need testimonials from other engineers on the team.
Another way to remove finger-pointing is to identify features that should be reasonably easy to implement, but for whatever reason don’t get done in time, or worse, don’t get done well (end results being bad).
If a team was tasked to make a simple landing page for example, and it was oddly hard or time consuming for an average team member, it would be good to dig into why. If the answer is ‘you should see the boilerplate involved, or the deploy process ...’, then you can make a neutral analysis as to the cause.
Testimonials aren't a bad thing though. I think every dev should have their code read and evaluated by at least one other person, although getting as many people as possible to read it would be best. If code legibility to help team members understand, debug, and improve upon the code is essential, the best metric to use for code quality would be their collective feedback on said code.
> Totally neutral party that can come in and say ‘We’re pretty sure the codebase is too complex and we noticed the commits came from so and so’.
This sounds like it'd reduce psychological safety on the team, to have someone without the project context come in and criticize your engineers. The morale decline of such a choice could outweigh the benefits.
It’s a suggestion. Generally, code complexity is created by someone that is actually pretty knowledgeable and competent. A straight confrontation won’t easily neutralize such a person in a discussion. They will know how to defend. If they also have peers they are close with, those friends will also negligently condone it with a simple ‘I don’t see anything wrong with that implementation’.
It’s a tough one, so I don’t even know where to begin other than an independent arbiter. Anyhow, I agree with you that it is a delicate matter from an emotional perspective (even though the underlying issue can be a reasonably objective matter).
The same goes for junior writers who think that complex sentences and words are a sign of superiority, and later discover that the real (and bigger) challenge is writing clearly.
Along the same lines, I recently reviewed a junior engineer's design document and pointed out that a diagram showing the actors, their roles, and their interactions would have saved two pages of dense, complex text and made the solution clearer.
I think there's an element of pridefulness too, in having the ability to manage dense and intricate stuff at all. They're very smart, and it makes them feel good to be able to exercise that and juggle and retain so much context at once. And they don't realize how fragile that juggling is, or that it's going to take a ton of effort for them or other people to come back to it.
I think this is more prevalent for some languages/stacks than others, too; there's definitely a cultural aspect fostered by the language owners or whoever the leaders are.
> Something bizarre I have noticed though in junior-almost-senior engineers is that they pride themselves in obfuscating and writing "highly complex" logic, with no documentation
I've noticed this too. One thing I've had mild success with is the concept that a particular programming document (especially in functional programming) is really a series of mini-documents. Each mini-document has function-level comments, a signature, a body, and returns that tell part of the story of what that function does. The minute that the collective of those fails me and I find myself reverse engineering code, we have failed the team and cost the company money.
Some complicated things must be done, especially at the size and scale of our products, but complex things are painted with a fine veneer of interfaces and documentation.
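A sketch of that mini-document shape (the function and its domain are invented for illustration): the docstring, signature, and body each tell part of the story, so a reader shouldn't need to reverse-engineer anything.

```python
def parse_retry_after(header: str) -> int:
    """Seconds to wait, taken from an HTTP Retry-After header value.

    Handles only the delta-seconds form; anything else maps to 0, so
    callers never have to care about malformed input.
    """
    try:
        return max(0, int(header.strip()))
    except ValueError:
        return 0
```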
I think another exercise that can help is putting junior engineers front and center to architecture. Whether it's exposing them to review, the review process of a Senior engineers design, or putting them front and center to design implications. I've seen having to figure out the difference between a controller and a service cause some really positive abstract thinking that puts people on the order of thinking for the group rather than their own merits.
I worked with a guy very much in that vein. He had enough years of experience to call himself senior, but it was clear his actual skill level was halfway between junior and senior at best. He wrote the most clever, fancy, opinionated code I've seen in a while, and he wrote a lot of it. I weep for the programmers that will come along in a year or so that have to figure it out.
Does that matter? A team with a senior/junior separation should have a system of peer review in place, where code that is not understandable to a peer will not get merged upstream. There should be a CI system with a linter that forbids abusing syntax to write dense/obscure code. If code requires documentation, there should be a doc-coverage tool in place which forbids new undocumented code from being merged upstream.
If these systems are not in place, and a senior developer can get away with writing overly complex code, then that is the fault of management, not the developer.
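As a rough sketch of the kind of doc-coverage gate described above (a toy stand-in, not the interface of any real tool), a CI step could fail merges that add undocumented functions or classes:

```python
import ast
import sys

def undocumented(source: str) -> list:
    """Names of functions/classes in `source` that lack a docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

if __name__ == "__main__" and len(sys.argv) > 1:
    bad = undocumented(open(sys.argv[1]).read())
    if bad:
        print("undocumented:", ", ".join(bad))
        sys.exit(1)  # block the merge
```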
It goes too far though. The virtue of simplicity needs to be balanced against the virtue of making proper use of advanced language features.
A first-year student is unlikely to understand C++ template metaprogramming, or just about any Haskell code, but that's not to say they should always be avoided in production code.
> The best code is no code at all
This can be interpreted as advice to avoid the 'inner-platform effect' anti-pattern. Good advice, but personally I'd rather express it in terms of the inner-platform effect.
IME good commenting alleviates a lot of the “problems” with using complex language features. I’m thinking redis style comments (see here[0] for antirez’s philosophy on the issue). If you’re doing something that’s not immediately obvious, explain what you’re doing! That way others can verify it during review, and when someone is reading the code later they can read the comment to understand what’s happening rather than having to parse the code. IMO this applies just as much to simple constructs as to complex ones. Big for loop? Throw a comment at the top telling me what it does so I don’t have to read it when I’m skimming later. Better yet use `map` with a well-named function. Either way, provide a semantically meaningful summary of what’s happening.
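A quick illustration of that last point (the order data here is made up): the loop version needs reading; the named-function version summarizes itself.

```python
orders = [
    [{"qty": 2, "price": 5.0}, {"qty": 1, "price": 3.0}],
    [{"qty": 4, "price": 2.5}],
]

# Loop version: the reader has to parse the body to learn
# "this sums line totals per order".
totals = []
for order in orders:
    t = 0.0
    for line in order:
        t += line["qty"] * line["price"]
    totals.append(t)

# Named version: the name is the summary a skimmer can stop at.
def order_total(order):
    return sum(line["qty"] * line["price"] for line in order)

assert totals == [order_total(o) for o in orders]  # [13.0, 10.0]
```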
> If you’re doing something that’s not immediately obvious, explain what you’re doing!
Agreed. Comments have their place, and some code is unavoidably involved, just by the nature of unavoidable complexity. The solution isn't always to write simple code. If it were, we wouldn't bother studying clever and efficient algorithms.
Also, it's important to ensure comments are updated when code is changed. I don't know who originally said it: Stale and inaccurate comments are no longer comments, they're lies.
That blog post looks worth reading properly, I admit so far I've only skimmed it.
> A first-year student is unlikely to understand C++ template metaprogramming, or just about any Haskell code, but that's not to say they should always be avoided in production code.
They're just the people to read a book on the topic and try to use it everywhere...
This sentiment is exactly why programming in an org-chart is so much different than programming as an individual.
Don't apply corporate best practice designed to withstand turnover to personal programming - you're leaving abstraction and efficiency on the table.
The better code for your own projects is almost definitely inscrutable to a newcomer a lot of the time. It's okay for there to be prerequisites to understanding.
I agree with this in principle but in practice code I write that’s quick & easy and not very readable is usually not understandable by me in a few months either. So if it’s a personal project I hope to last I still want to keep it simple with my code.
I'm not saying to hack away and make a mess necessarily.
Sometimes the simplest solution also requires learning and building upon other concepts. Or sometimes, a simple interface is written around a complicated core.
For example: The OP's quote is used time and time again to argue against FP concepts in industry - a newcomer doesn't know the first principles, so by the OP's folksy razor[1], that code isn't as good as less abstract code that doesn't require learning a new concept or two once and for all.
[1] Folksy razors are the essence of every principle-ish-level engineer's methodology I've run into. Corporations value the ability to remove all human agency & decision-making from software development where possible.
> Corporations value the ability to remove all human agency & decision-making from software development where possible.
Corporations value the ability to continue as an operating entity and make changes to the code after the proponent of Kleisli arrows and lenses has departed for greener pastures.
Doesn't mean I have to respect or be sympathetic to it. It is just organized stupidity at scale.
That said, it's hard not to play the game and buy in. I just write my vanilla Java, say right-sounding things in meetings, and somehow get paid despite barely doing a thing.
Corporate software development is a great career tbh - instead of paying me to use Kleisli arrows for the company's gain, the company effectively pays me to use Kleisli arrows on my own IP lmao. Gotta love all that frothy waste that's produced by Worse is Better. Waste that the average dev can now reap thanks to the boom in remote work!
I’m not sure, you can become the newcomer yourself when you have to come back to parts of your code base months later. My experience writing simpler code has been pretty successful to respond to customers wanting random new features.
I disagree strongly with it on multiple fronts. That concern should be secondary to your program actually doing its job well. Your customer will literally not care how elegant or ugly your code is; they just see the end result. And when the program fails them, it really doesn't matter to them whether your juniors understand the code or the error. Moreover, not every abstraction is (or can be expected to be) accessible to an entry level engineer. Some technologies just take a long time to master, and doing that can also require higher-level abstractions.
So I would say great code is code that:
1. Does its job well (robustly, performantly, etc. in whatever proportion is applicable)
2. Is maintainable by engineers with reasonable expertise in the tooling
in that order. If you can manage all that and make it accessible to your junior devs, that will of course make your code greater. But don't lose sight of what your customers care about. Your business isn't there to make you feel good about maintaining code, it's to provide customers with value.
It really does matter to the customer if the junior understands it when the code fails though.
Easy to understand code can be more quickly patched and repaired by anyone on the team. If you don't need to call in the senior who built it two years ago to repair it, and you can have someone do it right away, it is better for the customer.
I never said it doesn't matter. I said it matters, but that the scenario you're portraying pretty much cannot play out, by definition (and empirically, from what I've seen), unless you accept that code readability for juniors is secondary to program quality. When you make readability your primary concern, it comes at the cost of fixing certain bugs and design issues, precisely because the best solutions may not be trivial or easy for the junior folks to understand 100% of the time. So you never reach your purported state where everything was well designed and implemented in the first place and now you just need a junior to fix a bug. Everything ends up clunky from the get-go and you never get a high-quality, robust program at all. Just something of mediocre quality with a ton of patches from devs of all levels to get it something like 85% working, shipping with known issues you could've avoided if you hadn't artificially restricted yourself and tied your hands behind your back for the sake of the juniors.
It depends on what you're optimizing for. I like to think of it this way. A good "programmer" can take ideas and turn them into working software that is performant enough, meets all of the requirements, etc. This is a mostly static operation. A good "engineer" can take ideas and turn them into working software that can be changed, updated, and maintained for years to decades by multiple programmers.
There are code bases at my current employer that are entirely "ok" and still being worked on from before I was able to spell my name. Projects that have had continued development for >20 years by armies of engineers and the code is still readable and simple to understand.
As with all software engineering, it’s all about trade offs and context.
That performant code that you wrote maybe at the expense of readability? It could very well become bad code when you leave the company and it falls to a junior engineer to modify it to fit some changing business requirement. Or, there’s a bug in the code and the amount of time it takes to fix it is a direct function of how quickly and completely that junior engineer can understand the code.
For me, the hard part is knowing when and how to make that trade off. I’ve definitely erred on both sides often enough.
I strongly disagree. You're packing a bunch of different metrics of quality into a single bullet and somehow suggesting those are separate from the second bullet point. Readability is just as dependent a metric as the others. If you make things that are hard to read, I can guarantee they are not going to be robust, and likely not performant either.
In my experience, the easiest to maintain code is very often the most efficient and robust as well, because people haven't felt the need to hack around it at every corner.
This assumes you are absolutely certain you know what the code should do, and that it does what you think it does. Hence, while you might think “performs its task” is easily defined, I’d disagree. I’d take clear code that wasn’t working over code that was hard to reason about and somehow worked, every day.
No, because I’d fix the easy-to-fix code and make it work correctly. The other code is useful, for sure. I mean, people have built billion-dollar businesses on crap software that barely works, and very rarely do they manage to fix it... I’m just suggesting I have a preference for what I’d rather work on. You might like code that works and is impossible to understand and sits there surrounded by an even more obtuse test suite (if there is one at all), but it’s not something I enjoy, is what I was saying. To some degree this is inevitable, but I think it’s always worth trying to fight the good fight.
Most likely the code I write has a bug in it. Or, at the time of writing, the customer requirement is fuzzy. Or, I have a limited grasp of the problem domain. Even if it is not any of the above, most likely there will be a change in a business requirement that impacts the code.
So whenever possible, I opt to write code that is either stupidly obvious, trivially testable, or easily replaceable.
How many entry-level engineers come onto a project per year? Five? Is onboarding them onto the project in a deliberate and streamlined way such an undue burden that you must change your programming style to avoid it?
But I do not know if this metric is quite 'complete', because I am fairly sure that wrapping your mind around the concepts is more difficult than understanding the code.
I am not saying the code cannot be made better or clearer. But it also depends on who you are writing for. Somebody who is not familiar with a certain style of programming cannot easily read code of a certain level of complexity.
When I was hacking away at my first big program, I could not write functions, or find reading functions easy. The whole thing was a big wall of glorified assembly sewn together by labels. I am not sure why I was like that then, but I found concepts like functions and recursion, or any other conceptual stuff, really hard. My code was, in its own twisted way, 'most simple' and utterly unreadable.
I find the same sort of difficulties while reading some FP snippets. I confess it was a very short affair, but I had some difficulty reading it and even when I understood, I could not just write or think code in the same style.
There are ways to make your code better and your intentions clear, but 'can be understood by a first-year CS freshman' is a bit of an abstract criterion.
It is kind of like vocabulary and prose. You can make your prose clear, but people have to work on the vocabulary on their own.
> The best code is no code at all.
This is completely agreeable.
Edit : Changed some poor word choices. Added an analogy.
Absolutely, you need to care for that future dev who's an idiot when writing code today because that's going to be you in a week or two when you've forgotten all about it.
I agree with having empathy for future devs, but I think it only goes so far. I've often seen junior engineers unable to differentiate between code they don't understand and bad code.
Usually they end up thinking they can do a better job, decide to rewrite the thing from scratch, and take 10x longer to rewrite it than they thought it would take. And accomplish nothing in the end, because the thing they rewrote worked in the first place.
I think of it as writing code for the computer/compiler, rather than for human readers. If the computer "understands" the code, you think you're done.
In real life, working on a team with ever-changing code, that is the bare minimum.
As a young programmer, I thought I was a master when I got the code to work. Now I know that is just the start. Making it readable and changeable is where real mastery lies.
This is very hard to convey to young fools like I used to be.
That's what I've noticed with TDD advocates: the amount of code required is enormous and distracts from the flow of control. Everything is replaced with mocks and stubs, and objects are replaced with instances from global scope. Not very good.
I've been fighting this at my company lately. We have a CDF that _clearly_ rewards people who write really advanced Ruby code. We have a lot of working, but not perfectly architected code that people come back through, pull it out into a module, and add a bunch of "included" and meta-programming.
It works. I look at their code and think "that's neat", but you added zero functionality while making it hard for the lower half of the engineers to work with. You could have accomplished the same thing with a hard reference to a class.
Someone please translate that to Latin and start plastering that on office walls so people take it more seriously. It’s going to save all of our mental health in the long run.
It's been seven years since I've tried writing any Latin, so you should assume this is butchered. (edit: I think it's less butchered now)
codex bonus a discipulo prendatur
codex magnus a novo prendatur
codex optimus nullus est
Part of the problem is I couldn't find any good word for "code". "Codex" sounds cool but may not be the best fit here.
EDIT: Forgot a word. Also, I think "prendere" is better for "understood" here than "scire", which is more like "to know".
EDIT2: My friend suggested using the subjunctive for "comprehend" so that it's "may be comprehended" instead of "is comprehended". Also I got the tense wrong initially and I think that's fixed now.
EDIT3: "Ablative agents" are a thing. This is a rough language. Thanks, James.
EDIT4: prendar -> prendatur; aka "oops, should have used third person"
That's a good find, but I was unsure of whether "program" is semantically equivalent to "code" here. Plus I'm tempted to leave codex since it sounds so good.
"I don't know why full stack webdevs are paid so poorly."
- The barrier to entry into webdev is low. You just need a laptop.
- An awful lot of websites seem to have been thrown together by morons; shipped; and never improved.
- For most sites, performance isn't an issue, so strong tuning skills for PHP, SQL, and Javascript aren't in demand.
- A "full-stack webdev" generally doesn't do the full stack - few of my webdev colleagues have been interested in networks, for example.
TBH I don't know. In my last position I was paid £45K; I was the most versatile and experienced developer in the company (12 people). The bosses constantly complained that there was a dire shortage of talent to recruit. They recruited quite a few overseas visitors.
Most full-stack web devs I know know everything, but know it anywhere between poorly and ok-ish. It works more or less fine until there is some issue that requires deep knowledge of one end of the stack or the other.
Generally I'd prefer to have front end dev + backend dev in my team over two full stack devs.
> Tech stack matters. OK I just said tech stack doesn't matter, but hear me out. If you hear Python dev vs C++ dev, you think very different things, right? That's because certain tools are really good at certain jobs. If you're not sure what you want to do, just do Java. It's a shitty programming language that's good at almost everything.
I think HN is big enough that you're not going to beat the averages with advice from the comments here. They're filled with the kinds of opinions that are more popular on big company teams than good startup teams.
Yeah. I'm nominally a full-stack developer, but I know fuck-all about React; I can maintain an existing project, but it would take me a significant amount of effort and googling to kickstart one from scratch, and diagnosing and debugging issues and writing effective tests is always a struggle.
YUP. They're "two jobs", but I've yet to work with a full stack developer who is master of either. Most commonly they're great at front end, and have a very superficial understanding of how to do anything on the backend, and even then the backend basically has to be node.js for them to contribute. No way am I getting them to change the database schema or build a new service.
Adding on to the thoughts about everyone writing terrible code sometimes, there is also “terrible code makes lots of money”.
Source: over 20 years doing software dev at various companies with terrible code that makes millions. Happily my current job has really great code (imho).
How often do we see a collapsed and failed startup where people say "Wow, there was some good tech at that company"? Good code and cool code exist on an axis independent of good business plans.
1) People skills dominate quality of work output in terms of value to the business starting at around this level. Really, introverts should avoid programming altogether which is weird because this was one of the few disciplines in which introverts could truly thrive but those days are gone.
2) What company leadership tells you they value has nothing to do with what they actually value. Look instead to what the company punishes you for doing or not doing.
> Good people write shitty code. Smart people write shitty code. Good coders and good engineers write shitty code. Don't let code quality be a dependent variable on your self worth.
That's like saying that eminent book authors write shitty books. Sounds like an excuse.
> I've become what I've always hated: someone who works in tech in a career but avoid tech in real life. Maybe that comes with being old.
It's all about finding the right project. REST + CRUD shit gets exhausting after 5 years. There's an entire world of excitement outside of REST + CRUD.
> The older I get, the more I appreciate dynamic languages. Fuck, I said it. Fight me.
I see this in myself also. Static typing is such a fever at work, especially amongst juniors and mids. Sometimes it’s almost as if they believe object types will spontaneously change at runtime, unpredictably.
But that’s exactly the point. It is too easy to introduce a code change that effectively will unpredictably (because it was unintended) change a type to something unexpected. And then blow up much later in spectacular ways.
Try writing something asynchronous in python with futures and complex nested data types. See how long you enjoy being told that you’re trying to look up a key in something that doesn’t seem to be a dictionary before you switch to static typing with mypy, and can spend your energy on solving the real problems.
I’ve been programming for way over 20 years, and I do not understand how people enjoy debugging something that the compiler would just tell them.
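A minimal sketch of that trade-off (the function and field names here are invented): annotate the shape once, and mypy flags a wrong-shape caller at the call site, instead of a TypeError surfacing deep inside an async callback at runtime.

```python
from typing import Dict

def failure_count(results: Dict[str, int]) -> int:
    # mypy verifies every caller actually passes a dict, so the
    # "looking up a key in something that isn't a dictionary" class
    # of bug is caught before the code ever runs.
    return results.get("failed", 0)

print(failure_count({"failed": 3, "passed": 10}))  # 3

# failure_count([3, 10])
# mypy would report something like:
#   Argument 1 to "failure_count" has incompatible type "list[int]";
#   expected "Dict[str, int]"
```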
I get all that, however I am saying that they’re being superstitious about it. I’ve been programming at least 20 years and thoroughly love programming in a dynamic language. My preferred being Common Lisp.
That said, the reason I like static languages for teams is where the "burden of proof" lies. With dynamic languages, the default being rather loosey-goosey means my colleague writes a function that "asks" whether the thing handed to it is a list or an object, and I have to somehow convince them that it should just take a single type, for consistency's sake... but with a static language, even though people probably try exactly the same amount of weird shit, it sticks out like a sore thumb and needs justification to include in the first place.
I’m operations focused (traditionally a sysadmin, but I can code to a decent degree and do so professionally), and I suspect we’ve all been bitten by something being inferred that we didn’t expect.
YAML interpreting NO (Norway’s country code) as false is the most immediately obvious.
Bash has a lot of these kinds of issues too. I don’t remember all the rules, so I quote everything to be safe and still get bitten by variable expansion sometimes.
Types that change depending on what the value is: that scares me.
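For reference, the Norway problem comes from YAML 1.1's boolean scalar resolution, and quoting is the usual fix:

```yaml
unquoted: NO     # YAML 1.1 parsers resolve this to boolean false
quoted: "NO"     # quoting keeps it the string "NO"
```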
> I don't know why full stack webdevs are paid so poorly. No really, they should be paid like half a mil a year just base salary. Fuck they have to understand both front end AND back end AND how different browsers work AND networking AND databases AND caching AND differences between web and mobile AND omg what the fuck there's another framework out there that companies want to use? Seriously, why are webdevs paid so little.
AMEN to this. I've been in the industry for 20+ years and at this point I know so many different frameworks (front-end and back-end), databases (SQL and NoSQL), networking protocols, browser differences, ORMs, it goes on and on.
I think people in the industry are paid based on the potential value they can create (whether real or not), knowing many things or understanding fancy technologies matters little if one can't put them to use for the benefit of the company.
I've helped found multi-million dollar companies that I left before I could "cash out" because they were desktop based companies working with technology I considered growing stale and I wanted to switch to the web.
Now I'm doing the same thing with another company on the web, except I'm handling everything as a sole engineer, full stack.
Here I'm dealing with creating an Angular application to replace an aging ASP.NET MVC app, having to rewrite hundreds of SOAP services into a proper REST architecture, and dealing with an Oracle database whose schema we need to keep in place, with a C# .NET Core back-end.
This company needs me, badly, but I doubt this is going to become a cash cow either (no equity, just a paycheck, and their budget is tight).
It seems like you are refactoring an existing application. I'm personally in the middle of something similar, but rather than doing it solo, I'm trying to engage other devs, grab their interest by demoing the new architecture, and see if I can get a few more hands on this work.
I'm doing all of this extra work because I think the best way to create value for a company is to make it easy for other/new developers to jump in to the project and contribute something of value quickly.
My refactoring would have failed completely if other devs were not able to contribute new code.
> It seems like you are refactoring an existing application. I'm personally in the middle of something similar, but rather than doing it solo, I'm trying to engage other devs, grab their interest by demoing the new architecture, and see if I can get a few more hands on this work.
The problem is they literally can't afford it. They looked at "near shoring" companies but they were too expensive. They said we got a quote for developers from another country who will do this work for $20/hour.
I explained that this has the potential of ruining a green-field project that absolutely needs to be architected and coded properly from the ground-up.
If we go that route the best I can do is enforce code standards and handle every pull-request.
These days you can hire from around the globe. Instead of looking at offshoring companies, just look to hire a few talented people directly. There's no reason a company can't afford to hire a few people remotely. Basically, create a small team that is focused and invest in them.
Enforcing code standards and reviewing pull requests is a suitable role for a senior engineer. Not everything needs to be architected; there's a ton of glue stuff and features that can be developed by juniors, provided the code around them is laid out nicely. Focus on the bigger picture. If you implement every little detail, then it starts to feel like everything is so important that it can only be done by you. That's a trap.
I've never seen so many job opportunities in my life. Getting what you want out of employment right now should be as easy as filling up on vitamin D; just walk out the door.
Could you expand on why you seem to think (if I’ve interpreted your words correctly), that a project is potentially doomed if the programmers aren’t from your country? That sounds awfully prejudicial to me.
Also, most companies like to brand themselves as ‘tech’ to signal that they are growth based. Why is Peloton a tech company? They’re not.
A lot of us have jobs because companies need to fulfill the image. It’s half the reason why so many people are allowed to do full-stack when in reality they would have no business dealing with those parts of the stack in a real operation.
So no, you don’t actually deserve more money because you are working on more things in an inconsequential space (e.g. what the entire Peloton engineering team does is probably bullshit; you need maybe a few devs).
There are certainly better examples than Peloton, but that’s what came to mind (a home gym tech (lol) company). Can’t wait until Bowflex starts hiring out web developers.
I see where you're coming from; I do all that too. But it's the "jack of all trades" thing. You "know" all that, but do you actually __know__ all that?
I can develop a nice relational database design, write the SQL stored procedures to manipulate it, write a backend API and write the front end SPA for it. I don't think I'm an expert in any of those things though, and if I am then it's more focused on the backend API stuff.
Like, I understand CSS better than most people, but I'm not a guru like some. I can write SQL for anything I need but I couldn't tell you anything about performance tuning my SQL outside of seeks vs scans and the size of a lock.
Understanding the basics of these things isn't really a daunting task, and that's why full-stack devs aren't paid half a mil a year. But you have to accept at some point that if you're full stack, you're rarely going to be considered an expert in any field. That's not bad, though; having well-rounded knowledge is super good.
>You "know" all that, but do you actually __know__ all that.
Companies that underpay fullstack web developers usually don't actually care if their engineers __know__ all that, either. They just want the cheapest, fastest, CRUD app they can demo and ship out ASAP. As a result, these employers fail to recognize (and reward!) the fullstack web developers who do actually __know__ the full stack.
At my last job there was a rockstar web developer who could easily have doubled their compensation by moving to a larger company instead of a startup. My advocacy for them to get a raise or promotion was cast as "the engineer is complaining again."
In my experience this is true because the team is so focused on sorting out the backend business logic and catering to new customer demands that they develop new features overnight.
Then they just assume the front end will consume them and output a little, inconsequential DOM element here or there.
The issue is in actually using all that knowledge in a single position and creating enough value. Larger organizations have specialized teams that move faster at their function rather than generalized devs.
The way for full-stack devs to profit most is to work at smaller companies and trade that for equity and seniority.
It has one of the more confusing interfaces I've seen, and operator overloading just makes it worse. Numpy, on the other hand, is fairly straightforward and intuitive (and to be fair, a much simpler tool).
They basically created a DSL within Python. It's a total nightmare unless you're someone who uses it in notebooks every day (and it's impossible to typecheck).
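To illustrate the operator-overloading trick behind such a DSL, here is a minimal sketch (this is not pandas' actual implementation; the `Col` and `where` names are invented for illustration). Overloading `>` to return a predicate instead of a boolean is exactly what defeats type checkers:

```python
class Col:
    """A column reference that builds predicates instead of evaluating comparisons."""
    def __init__(self, name):
        self.name = name

    def __gt__(self, value):
        # Overloaded '>' returns a function, not a bool -- the core DSL trick.
        return lambda row: row[self.name] > value

def where(rows, predicate):
    """Filter a list of dict 'rows' with a predicate built by the DSL."""
    return [row for row in rows if predicate(row)]

rows = [{"a": 1}, {"a": 5}, {"a": 3}]
selected = where(rows, Col("a") > 2)   # reads like pandas' df[df.a > 2]
```

Because `Col("a") > 2` evaluates to a function rather than a `bool`, a static checker has no way to know what the expression means without special-case support.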
Not a single comment in here is addressing two of the most important points brought up by the OP. The redditors managed to do it constructively. While gender and racial politics are clearly orthogonal to the concerns of the average HN poster, the tech we build is imbued with them, and we are responsible for the issues produced by the lack of their consideration.
> There's not enough women in technology. What a fucked up industry. That needs to change. I've been trying to be more encouraging and helpful to the women engineers in our org, but I don't know what else to do.
> Same with black engineers. What the hell?
I'll start. I have experienced multiple occasions of men being derisive towards women in the workplace, and have been nowhere near forceful enough in shutting it down. I have offered support after the fact, but I need to become more confident at voicing my displeasure in the moment. This applies to all of my male co-workers, and I would bet nearly all of you as well.
If you care about this issue can you consider editing your first paragraph? It's likely to preemptively derail the discussion you're trying to get started.
I certainly don't mean to imply that misogyny (which those kind of attitudes are a specific expression of) begins in the workplace. It's just where I currently am in life and where it has been most apparent to me in the past several years. I agree that it starts much earlier and there is significant work to be done at that level as well.
"My self worth is not a function of or correlated with my total compensation. Capitalism is a poor way to determine self-worth."
This is gold. I think the whole industry is failing at valuing engineers. Past 5+ years of experience, I think performance is a function of eagerness to code and innate intelligence. Yet the industry pays engineers largely as a function of years of experience.
I don't know. Possibly it's simply because, when you accept a career in the thing you're passionate about, the thing you're passionate about becomes work.
Another thought that has occurred to me of late: when I started in this field it was "Computers" (capital "C") — a thing really by nerds, for nerds. Increasingly it's the web, mobile. Our "customers" are increasingly not us and so the decisions that would have come so readily to us on how to proceed, what features to implement are instead handed down to us from design, marketing....
There are quite a few topics in CS I like, from compiler construction to robotics. That was the "hacker tech" for me. For a long time, I managed to have work with an interesting angle, but slowly it got to the point where I'm solving a fucking npm caching problem that caused a junior headaches, moving a server with an EOL OS with minimal service interruption, looking for another 2FA mechanism, and getting older tools to play well with mobile. None of it is interesting, and it takes a lot of time.
Not that it used to be better: many of my fellow students ended up in business administration. Some might even be architecting COBOL systems on an IBM mainframe.
Yeah, I don't think there's anything special about working in tech. I've seen the same exact dynamic happen in lots of other fields. People who love cooking and become professional chefs only to leave after a few years is extremely common, for instance.
So much so that I would have said it is pretty common advice that if you try to turn a hobby you love into a job, there's a good chance you'll end up losing your passion for it.
I still love programming, and if I wasn't working I'd be doing it as a hobby. But it's halfway satiated/suppressed by the sheer amount of it I have to do every week now.
It happened to me too. In fact, I'm in the process of leaving the field for something else altogether. I tend to get weird looks and "whys" when I mention it. It's difficult for me to explain in a concrete way, but I've just lost all excitement and motivation to do anything with programming (despite my mind still operating in very much a hacker mindset, where I see X and think "I bet I could make X do Y").
That's the thing, I've been trying to do that for years now. I've got about a dozen cool ideas that bounce around in my head and a new one every few months or so. I can spend all my non-free time thinking about and designing them in my head, but when it comes time to actually write the code, I just kinda lose all motivation.
Just do it at work. All this side-project stuff is OK, but it's a lot more stress (i.e., a deep-thought killer) than just being a bad-attitude employee and doing something awesome because you think it will be good. Don't spend too long on it. If you learn something, you win. If it is useful to other people at your job, double win. The worst type of employee for a software project is the well-intentioned, sincerely obedient one; that is how mega-disasters happen in software. Despite sprint scheduling and all that, you can take a week and play with some new stuff. You will look bad in standup for a while, but the cost of victory is looking bad for a little while in some bogus, unimportant context. When you finish the new thing, write a story for it and then sell the hell out of it to the other people.
Been doing this professionally for 15 years, and I'm still enamored with tech and programming. I actually avoided getting into the field because I was worried it would ruin my favorite hobby if I did it professionally (which is why I majored in Philosophy instead of CS in college).
I get sick and tired of all the rest of the bullshit, but never with tech and programming itself.
Being the best coder doesn’t make customers want your product. So at a certain point, I got over coding. I am now more interested in business and product management.
Perhaps because we don't feel productive anymore. 90% of the day is meetings, Slack, and dealing with production issues caused by unnecessary complexity; the remaining 5-10% of your time is developing, usually involving getting JSON to and from a database.
The other day, I started playing this game called TIS-100. The game simulates something like assembly programming and I had so much fun with it. Then I realized it’s been nearly half a year that I’ve actually written proper code for something (rather than a GitHub workflow script or account provision mechanism for testing framework) and built something that I was actually proud of.
The love and passion are still there, but they're buried under the needs of my current mediocre job. I need to find a position that will let me get back to the task of building things again, rather than just tweaking scripts and keeping the machinery humming.
Part of the problem is, to borrow an analogy, that the frame has gotten really big and the space for the painting is smaller. We recently finished a two- or four-sprint effort to ... deploy hello world. It has Bitbucket pieces, Jenkins pieces, EKS Terraform, etc., etc. We were all super happy that it worked, but it doesn’t do as much as you could do with twenty minutes of BASIC on a 1990 PC. And most of the people who wired all this stuff together can’t even write code to draw parabolas or calculate whatever. It used to be that I would develop for a few months, then spend a week or two getting stuff into production for new projects, or a few days adding features and fixing bugs in existing code, then ship it in a few days. There was thorough testing through code all along, and evil network layers, and so on, but most of the head space was in the code. Now it seems like teams spend months trying to get a docker image with all their dependencies and YAML files and so on. And the code is just taking some SQL that ran in a database and running it on some new cloud platform that is still solving ACID problems Oracle mastered twenty years ago. Half the code is just orchestration: run this hello-world Spark job that moves data from here to there. It is COBOL-level Scala.
I know this is a tangent to your main point, but in case you haven't already done so, check out some of the other Zachtronics games. SpaceChem is my favourite; it's not literally a programming game like TIS-100, but conceptually it pretty much is.
They also have another game (the name of which escapes me right now) that is sort of like a bigger version of TIS-100, where you have to write code for microprocessors. I didn't get into that one -- it was a bit less pure and straight to the point compared to TIS-100 or SpaceChem, and one of my favourite things about those games is their fundamental simplicity and transparency -- but I've heard it is good too.
I think I’d feel that way too if I had stayed in web development (which I only did as a short stint many years ago), and I felt it happening already.
It just doesn’t have much to do with computers and technology in the specific ways that I got enamored with computers and technology. Playing with bare metal stuff at work does. Of course, for someone else it might be the opposite.
Really? This one aligns pretty well with what I've learned.
It seems that the entire stack of people in software development is tasked with identifying problems and subdividing them in to smaller problems. This goes for coders structuring lines, architects structuring modules as well as managers structuring teams of architects and coders.
The quality of a software engineer shines through all these layers.
I can't say it seems to work this way in my experience. I had a boss who was a brilliant technical person: talking to customers, assigning clear tasks, no problem. But when a project got a little bigger than a month and needed a few more hands (I do security testing; our projects are typically 0-3 weeks) and the work was hard to divvy up, shit just hit the fan. The amount of work was grossly underestimated, the associated deadline was way too tight, and it was hard to work efficiently when nobody knew what anyone else was doing. He didn't seem to be good at organising this (at the time; I don't know if he has learned since). Of course this is n=1, and perhaps it's an outlier, but this is one of the handful of bullet points somewhere between "needs more explanation" and "my experience is quite the opposite, but okay." I'm not sure being a good manager is just breaking problems down and assigning the pieces, though of course that must be part of it. Other parts seem to include managing the people doing the pieces, and communication, and organizing communication. Divvying up a big task? Ask any engineer, indeed...
He sounds like me when I'm extremely sober. It's surprising to see how unoffensive and tame people's "secret thoughts" are, well, at least the ones that get massive upvotes on Reddit.
I disagree with the comment regarding the tech stack. It does matter; it only doesn’t if you’re willing to spend more money scaling worse-performing applications.
> I don't know why full stack webdevs are paid so poorly.
In my experience it’s because they either don’t really know any of the stack well enough to do more than implement someone else’s design, or else they don’t know most of the stack well enough to do even that without very poor, confusing implementations.
I’ll take a backend person and a backend person willing to do front end work before a full stack “dev” any day of the week. And I’d take the full stack dev before a front end one.
No, I just would have to think really hard to recall a front-end specialist who didn’t get bogged down in fad chasing, or who wrote code better than a backend dev half-assing it.
Possibly, but my experience is that front-end dev specialists tend to over-complicate things (usually so they can tinker with some new javascript flavored garbage).
> The older I get, the more I appreciate dynamic languages.
Exactly the opposite for me. I just can't stand hovering over a variable or a parameter and not getting its exact type, or typing "." after a variable and not having my editor give me all the available methods on that variable, or running my code just to discover that it instantly crashes because I made a typo, forgot an argument, passed the wrong argument, tried to call a method that doesn't exist on that variable, or whatever other issues happen only with dynamic languages. What a waste of my time.
As I said in another ranty response in this thread, I do not understand how people enjoy spending time debugging trivial issues, that even a simple static type system would just plain tell them at compile time.
I've been programming for over 20 years too, and I like dynamic languages. I like them a lot more when they're properly tested and well architected, but even the tire fire codebases are at least debuggable. The compiled stuff helps with types catching the trivial bugs, yes, but it's way too complicated to quickly debug things like seg faults. Dynamic languages let you introspect and modify things way more easily, and this makes things like fakes and mocks for testing way easier. It makes debugging easier. And not having to wait an hour for something to compile is nice.
That said, I love the speed of compiled languages. I once converted a simulation from Python to Cython and saw a 10000x performance boost because of CPU caches and all that. Usually the gains are closer to 10x to 20x, but in some rare moments it's like a rocket ship vs a hang glider.
EDIT: Reading again, I think you are comparing interpreted with compiled languages, not so much static with dynamic type systems.
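The "fakes and mocks are way easier" point can be sketched in Python with the standard library's `unittest.mock` (the `greet` function here is invented for illustration): because nothing is checked against a declared interface, a `MagicMock` can stand in for any collaborator.

```python
from unittest.mock import MagicMock

def greet(db, user_id):
    # 'db' can be anything with a fetch_user method -- no interface required.
    return f"hello, {db.fetch_user(user_id)}"

# A MagicMock fabricates attributes on demand, so no real database is needed.
fake_db = MagicMock()
fake_db.fetch_user.return_value = "alice"

result = greet(fake_db, 42)   # "hello, alice"
```

In a strictly typed setting the fake would have to satisfy the declared type of `db`; here duck typing lets the mock slot in with one line.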
Seg faults are a prime argument for static typing. The more static and stricter the type system, the less things like seg faults can even happen. Compare Rust to C (both are static, but one more than the other), and at the extreme end handwritten assembly (extremely dynamic). Those are languages used by kernel developers, where errant memory accesses of all kinds are a constant concern.
I don't know why dynamic languages would make introspecting things easier, or debugging in general. I agree that mocking can be easier with dynamic types. Compilation rarely takes long nowadays (incrementally it's usually just a few seconds), so the time saved in knowing that the code is still correct at least within the confines of the type system is well worth it.
As a kernel developer, would you want to take some of the many shell scripts that the kernel has and rewrite them in C?
Different tools for different purposes. Unless the ecosystem was really designed for it, I would not write a driver in a dynamic language. At the same time, I would prefer not to write all the bootstrap scripts that run during booting in C. If all I am doing is calling other programs, I use shell. If all I am doing is calling a bunch of low-level system calls in a restricted environment, I would use C. If I am doing a bunch of string editing, process flow management, and data compiling with some calls to external programs, I would use a dynamic language like Python.
I want to add a personal opinion regarding C. Every function gives back a return code, which is a kind of "type" that does not get enforced by the compiler. The return code is defined by the manual page, and it is up to the programmer to catch it and react correctly to it. If the wrong code occurs and the program explodes at runtime, it's the fault of the programmer for not writing a program that manages the return code. I would claim that the vast majority of crashes in programs written in C occur because programmers failed to realize the full list of possible return codes and what they mean. Here I do prefer dynamic languages, because they usually do not leave it up to the manual to define what return code -42 means compared to -41, and debugging errors when the errors themselves have class names and inheritance tends to be a bit easier in my experience.
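The contrast described above can be sketched in Python, showing C-style numeric return codes (easy to ignore, meaning lives in the manual) next to exceptions that carry a class name (the functions and the `ENOENT` constant are invented for illustration; real C errno values differ per platform):

```python
ENOENT = -2   # C-style: the error is just a number defined in some manual

def open_file_c_style(path, files):
    """C-style API: errors come back as magic integers the caller may ignore."""
    if path not in files:
        return ENOENT          # silently ignorable; nothing forces a check
    return files[path]

def open_file_exc_style(path, files):
    """Exception-style API: the error has a name and cannot be silently dropped."""
    if path not in files:
        raise FileNotFoundError(path)
    return files[path]

files = {"a.txt": "contents"}
code = open_file_c_style("missing.txt", files)   # -2; is that ENOENT or EACCES?
try:
    open_file_exc_style("missing.txt", files)
except FileNotFoundError as e:
    caught = type(e).__name__                    # the error names itself
```

The exception's class name and inheritance chain replace the manual-page lookup the integer would require.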
The kernel itself does not consist of any shell scripts. It may have them for building the kernel, but just like the shell scripts at boot, the problems solved there are much simpler (mostly calling the compiler and linker on a set of files). So I agree: for simple high-level problems a dynamic language is sufficient.
As to your second paragraph, I do agree that C has a vastly insufficient type system from the 70s (even though I think better type systems were already available at the time, but the inventors of C might not have known or cared about that). Rust solves the problem you described, and what you complain about is actually that errors in C tend to be represented not statically enough.
That is an interesting aspect of Rust I had not heard, and since over 50% of my C code tends to be about managing return codes with proper teardown, Rust suddenly does look a bit more interesting. However, on a surface look, it seems that Rust simply terminates the program when encountering some of the more critical return codes from syscalls, which does not exactly feel like it solves the problem. I guess it is also a reason why the kernel might not switch to using Rust any time soon, as OOM should not cause drivers to just terminate hard. From my surface look, it also seems Rust simply uses Result<> where in C you would store the value in an int, and both leave it up to the user to read the manual and interpret the number into actual meaning. Of course, I could be wrong.
In a way it also demonstrates another line between when I would use shell, Python, or C. With shell, everything either exited OK or did not, and the return data is always strings, so the programs written there are built with that in mind. With Python, I work with structured data but don't spend much work or thought on syscalls and what state the machine is in. With C, syscall management and the state of the machine are the majority of the code. As such, one picks the language based on what the code will be focusing on. Dynamic vs. static basically becomes a shorthand for that focus.
What's a seg fault? I jest, but static languages have come incredibly far since C++ (where they are already less common than in C) and I truly haven't dealt with a segmentation fault in the past many years working with static languages.
Haskell, Kotlin, and Scala 3 (with a compiler flag) will all remove null from the set of values acceptable for a type. (There are others as well.) So a String can’t be null, ever; you have to use ‘Maybe String’, ‘String?’, or ‘String | Null’ as the type, respectively.
If this is what you are asking.
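For comparison, Python's optional typing expresses a rough analogue of the same idea, though only an external checker like mypy (not the runtime) enforces it (the `find_email` function is invented for illustration):

```python
from typing import Optional

def find_email(users, name: str) -> Optional[str]:
    # The return type admits None explicitly, much like Haskell's 'Maybe String'
    # or Kotlin's 'String?'; a checker such as mypy then makes callers handle None.
    return users.get(name)

users = {"alice": "alice@example.com"}
found = find_email(users, "alice")    # "alice@example.com"
missing = find_email(users, "bob")    # None; mypy would flag missing.upper()
```

The key difference from the languages above is that nothing stops this code from running unchecked; the `None` case is only a promise in the annotation.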
> but it's way too complicated to quickly debug things like seg faults
I know you've listed the common argument for static vs. dynamic (dynamic: so fast to code but slow to run; static: way too complicated), but after a decade in SE I have yet to see good evidence of this.
Yes, some static languages (like Java) will make developing certain things slower vs. JS, but is Java a good statically typed language? Maybe these statements are "true" today with the current implementation of one or the other, but there are a lot of languages that I just can't see getting in the way.
A new example of this: Kotlin and Swift are statically typed, and I would love to see where they slow down these mythical developers who are so fast in a dynamic language. There's obviously going to be a cost for the actual compile time, but that should be minimal.
Unfortunately, I'm starting to believe that this is just another case of certain developers being used to certain languages.
The trend of the JS move to TS also points to this. Basically JS looks very similar to Kotlin and Swift (TS is basically identical).
To look at your specifics
> Dynamic languages let you introspect and modify things way more easily, and this makes things like fakes and mocks for testing way easier. It makes debugging easier. And not having to wait an hour for something to compile is nice.
> Dynamic languages let you introspect and modify things way more easily
In what way ?
> and this makes things like fakes and mocks for testing way easier
Fwiw this is what that fake/mocks look like for a static language
`val x = mock<User>()`
To be fair that's using a library and maybe that's part of your criteria ?
> It makes debugging easier
How? I can see the argument the other way (one less thing the developer has to worry about: typing issues), but how is dynamic easier to debug?
> not having to wait an hour for something to compile is nice
Completely fair. Whether the thing you're working on would actually take an hour to compile, though, I highly doubt. If you're working on a project that would hypothetically take an hour to compile, then I really hope it's not written in a dynamic language.
Not trying to pick on you at all; I believe a lot of developers would agree with you. But I'm starting to think that some developers are just used to one or the other. I have to point out that I could be thinking this way out of my own familiarity with statically typed languages (falling into the same trap I'm "accusing" you of), but I really have a hard time seeing this point.
Also, if you live close to high quality, it is easier to keep quality high. I worked in one place with a lot of C servers. Any time they segfaulted, the developer got an email with the backtrace and a link to the core. Counts were kept, and managers made sure people knew to fix them. For my code, it was always easy to fix each segfault. They were rare, and usually the stack trace showed all that was needed.
I also worked in a place that was far from quality, and they had totally given up on memory leaks and most segfaults. If a segfault happened deterministically enough, it might be fixed. Infinite loops would be fixed. But sporadic segfaults were just ignored. It was too hard to get close enough to quality to make it worth fixing.
Read what they wrote again, but this time replace "static" with "compiled" and "dynamic" with "interpreted", and suddenly it made sense to me.
It's true that overall, compiled languages tend to be more static, and interpreted languages more dynamic (and there are good reasons for why they end up that way besides mere convention), but nevertheless that's not what this discussion is about.
As a Swift dev, the reason why it slows me down is that I often end up fighting the type system.
I start with a concept, design my data structures on the whiteboard in a way that makes sense, then I try to code it and because of some detail in the type system it doesn't work, and I end up spending huge amounts of times wondering how to translate my concept into code.
I mean, by now it should be clear to everyone that there are certain trade-offs in the choice between dynamic and static typing.
I do like the clarity of static type declarations, and also the absence of weird polymorphism, like functions returning a number or a list of numbers depending on their parameters, etc.
But then, many statically typed code bases are just tested abysmally. It is as if the type signatures were taken to constitute proper tests. I realise that you don't need to write as many tests in a statically typed setting, but in many cases, and especially in underpowered type systems, the types won't test the program's logic.
You do need to write fewer tests with a static language. The types in your program are proof that your program is correct within the confines of the type system (literally, even in the mathematical sense).
The stronger the type system, the more properties can be proven through it (at the extreme end there are languages, unfortunately not Turing-complete, where you can prove every single property; those are used more as theorem provers, however).
Back to "common" statically typed languages, there is still heaps and loads to test, as you say. Not writing those tests is not really the fault of the language...
I am bringing up the point that there are tons of miserably tested Java/C++/etc. applications because it's a problem correlated with their usage, just like runtime type errors correlate with dynamic languages. A fair comparison mentions both.
Of course a strong type system can drastically reduce the unit test coverage you need. But last time I checked, the strong type systems that allowed for this were all not used in our corporate code bases.
I think it depends on what you’re doing. The author of the Reddit post mentioned he’s primarily working with data systems. I can see the appeal of someone running quick, ephemeral data analysis not wanting to deal with static typing. But for long-term use cases, the static-typing guard rails are definitely nice.
> I can see the appeal of someone running quick, ephemeral data analysis not wanting to deal with static typing
I've seen and heard this, (Data science field definitely loves their Python and numpy) but I really believe the common problem of non-reproducible research is partly due to the language choice (and probably more to the root cause - this sentiment in research).
Early in your career, you lean into one or the other. And then years later, after you're confident you're right, you find yourself trying the opposite paradigm and liking things about it.
Both have pros and cons, and if there was a correct answer we'd all just go with that one!
Yeah, this is something I’ve been pondering about. I’ve been doing 10 years of C++, and after a 6 months affair with Haskell fell in love with LISP. Now I’ve been doing Clojure professionally for about 5 years, and now am in a phase where I realize the grass is not green anywhere. I’ve been shocked at some of the bugs in my Clojure code that went unnoticed for way too long, and at the same time I remember the amount of “compiler fighting” that C++ or Haskell required.
It’s just a trade-off, in the end, and depends on what poison you can digest.
I've switched between the two several times – not out of choice, just because that's what was needed. The order was BASIC, C, Python, Java, Go, Python, with JavaScript mixed in for the past few. I mostly prefer dynamic languages because static typing is just redundancy. The point of programming is to express high-level concepts, which can't be captured quickly with types; if you don't understand the concept behind the arguments and return types, you are missing the contract anyway. Types can be helpful as the beginning of docs, but that's it.
Yes, types are redundant, and that is their entire point. Just as much as giving your functions and variables names is redundant, you could just number them. So is splitting up your project into multiple files, all comments, and even structural keywords themselves--you don't need "for" and "while", you just need "goto".
Take all that redundancy away and what you get is not even assembly, it's exactly machine code. We used to program computers that way when they were invented. We still do sometimes in extreme situations. We got away from it for almost all of programming because it's incredibly error prone (and tedious).
Yeah, same here. I started my career as a huge dynamic languages fan, and Python was my favourite language for over a decade.
But now, after 20 years, I appreciate a static language with proper IDE support and code completion. Offload the work to the computer, that's what we do for a living after all.
However, after spending a year working in Rust, I think this can be taken too far. The safety guarantees in Rust are amazing, but the overhead for contorting programs to a form the borrow checker will accept, and the mental overhead related to async/await compared to goroutines is too much.
My favourite language is now Go, and I find it strikes a good balance between static checks and productivity. Rust is still a more elegant language in many ways, with things like generics and iterators and their enum types (algebraic data types, I think, is the term?) and zero-overhead abstractions and clean error handling. Go feels a little hacky by comparison. But it's simple and way more productive for me personally, so I prefer it.
Interestingly Evan Wallace (constexpr here on HN) implemented esbuild in Rust initially, and switched to Go and stayed with it for much the same reasons, but also noted that the Go version performed better: https://news.ycombinator.com/item?id=22336284
> But at a high-level, Go was much more enjoyable to work with. This is a side project and it has to be fun for me to work on it. The Rust version was actively un-fun for me, both because of all of the workarounds that got in the way and because of the extremely slow compile times.
After a year of working with Rust and switching back to Go, I second this. I'm enjoying programming again and finding it easier to put in long hours.
Agreed. Try going from "I think this is a callback that returns a promise which can return a string or an int?" to "The compiler/IDE are telling me this future can only return an int and won't let me advance until I make my code comply"
Agreed. I did this journey twice. It was all static types when I was in high school and early college. Then I thought I was too smart to need the computer to do all that type checking for me ("I know what my program does, I don't need a compiler's help!") later in college and early in my career. Then I got incredibly sick and tired of working on really big projects in dynamic languages lacking the ergonomics of good static analysis.
I started programming with .Net languages via Visual Studio (which is quite a good IDE), and I disliked dynamic languages exactly for the reasons you list. But nowadays I mostly prefer dynamic, optionally typed languages ala Julia.
Typing a '.' and seeing the members is very nice, except when the type is not concrete and it's not clear what type is actually being returned. Then you'd have to do trial-and-error using a slow compile cycle. In a dynamic language like lisp, you could just `(inspect x)`. In Python, you can just `embed()` and run, e.g., `x.__dict__`.
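For example, a minimal Python sketch of that kind of live inspection (the `Response` class here is just made up for illustration):

```python
# With a live object in hand, you can just ask it what it has,
# instead of guessing the concrete type from docs or a slow compile cycle.

class Response:
    def __init__(self):
        self.status = 200
        self.body = "ok"

x = Response()

# Rough equivalent of `(inspect x)` in Lisp: dump the instance attributes.
print(x.__dict__)  # {'status': 200, 'body': 'ok'}

# Or list the public members, same info the IDE would show after '.'
print(sorted(a for a in dir(x) if not a.startswith("_")))  # ['body', 'status']
```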
The IDE telling you about syntax errors and non-existent functions etc is very nice, except when you use macros and meta-programming and now your stupid IDE won't just shut up (I have this problem even with Python in VSCode).
I've been doing TypeScript for 4+ years and been a web developer for 20+ years, and I experience literally zero benefit from TypeScript. Never has it given me anything useful. To me, it's a massive pain in the ass that slows me and my team down, even if they think it doesn't. They just don't know JavaScript or have shitty code quality to begin with.
That, and TypeScript generics can get so freaking complex that the code does NOT become simple to read at all. It's a massive waste of time.
In response to that article: Well, probably the gold-standard for "native JavaScript" tooling is actually provided by TypeScript itself lol. In VSCode it's the TypeScript compiler and language server that's providing the excellent JavaScript autocomplete.
So I'm in agreement then? Well, no, and that's why I put "native JavaScript" in air quotes. It's extra good because TypeScript/VSCode is silently utilizing, in the background, the ".d.ts" files that third-party libs ship with! I believe as well, at least at one point, it would auto-fetch existing "@types" packages for libs that don't ship their own.
TypeScript, PHP, and Python have support for typing and the commensurate IDE benefits, and I'd imagine these languages account for a supermajority of software written in dynamic languages.
Fair. My point was that I was able to make python "behave more like a statically typed language" to make it bearable, but you're right, it ultimately is still a dynamically typed language at runtime, with the type ultimately bound to the value, and any untyped code still getting away from the "compiler" (which is just a type checker here), to wreak havoc at runtime.
Especially the typos part. In Ruby and plain JS you sometimes feel you need unit tests even for the dead simple stuff because there might be a typo in there... and such tests are an awful chore to write, while static typing catches those typos far more efficiently.
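For example, here's a minimal Python sketch of the kind of typo that only blows up at runtime when the rare branch actually executes (the names are made up); a static checker like mypy or tsc would flag it before any test runs:

```python
# The typo below is invisible in a dynamic language until rare_case=True,
# which is exactly why you end up writing unit tests for trivial code.

class Order:
    def __init__(self, total: float):
        self.total = total

def apply_discount(order: Order, rare_case: bool) -> float:
    if rare_case:
        return order.totl * 0.9  # typo: 'totl' -- a type checker would catch this
    return order.total

print(apply_discount(Order(100.0), rare_case=False))  # 100.0, passes silently

try:
    apply_discount(Order(100.0), rare_case=True)
except AttributeError as e:
    print("caught at runtime only:", e)
```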
Yeah, I loved it when I started because I could easily try things out in JavaScript. Eventually I've come to love TypeScript because I don't waste time on dumb things anymore.
The ideal for me would be some strong static typing mode for the main code providing all the guarantees you want and some dynamic typing mode for the tests which lets you test everything well.
The main downside for me of the static typing is that it's close to impossible to provide a good testing experience, DSLs, mocks and spy objects kind of require some form of dynamic typing to be usable.
That’s exactly what the decade-old combination of Java for code and Groovy for tests gives you. Though it’s not used too often nowadays, mostly because Java is enough for testing for most people.
I'm not sure what to make of that. Maybe you just haven't bothered to look? You personally not knowing about something is a reflection of your own knowledge, and not of the state of the world.
No, I've looked enough, I think. Maybe those better frameworks do exist, but I have no proof of that. Usually they just have the bare minimum of assert checks and call it a day.
A testing framework should be able to mock and patch any class (or applicable) of the running instance of the program without modifying your code for example, tell me if a method was executed or not, intercepting all HTTP requests without changes, have complex assertion partially matching objects, factories, change the current time... I could add a lot more here.
All of that is necessarily harder, I think, in a static typing environment.
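For what it's worth, Python's stdlib `unittest.mock` covers several of those points out of the box; a minimal sketch (the `Mailer`/`notify_user` names are hypothetical):

```python
# Patch a collaborator on a live object without touching production code,
# then assert on whether and how it was called.
from unittest.mock import patch

class Mailer:
    def send_email(self, to, subject):
        raise RuntimeError("would hit the network")

mailer = Mailer()

def notify_user(user):
    mailer.send_email(to=user, subject="hello")

# No change to notify_user needed; the patch is undone when the block exits.
with patch.object(mailer, "send_email") as fake_send:
    notify_user("alice@example.com")
    fake_send.assert_called_once_with(to="alice@example.com", subject="hello")
    print("send_email called", fake_send.call_count, "time(s)")  # 1 time(s)
```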
> mock and patch any class (or applicable) of the running instance of the program without modifying your code
Why is it important for the sake of testing to be able to alter the runtime behaviour externally without changing the code? This is contrary to all TDD literature I've read which advises to make production code easy to test, e.g., by coding against interfaces. After all, something being hard to test is exactly the feedback you're looking for when doing Test Driven Design. If it's hard to test, it's probably too tightly-coupled.
> tell me if a method was executed or not
Spies are possible with e.g. the ReaderT pattern.
> intercepting all HTTP requests without changes
My earlier two points are applicable here too, although I'll add type classes as another viable solution.
> have complex assertion partially matching objects
Pretty easy with lenses or just making assertions against record field lookups.
> factories
I don't know what this means. I looked at several articles describing some kind of factory pattern in TDD — all of which were horrifically verbose — and all I can glean from that is we are talking about mocking some function which generates objects.
> change the current time
This is no different from mocking other system boundaries, which I have already addressed.
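A minimal Python sketch of that approach, injecting a clock behind an interface instead of patching time globally (all names here are illustrative):

```python
# Depend on an interface and inject the collaborator; tests pass a
# hand-written fake instead of altering runtime behaviour externally.
from typing import Protocol

class Clock(Protocol):
    def now(self) -> int: ...

class SystemClock:
    def now(self) -> int:
        import time
        return int(time.time())

class FakeClock:
    def __init__(self, t: int) -> None:
        self.t = t
    def now(self) -> int:
        return self.t

def is_expired(expires_at: int, clock: Clock) -> bool:
    return clock.now() >= expires_at

# Production wires in SystemClock; tests inject whatever "current time" they need.
print(is_expired(100, FakeClock(150)))  # True
print(is_expired(100, FakeClock(50)))   # False
```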
> I could add a lot more here
You're welcome to, and I imagine my suggested solutions will continue to follow a theme. I'm not sure you're here to have your mind changed though. It seems you've reached your conclusion already.
> Why is it important for the sake of testing to be able to alter the runtime behaviour externally without changing the code?
Because that's additional cruft that you don't want when reading your code. Yeah, I get it, you can pass a dozen abstract classes to each constructor for each thing you are mocking (or the equivalent if it's not a class-based language) and replace them as needed; the only problem is that it's inconvenient, prone to mistakes (you can forget some), and makes the code ugly (you don't want to read testing code in the main code).
I've done it in multiple static languages, and at best it's a workaround that you should not have to deal with. I should add that any additional barrier to writing tests like this one also reduces the likelihood that your app is well tested, since some developers on your team might not bother going that far.
> Spies are possible with e.g. the ReaderT pattern.
After looking online, I'm not sure how that works; it does not look very convenient, for sure. Is it possible with that to say something like "the method X of class Y was executed 2 times with the parameters Z" without changing your code?
> I don't know what this means. I looked at several articles describing some kind of factory pattern in TDD — all of which were horrifically verbose — and all I can glean from that is we are talking about mocking some function which generates objects.
Maybe because they are expensive to breed and keep alive. I'm not expert enough to say whether they are worth it or not. Nature wants them out, but humans want to keep them.
I've never had a good experience with whiteboard paint. The walls are not smooth, so you always leave some residue when removing the ink. Over time it becomes pretty noticeable unless you spend a significant amount of elbow grease cleaning it.
> How do you know if you have a good [recruiter]? If they've been a third party recruiter for more than 3 years, they're probably bad. The good ones typically become recruiters at large companies.
No kidding. I've been hired through third party recruiters twice, and both times were fantastic. I am just guessing, but I got the distinct impression that both were doing better and were better compensated working independently (one owned his own business, the other was highly placed) than if they were recruiters for a large company.
OTOH, my experience with same-company recruiters was that they merely existed to act as a simple filter and PR person. They didn't invest anything in me (the candidate) nor really examine if I was a good fit before I got passed further on up the chain.
Now that I really think about it, the quoted advice is pretty much a mirror opposite to my experience.
Exactly. An excellent recruiter will build up a pool of candidates and clients who want to work with them, and will start their own agency where the compensation is usually a multiple of what they'd get in-house.
In-house recruiters tend to be one of:
* HR professionals who do some recruitment as part of their job
* A surprising number of people who end up taking on some recruitment responsibilities as part of their secretarial or admin work, and then end up doing it full-time
* Agency recruiters who didn't enjoy being agency recruiters, or couldn't cut it
I know a small number of excellent internal recruiters, but they're really the exception not the rule.
> If I'm woken at 2am by being on-call more than once per quarter, then something is seriously wrong and I will either fix it or quit.
Yes, something is wrong. But it could be many things. Here is how you can find out:
1. Is the thing that's broken a bug in your code? Then it's your fault, so fix it. This means you need better testing too, and maybe a redesign to resist failures. Try to get alerts at 9am in dev so that they don't come in at 2am from production.
2. Is the thing that's broken a server thing that Ops is supposed to deal with? Probably you should quit. You can also work with Ops to redesign the server stuff to be something less prone to failure. Often Ops can't do this themselves because they don't know enough about how your apps work. Go talk to them, help them out. Or quit.
3. Is the thing that's broken a false alarm, or not important? Quit. Or work with Ops to create better alarms and tests. Ops doesn't know your app, so you need to help them craft the SLIs and SLOs.
4. Did Ops create all these alerts themselves without your involvement? Quit. Or take ownership of the tests and alerts for your apps.
5. Is it a huge slog to try to figure out how the alerting works, to work with Ops to make changes, to add tests, or to figure out what's broken or not and troubleshoot it? Definitely quit.