Don't Call Yourself a Programmer, and Other Career Advice (2011) (kalzumeus.com)
570 points by greyoda on Oct 20, 2019 | 307 comments



> Do Java programmers make more money than .NET programmers? Anyone describing themselves as either a Java programmer or .NET programmer has already lost, because a) they’re a programmer (you’re not, see above) and b) they’re making themselves non-hireable for most programming jobs. In the real world, picking up a new language takes a few weeks of effort and after 6 to 12 months nobody will ever notice you haven’t been doing that one for your entire career.

In the real world this is (unfortunately) not how technical people are hired. The stack a candidate has been using seems to be the first thing hiring managers look at.

You would think only clueless recruiters would think this way, but my experience is that well-respected and successful entrepreneurs also operate along these lines. It is not clear to me whether their success is related to, or in spite of, this way of thinking. To give them the benefit of the doubt, startups often need to move quickly, and the few weeks to months people need to become familiar with a new stack may simply not be available.


Stack is relevant, though. If you need a front-end architect and you hire someone with 10+ years of Java experience, they aren't going to know the first thing about web accessibility, polyfills, bundlers, cross-browser support, etc. The domains of implementing microservices vs. writing front-end code are driven by fundamentally different problems.

The stack an engineer has experience with can be one indicator of the areas they excel in.


> the first thing about web accessibility, polyfills, bundlers, cross-browser support, etc.

Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

Polyfills aren't exactly quantum mechanics. They're not some radically different mental paradigm that takes years to become fluent in. If you already have a sound basis in the major software engineering concepts, learning polyfills is just a matter of reading the docs to learn the details.

Let's not forget that Jordan Walke was working on back-end infrastructure code just prior to inventing ReactJS. Miško Hevery was working on Java testing frameworks just before inventing AngularJS. So it's not like there's a history of front-end being such a unique problem domain that it's impossible for outsiders to pick up.

The only major sub-domain of software that I think this may potentially be true of is embedded systems. And even then, that's a big maybe.


>Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

Individually yes, but all of these things together would take more than a few weeks - possibly months. With onboarding on an average software project also taking months, you could be looking at paying for a lot of unproductive time if you don't hire somebody with some prior knowledge.

The idea of language fungibility is a joke. Sure, it takes you a day to learn the basics of syntax in another language. Learning the whole ecosystem takes considerably longer.


> Individually yes, but all of these things would take more than a few weeks - possibly months. With onboarding on an average software project also taking months,

So, let's say that is the case. The lesson is that hiring is expensive and risky, even if you find a candidate whose experience exactly matches your stack. Like you mention, just onboarding for the internal technology and project itself takes months.

You're already sinking a big investment into a new hire. So most times you should choose the best engineer even if it means additional O(project onboarding) time. From a business standpoint a great engineer who takes 6 months to get up to speed is a much better investment than a mediocre one who takes 3 months. Not always (maybe you're an ultra-high growth startup who needs bodies on the floor ASAP). But usually.

And the reality is that dropping stack requirements drastically improves the candidate pool. For one you have way way more candidates to select from. Two, most times when a company is hiring for [tech X] it usually means that the market for engineers who know [tech X] is super-tight. It's just the nature of the business cycle. If [X] is in demand at your company, it's probably in-demand everywhere, and therefore in short supply.

All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers. If you're willing to hire from anywhere, then usually there's some sub-sector that's in a downturn. That's a huge opportunity to poach talented engineers, who are being mostly overlooked because their stack experience doesn't align with the hot growth sectors.


>And the reality is that dropping stack requirements drastically improves the candidate pool. For one you have way way more candidates to select from.

There's definitely a sweet spot in terms of stack specificity. There's very little issue hiring a Flask developer if you need somebody to work on Django. However, if you're a Ruby shop and you hire a Java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

>All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers.

I don't think that's necessarily true. Nonetheless, this could be market specific. I can imagine that hiring a developer in, say, Ohio might make your approach more worthwhile compared to hiring in SF, where the talent pool is deeper.


> However, if you're a Ruby shop and you hire a Java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

I don't think that's true at all. I've met tons of Java devs who moved to Ruby with minimal pain. It's still OO and many of the same patterns apply.


> There's definitely a sweet spot in terms of stack specificity.

The GP's calculation has an important weakness: it doesn't take retention into account.

The sweet spot will be mostly determined by retention. And the very bad places that have impossibly (sometimes literally) specific requirements are probably correct in requiring a narrow set of competencies. But of course, they would gain more by improving themselves so that developers don't quit as often.


> However, if you're a Ruby shop and you hire a Java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

You think? I went the other direction and I don't think it was so hard. The language was the easy part of onboarding, the hard part was all the company specific stuff.


As atomic points, I agree. Polyfills aren't rocket science. If the browser API is not defined, then use the polyfill function (at least, that's how most work; then there are ponyfills, which don't pollute the global scope).
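
A minimal sketch of the usual pattern (using Array.prototype.includes purely as an example; simplified, since the real spec version also handles NaN and a fromIndex argument):

  // Only patch when the native API is missing -- the core polyfill idea.
  if (!Array.prototype.includes) {
    Array.prototype.includes = function (value) {
      return this.indexOf(value) !== -1;
    };
  }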

But front-end development of complex applications isn't done in a vacuum by one person. It's done by multiple teams, possibly distributed across different physical locations.

As an architect, what is your polyfill story? Does each team ship ES Modules that are later compiled by Webpack? Is each team responsible for loading their own? It's probably more efficient to handle it globally --knowing what all teams need at once as part of a build step-- to prevent duplicative loading of the same polyfill by 5 different teams.
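
One common way to centralize it - a sketch of an assumed setup, not anyone's actual architecture - is a shared Babel + core-js build config, so the bundler dedupes whatever the teams' code actually uses:

  // babel.config.js -- hypothetical shared build config, assuming Babel + core-js 3
  module.exports = {
    presets: [
      ['@babel/preset-env', {
        useBuiltIns: 'usage', // inject only the polyfills each file actually needs
        corejs: 3,
        targets: '> 0.5%, not dead' // one browserslist target shared by all teams
      }]
    ]
  };

That way five teams using the same feature get one copy of the polyfill in the final bundle instead of five.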

It gets nuanced quick. Nobody can master all these concepts --and reasonable ways of managing them in distributed team environments-- in a few weeks.

And if you think JavaScript is "simple", I encourage reading "You Don't Know JS". What does "this" mean, in JavaScript? I could ask a handful of questions from the multitude of sections in this book, and people writing JavaScript for years wouldn't be able to answer all of them. React and Angular are such a thin slice of the overall problem. Even simple projects will require a dozen or so more NPM packages until you have something remotely useful.
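
To make the "this" point concrete, a tiny illustration (not from the book, just the classic detachment gotcha):

  const counter = {
    n: 0,
    inc() { this.n += 1; }
  };

  counter.inc();           // works: `this` is counter, decided by the call site
  const inc = counter.inc;
  inc();                   // TypeError in strict mode: `this` is undefined
  setTimeout(counter.inc); // same bug; one fix: setTimeout(() => counter.inc())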


> I could ask a handful of questions from the multitude of sections in this book, and people writing JavaScript for years wouldn't be able to answer all of them.

I think this might prove my point. There are tons of great JavaScript engineers out there who are consistently delivering business value and who actually aren't even that knowledgeable about all the details inside JavaScript.

Most of the variance in quality between software engineers has to do with technology-agnostic skills rather than stack-specific knowledge. They architect well-designed, modular systems. They communicate with stakeholders. They write robust, readable, testable code. Their documentation is understandable and comprehensive. They're thoughtful about naming things. They can navigate large codebases and keep complexity contained. They understand performance tradeoffs, and anticipate bottlenecks before they occur.

Very little of that has to do with stack-specific knowledge. A developer who does all of the above, but doesn't know all the ins-and-outs of the JS coercion model, is going to be much more productive on almost all practical business problems.
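
(To be concrete, the coercion model is exactly the kind of trivia I mean. A couple of the well-known examples:)

  1 + '2'   // "12" -- + with a string means concatenation
  1 - '2'   // -1   -- but - coerces the string to a number
  [] + {}   // "[object Object]"
  [] == ''  // true -- loose equality coerces both sides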

The most important skill usually isn't knowing every single detail of your underlying ecosystem. It's knowing enough about its overarching landscape to be aware of the things you don't know, to ask the right questions, and to recognize where the limits of your knowledge lie.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

How about average engineers with average fundamentals? The kind most people will on average be hiring and working with.


Another piece of career advice: try not to join a company that mostly hires average engineers with average fundamentals.


A lot can be learned working with average engineers. You shut yourself out of a lot of jobs by avoiding average, and you'll probably be screwed if the "above-average" places consider you an average candidate.


Most engineers are average. If you are above average you are likely working at a company that pays above average.


Jordan Walke and Miško Hevery are outliers working at companies that pay a lot. An average company is not going to be attracting talented engineers with strong fundamentals. They are going to be attracting engineers with average aptitude and average fundamentals.

An average company, paying average wages, solving average problems is better off getting an average developer specialized in the area they need.


This is only true because average teams get annoyed at having people trying to push the project along in ways they do not understand. In most of my jobs, average people really appreciate stability and do not like change. To fit in, you end up having to do less work and transfer your energy to side projects or hobbies.


> The only major sub-domain of software that I think this may potentially be true of is embedded systems. And even then, that's a big maybe.

The big problem with embedded is that when things go wrong, it cross-cuts with electrical engineering--you have to understand datasheets, read a scope, and understand that digital signals sometimes aren't.

I have never known a good embedded software person who is software-only. Even DSP-types with EE degrees don't often operate on hardware very well.

Generally, the best embedded software types are good hardware EE's and passable software people.

I consider myself a very good embedded engineer, but my software is merely "straightforward". Of course, the best software people I know claim I should take that as a compliment, and they are all happy to work with my code.


Exactly, but embedded systems now look more and more like personal computing did not long ago. We're certainly on par with the mid-to-late 90s in that regard, and even further along if you consider things like the RPi as embedded. Not only in processing power, but in the skill set needed.

I don't take your skillset lightly at all, but there comes a time when people start using inefficient solutions because they're faster to produce and you have plenty of hw resources, so... why not? Do it in Python and improve it later... maybe, and then you don't. :-)

I believe you get my point. Not that I personally think MicroPython is a worthwhile path for embedded dev right now, in a professional context. But there will come a time when it can make sense, as is already the case with the RPi, as I mentioned.

> I consider myself a very good embedded engineer, but my software is merely "straightforward"

And that's where I make my business. I'm merely a straightforward embedded engineer, but I focus on software/hw integration: making the excellent work of people like you talk to the external world, databases, desktop/web UIs and all of that, using software engineering practices. Basically doing the "cloud", "edge" and "IoT" buzzwords.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

The most important 67-80% of front-end work can probably be learned in a few weeks by an experienced engineer who already has a high level understanding of the web. The last 20-33% hides some things that are trickier to wrap your head around, along with more than a few WTFs, and a never-ending stream of subtle cross-browser differences.

It may still be plausible that a talented engineer with strong fundamentals in software design and cs could cover the ground quickly. But the message of the "don't call yourself a programmer" piece might be understood as being as much about how you're going to spend your time as about how you're classified professionally (though those are certainly related): are you a stack-whisperer, or are you a badass problem solver with expertise in translating a problem domain into a formal-ish system which expands the capabilities of the human system (and networks of machines) it's embedded in?

My bet as someone who did years of front-end focus is that for all but the truest 10xers, if you focus on front-end and try to be thorough, your risk of being a stack-whisperer jumps dramatically because of how much of your time corner-case ephemeral arcana will chew up. Front-end is certainly not the only place in the industry where we've let that hazard run away with the lives and time of too many talented people, but it's a popular one.


"Polyfill" itself is a word that was invented by some inexperienced people who had never programmed anything outside of the browser that needed per-platform conditional code to isolate itself from missing or different functionality.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

It's web development, and let's face it, many companies don't want long-term talent; they just want someone to finish their current project and move on. Which can make sense in some circumstances. You don't need a software-engineer-level guy to make a dashboard.

People who are beyond the framework-user level also end up moving on from even considering those kinds of jobs.


That's more a question of overall area of past work, though, than particular language or framework. If you've been working on frontend stuff, problems of layout, matching redlines, response latency, etc, should be in your deck regardless of which programming language you've been using.


Is this the right way to hire? Not saying it’s wrong but I’ve taken a different approach which has generally worked out.

Even if I was hiring for a specific role (front end), I care less about familiarity with specific bundlers or frameworks and something more like - “Can this person problem solve and deal with varying levels of ambiguity? Do they seem capable of figuring something out without a ton of direction?”


That is exactly the thing I select for. I recently had to hire 4 developers, and I was not all that interested in how many years of experience they had in javascript or typescript (our main language), and even less if you know Vue or Express (our stack). I mean, it's good to know javascript, and certainly ES6 or typescript, but I'm much more interested in: can I discuss stuff with you, do you have interesting opinions, can you make clear decisions in the face of unclear requirements, are you creative, and can you fix stuff and build features without anyone having to hold your hand? In case of doubt, knowledge of ES6 or Typescript or any relevant front-end framework (Vue, Angular or React) can make a lot of difference, but I'd rather hire someone who knows a dozen languages we don't use than someone who only knows exactly the one language we do use.


And then, when you need to ship now, you have a bunch of dead weight while you wait for them to ramp up. Also, you are now taking time from your devs who do know the stack for mentoring the new hires and for code reviews, because you can't trust the new hires to do good work without messing up your code base. They are now doing “negative work”.


That's why I don't hire dead weight, or people who require handholding, but people I can trust, who can pull their own weight, and know how to learn. Someone who merely has X years in technology Y means little to me. Plenty of people manage to still be bad after years of working on the same technology.


But if you're in a secondary employment market, as opposed to startup-heavy SV, you might work with this person for a very long time or have a high chance of working together in the future. So the right choice for right now isn't always what people are optimizing for.


After spending 10 years at one company and suffering from not learning anything new and, just as importantly, from salary compression, I spent the next 8 years across 5 jobs and learned that the best way to make more money is to do Resume Driven Development and change jobs. While your market value goes up tremendously, HR departments insist on giving 3-4% raises to existing employees while being forced to bring in new employees at market rates.

This is the “secondary employment” market, where HR policies aren't optimized for in-demand fields like software development. The immediate managers are usually helpless to do anything about it and see employees they spent time training leave for greener pastures. That being the reality, what other course is there but to find people you don't have to train?


Compared to what, needing to ship now and not being able to find any "qualified candidates"?

It really depends upon the exact situation. We can all make up bullshit examples to make either option look stupid.


It’s “bullshit” to open a req to hire a developer because you need to ship a product within the next six months and you need more bodies? Why are you hiring if you don’t need the extra people? If you didn’t need the extra people for a year until they ramp up, wouldn’t it be better just to take your time hiring until you found someone who didn’t have to spend a year learning on your dime and then leave in two years?


I needed extra people, but not merely warm bodies to fill a seat. I need people who don't need a year to ramp up, but can do so in a few weeks.


It’s not current speed that matters, but acceleration and max speed.


There is no "right way to hire". All "ways" are flawed and miss things. If hiring could've been solved by now, it would have already happened.


I think what he is saying is, if you are hiring a React developer, but someone only has experience in Vue, would you not hire them because of their lack of React experience?


I agree with you, it's not how technical people are hired. But I think it's entirely necessary in certain contexts, once we realise the software world is bigger than just the web.

Learning languages and libraries is easy. But new paradigms, domains, and entire new ways of thinking? It takes time to master.

So I'd be inclined to add disclaimers to the idea that "after 6 to 12 months nobody will ever notice".

This might be true if you're moving from one web programming gig to another web programming gig, but the author himself acknowledges the vastness of software out there that underpins every aspect of industry.

So - if you're going to be working on enterprise middleware in investment banks, or embedded systems on airplane simulation kit, or APL, or missile guidance software - but all you've done is web... then tech stack does matter. Insofar as being familiar with the paradigms, standard patterns and unique industry norms.

The problem is that some middlemen (ie recruiters, HR or non-technical hiring managers) do not understand that, for instance, Java and C# are somewhat interchangeable, but C# and APL are not.

Fantastically written piece all in all, though - lots of great stuff in there that I wish I had either known (or stopped denying!) earlier in my career. But specifically on this point, I think it's a dangerous idea to imply to undergrads that deep, hard-earned experience in both tech and domain can be overcome on such small time scales.


>>Learning languages and libraries is easy. But new paradigms, domains, and entire new ways of thinking? It takes time to master.

Here's why I think this is sometimes a false dichotomy: going from Ruby to Python will be relatively easy. Going from Ruby to Haskell, on the other hand, will involve not just learning Haskell the language (syntax etc.) but also new paradigms and entire new ways of thinking.


It's the first thing some hiring managers look at.

Speaking for myself as a hiring manager I'm not looking for specific tech stacks. I'm looking for general patterns. Someone who has spent their entire career doing front end work is unlikely to be a great fit for embedded development and vice versa, but I don't care so much about the specific technologies they used in the process.

My experience has been that concepts and fundamentals trump specific tech stacks. However I also take a longer range view of things. I care far less about relative productivity of two candidates a month from now than I do a year from now.


Until your customer needs a feature a month from now and not a year from now or you only have a six month runway....


The percentage of hires for companies in that situation is going to be extremely small, yet the "hire for tech stack" motif is cargo culted throughout the industry.


Why else would most companies have open reqs if they don’t see a need for manpower soon, unless you are one of the four or five big tech companies that are trying to accumulate smart people (tm)?


Because they project for incoming needs. Hiring for today’s needs is a fool’s errand; it can take a good amount of time to identify a quality candidate. If I need something filled right now or the company goes under, it is already too late.


Doesn’t Brooks say that adding manpower to an already late project makes the project later?


Who said the project was late? If you get a new customer or want to get to market with a new feature fast, you still need new people.

On the other hand, you have a few “full stack developers” on hand and you need someone to concentrate on one part of the stack so your current developers can concentrate on other parts.

I’ve found it remarkably easy to hire a front end developer who knows the stack we want to use and who doesn’t know anything about the business beyond: here is what we want from the website, give us some mockups, and we come up with the APIs you need.


Why would a company choose someone who doesn’t know the stack they are using over someone who does? Let alone 6-12 months.

Knowing the ecosystem, best practices, frameworks, etc takes longer than a few weeks.

Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?


Because someone is smarter or better at the job in general than the lower-quality specialists they can find. It's very hard to find a good generalist. If you can be a good generalist, you're exponentially more valuable than a generic specialist. I have done mostly backend work in Python, but in my current role I've worked in the frontend with Angular, I've debugged critical issues on our iOS app, I've fixed our AWS infrastructure, I've moved us away from dragging files onto Windows boxes to CI/CD, I've helped architect solutions, I've worked with our customers, I've led efforts across multiple teams, etc.

Being a "Python Programmer" in no way measures my value to the company, in fact, it necessarily limits what I'm actually capable of contributing to the company. If you can show that you can add value outside of a specific tech stack, you're worth the few weeks to learn a new technology. And yeah, it is basically a few weeks to get up to speed and be a contributing member of a code base if you don't know the technology. 6-12 months to know all of the details if you're a solid engineer.


I would say just the opposite. When it comes to the entire “web developer” tech stack, sure I can put together a website good enough for internal use, but I’m not very good at the front end.

I consider myself to be very good at C#, okay at backend JS development and passable at Python.

I have enough experience from doing a lot of ETL in a previous life to know how to optimize queries and schemas for speed and to not lock up a database.

I’ve set up CI/CD solutions from scratch with what is now called “Azure DevOps”, AWS’s CodeBuild/CodeDeploy/CodePipeline, OctopusDeploy and Jenkins.

I could just as easily and competitively apply for jobs as an “AWS Architect”; I know most of the popular AWS offerings for developers, DevOps, netops, and system administrators, and I have experience with them.

But in many of those areas - especially on the front end and with the netops/system administration stuff at any scale - it wouldn’t make any sense to hire someone who “kind of” knows what they are doing over a specialist.

If I need something now, I’m not going to want to wait for you to get up to speed in a year.

Do you really think you’re as good at any of those areas as a specialist? AWS alone announces dozens of new things every month on their podcast.


I think we might be talking in different time scales, and maybe different responsibilities. If you're looking to bring on someone to solve an urgent problem this month, you're going to need a specialist in the area. If you're bringing on an early hire to build a team, or another engineer to bolster your team with some specialists, you may want a solid generalist. There might also be some significant difference between organizational structures, if you're joining a team centered around building and supporting a product, or maybe a startup with only one or two products, a generalist might be much more effective. If you're joining a massive organization where there are highly specialized teams dealing with specific problem areas, a specialist makes a lot more sense.

As a sweeping statement, I'm better at solving "a problem" than a specialist. If you define the problem area tightly, they may be the right person for the job, but if the role you're hiring for has uncertainty and flexibility, the front end specialist probably isn't the right person to figure out why your database is slow, your load balancer isn't working, your build and deploy process is stuck, etc.

There are definitely roles that are much more fit to one or the other, but the generalist can handle a lot of things pretty well. All of that being said, we can probably agree that the best setting is having both.


> Why would a company choose someone who doesn’t know the stack they are using over someone who does? Let alone 6-12 months.

> Knowing the ecosystem, best practices, frameworks, etc takes longer than a few weeks.

> Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?

I would separate this into two separate categories - big tech companies (or others with similarly large shared internal infra) and others. For big tech companies, for most positions, the tech stack is proprietary internal stuff such that knowing the language and best practices only get you about 10% of the way. For other types of companies, it's generally more important that you hire people with the right business context than the exact tech stack.

With that said, native mobile development isn't just a stack - it's more of a different, though overlapping, career path - the main reason not to hire non-mobile developers into a mobile role isn't that the stack is different and takes time to learn, but that the workflow is so different that they may or may not know what it is that they are even signing up for. Hypothetically, you'd rather hire someone with Xamarin background with no Java experience for an Android java role, than someone with no mobile dev experience, but lots of Java backend experience.

My first mobile development experience was a moderately complex Android project on an app that was used by tens of millions of people daily. There was no ramp-up - I had zero prior experience before signing up for this project, never even played around with any mobile development before and I had never professionally programmed in Java - and I was the sole engineer working on both mobile and backend. It was a little painful but everything shipped on time.


> It was a little painful but everything shipped on time.

Your last sentence would seem to contradict the entire body of your comment. You describe mobile development as a fundamentally different thing, but then your first gig was to work on a large project, the result of which was shipping on time at the cost of a “little” pain.


That's fair - what I meant by it being a different career isn't that skills aren't transferable at all but that you solve different kinds of problems entirely, as distinct from a difference in stack. And if you hired me then as a mobile developer, I'd have quit, so you don't want to hire non mobile engineers into mobile roles. Language/stack is a red herring here in the sense that Java backend development is a lot closer to Ruby backend development than it is to Android development.


> And if you hired me then as a mobile developer, I'd have quit, so you don't want to hire non mobile engineers into mobile roles.

That is a great point. I’ve also had the experience of working for a short time on a system, knowing that I would hate for it to become a regular part of my work. So hiring someone with experience is one way to mitigate staff turnover from undesirable tasks.


I don’t know Ruby. But I can tell you that the difference between having the guardrails of a compiler and a type-safe language and having those rails taken away in languages like JavaScript and Python caused a lot of heartache early on.


This is fair too but I think most experienced engineers for whom stack is a valid consideration have some experience with at least one dynamically typed language and one statically typed language and can self-select out of roles if they have a strong preference either way. And I don't know anyone for whom this was an actual blocker as opposed to an annoyance (in either direction) that they got used to after a while. On the whole, I think it's less likely for this to be a serious issue than, say, the details wrt how the system is architected and how the organization is run, some of which you won't really get to know until you start.


> Why would a company choose someone who doesn’t know the stack they are using over someone who does?

If the two candidates are perfectly identical on every other criterion, sure. But that's not the case; the point is that most other criteria are more important than the specific tech experience when dealing with competent people.

> Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?

If you have web experience, I'm not too worried about your productivity on Android or iOS. If I'm hiring for Django and you have experience with any of Symfony, Rails, Spring, .Net or NestJS, the tech expertise is the least of my concerns.


You’re not worried about their mobile experience until the app fails to work in a tunnel because they were expecting an always-on connection, or until they have half their records on the device and half on the server and need to figure out which to use, or until they have to design a syncing algorithm for the first time - something you usually don’t face on the web, since you never expect long periods of being disconnected.

Edit: and I forgot to mention the biggest UI failure I made when I did mobile development a decade ago on WinCE ruggedized devices. I didn’t even think about actually taking the device out into the sun, where all of the field service techs would be working, and seeing how the screen, colors and contrast looked.


I would argue that this isn’t solely the responsibility of the programmer. This is where testing and QA come in. Also, these are pretty basic considerations when building mobile apps, and I would argue that they aren’t language-specific.


So now we are going to spend more on development and probably rework, because the developer decided he was just going to wrap the website in a webview and call it a “mobile app”, not having thought through these scenarios. Any time you have to go through the development -> QA cycle more than once, it costs money and time.

That’s kind of my point. Learning a language is easy and for the most part useless without knowing the frameworks and architectural best practices.

I don’t know what Android has, but iOS has built in frameworks for handling syncing. If an Android developer didn’t know all of the built in frameworks available to them and re-invented the wheel, that would also be a waste of money.


The argument made here is that a good generalist tech guy would definitely think of these things beforehand as much as an Android or iOS guy... That is, language knowledge !== problem solving...


I know all of this because I’ve architected two solutions on Windows CE devices and built one on top of a proprietary mobile/web form builder that would let you do real coding in JavaScript.

Any company that would hire me as a modern “mobile developer” would be absolutely foolish. I have only written 30 lines of Java my entire 20 year professional career, never written a line of Swift or Objective C. Why would they hire me over someone with relevant experience as a developer? If they want to hire me as a team lead/architect and then find mobile developers, I would know what to look for.


While it's not solely the responsibility of the programmer, most of the responsibility should fall on the programmer.

It's a much more efficient system for the programmer to own UI/UX and for testing and QA to simply verify, in contrast to the programmer doing whatever and leaving the full responsibility for UI/UX to the testing & QA cycle.


I notice this a lot with apps in the subway (who doesn't?) - the app just dies.

A lot of problems in life can simply be avoided, and this is no different... If you're chasing that tiny last bit of differentiation because you have 90% market share and can afford to hire people to solve that exact problem, then great. But that isn't most places. Most apps would do better to 100% avoid the problem and just say "No Internet Connection" if there's no WiFi or 3G. On top of that, you get mobile developers who swore to God they solved this problem, but guess what: they actually have no idea what they are doing. They think it works, but it doesn't, because it's actually a database concurrency control problem. This (https://www.postgresql.org/docs/current/mvcc-intro.html) is what I expect from someone who claims to "solve the problem", not some "algorithm" they invented.

So I don't buy it, and I don't buy the business need for it unless it's an app specifically made for disconnected use. Unfortunately it sounds like one of those things people do to make themselves feel important or smart (no nice way to put it; I see it as bad as someone who invents their own encryption "algorithm" without realizing how ridiculous that is).

In short, I would say don't do it. And if someone does, it had better be a real business requirement.


And you’re kind of demonstrating my point - the difference between people who think it’s an easy problem and people who actually have experience with it.

Enterprise mobile apps aren’t about the apps you download from the App Store. Usually they are distributed using an on-site mobile device management system.

1st use case: I worked for a company that wrote field service applications for ruggedized Windows mobile devices. Some had cellular, some had WiFi, and some had neither - you had to actually dock the device. The field service techs had to have all of the information they needed on the device to do service calls, including routes, whether or not they had connectivity. They would record the information and it would sync back to the server whenever they had a connection.

2nd use case: I worked for a company that wrote software for railroad car repair billing. Repairs are governed by Railinc (https://www.railinc.com/rportal/documents/18/260737/CRB_Proc...); all of the rules and audits had to be on the device, and the record of the repair had to be available whether or not they had connectivity. It had to sync back with the server whenever a connection was available.

3rd case: software for doctors. Hospitals are notorious for having poor connections.

4th case: home health care nurses had to record lots of information for Medicare billing. Again you can’t count on having mobile connections.


It's not an easy problem. It's a database concurrency issue, well beyond the skillset of most mobile devs. Database isn't even a required course for a compsci degree; it's usually an elective.

My point is that the problem you think you solved, you didn't, and it will break under dozens of scenarios. Maybe the clients are happy and they think it works, but you just haven't encountered the case where data goes missing or gets overwritten.

In other words, what I am saying is that it is wrong, and it's hard to know it is wrong unless you directly attack it. The word "enterprise" is a euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an enterprise mobile developer over a B2C mobile developer just because of the word enterprise.

Not everything in the world should exist; that kind of thinking leads to the 737 Max. The word "sync" has implications way beyond the concerns of a mobile developer.

So I call bullshit; the fact that the industry does it, that everyone does it, that you consider "real" mobile devs to require it, that customers want it - none of that means it is a good idea or that it's mathematically or scientifically sound. It may cover most cases, and nobody may notice the problems except once in a blue moon, but that doesn't make it right, because operational systems need full data integrity.

The correct way to handle such a request is not to "sync" but to collect data, push it to the backend, and let the backend sort out the mess. Not "sync" by any stretch of the imagination, no matter what cottage industry or cult beliefs have been born of it.


So you’re saying “it’s not a good idea” to have software that actually fulfills the need? The mobile app that doesn’t work in the majority of use cases to solve the problem that it was meant to solve is useless.

And yes, we did solve “the problem”: a mobile app that could route field technicians dynamically at the level of quality we needed.

> The word "enterprise" is a euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an Enterprise mobile developer over a B2C mobile developer just because of the word enterprise.

Again, this comes from someone who thinks they have experience versus someone who does have experience. Did you read the link I posted about the industry-required rules for repairing railway cars? That isn’t even the entire regulation. If the typical B2C app doesn’t work, oh well. For the railroad industry, if you don’t submit your railcar repair just right, it gets rejected either by the interchange or the customer, and you can only submit your invoices and rebuttals once per month.

> The correct way to handle such a request is not to "sync" but to collect data, push it to the backend, and let the backend sort out the mess. Not "sync" by any stretch of the imagination, no matter what cottage industry or cult beliefs have been born of it.

How well does “one-way server syncing” work when you’re a field tech doing routes and the customer calls customer service and cancels one of your routes while you’re in the truck? How well does it work when your back end system needs to calculate where each truck is on the road and re-assign routes on the fly? How well does it work when one tech needs a part and they need to know where the parts are, based on which other techs have already been to the warehouse and now have the part? But wait - they went to the customer’s house and found that they don’t need the part at all, and it’s available on the truck a mile away? All of this involves dynamic two-way syncing...

Again, the difference between someone who has real world experience and someone who thinks that because their Twitter app doesn’t need to work in the subway nothing does.


I value experience a lot, over almost anything.

What you mention is very dangerous to the data. Take the medical app example. Suppose there's an app to update a chart that doctors carry around, and there are five doctors and/or nurses working on the patient. Whose prescription or orders do you take? On top of that it gets worse -- there might be dependencies between the orders; orders might countermand other orders, or be in response to others which may or may not exist. It is not a problem that any algorithm or programming can solve, because the whole point is to take the experience and skill of the doctors, which is being blindly ignored for some process the doctors submitting the information may or may not be aware of. Similar problems could appear for any of the examples you mentioned if you dug hard enough.

As for the submission, you can simply ban submission unless you have an active Internet connection. The 737 Max is also "real world experience": Boeing panicked at Airbus and, instead of going through a 10-year, 10-billion-dollar design process for a plane, surrendered to market realities at the cost of lives. The fact that "enterprise" has onerous business requirements, or even legal requirements, demanding technical sacrifice doesn't make it any less technically wrong. If asked to make a sync on the client side, I would make it as simple and straightforward as possible and assume nothing.

I suppose so long as it doesn't cost lives or ruin people, I don't particularly care if you value handling data on the client in this way as a qualification for "enterprise" mobile developer. As long as it's "good enough" to meet the requirement, great. But it doesn't mean I like it, and it doesn't mean one should ignore technical flaws. Unless it's ACID, you don't guarantee anything; it's just a feel-good (and possibly done in a much simpler way). For all the scenarios you mentioned, I can mention another half dozen, or even a very simple one: two people with the same seniority making exactly the same change to the same record. Then your system tosses one or the other, or even merges them -- in other words you dive into expert systems, NOT anything to do with "syncing".

Experience is important but there's a theoretical foundation to everything and it's wrong to expect an offline node in a distributed network to act as a source of truth for any period of time. Sorry.


> And yes “the problem” we solved, a mobile app that could route field technicians dynamically at a level of quality we needed we did solve.

Just look for people good at handling split data updates and ownership; there are a lot of people working on those kinds of issues on the backend. I really doubt you'll find more mobile devs with those skills than backend devs.


What makes you think that the two are distinct?


What two are distinct? Yes you sometimes have conflicting data on two machines when you work on mobile apps, but that is the core of a lot of distributed computing problems. So if you only look for those skills in mobile devs you will miss out on a lot of people with relevant experience.


> Most apps would do better to 100% avoid the problem and just say "No Internet Connection" if there's no WiFi or 3g.

Lol we are in such a bubble.


If the bubble is large enough, it's the others who are in a bubble, and we're just "in the world"...


I recently came across a company similar to Udemy offering cheap “on sale” video courses like Udemy does.

This company has an app and in it you can download the videos and watch them offline, just like Udemy.

That’s great, I have a limited amount of data on my plan.

And better yet, I can watch these videos when I am completely offline, for example on the 30 hour connecting series of flights I went on recently. Except... whereas the Udemy app actually works completely offline, this other app needs internet access in order for the “my courses” tab to work.

You still have access to the videos through the “downloads” tab. But there they are not organized neatly. So I decided to do other things than to look at any of the videos.

Also, a lot of apps are bad at properly syncing data. For example I think neither Udemy nor this other one properly syncs the course progress data. Even when they do have a connection.


> But that's not the case, the point is that most other criteria are more important that the specific tech experience when dealing with competent people

Unfortunately, the more important criteria are harder to assess, and hiring in the real world very heavily weights the easy-to-assess bits, whether or not they are actually important.


It’s like saying “well, we’re an agile company, so we prefer to hire people who know standups and sprint planning”. In principle they will ramp up faster, sure. But it’s just not a useful thing to select on, and if you do select on it you’ll end up with a weird monoculture.


Knowing an ecosystem well on a senior level won’t happen in a few weeks.

But to your example: I’ve seen developers who couldn’t adjust to a rapid release cycle because they were used to big design up front. How you develop software when you don’t have all of the requirements for the next year is a completely different mindset.

Even on comments on this post, I see people who aren’t actually willing to actually talk to the customer to decide what they should work on.


Really, senior level is about talking to people; “knowing the ecosystem well” (if by that you mean a specific tech stack) is almost irrelevant. Senior is more about solving the right problem than solving the problem “right”.


And then you end up re-inventing the wheel, setting up servers and spending more on maintenance and development because even though your entire company is already on AWS, you didn’t know you could just click a button and make that entire part of the product someone else’s problem.....


Come on. Even high school kids know that AWS exists; that doesn't make you "senior". How did you go from "Knowing an ecosystem well on a senior level won’t happen in a few weeks." to an example of someone reinventing the wheel? That's absolutely an example of non-senior behavior. But I was talking about e.g. someone who doesn't know Spring, or Angular - it seemed to me you were saying "can't hire a non-Spring-expert as senior if we're using Spring and he doesn't know it".


There is a difference between “knowing it exists” and knowing what it can do. No - despite what a bunch of old school netops guys think, the ones who watched an ACloudGuru video and can now click around and duplicate their on-prem infrastructure on AWS (and cost more) - it’s much more than just hosting VMs.

And you kind of proved my point....

If you don’t know the wheel exists, you don’t know you’re reinventing it.


Look, I don't even know what your point is. I thought you were claiming "you can't/shouldn't hire someone who doesn't know well the technologies that you currently use". If it's not that, then don't mind me - I was debating something else / misunderstood your point.


That’s exactly what I’m saying: in my example, the old school netops guys who didn’t know anything about the “tech stack” - in this case AWS - ended up designing horrible, inefficient solutions because they were learning on the job. They could have been “senior network engineers” because they spent years managing a colo. But they definitely weren’t as efficient as someone who had built real world solutions on AWS.


The linked article actually mentions that ("there are people with title 'senior' that can't do fizzbuzz"). So the fact that some "senior network engineers" did something stupid doesn't prove your point or invalidate the article in any way.

There's a big difference between not having experience with the <framework_du_jour>, and not knowing foundational technologies. E.g. there's a good chance Jeff Dean doesn't have much experience with most of AWS technologies, but there's no reason to believe it would take him more than a few weeks to get up-to-speed with them if he'd really need to. Not on the "expert" level, mind you - but enough to not make big mistakes.


I agree the problems you're describing are real and important to select for, but just requiring agile experience won't help you there. Lots of people practice a weird form of agile development, where they go through all the motions of fast iteration, but almost all tasks consist of non-negotiable internal dependencies and not value delivered to a customer.

I'd argue a similar thing is true for tech stacks. If there's some correlation between knowledge of all the fiddly bits of C++ and ability to write clean, performant systems-level code, I've yet to see it.


> almost all tasks consist of non-negotiable internal dependencies and not value delivered to a customer

It’s especially nice when you realize that, as soon as you’ve completed all the non-negotiable features, a bunch of other things magically become non-negotiable.


I've hired for the stack specifically and it really narrowed our applicant pool. Lately I've relaxed this filter. We're a Rails app, but what if there was a really great engineer that uses Python or Node?

They might not know the right libraries to use, but would probably know there are libraries, have a good concept of application design in general, and know how to ask the right questions. And we're hoping to hire someone for years.


I have made the jump from one language to another several times on the job. It’s considered lower risk to use someone on the team than to hire someone new. It’s also going to take a while to hire someone and get them up to speed. After a year you might not be as experienced as someone using the same stack for 5+ years, but the difference is not that extreme.

Further, stacks tend to share quite a bit. Knowing SQL, HTML, JavaScript, CSS, jQuery etc. takes a long time and has little to do with Java vs .NET.


Languages are easy. But do you think you could jump from systems programming in C to mobile programming on Android “easily”?

I’ve seen people jump from Java to C#. You can see the difference in their coding style: not taking advantage of the features of the language, reinventing the wheel because they didn’t know there were popular packages that would do it for them, creating horribly inefficient queries and practices using EF, etc.


Sure, jumps like that are not as bad as you might think. Style-wise the code is often poor, but users don’t really care about the code.

I started with Object Pascal on Mac OS 8/9 which forced you to do a lot of low level tasks like deal with Handles and suffer through a cooperative multithreaded OS. Rewriting the network layer from AppleTalk to TCP/IP was closer to systems programming in C than you might think.

I have also been paid to write C programs and Windows applications, pre-.NET in Visual C++ and post in C# and VB; I’ve written Java and C# websites, and most recently Angular SPAs. Add to that a few random oddities like XSLT.

PS: I even turned down working on Android, but could have made that jump.


The jump is huge. Web development has a completely different model than mobile development. In web development, losing a connection is a failure case. When developing for mobile, it is expected. Especially for enterprise apps. Logic is usually distributed, syncing is required, you have to know which side is the source of truth, etc.

This is really an example of hiring someone who has experience and hiring someone who thinks “it’s easy and just like what I did before”.

Then I could get into all of the old netops guys who think AWS is just like what they did on prem and end up costing the company more...


They are different, but I don’t think networking is one of the major differences.

Website backends often have significant dependence on other services that can be down. Further, standalone apps can have zero network dependence or a lot. So, having written both, and similar code for each, I don’t feel the networking side is all that different.

If anything, networking is probably the largest similarity between them.

PS: I did some iOS development in my spare time even worked with some old J2ME, so I assume Android is fairly similar.


>In web development, losing a connection is a failure case.

That's changing with the advent of progressive web apps. It's possible to write web apps now that are robust in the face of network problems, e.g., the web app renders and is functional even without a connection.
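
The mechanism behind that is a service worker intercepting fetches. A minimal cache-first sketch (assuming a simple static app; a real one needs cache versioning and an update strategy):

  // sw.js -- hypothetical minimal service worker
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open('app-v1').then((cache) =>
        cache.addAll(['/', '/app.js', '/app.css']) // the offline "app shell"
      )
    );
  });

  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });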


I’m not questioning whether you can do it with web technology. But you still have to design it so that both your business logic and your data can live on the device, and so it can queue submitted data until a connection is available. And if multiple people are updating the same record, you have to know how to merge the information, or which one takes precedence. I’ve even seen cases where the logic depended not just on the record: it depended on knowing which fields were updated by the device while it was disconnected, and the prior value of each field, to make sure the user had seen the most recent value before they updated it. If not, someone on the back end had to do a “manual merge conflict resolution” by calling both people.
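
A sketch of that field-level check (hypothetical names, not any product's actual API): each offline edit carries the value the user last saw, so the server can tell a clean update from a stale overwrite and escalate the latter to a human:

  function applyFieldUpdate(record, field, lastSeenValue, newValue) {
    if (record[field] !== lastSeenValue) {
      // Field changed while the device was offline -- flag for manual resolution.
      return { ok: false, conflict: { field, server: record[field], client: newValue } };
    }
    record[field] = newValue; // clean update: the user saw the latest value
    return { ok: true };
  }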


You frequently encounter similar issues between multiple web servers. Either due to scaling to multiple data centers or when multiple independent services all update the same information. This can get really complex when clients start talking to multiple different services.


> You can see the difference in their coding style, them not taking advantage of the features of the language, reinventing the wheel because they didn’t know there were popular packages that would do it for them, creating horrible inefficient queries and practices using EF etc.

That's why code reviews are a mentoring opportunity. Help your colleague level up.

(There should also be team dialogue about how to solve some problems more efficiently. I mean, if I was unsure of something, I would go and ask colleagues for some guidance, or go and do some research on my own.)


Now not only do you have a less effective developer who takes time to ramp up, you’re also taking time from senior developers.

I’m not saying this is always the right course, and of course this doesn’t scale to larger companies, but my first project at my current company, a week in, was to develop a feature from scratch. It ended up involving coding an API for the front end developers; writing an ETL process that used both Redshift (AWS’s OLAP database) and MySQL; designing the schemas; configuring the AWS resources with CloudFormation; setting up queues, messages, and lambdas; dealing with the vendor we were integrating with; and learning the business vertical. How much longer would it have taken if, instead of already knowing their stack - C# WebAPI, MySQL, and AWS - I came from a Java/Mongo/GCP background?

They needed someone useful now to get a feature out that they wanted to charge customers for.


>Why would a company choose someone who doesn’t know the stack they are using over someone who does?

Because a stack-defined programmer is a limited programmer...


As a person who was responsible for hiring in a previous role, and who still sits in on a lot of technical interviews, I didn’t care if you were “limited” to the stack we were using as long as you could get the job done without six months to a year of training.

If you started your career learning Java 20 years ago and you kept up with the latest trends of Java, you could still find a job now. The same is true for C# around 15 years ago.


Agreed. The job doesn’t exist to train you.


It’s more nuanced than that. Most job reqs have “must haves” and “nice to haves”.

My current job had one must have - C#/MVC/Web API and some Javascript experience - their current tech stack. Nice to haves were React, the fiddly bits of AWS, and Python. I was immediately useful because I was strong in the must haves, had a little AWS experience, and knew nothing about Python or React.

I’ve since leveled up on the nice to haves except for React; I refuse to jump on the $cool_kids bandwagon of front end development, especially since everything else I listed pays more and doesn’t change as often.


I find React's relatively simple architecture makes it easy to get into for JS devs; I've always found it extremely productive and maintainable. My sickness is to the point that it’s hard for me to imagine an immediate future where it hasn’t entirely consumed the front-end web environment like jQuery did 15 or so years ago. Maybe you're onto something, but I'm loving the new digs so far.


I don’t necessarily have anything against React as a technology, but as the old saying goes, a man with one ass can’t dance at two weddings. Almost everyone else at the company is better at the front end than I am, but most haven’t taken the time to learn the backend. Why would I learn something that I still wouldn’t be as good at in a year, and that wouldn’t let me negotiate higher pay by bringing an above-average amount of value to the company compared to other employees? I also wouldn’t be as competitive in the overall job market.

This isn’t directed toward you, just a general comment.


I run a small company and I’m looking for web developers. We primarily use PHP and are getting into Golang, but I really don’t care about the language, since what matters is concepts and experience with HTML/CSS/JS, HTTP requests/responses, client-server architecture, databases/CRUD, REST APIs, etc. So I’m not one of those employers :)


Same here, hire the person, treat them well, and the language doesn’t matter as long as they have experience in similar languages.


This is entirely company dependent. My team is primarily Ruby and we hire folks with no ruby experience all the time.


I need a job and can write code in nearly any language. Hire me?


Yeah downvote a guy looking for work. Thanks.


Don't take it personally. It's not you that's being downvoted; it's your comment.


Because Ruby programmers are rare nowadays and your company still needs people to build software. At least, that's my assumption.


They're not rare. I know a ton of Ruby developers, and many still write Ruby. Willingness to work on brownfield Ruby projects is rare; as with similar systems like Django or roll-your-own Express stuff, such projects strongly tend towards chaos and a lack of maintainability after a certain point if they haven't had strong leadership end-to-end. In my neck of the woods, the folks you want doing that work don't want to do it anymore. Unless you pay a lot.


> Willingness to work on brownfield Ruby projects is rare

After working in a dev agency that took on outside Rails projects several times while I was there, my conclusion is that Rails (not the only way to write even Web-focused Ruby, and not the only one I've used, but the only one that'll score you any points for hiring) is a pretty bad framework for any project that will have multiple teams on it over its lifetime, absent heroic technical direction & testing efforts that I've never actually seen in the wild—probably because young or outsourced teams picking Rails are doing it to move fast, over all other concerns.

You can take on or resurrect an average outside or old Rails project. It's just slow and expensive.

Too much magic, too much room for doing things some way that the next person will never have seen before, too little grep-ability, too hard for your tools to help you.


Agreed in full. I don't love Rails, either; I always used https://github.com/modern-project/modern-ruby (which I almost definitionally like, 'cause I wrote it). Rails is a framework that depends on developer continuity. I view Ruby as a tool for writing very specific things for experts, more than anything else. Web dev in general seems to be against that principle; Ruby could probably benefit a lot from more effort spent on ways to make it hard to do the Wrong Thing at this point.


Honestly, I find Rails projects to be unupgradeable.

We found it to be easier to just rewrite the project when moving between major versions (Rails 4 to Rails 5, for example). Shoehorning old functionality into a newer Rails just made things inconsistent.


Good call, coming on to a Rails project that's been upgraded—or worse upgraded twice—is... usually very unpleasant.


> They're not rare

Depends on how you define rare. Exceedingly uncommon at the very least. *stares at a pile of 150 resumes* Ruby is a language languishing from being embedded in a few niches, having poor performance, and otherwise being unremarkable, imo. Ruby was made with the idea that picking it up would be easy (which it is), leading to less incentive to learn it.

Putting Ruby in the job description gives you applicants with Ruby experience, which is why the blog post was (and is) impractical. Signaling works.


Calling Ruby "unremarkable" is a significant understatement, I think. For my money nothing in common usage approaches the same kinds of metaprogramming and solving-a-category-of-problems-at-once nature of a really comfortable and dynamic Lisp as does Ruby. (Clojure isn't in common use.)

I get that that's not what you want, you want somebody to close tickets, but that's not Ruby's fault. And most of the people I know still working in Ruby are at a level where that kind of grunt work just isn't worth their time; most others have, of course, moved on.


Point taken, but the language is not the thing I would hire for anyway. It's the environment and skill-set beyond the language. A mobile developer requires a completely different set of instincts to be successful than an engineer with work experience building scalable backend systems. C++ running across 1000 inter-connected systems is not the same as C++ compiled down to run on a gaming console with limited CPU cycles to spare. C++ shared across iOS and Android is also different, because Android requires that you bridge the library across Java, and iOS may require an Obj-C layer if you're in Swift.

If you spent your whole career writing code in Python and you wanted your next job on iOS, my concern wouldn't be whether you can learn Swift in a short time. My concern would start at something as simple as how you would cache data when the network connection times out, or whether you can manage your data models in a way that maintains 60fps scrolling, something iOS engineers pride themselves on achieving. Of course all of this can be learned, but my point is that it's more than the language.
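For what it's worth, the timeout-plus-cache concern can be sketched like this (the helper name, timeout value, and cache shape are all my own invention):

    // Illustrative sketch: fall back to the last cached payload when a
    // fetch times out or fails, so the UI still has something to show.
    const cache = new Map<string, unknown>();

    async function fetchWithFallback(url: string, ms = 3000): Promise<unknown> {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(ms) });
        const data = await res.json();
        cache.set(url, data);          // remember the latest good response
        return data;
      } catch {
        if (cache.has(url)) return cache.get(url); // serve stale data offline
        throw new Error(`no cached data for ${url}`);
      }
    }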


It's also not true with respect to people noticing what you've been doing with your career. A programming language isn't just a language, it's also a culture. I've been writing Java professionally for almost 20 years. I can definitely tell when a person isn't a "java programmer." I am struggling to describe exactly what I mean, but the best metaphor I can come up with is it's kind of like hearing someone speak with a heavy accent. It can all be technically correct, the grammar is perfect, but you can still tell it's not native.

Sometimes this is good. It can bring in new/interesting/helpful ways of thinking about problems to come up with better solutions. Sometimes it's bad because anything that's a little jarring or unusual impedes support and maintenance by "native programmers" in the same way a heavy accent can impede communication.


I would argue that it is the first thing a manager looking for a cost-center, cog-type engineer looks at. They are not looking for an engineer who actually understands the business; they are looking for a “plumbing engineer” who just keeps their head down and grinds out code. There is little upside in this position, and this type of hiring-manager mentality should filter those jobs out for you.

I do agree that most start-ups will probably restrict themselves to hiring in the same stack, as their tolerance for learning curves will be extremely short.

However, I feel that many employers are used to engineers who don’t “get business”, so they just want somebody to plunk down code. This leads them to “what kind of dumb cog/stack are you?” questions, as they are already assuming they won’t be able to hire a truly “full stack” engineer who understands the revenues and costs associated with their work, how they impact corporate strategy, etc.


The point is to acknowledge that the way most technical people are hired isn't the way that maximizes outcomes for technical people. Patio11's point is that the way to maximize outcomes as a technical employee is to _break out_ of this paradigm. It might reduce some of your top of funnel (random recruiter outreach) options, but it will increase the value of the opportunities that do arise.


> In the real world, picking up a new language takes a few weeks of effort and after 6 to 12 months nobody will ever notice you haven’t been doing that one for your entire career.

I think this is only true if what you're working on is simple, like CRUD apps.

It takes longer to become intimately familiar with a language's concurrency model and the pros and cons of its major libraries and frameworks, and to gain the ability to make such decisions quickly and without learning hurdles.


In my experience folks can be reasonably productive essentially immediately in a language they've never touched if they're diving into an existing code base. Give them a machine that's already set up like the other developers, point them at a couple easy bugs or features, do some code reviews, and yes they'll be slightly slower through this process than someone that's already familiar with the stack, but not much.


I'm not in the usual hiring circuit, but I went to one or two interviews to get a feel for that world. I always had the attitude of being the guy who will learn whatever is needed to successfully finish the task at hand, and I have work to show for it. It's amazing how that was worth nothing to the people who interviewed me, compared with how many years of React experience I had.


Sort of the same kind of experience. I presented myself as a problem solver, with algorithmic skills and a deep understanding of programming-language principles, as this is my research area...

"Sure, but can you do python"

Well, I definitely can learn it. But can I do it right there, right now? I never really worked in Python, so no.

And so... no.


It's not even just start-ups. Established companies work the same way. Getting pigeonholed is a very real and common phenomenon.

It's also kind of silly in a lot of cases too. It always feels like "Oh I see you do Angular... but could you _really_ be capable of developing in React? I don't know..."

So retarded. Yet this attitude seems to dominate.


> The stack they are using seems to be the first thing hiring managers look at.

I try to avoid companies that hire in this manner; it shows a lack of vision and planning. It's unlikely that a given developer will be working on the same stack in five years; the stack will have evolved or the developer will have moved on.


That’s kind of why you do want people who know the stack. If you as the manager expect the developer to move on within three years, why waste a year getting them up to speed without having anything to show for it?


First, developers rarely become tech fluent in 6-12 months.

Second, startups don't have 6-12 months.


Exactly what I thought when I read this paragraph. But some CTOs are still willing to let the team take time to learn a new stack, and that helps the organization in the end (I was part of one).


This is fantastic practical advice for software engineers early in their career. Here's some more, based on common pitfalls I see:

- Don't apply to 50+ companies at one go thinking only a % of them will call you. Choose ~5 companies, do a lot of homework about their business & write to key people at these places telling in ONE paragraph what you can do for their business. If it fails, choose next ~5 and so on.

- Other things being equal, choose team over company. E.g. working on a core technology or customer-facing team at a less-known growing business is much better than working on some internal tools team at Google. FAANG carries some brand value for future recruiters and VCs, but you should really optimize for learning rate early in your career.


> Don't apply to 50+ companies at one go thinking only a % of them will call you. Choose ~5 companies, do a lot of homework about their business & write to key people at these places telling in ONE paragraph what you can do for their business. If it fails, choose next ~5 and so on.

This is some of the best advice for people starting their careers. This comment should be at the top of this thread!

EDIT: Formatting


No, no, no, arghh no! No matter how much you research, study, network, and prep for any one particular company, if you are the one to reach out you’re going to get ghosted a certain percentage of the time. If that percentage is around 90-95%, which is what it has been in my experience over 20 years in the industry, then focusing on 5 companies may mean zero callbacks. You need to cast a wide net, especially as a junior engineer.

I’ve tried this strategy of targeting an insider in my network and pitching myself through him. Several times. It has never worked for me. Ultimately it ends up at “Whelll, nice talking to you! Next step is to apply for this job id online. Good luck!!” What does work is if the company initiates contact; THEN you go through your network to gather information about the role, the hiring manager, and so on. Having an insider pushing for you after the company has already expressed interest has been a lot more reliable for me.


I think the key is that you need to give them a reason why they should hire you. I recently went through the job interview process and got ghosted by a company with a job opening on LinkedIn that I know I was qualified for. I reached out to the recruiter, and within a week I had an interview scheduled, and within two weeks I had an offer.

Sometimes you need to just get a human's eyeballs on your resume.


I think you misunderstood the parent post. They didn’t say only apply to 5 companies. They said don’t send 50+ low-quality applications in one go. Send five high-quality ones to places you think are a good fit, follow up, and then send five more.

I think it’s good advice. Although the points you make are generally true too.


This has been my experience as well.


Agreed on this generally being solid early-career advice. Some of it is outright wrong at more senior levels though. E.g.:

* Not giving salary expectations upfront leads to a waste of time if the company's expectations are below yours.

* It's not really relevant how many bad programmers there are, since you'll tend to look at switching to "better" companies, not worse ones.

* It's a bit too negative on equity grants for companies with strong product-market fit (mostly the article just hasn't aged well, with companies staying private for so long now).


> write to key people at these places telling in ONE paragraph what you can do for their business.

Do you have an example of this? What can you possibly do for their business?

Do key people respond to random solicitations for a job? Don't you still have to go through their original Leetcode interview process? I am really not seeing the advantage of this approach.


Maybe for early-stage startups or deliberately tiny, private ventures. Otherwise, yeah, you might as well just apply via the normal channels, since the employees will forward your request to HR/hiring anyway.

OP's advice is wishful thinking. I highly doubt they themselves took that approach.


Learning rate doesn't really matter anymore. If you want to do really well you basically have to work for FAANG, who will basically only vet you on your algorithm skills. They completely negate everything else. So none of it matters; all that matters is whether, at interview time, you can find the most optimal solution. If you want to work for companies paying less money, then sure, experience counts.


There are lots of opportunities to make that same type of comp at non FAANG companies. They're just not so clumped together, and don't have a cult-like following pumping them up in every thread.


Much harder to make 300k+ at a non-FAANG/unicorn corporation.


No it isn't. If you can make 300k+ at FAANG you can find a 300k+ job across dozens of public companies.


What if you truly need employment and you are a run of the mill beginner? Is that 5 company application approach still optimal?


IME no.

This is going to be an extremely cynical take, but most software jobs (>80%) are basically the same. That goes for the companies as well; we aren't as unique as we think. And the companies that are unique, or doing unique work, aren't exactly having trouble finding employees (FAANG vs nearly everyone else).

If you want employment as a beginner, apply to as many postings as possible. Only do research on companies once they want to talk to you; anything else is a waste of time. It also takes very little time to fill out job applications (outside of obnoxious companies that ask lots of behavioral questions); it should take less than 10 minutes per application.

IDK how to feel about cover letters; every company I've worked at (startups, a national ISP, massive insurance companies) has stated they never read the cover letters sent in. They just want to make sure candidates have all the keywords on their resume before even talking (this part is largely automated away).

I have a basic cover letter that explains what I'm doing at my current job and how I'd like to work at $company doing $unique_stuff. Basically my cover letter is 90% the same between job apps, but I change the intro paragraph to match the title, company, and job description.

But as a beginner, or when moving to a new city where you know no one, apply to everything everywhere. It's a numbers game, and even as you progress in your career, you may not have enough leverage to target specific companies.


Like many things in life, applying to jobs is a numbers game. You don't know what the other side is doing/thinking. You don't know how much there is out there (answer: a lot). To negotiate well you need multiple offers and practice negotiation. If you are nervous about the application / interview process at all you gotta do it more.


I’d say yes, with an asterisk.

The trick is to treat finding your first job as a search problem, not a dice roll. You’re searching for employers that need someone with your skills and experience, and those same employers are also searching for you.

The asterisk is that I think this “5 company” approach is meant to be aimed at companies that don’t have the resources to do very “active” searching. A rule of thumb: if the company has a specialized university recruiter, they are probably too “active” for the list, if you’re a new grad with no intern experience.

You don’t find these companies on job boards or LinkedIn. You find them by “searching”, like an investor. You can begin by looking for local VCs, then look at the portfolios of those VCs, then at the LinkedIn history of people who work at those portfolio companies. By now you have probably seen 20 interesting companies, and you can proceed to whittle it down to 5 small companies that will already know you have the requisite energy and enthusiasm, since you went through the real effort of finding and choosing them.

TLDR: yes, but not if your five spell out FAANG.


Yes.

With 50-company approach you increase the absolute number of first callbacks you get (hence everyone thinks it's optimal). With 5-company approach you increase the odds of actually getting through to a single job offer. Think about it.


>you increase the odds of actually getting through to a single job offer

You haven't AT ALL explained why your "~5 companies approach" is more likely to get you an offer.

It's not. That sort of approach would be less likely to get you an offer anywhere that I've ever worked.

999 times out of 1000 you don't have a clue about a company's current business needs just by looking at public information. Even if you had insider information, a random insider probably doesn't know the current business needs of anyone outside of their day-to-day job. And even if you did know the company's current business needs, unsolicited job applications from strangers to people who aren't recruiters mostly go straight into the trash bin. If I personally received an unsolicited job application from a stranger, I'd probably just ignore it. The most I'd ever do is reply telling them to apply via our career site like everyone else.

Plus what exactly would you say as a "run of the mill beginner" for "what you can do for their business" that would catch anyone's attention that a quality resume and cover letter submitted though the job portal wouldn't?


It's because E(#offers) = #applications * P(initial_contact) * P(successful_interview | initial_contact)

The ~5 companies approach tries to optimize P(ic) and P(si) at the expense of #apps.

The 50 approach optimizes only #apps because it assumes P(ic) is low, and disregards P(si).

Especially P(si) is very important because the interview's purpose is to determine that you're a good match for them. If they're just another company you spammed an application out to, that becomes very obvious.

The ~5 companies approach forces you to have already selected them because you think there's a solid business case for them to hire you. And you can make that case during both your initial contact and the interview, thus maximizing P(ic) and P(si).
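To put invented numbers on that (these rates are purely illustrative, not data):

    // Hypothetical rates: mass applications get a 4% callback rate and a 10%
    // interview-to-offer rate; targeted ones get 30% and 50% respectively.
    const spray  = 50 * 0.04 * 0.10; // E = 0.20 expected offers
    const target =  5 * 0.30 * 0.50; // E = 0.75 expected offers

Under those assumptions the five targeted applications win on expected offers despite a tenth of the volume; with a low enough P(ic) the conclusion flips, which is really the crux of the disagreement here.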

> 999 times out of 1000 you don't have a clue to a company's current business needs just by looking at public information

I'd say it's more like 9/10, but that absolutely makes the case for the ~5 company approach. You're filtering out the ones that have a poor recruiting process.

It takes a lot of time to put together a good application and you have to focus on companies that will not only be responsive but follow through all the way to the hiring decision.

So focusing on companies that make it clear what they do and what the position entails will tend to maximize P(ic) and P(si).


How would one do this, though? Should I find these people on LinkedIn and message them? Should I try to find their email? I'm applying right now and I've heard this advice before; I just don't know how it could be implemented in practice.


Re: Profit Centers vs Cost Centers

I was once on a team that maintained a SaaS application that was in some sort of bizarre super-position of "cost center" and "profit center" a few years back.

The company technically sold the app and made money from it, but they also tended to give it away almost for free as long as the buyer was going to be paying lots of money for the company's $FLAGSHIP_BUSINESS_SERVICE that integrated with the app. (Think of it like a free mobile game with micro-transactions.)

The primary way customers interacted with $FLAGSHIP_BUSINESS_SERVICE (and generated micro-transaction revenue) was through our application. However, because the application wasn't what the customer was actually paying for, the executives had an impossible time deciding whether to treat the app like a cost or a profit center.

Whenever the app was working fine and everybody was happy, the executives were convinced it was a cost center and looked down on it. Whenever it wasn't working fine and companies began to get angry and threaten to take their business elsewhere, suddenly the executives would realize that without the SaaS application they wouldn't be able to sell more $FLAGSHIP_BUSINESS_SERVICE.

Long story short, it was a strange and frustrating experience in many ways.


My best conclusion is that there is no definition of "cost center" and "profit center" that makes sense. I challenge anyone to give me such a definition.

This is as good a place as any to put this:

Peter Drucker originally coined the term "profit center" around 1945. He later recanted, calling it "one of the biggest mistakes I have made" and asserting that there are only cost centers within a business: "the only profit center is a customer whose cheque hasn’t bounced."


Most places I’ve worked had the attitude that profit center = sales and everyone else was a cost to be controlled or eliminated. It shouldn’t really be surprising that management thinks management is the most valuable function, and actually making the product is not.


This has been my experience exactly.

I used to think that the team building the product was a profit center. The more time we spend solving customer problems, the more revenue the company generates.

The reality is they view engineering teams like assembly line workers. The revenue was generated by the sales team and now they come to the engineers to get the bad news of how much that revenue is going to cost them.


not sure this is an airtight definition, but think about employee skill vs revenue in a particular department. if you're selling software, you would expect revenue to go up a lot if you increase the overall skill of the developers. if you're a restaurant, you probably don't see a huge increase in revenue from hiring a very good website designer. basically, how hard is it to get people who do a "good enough" job? maybe "not very hard" means you're looking at a cost center and "very hard" means you're looking at a profit center.

of course the problem is that things are very interrelated at a company. maybe you can't hire good developers if you don't have good HR people to recruit them. if you have bad or not enough cleaning staff, maybe none of your "profit center" employees are actually willing to work for you because your office is filthy. maybe the event planning staff do really good parties that help with retention.

I guess I'm inclined to agree with you that there's no clear, workable definition of "cost center" or "profit center", but there are certainly areas of your company that generate more ROI than others, and I think that's kinda the point of this (crude) way of thinking about it.


I understand that we're mostly agreeing, and I think that we both have an intuition for what it means for something to be a profit center.

Still, I'd like to show a gap in what you said: You talk about what kind of increase in revenue you'd have if such and such people did a better job, and what kind of ROI you'd get from each department.

But the question is, an increase relative to what, or an ROI relative to what?

Let's take the billing department. If you made sales for $10 million, and your billing department works okay, then you would have a revenue of $10 million. If you hired the best billing experts on earth, you still would get just $10 million. So that might make you look at the billing department as a cost center rather than a profit center.

But if the billing department fucked up royally, either by not charging a customer, or by overcharging them and hurting the relationship, or by breaking obscure laws and getting the company into legal trouble, then you could be out a lot more than $10 million.

So in effect, you could view the billing department's ROI as "how big a disaster they can prevent, and how good they are at preventing it." That kind of thinking makes sense for any department in the company.

That's the reason why I don't see the logic behind branding some departments as profit centers and some as cost centers.


I agree with you 100% and have attested to the same claims on other threads. With small exceptions, businesses exist to make profit. The whole discussion going all the way back to the original terms should be recast as "ROI positive centers" and "ROI negative centers". From that perspective I would agree, it's better to be working for an "ROI positive center" in the company. And even though it might seem the security compliance department is ROI negative, if the alternative is being shutdown by a government agency or paying heavy fines then it really is ROI positive; it just might be harder to make those calculations up front.


The janitor unlocking the doors in the morning is a cost centre but nothing would get done without one. I agree with the coiner that there are only cost centres.

However, close to the top of the pyramid is where the money is at. I think that's more important than whether it's core business or whatever.


>I challenge anyone to give me such a definition.

If the balance of a department is positive, then it's a profit center. It's a matter of income attribution inside a company, and it can be manipulated by the people in charge.


Profit centers consist of people who can directly attribute revenues or savings to the work that they do. Everyone else is a cost center. No, it's not fair.


The bullshit is hidden in the word "directly". You can define this word to mean whatever you'd like it to mean. You can say a salesperson directly adds revenue to the company, but it's not really direct; the billing department is much closer to the actual money. You allow yourself to take the billing department for granted, but that's just because you don't work in the billing department. In the same way, you could view the developers as the profit center and take the sales department for granted.

I'll say it again, I'll be happy to see an objective definition of profit centers and cost centers.


Profit center: spending more money on it can make you more money. Cost center: spending more money will not be able to increase your revenue, hence the only way to improve your bottom line is by reducing costs.

A lot of the confusion on this thread comes from conflating necessary functions with profit centers. Billing and compliance are both necessary. Done poorly, they can cost a lot of money. Done perfectly, they can never increase the actual income of the company. That is why sales and product development are traditionally the only parts of the business identified as profit centers. It doesn’t mean the others aren’t important, just that they have different goals. Profit centers look to expand the business, while cost centers look to increase efficiency through optimization.

They have different value during the life of a company. During growth, profit centers are the most valuable source of effort while for a mature company that has saturated its market cost centers are most valuable to focus on. The advice to stick to profit centers is somewhat equivalent to sticking to growth which nearly always has a higher return than optimization.


If compliance is doing a good job, it protects engineering from late changes in projects, which can mean a better product and more revenue.

It's symbiotic.


The billing department is a profit center because it converts accounts receivable into cash and there are metrics like days sales outstanding that can be used to directly measure their impact on the bottom line.

Cost centers on the other hand are indispensable, but evaluating their impact on the bottom line is hard to quantify and therefore subjective.

Again, it’s not fair to the invaluable work done by the cost centers, but that’s what it is.


That "directly" is carrying a lot of weight, though. The salesmen have nothing to sell without all of the work of the other employees. I prefer the conception of profit and cost in "The Goal": you have throughput (money from sale of your product), inventory (goods in the system not yet converted to sales), and operating expense, the cost of translating raw materials to goods or services (e.g. payroll, cost of electricity for your buildings). In that model it's obvious that salespeople are an expense like everybody else, and the profit from their sales is a reward of the effort of the entire system, not just the salespeople.


Sales people are usually profit centers, as I understand it. Without sales, no profits.


But without product, no sales?


I suppose the real difference is between short term and long term.

If you double the number of salesmen, you can increase your income right now. Firing all the salesmen would drop the income to zero.

If you double the number of developers, there is no difference during the following month or two. If you fire all the developers, you can still continue selling the product for a few months, maybe years.

As a manager, you may be tempted to increase the short-term profit, grab your bonus, and get promoted to somewhere else, so that someone else will have to deal with the long-term impact.


Nah, you can totally have sales without a product - it's just not very sustainable.

It's a little tongue in cheek, but I think it illustrates a real perspective: improving a product doesn't necessarily extract more revenue from existing customers, but selling more copies of an existing product does.


Likewise, you can totally have sales with just the product. There is a saying that a good product sells itself.


Indeed. Not long ago our sales team was our existing customer base. Users would call up their friends who work in other companies and tell them "you need to get this software".

We've since added a dedicated sales person, but we still get a significant amount of sales this way.


Yes, that is true. Still, I think this is the way executives see it. For example, without logistics, no physical products would reach customers; yet logistics is traditionally a cost center.


Without the janitors there would be no profit either.


Drucker also stated that the basic functions of a business are marketing and innovation because those produce results; everything else is a cost.


The marketing department is a cost centre. The accountants are a cost centre. HR is a cost centre. Middle management is a cost centre. But the logic of profit centre vs cost centre is only applied to engineering departments, or workers in general. That’s really all you need to know about it.


Marketing is a huge profit center.

Improving efficiency in marketing spend can bring in millions in new business value.

If you can do that, it will definitely be noticed by the business.


Yoga classes in the office can improve employee and management efficiency and their quality of life, so such spend can bring in millions in new business value through healthier and happier staff. Is a yoga trainer a profit center?


If engineers (the Product department) are a profit center then yoga trainers will be approved by Management.

Try hiring yoga trainers for the IT and Procurement departments and see how quickly the request is rejected.


Marketing brings in new customers directly.

Yoga classes for employees do not.


Marketing spend is a cost. You are talking about reducing costs.


Everything is a cost.

Marketing brings in more revenue if done efficiently, making it a profit center. It is no different than sales.

Marketing spend is usually fixed in costs and the goal of efficiency is to acquire more customers for the budget.

Most engineers dismiss this part of the business, but if you can do it successfully, businesses will pump more money into it.


The better way to think about it is that you want to be close to the money. If you can quantify your contribution easily (“I was the lead on product X that was a huge success / brought in $Y additional revenue”), you have little trouble getting some share of that success. If you can't (“I fixed the tests so that they're no longer flaky and improved overall product quality”), then it's generally a lot harder to get the deserved recognition, even if your impact was actually larger! The trick is to translate it back into money when you can. Be specific: “Build machines are costing us $X/mo, and people used to rebuild three times on average to get a green status, so just by fixing that I saved 0.66 * $X/mo.” Add an estimated y% of developer time, add the cost of extra incidents and support tickets if you can quantify it, and all of a sudden you can reasonably claim a large dollar amount that will much more easily get the attention of a high-level manager.
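With invented figures, that rebuild arithmetic works out like this (only the three-rebuilds-to-one ratio comes from the example above; the dollar amount is made up):

    // Hypothetical numbers for illustration.
    const buildCostPerMonth = 3000;              // $X/mo spent on build machines
    const rebuildsBefore = 3, rebuildsAfter = 1; // average builds to reach green
    const savings = buildCostPerMonth * (1 - rebuildsAfter / rebuildsBefore);
    // savings == 2000, i.e. roughly 0.66 * $X/mo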


I received this advice quite a bit and it took me a while to understand it.

Early in my career I worked with a few folks who had PhD after their names. I don’t have a PhD, so I was completely unaware of what that really meant and how to value it. Almost exclusively they were pompous, sucky programmers, DBAs, project managers, and the like, in roles common at profit-center tech companies making software.

I thought PhD was a signal for dummy based on my super small sample of 20 or so people.

I eventually got to work with a ton of awesome PhDs and realized that I was wrong: a scientist working in their field, leveraging their life's passion of study, is very different from someone who has a PhD but is working outside their element.

I think this is the basis for how many people view programmers. They just see the person who cranks out a billion lines of SharePoint and costs them a lot and don’t even get to experience a world where a programmer creatively removes the need for a billion lines of SharePoint.

So there’s different kinds of programmers. My current idea is to call myself a programmer and describe my roles and outputs in a way that people can assess if that’s something that helps them or not.

I generally don’t think labels and titles mean much by themselves, and I actively want people who think they do to avoid or misinterpret my profile, as I probably don’t want to create something with them.


I think the author is going out of his way to sound like a grizzled veteran of career pragmatism, but I think he's just giving advice on how to make your life harder. He's basically trying to find some idealised description of "how things are" and advising you that, well, that's how things are and you should shut up and suck it up, or you'll never go anywhere in life.

Well, what kind of advice is that? You should just blindly follow what everyone else does, regardless of whether you like it or not? That's just self-limiting.

Burnout is a thing because people who want to do something radically different convince themselves that their only option is to work for a profit center, or something equally soul-sucking, and don't use their brains to find a worthy alternative and make it work.

There is nothing to be gained by limiting yourself to what seems like "how things are" and trying to do what it looks like everyone else is doing. That way you can only hurt your prospects for personal development and progress.

>> Co-workers and bosses are not usually your friends

Oh wait. Now _that_ is solid advice. That is 100% true.


The advice given in the blog is colored by the author's life experiences: they had a job they weren't satisfied with in Japan, tried their hand at an ISV (what start-ups were called back then), then at consulting, and found what I assume to be success with that. This happened over roughly the last 15 years, I think.

The funny thing is that calling oneself anything except X programmer has become pretty common. In fact, with more and more money flowing into programming, there's no shortage of bullshit artists and self-promoters.



Good read- thanks.


> ... just like how most good candidates aren't on offer.

I know this is a minor statement (and not really important to the advice otherwise), but I liked this discussion that there can be various reasons why good candidates would struggle to find a job: https://danluu.com/hiring-lemons/


Thanks for the read... That was exactly how I've felt; I was just not able to articulate it so well.

The best dev I know didn't go to college (he tried to be a traffic controller when other people would have been at university). He's exactly the kind of person who would be doing "elliptic curve partial nonce bias attacks" over the weekend.

I've seen him have to explain stuff like endianness to people with masters degrees.

I don't know him well enough to ask, but I think he is stuck working for us because 80% of jobs are off-limits without a degree.


There’s no shame in the word programmer and there was no need to culturally appropriate the word engineer from another profession. Bring back the word programmer.


More and more I think the word programmer has come to mean nothing and will continue to lose meaning. More and more people need to program for their work and would not call themselves “programmers” (Scientists, Marketers/DevRel, other Engineering fields, Support, Ops, etc).

BTW, there are people who definitely “Engineer” software. They are rare, but they exist, especially at the FAANGs. There are lots of people who have the word “Engineer” in their title who shouldn’t; a fine alternative is “Developer”.

What’s the difference? A developer is implementing a known solution in a specific domain. An engineer is dealing with a significantly unique problem space that has only been addressed by theory, if that. Engineers spend a lot less time programming than developers do, because their primary occupation is solving tough problems with their colleagues through documentation, RFCs, etc. They often are not the ones who implement their own ideas. Wait, isn’t that a Software Architect? No. Software Architects plan and document in known solution space; they aren’t solving unknown problems.

Most people that I’ve worked with in Software have never worked with a Software Engineer before.


"Engineers spend a lot less time programming than Developers do, because their primary occupation is solving tough problems with their colleagues through documentation, RFCs, etc."

Sorry but 100 times no. Engineering is about understanding and using the laws of nature in scientific terms (mathematics). Strength of materials, thermodynamics, hydraulics, that's the stuff engineers study and do.


Yeah, sorry, I omitted what you just said, but computer science is a theory-driven branch (of mathematics?) whose results can be incredibly difficult to implement, just as other engineering fields struggle to implement new theoretical breakthroughs in their branches of science.


I am doing exactly what you describe as a "Software Engineer", but I would still call myself a Software Developer. Why? Because I don't think your definition is widely accepted like that. I especially think that because the law in my country disagrees.

For me to call myself an engineer legally, I would have to have an engineering degree. I don't have one, so I will not call myself that.

I am a Software Developer. I do the exact same things as the Software Engineer sitting in front of me. We work on the same types of problems on the same avionics project. The difference is that he has a PhD and I don't.


>An engineer is dealing with a significantly unique problem space that has only been addressed by theory, if that.

This is a very high standard for defining engineer and I'd guess most practicing engineers don't meet it. Sure, there's people bridging the gap between theory and practice, but most engineering effort goes into applying the same well-understood theory and practice to slightly different situations. I'd posit that your definition of software engineer is closer to what most people would call a research scientist.


To laypeople (like management), 'programmer' means 'a technical person who (for a living) tells computers what to do'. Terms like 'developer' or 'software engineer' are unfamiliar.


If you like your job, call yourself a programmer and build stuff you like. This article is marginally good advice for your pocket but bad for your soul. Don't be a programmer and act like a lawyer. Programming is one of the few jobs where it doesn't matter if you're a dog. You create value, easily; you don't just extract it. It's part of the reason why I'm a programmer and not, e.g., an academic.


You do realize academia is where a lot of the research for programming comes out of, right? Seems like an unfair way to characterize academia.


Not CS-related academia, in my case. But yeah, academia requires a lot more politics, networking, etc. nowadays, in a way that "industry" doesn't.


I think if you call yourself a programmer you can do consulting at a relatively high rate for companies who need to get some project for their profit center done. You do that, demonstrate your worth, leave with lots of money, and they might just call you again next time they need to do something for their profit center.

There's definitely ways in which being a programmer can be part of a rewarding career.


The point is that you should call yourself a consultant or businessperson, even if you want to be a programmer.


Patrick is not mainly a programmer. He does some programming, but he is mainly an A/B testing consultant to companies with small tech teams. That's great, but consultant isn't the only tech job. There are tens of thousands of jobs where companies have highly leveraged programmers (called "engineers") who are well compensated for doing what they are good at -- programming.


That depends on who you want to be hired by. Some companies already know exactly what they need - they need some specific software written so they are looking for a programmer.


I mean, that’s sort of Patrick’s exact point - you should avoid being hired as a generic “programmer” as you’ll make far less income. Instead, if you frame yourself as an expert in a particular business problem (and have the data to back up your results), you’re far more valuable to the average manager at BigCo, or to any business in general.

A good example of this is the acquihire phenomenon. BigCos often acquire small startups purely to hire the people working there. Half of the time, they shut down the startup and throw away the code. The objective is to hire smart people who are also domain experts in a particular problem.


You could just as well argue that you are an expert in programming, and that your expertise is not limited to solving one particular business problem. That way you open yourself up to more companies, or to business in general.


For permanent roles, sure. You might as well be a contractor at that point, and contractors get paid more.


Even the companies that know exactly what they need don't only need programmers. They also need architects and solution providers, who are mostly better paid than the programmers themselves.


It’s not like I only get one word to describe myself. My business card can read “Programmer Businessperson” or “Programmer, Foo Consulting.”

If I’m a high-rate programming consultant, then I want my customers and potential customers to understand what I do, and it’s likely programming, business, consulting, in that order.


Is it relevant to bring up that time major tech companies stole ~$8B from their programmers, more-or-less because they could?

https://pando.com/2014/01/23/the-techtopus-how-silicon-valle...

Oh my, I just now saw this follow up: https://pando.com/2014/03/22/revealed-apple-and-googles-wage...

> Confidential internal Google and Apple memos, buried within piles of court dockets and reviewed by PandoDaily, clearly show that what began as a secret cartel agreement between Apple’s Steve Jobs and Google’s Eric Schmidt to illegally fix the labor market for hi-tech workers, expanded within a few years to include companies ranging from Dell, IBM, eBay and Microsoft, to Comcast, Clear Channel, Dreamworks, and London-based public relations behemoth WPP.

So, anyway, yeah, don't be a chicken, be a fox. But better than being a fox, be a human being.


Some solid advice, some not so much. I do call myself a simple programmer. That is how I introduce myself, too. It helps a lot when dealing with people whose job titles require a third of an A4 page to fit.

>How do I become better at negotiation? This could be a post in itself. Short version:

He misses the most important thing. The most powerful thing in negotiation is knowing you can stand up from the table and leave.


I really like that argument, and when interviewing I feel I am in that position often. But if you leave money on the table and walk away, have you ‘won’ the negotiation? Potentially there are greener pastures out there, but that’s hypothetical.


Yea but you're selling yourself and all of us short -- you're either a hacker, a poet or a craftsman, haven't you heard?


And it's really all of those, or none...


Yes! Add to that fixer, tinker -- as knowledge workers, our role is to create knowledge and disseminate it into the organization which is frankly the hardest task in the world, especially when we're working against our own selves


I call myself a programmer to avoid people thinking I want to be a manager or spend my days in meetings. I want people to expect 90% of my output to be code.

Is this good for a career? Don't know, don't care. It's what I do and enjoy, and there are many slippery slopes to not coding, becoming rusty, and ending up on the M train by default.


I’ve met many programmers/engineers who are terrified of becoming management and make (what I believe to be) poor career choices trying to avoid it. Instead they get passed over for pay raises and interesting opportunities because they’re seen as being difficult to work with. When times get lean, I’ve seen those people get laid off first because no one likes working with them.

I work on a team of 50 employees. For those 50 people, there are two managers. 1 out of 25 isn’t great odds of being accidentally promoted into management without your consent.

Your working style may be right for you; I’m not questioning that. But in a thread targeted at career advice for junior engineers, I thought I’d chime in from the other side. The chances of accidentally ending up as a manager are very slim. Don’t destroy your career and miss out on interesting opportunities to avoid something that’s not likely in the first place.


If only we could retroactively demote all the managers who suck at it!


If you're at a healthy organization, they support managers transitioning back into engineering.


> I work on a team of 50 employees. For those 50 people, there are two managers. 1 out of 25 isn’t great odds of being accidentally promoted into management without your consent.

Where I work it's about 8/50.

But I don't have to worry about being a manager - because they only get hired from outside.


The author's point is that your job is not to produce code, but rather to produce good business outcomes. Obviously you do that by producing code (and participating in design meetings and collaborating with other parts of the company in various ways).

Firefighters don't call themselves ladder and hose operators. Those are just the tools they use. Their job is to fight fires so they call themselves firefighters.


I think this post had tons of good advice, but I feel like programmer vs engineer is a micro-optimization.

There's no standard terminology and nearly every company has a slightly different definition of terms.


How do you know what to code or whether you should even be coding at all if you aren’t willing to spend the time knowing the business and the customers?


In my case:

>How do you know what to code

Someone needs something to be made or done, and I help them by writing code.

>whether you should even be coding

I've been doing it for a long time.

>if you aren’t willing to spend the time knowing the business and the customers?

That's precisely the reason I prefer code. I'm not that interested or even willing to spend time knowing the business and the customers.


I would argue that knowing the business and the customers makes you a much better programmer though. I would rather a programmer spend 75% of their time programming and 25% seeking to understand why they are programming instead of 90%+ heads down with little thought or attention to why they are programming.


This shouldn’t be a problem in a business with well defined roles. Boss/manager says customer wants x. I don’t care why they want that, and I may even think it’s dumb, but I can make it happen. After all, I get paid to solve problems with code, not interpret customer requests.


I was once interviewing for a job as a senior engineer/lead role. The CTO - my eventual manager - was asking me a lot of high level technical questions even though he was a very competent developer.

Then another developer, who had been there for a decade, asked me how I would go about writing “address validation routines” - it was a real-world problem he was facing. I told him that I wouldn’t. That’s not the vertical the business was in. The best code is the code you don’t have to write. Address validation is a solved problem, and there are plenty of third-party CASS solutions. Yes, they cost money, but being able to write address validation software is not the company’s competitive advantage or its differentiator in the market. It’s better to outsource it. That’s one less thing we would have to maintain and debug.

The developer wasn’t impressed. The CTO was.

I’ve spent my entire career asking questions and making sure we were building the right things.

My first job out of college over 20 years ago I was the sole developer on a project to write a data entry system that was used by a new department with a dozen new employees. I had to actually talk to our potential client to gather specs myself.


> I’ve spent my entire career asking questions and making sure we were building the right things.

But this gets back to what OP was saying. Spending time asking questions == spending time in meetings. Not writing code.

I’m not advocating for never asking questions or suggesting better ways, just saying that I prefer to work where someone else handles the bulk of that so I can focus on what I specialize in.


> After all, I get paid to solve problems with code, not interpret customer requests.

But how to know what to solve... At least for my job, interpreting customer requests is key. The customer knows what they want, but they don't always realize what they want is not the best solution.

Often I can steer the customer to a vastly superior solution to what they initially had in mind. Quite often this superior solution can reuse existing code, sometimes entirely.


Because there are hard problems to solve. I'm into computer science not capitalism.


Do you not consider finding out what the person asking for things actually wants to be a hard problem to solve?

I live to build things and continuously fend off attempts to slide me into management, but I find actually programming things to be only half of the puzzle.

Humans are extremely bad at articulating what they want and will generally tell you the wrong thing up front because they haven't thought it through fully.

A day spent with a stakeholder carefully picking their brains and properly crafting a plan can be just as satisfying as a day of straight coding - sometimes more so.

(tbh this is probably also why I find UX work so interesting - getting a user to click on a button is considerably harder than making the button in the first place)


Pet peeve of mine: solving practical problems often involves capitalism in our system, but that doesn't mean practicality is capitalism. If you're writing ad code, you're working on capitalism only. If you're writing railway logistics code, you're doing practical work that any society needs, capitalist or not. And yes, this might involve understanding customers (or as a different system might call them, "people").

The important question would be "is this work useful," not "does this work involve capitalism."


Speaking of railway logistics. Railroad car repair billing is esoteric and complicated. I wrote software dealing with this at one job.

https://www.railinc.com/rportal/documents/18/260737/CRB_Proc...


I bet the company you work for is “into capitalism”.


You don't know anything about my particular situation, and you happen to be very wrong. But assuming the role of your rhetorical me: there are thousands more companies just like "mine". They all provide paychecks.


And how do those companies earn money to give you a paycheck?


Most of these paychecks come from unprofitable companies. I'm not sure what point you want me to walk into. Are we still talking about programmer management strategy?


Whether they are “unprofitable” or not, they still either seek to become profitable or to get bought out - i.e., your livelihood is still based on capitalism.


This is genuinely really good advice.

However, I want to caution against one interpretation: when you focus your attention on projects that cut costs (and risk) and grow revenue, it is tempting to make that your top priority. It should be your second.

Your real top priority should be the health and stability of yourself and your family. Ideally, this is aligned with focusing on visibly-profitable work for your employer. This is often true, but not always.

—————

Example: Suppose you are working at a company that takes code quality and automated testing seriously. This has real business value because it means that you are able to quickly execute projects which accelerate the sales team and unlock partnerships. So far, so good.

Then suppose there is an opportunity which involves taking over a codebase written by another company. The business case is strong—by taking several months to make UX improvements, you can significantly raise revenue. But, the codebase is in a poorly-documented PHP framework, has haphazardly inconsistent naming, and no automated tests. Should the company do the project? Maybe. Should you do the project? Maybe not.

Here’s what can happen: your team agrees that it’s going to suck, but you’re all in this together. You start on the project and find it much harder to make progress. Because of the new toolset and the lack of tests, you find it hard to maintain focus. You also lose the ability to produce reasonable estimates, so your communication with internal stakeholders erodes - taking trust down with it. The number of noisy automated alerts also increases, further eroding your focus. You try to find time after work to work through a PHP book, but it’s hard. The noisy background of your open office combines with your shaky grasp of the toolset, and you feel useless making day-to-day progress. You talk to your manager and team lead about this. They care, but they are overworked. It’s not like they can change the open office anyway. Don’t worry - the project will be over soon. Everyone is in the same boat.
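
(A common first move in that situation - not necessarily what this team did - is to pin down the legacy app’s current behavior with characterization tests before changing anything. A minimal sketch in Python, exercising the PHP app over HTTP so the implementation language doesn’t matter; the URL, route, and golden file are hypothetical placeholders:)

    import requests  # assumes the requests library; run the legacy PHP app locally first

    BASE_URL = "http://localhost:8080"  # hypothetical local address of the legacy app

    def test_order_page_matches_golden_master():
        # Characterization test: assert the app keeps doing exactly what it
        # does today, so refactoring underneath can't silently change behavior.
        resp = requests.get(f"{BASE_URL}/orders/42")
        assert resp.status_code == 200
        with open("golden/order_42.html") as f:
            assert resp.text == f.read()  # snapshot captured once from current output

Run it with pytest; each golden file is recorded once from the existing system and then treated as the spec.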

At the end of several months, you need to fill in your semiannual self-evaluation. You choke - the words won’t come. You’re caught between knowing you need to brag about your accomplishments and feeling like you have none. You’re bad at lying. After spending two full days on your self-evaluation, you submit it mostly empty.

Soon, you are searching for a new job.

—————

So yes, when thinking about how the company views engineering, look at revenue and costs. But before that, look at the things that you need in order to be effective. Keep your mise-en-place. Use tools that fit your brain. Learn to zealously advocate for what you need.


Nit / friendly amendment: mise en place

https://en.wikipedia.org/wiki/Mise_en_place


Thank you


Curious why you picked PHP?

I find Go codebases more likely to be full of ad hoc undocumented stuff than PHP.

I guess it depends on what you are used to.


Because that's what the external codebase in my experience was written in.

This is going to be specific to each person. Some people find PHP way easier than Ruby, Python, or Haskell. Different brains are different.


>>> mise-en-place

Yes yes yes. And transferring between jobs is generally so much easier with OSS - if you can!


Top two posts on Hacker News today -- https://news.ycombinator.com/item?id=21302498

^ this is the other one, wherein the second line of the article states: 'As programmers...'


Much of what he says about business and programming is true, at least in theory, for well-run, competitive businesses. But, for instance, in government they sometimes never actually calculate the ROI of a project or employee. They just think: well, we have money that needs spending, let's hire some people to build some seemingly useful software. Determining ahead of time what's useful and what's not is harder than it seems.

But the part about most programmers not being able to implement FizzBuzz is wildly false in my experience. I've worked for over half a dozen average companies (plus some government too!) in the Bay Area, and I've never once met an engineer who couldn't implement FizzBuzz. They're able to program FizzBuzz and much more complex things, far beyond what the job requires. Not only that, but I've never even met an engineering candidate (I've interviewed dozens) who couldn't do FizzBuzz or other basic programming tasks. So I really don't know how you people are finding all these unqualified candidates.
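
(For anyone who hasn't run into it: FizzBuzz asks you to print the numbers 1 through 100, printing "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A reference version in Python:)

    # Classic FizzBuzz. The multiples-of-15 case must be checked first,
    # otherwise "FizzBuzz" would never be printed.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)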


Don't be in the Bay Area, that's how. That's why they pay Bay Area peeps the big bucks: the standard of skill is higher here, because of the community and other factors.


Nah, it’s just harder to fix wages at a low level when you can job-hop so easily.


Even though I am a registered engineer (civil), when asked I say I am a programmer, because that's what I do. There is absolutely nothing wrong with it.


Yep. It doesn't seem to matter much in the US, but in other parts of the world you can't call yourself an Engineer unless you are a P.E.

I typically call myself a software developer because everything I do that isn't actually writing software (documentation, design, system stuff) is in support of that.


>but in other parts of the world you can't call yourself an Engineer unless you are a P.E.

I've been told the same is true in some US states (e.g., Texas) -- or, at least, the licensing boards would like people to believe it is. But in my experience, hardly anyone pays attention.

You do need a P.E. for certain things - mostly when dealing with regulators, etc. - but my understanding is that in software specifically, the exam has been discontinued because essentially no one was taking it.


I’m not sure Patrick would entirely stand behind this as written anymore, simply because the long-term trajectory of “just a programmer” looks better than it did when he wrote this.

https://mobile.twitter.com/patio11/status/117561737079606886...

An under remarked-upon trend: SFBA/NYC engineering wages are starting to drive engineering wages in the rest of the US as these companies open other US hubs and print remote offers, in both cases keeping salaries in relatively tight bands for e.g. internal fairness reasons.

If you were going strictly by geographical market standards you could have a $200k new graduate in SFBA reporting to a 15 year veteran EM making, hmm, $120k or so in Chicago, but since nobody can tolerate that, that EM gets offered $250k.

“Are there enough engineers from SFBA companies for this to matter nationally?”

There are approximately 3 million software engineers in the US and approximately, finger to wind, 300k work for AppAmaGooBookSoft alone, and that number increases by 50k per year.

The modal software engineer in the US is still working for a digital marketing agency or midwestern insurance company but capitalism is starting to say "Look if you can paint pixels on a web application and you presently work in a cost center, that's unacceptably inefficient."


I would argue the opposite: the trajectory of being a programmer has been really good for the past 10 years or so, but we've pretty clearly reached Peak IPO and a correction is on the way. And if/when it does hit, the person with the skillset "can increase revenue or reduce costs" will be in much better shape than "can program mumbo-jumbo".


I’m pretty sure the entire startup “industry” - including companies like Stripe and Airbnb, which are well past finding product-market fit and the risk of death - has fewer employees than Microsoft. AppAmaFaceGooSoft probably hires more people every year than every YC company put together has ever employed, counting at their peaks those that went to startup Valhalla.

