In the real world this is (unfortunately) not how technical people are hired. The stack a candidate has been using seems to be the first thing hiring managers look at.
You would think only clueless recruiters would think this way, but my experience is that well-respected and successful entrepreneurs also operate along these lines. It is not clear to me whether their success is related to or in spite of this way of thinking. To give them the benefit of the doubt, startups often need to move quickly, and the weeks or months needed for people to become familiar with a new stack may simply not be available.
The stack an engineer has experience with can be one indicator of the areas they excel in.
Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.
Polyfills aren't exactly quantum mechanics. They're not some radically different mental paradigm that takes years to develop fluency in. If you already have a sound basis in the major software engineering concepts, learning polyfills is just a matter of reading the docs to learn the details.
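To make the point concrete, here's a minimal sketch of what a polyfill even is: a guarded patch that adds a missing API to the environment. (This is a deliberately simplified `Array.prototype.includes`; real polyfills like core-js also handle NaN, sparse arrays, and other spec edge cases.)

```javascript
// Only install the shim when the native method is missing,
// so native support always wins.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, 'includes', {
    value: function (searchElement, fromIndex) {
      const len = this.length;
      let i = Math.max(fromIndex | 0, 0);
      for (; i < len; i++) {
        if (this[i] === searchElement) return true;
      }
      return false;
    },
    configurable: true,
    writable: true,
  });
}

console.log([1, 2, 3].includes(2)); // true
```

That's the whole mental model: feature-detect, then fill the gap. The rest really is details in the docs.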
Let's not forget that Jordan Walke was working on back-end infrastructure code just prior to inventing ReactJS, and Miško Hevery was working on Java testing frameworks just before inventing AngularJS. So it's not like front-end is such a unique problem domain that it's impossible for outsiders to pick up.
The only major sub-domain of software that I think this may potentially be true of is embedded systems. And even then, that's a big maybe.
Individually yes, but all of these things would take more than a few weeks - possibly months. With onboarding on an average software project also taking months, you could be looking at paying for a lot of unproductive time if you don't hire somebody with some prior knowledge.
The idea of language fungibility is a joke. Sure, it takes you a day to learn the basics of syntax in another language. Learning the whole ecosystem takes considerably longer.
So, let's say that is the case. The lesson is that hiring is expensive and risky, even if you find a candidate whose experience exactly matches your stack. Like you mention, just onboarding for the internal technology and project itself takes months.
You're already sinking a big investment into a new hire. So most times you should choose the best engineer even if it means additional O(project onboarding) time. From a business standpoint a great engineer who takes 6 months to get up to speed is a much better investment than a mediocre one who takes 3 months. Not always (maybe you're an ultra-high growth startup who needs bodies on the floor ASAP). But usually.
And the reality is that dropping stack requirements drastically improves the candidate pool. For one you have way way more candidates to select from. Two, most times when a company is hiring for [tech X] it usually means that the market for engineers who know [tech X] is super-tight. It's just the nature of the business cycle. If [X] is in demand at your company, it's probably in-demand everywhere, and therefore in short supply.
All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers. If you're willing to hire from anywhere, then usually there's some sub-sector that's in a downturn. That's a huge opportunity to poach talented engineers, who are being mostly overlooked because their stack experience doesn't align with the hot growth sectors.
There's definitely a sweet spot in terms of stack specificity. There's very little issue hiring a Flask developer if you need somebody to work on Django. However, if you're a Ruby shop and you hire a Java developer - even if they are great - you've potentially doubled or tripled your onboarding time.
>All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers.
I don't think that's necessarily true. That said, it could be market-specific. I can imagine that hiring a developer in, say, Ohio might make your approach more worthwhile than hiring in SF, where the talent pool is deeper.
I don't think that's true at all. I've met tons of Java devs who moved to Ruby with minimal pain. It's still OO and many of the same patterns apply.
The GP's calculation has an important weakness: it doesn't take retention into account.
The sweet spot will be mostly determined by retention. And the very bad places that have impossibly (sometimes literally) specific requirements are probably correct to require a narrow set of competencies. But of course, they would gain more by improving themselves so that developers don't quit as often.
You think? I went the other direction and I don't think it was so hard. The language was the easy part of onboarding, the hard part was all the company specific stuff.
But front-end development of complex applications isn't done in a vacuum by one person. It's done by multiple teams, possibly distributed across different physical locations.
As an architect, what is your polyfill story? Does each team ship ES Modules that are later compiled by Webpack? Is each team responsible for loading their own? It's probably more efficient to handle it globally --knowing what all teams need at once as part of a build step-- to prevent duplicative loading of the same polyfill by 5 different teams.
It gets nuanced quick. Nobody can master all these concepts --and reasonable ways of managing them in distributed team environments-- in a few weeks.
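For what it's worth, one common way to handle polyfills globally as part of a build step - an assumption about tooling here, not necessarily what the GP's teams use - is Babel's preset-env with `useBuiltIns`, which injects only the core-js polyfills the compiled code actually needs for the targeted browsers:

```javascript
// babel.config.js - hypothetical shared build config; assumes
// @babel/preset-env and core-js@3 are installed at the workspace root.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        targets: '> 0.5%, not dead', // browserslist query for supported browsers
        useBuiltIns: 'usage',        // inject polyfills only where code needs them
        corejs: 3,                   // polyfill implementations come from core-js v3
      },
    ],
  ],
};
```

Because the bundler deduplicates modules, each core-js polyfill ends up in the output once, even if five different teams' code triggers it - which is exactly the "handle it globally" outcome described above.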
Most of the variance in quality between software engineers has to do with technology-agnostic skills rather than stack-specific knowledge. They architect well-designed, modular systems. They communicate with stakeholders. They write robust, readable, testable code. Their documentation is understandable and comprehensive. They're thoughtful about naming things. They can navigate large codebases and keep complexity contained. They understand performance tradeoffs, and anticipate bottlenecks before they occur.
Very little of that has to do with stack-specific knowledge. A developer who does all of the above, but doesn't know all the ins-and-outs of the JS coercion model, is going to be much more productive on almost all practical business problems.
The most important skill usually isn't knowing every single detail of your underlying ecosystem. It's knowing enough about its overarching landscape to be aware of the things you don't know, so you can ask the right questions and recognize where the limits of your knowledge lie.
How about average engineers with average fundamentals? The kind most people will on average be hiring and working with.
An average company, paying average wages, solving average problems is better off getting an average developer specialized in the area they need.
The big problem with embedded is that when things go wrong, it cross-cuts with electrical engineering - you have to understand datasheets, read a scope, and understand that digital signals sometimes aren't.
I have never known a good embedded software person who is software-only. Even DSP-types with EE degrees don't often operate on hardware very well.
Generally, the best embedded software types are good hardware EE's and passable software people.
I consider myself a very good embedded engineer, but my software is merely "straightforward". Of course, the best software people I know claim I should take that as a compliment, and they are all happy to work with my code.
I don't take your skillset lightly at all, but there comes a time when people start using inefficient solutions because they're faster to produce, and you have plenty of hw resources, so... why not? Do it in Python and improve it later... maybe, and then you don't. :-)
I believe you get my point. Not that I personally think MicroPython is a worthy solution for embedded dev right now, in a professional context. But there will come a time when it can make sense, as is already the case with the RPi, as I mentioned.
> I consider myself a very good embedded engineer, but my software is merely "straightforward"
And that's where I make my business. I'm merely a straightforward embedded engineer, but I focus on the software/hw integration, making the excellent work of people like you talk to the external world - databases, desktop/web UIs and all of that - using software engineering practices. Basically doing the "cloud", "edge" and "IoT" buzzwords.
The most important 67-80% of front-end work can probably be learned in a few weeks by an experienced engineer who already has a high level understanding of the web. The last 20-33% hides some things that are trickier to wrap your head around, along with more than a few WTFs, and a never-ending stream of subtle cross-browser differences.
It may still be plausible that a talented engineer with strong fundamentals in software design and cs could cover the ground quickly. But the message of the "don't call yourself a programmer" piece might be understood as being as much about how you're going to spend your time as it is about how you're classified professionally (though those are certainly related): are you a stack-whisperer, or are you a badass problem solver with expertise in translating a problem domain into a formal-ish system which expands the capabilities of the human system (and network of machines) it's embedded in?
My bet as someone who did years of front-end focus is that for all but the truest 10xers, if you focus on front-end and try to be thorough, your risk of being a stack-whisperer jumps dramatically because of how much of your time corner-case ephemeral arcana will chew up. Front-end is certainly not the only place in the industry where we've let that hazard run away with the lives and time of too many talented people, but it's a popular one.
It's web development, and let's face it, many companies don't want long term talent, they just want someone to finish their current project and move on. Which can make sense in some circumstances. You don't need a software engineer level guy to make a dashboard.
People who are beyond the framework-user level end up moving on from even considering those kinds of jobs.
Even if I was hiring for a specific role (front end), I care less about familiarity with specific bundlers or frameworks and something more like - “Can this person problem solve and deal with varying levels of ambiguity? Do they seem capable of figuring something out without a ton of direction?”
This is the “secondary employment” market, where HR policies aren’t optimized for in-demand fields like software development. The immediate managers are usually helpless to do anything about it and watch employees they spend time training leave for greener pastures. That being the reality, what other course is there but to find people you don’t have to train?
It really depends upon the exact situation. We can all make up bullshit examples to make either option look stupid.
Learning languages and libraries is easy. But new paradigms, domains, and entire new ways of thinking? It takes time to master.
So I'd be inclined to add disclaimers to the idea that "after 6 to 12 months nobody will ever notice".
This might be true if you're moving from one web programming gig to another, but the author himself acknowledges the vastness of software out there that underpins every aspect of industry.
So - if you're going to be working on enterprise middleware in investment banks, or embedded systems on airplane simulation kit, or APL, or missile guidance software - but all you've done is web... then tech stack does matter. Insofar as being familiar with the paradigms, standard patterns and unique industry norms.
The problem is that some middlemen (ie recruiters, HR or non-technical hiring managers) do not understand that, for instance, Java and C# are somewhat interchangeable, but C# and APL are not.
Fantastically written piece all in all - lots of great stuff in there that I wish I had either known (or stopped denying!) earlier in my career. But on this specific point, I think it's a dangerous idea to imply to undergrads that deep, hard-earned experience in both tech and domain can be overcome on such small time scales.
Here's why I think this is sometimes a false dichotomy: going from Ruby to Python will be relatively easy. Going from Ruby to Haskell, on the other hand, will involve not just learning Haskell the language (syntax etc.) but also new paradigms and entire new ways of thinking.
Speaking for myself as a hiring manager I'm not looking for specific tech stacks. I'm looking for general patterns. Someone who has spent their entire career doing front end work is unlikely to be a great fit for embedded development and vice versa, but I don't care so much about the specific technologies they used in the process.
My experience has been that concepts and fundamentals trump specific tech stacks. However I also take a longer range view of things. I care far less about relative productivity of two candidates a month from now than I do a year from now.
On the other hand, you have a few “full stack developers” on hand and you need someone to concentrate on one part of the stack so your current developers can concentrate on other parts.
I’ve found it remarkably easy to hire a front end developer who knows the stack we want to use and doesn’t need to know anything about the business besides: here is what we want from the website, give us some mockups, and we’ll come up with the APIs you need.
Knowing the ecosystem, best practices, frameworks, etc takes longer than a few weeks.
Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?
Being a "Python Programmer" in no way measures my value to the company, in fact, it necessarily limits what I'm actually capable of contributing to the company. If you can show that you can add value outside of a specific tech stack, you're worth the few weeks to learn a new technology. And yeah, it is basically a few weeks to get up to speed and be a contributing member of a code base if you don't know the technology. 6-12 months to know all of the details if you're a solid engineer.
I consider myself to be very good at C#, okay at backend JS development and passable at Python.
I have enough experience from doing a lot of ETL in a previous life to know how to optimize queries and schemas for speed and to not lock up a database.
I’ve set up CI/CD solutions from scratch with what is now called “Azure Devops”, AWS’s CodeBuild/CodeDeploy/CodePipeline, OctopusDeploy and Jenkins
I could just as easily and competitively apply for jobs as an “AWS Architect” who knows most of the popular AWS offerings for developers, Devops, netops, and system administrators and I have experience with them.
But in many of those areas - especially the front end, netops, and system administration at any scale - it wouldn’t make any sense to hire someone who “kind of” knows what they are doing over hiring a specialist.
If I need something now, I’m not going to want to wait for you to get up to speed in a year.
Do you really think you’re as good at any of those areas as a specialist? AWS alone announces dozens of new things every month on their podcast.
As a sweeping statement, I'm better at solving "a problem" than a specialist. If you define the problem area tightly, they may be the right person for the job, but if the role you're hiring for has uncertainty and flexibility, the front end specialist probably isn't the right person to figure out why your database is slow, your load balancer isn't working, your build and deploy process is stuck, etc.
There are definitely roles that are much more fit to one or the other, but the generalist can handle a lot of things pretty well. All of that being said, we can probably agree that the best setting is having both.
I would separate this into two separate categories - big tech companies (or others with similarly large shared internal infra) and others. For big tech companies, for most positions, the tech stack is proprietary internal stuff such that knowing the language and best practices only get you about 10% of the way. For other types of companies, it's generally more important that you hire people with the right business context than the exact tech stack.
With that said, native mobile development isn't just a stack - it's more of a different, though overlapping, career path - the main reason not to hire non-mobile developers into a mobile role isn't that the stack is different and takes time to learn, but that the workflow is so different that they may or may not know what it is that they are even signing up for. Hypothetically, you'd rather hire someone with Xamarin background with no Java experience for an Android java role, than someone with no mobile dev experience, but lots of Java backend experience.
My first mobile development experience was a moderately complex Android project on an app that was used by tens of millions of people daily. There was no ramp-up - I had zero prior experience before signing up for this project, never even played around with any mobile development before and I had never professionally programmed in Java - and I was the sole engineer working on both mobile and backend. It was a little painful but everything shipped on time.
Your last sentence would seem to contradict the entire body of your comment. You describe mobile development as a fundamentally different thing, but then your first gig was to work on a large project, the result of which was shipping on time at the cost of a “little” pain.
That is a great point. I’ve also had the experience of working for a short time on a system, knowing that I would hate for it to become a regular part of my work. So hiring someone with experience is one way to mitigate staff turnover from undesirable tasks.
If the two candidates are perfectly identical on every other criteria, sure. But that's not the case, the point is that most other criteria are more important that the specific tech experience when dealing with competent people.
> Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?
If you have web experience, I'm not too worried about your productivity on Android or iOS. If I'm hiring for Django and you have experience with any of Symfony, Rails, Spring, .Net or NestJS, the tech expertise is the least of my concerns.
Edit: and I forgot to mention the biggest UI failure I made when I did mobile development a decade ago on WinCE ruggedized devices. I didn’t even think about actually taking the device out into the sun, where all of the field service techs would be working, and seeing how the screen, colors and contrast looked.
That’s kind of my point. Learning a language is easy and for the most part useless without knowing the frameworks and architectural best practices.
I don’t know what Android has, but iOS has built in frameworks for handling syncing. If an Android developer didn’t know all of the built in frameworks available to them and re-invented the wheel, that would also be a waste of money.
Any company that would hire me as a modern “mobile developer” would be absolutely foolish. I have only written 30 lines of Java my entire 20 year professional career, never written a line of Swift or Objective C. Why would they hire me over someone with relevant experience as a developer? If they want to hire me as a team lead/architect and then find mobile developers, I would know what to look for.
It's a much more efficient system for the programmer to own UI/UX and for testing and QA to simply verify, in contrast to the programmer doing whatever and leaving the full responsibility for UI/UX to the testing & QA cycle.
A lot of problems in life can simply be avoided, and this is no different. If you're getting that tiny last bit of differentiation because you have 90% market share and can afford to hire people to solve that exact specific problem, then great. But that isn't most places. Most apps would do better to avoid the problem entirely and just say "No Internet Connection" if there's no WiFi or 3G. On top of that, you get mobile developers who swore to God they solved this problem, but guess what - they actually have no idea what they are doing. They think it works, but it doesn't, because it's actually a database concurrency control problem. This - https://www.postgresql.org/docs/current/mvcc-intro.html - is what I expect from someone who claims to have "solved the problem", not some "algorithm" they invented.
So I don't buy it, and I don't buy the business need for it unless it's an app specifically made for disconnected use. Unfortunately it sounds like one of those things people do to make themselves feel important or smart (no nice way to put it; I see it as bad as someone who invents their own encryption "algorithm" without realizing how ridiculous that is).
In short I would say don't do it. And if someone does it better be a real business requirement.
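To put one concrete technique behind the concurrency-control point: here's a minimal sketch of optimistic concurrency with a version check, the standard guard against one offline client silently overwriting another's write. The names (`applyUpdate`, `store`) are hypothetical; a real system would run this inside a database transaction.

```javascript
// Reject a write whose base version no longer matches the stored row,
// instead of letting last-writer-wins destroy data.
function applyUpdate(store, update) {
  const current = store.get(update.id);
  if (current && current.version !== update.baseVersion) {
    // The row changed since this client last read it: surface the
    // conflict rather than silently overwriting someone else's data.
    return { ok: false, conflict: current };
  }
  const nextVersion = (current ? current.version : 0) + 1;
  store.set(update.id, { ...update.data, version: nextVersion });
  return { ok: true };
}

const store = new Map();
store.set('job-1', { status: 'open', version: 1 });

// Two offline clients both read version 1, then sync back:
const a = applyUpdate(store, { id: 'job-1', baseVersion: 1, data: { status: 'done' } });
const b = applyUpdate(store, { id: 'job-1', baseVersion: 1, data: { status: 'cancelled' } });

console.log(a.ok, b.ok); // true false - the second write is flagged as a conflict
```

Note this only *detects* the conflict; deciding what to do with it (reject, merge, escalate to a human) is the genuinely hard part being argued about in this thread.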
Enterprise mobile apps aren’t about the apps you download from the App Store. Usually they are distributed using an on-site mobile device management system.
1st use case: I worked for a company that wrote field service applications for ruggedized Windows mobile devices. Some had cellular, some had WiFi, and some had neither - you had to actually dock the device. The field service techs had to have all of the information they needed on the device to do service calls, including routes, whether or not they had connectivity. They would record the information and it would sync back to the server whenever they had a connection.
2nd use case: worked for a company that wrote software for railroad car repair billing. Repairs are governed by Raillinc (https://www.railinc.com/rportal/documents/18/260737/CRB_Proc...) all of the rules and audits had to be on the device and the record of the repair had to be available whether or not they had connectivity. It had to sync back with the server whenever a connection was available.
3rd case: software for doctors. Hospitals are notorious for having poor connections.
4th case: home health care nurses had to record lots of information for Medicare billing. Again you can’t count on having mobile connections.
My point is that the problem you think you solved, you didn't, and it will break under dozens of scenarios. Maybe the clients are happy and they think it works, but you just haven't encountered the case where data goes missing or gets overwritten.
In other words, what I am saying is that it is wrong, and it is hard to know it is wrong unless you attack it directly. The word "enterprise" is a euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an enterprise mobile developer over a B2C mobile developer just because of the word enterprise.
Not everything that's demanded should be built - that kind of thinking leads to the 737 Max. The word "sync" has implications way beyond the concerns of a mobile developer.
So I call bullshit; the fact the industry does it, that everyone does it, that you consider "real" mobile devs to require it, that customers want it doesn't mean it is a good idea or that it's mathematically or scientifically sound. It may cover most cases and nobody may notice the problems except once in a blue moon but that doesn't make it right because operational systems need full data integrity.
The correct way to handle such a request is not to "sync" but to collect data, push it to the backend, and let the backend sort out the mess. Not "sync" by any stretch of the imagination, no matter what cottage industry or cult beliefs have been born of it.
And yes, “the problem” we solved - a mobile app that could route field technicians dynamically at the level of quality we needed - we did solve.
> The word "enterprise" is a euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an Enterprise mobile developer over a B2C mobile developer just because of the word enterprise.
Again this comes from someone who thinks they have experience versus someone who does have experience. Did you read the link I posted about the industry required rules for repairing railway cars? That isn’t even the entire regulation. If the typical B2C app doesn’t work, oh well. For the railroad industry, if you don’t submit your railcar repair just right - it gets rejected either by the interchange or the customer and you can only submit your invoices and rebuttals once per month.
How well does “one-way server syncing” work when you’re a field tech doing routes and the customer calls customer service and cancels one of your routes while you’re in the truck? How well does it work when your back end system needs to calculate where each truck is on the road and re-assign routes on the fly? How well does it work when one tech needs a part and they need to know where the parts are, based on which other techs have already been to the warehouse and now have the part? But wait - they went to the customer’s house and found that they don’t need the part at all, and it’s available on a truck a mile away? All of this involves dynamic two-way syncing...
Again, the difference between someone who has real world experience and someone who thinks that because their Twitter app doesn’t need to work in the subway nothing does.
What you mention is very dangerous to the data. Take the medical app example. Suppose there's an app to update a chart that doctors carry around, and five doctors and/or nurses are working on the patient. Whose prescription or orders do you take? It gets worse - there might be dependencies between the orders; orders might countermand other orders, or be in response to orders which may or may not exist yet. It is not a problem that any algorithm or programming can solve, because the whole point is to use the experience and skill of the doctors, which is being blindly ignored in favor of some process the submitting doctors may or may not even be aware of. Similar problems would appear for any of the examples you mentioned if you dug hard enough.
As for submission, you can simply ban it unless there's an active Internet connection. The 737 Max is also "real world experience": Boeing panicked at Airbus, and instead of going through a ten-year, ten-billion-dollar design process for a new plane, they surrendered to market realities at the cost of lives. The fact that "enterprise" carries onerous business or even legal requirements demanding technical sacrifice doesn't make it any less technically wrong. If asked to build sync on the client side, I would make it as simple and straightforward as possible and assume nothing.
I suppose so long as it doesn't cost lives or ruin people, I don't particularly care if you value handling data on the client in this way as a qualification for "enterprise" mobile developer. As long as it's "good enough" to meet the requirement, great. But it doesn't mean I like it, and it doesn't mean one should ignore technical flaws. Unless it's ACID, you don't guarantee anything; it's just a feel-good (and possibly achievable in a much simpler way). For all the scenarios you mentioned I can mention another half dozen, or even a very simple one: two people with the same seniority making exactly the same change to the same record. Then your system tosses one or the other, or even merges them - in other words, you dive into expert systems, NOT anything to do with "syncing".
Experience is important but there's a theoretical foundation to everything and it's wrong to expect an offline node in a distributed network to act as a source of truth for any period of time. Sorry.
Just look for people who are good at handling split data updates and ownership. There are a lot of people working on those kinds of issues on the backend; I really doubt you'll find more mobile devs with those skills than backend devs.
Lol we are in such a bubble.
This company has an app and in it you can download the videos and watch them offline, just like Udemy.
That’s great, I have a limited amount of data on my plan.
And better yet, I can watch these videos when I am completely offline, for example on the 30 hour connecting series of flights I went on recently. Except... whereas the Udemy app actually works completely offline, this other app needs internet access in order for the “my courses” tab to work.
You still have access to the videos through the “downloads” tab. But there they are not organized neatly. So I decided to do other things than to look at any of the videos.
Also, a lot of apps are bad at properly syncing data. For example I think neither Udemy nor this other one properly syncs the course progress data. Even when they do have a connection.
Unfortunately, the more important criteria are harder to assess, and hiring in the real world very heavily weights the easy-to-assess bits whether or not they are actually important.
But to your example, I’ve seen developers who couldn’t adjust to a rapid release cycle because they were used to big design up front. Developing software when you don’t have all of the requirements for the next year is a completely different mindset.
Even in the comments on this post, I see people who aren’t willing to actually talk to the customer to decide what they should work on.
And you kind of proved my point....
If you don’t know the wheel exists, you don’t know you’re reinventing it.
There's a big difference between not having experience with the <framework_du_jour>, and not knowing foundational technologies. E.g. there's a good chance Jeff Dean doesn't have much experience with most of AWS technologies, but there's no reason to believe it would take him more than a few weeks to get up-to-speed with them if he'd really need to. Not on the "expert" level, mind you - but enough to not make big mistakes.
I'd argue a similar thing is true for tech stacks. If there's some correlation between knowledge of all the fiddly bits of C++ and ability to write clean, performant systems-level code, I've yet to see it.
It’s especially nice when you realize that as soon as you’ve completed all the non-negotiable features, a bunch of other things magically becomes non-negotiable.
They might not know the right libraries to use, but would probably know there are libraries, have a good concept of application design in general, and know how to ask the right questions. And we're hoping to hire someone for years.
I’ve seen people jump from Java to C#. You can see the difference in their coding style: not taking advantage of the features of the language, reinventing the wheel because they didn’t know there were popular packages that would do it for them, creating horribly inefficient queries and practices using EF, etc.
I started with Object Pascal on Mac OS 8/9 which forced you to do a lot of low level tasks like deal with Handles and suffer through a cooperative multithreaded OS. Rewriting the network layer from AppleTalk to TCP/IP was closer to systems programming in C than you might think.
I have also been paid to write C programs, Windows application pre .Net in Visual C++ and post in C# and VB, written Java and C# websites, and most recently Angular SPA’s. Add to that a few random oddities like XSLT.
PS: I even turned down working on Android, but could have made that jump.
This is really an example of hiring someone who has experience and hiring someone who thinks “it’s easy and just like what I did before”.
Then I could get into all of the old netops guys who think AWS is just like what they did on prem and end up costing the company more...
Website backends often have significant dependence on other services that can be down. Further standalone apps can have zero network dependence or a lot. So having written both, and similar code for each, I don’t feel the networking side is all that different.
If anything, networking is probably the largest similarity between them.
PS: I did some iOS development in my spare time even worked with some old J2ME, so I assume Android is fairly similar.
That's changing with the advent of progressive web apps. It's possible to write web apps now that are robust in the face of network problems, e.g., the web app renders and is functional even without a connection.
That's why code reviews are a mentoring opportunity. Help your colleague level up.
(Also there should be team dialogue about how to solve some problems more efficiently, I mean, if I was unsure of something I would go and ask colleagues for some guidance, or you would go and do some research on your own)
I’m not saying this is always the right course, and of course this doesn’t scale to larger companies, but my first project at my current company, a week in, was to develop a feature from scratch. It ended up involving coding an API for the front-end developers; writing an ETL process that used both Redshift (AWS's OLAP database) and MySQL; designing the schemas; configuring the AWS resources with CloudFormation; setting up queues, messages, and lambdas; dealing with the vendor we were integrating with; and learning the business vertical. How much longer would it have taken if, instead of already knowing their stack (C# WebAPI, MySQL, and AWS), I came from a Java/Mongo/GCP background?
They needed someone useful now to get a feature out that they wanted to charge customers for.
Because a stack defined programmer is a limited programmer...
If you started your career learning Java 20 years ago and you kept up with the latest trends of Java, you could still find a job now. The same is true for C# around 15 years ago.
I’ve since leveled up on the nice to haves except for React, I refuse to jump on the $cool_kids bandwagon of front end development. Especially seeing that everything else I listed, pays more, and doesn’t change as often.
This isn’t directed toward you, just a general comment.
After working in a dev agency that took on outside Rails projects several times while I was there, my conclusion is that Rails (not the only way to write even Web-focused Ruby, and not the only one I've used, but the only one that'll score you any points for hiring) is a pretty bad framework for any project that will have multiple teams on it over its lifetime, absent heroic technical direction & testing efforts that I've never actually seen in the wild—probably because young or outsourced teams picking Rails are doing it to move fast, over all other concerns.
You can take on or resurrect an average outside or old Rails project. It's just slow and expensive.
Too much magic, too much room for doing things some way that the next person will never have seen before, too little grep-ability, too hard for your tools to help you.
We found it to be easier to just rewrite the project when moving between major versions (Rails 4 to Rails 5 for example). Shoehorning old functionality into a newer Rails just made this inconsistent.
Depends on how you define rare. Exceedingly uncommon at the very least. *stares at a pile of 150 resumes* Ruby is a language languishing from being embedded in a few niches, having poor performance, and otherwise being unremarkable, imo. Ruby was made with the idea that picking it up would be easy (which it is), leading to less incentive to learn it.
Putting Ruby in the job description gives you applicants with Ruby experience, which is why the blog post was (and is) impractical. Signaling works.
I get that that's not what you want, you want somebody to close tickets, but that's not Ruby's fault. And most of the people I know still working in Ruby are at a level where that kind of grunt work just isn't worth their time; most others have, of course, moved on.
If you spent your whole career writing code in Python and you wanted your next job to be on iOS, my concern wouldn't be whether you can learn Swift in a short time. My concern would start with something as simple as how you would cache data when the network connection times out. Or whether you can manage your data models in a way that maintains 60fps scrolling, something iOS engineers pride themselves on achieving. Of course all of this can be learned, but my point is that it's more than the language.
I do agree that most startups will probably restrict themselves to hiring in the same stack, as their tolerance for learning curves will be extremely short.
However, I feel that many employers are used to engineers who don’t “get business” and so they just want somebody to plunk down code. This leads them to “what kind of dumb cog/stack are you?” questions as they are already assuming they won’t be able to hire a truly “full stack” engineer who understands the revenues and costs associated with their work and how they impact corporate strategy etc.
Sometimes this is good. It can bring in new/interesting/helpful ways of thinking about problems to come up with better solutions. Sometimes it's bad because anything that's a little jarring or unusual impedes support and maintenance by "native programmers" in the same way a heavy accent can impede communication.
I think this is only true if what you're working on is simple, like crud apps.
It takes longer to be intimately familiar with a language's concurrency model, pros and cons of major libraries and frameworks and the ability to make such decisions quickly and without learning hurdles, etc.
"Sure, but can you do python"
Well, I can definitely learn it. But can I do it right there, right now? I've never really worked in Python, so no.
And so... no.
It's also kind of silly in a lot of cases too. It always feels like "Oh I see you do Angular... but could you _really_ be capable of developing in React? I don't know..."
Completely absurd. Yet this attitude seems to dominate.
I try to avoid companies that hire in this manner; it shows a lack of vision and planning. It's unlikely that a given developer will be working on the same stack in five years; the stack will have evolved or the developer will have moved on.
Second, startups don't have 6-12 months.
- Don't apply to 50+ companies at one go thinking only a % of them will call you. Choose ~5 companies, do a lot of homework about their business & write to key people at these places telling in ONE paragraph what you can do for their business. If it fails, choose next ~5 and so on.
- Other things being equal, choose team over company. E.g., working on a core technology or customer-facing team at a less-known growing business is much better than working on some internal tools team at Google. FAANG carries some brand value for future recruiters and VCs, but you should really optimize for learning rate early in your career.
This is some of the best advice for people starting their careers. This comment should be at the top of this thread!
I’ve tried this strategy of targeting an insider in my network and pitching myself through him. Several times. It has never worked for me. Ultimately it ends up at “Whelll, nice talking to you! Next step is to apply for this job id online. Good luck!!” What does work is if the company initiates contact; THEN you go through your network to gather information about the role and hiring manager, and so on. Having an insider pushing for you after the company has already expressed interest has been a lot more reliable for me.
I think it’s good advice. Although the points you make are generally true too.
Sometimes you need to just get a human's eyeballs on your resume.
* Not giving salary expectations upfront leads to a waste of time if the company's expectations are below yours
* It's not really relevant how bad many other programmers there are since you'll tend to look at switching to "better" companies, not worse.
* This is a bit too negative on equity grants for companies with strong product market fit (mostly article just hasn't aged well with companies staying private for so long now)
Do you have an example of this? What can you possibly do for their business?
Do key people respond to random solicitations for a job? Don't you still have to go through their original Leetcode interview process? I am really not seeing what the advantage of this approach is.
OP's advice is wishful thinking. I highly doubt they themselves took that approach.
This is going to be an extremely cynical take but most software jobs (>80%) are basically the same. That goes for the companies as well, we aren't as unique as we think; and the companies that are unique or doing unique work aren't exactly having trouble finding employees (FAANG vs nearly everyone else).
If you want employment as a beginner, apply to as many postings as possible. Only do research on companies once they want to talk to you; anything else is a waste of time. It also takes very little time to fill out job applications (outside of obnoxious companies that ask lots of behavioral questions); it should take less than 10 minutes per application.
IDK how to feel about cover letters; every company I've worked at (startups, a national ISP, massive insurance companies) has stated they never read the cover letters sent in. They just want to make sure candidates have all the keywords on their resume before even talking (this part is largely automated away).
I have a basic cover letter that explains what I'm doing at my current job and how I'd like to work at $company doing $unique_stuff. Basically my cover letter is 90% the same between job apps, but I change the intro paragraph to match the title, company, and job description.
But as a beginner or moving to a new city where you know no one, apply to everything everywhere. It's a numbers game and even as you progress in your career, you may not command enough talent to target specific companies.
The trick is to treat finding your first job as a search problem, not a dice roll. You’re searching for employers that need someone with your skills and experience, and those same employers are also searching for you.
The asterisk is that I think this “5 company” approach is meant to be aimed at companies that don’t have the resources to do very “active” searching. A rule of thumb: if the company has a specialized university recruiter, they are probably too “active” for the list, if you’re a new grad with no intern experience.
You don’t find these companies on job boards or LinkedIn. You find them by “searching”, like an investor. You can begin by looking for local VCs, then look at the portfolios of those VCs, then at the LinkedIn history of people who work at those portfolio companies. By now you will probably have seen 20 interesting companies, and you can proceed to whittle it down to 5 small companies that will already know you have the requisite energy and enthusiasm, since you went through the real effort of finding and choosing them.
TLDR: yes, but not if your five spell out FAANG.
With 50-company approach you increase the absolute number of first callbacks you get (hence everyone thinks it's optimal). With 5-company approach you increase the odds of actually getting through to a single job offer. Think about it.
You haven't AT ALL explained why your "~5 companies approach" is more likely to get you an offer.
It's not. That sort of approach would be less likely to get you an offer anywhere that I've ever worked.
999 times out of 1000 you don't have a clue about a company's current business needs just by looking at public information. Even if you had insider information, a random insider probably doesn't know the current business needs of anyone outside of their day-to-day job. And even if you did know the company's current business needs, unsolicited job applications from strangers to people who aren't recruiters mostly go straight into the trash bin. If I personally received an unsolicited job application from a stranger, I'd probably just ignore it. The most I'd ever do is reply telling them to apply via our career site like everyone else.
Plus what exactly would you say as a "run of the mill beginner" for "what you can do for their business" that would catch anyone's attention that a quality resume and cover letter submitted though the job portal wouldn't?
The ~5 companies approach tries to optimize P(ic) and P(si) at the expense of #apps.
The 50 approach optimizes only #apps because it assumes P(ic) is low, and disregards P(si).
Especially P(si) is very important because the interview's purpose is to determine that you're a good match for them. If they're just another company you spammed an application out to, that becomes very obvious.
The ~5 companies approach forces you to have already selected them because you think there's a solid business case for them to hire you. And you can make that case during both your initial contact and the interview, thus maximizing P(ic) and P(si).
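The trade-off between the two approaches can be made concrete with a back-of-the-envelope expected-value calculation. The probabilities below are made-up illustrative assumptions, not data; the point is only that high per-application odds can outweigh a 10x difference in application volume:

```python
# Toy model: expected offers = #apps * P(initial callback) * P(successful interview).
# All probability values here are hypothetical assumptions for illustration.

def expected_offers(n_apps, p_callback, p_interview):
    """Expected number of offers under independent per-application odds."""
    return n_apps * p_callback * p_interview

# Spray-and-pray: many generic applications, low per-app odds.
spray = expected_offers(50, p_callback=0.04, p_interview=0.10)

# Targeted: few tailored applications, much higher per-app odds.
targeted = expected_offers(5, p_callback=0.40, p_interview=0.50)

print(f"spray-and-pray: {spray:.2f} expected offers")   # → 0.20
print(f"targeted:       {targeted:.2f} expected offers") # → 1.00
```

Under these (assumed) numbers, five well-researched applications beat fifty generic ones; the real argument in the thread is over whether tailoring can actually move P(ic) and P(si) that much.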
> 999 times out of 1000 you don't have a clue to a company's current business needs just by looking at public information
I'd say it's more like 9/10, but that absolutely makes the case for the ~5 company approach. You're filtering out the ones that have a poor recruiting process.
It takes a lot of time to put together a good application and you have to focus on companies that will not only be responsive but follow through all the way to the hiring decision.
So focusing on companies that make it clear what they do and what the position entails will tend to maximize P(ic) and P(si).
I was once on a team that maintained a SaaS application that was in some sort of bizarre super-position of "cost center" and "profit center" a few years back.
The company technically sold the app and made money from it, but they also tended to give it away almost for free as long as the buyer was going to be paying lots of money for the company's $FLAGSHIP_BUSINESS_SERVICE that integrated with the app. (Think of it like a free mobile game with micro-transactions.)
The primary way customers interacted with $FLAGSHIP_BUSINESS_SERVICE (and generated micro-transaction revenue) was through our application. However because the application wasn't what the customer was actually paying for, the executives had an impossible time deciding whether to treat the app like a cost or a profit center.
Whenever the app was working fine and everybody was happy the executives were convinced it was a cost center and looked down on it. Whenever it wasn't working fine and companies began to get angry and threaten to take their business elsewhere suddenly the executives would realize that without the SaaS application they wouldn't be able to sell more $FLAGSHIP_BUSINESS_SERVICE.
Long story short, it was a strange and frustrating experience in many ways.
This is as good a place as any to put this:
Peter Drucker originally coined the term "profit center" around 1945. He later recanted, calling it "one of the biggest mistakes I have made", and asserted that there are only cost centers within a business: "the only profit center is a customer whose cheque hasn’t bounced."
I used to think that the team building the product was a profit center. The more time we spend solving customer problems, the more revenue the company generates.
The reality is they view engineering teams like assembly line workers. The revenue was generated by the sales team and now they come to the engineers to get the bad news of how much that revenue is going to cost them.
Of course, the problem is that things are very interrelated at a company. Maybe you can't hire good developers if you don't have good HR people to recruit them. If you have bad or not enough cleaning staff, maybe none of your "profit center" employees are actually willing to work for you because your office is filthy. Maybe the event planning staff do really good parties that help with retention.
I guess I'm inclined to agree with you that there's no clear, workable definition of "cost center" or "profit center", but there are certainly areas of your company that generate more ROI than others, and I think that's kinda the point of this (crude) way of thinking about it.
Still, I'd like to point out a gap in what you said: you talk about the kind of increase in revenue you'd see if such and such people did a better job, and the kind of ROI you'd get from each department.
But the question is, an increase relative to what, or an ROI relative to what?
Let's take the billing department. If you made sales for $10 million, and your billing department works okay, then you would have a revenue of $10 million. If you hired the best billing experts on earth, you still would get just $10 million. So that might make you look at the billing department as a cost center rather than a profit center.
But if the billing department fucked up royally, either by not charging a customer, or by overcharging them and hurting the relationship, or by breaking obscure laws and getting the company in legal trouble, then you could be out a lot more than $10 million.
So in effect, you could view the billing department's ROI as "how big a disaster they can prevent, and how good they are at preventing it." That kind of thinking makes sense for any department in the company.
That's the reason why I don't see the logic behind branding some departments as profit centers and some as cost centers.
However, close to the top of the pyramid is where the money is at. I think that's more important than whether it's core business or whatever.
If the balance of a department is positive then it's a profit center. It's a matter of income attribution inside a company and can be manipulated by the people in charge.
I'll say it again, I'll be happy to see an objective definition of profit centers and cost centers.
A lot of the confusion in this thread comes from conflating necessary functions with profit centers. Billing and compliance are both necessary. Done poorly, they can cost a lot of money. Done perfectly, they can never increase the actual income of the company. That is why sales and product development are traditionally the only parts of the business identified as profit centers. It doesn’t mean the others aren’t important, just that they have different goals. Profit centers look to expand the business while cost centers look to increase efficiency through optimization.
They have different value during the life of a company. During growth, profit centers are the most valuable source of effort while for a mature company that has saturated its market cost centers are most valuable to focus on. The advice to stick to profit centers is somewhat equivalent to sticking to growth which nearly always has a higher return than optimization.
Cost centers on the other hand are indispensable, but evaluating their impact on the bottom line is hard to quantify and therefore subjective.
Again, it’s not fair to the invaluable work done by the cost centers, but that’s what it is.
If you double the number of salesmen, you can increase your income right now. Firing all the salesmen would drop the income to zero.
If you double the number of developers, there is no difference during the following month or two. If you fire all the developers, you can still continue selling the product for a few months, maybe years.
As a manager, you may be tempted to increase the short-term profit, grab your bonus, and get promoted to somewhere else, so that someone else will have to deal with the long-term impact.
It's a little tongue in cheek, but I think it illustrates a real perspective: improving a product doesn't necessarily extract more revenue from existing customers. Selling more copies does extract more revenue from existing products, though.
We've since added a dedicated sales person, but we still get a significant amount of sales this way.
Improving efficiency in marketing spend can bring in millions in new business value.
If you can do that, it will definitely be noticed by the business.
Try hiring yoga trainers for the IT and Procurement departments and see how quickly the request is rejected.
Yoga classes for employees do not.
Marketing brings in more revenue if done efficiently, making it a profit center. It is no different than sales.
Marketing spend is usually fixed in costs and the goal of efficiency is to acquire more customers for the budget.
Most engineers dismiss this part of the business, but if you can do it successfully, businesses will pump more money into it.
Early in my career I worked with a few folks who had PhD after their names. I don’t have a PhD, so I was completely unaware of what that really meant and how to value it. Almost exclusively they were pompous, sucky programmers, DBAs, project managers, and other roles common in profit-center tech companies making software.
I thought PhD was a signal for dummy based on my super small sample of 20 or so people.
I eventually got to work with a ton of awesome PhDs and realized that I was wrong and scientists working in their field leveraging their life passion of study is very different than someone having a PhD and working outside their element.
I think this is the basis for how many people view programmers. They just see the person who cranks out a billion lines of SharePoint and costs them a lot and don’t even get to experience a world where a programmer creatively removes the need for a billion lines of SharePoint.
So there’s different kinds of programmers. My current idea is to call myself a programmer and describe my roles and outputs in a way that people can assess if that’s something that helps them or not.
I generally don’t think labels and titles mean too much by themself and actively want people who think that to avoid or misinterpret my profile as I probably don’t want to create something with them.
Well, what kind of advice is that? You should just blindly follow what everyone else does, regardless of whether you like it or not? That's just…

Burnout is a thing because people who want to do something radically different convince themselves that their only option is to work for a profit center, or something equally soul-sucking, and don't use their brains to find a worthy alternative and make it work.

There is nothing to be gained by limiting yourself to what seems like "how things are" and trying to do what it looks like everyone else is doing. That way you can only hurt your prospects for personal development and progress.
>> Co-workers and bosses are not usually your friends
Oh wait. Now _that_ is solid advice. That is 100% true.
The funny thing is that calling oneself anything except X programmer has become pretty common. In fact, with more and more money flowing into programming, there's no shortage of bullshit artists and self-promoters.
I know this is a minor statement (and not really important to the advice otherwise), but I liked this discussion that there can be various reasons why good candidates would struggle to find a job:
The best dev I know didn't go to college (he tried to be a traffic controller when other people would have been at university). He's exactly the kind of person who would be doing "elliptic curve partial nonce bias attacks" over the weekend.
I've seen him have to explain stuff like endianness to people with masters degrees.
I don't know him well enough to ask, but I think he is stuck working for us because 80% of jobs are off-limits without a degree.
Oddly, of all the jobs I've gotten, most have been via that route. I've never had recruiters do anything but waste my time, and trying to network has always seemed to land me among people who are looking for a job as well.
WRT "networking", I agree if by networking you're referring to "networking events" which, as you say, probably tend to be attended by people passing out business cards and looking for jobs. But every job I've had since my first one out of school has come through emailing people I've known professionally at a company I've been interested in working for.
BTW, there are people who genuinely “Engineer” software. They are rare, but they exist, especially at the FAANGs. There are lots of people who have the word “Engineer” in their title who shouldn’t; a fine alternative is “Developer”.
What’s the difference? A developer is implementing a known solution in a specific domain. An engineer is dealing with a significantly unique problem space that has only been addressed by theory, if that. Engineers spend a lot less time programming than developers do, because their primary occupation is solving tough problems with their colleagues through documentation, RFCs, etc. They often are not the ones who implement their own ideas. Wait, isn’t that a Software Architect? No. Software Architects plan and document within a known solution space; they aren’t solving unknown problems.
Most people that I’ve worked with in Software have never worked with a Software Engineer before.
Sorry but 100 times no. Engineering is about understanding and using the laws of nature in scientific terms (mathematics). Strength of materials, thermodynamics, hydraulics, that's the stuff engineers study and do.
For me to call myself an engineer legally, I would have to have an Engineering degree. I don't have one, so I will not call myself that.
I am a Software Developer. I do the exact same things as the Software Engineer sitting in front of me. We work on the same types of problems on the same avionics project. The difference is that he has a PhD and I don't.
This is a very high standard for defining engineer and I'd guess most practicing engineers don't meet it. Sure, there's people bridging the gap between theory and practice, but most engineering effort goes into applying the same well-understood theory and practice to slightly different situations. I'd posit that your definition of software engineer is closer to what most people would call a research scientist.
Andrew Yang brought this up in his interview with Joe Rogan a few months back; how people in The Bay will basically say, "Yes, we're trying to automate away these jobs." And to some extent, we're trying to automate away jobs in the tech industry as well.
Look at the Sears (Willis) tower. That building was originally dedicated to all the manual processing needed by Sears Roebuck & Co to process store and catalog orders; logistics for the largest retailer in America .. the Amazon of the early 1900s. A few decades later and the tower was mostly rented out to other customers. Even when things were still going good for Sears, they were able to drastically reduce the number of people they needed using machines and automation.
Perhaps that's their way of making themselves feel better, when they say that everyone's doing it.
There's definitely ways in which being a programmer can be part of a rewarding career.
A good example of this is the acquihire phenomenon. BigCos often acquire small startups purely to hire the people working there. Half of the time, they shut down the startup and throw away the code. The objective is to hire smart people who are also domain experts in a particular problem.
If I’m a high rate programming consultant then I want my customers and potential customers to understand what I do, and it’s likely programming, business, consulting- in that order.
>How do I become better at negotiation? This could be a post in itself. Short version:
He misses the most important thing. The most powerful asset in a negotiation is knowing you can stand up from the table and leave.
Is this good for a career? Don't know, don't care. It's what I do and enjoy, and there are many slippery slopes to not coding, becoming rusty, and ending up on the M train by default.
I work on a team of 50 employees. For those 50 people, there are two managers. 1 out of 25 isn’t great odds of being accidentally promoted into management without your consent.
Your working style may be right for you; I'm not questioning that. But in a thread targeted at career advice for junior engineers, I thought I'd chime in from the other side. The chances of accidentally ending up as a manager are very slim. Don't destroy your career and miss out on interesting opportunities to avoid something that's not likely in the first place.
Where I work it's about 8/50.
But I don't have to worry about being a manager - because they only get hired from outside.
Firefighters don't call themselves ladder and hose operators. Those are just the tools they use. Their job is to fight fires so they call themselves firefighters.
There's no standard terminology and nearly every company has a slightly different definition of terms.
>How do you know what to code
Someone need something to be made or done, I help them by writing code.
>whether you should even be coding
I've been doing it for a long time
>if you aren’t willing to spend the time knowing the business and the customers?
That's precisely the reason I prefer code. I'm not that interested or even willing to spend time knowing the business and the customers.
Then, another developer who had been there for a decade asked me how I would go about writing “address validation routines” - It was a real world problem he was facing. I told him that I wouldn’t. That’s not the vertical the business was in. The best code is the code that you don’t have to write. Address validation software is a solved problem and there are plenty of third party CASS solutions and yes they cost money but being able to write address validation software is not the company’s competitive advantage or its differentiator in the market. It’s better to outsource it. That’s one less thing we would have to maintain and debug.
The developer wasn’t impressed. The CTO was.
I’ve spent my entire career asking questions and making sure we were building the right things.
My first job out of college over 20 years ago I was the sole developer on a project to write a data entry system that was used by a new department with a dozen new employees. I had to actually talk to our potential client to gather specs myself.
But this gets back to what OP was saying. Spending time asking questions == spending time in meetings. Not writing code.
I’m not advocating for never asking questions or suggesting better ways, just saying that I prefer to work where someone else handles the bulk of that so I can focus on what I specialize in.
But how to know what to solve... At least for my job, interpreting customer requests is key. The customer knows what they want, but they don't always realize what they want is not the best solution.
Often I can steer the customer to a vastly superior solution to what they initially had in mind. Quite often this superior solution can reuse existing code, sometimes entirely.
I live to build things and continuously fend off attempts to slide me into management, but I find actually programming things to be only half the part of the puzzle.
Humans are extremely bad at articulating what they want and will generally tell you the wrong thing up front because they haven't thought it through fully.
A day spent with a stakeholder carefully picking their brains and properly crafting a plan can be just as satisfying as a day of straight coding - sometimes more so.
(tbh this is probably also why I find UX work so interesting - getting a user to click on a button is considerably harder than making the button in the first place)
The important question would be "is this work useful," not "does this work involve capitalism."
However, I want to caution against one interpretation: When you focus your attention on projects to cut costs (and risk) and grow revenue, it is possible to put that as your top priority. It should be your second.
Your real top priority should be the health and stability of yourself and your family. Ideally, this is aligned with focusing on visibly-profitable work for your employer. This is often true, but not always.
Example: Suppose you are working at a company that takes code quality and automated testing seriously. This has real business value because it means that you are able to quickly execute projects which accelerate the sales team and unlock partnerships. So far, so good.
Then suppose there is an opportunity which involves taking over a codebase written by another company. The business case is strong—by taking several months to make UX improvements, you can significantly raise revenue. But, the codebase is in a poorly-documented PHP framework, has haphazardly inconsistent naming, and no automated tests. Should the company do the project? Maybe. Should you do the project? Maybe not.
Here’s what can happen:
Your team agrees that it's going to suck, but you're all in this together. You start on the project and find it much harder to make progress. Because of the new toolset and the lack of tests, you find it hard to maintain focus. You also lose the ability to create reasonable estimates, so your communication with internal stakeholders erodes, taking trust down with it. The number of noisy automated alerts also increases, further eating into your focus. You try to find time after work to work through a PHP book, but it's hard. The noise of your open office combines with your unfamiliarity with the toolset, and you feel useless making day-to-day progress. You talk to your manager and team lead about this. They care, but they are overworked, and it's not like they can change the open office anyway. Don't worry: the project will be over soon. Everyone is in the same boat.
At the end of several months, you need to fill in your semiannual self-evaluation. You choke; words aren't coming out. You're caught between knowing you need to brag about your accomplishments and not feeling like you have any. You're bad at lying. After spending two full days on your self-evaluation, you submit it mostly empty.
Soon, you are searching for a new job.
So yes, when thinking about how the company views engineering, look at revenue and costs. But before that, look at the things that you need in order to be effective. Keep your mise-en-place. Use tools that fit your brain. Learn to zealously advocate for what you need.
I find Go codebases more likely to be full of ad hoc undocumented stuff than PHP.
I guess it depends what you are used to.
This is going to be specific to each person. Some people find PHP way easier than ruby, python, or haskell. Different brains are different.
Yes yes yes.
And transferring between jobs is generally so much easier with OSS - if you can!
Oh my, I just now saw this follow up: https://pando.com/2014/03/22/revealed-apple-and-googles-wage...
> Confidential internal Google and Apple memos, buried within piles of court dockets and reviewed by PandoDaily, clearly show that what began as a secret cartel agreement between Apple’s Steve Jobs and Google’s Eric Schmidt to illegally fix the labor market for hi-tech workers, expanded within a few years to include companies ranging from Dell, IBM, eBay and Microsoft, to Comcast, Clear Channel, Dreamworks, and London-based public relations behemoth WPP.
So, anyway, yeah, don't be a chicken, be a fox. But better than being a fox, be a human being.
^ this is the other one, wherein the second line of the article states: 'As programmers...'
But the part about most programmers not being able to implement FizzBuzz is flatly false in my experience. I've worked at over half a dozen average companies (plus some government too!) in the Bay Area, and I've never once met an engineer who couldn't implement FizzBuzz. They can program FizzBuzz and much more complex things, far beyond what the job requires. Not only that, I've never even met an engineer candidate (I've interviewed dozens) who couldn't do FizzBuzz or other basic programming tasks. So I really don't know how you people are finding all these unqualified candidates.
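For anyone who hasn't seen it, FizzBuzz is the canonical screening exercise: walk the numbers 1 through n, printing "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python sketch (the function name and list-returning shape are my own choices, not part of any official spec):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print("\n".join(fizzbuzz(15)))
```

The whole point of the exercise is that it only tests whether a candidate can write a loop and a couple of conditionals, which is why claims that "most programmers" fail it are so surprising.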