Facts every web dev should know before they burn out and turn to painting (baldurbjarnason.com)
615 points by caseyross 40 days ago | 420 comments



The thing that burns out web developers is web development. The constant shift of technologies (just for the sake of it) and the nerfed-to-oblivion environment is hell. As a web developer, after you learn the basics (~5 years), there is no sense of which direction is "up". Everything feels like a sidestep. There is no feeling of permanence to your knowledge, unlike for a lawyer or a doctor. The knowledge feels like water pouring into a full container: as much flows out as flows in.

Switched to embedded systems 7 years ago to avoid the burnout. It is better, in my opinion. Funnily enough, there is a natural barrier to entry that keeps most programmers away - you have to understand how computers work, how operating systems work, and you have to be able to code in C/assembly. I have a lot of fun and have actually picked up electronics as a hobby, which will help me better myself professionally. I think there is enough here to keep me entertained until I retire.


> There is no feeling of permanence to your knowledge

I couldn't disagree more. Maybe it's my situation as a full-stack web developer that makes it different, but over the past 5 years my understanding of all the layers that a website interaction goes through (from mouse click to database and back) has only deepened, and none of it feels like it went out of date over that timespan.

Sure, one company may use technology X for the job another company uses Y for, but overall it feels more like different paint jobs on the same thing, with slightly different trade-offs, than an ever-changing barrage of completely new things.


Agreed. It's more like switching from latex to enamel paint, and then someone inventing a new type of paint that has some better properties, but some worse, and is situational.

Nothing is static for any discipline.

Doctors have a constant influx of new medicines and procedures. The doctor who operated on my wife had heard of a just-in-case tip that ended up saving my wife from full-blown, have-to-get-chemo cancer. Most doctors still don't do it that way, and it's an incredibly rare situation... We got super lucky that that doctor was on the ball.

And while it's not life-threatening, I see the same kind of thing in webdev constantly. New 'best practices' come into effect all the time because of security vulnerabilities, and new tools come into existence to make hard things easier.

Yup, those same tools sometimes make easy things harder, but that's the trade-off. You do not need to make the switch. But it has its benefits.


I agree: once you know the patterns and the foundations of the web, new tech is not that hard to adapt to. I agree that as a new dev it can feel like a bottomless pit, in the same way low-level programming feels to most new devs.

That said, web frontend has had a wild ride these past years. But it's really starting to stabilize now around a few core frameworks (React and Angular, for example). Sure, there are still forks and revamps, but they're not big, world-changing differences (like going from jQuery to React was).


Here are a few examples of knowledge becoming obsolete:

  - JSP, JSF and PrimeFaces in Java are essentially dead (server side and hybrid rendering technologies)
  - Vaadin and GWT are essentially dead (Java to JS transpilation of sorts)
  - AngularJS is essentially dead (Angular 2+ are way different)
  - jQuery is on its way out
  - class based React is on its way out
Each of those has a lot of internal complexity that I'd suggest is more than just a paint job. Of course, it's inevitable that technologies that didn't work out for one reason or another will be sunset and die out; at the same time, however, there is definitely churn.

For example, we don't know if something like Bulma or TailwindCSS will work out and will be around in their current form in 5 years. You could say the same about how state was managed in React: nowadays we have MobX and Redux, but also the Context API. You can say the same about most languages, approaches, frameworks and libraries - nothing is set in stone.

I'm not sure whether that's a bad thing or a good thing; I just personally hope that eventually the more stable approaches will survive and the developer experience will be all the better for it, as opposed to introducing more and more complexity to the point of rivaling that of JSF/PrimeFaces.

My personal approach is just vaguely along the lines of:

  - learn the fundamentals that are unlikely to change much (abstract concepts, architecture etc.)
  - learn the most common technologies, refresh knowledge every few years (HTML, CSS, JS)
  - learn the frameworks and libraries to the point where you feel vaguely competent with them (React, Angular, Vue, have a vague idea of how to do stuff with CSS libraries/frameworks like Bootstrap)
  - maybe pick up a few useful tricks here and there, if you have the time explore a new technology or two in non-prod projects every now and then


Tangential: I don't understand why class-based React is disliked. Hooks are complex magic and don't work like the rest of the language. Compared to that, a class component does exactly what you tell it to.


I actually agree with this and liked when you could see the full behaviour of a particular concern within a component, for example, its shouldComponentUpdate() method.

But now, with hooks, you might instead have 3 or 4 useState hooks, each of which could cause a component update. Worse yet, with useEffect hooks, you might run into the interdependent dependency array problem, which would cause a refresh loop.
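
To make that concrete, here is a minimal sketch of that footgun (the component and its state fields are made up for illustration): the effect both reads and replaces a piece of state that sits in its own dependency array, so every run creates a new object, React's Object.is check sees a change, and the effect schedules itself again.

  import { useEffect, useState } from "react";

  function SearchResults({ query }: { query: string }) {
    const [options, setOptions] = useState({ query: "", page: 1 });

    useEffect(() => {
      // Creates a brand-new object on every run...
      setOptions({ ...options, query, page: 1 });
    }, [query, options]); // ...and `options` in the deps closes the loop

    return <pre>{JSON.stringify(options)}</pre>;
  }

The usual fix is a functional update, setOptions(o => ({ ...o, query, page: 1 })), so that options can be dropped from the dependency array and the effect only runs when query changes.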

Now it feels like your components have become containers for smaller, loosely coupled units of code with their own behaviour and life cycles, all of which may result in lower coherency as well.

I actually voiced some of my concerns a while ago in a slightly satirical portion of my blog, called "Everything is Broken": https://blog.kronis.dev/everything%20is%20broken/modern-reac...


Once you grok hooks, they are just better.

With hooks, it is easier to create reusable state controllers than with class-based components. Components that use hooks are easier to read, understand and modify. Hooks are also easier to test and to annotate with TypeScript.

In a classical class-based component you would often have complicated code in componentDidUpdate, componentDidMount or getDerivedStateFromProps. The code ended up being complex, and the interaction between lifecycle methods and the constructor was hard to debug.
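
For instance, a reusable "state controller" as a custom hook might look like this toy sketch (names invented for illustration); the same logic in a class component would be spread across the constructor and lifecycle methods, while here it is one plain, testable function:

  import { ReactNode, useCallback, useState } from "react";

  // A tiny reusable state controller.
  function useToggle(initial = false) {
    const [on, setOn] = useState(initial);
    const toggle = useCallback(() => setOn(v => !v), []);
    return { on, toggle };
  }

  // Any component can reuse it, without render props or HOCs.
  function Details({ children }: { children: ReactNode }) {
    const { on, toggle } = useToggle();
    return (
      <div>
        <button onClick={toggle}>{on ? "Hide" : "Show"}</button>
        {on && children}
      </div>
    );
  }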


> Once you grok hooks, they are just better.

Disagreed. It's likely that there are factors at play here that we're not considering, such as personal preference or certain people enjoying different patterns more.

> Components that use hooks are easier to read, understand and modify.

Hard disagree. In my experience, hooks take a single concern within a component (whether it should update, or reacting to a certain event within it) and scatter the logic across multiple hooks in any non-trivial application, which makes the control flow harder to understand and lowers the overall coherency of the code.

> In classical class based component you would often have complicated code in componentDidUpdate, componentDidMount or getDerivedStateFromProps.

This feels like an indicator that you should split your components into smaller parts and compose those, as opposed to looking for other ways (which may or may not be better, because of the aforementioned differing preferences) to express that same level of complexity.

> Code ended being complex and interaction between lifecycle methods and constructor hard to debug.

I'd apply that very same criticism to hooks.


In my case I found that for most web dev jobs I needed to learn the libraries thoroughly or live in constant stress because I was always falling behind my expected velocity.

I find most frontend libs' code too convoluted to be able to understand what is happening just by taking a glance at the code.

In contrast, I've been doing backend in .NET since 2008. To catch up with the latest releases I need at most a couple of weekends a year, and probably less.


> I find most frontend libs' code too convoluted to be able to understand what is happening just by taking a glance at the code.

This is definitely a valid concern, especially when the authors of some frameworks attempt to be "clever", which unnecessarily increases the cognitive load that one has to deal with.

That said, it's also probably a matter of the abstractions that are used, for example, in Java for back end you can use any of the following: Spring, Spring Boot, Dropwizard, Quarkus, Helidon, Vert.x, JSP/JSF with something like PrimeFaces, Apache Struts, Play and others. Whenever you run into a framework that uses MVC or its own way of templating, you're stuck learning all of its specifics and idiosyncrasies, even if the end result that you want to achieve is the same.

Thus, I'd posit that if you're not the person making the choice of which framework or library to use (i.e. the ones that you're familiar with), on some level you've already lost.

If you're used to Angular, for example, but instead get thrown into a project with React, Redux and Tailwind CSS because someone wanted it on their resume, you'll both be less productive and will look less productive compared to everyone else.


I think it's hard to disagree with the parent on the frontend side. New frameworks every few months (and it has been like this for years), and even established ones keep changing in incompatible ways (Angular, React, Vue).

Backend is much more stable, with the exception of Node, which seems to be suffering a disenchantment, a bit of a reality check after the early euphoric years.

I am yet to be convinced that any of the top 3 frontend solutions (Angular, React, Vue) is worth the trouble.


> New frameworks every few months

React is going on 9 years.

> even established ones keep changing in incompatible ways (Angular, React, Vue)

React's API has been incredibly stable. There has been one major breaking change in the API (deprecation of the lifecycle methods) and even those are still supported.

These arguments are tiring. Think of something new.


What about hooks? That's a pretty significant change.

Even if class-based components are still supported, the community is moving to hooks as the standard, so you don't have much choice but to learn them.

Redux became popular and then has slowly been replaced by context and libraries like React Query.


You can learn hooks in a weekend and it's really the only major change in years.


What about Node has changed? It's been the same for several years, save for the introduction of some new (optional) APIs and support for new JavaScript language features.


I would say using it with TypeScript has become much more common.

There is also deno (https://deno.land/), from the same creator.


Well, it's not a web thing. All of software development is migrating to strict/expressive/implicit type systems. Web frontends just happened to move faster there because TypeScript has a great type system and is backwards compatible.

Embedded and systems programming aren't far behind, because of Rust. Everything else isn't migrating to Haskell-like languages nearly as quickly, but most software will get there eventually. The slowness is probably due more to bad market dynamics for the tools than to long-term issues.
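
For what it's worth, a tiny illustrative sketch of the expressiveness being referred to: a discriminated union lets the TypeScript compiler rule out illegal states, while plain JS remains valid TS (the backwards compatibility mentioned above).

  type Result =
    | { ok: true; value: string }
    | { ok: false; error: Error };

  function unwrap(r: Result): string {
    // The compiler narrows the union on the ok check, so
    // r.value is only reachable when it actually exists.
    if (r.ok) return r.value;
    throw r.error;
  }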


I just see node growing stronger adoption wise every day.


Similar route myself, though I've been in embedded longer. I do appreciate the relative stability of embedded, where in many ways I feel like we're still living in the 90s when you could realistically master the whole chain of tools and ideas as an individual. Things do change, of course, but at a much slower pace, and depth of understanding of Arm, C, various transports and their oddities, etc., is more important, but the underlying tools and ideas rarely shift in a major way.

You need a more fundamental understanding of how your MCU works (again, similar to any home PC in the 90s or early 00s), but once you learn those fundamentals they transfer very well, and knowledge has a high degree of transfer from one project and generation of devices to another.

I also find it highly satisfying to work on things you can actually touch or point to, versus spending 6-18 months of your life for something that just sits on a server somewhere as a cog in a short-lived system.

You do need a certain type of brain to enjoy this kind of low level development and HW design since there is some overlap and you often have to roll your own versions of common-seeming operations, and I suspect salaries are lower than in some shinier fields, but overall I've always appreciated the work I do, and the colleagues I've had in this field. The egos tend to be smaller, and the drama minimal since the people doing this kind of work tend to have been at it long enough to have some perspective on things.


>like we're still living in the 90s when you could realistically master the whole chain of tools and ideas as an individual.

Well, quite near the beginning of my career, in the 90s, I personally had the exact same problem as the OP. I started off learning the MacOSX API and C, then C++. Then I started looking at the Windows API - DirectX and COM were also new then (Win 95), MFC was just getting started as a direct response to OWL and the Borland tools, and VB 3.0 was starting to make big inroads into RAD dev. Then the straw that nearly broke my back was Delphi, which I have never used but which was wildly popular and a 'must learn'. By the time Java came out I had decided to stick to the Win32 API and COM and ignore the rest so I had a career focus, but a lot happened in the 90s and it was easy to be totally befuddled as to where to focus your energies.

Programming is hard, let's go shopping.


>MacOSX API

I think you mean what's now called classic Mac OS: System 7, Mac OS 8/9.

>Programming is hard, let's go shopping.

What you are describing is not programming but life and career choices, which are much, much harder in my opinion, and which most of us are sorely unprepared for.


Yes, you're right, System 7. I had a Performa 275 with a 68030 CPU and learned to program C & C++ with Metrowerks (a fantastic IDE for the time). I don't have that anymore but might still have a copy of Inside Macintosh somewhere.

>Programming is hard, let's go shopping.

Phrase borrowed from Jeff Atwood :) https://blog.codinghorror.com/programming-is-hard-lets-go-sh...

>What you are describing is not programming but life and career choices, which are much, much harder in my opinion, and which most of us are sorely unprepared for.

This is true. You can either simply bob along on the currents of fate and take whatever comes to you, or try and take control. At first I was just happy with a job, and luckily (all my working life, actually) there have been plenty of opportunities for my skills; later I chose more carefully. I'm sure that's the experience of plenty of others too.


Any suggestions for books / courses for a developer who wants to go into embedded development both professionally and as a hobby?


Update, since I should have mentioned HW as well.

If you're looking for something to play with at the HW level, it can be overwhelming to know what to start with. Stick to Arm Cortex-M. That will get you the furthest the fastest, IMHO. If I was to suggest one company -- even though they all have their strengths, weaknesses and niches, and I work regularly with several like NXP, ST, etc. -- I think Nordic Semiconductor does a very good job with their support forums, SDKs, tools, etc. BLE is also a very rewarding place to start since you can talk to your phone, laptop, etc., which is Nordic's niche.

A development board like the nRF5340 DK has a HW debugger on it out of the box (to program the MCU and debug programs), is reasonably priced, works cross platform, and packs a lot of processing and peripheral punch. Being based on the Cortex M33 it has a solid security foundation (TrustZone and TF-M), and works well with a first-class RTOS like Zephyr with open source networking and wireless stacks.

You'll find answers to common questions with this chip on their forums and online.

There are other options -- ST and NXP have many excellent MCUs and inexpensive dev boards -- but the nRF boards and the ecosystem around them make them a good choice for making a serious start at learning embedded, and Nordic is one of the only vendors who reliably answer questions on their support forums. The nRF53 dev board brings a lot of value as a serious learning platform if you're getting started.

Again ... just an opinion!


Like any niche, it's hard to know where to start, and it also depends on whether you are more interested in HW design or the firmware side of things. You need some knowledge of both since they overlap in many areas in embedded, but they are different paths.

Assuming you mean more the firmware side, the biggest thing to understand is that embedded is all about C. You'll absolutely want to learn the basics of C and properly understand pointers. A key part of C is understanding data types (signed, unsigned, float) and notations you rarely use in other fields, like hexadecimal, which is omnipresent in embedded. If you grew up learning C#, Node, etc., you likely don't properly appreciate these fundamental types, and you'll need to learn those fundamentals, but that will come with learning C.

For books, I like Jack Ganssle's "The Art of Designing Embedded Systems". He does a good job of laying a solid foundation for planning embedded projects. It's opinionated, but you could do worse than start with his ideas.

And start with a professionally maintained foundation for your projects. Arduino is good for some people, but it won't scale and won't give you the skills you need professionally, and scripting languages like MicroPython won't help you later in life. Use a language (C) and platform you can go to production with, such as Zephyr RTOS, Azure RTOS, FreeRTOS, etc. It's more work and harder up front, but the investment will pay dividends and you'll learn good habits from the start.


The comment above is great advice overall, but I break with it on the last paragraph. I think most people in the "I don't know C or electronics, but want to get into electronics and firmware" camp should start with Arduino (or clones).

Not because it's great technically, and not because the editor is great (frankly, it's awful). The reason I argue for starting there is that they've made the first 15-minute experience stupidly easy and convenient and, as a result, it's become wildly common and popular, and you can readily find Arduino-platformed examples for most of the basic electronics technologies. If you're the type who learns best when you can see glimmers of visible progress, Arduino gives you a smooth on-ramp.

You will need to wean yourself from that reliance/training wheels at some point, but I think it makes the first 2 months 20x easier, especially if you're trying to learn datatypes, bit-packing, pointers, memory management, analog electronics, digital electronics, communications protocols, in-circuit programming, and everything else (PCB design?) all at once. Break it up a little.


I certainly don't disagree, and I've written many an Arduino library, driver and tutorial myself.

As long as you eventually take the training wheels off, and know that Arduino isn't a path that leads to being able to create financially viable products - it's a first step.

It won't teach you certain good habits, but it will perhaps get you hooked and motivated, which is useful in itself, and does give you the satisfaction of making motors whirr and LEDs blink quickly.


I really, really like The Art of Programming Embedded Systems, but I think that (1) it's out of print now and (2) it has some dated advice (that was excellent at the time) that can send a modern programmer down the wrong path. OTOH, Jack has a mailing list: http://www.ganssle.com/tem/tem432.html that is current and very informative.

I agree that Arduino is not a good start if you intend to become a professional. Arduino works really well for the non-technical person who will never progress beyond the arduino ecosystem, and for the experienced embedded programmer who is well aware of its limitations. Beyond that, if your intent is to learn embedded systems professionally, pick up an ST Discovery development system (about $20 IIRC) and have at it. Although I worked for a company that standardized its embedded development on Nordic processors, I don't recommend them unless you need wireless in your project.


Replying to myself: ST Discovery only brings up Star Trek references. If you're looking for the dev kit, try searching for STM32 Discovery instead.


This comment appears in every HN thread about web development. Always coming from someone who clearly doesn’t understand the web deeply.

If you did you would realize things haven’t changed that drastically in 5+ years now. The new libraries and frameworks are small iterations that take a weekend to check out and pick up.

Once you have the deep knowledge you're set in this field. The problem most people have is that it can be less straightforward, with a much wider surface area, especially if you're doing stuff like React then React Native, and also trying to understand native libraries while also bundling code for both platforms.


> The new libraries and frameworks are small iterations that take a weekend to check out and pick up.

If this is the case, then I congratulate you on being a pretty productive and capable front-end developer!

However, that's definitely not the case for me and many other people: I'd say that you need at least a week to get comfortable with a technology - how to use it, the ways of misusing it, its footguns - and to get a deeper understanding of what you're actually doing. I've seen sites fail to work correctly because people attempted to use React hooks with overlapping dependency arrays, where one of the hook bodies calls a state mutation, which ends up in an endless loop and breaks the entire page.

If hooks indeed were such a small feature that could be reasonably understood in a few days, then we wouldn't see cases like that, nor would it be difficult to write your own hooks. You can say the same about the Context API, as well as the migration from class based components to the functional ones.

If that's the only set of technologies, then good. But I work in a company that does consulting, and occasionally I have to unravel messes in everything from React, to Angular, to Vue, even jQuery and AngularJS. If I want the lifestyle where I spend long evenings and weekends of my own time learning these technologies, then sure, but the company has neither the resources nor the patience to wait for days to weeks until I become productive in a project.

Is that a social issue rather than a technical one? Perhaps, but at the same time these kinds of factors cannot be ignored. Sure, the churn isn't too bad and hopefully has good outcomes in the end, but at the same time it definitely is there, and sometimes it definitely is detrimental from the point of view of just getting things done (e.g. AngularJS needing to be abandoned for Angular or something else - it should be better in the end, but as someone who has the rewrite of an app with thousands of hours sunk into it looming over me, I'm not overjoyed).


>If you did you would realize things haven’t changed that drastically in 5+ years now. The new libraries and frameworks are small iterations that take a weekend to check out and pick up.

I think the issue is the fact that the browser has no native, rich and efficient components like desktop toolkits have.

So you implement some cool grid view to showcase all the user's projects, but later some user creates many projects, and because Angular's performance is garbage you need to solve it now. You can do a shit solution and implement pagination, waste your time and implement a DataGridView like those in desktop toolkits that is actually smart and efficient, or maybe do the shittiest thing and install some random package that might give you what you need. The same problem would not exist in desktop and probably mobile toolkits.

Repeat same issues for all possible widgets that exist in a mature toolkit.

On the web you always spend a lot of time on the shittiest things instead of the cool part. A designer wants, say, a horizontal scrollbar that works with only the mouse (no Ctrl to press) -> install a library. A designer wants a dropdown with some extra styling -> install a library (or create an inferior dropdown - see the YT autocomplete one that gets stuck open all the time). Modals -> create your own system. The only widget that is not handicapped in the browser is the text input.

People will say how in the past on desktop you could drag and drop widgets and make the app; the reason that worked is that those widgets were powerful and efficient. Mobile devs can probably understand this point: you don't have to hunt for a framework or library to get a smart and efficient table widget.


>someone who clearly doesn’t understand the web deeply

Maybe this is the root of the problem. Should you need that deep knowledge in order to be proficient? I mean, the point of so many libraries and frameworks was to remove the need for a deep understanding of the underlying technology.

Perhaps the need to understand it deeply indicates that they've largely failed?


My theory is that most of these comments are coming from backend devs who rarely do frontend and then complain that the space has evolved since MooTools was hot shit.

React is going on 9 years, people.


> Once you have the deep knowledge you’re set in this field.

Examples of this deep knowledge?


The issue is that web development encapsulates too many different roles, each of which is a job in itself. The role has grown to encompass too many specialties:

- You need to be an expert* JavaScript developer with a strong understanding of computer science, at the same level as, say, a Java engineer.

- You need to be an expert on cross-browser, cross-device and cross-OS compatibility and performance of JavaScript, HTML and CSS.

- You need to be an expert in animations, transitions, SVG and canvas.

- You need to be an expert in analytics, statistics and presenting data.

- You need to be an expert in responsive design, accounting for a myriad of possible platforms, compatibilities and viewports, and for constantly evolving standards and best practices.

- You need to be an expert in accessibility and semantic HTML.

- You need to be an expert in working with various types of databases, APIs, local storage, caching, etc.

- You need to be an expert at test automation, writing unit and integration tests with increasingly high stakes.

- You also need to spend time on, and have a strong understanding of, ancillary issues such as package management, devops, CI, etc.

- You need to do all this while your work is almost entirely customer-facing (or stakeholder-facing) - for people who don't understand the constraints or don't appreciate the difficulties of what a problem may require.

And this is without talking about things like TypeScript and React (and the millions of other dependencies, libraries and hot new things) you have to deal with.

Just look at the article shared here and see how many of the subjects mentioned were not part of web dev at all, say, just 10 years ago.

5-10 years ago my job was:

- build websites with a bit of JavaScript DOM, Ajax, or jQuery UI interaction

Now my job is:

- same as above but with the added responsive aspects

- building single page applications

- building browser extensions

- building command line tools

- building react native mobile apps

- building electron apps for desktop

- writing tests

- building tooling and managing dependencies

- building internal libraries, style guides and associated documentation

*This is the expectation; I know the reality is not really the same.


> The role has grown to encompass too many specialties

The good news is that none of the successful lead web developers I have worked with meet these requirements. I feel like you're talking about a large team more than a single developer, and in many places where you say "expert," I would say "willing and able to tackle the subject to the extent required."

I think a better way to sum up web development, instead of making it sound like an impossible job, is to say there is virtually unlimited scope to make yourself better and more valuable at the job by expanding your technical skills.


Now if only people stopped saying "you need to be an expert" and said "you need to not be clueless at it" then it would be easily achievable over a few years.

The "expert" expression is abused by approximately the whole tech sector. No one meets those job requirements anymore, and yet the world (of web development) still moves on.


Exactly. People like the parent make themselves crazy thinking they need to be the expert at absolutely everything. You don't. Learn your fundamentals, and learn how to learn. All of these front-end frameworks, that's just tech. All of them do the same thing in slightly different ways. You'll work in a team anyway, someone will be good at something, someone will be good at another thing.


> -You need to be an expert* JavaScript developer with strong understanding of computer science at the same level as say a Java engineer.

I would put Java engineers in the "software engineering" category more than the "computer science", but that may be just an expression of my bias.


I think it's one of the most complex jobs; the only thing that helps is that technical expectations are generally low.


There's a certain churn to it, sure, but I wouldn't call all the development "sidesteps". I think it's just the scope of the changes that's staggering. You knew jQuery, gulp and Express and you were fine, but that's such a low, low bar if you take the whole of human achievement in CS. Web developers just re-experience the growth and invention of all the other branches. For example, you get the virtual DOM with React - an idea that's been in client-side devs' arsenal for quite a while. Webpack/esbuild are re-discovering compilers, etc.

It's all just a process of convergence in my eyes, and if I put it that way I can cope with it. Any new fancy library comes up - I check: OK, where have the ideas come from (language, environment, etc.)? Go read that and understand the original concepts, and then watch web dev slowly move in that direction.

To be honest, its pace is sometimes actually frustratingly slow, if anything. You can see some ideas that have been tested and you think they would be great, but it takes time for them to actually permeate into web dev.


Yeah. My current take on the shit coder jobs I've taken, compared to the rocking blue-collar jobs I've performed, is that when you're a coder, you're getting paid to learn.

Paid to learn. Does that sound noble? Maybe it is. It depends what you're learning, frankly. Learning to code for the very first time, and seeing the fruit of your imagination materialize in front of you? Great! Learning the idiosyncrasies of a system that was designed to dodge responsibility?


Learning during a meeting where you were introduced as the software expert even though you only took over the solution last week?


> The thing that burns out web developer is web development. The constant shift of technologies...

I have a friend who manages a dev team in a small webdev shop (~8 people total) and is about to quit because of the customers' delivery schedules. These customers are BIG-name (billion-dollar) companies, so the shop feels pressured to grind and get the site built, otherwise the customer will just pay some other desperate company that will. So it seems that on top of tech churn there are insane customer demands ruining the mental health of everyone involved.


Can you work remotely on embedded development? Giving up remote work is a huge drawback for me.


Sure. It is a little more difficult, though, since you need a little bit of additional equipment. It is not uncommon for people to be mailing each other oscilloscopes. :P

I'm thinking of investing a little bit of money to buy myself my own equipment, since I want to use it for fun as well as for work.


A logic analyser (Saleae Logic, etc.), a HW debugger (J-Link for Arm/RISC-V), a good-quality multimeter (spend the extra money here, don't get the $10-20 crap DMM off Amazon), and an entry-level 200 MHz oscilloscope (Rigol, etc.) should set you back somewhere from $1-1.5K, I guess, but that will give you 90% of what you'll need to debug most embedded MCUs (Arm Cortex-M, etc. - not Embedded Linux or Arm Cortex-A, which is a WHOLE other world).

That may or may not be a large amount of money for someone, but those tools can last you a good part of your career for the price of a low/mid range development laptop.


Anecdotally, I manage fine with just a multimeter for debugging my hobbyist projects. I generally use ARM Cortex M0 or 8051 microcontrollers. 90% of debugging, as in any domain, is just thinking hard about what the problem could be and trying out all the hypotheses. Not to deny that a logic analyzer and oscilloscope could be useful, but I wouldn't want someone on a budget to be discouraged into thinking that these things are necessarily required.


I guess it depends on the problem you are debugging. I rarely fire up my oscilloscope anymore (maybe once or twice in the past year), but it's hard to replace a cheap logic analyzer when writing drivers for new sensors. You really want to see the signals going out and the response coming in to understand whether you have the I2C/SPI/I2S/etc. bus configured properly, whether the polarity is correct, etc.

A logic analyser is cheaper than a scope, and does a much better job displaying this kind of data in volume. I'd say a DMM + some equivalent to a Saleae Logic are the two tools I couldn't live without ... IF you ever have to write drivers. But so much of embedded is about interacting with other devices, and it's a common enough requirement to have to port a driver over to a new chip, etc., that I can't imagine anyone regretting buying one sooner rather than later.

You can get by with printf, clearly ... but an analyzer is worth its weight in gold for the right problem.


Most of my projects are built around interfacing microcontrollers with I2C sensors. I've never had too many problems getting I2C communications working. I think a lot of it is psychological. Without a logic analyzer you can feel a bit helpless when you're not succeeding in connecting to an I2C peripheral. However, there are only a few things that plausibly can be going wrong with something as simple as I2C. (You have the wrong address, you have it wired up wrong, or you're misusing the MCU's I2C peripheral.) It's quite feasible just to work through that list of possibilities until you find what's wrong. It does require a certain amount of faith, as you generally see no indication of progress at all until you get it working.

I'm not denying at all that a logic analyzer can be helpful. I'd just encourage people who can't justify the expense to have a try without one.

Edit: That said, I see that low-end logic analyzers are actually pretty cheap. I should probably get one!


Low-end analyzers are good for slow signals. 24 MHz is a common sampling rate for less than $30. Just remember that your bus needs to be less than half that speed, and even then the analyzer may lie to you if your bus speed is anywhere near half your sample rate. Also, the timings on your bus can greatly affect what the logic analyzer shows versus what the CPU sees. I had a false positive last week, when I should have just looked at the RX the CPU was seeing while trying to get an SPI peripheral working on a new chip.

So I may be in the market for a better analyzer soon. But all in all my $30 was a good investment, and has made it easier to setup new serial protocols.


Word of warning: I lost several days debugging what looked like a working serial bus using a logic analyzer. I was about to give up when a friend asked what it looked like under the scope. The problem jumped right out at me: certain bit patterns would distort the signal and I wasn't crossing a threshold. The solution was trivial after that - just replace a pair of pull-up resistors and the scope showed good squares again. Had a similar problem with an optocoupler after that. The logic analyzer hid the fact that the optocoupler was too slow, and none of the leading edges were ever square. Ended up replacing that outdated part with a similarly priced newer part. My client at the time was struggling with a whole batch of these optos for a number of SKUs that were within spec, but the variance was so high that it was common to get a batch that weren't good enough for his designs.


Haha. I remember having exactly the opposite experience about a year ago. I was debugging serial communications with a scope and everything looked perfect. Couldn't understand why it worked at power up and stopped working a few seconds later.

However, looking at the data on a logic analyzer and being able to see several seconds of data at once showed that the external module I was trying to interface with was buggy. Turned out that the unit we had was a preproduction prototype!


Can you explain your remark about "not Embedded Linux or Arm Cortex A which is a WHOLE other world"? Do you need oscilloscope with GHz frequency?


Exactly. Measuring things at 10-200 MHz is easy and relatively accessible, and there are only so many things that can go wrong.

Dealing with anything in the GHz range is not only extremely expensive (an order of magnitude more), but you also start to deal with far more complex problems that boil down to needing a very good understanding of the underlying physics of signal transmission: concepts like impedance matching, crosstalk between signals on the PCBs, etc.

The design AND debug requirements are far more complex, and you need to account for a lot more explanations of why something isn't working as expected ... at both the software/firmware AND the physical level.


Hmm. A follow-up question: are there any cheats/hacks that would make it possible (if painful) to explore, for example, the world of USB3, PCIe, or Linux on low-end-ish ARM (eg https://www.thirtythreeforty.net/posts/2019/12/my-business-c..., based on the tiny 533MHz https://linux-sunxi.org/F1C100s), without needing to buy equipment in the mid-4-figure/low-5-figure range, if I were able to substitute a statistically larger-than-average amount of free time (and discipline)?

For example, I learned about https://github.com/GlasgowEmbedded/glasgow recently, a bit of a niche kitchen sink that uses https://github.com/nmigen/nmigen/ to lower a domain-specific subset of Python 3 (https://nmigen.info/nmigen/latest/lang.html) into Verilog which then runs on the Glasgow board's iCE40HX8K. The project basically creates a workflow and hardware to use cheap FPGAs for rapid iteration. (The README makes the point that the synthesis is sufficiently fast that caching isn't needed.)

In certain extremely specific situations where circumstances align perfectly (caveat emptor), devices like this can sometimes offer a temporary escape from the inevitable process of acquiring one's first second-hand high-end oscilloscope (fingers crossed the expensive bits still have a few years left in them). To some extent they may also commoditize the exploration of very high-speed interfaces, which are becoming a commonplace feature of computers (eg, having 10Gbps everywhere when USB3.1 hits market saturation will be interesting) faster than test and analysis kit can keep up (eg to do proper hardware security analysis work). The Glasgow is perhaps not quite an answer to that entire statement, but maybe represents beginning steps in that sort of direction.

So, to reiterate - it's probably an unhelpfully broad question, and I'm still learning about the field so haven't quite got the preciseness I want yet, but I'm curious what gadgetry, techniques, etc would perhaps allow someone to "hack it" and dive into this stuff on a shoestring budget, on the assumption the ride would be a tad bumpier? :)


USB3.0 bluetooth interference issues anyone?


You'd likely need a higher frequency scope if you're debugging timing issues or something on a processor that runs in the GHz range.


There are some excellent multimeters around the $25 price point, like my Aneng AN8009 (especially after doing the capacitor mod).

Take a look at the reviews at https://lygte-info.dk/info/DMMReviews.html; they're very thorough.


You need about $3k worth of equipment to start to effectively do basic hardware from home, but can maybe start for less than $1k. A scope and logic analyzer are a must, for dealing with misbehaving peripherals, so there’s around $900 just to get started. There’s other tools that may or may not be necessary depending on what you’re working on. But I’ve spent just under 3k in the last 2 years to do embedded work from home. That includes troubleshooting hardware, which may be someone else’s job. I also have a bunch of tools and materials to make it easier to breadboard and prototype.

If you’re thinking about making the switch, the hard part is learning all the things you didn’t know you needed to know. That’s where having someone you can walk up to, wait until they look interruptable, then ask a really dumb question is priceless, and really hard to replicate remotely.


I did WFH during initial Covid (changed jobs later) and I had to go to the office 2-4 times every two weeks to test code on hardware. So it works, but unless you can have the hardware and any testing equipment on your desk, I don't see "fully remote" working out.


Yes. I'm doing it right now and was for the previous year at my last job.

That said, I do have a suite of hardware development tools since I've been doing this a long time. But the only times I've needed to turn on an oscilloscope recently was because I was doing some low-level hardware debug.

Once you get past the really low-level stuff like measuring signal levels or debugging weird PWM behavior (i.e., once the hardware is pretty stable and you're building out functionality), you really won't need anything more than a JTAG or SWD connection, and the hardware for those is trivially cheap.


I don’t recognise a lot of what you’re saying here.

Yeah, there is churn in JavaScript frameworks, but to me that's all they are: frameworks that sit on top of the DOM, and the nature of the DOM (for better or worse!) hasn't changed all that much in a long time. When I think about what those frameworks are actually doing, there's a lot of consistency (this thinking also helps you avoid the stupid turf wars).

I personally like that’s there’s no “up” in web development. I’ve gone from perfecting REST APIs to understanding video encoding and streaming, to sending data via WebSocket or WebRTC… the list goes on. I hope someday soon I’ll have an excuse to dive into WebGL and WebAssembly.

And the development environment (i.e. the browser) is actually pretty wonderful compared to Xcode, Android Studio and the like. There's a reason native developers have pushed and strained to get things like hot reloading that browsers can do very easily!


I'm curious how you went about making the switch to embedded systems. I've always been fascinated by embedded programming, but I have no idea how to get employed in the field.


I've met two people who have tried to make that switch. Overwhelmingly, after passing the technical interview, they get told by the shops they apply to that they'd have to take a $30k (usually more) pay cut, along with worse culture/benefits, to go into it - and they change their minds.


Or just pick a stable stack like Go with something like Hotwire for the FE and just roll with it.


After 10 years of full-stack I moved into WordPress. The code I deal with is a mess, but to be honest, so was everything else I've ever been paid to work on.

WordPress is stable and maintains backward compat for as long as possible.

It's a real joy to build on top of, as you know you won't need to run framework updates every month to prevent a nasty surprise down the line.


After writing 2x more CDK code than actual app code for the last year I can really relate :)


Is this sarcastic?


Go as in Golang, and Hotwire to have simple server-side rendering that feels like an SPA to the user.


I understand the references, and maybe it's that I work in robotics and not web dev. But if the criticism of web dev is that domain knowledge erodes, unlike in law or medicine, the response shouldn't be that long-term stable skills exist, citing two things that have come into prominence in the last decade in Go's case, and the last year(?) in Hotwire's case.


Hotwire is not really a skill. It's a tiny lib that lets your old-school server-rendered app feel like an SPA. There are obviously other options for your server-side code; you can use C if it's up your alley.


Fair, I might give it a look if I ever need to create a web project. I think the critique is ultimately still valid, though. At least in robotics, new knowledge is typically complementary to existing knowledge. My ML projects don't negate my knowledge of control theory, physics, operating systems, embedded computer architecture, etc.


To be fair, in Go's case it came to prominence at or before the time many of us started. And the Go I learned at the very start of my career (10 years ago) is still usable today.


Nope, why?


I'm pretty up to date on front-end developments, but Hotwire is barely on my radar, so I had to look up the release date; it seems to have been released just this year? (I thought it was last year.)

Though my stack has certainly evolved, it's been mostly the same since 2016, with really not that much new to learn. Adopting something released just this year doesn't seem like the most obvious choice when looking for a stable stack.


Hotwire and a few analogues (e.g. LiveView for Phoenix) are an emerging design pattern.

I think for a lot of business cases they are going to replace JS frameworks. They are domain-specific (Rails, Phoenix, etc.), but they allow you to essentially replace a lot of SPA functionality with a library that takes a back-end dev an afternoon to grok.

They won't replace React, Vue, etc. But they can replace them in places where a full JS framework is overkill (which is probably the majority of the places where I see it).

As a backend dev, I think that they have incredible promise. They make interactivity MUCH easier.
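
As a rough illustration of the "afternoon to grok" claim, a minimal Stimulus controller (the JS half of Hotwire) is just a class wired to server-rendered HTML through data attributes. The identifier and target names below are made up:

  import { Controller } from "@hotwired/stimulus";

  // Attached via: <div data-controller="hello">
  //                 <button data-action="click->hello#greet">Greet</button>
  //                 <span data-hello-target="output"></span>
  //               </div>
  export default class extends Controller {
    static targets = ["output"];
    declare readonly outputTarget: HTMLElement;

    greet() {
      // Sprinkles behaviour onto server-rendered HTML; no client-side
      // rendering or state store involved.
      this.outputTarget.textContent = "Hello!";
    }
  }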


Hotwire is lang/framework agnostic.


Hotwire is just a name for a bunch of tech that has been around for a while. The components of Hotwire - Turbo and Stimulus - have improved over time, for sure, but the core ideas haven't changed much, if at all. I worked on a Hotwire side project recently and described it to a friend as "writing Rails like it was 2009." It feels really good, because I've always found it hard to grok SPAs and the like, but could generate tolerable server-side generated webapps.

Old wine in a new bottle, basically.


Even if it's not hard to grok, it's often unnecessary complexity, as users literally cannot tell the difference in most cases.


It's a tiny lib that replaced Turbolinks, from DHH and the gang. It's used in many prod deployments and it's pretty convenient for people who do not enjoy FE SPA-style dev. There are obviously other options.


Hotwire or htmx? :)

I use them too, but even in this niche there are several, like LiveView for Phoenix.


LiveView is great, and Elixir is a really nice language.


Actually, web development is much more stable now than it was 7 years ago. Major frameworks and tools are the same. Web standards are much more stable now, and languages are quite established. I haven't touched React or Vue for 5 years, but my knowledge is still relevant. Server-side technologies have been very stable for 10+ years too.


As a web developer who's a keen electronics hobbyist I definitely feel this too. But how did you make the switch? In London, there are few embedded roles, and they seem to pay worse than web developer roles. On top of that, companies seem to be looking for people who already have extensive professional experience of embedded development (which is fair enough, I guess).


All true. In my case I had to take a huge pay cut just so I was able to get into the industry. I'm not sure if this is the correct way (or the only way) to get in from a web development background. It is just what I did.

There are fewer companies that do embedded systems, but there are also not that many developers. Headhunters call me a few times a month just to ask if I'm willing to change my job, since experienced real-time embedded systems engineers are so rare that they have to actually steal one from somewhere else.

Really, really, depends on your situation.


It tends to be a very small world where you get to know everyone after a certain point, or you know someone who knows someone, and jobs tend to be had by word of mouth. If you have a good reputation (which can be gained working on problems on visible open source projects), work tends to present itself. Even more so than in other fields because it is such a small world, despite the enormous amounts of money involved in semiconductors.

There aren't a lot of advertised jobs, but I think this is often because they tend to get filled via word-of-mouth and employability is rarely a problem for a decent embedded engineer.


> Everything feels like sidesteps. There is no feeling of permanence to your knowledge, unlike a lawyer or a doctor.

Then you are doing it wrong.

Focus on the fundamentals, not the implementation. Any job or degree that doesn't value growth and a good grasp of the fundamentals is a pretty big red flag.


Do you have any tips on how to improve in the embedded domain? I've got some professional experience, mostly doing embedded application (MMU-less) development. I'm looking into linux driver development, but it feels a bit above my skill level.


If you want to improve, look at an open source project with a lot of momentum, and learn from what people are doing there.

I'm actively involved with Zephyr RTOS, for example, which is backed by the Linux Foundation and has people working on it full time from almost all the major MCU manufacturers. The development model tends to correlate closely to the Linux kernel's, and a lot of the key members were/are Linux kernel contribs, so skills learned with Zephyr can help you move to Linux dev later if you're new to it.

There is a non-trivial learning curve with Zephyr (device tree, KConfig, etc.), but it has more momentum than any RTOS I've seen in my career, and it has a very helpful community, while keeping the technical bar high.

If you prove your worth there, it's also an avenue that can possibly even lead to job opportunities, which I've seen and benefited from myself.

That may or may not be of use, but if you want to improve in the embedded space, getting involved with something like Zephyr is some of the best use of limited time in my opinion to learn from some of the better embedded engineers working in the open.


Wow, the list of supported targets is a tad longer than I've yet seen elsewhere: https://docs.zephyrproject.org/latest/boards/index.html


The number of PRs and committers is what boggles my mind ... it's an endless stream, which is highly unusual for an embedded project. This is where the momentum and technical investment is happening today for open source embedded, IMHO.


> I'm looking into linux driver development, but it feels a bit above my skill level.

Same here. I managed to reverse engineer some of my laptop's USB functions and write Linux drivers for them. That little driver was the most fulfilling software development project I have ever made. Tried the ACPI and I2C stuff but couldn't figure it out. I really want to learn it all but I don't even know where to start...


Most doctors' training is out of date within a decade of completing med school. It's why ongoing and continuous training is critical in advanced professions, just as you and I need to catch up on new tech and systems in IT.


When you made the switch, how did you go about getting embedded employers to take your applications seriously?


Can you provide some guidance on how a web developer can transition into the field of embedded development?


> 64. DRY (don’t repeat yourself) is a luxury for when you have a coherent theory in your mind of the app’s code.

> It’s for when you have that theory contained in a worldview in your mind that helps you quickly test out decisions and plans against that theory.

> Until then, you should repeat yourself. Seeing your code repeat itself – when rhyming phrases start to pop up throughout – is a vital tool for building a theory about your app.

Wow, this is the best explanation of how to apply DRY that I've ever heard.


Sandi Metz says "a bit of duplication is better than the wrong abstraction", which is a rephrasing of the same idea.


I've seen so much code being messed up by dogmatically trying to follow "best practice". The worst offender is exchanging state variables for storing the state in the code path.


I’ve never heard of that one, what does it look like?


Yes! "AHA" (Avoid Hasty Abstractions) beats DRY.


My favorite expression of this idea is "Write Everything Twice", if only because WET has a nice symmetry with DRY.


That's interesting. DRY is one of the few things I'm very insistent about. I don't trust myself to fix a thing in multiple places. I've been bitten by this too often. I trust my colleagues even less, because they might not even know that the code is repeated elsewhere.

Am I misreading this comment perhaps?


Maybe.

The comment and the article don't advocate repetition for repetition's sake. They serve as a warning against prematurely trying to "factor out" seemingly related code that can, later or in that moment, prove to not be exactly the same. This complicates the "refactored" or "improved" piece of code, as it now has to serve different purposes, maybe requiring more parameters or a more complex state input. This further complicates the call sites of all the clients of the new DRY piece, adding complexity to the system, which is more often than not worse than the original state.

It also makes everything around those places more difficult to change over time.

Since this "refactor" is often easy to spot, it tends to be abused by less experienced developers trying to improve things, or "advocates" of this practice that aren't in touch with how it can end up causing more damage than it fixes.

A DRY refactor has a much higher chance of standing the test of time if, at the moment of performing it, you know enough about the system and how it will evolve.

As with most things, it's about striking a balance. Since the internet is full of DRY! refactor! advice, this serves as a counter to the mindless call to DRY by adding some nuance.

But of course there are instances in which code that is exactly the same is copy-pasted around out of laziness, even when it's known that one copy won't diverge from the other... that'd be the trivial case where a DRY refactor is clearly positive.


I think it depends -

1. You are writing some code; there are a number of functions dealing with similar things, and you know how all the stuff relates to the rest. You find yourself repeating some code in places and immediately realize you will need to repeat it in a lot of places, because all these things are related. So you make a function you call.

2. You have come to a new codebase you aren't familiar with. You see some bits of code repeated in various places, and you don't know whether this is because the things actually are related to each other or basically by chance. You replace these bits of recurring code with a function. Later on, people keep coming to your function and adding parameters and branching logic to handle the different use cases that keep popping up where it is used, because there was actually not a very tight logical connection between those places, and as a consequence, over time, their needs diverge (something like the sketch below).
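
A hypothetical sketch of where case 2 tends to end up (all names invented for illustration):

  // One "shared" helper, factored out of call sites that were only
  // coincidentally similar. Each new caller added a flag, so every
  // call site now pays for complexity it doesn't use.
  function formatName(
    user: { first: string; last: string },
    opts: { lastFirst?: boolean; upper?: boolean; initialOnly?: boolean } = {}
  ): string {
    let name = opts.lastFirst
      ? `${user.last}, ${user.first}`
      : `${user.first} ${user.last}`;
    if (opts.initialOnly) name = `${user.first[0]}. ${user.last}`; // billing needed this
    if (opts.upper) name = name.toUpperCase(); // the reports page needed this
    return name;
  }

At that point each caller effectively has its own code path again, except now they are all coupled through one function that is risky to change.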


I won't offer an opinion of my own because I don't have one yet, but I've seen several articles/comments here on HN lately expressing that one should not follow DRY blindly because there are cases where it leads to over complicated code. I remember there was an example showing one such case.


You're not misreading it. You're doing the right thing. Carry on. :-)


Agree, although with the caveat that if you find yourself deliberately copy-pasting duplicate code, you should consider going DRY immediately. You might later realize the code really should be duplicated (because it is "accidentally alike" rather than "essentially alike"), but it is much easier to turn DRY code into duplicates than going the other way.


I'd argue the exact opposite, it's much easier to take duplicated code and abstract it away. If you find that's not the case, maybe that's precisely because it wasn't such a good abstraction to begin with.


In practice, what I've seen too often is that code that was duplicated but should have been factored out ends up evolving with people having forgotten that there were duplicates. Bug fixes that should have been implemented in all instances of the duplication only end up in a few of them. When you do go to factor that duplication out, you're stuck with 5 to 10 different versions that all have different bug fixes, and theoretically they all need all of them.

On the other hand, if someone finds that one instance of the duplicated code really is different, they either turn it into a new function (the right answer) or add a new argument. Even if they take the wrong path by adding a new argument, it's easier to turn that one function into two functions than merge 5 to 10 different versions of what should have been the same code.


Yeah exactly. If two blocks of code are literally duplicates, it is easy to consolidate. But in reality they will drift apart over time, making consolidation more difficult and risky.


The problem typically comes down the road when those abstractions touch so many pieces of the code that they become extremely expensive to change. The problem isn’t the hasty abstraction, the problem is the lock-in it creates.


This is a false dichotomy in my opinion. DRYing some code doesn't prevent one from seeing how it is used in many places. To the contrary, I would say that DRYing makes it easier to see repetitions, because what you thought was a repetition could otherwise be slightly different.

Code should be kept alive. It is fine to DRY wrong. You can make it better when you discover a better way.


Changing abstractions is harder than changing duplication, thus wrong DRY is harder to fix than delayed DRY.


If one writes an AbstractFactoryFactory where a function would suffice, I wouldn't consider it DRY. That would be a premature abstraction. And premature abstraction is the root of all evil. By wrong DRY, I meant a DRY done at the right level of abstraction, but in a way that doesn't address short-term evolution of the code. In such a case, it should be easy to change the abstraction.


Wrong DRY is necessarily a premature abstraction because every act of DRY creates a new abstraction. If the DRY was wrong, the abstraction (new method) it generated is premature.


I should have stated my definitions clearly, because I might be using rather unorthodox definitions of “wrong” DRY and premature abstraction.

Premature abstraction is premature at the time of abstraction. It aims to address future needs that may never happen. Wrong DRY is correct at the time of abstraction. It is wrong only in hindsight when a different need that is not addressed by the DRY arises in future. In that case, it should be easier to change the abstraction than to find duplications and DRY them. DRYing early also allows getting the benefits early until the needs change.


I think of it that way too. You want to make it obvious where the regularities and the differences flow to. It’s a bottom up, gardening kind of programming.

What’s more important IMO is to keep things together and in harmony. As in the same file, or similarly named files, or something that lets you see it somehow. That’s a structured kind of repetition that is rarely harmful.

The problem with DRY is when you have multiple knobs at different places that you have to turn in sync that are structurally unrelated. Now you need to refactor or redesign to have a clearer pipeline and knowledge representation.

DRY problems can often be symptoms of bad structure. You can only accidentally fix it by patching or abstraction over repetition.


Sometimes I think that choosing identifier names and deciding between copying code and extracting common code are the hardest problems in programming. There's no right answer, but choosing the wrong answer hurts program maintainability.


Agreed. Relevant article about this: https://news.ycombinator.com/item?id=26027448


I agree, this is very on point. It's hard to take the "DRY everything" out of a junior (catchy acronyms get the best of them); I will lean on this.


There's a catchy acronym "WET" for "Write Everything Twice" (i.e. don't generalise until at least the third instance).


The problem I have with this approach is that I don't have a counter in my head. I can tell whether I saw it before, but cannot tell at how many different places. Therefore, I find that it's best to DRY at the second instance. Doing so also avoids neglecting to DRY some instances and potential DRYing of them in another way.


Then again, sometimes it's even harder to take the "DRY nothing" out of them once they decide "DRY everything" was wrong but misunderstand why.

I mean, while the explanation above is nice, it's fairly easy for a junior to speed read through it as "DRY is a luxury [...] You should repeat yourself [...]" -ignoring all the parts that sound too complicated or nuanced- and quickly jump to the completely opposite conclusion.


The way I see it, business logic needs this approach, but all API boundaries must be and remain DRY from the get-go. Interface definitions must not define behaviour; they are glue.


both have good points, thanks. good food for thought


My experience is the opposite. Juniors repeat code (and actual real-life procedures) a whole lot and are either somewhat uninterested in DRY or lack the skill to not repeat themselves.


Nice read. I've been doing web dev for 15 years. What has got me burnt out now:

-Spend most of my time dealing with tooling issues.

-Increasingly complex demands. Even the simplest problems now seem to be multi-layered, with dozens of edge cases to be accounted for. It's seemingly impossible to get anything right. You can spend hours testing a responsive design on every browser and every viewport - send it to the client and after 5 minutes you'll get a list of 50 issues, half of them totally asinine.

-Lack of respect/authority/autonomy. After 15 years I'm considered a subordinate to a PM with 6 months experience and zero technical knowledge. This one can be improved a little by working remotely I think. But it just comes down to how management view or value different roles. We are a liability to them, something to be managed, coerced and controlled.

-Oversaturated market, learntocode. Let's be honest and say that for 95% of companies, web devs are a commodity. I don't have an issue with people who are excited and want to learn coding, but the reality is the job market is overstuffed.

-Leetcode interviews. I just don't have the energy, desire and sometimes specific knowledge to deal with these interviews. It's a straight no from me now if I get contacted by a recruiter requesting I do one of these.

I can see clearly that other people in non-technical fields are getting paid the same as me or better, while dealing with far, far less work and far less stress.

It's like there are two categories of work: "reviewers" and "implementers". Reviewers are the stakeholders; they make all the demands and do none (or very little, and superficially) of the actual work. Implementers, well, we are the ones who do the heavy lifting and deal with all the real problems.


> -Spend most of my time dealing with tooling issues.

I just spent 3 days debugging a bizarre frontend issue. Turns out I just didn't understand how webpack works and was loading it wrong.

And once I fixed that, I had to fix my postcss config because something somewhere had a breaking change at some point.

I feel like almost everything I learn these days is how to deal with the idiosyncrasies of dozens of versions of dozens of tools. My actual programming ability is stagnant. That's what's burning me out.


Yes, I recently installed an npm package I built about 4 months ago and got an error that the main attribute in the package.json file is deprecated. Googled it, without any meaningful search results.

Just a small taste of my day, another rabbit hole to get lost in for a few hours whilst deadlines for actual work are looming.


Do not use Webpack for small personal projects. Zero-config tools like Parcel are much better for that.
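
For what it's worth, the zero-config flow really is about this short (assuming Parcel 2; exact commands may vary by version):

  npm install --save-dev parcel
  npx parcel src/index.html         # dev server with hot reload, no config file
  npx parcel build src/index.html   # optimized production build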


>-Oversaturated market, learntocode. Let's be honest and say that for 95% of companies, web devs are a commodity. I don't have an issue with people who are excited and want to learn coding, but the reality is the job market is overstuffed.

It's interesting you say that.

Based on one source [1] there is a 10% decrease in job outlook 2020-2030 for "computer programmers".

Another says [2] there is a 22% increase in job outlook 2020-2030 for "Software Developers, Quality Assurance Analysts, and Testers"

"Info Security Analyst" [3] shows a 33% increase in job outlook 2020-2030

Yet in another... the annual projected job openings for software developers for 2018 - 2028 is 134,000. [4]

That's 1.34 million job openings over the course of those ten years.

[1] https://www.bls.gov/ooh/computer-and-information-technology/...

[2]https://www.bls.gov/ooh/computer-and-information-technology/...

[3] https://www.bls.gov/ooh/computer-and-information-technology/...

[4] Visualize it: Wages and projected openings by occupation Domingo Angeles and Elka Torpey | November 2019 -- https://www.bls.gov/careeroutlook/2019/article/wages-and-ope...


Maybe the poster is in another country. I live in a third-world country where most work is grunt off-shored work. Most positions here are for devs with 1-3 years of experience, and there are loads of them. If you are more experienced and want a good job, you need to work remotely with an overseas company.


The one thing you should understand is the average programmer is not a good programmer.

The reason is simple.

The average programmer is a newbie with little to no experience.

It's like a pyramid or a triangle. There's more area or volume at the bottom than at the top.

Every year, more new people arrive at the scene.

New programmers are easily fooled by "shiny" objects.

So, whatever place you join, there's a high likelihood that the culture at the place is dominated by what is considered popular, with little to no regard for what actually works well.

It's different at each place, but almost every company I've been to has things set up in a way that's very painful and frustrating to work with. Everything takes many steps. "Compiling" javascript takes 3 minutes. "Hot Module Reloading" takes 30 seconds and refreshes the page 5 times. You have to jump between 4 different repositories to fix a small bug. etc etc etc.

If you are experienced and notice that things at your company are broken, you either try to advocate for fixing things or just leave out of desperation. So the organizational culture continues to be dominated by people with very little experience.

If you are not experienced, you may just think that the "suck" you have to deal with on a day-to-day basis is just what programming is like, and you might well decide to quit programming. It's hard not to think so when you have never seen a better version of how things can work.


This, so much this.

Just know though, that there are different places out there. I surely am tempting Murphy right now. Please recognize my sacrifice.

My current place is rather good in this regard. People actually care. The Devs genuinely care about both their work environment and code base. We actively try to hire other people that care and not let in the ones that don't.

We do hire 'inexperienced' people as well, but that doesn't mean you've got to accept a bad dev experience. For example, I have a really 'inexperienced' guy on my team. But we hired him anyway because he showed signs of 'caring' during the interview and coding challenge (take-home - I know what you think. It's different than you think and I'd be happy to elaborate if you want). He is awesome! I try as hard as I can to give him the space to be awesome and not let the 'bad parts' of the company (which exist even in awesome companies) reach and discourage him (and others like him).

We keep our builds fast, and we spend a bunch of money on giving every dev a prod-like environment. A monorepo.


This sounds like our shop. One big fat monorepo, every merge to master must satisfy a checkbuild.

Speed is a big deal for us too. The entire thing can be built from clean in ~3 minutes by GH action workers. We buy our developers threadripper workstations, so local rebuild times of the solution during typical development iterations are closer to 15 seconds. We are much more interested in developer ergonomics than with simulating prod environment.

I genuinely believe having fast builds is the most important part of the developer experience. Somewhere around a build taking more than 30 seconds is when bad things start to happen to my engagement loop. If I can keep it under 30, I can stay in flow the whole time. Other developers on the team have expressed similar tolerances.


> I genuinely believe having fast builds is the most important part of the developer experience. Somewhere around a build taking more than 30 seconds is when bad things start to happen to my engagement loop. If I can keep it under 30, I can stay in flow the whole time. Other developers on the team have expressed similar tolerances.

This, a thousand times. Anything after 30 seconds is where you start to lose your focus, and your work output suffers for it.


I don't get the fixation on short clean-build times. My prior work project took 15h to build from clean - obviously a nightmare. It was due to a bad layout of dependencies and a lot of code generation. So it is still important not to mess up.

I care mostly about cached build time. A clean build time of, e.g., 1h does not bother me.


With a bad enough (or just unfortunate) codebase or technology choices, every build could easily turn into a full build. I care about cold build and cold restart times exactly for this reason.


Please do elaborate on your take-home; in a profession with no formal credentials (i.e. no MSBE for SDEs), I don't mind putting a little skin in the game for the same purpose.


Well, the reason I said that is all the discussions on HN about people hating leetcode, hating take-homes, hating... well, just about any form of interview, if I read it right.

Now I am/have been on both sides of this.

I happen to like take-homes for devs. I like them because they take out some of the stress of interviewing. If done right. I have been advocating internally at my company for stripping down the take-home test. Who wants to do a "this should take you 4-8 hours" test? Which nobody internally actually tried to do themselves in that timeframe. Nobody! Especially not if you have a job at the moment. And a family. But want to take advantage of the market and/or look for a better company.

What do I look for in a take home?

Not running code! I couldn't care less! I have never in my life run a take-home test someone sent in (backend). I look for code cleanliness, consistency, abstractions, DRY, WHY comments. If you send in something without any tests, you are out. If you have a hundred tests to show 'you do tests', you are out. If you have 5 tests that are of awesome quality and show that you know what is worth testing and how to write tests that will survive a refactoring of the code and actually tell you that your refactoring didn't break anything, I will hire you and pay you money to write a hundred of those tests when it actually matters. Maybe add a comment that you could write the same kind of tests for the rest of the code but your kid needed a good-night song and the deadline for the test was up.
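
To illustrate what I mean by a test that survives refactoring, here's a tiny invented example (module and API are hypothetical) that asserts only on the public contract, never on internals:

  import { strict as assert } from "node:assert";
  import { test } from "node:test";
  import { applyDiscount } from "./pricing"; // hypothetical module under test

  test("orders over 100 get 10% off", () => {
    // Only observable behaviour is asserted; applyDiscount can be
    // rewritten freely and this test still tells you the truth.
    assert.equal(applyDiscount({ total: 200 }), 180);
  });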

I was hired on such a take home. My code deliberately did not run. I called out to external services that I did not bother to include in the take home test because it asked to solve the problem and I knew they used microservices. And business wise those things should've been handled in those other microservices and not the one I was writing. I also left out the database. Not worth even an h2 in pom.xml. They happened to use gradle. They didn't care I used maven. Still love them for it. Had the best interview ever. Loved the pros and cons discussion with my interviewer. Awesome guy.

Now depending on what position you get hired for this might change. For an architect position, for example, a whiteboarding session might be more appropriate. Just have a problem to solve (e.g. one of your own main use cases) and have them design it with you. Who cares if they come up with the same solution you did? It's about the discussion. The pros and cons. The general attitude towards solving the problem. If they admit they don't have the answer to something off hand but it's a 'googleable thing' they've just never had to solve before, that's fine!


We do take-home tests and I find them valuable. We do, however, ask candidates to host it somewhere so it can be used.

1) we want the time investment to be <4 hours

2) the test uses public APIs and is directly related to our work (crypto)

3) we had the team (mine) do the test themselves before making it official

4) we also are looking for things like tests, thoughtful architecture, etc. It's hard for a junior dev to fake well structured and organized code because it takes experience and thought to do it that way.

If I were interviewing I would much prefer a take home rather than an in person leetcode interview. I also don't mind in person pair programming (like help us fix the bug in this code or whatever).


> If I were interviewing I would much prefer a take home rather than an in person leetcode interview.

It's tricky, because this can vary within one person across time. For example, in my area (DS), I think that the take-home gives the best signal for the actual work (as long as it's well scoped).

But, as a parent of a small child, I'd much prefer to just do in-person tests/discussions as it takes less time away from my family.

So it's hard, and whatever you choose will lead to you missing good candidates.

Unfortunately, as in many areas, there's no silver bullet.


Exactly agreed. We implemented such a take-home as well, hopefully it takes candidates 1-2 hours to complete, but we need more data on that. You're scored on various well-defined areas, and it is, as you say, to show us that you know how to organize code. We don't even try to run it (though CI should pass).


Your company sounds great. Are you hiring?


You lost me at monorepo. That might work at a very small scale, but past that monorepos create more problems than they solve.


Such cargo-cult statements never work in absolutes.

Saying that "monorepos are only good for large scale" is pure bullshit, and something that only someone very inexperienced would say.

I'm in a very small company (less than 10 devs) but we still use a monorepo, because we have a set of common libraries for all our products, and having a monorepo helps us update those libraries without breaking anything. On the other hand, some very large companies have tried and have failed.

Monorepos are just a tool. The implementation and the context matters more than anything else.


Not disagreeing, but I think he was saying the opposite of "monorepos are only good for large scale"


A monorepo saved our company. When you let every developer have their own personal repo like it's some isolated minecraft instance, you will struggle to create 1 big valuable thing.

Monorepo forces the tough conversations. Maybe the organization should only use 1 language now. Maybe we should talk about code review policies and standardization of other processes since everyone is in the same bucket. Why can't everything just compile to 1 exe?

Another massive advantage with a monorepo if you use GitHub is that you now have 1 issue bucket that covers the entire enterprise. Linking issues to commits is super powerful when this technique covers 100% of your code and exists under 1 labeling scope.


Like google scale?


Sure, if you want to write your own version control system and also your own computer language to support it.


There are many other examples of large scale monorepos. They in no way require custom version control nor programming language.


What are the other examples you’re referencing?


I'm not a fan either. In my opinion, to do monorepos right you have to have a lot of tooling to handle all the special cases and scenarios. Small teams just don't have the bandwidth, so they use off-the-shelf tools that force them into a specific workflow, and they end up at the mercy of the tools.


like what?


Like versioning services, deployment of the code base, breaking out new microservices, transitioning to any container-based deployment solution. Monorepos aren’t just a nightmare for release management, they’re a nightmare for any sort of single-responsibility-oriented architecture. I’m really tired of “senior devs” who believe they’re architects championing monorepos just because they’re too lazy to change minor versions in a repo.


Why is any of this necessary?

You have to understand, many of us think microservices is a stupid idea, so the fact that the core of your argument is "monorepos make microservices difficult" is not an appealing argument.


Where is this 'microservices is not the answer to everything' club? I want to sign up.


The definition of an ideology is a school of thought that claims to have the answer for everything. What does that tell us about the microservices fan club?


Possibly that `problem-solving` is an external microservice that the microservices fan club depends on for all their answers.


If you have a hammer everything looks like a nail.


I don't know any online community that espouses some of the ideas I believe about web technology. There's "handmade network" but they don't seem to be much into web programming.


I think you're massively misrepresenting the senior devs you've worked with if you think the reason they've advocated for monorepos is because they're "too lazy to change minor versions".

If a monorepo puts a barrier in the way of turning everything into a microservice, then all the better, as far as I'm concerned.


How does a monorepo make any of these things harder? Make a new folder, slap a version on it, push your versioned artifacts.

Container based solutions become harder? Again… build your artifacts and deploy all the containers you want? What is so hard about this?

Meanwhile, with multi-repo you lose out on a whole host of opportunities to robustly track dependencies and keep things up to date.


Monorepos don't make it harder.

Everything in software is about dependencies, and monorepos are playing double or nothing with dependencies.

The issue with Monorepos is that they can amplify bad decisions and tech debt. Any bad decisions are magnified across the organization. But they definitely don't make it easier or harder to make that initial bad decision.

The upside is that dependencies are just there and not hidden behind interfaces.

The downside is that a bad dependency cannot be abstracted behind a micro-service interface. Once it's out in the wild it can do crazy things.


>How does a monorepo make any of these things harder? Make a new folder, slap a version on it, push your versioned artifacts.

Not liking microservices myself, but if I did this, wouldn't it mean that the code in version 19 is now removed from the code in version 18 and earlier in git history, making it more difficult to figure out just where things went wrong on a difficult little edge case?


I don't see where microservices or not have much to do with having a monorepo or not. We have a monorepo and we have both. We have "microservices" and we don't. It's also a hugely variable term. What I call microservices you guys might call "a bunch of monoliths communicating with each other". Same difference.

git bisect helps to easily identify hard-to-find but reproducible bugs. Since we try to have small PRs, once you've found the breaking commit it is usually easy enough to find the bug. Murphy and exceptions obviously apply.
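
For anyone who hasn't used it, a typical session looks roughly like this (tag name and test command made up):

  git bisect start
  git bisect bad HEAD        # the current commit is broken
  git bisect good v2.3.0     # hypothetical last known-good release
  git bisect run npm test    # git checks out midpoints and runs the test each time
  git bisect reset           # return to where you started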


Like anything it's a tradeoff.

You trade simplicity of having everything in one place vs. ability to independently version pieces at the cost of more complex tooling and build systems required to test.

Usually there's some sweet spot, but the answer isn't obvious and one isn't clearly better.


OP here.

We do not version our services in that sense. It's a monorepo after all.

We do have microservices.

We use kubernetes.

Deployment is a breeze. Only services with changes are actually deployed.

We do hourly deploys to production.

None of the issues you describe apply here and the monorepo works awesomely for us. This is a SaaS situation where all services making up that SaaS solution and that we host are in that monorepo.

YMMV if you are in a different situation such as having to ship versioned software for customers to self host/install for example. Our accompanying software that is usable in conjunction with the SaaS solution is not part of that Monorepo and each of those are versioned and deployed to various external marketplaces.


There’s plenty of reasons to want to version APIs in a SaaS offering…


That's a different story and a monorepo has zero effect on that. The versioning that was mentioned before was versioning of the services themselves, i.e. this build produces version 1.5.4, next build is 1.5.5 etc. We don't do that any longer since moving to a monorepo. We deploy by commit hash basically.

That is way different from providing v1 of your API while changes come out in v2, with v1 of the same API still being served. You can (and we do) do that perfectly well with or without a monorepo. Mostly for the public API. Internal APIs often don't need versioning; one can go with backwards-compatible changes and/or roll a change out over multiple commit deploy cycles.
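
In other words, API versioning lives at the routing layer and is orthogonal to how the repo is cut up. A minimal Express-style sketch (routes and response shapes invented):

  import express from "express";

  const app = express();
  const v1 = express.Router();
  const v2 = express.Router();

  v1.get("/widgets", (_req, res) => res.json({ items: [] }));             // old shape
  v2.get("/widgets", (_req, res) => res.json({ items: [], next: null })); // new shape

  // Both versions are served by the same deployment, monorepo or not.
  app.use("/api/v1", v1);
  app.use("/api/v2", v2);
  app.listen(3000);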

Some of this will depend on your size, I suppose. The larger the org and the more services, the more stable versioning I'd expect to see.


> The larger the org and the more services, the more stable versioning I'd expect to see.

Which is why I said in my original comment this may work fine in a small environment, but it won’t scale well.


I do agree to some degree with that (I'm the OP not the siblings :)). We just have to define what small means. Netflix or Google are way up there in scale. In relation to them we are small. We also don't actually have microservices vs. what I'm reading about Netflix's architecture. We probably have "macroservices" in comparison with them but we definitely aren't one monolith. We got a bunch of "small monoliths" so to speak.

That said, we're not small either. In our niche, which you might recognize if I said more, we're the top solution customers choose (but I won't go into much more detail than that for obvious reasons).


I actually couldn’t agree more. Figuring out what “small” means really is the key and that thoughtful level of analysis is what’s really important for determining what the right architecture is for a business. I must’ve articulated my thoughts poorly earlier, while I’m not crazy about monoliths, I’m not opposed to them either. It’s the monorepos that contain many monoliths that concern me. Microservices in general are hard to execute on, and even the most successful companies that have realized microservices have monoliths running somewhere in the background. Your enterprise sounds interesting and they’re fortunate to have such a self reflective engineer on their team.


> If you are experienced and notice that things at your company are broken, you either try to advocate for fixing things or just leave out of desperation.

I've tried the advocacy thing several times, but it usually ends up being you volunteering to do a lot of extra work. And then working extremely hard to try to gather some sort of consensus or buy-in from the team, and receiving either pushback or no response at every step.

I've now decided to just try not to care as much. I focus on writing my tasks to the best of my ability and improving whatever small bits of code that I can along the way, but I now just consider the codebase the property of the CTO and engineering management. If they want me to focus on helping fix what's broken, then they need to assign me to it and allocate time for it.

That kind of sucks too and takes some of the joy out of work, but at least I'm less likely to overwork myself again.


The other way is to just go skunk and fix a problem. Then if the fix works and people appreciate it, you get more latitude the next time you want to fix something.

Getting pre-approval consensus is not really the way to go about those things, at least not in that kind of environment. Anything "important but not urgent" will not get a sufficient level of consensus. Generally, if you're faced with a problem where you're certain people will find it worth the effort after it's done, but where you can't get approval beforehand, going skunk is the only way.

Most of my steps forward in technical reputation have been from those efforts.


> The other way is to just go skunk and fix a problem. Then if the fix works and people appreciate it, you get more latitude the next time you want to fix something.

This might surprise you, but people don't like you fixing their mess behind their back. They take it as a personal insult. They correctly recognize that you think what they did is messy and bad, and your fixing it looks to them like an attempt to dethrone them from whatever little position they have carved out for themselves within the organization.


I guess it varies from person to person. If somebody fixed something I wrote without asking I would be thrilled! I recognize that I am imperfect, limited by time, and striving to move to more impactful projects so I would be more than happy if somebody wants to improve a system I was inexperienced with or didn’t have the time to fix. I would probably give them the low down on the system and try to transfer maintainership of the system to them.


Reading the GP comment (going skunk), I wondered if the factor behind that turning out positively was the position of the fixer relative to the code/systems in question. If they're not stepping on anyone else's toes, it works great.

This comment echoes that sentiment a bit, I think; there's an implicit dependency in there on the project not having a collective structural investment in the thing being fixed, whether that investment is explicit (overt, with clearly delineated boundaries) or implicit (implied, fuzzy, part of collective culture).


Yes! I agree. Best to fix up code of people who are long gone instead. No one else will touch it so no merge conflicts either! Just make sure the fix makes others appreciate it. The populist fixes are fixes in dev tool chains that everyone in the team will appreciate. If you want a hobby just make any part of CI faster for some love from the team.


In a healthy team no one person owns an area of the code, and most complex areas are a weird amalgamation of several people's commits anyway. I haven't worked with anyone who takes fixes as personal insults for a very long time.


I'm not a fan of secret work like that. It can definitely depend on the size and composition of the team, though (and the size of the fix / improvement). Once a team gets sufficiently large, people going rogue with side projects like that can cause more harm than good, in my experience. Hurt feelings / resentment, because somebody else had already planned to fix that issue ("why didn't you just bring it up so we could discuss it first?"). It conflicts with some new feature another team is working on ("if we knew you were working on this, we would have done things differently; let us know next time"). Multiple people might decide to tackle the same problem concurrently, without knowing others are doing the same. You misunderstood something fundamental or lack some context that invalidates the entire solution that could have been cleared up by discussing it with the team first.

And then maybe the biggest problem is the whole "putting in extra voluntary work" thing. Higher risk of burnout because you're spinning too many side project plates. Either your main work gets impacted, you start putting in overtime, or management starts thinking they're not giving you enough work, if you have all this time to do side stuff.


Ok then don't do it


The risk here is that your fix might significantly improve the code in some way but also introduce a new bug or break some obscure feature. As a dev you're always sticking your neck out when you improve code in ways that aren't directly tangible to users or management. Leadership & management need to buy into the concept that code cleanup and refactoring are necessary but always carry some risk.


I find your comment spot on. There are a lot of good places out there, but at the same time the demand for developers surpasses the supply, so a lot of less experienced people are drawn into positions that they are not ready for.

I know I might get some hate for this, but one of the most demoralising things for me recently was to feel that I wasted my life and youth for nothing. I spent 4 years in a CS-focused high school, where 60% of our classes were just maths and CS, then spent another 5 years at university. Even though most of the stuff learned there is theoretical and most of my knowledge also comes from freelancing and learning on my own, I think the formal education helped too, especially at the university, where we had some amazing professors.

What I see now is that everyone is getting into learn to code programs, a colleague from my company, who was a PM went to a 3 month JS/learn to code course and is now officially a "programming instructor", another friend was a journalist, took a 6 month JS course online and is now a "senior" developer. I understand and know people that switched careers, spent a lot of time on learning to code and are great developers, but most of the people aren't like that, and it feels somehow demeaning to our craft, that now after a 3 month online course, everyone is a developer.


Don't focus on the titles, focus on what you work on; it pays off, and it's easy to create a contrast with the more amateurish.

First, don't consider your job to be mainly coding; it's not, it's mainly solving problems. If your amateur journalist solves company problems as well as you do with that little education, change companies to solve harder problems. It's not their fault they're super good where they are, and you're just seething that you over-educated yourself while being stuck changing button colors.

Now focus on how you solve problems beyond coding: do you understand requirements, can you even propose some, is what you build always either well-working or fast-delivered (both are OK, rarely done by the same people, and both can increase your reputation)?

I work in a large bank full of very old devs and empty of fads. I'm a bit of a wild card because I'm only 35, so I know javascript for instance, prefer maven to ant, etc.; but my true value is that I deliver prototypes fast and increase their quality as they are used, rather than spending 2 months hesitating over impossible edge cases. This has worked fairly well; I have enemies as well as support, and that's fine. Do what you can, help your company, change if you're useless, and that'll make you worthy of the craft :)


That cuts both ways though.

I switched careers, spent a lot of time on learning to code and am a great developer but have to work alongside people who have a CS degree but consistently write poorly-designed spaghetti code.

Boot camp graduates probably do drag down the average competence level a bit, but it was already shockingly low.


I agree with what you are saying. In one of the teams that I work with, we have a developer who was a truck driver for a couple of years and then decided to learn to code. He is one of the best developers I have ever worked with; he spent so much time learning on his own and studying, and the passion and drive that he has is inspiring. Even now, after 15 years in the field, he is always there and improving himself. At the same time, we also had someone with a proper formal education, and he was one of the least talented and most arrogant developers I had met until then. So it can go both ways, of course.

At the same time, I think people now just take this profession for granted. "I don't like being X, I heard that being a developer pays well, so I will become one in 3 to 6 months." I wouldn't be able to become a tax consultant at one of the big 4, for example, just by reading about taxes on Udemy.


There are two emotions at play here.

One of them is "I spent years to learn this and now these plebs can just learn it in 3 months then take my title?". I'm not saying this is _your_ emotion, but I certainly notice this in me.

Anyway, I think this kind of emotion is not productive. If programming really is so easy to learn then maybe we made a big mistake by taking a 4 year university program to learn it.

That said, I actually think programming is hard and whatever you learn in a 3 month bootcamp will in no way give you all the tools and experience you need.

The problem though is the average.

New programmers look out to what everyone else is doing in order to learn.

But because most programmers are beginners, it ends up just being a cycle of beginners teaching each other.

When people with experience do speak out, what they say sounds so completely different from what the rest are saying, so the beginners don't listen to their advice. They will think these people with contrarian perspectives are just weird.


They may take your title, but they cannot replicate the code quality that should go with it, if that's of any consolation.


It's not just about the code quality, although that's an important aspect of it. It's the quality of everything. The development environment. The architecture. The product itself. The "engineering" mindset.


Unless they write higher quality code than you of course.


Then they would deserve the title, and I would be proud of them.


> demeaning to our craft

This is exactly how I feel. Even having gone through a more practical IT education, most of the folks in my class were really really bad (and not very excited about programming besides). That combined with folks that do a little bootcamp and then end up doing some sort of frontend development is honestly just depressing. It's hard enough to help managers understand good vs bad engineering, I don't want to do it every day for the people I work with too. Especially not knowing that a large percentage of them will never get any better.


I feel you. I'm one of those who jumped from Biochemistry to programming. I remember the day in 2012 when I wanted to switch to CS while in my second year of varsity, and the head of that department looked me in the eye and straight up told me she had no space - despite the painfully obvious lack of queues and scores of empty chairs.

But alas, I've spent 6-7 years learning and grinding; I wanted to be a proper web developer. Just this morning I read another discussion about RISC-V and ARM architectures and thought to myself - I'm still an idiot. I have no idea what these guys are talking about. I still feel like an impostor. I've helped my company deliver some awesome products and get paid really well, but there is still too much I don't know.

Now I'm in charge of training new devs, freshly minted CS grads, and they can barely write readable code. I'm still jealous that they have a CS degree, which can open many more doors.


I studied applied mathematics. I feel like it gives me an insurmountable advantage over most developers when it comes to the real high-leverage stuff: Process automation.

Every company needs a website and there are plenty of cheap services offering to build one. But the real money is made if you can make someone's internal processes or production planning more efficient. For example, what's an AI worth that automates a process currently costing $1 million annually? Management will be ecstatic to pay you $200k annually to get those cost savings...

And for those tasks, you need a strong algorithmic and mathematical background.


> And for those tasks, you need a strong algorithmic and mathematical background.

Why do you say this? If you implement it via AI then sure, but I’ve done plenty of process automation without needing a strong mathematical background.

What I have in mind is speaking to business people to understand what they do, learn what the business rules are and write systems that implement those flows (and hopefully skip steps that are actually redundant).

Perhaps you meant something else?


> You have to jump between 4 different repositories to fix a small bug. etc etc etc.

For me this one also seems inevitably connected to the idea of microservices, where people set them up in the worst possible way (de facto tightly coupled interfaces but with REST or RPC calls in the way, separate git repos, multiple databases and multiple Docker/Kubernetes deployments for even small projects, etc etc) and so add huge amounts of developer overhead to web apps that would have been single Rails projects back in the day.


I refuse to work for any company that does microservices again. Or the "distributed monolith" spaghetti that it usually ends up being. It's an exercise in misery.


I can deal with lambdas if every lambda has the entire application inside.


That's a little extreme.

Maybe instead you could probe at interview time whether they are happy with their microservices: how many do they have, do they feel the service boundaries they finished up with are the best possible, etc.

Not all microservice setups are shit shows.

Plenty of monoliths are.


> Maybe instead you could probe at interview time whether they are happy with their microservices

There is no way anyone tells you in an interview that the codebase is crappy or that they are unhappy with it.

You’ll always be told « well, like everywhere, we have legacy, but we learned from it, and now every new project is in [put REST + shiny front tech here] and we are happy with it. We migrate step by step ».

And then you’ll realize that the codebase they called legacy is what makes the company’s money, that it’s not that bad compared to the new projects, and that it’s a pain to maintain only because nobody works on it anymore except for urgent bug fixing.

So, they’ll probably tell you that « they are happy with their microservices » (or quite happy) because they already try to fool themselves about it.


This is so accurate it hurts. No interviewer is ever going to say "dude, our codebase is terrible. The CTO wrote most of the core logic 9 years ago and everybody is afraid to touch it, so we just paper over it with increasingly complex interfaces and abstractions. There are like 25 different ways of doing everything, because every 6 months a dev comes in and thinks we should use their favorite new design pattern everywhere, but then gives up halfway through implementing it. Everything is either massively over engineered or complete spaghetti."


To be fair, I have had that conversation in a couple of interviews but then I'm usually coming in as a contractor to patch up these kind of things.


> Not all microservice setups are shit shows.

Yeah they are. One service per team, the thing that Amazon was originally doing that was the inspiration for all this, works, but you wouldn't call that "microservices" these days; the thing that people call "microservices" is always a shit show.


I've worked at one of those "one service per team" places and, boy, it was the suckiest, most incredibly stupid thing ever.

I needed a place to store marketing and PR templates that were completely unrelated to the core of our product.

Of course the rule was "one service per team", so the CTO demanded that I store it together with our service, which is the part that performs customer authentication. Meaning, the part that is responsible for securing customer data and everything else. Whenever we had auditors checking it, they were puzzled as to why there's marketing material storage inside the auth microservice.

This could have been the poster-child example of a good place to have a separate service, but no, the law was the law.

It's almost as if there's no such thing as a silver bullet.


So why was the team that did customer authentication responsible for marketing and PR templates?


Because "one thing per team" is equally as idiotic and bureaucratic as "one service per team". The company was technologically dysfunctional, not organisationally dysfunctional.


Why is it extreme? There are plenty of companies out there that don't do microservices. There's no need for me to work for one that does. Maybe I'll miss out on a job with an incredibly well architected microservice that I would love, but I'm perfectly content with missing that (rare) job.

I've found it incredibly hard or impossible to gauge the quality of a codebase during interviews. (Beyond immediate red flags like "oh, we don't write any tests or do code reviews" or whatever.) And to be honest, most codebases are bad, it's just a matter of degree. And I would almost always prefer working in a bad monolith than bad microservices.


> For me this one also seems inevitably connected to the idea of microservices

Oh yeah. But I've seen this also happening in frontend libraries, and sometimes even inside monoliths.

I find that inexperienced developers tend to break up services/libraries/classes purely because they believe that "code should be neatly organized", not because there is a logical boundary between two parts of a system. Thus, we get lots of tight coupling, chatty microservices, and 20 lines of imports at the top of the file.

I find that A Philosophy of Software Design by John Ousterhout [1] has a good rule of thumb: classes (and services, libraries, repos, applications) should be "deep, not shallow". The "surface" (aka the part that touches external components) should be as small as possible, while the internals should be larger. Of course that doesn't sit well with people looking to "neatly organise code".

[1] https://www.goodreads.com/book/show/39996759-a-philosophy-of...
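
A rough TypeScript illustration of the idea (invented example, not from the book):

  // Shallow: a wide surface that leaks its internals; every caller has
  // to orchestrate the steps itself.
  interface ShallowStore {
    connect(): Promise<void>;
    authenticate(token: string): Promise<void>;
    openBucket(name: string): Promise<void>;
    rawPut(bucket: string, key: string, bytes: Uint8Array): Promise<void>;
  }

  // Deep: a small surface hiding the same machinery; the interface is
  // much simpler than the implementation behind it.
  interface DeepStore {
    put(key: string, value: Uint8Array): Promise<void>;
    get(key: string): Promise<Uint8Array | undefined>;
  }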


I believe that code should be neatly organised, though I have no preference about how this is accomplished. What would you suggest I should do, if not split classes?


You should split classes! But only where it makes logical sense, where there is a clear logical boundary.

Splitting code as a way to visually organize code can lead to tight coupling, which tends to be worse than large classes.

If you split a class but there’s still multiple “points of contact” between both, it wasn’t a good split at all.


People split classes for visual reasons?

That sounds unwise. I only ever split because the class is getting too large, or because some functionality is better off being moved to its own class. And those other classes also should preferably only have one or two methods exposed to the outside, there's nothing worse than a large surface area unless it's a library class.


Yep, you are completely right, this is extremely unwise. Problem is, a lot of material aimed at beginner developers basically instructs them to do that. Also, a lot of linter documentation shepherds users into doing it (I'm thinking of some Ruby linters here).

> there's nothing worse than a large surface area unless it's a library class

Yep, exactly. Even big classes are better than tightly-coupled classes with large surface area.


Totally agree. But I would say it's not just lack of experience. It seems a lot of the people who now have 2 years of experience are "Leads" or "Seniors", and they have a very dogmatic way of thinking based on the exact trends and headaches you described.

We had a new lead join, whose first job was as a Lead(!), and he insisted on adding a pre-commit prettification hook. Due to the specifics of our work, this formatting creates all kinds of bugs and means the local code you work on doesn't match the code on the repo or server, which is terrible, but I have no power to change things because they've been taught to believe that this is the only way.


To be honest, I find that in the long term using an autoformatter greatly reduces what I call "diff spam". There's always that one guy using another IDE with his own auto-formatting tool that rewrites whole files. This is extremely annoying when reviewing merge requests and commit histories. Using an autoformatter solves this issue (and many other formatting related timesinks).

In addition, I think for most use cases having formatting-dependent code is kind of bad practice (not talking about indentation based code scoping à la python here). Although I do have the occasional struggle with ascii diagrams / comment formatting.

All-in-all I understand both sides, with a slight preference for using one. Manually formatted code in some cases can be much more beautiful than autoformatted counterparts. It's certainly not something I would impose; rather a team-decision after a short discussion.


While I am a fan of auto-formatting, I do use it sometimes and don't think it is necessarily dumb in all cases (and so agree with your premise of making it a team decision)... but, doing it as a pre-commit hook--as opposed to doing it when your file gets saved and merely verifying it was done pre-commit--is extremely dangerous as it leads to these issues like "code is tested locally... and then something different is committed that should be the same, but might not be" and is the real problem being described.


For example, I recently spent several hours fixing a pre-commit hook that would apply clang-format to any file changed and then `git add` the file.

Thus, the patch shown in `git commit --verbose` would be very misleading. It would show the original patch, not what was actually going to be committed. Due to the hook, any other changes that happened to be in the same file would be silently committed (even if they were never staged for commit).
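
In hook form, the dangerous pattern versus a check-only variant looks roughly like this (file globs hypothetical; --dry-run/--Werror need a reasonably recent clang-format):

  # .git/hooks/pre-commit -- the problematic version
  files=$(git diff --cached --name-only -- '*.c' '*.h')
  clang-format -i $files
  git add $files   # silently stages *all* unstaged edits in those files too

  # safer: verify only and fail the commit; nothing changes behind your back
  clang-format --dry-run --Werror $files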


(OMG I somehow edited this sentence so many times I dropped a "not": I am not a fan of auto-formatting ;P.)


But how did we get to the point when a full-source patch is even acceptable? Back in the day I’d come to this guy to ask why a single feature modified every place and question their understanding of what we use version control for. Why do we give a fuck about some funny IDEs which do not get it either?

That points us to the source of all the complexity we have now: it’s not newbies who broke things, it was tooling written by complete idiots.


It's really a trade-off. If the benefits outweigh the possible negatives then I agree, but in our case it's clearly not good, since the code being sent for review does not match the code in your local dev environment.

Worst case scenario, you can format code on your local machine using your preferred formatting rules. I prefer to do this with the A and B documents since they will be using the exact same formatting (B comes from a different repo/team without our pre-commit prettification).


Putting an auto-formatter in a commit hook is just one of many small things that accumulate to make your development experience frustrating and annoying.

If you have a guy who always produces diff spam, just talk to him and ask him to separate his commits and minimize the diff spam. You don't need to ruin everyone's experience just to counter that.


Well, I think it is because so many people don't use version control in the intended way nowadays. I always review my changes before committing / pushing, but I have the impression many developers nowadays just stage, commit and push, as if VC is some sort of simple storage mechanism. I have talked to numerous people over the past few years to ask them to disable auto-formatting and/or to check their changes before pushing, but alas not always with the intended result. In the end I decided to push for using auto-formatting commit hooks in some projects (even though I dislike it on my own code).

An unfortunate observation: the majority of these projects were bloated, frontend javascript applications. In complicated backend applications I seem to encounter a higher SNR, and thus no necessity for commit-hooks.


Ultimately version control is about easily merging changes from people working on different machines.

Being able to "review" changes as clean set of patches is not essential.

The mindset of "we must review every change before it goes in" is not based in reality anyway.

Have you ever seen a system of code reviews produce useful benefits that would not be there without it?

You can always review already-committed code and discuss it within the team. It does not need to happen at commit time.


> pre-commit prettification hook

At my work we do this the "right" way in C# with a gerrit flow: run "dotnet-format --check" in the pre-push hook. So you can locally commit what you want, it must pass the check before pushing, and it doesn't automatically change anything without asking you or letting you review it.

> due to the specifics of our work this formatting creates all kinds of bugs and means the local code you work on doesn't match the code on the repo or server, which is terrible but I have no power to change things because they've been taught to believe that this is the only way.

.. but this is the real problem.


You could suggest that this hook only be executed on a small subset of the repo to start with.


I have. I suggested limiting it to certain unproblematic areas. But it's a dogmatic, all-or-nothing type of attitude, unfortunately.


I think there is a bit of Stockholm syndrome in there. I did some consulting for places that had floors filled with web developers. What I saw was basically that they would assemble an insane amount of tooling, dependencies, frameworks and processes only to create something akin to "hello world" compared to the amount of effort spent. They would also think that this is the "right" way of doing things and enjoy it. The more complex their stuff is, the smarter they think they are. They are also super opinionated, and God save anyone from ever questioning what / how they do things.


Oh, so true. All this shiny tech is only useful for the few big corps out there who can hire the best over-your-head smart guys for floor-mopping roles. Otherwise, it’s just snake oil. “We need to make our code modular!” — a company that writes a todolist-like SPA behind the frontpage. “We need scale!” — a company that will never exceed ten requests per minute. “We need a complex build system with plugins and transformers!” — a company that has fewer than a hundred assets, manageable by a single zero-expertise role. This plague is everywhere now.


You know how many people like puzzle games?

For a lot of people, toying with complex systems and configuration files is fun in the same way that puzzle games are fun.

These people don't play puzzle games. They already spend all their day playing this puzzle game.

They are not in the business of making something useful. They're in the business of pretending to be wizards who can wield technology to their will.

If they were in the business of building useful products, they would try to simplify the process as much as they can.

But since they're in the business of playing the configuration puzzle game, they have every incentive to not simplify anything.

The more complex it is, the more rewarding it is for them when it finally "works".


The flip side of this is thinking you can replace a complicated but working system "in a weekend at most", convincing management with a prototype that's missing all the edge cases and most of the features, and unnecessarily rewriting everything until you end up back at the start. An engineer needs a few attempted rewrites under their belt to learn humility in the face of business rules.

Of course, sometimes you actually can rewrite it in a weekend.


I've dealt with this mindset almost daily, but replace the weekend project with a small-to-medium (sometimes large) project.

When they fail, they unfortunately do not learn humility and do not end up back at the start (jobs are on the line).

Instead, they massively descope the project and add a layer of 'shininess' on top of the old working system. Now the company has to keep some of the old system in place, creating a service with two systems built on two different types of technology (a PITA for BAU run teams).

Rinse and repeat every 5-10 years, and the service becomes a monstrously complex system.


I still find it very concerning that you need to trick management into a better system... Edit: plus sacrifice an unpaid weekend of work for it.


> If you are experienced and notice that things at your company are broken, you either try to advocate for fixing things or just leave out of desperation.

See also the Dead Sea effect:

http://brucefwebster.com/2008/04/11/the-wetware-crisis-the-d...


Thanks for the link, I think! (I might have read that before.)


Everything sucks, but I have come to the conclusion that this is a good sign. It means there are not enough programmers to do the things we know are perfectly possible. It's not just web dev, which I actually don't touch. Recently I've been writing LaTeX again, and I find the experience surreal. Hell, there are several open-source computer algebra systems with many man-years inside, and they are barely maintained. Most (all?) of them are written in Common Lisp; they work, but need some serious interface work. Our imagination has no limits, but our time is finite.


The title of this blog post is interesting, i.e., the use of the word "facts".

The post seems to be a "listicle" of opinions, not facts.

If we embrace the opportunity to generalise like the OP, we might wonder if "every web dev" has difficulty appreciating the difference between fact and opinion.

As for this top comment, is the use of the term "average programmer" significant, considering the OP specifically refers to "every web dev"? The parent comment appears to reframe the context from "every web dev" to "the average programmer".


They address why at the beginning of the piece.


There are even company mottos like "Hire talent over experience until it hurts", which contributes to said desperation when you first hear it.


The orgs with unnecessarily complex setups don't make sense, though. Aren't the more experienced devs responsible for those? If so, shouldn't they have the wisdom to avoid those newbie pitfalls?


It's usually those in the middle of the pyramid who get attracted to that complexity. Once you climb further, you revert to simple stuff that works effectively, because now you focus more on the essence than on appearances.

Here's an example in Haskell: https://www.willamette.edu/~fruehr/haskell/evolution.html


It takes self-discipline to stop making things more complicated than they need to be, but I think at a certain point you just lose interest in overcomplicating things. At least I have. I am much less interested in doing some fancy new thing in Haskell than in solving the problem in a straightforward, effective manner.


Yes, but the software stacks aren't the popular choice because the programmers are new; it's because the programmers want to be seen as "not new" within two years. There is a heavy selection bias for very driven programmers to become senior programmers by their second year of experience, because they are experienced in the popular new shiny thing. That's the causation behind the correlation: by not being easily replaceable, they command a high premium for their time sooner. Sometimes there is actual utility in knowing that newer stack; sometimes.


It becomes popular in the first place because the newbies find it interesting.


For real, the current JS landscape seems like a mix of three very distinct components:

- JS itself, which was worse and is now, ahem, less worse; npm kinda fits this bill as well

- solid libs and technologies made by people who know what they're doing (React, Angular, TS, etc.). Of course, those are not perfect, but you can see the engineering behind them.

- a mishmash of "works for me" crap done by people who feel like importing left-pad


Angular is really bad. So is Redux. They are some of the worst over-engineered projects I've seen.

TypeScript is really cool, but no one on that team cares about "compile times". Of course, what I actually mean is "type-checking time".

There are very few good things on npm.

React has the right idea (the DOM is slow, so let's put a "virtual DOM" in front of it to minimize our interaction with the real DOM), but the implementation could be a lot simpler (and smaller); see "preact", for example. I'd go further and question the whole notion of "class components", "function components", and "hooks". Why not just functions that take arbitrary arguments and return a vdom tree? You could easily add constructs for caching return values when none of the inputs have changed since the previous run (some system like the 'observables' from Knockout or the 'atoms' from Jotai/Recoil).
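A minimal sketch of that idea in TypeScript (all names here are made up for illustration; this is not preact's actual API):

  // Components as plain functions that return a vdom tree, plus a memo
  // wrapper that reuses the previous tree when the inputs are unchanged.
  type VNode = {
    tag: string;
    props: Record<string, unknown>;
    children: (VNode | string)[];
  };

  const h = (tag: string, props: Record<string, unknown> = {},
             ...children: (VNode | string)[]): VNode => ({ tag, props, children });

  function memo<A extends unknown[]>(fn: (...args: A) => VNode) {
    let lastArgs: A | null = null;
    let lastResult: VNode | null = null;
    return (...args: A): VNode => {
      // Shallow-compare the arguments against the previous call.
      const unchanged = lastArgs !== null &&
        args.length === lastArgs.length &&
        args.every((a, i) => a === lastArgs![i]);
      if (unchanged && lastResult !== null) return lastResult; // reuse old tree
      lastArgs = args;
      lastResult = fn(...args);
      return lastResult;
    };
  }

  // An ordinary function as a "component": no classes, no hooks.
  const Greeting = memo((name: string) => h("p", {}, `Hello, ${name}`));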

esbuild is really great; it completely changed my development experience. The interesting thing is that it's not written in JavaScript at all. It's written in Go.
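For reference, an entire esbuild setup can be this small (a sketch using esbuild's JS API; the file names are placeholders):

  // build.ts: bundle, minify, and emit a sourcemap in one call.
  import * as esbuild from "esbuild";

  await esbuild.build({
    entryPoints: ["src/app.tsx"], // placeholder entry point
    bundle: true,
    minify: true,
    sourcemap: true,
    outfile: "dist/app.js",       // placeholder output path
  });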


Also, even among the non-newbies, most programmers are average.

Most people are average; talent is roughly Gaussian. Besides, for most people IT is a job, not a passion, like in any industry.

It's true of everything: doctors, electricians, geographers.

So if you love your field and are passionate about what you do, enjoy that, but don't expect others to follow.


This, and inexperienced managers really like to boast about "their" team size, so they would much rather hire more cheap, inexperienced developers than pay fair rates for one experienced developer, especially since the latter might command a higher salary than the managers themselves.


> If you are experienced and notice that things at your company are broken, you either try to advocate for fixing things or just leave out of desperation.

If you’re “experienced”, then you just fix these issues (especially the ones you mentioned).


You're not allowed though.

One of the counterintuitive functions of code reviews is to keep code quality hovering around the average.

If someone experienced tries to fix things, their fix will look wrong to the inexperienced because it differs from what they already know, so they will push back very strongly.

Unless you have an iron will, an iron fist, and an official title within the organization, it's nearly impossible to fix things.


I'm idly toying with the idea of just switching to DevOps permanently and keeping my programming as a hobby. I still consider myself a Java dev first and foremost, but since I switched to DevOps I've had less stress.


Same here, but as a Python dev! Though DevOps is haunted by its own monsters: when a company starts out, everything DevOps-related is an afterthought, patched over with layers of temporary solutions until someone dedicated to it is hired, and the first thing that person faces is untangling the conceptual spaghetti of technologies.

Still, I'd take DevOps problems any day over pure programming.


I don't know. For me "modern" DevOps is a much worse nightmare than "modern" web development.

My devops consists of uploading files via scp and launching an executable via ssh.

"modern" devops involves a confusing web of config files and programs whose names I don't even want to remember.

I'd go further and argue that probably a lot of what makes "modern" web development terrible is kind of downstream from "modern" devops.


A new programmer is not necessarily a bad programmer; in a normal setting their responsibility is limited. A bad new programmer will mess up code style; a bad senior will destroy the project.


I will never understand why running JS through webpack is so much slower than building two dozen projects with 200k lines of code through MSBuild.


You just summarized my career so far.

