2. Most devs I know consider writing simple web apps/interfaces to be just about the most boring work you could possibly do.
This isn't quite as prevalent in hardware, where even well-funded startups have to worry about actually achieving positive margins and can't just hope that growth will eventually outpace whatever cost structure they've burdened themselves with.
I’m not sure this is a problem though. Many large companies are not hiring in the valley because the high level of compensation is making it uneconomical.
Sometimes I feel SF is all SW and HW has moved to Shenzhen.
I left embedded dev because I saw this coming. I'm just a slow developer who wouldn't have cut it with mobile software's emphasis on cranking stuff out faster; that's just not how I work.
I get the impression embedded is still a somewhat slow market. Or has that slowness meant it's dried up really badly?
Embedded has been consolidated into a bunch of huge multinationals like Texas Instruments, Qualcomm, etc. and they're doing alright financially. It's just a very innovation-lacking industry compared to software, so you have a chicken-egg problem with investors. It'd be like trying to start a CPU company now when we've got Intel, AMD, IBM, and maybe one or two more boutique makers teetering on the edge of bankruptcy constantly.
On the flip side, a private company that makes billions with its website should be more inclined to pay its devs better.
The React developer will have to retrain and rebrand themselves as Bojombo developers in 2 weeks and as Klazoum Framework ninjas in 2 years.
I find that almost every new embedded project comes with yet another vendor architecture, vendor-specific toolchain, documentation that's wrong, totally different peripherals and drivers, weird limitations, undocumented pipeline glitches, and so on. Even the CPU architectures and instruction sets are often different, if it's a DSP, or an SoC with DSP coprocessors glued on.
Basically, new things have to be learned often in that domain, at least on the jobs that come my way. For a while I got a bit anxious on new projects because I couldn't understand how to make seemingly broken tools do what the client wanted: things like getting signal data into a DSP simulator when none of the (GUI-only!) buttons did anything at all. Eventually I learned that every new project takes about 2 weeks of restraining the urge to swear, and then it all starts to make sense: the undocumented stuff, tools working differently than the documentation says, secret header files and options, libraries that no client of the vendor has ever used (full of empty function bodies, or just broken, but sold to my client as if they were in mainstream use), weird memory models, etc. After that I'm productive. So now I just expect this on any new-architecture project, and it works out OK. Just budget 2 weeks of swearing restraint at the beginning.
On front end webdev, there is infamously a huge amount of churn too. E.g. every time a new Webpack comes out I have to spend quite a while figuring out how to rewrite the config files for the new version for best results. WASM differs from asm.js; grid and flexbox are a new way to achieve what we used to do with floats, and there's an interesting new browser API each week.
But React hasn't changed much in half a decade, and continues to be one of the main front-end frameworks. You could have learned React 4 years ago, kept using the same skills today, and continued to be well paid if you're good at it. That pay is, indeed, at least 50% more than the pay offered for low-level embedded, even though React work is way, way easier.
But remember, you can negotiate... Just taking a software dev position as an individual is not the route to the best pay in embedded hw/sw projects. If you can instead sell product development, where the client wants something built for them and you build it, complete with project management, architecture, technology selection, supply chain, and subcontracting or setting up a project-specific company, then the scarcity of relevant skills works in your favour and the pricing is quite different (better). Those projects are great if you can get them, and sometimes a lot of fun.
Embedded C/C++ development has been around much longer and is far more conservative in terms of adopting new technology. So they have a much larger labor pool of experienced developers who are going to be familiar with the core technologies of the job. Web shops migrated quickly to technologies like Angular and React, whereas the vast majority of C++ shops likely won't be adopting C++17 any time soon unless it's a smaller team's greenfield endeavor. I'd also argue there's less training necessary to go from, say, C++03 to C++17 than going from ES5 jQuery to ES6+ (or Typescript) React.
demand / supply
You might find 1,000 devs who know React, but if the need is for 10,000, while there are 50 VxWorks devs for 40 openings, then that explains the salary.
If you downvote, please explain how supply and demand is not a thing for tech salaries.
HN is starting to look like Reddit where people bury your comment if it conflicts with their views.
I feel downvoting on HN should affect your own karma by, say, -100.
(I also have started seeing significant misuse of flagging on HN)
But it's not really enough to explain it because the engineers who do Vx could always just switch to a better paying platform.
I wonder if the low salary indicates that the VxWorks devs at that low price aren't able to switch out, whether due to location, lack of ability, or some other capture.
More importantly, frontend is extremely demanding in terms of UI. With Qt, wxWidgets, or native GUI toolkits, it is perfectly acceptable for apps to look the same; on the modern web, every significant app is expected to have its own "design language" and not look like the others. On top of that, HTML by default isn't very stateful. Sure, it has some stateful components like checkboxes, but in general you have to store the state in code, and since every code base needs a custom UI, you end up with a state-space explosion. That is also why frontend frameworks these days try to force things into finite state machines: Redux, the Elm architecture, whatever term you prefer. Half of a UI codebase often has absolutely nothing to do with business logic; it's animations and user-experience code to make things feel "slick". An ASP.NET WebForms app just isn't going to cut it if you want to build a SaaS. Sad, because tons of money has been wasted on essentially marginal improvements to user experience, but good if you're the consumer, of course.
Gaming has similar-ish needs but is famously oversupplied.
Quant finance pays really well, though.
Given the... mixed... reaction to large defense projects, it's possible that the defense contractors are not focusing on engineering per se.
This is an important milestone: a significant actor in the embedded world, where suppliers are often rather conservative regarding the languages/toolchains they provide, is putting some of its weight behind Rust.
RTPs are VxWorks Real-Time Processes.
Right now, we are looking at moving away from vxWorks and onto a Yocto-built Linux with the RT patches to the scheduler. From a requirements standpoint, the RT patches meet our needs for "real time" since our timings require more of a soft real time than a hard one. There's also way more support out there for Linux, and I am also slightly biased in that I am more of a Linux guy than a vxWorks guy simply due to familiarity and ease of troubleshooting.
This latest news with vxWorks is great, however. I actually sent this to management because we are attempting to figure out what direction to take as our current version is reaching EOL.
This decision is probably out of your hands, but I'm happy to share a few words of caution. I've been the Yocto distro maintainer at a large firm for about three years and I absolutely despise it. BMW released a slide deck about their experience with Yocto, and it was mostly "it sucks." I read it recently and it matched my experience to a T.
If you absolutely need third-party commercial support, Yocto is probably the only game in town, but if you don't, I would highly recommend looking at Nix (which recently gained cross-compile support) or Buildroot.
I'm dead serious about avoiding yocto if at all possible, I've seen things I wish I could unsee. But that sounds about right for a military project.
EDIT: I put my email into my profile if you want to chat about yocto more in depth.
Due to the cost and availability of modern tools (compilers, testing, V&V, CI/CD, and virtualization), there is a big trend right now to migrate those projects to RT Linux, which has come of age. I guess the support for modern languages from Wind River is their attempt to avoid losing customers and to attract new ones. They used to have an edge due to the certified environments, but modern development and systems are becoming so complex (e.g. networking, cloud, etc.) that using a pre-certified kernel is just a tiny bit of the certification process, and in many cases the complete system has to be validated anyway.
My favorite part of VxWorks was WindView. It's an awesome tool that shows task execution.
It's the de facto standard in defence engineering.
Having said that, there is nothing special about VxWorks; skills learnt there apply directly to other RTOSes.
I'm very curious about your experience in that market. Especially whether the GPU vendors give you enough information under NDA to make their parts reliable, or whether you mostly black-box them using a subset of their features, and so on. Also, the QA techniques you use in such scenarios. I think some of the tricks on your end might be applicable to other types of hardware.
For graphics APIs we implement a subset of OpenGL or Vulkan listed under their safety-critical standards. We are part of Khronos and worked with them to define the OpenGL SC 2 standard, and we are currently working on defining the Vulkan SC standard. The safety-critical versions of the APIs are a reduced subset that eliminates anything that would be more difficult to implement in certifiable code.
As for our QA process, it's really split into two distinct things. There's the normal QA side, with all the run-of-the-mill testing and verification, and then there's the actual certification team, which is responsible for meeting all the DO-178 requirements as defined by our process. Like a lot of other companies who do cert work, we don't create all the artifacts from the get-go. There are a lot of defense contracts where they want "certifiable" but don't want to pay for the actual evidence, so we won't go out of our way to produce it if no one is buying it. We do have a strict coding standard and code reviews to ensure that any code checked in will actually be certifiable, but certification is way more work than just writing the code. Once we have a paying customer for cert, we go through and write all the test cases, fill out all the requirements, and make sure every i is dotted and every t is crossed.
Does that answer all your questions?