Don't go there. There's really not a lot of work. But if you must, low level is more a calling than a learned skill.
You probably shouldn't be learning assembler. First, compilers are really quite good. Yes, it's possible to beat them (I do) but generally not by much. And not by much ain't gonna put bacon on the table. You can probably get what you need from gcc inline asm() calls. Take a look at the linux sources and figure out why and when assembly is used there:

http://stackoverflow.com/questions/22122887/linux-kernel-ass...
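For instance, here's roughly what a typical inline asm use looks like (sketched for x86-64 with GCC/Clang syntax; constraints vary by compiler and target):

    #include <stdint.h>

    /* Read the time-stamp counter without dropping to a separate .s file. */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

Most of the time a couple of lines like that, wrapped in a C function, are all the assembly you actually need.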
Secondly, writing in assembler is not low level. You just think it is. You should really be understanding caches and you can improve your cache performance in C.
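A trivial illustration of the kind of win I mean, assuming a large row-major matrix (the actual numbers depend entirely on your cache sizes):

    /* Fast: walks memory sequentially, roughly one cache miss per cache line. */
    double sum_rows(const double (*m)[1024], int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < 1024; j++)
                s += m[i][j];
        return s;
    }

    /* Slow: same arithmetic, but each access strides 8 KB and thrashes the cache. */
    double sum_cols(const double (*m)[1024], int n)
    {
        double s = 0.0;
        for (int j = 0; j < 1024; j++)
            for (int i = 0; i < n; i++)
                s += m[i][j];
        return s;
    }

The instructions are essentially the same either way; the difference is purely memory access order, and no amount of assembler fixes the second version.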
Anyways, unless you deeply know what's going on inside of the microarchitecture of a modern superscalar, out of order, speculative, renaming, μop-cached, hyper-threaded, multicore beast then you shouldn't be fooling yourself by writing in assembler.

http://blog.erratasec.com/2015/03/x86-is-high-level-language...
Unless you've really read Intel's Intel 64 and IA-32 Architectures Optimization Reference Manual (and ARM's Cortex®-A72 Software Optimization Guide) and meditated on the suras of Agner Fog's Microarchitecture you won't even know what's going on with something as simple as mov RAX, RBX.
Look, most compiler writers don't even know this stuff (Intel C Compiler yes, llvm occasionally) and frankly, it isn't very useful because Intel spends a billion dollars a year to make your bad x86 code run reasonably fast. Consider a switch statement which compiles into an indirect branch, jmp reg. That branch has to be predicted by the BTB before the jmp reg instruction is even fetched and that's really hard to do. Every year they get better and better to the point that you're not even aware of it. But if you want to help the CPU out, you could put a UD2 right after the jmp reg. This is insanely hard to understand and will help very little.
I agree not to go into low level programming expecting a wealth of job opportunities to suddenly open up, but I wouldn't tell people not to go there at all. Not only is assembly really fun to play around with, I feel like I've gotten a lot out of the bits of assembly I've read/written. Even though I've never written assembly code for work, being able to read the disassembly in gdb has come in handy before. Also, a lot of the subtleties of C/C++ never quite clicked for me until I had an idea of how the generated assembly would work.
It's similar to learning a functional language. I have no idea if I'll ever use Haskell professionally but learning a bit of it has been a good way to see problems and logic differently. I think both assembly and Haskell have made me a better programmer, even if I never become truly proficient in either or use them directly in my job.
Totally agree. When I taught x86 assembly in college (in the 90's), the goal of the class was to give the students a better idea of how things worked under the hood to improve their C programming, not to turn them into assembly programmers.
Just today, I helped a coworker debug a segmentation fault that occurred in a library whose debugging info had been stripped, by looking at the assembly code.
> Just today, I helped a coworker debug a segmentation fault that occurred in a library whose debugging info had been stripped, by looking at the assembly code.
Being able to debug applications which don't have debuginfo (particularly optimized builds) is highly valuable though.
It really does give you insight into some of the lower level stuff. It won't give you the full picture but it will give you ideas about how you might be able to take better advantage of your hardware by changing how you use data.
As an embedded programmer, I wholeheartedly second this.
I'm in an area where there are relatively few opportunities to do close to the metal work. The ones that do, don't pay any more than backend web developers make. In fact I've come to assume that they can offer less due to either the intrigue of the work, or because you often compete with computer/electrical engineers who aren't expecting outsized developer salaries.
Likewise, while I have loved the skills I've learned in embedded and low-level work, working at an embedded shop has taken a lot of the joy out of the learning. You trade an inherently rewarding development environment for the realities of developing against hardware, where project cycles are long, you're often dependent on horrible vendor APIs and support, and where everything moves much slower.
The counter to that is the type of work I believe you're describing, which sounds like HFT optimization on Intel. I got into embedded because I wanted to eventually end up there, but again, it's an extremely limited market. And as someone who has indeed dabbled in the Fog optimization manuals, you quickly come to realize that the ROI of hand-written assembly usually isn't worth it. The future of speed isn't going to be going lower down the programming stack. It's going to be in new hardware: FPGAs and ASICs, writing RTL for system-specific CPUs. And that is highly exciting, though it means a pure programmer has to learn hardware at a deep level, as you imply.
As for me, I concur with the statements others are making about using my skillset for security research and reverse engineering. Working in embedded development has bored me to tears, and taken the magic out of learning to love the skills themselves. I'd much prefer to work at a pace that isn't limited by the pace of a cross functional team of software and hardware engineers.
Learn the skills, yes, completely. The knowledge is worth it. But for the love of God, don't expect to get joy out of working in it.
Could not have said it any better. Embedded development is dominated by hardware engineers who very often make decisions without the software development team's input, placing embedded folk squarely in the line of fire of management angst as well. Choosing Broadcom chips without drivers, third-party experimental hardware vendors who won't release data sheets, choosing USB chips that don't support host mode, miswired memory interfaces that mismap FPGA access, FPGAs that cause spurious bugs when consuming more power than necessary in certain modes -- all of it lands squarely on the embedded dev's shoulders, and can manifest at first as what looks like pure software problems, with the schedule pressing. Not a pretty world.
Are you me? Honestly, that sums it all up even better. In an embedded role, you are always a second-class citizen. After 6-12 months of planning by the hardware folks: "The hardware is all done. Where's the software? What do you mean it's not finished, you had all year...?" You're a complete afterthought, and it's only amplified when working with a team that doesn't really "get" that without access to the actual, finalized hardware you're extremely limited in capabilities. It only gets better when they decide to go with one-off vendors who have a single product support engineer, and it's pulling teeth any time something goes wrong. It's often hardware related, but who cares -- you're the guy at the end of the pipeline who's tasked with making it work, so yeah, you feel the wrath.
I'm only extra cynical because I'm dealing with that right now. As I have many times before, though. It's part and parcel.
OK, I've worn both hats - chip designer and embedded systems programmer - it makes me better at both - I get to be the chip guy who understands software and the software guy who understands hardware - it means I get the interrupt setting/clearing logic race free and make sure all the registers can be read, and can make tradeoffs that minimise the amount of hardware we have to build.
The big issue is timing (and latency) - the chip guys are working on long timelines - they're already working on the next chip when the first chip's silicon comes back, their investment and attention is elsewhere, the software guys aren't going to rev up much before real hardware is in their laps, and certainly aren't going to spend any time on that second chip while they're still wrangling the first one - it's not so much a cultural gap between the groups as a gap in time
yes, but it's not cheap and is essentially a second parallel tapeout path that slows your chip design time - if you're building a CPU you likely build a high-level software model and code to that (with some register-virtualisation layer), then test against that as part of the chip DV (QA) process
In defense of HW engineers, they're often staring at a cost of goods spreadsheet.
Earlier in my career, I had a driver that compiled to about 8500 bytes. I noticed the part was spec'd at 16K and wondered if it could get re-spec'd at 8K if I got it down to 8K. Yep and I got mad props from the HW types.
Understood, but when they stare at that COG sheet they should also have the presence of mind to look for hidden costs. A toolchain and IDE with a C compiler will cost the project much less in the end than an 8K chip with only an assembler and no debug environment/support tools will save. It's very easy to be penny wise and pound foolish in this area.
Please don't discourage people without having the full view.
There are many good low-level teams with software-first people. At least in the Bay Area there are many exciting projects for people with this skill set: self-driving cars (Waymo, Tesla, etc.), VR/AR headsets (Microsoft, Facebook, etc.), phones (Qualcomm, Google, etc.), wearables (Apple, Fitbit), GPUs (Nvidia), and startups like AirWare.
Sure my experience may be skewed too but there are definitely great jobs with rewarding experiences of shipping tangible products for low level programmers. Not many people get the joy of shipping something your friends and family use and literally say "wow".
There are plenty of reasons to know assembler besides performance. I am completely ignorant of all the pipelining mechanics you discussed but have gotten paid to use assembly for years.
1. Writing embedded software. You're not going to get very far if you don't know how to set up your memory, text section, etc.
2. Debugging embedded software. Good luck interpreting your JTAG output if you don't have a good knowledge of assembler. Even if you get to use gdb/Wind River Debugger/etc., you will want to be able to tell what is happening. Symbols only go so far. A backtrace will get messed up and you'll need to look at the stack to figure out what happened. An access violation will occur and you'll need to figure out what line of code it corresponds to.
3. Debugging without source code. Same as #2, even more difficult.
4. Reverse Engineering. You are trying to figure out what is happening in a proprietary driver binary blob (such as NVidia's Linux driver). You are trying to figure out how to get your software to work with an ancient piece of software whose author has long since gone out of business. Even if you have the cash for the Hex-Rays decompiler or can readily use the Hopper decompiler, this code is simply not high-level enough to interpret without knowing assembly and having a deep understanding of the stack and memory.
5. Vulnerability research / hacking. You are writing an open-sourced client for a proprietary protocol, looking for a memory corruption vulnerability, etc. You can pretty much forget it unless you know assembly.
To add to this, most micros these days are ARM based which are specifically designed to not need assembly in their boot code. I basically never use it anymore. That said, you should have some understanding of what C code will map to what assembly for the purposes of memory management and performance.
Also: to say it's a calling is accurate. I love what I do because what I create is physical and interacts with the physical world. I can hold my projects in my hands and say "I made this". I've done web and desktop stuff and can say that this work is significantly more fulfilling to me.
I've been telling people since the late 00's that starting my career in embedded was the single best thing that happened to my career. I quickly follow that up with the fact that leaving embedded was the second best thing that happened to my career. I've dreamed about going back, because I do miss it, but the jobs aren't there (in my area and especially now that I've fallen in love with full-time remote work).
I know a hell of a lot more about what our code is actually doing up and down the stack than my colleagues, including most of the managers I've had and "Principals" I've worked with and so on. It helps, a whole lot, in fact, and I find that it's personally rewarding, but it doesn't amount to much more than being the person who gets asked the challenging profiling/performance/optimization questions.
Someone has to write those compilers. Those with the skills to do so are vital to the industry. If we want more nice high-level languages and improved operating systems, we need new people entering this area of software engineering. Otherwise, where is the next iteration of systems software going to come from?
LLVM backend developers generally write in TableGen. Their assembly knowledge, to say nothing of their microarchitectural knowledge, is limited.
That may sound harsh but it's actually the point of a good compiler design (which LLVM has). You can write an optimizer pass without understanding the C parser. You can write a backend without understanding register allocation. You can get something working without really getting it tuned. You can specify superscalar machine instruction scheduling latencies without really understanding instruction scheduling itself.
I'll add that while full-time native work may be in short supply, it's an awesome skill to have in your belt. You may not need it often, but when you're suddenly up against limits where the current technology doesn't suffice (Java/Python/Ruby/etc.) it can seem like magic to break down into the lower stack and get a 10-50x performance boost.
Having that escape hatch available is incredibly valuable and for me distinguishes the engineers I hold in high regard.
> Anyways, unless you deeply know what's going on inside of the microarchitecture of a modern superscalar, out of order, speculative, renaming, μop-cached, hyper-threaded, multicore beast then you shouldn't be fooling yourself by writing in assembler.
Assembly is useful for understanding early initialization code for embedded platforms (startup file). Depending on how aligned the vendor's board support package is with your actual target, you might have to touch this, and since this code runs at a time where there is neither stack nor heap (can't have a stack unless the stack pointer has been set up), this is easier in assembler than in the stackless C variant that you might coerce your compiler to support. The same applies to debugging issues in bootloaders, where you'll have to inspect how the reset vector looks.
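For the curious, a Cortex-M style reset handler can be sketched almost entirely in C, because that family loads the initial stack pointer from the vector table before calling it (the symbol names below follow common linker-script conventions; your vendor's startup file will differ):

    #include <stdint.h>

    extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss;  /* provided by the linker script */
    extern int main(void);

    void Reset_Handler(void)
    {
        uint32_t *src = &_sidata;
        uint32_t *dst = &_sdata;

        while (dst < &_edata)        /* copy initialized .data from flash to RAM */
            *dst++ = *src++;

        for (dst = &_sbss; dst < &_ebss; dst++)
            *dst = 0;                /* zero .bss */

        main();                      /* no heap, no C library init in this sketch */
        for (;;) { }                 /* trap if main ever returns */
    }

On cores that don't load SP from a table for you, those first few instructions genuinely have to be assembly, which is the point above.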
> First, compilers are really quite good. Yes, it's possible to beat them (I do) but generally not by much.
Genuine question: what would you quantify “not by much” as here? In my experience, it's not uncommon to see 2x+ speedups from well-written, software-pipelined, hand-optimised assembly in hot loops. (Especially so for in-order processors, like you might see in embedded applications or as a LITTLE core in your smartphone.)
2x in a hotspot is not much compared with better cache management in the rest of the program. But if that 2x is important, great. I know this sounds really boring but you should write it in C first and then find that hot loop in a profiler like VTune and then rewrite. 2x of something really unimportant is still unimportant.
Also while software pipelining is possible on something like Haswell ... its limited register set makes it a limited technique. Modulo variable expansion is tough with not so many registers but renaming does help somewhat.
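To make that concrete, hand-done variable expansion in C looks roughly like this (a sketch; since FP addition isn't associative, compilers generally won't do this on their own without something like -ffast-math):

    #include <stddef.h>

    /* Four accumulators break the single add dependency chain so independent
       multiply-adds can be in flight at the same time. */
    float dot(const float *a, const float *b, size_t n)
    {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        size_t i;
        for (i = 0; i + 4 <= n; i += 4) {
            s0 += a[i + 0] * b[i + 0];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        for (; i < n; i++)
            s0 += a[i] * b[i];
        return (s0 + s1) + (s2 + s3);
    }

Every extra accumulator is another live register, which is where the limited architectural register set starts to bite.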
I think tools like VTune are awesome for finding hotspots and reading assembler is like reading Latin. But programming in assembler? I think it's best to disassemble and rewrite your C accordingly.
I should have mentioned this in the first post: if you're not a VTune ace, if you're not looking at the Intel PMRs and scratching your head, you probably should not be writing in assembler in 2017. Also, VTune deals with C (and Java) quite nicely (just not on OS X).
Anyways, you may get 2x+ on something but that approach won't work with a GPU. Similarly, Apple doesn't provide microarchitectural information on the A10. Nvidia doesn't on Denver. Even though assembly will still be with us, this low level approach is going away. Apple, Intel, Arm, Nvidia, ... really want you to write in C.
The more a company spends on infrastructure, the more they need low-level people. A good low-level programmer can reduce cost requirements 10x or more. Any company spending millions or billions on infrastructure can make enormous savings by hiring the right people. Crucial for Google, Amazon, Facebook, and even midsize startups can see a big improvement.
And it's difficult to fill those jobs. Which means, there's opportunity there.
I fully agree with you. It's not just infrastructure but also devices and the surrounding ecosystem. I work in this space and let me assure you there are many jobs with GREAT pay.
I"ll go on and make another bold claim. Focusing on lower level stuff and systems concepts lay an excellent foundation for designing complex systems regardless of the language used. Once you understand how a program is executed from the ground up i.e. right from the program counter to page table lookups to cache hits to cache coherency, all abstractions are easier to deconstruct and understand. I work up and down the stack and this has been my, quite possibly just anecdotal, experience.
Game development also needs people going low, both in terms of cranking every last bit of performance from an end user's machine and packing as much information into a packet as possible to stream to the server.
The funny thing is, it's the opposite for the low level stuff. Nobody does it, so there's a huge demand, which results in too much work for those who are filling those roles.
Game development is broken for a litany of reasons, but I resist the urge to go into it to keep things on topic. I will say that it gets kids in STEM, and for the youngin's, getting into the low level stuff is a safe bet for a profitable career.
> The funny thing is, it's the opposite for the low level stuff.
This was not my experience or the experience of people I know at all.
I imagine that it depends on which companies you work with, or what country you're in, but my experience was that low level skills often didn't give you very much in the way of extra job security, and certainly didn't save you from crunch, low pay (much lower than everywhere else in software), etc.
Because those companies, and their mindsets, represent a fraction of all software jobs. The performance improvements that come from having good systems programmers appeal to only a small subset of companies. Even if the opportunity is there, features are what matter primarily to most companies. Only the ones with deep pockets and forward thinking will shell out for talent who will improve the infrastructure in a way that leads to often incremental and unnoticed or "unnecessary" improvements.
Gosh... Let's say you have to purchase X servers at Y dollars, where X*Y>1E8.
How much does an engineer making $200k a year have to reduce X by before he pays for himself? Not much, and simply finding a hotspot and doubling the performance by aligning a data structure or whatnot could pay for his salary for a decade.
The old adage that computers are cheap and engineers are expensive is about 15 years out of date for a large portion of the software being written today. That is when companies stopped being able to generally externalize the costs of shitty code. Plus, as microcontrollers ate the world, the ability to ship a $1 microcontroller instead of a $10 one on a product run of 100k units can mean the difference between a competitive growing company and a dying one.
Writing and designing code that is mindful of latencies (Amdahl's law), temporal/spatial locality, avoiding needless copies, etc. This approach needs to start at design time and follow through into the implementation. Moreover, this is done without compromising on readability.
Umm, Amdahl's law is actually a negative result. It's saying that if you infinitely improve some magical hotspot that is 1% of your workload, the best you can ever do is shave off that 1% -- the program still takes 99% of its original time. If anything, Amdahl is telling us not to grind on trivialities.
If a system's performance profile is dominated 80% by A, 19% by B, and 1% by C, I'd focus on working on A first to get maximum gains. Amdahl's law gives you the backing as to why you should do this.
As a real world example, if an operation involves a network call and you see the RTT dominating the time, you may want to think of ways to avoid the call (caching, etc.) if possible to get really good gains.
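For reference, the law itself is just (p = fraction of time spent in the part you improve, s = its speedup):

    S = 1 / ((1 - p) + p/s)

With the 80/19/1 split above, doubling A (p = 0.8, s = 2) gives 1/(0.2 + 0.4) ≈ 1.67x overall, while eliminating C entirely (p = 0.01) can never buy you more than 1/0.99 ≈ 1.01x.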
An upper limit on the benefits. To me, an upper limit is a negative result. When Leon (in Blade Runner) finds out that he only has 4 years to live (an upper bound) he takes it pretty hard.
Amdahl:

    A fairly obvious conclusion which can be drawn at this point is that the effort expended on achieving high parallel processing rates is wasted unless it is accompanied by achievements in sequential processing rates of very nearly the same magnitude.
A point made when we read the paper was that Seymour Cray always made sure that his computers were also the fastest scalar computers even though they were sold as vector processors.
Thank you for the excellent explanation; I will consider it more thoroughly.
I think mathematicians consider an upper limit a positive bound - positive, in the sense of being well defined; you're using negative in the other sense? I actually like that quite a bit.
Even if you work in domains where you don't need to do low level programming, knowing how the hardware works at a low level helps you do a better job.
When you know what the stack and the heap are, when you know how memory locality affects performance, when you know the full cost of allocations, you can write better Java/C#/OCaml/Haskell, you can check on the assembler they output, you will be ready to occasionally write small C libraries if portions are slow, etc.
Ignoring this stuff completely leads to text editors made in electron.
Tbh the text editors in electron do try to squeeze as much as they can from the web APIs. The VSCode tokenizer putting dense info in typed arrays as bitfields is a good example of someone understanding how memory access works under the hood.
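The same trick translated into C terms, just to illustrate the idea (the field widths here are invented, not VSCode's actual layout):

    #include <stdint.h>

    /* Pack a token's start column, length and kind into one 32-bit word. */
    static inline uint32_t pack_token(uint32_t start, uint32_t len, uint32_t kind)
    {
        return (start & 0xFFFFF) | ((len & 0xFF) << 20) | ((kind & 0xF) << 28);
    }

    static inline uint32_t token_start(uint32_t t) { return t & 0xFFFFF; }
    static inline uint32_t token_len(uint32_t t)   { return (t >> 20) & 0xFF; }
    static inline uint32_t token_kind(uint32_t t)  { return (t >> 28) & 0xF; }

An array of those stays dense and sequential in memory, which is exactly the property the typed-array approach is after.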
However I do agree with you, getting into web dev is easy. Doing things closer to metal is harder. I would definitely love to do more low level C programming.
I'm studying (at my own pace) the book, practicing the problems, and watching the lectures--I even downloaded the Panopto iPhone app to watch lectures while on the bus ride into work.
The course is very rewarding: x86, memory systems, systems programming, linking, socket programming ... all packed in a single book, providing a holistic view of systems from a programmer's perspective.
Here's the situation: No Starch, being as awesome as they are, are totally happy with printing an open-source book. The Rust project wanted to do a second edition of the book for various reasons, so Carol and I have been doing that, but also, working with No Starch's editors. So they've been collaborating with us on it. It also means you can get a nicely laid out eBook, the paper copy, all of that.
As someone who can code in C/C++ and was recently looking for work, I don't think someone should learn low level programming for a career. What I would consider simple PHP jobs were easily paying more than C++ jobs.
If you look at everyone reinventing the wheel in electron and not caring about performance I don't think the situation is going to get better anytime soon.
Please don't spread generalizations like these. I'm sorry about the job situation in your area but I live in a Canadian metropolis where 1) there's no shortage of low-level work and 2) they will pay much more than your average PHP consulting shop. In fact, I maxed out very early what someone can hope to earn at my level of experience.
My point is that your job market is not representative of the whole world and you're acting as if it were.
What the flying fuck. I live in Toronto and all I see is cloud this and cloud that. I'm an embedded software guy with a lot of embedded software experience, and the only people who contact me on LinkedIn focus on the one or two lines in my resume about web development.
So I can only speak for myself, but I've been exclusively working in low level C type work for the last 9 years. Think embedded, kernel dev, reverse engineering, etc. type stuff. I'm good enough at it, but hardly an expert.
I can't say I've ever lacked for work and I make $125k in a very low cost of living area, and if I wanted something new I could have interviews arranged tomorrow both where I currently live and pretty much any major US city.
It's probably not on the same scale as web dev, but there's also a lot fewer of us working on this side of things.
Do you have a background in electronic engineering / a good understanding of hardware issues beyond CPU caching? Just trying to understand what $125k requires in this space.
My degree is in Computer Science and my hardware skills are mostly nil. I mean I know how to read data sheets and talk to hardware through memory mapped IO or ports, but as far as circuits, hardware design, etc. nothing more than a rudimentary understanding.
I wound up in this field doing a co-op during college with an employer that had a variety of different job types. I have always liked C so I went with more low level jobs during my co-op rotations.
My primary skill set is being comfortable developing on bare metal or in the kernel, reverse engineering, and debugging painful problems. One of my first tasks was porting part of a custom OS to hardware that didn't have JTAG (it wasn't our hardware... someone else made it and we were tasked with getting our OS on there.)
The only thing I had to debug with were memory dumps in raw hex. It was painful but a lot of fun. It's not so much that I'm particularly bright but I'm too dumb to know when to give up.
Actually now that I think about it, being too stupid to know when I'm in over my head and being willing to dive into an impossible task are probably the most valuable skills I have.
You're taking a very specific case and generalizing it like crazy. The CS industry isn't all Electron apps and simple jobs.
The most obvious example would be the video game industry, where you're not straying very far from C++ anytime soon. Performance, speed and native compatibility are a big deal there.
I went to college with a guy who did industrial, embedded stuff you see in factories. One of his projects he told me about was a robot that cut the glass windows or something like that at a Volvo plant. Anyway, he wrote the software for sensors, control, etc. Integrated it into GUI tools or backends. I said something kind of like you said to him.
He replied that he'd have better job security than any of us in IT doing stuff like PHP or Java. He said the reason is he had to be on-site to do his work. Impossible to out-source. When I pointed out in-sourcing (H1-B, etc), he said they preferred people with both strong command of English (avoid costly misunderstandings) and experience (avoid costly mistakes). The experience often comes from working locally in colleges or companies.
So, I definitely encourage people to explore coding in C and C++ for local, embedded systems at the least. There's other jobs that don't require local or embedded where those languages are used. They're not outsourcing-proof w/ consistently good pay, though. ;)
The problem is skill transfer. What does he do when Volvo shuts down his factory? Even if there are other potential employers nearby, they are unlikely to be hiring because they already have their guy.
You kidding? His company builds the equipment for one deployment. Then they do that for another. Then another. He might get sent in for maintenance or they might send someone else. He isn't doing day-to-day admin at the factory or something.
Myself and my colleagues do a lot of work in finance and machine learning where low level C++ is king, and we are highly paid for knowing the language intimately. You can generalize this across many parts of the finance industry in my experience (I own a mid-sized ML consulting firm).
Interestingly, from what I'm seeing, in London even most HFT jobs are in Java now. From what I've read, they figure it's easier to write HFT Java than C++, which says a lot about their difficulties with C++ (not only the language itself, but difficulties with hiring people expert enough in C++).
Depends on your location.
Canada for example has very few embedded/hardware dev jobs vs. Israel, where there is a huge amount of C/C++/Assembly jobs for the Aerospace/Military industry.
That's disappointing to hear but thank you for the honest response. I currently work in Ruby but my favorite language is C, I feel like it helped me become a better developer and improved my understanding of imperative languages. Sucks that something so useful is cast aside for something like PHP.
I respectfully disagree. We've recently seen a lot of development in the space and there will always be a need to make new hardware work. Yes, margins for hardware companies will always be slimmer than CloudWidgets Inc. but there's a very solid demand for people who know what they're doing in this space. Just about every electrically powered product you buy will have a uC in it, from your flashlight's led controller to the hundred+ that exist in your car.
And some of those aren't even C++ jobs; switching sorting from date to relevance gets you C++ jobs from 14 days ago that would already have moved along in their hiring process.
To me those requirements seem stricter than what I see in the PHP jobs (experience with MySQL + some form of MVC framework). And on top of that it's paying less.
Another example is the finance job I applied at that was paying 70-90k. But to get in the high end of that range you would need previous stock trading software experience. They also wanted 5 hours of my time for the interview process, but that's a separate topic.
I've never been somewhere where I can't work remote at all, but you end up having to work closely with electrical engineers and that often leads to needing to be near an office. I'd say I actually need to be in my chair about 20-30% of the time
That is an exaggeration, for most embedded jobs you don't need $100k in test equipment. I am sure there are exceptions, but usually you will need the target, a debugger and _maybe_ a logic analyzer or an oscilloscope.
Interestingly enough, I've worked as an embedded dev for 3 years already and I used a scope and a logic analyzer maybe 1 week during this time. I do admit that my job is kind of shitty and I'm working on messy in-house frameworks. I bet Saleae + oscope are indispensable when doing board bringup or things like that though.
For my home-lab I went the el cheapo way and bought my own hackable Rigol (brought it to 100 MHz) and, to my great shame, I bought a Saleae clone. I promise, Saleae guys, I'll buy an original on my next raise, I promise!
Yea, it depends on what you're up to. Post setup, when you're just doing DSP or something you're not going to need much at all but we spin boards pretty frequently due to our contract cycle.
I've gone the same direction with my home setup: "hacked" Rigol DS1054Z + power supply and anti-static mat. It's not as nice as my Tek at work but it'll do most of what I want. I usually just borrow the Saleae from work if I need one, and I have a little Bus Pirate that's occasionally indispensable for SPI/serial/I2C stuff.
Low level programmer here. This is great but the job market for low level programmers is very small. Recently, while interviewing, I described a thread safe, concurrent queue implementation to a hiring manager and he asked me -- "so how does this relate to big data and ML"?
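(For anyone wondering what that even looks like, here's a minimal sketch of the kind of structure I described, assuming pthreads and a fixed capacity; a production version would also worry about lock-free variants, false sharing, and so on.)

    #include <pthread.h>
    #include <stddef.h>

    #define QCAP 256

    typedef struct {
        void *items[QCAP];
        size_t head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    } queue_t;

    void queue_init(queue_t *q)
    {
        q->head = q->tail = q->count = 0;
        pthread_mutex_init(&q->lock, NULL);
        pthread_cond_init(&q->not_empty, NULL);
        pthread_cond_init(&q->not_full, NULL);
    }

    void queue_push(queue_t *q, void *item)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QCAP)                     /* block while full */
            pthread_cond_wait(&q->not_full, &q->lock);
        q->items[q->tail] = item;
        q->tail = (q->tail + 1) % QCAP;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    void *queue_pop(queue_t *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)                        /* block while empty */
            pthread_cond_wait(&q->not_empty, &q->lock);
        void *item = q->items[q->head];
        q->head = (q->head + 1) % QCAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return item;
    }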
What was your answer to the hiring manager? How about:
"[buzzword] systems need lots of training data. To process that data quickly, systems must parallelize and distribute computation. Concurrent queues are a basic building block in parallel and distributed systems. They are used both internally in [buzzword] systems and directly by engineers building ingest pipelines and serving systems. Engineers need to know the trade-offs of different concurrent queue implementations to avoid introducing unnecessary bottlenecks in [buzzword] systems."
Yeah, unfortunately there are people behind the scenes pumping out a whole lot of BS and aggressively manipulating the dev ecosystem with VC dollars.
You have to wonder what percentage of devs are jumping on these trains out of naivete and what percentage are jumping on out of cynicism, as both are good explanations. The explanation that rarely applies is "this is the best way to solve our problem".
None of us should be so naive to believe that the developer market, or even subsets of the developer market, are a cozy little corner for engineers and tinkerers where rationality wins the day. It's big money (even if you don't pay anything for your tools), and that means that advanced marketing tactics are at work. Hackers are not invincible, though they like to think they are.
>"Yeah, unfortunately there are people behind the scenes pumping out a whole lot of BS and aggressively manipulating the dev ecosystem with VC dollars"
Can you elaborate on this? This intrigued me but I didn't follow your arguments after. Can you give a concrete or specific example?
Yeah OP seems to be indicating that VC or some other actor is influencing the developer "market" to deflate salaries by inflating the supply of devs, at least that's what I'm reading it as, and I'd be very curious to learn more.
I wasn't referring to the wage market, I was referring to the market for developer tools.
However, VCs do deviously manipulate the wage market in early startups by pumping a false narrative about how it takes young (read: naive, as these are virtual synonyms) people to innovate, and how when the company gets really super duper big, the equity will make up for the crappy salary. This trick is used to get young people to accept an apartment split with 6 other "founders" and some ramen, or in the case of early employees, a salary 40%-60% below market, in exchange for a pull on the VC slot machine.
I think the commenter two above was able to flesh out the meaning of your original comment for me. I do agree with you on the wage market as well. I was thinking about this the last time (recently) a recruiter was pitching the whole "equity in place of cash comp" thing to me.
If you figure a 1-year cliff and then a 25-percent-a-year vesting schedule, you are into the startup for 5 years to be fully vested. How many of those can you do in a career? How many companies continue to be awesome enough to stay at for 5 years? How many of those will actually IPO? The odds don't seem that great.
I doubt there's any intentional manipulation going on behind the scenes. It's just VCs throwing their money at the potential next big thing, which is almost never at something low level. That's where the money is, and the tooling + community follows. The rest is just network effect.
Just a few years ago, jquery was huge, and many companies were just starting to hop on the web bandwagon. Many frontend developers who didn't even know javascript could get better paying jobs than low level programmers. I thought that was a sad state of affairs, but that was just where the market was at.
Behind the scenes activity is rarely so overt that one can point to "concrete" examples, and there's a lot of plausible deniability. Surely many people are just trying to cash in on the fad (cynics as above), and they hire developers or engineers (more likely naive than cynical), and both parties end up throwing fuel on the fire, as they're both now invested in it both monetarily and emotionally. These people are, partially unwittingly, doing most of the dirty work.
I think the zeitgeist is manipulated by large players like Google and Amazon to promote their cloud offerings. They know everyone is going to follow them wherever they go head first. Thus, they focus on driving the industry toward rented hardware that they can sell at a large markup, and gives them many other ancillary advantages, not the least of which is making large swaths of the internet's infrastructure dependent on themselves, a very powerful position to be in.
There are even articles that freely admit Google released Kubernetes to get more people on its cloud [0], and they run industry groups like the CNCF ("Cloud Native Computing Foundation") to promote the idea that running your own hardware is the devil, and why don't you just let Google handle it all for you, OK sweetheart? You wouldn't want people to call you a philistine, would you? Google is cutting-edge, and you're just a working slob.
Nevermind that a lot of people are adopting k8s only because cloud servers are so expensive, and they're looking for a way to consolidate that expense. Amazon should be trying to counteract Google here and stress the high labor costs associated with running a k8s cluster and the pre-requisite conversion of apps to work well on one.
Most people don't need something like k8s and, for that matter, most people don't need something like TensorFlow. Google releases these things because they have one big positive net effect: a lot of people paying a lot of money for cloud servers.
There is no reason to run a real database like PgSQL on Kubernetes. None. If you want to do this, you are a victim.
VCs are looking for a similar type of influence, just on a smaller scale, by getting into dev tools. Dev tools are the key to platform dominance; they are, after all, the root from which the software which will keep people on your platform arises.
Microsoft understood this, which is why they were crapping themselves when Java picked up steam in the mid-90s, why they had to supplant NetScape, and why Steve Ballmer had a conniption yelling "DEVELOPERS!" not too long after that [1]. It's why they've continued to pour billions and billions of dollars into .NET and Visual Studio down to this day.
It's also why they're making moves into new platforms like TypeScript and hiring away major k8s contributors: they hired k8s co-founder Brendan Burns [2] and their acquisition of Deis was announced just this week [3]. They want to retain as much platform influence/control as possible.
Once you have the developer, and by extension the user, on your platform, there are many ways to trap them and make them give you money. Lock-in has always been the holy grail in software because it's the best way to make money. Lock-in is gained by platform control.
We need to watch our butts here and be careful about what we're willing to believe. There are many vultures looking to get a piece; I've seen this amp up ridiculously over the last 10 years, and I don't think we're even at critical mass.
Let's take a look at Google: they have the browser (Chrome), they have the OS (Android), they have their cloud, they have a popular programming language (Go), they have the internet gateways (Google search + other services), they have the data (Google analytics, data mining in all of their other services), they have the hardware (Android phones, Chromebooks), they will have the car (Android auto + self-driving efforts), they want to have the ISP (Google fiber), they have the communications (several chat efforts), they have the social network (Google+, several other efforts). I'm probably missing several things.
Now Microsoft: they have the browser (Edge, IE), they have the OS (Windows, Windows phone, Xbox), they have their cloud, they have several popular programming languages (C#, F#, Typescript), they have the internet gateways (MSN, Bing, LinkedIn), they have the data (data mining in Windows and all of their other services), they have the hardware (Surface, Xbox), they have the communications (Skype, Yammer). Once again, I'm probably missing several things.
Apple: browser (Safari), OS (iOS, macOS), cloud, languages (Swift, Objective-C), hardware (iPad, Macbook, iPhone, Apple TV), communications (iMessage, FaceTime). They don't have the data and internet gateways because they're the only ones still holding themselves back from large scale data-mining.
They say the web is open because "it opposes private, exclusive, proprietary Web solutions". But the web is built with Microsoft and Google tools on Microsoft and Google clouds and viewed with Microsoft and Google browsers on Microsoft and Google operating systems. Maybe we can swap one cloud with Amazon, or one device with Apple, but that's the current situation.
The ideals of open source have been almost circumvented in this age of platforms.
Low level programming knowledge is fundamental if you want to work in security research (software vulnerabilities, reverse engineering and exploits), and the job market at the moment is quite big.
If not for employment opportunities, you should be (gently) exploiting these connections for educational value. They're in the industry and have likely seen examples of colleagues transitioning in from web/backend development. At the very least, they can easily tell you what mistakes to avoid.
I often have the same mindset you do about exploiting connections, and although it's been a struggle for me to change my habits, using people as educational outlets has been something that I've found to be extremely helpful to me personally and also often enhances my relationship with that individual as well.
You're not the only one. My company is desperate for good low level hackers. We can find people who know Python and Java for days, but people with the skill set we need are very hard to find.
Well, sort of. It's forgetting taxes, house prices, and more.
Places where you can buy decent stand-alone homes for $50,000 and/or have no state taxes are rather different from Mountain View and Manhattan.
It's funny how people in the pricey places look at a $100,000 salary and not see how it is, all by itself, plenty to support a large family in a home with a big yard. People living that life look back at the city folk and can't imagine how they could ever afford the same thing in a city -- it'd be roughly $2 million (a factor of 40) in San Francisco.
Maybe! We are located near Orlando, FL with offices all over the country. If you are comfortable with low level hacking, assembly, C, etc. then it might be a good fit.
The only hard requirement is you must be a US citizen.
This goes both ways - where do C people look for jobs? As you know, most competent people have one job or another, so it's equally important to be recognizable to the employed bunch. I do a fair bit of blog posts / outreach to good looking github profiles / conferences (smaller and larger).
In a capitalist society, small/rare usually means high-paying! Don't make it your career, but it pays to know these things when writing your high-level code.
Small/rare on the supply side is only high paying when the demand is not as small or as rare. It really depends on that relation. Based on the comments I'm seeing... sounds like everyone is talking more about the demand side than the supply side... which sounds either on the edge of oversupply or just plain oversupplied.
Otherwise, I agree with you. Those that understand low level issues tend to, in my experience, just plain understand computing better... even if they're working in the deeply abstracted high level code.
The demand is quite low. I think if you want a good low-level code implementation, you can easily find one on github. For example, you can easily find a good compression algorithm for your CPU budget or other application needs on github. So you can get away with mostly writing high level code and using these external libraries. Second, the demand for writing really optimized code is not as high as 10-20 years ago. Now, writing reasonably fast, readable code is OK and no one expects you to run gcc -S and count cycles unless you are working in some high frequency trading firm.
In this thread there's a lot of FUD about there being no job market. One place to look is to your undergraduate electrical engineering brethren. You'll basically always be following them around to wherever they go (though often in slightly higher demand given that board design is typically faster than the software development phase). Also worth noting that in recent years there has been an emphasis on FPGA skills for a lot of jobs in the space. It often helps to have some familiarity, especially if you like working for smaller companies that wouldn't want to hire a specialist for that role.
I was having a conversation with a friend working on high volume data problems and was surprised just how low level he has to get in order to meet some very stringent but necessary requirements.
I've worked at a high level for so long that I've forgotten how much low level work I take for granted.
This is great, but does anyone have any gentler intros?
I'm trying to teach low-level programming to one of my junior devs. She graduated from a boot-camp school and is struggling a bit.
I am taking a great coursera course on the very basics of a computer (uses a hardware simulator to build the chipsets necessary to build a computer). It is VERY intro-friendly and gentle, including teaching basic Boolean algebra. Also can be very hands on, which is what I really like about it. Might be a good fit!
Not sure if it exactly matches what you are looking for, but mentioning a few anyway, in case they are of help:
- For concepts as well as execution, from the bottom up (hardware level etc.), I've read that nand2tetris is good. It gets mentioned once in a while here on HN. Check hn.algolia.com for threads about it.
- any good book on assembly language programming. I've looked at Randall Hyde's book earlier, seemed good. There is also another which I think is good, by an American professor called Paul Carter, IIRC.
Either one is fairly low-level (C, or C like), but has plenty of beginner-friendly documentation. And if they progress beyond that, moving down to assembler is supported for both.
Thank you for introducing my page to the world. I've just found this thread and understood why the traffic suddenly peaked.
I'll add several links/books/courses mentioned here to the page.
Thank you for all.
About career: is the low level programming job market really as small as we see in this thread's comments? I want to be a C/C++/Rust programmer. This is quite frightening.
Embedded systems and the Internet of Things need a lot of low-level programming. It's a small job market compared to programming as a whole, but it's not shrinking, and it pays better than web programming.
Take a look through the monthly HN "Who's hiring?" posts. Jobs of any kind that aren't web (front-end, back-end, or full stack) or mobile apps are vanishingly scarce.
We know the bias against defence contractors, against the south, against the non-urban parts of the USA, and so on. We know we aren't wanted around here.
The jobs can be nice though! My place is looking to hire dozens per year. We do emulators, JIT, hypervisors, stuff like valgrind, debuggers, manual disassembly, and vulnerability research. I've been here 11 years. I've worked with more than 10 different CPU architectures and more than 10 completely different OSes. I don't normally work overtime, and I get paid more if I do. I have extreme flex-time. I never have to worry about outsourcing or H1B people. I'm never expected to take work home or be on call. I get to live in a place with no state income tax, a stand-your-ground law, almost no crime, almost no traffic or commute, and houses that commonly go for $100,000 to $400,000. We're hiring in Florida, Texas, Virginia, Maryland, Georgia, South Carolina, Alabama... totally the dream for SF and NYC people I'm sure!
Email acahalan, at gmail, if this suits you. Be sure to mention this comment.
You're probably right. We are pretty welcoming, but on Hacker News I definitely feel like every single state we're in just gets a giant REJECT stamp from everybody. The one exception is maybe our Austin office, because Austin seems to get an honorary non-Texas label. Working for government customers also seems to earn us a giant REJECT stamp around here. Aside from this particular article, "low-level" feels like another REJECT stamp.
I happen to like all that, but I'm not the average Hacker News reader and I know it.
To convince your family about the Melbourne, FL area: you can afford 5 acres, or a house on the beach, or a commute of less than a mile, or a huge McMansion. You can surf all year long. I suspect Texas (Austin and San Antonio) and Virginia are pretty good too.
How could someone make themselves worth even interviewing for a job like that? I do Ruby/Python and DevOps, with some small hobby C and Linux stuff in my free time.
Follow the guide in the link. Do the Arduino stuff first; being unable to do something will drive you to learn exactly as much C and assembly as you need. Learn to write network code in C. Learn how TCP/IP works and try to do weird things, such as transmitting information with ICMP (ping) packets.
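A toy example in that spirit: stuffing a few bytes of your own data into an ICMP echo request (Linux raw sockets, needs root; the destination and payload are made up, error handling omitted):

    #include <arpa/inet.h>
    #include <netinet/ip_icmp.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Standard Internet checksum over the ICMP header + payload. */
    static unsigned short checksum(void *buf, int len)
    {
        unsigned short *p = buf;
        unsigned long sum = 0;
        for (; len > 1; len -= 2) sum += *p++;
        if (len == 1) sum += *(unsigned char *)p;
        sum = (sum >> 16) + (sum & 0xFFFF);
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);

        unsigned char pkt[64] = {0};
        struct icmphdr *icmp = (struct icmphdr *)pkt;
        icmp->type = ICMP_ECHO;
        icmp->un.echo.id = htons(0x1234);
        icmp->un.echo.sequence = htons(1);
        memcpy(pkt + sizeof(*icmp), "covert!", 7);       /* the "hidden" payload */
        icmp->checksum = checksum(pkt, sizeof(pkt));

        struct sockaddr_in dst = { .sin_family = AF_INET };
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* RFC 5737 test address */
        sendto(fd, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst));
        close(fd);
        return 0;
    }

Watching that go out in Wireshark, and then writing the receiving side, teaches you more about IP than most books will.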
The next step (for that particular company's work) would be to actually reverse engineer some complex code and write about it on a blog.
If you want to basically guarantee yourself an interview, reverse engineer and find a vulnerability in some software, and write about it. Then apply with that information.
If you're looking for the "easier" way, just apply to DoD in Maryland or Virginia for an ultra-low-paying, but connected job that allows you to learn all this stuff on the job.
I was under the impression that a lot of stuff on an arduino was abstracted away. You mostly set gpio pins to high and low. I have a raspberry pi that can do that.
An Arduino is just a micro controller with a standard set of pinouts for easy access to peripherals. You don't have to use the IDE and high level programming language. At least for the AVR based ones you can write pure C or assembly, build it with the AVR tool chain, and upload and run it natively. I'm sure you can with ARM or Intel based Arduino clones too.
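For example, a bare-metal blink on an AVR-based board with no Arduino libraries at all (assuming an ATmega328P with the LED on PB5, i.e. a stock Uno; built with something like avr-gcc -mmcu=atmega328p -DF_CPU=16000000UL -Os):

    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRB |= (1 << DDB5);            /* PB5 (Arduino "pin 13") as output */
        for (;;) {
            PORTB ^= (1 << PORTB5);     /* toggle the LED */
            _delay_ms(500);
        }
    }

Same board, no magic; the Arduino core is just a convenience layer over these registers.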
You busted me. I have never programmed with Arduino, just Atmel boards. I wrongly assumed they were similar. I guess Arduino is leaps and bounds easier.
I'll start with what you already have: There is a small value in the Ruby/Python experience. That kind of thing serves to script some of the tools, including: IDA Pro, Binja, gdb, and scons. There is also a small value in the Devops stuff I think, but Devops isn't my thing so... you mean server admin tasks? The hobby c and Linux stuff is most valuable. We write emulation in C. Most of us use Linux for everything, but we also have Windows experts who just use that.
Mixing personal projects and work and education...
Some of our people have backgrounds doing driver development and embedded system programming. An excellent background would be compiler development. We have hired people who did assembly back when that was totally normal, getting their first job around 1970. We hire some people straight out of college; Carnegie Mellon University has the best program for this. We don't actually require a degree, but most people have one, and there are a few people with PhDs. Some people have worked on cars, writing software for stuff like airbag deployment (no bugs allowed!) and infotainment stuff.

We hired a former Atari 800XL game developer who was kind of famous. I had done Linux kernel development, been an embedded RTOS developer, and written the "ps" program that Linux uses. We hired a guy who wrote a TI calculator emulator. We hired a guy who got himself sued for exposing security holes in a speech at Blackhat or DEF-CON, and almost hired another person who did the same thing. (The one not hired had an awful attitude.) We hired a person who wrote code to run hardware that would extract platelets from blood. Some employees have written boot loaders. A few people have done stuff with neural networks. Some have worked on radar. Some have done GPGPU and FPGA programming. We had an employee who wrote a PC demo scene boot sector (512-byte bootable screensaver) that used one of the bytes 3 ways, as data and as two distinct instructions.

We have many people who participate in hacking competitions such as ShmooCon's "Ghost in the Shellcode" CTF and DEF-CON's CTF. We have people who got addicted to hand-optimizing code, vectorizing things and counting things like pipeline stalls. We have a person who worked on the JIT for an open source Nintendo emulator. We have some people with math backgrounds who take interest in automated proofs that programs will or will not do various things, and who can figure out how to automatically create program inputs to make the programs exercise various parts of the code. We hired a guy who bought his own parking meter (!!!) and then figured out how to hack into it via the pass/token thing it used. We hired a guy who hacked into a bathroom scale that had WiFi. We hired a person who did Dell's BIOS.
Yeah devops, web server admin, automation and problems developers don't want to or don't have access to deal with.
Seems like people have a lot of experience going into that field. Most of my hobby stuff isn't that interesting. I did write a mostly working chip8 emulator lol. Thanks for the info
The chip8 emulator is definitely good. That sort of thing counts. You could add a JIT. :-)
You might want to see what you can do with an interactive disassembler. Your choices are:
1. IDA Pro freeware version -- this is x86 only and kind of obsolete
2. IDA Pro demo version -- this is wonderful except that you can't save your work and it times out after about an hour
3. Hopper -- this is cheap
4. Binja (binary ninja, from Vector 35) -- mid price
Grab some firmware updates off the web, especially for things you happen to own, or some normal executables. See how well you can understand them with the interactive disassembler. You might need to decompress them; look for magic numbers that indicate compression.
Automotive, oil, gas, military and so on. These are the industries that hire C/C++ developers. Of course, if none of them opened shop in your city, bad luck.
It's relatively small, but the amount of people with those skills is small too. Currently every company I know that needs those skills has trouble finding enough people, so talented low level hackers can pretty much name their price.
Maybe that will change down the road but it doesn't look like it right now.
It is location dependent. That said, as I mentioned elsewhere in this thread, there's money to be made. Everything has a microcontroller and someone needs to program it. You don't need all the processor architecture knowledge that they're talking about, just a willingness to learn along the way. If you think you like the work, stick with it. Building real things is deeply satisfying.
I would recommend anyone with a curious mind read it, even if their day-to-day work does not involve much low-level programming. It will open up a whole new set of ideas / knowledge in their problem solving toolbox.
Compiler backends are another topic that can really extend one's knowledge of their hardware. You end up having to learn at least a bit of every layer underneath your code, e.g. ISAs, calling conventions, and instruction scheduling.