There was a point in the history of aviation where anyone who could fly a plane was also capable of constructing and designing one. I wonder if there were similar concerns at that time about a future where someone could be in a cockpit of a plane without truly understanding how the machine works from first principles?
Today that type of concern would seem absurd, and we've gotten used to the idea that flying a plane and building it are separate skillsets and jobs. After all, we are comfortable with aviation engineers designing the aircraft without knowing how to mine the bauxite, smelt it into aluminium, and machine it into aircraft parts.
As comforting as it is to have an overview of how to trace a complete path from logic gate to web request, it isn't necessary. The decrease in the number of engineers with that overview isn't a cause for alarm, or a problem to be solved. It is just a sign that "tech" has matured, and it is stratifying into a set of interlocking disciplines. The same happened with every other subject in human history, and we've been just fine...
The problem is that it takes a really long time until technology is so good and so reliable that you really don't need to understand it to be able to operate it.
Take, for instance, the "Yes, let's all go back to coding in assembly!" line -- The thing is: for a really really long time after high-level languages had become mainstream, you really did still have to know assembly to be a programmer, even if you did most of your work in, say, C, or Pascal. That's because compilers for high-level languages and their debugging tools were initially a "leaky" abstraction. When your programme failed, you had to know assembly to figure out what went wrong and work your way back from that to what you could do in your high-level language to fix the problem. Nowadays, compilers and debugging tools have become so good that those days are mostly gone, and you really don't need to know assembly any more (for most practical intents and purposes).
But the problem we have today is that we pile on layer upon layer upon layer of leaky abstraction without ever giving it the time it needs to mature. We're designing to shorten the amount of time a developer spends getting something done, under the completely misguided assumption that the developer will never leave the "happy path" where everything works as designed. This neglects the fact that a developer spends most of their time debugging the situations that don't work as designed. Usually, if you make the "happy path" time more productive with the side effect of making the "unhappy path" time less productive, that amounts to a net negative, and that's the big problem.
If you do something even slightly unusual you can quickly get into a situation where you spend more time debugging your toolchain than writing code.
I really hate the state of modern software. We have so many layers of utterly unknowable abstractions that it isn't even possible to understand what your code actually ends up doing.
And that's how we ended up with Electron, which I think is the pinnacle of shitty software driven by the unsustainable paradigm of libraries on abstractions on libraries.
Pilots must still understand how planes work. That’s called aviation. Most software devs have absolutely no idea how their software platform works. Most are overpaid API monkeys.
Tell that to the passengers of those 737 MAX flights where the pilots did not know how to disable the failing AoA correction...
I wouldn't bet that most pilots know more about how the planes they fly work than software devs know about their computers. For one, most planes today rely heavily on computers. Do they teach electronics in "aviation"?
As I understand it, the 737 MAX issue was not because the pilots didn't know how their plane worked, but because they were lied to about a feature that was installed to try to reduce costs. Had they been told it was there, the outcomes could well have been different.
Although the pilots cannot be blamed for not understanding the undocumented controls of the 737, one incident (or multiple?) was mitigated due to very experienced pilots understanding the fundamentals of the plane. Think of the pilot knowledge as a second line of defense against an incident that should have never happened (again - not the pilots' fault, but the additional knowledge helped).
I recommend reading “Flying Blind” for more detailed accounts of the precursor Lion Air flight that almost crashed.
Not sure I understand your point. I read "it's not that they did not know how to disable the feature, it's just that they were not told how to disable the feature".
Or are you saying that they knew, but somehow did not do it?
I know. My original point was that it is not completely clear that pilots know better how planes work than software engineers know how computers work.
Not at all saying that they were incompetent. On the contrary, passenger planes today are IMO much more complex than one desktop computer loading a web page: passenger planes are a group of many computers doing safety-critical stuff in order to maintain a giant machine up in the air.
I don't see how one can say that software engineers don't really understand computers, but pilots do really understand planes.
If I recall correctly, the crew of the Ethiopian flight that crashed were very experienced and they did understand what was happening. They just couldn't mitigate it in the short time they had.
Note that my point was not that the crew was inexperienced or incompetent. My point was that those flying machines are crazy complex, and actually made of tons of safety-critical computers.
I just did not find it fair to say "pilots know how planes really work, but software engineers don't know how computers really work". Both are waaaay too complex for one person to actually understand fully.
There's a limit to how much about a plane the pilot understands (e.g. how much about the electronics, circuitry, software in the cockpit is understood? what about the chemical composition or manufacturing process behind the rubber in the tires? or the subatomic physics that helps explain why air and the plane interact the way they are known to in aviation?). I don't disagree that poor practitioners exist in every field, but that's a sign of what the technology permits you to get away with not knowing.
I think we have to make a distinction between operating and manufacturing when referring to knowing "how it works". The pilot needs to understand how a plane will behave on a fundamental level when given a set of instructions. A developer should have that understanding as well (and that's coming from a person who's been through CS, but has lost a lot of that understanding).
But what the vast majority of programmers are "operating" are programming languages, runtime environments, and operating systems—which generally treat the hardware and the CPU architecture as implementation details. The people who use programming languages and those who create/maintain them might as well be in different industries, like the pilot and the aerospace engineer.
I think that misses the point. A programmer who understands both their language and the environmental constraints within which that language, and its capabilities, execute likely understands enough to write and maintain original applications in that language.
As a JavaScript/Web/Fullstack developer I don’t live in that world. I live in a world of giant stupid frameworks. The only purpose of these frameworks is to supply an architecture in a box and put text on screen in a web browser. If a task cannot be performed using only the API provided by that framework then it must not be worth doing as it’s clearly far beyond the capabilities of the developer. There is far more to this software platform than merely putting text on screen in a web browser, for example: accessibility, security, performance, test automation, A/B testing, network messaging, architecture, content management, and so on.
God forbid you take the giant stupid frameworks away. It’s like castrating a person in public and then laughing at their great embarrassment. Many developers, some of whom shouldn’t be in this line of work to begin with, have built their entire careers around some framework API and absolutely cannot write code without it. The emotional insecurity is very real, as well as the complete inability to write original applications.
I think that's a highly dismissive and ignorant view of what software development, as a value-creation endeavor, actually is.
The responsibility of a software engineer is not mapping high-level constructs to low-level details. The responsibility of a software development engineer is to implement systems that meet the business requirements, and to operate on those systems at the abstraction level that makes sense for the problem domain.
It is entirely irrelevant what machine code is running, or even what machine is running the code, just like being able to model fluid flow over the control surfaces of an airplane is entirely irrelevant to steering the plane. A pilot needs to know how to control the plane using the plane's interfaces. Being able to whip out a computational fluid dynamics model is entirely irrelevant for a pilot if all they want to do is turn left/right.
High-level languages and abstraction layers are the key to simplify and speed up delivering value. No one should care about what pages of virtual memory their application is writing to if their goal is to serve a webpage in multiple continents.
The one and only purpose of software is automation. The degree to which a software developer strives towards that one purpose determines their employer’s return on investment, completely irrespective of the business requirements. Unnecessary abstractions exist not to simplify any return on investment but to ease candidate selection from amongst a pool of otherwise unqualified or incapable candidates.
> Unnecessary abstractions exist not to simplify any return on investment but to ease candidate selection from amongst a pool of otherwise unqualified or incapable candidates.
This take is outright wrong. One of the most basic business requirements is turnaround time for features, bugfixes, and overall maintenance, which ultimately means minimizing operational costs.
All production-ready application frameworks are designed to provide standardized application structures out-of-the-box that hide the implementation details that don't change and make it trivial to customize the parts that change more often. Backend frameworks are designed around allowing developers to implement request handlers, and front-end frameworks are designed around allowing developers to build custom UI elements from primitive components, provide views to present data, and fill in handlers to respond to user interactions. Developers adopt these frameworks because they don't have to waste time reinventing the wheel poorly and instead can focus on the parts of the project that add value.
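To make that concrete, here is a minimal sketch of the "fill in the request handlers" model, using an Express-style backend purely as an example (the route and response data are invented):

    // The framework owns the HTTP server, routing, and request parsing;
    // the developer only writes the handlers that differ per project.
    import express from "express";

    const app = express();
    app.use(express.json()); // body parsing handled by the framework

    app.get("/users/:id", (req, res) => {
      // Hypothetical handler: a real app would query a database here.
      res.json({ id: req.params.id, name: "example user" });
    });

    app.listen(3000);

The point is not this particular library; it is that the repetitive plumbing is standardized and only the project-specific handlers change.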
At least in JavaScript land all production ready frameworks only solve two problems: architecture in a box and put text on screen. These are trivial to achieve at substantially lower effort without the frameworks, but it requires a more experienced or better trained developer.
What you describe is a training failure, but your thoughts on the matter are an economic failure. The goal of software is eventual cost reduction via automation. I say eventual because software is always a cost center and its value is not immediately realized.
What you describe is employment, which is not the same thing. The least-friction path to employment is to turn candidates into commodities, to ease selection and reduce the risk of rejection post-selection. Once employed, the candidate's perceived value is often measured in the things you describe, which rarely translate into any kind of value add to the business. Churn is burn, which increases employee engagement but almost always increases operational costs. The way to decrease operational costs is with automation, which includes things like CI/CD, static analysis, test automation, and so forth. These automation efforts are not measured in churn.
That contrast is why many software developers are API monkeys, because it's what they are hired for and what they are rewarded for. That is why software developer return on investment is not defined by business requirements. Many employers need people to perform low effort work and do not wish to invest in formal training. This is all measurable.
> At least in JavaScript land all production ready frameworks only solve two problems: architecture in a box and put text on screen.
I don't think you have a very good grasp on the issue.
All production-ready application frameworks are designed to provide standardized application structures out-of-the-box that hide the implementation details that don't change and make it trivial to customize the parts that change more often.
That's what they are used for: to ensure developers do not have to reinvent the wheel poorly, and to provide very flexible ways to change the things that are expected to change the most often.
Front-end frameworks are used to help develop user interfaces. Describing user interfaces as "put text on screen" already shows you have a very poor grasp on the subject and are oblivious to fundamental requirements.
Unwittingly, you're demonstrating one of the key aspects where frameworks create value: they gather requirements and implement key features that meet them, so that people like you and me who are oblivious to those requirements don't need to rearchitect our ad-hoc frameworks to support them as an afterthought.
It should be noted that those who make the same accusations you've made regarding complexity aren't really complaining about complexity. Instead, they are just demonstrating that they are oblivious to key requirements, and because they are oblivious to them, they believe they can leave huge gaps in basic features without any consequence.
Strange. You are almost verbatim repeating what I wrote, expanding upon it, and then elaborating upon your expansion to justify the same conclusion that I wrote in about 4 words. To me this sounds like virtue signaling.
These frameworks provide value to the employer, not the developers, because it eases candidate selection and turns otherwise unqualified developers into less capable commodities. In that regard the value is entirely regressive because it requires more of the less capable people to perform equivalent work that does not achieve disproportionate scale, which is the economic goal of automation. If the given developers only return value directly proportional to their manual efforts they are merely overpaid configuration experts on top of data entry.
Some developers believe everything is always a framework, or that any attempt to avoid frameworks creates a new framework. I cannot help these people. It's "any non-religion is a cult" type nonsense, an affirming-the-consequent fallacy.
Software devs have zero control over the tooling, languages, networking and hardware.
It looks like they have a choice because the map of available options constantly shifts... but they're metaphysically locked to near-identical options in the universe of potential ways.
As long as computing is largely US based, it will always be this way. It's treason to go off-piste, in a large way.
Ask a pilot to recite Navier-Stokes from memory. They won't even know what you're talking about.
Building a gas turbine engine without training? Forget it.
The only electrical engineering a pilot needs to know is the difference between volts and amps, and what it means when a breaker pops. The EEs who design the avionics are not so fortunate.
Nobody is talking about building the engine lol, you're all making the exact "just program everything in assembly" comment the author was taking the piss out of.
You have probably 80% of people using React that have absolutely no idea how the three main functions in its API work. That's the equivalent of a 747 captain having no idea what the TOGA button does and only knowing that that's the one you press to do a take-off or a go-around.
To support your point, having a locked TOGA mode can be extremely dangerous even if the pilots know exactly what it does. The equivalent would be a risk of death if you just once wrote a React component that had an infinite loop in its render function; I would venture a guess that if there was that kind of risk, most programmers would resign in very short order!
Developers for whom the above argument is clever and persuasive are essentially casting themselves as the equivalent of cab drivers. Their libraries, frameworks, and hardware are black boxes to them that they manipulate in prescribed ways (and sometimes just plain cargo-culting) to achieve a desired result. When their abstractions break down, they have to call in specialists to diagnose and repair them. Of course, they get very agitated and defensive when people point this out and, very much unlike what a hacker would do, try to diminish the value of expertise and skill and call it unnecessary. And, okay, for them, it is.
Yes, a cab driver does not need to understand automotive engineering because a cab driver is, in the non-pejorative, technical sense of the word, unskilled labor. Is that really the analogy you want to make though?
"One day the stars will be as familiar to each man as the landmarks, the curves, and the hills on the road that leads to his door, and one day this will be an airborne life. But by then men will have forgotten how to fly; they will be passengers on machines whose conductors are carefully promoted to a familiarity with labelled buttons, and in whose minds knowledge of the sky and the wind and the way of weather will be extraneous as passing fiction." -- Beryl Markham, West with the Night, 1942 (first person to solo the Atlantic east to west)
Thank you for sharing that here. Your post is why there should be a 'favourite' button for comments as well as posts. Edit: there is one, and I've just clicked it!
The co-pilot of Air France 447 could fit that description. He went to fly highly automated Airbus planes as soon as possible, which means only 250 hours on other types. He didn't have a proper understanding of how stalls worked, since Airbus automation makes it almost impossible to stall the plane, unless something goes horribly wrong and it reverts to manual law, which is what happened on that flight. When the captain returned to the cockpit, he immediately recognized the situation as a deep stall, but by then it was too late.
It does look systemic. Modern planes being designed to fly by themselves and new pilots being trained as button pushers with no intuition about the machine they are supposed to control in an emergency. Not that automation is bad, but the brain-machine interface is degraded at this point.
well but it is very difficult to guard against every exceptional situation requiring some basic aviation skills. i think the point made is that you can fix leaks one by one, which takes a very long time, or you can know the abstraction layer below.
fixing one leak and assuming there isn’t any other is not the right strategy, is my point.
I think there are a couple of things wrong with this comment and analogy.
First, it's not clear the author is demanding 'from first principles'.
Second, not everybody in university needs the knowledge of flight or the ability to do it themselves - and there are alternatives to flight.
Finally, there are upfront and physical limits to what can be done to the process of flying. Computing and the frameworks to think about it are endlessly malleable (and applicable).
If I take some of the BS being sold today and transplant it back into the aviation metaphor, I can only describe it as a donkey ride being sold as flight. So many customers don't have the knowledge to tell the difference.
What resonates with me here (and why I don't like this comparison) is what I believe are the ingredients critical to progress. That is: only when we've figured out how to make something simple can we build on those foundations and take huge leaps forward. There is a limit to how far or how high we can go when things get too complex. Similar to the idea of technical debt, but at the scale of humanity.
Tragically we're very bad at recognizing those eureka moments in history when things became an order of magnitude simpler, because naturally we look back and assume they're obvious.
It would be concerning if pilots, aircraft maintenance technicians and aeronautical engineers shared titles and job descriptions so similar that it was hard to tell them apart.
Heck, the titles in software are so broad you may as well have the pilot, the tech, and the aerospace engineer all sharing the same title along with the flight attendant.
A friend of mine got a pilot license several years ago. He was taught a lot about how and why planes work. With a small plane it's generally expected that a single person can maintain it, the same way it works for cars, isn't it? Of course large jet airliners are another story, because they're large, expensive, mind-blowingly complex, and owned by airlines.
I think the raspberry pi (and possibly arduino) have made learning the foundational principles more accessible. They make it possible to go from circuit fundamentals up to unix sysadmin fundamentals.
Kids today don't get enough credit. A friend's son was so excited to show off the Roblox game he and his friends were making a couple of weeks ago. I was there for (Canadian) Thanksgiving, so there were some family friends and tons of people from his family there. He had already picked out something he thought each of us would be interested in about it, and I was the "show him the Lua!" guy!
People assume that because their phone/laptop is locked down and unable to spark curiosity that it must be the same for kids today, but I think they just become a little more un-curious themselves.
Yeah it's built on some abstractions, but I do think there's some valleys among the tall abstraction mountains that the curious few venture into, just as it's always been.
Yes, c64 BASIC lets you POKE at any memory address you want, but I was getting a pitch about a complex 3D collect-a-thon with FPS elements, from a 10 year old. That's also kind of cool.
Flash got a lot of people into programming because of how one could go from cartoons to copy-pasting snippets of ActionScript for some basic interactivity to full-blown games and apps.
Yes some people are experts and some are just adequate. Nothing new here. Not everyone can be a messiah like the author, who spends half of this short essay telling us how great he is.
This article has almost nothing to do with abstraction. The author loses interest in his own thesis after the first two paragraphs.
That's how it is with all the articles on that site, which occasionally reach the front page. I'm sympathetic to a lot of the sentiments but the author pretty consistently fails to actually make any argument beyond "I am the smartest, and I don't think this is good."
Yeah I was at least expecting some concrete example or something specific... but nothing. I don't even mind "old man yells at cloud" articles if they show an example or provide actual substance since you at least come out of it learning something... but in this case the entire article just sounds like a bad HN comment.
Same feeling. How did this article make it to the top? The example he gives is terrible: "someone using a modern framework was hacked, but the performance problem was not caused by the hacking; the framework was just terribly slow". OK, which framework was it? How did you find out it was slow just by looking at the diffs? What does someone not knowing how to tune a tool for performance have to do with abstractions?
The other thing is that we've adopted the wrong abstractions on many occasions. Today, in the software industry, we have an arrogant mono-culture which believes that we're at the end of history and have figured it all out... But in fact, I believe we've gone down the wrong path with a lot of recent tech. I feel like everything was moving in the right direction until around 2014... Then it's like progress started going backwards and we all started using the same hyped-up frameworks. Companies started forcing everyone to use these frameworks, and now software development has become both inefficient and demoralizing.
I know the current mainstream state of the art is inefficient for sure because I've built an SDK which I use for my own projects and I'm at least 10x more productive with it than with mainstream frameworks I used during my day job and the code is way easier to read and maintain. I could show the code to a junior dev who doesn't know any frameworks and they will be productive with it. I wrote a no-code BaaS (Back end as a Service) platform in 2 months part time using my SDK. I highly doubt I or anyone else could have done this in 12 months full time using mainstream tools and frameworks.
Most likely, your framework is productive for you not because it's the best framework available, but because you wrote it, know the ins and outs of it, and it matches up to how you think. This isn't to say it's not likely better than other frameworks in at least some ways, but familiarity begets expertise, which begets productivity.
Many experts in drywall installation before drywall screws were popularized swear that screws are slower and worse. However, an expert nailer and an expert screwer both complete jobs just as quickly (if not faster for the screw-adherent), and just as well.
And the ability to focus on just aspects that are relevant. That simplifies the problem immensely.
I look at the current situation of UIs in Rust, every one of which is unappealing, and think I could build my own. The basic structure and paradigm is so simple! Then, I think... Unicode support. Right-to-left text display. Affordance for text readers.
All very important, but things I could ignore for myself, but not in a publicly shared library. And... ugh. Either I would have to spend years figuring that stuff out, or pull in a number of other libraries which adds layers of abstraction and complication. So I fire up Civ 6 instead.
There is an element of truth to that, especially as I was building the SDK, it helped me a lot that I knew how all the parts worked and could make everything fit together into a cohesive whole. Though now that it's done, I think you don't need much knowledge to use it. Will be interesting to see how other people react to it as it's geared towards designers and junior devs who only know HTML, CSS and basic JS.
> Many experts in drywall installation before drywall screws were popularized swear that screws are slower and worse.
Screws are superior as an end product. Old houses will tell you that with the amount of creaking and slop that builds up with nails.
But there are videos floating around about the expertise of some old nail drywall experts. Holy smokes it could be fast. They are probably right about the speed.
> Companies started forcing everyone to use these frameworks
Companies are made of people who, hopefully, have people in decision making roles who have context and the knowledge to make the right decisions. There are reasons why we use "these frameworks" (be it frontend or backend). You may not like the reasons but corporate programming isn't just about language purity but what makes the company money, of which there are a lot of factors like hire-ability, maintainability, continuity, etc.
What happens when your company uses your bespoke SDK and you leave? You may think its easy to teach someone else how to use it but there's a lot more that goes into how companies make technology decisions.
I am curious about your SDK now. Do you host it publicly? If no, could you at least explain what is the difference between your method and mainstream frameworks?
I plan to introduce my SDK publicly via the no-code platform I've been working on, as a 'low-code' alternative for cases where additional flexibility is required.
To give you a rough sense:
- My SDK has some front end components and back end components connected via WebSockets using a client/server framework I wrote years ago and have been maintaining.
- It's all declarative so, for example, for the back end, I don't write much code; I just declare the models, what fields they have, what views of the data are exposed, and specific access controls. The back end is mostly a large JSON object. I don't want to go into too much detail but the way it's set up, it's very flexible and you can model almost any kind of data and relationships with little to no code. It guides you towards a good architecture which makes good use of database indexing (so it performs and scales well). To connect the back end models and front end, it's a twist on the old CRUD concept so that it is conflict-free. I went down a completely different path to GraphQL and I think the result is simpler and more efficient, provides better access control and simpler caching, and the code (or should I say markup/JSON) is much easier to read and maintain. As you build your system using it, it guides you into making optimal architectural decisions at every step, keeping complexity as low as possible (as opposed to GraphQL which, by virtue of its extreme flexibility, allows complexity to grow out of control).
- The front end components provide ways to render lists and objects in complex ways (e.g. grouped, filtered based on relationships between different models; all declarative). Components are hooked into the model backend in a particular way so they update in real time by default. Real time updates are delivered to the front end efficiently. Only relevant components/views update themselves and they do so automatically. Pagination is automatic and specified declaratively as part of the HTML, in accordance with the limits specified in the back end. Access control is enforced automatically based on the rules specified on the back end in the JSON object.
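To give a rough, simplified flavour of the shape of the idea (this is not the real syntax, just an invented illustration):

    // Invented illustration only: a back end defined as data rather than code.
    // Model names, fields, views, and access rules here are all made up.
    const backendDefinition = {
      models: {
        Post: {
          fields: { title: "string", body: "string", authorId: "id" },
          views: {
            // A view exposes a filtered, ordered, paginated slice of the model.
            recentByAuthor: { filter: ["authorId"], orderBy: "createdAt", pageSize: 20 },
          },
          access: { read: "everyone", write: "owner" },
        },
      },
    };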
I'm almost at a point where I can build entire complex apps using only HTML on the front end and JSON on the back end with essentially no code.
> I'm almost at a point where I can build entire complex apps using only HTML on the front end and JSON on the back end with essentially no code.
I think you are missing the point of no-code. Your solution isn't even low-code; you just created a framework with well-thought-out components to start a project. If you need to integrate a third-party component that uses code, you'll need to integrate that code in your HTML/JSON or create a new abstraction in JS to use in the HTML.
Disclaimer: I'm creating a "similar" platform and got curious about what would set your SDK apart from mainstream frameworks, but I don't think what you described is anything different. For the view part, is it using something like htmx, React, or pure JS?
It's definitely no code. I built some dynamic pages of my service to manage data with it that are all HTML markup and JSON. I would have used it for all pages if I had created the components earlier.
HTML is markup, not code. JSON is object notation, not code. I used very little code to build my service. I think this type of highly flexible low-code is the right path to no-code.
I know from experience that people with zero tech knowledge can be taught HTML and CSS in a few weeks.
The ideas and approaches you talk about evoked some of the concepts from that paper for me. It talks a lot about separating accidental complexity and infrastructure so you can focus only on what is essential to define your solutions.
> A big percentage of so-called experts today only know how to configure some kind of hype-tool, but they understand nothing about how things work at the deeper level.
This resonates with me strongly. Everyone seems to know how to do things by rote memorization ("to do X, add this line to your config file") but nobody knows how to go off-script if you need to do something slightly different (not-quite-X or a variation on X). Worse, people will waste your time trying to steer the conversation away from not-quite-X back to X. (Insert the parable of the man searching for his lost wallet under a streetlight because that's where the lighting is best.)
I think it's wonderful that we have created a tech ecosystem (a techosystem?) that allows people to contribute with only a very narrow knowledge of things—or, in less positive terms, that allows companies to hire cheaper labor because fewer skills are required. I believe there are many cases where a mediocre solution is better than no solution at all.
The internet of 1995-2005 was built on mediocre solutions. It was a blast.
Deep-dive experts are not a dying breed, they're not a limited resource with secret knowledge from a time before abstractions. We have more of them now than ever before. They're not defined by having worked all the way up and down the stack, but by having the curiosity to look into layers that are not their own, because not only are we adding more abstraction layers on top, we're also changing some of the layers down the stack: NVMe, WebGPU, WebAssembly, QUIC, AVX512 !
But deep-dive experts are a luxury that most teams don't need, and one of the most important skills of a technical manager is, I would argue, to know when they absolutely must hire one.
Increases in the levels of abstractions are necessary. The human brain doesn’t become much more capable over the years, but the number of available tools does so you need abstractions to allow for focus. I don’t see why this makes the future bleak per se. A programmer working on a business problem can be extremely productive in a high-level language even without understanding EUV, compilers, assembly, instruction sets, kernels, USB protocol, HTTP, dies, substrates, and much more. The same holds for pilots. They don’t need to know everything about aerodynamics, tensile strengths, aluminium, rubber, GPS, and much more.
However, a discussion about the misalignment of incentives in modern society can definitely be had. I would generally trust the pilot much more than a business software developer, researcher, or banker, because the pilot will be the first to arrive at the scene of a crash.
One thing to keep in mind is that software for a dialysis machine requires a different approach from some overly generic CRUD app exposing how many mansions you have or something.
There is a lot of asinine software in the world and that’s fine. Lots of “real problems” are pretty asinine and don’t require heroic engineering feats.
Just slapping some bullshit together is good enough in a frightening number of cases.
Hate it too myself, but I have come to accept I can either prove them wrong by building my own business and outcompeting them through my supposed superior engineering or just swallow that I’m yelling at the clouds.
> Hospitals don't check software quality when making purchasing decisions (much beyond "this button doesn't work")
This! The worst piece of low quality garbage I've seen from the inside is hospital software. Nobody really cares or understands software on the hospital side, so anything goes. The product is inherently complex, so there are only a small number of large companies who don't care either since they can rely on customer lock-in, no matter how crappy their service is.
The thing with software in which bugs can kill people is that if there is a bug and it does kill or hurt someone, the engineers who made this bug will be charged with a felony.
In other words, cargo cult programming. I wrote about this two years ago [1] and received only polarised responses that either agreed with the point wholeheartedly, or attacked me viciously for gatekeeping. I wish there was a better way to cure this disease without triggering the professional immune systems of engineers who are highly vested in their favourite technologies.
To me it was putting “react expert”, “node expert”, and whatever-else expert in quotes that gave it away. It was pretty clear to me that he viewed his role as exposing them as not real experts.
And 2 of the people did not understand his questions and he never even tried to clarify or consider that the problem was with his question.
“I asked a “React.js expert” to compare different SPA approaches such as direct DOM manipulation, MVC driven client side templating, component-based DOM manipulation and compile-to-vanilla-JS“
I honestly don’t understand what he is asking either. What is compile to vanilla js? Is he talking about compiling typescript to js? And what does that have to do with the dom? Also what is “MVC driven component based dom manipulation”? Like I feel like I am dealing with someone who read some design patterns book and dings anyone who doesn’t use the exact same terms he does. Not someone who has a superior understanding of development.
You asked good questions in that article. It's unfortunate, and I can't help but wonder if we've lost something essential because of how uncommon it is for people to be proud of their abilities, proud of their work, proud of their accomplishments.
In a nutshell, I think people who truly enjoy the satisfaction of doing a thing well are spending more time, generally, trying to truly understand things than those who just want to do their job and be done with it.
But what does that curiosity get us? I can't help but think of this fortune(6), and wonder how many non-curious people would even get it:
A novice was trying to fix a broken Lisp machine by turning the power
off and on. Knight, seeing what the student was doing spoke sternly:
"You can not fix a machine by just power-cycling it with no
understanding of what is going wrong." Knight turned the machine off
and on. The machine worked.
> It's unfortunate, and I can't help but wonder if we've lost something essential because of how uncommon it is for people to be proud of their abilities, proud of their work, proud of their accomplishments.
Is it? This forum is full of it in the form of blog posts and replies. And it is one of the things that makes it great. On the other hand, social media is also full of people boasting about their accomplishments, but more often than not it is just marketing.
This idea of gatekeeping is so cancerous. Nearly every time I see that accusation deployed, there is an implicit assumption that gatekeeping is obviously bad. It's just used as a trump, "you are gatekeeping, so I win". I think this is a ridiculous idea. There's nothing wrong with advocating for a high bar of professionalism, which is essentially what gatekeeping is. I'm very glad that medicine is a gatekept profession when I visit my doctor.
I read your blog post in order to understand why you might be getting negative reactions. You are coming across as combative and arrogant. It feels like you are quizzing people on details in order to show you know more than them.
I disagree. The questions are posed as challenges but are not charged, subjective, or overly pedantic. I also don't see where he's trying to show he knows more than they do in a self-serving way.
Some people choose to interpret any challenge as a personal attack. ¯\_(ツ)_/¯
Reading this, I had an epiphany about why open-source software is essential. How do you peek under opaque abstractions? It's just not possible.
Imagine if Kubernetes was closed source & binary distribution only. Understanding abstractions is, I think, why being open source has become table stakes for infrastructure software.
That's not really true, obfuscating JavaScript can be quite effective at making it difficult to study the program's workings. It can't be perfect at this, of course, but neither is conventional ahead-of-time compilation to machine code.
The future looks especially bleak because LLMs, for many of the people I work with, are doing with logic what Google did with memory. 'Just ask chatgpt' will be a thing; maybe not chatgpt, probably just Google but with their chatbot, which can do 'logic' and has abstractions built in.
Google -> you don't need a longterm memory, just Google it.
So with good LLMs (and the latest iteration of chatgpt is really good at a lot of things you don't want to bore yourself with), you don't need to process logic and abstraction as it will do it for you.
This is not yet there for everyone but I think it will work the same way. The mind gets lazy and has less and less rigor as the abstractions get more abstract and also more shallow.
Yeah I heard that from coworkers. Lazy mind, yes. Read the doc, look at one example, check one stackoverflow link and you will know how to make a carousel UI component...
And please do not use it to try to understand some logic in your codebase. Even less should you give it a small snippet of the codebase, thinking it will magically understand what the snippet does and correctly imagine what all the missing related code is and does. ( ! )
Most of the time I just wanna scream "use your brain". By the time he's written his first sentence to the AI, I've resolved the issue, or at least have a clue. It is really infuriating because, what's more, ChatGPT needs a minimally precise request to be expected to give a useful response. When the request from the user is blatantly imprecise, something like "help, thing doesn't work", I just feel bad for the AI having to deal with terrible communication, and for myself for having to deal with that coworker. Thankfully, when I am the one being asked for help, I know the project, can look into the code, and coworkers can show me the issue instead of failing to explain it.
Yeah, that is why I like programming: computers only accept precise communication. It is not "move that div to the left" but "move the div with id X 120px to the left of itself, over 200ms, with linear easing".
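For instance, that precise instruction translates to roughly this in the browser (the element id is made up):

    // "Move the div with id X 120px to the left over 200ms, linear easing."
    const el = document.getElementById("X");
    if (el) {
      el.animate(
        [{ transform: "translateX(0)" }, { transform: "translateX(-120px)" }],
        { duration: 200, easing: "linear", fill: "forwards" }
      );
    }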
Only once did ChatGPT do better than my mind or google: someone was searching for the name of a bank starting with the letter "o".
Your statement is correct; Google has replaced a lot of my memory, so maybe someone could call me lazy too.
While I agree with you that programmers increasingly relying on LLMs would be (is?) a problem, there is an element of neo-Luddism to this, as was excellently framed by xkcd[1] some years ago. Could we not choose to fill all that time we used to spend writing code on testing/verification instead? Or on performance, or documentation, or security? Or, to go back to the premise of the blog, on re-educating ourselves in the tools we use every day but don't actually understand? I don't know how the industry will adapt and I won't bother making predictions, but the future doesn't necessarily involve everyone becoming mindless code monkeys.
But I think it will eventually end up in having less understanding of what people are doing, which won’t help us.
And I don’t think we will become mindless code monkeys: I think we will end up telling the computer what we need in english but us having 0 clue or memory on how it achieves that. That’s the ultimate ‘too abstract’ issue.
I support the notion of this post. In the last 6 years I’ve mostly been busy removing layers of abstraction in order to uncover the set of tools which is a good balance for me.
For example, replaced ClojureScript with JavaScript and then eventually TypeScript.
Replaced Clojure with Java.
Replaced docker with VMs.
Avoided ansible in favor of simple bash scripts.
Avoid all kinds of firewalls in favor of understanding and using iptables directly.
Etc..
When it makes sense, additional layers of abstraction can always be added on top, if the foundation is solid.
I don't know how others feel, but to me docker is definitely less complex internally than a VM. For one, I'm running a single kernel/set of drivers rather than two.
But operationally yeah I agree, the stricter separation could make it easier to use a VM to get stuff done.
Just to clarify a bit. A VM is already a container.
When you run docker inside a VM it’s another level of abstraction.
Yes, one can run docker containers on bare metal, however the isolation is poor and so are the security guarantees.
In terms of excess abstraction, with VMs or bare metal you just need to learn the essentials that you need to know anyways like for example linux networking and security. With docker there is additionally container networking.
Think about this from the perspective of the post. For us, since we do use React, CLJS and its array of libs did become an excess level of abstraction: a wrapper around the JS engine and JS ecosystem which didn't bring enough added value for us to justify its use.
Typescript didn’t attract me at first but eventually it does bring a lot of added value on top of js and it’s 90% JS anyways.
We build hardcore SPAs and the amazing tooling and libs for ts pay off.
You can build infra with cloud-init to achieve something similar. You start with base image (e.g. debian cloud-init image), you craft cloud-init script (like Dockerfile) and you end up with VM disk file which you can run.
It won't be better than docker, for sure, but if you're going to use VMs and stay sane, you'll need to reinvent it.
At a certain point software engineering may come to resemble medicine more than mathematics, where the lower layers of operation are known to be poorly understood and innovations emerge from tinkering rather than derivations from first principle.
It seems like many important innovations we use today were derived from first principle.
Rust uses algebraic datatypes with ML-style type constructors and Hindley-Milner type inference to achieve, for example, types like Option<Box<T>> or Option<&T> or Option<&mut T>, so that you're forced to check for the case of a null pointer (None), and if it's not in an Option, then it's guaranteed not to be null. This was originally conceived of mathematically in the 1980s and then implemented in ML.
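Roughly the same idea can be sketched in TypeScript with a discriminated union playing the role of Option (illustrative names, not Rust's actual syntax):

    // The type system forces callers to handle the "none" case
    // before they can touch the wrapped value.
    type Option<T> = { kind: "some"; value: T } | { kind: "none" };

    function findFirst<T>(xs: T[], pred: (x: T) => boolean): Option<T> {
      for (const x of xs) {
        if (pred(x)) return { kind: "some", value: x };
      }
      return { kind: "none" };
    }

    const hit = findFirst([1, 2, 3], (n) => n > 2);
    if (hit.kind === "some") {
      console.log(hit.value); // only readable after the check
    }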
The Hindley Milner type system was devised in 1969 from the typed lambda calculus.
Golang was influenced a lot by Communicating Sequential Processes, a formal system for describing concurrent behavior in systems, first described in 1978 by Tony Hoare.
Graph theory is a field of math that comes up a lot in computer science, in marketing, networking, compilers (register allocation and functional programming language interpretation).
The only innovation I can think of that came purely from tinkering is Rust's borrow checking system, and they're still trying to formalize that while improving it.
Derivations from first principle are always important.
This is a constant source of pain for me since I started working 12 years ago after finishing college. I noticed when most people came to me with issues it was because they were afraid or just too lazy or didn’t know they could just read the source code or docs and go one level under the abstraction to figure something out.
My entire career so far I've spent trying to get onto teams where others were doing the same as me but had more experience. In the years I spent on those teams I learned a lot. But as time has gone on, a lot of those people have left for management, which just isn't the same as directly working with them on problems. I actually have also done the same recently.
When I look at almost anyone today, they have no desire to understand what is going on with the tools they are using. Furthermore, most of upper management pushes me to enable this by building more abstraction on top of abstraction they do not understand, so when it breaks I or my team can fix the actual issue. Ensuring the developers have an easy out for anything that is outside knowing their framework and some basic syntax.
I always wonder if people felt the same way about me if they came from a background where they had to know assembly or other lower layers of the stack. At any rate, even in the current environment I haven't been worried about finding new work if needed. There seems to be a very short supply of people anymore who know how things work under the hood or even have a desire to figure it out when it breaks.
Software's re-usability is both a blessing and a curse. Hardware also has modularity, but it is still common to build hardware from scratch. On the other hand, once a library or framework exists, many people no longer feel the need to understand the underlying algorithms. One side effect is that many of the frameworks or libraries still in use, and that are important for the dependent software to work, are written in languages that fewer and fewer people want to program in, such as C/C++ or even Fortran.
I’ve seen decades old Fortran codebases that everyone is too scared to touch. So instead all the new ‘features’ marketed to clients are just window dressing around the existing Fortran core.
Engineering simulation software. So being hesitant to change code is probably fair, in case you introduce new bugs. However that view ties your hands when improving the software and at some point becomes counter-productive.
If ignorance is rewarded, then people will willingly choose to be ignorant. The advice in this article sounds nice, but the reality is the rats only want the reward. If learning Kubernetes gets a six figure job tomorrow, people will chase it while ignoring networking and OS fundamentals.
I somehow agree with this. As a web developer who started on a framework-first approach (Vue + Django), I was having one hell of a time trying to figure things out because of my lack of fundamental knowledge. I think abstraction is okay, but you have to understand that just because you can make abstractions doesn't mean you should.
A young colleague of mine who has also started "framework first" with Vue + Django was recently confused about what a "serializer" was. They have "written some" within Django REST framework, but he was confused about their purpose.
I had to explain the problem of a single wire signalling bits in series, with the recipient having to de-serialize them into some data structure. Then I had to explain that TCP emulates such a single wire using small packets.
I think that they have understood, but it was a funny feeling explaining this to someone who routinely deserializes form and JSON data, then serializes them into SQL queries, then deserializes query results in order to serialize them into templates or JSON.
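A minimal sketch of that round trip, with invented values: a structure flattened into the bytes a wire carries, then rebuilt on the other side.

    // Serialize: flatten a structured value into a byte sequence.
    const form = { name: "Ada", age: 36 };
    const wireBytes = new TextEncoder().encode(JSON.stringify(form));

    // De-serialize: the recipient rebuilds the structure from those bytes.
    const rebuilt = JSON.parse(new TextDecoder().decode(wireBytes));
    console.log(rebuilt.name); // "Ada", reconstructed on the receiving end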
> Power steering is yet another level of abstraction that further improves the driving experience.
I am a pretty firm believer that antilock brakes are a bad abstraction that might cause fewer accidents, but often more dangerous accidents than they prevent.
They avoid a class of accident caused by the brake’s locking limiting your ability to steer. They cause a whole class of accidents where you hit things at a higher speed than you otherwise would have because your ability to actually slow down is greatly diminished. It’s a trade of braking distance for control. Basically we’re prioritizing rapidly swerving around an obstacle over less controlled but far more rapid deceleration.
I really don’t think this trade off makes any sort of sense in anything but the most sparse rural environment. In urban and suburban areas, swerving blindly around an obstacle will just mean hitting something else. Yes, you missed the car that pulled out in front of you but now you’re either throwing your vehicle into pedestrians on the sidewalk or into oncoming traffic. Both cases likely a far worse outcome than the accident you are taking evasive actions to prevent. The sanest option becomes just to hit the obstacle you would have been able to stop for were it not for antilock brakes.
In my eyes, the most fundamentally frustrating part, and what makes them a bad abstraction, is that the problems antilock brakes solve are entirely preventable by human intervention. Namely, pumping the brakes. The class of accidents antilock brakes cause are largely unavoidable. You can lessen their effect by not fully depressing the brake, but it is still a much longer deceleration than with no antilock brakes at all.
On tarmac you slow down faster with ABS. The friction is higher when the tires are _not_ sliding. Fastest deceleration happens just at the point before tires would start to slide.
On gravel/snow, ABS performs worse. But 99% of the time you likely are not in such a context.
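For a rough sense of the numbers (illustrative textbook friction values, not measurements): stopping distance is about d = v^2 / (2 * mu * g). With mu around 0.9 for a rolling, non-sliding tire on dry tarmac and around 0.7 for a locked, sliding tire, stopping from 100 km/h (about 27.8 m/s) takes roughly 44 m versus roughly 56 m. That gap is why threshold braking, which ABS approximates, beats locked wheels on tarmac.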
I was curious about your statement, so I looked it up:
> ABS increases stopping distances on surfaces covered with gravel, snow, or other loose materials. In such situations, a locked tire digs into the snow or gravel, pushing it forward and forming a wedge in front of the tire, which brings the vehicle to a stop
I think the point was that anti-lock brakes remove options and make things less safe in certain (not very uncommon) instances.
With anti-lock brakes, you're protected if you've never learned anything and you just try to put the brake pedal to the floor.
If you know how brakes work, you're worse off than if you don't have anti-lock brakes, since you can no longer properly control your brakes. In other words, we're punishing people who learn and know how to do things in order to ostensibly protect people who can't be bothered to learn.
Consider how many people drive with their headlights on, but no other lights. It's because automakers are selling "features", so it feels like we're actively encouraging people to think and pay attention less. Unfortunately, these "automatic" lights aren't truly automatic, and the value of having simple off and on states is lost because of these "features". It's actively unsafe.
> With anti-lock brakes, you're protected if you've never learned anything and you just try to put the brake pedal to the floor.
That sounds nice on paper, but in reality you need training in order to overcome the instinct to smash the pedal through the floor... and you need regular practice to avoid reverting to instinct. Needless to say very few people can _actually_ take advantage of manually controlling the brakes.
> So what is going to happen when the level of understanding in the tech industry reaches such a low point in which the majority of people don't even know how to fix the tools they are using?
This is an insightful article, although I don't necessarily agree with the view that "everyone" needs to know everything from first principles to be good at their job.
Talking about abstractions: over the past month I've been reading nand2tetris, and it's a compelling experience if you understand the exercise you are doing, which is not about building a computer from first principles; it's much more than that.
It makes you really understand what's going on behind the layers of abstraction that have been raised. Sometimes, understanding every layer of complexity is impossible, but depending on the area you are in, we need at least to try to understand the roots of it.
However, this is not for everyone; ask a musician if they are really in the weeds of why the instrument is producing music (the physics behind it!). They are probably aware that it's vibrating air, but, in general, they won't know the theory behind it.
Everything is built on abstractions, and that's OK; with time we will add even more on top of what we have. The issue is when those abstractions lock you into a mindset that prevents you from creating something new without relying on those same abstractions that helped you build stuff.
Many discoveries and inventions were made because people knew the existing layers of abstraction and just started again from scratch. Even Figma, the software, is built on a new core of concepts based on how the web was working at the time [1]
> ...ask a musician if they are really in the weeds of why the instrument is producing music (the physics behind it!). They are probably aware that it's vibrating air, but, in general, they won't know the theory behind it.
A big difference is that, unlike computing, their instrument probably won't stop working because of some subtle change to physics introduced by a seemingly-unrelated change made to the universe by some other party in the musical instrument / air / molecules / atoms / quarks "stack". Theirs is a world with some assurance of stability.
Ours is a field built upon shifting sands. Knowing what the foundations are that the edifice you've constructed sits upon allows you to affect repairs when it crumbles unexpectedly.
Thanks! That one frequently trips me up. If I think about it for a moment I know "his affect" is different than "the effect something has", but, as is sadly and normally the case, I power thru writing w/o thinking as much as I should.
Musical instruments can and do break for unknown reasons and it's exactly as reasonable to expect musicians to crack out their machine tools to fix them as it is to expect a javascript dev to fix a kernel bug.
Computers are a relatively new invention. There are still people around from the times when knowing the entire stack from top to bottom was not just valuable but necessary. They worked during periods when abstractions were far leakier than they are today and far less reliable but also far simpler.
Some of those people have persisted with the attitude that knowing the stack top to bottom is still necessary, even as those stacks have grown geometrically in complexity. This does not signify wisdom, only age.
- Knowing how to change strings is comparable to knowing how to use the browser console
- Knowing music theory is comparable to understanding programming language theory
- Being competent on the fretboard is comparable to being competent in one programming language. Strumming/fingerpicking could be considered another language.
---
And then for the controversial one...
- Only knowing how to use music-making software is like only knowing how to use low-code applications for development.
> However, this is not for everyone; ask a musician if they are really in the weeds of why the instrument is producing music (the physics behind it!). They are probably aware that it's vibrating air, but, in general, they won't know the theory behind it.
Not a great metaphor. First, most instruments were invented by people who had no knowledge of the theory, not even of sound involving vibrating air. Second, many musicians do understand some of how their instrument works. It's not hard to understand how a guitar works, how to tune it, and how to fix a broken string. Most can't fix a broken body or neck, but they do understand why it stopped being playable. Hell, I knew a flutist who simply started to build her own recorders: zero knowledge of physics, but after a few years she was producing great-sounding period instruments.
> However, this is not for everyone; ask a musician if they are really in the weeds of why the instrument is producing music (the physics behind it!). They are probably aware that it's vibrating air, but, in general, they won't know the theory behind it.
Knowing the nitty-gritty of the acoustics theory and the related math may not help you much as a musician... (that being said, if you're building a home studio or doing any mixing/mastering, you'll find it helpful to learn about physics of standing waves etc).
However, the details of how frequencies come to form the well-tempered scale with all the tradeoffs and imperfections, and the low-level details of music theory, would be something many musician nerds would actually know.
I think everyone needs to know everything they do professionally from first principles. And if you are to build abstractions, they also need to be based on first principles in order to be easy to reason about.
Sometimes I'm learning a framework, and I'm at the part of "...you write this easy syntax and it outputs pure HTML!", which is great. But then on the next line you read "also, when you use [obscure symbol], it does [obscure thing] in order to respect [obscure concept]". You have no idea what to do other than start googling those three new things, none of which are explained properly, or at all, in the current "super simple framework" documentation. I think even naming should make sense.
It's not only developers; we've had a regression in skill in the art/VFX world as well. If there are no step-by-step tutorials on YouTube, most people can't resolve visual/art/VFX problems anymore. And it's about the most basic things: of the people with "art" degrees today, maybe 3 out of 10 can draw or sketch. If they are not in front of a computer they can't communicate visually. They are also not able to adapt or solve problems on the fly, which is really important if you supervise on set.
Great article. Reminds me of a boss I had early in my career, who would get a bit nervous if you asked any questions about the details of what we were actually doing. I later found out it was because he just copied the procedures used at his previous employer. “We do it this way because we have always done it this way” kind of deal. I wondered why any suggestions of doing things differently fell on deaf ears. Thankfully my next boss was excellent technically and was happy to change our procedures if it would make life easier.
It's not so bad once you just face it head on: we have toolchain cancer. Yeah, cancer sucks, but are we really so happy with our current set of companies that having them die of cancer is such a bad thing? They're not people. It's ok to want them dead so we can do things differently in the future.
Not all technology feeds into the cycle of tool bloat, we just need to get better at choosing our tools wisely instead of letting somebody's marketing department influence our decisions.
Back when I was an IT leader running all our internal appdev, I had a "pet" business system that my team used exclusively to write, optimize and rewrite as a testbed and learning environment for new frameworks. It was probably not the most efficient way for me to run the org, but I honestly believe it saved lots of longer term headaches by containing a few potentially terrible decisions to the scope of a single, not-mission-critical app.
Having worked for many years in IT in Denmark, I would say the author is right, but many of these are luxury beliefs, since in Denmark people are given a lot more time to do their job right than in many places. In many parts of the world developers are not given the time to understand anything, and end up spending hours after work trying to understand the tools in their own time.
Why Denmark is so special in that regard I do not know, but in Europe generally, if you are directly employed by a company you get good training and the ability to get a lot of help and support from colleagues (based on the European consultancies I have worked for). I've also worked for Asian consultancies, and there is more of a sweatshop mentality in those companies, where "everything" is about the billable hours and there doesn't seem to be the same sort of work/life balance. (I'm half Indian, just for context, so I try to see both the European and the Asian angles.)
But was the author’s work groomed? Was time scheduled for it in the backlog? What feature specifically was his Columboesque investigation supporting?
This is what happens when you turn Engineers into Devs. Software Engineering used to be viewed more as a profession but orgs have been chasing the holy grail of commoditizing software development. It’s not assembly line work and never will be.
I've been writing software since the early 1980's and I remember being pushed to get it working and thinking "they'll rewrite this and fix it in the future, this is just to get it working now". It's been terrifying to realize over the years since then that no one ever revisited the code and it still has not been fixed in many cases. Some of that code was used to control trains, nuclear power plants, chemical factories, medical systems, and satellites. I suspect some of it has been replaced, and ultimately it was the responsibility of the people building those systems to make sure they worked correctly, but poorer countries often stole software and a lot of it was still seeing use long after the product was no longer even sold. The world is in some cases hanging on a thread of old software that no one understands any more and no one is supporting. Source code may not even exist.
I've worked in software QA also and abstraction is the bane of debugging, especially when you can't even see the code in the libraries you're using. Proprietary binary blobs in embedded systems are the worst.
The article raises good points, but I feel it misses the bigger picture.
Every generation has some people more interested in the depth of their field than others. So why do we see fewer of the actual experts?
Well, we don't, we just see more of the surface level developers being able to contribute real economic value to society. They were just kept out by the more demanding requirements before.
If anything, it's a good sign we've come to have this luxury problem.
Do we? It seems like there are some people developing Unix clones for fun, and building their own RISC-V or 6502 CPUs. We don't have much innovation in OSes from amateurs, though. At least there's seL4, which is formally proven to follow a certain spec that makes some security guarantees.
I am not personally familiar with any of those OSes (other than reading about the...bizarre TempleOS), but many of them have last release dates within the last 5 years.
Abstraction is a feature of technological progress.
Not to say that OP is incorrect. But intelligent people were opposed to abstraction already in ancient times. Socrates said this about the invention of writing [1]:
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
From today's perspective it seems absurd to oppose written information. The net benefit on society is clearly positive.
I am not saying that every abstraction is a net positive. But the example demonstrates that it is easy to oppose abstractions, even if they turn out to be positive in hindsight.
"Don't just learn tools, try to understand how the underlying technology works."
Sound advice imho:
"If you understand the nature of the beast, you know what it's capable of."
Rarely, if ever, necessary to know all the gritty details. But it sure helps to understand the structure of what's underneath, or how it uses whatever is underneath that.
For example: I'm no mechanic who knows the outboard ICE on my boat like the inside of his pockets. But I do know it's a 2-cylinder, 4 stroke engine, how those cylinders sit in the engine block, how to unscrew the sparkplugs & check those, what the various other parts & hoses are for, and (roughly) how much fuel it should consume for trip x, y or z.
Imho that's about the level of understanding programmers should have about the tools of their trade.
Don't worry, old guys.
The reason coders don't know all the low-level skills you've devoted your life to now is because they don't need them. If the future really becomes bleak, they'll start learning the skills they need.
These abstractions also exist to fill the void of an ever-increasing job sector.
I sometimes feel like there are multiple specialties (for which different roles, and often different people, are required) where someone only has to work way less than full-time (maybe even as low as 5% of the time). When we hire specialized people, they do the needed 5% and then have to find additional work to fill in the time. So we get more complex tools with incremental improvements, we get more abstractions, we get more overall complexity.
I feel like if I was alive and working 60 years ago I would have been saying the same thing about mechanics, construction, and analog electronics. But really it doesn't matter for most jobs. Knowing how things work underneath is needed if you are forming your own business and products from the ground up, it is only slightly useful if you are working for somebody else, and as time goes on what monetary value there is left in that knowledge will evaporate to nothing for 98% of workers.
Sure most people don't need to understand this stuff.
But the problem today is that very few people, in tech and not just the general public, understand the low-level stuff, especially among new people entering the field.
Furthermore software abstractions tend to be far more leaky than abstractions in the physical world you mention. And when they leak you need someone who understands what's going on under the hood.
Actually I don't think this knowledge is most important for people forming their own businesses at all - they have many other problems to sort out. But I think it absolutely is valuable working for someone else. These people are the "goto guys" and, ultimately, a "technical insurance policy" for the companies they work for.
A company producing tech products without at least a few people on staff who can dig into and fix almost anything is in a difficult position ultimately.
Yet, someone has to know this stuff, or be capable of learning how it works and as a consequence be able to work on those arcane layers. Otherwise we stand no chance of finding new, more general abstractions to construct a stack of fewer abstractions in total. We probably won't get to fewer abstractions, but to avoid making the problem get worse even faster, we need some counterweight.
There are some things you need to know, though. For example, most devs today "know" Docker, but very few realize that it opens the exposed ports on the host firewall. Exposing a port for another service to access internally is intuitively safe, until you look at the box and realize that same port was opened to the internet. Convenient, thought the Docker devs, but touching something so important without feedback is reckless. That's an abstraction that absolutely shouldn't be there; too many devs don't RTFM.
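(Not from the original comment, but to make the footgun concrete: a minimal sketch using the Docker SDK for Python, assuming docker-py is installed and a local daemon is running; the image and port are arbitrary. A plain numeric port publish binds to 0.0.0.0 and, as described above, Docker's own iptables rules sit in front of a typical ufw setup, so binding explicitly to 127.0.0.1 is one way to keep the port local-only.)

```python
# Sketch with the Docker SDK for Python (docker-py). Illustrative only.
import docker

client = docker.from_env()
client.containers.run(
    "postgres:16",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},  # required by this image
    # ports={"5432/tcp": 5432} would publish on 0.0.0.0, which on many hosts
    # is reachable from outside despite the host firewall rules.
    ports={"5432/tcp": ("127.0.0.1", 5432)},       # loopback-only publish
)
```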
This could be applied to society as a whole. Currencies are an abstraction on bartering which is an abstraction on doing every task required to support your life and family yourself. Each layer of abstraction creates a specialization and efficiency. Those who consume the abstractions may lack the details, but they have levers long enough to move mountains! Those who provide them have a valuable role. Don't be afraid, we must keep making more.
I've been doing academic biomedical research for the last 15 years. Finding competent reviewers for computational papers is like NP-complete. Yet there is a factorial increase in sophisticated code in the literature. So we have the inverse problem to tech: not enough of the old-school, go-to reviewer scientists have learned the foundational stuff.
I would love there to be a hands-on "History of Computing" class that runs through the history of computing (maybe starting in the 1970s?) via hands-on activities (through emulators, probably) to give some perspective on how awesomely fast and powerful computers have become. This could also serve to expose where things have regressed (e.g. text editor and other UI responsiveness, complexity of the web stack). A version of this lesson could also be taught with video games, to show that though older games were visually simple, and just a tiny fraction of the file size, they were frequently just as fun.
Semi-related: I'd love to see a single video game that modernized the game experience as you played: Text mode only -> text mode + static VGA graphics -> 2D sidescroller -> Wolfenstein-quality -> DOOM-quality -> Quake-quality -> Half Life 2 quality -> modern-AAA.
I'm working on a team with very naive, young, non-technically trained programmers. I love it so much. Start a program, open a file, process the file one step at a time. No DI, not much unit testing, no fancy frameworks or language features. It's so refreshing to just work on adding business functionality and just testing it with users.
The author's "Advice to people studying technology" goes against the dominant practices for individuals making money in software right now.
This is sometimes a misalignment with organizations, though other times a company is playing games (e.g., priority is to appear externally as having growth or progress).
Speaking of failures traceable to levels of abstraction.
"The hacking was rather discrete and would most likely never had be discovered where it not ..." should be "The hacking was rather discreet and would most likely never been discovered were it not ...".
The abstraction at fault here? Spell checkers. It's not enough to know how to look for the squiggly red lines under words, it's necessary to know what words mean. As Jerrold H Zar put it:
Eye halve a spelling check her,
It came with my pea sea.
It plane lee marks four my revue
Miss steaks aye kin knot sea.
Kubernetes is great, but working with a tool that installs an in-cluster REST API that calls another in-cluster REST API that renders objects to be consumed by another in-cluster REST API that will also render objects to be consumed by multiple in-cluster REST APIs that will, eventually, produce containers that are externally accessible through a glorified NGINX config (with L4 filtering done by iptables, if you're lucky) can get extremely tiring.
(To be clear, this is still better than scripts that would call scripts that would write values to files/databases all over the place that are mutated by other scripts since Kubernetes does a really good job of enforcing interfaces between boundaries)
In my (admittedly anecdotal) experience, security people are the worst at this. I’m not sure if the problem is that they have so many certs that you can just memorize canned answers and get a job rather than go through an SWE coding test at most places, or something else.
The author claiming that no single person can master development and ops and security is missing the point. By abstracting over some of the details of all of those fields, someone can be proficient in all of them, and therefore gain the perspective that comes from bridging those three separate but aligned skillsets.
We haven't used _too many_ abstractions. We've merely leveraged abstractions in a way that makes a different tradeoff. A small number of abstractions gives you deep expertise at the cost of a narrow perspective. A larger number of abstractions gives you a wider perspective at the cost of a shallower understanding.
Neither is wrong. Both are useful. Different situations will call for each.
So is the argument that everyone should be a mechanic? Including the long haul truckers, the race car drivers, stunt people, taxi drivers, etc? Such people use their vehicles professionally, full time. Obviously technical knowledge is useful to them in case of a problem or to fine tune their vehicle. But in general, no, being able to steer without a steering wheel doesn’t really do much for them. It’s okay to specialize at the higher layers of the stack.
Programming is more like building with legos than building or using a car. So much of programming is composing units of work, even if you are starting at a low level of abstraction. And those units are very generic, very reusable, with little need for each unit to know about the goals of the end result until you get very high in the stack. In other words, we specifically avoid using specialized parts unless it becomes necessary.
I would wager that >90% of the useful, beautiful, functional software that I enjoy using every day was written by people who don’t know how to write assembly. Whether you measure it by lines of code, work hours, whatever. And that’s okay. They had time to add features and work on their business model rather than having another go at correctly loading their data into the CPU registers.
That is a good point. What I take from the article as valid is the value of an engineer’s general attitude toward these matters.
I've worked with a fair number of devs who are very clever, but who, instead of having a sense of wonder and curiosity about the tech their systems rely on, show a disdain for it. The kinds of fixes I've seen almost go into prod are scary, e.g. “the packaging system will not pack this 3rd party exe in with our code, so I'll rename it to txt” - anything to avoid understanding the lower-level mechanism and devising a sane solution. And management is thrilled because they deliver on schedule; engineers who protest the “working” approach are, at best, placated with a tech debt story in the backlog.
Your comment and many others here start their criticism by conflating builders with end users - does every taxi driver need to be a mechanic? This comes off as a bit of a strawman since the author is referring to building software, not simply consuming it. If the person who designed my car is just selecting prefab components on the strength of blog posts and industry hype, with weak knowledge of how they are built, I’d be worried.
Suppose someone is a relatively well versed professional but they have a hole in their understanding…They don’t know X - where X is HTTP verbs or linear algebra or Assembly or the difference between Java and JavaScript or how to write vanilla JavaScript in general…
it’s very likely that those skills are immaterial to their job. It’s also very likely that they will never need those skills.
All of computing is layering abstractions and no one - no one - understands all of them. The author cherry-picks their own favorite layers as being “what kids these days don’t understand” while ignoring their own ignorance of other layers.
One does not need to understand alternating current in order to plug in a vacuum.
I think the more accurate analogy would be about the people who build and sell vacuums, not people who “plug them in” i.e. the end users.
The technology that underlies vacuums is a lot more mature and stable than the technology in modern software development. So even if a modern-day vacuum engineer doesn't understand AC at the same depth as their predecessors, it's less likely to be the Achilles heel that the author is describing for modern devs who have little understanding of computer technology.
I want to point out that computers in particular are just not that hard to understand. E.g. "Getting started in electronics" by Forrest M. Mims III ( https://archive.org/details/gettingstartedin00mims ) covers gates and silicon down to the sub-atomic level, and it's suitable for a bright child.
Let's not try to excuse the mess in IT by appeals to the aircraft industry until we have a semblance of their professionalism, dedication to safety, and history of handling faults and errors and learning from the process.
It’s not possible for everyone to know everything. Almost no one knows how to fix their <insert technology here, cars for example>, but at some point in history the opposite was true. Abstractions make technology available to the masses.
However, too much abstraction, on a long enough timeline, where does it end? The blob-humans in Wall-E, where no one knows anything and everything is done for us?
I’ve definitely felt in the last ~10 years of my career that the tools and libraries I use in development contain “too much magic”.
OP doesn't seem to realize that time is not infinite. Business demands require abstractions upon abstractions. If we're going to go down that path, we should also understand the biological aspect of how atoms beget cells, which beget organs and organisms, on to brains and then thought, which then translates into creating the logic gates and machine abstractions on which we use code to translate thoughts into programs.
I think there needs to be a clearly defined difference here between professionals and experts.
You can be a professional in the field, but until you understand all the layers of abstraction, you aren't an expert. You can't diagnose those deep problems and fix them.
Professionals are paid to work in an area, and are thought to know enough not to be horribly dangerous.
Experts, on the other hand, are supposed to understand and have some competence in all the layers of complexity/abstraction present.
It can take decades to reach expert status in a given area.
A few weeks ago I had occasion to talk with a working computer security professional, and asked him about data diodes[1] and how often they are used... he'd never heard of them. Often here on HN, I make comments about capability-based security[2], and everyone mistakes it for the permission flags on smartphones. This tells me there aren't many experts in the field of computer security.
The same is true in other fields, you can be a CNC machinist, but until you know about the Whitworth 3 plate method[3], you're not an expert.
>> You can be a professional in the field, but until you understand all the layers of abstraction, you aren't an expert.
Let's not inflate titles here. I've noticed what the author is writing about and will offer another example. Remember when the bad-guy hackers used to make hacking tools and exploits available for free? Before that was big business? Then a bunch of kids would leverage their hard work to cause trouble. Do you recall what those guys were called? Script kiddies. A lot of so-called professionals these days are little more than script kiddies. That's not to say they aren't effective or useful (the old hackers caused plenty of trouble), but they really don't know the internals of the tools they use. They can keep things going until something weird happens.
I'm not sure what I think of this state of affairs. Not everyone can go deep, but I do feel the bar has been lowered too far in many cases. It's like millions of small components... NPM: because nobody can be bothered to figure out some problem and write 50 lines of code themselves.
There is no process that can prevent unknown failure modes.
Legacy software is almost always going to have issues, and some use complex frameworks knowing full well they have heavy maintenance burdens.
Saboteurs come from all skill levels and backgrounds. Integrating accountability in the development and deployment process is wise.
Most modern "Hackers" are just the sane old cons repurposing common auditing tools to check for known CVEs. Most others simply don't care about some obscure website.
When you catch unknown people poking hardware in COLO data centers... the real problems start to become apparent.
With enough coffee and doughnuts anything is possible. =)
"Abstraction" is a misnomer. This word has its useful meaning in math and art, but in software engineering, all what we call "abstraction" is automation in disguise. When you write a piece of "abstract" code, you only delegate writing the piece of concrete code to your compiler or run-time type deduction.
And as soon as this is clear, the attitude follows. Should you know how every aspect of your code is compiled or interpreted? Not really. Should you realize that every automation takes resources, creates accidental knowledge, and introduces its own probability of failure? Yes.
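(A small aside, not from the comment itself: in CPython you can literally print the concrete code that an "abstract" one-liner delegates to the compiler and runtime, which makes the point above very tangible. The function and values are made up.)

```python
# Show the bytecode CPython generates for an "abstract" expression.
# dis prints the loop/generator machinery hiding behind sum(...).
import dis

def total(prices, tax):
    return sum(p * (1 + tax) for p in prices)

dis.dis(total)
```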
This is a useful insight, but I think that many automations really are based on solid, honest-to-goodness abstractions of the same type you would find in math. Furthermore, that's good and important!
It's true that a compiler is merely translating a high-level language into the actual machine code, but it's simultaneously true that you can (generally) talk sensibly about the high-level language, and maybe even prove theorems about it, without making any reference to how it will be realized on the machine. That's a good abstraction.
The better your abstractions, the more your automations make sense, and the more you can reason about them. Automation without abstraction gives you biology, which is a gnarly mess of accidental complexity that's very difficult to reason about and control.
That said, I'm going to start thinking of my code more in terms of creating efficient and low-risk automation, and less about creating nice abstractions.
Yes. I agree with you wholeheartedly, and that's exactly why I have to insist that the phenomenon we're talking about is not an abstraction but something else.
You see, there is no such thing as a "good abstraction" or a "bad abstraction". In math, algebraic systems are either isomorphic or not; there is no qualitative "goodness" to a fact. It is only in software that abstraction acquires this immeasurable quality of goodness, or leakiness, or even fashionability.
"Automation in disguise" is not a good term either. But I find it slightly less misleading, as it brings fewer false promises in its connotations.
If what you claim were actually the case, the names we assign these "abstractions" wouldn't matter, they could be any arbitrary sequence of characters, since all you'd be doing is grouping code into procedures for the sake of automation. But the names we assign things obviously do matter, not only to make the reader grasp what the abstraction is about, but to decouple the abstraction from its implementation; two different abstractions may have an identical implementation and still be different abstractions, and an abstraction may remain the same abstraction even if you alter its implementation.
I claim that what we call "an abstraction" in software is something else. E. g. while you state that "an abstraction may remain the same abstraction even if you alter its implementation", I insist that the right verb here is "must" and not "may". There is no such thing as "leaky abstraction", "good abstraction", or "an abstraction that only works on x86_64" anywhere else other than in software engineering.
And I'm being nitpicky because the word "abstraction" brings in false connotations. It implies that you can reason about things on different levels as if they were equivalent. But in software, they never are. And I agree with the author in that this pretence of equivalency just doesn't hold and adding more and more abstractions on top of each other is not the way to go if you want to understand how things work and how to make them work predictably and efficiently.
I don't see how my claim diminishes the importance of naming.
I disagree and believe "abstraction" is the correct term for what the author intended.
> Computer science commonly presents levels (or, less commonly, layers) of abstraction, wherein each level represents a different model of the same information and processes, but with varying amounts of detail.
I don't challenge the author's use of the word "abstraction"; I challenge the validity of the very word "abstraction" in the context of computer science. What we're dealing with here is not abstraction, it is something else. But thinking of it as abstraction brings us to the state of affairs the author criticises.
If you remember that there is very specific very concrete automation behind every abstraction, you don't stack them up until the pile becomes impossible to either control or understand.
There will always be two types of people: people whose interest is in getting things done regardless of the underlying means to solve an issue, and people who get stuck on why it worked and go down the rabbit hole for hours to see why something worked (society sometimes calls these people nerds). Sometimes I wish I could be the former and not the latter. The most important thing is to be aware that there is a black box/abstraction in front of you.
Confucius once said: "To know what you know and what you do not know, that is true knowledge."
>> A big percentage of so-called experts today only know how to configure some kind of hype-tool, but they understand nothing about how things work at the deeper level. This is a real challenge and a big problem for the future.
Oh, I couldn't agree more! And it is not just tools and systems, it is everything from power generation to infrastructure to society and economics.
When we start to work like that (the DevSecOps example used in the article, but the same happens basically everywhere else too), it spills over into everything else. And yes, this is dangerous indeed.
Power steering, for example, simplifies steering thanks to a small mountain of abstractions built up over the years. Doubly so if you have adjustable power steering (i.e. you can select "sport" steering vs "comfort" steering). https://en.wikipedia.org/wiki/Power_steering
Abstraction layering is annoying, but I think abstracting hard things ad infinitum, ad nauseam is, overall, a good thing!
We're in the tail end of a Cambrian explosion of new languages and frameworks. That explosion is understandable given the huge shifts from mainframes to desktops to web/mobile/cloud. The TIOBE Programming Community index shows today's programming language use is much less concentrated than it was 20 years ago. I expect a few dominant languages to re-emerge as the overall computing landscape stabilizes and LLMs make it low effort to translate entire systems between languages.
The problem is not only the level of abstractions but the cross-dependencies and assumptions across abstractions. For example, in higher-level frameworks (e.g. React) you must firmly follow the rules, in a way that makes assembler look like a toy language compared to the stuff you need to know up front.
The security issues extend beyond what is described in the article: you assume security is solved by a stack of layers when it is not. A single issue impacts everything, no matter where in the stack it is.
- newcomers to programming reinvent wheels because it is a natural way to learn. That is why we often have alternative frameworks/CMS/languages and so on
- there are many crooks in the industry who owe their earnings mainly to marketing and contacts. They are also the most vocal, deceiving clients and developers alike
The amount of data has increased over the decades, but programming and software haven't fundamentally changed in 50 years. We face challenges managing this workload.
Yes and no; abstractions dumbed things down for common users. But there are and always will be a set of power users who get into the details. Power users are the ones who abstract in the first place, and abstraction is not a one-time process. Abstraction keeps evolving too, hence there are going to be power users who know the capabilities and support that evolution. This is how any field works, let alone software. There is no need for everyone to be a power user.
Right. So now, where to look for an organization that rewards and supports people who like to know how things work? Even if they are troublesome and grumpy.
I think people generally think this stuff is "easy" and willingly ignore the layers of complexity.
Someone else pointed out the surface-level knowledge of power tool users... but that's always been the IT industry.
IMHO you either have "this" ability or you don't. There will always be a few people who can think about and design complex systems, and then everyone else who thinks it's easy but only understands it at a surface level.
Very good article; I 100% wholeheartedly agree on DevSecOps and most infosec team members only knowing specific tools. We need people with a range of backgrounds, not isolated at layer 7.
Personally I think cloud is a pretty big abstraction layer, I prefer to work on non-cloud and hate proprietary terms and disconnected tooling for common problems.
To understand a solution admins need to understand how it works via code and config access.
While I agree with the sentiment, I also struggle to find the right approach to peeling back abstractions and understanding the lower levels. Learning how something works under the hood takes time and can be a big commitment. How do you decide when to look below the abstraction? I'm most often interested in how things work under the hood, but I can't always dig deep, which does bother me.
i see many comments disregarding the take. while the arguments put forth may not be the strongest, there is a point to this.
most people only chase the next thing you can build on top of what we have today. they only look back inside the layers when the current tech is not enough to achieve their goals out of the box.
a recent example i see is the quantization and tinyml developments in machine learning. while it is easier than ever to create a model and to run it, the underlying architecture, previously only up to the people designing the frameworks, is now finally being looked at. only because the LLMs cannot fit inside the memory of elusive enterprise GPUs as easily as you'd like.
for the past 15-20 years of writing software, there was hardly any other instance where most people would care about how numbers are stored in memory. i think necessity is the mother of invention, and that would probably still apply to dealing with abstractions going forward.
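(As an aside, a toy sketch of the kind of "how numbers are stored" concern that quantization resurfaces, using NumPy; the values are made up and the single-scale scheme is deliberately simplified.)

```python
# Naive 8-bit quantization: float32 weights (4 bytes each) squeezed into
# int8 (1 byte each) with one scale factor, then approximately reconstructed.
import numpy as np

weights = np.array([0.31, -1.20, 0.05, 2.47], dtype=np.float32)
scale = float(np.abs(weights).max()) / 127
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(quantized)  # something like [ 16 -62   3 127]
print(restored)   # close to the originals, in a quarter of the memory
```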
What else can you do but look at files and code to figure out problems? And why use a monospace font for prose? I agree otherwise though, it can bite you in the arse if you don’t understand the layers of abstraction you’re dealing with. It’s the reason I sort of don’t like browser-based stuff. We already have a full OS stack for doing stuff, why add another on top of it?
>Today programmers and system administrators no longer exist, instead we have DevOps and even DevSecOps, in which the industry is trying very hard to stuff every single task into the job description of a single individual.
Nice to know I'm not imagining this. Platform Engineers seem to be going extinct, with the Software Engineer taking over the role, and doing it badly.
I was once managing a few large file servers, with bog standard users as well as devs using a pretty complex directory tree of "assets".
There was a pretty high-up, specific directory level, which was where the ZFS file servers were given their different loads to handle. This was a directory level where new directories were created rarely (99% at the start of the project).
For reasons of money as well as speed, I asked that the server admins (ie. me) be the ones to create any further directories needed at that particular level. The head dev refused to entertain the idea of not being able to create directories anywhere he wanted at any time, and therefore, a new system was brought in at five-figure costs in order to make the file servers into a large abstracted blob that users never had to think about the complexities of managing.
I was given an opportunity to exit the IT dept and become a Python dev and I took it, shortly before that system came in, because it caused many problems which were much worse than needing to have an admin create a directory for you maybe once or twice, and the evident ignorance of everyone I spoke to at the vendor made it very clear ahead of time that it would.
This was not the only such massive expenditure on a toxic boondoggle in the name of "simplicity" that I witnessed.
No but I forgot about that one. It's a good one too. Thank you.
The one I remember was about a "technician" in this world where nobody remembers the underpinnings of technology. There is a big competition to solve a known problem using the existing tech toys that are available. (something like a tech olympics) The technician wins the competition by using the equipment in non-standard ways and re-engineering their functions in ways nobody had ever done.
The judges are at a loss as to how he could have done this since it is not in any instruction manual. :-)
As someone who learned computer science by tinkering with abstractions until I needed to go a level deeper, and then got a formal education in it, I think this article is inflammatory at best, because the author is attempting to assert themselves as the voice of a wise person, or as an expert on how the future "is bleak".
I don't think there is a maximum number of abstractions that is suitable. I think the value of an abstraction is defined by its quality.
Having many abstractions on top of each other leading to inefficiencies is a problem, but that is not a problem of the number of abstractions, but rather the poor composition.
I would need to talk to the writer to make sure we're talking about the same thing.
DevOps isn't "that guy does everything". DevOps and Dev ARE different. Dev's product is what we sell to customers. DevOps' product is the construction of a software development pipeline, from PR to production, that ensures the policies and procedures of the company are enforced on the code base while setting up the code to function in the real world.
Dev designs the widgets.
DevOps designs the widget assembly line. DevOps job is to eliminate the Dev's pain points around deployment, resources, security, and compliance.
I know this sounds like gatekeeping, but you should question the leadership of any company that merges dev AND devops. No one coder can be a complete team; there's simply too much to know, unless you heavily rely on high-level deployment products like AWS app server.
Now onto "abstractions"...
Your abstractions should focus on the domain language of your subject matter experts. The abstractions should, ideally, let a subject matter expert browse your code without being too confused or overwhelmed.
Abstractions around technology like web or gui frameworks should be decoupled from your company's abstractions. Frameworks like that are just platforms that your product should plug into.
A gold standard for a company's code is that it could be easily cut out of a framework and plopped into another environment. A web app one day could be adapted to become a batch job that's run overnight somewhere else.
Every generation we have to relearn these principles because coders are pretty bad at teaching. There are few ways to write code well - compared to the myriad ways of writing code poorly.
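(To illustrate the "plug into the framework, don't marry it" point above, here is a toy sketch; the names and the discount rule are invented for illustration.)

```python
# Domain logic that knows nothing about HTTP, databases, or any framework.
from dataclasses import dataclass

@dataclass
class Order:
    subtotal: float
    is_repeat_customer: bool

def discounted_total(order: Order) -> float:
    """Pure business rule: trivial to test, trivial to move elsewhere."""
    discount = 0.10 if order.is_repeat_customer else 0.0
    return round(order.subtotal * (1 - discount), 2)

# One disposable adapter; swap it for a CLI or an overnight batch job
# without touching the domain function above.
def handle_http_request(form: dict) -> dict:
    order = Order(float(form["subtotal"]), form.get("repeat") == "yes")
    return {"total": discounted_total(order)}
```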
I’ve seen what happens many times when the abstraction fails, and people don’t understand the guts. The people who know the abstraction replace it with something based in the concrete, but without the middle layer, and which resembles the abstraction.
> The developers knew how to put together a website and an API using a "modern framework", but did not understand much of the coding of the framework itself
This example sounds like a bad choice of framework, or insufficiently skilled devs.
I don't think the problem is that we have too many abstractions. Abstractions are useful: they allow people to focus on where they are different, avoid wasteful duplication of effort (when done right), and compensate for the fact that nobody is omniscient. They come with a cost, and you need to know it.
There's a clear tendency to just buy into new trends, and that's always been the case. Maybe the smaller scale of the industry back then made trends smaller and the choice more limited.
I think there's clearly an issue with the lack of interest to understand what's under the abstractions. Is it training, habit, culture, a change of who is a developer today? Don't know.
It was also much harder to be a dev 15 or 20 years ago without knowing at least some C and some systems stuff. Stuff seemed more brittle as well, so you had to fiddle. Deployment was mostly manual and artisanal, so you had to copy files over by hand, run commands, shit like that. Honestly, the peace of mind and safety that frequent deliveries and "devops" brought is so good that I'd find it comical if anyone suggested I go back.
So I don't think there are really too many abstractions. I think that sometimes they're the wrong choices, and sometimes people don't care enough about what's under their immediate interface.
> It was clear, just by looking at how bad everything was performing, that something was wrong.
Meh, zoomers didn’t invent incompetence and slow, ugly software. Not everyone is cut out to write aerospace grade software and, thankfully for those people, not everyone needs it.
And it's very very hard to fight because "Bob set up a foobar in a day and look it works!" is not the same as "we understand all the pieces and components and what they do behind foobar"
This is not just a problem in engineering. Read a news article. Talk to anyone about anything. The lifting that poorly defined abstractions are doing is toxic.
agreed, dealing with things of this nature from within a scientific field where no one understands the complexities of the data analysis and simply trusts every oversimplified metric explained to them is quite concerning
the individuals deemed to be "experts" rely on closed source software to the point that the software is the real expert
And nobody here knows how to raise a cow, milk it, and slaughter it. Neither can anyone make a fire in the rain, and god help us if the internet stops working. We’re so far from basic humaning that every generation needs to relearn how to change a baby.
I think there is a fundamental issue with the article and that's the author stating devops or devsecops is someone taking care of both development and operations. That's just not true.
Yes, there are some people who don’t know how some things work. The conclusion that the “future seems bleak” doesn’t actually follow, though. Perhaps the author was already pessimistic?
This post misses a crucial step in its argumentation: trying to understand why these abstractions happen. Blaming it on some kind of "techno-moral" decay is not an explanation, it only turns the argument into some kind of arrogant self-reward post. It's like when Plan9 fanboys argue about the lost "purity" of Unix.
The anecdote about this kind of security person is interesting, and it's not hard to sympathize with him, but he is missing the point: the industry seems to need someone to run these kinds of pre-made security tools. These jobs are not pointless (perhaps they would be if the people who actually "know" the other abstraction layers didn't create software that sucks); they are solving some problem, and people are working full time and getting paid for them. And the fact that they exist does not mean these jobs make sense or that the tasks they focus on are the right or the wrong abstraction.
> What good does an abstraction do when it breaks and nobody any longer understands how the technology under the hood works?
Not many programmers know how a kernel works internally (processes are just another abstraction). Not many know how compilers work and translate high-level code to machine instructions either (there is probably no person in the world who understands all the parts of LLVM/GCC). The number of programmers who know how CPU instructions translate to transistors is very, very small.
Yet all these abstractions sort of work. People argued back in the day against programming in high level languages, nobody cares about these people, because the kind of problems that can be solved with high-level programming languages can't really be solved with assembly. Abstractions don't appear because companies are stupid, people are trying to solve problems with software and they need to get some concrete task done. Doing the quick hack does not mean that they are doing something wrong, it means that they are focusing into doing something right at another level. And if you can't understand that, it's _your_ fault.
Of course, plenty of times companies are doing stupid things, but that's the nature of the problem: companies try to do different things, some of them succeed, some don't, some succeed despite being horrible and some fail despite being brilliant on paper. So abstractions are created all the time, and there is a continuous dialectic between an abstraction and its usefulness, which is not measured by the opinion of other programmers, but by the success of the companies adopting and following these trends. For people who know a lot about systems programming and administration, it may feel stupid that these days we have people with cloud certificates who are in charge of "orchestrating" scalable and fault-tolerant platforms in the cloud, but know very little about how Linux systems work underneath. But it turns out that these people can get things working, even if they don't do it as well as you would, and that's something that matters - it means that the abstraction sort of works, even if it leaks sometimes.
I guess it's not easy to spend decades learning things only to wake up one day and realise that large parts of your knowledge have been abstracted away and automated (i.e. made less relevant, and thus less valuable in the job market). But that's how things are in this field...
I think there is one interesting angle to this problem.
I am someone who grew up with the technology, as the levels of abstractions were being added. I am now benefiting from all those accumulated decades of knowledge.
As the IT / development world was changing, I had the enormous privilege and comfort of learning things at the pace they were happening. Being able to assimilate changes over long decades. Being a witness to the problems and the logic behind all those new solutions. Understanding how we came to have JavaScript and the browser mess we are in, and so many other curious features of today's digital world.
I understand pretty much all of the layers of the computing from how CPUs achieve some of the things they are doing to bus protocols, to instructions, physical memory, low level OS internals, high level OS internals, virtual memory, userspace platform communication with OS, programming language runtimes and linking, shared libraries, IPC, networking, virtualization, etc.
The issue, as with any automation, is that new players on the scene (younger devs, devops, etc.) simply have no chance to learn the same things and go through the same path.
For them, spending a decade working with a low level programming language before you jump into high level programming language is simply not an option.
We, people who really understand the technology that the world runs on, are a slowly dying breed. We are still here as tech leads, managers, directors, business owners. But there will be a point in time when we will go on retirement and there will be only precious few people who had perseverance to really understand all those things by diving into obscure, historical manuals.
You can actually see this with new frontend devs. They know only full SPA frameworks; they have never seen a dump of an HTTP message, and headers and verbs are abstract things to them. Hell, many of them don't know you can have fully functional websites with zero JS, including payment, video, login, etc.
I started to write an HTMX tutorial (https://www.bitecode.dev/p/a-little-taste-of-htmx-part-1) because I noticed a lot of young coders don't understand what to do with it. They read the tweets saying it's nice, but when they look at it, it makes no sense to them.
It's really fun because I now remember how some senior coders looked at me, knowing nothing about compilation. I was struggling with Python packaging because before wheels, it required compiling a lot on linux, and it failed often. For them it was obvious: just install the headers, look you need the dev packages, wait, you don't have gcc?
Nowadays I happily patch nginx source code and compile it manually, but it took a lot of work to learn a minuscule chunk of what all those guys knew by heart.
> they have never seen a dump of an HTTP message, headers and verbs are abstract things to them.
When I was teaching programming, I had a fun party trick whenever we got to HTTP. I'd fire up netcat (in listen mode), then connect from a web browser and "serve" a website by hand. I'd show the students the HTTP request that came in, and just manually type out a simple HTTP response, and they'd see it appear live in the browser. It's pretty magical.
And once I'd shown them that, I'd write (by hand) an HTTP request to Wikipedia or something, to show them how it's symmetrical.
Of course, real websites increasingly do HTTP/2 over TLS or something, so unfortunately it's not as "real" as it once was. But if the wide-eyed look on my students' faces is anything to go by, it was a great lesson.
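(For anyone who wants to reproduce the trick without netcat, here is a rough Python equivalent; the port and the response body are arbitrary choices.)

```python
# Listen on a port, print the raw HTTP request a browser sends, and answer
# with a hand-written HTTP/1.1 response. Start this, then point a browser
# at http://127.0.0.1:8080/
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, _ = srv.accept()
    print(conn.recv(4096).decode())   # request line, headers, blank line
    body = "<h1>served by hand</h1>"
    conn.sendall((
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
        f"{body}"
    ).encode())
    conn.close()
```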
Not quite the same, but I remember sending emails via telnet. I showed that to a younger dev who was somewhat blown away by the fact it wasn't via some REST API.
Yes, I actually remember another tech showing me how to type "get / http(whatever)" into telnet to check a web server, and that is when a lot of things clicked into place. Basically it was the realization that computers are just sending text back and forth at blindingly fast speeds. Obviously I had some sense of what was going on before, but that was the demo that did it, and I had to learn it on the job because no one at school ever did that, which I find sort of backwards - school should be where you play with that sort of thing, but I suppose we all learn differently.
The email telnet thing was also a good learning experience. Gmail's servers are fun because you can see the designer's cute messages; if you forget your EHLO, they'll throw an error (or at least they did years ago) that it's polite to say hello first.
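(If telnet feels too raw, the standard library will happily show the same dialogue; a sketch with a placeholder server name, and note that port 25 is often blocked on consumer connections.)

```python
# Print the raw SMTP conversation, EHLO and all, using only smtplib.
import smtplib

with smtplib.SMTP("smtp.example.com", 25, timeout=10) as smtp:
    smtp.set_debuglevel(1)   # echoes every command sent and reply received
    smtp.ehlo()              # the greeting the anecdote above is about
    smtp.noop()              # a harmless command, just to see another reply
```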
When I was at Fog Creek, we still had that as an interview question (with googling allowed). It was a good judge of how most folks approached something they used but didn’t necessarily understand. If the applicant already knew how to do it, that was a different signal as well.
Think ChatGPT would be allowed nowadays? It’d be my source for something so esoteric. I haven’t bothered to learn mail protocols because any message you send will be marked as spam unless it’s from a major provider.
This is not true. I run a mail server for myself and some friends. It started as an old desktop running under the desk at a university, transitioned to the back corner of a server room when I was working as a network engineer, and now it runs on a Raspberry Pi in a closet of my house. I had to pay extra to get a static IP at home, but everything has been off the shelf and DIY. My current Raspberry Pi has been running for nearly ten years, with only one interruption (the SD card failed). The idea that you can’t run your own mail server is a myth, and I think more people should do it. It is not hard, and you will learn a ton.
(I used to run mail for a large corporation, so yeah, I know a lot. But I learned how to do it by running a mail server out of my dorm room.)
Happy to hear that; thanks for correcting me. I was going off of what other people said, which I should have mentioned. It’s nice to know that it’s still possible to run a mail server.
No idea, but if it was available back then, we’d have been looking for whether you are critical about what it tells you and how you verified it was correct. That was a main idea behind the interview.
This reminds me of when I was a kid, and learning to write some C to "create games". At that time, things like functions were some vague and "magical" mechanisms that I had to adhere to because that is how things are. I didn't even think more about them, taking them as "holy"; except only the one weird thing that were varargs functions, which seemed to do something strange and confusing, putting a disturbing crack into that perfect picture, that I had trouble wrapping my head around. It was only when this kid started learning assembly, by virtue of a book with an amazingly baiting title "How to write computer viruses", that things fell into place, and suddenly understanding the stack made everything clear and straightforward (if no less amazing, through appreciating the brilliant genius of the procedure call protocols!).
And indeed, the first time opening a POP3 session to an email provider through telnet was also an amazing feeling; maybe even more visceral thanks to its "live coding"/immediate feedback aspect; but personally just a small bit less foundational. So, anyway, through this whole story, what I really wanted to say is - thank you for your service and approaching your teacher's post in a great way!
Yeah, I used to occasionally debug and test things by writing manual HTTP requests over telnet, and something I've noticed is that new devs often just don't get how approachable HTTP/1.1 is. Even if they have worked with headers and methods, there still seems to be a lot of magic happening in their minds.
Unfortunately chrome has stopped showing cookie headers in the network tab. You have to look somewhere else for those, and I think at that point you’re just seeing the current cookies, not the raw headers. Maybe there’s a security reason for that, but it is a further abstraction layer of “we’ll present the information a certain way” rather than “we’ll just show you all the information that the server sent back” that I expect will grow into other headers (maybe already has) and further remove us from seeing the raw details and having a clear understanding of what’s happening under the hood.
(You know it’s possible HTTP/3 cookies are even passed in a different way under the hood from other response headers, and that could be part of why they’ve separated them out. I hadn’t considered that possibility.)
You can probably still do that by running a local proxy and communicating with it via good old HTTP/1.1, letting TLS and HTTP/2 and 3 be handled by the proxy.
I have worked with interns and young devs just out of school who are sharp as tacks. I had an intern a few years ago, the first task we paired on was to figure out why our MySql connections from Qt were misbehaving. We stepped through the application code, the Qt code, found where it was failing, and ended up backporting a Qt patch from upstream to fix it. Yes, I gave him some guidance, but once he caught on he was able to navigate this complex mess of C++ code across a few technology stacks and effectively reason about it and make changes.
A lot of CS skills are generalizable. Knowledge is one component you pick up with time. A good education, self-taught or otherwise, should allow you to drop into any sort of code and be effective without much spin-up. College covers CPU architecture, assembly, networking, operating systems, web, algorithms. This is not esoteric stuff, it is very standard and you can get it all in class or from textbooks free online!
There's a huge gap, though, between the boot camp methodology of essentially "here's a visual design or a set of fields, I'm going to pattern match that onto a single React component or backend endpoint that's isolated to a single file" vs. "I'm comfortable looking up functions 8 files deep into a codebase and maintaining a mental map of what data flows where."
Once you have the latter, either by having built such a codebase, having worked in one, or even having experience with puzzles or games requiring multi-step planning and understanding of the potential failure points at each step... it's absolutely transferable. But there are also a lot of people in our industry who have memorized interview questions and see their role as churning out components. And while arguably that's not a CS education, it's being called the same, and it does a disservice to those people's careers.
I’ve worked with around a dozen interns/co-ops, and only 2 stood out to me. The rest often made me wonder if they could reasonably handle this kind of career. I hope they could!
It’s tough if you feel a degree of responsibility for their success. Mentors are one of your greatest assets early on (and arguably later as well), and to try hard to have them succeed and thrive only to see them languish on trivial tasks is awful.
I think part of the problem is that CS education where I live is awful. The kids come out of school expecting real work to be wildly different than it is, and it hits them like a brick wall.
Maybe a difference between then and now is that "back in my day" CS still wasn't considered to be a "hot" college major. And there was believed to be more flexibility in choice of a major relative to whether you could get a decent job.
Today, there are probably a vast number of entrants who heard that CS is the ticket to a high paying job, and they are also told that an internship is a vital bullet point on their resume, if not a guarantee of a job at the internship site.
Then, as now, students studied under the constant drone of "you will never use this stuff once you finish college." They still have to decide if they're actually interested in the subject matter or not.
A good bellwether of career interest is the students in the youth symphony. They've all aced every subject in high school, plus rocket club, gentleman sports, and orchestra. The program for the end-of-season concert will have a little bio for each graduating senior, including their college interests. Half of these kids want to major in CS.
I also see plenty of smart young guns. And if I think back, I was better myself when I was younger. I'm less excited by digging into problems with each passing year.
That’s an underestimate, and is it really that unexpected given the huge interest in CS? Previously there was an implicit filter: only the most skilled people could make it into this new world that was just being built. Now that it is built and everyone wants to enter, of course the average quality will decline.
>You can actually see this with new frontend devs. They know only full SPA frameworks, ......
Around, I guess, 2017 or even 2016? I used to think this was some sort of internet troll comment about people never having seen a dump of an HTTP message, until......
>Because they started with assembly.
It wasn't necessarily about starting with assembly or something low level (although that certainly helped). We had less entertainment, more time, and no Internet (for the most part). Things also didn't work all the time, and we had to spend time figuring them out ourselves. That is where all the knowledge comes from.
Having to figure it out is not specific to a generation. My generation had magazines and forums. Then came Google and SO. Now ChatGPT.
The difference is where you start figuring things out.
Before, you needed to figure things out at your level, because it was the immediate area of mystery.
Now that this level is generally solved, you need to figure things out at a different level:
- filter out the mass of irrelevant information, the outdated information, and the spam
- understand how all those complex abstractions interact. We have good resources on how each works individually, but obviously not on the Cartesian product of the monsters we build with them
- debug the things that don't work when the magic fails, way below your level, or behind a service where nobody is looking
The problem before was scarcity of info, lack of standardization and roughness of systems.
Now it's abundance, opacity and too much sophistication.
But everybody still has to figure things out. Just not the same challenges.
Agree with everything except the out-of-date info. The old stuff is the good stuff you can't find anymore because Google thinks you don't want it. The newly generated SEO AI shit is what you need to filter out.
It's fair to say that these are different kinds of "figuring out" though. Learning by trial and error is distinct from learning how to find and synthesize information, and each leads to very different outcomes over the long term.
I'd argue that the "find and synthesize" generation have an advantage within contemporary software paradigms because of their experience but, without deep knowledge of the foundations they are building on, they might be disadvantaged when it comes to imagining/creating new paradigms.
Then again, first-order thinking seems to be easier when you're not mired in the traditions and conventions of the past, so maybe this isn't actually a disadvantage.
Trial and error is a fundamental part of how we work as human beings. There is no learning or understanding without it and no "figuring it out" without it. So that dichotomy between "trial and error" and "find and synthesize" doesn't exist. They are the opposite sides of the same coin. You can't have one without the other.
Finding and synthesizing doesn't do you any good if you're unable to meaningfully apply it or understand what you're applying and what you're applying it to.
Trial and error also doesn't do you much good without the ability to back it with knowledge and to find the relevant knowledge.
Judging solely by the date of publication is not the best criterion for filtering. For example, Vannevar Bush's article "As We May Think", published in 1945, is the oldest dated item in my reference library. Conway's Law comes from Melvin Conway's 1968 paper. David Parnas' 1972 paper "On the Criteria To Be Used in Decomposing Systems into Modules" still applies.
The "less entertainment" and "more broke things" is probably the critical piece of the environment.
This makes valuable anyone with long attention spans and immunity to boredom.
For me, it has taken some habit forming (and habit prevention), but I have learned so much in the past 10 years since giving up gaming and social media. HN is an occasional vice though...
> Nowadays I happily patch nginx source code and compile it manually, but it took a lot of work to learn a minuscule chunk of what all those guys knew by heart.
I'm about half a decade into my career and I've recently tried to take note when I hit milestones or achievements, even little ones.
I was helping an intern with a tool and it didn't have support for what we needed it for. Since the tool was open source, I just cloned it and patched it to add support for what we needed. When I told the intern this, he couldn't comprehend that I was so blase about modifying this tool that seems like witchcraft to him.
But what I quoted, I feel deeply. I've worked with so many people with so much knowledge they can't possibly communicate it, and I'm only starting to really understand the tech around me. I didn't think anything of patching the tool at the time, but it's important I look back so that the younger version of me can see the progress I've made.
With all due respect, this is some elitist nonsense.
Sure, a frontend dev who strictly works on that won’t know much about, say, the OS layer. But they don’t have to, it’s not part of their job in any way.
There is no shortage of young people that work on areas that makes them require deep knowledge of many different layers of the stack, it is simply not necessary for every IT-related job. But one can absolutely pick it up if they want to.
It is not elitist to point out the flaws of modern web devs who don't know what the HTTP protocol is or the structure of an HTTP request (headers vs body, etc.). I interview a lot of them who cannot explain the difference between a form submitted directly vs through Ajax, but they surely know how to send a POST request through node/express.
I don’t even know the difference between a form submitted through JS vs a browser. Who cares? Unless there’s a problem, there’s no reason to know such things. And I’ve written http webservers by hand.
People here have way too much confidence in their interview questions being a good signal for experience. It’s pretty wild.
My point is that your question is some esoteric gotcha party question that may as well be out of Trivial Pursuit. But you’re treating it like anyone who doesn’t care is just being a bad engineer. There are countless ways engineers spend their time, and choosing what to work on is the most important choice of their careers. It’s on you to justify the claim that knowing the difference between a JS form submit vs browser submit matters at all, let alone that it’s a distinction that comes up in day to day life.
The issue isn’t that every single person in the IT field needs to know HTTP in detail. The issue is that people who have invested in training for a role and are applying for a role do not understand fundamental technologies that are essential to that role.
For you, that would be like trying to work in ML without understanding any of the theory or how it works, having only used OpenCV with some premade models a few times.
For a frontend web developer who, as a large part of their role, will need to communicate with backend systems… that’s not understanding how their FE web application actually communicates with BE systems.
And from experience… yes, this is very common. And it has a noticeable impact on their effectiveness. Trying to debug why some interaction between your application and the BE application isn’t working while thinking the dev tools network inspector is just black magic and nonsense makes your job substantially more difficult.
This does make these people bad engineers. They are not able to understand and solve a huge class of the problems that they face day-to-day and instead (in my experience) often fall back on “just try a bunch of different things until something works for reasons I don’t understand” which is a poor way to approach work and leads to overly complex, buggy, brittle systems.
I strongly believe tech interviews should be about seeing _how_ a candidate works and less about _what they know_. Everyone has holes in their knowledge, but those can easily be filled by training or on-the-job experience if the candidate has excellent problem-solving skills. Maybe they never needed to know the protocol level but can deliver excellent UX results regardless.
I've started nearly every position in my career in a different domain without prior knowledge, including workflows, protocols, and the languages that implement them. I may be an extreme case, but we exist and your process inherently excludes us. I've worked in robotics, consumer electronics, healthcare, fintech (not even in that order, although that probably would have made more sense). I've delivered in each domain as well as my expert colleagues. But if you asked me to implement an LLM, I'd shrug and tell you I don't know how, but I _would_ explain my process for HOW I would learn to.
"how_ a candidate works and less about what they know"
I do get your point but depends on the role you are hiring for. I have had candidates get upset at me (for example: bootcampers) because they couldn't explain how to submit a basic HTML form but wanted me to look at their github "portfolio" that was done during the bootcamp with React/Express Code and what not. Their title was "Front end React Developer". That is a problem and I usually wouldn't hire those people.
If you're specifically _not_ looking for React front-end developers, then the on-the-job training to "get up to speed" on the position you're filling _can_ be cost (time) prohibitive for those candidates in certain orgs. Likely you should filter those candidates out earlier if that's your case. Sounds like you are, but I'd like to also point out that this strategy has its own costs.
For example, I would take a react developer with obvious gaps and a strong willingness to learn on my back-end services team over a passable back-end dev who was set in their ways (unwilling/uneager to learn new tech, entrenched opinions stated as fact, etc.)
Naturally, my approach is not a one size fits all situation and a great deal depends on org structure and mentorship opportunities being in place for it to work. The benefit being you avoid monoculture "silos", and have more cross collaboration and transfer opportunities between teams (some may call this "full stack", I wouldn't)
Several high performers on my team are "self-taught", work comfortably across several languages today and can pickup new tech easily. They came in knowing their "one stack" at the time. If they were "bootcampers" or not seems irrelevant and somewhat reductive/offensive.
That said, if the candidate in question shows _no_ understanding of their problem domain, can't reason their way out of a paper bag, and gets visibly upset at questions when you try to tease that out, that is certainly a red flag in my book.
With respect to your example, however, it could be argued that HTTP details, given the abstractions inherent in React, aren't critical domain knowledge for what is essentially a UX dev. If they can ship an experience your users enjoy more than the candidate who can recite the HTTP/1.1 RFC but cannot ship, then what have you gained in hiring the latter other than maintaining a culture of pedantry?
TL;DR it comes down to how much risk your team is willing to take on, as there's always risk inherent in hiring ANY candidate, including those that don't already check all your boxes in regards to tech know-how. I'm merely stating that by overlooking candidates that don't fit your self-imposed mold, you're likely missing opportunities for rewards that can pay dividends when it does work.
> Sure, a frontend dev who strictly works on that won’t know much about, say, the OS layer. But they don’t have to, it’s not part of their job in any way.
The HTTP protocol, HTML and vanilla JS are not "the OS layer", and yes, it's part of your job as a web developer to at least understand the basics of them. There are many so-called "frontend devs" nowadays who literally only know React, and if you asked them to create a basic webpage where clicking a button changes some text in an element without React, they'd be completely lost.
Usually the problem with this kind of developer is not just that they don't know, it's that they don't care. They WILL inevitably run into problems that require this basic knowledge to solve because the frameworks can only hold your hand so far, and instead of trying to figure out what's going on under the hood that's causing their issue, they will instead shrug and go "it's not part of my job" and just write a bunch of garbage spaghetti code in an attempt to work around the issue.
I think the critique is that you're applying this too narrowly to front end developers.
Pretty much anyone who has been around long enough has met someone with years and years of experience doing exactly one thing, whether that's WordPress or Java CRUD or something else, who has blindingly obvious gaps in their knowledge that someone with their experience shouldn't have.
The lack of deep knowledge isn't limited to a single field.
Shit developers used to cobble something together in PHP and had a bunch of horrid code on the web to copy and paste.
Looking up some deeper part of the stack is not that hard for a legit dev.
There are people who suck at their jobs in every industry. My wider point is that the skills required of a good web developer are vastly different from those of, say, a compiler dev. The latter is not a "webdev who is a better developer"; the web dev needs much more business knowledge: understanding what the client meant, predicting future requirements, etc.
This is an entirely new business domain, it’s not really trying to be the same.
Besides, it's not as if a CS degree skips getting acquainted with all layers of the stack. They still teach assembly, C, CPU architecture, low-level networking, etc. It's just that most people would rather specialize in something far less tedious. And there are definitely more areas to pick from today than there have ever been.
I once worked with a back-end dev who didn't know what RAM was. He knew C# and ASP.NET but couldn't point to RAM on a motherboard or explain what it really did. About all he could describe it as was "temporary storage". His lack of basic knowledge of how computers work really showed in the crap code he wrote. It was no wonder it was his code that ate up all the RAM on the server, because he never put much thought into the hardware it was running on. I know this is anecdotal, but there are plenty of devs just like this out there.
I talked to a guy who "managed software development", and when I asked him about the resources om their developer machines (some kind of VM), he got angry at me when I laughed nervously when he didn't know what RAM was.
> but couldn't point to RAM on a motherboard or explain what it really did.
To be fair, the last time I "opened up" a computer and changed its RAM was about 10-12 years ago, and since that time I've only worked on a Mac Mini and on a laptop meant that "opening up" computers became a thing of the past.
This was years ago, in a Windows shop, everyone had a desktop PC. Not important that the guy didn't know what was inside his desktop, but it's the lack of understanding of the server hardware his code was to run on that was the problem. In all servers and desktop PCs ~10 years ago, RAM comes as DIMMs and would be easy to locate for anyone with a passing interest in computers. I'm not talking about Apple know-nothing fanbois here.
I don’t understand how front end devs can get so far without learning how to use the developer tools console now standard in every major browser. How can one not learn about POST vs GET when things like request caching and back-button complications are a reality? I hit these issues as a senior backend dev fumbling around with front end so many years ago, so I have a hard time understanding how kids these days can avoid that experience.
Too much trial and error and throwing everything they know at it until it gets close enough to the desired look. I have one co-worker who has taken off after I showed him how to use dev tools, and another who's still trying to guess their way through the problem even after being shown how to use dev tools.
I’ll never forget the time a network engineer escalated a ticket to me on the infrastructure side (think L4+ support) that a developer had thrown his way, saying that his web pages were loading really slowly and that this was a high-priority, high-urgency project. I asked how they came to the conclusion that it was something related to the cloud infrastructure rather than something else. They said the developer didn’t really know, besides to ask “network people.” So I got on a webex with the parties, where even a VP was present because his product launch was on the hook, and I asked the developer to open developer tools, to which he responded “what’s that?” I think I had to mute myself and scream for a bit that this was what qualified one to be a senior front end engineer at the company. So I stepped through it, got the network transfer chart, and pinpointed that all traffic was coming through quickly except for a big initial delay: the DNS lookups were super slow, dragging out time to first byte. The network engineer on the call was stunned that such a thing existed and thanked me for saving him a ton of grief, because he got a LOT of tickets like this one.
This wasn’t in like 2010 when Firebug was just coming out and people still used Firefox as a rule; this was like 2015, when dev tools were in every shipping browser approved for corporate use, even at a technology-laggard company.
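(If anyone hits the same symptom, a quick way to confirm that the slow first byte is resolution rather than transfer is to time the lookup on its own. A rough sketch, with the hostname as a placeholder; note the OS resolver cache will make repeat lookups near-instant, so the first measurement is the interesting one:)

```python
import socket
import time

host = "example.com"  # placeholder; use the slow site's hostname

t0 = time.perf_counter()
socket.getaddrinfo(host, 443)  # the same resolution step the browser does first
dns_ms = (time.perf_counter() - t0) * 1000

print(f"DNS lookup for {host}: {dns_ms:.1f} ms")
```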
So the moral of the story is just blame DNS first.
Very nice intro. I like how you introduce networking layers. It seems you are starting to go down the path of explaining how computers and servers work, and that is certainly daunting. Maybe just stick with networking and introduce more of that, and how real-world systems (T1 and Ethernet) are just changing voltages at some point, and can be swapped in or out because of the network layers. Then build up to packet-switched messages: TCP, Telnet, HTTP.
Speaking as one of those guys (the first computer I programmed on was a Timex 2068), a reality check is discussing industry events and noticing a bunch of clueless faces among junior devs.
You can see this with a "new" ANY dev, because there are only 4 years of college and far more years are required to learn the increasingly complex technology environment.
I graduated with a Computer Engineering degree, did assembly, C, microprocessor design, computer vision, and know a good bit about lower level stuff, how memory works, how networking works, etc. All the stuff people in this thread seem to be lamenting the lack of. But I was also a shitty employee fresh out of school because I didn't know anything at all about modern software development because there was absolutely no time to learn that stuff as well.
I still had to spend a lot of time getting good before I was worth anything, just as these "new frontend devs" will, as well.
> "* I didn't know anything at all about modern software development because there was absolutely no time to learn that stuff as well.*"
Yeah, but modern software development is trivial to learn, particularly in comparison to a computer engineering degree. You see "developers" here on HN gloating all the time about how they didn't need any post-secondary education at all to get their jobs; these frameworks are literally designed to be usable even by minimally skilled coders. You were in a much better position having to learn modern software development after a computer engineering education rather than having to learn the rudiments of computer engineering on the job after getting an education in modern web development.
Don't forget us over in HPC and embedded! What are these web things, and why do we have stacks of them? But more seriously, I'm a very experienced systems programmer, but I like stuff like eframe and trunk because they let me serve up a GUI I write in Rust, like it was a native app, as a wasm page, without having to know anything at all about web stuff or how many stacks to put the web thingies in. Perhaps someday WASI will bypass all the JS and HTML deep magic I don't understand, but not yet.
I rarely do any of that; usually my abstractions are container layers, a libc, and then a kernel between me and registers and DMA. But that is a lot more layers than you'd think. I can't even understand the boot process on a modern machine with a TPM. And I have no clue how many processors or controllers are in my computer. Every USB controller has an entire ARM core in it, mice and keyboards have an 8051 running C-ish code someone somewhere wrote; there is no bottom to the complexity.
I am technically a full stack dev and I don't grok how ASP.NET Core actually handles the web requests. It's layers upon layers of magic to me.
Similar with MediatR: I know conceptually that it "just" checks for the right classes in the loaded assemblies, but it still feels like some weird incantation to use. And I have such huge knowledge holes across all layers.
Sometimes I feel my lack of knowledge to such a degree that I question whether I should even be a programmer.
> I started to write an HTMX tutorial (https://www.bitecode.dev/p/a-little-taste-of-htmx-part-1) because I noticed a lot of young coders don't understand what to do with it. They read the tweets saying it's nice, but when they look at it, it makes no sense to them.
I agree with you, but HTMX? That's a big abstraction layer. A good one, but still not really a way to avoid layers of abstraction. Pure JS makes more sense.
I can last years with only Python, so those little detours won't extend as far as React.
It's more likely that I will do something like "a full web app from installing the OS on a fresh laptop to coding it to hosting it on server".
It would also be more beneficial: plenty of tutorials for individual techs, rarely they teach you how to integrate them together and go beyond the toy example on localhost.
But for that I need a better platform than Substack, so that's not going to happen anytime soon.
There will be an increasingly important history component in comp sci., software engineering, infosec, and associated fields. I think this is a place where the IT field has done a bad job and, as a result, we've had unbelievable amounts of "wheel reinvention".
I'd love to hear from somebody with experience in scientific or other disciplines and IT who could contrast their fields with ours. I only know that I have had to go out of my way to pick up a historical component to my IT education, whereas it seems like a historical foundation is part of traditional math, science, and engineering training.
Vernor Vinge was right, though-- software (and hardware) archaeologist will be a real job.
I couldn’t agree more and it’s something I didn’t understand at all until my final year at university. I was talking with one of my professors and complained about having to learn and use SML and Eiffel for some projects when in the real world, I’d be using something like C (this was 1994). He asked me if I was there for an education or training.
I think today it’s probably more true than ever. Schools are under pressure to turn out grads who will be productive on day one and I think it’s shortsighted.
The problem was that there was a shortage of developers, that most universities still only had CompSci degrees, not Software Engineering degrees, and that those degrees were 3-4 years long... So enter the code bootcamp, where you're basically given a hammer and taught that every problem can be solved with it.
Now those bootcamps aren't all like this, and there were and are good ones out there..
...but from what I've seen (I want to stress that this has happened rarely compared to the rest), a graduate of one of the faster bootcamps, who learned a single language like JavaScript, will try to solve every problem with whatever answer they found someone else upvoting, without any amount of thinking about whether it's the right solution for this specific problem.
The problem is as your professor said: do you want an education or training? And also, as you said, you can't turn out effective software engineers with a quick or short amount of education.
My state university (UMass) CS degree had us working directly with CPUs and logic gates on breadboards (we made an ALU!). We even designed our own x86-like CPU ISA to be run via simulation with our own programs. We built and programmed robots from discrete components and microcontrollers, written in assembly. We made our own rudimentary OS, including threading with mutexes and semaphores. We built our own pine-like terminal email client from scratch. We created design docs for our own games and executed on them in sprint teams throughout the semesters.
I cannot stress enough how important my education was to my current understanding and success as a software engineer in the field. A poor education is a huge detriment. When I joined the workforce I found many colleagues without one ill-prepared for some of the basics, like debugging skills, using tools outside their domain expertise, etc.
That said, I don't think a formal education is REQUIRED, but without one you need a certain kind of perseverance and thirst for knowledge not everyone has. I've had the pleasure of working with several folks in that category, so a formal degree isn't necessary for everyone.
Today, it appears specialization is happening much earlier in the education process, without a general foundation. Specialists will always be needed, and larger orgs can get by just fine with exclusively specialists, as long as the communication skills exist to prevent silos. Larger companies can afford this. However, a good generalist can replace a team of specialists when the latter's budget is out of the question; a huge win for smaller orgs, start-ups, etc.
The old joke that a good generalist is the "jack of all trades, master of none" has some truth to it, but as with all things, it's a tradeoff. A team of qualified specialists will definitely produce better work within their domain expertise, but at a cost that can be prohibitive when "good enough" is within reach.
"and that most universities still only had CompSci degrees, not Software Engineering degrees"
Is a Software Engineering degree really a thing? Perhaps outside the US? Can you point to a notable university that has this major? A quick search gives me the impression this is an online correspondence program type of thing.
At UCSD in the late 90s we had a Software Engineering class, but that was it. Otherwise, internships were supposed to prepare you for your career.
UT Austin has an undergrad Electrical and Computer Engineering program, which you can specialize into Software Engineering and Systems. That’s about as close as you’ll get, I think, and since it’s ECE you’ll also have a fair amount of hardware experience - not a bad thing IMO.
They do have a full SWE track as a Master’s program, which I did, and quite enjoyed.
In some countries, e.g. Portugal, this is sorted out by having two university systems plus technical schools.
If one wants to go straight into the job market, with a focus on getting the basics: technical school.
If one wants to go into the job market with some more preparation than only the basics: polytechnic institute.
If one wants to go into the job market with overall experience, in a degree certified by the engineering order as qualifying for the professional title certification exam: university.
> Vernor Vinge was right, though-- software (and hardware) archaeologist will be a real job.
Spend some time in the coin op game community... This is all we spend our time on. Well that and cabinet repair.
But seriously though, keeping these old games alive requires serious reverse engineering of both hardware and software. Especially if they have custom chips, ASICs, undocumented features, etc. But the work has to be done or these games will be lost to time... It may be a passion project/ hobby, but it's still real work.
I like your point about digital archaeology. If it's important enough to some people to keep old arcade games running, how much more important is it to understand and keep some old mission critical computer running.
It's a lot like the tech people you see in these dystopian future movies, the ones who are like "oh yeah you have an old z80 with the qxp interface, haven't seen one of these things in years". Like Scotty from Star Trek. They're an increasingly important part of the tech ecosystem.
Having been working on writing documentation, tools and homebrew SDKs for arguably one of the most influential arcade systems ever made [1], I could not agree more with this. Hardware and software gets left behind by the original developers all the time, but that in no way makes it no longer relevant - and it certainly does not make the underlying concepts outdated. Z80 or 68000 assembly language might be completely useless to a web developer, but shall we call it "obsolete" when there are millions of these processors still around running critical infrastructure? Shall we really claim that learning to manage memory manually is futile since GC languages exist?
Besides, the PlayStation based hardware of this specific arcade PCB is not that far off from the ubiquitous 32-bit microcontrollers that underpin modern digital society; in fact the MIPS R3000 derived instruction set even bears a striking resemblance to that newfangled RISC-V thing every investor is talking about. And yet most of my 19-year-old university classmates want to stay away from "scary" embedded or kernel work in favor of web or game development.
> And yet most of my 19-year-old university classmates want to stay away from "scary" embedded or kernel work in favor of web or game development.
It seems that with more abstraction, more people throw up their hands and say "I'll never understand it", "The designers of this system did things in the stupidest way possible", "It's magic" or (in the case of the financial system) "It's a rigged game" "You're working for the man"
What all of these points of view have in common is that the people who hold them have given up trying to understand the world around them. Instead they take the intellectually lazy way out and compose their own pet theories of why things are the way they are; this is where tinfoil-hat stuff comes from. This is dangerous, and it is sort of the point of the original article.
I keep telling people that "the world is still knowable, still understandable". You have to put in the time and learn about it. It's easy to do, you just have to do it. There are plenty of "average" people playing in these "difficult" to understand fields.
What we desperately need to cultivate is a culture that encourages and rewards curiosity. When people are curious, they take the time to learn about the world around them, and why things are what they are.
If you don't master the world around you, it masters you.
That's certainly romantic. The real question is how much money, how much of your personal peace, you're going to give up to make space for that culture. Even if it's just putting yourself in conflict with rich people to convince or force them to pay, you're asking to change the status quo when a lot of people are quite comfortable with the way things already are.
Same thing in the retrocomputing community. The video series Usagi Electric did with the Centurion minicomputer is a good example. The degree to which data about even off the shelf parts have been lost to time is frightening, let alone software.
> There will be an increasingly important history component in comp sci., software engineering, infosec, and associated fields.
Much as I'd like that, I disagree.
The momentum has always been against that. We're members of a cult of the new, which does not value or even acknowledge the old. Which is why we keep reinventing the same solutions but with different tech stacks.
This drives developer value down which drives developer salaries down, so I don't see it changing.
FWIW, my history of math class as an undergrad was a math elective that almost nobody else took. Other math students didn't want to learn history and other history students didn't want math. And the curriculum stopped at the 18th century, before things got really interesting.
> We're members of a cult of the new, which does not value or even acknowledge the old
This just isn't true. We have Heartbleed because we refuse to move off how we wrote software 30 years ago[0]. HTTP was invented in the 1980s[1]. REST in 2000[2]. TCP/IP in the 1980s[3]. Ethernet the same[4].
I'm pretty sure we acknowledge and value all of those things. Our field, depending on how you define it, is only about 70 years old. Those things are pretty old in those terms.
I've heard that advanced math has a similar problem: some subjects are so deep that only a handful of people really understand them, and passing the knowledge down to younger mathematicians is not a given.
I guess in IT it's the other way around, where you can start with advanced abstractions and frameworks without bothering what kind of rabbit hole is underneath (at least while it's going smooth), it gets deeper as you go closer to the hardware. While in maths you need to understand the basics to even know what's going on with more advanced areas. And this already applies to the fundamentals too, where in school it's expected you understand the stuff from previous years. At least that's my impression as a non-mathematician.
And then there's the mix between the two in languages that rely heavily on powerful type systems.
No, you can do the same in math. It just means ignoring the proofs and just using the theorems as they are. However sometimes the application and proof requires similar ways of thinking, and you kind of miss out by skipping the proof. But the same can be occasionally true for software engineering.
Are there good historical examples of this happening, where foundational knowledge is completely lost?
I know I have read about it in books as a fiction, but I assume it must have already happened here before.
Maybe it's an oxymoronic question to ask, seeing as if it's lost we might not even know it's lost, but more in the vein of "we put the lime in the mortar because this is what we have always done", unaware of the actual properties of lime when interacting with concrete.
>Are there good historical examples of this happening, where foundational knowledge is completely lost?
Usually following the fall of empires. Ibn Khaldun (1332-1406) wrote:
"Perhaps they have written exhaustively on this topic, and their work did not reach us. There are many sciences. There have been numerous sages among the nations of mankind. The knowledge that has not come down to us is larger than the knowledge that has. Where are the sciences of the Persians that ‘Umar ordered to be wiped out at the time of the conquest? Where are the sciences of the Chaladaeans, the Syrians and the Babylonians, and the scholarly products and results that were theirs? Where are the sciences of the Copts, their predecessors? The sciences of only one nation, the Greeks, have come down to us, because they were translated through Al-Ma’mun’s efforts. He was successful in this direction because he had many translators at his disposal and spent much money in this connection."
> Are there good historical examples of this happening, where foundational knowledge is completely lost?
Maybe not quite in the context you were asking, but philosopher Alasdair MacIntyre argues as the premise of his book After Virtue that this is what happened to the philosophy of ethics after antiquity.
Copypasting from its Wikipedia article [0]:
> [After Virtue] begins with an allegory suggestive of the premise of the science-fiction novel A Canticle for Leibowitz: a world where all sciences have been dismantled quickly and almost entirely. MacIntyre asks what the sciences would look like if they were re-assembled from the remnants of scientific knowledge that survived the catastrophe.
> He claims that the new sciences, though superficially similar to the old, would in fact be devoid of real scientific content, because the key suppositions and attitudes would not be present. "The hypothesis which I wish to advance", he continues, "is that in the actual world which we inhabit the language of morality is in the same state of grave disorder as the language of natural science in the imaginary world which I described." Specifically, MacIntyre applies this hypothesis to advance the notion that the moral structures that emerged from the Enlightenment were philosophically doomed from the start because they were formed using the aforementioned incoherent language of morality.
It is all things. Encapsulate all things. Think fractals, hopf and always remember the turbo encabulator.
The original machine had a base-plate of prefabulated aluminite, surmounted by a malleable logarithmic casing in such a way that the two main spurving bearings were in a direct line with the pentametric fan. The latter consisted simply of six hydrocoptic marzlevanes, so fitted to the ambifacient lunar waneshaft that side fumbling was effectively prevented. The main winding was of the normal lotus-o-delta type placed in panendermic semi-bovoid slots in the stator, every seventh conductor being connected by a non-reversible tremie pipe to the differential girdlespring on the "up" end of the grammeters.
— John Hellins Quick, "The turbo-encabulator in industry", Students' Quarterly Journal, Vol. 15, Iss. 58, p. 22 (December 1944)
Since you’re citing fiction, have you read the Foundation series? The original inspiration for it was The Decline and Fall of the Roman Empire, which is a history book (arguably the first modern history book).
Anyway, that's just to say that people in post-Roman-Empire Europe lived out the reality of lost knowledge. They lived with the remains of incredible art, architecture like the Colosseum, public works like the aqueducts, etc. But they wouldn’t have known how to reproduce those works. How to make concrete, etc., was knowledge that was lost to them.
I think that's essentially what the medieval dark ages were before the renaissance revived historic cultural and intellectual achievements and began building upon them again.
We lose knowledge all the time. Every person that dies carries a lifetime of experiences and information.
The thing is, we tend to lose the knowledge that is deemed useless. Anything considered useful is well spread. The problem is when our opinion doesn't match reality: we have something like a generation to prove it, or the chance is gone.
Mathematics is full of examples of things that were discovered again and again, until somebody found a use for them and they entered our textbooks.
With the tech and politics of the late 1960s it would have been far easier to actually land on the moon than fake it with the vast reams of evidence we have.
No, I'm talking about a manned mission to the moon's surface, including all the cameras and "live" coverage as well as the lander. Apparently that tech is lost.
>I think this is a place where the IT field has done a bad job and, as a result, we've had unbelievable amounts of "wheel reinvention".
Some of this just seems to be due to the maddening popularity contest of technologies and frameworks within companies. There is a frantic rush to make sure you're doing everything that your competitors are doing, but not much introspection regarding whether things will actually help your company. You're just keeping ahead for the sake of keeping ahead.
I wouldn't call IRC decentralized, but yeah, all the new messaging systems like Slack/Discord are basically proprietary versions of IRC with only one (shitty ElectronJS) desktop client.
Analogue electronics design has been almost entirely replaced by digital micro-controllers to the point that finding old-school electrical engineers that can design non-digital circuits is becoming a challenge.
I believe parent is referring to design at the PCBA level, where this is mostly true outside of military/government designers who lag.
The reality in that sphere is there is little reason to limit oneself to the constraint of expertise in analog circuit designs when one can achieve the same functional outcome digitally, and use those skills more broadly.
Only where component sourcing is artificially limited, or the risks of digital operation are sufficiently large, and where the job market will support it, does it make sense to keep growing analog circuit design expertise.
Meanwhile these designs are nearly all being functionally lapped by those in general industry, which participates in all the accelerating gains of digital technologies.
My software engineering degree had such information; that is how I eventually went from a UNIX/FOSS zealot to someone discovering the alternative universe of Xerox PARC and ETHZ, and got the seed to dig through the department library for the systems programming languages of the past, ever since Fortran came to be as the first alternative to raw Assembly.
Unfortunately not all degrees, or universities are well prepared to offer this, or have staff that cares about making it happen.
A friend of mine (who coincidentally once shared an office with Vernor Vinge...) remarked to me early in my career that CS programs do a terrible job of teaching the history of the discipline. Students end up not knowing the failures (or even the successes!) of the discipline, and so end up rediscovering old new things. Makes a lot of software and systems practice more faddish and hype-driven than it needs to be.
This is not consistent with my experience with younger devs. There are teenagers doing kernel-level hacking and there's also older people who have been comfortably driving RAD tools for decades. This is fine. Software development has been gradually specializing and it will continue to do so. The tools and resources available for people who want to pursue low-level engineering are better, cheaper, and more accessible than ever.
My $management have said that there are nearly no teenagers who will apply for kernel work. You got someone I can reach out to when the option comes up again ?
Unless you're two hundred years old, you joined the development of this technology in the middle. There was already decades of progress in electronics and telecommunications theory and engineering that everything you listed was built on top of. Your predecessors likely thought it was all newfangled abstractions that obscured more fundamental knowledge, and your successors will think the same about whatever world they grew up in vs. whatever one they leave.
Abstractions are only relevant when they are leaky. People alive today were flipping toggle switches to load individual machine codes into memory. Some of them literally knew what every single transistor in the machine was designed to do.
Programmers at the time went from thinking these kit computers were toys to watching all the old ideas like virtual memory, cache, pipelining, networking, and multiple cores get reintroduced as the anemic transistor budget exploded. None of it was particularly novel, but the slow introduction of older ideas was a great way to learn all this stuff from the ground up.
That's the kind of argumentation the article argues against. There's a middle ground. Or, if you prefer, there's an optimum somewhere, and right now, we're on a descending slope.
To build mobile phones, we need a lot of tech built on tech built on tech. Dwarfs on giants' shoulders. And there's a place for people who don't understand the underlying tech. I've got a colleague who doesn't understand any of it, yet can do useful frontend work. But when you stand too high above the ground, the problems in the article become real, and that will get us stuck.
I don't need to know assembly to write a web app - it's totally magic, and it's fine.
However, if I am using an abstraction like an ORM without understanding what is happening with the database under the hood, or why it's getting slow all of a sudden, it is bound to bite me in the ass.
I shudder to think of how much money and energy the world is burning every second just because of this one alone.
A client of mine is enduring growth induced ORM hell right now. The struggle and existential threat is real.
It can take days for them to even track an evil SQL query back to the actual code. There were fetch-one calls that literally carried a million rows back over the wire for that one returned entity. Indices get skipped due to the magic translation of a row.col.toUpper()=="STRING" into SQL... when the db collation is case-insensitive in the first place.
But hey, the devs don't need to know SQL, so there's that.
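For what it's worth, most ORMs will show you the SQL they emit if you ask. Here is a minimal sketch using SQLAlchemy purely as an example ORM (the comment above doesn't say which one the client uses), illustrating the function-wrapped-column trap and how to inspect the generated statement:

```python
from sqlalchemy import Column, Integer, String, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, index=True)

# Wrapping the indexed column in a function typically prevents the index
# from being used; with a case-insensitive collation it's also pointless.
slow = select(User).where(func.upper(User.email) == "ALICE@EXAMPLE.COM")
fast = select(User).where(User.email == "alice@example.com")

# Printing a query compiles it, so you can see exactly what hits the database.
print(slow)  # roughly: ... WHERE upper(users.email) = :upper_1
print(fast)  # roughly: ... WHERE users.email = :email_1
```

Logging or printing the generated statements like this is usually the fastest way to trace an evil query back to the code that built it.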
Exactly. How far should we take this? Should you learn how to build a processor from vacuum tubes before understanding the abstraction of silicon? Build the power plant to power the whole thing and lay the wires?
Specialization always exists in an industrial society.
If you’re managers and tech leads then why aren’t you making sure the people you’re supposed to lead obtain the understanding?
We work with both high and low level tools where I work, and in my experience there isn’t necessarily a lack of understanding among younger developers. The CS courses still teach you a lot of the basics, and to actually use what you call “hype” tools efficiently you typically need a rather good understanding of how they work.
I actually don’t think we have enough abstraction, especially not in the DevOps field. In my opinion we’ve overcomplicated the deployment procedure without making sure it was abstracted. We don’t need our software developers to know how our Cloud Networking and Virtual Networking works, we don’t need them to spin up resources and make various internal DNS and Firewall rules; we simply need them to focus on what they’re good at and then hand off the container to the operations department, where people actually specialise in that knowledge. I’m actually fine with infrastructure as code, but it needs to be templated to the point that our developers simply “order” resources based on whatever template they want to deploy, so that they don’t have to keep up with the constant changes or learn how networking in enterprise organisations works.
As far as the CPU stuff goes: young developers do a lot of low-level code. We write a lot of embedded software for our solar plants, and a lot of the time they seem far more capable and “modern” in their approaches than the “old guard”, exactly because they’ve been taught the same curriculum plus all the lessons learned by the “old guard” along the way. That being said, it’s not really necessary for a lot of developers to keep an active knowledge of how an x86 CPU works, because it’s very unlikely that they’ll have to work with it. It will often be far more useful to know how various types of ARM processors work, as that’s something a lot of us actively work with.
> We don’t need our software developers to know how our Cloud Networking and Virtual Networking works, we don’t need them to spin up resources and make various internal DNS and Firewall rules, we simply need them to focus on what they’re good at and then hand off the container to the operations department where people actually specialise in that knowledge.
I am afraid that many businesses prefer the vision where the software developers do the operations too, and the company can fire the operations department. Consider how much money you could save... before it all falls apart.
Short term optimization is how managers get their bonuses. The trick is to move on to the next project before the old one falls apart. Then it becomes someone else's problem.
Agree that devs not needing to know this is how things should be, and that CTO/tech management however is not acting this way.
It is basically idiotic to me, as a highly paid senior application dev, that they want me to spend any cycles on stuff that would ordinarily be done by someone making 1/5th my TC.
If I can do it more than 5x as fast/efficiently, sure, but I don't. IaC is something I worry about at the start of a project, as hamfistedly as possible, probably over-provisioning so I don't have to go back to it anytime soon. All so I can move onto what I am paid to do - delivering functionality to stakeholders. Someone who deals with IaC as part of their full-time job will be the expert who can move more quickly & correctly through it.
Sometimes this is just galaxy brain budget arb, where the infra org gets to show a cost save, to the detriment of the appdev org.
Staff SRE here: I comfortably exceed your TC, by a factor likely approaching 2.0, doing harder shit than your whole team combined. That’s nothing against you, only that you don’t really know what you’re talking about here. I can safely conclude that because 20% of your app dev TC is not what anyone in ops in the US makes, particularly not DevOps-school, not even “legacy-style” SA; hospital SA, for example, used to be a high-six-figure, pensioned gig, until we commoditized ops as a concept in our reinventions and pushed everyone to AWS.
Hint: I wrote operating systems for fun before becoming an SRE. Imagine how much fun I have explaining context switches and CPU cache invalidation and their implications on their app’s performance, to application developers who consume frameworks and look down on my salary! Building a distributed computer at scale is much, much, much harder than your Linode tutorial expectations of what ops does.
Native software is mostly dead, so operations is software engineering now; at least, it's those parts of software engineering you folks threw out and turned your noses up at when Docker came along, and I make more money every year cleaning up the low-hanging fruit y’all leave around, so…
It may be true your TC is that high, and if so, I salute you. Clearly my tone implied a level of looking down my nose that I didn't intend.
Now take all the stuff you are an expert at, and imagine the CTO assigned you to build dashboards of accounting information. You don't do UI, and you don't know accounting (maybe YOU do, but your average SRE/DevOps person does not).
It wouldn't make sense right? And a lot of those tasks can be done by a junior "data analyst / BI dev". So it would be an expensive mistake as well.
> stuff that would ordinarily be done by someone making 1/5th my TC
Did you mean for this to be condescending towards ops? Because that’s how it sounds.
Ops is not easy, at all. Even if you’re doing zero IaC, I defy the average dev to try to provision a Linux box from scratch and install, configure, tune, and maintain the stack you need to run your code. Throw IaC atop that, and now you need to understand declarative programming and OOP concepts to be able to do it efficiently. Then, there’s K8s…
Not to mention the architecture side of things. Devs love to grab whatever shiny thing they saw on a Medium blog, even when it’s a poor fit to their problem (or their problem is unoptimized code). You probably do not need a columnar DB, you just need to normalize your tables and learn relational algebra.
I do not mean to be condescending. I am being literal.
Every job is hard, and the pay is not exactly commensurate with effort.
Ops is absolutely hard and theres a lot to learn. And in doing so you generally may end up with DevOps teams with minimal to zero domain knowledge in what the company delivers to end users.
So if you have a team of people who built a deep level of expertise in something over 10, 15, 20 years and that something is business facing, domain knowledge, etc.. then it is not of value to have their time spent on other tasks.
Often these DevOps roles are filled by more junior staff at the earlier stages of their career. This is another reason the staffing is usually lower cost.
Agreed. DevOps was a mistake, Full Stack was a mistake. Go back to teams of specialists. Ops doesn’t necessarily need to gate deployments - CD is a good thing when properly done - but believing that the latest AWS offering means your dev team can handle everything from the database to frontend is a fallacy.
Excellent point. Giving my perspective as a relative newbie (~5 years) to the industry with no comp. sci. background.
It was around the year 3-4 mark that I decided to knuckle down and try to improve my fundamentals (data structures, algorithms, memory models, concurrency, CPU architecture and some network fundamentals) by reading popular papers and literature and writing all of my personal projects in C and C++. I’m about two years into this study and while it’s been immensely rewarding, I’ve found it to be a huge undertaking while juggling life and a full-time job.
What I’ve also noticed is that, while I understand a lot more about what the CPU is doing, memory manipulation and how to write more efficient programs, I haven’t found it be particularly beneficial to my daily work (still Python and JS). I would love to be able to put these concepts into practice for many hours of my working day but it’s difficult to move from general web-stack development to more performance-oriented development (embedded, low-latency, OS, etc.).
My guess is that this is one of the reasons we have ended up in this situation. You can get away without knowing the fundamentals (a good sign of progress?) and that if you really do want to pursue these areas that promote building this kind of knowledge as part of your career, the barrier to entry is quite high and the positions are fewer than say a decade or two ago. I find it a shame because in my eyes, these areas are the most interesting and exciting areas of programming. It’s an art.
> What I've also noticed is that, while I understand a lot more about what the CPU is doing, memory manipulation and how to write more efficient programs, I haven't found it to be particularly beneficial to my daily work
I think this is because of the effect OP is describing: no matter how much time you spent studying, the odds that you learned about the specific thing that will make the difference in your work this week are slim because the technology is so broad and deep. Odds are that what you choose to intentionally study is irrelevant to your work.
You've picked a slice to learn about, but those who were working on it as it got layered know it all. Once you have that kind of knowledge, the odds of being able to explain some unexpected behavior approach 100%, but acquiring that knowledge took decades.
I've found a method that does work pretty well is to dig really deep into the topics that come up in your actual work. Instead of a random sample of fundamentals, find the parts of your job that feel like magic and explain that magic. Skipping over V8 and straight to the CPU might not be useful for a JavaScript dev very often, but a deep understanding of V8 is relevant more often than you might think.
I agree about the high barrier to entry, but I disagree about the modern job market.
There’re huge areas of the software industry where performance matters to this day. Examples include videogames, content creation, video processing, engineering software, science related HPC, and now AI.
However, transitioning from web development to these areas is going to be hard. That software doesn't run in web browsers; the code is either desktop apps, deep inside web backends, or supercomputer programs. And high-performance C++ is often not enough for them; depending on the area, ideally you would also know GPU graphics and/or GPGPU compute.
I was lucky I have never worked on web apps. I know TCP/IP reasonably well, I have some general understanding of HTTP and HTML, but I’m totally clueless about the modern frontend technologies.
What I’ve found most helpful for understanding how something affects Python specifically (but this holds true for nearly any language) is godbolt [0].
Find a chunk of code you want to optimize, simplify it as much as possible, then put it into godbolt. Match the interpreter version to yours. Then go look up the bytecodes and see what they do. Try changing the Python version to see how it differs, or a different approach to the problem (e.g. a map instead of a list comprehension).
This takes an enormous amount of time, of course, but you can learn quite a bit.
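If you want a quick local taste of the same idea without Compiler Explorer, CPython's built-in dis module does the bytecode half of this; a minimal sketch (the function names are just for illustration, and the exact opcodes vary between Python versions):

    import dis

    def squares_comprehension(n):
        # On recent CPython versions this compiles to a loop that LIST_APPENDs
        return [x * x for x in range(n)]

    def squares_map(n):
        # Here most of the bytecode just sets up the call to map()
        return list(map(lambda x: x * x, range(n)))

    dis.dis(squares_comprehension)
    dis.dis(squares_map)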
I feel similarly. I was a Linux sysadmin in the early days of Gentoo, studied C and network programming, configured hardware and RAID arrays, and did some courses like "From NAND to Tetris". Over the years I did a LOT of programming.
When "the cloud" came out, I could easily understand the magic behind it all, so it made a lot of sense to me and it still does.
I feel "computer literate", however, even a lot of solid but younger engineers I work with struggle with lower level concepts and just don't have the knowledge and experience with things like networking protocols to be able to solve hard problems.
>We, people who really understand the technology that the world runs on, are a slowly dying breed.
And web development has the worst tech abstraction of all.
There used to be a time when any software engineer (I think the term wasn't even invented then), or simply programmer, would know at least a thing or two about hardware. We now have so many abstractions that we have people working in tech who have zero knowledge of either software or hardware, or any low-level stuff.
That is why not only do we need some open source or open standards (I am not a zealot for everything open source), we also need to simplify everything we have today. To basically refactor everything we have learned, from hardware to software.
To be fair, it's hard for anyone to know anything about hardware these days unless it's a company with a large on-prem footprint where it doesn't make financial or regulatory sense to put things in the cloud or co-locate, which is a vanishingly small number. Hell, I haven't been in a physical datacenter in 5 years, and that's where I started my career many moons ago.
There will always be hyper-nerds and just super smart people who will study all of the low level stuff. Granted, they will be vastly outnumbered by high level programmers.
I am continuously hiring new people, mostly senior devs and tech leads. Maybe one in a hundred has any understanding of what virtual memory is. Or why one process can't access memory of another process and, if you really want it, how to set it up.
Two decades ago that was common knowledge.
It went away just like the knowledge of how networking works. Currently, if they try to open a connection and it does not work, they are pretty much at a complete loss as to what happened, where, and how to fix it. Aside maybe from a simple problem like a DNS resolution error (and not even that -- most devs don't understand how DNS propagation works, for example).
So when those problems happen, they mostly defer to me as the guy who can solve every problem. I see no clear path to getting someone else to learn to do the same.
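For what it's worth, the "if you really want it, how to set it up" part is only a few lines in modern Python; a minimal sketch using the standard multiprocessing.shared_memory module (names here are illustrative):

    from multiprocessing import Process, shared_memory

    def child(name):
        # Attach to the same OS-level shared memory segment by name
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[0] = 42              # this write is visible to the parent
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=4)
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])            # 42 -- both processes mapped the same pages
        shm.close()
        shm.unlink()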
>It went away just like the knowledge of how networking works. Currently, if they try to open a connection and it does not work, they are pretty much at a complete loss as to what happened, where, and how to fix it
Also, devs seem not to understand that a computer network is basically a massive distributed system (on the routing level, anyway), and that things inside this system can and do fail nearly constantly.
Things like latency, endpoint connectivity, and available bandwidth are not fixed givens!
So many devs, in my experience, write code which assumes the network is always available, with the same characteristics it had at the point in time when they wrote the software.
Not to mention devs and even entire companies building HA mechanisms which rely on Ethernet connectivity to work, making it very hard, if not impossible, to stretch them properly across network segments/locations without stretching Ethernet (which brings its own set of problems).
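To make that concrete, a hedged Python sketch of what "assume the network can fail" looks like, using only the standard library (the host, port, and limits are made up):

    import socket
    import time

    def fetch_with_retries(host, port, attempts=3, timeout=2.0):
        # Every step can fail: name resolution, connecting, reading.
        # Time out, tell the failure modes apart, and back off between tries.
        for attempt in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
                    return s.recv(4096)
            except socket.gaierror as err:
                print(f"DNS resolution failed: {err}")
            except OSError as err:            # covers timeouts, refused and reset connections
                print(f"connect/read failed: {err}")
            time.sleep(2 ** attempt)          # exponential backoff instead of hammering
        raise RuntimeError(f"{host}:{port} unreachable after {attempts} attempts")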
One little-appreciated fact is that many of the most accomplished and knowledgable engineers were tinkerers from an early age. If you start your computing journey at the age of 18 with a compsci degree, you might already be a decade behind some of your peers.
It’s not too different from pro tennis players, who typically start playing before the age of 8.
For those at the top of the game, computing is a calling as much as a job. They will do substantial learning on their own initiative.
You might be a decade behind, but there are still diminishing returns kicking in hard even just a few years in. That's disregarding inefficient learning and whatever else may close the gap further.
Your latter point is far more important to the matter. Those who treat it as a passion more so than a job, are more likely to be the trendsetters. Growing up and being free from responsibilities makes it easier for that thing to become your passion.
And let's also not forget, a few decades ago computers weren't exactly a cheap thing for parents with little understanding to let their kids tinker with at will. Being born into a family with enough wealth to get a computer, and enough wealth/understanding to let a kid tinker with it, was an immense boon. Along with everything else that type of family tends to have going for it alongside wealth. It's not that far-fetched an idea that it's those other things, rather than the kid's early interest itself, that got them into such a position later in life.
The thing is, as someone who twenty-plus years ago was able to write a webserver at the socket level and so on: if you handed me a keyboard today, I have forgotten all the context and the skills - they have been paged out of my brain, which now holds Redis and JavaScript and Hadoop instead.
But you know those things exist and how things relate to each other and affect each other, or could affect each other, and you don't need to already have a mental text book in ram to solve a problem. I don't know you and yet I know this.
How often does a front-end developer need to know about /proc/pid/mem?
I think it's important to keep things in perspective here - when I took my first programming job, the first week was setup. On my team, our onboarding has you branch, review and merge on your first day. Those abstractions allow people to focus on the areas they're working on, and iterate there. A game developer doesn't need to understand the complexities around JS type conversions, any more than a JS developer needs to understand rendering thread latency in a browser.
On every team I've ever worked on, that headspace from the team would have been much better served by understanding the product and project, IMO.
Great question - is their app deployed to a container? In that case, they should know about it, because buried deep under layers of abstraction are cgroups. Answering questions like “why did Node try to use 32 cores” is much easier when you understand this.
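A rough Python sketch of that particular surprise, assuming a Linux container on cgroup v2 (the file layout differs under cgroup v1):

    import os

    # What most runtimes report by default: every core on the host machine.
    print("host CPUs:", os.cpu_count())

    # What the container is actually allowed to use: the cgroup CPU quota.
    # The cgroup v2 file reads e.g. "200000 100000" (2 CPUs) or "max 100000" (no limit).
    with open("/sys/fs/cgroup/cpu.max") as f:
        quota, period = f.read().split()
    if quota != "max":
        print("cgroup CPUs:", int(quota) / int(period))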
Unless they're working in that space they won't know it. And that's fine. There's enough to keep in our memory and tools and resources for us to look up information if we need it.
I think you may be confusing what you see with the cause.
For example: You and I HAD to be willing to go through that, and we were fascinated by it. There are still plenty of great FE and BE young devs who see something and instantly ask "How does this work" and keep going down. The difference is, most people in employment don't HAVE to do this like all people in our early career did. If those shortcuts to getting paid existed in our early careers, many of us would have taken the money and not cared. It's not an experience or a "You had to be there" thing, it's simple curiosity and motivation that sets people apart in this regard.
> For them, spending a decade working with a low level programming language before you jump into high level programming language is simply not an option
As a counterpoint, I'd say that it's also chance to build on top of these abstractions without having your brain cluttered with how they're implemented. That's also how science grows.
And don't underestimate the new players. They're perfectly capable of understanding the low-level details if they have to, and they have many more resources available to learn from, too.
>The issue, as with any automation, is that new players on the scene (younger devs, devops, etc.) simply have no chance to learn the same things and go through the same path.
The worst problem, which TFA alludes to, is that those "layers" are not really needed; they are leaky kludges upon kludges to make some lower layer that is unsuitable for later needs more palatable, or able to handle some unforeseen use case that goes against its design. Other stuff is just added as a way to sell new enterprise tooling, support, and consulting.
Seeing the layers and technology stacks being added over time can give the impression of watching some organic evolution happen, when it's often merely accumulation, rough patching, and corporate attempts to push their technologies/NIH.
Yes. And we know this process from legacy code. Rather than fix the problem, it is usually easier to just write some new code to manage the old problem. We tack on new layers, modules, abstractions, workarounds all because at the time it seemed to be easier than refactoring the cause of the problem.
But it only works so far, and at some point the application becomes an unmaintainable mess and the effort to rewrite it from scratch starts.
Only we can't rewrite the world after it has become so unmaintainable that nobody can figure out how to change a detail of some layer in between the program and the transistor.
Heck, we are already stuck trying to fix the bad design of IPv4.
I'd look at it in the opposite direction. Good abstractions are powerful when you understand what they build on. That recurses all the way down. Learning bottom up is a natural path, but there is still plenty of opportunity to explore top down. The difference is that top down discovery needs to be an intentional journey.
>We, people who really understand the technology that the world runs on, are a slowly dying breed. We are still here as tech leads, managers, directors, business owners. But there will be a point in time when we will go on retirement and there will be only precious few people who had perseverance to really understand all those things by diving into obscure, historical manuals.
I'm gonna guess this same message, in different words, has been repeated often throughout history for as long as people had the ability to reflect on their life and the next generation.
So really, what's different this time? What's the difference between forgotten technology that nobody needs and an unpublished author whose works were forgotten? I think the answer is none. We just think it's more important because this is the industry we care about.
We're actually really good at reinventing the wheel in this industry, so even if some software is forgotten, you can probably bet it'll be reinvented or reverse-engineered if it's really needed.
While we should preserve technology knowledge (because it's pretty cheap to do so), in terms of the economy, sometimes things are a stepping stone and should be forgotten.
>We, people who really understand the technology that the world runs on, are a slowly dying breed. We are still here as tech leads, managers, directors, business owners.
There will still be people at, e.g., FAANG/semiconductor companies who do this stuff day-to-day, even push advances in those areas, and have already created solid training materials for younger employees.
If you're worried that this knowledge will disappear, then feel free to write it up in form of tutorials.
Do you have any resources to share?
I am a self-taught developer. I do stuff at my level (TypeScript, Lisp), also because when you work alone, such high-level languages allow you to express things nicely and fast. I kinda know how a CPU works, and I understand what is going on when you perform some bare-metal programming, but I barely ever went down the rabbit hole and did low-level stuff myself.
I'm in a situation where I don't know what I should learn to be on par with graduate/undergraduate students. I am searching for resources about networking, operating systems, or anything that could enhance my proximity with low-level stuff.
For very foundational stuff, Charles Petzold has put out some great books.
His book Code is fantastic. He starts at simple battery and lightbulb circuits and builds and builds towards a simple CPU.
He also wrote The Annotated Turing, which is a breakdown of Alan Turing's seminal paper, and you only need high school math to get through it.
When I was in school my favorite course was compiler design and we used Compilers: Principles, Techniques, and Tools (aka the dragon book). It’s one of the best textbooks I’ve used but that was 30 years ago. There might be something better now. Understanding parsers and lexers and (especially) state machines is something that will serve you well.
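For a small taste of that in practice, here is a hand-rolled lexer sketch in Python (the token names and the toy expression grammar are invented for illustration):

    def lex(src):
        # A miniature lexer: scan the string once, grouping characters into
        # (kind, text) tokens -- the first stage of any compiler pipeline.
        tokens, i = [], 0
        while i < len(src):
            c = src[i]
            if c.isspace():
                i += 1
            elif c.isdigit():
                j = i
                while j < len(src) and src[j].isdigit():
                    j += 1
                tokens.append(("NUM", src[i:j]))
                i = j
            elif c in "+-*/()":
                tokens.append(("OP", c))
                i += 1
            else:
                raise SyntaxError(f"unexpected character {c!r}")
        return tokens

    print(lex("12 + 3 * (40 - 5)"))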
Ah, I'm interested in parsers these days (I just wrote one for parsing org data, I am a bit unsure about the architecture of the project but at least it does what I needed personally).
I am right now at a friend's place, and they just showed me the dragon book from their shelves; that's a funny coincidence.
I will check out Code.
I'm also self-taught. I've been doing it since (roughly) 2009, and like many others I started at the highest level of abstraction with front-end web dev. I've worked with some mid-level languages like Objective-C and Java, but like you I've never really dug deeper than that.
I know this is tired and cliche at this point, but literally this week I sat down with ChatGPT and asked it to teach me how to write WAT (web assembly text format) so I could understand how memory is managed at a really low level (but not so low that I risk crashing my computer).
It turned out to be super valuable. This is where AI shines for me - I can ask it any question that pops into my head, and also validate whether what it's telling me is true by running the code and seeing whether it works. It was amazing.
If you're curious, I'm fine sharing a link to the chat:
Yeah I was running all the code to validate what it was giving me, but I also try to subtly prompt it to repeat itself so I can see if it’s being consistent.
I’m sure it’s not error-free, but the speed of learning makes up for it IMO. Like I’d been struggling for a while to learn tree-sitter, the documentation is overwhelming. I had a chat with AI and got a working solution for my problem in probably 4-6 hours, and now I can write tree sitter grammars without help. It’s really incredible.
(Most of the content is not actually specific to Python)
He beautifully pulls back the curtain on so many lower level concepts like virtual memory management, dynamic linking, heap/stack, fork(), copy-on-write.
The talk is broad in nature, not deep. It takes you just below the surface of many magic black boxes, and, as you put it, enhances your proximity with those topics.
For me, so many things clicked in this single talk:
- How virtual memory works (incl. paging in/out, swapping)
- Why there are those discrepancies between RSS / PSS
- What segfaults and page faults are
- What actually happens when I get errors related to dynamically linked libraries, either at build time or runtime
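If you want to poke at a couple of those yourself, here is a small Linux-only Python sketch that reads this process's own accounting straight out of /proc:

    # VmRSS in /proc/self/status is the resident set size; the "Pss:" line in
    # /proc/self/smaps_rollup is the proportional set size, where pages shared
    # with other processes are divided among them (hence the RSS/PSS gap).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize", "VmRSS")):
                print(line.strip())

    with open("/proc/self/smaps_rollup") as f:
        for line in f:
            if line.startswith("Pss:"):
                print(line.strip())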
Godbolt [0] is an invaluable resource. But simply setting tasks for yourself and completing them may be the best course of action. Then you'll find whatever resource you need for a concrete objective.
For example, if you have a week, I'd suggest starting "in the middle" and moving up and down according to your tastes.
- Write a hello world program in C, compile it and run it. Try to compile it statically and understand the difference.
- Ask the compiler to produce an assembly file. Try to build the executable from that assembly.
- Try to write a hello world in assembly language by yourself, compile and run it.
- Write a program in assembly that prints Fibonacci numbers, or prime numbers.
- Now, forget about assembly and move upwards. Download the Python source code, compile it, and run a Python hello world with that interpreter.
- Look at the Python source code, which is written in C, and try to follow what happens when you run your hello world.
- Try to change the "print" function of the Python interpreter so that it prints your string backwards.
Depending on your experience, you may need more than a week (5 days) to complete all these steps. But that's OK! In a few months you can find a new spare week and continue your work.
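And as a teaser for those last Python steps, a hedged sketch of the layer print() eventually bottoms out in on Linux, calling libc's write() directly via ctypes (illustrative only; CPython's real path goes through its C-level io objects before reaching the syscall):

    import ctypes

    libc = ctypes.CDLL("libc.so.6")   # Linux; other platforms name libc differently

    msg = b"hello, world\n"
    # write(fd=1, buf, count) is roughly where print() ends up once Python's
    # io stack has encoded and buffered the string: a syscall on file descriptor 1.
    libc.write(1, msg, len(msg))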
> people who really understand the technology that the world runs on, are a slowly dying breed
Not completely. Curious people will always pop up, it's just that they will take as much time as you did to learn everything.
I like to think of myself as one of the curious people. Ever since I was a kid, I had a huge desire to understand how the hell does this magic box called a computer work. Decades later, as a software engineer, no matter how much I learn, no matter how deep I go, there's always this voice in my head going "but WHY? WHY are things the way they are?".
I did start top-down instead of bottom-up, starting from the high-level languages running on modern operating systems, going more and more into the (professionally unnecessary) low-level through pure curiosity, but I still believe I will get to the bottom one day.
Hey I've struggled with this notion as well.
But then I sometimes compare this with crypto - or how a bunch of kids simulated the entire financial system and made the same mistakes made 100s of years ago, with the end effect of a new generation that taught itself finance.
I suppose that is what will keep happening. At some point, someone will decide to re-write everything from scratch and re-discover lower layers of abstraction or challenge them.
An important thing to remember is that not only did you get to watch the layers be added, you also have decades of experience. In today's field, where we double in number every N years, that's enough time to accumulate enough knowledge to feel like a demigod—to have more than 20 years of experience is extremely rare in the industry right now. That won't always be the case: eventually we'll stop growing exponentially and we'll find an equilibrium, and eventually the people with 30 years of experience will stop being a vanishingly small minority. While that may not give us the kind of breadth that your generation has, I would expect many of them to be able to master the specific slice of the tech tree that they actually build on top of.
30 years is plenty of time to learn everything from the browser down to the CPU, or from the HTTP framework down, or from the database down, and probably enough to learn two such slices. Part of why it seems impossible now is just that most developers not only don't have the historical context, they haven't yet had any time.
It's more than just time. The sad part about this evolution is that younger people are enculturated to believe that the foundations that they are standing on represent some sort of inviolate physics and not just some technical decision someone made 20 years ago.
I think what's required to move forward isn't just some dusty studying of these physics, but a reawakening to an understanding that these are just systems with tradeoffs, and we can choose to make other systems just as easily.
That's the only way the sandpile collapses and we get, for example, secure operating systems and network protocols, or systems that were designed to exploit the massive amount of concurrency available in modern machines.
> younger people are enculturated to believe that the foundations that they are standing on represent some sort of inviolate physics and not just some technical decision someone made 20 years ago.
This is simply not my experience of young people or of being young. There's a reason why radical movements are led and populated largely by the young—to the extent they err, it's on the side of "let's tear it all down and start over".
This is the perfect articulation of something I think about all the time. It is so much easier if you were in it while it was happening.
I've been trying to write a blog post about this, and how to address the problem specifically for front end development, where it's absolutely ludicrous. It's a very serious problem, especially when you think about security.
You have all these people clamoring to get cloud certifications so they can work in IT, but it's so complex. A certification isn't enough. There just aren't enough people with enough knowledge and practical experience to fill all the needs that companies have.
"But there will be a point in time when we will go on retirement and there will be only precious few people who had perseverance to really understand all those things by diving into obscure, historical manuals."
I just hope when that time comes I'm well compensated for studying erudite texts on low level computer science while chilling on my phone instead of browsing social media or playing games. But honestly, I enjoy reading and learning about ABIs, C programming, network protocols, etc... more than mindless scrolling anyway.
Although I worked with computers most of my career, it wasn't in the tech sphere at all. The article meant a lot to me because of trying to wade through tons of websites for info about signing up for SS and Medicare. These sites were not set up for non-computer tech people.
Like you, I enjoy learning techie stuff, but most of my friends don't. Because, IMO, most websites are written for younger people, they feel almost helpless.
For example, I am NOT a cloud expert, and barely a novice, by any measure. When dealing with various CloudOps/Cloud Architect/etc types, without fail, the good ones are also complete Linux gurus who would be at home architecting on-prem infrastructure as well.
Unfortunately there are a lot of "experts" who are only comfortable working in abstraction and so as soon as we have an issue that scratches below the surface, it turns into an all-hands-on-deck mayday situation, looping in other teams/experts to try and save them.
I've also seen horrific implementations by abstraction enjoyers.
Imagine a continuous integration environment like the following: k8s, where one pod simply runs a forever shell script that is basically "git checkout master; git pull; sleep 300;". Then the other pod actually runs the application on top of the shared storage that the first pod writes to. This script, unsurprisingly, fails silently and hangs frequently. To debug this you need to go through some cloud auth portal and click through some web UI to then open a console in your browser (which doesn't support copy/paste).
The idiot version of doing this on-prem would be a single small VM that has a cron running every 5 minutes, which invokes your deployment script, with cron configured to email on failure. This would be simpler, more reliable (not relying on a forever script), and more supportable (it emails on fail!). And it uses like 30 year old tech that just fkn works.
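For the record, a minimal sketch of that crontab (the address and script path are made up); cron mails MAILTO whenever a job produces output, so a deploy script that is silent on success and prints on failure gives you alerting for free:

    MAILTO=ops@example.com
    # every 5 minutes: run the deploy script; any output (i.e. a failure) gets emailed
    */5 * * * * /usr/local/bin/deploy.sh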
I totally relate to your point of view and experience. I'm so happy to ignore all the software related to the attention economy. What I find left over is the computing environment that I dreamt of when I was 12. I marvel that it took a whole generation of man hours to get to this point.
I think we are at a real inflection point between an analog world and a digital world. I also think that the Bronze Age never went away, and analog techniques will just develop into craftsmanship.
I haven't posted a link to these Autonetics parts in a while, so I will do so again.
What I love about Autonetics is that they were solving the inertial guidance problem in whatever computing form they could get a hold of.
I'm working on a proper website but the images linked here represent the inertial guidance problem being solved with discrete components arranged in circuits and later those circuits integrated in silicon.
> We, people who really understand the technology that the world runs on, are a slowly dying breed. We are still here as tech leads, managers, directors, business owners. But there will be a point in time when we will go on retirement and there will be only precious few people who had perseverance to really understand all those things by diving into obscure, historical manuals
Some people like to say this is a perennial problem that every generation bemoans. I get why that might seem intuitive, and for the longest time I've hoped they're right and I'm merely getting old. Sadly, they are wrong. Long-term historical patterns of empire collapse follow this over-reach and under-education cycle, which leads to a catastrophic rebuild-capability gap that is triggered at some point.
Thomas Thwaites's "The Toaster Project" [0] is a wonderful commentary on this. I cited it in a talk I did about education in a world that will no longer pay for teaching kids how to use chips and breadboards, in skinflint universities that prefer to teach them Microsoft Excel or use some crappy simulators of everything [1].
When it comes to cybersecurity, we can only obtain defence in depth if we have knowledge in depth, and there are ever fewer of us around who understand how computers actually work. It's more than a little unnerving to meet CISOs and folks with high-flying titles who have no idea why using a phone app to control a Fortune 500 company's infra might not be a good idea.
I used to think that the historical development of a subject cannot possibly be the best way to learn a new topic, because it is so path-dependent and random. But with experience I have seen that there are very few instances where educators mixed up the historical timeline of a subject and made it more easily absorbable. It still baffles me that this is even the case for abstract fields like math.
A sci-fi semi-solution is to hope that all the relevant information is embodied in these LLMs being built.
A funny possible history is everyone forgetting what’s necessary and then having the relevant information they need accidentally blocked by an AI system they have no idea how to build or fix (maybe because of terms like master/slave going against elite values).
I understand pretty much all of the layers of computing, from how CPUs achieve some of the things they are doing, to bus protocols, instructions, physical memory, low-level OS internals, high-level OS internals, virtual memory, userspace communication with the OS, programming language runtimes and linking, shared libraries, IPC, networking, virtualization, etc.
If by "understand" you mean "have a passing familiarity with" then.. sure, but if you mean "my understanding is reflective of reality" then there's absolutely no way.
> We, people who really understand the technology that the world runs on, are a slowly dying breed.
There are more people doing "low level" programming right now than were doing it a few decades ago. The only knowledge that is fading is knowledge that's applicable to legacy and/or retro systems.
I suspect this happens in all technical professions as things advance: you learn first principles and low level implementations in a formal training program/degree, but the demands of the industry to be productive require that you use and trust abstractions.
Even my doctors don't know some things about various drugs they prescribe me, because I can afford to spend the time reading the literature that only pertains to me. I'm sure their biochemistry background means they could definitely outperform me if they wanted to do a deep dive, but they know the important parts, and they know how my condition stacks up in the compendium of total knowledge and clinical experience they have. I don't think anything important is lost if they cannot at the drop of a hat tell me the half life and mechanism of action if I grill them on it.
One mitigation of the “mastering computers takes 30 years” thesis is that many of the abstractions and technologies that we spent those decades learning are now obsolete, and newly minted grads are blessed with learning materials and environments that make catching up easier than ever.
Older workers are supposed to pass along this knowledge to younger workers in an expedited fashion. Acquisition of the understanding of increasingly complex systems must be optimized to operational needs. One of the many, many gripes Millennials have with Baby Boomers and Gen-Xers is that they jealously guard institutional knowledge in the name of job security, for years, and then just... retire. If you're lucky, your company will shell out massive consultation fees to have them come back and drip-feed what they know to younger personnel.
It's not just the natural order of things that such knowledge passes out of this world. It's one of many collective and affirmative decisions by people born before 1980 to take as much with them as possible when they go. I would be skeptical if it were just one thing, but after affordable college, affordable housing, affordable healthcare, affordable retirement, and a viable biosphere, it's become a pattern. That you seem to understand this on some level and don't care to try to make a difference is even worse.
I believe that the first lesson plan for any subject should be to follow its historical progression. That's not the be-all and end-all but, whether you're talking about computer science or physics or economics, it's a good starting point because it's one proven way to deal with the field's complexity. You just have to be clear that the simpler models from older times are pedagogical tools, not current understanding.
For the last 10 years you have been adding stuff that did nothing but increase the complexity. Now you wonder why new people can't keep up with the complexity.
and how many of today's CS-degree holders would barely understand any of it. As someone who has also "grown up with all the technology", I've learned and experienced all that. But as a percentage of "software engineers", there are fewer and fewer who do every day.
Unfortunately, knowing all this won't help you get a job when the market demands an army of React devs and little else.
Under our current system, knowledge that cannot be monetized is useless. Worse than useless, in fact, because time spent learning low-level concepts is time that could be spent learning marketable skills.
The industry is effectively paying us all to forget.
Sure, learning the USB stack and differential signaling is fun, but unless my job involves implementing custom USB devices from scratch then it's pointless trivia that won't pay the bills.
This sounds plausible, but I don't think it's correct. The body of knowledge you refer to is large, but not that large. I think by the time a teenager with a persistent interest in computers is 25 or so, they can accumulate a similar level of expertise, and without no-lifing it.
I am exactly the same, coming from making my own add-on boards for early PCs and programming in machine code, and then following tech as it developed. Yes, it is hard to find young people with the same level of understanding, but they do exist.
I am not conflating anything. For me it was my day to day work for decades. Some new technology happened and I had to learn it to be on top of my game.
Trying to learn all this in 1/10th of the time to start your career is what requires perseverance.
I think this is unduly pessimistic. Take low-level kernel work, for instance. Between Linux, Microsoft, and Apple, there are lots of people who understand it. It's just a smaller proportion than used to understand it. The absolute number of people has probably grown.
But we don't need every programmer to understand low-level kernel stuff, precisely because the kernel abstraction is good enough at hiding that stuff. Most of us don't need to care (whereas 40 years ago, many more did need to care).
Personally, I myself never want to care about low-level kernel things. When I'm trying to write a program to do something, if I can write it without having to worry about low-level things, that's a win.
I have that knowledge but don’t know how to monetize it. Kids are coming out of school with 5 YOE and making double my salary with almost 30 YOE. And they ONLY know how to do React.
Where do we draw the line though? Surely those proficient in low level programming languages don't then need to become proficient in building motherboards from scratch?
I think that, at the very least, there's a huge benefit in having a bit of understanding one abstraction down from everything you interact with. Knowing what an abstraction is abstracting, and why, goes a long way in coexisting with that abstraction. A low-level programmer doesn't need to actually build motherboards from scratch, but they'd probably benefit a lot from a bit of understanding about processor caches, registers, and bus latencies.
This is a non-problem. If there's a shortage of people working at X level of abstraction (where X is possibly very low) then salaries for those with that knowledge will go up and more people will seek it out.
Free markets don't solve every problem but they solve this one.
I think a good first step towards improvement in this space is by separating the terms 'abstraction' and 'indirection'.
Programmers too often add indirections which don't provide abstraction:
* The programmer wants to POST an object to a web server.
* The programmer also wants to think about it at the level of POSTing an object to a web server.
* And yet the programmer creates an HttpClient.java and an AbstractClientManager.java and a ClientManager.java.
* These extra classes are just pointless indirection - not only do they not prevent the programmer from needing to reason about POSTs, but they actively make it harder to do so.
In contrast, if you have a collection type (Set, History, Permissions) and want to treat it as a monoid, that's a huge leap in abstraction, with only one level of indirection.
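A hedged Python sketch of that contrast (the class names and URL are invented for illustration; it uses the third-party requests library):

    import requests
    from functools import reduce

    # Indirection without abstraction: extra layers, same level of reasoning.
    class HttpClient:
        def post(self, url, payload):
            return requests.post(url, json=payload)

    class ClientManager:
        def __init__(self):
            self.client = HttpClient()

        def send(self, url, payload):
            return self.client.post(url, payload)

    # The programmer still thinks "POST this object to that URL", just through two hops:
    ClientManager().send("https://api.example.com/things", {"id": 1})

    # The direct version sits at exactly the level the programmer wanted to think at:
    requests.post("https://api.example.com/things", json={"id": 1})

    # By contrast, treating sets as a monoid (identity set(), combine with |) is one
    # indirection that buys a real abstraction: folding permissions needs no special cases.
    reduce(set.__or__, [{"read"}, {"write"}, {"read"}], set())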
Yes. Anything that is destroyed during the process of compilation is not architecture. It is just a method of code organisation, and we've turned the subject into a holy war, pulling in mindshare that should be spent on more important problems (e.g. we have fibre lines and Ghz multi-core processors, but why are user interactions slower than they were in 1995?).
80% of code organisation problems require doing a couple of things right: directory design (directories, file names, file contents), function design (purpose, naming, parameters, context and return type). I've been coding for almost 25 years and I've encountered the remaining "20%" extremely rarely. Even then, 100 lines of simple code is easier to read and modify than 20 lines wrapped in clever abstraction (e.g. I once had the dubious honour of refactoring a broken and unmaintainable state-machine-driven code base back to a simple series of if/else/switch/case statements, and the improvement in the development and troubleshooting times seemed almost unfair for such a simple "trick").
This an amazing way to concisely explain the difference between indirection and abstraction. Works for me at least.
Anything undone by the compiler (perhaps even inlined) is just indirection. Everything beyond is actual abstraction (which cannot be broken down further by the compiler as it’s far too limited in its understanding).
Folks who have really strong opinions on issues of taste, but then output apps that send a couple of thousand requests (with several seconds of latency), are extremely difficult to reason with.
We do? In my neighborhood, we can’t get fiber lines because of some combination of NIMBYism and political back-scratching allowing a cable company to continue to serve us shit slow and unreliable internet. In the middle of a city!
I’m more interested in solving _those_ problems than how long a request round trip takes.
I think the reason people do that stuff is because they do provide abstraction, but they only do it in UML diagrams, not in the code.
I've found the shittest programmers to work with are the ones that think visually, because in their head they just deleted two boxes and four lines, but in reality all they did was add 40 lines of code that do nothing.
Because almost all programmers tend to add not only abstractions, but also shortcuts. And as everybody knows, a shortcut is a longer but more challenging route.
This - a manager and domain manager for everything. One of the reasons I love Go is that it inherently encourages being direct. All of the verbosity that people complain about is infinitely offset by simply doing the thing. Some examples:
- how directory structure is part of the code organization of the language, and how it's something you have no choice about (so have to spend zero brain power on), and works out of the box.
- the concept of exported/unexported fields (no need for complex logic/annotations or ineffective _ prefixes)
- named returns eliminating the need for explicit return values, so useful when exiting early
- default values (+ named returns)
Any of these *manager concepts just seem out of place in Go.
As a 15+ years webdev my recent projects in the front-end are almost pure "vanilla JS" (Just JS) + Html, and CSS with only Petite Vue / Alpine on top.
I find the modern Vue, React, etc. stacks absolutely insufferably complex and prone to breaking in 1000 places each time you upgrade some package or change some random thing in the already stupidly complex "tooling/build" chains people are setting up by default.
And it's because no one, not even experts in the field, seems to understand 5% of the stuff that is contained in these monster codebases.
I found myself spending 90% of my time on this setup and tooling, and I'm pretty sure most devs do not need it.
I'm wondering if I can somehow pivot into making only these elegant products for customers. Has anyone done this as a solo dev, contracting, or in an agency? Maybe make a "frameworkless" agency? It would hopefully save my sanity and my career.
I find the code much easier to read, the codebase is way smaller, and the apps can do the same thing. You can always pull in a library here and there if needed.
Also coding and putting together a project is suddenly fun again, and it seems most languages; CSS, JS etc. are now so mature you can do almost anything with them without the bloat.
And as a bonus you actually have time to understand the different underlying concepts, like back in the day, instead of spending weeks in issue trackers and playing Tetris with dependencies.
Although I agree that many sites could get away without the bloat of client side JavaScript frameworks, sometimes the complexity is truly justified and it enables you to deliver a better product. Complex dashboards with data intensive interactions and explorations are so much easier with React or something similar
I do agree that advanced dashboards may still require some dependencies like text editors, routing, etc., and I would have agreed in, say, Vue 2. Vue 3 is too complex, just like React, in my opinion.
For example, if you just create a naming scheme or prefix vars in a scope, I don't get why data sharing between things right next to each other in an app has become so cumbersome these days. You have to import and export absolutely everything in some microservice-like declarative way that is overkill in 99% of cases, with code that looks way too complex and disallows binding unless you create complex subscription patterns.
Just let me emit, just let me share through the Window object between teams; everyone will save years of reading manuals and reinventing absolutely everything. If you name your data properly it will make sense. Especially now that proxies and listeners are available in JS as it is.
Agree with you here. You also need the frameworks to scale work out across a team.
But that is what subdomains are for; dashboard.example.com will deal with all that overhead, but forcing the marketing-friendly splash page on example.com to be in React is overkill.
React and JSX are still my preferred method for writing HTML - so much better than templating, and there's no reason you can't have the output be a static site for something like a marketing-friendly splash page. Now, is React a great fit for that? Probably not, but it lets me use a consistent set of tools.
12-yr FE dev here, and I'm a big fan of static sites again, when possible. Who needs an SPA when your round trip costs no JS eval or boot time? I wrote this library that is intended for somewhat tech-savvy folks to easily generate static sites using AWS: https://github.com/dclowd9901/posse
I agree: building sites like that brings the fun back to web development in a big way.
I would really love to know where you put imba.io in your tools evaluation.
It's the only one I can grok that allows me to work at high levels and drop to plain JS when I need to. I hate that it has any external dependency, but I'm just so blazingly fast building with it that I can justify the extra 50 kB.
I've been looking outside the JS world for an alternative for doing full stack dev and Laravel Livewire seems like an amazing alternative for like 80-90% of use cases. Something like Alpine, vanilla, or even Lit for the more sophisticated interactions.
I’ve been super impressed with Livewire on several small projects.
I’ve been sold these same sorts of promises before (yeah you write backend code and we put some glue in and then everything works like it’s running in browser) and usually it’s a case of a very narrow happy path and indescribable horror once you’ve fallen off and have to wade through or work around the “magic”. I did not go into Livewire with high hopes.
Everything I tried to do with it basically just worked. Coming from a background where I’ve worked with PHP/Laravel/Blade quite a bit already, it was really easy to pick up.
For stuff that Livewire isn’t necessarily the right fit for, alpine fills the gap perfectly and integrates well with Livewire.
My Livewire component has one PHP class that contains, basically, the “controller” that exposes the BE actions for the component and a blade template that contains the “view”, including any basic FE interactivity inline by way of alpine. I use Tailwind and Tailwind UI pretty heavily, so all my styling is inline as well. I’ve found this pretty simple and easy to follow, reason about, and maintain.
I’m pretty sure it’s the most fun I’ve had working on FE code in a long time. It’s definitely the most productive I’ve been. It takes no time to stand something up.
I've used both Laravel and Python Flask. Both are very mature, fast, and relatively simple if you strip them down. Though to be honest, I'd like to go even more barebones on the backend and have experimented with pure PHP (which is also quite mature these days) plus a few Symfony packages, but it isn't totally there yet.
You would need to add Alpine.js or something equivalent, though. There is nothing wrong with a few core libraries that are long-lived, thoroughly debugged and understood, etc.
You are totally right; I actually forgot that Petite-Vue is the only core library I'm using atm. I also tried Alpine, and they are both great.
And their tiny scopes are actually nice dogmas to play around with. I haven't yet experienced them not being enough, and when they aren't, you hack together some stuff to extend them, and that actually makes coding fun again. It's like the demoscene: it's way more fun to code with some restrictions and limitations you have to be creative to solve, instead of setting up some dependency/folder hell.
Optimize, minimise, make more elegant, make more readable, that is where the fun is!
> Question everything. Especially things that don't make any sense to you. Don't just assume that someone else knows better - that's how you quickly turn into a blind follower.
That's what I've done by default since I was a kid, and I can tell you the social pressure not to is significant. I recall an interview I did once, and one reason I failed it was "questioning everything". It didn't even feel like it; I was just asking questions about their system - it's supposed to be basic curiosity. At my current job there's this architect who explicitly asked me for continuous feedback, but now shows subtle signs that maybe I went a little too far.
> Questioning everything gets results, not friends.
I'd argue that it depends how it's done. I used to work with a brilliant Data Scientist who had a PhD in AI. She asked so many questions — often about fundamentals — that it was like working with the Riddler. She generated a ton of insight as a result and identified missteps before they happened. Despite questioning everything she was arguably the most liked person at the company because of how she approached it. It didn't feel like being grilled, but rather that she was super interested in people and what they were working on.
My point is that questioning everything can get results and friends if done correctly.
Not surprised that they are a "she". Guys tend to be more aggressive in their questioning, and guys are also more likely to be considered a rival (to other guys, at least).
To make this gender neutral: When questioning, don't be aggressive, and try to obfuscate that you are a potential rival.
Subconsciously, I think some people know their system designs can’t hold up to sustained scrutiny. And their technical aesthetic is weak, oftentimes it is a mental symlink to “best practices” and “what HN/X says.” So talking about the way it has grown, and the why, can provoke anxiety.
So you end up in this weird place where it feels like there is a big pull to make tech about everything but the mechanics of what you actually do on a day to day basis. Or rather, it’s viewed in a very transactional way: “I learn this skill to get that job.”
Personally, I find pithy comebacks like this to sound convincing but to actually be mostly devoid of utility. They tend to operate as a sort of social persuasion tool, but I'd like to Get Things Done, which requires deep domain knowledge.
One thing I've been noticing with my current clients is the vast amount of churn generated by them only having a patchwork understanding of their overall architecture. Some grok the low-level infra, some the deployment levels, others the backend pieces, and others the frontend stuff. However, since there is no story collecting the pieces into a cohesive whole, we see them treadmilling through different tech stacks in an attempt to gain "velocity". They're definitely running fast; it's just they're not getting anywhere very quickly.
It's a pattern I see a lot. On the flipside, when clients have devs who mostly grok the entirety of their system, then we can focus on the questions that really matter: purpose, market fit, empirical evidence from users, _etc._
Questioning things costs time right now, not questioning costs 10 times that in the future.
> It's a pattern I see a lot. On the flipside, when clients have devs who mostly grok the entirety of their system, then we can focus on the questions that really matter: purpose, market fit, empirical evidence from users, _etc._
So uh, not questioning everything eh...
Sounds like you agree with me. That questions should be reserved for the meaningful and insightful.
And that's probably healthy for society to have a check. If the questions about the sky keep the questioner in bed they might not do the things they need to ensure their continued existence. In one respect that might be food but in a business that would be delivering change or supporting the business.
Another example: as a child you understand what + means and take it for granted. As part of a maths undergrad you start understanding the types of operators and the axioms of maths. The child was perfectly fine completing maths problems without that depth.
Ah you went there eh. Yeah an alternative form on my reply would have been to point out that kids are the ones who question everything. Adults learn to identify the questions which will expand knowledge, and ask those.
Asking why the sky is blue when you don't know is perfectly fine, and even admirable, when everyone else seems to know and you don't. I would expect such a person to at least try some research first and then share what they learned and what questions they still have, but if someone is asking earnestly and I have time, then why shouldn't I help?
I don't think the logical extreme you posit here is correct; if it's an interest for the person, why shouldn't they pursue it? Questioning everything, for me, is just getting to the point where it's not magic anymore.
I probably don't need to know the entire kernel code base to figure out why my eth0 won't stay up. It might help a bit to know some of it if I'm getting weird kernel-ey messages in the logs.
The question was not "why is the sky blue", but "why is the sky a different shade of blue".
Every day the sky will be slightly different. There exists a reason why, and given effort you could identify the differences and relations.
Or you could go to work.
> It's a pattern I see a lot. On the flipside, when clients have devs who mostly grok the entirety of their system, then we can focus on the questions that really matter: purpose, market fit, empirical evidence from users, _etc._
My approach is to just question everything "else" where there is a chance I might be able to do something about it. This is why I'm not popular: I don't have much interesting trivia for party conversations. But I have already come to terms with that. It doesn't help with staying up late, though.
One cause I've seen of this is the "college is not needed to be a coder" trend from the past few years (decade?). It is true that most of my skills were acquired after college, but there is some sort of global systems understanding that you get from college (in my case computer + software + network engineering) where you can quickly narrow down potential sources of a problem, all the way down to, say, the TCP/IP stack, CPU architecture, memory management, or how some code might be optimized incorrectly by the compiler. It also helps a lot in asking the right questions. If all you know is [latest framework/language/tool], there is a whole world around this that can bite you when circumstances stop being perfect.
One superpower is looking at a marketing page for a tool and knowing within seconds what it can't do and whether it would be a good fit for your use case, just by what is claimed on the page.
Regarding abstractions: I've sort of always run away from languages that encourage them a lot. Java was one of them, where the entire OOP trend just fell flat for me even in college (I understood the benefits, but meh). The concept of dependency injection is another one - I think it was introduced in JavaScript? Idk. I use it if the framework forces me to, but IMHO it only makes sense when I'm the one building it. Define constructors that take interfaces and instantiate them explicitly with whatever implementations you want (i.e. mocks vs real, etc.), so that you understand everything that is happening.
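A minimal Python sketch of that explicit-constructor style, with typing.Protocol standing in for the interface (all the names here are invented for illustration):

    from typing import Protocol

    class Mailer(Protocol):
        def send(self, to: str, body: str) -> None: ...

    class SmtpMailer:
        def send(self, to: str, body: str) -> None:
            print(f"would send real mail to {to}")

    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to: str, body: str) -> None:
            self.sent.append((to, body))

    class SignupService:
        # The constructor takes the interface; the caller decides which
        # implementation to wire in. No container, no magic.
        def __init__(self, mailer: Mailer):
            self.mailer = mailer

        def register(self, email: str) -> None:
            self.mailer.send(email, "welcome")

    SignupService(SmtpMailer()).register("a@example.com")   # real implementation
    SignupService(FakeMailer()).register("a@example.com")   # test double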
My biggest fear with frameworks (especially JS) is that they are abstracted to such a level that I can't hope to fix an issue in a reasonable amount of time without begging for help in forums or on GitHub, because I don't understand even the concepts. What are effects? How does React work? How does Angular work? etc.
While I also attended college, I sometimes wonder if a college degree is really needed to learn this general stuff. I learned a lot while doing projects completely unrelated to schoolwork, before registering for the course. I still had to register for the course because of graduation requirements, but I just skipped all the lectures and only took the exams. There are nice materials and textbooks available, and I think anyone motivated enough can learn them without needing a college degree, and probably end up with a better understanding than most college graduates. I really hope that colleges can just allow students to take an exam and skip some of the courses, so as not to waste everyone's time. This is 2023, not 2003; most people have access to the internet and CS education is widely available. No need to force everyone to attend introductory courses that are not well designed anyway.
Regarding abstractions, I think the craziest thing is that frontend projects are probably the most complicated beasts I have ever dealt with. I have worked with a compiler written in Python, embedded system projects where pointers are everywhere, and numerical algorithms which are notoriously hard to debug, but frontend projects still managed to be a lot more complicated than them. I generally stay away from frontend stuff these days...
I think it's dangerous to think in this way: I succeeded without college, so everyone can, so college should be abolished.
Not everyone has the discipline to self learn through a course. Not every one knows how to properly and effectively absorb and retain knowledge, and not everyone knows how to make that knowledge useful. What college does is provide structure and a favorable environment for learning where you're among peers, sharing the same experience and learning together "how to learn".
Some people are able to get fit by themselves, others need structure and guidance, so you have a whole industry of gyms and personal trainers and programs that require attendance.
My point is that this attitude towards college (or other forms of structured/formal learning) might be what is creating large numbers of full-stack developers, devops who only know Terraform, etc., who couldn't debug the first networking issue they hit even if the error message were right there in the browser. Moreover, the official point of college is that it provides some level of confidence to your peers, employers, and customers that at the very least you know the fundamentals of your profession, because without them it is not enough that you're a React wizard or a microservices expert.
I think if there were a baseline of required knowledge, most products we build would perform better, consume less resources and fail less.
Yeah, I don't think college should be abolished. I just think that college should adapt to this trend of online education, instead of acting as if it were 100 years ago when there was no internet. And they should probably try to make better use of online education materials instead of offering their own courses that are worse...
> but there is some sort of global system understanding that you get from college (in my case computer+software+network engineering) where you can quickly narrow down potential sources of a problem, all the way down to say, the TCP/IP stack, CPU architecture, memory management, how some code might be optimized incorrectly by the compiler.
That's one potential place to get that kind of understanding. It's not the only one though. I've been fascinated by computers since I was a kid and I've put together a pretty good understanding of this stack.
Don't get me wrong, college isn't a bad place to get that understanding, I'm going back myself (mostly because my employer is paying for it), but it's definitely not the only place.
Given the push toward FP principles in JS, it’s become much easier to wade through library code, since now we don’t have to assume some reference is getting clobbered somewhere, and we’re not monkey patching some base library. Those were the bad ol’ days of browser disparity and (let’s face it) IE.
Now, not all libraries are built with modern principles, but I’ve found digging through the source is not nearly as onerous as it used to be (as long as you avoid tooling like yarn PnP, which basically undermines your entire ability to inspect library code just so you can… avoid unzipping archives?)
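For a concrete (and deliberately tiny, hypothetical) contrast between the two eras:

    // The old pattern: patching a shared prototype, so behaviour depended on
    // whichever script happened to load last (left commented out on purpose).
    // (Array.prototype as any).last = function () { return this[this.length - 1]; };

    // The modern pattern: a plain exported function (a hypothetical helper)
    // with no hidden state, which you can read, test, and step into in isolation.
    export function last<T>(xs: readonly T[]): T | undefined {
      return xs[xs.length - 1];
    }

    console.log(last([1, 2, 3])); // 3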
It is all about how much risk we tolerate in the Western world.

People are getting nervous because if a war with China happened, along with rapid de-globalization, something akin to the "1970s energy crisis" would hit the semiconductor and digital world.

The price of our beloved computers and devices would probably need to rise more than 10x, and around 100x for things to get really interesting.

If that happened, your cloud-native, scalable, hyper-convergent smart platform powered by AI would feel like a car designed when oil was $15 a barrel. Not exactly what you need right now.

Companies would seek out, and even surrender to, the services of people like the blog author, the way a lifelong addict seeks help and realizes he wants to live once he has hit bottom.
Abstractions are like tech debt. When you consume an abstraction, you also take on all the klocs (thousands of lines of code) that it represents; they have a price. Having multiple sources (as in suppliers) of that abstraction greatly lessens the risk. As does having an abstraction that is total.

The problem with the abstraction tower is not that it is a tower, but that it is square: when you can't tell where you are relative to the abstractions above or below, your abstraction gradient is too small and you are just generating mush. We have too much mush.
Way too much mush. And when questioning the mush, some people get bent out of shape because being critical of the temporary constructs that have collapsed “brings people down”.
>> Some students today apparently don't even know what files and folders are?
This isn't just coders. I work with people every day who do not understand directory structures. I blame "cloud" apps like O365. They don't understand that the human can dictate where a file is saved and stored. They just expect that the machine will save it somewhere according to the type of file, or that all the files for a project will be in one big space defined by the UI for that project. But then I ask them to send me a copy of a file for whatever reason. All they can do is attempt to send the file from within the UI, and only to another user within whatever app they are logged in to. They don't know how to move discrete files between disks or directories on their own, or even that such things are possible.
I blame Android (and to a lesser extent, iOS). In early Android, you had a filesystem resembling desktop Linux. Nowadays, apps' files are secreted away in a dir who-knows-where and other apps can't even see them, let alone read them—for good reason—but the same lack of privilege extends to the user via their file manager (if the OEM was generous enough to leave them one).
Compare this to a time when floppy disks (either kind) were ubiquitous, and while the average PC user then may have been less informed of how their machine worked than today, I assure you that they would have had a good grasp of moving files around. Routinely handling disks with your own two hands contributed to that, I feel.
As a systems administrator who has been doing this for a while, I can attest to this becoming more and more of a problem.
I've worked with CUDA programmers who weren't aware of what "architecture" means in the context of computing until I asked them to consider running binary code meant for NVIDIA on AMD GPUs.
I've worked with network administrators who couldn't work through the simple math (a single IP, 64k ports, and the approximate memory per entry in a NAT state table) to figure out that it's actually not impossible to have NAT states that persist indefinitely.
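For what it's worth, that back-of-the-envelope calculation fits in a few lines; the bytes-per-entry figure below is an assumed ballpark, not a measured one:

    // Could a single-IP NAT afford to keep every possible state entry forever?
    const usablePorts = 65_535;   // one public IPv4 address
    const bytesPerState = 350;    // assumed rough guess for a conntrack-style entry
    const totalBytes = usablePorts * bytesPerState;

    console.log(`${(totalBytes / 1_048_576).toFixed(1)} MiB`); // ~21.9 MiB, trivial for modern hardware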
I've worked with people to whom I've had to explain that bits are bits: just as in the audiophile world, where a huge amount of money and energy is spent trying to convince people that pricey bits are somehow better than cheap bits, unless you process them differently they're bits, and they're exactly the same.
There's this tendency to believe and follow trends, even when there's ample evidence that these trends aren't much more than marketing fluff and more often than not lead to dead ends. It's fine if others want to believe the marketing fluff, but when they want me to change my workload to support something because it's "all the rage", I have to put it in terms they understand ("You love Go? Great. Now what if I told you to use Rust because it's all the rage, and I expect you to rewrite everything in Rust?")
It's unfortunate that students aren't taught more history of computing. If they were, they'd likely learn that computing is computing, and that things don't really change that much, and that if you make and use things that aren't different for gratuitous reasons, they'll still be around and relevant years from now.
> and just as in the audiophile world where there's a huge amount of money and energy spent to try to convince people that pricey bits are somehow better than cheap bits
I always assumed this came down to the analog part (usually some sort of speaker) being better, and all the extras on the cords were really just there because a cheap looking cord would ruin the aesthetic they're going for.
If you want to go down a deep, dark, sad rabbit hole, look in to the digital audio part of the "audiophile" world.
For me it began when I looked at a friend's catalogue that had a $2,500 (25 years ago) CD player that claimed incredible fidelity in part because they bathed the underside of the disc with blue light while reading with a red laser, as though this somehow made reading the bits better.
I explained to my friend that barring scratches and speed fluctuations, bits are bits, and none of that matters one iota until you get to the digital to analogue stage.
My example was that you could write the data on strips of cloth from old pyjamas using free crayons that restaurants give to kids, put them in Dixie Cups and transport them across the Sahara on camelback, and so long as you reassemble them in the right order, the bits will be identical to the bits from a $2,500 CD player.
He's a brilliant guy, but it took him longer to understand what I was saying than I could've predicted. The whole thing - the scammy market, the FUD, the bullshit, the desire to believe - was and still is fascinating, albeit sad.
Someone somewhat recently did a review, if you can call it that, of "audiophile" ethernet switches. Yes, they want you to believe that the bits somehow are better if you send them using their special ethernet switches. Might be worth a quick search.
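The underlying point is easy to demonstrate in a few lines (a minimal sketch in Node.js, with buffers standing in for real audio files):

    import { createHash } from "node:crypto";

    // Two copies of the same audio bytes (stand-in buffers, not real files):
    // however they were "transported", if the bytes match, nothing upstream
    // of the digital-to-analogue stage can tell the copies apart.
    const expensivePlayer = Buffer.from("16-bit PCM samples ...");
    const dixieCupCaravan = Buffer.from("16-bit PCM samples ...");

    const digest = (b: Buffer) => createHash("sha256").update(b).digest("hex");
    console.log(digest(expensivePlayer) === digest(dixieCupCaravan)); // true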
Software simply mirrors the development of modern society. Highly complex and relying on countless abstractions to tame that complexity and be able to operate.
This design is sort-of working under plain sailing conditions (lots of available resources, booming markets, peaceful cooperation). But it is very non-resilient under stress (subdued expectations, broken promises, resource scarcity, hostility and strife).
Remember the pandemic? It was a painful lesson, and we like to forget pain. But it was an instance of our abstractions failing at a global and systemic level: broken supply chains, panic, implosion. Thankfully it wasn't the worst kind of negative development, but it shows that in the way we design essential pieces of the economy we are basically ignoring entirely plausible scenarios.
Now, how are we to respond to this momentous challenge? We can't go back to some low-tech autarky. People cannot build and program their own silicon from the ground up (though it would be really cool to be able to do it, at least as a proof of concept).
The gist of the right approach, it seems to me, is to make sure that abstractions are tethered. If you need dozens of pieces to be in place to deliver something, the combinatorial possibility of some of them not being available grows exponentially. If you need the latest and most expensive gear to operate, that means the long tail of humanity is left behind.
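As a rough illustration of why that matters (the 99.9% per-dependency availability is an assumed figure, and so is independence between failures):

    // If all N dependencies must be in place and each is available with
    // probability p, the chance that everything works at once is p^N
    // (assuming independence; p is an assumed figure).
    const p = 0.999;
    for (const n of [1, 10, 50, 200]) {
      console.log(`${n} dependencies: everything is up ${(p ** n * 100).toFixed(1)}% of the time`);
    }
    // 1: 99.9%, 10: 99.0%, 50: 95.1%, 200: 81.9%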
Another analogy might be the drastically different structures you can build out of carbon atoms. You can have graphite layers that peel off at the slightest pressure, or you can have diamonds, which are the hardest of them all. The difference is the more connected nature of the diamond lattice. The different abstractions should fit together very tightly, with strong bonds.
“Everything in the tech industry is driven with a very hardcore eye for profit and very little interest in anything else”
What is up with the discourse lately? Of course it’s all about profits; we’re talking about companies, right, not non-profits? How else are they going to pay you?

The whole mindset of expecting companies to care about so many things apart from profit-seeking is just bizarre to me.
Not all profit is created equal. Sometimes in order to chase greater short-term rewards, you sacrifice your access to long-term rewards. Pushing your dev team into unsustainable practices may increase next quarter's profits, but might also mean that your company is bankrupt 2 years from now.
I would amend what the author wrote to say that a lot of stuff is being driven with an eye for short-term profit, at the expense of long-term value.
This is the kind of thing you say during an interview, get rejected, and then wonder if you just experienced ageism instead of doing any introspection about your own rant.
A charitable interpretation of the article is that, of course, balance is what actually matters. It is good, on some level, that abstractions free up developers (or whoever) to focus on the business instead of tiny little details. Their example, however, proves that you need escape hatches sometimes, and that you need a mind for the details at times; right now the balance is far in the "abstract everything" direction, which has its own issues.
I think you might've missed the point a little bit. It doesn't read to me like "abstractions bad", it's saying if you're going to use an abstraction you should learn how it works (and that we've built so many that nobody bothers to learn them, which is true).
I know Javascript, React and Express like the back of my hand. I can solve absolutely any problem I've run into with that stack, I can't even remember the last time I was stuck on something. But I've ended up on like 5 projects in a row with stuff like Typescript, NextJS and NestJS because they are red hot on the fad scale right now. Despite that, I still don't feel like I know much about them because I set that bar reasonably high, and out of all the people forcing me to use tech I don't want to use, none of them even come close to reaching that bar.
I'm sick of asking fairly basic questions about frameworks and libraries I'm not as familiar with, and getting back "oh, I dunno, have you tried...?" 15 times in a row before solving the problem. That's not an efficient way to work, no matter what 90% of the industry seems to think.
No, this is the kind of thing you say during an interview, get rejected, and then realise you were probably not going to like the work environment anyway.
That’s cope. None of the flags in an interview have anything to do with anything, especially after you’ve taken enough jobs to realize that you’ll probably get called in at the last minute to interview additional people, with no preparation whatsoever for the interview process, so you just google a couple of things to ask and wing it yourself.
In the example at the end, people don't necessarily need to know what files and folders are, so it's not bad that they don't know. It's better for a new developer to have a blank slate for learning about trees of storage paths, since the skeuomorph is outdated. Just like the save icon being a floppy disk was so outdated that, instead of replacing it, people realized we don't actually need save icons anymore and everything should just always save.
Meh. Unless something very unexpected happens, human programmers aren't going to be a thing anymore, starting in the not-too-distant future. And programming AIs will neither have trouble understanding all those layers of abstraction, nor will they add additional ones, since their main purpose is to help limited human minds cope with the inherent complexity of problem domains.
So the only way the mounting complexity of software could create a "bleak future" would be if it somehow prevents us from bootstrapping those programming AIs, and I doubt that will be the case.
AIs of today don’t seem to understand anything; they will often output code that uses methods that don’t exist or libraries that don’t exist, and this is just dealing with the top layers.
These posts are naive, and remind me more of an angry teenager being told the world is a tough place to be in.
Deal with it.
And if you really want to change shit around you, be damn sure to have lots of time and equal patience. And, not least, some understanding of how the human mind works in a professional capacity.
It's too fucking easy for any of us to simply throw our hands up and say "this is soo bad".
If you really want to understand something try to fucking change it and tell us how that went instead.
Everyone can tell stories about how bad shit is but very few can tell a story of the backbreaking commitment it takes to change shit.
If that's the case, then building a house, farming, and everything else in life is an abstraction of the blind leading the blind.

The reason we are so great is that we can stand on the shoulders of the giants who came before.

This is especially true of software. I don't need to understand assembly to get my business tested and in front of users.

To me this is the same argument the generic older generation uses to complain about a younger generation: they'll destroy themselves if they don't do it like I did.
Change is here. Adapt and learn like you used to. Do that and you'll be fine.