If we assume most of us don't really know what we're doing, that totally explains language preferences. We don't choose the best tools, we choose the best tooling. And the best tooling is the one we can comfortably fit in our minds, so advanced concepts are mostly ignored and the hype wheel constantly turns. In short: smart people write better and better tools so the rest can handle bigger and bigger projects while writing mediocre code.
Of course, as every programmer, I live in constant fear that I am part of the plumbing crowd waiting to be exposed.
You can get an awful lot done by "plumbing". Entire businesses like SAP are built on it. It can also be mission critical; at SpaceX, is the literal plumbing of hydraulic fluid and fuel flow unimportant? No.
I'm trying to understand the industry, as it appears (at least to me) to be different from what I thought was true. I believe this is important if we're going to do better, and there are plenty of metrics showing we should do better (percent of projects failing, percent of projects exceeding budget and time, percent of projects becoming unmaintainable).
If you could prove that only a handful of people are capable of actually developing a software project past the stage of 'piggy-backing' on libraries, that would probably distinctly change the way we develop software. Maybe we could prevent death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job.
It doesn't mean in any way that 'plumbers' should/would be treated worse. If anything, I would expect the opposite.
It reminds me of how MIT changed their intro-to-programming course from the Scheme-based one to a Python-based one, because "the SICP curriculum no longer prepared engineers for what engineering is like today. Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems. [...] programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?'. The analysis-by-synthesis view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant." (http://www.posteriorscience.net/?p=206)
Also reminds me of Vernor Vinge's "Zones of Thought" novels, where in the far future the starships don't exactly have programmers, but rather a kind of software archaeologist who assembles systems from software components that may be a thousand years old.
Failures here are almost definitely related to lack of adequate mentorship rather than anything else. College doesn't go half the way to prepare you to be a successful engineer.
There are people out there that can be self-motivated to do better, but in almost all those cases they're building skills that do the dirty work but don't feature best-practices necessary in a collaborative engineering environment.
Your first employer/team, and their ability to mentor and develop new engineers, makes a huge impact on your success as an engineer. Really capable engineering mentors are worth their weight in gold (diamonds? printer ink?) and their contribution has an exponential effect.
This is something I'm hearing a lot at the moment, and not just about engineering. What would you say college taught you?
College assignments typically:
- are well specified and known to be completable
- start from a blank slate
- produce relatively short programs
- once complete and accepted, will never be run or looked at again
- must be completed individually
Whereas in a real software engineering department:
- goals will be to some extent vague and fluid, may be contradictory => requiring negotiation skills with PM, customers etc.
- you will nearly always be adding to an existing project => requiring ability to read code and perform archaeology
- programs end up huge => requiring schemes for better organisation, modularisation etc
- have a long life and a maintenance overhead => requires design for adaptability
- are required to collaborate => requiring use of a VCS, not having complete freedom to choose tools, techniques like CI and branching for managing incomplete vs complete work fragments.
Others' comments about the difference between school work and real work are spot-on.
It's funny how much I hated group projects as an undergrad, but how in some ways they were the best preparation: How do you still get things done when everyone has different ideas, varying levels of competency, available time, and motivation?
The how to learn bit is (and has been, for the last 20 years for me) massively helpful. It's rare that I think back to a particular thing I learned about (it still happens though), but I cherish knowing how to move from one subject to another when trying to work out how I should solve something and where I should look next.
What learning those things does do is drastically increase your future flexibility as a developer - new databases, new languages, new jobs entirely, whatever. It's all built on the same primitives and if you have that fundamental understanding it makes it easy to ramp up on new technologies given you have the willpower and motivation. There's still a learning curve for specialized fields (of course) but that's fine.
Colleges may well be adapting since I left, but the main issue is that people aren't really holding you to the standards of software that exist at capable software firms. Correctness is about all that matters in university. Students don't know how to optimize for testability, maintainability, deployability, monitorability, etc etc. And learning and developing those skills makes you far better at the 'correctness' bit too.
There are some courses that are collaborative, but in industry the code you write can affect hundreds or thousands of other engineers and there can be real economic consequences of issues in your work (see: plenty of interns/new hires that have had the opportunity to kill $100,000-$1,000,000 or more in revenue by taking down a site - not blaming them, it's just an issue that actually exists in the real world). The order of magnitude is just so different.
This isn't a problem per se, I don't think universities should be expected to perfectly prepare you for this (this is why internships are crucial, and are one of the strongest interview signals for new grads). But somebody does have to - the onus is really on employers of new grads to raise functional engineers if they want to have top notch engineering teams.
I'll be honest - I didn't really grok CS until my first internship had passed, but that one summer really changed both my existing knowledge and my desire to build those skills further. I'm really grateful to have worked with some people that sparked that interest in me. I was at a 2-fulltime-dev startup with a ton of opportunity to work on different pieces of the stack, and it was just tremendously fun.
Side note on taking down applications: as software enterprises get more mature and taking down a site becomes that much harder, it feels (to me) like new engineers in your organization actually have less opportunity to learn by doing on the foundational pieces. This is a very bizarre catch-22 that I suspect has real consequences for the growth of new engineers in software organizations. Very hard to calculate that effect though.
A prototyping mindset allows you to try many things due to the low cost of failure; operations does not. Most software work, and engineering work in general outside of academia, has at least a touch of ops flavor (i.e., a high cost of errors), so to be successful (e.g., not to be labeled a loose cannon) one must be able to impose the self-discipline this requires. But most organizations have systems and environments for prototyping (or will gladly set one up if you clearly express your wishes and articulate some benefits).
An engineer who wants to work on new ideas must then learn to wear multiple hats (prototyping vs. ops) and switch them as needed: the moment one hat is glued on, he limits himself either to rigid ops work (no, we cannot try new things) or to junior-level dev work (he cannot touch the real systems; his code does not work well enough).
It sounds to me like there needs to be some sort of deep-dive "onboarding" program where new hires can work on a curriculum of projects and learn the SOPs of the organization.
Colleges could take some of it on, of course (testability and maintainability, for example), but one of the complaints I heard even ten years ago is that they can't keep up with the changes in the field. No true fundamental best-practice principles have evolved; it's largely company-dependent.
Now, in other areas, we have a distinction. We have physics departments that teach people theory, and we have separate engineering departments that prepare people for careers putting the theory to useful work. Well, where does the CS program live? At least where and when I went to college, the CS program was part of the Engineering department.
So I think it's fair to say that colleges should take on considerably more of the job of preparing software engineers for real-world careers in software engineering. Hiding behind "we teach CS, not software engineering" is a cop-out, especially if CS is within the College of Engineering.
This is maybe more obvious in hardware design.
At the top of the tree you have people like Maxwell, Heaviside, and Shannon, who invent entirely new possibilities out of pure math.
At the other extreme you have technicians who don't truly understand math or theory, but can build a circuit that will probably work if handed a cookbook.
In the middle are people who can work with abstractions like DSP and filter design as long as the ground has been broken for them. They understand enough math to find their own way through a problem from first principles, but aren't creative enough to invent anything truly original.
CS is more amorphous, the levels are maybe harder to separate, and it's cursed by not having a truly consistent practical mathematical foundation analogous to the applied physics that underlies engineering of all kinds.
But IMO there are similar skill levels - although at the higher levels, abstraction can become a bad thing rather than a good one.
The problem is that although there's math in CS, after Church/Turing - which is pretty elementary compared to most physics - there isn't anything that passes for a formal theory of problem/solution specification and computation.
Without that objective basis, a lot of CS is academic opinion instantiated in a compiler. And a lot of practical commercial CS is a mountain of hacks built on language and tooling traditions with no empirical foundation.
Commercially, the most productive people will be middle-rankers - not so clever they'll be thinking of new applications for category theory in an iPhone app, but clever enough to be able to think beyond cookbook development.
My employer does exactly this, both within the R&D organization and within our services/consulting group. All new hires from college do a 3-4 week "boot camp" where they do all the common indoctrination stuff, from HR paperwork, to learning the shared tools, to a mini programming project.
Expecting a college graduate to show up ready to contribute like a 5 year veteran is ridiculous. As the parent message says, college is mostly for education, not training. Internships and co-ops fill some of the gaps, but high quality internships are few and far between.
We don't need a cult of brilliance. What we do need is an atmosphere of humility. Modern software/hardware systems are of breathtaking complexity. It turns out that's simply hard for humans to hold in their minds as a whole, but we still develop software as if we actually could.
For a while, we had a glimpse of what a simpler world could look like (the early days of the web, when everything was GET/PUT/POST). We promptly proceeded to layer complexity on top of that.
And that's OK, because it gave us a lot more power. But we pay a price for that power. And every time we attribute that price to lack of brilliant people, we mostly show that we haven't even come close to understanding what it even is that makes projects succeed.
The genius myth is just magical thinking in disguise.
Could you explain how such proof would lead to those changes?
> death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job
These aren't software problems, they're business and social problems. No conceivable level of productivity improvement will eliminate the death march.
If we split it into "framework writers" and "application writers" then organizing teams along these lines might improve efficiency.
I do training in machine learning and other areas, and I often make the "framework/application" skill-level distinction described here -- where "framework" just means the meta-development activity.
What the op comment appears to be saying aligns pretty exactly with my experience of working and teaching.
I don't think it would matter.
In one instance, you're debugging an Apache server; in the other, it's an in-house server implementation. You handjam a CSS file, or you use a preprocessor to help. You can create your own SPA implementation (as I painstakingly and naively did) or use any of the hundreds of existing ones. So do you want to debug business logic, or debug all of your in-house implementations and your business logic? External tools are not perfect, but the idea is that they've been battle-tested, so we know where they shine and which edge cases were missed. On top of that, decisions about the tooling must still be made. Relational or NoSQL? You must still know the difference when choosing.
At a certain level, what happens is that you get into the realm of math / engineering problems. And there definitely is mindset and focus differences between engineers and business -- one that's good at one is not necessarily good at the other. I just don't think the coding aspect of that separation is as stark as you make it out.
It has everything to do with management practices, organization, anxiety, fear, or the personal wish to be seen as a hero.
Is the distinction between an aerospace engineer and aircraft mechanic "elitist"? Which would you prefer to have designed the next aircraft you fly in?
Plumbing is really what the vast majority of us do. With varying levels of skill, we glue various pre-written libraries and packages together with a bit of business logic, solving problems that a million others have solved before.
The same is true for a lot of software development. It is true that I am not going to design and develop ALL of the components of my software. For some things I will use libs and frameworks someone else made. I don't see a reason to be dismissive about that.
For instance I will be using the libs our hardware manufacturer provides to talk to the avionics bus of the A320. I don't see a reason to redesign our own ARINC429 avionics bus libs, but I also don't see a reason why this would make what I do not "actual software development".
Irrespective of the title, can they design and build the object in question? This applies to all fields.
The point that we must look at is whether or not the problem before us can be solved by us. It doesn't matter if you build from the ground up or use some pre-built parts. Is the solution going to work and solve the problem at hand. If it does then you have success, if it doesn't then you have failure.
It is a matter of understanding the problem (in terms of the problem space), what solutions you use only matters if you can't solve the problem.
Moreover, the point of the comment isn't about the titles assigned to these people by someone external: it's about the actual skills they have that would cause you to assign such titles to them.
If you know people who can design new aircraft, then they are, by definition, aircraft engineers. They may have a job as an aircraft mechanic, but that's beside the point. That doesn't mean the average aircraft engineer is capable of designing aircraft.
My other point is that solving the problems at hand is the more important function, irrespective of what title is attributed to you.
And it seems ridiculous at times to even throw around the title of “Software Engineer” when the field has no standards of certification or regulation like other engineering disciplines. The only distinction between the programmer and the engineer is that the engineer makes architectural decisions, and the larger the scale, the more accurate the title. “Plumbers” are cheap, and no one cares if you fire them.
I think it will be a long time before the field slows down enough that standardising it will be viable.
Those days are over. When society was illiterate, people who could simply read and write might have enjoyed the same prestige as those who wrote novels or manuscripts. As literacy grew, however, so did society’s ability to distinguish between skill levels. The same will happen with code.
I remember when mobile development was at its hottest peak, declaring I was an iOS developer practically made people bow down in awe and throw offers my way for help with developing their mobile app idea (usually in exchange for equity or “revenue share”). Nowadays the field is so commoditized I don’t even mention my 8 years experience with iOS except in passing conversation.
A computer science degree is enough to call yourself a software engineer because most people can’t tell the difference these days. But for people who know the industry, a front end dev whose job is to basically push pixels on to a page is hardly an engineer, and I’d say is our modern version of a mid ‘00s website designer.
Having a timeless standard for what makes someone a Software Engineer that we can all agree on and can be verified by third parties would be helpful. Naturally, this will be met with resistance because there are many people who will not qualify, and who do not want an engineering license that would require them to be liable for their work.
I would think that if this were possible, it would have happened by now.
Programming as a profession - while not as old as other "engineering professions" - is much older than what you are insinuating here; for instance, COBOL dates from 1959, FORTRAN is a couple years older, and there are a few older languages before that.
But let's use COBOL as a "standard" - since there is a ton of COBOL out there still running and being maintained (and probably more than a bit being created "new"). That's almost 60 years worth of commercial software development (using various languages starting with COBOL).
If a standard could be considered and developed, it would have likely been done by now.
There are more than a few arguments as to why it hasn't, but one of the best I have read has to do with the fact that our tools, techniques, etc., seem to change and be updated so quickly that standards for testing would become outdated at an insane pace. An engineer certified to a set of standards might be obsolete before he leaves the building!
Ok - somewhat hyperbolic, but you get the idea. For the "classic" engineers, their tools and techniques don't change much if at all over long periods of time - so certification is more straightforward. For some engineering professions, you can pretty much take someone who was certified in 1970, and be pretty certain that he or she would be able to do the same kind of work today. That would definitely not be the case for a proverbial "software engineer" certified to the standards of that time...
Don't get me wrong - I run a very small programming shop and I make judgment calls every day about whether to borrow or build. My operating system, hardware drivers, compiler, dependency manager, email server, etc., I borrow, because it seems obviously practical and I have an appreciation for the complexity underneath (although I have some unkind things to say about hosting tiny apps on full-blown Linux virts; the waste is unbelievable). I use Unity for client-side development for games, which is probably the decision I'm least happy about, but I simply don't have the bandwidth to deal with the monstrous complexity of client-side programming (especially in a gaming context).
Frameworks are generally bloated monstrosities that conceal performance problems behind walls of configuration and impenetrable cruft that has developed over decades of trying to make the "out of the box" experience configurable while pleasing myriad experts. They do more than one thing relatively badly, and the engineers who work with them often haven't developed the ability to deep dive into them to solve real scaling problems.
You don't get simplicity, you never get zero-touch, and your learning when working with a framework often doesn't generalize, so you're basically renting a knowledge black-hole rather than paying down a mortgage on a knowledge library.
Anyway, that's my two cents on why I think having solid fundamentals is important, at least in my line of work.
I don't think it is. Someone who is slapping together libraries from npm, but has no idea how to debug with gdb or use strace/ktrace/dtrace etc. to diagnose problems with the resulting system, or does not have the skills to fix bugs or add new features to the "plumbing" - that person is not an actual software developer. There is a huge gap in skills and knowledge, and consequently in the capabilities, of these two camps of people.
Yeah, and then there's actual code writing: the heavy lifting has been done, and all the functions/classes/modules/... are still small, waiting to be stretched with a lot of nice code. In well-sorted projects the latter is a trivial task for simple features.
But yes, I agree with the sentiment that there is a lack of general "understanding of stuff". I'm not sure you must be able to use strace to be productive, but it sure helps if you're able to get to know the tools that are installed on most systems. Coming back to the OP's topic: LISP is a language that uses formalisms (ways to plumb ;)), tools (ever heard of asdf?), and syntax completely alien to even long-term computer addicts. It always puzzled me how people can be comfortable using this kind of stuff.
You will be productive until whatever runtime/library you are using has a serious bug or performance issue. Then you either use tools like strace or your productivity drops to 0. It is not a matter of marginal or order-of-magnitude productivity differences. You don't "understand stuff" for sentimental reasons, you "understand stuff" because the alternative is your manager has to call in people like me that "understand stuff" and pay them tens of thousands of dollars to fix things that you can't.
> It always puzzled me how people can be comfortable using this kind of stuff.
It's called learning.
> It's called learning.
I think he meant fidgeting with pre made solutions hoping they will work, instead of having principled ways to craft things.
If SpaceX were to "plumb" as he meant it, they would use off-the-shelf modules and ideas for quick results. Instead they actually designed their rockets mostly from scratch (very rare in the space industry, where it's often a rule not to stray from what's proven to work).
I know I'm supposed to think that that is bad, but I can't come up with why it would be.
Then the prototypes get handed off to the main engineers to run with, and support is provided. By the time the thing sees production usage, the original inventor has long since moved onto solving other problems on the horizon.
Praise is attributed to the last and loudest to have touched a project. Innovators are deeply satisfied by that and don't need the praise because it actually gets in the way.
The question is why elitism is wrong. Your statements rest on whether that perception of elite status is wrong. Well, if that is ever resolved, is elitism wrong or not? Ought power, profit, and position be shared?
> Entire businesses like SAP
Companies like SAP mostly waste people’s money by extracting huge sums from municipalities.
Perhaps, but you are the person who prepended 'mere' in front of plumbing.
You are correct - however, it doesn't mean that this statement is necessarily wrong.
Usually when I see a statement that reeks of elitism, I immediately assume a lower probability of it being true, because the elitism of a statement correlates with falsehood - but it's worth remembering that this correlation is not absolute, and sometimes (although rarely), elitism is indeed deserved and true. And in this particular case, out of my personal professional experience, the elitism is absolutely deserved. There are lots and lots of software engineers out there who can't, or just won't, fit more complex ideas into their minds.
And yet you are, of course, correct: these developers can, and will, do good work. They can have different skills, like knowing the users, or creating beautiful animations, or having a great game design sense. These developers don't need to be looked down upon. And yet it would be very useful for all of us to acknowledge that these developers think in a different way and require different tools - they're not "worse" than "real" engineers, but they are different.
That said, I do agree with you. I think that 1) it can be very important to make distinctions between types of programmers, because 2) it makes it easier to actively explore the field and find what suits your 'type'.
For example, for many people, someone who 'does' HTML/CSS with some jQuery plugins is a programmer. But personally I'd not really call such a person a programmer. I don't mean that as a value judgment, but rather to make a distinction between 'types' of work.
Making that distinction earlier in my career could've helped me (whether the programmer/non-programmer label is used or not), because I spent more time than I'd have liked being such an HTML/CSS/JS guy. I learned tons of very specific rules/tricks/lore that were necessary but did not help me as a programmer, and spent countless hours doing this kind of work without really enjoying it all that much.
(thankfully those things are still useful when I do full-stack type stuff, but still)
Ironically, I do precisely the opposite, due to my personal experiences. Over the years I've lived on this planet, I've been both in the "elitist" groups and in the groups that accuse someone else of being "elitist" - and I universally found that it was the "elitist" group that was right. I've learned to recognize accusations of elitism as a way to cater to one's feelings of insecurity (and I was guilty of that in the past, too).
Yes, I think this is very much underappreciated. When people find a language or system that matches their way of thinking, it works a lot better for them. But not necessarily for the next person. So you get small communities of people who find that e.g. Lisp has been amazing for them, who can't see that other people think differently about programs and systems.
However, the market doesn't care about that. The market prefers short-term gains over long-term gains. Perhaps we can blame Wall Street? Due to the demand for short-term gains, the ask of most developers is "how fast can you build this", not "how can you build this to be most efficient and cheapest over the long run and lifetime of the product". To move quickly, developers must then employ abstractions layered upon abstractions.
The short term winners get lots of money for demonstrating wins quickly. The losers conform to keep up.
I agree with this. For example, Sun was sold for $5.6B in 2009, while Skype was sold for $8.5B in 2011.
Sun had Solaris, Java, SPARC, and MySQL.
Skype was a chat tool.
Even today, many popular databases find it hard to get billion-dollar valuations, while multiple social companies have done it.
The market doesn't care about core CS. It cares about monetary gains.
Solaris, Java, SPARC and MySQL had all demonstrated that they weren't going to acquire major revenue streams, at least under the ownership of Sun. Their valuations reflect two different types of company, something the market is very able to understand.
And ironically, a shortage of "core CS" was a major factor in that failure. Usable, high-fidelity, encrypted VOIP is an enormously difficult challenge, and after some early successes Skype failed to offer a quality product. Claiming the valuation as an argument that core CS doesn't matter looks pretty backwards to me.
Their Android app was a horror story. Their desktop app was ho-hum, chat reminded me of ICQ, and it was completely obvious that the major players were going to be the people who had hip social networks or were well positioned, like Google with all Android users and Apple with all iPhone users.
Skype was a tool to talk to grandma, and I haven't forgotten that they capped group chats at fewer than 10 people unless you used an Intel processor, and tied video resolution to buying a Logitech camera.
Turns out even grandma has a Facebook account now; in fact she had one back when lunatics decided it was a good idea to buy Skype.
> The market doesn't care about core CS.
Yeah, implementing peer-to-peer voice over IP with Skype's late-00s quality (which has fallen significantly since then) is not "core CS" and is just "plumbing", right.
I'm not sure why it would be framed as low-difficulty, except that consumer-facing tools tend to get written off as simple.
The monetary gains the market sees are the direct result of clever math and CS, not bolt-on solutions that can be lifted from libraries on GitHub.
Most jobs in software development are about creating business value, either as an end-user product that your customers are going to use, or as internal tooling that will help the business have more access to data, streamline processes, increase efficiency, etc.
This "no true Scotsman" approach to software development is actually quite funny after a while being a professional, it's a huge industry, there are terrible companies, there are terrible managers, product managers, etc., but there are also great ones. You can work for good ones if your current gig is mostly being pushed around by unreal expectations from your stakeholders.
The demand is not for short-term gains. How can you justify that it's going to take a year to build your perfectly architected software if the business really needs it done in 3 months and you assess that it's quite doable if you decide on some constraints? Your job as a professional engineer is to find ways to do your work as well as possible given constraints, to design something that can be improved over time, to communicate with stakeholders and, given your area of expertise (software), give valuable input to business decisions so they can be the best possible at that moment.
Seeing software engineering as some grandiose goal in itself is quite wrong; software exists MOSTLY to fulfil business purposes.
It's not about "conforming", there is software that is fun to work, that are intellectual challenges by themselves but that really have no way to be justified on a business level.
This defensiveness against "business" is part of a mindset that should be broken among engineers. We should embrace being part of a much larger process, not act as if we were the golden nugget and the cream of the crop at a company. Our work is to enable others' work.
BTW, I'm now part of the business side and a manager. If you give me time constraints, you don't get to give me feature demands; you can give me your prioritized feature list and I can tell you what we can deliver given all the other constraints. If you demand all the features, then I take away the time constraint.
I think that the debt analogy is a sound one; if you are buried under credit card debt then you might not even be able to make the interest payments. But if you take on debt in a considered and thoughtful fashion, then you can achieve things you wouldn't otherwise be able to do (e.g. buy a house, in the debt analogy). I have found that the debt analogy is very useful for communicating these tradeoffs to the business stakeholders.
So sometimes it actually is reasonable for the stakeholders to request you to compress the timeline on a sequence of features, if there's an external deadline that must be hit; we just have to make sure we get buy-in to come back and repay our debts (preferably immediately after the deadline).
I couldn't answer this question accurately. I can't even give remotely accurate time estimates for projects larger than "build a CLI tool to do this one thing," much less give an informed estimate about the tool's TCO. I feel like I'm just floating down the river, incrementally improving upon the stuff that we've already built and that nothing we planned to accomplish ever is.
I'm sure lots of people here have executed their Grand Vision for a project. But I'm also certain many of us never have.
When he says "...knowing Lisp destroyed my programming career" he just means that at some point in time he switched to other things. There was no "destruction". It was "a pivot"-- to use HN lingo.
I think anyone who makes programming (let alone programming in a particular tool/language) the absolute focus of their career is in for a major disappointment. The OP was NOT crushed by his realization. He just moved on and appears to have been very successful regardless. Not a big deal unless one is obsessed with Lisp.
The sales pitch is clear: don't become a better programmer, get a better toolkit.
I have been quite fortunate to have come into computers before this cloud of marketing madness overtook us. I got to watch the layers roll out one-by-one.
Honestly I have no idea how I would learn programming if I had to do it again today. It's just too much of a byzantine mess. Hell if I would start with the top layers, though. I'd rather know how to write simple code in BASIC than have a grasp of setting up a CRUD website using WhizBang 4.7. When you learn programming, well, you should be learning how to program, not how to breeze through stuff. Breezing through stuff is great when you already know what's going on -- but by the time you already know what's going on, it's unlikely you'll need the layer or framework. (Sadly, the most likely scenario is that you've invented yet another framework and are busy evangelizing it to others.)
This guy's story strikes me as poignant and indicative of where the industry is. They don't care if you can solve people's problems. They care if you're one of the cool kids. It's nerd signaling.
What is a "real programmer", anyway? Is it knowing how a CPU works? Managing memory? If you rely on the garbage collector, do you really know what you're doing? If you write a Rails app without fully understanding HTTP, are you just plumbing?
Does it matter?
The reason we build tools and abstractions is to allow us to accomplish higher-level tasks without worrying about lower-level ones. Does it help if you understand lower level ones? Sure. But for millions of apps and websites that aren't Facebook or Hacker News or Firefox, the only problems are high level.
Abstractions are leaky. I've yet to encounter one which doesn't leak.
Though if you're working in a team, it's not necessary for everyone to be a real programmer. One or two is enough. The tasks which require real programming are rare; most tasks are mundane.
Understanding one level of abstraction doesn't mean that you understand the levels above or below it. It's perfectly possible (likely, even!) that a team will have someone who can build a good Rails app and someone else who can make sure the HTTP code and infrastructure is secure, yet those people won't have many skills that overlap.
Yes, there are definitely good programmers and bad programmers. IMO, that has to do with how effectively you can solve problems you care about, not (as the comment to which I was replying suggests) the level of abstraction you're comfortable with.
I'd much prefer 'highly skilled' or 'advanced', if at all necessary.
Most programmers never have to write low-level code.
This is a good thing.
It doesn't mean they can't, it just means we've moved on from wasting our time re-implementing solutions to problems already adequately solved for a general case. Finally.
Manual memory management is frankly insane outside of extreme edge cases today.
It's the goal of software to provide more and more features. The problem is, we usually achieve those features by abusing the abstractions we are using. Once the problem becomes apparent, somebody writes a library, which allows us to write a couple of times more crappy code before everything collapses. Rinse and repeat.
Since we are unable, at scale, to choose the right abstractions when necessary, we quite often get ourselves into situations where it's actually preferable to rewrite everything using lessons learned. Rewriting comes with its own set of new problems, and the cycle is complete.
As a result, we are fundamentally unable to reuse already-written code at scale, crippling development and keeping the whole industry in a constant early stage.
Actually I disagree - it's just that when something does become reusable it immediately vanishes from people's consciousness. You can see this both in things that become standard library features, and open source components that end up ubiquitous. The list of "incorporates software from" in licenses gets ever longer as people embed copies of SQLite, logging libraries, ORMs, serialisers, and so on.
One of the original article's points was that the things he loved in Lisp became features of other languages, and so ceased to be special.
So far, I just know it to mean making your function more widely applicable. I'm assuming there's a better definition I'm missing?
To illustrate, let's use a simple example. Let's assume you need a logging feature in your application and your language/framework doesn't provide one (pretty much impossible nowadays).
The simplest solution is to just append to a hard-coded file. It solves the imminent problem. It's also a very bad solution, unless you will never write a log again.
Instead, you build a Logger class/module/whatever. The logger provides 'write' functionality, which is abstracted away: the client (i.e., the code using the logger) does not know or care how the data is going to be logged. It's the responsibility of the logger to do that and to make decisions about how to do it. Now you have an abstraction: you depend on the ability of the Logger module/class/whatever to do its job, without knowing, caring about, or influencing the actual implementation.
Now, let's talk about leaking abstractions. Let's assume you need to differentiate between log levels (info/warning/error). Logger is provided as a library (so it's not easily modifiable) and only has a generic 'write' method. The proper solution would be to create a new abstraction, LoggerWithLevels, which would provide methods like 'write_warning', 'write_error', etc. Underneath, it would probably just call Logger, prepending a severity string to the logged message. The important thing, though, is that we still base our code on an abstraction: we don't know how our new logger represents the different levels; we use the appropriate methods and assume it's handled correctly.
What will likely happen, though, is that the programmer will not create a new abstraction layer. Instead, he will start manually or semi-manually formatting the strings sent to the logger. The effect will be the same, but the abstraction will now be 'leaking', because the severity level now depends on the implementation (pre-formatted log messages), not on the logger API. Therefore, any change in the process (like saving logs to an online service instead of a disk file) will require changing code in multiple places instead of just one.
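A minimal sketch of both paths in Python (all names here are illustrative, not from any real library):

    class Logger:
        """The library logger: exposes only a generic write()."""
        def __init__(self, path):
            self.path = path

        def write(self, message):
            with open(self.path, "a") as f:
                f.write(message + "\n")

    class LoggerWithLevels:
        """New abstraction on top: callers say WHAT severity they mean;
        only this class knows HOW it is represented."""
        def __init__(self, logger):
            self._logger = logger

        def write_info(self, message):
            self._logger.write("[INFO] " + message)

        def write_error(self, message):
            self._logger.write("[ERROR] " + message)

    log = LoggerWithLevels(Logger("app.log"))
    log.write_error("disk quota exceeded")  # severity stays behind the API

    # The leaky version: every call site formats severity by hand, so any
    # backend change means hunting down all of those call sites.
    raw = Logger("app.log")
    raw.write("[ERROR] disk quota exceeded")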
Instead, assume the previously mentioned Logger does internally write to a file. Now, when the disk becomes full, Logger throws an exception and the caller must do something about it. The abstraction of just writing anything to the log is broken, and the underlying complexity leaks through.
This example also demonstrates the difficulty of keeping abstractions from leaking. You can't exactly make the code automatically free up disk space or sign up to a remote logging service when disk space runs out.
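Continuing the hypothetical Logger sketch from above, that leak looks something like this to the caller:

    import errno

    def handle_request(logger):
        # The caller only wanted "log this line", but the file-backed
        # implementation leaks through as an OSError when the disk fills up.
        try:
            logger.write("request handled")
        except OSError as e:
            if e.errno != errno.ENOSPC:
                raise
            # No clean recovery at this layer: drop the log line, alert an
            # operator, or fail over to a remote sink.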
I definitely need a new word for situations like this.
Of course, they also require a fair bit of custom code that is purely plumbing to hook them up (translate data from one source and stick it into the tool).
I watch teams a lot. When I listen to them, I notice how much they talk about tooling instead of solutions. The more they talk about the tooling, the more they're sucking wind.
The truly good stuff just disappears from awareness. That's how you know if a framework or abstraction layer is working for you (instead of you working for it.) It's invisible.
That's the goal. The key question is: are we writing features that users find valuable or are we writing features that developers find valuable? They're two different things. What's happened is that the community has become incestuous. Instead of focusing on value to the user we're focusing on selling frameworks to one another.
>>Software development severely lacks any objective metrics of performance and quality.
Agreed here as well. But this is related to my first point. Nobody wants to focus on the users. It's far too easy to focus on the technology (or other developers). You write a feature for a user, you can instrument it and see whether users use it or not. You write a feature for a developer, even if nobody uses it you can argue that somewhere, somehow, that feature is going to be critical. It's an abstract value argument -- which is intractable.
>>As a result, we are fundamentally unable to reuse already-written code at scale, crippling development and keeping the whole industry in a constant early stage.
It's a sad state of affairs. I suggest that your conclusion continues the broken thinking I'm describing above. We shouldn't be striving for reusable code at scale. We should be striving for people who have strong basic programming skills and are experts in some business domain. Then we would shoot for the minimum amount of technological abstraction necessary for these programmers to solve problems in that particular domain. Because that's really the whole point: not creating large codebases that last twenty years, but creating tiny bits of code, almost instantly, that solve business problems for twenty years. We've got our heads stuck in the wrong bucket.
I think a deeper problem is that we're not really doing such a great job writing features for developers either. I'm amazed at how much of what I do is still done in the terminal or a bare-bones REPL, not so different from how they worked a few decades ago, when there's so much that could actually improve my day-to-day in small but very noticeable ways.
For example, I make almost constant use of the autocomplete feature when I work with the BEAM REPL (Elixir). I was amazed to find out that this was a relatively new feature. I can't count how many keystrokes it saves me throughout the day.
Of course, I still can't use vim-style keybindings, and I'm forced to write my REPL code line by line instead of in Chrome DevTools-style snippets. But, as you say, I do have tons of frameworks to choose from that all do mostly the same thing, and yet none of them does the relatively common thing that I need, so each of them requires me to read documentation and figure out how to configure it 'just so'...
What they don't allow, though, is assessing code quality in terms of a single project (i.e., judging whether it's good or not). Since in many cases we do not have a good baseline, we are unable to consistently evaluate work. That leaves us with the aforementioned problems.
The business doesn't care about your toolset, probably. What they care about is solving a problem.
I've met no small number of very gifted, very creative developers that had this same mindset -- didn't give a crap about the Cool New Language or pure CS, but really DID get excited about building connections and features within existing systems to solve business problems.
These two visions of development are in tension. Few folks can get jobs writing Haskell or Lisp, but there are LOTS of jobs for .NET developers.
Neither path necessarily makes a better developer, though.
Obviously doctors are still good, and don't need to be doing research to make the world a better place. I agree that dismissing 'plumbing' programmers or 'rote' doctors would be a serious mistake. But... well, I can't help drawing some connections between programmers implementing already-broken security, and doctors putting in heart stents that don't actually help patients.
I don't think we're just being metaphorical here, I think research vs practice doctors often show the same patterns as programmers, for the same reasons. Creating new knowledge is neither necessary nor sufficient for keeping up with other people's knowledge, but we seem to be worryingly bad at keeping people who implement that knowledge up to date.
(Context on the doctors: https://www.theatlantic.com/health/archive/2017/02/when-evid...)
That said, I found something weird: sometimes, with a few adequate libraries (Guava, for instance), I enjoy doing some Java. It's verbose, way more than Lisp, Clojure, or Kotlin (not even mentioning Mr. Haskell, of course), but I find a little pleasure in writing code there. It's manual and it requires doing a lot of things by hand, but even though it's slower and more "work", it's another kind of stimulation. I think that is one large factor in people writing code in subpar languages: they're just happy doing things and solving things their own way.
Some say that Lisp and Haskell can't be mainstream because their power is best suited to complex problems, and I think that hints at my previous point. People who need more than the mainstream have the brain power and the desire to solve non-mainstream things.
ps: about the plumbing thing, you might have heard that MIT switched to Python for exactly that reason. I was, and still am, stumped that the people who brought us SICP decided that plumbing was the way to go.
There is plenty of software that is not about taking pre-existing packages and gluing them together, but rather involves construction from the ground up. Building a CRUD app using an existing framework and an existing REST API, with little to no custom business logic, is plumbing.
Using scikit-learn for your ML app is plumbing. Writing your own novel ML/deep learning algorithm and shoving the data in and out of GPUs is not plumbing.
That is plumbing as well, by your definition. Writing your own novel ML/deep learning algorithm also takes "pre-existing packages" and glues them together. Unless you are actually going to write your own custom OS, language, and drivers, you are going to reuse stuff.
I don't think there is anything wrong with using pre-existing packages and the like. We are all doing that anyway, unless you are Terry A. Davis, of course.
Is doing front end not actual software development? What does that even mean?
Are you considered a real software engineer if you can write code in a certain language? ..
Or does it mean you are really good at O(1) problems?
I think we should all agree that software development is a team effort, it requires vast knowledge and skills that different individuals bring to the table.
Doing CSS adds just as much value as writing the backend API.
Here is a good analogy: for an aircraft to function, pilots, aircraft mechanics, and aerospace engineers all come together.
Does that mean one is doing "plumbing"?
Certainly not, they all add value.
In today's environment, 'pure programming' is a small fraction of what needs to get done.
As a long-time programmer, the most leverage I have had is when my work connected to some business objective and produced a result. My engineering background, which emphasized a "problem-solving attitude" helps.
I think the average developer can do this too if they'd just get out of the programming echo chambers and trust their gut.
How true; 50% of the jobs I see are actually semi-skilled, overpaid jobs which won't last.
In many cases you configure somebody else's product and at best manage it; that won't last. It is already happening.
Well, the cheapest, safest assumption is that your skill level is about average and that the majority of the programmers you'll meet are going to have the same kind of skills as you.
Yes, well done abstractions are how we keep large projects manageable. No one can learn all of them at once. Neither those writing "plumbing" nor those doing libraries.
I didn't write Auster for me, I wrote it because other people needed a tool and it fit the parameters. I'm not writing Modern for me, I'm writing it because there's a hole in the ecosystem that somebody has to solve, and I'm a someone.
 - https://github.com/eropple/auster
 - https://github.com/modern-project/modern-ruby
Probably the best advice is to learn as much as possible, but focus on the basics. I feel people often mistake knowledge for skill, and that harms them in the long run.
If you learn a concept, you can use it in anything you create. If you learn a framework, you will have to learn a new one in 5 years. Focus on transferable skills.
Hunter-gatherers on capitalist democracy, Capitalist democracy on hunter-gatherers, China on India, India on China, US on India, India on Iran, Iran on Israel...
You get the picture.
Perhaps what it really means is "communications failure: exception thrown in cultural assumptions".