
For a while now I've had a feeling that all the comments about the lack of engineers (especially in software) vastly underestimate the problem. Probably only around 10% of us are capable of doing actual software development. The rest write plumbing and can handle a project only for as long as the abstractions available through libraries can hold the complexity.

If we assume most of us don't really know what we're doing, that totally explains language preferences. We don't choose the best tools, we choose the best tooling. And the best tooling is the one we can comfortably fit in our minds, so advanced concepts are mostly ignored and the hype wheel constantly turns. To put it briefly: smart people write better and better tools, so the rest can handle bigger and bigger projects while writing mediocre code.

Of course, like every programmer, I live in constant fear that I am part of the plumbing crowd waiting to be exposed.




Maintaining a distinction between "actual software development" and "plumbing" is elitist, even if you're placing yourself on the downside of that comparison and setting yourself up for imposter syndrome.

You can get an awful lot done with "plumbing". Entire businesses like SAP are built on it. It can also be mission critical; at SpaceX, is the literal plumbing of hydraulic fluid and fuel flow unimportant? No.


I'm not trying to imply that one is more "noble" than the other. As you said yourself, businesses usually run on plumbing.

I'm trying to understand the industry, as it appears (at least to me) to be different from what I thought was true. I believe that to be important if we're going to do better, and there are plenty of metrics showing we should do better (the percentage of projects failing, of projects exceeding budget and schedule, of projects becoming unmaintainable).

If you could prove that only a handful of people are capable of actually developing a software project past the stage of 'piggy-backing' on libraries, that would probably distinctly change the way we develop software. Maybe we could prevent death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job.

It doesn't mean in any way that 'plumbers' should/would be treated worse. If anything, I would expect the opposite.


Indeed, the "plumbing" role is getting more and more important as there is more and more reusable software out there, so there is less need to write things from scratch.

It reminds me of how MIT changed their intro-to-programming course, from the Scheme-based one to a Python-based one, because "the SICP curriculum no longer prepared engineers for what engineering is like today. Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems. [...] programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?'. The analysis-by-synthesis view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant." (http://www.posteriorscience.net/?p=206)

Also reminds me of Vernor Vinge's "Zones of Thought" novels, where in the far future the starships don't exactly have programmers, but rather a kind of software archaeologist who assembles systems from software components that may be a thousand years old.


Yes, and virtually all software job postings/interviews are tests for experience with specific fittings (abstractions/frameworks).


I was one of the last classes at Caltech that used SICP for the intro to programming class before they switched for similar reasons (this would have been ca. 2005)


> only a handful of people are capable of actually developing a software project

Failures here are almost definitely related to lack of adequate mentorship rather than anything else. College doesn't go half the way to prepare you to be a successful engineer.

There are people out there who can be self-motivated to do better, but in almost all those cases they're building skills that get the dirty work done but don't include the best practices necessary in a collaborative engineering environment.

Your first employer/team, and their ability to mentor and develop new engineers, makes a huge impact on your success as an engineer. Really capable engineering mentors are worth their weight in gold (diamonds? printer ink?) and their contribution has an exponential effect.


> College doesn't go half the way to prepare you to be a successful engineer.

This is something I'm hearing a lot at the moment, and not just about engineering. What would you say college taught you?


In general, undergraduate assignments:

- are well specified and known to be completable

- start from a blank slate

- produce relatively short programs

- once complete and accepted, will never be run or looked at again

- must be completed individually

Whereas in a real software engineering department:

- goals will be to some extent vague and fluid, and may be contradictory => requiring negotiation skills with PMs, customers, etc.

- you will nearly always be adding to an existing project => requiring ability to read code and perform archaeology

- programs end up huge => requiring schemes for better organisation, modularisation etc

- have a long life and a maintenance overhead => requiring design for adaptability

- you are required to collaborate => requiring use of a VCS, not having complete freedom to choose tools, and techniques like CI and branching for managing incomplete vs complete work fragments.


For what it's worth, I think that in college you mostly learn to learn... So what you learn isn't that important; to me you are just proving that you can learn fast.


It's a little trite but the best thing college taught me was how to read critically and teach myself as needed. The second best thing was providing plentiful examples of good pedagogy (I went to a teaching college, not a research U.) which turned into models for how I try to mentor.

Others' comments about the difference between school work and real work are spot-on.

It's funny how much I hated group projects as an undergrad, but how in some ways they were the best preparation: How do you still get things done when everyone has different ideas, varying levels of competency, available time, and motivation?


For me it was "How to learn more" with a fair bit of "how to work with others" and a lot of the theory.

The how to learn bit is (and has been, for the last 20 years for me) massively helpful. It's rare that I think back to a particular thing I learned about (it still happens though), but I cherish knowing how to move from one subject to another when trying to work out how I should solve something and where I should look next.


I imagine this varies, but for me most of college was very much about the pure bits of computer science - how things tick, so to speak. But very little of your day to day at most enterprises is about writing new versions of data structures, or academic-level operating systems/database work (obviously there are some roles in the industry where this is the task, but it's not the majority). That's not to say learning it wasn't important - I'd argue that it's a crucial foundational aspect of being a very capable software engineer, it's just not the whole picture.

What learning those things does do is drastically increase your future flexibility as a developer - new databases, new languages, new jobs entirely, whatever. It's all built on the same primitives and if you have that fundamental understanding it makes it easy to ramp up on new technologies given you have the willpower and motivation. There's still a learning curve for specialized fields (of course) but that's fine.

Colleges may well be adapting since I left, but the main issue is that people aren't really holding you to the standards of software that exist at capable software firms. Correctness is about all that matters in university. Students don't know how to optimize for testability, maintainability, deployability, monitorability, etc etc. And learning and developing those skills makes you far better at the 'correctness' bit too.

There are some courses that are collaborative, but in industry the code you write can affect hundreds or thousands of other engineers and there can be real economic consequences of issues in your work (see: plenty of interns/new hires that have had the opportunity to kill $100,000-$1,000,000 or more in revenue by taking down a site - not blaming them, it's just an issue that actually exists in the real world). The order of magnitude is just so different.

This isn't a problem per se; I don't think universities should be expected to perfectly prepare you for this (this is why internships are crucial, and are one of the strongest interview signals for new grads). But somebody does have to - the onus is really on employers of new grads to raise functional engineers if they want to have top-notch engineering teams.

I'll be honest - I didn't really grok CS until my first internship had passed, but that one summer really changed both my existing knowledge and my desire to build those skills further. I'm really grateful to have worked with some people that sparked that interest in me. I was at a 2-fulltime-dev startup with a ton of opportunity to work on different pieces of the stack, and it was just tremendously fun.

Side note: an interesting thing about taking down applications is that as software enterprises get more mature and taking down a site becomes that much harder, it feels (to me) like new engineers in your organization actually have less opportunity to learn by doing on the foundational pieces. This is a very bizarre catch-22 that I suspect has real consequences for the growth of new engineers in software organizations. Very hard to calculate that effect though.


I agree with what you say about both college (its goal and value) and much of the real work on actual systems. To me, this splits pretty neatly into "prototyping" vs "operations".

A prototyping mindset allows you to try many things due to the low cost of failure; operations does not. Most software work, and in general engineering work outside of academia, has at least a touch of ops flavor (i.e., high cost of errors), so to be successful (e.g., not to be labeled a loose cannon) one must be able to impose the self-discipline this requires. But most organizations have systems and environments for prototyping (or will gladly set one up if you clearly express your wishes and articulate some benefits).

An engineer who wants to work on new ideas must then learn to wear multiple hats (prototyping vs ops) and switch them as needed: the moment one glues a hat on, he limits himself to either rigid ops work (no, we cannot try new things) or a junior-level dev role (he cannot touch the real systems; his code does not work well enough).


I agree that that's one really interesting way to consider the split, good call on that! Being able to experience those multiple hats is definitely important.


Thank you for your thorough reply. Colleges are under fire, even from their own ranks, by people who, I believe, confuse training with education, but it's clear that's not your issue.

It sounds to me like there needs to be some sort of deep-dive "onboarding" program where new hires can work on a curriculum of projects and learn the SOPs of the organization.

Colleges could take some of it on, of course (testability and maintainability, for example), but one of the complaints I heard even ten years ago is that they can't keep up with the changes in the field. No true fundamental best-practice principles have evolved; it's largely company-dependent.


It seems to me that, out of ten graduates from a CS program, only one is going to go on to do academic computer science; the other nine are going to become computer programmers. CS programs are doing a poor job of preparing those nine.

Now, in other areas, we have a distinction. We have physics departments that teach people theory, and we have separate engineering departments that prepare people for careers putting the theory to useful work. Well, where does the CS program live? At least where and when I went to college, the CS program was part of the Engineering department.

So I think it's fair to say that colleges should take on considerably more of the job of preparing software engineers for real-world careers in software engineering. Hiding behind "we teach CS, not software engineering" is a cop-out, especially if CS is within the College of Engineering.


There are still different levels of expertise and talent in engineering, broadly dependent on mathematical ability and skill for abstraction.

This is maybe more obvious in hardware design.

At the top of the tree you have people like Maxwell, Heaviside, and Shannon, who invent entirely new possibilities out of pure math.

At the other extreme you have technicians who don't truly understand math or theory, but can build a circuit that will probably work if handed a cookbook.

In the middle are people who can work with abstractions like DSP and filter design as long as the ground has been broken for them. They understand enough math to find their own way through a problem from first principles, but aren't creative enough to invent anything truly original.

CS is more amorphous, the levels are maybe harder to separate, and it's cursed by not having a truly consistent practical mathematical foundation analogous to the applied physics that underlies engineering of all kinds.

But IMO there are similar skill levels - although at the higher levels, abstraction can become a bad thing rather than a good one.

The problem is that although there's math in CS, after Church/Turing - which is pretty elementary compared to most physics - there isn't anything that passes for a formal theory of problem/solution specification and computation.

Without that objective basis, a lot of CS is academic opinion instantiated in a compiler. And a lot of practical commercial CS is a mountain of hacks built on language and tooling traditions with no empirical foundation.

Commercially, the most productive people will be middle-rankers - not so clever they'll be thinking of new applications for category theory in an iPhone app, but clever enough to be able to think beyond cookbook development.


So, you're suggesting that computer engineering be separated from computer science, with CS being in the Math department. And you would expect CS to be a relatively small major.


I think that would be reasonable, yes. It would parallel other science/engineering splits.


> It sounds to me like there needs to be some sort of deep-dive "onboarding" program where new hires can work on a curriculum of projects and learn the SOPs of the organization.

My employer does exactly this, both within the R&D organization and within our services/consulting group. All new hires from college do a 3-4 week "boot camp" where they do all the common indoctrination stuff, from HR paperwork, to learning the shared tools, to a mini programming project.

Expecting a college graduate to show up ready to contribute like a 5 year veteran is ridiculous. As the parent message says, college is mostly for education, not training. Internships and co-ops fill some of the gaps, but high quality internships are few and far between.


I know of a company that hires fresh graduates, and doesn't expect them to really be able to contribute for two years. They're in Indianapolis, though, so they may have considerably fewer problems with their employees getting poached by others before they can contribute enough to pay back the training period.


It would be interesting to see a course that spanned multiple years, building upon the same project with the same team and emphasizing different aspects year-by-year, but I imagine that would be pretty nightmarish on the scheduling front.


It's also a problem of how much you can ask a student to do. There was a time when the expected time to complete a "4-year degree" was approaching 6 years in the engineering-ish fields, and the tuition-check-writers got pissed and called their state legislators to put a stop to it. So adding some sort of multi-semester capstone project would have to be woven into what is already in place, but I would imagine it wouldn't entirely fit into the curriculum without increasing the student level-of-effort (meaning, all of the stuff being taught would still have to be taught, but in addition the extra bits special to the scale of the project would add to the overall workload).


It wouldn't have to be that big. Even if part of one semester was to take a month as a class and try to add some features and fix some bugs in a codebase that the previous decade of classes had worked on, that would be an eye-opener for most students.


Death marches are caused by the people who project the image that they understand software development better than others. Spaghetti code is caused by them as well, they just pretend it's so brilliant you can't understand it. (After their departure, the RDF usually fades)

We don't need a cult of brilliance. What we do need is an atmosphere of humility. Modern software/hardware systems are of breathtaking complexity. It turns out that's simply hard for humans to hold in their minds as a whole, but we still develop software like we actually could.

For a while, we had a glimpse of what a simpler world could look like (the early days of the web, when everything was GET/PUT/POST). We promptly proceeded to layer complexity on top of that.

And that's OK, because it gave us a lot more power. But we pay a price for that power. And every time we attribute that price to lack of brilliant people, we mostly show that we haven't even come close to understanding what it even is that makes projects succeed.

The genius myth is just magical thinking in disguise.


> If you could prove that only a handful of people are capable of actually developing a software project past the stage of 'piggy-backing' on libraries, that would probably distinctly change the way we develop software

Could you explain how such proof would lead to those changes?

> death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job

These aren't software problems, they're business and social problems. No conceivable level of productivity improvement will eliminate the death march.


I've experienced death marches occurring precisely because the team didn't choose the right abstractions up front because, roughly, they weren't skilled enough to have done so - though they were more than skilled enough to understand them when handed to them.

If we split it into "framework writers" and "application writers" then organizing teams along these lines might improve efficiency.

I do training in machine learning and other areas, and I often make the "framework/application" skill-level distinction made here -- where framework just means the meta-development activity.

What the OP's comment appears to be saying here aligns pretty exactly with my experience of working and teaching.


> If you could prove that only a handful of people are capable of actually developing a software project past the stage of 'piggy-backing' on libraries, that would probably distinctly change the way we develop software. Maybe we could prevent death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job.

I don't think it would matter.

In one instance you're debugging an Apache server, in the other it's an in-house server implementation. You hand-jam a CSS file, or you can use a preprocessor to help. You can create your own SPA implementation (as I painstakingly and naively did) or use any of the hundreds of existing ones. So do you want to debug business logic, or debug all of your in-house implementations and your business logic? External tools are not perfect, but the idea is that they've been battle-tested enough that we know where they shine and which edge cases were missed. On top of that, decisions about the tooling must still be made. Relational or NoSQL? You must still know the difference when choosing.


For many programming jobs, the ultimate goal is not the code, but solving a business problem. So to me it's fine to just "piggy-back on libraries" if the library doesn't get in the way, doesn't lead to poor performing code, and saves time.

Even as a business programmer, I do agree that it's a good idea to occasionally step into some places that are a little more low level. Yet from what I've seen (playing around with embedded systems and VSTs and the like), the actual coding process is, more or less, more similar than different. Both in process ("the basics" of Javascript translate to some degree to "the basics" of C), and even at the "lower level", libraries are also used quite frequently. For instance, JUCE is a package pretty frequently used for developing VSTs (VST itself is an SDK). These packages save time and allow developers to focus on the meat of the audio plugin, the DSP algorithms.

At a certain level, what happens is that you get into the realm of math / engineering problems. And there definitely are mindset and focus differences between engineers and business -- someone who's good at one is not necessarily good at the other. I just don't think the coding aspect of that separation is as stark as you make it out to be.


Crunch has zero to do with quality of programmers or tooling.

It has everything to do with management practices, organization, anxiety, fear, or a personal wish to be seen as a hero.


> Maintaining a distinction between "actual software development" and "plumbing" is elitist...

Is the distinction between an aerospace engineer and aircraft mechanic "elitist"? Which would you prefer to have designed the next aircraft you fly in?

Plumbing is really what the vast majority of us do. With varying levels of skill, we glue various pre-written libraries and packages together with a bit of business logic, solving problems that a million others have solved before.


I could go and say that the aerospace engineer is also just doing "plumbing". The engineer is not going to design an engine for instance. "All" he does is specify what kind of engine he would use and then plug it to the fuel system and the wing structure. The engine was designed by someone else at a different company most likely. (Example: the Airbus A320neo uses PW1000G turbofan engines made by Pratt & Whitney. "neo" stands for "New Engine Option" by the way). This happens with a lot of the parts of an aircraft. Dismissing that as "plumbing" would be absolutely insulting to the engineer.

The same is true for a lot of software development. It is true that I am not going to design and develop ALL of the components of my software. For some things I will use libs and frameworks someone else made. I don't see a reason to be dismissive about that.

For instance I will be using the libs our hardware manufacturer provides to talk to the avionics bus of the A320. I don't see a reason to redesign our own ARINC429 avionics bus libs, but I also don't see a reason why this would make what I do not "actual software development".


From what I read on /r/engineering, a lot of engineers complain that their jobs look like what you described - compiling together parts from different vendors' catalogs. A lot of people are resentful that they get to use maybe 2% of the cool math-based knowledge they got in university. It looks very similar to the resentment a lot of CS grads working in software engineering are experiencing.


A better question: which would you rather have doing the inspections on your aircraft?


I know a couple of people who either build aircraft for a living or build aircraft for their personal use; neither of them is an aerospace engineer. But I'd happily fly in their aircraft.

Irrespective of the title, the question is whether they can design and build the object in question. This applies to all fields.

The point we must look at is whether or not we can solve the problem before us. It doesn't matter if you build from the ground up or use some pre-built parts. Is the solution going to work and solve the problem at hand? If it does, then you have success; if it doesn't, then you have failure.

It is a matter of understanding the problem (in terms of the problem space); what solutions you use only matters if you can't solve the problem.


People who build aircraft are not people who design aircraft.

Moreover, the point of the comment isn't about the titles assigned to these people by someone external: it's about the actual skills they have that would cause you to assign such titles to them.

If you know people who can design new aircraft, then they are, by definition, aircraft engineers. They may have a job as an aircraft mechanic, but that's beside the point. That doesn't mean the average aircraft engineer is capable of designing aircraft.


My point is that the aircraft fly. An awful lot of designs fail to fly, and these are designed by aeronautical engineers.

My other point is that solving the problems at hand is the more important function, irrespective of what title is attributed to you.


In general, plumbing is considered the “blue collar work”. No one would put a boot camp front-end JavaScript grunt in the same league as an MS or PhD guy doing artificial intelligence research, compiler construction, or reverse engineering malware.

And it seems ridiculous at times to even throw around the title of “Software Engineer” when the field has no standards of certification or regulation like other engineering fields. The only distinction between the programmer and the engineer is that the engineer makes architectural decisions, and the larger the scale, the more accurate the title. “Plumbers” are cheap and no one cares if you fire them.


Interesting that you put those three together; while cutting edge AI research might require that level of academic background, I wouldn't say that compiler work does. It's more within the reach of an undergraduate degree course. And malware (for and against) is a completely different field altogether, full of people with unconventional backgrounds and a large chunk of poachers-turned-gamekeepers.

I think it will be a long time before the field slows down enough that standardising it will be viable.


The point is there was once a time when sprinkling some HTML and putting a site on the web automatically made you a wizard, and if you were young enough you might even be hailed as “the next Bill Gates”.

Those days are over. When society was illiterate, people who could simply read and write might have been held in the same prestige as those who write novels or manuscripts. As literacy grew however, so did society’s ability to distinguish between skill levels. The same will happen with code.

I remember when mobile development was at its hottest peak, declaring I was an iOS developer practically made people bow down in awe and throw offers my way for help with developing their mobile app idea (usually in exchange for equity or “revenue share”). Nowadays the field is so commoditized I don’t even mention my 8 years experience with iOS except in passing conversation.

A computer science degree is enough to call yourself a software engineer because most people can’t tell the difference these days. But for people who know the industry, a front-end dev whose job is basically to push pixels onto a page is hardly an engineer, and I’d say is our modern version of a mid-’00s website designer.

Having a timeless standard for what makes someone a Software Engineer that we can all agree on and can be verified by third parties would be helpful. Naturally, this will be met with resistance because there are many people who will not qualify, and who do not want an engineering license that would require them to be liable for their work.


> Having a timeless standard for what makes someone a Software Engineer that we can all agree on and can be verified by third parties would be helpful.

I would think that if this were possible, it would have happened by now.

Programming as a profession - while not as old as other "engineering professions" - is much older than what you are insinuating here; for instance, COBOL dates from 1959, FORTRAN is a couple years older, and there are a few older languages before that.

https://en.wikipedia.org/wiki/History_of_programming_languag...

But let's use COBOL as a "standard" - since there is a ton of COBOL out there still running and being maintained (and probably more than a bit being created "new"). That's almost 60 years worth of commercial software development (using various languages starting with COBOL).

If a standard could be considered and developed, it would have likely been done by now.

There are more than a few arguments as to why it hasn't, but one of the best ones I have read about has to do with the fact that our tools, techniques, etc - seem to change and are updated so quickly, that standards for testing would become outdated at an insane pace. An engineer certified on a set of standards might be obsolete before he left the building!

Ok - somewhat hyperbolic, but you get the idea. For the "classic" engineers, their tools and techniques don't change much if at all over long periods of time - so certification is more straightforward. For some engineering professions, you can pretty much take someone who was certified in 1970, and be pretty certain that he or she would be able to do the same kind of work today. That would definitely not be the case for a proverbial "software engineer" certified to the standards of that time...


I don't agree. The typical undergrad CS program doesn't prepare people for AI research, but that's because it dedicates lots of time to discrete/combinatorial stuff and low level systems knowledge. Drop/condense architecture, OS, compiler, exotic data structures and algorithms, networking, programming languages. Add more calculus, linear algebra, probability, optimization, statistics, symbolic AI. That undergrad would be equally good at AI work as a traditional undergrad would be at compiler work. AI field is rarefied now but will not be so forever.


I don't consider it elitist at all. Someone with a solid knowledge of fundamentals can often accomplish more, faster, and more maintainably in vanilla Go (for example) than an engineer who is trained in frameworks. The former can write a pretty solid pubsub/distributed queue/disruptor/stream parser/load balancer/etc. that will outperform most off-the-shelf solutions, cost very little to host, and be tailored to a specific application. The latter generally cannot.
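
To make that concrete, here's a rough sketch of the kind of small, tailored component I mean - a minimal in-process pub/sub in vanilla Go (the names and the drop-on-full-buffer policy are just illustrative assumptions, not a production design):

    package pubsub

    import "sync"

    // Bus is a tiny in-memory publish/subscribe hub keyed by topic string.
    type Bus struct {
        mu   sync.RWMutex
        subs map[string][]chan string
    }

    func New() *Bus {
        return &Bus{subs: make(map[string][]chan string)}
    }

    // Subscribe returns a buffered channel that receives messages
    // published to the given topic.
    func (b *Bus) Subscribe(topic string) <-chan string {
        ch := make(chan string, 16)
        b.mu.Lock()
        b.subs[topic] = append(b.subs[topic], ch)
        b.mu.Unlock()
        return ch
    }

    // Publish sends msg to every subscriber of topic. If a subscriber's
    // buffer is full, the message is dropped for that subscriber, so one
    // slow consumer cannot block the others.
    func (b *Bus) Publish(topic, msg string) {
        b.mu.RLock()
        defer b.mu.RUnlock()
        for _, ch := range b.subs[topic] {
            select {
            case ch <- msg:
            default: // subscriber too slow; drop
            }
        }
    }

Whether dropping messages or blocking the publisher is the right call is exactly the kind of application-specific decision a general-purpose framework has to bury behind configuration, and exactly the kind of thing you can decide in a few lines when you own the code.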

Don't get me wrong - I run a very small programming shop and I make judgment calls every day about whether to borrow or build. My operating system, hardware drivers, compiler, dependency manager, email server (etc.) - I borrow these because it seems obviously practical and I have an appreciation for the complexity underneath (although I have some unkind things to say about hosting tiny apps on full-blown Linux virts; the waste is unbelievable). I use Unity for client-side development for games, which is probably the decision I'm least happy about, but I simply don't have the bandwidth to deal with the monstrous complexity of client-side programming (especially in a gaming context).

Frameworks are generally bloated monstrosities that conceal performance problems behind walls of configuration and impenetrable cruft that has developed over decades of trying to make the "out of the box" experience configurable while pleasing myriad experts. They do more than one thing relatively badly, and the engineers who work with them often haven't developed the ability to deep dive into them to solve real scaling problems.

You don't get simplicity, you never get zero-touch, and your learning when working with a framework often doesn't generalize, so you're basically renting a knowledge black-hole rather than paying down a mortgage on a knowledge library.

Anyway, that's my two cents on why I think having solid fundamentals is important, at least in my line of work.


Good points. You could go with Alpine Linux if you're looking to use a lean distro.


> Maintaining a distinction between "actual software development" and "plumbing" is elitist

Maintaining distinctions like this is necessary to have the right people do the right jobs. Some developers like being puzzled by hard problems and will get bored writing adapters for Java classes or connecting A to B in some set of Javascript frameworks. Others are motivated by seeing their high level design realized and don't like having to give too much thought about the lower abstraction layers. Giving these people the wrong jobs is a waste of time and money.


> Maintaining a distinction between "actual software development" and "plumbing" is elitist

I don't think it is. Someone that is slapping together libraries from npm, but has no idea how to debug with gdb or use strace/ktrace/dtrace etc to diagnose problems with the resulting system, or does not have the skills to fix bugs or add new features to the "plumbing" - that person is not an actual software developer. There is a huge gap in skills, knowledge, and as a consequence the capabilities of these two camps of people.


I think this is going in the wrong direction. In my world "plumbing" means realizing ways to do things. When you want to write a simple web app that shows a Hello and has a contact form: even in 2018 that's still a reasonable amount of work. It's really not a lot of code, but every line needs to be chosen wisely. That's what I call "plumbing". Most people new to software development become desperate at such tasks, and to the rest it becomes embarrassing quickly... :-)

Yeah, and then there's actual code writing: the heavy lifting has been done, and all the functions/classes/modules/... are still small, waiting to be filled out with a lot of nice code. In well-sorted projects the latter is a trivial task for simple features.

But yes, I agree with the sentiment that there is a lack of general "understanding of stuff". I'm not sure if you must be able to use strace to be productive, but it sure helps if you're able to get to know tools that are installed on most systems. Coming back to the OP's topic: LISP is a language that uses formalisms (ways to plumb ;)), tools (ever heard of asdf?) and syntax completely alien to even long-term computer addicts. It always puzzled me how people can be comfortable using this kind of stuff.


> I'm not sure if you must be able to use strace to be productive

You will be productive until whatever runtime/library you are using has a serious bug or performance issue. Then you either use tools like strace or your productivity drops to 0. It is not a matter of marginal or order-of-magnitude productivity differences. You don't "understand stuff" for sentimental reasons, you "understand stuff" because the alternative is your manager has to call in people like me that "understand stuff" and pay them tens of thousands of dollars to fix things that you can't.

> It always puzzled me how people can be comfortable using this kind of stuff.

It's called learning.


Wow nice. I want to do this one day too ;)

> It's called learning.

Haha


Call it what you will, there are ranges of skills needed in technical activity, and those of the engineer are not the same as those of the technician, though it is much more of a continuum than any two words suggest. In many branches of technology, there are certain tasks that need an understanding of calculus, but it is not so clear that there is an equivalent in software development, and I certainly do not think that programming in Lisp as opposed to C (or any other language choice) counts as such.


That's a misinterpretation of parent's comment.

I think he meant fiddling with pre-made solutions hoping they will work, instead of having principled ways to craft things.

If SpaceX were to "plumb" as he meant it, they would use off-the-shelf modules and ideas for quick results. Instead they actually designed their rockets mostly from scratch (very rare in the space industry, where it's often a rule not to stray from what's proven to work).


I think you have misunderstood OPs remarks. The way I see it, the distinction is between the innovators and the regurgitators. Very few people can innovate well when writing software. Others rely on the work of existing innovators to piece together their solutions. Both have their place, but one takes more practice to attain a high level of proficiency. Not elitist - just observing reality.


> is elitist

I know I'm supposed to think that that is bad, but I can't come up with why it would be.


It really depends on some pretty subjective values held on faith, that position, power, and profit ought to be shared.


Or, as seems more accurate in this example, a subjective belief that anyone should get an arbitrary amount of praise for just showing up, instead of having to earn it through the process of honing and applying their skills.


In my experience the innovators must toil in obscurity because they see a vision that others cannot yet see, until the thing is realized.

Then the prototypes get handed off to the main engineers to run with, and support is provided. By the time the thing sees production usage, the original inventor has long since moved onto solving other problems on the horizon.

Praise is attributed to the last and loudest to have touched a project. Innovators are deeply satisfied by that and don't need the praise because it actually gets in the way.


Presumably, though, in the hypothetical presented in the thread, the "top" 10% wouldn't be people who just showed up? And the "plumbers" are the ones who cannot build on their own?

The question is why elitism is wrong. Your statements rest on whether that perception of elite status is wrong. Well, if that is ever resolved, is elitism wrong or not? Ought power, profit, and position be shared?


It's not elitist. It's exactly what it is. Plumbers make great money, especially ones who plumb oil pipelines underwater and such. So SAP might be a big cash cow, and plumbing, but what the OP was saying is that such things are not employing the "best principles" of computer science to write well-thought-out and efficient programs.


Nor indeed the plumbing in your house! It would be a very gross world indeed without it.


What if reality is elitist? Are you saying that because you don’t like a certain thing, then it can’t be reality?

> Entire businesses like SAP

Companies like SAP mostly waste people’s money by extracting huge sums from municipalities.


> "actual software development" and "plumbing" is elitist,

Perhaps, but you are the person who prepended 'mere' in front of plumbing.


I'll take it out again if people are going to argue by string matching rather than reading the tone of the original post.


It might be best.


> Maintaining a distinction between "actual software development" and "plumbing" is elitist

You are correct - however, it doesn't mean that this statement is necessarily wrong.

Usually when I see a statement that reeks of elitism, I immediately assume a lower probability of it being true, because the elitism of a statement correlates with falsehood - but it's worth remembering that this correlation is not absolute, and sometimes (although rarely) elitism is indeed deserved and true. And in this particular case, out of my personal professional experience, the elitism is absolutely deserved. There are lots and lots of software engineers out there who can't or just won't fit more complex ideas into their minds.

And yet you are, of course, correct: these developers can, and will, do good work. They can have different skills, like knowing the users, or creating beautiful animations, or having a great game design sense. These developers don't need to be looked down upon. And yet it would be very useful for all of us to acknowledge that these developers think in a different way and require different tools - they're not "worse" than "real" engineers, but they are different.


I'd say you're basically redefining elitism to mean something it doesn't mean (or at least not how it's used here).

That said, I do agree with you. I think that 1) it can be very important to make distinctions between types of programmers, because 2) it makes it easier to actively explore the field and find what suits your 'type'.

For example, for many people, someone who 'does' HTML/CSS with some jQuery plugins is a programmer. But personally I'd not really call such a person a programmer. I don't mean that as a value judgment, but rather to make a distinction between 'types' of work.

Making that distinction earlier in my career could've helped me (whether the label programmer/non-programmer is used or not), because I spent more time than I'd have liked being such an HTML/CSS/JS guy. I learned tons of very specific rules/tricks/lore that were necessary, but that did not help me as a programmer, and I spent countless hours doing this kind of work, not really enjoying it all that much.

It was only when I started doing 'proper' javascript stuff that I realized how much fun programming is, and how my relative enjoyment of the whole front-end stuff depended on the bits of 'real programming' I would occasionally get to do. In hindsight I wish I'd figured that out earlier, and not spent so many brain cycles and storage on CSS layout tricks and discussing how much 'semantic HTML' matters.

(thankfully those things are still useful when I do full-stack type stuff, but still)


> Usually when I see a statement that reeks of elitism, I immediately assume lower probability of it being true, because elitism of a statement correlates with falsehood

Ironically, I do precisely the opposite, due to my personal experiences. Over the years I've lived on this planet, I've been in both the "elitist" groups and the groups that accuse someone else of being "elitist" - and I universally found that it's the "elitist" group that was always right. I've learned to recognize accusations of elitism as ways to cater to one's feelings of insecurity (and I was guilty of that in the past, too).


> developers think in a different way and require different tools

Yes, I think this is very much underappreciated. When people find a language or system that matches their way of thinking, it works a lot better for them. But not necessarily for the next person. So you get small communities of people who find that e.g. Lisp has been amazing for them, who can't see that other people think differently about programs and systems.


I sort of agree, but I also disagree. I think 90% of us are capable of doing "actual software development."

However, the market doesn't care for that. The market prefers short-term gains over long-term gains. Perhaps we can blame Wall Street? Due to the demand for short-term gains, the ask of most developers is "how fast can you build this", not "how can you build this to be most efficient and cheapest in the long run and lifetime of the product". To move quickly, developers must then employ abstractions layered upon abstractions.

The short term winners get lots of money for demonstrating wins quickly. The losers conform to keep up.


> However, the market doesn't care for that. The market prefers short-term gains over long-term gains.

I agree with this. For example, Sun was sold for $5.6B in 2009. [1] While Skype was sold for $8.5B in 2011 [2].

Sun had Solaris, Java, SPARC, and MySQL. Skype was a chat tool.

Even today many popular databases find it hard to get billion-dollar valuations, while multiple social companies have done it.

The market doesn't care about core CS. It cares about monetary gains.

[1] https://en.wikipedia.org/wiki/Sun_acquisition_by_Oracle [2] https://en.wikipedia.org/wiki/Skype


Skype is / was not "a chat tool" - Skype was an opportunity to own the IP telephony space for both the business and consumer markets. That is potentially a huge revenue base, and fits neatly into a company like Microsoft that sits astride both.

Solaris, Java, SPARC and MySQL had all demonstrated that they weren't going to acquire major revenue streams, at least under the ownership of Sun. Their valuations reflect two different types of company, something the market is very able to understand.


Skype is basically the Type I error offsetting "passed on Google" style Type II errors. It's a crappy investment in retrospect, but it was a strong player in an obviously valuable market. IP telephony remains enormously valuable - Skype just didn't win the contest.

And ironically, a shortage of "core CS" was a major factor in that failure. Usable, high-fidelity, encrypted VOIP is an enormously difficult challenge, and after some early successes Skype failed to offer a quality product. Claiming the valuation as an argument that core CS doesn't matter looks pretty backwards to me.


The fact that it wasn't going to win the contest ought to have been obvious to anyone who actually used skype.

Their Android app was a horror story. Their desktop app was ho-hum, chat reminded me of ICQ, and it was completely obvious that the major players were going to be people who had hip social networks or were well positioned, like Google with all Android users and Apple with all iPhone users.

Skype was a tool to talk to grandma, and I haven't forgotten the fact that they limited the number of people in a group chat to fewer than 10 if you didn't use an Intel processor, and tied resolution to buying a Logitech camera.

Turns out even grandma has a Facebook account now; in fact she had one when lunatics decided it was a good idea to buy Skype.


> Skype was a chat tool.

> The market doesn't care about core CS.

Yeah, implementing peer-to-peer voice over IP with Skype's late-'00s quality (which has fallen significantly since then) is not "core CS" and is just "plumbing", right.


Skype seems to have floundered not on lack of features, but on app stability and call quality. Honestly, it looks to me like an example of a technically-challenging product that failed by underperforming on core CS.

I'm not sure why it would be framed as low-difficulty, except that consumer-facing tools tend to get written off as simple.


The issue I have with this view is that unless you have people on your team with fundamental, "core CS" capability, creating a behemoth like Skype or Snap just isn't possible. Of course, there's a balance, and who knows, maybe it's Pareto with 80% of the team focused on engineering and 20% focused on math/CS/R&D.

The monetary gains the market sees are the direct result of clever math and CS, not bolt-on solutions that can be lifted from libraries on GitHub.


I don't agree with this view.

Most jobs in software development are about creating business value. Either as an end-user product that your customers are going to use, or as internal tooling that will help the business have more access to data, streamline processes, increase efficiency, etc.

This "no true Scotsman" approach to software development is actually quite funny after a while being a professional, it's a huge industry, there are terrible companies, there are terrible managers, product managers, etc., but there are also great ones. You can work for good ones if your current gig is mostly being pushed around by unreal expectations from your stakeholders.

The demand is not for short-term gains. How can you justify that it's going to take a year to build your perfectly architected software if the business really needs it done in 3 months and you assess that it's quite doable if you decide on some constraints? Your job as a professional engineer is to find ways to do your work as well as possible given the constraints, to design something that can be improved over time, to communicate with stakeholders and, given your area of expertise (software), to give valuable input to the business decision so it can be the best possible at that moment.

Seeing software engineering as some grandiose goal in itself is quite wrong; software exists MOSTLY to fulfil business purposes.

It's not about "conforming"; there is software that is fun to work on, that is an intellectual challenge in itself, but that really has no way to be justified on a business level.

This defensiveness against "business" is part of a mindset that should be broken among engineers. We should embrace that we are part of a much larger process, not that we are the golden nugget and the cream of the crop at a company. Our work is to enable others' work.


I actually agree with you. However, when the business claims to need it done in 3 months, do they? Lots of business software projects don't meet the deadline; they end up taking longer, shipping incomplete features and, worse still, riddled with bugs. They try to meet the deadline, cut corners, and end up with flaws. When the deadline is missed, more pressure is placed on developers and things get worse. Business tries to have its cake and eat it too. A business that demands a deadline can't also demand all the features, low cost, and high quality. There are trade-offs, and these trade-offs are often ill defined. The idea of lean software and agile is great, and hopefully with time will solve this for the industry.

BTW, I'm part of the business side now, and a manager. If you give me time constraints, you don't get to give me feature demands; you can give me your prioritized feature list and I can tell you what we can deliver given all other constraints. If you demand all the features, then I take away the time constraint.


Agree with all of this -- I'd just like to add that I think there's a second order point here that doesn't get much discussion; technical debt is often considered as something that's always bad, but sometimes the correct decision is to trim "quality" (from your trifecta) and get something out to the market faster.

I think that the debt analogy is a sound one; if you are buried under credit card debt then you might not even be able to make the interest payments. But if you take on debt in a considered and thoughtful fashion, then you can achieve things you wouldn't otherwise be able to do (e.g. buy a house, in the debt analogy). I have found that the debt analogy is very useful for communicating these tradeoffs to the business stakeholders.

So sometimes it actually is reasonable for the stakeholders to request you to compress the timeline on a sequence of features, if there's an external deadline that must be hit; we just have to make sure we get buy-in to come back and repay our debts (preferably immediately after the deadline).


> "how can you build this to be most efficient and cheapest in the long run and lifetime of the product"

I couldn't answer this question accurately. I can't even give remotely accurate time estimates for projects larger than "build a CLI tool to do this one thing," much less give an informed estimate of the tool's TCO. I feel like I'm just floating down the river, incrementally improving the stuff we've already built, and that nothing we planned to accomplish ever gets finished.

I'm sure lots of people here have executed their Grand Vision for a project. But I'm also certain many of us never have.


You are forgetting opportunity costs in besmirching "build fast" over optimising for TCO. If I can use the code to gain an immediate business advantage, it is entirely possible that I'll take something in 3 months rather than 6, even if it costs more in the long term. The language, and negotiation, required is that of the Engineer, not the Scientist - there is still optimisation going on.


AFAIK, the guy has had a marvelous and productive career.

When he says "...knowing Lisp destroyed my programming career" he just means that at some point in time he switched to other things. There was no "destruction". It was "a pivot"-- to use HN lingo.

I think anyone who makes programming (let alone programming in a particular tool/language) the absolute focus of their career is in for a major disappointment. The OP was NOT crushed by his realization. He just moved on and appears to have been very successful regardless. Not a big deal unless one is obsessed with Lisp.


Nerds screw up everything we touch. I don't think we mean to. Whatever the system, we make it more complicated, er, featureful. Then we add an abstraction layer. Then we make that layer more complicated. Rinse and repeat. Some abstraction layers help more than they hurt, but the ratio is about 1-in-10 or so. For any given project, there are probably a dozen cool-sounding frameworks or layers, one of which is absolutely needed. The rest are there because somebody wants to be promised that even crappy programmers can use this to make cool stuff happen.

The sales pitch is clear: don't become a better programmer, get a better toolkit.

I have been quite fortunate to have come into computers before this cloud of marketing madness overtook us. I got to watch the layers roll out one-by-one.

Honestly I have no idea how I would learn programming if I had to do it again today. It's just too much of a byzantine mess. Hell if I would start with the top layers, though. I'd rather know how to write simple code in BASIC than have a grasp of setting up a CRUD website using WhizBang 4.7. When you learn programming, well, you should be learning how to program. Not how to breeze through stuff. Breezing through stuff is great when you already know what's going on -- but by the time you already know what's going on, it's unlikely you'll need the layer or framework. (Sadly, the most likely scenario is that you've invented yet another framework and are busy evangelizing it to others.)

This guy's story strikes me as poignant and indicative of where the industry is. They don't care if you can solve people's problems. They care if you're one of the cool kids. It's nerd signaling.


Being able to solve people's problems doesn't mean you need to be able to program.

What is a "real programmer", anyway? Is it knowing how a CPU works? Managing memory? If you rely on the garbage collector, do you really know what you're doing? If you write a Rails app without fully understanding HTTP, are you just plumbing?

Does it matter?

The reason we build tools and abstractions is to allow us to accomplish higher-level tasks without worrying about lower-level ones. Does it help if you understand lower level ones? Sure. But for millions of apps and websites that aren't Facebook or Hacker News or Firefox, the only problems are high level.


I think that it matters if you can dig into anything. Given a reasonable timespan, being able to learn x64 machine code and debug and fix a buggy driver if necessary, for example. If your program has GC issues, being able to read papers, read the GC source code if necessary, and find the right set of parameters or change some lines of code. If your Rails app has a vulnerability because of underlying HTTP issues, being able to learn more about those HTTP issues, read the RFC if necessary, read Ruby's HTTP code if necessary, and provide the required fix. If your manager heard about the Meltdown issue and wonders whether our services are vulnerable, a real programmer should be able to read the papers, understand the vulnerability, and judge whether it's important (enough to shut down services, for example) or not.

Abstractions are leaky. I've yet to encounter one which doesn't leak.

Though if you're working in a team, it's not necessary for everyone to be a real programmer. One or two are enough. The tasks which require real programming are rare; most tasks are mundane.


Sure, I'll concede all of those points! But I think they fundamentally change the question, because now you're presenting a different set of problems to solve.

Understanding one level of abstraction doesn't mean that you understand the levels above or below it. It's perfectly possible (likely, even!) that a team will have someone who can build a good Rails app and someone else who can make sure the HTTP code and infrastructure is secure, yet those people won't have many skills that overlap.

Yes, there are definitely good programmers and bad programmers. IMO, that has to do with how effectively you can solve problems you care about, not (as the comment to which I was replying suggests) the level of abstraction you're comfortable with.


I'd say the word 'real' muddles things up and injects too much value judgment, with little purpose I can see other than making oneself feel special (or inferior).

I'd much prefer 'highly skilled' or 'advanced', if at all necessary.


Without low level knowledge you can't tell which feature requests are easy or hard to implement. https://xkcd.com/1425/


The XKCD is a bad example in this case: it is mostly about domain-specific knowledge of computational theory and standardized algorithmic approaches to the problem spaces within those domains, whereas "low-level" is something different, primarily referring to the common layers underlying many domains.

Most programmers never have to write low-level code.

This is a good thing.

It doesn't mean they can't, it just means we've moved on from wasting our time re-implementing solutions to problems already adequately solved for a general case. Finally.

Manual memory management is frankly insane outside of extreme edge cases today.


I'd put it differently. Software development severely lacks any objective metrics of performance and quality. We just haven't invented any (at least none practical enough to become mainstream). As a result, we quite often misjudge our (and others') skill and make bad decisions.

It's the goal of software to provide more and more features. The problem is, we usually achieve those features by abusing the abstractions we are using. Once the problem becomes apparent, somebody writes a library, which allows us to write a couple of times more crappy code before everything collapses. Rinse and repeat.

Since we are unable, at scale, to choose the right abstractions when necessary, we quite often get ourselves into situations where it's actually preferable to rewrite everything using the lessons learnt. Rewriting comes with its own set of new problems, and the cycle is complete.

As a result, we are fundamentally unable to reuse already written code on scale, crippling development and keeping the whole industry in a constant early stage.


> we are fundamentally unable to reuse already written code on scale

Actually I disagree - it's just that when something does become reusable it immediately vanishes from people's consciousness. You can see this both in things that become standard library features, and open source components that end up ubiquitous. The list of "incorporates software from" in licenses gets ever longer as people embed copies of SQLite, logging libraries, ORMs, serialisers, and so on.

One of the original article's points was that the things he loved in Lisp became features of other languages, and so ceased to be special.

http://uk.businessinsider.com/how-many-lines-of-code-it-take...


Good point. I would like to point out, however, that all of your examples appear to do well in abstracting mostly technical problems, not business problems. Technical problems are common across our field and are natural areas of interest for those '10%'. Business code is something written by the majority of us (or at least I would expect it to be so) and it doesn't appear to scale and grow so nicely.


Sorry to interject, but I'm curious: as someone who is in their first semester of a CS education, could you elaborate on what you mean by abstraction?

So far, I just know it to mean making your function more widely applicable. I'm assuming there's a better definition I'm missing?


Nope, that's pretty much it. It just has very far-reaching consequences.

To illustrate, let's use a simple example. Let's assume you need a logging feature in your application and your language/framework doesn't provide one (pretty much impossible nowadays).

The simplest solution is to just append to a hard-coded file. It solves the immediate problem. It's also a very bad solution unless you will never write a log again.

Instead, you build a Logger class/module/whatever. The logger provides 'write' functionality, which is abstracted away - the client (i.e. the code using the logger) does not know or care how the data is going to be logged. It's the logger's responsibility to do that and to decide how. Now you have an abstraction - you depend on the ability of the Logger module/class/whatever to do its job, without knowing, caring about, or influencing the actual implementation.

Now, let's talk about leaking abstractions. Let's assume you need to differentiate between different log levels (info/warning/error). Logger is provided as a library (so not easily modifiable) and only has a generic 'write' method. The proper solution would be to create a new abstraction, LoggerWithLevels, which would provide methods like 'write_warning', 'write_error', etc. Underneath, it would probably just call Logger, prepending a severity string to the logged message. The important thing, though, is that we still base our code on an abstraction - we don't know how our new logger represents different levels; we use the appropriate methods and we assume it's handled properly.

What will likely happen, though, is that the programmer will not create a new abstraction layer. Instead, they will start manually or semi-manually formatting the strings sent to the logger. The effect will be the same, but the abstraction is now 'leaking', because the severity level now depends on the implementation (pre-formatted log messages), not on the logger API. Therefore, any change in the process (like saving logs to an online service instead of a disk file) will now require changing code in multiple places instead of just one.
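
To make that concrete, here's a rough Python sketch of both approaches (Logger and LoggerWithLevels are just the hypothetical names from the example above, not any real library):

    class Logger:
        """Pretend this comes from a library: one generic 'write' method."""
        def __init__(self, path):
            self.path = path

        def write(self, message):
            with open(self.path, "a") as f:
                f.write(message + "\n")


    class LoggerWithLevels:
        """New abstraction layer: callers use write_info/write_warning/write_error
        and never need to know that levels are encoded by prepending a string."""
        def __init__(self, logger):
            self.logger = logger

        def write_info(self, message):
            self.logger.write("INFO: " + message)

        def write_warning(self, message):
            self.logger.write("WARNING: " + message)

        def write_error(self, message):
            self.logger.write("ERROR: " + message)


    # The 'leaking' alternative: every call site formats the severity by hand,
    # so changing how levels are represented (or where logs go) means touching
    # all of these lines instead of one class.
    raw = Logger("app.log")
    raw.write("WARNING: disk almost full")
    raw.write("ERROR: payment failed")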


This is not what I understand a leaking (or more commonly 'leaky') abstraction to mean.

Instead, assume the previously mentioned Logger does internally write to a file. Now, when the disk becomes full, Logger throws an exception and the caller must do something about it. The abstraction of just writing anything to log is broken and the underlying complexity leaks through.

This example also demonstrates the difficulty of keeping abstractions from leaking. You can't exactly make the code automatically free up disk space or sign up to a remote logging service when disk space runs out.
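
A minimal sketch of that kind of leak, reusing the hypothetical Logger from the sketch above:

    logger = Logger("app.log")
    try:
        logger.write("user signed up")
    except OSError:
        # The "just write a log line" abstraction has leaked: the caller now
        # has to know about disks and quotas and decide what to do when the
        # underlying file can't be written.
        pass  # drop the message? retry later? alert someone?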


Well, you're right, I misused the term a bit. I used 'leaky' because it describes well how the abstraction cannot hide its implementation details (and in the second example those implementation details are used to achieve new functionality). However, it is because the user abuses the abstraction, not because the abstraction is inherently leaky in that regard (and inherent leakiness is what the term was coined for).

I definitely need a new word for situations like this.


Most marketing solutions are abstractions of business problems. CRMs, tracking, automated reporting tools, etc.

Of course, they also require a fair bit of custom code that is purely plumbing to hook them up (translate data from one source and stick it into the tool).


That's a great point.

I watch teams a lot. When I listen to them, I notice how much they talk about tooling instead of solutions. The more they talk about the tooling, the more they're sucking wind.

The truly good stuff just disappears from awareness. That's how you know if a framework or abstraction layer is working for you (instead of you working for it.) It's invisible.


>>It's the goal of software to provide more and more features.

That's the goal. The key question is: are we writing features that users find valuable or are we writing features that developers find valuable? They're two different things. What's happened is that the community has become incestuous. Instead of focusing on value to the user we're focusing on selling frameworks to one another.

>>Software development severely lacks any objective metrics of performance and quality.

Agreed here as well. But this is related to my first point. Nobody wants to focus on the users. It's far too easy to focus on the technology (or other developers). You write a feature for a user, you can instrument it and see whether users use it or not. You write a feature for a developer, even if nobody uses it you can argue that somewhere, somehow, that feature is going to be critical. It's an abstract value argument -- which is intractable.

>>As a result, we are fundamentally unable to reuse already written code on scale, crippling development and keeping the whole industry in a constant early stage.

It's a sad state of affairs. I suggest that your conclusion continues the broken thinking I'm describing above. We shouldn't be striving for reusable code at scale. We should be striving for people who have strong basic programming skills and are experts in some business domain. Then we would shoot for the minimum amount of technological abstraction necessary for those programmers to solve problems in that particular domain. Because that's really the whole point: not creating large codebases that last twenty years, but creating tiny bits of code, almost instantly, that solve business problems for twenty years. We've got our head stuck in the wrong bucket.


> That's the goal. The key question is: are we writing features that users find valuable or are we writing features that developers find valuable? They're two different things. What's happened is that the community has become incestuous. Instead of focusing on value to the user we're focusing on selling frameworks to one another.

I think a deeper problem is that we're not really doing such a great job writing features for developers either. I'm amazed at how much of what I do is still done in the terminal or a bare-bones REPL, not so different from how they worked a few decades ago, when there's so much that could actually improve my day-to-day in small but very noticeable ways.

For example, I make almost constant use of the autocomplete feature when I work with the BEAM REPL (Elixir). I was amazed to find out that this was a relatively new feature. I can't count how many keystrokes it saves me throughout the day.

Of course, I still can't use Vim-style keybindings, and I'm forced to write my REPL code line by line instead of using Chrome DevTools-style snippets. But, as you say, I do have tons of frameworks to choose from that all do mostly the same thing, and yet none of them does the relatively common thing that I need, so each of them requires me to read documentation and figure out how to configure it 'just so'...


Objective metrics of performance are easy to come by: speed, memory usage, latency, throughput, energy efficiency, depending on the type of your software. I can launch Windows Task Manager and those metrics are right there. They can be measured and compared. Quality isn't hard either: just count bugs. Many customers don't want to pay for performance or quality; they want features, delivery speed, and a shiny UI, and they listen to marketing too much. Slack is a joke when it comes to memory consumption. But it has smileys and marketing campaigns, so everyone's using it anyway, even though chat was a solved problem 30 years ago in kilobytes of memory.
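
For what it's worth, a couple of those numbers take only a few lines to capture - a rough Python sketch, with do_work standing in for whatever code you actually want to measure:

    import time
    import tracemalloc

    def do_work():
        # stand-in for the code under measurement
        return sum(i * i for i in range(1_000_000))

    tracemalloc.start()
    start = time.perf_counter()
    do_work()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"latency: {elapsed:.3f} s, peak memory: {peak / 1024:.0f} KiB")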


Don't agree. The metrics you provided allow comparing two projects. They also allow seeing progress (or regression) within a project.

What they don't allow is assessing code quality within a single project (i.e. judging whether it's good or not). Since in many cases we do not have a good baseline, we are unable to consistently evaluate work. That leaves us with the aforementioned problems.


It's rather disappointing how the industry hasn't made any progress on CRUD application development productivity in the past 20+ years. Microsoft Visual Basic 4.0 allowed low-skilled developers to build working client/server CRUD applications far faster than any modern web development framework. Of course it's easier to distribute web applications than thick client Windows applications, but other than that we haven't gained anything.


We're trying to move this forward at https://anvil.works - the usability of Visual Basic, with the ease of distribution of the web. Biased I may be, but I'd call that progress :)


Reminds me of this picture entitled Life of a game programmer: https://i.imgur.com/sBih7ol.jpg


There's a real tension in this field between capital-C Computer Science and the work of writing code for businesses.

The business doesn't care about your toolset, probably. What they care about is solving a problem.

I've met no small number of very gifted, very creative developers that had this same mindset -- didn't give a crap about the Cool New Language or pure CS, but really DID get excited about building connections and features within existing systems to solve business problems.

These two visions of development are in tension. Few folks can get jobs writing Haskell or Lisp, but there are LOTS of jobs for .NET developers.

Neither path necessarily makes a better developer, though.


It's like saying a doctor isn't doing interesting things if they're just content to apply medical knowledge to improve community health. The doctor is just using tools developed by other people.


I'd feel better about this if it weren't for the growing evidence that an awful lot of what we actually get is cargo-cult medicine - procedures that have been supplanted or invalidated, but are still widely used by non-research doctors who don't know better.

Obviously doctors are still good, and don't need to be doing research to make the world a better place. I agree that dismissing 'plumbing' programmers or 'rote' doctors would be a serious mistake. But... well, I can't help drawing some connections between programmers implementing already-broken security, and doctors putting in heart stents that don't actually help patients.

I don't think we're just being metaphorical here, I think research vs practice doctors often show the same patterns as programmers, for the same reasons. Creating new knowledge is neither necessary nor sufficient for keeping up with other people's knowledge, but we seem to be worryingly bad at keeping people who implement that knowledge up to date.

(Context on the doctors: https://www.theatlantic.com/health/archive/2017/02/when-evid...)


I agree 80% with this; I'll add something. I always had a lispy side. I hated Java, PHP, C, and the other prevalent languages that most companies would bet their money on. I like APL, I like Forth.

That said, I found something weird: sometimes, with a few adequate libraries (Guava, for instance), I enjoy doing some Java. It's verbose, way more so than Lisp, Clojure, or Kotlin (not even mentioning Mr. Haskell, of course). But I find a little pleasure in writing code there. It's manual and it requires doing a lot of things, but even though it's slower and more "work", it's another kind of stimulation, which I think is one large factor in people writing code in subpar languages. They're just happy doing things and solving things their own way.

Some say that Lisp and Haskell can't be mainstream because their power is best suited to complex problems, and I think that hints at my previous point. People who need more than the mainstream have the brain power and desire to solve non-mainstream things.

ps: about the plumbing thing, you might have heard that MIT switched to Python exactly for that reason. I was and still am stumped that the people who brought us SICP decided that plumbing was the way to go.


"actual software engineering" is like saying that you're not doing "actual farming" unless you're using some oxen and an iron plough.


I reckon all software is plumbing. Plumbing as in taking data in, processing it, and pushing the data out. Doesn't matter if it's a database, a compiler, or a CRUD app; it's all just plumbing.


No, plumbing is taking prebuilt pieces like pipes and elbows, joints, valves and just putting them together.

There is plenty of software that is not about taking pre-existing packages and gluing them together, but rather involves construction from the ground up. Building a CRUD app using an existing framework and an existing REST API with little to no custom business logic is plumbing.

Using sci-kit for your ML app is plumbing. Writing your own novel ML/deep learning algorithm, shoving the data in and out of GPUs is not plumbing.
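
To illustrate what that kind of plumbing looks like, here's a minimal scikit-learn sketch (toy data, purely illustrative): the heavy lifting - scaling, the solver, the scoring - is all prebuilt, and our code just wires the pieces together.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Load a toy dataset and split it.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Glue two prebuilt components into a pipeline and fit it.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))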


> Writing your own novel ML/deep learning algorithm, shoving the data in and out of GPUs is not plumbing.

This is plumbing as well by your definition. Writing your own novel ML/deep learning algorithm also takes "pre-existing packages" and glues them together. Unless you are actually going to write your own custom OS, language, and drivers, you are going to reuse stuff.

I don't think there is anything wrong with using pre-existing packages and the like. We are all doing that anyway, unless you are Terry A. Davis of course.


Your distinction doesn't hold up to analysis: at some level any system can be said to consist of elbow joints - e.g. "processor opcodes are just elbow joints." The same can be said not only of the subsystems but also of the intellectual heritage of so-called "original" systems. Nothing and everything is new. News at nine. Someone watches the news. They create a new system ever so subtly influenced by yours. Ad infinitum.


I think that no matter where you are as a programmer, you're part of the plumbing crowd whether you like it or not. If you hack together websites using a web framework like Rails or whatever, then that web framework is likely your plumbing. If however you write a web framework, then the language (Ruby, etc) is your plumbing. Those who write languages have operating systems, and those who write operating systems have hardware. Even hardware has different levels of abstraction in it, from folks designing instruction sets down to folks who have to actually deal with the physics of circuits on silicon. The thing is, though, a lot of those folks wouldn't be able to deal with the stuff that a typical web developer deals with, because they don't understand the domain. Writing a highly performant web server is a different task from writing a complete web application that uses that web server. I think that it's useful to have a deeper understanding of the tools we use, but ultimately we all have to work at the level of abstraction that makes sense for the work we do, and that's not a problem.


Sigh... I for one will never understand why everyone is obsessed with titles.

Is doing front end not actual software development? What does that even mean?

Are you considered a real software engineer if you can write code in a certain language? .. Or does it mean you are really good at O(1) problems?

I think we should all agree that software development is a team effort, it requires vast knowledge and skills that different individuals bring to the table.

Doing CSS adds just as much value as writing the backend API.

Here is a good analogy. For an aircraft to function, pilots, aircraft mechanics, and aerospace engineers all come together. Does that mean one of them is doing "plumbing"?

Certainly not, they all add value.


So let's think about this in a different way. Rather than classifying programmers into buckets (e.g., plumber, non-plumber), consider that the work of vast teams of programmers has grown to include a significantly richer environment. The skills beyond pure programming that we all need include connecting vast sets of existing environments together to build the next thing we are working on. Few environments that we work on lack networking, acres of existing APIs, and already-built ecosystems of data.

In today's environment, 'pure programming' is a small fraction of what needs to get done.

As a long-time programmer, the most leverage I have had is when my work connected to some business objective and produced a result. My engineering background, which emphasized a "problem-solving attitude" helps.


The 10% of developers you mention are the ones who don't cargo-cult program. They don't seek "Neo" architectures. They are the ones who knuckle down and solve actual, real problems, not some fantasy generalized version of them. They are the ones who code and hone their craft. They don't spend any time arguing over React vs VueJs, Rust vs C++, etc.

I think the average developer can do this too if they'd just get out of the programming echo chambers and trust their gut.


The real problem is that the majority of us probably are actually capable of doing actual software development right when we come out of school. Most of us get slotted into crappy plumbing and middleware jobs, and after 5-10 years those skills get rusty. They can come back; but if you end up out on the job market, being asked some of the advanced questions in interviews... well, good luck. That situation exposes one fairly quickly.


>Of course, as every programmer, I live in constant fear that I am part of the plumbing crowd waiting to be exposed.

How true. Half of the jobs I see are actually semi-skilled, overpaid jobs which won't last. In many cases you configure somebody else's product and manage it at best; it won't last. It is already happening.


Your post resonates, but let's be fair: the business where a lot of the money in software is now - users downloading deploys that change daily or more frequently, on multiple devices, with expectations to interoperate with so much other software - has really changed the incentives of what to focus on.


>> Of course, as every programmer, I live in constant fear that I am part of the plumbing crowd waiting to be exposed.

Well, the cheapest, safest assumption is that your skill level is about average and that the majority of the programmers you'll meet are going to have the same kind of skills as you.


Why do you think that writing those tools is harder? Oftentimes it is not. A lot of it is easy.

Yes, well-done abstractions are how we keep large projects manageable. No one can learn all of them at once - neither those writing "plumbing" nor those writing libraries.


Plumbing does get very complicated when your building size increases. And after a while it's not just plumbing. Ensuring no pipe leaks in a skyscraper is no easy task.


For the past four years I've architected, built, and maintained a system largely designed using the pipes and filters pattern. Does that make me Mario?


As someone not even worthy of the status of a plumber, I quite agree. But the wonder of it all is that we can stand on the shoulders of giants so easily.


At any given level we use abstractions; those abstractions may be in the form of algorithms or tools - so isn't everything plumbing?


So how does one make sure they're not plumbing?


Any suggestions on how I can get into that 10%?


Write more code. Stretch yourself. Learn how other people work and figure out how you can improve what they do, because that tends to be the bailiwick of what he's referring to (mistakenly, IMO, but the bucket is probably fine even if the label isn't) as "real software development".

I didn't write Auster[0] for me, I wrote it because other people needed a tool and it fit the parameters. I'm not writing Modern[1] for me, I'm writing it because there's a hole in the ecosystem that somebody has to solve, and I'm a someone.

[0] - https://github.com/eropple/auster

[1] - https://github.com/modern-project/modern-ruby


If I knew, I would do it myself and wouldn't be afraid ;)

Probably the best advice is to learn as much as possible, but focus on the basics. I feel people often mistake knowledge for skill, and that harms them in the long run.

If you learn a concept, you can use it in anything you create. If you learn a framework, you will have to learn a new one in 5 years. Focus on transferable skills.


You seem to be experienced and ignorant at the same time


You can accurately attribute that quote to any external culture, re: another culture.

Hunter-gatherers on capitalist democracy, Capitalist democracy on hunter-gatherers, China on India, India on China, US on India, India on Iran, Iran on Israel...

You get the picture.

Perhaps what it really means is "communications failure: exception thrown in cultural assumptions".


It's easy to differentiate: real software developers use butterflies [0].

[0] https://www.xkcd.com/378/



